Oct 14 07:17:03 localhost kernel: Linux version 5.14.0-621.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025
Oct 14 07:17:03 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 14 07:17:03 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 14 07:17:03 localhost kernel: BIOS-provided physical RAM map:
Oct 14 07:17:03 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 14 07:17:03 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 14 07:17:03 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 14 07:17:03 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 14 07:17:03 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 14 07:17:03 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 14 07:17:03 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 14 07:17:03 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 14 07:17:03 localhost kernel: NX (Execute Disable) protection: active
Oct 14 07:17:03 localhost kernel: APIC: Static calls initialized
Oct 14 07:17:03 localhost kernel: SMBIOS 2.8 present.
Oct 14 07:17:03 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 14 07:17:03 localhost kernel: Hypervisor detected: KVM
Oct 14 07:17:03 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 14 07:17:03 localhost kernel: kvm-clock: using sched offset of 4315048700 cycles
Oct 14 07:17:03 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 14 07:17:03 localhost kernel: tsc: Detected 2800.000 MHz processor
Oct 14 07:17:03 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 14 07:17:03 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 14 07:17:03 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 14 07:17:03 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 14 07:17:03 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 14 07:17:03 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 14 07:17:03 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 14 07:17:03 localhost kernel: Using GB pages for direct mapping
Oct 14 07:17:03 localhost kernel: RAMDISK: [mem 0x2d858000-0x32c23fff]
Oct 14 07:17:03 localhost kernel: ACPI: Early table checksum verification disabled
Oct 14 07:17:03 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 14 07:17:03 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 14 07:17:03 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 14 07:17:03 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 14 07:17:03 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 14 07:17:03 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 14 07:17:03 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 14 07:17:03 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 14 07:17:03 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 14 07:17:03 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 14 07:17:03 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 14 07:17:03 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 14 07:17:03 localhost kernel: No NUMA configuration found
Oct 14 07:17:03 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 14 07:17:03 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 14 07:17:03 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 14 07:17:03 localhost kernel: Zone ranges:
Oct 14 07:17:03 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 14 07:17:03 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 14 07:17:03 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 14 07:17:03 localhost kernel:   Device   empty
Oct 14 07:17:03 localhost kernel: Movable zone start for each node
Oct 14 07:17:03 localhost kernel: Early memory node ranges
Oct 14 07:17:03 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 14 07:17:03 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 14 07:17:03 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 14 07:17:03 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 14 07:17:03 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 14 07:17:03 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 14 07:17:03 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 14 07:17:03 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 14 07:17:03 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 14 07:17:03 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 14 07:17:03 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 14 07:17:03 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 14 07:17:03 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 14 07:17:03 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 14 07:17:03 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 14 07:17:03 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 14 07:17:03 localhost kernel: TSC deadline timer available
Oct 14 07:17:03 localhost kernel: CPU topo: Max. logical packages:   8
Oct 14 07:17:03 localhost kernel: CPU topo: Max. logical dies:       8
Oct 14 07:17:03 localhost kernel: CPU topo: Max. dies per package:   1
Oct 14 07:17:03 localhost kernel: CPU topo: Max. threads per core:   1
Oct 14 07:17:03 localhost kernel: CPU topo: Num. cores per package:     1
Oct 14 07:17:03 localhost kernel: CPU topo: Num. threads per package:   1
Oct 14 07:17:03 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 14 07:17:03 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 14 07:17:03 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 14 07:17:03 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 14 07:17:03 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 14 07:17:03 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 14 07:17:03 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 14 07:17:03 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 14 07:17:03 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 14 07:17:03 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 14 07:17:03 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 14 07:17:03 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 14 07:17:03 localhost kernel: Booting paravirtualized kernel on KVM
Oct 14 07:17:03 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 14 07:17:03 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 14 07:17:03 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 14 07:17:03 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 14 07:17:03 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 14 07:17:03 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 14 07:17:03 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 14 07:17:03 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64", will be passed to user space.
Oct 14 07:17:03 localhost kernel: random: crng init done
Oct 14 07:17:03 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 14 07:17:03 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 14 07:17:03 localhost kernel: Fallback order for Node 0: 0 
Oct 14 07:17:03 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 14 07:17:03 localhost kernel: Policy zone: Normal
Oct 14 07:17:03 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 14 07:17:03 localhost kernel: software IO TLB: area num 8.
Oct 14 07:17:03 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 14 07:17:03 localhost kernel: ftrace: allocating 49162 entries in 193 pages
Oct 14 07:17:03 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 14 07:17:03 localhost kernel: Dynamic Preempt: voluntary
Oct 14 07:17:03 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 14 07:17:03 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 14 07:17:03 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 14 07:17:03 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 14 07:17:03 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 14 07:17:03 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 14 07:17:03 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 14 07:17:03 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 14 07:17:03 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 14 07:17:03 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 14 07:17:03 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 14 07:17:03 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 14 07:17:03 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 14 07:17:03 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 14 07:17:03 localhost kernel: Console: colour VGA+ 80x25
Oct 14 07:17:03 localhost kernel: printk: console [ttyS0] enabled
Oct 14 07:17:03 localhost kernel: ACPI: Core revision 20230331
Oct 14 07:17:03 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 14 07:17:03 localhost kernel: x2apic enabled
Oct 14 07:17:03 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 14 07:17:03 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 14 07:17:03 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct 14 07:17:03 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 14 07:17:03 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 14 07:17:03 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 14 07:17:03 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 14 07:17:03 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 14 07:17:03 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 14 07:17:03 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 14 07:17:03 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 14 07:17:03 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 14 07:17:03 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 14 07:17:03 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 14 07:17:03 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 14 07:17:03 localhost kernel: x86/bugs: return thunk changed
Oct 14 07:17:03 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 14 07:17:03 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 14 07:17:03 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 14 07:17:03 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 14 07:17:03 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 14 07:17:03 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 14 07:17:03 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 14 07:17:03 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 14 07:17:03 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 14 07:17:03 localhost kernel: landlock: Up and running.
Oct 14 07:17:03 localhost kernel: Yama: becoming mindful.
Oct 14 07:17:03 localhost kernel: SELinux:  Initializing.
Oct 14 07:17:03 localhost kernel: LSM support for eBPF active
Oct 14 07:17:03 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 14 07:17:03 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 14 07:17:03 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 14 07:17:03 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 14 07:17:03 localhost kernel: ... version:                0
Oct 14 07:17:03 localhost kernel: ... bit width:              48
Oct 14 07:17:03 localhost kernel: ... generic registers:      6
Oct 14 07:17:03 localhost kernel: ... value mask:             0000ffffffffffff
Oct 14 07:17:03 localhost kernel: ... max period:             00007fffffffffff
Oct 14 07:17:03 localhost kernel: ... fixed-purpose events:   0
Oct 14 07:17:03 localhost kernel: ... event mask:             000000000000003f
Oct 14 07:17:03 localhost kernel: signal: max sigframe size: 1776
Oct 14 07:17:03 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 14 07:17:03 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 14 07:17:03 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 14 07:17:03 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 14 07:17:03 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 14 07:17:03 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 14 07:17:03 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct 14 07:17:03 localhost kernel: node 0 deferred pages initialised in 10ms
Oct 14 07:17:03 localhost kernel: Memory: 7765956K/8388068K available (16384K kernel code, 5784K rwdata, 13864K rodata, 4188K init, 7196K bss, 616208K reserved, 0K cma-reserved)
Oct 14 07:17:03 localhost kernel: devtmpfs: initialized
Oct 14 07:17:03 localhost kernel: x86/mm: Memory block size: 128MB
Oct 14 07:17:03 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 14 07:17:03 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 14 07:17:03 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 14 07:17:03 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 14 07:17:03 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 14 07:17:03 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 14 07:17:03 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 14 07:17:03 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 14 07:17:03 localhost kernel: audit: type=2000 audit(1760426220.883:1): state=initialized audit_enabled=0 res=1
Oct 14 07:17:03 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 14 07:17:03 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 14 07:17:03 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 14 07:17:03 localhost kernel: cpuidle: using governor menu
Oct 14 07:17:03 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 14 07:17:03 localhost kernel: PCI: Using configuration type 1 for base access
Oct 14 07:17:03 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 14 07:17:03 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 14 07:17:03 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 14 07:17:03 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 14 07:17:03 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 14 07:17:03 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 14 07:17:03 localhost kernel: Demotion targets for Node 0: null
Oct 14 07:17:03 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 14 07:17:03 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 14 07:17:03 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 14 07:17:03 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 14 07:17:03 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 14 07:17:03 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 14 07:17:03 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 14 07:17:03 localhost kernel: ACPI: Interpreter enabled
Oct 14 07:17:03 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 14 07:17:03 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 14 07:17:03 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 14 07:17:03 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 14 07:17:03 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 14 07:17:03 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 14 07:17:03 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [3] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [4] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [5] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [6] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [7] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [8] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [9] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [10] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [11] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [12] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [13] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [14] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [15] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [16] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [17] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [18] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [19] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [20] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [21] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [22] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [23] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [24] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [25] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [26] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [27] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [28] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [29] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [30] registered
Oct 14 07:17:03 localhost kernel: acpiphp: Slot [31] registered
Oct 14 07:17:03 localhost kernel: PCI host bridge to bus 0000:00
Oct 14 07:17:03 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 14 07:17:03 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 14 07:17:03 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 14 07:17:03 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 14 07:17:03 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 14 07:17:03 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 14 07:17:03 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 14 07:17:03 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 14 07:17:03 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 14 07:17:03 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 14 07:17:03 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 14 07:17:03 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 14 07:17:03 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 14 07:17:03 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 14 07:17:03 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 14 07:17:03 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 14 07:17:03 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 14 07:17:03 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 14 07:17:03 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 14 07:17:03 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 14 07:17:03 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 14 07:17:03 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 14 07:17:03 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 14 07:17:03 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 14 07:17:03 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 14 07:17:03 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 14 07:17:03 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 14 07:17:03 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 14 07:17:03 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 14 07:17:03 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 14 07:17:03 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 14 07:17:03 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 14 07:17:03 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 14 07:17:03 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 14 07:17:03 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 14 07:17:03 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 14 07:17:03 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 14 07:17:03 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 14 07:17:03 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 14 07:17:03 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 14 07:17:03 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 14 07:17:03 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 14 07:17:03 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 14 07:17:03 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 14 07:17:03 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 14 07:17:03 localhost kernel: iommu: Default domain type: Translated
Oct 14 07:17:03 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 14 07:17:03 localhost kernel: SCSI subsystem initialized
Oct 14 07:17:03 localhost kernel: ACPI: bus type USB registered
Oct 14 07:17:03 localhost kernel: usbcore: registered new interface driver usbfs
Oct 14 07:17:03 localhost kernel: usbcore: registered new interface driver hub
Oct 14 07:17:03 localhost kernel: usbcore: registered new device driver usb
Oct 14 07:17:03 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 14 07:17:03 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 14 07:17:03 localhost kernel: PTP clock support registered
Oct 14 07:17:03 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 14 07:17:03 localhost kernel: NetLabel: Initializing
Oct 14 07:17:03 localhost kernel: NetLabel:  domain hash size = 128
Oct 14 07:17:03 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 14 07:17:03 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 14 07:17:03 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 14 07:17:03 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 14 07:17:03 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 14 07:17:03 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 14 07:17:03 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 14 07:17:03 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 14 07:17:03 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 14 07:17:03 localhost kernel: vgaarb: loaded
Oct 14 07:17:03 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 14 07:17:03 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 14 07:17:03 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 14 07:17:03 localhost kernel: pnp: PnP ACPI init
Oct 14 07:17:03 localhost kernel: pnp 00:03: [dma 2]
Oct 14 07:17:03 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 14 07:17:03 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 14 07:17:03 localhost kernel: NET: Registered PF_INET protocol family
Oct 14 07:17:03 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 14 07:17:03 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 14 07:17:03 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 14 07:17:03 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 14 07:17:03 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 14 07:17:03 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 14 07:17:03 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 14 07:17:03 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 14 07:17:03 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 14 07:17:03 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 14 07:17:03 localhost kernel: NET: Registered PF_XDP protocol family
Oct 14 07:17:03 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 14 07:17:03 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 14 07:17:03 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 14 07:17:03 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 14 07:17:03 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 14 07:17:03 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 14 07:17:03 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 14 07:17:03 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 14 07:17:03 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 96568 usecs
Oct 14 07:17:03 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 14 07:17:03 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 14 07:17:03 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 14 07:17:03 localhost kernel: ACPI: bus type thunderbolt registered
Oct 14 07:17:03 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 14 07:17:03 localhost kernel: Initialise system trusted keyrings
Oct 14 07:17:03 localhost kernel: Key type blacklist registered
Oct 14 07:17:03 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 14 07:17:03 localhost kernel: zbud: loaded
Oct 14 07:17:03 localhost kernel: integrity: Platform Keyring initialized
Oct 14 07:17:03 localhost kernel: integrity: Machine keyring initialized
Oct 14 07:17:03 localhost kernel: Freeing initrd memory: 85808K
Oct 14 07:17:03 localhost kernel: NET: Registered PF_ALG protocol family
Oct 14 07:17:03 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 14 07:17:03 localhost kernel: Key type asymmetric registered
Oct 14 07:17:03 localhost kernel: Asymmetric key parser 'x509' registered
Oct 14 07:17:03 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 14 07:17:03 localhost kernel: io scheduler mq-deadline registered
Oct 14 07:17:03 localhost kernel: io scheduler kyber registered
Oct 14 07:17:03 localhost kernel: io scheduler bfq registered
Oct 14 07:17:03 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 14 07:17:03 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 14 07:17:03 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 14 07:17:03 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 14 07:17:03 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 14 07:17:03 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 14 07:17:03 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 14 07:17:03 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 14 07:17:03 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 14 07:17:03 localhost kernel: Non-volatile memory driver v1.3
Oct 14 07:17:03 localhost kernel: rdac: device handler registered
Oct 14 07:17:03 localhost kernel: hp_sw: device handler registered
Oct 14 07:17:03 localhost kernel: emc: device handler registered
Oct 14 07:17:03 localhost kernel: alua: device handler registered
Oct 14 07:17:03 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 14 07:17:03 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 14 07:17:03 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 14 07:17:03 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 14 07:17:03 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 14 07:17:03 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 14 07:17:03 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 14 07:17:03 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-621.el9.x86_64 uhci_hcd
Oct 14 07:17:03 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 14 07:17:03 localhost kernel: hub 1-0:1.0: USB hub found
Oct 14 07:17:03 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 14 07:17:03 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 14 07:17:03 localhost kernel: usbserial: USB Serial support registered for generic
Oct 14 07:17:03 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 14 07:17:03 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 14 07:17:03 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 14 07:17:03 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 14 07:17:03 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 14 07:17:03 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 14 07:17:03 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 14 07:17:03 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-14T07:17:02 UTC (1760426222)
Oct 14 07:17:03 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 14 07:17:03 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 14 07:17:03 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 14 07:17:03 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 14 07:17:03 localhost kernel: usbcore: registered new interface driver usbhid
Oct 14 07:17:03 localhost kernel: usbhid: USB HID core driver
Oct 14 07:17:03 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 14 07:17:03 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 14 07:17:03 localhost kernel: Initializing XFRM netlink socket
Oct 14 07:17:03 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 14 07:17:03 localhost kernel: Segment Routing with IPv6
Oct 14 07:17:03 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 14 07:17:03 localhost kernel: mpls_gso: MPLS GSO support
Oct 14 07:17:03 localhost kernel: IPI shorthand broadcast: enabled
Oct 14 07:17:03 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 14 07:17:03 localhost kernel: AES CTR mode by8 optimization enabled
Oct 14 07:17:03 localhost kernel: sched_clock: Marking stable (1371002070, 142622460)->(1603771730, -90147200)
Oct 14 07:17:03 localhost kernel: registered taskstats version 1
Oct 14 07:17:03 localhost kernel: Loading compiled-in X.509 certificates
Oct 14 07:17:03 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 14 07:17:03 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 14 07:17:03 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 14 07:17:03 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 14 07:17:03 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 14 07:17:03 localhost kernel: Demotion targets for Node 0: null
Oct 14 07:17:03 localhost kernel: page_owner is disabled
Oct 14 07:17:03 localhost kernel: Key type .fscrypt registered
Oct 14 07:17:03 localhost kernel: Key type fscrypt-provisioning registered
Oct 14 07:17:03 localhost kernel: Key type big_key registered
Oct 14 07:17:03 localhost kernel: Key type encrypted registered
Oct 14 07:17:03 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 14 07:17:03 localhost kernel: Loading compiled-in module X.509 certificates
Oct 14 07:17:03 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 14 07:17:03 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 14 07:17:03 localhost kernel: ima: No architecture policies found
Oct 14 07:17:03 localhost kernel: evm: Initialising EVM extended attributes:
Oct 14 07:17:03 localhost kernel: evm: security.selinux
Oct 14 07:17:03 localhost kernel: evm: security.SMACK64 (disabled)
Oct 14 07:17:03 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 14 07:17:03 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 14 07:17:03 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 14 07:17:03 localhost kernel: evm: security.apparmor (disabled)
Oct 14 07:17:03 localhost kernel: evm: security.ima
Oct 14 07:17:03 localhost kernel: evm: security.capability
Oct 14 07:17:03 localhost kernel: evm: HMAC attrs: 0x1
Oct 14 07:17:03 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 14 07:17:03 localhost kernel: Running certificate verification RSA selftest
Oct 14 07:17:03 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 14 07:17:03 localhost kernel: Running certificate verification ECDSA selftest
Oct 14 07:17:03 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 14 07:17:03 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 14 07:17:03 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 14 07:17:03 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 14 07:17:03 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 14 07:17:03 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 14 07:17:03 localhost kernel: clk: Disabling unused clocks
Oct 14 07:17:03 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 14 07:17:03 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 14 07:17:03 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 14 07:17:03 localhost kernel: Freeing unused kernel image (initmem) memory: 4188K
Oct 14 07:17:03 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 14 07:17:03 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 472K
Oct 14 07:17:03 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 14 07:17:03 localhost kernel: Run /init as init process
Oct 14 07:17:03 localhost kernel:   with arguments:
Oct 14 07:17:03 localhost kernel:     /init
Oct 14 07:17:03 localhost kernel:   with environment:
Oct 14 07:17:03 localhost kernel:     HOME=/
Oct 14 07:17:03 localhost kernel:     TERM=linux
Oct 14 07:17:03 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64
Oct 14 07:17:03 localhost systemd[1]: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 14 07:17:03 localhost systemd[1]: Detected virtualization kvm.
Oct 14 07:17:03 localhost systemd[1]: Detected architecture x86-64.
Oct 14 07:17:03 localhost systemd[1]: Running in initrd.
Oct 14 07:17:03 localhost systemd[1]: No hostname configured, using default hostname.
Oct 14 07:17:03 localhost systemd[1]: Hostname set to <localhost>.
Oct 14 07:17:03 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 14 07:17:03 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 14 07:17:03 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 14 07:17:03 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 14 07:17:03 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 14 07:17:03 localhost systemd[1]: Reached target Local File Systems.
Oct 14 07:17:03 localhost systemd[1]: Reached target Path Units.
Oct 14 07:17:03 localhost systemd[1]: Reached target Slice Units.
Oct 14 07:17:03 localhost systemd[1]: Reached target Swaps.
Oct 14 07:17:03 localhost systemd[1]: Reached target Timer Units.
Oct 14 07:17:03 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 14 07:17:03 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 14 07:17:03 localhost systemd[1]: Listening on Journal Socket.
Oct 14 07:17:03 localhost systemd[1]: Listening on udev Control Socket.
Oct 14 07:17:03 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 14 07:17:03 localhost systemd[1]: Reached target Socket Units.
Oct 14 07:17:03 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 14 07:17:03 localhost systemd[1]: Starting Journal Service...
Oct 14 07:17:03 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 14 07:17:03 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 14 07:17:03 localhost systemd[1]: Starting Create System Users...
Oct 14 07:17:03 localhost systemd[1]: Starting Setup Virtual Console...
Oct 14 07:17:03 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 14 07:17:03 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 14 07:17:03 localhost systemd[1]: Finished Create System Users.
Oct 14 07:17:03 localhost systemd-journald[306]: Journal started
Oct 14 07:17:03 localhost systemd-journald[306]: Runtime Journal (/run/log/journal/1a1d621ed70142a8a9a32d332c90e100) is 8.0M, max 153.6M, 145.6M free.
Oct 14 07:17:03 localhost systemd-sysusers[310]: Creating group 'users' with GID 100.
Oct 14 07:17:03 localhost systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Oct 14 07:17:03 localhost systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 14 07:17:03 localhost systemd[1]: Started Journal Service.
Oct 14 07:17:03 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 14 07:17:03 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 14 07:17:03 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 14 07:17:03 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 14 07:17:03 localhost systemd[1]: Finished Setup Virtual Console.
Oct 14 07:17:03 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 14 07:17:03 localhost systemd[1]: Starting dracut cmdline hook...
Oct 14 07:17:03 localhost dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Oct 14 07:17:03 localhost dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 14 07:17:03 localhost systemd[1]: Finished dracut cmdline hook.
Oct 14 07:17:03 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 14 07:17:03 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 14 07:17:03 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 14 07:17:03 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 14 07:17:04 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 14 07:17:04 localhost kernel: RPC: Registered udp transport module.
Oct 14 07:17:04 localhost kernel: RPC: Registered tcp transport module.
Oct 14 07:17:04 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 14 07:17:04 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 14 07:17:04 localhost rpc.statd[443]: Version 2.5.4 starting
Oct 14 07:17:04 localhost rpc.statd[443]: Initializing NSM state
Oct 14 07:17:04 localhost rpc.idmapd[448]: Setting log level to 0
Oct 14 07:17:04 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 14 07:17:04 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 14 07:17:04 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Oct 14 07:17:04 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 14 07:17:04 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 14 07:17:04 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 14 07:17:04 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 14 07:17:04 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 14 07:17:04 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 14 07:17:04 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 14 07:17:04 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 14 07:17:04 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 14 07:17:04 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 14 07:17:04 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 14 07:17:04 localhost systemd[1]: Reached target Network.
Oct 14 07:17:04 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 14 07:17:04 localhost systemd[1]: Starting dracut initqueue hook...
Oct 14 07:17:04 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 14 07:17:04 localhost systemd[1]: Reached target System Initialization.
Oct 14 07:17:04 localhost systemd[1]: Reached target Basic System.
Oct 14 07:17:04 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 14 07:17:04 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 14 07:17:04 localhost kernel:  vda: vda1
Oct 14 07:17:04 localhost kernel: libata version 3.00 loaded.
Oct 14 07:17:04 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 14 07:17:04 localhost kernel: scsi host0: ata_piix
Oct 14 07:17:04 localhost kernel: scsi host1: ata_piix
Oct 14 07:17:04 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 14 07:17:04 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 14 07:17:04 localhost systemd[1]: Found device /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 14 07:17:04 localhost systemd[1]: Reached target Initrd Root Device.
Oct 14 07:17:04 localhost kernel: ata1: found unknown device (class 0)
Oct 14 07:17:04 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 14 07:17:04 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 14 07:17:04 localhost systemd-udevd[466]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 07:17:04 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 14 07:17:04 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 14 07:17:04 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 14 07:17:04 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 14 07:17:04 localhost systemd[1]: Finished dracut initqueue hook.
Oct 14 07:17:04 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 14 07:17:04 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 14 07:17:04 localhost systemd[1]: Reached target Remote File Systems.
Oct 14 07:17:04 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 14 07:17:04 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 14 07:17:04 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3...
Oct 14 07:17:04 localhost systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Oct 14 07:17:04 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 14 07:17:04 localhost systemd[1]: Mounting /sysroot...
Oct 14 07:17:05 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 14 07:17:05 localhost kernel: XFS (vda1): Mounting V5 Filesystem 9839e2e1-98a2-4594-b609-79d514deb0a3
Oct 14 07:17:05 localhost kernel: XFS (vda1): Ending clean mount
Oct 14 07:17:05 localhost systemd[1]: Mounted /sysroot.
Oct 14 07:17:05 localhost systemd[1]: Reached target Initrd Root File System.
Oct 14 07:17:05 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 14 07:17:05 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 14 07:17:05 localhost systemd[1]: Reached target Initrd File Systems.
Oct 14 07:17:05 localhost systemd[1]: Reached target Initrd Default Target.
Oct 14 07:17:05 localhost systemd[1]: Starting dracut mount hook...
Oct 14 07:17:05 localhost systemd[1]: Finished dracut mount hook.
Oct 14 07:17:05 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 14 07:17:05 localhost rpc.idmapd[448]: exiting on signal 15
Oct 14 07:17:05 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 14 07:17:05 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 14 07:17:05 localhost systemd[1]: Stopped target Network.
Oct 14 07:17:05 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 14 07:17:05 localhost systemd[1]: Stopped target Timer Units.
Oct 14 07:17:05 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 14 07:17:05 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 14 07:17:05 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 14 07:17:05 localhost systemd[1]: Stopped target Basic System.
Oct 14 07:17:05 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 14 07:17:05 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 14 07:17:05 localhost systemd[1]: Stopped target Path Units.
Oct 14 07:17:05 localhost systemd[1]: Stopped target Remote File Systems.
Oct 14 07:17:05 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 14 07:17:05 localhost systemd[1]: Stopped target Slice Units.
Oct 14 07:17:05 localhost systemd[1]: Stopped target Socket Units.
Oct 14 07:17:05 localhost systemd[1]: Stopped target System Initialization.
Oct 14 07:17:05 localhost systemd[1]: Stopped target Local File Systems.
Oct 14 07:17:05 localhost systemd[1]: Stopped target Swaps.
Oct 14 07:17:05 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Stopped dracut mount hook.
Oct 14 07:17:05 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 14 07:17:05 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 14 07:17:05 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 14 07:17:05 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 14 07:17:05 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 14 07:17:05 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 14 07:17:05 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 14 07:17:05 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 14 07:17:05 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 14 07:17:05 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 14 07:17:05 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 14 07:17:05 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 14 07:17:05 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Closed udev Control Socket.
Oct 14 07:17:05 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Closed udev Kernel Socket.
Oct 14 07:17:05 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 14 07:17:05 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 14 07:17:05 localhost systemd[1]: Starting Cleanup udev Database...
Oct 14 07:17:05 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 14 07:17:05 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 14 07:17:05 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Stopped Create System Users.
Oct 14 07:17:05 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 14 07:17:05 localhost systemd[1]: Finished Cleanup udev Database.
Oct 14 07:17:05 localhost systemd[1]: Reached target Switch Root.
Oct 14 07:17:05 localhost systemd[1]: Starting Switch Root...
Oct 14 07:17:05 localhost systemd[1]: Switching root.
Oct 14 07:17:05 localhost systemd-journald[306]: Journal stopped
Oct 14 08:52:08 compute-0 sudo[269400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:52:08 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 08:52:08 compute-0 sudo[269400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:52:08 compute-0 sudo[269400]: pam_unix(sudo:session): session closed for user root
Oct 14 08:52:08 compute-0 sudo[269426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:52:08 compute-0 sudo[269426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:52:08 compute-0 sudo[269426]: pam_unix(sudo:session): session closed for user root
Oct 14 08:52:08 compute-0 sudo[269451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 08:52:08 compute-0 sudo[269451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:52:08 compute-0 ceph-mon[74249]: pgmap v996: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:08 compute-0 nova_compute[259627]: 2025-10-14 08:52:08.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:52:08 compute-0 nova_compute[259627]: 2025-10-14 08:52:08.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:52:08 compute-0 nova_compute[259627]: 2025-10-14 08:52:08.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 08:52:09 compute-0 sudo[269451]: pam_unix(sudo:session): session closed for user root
Oct 14 08:52:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 08:52:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:52:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 08:52:09 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 08:52:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 08:52:09 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:52:09 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 1c274801-dbdd-4eb9-840b-b3beebf48130 does not exist
Oct 14 08:52:09 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 3a7efc4c-312d-4248-98fe-7ac402ffea62 does not exist
Oct 14 08:52:09 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 068fb985-434c-4df6-806e-9d4a62a820ac does not exist
Oct 14 08:52:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 08:52:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 08:52:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 08:52:09 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 08:52:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 08:52:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:52:09 compute-0 sudo[269506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:52:09 compute-0 sudo[269506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:52:09 compute-0 sudo[269506]: pam_unix(sudo:session): session closed for user root
Oct 14 08:52:09 compute-0 sudo[269531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:52:09 compute-0 sudo[269531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:52:09 compute-0 sudo[269531]: pam_unix(sudo:session): session closed for user root
Oct 14 08:52:09 compute-0 sudo[269556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:52:09 compute-0 sudo[269556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:52:09 compute-0 sudo[269556]: pam_unix(sudo:session): session closed for user root
Oct 14 08:52:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:09 compute-0 sudo[269581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 08:52:09 compute-0 sudo[269581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:52:09 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:52:09 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 08:52:09 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:52:09 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 08:52:09 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 08:52:09 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:52:09 compute-0 podman[269649]: 2025-10-14 08:52:09.808759586 +0000 UTC m=+0.057088172 container create aae7b166092ed5e0ffa72228b101c293f4c2f0d5b70493943ac6a4fc2e73cb5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pascal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 08:52:09 compute-0 systemd[1]: Started libpod-conmon-aae7b166092ed5e0ffa72228b101c293f4c2f0d5b70493943ac6a4fc2e73cb5a.scope.
Oct 14 08:52:09 compute-0 podman[269649]: 2025-10-14 08:52:09.788708344 +0000 UTC m=+0.037036910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:52:09 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:52:09 compute-0 podman[269649]: 2025-10-14 08:52:09.917916315 +0000 UTC m=+0.166244931 container init aae7b166092ed5e0ffa72228b101c293f4c2f0d5b70493943ac6a4fc2e73cb5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pascal, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct 14 08:52:09 compute-0 podman[269649]: 2025-10-14 08:52:09.928868054 +0000 UTC m=+0.177196600 container start aae7b166092ed5e0ffa72228b101c293f4c2f0d5b70493943ac6a4fc2e73cb5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 08:52:09 compute-0 podman[269649]: 2025-10-14 08:52:09.932691038 +0000 UTC m=+0.181019634 container attach aae7b166092ed5e0ffa72228b101c293f4c2f0d5b70493943ac6a4fc2e73cb5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 08:52:09 compute-0 tender_pascal[269666]: 167 167
Oct 14 08:52:09 compute-0 systemd[1]: libpod-aae7b166092ed5e0ffa72228b101c293f4c2f0d5b70493943ac6a4fc2e73cb5a.scope: Deactivated successfully.
Oct 14 08:52:09 compute-0 podman[269649]: 2025-10-14 08:52:09.938685285 +0000 UTC m=+0.187013871 container died aae7b166092ed5e0ffa72228b101c293f4c2f0d5b70493943ac6a4fc2e73cb5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pascal, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:52:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-409aef0883151e5a6f3c5486ecc340c982fee4333f7a791b6e49bafbdf35d07f-merged.mount: Deactivated successfully.
Oct 14 08:52:09 compute-0 nova_compute[259627]: 2025-10-14 08:52:09.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:52:09 compute-0 nova_compute[259627]: 2025-10-14 08:52:09.981 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:52:09 compute-0 podman[269649]: 2025-10-14 08:52:09.995051988 +0000 UTC m=+0.243380574 container remove aae7b166092ed5e0ffa72228b101c293f4c2f0d5b70493943ac6a4fc2e73cb5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 08:52:10 compute-0 systemd[1]: libpod-conmon-aae7b166092ed5e0ffa72228b101c293f4c2f0d5b70493943ac6a4fc2e73cb5a.scope: Deactivated successfully.
Oct 14 08:52:10 compute-0 podman[269690]: 2025-10-14 08:52:10.186962789 +0000 UTC m=+0.044806891 container create a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 08:52:10 compute-0 systemd[1]: Started libpod-conmon-a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4.scope.
Oct 14 08:52:10 compute-0 podman[269690]: 2025-10-14 08:52:10.165579334 +0000 UTC m=+0.023423476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:52:10 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:52:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21093fd9ae23e0b29271fd3435a6a5a831f5cfeb3e5ede9036e1924d438a8c34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:52:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21093fd9ae23e0b29271fd3435a6a5a831f5cfeb3e5ede9036e1924d438a8c34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:52:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21093fd9ae23e0b29271fd3435a6a5a831f5cfeb3e5ede9036e1924d438a8c34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:52:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21093fd9ae23e0b29271fd3435a6a5a831f5cfeb3e5ede9036e1924d438a8c34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:52:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21093fd9ae23e0b29271fd3435a6a5a831f5cfeb3e5ede9036e1924d438a8c34/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 08:52:10 compute-0 podman[269690]: 2025-10-14 08:52:10.286644705 +0000 UTC m=+0.144488897 container init a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilson, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:52:10 compute-0 podman[269690]: 2025-10-14 08:52:10.294552549 +0000 UTC m=+0.152396651 container start a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 08:52:10 compute-0 podman[269690]: 2025-10-14 08:52:10.298037305 +0000 UTC m=+0.155881447 container attach a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 08:52:10 compute-0 nova_compute[259627]: 2025-10-14 08:52:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:52:10 compute-0 nova_compute[259627]: 2025-10-14 08:52:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:52:11 compute-0 ceph-mon[74249]: pgmap v997: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:11 compute-0 sleepy_wilson[269706]: --> passed data devices: 0 physical, 3 LVM
Oct 14 08:52:11 compute-0 sleepy_wilson[269706]: --> relative data size: 1.0
Oct 14 08:52:11 compute-0 sleepy_wilson[269706]: --> All data devices are unavailable
Oct 14 08:52:11 compute-0 systemd[1]: libpod-a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4.scope: Deactivated successfully.
Oct 14 08:52:11 compute-0 podman[269690]: 2025-10-14 08:52:11.374175669 +0000 UTC m=+1.232019811 container died a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 08:52:11 compute-0 systemd[1]: libpod-a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4.scope: Consumed 1.036s CPU time.
Oct 14 08:52:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-21093fd9ae23e0b29271fd3435a6a5a831f5cfeb3e5ede9036e1924d438a8c34-merged.mount: Deactivated successfully.
Oct 14 08:52:11 compute-0 podman[269690]: 2025-10-14 08:52:11.440846705 +0000 UTC m=+1.298690807 container remove a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:52:11 compute-0 systemd[1]: libpod-conmon-a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4.scope: Deactivated successfully.
Oct 14 08:52:11 compute-0 sudo[269581]: pam_unix(sudo:session): session closed for user root
Oct 14 08:52:11 compute-0 sudo[269750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:52:11 compute-0 sudo[269750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:52:11 compute-0 sudo[269750]: pam_unix(sudo:session): session closed for user root
Oct 14 08:52:11 compute-0 sudo[269775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:52:11 compute-0 sudo[269775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:52:11 compute-0 sudo[269775]: pam_unix(sudo:session): session closed for user root
Oct 14 08:52:11 compute-0 sudo[269800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:52:11 compute-0 sudo[269800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:52:11 compute-0 sudo[269800]: pam_unix(sudo:session): session closed for user root
Oct 14 08:52:11 compute-0 sudo[269825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 08:52:11 compute-0 sudo[269825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:52:11 compute-0 nova_compute[259627]: 2025-10-14 08:52:11.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:52:12 compute-0 nova_compute[259627]: 2025-10-14 08:52:12.010 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:52:12 compute-0 nova_compute[259627]: 2025-10-14 08:52:12.010 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:52:12 compute-0 nova_compute[259627]: 2025-10-14 08:52:12.011 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:52:12 compute-0 nova_compute[259627]: 2025-10-14 08:52:12.011 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 08:52:12 compute-0 nova_compute[259627]: 2025-10-14 08:52:12.011 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:52:12 compute-0 podman[269911]: 2025-10-14 08:52:12.290982661 +0000 UTC m=+0.068221616 container create 594baa311da22a02dcf3b87773e06fc7cfb42a8cfe8f63ae5b6ae5eeaeb41116 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 08:52:12 compute-0 systemd[1]: Started libpod-conmon-594baa311da22a02dcf3b87773e06fc7cfb42a8cfe8f63ae5b6ae5eeaeb41116.scope.
Oct 14 08:52:12 compute-0 podman[269911]: 2025-10-14 08:52:12.260425311 +0000 UTC m=+0.037664306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:52:12 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:52:12 compute-0 podman[269911]: 2025-10-14 08:52:12.399634258 +0000 UTC m=+0.176873273 container init 594baa311da22a02dcf3b87773e06fc7cfb42a8cfe8f63ae5b6ae5eeaeb41116 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sammet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 08:52:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:52:12 compute-0 podman[269911]: 2025-10-14 08:52:12.414527773 +0000 UTC m=+0.191766728 container start 594baa311da22a02dcf3b87773e06fc7cfb42a8cfe8f63ae5b6ae5eeaeb41116 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 08:52:12 compute-0 podman[269911]: 2025-10-14 08:52:12.418905641 +0000 UTC m=+0.196144706 container attach 594baa311da22a02dcf3b87773e06fc7cfb42a8cfe8f63ae5b6ae5eeaeb41116 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sammet, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:52:12 compute-0 wonderful_sammet[269927]: 167 167
Oct 14 08:52:12 compute-0 systemd[1]: libpod-594baa311da22a02dcf3b87773e06fc7cfb42a8cfe8f63ae5b6ae5eeaeb41116.scope: Deactivated successfully.
Oct 14 08:52:12 compute-0 podman[269911]: 2025-10-14 08:52:12.42376709 +0000 UTC m=+0.201006045 container died 594baa311da22a02dcf3b87773e06fc7cfb42a8cfe8f63ae5b6ae5eeaeb41116 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 08:52:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ca96595b1fcefeddab3765e87cc95f98c7815f4c1c349d5a2e18e97c7dafb89-merged.mount: Deactivated successfully.
Oct 14 08:52:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:52:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2708719720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:52:12 compute-0 podman[269911]: 2025-10-14 08:52:12.47020468 +0000 UTC m=+0.247443595 container remove 594baa311da22a02dcf3b87773e06fc7cfb42a8cfe8f63ae5b6ae5eeaeb41116 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sammet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:52:12 compute-0 nova_compute[259627]: 2025-10-14 08:52:12.483 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:52:12 compute-0 systemd[1]: libpod-conmon-594baa311da22a02dcf3b87773e06fc7cfb42a8cfe8f63ae5b6ae5eeaeb41116.scope: Deactivated successfully.
Oct 14 08:52:12 compute-0 nova_compute[259627]: 2025-10-14 08:52:12.661 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:52:12 compute-0 nova_compute[259627]: 2025-10-14 08:52:12.663 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5113MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 08:52:12 compute-0 nova_compute[259627]: 2025-10-14 08:52:12.663 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:52:12 compute-0 nova_compute[259627]: 2025-10-14 08:52:12.664 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:52:12 compute-0 podman[269953]: 2025-10-14 08:52:12.665498503 +0000 UTC m=+0.057591984 container create 05dfe6c5bdf0e13fb416a137b9a1cf1f58a4c3ea80b5012affacaeeeb86eada7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:52:12 compute-0 systemd[1]: Started libpod-conmon-05dfe6c5bdf0e13fb416a137b9a1cf1f58a4c3ea80b5012affacaeeeb86eada7.scope.
Oct 14 08:52:12 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf68233f8591ccbdc6dda09f411c0032b64a0a5cbed00ed02b906cf83dc6de1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:52:12 compute-0 podman[269953]: 2025-10-14 08:52:12.631323454 +0000 UTC m=+0.023416915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf68233f8591ccbdc6dda09f411c0032b64a0a5cbed00ed02b906cf83dc6de1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf68233f8591ccbdc6dda09f411c0032b64a0a5cbed00ed02b906cf83dc6de1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf68233f8591ccbdc6dda09f411c0032b64a0a5cbed00ed02b906cf83dc6de1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:52:12 compute-0 podman[269953]: 2025-10-14 08:52:12.740358771 +0000 UTC m=+0.132452222 container init 05dfe6c5bdf0e13fb416a137b9a1cf1f58a4c3ea80b5012affacaeeeb86eada7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 08:52:12 compute-0 nova_compute[259627]: 2025-10-14 08:52:12.751 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 08:52:12 compute-0 nova_compute[259627]: 2025-10-14 08:52:12.751 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 08:52:12 compute-0 podman[269953]: 2025-10-14 08:52:12.754670972 +0000 UTC m=+0.146764403 container start 05dfe6c5bdf0e13fb416a137b9a1cf1f58a4c3ea80b5012affacaeeeb86eada7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 08:52:12 compute-0 podman[269953]: 2025-10-14 08:52:12.758746882 +0000 UTC m=+0.150840373 container attach 05dfe6c5bdf0e13fb416a137b9a1cf1f58a4c3ea80b5012affacaeeeb86eada7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 08:52:12 compute-0 nova_compute[259627]: 2025-10-14 08:52:12.767 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:52:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:52:13 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/812353846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:52:13 compute-0 nova_compute[259627]: 2025-10-14 08:52:13.183 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:52:13 compute-0 nova_compute[259627]: 2025-10-14 08:52:13.189 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:52:13 compute-0 ceph-mon[74249]: pgmap v998: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2708719720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:52:13 compute-0 nova_compute[259627]: 2025-10-14 08:52:13.205 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:52:13 compute-0 nova_compute[259627]: 2025-10-14 08:52:13.207 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 08:52:13 compute-0 nova_compute[259627]: 2025-10-14 08:52:13.208 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:52:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]: {
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:     "0": [
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:         {
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "devices": [
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "/dev/loop3"
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             ],
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "lv_name": "ceph_lv0",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "lv_size": "21470642176",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "name": "ceph_lv0",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "tags": {
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.cluster_name": "ceph",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.crush_device_class": "",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.encrypted": "0",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.osd_id": "0",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.type": "block",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.vdo": "0"
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             },
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "type": "block",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "vg_name": "ceph_vg0"
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:         }
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:     ],
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:     "1": [
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:         {
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "devices": [
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "/dev/loop4"
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             ],
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "lv_name": "ceph_lv1",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "lv_size": "21470642176",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "name": "ceph_lv1",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "tags": {
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.cluster_name": "ceph",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.crush_device_class": "",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.encrypted": "0",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.osd_id": "1",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.type": "block",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.vdo": "0"
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             },
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "type": "block",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "vg_name": "ceph_vg1"
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:         }
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:     ],
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:     "2": [
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:         {
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "devices": [
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "/dev/loop5"
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             ],
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "lv_name": "ceph_lv2",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "lv_size": "21470642176",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "name": "ceph_lv2",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "tags": {
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.cluster_name": "ceph",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.crush_device_class": "",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.encrypted": "0",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.osd_id": "2",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.type": "block",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:                 "ceph.vdo": "0"
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             },
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "type": "block",
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:             "vg_name": "ceph_vg2"
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:         }
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]:     ]
Oct 14 08:52:13 compute-0 friendly_varahamihira[269969]: }
Oct 14 08:52:13 compute-0 systemd[1]: libpod-05dfe6c5bdf0e13fb416a137b9a1cf1f58a4c3ea80b5012affacaeeeb86eada7.scope: Deactivated successfully.
Oct 14 08:52:13 compute-0 podman[269953]: 2025-10-14 08:52:13.476258483 +0000 UTC m=+0.868351964 container died 05dfe6c5bdf0e13fb416a137b9a1cf1f58a4c3ea80b5012affacaeeeb86eada7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 08:52:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cf68233f8591ccbdc6dda09f411c0032b64a0a5cbed00ed02b906cf83dc6de1-merged.mount: Deactivated successfully.
Oct 14 08:52:13 compute-0 podman[269953]: 2025-10-14 08:52:13.532927154 +0000 UTC m=+0.925020595 container remove 05dfe6c5bdf0e13fb416a137b9a1cf1f58a4c3ea80b5012affacaeeeb86eada7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 08:52:13 compute-0 systemd[1]: libpod-conmon-05dfe6c5bdf0e13fb416a137b9a1cf1f58a4c3ea80b5012affacaeeeb86eada7.scope: Deactivated successfully.
Oct 14 08:52:13 compute-0 sudo[269825]: pam_unix(sudo:session): session closed for user root
Oct 14 08:52:13 compute-0 sudo[270011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:52:13 compute-0 sudo[270011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:52:13 compute-0 sudo[270011]: pam_unix(sudo:session): session closed for user root
Oct 14 08:52:13 compute-0 sudo[270036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:52:13 compute-0 sudo[270036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:52:13 compute-0 sudo[270036]: pam_unix(sudo:session): session closed for user root
Oct 14 08:52:13 compute-0 sudo[270061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:52:13 compute-0 sudo[270061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:52:13 compute-0 sudo[270061]: pam_unix(sudo:session): session closed for user root
Oct 14 08:52:13 compute-0 sudo[270086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 08:52:13 compute-0 sudo[270086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:52:14 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/812353846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:52:14 compute-0 podman[270152]: 2025-10-14 08:52:14.353590037 +0000 UTC m=+0.049886895 container create 5ac6339b2bd6d9b0566d745994875d9c00b0f72b2c82a7060673b994f4fa6684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:52:14 compute-0 systemd[1]: Started libpod-conmon-5ac6339b2bd6d9b0566d745994875d9c00b0f72b2c82a7060673b994f4fa6684.scope.
Oct 14 08:52:14 compute-0 podman[270152]: 2025-10-14 08:52:14.331863064 +0000 UTC m=+0.028159922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:52:14 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:52:14 compute-0 podman[270152]: 2025-10-14 08:52:14.451437429 +0000 UTC m=+0.147734327 container init 5ac6339b2bd6d9b0566d745994875d9c00b0f72b2c82a7060673b994f4fa6684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Oct 14 08:52:14 compute-0 podman[270152]: 2025-10-14 08:52:14.462093631 +0000 UTC m=+0.158390449 container start 5ac6339b2bd6d9b0566d745994875d9c00b0f72b2c82a7060673b994f4fa6684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 08:52:14 compute-0 podman[270152]: 2025-10-14 08:52:14.466728794 +0000 UTC m=+0.163025702 container attach 5ac6339b2bd6d9b0566d745994875d9c00b0f72b2c82a7060673b994f4fa6684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 08:52:14 compute-0 kind_mirzakhani[270169]: 167 167
Oct 14 08:52:14 compute-0 systemd[1]: libpod-5ac6339b2bd6d9b0566d745994875d9c00b0f72b2c82a7060673b994f4fa6684.scope: Deactivated successfully.
Oct 14 08:52:14 compute-0 podman[270152]: 2025-10-14 08:52:14.46899005 +0000 UTC m=+0.165286868 container died 5ac6339b2bd6d9b0566d745994875d9c00b0f72b2c82a7060673b994f4fa6684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:52:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c329689a0a7a787ec3784b1c5a30cd52156c3259b726c465f1df95f15ecceecf-merged.mount: Deactivated successfully.
Oct 14 08:52:14 compute-0 podman[270152]: 2025-10-14 08:52:14.5105292 +0000 UTC m=+0.206826018 container remove 5ac6339b2bd6d9b0566d745994875d9c00b0f72b2c82a7060673b994f4fa6684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 08:52:14 compute-0 systemd[1]: libpod-conmon-5ac6339b2bd6d9b0566d745994875d9c00b0f72b2c82a7060673b994f4fa6684.scope: Deactivated successfully.
Oct 14 08:52:14 compute-0 podman[270192]: 2025-10-14 08:52:14.730517739 +0000 UTC m=+0.058284201 container create 5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bose, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:52:14 compute-0 systemd[1]: Started libpod-conmon-5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196.scope.
Oct 14 08:52:14 compute-0 podman[270192]: 2025-10-14 08:52:14.714400344 +0000 UTC m=+0.042166836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:52:14 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6d32625c2766b74f1c4f660a494c4f6ee6cb6cbc29c012da8ac7976a2aed3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6d32625c2766b74f1c4f660a494c4f6ee6cb6cbc29c012da8ac7976a2aed3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6d32625c2766b74f1c4f660a494c4f6ee6cb6cbc29c012da8ac7976a2aed3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6d32625c2766b74f1c4f660a494c4f6ee6cb6cbc29c012da8ac7976a2aed3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:52:14 compute-0 podman[270192]: 2025-10-14 08:52:14.837379542 +0000 UTC m=+0.165146034 container init 5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 08:52:14 compute-0 podman[270192]: 2025-10-14 08:52:14.851649172 +0000 UTC m=+0.179415644 container start 5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 08:52:14 compute-0 podman[270192]: 2025-10-14 08:52:14.855112707 +0000 UTC m=+0.182879229 container attach 5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bose, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 08:52:15 compute-0 nova_compute[259627]: 2025-10-14 08:52:15.209 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:52:15 compute-0 nova_compute[259627]: 2025-10-14 08:52:15.209 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 08:52:15 compute-0 nova_compute[259627]: 2025-10-14 08:52:15.210 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 08:52:15 compute-0 ceph-mon[74249]: pgmap v999: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:15 compute-0 nova_compute[259627]: 2025-10-14 08:52:15.225 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 08:52:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:15 compute-0 unruffled_bose[270208]: {
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:         "osd_id": 2,
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:         "type": "bluestore"
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:     },
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:         "osd_id": 1,
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:         "type": "bluestore"
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:     },
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:         "osd_id": 0,
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:         "type": "bluestore"
Oct 14 08:52:15 compute-0 unruffled_bose[270208]:     }
Oct 14 08:52:15 compute-0 unruffled_bose[270208]: }
Oct 14 08:52:15 compute-0 systemd[1]: libpod-5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196.scope: Deactivated successfully.
Oct 14 08:52:15 compute-0 podman[270192]: 2025-10-14 08:52:15.964344372 +0000 UTC m=+1.292110864 container died 5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:52:15 compute-0 systemd[1]: libpod-5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196.scope: Consumed 1.120s CPU time.
Oct 14 08:52:15 compute-0 nova_compute[259627]: 2025-10-14 08:52:15.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:52:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc6d32625c2766b74f1c4f660a494c4f6ee6cb6cbc29c012da8ac7976a2aed3c-merged.mount: Deactivated successfully.
Oct 14 08:52:16 compute-0 podman[270192]: 2025-10-14 08:52:16.029603694 +0000 UTC m=+1.357370166 container remove 5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 08:52:16 compute-0 systemd[1]: libpod-conmon-5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196.scope: Deactivated successfully.
Oct 14 08:52:16 compute-0 sudo[270086]: pam_unix(sudo:session): session closed for user root
Oct 14 08:52:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 08:52:16 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:52:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 08:52:16 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:52:16 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev dacf5ff0-cfd0-47de-9b86-73616d47f846 does not exist
Oct 14 08:52:16 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 4d4ecdeb-15e9-4cde-88cf-9a12de49aad1 does not exist
Oct 14 08:52:16 compute-0 sudo[270255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:52:16 compute-0 sudo[270255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:52:16 compute-0 sudo[270255]: pam_unix(sudo:session): session closed for user root
Oct 14 08:52:16 compute-0 sudo[270280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 08:52:16 compute-0 sudo[270280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:52:16 compute-0 sudo[270280]: pam_unix(sudo:session): session closed for user root
Oct 14 08:52:17 compute-0 ceph-mon[74249]: pgmap v1000: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:52:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:52:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:52:19 compute-0 ceph-mon[74249]: pgmap v1001: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:20 compute-0 podman[270305]: 2025-10-14 08:52:20.668667428 +0000 UTC m=+0.076887388 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, container_name=multipathd)
Oct 14 08:52:20 compute-0 podman[270306]: 2025-10-14 08:52:20.69400101 +0000 UTC m=+0.102226280 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 14 08:52:21 compute-0 ceph-mon[74249]: pgmap v1002: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:52:23 compute-0 ceph-mon[74249]: pgmap v1003: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:25 compute-0 ceph-mon[74249]: pgmap v1004: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:27 compute-0 ceph-mon[74249]: pgmap v1005: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:52:29 compute-0 ceph-mon[74249]: pgmap v1006: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:31 compute-0 ceph-mon[74249]: pgmap v1007: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:31 compute-0 podman[270347]: 2025-10-14 08:52:31.729558312 +0000 UTC m=+0.095995058 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 14 08:52:31 compute-0 podman[270346]: 2025-10-14 08:52:31.746281635 +0000 UTC m=+0.116210577 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:52:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:52:32
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'default.rgw.control', '.mgr', 'vms', 'volumes', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta']
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 08:52:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 08:52:33 compute-0 ceph-mon[74249]: pgmap v1008: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:33 compute-0 ceph-mgr[74543]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3625056923
Oct 14 08:52:35 compute-0 ceph-mon[74249]: pgmap v1009: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:37 compute-0 ceph-mon[74249]: pgmap v1010: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:52:39 compute-0 ceph-mon[74249]: pgmap v1011: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:40 compute-0 ceph-mon[74249]: pgmap v1012: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:41 compute-0 ceph-mon[74249]: pgmap v1013: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:52:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 08:52:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:43 compute-0 ceph-mon[74249]: pgmap v1014: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:45 compute-0 ceph-mon[74249]: pgmap v1015: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:52:47 compute-0 ceph-mon[74249]: pgmap v1016: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:49 compute-0 ceph-mon[74249]: pgmap v1017: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:51 compute-0 ceph-mon[74249]: pgmap v1018: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:51 compute-0 podman[270388]: 2025-10-14 08:52:51.65913139 +0000 UTC m=+0.072298944 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 08:52:51 compute-0 podman[270389]: 2025-10-14 08:52:51.668126032 +0000 UTC m=+0.075903823 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid)
Oct 14 08:52:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:52:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:53 compute-0 ceph-mon[74249]: pgmap v1019: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:55 compute-0 ceph-mon[74249]: pgmap v1020: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:52:57 compute-0 ceph-mon[74249]: pgmap v1021: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:52:59 compute-0 ceph-mon[74249]: pgmap v1022: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:01 compute-0 ceph-mon[74249]: pgmap v1023: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:53:02 compute-0 podman[270430]: 2025-10-14 08:53:02.643322554 +0000 UTC m=+0.051382178 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 14 08:53:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:53:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:53:02 compute-0 podman[270429]: 2025-10-14 08:53:02.734887843 +0000 UTC m=+0.147783596 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 08:53:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:53:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:53:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:53:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:53:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:03 compute-0 ceph-mon[74249]: pgmap v1024: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:04 compute-0 nova_compute[259627]: 2025-10-14 08:53:04.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:53:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:05 compute-0 ceph-mon[74249]: pgmap v1025: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 08:53:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1864568504' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 08:53:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 08:53:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1864568504' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 08:53:05 compute-0 nova_compute[259627]: 2025-10-14 08:53:05.992 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:53:05 compute-0 nova_compute[259627]: 2025-10-14 08:53:05.993 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 14 08:53:06 compute-0 nova_compute[259627]: 2025-10-14 08:53:06.022 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 14 08:53:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1864568504' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 08:53:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1864568504' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 08:53:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:53:07.011 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:53:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:53:07.012 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:53:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:53:07.012 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:53:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:53:07 compute-0 ceph-mon[74249]: pgmap v1026: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:07 compute-0 nova_compute[259627]: 2025-10-14 08:53:07.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:53:07 compute-0 nova_compute[259627]: 2025-10-14 08:53:07.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 08:53:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct 14 08:53:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct 14 08:53:08 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct 14 08:53:08 compute-0 nova_compute[259627]: 2025-10-14 08:53:08.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:53:08 compute-0 nova_compute[259627]: 2025-10-14 08:53:08.999 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 08:53:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:09 compute-0 ceph-mon[74249]: osdmap e119: 3 total, 3 up, 3 in
Oct 14 08:53:09 compute-0 ceph-mon[74249]: pgmap v1028: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:09 compute-0 nova_compute[259627]: 2025-10-14 08:53:09.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:53:09 compute-0 nova_compute[259627]: 2025-10-14 08:53:09.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:53:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct 14 08:53:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct 14 08:53:10 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct 14 08:53:10 compute-0 nova_compute[259627]: 2025-10-14 08:53:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:53:10 compute-0 nova_compute[259627]: 2025-10-14 08:53:10.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:53:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 42 op/s
Oct 14 08:53:11 compute-0 ceph-mon[74249]: osdmap e120: 3 total, 3 up, 3 in
Oct 14 08:53:11 compute-0 ceph-mon[74249]: pgmap v1030: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 42 op/s
Oct 14 08:53:11 compute-0 nova_compute[259627]: 2025-10-14 08:53:11.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:53:11 compute-0 nova_compute[259627]: 2025-10-14 08:53:11.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:53:12 compute-0 nova_compute[259627]: 2025-10-14 08:53:12.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:53:12 compute-0 nova_compute[259627]: 2025-10-14 08:53:12.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:53:12 compute-0 nova_compute[259627]: 2025-10-14 08:53:12.009 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:53:12 compute-0 nova_compute[259627]: 2025-10-14 08:53:12.009 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 08:53:12 compute-0 nova_compute[259627]: 2025-10-14 08:53:12.010 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:53:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:53:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:53:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1633643948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:53:12 compute-0 nova_compute[259627]: 2025-10-14 08:53:12.459 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:53:12 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1633643948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:53:12 compute-0 nova_compute[259627]: 2025-10-14 08:53:12.637 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:53:12 compute-0 nova_compute[259627]: 2025-10-14 08:53:12.639 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5142MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 08:53:12 compute-0 nova_compute[259627]: 2025-10-14 08:53:12.639 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:53:12 compute-0 nova_compute[259627]: 2025-10-14 08:53:12.640 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:53:12 compute-0 nova_compute[259627]: 2025-10-14 08:53:12.860 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 08:53:12 compute-0 nova_compute[259627]: 2025-10-14 08:53:12.861 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 08:53:12 compute-0 nova_compute[259627]: 2025-10-14 08:53:12.915 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 14 08:53:13 compute-0 nova_compute[259627]: 2025-10-14 08:53:13.000 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 14 08:53:13 compute-0 nova_compute[259627]: 2025-10-14 08:53:13.001 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 08:53:13 compute-0 nova_compute[259627]: 2025-10-14 08:53:13.018 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 14 08:53:13 compute-0 nova_compute[259627]: 2025-10-14 08:53:13.044 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 14 08:53:13 compute-0 nova_compute[259627]: 2025-10-14 08:53:13.062 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:53:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 42 op/s
Oct 14 08:53:13 compute-0 ceph-mon[74249]: pgmap v1031: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 42 op/s
Oct 14 08:53:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:53:13 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2218190398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:53:13 compute-0 nova_compute[259627]: 2025-10-14 08:53:13.553 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:53:13 compute-0 nova_compute[259627]: 2025-10-14 08:53:13.559 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:53:13 compute-0 nova_compute[259627]: 2025-10-14 08:53:13.574 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:53:13 compute-0 nova_compute[259627]: 2025-10-14 08:53:13.577 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 08:53:13 compute-0 nova_compute[259627]: 2025-10-14 08:53:13.577 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:53:14 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2218190398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:53:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 14 08:53:15 compute-0 ceph-mon[74249]: pgmap v1032: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 14 08:53:16 compute-0 sudo[270517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:53:16 compute-0 sudo[270517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:16 compute-0 sudo[270517]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:16 compute-0 sudo[270542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:53:16 compute-0 sudo[270542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:16 compute-0 sudo[270542]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:16 compute-0 sudo[270567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:53:16 compute-0 sudo[270567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:16 compute-0 sudo[270567]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:16 compute-0 nova_compute[259627]: 2025-10-14 08:53:16.573 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:53:16 compute-0 sudo[270592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 08:53:16 compute-0 sudo[270592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:16 compute-0 nova_compute[259627]: 2025-10-14 08:53:16.596 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:53:16 compute-0 nova_compute[259627]: 2025-10-14 08:53:16.596 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 08:53:16 compute-0 nova_compute[259627]: 2025-10-14 08:53:16.597 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 08:53:16 compute-0 nova_compute[259627]: 2025-10-14 08:53:16.619 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 08:53:16 compute-0 nova_compute[259627]: 2025-10-14 08:53:16.620 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:53:17 compute-0 sudo[270592]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:17 compute-0 nova_compute[259627]: 2025-10-14 08:53:17.135 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:53:17 compute-0 sudo[270648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:53:17 compute-0 sudo[270648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:17 compute-0 sudo[270648]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:17 compute-0 sudo[270673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:53:17 compute-0 sudo[270673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:17 compute-0 sudo[270673]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.6 MiB/s wr, 42 op/s
Oct 14 08:53:17 compute-0 sudo[270698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:53:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:53:17 compute-0 sudo[270698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:17 compute-0 sudo[270698]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:17 compute-0 ceph-mon[74249]: pgmap v1033: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.6 MiB/s wr, 42 op/s
Oct 14 08:53:17 compute-0 sudo[270723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 14 08:53:17 compute-0 sudo[270723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:17 compute-0 sudo[270723]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 08:53:17 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:53:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 08:53:17 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:53:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 08:53:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:53:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 08:53:17 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 08:53:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 08:53:17 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:53:17 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 7eea08f5-8c31-4b65-a2bb-324b4e2fb3e4 does not exist
Oct 14 08:53:17 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 2a8bd657-90eb-4b7b-8891-0ba26cffa796 does not exist
Oct 14 08:53:17 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev e43b6c7d-75ae-4278-8f4b-237e9aa331bf does not exist
Oct 14 08:53:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 08:53:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 08:53:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 08:53:17 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 08:53:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 08:53:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:53:17 compute-0 sudo[270767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:53:17 compute-0 sudo[270767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:17 compute-0 sudo[270767]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:17 compute-0 sudo[270792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:53:17 compute-0 sudo[270792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:17 compute-0 sudo[270792]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:17 compute-0 sudo[270817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:53:17 compute-0 sudo[270817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:17 compute-0 sudo[270817]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:18 compute-0 sudo[270842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 08:53:18 compute-0 sudo[270842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:18 compute-0 podman[270907]: 2025-10-14 08:53:18.477361487 +0000 UTC m=+0.044677783 container create 0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 08:53:18 compute-0 systemd[1]: Started libpod-conmon-0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c.scope.
Oct 14 08:53:18 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:53:18 compute-0 podman[270907]: 2025-10-14 08:53:18.457911927 +0000 UTC m=+0.025228263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:53:18 compute-0 podman[270907]: 2025-10-14 08:53:18.562857266 +0000 UTC m=+0.130173572 container init 0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 08:53:18 compute-0 podman[270907]: 2025-10-14 08:53:18.570701059 +0000 UTC m=+0.138017345 container start 0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 14 08:53:18 compute-0 podman[270907]: 2025-10-14 08:53:18.57436017 +0000 UTC m=+0.141676476 container attach 0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 08:53:18 compute-0 reverent_clarke[270923]: 167 167
Oct 14 08:53:18 compute-0 systemd[1]: libpod-0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c.scope: Deactivated successfully.
Oct 14 08:53:18 compute-0 conmon[270923]: conmon 0ab2f9e7a67f5e319541 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c.scope/container/memory.events
Oct 14 08:53:18 compute-0 podman[270907]: 2025-10-14 08:53:18.577949918 +0000 UTC m=+0.145266234 container died 0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:53:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d63a9cce15094f3ffbd1810c8bc38bc8152499d4ea538600ce8f8598ca607d91-merged.mount: Deactivated successfully.
Oct 14 08:53:18 compute-0 podman[270907]: 2025-10-14 08:53:18.62588546 +0000 UTC m=+0.193201766 container remove 0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 08:53:18 compute-0 systemd[1]: libpod-conmon-0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c.scope: Deactivated successfully.
Oct 14 08:53:18 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:53:18 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:53:18 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:53:18 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 08:53:18 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:53:18 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 08:53:18 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 08:53:18 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:53:18 compute-0 podman[270947]: 2025-10-14 08:53:18.808596827 +0000 UTC m=+0.056086364 container create 0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 08:53:18 compute-0 systemd[1]: Started libpod-conmon-0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc.scope.
Oct 14 08:53:18 compute-0 podman[270947]: 2025-10-14 08:53:18.789775343 +0000 UTC m=+0.037264890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:53:18 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681aa9254bee7257c54a95e84530a04197114e5793af66d1d194cf538852bb10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681aa9254bee7257c54a95e84530a04197114e5793af66d1d194cf538852bb10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681aa9254bee7257c54a95e84530a04197114e5793af66d1d194cf538852bb10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681aa9254bee7257c54a95e84530a04197114e5793af66d1d194cf538852bb10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681aa9254bee7257c54a95e84530a04197114e5793af66d1d194cf538852bb10/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 08:53:18 compute-0 podman[270947]: 2025-10-14 08:53:18.904475022 +0000 UTC m=+0.151964549 container init 0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 08:53:18 compute-0 podman[270947]: 2025-10-14 08:53:18.919236786 +0000 UTC m=+0.166726313 container start 0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:53:18 compute-0 podman[270947]: 2025-10-14 08:53:18.922754062 +0000 UTC m=+0.170243579 container attach 0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 08:53:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Oct 14 08:53:19 compute-0 ceph-mon[74249]: pgmap v1034: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Oct 14 08:53:19 compute-0 tender_rubin[270963]: --> passed data devices: 0 physical, 3 LVM
Oct 14 08:53:19 compute-0 tender_rubin[270963]: --> relative data size: 1.0
Oct 14 08:53:19 compute-0 tender_rubin[270963]: --> All data devices are unavailable
Oct 14 08:53:20 compute-0 systemd[1]: libpod-0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc.scope: Deactivated successfully.
Oct 14 08:53:20 compute-0 podman[270947]: 2025-10-14 08:53:20.014651602 +0000 UTC m=+1.262141169 container died 0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:53:20 compute-0 systemd[1]: libpod-0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc.scope: Consumed 1.044s CPU time.
Oct 14 08:53:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-681aa9254bee7257c54a95e84530a04197114e5793af66d1d194cf538852bb10-merged.mount: Deactivated successfully.
Oct 14 08:53:20 compute-0 podman[270947]: 2025-10-14 08:53:20.089348345 +0000 UTC m=+1.336837882 container remove 0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 08:53:20 compute-0 systemd[1]: libpod-conmon-0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc.scope: Deactivated successfully.
Oct 14 08:53:20 compute-0 sudo[270842]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:20 compute-0 sudo[271002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:53:20 compute-0 sudo[271002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:20 compute-0 sudo[271002]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:20 compute-0 sudo[271027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:53:20 compute-0 sudo[271027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:20 compute-0 sudo[271027]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:20 compute-0 sudo[271052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:53:20 compute-0 sudo[271052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:20 compute-0 sudo[271052]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:20 compute-0 sudo[271077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 08:53:20 compute-0 sudo[271077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:20 compute-0 podman[271143]: 2025-10-14 08:53:20.809299862 +0000 UTC m=+0.066895001 container create c204e9ae9addd8d4211a5200ac853a95fe24f80c5160036d26bdb7fafa873f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:53:20 compute-0 systemd[1]: Started libpod-conmon-c204e9ae9addd8d4211a5200ac853a95fe24f80c5160036d26bdb7fafa873f09.scope.
Oct 14 08:53:20 compute-0 podman[271143]: 2025-10-14 08:53:20.780702337 +0000 UTC m=+0.038297496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:53:20 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:53:20 compute-0 podman[271143]: 2025-10-14 08:53:20.907830372 +0000 UTC m=+0.165425561 container init c204e9ae9addd8d4211a5200ac853a95fe24f80c5160036d26bdb7fafa873f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 08:53:20 compute-0 podman[271143]: 2025-10-14 08:53:20.91868443 +0000 UTC m=+0.176279559 container start c204e9ae9addd8d4211a5200ac853a95fe24f80c5160036d26bdb7fafa873f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_noyce, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 14 08:53:20 compute-0 podman[271143]: 2025-10-14 08:53:20.922305599 +0000 UTC m=+0.179900778 container attach c204e9ae9addd8d4211a5200ac853a95fe24f80c5160036d26bdb7fafa873f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_noyce, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:53:20 compute-0 crazy_noyce[271159]: 167 167
Oct 14 08:53:20 compute-0 systemd[1]: libpod-c204e9ae9addd8d4211a5200ac853a95fe24f80c5160036d26bdb7fafa873f09.scope: Deactivated successfully.
Oct 14 08:53:20 compute-0 podman[271143]: 2025-10-14 08:53:20.925609681 +0000 UTC m=+0.183204800 container died c204e9ae9addd8d4211a5200ac853a95fe24f80c5160036d26bdb7fafa873f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_noyce, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:53:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-588c460178e3b04deace152e42c5e5a2cc2edcae8d8de0b700dd2da9dbf684de-merged.mount: Deactivated successfully.
Oct 14 08:53:20 compute-0 podman[271143]: 2025-10-14 08:53:20.979065299 +0000 UTC m=+0.236660398 container remove c204e9ae9addd8d4211a5200ac853a95fe24f80c5160036d26bdb7fafa873f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:53:20 compute-0 systemd[1]: libpod-conmon-c204e9ae9addd8d4211a5200ac853a95fe24f80c5160036d26bdb7fafa873f09.scope: Deactivated successfully.
Oct 14 08:53:21 compute-0 podman[271185]: 2025-10-14 08:53:21.18058931 +0000 UTC m=+0.054800043 container create b42619031d0273028fdb6a9ee0f2c858b9bd77fafd431f9cd883faebc1d1c824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 08:53:21 compute-0 systemd[1]: Started libpod-conmon-b42619031d0273028fdb6a9ee0f2c858b9bd77fafd431f9cd883faebc1d1c824.scope.
Oct 14 08:53:21 compute-0 podman[271185]: 2025-10-14 08:53:21.157693555 +0000 UTC m=+0.031904278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:53:21 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a79211c3bc7a95077d1e7417e7456b6320ffcd41fcbdfda57f9080950f934b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a79211c3bc7a95077d1e7417e7456b6320ffcd41fcbdfda57f9080950f934b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a79211c3bc7a95077d1e7417e7456b6320ffcd41fcbdfda57f9080950f934b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a79211c3bc7a95077d1e7417e7456b6320ffcd41fcbdfda57f9080950f934b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:53:21 compute-0 podman[271185]: 2025-10-14 08:53:21.290430999 +0000 UTC m=+0.164641722 container init b42619031d0273028fdb6a9ee0f2c858b9bd77fafd431f9cd883faebc1d1c824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_clarke, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 08:53:21 compute-0 podman[271185]: 2025-10-14 08:53:21.29696955 +0000 UTC m=+0.171180253 container start b42619031d0273028fdb6a9ee0f2c858b9bd77fafd431f9cd883faebc1d1c824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_clarke, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 08:53:21 compute-0 podman[271185]: 2025-10-14 08:53:21.301295087 +0000 UTC m=+0.175505820 container attach b42619031d0273028fdb6a9ee0f2c858b9bd77fafd431f9cd883faebc1d1c824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_clarke, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 08:53:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 24 op/s
Oct 14 08:53:21 compute-0 ceph-mon[74249]: pgmap v1035: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 24 op/s
Oct 14 08:53:22 compute-0 romantic_clarke[271202]: {
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:     "0": [
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:         {
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "devices": [
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "/dev/loop3"
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             ],
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "lv_name": "ceph_lv0",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "lv_size": "21470642176",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "name": "ceph_lv0",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "tags": {
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.cluster_name": "ceph",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.crush_device_class": "",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.encrypted": "0",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.osd_id": "0",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.type": "block",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.vdo": "0"
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             },
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "type": "block",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "vg_name": "ceph_vg0"
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:         }
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:     ],
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:     "1": [
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:         {
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "devices": [
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "/dev/loop4"
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             ],
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "lv_name": "ceph_lv1",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "lv_size": "21470642176",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "name": "ceph_lv1",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "tags": {
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.cluster_name": "ceph",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.crush_device_class": "",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.encrypted": "0",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.osd_id": "1",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.type": "block",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.vdo": "0"
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             },
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "type": "block",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "vg_name": "ceph_vg1"
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:         }
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:     ],
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:     "2": [
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:         {
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "devices": [
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "/dev/loop5"
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             ],
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "lv_name": "ceph_lv2",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "lv_size": "21470642176",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "name": "ceph_lv2",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "tags": {
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.cluster_name": "ceph",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.crush_device_class": "",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.encrypted": "0",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.osd_id": "2",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.type": "block",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:                 "ceph.vdo": "0"
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             },
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "type": "block",
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:             "vg_name": "ceph_vg2"
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:         }
Oct 14 08:53:22 compute-0 romantic_clarke[271202]:     ]
Oct 14 08:53:22 compute-0 romantic_clarke[271202]: }
Oct 14 08:53:22 compute-0 systemd[1]: libpod-b42619031d0273028fdb6a9ee0f2c858b9bd77fafd431f9cd883faebc1d1c824.scope: Deactivated successfully.
Oct 14 08:53:22 compute-0 podman[271185]: 2025-10-14 08:53:22.065082095 +0000 UTC m=+0.939292868 container died b42619031d0273028fdb6a9ee0f2c858b9bd77fafd431f9cd883faebc1d1c824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_clarke, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 08:53:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-57a79211c3bc7a95077d1e7417e7456b6320ffcd41fcbdfda57f9080950f934b-merged.mount: Deactivated successfully.
Oct 14 08:53:22 compute-0 podman[271185]: 2025-10-14 08:53:22.145342145 +0000 UTC m=+1.019552838 container remove b42619031d0273028fdb6a9ee0f2c858b9bd77fafd431f9cd883faebc1d1c824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 08:53:22 compute-0 systemd[1]: libpod-conmon-b42619031d0273028fdb6a9ee0f2c858b9bd77fafd431f9cd883faebc1d1c824.scope: Deactivated successfully.
Oct 14 08:53:22 compute-0 sudo[271077]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:22 compute-0 podman[271222]: 2025-10-14 08:53:22.191539774 +0000 UTC m=+0.084922886 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 08:53:22 compute-0 podman[271212]: 2025-10-14 08:53:22.19946665 +0000 UTC m=+0.088864243 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 08:53:22 compute-0 sudo[271262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:53:22 compute-0 sudo[271262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:22 compute-0 sudo[271262]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:22 compute-0 sudo[271287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:53:22 compute-0 sudo[271287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:22 compute-0 sudo[271287]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:22 compute-0 sudo[271312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:53:22 compute-0 sudo[271312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:22 compute-0 sudo[271312]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:53:22 compute-0 sudo[271337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 08:53:22 compute-0 sudo[271337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:22 compute-0 podman[271403]: 2025-10-14 08:53:22.829368186 +0000 UTC m=+0.047156264 container create 420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_volhard, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 08:53:22 compute-0 systemd[1]: Started libpod-conmon-420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490.scope.
Oct 14 08:53:22 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:53:22 compute-0 podman[271403]: 2025-10-14 08:53:22.807935237 +0000 UTC m=+0.025723345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:53:22 compute-0 podman[271403]: 2025-10-14 08:53:22.916045714 +0000 UTC m=+0.133833812 container init 420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_volhard, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 08:53:22 compute-0 podman[271403]: 2025-10-14 08:53:22.92318772 +0000 UTC m=+0.140975808 container start 420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_volhard, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:53:22 compute-0 podman[271403]: 2025-10-14 08:53:22.927297781 +0000 UTC m=+0.145085859 container attach 420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:53:22 compute-0 systemd[1]: libpod-420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490.scope: Deactivated successfully.
Oct 14 08:53:22 compute-0 cool_volhard[271420]: 167 167
Oct 14 08:53:22 compute-0 conmon[271420]: conmon 420a9d419a3d1538a263 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490.scope/container/memory.events
Oct 14 08:53:22 compute-0 podman[271403]: 2025-10-14 08:53:22.932218233 +0000 UTC m=+0.150006311 container died 420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 08:53:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-afe1858844f5d53133ffba805c59e947253236ffa9e0473d5e57d1be2b17c53f-merged.mount: Deactivated successfully.
Oct 14 08:53:22 compute-0 podman[271403]: 2025-10-14 08:53:22.977465299 +0000 UTC m=+0.195253387 container remove 420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_volhard, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 08:53:22 compute-0 systemd[1]: libpod-conmon-420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490.scope: Deactivated successfully.
Oct 14 08:53:23 compute-0 podman[271446]: 2025-10-14 08:53:23.191052237 +0000 UTC m=+0.052707491 container create 94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 08:53:23 compute-0 systemd[1]: Started libpod-conmon-94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377.scope.
Oct 14 08:53:23 compute-0 podman[271446]: 2025-10-14 08:53:23.164951973 +0000 UTC m=+0.026607317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:53:23 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:53:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0994fdfeeb8d5ee654a6055551b946e229266d599c6e6adfdcf045430de87c2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:53:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0994fdfeeb8d5ee654a6055551b946e229266d599c6e6adfdcf045430de87c2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:53:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0994fdfeeb8d5ee654a6055551b946e229266d599c6e6adfdcf045430de87c2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:53:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0994fdfeeb8d5ee654a6055551b946e229266d599c6e6adfdcf045430de87c2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:53:23 compute-0 podman[271446]: 2025-10-14 08:53:23.297519703 +0000 UTC m=+0.159174977 container init 94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:53:23 compute-0 podman[271446]: 2025-10-14 08:53:23.308660888 +0000 UTC m=+0.170316152 container start 94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:53:23 compute-0 podman[271446]: 2025-10-14 08:53:23.312716478 +0000 UTC m=+0.174371782 container attach 94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:53:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Oct 14 08:53:23 compute-0 ceph-mon[74249]: pgmap v1036: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Oct 14 08:53:24 compute-0 youthful_neumann[271463]: {
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:         "osd_id": 2,
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:         "type": "bluestore"
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:     },
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:         "osd_id": 1,
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:         "type": "bluestore"
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:     },
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:         "osd_id": 0,
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:         "type": "bluestore"
Oct 14 08:53:24 compute-0 youthful_neumann[271463]:     }
Oct 14 08:53:24 compute-0 youthful_neumann[271463]: }
Oct 14 08:53:24 compute-0 systemd[1]: libpod-94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377.scope: Deactivated successfully.
Oct 14 08:53:24 compute-0 systemd[1]: libpod-94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377.scope: Consumed 1.078s CPU time.
Oct 14 08:53:24 compute-0 podman[271446]: 2025-10-14 08:53:24.380913533 +0000 UTC m=+1.242568777 container died 94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:53:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0994fdfeeb8d5ee654a6055551b946e229266d599c6e6adfdcf045430de87c2d-merged.mount: Deactivated successfully.
Oct 14 08:53:24 compute-0 podman[271446]: 2025-10-14 08:53:24.46674534 +0000 UTC m=+1.328400584 container remove 94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 08:53:24 compute-0 systemd[1]: libpod-conmon-94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377.scope: Deactivated successfully.
Oct 14 08:53:24 compute-0 sudo[271337]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 08:53:24 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:53:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 08:53:24 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:53:24 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 8d25ef22-aa9e-4e5e-af58-40fe647aaf59 does not exist
Oct 14 08:53:24 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 477461a4-fb52-4466-9bb4-7f51789677b3 does not exist
Oct 14 08:53:24 compute-0 sudo[271511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:53:24 compute-0 sudo[271511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:24 compute-0 sudo[271511]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:24 compute-0 sudo[271536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 08:53:24 compute-0 sudo[271536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:53:24 compute-0 sudo[271536]: pam_unix(sudo:session): session closed for user root
Oct 14 08:53:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Oct 14 08:53:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:53:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:53:25 compute-0 ceph-mon[74249]: pgmap v1037: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Oct 14 08:53:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:53:27 compute-0 ceph-mon[74249]: pgmap v1038: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:29 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 08:53:29 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6036 writes, 24K keys, 6036 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6036 writes, 1099 syncs, 5.49 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 387 writes, 815 keys, 387 commit groups, 1.0 writes per commit group, ingest: 0.51 MB, 0.00 MB/s
                                           Interval WAL: 387 writes, 184 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 08:53:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:29 compute-0 ceph-mon[74249]: pgmap v1039: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:53:30.854 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:53:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:53:30.856 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 08:53:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:31 compute-0 ceph-mon[74249]: pgmap v1040: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:53:31.857 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:53:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:53:32
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['vms', 'volumes', 'images', 'default.rgw.meta', 'backups', 'default.rgw.log', 'default.rgw.control', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 08:53:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 08:53:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:33 compute-0 ceph-mon[74249]: pgmap v1041: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:33 compute-0 podman[271562]: 2025-10-14 08:53:33.710410706 +0000 UTC m=+0.123421155 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 08:53:33 compute-0 podman[271561]: 2025-10-14 08:53:33.710480998 +0000 UTC m=+0.123084397 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 14 08:53:34 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 08:53:34 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7204 writes, 29K keys, 7204 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7204 writes, 1461 syncs, 4.93 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 410 writes, 1133 keys, 410 commit groups, 1.0 writes per commit group, ingest: 0.62 MB, 0.00 MB/s
                                           Interval WAL: 410 writes, 188 syncs, 2.18 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 08:53:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:35 compute-0 ceph-mon[74249]: pgmap v1042: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:53:37 compute-0 ceph-mon[74249]: pgmap v1043: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:39 compute-0 ceph-mon[74249]: pgmap v1044: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:39 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 08:53:39 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6132 writes, 25K keys, 6132 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6132 writes, 1126 syncs, 5.45 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 533 writes, 1494 keys, 533 commit groups, 1.0 writes per commit group, ingest: 0.70 MB, 0.00 MB/s
                                           Interval WAL: 533 writes, 250 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 08:53:40 compute-0 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 08:53:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:41 compute-0 ceph-mon[74249]: pgmap v1045: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:53:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 08:53:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:43 compute-0 ceph-mon[74249]: pgmap v1046: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:45 compute-0 ceph-mon[74249]: pgmap v1047: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:53:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:47 compute-0 ceph-mon[74249]: pgmap v1048: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:48 compute-0 nova_compute[259627]: 2025-10-14 08:53:48.213 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Acquiring lock "2ce8be7a-3198-4f1c-ba79-e11a24581a60" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:53:48 compute-0 nova_compute[259627]: 2025-10-14 08:53:48.215 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "2ce8be7a-3198-4f1c-ba79-e11a24581a60" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:53:48 compute-0 nova_compute[259627]: 2025-10-14 08:53:48.243 2 DEBUG nova.compute.manager [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:53:48 compute-0 nova_compute[259627]: 2025-10-14 08:53:48.380 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Acquiring lock "0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:53:48 compute-0 nova_compute[259627]: 2025-10-14 08:53:48.381 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:53:48 compute-0 nova_compute[259627]: 2025-10-14 08:53:48.401 2 DEBUG nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:53:48 compute-0 nova_compute[259627]: 2025-10-14 08:53:48.407 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:53:48 compute-0 nova_compute[259627]: 2025-10-14 08:53:48.407 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:53:48 compute-0 nova_compute[259627]: 2025-10-14 08:53:48.417 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:53:48 compute-0 nova_compute[259627]: 2025-10-14 08:53:48.418 2 INFO nova.compute.claims [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:53:48 compute-0 nova_compute[259627]: 2025-10-14 08:53:48.540 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:53:48 compute-0 nova_compute[259627]: 2025-10-14 08:53:48.572 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:53:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:53:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3370884652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.053 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.059 2 DEBUG nova.compute.provider_tree [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.083 2 DEBUG nova.scheduler.client.report [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:53:49 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3370884652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.112 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.113 2 DEBUG nova.compute.manager [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.117 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.127 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.128 2 INFO nova.compute.claims [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.211 2 DEBUG nova.compute.manager [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.237 2 INFO nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.257 2 DEBUG nova.compute.manager [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.317 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.357 2 DEBUG nova.compute.manager [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.360 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.361 2 INFO nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Creating image(s)
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.405 2 DEBUG nova.storage.rbd_utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] rbd image 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:53:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.440 2 DEBUG nova.storage.rbd_utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] rbd image 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.469 2 DEBUG nova.storage.rbd_utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] rbd image 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.507 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.508 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:53:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:53:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1449041474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.781 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.789 2 DEBUG nova.compute.provider_tree [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.809 2 DEBUG nova.scheduler.client.report [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.847 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.848 2 DEBUG nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.938 2 DEBUG nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.939 2 DEBUG nova.network.neutron [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:53:49 compute-0 nova_compute[259627]: 2025-10-14 08:53:49.971 2 INFO nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:53:50 compute-0 nova_compute[259627]: 2025-10-14 08:53:50.015 2 DEBUG nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:53:50 compute-0 ceph-mon[74249]: pgmap v1049: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1449041474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:53:50 compute-0 nova_compute[259627]: 2025-10-14 08:53:50.138 2 DEBUG nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:53:50 compute-0 nova_compute[259627]: 2025-10-14 08:53:50.140 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:53:50 compute-0 nova_compute[259627]: 2025-10-14 08:53:50.140 2 INFO nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Creating image(s)
Oct 14 08:53:50 compute-0 nova_compute[259627]: 2025-10-14 08:53:50.162 2 DEBUG nova.storage.rbd_utils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] rbd image 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:53:50 compute-0 nova_compute[259627]: 2025-10-14 08:53:50.187 2 DEBUG nova.storage.rbd_utils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] rbd image 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:53:50 compute-0 nova_compute[259627]: 2025-10-14 08:53:50.214 2 DEBUG nova.storage.rbd_utils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] rbd image 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:53:50 compute-0 nova_compute[259627]: 2025-10-14 08:53:50.219 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:53:50 compute-0 nova_compute[259627]: 2025-10-14 08:53:50.837 2 DEBUG nova.virt.libvirt.imagebackend [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Image locations are: [{'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/a4789543-f429-47d7-9f79-80a9d90a59f9/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/a4789543-f429-47d7-9f79-80a9d90a59f9/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 14 08:53:50 compute-0 nova_compute[259627]: 2025-10-14 08:53:50.842 2 DEBUG nova.network.neutron [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 14 08:53:50 compute-0 nova_compute[259627]: 2025-10-14 08:53:50.843 2 DEBUG nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:53:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:52 compute-0 nova_compute[259627]: 2025-10-14 08:53:52.370 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:53:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:53:52 compute-0 nova_compute[259627]: 2025-10-14 08:53:52.456 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963.part --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:53:52 compute-0 nova_compute[259627]: 2025-10-14 08:53:52.457 2 DEBUG nova.virt.images [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] a4789543-f429-47d7-9f79-80a9d90a59f9 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 14 08:53:52 compute-0 nova_compute[259627]: 2025-10-14 08:53:52.459 2 DEBUG nova.privsep.utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 14 08:53:52 compute-0 nova_compute[259627]: 2025-10-14 08:53:52.460 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963.part /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:53:52 compute-0 ceph-mon[74249]: pgmap v1050: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:52 compute-0 nova_compute[259627]: 2025-10-14 08:53:52.649 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963.part /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963.converted" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:53:52 compute-0 nova_compute[259627]: 2025-10-14 08:53:52.655 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:53:52 compute-0 podman[271771]: 2025-10-14 08:53:52.695242128 +0000 UTC m=+0.090816301 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid)
Oct 14 08:53:52 compute-0 podman[271770]: 2025-10-14 08:53:52.708828573 +0000 UTC m=+0.111827290 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 08:53:52 compute-0 nova_compute[259627]: 2025-10-14 08:53:52.743 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963.converted --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:53:52 compute-0 nova_compute[259627]: 2025-10-14 08:53:52.745 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:53:52 compute-0 nova_compute[259627]: 2025-10-14 08:53:52.774 2 DEBUG nova.storage.rbd_utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] rbd image 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:53:52 compute-0 nova_compute[259627]: 2025-10-14 08:53:52.778 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:53:52 compute-0 nova_compute[259627]: 2025-10-14 08:53:52.803 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 2.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:53:52 compute-0 nova_compute[259627]: 2025-10-14 08:53:52.804 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:53:52 compute-0 nova_compute[259627]: 2025-10-14 08:53:52.831 2 DEBUG nova.storage.rbd_utils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] rbd image 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:53:52 compute-0 nova_compute[259627]: 2025-10-14 08:53:52.835 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:53:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct 14 08:53:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct 14 08:53:53 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct 14 08:53:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct 14 08:53:54 compute-0 ceph-mon[74249]: pgmap v1051: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:53:54 compute-0 ceph-mon[74249]: osdmap e121: 3 total, 3 up, 3 in
Oct 14 08:53:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct 14 08:53:54 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct 14 08:53:54 compute-0 nova_compute[259627]: 2025-10-14 08:53:54.817 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:53:54 compute-0 nova_compute[259627]: 2025-10-14 08:53:54.838 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.003s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:53:54 compute-0 nova_compute[259627]: 2025-10-14 08:53:54.910 2 DEBUG nova.storage.rbd_utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] resizing rbd image 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:53:54 compute-0 nova_compute[259627]: 2025-10-14 08:53:54.941 2 DEBUG nova.storage.rbd_utils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] resizing rbd image 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.006 2 DEBUG nova.objects.instance [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lazy-loading 'migration_context' on Instance uuid 2ce8be7a-3198-4f1c-ba79-e11a24581a60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.039 2 DEBUG nova.objects.instance [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lazy-loading 'migration_context' on Instance uuid 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.041 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.042 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Ensure instance console log exists: /var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.042 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.042 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.043 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.044 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.049 2 WARNING nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.055 2 DEBUG nova.virt.libvirt.host [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.055 2 DEBUG nova.virt.libvirt.host [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.057 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.057 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Ensure instance console log exists: /var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.057 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.058 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.058 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.059 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.060 2 DEBUG nova.virt.libvirt.host [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.060 2 DEBUG nova.virt.libvirt.host [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.060 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.061 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.061 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.061 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.061 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.062 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.062 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.062 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.062 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.062 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.063 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.063 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.066 2 DEBUG nova.privsep.utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.066 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.081 2 WARNING nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.088 2 DEBUG nova.virt.libvirt.host [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.088 2 DEBUG nova.virt.libvirt.host [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.092 2 DEBUG nova.virt.libvirt.host [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.092 2 DEBUG nova.virt.libvirt.host [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.092 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.093 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.093 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.093 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.094 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.094 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.094 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.094 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.095 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.095 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.095 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.095 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.098 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:53:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 134 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.3 MiB/s wr, 82 op/s
Oct 14 08:53:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:53:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3471100098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.481 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:53:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:53:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/558469147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.511 2 DEBUG nova.storage.rbd_utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] rbd image 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.516 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:53:55 compute-0 ceph-mon[74249]: osdmap e122: 3 total, 3 up, 3 in
Oct 14 08:53:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3471100098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:53:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/558469147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.534 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.557 2 DEBUG nova.storage.rbd_utils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] rbd image 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.560 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:53:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:53:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3483536475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.925 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.927 2 DEBUG nova.objects.instance [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ce8be7a-3198-4f1c-ba79-e11a24581a60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:53:55 compute-0 nova_compute[259627]: 2025-10-14 08:53:55.949 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:53:55 compute-0 nova_compute[259627]:   <uuid>2ce8be7a-3198-4f1c-ba79-e11a24581a60</uuid>
Oct 14 08:53:55 compute-0 nova_compute[259627]:   <name>instance-00000001</name>
Oct 14 08:53:55 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:53:55 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:53:55 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <nova:name>tempest-AutoAllocateNetworkTest-server-865433973</nova:name>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:53:55</nova:creationTime>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:53:55 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:53:55 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:53:55 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:53:55 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:53:55 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:53:55 compute-0 nova_compute[259627]:         <nova:user uuid="4bed0ea53b244e579f278f95b35bfc0d">tempest-AutoAllocateNetworkTest-907131017-project-member</nova:user>
Oct 14 08:53:55 compute-0 nova_compute[259627]:         <nova:project uuid="6e03adb4741d4f1c8279abf27fb2b6a1">tempest-AutoAllocateNetworkTest-907131017</nova:project>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:53:55 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:53:55 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <system>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <entry name="serial">2ce8be7a-3198-4f1c-ba79-e11a24581a60</entry>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <entry name="uuid">2ce8be7a-3198-4f1c-ba79-e11a24581a60</entry>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     </system>
Oct 14 08:53:55 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:53:55 compute-0 nova_compute[259627]:   <os>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:   </os>
Oct 14 08:53:55 compute-0 nova_compute[259627]:   <features>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:   </features>
Oct 14 08:53:55 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:53:55 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:53:55 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk">
Oct 14 08:53:55 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       </source>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:53:55 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk.config">
Oct 14 08:53:55 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       </source>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:53:55 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60/console.log" append="off"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <video>
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     </video>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:53:55 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:53:55 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:53:55 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:53:55 compute-0 nova_compute[259627]: </domain>
Oct 14 08:53:55 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.003 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.005 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.005 2 INFO nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Using config drive
Oct 14 08:53:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:53:56 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3799170032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.023 2 DEBUG nova.storage.rbd_utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] rbd image 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.029 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.030 2 DEBUG nova.objects.instance [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.057 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:53:56 compute-0 nova_compute[259627]:   <uuid>0a24666a-3d83-4fd9-8a89-c0585d8dc8e3</uuid>
Oct 14 08:53:56 compute-0 nova_compute[259627]:   <name>instance-00000002</name>
Oct 14 08:53:56 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:53:56 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:53:56 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerExternalEventsTest-server-98356922</nova:name>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:53:55</nova:creationTime>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:53:56 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:53:56 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:53:56 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:53:56 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:53:56 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:53:56 compute-0 nova_compute[259627]:         <nova:user uuid="352e09590ad54449b344c1cf9ed31e15">tempest-ServerExternalEventsTest-1253606656-project-member</nova:user>
Oct 14 08:53:56 compute-0 nova_compute[259627]:         <nova:project uuid="e2a0546071664670a3e6a70205cf65a4">tempest-ServerExternalEventsTest-1253606656</nova:project>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:53:56 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:53:56 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <system>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <entry name="serial">0a24666a-3d83-4fd9-8a89-c0585d8dc8e3</entry>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <entry name="uuid">0a24666a-3d83-4fd9-8a89-c0585d8dc8e3</entry>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     </system>
Oct 14 08:53:56 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:53:56 compute-0 nova_compute[259627]:   <os>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:   </os>
Oct 14 08:53:56 compute-0 nova_compute[259627]:   <features>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:   </features>
Oct 14 08:53:56 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:53:56 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:53:56 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk">
Oct 14 08:53:56 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       </source>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:53:56 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk.config">
Oct 14 08:53:56 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       </source>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:53:56 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3/console.log" append="off"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <video>
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     </video>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:53:56 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:53:56 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:53:56 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:53:56 compute-0 nova_compute[259627]: </domain>
Oct 14 08:53:56 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.114 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.114 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.115 2 INFO nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Using config drive
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.134 2 DEBUG nova.storage.rbd_utils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] rbd image 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:53:56 compute-0 ceph-mon[74249]: pgmap v1054: 305 pgs: 305 active+clean; 134 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.3 MiB/s wr, 82 op/s
Oct 14 08:53:56 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3483536475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:53:56 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3799170032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.614 2 INFO nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Creating config drive at /var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3/disk.config
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.623 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6xf1o1nj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.716 2 INFO nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Creating config drive at /var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60/disk.config
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.727 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg7hh3n6h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.755 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6xf1o1nj" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.792 2 DEBUG nova.storage.rbd_utils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] rbd image 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.797 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3/disk.config 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.868 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg7hh3n6h" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.911 2 DEBUG nova.storage.rbd_utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] rbd image 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.917 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60/disk.config 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.981 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3/disk.config 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:53:56 compute-0 nova_compute[259627]: 2025-10-14 08:53:56.983 2 INFO nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Deleting local config drive /var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3/disk.config because it was imported into RBD.
Oct 14 08:53:57 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 14 08:53:57 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 14 08:53:57 compute-0 nova_compute[259627]: 2025-10-14 08:53:57.116 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60/disk.config 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:53:57 compute-0 nova_compute[259627]: 2025-10-14 08:53:57.117 2 INFO nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Deleting local config drive /var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60/disk.config because it was imported into RBD.
Oct 14 08:53:57 compute-0 systemd-machined[214636]: New machine qemu-1-instance-00000002.
Oct 14 08:53:57 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Oct 14 08:53:57 compute-0 systemd-machined[214636]: New machine qemu-2-instance-00000001.
Oct 14 08:53:57 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000001.
Oct 14 08:53:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:53:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct 14 08:53:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 134 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.3 MiB/s wr, 82 op/s
Oct 14 08:53:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct 14 08:53:57 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct 14 08:53:58 compute-0 ceph-mon[74249]: pgmap v1055: 305 pgs: 305 active+clean; 134 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.3 MiB/s wr, 82 op/s
Oct 14 08:53:58 compute-0 ceph-mon[74249]: osdmap e123: 3 total, 3 up, 3 in
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.489 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432038.488253, 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.490 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] VM Resumed (Lifecycle Event)
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.493 2 DEBUG nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.493 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.497 2 INFO nova.virt.libvirt.driver [-] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Instance spawned successfully.
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.497 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.551 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.561 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.562 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.563 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.563 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.564 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.564 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.568 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.569 2 DEBUG nova.compute.manager [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.570 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.574 2 INFO nova.virt.libvirt.driver [-] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Instance spawned successfully.
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.574 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.618 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.619 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432038.490978, 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.619 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] VM Started (Lifecycle Event)
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.640 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.646 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.651 2 INFO nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Took 8.51 seconds to spawn the instance on the hypervisor.
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.652 2 DEBUG nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.653 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.654 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.654 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.655 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.655 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.655 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.664 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.664 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432038.5618749, 2ce8be7a-3198-4f1c-ba79-e11a24581a60 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.664 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] VM Resumed (Lifecycle Event)
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.687 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.689 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.716 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.716 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432038.5623958, 2ce8be7a-3198-4f1c-ba79-e11a24581a60 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.717 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] VM Started (Lifecycle Event)
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.725 2 INFO nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Took 10.21 seconds to build instance.
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.733 2 INFO nova.compute.manager [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Took 9.37 seconds to spawn the instance on the hypervisor.
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.733 2 DEBUG nova.compute.manager [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.734 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.739 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.746 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.768 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.797 2 INFO nova.compute.manager [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Took 10.44 seconds to build instance.
Oct 14 08:53:58 compute-0 nova_compute[259627]: 2025-10-14 08:53:58.822 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "2ce8be7a-3198-4f1c-ba79-e11a24581a60" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:53:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 134 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.1 MiB/s wr, 109 op/s
Oct 14 08:53:59 compute-0 nova_compute[259627]: 2025-10-14 08:53:59.938 2 DEBUG nova.compute.manager [None req-83eff381-a19d-4fb7-a9b1-a49387258c00 767ba0b62bad4323bf9810b31b7bf2c1 83e54416b4e149f881ead0c3ff615944 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:53:59 compute-0 nova_compute[259627]: 2025-10-14 08:53:59.939 2 DEBUG nova.compute.manager [None req-83eff381-a19d-4fb7-a9b1-a49387258c00 767ba0b62bad4323bf9810b31b7bf2c1 83e54416b4e149f881ead0c3ff615944 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:53:59 compute-0 nova_compute[259627]: 2025-10-14 08:53:59.940 2 DEBUG oslo_concurrency.lockutils [None req-83eff381-a19d-4fb7-a9b1-a49387258c00 767ba0b62bad4323bf9810b31b7bf2c1 83e54416b4e149f881ead0c3ff615944 - - default default] Acquiring lock "refresh_cache-0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:53:59 compute-0 nova_compute[259627]: 2025-10-14 08:53:59.940 2 DEBUG oslo_concurrency.lockutils [None req-83eff381-a19d-4fb7-a9b1-a49387258c00 767ba0b62bad4323bf9810b31b7bf2c1 83e54416b4e149f881ead0c3ff615944 - - default default] Acquired lock "refresh_cache-0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:53:59 compute-0 nova_compute[259627]: 2025-10-14 08:53:59.942 2 DEBUG nova.network.neutron [None req-83eff381-a19d-4fb7-a9b1-a49387258c00 767ba0b62bad4323bf9810b31b7bf2c1 83e54416b4e149f881ead0c3ff615944 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:54:00 compute-0 nova_compute[259627]: 2025-10-14 08:54:00.187 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Acquiring lock "0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:00 compute-0 nova_compute[259627]: 2025-10-14 08:54:00.188 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:00 compute-0 nova_compute[259627]: 2025-10-14 08:54:00.188 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Acquiring lock "0a24666a-3d83-4fd9-8a89-c0585d8dc8e3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:00 compute-0 nova_compute[259627]: 2025-10-14 08:54:00.189 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "0a24666a-3d83-4fd9-8a89-c0585d8dc8e3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:00 compute-0 nova_compute[259627]: 2025-10-14 08:54:00.190 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "0a24666a-3d83-4fd9-8a89-c0585d8dc8e3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:00 compute-0 nova_compute[259627]: 2025-10-14 08:54:00.192 2 INFO nova.compute.manager [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Terminating instance
Oct 14 08:54:00 compute-0 nova_compute[259627]: 2025-10-14 08:54:00.194 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Acquiring lock "refresh_cache-0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:54:00 compute-0 nova_compute[259627]: 2025-10-14 08:54:00.197 2 DEBUG nova.network.neutron [None req-83eff381-a19d-4fb7-a9b1-a49387258c00 767ba0b62bad4323bf9810b31b7bf2c1 83e54416b4e149f881ead0c3ff615944 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:54:00 compute-0 nova_compute[259627]: 2025-10-14 08:54:00.419 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Acquiring lock "2ce8be7a-3198-4f1c-ba79-e11a24581a60" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:00 compute-0 nova_compute[259627]: 2025-10-14 08:54:00.420 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "2ce8be7a-3198-4f1c-ba79-e11a24581a60" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:00 compute-0 nova_compute[259627]: 2025-10-14 08:54:00.421 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Acquiring lock "2ce8be7a-3198-4f1c-ba79-e11a24581a60-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:00 compute-0 nova_compute[259627]: 2025-10-14 08:54:00.421 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "2ce8be7a-3198-4f1c-ba79-e11a24581a60-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:00 compute-0 nova_compute[259627]: 2025-10-14 08:54:00.422 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "2ce8be7a-3198-4f1c-ba79-e11a24581a60-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:00 compute-0 nova_compute[259627]: 2025-10-14 08:54:00.424 2 INFO nova.compute.manager [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Terminating instance
Oct 14 08:54:00 compute-0 nova_compute[259627]: 2025-10-14 08:54:00.425 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Acquiring lock "refresh_cache-2ce8be7a-3198-4f1c-ba79-e11a24581a60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:54:00 compute-0 nova_compute[259627]: 2025-10-14 08:54:00.426 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Acquired lock "refresh_cache-2ce8be7a-3198-4f1c-ba79-e11a24581a60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:54:00 compute-0 nova_compute[259627]: 2025-10-14 08:54:00.426 2 DEBUG nova.network.neutron [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:54:00 compute-0 ceph-mon[74249]: pgmap v1057: 305 pgs: 305 active+clean; 134 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.1 MiB/s wr, 109 op/s
Oct 14 08:54:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 8.2 MiB/s rd, 5.4 MiB/s wr, 306 op/s
Oct 14 08:54:01 compute-0 nova_compute[259627]: 2025-10-14 08:54:01.743 2 DEBUG nova.network.neutron [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:54:01 compute-0 nova_compute[259627]: 2025-10-14 08:54:01.746 2 DEBUG nova.network.neutron [None req-83eff381-a19d-4fb7-a9b1-a49387258c00 767ba0b62bad4323bf9810b31b7bf2c1 83e54416b4e149f881ead0c3ff615944 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:54:01 compute-0 nova_compute[259627]: 2025-10-14 08:54:01.761 2 DEBUG oslo_concurrency.lockutils [None req-83eff381-a19d-4fb7-a9b1-a49387258c00 767ba0b62bad4323bf9810b31b7bf2c1 83e54416b4e149f881ead0c3ff615944 - - default default] Releasing lock "refresh_cache-0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:54:01 compute-0 nova_compute[259627]: 2025-10-14 08:54:01.762 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Acquired lock "refresh_cache-0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:54:01 compute-0 nova_compute[259627]: 2025-10-14 08:54:01.762 2 DEBUG nova.network.neutron [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:54:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:54:02 compute-0 ceph-mon[74249]: pgmap v1058: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 8.2 MiB/s rd, 5.4 MiB/s wr, 306 op/s
Oct 14 08:54:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:54:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:54:02 compute-0 nova_compute[259627]: 2025-10-14 08:54:02.747 2 DEBUG nova.network.neutron [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:54:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:54:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:54:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:54:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:54:02 compute-0 nova_compute[259627]: 2025-10-14 08:54:02.904 2 DEBUG nova.network.neutron [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:54:02 compute-0 nova_compute[259627]: 2025-10-14 08:54:02.922 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Releasing lock "refresh_cache-2ce8be7a-3198-4f1c-ba79-e11a24581a60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:54:02 compute-0 nova_compute[259627]: 2025-10-14 08:54:02.923 2 DEBUG nova.compute.manager [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:54:02 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct 14 08:54:02 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000001.scope: Consumed 5.779s CPU time.
Oct 14 08:54:02 compute-0 systemd-machined[214636]: Machine qemu-2-instance-00000001 terminated.
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.059 2 DEBUG nova.network.neutron [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.081 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Releasing lock "refresh_cache-0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.081 2 DEBUG nova.compute.manager [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.152 2 INFO nova.virt.libvirt.driver [-] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Instance destroyed successfully.
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.153 2 DEBUG nova.objects.instance [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lazy-loading 'resources' on Instance uuid 2ce8be7a-3198-4f1c-ba79-e11a24581a60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:54:03 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct 14 08:54:03 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 5.936s CPU time.
Oct 14 08:54:03 compute-0 systemd-machined[214636]: Machine qemu-1-instance-00000002 terminated.
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.306 2 INFO nova.virt.libvirt.driver [-] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Instance destroyed successfully.
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.306 2 DEBUG nova.objects.instance [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lazy-loading 'resources' on Instance uuid 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:54:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 4.8 MiB/s wr, 273 op/s
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.612 2 INFO nova.virt.libvirt.driver [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Deleting instance files /var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60_del
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.614 2 INFO nova.virt.libvirt.driver [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Deletion of /var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60_del complete
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.694 2 DEBUG nova.virt.libvirt.host [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.695 2 INFO nova.virt.libvirt.host [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] UEFI support detected
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.698 2 INFO nova.compute.manager [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.699 2 DEBUG oslo.service.loopingcall [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.700 2 DEBUG nova.compute.manager [-] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.700 2 DEBUG nova.network.neutron [-] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.776 2 INFO nova.virt.libvirt.driver [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Deleting instance files /var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_del
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.777 2 INFO nova.virt.libvirt.driver [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Deletion of /var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_del complete
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.834 2 INFO nova.compute.manager [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.834 2 DEBUG oslo.service.loopingcall [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.835 2 DEBUG nova.compute.manager [-] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:54:03 compute-0 nova_compute[259627]: 2025-10-14 08:54:03.835 2 DEBUG nova.network.neutron [-] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:54:04 compute-0 nova_compute[259627]: 2025-10-14 08:54:04.167 2 DEBUG nova.network.neutron [-] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:54:04 compute-0 nova_compute[259627]: 2025-10-14 08:54:04.171 2 DEBUG nova.network.neutron [-] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:54:04 compute-0 nova_compute[259627]: 2025-10-14 08:54:04.184 2 DEBUG nova.network.neutron [-] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:54:04 compute-0 nova_compute[259627]: 2025-10-14 08:54:04.186 2 DEBUG nova.network.neutron [-] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:54:04 compute-0 nova_compute[259627]: 2025-10-14 08:54:04.199 2 INFO nova.compute.manager [-] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Took 0.36 seconds to deallocate network for instance.
Oct 14 08:54:04 compute-0 nova_compute[259627]: 2025-10-14 08:54:04.205 2 INFO nova.compute.manager [-] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Took 0.50 seconds to deallocate network for instance.
Oct 14 08:54:04 compute-0 nova_compute[259627]: 2025-10-14 08:54:04.279 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:04 compute-0 nova_compute[259627]: 2025-10-14 08:54:04.279 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:04 compute-0 nova_compute[259627]: 2025-10-14 08:54:04.290 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:04 compute-0 nova_compute[259627]: 2025-10-14 08:54:04.364 2 DEBUG oslo_concurrency.processutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:04 compute-0 ceph-mon[74249]: pgmap v1059: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 4.8 MiB/s wr, 273 op/s
Oct 14 08:54:04 compute-0 podman[272466]: 2025-10-14 08:54:04.691087903 +0000 UTC m=+0.094578413 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 14 08:54:04 compute-0 podman[272465]: 2025-10-14 08:54:04.723738219 +0000 UTC m=+0.138075557 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 14 08:54:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:54:04 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2217850088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:54:04 compute-0 nova_compute[259627]: 2025-10-14 08:54:04.899 2 DEBUG oslo_concurrency.processutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:04 compute-0 nova_compute[259627]: 2025-10-14 08:54:04.909 2 DEBUG nova.compute.provider_tree [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 08:54:04 compute-0 nova_compute[259627]: 2025-10-14 08:54:04.952 2 ERROR nova.scheduler.client.report [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [req-b3b1c8bb-c8eb-443e-9601-9858c5acfd1b] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 92105e1d-1743-46e3-a494-858b4331398a.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-b3b1c8bb-c8eb-443e-9601-9858c5acfd1b"}]}
Oct 14 08:54:04 compute-0 nova_compute[259627]: 2025-10-14 08:54:04.972 2 DEBUG nova.scheduler.client.report [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 14 08:54:04 compute-0 nova_compute[259627]: 2025-10-14 08:54:04.992 2 DEBUG nova.scheduler.client.report [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 14 08:54:04 compute-0 nova_compute[259627]: 2025-10-14 08:54:04.992 2 DEBUG nova.compute.provider_tree [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 08:54:05 compute-0 nova_compute[259627]: 2025-10-14 08:54:05.008 2 DEBUG nova.scheduler.client.report [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 14 08:54:05 compute-0 nova_compute[259627]: 2025-10-14 08:54:05.031 2 DEBUG nova.scheduler.client.report [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 14 08:54:05 compute-0 nova_compute[259627]: 2025-10-14 08:54:05.098 2 DEBUG oslo_concurrency.processutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 33 KiB/s wr, 247 op/s
Oct 14 08:54:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2217850088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:54:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 08:54:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3324356703' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 08:54:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 08:54:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3324356703' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 08:54:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:54:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1001144178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:54:05 compute-0 nova_compute[259627]: 2025-10-14 08:54:05.613 2 DEBUG oslo_concurrency.processutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:05 compute-0 nova_compute[259627]: 2025-10-14 08:54:05.621 2 DEBUG nova.compute.provider_tree [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 08:54:05 compute-0 nova_compute[259627]: 2025-10-14 08:54:05.670 2 DEBUG nova.scheduler.client.report [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Updated inventory for provider 92105e1d-1743-46e3-a494-858b4331398a with generation 8 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 14 08:54:05 compute-0 nova_compute[259627]: 2025-10-14 08:54:05.671 2 DEBUG nova.compute.provider_tree [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Updating resource provider 92105e1d-1743-46e3-a494-858b4331398a generation from 8 to 9 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 14 08:54:05 compute-0 nova_compute[259627]: 2025-10-14 08:54:05.671 2 DEBUG nova.compute.provider_tree [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 08:54:05 compute-0 nova_compute[259627]: 2025-10-14 08:54:05.706 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:05 compute-0 nova_compute[259627]: 2025-10-14 08:54:05.709 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:05 compute-0 nova_compute[259627]: 2025-10-14 08:54:05.748 2 INFO nova.scheduler.client.report [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Deleted allocations for instance 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3
Oct 14 08:54:05 compute-0 nova_compute[259627]: 2025-10-14 08:54:05.777 2 DEBUG oslo_concurrency.processutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:05 compute-0 nova_compute[259627]: 2025-10-14 08:54:05.840 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:54:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2104233409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:54:06 compute-0 nova_compute[259627]: 2025-10-14 08:54:06.273 2 DEBUG oslo_concurrency.processutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:06 compute-0 nova_compute[259627]: 2025-10-14 08:54:06.282 2 DEBUG nova.compute.provider_tree [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:54:06 compute-0 nova_compute[259627]: 2025-10-14 08:54:06.298 2 DEBUG nova.scheduler.client.report [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:54:06 compute-0 nova_compute[259627]: 2025-10-14 08:54:06.334 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:06 compute-0 nova_compute[259627]: 2025-10-14 08:54:06.355 2 INFO nova.scheduler.client.report [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Deleted allocations for instance 2ce8be7a-3198-4f1c-ba79-e11a24581a60
Oct 14 08:54:06 compute-0 nova_compute[259627]: 2025-10-14 08:54:06.437 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "2ce8be7a-3198-4f1c-ba79-e11a24581a60" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:06 compute-0 ceph-mon[74249]: pgmap v1060: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 33 KiB/s wr, 247 op/s
Oct 14 08:54:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3324356703' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 08:54:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3324356703' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 08:54:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1001144178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:54:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2104233409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:54:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:07.012 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:07.013 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:07.013 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:54:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 33 KiB/s wr, 247 op/s
Oct 14 08:54:08 compute-0 ceph-mon[74249]: pgmap v1061: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 33 KiB/s wr, 247 op/s
Oct 14 08:54:08 compute-0 nova_compute[259627]: 2025-10-14 08:54:08.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:54:08 compute-0 nova_compute[259627]: 2025-10-14 08:54:08.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 08:54:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 27 KiB/s wr, 205 op/s
Oct 14 08:54:09 compute-0 nova_compute[259627]: 2025-10-14 08:54:09.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:54:10 compute-0 ceph-mon[74249]: pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 27 KiB/s wr, 205 op/s
Oct 14 08:54:10 compute-0 nova_compute[259627]: 2025-10-14 08:54:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:54:10 compute-0 nova_compute[259627]: 2025-10-14 08:54:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:54:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 27 KiB/s wr, 205 op/s
Oct 14 08:54:11 compute-0 nova_compute[259627]: 2025-10-14 08:54:11.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:54:11 compute-0 nova_compute[259627]: 2025-10-14 08:54:11.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:54:11 compute-0 nova_compute[259627]: 2025-10-14 08:54:11.998 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:11 compute-0 nova_compute[259627]: 2025-10-14 08:54:11.999 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:11 compute-0 nova_compute[259627]: 2025-10-14 08:54:11.999 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:11 compute-0 nova_compute[259627]: 2025-10-14 08:54:11.999 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 08:54:12 compute-0 nova_compute[259627]: 2025-10-14 08:54:12.000 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:54:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/149918691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:54:12 compute-0 nova_compute[259627]: 2025-10-14 08:54:12.424 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:54:12 compute-0 ceph-mon[74249]: pgmap v1063: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 27 KiB/s wr, 205 op/s
Oct 14 08:54:12 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/149918691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:54:12 compute-0 nova_compute[259627]: 2025-10-14 08:54:12.580 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:54:12 compute-0 nova_compute[259627]: 2025-10-14 08:54:12.582 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5049MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 08:54:12 compute-0 nova_compute[259627]: 2025-10-14 08:54:12.582 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:12 compute-0 nova_compute[259627]: 2025-10-14 08:54:12.582 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:12 compute-0 nova_compute[259627]: 2025-10-14 08:54:12.656 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 08:54:12 compute-0 nova_compute[259627]: 2025-10-14 08:54:12.657 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 08:54:12 compute-0 nova_compute[259627]: 2025-10-14 08:54:12.680 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:54:13 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1369913060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:54:13 compute-0 nova_compute[259627]: 2025-10-14 08:54:13.120 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:13 compute-0 nova_compute[259627]: 2025-10-14 08:54:13.128 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:54:13 compute-0 nova_compute[259627]: 2025-10-14 08:54:13.148 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:54:13 compute-0 nova_compute[259627]: 2025-10-14 08:54:13.183 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 08:54:13 compute-0 nova_compute[259627]: 2025-10-14 08:54:13.184 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 209 KiB/s rd, 2.3 KiB/s wr, 58 op/s
Oct 14 08:54:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1369913060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:54:14 compute-0 ceph-mon[74249]: pgmap v1064: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 209 KiB/s rd, 2.3 KiB/s wr, 58 op/s
Oct 14 08:54:15 compute-0 nova_compute[259627]: 2025-10-14 08:54:15.184 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:54:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 209 KiB/s rd, 2.3 KiB/s wr, 58 op/s
Oct 14 08:54:16 compute-0 ceph-mon[74249]: pgmap v1065: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 209 KiB/s rd, 2.3 KiB/s wr, 58 op/s
Oct 14 08:54:16 compute-0 nova_compute[259627]: 2025-10-14 08:54:16.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:54:16 compute-0 nova_compute[259627]: 2025-10-14 08:54:16.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 08:54:16 compute-0 nova_compute[259627]: 2025-10-14 08:54:16.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 08:54:16 compute-0 nova_compute[259627]: 2025-10-14 08:54:16.992 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 08:54:16 compute-0 nova_compute[259627]: 2025-10-14 08:54:16.992 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:54:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:54:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:18 compute-0 nova_compute[259627]: 2025-10-14 08:54:18.151 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432043.1496763, 2ce8be7a-3198-4f1c-ba79-e11a24581a60 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:54:18 compute-0 nova_compute[259627]: 2025-10-14 08:54:18.152 2 INFO nova.compute.manager [-] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] VM Stopped (Lifecycle Event)
Oct 14 08:54:18 compute-0 nova_compute[259627]: 2025-10-14 08:54:18.184 2 DEBUG nova.compute.manager [None req-826ba085-5d16-4aba-b751-d5c8d841ead3 - - - - - -] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:54:18 compute-0 nova_compute[259627]: 2025-10-14 08:54:18.301 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432043.3011541, 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:54:18 compute-0 nova_compute[259627]: 2025-10-14 08:54:18.302 2 INFO nova.compute.manager [-] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] VM Stopped (Lifecycle Event)
Oct 14 08:54:18 compute-0 nova_compute[259627]: 2025-10-14 08:54:18.327 2 DEBUG nova.compute.manager [None req-bb3d20c3-0c23-436d-a3e4-a85c05662c7a - - - - - -] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:54:18 compute-0 ceph-mon[74249]: pgmap v1066: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:20 compute-0 ceph-mon[74249]: pgmap v1067: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:54:22 compute-0 ceph-mon[74249]: pgmap v1068: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:23 compute-0 podman[272602]: 2025-10-14 08:54:23.686307573 +0000 UTC m=+0.085449079 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, managed_by=edpm_ansible)
Oct 14 08:54:23 compute-0 podman[272601]: 2025-10-14 08:54:23.724119316 +0000 UTC m=+0.129789323 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 14 08:54:24 compute-0 ceph-mon[74249]: pgmap v1069: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:24 compute-0 sudo[272639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:54:24 compute-0 sudo[272639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:24 compute-0 sudo[272639]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:24 compute-0 sudo[272664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:54:24 compute-0 sudo[272664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:24 compute-0 sudo[272664]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:24 compute-0 sudo[272689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:54:24 compute-0 sudo[272689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:24 compute-0 sudo[272689]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:25 compute-0 sudo[272714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 14 08:54:25 compute-0 sudo[272714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:25 compute-0 sudo[272714]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 08:54:25 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:54:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 08:54:25 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:54:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:25 compute-0 sudo[272759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:54:25 compute-0 sudo[272759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:25 compute-0 sudo[272759]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:25 compute-0 sudo[272784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:54:25 compute-0 sudo[272784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:25 compute-0 sudo[272784]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:25 compute-0 sudo[272809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:54:25 compute-0 sudo[272809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:25 compute-0 sudo[272809]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:25 compute-0 sudo[272834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 08:54:25 compute-0 sudo[272834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:26 compute-0 sudo[272834]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:26 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:54:26 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:54:26 compute-0 sudo[272890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:54:26 compute-0 sudo[272890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:26 compute-0 sudo[272890]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:26 compute-0 sudo[272915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:54:26 compute-0 sudo[272915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:26 compute-0 sudo[272915]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:26 compute-0 sudo[272940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:54:26 compute-0 sudo[272940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:26 compute-0 sudo[272940]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:26 compute-0 sudo[272965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- inventory --format=json-pretty --filter-for-batch
Oct 14 08:54:26 compute-0 sudo[272965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:27 compute-0 podman[273030]: 2025-10-14 08:54:27.019870572 +0000 UTC m=+0.049609195 container create 3454f502f849506dffd42c3c871e58fafffdaa9a96ae457ea4ad070ffa81390e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 08:54:27 compute-0 systemd[1]: Started libpod-conmon-3454f502f849506dffd42c3c871e58fafffdaa9a96ae457ea4ad070ffa81390e.scope.
Oct 14 08:54:27 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:54:27 compute-0 podman[273030]: 2025-10-14 08:54:26.994678281 +0000 UTC m=+0.024416984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:54:27 compute-0 podman[273030]: 2025-10-14 08:54:27.099994608 +0000 UTC m=+0.129733241 container init 3454f502f849506dffd42c3c871e58fafffdaa9a96ae457ea4ad070ffa81390e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ptolemy, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 08:54:27 compute-0 podman[273030]: 2025-10-14 08:54:27.106688833 +0000 UTC m=+0.136427476 container start 3454f502f849506dffd42c3c871e58fafffdaa9a96ae457ea4ad070ffa81390e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 08:54:27 compute-0 podman[273030]: 2025-10-14 08:54:27.110519248 +0000 UTC m=+0.140257881 container attach 3454f502f849506dffd42c3c871e58fafffdaa9a96ae457ea4ad070ffa81390e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ptolemy, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:54:27 compute-0 frosty_ptolemy[273046]: 167 167
Oct 14 08:54:27 compute-0 systemd[1]: libpod-3454f502f849506dffd42c3c871e58fafffdaa9a96ae457ea4ad070ffa81390e.scope: Deactivated successfully.
Oct 14 08:54:27 compute-0 podman[273030]: 2025-10-14 08:54:27.112792234 +0000 UTC m=+0.142530877 container died 3454f502f849506dffd42c3c871e58fafffdaa9a96ae457ea4ad070ffa81390e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 08:54:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f3bc74af7b6a9b005f5fab2465204fa9bd9ffc60599360f32a29ee68606060b-merged.mount: Deactivated successfully.
Oct 14 08:54:27 compute-0 podman[273030]: 2025-10-14 08:54:27.158797549 +0000 UTC m=+0.188536162 container remove 3454f502f849506dffd42c3c871e58fafffdaa9a96ae457ea4ad070ffa81390e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:54:27 compute-0 systemd[1]: libpod-conmon-3454f502f849506dffd42c3c871e58fafffdaa9a96ae457ea4ad070ffa81390e.scope: Deactivated successfully.
Oct 14 08:54:27 compute-0 podman[273070]: 2025-10-14 08:54:27.30927855 +0000 UTC m=+0.045068102 container create eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 08:54:27 compute-0 systemd[1]: Started libpod-conmon-eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b.scope.
Oct 14 08:54:27 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c36a433cc282e90e55e772c6f60116a67ebac7f44367d7875e6a153e418924f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c36a433cc282e90e55e772c6f60116a67ebac7f44367d7875e6a153e418924f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c36a433cc282e90e55e772c6f60116a67ebac7f44367d7875e6a153e418924f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c36a433cc282e90e55e772c6f60116a67ebac7f44367d7875e6a153e418924f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:27 compute-0 podman[273070]: 2025-10-14 08:54:27.286743514 +0000 UTC m=+0.022533116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:54:27 compute-0 podman[273070]: 2025-10-14 08:54:27.405149315 +0000 UTC m=+0.140938967 container init eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 08:54:27 compute-0 ceph-mon[74249]: pgmap v1070: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:27 compute-0 podman[273070]: 2025-10-14 08:54:27.416206757 +0000 UTC m=+0.151996349 container start eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 08:54:27 compute-0 podman[273070]: 2025-10-14 08:54:27.41997906 +0000 UTC m=+0.155768612 container attach eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:54:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:54:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:28 compute-0 cool_engelbart[273087]: [
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:     {
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:         "available": false,
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:         "ceph_device": false,
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:         "lsm_data": {},
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:         "lvs": [],
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:         "path": "/dev/sr0",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:         "rejected_reasons": [
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "Has a FileSystem",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "Insufficient space (<5GB)"
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:         ],
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:         "sys_api": {
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "actuators": null,
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "device_nodes": "sr0",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "devname": "sr0",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "human_readable_size": "482.00 KB",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "id_bus": "ata",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "model": "QEMU DVD-ROM",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "nr_requests": "2",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "parent": "/dev/sr0",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "partitions": {},
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "path": "/dev/sr0",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "removable": "1",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "rev": "2.5+",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "ro": "0",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "rotational": "0",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "sas_address": "",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "sas_device_handle": "",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "scheduler_mode": "mq-deadline",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "sectors": 0,
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "sectorsize": "2048",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "size": 493568.0,
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "support_discard": "2048",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "type": "disk",
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:             "vendor": "QEMU"
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:         }
Oct 14 08:54:28 compute-0 cool_engelbart[273087]:     }
Oct 14 08:54:28 compute-0 cool_engelbart[273087]: ]
Oct 14 08:54:28 compute-0 systemd[1]: libpod-eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b.scope: Deactivated successfully.
Oct 14 08:54:28 compute-0 systemd[1]: libpod-eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b.scope: Consumed 1.533s CPU time.
Oct 14 08:54:28 compute-0 podman[273070]: 2025-10-14 08:54:28.891145175 +0000 UTC m=+1.626934727 container died eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 08:54:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c36a433cc282e90e55e772c6f60116a67ebac7f44367d7875e6a153e418924f-merged.mount: Deactivated successfully.
Oct 14 08:54:28 compute-0 podman[273070]: 2025-10-14 08:54:28.955440581 +0000 UTC m=+1.691230123 container remove eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:54:28 compute-0 systemd[1]: libpod-conmon-eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b.scope: Deactivated successfully.
Oct 14 08:54:28 compute-0 sudo[272965]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 08:54:29 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:54:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 08:54:29 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:54:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 14 08:54:29 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 08:54:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 08:54:29 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:54:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 08:54:29 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 08:54:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 08:54:29 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:54:29 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 2a96176a-3a0f-4d4c-ae60-09f9183e6f69 does not exist
Oct 14 08:54:29 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 456e7942-f6da-4403-a642-671685e26a85 does not exist
Oct 14 08:54:29 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev acd3702a-0fb0-4347-b483-43cd66b6f2d8 does not exist
Oct 14 08:54:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 08:54:29 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 08:54:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 08:54:29 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 08:54:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 08:54:29 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:54:29 compute-0 sudo[275109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:54:29 compute-0 sudo[275109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:29 compute-0 sudo[275109]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:29 compute-0 sudo[275134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:54:29 compute-0 sudo[275134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:29 compute-0 sudo[275134]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:29 compute-0 sudo[275159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:54:29 compute-0 sudo[275159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:29 compute-0 sudo[275159]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:29 compute-0 sudo[275184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 08:54:29 compute-0 sudo[275184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:29 compute-0 ceph-mon[74249]: pgmap v1071: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:29 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:54:29 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:54:29 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 08:54:29 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:54:29 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 08:54:29 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:54:29 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 08:54:29 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 08:54:29 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:54:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:29 compute-0 podman[275250]: 2025-10-14 08:54:29.660616854 +0000 UTC m=+0.062448112 container create a75b447f8b18d6d1c101c52a23095a32038e9c177472744fa2da76c04d294604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hugle, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 08:54:29 compute-0 systemd[1]: Started libpod-conmon-a75b447f8b18d6d1c101c52a23095a32038e9c177472744fa2da76c04d294604.scope.
Oct 14 08:54:29 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:54:29 compute-0 podman[275250]: 2025-10-14 08:54:29.638131859 +0000 UTC m=+0.039963117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:54:29 compute-0 podman[275250]: 2025-10-14 08:54:29.747874336 +0000 UTC m=+0.149705624 container init a75b447f8b18d6d1c101c52a23095a32038e9c177472744fa2da76c04d294604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 08:54:29 compute-0 podman[275250]: 2025-10-14 08:54:29.760823835 +0000 UTC m=+0.162655083 container start a75b447f8b18d6d1c101c52a23095a32038e9c177472744fa2da76c04d294604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:54:29 compute-0 podman[275250]: 2025-10-14 08:54:29.76547566 +0000 UTC m=+0.167306918 container attach a75b447f8b18d6d1c101c52a23095a32038e9c177472744fa2da76c04d294604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hugle, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:54:29 compute-0 loving_hugle[275266]: 167 167
Oct 14 08:54:29 compute-0 systemd[1]: libpod-a75b447f8b18d6d1c101c52a23095a32038e9c177472744fa2da76c04d294604.scope: Deactivated successfully.
Oct 14 08:54:29 compute-0 podman[275250]: 2025-10-14 08:54:29.768618037 +0000 UTC m=+0.170449295 container died a75b447f8b18d6d1c101c52a23095a32038e9c177472744fa2da76c04d294604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:54:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fcf1fb91b8081beaa4a339846f1ed6de75a2f25ffc3782d4164984bce0ef3a4-merged.mount: Deactivated successfully.
Oct 14 08:54:29 compute-0 podman[275250]: 2025-10-14 08:54:29.822940367 +0000 UTC m=+0.224771615 container remove a75b447f8b18d6d1c101c52a23095a32038e9c177472744fa2da76c04d294604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 08:54:29 compute-0 systemd[1]: libpod-conmon-a75b447f8b18d6d1c101c52a23095a32038e9c177472744fa2da76c04d294604.scope: Deactivated successfully.
Oct 14 08:54:30 compute-0 podman[275290]: 2025-10-14 08:54:30.051296949 +0000 UTC m=+0.042874798 container create 54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_herschel, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 08:54:30 compute-0 systemd[1]: Started libpod-conmon-54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87.scope.
Oct 14 08:54:30 compute-0 podman[275290]: 2025-10-14 08:54:30.030473876 +0000 UTC m=+0.022051715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:54:30 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a662eb70535f52a04fba1ef8e0f11b0e012cabbe4e11a7d4ad6030dc1603a5b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a662eb70535f52a04fba1ef8e0f11b0e012cabbe4e11a7d4ad6030dc1603a5b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a662eb70535f52a04fba1ef8e0f11b0e012cabbe4e11a7d4ad6030dc1603a5b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a662eb70535f52a04fba1ef8e0f11b0e012cabbe4e11a7d4ad6030dc1603a5b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a662eb70535f52a04fba1ef8e0f11b0e012cabbe4e11a7d4ad6030dc1603a5b1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:30 compute-0 podman[275290]: 2025-10-14 08:54:30.152808673 +0000 UTC m=+0.144386492 container init 54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_herschel, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 08:54:30 compute-0 podman[275290]: 2025-10-14 08:54:30.1591701 +0000 UTC m=+0.150747919 container start 54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:54:30 compute-0 podman[275290]: 2025-10-14 08:54:30.163409045 +0000 UTC m=+0.154986864 container attach 54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 08:54:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:31.119 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:54:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:31.122 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 08:54:31 compute-0 sharp_herschel[275306]: --> passed data devices: 0 physical, 3 LVM
Oct 14 08:54:31 compute-0 sharp_herschel[275306]: --> relative data size: 1.0
Oct 14 08:54:31 compute-0 sharp_herschel[275306]: --> All data devices are unavailable
Oct 14 08:54:31 compute-0 systemd[1]: libpod-54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87.scope: Deactivated successfully.
Oct 14 08:54:31 compute-0 systemd[1]: libpod-54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87.scope: Consumed 1.146s CPU time.
Oct 14 08:54:31 compute-0 podman[275290]: 2025-10-14 08:54:31.346173937 +0000 UTC m=+1.337751806 container died 54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 08:54:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a662eb70535f52a04fba1ef8e0f11b0e012cabbe4e11a7d4ad6030dc1603a5b1-merged.mount: Deactivated successfully.
Oct 14 08:54:31 compute-0 podman[275290]: 2025-10-14 08:54:31.428304121 +0000 UTC m=+1.419881960 container remove 54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_herschel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 08:54:31 compute-0 ceph-mon[74249]: pgmap v1072: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:31 compute-0 systemd[1]: libpod-conmon-54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87.scope: Deactivated successfully.
Oct 14 08:54:31 compute-0 sudo[275184]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:31 compute-0 sudo[275349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:54:31 compute-0 sudo[275349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:31 compute-0 sudo[275349]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:31 compute-0 sudo[275374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:54:31 compute-0 sudo[275374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:31 compute-0 sudo[275374]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:31 compute-0 sudo[275399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:54:31 compute-0 sudo[275399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:31 compute-0 sudo[275399]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:31 compute-0 sudo[275424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 08:54:31 compute-0 sudo[275424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:32 compute-0 podman[275490]: 2025-10-14 08:54:32.280409528 +0000 UTC m=+0.057238493 container create 6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 14 08:54:32 compute-0 systemd[1]: Started libpod-conmon-6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901.scope.
Oct 14 08:54:32 compute-0 podman[275490]: 2025-10-14 08:54:32.252453139 +0000 UTC m=+0.029282204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:54:32 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:54:32 compute-0 podman[275490]: 2025-10-14 08:54:32.373590306 +0000 UTC m=+0.150419361 container init 6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rhodes, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 08:54:32 compute-0 podman[275490]: 2025-10-14 08:54:32.382216119 +0000 UTC m=+0.159045084 container start 6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 08:54:32 compute-0 youthful_rhodes[275506]: 167 167
Oct 14 08:54:32 compute-0 systemd[1]: libpod-6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901.scope: Deactivated successfully.
Oct 14 08:54:32 compute-0 podman[275490]: 2025-10-14 08:54:32.386822353 +0000 UTC m=+0.163651358 container attach 6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rhodes, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 08:54:32 compute-0 conmon[275506]: conmon 6de3d57a1597297df8bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901.scope/container/memory.events
Oct 14 08:54:32 compute-0 podman[275490]: 2025-10-14 08:54:32.387714525 +0000 UTC m=+0.164543500 container died 6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rhodes, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 08:54:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-586100e25cef6ae844784efd284ac3f9b7a3d0d9dbc58a38312ccb350b58b6f7-merged.mount: Deactivated successfully.
Oct 14 08:54:32 compute-0 podman[275490]: 2025-10-14 08:54:32.427529397 +0000 UTC m=+0.204358382 container remove 6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:54:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:54:32 compute-0 systemd[1]: libpod-conmon-6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901.scope: Deactivated successfully.
Oct 14 08:54:32 compute-0 podman[275530]: 2025-10-14 08:54:32.650446195 +0000 UTC m=+0.046506228 container create d28cbc6d46f31618dec9057a166f26d98b381f698e2a17c6ab84e70b5c49ed61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lovelace, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 08:54:32 compute-0 systemd[1]: Started libpod-conmon-d28cbc6d46f31618dec9057a166f26d98b381f698e2a17c6ab84e70b5c49ed61.scope.
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:54:32
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'images', 'default.rgw.log', '.mgr', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'backups']
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 08:54:32 compute-0 podman[275530]: 2025-10-14 08:54:32.628550655 +0000 UTC m=+0.024610678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:54:32 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:54:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656c6cca969e86ccb1017915e3263d30c605f2817e29dba9fc149e6d7dbd94a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656c6cca969e86ccb1017915e3263d30c605f2817e29dba9fc149e6d7dbd94a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:54:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656c6cca969e86ccb1017915e3263d30c605f2817e29dba9fc149e6d7dbd94a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656c6cca969e86ccb1017915e3263d30c605f2817e29dba9fc149e6d7dbd94a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:32 compute-0 podman[275530]: 2025-10-14 08:54:32.773867659 +0000 UTC m=+0.169927742 container init d28cbc6d46f31618dec9057a166f26d98b381f698e2a17c6ab84e70b5c49ed61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:54:32 compute-0 podman[275530]: 2025-10-14 08:54:32.78527744 +0000 UTC m=+0.181337483 container start d28cbc6d46f31618dec9057a166f26d98b381f698e2a17c6ab84e70b5c49ed61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 08:54:32 compute-0 podman[275530]: 2025-10-14 08:54:32.790550051 +0000 UTC m=+0.186610094 container attach d28cbc6d46f31618dec9057a166f26d98b381f698e2a17c6ab84e70b5c49ed61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 08:54:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 08:54:33 compute-0 ceph-mon[74249]: pgmap v1073: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]: {
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:     "0": [
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:         {
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "devices": [
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "/dev/loop3"
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             ],
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "lv_name": "ceph_lv0",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "lv_size": "21470642176",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "name": "ceph_lv0",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "tags": {
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.cluster_name": "ceph",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.crush_device_class": "",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.encrypted": "0",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.osd_id": "0",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.type": "block",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.vdo": "0"
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             },
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "type": "block",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "vg_name": "ceph_vg0"
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:         }
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:     ],
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:     "1": [
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:         {
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "devices": [
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "/dev/loop4"
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             ],
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "lv_name": "ceph_lv1",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "lv_size": "21470642176",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "name": "ceph_lv1",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "tags": {
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.cluster_name": "ceph",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.crush_device_class": "",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.encrypted": "0",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.osd_id": "1",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.type": "block",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.vdo": "0"
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             },
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "type": "block",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "vg_name": "ceph_vg1"
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:         }
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:     ],
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:     "2": [
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:         {
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "devices": [
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "/dev/loop5"
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             ],
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "lv_name": "ceph_lv2",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "lv_size": "21470642176",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "name": "ceph_lv2",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "tags": {
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.cluster_name": "ceph",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.crush_device_class": "",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.encrypted": "0",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.osd_id": "2",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.type": "block",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:                 "ceph.vdo": "0"
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             },
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "type": "block",
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:             "vg_name": "ceph_vg2"
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:         }
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]:     ]
Oct 14 08:54:33 compute-0 quirky_lovelace[275547]: }
Oct 14 08:54:33 compute-0 systemd[1]: libpod-d28cbc6d46f31618dec9057a166f26d98b381f698e2a17c6ab84e70b5c49ed61.scope: Deactivated successfully.
Oct 14 08:54:33 compute-0 podman[275530]: 2025-10-14 08:54:33.566867978 +0000 UTC m=+0.962928171 container died d28cbc6d46f31618dec9057a166f26d98b381f698e2a17c6ab84e70b5c49ed61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lovelace, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:54:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-656c6cca969e86ccb1017915e3263d30c605f2817e29dba9fc149e6d7dbd94a0-merged.mount: Deactivated successfully.
Oct 14 08:54:33 compute-0 podman[275530]: 2025-10-14 08:54:33.624350596 +0000 UTC m=+1.020410599 container remove d28cbc6d46f31618dec9057a166f26d98b381f698e2a17c6ab84e70b5c49ed61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lovelace, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 08:54:33 compute-0 systemd[1]: libpod-conmon-d28cbc6d46f31618dec9057a166f26d98b381f698e2a17c6ab84e70b5c49ed61.scope: Deactivated successfully.
Oct 14 08:54:33 compute-0 sudo[275424]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:33 compute-0 sudo[275568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:54:33 compute-0 sudo[275568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:33 compute-0 sudo[275568]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:33 compute-0 sudo[275593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:54:33 compute-0 sudo[275593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:33 compute-0 sudo[275593]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:33 compute-0 sudo[275618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:54:33 compute-0 sudo[275618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:33 compute-0 sudo[275618]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:33 compute-0 sudo[275643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 08:54:33 compute-0 sudo[275643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:34 compute-0 podman[275709]: 2025-10-14 08:54:34.223075423 +0000 UTC m=+0.047201425 container create d140b8986c603f264678c1ca55d327b49c03af9163d35081aec7ff138514bbb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_grothendieck, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:54:34 compute-0 systemd[1]: Started libpod-conmon-d140b8986c603f264678c1ca55d327b49c03af9163d35081aec7ff138514bbb2.scope.
Oct 14 08:54:34 compute-0 podman[275709]: 2025-10-14 08:54:34.199237115 +0000 UTC m=+0.023363177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:54:34 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:54:34 compute-0 podman[275709]: 2025-10-14 08:54:34.306054339 +0000 UTC m=+0.130180371 container init d140b8986c603f264678c1ca55d327b49c03af9163d35081aec7ff138514bbb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_grothendieck, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 08:54:34 compute-0 podman[275709]: 2025-10-14 08:54:34.31133484 +0000 UTC m=+0.135460852 container start d140b8986c603f264678c1ca55d327b49c03af9163d35081aec7ff138514bbb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 08:54:34 compute-0 podman[275709]: 2025-10-14 08:54:34.314495148 +0000 UTC m=+0.138621210 container attach d140b8986c603f264678c1ca55d327b49c03af9163d35081aec7ff138514bbb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 08:54:34 compute-0 infallible_grothendieck[275725]: 167 167
Oct 14 08:54:34 compute-0 systemd[1]: libpod-d140b8986c603f264678c1ca55d327b49c03af9163d35081aec7ff138514bbb2.scope: Deactivated successfully.
Oct 14 08:54:34 compute-0 podman[275709]: 2025-10-14 08:54:34.315718348 +0000 UTC m=+0.139844360 container died d140b8986c603f264678c1ca55d327b49c03af9163d35081aec7ff138514bbb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 08:54:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa51882d786509b7bf76ac687330626d800da1b67b4734597f3e53a64fd2de15-merged.mount: Deactivated successfully.
Oct 14 08:54:34 compute-0 podman[275709]: 2025-10-14 08:54:34.353044378 +0000 UTC m=+0.177170390 container remove d140b8986c603f264678c1ca55d327b49c03af9163d35081aec7ff138514bbb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_grothendieck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:54:34 compute-0 systemd[1]: libpod-conmon-d140b8986c603f264678c1ca55d327b49c03af9163d35081aec7ff138514bbb2.scope: Deactivated successfully.
Oct 14 08:54:34 compute-0 ceph-mon[74249]: pgmap v1074: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:34 compute-0 podman[275748]: 2025-10-14 08:54:34.523491302 +0000 UTC m=+0.047263936 container create ae71bb5eaa1bd564beeaac9bae410c1abd8ca7b6de374d2229fdb9903c3d7811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_knuth, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 08:54:34 compute-0 systemd[1]: Started libpod-conmon-ae71bb5eaa1bd564beeaac9bae410c1abd8ca7b6de374d2229fdb9903c3d7811.scope.
Oct 14 08:54:34 compute-0 podman[275748]: 2025-10-14 08:54:34.500310401 +0000 UTC m=+0.024083055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:54:34 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf0a1ea437792fcd3ae19594431cc7ce9bcd240c9c09519e7f29b7307cb1d76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf0a1ea437792fcd3ae19594431cc7ce9bcd240c9c09519e7f29b7307cb1d76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf0a1ea437792fcd3ae19594431cc7ce9bcd240c9c09519e7f29b7307cb1d76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf0a1ea437792fcd3ae19594431cc7ce9bcd240c9c09519e7f29b7307cb1d76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:34 compute-0 podman[275748]: 2025-10-14 08:54:34.614790054 +0000 UTC m=+0.138562738 container init ae71bb5eaa1bd564beeaac9bae410c1abd8ca7b6de374d2229fdb9903c3d7811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_knuth, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 08:54:34 compute-0 podman[275748]: 2025-10-14 08:54:34.622075574 +0000 UTC m=+0.145848168 container start ae71bb5eaa1bd564beeaac9bae410c1abd8ca7b6de374d2229fdb9903c3d7811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 08:54:34 compute-0 podman[275748]: 2025-10-14 08:54:34.624990656 +0000 UTC m=+0.148763330 container attach ae71bb5eaa1bd564beeaac9bae410c1abd8ca7b6de374d2229fdb9903c3d7811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_knuth, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:54:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]: {
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:         "osd_id": 2,
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:         "type": "bluestore"
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:     },
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:         "osd_id": 1,
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:         "type": "bluestore"
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:     },
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:         "osd_id": 0,
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:         "type": "bluestore"
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]:     }
Oct 14 08:54:35 compute-0 suspicious_knuth[275764]: }
Oct 14 08:54:35 compute-0 podman[275748]: 2025-10-14 08:54:35.573206622 +0000 UTC m=+1.096979266 container died ae71bb5eaa1bd564beeaac9bae410c1abd8ca7b6de374d2229fdb9903c3d7811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 08:54:35 compute-0 systemd[1]: libpod-ae71bb5eaa1bd564beeaac9bae410c1abd8ca7b6de374d2229fdb9903c3d7811.scope: Deactivated successfully.
Oct 14 08:54:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cf0a1ea437792fcd3ae19594431cc7ce9bcd240c9c09519e7f29b7307cb1d76-merged.mount: Deactivated successfully.
Oct 14 08:54:35 compute-0 podman[275748]: 2025-10-14 08:54:35.651899933 +0000 UTC m=+1.175672527 container remove ae71bb5eaa1bd564beeaac9bae410c1abd8ca7b6de374d2229fdb9903c3d7811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 08:54:35 compute-0 podman[275798]: 2025-10-14 08:54:35.664003101 +0000 UTC m=+0.076593230 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 14 08:54:35 compute-0 systemd[1]: libpod-conmon-ae71bb5eaa1bd564beeaac9bae410c1abd8ca7b6de374d2229fdb9903c3d7811.scope: Deactivated successfully.
Oct 14 08:54:35 compute-0 podman[275797]: 2025-10-14 08:54:35.685455481 +0000 UTC m=+0.096329927 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct 14 08:54:35 compute-0 sudo[275643]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:35 compute-0 nova_compute[259627]: 2025-10-14 08:54:35.702 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:35 compute-0 nova_compute[259627]: 2025-10-14 08:54:35.702 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 08:54:35 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:54:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 08:54:35 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:54:35 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 8f92683b-70fd-45ec-9e51-eff03cede30e does not exist
Oct 14 08:54:35 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev f38e466b-feb9-4ea1-8971-780599c6a063 does not exist
Oct 14 08:54:35 compute-0 nova_compute[259627]: 2025-10-14 08:54:35.728 2 DEBUG nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:54:35 compute-0 sudo[275853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:54:35 compute-0 sudo[275853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:35 compute-0 sudo[275853]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:35 compute-0 nova_compute[259627]: 2025-10-14 08:54:35.813 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:35 compute-0 nova_compute[259627]: 2025-10-14 08:54:35.814 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:35 compute-0 nova_compute[259627]: 2025-10-14 08:54:35.821 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:54:35 compute-0 nova_compute[259627]: 2025-10-14 08:54:35.822 2 INFO nova.compute.claims [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:54:35 compute-0 sudo[275878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 08:54:35 compute-0 sudo[275878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:54:35 compute-0 sudo[275878]: pam_unix(sudo:session): session closed for user root
Oct 14 08:54:35 compute-0 nova_compute[259627]: 2025-10-14 08:54:35.937 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:54:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3404368946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.355 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.361 2 DEBUG nova.compute.provider_tree [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.377 2 DEBUG nova.scheduler.client.report [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.405 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.406 2 DEBUG nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.459 2 DEBUG nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.460 2 DEBUG nova.network.neutron [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.481 2 INFO nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.498 2 DEBUG nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.581 2 DEBUG nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.583 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.583 2 INFO nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Creating image(s)
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.601 2 DEBUG nova.storage.rbd_utils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 30af67a2-4b44-481c-8ab4-296e93c1c517_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.622 2 DEBUG nova.storage.rbd_utils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 30af67a2-4b44-481c-8ab4-296e93c1c517_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.641 2 DEBUG nova.storage.rbd_utils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 30af67a2-4b44-481c-8ab4-296e93c1c517_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.644 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:36 compute-0 ceph-mon[74249]: pgmap v1075: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:54:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:54:36 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3404368946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.713 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.714 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.715 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.717 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.736 2 DEBUG nova.storage.rbd_utils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 30af67a2-4b44-481c-8ab4-296e93c1c517_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.739 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 30af67a2-4b44-481c-8ab4-296e93c1c517_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.771 2 WARNING oslo_policy.policy [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.772 2 WARNING oslo_policy.policy [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.775 2 DEBUG nova.policy [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '831826dabb48463c92f24c277df4039e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f5b8fd07d6d54bda9a0257bf72d4b37f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:54:36 compute-0 nova_compute[259627]: 2025-10-14 08:54:36.996 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 30af67a2-4b44-481c-8ab4-296e93c1c517_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:37 compute-0 nova_compute[259627]: 2025-10-14 08:54:37.038 2 DEBUG nova.storage.rbd_utils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] resizing rbd image 30af67a2-4b44-481c-8ab4-296e93c1c517_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:54:37 compute-0 nova_compute[259627]: 2025-10-14 08:54:37.106 2 DEBUG nova.objects.instance [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lazy-loading 'migration_context' on Instance uuid 30af67a2-4b44-481c-8ab4-296e93c1c517 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:54:37 compute-0 nova_compute[259627]: 2025-10-14 08:54:37.119 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:54:37 compute-0 nova_compute[259627]: 2025-10-14 08:54:37.119 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Ensure instance console log exists: /var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:54:37 compute-0 nova_compute[259627]: 2025-10-14 08:54:37.120 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:37 compute-0 nova_compute[259627]: 2025-10-14 08:54:37.120 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:37 compute-0 nova_compute[259627]: 2025-10-14 08:54:37.120 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:37.124 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:54:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:54:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:37 compute-0 nova_compute[259627]: 2025-10-14 08:54:37.791 2 DEBUG nova.network.neutron [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Successfully created port: f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:54:37 compute-0 nova_compute[259627]: 2025-10-14 08:54:37.968 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "f1df3849-6811-41a9-9c70-f10a6863b4f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:37 compute-0 nova_compute[259627]: 2025-10-14 08:54:37.969 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:37 compute-0 nova_compute[259627]: 2025-10-14 08:54:37.986 2 DEBUG nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.005 2 DEBUG oslo_concurrency.processutils [None req-83798e37-e1ca-4d67-8680-82a778ae42d9 296d175bbbcb4e68b5452e11aae2ccb2 3736920871984ebdb2935d7d67386536 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.041 2 DEBUG oslo_concurrency.processutils [None req-83798e37-e1ca-4d67-8680-82a778ae42d9 296d175bbbcb4e68b5452e11aae2ccb2 3736920871984ebdb2935d7d67386536 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.056 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.057 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.070 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.070 2 INFO nova.compute.claims [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.225 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:54:38 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1634047493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.687 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.696 2 DEBUG nova.compute.provider_tree [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:54:38 compute-0 ceph-mon[74249]: pgmap v1076: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:38 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1634047493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.731 2 DEBUG nova.scheduler.client.report [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.762 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.764 2 DEBUG nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.808 2 DEBUG nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.809 2 DEBUG nova.network.neutron [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.833 2 INFO nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.847 2 DEBUG nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.911 2 DEBUG nova.network.neutron [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Successfully updated port: f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.932 2 DEBUG nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.934 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.935 2 INFO nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Creating image(s)
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.964 2 DEBUG nova.storage.rbd_utils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image f1df3849-6811-41a9-9c70-f10a6863b4f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:54:38 compute-0 nova_compute[259627]: 2025-10-14 08:54:38.990 2 DEBUG nova.storage.rbd_utils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image f1df3849-6811-41a9-9c70-f10a6863b4f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.014 2 DEBUG nova.storage.rbd_utils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image f1df3849-6811-41a9-9c70-f10a6863b4f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.024 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.047 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.048 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquired lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.048 2 DEBUG nova.network.neutron [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.099 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.099 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.100 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.100 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.118 2 DEBUG nova.storage.rbd_utils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image f1df3849-6811-41a9-9c70-f10a6863b4f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.120 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f1df3849-6811-41a9-9c70-f10a6863b4f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.137 2 DEBUG nova.policy [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '654cc6be69694fcd8058cc5a5eb78223', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3ac87003cad443c2b75e49ebdefe379c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.373 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f1df3849-6811-41a9-9c70-f10a6863b4f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.415 2 DEBUG nova.storage.rbd_utils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] resizing rbd image f1df3849-6811-41a9-9c70-f10a6863b4f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:54:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.520 2 DEBUG nova.compute.manager [req-0bfbed98-7285-4710-8436-d576709fa1f7 req-6978556e-7130-497a-b1d6-7e137dd0f54d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.520 2 DEBUG nova.compute.manager [req-0bfbed98-7285-4710-8436-d576709fa1f7 req-6978556e-7130-497a-b1d6-7e137dd0f54d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing instance network info cache due to event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.521 2 DEBUG oslo_concurrency.lockutils [req-0bfbed98-7285-4710-8436-d576709fa1f7 req-6978556e-7130-497a-b1d6-7e137dd0f54d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.527 2 DEBUG nova.objects.instance [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lazy-loading 'migration_context' on Instance uuid f1df3849-6811-41a9-9c70-f10a6863b4f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.543 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.544 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Ensure instance console log exists: /var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.544 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.545 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.545 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:39 compute-0 nova_compute[259627]: 2025-10-14 08:54:39.679 2 DEBUG nova.network.neutron [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.478 2 DEBUG nova.network.neutron [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Successfully created port: fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:54:40 compute-0 ceph-mon[74249]: pgmap v1077: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.867 2 DEBUG nova.network.neutron [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updating instance_info_cache with network_info: [{"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.898 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Releasing lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.899 2 DEBUG nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Instance network_info: |[{"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.900 2 DEBUG oslo_concurrency.lockutils [req-0bfbed98-7285-4710-8436-d576709fa1f7 req-6978556e-7130-497a-b1d6-7e137dd0f54d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.901 2 DEBUG nova.network.neutron [req-0bfbed98-7285-4710-8436-d576709fa1f7 req-6978556e-7130-497a-b1d6-7e137dd0f54d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.908 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Start _get_guest_xml network_info=[{"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.917 2 WARNING nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.922 2 DEBUG nova.virt.libvirt.host [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.923 2 DEBUG nova.virt.libvirt.host [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.926 2 DEBUG nova.virt.libvirt.host [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.927 2 DEBUG nova.virt.libvirt.host [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.928 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.928 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.928 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.929 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.929 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.929 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.930 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.930 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.930 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.930 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.931 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.931 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:54:40 compute-0 nova_compute[259627]: 2025-10-14 08:54:40.934 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:54:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3628877322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.362 2 DEBUG nova.network.neutron [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Successfully updated port: fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.364 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.390 2 DEBUG nova.storage.rbd_utils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 30af67a2-4b44-481c-8ab4-296e93c1c517_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.394 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.409 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "refresh_cache-f1df3849-6811-41a9-9c70-f10a6863b4f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.410 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquired lock "refresh_cache-f1df3849-6811-41a9-9c70-f10a6863b4f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.410 2 DEBUG nova.network.neutron [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:54:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 3.5 MiB/s wr, 113 op/s
Oct 14 08:54:41 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3628877322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:54:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:54:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3941367568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.786 2 DEBUG nova.network.neutron [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.800 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.802 2 DEBUG nova.virt.libvirt.vif [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:54:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1417208985',display_name='tempest-FloatingIPsAssociationTestJSON-server-1417208985',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1417208985',id=3,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5b8fd07d6d54bda9a0257bf72d4b37f',ramdisk_id='',reservation_id='r-95bnx8ff',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1304888620',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1304888620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:54:36Z,user_data=None,user_id='831826dabb48463c92f24c277df4039e',uuid=30af67a2-4b44-481c-8ab4-296e93c1c517,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.802 2 DEBUG nova.network.os_vif_util [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converting VIF {"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.803 2 DEBUG nova.network.os_vif_util [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:a5:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f1dcbf-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.805 2 DEBUG nova.objects.instance [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lazy-loading 'pci_devices' on Instance uuid 30af67a2-4b44-481c-8ab4-296e93c1c517 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.827 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:54:41 compute-0 nova_compute[259627]:   <uuid>30af67a2-4b44-481c-8ab4-296e93c1c517</uuid>
Oct 14 08:54:41 compute-0 nova_compute[259627]:   <name>instance-00000003</name>
Oct 14 08:54:41 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:54:41 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:54:41 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1417208985</nova:name>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:54:40</nova:creationTime>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:54:41 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:54:41 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:54:41 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:54:41 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:54:41 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:54:41 compute-0 nova_compute[259627]:         <nova:user uuid="831826dabb48463c92f24c277df4039e">tempest-FloatingIPsAssociationTestJSON-1304888620-project-member</nova:user>
Oct 14 08:54:41 compute-0 nova_compute[259627]:         <nova:project uuid="f5b8fd07d6d54bda9a0257bf72d4b37f">tempest-FloatingIPsAssociationTestJSON-1304888620</nova:project>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:54:41 compute-0 nova_compute[259627]:         <nova:port uuid="f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0">
Oct 14 08:54:41 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:54:41 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:54:41 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <system>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <entry name="serial">30af67a2-4b44-481c-8ab4-296e93c1c517</entry>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <entry name="uuid">30af67a2-4b44-481c-8ab4-296e93c1c517</entry>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     </system>
Oct 14 08:54:41 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:54:41 compute-0 nova_compute[259627]:   <os>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:   </os>
Oct 14 08:54:41 compute-0 nova_compute[259627]:   <features>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:   </features>
Oct 14 08:54:41 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:54:41 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:54:41 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/30af67a2-4b44-481c-8ab4-296e93c1c517_disk">
Oct 14 08:54:41 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       </source>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:54:41 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/30af67a2-4b44-481c-8ab4-296e93c1c517_disk.config">
Oct 14 08:54:41 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       </source>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:54:41 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:76:a5:a0"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <target dev="tapf0f1dcbf-2b"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517/console.log" append="off"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <video>
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     </video>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:54:41 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:54:41 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:54:41 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:54:41 compute-0 nova_compute[259627]: </domain>
Oct 14 08:54:41 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.829 2 DEBUG nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Preparing to wait for external event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.829 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.830 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.830 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.830 2 DEBUG nova.virt.libvirt.vif [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:54:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1417208985',display_name='tempest-FloatingIPsAssociationTestJSON-server-1417208985',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1417208985',id=3,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5b8fd07d6d54bda9a0257bf72d4b37f',ramdisk_id='',reservation_id='r-95bnx8ff',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1304888620',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1304888620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:54:36Z,user_data=None,user_id='831826dabb48463c92f24c277df4039e',uuid=30af67a2-4b44-481c-8ab4-296e93c1c517,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.831 2 DEBUG nova.network.os_vif_util [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converting VIF {"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.831 2 DEBUG nova.network.os_vif_util [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:a5:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f1dcbf-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.832 2 DEBUG os_vif [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:a5:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f1dcbf-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.876 2 DEBUG ovsdbapp.backend.ovs_idl [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.877 2 DEBUG ovsdbapp.backend.ovs_idl [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.877 2 DEBUG ovsdbapp.backend.ovs_idl [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [POLLOUT] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.899 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.900 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.901 2 INFO oslo.privsep.daemon [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpz411kyq8/privsep.sock']
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.925 2 DEBUG nova.compute.manager [req-a9a6cbcc-e50a-4ffb-850f-56b9041b20f1 req-78985573-aec8-4d70-8653-f06e04e1d39c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received event network-changed-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.926 2 DEBUG nova.compute.manager [req-a9a6cbcc-e50a-4ffb-850f-56b9041b20f1 req-78985573-aec8-4d70-8653-f06e04e1d39c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Refreshing instance network info cache due to event network-changed-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:54:41 compute-0 nova_compute[259627]: 2025-10-14 08:54:41.926 2 DEBUG oslo_concurrency.lockutils [req-a9a6cbcc-e50a-4ffb-850f-56b9041b20f1 req-78985573-aec8-4d70-8653-f06e04e1d39c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f1df3849-6811-41a9-9c70-f10a6863b4f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:54:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:54:42 compute-0 nova_compute[259627]: 2025-10-14 08:54:42.565 2 INFO oslo.privsep.daemon [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Spawned new privsep daemon via rootwrap
Oct 14 08:54:42 compute-0 nova_compute[259627]: 2025-10-14 08:54:42.429 1734 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 14 08:54:42 compute-0 nova_compute[259627]: 2025-10-14 08:54:42.434 1734 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 14 08:54:42 compute-0 nova_compute[259627]: 2025-10-14 08:54:42.437 1734 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct 14 08:54:42 compute-0 nova_compute[259627]: 2025-10-14 08:54:42.437 1734 INFO oslo.privsep.daemon [-] privsep daemon running as pid 1734
Oct 14 08:54:42 compute-0 nova_compute[259627]: 2025-10-14 08:54:42.607 2 DEBUG nova.network.neutron [req-0bfbed98-7285-4710-8436-d576709fa1f7 req-6978556e-7130-497a-b1d6-7e137dd0f54d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updated VIF entry in instance network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:54:42 compute-0 nova_compute[259627]: 2025-10-14 08:54:42.607 2 DEBUG nova.network.neutron [req-0bfbed98-7285-4710-8436-d576709fa1f7 req-6978556e-7130-497a-b1d6-7e137dd0f54d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updating instance_info_cache with network_info: [{"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:54:42 compute-0 nova_compute[259627]: 2025-10-14 08:54:42.625 2 DEBUG oslo_concurrency.lockutils [req-0bfbed98-7285-4710-8436-d576709fa1f7 req-6978556e-7130-497a-b1d6-7e137dd0f54d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:54:42 compute-0 ceph-mon[74249]: pgmap v1078: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 3.5 MiB/s wr, 113 op/s
Oct 14 08:54:42 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3941367568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006919304917952725 of space, bias 1.0, pg target 0.20757914753858175 quantized to 32 (current 32)
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:54:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 08:54:42 compute-0 nova_compute[259627]: 2025-10-14 08:54:42.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:42 compute-0 nova_compute[259627]: 2025-10-14 08:54:42.907 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0f1dcbf-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:54:42 compute-0 nova_compute[259627]: 2025-10-14 08:54:42.907 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf0f1dcbf-2b, col_values=(('external_ids', {'iface-id': 'f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:a5:a0', 'vm-uuid': '30af67a2-4b44-481c-8ab4-296e93c1c517'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:54:42 compute-0 NetworkManager[44885]: <info>  [1760432082.9508] manager: (tapf0f1dcbf-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Oct 14 08:54:42 compute-0 nova_compute[259627]: 2025-10-14 08:54:42.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:42 compute-0 nova_compute[259627]: 2025-10-14 08:54:42.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:54:42 compute-0 nova_compute[259627]: 2025-10-14 08:54:42.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:42 compute-0 nova_compute[259627]: 2025-10-14 08:54:42.962 2 INFO os_vif [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:a5:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f1dcbf-2b')
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.033 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.033 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.036 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] No VIF found with MAC fa:16:3e:76:a5:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.038 2 INFO nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Using config drive
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.073 2 DEBUG nova.storage.rbd_utils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 30af67a2-4b44-481c-8ab4-296e93c1c517_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.109 2 DEBUG nova.network.neutron [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Updating instance_info_cache with network_info: [{"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.136 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Releasing lock "refresh_cache-f1df3849-6811-41a9-9c70-f10a6863b4f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.137 2 DEBUG nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Instance network_info: |[{"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.137 2 DEBUG oslo_concurrency.lockutils [req-a9a6cbcc-e50a-4ffb-850f-56b9041b20f1 req-78985573-aec8-4d70-8653-f06e04e1d39c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f1df3849-6811-41a9-9c70-f10a6863b4f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.138 2 DEBUG nova.network.neutron [req-a9a6cbcc-e50a-4ffb-850f-56b9041b20f1 req-78985573-aec8-4d70-8653-f06e04e1d39c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Refreshing network info cache for port fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.141 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Start _get_guest_xml network_info=[{"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.147 2 WARNING nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.154 2 DEBUG nova.virt.libvirt.host [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.155 2 DEBUG nova.virt.libvirt.host [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.158 2 DEBUG nova.virt.libvirt.host [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.158 2 DEBUG nova.virt.libvirt.host [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.159 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.159 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:54:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1771896407',id=29,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-1703239647',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.159 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.159 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.160 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.160 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.160 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.160 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.161 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.161 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.161 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.161 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.163 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 3.5 MiB/s wr, 113 op/s
Oct 14 08:54:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:54:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3365924208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.609 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.626 2 DEBUG nova.storage.rbd_utils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image f1df3849-6811-41a9-9c70-f10a6863b4f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:54:43 compute-0 nova_compute[259627]: 2025-10-14 08:54:43.630 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:43 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3365924208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.022 2 INFO nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Creating config drive at /var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517/disk.config
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.028 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9dupjhfe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:54:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1020984832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.048 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.050 2 DEBUG nova.virt.libvirt.vif [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:54:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-148937452',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-148937452',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(29),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-148937452',id=4,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=29,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7g6TTsjbjlUgcn3NOlTdjTTdxkB/tOz/SnrUnsyC2B2DiKRfdyGeRG/baba8J8u0sFhOnSigOVqumIulB1xwu2rOWWivaBSyCqPcCAc1UiuhefYZEgIiwxztAfhh5GqA==',key_name='tempest-keypair-2141010768',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ac87003cad443c2b75e49ebdefe379c',ramdisk_id='',reservation_id='r-oq3iap0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-632252786',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-632252786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:54:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='654cc6be69694fcd8058cc5a5eb78223',uuid=f1df3849-6811-41a9-9c70-f10a6863b4f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.051 2 DEBUG nova.network.os_vif_util [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converting VIF {"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.052 2 DEBUG nova.network.os_vif_util [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:61:64,bridge_name='br-int',has_traffic_filtering=True,id=fd4673d1-9420-4d31-a2ce-c5cb5bc79c42,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd4673d1-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.053 2 DEBUG nova.objects.instance [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lazy-loading 'pci_devices' on Instance uuid f1df3849-6811-41a9-9c70-f10a6863b4f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.068 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:54:44 compute-0 nova_compute[259627]:   <uuid>f1df3849-6811-41a9-9c70-f10a6863b4f9</uuid>
Oct 14 08:54:44 compute-0 nova_compute[259627]:   <name>instance-00000004</name>
Oct 14 08:54:44 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:54:44 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:54:44 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-148937452</nova:name>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:54:43</nova:creationTime>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <nova:flavor name="tempest-flavor_with_ephemeral_0-1703239647">
Oct 14 08:54:44 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:54:44 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:54:44 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:54:44 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:54:44 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:54:44 compute-0 nova_compute[259627]:         <nova:user uuid="654cc6be69694fcd8058cc5a5eb78223">tempest-ServersWithSpecificFlavorTestJSON-632252786-project-member</nova:user>
Oct 14 08:54:44 compute-0 nova_compute[259627]:         <nova:project uuid="3ac87003cad443c2b75e49ebdefe379c">tempest-ServersWithSpecificFlavorTestJSON-632252786</nova:project>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:54:44 compute-0 nova_compute[259627]:         <nova:port uuid="fd4673d1-9420-4d31-a2ce-c5cb5bc79c42">
Oct 14 08:54:44 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:54:44 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:54:44 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <system>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <entry name="serial">f1df3849-6811-41a9-9c70-f10a6863b4f9</entry>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <entry name="uuid">f1df3849-6811-41a9-9c70-f10a6863b4f9</entry>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     </system>
Oct 14 08:54:44 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:54:44 compute-0 nova_compute[259627]:   <os>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:   </os>
Oct 14 08:54:44 compute-0 nova_compute[259627]:   <features>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:   </features>
Oct 14 08:54:44 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:54:44 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:54:44 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/f1df3849-6811-41a9-9c70-f10a6863b4f9_disk">
Oct 14 08:54:44 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       </source>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:54:44 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/f1df3849-6811-41a9-9c70-f10a6863b4f9_disk.config">
Oct 14 08:54:44 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       </source>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:54:44 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:b8:61:64"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <target dev="tapfd4673d1-94"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9/console.log" append="off"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <video>
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     </video>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:54:44 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:54:44 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:54:44 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:54:44 compute-0 nova_compute[259627]: </domain>
Oct 14 08:54:44 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.070 2 DEBUG nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Preparing to wait for external event network-vif-plugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.070 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.070 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.070 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.071 2 DEBUG nova.virt.libvirt.vif [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:54:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-148937452',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-148937452',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(29),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-148937452',id=4,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=29,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7g6TTsjbjlUgcn3NOlTdjTTdxkB/tOz/SnrUnsyC2B2DiKRfdyGeRG/baba8J8u0sFhOnSigOVqumIulB1xwu2rOWWivaBSyCqPcCAc1UiuhefYZEgIiwxztAfhh5GqA==',key_name='tempest-keypair-2141010768',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ac87003cad443c2b75e49ebdefe379c',ramdisk_id='',reservation_id='r-oq3iap0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-632252786',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-632252786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:54:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='654cc6be69694fcd8058cc5a5eb78223',uuid=f1df3849-6811-41a9-9c70-f10a6863b4f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.071 2 DEBUG nova.network.os_vif_util [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converting VIF {"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.072 2 DEBUG nova.network.os_vif_util [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:61:64,bridge_name='br-int',has_traffic_filtering=True,id=fd4673d1-9420-4d31-a2ce-c5cb5bc79c42,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd4673d1-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.073 2 DEBUG os_vif [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:61:64,bridge_name='br-int',has_traffic_filtering=True,id=fd4673d1-9420-4d31-a2ce-c5cb5bc79c42,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd4673d1-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.074 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.074 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.077 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd4673d1-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.078 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd4673d1-94, col_values=(('external_ids', {'iface-id': 'fd4673d1-9420-4d31-a2ce-c5cb5bc79c42', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:61:64', 'vm-uuid': 'f1df3849-6811-41a9-9c70-f10a6863b4f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:44 compute-0 NetworkManager[44885]: <info>  [1760432084.0802] manager: (tapfd4673d1-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.089 2 INFO os_vif [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:61:64,bridge_name='br-int',has_traffic_filtering=True,id=fd4673d1-9420-4d31-a2ce-c5cb5bc79c42,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd4673d1-94')
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.134 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.135 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.135 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] No VIF found with MAC fa:16:3e:b8:61:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.135 2 INFO nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Using config drive
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.152 2 DEBUG nova.storage.rbd_utils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image f1df3849-6811-41a9-9c70-f10a6863b4f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.157 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9dupjhfe" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.181 2 DEBUG nova.storage.rbd_utils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 30af67a2-4b44-481c-8ab4-296e93c1c517_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.184 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517/disk.config 30af67a2-4b44-481c-8ab4-296e93c1c517_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.320 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517/disk.config 30af67a2-4b44-481c-8ab4-296e93c1c517_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.321 2 INFO nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Deleting local config drive /var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517/disk.config because it was imported into RBD.
Oct 14 08:54:44 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 14 08:54:44 compute-0 kernel: tapf0f1dcbf-2b: entered promiscuous mode
Oct 14 08:54:44 compute-0 NetworkManager[44885]: <info>  [1760432084.4077] manager: (tapf0f1dcbf-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/25)
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:44 compute-0 ovn_controller[152662]: 2025-10-14T08:54:44Z|00027|binding|INFO|Claiming lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 for this chassis.
Oct 14 08:54:44 compute-0 ovn_controller[152662]: 2025-10-14T08:54:44Z|00028|binding|INFO|f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0: Claiming fa:16:3e:76:a5:a0 10.100.0.12
Oct 14 08:54:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:44.432 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:a5:a0 10.100.0.12'], port_security=['fa:16:3e:76:a5:a0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '30af67a2-4b44-481c-8ab4-296e93c1c517', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5b8fd07d6d54bda9a0257bf72d4b37f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e013b807-3b2c-404b-b699-697e2a823013', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20317fae-45dd-4464-971e-a345fe497251, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:54:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:44.434 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 in datapath 92d50a40-95c8-4c0a-a4ab-d459f68516aa bound to our chassis
Oct 14 08:54:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:44.436 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92d50a40-95c8-4c0a-a4ab-d459f68516aa
Oct 14 08:54:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:44.438 162547 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp03yknxqr/privsep.sock']
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.450 2 INFO nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Creating config drive at /var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9/disk.config
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.455 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqunzi4sv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:44 compute-0 systemd-udevd[276513]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:54:44 compute-0 systemd-machined[214636]: New machine qemu-3-instance-00000003.
Oct 14 08:54:44 compute-0 NetworkManager[44885]: <info>  [1760432084.4722] device (tapf0f1dcbf-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:54:44 compute-0 NetworkManager[44885]: <info>  [1760432084.4734] device (tapf0f1dcbf-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:54:44 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:44 compute-0 ovn_controller[152662]: 2025-10-14T08:54:44Z|00029|binding|INFO|Setting lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 ovn-installed in OVS
Oct 14 08:54:44 compute-0 ovn_controller[152662]: 2025-10-14T08:54:44Z|00030|binding|INFO|Setting lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 up in Southbound
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.588 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqunzi4sv" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.607 2 DEBUG nova.storage.rbd_utils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image f1df3849-6811-41a9-9c70-f10a6863b4f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.609 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9/disk.config f1df3849-6811-41a9-9c70-f10a6863b4f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:44 compute-0 ceph-mon[74249]: pgmap v1079: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 3.5 MiB/s wr, 113 op/s
Oct 14 08:54:44 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1020984832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.770 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9/disk.config f1df3849-6811-41a9-9c70-f10a6863b4f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.771 2 INFO nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Deleting local config drive /var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9/disk.config because it was imported into RBD.
Oct 14 08:54:44 compute-0 NetworkManager[44885]: <info>  [1760432084.8121] manager: (tapfd4673d1-94): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Oct 14 08:54:44 compute-0 systemd-udevd[276511]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:54:44 compute-0 kernel: tapfd4673d1-94: entered promiscuous mode
Oct 14 08:54:44 compute-0 ovn_controller[152662]: 2025-10-14T08:54:44Z|00031|binding|INFO|Claiming lport fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 for this chassis.
Oct 14 08:54:44 compute-0 ovn_controller[152662]: 2025-10-14T08:54:44Z|00032|binding|INFO|fd4673d1-9420-4d31-a2ce-c5cb5bc79c42: Claiming fa:16:3e:b8:61:64 10.100.0.10
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:44.827 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:61:64 10.100.0.10'], port_security=['fa:16:3e:b8:61:64 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f1df3849-6811-41a9-9c70-f10a6863b4f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ac87003cad443c2b75e49ebdefe379c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8a9f5d33-cc0d-455f-8821-c23805bbda66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a071297a-bec1-4fd2-a338-694b6508cca6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=fd4673d1-9420-4d31-a2ce-c5cb5bc79c42) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:54:44 compute-0 NetworkManager[44885]: <info>  [1760432084.8299] device (tapfd4673d1-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:54:44 compute-0 NetworkManager[44885]: <info>  [1760432084.8305] device (tapfd4673d1-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:54:44 compute-0 systemd-machined[214636]: New machine qemu-4-instance-00000004.
Oct 14 08:54:44 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:44 compute-0 ovn_controller[152662]: 2025-10-14T08:54:44Z|00033|binding|INFO|Setting lport fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 ovn-installed in OVS
Oct 14 08:54:44 compute-0 ovn_controller[152662]: 2025-10-14T08:54:44Z|00034|binding|INFO|Setting lport fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 up in Southbound
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.884 2 DEBUG nova.network.neutron [req-a9a6cbcc-e50a-4ffb-850f-56b9041b20f1 req-78985573-aec8-4d70-8653-f06e04e1d39c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Updated VIF entry in instance network info cache for port fd4673d1-9420-4d31-a2ce-c5cb5bc79c42. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.884 2 DEBUG nova.network.neutron [req-a9a6cbcc-e50a-4ffb-850f-56b9041b20f1 req-78985573-aec8-4d70-8653-f06e04e1d39c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Updating instance_info_cache with network_info: [{"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.918 2 DEBUG nova.compute.manager [req-103f12de-ddfa-4018-bdc4-00168ede3ba8 req-015a17ac-126e-4c34-9dcf-350246a1badc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.918 2 DEBUG oslo_concurrency.lockutils [req-103f12de-ddfa-4018-bdc4-00168ede3ba8 req-015a17ac-126e-4c34-9dcf-350246a1badc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.918 2 DEBUG oslo_concurrency.lockutils [req-103f12de-ddfa-4018-bdc4-00168ede3ba8 req-015a17ac-126e-4c34-9dcf-350246a1badc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.919 2 DEBUG oslo_concurrency.lockutils [req-103f12de-ddfa-4018-bdc4-00168ede3ba8 req-015a17ac-126e-4c34-9dcf-350246a1badc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.919 2 DEBUG nova.compute.manager [req-103f12de-ddfa-4018-bdc4-00168ede3ba8 req-015a17ac-126e-4c34-9dcf-350246a1badc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Processing event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:54:44 compute-0 nova_compute[259627]: 2025-10-14 08:54:44.920 2 DEBUG oslo_concurrency.lockutils [req-a9a6cbcc-e50a-4ffb-850f-56b9041b20f1 req-78985573-aec8-4d70-8653-f06e04e1d39c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f1df3849-6811-41a9-9c70-f10a6863b4f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.143 2 DEBUG nova.compute.manager [req-a1a69348-4a73-485c-910f-82485462f862 req-a9f53f94-4d0e-4de3-a7ec-ddd22da94fa9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received event network-vif-plugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.144 2 DEBUG oslo_concurrency.lockutils [req-a1a69348-4a73-485c-910f-82485462f862 req-a9f53f94-4d0e-4de3-a7ec-ddd22da94fa9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.145 2 DEBUG oslo_concurrency.lockutils [req-a1a69348-4a73-485c-910f-82485462f862 req-a9f53f94-4d0e-4de3-a7ec-ddd22da94fa9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.145 2 DEBUG oslo_concurrency.lockutils [req-a1a69348-4a73-485c-910f-82485462f862 req-a9f53f94-4d0e-4de3-a7ec-ddd22da94fa9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.146 2 DEBUG nova.compute.manager [req-a1a69348-4a73-485c-910f-82485462f862 req-a9f53f94-4d0e-4de3-a7ec-ddd22da94fa9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Processing event network-vif-plugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:54:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.153 162547 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 14 08:54:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.153 162547 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp03yknxqr/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 14 08:54:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.037 276588 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 14 08:54:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.041 276588 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 14 08:54:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.044 276588 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 14 08:54:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.044 276588 INFO oslo.privsep.daemon [-] privsep daemon running as pid 276588
Oct 14 08:54:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.156 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e7097399-2db1-4746-81fc-71cd85d3a236]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 3.6 MiB/s wr, 126 op/s
Oct 14 08:54:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.627 276588 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.628 276588 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.629 276588 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.695 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432085.6947627, f1df3849-6811-41a9-9c70-f10a6863b4f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.695 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] VM Started (Lifecycle Event)
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.699 2 DEBUG nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.705 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.710 2 INFO nova.virt.libvirt.driver [-] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Instance spawned successfully.
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.711 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.723 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.727 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.747 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.748 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.749 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.750 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.751 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.751 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.759 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.760 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432085.6948502, f1df3849-6811-41a9-9c70-f10a6863b4f9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.760 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] VM Paused (Lifecycle Event)
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.804 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.807 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432085.7026436, f1df3849-6811-41a9-9c70-f10a6863b4f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.808 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] VM Resumed (Lifecycle Event)
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.835 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.839 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.844 2 INFO nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Took 6.91 seconds to spawn the instance on the hypervisor.
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.844 2 DEBUG nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.856 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.896 2 INFO nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Took 7.86 seconds to build instance.
Oct 14 08:54:45 compute-0 nova_compute[259627]: 2025-10-14 08:54:45.916 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.235 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5db8ec9f-6b89-4b5d-96f7-36699129a672]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.236 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap92d50a40-91 in ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:54:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.238 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap92d50a40-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:54:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.238 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[42f0bdcd-86fc-4cb1-9b19-bdd88333e17a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.240 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[adb10e4f-5e5b-43bf-9232-f95eb80bfe6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.280 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ad871b-94f0-417c-ae2c-425226e99cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.315 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c70726c1-a9dd-48c3-8c3e-f5be8eeb6cbc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.317 162547 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpm6cjjc6j/privsep.sock']
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.342 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432086.342444, 30af67a2-4b44-481c-8ab4-296e93c1c517 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.343 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] VM Started (Lifecycle Event)
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.345 2 DEBUG nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.347 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.350 2 INFO nova.virt.libvirt.driver [-] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Instance spawned successfully.
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.351 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.371 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.408 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.409 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.410 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.411 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.411 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.412 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.417 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.440 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.441 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432086.3431032, 30af67a2-4b44-481c-8ab4-296e93c1c517 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.441 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] VM Paused (Lifecycle Event)
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.492 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.498 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432086.3475614, 30af67a2-4b44-481c-8ab4-296e93c1c517 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.499 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] VM Resumed (Lifecycle Event)
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.509 2 INFO nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Took 9.93 seconds to spawn the instance on the hypervisor.
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.510 2 DEBUG nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.545 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.548 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.580 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.591 2 INFO nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Took 10.81 seconds to build instance.
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.606 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:46 compute-0 ceph-mon[74249]: pgmap v1080: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 3.6 MiB/s wr, 126 op/s
Oct 14 08:54:46 compute-0 nova_compute[259627]: 2025-10-14 08:54:46.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:47 compute-0 nova_compute[259627]: 2025-10-14 08:54:47.013 2 DEBUG nova.compute.manager [req-d97288c9-3ac6-4b2b-8451-2582b1eac8e6 req-f66da549-6c55-4e73-9edc-fbf958e399a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:54:47 compute-0 nova_compute[259627]: 2025-10-14 08:54:47.014 2 DEBUG oslo_concurrency.lockutils [req-d97288c9-3ac6-4b2b-8451-2582b1eac8e6 req-f66da549-6c55-4e73-9edc-fbf958e399a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:47 compute-0 nova_compute[259627]: 2025-10-14 08:54:47.014 2 DEBUG oslo_concurrency.lockutils [req-d97288c9-3ac6-4b2b-8451-2582b1eac8e6 req-f66da549-6c55-4e73-9edc-fbf958e399a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:47 compute-0 nova_compute[259627]: 2025-10-14 08:54:47.015 2 DEBUG oslo_concurrency.lockutils [req-d97288c9-3ac6-4b2b-8451-2582b1eac8e6 req-f66da549-6c55-4e73-9edc-fbf958e399a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:47 compute-0 nova_compute[259627]: 2025-10-14 08:54:47.015 2 DEBUG nova.compute.manager [req-d97288c9-3ac6-4b2b-8451-2582b1eac8e6 req-f66da549-6c55-4e73-9edc-fbf958e399a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] No waiting events found dispatching network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:54:47 compute-0 nova_compute[259627]: 2025-10-14 08:54:47.015 2 WARNING nova.compute.manager [req-d97288c9-3ac6-4b2b-8451-2582b1eac8e6 req-f66da549-6c55-4e73-9edc-fbf958e399a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received unexpected event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 for instance with vm_state active and task_state None.
Oct 14 08:54:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:47.077 162547 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 14 08:54:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:47.079 162547 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpm6cjjc6j/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 14 08:54:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.952 276686 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 14 08:54:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.957 276686 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 14 08:54:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.960 276686 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 14 08:54:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.960 276686 INFO oslo.privsep.daemon [-] privsep daemon running as pid 276686
Oct 14 08:54:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:47.081 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[70311341-1164-49ff-9b6f-12cb2e99bea0]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:54:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 3.6 MiB/s wr, 126 op/s
Oct 14 08:54:47 compute-0 nova_compute[259627]: 2025-10-14 08:54:47.636 2 DEBUG nova.compute.manager [req-d77ca1f2-d9f2-4d9b-82ca-298efaeee987 req-1fba25c6-e616-4671-b00e-379061b3a420 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received event network-vif-plugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:54:47 compute-0 nova_compute[259627]: 2025-10-14 08:54:47.637 2 DEBUG oslo_concurrency.lockutils [req-d77ca1f2-d9f2-4d9b-82ca-298efaeee987 req-1fba25c6-e616-4671-b00e-379061b3a420 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:47 compute-0 nova_compute[259627]: 2025-10-14 08:54:47.638 2 DEBUG oslo_concurrency.lockutils [req-d77ca1f2-d9f2-4d9b-82ca-298efaeee987 req-1fba25c6-e616-4671-b00e-379061b3a420 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:47 compute-0 nova_compute[259627]: 2025-10-14 08:54:47.638 2 DEBUG oslo_concurrency.lockutils [req-d77ca1f2-d9f2-4d9b-82ca-298efaeee987 req-1fba25c6-e616-4671-b00e-379061b3a420 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:47 compute-0 nova_compute[259627]: 2025-10-14 08:54:47.638 2 DEBUG nova.compute.manager [req-d77ca1f2-d9f2-4d9b-82ca-298efaeee987 req-1fba25c6-e616-4671-b00e-379061b3a420 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] No waiting events found dispatching network-vif-plugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:54:47 compute-0 nova_compute[259627]: 2025-10-14 08:54:47.639 2 WARNING nova.compute.manager [req-d77ca1f2-d9f2-4d9b-82ca-298efaeee987 req-1fba25c6-e616-4671-b00e-379061b3a420 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received unexpected event network-vif-plugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 for instance with vm_state active and task_state None.
Oct 14 08:54:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:47.697 276686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:47.697 276686 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:47.697 276686 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.310 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c518cfee-525f-45de-b796-9e8ae12e72e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:48 compute-0 NetworkManager[44885]: <info>  [1760432088.3198] manager: (tap92d50a40-90): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.320 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0890bf-4a43-4e0b-ad47-493503aadb37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.347 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5770cff1-df6b-4779-9aa5-df9e8898683c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.350 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[44985054-8440-4696-a634-73fa015e17e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:48 compute-0 systemd-udevd[276698]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:54:48 compute-0 NetworkManager[44885]: <info>  [1760432088.3698] device (tap92d50a40-90): carrier: link connected
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.373 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef7d7c7-9d6d-4ae8-8906-f53b11fb9ac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.400 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[04d69da2-aa2a-4fc3-b30e-310224f4d93e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92d50a40-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:6c:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586713, 'reachable_time': 30695, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276701, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.437 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[927ff67c-377c-4c80-bf4c-9f8ff7439ea3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:6c66'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586713, 'tstamp': 586713}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276716, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.472 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[676199c0-d3c6-402a-a862-7338bd3caba0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92d50a40-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:6c:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586713, 'reachable_time': 30695, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276717, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.515 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[047a848f-918e-42a7-9d5c-b083c21a25e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.577 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ffaebb54-64ee-49a4-8a22-b891432f80d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.579 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92d50a40-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.579 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.580 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92d50a40-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:54:48 compute-0 nova_compute[259627]: 2025-10-14 08:54:48.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:48 compute-0 NetworkManager[44885]: <info>  [1760432088.5824] manager: (tap92d50a40-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct 14 08:54:48 compute-0 kernel: tap92d50a40-90: entered promiscuous mode
Oct 14 08:54:48 compute-0 nova_compute[259627]: 2025-10-14 08:54:48.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.587 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92d50a40-90, col_values=(('external_ids', {'iface-id': '2c98ab3c-01b9-41bc-bf2f-b9baea9e1b52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:54:48 compute-0 nova_compute[259627]: 2025-10-14 08:54:48.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:48 compute-0 ovn_controller[152662]: 2025-10-14T08:54:48Z|00035|binding|INFO|Releasing lport 2c98ab3c-01b9-41bc-bf2f-b9baea9e1b52 from this chassis (sb_readonly=0)
Oct 14 08:54:48 compute-0 nova_compute[259627]: 2025-10-14 08:54:48.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.591 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/92d50a40-95c8-4c0a-a4ab-d459f68516aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/92d50a40-95c8-4c0a-a4ab-d459f68516aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.593 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[51e8d14c-bc69-4598-86d5-18d83168fc23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.594 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-92d50a40-95c8-4c0a-a4ab-d459f68516aa
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/92d50a40-95c8-4c0a-a4ab-d459f68516aa.pid.haproxy
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 92d50a40-95c8-4c0a-a4ab-d459f68516aa
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:54:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.595 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'env', 'PROCESS_TAG=haproxy-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/92d50a40-95c8-4c0a-a4ab-d459f68516aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:54:48 compute-0 nova_compute[259627]: 2025-10-14 08:54:48.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:48 compute-0 nova_compute[259627]: 2025-10-14 08:54:48.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:48 compute-0 NetworkManager[44885]: <info>  [1760432088.7082] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/29)
Oct 14 08:54:48 compute-0 NetworkManager[44885]: <info>  [1760432088.7085] device (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 08:54:48 compute-0 NetworkManager[44885]: <info>  [1760432088.7093] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/30)
Oct 14 08:54:48 compute-0 NetworkManager[44885]: <info>  [1760432088.7096] device (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 08:54:48 compute-0 NetworkManager[44885]: <info>  [1760432088.7104] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct 14 08:54:48 compute-0 NetworkManager[44885]: <info>  [1760432088.7109] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct 14 08:54:48 compute-0 NetworkManager[44885]: <info>  [1760432088.7113] device (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 14 08:54:48 compute-0 NetworkManager[44885]: <info>  [1760432088.7115] device (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 14 08:54:48 compute-0 ceph-mon[74249]: pgmap v1081: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 3.6 MiB/s wr, 126 op/s
Oct 14 08:54:48 compute-0 nova_compute[259627]: 2025-10-14 08:54:48.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:48 compute-0 ovn_controller[152662]: 2025-10-14T08:54:48Z|00036|binding|INFO|Releasing lport 2c98ab3c-01b9-41bc-bf2f-b9baea9e1b52 from this chassis (sb_readonly=0)
Oct 14 08:54:48 compute-0 nova_compute[259627]: 2025-10-14 08:54:48.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:49 compute-0 podman[276750]: 2025-10-14 08:54:49.034495823 +0000 UTC m=+0.067200908 container create 3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 08:54:49 compute-0 nova_compute[259627]: 2025-10-14 08:54:49.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:49 compute-0 podman[276750]: 2025-10-14 08:54:48.995969633 +0000 UTC m=+0.028674788 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:54:49 compute-0 systemd[1]: Started libpod-conmon-3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b.scope.
Oct 14 08:54:49 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:54:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3945ba3649327f68a0834580ef80e82e4e0d34f541da7ffdb8442a6916717c6f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:49 compute-0 podman[276750]: 2025-10-14 08:54:49.161298911 +0000 UTC m=+0.194004016 container init 3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 08:54:49 compute-0 podman[276750]: 2025-10-14 08:54:49.171131103 +0000 UTC m=+0.203836188 container start 3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:54:49 compute-0 neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa[276766]: [NOTICE]   (276770) : New worker (276772) forked
Oct 14 08:54:49 compute-0 neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa[276766]: [NOTICE]   (276770) : Loading success.
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.241 162547 INFO neutron.agent.ovn.metadata.agent [-] Port fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 in datapath 6f970eb9-83e1-4efc-b15d-b5885b9eabe7 unbound from our chassis
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.243 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f970eb9-83e1-4efc-b15d-b5885b9eabe7
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.257 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a57f81c7-d064-4138-aa7e-4f65c8738006]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.258 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6f970eb9-81 in ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.261 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6f970eb9-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.261 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b5434a5f-16f2-4ffe-b76d-af64912716de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.263 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[87bf5f43-0dd7-4366-ad63-e75a6d142faf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.292 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[97f8e502-df14-4fbf-8219-72b670dc13a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.320 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f02065-de63-416f-af7a-b75303a519e9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.351 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e3cfb5d8-7d01-4b61-9b2e-8bb9156d700d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:49 compute-0 NetworkManager[44885]: <info>  [1760432089.3702] manager: (tap6f970eb9-80): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.371 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[600e1a63-c211-4c2e-8fa9-e7432a985816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:49 compute-0 systemd-udevd[276708]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.407 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb80d3b-4ed2-4065-912f-1bd2b603a04a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.411 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ca045d27-9d42-42c4-990e-790c4f9f762e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:49 compute-0 NetworkManager[44885]: <info>  [1760432089.4312] device (tap6f970eb9-80): carrier: link connected
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.438 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4141e213-5135-4aff-8ca7-9813a27a4e5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 3.6 MiB/s wr, 126 op/s
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.454 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f2bc2e97-bbe8-42ee-8adb-ecb0821f091a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f970eb9-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:30:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586819, 'reachable_time': 18741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276794, 'error': None, 'target': 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.468 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3dc1d4cb-d492-466d-b519-ee195af34a92]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee6:30aa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586819, 'tstamp': 586819}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276795, 'error': None, 'target': 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.483 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8e00684a-889a-4014-8ead-a91ac043c55c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f970eb9-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:30:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586819, 'reachable_time': 18741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276796, 'error': None, 'target': 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.509 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7d876b-9582-405a-b3b5-ccbf2d0bfbee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.565 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb282bc-20bb-4e7a-b723-3af9f08522ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.566 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f970eb9-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.566 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.567 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f970eb9-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:54:49 compute-0 NetworkManager[44885]: <info>  [1760432089.5691] manager: (tap6f970eb9-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Oct 14 08:54:49 compute-0 kernel: tap6f970eb9-80: entered promiscuous mode
Oct 14 08:54:49 compute-0 nova_compute[259627]: 2025-10-14 08:54:49.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:49 compute-0 nova_compute[259627]: 2025-10-14 08:54:49.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.573 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f970eb9-80, col_values=(('external_ids', {'iface-id': '6a62b55c-d140-4dc2-a487-c292e81e63e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:54:49 compute-0 ovn_controller[152662]: 2025-10-14T08:54:49Z|00037|binding|INFO|Releasing lport 6a62b55c-d140-4dc2-a487-c292e81e63e0 from this chassis (sb_readonly=0)
Oct 14 08:54:49 compute-0 nova_compute[259627]: 2025-10-14 08:54:49.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:49 compute-0 nova_compute[259627]: 2025-10-14 08:54:49.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:49 compute-0 nova_compute[259627]: 2025-10-14 08:54:49.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.601 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6f970eb9-83e1-4efc-b15d-b5885b9eabe7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6f970eb9-83e1-4efc-b15d-b5885b9eabe7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.602 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e23e45e7-346b-4723-8e39-7662feeabab6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.604 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-6f970eb9-83e1-4efc-b15d-b5885b9eabe7
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/6f970eb9-83e1-4efc-b15d-b5885b9eabe7.pid.haproxy
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 6f970eb9-83e1-4efc-b15d-b5885b9eabe7
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:54:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.606 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'env', 'PROCESS_TAG=haproxy-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6f970eb9-83e1-4efc-b15d-b5885b9eabe7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:54:49 compute-0 podman[276828]: 2025-10-14 08:54:49.952176376 +0000 UTC m=+0.039330811 container create e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:54:50 compute-0 systemd[1]: Started libpod-conmon-e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e.scope.
Oct 14 08:54:50 compute-0 podman[276828]: 2025-10-14 08:54:49.9336739 +0000 UTC m=+0.020828345 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:54:50 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:54:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffccbbfb65602d0e35d329bb8e895b905f5212158c2b547223141001e2533a96/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:54:50 compute-0 podman[276828]: 2025-10-14 08:54:50.06016245 +0000 UTC m=+0.147316945 container init e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 08:54:50 compute-0 podman[276828]: 2025-10-14 08:54:50.070491764 +0000 UTC m=+0.157646219 container start e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0)
Oct 14 08:54:50 compute-0 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[276844]: [NOTICE]   (276848) : New worker (276850) forked
Oct 14 08:54:50 compute-0 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[276844]: [NOTICE]   (276848) : Loading success.
Oct 14 08:54:50 compute-0 ceph-mon[74249]: pgmap v1082: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 3.6 MiB/s wr, 126 op/s
Oct 14 08:54:50 compute-0 nova_compute[259627]: 2025-10-14 08:54:50.987 2 DEBUG nova.compute.manager [req-38c900d9-552a-4697-ac7a-ce0b73820e8f req-c1886beb-22ba-4087-a47f-aaacbf30f17c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received event network-changed-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:54:50 compute-0 nova_compute[259627]: 2025-10-14 08:54:50.988 2 DEBUG nova.compute.manager [req-38c900d9-552a-4697-ac7a-ce0b73820e8f req-c1886beb-22ba-4087-a47f-aaacbf30f17c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Refreshing instance network info cache due to event network-changed-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:54:50 compute-0 nova_compute[259627]: 2025-10-14 08:54:50.988 2 DEBUG oslo_concurrency.lockutils [req-38c900d9-552a-4697-ac7a-ce0b73820e8f req-c1886beb-22ba-4087-a47f-aaacbf30f17c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f1df3849-6811-41a9-9c70-f10a6863b4f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:54:50 compute-0 nova_compute[259627]: 2025-10-14 08:54:50.989 2 DEBUG oslo_concurrency.lockutils [req-38c900d9-552a-4697-ac7a-ce0b73820e8f req-c1886beb-22ba-4087-a47f-aaacbf30f17c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f1df3849-6811-41a9-9c70-f10a6863b4f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:54:50 compute-0 nova_compute[259627]: 2025-10-14 08:54:50.989 2 DEBUG nova.network.neutron [req-38c900d9-552a-4697-ac7a-ce0b73820e8f req-c1886beb-22ba-4087-a47f-aaacbf30f17c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Refreshing network info cache for port fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:54:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Oct 14 08:54:51 compute-0 nova_compute[259627]: 2025-10-14 08:54:51.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:54:52 compute-0 ceph-mon[74249]: pgmap v1083: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Oct 14 08:54:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 14 08:54:53 compute-0 nova_compute[259627]: 2025-10-14 08:54:53.824 2 DEBUG nova.network.neutron [req-38c900d9-552a-4697-ac7a-ce0b73820e8f req-c1886beb-22ba-4087-a47f-aaacbf30f17c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Updated VIF entry in instance network info cache for port fd4673d1-9420-4d31-a2ce-c5cb5bc79c42. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:54:53 compute-0 nova_compute[259627]: 2025-10-14 08:54:53.825 2 DEBUG nova.network.neutron [req-38c900d9-552a-4697-ac7a-ce0b73820e8f req-c1886beb-22ba-4087-a47f-aaacbf30f17c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Updating instance_info_cache with network_info: [{"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:54:53 compute-0 nova_compute[259627]: 2025-10-14 08:54:53.843 2 DEBUG oslo_concurrency.lockutils [req-38c900d9-552a-4697-ac7a-ce0b73820e8f req-c1886beb-22ba-4087-a47f-aaacbf30f17c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f1df3849-6811-41a9-9c70-f10a6863b4f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:54:54 compute-0 nova_compute[259627]: 2025-10-14 08:54:54.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:54 compute-0 podman[276861]: 2025-10-14 08:54:54.695429735 +0000 UTC m=+0.096594064 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 08:54:54 compute-0 podman[276860]: 2025-10-14 08:54:54.715984542 +0000 UTC m=+0.121217421 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 08:54:54 compute-0 ceph-mon[74249]: pgmap v1084: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 14 08:54:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 14 08:54:55 compute-0 nova_compute[259627]: 2025-10-14 08:54:55.715 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:55 compute-0 nova_compute[259627]: 2025-10-14 08:54:55.715 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:55 compute-0 nova_compute[259627]: 2025-10-14 08:54:55.734 2 DEBUG nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:54:55 compute-0 nova_compute[259627]: 2025-10-14 08:54:55.798 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:55 compute-0 nova_compute[259627]: 2025-10-14 08:54:55.799 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:55 compute-0 nova_compute[259627]: 2025-10-14 08:54:55.810 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:54:55 compute-0 nova_compute[259627]: 2025-10-14 08:54:55.811 2 INFO nova.compute.claims [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:54:55 compute-0 nova_compute[259627]: 2025-10-14 08:54:55.942 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:54:56 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128658766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.435 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.441 2 DEBUG nova.compute.provider_tree [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.463 2 DEBUG nova.scheduler.client.report [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.493 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.494 2 DEBUG nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.575 2 DEBUG nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.575 2 DEBUG nova.network.neutron [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.606 2 INFO nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.629 2 DEBUG nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.739 2 DEBUG nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.742 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.743 2 INFO nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Creating image(s)
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.771 2 DEBUG nova.storage.rbd_utils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:54:56 compute-0 ceph-mon[74249]: pgmap v1085: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 14 08:54:56 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2128658766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.832 2 DEBUG nova.storage.rbd_utils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.866 2 DEBUG nova.storage.rbd_utils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.870 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.899 2 DEBUG nova.policy [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '831826dabb48463c92f24c277df4039e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f5b8fd07d6d54bda9a0257bf72d4b37f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.938 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.939 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.939 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.940 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.979 2 DEBUG nova.storage.rbd_utils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:54:56 compute-0 nova_compute[259627]: 2025-10-14 08:54:56.986 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:54:57 compute-0 nova_compute[259627]: 2025-10-14 08:54:57.239 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:54:57 compute-0 nova_compute[259627]: 2025-10-14 08:54:57.298 2 DEBUG nova.storage.rbd_utils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] resizing rbd image 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:54:57 compute-0 nova_compute[259627]: 2025-10-14 08:54:57.403 2 DEBUG nova.objects.instance [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lazy-loading 'migration_context' on Instance uuid 7b60a7cc-57e5-4833-9541-ed03e9e862ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:54:57 compute-0 nova_compute[259627]: 2025-10-14 08:54:57.426 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:54:57 compute-0 nova_compute[259627]: 2025-10-14 08:54:57.427 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Ensure instance console log exists: /var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:54:57 compute-0 nova_compute[259627]: 2025-10-14 08:54:57.427 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:54:57 compute-0 nova_compute[259627]: 2025-10-14 08:54:57.427 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:54:57 compute-0 nova_compute[259627]: 2025-10-14 08:54:57.428 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:54:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:54:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 135 op/s
Oct 14 08:54:57 compute-0 nova_compute[259627]: 2025-10-14 08:54:57.651 2 DEBUG nova.network.neutron [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Successfully created port: be863b8b-ed33-4cec-a274-d62c9bd4ac05 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:54:58 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 14 08:54:58 compute-0 nova_compute[259627]: 2025-10-14 08:54:58.242 2 DEBUG nova.network.neutron [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Successfully updated port: be863b8b-ed33-4cec-a274-d62c9bd4ac05 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:54:58 compute-0 nova_compute[259627]: 2025-10-14 08:54:58.259 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:54:58 compute-0 nova_compute[259627]: 2025-10-14 08:54:58.260 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquired lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:54:58 compute-0 nova_compute[259627]: 2025-10-14 08:54:58.260 2 DEBUG nova.network.neutron [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:54:58 compute-0 nova_compute[259627]: 2025-10-14 08:54:58.366 2 DEBUG nova.compute.manager [req-b83064d6-8255-47ff-82eb-f80fd9a7eadc req-15290f6f-c128-4ce9-8b33-2abfd1693373 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received event network-changed-be863b8b-ed33-4cec-a274-d62c9bd4ac05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:54:58 compute-0 nova_compute[259627]: 2025-10-14 08:54:58.366 2 DEBUG nova.compute.manager [req-b83064d6-8255-47ff-82eb-f80fd9a7eadc req-15290f6f-c128-4ce9-8b33-2abfd1693373 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Refreshing instance network info cache due to event network-changed-be863b8b-ed33-4cec-a274-d62c9bd4ac05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:54:58 compute-0 nova_compute[259627]: 2025-10-14 08:54:58.367 2 DEBUG oslo_concurrency.lockutils [req-b83064d6-8255-47ff-82eb-f80fd9a7eadc req-15290f6f-c128-4ce9-8b33-2abfd1693373 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:54:58 compute-0 nova_compute[259627]: 2025-10-14 08:54:58.769 2 DEBUG nova.network.neutron [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:54:58 compute-0 ceph-mon[74249]: pgmap v1086: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 135 op/s
Oct 14 08:54:58 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:54:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 135 op/s
Oct 14 08:54:59 compute-0 ovn_controller[152662]: 2025-10-14T08:54:59Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:76:a5:a0 10.100.0.12
Oct 14 08:54:59 compute-0 ovn_controller[152662]: 2025-10-14T08:54:59Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:a5:a0 10.100.0.12
Oct 14 08:54:59 compute-0 ovn_controller[152662]: 2025-10-14T08:54:59Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:61:64 10.100.0.10
Oct 14 08:54:59 compute-0 ovn_controller[152662]: 2025-10-14T08:54:59Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:61:64 10.100.0.10
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.825 2 DEBUG nova.network.neutron [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Updating instance_info_cache with network_info: [{"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.858 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Releasing lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.858 2 DEBUG nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Instance network_info: |[{"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.859 2 DEBUG oslo_concurrency.lockutils [req-b83064d6-8255-47ff-82eb-f80fd9a7eadc req-15290f6f-c128-4ce9-8b33-2abfd1693373 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.860 2 DEBUG nova.network.neutron [req-b83064d6-8255-47ff-82eb-f80fd9a7eadc req-15290f6f-c128-4ce9-8b33-2abfd1693373 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Refreshing network info cache for port be863b8b-ed33-4cec-a274-d62c9bd4ac05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.868 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Start _get_guest_xml network_info=[{"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.880 2 WARNING nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.896 2 DEBUG nova.virt.libvirt.host [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.897 2 DEBUG nova.virt.libvirt.host [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.903 2 DEBUG nova.virt.libvirt.host [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:54:59 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.904 2 DEBUG nova.virt.libvirt.host [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.905 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.905 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.906 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.907 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.907 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.907 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.907 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.908 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.908 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.909 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.909 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.909 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:54:59 compute-0 nova_compute[259627]: 2025-10-14 08:54:59.914 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:55:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3894751824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.408 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.440 2 DEBUG nova.storage.rbd_utils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.444 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:00 compute-0 ceph-mon[74249]: pgmap v1087: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 135 op/s
Oct 14 08:55:00 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3894751824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:55:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:55:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2075345938' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.885 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.888 2 DEBUG nova.virt.libvirt.vif [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1096031615',display_name='tempest-FloatingIPsAssociationTestJSON-server-1096031615',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1096031615',id=5,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5b8fd07d6d54bda9a0257bf72d4b37f',ramdisk_id='',reservation_id='r-opxie8ra',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1304888620',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1304888620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:54:56Z,user_data=None,user_id='831826dabb48463c92f24c277df4039e',uuid=7b60a7cc-57e5-4833-9541-ed03e9e862ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.888 2 DEBUG nova.network.os_vif_util [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converting VIF {"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.889 2 DEBUG nova.network.os_vif_util [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:74:b6,bridge_name='br-int',has_traffic_filtering=True,id=be863b8b-ed33-4cec-a274-d62c9bd4ac05,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe863b8b-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.891 2 DEBUG nova.objects.instance [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lazy-loading 'pci_devices' on Instance uuid 7b60a7cc-57e5-4833-9541-ed03e9e862ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.913 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:55:00 compute-0 nova_compute[259627]:   <uuid>7b60a7cc-57e5-4833-9541-ed03e9e862ea</uuid>
Oct 14 08:55:00 compute-0 nova_compute[259627]:   <name>instance-00000005</name>
Oct 14 08:55:00 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:55:00 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:55:00 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1096031615</nova:name>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:54:59</nova:creationTime>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:55:00 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:55:00 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:55:00 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:55:00 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:55:00 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:55:00 compute-0 nova_compute[259627]:         <nova:user uuid="831826dabb48463c92f24c277df4039e">tempest-FloatingIPsAssociationTestJSON-1304888620-project-member</nova:user>
Oct 14 08:55:00 compute-0 nova_compute[259627]:         <nova:project uuid="f5b8fd07d6d54bda9a0257bf72d4b37f">tempest-FloatingIPsAssociationTestJSON-1304888620</nova:project>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:55:00 compute-0 nova_compute[259627]:         <nova:port uuid="be863b8b-ed33-4cec-a274-d62c9bd4ac05">
Oct 14 08:55:00 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:55:00 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:55:00 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <system>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <entry name="serial">7b60a7cc-57e5-4833-9541-ed03e9e862ea</entry>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <entry name="uuid">7b60a7cc-57e5-4833-9541-ed03e9e862ea</entry>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     </system>
Oct 14 08:55:00 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:55:00 compute-0 nova_compute[259627]:   <os>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:   </os>
Oct 14 08:55:00 compute-0 nova_compute[259627]:   <features>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:   </features>
Oct 14 08:55:00 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:55:00 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:55:00 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk">
Oct 14 08:55:00 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       </source>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:55:00 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk.config">
Oct 14 08:55:00 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       </source>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:55:00 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:c0:74:b6"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <target dev="tapbe863b8b-ed"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea/console.log" append="off"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <video>
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     </video>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:55:00 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:55:00 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:55:00 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:55:00 compute-0 nova_compute[259627]: </domain>
Oct 14 08:55:00 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.915 2 DEBUG nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Preparing to wait for external event network-vif-plugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.915 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.916 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.916 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.917 2 DEBUG nova.virt.libvirt.vif [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1096031615',display_name='tempest-FloatingIPsAssociationTestJSON-server-1096031615',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1096031615',id=5,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5b8fd07d6d54bda9a0257bf72d4b37f',ramdisk_id='',reservation_id='r-opxie8ra',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1304888620',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1304888620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:54:56Z,user_data=None,user_id='831826dabb48463c92f24c277df4039e',uuid=7b60a7cc-57e5-4833-9541-ed03e9e862ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.917 2 DEBUG nova.network.os_vif_util [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converting VIF {"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.918 2 DEBUG nova.network.os_vif_util [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:74:b6,bridge_name='br-int',has_traffic_filtering=True,id=be863b8b-ed33-4cec-a274-d62c9bd4ac05,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe863b8b-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.919 2 DEBUG os_vif [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:74:b6,bridge_name='br-int',has_traffic_filtering=True,id=be863b8b-ed33-4cec-a274-d62c9bd4ac05,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe863b8b-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.922 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.927 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe863b8b-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.928 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe863b8b-ed, col_values=(('external_ids', {'iface-id': 'be863b8b-ed33-4cec-a274-d62c9bd4ac05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:74:b6', 'vm-uuid': '7b60a7cc-57e5-4833-9541-ed03e9e862ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:00 compute-0 NetworkManager[44885]: <info>  [1760432100.9323] manager: (tapbe863b8b-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.938 2 INFO os_vif [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:74:b6,bridge_name='br-int',has_traffic_filtering=True,id=be863b8b-ed33-4cec-a274-d62c9bd4ac05,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe863b8b-ed')
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.992 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.992 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.992 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] No VIF found with MAC fa:16:3e:c0:74:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:55:00 compute-0 nova_compute[259627]: 2025-10-14 08:55:00.993 2 INFO nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Using config drive
Oct 14 08:55:01 compute-0 nova_compute[259627]: 2025-10-14 08:55:01.015 2 DEBUG nova.storage.rbd_utils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:01 compute-0 nova_compute[259627]: 2025-10-14 08:55:01.363 2 DEBUG nova.network.neutron [req-b83064d6-8255-47ff-82eb-f80fd9a7eadc req-15290f6f-c128-4ce9-8b33-2abfd1693373 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Updated VIF entry in instance network info cache for port be863b8b-ed33-4cec-a274-d62c9bd4ac05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:55:01 compute-0 nova_compute[259627]: 2025-10-14 08:55:01.363 2 DEBUG nova.network.neutron [req-b83064d6-8255-47ff-82eb-f80fd9a7eadc req-15290f6f-c128-4ce9-8b33-2abfd1693373 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Updating instance_info_cache with network_info: [{"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:01 compute-0 nova_compute[259627]: 2025-10-14 08:55:01.377 2 DEBUG oslo_concurrency.lockutils [req-b83064d6-8255-47ff-82eb-f80fd9a7eadc req-15290f6f-c128-4ce9-8b33-2abfd1693373 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:55:01 compute-0 nova_compute[259627]: 2025-10-14 08:55:01.394 2 INFO nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Creating config drive at /var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea/disk.config
Oct 14 08:55:01 compute-0 nova_compute[259627]: 2025-10-14 08:55:01.398 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe6_0lvgl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 241 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.0 MiB/s wr, 281 op/s
Oct 14 08:55:01 compute-0 nova_compute[259627]: 2025-10-14 08:55:01.527 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe6_0lvgl" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:01 compute-0 nova_compute[259627]: 2025-10-14 08:55:01.568 2 DEBUG nova.storage.rbd_utils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:01 compute-0 nova_compute[259627]: 2025-10-14 08:55:01.574 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea/disk.config 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:01 compute-0 nova_compute[259627]: 2025-10-14 08:55:01.728 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea/disk.config 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:01 compute-0 nova_compute[259627]: 2025-10-14 08:55:01.730 2 INFO nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Deleting local config drive /var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea/disk.config because it was imported into RBD.
Oct 14 08:55:01 compute-0 nova_compute[259627]: 2025-10-14 08:55:01.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:01 compute-0 NetworkManager[44885]: <info>  [1760432101.7984] manager: (tapbe863b8b-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Oct 14 08:55:01 compute-0 kernel: tapbe863b8b-ed: entered promiscuous mode
Oct 14 08:55:01 compute-0 ovn_controller[152662]: 2025-10-14T08:55:01Z|00038|binding|INFO|Claiming lport be863b8b-ed33-4cec-a274-d62c9bd4ac05 for this chassis.
Oct 14 08:55:01 compute-0 ovn_controller[152662]: 2025-10-14T08:55:01Z|00039|binding|INFO|be863b8b-ed33-4cec-a274-d62c9bd4ac05: Claiming fa:16:3e:c0:74:b6 10.100.0.13
Oct 14 08:55:01 compute-0 nova_compute[259627]: 2025-10-14 08:55:01.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.810 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:74:b6 10.100.0.13'], port_security=['fa:16:3e:c0:74:b6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7b60a7cc-57e5-4833-9541-ed03e9e862ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5b8fd07d6d54bda9a0257bf72d4b37f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e013b807-3b2c-404b-b699-697e2a823013', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20317fae-45dd-4464-971e-a345fe497251, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=be863b8b-ed33-4cec-a274-d62c9bd4ac05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:55:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.814 162547 INFO neutron.agent.ovn.metadata.agent [-] Port be863b8b-ed33-4cec-a274-d62c9bd4ac05 in datapath 92d50a40-95c8-4c0a-a4ab-d459f68516aa bound to our chassis
Oct 14 08:55:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.819 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92d50a40-95c8-4c0a-a4ab-d459f68516aa
Oct 14 08:55:01 compute-0 ovn_controller[152662]: 2025-10-14T08:55:01Z|00040|binding|INFO|Setting lport be863b8b-ed33-4cec-a274-d62c9bd4ac05 ovn-installed in OVS
Oct 14 08:55:01 compute-0 ovn_controller[152662]: 2025-10-14T08:55:01Z|00041|binding|INFO|Setting lport be863b8b-ed33-4cec-a274-d62c9bd4ac05 up in Southbound
Oct 14 08:55:01 compute-0 nova_compute[259627]: 2025-10-14 08:55:01.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:01 compute-0 nova_compute[259627]: 2025-10-14 08:55:01.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:01 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2075345938' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:55:01 compute-0 systemd-machined[214636]: New machine qemu-5-instance-00000005.
Oct 14 08:55:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.845 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[41dfe802-69c0-4231-9bd9-77dbddbcedbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:01 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Oct 14 08:55:01 compute-0 systemd-udevd[277224]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:55:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.884 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[113f033e-56c6-4795-b73a-c296d5314f6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:01 compute-0 NetworkManager[44885]: <info>  [1760432101.8901] device (tapbe863b8b-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:55:01 compute-0 NetworkManager[44885]: <info>  [1760432101.8912] device (tapbe863b8b-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:55:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.890 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0ccefe4b-e46e-4525-882b-962e4d54d0b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.923 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3576dccf-187d-491f-a0fa-ff3b4690f417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.939 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[443a7bf4-3000-4a0f-a6f1-0686df8dbfe9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92d50a40-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:6c:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586713, 'reachable_time': 30695, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277234, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.957 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5664779f-fa4f-4a96-bc46-f12d48ddf0a1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap92d50a40-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586730, 'tstamp': 586730}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277236, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap92d50a40-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586733, 'tstamp': 586733}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277236, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.958 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92d50a40-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:01 compute-0 nova_compute[259627]: 2025-10-14 08:55:01.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:01 compute-0 nova_compute[259627]: 2025-10-14 08:55:01.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.966 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92d50a40-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.966 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:55:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.966 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92d50a40-90, col_values=(('external_ids', {'iface-id': '2c98ab3c-01b9-41bc-bf2f-b9baea9e1b52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.967 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:55:02 compute-0 nova_compute[259627]: 2025-10-14 08:55:02.135 2 DEBUG nova.compute.manager [req-7fc071a0-8298-44c3-92d4-42ffdf56cc9c req-f0af7eeb-59ff-4001-932d-4b8a92729428 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received event network-vif-plugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:02 compute-0 nova_compute[259627]: 2025-10-14 08:55:02.135 2 DEBUG oslo_concurrency.lockutils [req-7fc071a0-8298-44c3-92d4-42ffdf56cc9c req-f0af7eeb-59ff-4001-932d-4b8a92729428 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:02 compute-0 nova_compute[259627]: 2025-10-14 08:55:02.136 2 DEBUG oslo_concurrency.lockutils [req-7fc071a0-8298-44c3-92d4-42ffdf56cc9c req-f0af7eeb-59ff-4001-932d-4b8a92729428 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:02 compute-0 nova_compute[259627]: 2025-10-14 08:55:02.136 2 DEBUG oslo_concurrency.lockutils [req-7fc071a0-8298-44c3-92d4-42ffdf56cc9c req-f0af7eeb-59ff-4001-932d-4b8a92729428 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:02 compute-0 nova_compute[259627]: 2025-10-14 08:55:02.136 2 DEBUG nova.compute.manager [req-7fc071a0-8298-44c3-92d4-42ffdf56cc9c req-f0af7eeb-59ff-4001-932d-4b8a92729428 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Processing event network-vif-plugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:55:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:55:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:55:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:55:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:55:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:55:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:55:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:55:02 compute-0 ceph-mon[74249]: pgmap v1088: 305 pgs: 305 active+clean; 241 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.0 MiB/s wr, 281 op/s
Oct 14 08:55:02 compute-0 nova_compute[259627]: 2025-10-14 08:55:02.951 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432102.9504519, 7b60a7cc-57e5-4833-9541-ed03e9e862ea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:55:02 compute-0 nova_compute[259627]: 2025-10-14 08:55:02.952 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] VM Started (Lifecycle Event)
Oct 14 08:55:02 compute-0 nova_compute[259627]: 2025-10-14 08:55:02.955 2 DEBUG nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:55:02 compute-0 nova_compute[259627]: 2025-10-14 08:55:02.959 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:55:02 compute-0 nova_compute[259627]: 2025-10-14 08:55:02.964 2 INFO nova.virt.libvirt.driver [-] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Instance spawned successfully.
Oct 14 08:55:02 compute-0 nova_compute[259627]: 2025-10-14 08:55:02.965 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:55:02 compute-0 nova_compute[259627]: 2025-10-14 08:55:02.982 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:02 compute-0 nova_compute[259627]: 2025-10-14 08:55:02.989 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:55:02 compute-0 nova_compute[259627]: 2025-10-14 08:55:02.998 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:02 compute-0 nova_compute[259627]: 2025-10-14 08:55:02.999 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:02 compute-0 nova_compute[259627]: 2025-10-14 08:55:02.999 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:03 compute-0 nova_compute[259627]: 2025-10-14 08:55:03.000 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:03 compute-0 nova_compute[259627]: 2025-10-14 08:55:03.000 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:03 compute-0 nova_compute[259627]: 2025-10-14 08:55:03.001 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:03 compute-0 nova_compute[259627]: 2025-10-14 08:55:03.012 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:55:03 compute-0 nova_compute[259627]: 2025-10-14 08:55:03.012 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432102.9506228, 7b60a7cc-57e5-4833-9541-ed03e9e862ea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:55:03 compute-0 nova_compute[259627]: 2025-10-14 08:55:03.012 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] VM Paused (Lifecycle Event)
Oct 14 08:55:03 compute-0 nova_compute[259627]: 2025-10-14 08:55:03.042 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:03 compute-0 nova_compute[259627]: 2025-10-14 08:55:03.046 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432102.9594593, 7b60a7cc-57e5-4833-9541-ed03e9e862ea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:55:03 compute-0 nova_compute[259627]: 2025-10-14 08:55:03.046 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] VM Resumed (Lifecycle Event)
Oct 14 08:55:03 compute-0 nova_compute[259627]: 2025-10-14 08:55:03.065 2 INFO nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Took 6.33 seconds to spawn the instance on the hypervisor.
Oct 14 08:55:03 compute-0 nova_compute[259627]: 2025-10-14 08:55:03.065 2 DEBUG nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:03 compute-0 nova_compute[259627]: 2025-10-14 08:55:03.066 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:03 compute-0 nova_compute[259627]: 2025-10-14 08:55:03.072 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:55:03 compute-0 nova_compute[259627]: 2025-10-14 08:55:03.101 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:55:03 compute-0 nova_compute[259627]: 2025-10-14 08:55:03.134 2 INFO nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Took 7.36 seconds to build instance.
Oct 14 08:55:03 compute-0 nova_compute[259627]: 2025-10-14 08:55:03.149 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 241 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 576 KiB/s rd, 6.0 MiB/s wr, 146 op/s
Oct 14 08:55:04 compute-0 nova_compute[259627]: 2025-10-14 08:55:04.277 2 DEBUG nova.compute.manager [req-252bfe1f-2699-43b3-b0c8-6f81135d07fa req-2bf89832-7198-4b2c-a5c3-cbf9d0506690 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received event network-vif-plugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:04 compute-0 nova_compute[259627]: 2025-10-14 08:55:04.277 2 DEBUG oslo_concurrency.lockutils [req-252bfe1f-2699-43b3-b0c8-6f81135d07fa req-2bf89832-7198-4b2c-a5c3-cbf9d0506690 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:04 compute-0 nova_compute[259627]: 2025-10-14 08:55:04.277 2 DEBUG oslo_concurrency.lockutils [req-252bfe1f-2699-43b3-b0c8-6f81135d07fa req-2bf89832-7198-4b2c-a5c3-cbf9d0506690 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:04 compute-0 nova_compute[259627]: 2025-10-14 08:55:04.277 2 DEBUG oslo_concurrency.lockutils [req-252bfe1f-2699-43b3-b0c8-6f81135d07fa req-2bf89832-7198-4b2c-a5c3-cbf9d0506690 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:04 compute-0 nova_compute[259627]: 2025-10-14 08:55:04.278 2 DEBUG nova.compute.manager [req-252bfe1f-2699-43b3-b0c8-6f81135d07fa req-2bf89832-7198-4b2c-a5c3-cbf9d0506690 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] No waiting events found dispatching network-vif-plugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:55:04 compute-0 nova_compute[259627]: 2025-10-14 08:55:04.278 2 WARNING nova.compute.manager [req-252bfe1f-2699-43b3-b0c8-6f81135d07fa req-2bf89832-7198-4b2c-a5c3-cbf9d0506690 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received unexpected event network-vif-plugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 for instance with vm_state active and task_state None.
Oct 14 08:55:04 compute-0 ceph-mon[74249]: pgmap v1089: 305 pgs: 305 active+clean; 241 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 576 KiB/s rd, 6.0 MiB/s wr, 146 op/s
Oct 14 08:55:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 246 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 MiB/s wr, 224 op/s
Oct 14 08:55:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 08:55:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2180873797' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 08:55:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 08:55:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2180873797' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 08:55:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2180873797' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 08:55:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2180873797' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 08:55:05 compute-0 nova_compute[259627]: 2025-10-14 08:55:05.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:06 compute-0 nova_compute[259627]: 2025-10-14 08:55:06.186 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Acquiring lock "2d3012e0-0c96-4f38-aaf5-91e69018d624" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:06 compute-0 nova_compute[259627]: 2025-10-14 08:55:06.187 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "2d3012e0-0c96-4f38-aaf5-91e69018d624" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:06 compute-0 nova_compute[259627]: 2025-10-14 08:55:06.212 2 DEBUG nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:55:06 compute-0 nova_compute[259627]: 2025-10-14 08:55:06.302 2 DEBUG nova.compute.manager [req-c0f8ec90-e2fd-4282-818e-4aace5648859 req-60256abd-35b0-42d8-8dec-8d6408252ce6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:06 compute-0 nova_compute[259627]: 2025-10-14 08:55:06.303 2 DEBUG nova.compute.manager [req-c0f8ec90-e2fd-4282-818e-4aace5648859 req-60256abd-35b0-42d8-8dec-8d6408252ce6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing instance network info cache due to event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:55:06 compute-0 nova_compute[259627]: 2025-10-14 08:55:06.304 2 DEBUG oslo_concurrency.lockutils [req-c0f8ec90-e2fd-4282-818e-4aace5648859 req-60256abd-35b0-42d8-8dec-8d6408252ce6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:55:06 compute-0 nova_compute[259627]: 2025-10-14 08:55:06.304 2 DEBUG oslo_concurrency.lockutils [req-c0f8ec90-e2fd-4282-818e-4aace5648859 req-60256abd-35b0-42d8-8dec-8d6408252ce6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:55:06 compute-0 nova_compute[259627]: 2025-10-14 08:55:06.305 2 DEBUG nova.network.neutron [req-c0f8ec90-e2fd-4282-818e-4aace5648859 req-60256abd-35b0-42d8-8dec-8d6408252ce6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:55:06 compute-0 nova_compute[259627]: 2025-10-14 08:55:06.312 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:06 compute-0 nova_compute[259627]: 2025-10-14 08:55:06.313 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:06 compute-0 nova_compute[259627]: 2025-10-14 08:55:06.322 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:55:06 compute-0 nova_compute[259627]: 2025-10-14 08:55:06.323 2 INFO nova.compute.claims [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:55:06 compute-0 nova_compute[259627]: 2025-10-14 08:55:06.526 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:06 compute-0 podman[277281]: 2025-10-14 08:55:06.714855764 +0000 UTC m=+0.068437309 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 08:55:06 compute-0 podman[277280]: 2025-10-14 08:55:06.76132908 +0000 UTC m=+0.114583277 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009)
Oct 14 08:55:06 compute-0 nova_compute[259627]: 2025-10-14 08:55:06.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:06 compute-0 ceph-mon[74249]: pgmap v1090: 305 pgs: 305 active+clean; 246 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 MiB/s wr, 224 op/s
Oct 14 08:55:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:55:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/770342919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:07.012 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:07.013 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:07.013 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.013 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.018 2 DEBUG nova.compute.provider_tree [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.034 2 DEBUG nova.scheduler.client.report [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.061 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.062 2 DEBUG nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.102 2 DEBUG nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.102 2 DEBUG nova.network.neutron [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.120 2 INFO nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.138 2 DEBUG nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.225 2 DEBUG nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.226 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.226 2 INFO nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Creating image(s)
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.250 2 DEBUG nova.storage.rbd_utils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] rbd image 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.271 2 DEBUG nova.storage.rbd_utils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] rbd image 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.292 2 DEBUG nova.storage.rbd_utils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] rbd image 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.295 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.354 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.355 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.355 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.356 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.374 2 DEBUG nova.storage.rbd_utils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] rbd image 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.378 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:55:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 246 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 MiB/s wr, 224 op/s
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.479 2 DEBUG nova.network.neutron [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.480 2 DEBUG nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.593 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.667 2 DEBUG nova.storage.rbd_utils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] resizing rbd image 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.792 2 DEBUG nova.objects.instance [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lazy-loading 'migration_context' on Instance uuid 2d3012e0-0c96-4f38-aaf5-91e69018d624 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.810 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.811 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Ensure instance console log exists: /var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.811 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.812 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.812 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.814 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.819 2 WARNING nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.824 2 DEBUG nova.virt.libvirt.host [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.824 2 DEBUG nova.virt.libvirt.host [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.827 2 DEBUG nova.virt.libvirt.host [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.827 2 DEBUG nova.virt.libvirt.host [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.828 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.828 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.828 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.828 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.828 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.829 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.829 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.829 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.829 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.829 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.829 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.830 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.832 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.868 2 DEBUG nova.network.neutron [req-c0f8ec90-e2fd-4282-818e-4aace5648859 req-60256abd-35b0-42d8-8dec-8d6408252ce6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updated VIF entry in instance network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.868 2 DEBUG nova.network.neutron [req-c0f8ec90-e2fd-4282-818e-4aace5648859 req-60256abd-35b0-42d8-8dec-8d6408252ce6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updating instance_info_cache with network_info: [{"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:07 compute-0 nova_compute[259627]: 2025-10-14 08:55:07.883 2 DEBUG oslo_concurrency.lockutils [req-c0f8ec90-e2fd-4282-818e-4aace5648859 req-60256abd-35b0-42d8-8dec-8d6408252ce6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:55:07 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/770342919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:55:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3427218954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:55:08 compute-0 nova_compute[259627]: 2025-10-14 08:55:08.307 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:08 compute-0 nova_compute[259627]: 2025-10-14 08:55:08.326 2 DEBUG nova.storage.rbd_utils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] rbd image 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:08 compute-0 nova_compute[259627]: 2025-10-14 08:55:08.329 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:55:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3942311881' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:55:08 compute-0 nova_compute[259627]: 2025-10-14 08:55:08.747 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:08 compute-0 nova_compute[259627]: 2025-10-14 08:55:08.750 2 DEBUG nova.objects.instance [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d3012e0-0c96-4f38-aaf5-91e69018d624 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:55:08 compute-0 nova_compute[259627]: 2025-10-14 08:55:08.768 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:55:08 compute-0 nova_compute[259627]:   <uuid>2d3012e0-0c96-4f38-aaf5-91e69018d624</uuid>
Oct 14 08:55:08 compute-0 nova_compute[259627]:   <name>instance-00000006</name>
Oct 14 08:55:08 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:55:08 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:55:08 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerDiagnosticsNegativeTest-server-1046199962</nova:name>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:55:07</nova:creationTime>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:55:08 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:55:08 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:55:08 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:55:08 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:55:08 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:55:08 compute-0 nova_compute[259627]:         <nova:user uuid="d198584e448f4f7588fd71c62016a5d9">tempest-ServerDiagnosticsNegativeTest-1101654890-project-member</nova:user>
Oct 14 08:55:08 compute-0 nova_compute[259627]:         <nova:project uuid="b83283e63f5f412aa3f06e953847cac6">tempest-ServerDiagnosticsNegativeTest-1101654890</nova:project>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:55:08 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:55:08 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <system>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <entry name="serial">2d3012e0-0c96-4f38-aaf5-91e69018d624</entry>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <entry name="uuid">2d3012e0-0c96-4f38-aaf5-91e69018d624</entry>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     </system>
Oct 14 08:55:08 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:55:08 compute-0 nova_compute[259627]:   <os>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:   </os>
Oct 14 08:55:08 compute-0 nova_compute[259627]:   <features>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:   </features>
Oct 14 08:55:08 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:55:08 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:55:08 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2d3012e0-0c96-4f38-aaf5-91e69018d624_disk">
Oct 14 08:55:08 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       </source>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:55:08 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2d3012e0-0c96-4f38-aaf5-91e69018d624_disk.config">
Oct 14 08:55:08 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       </source>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:55:08 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624/console.log" append="off"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <video>
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     </video>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:55:08 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:55:08 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:55:08 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:55:08 compute-0 nova_compute[259627]: </domain>
Oct 14 08:55:08 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:55:08 compute-0 nova_compute[259627]: 2025-10-14 08:55:08.842 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:55:08 compute-0 nova_compute[259627]: 2025-10-14 08:55:08.843 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:55:08 compute-0 nova_compute[259627]: 2025-10-14 08:55:08.844 2 INFO nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Using config drive
Oct 14 08:55:08 compute-0 nova_compute[259627]: 2025-10-14 08:55:08.871 2 DEBUG nova.storage.rbd_utils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] rbd image 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:08 compute-0 ceph-mon[74249]: pgmap v1091: 305 pgs: 305 active+clean; 246 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 MiB/s wr, 224 op/s
Oct 14 08:55:08 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3427218954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:55:08 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3942311881' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.032 2 DEBUG oslo_concurrency.lockutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "f1df3849-6811-41a9-9c70-f10a6863b4f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.034 2 DEBUG oslo_concurrency.lockutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.034 2 DEBUG oslo_concurrency.lockutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.034 2 DEBUG oslo_concurrency.lockutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.034 2 DEBUG oslo_concurrency.lockutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.035 2 INFO nova.compute.manager [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Terminating instance
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.036 2 DEBUG nova.compute.manager [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:55:09 compute-0 kernel: tapfd4673d1-94 (unregistering): left promiscuous mode
Oct 14 08:55:09 compute-0 NetworkManager[44885]: <info>  [1760432109.1000] device (tapfd4673d1-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.125 2 INFO nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Creating config drive at /var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624/disk.config
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.130 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpurfes625 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:09 compute-0 ovn_controller[152662]: 2025-10-14T08:55:09Z|00042|binding|INFO|Releasing lport fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 from this chassis (sb_readonly=0)
Oct 14 08:55:09 compute-0 ovn_controller[152662]: 2025-10-14T08:55:09Z|00043|binding|INFO|Setting lport fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 down in Southbound
Oct 14 08:55:09 compute-0 ovn_controller[152662]: 2025-10-14T08:55:09Z|00044|binding|INFO|Removing iface tapfd4673d1-94 ovn-installed in OVS
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.159 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:61:64 10.100.0.10'], port_security=['fa:16:3e:b8:61:64 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f1df3849-6811-41a9-9c70-f10a6863b4f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ac87003cad443c2b75e49ebdefe379c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8a9f5d33-cc0d-455f-8821-c23805bbda66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a071297a-bec1-4fd2-a338-694b6508cca6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=fd4673d1-9420-4d31-a2ce-c5cb5bc79c42) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:55:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.161 162547 INFO neutron.agent.ovn.metadata.agent [-] Port fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 in datapath 6f970eb9-83e1-4efc-b15d-b5885b9eabe7 unbound from our chassis
Oct 14 08:55:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.163 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6f970eb9-83e1-4efc-b15d-b5885b9eabe7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:55:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.167 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf6ec4f-9470-45a6-b1a5-f6668530e059]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.168 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 namespace which is not needed anymore
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:09 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct 14 08:55:09 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 14.678s CPU time.
Oct 14 08:55:09 compute-0 systemd-machined[214636]: Machine qemu-4-instance-00000004 terminated.
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.255 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpurfes625" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.281 2 DEBUG nova.storage.rbd_utils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] rbd image 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.285 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624/disk.config 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:09 compute-0 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[276844]: [NOTICE]   (276848) : haproxy version is 2.8.14-c23fe91
Oct 14 08:55:09 compute-0 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[276844]: [NOTICE]   (276848) : path to executable is /usr/sbin/haproxy
Oct 14 08:55:09 compute-0 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[276844]: [WARNING]  (276848) : Exiting Master process...
Oct 14 08:55:09 compute-0 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[276844]: [ALERT]    (276848) : Current worker (276850) exited with code 143 (Terminated)
Oct 14 08:55:09 compute-0 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[276844]: [WARNING]  (276848) : All workers exited. Exiting... (0)
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.309 2 INFO nova.virt.libvirt.driver [-] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Instance destroyed successfully.
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.310 2 DEBUG nova.objects.instance [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lazy-loading 'resources' on Instance uuid f1df3849-6811-41a9-9c70-f10a6863b4f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:55:09 compute-0 systemd[1]: libpod-e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e.scope: Deactivated successfully.
Oct 14 08:55:09 compute-0 podman[277623]: 2025-10-14 08:55:09.320296794 +0000 UTC m=+0.047442641 container died e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.334 2 DEBUG nova.virt.libvirt.vif [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:54:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-148937452',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-148937452',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(29),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-148937452',id=4,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=29,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7g6TTsjbjlUgcn3NOlTdjTTdxkB/tOz/SnrUnsyC2B2DiKRfdyGeRG/baba8J8u0sFhOnSigOVqumIulB1xwu2rOWWivaBSyCqPcCAc1UiuhefYZEgIiwxztAfhh5GqA==',key_name='tempest-keypair-2141010768',keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:54:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ac87003cad443c2b75e49ebdefe379c',ramdisk_id='',reservation_id='r-oq3iap0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-632252786',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-632252786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:54:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='654cc6be69694fcd8058cc5a5eb78223',uuid=f1df3849-6811-41a9-9c70-f10a6863b4f9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.335 2 DEBUG nova.network.os_vif_util [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converting VIF {"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.336 2 DEBUG nova.network.os_vif_util [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:61:64,bridge_name='br-int',has_traffic_filtering=True,id=fd4673d1-9420-4d31-a2ce-c5cb5bc79c42,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd4673d1-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.336 2 DEBUG os_vif [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:61:64,bridge_name='br-int',has_traffic_filtering=True,id=fd4673d1-9420-4d31-a2ce-c5cb5bc79c42,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd4673d1-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.339 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4673d1-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:55:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffccbbfb65602d0e35d329bb8e895b905f5212158c2b547223141001e2533a96-merged.mount: Deactivated successfully.
Oct 14 08:55:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e-userdata-shm.mount: Deactivated successfully.
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.346 2 INFO os_vif [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:61:64,bridge_name='br-int',has_traffic_filtering=True,id=fd4673d1-9420-4d31-a2ce-c5cb5bc79c42,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd4673d1-94')
Oct 14 08:55:09 compute-0 podman[277623]: 2025-10-14 08:55:09.363870839 +0000 UTC m=+0.091016686 container cleanup e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 08:55:09 compute-0 systemd[1]: libpod-conmon-e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e.scope: Deactivated successfully.
Oct 14 08:55:09 compute-0 podman[277704]: 2025-10-14 08:55:09.434967023 +0000 UTC m=+0.045582476 container remove e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 08:55:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.441 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[902c06d1-1858-407c-a034-4f463d1e2eab]: (4, ('Tue Oct 14 08:55:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 (e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e)\ne67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e\nTue Oct 14 08:55:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 (e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e)\ne67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.447 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0130d686-6e15-4619-8d24-2311b98e13ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.447 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624/disk.config 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.448 2 INFO nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Deleting local config drive /var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624/disk.config because it was imported into RBD.
Oct 14 08:55:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.448 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f970eb9-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 246 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 MiB/s wr, 224 op/s
Oct 14 08:55:09 compute-0 kernel: tap6f970eb9-80: left promiscuous mode
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.474 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0e76b138-254e-4762-b817-5c3694a7299a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.502 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d23b5c9f-a311-4351-b3cc-ad89f80642c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.503 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[04487446-bfdd-4519-b055-e2647a99f38d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.518 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[42d916de-ebab-40b0-946e-48887570ccd7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586810, 'reachable_time': 28615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277732, 'error': None, 'target': 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:09 compute-0 systemd-machined[214636]: New machine qemu-6-instance-00000006.
Oct 14 08:55:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.531 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:55:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.532 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[50f0a6f8-3147-4398-902f-8375c2a34a43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d6f970eb9\x2d83e1\x2d4efc\x2db15d\x2db5885b9eabe7.mount: Deactivated successfully.
Oct 14 08:55:09 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.748 2 INFO nova.virt.libvirt.driver [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Deleting instance files /var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9_del
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.749 2 INFO nova.virt.libvirt.driver [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Deletion of /var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9_del complete
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.831 2 INFO nova.compute.manager [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.832 2 DEBUG oslo.service.loopingcall [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.832 2 DEBUG nova.compute.manager [-] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:55:09 compute-0 nova_compute[259627]: 2025-10-14 08:55:09.832 2 DEBUG nova.network.neutron [-] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.540 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432110.5404274, 2d3012e0-0c96-4f38-aaf5-91e69018d624 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.542 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] VM Resumed (Lifecycle Event)
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.544 2 DEBUG nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.545 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.548 2 INFO nova.virt.libvirt.driver [-] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Instance spawned successfully.
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.549 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.575 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.578 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.585 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.586 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.586 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.587 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.588 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.588 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.635 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.636 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432110.5429018, 2d3012e0-0c96-4f38-aaf5-91e69018d624 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.636 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] VM Started (Lifecycle Event)
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.664 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.666 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.676 2 INFO nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Took 3.45 seconds to spawn the instance on the hypervisor.
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.677 2 DEBUG nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.694 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.749 2 INFO nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Took 4.49 seconds to build instance.
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.774 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "2d3012e0-0c96-4f38-aaf5-91e69018d624" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:10 compute-0 ceph-mon[74249]: pgmap v1092: 305 pgs: 305 active+clean; 246 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 MiB/s wr, 224 op/s
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:55:10 compute-0 nova_compute[259627]: 2025-10-14 08:55:10.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.415 2 DEBUG nova.network.neutron [-] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.435 2 INFO nova.compute.manager [-] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Took 1.60 seconds to deallocate network for instance.
Oct 14 08:55:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 214 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 7.8 MiB/s wr, 285 op/s
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.483 2 DEBUG oslo_concurrency.lockutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.484 2 DEBUG oslo_concurrency.lockutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.571 2 DEBUG nova.compute.manager [req-8ba9debd-1a6e-45b5-8768-21eabf61b6e1 req-88b607eb-290f-4517-9af6-20bc6bbf9a8a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received event network-vif-unplugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.571 2 DEBUG oslo_concurrency.lockutils [req-8ba9debd-1a6e-45b5-8768-21eabf61b6e1 req-88b607eb-290f-4517-9af6-20bc6bbf9a8a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.572 2 DEBUG oslo_concurrency.lockutils [req-8ba9debd-1a6e-45b5-8768-21eabf61b6e1 req-88b607eb-290f-4517-9af6-20bc6bbf9a8a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.572 2 DEBUG oslo_concurrency.lockutils [req-8ba9debd-1a6e-45b5-8768-21eabf61b6e1 req-88b607eb-290f-4517-9af6-20bc6bbf9a8a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.572 2 DEBUG nova.compute.manager [req-8ba9debd-1a6e-45b5-8768-21eabf61b6e1 req-88b607eb-290f-4517-9af6-20bc6bbf9a8a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] No waiting events found dispatching network-vif-unplugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.572 2 WARNING nova.compute.manager [req-8ba9debd-1a6e-45b5-8768-21eabf61b6e1 req-88b607eb-290f-4517-9af6-20bc6bbf9a8a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received unexpected event network-vif-unplugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 for instance with vm_state deleted and task_state None.
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.626 2 DEBUG oslo_concurrency.processutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.985 2 DEBUG nova.compute.manager [req-b9b75c77-4441-4d0e-b96a-df55a94793a2 req-0fe84787-6d68-420a-a3e2-b9b39db4ebbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.986 2 DEBUG nova.compute.manager [req-b9b75c77-4441-4d0e-b96a-df55a94793a2 req-0fe84787-6d68-420a-a3e2-b9b39db4ebbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing instance network info cache due to event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.986 2 DEBUG oslo_concurrency.lockutils [req-b9b75c77-4441-4d0e-b96a-df55a94793a2 req-0fe84787-6d68-420a-a3e2-b9b39db4ebbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.986 2 DEBUG oslo_concurrency.lockutils [req-b9b75c77-4441-4d0e-b96a-df55a94793a2 req-0fe84787-6d68-420a-a3e2-b9b39db4ebbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:55:11 compute-0 nova_compute[259627]: 2025-10-14 08:55:11.986 2 DEBUG nova.network.neutron [req-b9b75c77-4441-4d0e-b96a-df55a94793a2 req-0fe84787-6d68-420a-a3e2-b9b39db4ebbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:55:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:55:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1989207332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.086 2 DEBUG oslo_concurrency.processutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.092 2 DEBUG nova.compute.provider_tree [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.114 2 DEBUG nova.scheduler.client.report [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.136 2 DEBUG oslo_concurrency.lockutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.187 2 INFO nova.scheduler.client.report [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Deleted allocations for instance f1df3849-6811-41a9-9c70-f10a6863b4f9
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.191 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.192 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.233 2 DEBUG nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.315 2 DEBUG oslo_concurrency.lockutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.281s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.335 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.335 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.340 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.341 2 INFO nova.compute.claims [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:55:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.534 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.595 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Acquiring lock "2d3012e0-0c96-4f38-aaf5-91e69018d624" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.596 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "2d3012e0-0c96-4f38-aaf5-91e69018d624" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.596 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Acquiring lock "2d3012e0-0c96-4f38-aaf5-91e69018d624-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.597 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "2d3012e0-0c96-4f38-aaf5-91e69018d624-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.597 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "2d3012e0-0c96-4f38-aaf5-91e69018d624-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.598 2 INFO nova.compute.manager [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Terminating instance
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.599 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Acquiring lock "refresh_cache-2d3012e0-0c96-4f38-aaf5-91e69018d624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.599 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Acquired lock "refresh_cache-2d3012e0-0c96-4f38-aaf5-91e69018d624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.599 2 DEBUG nova.network.neutron [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.824 2 DEBUG nova.network.neutron [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:55:12 compute-0 ceph-mon[74249]: pgmap v1093: 305 pgs: 305 active+clean; 214 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 7.8 MiB/s wr, 285 op/s
Oct 14 08:55:12 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1989207332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.913646) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432112913682, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2084, "num_deletes": 251, "total_data_size": 3398059, "memory_usage": 3460080, "flush_reason": "Manual Compaction"}
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432112926397, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3309618, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20882, "largest_seqno": 22965, "table_properties": {"data_size": 3300262, "index_size": 5850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19305, "raw_average_key_size": 20, "raw_value_size": 3281295, "raw_average_value_size": 3428, "num_data_blocks": 264, "num_entries": 957, "num_filter_entries": 957, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760431900, "oldest_key_time": 1760431900, "file_creation_time": 1760432112, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 12864 microseconds, and 6121 cpu microseconds.
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.926509) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3309618 bytes OK
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.926553) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.928422) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.928433) EVENT_LOG_v1 {"time_micros": 1760432112928429, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.928448) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3389304, prev total WAL file size 3389304, number of live WAL files 2.
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.929394) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3232KB)], [50(7365KB)]
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432112929462, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10851920, "oldest_snapshot_seqno": -1}
Oct 14 08:55:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:55:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/950822610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.968 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4728 keys, 9106228 bytes, temperature: kUnknown
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432112973900, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9106228, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9072192, "index_size": 21126, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 115850, "raw_average_key_size": 24, "raw_value_size": 8984321, "raw_average_value_size": 1900, "num_data_blocks": 889, "num_entries": 4728, "num_filter_entries": 4728, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760432112, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.974 2 DEBUG nova.compute.provider_tree [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.974153) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9106228 bytes
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.975561) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 243.9 rd, 204.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.2 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(6.0) write-amplify(2.8) OK, records in: 5246, records dropped: 518 output_compression: NoCompression
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.975576) EVENT_LOG_v1 {"time_micros": 1760432112975569, "job": 26, "event": "compaction_finished", "compaction_time_micros": 44493, "compaction_time_cpu_micros": 18109, "output_level": 6, "num_output_files": 1, "total_output_size": 9106228, "num_input_records": 5246, "num_output_records": 4728, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432112976154, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432112977214, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.929273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.977285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.977291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.977294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.977297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 08:55:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.977299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 08:55:12 compute-0 nova_compute[259627]: 2025-10-14 08:55:12.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.003 2 DEBUG nova.scheduler.client.report [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.033 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.034 2 DEBUG nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.080 2 DEBUG nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.081 2 DEBUG nova.network.neutron [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.102 2 INFO nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.137 2 DEBUG nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.200 2 DEBUG nova.network.neutron [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.219 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Releasing lock "refresh_cache-2d3012e0-0c96-4f38-aaf5-91e69018d624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.220 2 DEBUG nova.compute.manager [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.241 2 DEBUG nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.256 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.257 2 INFO nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Creating image(s)
Oct 14 08:55:13 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 14 08:55:13 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 3.732s CPU time.
Oct 14 08:55:13 compute-0 systemd-machined[214636]: Machine qemu-6-instance-00000006 terminated.
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.300 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.334 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.367 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.372 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.410 2 DEBUG nova.policy [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '654cc6be69694fcd8058cc5a5eb78223', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3ac87003cad443c2b75e49ebdefe379c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.440 2 INFO nova.virt.libvirt.driver [-] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Instance destroyed successfully.
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.441 2 DEBUG nova.objects.instance [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lazy-loading 'resources' on Instance uuid 2d3012e0-0c96-4f38-aaf5-91e69018d624 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:55:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 214 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 139 op/s
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.479 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.480 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.481 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.481 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.507 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.510 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.682 2 DEBUG nova.network.neutron [req-b9b75c77-4441-4d0e-b96a-df55a94793a2 req-0fe84787-6d68-420a-a3e2-b9b39db4ebbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updated VIF entry in instance network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.683 2 DEBUG nova.network.neutron [req-b9b75c77-4441-4d0e-b96a-df55a94793a2 req-0fe84787-6d68-420a-a3e2-b9b39db4ebbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updating instance_info_cache with network_info: [{"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.710 2 DEBUG oslo_concurrency.lockutils [req-b9b75c77-4441-4d0e-b96a-df55a94793a2 req-0fe84787-6d68-420a-a3e2-b9b39db4ebbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.723 2 DEBUG nova.compute.manager [req-e88d0ecc-857e-4097-a3a2-f5afd0cc89d4 req-6907915c-a70b-415d-b610-758daa9ea28d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received event network-vif-plugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.723 2 DEBUG oslo_concurrency.lockutils [req-e88d0ecc-857e-4097-a3a2-f5afd0cc89d4 req-6907915c-a70b-415d-b610-758daa9ea28d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.724 2 DEBUG oslo_concurrency.lockutils [req-e88d0ecc-857e-4097-a3a2-f5afd0cc89d4 req-6907915c-a70b-415d-b610-758daa9ea28d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.724 2 DEBUG oslo_concurrency.lockutils [req-e88d0ecc-857e-4097-a3a2-f5afd0cc89d4 req-6907915c-a70b-415d-b610-758daa9ea28d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.724 2 DEBUG nova.compute.manager [req-e88d0ecc-857e-4097-a3a2-f5afd0cc89d4 req-6907915c-a70b-415d-b610-758daa9ea28d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] No waiting events found dispatching network-vif-plugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.724 2 WARNING nova.compute.manager [req-e88d0ecc-857e-4097-a3a2-f5afd0cc89d4 req-6907915c-a70b-415d-b610-758daa9ea28d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received unexpected event network-vif-plugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 for instance with vm_state deleted and task_state None.
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.800 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.857 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] resizing rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:55:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/950822610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.975 2 INFO nova.virt.libvirt.driver [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Deleting instance files /var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624_del
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.976 2 INFO nova.virt.libvirt.driver [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Deletion of /var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624_del complete
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:55:13 compute-0 nova_compute[259627]: 2025-10-14 08:55:13.983 2 DEBUG nova.objects.instance [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lazy-loading 'migration_context' on Instance uuid b063b9bf-1f88-47d3-a838-a4bcfc5eeecc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.066 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.111 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.117 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.118 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.119 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.151 2 DEBUG nova.network.neutron [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Successfully created port: fac86e41-30dc-481e-a423-10c6cdb3626f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.168 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.169 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.170 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.170 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.171 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.211 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.212 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.236 2 INFO nova.compute.manager [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Took 1.02 seconds to destroy the instance on the hypervisor.
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.237 2 DEBUG oslo.service.loopingcall [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.239 2 DEBUG nova.compute.manager [-] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.239 2 DEBUG nova.network.neutron [-] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.251 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.252 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.284 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.290 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.479 2 DEBUG nova.network.neutron [-] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.497 2 DEBUG nova.network.neutron [-] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.517 2 INFO nova.compute.manager [-] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Took 0.28 seconds to deallocate network for instance.
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.570 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.571 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.669 2 DEBUG oslo_concurrency.processutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.709 2 DEBUG nova.compute.manager [req-09950395-f0e5-4548-8f52-998e4643cedf req-012296bc-3ae8-4f79-bc55-47a88f29b829 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received event network-changed-be863b8b-ed33-4cec-a274-d62c9bd4ac05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.711 2 DEBUG nova.compute.manager [req-09950395-f0e5-4548-8f52-998e4643cedf req-012296bc-3ae8-4f79-bc55-47a88f29b829 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Refreshing instance network info cache due to event network-changed-be863b8b-ed33-4cec-a274-d62c9bd4ac05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.713 2 DEBUG oslo_concurrency.lockutils [req-09950395-f0e5-4548-8f52-998e4643cedf req-012296bc-3ae8-4f79-bc55-47a88f29b829 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.714 2 DEBUG oslo_concurrency.lockutils [req-09950395-f0e5-4548-8f52-998e4643cedf req-012296bc-3ae8-4f79-bc55-47a88f29b829 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.716 2 DEBUG nova.network.neutron [req-09950395-f0e5-4548-8f52-998e4643cedf req-012296bc-3ae8-4f79-bc55-47a88f29b829 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Refreshing network info cache for port be863b8b-ed33-4cec-a274-d62c9bd4ac05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:55:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:55:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3352307652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.741 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.845 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.846 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.850 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.850 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.926 2 DEBUG nova.network.neutron [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Successfully updated port: fac86e41-30dc-481e-a423-10c6cdb3626f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:55:14 compute-0 ceph-mon[74249]: pgmap v1094: 305 pgs: 305 active+clean; 214 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 139 op/s
Oct 14 08:55:14 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3352307652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.943 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "refresh_cache-b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.944 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquired lock "refresh_cache-b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:55:14 compute-0 nova_compute[259627]: 2025-10-14 08:55:14.944 2 DEBUG nova.network.neutron [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:55:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:55:15 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/757442718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.079 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.080 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4300MB free_disk=59.9010009765625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.081 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.088 2 DEBUG nova.network.neutron [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.094 2 DEBUG oslo_concurrency.processutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.099 2 DEBUG nova.compute.provider_tree [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.125 2 DEBUG nova.scheduler.client.report [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.149 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.156 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.183 2 INFO nova.scheduler.client.report [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Deleted allocations for instance 2d3012e0-0c96-4f38-aaf5-91e69018d624
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.254 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "2d3012e0-0c96-4f38-aaf5-91e69018d624" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.257 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 30af67a2-4b44-481c-8ab4-296e93c1c517 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.257 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 7b60a7cc-57e5-4833-9541-ed03e9e862ea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.257 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance b063b9bf-1f88-47d3-a838-a4bcfc5eeecc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.257 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.258 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.264 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.974s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.338 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.339 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Ensure instance console log exists: /var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.339 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.339 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.340 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.355 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 237 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 310 op/s
Oct 14 08:55:15 compute-0 ovn_controller[152662]: 2025-10-14T08:55:15Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:74:b6 10.100.0.13
Oct 14 08:55:15 compute-0 ovn_controller[152662]: 2025-10-14T08:55:15Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:74:b6 10.100.0.13
Oct 14 08:55:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:55:15 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4115688616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.848 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.857 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.877 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.907 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 08:55:15 compute-0 nova_compute[259627]: 2025-10-14 08:55:15.908 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:15 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/757442718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:15 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4115688616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.297 2 DEBUG nova.network.neutron [req-09950395-f0e5-4548-8f52-998e4643cedf req-012296bc-3ae8-4f79-bc55-47a88f29b829 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Updated VIF entry in instance network info cache for port be863b8b-ed33-4cec-a274-d62c9bd4ac05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.298 2 DEBUG nova.network.neutron [req-09950395-f0e5-4548-8f52-998e4643cedf req-012296bc-3ae8-4f79-bc55-47a88f29b829 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Updating instance_info_cache with network_info: [{"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.319 2 DEBUG nova.network.neutron [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Updating instance_info_cache with network_info: [{"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.322 2 DEBUG oslo_concurrency.lockutils [req-09950395-f0e5-4548-8f52-998e4643cedf req-012296bc-3ae8-4f79-bc55-47a88f29b829 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.323 2 DEBUG nova.compute.manager [req-09950395-f0e5-4548-8f52-998e4643cedf req-012296bc-3ae8-4f79-bc55-47a88f29b829 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received event network-vif-deleted-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.343 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Releasing lock "refresh_cache-b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.343 2 DEBUG nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Instance network_info: |[{"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.348 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Start _get_guest_xml network_info=[{"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [{'encryption_secret_uuid': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 1, 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.352 2 WARNING nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.359 2 DEBUG nova.virt.libvirt.host [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.360 2 DEBUG nova.virt.libvirt.host [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.364 2 DEBUG nova.virt.libvirt.host [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.365 2 DEBUG nova.virt.libvirt.host [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.365 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.366 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:54:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='1507542026',id=28,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-352684228',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.366 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.367 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.367 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.367 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.368 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.368 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.368 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.369 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.369 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.369 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.374 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:55:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2424509819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.847 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.848 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.907 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:55:16 compute-0 ceph-mon[74249]: pgmap v1095: 305 pgs: 305 active+clean; 237 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 310 op/s
Oct 14 08:55:16 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2424509819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:55:16 compute-0 nova_compute[259627]: 2025-10-14 08:55:16.953 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.104 2 DEBUG nova.compute.manager [req-f4b207c5-5204-4745-b7cb-26a04b28c0d3 req-16c59d5b-15a3-467e-b41f-594d785a7880 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received event network-changed-fac86e41-30dc-481e-a423-10c6cdb3626f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.105 2 DEBUG nova.compute.manager [req-f4b207c5-5204-4745-b7cb-26a04b28c0d3 req-16c59d5b-15a3-467e-b41f-594d785a7880 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Refreshing instance network info cache due to event network-changed-fac86e41-30dc-481e-a423-10c6cdb3626f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.105 2 DEBUG oslo_concurrency.lockutils [req-f4b207c5-5204-4745-b7cb-26a04b28c0d3 req-16c59d5b-15a3-467e-b41f-594d785a7880 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.105 2 DEBUG oslo_concurrency.lockutils [req-f4b207c5-5204-4745-b7cb-26a04b28c0d3 req-16c59d5b-15a3-467e-b41f-594d785a7880 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.105 2 DEBUG nova.network.neutron [req-f4b207c5-5204-4745-b7cb-26a04b28c0d3 req-16c59d5b-15a3-467e-b41f-594d785a7880 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Refreshing network info cache for port fac86e41-30dc-481e-a423-10c6cdb3626f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:55:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:55:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2087898165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.243 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.262 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.265 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.285 2 DEBUG nova.compute.manager [req-180f6aeb-1c53-46bf-b839-d81509ce0b68 req-cbcba39e-eda8-444c-b0d7-1391dcbc3205 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received event network-changed-be863b8b-ed33-4cec-a274-d62c9bd4ac05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.285 2 DEBUG nova.compute.manager [req-180f6aeb-1c53-46bf-b839-d81509ce0b68 req-cbcba39e-eda8-444c-b0d7-1391dcbc3205 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Refreshing instance network info cache due to event network-changed-be863b8b-ed33-4cec-a274-d62c9bd4ac05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.286 2 DEBUG oslo_concurrency.lockutils [req-180f6aeb-1c53-46bf-b839-d81509ce0b68 req-cbcba39e-eda8-444c-b0d7-1391dcbc3205 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.286 2 DEBUG oslo_concurrency.lockutils [req-180f6aeb-1c53-46bf-b839-d81509ce0b68 req-cbcba39e-eda8-444c-b0d7-1391dcbc3205 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.286 2 DEBUG nova.network.neutron [req-180f6aeb-1c53-46bf-b839-d81509ce0b68 req-cbcba39e-eda8-444c-b0d7-1391dcbc3205 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Refreshing network info cache for port be863b8b-ed33-4cec-a274-d62c9bd4ac05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:55:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:55:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 237 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.6 MiB/s wr, 232 op/s
Oct 14 08:55:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:55:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/189364581' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.687 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.688 2 DEBUG nova.virt.libvirt.vif [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:55:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1648521031',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1648521031',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(28),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1648521031',id=7,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=28,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7g6TTsjbjlUgcn3NOlTdjTTdxkB/tOz/SnrUnsyC2B2DiKRfdyGeRG/baba8J8u0sFhOnSigOVqumIulB1xwu2rOWWivaBSyCqPcCAc1UiuhefYZEgIiwxztAfhh5GqA==',key_name='tempest-keypair-2141010768',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ac87003cad443c2b75e49ebdefe379c',ramdisk_id='',reservation_id='r-3ocp7rye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-632252786',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-632252786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:55:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='654cc6be69694fcd8058cc5a5eb78223',uuid=b063b9bf-1f88-47d3-a838-a4bcfc5eeecc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.689 2 DEBUG nova.network.os_vif_util [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converting VIF {"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.690 2 DEBUG nova.network.os_vif_util [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:72:b8,bridge_name='br-int',has_traffic_filtering=True,id=fac86e41-30dc-481e-a423-10c6cdb3626f,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfac86e41-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.692 2 DEBUG nova.objects.instance [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lazy-loading 'pci_devices' on Instance uuid b063b9bf-1f88-47d3-a838-a4bcfc5eeecc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.710 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:55:17 compute-0 nova_compute[259627]:   <uuid>b063b9bf-1f88-47d3-a838-a4bcfc5eeecc</uuid>
Oct 14 08:55:17 compute-0 nova_compute[259627]:   <name>instance-00000007</name>
Oct 14 08:55:17 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:55:17 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:55:17 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1648521031</nova:name>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:55:16</nova:creationTime>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <nova:flavor name="tempest-flavor_with_ephemeral_1-352684228">
Oct 14 08:55:17 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:55:17 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:55:17 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:55:17 compute-0 nova_compute[259627]:         <nova:ephemeral>1</nova:ephemeral>
Oct 14 08:55:17 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:55:17 compute-0 nova_compute[259627]:         <nova:user uuid="654cc6be69694fcd8058cc5a5eb78223">tempest-ServersWithSpecificFlavorTestJSON-632252786-project-member</nova:user>
Oct 14 08:55:17 compute-0 nova_compute[259627]:         <nova:project uuid="3ac87003cad443c2b75e49ebdefe379c">tempest-ServersWithSpecificFlavorTestJSON-632252786</nova:project>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:55:17 compute-0 nova_compute[259627]:         <nova:port uuid="fac86e41-30dc-481e-a423-10c6cdb3626f">
Oct 14 08:55:17 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:55:17 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:55:17 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <system>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <entry name="serial">b063b9bf-1f88-47d3-a838-a4bcfc5eeecc</entry>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <entry name="uuid">b063b9bf-1f88-47d3-a838-a4bcfc5eeecc</entry>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     </system>
Oct 14 08:55:17 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:55:17 compute-0 nova_compute[259627]:   <os>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:   </os>
Oct 14 08:55:17 compute-0 nova_compute[259627]:   <features>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:   </features>
Oct 14 08:55:17 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:55:17 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:55:17 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk">
Oct 14 08:55:17 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       </source>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:55:17 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.eph0">
Oct 14 08:55:17 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       </source>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:55:17 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <target dev="vdb" bus="virtio"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.config">
Oct 14 08:55:17 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       </source>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:55:17 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:e1:72:b8"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <target dev="tapfac86e41-30"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc/console.log" append="off"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <video>
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     </video>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:55:17 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:55:17 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:55:17 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:55:17 compute-0 nova_compute[259627]: </domain>
Oct 14 08:55:17 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.711 2 DEBUG nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Preparing to wait for external event network-vif-plugged-fac86e41-30dc-481e-a423-10c6cdb3626f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.711 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.711 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.712 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.713 2 DEBUG nova.virt.libvirt.vif [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:55:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1648521031',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1648521031',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(28),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1648521031',id=7,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=28,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7g6TTsjbjlUgcn3NOlTdjTTdxkB/tOz/SnrUnsyC2B2DiKRfdyGeRG/baba8J8u0sFhOnSigOVqumIulB1xwu2rOWWivaBSyCqPcCAc1UiuhefYZEgIiwxztAfhh5GqA==',key_name='tempest-keypair-2141010768',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ac87003cad443c2b75e49ebdefe379c',ramdisk_id='',reservation_id='r-3ocp7rye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-632252786',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-632252786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:55:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='654cc6be69694fcd8058cc5a5eb78223',uuid=b063b9bf-1f88-47d3-a838-a4bcfc5eeecc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.713 2 DEBUG nova.network.os_vif_util [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converting VIF {"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.714 2 DEBUG nova.network.os_vif_util [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:72:b8,bridge_name='br-int',has_traffic_filtering=True,id=fac86e41-30dc-481e-a423-10c6cdb3626f,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfac86e41-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.714 2 DEBUG os_vif [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:72:b8,bridge_name='br-int',has_traffic_filtering=True,id=fac86e41-30dc-481e-a423-10c6cdb3626f,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfac86e41-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.716 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.716 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.720 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfac86e41-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.721 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfac86e41-30, col_values=(('external_ids', {'iface-id': 'fac86e41-30dc-481e-a423-10c6cdb3626f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:72:b8', 'vm-uuid': 'b063b9bf-1f88-47d3-a838-a4bcfc5eeecc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:17 compute-0 NetworkManager[44885]: <info>  [1760432117.7657] manager: (tapfac86e41-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.771 2 INFO os_vif [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:72:b8,bridge_name='br-int',has_traffic_filtering=True,id=fac86e41-30dc-481e-a423-10c6cdb3626f,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfac86e41-30')
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.833 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.833 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.834 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.834 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] No VIF found with MAC fa:16:3e:e1:72:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.835 2 INFO nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Using config drive
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.869 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:17 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2087898165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:55:17 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/189364581' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 08:55:17 compute-0 nova_compute[259627]: 2025-10-14 08:55:17.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 08:55:18 compute-0 nova_compute[259627]: 2025-10-14 08:55:18.001 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 14 08:55:18 compute-0 nova_compute[259627]: 2025-10-14 08:55:18.416 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:55:18 compute-0 nova_compute[259627]: 2025-10-14 08:55:18.417 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:55:18 compute-0 nova_compute[259627]: 2025-10-14 08:55:18.417 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 08:55:18 compute-0 nova_compute[259627]: 2025-10-14 08:55:18.417 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 30af67a2-4b44-481c-8ab4-296e93c1c517 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:55:18 compute-0 nova_compute[259627]: 2025-10-14 08:55:18.811 2 INFO nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Creating config drive at /var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc/disk.config
Oct 14 08:55:18 compute-0 nova_compute[259627]: 2025-10-14 08:55:18.816 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplmuav2_e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:18 compute-0 nova_compute[259627]: 2025-10-14 08:55:18.957 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplmuav2_e" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:18 compute-0 ceph-mon[74249]: pgmap v1096: 305 pgs: 305 active+clean; 237 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.6 MiB/s wr, 232 op/s
Oct 14 08:55:18 compute-0 nova_compute[259627]: 2025-10-14 08:55:18.983 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:18 compute-0 nova_compute[259627]: 2025-10-14 08:55:18.986 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc/disk.config b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.159 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc/disk.config b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.159 2 INFO nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Deleting local config drive /var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc/disk.config because it was imported into RBD.
Oct 14 08:55:19 compute-0 kernel: tapfac86e41-30: entered promiscuous mode
Oct 14 08:55:19 compute-0 NetworkManager[44885]: <info>  [1760432119.2203] manager: (tapfac86e41-30): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Oct 14 08:55:19 compute-0 ovn_controller[152662]: 2025-10-14T08:55:19Z|00045|binding|INFO|Claiming lport fac86e41-30dc-481e-a423-10c6cdb3626f for this chassis.
Oct 14 08:55:19 compute-0 ovn_controller[152662]: 2025-10-14T08:55:19Z|00046|binding|INFO|fac86e41-30dc-481e-a423-10c6cdb3626f: Claiming fa:16:3e:e1:72:b8 10.100.0.3
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.229 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:72:b8 10.100.0.3'], port_security=['fa:16:3e:e1:72:b8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b063b9bf-1f88-47d3-a838-a4bcfc5eeecc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ac87003cad443c2b75e49ebdefe379c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8a9f5d33-cc0d-455f-8821-c23805bbda66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a071297a-bec1-4fd2-a338-694b6508cca6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=fac86e41-30dc-481e-a423-10c6cdb3626f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.231 162547 INFO neutron.agent.ovn.metadata.agent [-] Port fac86e41-30dc-481e-a423-10c6cdb3626f in datapath 6f970eb9-83e1-4efc-b15d-b5885b9eabe7 bound to our chassis
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.232 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f970eb9-83e1-4efc-b15d-b5885b9eabe7
Oct 14 08:55:19 compute-0 ovn_controller[152662]: 2025-10-14T08:55:19Z|00047|binding|INFO|Setting lport fac86e41-30dc-481e-a423-10c6cdb3626f ovn-installed in OVS
Oct 14 08:55:19 compute-0 ovn_controller[152662]: 2025-10-14T08:55:19Z|00048|binding|INFO|Setting lport fac86e41-30dc-481e-a423-10c6cdb3626f up in Southbound
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.244 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4b39d51f-5657-4521-ab09-670e004ee4c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.249 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6f970eb9-81 in ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.251 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6f970eb9-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.251 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dda2074c-efa3-4f25-85cf-e1de8a801b87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.252 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a16f4d69-d340-464a-a8e9-b7f46119db65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:19 compute-0 systemd-machined[214636]: New machine qemu-7-instance-00000007.
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.264 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[f283eab8-5db7-4bde-8d0a-a3eba4467a75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:19 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.281 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb5c8b5-e106-403d-a953-e7f5eea9fa32]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:19 compute-0 systemd-udevd[278377]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:55:19 compute-0 NetworkManager[44885]: <info>  [1760432119.2962] device (tapfac86e41-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:55:19 compute-0 NetworkManager[44885]: <info>  [1760432119.2992] device (tapfac86e41-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.314 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0fc214-242f-4a38-a194-20838608b738]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.320 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8938c407-23e9-4e01-b393-83d02b0cbdbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:19 compute-0 NetworkManager[44885]: <info>  [1760432119.3216] manager: (tap6f970eb9-80): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Oct 14 08:55:19 compute-0 systemd-udevd[278381]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.356 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5e02f26b-8668-435e-8993-85f85844f150]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.362 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9ccfb6-6cf1-43b4-936d-d25af52fd917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.382 2 DEBUG oslo_concurrency.lockutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.382 2 DEBUG oslo_concurrency.lockutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.383 2 DEBUG oslo_concurrency.lockutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.383 2 DEBUG oslo_concurrency.lockutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.383 2 DEBUG oslo_concurrency.lockutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.384 2 INFO nova.compute.manager [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Terminating instance
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.385 2 DEBUG nova.compute.manager [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:55:19 compute-0 NetworkManager[44885]: <info>  [1760432119.3856] device (tap6f970eb9-80): carrier: link connected
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.392 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3da2b2f9-5a4a-4370-ad92-aa1c1fd261c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.409 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5889c940-6064-49bf-817e-9a8c61df3587]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f970eb9-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:30:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589814, 'reachable_time': 42086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278407, 'error': None, 'target': 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.423 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d872b3cc-d304-4a44-859c-ea434fe10a99]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee6:30aa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589814, 'tstamp': 589814}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278408, 'error': None, 'target': 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:19 compute-0 kernel: tapbe863b8b-ed (unregistering): left promiscuous mode
Oct 14 08:55:19 compute-0 NetworkManager[44885]: <info>  [1760432119.4311] device (tapbe863b8b-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:55:19 compute-0 ovn_controller[152662]: 2025-10-14T08:55:19Z|00049|binding|INFO|Releasing lport be863b8b-ed33-4cec-a274-d62c9bd4ac05 from this chassis (sb_readonly=0)
Oct 14 08:55:19 compute-0 ovn_controller[152662]: 2025-10-14T08:55:19Z|00050|binding|INFO|Setting lport be863b8b-ed33-4cec-a274-d62c9bd4ac05 down in Southbound
Oct 14 08:55:19 compute-0 ovn_controller[152662]: 2025-10-14T08:55:19Z|00051|binding|INFO|Removing iface tapbe863b8b-ed ovn-installed in OVS
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.441 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4729077e-56f3-4d69-b4dc-76700b573ec9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f970eb9-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:30:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589814, 'reachable_time': 42086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278409, 'error': None, 'target': 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.444 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:74:b6 10.100.0.13'], port_security=['fa:16:3e:c0:74:b6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7b60a7cc-57e5-4833-9541-ed03e9e862ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5b8fd07d6d54bda9a0257bf72d4b37f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e013b807-3b2c-404b-b699-697e2a823013', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20317fae-45dd-4464-971e-a345fe497251, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=be863b8b-ed33-4cec-a274-d62c9bd4ac05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.449 2 DEBUG nova.compute.manager [req-27a0cc50-ba33-491d-9b21-8e1bb96ca0f1 req-c250d385-c2f9-434c-9f20-a8b56fb9902d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received event network-vif-plugged-fac86e41-30dc-481e-a423-10c6cdb3626f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.449 2 DEBUG oslo_concurrency.lockutils [req-27a0cc50-ba33-491d-9b21-8e1bb96ca0f1 req-c250d385-c2f9-434c-9f20-a8b56fb9902d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.449 2 DEBUG oslo_concurrency.lockutils [req-27a0cc50-ba33-491d-9b21-8e1bb96ca0f1 req-c250d385-c2f9-434c-9f20-a8b56fb9902d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.450 2 DEBUG oslo_concurrency.lockutils [req-27a0cc50-ba33-491d-9b21-8e1bb96ca0f1 req-c250d385-c2f9-434c-9f20-a8b56fb9902d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.450 2 DEBUG nova.compute.manager [req-27a0cc50-ba33-491d-9b21-8e1bb96ca0f1 req-c250d385-c2f9-434c-9f20-a8b56fb9902d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Processing event network-vif-plugged-fac86e41-30dc-481e-a423-10c6cdb3626f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 237 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.6 MiB/s wr, 232 op/s
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.492 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[772fa4df-d89c-49de-90f3-ddd18dc1662b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:19 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct 14 08:55:19 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 13.355s CPU time.
Oct 14 08:55:19 compute-0 systemd-machined[214636]: Machine qemu-5-instance-00000005 terminated.
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.545 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[72edbf3a-3903-4cfd-8a49-52aba5bb7a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.548 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f970eb9-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.548 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.548 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f970eb9-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:19 compute-0 NetworkManager[44885]: <info>  [1760432119.5507] manager: (tap6f970eb9-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:19 compute-0 kernel: tap6f970eb9-80: entered promiscuous mode
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.555 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f970eb9-80, col_values=(('external_ids', {'iface-id': '6a62b55c-d140-4dc2-a487-c292e81e63e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:19 compute-0 ovn_controller[152662]: 2025-10-14T08:55:19Z|00052|binding|INFO|Releasing lport 6a62b55c-d140-4dc2-a487-c292e81e63e0 from this chassis (sb_readonly=0)
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.576 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6f970eb9-83e1-4efc-b15d-b5885b9eabe7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6f970eb9-83e1-4efc-b15d-b5885b9eabe7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.576 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3005aee6-3ebb-4244-ba23-7ed26dee72bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.577 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-6f970eb9-83e1-4efc-b15d-b5885b9eabe7
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/6f970eb9-83e1-4efc-b15d-b5885b9eabe7.pid.haproxy
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 6f970eb9-83e1-4efc-b15d-b5885b9eabe7
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:55:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.578 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'env', 'PROCESS_TAG=haproxy-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6f970eb9-83e1-4efc-b15d-b5885b9eabe7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:55:19 compute-0 NetworkManager[44885]: <info>  [1760432119.6045] manager: (tapbe863b8b-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.621 2 INFO nova.virt.libvirt.driver [-] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Instance destroyed successfully.
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.621 2 DEBUG nova.objects.instance [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lazy-loading 'resources' on Instance uuid 7b60a7cc-57e5-4833-9541-ed03e9e862ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.640 2 DEBUG nova.virt.libvirt.vif [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1096031615',display_name='tempest-FloatingIPsAssociationTestJSON-server-1096031615',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1096031615',id=5,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:55:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f5b8fd07d6d54bda9a0257bf72d4b37f',ramdisk_id='',reservation_id='r-opxie8ra',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1304888620',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1304888620-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:55:03Z,user_data=None,user_id='831826dabb48463c92f24c277df4039e',uuid=7b60a7cc-57e5-4833-9541-ed03e9e862ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.640 2 DEBUG nova.network.os_vif_util [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converting VIF {"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.641 2 DEBUG nova.network.os_vif_util [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c0:74:b6,bridge_name='br-int',has_traffic_filtering=True,id=be863b8b-ed33-4cec-a274-d62c9bd4ac05,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe863b8b-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.641 2 DEBUG os_vif [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:74:b6,bridge_name='br-int',has_traffic_filtering=True,id=be863b8b-ed33-4cec-a274-d62c9bd4ac05,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe863b8b-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe863b8b-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.649 2 INFO os_vif [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:74:b6,bridge_name='br-int',has_traffic_filtering=True,id=be863b8b-ed33-4cec-a274-d62c9bd4ac05,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe863b8b-ed')
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.765 2 DEBUG nova.network.neutron [req-180f6aeb-1c53-46bf-b839-d81509ce0b68 req-cbcba39e-eda8-444c-b0d7-1391dcbc3205 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Updated VIF entry in instance network info cache for port be863b8b-ed33-4cec-a274-d62c9bd4ac05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.766 2 DEBUG nova.network.neutron [req-180f6aeb-1c53-46bf-b839-d81509ce0b68 req-cbcba39e-eda8-444c-b0d7-1391dcbc3205 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Updating instance_info_cache with network_info: [{"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.771 2 DEBUG nova.network.neutron [req-f4b207c5-5204-4745-b7cb-26a04b28c0d3 req-16c59d5b-15a3-467e-b41f-594d785a7880 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Updated VIF entry in instance network info cache for port fac86e41-30dc-481e-a423-10c6cdb3626f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.772 2 DEBUG nova.network.neutron [req-f4b207c5-5204-4745-b7cb-26a04b28c0d3 req-16c59d5b-15a3-467e-b41f-594d785a7880 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Updating instance_info_cache with network_info: [{"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.792 2 DEBUG oslo_concurrency.lockutils [req-f4b207c5-5204-4745-b7cb-26a04b28c0d3 req-16c59d5b-15a3-467e-b41f-594d785a7880 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:55:19 compute-0 nova_compute[259627]: 2025-10-14 08:55:19.792 2 DEBUG oslo_concurrency.lockutils [req-180f6aeb-1c53-46bf-b839-d81509ce0b68 req-cbcba39e-eda8-444c-b0d7-1391dcbc3205 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.025 2 INFO nova.virt.libvirt.driver [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Deleting instance files /var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea_del
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.026 2 INFO nova.virt.libvirt.driver [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Deletion of /var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea_del complete
Oct 14 08:55:20 compute-0 podman[278534]: 2025-10-14 08:55:20.033943317 +0000 UTC m=+0.064150103 container create 234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 14 08:55:20 compute-0 systemd[1]: Started libpod-conmon-234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a.scope.
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.077 2 INFO nova.compute.manager [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Took 0.69 seconds to destroy the instance on the hypervisor.
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.078 2 DEBUG oslo.service.loopingcall [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.078 2 DEBUG nova.compute.manager [-] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.078 2 DEBUG nova.network.neutron [-] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:55:20 compute-0 podman[278534]: 2025-10-14 08:55:19.993443048 +0000 UTC m=+0.023649864 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:55:20 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/409964ad013ad7282d1b3ada20703b51786ffd8f880c945dcb857889c1c15880/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:55:20 compute-0 podman[278534]: 2025-10-14 08:55:20.119883897 +0000 UTC m=+0.150090703 container init 234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 08:55:20 compute-0 podman[278534]: 2025-10-14 08:55:20.124849089 +0000 UTC m=+0.155055875 container start 234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 08:55:20 compute-0 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[278549]: [NOTICE]   (278553) : New worker (278555) forked
Oct 14 08:55:20 compute-0 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[278549]: [NOTICE]   (278553) : Loading success.
Oct 14 08:55:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.171 162547 INFO neutron.agent.ovn.metadata.agent [-] Port be863b8b-ed33-4cec-a274-d62c9bd4ac05 in datapath 92d50a40-95c8-4c0a-a4ab-d459f68516aa unbound from our chassis
Oct 14 08:55:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.173 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92d50a40-95c8-4c0a-a4ab-d459f68516aa
Oct 14 08:55:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.186 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5200abe5-75f0-48b7-9059-930ec87e21ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.216 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0fba0767-c5ee-4fa8-aa07-d453a5971fff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.219 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5b277a-610c-4a92-b2ef-2d66c051f93e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.252 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ec83f1b7-ef74-4289-bcaa-9d2e3f940d5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.268 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432120.2678437, b063b9bf-1f88-47d3-a838-a4bcfc5eeecc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:55:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.267 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ec32ffb7-7a89-479f-93f0-d8645191cf55]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92d50a40-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:6c:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586713, 'reachable_time': 30695, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278569, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.268 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] VM Started (Lifecycle Event)
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.270 2 DEBUG nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.273 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.276 2 INFO nova.virt.libvirt.driver [-] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Instance spawned successfully.
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.276 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:55:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.280 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4922b516-6f14-4f0e-b4e3-81ef5ed5a1f6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap92d50a40-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586730, 'tstamp': 586730}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278570, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap92d50a40-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586733, 'tstamp': 586733}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278570, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.281 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92d50a40-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.284 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92d50a40-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.285 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:55:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.285 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92d50a40-90, col_values=(('external_ids', {'iface-id': '2c98ab3c-01b9-41bc-bf2f-b9baea9e1b52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.285 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.292 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.298 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.301 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.301 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.301 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.301 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.302 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.302 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.340 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.340 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432120.2698064, b063b9bf-1f88-47d3-a838-a4bcfc5eeecc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.340 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] VM Paused (Lifecycle Event)
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.377 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.380 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432120.2727513, b063b9bf-1f88-47d3-a838-a4bcfc5eeecc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.380 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] VM Resumed (Lifecycle Event)
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.392 2 INFO nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Took 7.15 seconds to spawn the instance on the hypervisor.
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.393 2 DEBUG nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.409 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.411 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.448 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.469 2 INFO nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Took 8.16 seconds to build instance.
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.491 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.299s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.544 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updating instance_info_cache with network_info: [{"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.556 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.556 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.557 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:55:20 compute-0 ceph-mon[74249]: pgmap v1097: 305 pgs: 305 active+clean; 237 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.6 MiB/s wr, 232 op/s
Oct 14 08:55:20 compute-0 nova_compute[259627]: 2025-10-14 08:55:20.988 2 DEBUG nova.network.neutron [-] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.008 2 INFO nova.compute.manager [-] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Took 0.93 seconds to deallocate network for instance.
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.061 2 DEBUG oslo_concurrency.lockutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.062 2 DEBUG oslo_concurrency.lockutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.184 2 DEBUG oslo_concurrency.processutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.215 2 DEBUG nova.compute.manager [req-9aa9ea90-7807-4365-a2bd-b6485c279ecc req-cd516856-14db-44d6-922b-fdbcb59a14c7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received event network-vif-deleted-be863b8b-ed33-4cec-a274-d62c9bd4ac05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 169 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 293 op/s
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.557 2 DEBUG nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received event network-vif-plugged-fac86e41-30dc-481e-a423-10c6cdb3626f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.557 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.558 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.558 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.558 2 DEBUG nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] No waiting events found dispatching network-vif-plugged-fac86e41-30dc-481e-a423-10c6cdb3626f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.558 2 WARNING nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received unexpected event network-vif-plugged-fac86e41-30dc-481e-a423-10c6cdb3626f for instance with vm_state active and task_state None.
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.558 2 DEBUG nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received event network-vif-unplugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.559 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.559 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.559 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.559 2 DEBUG nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] No waiting events found dispatching network-vif-unplugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.559 2 WARNING nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received unexpected event network-vif-unplugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 for instance with vm_state deleted and task_state None.
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.559 2 DEBUG nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received event network-vif-plugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.559 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.560 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.560 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.560 2 DEBUG nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] No waiting events found dispatching network-vif-plugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.560 2 WARNING nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received unexpected event network-vif-plugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 for instance with vm_state deleted and task_state None.
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.560 2 DEBUG nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.560 2 DEBUG nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing instance network info cache due to event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.561 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.561 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.561 2 DEBUG nova.network.neutron [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:55:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:55:21 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2148715762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.599 2 DEBUG oslo_concurrency.processutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.604 2 DEBUG nova.compute.provider_tree [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.619 2 DEBUG nova.scheduler.client.report [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.643 2 DEBUG oslo_concurrency.lockutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.667 2 INFO nova.scheduler.client.report [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Deleted allocations for instance 7b60a7cc-57e5-4833-9541-ed03e9e862ea
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.741 2 DEBUG oslo_concurrency.lockutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.359s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:21 compute-0 nova_compute[259627]: 2025-10-14 08:55:21.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:21 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2148715762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:55:22 compute-0 ceph-mon[74249]: pgmap v1098: 305 pgs: 305 active+clean; 169 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 293 op/s
Oct 14 08:55:23 compute-0 nova_compute[259627]: 2025-10-14 08:55:23.116 2 DEBUG nova.network.neutron [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updated VIF entry in instance network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:55:23 compute-0 nova_compute[259627]: 2025-10-14 08:55:23.117 2 DEBUG nova.network.neutron [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updating instance_info_cache with network_info: [{"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:23 compute-0 nova_compute[259627]: 2025-10-14 08:55:23.131 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:55:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 169 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 232 op/s
Oct 14 08:55:24 compute-0 nova_compute[259627]: 2025-10-14 08:55:24.306 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432109.2670877, f1df3849-6811-41a9-9c70-f10a6863b4f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:55:24 compute-0 nova_compute[259627]: 2025-10-14 08:55:24.306 2 INFO nova.compute.manager [-] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] VM Stopped (Lifecycle Event)
Oct 14 08:55:24 compute-0 nova_compute[259627]: 2025-10-14 08:55:24.334 2 DEBUG nova.compute.manager [None req-a09f45e9-6aa8-4c97-a40b-7a98db0f4725 - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:24 compute-0 nova_compute[259627]: 2025-10-14 08:55:24.372 2 DEBUG nova.compute.manager [req-c3f27d7b-db16-499f-8d11-93f1dd827d71 req-f93a8b40-f99b-4513-b65a-f8f6e63bcaae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received event network-changed-fac86e41-30dc-481e-a423-10c6cdb3626f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:24 compute-0 nova_compute[259627]: 2025-10-14 08:55:24.373 2 DEBUG nova.compute.manager [req-c3f27d7b-db16-499f-8d11-93f1dd827d71 req-f93a8b40-f99b-4513-b65a-f8f6e63bcaae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Refreshing instance network info cache due to event network-changed-fac86e41-30dc-481e-a423-10c6cdb3626f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:55:24 compute-0 nova_compute[259627]: 2025-10-14 08:55:24.373 2 DEBUG oslo_concurrency.lockutils [req-c3f27d7b-db16-499f-8d11-93f1dd827d71 req-f93a8b40-f99b-4513-b65a-f8f6e63bcaae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:55:24 compute-0 nova_compute[259627]: 2025-10-14 08:55:24.374 2 DEBUG oslo_concurrency.lockutils [req-c3f27d7b-db16-499f-8d11-93f1dd827d71 req-f93a8b40-f99b-4513-b65a-f8f6e63bcaae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:55:24 compute-0 nova_compute[259627]: 2025-10-14 08:55:24.374 2 DEBUG nova.network.neutron [req-c3f27d7b-db16-499f-8d11-93f1dd827d71 req-f93a8b40-f99b-4513-b65a-f8f6e63bcaae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Refreshing network info cache for port fac86e41-30dc-481e-a423-10c6cdb3626f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:55:24 compute-0 nova_compute[259627]: 2025-10-14 08:55:24.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:24 compute-0 ceph-mon[74249]: pgmap v1099: 305 pgs: 305 active+clean; 169 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 232 op/s
Oct 14 08:55:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 169 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 301 op/s
Oct 14 08:55:25 compute-0 podman[278595]: 2025-10-14 08:55:25.653244352 +0000 UTC m=+0.066142603 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 08:55:25 compute-0 podman[278594]: 2025-10-14 08:55:25.655029406 +0000 UTC m=+0.070245144 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 14 08:55:26 compute-0 nova_compute[259627]: 2025-10-14 08:55:26.137 2 DEBUG nova.network.neutron [req-c3f27d7b-db16-499f-8d11-93f1dd827d71 req-f93a8b40-f99b-4513-b65a-f8f6e63bcaae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Updated VIF entry in instance network info cache for port fac86e41-30dc-481e-a423-10c6cdb3626f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:55:26 compute-0 nova_compute[259627]: 2025-10-14 08:55:26.137 2 DEBUG nova.network.neutron [req-c3f27d7b-db16-499f-8d11-93f1dd827d71 req-f93a8b40-f99b-4513-b65a-f8f6e63bcaae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Updating instance_info_cache with network_info: [{"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:26 compute-0 nova_compute[259627]: 2025-10-14 08:55:26.162 2 DEBUG oslo_concurrency.lockutils [req-c3f27d7b-db16-499f-8d11-93f1dd827d71 req-f93a8b40-f99b-4513-b65a-f8f6e63bcaae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:55:26 compute-0 nova_compute[259627]: 2025-10-14 08:55:26.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:26 compute-0 nova_compute[259627]: 2025-10-14 08:55:26.981 2 DEBUG nova.compute.manager [req-d5623e39-985c-4fce-9b0c-2644d4004af6 req-b74c6658-7860-4a9d-8085-b62d16c33f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:26 compute-0 nova_compute[259627]: 2025-10-14 08:55:26.982 2 DEBUG nova.compute.manager [req-d5623e39-985c-4fce-9b0c-2644d4004af6 req-b74c6658-7860-4a9d-8085-b62d16c33f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing instance network info cache due to event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:55:26 compute-0 nova_compute[259627]: 2025-10-14 08:55:26.982 2 DEBUG oslo_concurrency.lockutils [req-d5623e39-985c-4fce-9b0c-2644d4004af6 req-b74c6658-7860-4a9d-8085-b62d16c33f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:55:26 compute-0 nova_compute[259627]: 2025-10-14 08:55:26.982 2 DEBUG oslo_concurrency.lockutils [req-d5623e39-985c-4fce-9b0c-2644d4004af6 req-b74c6658-7860-4a9d-8085-b62d16c33f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:55:26 compute-0 nova_compute[259627]: 2025-10-14 08:55:26.983 2 DEBUG nova.network.neutron [req-d5623e39-985c-4fce-9b0c-2644d4004af6 req-b74c6658-7860-4a9d-8085-b62d16c33f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:55:27 compute-0 ceph-mon[74249]: pgmap v1100: 305 pgs: 305 active+clean; 169 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 301 op/s
Oct 14 08:55:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:55:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 169 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 107 KiB/s wr, 129 op/s
Oct 14 08:55:28 compute-0 nova_compute[259627]: 2025-10-14 08:55:28.438 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432113.4367025, 2d3012e0-0c96-4f38-aaf5-91e69018d624 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:55:28 compute-0 nova_compute[259627]: 2025-10-14 08:55:28.439 2 INFO nova.compute.manager [-] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] VM Stopped (Lifecycle Event)
Oct 14 08:55:28 compute-0 nova_compute[259627]: 2025-10-14 08:55:28.465 2 DEBUG nova.compute.manager [None req-655cfc2e-4bee-466f-b2ab-eaf69d7e1850 - - - - - -] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:29 compute-0 ceph-mon[74249]: pgmap v1101: 305 pgs: 305 active+clean; 169 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 107 KiB/s wr, 129 op/s
Oct 14 08:55:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 169 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 107 KiB/s wr, 129 op/s
Oct 14 08:55:29 compute-0 nova_compute[259627]: 2025-10-14 08:55:29.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:30 compute-0 nova_compute[259627]: 2025-10-14 08:55:30.606 2 DEBUG nova.network.neutron [req-d5623e39-985c-4fce-9b0c-2644d4004af6 req-b74c6658-7860-4a9d-8085-b62d16c33f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updated VIF entry in instance network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:55:30 compute-0 nova_compute[259627]: 2025-10-14 08:55:30.607 2 DEBUG nova.network.neutron [req-d5623e39-985c-4fce-9b0c-2644d4004af6 req-b74c6658-7860-4a9d-8085-b62d16c33f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updating instance_info_cache with network_info: [{"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:30 compute-0 nova_compute[259627]: 2025-10-14 08:55:30.629 2 DEBUG oslo_concurrency.lockutils [req-d5623e39-985c-4fce-9b0c-2644d4004af6 req-b74c6658-7860-4a9d-8085-b62d16c33f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:55:31 compute-0 ceph-mon[74249]: pgmap v1102: 305 pgs: 305 active+clean; 169 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 107 KiB/s wr, 129 op/s
Oct 14 08:55:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.338 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.339 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 08:55:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 169 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 107 KiB/s wr, 130 op/s
Oct 14 08:55:31 compute-0 ovn_controller[152662]: 2025-10-14T08:55:31Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e1:72:b8 10.100.0.3
Oct 14 08:55:31 compute-0 ovn_controller[152662]: 2025-10-14T08:55:31Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:72:b8 10.100.0.3
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.580 2 DEBUG oslo_concurrency.lockutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.581 2 DEBUG oslo_concurrency.lockutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.581 2 DEBUG oslo_concurrency.lockutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.581 2 DEBUG oslo_concurrency.lockutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.581 2 DEBUG oslo_concurrency.lockutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.582 2 INFO nova.compute.manager [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Terminating instance
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.583 2 DEBUG nova.compute.manager [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:55:31 compute-0 kernel: tapf0f1dcbf-2b (unregistering): left promiscuous mode
Oct 14 08:55:31 compute-0 NetworkManager[44885]: <info>  [1760432131.6447] device (tapf0f1dcbf-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:31 compute-0 ovn_controller[152662]: 2025-10-14T08:55:31Z|00053|binding|INFO|Releasing lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 from this chassis (sb_readonly=0)
Oct 14 08:55:31 compute-0 ovn_controller[152662]: 2025-10-14T08:55:31Z|00054|binding|INFO|Setting lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 down in Southbound
Oct 14 08:55:31 compute-0 ovn_controller[152662]: 2025-10-14T08:55:31Z|00055|binding|INFO|Removing iface tapf0f1dcbf-2b ovn-installed in OVS
Oct 14 08:55:31 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct 14 08:55:31 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 16.903s CPU time.
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.710 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:a5:a0 10.100.0.12'], port_security=['fa:16:3e:76:a5:a0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '30af67a2-4b44-481c-8ab4-296e93c1c517', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5b8fd07d6d54bda9a0257bf72d4b37f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e013b807-3b2c-404b-b699-697e2a823013', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20317fae-45dd-4464-971e-a345fe497251, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:55:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.712 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 in datapath 92d50a40-95c8-4c0a-a4ab-d459f68516aa unbound from our chassis
Oct 14 08:55:31 compute-0 systemd-machined[214636]: Machine qemu-3-instance-00000003 terminated.
Oct 14 08:55:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.713 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 92d50a40-95c8-4c0a-a4ab-d459f68516aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:55:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.714 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[086e8c74-c270-4f4e-bd45-a1015593f4a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.714 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa namespace which is not needed anymore
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:31 compute-0 kernel: tapf0f1dcbf-2b: entered promiscuous mode
Oct 14 08:55:31 compute-0 NetworkManager[44885]: <info>  [1760432131.7977] manager: (tapf0f1dcbf-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Oct 14 08:55:31 compute-0 kernel: tapf0f1dcbf-2b (unregistering): left promiscuous mode
Oct 14 08:55:31 compute-0 ovn_controller[152662]: 2025-10-14T08:55:31Z|00056|binding|INFO|Claiming lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 for this chassis.
Oct 14 08:55:31 compute-0 ovn_controller[152662]: 2025-10-14T08:55:31Z|00057|binding|INFO|f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0: Claiming fa:16:3e:76:a5:a0 10.100.0.12
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.814 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:a5:a0 10.100.0.12'], port_security=['fa:16:3e:76:a5:a0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '30af67a2-4b44-481c-8ab4-296e93c1c517', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5b8fd07d6d54bda9a0257bf72d4b37f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e013b807-3b2c-404b-b699-697e2a823013', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20317fae-45dd-4464-971e-a345fe497251, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:55:31 compute-0 ovn_controller[152662]: 2025-10-14T08:55:31Z|00058|binding|INFO|Setting lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 ovn-installed in OVS
Oct 14 08:55:31 compute-0 ovn_controller[152662]: 2025-10-14T08:55:31Z|00059|binding|INFO|Setting lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 up in Southbound
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:31 compute-0 ovn_controller[152662]: 2025-10-14T08:55:31Z|00060|binding|INFO|Releasing lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 from this chassis (sb_readonly=1)
Oct 14 08:55:31 compute-0 ovn_controller[152662]: 2025-10-14T08:55:31Z|00061|binding|INFO|Removing iface tapf0f1dcbf-2b ovn-installed in OVS
Oct 14 08:55:31 compute-0 ovn_controller[152662]: 2025-10-14T08:55:31Z|00062|if_status|INFO|Not setting lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 down as sb is readonly
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.825 2 INFO nova.virt.libvirt.driver [-] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Instance destroyed successfully.
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.826 2 DEBUG nova.objects.instance [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lazy-loading 'resources' on Instance uuid 30af67a2-4b44-481c-8ab4-296e93c1c517 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:55:31 compute-0 ovn_controller[152662]: 2025-10-14T08:55:31Z|00063|binding|INFO|Releasing lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 from this chassis (sb_readonly=0)
Oct 14 08:55:31 compute-0 ovn_controller[152662]: 2025-10-14T08:55:31Z|00064|binding|INFO|Setting lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 down in Southbound
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.842 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:a5:a0 10.100.0.12'], port_security=['fa:16:3e:76:a5:a0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '30af67a2-4b44-481c-8ab4-296e93c1c517', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5b8fd07d6d54bda9a0257bf72d4b37f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e013b807-3b2c-404b-b699-697e2a823013', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20317fae-45dd-4464-971e-a345fe497251, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:55:31 compute-0 neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa[276766]: [NOTICE]   (276770) : haproxy version is 2.8.14-c23fe91
Oct 14 08:55:31 compute-0 neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa[276766]: [NOTICE]   (276770) : path to executable is /usr/sbin/haproxy
Oct 14 08:55:31 compute-0 neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa[276766]: [WARNING]  (276770) : Exiting Master process...
Oct 14 08:55:31 compute-0 neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa[276766]: [WARNING]  (276770) : Exiting Master process...
Oct 14 08:55:31 compute-0 neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa[276766]: [ALERT]    (276770) : Current worker (276772) exited with code 143 (Terminated)
Oct 14 08:55:31 compute-0 neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa[276766]: [WARNING]  (276770) : All workers exited. Exiting... (0)
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.850 2 DEBUG nova.virt.libvirt.vif [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:54:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1417208985',display_name='tempest-FloatingIPsAssociationTestJSON-server-1417208985',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1417208985',id=3,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:54:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f5b8fd07d6d54bda9a0257bf72d4b37f',ramdisk_id='',reservation_id='r-95bnx8ff',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1304888620',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1304888620-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:54:46Z,user_data=None,user_id='831826dabb48463c92f24c277df4039e',uuid=30af67a2-4b44-481c-8ab4-296e93c1c517,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.851 2 DEBUG nova.network.os_vif_util [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converting VIF {"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.851 2 DEBUG nova.network.os_vif_util [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:a5:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f1dcbf-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.851 2 DEBUG os_vif [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:a5:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f1dcbf-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:55:31 compute-0 systemd[1]: libpod-3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b.scope: Deactivated successfully.
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.853 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0f1dcbf-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.857 2 INFO os_vif [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:a5:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f1dcbf-2b')
Oct 14 08:55:31 compute-0 podman[278658]: 2025-10-14 08:55:31.860459477 +0000 UTC m=+0.058900054 container died 3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 08:55:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b-userdata-shm.mount: Deactivated successfully.
Oct 14 08:55:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3945ba3649327f68a0834580ef80e82e4e0d34f541da7ffdb8442a6916717c6f-merged.mount: Deactivated successfully.
Oct 14 08:55:31 compute-0 podman[278658]: 2025-10-14 08:55:31.901924529 +0000 UTC m=+0.100365116 container cleanup 3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 08:55:31 compute-0 systemd[1]: libpod-conmon-3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b.scope: Deactivated successfully.
Oct 14 08:55:31 compute-0 podman[278708]: 2025-10-14 08:55:31.971867674 +0000 UTC m=+0.046605480 container remove 3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 08:55:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.981 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bacc9211-0a89-47e1-a15e-e463c2cf0ebd]: (4, ('Tue Oct 14 08:55:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa (3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b)\n3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b\nTue Oct 14 08:55:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa (3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b)\n3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.983 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[051ac235-1e36-4572-a9df-30832074ef5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.984 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92d50a40-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:31 compute-0 kernel: tap92d50a40-90: left promiscuous mode
Oct 14 08:55:31 compute-0 nova_compute[259627]: 2025-10-14 08:55:31.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.992 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[91f78f32-11a6-48cb-92ad-01dadd789db8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:32 compute-0 nova_compute[259627]: 2025-10-14 08:55:32.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.014 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2679f7c1-4dc1-4c03-8b09-be77a28e60f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.015 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[988d8d5d-a638-44d6-bc09-cbf9de117524]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.036 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[86d32d58-8e05-401b-988a-42b99e4d55d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586706, 'reachable_time': 37454, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278723, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d92d50a40\x2d95c8\x2d4c0a\x2da4ab\x2dd459f68516aa.mount: Deactivated successfully.
Oct 14 08:55:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.039 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:55:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.039 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[2048c8a8-f150-4f29-9c6c-a57071c9669f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.042 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 in datapath 92d50a40-95c8-4c0a-a4ab-d459f68516aa unbound from our chassis
Oct 14 08:55:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.044 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 92d50a40-95c8-4c0a-a4ab-d459f68516aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:55:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.045 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[499880a6-7fd2-43a8-9460-6afb24135f2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.045 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 in datapath 92d50a40-95c8-4c0a-a4ab-d459f68516aa unbound from our chassis
Oct 14 08:55:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.049 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 92d50a40-95c8-4c0a-a4ab-d459f68516aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:55:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.049 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d7d9a2-c83a-4000-8aba-59ccf59ee406]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:32 compute-0 nova_compute[259627]: 2025-10-14 08:55:32.283 2 INFO nova.virt.libvirt.driver [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Deleting instance files /var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517_del
Oct 14 08:55:32 compute-0 nova_compute[259627]: 2025-10-14 08:55:32.285 2 INFO nova.virt.libvirt.driver [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Deletion of /var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517_del complete
Oct 14 08:55:32 compute-0 nova_compute[259627]: 2025-10-14 08:55:32.384 2 INFO nova.compute.manager [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 14 08:55:32 compute-0 nova_compute[259627]: 2025-10-14 08:55:32.384 2 DEBUG oslo.service.loopingcall [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:55:32 compute-0 nova_compute[259627]: 2025-10-14 08:55:32.384 2 DEBUG nova.compute.manager [-] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:55:32 compute-0 nova_compute[259627]: 2025-10-14 08:55:32.385 2 DEBUG nova.network.neutron [-] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:55:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:55:32 compute-0 nova_compute[259627]: 2025-10-14 08:55:32.538 2 DEBUG nova.compute.manager [req-012eee2f-617d-4b1c-af44-006ab52dc6f0 req-298d538f-8570-4747-bf81-b8370f7f7ccd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-unplugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:32 compute-0 nova_compute[259627]: 2025-10-14 08:55:32.539 2 DEBUG oslo_concurrency.lockutils [req-012eee2f-617d-4b1c-af44-006ab52dc6f0 req-298d538f-8570-4747-bf81-b8370f7f7ccd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:32 compute-0 nova_compute[259627]: 2025-10-14 08:55:32.539 2 DEBUG oslo_concurrency.lockutils [req-012eee2f-617d-4b1c-af44-006ab52dc6f0 req-298d538f-8570-4747-bf81-b8370f7f7ccd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:32 compute-0 nova_compute[259627]: 2025-10-14 08:55:32.540 2 DEBUG oslo_concurrency.lockutils [req-012eee2f-617d-4b1c-af44-006ab52dc6f0 req-298d538f-8570-4747-bf81-b8370f7f7ccd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:32 compute-0 nova_compute[259627]: 2025-10-14 08:55:32.540 2 DEBUG nova.compute.manager [req-012eee2f-617d-4b1c-af44-006ab52dc6f0 req-298d538f-8570-4747-bf81-b8370f7f7ccd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] No waiting events found dispatching network-vif-unplugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:55:32 compute-0 nova_compute[259627]: 2025-10-14 08:55:32.541 2 DEBUG nova.compute.manager [req-012eee2f-617d-4b1c-af44-006ab52dc6f0 req-298d538f-8570-4747-bf81-b8370f7f7ccd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-unplugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:55:32
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.control', 'default.rgw.meta']
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 08:55:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 08:55:33 compute-0 ceph-mon[74249]: pgmap v1103: 305 pgs: 305 active+clean; 169 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 107 KiB/s wr, 130 op/s
Oct 14 08:55:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 169 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 69 op/s
Oct 14 08:55:33 compute-0 nova_compute[259627]: 2025-10-14 08:55:33.563 2 DEBUG nova.network.neutron [-] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:33 compute-0 nova_compute[259627]: 2025-10-14 08:55:33.579 2 INFO nova.compute.manager [-] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Took 1.19 seconds to deallocate network for instance.
Oct 14 08:55:33 compute-0 nova_compute[259627]: 2025-10-14 08:55:33.627 2 DEBUG oslo_concurrency.lockutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:33 compute-0 nova_compute[259627]: 2025-10-14 08:55:33.627 2 DEBUG oslo_concurrency.lockutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:33 compute-0 nova_compute[259627]: 2025-10-14 08:55:33.724 2 DEBUG oslo_concurrency.processutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:33 compute-0 nova_compute[259627]: 2025-10-14 08:55:33.761 2 DEBUG nova.compute.manager [req-9fa0b056-6cab-4af5-a2fe-5faedda4aa02 req-767f6fba-f91b-4172-8ebc-027425435f77 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-deleted-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:55:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3750230889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.272 2 DEBUG oslo_concurrency.processutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.280 2 DEBUG nova.compute.provider_tree [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.296 2 DEBUG nova.scheduler.client.report [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.325 2 DEBUG oslo_concurrency.lockutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.350 2 INFO nova.scheduler.client.report [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Deleted allocations for instance 30af67a2-4b44-481c-8ab4-296e93c1c517
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.418 2 DEBUG oslo_concurrency.lockutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.617 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432119.6169467, 7b60a7cc-57e5-4833-9541-ed03e9e862ea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.618 2 INFO nova.compute.manager [-] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] VM Stopped (Lifecycle Event)
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.637 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.638 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.639 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.639 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.640 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] No waiting events found dispatching network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.640 2 WARNING nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received unexpected event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 for instance with vm_state deleted and task_state None.
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.641 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.641 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.642 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.642 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.643 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] No waiting events found dispatching network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.643 2 WARNING nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received unexpected event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 for instance with vm_state deleted and task_state None.
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.644 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.644 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.645 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.645 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.646 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] No waiting events found dispatching network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.646 2 WARNING nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received unexpected event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 for instance with vm_state deleted and task_state None.
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.647 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-unplugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.647 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.648 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.648 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.649 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] No waiting events found dispatching network-vif-unplugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.649 2 WARNING nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received unexpected event network-vif-unplugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 for instance with vm_state deleted and task_state None.
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.650 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.650 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.650 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.651 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.651 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] No waiting events found dispatching network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.652 2 WARNING nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received unexpected event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 for instance with vm_state deleted and task_state None.
Oct 14 08:55:34 compute-0 nova_compute[259627]: 2025-10-14 08:55:34.655 2 DEBUG nova.compute.manager [None req-3dc29873-c609-4101-9790-305750111f02 - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:35 compute-0 ceph-mon[74249]: pgmap v1104: 305 pgs: 305 active+clean; 169 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 69 op/s
Oct 14 08:55:35 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3750230889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 123 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 168 op/s
Oct 14 08:55:35 compute-0 sudo[278747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:55:35 compute-0 sudo[278747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:55:35 compute-0 sudo[278747]: pam_unix(sudo:session): session closed for user root
Oct 14 08:55:36 compute-0 sudo[278772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:55:36 compute-0 sudo[278772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:55:36 compute-0 sudo[278772]: pam_unix(sudo:session): session closed for user root
Oct 14 08:55:36 compute-0 sudo[278797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:55:36 compute-0 sudo[278797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:55:36 compute-0 sudo[278797]: pam_unix(sudo:session): session closed for user root
Oct 14 08:55:36 compute-0 sudo[278822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 08:55:36 compute-0 sudo[278822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:55:36 compute-0 sudo[278822]: pam_unix(sudo:session): session closed for user root
Oct 14 08:55:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 08:55:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:55:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 08:55:36 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 08:55:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 08:55:36 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:55:36 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev c3aa8daa-908c-4fb1-8c9f-cc8667655e59 does not exist
Oct 14 08:55:36 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev db138469-d829-41be-b1a6-507a8197c55c does not exist
Oct 14 08:55:36 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 6f29dd08-f203-4467-a632-e5ed45511f8f does not exist
Oct 14 08:55:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 08:55:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 08:55:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 08:55:36 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 08:55:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 08:55:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:55:36 compute-0 sudo[278878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:55:36 compute-0 sudo[278878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:55:36 compute-0 sudo[278878]: pam_unix(sudo:session): session closed for user root
Oct 14 08:55:36 compute-0 nova_compute[259627]: 2025-10-14 08:55:36.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:36 compute-0 podman[278902]: 2025-10-14 08:55:36.846566434 +0000 UTC m=+0.067242230 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 14 08:55:36 compute-0 nova_compute[259627]: 2025-10-14 08:55:36.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:36 compute-0 sudo[278909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:55:36 compute-0 sudo[278909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:55:36 compute-0 sudo[278909]: pam_unix(sudo:session): session closed for user root
Oct 14 08:55:36 compute-0 nova_compute[259627]: 2025-10-14 08:55:36.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:36 compute-0 sudo[278950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:55:36 compute-0 sudo[278950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:55:36 compute-0 sudo[278950]: pam_unix(sudo:session): session closed for user root
Oct 14 08:55:36 compute-0 podman[278943]: 2025-10-14 08:55:36.975709489 +0000 UTC m=+0.101056823 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:55:37 compute-0 sudo[278995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 08:55:37 compute-0 sudo[278995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:55:37 compute-0 ceph-mon[74249]: pgmap v1105: 305 pgs: 305 active+clean; 123 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 168 op/s
Oct 14 08:55:37 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:55:37 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 08:55:37 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:55:37 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 08:55:37 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 08:55:37 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:55:37 compute-0 podman[279062]: 2025-10-14 08:55:37.435797077 +0000 UTC m=+0.038773107 container create 1299b43a920bed8635a0802b22b1a4708e4f98492aef5e3659ec138ba8424f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:55:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:55:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 123 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Oct 14 08:55:37 compute-0 systemd[1]: Started libpod-conmon-1299b43a920bed8635a0802b22b1a4708e4f98492aef5e3659ec138ba8424f64.scope.
Oct 14 08:55:37 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:55:37 compute-0 podman[279062]: 2025-10-14 08:55:37.420072519 +0000 UTC m=+0.023048539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:55:37 compute-0 podman[279062]: 2025-10-14 08:55:37.533774673 +0000 UTC m=+0.136750773 container init 1299b43a920bed8635a0802b22b1a4708e4f98492aef5e3659ec138ba8424f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 08:55:37 compute-0 podman[279062]: 2025-10-14 08:55:37.545348539 +0000 UTC m=+0.148324599 container start 1299b43a920bed8635a0802b22b1a4708e4f98492aef5e3659ec138ba8424f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 08:55:37 compute-0 podman[279062]: 2025-10-14 08:55:37.550311421 +0000 UTC m=+0.153287481 container attach 1299b43a920bed8635a0802b22b1a4708e4f98492aef5e3659ec138ba8424f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:55:37 compute-0 cool_ramanujan[279078]: 167 167
Oct 14 08:55:37 compute-0 systemd[1]: libpod-1299b43a920bed8635a0802b22b1a4708e4f98492aef5e3659ec138ba8424f64.scope: Deactivated successfully.
Oct 14 08:55:37 compute-0 podman[279062]: 2025-10-14 08:55:37.556052723 +0000 UTC m=+0.159028763 container died 1299b43a920bed8635a0802b22b1a4708e4f98492aef5e3659ec138ba8424f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ramanujan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 08:55:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-270dbf189e011135f0f5b6a4368fd7535a1be1da0266466ef701a4ccb5bddbff-merged.mount: Deactivated successfully.
Oct 14 08:55:37 compute-0 podman[279062]: 2025-10-14 08:55:37.606853066 +0000 UTC m=+0.209829086 container remove 1299b43a920bed8635a0802b22b1a4708e4f98492aef5e3659ec138ba8424f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 08:55:37 compute-0 systemd[1]: libpod-conmon-1299b43a920bed8635a0802b22b1a4708e4f98492aef5e3659ec138ba8424f64.scope: Deactivated successfully.
Oct 14 08:55:37 compute-0 podman[279102]: 2025-10-14 08:55:37.827236081 +0000 UTC m=+0.054506805 container create a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:55:37 compute-0 systemd[1]: Started libpod-conmon-a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172.scope.
Oct 14 08:55:37 compute-0 podman[279102]: 2025-10-14 08:55:37.800288377 +0000 UTC m=+0.027559101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:55:37 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:55:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23535a52d89b40cda6cd70d754eb042d6a767cf2d97ee54beb7725911bb3922d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:55:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23535a52d89b40cda6cd70d754eb042d6a767cf2d97ee54beb7725911bb3922d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:55:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23535a52d89b40cda6cd70d754eb042d6a767cf2d97ee54beb7725911bb3922d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:55:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23535a52d89b40cda6cd70d754eb042d6a767cf2d97ee54beb7725911bb3922d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:55:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23535a52d89b40cda6cd70d754eb042d6a767cf2d97ee54beb7725911bb3922d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 08:55:37 compute-0 podman[279102]: 2025-10-14 08:55:37.943676333 +0000 UTC m=+0.170947047 container init a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hopper, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:55:37 compute-0 podman[279102]: 2025-10-14 08:55:37.956672904 +0000 UTC m=+0.183943618 container start a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hopper, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 08:55:37 compute-0 podman[279102]: 2025-10-14 08:55:37.960953709 +0000 UTC m=+0.188224433 container attach a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:55:39 compute-0 ceph-mon[74249]: pgmap v1106: 305 pgs: 305 active+clean; 123 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Oct 14 08:55:39 compute-0 brave_hopper[279118]: --> passed data devices: 0 physical, 3 LVM
Oct 14 08:55:39 compute-0 brave_hopper[279118]: --> relative data size: 1.0
Oct 14 08:55:39 compute-0 brave_hopper[279118]: --> All data devices are unavailable
Oct 14 08:55:39 compute-0 systemd[1]: libpod-a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172.scope: Deactivated successfully.
Oct 14 08:55:39 compute-0 podman[279102]: 2025-10-14 08:55:39.126094177 +0000 UTC m=+1.353364871 container died a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hopper, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 08:55:39 compute-0 systemd[1]: libpod-a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172.scope: Consumed 1.128s CPU time.
Oct 14 08:55:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-23535a52d89b40cda6cd70d754eb042d6a767cf2d97ee54beb7725911bb3922d-merged.mount: Deactivated successfully.
Oct 14 08:55:39 compute-0 podman[279102]: 2025-10-14 08:55:39.192584787 +0000 UTC m=+1.419855511 container remove a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hopper, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 08:55:39 compute-0 systemd[1]: libpod-conmon-a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172.scope: Deactivated successfully.
Oct 14 08:55:39 compute-0 sudo[278995]: pam_unix(sudo:session): session closed for user root
Oct 14 08:55:39 compute-0 sudo[279158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:55:39 compute-0 sudo[279158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:55:39 compute-0 sudo[279158]: pam_unix(sudo:session): session closed for user root
Oct 14 08:55:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:39.341 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:39 compute-0 sudo[279183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:55:39 compute-0 sudo[279183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:55:39 compute-0 sudo[279183]: pam_unix(sudo:session): session closed for user root
Oct 14 08:55:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 123 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Oct 14 08:55:39 compute-0 sudo[279208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:55:39 compute-0 sudo[279208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:55:39 compute-0 sudo[279208]: pam_unix(sudo:session): session closed for user root
Oct 14 08:55:39 compute-0 sudo[279233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 08:55:39 compute-0 sudo[279233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:55:39 compute-0 ovn_controller[152662]: 2025-10-14T08:55:39Z|00065|binding|INFO|Releasing lport 6a62b55c-d140-4dc2-a487-c292e81e63e0 from this chassis (sb_readonly=0)
Oct 14 08:55:39 compute-0 nova_compute[259627]: 2025-10-14 08:55:39.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:40 compute-0 podman[279299]: 2025-10-14 08:55:40.004341097 +0000 UTC m=+0.059247812 container create 3458335f70181583285704a164d3909b348e12e297fffa4f8156592a904d58b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_fermat, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 08:55:40 compute-0 systemd[1]: Started libpod-conmon-3458335f70181583285704a164d3909b348e12e297fffa4f8156592a904d58b8.scope.
Oct 14 08:55:40 compute-0 podman[279299]: 2025-10-14 08:55:39.976683115 +0000 UTC m=+0.031589890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:55:40 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:55:40 compute-0 podman[279299]: 2025-10-14 08:55:40.099986626 +0000 UTC m=+0.154893421 container init 3458335f70181583285704a164d3909b348e12e297fffa4f8156592a904d58b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_fermat, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:55:40 compute-0 podman[279299]: 2025-10-14 08:55:40.112000223 +0000 UTC m=+0.166906918 container start 3458335f70181583285704a164d3909b348e12e297fffa4f8156592a904d58b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_fermat, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:55:40 compute-0 podman[279299]: 2025-10-14 08:55:40.116073153 +0000 UTC m=+0.170979878 container attach 3458335f70181583285704a164d3909b348e12e297fffa4f8156592a904d58b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_fermat, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 08:55:40 compute-0 beautiful_fermat[279315]: 167 167
Oct 14 08:55:40 compute-0 podman[279299]: 2025-10-14 08:55:40.117253812 +0000 UTC m=+0.172160497 container died 3458335f70181583285704a164d3909b348e12e297fffa4f8156592a904d58b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_fermat, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:55:40 compute-0 systemd[1]: libpod-3458335f70181583285704a164d3909b348e12e297fffa4f8156592a904d58b8.scope: Deactivated successfully.
Oct 14 08:55:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-6efc83e076da523583df42ac7edeabafc2d80f2f5a58bd71651111a20f4e9b07-merged.mount: Deactivated successfully.
Oct 14 08:55:40 compute-0 podman[279299]: 2025-10-14 08:55:40.1512133 +0000 UTC m=+0.206119995 container remove 3458335f70181583285704a164d3909b348e12e297fffa4f8156592a904d58b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_fermat, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 08:55:40 compute-0 systemd[1]: libpod-conmon-3458335f70181583285704a164d3909b348e12e297fffa4f8156592a904d58b8.scope: Deactivated successfully.
Oct 14 08:55:40 compute-0 podman[279339]: 2025-10-14 08:55:40.355618691 +0000 UTC m=+0.062631605 container create 8d0f6559d97d0c4a902b2544e0a82a584e98dbb6d078fd299c80490c09c3ce98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_johnson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:55:40 compute-0 systemd[1]: Started libpod-conmon-8d0f6559d97d0c4a902b2544e0a82a584e98dbb6d078fd299c80490c09c3ce98.scope.
Oct 14 08:55:40 compute-0 podman[279339]: 2025-10-14 08:55:40.327343154 +0000 UTC m=+0.034355988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:55:40 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ac926ea1d4b2fba17d8c4e18d300d7810c40c9ed941ccc9f63bffafb61eff5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ac926ea1d4b2fba17d8c4e18d300d7810c40c9ed941ccc9f63bffafb61eff5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ac926ea1d4b2fba17d8c4e18d300d7810c40c9ed941ccc9f63bffafb61eff5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ac926ea1d4b2fba17d8c4e18d300d7810c40c9ed941ccc9f63bffafb61eff5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:55:40 compute-0 podman[279339]: 2025-10-14 08:55:40.479998679 +0000 UTC m=+0.187011473 container init 8d0f6559d97d0c4a902b2544e0a82a584e98dbb6d078fd299c80490c09c3ce98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_johnson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:55:40 compute-0 podman[279339]: 2025-10-14 08:55:40.489295728 +0000 UTC m=+0.196308512 container start 8d0f6559d97d0c4a902b2544e0a82a584e98dbb6d078fd299c80490c09c3ce98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 08:55:40 compute-0 podman[279339]: 2025-10-14 08:55:40.49625771 +0000 UTC m=+0.203270514 container attach 8d0f6559d97d0c4a902b2544e0a82a584e98dbb6d078fd299c80490c09c3ce98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.524 2 DEBUG oslo_concurrency.lockutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.525 2 DEBUG oslo_concurrency.lockutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.525 2 DEBUG oslo_concurrency.lockutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.525 2 DEBUG oslo_concurrency.lockutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.526 2 DEBUG oslo_concurrency.lockutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.527 2 INFO nova.compute.manager [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Terminating instance
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.528 2 DEBUG nova.compute.manager [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:55:40 compute-0 kernel: tapfac86e41-30 (unregistering): left promiscuous mode
Oct 14 08:55:40 compute-0 NetworkManager[44885]: <info>  [1760432140.6086] device (tapfac86e41-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:55:40 compute-0 ovn_controller[152662]: 2025-10-14T08:55:40Z|00066|binding|INFO|Releasing lport fac86e41-30dc-481e-a423-10c6cdb3626f from this chassis (sb_readonly=0)
Oct 14 08:55:40 compute-0 ovn_controller[152662]: 2025-10-14T08:55:40Z|00067|binding|INFO|Setting lport fac86e41-30dc-481e-a423-10c6cdb3626f down in Southbound
Oct 14 08:55:40 compute-0 ovn_controller[152662]: 2025-10-14T08:55:40Z|00068|binding|INFO|Removing iface tapfac86e41-30 ovn-installed in OVS
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.634 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:72:b8 10.100.0.3'], port_security=['fa:16:3e:e1:72:b8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b063b9bf-1f88-47d3-a838-a4bcfc5eeecc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ac87003cad443c2b75e49ebdefe379c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8a9f5d33-cc0d-455f-8821-c23805bbda66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a071297a-bec1-4fd2-a338-694b6508cca6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=fac86e41-30dc-481e-a423-10c6cdb3626f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:55:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.636 162547 INFO neutron.agent.ovn.metadata.agent [-] Port fac86e41-30dc-481e-a423-10c6cdb3626f in datapath 6f970eb9-83e1-4efc-b15d-b5885b9eabe7 unbound from our chassis
Oct 14 08:55:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.638 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6f970eb9-83e1-4efc-b15d-b5885b9eabe7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:55:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.639 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a467657f-50c7-42ac-954a-9da0f45da308]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.640 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 namespace which is not needed anymore
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:40 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct 14 08:55:40 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 12.965s CPU time.
Oct 14 08:55:40 compute-0 systemd-machined[214636]: Machine qemu-7-instance-00000007 terminated.
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.764 2 INFO nova.virt.libvirt.driver [-] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Instance destroyed successfully.
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.765 2 DEBUG nova.objects.instance [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lazy-loading 'resources' on Instance uuid b063b9bf-1f88-47d3-a838-a4bcfc5eeecc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.784 2 DEBUG nova.virt.libvirt.vif [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:55:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1648521031',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1648521031',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(28),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1648521031',id=7,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=28,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7g6TTsjbjlUgcn3NOlTdjTTdxkB/tOz/SnrUnsyC2B2DiKRfdyGeRG/baba8J8u0sFhOnSigOVqumIulB1xwu2rOWWivaBSyCqPcCAc1UiuhefYZEgIiwxztAfhh5GqA==',key_name='tempest-keypair-2141010768',keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:55:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ac87003cad443c2b75e49ebdefe379c',ramdisk_id='',reservation_id='r-3ocp7rye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-632252786',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-632252786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:55:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='654cc6be69694fcd8058cc5a5eb78223',uuid=b063b9bf-1f88-47d3-a838-a4bcfc5eeecc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.785 2 DEBUG nova.network.os_vif_util [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converting VIF {"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.785 2 DEBUG nova.network.os_vif_util [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:72:b8,bridge_name='br-int',has_traffic_filtering=True,id=fac86e41-30dc-481e-a423-10c6cdb3626f,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfac86e41-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.786 2 DEBUG os_vif [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:72:b8,bridge_name='br-int',has_traffic_filtering=True,id=fac86e41-30dc-481e-a423-10c6cdb3626f,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfac86e41-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.788 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfac86e41-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:40 compute-0 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[278549]: [NOTICE]   (278553) : haproxy version is 2.8.14-c23fe91
Oct 14 08:55:40 compute-0 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[278549]: [NOTICE]   (278553) : path to executable is /usr/sbin/haproxy
Oct 14 08:55:40 compute-0 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[278549]: [ALERT]    (278553) : Current worker (278555) exited with code 143 (Terminated)
Oct 14 08:55:40 compute-0 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[278549]: [WARNING]  (278553) : All workers exited. Exiting... (0)
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.792 2 INFO os_vif [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:72:b8,bridge_name='br-int',has_traffic_filtering=True,id=fac86e41-30dc-481e-a423-10c6cdb3626f,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfac86e41-30')
Oct 14 08:55:40 compute-0 systemd[1]: libpod-234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a.scope: Deactivated successfully.
Oct 14 08:55:40 compute-0 podman[279385]: 2025-10-14 08:55:40.804075062 +0000 UTC m=+0.064738217 container died 234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:55:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a-userdata-shm.mount: Deactivated successfully.
Oct 14 08:55:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-409964ad013ad7282d1b3ada20703b51786ffd8f880c945dcb857889c1c15880-merged.mount: Deactivated successfully.
Oct 14 08:55:40 compute-0 podman[279385]: 2025-10-14 08:55:40.852121707 +0000 UTC m=+0.112784842 container cleanup 234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 14 08:55:40 compute-0 systemd[1]: libpod-conmon-234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a.scope: Deactivated successfully.
Oct 14 08:55:40 compute-0 podman[279441]: 2025-10-14 08:55:40.924824091 +0000 UTC m=+0.044857038 container remove 234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:55:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.932 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d9228989-7c33-4824-9584-c28b13c8b181]: (4, ('Tue Oct 14 08:55:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 (234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a)\n234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a\nTue Oct 14 08:55:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 (234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a)\n234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.934 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8819d783-e724-412a-bb24-5d0f6fd22b77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.935 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f970eb9-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:55:40 compute-0 kernel: tap6f970eb9-80: left promiscuous mode
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:40 compute-0 nova_compute[259627]: 2025-10-14 08:55:40.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.954 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[edfb55b0-ce7b-47bc-9927-4ce7d125b014]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.986 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[24ddd1dc-0bf3-4539-9a91-bde0762370cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.987 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c85645d8-bb20-4d49-9aa0-f222731bbf0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:41.007 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[575fdee5-6590-4c71-84e9-b4da8ca6553e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589806, 'reachable_time': 34887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279456, 'error': None, 'target': 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d6f970eb9\x2d83e1\x2d4efc\x2db15d\x2db5885b9eabe7.mount: Deactivated successfully.
Oct 14 08:55:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:41.012 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:55:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:55:41.012 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[9c825437-2331-4ba6-955f-903979bdd662]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:55:41 compute-0 ceph-mon[74249]: pgmap v1107: 305 pgs: 305 active+clean; 123 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Oct 14 08:55:41 compute-0 youthful_johnson[279356]: {
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:     "0": [
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:         {
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "devices": [
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "/dev/loop3"
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             ],
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "lv_name": "ceph_lv0",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "lv_size": "21470642176",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "name": "ceph_lv0",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "tags": {
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.cluster_name": "ceph",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.crush_device_class": "",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.encrypted": "0",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.osd_id": "0",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.type": "block",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.vdo": "0"
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             },
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "type": "block",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "vg_name": "ceph_vg0"
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:         }
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:     ],
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:     "1": [
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:         {
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "devices": [
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "/dev/loop4"
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             ],
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "lv_name": "ceph_lv1",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "lv_size": "21470642176",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "name": "ceph_lv1",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "tags": {
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.cluster_name": "ceph",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.crush_device_class": "",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.encrypted": "0",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.osd_id": "1",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.type": "block",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.vdo": "0"
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             },
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "type": "block",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "vg_name": "ceph_vg1"
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:         }
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:     ],
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:     "2": [
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:         {
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "devices": [
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "/dev/loop5"
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             ],
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "lv_name": "ceph_lv2",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "lv_size": "21470642176",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "name": "ceph_lv2",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "tags": {
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.cluster_name": "ceph",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.crush_device_class": "",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.encrypted": "0",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.osd_id": "2",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.type": "block",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:                 "ceph.vdo": "0"
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             },
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "type": "block",
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:             "vg_name": "ceph_vg2"
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:         }
Oct 14 08:55:41 compute-0 youthful_johnson[279356]:     ]
Oct 14 08:55:41 compute-0 youthful_johnson[279356]: }
Oct 14 08:55:41 compute-0 systemd[1]: libpod-8d0f6559d97d0c4a902b2544e0a82a584e98dbb6d078fd299c80490c09c3ce98.scope: Deactivated successfully.
Oct 14 08:55:41 compute-0 podman[279339]: 2025-10-14 08:55:41.264841437 +0000 UTC m=+0.971854231 container died 8d0f6559d97d0c4a902b2544e0a82a584e98dbb6d078fd299c80490c09c3ce98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_johnson, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 08:55:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1ac926ea1d4b2fba17d8c4e18d300d7810c40c9ed941ccc9f63bffafb61eff5-merged.mount: Deactivated successfully.
Oct 14 08:55:41 compute-0 podman[279339]: 2025-10-14 08:55:41.330085066 +0000 UTC m=+1.037097890 container remove 8d0f6559d97d0c4a902b2544e0a82a584e98dbb6d078fd299c80490c09c3ce98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 08:55:41 compute-0 systemd[1]: libpod-conmon-8d0f6559d97d0c4a902b2544e0a82a584e98dbb6d078fd299c80490c09c3ce98.scope: Deactivated successfully.
Oct 14 08:55:41 compute-0 nova_compute[259627]: 2025-10-14 08:55:41.366 2 INFO nova.virt.libvirt.driver [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Deleting instance files /var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_del
Oct 14 08:55:41 compute-0 nova_compute[259627]: 2025-10-14 08:55:41.367 2 INFO nova.virt.libvirt.driver [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Deletion of /var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_del complete
Oct 14 08:55:41 compute-0 sudo[279233]: pam_unix(sudo:session): session closed for user root
Oct 14 08:55:41 compute-0 nova_compute[259627]: 2025-10-14 08:55:41.455 2 INFO nova.compute.manager [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Took 0.93 seconds to destroy the instance on the hypervisor.
Oct 14 08:55:41 compute-0 nova_compute[259627]: 2025-10-14 08:55:41.456 2 DEBUG oslo.service.loopingcall [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:55:41 compute-0 nova_compute[259627]: 2025-10-14 08:55:41.457 2 DEBUG nova.compute.manager [-] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:55:41 compute-0 nova_compute[259627]: 2025-10-14 08:55:41.458 2 DEBUG nova.network.neutron [-] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:55:41 compute-0 sudo[279473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:55:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 123 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Oct 14 08:55:41 compute-0 sudo[279473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:55:41 compute-0 sudo[279473]: pam_unix(sudo:session): session closed for user root
Oct 14 08:55:41 compute-0 sudo[279498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:55:41 compute-0 sudo[279498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:55:41 compute-0 sudo[279498]: pam_unix(sudo:session): session closed for user root
Oct 14 08:55:41 compute-0 sudo[279523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:55:41 compute-0 sudo[279523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:55:41 compute-0 sudo[279523]: pam_unix(sudo:session): session closed for user root
Oct 14 08:55:41 compute-0 sudo[279548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 08:55:41 compute-0 sudo[279548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:55:41 compute-0 nova_compute[259627]: 2025-10-14 08:55:41.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:42 compute-0 podman[279612]: 2025-10-14 08:55:42.088862281 +0000 UTC m=+0.046488528 container create f035f6b990f860a971d31d0d21e9edf898e2b5b6a59c0592298c1dbf3076168e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 08:55:42 compute-0 systemd[1]: Started libpod-conmon-f035f6b990f860a971d31d0d21e9edf898e2b5b6a59c0592298c1dbf3076168e.scope.
Oct 14 08:55:42 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:55:42 compute-0 podman[279612]: 2025-10-14 08:55:42.069119604 +0000 UTC m=+0.026745891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:55:42 compute-0 podman[279612]: 2025-10-14 08:55:42.167616143 +0000 UTC m=+0.125242460 container init f035f6b990f860a971d31d0d21e9edf898e2b5b6a59c0592298c1dbf3076168e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_taussig, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:55:42 compute-0 podman[279612]: 2025-10-14 08:55:42.175727433 +0000 UTC m=+0.133353670 container start f035f6b990f860a971d31d0d21e9edf898e2b5b6a59c0592298c1dbf3076168e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_taussig, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 08:55:42 compute-0 podman[279612]: 2025-10-14 08:55:42.178731037 +0000 UTC m=+0.136357354 container attach f035f6b990f860a971d31d0d21e9edf898e2b5b6a59c0592298c1dbf3076168e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_taussig, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:55:42 compute-0 brave_taussig[279628]: 167 167
Oct 14 08:55:42 compute-0 systemd[1]: libpod-f035f6b990f860a971d31d0d21e9edf898e2b5b6a59c0592298c1dbf3076168e.scope: Deactivated successfully.
Oct 14 08:55:42 compute-0 podman[279612]: 2025-10-14 08:55:42.180593473 +0000 UTC m=+0.138219720 container died f035f6b990f860a971d31d0d21e9edf898e2b5b6a59c0592298c1dbf3076168e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 08:55:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d28b91e8f4a833af5ee796bbedb72e057af10894273c8605402c37ff2bcadbb-merged.mount: Deactivated successfully.
Oct 14 08:55:42 compute-0 podman[279612]: 2025-10-14 08:55:42.219748009 +0000 UTC m=+0.177374286 container remove f035f6b990f860a971d31d0d21e9edf898e2b5b6a59c0592298c1dbf3076168e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 08:55:42 compute-0 systemd[1]: libpod-conmon-f035f6b990f860a971d31d0d21e9edf898e2b5b6a59c0592298c1dbf3076168e.scope: Deactivated successfully.
Oct 14 08:55:42 compute-0 podman[279652]: 2025-10-14 08:55:42.417385434 +0000 UTC m=+0.059554890 container create 70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_satoshi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 08:55:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:55:42 compute-0 systemd[1]: Started libpod-conmon-70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a.scope.
Oct 14 08:55:42 compute-0 podman[279652]: 2025-10-14 08:55:42.396108209 +0000 UTC m=+0.038277705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:55:42 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c93dab37126401a93477523680033160ce92a793c1fd6fb46265147cf565af7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c93dab37126401a93477523680033160ce92a793c1fd6fb46265147cf565af7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c93dab37126401a93477523680033160ce92a793c1fd6fb46265147cf565af7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c93dab37126401a93477523680033160ce92a793c1fd6fb46265147cf565af7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:55:42 compute-0 podman[279652]: 2025-10-14 08:55:42.522753032 +0000 UTC m=+0.164922578 container init 70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 08:55:42 compute-0 podman[279652]: 2025-10-14 08:55:42.532396 +0000 UTC m=+0.174565486 container start 70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 08:55:42 compute-0 podman[279652]: 2025-10-14 08:55:42.537211309 +0000 UTC m=+0.179380795 container attach 70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_satoshi, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000759845367747607 of space, bias 1.0, pg target 0.2279536103242821 quantized to 32 (current 32)
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:55:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 08:55:43 compute-0 ceph-mon[74249]: pgmap v1108: 305 pgs: 305 active+clean; 123 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Oct 14 08:55:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 123 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]: {
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:         "osd_id": 2,
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:         "type": "bluestore"
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:     },
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:         "osd_id": 1,
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:         "type": "bluestore"
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:     },
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:         "osd_id": 0,
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:         "type": "bluestore"
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]:     }
Oct 14 08:55:43 compute-0 amazing_satoshi[279669]: }
Oct 14 08:55:43 compute-0 systemd[1]: libpod-70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a.scope: Deactivated successfully.
Oct 14 08:55:43 compute-0 systemd[1]: libpod-70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a.scope: Consumed 1.098s CPU time.
Oct 14 08:55:43 compute-0 podman[279652]: 2025-10-14 08:55:43.623419899 +0000 UTC m=+1.265589375 container died 70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_satoshi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 08:55:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c93dab37126401a93477523680033160ce92a793c1fd6fb46265147cf565af7-merged.mount: Deactivated successfully.
Oct 14 08:55:43 compute-0 podman[279652]: 2025-10-14 08:55:43.698828249 +0000 UTC m=+1.340997695 container remove 70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_satoshi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 08:55:43 compute-0 systemd[1]: libpod-conmon-70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a.scope: Deactivated successfully.
Oct 14 08:55:43 compute-0 sudo[279548]: pam_unix(sudo:session): session closed for user root
Oct 14 08:55:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 08:55:43 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:55:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 08:55:43 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:55:43 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev bc9cdd0e-7cf1-4b72-85d7-cb38fbd76788 does not exist
Oct 14 08:55:43 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev f72e0617-f717-4102-ae41-73db88c0ee3b does not exist
Oct 14 08:55:43 compute-0 nova_compute[259627]: 2025-10-14 08:55:43.753 2 DEBUG nova.network.neutron [-] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:43 compute-0 nova_compute[259627]: 2025-10-14 08:55:43.779 2 INFO nova.compute.manager [-] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Took 2.32 seconds to deallocate network for instance.
Oct 14 08:55:43 compute-0 sudo[279714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:55:43 compute-0 sudo[279714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:55:43 compute-0 sudo[279714]: pam_unix(sudo:session): session closed for user root
Oct 14 08:55:43 compute-0 sudo[279739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 08:55:43 compute-0 sudo[279739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:55:43 compute-0 sudo[279739]: pam_unix(sudo:session): session closed for user root
Oct 14 08:55:43 compute-0 nova_compute[259627]: 2025-10-14 08:55:43.988 2 DEBUG oslo_concurrency.lockutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:43 compute-0 nova_compute[259627]: 2025-10-14 08:55:43.989 2 DEBUG oslo_concurrency.lockutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.043 2 DEBUG oslo_concurrency.processutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.447 2 DEBUG nova.compute.manager [req-7634a392-592d-4c02-bc98-1d23634fce31 req-b961d588-92ef-4f4e-ba54-994ae0c74048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received event network-vif-unplugged-fac86e41-30dc-481e-a423-10c6cdb3626f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.448 2 DEBUG oslo_concurrency.lockutils [req-7634a392-592d-4c02-bc98-1d23634fce31 req-b961d588-92ef-4f4e-ba54-994ae0c74048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.449 2 DEBUG oslo_concurrency.lockutils [req-7634a392-592d-4c02-bc98-1d23634fce31 req-b961d588-92ef-4f4e-ba54-994ae0c74048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.450 2 DEBUG oslo_concurrency.lockutils [req-7634a392-592d-4c02-bc98-1d23634fce31 req-b961d588-92ef-4f4e-ba54-994ae0c74048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.450 2 DEBUG nova.compute.manager [req-7634a392-592d-4c02-bc98-1d23634fce31 req-b961d588-92ef-4f4e-ba54-994ae0c74048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] No waiting events found dispatching network-vif-unplugged-fac86e41-30dc-481e-a423-10c6cdb3626f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.451 2 WARNING nova.compute.manager [req-7634a392-592d-4c02-bc98-1d23634fce31 req-b961d588-92ef-4f4e-ba54-994ae0c74048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received unexpected event network-vif-unplugged-fac86e41-30dc-481e-a423-10c6cdb3626f for instance with vm_state deleted and task_state None.
Oct 14 08:55:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:55:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/451440496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.486 2 DEBUG oslo_concurrency.processutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.494 2 DEBUG nova.compute.provider_tree [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.511 2 DEBUG nova.scheduler.client.report [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.537 2 DEBUG oslo_concurrency.lockutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.567 2 INFO nova.scheduler.client.report [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Deleted allocations for instance b063b9bf-1f88-47d3-a838-a4bcfc5eeecc
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.681 2 DEBUG oslo_concurrency.lockutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.734 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Acquiring lock "dfaff639-0439-40d7-8f56-5e8068d741cf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.735 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "dfaff639-0439-40d7-8f56-5e8068d741cf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:44 compute-0 ceph-mon[74249]: pgmap v1109: 305 pgs: 305 active+clean; 123 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Oct 14 08:55:44 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:55:44 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:55:44 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/451440496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.752 2 DEBUG nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.820 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.821 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.829 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.829 2 INFO nova.compute.claims [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:55:44 compute-0 nova_compute[259627]: 2025-10-14 08:55:44.954 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:55:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3458850696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.437 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.446 2 DEBUG nova.compute.provider_tree [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:55:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 410 KiB/s rd, 2.1 MiB/s wr, 140 op/s
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.481 2 DEBUG nova.scheduler.client.report [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.509 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.510 2 DEBUG nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.564 2 DEBUG nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.564 2 DEBUG nova.network.neutron [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.607 2 INFO nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.630 2 DEBUG nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.719 2 DEBUG nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.721 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.721 2 INFO nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Creating image(s)
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.744 2 DEBUG nova.storage.rbd_utils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] rbd image dfaff639-0439-40d7-8f56-5e8068d741cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:45 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3458850696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.775 2 DEBUG nova.storage.rbd_utils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] rbd image dfaff639-0439-40d7-8f56-5e8068d741cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.800 2 DEBUG nova.storage.rbd_utils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] rbd image dfaff639-0439-40d7-8f56-5e8068d741cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.804 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.833 2 DEBUG nova.network.neutron [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.833 2 DEBUG nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.874 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.874 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.875 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.875 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.898 2 DEBUG nova.storage.rbd_utils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] rbd image dfaff639-0439-40d7-8f56-5e8068d741cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:45 compute-0 nova_compute[259627]: 2025-10-14 08:55:45.902 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 dfaff639-0439-40d7-8f56-5e8068d741cf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.177 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 dfaff639-0439-40d7-8f56-5e8068d741cf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.253 2 DEBUG nova.storage.rbd_utils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] resizing rbd image dfaff639-0439-40d7-8f56-5e8068d741cf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.359 2 DEBUG nova.objects.instance [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lazy-loading 'migration_context' on Instance uuid dfaff639-0439-40d7-8f56-5e8068d741cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.378 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.378 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Ensure instance console log exists: /var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.379 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.379 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.379 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.381 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.386 2 WARNING nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.392 2 DEBUG nova.virt.libvirt.host [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.393 2 DEBUG nova.virt.libvirt.host [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.396 2 DEBUG nova.virt.libvirt.host [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.397 2 DEBUG nova.virt.libvirt.host [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.398 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.398 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.398 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.399 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.399 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.399 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.399 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.400 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.400 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.400 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.401 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.401 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.404 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.529 2 DEBUG nova.compute.manager [req-39e2ef75-533c-4655-a760-01325eab3492 req-6ab7cb83-e91f-4351-9d0d-fa4ea83c76a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received event network-vif-plugged-fac86e41-30dc-481e-a423-10c6cdb3626f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.530 2 DEBUG oslo_concurrency.lockutils [req-39e2ef75-533c-4655-a760-01325eab3492 req-6ab7cb83-e91f-4351-9d0d-fa4ea83c76a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.530 2 DEBUG oslo_concurrency.lockutils [req-39e2ef75-533c-4655-a760-01325eab3492 req-6ab7cb83-e91f-4351-9d0d-fa4ea83c76a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.530 2 DEBUG oslo_concurrency.lockutils [req-39e2ef75-533c-4655-a760-01325eab3492 req-6ab7cb83-e91f-4351-9d0d-fa4ea83c76a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.530 2 DEBUG nova.compute.manager [req-39e2ef75-533c-4655-a760-01325eab3492 req-6ab7cb83-e91f-4351-9d0d-fa4ea83c76a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] No waiting events found dispatching network-vif-plugged-fac86e41-30dc-481e-a423-10c6cdb3626f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.531 2 WARNING nova.compute.manager [req-39e2ef75-533c-4655-a760-01325eab3492 req-6ab7cb83-e91f-4351-9d0d-fa4ea83c76a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received unexpected event network-vif-plugged-fac86e41-30dc-481e-a423-10c6cdb3626f for instance with vm_state deleted and task_state None.
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.531 2 DEBUG nova.compute.manager [req-39e2ef75-533c-4655-a760-01325eab3492 req-6ab7cb83-e91f-4351-9d0d-fa4ea83c76a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received event network-vif-deleted-fac86e41-30dc-481e-a423-10c6cdb3626f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:46 compute-0 ceph-mon[74249]: pgmap v1110: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 410 KiB/s rd, 2.1 MiB/s wr, 140 op/s
Oct 14 08:55:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:55:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/631251832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.822 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432131.8212082, 30af67a2-4b44-481c-8ab4-296e93c1c517 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.822 2 INFO nova.compute.manager [-] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] VM Stopped (Lifecycle Event)
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.839 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.869 2 DEBUG nova.storage.rbd_utils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] rbd image dfaff639-0439-40d7-8f56-5e8068d741cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.874 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.906 2 DEBUG nova.compute.manager [None req-15fc0198-1685-455d-93b3-8634e2e64e6d - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:46 compute-0 nova_compute[259627]: 2025-10-14 08:55:46.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:55:47 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/761699309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:55:47 compute-0 nova_compute[259627]: 2025-10-14 08:55:47.371 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:47 compute-0 nova_compute[259627]: 2025-10-14 08:55:47.373 2 DEBUG nova.objects.instance [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lazy-loading 'pci_devices' on Instance uuid dfaff639-0439-40d7-8f56-5e8068d741cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:55:47 compute-0 nova_compute[259627]: 2025-10-14 08:55:47.395 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:55:47 compute-0 nova_compute[259627]:   <uuid>dfaff639-0439-40d7-8f56-5e8068d741cf</uuid>
Oct 14 08:55:47 compute-0 nova_compute[259627]:   <name>instance-00000008</name>
Oct 14 08:55:47 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:55:47 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:55:47 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerDiagnosticsTest-server-1361332870</nova:name>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:55:46</nova:creationTime>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:55:47 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:55:47 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:55:47 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:55:47 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:55:47 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:55:47 compute-0 nova_compute[259627]:         <nova:user uuid="590d8c12b92e491291f294d8ab3f0b24">tempest-ServerDiagnosticsTest-929986915-project-member</nova:user>
Oct 14 08:55:47 compute-0 nova_compute[259627]:         <nova:project uuid="82278e5903c445ef88008b2a13537a99">tempest-ServerDiagnosticsTest-929986915</nova:project>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:55:47 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:55:47 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <system>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <entry name="serial">dfaff639-0439-40d7-8f56-5e8068d741cf</entry>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <entry name="uuid">dfaff639-0439-40d7-8f56-5e8068d741cf</entry>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     </system>
Oct 14 08:55:47 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:55:47 compute-0 nova_compute[259627]:   <os>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:   </os>
Oct 14 08:55:47 compute-0 nova_compute[259627]:   <features>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:   </features>
Oct 14 08:55:47 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:55:47 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:55:47 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/dfaff639-0439-40d7-8f56-5e8068d741cf_disk">
Oct 14 08:55:47 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       </source>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:55:47 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/dfaff639-0439-40d7-8f56-5e8068d741cf_disk.config">
Oct 14 08:55:47 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       </source>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:55:47 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf/console.log" append="off"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <video>
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     </video>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:55:47 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:55:47 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:55:47 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:55:47 compute-0 nova_compute[259627]: </domain>
Oct 14 08:55:47 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:55:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:55:47 compute-0 nova_compute[259627]: 2025-10-14 08:55:47.464 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:55:47 compute-0 nova_compute[259627]: 2025-10-14 08:55:47.465 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:55:47 compute-0 nova_compute[259627]: 2025-10-14 08:55:47.465 2 INFO nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Using config drive
Oct 14 08:55:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 14 KiB/s wr, 41 op/s
Oct 14 08:55:47 compute-0 nova_compute[259627]: 2025-10-14 08:55:47.499 2 DEBUG nova.storage.rbd_utils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] rbd image dfaff639-0439-40d7-8f56-5e8068d741cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:47 compute-0 nova_compute[259627]: 2025-10-14 08:55:47.695 2 INFO nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Creating config drive at /var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf/disk.config
Oct 14 08:55:47 compute-0 nova_compute[259627]: 2025-10-14 08:55:47.704 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu7_vgazy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:47 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/631251832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:55:47 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/761699309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:55:47 compute-0 nova_compute[259627]: 2025-10-14 08:55:47.837 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu7_vgazy" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:47 compute-0 nova_compute[259627]: 2025-10-14 08:55:47.861 2 DEBUG nova.storage.rbd_utils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] rbd image dfaff639-0439-40d7-8f56-5e8068d741cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:47 compute-0 nova_compute[259627]: 2025-10-14 08:55:47.865 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf/disk.config dfaff639-0439-40d7-8f56-5e8068d741cf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:48 compute-0 nova_compute[259627]: 2025-10-14 08:55:48.037 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf/disk.config dfaff639-0439-40d7-8f56-5e8068d741cf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:48 compute-0 nova_compute[259627]: 2025-10-14 08:55:48.038 2 INFO nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Deleting local config drive /var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf/disk.config because it was imported into RBD.
Oct 14 08:55:48 compute-0 systemd-machined[214636]: New machine qemu-8-instance-00000008.
Oct 14 08:55:48 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Oct 14 08:55:48 compute-0 ceph-mon[74249]: pgmap v1111: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 14 KiB/s wr, 41 op/s
Oct 14 08:55:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 14 KiB/s wr, 41 op/s
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.581 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432149.580854, dfaff639-0439-40d7-8f56-5e8068d741cf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.583 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] VM Resumed (Lifecycle Event)
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.588 2 DEBUG nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.589 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.595 2 INFO nova.virt.libvirt.driver [-] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Instance spawned successfully.
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.596 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.607 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.612 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.624 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.625 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.626 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.626 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.627 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.628 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.687 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.688 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432149.5831437, dfaff639-0439-40d7-8f56-5e8068d741cf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.689 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] VM Started (Lifecycle Event)
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.714 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.719 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.743 2 INFO nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Took 4.02 seconds to spawn the instance on the hypervisor.
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.744 2 DEBUG nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.748 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.833 2 INFO nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Took 5.03 seconds to build instance.
Oct 14 08:55:49 compute-0 nova_compute[259627]: 2025-10-14 08:55:49.858 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "dfaff639-0439-40d7-8f56-5e8068d741cf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:50 compute-0 nova_compute[259627]: 2025-10-14 08:55:50.273 2 DEBUG nova.compute.manager [None req-875931fc-c8e7-4d31-a31a-576f8b504865 abec7e84696f45b39425bd6626415ef8 3dd4f80f8f3246faa638ef3cd796ed26 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:50 compute-0 nova_compute[259627]: 2025-10-14 08:55:50.277 2 INFO nova.compute.manager [None req-875931fc-c8e7-4d31-a31a-576f8b504865 abec7e84696f45b39425bd6626415ef8 3dd4f80f8f3246faa638ef3cd796ed26 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Retrieving diagnostics
Oct 14 08:55:50 compute-0 nova_compute[259627]: 2025-10-14 08:55:50.453 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Acquiring lock "dfaff639-0439-40d7-8f56-5e8068d741cf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:50 compute-0 nova_compute[259627]: 2025-10-14 08:55:50.454 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "dfaff639-0439-40d7-8f56-5e8068d741cf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:50 compute-0 nova_compute[259627]: 2025-10-14 08:55:50.454 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Acquiring lock "dfaff639-0439-40d7-8f56-5e8068d741cf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:50 compute-0 nova_compute[259627]: 2025-10-14 08:55:50.454 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "dfaff639-0439-40d7-8f56-5e8068d741cf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:50 compute-0 nova_compute[259627]: 2025-10-14 08:55:50.454 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "dfaff639-0439-40d7-8f56-5e8068d741cf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:50 compute-0 nova_compute[259627]: 2025-10-14 08:55:50.455 2 INFO nova.compute.manager [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Terminating instance
Oct 14 08:55:50 compute-0 nova_compute[259627]: 2025-10-14 08:55:50.456 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Acquiring lock "refresh_cache-dfaff639-0439-40d7-8f56-5e8068d741cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:55:50 compute-0 nova_compute[259627]: 2025-10-14 08:55:50.456 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Acquired lock "refresh_cache-dfaff639-0439-40d7-8f56-5e8068d741cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:55:50 compute-0 nova_compute[259627]: 2025-10-14 08:55:50.456 2 DEBUG nova.network.neutron [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:55:50 compute-0 nova_compute[259627]: 2025-10-14 08:55:50.767 2 DEBUG nova.network.neutron [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:55:50 compute-0 ceph-mon[74249]: pgmap v1112: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 14 KiB/s wr, 41 op/s
Oct 14 08:55:50 compute-0 nova_compute[259627]: 2025-10-14 08:55:50.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:51 compute-0 nova_compute[259627]: 2025-10-14 08:55:51.036 2 DEBUG nova.network.neutron [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:51 compute-0 nova_compute[259627]: 2025-10-14 08:55:51.061 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Releasing lock "refresh_cache-dfaff639-0439-40d7-8f56-5e8068d741cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:55:51 compute-0 nova_compute[259627]: 2025-10-14 08:55:51.062 2 DEBUG nova.compute.manager [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:55:51 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct 14 08:55:51 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 2.963s CPU time.
Oct 14 08:55:51 compute-0 systemd-machined[214636]: Machine qemu-8-instance-00000008 terminated.
Oct 14 08:55:51 compute-0 nova_compute[259627]: 2025-10-14 08:55:51.283 2 INFO nova.virt.libvirt.driver [-] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Instance destroyed successfully.
Oct 14 08:55:51 compute-0 nova_compute[259627]: 2025-10-14 08:55:51.283 2 DEBUG nova.objects.instance [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lazy-loading 'resources' on Instance uuid dfaff639-0439-40d7-8f56-5e8068d741cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:55:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 1.8 MiB/s wr, 92 op/s
Oct 14 08:55:51 compute-0 nova_compute[259627]: 2025-10-14 08:55:51.688 2 INFO nova.virt.libvirt.driver [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Deleting instance files /var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf_del
Oct 14 08:55:51 compute-0 nova_compute[259627]: 2025-10-14 08:55:51.689 2 INFO nova.virt.libvirt.driver [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Deletion of /var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf_del complete
Oct 14 08:55:51 compute-0 nova_compute[259627]: 2025-10-14 08:55:51.766 2 INFO nova.compute.manager [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 14 08:55:51 compute-0 nova_compute[259627]: 2025-10-14 08:55:51.767 2 DEBUG oslo.service.loopingcall [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:55:51 compute-0 nova_compute[259627]: 2025-10-14 08:55:51.768 2 DEBUG nova.compute.manager [-] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:55:51 compute-0 nova_compute[259627]: 2025-10-14 08:55:51.768 2 DEBUG nova.network.neutron [-] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:55:51 compute-0 nova_compute[259627]: 2025-10-14 08:55:51.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:51 compute-0 nova_compute[259627]: 2025-10-14 08:55:51.924 2 DEBUG nova.network.neutron [-] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:55:51 compute-0 nova_compute[259627]: 2025-10-14 08:55:51.945 2 DEBUG nova.network.neutron [-] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:55:51 compute-0 nova_compute[259627]: 2025-10-14 08:55:51.976 2 INFO nova.compute.manager [-] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Took 0.21 seconds to deallocate network for instance.
Oct 14 08:55:52 compute-0 nova_compute[259627]: 2025-10-14 08:55:52.028 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:52 compute-0 nova_compute[259627]: 2025-10-14 08:55:52.028 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:52 compute-0 nova_compute[259627]: 2025-10-14 08:55:52.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:52 compute-0 nova_compute[259627]: 2025-10-14 08:55:52.151 2 DEBUG oslo_concurrency.processutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:55:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:55:52 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1808018943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:52 compute-0 nova_compute[259627]: 2025-10-14 08:55:52.595 2 DEBUG oslo_concurrency.processutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:52 compute-0 nova_compute[259627]: 2025-10-14 08:55:52.601 2 DEBUG nova.compute.provider_tree [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:55:52 compute-0 nova_compute[259627]: 2025-10-14 08:55:52.614 2 DEBUG nova.scheduler.client.report [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:55:52 compute-0 nova_compute[259627]: 2025-10-14 08:55:52.644 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:52 compute-0 nova_compute[259627]: 2025-10-14 08:55:52.673 2 INFO nova.scheduler.client.report [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Deleted allocations for instance dfaff639-0439-40d7-8f56-5e8068d741cf
Oct 14 08:55:52 compute-0 nova_compute[259627]: 2025-10-14 08:55:52.768 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "dfaff639-0439-40d7-8f56-5e8068d741cf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:52 compute-0 nova_compute[259627]: 2025-10-14 08:55:52.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:52 compute-0 ceph-mon[74249]: pgmap v1113: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 1.8 MiB/s wr, 92 op/s
Oct 14 08:55:52 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1808018943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:53 compute-0 nova_compute[259627]: 2025-10-14 08:55:53.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 14 08:55:54 compute-0 ceph-mon[74249]: pgmap v1114: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 14 08:55:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Oct 14 08:55:55 compute-0 nova_compute[259627]: 2025-10-14 08:55:55.632 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquiring lock "726e9e21-4f40-48aa-947a-95a78db4dbf3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:55 compute-0 nova_compute[259627]: 2025-10-14 08:55:55.634 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:55 compute-0 nova_compute[259627]: 2025-10-14 08:55:55.654 2 DEBUG nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:55:55 compute-0 nova_compute[259627]: 2025-10-14 08:55:55.734 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:55 compute-0 nova_compute[259627]: 2025-10-14 08:55:55.735 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:55 compute-0 nova_compute[259627]: 2025-10-14 08:55:55.741 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:55:55 compute-0 nova_compute[259627]: 2025-10-14 08:55:55.741 2 INFO nova.compute.claims [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:55:55 compute-0 nova_compute[259627]: 2025-10-14 08:55:55.762 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432140.7610755, b063b9bf-1f88-47d3-a838-a4bcfc5eeecc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:55:55 compute-0 nova_compute[259627]: 2025-10-14 08:55:55.762 2 INFO nova.compute.manager [-] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] VM Stopped (Lifecycle Event)
Oct 14 08:55:55 compute-0 nova_compute[259627]: 2025-10-14 08:55:55.801 2 DEBUG nova.compute.manager [None req-5ff0cd66-dd1f-4f3e-bcec-5bac6270d4c3 - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:55:55 compute-0 nova_compute[259627]: 2025-10-14 08:55:55.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:55 compute-0 nova_compute[259627]: 2025-10-14 08:55:55.868 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:55:56 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1344160010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.375 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.382 2 DEBUG nova.compute.provider_tree [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.406 2 DEBUG nova.scheduler.client.report [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.440 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.441 2 DEBUG nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.498 2 DEBUG nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.499 2 DEBUG nova.network.neutron [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.516 2 INFO nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.532 2 DEBUG nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.633 2 DEBUG nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.635 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.636 2 INFO nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Creating image(s)
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.670 2 DEBUG nova.storage.rbd_utils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] rbd image 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:56 compute-0 podman[280221]: 2025-10-14 08:55:56.678380808 +0000 UTC m=+0.085811278 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 08:55:56 compute-0 podman[280220]: 2025-10-14 08:55:56.692003114 +0000 UTC m=+0.094881121 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.709 2 DEBUG nova.storage.rbd_utils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] rbd image 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.731 2 DEBUG nova.storage.rbd_utils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] rbd image 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.734 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.817 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.819 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.820 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.821 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:56 compute-0 ceph-mon[74249]: pgmap v1115: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Oct 14 08:55:56 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1344160010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.854 2 DEBUG nova.storage.rbd_utils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] rbd image 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.858 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:55:56 compute-0 nova_compute[259627]: 2025-10-14 08:55:56.887 2 DEBUG nova.policy [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'efc7f5a2c6324662956767ff381c6b16', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '11f8dfac5e5c4bfe9bccae4608ea8d51', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:55:57 compute-0 nova_compute[259627]: 2025-10-14 08:55:57.123 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:55:57 compute-0 nova_compute[259627]: 2025-10-14 08:55:57.210 2 DEBUG nova.storage.rbd_utils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] resizing rbd image 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:55:57 compute-0 nova_compute[259627]: 2025-10-14 08:55:57.310 2 DEBUG nova.objects.instance [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lazy-loading 'migration_context' on Instance uuid 726e9e21-4f40-48aa-947a-95a78db4dbf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:55:57 compute-0 nova_compute[259627]: 2025-10-14 08:55:57.321 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:55:57 compute-0 nova_compute[259627]: 2025-10-14 08:55:57.322 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Ensure instance console log exists: /var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:55:57 compute-0 nova_compute[259627]: 2025-10-14 08:55:57.322 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:55:57 compute-0 nova_compute[259627]: 2025-10-14 08:55:57.323 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:55:57 compute-0 nova_compute[259627]: 2025-10-14 08:55:57.323 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:55:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:55:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 14 08:55:57 compute-0 nova_compute[259627]: 2025-10-14 08:55:57.851 2 DEBUG nova.network.neutron [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Successfully created port: d5a0b41a-72da-4c0e-a568-fdba9541e479 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:55:58 compute-0 ceph-mon[74249]: pgmap v1116: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 14 08:55:58 compute-0 nova_compute[259627]: 2025-10-14 08:55:58.879 2 DEBUG nova.network.neutron [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Successfully updated port: d5a0b41a-72da-4c0e-a568-fdba9541e479 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:55:58 compute-0 nova_compute[259627]: 2025-10-14 08:55:58.891 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquiring lock "refresh_cache-726e9e21-4f40-48aa-947a-95a78db4dbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:55:58 compute-0 nova_compute[259627]: 2025-10-14 08:55:58.891 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquired lock "refresh_cache-726e9e21-4f40-48aa-947a-95a78db4dbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:55:58 compute-0 nova_compute[259627]: 2025-10-14 08:55:58.891 2 DEBUG nova.network.neutron [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:55:59 compute-0 nova_compute[259627]: 2025-10-14 08:55:59.087 2 DEBUG nova.compute.manager [req-40d2c1c8-a27e-4c2c-831c-72968dd2bcb9 req-07f18681-bf80-4748-9096-0339de77ef2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Received event network-changed-d5a0b41a-72da-4c0e-a568-fdba9541e479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:55:59 compute-0 nova_compute[259627]: 2025-10-14 08:55:59.087 2 DEBUG nova.compute.manager [req-40d2c1c8-a27e-4c2c-831c-72968dd2bcb9 req-07f18681-bf80-4748-9096-0339de77ef2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Refreshing instance network info cache due to event network-changed-d5a0b41a-72da-4c0e-a568-fdba9541e479. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:55:59 compute-0 nova_compute[259627]: 2025-10-14 08:55:59.088 2 DEBUG oslo_concurrency.lockutils [req-40d2c1c8-a27e-4c2c-831c-72968dd2bcb9 req-07f18681-bf80-4748-9096-0339de77ef2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-726e9e21-4f40-48aa-947a-95a78db4dbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:55:59 compute-0 nova_compute[259627]: 2025-10-14 08:55:59.173 2 DEBUG nova.network.neutron [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:55:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.395 2 DEBUG nova.network.neutron [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Updating instance_info_cache with network_info: [{"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.418 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Releasing lock "refresh_cache-726e9e21-4f40-48aa-947a-95a78db4dbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.419 2 DEBUG nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Instance network_info: |[{"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.419 2 DEBUG oslo_concurrency.lockutils [req-40d2c1c8-a27e-4c2c-831c-72968dd2bcb9 req-07f18681-bf80-4748-9096-0339de77ef2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-726e9e21-4f40-48aa-947a-95a78db4dbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.419 2 DEBUG nova.network.neutron [req-40d2c1c8-a27e-4c2c-831c-72968dd2bcb9 req-07f18681-bf80-4748-9096-0339de77ef2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Refreshing network info cache for port d5a0b41a-72da-4c0e-a568-fdba9541e479 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.421 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Start _get_guest_xml network_info=[{"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.426 2 WARNING nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.431 2 DEBUG nova.virt.libvirt.host [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.431 2 DEBUG nova.virt.libvirt.host [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.438 2 DEBUG nova.virt.libvirt.host [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.439 2 DEBUG nova.virt.libvirt.host [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.439 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.440 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.440 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.441 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.441 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.441 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.441 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.442 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.442 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.442 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.443 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.443 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.446 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:56:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/839077312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:00 compute-0 ceph-mon[74249]: pgmap v1117: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 14 08:56:00 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/839077312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.845 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.880 2 DEBUG nova.storage.rbd_utils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] rbd image 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:00 compute-0 nova_compute[259627]: 2025-10-14 08:56:00.885 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:56:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2199315589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.311 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.314 2 DEBUG nova.virt.libvirt.vif [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:55:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1369634349',display_name='tempest-ImagesNegativeTestJSON-server-1369634349',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1369634349',id=9,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='11f8dfac5e5c4bfe9bccae4608ea8d51',ramdisk_id='',reservation_id='r-ogdi8y87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-866675267',owner_user_name='tempest-ImagesNegativeTestJSON-866675267-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:55:56Z,user_data=None,user_id='efc7f5a2c6324662956767ff381c6b16',uuid=726e9e21-4f40-48aa-947a-95a78db4dbf3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.315 2 DEBUG nova.network.os_vif_util [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Converting VIF {"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.316 2 DEBUG nova.network.os_vif_util [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0c:0a,bridge_name='br-int',has_traffic_filtering=True,id=d5a0b41a-72da-4c0e-a568-fdba9541e479,network=Network(9b410fba-e0a4-4544-b437-e1dbffa6da06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5a0b41a-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.318 2 DEBUG nova.objects.instance [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lazy-loading 'pci_devices' on Instance uuid 726e9e21-4f40-48aa-947a-95a78db4dbf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.338 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:56:01 compute-0 nova_compute[259627]:   <uuid>726e9e21-4f40-48aa-947a-95a78db4dbf3</uuid>
Oct 14 08:56:01 compute-0 nova_compute[259627]:   <name>instance-00000009</name>
Oct 14 08:56:01 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:56:01 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:56:01 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <nova:name>tempest-ImagesNegativeTestJSON-server-1369634349</nova:name>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:56:00</nova:creationTime>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:56:01 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:56:01 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:56:01 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:56:01 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:56:01 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:56:01 compute-0 nova_compute[259627]:         <nova:user uuid="efc7f5a2c6324662956767ff381c6b16">tempest-ImagesNegativeTestJSON-866675267-project-member</nova:user>
Oct 14 08:56:01 compute-0 nova_compute[259627]:         <nova:project uuid="11f8dfac5e5c4bfe9bccae4608ea8d51">tempest-ImagesNegativeTestJSON-866675267</nova:project>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:56:01 compute-0 nova_compute[259627]:         <nova:port uuid="d5a0b41a-72da-4c0e-a568-fdba9541e479">
Oct 14 08:56:01 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:56:01 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:56:01 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <system>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <entry name="serial">726e9e21-4f40-48aa-947a-95a78db4dbf3</entry>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <entry name="uuid">726e9e21-4f40-48aa-947a-95a78db4dbf3</entry>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     </system>
Oct 14 08:56:01 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:56:01 compute-0 nova_compute[259627]:   <os>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:   </os>
Oct 14 08:56:01 compute-0 nova_compute[259627]:   <features>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:   </features>
Oct 14 08:56:01 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:56:01 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:56:01 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/726e9e21-4f40-48aa-947a-95a78db4dbf3_disk">
Oct 14 08:56:01 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       </source>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:56:01 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/726e9e21-4f40-48aa-947a-95a78db4dbf3_disk.config">
Oct 14 08:56:01 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       </source>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:56:01 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:8c:0c:0a"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <target dev="tapd5a0b41a-72"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3/console.log" append="off"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <video>
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     </video>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:56:01 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:56:01 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:56:01 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:56:01 compute-0 nova_compute[259627]: </domain>
Oct 14 08:56:01 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.340 2 DEBUG nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Preparing to wait for external event network-vif-plugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.340 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquiring lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.341 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.342 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.343 2 DEBUG nova.virt.libvirt.vif [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:55:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1369634349',display_name='tempest-ImagesNegativeTestJSON-server-1369634349',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1369634349',id=9,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='11f8dfac5e5c4bfe9bccae4608ea8d51',ramdisk_id='',reservation_id='r-ogdi8y87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-866675267',owner_user_name='tempest-ImagesNegativeTestJSON-866675267-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:55:56Z,user_data=None,user_id='efc7f5a2c6324662956767ff381c6b16',uuid=726e9e21-4f40-48aa-947a-95a78db4dbf3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.343 2 DEBUG nova.network.os_vif_util [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Converting VIF {"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.344 2 DEBUG nova.network.os_vif_util [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0c:0a,bridge_name='br-int',has_traffic_filtering=True,id=d5a0b41a-72da-4c0e-a568-fdba9541e479,network=Network(9b410fba-e0a4-4544-b437-e1dbffa6da06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5a0b41a-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.345 2 DEBUG os_vif [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0c:0a,bridge_name='br-int',has_traffic_filtering=True,id=d5a0b41a-72da-4c0e-a568-fdba9541e479,network=Network(9b410fba-e0a4-4544-b437-e1dbffa6da06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5a0b41a-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.347 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.348 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.353 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5a0b41a-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.354 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5a0b41a-72, col_values=(('external_ids', {'iface-id': 'd5a0b41a-72da-4c0e-a568-fdba9541e479', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:0c:0a', 'vm-uuid': '726e9e21-4f40-48aa-947a-95a78db4dbf3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:01 compute-0 NetworkManager[44885]: <info>  [1760432161.3580] manager: (tapd5a0b41a-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.368 2 INFO os_vif [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0c:0a,bridge_name='br-int',has_traffic_filtering=True,id=d5a0b41a-72da-4c0e-a568-fdba9541e479,network=Network(9b410fba-e0a4-4544-b437-e1dbffa6da06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5a0b41a-72')
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.443 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.444 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.444 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] No VIF found with MAC fa:16:3e:8c:0c:0a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.445 2 INFO nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Using config drive
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.474 2 DEBUG nova.storage.rbd_utils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] rbd image 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:01 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2199315589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.891 2 INFO nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Creating config drive at /var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3/disk.config
Oct 14 08:56:01 compute-0 nova_compute[259627]: 2025-10-14 08:56:01.902 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe2_qssvt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.039 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe2_qssvt" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.066 2 DEBUG nova.storage.rbd_utils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] rbd image 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.071 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3/disk.config 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.221 2 DEBUG nova.network.neutron [req-40d2c1c8-a27e-4c2c-831c-72968dd2bcb9 req-07f18681-bf80-4748-9096-0339de77ef2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Updated VIF entry in instance network info cache for port d5a0b41a-72da-4c0e-a568-fdba9541e479. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.222 2 DEBUG nova.network.neutron [req-40d2c1c8-a27e-4c2c-831c-72968dd2bcb9 req-07f18681-bf80-4748-9096-0339de77ef2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Updating instance_info_cache with network_info: [{"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.242 2 DEBUG oslo_concurrency.lockutils [req-40d2c1c8-a27e-4c2c-831c-72968dd2bcb9 req-07f18681-bf80-4748-9096-0339de77ef2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-726e9e21-4f40-48aa-947a-95a78db4dbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.244 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3/disk.config 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.244 2 INFO nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Deleting local config drive /var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3/disk.config because it was imported into RBD.
Oct 14 08:56:02 compute-0 kernel: tapd5a0b41a-72: entered promiscuous mode
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:02 compute-0 ovn_controller[152662]: 2025-10-14T08:56:02Z|00069|binding|INFO|Claiming lport d5a0b41a-72da-4c0e-a568-fdba9541e479 for this chassis.
Oct 14 08:56:02 compute-0 ovn_controller[152662]: 2025-10-14T08:56:02Z|00070|binding|INFO|d5a0b41a-72da-4c0e-a568-fdba9541e479: Claiming fa:16:3e:8c:0c:0a 10.100.0.9
Oct 14 08:56:02 compute-0 NetworkManager[44885]: <info>  [1760432162.2947] manager: (tapd5a0b41a-72): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.308 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:0c:0a 10.100.0.9'], port_security=['fa:16:3e:8c:0c:0a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '726e9e21-4f40-48aa-947a-95a78db4dbf3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b410fba-e0a4-4544-b437-e1dbffa6da06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '11f8dfac5e5c4bfe9bccae4608ea8d51', 'neutron:revision_number': '2', 'neutron:security_group_ids': '109e5b33-46b2-4d5e-adaf-e6f9890b6610', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8cb3f08b-f3e8-45d0-921c-e2283d0c50c8, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d5a0b41a-72da-4c0e-a568-fdba9541e479) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.310 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d5a0b41a-72da-4c0e-a568-fdba9541e479 in datapath 9b410fba-e0a4-4544-b437-e1dbffa6da06 bound to our chassis
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.312 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9b410fba-e0a4-4544-b437-e1dbffa6da06
Oct 14 08:56:02 compute-0 systemd-machined[214636]: New machine qemu-9-instance-00000009.
Oct 14 08:56:02 compute-0 systemd-udevd[280558]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.326 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8d8a47bd-c0f0-40db-af6e-395b69c5e316]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.327 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9b410fba-e1 in ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.328 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9b410fba-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.329 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9ee9bef4-81ff-44d2-bd0a-626bb3728a20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.329 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[64104582-9802-4261-af49-35c25a505d1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.344 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8d69aa-118c-4215-932a-c06459ea6b28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:02 compute-0 NetworkManager[44885]: <info>  [1760432162.3485] device (tapd5a0b41a-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:56:02 compute-0 NetworkManager[44885]: <info>  [1760432162.3501] device (tapd5a0b41a-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:56:02 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.373 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be729019-0914-4e69-a576-55e569d62759]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:02 compute-0 ovn_controller[152662]: 2025-10-14T08:56:02Z|00071|binding|INFO|Setting lport d5a0b41a-72da-4c0e-a568-fdba9541e479 ovn-installed in OVS
Oct 14 08:56:02 compute-0 ovn_controller[152662]: 2025-10-14T08:56:02Z|00072|binding|INFO|Setting lport d5a0b41a-72da-4c0e-a568-fdba9541e479 up in Southbound
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.404 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[954301b5-f0cf-4a97-9b5b-05c1ac1cbe11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:02 compute-0 NetworkManager[44885]: <info>  [1760432162.4091] manager: (tap9b410fba-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.408 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dc251199-7e10-41cd-82f4-4c1a795257b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:02 compute-0 systemd-udevd[280562]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.438 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8da4f188-69d8-4aee-a9b1-87b2a4acf5c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.441 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8f439dad-3a8c-46c5-8780-fa90ddc37184]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:56:02 compute-0 NetworkManager[44885]: <info>  [1760432162.4651] device (tap9b410fba-e0): carrier: link connected
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.470 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e75f54e8-4c63-47e3-a8c1-815aac2e7f3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.489 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9eedc5ce-a6ad-4bc2-bb82-a213327cbc93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b410fba-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:3b:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594122, 'reachable_time': 26959, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280591, 'error': None, 'target': 'ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.510 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[64e259ad-ee50-49c6-bd4e-1f56cb37a29d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:3bd6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 594122, 'tstamp': 594122}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280592, 'error': None, 'target': 'ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.529 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0b59b2-17a8-49a9-b152-88dac01afd4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b410fba-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:3b:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594122, 'reachable_time': 26959, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280593, 'error': None, 'target': 'ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.555 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[060daf68-e8e0-4d81-b9e5-197b0eda9e6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.611 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4a146db4-f1eb-4e9d-ac1e-026279edd6aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b410fba-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.614 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b410fba-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:02 compute-0 kernel: tap9b410fba-e0: entered promiscuous mode
Oct 14 08:56:02 compute-0 NetworkManager[44885]: <info>  [1760432162.6181] manager: (tap9b410fba-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.621 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9b410fba-e0, col_values=(('external_ids', {'iface-id': '291e43c3-46e3-4e83-b745-e05afcea3301'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:02 compute-0 ovn_controller[152662]: 2025-10-14T08:56:02Z|00073|binding|INFO|Releasing lport 291e43c3-46e3-4e83-b745-e05afcea3301 from this chassis (sb_readonly=0)
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.643 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9b410fba-e0a4-4544-b437-e1dbffa6da06.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9b410fba-e0a4-4544-b437-e1dbffa6da06.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.644 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1d82f5cb-7e63-4a29-9d56-48d51bab91c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.645 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-9b410fba-e0a4-4544-b437-e1dbffa6da06
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/9b410fba-e0a4-4544-b437-e1dbffa6da06.pid.haproxy
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 9b410fba-e0a4-4544-b437-e1dbffa6da06
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:56:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.646 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06', 'env', 'PROCESS_TAG=haproxy-9b410fba-e0a4-4544-b437-e1dbffa6da06', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9b410fba-e0a4-4544-b437-e1dbffa6da06.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.699 2 DEBUG nova.compute.manager [req-29109707-ffde-4dc1-9b75-f95aacb0c025 req-90930d97-b301-43f9-b276-14c6880e2268 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Received event network-vif-plugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.700 2 DEBUG oslo_concurrency.lockutils [req-29109707-ffde-4dc1-9b75-f95aacb0c025 req-90930d97-b301-43f9-b276-14c6880e2268 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.700 2 DEBUG oslo_concurrency.lockutils [req-29109707-ffde-4dc1-9b75-f95aacb0c025 req-90930d97-b301-43f9-b276-14c6880e2268 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.700 2 DEBUG oslo_concurrency.lockutils [req-29109707-ffde-4dc1-9b75-f95aacb0c025 req-90930d97-b301-43f9-b276-14c6880e2268 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.700 2 DEBUG nova.compute.manager [req-29109707-ffde-4dc1-9b75-f95aacb0c025 req-90930d97-b301-43f9-b276-14c6880e2268 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Processing event network-vif-plugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:56:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:56:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:56:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:56:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:56:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:56:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.784 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Acquiring lock "0936e31d-4bed-46c8-a561-05467cf93f75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.785 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "0936e31d-4bed-46c8-a561-05467cf93f75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.800 2 DEBUG nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:56:02 compute-0 ceph-mon[74249]: pgmap v1118: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.878 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.886 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.898 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:56:02 compute-0 nova_compute[259627]: 2025-10-14 08:56:02.899 2 INFO nova.compute.claims [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.023 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:03 compute-0 podman[280667]: 2025-10-14 08:56:03.069773436 +0000 UTC m=+0.070379657 container create a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 08:56:03 compute-0 systemd[1]: Started libpod-conmon-a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b.scope.
Oct 14 08:56:03 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:56:03 compute-0 podman[280667]: 2025-10-14 08:56:03.032669061 +0000 UTC m=+0.033275362 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04047615a0e39b76733f58c9bbb2c5b94aae3cbd29f91e22f62d4a2216aca8e0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:56:03 compute-0 podman[280667]: 2025-10-14 08:56:03.14250017 +0000 UTC m=+0.143106421 container init a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 08:56:03 compute-0 podman[280667]: 2025-10-14 08:56:03.147605946 +0000 UTC m=+0.148212157 container start a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 08:56:03 compute-0 neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06[280681]: [NOTICE]   (280685) : New worker (280706) forked
Oct 14 08:56:03 compute-0 neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06[280681]: [NOTICE]   (280685) : Loading success.
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.401 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432163.4010384, 726e9e21-4f40-48aa-947a-95a78db4dbf3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.402 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] VM Started (Lifecycle Event)
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.403 2 DEBUG nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.408 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.411 2 INFO nova.virt.libvirt.driver [-] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Instance spawned successfully.
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.411 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.424 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.429 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.433 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.433 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.434 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.434 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.435 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.435 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.474 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.475 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432163.403522, 726e9e21-4f40-48aa-947a-95a78db4dbf3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.475 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] VM Paused (Lifecycle Event)
Oct 14 08:56:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:56:03 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1210446206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.494 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.497 2 INFO nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Took 6.86 seconds to spawn the instance on the hypervisor.
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.498 2 DEBUG nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.499 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.502 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432163.406412, 726e9e21-4f40-48aa-947a-95a78db4dbf3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.503 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] VM Resumed (Lifecycle Event)
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.507 2 DEBUG nova.compute.provider_tree [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.529 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.532 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.557 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.566 2 DEBUG nova.scheduler.client.report [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.572 2 INFO nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Took 7.86 seconds to build instance.
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.593 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.594 2 DEBUG nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.596 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.644 2 DEBUG nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.645 2 DEBUG nova.network.neutron [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.665 2 INFO nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.682 2 DEBUG nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.773 2 DEBUG nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.774 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.775 2 INFO nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Creating image(s)
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.797 2 DEBUG nova.storage.rbd_utils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] rbd image 0936e31d-4bed-46c8-a561-05467cf93f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.820 2 DEBUG nova.storage.rbd_utils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] rbd image 0936e31d-4bed-46c8-a561-05467cf93f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.845 2 DEBUG nova.storage.rbd_utils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] rbd image 0936e31d-4bed-46c8-a561-05467cf93f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.849 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:03 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1210446206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.910 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.910 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.911 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.911 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.932 2 DEBUG nova.storage.rbd_utils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] rbd image 0936e31d-4bed-46c8-a561-05467cf93f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.938 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 0936e31d-4bed-46c8-a561-05467cf93f75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.962 2 DEBUG nova.network.neutron [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 14 08:56:03 compute-0 nova_compute[259627]: 2025-10-14 08:56:03.962 2 DEBUG nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.177 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 0936e31d-4bed-46c8-a561-05467cf93f75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.233 2 DEBUG nova.storage.rbd_utils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] resizing rbd image 0936e31d-4bed-46c8-a561-05467cf93f75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.281 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "ed89f48c-8144-453c-9357-0abc99716b22" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.282 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "ed89f48c-8144-453c-9357-0abc99716b22" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.330 2 DEBUG oslo_concurrency.lockutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquiring lock "726e9e21-4f40-48aa-947a-95a78db4dbf3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.331 2 DEBUG oslo_concurrency.lockutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.331 2 DEBUG oslo_concurrency.lockutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquiring lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.331 2 DEBUG oslo_concurrency.lockutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.331 2 DEBUG oslo_concurrency.lockutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.333 2 INFO nova.compute.manager [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Terminating instance
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.334 2 DEBUG nova.compute.manager [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.335 2 DEBUG nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.343 2 DEBUG nova.objects.instance [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lazy-loading 'migration_context' on Instance uuid 0936e31d-4bed-46c8-a561-05467cf93f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.366 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.367 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Ensure instance console log exists: /var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.367 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.367 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.367 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.368 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.371 2 WARNING nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:56:04 compute-0 kernel: tapd5a0b41a-72 (unregistering): left promiscuous mode
Oct 14 08:56:04 compute-0 NetworkManager[44885]: <info>  [1760432164.3780] device (tapd5a0b41a-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.377 2 DEBUG nova.virt.libvirt.host [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.380 2 DEBUG nova.virt.libvirt.host [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:56:04 compute-0 ovn_controller[152662]: 2025-10-14T08:56:04Z|00074|binding|INFO|Releasing lport d5a0b41a-72da-4c0e-a568-fdba9541e479 from this chassis (sb_readonly=0)
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:04 compute-0 ovn_controller[152662]: 2025-10-14T08:56:04Z|00075|binding|INFO|Setting lport d5a0b41a-72da-4c0e-a568-fdba9541e479 down in Southbound
Oct 14 08:56:04 compute-0 ovn_controller[152662]: 2025-10-14T08:56:04Z|00076|binding|INFO|Removing iface tapd5a0b41a-72 ovn-installed in OVS
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.387 2 DEBUG nova.virt.libvirt.host [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.388 2 DEBUG nova.virt.libvirt.host [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.388 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.388 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.389 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.389 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.389 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.389 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.389 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.390 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.390 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.390 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.390 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.390 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.393 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.392 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:0c:0a 10.100.0.9'], port_security=['fa:16:3e:8c:0c:0a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '726e9e21-4f40-48aa-947a-95a78db4dbf3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b410fba-e0a4-4544-b437-e1dbffa6da06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '11f8dfac5e5c4bfe9bccae4608ea8d51', 'neutron:revision_number': '4', 'neutron:security_group_ids': '109e5b33-46b2-4d5e-adaf-e6f9890b6610', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8cb3f08b-f3e8-45d0-921c-e2283d0c50c8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d5a0b41a-72da-4c0e-a568-fdba9541e479) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:56:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.394 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d5a0b41a-72da-4c0e-a568-fdba9541e479 in datapath 9b410fba-e0a4-4544-b437-e1dbffa6da06 unbound from our chassis
Oct 14 08:56:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.395 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9b410fba-e0a4-4544-b437-e1dbffa6da06, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:56:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.396 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[42c90486-4e69-4742-adb5-6a75110070c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.396 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06 namespace which is not needed anymore
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:04 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct 14 08:56:04 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 1.844s CPU time.
Oct 14 08:56:04 compute-0 systemd-machined[214636]: Machine qemu-9-instance-00000009 terminated.
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.447 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.447 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.456 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.457 2 INFO nova.compute.claims [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:56:04 compute-0 neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06[280681]: [NOTICE]   (280685) : haproxy version is 2.8.14-c23fe91
Oct 14 08:56:04 compute-0 neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06[280681]: [NOTICE]   (280685) : path to executable is /usr/sbin/haproxy
Oct 14 08:56:04 compute-0 neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06[280681]: [WARNING]  (280685) : Exiting Master process...
Oct 14 08:56:04 compute-0 neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06[280681]: [ALERT]    (280685) : Current worker (280706) exited with code 143 (Terminated)
Oct 14 08:56:04 compute-0 neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06[280681]: [WARNING]  (280685) : All workers exited. Exiting... (0)
Oct 14 08:56:04 compute-0 systemd[1]: libpod-a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b.scope: Deactivated successfully.
Oct 14 08:56:04 compute-0 podman[280906]: 2025-10-14 08:56:04.52307529 +0000 UTC m=+0.043805071 container died a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:56:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b-userdata-shm.mount: Deactivated successfully.
Oct 14 08:56:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-04047615a0e39b76733f58c9bbb2c5b94aae3cbd29f91e22f62d4a2216aca8e0-merged.mount: Deactivated successfully.
Oct 14 08:56:04 compute-0 NetworkManager[44885]: <info>  [1760432164.5590] manager: (tapd5a0b41a-72): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Oct 14 08:56:04 compute-0 podman[280906]: 2025-10-14 08:56:04.559049267 +0000 UTC m=+0.079778988 container cleanup a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 08:56:04 compute-0 systemd[1]: libpod-conmon-a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b.scope: Deactivated successfully.
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.579 2 INFO nova.virt.libvirt.driver [-] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Instance destroyed successfully.
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.579 2 DEBUG nova.objects.instance [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lazy-loading 'resources' on Instance uuid 726e9e21-4f40-48aa-947a-95a78db4dbf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.597 2 DEBUG nova.virt.libvirt.vif [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:55:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1369634349',display_name='tempest-ImagesNegativeTestJSON-server-1369634349',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1369634349',id=9,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:56:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='11f8dfac5e5c4bfe9bccae4608ea8d51',ramdisk_id='',reservation_id='r-ogdi8y87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-866675267',owner_user_name='tempest-ImagesNegativeTestJSON-866675267-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:56:03Z,user_data=None,user_id='efc7f5a2c6324662956767ff381c6b16',uuid=726e9e21-4f40-48aa-947a-95a78db4dbf3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.597 2 DEBUG nova.network.os_vif_util [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Converting VIF {"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.598 2 DEBUG nova.network.os_vif_util [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0c:0a,bridge_name='br-int',has_traffic_filtering=True,id=d5a0b41a-72da-4c0e-a568-fdba9541e479,network=Network(9b410fba-e0a4-4544-b437-e1dbffa6da06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5a0b41a-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.598 2 DEBUG os_vif [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0c:0a,bridge_name='br-int',has_traffic_filtering=True,id=d5a0b41a-72da-4c0e-a568-fdba9541e479,network=Network(9b410fba-e0a4-4544-b437-e1dbffa6da06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5a0b41a-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.600 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5a0b41a-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.605 2 INFO os_vif [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0c:0a,bridge_name='br-int',has_traffic_filtering=True,id=d5a0b41a-72da-4c0e-a568-fdba9541e479,network=Network(9b410fba-e0a4-4544-b437-e1dbffa6da06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5a0b41a-72')
Oct 14 08:56:04 compute-0 podman[280958]: 2025-10-14 08:56:04.642509266 +0000 UTC m=+0.057038888 container remove a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.649 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.650 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e75ff5c4-4ded-4d30-b127-c33dcccbe931]: (4, ('Tue Oct 14 08:56:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06 (a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b)\na405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b\nTue Oct 14 08:56:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06 (a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b)\na405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.651 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7c97c680-4fb4-4ee4-be1b-2c811d33fea1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.652 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b410fba-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:04 compute-0 kernel: tap9b410fba-e0: left promiscuous mode
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.680 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2d34b1-5f03-4fa2-b50f-4c407c042e0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[67daebfa-3c28-4f77-b059-50d49639ed5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.713 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[20b5667f-e021-4d8e-9f20-561cd87724bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.734 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9c520ad6-8626-46ee-90ed-d32adb51dcd1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594116, 'reachable_time': 43073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280995, 'error': None, 'target': 'ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d9b410fba\x2de0a4\x2d4544\x2db437\x2de1dbffa6da06.mount: Deactivated successfully.
Oct 14 08:56:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.739 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:56:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.739 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[4b19a5a2-1bc2-4ee9-b644-583c2a128a6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.789 2 DEBUG nova.compute.manager [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Received event network-vif-plugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.789 2 DEBUG oslo_concurrency.lockutils [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.790 2 DEBUG oslo_concurrency.lockutils [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.790 2 DEBUG oslo_concurrency.lockutils [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.791 2 DEBUG nova.compute.manager [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] No waiting events found dispatching network-vif-plugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.792 2 WARNING nova.compute.manager [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Received unexpected event network-vif-plugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 for instance with vm_state active and task_state deleting.
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.792 2 DEBUG nova.compute.manager [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Received event network-vif-unplugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.793 2 DEBUG oslo_concurrency.lockutils [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.793 2 DEBUG oslo_concurrency.lockutils [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.793 2 DEBUG oslo_concurrency.lockutils [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.793 2 DEBUG nova.compute.manager [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] No waiting events found dispatching network-vif-unplugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.794 2 DEBUG nova.compute.manager [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Received event network-vif-unplugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.794 2 DEBUG nova.compute.manager [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Received event network-vif-plugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.795 2 DEBUG oslo_concurrency.lockutils [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.795 2 DEBUG oslo_concurrency.lockutils [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.795 2 DEBUG oslo_concurrency.lockutils [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.796 2 DEBUG nova.compute.manager [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] No waiting events found dispatching network-vif-plugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.796 2 WARNING nova.compute.manager [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Received unexpected event network-vif-plugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 for instance with vm_state active and task_state deleting.
Oct 14 08:56:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:56:04 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3647073352' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.850 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.874 2 DEBUG nova.storage.rbd_utils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] rbd image 0936e31d-4bed-46c8-a561-05467cf93f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:04 compute-0 ceph-mon[74249]: pgmap v1119: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Oct 14 08:56:04 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3647073352' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.880 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.978 2 INFO nova.virt.libvirt.driver [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Deleting instance files /var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3_del
Oct 14 08:56:04 compute-0 nova_compute[259627]: 2025-10-14 08:56:04.979 2 INFO nova.virt.libvirt.driver [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Deletion of /var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3_del complete
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.058 2 INFO nova.compute.manager [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.059 2 DEBUG oslo.service.loopingcall [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.059 2 DEBUG nova.compute.manager [-] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.059 2 DEBUG nova.network.neutron [-] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:56:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:56:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1346080831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.111 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.115 2 DEBUG nova.compute.provider_tree [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.138 2 DEBUG nova.scheduler.client.report [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.157 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.157 2 DEBUG nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.195 2 DEBUG nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.195 2 DEBUG nova.network.neutron [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.212 2 INFO nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.231 2 DEBUG nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:56:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:56:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2877257362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.287 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.288 2 DEBUG nova.objects.instance [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0936e31d-4bed-46c8-a561-05467cf93f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.313 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:56:05 compute-0 nova_compute[259627]:   <uuid>0936e31d-4bed-46c8-a561-05467cf93f75</uuid>
Oct 14 08:56:05 compute-0 nova_compute[259627]:   <name>instance-0000000a</name>
Oct 14 08:56:05 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:56:05 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:56:05 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <nova:name>tempest-TenantUsagesTestJSON-server-196977972</nova:name>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:56:04</nova:creationTime>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:56:05 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:56:05 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:56:05 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:56:05 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:56:05 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:56:05 compute-0 nova_compute[259627]:         <nova:user uuid="0a71ad98ce694327be82c546cc6f37b4">tempest-TenantUsagesTestJSON-1237927654-project-member</nova:user>
Oct 14 08:56:05 compute-0 nova_compute[259627]:         <nova:project uuid="d319ba0849124bb381bfab8321324c76">tempest-TenantUsagesTestJSON-1237927654</nova:project>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:56:05 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:56:05 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <system>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <entry name="serial">0936e31d-4bed-46c8-a561-05467cf93f75</entry>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <entry name="uuid">0936e31d-4bed-46c8-a561-05467cf93f75</entry>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     </system>
Oct 14 08:56:05 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:56:05 compute-0 nova_compute[259627]:   <os>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:   </os>
Oct 14 08:56:05 compute-0 nova_compute[259627]:   <features>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:   </features>
Oct 14 08:56:05 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:56:05 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:56:05 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/0936e31d-4bed-46c8-a561-05467cf93f75_disk">
Oct 14 08:56:05 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       </source>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:56:05 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/0936e31d-4bed-46c8-a561-05467cf93f75_disk.config">
Oct 14 08:56:05 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       </source>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:56:05 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75/console.log" append="off"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <video>
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     </video>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:56:05 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:56:05 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:56:05 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:56:05 compute-0 nova_compute[259627]: </domain>
Oct 14 08:56:05 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.357 2 DEBUG nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.358 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.358 2 INFO nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Creating image(s)
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.374 2 DEBUG nova.storage.rbd_utils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image ed89f48c-8144-453c-9357-0abc99716b22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.390 2 DEBUG nova.storage.rbd_utils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image ed89f48c-8144-453c-9357-0abc99716b22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.409 2 DEBUG nova.storage.rbd_utils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image ed89f48c-8144-453c-9357-0abc99716b22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.412 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.435 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.435 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.436 2 INFO nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Using config drive
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.456 2 DEBUG nova.storage.rbd_utils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] rbd image 0936e31d-4bed-46c8-a561-05467cf93f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.467 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.468 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.468 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.469 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.487 2 DEBUG nova.storage.rbd_utils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image ed89f48c-8144-453c-9357-0abc99716b22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.490 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ed89f48c-8144-453c-9357-0abc99716b22_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 08:56:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3458753081' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 08:56:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 08:56:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3458753081' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.572 2 DEBUG nova.network.neutron [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.573 2 DEBUG nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.661 2 INFO nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Creating config drive at /var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75/disk.config
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.665 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjjp08b4w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.752 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ed89f48c-8144-453c-9357-0abc99716b22_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.818 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjjp08b4w" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.846 2 DEBUG nova.storage.rbd_utils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] rbd image 0936e31d-4bed-46c8-a561-05467cf93f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.850 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75/disk.config 0936e31d-4bed-46c8-a561-05467cf93f75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.870 2 DEBUG nova.network.neutron [-] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.876 2 DEBUG nova.storage.rbd_utils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] resizing rbd image ed89f48c-8144-453c-9357-0abc99716b22_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:56:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1346080831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2877257362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3458753081' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 08:56:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3458753081' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.907 2 INFO nova.compute.manager [-] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Took 0.85 seconds to deallocate network for instance.
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.971 2 DEBUG oslo_concurrency.lockutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.972 2 DEBUG oslo_concurrency.lockutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.977 2 DEBUG nova.objects.instance [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lazy-loading 'migration_context' on Instance uuid ed89f48c-8144-453c-9357-0abc99716b22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.996 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.996 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Ensure instance console log exists: /var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.997 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.997 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.997 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:05 compute-0 nova_compute[259627]: 2025-10-14 08:56:05.999 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.006 2 WARNING nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.010 2 DEBUG nova.virt.libvirt.host [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.011 2 DEBUG nova.virt.libvirt.host [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.014 2 DEBUG nova.virt.libvirt.host [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.014 2 DEBUG nova.virt.libvirt.host [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.015 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.015 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.016 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.016 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.017 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.017 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.017 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.018 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.018 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.018 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.019 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.019 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.022 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.046 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75/disk.config 0936e31d-4bed-46c8-a561-05467cf93f75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.048 2 INFO nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Deleting local config drive /var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75/disk.config because it was imported into RBD.
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.111 2 DEBUG oslo_concurrency.processutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:06 compute-0 systemd-machined[214636]: New machine qemu-10-instance-0000000a.
Oct 14 08:56:06 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.282 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432151.28092, dfaff639-0439-40d7-8f56-5e8068d741cf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.283 2 INFO nova.compute.manager [-] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] VM Stopped (Lifecycle Event)
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.304 2 DEBUG nova.compute.manager [None req-857d13c3-b60c-4ff6-affc-d47da1f8c644 - - - - - -] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:56:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/534328190' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.433 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.460 2 DEBUG nova.storage.rbd_utils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image ed89f48c-8144-453c-9357-0abc99716b22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.464 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:56:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3491166002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.571 2 DEBUG oslo_concurrency.processutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.582 2 DEBUG nova.compute.provider_tree [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.602 2 DEBUG nova.scheduler.client.report [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.629 2 DEBUG oslo_concurrency.lockutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.654 2 INFO nova.scheduler.client.report [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Deleted allocations for instance 726e9e21-4f40-48aa-947a-95a78db4dbf3
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.719 2 DEBUG oslo_concurrency.lockutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:56:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/258172472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.883 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.885 2 DEBUG nova.objects.instance [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid ed89f48c-8144-453c-9357-0abc99716b22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:06 compute-0 ceph-mon[74249]: pgmap v1120: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Oct 14 08:56:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/534328190' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3491166002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/258172472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.901 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:56:06 compute-0 nova_compute[259627]:   <uuid>ed89f48c-8144-453c-9357-0abc99716b22</uuid>
Oct 14 08:56:06 compute-0 nova_compute[259627]:   <name>instance-0000000b</name>
Oct 14 08:56:06 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:56:06 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:56:06 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <nova:name>tempest-DeleteServersAdminTestJSON-server-292904921</nova:name>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:56:06</nova:creationTime>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:56:06 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:56:06 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:56:06 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:56:06 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:56:06 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:56:06 compute-0 nova_compute[259627]:         <nova:user uuid="8414c5e33cd949b9809db7c92c81ef19">tempest-DeleteServersAdminTestJSON-882596417-project-member</nova:user>
Oct 14 08:56:06 compute-0 nova_compute[259627]:         <nova:project uuid="e939ba1f2e1f42ccb04f896c697625d2">tempest-DeleteServersAdminTestJSON-882596417</nova:project>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:56:06 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:56:06 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <system>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <entry name="serial">ed89f48c-8144-453c-9357-0abc99716b22</entry>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <entry name="uuid">ed89f48c-8144-453c-9357-0abc99716b22</entry>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     </system>
Oct 14 08:56:06 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:56:06 compute-0 nova_compute[259627]:   <os>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:   </os>
Oct 14 08:56:06 compute-0 nova_compute[259627]:   <features>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:   </features>
Oct 14 08:56:06 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:56:06 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:56:06 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/ed89f48c-8144-453c-9357-0abc99716b22_disk">
Oct 14 08:56:06 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       </source>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:56:06 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/ed89f48c-8144-453c-9357-0abc99716b22_disk.config">
Oct 14 08:56:06 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       </source>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:56:06 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22/console.log" append="off"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <video>
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     </video>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:56:06 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:56:06 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:56:06 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:56:06 compute-0 nova_compute[259627]: </domain>
Oct 14 08:56:06 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.969 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.969 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:56:06 compute-0 nova_compute[259627]: 2025-10-14 08:56:06.970 2 INFO nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Using config drive
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.001 2 DEBUG nova.storage.rbd_utils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image ed89f48c-8144-453c-9357-0abc99716b22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:07.013 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:07.013 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:07.014 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:07 compute-0 podman[281427]: 2025-10-14 08:56:07.019924804 +0000 UTC m=+0.081546723 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.039 2 DEBUG nova.compute.manager [req-91488208-e646-4b9e-a438-3ba9400a844a req-eb73e2e0-4a0c-4272-b71e-8fa62fc8b8e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Received event network-vif-deleted-d5a0b41a-72da-4c0e-a568-fdba9541e479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:07 compute-0 podman[281464]: 2025-10-14 08:56:07.116140997 +0000 UTC m=+0.079697387 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.127 2 INFO nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Creating config drive at /var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22/disk.config
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.131 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5bjpcml7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.158 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432167.1576405, 0936e31d-4bed-46c8-a561-05467cf93f75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.159 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] VM Resumed (Lifecycle Event)
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.163 2 DEBUG nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.164 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.168 2 INFO nova.virt.libvirt.driver [-] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Instance spawned successfully.
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.168 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.198 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.205 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.216 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.216 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.218 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.219 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.220 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.221 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.255 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5bjpcml7" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.292 2 DEBUG nova.storage.rbd_utils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image ed89f48c-8144-453c-9357-0abc99716b22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.296 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22/disk.config ed89f48c-8144-453c-9357-0abc99716b22_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.326 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.327 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432167.1621742, 0936e31d-4bed-46c8-a561-05467cf93f75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.327 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] VM Started (Lifecycle Event)
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.331 2 INFO nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Took 3.56 seconds to spawn the instance on the hypervisor.
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.332 2 DEBUG nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.349 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.353 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.375 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.395 2 INFO nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Took 4.55 seconds to build instance.
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.411 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "0936e31d-4bed-46c8-a561-05467cf93f75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:56:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.486 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22/disk.config ed89f48c-8144-453c-9357-0abc99716b22_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:07 compute-0 nova_compute[259627]: 2025-10-14 08:56:07.487 2 INFO nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Deleting local config drive /var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22/disk.config because it was imported into RBD.
Oct 14 08:56:07 compute-0 virtqemud[259351]: End of file while reading data: Input/output error
Oct 14 08:56:07 compute-0 systemd-machined[214636]: New machine qemu-11-instance-0000000b.
Oct 14 08:56:07 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.511 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432168.51139, ed89f48c-8144-453c-9357-0abc99716b22 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.512 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed89f48c-8144-453c-9357-0abc99716b22] VM Resumed (Lifecycle Event)
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.514 2 DEBUG nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.514 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.519 2 INFO nova.virt.libvirt.driver [-] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Instance spawned successfully.
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.519 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.534 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.539 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.542 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.543 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.543 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.543 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.544 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.544 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.594 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed89f48c-8144-453c-9357-0abc99716b22] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.595 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432168.5119765, ed89f48c-8144-453c-9357-0abc99716b22 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.596 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed89f48c-8144-453c-9357-0abc99716b22] VM Started (Lifecycle Event)
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.634 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.639 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.650 2 INFO nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Took 3.29 seconds to spawn the instance on the hypervisor.
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.651 2 DEBUG nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.662 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed89f48c-8144-453c-9357-0abc99716b22] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.715 2 INFO nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Took 4.29 seconds to build instance.
Oct 14 08:56:08 compute-0 nova_compute[259627]: 2025-10-14 08:56:08.733 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "ed89f48c-8144-453c-9357-0abc99716b22" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:08 compute-0 ceph-mon[74249]: pgmap v1121: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Oct 14 08:56:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Oct 14 08:56:09 compute-0 nova_compute[259627]: 2025-10-14 08:56:09.556 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Acquiring lock "0936e31d-4bed-46c8-a561-05467cf93f75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:09 compute-0 nova_compute[259627]: 2025-10-14 08:56:09.557 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "0936e31d-4bed-46c8-a561-05467cf93f75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:09 compute-0 nova_compute[259627]: 2025-10-14 08:56:09.557 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Acquiring lock "0936e31d-4bed-46c8-a561-05467cf93f75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:09 compute-0 nova_compute[259627]: 2025-10-14 08:56:09.558 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "0936e31d-4bed-46c8-a561-05467cf93f75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:09 compute-0 nova_compute[259627]: 2025-10-14 08:56:09.558 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "0936e31d-4bed-46c8-a561-05467cf93f75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:09 compute-0 nova_compute[259627]: 2025-10-14 08:56:09.559 2 INFO nova.compute.manager [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Terminating instance
Oct 14 08:56:09 compute-0 nova_compute[259627]: 2025-10-14 08:56:09.560 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Acquiring lock "refresh_cache-0936e31d-4bed-46c8-a561-05467cf93f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:56:09 compute-0 nova_compute[259627]: 2025-10-14 08:56:09.560 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Acquired lock "refresh_cache-0936e31d-4bed-46c8-a561-05467cf93f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:56:09 compute-0 nova_compute[259627]: 2025-10-14 08:56:09.560 2 DEBUG nova.network.neutron [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:56:09 compute-0 nova_compute[259627]: 2025-10-14 08:56:09.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:09 compute-0 nova_compute[259627]: 2025-10-14 08:56:09.813 2 DEBUG nova.network.neutron [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:56:10 compute-0 nova_compute[259627]: 2025-10-14 08:56:10.368 2 DEBUG nova.network.neutron [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:10 compute-0 nova_compute[259627]: 2025-10-14 08:56:10.390 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Releasing lock "refresh_cache-0936e31d-4bed-46c8-a561-05467cf93f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:56:10 compute-0 nova_compute[259627]: 2025-10-14 08:56:10.390 2 DEBUG nova.compute.manager [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:56:10 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct 14 08:56:10 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 4.331s CPU time.
Oct 14 08:56:10 compute-0 systemd-machined[214636]: Machine qemu-10-instance-0000000a terminated.
Oct 14 08:56:10 compute-0 nova_compute[259627]: 2025-10-14 08:56:10.537 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Acquiring lock "ed89f48c-8144-453c-9357-0abc99716b22" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:10 compute-0 nova_compute[259627]: 2025-10-14 08:56:10.537 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Lock "ed89f48c-8144-453c-9357-0abc99716b22" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:10 compute-0 nova_compute[259627]: 2025-10-14 08:56:10.538 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Acquiring lock "ed89f48c-8144-453c-9357-0abc99716b22-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:10 compute-0 nova_compute[259627]: 2025-10-14 08:56:10.538 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Lock "ed89f48c-8144-453c-9357-0abc99716b22-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:10 compute-0 nova_compute[259627]: 2025-10-14 08:56:10.538 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Lock "ed89f48c-8144-453c-9357-0abc99716b22-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:10 compute-0 nova_compute[259627]: 2025-10-14 08:56:10.539 2 INFO nova.compute.manager [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Terminating instance
Oct 14 08:56:10 compute-0 nova_compute[259627]: 2025-10-14 08:56:10.540 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Acquiring lock "refresh_cache-ed89f48c-8144-453c-9357-0abc99716b22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:56:10 compute-0 nova_compute[259627]: 2025-10-14 08:56:10.540 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Acquired lock "refresh_cache-ed89f48c-8144-453c-9357-0abc99716b22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:56:10 compute-0 nova_compute[259627]: 2025-10-14 08:56:10.540 2 DEBUG nova.network.neutron [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:56:10 compute-0 nova_compute[259627]: 2025-10-14 08:56:10.606 2 INFO nova.virt.libvirt.driver [-] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Instance destroyed successfully.
Oct 14 08:56:10 compute-0 nova_compute[259627]: 2025-10-14 08:56:10.607 2 DEBUG nova.objects.instance [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lazy-loading 'resources' on Instance uuid 0936e31d-4bed-46c8-a561-05467cf93f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:10 compute-0 nova_compute[259627]: 2025-10-14 08:56:10.662 2 DEBUG nova.network.neutron [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:56:10 compute-0 ceph-mon[74249]: pgmap v1122: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.000 2 INFO nova.virt.libvirt.driver [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Deleting instance files /var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75_del
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.000 2 INFO nova.virt.libvirt.driver [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Deletion of /var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75_del complete
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.078 2 INFO nova.compute.manager [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Took 0.69 seconds to destroy the instance on the hypervisor.
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.079 2 DEBUG oslo.service.loopingcall [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.079 2 DEBUG nova.compute.manager [-] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.079 2 DEBUG nova.network.neutron [-] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:56:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 134 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 5.4 MiB/s wr, 299 op/s
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.762 2 DEBUG nova.network.neutron [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.779 2 DEBUG nova.network.neutron [-] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.785 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Releasing lock "refresh_cache-ed89f48c-8144-453c-9357-0abc99716b22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.786 2 DEBUG nova.compute.manager [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.799 2 DEBUG nova.network.neutron [-] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.821 2 INFO nova.compute.manager [-] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Took 0.74 seconds to deallocate network for instance.
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:11 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 14 08:56:11 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 4.209s CPU time.
Oct 14 08:56:11 compute-0 systemd-machined[214636]: Machine qemu-11-instance-0000000b terminated.
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.870 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.871 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.959 2 DEBUG oslo_concurrency.processutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.983 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.983 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 08:56:11 compute-0 nova_compute[259627]: 2025-10-14 08:56:11.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.003 2 INFO nova.virt.libvirt.driver [-] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Instance destroyed successfully.
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.003 2 DEBUG nova.objects.instance [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Lazy-loading 'resources' on Instance uuid ed89f48c-8144-453c-9357-0abc99716b22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:56:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1989591504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.415 2 DEBUG oslo_concurrency.processutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.422 2 INFO nova.virt.libvirt.driver [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Deleting instance files /var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22_del
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.423 2 INFO nova.virt.libvirt.driver [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Deletion of /var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22_del complete
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.427 2 DEBUG nova.compute.provider_tree [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:56:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.500 2 DEBUG nova.scheduler.client.report [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.543 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.552 2 INFO nova.compute.manager [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.552 2 DEBUG oslo.service.loopingcall [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.553 2 DEBUG nova.compute.manager [-] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.553 2 DEBUG nova.network.neutron [-] [instance: ed89f48c-8144-453c-9357-0abc99716b22] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.571 2 INFO nova.scheduler.client.report [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Deleted allocations for instance 0936e31d-4bed-46c8-a561-05467cf93f75
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.645 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "0936e31d-4bed-46c8-a561-05467cf93f75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.727 2 DEBUG nova.network.neutron [-] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.743 2 DEBUG nova.network.neutron [-] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.760 2 INFO nova.compute.manager [-] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Took 0.21 seconds to deallocate network for instance.
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.814 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.815 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.895 2 DEBUG oslo_concurrency.processutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:12 compute-0 ceph-mon[74249]: pgmap v1123: 305 pgs: 305 active+clean; 134 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 5.4 MiB/s wr, 299 op/s
Oct 14 08:56:12 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1989591504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:56:12 compute-0 nova_compute[259627]: 2025-10-14 08:56:12.981 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:56:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:56:13 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814602029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:13 compute-0 nova_compute[259627]: 2025-10-14 08:56:13.377 2 DEBUG oslo_concurrency.processutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:13 compute-0 nova_compute[259627]: 2025-10-14 08:56:13.385 2 DEBUG nova.compute.provider_tree [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:56:13 compute-0 nova_compute[259627]: 2025-10-14 08:56:13.414 2 DEBUG nova.scheduler.client.report [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:56:13 compute-0 nova_compute[259627]: 2025-10-14 08:56:13.457 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 134 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.6 MiB/s wr, 272 op/s
Oct 14 08:56:13 compute-0 nova_compute[259627]: 2025-10-14 08:56:13.494 2 INFO nova.scheduler.client.report [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Deleted allocations for instance ed89f48c-8144-453c-9357-0abc99716b22
Oct 14 08:56:13 compute-0 nova_compute[259627]: 2025-10-14 08:56:13.562 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Lock "ed89f48c-8144-453c-9357-0abc99716b22" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/814602029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:13 compute-0 nova_compute[259627]: 2025-10-14 08:56:13.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:56:13 compute-0 nova_compute[259627]: 2025-10-14 08:56:13.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:56:13 compute-0 nova_compute[259627]: 2025-10-14 08:56:13.998 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:13 compute-0 nova_compute[259627]: 2025-10-14 08:56:13.998 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:13 compute-0 nova_compute[259627]: 2025-10-14 08:56:13.998 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:13 compute-0 nova_compute[259627]: 2025-10-14 08:56:13.999 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 08:56:13 compute-0 nova_compute[259627]: 2025-10-14 08:56:13.999 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:56:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2082987097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:14 compute-0 nova_compute[259627]: 2025-10-14 08:56:14.392 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:14 compute-0 nova_compute[259627]: 2025-10-14 08:56:14.602 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:56:14 compute-0 nova_compute[259627]: 2025-10-14 08:56:14.604 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4639MB free_disk=59.94648361206055GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 08:56:14 compute-0 nova_compute[259627]: 2025-10-14 08:56:14.604 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:14 compute-0 nova_compute[259627]: 2025-10-14 08:56:14.605 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:14 compute-0 nova_compute[259627]: 2025-10-14 08:56:14.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:14 compute-0 nova_compute[259627]: 2025-10-14 08:56:14.678 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 08:56:14 compute-0 nova_compute[259627]: 2025-10-14 08:56:14.678 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 08:56:14 compute-0 nova_compute[259627]: 2025-10-14 08:56:14.710 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:14 compute-0 ceph-mon[74249]: pgmap v1124: 305 pgs: 305 active+clean; 134 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.6 MiB/s wr, 272 op/s
Oct 14 08:56:14 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2082987097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:56:15 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1413220776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:15 compute-0 nova_compute[259627]: 2025-10-14 08:56:15.125 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:15 compute-0 nova_compute[259627]: 2025-10-14 08:56:15.132 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:56:15 compute-0 nova_compute[259627]: 2025-10-14 08:56:15.148 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:56:15 compute-0 nova_compute[259627]: 2025-10-14 08:56:15.171 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 08:56:15 compute-0 nova_compute[259627]: 2025-10-14 08:56:15.171 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 325 op/s
Oct 14 08:56:15 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1413220776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:16 compute-0 nova_compute[259627]: 2025-10-14 08:56:16.173 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:56:16 compute-0 nova_compute[259627]: 2025-10-14 08:56:16.173 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:56:16 compute-0 nova_compute[259627]: 2025-10-14 08:56:16.421 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:16 compute-0 nova_compute[259627]: 2025-10-14 08:56:16.422 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:16 compute-0 nova_compute[259627]: 2025-10-14 08:56:16.442 2 DEBUG nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:56:16 compute-0 nova_compute[259627]: 2025-10-14 08:56:16.537 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:16 compute-0 nova_compute[259627]: 2025-10-14 08:56:16.538 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:16 compute-0 nova_compute[259627]: 2025-10-14 08:56:16.543 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:56:16 compute-0 nova_compute[259627]: 2025-10-14 08:56:16.543 2 INFO nova.compute.claims [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:56:16 compute-0 nova_compute[259627]: 2025-10-14 08:56:16.647 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:16 compute-0 nova_compute[259627]: 2025-10-14 08:56:16.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:16 compute-0 ceph-mon[74249]: pgmap v1125: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 325 op/s
Oct 14 08:56:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:56:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1126881945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.090 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.096 2 DEBUG nova.compute.provider_tree [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.120 2 DEBUG nova.scheduler.client.report [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.146 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.147 2 DEBUG nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.210 2 DEBUG nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.211 2 DEBUG nova.network.neutron [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.232 2 INFO nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.250 2 DEBUG nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.373 2 DEBUG nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.375 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.376 2 INFO nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Creating image(s)
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.411 2 DEBUG nova.storage.rbd_utils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.446 2 DEBUG nova.storage.rbd_utils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.477 2 DEBUG nova.storage.rbd_utils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.481 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 227 op/s
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.565 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.567 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.569 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.569 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.600 2 DEBUG nova.storage.rbd_utils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.605 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.878 2 DEBUG nova.network.neutron [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.878 2 DEBUG nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.881 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.930 2 DEBUG nova.storage.rbd_utils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] resizing rbd image 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:56:17 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1126881945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:56:17 compute-0 nova_compute[259627]: 2025-10-14 08:56:17.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.012 2 DEBUG nova.objects.instance [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lazy-loading 'migration_context' on Instance uuid 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.045 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.045 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.046 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Ensure instance console log exists: /var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.046 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.046 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.047 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.048 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.052 2 WARNING nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.056 2 DEBUG nova.virt.libvirt.host [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.057 2 DEBUG nova.virt.libvirt.host [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.061 2 DEBUG nova.virt.libvirt.host [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.061 2 DEBUG nova.virt.libvirt.host [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.061 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.061 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.062 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.062 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.062 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.063 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.063 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.063 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.063 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.064 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.064 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.064 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.066 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:56:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3187502317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.540 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.562 2 DEBUG nova.storage.rbd_utils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.565 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:56:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3440874776' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:18 compute-0 ceph-mon[74249]: pgmap v1126: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 227 op/s
Oct 14 08:56:18 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3187502317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:18 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3440874776' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.979 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:18 compute-0 nova_compute[259627]: 2025-10-14 08:56:18.981 2 DEBUG nova.objects.instance [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:19 compute-0 nova_compute[259627]: 2025-10-14 08:56:19.012 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:56:19 compute-0 nova_compute[259627]:   <uuid>4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543</uuid>
Oct 14 08:56:19 compute-0 nova_compute[259627]:   <name>instance-0000000c</name>
Oct 14 08:56:19 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:56:19 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:56:19 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <nova:name>tempest-DeleteServersAdminTestJSON-server-280360781</nova:name>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:56:18</nova:creationTime>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:56:19 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:56:19 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:56:19 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:56:19 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:56:19 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:56:19 compute-0 nova_compute[259627]:         <nova:user uuid="8414c5e33cd949b9809db7c92c81ef19">tempest-DeleteServersAdminTestJSON-882596417-project-member</nova:user>
Oct 14 08:56:19 compute-0 nova_compute[259627]:         <nova:project uuid="e939ba1f2e1f42ccb04f896c697625d2">tempest-DeleteServersAdminTestJSON-882596417</nova:project>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:56:19 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:56:19 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <system>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <entry name="serial">4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543</entry>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <entry name="uuid">4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543</entry>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     </system>
Oct 14 08:56:19 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:56:19 compute-0 nova_compute[259627]:   <os>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:   </os>
Oct 14 08:56:19 compute-0 nova_compute[259627]:   <features>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:   </features>
Oct 14 08:56:19 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:56:19 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:56:19 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk">
Oct 14 08:56:19 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       </source>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:56:19 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk.config">
Oct 14 08:56:19 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       </source>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:56:19 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543/console.log" append="off"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <video>
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     </video>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:56:19 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:56:19 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:56:19 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:56:19 compute-0 nova_compute[259627]: </domain>
Oct 14 08:56:19 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:56:19 compute-0 nova_compute[259627]: 2025-10-14 08:56:19.058 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:56:19 compute-0 nova_compute[259627]: 2025-10-14 08:56:19.059 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:56:19 compute-0 nova_compute[259627]: 2025-10-14 08:56:19.060 2 INFO nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Using config drive
Oct 14 08:56:19 compute-0 nova_compute[259627]: 2025-10-14 08:56:19.084 2 DEBUG nova.storage.rbd_utils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:19 compute-0 nova_compute[259627]: 2025-10-14 08:56:19.375 2 INFO nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Creating config drive at /var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543/disk.config
Oct 14 08:56:19 compute-0 nova_compute[259627]: 2025-10-14 08:56:19.384 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgr_sbz5j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 227 op/s
Oct 14 08:56:19 compute-0 nova_compute[259627]: 2025-10-14 08:56:19.527 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgr_sbz5j" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:19 compute-0 nova_compute[259627]: 2025-10-14 08:56:19.565 2 DEBUG nova.storage.rbd_utils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:19 compute-0 nova_compute[259627]: 2025-10-14 08:56:19.571 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543/disk.config 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:19 compute-0 nova_compute[259627]: 2025-10-14 08:56:19.599 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432164.57261, 726e9e21-4f40-48aa-947a-95a78db4dbf3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:19 compute-0 nova_compute[259627]: 2025-10-14 08:56:19.600 2 INFO nova.compute.manager [-] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] VM Stopped (Lifecycle Event)
Oct 14 08:56:19 compute-0 nova_compute[259627]: 2025-10-14 08:56:19.621 2 DEBUG nova.compute.manager [None req-7c8f2b14-9afd-49fd-9119-b17b5f5827c0 - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:19 compute-0 nova_compute[259627]: 2025-10-14 08:56:19.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:19 compute-0 nova_compute[259627]: 2025-10-14 08:56:19.775 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543/disk.config 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:19 compute-0 nova_compute[259627]: 2025-10-14 08:56:19.776 2 INFO nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Deleting local config drive /var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543/disk.config because it was imported into RBD.
Oct 14 08:56:19 compute-0 systemd-machined[214636]: New machine qemu-12-instance-0000000c.
Oct 14 08:56:19 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.788 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432180.7879593, 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.789 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] VM Resumed (Lifecycle Event)
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.791 2 DEBUG nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.791 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.795 2 INFO nova.virt.libvirt.driver [-] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Instance spawned successfully.
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.795 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.825 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.832 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.836 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.836 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.836 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.837 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.837 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.837 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.864 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.865 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432180.7881947, 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.865 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] VM Started (Lifecycle Event)
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.891 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.894 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.901 2 INFO nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Took 3.53 seconds to spawn the instance on the hypervisor.
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.901 2 DEBUG nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.913 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.963 2 INFO nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Took 4.47 seconds to build instance.
Oct 14 08:56:20 compute-0 ceph-mon[74249]: pgmap v1127: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 227 op/s
Oct 14 08:56:20 compute-0 nova_compute[259627]: 2025-10-14 08:56:20.986 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Oct 14 08:56:21 compute-0 nova_compute[259627]: 2025-10-14 08:56:21.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:56:23 compute-0 nova_compute[259627]: 2025-10-14 08:56:23.220 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:23 compute-0 nova_compute[259627]: 2025-10-14 08:56:23.220 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:23 compute-0 nova_compute[259627]: 2025-10-14 08:56:23.220 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:23 compute-0 nova_compute[259627]: 2025-10-14 08:56:23.221 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:23 compute-0 nova_compute[259627]: 2025-10-14 08:56:23.221 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:23 compute-0 nova_compute[259627]: 2025-10-14 08:56:23.222 2 INFO nova.compute.manager [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Terminating instance
Oct 14 08:56:23 compute-0 nova_compute[259627]: 2025-10-14 08:56:23.223 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "refresh_cache-4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:56:23 compute-0 nova_compute[259627]: 2025-10-14 08:56:23.224 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquired lock "refresh_cache-4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:56:23 compute-0 nova_compute[259627]: 2025-10-14 08:56:23.224 2 DEBUG nova.network.neutron [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:56:23 compute-0 ceph-mon[74249]: pgmap v1128: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Oct 14 08:56:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 14 08:56:23 compute-0 nova_compute[259627]: 2025-10-14 08:56:23.557 2 DEBUG nova.network.neutron [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:56:23 compute-0 nova_compute[259627]: 2025-10-14 08:56:23.836 2 DEBUG nova.network.neutron [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:23 compute-0 nova_compute[259627]: 2025-10-14 08:56:23.861 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Releasing lock "refresh_cache-4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:56:23 compute-0 nova_compute[259627]: 2025-10-14 08:56:23.862 2 DEBUG nova.compute.manager [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:56:23 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct 14 08:56:23 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 4.043s CPU time.
Oct 14 08:56:23 compute-0 systemd-machined[214636]: Machine qemu-12-instance-0000000c terminated.
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.086 2 INFO nova.virt.libvirt.driver [-] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Instance destroyed successfully.
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.087 2 DEBUG nova.objects.instance [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lazy-loading 'resources' on Instance uuid 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:24 compute-0 ceph-mon[74249]: pgmap v1129: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.531 2 INFO nova.virt.libvirt.driver [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Deleting instance files /var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_del
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.532 2 INFO nova.virt.libvirt.driver [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Deletion of /var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_del complete
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.551 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "548bff7e-531b-4f5d-b4d3-18d586f46581" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.552 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.582 2 DEBUG nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.590 2 INFO nova.compute.manager [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.591 2 DEBUG oslo.service.loopingcall [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.591 2 DEBUG nova.compute.manager [-] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.592 2 DEBUG nova.network.neutron [-] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.668 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.669 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.676 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.676 2 INFO nova.compute.claims [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.835 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.923 2 DEBUG nova.network.neutron [-] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.942 2 DEBUG nova.network.neutron [-] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:24 compute-0 nova_compute[259627]: 2025-10-14 08:56:24.956 2 INFO nova.compute.manager [-] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Took 0.36 seconds to deallocate network for instance.
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.005 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:56:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4166849200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.304 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.314 2 DEBUG nova.compute.provider_tree [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.329 2 DEBUG nova.scheduler.client.report [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.350 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.351 2 DEBUG nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.354 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.415 2 DEBUG oslo_concurrency.processutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.438 2 DEBUG nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.440 2 DEBUG nova.network.neutron [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.473 2 INFO nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:56:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4166849200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.492 2 DEBUG nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.583 2 DEBUG nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.585 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.586 2 INFO nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Creating image(s)
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.624 2 DEBUG nova.storage.rbd_utils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 548bff7e-531b-4f5d-b4d3-18d586f46581_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.661 2 DEBUG nova.storage.rbd_utils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 548bff7e-531b-4f5d-b4d3-18d586f46581_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.690 2 DEBUG nova.storage.rbd_utils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 548bff7e-531b-4f5d-b4d3-18d586f46581_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.695 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.723 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432170.6048746, 0936e31d-4bed-46c8-a561-05467cf93f75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.724 2 INFO nova.compute.manager [-] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] VM Stopped (Lifecycle Event)
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.751 2 DEBUG nova.compute.manager [None req-af93ef21-ed20-4481-9f5c-b0cfc5141508 - - - - - -] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.789 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.790 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.791 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.792 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.819 2 DEBUG nova.storage.rbd_utils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 548bff7e-531b-4f5d-b4d3-18d586f46581_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.823 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 548bff7e-531b-4f5d-b4d3-18d586f46581_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.853 2 DEBUG nova.policy [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f50e2582d63041b682c71a379f763c0e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9bf65c21e4104af6981b071561617657', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:56:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:56:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/785891186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.890 2 DEBUG oslo_concurrency.processutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.895 2 DEBUG nova.compute.provider_tree [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.919 2 DEBUG nova.scheduler.client.report [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.956 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:25 compute-0 nova_compute[259627]: 2025-10-14 08:56:25.987 2 INFO nova.scheduler.client.report [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Deleted allocations for instance 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543
Oct 14 08:56:26 compute-0 nova_compute[259627]: 2025-10-14 08:56:26.066 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:26 compute-0 nova_compute[259627]: 2025-10-14 08:56:26.110 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 548bff7e-531b-4f5d-b4d3-18d586f46581_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:26 compute-0 nova_compute[259627]: 2025-10-14 08:56:26.160 2 DEBUG nova.storage.rbd_utils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] resizing rbd image 548bff7e-531b-4f5d-b4d3-18d586f46581_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:56:26 compute-0 nova_compute[259627]: 2025-10-14 08:56:26.247 2 DEBUG nova.objects.instance [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lazy-loading 'migration_context' on Instance uuid 548bff7e-531b-4f5d-b4d3-18d586f46581 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:26 compute-0 nova_compute[259627]: 2025-10-14 08:56:26.267 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:56:26 compute-0 nova_compute[259627]: 2025-10-14 08:56:26.267 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Ensure instance console log exists: /var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:56:26 compute-0 nova_compute[259627]: 2025-10-14 08:56:26.268 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:26 compute-0 nova_compute[259627]: 2025-10-14 08:56:26.269 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:26 compute-0 nova_compute[259627]: 2025-10-14 08:56:26.269 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:26 compute-0 ceph-mon[74249]: pgmap v1130: 305 pgs: 305 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Oct 14 08:56:26 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/785891186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:26 compute-0 nova_compute[259627]: 2025-10-14 08:56:26.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:27 compute-0 nova_compute[259627]: 2025-10-14 08:56:27.003 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432172.0018141, ed89f48c-8144-453c-9357-0abc99716b22 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:27 compute-0 nova_compute[259627]: 2025-10-14 08:56:27.003 2 INFO nova.compute.manager [-] [instance: ed89f48c-8144-453c-9357-0abc99716b22] VM Stopped (Lifecycle Event)
Oct 14 08:56:27 compute-0 nova_compute[259627]: 2025-10-14 08:56:27.037 2 DEBUG nova.compute.manager [None req-41272684-7352-4219-aaba-79ee3340b08e - - - - - -] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:27 compute-0 nova_compute[259627]: 2025-10-14 08:56:27.145 2 DEBUG nova.network.neutron [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Successfully created port: 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:56:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.463678) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432187463698, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1179, "num_deletes": 507, "total_data_size": 1159067, "memory_usage": 1187552, "flush_reason": "Manual Compaction"}
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432187468737, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 763981, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22966, "largest_seqno": 24144, "table_properties": {"data_size": 759565, "index_size": 1428, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14273, "raw_average_key_size": 18, "raw_value_size": 748105, "raw_average_value_size": 992, "num_data_blocks": 64, "num_entries": 754, "num_filter_entries": 754, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760432113, "oldest_key_time": 1760432113, "file_creation_time": 1760432187, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 5092 microseconds, and 2391 cpu microseconds.
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.468768) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 763981 bytes OK
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.468783) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.469981) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.469993) EVENT_LOG_v1 {"time_micros": 1760432187469989, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.470005) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1152515, prev total WAL file size 1152515, number of live WAL files 2.
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.470636) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353030' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(746KB)], [53(8892KB)]
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432187470692, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 9870209, "oldest_snapshot_seqno": -1}
Oct 14 08:56:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4484 keys, 6862429 bytes, temperature: kUnknown
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432187500816, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 6862429, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6833001, "index_size": 17094, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 112478, "raw_average_key_size": 25, "raw_value_size": 6752430, "raw_average_value_size": 1505, "num_data_blocks": 711, "num_entries": 4484, "num_filter_entries": 4484, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760432187, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.501037) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 6862429 bytes
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.502256) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 327.0 rd, 227.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.7 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(21.9) write-amplify(9.0) OK, records in: 5482, records dropped: 998 output_compression: NoCompression
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.502271) EVENT_LOG_v1 {"time_micros": 1760432187502264, "job": 28, "event": "compaction_finished", "compaction_time_micros": 30180, "compaction_time_cpu_micros": 16375, "output_level": 6, "num_output_files": 1, "total_output_size": 6862429, "num_input_records": 5482, "num_output_records": 4484, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432187502476, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432187503735, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.470517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.503773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.503778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.503780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.503781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 08:56:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.503783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 08:56:27 compute-0 podman[282315]: 2025-10-14 08:56:27.67977122 +0000 UTC m=+0.083530882 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 08:56:27 compute-0 podman[282314]: 2025-10-14 08:56:27.683741807 +0000 UTC m=+0.089127919 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, config_id=multipathd)
Oct 14 08:56:28 compute-0 ceph-mon[74249]: pgmap v1131: 305 pgs: 305 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 14 08:56:28 compute-0 nova_compute[259627]: 2025-10-14 08:56:28.622 2 DEBUG nova.network.neutron [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Successfully updated port: 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:56:28 compute-0 nova_compute[259627]: 2025-10-14 08:56:28.637 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:56:28 compute-0 nova_compute[259627]: 2025-10-14 08:56:28.637 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquired lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:56:28 compute-0 nova_compute[259627]: 2025-10-14 08:56:28.638 2 DEBUG nova.network.neutron [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:56:28 compute-0 nova_compute[259627]: 2025-10-14 08:56:28.650 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "5de76ef0-5c03-4b43-a691-c858cecd9e80" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:28 compute-0 nova_compute[259627]: 2025-10-14 08:56:28.651 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:28 compute-0 nova_compute[259627]: 2025-10-14 08:56:28.668 2 DEBUG nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:56:28 compute-0 nova_compute[259627]: 2025-10-14 08:56:28.746 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:28 compute-0 nova_compute[259627]: 2025-10-14 08:56:28.746 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:28 compute-0 nova_compute[259627]: 2025-10-14 08:56:28.756 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:56:28 compute-0 nova_compute[259627]: 2025-10-14 08:56:28.756 2 INFO nova.compute.claims [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:56:28 compute-0 nova_compute[259627]: 2025-10-14 08:56:28.856 2 DEBUG nova.compute.manager [req-ae3acb7e-3cf2-4e95-9fe3-65fde29a8421 req-8e2fbb13-66d1-4f95-88d8-5af4f568c129 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Received event network-changed-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:28 compute-0 nova_compute[259627]: 2025-10-14 08:56:28.857 2 DEBUG nova.compute.manager [req-ae3acb7e-3cf2-4e95-9fe3-65fde29a8421 req-8e2fbb13-66d1-4f95-88d8-5af4f568c129 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Refreshing instance network info cache due to event network-changed-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:56:28 compute-0 nova_compute[259627]: 2025-10-14 08:56:28.857 2 DEBUG oslo_concurrency.lockutils [req-ae3acb7e-3cf2-4e95-9fe3-65fde29a8421 req-8e2fbb13-66d1-4f95-88d8-5af4f568c129 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:56:28 compute-0 nova_compute[259627]: 2025-10-14 08:56:28.898 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:28 compute-0 nova_compute[259627]: 2025-10-14 08:56:28.933 2 DEBUG nova.network.neutron [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:56:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:56:29 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1698723594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.397 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.405 2 DEBUG nova.compute.provider_tree [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.426 2 DEBUG nova.scheduler.client.report [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.451 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.452 2 DEBUG nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:56:29 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1698723594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.503 2 DEBUG nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.504 2 DEBUG nova.network.neutron [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.523 2 INFO nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.539 2 DEBUG nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.620 2 DEBUG nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.622 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.622 2 INFO nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Creating image(s)
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.646 2 DEBUG nova.storage.rbd_utils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.674 2 DEBUG nova.storage.rbd_utils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.698 2 DEBUG nova.storage.rbd_utils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.702 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.771 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.772 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.772 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.773 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.794 2 DEBUG nova.storage.rbd_utils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.797 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:29 compute-0 nova_compute[259627]: 2025-10-14 08:56:29.912 2 DEBUG nova.policy [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aafd6ad40c944c3eb14e7fbf454040c3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f24bbeb2f91141e294590ca2afc5ed42', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.092 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.188 2 DEBUG nova.storage.rbd_utils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] resizing rbd image 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.296 2 DEBUG nova.objects.instance [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lazy-loading 'migration_context' on Instance uuid 5de76ef0-5c03-4b43-a691-c858cecd9e80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.310 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.311 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Ensure instance console log exists: /var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.311 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.312 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.312 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.324 2 DEBUG nova.network.neutron [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updating instance_info_cache with network_info: [{"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.342 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Releasing lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.343 2 DEBUG nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Instance network_info: |[{"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.343 2 DEBUG oslo_concurrency.lockutils [req-ae3acb7e-3cf2-4e95-9fe3-65fde29a8421 req-8e2fbb13-66d1-4f95-88d8-5af4f568c129 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.343 2 DEBUG nova.network.neutron [req-ae3acb7e-3cf2-4e95-9fe3-65fde29a8421 req-8e2fbb13-66d1-4f95-88d8-5af4f568c129 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Refreshing network info cache for port 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.347 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Start _get_guest_xml network_info=[{"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.351 2 WARNING nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.358 2 DEBUG nova.virt.libvirt.host [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.358 2 DEBUG nova.virt.libvirt.host [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.361 2 DEBUG nova.virt.libvirt.host [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.362 2 DEBUG nova.virt.libvirt.host [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.362 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.362 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.363 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.363 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.364 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.364 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.364 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.364 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.365 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.365 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.365 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.365 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.368 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:30 compute-0 ceph-mon[74249]: pgmap v1132: 305 pgs: 305 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.820 2 DEBUG nova.network.neutron [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Successfully created port: e0cae4c8-f654-471d-a9c1-c77a306f1edf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:56:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:56:30 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2842344352' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.848 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.872 2 DEBUG nova.storage.rbd_utils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 548bff7e-531b-4f5d-b4d3-18d586f46581_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:30 compute-0 nova_compute[259627]: 2025-10-14 08:56:30.875 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:56:31 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4251090085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.301 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.304 2 DEBUG nova.virt.libvirt.vif [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1911316539',display_name='tempest-SecurityGroupsTestJSON-server-1911316539',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1911316539',id=13,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bf65c21e4104af6981b071561617657',ramdisk_id='',reservation_id='r-xovv24ns',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-663845074',owner_user_name='tempest-SecurityGroupsTestJSON-6
63845074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:56:25Z,user_data=None,user_id='f50e2582d63041b682c71a379f763c0e',uuid=548bff7e-531b-4f5d-b4d3-18d586f46581,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.304 2 DEBUG nova.network.os_vif_util [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converting VIF {"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.305 2 DEBUG nova.network.os_vif_util [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:2f:21,bridge_name='br-int',has_traffic_filtering=True,id=7da6c99d-4e04-4c0b-b4d0-d32a2e19c462,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da6c99d-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.307 2 DEBUG nova.objects.instance [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lazy-loading 'pci_devices' on Instance uuid 548bff7e-531b-4f5d-b4d3-18d586f46581 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.322 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:56:31 compute-0 nova_compute[259627]:   <uuid>548bff7e-531b-4f5d-b4d3-18d586f46581</uuid>
Oct 14 08:56:31 compute-0 nova_compute[259627]:   <name>instance-0000000d</name>
Oct 14 08:56:31 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:56:31 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:56:31 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1911316539</nova:name>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:56:30</nova:creationTime>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:56:31 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:56:31 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:56:31 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:56:31 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:56:31 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:56:31 compute-0 nova_compute[259627]:         <nova:user uuid="f50e2582d63041b682c71a379f763c0e">tempest-SecurityGroupsTestJSON-663845074-project-member</nova:user>
Oct 14 08:56:31 compute-0 nova_compute[259627]:         <nova:project uuid="9bf65c21e4104af6981b071561617657">tempest-SecurityGroupsTestJSON-663845074</nova:project>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:56:31 compute-0 nova_compute[259627]:         <nova:port uuid="7da6c99d-4e04-4c0b-b4d0-d32a2e19c462">
Oct 14 08:56:31 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:56:31 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:56:31 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <system>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <entry name="serial">548bff7e-531b-4f5d-b4d3-18d586f46581</entry>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <entry name="uuid">548bff7e-531b-4f5d-b4d3-18d586f46581</entry>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     </system>
Oct 14 08:56:31 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:56:31 compute-0 nova_compute[259627]:   <os>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:   </os>
Oct 14 08:56:31 compute-0 nova_compute[259627]:   <features>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:   </features>
Oct 14 08:56:31 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:56:31 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:56:31 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/548bff7e-531b-4f5d-b4d3-18d586f46581_disk">
Oct 14 08:56:31 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       </source>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:56:31 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/548bff7e-531b-4f5d-b4d3-18d586f46581_disk.config">
Oct 14 08:56:31 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       </source>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:56:31 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:a5:2f:21"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <target dev="tap7da6c99d-4e"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581/console.log" append="off"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <video>
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     </video>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:56:31 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:56:31 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:56:31 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:56:31 compute-0 nova_compute[259627]: </domain>
Oct 14 08:56:31 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.323 2 DEBUG nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Preparing to wait for external event network-vif-plugged-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.324 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.324 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.324 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.325 2 DEBUG nova.virt.libvirt.vif [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1911316539',display_name='tempest-SecurityGroupsTestJSON-server-1911316539',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1911316539',id=13,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bf65c21e4104af6981b071561617657',ramdisk_id='',reservation_id='r-xovv24ns',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-663845074',owner_user_name='tempest-SecurityGroupsTestJSON-663845074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:56:25Z,user_data=None,user_id='f50e2582d63041b682c71a379f763c0e',uuid=548bff7e-531b-4f5d-b4d3-18d586f46581,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.325 2 DEBUG nova.network.os_vif_util [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converting VIF {"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.326 2 DEBUG nova.network.os_vif_util [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:2f:21,bridge_name='br-int',has_traffic_filtering=True,id=7da6c99d-4e04-4c0b-b4d0-d32a2e19c462,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da6c99d-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.326 2 DEBUG os_vif [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:2f:21,bridge_name='br-int',has_traffic_filtering=True,id=7da6c99d-4e04-4c0b-b4d0-d32a2e19c462,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da6c99d-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.327 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.327 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.331 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7da6c99d-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.331 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7da6c99d-4e, col_values=(('external_ids', {'iface-id': '7da6c99d-4e04-4c0b-b4d0-d32a2e19c462', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:2f:21', 'vm-uuid': '548bff7e-531b-4f5d-b4d3-18d586f46581'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:31 compute-0 NetworkManager[44885]: <info>  [1760432191.3802] manager: (tap7da6c99d-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.387 2 INFO os_vif [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:2f:21,bridge_name='br-int',has_traffic_filtering=True,id=7da6c99d-4e04-4c0b-b4d0-d32a2e19c462,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da6c99d-4e')
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.441 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.442 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.442 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] No VIF found with MAC fa:16:3e:a5:2f:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.443 2 INFO nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Using config drive
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.467 2 DEBUG nova.storage.rbd_utils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 548bff7e-531b-4f5d-b4d3-18d586f46581_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:31 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2842344352' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:31 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4251090085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 121 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 180 op/s
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.932 2 INFO nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Creating config drive at /var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581/disk.config
Oct 14 08:56:31 compute-0 nova_compute[259627]: 2025-10-14 08:56:31.942 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaxvylg5i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.085 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaxvylg5i" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.120 2 DEBUG nova.storage.rbd_utils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 548bff7e-531b-4f5d-b4d3-18d586f46581_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.126 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581/disk.config 548bff7e-531b-4f5d-b4d3-18d586f46581_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.156 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.170 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.204 2 DEBUG nova.network.neutron [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Successfully updated port: e0cae4c8-f654-471d-a9c1-c77a306f1edf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.228 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "refresh_cache-5de76ef0-5c03-4b43-a691-c858cecd9e80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.228 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquired lock "refresh_cache-5de76ef0-5c03-4b43-a691-c858cecd9e80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.229 2 DEBUG nova.network.neutron [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.311 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581/disk.config 548bff7e-531b-4f5d-b4d3-18d586f46581_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.313 2 INFO nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Deleting local config drive /var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581/disk.config because it was imported into RBD.
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.355 2 DEBUG nova.compute.manager [req-d24f5148-5778-4f54-9358-9ad843446e7b req-606a3c50-9bfb-4ae9-b3bc-9a64af0e12ed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Received event network-changed-e0cae4c8-f654-471d-a9c1-c77a306f1edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.356 2 DEBUG nova.compute.manager [req-d24f5148-5778-4f54-9358-9ad843446e7b req-606a3c50-9bfb-4ae9-b3bc-9a64af0e12ed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Refreshing instance network info cache due to event network-changed-e0cae4c8-f654-471d-a9c1-c77a306f1edf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.356 2 DEBUG oslo_concurrency.lockutils [req-d24f5148-5778-4f54-9358-9ad843446e7b req-606a3c50-9bfb-4ae9-b3bc-9a64af0e12ed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-5de76ef0-5c03-4b43-a691-c858cecd9e80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:56:32 compute-0 kernel: tap7da6c99d-4e: entered promiscuous mode
Oct 14 08:56:32 compute-0 NetworkManager[44885]: <info>  [1760432192.3786] manager: (tap7da6c99d-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:32 compute-0 ovn_controller[152662]: 2025-10-14T08:56:32Z|00077|binding|INFO|Claiming lport 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 for this chassis.
Oct 14 08:56:32 compute-0 ovn_controller[152662]: 2025-10-14T08:56:32Z|00078|binding|INFO|7da6c99d-4e04-4c0b-b4d0-d32a2e19c462: Claiming fa:16:3e:a5:2f:21 10.100.0.9
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.407 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:2f:21 10.100.0.9'], port_security=['fa:16:3e:a5:2f:21 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '548bff7e-531b-4f5d-b4d3-18d586f46581', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bf65c21e4104af6981b071561617657', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e50f77b0-42a8-4edd-9a84-05435c5fb458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2c96ef8-3846-498d-b4a9-f6fd46fb5d04, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7da6c99d-4e04-4c0b-b4d0-d32a2e19c462) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.408 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 in datapath 58ff48d6-a644-40e6-8fc9-ee19b4354df9 bound to our chassis
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.409 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58ff48d6-a644-40e6-8fc9-ee19b4354df9
Oct 14 08:56:32 compute-0 systemd-machined[214636]: New machine qemu-13-instance-0000000d.
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.420 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e9047cc8-1bd9-45fd-a10e-4bb530c6a1c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.421 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58ff48d6-a1 in ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.423 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58ff48d6-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.424 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c2de96bd-f0c3-40f9-ab21-1d3b2cf1c062]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.424 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[df07d5b8-079f-4723-8466-c833ad9ef831]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.435 2 DEBUG nova.network.neutron [req-ae3acb7e-3cf2-4e95-9fe3-65fde29a8421 req-8e2fbb13-66d1-4f95-88d8-5af4f568c129 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updated VIF entry in instance network info cache for port 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.435 2 DEBUG nova.network.neutron [req-ae3acb7e-3cf2-4e95-9fe3-65fde29a8421 req-8e2fbb13-66d1-4f95-88d8-5af4f568c129 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updating instance_info_cache with network_info: [{"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.438 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[31aaf8db-e568-460a-9fa7-955c6b14e279]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.451 2 DEBUG oslo_concurrency.lockutils [req-ae3acb7e-3cf2-4e95-9fe3-65fde29a8421 req-8e2fbb13-66d1-4f95-88d8-5af4f568c129 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:56:32 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Oct 14 08:56:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.466 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff9f0d2-8a3b-4924-b9c0-f85dd82cba3a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:32 compute-0 systemd-udevd[282678]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:56:32 compute-0 ovn_controller[152662]: 2025-10-14T08:56:32Z|00079|binding|INFO|Setting lport 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 ovn-installed in OVS
Oct 14 08:56:32 compute-0 ovn_controller[152662]: 2025-10-14T08:56:32Z|00080|binding|INFO|Setting lport 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 up in Southbound
Oct 14 08:56:32 compute-0 ceph-mon[74249]: pgmap v1133: 305 pgs: 305 active+clean; 121 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 180 op/s
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:32 compute-0 NetworkManager[44885]: <info>  [1760432192.5100] device (tap7da6c99d-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:56:32 compute-0 NetworkManager[44885]: <info>  [1760432192.5118] device (tap7da6c99d-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.511 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6cda1610-579a-43be-9b35-400591176703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:32 compute-0 NetworkManager[44885]: <info>  [1760432192.5197] manager: (tap58ff48d6-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.517 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ad2cc48a-c0fc-4c1c-adcd-83ae9c02e9c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.545 2 DEBUG nova.network.neutron [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.567 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[779d0f34-f18b-451b-87ee-1a9e4ceb92ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.571 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aba15ba4-ed2d-424f-b41e-b42d1320a348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:32 compute-0 NetworkManager[44885]: <info>  [1760432192.6014] device (tap58ff48d6-a0): carrier: link connected
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.614 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[defe4d97-714e-43d5-85c0-0a0acaefa7ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.631 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c7804970-1ac3-4ef8-855b-01baacdeb9f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58ff48d6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:28:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597136, 'reachable_time': 43498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282707, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.651 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bea17eb5-ebea-44f4-848c-80c2c7ad4baa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe75:28ce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597136, 'tstamp': 597136}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282708, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.673 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d3cca91d-7500-4fd2-8cc3-86c4d838742b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58ff48d6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:28:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597136, 'reachable_time': 43498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282709, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1dec415e-9041-4cc6-a670-b5e4795f40ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:56:32
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'volumes', 'images', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta']
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.791 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[40a6404f-fd26-4bbb-a297-40b5b4f13b37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.793 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58ff48d6-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.793 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.794 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58ff48d6-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:32 compute-0 NetworkManager[44885]: <info>  [1760432192.7966] manager: (tap58ff48d6-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Oct 14 08:56:32 compute-0 kernel: tap58ff48d6-a0: entered promiscuous mode
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.803 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58ff48d6-a0, col_values=(('external_ids', {'iface-id': '1eaf3c85-b7b7-4dd7-ad1f-33385c25330b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:32 compute-0 ovn_controller[152662]: 2025-10-14T08:56:32Z|00081|binding|INFO|Releasing lport 1eaf3c85-b7b7-4dd7-ad1f-33385c25330b from this chassis (sb_readonly=0)
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.809 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58ff48d6-a644-40e6-8fc9-ee19b4354df9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58ff48d6-a644-40e6-8fc9-ee19b4354df9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.810 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5a617d7f-7846-4abd-a79d-c3245dabaec4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.811 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-58ff48d6-a644-40e6-8fc9-ee19b4354df9
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/58ff48d6-a644-40e6-8fc9-ee19b4354df9.pid.haproxy
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 58ff48d6-a644-40e6-8fc9-ee19b4354df9
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:56:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.812 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'env', 'PROCESS_TAG=haproxy-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58ff48d6-a644-40e6-8fc9-ee19b4354df9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:56:32 compute-0 nova_compute[259627]: 2025-10-14 08:56:32.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 08:56:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 08:56:33 compute-0 podman[282741]: 2025-10-14 08:56:33.235113076 +0000 UTC m=+0.071874934 container create 6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:56:33 compute-0 systemd[1]: Started libpod-conmon-6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765.scope.
Oct 14 08:56:33 compute-0 podman[282741]: 2025-10-14 08:56:33.197845057 +0000 UTC m=+0.034606995 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:56:33 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47fd54790ff5fec54b5deca5f48d8cccc184d3f02bd560c1168873a2e4409722/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:56:33 compute-0 podman[282741]: 2025-10-14 08:56:33.312821993 +0000 UTC m=+0.149583881 container init 6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:56:33 compute-0 podman[282741]: 2025-10-14 08:56:33.318891983 +0000 UTC m=+0.155653841 container start 6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 14 08:56:33 compute-0 neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9[282796]: [NOTICE]   (282801) : New worker (282804) forked
Oct 14 08:56:33 compute-0 neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9[282796]: [NOTICE]   (282801) : Loading success.
Oct 14 08:56:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 121 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 146 op/s
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.559 2 DEBUG nova.network.neutron [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Updating instance_info_cache with network_info: [{"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.579 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Releasing lock "refresh_cache-5de76ef0-5c03-4b43-a691-c858cecd9e80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.579 2 DEBUG nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Instance network_info: |[{"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.580 2 DEBUG oslo_concurrency.lockutils [req-d24f5148-5778-4f54-9358-9ad843446e7b req-606a3c50-9bfb-4ae9-b3bc-9a64af0e12ed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-5de76ef0-5c03-4b43-a691-c858cecd9e80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.580 2 DEBUG nova.network.neutron [req-d24f5148-5778-4f54-9358-9ad843446e7b req-606a3c50-9bfb-4ae9-b3bc-9a64af0e12ed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Refreshing network info cache for port e0cae4c8-f654-471d-a9c1-c77a306f1edf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.586 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Start _get_guest_xml network_info=[{"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.593 2 WARNING nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.603 2 DEBUG nova.virt.libvirt.host [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.604 2 DEBUG nova.virt.libvirt.host [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.609 2 DEBUG nova.virt.libvirt.host [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.610 2 DEBUG nova.virt.libvirt.host [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.611 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.611 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.612 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.612 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.612 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.613 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.613 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.613 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.614 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.614 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.615 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.615 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.620 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.815 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432193.8148136, 548bff7e-531b-4f5d-b4d3-18d586f46581 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.815 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] VM Started (Lifecycle Event)
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.833 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.838 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432193.8151333, 548bff7e-531b-4f5d-b4d3-18d586f46581 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.838 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] VM Paused (Lifecycle Event)
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.872 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.876 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:56:33 compute-0 nova_compute[259627]: 2025-10-14 08:56:33.894 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:56:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:56:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4249284711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.108 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.125 2 DEBUG nova.storage.rbd_utils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.128 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:56:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/887096197' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.512 2 DEBUG nova.compute.manager [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Received event network-vif-plugged-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.513 2 DEBUG oslo_concurrency.lockutils [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.513 2 DEBUG oslo_concurrency.lockutils [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.514 2 DEBUG oslo_concurrency.lockutils [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.514 2 DEBUG nova.compute.manager [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Processing event network-vif-plugged-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.514 2 DEBUG nova.compute.manager [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Received event network-vif-plugged-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.515 2 DEBUG oslo_concurrency.lockutils [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.515 2 DEBUG oslo_concurrency.lockutils [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.516 2 DEBUG oslo_concurrency.lockutils [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.516 2 DEBUG nova.compute.manager [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] No waiting events found dispatching network-vif-plugged-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.516 2 WARNING nova.compute.manager [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Received unexpected event network-vif-plugged-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 for instance with vm_state building and task_state spawning.
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.518 2 DEBUG nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.523 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432194.5228746, 548bff7e-531b-4f5d-b4d3-18d586f46581 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.524 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] VM Resumed (Lifecycle Event)
Oct 14 08:56:34 compute-0 ceph-mon[74249]: pgmap v1134: 305 pgs: 305 active+clean; 121 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 146 op/s
Oct 14 08:56:34 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4249284711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:34 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/887096197' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.530 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.533 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.535 2 DEBUG nova.virt.libvirt.vif [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:56:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-526971628',display_name='tempest-ImagesOneServerNegativeTestJSON-server-526971628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-526971628',id=14,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f24bbeb2f91141e294590ca2afc5ed42',ramdisk_id='',reservation_id='r-6k171gzh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-531836018',owner_user_name=
'tempest-ImagesOneServerNegativeTestJSON-531836018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:56:29Z,user_data=None,user_id='aafd6ad40c944c3eb14e7fbf454040c3',uuid=5de76ef0-5c03-4b43-a691-c858cecd9e80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.536 2 DEBUG nova.network.os_vif_util [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converting VIF {"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.537 2 DEBUG nova.network.os_vif_util [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:3d:9e,bridge_name='br-int',has_traffic_filtering=True,id=e0cae4c8-f654-471d-a9c1-c77a306f1edf,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0cae4c8-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.540 2 DEBUG nova.objects.instance [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5de76ef0-5c03-4b43-a691-c858cecd9e80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.548 2 INFO nova.virt.libvirt.driver [-] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Instance spawned successfully.
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.549 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.563 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.573 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:56:34 compute-0 nova_compute[259627]:   <uuid>5de76ef0-5c03-4b43-a691-c858cecd9e80</uuid>
Oct 14 08:56:34 compute-0 nova_compute[259627]:   <name>instance-0000000e</name>
Oct 14 08:56:34 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:56:34 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:56:34 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-526971628</nova:name>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:56:33</nova:creationTime>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:56:34 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:56:34 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:56:34 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:56:34 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:56:34 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:56:34 compute-0 nova_compute[259627]:         <nova:user uuid="aafd6ad40c944c3eb14e7fbf454040c3">tempest-ImagesOneServerNegativeTestJSON-531836018-project-member</nova:user>
Oct 14 08:56:34 compute-0 nova_compute[259627]:         <nova:project uuid="f24bbeb2f91141e294590ca2afc5ed42">tempest-ImagesOneServerNegativeTestJSON-531836018</nova:project>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:56:34 compute-0 nova_compute[259627]:         <nova:port uuid="e0cae4c8-f654-471d-a9c1-c77a306f1edf">
Oct 14 08:56:34 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:56:34 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:56:34 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <system>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <entry name="serial">5de76ef0-5c03-4b43-a691-c858cecd9e80</entry>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <entry name="uuid">5de76ef0-5c03-4b43-a691-c858cecd9e80</entry>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     </system>
Oct 14 08:56:34 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:56:34 compute-0 nova_compute[259627]:   <os>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:   </os>
Oct 14 08:56:34 compute-0 nova_compute[259627]:   <features>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:   </features>
Oct 14 08:56:34 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:56:34 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:56:34 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/5de76ef0-5c03-4b43-a691-c858cecd9e80_disk">
Oct 14 08:56:34 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       </source>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:56:34 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/5de76ef0-5c03-4b43-a691-c858cecd9e80_disk.config">
Oct 14 08:56:34 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       </source>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:56:34 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:bc:3d:9e"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <target dev="tape0cae4c8-f6"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80/console.log" append="off"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <video>
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     </video>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:56:34 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:56:34 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:56:34 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:56:34 compute-0 nova_compute[259627]: </domain>
Oct 14 08:56:34 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.575 2 DEBUG nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Preparing to wait for external event network-vif-plugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.575 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.576 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.576 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.577 2 DEBUG nova.virt.libvirt.vif [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:56:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-526971628',display_name='tempest-ImagesOneServerNegativeTestJSON-server-526971628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-526971628',id=14,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f24bbeb2f91141e294590ca2afc5ed42',ramdisk_id='',reservation_id='r-6k171gzh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-531836018',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-531836018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:56:29Z,user_data=None,user_id='aafd6ad40c944c3eb14e7fbf454040c3',uuid=5de76ef0-5c03-4b43-a691-c858cecd9e80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.578 2 DEBUG nova.network.os_vif_util [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converting VIF {"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.578 2 DEBUG nova.network.os_vif_util [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:3d:9e,bridge_name='br-int',has_traffic_filtering=True,id=e0cae4c8-f654-471d-a9c1-c77a306f1edf,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0cae4c8-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.579 2 DEBUG os_vif [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:3d:9e,bridge_name='br-int',has_traffic_filtering=True,id=e0cae4c8-f654-471d-a9c1-c77a306f1edf,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0cae4c8-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.581 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.581 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0cae4c8-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.587 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape0cae4c8-f6, col_values=(('external_ids', {'iface-id': 'e0cae4c8-f654-471d-a9c1-c77a306f1edf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:3d:9e', 'vm-uuid': '5de76ef0-5c03-4b43-a691-c858cecd9e80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:34 compute-0 NetworkManager[44885]: <info>  [1760432194.5904] manager: (tape0cae4c8-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.592 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.595 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.595 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.596 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.596 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.596 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.597 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.601 2 INFO os_vif [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:3d:9e,bridge_name='br-int',has_traffic_filtering=True,id=e0cae4c8-f654-471d-a9c1-c77a306f1edf,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0cae4c8-f6')
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.623 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.663 2 INFO nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Took 9.08 seconds to spawn the instance on the hypervisor.
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.664 2 DEBUG nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.670 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.670 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.670 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No VIF found with MAC fa:16:3e:bc:3d:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.671 2 INFO nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Using config drive
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.689 2 DEBUG nova.storage.rbd_utils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.770 2 INFO nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Took 10.13 seconds to build instance.
Oct 14 08:56:34 compute-0 nova_compute[259627]: 2025-10-14 08:56:34.794 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:35 compute-0 nova_compute[259627]: 2025-10-14 08:56:35.108 2 DEBUG nova.network.neutron [req-d24f5148-5778-4f54-9358-9ad843446e7b req-606a3c50-9bfb-4ae9-b3bc-9a64af0e12ed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Updated VIF entry in instance network info cache for port e0cae4c8-f654-471d-a9c1-c77a306f1edf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:56:35 compute-0 nova_compute[259627]: 2025-10-14 08:56:35.108 2 DEBUG nova.network.neutron [req-d24f5148-5778-4f54-9358-9ad843446e7b req-606a3c50-9bfb-4ae9-b3bc-9a64af0e12ed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Updating instance_info_cache with network_info: [{"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:35 compute-0 nova_compute[259627]: 2025-10-14 08:56:35.139 2 DEBUG oslo_concurrency.lockutils [req-d24f5148-5778-4f54-9358-9ad843446e7b req-606a3c50-9bfb-4ae9-b3bc-9a64af0e12ed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-5de76ef0-5c03-4b43-a691-c858cecd9e80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:56:35 compute-0 nova_compute[259627]: 2025-10-14 08:56:35.430 2 INFO nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Creating config drive at /var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80/disk.config
Oct 14 08:56:35 compute-0 nova_compute[259627]: 2025-10-14 08:56:35.436 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeeemi12b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 171 op/s
Oct 14 08:56:35 compute-0 nova_compute[259627]: 2025-10-14 08:56:35.566 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeeemi12b" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:35 compute-0 nova_compute[259627]: 2025-10-14 08:56:35.621 2 DEBUG nova.storage.rbd_utils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:35 compute-0 nova_compute[259627]: 2025-10-14 08:56:35.628 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80/disk.config 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:35 compute-0 nova_compute[259627]: 2025-10-14 08:56:35.794 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80/disk.config 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:35 compute-0 nova_compute[259627]: 2025-10-14 08:56:35.795 2 INFO nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Deleting local config drive /var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80/disk.config because it was imported into RBD.
Oct 14 08:56:35 compute-0 kernel: tape0cae4c8-f6: entered promiscuous mode
Oct 14 08:56:35 compute-0 NetworkManager[44885]: <info>  [1760432195.8569] manager: (tape0cae4c8-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Oct 14 08:56:35 compute-0 ovn_controller[152662]: 2025-10-14T08:56:35Z|00082|binding|INFO|Claiming lport e0cae4c8-f654-471d-a9c1-c77a306f1edf for this chassis.
Oct 14 08:56:35 compute-0 ovn_controller[152662]: 2025-10-14T08:56:35Z|00083|binding|INFO|e0cae4c8-f654-471d-a9c1-c77a306f1edf: Claiming fa:16:3e:bc:3d:9e 10.100.0.5
Oct 14 08:56:35 compute-0 nova_compute[259627]: 2025-10-14 08:56:35.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:35 compute-0 systemd-udevd[282948]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:56:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.907 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:3d:9e 10.100.0.5'], port_security=['fa:16:3e:bc:3d:9e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5de76ef0-5c03-4b43-a691-c858cecd9e80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58d74886-d603-4fb5-b8ff-9c184284bdce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f24bbeb2f91141e294590ca2afc5ed42', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd818ab3d-f5ea-4d77-bc47-7efe2295e146', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1adf6e68-9c1a-4ee7-a829-03bbd6a5ae48, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e0cae4c8-f654-471d-a9c1-c77a306f1edf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:56:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.908 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e0cae4c8-f654-471d-a9c1-c77a306f1edf in datapath 58d74886-d603-4fb5-b8ff-9c184284bdce bound to our chassis
Oct 14 08:56:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.910 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58d74886-d603-4fb5-b8ff-9c184284bdce
Oct 14 08:56:35 compute-0 systemd-machined[214636]: New machine qemu-14-instance-0000000e.
Oct 14 08:56:35 compute-0 NetworkManager[44885]: <info>  [1760432195.9203] device (tape0cae4c8-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:56:35 compute-0 NetworkManager[44885]: <info>  [1760432195.9213] device (tape0cae4c8-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:56:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.923 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[292569f1-e773-4a24-9f44-c39ce48090c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.924 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58d74886-d1 in ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:56:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.925 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58d74886-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:56:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.925 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3abe0105-ab3d-4b27-b379-c7e8d314c5b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.926 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ed950ae3-51a6-4b3f-b62f-8a3572d9be1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.939 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2a4238-7d7d-4408-beff-110413a2dcdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:35 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Oct 14 08:56:35 compute-0 nova_compute[259627]: 2025-10-14 08:56:35.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:35 compute-0 ovn_controller[152662]: 2025-10-14T08:56:35Z|00084|binding|INFO|Setting lport e0cae4c8-f654-471d-a9c1-c77a306f1edf ovn-installed in OVS
Oct 14 08:56:35 compute-0 ovn_controller[152662]: 2025-10-14T08:56:35Z|00085|binding|INFO|Setting lport e0cae4c8-f654-471d-a9c1-c77a306f1edf up in Southbound
Oct 14 08:56:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.964 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[65317362-35a3-4374-a8ff-10b6f60e772e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:35 compute-0 nova_compute[259627]: 2025-10-14 08:56:35.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.990 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a70ff9c3-4631-4607-b15e-0769b24fcdb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:35 compute-0 NetworkManager[44885]: <info>  [1760432195.9977] manager: (tap58d74886-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Oct 14 08:56:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.997 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[384239c8-7177-4b03-9666-bba4c49cf389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.028 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[67ac003e-7cc0-4ffb-bafa-06c4da131327]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.031 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc181f7-d0fa-4834-91e3-e325713cf1ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:36 compute-0 NetworkManager[44885]: <info>  [1760432196.0581] device (tap58d74886-d0): carrier: link connected
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.065 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3c36cf01-f0c6-4979-ab82-564934ed182b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.087 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ac886bad-d2bd-4074-a9e0-f3ac5e09c0c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58d74886-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:d2:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597481, 'reachable_time': 18511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282983, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.105 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7768f9b7-4798-45f1-86e5-f6400e8207b9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec7:d2a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597481, 'tstamp': 597481}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282984, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.123 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[52178bf2-a983-4aef-a8fe-98115a21975e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58d74886-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:d2:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597481, 'reachable_time': 18511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282985, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.146 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[84cbdce8-ea07-4dec-b88b-6fd304a0a1d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.203 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[148db18f-8683-4f57-ab30-34a18b22afd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.205 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58d74886-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.205 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.206 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58d74886-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:36 compute-0 NetworkManager[44885]: <info>  [1760432196.2093] manager: (tap58d74886-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Oct 14 08:56:36 compute-0 kernel: tap58d74886-d0: entered promiscuous mode
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.213 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58d74886-d0, col_values=(('external_ids', {'iface-id': 'ef5c894d-34c4-4781-b15c-6813576a45e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:36 compute-0 ovn_controller[152662]: 2025-10-14T08:56:36Z|00086|binding|INFO|Releasing lport ef5c894d-34c4-4781-b15c-6813576a45e8 from this chassis (sb_readonly=0)
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.244 2 DEBUG nova.compute.manager [req-87ca2516-ba2f-4a91-a924-26c3bb7c0096 req-d67a8059-f536-40da-8ff0-3a6b97c90371 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Received event network-vif-plugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.244 2 DEBUG oslo_concurrency.lockutils [req-87ca2516-ba2f-4a91-a924-26c3bb7c0096 req-d67a8059-f536-40da-8ff0-3a6b97c90371 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.244 2 DEBUG oslo_concurrency.lockutils [req-87ca2516-ba2f-4a91-a924-26c3bb7c0096 req-d67a8059-f536-40da-8ff0-3a6b97c90371 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.245 2 DEBUG oslo_concurrency.lockutils [req-87ca2516-ba2f-4a91-a924-26c3bb7c0096 req-d67a8059-f536-40da-8ff0-3a6b97c90371 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.245 2 DEBUG nova.compute.manager [req-87ca2516-ba2f-4a91-a924-26c3bb7c0096 req-d67a8059-f536-40da-8ff0-3a6b97c90371 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Processing event network-vif-plugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.281 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58d74886-d603-4fb5-b8ff-9c184284bdce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58d74886-d603-4fb5-b8ff-9c184284bdce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.281 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f3c7fa18-6d75-481e-adb1-3e257d834012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.282 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-58d74886-d603-4fb5-b8ff-9c184284bdce
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/58d74886-d603-4fb5-b8ff-9c184284bdce.pid.haproxy
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 58d74886-d603-4fb5-b8ff-9c184284bdce
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:56:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.282 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'env', 'PROCESS_TAG=haproxy-58d74886-d603-4fb5-b8ff-9c184284bdce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58d74886-d603-4fb5-b8ff-9c184284bdce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:56:36 compute-0 ceph-mon[74249]: pgmap v1135: 305 pgs: 305 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 171 op/s
Oct 14 08:56:36 compute-0 podman[283059]: 2025-10-14 08:56:36.661931886 +0000 UTC m=+0.046844986 container create 4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:56:36 compute-0 systemd[1]: Started libpod-conmon-4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f.scope.
Oct 14 08:56:36 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:56:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea9fec2e8f9314ad08f9029050299940c2c033e7affbb383b843a03d80cd24df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:56:36 compute-0 podman[283059]: 2025-10-14 08:56:36.729175034 +0000 UTC m=+0.114088154 container init 4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 08:56:36 compute-0 podman[283059]: 2025-10-14 08:56:36.637863143 +0000 UTC m=+0.022776263 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:56:36 compute-0 podman[283059]: 2025-10-14 08:56:36.737230493 +0000 UTC m=+0.122143593 container start 4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:56:36 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[283074]: [NOTICE]   (283078) : New worker (283080) forked
Oct 14 08:56:36 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[283074]: [NOTICE]   (283078) : Loading success.
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.826 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432196.825972, 5de76ef0-5c03-4b43-a691-c858cecd9e80 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.826 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] VM Started (Lifecycle Event)
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.828 2 DEBUG nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.834 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.837 2 INFO nova.virt.libvirt.driver [-] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Instance spawned successfully.
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.837 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.841 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.843 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.856 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.857 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.857 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.857 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.858 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.858 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.861 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.861 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432196.8262136, 5de76ef0-5c03-4b43-a691-c858cecd9e80 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.861 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] VM Paused (Lifecycle Event)
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.881 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.883 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432196.8307307, 5de76ef0-5c03-4b43-a691-c858cecd9e80 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.883 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] VM Resumed (Lifecycle Event)
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.904 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.907 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.914 2 INFO nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Took 7.29 seconds to spawn the instance on the hypervisor.
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.915 2 DEBUG nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:36 compute-0 nova_compute[259627]: 2025-10-14 08:56:36.933 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:56:37 compute-0 nova_compute[259627]: 2025-10-14 08:56:37.006 2 INFO nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Took 8.28 seconds to build instance.
Oct 14 08:56:37 compute-0 nova_compute[259627]: 2025-10-14 08:56:37.039 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:56:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 490 KiB/s rd, 3.6 MiB/s wr, 78 op/s
Oct 14 08:56:37 compute-0 podman[283090]: 2025-10-14 08:56:37.645203847 +0000 UTC m=+0.055033178 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 08:56:37 compute-0 podman[283089]: 2025-10-14 08:56:37.671923396 +0000 UTC m=+0.084726730 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 08:56:38 compute-0 ceph-mon[74249]: pgmap v1136: 305 pgs: 305 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 490 KiB/s rd, 3.6 MiB/s wr, 78 op/s
Oct 14 08:56:38 compute-0 nova_compute[259627]: 2025-10-14 08:56:38.921 2 DEBUG nova.compute.manager [req-044cfb0d-e220-4835-b97e-bdd86045a628 req-944bc06a-3ba0-4d13-8bcd-b43eb4291a32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Received event network-vif-plugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:38 compute-0 nova_compute[259627]: 2025-10-14 08:56:38.923 2 DEBUG oslo_concurrency.lockutils [req-044cfb0d-e220-4835-b97e-bdd86045a628 req-944bc06a-3ba0-4d13-8bcd-b43eb4291a32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:38 compute-0 nova_compute[259627]: 2025-10-14 08:56:38.923 2 DEBUG oslo_concurrency.lockutils [req-044cfb0d-e220-4835-b97e-bdd86045a628 req-944bc06a-3ba0-4d13-8bcd-b43eb4291a32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:38 compute-0 nova_compute[259627]: 2025-10-14 08:56:38.924 2 DEBUG oslo_concurrency.lockutils [req-044cfb0d-e220-4835-b97e-bdd86045a628 req-944bc06a-3ba0-4d13-8bcd-b43eb4291a32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:38 compute-0 nova_compute[259627]: 2025-10-14 08:56:38.924 2 DEBUG nova.compute.manager [req-044cfb0d-e220-4835-b97e-bdd86045a628 req-944bc06a-3ba0-4d13-8bcd-b43eb4291a32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] No waiting events found dispatching network-vif-plugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:56:38 compute-0 nova_compute[259627]: 2025-10-14 08:56:38.925 2 WARNING nova.compute.manager [req-044cfb0d-e220-4835-b97e-bdd86045a628 req-944bc06a-3ba0-4d13-8bcd-b43eb4291a32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Received unexpected event network-vif-plugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf for instance with vm_state active and task_state None.
Oct 14 08:56:39 compute-0 nova_compute[259627]: 2025-10-14 08:56:39.084 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432184.0823946, 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:39 compute-0 nova_compute[259627]: 2025-10-14 08:56:39.084 2 INFO nova.compute.manager [-] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] VM Stopped (Lifecycle Event)
Oct 14 08:56:39 compute-0 nova_compute[259627]: 2025-10-14 08:56:39.105 2 DEBUG nova.compute.manager [None req-42c44aff-5996-42ed-8a93-23fdfedf2c8d - - - - - -] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 490 KiB/s rd, 3.6 MiB/s wr, 78 op/s
Oct 14 08:56:39 compute-0 nova_compute[259627]: 2025-10-14 08:56:39.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:40.173 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:40 compute-0 ceph-mon[74249]: pgmap v1137: 305 pgs: 305 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 490 KiB/s rd, 3.6 MiB/s wr, 78 op/s
Oct 14 08:56:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Oct 14 08:56:41 compute-0 nova_compute[259627]: 2025-10-14 08:56:41.829 2 DEBUG nova.compute.manager [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Received event network-changed-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:41 compute-0 nova_compute[259627]: 2025-10-14 08:56:41.831 2 DEBUG nova.compute.manager [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Refreshing instance network info cache due to event network-changed-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:56:41 compute-0 nova_compute[259627]: 2025-10-14 08:56:41.831 2 DEBUG oslo_concurrency.lockutils [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:56:41 compute-0 nova_compute[259627]: 2025-10-14 08:56:41.832 2 DEBUG oslo_concurrency.lockutils [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:56:41 compute-0 nova_compute[259627]: 2025-10-14 08:56:41.833 2 DEBUG nova.network.neutron [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Refreshing network info cache for port 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:56:41 compute-0 nova_compute[259627]: 2025-10-14 08:56:41.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:56:42 compute-0 ceph-mon[74249]: pgmap v1138: 305 pgs: 305 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006967633855896333 of space, bias 1.0, pg target 0.20902901567689 quantized to 32 (current 32)
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:56:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 08:56:43 compute-0 nova_compute[259627]: 2025-10-14 08:56:43.197 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquiring lock "251ae181-b980-4338-a6b5-eee48450b510" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:43 compute-0 nova_compute[259627]: 2025-10-14 08:56:43.198 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:43 compute-0 nova_compute[259627]: 2025-10-14 08:56:43.217 2 DEBUG nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:56:43 compute-0 nova_compute[259627]: 2025-10-14 08:56:43.303 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:43 compute-0 nova_compute[259627]: 2025-10-14 08:56:43.303 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:43 compute-0 nova_compute[259627]: 2025-10-14 08:56:43.310 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:56:43 compute-0 nova_compute[259627]: 2025-10-14 08:56:43.311 2 INFO nova.compute.claims [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:56:43 compute-0 nova_compute[259627]: 2025-10-14 08:56:43.483 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 435 KiB/s wr, 148 op/s
Oct 14 08:56:43 compute-0 sudo[283152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:56:43 compute-0 sudo[283152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:43 compute-0 sudo[283152]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:56:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1356587249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:43 compute-0 nova_compute[259627]: 2025-10-14 08:56:43.983 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:43 compute-0 nova_compute[259627]: 2025-10-14 08:56:43.991 2 DEBUG nova.compute.provider_tree [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:56:44 compute-0 sudo[283177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:56:44 compute-0 sudo[283177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:44 compute-0 sudo[283177]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.047 2 DEBUG nova.scheduler.client.report [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:56:44 compute-0 sudo[283204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:56:44 compute-0 sudo[283204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:44 compute-0 sudo[283204]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.073 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.074 2 DEBUG nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:56:44 compute-0 sudo[283229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 14 08:56:44 compute-0 sudo[283229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.119 2 DEBUG nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.119 2 DEBUG nova.network.neutron [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.137 2 INFO nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.151 2 DEBUG nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.234 2 DEBUG nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.237 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.239 2 INFO nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Creating image(s)
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.284 2 DEBUG nova.storage.rbd_utils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] rbd image 251ae181-b980-4338-a6b5-eee48450b510_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.322 2 DEBUG nova.storage.rbd_utils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] rbd image 251ae181-b980-4338-a6b5-eee48450b510_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.351 2 DEBUG nova.storage.rbd_utils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] rbd image 251ae181-b980-4338-a6b5-eee48450b510_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.359 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.382 2 DEBUG nova.network.neutron [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updated VIF entry in instance network info cache for port 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.383 2 DEBUG nova.network.neutron [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updating instance_info_cache with network_info: [{"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.408 2 DEBUG oslo_concurrency.lockutils [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.409 2 DEBUG nova.compute.manager [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Received event network-changed-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.409 2 DEBUG nova.compute.manager [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Refreshing instance network info cache due to event network-changed-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.409 2 DEBUG oslo_concurrency.lockutils [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.409 2 DEBUG oslo_concurrency.lockutils [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.410 2 DEBUG nova.network.neutron [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Refreshing network info cache for port 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.431 2 DEBUG nova.policy [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '11f9a8052a8349b0a21b3acc32a7f2b1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dffd1ba9c7eb426eba02b7fa1cb571e2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.442 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.442 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.443 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.443 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.466 2 DEBUG nova.storage.rbd_utils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] rbd image 251ae181-b980-4338-a6b5-eee48450b510_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.468 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 251ae181-b980-4338-a6b5-eee48450b510_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:44 compute-0 ceph-mon[74249]: pgmap v1139: 305 pgs: 305 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 435 KiB/s wr, 148 op/s
Oct 14 08:56:44 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1356587249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:44 compute-0 podman[283414]: 2025-10-14 08:56:44.66593064 +0000 UTC m=+0.096957342 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.716 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 251ae181-b980-4338-a6b5-eee48450b510_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:44 compute-0 podman[283414]: 2025-10-14 08:56:44.750251001 +0000 UTC m=+0.181277683 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.803 2 DEBUG nova.storage.rbd_utils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] resizing rbd image 251ae181-b980-4338-a6b5-eee48450b510_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.890 2 DEBUG nova.objects.instance [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lazy-loading 'migration_context' on Instance uuid 251ae181-b980-4338-a6b5-eee48450b510 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.912 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.913 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Ensure instance console log exists: /var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.913 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.913 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:44 compute-0 nova_compute[259627]: 2025-10-14 08:56:44.913 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:45 compute-0 sudo[283229]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 08:56:45 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:56:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 08:56:45 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:56:45 compute-0 nova_compute[259627]: 2025-10-14 08:56:45.450 2 DEBUG nova.network.neutron [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Successfully created port: 3af4eb0a-c48b-4857-8399-453429b6af53 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:56:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 181 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 177 op/s
Oct 14 08:56:45 compute-0 sudo[283643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:56:45 compute-0 sudo[283643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:45 compute-0 sudo[283643]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:45 compute-0 sudo[283669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:56:45 compute-0 sudo[283669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:45 compute-0 sudo[283669]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:45 compute-0 sudo[283694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:56:45 compute-0 sudo[283694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:45 compute-0 sudo[283694]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:45 compute-0 sudo[283719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 08:56:45 compute-0 sudo[283719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:46 compute-0 ovn_controller[152662]: 2025-10-14T08:56:46Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:2f:21 10.100.0.9
Oct 14 08:56:46 compute-0 ovn_controller[152662]: 2025-10-14T08:56:46Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:2f:21 10.100.0.9
Oct 14 08:56:46 compute-0 sudo[283719]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:46 compute-0 nova_compute[259627]: 2025-10-14 08:56:46.373 2 DEBUG nova.network.neutron [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updated VIF entry in instance network info cache for port 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:56:46 compute-0 nova_compute[259627]: 2025-10-14 08:56:46.374 2 DEBUG nova.network.neutron [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updating instance_info_cache with network_info: [{"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:46 compute-0 nova_compute[259627]: 2025-10-14 08:56:46.395 2 DEBUG oslo_concurrency.lockutils [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:56:46 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:56:46 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:56:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 08:56:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:56:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 08:56:46 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 08:56:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 08:56:46 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:56:46 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 34df4e35-12e5-4a26-b71a-52d5ee7b4426 does not exist
Oct 14 08:56:46 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 2f209ca5-99fc-4a9b-9a05-5c35d15e23ab does not exist
Oct 14 08:56:46 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 3ff1c5f5-6062-4550-81df-74bb75a338b4 does not exist
Oct 14 08:56:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 08:56:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 08:56:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 08:56:46 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 08:56:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 08:56:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:56:46 compute-0 sudo[283775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:56:46 compute-0 sudo[283775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:46 compute-0 sudo[283775]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:46 compute-0 sudo[283800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:56:46 compute-0 sudo[283800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:46 compute-0 sudo[283800]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:46 compute-0 nova_compute[259627]: 2025-10-14 08:56:46.574 2 DEBUG nova.network.neutron [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Successfully updated port: 3af4eb0a-c48b-4857-8399-453429b6af53 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:56:46 compute-0 nova_compute[259627]: 2025-10-14 08:56:46.589 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquiring lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:56:46 compute-0 nova_compute[259627]: 2025-10-14 08:56:46.589 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquired lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:56:46 compute-0 nova_compute[259627]: 2025-10-14 08:56:46.589 2 DEBUG nova.network.neutron [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:56:46 compute-0 sudo[283825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:56:46 compute-0 sudo[283825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:46 compute-0 sudo[283825]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:46 compute-0 sudo[283850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 08:56:46 compute-0 sudo[283850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:46 compute-0 nova_compute[259627]: 2025-10-14 08:56:46.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:46 compute-0 nova_compute[259627]: 2025-10-14 08:56:46.857 2 DEBUG nova.network.neutron [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:56:46 compute-0 nova_compute[259627]: 2025-10-14 08:56:46.972 2 DEBUG nova.compute.manager [req-c70d62c8-6b36-44ef-a7e7-e2ad78f380b7 req-a7eae03f-bb2d-4008-8d4d-5b5653d171db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received event network-changed-3af4eb0a-c48b-4857-8399-453429b6af53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:46 compute-0 nova_compute[259627]: 2025-10-14 08:56:46.973 2 DEBUG nova.compute.manager [req-c70d62c8-6b36-44ef-a7e7-e2ad78f380b7 req-a7eae03f-bb2d-4008-8d4d-5b5653d171db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Refreshing instance network info cache due to event network-changed-3af4eb0a-c48b-4857-8399-453429b6af53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:56:46 compute-0 nova_compute[259627]: 2025-10-14 08:56:46.973 2 DEBUG oslo_concurrency.lockutils [req-c70d62c8-6b36-44ef-a7e7-e2ad78f380b7 req-a7eae03f-bb2d-4008-8d4d-5b5653d171db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:56:47 compute-0 podman[283916]: 2025-10-14 08:56:47.069467432 +0000 UTC m=+0.069818226 container create 6555442024d50c092138ce3b14f771aecff73f03a9841978a874e4ffe213066a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 08:56:47 compute-0 systemd[1]: Started libpod-conmon-6555442024d50c092138ce3b14f771aecff73f03a9841978a874e4ffe213066a.scope.
Oct 14 08:56:47 compute-0 podman[283916]: 2025-10-14 08:56:47.026092177 +0000 UTC m=+0.026442961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:56:47 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:56:47 compute-0 podman[283916]: 2025-10-14 08:56:47.161272066 +0000 UTC m=+0.161622850 container init 6555442024d50c092138ce3b14f771aecff73f03a9841978a874e4ffe213066a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swanson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:56:47 compute-0 podman[283916]: 2025-10-14 08:56:47.168437602 +0000 UTC m=+0.168788366 container start 6555442024d50c092138ce3b14f771aecff73f03a9841978a874e4ffe213066a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 08:56:47 compute-0 podman[283916]: 2025-10-14 08:56:47.171397455 +0000 UTC m=+0.171748219 container attach 6555442024d50c092138ce3b14f771aecff73f03a9841978a874e4ffe213066a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:56:47 compute-0 epic_swanson[283932]: 167 167
Oct 14 08:56:47 compute-0 systemd[1]: libpod-6555442024d50c092138ce3b14f771aecff73f03a9841978a874e4ffe213066a.scope: Deactivated successfully.
Oct 14 08:56:47 compute-0 podman[283916]: 2025-10-14 08:56:47.17485698 +0000 UTC m=+0.175207784 container died 6555442024d50c092138ce3b14f771aecff73f03a9841978a874e4ffe213066a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:56:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c0a8c1572c07b2aa85ee313db1437632293e90831b447391762bdd4d46cebe0-merged.mount: Deactivated successfully.
Oct 14 08:56:47 compute-0 podman[283916]: 2025-10-14 08:56:47.226702653 +0000 UTC m=+0.227053417 container remove 6555442024d50c092138ce3b14f771aecff73f03a9841978a874e4ffe213066a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 08:56:47 compute-0 systemd[1]: libpod-conmon-6555442024d50c092138ce3b14f771aecff73f03a9841978a874e4ffe213066a.scope: Deactivated successfully.
Oct 14 08:56:47 compute-0 podman[283954]: 2025-10-14 08:56:47.388493226 +0000 UTC m=+0.039251415 container create 862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 08:56:47 compute-0 ceph-mon[74249]: pgmap v1140: 305 pgs: 305 active+clean; 181 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 177 op/s
Oct 14 08:56:47 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:56:47 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 08:56:47 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:56:47 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 08:56:47 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 08:56:47 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:56:47 compute-0 systemd[1]: Started libpod-conmon-862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99.scope.
Oct 14 08:56:47 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:56:47 compute-0 podman[283954]: 2025-10-14 08:56:47.370854713 +0000 UTC m=+0.021612932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:56:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:56:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f64aa11d6549a940b1072e13f185fdb1e9874632198a3d3d17b566f2ee17119/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:56:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f64aa11d6549a940b1072e13f185fdb1e9874632198a3d3d17b566f2ee17119/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:56:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f64aa11d6549a940b1072e13f185fdb1e9874632198a3d3d17b566f2ee17119/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:56:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f64aa11d6549a940b1072e13f185fdb1e9874632198a3d3d17b566f2ee17119/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:56:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f64aa11d6549a940b1072e13f185fdb1e9874632198a3d3d17b566f2ee17119/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 08:56:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 181 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 152 op/s
Oct 14 08:56:47 compute-0 podman[283954]: 2025-10-14 08:56:47.503888468 +0000 UTC m=+0.154646687 container init 862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:56:47 compute-0 podman[283954]: 2025-10-14 08:56:47.517957234 +0000 UTC m=+0.168715413 container start 862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 08:56:47 compute-0 podman[283954]: 2025-10-14 08:56:47.523729336 +0000 UTC m=+0.174487625 container attach 862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:56:47 compute-0 nova_compute[259627]: 2025-10-14 08:56:47.664 2 DEBUG nova.compute.manager [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:47 compute-0 nova_compute[259627]: 2025-10-14 08:56:47.739 2 INFO nova.compute.manager [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] instance snapshotting
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.001 2 INFO nova.virt.libvirt.driver [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Beginning live snapshot process
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.151 2 DEBUG nova.virt.libvirt.imagebackend [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.178 2 DEBUG nova.network.neutron [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Updating instance_info_cache with network_info: [{"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.199 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Releasing lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.199 2 DEBUG nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Instance network_info: |[{"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.200 2 DEBUG oslo_concurrency.lockutils [req-c70d62c8-6b36-44ef-a7e7-e2ad78f380b7 req-a7eae03f-bb2d-4008-8d4d-5b5653d171db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.200 2 DEBUG nova.network.neutron [req-c70d62c8-6b36-44ef-a7e7-e2ad78f380b7 req-a7eae03f-bb2d-4008-8d4d-5b5653d171db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Refreshing network info cache for port 3af4eb0a-c48b-4857-8399-453429b6af53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.204 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Start _get_guest_xml network_info=[{"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.208 2 WARNING nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.213 2 DEBUG nova.virt.libvirt.host [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.213 2 DEBUG nova.virt.libvirt.host [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.219 2 DEBUG nova.virt.libvirt.host [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.219 2 DEBUG nova.virt.libvirt.host [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.220 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.220 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.220 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.220 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.220 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.221 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.221 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.221 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.221 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.222 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.222 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.222 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.224 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.397 2 DEBUG nova.storage.rbd_utils [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] creating snapshot(cce7a2c7b7974e068ac051c3b08861cb) on rbd image(5de76ef0-5c03-4b43-a691-c858cecd9e80_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 08:56:48 compute-0 ovn_controller[152662]: 2025-10-14T08:56:48Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bc:3d:9e 10.100.0.5
Oct 14 08:56:48 compute-0 ovn_controller[152662]: 2025-10-14T08:56:48Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:3d:9e 10.100.0.5
Oct 14 08:56:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:56:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/654340394' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.658 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:48 compute-0 flamboyant_brahmagupta[283970]: --> passed data devices: 0 physical, 3 LVM
Oct 14 08:56:48 compute-0 flamboyant_brahmagupta[283970]: --> relative data size: 1.0
Oct 14 08:56:48 compute-0 flamboyant_brahmagupta[283970]: --> All data devices are unavailable
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.682 2 DEBUG nova.storage.rbd_utils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] rbd image 251ae181-b980-4338-a6b5-eee48450b510_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:48 compute-0 nova_compute[259627]: 2025-10-14 08:56:48.686 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:48 compute-0 systemd[1]: libpod-862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99.scope: Deactivated successfully.
Oct 14 08:56:48 compute-0 podman[283954]: 2025-10-14 08:56:48.70447846 +0000 UTC m=+1.355236639 container died 862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 08:56:48 compute-0 systemd[1]: libpod-862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99.scope: Consumed 1.082s CPU time.
Oct 14 08:56:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f64aa11d6549a940b1072e13f185fdb1e9874632198a3d3d17b566f2ee17119-merged.mount: Deactivated successfully.
Oct 14 08:56:48 compute-0 podman[283954]: 2025-10-14 08:56:48.774743316 +0000 UTC m=+1.425501505 container remove 862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 08:56:48 compute-0 systemd[1]: libpod-conmon-862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99.scope: Deactivated successfully.
Oct 14 08:56:48 compute-0 sudo[283850]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:48 compute-0 sudo[284103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:56:48 compute-0 sudo[284103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:48 compute-0 sudo[284103]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:48 compute-0 sudo[284147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:56:48 compute-0 sudo[284147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:48 compute-0 sudo[284147]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:48 compute-0 sudo[284172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:56:48 compute-0 sudo[284172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:48 compute-0 sudo[284172]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:49 compute-0 sudo[284197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 08:56:49 compute-0 sudo[284197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:56:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/815552921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.134 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.137 2 DEBUG nova.virt.libvirt.vif [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:56:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-928191118',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-928191118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-928191118',id=15,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dffd1ba9c7eb426eba02b7fa1cb571e2',ramdisk_id='',reservation_id='r-wq8uysq1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON
-39168433',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-39168433-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:56:44Z,user_data=None,user_id='11f9a8052a8349b0a21b3acc32a7f2b1',uuid=251ae181-b980-4338-a6b5-eee48450b510,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.137 2 DEBUG nova.network.os_vif_util [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Converting VIF {"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.138 2 DEBUG nova.network.os_vif_util [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:99:88,bridge_name='br-int',has_traffic_filtering=True,id=3af4eb0a-c48b-4857-8399-453429b6af53,network=Network(fb9605f8-2a2c-40d4-892f-fb75a29c07c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af4eb0a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.139 2 DEBUG nova.objects.instance [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 251ae181-b980-4338-a6b5-eee48450b510 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.158 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:56:49 compute-0 nova_compute[259627]:   <uuid>251ae181-b980-4338-a6b5-eee48450b510</uuid>
Oct 14 08:56:49 compute-0 nova_compute[259627]:   <name>instance-0000000f</name>
Oct 14 08:56:49 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:56:49 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:56:49 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-928191118</nova:name>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:56:48</nova:creationTime>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:56:49 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:56:49 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:56:49 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:56:49 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:56:49 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:56:49 compute-0 nova_compute[259627]:         <nova:user uuid="11f9a8052a8349b0a21b3acc32a7f2b1">tempest-FloatingIPsAssociationNegativeTestJSON-39168433-project-member</nova:user>
Oct 14 08:56:49 compute-0 nova_compute[259627]:         <nova:project uuid="dffd1ba9c7eb426eba02b7fa1cb571e2">tempest-FloatingIPsAssociationNegativeTestJSON-39168433</nova:project>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:56:49 compute-0 nova_compute[259627]:         <nova:port uuid="3af4eb0a-c48b-4857-8399-453429b6af53">
Oct 14 08:56:49 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:56:49 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:56:49 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <system>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <entry name="serial">251ae181-b980-4338-a6b5-eee48450b510</entry>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <entry name="uuid">251ae181-b980-4338-a6b5-eee48450b510</entry>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     </system>
Oct 14 08:56:49 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:56:49 compute-0 nova_compute[259627]:   <os>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:   </os>
Oct 14 08:56:49 compute-0 nova_compute[259627]:   <features>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:   </features>
Oct 14 08:56:49 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:56:49 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:56:49 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/251ae181-b980-4338-a6b5-eee48450b510_disk">
Oct 14 08:56:49 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       </source>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:56:49 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/251ae181-b980-4338-a6b5-eee48450b510_disk.config">
Oct 14 08:56:49 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       </source>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:56:49 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:dd:99:88"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <target dev="tap3af4eb0a-c4"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510/console.log" append="off"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <video>
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     </video>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:56:49 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:56:49 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:56:49 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:56:49 compute-0 nova_compute[259627]: </domain>
Oct 14 08:56:49 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.159 2 DEBUG nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Preparing to wait for external event network-vif-plugged-3af4eb0a-c48b-4857-8399-453429b6af53 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.159 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquiring lock "251ae181-b980-4338-a6b5-eee48450b510-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.159 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.159 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.160 2 DEBUG nova.virt.libvirt.vif [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:56:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-928191118',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-928191118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-928191118',id=15,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dffd1ba9c7eb426eba02b7fa1cb571e2',ramdisk_id='',reservation_id='r-wq8uysq1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegati
veTestJSON-39168433',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-39168433-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:56:44Z,user_data=None,user_id='11f9a8052a8349b0a21b3acc32a7f2b1',uuid=251ae181-b980-4338-a6b5-eee48450b510,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.160 2 DEBUG nova.network.os_vif_util [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Converting VIF {"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.161 2 DEBUG nova.network.os_vif_util [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:99:88,bridge_name='br-int',has_traffic_filtering=True,id=3af4eb0a-c48b-4857-8399-453429b6af53,network=Network(fb9605f8-2a2c-40d4-892f-fb75a29c07c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af4eb0a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.161 2 DEBUG os_vif [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:99:88,bridge_name='br-int',has_traffic_filtering=True,id=3af4eb0a-c48b-4857-8399-453429b6af53,network=Network(fb9605f8-2a2c-40d4-892f-fb75a29c07c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af4eb0a-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.162 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.163 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.168 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3af4eb0a-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.168 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3af4eb0a-c4, col_values=(('external_ids', {'iface-id': '3af4eb0a-c48b-4857-8399-453429b6af53', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:99:88', 'vm-uuid': '251ae181-b980-4338-a6b5-eee48450b510'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:49 compute-0 NetworkManager[44885]: <info>  [1760432209.1728] manager: (tap3af4eb0a-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.180 2 INFO os_vif [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:99:88,bridge_name='br-int',has_traffic_filtering=True,id=3af4eb0a-c48b-4857-8399-453429b6af53,network=Network(fb9605f8-2a2c-40d4-892f-fb75a29c07c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af4eb0a-c4')
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.246 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.246 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.247 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] No VIF found with MAC fa:16:3e:dd:99:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.247 2 INFO nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Using config drive
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.275 2 DEBUG nova.storage.rbd_utils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] rbd image 251ae181-b980-4338-a6b5-eee48450b510_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:49 compute-0 podman[284286]: 2025-10-14 08:56:49.415644304 +0000 UTC m=+0.054247173 container create 6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 14 08:56:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct 14 08:56:49 compute-0 ceph-mon[74249]: pgmap v1141: 305 pgs: 305 active+clean; 181 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 152 op/s
Oct 14 08:56:49 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/654340394' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:49 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/815552921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct 14 08:56:49 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct 14 08:56:49 compute-0 systemd[1]: Started libpod-conmon-6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3.scope.
Oct 14 08:56:49 compute-0 podman[284286]: 2025-10-14 08:56:49.398171235 +0000 UTC m=+0.036774164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.498 2 DEBUG nova.storage.rbd_utils [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] cloning vms/5de76ef0-5c03-4b43-a691-c858cecd9e80_disk@cce7a2c7b7974e068ac051c3b08861cb to images/c5692065-49f8-45a6-8eac-e30026b3f690 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 08:56:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 181 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 182 op/s
Oct 14 08:56:49 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:56:49 compute-0 podman[284286]: 2025-10-14 08:56:49.525532652 +0000 UTC m=+0.164135561 container init 6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 08:56:49 compute-0 podman[284286]: 2025-10-14 08:56:49.534157804 +0000 UTC m=+0.172760683 container start 6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 08:56:49 compute-0 podman[284286]: 2025-10-14 08:56:49.537584248 +0000 UTC m=+0.176187137 container attach 6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:56:49 compute-0 stupefied_golick[284301]: 167 167
Oct 14 08:56:49 compute-0 systemd[1]: libpod-6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3.scope: Deactivated successfully.
Oct 14 08:56:49 compute-0 conmon[284301]: conmon 6a72bb74e1a6537abbf7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3.scope/container/memory.events
Oct 14 08:56:49 compute-0 podman[284286]: 2025-10-14 08:56:49.54172048 +0000 UTC m=+0.180323369 container died 6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 08:56:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-08d8a554f6c7b905bb590ab4da7f8c9e13b92d72a26b8fa22c91dae907df4c30-merged.mount: Deactivated successfully.
Oct 14 08:56:49 compute-0 podman[284286]: 2025-10-14 08:56:49.585945306 +0000 UTC m=+0.224548175 container remove 6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 08:56:49 compute-0 systemd[1]: libpod-conmon-6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3.scope: Deactivated successfully.
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.617 2 DEBUG nova.storage.rbd_utils [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] flattening images/c5692065-49f8-45a6-8eac-e30026b3f690 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.651 2 DEBUG nova.network.neutron [req-c70d62c8-6b36-44ef-a7e7-e2ad78f380b7 req-a7eae03f-bb2d-4008-8d4d-5b5653d171db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Updated VIF entry in instance network info cache for port 3af4eb0a-c48b-4857-8399-453429b6af53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.652 2 DEBUG nova.network.neutron [req-c70d62c8-6b36-44ef-a7e7-e2ad78f380b7 req-a7eae03f-bb2d-4008-8d4d-5b5653d171db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Updating instance_info_cache with network_info: [{"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.672 2 DEBUG oslo_concurrency.lockutils [req-c70d62c8-6b36-44ef-a7e7-e2ad78f380b7 req-a7eae03f-bb2d-4008-8d4d-5b5653d171db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.785 2 INFO nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Creating config drive at /var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510/disk.config
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.789 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0310kl_n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:49 compute-0 podman[284379]: 2025-10-14 08:56:49.806310087 +0000 UTC m=+0.048086682 container create fbbb9d72f228af6a45ce73431e066c05bfeb4f60d5af1aa2c511474521198687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 08:56:49 compute-0 systemd[1]: Started libpod-conmon-fbbb9d72f228af6a45ce73431e066c05bfeb4f60d5af1aa2c511474521198687.scope.
Oct 14 08:56:49 compute-0 podman[284379]: 2025-10-14 08:56:49.787239799 +0000 UTC m=+0.029016414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:56:49 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6664f7d420f54b60931401a546e486e20be51f72fbe885ba74c022856763a47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6664f7d420f54b60931401a546e486e20be51f72fbe885ba74c022856763a47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6664f7d420f54b60931401a546e486e20be51f72fbe885ba74c022856763a47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6664f7d420f54b60931401a546e486e20be51f72fbe885ba74c022856763a47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:56:49 compute-0 podman[284379]: 2025-10-14 08:56:49.910327161 +0000 UTC m=+0.152103816 container init fbbb9d72f228af6a45ce73431e066c05bfeb4f60d5af1aa2c511474521198687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.919 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0310kl_n" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:49 compute-0 podman[284379]: 2025-10-14 08:56:49.922709576 +0000 UTC m=+0.164486191 container start fbbb9d72f228af6a45ce73431e066c05bfeb4f60d5af1aa2c511474521198687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 08:56:49 compute-0 podman[284379]: 2025-10-14 08:56:49.926516139 +0000 UTC m=+0.168292754 container attach fbbb9d72f228af6a45ce73431e066c05bfeb4f60d5af1aa2c511474521198687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.943 2 DEBUG nova.storage.rbd_utils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] rbd image 251ae181-b980-4338-a6b5-eee48450b510_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.950 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510/disk.config 251ae181-b980-4338-a6b5-eee48450b510_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:49 compute-0 nova_compute[259627]: 2025-10-14 08:56:49.978 2 DEBUG nova.storage.rbd_utils [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] removing snapshot(cce7a2c7b7974e068ac051c3b08861cb) on rbd image(5de76ef0-5c03-4b43-a691-c858cecd9e80_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 08:56:50 compute-0 nova_compute[259627]: 2025-10-14 08:56:50.102 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510/disk.config 251ae181-b980-4338-a6b5-eee48450b510_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:50 compute-0 nova_compute[259627]: 2025-10-14 08:56:50.103 2 INFO nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Deleting local config drive /var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510/disk.config because it was imported into RBD.
Oct 14 08:56:50 compute-0 kernel: tap3af4eb0a-c4: entered promiscuous mode
Oct 14 08:56:50 compute-0 NetworkManager[44885]: <info>  [1760432210.1603] manager: (tap3af4eb0a-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Oct 14 08:56:50 compute-0 nova_compute[259627]: 2025-10-14 08:56:50.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:50 compute-0 ovn_controller[152662]: 2025-10-14T08:56:50Z|00087|binding|INFO|Claiming lport 3af4eb0a-c48b-4857-8399-453429b6af53 for this chassis.
Oct 14 08:56:50 compute-0 ovn_controller[152662]: 2025-10-14T08:56:50Z|00088|binding|INFO|3af4eb0a-c48b-4857-8399-453429b6af53: Claiming fa:16:3e:dd:99:88 10.100.0.13
Oct 14 08:56:50 compute-0 nova_compute[259627]: 2025-10-14 08:56:50.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.184 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:99:88 10.100.0.13'], port_security=['fa:16:3e:dd:99:88 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '251ae181-b980-4338-a6b5-eee48450b510', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb9605f8-2a2c-40d4-892f-fb75a29c07c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dffd1ba9c7eb426eba02b7fa1cb571e2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1853f749-24a7-4699-9f13-e869ca5b59f7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=076f043b-a4ac-4ba0-9e01-fc8b197a9834, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3af4eb0a-c48b-4857-8399-453429b6af53) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.190 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3af4eb0a-c48b-4857-8399-453429b6af53 in datapath fb9605f8-2a2c-40d4-892f-fb75a29c07c3 bound to our chassis
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.192 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fb9605f8-2a2c-40d4-892f-fb75a29c07c3
Oct 14 08:56:50 compute-0 systemd-machined[214636]: New machine qemu-15-instance-0000000f.
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.218 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[653713cf-6515-4c8b-9e7b-923313ad9162]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.219 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfb9605f8-21 in ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:56:50 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.221 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfb9605f8-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.221 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b8130f55-d023-4654-96c3-2ec130d3b628]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.225 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[59e79797-c004-4dc2-b3ec-aebedb4706b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.247 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd10875-df39-445c-b22e-bf3ce8e140ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:50 compute-0 ovn_controller[152662]: 2025-10-14T08:56:50Z|00089|binding|INFO|Setting lport 3af4eb0a-c48b-4857-8399-453429b6af53 ovn-installed in OVS
Oct 14 08:56:50 compute-0 ovn_controller[152662]: 2025-10-14T08:56:50Z|00090|binding|INFO|Setting lport 3af4eb0a-c48b-4857-8399-453429b6af53 up in Southbound
Oct 14 08:56:50 compute-0 nova_compute[259627]: 2025-10-14 08:56:50.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:50 compute-0 systemd-udevd[284477]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.274 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[38c5ea12-088f-47b3-9b21-51b69c1064dc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:50 compute-0 NetworkManager[44885]: <info>  [1760432210.2969] device (tap3af4eb0a-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:56:50 compute-0 NetworkManager[44885]: <info>  [1760432210.2984] device (tap3af4eb0a-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.307 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9b72f0-7937-4168-8d8e-93a79628c116]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:50 compute-0 NetworkManager[44885]: <info>  [1760432210.3176] manager: (tapfb9605f8-20): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.316 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ac3277-b768-40b6-858d-05dd69704283]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.360 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ae29df-60df-421f-bac1-d5ca665de7d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.363 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0400e4bb-96a8-4178-b90c-d5e87b1bb337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:50 compute-0 NetworkManager[44885]: <info>  [1760432210.3897] device (tapfb9605f8-20): carrier: link connected
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.396 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[caa684c0-4905-4722-ad5f-ad68befcd81a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.410 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa069bf-58b5-4fc4-8635-1fc00920cad2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb9605f8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:bd:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598915, 'reachable_time': 38139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284506, 'error': None, 'target': 'ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.424 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[91cded16-db02-4683-80e1-7e80e5a5d47f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:bd78'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598915, 'tstamp': 598915}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284507, 'error': None, 'target': 'ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.437 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9d1c7166-3892-45cf-83c8-f4a592a292b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb9605f8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:bd:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598915, 'reachable_time': 38139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284508, 'error': None, 'target': 'ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct 14 08:56:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct 14 08:56:50 compute-0 ceph-mon[74249]: osdmap e124: 3 total, 3 up, 3 in
Oct 14 08:56:50 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.468 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1f92bc69-45ff-4bd2-8cb6-7f640dcef307]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:50 compute-0 nova_compute[259627]: 2025-10-14 08:56:50.495 2 DEBUG nova.storage.rbd_utils [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] creating snapshot(snap) on rbd image(c5692065-49f8-45a6-8eac-e30026b3f690) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.539 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b351a0ef-295b-4d47-84ea-a75c574ed296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.540 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb9605f8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.541 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.541 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb9605f8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:50 compute-0 nova_compute[259627]: 2025-10-14 08:56:50.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:50 compute-0 NetworkManager[44885]: <info>  [1760432210.5437] manager: (tapfb9605f8-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Oct 14 08:56:50 compute-0 kernel: tapfb9605f8-20: entered promiscuous mode
Oct 14 08:56:50 compute-0 nova_compute[259627]: 2025-10-14 08:56:50.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.551 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfb9605f8-20, col_values=(('external_ids', {'iface-id': 'e772e4c9-7f41-4f58-a7a3-269a843bc77c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:50 compute-0 nova_compute[259627]: 2025-10-14 08:56:50.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:50 compute-0 ovn_controller[152662]: 2025-10-14T08:56:50Z|00091|binding|INFO|Releasing lport e772e4c9-7f41-4f58-a7a3-269a843bc77c from this chassis (sb_readonly=0)
Oct 14 08:56:50 compute-0 nova_compute[259627]: 2025-10-14 08:56:50.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.554 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fb9605f8-2a2c-40d4-892f-fb75a29c07c3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fb9605f8-2a2c-40d4-892f-fb75a29c07c3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.555 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3d2669d2-f7b2-4c3a-9fca-83cab941a570]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.556 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-fb9605f8-2a2c-40d4-892f-fb75a29c07c3
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/fb9605f8-2a2c-40d4-892f-fb75a29c07c3.pid.haproxy
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID fb9605f8-2a2c-40d4-892f-fb75a29c07c3
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:56:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.557 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3', 'env', 'PROCESS_TAG=haproxy-fb9605f8-2a2c-40d4-892f-fb75a29c07c3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fb9605f8-2a2c-40d4-892f-fb75a29c07c3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:56:50 compute-0 nova_compute[259627]: 2025-10-14 08:56:50.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]: {
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:     "0": [
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:         {
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "devices": [
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "/dev/loop3"
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             ],
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "lv_name": "ceph_lv0",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "lv_size": "21470642176",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "name": "ceph_lv0",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "tags": {
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.cluster_name": "ceph",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.crush_device_class": "",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.encrypted": "0",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.osd_id": "0",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.type": "block",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.vdo": "0"
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             },
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "type": "block",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "vg_name": "ceph_vg0"
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:         }
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:     ],
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:     "1": [
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:         {
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "devices": [
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "/dev/loop4"
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             ],
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "lv_name": "ceph_lv1",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "lv_size": "21470642176",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "name": "ceph_lv1",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "tags": {
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.cluster_name": "ceph",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.crush_device_class": "",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.encrypted": "0",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.osd_id": "1",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.type": "block",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.vdo": "0"
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             },
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "type": "block",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "vg_name": "ceph_vg1"
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:         }
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:     ],
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:     "2": [
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:         {
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "devices": [
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "/dev/loop5"
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             ],
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "lv_name": "ceph_lv2",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "lv_size": "21470642176",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "name": "ceph_lv2",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "tags": {
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.cluster_name": "ceph",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.crush_device_class": "",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.encrypted": "0",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.osd_id": "2",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.type": "block",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:                 "ceph.vdo": "0"
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             },
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "type": "block",
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:             "vg_name": "ceph_vg2"
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:         }
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]:     ]
Oct 14 08:56:50 compute-0 musing_aryabhata[284398]: }
Oct 14 08:56:50 compute-0 systemd[1]: libpod-fbbb9d72f228af6a45ce73431e066c05bfeb4f60d5af1aa2c511474521198687.scope: Deactivated successfully.
Oct 14 08:56:50 compute-0 podman[284379]: 2025-10-14 08:56:50.74629617 +0000 UTC m=+0.988072765 container died fbbb9d72f228af6a45ce73431e066c05bfeb4f60d5af1aa2c511474521198687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 08:56:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6664f7d420f54b60931401a546e486e20be51f72fbe885ba74c022856763a47-merged.mount: Deactivated successfully.
Oct 14 08:56:50 compute-0 podman[284379]: 2025-10-14 08:56:50.808081837 +0000 UTC m=+1.049858432 container remove fbbb9d72f228af6a45ce73431e066c05bfeb4f60d5af1aa2c511474521198687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 08:56:50 compute-0 systemd[1]: libpod-conmon-fbbb9d72f228af6a45ce73431e066c05bfeb4f60d5af1aa2c511474521198687.scope: Deactivated successfully.
Oct 14 08:56:50 compute-0 sudo[284197]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:50 compute-0 sudo[284612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:56:50 compute-0 nova_compute[259627]: 2025-10-14 08:56:50.928 2 DEBUG nova.compute.manager [req-3d37f6d5-2e5b-4dd8-9ed0-b916f1fb7a44 req-b0bada4d-4a78-4e56-9b3d-5a3c3ed56d3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received event network-vif-plugged-3af4eb0a-c48b-4857-8399-453429b6af53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:50 compute-0 nova_compute[259627]: 2025-10-14 08:56:50.930 2 DEBUG oslo_concurrency.lockutils [req-3d37f6d5-2e5b-4dd8-9ed0-b916f1fb7a44 req-b0bada4d-4a78-4e56-9b3d-5a3c3ed56d3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "251ae181-b980-4338-a6b5-eee48450b510-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:50 compute-0 nova_compute[259627]: 2025-10-14 08:56:50.930 2 DEBUG oslo_concurrency.lockutils [req-3d37f6d5-2e5b-4dd8-9ed0-b916f1fb7a44 req-b0bada4d-4a78-4e56-9b3d-5a3c3ed56d3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:50 compute-0 sudo[284612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:50 compute-0 nova_compute[259627]: 2025-10-14 08:56:50.931 2 DEBUG oslo_concurrency.lockutils [req-3d37f6d5-2e5b-4dd8-9ed0-b916f1fb7a44 req-b0bada4d-4a78-4e56-9b3d-5a3c3ed56d3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:50 compute-0 nova_compute[259627]: 2025-10-14 08:56:50.932 2 DEBUG nova.compute.manager [req-3d37f6d5-2e5b-4dd8-9ed0-b916f1fb7a44 req-b0bada4d-4a78-4e56-9b3d-5a3c3ed56d3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Processing event network-vif-plugged-3af4eb0a-c48b-4857-8399-453429b6af53 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:56:50 compute-0 sudo[284612]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:50 compute-0 podman[284621]: 2025-10-14 08:56:50.935797083 +0000 UTC m=+0.053264459 container create b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 14 08:56:50 compute-0 systemd[1]: Started libpod-conmon-b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4.scope.
Oct 14 08:56:51 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:56:51 compute-0 podman[284621]: 2025-10-14 08:56:50.91082566 +0000 UTC m=+0.028293026 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:56:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3c99ed231e5c1fd99f5e9dee19ad83031abe3a98e852118ad1216c381a913f6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:56:51 compute-0 sudo[284654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:56:51 compute-0 sudo[284654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:51 compute-0 sudo[284654]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:51 compute-0 podman[284621]: 2025-10-14 08:56:51.025842043 +0000 UTC m=+0.143309479 container init b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 14 08:56:51 compute-0 podman[284621]: 2025-10-14 08:56:51.031318638 +0000 UTC m=+0.148786024 container start b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 14 08:56:51 compute-0 neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3[284678]: [NOTICE]   (284688) : New worker (284711) forked
Oct 14 08:56:51 compute-0 neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3[284678]: [NOTICE]   (284688) : Loading success.
Oct 14 08:56:51 compute-0 sudo[284686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:56:51 compute-0 sudo[284686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:51 compute-0 sudo[284686]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.102 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432211.1023536, 251ae181-b980-4338-a6b5-eee48450b510 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.103 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] VM Started (Lifecycle Event)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.105 2 DEBUG nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.107 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.110 2 INFO nova.virt.libvirt.driver [-] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Instance spawned successfully.
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.111 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.134 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:51 compute-0 sudo[284722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.138 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:56:51 compute-0 sudo[284722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.143 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.144 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.144 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.145 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.145 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.145 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.172 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.173 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432211.102583, 251ae181-b980-4338-a6b5-eee48450b510 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.173 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] VM Paused (Lifecycle Event)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.197 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.205 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "753ea698-6cc6-4a73-a0d2-1366e5374a9c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.205 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "753ea698-6cc6-4a73-a0d2-1366e5374a9c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.208 2 INFO nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Took 6.97 seconds to spawn the instance on the hypervisor.
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.208 2 DEBUG nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.213 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432211.1066892, 251ae181-b980-4338-a6b5-eee48450b510 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.213 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] VM Resumed (Lifecycle Event)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.238 2 DEBUG nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.241 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.243 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.288 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.305 2 INFO nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Took 8.04 seconds to build instance.
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.316 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.343 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.343 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.350 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.350 2 INFO nova.compute.claims [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:56:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct 14 08:56:51 compute-0 ceph-mon[74249]: pgmap v1143: 305 pgs: 305 active+clean; 181 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 182 op/s
Oct 14 08:56:51 compute-0 ceph-mon[74249]: osdmap e125: 3 total, 3 up, 3 in
Oct 14 08:56:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct 14 08:56:51 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.492 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 311 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 9.0 MiB/s rd, 16 MiB/s wr, 387 op/s
Oct 14 08:56:51 compute-0 podman[284786]: 2025-10-14 08:56:51.520939121 +0000 UTC m=+0.047693372 container create f464b8850b7f8e5ddcb4c4a41b37fdf0fb7ef91c9cb995c2327247dc1771f20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_turing, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 08:56:51 compute-0 systemd[1]: Started libpod-conmon-f464b8850b7f8e5ddcb4c4a41b37fdf0fb7ef91c9cb995c2327247dc1771f20b.scope.
Oct 14 08:56:51 compute-0 podman[284786]: 2025-10-14 08:56:51.496780248 +0000 UTC m=+0.023534529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:56:51 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:56:51 compute-0 podman[284786]: 2025-10-14 08:56:51.611240478 +0000 UTC m=+0.137994729 container init f464b8850b7f8e5ddcb4c4a41b37fdf0fb7ef91c9cb995c2327247dc1771f20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_turing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 08:56:51 compute-0 podman[284786]: 2025-10-14 08:56:51.619575613 +0000 UTC m=+0.146329834 container start f464b8850b7f8e5ddcb4c4a41b37fdf0fb7ef91c9cb995c2327247dc1771f20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_turing, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 08:56:51 compute-0 podman[284786]: 2025-10-14 08:56:51.624524875 +0000 UTC m=+0.151279106 container attach f464b8850b7f8e5ddcb4c4a41b37fdf0fb7ef91c9cb995c2327247dc1771f20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_turing, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 08:56:51 compute-0 awesome_turing[284803]: 167 167
Oct 14 08:56:51 compute-0 systemd[1]: libpod-f464b8850b7f8e5ddcb4c4a41b37fdf0fb7ef91c9cb995c2327247dc1771f20b.scope: Deactivated successfully.
Oct 14 08:56:51 compute-0 podman[284786]: 2025-10-14 08:56:51.630243185 +0000 UTC m=+0.156997426 container died f464b8850b7f8e5ddcb4c4a41b37fdf0fb7ef91c9cb995c2327247dc1771f20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 14 08:56:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-03f50a5a28da95121eb7137613496f9e275f5d38756c7c2aa58c5464276c0526-merged.mount: Deactivated successfully.
Oct 14 08:56:51 compute-0 podman[284786]: 2025-10-14 08:56:51.664446915 +0000 UTC m=+0.191201136 container remove f464b8850b7f8e5ddcb4c4a41b37fdf0fb7ef91c9cb995c2327247dc1771f20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image c5692065-49f8-45a6-8eac-e30026b3f690 could not be found.
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID c5692065-49f8-45a6-8eac-e30026b3f690
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver 
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver 
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image c5692065-49f8-45a6-8eac-e30026b3f690 could not be found.
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver 
Oct 14 08:56:51 compute-0 systemd[1]: libpod-conmon-f464b8850b7f8e5ddcb4c4a41b37fdf0fb7ef91c9cb995c2327247dc1771f20b.scope: Deactivated successfully.
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.718 2 DEBUG nova.storage.rbd_utils [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] removing snapshot(snap) on rbd image(c5692065-49f8-45a6-8eac-e30026b3f690) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 08:56:51 compute-0 nova_compute[259627]: 2025-10-14 08:56:51.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:51 compute-0 podman[284863]: 2025-10-14 08:56:51.866530677 +0000 UTC m=+0.045453817 container create 8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:56:51 compute-0 systemd[1]: Started libpod-conmon-8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde.scope.
Oct 14 08:56:51 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:56:51 compute-0 podman[284863]: 2025-10-14 08:56:51.851622541 +0000 UTC m=+0.030545701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:56:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e820943df39ec7e0af7fa8d5ae88ce8e422472d15ef52a7e5b95f628b36ed4ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:56:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e820943df39ec7e0af7fa8d5ae88ce8e422472d15ef52a7e5b95f628b36ed4ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:56:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e820943df39ec7e0af7fa8d5ae88ce8e422472d15ef52a7e5b95f628b36ed4ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:56:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e820943df39ec7e0af7fa8d5ae88ce8e422472d15ef52a7e5b95f628b36ed4ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:56:51 compute-0 podman[284863]: 2025-10-14 08:56:51.970616423 +0000 UTC m=+0.149539583 container init 8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 08:56:51 compute-0 podman[284863]: 2025-10-14 08:56:51.978771693 +0000 UTC m=+0.157694833 container start 8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:56:51 compute-0 podman[284863]: 2025-10-14 08:56:51.982440684 +0000 UTC m=+0.161363854 container attach 8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 08:56:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:56:52 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3437809994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.019 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.024 2 DEBUG nova.compute.provider_tree [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.047 2 DEBUG nova.scheduler.client.report [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.072 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.072 2 DEBUG nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.126 2 DEBUG nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.126 2 DEBUG nova.network.neutron [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.150 2 INFO nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.171 2 DEBUG nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.269 2 DEBUG nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.271 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.271 2 INFO nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Creating image(s)
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.298 2 DEBUG nova.storage.rbd_utils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.325 2 DEBUG nova.storage.rbd_utils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.347 2 DEBUG nova.storage.rbd_utils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.350 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.377 2 DEBUG nova.network.neutron [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.377 2 DEBUG nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.432 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.433 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.434 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.434 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.452 2 DEBUG nova.storage.rbd_utils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.455 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:56:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct 14 08:56:52 compute-0 ceph-mon[74249]: osdmap e126: 3 total, 3 up, 3 in
Oct 14 08:56:52 compute-0 ceph-mon[74249]: pgmap v1146: 305 pgs: 305 active+clean; 311 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 9.0 MiB/s rd, 16 MiB/s wr, 387 op/s
Oct 14 08:56:52 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3437809994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct 14 08:56:52 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct 14 08:56:52 compute-0 nova_compute[259627]: 2025-10-14 08:56:52.944 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]: {
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:         "osd_id": 2,
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:         "type": "bluestore"
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:     },
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:         "osd_id": 1,
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:         "type": "bluestore"
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:     },
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:         "osd_id": 0,
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:         "type": "bluestore"
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]:     }
Oct 14 08:56:53 compute-0 youthful_hamilton[284880]: }
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.046 2 WARNING nova.compute.manager [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Image not found during snapshot: nova.exception.ImageNotFound: Image c5692065-49f8-45a6-8eac-e30026b3f690 could not be found.
Oct 14 08:56:53 compute-0 systemd[1]: libpod-8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde.scope: Deactivated successfully.
Oct 14 08:56:53 compute-0 podman[284863]: 2025-10-14 08:56:53.05168435 +0000 UTC m=+1.230607530 container died 8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 08:56:53 compute-0 systemd[1]: libpod-8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde.scope: Consumed 1.010s CPU time.
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.056 2 DEBUG nova.storage.rbd_utils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] resizing rbd image 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:56:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e820943df39ec7e0af7fa8d5ae88ce8e422472d15ef52a7e5b95f628b36ed4ad-merged.mount: Deactivated successfully.
Oct 14 08:56:53 compute-0 podman[284863]: 2025-10-14 08:56:53.115110308 +0000 UTC m=+1.294033468 container remove 8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:56:53 compute-0 systemd[1]: libpod-conmon-8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde.scope: Deactivated successfully.
Oct 14 08:56:53 compute-0 sudo[284722]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.182 2 DEBUG nova.objects.instance [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lazy-loading 'migration_context' on Instance uuid 753ea698-6cc6-4a73-a0d2-1366e5374a9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:53 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:56:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 08:56:53 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:56:53 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev d91aea29-ef9c-425b-86b8-d9354c8200b9 does not exist
Oct 14 08:56:53 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 5359da39-87da-4cc7-8c9e-66b84b63cc7a does not exist
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.205 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.205 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Ensure instance console log exists: /var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.206 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.206 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.206 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.207 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.215 2 WARNING nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.220 2 DEBUG nova.compute.manager [req-c9030e4d-274c-484f-9c37-3f5edddbfcbb req-cd66ecd2-eb8c-402c-90f9-07644ae72e31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received event network-vif-plugged-3af4eb0a-c48b-4857-8399-453429b6af53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.220 2 DEBUG oslo_concurrency.lockutils [req-c9030e4d-274c-484f-9c37-3f5edddbfcbb req-cd66ecd2-eb8c-402c-90f9-07644ae72e31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "251ae181-b980-4338-a6b5-eee48450b510-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.221 2 DEBUG oslo_concurrency.lockutils [req-c9030e4d-274c-484f-9c37-3f5edddbfcbb req-cd66ecd2-eb8c-402c-90f9-07644ae72e31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.221 2 DEBUG oslo_concurrency.lockutils [req-c9030e4d-274c-484f-9c37-3f5edddbfcbb req-cd66ecd2-eb8c-402c-90f9-07644ae72e31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.221 2 DEBUG nova.compute.manager [req-c9030e4d-274c-484f-9c37-3f5edddbfcbb req-cd66ecd2-eb8c-402c-90f9-07644ae72e31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] No waiting events found dispatching network-vif-plugged-3af4eb0a-c48b-4857-8399-453429b6af53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.221 2 WARNING nova.compute.manager [req-c9030e4d-274c-484f-9c37-3f5edddbfcbb req-cd66ecd2-eb8c-402c-90f9-07644ae72e31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received unexpected event network-vif-plugged-3af4eb0a-c48b-4857-8399-453429b6af53 for instance with vm_state active and task_state None.
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.227 2 DEBUG nova.virt.libvirt.host [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.227 2 DEBUG nova.virt.libvirt.host [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.231 2 DEBUG nova.virt.libvirt.host [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.231 2 DEBUG nova.virt.libvirt.host [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.232 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.232 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.233 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.233 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.233 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.234 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.234 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.234 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.235 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.236 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.236 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.236 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.239 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:53 compute-0 sudo[285110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:56:53 compute-0 sudo[285110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:53 compute-0 sudo[285110]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:53 compute-0 sudo[285136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 08:56:53 compute-0 sudo[285136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:56:53 compute-0 sudo[285136]: pam_unix(sudo:session): session closed for user root
Oct 14 08:56:53 compute-0 ceph-mon[74249]: osdmap e127: 3 total, 3 up, 3 in
Oct 14 08:56:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:56:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:56:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 311 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 24 MiB/s wr, 574 op/s
Oct 14 08:56:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:56:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2909585213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.694 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.728 2 DEBUG nova.storage.rbd_utils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:53 compute-0 nova_compute[259627]: 2025-10-14 08:56:53.732 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:56:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2617580688' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:54 compute-0 nova_compute[259627]: 2025-10-14 08:56:54.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:54 compute-0 nova_compute[259627]: 2025-10-14 08:56:54.191 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:54 compute-0 nova_compute[259627]: 2025-10-14 08:56:54.193 2 DEBUG nova.objects.instance [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lazy-loading 'pci_devices' on Instance uuid 753ea698-6cc6-4a73-a0d2-1366e5374a9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:54 compute-0 nova_compute[259627]: 2025-10-14 08:56:54.208 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:56:54 compute-0 nova_compute[259627]:   <uuid>753ea698-6cc6-4a73-a0d2-1366e5374a9c</uuid>
Oct 14 08:56:54 compute-0 nova_compute[259627]:   <name>instance-00000010</name>
Oct 14 08:56:54 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:56:54 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:56:54 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <nova:name>tempest-LiveMigrationNegativeTest-server-101546050</nova:name>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:56:53</nova:creationTime>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:56:54 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:56:54 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:56:54 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:56:54 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:56:54 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:56:54 compute-0 nova_compute[259627]:         <nova:user uuid="6ecc59efebb941f4b0aa79b58a7e610e">tempest-LiveMigrationNegativeTest-588604906-project-member</nova:user>
Oct 14 08:56:54 compute-0 nova_compute[259627]:         <nova:project uuid="a618b00ff8c34f40bd31e4f56c019b1b">tempest-LiveMigrationNegativeTest-588604906</nova:project>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:56:54 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:56:54 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <system>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <entry name="serial">753ea698-6cc6-4a73-a0d2-1366e5374a9c</entry>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <entry name="uuid">753ea698-6cc6-4a73-a0d2-1366e5374a9c</entry>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     </system>
Oct 14 08:56:54 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:56:54 compute-0 nova_compute[259627]:   <os>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:   </os>
Oct 14 08:56:54 compute-0 nova_compute[259627]:   <features>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:   </features>
Oct 14 08:56:54 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:56:54 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:56:54 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk">
Oct 14 08:56:54 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       </source>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:56:54 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk.config">
Oct 14 08:56:54 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       </source>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:56:54 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c/console.log" append="off"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <video>
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     </video>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:56:54 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:56:54 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:56:54 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:56:54 compute-0 nova_compute[259627]: </domain>
Oct 14 08:56:54 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:56:54 compute-0 nova_compute[259627]: 2025-10-14 08:56:54.271 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:56:54 compute-0 nova_compute[259627]: 2025-10-14 08:56:54.272 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:56:54 compute-0 nova_compute[259627]: 2025-10-14 08:56:54.273 2 INFO nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Using config drive
Oct 14 08:56:54 compute-0 nova_compute[259627]: 2025-10-14 08:56:54.292 2 DEBUG nova.storage.rbd_utils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:54 compute-0 nova_compute[259627]: 2025-10-14 08:56:54.453 2 INFO nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Creating config drive at /var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c/disk.config
Oct 14 08:56:54 compute-0 nova_compute[259627]: 2025-10-14 08:56:54.459 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8mgz6ydt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:54 compute-0 ceph-mon[74249]: pgmap v1148: 305 pgs: 305 active+clean; 311 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 24 MiB/s wr, 574 op/s
Oct 14 08:56:54 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2909585213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:54 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2617580688' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:56:54 compute-0 nova_compute[259627]: 2025-10-14 08:56:54.586 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8mgz6ydt" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:54 compute-0 nova_compute[259627]: 2025-10-14 08:56:54.610 2 DEBUG nova.storage.rbd_utils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:54 compute-0 nova_compute[259627]: 2025-10-14 08:56:54.613 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c/disk.config 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:54 compute-0 nova_compute[259627]: 2025-10-14 08:56:54.743 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c/disk.config 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:54 compute-0 nova_compute[259627]: 2025-10-14 08:56:54.744 2 INFO nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Deleting local config drive /var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c/disk.config because it was imported into RBD.
Oct 14 08:56:54 compute-0 systemd-machined[214636]: New machine qemu-16-instance-00000010.
Oct 14 08:56:54 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Oct 14 08:56:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 293 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 20 MiB/s wr, 660 op/s
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.579 2 DEBUG oslo_concurrency.lockutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "5de76ef0-5c03-4b43-a691-c858cecd9e80" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.580 2 DEBUG oslo_concurrency.lockutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.581 2 DEBUG oslo_concurrency.lockutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.581 2 DEBUG oslo_concurrency.lockutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.581 2 DEBUG oslo_concurrency.lockutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.582 2 INFO nova.compute.manager [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Terminating instance
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.583 2 DEBUG nova.compute.manager [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:56:55 compute-0 kernel: tape0cae4c8-f6 (unregistering): left promiscuous mode
Oct 14 08:56:55 compute-0 NetworkManager[44885]: <info>  [1760432215.6262] device (tape0cae4c8-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:55 compute-0 ovn_controller[152662]: 2025-10-14T08:56:55Z|00092|binding|INFO|Releasing lport e0cae4c8-f654-471d-a9c1-c77a306f1edf from this chassis (sb_readonly=0)
Oct 14 08:56:55 compute-0 ovn_controller[152662]: 2025-10-14T08:56:55Z|00093|binding|INFO|Setting lport e0cae4c8-f654-471d-a9c1-c77a306f1edf down in Southbound
Oct 14 08:56:55 compute-0 ovn_controller[152662]: 2025-10-14T08:56:55Z|00094|binding|INFO|Removing iface tape0cae4c8-f6 ovn-installed in OVS
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:55.647 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:3d:9e 10.100.0.5'], port_security=['fa:16:3e:bc:3d:9e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5de76ef0-5c03-4b43-a691-c858cecd9e80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58d74886-d603-4fb5-b8ff-9c184284bdce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f24bbeb2f91141e294590ca2afc5ed42', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd818ab3d-f5ea-4d77-bc47-7efe2295e146', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1adf6e68-9c1a-4ee7-a829-03bbd6a5ae48, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e0cae4c8-f654-471d-a9c1-c77a306f1edf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:56:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:55.648 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e0cae4c8-f654-471d-a9c1-c77a306f1edf in datapath 58d74886-d603-4fb5-b8ff-9c184284bdce unbound from our chassis
Oct 14 08:56:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:55.650 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58d74886-d603-4fb5-b8ff-9c184284bdce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:56:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:55.653 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca8af4e-c3a5-435c-a8c0-3753bffb18e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:55.655 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce namespace which is not needed anymore
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:55 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Oct 14 08:56:55 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 12.674s CPU time.
Oct 14 08:56:55 compute-0 systemd-machined[214636]: Machine qemu-14-instance-0000000e terminated.
Oct 14 08:56:55 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[283074]: [NOTICE]   (283078) : haproxy version is 2.8.14-c23fe91
Oct 14 08:56:55 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[283074]: [NOTICE]   (283078) : path to executable is /usr/sbin/haproxy
Oct 14 08:56:55 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[283074]: [WARNING]  (283078) : Exiting Master process...
Oct 14 08:56:55 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[283074]: [WARNING]  (283078) : Exiting Master process...
Oct 14 08:56:55 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[283074]: [ALERT]    (283078) : Current worker (283080) exited with code 143 (Terminated)
Oct 14 08:56:55 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[283074]: [WARNING]  (283078) : All workers exited. Exiting... (0)
Oct 14 08:56:55 compute-0 systemd[1]: libpod-4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f.scope: Deactivated successfully.
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:55 compute-0 podman[285361]: 2025-10-14 08:56:55.816106593 +0000 UTC m=+0.070280887 container died 4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.833 2 INFO nova.virt.libvirt.driver [-] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Instance destroyed successfully.
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.834 2 DEBUG nova.objects.instance [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lazy-loading 'resources' on Instance uuid 5de76ef0-5c03-4b43-a691-c858cecd9e80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.849 2 DEBUG nova.virt.libvirt.vif [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:56:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-526971628',display_name='tempest-ImagesOneServerNegativeTestJSON-server-526971628',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-526971628',id=14,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:56:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f24bbeb2f91141e294590ca2afc5ed42',ramdisk_id='',reservation_id='r-6k171gzh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-531836018',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-531836018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:56:53Z,user_data=None,user_id='aafd6ad40c944c3eb14e7fbf454040c3',uuid=5de76ef0-5c03-4b43-a691-c858cecd9e80,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.850 2 DEBUG nova.network.os_vif_util [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converting VIF {"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.852 2 DEBUG nova.network.os_vif_util [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:3d:9e,bridge_name='br-int',has_traffic_filtering=True,id=e0cae4c8-f654-471d-a9c1-c77a306f1edf,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0cae4c8-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.852 2 DEBUG os_vif [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:3d:9e,bridge_name='br-int',has_traffic_filtering=True,id=e0cae4c8-f654-471d-a9c1-c77a306f1edf,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0cae4c8-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.854 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0cae4c8-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.864 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432215.8647397, 753ea698-6cc6-4a73-a0d2-1366e5374a9c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.865 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] VM Resumed (Lifecycle Event)
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.867 2 DEBUG nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.867 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.886 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.889 2 INFO nova.virt.libvirt.driver [-] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Instance spawned successfully.
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.890 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.892 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f-userdata-shm.mount: Deactivated successfully.
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea9fec2e8f9314ad08f9029050299940c2c033e7affbb383b843a03d80cd24df-merged.mount: Deactivated successfully.
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.912 2 INFO os_vif [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:3d:9e,bridge_name='br-int',has_traffic_filtering=True,id=e0cae4c8-f654-471d-a9c1-c77a306f1edf,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0cae4c8-f6')
Oct 14 08:56:55 compute-0 podman[285361]: 2025-10-14 08:56:55.921095501 +0000 UTC m=+0.175269755 container cleanup 4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 14 08:56:55 compute-0 systemd[1]: libpod-conmon-4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f.scope: Deactivated successfully.
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.933 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.933 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432215.8653927, 753ea698-6cc6-4a73-a0d2-1366e5374a9c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.933 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] VM Started (Lifecycle Event)
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.945 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.945 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.946 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.946 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.946 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.947 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.957 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.960 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:56:55 compute-0 nova_compute[259627]: 2025-10-14 08:56:55.986 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:56:56 compute-0 podman[285412]: 2025-10-14 08:56:56.008074427 +0000 UTC m=+0.050747517 container remove 4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.011 2 INFO nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Took 3.74 seconds to spawn the instance on the hypervisor.
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.011 2 DEBUG nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:56:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:56.014 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[56a397ae-fd45-49e4-af2a-f9d4eb66ee47]: (4, ('Tue Oct 14 08:56:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce (4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f)\n4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f\nTue Oct 14 08:56:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce (4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f)\n4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:56.016 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0ece2c3a-c73d-4337-bc82-6eb60c7049d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:56.016 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58d74886-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:56 compute-0 kernel: tap58d74886-d0: left promiscuous mode
Oct 14 08:56:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:56.028 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b0973e95-6c8a-4726-b7b9-dfa009a2d4bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:56.055 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7df35203-fca2-4ef3-8194-4f592512bca3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:56.057 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0df2aadf-768b-4156-b0a2-55cf3649589a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:56.076 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a06a0d36-242f-4f08-b859-18a7f5e7d095]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597474, 'reachable_time': 38692, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285430, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d58d74886\x2dd603\x2d4fb5\x2db8ff\x2d9c184284bdce.mount: Deactivated successfully.
Oct 14 08:56:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:56.081 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:56:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:56:56.081 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[621f13d2-b946-4250-be65-6860cee01ec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.110 2 INFO nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Took 4.80 seconds to build instance.
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.133 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "753ea698-6cc6-4a73-a0d2-1366e5374a9c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.279 2 INFO nova.virt.libvirt.driver [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Deleting instance files /var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80_del
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.280 2 INFO nova.virt.libvirt.driver [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Deletion of /var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80_del complete
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.361 2 INFO nova.compute.manager [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.362 2 DEBUG oslo.service.loopingcall [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.363 2 DEBUG nova.compute.manager [-] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.363 2 DEBUG nova.network.neutron [-] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.372 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.372 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.388 2 DEBUG nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.457 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.457 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.464 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.464 2 INFO nova.compute.claims [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.493 2 DEBUG nova.compute.manager [req-f4c96c99-e489-4a5a-8792-d2879de90d14 req-13968f57-add7-42a4-86f2-3736aa7168fa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Received event network-vif-unplugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.494 2 DEBUG oslo_concurrency.lockutils [req-f4c96c99-e489-4a5a-8792-d2879de90d14 req-13968f57-add7-42a4-86f2-3736aa7168fa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.494 2 DEBUG oslo_concurrency.lockutils [req-f4c96c99-e489-4a5a-8792-d2879de90d14 req-13968f57-add7-42a4-86f2-3736aa7168fa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.495 2 DEBUG oslo_concurrency.lockutils [req-f4c96c99-e489-4a5a-8792-d2879de90d14 req-13968f57-add7-42a4-86f2-3736aa7168fa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.495 2 DEBUG nova.compute.manager [req-f4c96c99-e489-4a5a-8792-d2879de90d14 req-13968f57-add7-42a4-86f2-3736aa7168fa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] No waiting events found dispatching network-vif-unplugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.495 2 DEBUG nova.compute.manager [req-f4c96c99-e489-4a5a-8792-d2879de90d14 req-13968f57-add7-42a4-86f2-3736aa7168fa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Received event network-vif-unplugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 08:56:56 compute-0 ceph-mon[74249]: pgmap v1149: 305 pgs: 305 active+clean; 293 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 20 MiB/s wr, 660 op/s
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.623 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:56 compute-0 ovn_controller[152662]: 2025-10-14T08:56:56Z|00095|binding|INFO|Releasing lport e772e4c9-7f41-4f58-a7a3-269a843bc77c from this chassis (sb_readonly=0)
Oct 14 08:56:56 compute-0 NetworkManager[44885]: <info>  [1760432216.8238] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Oct 14 08:56:56 compute-0 NetworkManager[44885]: <info>  [1760432216.8247] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:56 compute-0 ovn_controller[152662]: 2025-10-14T08:56:56Z|00096|binding|INFO|Releasing lport 1eaf3c85-b7b7-4dd7-ad1f-33385c25330b from this chassis (sb_readonly=0)
Oct 14 08:56:56 compute-0 ovn_controller[152662]: 2025-10-14T08:56:56Z|00097|binding|INFO|Releasing lport e772e4c9-7f41-4f58-a7a3-269a843bc77c from this chassis (sb_readonly=0)
Oct 14 08:56:56 compute-0 ovn_controller[152662]: 2025-10-14T08:56:56Z|00098|binding|INFO|Releasing lport 1eaf3c85-b7b7-4dd7-ad1f-33385c25330b from this chassis (sb_readonly=0)
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:56 compute-0 nova_compute[259627]: 2025-10-14 08:56:56.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:56:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:56:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3386572064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.079 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.084 2 DEBUG nova.compute.provider_tree [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.112 2 DEBUG nova.scheduler.client.report [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.120 2 DEBUG nova.network.neutron [-] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.148 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.149 2 DEBUG nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.156 2 INFO nova.compute.manager [-] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Took 0.79 seconds to deallocate network for instance.
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.298 2 DEBUG oslo_concurrency.lockutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.299 2 DEBUG oslo_concurrency.lockutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.305 2 DEBUG nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.305 2 DEBUG nova.network.neutron [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.318 2 DEBUG nova.compute.manager [req-81ae9d67-3798-468c-9ed8-2b2b72999eb2 req-bd4c7ecb-4c44-4427-8b19-01c95e0d4050 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received event network-changed-3af4eb0a-c48b-4857-8399-453429b6af53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.320 2 DEBUG nova.compute.manager [req-81ae9d67-3798-468c-9ed8-2b2b72999eb2 req-bd4c7ecb-4c44-4427-8b19-01c95e0d4050 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Refreshing instance network info cache due to event network-changed-3af4eb0a-c48b-4857-8399-453429b6af53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.321 2 DEBUG oslo_concurrency.lockutils [req-81ae9d67-3798-468c-9ed8-2b2b72999eb2 req-bd4c7ecb-4c44-4427-8b19-01c95e0d4050 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.321 2 DEBUG oslo_concurrency.lockutils [req-81ae9d67-3798-468c-9ed8-2b2b72999eb2 req-bd4c7ecb-4c44-4427-8b19-01c95e0d4050 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.321 2 DEBUG nova.network.neutron [req-81ae9d67-3798-468c-9ed8-2b2b72999eb2 req-bd4c7ecb-4c44-4427-8b19-01c95e0d4050 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Refreshing network info cache for port 3af4eb0a-c48b-4857-8399-453429b6af53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.323 2 INFO nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.343 2 DEBUG nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.425 2 DEBUG nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.426 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.427 2 INFO nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Creating image(s)
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.446 2 DEBUG nova.storage.rbd_utils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.467 2 DEBUG nova.storage.rbd_utils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:56:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct 14 08:56:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct 14 08:56:57 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.496 2 DEBUG nova.storage.rbd_utils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.501 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 293 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 271 op/s
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.534 2 DEBUG oslo_concurrency.processutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.566 2 DEBUG nova.policy [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f50e2582d63041b682c71a379f763c0e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9bf65c21e4104af6981b071561617657', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:56:57 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3386572064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:57 compute-0 ceph-mon[74249]: osdmap e128: 3 total, 3 up, 3 in
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.572 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.577 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.578 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.578 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.604 2 DEBUG nova.storage.rbd_utils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.607 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:56:57 compute-0 nova_compute[259627]: 2025-10-14 08:56:57.933 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.001 2 DEBUG nova.storage.rbd_utils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] resizing rbd image 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:56:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:56:58 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3531381479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.085 2 DEBUG nova.objects.instance [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lazy-loading 'migration_context' on Instance uuid 01db05f6-07fb-41b5-8aaf-27ad5712fcda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.088 2 DEBUG oslo_concurrency.processutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.093 2 DEBUG nova.compute.provider_tree [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.102 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.102 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Ensure instance console log exists: /var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.103 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.103 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.103 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.104 2 DEBUG nova.scheduler.client.report [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.138 2 DEBUG oslo_concurrency.lockutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.178 2 INFO nova.scheduler.client.report [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Deleted allocations for instance 5de76ef0-5c03-4b43-a691-c858cecd9e80
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.246 2 DEBUG oslo_concurrency.lockutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.443 2 DEBUG nova.network.neutron [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Successfully created port: 499f3731-d66b-4964-b5a8-387adacf5166 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:56:58 compute-0 ceph-mon[74249]: pgmap v1151: 305 pgs: 305 active+clean; 293 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 271 op/s
Oct 14 08:56:58 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3531381479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:56:58 compute-0 podman[285643]: 2025-10-14 08:56:58.648005542 +0000 UTC m=+0.063491060 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid)
Oct 14 08:56:58 compute-0 podman[285642]: 2025-10-14 08:56:58.654910202 +0000 UTC m=+0.072162403 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.664 2 DEBUG nova.compute.manager [req-ecb96d48-4d47-4d19-9928-b1d5955d64d7 req-01997485-ae3e-44b5-b7db-c3ee30e4e46b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Received event network-vif-plugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.665 2 DEBUG oslo_concurrency.lockutils [req-ecb96d48-4d47-4d19-9928-b1d5955d64d7 req-01997485-ae3e-44b5-b7db-c3ee30e4e46b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.666 2 DEBUG oslo_concurrency.lockutils [req-ecb96d48-4d47-4d19-9928-b1d5955d64d7 req-01997485-ae3e-44b5-b7db-c3ee30e4e46b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.666 2 DEBUG oslo_concurrency.lockutils [req-ecb96d48-4d47-4d19-9928-b1d5955d64d7 req-01997485-ae3e-44b5-b7db-c3ee30e4e46b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.666 2 DEBUG nova.compute.manager [req-ecb96d48-4d47-4d19-9928-b1d5955d64d7 req-01997485-ae3e-44b5-b7db-c3ee30e4e46b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] No waiting events found dispatching network-vif-plugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.666 2 WARNING nova.compute.manager [req-ecb96d48-4d47-4d19-9928-b1d5955d64d7 req-01997485-ae3e-44b5-b7db-c3ee30e4e46b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Received unexpected event network-vif-plugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf for instance with vm_state deleted and task_state None.
Oct 14 08:56:58 compute-0 nova_compute[259627]: 2025-10-14 08:56:58.667 2 DEBUG nova.compute.manager [req-ecb96d48-4d47-4d19-9928-b1d5955d64d7 req-01997485-ae3e-44b5-b7db-c3ee30e4e46b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Received event network-vif-deleted-e0cae4c8-f654-471d-a9c1-c77a306f1edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:56:59 compute-0 nova_compute[259627]: 2025-10-14 08:56:59.148 2 DEBUG nova.network.neutron [req-81ae9d67-3798-468c-9ed8-2b2b72999eb2 req-bd4c7ecb-4c44-4427-8b19-01c95e0d4050 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Updated VIF entry in instance network info cache for port 3af4eb0a-c48b-4857-8399-453429b6af53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:56:59 compute-0 nova_compute[259627]: 2025-10-14 08:56:59.149 2 DEBUG nova.network.neutron [req-81ae9d67-3798-468c-9ed8-2b2b72999eb2 req-bd4c7ecb-4c44-4427-8b19-01c95e0d4050 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Updating instance_info_cache with network_info: [{"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:56:59 compute-0 nova_compute[259627]: 2025-10-14 08:56:59.170 2 DEBUG oslo_concurrency.lockutils [req-81ae9d67-3798-468c-9ed8-2b2b72999eb2 req-bd4c7ecb-4c44-4427-8b19-01c95e0d4050 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:56:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 293 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 204 op/s
Oct 14 08:56:59 compute-0 nova_compute[259627]: 2025-10-14 08:56:59.531 2 DEBUG nova.network.neutron [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Successfully updated port: 499f3731-d66b-4964-b5a8-387adacf5166 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:56:59 compute-0 nova_compute[259627]: 2025-10-14 08:56:59.558 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:56:59 compute-0 nova_compute[259627]: 2025-10-14 08:56:59.558 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquired lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:56:59 compute-0 nova_compute[259627]: 2025-10-14 08:56:59.559 2 DEBUG nova.network.neutron [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:56:59 compute-0 nova_compute[259627]: 2025-10-14 08:56:59.752 2 DEBUG nova.network.neutron [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:56:59 compute-0 nova_compute[259627]: 2025-10-14 08:56:59.855 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "45f3b13d-65b1-4bbf-8192-7b842f616b4d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:59 compute-0 nova_compute[259627]: 2025-10-14 08:56:59.856 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "45f3b13d-65b1-4bbf-8192-7b842f616b4d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:59 compute-0 nova_compute[259627]: 2025-10-14 08:56:59.876 2 DEBUG nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:56:59 compute-0 nova_compute[259627]: 2025-10-14 08:56:59.954 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:56:59 compute-0 nova_compute[259627]: 2025-10-14 08:56:59.955 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:56:59 compute-0 nova_compute[259627]: 2025-10-14 08:56:59.965 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:56:59 compute-0 nova_compute[259627]: 2025-10-14 08:56:59.965 2 INFO nova.compute.claims [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.172 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.585 2 DEBUG nova.network.neutron [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Updating instance_info_cache with network_info: [{"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:00 compute-0 ceph-mon[74249]: pgmap v1152: 305 pgs: 305 active+clean; 293 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 204 op/s
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.616 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Releasing lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.617 2 DEBUG nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Instance network_info: |[{"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.621 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Start _get_guest_xml network_info=[{"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.627 2 WARNING nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.632 2 DEBUG nova.virt.libvirt.host [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.633 2 DEBUG nova.virt.libvirt.host [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.637 2 DEBUG nova.virt.libvirt.host [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.638 2 DEBUG nova.virt.libvirt.host [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.639 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.640 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.641 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.641 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.642 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.642 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.643 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.644 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.644 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:57:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:57:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/818534980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.645 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.646 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.647 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.656 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.696 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.704 2 DEBUG nova.compute.provider_tree [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.723 2 DEBUG nova.scheduler.client.report [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.751 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.753 2 DEBUG nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.794 2 DEBUG nova.compute.manager [req-7d0ddd5d-105f-45a4-9310-9f18e1590986 req-d7e2ea42-e567-4571-ab70-921c5f10fbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-changed-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.796 2 DEBUG nova.compute.manager [req-7d0ddd5d-105f-45a4-9310-9f18e1590986 req-d7e2ea42-e567-4571-ab70-921c5f10fbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Refreshing instance network info cache due to event network-changed-499f3731-d66b-4964-b5a8-387adacf5166. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.797 2 DEBUG oslo_concurrency.lockutils [req-7d0ddd5d-105f-45a4-9310-9f18e1590986 req-d7e2ea42-e567-4571-ab70-921c5f10fbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.797 2 DEBUG oslo_concurrency.lockutils [req-7d0ddd5d-105f-45a4-9310-9f18e1590986 req-d7e2ea42-e567-4571-ab70-921c5f10fbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.798 2 DEBUG nova.network.neutron [req-7d0ddd5d-105f-45a4-9310-9f18e1590986 req-d7e2ea42-e567-4571-ab70-921c5f10fbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Refreshing network info cache for port 499f3731-d66b-4964-b5a8-387adacf5166 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.808 2 DEBUG nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.808 2 DEBUG nova.network.neutron [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.833 2 INFO nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.853 2 DEBUG nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.949 2 DEBUG nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.950 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.950 2 INFO nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Creating image(s)
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.973 2 DEBUG nova.storage.rbd_utils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:00 compute-0 nova_compute[259627]: 2025-10-14 08:57:00.994 2 DEBUG nova.storage.rbd_utils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.013 2 DEBUG nova.storage.rbd_utils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.024 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.079 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.079 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.080 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.080 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.100 2 DEBUG nova.storage.rbd_utils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.104 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:57:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3059173365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.127 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.147 2 DEBUG nova.storage.rbd_utils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.150 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.175 2 DEBUG nova.network.neutron [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.176 2 DEBUG nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.348 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.244s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.421 2 DEBUG nova.storage.rbd_utils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] resizing rbd image 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:57:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 260 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.8 MiB/s wr, 345 op/s
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.538 2 DEBUG nova.objects.instance [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lazy-loading 'migration_context' on Instance uuid 45f3b13d-65b1-4bbf-8192-7b842f616b4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.563 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.564 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Ensure instance console log exists: /var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.564 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.565 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.565 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.567 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.573 2 WARNING nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.577 2 DEBUG nova.virt.libvirt.host [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.578 2 DEBUG nova.virt.libvirt.host [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.582 2 DEBUG nova.virt.libvirt.host [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.583 2 DEBUG nova.virt.libvirt.host [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.583 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.584 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.584 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.584 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.585 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.585 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.585 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.585 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.586 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.586 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.587 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.587 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.590 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:01 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/818534980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:01 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3059173365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:57:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3585421242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.639 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.641 2 DEBUG nova.virt.libvirt.vif [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:56:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1349611853',display_name='tempest-SecurityGroupsTestJSON-server-1349611853',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1349611853',id=17,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bf65c21e4104af6981b071561617657',ramdisk_id='',reservation_id='r-qk8wy1ld',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-663845074',owner_user_name='tempest-SecurityGroupsTestJSON-663845074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:56:57Z,user_data=None,user_id='f50e2582d63041b682c71a379f763c0e',uuid=01db05f6-07fb-41b5-8aaf-27ad5712fcda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.642 2 DEBUG nova.network.os_vif_util [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converting VIF {"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.643 2 DEBUG nova.network.os_vif_util [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.644 2 DEBUG nova.objects.instance [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lazy-loading 'pci_devices' on Instance uuid 01db05f6-07fb-41b5-8aaf-27ad5712fcda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.656 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:57:01 compute-0 nova_compute[259627]:   <uuid>01db05f6-07fb-41b5-8aaf-27ad5712fcda</uuid>
Oct 14 08:57:01 compute-0 nova_compute[259627]:   <name>instance-00000011</name>
Oct 14 08:57:01 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:57:01 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:57:01 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1349611853</nova:name>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:57:00</nova:creationTime>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:57:01 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:57:01 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:57:01 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:57:01 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:57:01 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:57:01 compute-0 nova_compute[259627]:         <nova:user uuid="f50e2582d63041b682c71a379f763c0e">tempest-SecurityGroupsTestJSON-663845074-project-member</nova:user>
Oct 14 08:57:01 compute-0 nova_compute[259627]:         <nova:project uuid="9bf65c21e4104af6981b071561617657">tempest-SecurityGroupsTestJSON-663845074</nova:project>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:57:01 compute-0 nova_compute[259627]:         <nova:port uuid="499f3731-d66b-4964-b5a8-387adacf5166">
Oct 14 08:57:01 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:57:01 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:57:01 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <system>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <entry name="serial">01db05f6-07fb-41b5-8aaf-27ad5712fcda</entry>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <entry name="uuid">01db05f6-07fb-41b5-8aaf-27ad5712fcda</entry>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     </system>
Oct 14 08:57:01 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:57:01 compute-0 nova_compute[259627]:   <os>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:   </os>
Oct 14 08:57:01 compute-0 nova_compute[259627]:   <features>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:   </features>
Oct 14 08:57:01 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:57:01 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:57:01 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk">
Oct 14 08:57:01 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       </source>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:57:01 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk.config">
Oct 14 08:57:01 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       </source>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:57:01 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:58:ee:87"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <target dev="tap499f3731-d6"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda/console.log" append="off"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <video>
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     </video>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:57:01 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:57:01 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:57:01 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:57:01 compute-0 nova_compute[259627]: </domain>
Oct 14 08:57:01 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.658 2 DEBUG nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Preparing to wait for external event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.659 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.659 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.659 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.662 2 DEBUG nova.virt.libvirt.vif [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:56:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1349611853',display_name='tempest-SecurityGroupsTestJSON-server-1349611853',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1349611853',id=17,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bf65c21e4104af6981b071561617657',ramdisk_id='',reservation_id='r-qk8wy1ld',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-663845074',owner_user_name='tempest-SecurityGroupsTestJSON-663845074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:56:57Z,user_data=None,user_id='f50e2582d63041b682c71a379f763c0e',uuid=01db05f6-07fb-41b5-8aaf-27ad5712fcda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.662 2 DEBUG nova.network.os_vif_util [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converting VIF {"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.663 2 DEBUG nova.network.os_vif_util [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.664 2 DEBUG os_vif [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.665 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.665 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.668 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap499f3731-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.669 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap499f3731-d6, col_values=(('external_ids', {'iface-id': '499f3731-d66b-4964-b5a8-387adacf5166', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:ee:87', 'vm-uuid': '01db05f6-07fb-41b5-8aaf-27ad5712fcda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:01 compute-0 NetworkManager[44885]: <info>  [1760432221.6715] manager: (tap499f3731-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.681 2 INFO os_vif [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6')
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.741 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.742 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.743 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] No VIF found with MAC fa:16:3e:58:ee:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.744 2 INFO nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Using config drive
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.770 2 DEBUG nova.storage.rbd_utils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:01 compute-0 nova_compute[259627]: 2025-10-14 08:57:01.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.000 2 DEBUG nova.network.neutron [req-7d0ddd5d-105f-45a4-9310-9f18e1590986 req-d7e2ea42-e567-4571-ab70-921c5f10fbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Updated VIF entry in instance network info cache for port 499f3731-d66b-4964-b5a8-387adacf5166. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.001 2 DEBUG nova.network.neutron [req-7d0ddd5d-105f-45a4-9310-9f18e1590986 req-d7e2ea42-e567-4571-ab70-921c5f10fbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Updating instance_info_cache with network_info: [{"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.016 2 DEBUG oslo_concurrency.lockutils [req-7d0ddd5d-105f-45a4-9310-9f18e1590986 req-d7e2ea42-e567-4571-ab70-921c5f10fbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:57:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:57:02 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1243070830' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.062 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.083 2 DEBUG nova.storage.rbd_utils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.087 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.108 2 INFO nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Creating config drive at /var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda/disk.config
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.113 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsw3uglaq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.239 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsw3uglaq" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.259 2 DEBUG nova.storage.rbd_utils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.263 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda/disk.config 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.420 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda/disk.config 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.421 2 INFO nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Deleting local config drive /var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda/disk.config because it was imported into RBD.
Oct 14 08:57:02 compute-0 kernel: tap499f3731-d6: entered promiscuous mode
Oct 14 08:57:02 compute-0 NetworkManager[44885]: <info>  [1760432222.4638] manager: (tap499f3731-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Oct 14 08:57:02 compute-0 ovn_controller[152662]: 2025-10-14T08:57:02Z|00099|binding|INFO|Claiming lport 499f3731-d66b-4964-b5a8-387adacf5166 for this chassis.
Oct 14 08:57:02 compute-0 ovn_controller[152662]: 2025-10-14T08:57:02Z|00100|binding|INFO|499f3731-d66b-4964-b5a8-387adacf5166: Claiming fa:16:3e:58:ee:87 10.100.0.5
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:57:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.472 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:ee:87 10.100.0.5'], port_security=['fa:16:3e:58:ee:87 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '01db05f6-07fb-41b5-8aaf-27ad5712fcda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bf65c21e4104af6981b071561617657', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e50f77b0-42a8-4edd-9a84-05435c5fb458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2c96ef8-3846-498d-b4a9-f6fd46fb5d04, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=499f3731-d66b-4964-b5a8-387adacf5166) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:57:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.474 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 499f3731-d66b-4964-b5a8-387adacf5166 in datapath 58ff48d6-a644-40e6-8fc9-ee19b4354df9 bound to our chassis
Oct 14 08:57:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.475 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58ff48d6-a644-40e6-8fc9-ee19b4354df9
Oct 14 08:57:02 compute-0 ovn_controller[152662]: 2025-10-14T08:57:02Z|00101|binding|INFO|Setting lport 499f3731-d66b-4964-b5a8-387adacf5166 ovn-installed in OVS
Oct 14 08:57:02 compute-0 ovn_controller[152662]: 2025-10-14T08:57:02Z|00102|binding|INFO|Setting lport 499f3731-d66b-4964-b5a8-387adacf5166 up in Southbound
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:57:02 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/290988960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.494 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[85f942ed-58c7-47f1-8d9f-81d6c0825b67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:02 compute-0 systemd-udevd[286063]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:57:02 compute-0 systemd-machined[214636]: New machine qemu-17-instance-00000011.
Oct 14 08:57:02 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.512 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.515 2 DEBUG nova.objects.instance [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lazy-loading 'pci_devices' on Instance uuid 45f3b13d-65b1-4bbf-8192-7b842f616b4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:02 compute-0 NetworkManager[44885]: <info>  [1760432222.5247] device (tap499f3731-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:57:02 compute-0 NetworkManager[44885]: <info>  [1760432222.5257] device (tap499f3731-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.532 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:57:02 compute-0 nova_compute[259627]:   <uuid>45f3b13d-65b1-4bbf-8192-7b842f616b4d</uuid>
Oct 14 08:57:02 compute-0 nova_compute[259627]:   <name>instance-00000012</name>
Oct 14 08:57:02 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:57:02 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:57:02 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <nova:name>tempest-LiveMigrationNegativeTest-server-1263596378</nova:name>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:57:01</nova:creationTime>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:57:02 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:57:02 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:57:02 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:57:02 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:57:02 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:57:02 compute-0 nova_compute[259627]:         <nova:user uuid="6ecc59efebb941f4b0aa79b58a7e610e">tempest-LiveMigrationNegativeTest-588604906-project-member</nova:user>
Oct 14 08:57:02 compute-0 nova_compute[259627]:         <nova:project uuid="a618b00ff8c34f40bd31e4f56c019b1b">tempest-LiveMigrationNegativeTest-588604906</nova:project>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:57:02 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:57:02 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <system>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <entry name="serial">45f3b13d-65b1-4bbf-8192-7b842f616b4d</entry>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <entry name="uuid">45f3b13d-65b1-4bbf-8192-7b842f616b4d</entry>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     </system>
Oct 14 08:57:02 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:57:02 compute-0 nova_compute[259627]:   <os>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:   </os>
Oct 14 08:57:02 compute-0 nova_compute[259627]:   <features>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:   </features>
Oct 14 08:57:02 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:57:02 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:57:02 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk">
Oct 14 08:57:02 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       </source>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:57:02 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk.config">
Oct 14 08:57:02 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       </source>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:57:02 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d/console.log" append="off"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <video>
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     </video>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:57:02 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:57:02 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:57:02 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:57:02 compute-0 nova_compute[259627]: </domain>
Oct 14 08:57:02 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:57:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.538 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c4878e76-bdc1-4869-8e62-50530a1de648]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.544 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3043d0f5-9e28-4f13-ae50-120ea05acb7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.576 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[55040020-5bef-45e6-b7b4-a336d39270a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.582 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.583 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.584 2 INFO nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Using config drive
Oct 14 08:57:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.593 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9acbfbbb-c52a-4b11-afae-1ea0e600d09f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58ff48d6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:28:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597136, 'reachable_time': 43498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286078, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:02 compute-0 ceph-mon[74249]: pgmap v1153: 305 pgs: 305 active+clean; 260 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.8 MiB/s wr, 345 op/s
Oct 14 08:57:02 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3585421242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:02 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1243070830' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:02 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/290988960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.608 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a3ee0d-e530-4903-97f0-9b8608ef96ad]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap58ff48d6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597151, 'tstamp': 597151}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286093, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap58ff48d6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597154, 'tstamp': 597154}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286093, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.610 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58ff48d6-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58ff48d6-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:57:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58ff48d6-a0, col_values=(('external_ids', {'iface-id': '1eaf3c85-b7b7-4dd7-ad1f-33385c25330b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.614 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.616 2 DEBUG nova.storage.rbd_utils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:57:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:57:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:57:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:57:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:57:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:57:02 compute-0 ovn_controller[152662]: 2025-10-14T08:57:02Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:99:88 10.100.0.13
Oct 14 08:57:02 compute-0 ovn_controller[152662]: 2025-10-14T08:57:02Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:99:88 10.100.0.13
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.976 2 INFO nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Creating config drive at /var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d/disk.config
Oct 14 08:57:02 compute-0 nova_compute[259627]: 2025-10-14 08:57:02.981 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_c09z7_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.114 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_c09z7_" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.141 2 DEBUG nova.storage.rbd_utils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.145 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d/disk.config 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.293 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d/disk.config 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.294 2 INFO nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Deleting local config drive /var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d/disk.config because it was imported into RBD.
Oct 14 08:57:03 compute-0 systemd-machined[214636]: New machine qemu-18-instance-00000012.
Oct 14 08:57:03 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000012.
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.430 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432223.4296439, 01db05f6-07fb-41b5-8aaf-27ad5712fcda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.431 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] VM Started (Lifecycle Event)
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.452 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.457 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432223.4304578, 01db05f6-07fb-41b5-8aaf-27ad5712fcda => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.457 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] VM Paused (Lifecycle Event)
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.485 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.489 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:57:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 260 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 311 op/s
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.513 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.608 2 DEBUG nova.compute.manager [req-07f0bdf3-d808-4e28-a67a-f0e212011568 req-f48a8dc3-a0fa-4283-a4a6-926c72c069ea 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.609 2 DEBUG oslo_concurrency.lockutils [req-07f0bdf3-d808-4e28-a67a-f0e212011568 req-f48a8dc3-a0fa-4283-a4a6-926c72c069ea 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.610 2 DEBUG oslo_concurrency.lockutils [req-07f0bdf3-d808-4e28-a67a-f0e212011568 req-f48a8dc3-a0fa-4283-a4a6-926c72c069ea 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.610 2 DEBUG oslo_concurrency.lockutils [req-07f0bdf3-d808-4e28-a67a-f0e212011568 req-f48a8dc3-a0fa-4283-a4a6-926c72c069ea 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.610 2 DEBUG nova.compute.manager [req-07f0bdf3-d808-4e28-a67a-f0e212011568 req-f48a8dc3-a0fa-4283-a4a6-926c72c069ea 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Processing event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.611 2 DEBUG nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.613 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432223.6132832, 01db05f6-07fb-41b5-8aaf-27ad5712fcda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.613 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] VM Resumed (Lifecycle Event)
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.615 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.623 2 INFO nova.virt.libvirt.driver [-] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Instance spawned successfully.
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.623 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.637 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.644 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.647 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.647 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.648 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.648 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.648 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.649 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.676 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.718 2 INFO nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Took 6.29 seconds to spawn the instance on the hypervisor.
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.719 2 DEBUG nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.782 2 INFO nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Took 7.34 seconds to build instance.
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.801 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.916 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.916 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.933 2 DEBUG nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.997 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:03 compute-0 nova_compute[259627]: 2025-10-14 08:57:03.998 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.004 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.004 2 INFO nova.compute.claims [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.192 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.274 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432224.273628, 45f3b13d-65b1-4bbf-8192-7b842f616b4d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.275 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] VM Resumed (Lifecycle Event)
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.285 2 DEBUG nova.compute.manager [req-51d95bcc-c442-470d-ba7a-4c87bb03320e req-9c9c5ae0-d450-46dc-994b-dba31e3c4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received event network-changed-3af4eb0a-c48b-4857-8399-453429b6af53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.285 2 DEBUG nova.compute.manager [req-51d95bcc-c442-470d-ba7a-4c87bb03320e req-9c9c5ae0-d450-46dc-994b-dba31e3c4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Refreshing instance network info cache due to event network-changed-3af4eb0a-c48b-4857-8399-453429b6af53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.286 2 DEBUG oslo_concurrency.lockutils [req-51d95bcc-c442-470d-ba7a-4c87bb03320e req-9c9c5ae0-d450-46dc-994b-dba31e3c4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.287 2 DEBUG oslo_concurrency.lockutils [req-51d95bcc-c442-470d-ba7a-4c87bb03320e req-9c9c5ae0-d450-46dc-994b-dba31e3c4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.288 2 DEBUG nova.network.neutron [req-51d95bcc-c442-470d-ba7a-4c87bb03320e req-9c9c5ae0-d450-46dc-994b-dba31e3c4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Refreshing network info cache for port 3af4eb0a-c48b-4857-8399-453429b6af53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.292 2 DEBUG nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.294 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.298 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.304 2 INFO nova.virt.libvirt.driver [-] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Instance spawned successfully.
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.305 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.325 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.333 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.333 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.334 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.336 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.337 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.337 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.350 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.350 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432224.2754664, 45f3b13d-65b1-4bbf-8192-7b842f616b4d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.351 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] VM Started (Lifecycle Event)
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.379 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.388 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.413 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.421 2 INFO nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Took 3.47 seconds to spawn the instance on the hypervisor.
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.421 2 DEBUG nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.517 2 INFO nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Took 4.59 seconds to build instance.
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.535 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "45f3b13d-65b1-4bbf-8192-7b842f616b4d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:04 compute-0 ceph-mon[74249]: pgmap v1154: 305 pgs: 305 active+clean; 260 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 311 op/s
Oct 14 08:57:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:57:04 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4236549572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.653 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.660 2 DEBUG nova.compute.provider_tree [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.680 2 DEBUG nova.scheduler.client.report [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.700 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.700 2 DEBUG nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.746 2 DEBUG nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.747 2 DEBUG nova.network.neutron [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.768 2 INFO nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.791 2 DEBUG nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.914 2 DEBUG nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.916 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.917 2 INFO nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Creating image(s)
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.950 2 DEBUG nova.storage.rbd_utils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:04 compute-0 nova_compute[259627]: 2025-10-14 08:57:04.981 2 DEBUG nova.storage.rbd_utils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.007 2 DEBUG nova.storage.rbd_utils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.011 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.044 2 DEBUG nova.policy [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aafd6ad40c944c3eb14e7fbf454040c3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f24bbeb2f91141e294590ca2afc5ed42', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.077 2 DEBUG nova.objects.instance [None req-662adbaf-2a63-42c6-9fcc-fa225606b41c 82ddf91380754dd8ae02917eaf89cc5e 4e1512a994154263bd259548462f8c31 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45f3b13d-65b1-4bbf-8192-7b842f616b4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.101 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.101 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.102 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.102 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.135 2 DEBUG nova.storage.rbd_utils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.138 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.158 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432225.1069105, 45f3b13d-65b1-4bbf-8192-7b842f616b4d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.159 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] VM Paused (Lifecycle Event)
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.181 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:05 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct 14 08:57:05 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Consumed 1.629s CPU time.
Oct 14 08:57:05 compute-0 systemd-machined[214636]: Machine qemu-18-instance-00000012 terminated.
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.386 2 DEBUG nova.compute.manager [None req-662adbaf-2a63-42c6-9fcc-fa225606b41c 82ddf91380754dd8ae02917eaf89cc5e 4e1512a994154263bd259548462f8c31 - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.393 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.446 2 DEBUG nova.network.neutron [req-51d95bcc-c442-470d-ba7a-4c87bb03320e req-9c9c5ae0-d450-46dc-994b-dba31e3c4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Updated VIF entry in instance network info cache for port 3af4eb0a-c48b-4857-8399-453429b6af53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.447 2 DEBUG nova.network.neutron [req-51d95bcc-c442-470d-ba7a-4c87bb03320e req-9c9c5ae0-d450-46dc-994b-dba31e3c4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Updating instance_info_cache with network_info: [{"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.456 2 DEBUG nova.storage.rbd_utils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] resizing rbd image cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.486 2 DEBUG oslo_concurrency.lockutils [req-51d95bcc-c442-470d-ba7a-4c87bb03320e req-9c9c5ae0-d450-46dc-994b-dba31e3c4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:57:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 08:57:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3650321656' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 08:57:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 08:57:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3650321656' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 08:57:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 365 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 8.2 MiB/s wr, 363 op/s
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.547 2 DEBUG nova.objects.instance [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lazy-loading 'migration_context' on Instance uuid cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.560 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.560 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Ensure instance console log exists: /var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.561 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.561 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:05 compute-0 nova_compute[259627]: 2025-10-14 08:57:05.561 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4236549572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3650321656' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 08:57:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3650321656' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 08:57:06 compute-0 ceph-mon[74249]: pgmap v1155: 305 pgs: 305 active+clean; 365 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 8.2 MiB/s wr, 363 op/s
Oct 14 08:57:06 compute-0 nova_compute[259627]: 2025-10-14 08:57:06.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:06 compute-0 nova_compute[259627]: 2025-10-14 08:57:06.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:07.014 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:07.015 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:07.015 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:07 compute-0 nova_compute[259627]: 2025-10-14 08:57:07.220 2 DEBUG nova.network.neutron [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Successfully created port: d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:57:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:57:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 365 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 8.2 MiB/s wr, 362 op/s
Oct 14 08:57:07 compute-0 nova_compute[259627]: 2025-10-14 08:57:07.643 2 DEBUG nova.compute.manager [req-c9cef85e-d1e7-48f4-864c-7569fa12fc76 req-9dcb2720-06a4-40e6-8194-7561d81848c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:07 compute-0 nova_compute[259627]: 2025-10-14 08:57:07.643 2 DEBUG oslo_concurrency.lockutils [req-c9cef85e-d1e7-48f4-864c-7569fa12fc76 req-9dcb2720-06a4-40e6-8194-7561d81848c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:07 compute-0 nova_compute[259627]: 2025-10-14 08:57:07.644 2 DEBUG oslo_concurrency.lockutils [req-c9cef85e-d1e7-48f4-864c-7569fa12fc76 req-9dcb2720-06a4-40e6-8194-7561d81848c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:07 compute-0 nova_compute[259627]: 2025-10-14 08:57:07.644 2 DEBUG oslo_concurrency.lockutils [req-c9cef85e-d1e7-48f4-864c-7569fa12fc76 req-9dcb2720-06a4-40e6-8194-7561d81848c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:07 compute-0 nova_compute[259627]: 2025-10-14 08:57:07.644 2 DEBUG nova.compute.manager [req-c9cef85e-d1e7-48f4-864c-7569fa12fc76 req-9dcb2720-06a4-40e6-8194-7561d81848c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] No waiting events found dispatching network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:57:07 compute-0 nova_compute[259627]: 2025-10-14 08:57:07.645 2 WARNING nova.compute.manager [req-c9cef85e-d1e7-48f4-864c-7569fa12fc76 req-9dcb2720-06a4-40e6-8194-7561d81848c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received unexpected event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 for instance with vm_state active and task_state None.
Oct 14 08:57:07 compute-0 nova_compute[259627]: 2025-10-14 08:57:07.716 2 DEBUG nova.compute.manager [req-c97ad0e2-a6ce-4d0a-b303-8fa95d013367 req-2f80a452-0f07-4875-83ac-67d8cd200687 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-changed-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:07 compute-0 nova_compute[259627]: 2025-10-14 08:57:07.717 2 DEBUG nova.compute.manager [req-c97ad0e2-a6ce-4d0a-b303-8fa95d013367 req-2f80a452-0f07-4875-83ac-67d8cd200687 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Refreshing instance network info cache due to event network-changed-499f3731-d66b-4964-b5a8-387adacf5166. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:57:07 compute-0 nova_compute[259627]: 2025-10-14 08:57:07.717 2 DEBUG oslo_concurrency.lockutils [req-c97ad0e2-a6ce-4d0a-b303-8fa95d013367 req-2f80a452-0f07-4875-83ac-67d8cd200687 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:57:07 compute-0 nova_compute[259627]: 2025-10-14 08:57:07.717 2 DEBUG oslo_concurrency.lockutils [req-c97ad0e2-a6ce-4d0a-b303-8fa95d013367 req-2f80a452-0f07-4875-83ac-67d8cd200687 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:57:07 compute-0 nova_compute[259627]: 2025-10-14 08:57:07.717 2 DEBUG nova.network.neutron [req-c97ad0e2-a6ce-4d0a-b303-8fa95d013367 req-2f80a452-0f07-4875-83ac-67d8cd200687 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Refreshing network info cache for port 499f3731-d66b-4964-b5a8-387adacf5166 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:57:07 compute-0 nova_compute[259627]: 2025-10-14 08:57:07.994 2 DEBUG nova.network.neutron [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Successfully updated port: d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.008 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "refresh_cache-cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.009 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquired lock "refresh_cache-cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.009 2 DEBUG nova.network.neutron [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.140 2 DEBUG nova.network.neutron [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.222 2 DEBUG oslo_concurrency.lockutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquiring lock "251ae181-b980-4338-a6b5-eee48450b510" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.223 2 DEBUG oslo_concurrency.lockutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.223 2 DEBUG oslo_concurrency.lockutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquiring lock "251ae181-b980-4338-a6b5-eee48450b510-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.224 2 DEBUG oslo_concurrency.lockutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.224 2 DEBUG oslo_concurrency.lockutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.225 2 INFO nova.compute.manager [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Terminating instance
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.227 2 DEBUG nova.compute.manager [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:57:08 compute-0 kernel: tap3af4eb0a-c4 (unregistering): left promiscuous mode
Oct 14 08:57:08 compute-0 NetworkManager[44885]: <info>  [1760432228.2828] device (tap3af4eb0a-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:57:08 compute-0 ovn_controller[152662]: 2025-10-14T08:57:08Z|00103|binding|INFO|Releasing lport 3af4eb0a-c48b-4857-8399-453429b6af53 from this chassis (sb_readonly=0)
Oct 14 08:57:08 compute-0 ovn_controller[152662]: 2025-10-14T08:57:08Z|00104|binding|INFO|Setting lport 3af4eb0a-c48b-4857-8399-453429b6af53 down in Southbound
Oct 14 08:57:08 compute-0 ovn_controller[152662]: 2025-10-14T08:57:08Z|00105|binding|INFO|Removing iface tap3af4eb0a-c4 ovn-installed in OVS
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.305 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:99:88 10.100.0.13'], port_security=['fa:16:3e:dd:99:88 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '251ae181-b980-4338-a6b5-eee48450b510', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb9605f8-2a2c-40d4-892f-fb75a29c07c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dffd1ba9c7eb426eba02b7fa1cb571e2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1853f749-24a7-4699-9f13-e869ca5b59f7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=076f043b-a4ac-4ba0-9e01-fc8b197a9834, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3af4eb0a-c48b-4857-8399-453429b6af53) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:57:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.307 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3af4eb0a-c48b-4857-8399-453429b6af53 in datapath fb9605f8-2a2c-40d4-892f-fb75a29c07c3 unbound from our chassis
Oct 14 08:57:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.308 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fb9605f8-2a2c-40d4-892f-fb75a29c07c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:57:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.309 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bf5a3ee3-0f21-440d-8655-89dc389c750e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.313 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3 namespace which is not needed anymore
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.327 2 DEBUG oslo_concurrency.lockutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.328 2 DEBUG oslo_concurrency.lockutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.328 2 INFO nova.compute.manager [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Rebooting instance
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.345 2 DEBUG oslo_concurrency.lockutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:57:08 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Oct 14 08:57:08 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 12.127s CPU time.
Oct 14 08:57:08 compute-0 systemd-machined[214636]: Machine qemu-15-instance-0000000f terminated.
Oct 14 08:57:08 compute-0 podman[286433]: 2025-10-14 08:57:08.396440315 +0000 UTC m=+0.080747034 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 08:57:08 compute-0 podman[286430]: 2025-10-14 08:57:08.43004278 +0000 UTC m=+0.115202900 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 08:57:08 compute-0 neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3[284678]: [NOTICE]   (284688) : haproxy version is 2.8.14-c23fe91
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.465 2 INFO nova.virt.libvirt.driver [-] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Instance destroyed successfully.
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.466 2 DEBUG nova.objects.instance [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lazy-loading 'resources' on Instance uuid 251ae181-b980-4338-a6b5-eee48450b510 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:08 compute-0 neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3[284678]: [NOTICE]   (284688) : path to executable is /usr/sbin/haproxy
Oct 14 08:57:08 compute-0 neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3[284678]: [WARNING]  (284688) : Exiting Master process...
Oct 14 08:57:08 compute-0 neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3[284678]: [WARNING]  (284688) : Exiting Master process...
Oct 14 08:57:08 compute-0 neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3[284678]: [ALERT]    (284688) : Current worker (284711) exited with code 143 (Terminated)
Oct 14 08:57:08 compute-0 neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3[284678]: [WARNING]  (284688) : All workers exited. Exiting... (0)
Oct 14 08:57:08 compute-0 systemd[1]: libpod-b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4.scope: Deactivated successfully.
Oct 14 08:57:08 compute-0 podman[286489]: 2025-10-14 08:57:08.478352096 +0000 UTC m=+0.054661933 container died b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.480 2 DEBUG nova.virt.libvirt.vif [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:56:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-928191118',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-928191118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-928191118',id=15,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:56:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dffd1ba9c7eb426eba02b7fa1cb571e2',ramdisk_id='',reservation_id='r-wq8uysq1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='vi
rtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-39168433',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-39168433-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:56:51Z,user_data=None,user_id='11f9a8052a8349b0a21b3acc32a7f2b1',uuid=251ae181-b980-4338-a6b5-eee48450b510,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.480 2 DEBUG nova.network.os_vif_util [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Converting VIF {"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.481 2 DEBUG nova.network.os_vif_util [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:99:88,bridge_name='br-int',has_traffic_filtering=True,id=3af4eb0a-c48b-4857-8399-453429b6af53,network=Network(fb9605f8-2a2c-40d4-892f-fb75a29c07c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af4eb0a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.481 2 DEBUG os_vif [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:99:88,bridge_name='br-int',has_traffic_filtering=True,id=3af4eb0a-c48b-4857-8399-453429b6af53,network=Network(fb9605f8-2a2c-40d4-892f-fb75a29c07c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af4eb0a-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.483 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3af4eb0a-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.488 2 INFO os_vif [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:99:88,bridge_name='br-int',has_traffic_filtering=True,id=3af4eb0a-c48b-4857-8399-453429b6af53,network=Network(fb9605f8-2a2c-40d4-892f-fb75a29c07c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af4eb0a-c4')
Oct 14 08:57:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4-userdata-shm.mount: Deactivated successfully.
Oct 14 08:57:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3c99ed231e5c1fd99f5e9dee19ad83031abe3a98e852118ad1216c381a913f6-merged.mount: Deactivated successfully.
Oct 14 08:57:08 compute-0 podman[286489]: 2025-10-14 08:57:08.515511139 +0000 UTC m=+0.091820976 container cleanup b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 08:57:08 compute-0 systemd[1]: libpod-conmon-b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4.scope: Deactivated successfully.
Oct 14 08:57:08 compute-0 podman[286550]: 2025-10-14 08:57:08.593921874 +0000 UTC m=+0.057014111 container remove b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 08:57:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.604 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b78b5f74-3109-4100-9920-042da629f573]: (4, ('Tue Oct 14 08:57:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3 (b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4)\nb0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4\nTue Oct 14 08:57:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3 (b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4)\nb0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.607 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7adddc7e-9550-4d32-b437-6ae047e5a95f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.608 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb9605f8-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:08 compute-0 kernel: tapfb9605f8-20: left promiscuous mode
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.618 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[936ffb49-86ee-4237-9393-3029573def59]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:08 compute-0 ceph-mon[74249]: pgmap v1156: 305 pgs: 305 active+clean; 365 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 8.2 MiB/s wr, 362 op/s
Oct 14 08:57:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.643 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e206d79f-c1e1-4717-b758-01e177dca692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.645 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd72b99-b20f-43f8-9cad-1a730ba8362b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.665 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c69e5352-e5cb-42ca-8646-bb36e38a26ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598906, 'reachable_time': 40193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286562, 'error': None, 'target': 'ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:08 compute-0 systemd[1]: run-netns-ovnmeta\x2dfb9605f8\x2d2a2c\x2d40d4\x2d892f\x2dfb75a29c07c3.mount: Deactivated successfully.
Oct 14 08:57:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.671 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:57:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.671 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[1e520074-281c-4b0e-9191-0ccd1a828cef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.898 2 DEBUG nova.network.neutron [req-c97ad0e2-a6ce-4d0a-b303-8fa95d013367 req-2f80a452-0f07-4875-83ac-67d8cd200687 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Updated VIF entry in instance network info cache for port 499f3731-d66b-4964-b5a8-387adacf5166. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.899 2 DEBUG nova.network.neutron [req-c97ad0e2-a6ce-4d0a-b303-8fa95d013367 req-2f80a452-0f07-4875-83ac-67d8cd200687 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Updating instance_info_cache with network_info: [{"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.908 2 INFO nova.virt.libvirt.driver [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Deleting instance files /var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510_del
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.909 2 INFO nova.virt.libvirt.driver [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Deletion of /var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510_del complete
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.928 2 DEBUG oslo_concurrency.lockutils [req-c97ad0e2-a6ce-4d0a-b303-8fa95d013367 req-2f80a452-0f07-4875-83ac-67d8cd200687 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.929 2 DEBUG oslo_concurrency.lockutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquired lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.929 2 DEBUG nova.network.neutron [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.968 2 INFO nova.compute.manager [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.969 2 DEBUG oslo.service.loopingcall [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.969 2 DEBUG nova.compute.manager [-] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:57:08 compute-0 nova_compute[259627]: 2025-10-14 08:57:08.969 2 DEBUG nova.network.neutron [-] [instance: 251ae181-b980-4338-a6b5-eee48450b510] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:57:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 365 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.8 MiB/s wr, 302 op/s
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.731 2 DEBUG nova.network.neutron [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Updating instance_info_cache with network_info: [{"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.772 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Releasing lock "refresh_cache-cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.772 2 DEBUG nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Instance network_info: |[{"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.777 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Start _get_guest_xml network_info=[{"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.783 2 DEBUG nova.compute.manager [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Received event network-changed-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.784 2 DEBUG nova.compute.manager [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Refreshing instance network info cache due to event network-changed-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.784 2 DEBUG oslo_concurrency.lockutils [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.785 2 DEBUG oslo_concurrency.lockutils [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.785 2 DEBUG nova.network.neutron [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Refreshing network info cache for port d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.793 2 WARNING nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.806 2 DEBUG nova.virt.libvirt.host [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.807 2 DEBUG nova.virt.libvirt.host [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.812 2 DEBUG nova.virt.libvirt.host [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.813 2 DEBUG nova.virt.libvirt.host [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.814 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.814 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.815 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.816 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.816 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.817 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.817 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.818 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.819 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.819 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.820 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.820 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.825 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.937 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "45f3b13d-65b1-4bbf-8192-7b842f616b4d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.938 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "45f3b13d-65b1-4bbf-8192-7b842f616b4d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.938 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "45f3b13d-65b1-4bbf-8192-7b842f616b4d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.938 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "45f3b13d-65b1-4bbf-8192-7b842f616b4d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.939 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "45f3b13d-65b1-4bbf-8192-7b842f616b4d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.941 2 INFO nova.compute.manager [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Terminating instance
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.942 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "refresh_cache-45f3b13d-65b1-4bbf-8192-7b842f616b4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.942 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquired lock "refresh_cache-45f3b13d-65b1-4bbf-8192-7b842f616b4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:57:09 compute-0 nova_compute[259627]: 2025-10-14 08:57:09.943 2 DEBUG nova.network.neutron [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.133 2 DEBUG nova.network.neutron [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:57:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:57:10 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1648541971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.272 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.311 2 DEBUG nova.storage.rbd_utils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.315 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.344 2 DEBUG nova.network.neutron [-] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.363 2 INFO nova.compute.manager [-] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Took 1.39 seconds to deallocate network for instance.
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.419 2 DEBUG oslo_concurrency.lockutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.420 2 DEBUG oslo_concurrency.lockutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.619 2 DEBUG nova.network.neutron [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.623 2 DEBUG nova.compute.manager [req-ab211404-250b-4031-b512-add73e08cf76 req-d35398cf-21ef-44a3-8db9-898f7e789fa5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received event network-vif-deleted-3af4eb0a-c48b-4857-8399-453429b6af53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:10 compute-0 ceph-mon[74249]: pgmap v1157: 305 pgs: 305 active+clean; 365 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.8 MiB/s wr, 302 op/s
Oct 14 08:57:10 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1648541971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.648 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Releasing lock "refresh_cache-45f3b13d-65b1-4bbf-8192-7b842f616b4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.649 2 DEBUG nova.compute.manager [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.654 2 INFO nova.virt.libvirt.driver [-] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Instance destroyed successfully.
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.654 2 DEBUG nova.objects.instance [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lazy-loading 'resources' on Instance uuid 45f3b13d-65b1-4bbf-8192-7b842f616b4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:57:10 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578576337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.761 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.762 2 DEBUG nova.virt.libvirt.vif [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:57:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-476297486',display_name='tempest-ImagesOneServerNegativeTestJSON-server-476297486',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-476297486',id=19,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f24bbeb2f91141e294590ca2afc5ed42',ramdisk_id='',reservation_id='r-7i3jta7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-531836018',owner_user_name=
'tempest-ImagesOneServerNegativeTestJSON-531836018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:57:04Z,user_data=None,user_id='aafd6ad40c944c3eb14e7fbf454040c3',uuid=cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.762 2 DEBUG nova.network.os_vif_util [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converting VIF {"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.763 2 DEBUG nova.network.os_vif_util [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:e1:2a,bridge_name='br-int',has_traffic_filtering=True,id=d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87ee1ff-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.764 2 DEBUG nova.objects.instance [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lazy-loading 'pci_devices' on Instance uuid cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.784 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:57:10 compute-0 nova_compute[259627]:   <uuid>cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a</uuid>
Oct 14 08:57:10 compute-0 nova_compute[259627]:   <name>instance-00000013</name>
Oct 14 08:57:10 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:57:10 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:57:10 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-476297486</nova:name>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:57:09</nova:creationTime>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:57:10 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:57:10 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:57:10 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:57:10 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:57:10 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:57:10 compute-0 nova_compute[259627]:         <nova:user uuid="aafd6ad40c944c3eb14e7fbf454040c3">tempest-ImagesOneServerNegativeTestJSON-531836018-project-member</nova:user>
Oct 14 08:57:10 compute-0 nova_compute[259627]:         <nova:project uuid="f24bbeb2f91141e294590ca2afc5ed42">tempest-ImagesOneServerNegativeTestJSON-531836018</nova:project>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:57:10 compute-0 nova_compute[259627]:         <nova:port uuid="d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb">
Oct 14 08:57:10 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:57:10 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:57:10 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <system>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <entry name="serial">cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a</entry>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <entry name="uuid">cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a</entry>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     </system>
Oct 14 08:57:10 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:57:10 compute-0 nova_compute[259627]:   <os>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:   </os>
Oct 14 08:57:10 compute-0 nova_compute[259627]:   <features>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:   </features>
Oct 14 08:57:10 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:57:10 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:57:10 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk">
Oct 14 08:57:10 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       </source>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:57:10 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk.config">
Oct 14 08:57:10 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       </source>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:57:10 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:80:e1:2a"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <target dev="tapd87ee1ff-7b"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a/console.log" append="off"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <video>
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     </video>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:57:10 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:57:10 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:57:10 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:57:10 compute-0 nova_compute[259627]: </domain>
Oct 14 08:57:10 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.785 2 DEBUG nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Preparing to wait for external event network-vif-plugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.786 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.786 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.786 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.787 2 DEBUG nova.virt.libvirt.vif [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:57:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-476297486',display_name='tempest-ImagesOneServerNegativeTestJSON-server-476297486',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-476297486',id=19,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f24bbeb2f91141e294590ca2afc5ed42',ramdisk_id='',reservation_id='r-7i3jta7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-531836018',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-531836018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:57:04Z,user_data=None,user_id='aafd6ad40c944c3eb14e7fbf454040c3',uuid=cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.787 2 DEBUG nova.network.os_vif_util [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converting VIF {"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.787 2 DEBUG nova.network.os_vif_util [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:e1:2a,bridge_name='br-int',has_traffic_filtering=True,id=d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87ee1ff-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.788 2 DEBUG os_vif [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:e1:2a,bridge_name='br-int',has_traffic_filtering=True,id=d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87ee1ff-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.788 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.789 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.791 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd87ee1ff-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.791 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd87ee1ff-7b, col_values=(('external_ids', {'iface-id': 'd87ee1ff-7b3b-4020-b6ac-b1aa7b688adb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:e1:2a', 'vm-uuid': 'cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:10 compute-0 NetworkManager[44885]: <info>  [1760432230.7935] manager: (tapd87ee1ff-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.799 2 INFO os_vif [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:e1:2a,bridge_name='br-int',has_traffic_filtering=True,id=d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87ee1ff-7b')
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.829 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432215.8282287, 5de76ef0-5c03-4b43-a691-c858cecd9e80 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.830 2 INFO nova.compute.manager [-] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] VM Stopped (Lifecycle Event)
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.838 2 DEBUG oslo_concurrency.processutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.869 2 DEBUG nova.compute.manager [None req-629cb982-2b32-4d29-b01c-1e12217042bb - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.893 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.893 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.894 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No VIF found with MAC fa:16:3e:80:e1:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.894 2 INFO nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Using config drive
Oct 14 08:57:10 compute-0 nova_compute[259627]: 2025-10-14 08:57:10.913 2 DEBUG nova.storage.rbd_utils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.037 2 INFO nova.virt.libvirt.driver [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Deleting instance files /var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d_del
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.037 2 INFO nova.virt.libvirt.driver [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Deletion of /var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d_del complete
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.134 2 INFO nova.compute.manager [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Took 0.48 seconds to destroy the instance on the hypervisor.
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.134 2 DEBUG oslo.service.loopingcall [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.135 2 DEBUG nova.compute.manager [-] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.135 2 DEBUG nova.network.neutron [-] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:57:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:57:11 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3918469538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.297 2 DEBUG oslo_concurrency.processutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.305 2 DEBUG nova.compute.provider_tree [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.339 2 DEBUG nova.scheduler.client.report [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.370 2 DEBUG oslo_concurrency.lockutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.427 2 INFO nova.scheduler.client.report [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Deleted allocations for instance 251ae181-b980-4338-a6b5-eee48450b510
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.435 2 DEBUG nova.network.neutron [-] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.453 2 DEBUG nova.network.neutron [-] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.494 2 INFO nova.compute.manager [-] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Took 0.36 seconds to deallocate network for instance.
Oct 14 08:57:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 339 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.6 MiB/s wr, 428 op/s
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.521 2 DEBUG nova.network.neutron [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Updating instance_info_cache with network_info: [{"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.525 2 DEBUG oslo_concurrency.lockutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.581 2 DEBUG oslo_concurrency.lockutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Releasing lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.584 2 DEBUG nova.compute.manager [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.597 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.598 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:11 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2578576337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:11 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3918469538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.740 2 INFO nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Creating config drive at /var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a/disk.config
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.747 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmvlrurl_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:11 compute-0 kernel: tap499f3731-d6 (unregistering): left promiscuous mode
Oct 14 08:57:11 compute-0 NetworkManager[44885]: <info>  [1760432231.7562] device (tap499f3731-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:57:11 compute-0 ovn_controller[152662]: 2025-10-14T08:57:11Z|00106|binding|INFO|Releasing lport 499f3731-d66b-4964-b5a8-387adacf5166 from this chassis (sb_readonly=0)
Oct 14 08:57:11 compute-0 ovn_controller[152662]: 2025-10-14T08:57:11Z|00107|binding|INFO|Setting lport 499f3731-d66b-4964-b5a8-387adacf5166 down in Southbound
Oct 14 08:57:11 compute-0 ovn_controller[152662]: 2025-10-14T08:57:11Z|00108|binding|INFO|Removing iface tap499f3731-d6 ovn-installed in OVS
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.826 2 DEBUG nova.network.neutron [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Updated VIF entry in instance network info cache for port d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.827 2 DEBUG nova.network.neutron [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Updating instance_info_cache with network_info: [{"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.827 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:ee:87 10.100.0.5'], port_security=['fa:16:3e:58:ee:87 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '01db05f6-07fb-41b5-8aaf-27ad5712fcda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bf65c21e4104af6981b071561617657', 'neutron:revision_number': '5', 'neutron:security_group_ids': '62a2565f-d2d5-452b-bee3-932bd15ef802 e50f77b0-42a8-4edd-9a84-05435c5fb458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2c96ef8-3846-498d-b4a9-f6fd46fb5d04, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=499f3731-d66b-4964-b5a8-387adacf5166) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.828 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 499f3731-d66b-4964-b5a8-387adacf5166 in datapath 58ff48d6-a644-40e6-8fc9-ee19b4354df9 unbound from our chassis
Oct 14 08:57:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.830 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58ff48d6-a644-40e6-8fc9-ee19b4354df9
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.847 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c187b085-1516-43a6-afa3-a71292e511c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.851 2 DEBUG oslo_concurrency.lockutils [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.851 2 DEBUG nova.compute.manager [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received event network-vif-unplugged-3af4eb0a-c48b-4857-8399-453429b6af53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.851 2 DEBUG oslo_concurrency.lockutils [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "251ae181-b980-4338-a6b5-eee48450b510-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.852 2 DEBUG oslo_concurrency.lockutils [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.852 2 DEBUG oslo_concurrency.lockutils [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.852 2 DEBUG nova.compute.manager [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] No waiting events found dispatching network-vif-unplugged-3af4eb0a-c48b-4857-8399-453429b6af53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.852 2 DEBUG nova.compute.manager [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received event network-vif-unplugged-3af4eb0a-c48b-4857-8399-453429b6af53 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.852 2 DEBUG nova.compute.manager [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received event network-vif-plugged-3af4eb0a-c48b-4857-8399-453429b6af53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.852 2 DEBUG oslo_concurrency.lockutils [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "251ae181-b980-4338-a6b5-eee48450b510-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.852 2 DEBUG oslo_concurrency.lockutils [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.853 2 DEBUG oslo_concurrency.lockutils [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.853 2 DEBUG nova.compute.manager [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] No waiting events found dispatching network-vif-plugged-3af4eb0a-c48b-4857-8399-453429b6af53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.853 2 WARNING nova.compute.manager [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received unexpected event network-vif-plugged-3af4eb0a-c48b-4857-8399-453429b6af53 for instance with vm_state active and task_state deleting.
Oct 14 08:57:11 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Oct 14 08:57:11 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 8.854s CPU time.
Oct 14 08:57:11 compute-0 systemd-machined[214636]: Machine qemu-17-instance-00000011 terminated.
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.876 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[935cbe16-15d9-4006-baa8-af3ded9a959d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.879 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[554ffca7-9da5-4b5d-b8f1-556286595fa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.881 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmvlrurl_" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.906 2 DEBUG nova.storage.rbd_utils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.907 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d95869f0-f0d9-4504-a099-e6a0d7f74bee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.916 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a/disk.config cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.922 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2439d2aa-42dd-4ad7-8422-f471a9f6f5f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58ff48d6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:28:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597136, 'reachable_time': 43498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286721, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.940 2 DEBUG oslo_concurrency.processutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.939 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a495b0b9-8daa-4b90-8c55-a7e0682d97b0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap58ff48d6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597151, 'tstamp': 597151}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286725, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap58ff48d6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597154, 'tstamp': 597154}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286725, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.941 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58ff48d6-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.956 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58ff48d6-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.956 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:57:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.956 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58ff48d6-a0, col_values=(('external_ids', {'iface-id': '1eaf3c85-b7b7-4dd7-ad1f-33385c25330b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.957 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.970 2 INFO nova.virt.libvirt.driver [-] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Instance destroyed successfully.
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.971 2 DEBUG nova.objects.instance [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lazy-loading 'resources' on Instance uuid 01db05f6-07fb-41b5-8aaf-27ad5712fcda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.992 2 DEBUG nova.virt.libvirt.vif [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:56:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1349611853',display_name='tempest-SecurityGroupsTestJSON-server-1349611853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1349611853',id=17,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:57:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9bf65c21e4104af6981b071561617657',ramdisk_id='',reservation_id='r-qk8wy1ld',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-663845074',owner_user_name='tempest-SecurityGroupsTestJSON-663845074-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:57:11Z,user_data=None,user_id='f50e2582d63041b682c71a379f763c0e',uuid=01db05f6-07fb-41b5-8aaf-27ad5712fcda,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.992 2 DEBUG nova.network.os_vif_util [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converting VIF {"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.994 2 DEBUG nova.network.os_vif_util [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.994 2 DEBUG os_vif [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:11 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.997 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap499f3731-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:11.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.005 2 INFO os_vif [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6')
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.012 2 DEBUG nova.virt.libvirt.driver [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Start _get_guest_xml network_info=[{"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.016 2 WARNING nova.virt.libvirt.driver [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.019 2 DEBUG nova.virt.libvirt.host [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.020 2 DEBUG nova.virt.libvirt.host [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.023 2 DEBUG nova.virt.libvirt.host [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.023 2 DEBUG nova.virt.libvirt.host [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.024 2 DEBUG nova.virt.libvirt.driver [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.024 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.025 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.025 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.025 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.025 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.025 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.026 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.026 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.026 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.026 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.026 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.027 2 DEBUG nova.objects.instance [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 01db05f6-07fb-41b5-8aaf-27ad5712fcda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.046 2 DEBUG oslo_concurrency.processutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.074 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a/disk.config cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.074 2 INFO nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Deleting local config drive /var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a/disk.config because it was imported into RBD.
Oct 14 08:57:12 compute-0 kernel: tapd87ee1ff-7b: entered promiscuous mode
Oct 14 08:57:12 compute-0 ovn_controller[152662]: 2025-10-14T08:57:12Z|00109|binding|INFO|Claiming lport d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb for this chassis.
Oct 14 08:57:12 compute-0 systemd-udevd[286694]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:57:12 compute-0 ovn_controller[152662]: 2025-10-14T08:57:12Z|00110|binding|INFO|d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb: Claiming fa:16:3e:80:e1:2a 10.100.0.11
Oct 14 08:57:12 compute-0 NetworkManager[44885]: <info>  [1760432232.1339] manager: (tapd87ee1ff-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.142 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:e1:2a 10.100.0.11'], port_security=['fa:16:3e:80:e1:2a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58d74886-d603-4fb5-b8ff-9c184284bdce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f24bbeb2f91141e294590ca2afc5ed42', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd818ab3d-f5ea-4d77-bc47-7efe2295e146', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1adf6e68-9c1a-4ee7-a829-03bbd6a5ae48, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.144 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb in datapath 58d74886-d603-4fb5-b8ff-9c184284bdce bound to our chassis
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.145 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58d74886-d603-4fb5-b8ff-9c184284bdce
Oct 14 08:57:12 compute-0 NetworkManager[44885]: <info>  [1760432232.1534] device (tapd87ee1ff-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:57:12 compute-0 ovn_controller[152662]: 2025-10-14T08:57:12Z|00111|binding|INFO|Setting lport d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb ovn-installed in OVS
Oct 14 08:57:12 compute-0 ovn_controller[152662]: 2025-10-14T08:57:12Z|00112|binding|INFO|Setting lport d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb up in Southbound
Oct 14 08:57:12 compute-0 NetworkManager[44885]: <info>  [1760432232.1546] device (tapd87ee1ff-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.159 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[73ac41a9-2670-47f8-980a-aabfad4e5e91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.160 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58d74886-d1 in ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.161 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58d74886-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.161 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3975b002-e031-4034-975f-cc40be547d87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.165 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[675a4e96-5f34-4f41-9877-089daf18691e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:12 compute-0 systemd-machined[214636]: New machine qemu-19-instance-00000013.
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.177 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[17690999-c627-4360-a7bd-84e3dbaf202d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:12 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000013.
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.200 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a8dc0199-fd64-4c08-8ef4-c7090114a89f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.231 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7983b692-0673-4396-9c8a-abe6d8397611]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.236 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b5279131-d242-448c-bab6-2e8beec78064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:12 compute-0 NetworkManager[44885]: <info>  [1760432232.2372] manager: (tap58d74886-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/66)
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.267 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4b49528c-da4d-4e22-b5ba-5bcdc36c5c0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.270 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[567c39f6-2c93-43d0-810d-31c179c3108c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:12 compute-0 NetworkManager[44885]: <info>  [1760432232.2895] device (tap58d74886-d0): carrier: link connected
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.295 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f08967ce-5bf7-43cb-8219-505b55d9e14f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.312 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d3d5c8df-36e7-4eca-9a50-3c82b5ce1935]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58d74886-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:d2:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601105, 'reachable_time': 33903, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286842, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.329 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8fcb32ed-da06-4691-bb30-425d548845d4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec7:d2a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601105, 'tstamp': 601105}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286843, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.344 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f32c77d8-84bb-4153-a04b-bb01d791132c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58d74886-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:d2:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601105, 'reachable_time': 33903, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286844, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:57:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/508029087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.376 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[29ecbbca-2f6e-4735-8baa-8ae5ba9ce869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.385 2 DEBUG oslo_concurrency.processutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.392 2 DEBUG nova.compute.provider_tree [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.408 2 DEBUG nova.scheduler.client.report [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.432 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.436 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f990ba98-110b-4e6f-b207-5101f9074d46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.437 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58d74886-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.437 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.438 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58d74886-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:12 compute-0 NetworkManager[44885]: <info>  [1760432232.4408] manager: (tap58d74886-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Oct 14 08:57:12 compute-0 kernel: tap58d74886-d0: entered promiscuous mode
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.445 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58d74886-d0, col_values=(('external_ids', {'iface-id': 'ef5c894d-34c4-4781-b15c-6813576a45e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:12 compute-0 ovn_controller[152662]: 2025-10-14T08:57:12Z|00113|binding|INFO|Releasing lport ef5c894d-34c4-4781-b15c-6813576a45e8 from this chassis (sb_readonly=0)
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.449 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58d74886-d603-4fb5-b8ff-9c184284bdce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58d74886-d603-4fb5-b8ff-9c184284bdce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.455 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0103d4-0c6d-478f-9cf6-9fafaa3e7d7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.456 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-58d74886-d603-4fb5-b8ff-9c184284bdce
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/58d74886-d603-4fb5-b8ff-9c184284bdce.pid.haproxy
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 58d74886-d603-4fb5-b8ff-9c184284bdce
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.457 2 INFO nova.scheduler.client.report [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Deleted allocations for instance 45f3b13d-65b1-4bbf-8192-7b842f616b4d
Oct 14 08:57:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.456 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'env', 'PROCESS_TAG=haproxy-58d74886-d603-4fb5-b8ff-9c184284bdce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58d74886-d603-4fb5-b8ff-9c184284bdce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:57:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:57:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/959744692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.505 2 DEBUG oslo_concurrency.processutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.534 2 DEBUG oslo_concurrency.processutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:12 compute-0 ceph-mon[74249]: pgmap v1158: 305 pgs: 305 active+clean; 339 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.6 MiB/s wr, 428 op/s
Oct 14 08:57:12 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/508029087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:12 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/959744692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:12 compute-0 podman[286960]: 2025-10-14 08:57:12.85424303 +0000 UTC m=+0.051887206 container create e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.860 2 DEBUG nova.compute.manager [req-adaa813d-f9eb-4d19-abef-79c8300817b1 req-9d0275d8-7e92-4ef1-b044-aa48a54755d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-unplugged-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.861 2 DEBUG oslo_concurrency.lockutils [req-adaa813d-f9eb-4d19-abef-79c8300817b1 req-9d0275d8-7e92-4ef1-b044-aa48a54755d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.861 2 DEBUG oslo_concurrency.lockutils [req-adaa813d-f9eb-4d19-abef-79c8300817b1 req-9d0275d8-7e92-4ef1-b044-aa48a54755d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.861 2 DEBUG oslo_concurrency.lockutils [req-adaa813d-f9eb-4d19-abef-79c8300817b1 req-9d0275d8-7e92-4ef1-b044-aa48a54755d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.861 2 DEBUG nova.compute.manager [req-adaa813d-f9eb-4d19-abef-79c8300817b1 req-9d0275d8-7e92-4ef1-b044-aa48a54755d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] No waiting events found dispatching network-vif-unplugged-499f3731-d66b-4964-b5a8-387adacf5166 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.862 2 WARNING nova.compute.manager [req-adaa813d-f9eb-4d19-abef-79c8300817b1 req-9d0275d8-7e92-4ef1-b044-aa48a54755d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received unexpected event network-vif-unplugged-499f3731-d66b-4964-b5a8-387adacf5166 for instance with vm_state active and task_state reboot_started_hard.
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.868 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "45f3b13d-65b1-4bbf-8192-7b842f616b4d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:12 compute-0 systemd[1]: Started libpod-conmon-e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536.scope.
Oct 14 08:57:12 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:57:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cee4e261b09b2f84c9173291ee5a39f3882304b4c7d47e1f06ce36bd1963e8e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:57:12 compute-0 podman[286960]: 2025-10-14 08:57:12.826605541 +0000 UTC m=+0.024249707 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:57:12 compute-0 podman[286960]: 2025-10-14 08:57:12.932113952 +0000 UTC m=+0.129758118 container init e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.942 2 DEBUG nova.compute.manager [req-5b9a842e-6b04-42f4-89b4-9a9cdab8131c req-85566877-5f0a-49d1-897c-120af40fad3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Received event network-vif-plugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.942 2 DEBUG oslo_concurrency.lockutils [req-5b9a842e-6b04-42f4-89b4-9a9cdab8131c req-85566877-5f0a-49d1-897c-120af40fad3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:12 compute-0 podman[286960]: 2025-10-14 08:57:12.943190214 +0000 UTC m=+0.140834350 container start e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.943 2 DEBUG oslo_concurrency.lockutils [req-5b9a842e-6b04-42f4-89b4-9a9cdab8131c req-85566877-5f0a-49d1-897c-120af40fad3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.943 2 DEBUG oslo_concurrency.lockutils [req-5b9a842e-6b04-42f4-89b4-9a9cdab8131c req-85566877-5f0a-49d1-897c-120af40fad3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.943 2 DEBUG nova.compute.manager [req-5b9a842e-6b04-42f4-89b4-9a9cdab8131c req-85566877-5f0a-49d1-897c-120af40fad3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Processing event network-vif-plugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:57:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:57:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2951469822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:12 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[286975]: [NOTICE]   (286979) : New worker (286983) forked
Oct 14 08:57:12 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[286975]: [NOTICE]   (286979) : Loading success.
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.987 2 DEBUG oslo_concurrency.processutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.988 2 DEBUG nova.virt.libvirt.vif [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:56:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1349611853',display_name='tempest-SecurityGroupsTestJSON-server-1349611853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1349611853',id=17,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:57:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9bf65c21e4104af6981b071561617657',ramdisk_id='',reservation_id='r-qk8wy1ld',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-663845074',owner_user_name='tempest-SecurityGroupsTestJSON-663845074-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:57:11Z,user_data=None,user_id='f50e2582d63041b682c71a379f763c0e',uuid=01db05f6-07fb-41b5-8aaf-27ad5712fcda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.988 2 DEBUG nova.network.os_vif_util [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converting VIF {"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.989 2 DEBUG nova.network.os_vif_util [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:57:12 compute-0 nova_compute[259627]: 2025-10-14 08:57:12.990 2 DEBUG nova.objects.instance [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lazy-loading 'pci_devices' on Instance uuid 01db05f6-07fb-41b5-8aaf-27ad5712fcda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.009 2 DEBUG nova.virt.libvirt.driver [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:57:13 compute-0 nova_compute[259627]:   <uuid>01db05f6-07fb-41b5-8aaf-27ad5712fcda</uuid>
Oct 14 08:57:13 compute-0 nova_compute[259627]:   <name>instance-00000011</name>
Oct 14 08:57:13 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:57:13 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:57:13 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1349611853</nova:name>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:57:12</nova:creationTime>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:57:13 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:57:13 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:57:13 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:57:13 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:57:13 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:57:13 compute-0 nova_compute[259627]:         <nova:user uuid="f50e2582d63041b682c71a379f763c0e">tempest-SecurityGroupsTestJSON-663845074-project-member</nova:user>
Oct 14 08:57:13 compute-0 nova_compute[259627]:         <nova:project uuid="9bf65c21e4104af6981b071561617657">tempest-SecurityGroupsTestJSON-663845074</nova:project>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:57:13 compute-0 nova_compute[259627]:         <nova:port uuid="499f3731-d66b-4964-b5a8-387adacf5166">
Oct 14 08:57:13 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:57:13 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:57:13 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <system>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <entry name="serial">01db05f6-07fb-41b5-8aaf-27ad5712fcda</entry>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <entry name="uuid">01db05f6-07fb-41b5-8aaf-27ad5712fcda</entry>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     </system>
Oct 14 08:57:13 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:57:13 compute-0 nova_compute[259627]:   <os>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:   </os>
Oct 14 08:57:13 compute-0 nova_compute[259627]:   <features>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:   </features>
Oct 14 08:57:13 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:57:13 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:57:13 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk">
Oct 14 08:57:13 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       </source>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:57:13 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk.config">
Oct 14 08:57:13 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       </source>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:57:13 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:58:ee:87"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <target dev="tap499f3731-d6"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda/console.log" append="off"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <video>
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     </video>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <input type="keyboard" bus="usb"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:57:13 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:57:13 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:57:13 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:57:13 compute-0 nova_compute[259627]: </domain>
Oct 14 08:57:13 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.010 2 DEBUG nova.virt.libvirt.driver [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.010 2 DEBUG nova.virt.libvirt.driver [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.011 2 DEBUG nova.virt.libvirt.vif [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:56:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1349611853',display_name='tempest-SecurityGroupsTestJSON-server-1349611853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1349611853',id=17,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:57:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='9bf65c21e4104af6981b071561617657',ramdisk_id='',reservation_id='r-qk8wy1ld',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-663845074',owner_user_name='tempest-SecurityGroupsTestJSON-663845074-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:57:11Z,user_data=None,user_id='f50e2582d63041b682c71a379f763c0e',uuid=01db05f6-07fb-41b5-8aaf-27ad5712fcda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.011 2 DEBUG nova.network.os_vif_util [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converting VIF {"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.011 2 DEBUG nova.network.os_vif_util [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.012 2 DEBUG os_vif [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.013 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.013 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.016 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap499f3731-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.016 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap499f3731-d6, col_values=(('external_ids', {'iface-id': '499f3731-d66b-4964-b5a8-387adacf5166', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:ee:87', 'vm-uuid': '01db05f6-07fb-41b5-8aaf-27ad5712fcda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:13 compute-0 NetworkManager[44885]: <info>  [1760432233.0649] manager: (tap499f3731-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.070 2 INFO os_vif [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6')
Oct 14 08:57:13 compute-0 NetworkManager[44885]: <info>  [1760432233.1401] manager: (tap499f3731-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Oct 14 08:57:13 compute-0 kernel: tap499f3731-d6: entered promiscuous mode
Oct 14 08:57:13 compute-0 systemd-udevd[286837]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:13 compute-0 ovn_controller[152662]: 2025-10-14T08:57:13Z|00114|binding|INFO|Claiming lport 499f3731-d66b-4964-b5a8-387adacf5166 for this chassis.
Oct 14 08:57:13 compute-0 ovn_controller[152662]: 2025-10-14T08:57:13Z|00115|binding|INFO|499f3731-d66b-4964-b5a8-387adacf5166: Claiming fa:16:3e:58:ee:87 10.100.0.5
Oct 14 08:57:13 compute-0 NetworkManager[44885]: <info>  [1760432233.1537] device (tap499f3731-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:57:13 compute-0 NetworkManager[44885]: <info>  [1760432233.1564] device (tap499f3731-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:57:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.155 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:ee:87 10.100.0.5'], port_security=['fa:16:3e:58:ee:87 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '01db05f6-07fb-41b5-8aaf-27ad5712fcda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bf65c21e4104af6981b071561617657', 'neutron:revision_number': '6', 'neutron:security_group_ids': '62a2565f-d2d5-452b-bee3-932bd15ef802 e50f77b0-42a8-4edd-9a84-05435c5fb458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2c96ef8-3846-498d-b4a9-f6fd46fb5d04, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=499f3731-d66b-4964-b5a8-387adacf5166) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:57:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.156 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 499f3731-d66b-4964-b5a8-387adacf5166 in datapath 58ff48d6-a644-40e6-8fc9-ee19b4354df9 bound to our chassis
Oct 14 08:57:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.158 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58ff48d6-a644-40e6-8fc9-ee19b4354df9
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.173 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432233.1729403, cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.173 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] VM Started (Lifecycle Event)
Oct 14 08:57:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.172 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[941a526c-a319-4688-ac39-0f64638253c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.175 2 DEBUG nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.182 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:57:13 compute-0 systemd-machined[214636]: New machine qemu-20-instance-00000011.
Oct 14 08:57:13 compute-0 ovn_controller[152662]: 2025-10-14T08:57:13Z|00116|binding|INFO|Setting lport 499f3731-d66b-4964-b5a8-387adacf5166 ovn-installed in OVS
Oct 14 08:57:13 compute-0 ovn_controller[152662]: 2025-10-14T08:57:13Z|00117|binding|INFO|Setting lport 499f3731-d66b-4964-b5a8-387adacf5166 up in Southbound
Oct 14 08:57:13 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-00000011.
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.200 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.202 2 INFO nova.virt.libvirt.driver [-] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Instance spawned successfully.
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.202 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:57:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.203 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e3e707d5-f5d3-4353-a344-9ce36d23aebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.207 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:57:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.208 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8e56d749-3380-43dc-bf03-f0164b424f02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.228 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.228 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432233.1730754, cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.228 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] VM Paused (Lifecycle Event)
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.232 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.233 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.233 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.233 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.233 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.234 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.242 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6b0523be-b5c4-44e4-a0fe-7f84a0427c01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.254 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.257 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432233.1807742, cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.257 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] VM Resumed (Lifecycle Event)
Oct 14 08:57:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.263 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9a75e006-1cab-40ec-bdaa-b5bc21826037]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58ff48d6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:28:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597136, 'reachable_time': 43498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287017, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.281 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6f883de1-f38c-4931-8beb-45953813e68c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap58ff48d6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597151, 'tstamp': 597151}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287019, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap58ff48d6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597154, 'tstamp': 597154}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287019, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.283 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58ff48d6-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.285 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58ff48d6-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.286 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:57:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.286 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58ff48d6-a0, col_values=(('external_ids', {'iface-id': '1eaf3c85-b7b7-4dd7-ad1f-33385c25330b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.286 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.293 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.296 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.315 2 INFO nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Took 8.40 seconds to spawn the instance on the hypervisor.
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.316 2 DEBUG nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.323 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.386 2 INFO nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Took 9.41 seconds to build instance.
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.410 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 339 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 7.8 MiB/s wr, 305 op/s
Oct 14 08:57:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2951469822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.814 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "753ea698-6cc6-4a73-a0d2-1366e5374a9c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.815 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "753ea698-6cc6-4a73-a0d2-1366e5374a9c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.815 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "753ea698-6cc6-4a73-a0d2-1366e5374a9c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.815 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "753ea698-6cc6-4a73-a0d2-1366e5374a9c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.816 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "753ea698-6cc6-4a73-a0d2-1366e5374a9c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.817 2 INFO nova.compute.manager [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Terminating instance
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.818 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "refresh_cache-753ea698-6cc6-4a73-a0d2-1366e5374a9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.818 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquired lock "refresh_cache-753ea698-6cc6-4a73-a0d2-1366e5374a9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.819 2 DEBUG nova.network.neutron [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:57:13 compute-0 nova_compute[259627]: 2025-10-14 08:57:13.972 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:57:14 compute-0 nova_compute[259627]: 2025-10-14 08:57:14.017 2 DEBUG nova.network.neutron [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:57:14 compute-0 nova_compute[259627]: 2025-10-14 08:57:14.043 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 01db05f6-07fb-41b5-8aaf-27ad5712fcda due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 08:57:14 compute-0 nova_compute[259627]: 2025-10-14 08:57:14.044 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432234.0434237, 01db05f6-07fb-41b5-8aaf-27ad5712fcda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:14 compute-0 nova_compute[259627]: 2025-10-14 08:57:14.044 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] VM Resumed (Lifecycle Event)
Oct 14 08:57:14 compute-0 nova_compute[259627]: 2025-10-14 08:57:14.046 2 DEBUG nova.compute.manager [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:57:14 compute-0 nova_compute[259627]: 2025-10-14 08:57:14.049 2 INFO nova.virt.libvirt.driver [-] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Instance rebooted successfully.
Oct 14 08:57:14 compute-0 nova_compute[259627]: 2025-10-14 08:57:14.049 2 DEBUG nova.compute.manager [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:14 compute-0 nova_compute[259627]: 2025-10-14 08:57:14.425 2 DEBUG nova.network.neutron [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:14 compute-0 ceph-mon[74249]: pgmap v1159: 305 pgs: 305 active+clean; 339 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 7.8 MiB/s wr, 305 op/s
Oct 14 08:57:14 compute-0 nova_compute[259627]: 2025-10-14 08:57:14.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:57:14 compute-0 nova_compute[259627]: 2025-10-14 08:57:14.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:57:15 compute-0 nova_compute[259627]: 2025-10-14 08:57:15.505 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 293 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 6.4 MiB/s rd, 7.8 MiB/s wr, 461 op/s
Oct 14 08:57:15 compute-0 nova_compute[259627]: 2025-10-14 08:57:15.511 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:57:15 compute-0 nova_compute[259627]: 2025-10-14 08:57:15.773 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Releasing lock "refresh_cache-753ea698-6cc6-4a73-a0d2-1366e5374a9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:57:15 compute-0 nova_compute[259627]: 2025-10-14 08:57:15.774 2 DEBUG nova.compute.manager [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:57:15 compute-0 nova_compute[259627]: 2025-10-14 08:57:15.777 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 14 08:57:15 compute-0 nova_compute[259627]: 2025-10-14 08:57:15.777 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432234.0455208, 01db05f6-07fb-41b5-8aaf-27ad5712fcda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:15 compute-0 nova_compute[259627]: 2025-10-14 08:57:15.777 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] VM Started (Lifecycle Event)
Oct 14 08:57:15 compute-0 nova_compute[259627]: 2025-10-14 08:57:15.820 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:15 compute-0 nova_compute[259627]: 2025-10-14 08:57:15.825 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:57:15 compute-0 nova_compute[259627]: 2025-10-14 08:57:15.839 2 DEBUG oslo_concurrency.lockutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:15 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Oct 14 08:57:15 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 12.705s CPU time.
Oct 14 08:57:15 compute-0 systemd-machined[214636]: Machine qemu-16-instance-00000010 terminated.
Oct 14 08:57:15 compute-0 nova_compute[259627]: 2025-10-14 08:57:15.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:57:15 compute-0 nova_compute[259627]: 2025-10-14 08:57:15.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:57:15 compute-0 nova_compute[259627]: 2025-10-14 08:57:15.994 2 INFO nova.virt.libvirt.driver [-] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Instance destroyed successfully.
Oct 14 08:57:15 compute-0 nova_compute[259627]: 2025-10-14 08:57:15.995 2 DEBUG nova.objects.instance [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lazy-loading 'resources' on Instance uuid 753ea698-6cc6-4a73-a0d2-1366e5374a9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.028 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.029 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.029 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.029 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.030 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.148 2 DEBUG nova.compute.manager [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.149 2 DEBUG oslo_concurrency.lockutils [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.149 2 DEBUG oslo_concurrency.lockutils [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.149 2 DEBUG oslo_concurrency.lockutils [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.149 2 DEBUG nova.compute.manager [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] No waiting events found dispatching network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.150 2 WARNING nova.compute.manager [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received unexpected event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 for instance with vm_state active and task_state None.
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.150 2 DEBUG nova.compute.manager [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.150 2 DEBUG oslo_concurrency.lockutils [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.151 2 DEBUG oslo_concurrency.lockutils [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.151 2 DEBUG oslo_concurrency.lockutils [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.151 2 DEBUG nova.compute.manager [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] No waiting events found dispatching network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.152 2 WARNING nova.compute.manager [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received unexpected event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 for instance with vm_state active and task_state None.
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.152 2 DEBUG nova.compute.manager [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.152 2 DEBUG oslo_concurrency.lockutils [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.152 2 DEBUG oslo_concurrency.lockutils [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.153 2 DEBUG oslo_concurrency.lockutils [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.153 2 DEBUG nova.compute.manager [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] No waiting events found dispatching network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.153 2 WARNING nova.compute.manager [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received unexpected event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 for instance with vm_state active and task_state None.
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.209 2 DEBUG nova.compute.manager [req-1166ceac-0086-4122-ada7-95ec78f6311e req-ed3cffb2-b704-450f-bb0b-fdc0e0257261 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Received event network-vif-plugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.209 2 DEBUG oslo_concurrency.lockutils [req-1166ceac-0086-4122-ada7-95ec78f6311e req-ed3cffb2-b704-450f-bb0b-fdc0e0257261 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.210 2 DEBUG oslo_concurrency.lockutils [req-1166ceac-0086-4122-ada7-95ec78f6311e req-ed3cffb2-b704-450f-bb0b-fdc0e0257261 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.210 2 DEBUG oslo_concurrency.lockutils [req-1166ceac-0086-4122-ada7-95ec78f6311e req-ed3cffb2-b704-450f-bb0b-fdc0e0257261 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.210 2 DEBUG nova.compute.manager [req-1166ceac-0086-4122-ada7-95ec78f6311e req-ed3cffb2-b704-450f-bb0b-fdc0e0257261 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] No waiting events found dispatching network-vif-plugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.210 2 WARNING nova.compute.manager [req-1166ceac-0086-4122-ada7-95ec78f6311e req-ed3cffb2-b704-450f-bb0b-fdc0e0257261 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Received unexpected event network-vif-plugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb for instance with vm_state active and task_state None.
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.423 2 INFO nova.virt.libvirt.driver [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Deleting instance files /var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c_del
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.424 2 INFO nova.virt.libvirt.driver [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Deletion of /var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c_del complete
Oct 14 08:57:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.471 2 INFO nova.compute.manager [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 14 08:57:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/601108212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.471 2 DEBUG oslo.service.loopingcall [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.472 2 DEBUG nova.compute.manager [-] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.472 2 DEBUG nova.network.neutron [-] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.490 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.588 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.589 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.594 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.594 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.597 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.597 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.655 2 DEBUG nova.network.neutron [-] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:57:16 compute-0 ceph-mon[74249]: pgmap v1160: 305 pgs: 305 active+clean; 293 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 6.4 MiB/s rd, 7.8 MiB/s wr, 461 op/s
Oct 14 08:57:16 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/601108212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.746 2 DEBUG nova.network.neutron [-] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.771 2 INFO nova.compute.manager [-] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Took 0.30 seconds to deallocate network for instance.
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.806 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.807 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4137MB free_disk=59.85552215576172GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.807 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.807 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.827 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.854 2 DEBUG nova.compute.manager [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.891 2 INFO nova.compute.manager [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] instance snapshotting
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.899 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 548bff7e-531b-4f5d-b4d3-18d586f46581 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.900 2 WARNING nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 753ea698-6cc6-4a73-a0d2-1366e5374a9c is not being actively managed by this compute host but has allocations referencing this compute host: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. Skipping heal of allocation because we do not know what to do.
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.900 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 01db05f6-07fb-41b5-8aaf-27ad5712fcda actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.900 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.900 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 08:57:16 compute-0 nova_compute[259627]: 2025-10-14 08:57:16.900 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 08:57:17 compute-0 nova_compute[259627]: 2025-10-14 08:57:17.009 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:17 compute-0 nova_compute[259627]: 2025-10-14 08:57:17.160 2 INFO nova.virt.libvirt.driver [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Beginning live snapshot process
Oct 14 08:57:17 compute-0 nova_compute[259627]: 2025-10-14 08:57:17.407 2 DEBUG nova.virt.libvirt.imagebackend [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 08:57:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:57:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 293 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.8 MiB/s wr, 281 op/s
Oct 14 08:57:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:57:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2841566202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:17 compute-0 nova_compute[259627]: 2025-10-14 08:57:17.541 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:17 compute-0 nova_compute[259627]: 2025-10-14 08:57:17.547 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:57:17 compute-0 nova_compute[259627]: 2025-10-14 08:57:17.565 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:57:17 compute-0 nova_compute[259627]: 2025-10-14 08:57:17.601 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 08:57:17 compute-0 nova_compute[259627]: 2025-10-14 08:57:17.603 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:17 compute-0 nova_compute[259627]: 2025-10-14 08:57:17.604 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:17 compute-0 nova_compute[259627]: 2025-10-14 08:57:17.617 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:17 compute-0 nova_compute[259627]: 2025-10-14 08:57:17.647 2 INFO nova.scheduler.client.report [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Deleted allocations for instance 753ea698-6cc6-4a73-a0d2-1366e5374a9c
Oct 14 08:57:17 compute-0 nova_compute[259627]: 2025-10-14 08:57:17.668 2 DEBUG nova.storage.rbd_utils [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] creating snapshot(836a67783e6e421cb193ce066e0062c6) on rbd image(cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 08:57:17 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2841566202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:17 compute-0 nova_compute[259627]: 2025-10-14 08:57:17.723 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "753ea698-6cc6-4a73-a0d2-1366e5374a9c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:18 compute-0 nova_compute[259627]: 2025-10-14 08:57:18.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:18 compute-0 ovn_controller[152662]: 2025-10-14T08:57:18Z|00118|binding|INFO|Releasing lport ef5c894d-34c4-4781-b15c-6813576a45e8 from this chassis (sb_readonly=0)
Oct 14 08:57:18 compute-0 ovn_controller[152662]: 2025-10-14T08:57:18Z|00119|binding|INFO|Releasing lport 1eaf3c85-b7b7-4dd7-ad1f-33385c25330b from this chassis (sb_readonly=0)
Oct 14 08:57:18 compute-0 nova_compute[259627]: 2025-10-14 08:57:18.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:18 compute-0 ovn_controller[152662]: 2025-10-14T08:57:18Z|00120|binding|INFO|Releasing lport ef5c894d-34c4-4781-b15c-6813576a45e8 from this chassis (sb_readonly=0)
Oct 14 08:57:18 compute-0 ovn_controller[152662]: 2025-10-14T08:57:18Z|00121|binding|INFO|Releasing lport 1eaf3c85-b7b7-4dd7-ad1f-33385c25330b from this chassis (sb_readonly=0)
Oct 14 08:57:18 compute-0 nova_compute[259627]: 2025-10-14 08:57:18.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct 14 08:57:18 compute-0 ceph-mon[74249]: pgmap v1161: 305 pgs: 305 active+clean; 293 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.8 MiB/s wr, 281 op/s
Oct 14 08:57:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct 14 08:57:18 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct 14 08:57:18 compute-0 nova_compute[259627]: 2025-10-14 08:57:18.793 2 DEBUG nova.storage.rbd_utils [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] cloning vms/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk@836a67783e6e421cb193ce066e0062c6 to images/b53dc6f8-1bef-45e3-8ec2-4d8ee625214e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 08:57:18 compute-0 nova_compute[259627]: 2025-10-14 08:57:18.906 2 DEBUG nova.storage.rbd_utils [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] flattening images/b53dc6f8-1bef-45e3-8ec2-4d8ee625214e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 08:57:19 compute-0 nova_compute[259627]: 2025-10-14 08:57:19.170 2 DEBUG nova.storage.rbd_utils [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] removing snapshot(836a67783e6e421cb193ce066e0062c6) on rbd image(cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 08:57:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 293 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.3 MiB/s wr, 337 op/s
Oct 14 08:57:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct 14 08:57:19 compute-0 ceph-mon[74249]: osdmap e129: 3 total, 3 up, 3 in
Oct 14 08:57:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct 14 08:57:19 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct 14 08:57:19 compute-0 nova_compute[259627]: 2025-10-14 08:57:19.785 2 DEBUG nova.storage.rbd_utils [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] creating snapshot(snap) on rbd image(b53dc6f8-1bef-45e3-8ec2-4d8ee625214e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 08:57:20 compute-0 nova_compute[259627]: 2025-10-14 08:57:20.388 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432225.38623, 45f3b13d-65b1-4bbf-8192-7b842f616b4d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:20 compute-0 nova_compute[259627]: 2025-10-14 08:57:20.388 2 INFO nova.compute.manager [-] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] VM Stopped (Lifecycle Event)
Oct 14 08:57:20 compute-0 nova_compute[259627]: 2025-10-14 08:57:20.392 2 DEBUG nova.compute.manager [req-21f898bb-d25c-4f97-8257-d846e235898c req-2e9325dd-3126-4ceb-99cd-28a1b7314132 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-changed-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:20 compute-0 nova_compute[259627]: 2025-10-14 08:57:20.392 2 DEBUG nova.compute.manager [req-21f898bb-d25c-4f97-8257-d846e235898c req-2e9325dd-3126-4ceb-99cd-28a1b7314132 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Refreshing instance network info cache due to event network-changed-499f3731-d66b-4964-b5a8-387adacf5166. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:57:20 compute-0 nova_compute[259627]: 2025-10-14 08:57:20.393 2 DEBUG oslo_concurrency.lockutils [req-21f898bb-d25c-4f97-8257-d846e235898c req-2e9325dd-3126-4ceb-99cd-28a1b7314132 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:57:20 compute-0 nova_compute[259627]: 2025-10-14 08:57:20.393 2 DEBUG oslo_concurrency.lockutils [req-21f898bb-d25c-4f97-8257-d846e235898c req-2e9325dd-3126-4ceb-99cd-28a1b7314132 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:57:20 compute-0 nova_compute[259627]: 2025-10-14 08:57:20.393 2 DEBUG nova.network.neutron [req-21f898bb-d25c-4f97-8257-d846e235898c req-2e9325dd-3126-4ceb-99cd-28a1b7314132 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Refreshing network info cache for port 499f3731-d66b-4964-b5a8-387adacf5166 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:57:20 compute-0 nova_compute[259627]: 2025-10-14 08:57:20.415 2 DEBUG nova.compute.manager [None req-381e5263-0efa-46e9-8117-347b7cc7f174 - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:20 compute-0 nova_compute[259627]: 2025-10-14 08:57:20.605 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:57:20 compute-0 nova_compute[259627]: 2025-10-14 08:57:20.606 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 08:57:20 compute-0 nova_compute[259627]: 2025-10-14 08:57:20.606 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 08:57:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct 14 08:57:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct 14 08:57:20 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct 14 08:57:20 compute-0 ceph-mon[74249]: pgmap v1163: 305 pgs: 305 active+clean; 293 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.3 MiB/s wr, 337 op/s
Oct 14 08:57:20 compute-0 ceph-mon[74249]: osdmap e130: 3 total, 3 up, 3 in
Oct 14 08:57:20 compute-0 nova_compute[259627]: 2025-10-14 08:57:20.830 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:57:20 compute-0 nova_compute[259627]: 2025-10-14 08:57:20.831 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:57:20 compute-0 nova_compute[259627]: 2025-10-14 08:57:20.831 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 08:57:20 compute-0 nova_compute[259627]: 2025-10-14 08:57:20.831 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 548bff7e-531b-4f5d-b4d3-18d586f46581 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image b53dc6f8-1bef-45e3-8ec2-4d8ee625214e could not be found.
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID b53dc6f8-1bef-45e3-8ec2-4d8ee625214e
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver 
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver 
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image b53dc6f8-1bef-45e3-8ec2-4d8ee625214e could not be found.
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver 
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.242 2 DEBUG nova.storage.rbd_utils [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] removing snapshot(snap) on rbd image(b53dc6f8-1bef-45e3-8ec2-4d8ee625214e) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.401 2 DEBUG oslo_concurrency.lockutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.401 2 DEBUG oslo_concurrency.lockutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.402 2 DEBUG oslo_concurrency.lockutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.403 2 DEBUG oslo_concurrency.lockutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.403 2 DEBUG oslo_concurrency.lockutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.405 2 INFO nova.compute.manager [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Terminating instance
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.407 2 DEBUG nova.compute.manager [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:57:21 compute-0 kernel: tap499f3731-d6 (unregistering): left promiscuous mode
Oct 14 08:57:21 compute-0 NetworkManager[44885]: <info>  [1760432241.4478] device (tap499f3731-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:57:21 compute-0 ovn_controller[152662]: 2025-10-14T08:57:21Z|00122|binding|INFO|Releasing lport 499f3731-d66b-4964-b5a8-387adacf5166 from this chassis (sb_readonly=0)
Oct 14 08:57:21 compute-0 ovn_controller[152662]: 2025-10-14T08:57:21Z|00123|binding|INFO|Setting lport 499f3731-d66b-4964-b5a8-387adacf5166 down in Southbound
Oct 14 08:57:21 compute-0 ovn_controller[152662]: 2025-10-14T08:57:21Z|00124|binding|INFO|Removing iface tap499f3731-d6 ovn-installed in OVS
Oct 14 08:57:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.467 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:ee:87 10.100.0.5'], port_security=['fa:16:3e:58:ee:87 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '01db05f6-07fb-41b5-8aaf-27ad5712fcda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bf65c21e4104af6981b071561617657', 'neutron:revision_number': '8', 'neutron:security_group_ids': '62a2565f-d2d5-452b-bee3-932bd15ef802 b044039a-151e-49aa-aa1d-0e709a3a113a e50f77b0-42a8-4edd-9a84-05435c5fb458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2c96ef8-3846-498d-b4a9-f6fd46fb5d04, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=499f3731-d66b-4964-b5a8-387adacf5166) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:57:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.468 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 499f3731-d66b-4964-b5a8-387adacf5166 in datapath 58ff48d6-a644-40e6-8fc9-ee19b4354df9 unbound from our chassis
Oct 14 08:57:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.470 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58ff48d6-a644-40e6-8fc9-ee19b4354df9
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.499 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3f32b683-5f9a-40e0-8a87-3430c77d4224]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 260 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 191 op/s
Oct 14 08:57:21 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000011.scope: Deactivated successfully.
Oct 14 08:57:21 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000011.scope: Consumed 8.363s CPU time.
Oct 14 08:57:21 compute-0 systemd-machined[214636]: Machine qemu-20-instance-00000011 terminated.
Oct 14 08:57:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.530 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f90d61-a7cb-40d5-9a23-d70cf01cea7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.534 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[44d81a3e-aab2-4f45-b175-9dc4284bad0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.560 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[85efc62f-f4da-4507-8c55-87ab10906692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.580 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e7ad63-d38f-4294-9221-3354073754a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58ff48d6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:28:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597136, 'reachable_time': 43498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287301, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.598 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6773ece-4dd7-4844-9649-dc86af3b368c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap58ff48d6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597151, 'tstamp': 597151}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287302, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap58ff48d6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597154, 'tstamp': 597154}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287302, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.600 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58ff48d6-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.606 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58ff48d6-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.606 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:57:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.607 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58ff48d6-a0, col_values=(('external_ids', {'iface-id': '1eaf3c85-b7b7-4dd7-ad1f-33385c25330b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.608 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.646 2 INFO nova.virt.libvirt.driver [-] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Instance destroyed successfully.
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.647 2 DEBUG nova.objects.instance [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lazy-loading 'resources' on Instance uuid 01db05f6-07fb-41b5-8aaf-27ad5712fcda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.667 2 DEBUG nova.virt.libvirt.vif [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:56:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1349611853',display_name='tempest-SecurityGroupsTestJSON-server-1349611853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1349611853',id=17,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:57:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9bf65c21e4104af6981b071561617657',ramdisk_id='',reservation_id='r-qk8wy1ld',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-663845074',owner_user_name='tempest-SecurityGroupsTestJSON-663845074-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:57:15Z,user_data=None,user_id='f50e2582d63041b682c71a379f763c0e',uuid=01db05f6-07fb-41b5-8aaf-27ad5712fcda,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.667 2 DEBUG nova.network.os_vif_util [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converting VIF {"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.668 2 DEBUG nova.network.os_vif_util [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.668 2 DEBUG os_vif [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.672 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap499f3731-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.680 2 INFO os_vif [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6')
Oct 14 08:57:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct 14 08:57:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct 14 08:57:21 compute-0 ceph-mon[74249]: osdmap e131: 3 total, 3 up, 3 in
Oct 14 08:57:21 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct 14 08:57:21 compute-0 nova_compute[259627]: 2025-10-14 08:57:21.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.184 2 WARNING nova.compute.manager [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Image not found during snapshot: nova.exception.ImageNotFound: Image b53dc6f8-1bef-45e3-8ec2-4d8ee625214e could not be found.
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.230 2 INFO nova.virt.libvirt.driver [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Deleting instance files /var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda_del
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.231 2 INFO nova.virt.libvirt.driver [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Deletion of /var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda_del complete
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.303 2 INFO nova.compute.manager [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Took 0.90 seconds to destroy the instance on the hypervisor.
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.303 2 DEBUG oslo.service.loopingcall [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.304 2 DEBUG nova.compute.manager [-] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.304 2 DEBUG nova.network.neutron [-] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.400 2 DEBUG nova.network.neutron [req-21f898bb-d25c-4f97-8257-d846e235898c req-2e9325dd-3126-4ceb-99cd-28a1b7314132 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Updated VIF entry in instance network info cache for port 499f3731-d66b-4964-b5a8-387adacf5166. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.401 2 DEBUG nova.network.neutron [req-21f898bb-d25c-4f97-8257-d846e235898c req-2e9325dd-3126-4ceb-99cd-28a1b7314132 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Updating instance_info_cache with network_info: [{"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.425 2 DEBUG oslo_concurrency.lockutils [req-21f898bb-d25c-4f97-8257-d846e235898c req-2e9325dd-3126-4ceb-99cd-28a1b7314132 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:57:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.527 2 DEBUG nova.compute.manager [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-unplugged-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.528 2 DEBUG oslo_concurrency.lockutils [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.528 2 DEBUG oslo_concurrency.lockutils [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.528 2 DEBUG oslo_concurrency.lockutils [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.529 2 DEBUG nova.compute.manager [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] No waiting events found dispatching network-vif-unplugged-499f3731-d66b-4964-b5a8-387adacf5166 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.529 2 DEBUG nova.compute.manager [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-unplugged-499f3731-d66b-4964-b5a8-387adacf5166 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.529 2 DEBUG nova.compute.manager [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.529 2 DEBUG oslo_concurrency.lockutils [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.530 2 DEBUG oslo_concurrency.lockutils [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.530 2 DEBUG oslo_concurrency.lockutils [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.530 2 DEBUG nova.compute.manager [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] No waiting events found dispatching network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:57:22 compute-0 nova_compute[259627]: 2025-10-14 08:57:22.531 2 WARNING nova.compute.manager [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received unexpected event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 for instance with vm_state active and task_state deleting.
Oct 14 08:57:22 compute-0 ceph-mon[74249]: pgmap v1166: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 260 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 191 op/s
Oct 14 08:57:22 compute-0 ceph-mon[74249]: osdmap e132: 3 total, 3 up, 3 in
Oct 14 08:57:23 compute-0 nova_compute[259627]: 2025-10-14 08:57:23.114 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updating instance_info_cache with network_info: [{"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:23 compute-0 nova_compute[259627]: 2025-10-14 08:57:23.139 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:57:23 compute-0 nova_compute[259627]: 2025-10-14 08:57:23.140 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 08:57:23 compute-0 nova_compute[259627]: 2025-10-14 08:57:23.141 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:57:23 compute-0 nova_compute[259627]: 2025-10-14 08:57:23.463 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432228.4614785, 251ae181-b980-4338-a6b5-eee48450b510 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:23 compute-0 nova_compute[259627]: 2025-10-14 08:57:23.464 2 INFO nova.compute.manager [-] [instance: 251ae181-b980-4338-a6b5-eee48450b510] VM Stopped (Lifecycle Event)
Oct 14 08:57:23 compute-0 nova_compute[259627]: 2025-10-14 08:57:23.491 2 DEBUG nova.compute.manager [None req-6148d398-19f5-4518-b1ac-a6ca9eff49c5 - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:23 compute-0 nova_compute[259627]: 2025-10-14 08:57:23.491 2 DEBUG nova.network.neutron [-] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:23 compute-0 nova_compute[259627]: 2025-10-14 08:57:23.508 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:57:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 260 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.5 MiB/s wr, 240 op/s
Oct 14 08:57:23 compute-0 nova_compute[259627]: 2025-10-14 08:57:23.527 2 INFO nova.compute.manager [-] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Took 1.22 seconds to deallocate network for instance.
Oct 14 08:57:23 compute-0 nova_compute[259627]: 2025-10-14 08:57:23.576 2 DEBUG oslo_concurrency.lockutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:23 compute-0 nova_compute[259627]: 2025-10-14 08:57:23.577 2 DEBUG oslo_concurrency.lockutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:23 compute-0 nova_compute[259627]: 2025-10-14 08:57:23.678 2 DEBUG oslo_concurrency.processutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:57:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3961221219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.134 2 DEBUG oslo_concurrency.processutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.141 2 DEBUG nova.compute.provider_tree [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.159 2 DEBUG nova.scheduler.client.report [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.174 2 DEBUG oslo_concurrency.lockutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.175 2 DEBUG oslo_concurrency.lockutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.175 2 DEBUG oslo_concurrency.lockutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.175 2 DEBUG oslo_concurrency.lockutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.176 2 DEBUG oslo_concurrency.lockutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.177 2 INFO nova.compute.manager [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Terminating instance
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.178 2 DEBUG nova.compute.manager [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.180 2 DEBUG oslo_concurrency.lockutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.209 2 INFO nova.scheduler.client.report [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Deleted allocations for instance 01db05f6-07fb-41b5-8aaf-27ad5712fcda
Oct 14 08:57:24 compute-0 kernel: tapd87ee1ff-7b (unregistering): left promiscuous mode
Oct 14 08:57:24 compute-0 NetworkManager[44885]: <info>  [1760432244.2493] device (tapd87ee1ff-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:24 compute-0 ovn_controller[152662]: 2025-10-14T08:57:24Z|00125|binding|INFO|Releasing lport d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb from this chassis (sb_readonly=0)
Oct 14 08:57:24 compute-0 ovn_controller[152662]: 2025-10-14T08:57:24Z|00126|binding|INFO|Setting lport d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb down in Southbound
Oct 14 08:57:24 compute-0 ovn_controller[152662]: 2025-10-14T08:57:24Z|00127|binding|INFO|Removing iface tapd87ee1ff-7b ovn-installed in OVS
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.266 2 DEBUG oslo_concurrency.lockutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.266 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:e1:2a 10.100.0.11'], port_security=['fa:16:3e:80:e1:2a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58d74886-d603-4fb5-b8ff-9c184284bdce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f24bbeb2f91141e294590ca2afc5ed42', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd818ab3d-f5ea-4d77-bc47-7efe2295e146', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1adf6e68-9c1a-4ee7-a829-03bbd6a5ae48, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:57:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.268 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb in datapath 58d74886-d603-4fb5-b8ff-9c184284bdce unbound from our chassis
Oct 14 08:57:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.270 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58d74886-d603-4fb5-b8ff-9c184284bdce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:57:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.271 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[15435b0d-0ec1-432d-a2bb-3e80c0b39520]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.272 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce namespace which is not needed anymore
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:24 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Deactivated successfully.
Oct 14 08:57:24 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Consumed 11.587s CPU time.
Oct 14 08:57:24 compute-0 systemd-machined[214636]: Machine qemu-19-instance-00000013 terminated.
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.422 2 INFO nova.virt.libvirt.driver [-] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Instance destroyed successfully.
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.424 2 DEBUG nova.objects.instance [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lazy-loading 'resources' on Instance uuid cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.439 2 DEBUG nova.virt.libvirt.vif [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:57:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-476297486',display_name='tempest-ImagesOneServerNegativeTestJSON-server-476297486',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-476297486',id=19,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:57:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f24bbeb2f91141e294590ca2afc5ed42',ramdisk_id='',reservation_id='r-7i3jta7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-531836018',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-531836018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:57:22Z,user_data=None,user_id='aafd6ad40c944c3eb14e7fbf454040c3',uuid=cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.440 2 DEBUG nova.network.os_vif_util [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converting VIF {"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.441 2 DEBUG nova.network.os_vif_util [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:e1:2a,bridge_name='br-int',has_traffic_filtering=True,id=d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87ee1ff-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.442 2 DEBUG os_vif [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:e1:2a,bridge_name='br-int',has_traffic_filtering=True,id=d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87ee1ff-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd87ee1ff-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:24 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[286975]: [NOTICE]   (286979) : haproxy version is 2.8.14-c23fe91
Oct 14 08:57:24 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[286975]: [NOTICE]   (286979) : path to executable is /usr/sbin/haproxy
Oct 14 08:57:24 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[286975]: [WARNING]  (286979) : Exiting Master process...
Oct 14 08:57:24 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[286975]: [WARNING]  (286979) : Exiting Master process...
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:57:24 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[286975]: [ALERT]    (286979) : Current worker (286983) exited with code 143 (Terminated)
Oct 14 08:57:24 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[286975]: [WARNING]  (286979) : All workers exited. Exiting... (0)
Oct 14 08:57:24 compute-0 systemd[1]: libpod-e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536.scope: Deactivated successfully.
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.457 2 INFO os_vif [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:e1:2a,bridge_name='br-int',has_traffic_filtering=True,id=d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87ee1ff-7b')
Oct 14 08:57:24 compute-0 podman[287397]: 2025-10-14 08:57:24.464111411 +0000 UTC m=+0.080923279 container died e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 08:57:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cee4e261b09b2f84c9173291ee5a39f3882304b4c7d47e1f06ce36bd1963e8e-merged.mount: Deactivated successfully.
Oct 14 08:57:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536-userdata-shm.mount: Deactivated successfully.
Oct 14 08:57:24 compute-0 podman[287397]: 2025-10-14 08:57:24.514256042 +0000 UTC m=+0.131067920 container cleanup e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 14 08:57:24 compute-0 systemd[1]: libpod-conmon-e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536.scope: Deactivated successfully.
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.552 2 DEBUG nova.compute.manager [req-e064658c-9af4-4a28-85fd-cfa17c71d4c3 req-9de1b09b-bfe8-4b9c-acfa-6292aee7599f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Received event network-vif-unplugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.553 2 DEBUG oslo_concurrency.lockutils [req-e064658c-9af4-4a28-85fd-cfa17c71d4c3 req-9de1b09b-bfe8-4b9c-acfa-6292aee7599f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.553 2 DEBUG oslo_concurrency.lockutils [req-e064658c-9af4-4a28-85fd-cfa17c71d4c3 req-9de1b09b-bfe8-4b9c-acfa-6292aee7599f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.554 2 DEBUG oslo_concurrency.lockutils [req-e064658c-9af4-4a28-85fd-cfa17c71d4c3 req-9de1b09b-bfe8-4b9c-acfa-6292aee7599f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.554 2 DEBUG nova.compute.manager [req-e064658c-9af4-4a28-85fd-cfa17c71d4c3 req-9de1b09b-bfe8-4b9c-acfa-6292aee7599f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] No waiting events found dispatching network-vif-unplugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.554 2 DEBUG nova.compute.manager [req-e064658c-9af4-4a28-85fd-cfa17c71d4c3 req-9de1b09b-bfe8-4b9c-acfa-6292aee7599f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Received event network-vif-unplugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 08:57:24 compute-0 podman[287449]: 2025-10-14 08:57:24.597433324 +0000 UTC m=+0.048472461 container remove e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 08:57:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.604 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[53079310-3f8b-4924-9df9-3ca42f649e73]: (4, ('Tue Oct 14 08:57:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce (e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536)\ne5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536\nTue Oct 14 08:57:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce (e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536)\ne5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.606 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9d8a8321-61ae-43bf-9e45-777ac5112e64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.607 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58d74886-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:24 compute-0 kernel: tap58d74886-d0: left promiscuous mode
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.628 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[09c4c6cb-d564-46a2-ab76-2ecf38bff814]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.650 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1e64b0bd-e9e3-4612-ab60-923a1fd3fd21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.651 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[55fb8494-bba7-4546-bcc5-a9839a272900]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.665 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9924ce-26b2-4c54-b1a0-98625aa06063]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601098, 'reachable_time': 27715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287464, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d58d74886\x2dd603\x2d4fb5\x2db8ff\x2d9c184284bdce.mount: Deactivated successfully.
Oct 14 08:57:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.670 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:57:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.670 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[6c72af89-038d-4ee8-a881-8049e47a2e46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.717 2 DEBUG nova.compute.manager [req-b9da01fb-57ce-4359-9c1d-ce311e98d904 req-e2883e82-3130-412d-9b85-272c0ff074c4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-deleted-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:24 compute-0 ceph-mon[74249]: pgmap v1168: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 260 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.5 MiB/s wr, 240 op/s
Oct 14 08:57:24 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3961221219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.846 2 INFO nova.virt.libvirt.driver [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Deleting instance files /var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_del
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.848 2 INFO nova.virt.libvirt.driver [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Deletion of /var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_del complete
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.897 2 INFO nova.compute.manager [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.898 2 DEBUG oslo.service.loopingcall [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.899 2 DEBUG nova.compute.manager [-] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:57:24 compute-0 nova_compute[259627]: 2025-10-14 08:57:24.899 2 DEBUG nova.network.neutron [-] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:57:25 compute-0 nova_compute[259627]: 2025-10-14 08:57:25.480 2 DEBUG nova.network.neutron [-] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 121 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 7.6 MiB/s wr, 431 op/s
Oct 14 08:57:25 compute-0 nova_compute[259627]: 2025-10-14 08:57:25.518 2 INFO nova.compute.manager [-] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Took 0.62 seconds to deallocate network for instance.
Oct 14 08:57:25 compute-0 nova_compute[259627]: 2025-10-14 08:57:25.607 2 DEBUG oslo_concurrency.lockutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:25 compute-0 nova_compute[259627]: 2025-10-14 08:57:25.608 2 DEBUG oslo_concurrency.lockutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:25 compute-0 nova_compute[259627]: 2025-10-14 08:57:25.673 2 DEBUG oslo_concurrency.processutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:57:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/587001898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:26 compute-0 nova_compute[259627]: 2025-10-14 08:57:26.157 2 DEBUG oslo_concurrency.processutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:26 compute-0 nova_compute[259627]: 2025-10-14 08:57:26.162 2 DEBUG nova.compute.provider_tree [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:57:26 compute-0 nova_compute[259627]: 2025-10-14 08:57:26.190 2 DEBUG nova.scheduler.client.report [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:57:26 compute-0 nova_compute[259627]: 2025-10-14 08:57:26.220 2 DEBUG oslo_concurrency.lockutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:26 compute-0 nova_compute[259627]: 2025-10-14 08:57:26.246 2 INFO nova.scheduler.client.report [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Deleted allocations for instance cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a
Oct 14 08:57:26 compute-0 nova_compute[259627]: 2025-10-14 08:57:26.326 2 DEBUG oslo_concurrency.lockutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:26 compute-0 nova_compute[259627]: 2025-10-14 08:57:26.750 2 DEBUG nova.compute.manager [req-abf447e6-01c6-4a71-952a-4f3a2dcfdfa8 req-46b8f4c0-89ed-4b90-8ab4-90c39c68ec31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Received event network-vif-plugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:26 compute-0 nova_compute[259627]: 2025-10-14 08:57:26.750 2 DEBUG oslo_concurrency.lockutils [req-abf447e6-01c6-4a71-952a-4f3a2dcfdfa8 req-46b8f4c0-89ed-4b90-8ab4-90c39c68ec31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:26 compute-0 nova_compute[259627]: 2025-10-14 08:57:26.751 2 DEBUG oslo_concurrency.lockutils [req-abf447e6-01c6-4a71-952a-4f3a2dcfdfa8 req-46b8f4c0-89ed-4b90-8ab4-90c39c68ec31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:26 compute-0 nova_compute[259627]: 2025-10-14 08:57:26.751 2 DEBUG oslo_concurrency.lockutils [req-abf447e6-01c6-4a71-952a-4f3a2dcfdfa8 req-46b8f4c0-89ed-4b90-8ab4-90c39c68ec31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:26 compute-0 nova_compute[259627]: 2025-10-14 08:57:26.751 2 DEBUG nova.compute.manager [req-abf447e6-01c6-4a71-952a-4f3a2dcfdfa8 req-46b8f4c0-89ed-4b90-8ab4-90c39c68ec31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] No waiting events found dispatching network-vif-plugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:57:26 compute-0 nova_compute[259627]: 2025-10-14 08:57:26.752 2 WARNING nova.compute.manager [req-abf447e6-01c6-4a71-952a-4f3a2dcfdfa8 req-46b8f4c0-89ed-4b90-8ab4-90c39c68ec31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Received unexpected event network-vif-plugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb for instance with vm_state deleted and task_state None.
Oct 14 08:57:26 compute-0 nova_compute[259627]: 2025-10-14 08:57:26.752 2 DEBUG nova.compute.manager [req-abf447e6-01c6-4a71-952a-4f3a2dcfdfa8 req-46b8f4c0-89ed-4b90-8ab4-90c39c68ec31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Received event network-vif-deleted-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:26 compute-0 ceph-mon[74249]: pgmap v1169: 305 pgs: 305 active+clean; 121 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 7.6 MiB/s wr, 431 op/s
Oct 14 08:57:26 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/587001898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:26 compute-0 nova_compute[259627]: 2025-10-14 08:57:26.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 08:57:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct 14 08:57:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct 14 08:57:27 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct 14 08:57:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 121 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 3.6 MiB/s wr, 212 op/s
Oct 14 08:57:28 compute-0 ceph-mon[74249]: osdmap e133: 3 total, 3 up, 3 in
Oct 14 08:57:28 compute-0 ceph-mon[74249]: pgmap v1171: 305 pgs: 305 active+clean; 121 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 3.6 MiB/s wr, 212 op/s
Oct 14 08:57:29 compute-0 nova_compute[259627]: 2025-10-14 08:57:29.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 121 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 312 KiB/s rd, 3.1 MiB/s wr, 179 op/s
Oct 14 08:57:29 compute-0 podman[287491]: 2025-10-14 08:57:29.656002283 +0000 UTC m=+0.064724401 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:57:29 compute-0 podman[287492]: 2025-10-14 08:57:29.663823795 +0000 UTC m=+0.066246908 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible)
Oct 14 08:57:30 compute-0 ceph-mon[74249]: pgmap v1172: 305 pgs: 305 active+clean; 121 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 312 KiB/s rd, 3.1 MiB/s wr, 179 op/s
Oct 14 08:57:30 compute-0 nova_compute[259627]: 2025-10-14 08:57:30.993 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432235.9921277, 753ea698-6cc6-4a73-a0d2-1366e5374a9c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:30 compute-0 nova_compute[259627]: 2025-10-14 08:57:30.993 2 INFO nova.compute.manager [-] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] VM Stopped (Lifecycle Event)
Oct 14 08:57:31 compute-0 nova_compute[259627]: 2025-10-14 08:57:31.015 2 DEBUG nova.compute.manager [None req-8e0ac198-8c85-4a0a-8761-1c5e8049edd5 - - - - - -] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 121 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 256 KiB/s rd, 2.5 MiB/s wr, 147 op/s
Oct 14 08:57:31 compute-0 nova_compute[259627]: 2025-10-14 08:57:31.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:32 compute-0 nova_compute[259627]: 2025-10-14 08:57:32.204 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "fc212c27-f5c2-4656-9c1f-7c39234fea45" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:32 compute-0 nova_compute[259627]: 2025-10-14 08:57:32.205 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:32 compute-0 nova_compute[259627]: 2025-10-14 08:57:32.226 2 DEBUG nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:57:32 compute-0 nova_compute[259627]: 2025-10-14 08:57:32.307 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:32 compute-0 nova_compute[259627]: 2025-10-14 08:57:32.308 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:32 compute-0 nova_compute[259627]: 2025-10-14 08:57:32.313 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:57:32 compute-0 nova_compute[259627]: 2025-10-14 08:57:32.313 2 INFO nova.compute.claims [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:57:32 compute-0 nova_compute[259627]: 2025-10-14 08:57:32.454 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:32.462 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:57:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:32.464 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 08:57:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:57:32 compute-0 nova_compute[259627]: 2025-10-14 08:57:32.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:32 compute-0 ceph-mon[74249]: pgmap v1173: 305 pgs: 305 active+clean; 121 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 256 KiB/s rd, 2.5 MiB/s wr, 147 op/s
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:57:32
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'volumes', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'default.rgw.control', 'images', 'vms']
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:57:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:57:32 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1016741972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 08:57:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 08:57:32 compute-0 nova_compute[259627]: 2025-10-14 08:57:32.964 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:32 compute-0 nova_compute[259627]: 2025-10-14 08:57:32.972 2 DEBUG nova.compute.provider_tree [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.003 2 DEBUG nova.scheduler.client.report [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.026 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.027 2 DEBUG nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.073 2 DEBUG nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.073 2 DEBUG nova.network.neutron [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.163 2 INFO nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.189 2 DEBUG nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.324 2 DEBUG nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.326 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.326 2 INFO nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Creating image(s)
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.351 2 DEBUG nova.storage.rbd_utils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image fc212c27-f5c2-4656-9c1f-7c39234fea45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.376 2 DEBUG nova.storage.rbd_utils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image fc212c27-f5c2-4656-9c1f-7c39234fea45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.402 2 DEBUG nova.storage.rbd_utils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image fc212c27-f5c2-4656-9c1f-7c39234fea45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.406 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.441 2 DEBUG nova.policy [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aafd6ad40c944c3eb14e7fbf454040c3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f24bbeb2f91141e294590ca2afc5ed42', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.486 2 DEBUG oslo_concurrency.lockutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "548bff7e-531b-4f5d-b4d3-18d586f46581" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.487 2 DEBUG oslo_concurrency.lockutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.487 2 DEBUG oslo_concurrency.lockutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.488 2 DEBUG oslo_concurrency.lockutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.488 2 DEBUG oslo_concurrency.lockutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.489 2 INFO nova.compute.manager [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Terminating instance
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.490 2 DEBUG nova.compute.manager [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.501 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.501 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.502 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.502 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 121 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 249 KiB/s rd, 2.4 MiB/s wr, 143 op/s
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.528 2 DEBUG nova.storage.rbd_utils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image fc212c27-f5c2-4656-9c1f-7c39234fea45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.531 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 fc212c27-f5c2-4656-9c1f-7c39234fea45_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:33 compute-0 kernel: tap7da6c99d-4e (unregistering): left promiscuous mode
Oct 14 08:57:33 compute-0 NetworkManager[44885]: <info>  [1760432253.5457] device (tap7da6c99d-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:57:33 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1016741972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:33 compute-0 ovn_controller[152662]: 2025-10-14T08:57:33Z|00128|binding|INFO|Releasing lport 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 from this chassis (sb_readonly=0)
Oct 14 08:57:33 compute-0 ovn_controller[152662]: 2025-10-14T08:57:33Z|00129|binding|INFO|Setting lport 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 down in Southbound
Oct 14 08:57:33 compute-0 ovn_controller[152662]: 2025-10-14T08:57:33Z|00130|binding|INFO|Removing iface tap7da6c99d-4e ovn-installed in OVS
Oct 14 08:57:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.602 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:2f:21 10.100.0.9'], port_security=['fa:16:3e:a5:2f:21 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '548bff7e-531b-4f5d-b4d3-18d586f46581', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bf65c21e4104af6981b071561617657', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4c703b9d-d52b-458a-9ec5-48a1addb8499 6ea635a1-0626-4474-9bd2-f9cda3d316d5 e50f77b0-42a8-4edd-9a84-05435c5fb458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2c96ef8-3846-498d-b4a9-f6fd46fb5d04, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7da6c99d-4e04-4c0b-b4d0-d32a2e19c462) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:57:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.605 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 in datapath 58ff48d6-a644-40e6-8fc9-ee19b4354df9 unbound from our chassis
Oct 14 08:57:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.606 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58ff48d6-a644-40e6-8fc9-ee19b4354df9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:57:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.607 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[07532e72-f289-4531-83ab-0fa0706160b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.608 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9 namespace which is not needed anymore
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:33 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct 14 08:57:33 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 15.089s CPU time.
Oct 14 08:57:33 compute-0 systemd-machined[214636]: Machine qemu-13-instance-0000000d terminated.
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.724 2 INFO nova.virt.libvirt.driver [-] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Instance destroyed successfully.
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.725 2 DEBUG nova.objects.instance [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lazy-loading 'resources' on Instance uuid 548bff7e-531b-4f5d-b4d3-18d586f46581 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:33 compute-0 neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9[282796]: [NOTICE]   (282801) : haproxy version is 2.8.14-c23fe91
Oct 14 08:57:33 compute-0 neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9[282796]: [NOTICE]   (282801) : path to executable is /usr/sbin/haproxy
Oct 14 08:57:33 compute-0 neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9[282796]: [WARNING]  (282801) : Exiting Master process...
Oct 14 08:57:33 compute-0 neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9[282796]: [WARNING]  (282801) : Exiting Master process...
Oct 14 08:57:33 compute-0 neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9[282796]: [ALERT]    (282801) : Current worker (282804) exited with code 143 (Terminated)
Oct 14 08:57:33 compute-0 neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9[282796]: [WARNING]  (282801) : All workers exited. Exiting... (0)
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.750 2 DEBUG nova.virt.libvirt.vif [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1911316539',display_name='tempest-SecurityGroupsTestJSON-server-1911316539',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1911316539',id=13,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:56:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9bf65c21e4104af6981b071561617657',ramdisk_id='',reservation_id='r-xovv24ns',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-663845074',owner_user_name='tempest-SecurityGroupsTestJSON-663845074-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:56:34Z,user_data=None,user_id='f50e2582d63041b682c71a379f763c0e',uuid=548bff7e-531b-4f5d-b4d3-18d586f46581,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.751 2 DEBUG nova.network.os_vif_util [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converting VIF {"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.752 2 DEBUG nova.network.os_vif_util [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:2f:21,bridge_name='br-int',has_traffic_filtering=True,id=7da6c99d-4e04-4c0b-b4d0-d32a2e19c462,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da6c99d-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:57:33 compute-0 systemd[1]: libpod-6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765.scope: Deactivated successfully.
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.752 2 DEBUG os_vif [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:2f:21,bridge_name='br-int',has_traffic_filtering=True,id=7da6c99d-4e04-4c0b-b4d0-d32a2e19c462,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da6c99d-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.755 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7da6c99d-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:33 compute-0 podman[287670]: 2025-10-14 08:57:33.759614221 +0000 UTC m=+0.057034472 container died 6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.761 2 INFO os_vif [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:2f:21,bridge_name='br-int',has_traffic_filtering=True,id=7da6c99d-4e04-4c0b-b4d0-d32a2e19c462,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da6c99d-4e')
Oct 14 08:57:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765-userdata-shm.mount: Deactivated successfully.
Oct 14 08:57:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-47fd54790ff5fec54b5deca5f48d8cccc184d3f02bd560c1168873a2e4409722-merged.mount: Deactivated successfully.
Oct 14 08:57:33 compute-0 podman[287670]: 2025-10-14 08:57:33.803319564 +0000 UTC m=+0.100739815 container cleanup 6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 08:57:33 compute-0 systemd[1]: libpod-conmon-6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765.scope: Deactivated successfully.
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.840 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 fc212c27-f5c2-4656-9c1f-7c39234fea45_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:33 compute-0 podman[287731]: 2025-10-14 08:57:33.865350597 +0000 UTC m=+0.040493325 container remove 6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 08:57:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.873 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aa32b163-5c1b-492e-814f-31df3ac564fd]: (4, ('Tue Oct 14 08:57:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9 (6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765)\n6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765\nTue Oct 14 08:57:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9 (6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765)\n6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.875 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8af4e0-d02f-4546-84ab-4b1c8cb53644]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.875 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58ff48d6-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:33 compute-0 kernel: tap58ff48d6-a0: left promiscuous mode
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.891 2 DEBUG nova.storage.rbd_utils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] resizing rbd image fc212c27-f5c2-4656-9c1f-7c39234fea45_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:57:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.896 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[12fbbcdc-bda9-4ffe-ae19-65e64c9ad43f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:33 compute-0 nova_compute[259627]: 2025-10-14 08:57:33.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.922 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[79db778c-61ba-455d-bfbc-d0a00d10962a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.923 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b50ab952-040d-4c27-af45-67024eb018d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.937 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7958beac-91b8-442f-917b-96c1b3ff5412]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597126, 'reachable_time': 38808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287801, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:33 compute-0 systemd[1]: run-netns-ovnmeta\x2d58ff48d6\x2da644\x2d40e6\x2d8fc9\x2dee19b4354df9.mount: Deactivated successfully.
Oct 14 08:57:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.939 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:57:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.939 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[07cf275c-53b6-46a9-9500-6eb7e18d2bed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:34 compute-0 nova_compute[259627]: 2025-10-14 08:57:34.004 2 DEBUG nova.objects.instance [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lazy-loading 'migration_context' on Instance uuid fc212c27-f5c2-4656-9c1f-7c39234fea45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:34 compute-0 nova_compute[259627]: 2025-10-14 08:57:34.040 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:57:34 compute-0 nova_compute[259627]: 2025-10-14 08:57:34.041 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Ensure instance console log exists: /var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:57:34 compute-0 nova_compute[259627]: 2025-10-14 08:57:34.043 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:34 compute-0 nova_compute[259627]: 2025-10-14 08:57:34.044 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:34 compute-0 nova_compute[259627]: 2025-10-14 08:57:34.044 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:34 compute-0 nova_compute[259627]: 2025-10-14 08:57:34.147 2 DEBUG nova.network.neutron [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Successfully created port: bbcf1d8e-1698-4198-a641-527122f98e09 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:57:34 compute-0 nova_compute[259627]: 2025-10-14 08:57:34.184 2 INFO nova.virt.libvirt.driver [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Deleting instance files /var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581_del
Oct 14 08:57:34 compute-0 nova_compute[259627]: 2025-10-14 08:57:34.185 2 INFO nova.virt.libvirt.driver [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Deletion of /var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581_del complete
Oct 14 08:57:34 compute-0 nova_compute[259627]: 2025-10-14 08:57:34.240 2 INFO nova.compute.manager [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 14 08:57:34 compute-0 nova_compute[259627]: 2025-10-14 08:57:34.241 2 DEBUG oslo.service.loopingcall [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:57:34 compute-0 nova_compute[259627]: 2025-10-14 08:57:34.242 2 DEBUG nova.compute.manager [-] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:57:34 compute-0 nova_compute[259627]: 2025-10-14 08:57:34.242 2 DEBUG nova.network.neutron [-] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:57:34 compute-0 ceph-mon[74249]: pgmap v1174: 305 pgs: 305 active+clean; 121 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 249 KiB/s rd, 2.4 MiB/s wr, 143 op/s
Oct 14 08:57:34 compute-0 nova_compute[259627]: 2025-10-14 08:57:34.918 2 DEBUG nova.network.neutron [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Successfully updated port: bbcf1d8e-1698-4198-a641-527122f98e09 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:57:34 compute-0 nova_compute[259627]: 2025-10-14 08:57:34.947 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "refresh_cache-fc212c27-f5c2-4656-9c1f-7c39234fea45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:57:34 compute-0 nova_compute[259627]: 2025-10-14 08:57:34.947 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquired lock "refresh_cache-fc212c27-f5c2-4656-9c1f-7c39234fea45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:57:34 compute-0 nova_compute[259627]: 2025-10-14 08:57:34.947 2 DEBUG nova.network.neutron [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:57:35 compute-0 nova_compute[259627]: 2025-10-14 08:57:35.087 2 DEBUG nova.network.neutron [-] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:35 compute-0 nova_compute[259627]: 2025-10-14 08:57:35.124 2 INFO nova.compute.manager [-] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Took 0.88 seconds to deallocate network for instance.
Oct 14 08:57:35 compute-0 nova_compute[259627]: 2025-10-14 08:57:35.177 2 DEBUG nova.compute.manager [req-38a9e447-5750-4244-8747-31c9ae9230d5 req-8381449e-9450-43f4-81e0-f129b5ec3382 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Received event network-changed-bbcf1d8e-1698-4198-a641-527122f98e09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:35 compute-0 nova_compute[259627]: 2025-10-14 08:57:35.177 2 DEBUG nova.compute.manager [req-38a9e447-5750-4244-8747-31c9ae9230d5 req-8381449e-9450-43f4-81e0-f129b5ec3382 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Refreshing instance network info cache due to event network-changed-bbcf1d8e-1698-4198-a641-527122f98e09. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:57:35 compute-0 nova_compute[259627]: 2025-10-14 08:57:35.178 2 DEBUG oslo_concurrency.lockutils [req-38a9e447-5750-4244-8747-31c9ae9230d5 req-8381449e-9450-43f4-81e0-f129b5ec3382 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-fc212c27-f5c2-4656-9c1f-7c39234fea45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:57:35 compute-0 nova_compute[259627]: 2025-10-14 08:57:35.192 2 DEBUG oslo_concurrency.lockutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:35 compute-0 nova_compute[259627]: 2025-10-14 08:57:35.192 2 DEBUG oslo_concurrency.lockutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:35 compute-0 nova_compute[259627]: 2025-10-14 08:57:35.222 2 DEBUG nova.network.neutron [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:57:35 compute-0 nova_compute[259627]: 2025-10-14 08:57:35.250 2 DEBUG nova.compute.manager [req-700d3bf5-bfa0-4d57-9078-baa99c8ecf29 req-fac76693-345e-49d6-a8de-a40a9c8c025b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Received event network-vif-deleted-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:35 compute-0 nova_compute[259627]: 2025-10-14 08:57:35.257 2 DEBUG oslo_concurrency.processutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 08:57:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:57:35 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1185393835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:35 compute-0 nova_compute[259627]: 2025-10-14 08:57:35.762 2 DEBUG oslo_concurrency.processutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:35 compute-0 nova_compute[259627]: 2025-10-14 08:57:35.768 2 DEBUG nova.compute.provider_tree [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:57:35 compute-0 nova_compute[259627]: 2025-10-14 08:57:35.792 2 DEBUG nova.scheduler.client.report [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:57:35 compute-0 nova_compute[259627]: 2025-10-14 08:57:35.836 2 DEBUG oslo_concurrency.lockutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:35 compute-0 nova_compute[259627]: 2025-10-14 08:57:35.874 2 INFO nova.scheduler.client.report [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Deleted allocations for instance 548bff7e-531b-4f5d-b4d3-18d586f46581
Oct 14 08:57:35 compute-0 nova_compute[259627]: 2025-10-14 08:57:35.969 2 DEBUG oslo_concurrency.lockutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:36 compute-0 ceph-mon[74249]: pgmap v1175: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 08:57:36 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1185393835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:36 compute-0 nova_compute[259627]: 2025-10-14 08:57:36.644 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432241.643745, 01db05f6-07fb-41b5-8aaf-27ad5712fcda => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:36 compute-0 nova_compute[259627]: 2025-10-14 08:57:36.645 2 INFO nova.compute.manager [-] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] VM Stopped (Lifecycle Event)
Oct 14 08:57:36 compute-0 nova_compute[259627]: 2025-10-14 08:57:36.665 2 DEBUG nova.compute.manager [None req-755909fe-ee97-403d-8b10-2342e1aba93a - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:36 compute-0 nova_compute[259627]: 2025-10-14 08:57:36.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.151 2 DEBUG nova.network.neutron [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Updating instance_info_cache with network_info: [{"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.169 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Releasing lock "refresh_cache-fc212c27-f5c2-4656-9c1f-7c39234fea45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.170 2 DEBUG nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Instance network_info: |[{"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.170 2 DEBUG oslo_concurrency.lockutils [req-38a9e447-5750-4244-8747-31c9ae9230d5 req-8381449e-9450-43f4-81e0-f129b5ec3382 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-fc212c27-f5c2-4656-9c1f-7c39234fea45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.171 2 DEBUG nova.network.neutron [req-38a9e447-5750-4244-8747-31c9ae9230d5 req-8381449e-9450-43f4-81e0-f129b5ec3382 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Refreshing network info cache for port bbcf1d8e-1698-4198-a641-527122f98e09 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.174 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Start _get_guest_xml network_info=[{"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.179 2 WARNING nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.182 2 DEBUG nova.virt.libvirt.host [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.183 2 DEBUG nova.virt.libvirt.host [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.185 2 DEBUG nova.virt.libvirt.host [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.186 2 DEBUG nova.virt.libvirt.host [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.186 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.186 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.187 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.187 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.187 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.187 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.188 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.188 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.188 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.188 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.188 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.189 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.191 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:57:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 08:57:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:57:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2313730875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.701 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.721 2 DEBUG nova.storage.rbd_utils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image fc212c27-f5c2-4656-9c1f-7c39234fea45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:37 compute-0 nova_compute[259627]: 2025-10-14 08:57:37.725 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:57:38 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3550333764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.158 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.161 2 DEBUG nova.virt.libvirt.vif [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:57:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-373239651',display_name='tempest-ImagesOneServerNegativeTestJSON-server-373239651',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-373239651',id=20,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f24bbeb2f91141e294590ca2afc5ed42',ramdisk_id='',reservation_id='r-p1gxj4jk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-531836018',owner_user_name=
'tempest-ImagesOneServerNegativeTestJSON-531836018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:57:33Z,user_data=None,user_id='aafd6ad40c944c3eb14e7fbf454040c3',uuid=fc212c27-f5c2-4656-9c1f-7c39234fea45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.162 2 DEBUG nova.network.os_vif_util [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converting VIF {"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.164 2 DEBUG nova.network.os_vif_util [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:88:18,bridge_name='br-int',has_traffic_filtering=True,id=bbcf1d8e-1698-4198-a641-527122f98e09,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcf1d8e-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.167 2 DEBUG nova.objects.instance [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lazy-loading 'pci_devices' on Instance uuid fc212c27-f5c2-4656-9c1f-7c39234fea45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.183 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:57:38 compute-0 nova_compute[259627]:   <uuid>fc212c27-f5c2-4656-9c1f-7c39234fea45</uuid>
Oct 14 08:57:38 compute-0 nova_compute[259627]:   <name>instance-00000014</name>
Oct 14 08:57:38 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:57:38 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:57:38 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-373239651</nova:name>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:57:37</nova:creationTime>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:57:38 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:57:38 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:57:38 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:57:38 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:57:38 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:57:38 compute-0 nova_compute[259627]:         <nova:user uuid="aafd6ad40c944c3eb14e7fbf454040c3">tempest-ImagesOneServerNegativeTestJSON-531836018-project-member</nova:user>
Oct 14 08:57:38 compute-0 nova_compute[259627]:         <nova:project uuid="f24bbeb2f91141e294590ca2afc5ed42">tempest-ImagesOneServerNegativeTestJSON-531836018</nova:project>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:57:38 compute-0 nova_compute[259627]:         <nova:port uuid="bbcf1d8e-1698-4198-a641-527122f98e09">
Oct 14 08:57:38 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:57:38 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:57:38 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <system>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <entry name="serial">fc212c27-f5c2-4656-9c1f-7c39234fea45</entry>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <entry name="uuid">fc212c27-f5c2-4656-9c1f-7c39234fea45</entry>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     </system>
Oct 14 08:57:38 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:57:38 compute-0 nova_compute[259627]:   <os>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:   </os>
Oct 14 08:57:38 compute-0 nova_compute[259627]:   <features>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:   </features>
Oct 14 08:57:38 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:57:38 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:57:38 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/fc212c27-f5c2-4656-9c1f-7c39234fea45_disk">
Oct 14 08:57:38 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       </source>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:57:38 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/fc212c27-f5c2-4656-9c1f-7c39234fea45_disk.config">
Oct 14 08:57:38 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       </source>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:57:38 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:03:88:18"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <target dev="tapbbcf1d8e-16"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45/console.log" append="off"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <video>
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     </video>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:57:38 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:57:38 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:57:38 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:57:38 compute-0 nova_compute[259627]: </domain>
Oct 14 08:57:38 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.184 2 DEBUG nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Preparing to wait for external event network-vif-plugged-bbcf1d8e-1698-4198-a641-527122f98e09 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.185 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.186 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.186 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.187 2 DEBUG nova.virt.libvirt.vif [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:57:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-373239651',display_name='tempest-ImagesOneServerNegativeTestJSON-server-373239651',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-373239651',id=20,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f24bbeb2f91141e294590ca2afc5ed42',ramdisk_id='',reservation_id='r-p1gxj4jk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-531836018',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-531836018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:57:33Z,user_data=None,user_id='aafd6ad40c944c3eb14e7fbf454040c3',uuid=fc212c27-f5c2-4656-9c1f-7c39234fea45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.188 2 DEBUG nova.network.os_vif_util [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converting VIF {"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.189 2 DEBUG nova.network.os_vif_util [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:88:18,bridge_name='br-int',has_traffic_filtering=True,id=bbcf1d8e-1698-4198-a641-527122f98e09,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcf1d8e-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.189 2 DEBUG os_vif [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:88:18,bridge_name='br-int',has_traffic_filtering=True,id=bbcf1d8e-1698-4198-a641-527122f98e09,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcf1d8e-16') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.191 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.191 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.198 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbbcf1d8e-16, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.198 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbbcf1d8e-16, col_values=(('external_ids', {'iface-id': 'bbcf1d8e-1698-4198-a641-527122f98e09', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:88:18', 'vm-uuid': 'fc212c27-f5c2-4656-9c1f-7c39234fea45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:38 compute-0 NetworkManager[44885]: <info>  [1760432258.2016] manager: (tapbbcf1d8e-16): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.208 2 INFO os_vif [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:88:18,bridge_name='br-int',has_traffic_filtering=True,id=bbcf1d8e-1698-4198-a641-527122f98e09,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcf1d8e-16')
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.283 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.284 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.284 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No VIF found with MAC fa:16:3e:03:88:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.285 2 INFO nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Using config drive
Oct 14 08:57:38 compute-0 nova_compute[259627]: 2025-10-14 08:57:38.315 2 DEBUG nova.storage.rbd_utils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image fc212c27-f5c2-4656-9c1f-7c39234fea45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:38 compute-0 ceph-mon[74249]: pgmap v1176: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 08:57:38 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2313730875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:38 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3550333764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:38 compute-0 podman[287927]: 2025-10-14 08:57:38.677578975 +0000 UTC m=+0.071408054 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 14 08:57:38 compute-0 podman[287926]: 2025-10-14 08:57:38.734047202 +0000 UTC m=+0.140246815 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:57:39 compute-0 nova_compute[259627]: 2025-10-14 08:57:39.234 2 INFO nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Creating config drive at /var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45/disk.config
Oct 14 08:57:39 compute-0 nova_compute[259627]: 2025-10-14 08:57:39.245 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjdtbvfi7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:39 compute-0 nova_compute[259627]: 2025-10-14 08:57:39.295 2 DEBUG nova.network.neutron [req-38a9e447-5750-4244-8747-31c9ae9230d5 req-8381449e-9450-43f4-81e0-f129b5ec3382 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Updated VIF entry in instance network info cache for port bbcf1d8e-1698-4198-a641-527122f98e09. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:57:39 compute-0 nova_compute[259627]: 2025-10-14 08:57:39.297 2 DEBUG nova.network.neutron [req-38a9e447-5750-4244-8747-31c9ae9230d5 req-8381449e-9450-43f4-81e0-f129b5ec3382 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Updating instance_info_cache with network_info: [{"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:39 compute-0 nova_compute[259627]: 2025-10-14 08:57:39.328 2 DEBUG oslo_concurrency.lockutils [req-38a9e447-5750-4244-8747-31c9ae9230d5 req-8381449e-9450-43f4-81e0-f129b5ec3382 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-fc212c27-f5c2-4656-9c1f-7c39234fea45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:57:39 compute-0 nova_compute[259627]: 2025-10-14 08:57:39.403 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjdtbvfi7" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:39 compute-0 nova_compute[259627]: 2025-10-14 08:57:39.441 2 DEBUG nova.storage.rbd_utils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image fc212c27-f5c2-4656-9c1f-7c39234fea45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:39 compute-0 nova_compute[259627]: 2025-10-14 08:57:39.445 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45/disk.config fc212c27-f5c2-4656-9c1f-7c39234fea45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:39 compute-0 nova_compute[259627]: 2025-10-14 08:57:39.477 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432244.418375, cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:39 compute-0 nova_compute[259627]: 2025-10-14 08:57:39.480 2 INFO nova.compute.manager [-] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] VM Stopped (Lifecycle Event)
Oct 14 08:57:39 compute-0 nova_compute[259627]: 2025-10-14 08:57:39.499 2 DEBUG nova.compute.manager [None req-ec3993b5-3257-4b2b-b7b4-3f3ae60ffdd0 - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 14 08:57:39 compute-0 nova_compute[259627]: 2025-10-14 08:57:39.657 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45/disk.config fc212c27-f5c2-4656-9c1f-7c39234fea45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:39 compute-0 nova_compute[259627]: 2025-10-14 08:57:39.658 2 INFO nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Deleting local config drive /var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45/disk.config because it was imported into RBD.
Oct 14 08:57:39 compute-0 kernel: tapbbcf1d8e-16: entered promiscuous mode
Oct 14 08:57:39 compute-0 NetworkManager[44885]: <info>  [1760432259.7284] manager: (tapbbcf1d8e-16): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Oct 14 08:57:39 compute-0 ovn_controller[152662]: 2025-10-14T08:57:39Z|00131|binding|INFO|Claiming lport bbcf1d8e-1698-4198-a641-527122f98e09 for this chassis.
Oct 14 08:57:39 compute-0 ovn_controller[152662]: 2025-10-14T08:57:39Z|00132|binding|INFO|bbcf1d8e-1698-4198-a641-527122f98e09: Claiming fa:16:3e:03:88:18 10.100.0.6
Oct 14 08:57:39 compute-0 nova_compute[259627]: 2025-10-14 08:57:39.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.739 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:88:18 10.100.0.6'], port_security=['fa:16:3e:03:88:18 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'fc212c27-f5c2-4656-9c1f-7c39234fea45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58d74886-d603-4fb5-b8ff-9c184284bdce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f24bbeb2f91141e294590ca2afc5ed42', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd818ab3d-f5ea-4d77-bc47-7efe2295e146', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1adf6e68-9c1a-4ee7-a829-03bbd6a5ae48, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bbcf1d8e-1698-4198-a641-527122f98e09) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:57:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.742 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bbcf1d8e-1698-4198-a641-527122f98e09 in datapath 58d74886-d603-4fb5-b8ff-9c184284bdce bound to our chassis
Oct 14 08:57:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.744 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58d74886-d603-4fb5-b8ff-9c184284bdce
Oct 14 08:57:39 compute-0 systemd-udevd[288022]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:57:39 compute-0 NetworkManager[44885]: <info>  [1760432259.8015] device (tapbbcf1d8e-16): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:57:39 compute-0 NetworkManager[44885]: <info>  [1760432259.8038] device (tapbbcf1d8e-16): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:57:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.766 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f5b7db-0555-4338-b393-9943333c3389]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.804 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58d74886-d1 in ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:57:39 compute-0 ovn_controller[152662]: 2025-10-14T08:57:39Z|00133|binding|INFO|Setting lport bbcf1d8e-1698-4198-a641-527122f98e09 ovn-installed in OVS
Oct 14 08:57:39 compute-0 ovn_controller[152662]: 2025-10-14T08:57:39Z|00134|binding|INFO|Setting lport bbcf1d8e-1698-4198-a641-527122f98e09 up in Southbound
Oct 14 08:57:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.807 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58d74886-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:57:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.807 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9e7dab-df35-4ab8-85a6-df0ec5c82a6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.809 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2c927e0f-3cef-46eb-ae92-19a0ac037ee8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:39 compute-0 nova_compute[259627]: 2025-10-14 08:57:39.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:39 compute-0 systemd-machined[214636]: New machine qemu-21-instance-00000014.
Oct 14 08:57:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.831 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[04bb43fa-39f5-4d1e-baca-b36cbbb90176]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:39 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-00000014.
Oct 14 08:57:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.858 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[121bc9f3-e80a-4cd4-ae0d-336193d3426d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.890 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed696cf-2ccb-4fa3-9402-c623e897c31c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.896 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc5fcf0-9075-4651-a8f5-eb60211235d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:39 compute-0 NetworkManager[44885]: <info>  [1760432259.8977] manager: (tap58d74886-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/72)
Oct 14 08:57:39 compute-0 systemd-udevd[288024]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:57:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.939 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[74ab51f3-9a71-47dc-991b-39e1e33d4710]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.943 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[770db971-9e29-4f12-833a-12d561191c0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:39 compute-0 NetworkManager[44885]: <info>  [1760432259.9773] device (tap58d74886-d0): carrier: link connected
Oct 14 08:57:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.984 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d32641d5-aadb-40b7-a338-1c42f2fdb4d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.006 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8f96bc2d-1d57-4c71-85e0-4a38aa93fd5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58d74886-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:d2:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603873, 'reachable_time': 21612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288056, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.027 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0d7b7a9a-f6f9-417b-92e8-8cf6df89a9e2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec7:d2a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603873, 'tstamp': 603873}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288057, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.053 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[58d50778-0668-4079-a71c-352cdffd4a41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58d74886-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:d2:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603873, 'reachable_time': 21612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288058, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.096 2 DEBUG nova.compute.manager [req-15b82b71-b530-4469-92ad-4cef8d7d57fa req-fb0622e7-3bf0-4a86-9e91-55a08f66e3e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Received event network-vif-plugged-bbcf1d8e-1698-4198-a641-527122f98e09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.096 2 DEBUG oslo_concurrency.lockutils [req-15b82b71-b530-4469-92ad-4cef8d7d57fa req-fb0622e7-3bf0-4a86-9e91-55a08f66e3e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.097 2 DEBUG oslo_concurrency.lockutils [req-15b82b71-b530-4469-92ad-4cef8d7d57fa req-fb0622e7-3bf0-4a86-9e91-55a08f66e3e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.097 2 DEBUG oslo_concurrency.lockutils [req-15b82b71-b530-4469-92ad-4cef8d7d57fa req-fb0622e7-3bf0-4a86-9e91-55a08f66e3e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.097 2 DEBUG nova.compute.manager [req-15b82b71-b530-4469-92ad-4cef8d7d57fa req-fb0622e7-3bf0-4a86-9e91-55a08f66e3e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Processing event network-vif-plugged-bbcf1d8e-1698-4198-a641-527122f98e09 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.097 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aa79b659-cc44-47ae-af0e-18a7737c960c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.175 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9525ab69-c88e-48a9-9ded-d9506917121d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.176 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58d74886-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.177 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.178 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58d74886-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:40 compute-0 kernel: tap58d74886-d0: entered promiscuous mode
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:40 compute-0 NetworkManager[44885]: <info>  [1760432260.1879] manager: (tap58d74886-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.187 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58d74886-d0, col_values=(('external_ids', {'iface-id': 'ef5c894d-34c4-4781-b15c-6813576a45e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:40 compute-0 ovn_controller[152662]: 2025-10-14T08:57:40Z|00135|binding|INFO|Releasing lport ef5c894d-34c4-4781-b15c-6813576a45e8 from this chassis (sb_readonly=0)
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.194 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58d74886-d603-4fb5-b8ff-9c184284bdce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58d74886-d603-4fb5-b8ff-9c184284bdce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.195 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7a137d66-98d4-4ea1-b2da-64533951057e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.197 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-58d74886-d603-4fb5-b8ff-9c184284bdce
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/58d74886-d603-4fb5-b8ff-9c184284bdce.pid.haproxy
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 58d74886-d603-4fb5-b8ff-9c184284bdce
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:57:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.197 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'env', 'PROCESS_TAG=haproxy-58d74886-d603-4fb5-b8ff-9c184284bdce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58d74886-d603-4fb5-b8ff-9c184284bdce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:40 compute-0 podman[288132]: 2025-10-14 08:57:40.571317618 +0000 UTC m=+0.045796835 container create e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:57:40 compute-0 systemd[1]: Started libpod-conmon-e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043.scope.
Oct 14 08:57:40 compute-0 podman[288132]: 2025-10-14 08:57:40.548123309 +0000 UTC m=+0.022602536 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:57:40 compute-0 ceph-mon[74249]: pgmap v1177: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 14 08:57:40 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8dab89cc5adeb10fc352a887eff47aa5a4cb5a21ba42bc369c3d35d1606c66/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:57:40 compute-0 podman[288132]: 2025-10-14 08:57:40.677958047 +0000 UTC m=+0.152437274 container init e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 08:57:40 compute-0 podman[288132]: 2025-10-14 08:57:40.685058631 +0000 UTC m=+0.159537838 container start e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 14 08:57:40 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[288147]: [NOTICE]   (288151) : New worker (288153) forked
Oct 14 08:57:40 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[288147]: [NOTICE]   (288151) : Loading success.
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.883 2 DEBUG nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.885 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432260.883868, fc212c27-f5c2-4656-9c1f-7c39234fea45 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.885 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] VM Started (Lifecycle Event)
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.891 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.900 2 INFO nova.virt.libvirt.driver [-] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Instance spawned successfully.
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.902 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.938 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.946 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.954 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.955 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.956 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.957 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.958 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:40 compute-0 nova_compute[259627]: 2025-10-14 08:57:40.959 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:41 compute-0 nova_compute[259627]: 2025-10-14 08:57:41.006 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:57:41 compute-0 nova_compute[259627]: 2025-10-14 08:57:41.007 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432260.883987, fc212c27-f5c2-4656-9c1f-7c39234fea45 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:41 compute-0 nova_compute[259627]: 2025-10-14 08:57:41.008 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] VM Paused (Lifecycle Event)
Oct 14 08:57:41 compute-0 nova_compute[259627]: 2025-10-14 08:57:41.041 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:41 compute-0 nova_compute[259627]: 2025-10-14 08:57:41.045 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432260.8906248, fc212c27-f5c2-4656-9c1f-7c39234fea45 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:41 compute-0 nova_compute[259627]: 2025-10-14 08:57:41.045 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] VM Resumed (Lifecycle Event)
Oct 14 08:57:41 compute-0 nova_compute[259627]: 2025-10-14 08:57:41.050 2 INFO nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Took 7.73 seconds to spawn the instance on the hypervisor.
Oct 14 08:57:41 compute-0 nova_compute[259627]: 2025-10-14 08:57:41.051 2 DEBUG nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:41 compute-0 nova_compute[259627]: 2025-10-14 08:57:41.075 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:41 compute-0 nova_compute[259627]: 2025-10-14 08:57:41.079 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:57:41 compute-0 nova_compute[259627]: 2025-10-14 08:57:41.102 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:57:41 compute-0 nova_compute[259627]: 2025-10-14 08:57:41.113 2 INFO nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Took 8.84 seconds to build instance.
Oct 14 08:57:41 compute-0 nova_compute[259627]: 2025-10-14 08:57:41.134 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:41 compute-0 ovn_controller[152662]: 2025-10-14T08:57:41Z|00136|binding|INFO|Releasing lport ef5c894d-34c4-4781-b15c-6813576a45e8 from this chassis (sb_readonly=0)
Oct 14 08:57:41 compute-0 nova_compute[259627]: 2025-10-14 08:57:41.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:41.466 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Oct 14 08:57:41 compute-0 nova_compute[259627]: 2025-10-14 08:57:41.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:42 compute-0 nova_compute[259627]: 2025-10-14 08:57:42.324 2 DEBUG nova.compute.manager [req-abe1e10b-0702-4823-9a96-4681ceb781c6 req-495ffc7f-8c83-4e8c-b541-659e4fccd244 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Received event network-vif-plugged-bbcf1d8e-1698-4198-a641-527122f98e09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:42 compute-0 nova_compute[259627]: 2025-10-14 08:57:42.325 2 DEBUG oslo_concurrency.lockutils [req-abe1e10b-0702-4823-9a96-4681ceb781c6 req-495ffc7f-8c83-4e8c-b541-659e4fccd244 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:42 compute-0 nova_compute[259627]: 2025-10-14 08:57:42.325 2 DEBUG oslo_concurrency.lockutils [req-abe1e10b-0702-4823-9a96-4681ceb781c6 req-495ffc7f-8c83-4e8c-b541-659e4fccd244 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:42 compute-0 nova_compute[259627]: 2025-10-14 08:57:42.325 2 DEBUG oslo_concurrency.lockutils [req-abe1e10b-0702-4823-9a96-4681ceb781c6 req-495ffc7f-8c83-4e8c-b541-659e4fccd244 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:42 compute-0 nova_compute[259627]: 2025-10-14 08:57:42.325 2 DEBUG nova.compute.manager [req-abe1e10b-0702-4823-9a96-4681ceb781c6 req-495ffc7f-8c83-4e8c-b541-659e4fccd244 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] No waiting events found dispatching network-vif-plugged-bbcf1d8e-1698-4198-a641-527122f98e09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:57:42 compute-0 nova_compute[259627]: 2025-10-14 08:57:42.326 2 WARNING nova.compute.manager [req-abe1e10b-0702-4823-9a96-4681ceb781c6 req-495ffc7f-8c83-4e8c-b541-659e4fccd244 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Received unexpected event network-vif-plugged-bbcf1d8e-1698-4198-a641-527122f98e09 for instance with vm_state active and task_state None.
Oct 14 08:57:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:57:42 compute-0 ceph-mon[74249]: pgmap v1178: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:57:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.130 2 DEBUG oslo_concurrency.lockutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "fc212c27-f5c2-4656-9c1f-7c39234fea45" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.131 2 DEBUG oslo_concurrency.lockutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.131 2 DEBUG oslo_concurrency.lockutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.132 2 DEBUG oslo_concurrency.lockutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.133 2 DEBUG oslo_concurrency.lockutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.135 2 INFO nova.compute.manager [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Terminating instance
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.136 2 DEBUG nova.compute.manager [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:57:43 compute-0 kernel: tapbbcf1d8e-16 (unregistering): left promiscuous mode
Oct 14 08:57:43 compute-0 NetworkManager[44885]: <info>  [1760432263.1912] device (tapbbcf1d8e-16): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:57:43 compute-0 ovn_controller[152662]: 2025-10-14T08:57:43Z|00137|binding|INFO|Releasing lport bbcf1d8e-1698-4198-a641-527122f98e09 from this chassis (sb_readonly=0)
Oct 14 08:57:43 compute-0 ovn_controller[152662]: 2025-10-14T08:57:43Z|00138|binding|INFO|Setting lport bbcf1d8e-1698-4198-a641-527122f98e09 down in Southbound
Oct 14 08:57:43 compute-0 ovn_controller[152662]: 2025-10-14T08:57:43Z|00139|binding|INFO|Removing iface tapbbcf1d8e-16 ovn-installed in OVS
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:57:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.244 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:88:18 10.100.0.6'], port_security=['fa:16:3e:03:88:18 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'fc212c27-f5c2-4656-9c1f-7c39234fea45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58d74886-d603-4fb5-b8ff-9c184284bdce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f24bbeb2f91141e294590ca2afc5ed42', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd818ab3d-f5ea-4d77-bc47-7efe2295e146', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1adf6e68-9c1a-4ee7-a829-03bbd6a5ae48, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bbcf1d8e-1698-4198-a641-527122f98e09) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:57:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.246 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bbcf1d8e-1698-4198-a641-527122f98e09 in datapath 58d74886-d603-4fb5-b8ff-9c184284bdce unbound from our chassis
Oct 14 08:57:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.247 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58d74886-d603-4fb5-b8ff-9c184284bdce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:57:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.249 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c861cef8-c687-4fe9-b760-61f201297fc2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.249 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce namespace which is not needed anymore
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:43 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000014.scope: Deactivated successfully.
Oct 14 08:57:43 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000014.scope: Consumed 3.373s CPU time.
Oct 14 08:57:43 compute-0 systemd-machined[214636]: Machine qemu-21-instance-00000014 terminated.
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.383 2 INFO nova.virt.libvirt.driver [-] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Instance destroyed successfully.
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.384 2 DEBUG nova.objects.instance [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lazy-loading 'resources' on Instance uuid fc212c27-f5c2-4656-9c1f-7c39234fea45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.405 2 DEBUG nova.virt.libvirt.vif [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:57:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-373239651',display_name='tempest-ImagesOneServerNegativeTestJSON-server-373239651',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-373239651',id=20,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:57:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f24bbeb2f91141e294590ca2afc5ed42',ramdisk_id='',reservation_id='r-p1gxj4jk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-531836018',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-531836018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:57:41Z,user_data=None,user_id='aafd6ad40c944c3eb14e7fbf454040c3',uuid=fc212c27-f5c2-4656-9c1f-7c39234fea45,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.406 2 DEBUG nova.network.os_vif_util [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converting VIF {"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.408 2 DEBUG nova.network.os_vif_util [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:88:18,bridge_name='br-int',has_traffic_filtering=True,id=bbcf1d8e-1698-4198-a641-527122f98e09,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcf1d8e-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.408 2 DEBUG os_vif [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:88:18,bridge_name='br-int',has_traffic_filtering=True,id=bbcf1d8e-1698-4198-a641-527122f98e09,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcf1d8e-16') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.413 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbbcf1d8e-16, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.423 2 INFO os_vif [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:88:18,bridge_name='br-int',has_traffic_filtering=True,id=bbcf1d8e-1698-4198-a641-527122f98e09,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcf1d8e-16')
Oct 14 08:57:43 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[288147]: [NOTICE]   (288151) : haproxy version is 2.8.14-c23fe91
Oct 14 08:57:43 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[288147]: [NOTICE]   (288151) : path to executable is /usr/sbin/haproxy
Oct 14 08:57:43 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[288147]: [WARNING]  (288151) : Exiting Master process...
Oct 14 08:57:43 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[288147]: [ALERT]    (288151) : Current worker (288153) exited with code 143 (Terminated)
Oct 14 08:57:43 compute-0 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[288147]: [WARNING]  (288151) : All workers exited. Exiting... (0)
Oct 14 08:57:43 compute-0 systemd[1]: libpod-e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043.scope: Deactivated successfully.
Oct 14 08:57:43 compute-0 podman[288187]: 2025-10-14 08:57:43.438884684 +0000 UTC m=+0.077127745 container died e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 08:57:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043-userdata-shm.mount: Deactivated successfully.
Oct 14 08:57:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e8dab89cc5adeb10fc352a887eff47aa5a4cb5a21ba42bc369c3d35d1606c66-merged.mount: Deactivated successfully.
Oct 14 08:57:43 compute-0 podman[288187]: 2025-10-14 08:57:43.48923632 +0000 UTC m=+0.127479361 container cleanup e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:57:43 compute-0 systemd[1]: libpod-conmon-e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043.scope: Deactivated successfully.
Oct 14 08:57:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Oct 14 08:57:43 compute-0 podman[288244]: 2025-10-14 08:57:43.556516412 +0000 UTC m=+0.040595757 container remove e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 08:57:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.566 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[994df02a-7df0-46e5-98f2-7474cccf5191]: (4, ('Tue Oct 14 08:57:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce (e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043)\ne15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043\nTue Oct 14 08:57:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce (e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043)\ne15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.570 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee1b447-1a9c-4c29-8310-7e090f9a0b36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.572 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58d74886-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:43 compute-0 kernel: tap58d74886-d0: left promiscuous mode
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.585 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[18d200c3-75ee-4a03-9b0c-320cfab6d754]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.610 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8d1b4128-11f4-4776-b0f4-d1c588b0d468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.612 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[221782fe-aee3-4085-827b-140c6d23d133]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.632 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6818fc68-2aee-47c7-8116-a8c42e90adec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603864, 'reachable_time': 32392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288259, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d58d74886\x2dd603\x2d4fb5\x2db8ff\x2d9c184284bdce.mount: Deactivated successfully.
Oct 14 08:57:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.636 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:57:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.636 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[b09ae004-7853-4d1f-8a23-202ffe77a2ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:57:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct 14 08:57:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct 14 08:57:43 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.780 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Acquiring lock "c79d4673-ee43-418f-8d38-f48cb8dc4659" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.780 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "c79d4673-ee43-418f-8d38-f48cb8dc4659" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.809 2 DEBUG nova.compute.manager [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.849 2 INFO nova.virt.libvirt.driver [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Deleting instance files /var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45_del
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.850 2 INFO nova.virt.libvirt.driver [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Deletion of /var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45_del complete
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.890 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.891 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.904 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.905 2 INFO nova.compute.claims [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.926 2 INFO nova.compute.manager [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.927 2 DEBUG oslo.service.loopingcall [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.928 2 DEBUG nova.compute.manager [-] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:57:43 compute-0 nova_compute[259627]: 2025-10-14 08:57:43.928 2 DEBUG nova.network.neutron [-] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.043 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.427 2 DEBUG nova.compute.manager [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Received event network-vif-unplugged-bbcf1d8e-1698-4198-a641-527122f98e09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.428 2 DEBUG oslo_concurrency.lockutils [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.428 2 DEBUG oslo_concurrency.lockutils [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.429 2 DEBUG oslo_concurrency.lockutils [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.430 2 DEBUG nova.compute.manager [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] No waiting events found dispatching network-vif-unplugged-bbcf1d8e-1698-4198-a641-527122f98e09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.430 2 DEBUG nova.compute.manager [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Received event network-vif-unplugged-bbcf1d8e-1698-4198-a641-527122f98e09 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.431 2 DEBUG nova.compute.manager [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Received event network-vif-plugged-bbcf1d8e-1698-4198-a641-527122f98e09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.431 2 DEBUG oslo_concurrency.lockutils [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.432 2 DEBUG oslo_concurrency.lockutils [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.432 2 DEBUG oslo_concurrency.lockutils [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.432 2 DEBUG nova.compute.manager [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] No waiting events found dispatching network-vif-plugged-bbcf1d8e-1698-4198-a641-527122f98e09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.433 2 WARNING nova.compute.manager [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Received unexpected event network-vif-plugged-bbcf1d8e-1698-4198-a641-527122f98e09 for instance with vm_state active and task_state deleting.
Oct 14 08:57:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:57:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2714449684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.553 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.563 2 DEBUG nova.compute.provider_tree [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.584 2 DEBUG nova.scheduler.client.report [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.608 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.609 2 DEBUG nova.compute.manager [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.653 2 DEBUG nova.compute.manager [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.670 2 INFO nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:57:44 compute-0 ceph-mon[74249]: pgmap v1179: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Oct 14 08:57:44 compute-0 ceph-mon[74249]: osdmap e134: 3 total, 3 up, 3 in
Oct 14 08:57:44 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2714449684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.686 2 DEBUG nova.compute.manager [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.775 2 DEBUG nova.compute.manager [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.777 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.778 2 INFO nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Creating image(s)
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.818 2 DEBUG nova.storage.rbd_utils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] rbd image c79d4673-ee43-418f-8d38-f48cb8dc4659_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.857 2 DEBUG nova.storage.rbd_utils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] rbd image c79d4673-ee43-418f-8d38-f48cb8dc4659_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.896 2 DEBUG nova.storage.rbd_utils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] rbd image c79d4673-ee43-418f-8d38-f48cb8dc4659_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.903 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.965 2 DEBUG nova.network.neutron [-] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:57:44 compute-0 nova_compute[259627]: 2025-10-14 08:57:44.991 2 INFO nova.compute.manager [-] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Took 1.06 seconds to deallocate network for instance.
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.013 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.014 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.014 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.015 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.046 2 DEBUG nova.storage.rbd_utils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] rbd image c79d4673-ee43-418f-8d38-f48cb8dc4659_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.051 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c79d4673-ee43-418f-8d38-f48cb8dc4659_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.089 2 DEBUG oslo_concurrency.lockutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.090 2 DEBUG oslo_concurrency.lockutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.178 2 DEBUG oslo_concurrency.processutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.316 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c79d4673-ee43-418f-8d38-f48cb8dc4659_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.388 2 DEBUG nova.compute.manager [req-66dc5236-b6ff-4543-9bc6-efcf75469d9b req-bdcc7c51-7d59-45ef-9c9f-330af3e98fb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Received event network-vif-deleted-bbcf1d8e-1698-4198-a641-527122f98e09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.394 2 DEBUG nova.storage.rbd_utils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] resizing rbd image c79d4673-ee43-418f-8d38-f48cb8dc4659_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.492 2 DEBUG nova.objects.instance [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lazy-loading 'migration_context' on Instance uuid c79d4673-ee43-418f-8d38-f48cb8dc4659 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.507 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.508 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Ensure instance console log exists: /var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.509 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.509 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.509 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.511 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.517 2 WARNING nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.523 2 DEBUG nova.virt.libvirt.host [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:57:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 55 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 674 KiB/s wr, 149 op/s
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.524 2 DEBUG nova.virt.libvirt.host [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.528 2 DEBUG nova.virt.libvirt.host [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.529 2 DEBUG nova.virt.libvirt.host [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.530 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.530 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.531 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.531 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.531 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.532 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.532 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.532 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.532 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.533 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.533 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.533 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.537 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:57:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1666485927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.649 2 DEBUG oslo_concurrency.processutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.658 2 DEBUG nova.compute.provider_tree [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.676 2 DEBUG nova.scheduler.client.report [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:57:45 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1666485927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.702 2 DEBUG oslo_concurrency.lockutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.726 2 INFO nova.scheduler.client.report [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Deleted allocations for instance fc212c27-f5c2-4656-9c1f-7c39234fea45
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.802 2 DEBUG oslo_concurrency.lockutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:57:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/574151818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:45 compute-0 nova_compute[259627]: 2025-10-14 08:57:45.994 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:46 compute-0 nova_compute[259627]: 2025-10-14 08:57:46.027 2 DEBUG nova.storage.rbd_utils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] rbd image c79d4673-ee43-418f-8d38-f48cb8dc4659_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:46 compute-0 nova_compute[259627]: 2025-10-14 08:57:46.033 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:57:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1770348061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:46 compute-0 nova_compute[259627]: 2025-10-14 08:57:46.489 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:46 compute-0 nova_compute[259627]: 2025-10-14 08:57:46.493 2 DEBUG nova.objects.instance [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lazy-loading 'pci_devices' on Instance uuid c79d4673-ee43-418f-8d38-f48cb8dc4659 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:57:46 compute-0 nova_compute[259627]: 2025-10-14 08:57:46.521 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:57:46 compute-0 nova_compute[259627]:   <uuid>c79d4673-ee43-418f-8d38-f48cb8dc4659</uuid>
Oct 14 08:57:46 compute-0 nova_compute[259627]:   <name>instance-00000015</name>
Oct 14 08:57:46 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:57:46 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:57:46 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerDiagnosticsV248Test-server-1352080678</nova:name>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:57:45</nova:creationTime>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:57:46 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:57:46 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:57:46 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:57:46 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:57:46 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:57:46 compute-0 nova_compute[259627]:         <nova:user uuid="d52b590f38bb47e0abb3e01c8a1352af">tempest-ServerDiagnosticsV248Test-911640918-project-member</nova:user>
Oct 14 08:57:46 compute-0 nova_compute[259627]:         <nova:project uuid="53de42c913444310bd1af3c50b917f19">tempest-ServerDiagnosticsV248Test-911640918</nova:project>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:57:46 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:57:46 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <system>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <entry name="serial">c79d4673-ee43-418f-8d38-f48cb8dc4659</entry>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <entry name="uuid">c79d4673-ee43-418f-8d38-f48cb8dc4659</entry>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     </system>
Oct 14 08:57:46 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:57:46 compute-0 nova_compute[259627]:   <os>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:   </os>
Oct 14 08:57:46 compute-0 nova_compute[259627]:   <features>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:   </features>
Oct 14 08:57:46 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:57:46 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:57:46 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/c79d4673-ee43-418f-8d38-f48cb8dc4659_disk">
Oct 14 08:57:46 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       </source>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:57:46 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/c79d4673-ee43-418f-8d38-f48cb8dc4659_disk.config">
Oct 14 08:57:46 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       </source>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:57:46 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659/console.log" append="off"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <video>
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     </video>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:57:46 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:57:46 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:57:46 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:57:46 compute-0 nova_compute[259627]: </domain>
Oct 14 08:57:46 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:57:46 compute-0 nova_compute[259627]: 2025-10-14 08:57:46.589 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:57:46 compute-0 nova_compute[259627]: 2025-10-14 08:57:46.590 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:57:46 compute-0 nova_compute[259627]: 2025-10-14 08:57:46.591 2 INFO nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Using config drive
Oct 14 08:57:46 compute-0 nova_compute[259627]: 2025-10-14 08:57:46.625 2 DEBUG nova.storage.rbd_utils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] rbd image c79d4673-ee43-418f-8d38-f48cb8dc4659_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:46 compute-0 ceph-mon[74249]: pgmap v1181: 305 pgs: 305 active+clean; 55 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 674 KiB/s wr, 149 op/s
Oct 14 08:57:46 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/574151818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:46 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1770348061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:57:46 compute-0 nova_compute[259627]: 2025-10-14 08:57:46.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:47 compute-0 nova_compute[259627]: 2025-10-14 08:57:47.049 2 INFO nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Creating config drive at /var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659/disk.config
Oct 14 08:57:47 compute-0 nova_compute[259627]: 2025-10-14 08:57:47.054 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6uxx0gof execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:47 compute-0 nova_compute[259627]: 2025-10-14 08:57:47.186 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6uxx0gof" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:47 compute-0 nova_compute[259627]: 2025-10-14 08:57:47.232 2 DEBUG nova.storage.rbd_utils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] rbd image c79d4673-ee43-418f-8d38-f48cb8dc4659_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:57:47 compute-0 nova_compute[259627]: 2025-10-14 08:57:47.238 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659/disk.config c79d4673-ee43-418f-8d38-f48cb8dc4659_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:57:47 compute-0 nova_compute[259627]: 2025-10-14 08:57:47.438 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659/disk.config c79d4673-ee43-418f-8d38-f48cb8dc4659_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:57:47 compute-0 nova_compute[259627]: 2025-10-14 08:57:47.439 2 INFO nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Deleting local config drive /var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659/disk.config because it was imported into RBD.
Oct 14 08:57:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:57:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 55 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 674 KiB/s wr, 149 op/s
Oct 14 08:57:47 compute-0 systemd-machined[214636]: New machine qemu-22-instance-00000015.
Oct 14 08:57:47 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-00000015.
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.531 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432268.5304124, c79d4673-ee43-418f-8d38-f48cb8dc4659 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.531 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] VM Resumed (Lifecycle Event)
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.534 2 DEBUG nova.compute.manager [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.534 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.537 2 INFO nova.virt.libvirt.driver [-] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Instance spawned successfully.
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.538 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.549 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.554 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.558 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.558 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.558 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.559 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.559 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.559 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.585 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.586 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432268.5311723, c79d4673-ee43-418f-8d38-f48cb8dc4659 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.586 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] VM Started (Lifecycle Event)
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.610 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.613 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.622 2 INFO nova.compute.manager [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Took 3.85 seconds to spawn the instance on the hypervisor.
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.622 2 DEBUG nova.compute.manager [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.629 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.668 2 INFO nova.compute.manager [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Took 4.81 seconds to build instance.
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.685 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "c79d4673-ee43-418f-8d38-f48cb8dc4659" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:57:48 compute-0 ceph-mon[74249]: pgmap v1182: 305 pgs: 305 active+clean; 55 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 674 KiB/s wr, 149 op/s
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.723 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432253.7214246, 548bff7e-531b-4f5d-b4d3-18d586f46581 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.723 2 INFO nova.compute.manager [-] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] VM Stopped (Lifecycle Event)
Oct 14 08:57:48 compute-0 nova_compute[259627]: 2025-10-14 08:57:48.743 2 DEBUG nova.compute.manager [None req-aedfa9ce-9107-4467-bb0d-80aa14fb7183 - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 55 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 674 KiB/s wr, 149 op/s
Oct 14 08:57:50 compute-0 nova_compute[259627]: 2025-10-14 08:57:50.520 2 DEBUG nova.compute.manager [None req-29373806-73f4-4dfa-8b9f-0124a9c9392d 4086d396d4af49dfad54dbc8ee5ac67c a636329e57e7406abf5fc3ca0a39a6f0 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:50 compute-0 nova_compute[259627]: 2025-10-14 08:57:50.524 2 INFO nova.compute.manager [None req-29373806-73f4-4dfa-8b9f-0124a9c9392d 4086d396d4af49dfad54dbc8ee5ac67c a636329e57e7406abf5fc3ca0a39a6f0 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Retrieving diagnostics
Oct 14 08:57:50 compute-0 ceph-mon[74249]: pgmap v1183: 305 pgs: 305 active+clean; 55 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 674 KiB/s wr, 149 op/s
Oct 14 08:57:51 compute-0 nova_compute[259627]: 2025-10-14 08:57:51.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.1 MiB/s wr, 246 op/s
Oct 14 08:57:51 compute-0 nova_compute[259627]: 2025-10-14 08:57:51.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:57:52 compute-0 ceph-mon[74249]: pgmap v1184: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.1 MiB/s wr, 246 op/s
Oct 14 08:57:53 compute-0 nova_compute[259627]: 2025-10-14 08:57:53.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:53 compute-0 sudo[288649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:57:53 compute-0 sudo[288649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:57:53 compute-0 sudo[288649]: pam_unix(sudo:session): session closed for user root
Oct 14 08:57:53 compute-0 sudo[288674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:57:53 compute-0 sudo[288674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:57:53 compute-0 sudo[288674]: pam_unix(sudo:session): session closed for user root
Oct 14 08:57:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.1 MiB/s wr, 246 op/s
Oct 14 08:57:53 compute-0 sudo[288699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:57:53 compute-0 sudo[288699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:57:53 compute-0 sudo[288699]: pam_unix(sudo:session): session closed for user root
Oct 14 08:57:53 compute-0 sudo[288724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 08:57:53 compute-0 sudo[288724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:57:54 compute-0 sudo[288724]: pam_unix(sudo:session): session closed for user root
Oct 14 08:57:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 08:57:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:57:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 08:57:54 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 08:57:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 08:57:54 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:57:54 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 8f539f58-d1ee-4602-b817-88e5fb8e3b1e does not exist
Oct 14 08:57:54 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 0f4baf51-cc91-4227-a03a-ab538f650d34 does not exist
Oct 14 08:57:54 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 5e58bb41-7ff1-4d75-ba23-5f1e6dc7b0fd does not exist
Oct 14 08:57:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 08:57:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 08:57:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 08:57:54 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 08:57:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 08:57:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:57:54 compute-0 sudo[288781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:57:54 compute-0 sudo[288781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:57:54 compute-0 sudo[288781]: pam_unix(sudo:session): session closed for user root
Oct 14 08:57:54 compute-0 sudo[288806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:57:54 compute-0 sudo[288806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:57:54 compute-0 sudo[288806]: pam_unix(sudo:session): session closed for user root
Oct 14 08:57:54 compute-0 sudo[288831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:57:54 compute-0 sudo[288831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:57:54 compute-0 sudo[288831]: pam_unix(sudo:session): session closed for user root
Oct 14 08:57:54 compute-0 sudo[288856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 08:57:54 compute-0 sudo[288856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:57:54 compute-0 podman[288922]: 2025-10-14 08:57:54.720297639 +0000 UTC m=+0.035702177 container create c0607f30402baac412763d2cb252c11efdc37485073b637a9472f4fcaa896d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 08:57:54 compute-0 ceph-mon[74249]: pgmap v1185: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.1 MiB/s wr, 246 op/s
Oct 14 08:57:54 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:57:54 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 08:57:54 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:57:54 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 08:57:54 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 08:57:54 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:57:54 compute-0 systemd[1]: Started libpod-conmon-c0607f30402baac412763d2cb252c11efdc37485073b637a9472f4fcaa896d56.scope.
Oct 14 08:57:54 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:57:54 compute-0 podman[288922]: 2025-10-14 08:57:54.704777668 +0000 UTC m=+0.020182226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:57:54 compute-0 podman[288922]: 2025-10-14 08:57:54.804292272 +0000 UTC m=+0.119696810 container init c0607f30402baac412763d2cb252c11efdc37485073b637a9472f4fcaa896d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_darwin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:57:54 compute-0 podman[288922]: 2025-10-14 08:57:54.81074762 +0000 UTC m=+0.126152158 container start c0607f30402baac412763d2cb252c11efdc37485073b637a9472f4fcaa896d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_darwin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 08:57:54 compute-0 podman[288922]: 2025-10-14 08:57:54.814038921 +0000 UTC m=+0.129443489 container attach c0607f30402baac412763d2cb252c11efdc37485073b637a9472f4fcaa896d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 08:57:54 compute-0 optimistic_darwin[288939]: 167 167
Oct 14 08:57:54 compute-0 systemd[1]: libpod-c0607f30402baac412763d2cb252c11efdc37485073b637a9472f4fcaa896d56.scope: Deactivated successfully.
Oct 14 08:57:54 compute-0 podman[288922]: 2025-10-14 08:57:54.816527622 +0000 UTC m=+0.131932150 container died c0607f30402baac412763d2cb252c11efdc37485073b637a9472f4fcaa896d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_darwin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 08:57:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ad5f94520192653472d495653be34411d33a70b620750c95ca0ecb0eed8c3a6-merged.mount: Deactivated successfully.
Oct 14 08:57:54 compute-0 podman[288922]: 2025-10-14 08:57:54.851519702 +0000 UTC m=+0.166924240 container remove c0607f30402baac412763d2cb252c11efdc37485073b637a9472f4fcaa896d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_darwin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 08:57:54 compute-0 systemd[1]: libpod-conmon-c0607f30402baac412763d2cb252c11efdc37485073b637a9472f4fcaa896d56.scope: Deactivated successfully.
Oct 14 08:57:55 compute-0 podman[288962]: 2025-10-14 08:57:55.054543537 +0000 UTC m=+0.071653970 container create 13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 08:57:55 compute-0 systemd[1]: Started libpod-conmon-13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850.scope.
Oct 14 08:57:55 compute-0 podman[288962]: 2025-10-14 08:57:55.02492993 +0000 UTC m=+0.042040423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:57:55 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652bd525aa6e0f2e5ee7e649c699c8cf897143f6b4a9572a7f57e0c971934635/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652bd525aa6e0f2e5ee7e649c699c8cf897143f6b4a9572a7f57e0c971934635/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652bd525aa6e0f2e5ee7e649c699c8cf897143f6b4a9572a7f57e0c971934635/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652bd525aa6e0f2e5ee7e649c699c8cf897143f6b4a9572a7f57e0c971934635/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652bd525aa6e0f2e5ee7e649c699c8cf897143f6b4a9572a7f57e0c971934635/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 08:57:55 compute-0 podman[288962]: 2025-10-14 08:57:55.186378544 +0000 UTC m=+0.203488977 container init 13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pasteur, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 08:57:55 compute-0 podman[288962]: 2025-10-14 08:57:55.198368349 +0000 UTC m=+0.215478752 container start 13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pasteur, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 08:57:55 compute-0 podman[288962]: 2025-10-14 08:57:55.201950907 +0000 UTC m=+0.219061310 container attach 13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pasteur, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:57:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 208 op/s
Oct 14 08:57:56 compute-0 vibrant_pasteur[288979]: --> passed data devices: 0 physical, 3 LVM
Oct 14 08:57:56 compute-0 vibrant_pasteur[288979]: --> relative data size: 1.0
Oct 14 08:57:56 compute-0 vibrant_pasteur[288979]: --> All data devices are unavailable
Oct 14 08:57:56 compute-0 systemd[1]: libpod-13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850.scope: Deactivated successfully.
Oct 14 08:57:56 compute-0 systemd[1]: libpod-13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850.scope: Consumed 1.094s CPU time.
Oct 14 08:57:56 compute-0 podman[288962]: 2025-10-14 08:57:56.357231015 +0000 UTC m=+1.374341408 container died 13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pasteur, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:57:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-652bd525aa6e0f2e5ee7e649c699c8cf897143f6b4a9572a7f57e0c971934635-merged.mount: Deactivated successfully.
Oct 14 08:57:56 compute-0 podman[288962]: 2025-10-14 08:57:56.410610076 +0000 UTC m=+1.427720479 container remove 13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 08:57:56 compute-0 systemd[1]: libpod-conmon-13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850.scope: Deactivated successfully.
Oct 14 08:57:56 compute-0 sudo[288856]: pam_unix(sudo:session): session closed for user root
Oct 14 08:57:56 compute-0 sudo[289022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:57:56 compute-0 sudo[289022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:57:56 compute-0 sudo[289022]: pam_unix(sudo:session): session closed for user root
Oct 14 08:57:56 compute-0 sudo[289047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:57:56 compute-0 sudo[289047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:57:56 compute-0 sudo[289047]: pam_unix(sudo:session): session closed for user root
Oct 14 08:57:56 compute-0 sudo[289072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:57:56 compute-0 sudo[289072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:57:56 compute-0 sudo[289072]: pam_unix(sudo:session): session closed for user root
Oct 14 08:57:56 compute-0 sudo[289097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 08:57:56 compute-0 sudo[289097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:57:56 compute-0 ceph-mon[74249]: pgmap v1186: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 208 op/s
Oct 14 08:57:56 compute-0 nova_compute[259627]: 2025-10-14 08:57:56.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:57 compute-0 podman[289162]: 2025-10-14 08:57:57.050891599 +0000 UTC m=+0.044698579 container create 3fe98af29fda9a8bde5122168a527db3747a49d79975a067b2dd9337e53cc831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_elbakyan, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 08:57:57 compute-0 systemd[1]: Started libpod-conmon-3fe98af29fda9a8bde5122168a527db3747a49d79975a067b2dd9337e53cc831.scope.
Oct 14 08:57:57 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:57:57 compute-0 podman[289162]: 2025-10-14 08:57:57.030888548 +0000 UTC m=+0.024695618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:57:57 compute-0 podman[289162]: 2025-10-14 08:57:57.133651421 +0000 UTC m=+0.127458451 container init 3fe98af29fda9a8bde5122168a527db3747a49d79975a067b2dd9337e53cc831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 14 08:57:57 compute-0 podman[289162]: 2025-10-14 08:57:57.14011279 +0000 UTC m=+0.133919780 container start 3fe98af29fda9a8bde5122168a527db3747a49d79975a067b2dd9337e53cc831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:57:57 compute-0 podman[289162]: 2025-10-14 08:57:57.143282818 +0000 UTC m=+0.137089828 container attach 3fe98af29fda9a8bde5122168a527db3747a49d79975a067b2dd9337e53cc831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_elbakyan, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 08:57:57 compute-0 sleepy_elbakyan[289180]: 167 167
Oct 14 08:57:57 compute-0 systemd[1]: libpod-3fe98af29fda9a8bde5122168a527db3747a49d79975a067b2dd9337e53cc831.scope: Deactivated successfully.
Oct 14 08:57:57 compute-0 podman[289162]: 2025-10-14 08:57:57.145115053 +0000 UTC m=+0.138922063 container died 3fe98af29fda9a8bde5122168a527db3747a49d79975a067b2dd9337e53cc831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 14 08:57:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f517dce1b1f3a939264c309f9b581b37d11e9d215fc23d5e0f4c33f67d441cb-merged.mount: Deactivated successfully.
Oct 14 08:57:57 compute-0 podman[289162]: 2025-10-14 08:57:57.179463166 +0000 UTC m=+0.173270146 container remove 3fe98af29fda9a8bde5122168a527db3747a49d79975a067b2dd9337e53cc831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_elbakyan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 08:57:57 compute-0 systemd[1]: libpod-conmon-3fe98af29fda9a8bde5122168a527db3747a49d79975a067b2dd9337e53cc831.scope: Deactivated successfully.
Oct 14 08:57:57 compute-0 podman[289205]: 2025-10-14 08:57:57.355417997 +0000 UTC m=+0.041915160 container create 9ea2b63917f7ed5fcb6bd34413ea13de48b7e9626466c13cbc936bc21fe49098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wing, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 14 08:57:57 compute-0 systemd[1]: Started libpod-conmon-9ea2b63917f7ed5fcb6bd34413ea13de48b7e9626466c13cbc936bc21fe49098.scope.
Oct 14 08:57:57 compute-0 podman[289205]: 2025-10-14 08:57:57.334669207 +0000 UTC m=+0.021166420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:57:57 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df155e7a6a13318cc33103fc92fc8e65bbe4c9994a1c7db8e3315c3b48c9231/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df155e7a6a13318cc33103fc92fc8e65bbe4c9994a1c7db8e3315c3b48c9231/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df155e7a6a13318cc33103fc92fc8e65bbe4c9994a1c7db8e3315c3b48c9231/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df155e7a6a13318cc33103fc92fc8e65bbe4c9994a1c7db8e3315c3b48c9231/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:57:57 compute-0 podman[289205]: 2025-10-14 08:57:57.450593454 +0000 UTC m=+0.137090637 container init 9ea2b63917f7ed5fcb6bd34413ea13de48b7e9626466c13cbc936bc21fe49098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wing, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:57:57 compute-0 podman[289205]: 2025-10-14 08:57:57.459177835 +0000 UTC m=+0.145674998 container start 9ea2b63917f7ed5fcb6bd34413ea13de48b7e9626466c13cbc936bc21fe49098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wing, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:57:57 compute-0 podman[289205]: 2025-10-14 08:57:57.462521617 +0000 UTC m=+0.149018780 container attach 9ea2b63917f7ed5fcb6bd34413ea13de48b7e9626466c13cbc936bc21fe49098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wing, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct 14 08:57:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:57:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 88 op/s
Oct 14 08:57:58 compute-0 interesting_wing[289221]: {
Oct 14 08:57:58 compute-0 interesting_wing[289221]:     "0": [
Oct 14 08:57:58 compute-0 interesting_wing[289221]:         {
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "devices": [
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "/dev/loop3"
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             ],
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "lv_name": "ceph_lv0",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "lv_size": "21470642176",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "name": "ceph_lv0",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "tags": {
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.cluster_name": "ceph",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.crush_device_class": "",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.encrypted": "0",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.osd_id": "0",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.type": "block",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.vdo": "0"
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             },
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "type": "block",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "vg_name": "ceph_vg0"
Oct 14 08:57:58 compute-0 interesting_wing[289221]:         }
Oct 14 08:57:58 compute-0 interesting_wing[289221]:     ],
Oct 14 08:57:58 compute-0 interesting_wing[289221]:     "1": [
Oct 14 08:57:58 compute-0 interesting_wing[289221]:         {
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "devices": [
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "/dev/loop4"
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             ],
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "lv_name": "ceph_lv1",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "lv_size": "21470642176",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "name": "ceph_lv1",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "tags": {
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.cluster_name": "ceph",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.crush_device_class": "",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.encrypted": "0",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.osd_id": "1",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.type": "block",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.vdo": "0"
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             },
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "type": "block",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "vg_name": "ceph_vg1"
Oct 14 08:57:58 compute-0 interesting_wing[289221]:         }
Oct 14 08:57:58 compute-0 interesting_wing[289221]:     ],
Oct 14 08:57:58 compute-0 interesting_wing[289221]:     "2": [
Oct 14 08:57:58 compute-0 interesting_wing[289221]:         {
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "devices": [
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "/dev/loop5"
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             ],
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "lv_name": "ceph_lv2",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "lv_size": "21470642176",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "name": "ceph_lv2",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "tags": {
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.cluster_name": "ceph",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.crush_device_class": "",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.encrypted": "0",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.osd_id": "2",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.type": "block",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:                 "ceph.vdo": "0"
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             },
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "type": "block",
Oct 14 08:57:58 compute-0 interesting_wing[289221]:             "vg_name": "ceph_vg2"
Oct 14 08:57:58 compute-0 interesting_wing[289221]:         }
Oct 14 08:57:58 compute-0 interesting_wing[289221]:     ]
Oct 14 08:57:58 compute-0 interesting_wing[289221]: }
Oct 14 08:57:58 compute-0 systemd[1]: libpod-9ea2b63917f7ed5fcb6bd34413ea13de48b7e9626466c13cbc936bc21fe49098.scope: Deactivated successfully.
Oct 14 08:57:58 compute-0 podman[289205]: 2025-10-14 08:57:58.300602427 +0000 UTC m=+0.987099620 container died 9ea2b63917f7ed5fcb6bd34413ea13de48b7e9626466c13cbc936bc21fe49098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wing, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:57:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4df155e7a6a13318cc33103fc92fc8e65bbe4c9994a1c7db8e3315c3b48c9231-merged.mount: Deactivated successfully.
Oct 14 08:57:58 compute-0 podman[289205]: 2025-10-14 08:57:58.360088138 +0000 UTC m=+1.046585301 container remove 9ea2b63917f7ed5fcb6bd34413ea13de48b7e9626466c13cbc936bc21fe49098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wing, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 08:57:58 compute-0 systemd[1]: libpod-conmon-9ea2b63917f7ed5fcb6bd34413ea13de48b7e9626466c13cbc936bc21fe49098.scope: Deactivated successfully.
Oct 14 08:57:58 compute-0 nova_compute[259627]: 2025-10-14 08:57:58.379 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432263.3788803, fc212c27-f5c2-4656-9c1f-7c39234fea45 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:57:58 compute-0 nova_compute[259627]: 2025-10-14 08:57:58.381 2 INFO nova.compute.manager [-] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] VM Stopped (Lifecycle Event)
Oct 14 08:57:58 compute-0 sudo[289097]: pam_unix(sudo:session): session closed for user root
Oct 14 08:57:58 compute-0 nova_compute[259627]: 2025-10-14 08:57:58.405 2 DEBUG nova.compute.manager [None req-62ae390c-765f-4506-a3bc-f70a6734491e - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:57:58 compute-0 nova_compute[259627]: 2025-10-14 08:57:58.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:57:58 compute-0 sudo[289244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:57:58 compute-0 sudo[289244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:57:58 compute-0 sudo[289244]: pam_unix(sudo:session): session closed for user root
Oct 14 08:57:58 compute-0 sudo[289269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:57:58 compute-0 sudo[289269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:57:58 compute-0 sudo[289269]: pam_unix(sudo:session): session closed for user root
Oct 14 08:57:58 compute-0 sudo[289294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:57:58 compute-0 sudo[289294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:57:58 compute-0 sudo[289294]: pam_unix(sudo:session): session closed for user root
Oct 14 08:57:58 compute-0 sudo[289319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 08:57:58 compute-0 sudo[289319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:57:58 compute-0 ceph-mon[74249]: pgmap v1187: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 88 op/s
Oct 14 08:57:58 compute-0 podman[289382]: 2025-10-14 08:57:58.980638205 +0000 UTC m=+0.038232559 container create cab34cf7d99a98e7859e8c8d718d0a459d99c2fc1a2a501ebb03fd9528c4c1fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 08:57:59 compute-0 systemd[1]: Started libpod-conmon-cab34cf7d99a98e7859e8c8d718d0a459d99c2fc1a2a501ebb03fd9528c4c1fb.scope.
Oct 14 08:57:59 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:57:59 compute-0 podman[289382]: 2025-10-14 08:57:59.040884185 +0000 UTC m=+0.098478559 container init cab34cf7d99a98e7859e8c8d718d0a459d99c2fc1a2a501ebb03fd9528c4c1fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 08:57:59 compute-0 podman[289382]: 2025-10-14 08:57:59.047444646 +0000 UTC m=+0.105039000 container start cab34cf7d99a98e7859e8c8d718d0a459d99c2fc1a2a501ebb03fd9528c4c1fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 08:57:59 compute-0 podman[289382]: 2025-10-14 08:57:59.051356962 +0000 UTC m=+0.108951356 container attach cab34cf7d99a98e7859e8c8d718d0a459d99c2fc1a2a501ebb03fd9528c4c1fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 08:57:59 compute-0 nifty_thompson[289398]: 167 167
Oct 14 08:57:59 compute-0 systemd[1]: libpod-cab34cf7d99a98e7859e8c8d718d0a459d99c2fc1a2a501ebb03fd9528c4c1fb.scope: Deactivated successfully.
Oct 14 08:57:59 compute-0 podman[289382]: 2025-10-14 08:57:59.053890854 +0000 UTC m=+0.111485208 container died cab34cf7d99a98e7859e8c8d718d0a459d99c2fc1a2a501ebb03fd9528c4c1fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 08:57:59 compute-0 podman[289382]: 2025-10-14 08:57:58.963917165 +0000 UTC m=+0.021511539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:57:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-30028a58ae730c2ebbef6a6453f98d27678ac1fa69dfacff7f1d70dbef238903-merged.mount: Deactivated successfully.
Oct 14 08:57:59 compute-0 podman[289382]: 2025-10-14 08:57:59.095740112 +0000 UTC m=+0.153334456 container remove cab34cf7d99a98e7859e8c8d718d0a459d99c2fc1a2a501ebb03fd9528c4c1fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:57:59 compute-0 systemd[1]: libpod-conmon-cab34cf7d99a98e7859e8c8d718d0a459d99c2fc1a2a501ebb03fd9528c4c1fb.scope: Deactivated successfully.
Oct 14 08:57:59 compute-0 podman[289422]: 2025-10-14 08:57:59.254390897 +0000 UTC m=+0.038833964 container create b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 08:57:59 compute-0 systemd[1]: Started libpod-conmon-b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646.scope.
Oct 14 08:57:59 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2581e58298a1d9675f4d601442d0226923d2368d04f0f4d2ee2f6a79990e5a8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2581e58298a1d9675f4d601442d0226923d2368d04f0f4d2ee2f6a79990e5a8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2581e58298a1d9675f4d601442d0226923d2368d04f0f4d2ee2f6a79990e5a8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2581e58298a1d9675f4d601442d0226923d2368d04f0f4d2ee2f6a79990e5a8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:57:59 compute-0 podman[289422]: 2025-10-14 08:57:59.319540757 +0000 UTC m=+0.103983844 container init b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 08:57:59 compute-0 podman[289422]: 2025-10-14 08:57:59.332169457 +0000 UTC m=+0.116612514 container start b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:57:59 compute-0 podman[289422]: 2025-10-14 08:57:59.238149769 +0000 UTC m=+0.022592866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:57:59 compute-0 podman[289422]: 2025-10-14 08:57:59.335170411 +0000 UTC m=+0.119613478 container attach b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 08:57:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 88 op/s
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]: {
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:         "osd_id": 2,
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:         "type": "bluestore"
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:     },
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:         "osd_id": 1,
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:         "type": "bluestore"
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:     },
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:         "osd_id": 0,
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:         "type": "bluestore"
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]:     }
Oct 14 08:58:00 compute-0 dazzling_thompson[289439]: }
Oct 14 08:58:00 compute-0 systemd[1]: libpod-b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646.scope: Deactivated successfully.
Oct 14 08:58:00 compute-0 conmon[289439]: conmon b17236158007fedfe48d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646.scope/container/memory.events
Oct 14 08:58:00 compute-0 podman[289422]: 2025-10-14 08:58:00.330450231 +0000 UTC m=+1.114893298 container died b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:58:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-2581e58298a1d9675f4d601442d0226923d2368d04f0f4d2ee2f6a79990e5a8d-merged.mount: Deactivated successfully.
Oct 14 08:58:00 compute-0 podman[289422]: 2025-10-14 08:58:00.395047308 +0000 UTC m=+1.179490375 container remove b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:58:00 compute-0 systemd[1]: libpod-conmon-b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646.scope: Deactivated successfully.
Oct 14 08:58:00 compute-0 sudo[289319]: pam_unix(sudo:session): session closed for user root
Oct 14 08:58:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 08:58:00 compute-0 podman[289481]: 2025-10-14 08:58:00.436828574 +0000 UTC m=+0.071030126 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 08:58:00 compute-0 podman[289472]: 2025-10-14 08:58:00.436866964 +0000 UTC m=+0.076891839 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:58:00 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:58:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 08:58:00 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:58:00 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev ce990c2d-32e7-4665-ac9b-e61463cd5840 does not exist
Oct 14 08:58:00 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 8a274596-c563-4684-93a0-a6e67c82f22a does not exist
Oct 14 08:58:00 compute-0 sudo[289521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:58:00 compute-0 sudo[289521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:58:00 compute-0 sudo[289521]: pam_unix(sudo:session): session closed for user root
Oct 14 08:58:00 compute-0 sudo[289546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 08:58:00 compute-0 sudo[289546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:58:00 compute-0 sudo[289546]: pam_unix(sudo:session): session closed for user root
Oct 14 08:58:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct 14 08:58:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct 14 08:58:00 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct 14 08:58:00 compute-0 nova_compute[259627]: 2025-10-14 08:58:00.818 2 DEBUG nova.compute.manager [None req-5f4511de-3744-4d0d-8dfd-eaa94387243a 4086d396d4af49dfad54dbc8ee5ac67c a636329e57e7406abf5fc3ca0a39a6f0 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:00 compute-0 nova_compute[259627]: 2025-10-14 08:58:00.826 2 INFO nova.compute.manager [None req-5f4511de-3744-4d0d-8dfd-eaa94387243a 4086d396d4af49dfad54dbc8ee5ac67c a636329e57e7406abf5fc3ca0a39a6f0 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Retrieving diagnostics
Oct 14 08:58:01 compute-0 nova_compute[259627]: 2025-10-14 08:58:01.101 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Acquiring lock "c79d4673-ee43-418f-8d38-f48cb8dc4659" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:01 compute-0 nova_compute[259627]: 2025-10-14 08:58:01.102 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "c79d4673-ee43-418f-8d38-f48cb8dc4659" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:01 compute-0 nova_compute[259627]: 2025-10-14 08:58:01.102 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Acquiring lock "c79d4673-ee43-418f-8d38-f48cb8dc4659-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:01 compute-0 nova_compute[259627]: 2025-10-14 08:58:01.102 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "c79d4673-ee43-418f-8d38-f48cb8dc4659-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:01 compute-0 nova_compute[259627]: 2025-10-14 08:58:01.102 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "c79d4673-ee43-418f-8d38-f48cb8dc4659-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:01 compute-0 nova_compute[259627]: 2025-10-14 08:58:01.103 2 INFO nova.compute.manager [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Terminating instance
Oct 14 08:58:01 compute-0 nova_compute[259627]: 2025-10-14 08:58:01.104 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Acquiring lock "refresh_cache-c79d4673-ee43-418f-8d38-f48cb8dc4659" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:01 compute-0 nova_compute[259627]: 2025-10-14 08:58:01.104 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Acquired lock "refresh_cache-c79d4673-ee43-418f-8d38-f48cb8dc4659" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:01 compute-0 nova_compute[259627]: 2025-10-14 08:58:01.104 2 DEBUG nova.network.neutron [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:58:01 compute-0 nova_compute[259627]: 2025-10-14 08:58:01.406 2 DEBUG nova.network.neutron [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:58:01 compute-0 ceph-mon[74249]: pgmap v1188: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 88 op/s
Oct 14 08:58:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:58:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:58:01 compute-0 ceph-mon[74249]: osdmap e135: 3 total, 3 up, 3 in
Oct 14 08:58:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 120 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Oct 14 08:58:01 compute-0 nova_compute[259627]: 2025-10-14 08:58:01.732 2 DEBUG nova.network.neutron [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:01 compute-0 nova_compute[259627]: 2025-10-14 08:58:01.749 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Releasing lock "refresh_cache-c79d4673-ee43-418f-8d38-f48cb8dc4659" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:58:01 compute-0 nova_compute[259627]: 2025-10-14 08:58:01.749 2 DEBUG nova.compute.manager [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:58:01 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000015.scope: Deactivated successfully.
Oct 14 08:58:01 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000015.scope: Consumed 12.061s CPU time.
Oct 14 08:58:01 compute-0 systemd-machined[214636]: Machine qemu-22-instance-00000015 terminated.
Oct 14 08:58:01 compute-0 nova_compute[259627]: 2025-10-14 08:58:01.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:01 compute-0 nova_compute[259627]: 2025-10-14 08:58:01.981 2 INFO nova.virt.libvirt.driver [-] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Instance destroyed successfully.
Oct 14 08:58:01 compute-0 nova_compute[259627]: 2025-10-14 08:58:01.982 2 DEBUG nova.objects.instance [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lazy-loading 'resources' on Instance uuid c79d4673-ee43-418f-8d38-f48cb8dc4659 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:02 compute-0 nova_compute[259627]: 2025-10-14 08:58:02.471 2 INFO nova.virt.libvirt.driver [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Deleting instance files /var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659_del
Oct 14 08:58:02 compute-0 nova_compute[259627]: 2025-10-14 08:58:02.473 2 INFO nova.virt.libvirt.driver [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Deletion of /var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659_del complete
Oct 14 08:58:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:58:02 compute-0 nova_compute[259627]: 2025-10-14 08:58:02.541 2 INFO nova.compute.manager [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 14 08:58:02 compute-0 nova_compute[259627]: 2025-10-14 08:58:02.541 2 DEBUG oslo.service.loopingcall [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:58:02 compute-0 nova_compute[259627]: 2025-10-14 08:58:02.542 2 DEBUG nova.compute.manager [-] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:58:02 compute-0 nova_compute[259627]: 2025-10-14 08:58:02.542 2 DEBUG nova.network.neutron [-] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:58:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:58:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:58:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:58:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:58:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:58:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:58:02 compute-0 nova_compute[259627]: 2025-10-14 08:58:02.830 2 DEBUG nova.network.neutron [-] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:58:02 compute-0 nova_compute[259627]: 2025-10-14 08:58:02.854 2 DEBUG nova.network.neutron [-] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:02 compute-0 nova_compute[259627]: 2025-10-14 08:58:02.871 2 INFO nova.compute.manager [-] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Took 0.33 seconds to deallocate network for instance.
Oct 14 08:58:02 compute-0 nova_compute[259627]: 2025-10-14 08:58:02.911 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:02 compute-0 nova_compute[259627]: 2025-10-14 08:58:02.912 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:02 compute-0 nova_compute[259627]: 2025-10-14 08:58:02.967 2 DEBUG oslo_concurrency.processutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:58:03 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1198239872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:03 compute-0 nova_compute[259627]: 2025-10-14 08:58:03.428 2 DEBUG oslo_concurrency.processutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:03 compute-0 nova_compute[259627]: 2025-10-14 08:58:03.461 2 DEBUG nova.compute.provider_tree [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:58:03 compute-0 ceph-mon[74249]: pgmap v1190: 305 pgs: 305 active+clean; 120 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Oct 14 08:58:03 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1198239872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:03 compute-0 nova_compute[259627]: 2025-10-14 08:58:03.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:03 compute-0 nova_compute[259627]: 2025-10-14 08:58:03.486 2 DEBUG nova.scheduler.client.report [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:58:03 compute-0 nova_compute[259627]: 2025-10-14 08:58:03.512 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 120 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Oct 14 08:58:03 compute-0 nova_compute[259627]: 2025-10-14 08:58:03.554 2 INFO nova.scheduler.client.report [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Deleted allocations for instance c79d4673-ee43-418f-8d38-f48cb8dc4659
Oct 14 08:58:03 compute-0 nova_compute[259627]: 2025-10-14 08:58:03.609 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "c79d4673-ee43-418f-8d38-f48cb8dc4659" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:05 compute-0 ceph-mon[74249]: pgmap v1191: 305 pgs: 305 active+clean; 120 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Oct 14 08:58:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 08:58:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3465458057' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 08:58:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 08:58:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3465458057' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 08:58:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 426 KiB/s rd, 2.6 MiB/s wr, 133 op/s
Oct 14 08:58:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3465458057' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 08:58:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3465458057' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 08:58:06 compute-0 nova_compute[259627]: 2025-10-14 08:58:06.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:07.015 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:07.015 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:07.015 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:58:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct 14 08:58:07 compute-0 ceph-mon[74249]: pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 426 KiB/s rd, 2.6 MiB/s wr, 133 op/s
Oct 14 08:58:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct 14 08:58:07 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct 14 08:58:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 532 KiB/s rd, 3.2 MiB/s wr, 166 op/s
Oct 14 08:58:07 compute-0 nova_compute[259627]: 2025-10-14 08:58:07.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:58:07 compute-0 nova_compute[259627]: 2025-10-14 08:58:07.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 08:58:08 compute-0 nova_compute[259627]: 2025-10-14 08:58:08.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:08 compute-0 ceph-mon[74249]: osdmap e136: 3 total, 3 up, 3 in
Oct 14 08:58:08 compute-0 ceph-mon[74249]: pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 532 KiB/s rd, 3.2 MiB/s wr, 166 op/s
Oct 14 08:58:08 compute-0 nova_compute[259627]: 2025-10-14 08:58:08.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:58:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 42 KiB/s wr, 69 op/s
Oct 14 08:58:09 compute-0 podman[289616]: 2025-10-14 08:58:09.714809022 +0000 UTC m=+0.113816485 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 08:58:09 compute-0 podman[289615]: 2025-10-14 08:58:09.729211016 +0000 UTC m=+0.138275446 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 08:58:10 compute-0 ceph-mon[74249]: pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 42 KiB/s wr, 69 op/s
Oct 14 08:58:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 36 KiB/s wr, 60 op/s
Oct 14 08:58:11 compute-0 nova_compute[259627]: 2025-10-14 08:58:11.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:58:12 compute-0 ceph-mon[74249]: pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 36 KiB/s wr, 60 op/s
Oct 14 08:58:12 compute-0 nova_compute[259627]: 2025-10-14 08:58:12.988 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:58:13 compute-0 nova_compute[259627]: 2025-10-14 08:58:13.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 36 KiB/s wr, 60 op/s
Oct 14 08:58:13 compute-0 nova_compute[259627]: 2025-10-14 08:58:13.781 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "51c76e0f-284d-4122-83b4-32c4518b9056" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:13 compute-0 nova_compute[259627]: 2025-10-14 08:58:13.782 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "51c76e0f-284d-4122-83b4-32c4518b9056" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:13 compute-0 nova_compute[259627]: 2025-10-14 08:58:13.800 2 DEBUG nova.compute.manager [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:58:13 compute-0 nova_compute[259627]: 2025-10-14 08:58:13.972 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:58:14 compute-0 nova_compute[259627]: 2025-10-14 08:58:14.003 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:14 compute-0 nova_compute[259627]: 2025-10-14 08:58:14.004 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:14 compute-0 nova_compute[259627]: 2025-10-14 08:58:14.014 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:58:14 compute-0 nova_compute[259627]: 2025-10-14 08:58:14.014 2 INFO nova.compute.claims [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:58:14 compute-0 nova_compute[259627]: 2025-10-14 08:58:14.315 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:14 compute-0 ceph-mon[74249]: pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 36 KiB/s wr, 60 op/s
Oct 14 08:58:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:58:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2246393399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:14 compute-0 nova_compute[259627]: 2025-10-14 08:58:14.811 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:14 compute-0 nova_compute[259627]: 2025-10-14 08:58:14.821 2 DEBUG nova.compute.provider_tree [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:58:14 compute-0 nova_compute[259627]: 2025-10-14 08:58:14.853 2 DEBUG nova.scheduler.client.report [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:58:14 compute-0 nova_compute[259627]: 2025-10-14 08:58:14.903 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:14 compute-0 nova_compute[259627]: 2025-10-14 08:58:14.904 2 DEBUG nova.compute.manager [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:58:14 compute-0 nova_compute[259627]: 2025-10-14 08:58:14.977 2 DEBUG nova.compute.manager [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 14 08:58:14 compute-0 nova_compute[259627]: 2025-10-14 08:58:14.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:58:14 compute-0 nova_compute[259627]: 2025-10-14 08:58:14.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.013 2 INFO nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.039 2 DEBUG nova.compute.manager [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.161 2 DEBUG nova.compute.manager [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.164 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.165 2 INFO nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Creating image(s)
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.206 2 DEBUG nova.storage.rbd_utils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.248 2 DEBUG nova.storage.rbd_utils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.278 2 DEBUG nova.storage.rbd_utils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.283 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.380 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.381 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.382 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.382 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.409 2 DEBUG nova.storage.rbd_utils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.413 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 51c76e0f-284d-4122-83b4-32c4518b9056_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:58:15 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2246393399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.681 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 51c76e0f-284d-4122-83b4-32c4518b9056_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.769 2 DEBUG nova.storage.rbd_utils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] resizing rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.884 2 DEBUG nova.objects.instance [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'migration_context' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.903 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.904 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Ensure instance console log exists: /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.904 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.905 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.905 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.906 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.910 2 WARNING nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.916 2 DEBUG nova.virt.libvirt.host [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.917 2 DEBUG nova.virt.libvirt.host [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.922 2 DEBUG nova.virt.libvirt.host [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.923 2 DEBUG nova.virt.libvirt.host [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.923 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.923 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.924 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.924 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.924 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.925 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.925 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.925 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.925 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.926 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.926 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.926 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.929 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:58:15 compute-0 nova_compute[259627]: 2025-10-14 08:58:15.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:58:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3715691113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:16 compute-0 nova_compute[259627]: 2025-10-14 08:58:16.367 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:16 compute-0 nova_compute[259627]: 2025-10-14 08:58:16.394 2 DEBUG nova.storage.rbd_utils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:16 compute-0 nova_compute[259627]: 2025-10-14 08:58:16.400 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:16 compute-0 ceph-mon[74249]: pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:58:16 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3715691113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3301376505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:16 compute-0 nova_compute[259627]: 2025-10-14 08:58:16.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:16 compute-0 nova_compute[259627]: 2025-10-14 08:58:16.909 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:16 compute-0 nova_compute[259627]: 2025-10-14 08:58:16.912 2 DEBUG nova.objects.instance [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'pci_devices' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:16 compute-0 nova_compute[259627]: 2025-10-14 08:58:16.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:58:16 compute-0 nova_compute[259627]: 2025-10-14 08:58:16.980 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432281.9769046, c79d4673-ee43-418f-8d38-f48cb8dc4659 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:16 compute-0 nova_compute[259627]: 2025-10-14 08:58:16.981 2 INFO nova.compute.manager [-] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] VM Stopped (Lifecycle Event)
Oct 14 08:58:16 compute-0 nova_compute[259627]: 2025-10-14 08:58:16.987 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:58:16 compute-0 nova_compute[259627]:   <uuid>51c76e0f-284d-4122-83b4-32c4518b9056</uuid>
Oct 14 08:58:16 compute-0 nova_compute[259627]:   <name>instance-00000016</name>
Oct 14 08:58:16 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:58:16 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:58:16 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersAdmin275Test-server-546094612</nova:name>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:58:15</nova:creationTime>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:58:16 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:58:16 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:58:16 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:58:16 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:58:16 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:58:16 compute-0 nova_compute[259627]:         <nova:user uuid="24a7b84f511340ae859b668a0e7becf6">tempest-ServersAdmin275Test-1795131452-project-member</nova:user>
Oct 14 08:58:16 compute-0 nova_compute[259627]:         <nova:project uuid="61066a48551647f18a4cfb7a7147e7ed">tempest-ServersAdmin275Test-1795131452</nova:project>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:58:16 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:58:16 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <system>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <entry name="serial">51c76e0f-284d-4122-83b4-32c4518b9056</entry>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <entry name="uuid">51c76e0f-284d-4122-83b4-32c4518b9056</entry>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     </system>
Oct 14 08:58:16 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:58:16 compute-0 nova_compute[259627]:   <os>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:   </os>
Oct 14 08:58:16 compute-0 nova_compute[259627]:   <features>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:   </features>
Oct 14 08:58:16 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:58:16 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:58:16 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/51c76e0f-284d-4122-83b4-32c4518b9056_disk">
Oct 14 08:58:16 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       </source>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:58:16 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/51c76e0f-284d-4122-83b4-32c4518b9056_disk.config">
Oct 14 08:58:16 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       </source>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:58:16 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/console.log" append="off"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <video>
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     </video>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:58:16 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:58:16 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:58:16 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:58:16 compute-0 nova_compute[259627]: </domain>
Oct 14 08:58:16 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.198 2 DEBUG nova.compute.manager [None req-07187fbb-98d6-4000-b259-b6be5f75ab26 - - - - - -] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.238 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.239 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.240 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.240 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.241 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.285 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.286 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.288 2 INFO nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Using config drive
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.318 2 DEBUG nova.storage.rbd_utils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.372 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.372 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.394 2 DEBUG nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:58:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.527 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.528 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.538 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:58:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.538 2 INFO nova.compute.claims [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:58:17 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3301376505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.657 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:58:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4283171025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.740 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.800 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.800 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.844 2 INFO nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Creating config drive at /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.849 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_6cw0xl_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:17 compute-0 nova_compute[259627]: 2025-10-14 08:58:17.977 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_6cw0xl_" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.001 2 DEBUG nova.storage.rbd_utils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.005 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.061 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.062 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4575MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.062 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:58:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2348919076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.153 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.160 2 DEBUG nova.compute.provider_tree [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.187 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.188 2 INFO nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deleting local config drive /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config because it was imported into RBD.
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.192 2 DEBUG nova.scheduler.client.report [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.222 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.223 2 DEBUG nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.228 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:18 compute-0 systemd-machined[214636]: New machine qemu-23-instance-00000016.
Oct 14 08:58:18 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-00000016.
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.327 2 DEBUG nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.328 2 DEBUG nova.network.neutron [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.349 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 51c76e0f-284d-4122-83b4-32c4518b9056 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.349 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance a071857d-db87-4931-95ad-f8c627f74160 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.350 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.350 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.355 2 INFO nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.373 2 DEBUG nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.432 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.507 2 DEBUG nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.509 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.509 2 INFO nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Creating image(s)
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.532 2 DEBUG nova.storage.rbd_utils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] rbd image a071857d-db87-4931-95ad-f8c627f74160_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.562 2 DEBUG nova.storage.rbd_utils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] rbd image a071857d-db87-4931-95ad-f8c627f74160_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.595 2 DEBUG nova.storage.rbd_utils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] rbd image a071857d-db87-4931-95ad-f8c627f74160_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.610 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.657 2 DEBUG nova.policy [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '664abc01a11d458d9644488bf31e47f4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '99649891054745d8a5186a1ad099e5a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:58:18 compute-0 ceph-mon[74249]: pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:58:18 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4283171025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:18 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2348919076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.721 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.722 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.724 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.725 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.755 2 DEBUG nova.storage.rbd_utils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] rbd image a071857d-db87-4931-95ad-f8c627f74160_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.760 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 a071857d-db87-4931-95ad-f8c627f74160_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:58:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1583583320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.989 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.995 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:58:18 compute-0 nova_compute[259627]: 2025-10-14 08:58:18.997 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 a071857d-db87-4931-95ad-f8c627f74160_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.027 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.060 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.061 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.061 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.062 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.067 2 DEBUG nova.storage.rbd_utils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] resizing rbd image a071857d-db87-4931-95ad-f8c627f74160_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.096 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.154 2 DEBUG nova.objects.instance [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lazy-loading 'migration_context' on Instance uuid a071857d-db87-4931-95ad-f8c627f74160 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.171 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.172 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Ensure instance console log exists: /var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.172 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.173 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.173 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.315 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432299.315043, 51c76e0f-284d-4122-83b4-32c4518b9056 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.316 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] VM Resumed (Lifecycle Event)
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.317 2 DEBUG nova.compute.manager [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.318 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.322 2 INFO nova.virt.libvirt.driver [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance spawned successfully.
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.322 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.354 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.360 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.363 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.363 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.363 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.364 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.364 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.365 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.410 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.412 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432299.3158717, 51c76e0f-284d-4122-83b4-32c4518b9056 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.412 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] VM Started (Lifecycle Event)
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.439 2 INFO nova.compute.manager [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Took 4.28 seconds to spawn the instance on the hypervisor.
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.440 2 DEBUG nova.compute.manager [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.441 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.450 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.494 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.533 2 INFO nova.compute.manager [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Took 5.68 seconds to build instance.
Oct 14 08:58:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.564 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "51c76e0f-284d-4122-83b4-32c4518b9056" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:19 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1583583320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:19 compute-0 nova_compute[259627]: 2025-10-14 08:58:19.816 2 DEBUG nova.network.neutron [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Successfully created port: fd335735-b88a-42f7-911e-af4b2b9396fb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:58:20 compute-0 sshd-session[290260]: banner exchange: Connection from 93.123.109.214 port 37582: invalid format
Oct 14 08:58:20 compute-0 nova_compute[259627]: 2025-10-14 08:58:20.097 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:58:20 compute-0 nova_compute[259627]: 2025-10-14 08:58:20.183 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "2826d9ce-d739-49a1-abfa-80cee62173fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:20 compute-0 nova_compute[259627]: 2025-10-14 08:58:20.184 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:20 compute-0 nova_compute[259627]: 2025-10-14 08:58:20.213 2 DEBUG nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:58:20 compute-0 sshd-session[290261]: banner exchange: Connection from 93.123.109.214 port 37598: invalid format
Oct 14 08:58:20 compute-0 nova_compute[259627]: 2025-10-14 08:58:20.295 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:20 compute-0 nova_compute[259627]: 2025-10-14 08:58:20.296 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:20 compute-0 nova_compute[259627]: 2025-10-14 08:58:20.302 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:58:20 compute-0 nova_compute[259627]: 2025-10-14 08:58:20.303 2 INFO nova.compute.claims [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:58:20 compute-0 nova_compute[259627]: 2025-10-14 08:58:20.426 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:20 compute-0 ceph-mon[74249]: pgmap v1200: 305 pgs: 305 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Oct 14 08:58:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:58:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225914499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:20 compute-0 nova_compute[259627]: 2025-10-14 08:58:20.845 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:20 compute-0 nova_compute[259627]: 2025-10-14 08:58:20.850 2 DEBUG nova.compute.provider_tree [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:58:20 compute-0 nova_compute[259627]: 2025-10-14 08:58:20.870 2 DEBUG nova.scheduler.client.report [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:58:20 compute-0 nova_compute[259627]: 2025-10-14 08:58:20.904 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:20 compute-0 nova_compute[259627]: 2025-10-14 08:58:20.905 2 DEBUG nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:58:20 compute-0 nova_compute[259627]: 2025-10-14 08:58:20.974 2 DEBUG nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:58:20 compute-0 nova_compute[259627]: 2025-10-14 08:58:20.975 2 DEBUG nova.network.neutron [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:58:20 compute-0 nova_compute[259627]: 2025-10-14 08:58:20.982 2 DEBUG nova.network.neutron [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Successfully updated port: fd335735-b88a-42f7-911e-af4b2b9396fb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:58:20 compute-0 nova_compute[259627]: 2025-10-14 08:58:20.987 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.000 2 INFO nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.002 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.003 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquired lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.003 2 DEBUG nova.network.neutron [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.020 2 DEBUG nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.122 2 DEBUG nova.compute.manager [req-787c6cf2-1619-4d86-806b-f7f321b4ab08 req-6aeab69b-19a0-4fc4-b7b2-735a95947cee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-changed-fd335735-b88a-42f7-911e-af4b2b9396fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.123 2 DEBUG nova.compute.manager [req-787c6cf2-1619-4d86-806b-f7f321b4ab08 req-6aeab69b-19a0-4fc4-b7b2-735a95947cee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Refreshing instance network info cache due to event network-changed-fd335735-b88a-42f7-911e-af4b2b9396fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.124 2 DEBUG oslo_concurrency.lockutils [req-787c6cf2-1619-4d86-806b-f7f321b4ab08 req-6aeab69b-19a0-4fc4-b7b2-735a95947cee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.127 2 DEBUG nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.129 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.130 2 INFO nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Creating image(s)
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.160 2 DEBUG nova.storage.rbd_utils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 2826d9ce-d739-49a1-abfa-80cee62173fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.193 2 DEBUG nova.storage.rbd_utils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 2826d9ce-d739-49a1-abfa-80cee62173fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.221 2 DEBUG nova.storage.rbd_utils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 2826d9ce-d739-49a1-abfa-80cee62173fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.226 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.269 2 DEBUG nova.network.neutron [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.300 2 DEBUG nova.policy [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a217215c39e41fea2323ff7b3b4e6aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.310 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.312 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.313 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.313 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.348 2 DEBUG nova.storage.rbd_utils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 2826d9ce-d739-49a1-abfa-80cee62173fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.354 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2826d9ce-d739-49a1-abfa-80cee62173fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 134 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 98 op/s
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.619 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2826d9ce-d739-49a1-abfa-80cee62173fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.673 2 INFO nova.compute.manager [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Rebuilding instance
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.680 2 DEBUG nova.storage.rbd_utils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] resizing rbd image 2826d9ce-d739-49a1-abfa-80cee62173fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:58:21 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/225914499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.761 2 DEBUG nova.objects.instance [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'migration_context' on Instance uuid 2826d9ce-d739-49a1-abfa-80cee62173fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.779 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.780 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Ensure instance console log exists: /var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.780 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.781 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.781 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.925 2 DEBUG nova.network.neutron [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Successfully created port: 146ca52f-0b4f-46f0-9153-1120bf1c9e4e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.980 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:58:21 compute-0 nova_compute[259627]: 2025-10-14 08:58:21.981 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.033 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.067 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'trusted_certs' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.089 2 DEBUG nova.compute.manager [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.141 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'pci_requests' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.158 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'pci_devices' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.178 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'resources' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.199 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'migration_context' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.220 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.224 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.280 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.281 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.297 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.340 2 DEBUG nova.network.neutron [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Updating instance_info_cache with network_info: [{"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.360 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Releasing lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.360 2 DEBUG nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Instance network_info: |[{"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.361 2 DEBUG oslo_concurrency.lockutils [req-787c6cf2-1619-4d86-806b-f7f321b4ab08 req-6aeab69b-19a0-4fc4-b7b2-735a95947cee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.362 2 DEBUG nova.network.neutron [req-787c6cf2-1619-4d86-806b-f7f321b4ab08 req-6aeab69b-19a0-4fc4-b7b2-735a95947cee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Refreshing network info cache for port fd335735-b88a-42f7-911e-af4b2b9396fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.365 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Start _get_guest_xml network_info=[{"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.369 2 WARNING nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.371 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.372 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.378 2 DEBUG nova.virt.libvirt.host [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.379 2 DEBUG nova.virt.libvirt.host [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.381 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.381 2 INFO nova.compute.claims [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.409 2 DEBUG nova.virt.libvirt.host [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.410 2 DEBUG nova.virt.libvirt.host [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.410 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.410 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.411 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.411 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.411 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.412 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.412 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.412 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.413 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.413 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.413 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.414 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.416 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:58:22 compute-0 ceph-mon[74249]: pgmap v1201: 305 pgs: 305 active+clean; 134 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 98 op/s
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.812 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1611157671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.864 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.887 2 DEBUG nova.storage.rbd_utils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] rbd image a071857d-db87-4931-95ad-f8c627f74160_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:22 compute-0 nova_compute[259627]: 2025-10-14 08:58:22.890 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.021 2 DEBUG nova.network.neutron [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Successfully updated port: 146ca52f-0b4f-46f0-9153-1120bf1c9e4e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.044 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "refresh_cache-2826d9ce-d739-49a1-abfa-80cee62173fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.045 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquired lock "refresh_cache-2826d9ce-d739-49a1-abfa-80cee62173fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.045 2 DEBUG nova.network.neutron [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.174 2 DEBUG nova.compute.manager [req-2f3189ed-8229-4a5f-96ad-c6d1f4dc5156 req-de88b6da-5618-44f3-8ed9-23bd9b7c57a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Received event network-changed-146ca52f-0b4f-46f0-9153-1120bf1c9e4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.175 2 DEBUG nova.compute.manager [req-2f3189ed-8229-4a5f-96ad-c6d1f4dc5156 req-de88b6da-5618-44f3-8ed9-23bd9b7c57a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Refreshing instance network info cache due to event network-changed-146ca52f-0b4f-46f0-9153-1120bf1c9e4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.175 2 DEBUG oslo_concurrency.lockutils [req-2f3189ed-8229-4a5f-96ad-c6d1f4dc5156 req-de88b6da-5618-44f3-8ed9-23bd9b7c57a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2826d9ce-d739-49a1-abfa-80cee62173fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:58:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/488972167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.253 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.255 2 DEBUG nova.network.neutron [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.262 2 DEBUG nova.compute.provider_tree [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:58:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589485826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.286 2 DEBUG nova.scheduler.client.report [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.300 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.301 2 DEBUG nova.virt.libvirt.vif [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-635702397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-635702397',id=23,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='99649891054745d8a5186a1ad099e5a7',ramdisk_id='',reservation_id='r-swuxrn8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1504567615',owner_user_name='tempest-AttachInterfacesV270Test-1504567615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:18Z,user_data=None,user_id='664abc01a11d458d9644488bf31e47f4',uuid=a071857d-db87-4931-95ad-f8c627f74160,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.302 2 DEBUG nova.network.os_vif_util [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converting VIF {"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.303 2 DEBUG nova.network.os_vif_util [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:04:b2,bridge_name='br-int',has_traffic_filtering=True,id=fd335735-b88a-42f7-911e-af4b2b9396fb,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd335735-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.304 2 DEBUG nova.objects.instance [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid a071857d-db87-4931-95ad-f8c627f74160 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.317 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.317 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.325 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:58:23 compute-0 nova_compute[259627]:   <uuid>a071857d-db87-4931-95ad-f8c627f74160</uuid>
Oct 14 08:58:23 compute-0 nova_compute[259627]:   <name>instance-00000017</name>
Oct 14 08:58:23 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:58:23 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:58:23 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <nova:name>tempest-AttachInterfacesV270Test-server-635702397</nova:name>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:58:22</nova:creationTime>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:58:23 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:58:23 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:58:23 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:58:23 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:58:23 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:58:23 compute-0 nova_compute[259627]:         <nova:user uuid="664abc01a11d458d9644488bf31e47f4">tempest-AttachInterfacesV270Test-1504567615-project-member</nova:user>
Oct 14 08:58:23 compute-0 nova_compute[259627]:         <nova:project uuid="99649891054745d8a5186a1ad099e5a7">tempest-AttachInterfacesV270Test-1504567615</nova:project>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:58:23 compute-0 nova_compute[259627]:         <nova:port uuid="fd335735-b88a-42f7-911e-af4b2b9396fb">
Oct 14 08:58:23 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:58:23 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:58:23 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <system>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <entry name="serial">a071857d-db87-4931-95ad-f8c627f74160</entry>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <entry name="uuid">a071857d-db87-4931-95ad-f8c627f74160</entry>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     </system>
Oct 14 08:58:23 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:58:23 compute-0 nova_compute[259627]:   <os>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:   </os>
Oct 14 08:58:23 compute-0 nova_compute[259627]:   <features>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:   </features>
Oct 14 08:58:23 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:58:23 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:58:23 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/a071857d-db87-4931-95ad-f8c627f74160_disk">
Oct 14 08:58:23 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       </source>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:58:23 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/a071857d-db87-4931-95ad-f8c627f74160_disk.config">
Oct 14 08:58:23 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       </source>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:58:23 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:58:04:b2"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <target dev="tapfd335735-b8"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160/console.log" append="off"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <video>
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     </video>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:58:23 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:58:23 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:58:23 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:58:23 compute-0 nova_compute[259627]: </domain>
Oct 14 08:58:23 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.326 2 DEBUG nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Preparing to wait for external event network-vif-plugged-fd335735-b88a-42f7-911e-af4b2b9396fb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.327 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.327 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.327 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.328 2 DEBUG nova.virt.libvirt.vif [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-635702397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-635702397',id=23,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='99649891054745d8a5186a1ad099e5a7',ramdisk_id='',reservation_id='r-swuxrn8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1504567615',owner_user_name='tempest-AttachInterfacesV270Test-1504567615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:18Z,user_data=None,user_id='664abc01a11d458d9644488bf31e47f4',uuid=a071857d-db87-4931-95ad-f8c627f74160,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.328 2 DEBUG nova.network.os_vif_util [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converting VIF {"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.329 2 DEBUG nova.network.os_vif_util [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:04:b2,bridge_name='br-int',has_traffic_filtering=True,id=fd335735-b88a-42f7-911e-af4b2b9396fb,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd335735-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.329 2 DEBUG os_vif [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:04:b2,bridge_name='br-int',has_traffic_filtering=True,id=fd335735-b88a-42f7-911e-af4b2b9396fb,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd335735-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.330 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.331 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.334 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd335735-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd335735-b8, col_values=(('external_ids', {'iface-id': 'fd335735-b88a-42f7-911e-af4b2b9396fb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:04:b2', 'vm-uuid': 'a071857d-db87-4931-95ad-f8c627f74160'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.356 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.356 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:23 compute-0 NetworkManager[44885]: <info>  [1760432303.3677] manager: (tapfd335735-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.372 2 INFO nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.375 2 INFO os_vif [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:04:b2,bridge_name='br-int',has_traffic_filtering=True,id=fd335735-b88a-42f7-911e-af4b2b9396fb,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd335735-b8')
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.389 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.464 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.465 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.465 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] No VIF found with MAC fa:16:3e:58:04:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.466 2 INFO nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Using config drive
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.484 2 DEBUG nova.storage.rbd_utils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] rbd image a071857d-db87-4931-95ad-f8c627f74160_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 134 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 98 op/s
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.545 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.548 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.549 2 INFO nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Creating image(s)
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.575 2 DEBUG nova.storage.rbd_utils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.601 2 DEBUG nova.storage.rbd_utils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.626 2 DEBUG nova.storage.rbd_utils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.630 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.673 2 DEBUG nova.policy [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5aacb60ad29c418c9161e71bb72da036', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:58:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1611157671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/488972167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/589485826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.709 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.710 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.711 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.711 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.731 2 DEBUG nova.storage.rbd_utils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.734 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:23 compute-0 nova_compute[259627]: 2025-10-14 08:58:23.965 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.035 2 DEBUG nova.storage.rbd_utils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] resizing rbd image aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.120 2 DEBUG nova.objects.instance [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lazy-loading 'migration_context' on Instance uuid aefbf308-7f99-4a76-8d5e-54613f6bdf83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.132 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.132 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Ensure instance console log exists: /var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.132 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.133 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.133 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.178 2 INFO nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Creating config drive at /var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160/disk.config
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.182 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoenbk_71 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.240 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Successfully created port: 57b4d441-0c29-4419-b20b-3b5c4223b7a8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.310 2 DEBUG nova.network.neutron [req-787c6cf2-1619-4d86-806b-f7f321b4ab08 req-6aeab69b-19a0-4fc4-b7b2-735a95947cee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Updated VIF entry in instance network info cache for port fd335735-b88a-42f7-911e-af4b2b9396fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.311 2 DEBUG nova.network.neutron [req-787c6cf2-1619-4d86-806b-f7f321b4ab08 req-6aeab69b-19a0-4fc4-b7b2-735a95947cee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Updating instance_info_cache with network_info: [{"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.320 2 DEBUG nova.network.neutron [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Updating instance_info_cache with network_info: [{"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.325 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoenbk_71" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.326 2 DEBUG oslo_concurrency.lockutils [req-787c6cf2-1619-4d86-806b-f7f321b4ab08 req-6aeab69b-19a0-4fc4-b7b2-735a95947cee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.353 2 DEBUG nova.storage.rbd_utils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] rbd image a071857d-db87-4931-95ad-f8c627f74160_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.358 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160/disk.config a071857d-db87-4931-95ad-f8c627f74160_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.394 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Releasing lock "refresh_cache-2826d9ce-d739-49a1-abfa-80cee62173fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.395 2 DEBUG nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Instance network_info: |[{"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.396 2 DEBUG oslo_concurrency.lockutils [req-2f3189ed-8229-4a5f-96ad-c6d1f4dc5156 req-de88b6da-5618-44f3-8ed9-23bd9b7c57a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2826d9ce-d739-49a1-abfa-80cee62173fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.396 2 DEBUG nova.network.neutron [req-2f3189ed-8229-4a5f-96ad-c6d1f4dc5156 req-de88b6da-5618-44f3-8ed9-23bd9b7c57a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Refreshing network info cache for port 146ca52f-0b4f-46f0-9153-1120bf1c9e4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.398 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Start _get_guest_xml network_info=[{"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.403 2 WARNING nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.408 2 DEBUG nova.virt.libvirt.host [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.408 2 DEBUG nova.virt.libvirt.host [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.423 2 DEBUG nova.virt.libvirt.host [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.423 2 DEBUG nova.virt.libvirt.host [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.424 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.424 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.424 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.425 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.425 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.425 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.425 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.425 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.426 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.426 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.426 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.426 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.429 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.567 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160/disk.config a071857d-db87-4931-95ad-f8c627f74160_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.568 2 INFO nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Deleting local config drive /var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160/disk.config because it was imported into RBD.
Oct 14 08:58:24 compute-0 kernel: tapfd335735-b8: entered promiscuous mode
Oct 14 08:58:24 compute-0 NetworkManager[44885]: <info>  [1760432304.6206] manager: (tapfd335735-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Oct 14 08:58:24 compute-0 ovn_controller[152662]: 2025-10-14T08:58:24Z|00140|binding|INFO|Claiming lport fd335735-b88a-42f7-911e-af4b2b9396fb for this chassis.
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:24 compute-0 ovn_controller[152662]: 2025-10-14T08:58:24Z|00141|binding|INFO|fd335735-b88a-42f7-911e-af4b2b9396fb: Claiming fa:16:3e:58:04:b2 10.100.0.3
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.640 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:04:b2 10.100.0.3'], port_security=['fa:16:3e:58:04:b2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'a071857d-db87-4931-95ad-f8c627f74160', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e06007a-4993-4328-9612-b43b931e2e3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '99649891054745d8a5186a1ad099e5a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cdd7ee35-5316-4ad1-b1f1-84c66df2ce6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb643099-5fd1-493f-8a92-ffa6e64eb2b1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=fd335735-b88a-42f7-911e-af4b2b9396fb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.641 162547 INFO neutron.agent.ovn.metadata.agent [-] Port fd335735-b88a-42f7-911e-af4b2b9396fb in datapath 8e06007a-4993-4328-9612-b43b931e2e3b bound to our chassis
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.643 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e06007a-4993-4328-9612-b43b931e2e3b
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.659 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b58a496a-90ee-43f9-8a2f-840401fe2cdc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.659 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e06007a-41 in ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:58:24 compute-0 systemd-machined[214636]: New machine qemu-24-instance-00000017.
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.671 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e06007a-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.672 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[49fbefb9-c950-4d89-8764-e42c6463d97e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.673 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2428ea-d4cc-4823-b53f-fd92add5f4da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.684 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[6acb39eb-86cf-4ee7-aac6-0a43da11fb0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:24 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000017.
Oct 14 08:58:24 compute-0 ceph-mon[74249]: pgmap v1202: 305 pgs: 305 active+clean; 134 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 98 op/s
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.708 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ac682ee0-8bf2-4801-a0a2-7d3efbf71ee3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:24 compute-0 systemd-udevd[290797]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:24 compute-0 ovn_controller[152662]: 2025-10-14T08:58:24Z|00142|binding|INFO|Setting lport fd335735-b88a-42f7-911e-af4b2b9396fb ovn-installed in OVS
Oct 14 08:58:24 compute-0 ovn_controller[152662]: 2025-10-14T08:58:24Z|00143|binding|INFO|Setting lport fd335735-b88a-42f7-911e-af4b2b9396fb up in Southbound
Oct 14 08:58:24 compute-0 NetworkManager[44885]: <info>  [1760432304.7271] device (tapfd335735-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:58:24 compute-0 NetworkManager[44885]: <info>  [1760432304.7284] device (tapfd335735-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.758 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf6c403-87c2-4f8b-a45d-b95f281bd821]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:24 compute-0 NetworkManager[44885]: <info>  [1760432304.7692] manager: (tap8e06007a-40): new Veth device (/org/freedesktop/NetworkManager/Devices/76)
Oct 14 08:58:24 compute-0 systemd-udevd[290800]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.768 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[36af6436-80d5-424d-8b02-cbac564cd0b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.821 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4dfb51-0427-4209-bf08-d6fb95199ab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.825 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7a6730-642d-41ce-8765-7c1f85f5d304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:24 compute-0 NetworkManager[44885]: <info>  [1760432304.8548] device (tap8e06007a-40): carrier: link connected
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.863 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[61d9d308-e576-4ede-ab8d-ed716a80387f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.880 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[657db799-eeb2-48c3-aaa5-e23c3b7c57cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e06007a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:64:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608361, 'reachable_time': 37451, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290827, 'error': None, 'target': 'ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.898 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[98e50c6b-28db-4eb0-8ea6-c5292ad9f8fe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:643c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 608361, 'tstamp': 608361}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290828, 'error': None, 'target': 'ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.914 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8db4dee2-a35a-4b90-b32a-cc140605b92c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e06007a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:64:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608361, 'reachable_time': 37451, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290829, 'error': None, 'target': 'ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501182591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.946 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e72065e2-438b-444b-a1c0-ee773b3b1535]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.959 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.977 2 DEBUG nova.storage.rbd_utils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 2826d9ce-d739-49a1-abfa-80cee62173fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:24 compute-0 nova_compute[259627]: 2025-10-14 08:58:24.994 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:25.007 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9058b3c1-c734-4b43-be18-4832d3f33779]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:25.008 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e06007a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:25.008 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:25.009 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e06007a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:25 compute-0 NetworkManager[44885]: <info>  [1760432305.0110] manager: (tap8e06007a-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Oct 14 08:58:25 compute-0 kernel: tap8e06007a-40: entered promiscuous mode
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:25.013 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e06007a-40, col_values=(('external_ids', {'iface-id': 'd2ffe2ca-9cdc-4c2a-a690-8b35b09cf563'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:25 compute-0 ovn_controller[152662]: 2025-10-14T08:58:25Z|00144|binding|INFO|Releasing lport d2ffe2ca-9cdc-4c2a-a690-8b35b09cf563 from this chassis (sb_readonly=0)
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:25.037 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e06007a-4993-4328-9612-b43b931e2e3b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e06007a-4993-4328-9612-b43b931e2e3b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:25.038 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a81026-01dc-41ff-8ff6-9f1b58f90b29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:25.038 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-8e06007a-4993-4328-9612-b43b931e2e3b
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/8e06007a-4993-4328-9612-b43b931e2e3b.pid.haproxy
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 8e06007a-4993-4328-9612-b43b931e2e3b
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:58:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:25.039 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b', 'env', 'PROCESS_TAG=haproxy-8e06007a-4993-4328-9612-b43b931e2e3b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e06007a-4993-4328-9612-b43b931e2e3b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.271 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Successfully created port: 685df8a6-7b64-441e-9a56-4ede8db5faa9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:58:25 compute-0 podman[290901]: 2025-10-14 08:58:25.430455465 +0000 UTC m=+0.061017020 container create 05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 08:58:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3031768786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:25 compute-0 systemd[1]: Started libpod-conmon-05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49.scope.
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.473 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.475 2 DEBUG nova.virt.libvirt.vif [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1877571750',display_name='tempest-ImagesTestJSON-server-1877571750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1877571750',id=24,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-3vi6yohy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:21Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=2826d9ce-d739-49a1-abfa-80cee62173fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.476 2 DEBUG nova.network.os_vif_util [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.477 2 DEBUG nova.network.os_vif_util [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=146ca52f-0b4f-46f0-9153-1120bf1c9e4e,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146ca52f-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.478 2 DEBUG nova.objects.instance [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'pci_devices' on Instance uuid 2826d9ce-d739-49a1-abfa-80cee62173fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:25 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:58:25 compute-0 podman[290901]: 2025-10-14 08:58:25.399693529 +0000 UTC m=+0.030255104 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbf6ecce0931ae581de6497fdff55fce469de3179658359b1e470c5e9f85c3e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.496 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:58:25 compute-0 nova_compute[259627]:   <uuid>2826d9ce-d739-49a1-abfa-80cee62173fb</uuid>
Oct 14 08:58:25 compute-0 nova_compute[259627]:   <name>instance-00000018</name>
Oct 14 08:58:25 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:58:25 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:58:25 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <nova:name>tempest-ImagesTestJSON-server-1877571750</nova:name>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:58:24</nova:creationTime>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:58:25 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:58:25 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:58:25 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:58:25 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:58:25 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:58:25 compute-0 nova_compute[259627]:         <nova:user uuid="3a217215c39e41fea2323ff7b3b4e6aa">tempest-ImagesTestJSON-168259448-project-member</nova:user>
Oct 14 08:58:25 compute-0 nova_compute[259627]:         <nova:project uuid="0d87d2d744db48dc8b32bb4bf6847fce">tempest-ImagesTestJSON-168259448</nova:project>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:58:25 compute-0 nova_compute[259627]:         <nova:port uuid="146ca52f-0b4f-46f0-9153-1120bf1c9e4e">
Oct 14 08:58:25 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:58:25 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:58:25 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <system>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <entry name="serial">2826d9ce-d739-49a1-abfa-80cee62173fb</entry>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <entry name="uuid">2826d9ce-d739-49a1-abfa-80cee62173fb</entry>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     </system>
Oct 14 08:58:25 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:58:25 compute-0 nova_compute[259627]:   <os>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:   </os>
Oct 14 08:58:25 compute-0 nova_compute[259627]:   <features>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:   </features>
Oct 14 08:58:25 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:58:25 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:58:25 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2826d9ce-d739-49a1-abfa-80cee62173fb_disk">
Oct 14 08:58:25 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       </source>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:58:25 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2826d9ce-d739-49a1-abfa-80cee62173fb_disk.config">
Oct 14 08:58:25 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       </source>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:58:25 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:c7:a1:5d"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <target dev="tap146ca52f-0b"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb/console.log" append="off"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <video>
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     </video>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:58:25 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:58:25 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:58:25 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:58:25 compute-0 nova_compute[259627]: </domain>
Oct 14 08:58:25 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.497 2 DEBUG nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Preparing to wait for external event network-vif-plugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.497 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.498 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.498 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.499 2 DEBUG nova.virt.libvirt.vif [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1877571750',display_name='tempest-ImagesTestJSON-server-1877571750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1877571750',id=24,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-3vi6yohy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:21Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=2826d9ce-d739-49a1-abfa-80cee62173fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.499 2 DEBUG nova.network.os_vif_util [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.500 2 DEBUG nova.network.os_vif_util [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=146ca52f-0b4f-46f0-9153-1120bf1c9e4e,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146ca52f-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.500 2 DEBUG os_vif [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=146ca52f-0b4f-46f0-9153-1120bf1c9e4e,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146ca52f-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.501 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.505 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap146ca52f-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.505 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap146ca52f-0b, col_values=(('external_ids', {'iface-id': '146ca52f-0b4f-46f0-9153-1120bf1c9e4e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:a1:5d', 'vm-uuid': '2826d9ce-d739-49a1-abfa-80cee62173fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:25 compute-0 NetworkManager[44885]: <info>  [1760432305.5074] manager: (tap146ca52f-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Oct 14 08:58:25 compute-0 podman[290901]: 2025-10-14 08:58:25.508604274 +0000 UTC m=+0.139165859 container init 05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:25 compute-0 podman[290901]: 2025-10-14 08:58:25.516109518 +0000 UTC m=+0.146671073 container start 05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.516 2 INFO os_vif [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=146ca52f-0b4f-46f0-9153-1120bf1c9e4e,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146ca52f-0b')
Oct 14 08:58:25 compute-0 neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b[290916]: [NOTICE]   (290946) : New worker (290955) forked
Oct 14 08:58:25 compute-0 neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b[290916]: [NOTICE]   (290946) : Loading success.
Oct 14 08:58:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 227 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 186 op/s
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.572 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.573 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.573 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No VIF found with MAC fa:16:3e:c7:a1:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.573 2 INFO nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Using config drive
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.595 2 DEBUG nova.storage.rbd_utils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 2826d9ce-d739-49a1-abfa-80cee62173fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.613 2 DEBUG nova.compute.manager [req-6ae7e7a2-6ab1-4108-8954-e5af70c6c9d8 req-f81f7e13-4b90-445e-ba17-ced92be8f222 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-plugged-fd335735-b88a-42f7-911e-af4b2b9396fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.614 2 DEBUG oslo_concurrency.lockutils [req-6ae7e7a2-6ab1-4108-8954-e5af70c6c9d8 req-f81f7e13-4b90-445e-ba17-ced92be8f222 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.614 2 DEBUG oslo_concurrency.lockutils [req-6ae7e7a2-6ab1-4108-8954-e5af70c6c9d8 req-f81f7e13-4b90-445e-ba17-ced92be8f222 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.614 2 DEBUG oslo_concurrency.lockutils [req-6ae7e7a2-6ab1-4108-8954-e5af70c6c9d8 req-f81f7e13-4b90-445e-ba17-ced92be8f222 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.615 2 DEBUG nova.compute.manager [req-6ae7e7a2-6ab1-4108-8954-e5af70c6c9d8 req-f81f7e13-4b90-445e-ba17-ced92be8f222 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Processing event network-vif-plugged-fd335735-b88a-42f7-911e-af4b2b9396fb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:58:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2501182591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3031768786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.722025) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432305722051, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1497, "num_deletes": 255, "total_data_size": 2114606, "memory_usage": 2151312, "flush_reason": "Manual Compaction"}
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432305730571, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2068577, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24145, "largest_seqno": 25641, "table_properties": {"data_size": 2061594, "index_size": 3994, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15573, "raw_average_key_size": 20, "raw_value_size": 2047250, "raw_average_value_size": 2708, "num_data_blocks": 177, "num_entries": 756, "num_filter_entries": 756, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760432188, "oldest_key_time": 1760432188, "file_creation_time": 1760432305, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 8599 microseconds, and 4637 cpu microseconds.
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.730619) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2068577 bytes OK
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.730640) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.732225) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.732241) EVENT_LOG_v1 {"time_micros": 1760432305732236, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.732259) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2107844, prev total WAL file size 2107844, number of live WAL files 2.
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.733040) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2020KB)], [56(6701KB)]
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432305733091, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 8931006, "oldest_snapshot_seqno": -1}
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4717 keys, 7199110 bytes, temperature: kUnknown
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432305766594, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7199110, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7167794, "index_size": 18406, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 118281, "raw_average_key_size": 25, "raw_value_size": 7082815, "raw_average_value_size": 1501, "num_data_blocks": 760, "num_entries": 4717, "num_filter_entries": 4717, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760432305, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.766987) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7199110 bytes
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.768281) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 264.8 rd, 213.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 6.5 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(7.8) write-amplify(3.5) OK, records in: 5240, records dropped: 523 output_compression: NoCompression
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.768300) EVENT_LOG_v1 {"time_micros": 1760432305768290, "job": 30, "event": "compaction_finished", "compaction_time_micros": 33727, "compaction_time_cpu_micros": 18251, "output_level": 6, "num_output_files": 1, "total_output_size": 7199110, "num_input_records": 5240, "num_output_records": 4717, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432305769174, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432305770526, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.732930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.770676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.770682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.770684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.770687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 08:58:25 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.770689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 08:58:25 compute-0 nova_compute[259627]: 2025-10-14 08:58:25.953 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Successfully created port: 9db6eef3-e4da-4c17-91ea-1c3124906f61 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.029 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432306.028866, a071857d-db87-4931-95ad-f8c627f74160 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.030 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] VM Started (Lifecycle Event)
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.032 2 DEBUG nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.034 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.038 2 INFO nova.virt.libvirt.driver [-] [instance: a071857d-db87-4931-95ad-f8c627f74160] Instance spawned successfully.
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.038 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.068 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.076 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.082 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.083 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.084 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.084 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.085 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.086 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.128 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.128 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432306.0299046, a071857d-db87-4931-95ad-f8c627f74160 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.129 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] VM Paused (Lifecycle Event)
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.166 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.170 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432306.0339997, a071857d-db87-4931-95ad-f8c627f74160 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.171 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] VM Resumed (Lifecycle Event)
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.183 2 INFO nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Took 7.68 seconds to spawn the instance on the hypervisor.
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.183 2 DEBUG nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.193 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.197 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.241 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.272 2 INFO nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Took 8.81 seconds to build instance.
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.296 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.308 2 INFO nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Creating config drive at /var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb/disk.config
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.318 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9l97a1xi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.359 2 DEBUG nova.network.neutron [req-2f3189ed-8229-4a5f-96ad-c6d1f4dc5156 req-de88b6da-5618-44f3-8ed9-23bd9b7c57a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Updated VIF entry in instance network info cache for port 146ca52f-0b4f-46f0-9153-1120bf1c9e4e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.361 2 DEBUG nova.network.neutron [req-2f3189ed-8229-4a5f-96ad-c6d1f4dc5156 req-de88b6da-5618-44f3-8ed9-23bd9b7c57a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Updating instance_info_cache with network_info: [{"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.379 2 DEBUG oslo_concurrency.lockutils [req-2f3189ed-8229-4a5f-96ad-c6d1f4dc5156 req-de88b6da-5618-44f3-8ed9-23bd9b7c57a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2826d9ce-d739-49a1-abfa-80cee62173fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.469 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9l97a1xi" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.515 2 DEBUG nova.storage.rbd_utils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 2826d9ce-d739-49a1-abfa-80cee62173fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.522 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb/disk.config 2826d9ce-d739-49a1-abfa-80cee62173fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.676 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb/disk.config 2826d9ce-d739-49a1-abfa-80cee62173fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.677 2 INFO nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Deleting local config drive /var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb/disk.config because it was imported into RBD.
Oct 14 08:58:26 compute-0 kernel: tap146ca52f-0b: entered promiscuous mode
Oct 14 08:58:26 compute-0 NetworkManager[44885]: <info>  [1760432306.7257] manager: (tap146ca52f-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Oct 14 08:58:26 compute-0 ceph-mon[74249]: pgmap v1203: 305 pgs: 305 active+clean; 227 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 186 op/s
Oct 14 08:58:26 compute-0 systemd-udevd[290824]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:58:26 compute-0 ovn_controller[152662]: 2025-10-14T08:58:26Z|00145|binding|INFO|Claiming lport 146ca52f-0b4f-46f0-9153-1120bf1c9e4e for this chassis.
Oct 14 08:58:26 compute-0 ovn_controller[152662]: 2025-10-14T08:58:26Z|00146|binding|INFO|146ca52f-0b4f-46f0-9153-1120bf1c9e4e: Claiming fa:16:3e:c7:a1:5d 10.100.0.10
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.747 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:a1:5d 10.100.0.10'], port_security=['fa:16:3e:c7:a1:5d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2826d9ce-d739-49a1-abfa-80cee62173fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=146ca52f-0b4f-46f0-9153-1120bf1c9e4e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:58:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.749 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 146ca52f-0b4f-46f0-9153-1120bf1c9e4e in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a bound to our chassis
Oct 14 08:58:26 compute-0 NetworkManager[44885]: <info>  [1760432306.7511] device (tap146ca52f-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:58:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.752 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 08:58:26 compute-0 NetworkManager[44885]: <info>  [1760432306.7532] device (tap146ca52f-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:58:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.768 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c2ea55-7036-4883-804b-ac32cf6eedd5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.769 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2322cf7a-01 in ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:58:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.772 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2322cf7a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:58:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.772 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd99117-b904-4a2f-b680-e06360cb2bf5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.773 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fa27032c-004e-452f-ada6-5cd89ef51cdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.787 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[905377d5-83df-44e5-8fb1-a9a3992d0aa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:26 compute-0 systemd-machined[214636]: New machine qemu-25-instance-00000018.
Oct 14 08:58:26 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000018.
Oct 14 08:58:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.812 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d42b7f-c3da-4930-a200-1174e3aec892]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:26 compute-0 ovn_controller[152662]: 2025-10-14T08:58:26Z|00147|binding|INFO|Setting lport 146ca52f-0b4f-46f0-9153-1120bf1c9e4e ovn-installed in OVS
Oct 14 08:58:26 compute-0 ovn_controller[152662]: 2025-10-14T08:58:26Z|00148|binding|INFO|Setting lport 146ca52f-0b4f-46f0-9153-1120bf1c9e4e up in Southbound
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.845 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4250ef29-ec6e-4076-af70-e2fcc2e10864]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.850 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc8f855-ff37-419a-8b6d-3c487be36da9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:26 compute-0 NetworkManager[44885]: <info>  [1760432306.8519] manager: (tap2322cf7a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/80)
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.878 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Successfully updated port: 57b4d441-0c29-4419-b20b-3b5c4223b7a8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:58:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.889 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8bcfbe-ddd0-43be-90f6-731b83344f08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.893 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8c179c28-0f45-4488-b50c-dfcfe0f469dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:26 compute-0 nova_compute[259627]: 2025-10-14 08:58:26.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:26 compute-0 NetworkManager[44885]: <info>  [1760432306.9214] device (tap2322cf7a-00): carrier: link connected
Oct 14 08:58:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.928 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d4fec130-7cc9-437c-832b-952670cd3be9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.946 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[26f421e9-e830-4271-af49-553d3ae5d3b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608568, 'reachable_time': 17290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291065, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.962 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[981cf5b4-18fd-4bf1-9d3a-7b66e4ff9d2e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed3:956c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 608568, 'tstamp': 608568}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291066, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.986 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3a2d3f25-6e65-4ae4-8800-c50f8b1cc22e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608568, 'reachable_time': 17290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291067, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.018 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e94d5a56-695d-44d3-9b4e-8e63529849bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.078 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[29f38978-6a4e-43dc-820c-ec6df1ecf5d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.080 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.081 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.081 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2322cf7a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:27 compute-0 NetworkManager[44885]: <info>  [1760432307.0842] manager: (tap2322cf7a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:27 compute-0 kernel: tap2322cf7a-00: entered promiscuous mode
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.087 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2322cf7a-00, col_values=(('external_ids', {'iface-id': '0616bbde-729a-4cd4-ba39-5fcdf59ece5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:27 compute-0 ovn_controller[152662]: 2025-10-14T08:58:27Z|00149|binding|INFO|Releasing lport 0616bbde-729a-4cd4-ba39-5fcdf59ece5e from this chassis (sb_readonly=0)
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.090 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.092 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[83303758-8dbf-4ab7-bd3c-425203c55038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.093 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:58:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.094 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'env', 'PROCESS_TAG=haproxy-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2322cf7a-0090-40fa-a558-42d84cc6fc2a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.107 2 DEBUG nova.compute.manager [req-76c949d0-4e2b-426c-9550-71e927807b3b req-248d03a2-d0af-41d9-8bf7-62b35bd84430 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-changed-57b4d441-0c29-4419-b20b-3b5c4223b7a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.107 2 DEBUG nova.compute.manager [req-76c949d0-4e2b-426c-9550-71e927807b3b req-248d03a2-d0af-41d9-8bf7-62b35bd84430 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Refreshing instance network info cache due to event network-changed-57b4d441-0c29-4419-b20b-3b5c4223b7a8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.108 2 DEBUG oslo_concurrency.lockutils [req-76c949d0-4e2b-426c-9550-71e927807b3b req-248d03a2-d0af-41d9-8bf7-62b35bd84430 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.108 2 DEBUG oslo_concurrency.lockutils [req-76c949d0-4e2b-426c-9550-71e927807b3b req-248d03a2-d0af-41d9-8bf7-62b35bd84430 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.109 2 DEBUG nova.network.neutron [req-76c949d0-4e2b-426c-9550-71e927807b3b req-248d03a2-d0af-41d9-8bf7-62b35bd84430 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Refreshing network info cache for port 57b4d441-0c29-4419-b20b-3b5c4223b7a8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.248 2 DEBUG oslo_concurrency.lockutils [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "interface-a071857d-db87-4931-95ad-f8c627f74160-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.249 2 DEBUG oslo_concurrency.lockutils [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "interface-a071857d-db87-4931-95ad-f8c627f74160-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.249 2 DEBUG nova.objects.instance [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lazy-loading 'flavor' on Instance uuid a071857d-db87-4931-95ad-f8c627f74160 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.277 2 DEBUG nova.network.neutron [req-76c949d0-4e2b-426c-9550-71e927807b3b req-248d03a2-d0af-41d9-8bf7-62b35bd84430 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.290 2 DEBUG nova.objects.instance [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lazy-loading 'pci_requests' on Instance uuid a071857d-db87-4931-95ad-f8c627f74160 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.308 2 DEBUG nova.network.neutron [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:58:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:58:27 compute-0 podman[291140]: 2025-10-14 08:58:27.505186682 +0000 UTC m=+0.066598786 container create 66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 08:58:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 227 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 186 op/s
Oct 14 08:58:27 compute-0 systemd[1]: Started libpod-conmon-66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc.scope.
Oct 14 08:58:27 compute-0 podman[291140]: 2025-10-14 08:58:27.46885811 +0000 UTC m=+0.030270304 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:58:27 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48c703f783893fc5cf0d9cbca7dbc962deb1b04986a44b3368c6b7cd5b11d993/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:58:27 compute-0 podman[291140]: 2025-10-14 08:58:27.582311236 +0000 UTC m=+0.143723340 container init 66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 08:58:27 compute-0 podman[291140]: 2025-10-14 08:58:27.587993755 +0000 UTC m=+0.149405859 container start 66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 08:58:27 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[291154]: [NOTICE]   (291158) : New worker (291160) forked
Oct 14 08:58:27 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[291154]: [NOTICE]   (291158) : Loading success.
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.640 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Successfully updated port: 685df8a6-7b64-441e-9a56-4ede8db5faa9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.840 2 DEBUG nova.network.neutron [req-76c949d0-4e2b-426c-9550-71e927807b3b req-248d03a2-d0af-41d9-8bf7-62b35bd84430 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.843 2 DEBUG nova.policy [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '664abc01a11d458d9644488bf31e47f4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '99649891054745d8a5186a1ad099e5a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.879 2 DEBUG oslo_concurrency.lockutils [req-76c949d0-4e2b-426c-9550-71e927807b3b req-248d03a2-d0af-41d9-8bf7-62b35bd84430 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.892 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432307.8921707, 2826d9ce-d739-49a1-abfa-80cee62173fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.893 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] VM Started (Lifecycle Event)
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.915 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.921 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432307.8929381, 2826d9ce-d739-49a1-abfa-80cee62173fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.921 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] VM Paused (Lifecycle Event)
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.947 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.952 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:58:27 compute-0 nova_compute[259627]: 2025-10-14 08:58:27.974 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:58:28 compute-0 nova_compute[259627]: 2025-10-14 08:58:28.162 2 DEBUG nova.compute.manager [req-3265a89a-755e-482e-b53a-35ac2f97ea22 req-b7e58616-e451-42b8-ab5c-fe3322bd35ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-plugged-fd335735-b88a-42f7-911e-af4b2b9396fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:28 compute-0 nova_compute[259627]: 2025-10-14 08:58:28.163 2 DEBUG oslo_concurrency.lockutils [req-3265a89a-755e-482e-b53a-35ac2f97ea22 req-b7e58616-e451-42b8-ab5c-fe3322bd35ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:28 compute-0 nova_compute[259627]: 2025-10-14 08:58:28.163 2 DEBUG oslo_concurrency.lockutils [req-3265a89a-755e-482e-b53a-35ac2f97ea22 req-b7e58616-e451-42b8-ab5c-fe3322bd35ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:28 compute-0 nova_compute[259627]: 2025-10-14 08:58:28.163 2 DEBUG oslo_concurrency.lockutils [req-3265a89a-755e-482e-b53a-35ac2f97ea22 req-b7e58616-e451-42b8-ab5c-fe3322bd35ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:28 compute-0 nova_compute[259627]: 2025-10-14 08:58:28.163 2 DEBUG nova.compute.manager [req-3265a89a-755e-482e-b53a-35ac2f97ea22 req-b7e58616-e451-42b8-ab5c-fe3322bd35ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] No waiting events found dispatching network-vif-plugged-fd335735-b88a-42f7-911e-af4b2b9396fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:58:28 compute-0 nova_compute[259627]: 2025-10-14 08:58:28.163 2 WARNING nova.compute.manager [req-3265a89a-755e-482e-b53a-35ac2f97ea22 req-b7e58616-e451-42b8-ab5c-fe3322bd35ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received unexpected event network-vif-plugged-fd335735-b88a-42f7-911e-af4b2b9396fb for instance with vm_state active and task_state None.
Oct 14 08:58:28 compute-0 nova_compute[259627]: 2025-10-14 08:58:28.449 2 DEBUG nova.network.neutron [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Successfully created port: cada6b6a-e534-4cd7-8abf-e402059d6964 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:58:28 compute-0 nova_compute[259627]: 2025-10-14 08:58:28.486 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Successfully updated port: 9db6eef3-e4da-4c17-91ea-1c3124906f61 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:58:28 compute-0 nova_compute[259627]: 2025-10-14 08:58:28.500 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:28 compute-0 nova_compute[259627]: 2025-10-14 08:58:28.501 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquired lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:28 compute-0 nova_compute[259627]: 2025-10-14 08:58:28.501 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:58:28 compute-0 nova_compute[259627]: 2025-10-14 08:58:28.648 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:58:28 compute-0 ceph-mon[74249]: pgmap v1204: 305 pgs: 305 active+clean; 227 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 186 op/s
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.051 2 DEBUG nova.network.neutron [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Successfully updated port: cada6b6a-e534-4cd7-8abf-e402059d6964 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.066 2 DEBUG oslo_concurrency.lockutils [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.067 2 DEBUG oslo_concurrency.lockutils [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquired lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.067 2 DEBUG nova.network.neutron [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.235 2 WARNING nova.network.neutron [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] 8e06007a-4993-4328-9612-b43b931e2e3b already exists in list: networks containing: ['8e06007a-4993-4328-9612-b43b931e2e3b']. ignoring it
Oct 14 08:58:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 227 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 187 op/s
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.627 2 DEBUG nova.compute.manager [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Received event network-vif-plugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.628 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.629 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.630 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.631 2 DEBUG nova.compute.manager [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Processing event network-vif-plugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.631 2 DEBUG nova.compute.manager [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Received event network-vif-plugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.632 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.633 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.634 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.634 2 DEBUG nova.compute.manager [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] No waiting events found dispatching network-vif-plugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.635 2 WARNING nova.compute.manager [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Received unexpected event network-vif-plugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e for instance with vm_state building and task_state spawning.
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.636 2 DEBUG nova.compute.manager [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-changed-685df8a6-7b64-441e-9a56-4ede8db5faa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.636 2 DEBUG nova.compute.manager [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Refreshing instance network info cache due to event network-changed-685df8a6-7b64-441e-9a56-4ede8db5faa9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.637 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.643 2 DEBUG nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.652 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.654 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432309.6528177, 2826d9ce-d739-49a1-abfa-80cee62173fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.654 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] VM Resumed (Lifecycle Event)
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.662 2 INFO nova.virt.libvirt.driver [-] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Instance spawned successfully.
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.662 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.679 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.690 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.700 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.701 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.701 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.702 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.703 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.703 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.741 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.785 2 INFO nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Took 8.66 seconds to spawn the instance on the hypervisor.
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.786 2 DEBUG nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:29 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.847 2 INFO nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Took 9.57 seconds to build instance.
Oct 14 08:58:29 compute-0 nova_compute[259627]: 2025-10-14 08:58:29.867 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.511 2 DEBUG nova.compute.manager [req-893ab63d-eeb1-46e3-a3ef-fef3c67d672b req-0ad289ae-f221-4b94-a274-d1d3a57830fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-changed-cada6b6a-e534-4cd7-8abf-e402059d6964 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.511 2 DEBUG nova.compute.manager [req-893ab63d-eeb1-46e3-a3ef-fef3c67d672b req-0ad289ae-f221-4b94-a274-d1d3a57830fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Refreshing instance network info cache due to event network-changed-cada6b6a-e534-4cd7-8abf-e402059d6964. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.511 2 DEBUG oslo_concurrency.lockutils [req-893ab63d-eeb1-46e3-a3ef-fef3c67d672b req-0ad289ae-f221-4b94-a274-d1d3a57830fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:30 compute-0 podman[291169]: 2025-10-14 08:58:30.638821941 +0000 UTC m=+0.060625180 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:58:30 compute-0 podman[291170]: 2025-10-14 08:58:30.64081743 +0000 UTC m=+0.057966244 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 14 08:58:30 compute-0 ceph-mon[74249]: pgmap v1205: 305 pgs: 305 active+clean; 227 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 187 op/s
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.856 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Updating instance_info_cache with network_info: [{"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.880 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Releasing lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.881 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Instance network_info: |[{"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.881 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.882 2 DEBUG nova.network.neutron [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Refreshing network info cache for port 685df8a6-7b64-441e-9a56-4ede8db5faa9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.886 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Start _get_guest_xml network_info=[{"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.890 2 WARNING nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.896 2 DEBUG nova.virt.libvirt.host [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.897 2 DEBUG nova.virt.libvirt.host [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.900 2 DEBUG nova.virt.libvirt.host [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.901 2 DEBUG nova.virt.libvirt.host [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.901 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.902 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.902 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.903 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.903 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.903 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.904 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.904 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.904 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.905 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.905 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.905 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:58:30 compute-0 nova_compute[259627]: 2025-10-14 08:58:30.908 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.135 2 DEBUG nova.network.neutron [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Updating instance_info_cache with network_info: [{"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cada6b6a-e534-4cd7-8abf-e402059d6964", "address": "fa:16:3e:49:47:df", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcada6b6a-e5", "ovs_interfaceid": "cada6b6a-e534-4cd7-8abf-e402059d6964", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.159 2 DEBUG oslo_concurrency.lockutils [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Releasing lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.160 2 DEBUG oslo_concurrency.lockutils [req-893ab63d-eeb1-46e3-a3ef-fef3c67d672b req-0ad289ae-f221-4b94-a274-d1d3a57830fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.161 2 DEBUG nova.network.neutron [req-893ab63d-eeb1-46e3-a3ef-fef3c67d672b req-0ad289ae-f221-4b94-a274-d1d3a57830fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Refreshing network info cache for port cada6b6a-e534-4cd7-8abf-e402059d6964 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.164 2 DEBUG nova.virt.libvirt.vif [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-635702397',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-635702397',id=23,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:58:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='99649891054745d8a5186a1ad099e5a7',ramdisk_id='',reservation_id='r-swuxrn8t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owne
r_project_name='tempest-AttachInterfacesV270Test-1504567615',owner_user_name='tempest-AttachInterfacesV270Test-1504567615-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:58:26Z,user_data=None,user_id='664abc01a11d458d9644488bf31e47f4',uuid=a071857d-db87-4931-95ad-f8c627f74160,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cada6b6a-e534-4cd7-8abf-e402059d6964", "address": "fa:16:3e:49:47:df", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcada6b6a-e5", "ovs_interfaceid": "cada6b6a-e534-4cd7-8abf-e402059d6964", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.164 2 DEBUG nova.network.os_vif_util [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converting VIF {"id": "cada6b6a-e534-4cd7-8abf-e402059d6964", "address": "fa:16:3e:49:47:df", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcada6b6a-e5", "ovs_interfaceid": "cada6b6a-e534-4cd7-8abf-e402059d6964", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.165 2 DEBUG nova.network.os_vif_util [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:47:df,bridge_name='br-int',has_traffic_filtering=True,id=cada6b6a-e534-4cd7-8abf-e402059d6964,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcada6b6a-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.166 2 DEBUG os_vif [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:47:df,bridge_name='br-int',has_traffic_filtering=True,id=cada6b6a-e534-4cd7-8abf-e402059d6964,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcada6b6a-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.167 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.168 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.170 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcada6b6a-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcada6b6a-e5, col_values=(('external_ids', {'iface-id': 'cada6b6a-e534-4cd7-8abf-e402059d6964', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:47:df', 'vm-uuid': 'a071857d-db87-4931-95ad-f8c627f74160'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 NetworkManager[44885]: <info>  [1760432311.1736] manager: (tapcada6b6a-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.182 2 INFO os_vif [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:47:df,bridge_name='br-int',has_traffic_filtering=True,id=cada6b6a-e534-4cd7-8abf-e402059d6964,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcada6b6a-e5')
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.183 2 DEBUG nova.virt.libvirt.vif [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-635702397',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-635702397',id=23,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:58:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='99649891054745d8a5186a1ad099e5a7',ramdisk_id='',reservation_id='r-swuxrn8t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owne
r_project_name='tempest-AttachInterfacesV270Test-1504567615',owner_user_name='tempest-AttachInterfacesV270Test-1504567615-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:58:26Z,user_data=None,user_id='664abc01a11d458d9644488bf31e47f4',uuid=a071857d-db87-4931-95ad-f8c627f74160,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cada6b6a-e534-4cd7-8abf-e402059d6964", "address": "fa:16:3e:49:47:df", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcada6b6a-e5", "ovs_interfaceid": "cada6b6a-e534-4cd7-8abf-e402059d6964", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.184 2 DEBUG nova.network.os_vif_util [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converting VIF {"id": "cada6b6a-e534-4cd7-8abf-e402059d6964", "address": "fa:16:3e:49:47:df", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcada6b6a-e5", "ovs_interfaceid": "cada6b6a-e534-4cd7-8abf-e402059d6964", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.184 2 DEBUG nova.network.os_vif_util [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:47:df,bridge_name='br-int',has_traffic_filtering=True,id=cada6b6a-e534-4cd7-8abf-e402059d6964,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcada6b6a-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.188 2 DEBUG nova.virt.libvirt.guest [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] attach device xml: <interface type="ethernet">
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:49:47:df"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <target dev="tapcada6b6a-e5"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]: </interface>
Oct 14 08:58:31 compute-0 nova_compute[259627]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 14 08:58:31 compute-0 kernel: tapcada6b6a-e5: entered promiscuous mode
Oct 14 08:58:31 compute-0 NetworkManager[44885]: <info>  [1760432311.2028] manager: (tapcada6b6a-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Oct 14 08:58:31 compute-0 ovn_controller[152662]: 2025-10-14T08:58:31Z|00150|binding|INFO|Claiming lport cada6b6a-e534-4cd7-8abf-e402059d6964 for this chassis.
Oct 14 08:58:31 compute-0 ovn_controller[152662]: 2025-10-14T08:58:31Z|00151|binding|INFO|cada6b6a-e534-4cd7-8abf-e402059d6964: Claiming fa:16:3e:49:47:df 10.100.0.10
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.216 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:47:df 10.100.0.10'], port_security=['fa:16:3e:49:47:df 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a071857d-db87-4931-95ad-f8c627f74160', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e06007a-4993-4328-9612-b43b931e2e3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '99649891054745d8a5186a1ad099e5a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cdd7ee35-5316-4ad1-b1f1-84c66df2ce6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb643099-5fd1-493f-8a92-ffa6e64eb2b1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=cada6b6a-e534-4cd7-8abf-e402059d6964) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:58:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.220 162547 INFO neutron.agent.ovn.metadata.agent [-] Port cada6b6a-e534-4cd7-8abf-e402059d6964 in datapath 8e06007a-4993-4328-9612-b43b931e2e3b bound to our chassis
Oct 14 08:58:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.223 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e06007a-4993-4328-9612-b43b931e2e3b
Oct 14 08:58:31 compute-0 ovn_controller[152662]: 2025-10-14T08:58:31Z|00152|binding|INFO|Setting lport cada6b6a-e534-4cd7-8abf-e402059d6964 ovn-installed in OVS
Oct 14 08:58:31 compute-0 ovn_controller[152662]: 2025-10-14T08:58:31Z|00153|binding|INFO|Setting lport cada6b6a-e534-4cd7-8abf-e402059d6964 up in Southbound
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.241 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6db17c87-6aab-4860-bcb0-9e0512b20abd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 systemd-udevd[291235]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:58:31 compute-0 NetworkManager[44885]: <info>  [1760432311.2675] device (tapcada6b6a-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:58:31 compute-0 NetworkManager[44885]: <info>  [1760432311.2687] device (tapcada6b6a-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:58:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.281 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e9329773-b714-4e65-8616-dc95b9097319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.285 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[94b726a4-ff29-4b66-9376-47d84708075c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.311 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb2e752-1a4a-4b13-9149-bee2b1449517]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.328 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[436e94c1-02ec-449a-83a7-cbd7d52dee60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e06007a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:64:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608361, 'reachable_time': 37451, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291242, 'error': None, 'target': 'ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.331 2 DEBUG nova.virt.libvirt.driver [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.331 2 DEBUG nova.virt.libvirt.driver [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.331 2 DEBUG nova.virt.libvirt.driver [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] No VIF found with MAC fa:16:3e:58:04:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.332 2 DEBUG nova.virt.libvirt.driver [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] No VIF found with MAC fa:16:3e:49:47:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:58:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.345 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[44a98871-76f8-47c0-be59-08a5846c4e1a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8e06007a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 608373, 'tstamp': 608373}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291243, 'error': None, 'target': 'ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8e06007a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 608376, 'tstamp': 608376}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291243, 'error': None, 'target': 'ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.347 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e06007a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.351 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e06007a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.352 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.352 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e06007a-40, col_values=(('external_ids', {'iface-id': 'd2ffe2ca-9cdc-4c2a-a690-8b35b09cf563'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.353 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.354 2 DEBUG nova.virt.libvirt.guest [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <nova:name>tempest-AttachInterfacesV270Test-server-635702397</nova:name>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 08:58:31</nova:creationTime>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <nova:user uuid="664abc01a11d458d9644488bf31e47f4">tempest-AttachInterfacesV270Test-1504567615-project-member</nova:user>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <nova:project uuid="99649891054745d8a5186a1ad099e5a7">tempest-AttachInterfacesV270Test-1504567615</nova:project>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <nova:port uuid="fd335735-b88a-42f7-911e-af4b2b9396fb">
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <nova:port uuid="cada6b6a-e534-4cd7-8abf-e402059d6964">
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 08:58:31 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 08:58:31 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 08:58:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:31 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3940373503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.379 2 DEBUG oslo_concurrency.lockutils [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "interface-a071857d-db87-4931-95ad-f8c627f74160-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.407 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.436 2 DEBUG nova.storage.rbd_utils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.440 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 248 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 9.2 MiB/s wr, 302 op/s
Oct 14 08:58:31 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3940373503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:31 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3942476262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.842 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.845 2 DEBUG nova.virt.libvirt.vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-620013762',display_name='tempest-ServersTestMultiNic-server-620013762',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-620013762',id=25,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-xsoyr0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-m
ember'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:23Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=aefbf308-7f99-4a76-8d5e-54613f6bdf83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.846 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.847 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:7f:93,bridge_name='br-int',has_traffic_filtering=True,id=57b4d441-0c29-4419-b20b-3b5c4223b7a8,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b4d441-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.849 2 DEBUG nova.virt.libvirt.vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-620013762',display_name='tempest-ServersTestMultiNic-server-620013762',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-620013762',id=25,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-xsoyr0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:23Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=aefbf308-7f99-4a76-8d5e-54613f6bdf83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.849 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.850 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:86:b1,bridge_name='br-int',has_traffic_filtering=True,id=685df8a6-7b64-441e-9a56-4ede8db5faa9,network=Network(2534100b-a4f5-4f68-9f75-a1af37008664),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685df8a6-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.852 2 DEBUG nova.virt.libvirt.vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-620013762',display_name='tempest-ServersTestMultiNic-server-620013762',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-620013762',id=25,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-xsoyr0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:23Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=aefbf308-7f99-4a76-8d5e-54613f6bdf83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.852 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.853 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:11:79,bridge_name='br-int',has_traffic_filtering=True,id=9db6eef3-e4da-4c17-91ea-1c3124906f61,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9db6eef3-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.855 2 DEBUG nova.objects.instance [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lazy-loading 'pci_devices' on Instance uuid aefbf308-7f99-4a76-8d5e-54613f6bdf83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.870 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <uuid>aefbf308-7f99-4a76-8d5e-54613f6bdf83</uuid>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <name>instance-00000019</name>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersTestMultiNic-server-620013762</nova:name>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:58:30</nova:creationTime>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:58:31 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:58:31 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:58:31 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:58:31 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:58:31 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:58:31 compute-0 nova_compute[259627]:         <nova:user uuid="5aacb60ad29c418c9161e71bb72da036">tempest-ServersTestMultiNic-840673976-project-member</nova:user>
Oct 14 08:58:31 compute-0 nova_compute[259627]:         <nova:project uuid="3566f35c659a45bd9b9bbddf6552ed43">tempest-ServersTestMultiNic-840673976</nova:project>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:58:31 compute-0 nova_compute[259627]:         <nova:port uuid="57b4d441-0c29-4419-b20b-3b5c4223b7a8">
Oct 14 08:58:31 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.177" ipVersion="4"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:58:31 compute-0 nova_compute[259627]:         <nova:port uuid="685df8a6-7b64-441e-9a56-4ede8db5faa9">
Oct 14 08:58:31 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.1.144" ipVersion="4"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:58:31 compute-0 nova_compute[259627]:         <nova:port uuid="9db6eef3-e4da-4c17-91ea-1c3124906f61">
Oct 14 08:58:31 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.215" ipVersion="4"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <system>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <entry name="serial">aefbf308-7f99-4a76-8d5e-54613f6bdf83</entry>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <entry name="uuid">aefbf308-7f99-4a76-8d5e-54613f6bdf83</entry>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     </system>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <os>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   </os>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <features>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   </features>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk">
Oct 14 08:58:31 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       </source>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:58:31 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk.config">
Oct 14 08:58:31 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       </source>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:58:31 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:3c:7f:93"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <target dev="tap57b4d441-0c"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:22:86:b1"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <target dev="tap685df8a6-7b"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:24:11:79"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <target dev="tap9db6eef3-e4"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83/console.log" append="off"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <video>
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     </video>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:58:31 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:58:31 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:58:31 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:58:31 compute-0 nova_compute[259627]: </domain>
Oct 14 08:58:31 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.873 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Preparing to wait for external event network-vif-plugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.873 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.874 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.875 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.875 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Preparing to wait for external event network-vif-plugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.876 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.876 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.877 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.877 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Preparing to wait for external event network-vif-plugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.878 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.878 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.879 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.880 2 DEBUG nova.virt.libvirt.vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-620013762',display_name='tempest-ServersTestMultiNic-server-620013762',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-620013762',id=25,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-xsoyr0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:23Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=aefbf308-7f99-4a76-8d5e-54613f6bdf83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.881 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.882 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:7f:93,bridge_name='br-int',has_traffic_filtering=True,id=57b4d441-0c29-4419-b20b-3b5c4223b7a8,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b4d441-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.883 2 DEBUG os_vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:7f:93,bridge_name='br-int',has_traffic_filtering=True,id=57b4d441-0c29-4419-b20b-3b5c4223b7a8,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b4d441-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.885 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.886 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.895 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57b4d441-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.896 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap57b4d441-0c, col_values=(('external_ids', {'iface-id': '57b4d441-0c29-4419-b20b-3b5c4223b7a8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:7f:93', 'vm-uuid': 'aefbf308-7f99-4a76-8d5e-54613f6bdf83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 NetworkManager[44885]: <info>  [1760432311.8999] manager: (tap57b4d441-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.909 2 INFO os_vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:7f:93,bridge_name='br-int',has_traffic_filtering=True,id=57b4d441-0c29-4419-b20b-3b5c4223b7a8,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b4d441-0c')
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.910 2 DEBUG nova.virt.libvirt.vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-620013762',display_name='tempest-ServersTestMultiNic-server-620013762',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-620013762',id=25,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-xsoyr0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:23Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=aefbf308-7f99-4a76-8d5e-54613f6bdf83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.911 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.912 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:86:b1,bridge_name='br-int',has_traffic_filtering=True,id=685df8a6-7b64-441e-9a56-4ede8db5faa9,network=Network(2534100b-a4f5-4f68-9f75-a1af37008664),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685df8a6-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.913 2 DEBUG os_vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:86:b1,bridge_name='br-int',has_traffic_filtering=True,id=685df8a6-7b64-441e-9a56-4ede8db5faa9,network=Network(2534100b-a4f5-4f68-9f75-a1af37008664),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685df8a6-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.915 2 DEBUG nova.network.neutron [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Updated VIF entry in instance network info cache for port 685df8a6-7b64-441e-9a56-4ede8db5faa9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.916 2 DEBUG nova.network.neutron [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Updating instance_info_cache with network_info: [{"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.919 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.920 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.924 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap685df8a6-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.925 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap685df8a6-7b, col_values=(('external_ids', {'iface-id': '685df8a6-7b64-441e-9a56-4ede8db5faa9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:86:b1', 'vm-uuid': 'aefbf308-7f99-4a76-8d5e-54613f6bdf83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 NetworkManager[44885]: <info>  [1760432311.9292] manager: (tap685df8a6-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.936 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.937 2 DEBUG nova.compute.manager [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-changed-9db6eef3-e4da-4c17-91ea-1c3124906f61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.938 2 DEBUG nova.compute.manager [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Refreshing instance network info cache due to event network-changed-9db6eef3-e4da-4c17-91ea-1c3124906f61. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.938 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.939 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.940 2 DEBUG nova.network.neutron [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Refreshing network info cache for port 9db6eef3-e4da-4c17-91ea-1c3124906f61 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.945 2 INFO os_vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:86:b1,bridge_name='br-int',has_traffic_filtering=True,id=685df8a6-7b64-441e-9a56-4ede8db5faa9,network=Network(2534100b-a4f5-4f68-9f75-a1af37008664),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685df8a6-7b')
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.947 2 DEBUG nova.virt.libvirt.vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-620013762',display_name='tempest-ServersTestMultiNic-server-620013762',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-620013762',id=25,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-xsoyr0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:23Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=aefbf308-7f99-4a76-8d5e-54613f6bdf83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.947 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.949 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:11:79,bridge_name='br-int',has_traffic_filtering=True,id=9db6eef3-e4da-4c17-91ea-1c3124906f61,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9db6eef3-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.950 2 DEBUG os_vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:11:79,bridge_name='br-int',has_traffic_filtering=True,id=9db6eef3-e4da-4c17-91ea-1c3124906f61,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9db6eef3-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.951 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.952 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.955 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9db6eef3-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.955 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9db6eef3-e4, col_values=(('external_ids', {'iface-id': '9db6eef3-e4da-4c17-91ea-1c3124906f61', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:11:79', 'vm-uuid': 'aefbf308-7f99-4a76-8d5e-54613f6bdf83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 NetworkManager[44885]: <info>  [1760432311.9592] manager: (tap9db6eef3-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:31 compute-0 nova_compute[259627]: 2025-10-14 08:58:31.973 2 INFO os_vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:11:79,bridge_name='br-int',has_traffic_filtering=True,id=9db6eef3-e4da-4c17-91ea-1c3124906f61,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9db6eef3-e4')
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.033 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.034 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.035 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] No VIF found with MAC fa:16:3e:3c:7f:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.035 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] No VIF found with MAC fa:16:3e:22:86:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.036 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] No VIF found with MAC fa:16:3e:24:11:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.037 2 INFO nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Using config drive
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.069 2 DEBUG nova.storage.rbd_utils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.110 2 INFO nova.compute.manager [None req-ff69c675-271b-44e6-9964-e902424590ac 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Pausing
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.112 2 DEBUG nova.objects.instance [None req-ff69c675-271b-44e6-9964-e902424590ac 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'flavor' on Instance uuid 2826d9ce-d739-49a1-abfa-80cee62173fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.135 2 DEBUG nova.compute.manager [None req-ff69c675-271b-44e6-9964-e902424590ac 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.136 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432312.135138, 2826d9ce-d739-49a1-abfa-80cee62173fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.136 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] VM Paused (Lifecycle Event)
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.160 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.164 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.197 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.281 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 08:58:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:58:32
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'backups', '.rgw.root', 'images']
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:58:32 compute-0 ceph-mon[74249]: pgmap v1206: 305 pgs: 305 active+clean; 248 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 9.2 MiB/s wr, 302 op/s
Oct 14 08:58:32 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3942476262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.799 2 DEBUG nova.network.neutron [req-893ab63d-eeb1-46e3-a3ef-fef3c67d672b req-0ad289ae-f221-4b94-a274-d1d3a57830fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Updated VIF entry in instance network info cache for port cada6b6a-e534-4cd7-8abf-e402059d6964. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.800 2 DEBUG nova.network.neutron [req-893ab63d-eeb1-46e3-a3ef-fef3c67d672b req-0ad289ae-f221-4b94-a274-d1d3a57830fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Updating instance_info_cache with network_info: [{"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cada6b6a-e534-4cd7-8abf-e402059d6964", "address": "fa:16:3e:49:47:df", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcada6b6a-e5", "ovs_interfaceid": "cada6b6a-e534-4cd7-8abf-e402059d6964", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.806 2 INFO nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Creating config drive at /var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83/disk.config
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.815 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph7cp_cpl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.853 2 DEBUG oslo_concurrency.lockutils [req-893ab63d-eeb1-46e3-a3ef-fef3c67d672b req-0ad289ae-f221-4b94-a274-d1d3a57830fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.889 2 DEBUG nova.compute.manager [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-plugged-cada6b6a-e534-4cd7-8abf-e402059d6964 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.889 2 DEBUG oslo_concurrency.lockutils [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.890 2 DEBUG oslo_concurrency.lockutils [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.890 2 DEBUG oslo_concurrency.lockutils [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.891 2 DEBUG nova.compute.manager [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] No waiting events found dispatching network-vif-plugged-cada6b6a-e534-4cd7-8abf-e402059d6964 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.891 2 WARNING nova.compute.manager [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received unexpected event network-vif-plugged-cada6b6a-e534-4cd7-8abf-e402059d6964 for instance with vm_state active and task_state None.
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.892 2 DEBUG nova.compute.manager [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-plugged-cada6b6a-e534-4cd7-8abf-e402059d6964 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.892 2 DEBUG oslo_concurrency.lockutils [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.893 2 DEBUG oslo_concurrency.lockutils [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.893 2 DEBUG oslo_concurrency.lockutils [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.893 2 DEBUG nova.compute.manager [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] No waiting events found dispatching network-vif-plugged-cada6b6a-e534-4cd7-8abf-e402059d6964 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.894 2 WARNING nova.compute.manager [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received unexpected event network-vif-plugged-cada6b6a-e534-4cd7-8abf-e402059d6964 for instance with vm_state active and task_state None.
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.950 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph7cp_cpl" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 08:58:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.991 2 DEBUG nova.storage.rbd_utils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:32 compute-0 nova_compute[259627]: 2025-10-14 08:58:32.994 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83/disk.config aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.144 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83/disk.config aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.145 2 INFO nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Deleting local config drive /var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83/disk.config because it was imported into RBD.
Oct 14 08:58:33 compute-0 NetworkManager[44885]: <info>  [1760432313.2028] manager: (tap57b4d441-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Oct 14 08:58:33 compute-0 systemd-udevd[291239]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:58:33 compute-0 kernel: tap57b4d441-0c: entered promiscuous mode
Oct 14 08:58:33 compute-0 kernel: tap685df8a6-7b: entered promiscuous mode
Oct 14 08:58:33 compute-0 NetworkManager[44885]: <info>  [1760432313.2218] manager: (tap685df8a6-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/88)
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00154|binding|INFO|Claiming lport 685df8a6-7b64-441e-9a56-4ede8db5faa9 for this chassis.
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00155|binding|INFO|685df8a6-7b64-441e-9a56-4ede8db5faa9: Claiming fa:16:3e:22:86:b1 10.100.1.144
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00156|binding|INFO|Claiming lport 57b4d441-0c29-4419-b20b-3b5c4223b7a8 for this chassis.
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00157|binding|INFO|57b4d441-0c29-4419-b20b-3b5c4223b7a8: Claiming fa:16:3e:3c:7f:93 10.100.0.177
Oct 14 08:58:33 compute-0 NetworkManager[44885]: <info>  [1760432313.2388] device (tap57b4d441-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:58:33 compute-0 NetworkManager[44885]: <info>  [1760432313.2421] device (tap57b4d441-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:58:33 compute-0 NetworkManager[44885]: <info>  [1760432313.2498] manager: (tap9db6eef3-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Oct 14 08:58:33 compute-0 NetworkManager[44885]: <info>  [1760432313.2531] device (tap685df8a6-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:58:33 compute-0 NetworkManager[44885]: <info>  [1760432313.2572] device (tap685df8a6-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.251 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:7f:93 10.100.0.177'], port_security=['fa:16:3e:3c:7f:93 10.100.0.177'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.177/24', 'neutron:device_id': 'aefbf308-7f99-4a76-8d5e-54613f6bdf83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03753014-b87c-4672-9d66-fdc254813b6e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=494b06c3-b496-4326-9e04-09e435735a40, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=57b4d441-0c29-4419-b20b-3b5c4223b7a8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.256 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:86:b1 10.100.1.144'], port_security=['fa:16:3e:22:86:b1 10.100.1.144'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.144/24', 'neutron:device_id': 'aefbf308-7f99-4a76-8d5e-54613f6bdf83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2534100b-a4f5-4f68-9f75-a1af37008664', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b4c418fd-a28b-433f-be67-07c285fde4ec, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=685df8a6-7b64-441e-9a56-4ede8db5faa9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.258 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 57b4d441-0c29-4419-b20b-3b5c4223b7a8 in datapath 03753014-b87c-4672-9d66-fdc254813b6e bound to our chassis
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.261 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03753014-b87c-4672-9d66-fdc254813b6e
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.276 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[13a9ed18-26f2-4a19-a475-9b036c210595]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.277 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap03753014-b1 in ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.281 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap03753014-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.281 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[abe08a3d-c945-4a7e-a258-84072b6f8e27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.282 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf32c00-fd34-41c3-b4d5-a5a6ff7d0910]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:33 compute-0 systemd-machined[214636]: New machine qemu-26-instance-00000019.
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.305 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[048e3be3-4ffb-4b84-83de-4a09efb18ff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:33 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-00000019.
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.327 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fa02c0fe-115d-454c-9409-3a815ee19dff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:33 compute-0 kernel: tap9db6eef3-e4: entered promiscuous mode
Oct 14 08:58:33 compute-0 NetworkManager[44885]: <info>  [1760432313.3513] device (tap9db6eef3-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00158|binding|INFO|Claiming lport 9db6eef3-e4da-4c17-91ea-1c3124906f61 for this chassis.
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00159|binding|INFO|9db6eef3-e4da-4c17-91ea-1c3124906f61: Claiming fa:16:3e:24:11:79 10.100.0.215
Oct 14 08:58:33 compute-0 NetworkManager[44885]: <info>  [1760432313.3544] device (tap9db6eef3-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.360 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:11:79 10.100.0.215'], port_security=['fa:16:3e:24:11:79 10.100.0.215'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.215/24', 'neutron:device_id': 'aefbf308-7f99-4a76-8d5e-54613f6bdf83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03753014-b87c-4672-9d66-fdc254813b6e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=494b06c3-b496-4326-9e04-09e435735a40, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=9db6eef3-e4da-4c17-91ea-1c3124906f61) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00160|binding|INFO|Setting lport 685df8a6-7b64-441e-9a56-4ede8db5faa9 ovn-installed in OVS
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00161|binding|INFO|Setting lport 685df8a6-7b64-441e-9a56-4ede8db5faa9 up in Southbound
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00162|binding|INFO|Setting lport 57b4d441-0c29-4419-b20b-3b5c4223b7a8 ovn-installed in OVS
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00163|binding|INFO|Setting lport 57b4d441-0c29-4419-b20b-3b5c4223b7a8 up in Southbound
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.367 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d173dec2-c79a-45d2-b848-3cd1b1dfb743]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00164|binding|INFO|Setting lport 9db6eef3-e4da-4c17-91ea-1c3124906f61 ovn-installed in OVS
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00165|binding|INFO|Setting lport 9db6eef3-e4da-4c17-91ea-1c3124906f61 up in Southbound
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.379 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9464890e-a0cb-4608-bb8d-c6382e583474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:33 compute-0 NetworkManager[44885]: <info>  [1760432313.3806] manager: (tap03753014-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/90)
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.417 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5239c1b5-17f6-45b4-ab8a-013b2c405f8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.421 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[74426f26-e923-4962-978b-06e011500bdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:33 compute-0 NetworkManager[44885]: <info>  [1760432313.4454] device (tap03753014-b0): carrier: link connected
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.452 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0a192c52-5646-4f85-a0b8-3bba2c206bf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.473 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[35d0b3bd-8cc2-4720-a03c-547a6fc966bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03753014-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:7a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609220, 'reachable_time': 43338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291404, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.490 2 DEBUG nova.network.neutron [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Updated VIF entry in instance network info cache for port 9db6eef3-e4da-4c17-91ea-1c3124906f61. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.491 2 DEBUG nova.network.neutron [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Updating instance_info_cache with network_info: [{"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.507 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.515 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7b4128-b508-4ce5-b159-f5cec453fb66]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:7a55'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609220, 'tstamp': 609220}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291405, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 248 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.6 MiB/s wr, 204 op/s
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.542 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[809613e6-e83a-48fe-8333-8421cde75b07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03753014-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:7a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609220, 'reachable_time': 43338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291406, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.589 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[380663b0-8171-4da9-821c-459c6f911710]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.638 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2275d122-74fd-42cd-a679-7e5e748d1c09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.639 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03753014-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.639 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.639 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03753014-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.649 2 DEBUG nova.compute.manager [req-440d7507-9040-40b9-bd4d-f3e90af0adc6 req-28d66ef8-87b2-4200-b55c-6bc8a6132c40 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-plugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.650 2 DEBUG oslo_concurrency.lockutils [req-440d7507-9040-40b9-bd4d-f3e90af0adc6 req-28d66ef8-87b2-4200-b55c-6bc8a6132c40 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.650 2 DEBUG oslo_concurrency.lockutils [req-440d7507-9040-40b9-bd4d-f3e90af0adc6 req-28d66ef8-87b2-4200-b55c-6bc8a6132c40 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.650 2 DEBUG oslo_concurrency.lockutils [req-440d7507-9040-40b9-bd4d-f3e90af0adc6 req-28d66ef8-87b2-4200-b55c-6bc8a6132c40 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.650 2 DEBUG nova.compute.manager [req-440d7507-9040-40b9-bd4d-f3e90af0adc6 req-28d66ef8-87b2-4200-b55c-6bc8a6132c40 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Processing event network-vif-plugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.691 2 DEBUG oslo_concurrency.lockutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.691 2 DEBUG oslo_concurrency.lockutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.691 2 DEBUG oslo_concurrency.lockutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.692 2 DEBUG oslo_concurrency.lockutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.692 2 DEBUG oslo_concurrency.lockutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.693 2 INFO nova.compute.manager [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Terminating instance
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.694 2 DEBUG nova.compute.manager [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 NetworkManager[44885]: <info>  [1760432313.6962] manager: (tap03753014-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Oct 14 08:58:33 compute-0 kernel: tap03753014-b0: entered promiscuous mode
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.698 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03753014-b0, col_values=(('external_ids', {'iface-id': '70e65942-3441-4aa4-b413-2595e7186410'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00166|binding|INFO|Releasing lport 70e65942-3441-4aa4-b413-2595e7186410 from this chassis (sb_readonly=0)
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.716 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/03753014-b87c-4672-9d66-fdc254813b6e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/03753014-b87c-4672-9d66-fdc254813b6e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.716 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f2dacca3-0c18-402f-a681-6d69598db594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.717 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-03753014-b87c-4672-9d66-fdc254813b6e
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/03753014-b87c-4672-9d66-fdc254813b6e.pid.haproxy
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 03753014-b87c-4672-9d66-fdc254813b6e
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.719 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'env', 'PROCESS_TAG=haproxy-03753014-b87c-4672-9d66-fdc254813b6e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/03753014-b87c-4672-9d66-fdc254813b6e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:58:33 compute-0 kernel: tapfd335735-b8 (unregistering): left promiscuous mode
Oct 14 08:58:33 compute-0 NetworkManager[44885]: <info>  [1760432313.7335] device (tapfd335735-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.736 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00167|binding|INFO|Releasing lport fd335735-b88a-42f7-911e-af4b2b9396fb from this chassis (sb_readonly=0)
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00168|binding|INFO|Setting lport fd335735-b88a-42f7-911e-af4b2b9396fb down in Southbound
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00169|binding|INFO|Removing iface tapfd335735-b8 ovn-installed in OVS
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.754 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:04:b2 10.100.0.3'], port_security=['fa:16:3e:58:04:b2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'a071857d-db87-4931-95ad-f8c627f74160', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e06007a-4993-4328-9612-b43b931e2e3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '99649891054745d8a5186a1ad099e5a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cdd7ee35-5316-4ad1-b1f1-84c66df2ce6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb643099-5fd1-493f-8a92-ffa6e64eb2b1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=fd335735-b88a-42f7-911e-af4b2b9396fb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 kernel: tapcada6b6a-e5 (unregistering): left promiscuous mode
Oct 14 08:58:33 compute-0 NetworkManager[44885]: <info>  [1760432313.7883] device (tapcada6b6a-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00170|binding|INFO|Releasing lport cada6b6a-e534-4cd7-8abf-e402059d6964 from this chassis (sb_readonly=0)
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00171|binding|INFO|Setting lport cada6b6a-e534-4cd7-8abf-e402059d6964 down in Southbound
Oct 14 08:58:33 compute-0 ovn_controller[152662]: 2025-10-14T08:58:33Z|00172|binding|INFO|Removing iface tapcada6b6a-e5 ovn-installed in OVS
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.809 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:47:df 10.100.0.10'], port_security=['fa:16:3e:49:47:df 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a071857d-db87-4931-95ad-f8c627f74160', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e06007a-4993-4328-9612-b43b931e2e3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '99649891054745d8a5186a1ad099e5a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cdd7ee35-5316-4ad1-b1f1-84c66df2ce6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb643099-5fd1-493f-8a92-ffa6e64eb2b1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=cada6b6a-e534-4cd7-8abf-e402059d6964) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000017.scope: Deactivated successfully.
Oct 14 08:58:33 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000017.scope: Consumed 8.893s CPU time.
Oct 14 08:58:33 compute-0 systemd-machined[214636]: Machine qemu-24-instance-00000017 terminated.
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.952 2 INFO nova.virt.libvirt.driver [-] [instance: a071857d-db87-4931-95ad-f8c627f74160] Instance destroyed successfully.
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.953 2 DEBUG nova.objects.instance [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lazy-loading 'resources' on Instance uuid a071857d-db87-4931-95ad-f8c627f74160 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.976 2 DEBUG nova.virt.libvirt.vif [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-635702397',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-635702397',id=23,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:58:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='99649891054745d8a5186a1ad099e5a7',ramdisk_id='',reservation_id='r-swuxrn8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1504567615',owner_user_name='tempest-AttachInterfacesV270Test-1504567615-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:58:26Z,user_data=None,user_id='664abc01a11d458d9644488bf31e47f4',uuid=a071857d-db87-4931-95ad-f8c627f74160,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.976 2 DEBUG nova.network.os_vif_util [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converting VIF {"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.977 2 DEBUG nova.network.os_vif_util [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:04:b2,bridge_name='br-int',has_traffic_filtering=True,id=fd335735-b88a-42f7-911e-af4b2b9396fb,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd335735-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.978 2 DEBUG os_vif [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:04:b2,bridge_name='br-int',has_traffic_filtering=True,id=fd335735-b88a-42f7-911e-af4b2b9396fb,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd335735-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.980 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd335735-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.989 2 INFO os_vif [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:04:b2,bridge_name='br-int',has_traffic_filtering=True,id=fd335735-b88a-42f7-911e-af4b2b9396fb,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd335735-b8')
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.990 2 DEBUG nova.virt.libvirt.vif [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-635702397',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-635702397',id=23,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:58:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='99649891054745d8a5186a1ad099e5a7',ramdisk_id='',reservation_id='r-swuxrn8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1504567615',owner_user_name='tempest-AttachInterfacesV270Test-1504567615-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:58:26Z,user_data=None,user_id='664abc01a11d458d9644488bf31e47f4',uuid=a071857d-db87-4931-95ad-f8c627f74160,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cada6b6a-e534-4cd7-8abf-e402059d6964", "address": "fa:16:3e:49:47:df", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcada6b6a-e5", "ovs_interfaceid": "cada6b6a-e534-4cd7-8abf-e402059d6964", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.991 2 DEBUG nova.network.os_vif_util [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converting VIF {"id": "cada6b6a-e534-4cd7-8abf-e402059d6964", "address": "fa:16:3e:49:47:df", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcada6b6a-e5", "ovs_interfaceid": "cada6b6a-e534-4cd7-8abf-e402059d6964", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.992 2 DEBUG nova.network.os_vif_util [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:47:df,bridge_name='br-int',has_traffic_filtering=True,id=cada6b6a-e534-4cd7-8abf-e402059d6964,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcada6b6a-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.992 2 DEBUG os_vif [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:47:df,bridge_name='br-int',has_traffic_filtering=True,id=cada6b6a-e534-4cd7-8abf-e402059d6964,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcada6b6a-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.994 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcada6b6a-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:33 compute-0 nova_compute[259627]: 2025-10-14 08:58:33.998 2 INFO os_vif [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:47:df,bridge_name='br-int',has_traffic_filtering=True,id=cada6b6a-e534-4cd7-8abf-e402059d6964,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcada6b6a-e5')
Oct 14 08:58:34 compute-0 podman[291529]: 2025-10-14 08:58:34.137081324 +0000 UTC m=+0.064609197 container create 67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 08:58:34 compute-0 systemd[1]: Started libpod-conmon-67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca.scope.
Oct 14 08:58:34 compute-0 podman[291529]: 2025-10-14 08:58:34.108426861 +0000 UTC m=+0.035954754 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:58:34 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e1eac5cfa0dfaeaa47633dd3bc85d51f3de862c522038183f2763ceaaa4205/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:58:34 compute-0 podman[291529]: 2025-10-14 08:58:34.22611385 +0000 UTC m=+0.153641723 container init 67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 08:58:34 compute-0 podman[291529]: 2025-10-14 08:58:34.233398079 +0000 UTC m=+0.160925952 container start 67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 08:58:34 compute-0 neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e[291545]: [NOTICE]   (291551) : New worker (291553) forked
Oct 14 08:58:34 compute-0 neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e[291545]: [NOTICE]   (291551) : Loading success.
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.291 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 685df8a6-7b64-441e-9a56-4ede8db5faa9 in datapath 2534100b-a4f5-4f68-9f75-a1af37008664 unbound from our chassis
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.294 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2534100b-a4f5-4f68-9f75-a1af37008664
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.309 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[17ccbed9-ca71-4c7e-96a8-ce0ecdd99342]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.310 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2534100b-a1 in ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.313 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2534100b-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.313 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[11930c83-9e1a-430c-bc97-a670c8c6e16b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.314 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8a69237e-ac9d-4898-ac05-9892268fb581]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.327 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[b28a135c-5b09-4aed-992c-df9f5b834fb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.345 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2161c94a-724f-4efa-8683-3dfb9b013f3b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:34 compute-0 nova_compute[259627]: 2025-10-14 08:58:34.372 2 INFO nova.virt.libvirt.driver [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Deleting instance files /var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160_del
Oct 14 08:58:34 compute-0 nova_compute[259627]: 2025-10-14 08:58:34.373 2 INFO nova.virt.libvirt.driver [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Deletion of /var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160_del complete
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.378 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[289855a0-e317-4084-9b98-015e2a2c4487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:34 compute-0 NetworkManager[44885]: <info>  [1760432314.3907] manager: (tap2534100b-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/92)
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.390 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d3c135-a109-4bf0-abb9-fd72fe485161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:34 compute-0 systemd-udevd[291569]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:58:34 compute-0 nova_compute[259627]: 2025-10-14 08:58:34.424 2 INFO nova.compute.manager [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 14 08:58:34 compute-0 nova_compute[259627]: 2025-10-14 08:58:34.424 2 DEBUG oslo.service.loopingcall [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:58:34 compute-0 nova_compute[259627]: 2025-10-14 08:58:34.425 2 DEBUG nova.compute.manager [-] [instance: a071857d-db87-4931-95ad-f8c627f74160] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:58:34 compute-0 nova_compute[259627]: 2025-10-14 08:58:34.425 2 DEBUG nova.network.neutron [-] [instance: a071857d-db87-4931-95ad-f8c627f74160] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.430 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a19da300-fb18-4b8b-97bd-c352f327dd35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.432 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[719048e5-b21a-4efe-9072-501d8a5b2c70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:34 compute-0 NetworkManager[44885]: <info>  [1760432314.4559] device (tap2534100b-a0): carrier: link connected
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.462 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd46360-c756-47ac-9d33-d4a2fbbdd42e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.483 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab7c79e-432f-4c61-8843-4830d820ed6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2534100b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:72:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609321, 'reachable_time': 44734, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291588, 'error': None, 'target': 'ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.502 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[867ab156-52ef-41fd-a7ba-9f896d7487d2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:72ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609321, 'tstamp': 609321}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291589, 'error': None, 'target': 'ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.520 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[22c86e79-3aa1-4fc8-a6ee-f6a0d4c1ade7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2534100b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:72:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609321, 'reachable_time': 44734, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291590, 'error': None, 'target': 'ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:34 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 14 08:58:34 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000016.scope: Consumed 12.239s CPU time.
Oct 14 08:58:34 compute-0 systemd-machined[214636]: Machine qemu-23-instance-00000016 terminated.
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.550 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[14c44308-5eb0-432d-a567-81b15bf29c56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:34 compute-0 nova_compute[259627]: 2025-10-14 08:58:34.568 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432314.568571, aefbf308-7f99-4a76-8d5e-54613f6bdf83 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:34 compute-0 nova_compute[259627]: 2025-10-14 08:58:34.569 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] VM Started (Lifecycle Event)
Oct 14 08:58:34 compute-0 nova_compute[259627]: 2025-10-14 08:58:34.592 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:34 compute-0 nova_compute[259627]: 2025-10-14 08:58:34.596 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432314.5687027, aefbf308-7f99-4a76-8d5e-54613f6bdf83 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:34 compute-0 nova_compute[259627]: 2025-10-14 08:58:34.596 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] VM Paused (Lifecycle Event)
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.609 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6d48bf08-356a-4f72-9a4c-cfeab513dc49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.611 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2534100b-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.611 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.612 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2534100b-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:34 compute-0 nova_compute[259627]: 2025-10-14 08:58:34.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:34 compute-0 kernel: tap2534100b-a0: entered promiscuous mode
Oct 14 08:58:34 compute-0 NetworkManager[44885]: <info>  [1760432314.6145] manager: (tap2534100b-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Oct 14 08:58:34 compute-0 nova_compute[259627]: 2025-10-14 08:58:34.617 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:34 compute-0 nova_compute[259627]: 2025-10-14 08:58:34.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.619 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2534100b-a0, col_values=(('external_ids', {'iface-id': 'a903215b-fae6-434d-8681-fcd07d014218'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:34 compute-0 nova_compute[259627]: 2025-10-14 08:58:34.619 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:58:34 compute-0 ovn_controller[152662]: 2025-10-14T08:58:34Z|00173|binding|INFO|Releasing lport a903215b-fae6-434d-8681-fcd07d014218 from this chassis (sb_readonly=0)
Oct 14 08:58:34 compute-0 nova_compute[259627]: 2025-10-14 08:58:34.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:34 compute-0 nova_compute[259627]: 2025-10-14 08:58:34.638 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:58:34 compute-0 nova_compute[259627]: 2025-10-14 08:58:34.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.646 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2534100b-a4f5-4f68-9f75-a1af37008664.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2534100b-a4f5-4f68-9f75-a1af37008664.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.647 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[22e37e2c-e2a3-4713-8f3d-9095cf3b8a43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.649 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-2534100b-a4f5-4f68-9f75-a1af37008664
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/2534100b-a4f5-4f68-9f75-a1af37008664.pid.haproxy
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 2534100b-a4f5-4f68-9f75-a1af37008664
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:58:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.653 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664', 'env', 'PROCESS_TAG=haproxy-2534100b-a4f5-4f68-9f75-a1af37008664', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2534100b-a4f5-4f68-9f75-a1af37008664.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:58:34 compute-0 ceph-mon[74249]: pgmap v1207: 305 pgs: 305 active+clean; 248 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.6 MiB/s wr, 204 op/s
Oct 14 08:58:35 compute-0 podman[291625]: 2025-10-14 08:58:35.072930794 +0000 UTC m=+0.064353311 container create 187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:58:35 compute-0 podman[291625]: 2025-10-14 08:58:35.035917405 +0000 UTC m=+0.027339962 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:58:35 compute-0 systemd[1]: Started libpod-conmon-187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef.scope.
Oct 14 08:58:35 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e7d9746db43f2554dc647176d5a6e0cd2f0b471a69b65de77f8535040dad214/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:58:35 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 08:58:35 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 08:58:35 compute-0 podman[291625]: 2025-10-14 08:58:35.192612133 +0000 UTC m=+0.184034690 container init 187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 08:58:35 compute-0 podman[291625]: 2025-10-14 08:58:35.20308324 +0000 UTC m=+0.194505747 container start 187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 08:58:35 compute-0 neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664[291640]: [NOTICE]   (291645) : New worker (291647) forked
Oct 14 08:58:35 compute-0 neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664[291640]: [NOTICE]   (291645) : Loading success.
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.263 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 9db6eef3-e4da-4c17-91ea-1c3124906f61 in datapath 03753014-b87c-4672-9d66-fdc254813b6e unbound from our chassis
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.267 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03753014-b87c-4672-9d66-fdc254813b6e
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.288 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6d3716-e1a8-409b-90d6-6576b65168e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.313 2 INFO nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance shutdown successfully after 13 seconds.
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.319 2 INFO nova.virt.libvirt.driver [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance destroyed successfully.
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.325 2 INFO nova.virt.libvirt.driver [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance destroyed successfully.
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.328 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f1704f-0807-443a-8e2a-29debe029961]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.333 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff17cb8-2be7-48e7-bc17-afc683d09d5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.375 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7c44ad4d-ca4f-4a5c-a88b-36ed80f60e56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.399 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[42db1afc-871b-4173-a88d-262de1bba854]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03753014-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:7a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 742, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 742, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609220, 'reachable_time': 43338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 7, 'inoctets': 644, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 7, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 644, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 7, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291679, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.421 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ffda2c-8442-41b0-aeea-dee9b9b75301]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap03753014-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609237, 'tstamp': 609237}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291680, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap03753014-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609239, 'tstamp': 609239}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291680, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.422 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03753014-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.425 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03753014-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.426 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.426 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03753014-b0, col_values=(('external_ids', {'iface-id': '70e65942-3441-4aa4-b413-2595e7186410'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.427 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.427 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.428 162547 INFO neutron.agent.ovn.metadata.agent [-] Port fd335735-b88a-42f7-911e-af4b2b9396fb in datapath 8e06007a-4993-4328-9612-b43b931e2e3b unbound from our chassis
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.429 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e06007a-4993-4328-9612-b43b931e2e3b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.430 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[672a799f-39c5-4a5f-8687-7fbede3cca39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.431 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b namespace which is not needed anymore
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.434 2 DEBUG nova.compute.manager [req-48a7a9ad-ba2a-40ba-b94a-45740af0dc81 req-61abbd2f-91c0-449d-9944-d267d5a5c2ac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-unplugged-fd335735-b88a-42f7-911e-af4b2b9396fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.434 2 DEBUG oslo_concurrency.lockutils [req-48a7a9ad-ba2a-40ba-b94a-45740af0dc81 req-61abbd2f-91c0-449d-9944-d267d5a5c2ac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.435 2 DEBUG oslo_concurrency.lockutils [req-48a7a9ad-ba2a-40ba-b94a-45740af0dc81 req-61abbd2f-91c0-449d-9944-d267d5a5c2ac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.435 2 DEBUG oslo_concurrency.lockutils [req-48a7a9ad-ba2a-40ba-b94a-45740af0dc81 req-61abbd2f-91c0-449d-9944-d267d5a5c2ac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.435 2 DEBUG nova.compute.manager [req-48a7a9ad-ba2a-40ba-b94a-45740af0dc81 req-61abbd2f-91c0-449d-9944-d267d5a5c2ac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] No waiting events found dispatching network-vif-unplugged-fd335735-b88a-42f7-911e-af4b2b9396fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.435 2 DEBUG nova.compute.manager [req-48a7a9ad-ba2a-40ba-b94a-45740af0dc81 req-61abbd2f-91c0-449d-9944-d267d5a5c2ac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-unplugged-fd335735-b88a-42f7-911e-af4b2b9396fb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 08:58:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 214 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 5.7 MiB/s wr, 333 op/s
Oct 14 08:58:35 compute-0 neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b[290916]: [NOTICE]   (290946) : haproxy version is 2.8.14-c23fe91
Oct 14 08:58:35 compute-0 neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b[290916]: [NOTICE]   (290946) : path to executable is /usr/sbin/haproxy
Oct 14 08:58:35 compute-0 neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b[290916]: [WARNING]  (290946) : Exiting Master process...
Oct 14 08:58:35 compute-0 neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b[290916]: [WARNING]  (290946) : Exiting Master process...
Oct 14 08:58:35 compute-0 neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b[290916]: [ALERT]    (290946) : Current worker (290955) exited with code 143 (Terminated)
Oct 14 08:58:35 compute-0 neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b[290916]: [WARNING]  (290946) : All workers exited. Exiting... (0)
Oct 14 08:58:35 compute-0 systemd[1]: libpod-05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49.scope: Deactivated successfully.
Oct 14 08:58:35 compute-0 conmon[290916]: conmon 05b21863dda394c5167a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49.scope/container/memory.events
Oct 14 08:58:35 compute-0 podman[291698]: 2025-10-14 08:58:35.61519894 +0000 UTC m=+0.068270857 container died 05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:58:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49-userdata-shm.mount: Deactivated successfully.
Oct 14 08:58:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1dbf6ecce0931ae581de6497fdff55fce469de3179658359b1e470c5e9f85c3e-merged.mount: Deactivated successfully.
Oct 14 08:58:35 compute-0 podman[291698]: 2025-10-14 08:58:35.664556932 +0000 UTC m=+0.117628849 container cleanup 05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 08:58:35 compute-0 systemd[1]: libpod-conmon-05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49.scope: Deactivated successfully.
Oct 14 08:58:35 compute-0 podman[291730]: 2025-10-14 08:58:35.752525492 +0000 UTC m=+0.062170497 container remove 05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.756 2 INFO nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deleting instance files /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056_del
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.757 2 INFO nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deletion of /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056_del complete
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.763 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-plugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.763 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.763 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.763 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.763 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] No event matching network-vif-plugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 in dict_keys([('network-vif-plugged', '57b4d441-0c29-4419-b20b-3b5c4223b7a8'), ('network-vif-plugged', '9db6eef3-e4da-4c17-91ea-1c3124906f61')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.764 2 WARNING nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received unexpected event network-vif-plugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 for instance with vm_state building and task_state spawning.
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.764 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-plugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.764 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.764 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.764 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.764 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Processing event network-vif-plugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.765 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-plugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.765 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.765 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.765 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.765 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] No event matching network-vif-plugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 in dict_keys([('network-vif-plugged', '9db6eef3-e4da-4c17-91ea-1c3124906f61')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.765 2 WARNING nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received unexpected event network-vif-plugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 for instance with vm_state building and task_state spawning.
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.766 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-plugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.766 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.766 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.766 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.766 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Processing event network-vif-plugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.766 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-plugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.767 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.767 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.767 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.767 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] No waiting events found dispatching network-vif-plugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.767 2 WARNING nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received unexpected event network-vif-plugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 for instance with vm_state building and task_state spawning.
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.767 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-deleted-fd335735-b88a-42f7-911e-af4b2b9396fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.768 2 INFO nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Neutron deleted interface fd335735-b88a-42f7-911e-af4b2b9396fb; detaching it from the instance and deleting it from the info cache
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.768 2 DEBUG nova.network.neutron [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Updating instance_info_cache with network_info: [{"id": "cada6b6a-e534-4cd7-8abf-e402059d6964", "address": "fa:16:3e:49:47:df", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcada6b6a-e5", "ovs_interfaceid": "cada6b6a-e534-4cd7-8abf-e402059d6964", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.768 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[88f0201c-dd07-4373-b6ed-c5da5466a548]: (4, ('Tue Oct 14 08:58:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b (05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49)\n05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49\nTue Oct 14 08:58:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b (05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49)\n05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.769 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.770 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[386c765e-020e-4cc1-8bf8-bd8aa788deda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.771 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e06007a-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.774 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432315.7737002, aefbf308-7f99-4a76-8d5e-54613f6bdf83 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.774 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] VM Resumed (Lifecycle Event)
Oct 14 08:58:35 compute-0 kernel: tap8e06007a-40: left promiscuous mode
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.776 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.783 2 INFO nova.virt.libvirt.driver [-] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Instance spawned successfully.
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.784 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.798 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f0512634-9ab7-4a56-99ba-2846674fb117]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.829 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.827 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0320a5-e861-402b-ae1f-c5f572769e24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.831 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fe95ca25-d2e6-43af-9c1b-00ea5ba3a52c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.832 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.847 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[57b5269d-dcab-463a-bd63-15b55b7ee157]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608351, 'reachable_time': 44285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291746, 'error': None, 'target': 'ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.849 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.849 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[27541185-afb6-447e-b8f4-d58b58f4aa05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.850 162547 INFO neutron.agent.ovn.metadata.agent [-] Port cada6b6a-e534-4cd7-8abf-e402059d6964 in datapath 8e06007a-4993-4328-9612-b43b931e2e3b unbound from our chassis
Oct 14 08:58:35 compute-0 systemd[1]: run-netns-ovnmeta\x2d8e06007a\x2d4993\x2d4328\x2d9612\x2db43b931e2e3b.mount: Deactivated successfully.
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.852 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e06007a-4993-4328-9612-b43b931e2e3b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:58:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.852 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e95d6eb3-6c94-4547-8457-c0ff0d1fc524]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.861 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Detach interface failed, port_id=fd335735-b88a-42f7-911e-af4b2b9396fb, reason: Instance a071857d-db87-4931-95ad-f8c627f74160 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.881 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.900 2 DEBUG nova.network.neutron [-] [instance: a071857d-db87-4931-95ad-f8c627f74160] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.912 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.912 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.913 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.913 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.913 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.914 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:35 compute-0 nova_compute[259627]: 2025-10-14 08:58:35.945 2 INFO nova.compute.manager [-] [instance: a071857d-db87-4931-95ad-f8c627f74160] Took 1.52 seconds to deallocate network for instance.
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.033 2 INFO nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Took 12.49 seconds to spawn the instance on the hypervisor.
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.033 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.046 2 DEBUG nova.compute.manager [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.047 2 DEBUG oslo_concurrency.lockutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.047 2 DEBUG oslo_concurrency.lockutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.129 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.130 2 INFO nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Creating image(s)
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.151 2 DEBUG nova.storage.rbd_utils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.172 2 DEBUG nova.storage.rbd_utils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.195 2 DEBUG nova.storage.rbd_utils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.198 2 DEBUG oslo_concurrency.lockutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.201 2 DEBUG oslo_concurrency.lockutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.210 2 INFO nova.compute.manager [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] instance snapshotting
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.210 2 WARNING nova.compute.manager [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] trying to snapshot a non-running instance: (state: 3 expected: 1)
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.221 2 INFO nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Took 13.87 seconds to build instance.
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.234 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.256 2 DEBUG oslo_concurrency.processutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.424 2 INFO nova.virt.libvirt.driver [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Beginning live snapshot process
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.558 2 DEBUG nova.virt.libvirt.imagebackend [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Image locations are: [{'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/e2368e3e-f504-40e6-a9d3-67df18c845bb/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/e2368e3e-f504-40e6-a9d3-67df18c845bb/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.564 2 DEBUG nova.virt.libvirt.imagebackend [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 08:58:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:58:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3454276195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.685 2 DEBUG oslo_concurrency.processutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.689 2 DEBUG nova.compute.provider_tree [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.707 2 DEBUG nova.scheduler.client.report [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.732 2 DEBUG oslo_concurrency.lockutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.760 2 INFO nova.scheduler.client.report [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Deleted allocations for instance a071857d-db87-4931-95ad-f8c627f74160
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.768 2 DEBUG nova.storage.rbd_utils [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] creating snapshot(b279fc5a287f476b829290bb8093336b) on rbd image(2826d9ce-d739-49a1-abfa-80cee62173fb_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 08:58:36 compute-0 ceph-mon[74249]: pgmap v1208: 305 pgs: 305 active+clean; 214 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 5.7 MiB/s wr, 333 op/s
Oct 14 08:58:36 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3454276195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.863 2 DEBUG oslo_concurrency.lockutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:36 compute-0 nova_compute[259627]: 2025-10-14 08:58:36.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.449 2 DEBUG oslo_concurrency.lockutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.450 2 DEBUG oslo_concurrency.lockutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.450 2 DEBUG oslo_concurrency.lockutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.451 2 DEBUG oslo_concurrency.lockutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.451 2 DEBUG oslo_concurrency.lockutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.452 2 INFO nova.compute.manager [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Terminating instance
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.453 2 DEBUG nova.compute.manager [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:58:37 compute-0 kernel: tap57b4d441-0c (unregistering): left promiscuous mode
Oct 14 08:58:37 compute-0 NetworkManager[44885]: <info>  [1760432317.4913] device (tap57b4d441-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:58:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:58:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 214 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 244 op/s
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:37 compute-0 ovn_controller[152662]: 2025-10-14T08:58:37Z|00174|binding|INFO|Releasing lport 57b4d441-0c29-4419-b20b-3b5c4223b7a8 from this chassis (sb_readonly=0)
Oct 14 08:58:37 compute-0 kernel: tap685df8a6-7b (unregistering): left promiscuous mode
Oct 14 08:58:37 compute-0 ovn_controller[152662]: 2025-10-14T08:58:37Z|00175|binding|INFO|Setting lport 57b4d441-0c29-4419-b20b-3b5c4223b7a8 down in Southbound
Oct 14 08:58:37 compute-0 ovn_controller[152662]: 2025-10-14T08:58:37Z|00176|binding|INFO|Removing iface tap57b4d441-0c ovn-installed in OVS
Oct 14 08:58:37 compute-0 NetworkManager[44885]: <info>  [1760432317.5626] device (tap685df8a6-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.563 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:7f:93 10.100.0.177'], port_security=['fa:16:3e:3c:7f:93 10.100.0.177'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.177/24', 'neutron:device_id': 'aefbf308-7f99-4a76-8d5e-54613f6bdf83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03753014-b87c-4672-9d66-fdc254813b6e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=494b06c3-b496-4326-9e04-09e435735a40, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=57b4d441-0c29-4419-b20b-3b5c4223b7a8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.564 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 57b4d441-0c29-4419-b20b-3b5c4223b7a8 in datapath 03753014-b87c-4672-9d66-fdc254813b6e unbound from our chassis
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.566 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03753014-b87c-4672-9d66-fdc254813b6e
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.578 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a246e4-0caa-4b2a-8cfc-fba9564842af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:37 compute-0 ovn_controller[152662]: 2025-10-14T08:58:37Z|00177|binding|INFO|Releasing lport 685df8a6-7b64-441e-9a56-4ede8db5faa9 from this chassis (sb_readonly=0)
Oct 14 08:58:37 compute-0 ovn_controller[152662]: 2025-10-14T08:58:37Z|00178|binding|INFO|Setting lport 685df8a6-7b64-441e-9a56-4ede8db5faa9 down in Southbound
Oct 14 08:58:37 compute-0 ovn_controller[152662]: 2025-10-14T08:58:37Z|00179|binding|INFO|Removing iface tap685df8a6-7b ovn-installed in OVS
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.598 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:86:b1 10.100.1.144'], port_security=['fa:16:3e:22:86:b1 10.100.1.144'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.144/24', 'neutron:device_id': 'aefbf308-7f99-4a76-8d5e-54613f6bdf83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2534100b-a4f5-4f68-9f75-a1af37008664', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b4c418fd-a28b-433f-be67-07c285fde4ec, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=685df8a6-7b64-441e-9a56-4ede8db5faa9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.600 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:37 compute-0 kernel: tap9db6eef3-e4 (unregistering): left promiscuous mode
Oct 14 08:58:37 compute-0 NetworkManager[44885]: <info>  [1760432317.6116] device (tap9db6eef3-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.623 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[70aab4f4-c4f8-49bb-ac63-13e5447eb858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.628 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e8845f10-d645-42f0-84c3-9a5c657ded43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:37 compute-0 ovn_controller[152662]: 2025-10-14T08:58:37Z|00180|binding|INFO|Releasing lport 9db6eef3-e4da-4c17-91ea-1c3124906f61 from this chassis (sb_readonly=0)
Oct 14 08:58:37 compute-0 ovn_controller[152662]: 2025-10-14T08:58:37Z|00181|binding|INFO|Setting lport 9db6eef3-e4da-4c17-91ea-1c3124906f61 down in Southbound
Oct 14 08:58:37 compute-0 ovn_controller[152662]: 2025-10-14T08:58:37Z|00182|binding|INFO|Removing iface tap9db6eef3-e4 ovn-installed in OVS
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.642 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:11:79 10.100.0.215'], port_security=['fa:16:3e:24:11:79 10.100.0.215'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.215/24', 'neutron:device_id': 'aefbf308-7f99-4a76-8d5e-54613f6bdf83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03753014-b87c-4672-9d66-fdc254813b6e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=494b06c3-b496-4326-9e04-09e435735a40, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=9db6eef3-e4da-4c17-91ea-1c3124906f61) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.641 2 DEBUG nova.compute.manager [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-plugged-fd335735-b88a-42f7-911e-af4b2b9396fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.642 2 DEBUG oslo_concurrency.lockutils [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.642 2 DEBUG oslo_concurrency.lockutils [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.642 2 DEBUG oslo_concurrency.lockutils [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.643 2 DEBUG nova.compute.manager [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] No waiting events found dispatching network-vif-plugged-fd335735-b88a-42f7-911e-af4b2b9396fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.643 2 WARNING nova.compute.manager [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received unexpected event network-vif-plugged-fd335735-b88a-42f7-911e-af4b2b9396fb for instance with vm_state deleted and task_state None.
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.643 2 DEBUG nova.compute.manager [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-unplugged-cada6b6a-e534-4cd7-8abf-e402059d6964 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.643 2 DEBUG oslo_concurrency.lockutils [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.643 2 DEBUG oslo_concurrency.lockutils [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.644 2 DEBUG oslo_concurrency.lockutils [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.644 2 DEBUG nova.compute.manager [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] No waiting events found dispatching network-vif-unplugged-cada6b6a-e534-4cd7-8abf-e402059d6964 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.645 2 WARNING nova.compute.manager [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received unexpected event network-vif-unplugged-cada6b6a-e534-4cd7-8abf-e402059d6964 for instance with vm_state deleted and task_state None.
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.645 2 DEBUG nova.compute.manager [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-plugged-cada6b6a-e534-4cd7-8abf-e402059d6964 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.645 2 DEBUG oslo_concurrency.lockutils [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.645 2 DEBUG oslo_concurrency.lockutils [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.645 2 DEBUG oslo_concurrency.lockutils [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.646 2 DEBUG nova.compute.manager [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] No waiting events found dispatching network-vif-plugged-cada6b6a-e534-4cd7-8abf-e402059d6964 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.646 2 WARNING nova.compute.manager [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received unexpected event network-vif-plugged-cada6b6a-e534-4cd7-8abf-e402059d6964 for instance with vm_state deleted and task_state None.
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.661 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[76e871eb-e06b-48b0-8e6d-981b38a2ebc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.676 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0cef2f1f-e44e-4fa3-9144-3260a4bd399c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03753014-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:7a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609220, 'reachable_time': 43338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291896, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.678 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf.part --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.679 2 DEBUG nova.virt.images [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] e2368e3e-f504-40e6-a9d3-67df18c845bb was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.680 2 DEBUG nova.privsep.utils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.680 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf.part /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:37 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000019.scope: Deactivated successfully.
Oct 14 08:58:37 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000019.scope: Consumed 2.811s CPU time.
Oct 14 08:58:37 compute-0 systemd-machined[214636]: Machine qemu-26-instance-00000019 terminated.
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.692 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[304e13f7-de77-4cfe-beb6-5fa1293eb186]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap03753014-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609237, 'tstamp': 609237}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291899, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap03753014-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609239, 'tstamp': 609239}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291899, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.693 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03753014-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.708 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03753014-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.709 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.709 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03753014-b0, col_values=(('external_ids', {'iface-id': '70e65942-3441-4aa4-b413-2595e7186410'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.710 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.710 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 685df8a6-7b64-441e-9a56-4ede8db5faa9 in datapath 2534100b-a4f5-4f68-9f75-a1af37008664 unbound from our chassis
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.711 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2534100b-a4f5-4f68-9f75-a1af37008664, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4c23ef7b-881b-40cc-a859-f98349691eda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.713 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664 namespace which is not needed anymore
Oct 14 08:58:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct 14 08:58:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct 14 08:58:37 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct 14 08:58:37 compute-0 neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664[291640]: [NOTICE]   (291645) : haproxy version is 2.8.14-c23fe91
Oct 14 08:58:37 compute-0 neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664[291640]: [NOTICE]   (291645) : path to executable is /usr/sbin/haproxy
Oct 14 08:58:37 compute-0 neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664[291640]: [WARNING]  (291645) : Exiting Master process...
Oct 14 08:58:37 compute-0 neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664[291640]: [ALERT]    (291645) : Current worker (291647) exited with code 143 (Terminated)
Oct 14 08:58:37 compute-0 neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664[291640]: [WARNING]  (291645) : All workers exited. Exiting... (0)
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.863 2 DEBUG nova.compute.manager [req-b6e282d1-7f1c-4ed2-bd12-5a822b46bc4c req-171990bd-83ad-4329-9b49-f2e5f14b79a2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-deleted-cada6b6a-e534-4cd7-8abf-e402059d6964 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:37 compute-0 systemd[1]: libpod-187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef.scope: Deactivated successfully.
Oct 14 08:58:37 compute-0 podman[291932]: 2025-10-14 08:58:37.87081673 +0000 UTC m=+0.054609342 container died 187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:58:37 compute-0 NetworkManager[44885]: <info>  [1760432317.8845] manager: (tap685df8a6-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Oct 14 08:58:37 compute-0 NetworkManager[44885]: <info>  [1760432317.8980] manager: (tap9db6eef3-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.901 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf.part /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf.converted" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.906 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef-userdata-shm.mount: Deactivated successfully.
Oct 14 08:58:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e7d9746db43f2554dc647176d5a6e0cd2f0b471a69b65de77f8535040dad214-merged.mount: Deactivated successfully.
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.937 2 INFO nova.virt.libvirt.driver [-] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Instance destroyed successfully.
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.938 2 DEBUG nova.objects.instance [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lazy-loading 'resources' on Instance uuid aefbf308-7f99-4a76-8d5e-54613f6bdf83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:37 compute-0 podman[291932]: 2025-10-14 08:58:37.943934465 +0000 UTC m=+0.127727047 container cleanup 187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.954 2 DEBUG nova.virt.libvirt.vif [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-620013762',display_name='tempest-ServersTestMultiNic-server-620013762',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-620013762',id=25,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-xsoyr0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:58:36Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=aefbf308-7f99-4a76-8d5e-54613f6bdf83,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.954 2 DEBUG nova.network.os_vif_util [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.958 2 DEBUG nova.network.os_vif_util [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:7f:93,bridge_name='br-int',has_traffic_filtering=True,id=57b4d441-0c29-4419-b20b-3b5c4223b7a8,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b4d441-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.958 2 DEBUG os_vif [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:7f:93,bridge_name='br-int',has_traffic_filtering=True,id=57b4d441-0c29-4419-b20b-3b5c4223b7a8,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b4d441-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.961 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57b4d441-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:37 compute-0 systemd[1]: libpod-conmon-187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef.scope: Deactivated successfully.
Oct 14 08:58:37 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.974 2 DEBUG nova.storage.rbd_utils [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] cloning vms/2826d9ce-d739-49a1-abfa-80cee62173fb_disk@b279fc5a287f476b829290bb8093336b to images/58a309a9-ebdf-4853-9550-ca13b12b33e8 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:37.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.001 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf.converted --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.002 2 DEBUG oslo_concurrency.lockutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:38 compute-0 podman[291991]: 2025-10-14 08:58:38.017222795 +0000 UTC m=+0.045066368 container remove 187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.024 2 DEBUG nova.storage.rbd_utils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.024 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3d84dbda-e118-46f7-9cac-730e1ccbaa74]: (4, ('Tue Oct 14 08:58:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664 (187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef)\n187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef\nTue Oct 14 08:58:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664 (187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef)\n187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.026 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7450ee30-eb43-4605-95c2-aedc6783fa7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.027 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2534100b-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.028 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf 51c76e0f-284d-4122-83b4-32c4518b9056_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:38 compute-0 kernel: tap2534100b-a0: left promiscuous mode
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.054 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2a8b789d-db55-4629-b95d-7901af98bbc1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.055 2 INFO os_vif [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:7f:93,bridge_name='br-int',has_traffic_filtering=True,id=57b4d441-0c29-4419-b20b-3b5c4223b7a8,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b4d441-0c')
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.056 2 DEBUG nova.virt.libvirt.vif [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-620013762',display_name='tempest-ServersTestMultiNic-server-620013762',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-620013762',id=25,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-xsoyr0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:58:36Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=aefbf308-7f99-4a76-8d5e-54613f6bdf83,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.057 2 DEBUG nova.network.os_vif_util [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.058 2 DEBUG nova.network.os_vif_util [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:86:b1,bridge_name='br-int',has_traffic_filtering=True,id=685df8a6-7b64-441e-9a56-4ede8db5faa9,network=Network(2534100b-a4f5-4f68-9f75-a1af37008664),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685df8a6-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.058 2 DEBUG os_vif [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:86:b1,bridge_name='br-int',has_traffic_filtering=True,id=685df8a6-7b64-441e-9a56-4ede8db5faa9,network=Network(2534100b-a4f5-4f68-9f75-a1af37008664),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685df8a6-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.061 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap685df8a6-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.072 2 INFO os_vif [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:86:b1,bridge_name='br-int',has_traffic_filtering=True,id=685df8a6-7b64-441e-9a56-4ede8db5faa9,network=Network(2534100b-a4f5-4f68-9f75-a1af37008664),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685df8a6-7b')
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.073 2 DEBUG nova.virt.libvirt.vif [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-620013762',display_name='tempest-ServersTestMultiNic-server-620013762',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-620013762',id=25,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-xsoyr0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:58:36Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=aefbf308-7f99-4a76-8d5e-54613f6bdf83,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.074 2 DEBUG nova.network.os_vif_util [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.074 2 DEBUG nova.network.os_vif_util [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:11:79,bridge_name='br-int',has_traffic_filtering=True,id=9db6eef3-e4da-4c17-91ea-1c3124906f61,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9db6eef3-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.074 2 DEBUG os_vif [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:11:79,bridge_name='br-int',has_traffic_filtering=True,id=9db6eef3-e4da-4c17-91ea-1c3124906f61,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9db6eef3-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.076 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9db6eef3-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.085 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[56bd13aa-e34d-4ca2-a7fa-b1fa34b46125]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.086 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[83ad917b-f0fc-42a9-8449-f7995bad0172]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.086 2 DEBUG nova.storage.rbd_utils [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] flattening images/58a309a9-ebdf-4853-9550-ca13b12b33e8 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.101 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2c3ee7e1-94f7-41dc-a122-1b462b413530]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609313, 'reachable_time': 38909, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292071, 'error': None, 'target': 'ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.103 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.103 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[b5df470d-15fb-49b7-8a6f-7a3a0bd4eae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.104 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 9db6eef3-e4da-4c17-91ea-1c3124906f61 in datapath 03753014-b87c-4672-9d66-fdc254813b6e unbound from our chassis
Oct 14 08:58:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d2534100b\x2da4f5\x2d4f68\x2d9f75\x2da1af37008664.mount: Deactivated successfully.
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.106 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03753014-b87c-4672-9d66-fdc254813b6e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.107 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[35ffd825-e041-429d-97a8-e654bb096543]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.108 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e namespace which is not needed anymore
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.126 2 INFO os_vif [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:11:79,bridge_name='br-int',has_traffic_filtering=True,id=9db6eef3-e4da-4c17-91ea-1c3124906f61,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9db6eef3-e4')
Oct 14 08:58:38 compute-0 neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e[291545]: [NOTICE]   (291551) : haproxy version is 2.8.14-c23fe91
Oct 14 08:58:38 compute-0 neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e[291545]: [NOTICE]   (291551) : path to executable is /usr/sbin/haproxy
Oct 14 08:58:38 compute-0 neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e[291545]: [WARNING]  (291551) : Exiting Master process...
Oct 14 08:58:38 compute-0 neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e[291545]: [ALERT]    (291551) : Current worker (291553) exited with code 143 (Terminated)
Oct 14 08:58:38 compute-0 neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e[291545]: [WARNING]  (291551) : All workers exited. Exiting... (0)
Oct 14 08:58:38 compute-0 systemd[1]: libpod-67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca.scope: Deactivated successfully.
Oct 14 08:58:38 compute-0 conmon[291545]: conmon 67a451c8f393c839e08a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca.scope/container/memory.events
Oct 14 08:58:38 compute-0 podman[292142]: 2025-10-14 08:58:38.290825013 +0000 UTC m=+0.070543693 container died 67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 08:58:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca-userdata-shm.mount: Deactivated successfully.
Oct 14 08:58:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-82e1eac5cfa0dfaeaa47633dd3bc85d51f3de862c522038183f2763ceaaa4205-merged.mount: Deactivated successfully.
Oct 14 08:58:38 compute-0 podman[292142]: 2025-10-14 08:58:38.364915842 +0000 UTC m=+0.144634522 container cleanup 67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 08:58:38 compute-0 systemd[1]: libpod-conmon-67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca.scope: Deactivated successfully.
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.395 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf 51c76e0f-284d-4122-83b4-32c4518b9056_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:38 compute-0 podman[292186]: 2025-10-14 08:58:38.431578399 +0000 UTC m=+0.043044588 container remove 67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.437 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ca0e04bf-618a-4e96-a93a-787a75aa1040]: (4, ('Tue Oct 14 08:58:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e (67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca)\n67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca\nTue Oct 14 08:58:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e (67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca)\n67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.438 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2d113bb0-b213-42ea-8865-36f69f137f92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.439 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03753014-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:38 compute-0 kernel: tap03753014-b0: left promiscuous mode
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.445 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[18151158-fab8-4657-ba44-ab16599f5655]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.467 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3c18f7e0-d95d-408c-a8c5-d787be083de1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.469 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1f304684-93e2-4da1-b1d1-17d9584628dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.475 2 DEBUG nova.storage.rbd_utils [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] removing snapshot(b279fc5a287f476b829290bb8093336b) on rbd image(2826d9ce-d739-49a1-abfa-80cee62173fb_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.481 2 DEBUG nova.storage.rbd_utils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] resizing rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.485 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1b2c12e9-c104-4b45-8d79-4ed7025fa970]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609212, 'reachable_time': 40198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292241, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.488 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:58:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.488 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[60ecf702-6052-418b-8510-ce5202dcf638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.572 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.573 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Ensure instance console log exists: /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.574 2 DEBUG oslo_concurrency.lockutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.574 2 DEBUG oslo_concurrency.lockutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.574 2 DEBUG oslo_concurrency.lockutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.575 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.580 2 WARNING nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.584 2 DEBUG nova.virt.libvirt.host [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.585 2 DEBUG nova.virt.libvirt.host [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.587 2 DEBUG nova.virt.libvirt.host [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.588 2 DEBUG nova.virt.libvirt.host [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.588 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.588 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.589 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.589 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.589 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.589 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.589 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.589 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.590 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.590 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.590 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.590 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.590 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'vcpu_model' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.609 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.691 2 INFO nova.virt.libvirt.driver [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Deleting instance files /var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83_del
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.693 2 INFO nova.virt.libvirt.driver [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Deletion of /var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83_del complete
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.754 2 INFO nova.compute.manager [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Took 1.30 seconds to destroy the instance on the hypervisor.
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.755 2 DEBUG oslo.service.loopingcall [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.756 2 DEBUG nova.compute.manager [-] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.757 2 DEBUG nova.network.neutron [-] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:58:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct 14 08:58:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct 14 08:58:38 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct 14 08:58:38 compute-0 ceph-mon[74249]: pgmap v1209: 305 pgs: 305 active+clean; 214 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 244 op/s
Oct 14 08:58:38 compute-0 ceph-mon[74249]: osdmap e137: 3 total, 3 up, 3 in
Oct 14 08:58:38 compute-0 nova_compute[259627]: 2025-10-14 08:58:38.880 2 DEBUG nova.storage.rbd_utils [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] creating snapshot(snap) on rbd image(58a309a9-ebdf-4853-9550-ca13b12b33e8) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 08:58:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d03753014\x2db87c\x2d4672\x2d9d66\x2dfdc254813b6e.mount: Deactivated successfully.
Oct 14 08:58:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3480408461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.074 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.092 2 DEBUG nova.storage.rbd_utils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.095 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 214 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 192 KiB/s wr, 193 op/s
Oct 14 08:58:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2901476574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.577 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.582 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:58:39 compute-0 nova_compute[259627]:   <uuid>51c76e0f-284d-4122-83b4-32c4518b9056</uuid>
Oct 14 08:58:39 compute-0 nova_compute[259627]:   <name>instance-00000016</name>
Oct 14 08:58:39 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:58:39 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:58:39 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersAdmin275Test-server-546094612</nova:name>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:58:38</nova:creationTime>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:58:39 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:58:39 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:58:39 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:58:39 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:58:39 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:58:39 compute-0 nova_compute[259627]:         <nova:user uuid="24a7b84f511340ae859b668a0e7becf6">tempest-ServersAdmin275Test-1795131452-project-member</nova:user>
Oct 14 08:58:39 compute-0 nova_compute[259627]:         <nova:project uuid="61066a48551647f18a4cfb7a7147e7ed">tempest-ServersAdmin275Test-1795131452</nova:project>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="e2368e3e-f504-40e6-a9d3-67df18c845bb"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:58:39 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:58:39 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <system>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <entry name="serial">51c76e0f-284d-4122-83b4-32c4518b9056</entry>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <entry name="uuid">51c76e0f-284d-4122-83b4-32c4518b9056</entry>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     </system>
Oct 14 08:58:39 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:58:39 compute-0 nova_compute[259627]:   <os>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:   </os>
Oct 14 08:58:39 compute-0 nova_compute[259627]:   <features>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:   </features>
Oct 14 08:58:39 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:58:39 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:58:39 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/51c76e0f-284d-4122-83b4-32c4518b9056_disk">
Oct 14 08:58:39 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       </source>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:58:39 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/51c76e0f-284d-4122-83b4-32c4518b9056_disk.config">
Oct 14 08:58:39 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       </source>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:58:39 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/console.log" append="off"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <video>
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     </video>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:58:39 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:58:39 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:58:39 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:58:39 compute-0 nova_compute[259627]: </domain>
Oct 14 08:58:39 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.659 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.659 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.660 2 INFO nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Using config drive
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.679 2 DEBUG nova.storage.rbd_utils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.697 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'ec2_ids' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.767 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'keypairs' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct 14 08:58:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct 14 08:58:39 compute-0 ceph-mon[74249]: osdmap e138: 3 total, 3 up, 3 in
Oct 14 08:58:39 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3480408461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:39 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2901476574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:39 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.987 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-unplugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.988 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.989 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.989 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.990 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] No waiting events found dispatching network-vif-unplugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.990 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-unplugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.991 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-plugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.991 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.992 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.992 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.993 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] No waiting events found dispatching network-vif-plugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.993 2 WARNING nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received unexpected event network-vif-plugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 for instance with vm_state active and task_state deleting.
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.994 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-unplugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.994 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.995 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.995 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.996 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] No waiting events found dispatching network-vif-unplugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.996 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-unplugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.997 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-plugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.997 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.998 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.998 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:39 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.999 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] No waiting events found dispatching network-vif-plugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:39.999 2 WARNING nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received unexpected event network-vif-plugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 for instance with vm_state active and task_state deleting.
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.000 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-unplugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.000 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.001 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.001 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.001 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] No waiting events found dispatching network-vif-unplugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.002 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-unplugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.002 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-plugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.003 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.003 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.004 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.004 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] No waiting events found dispatching network-vif-plugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.005 2 WARNING nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received unexpected event network-vif-plugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 for instance with vm_state active and task_state deleting.
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.005 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-deleted-9db6eef3-e4da-4c17-91ea-1c3124906f61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.006 2 INFO nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Neutron deleted interface 9db6eef3-e4da-4c17-91ea-1c3124906f61; detaching it from the instance and deleting it from the info cache
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.006 2 DEBUG nova.network.neutron [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Updating instance_info_cache with network_info: [{"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.011 2 INFO nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Creating config drive at /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.021 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp89uk52cc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.088 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Detach interface failed, port_id=9db6eef3-e4da-4c17-91ea-1c3124906f61, reason: Instance aefbf308-7f99-4a76-8d5e-54613f6bdf83 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 08:58:40 compute-0 ovn_controller[152662]: 2025-10-14T08:58:40Z|00183|binding|INFO|Releasing lport 0616bbde-729a-4cd4-ba39-5fcdf59ece5e from this chassis (sb_readonly=0)
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.170 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp89uk52cc" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.206 2 DEBUG nova.storage.rbd_utils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.209 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.393 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.394 2 INFO nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deleting local config drive /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config because it was imported into RBD.
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.483 2 DEBUG nova.network.neutron [-] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:40 compute-0 systemd-machined[214636]: New machine qemu-27-instance-00000016.
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.503 2 INFO nova.compute.manager [-] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Took 1.75 seconds to deallocate network for instance.
Oct 14 08:58:40 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-00000016.
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.548 2 DEBUG oslo_concurrency.lockutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.548 2 DEBUG oslo_concurrency.lockutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:40 compute-0 podman[292424]: 2025-10-14 08:58:40.573962517 +0000 UTC m=+0.085233474 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 08:58:40 compute-0 podman[292423]: 2025-10-14 08:58:40.58304309 +0000 UTC m=+0.097404993 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Oct 14 08:58:40 compute-0 nova_compute[259627]: 2025-10-14 08:58:40.667 2 DEBUG oslo_concurrency.processutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:40 compute-0 ceph-mon[74249]: pgmap v1212: 305 pgs: 305 active+clean; 214 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 192 KiB/s wr, 193 op/s
Oct 14 08:58:40 compute-0 ceph-mon[74249]: osdmap e139: 3 total, 3 up, 3 in
Oct 14 08:58:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:58:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1943102798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.096 2 DEBUG oslo_concurrency.processutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.104 2 DEBUG nova.compute.provider_tree [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.125 2 DEBUG nova.scheduler.client.report [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.152 2 DEBUG oslo_concurrency.lockutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.196 2 INFO nova.scheduler.client.report [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Deleted allocations for instance aefbf308-7f99-4a76-8d5e-54613f6bdf83
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.273 2 DEBUG oslo_concurrency.lockutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.408 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 51c76e0f-284d-4122-83b4-32c4518b9056 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.408 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432321.4078386, 51c76e0f-284d-4122-83b4-32c4518b9056 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.409 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] VM Resumed (Lifecycle Event)
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.411 2 DEBUG nova.compute.manager [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.412 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.415 2 INFO nova.virt.libvirt.driver [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance spawned successfully.
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.415 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.440 2 INFO nova.virt.libvirt.driver [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Snapshot image upload complete
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.441 2 INFO nova.compute.manager [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Took 5.23 seconds to snapshot the instance on the hypervisor.
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.446 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.449 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.456 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.456 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.457 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.457 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.458 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.458 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.497 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.498 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432321.4106197, 51c76e0f-284d-4122-83b4-32c4518b9056 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.498 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] VM Started (Lifecycle Event)
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.522 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.525 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.551 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 08:58:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 180 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 7.1 MiB/s wr, 413 op/s
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.568 2 DEBUG nova.compute.manager [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.626 2 DEBUG oslo_concurrency.lockutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.627 2 DEBUG oslo_concurrency.lockutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.628 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.689 2 DEBUG oslo_concurrency.lockutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:41 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1943102798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:41 compute-0 nova_compute[259627]: 2025-10-14 08:58:41.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:58:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct 14 08:58:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct 14 08:58:42 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct 14 08:58:42 compute-0 nova_compute[259627]: 2025-10-14 08:58:42.839 2 DEBUG nova.compute.manager [req-9f2af65b-a0cd-49b0-9266-d3dc1dce6562 req-fe034459-e12e-4a0a-b17b-b1b14ef5be84 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-deleted-57b4d441-0c29-4419-b20b-3b5c4223b7a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:42 compute-0 nova_compute[259627]: 2025-10-14 08:58:42.840 2 DEBUG nova.compute.manager [req-9f2af65b-a0cd-49b0-9266-d3dc1dce6562 req-fe034459-e12e-4a0a-b17b-b1b14ef5be84 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-deleted-685df8a6-7b64-441e-9a56-4ede8db5faa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000694346938692453 of space, bias 1.0, pg target 0.2083040816077359 quantized to 32 (current 32)
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010121097056716806 of space, bias 1.0, pg target 0.3036329117015042 quantized to 32 (current 32)
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:58:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.271 2 DEBUG oslo_concurrency.lockutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "2826d9ce-d739-49a1-abfa-80cee62173fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.272 2 DEBUG oslo_concurrency.lockutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.273 2 DEBUG oslo_concurrency.lockutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.274 2 DEBUG oslo_concurrency.lockutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.274 2 DEBUG oslo_concurrency.lockutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.276 2 INFO nova.compute.manager [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Terminating instance
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.278 2 DEBUG nova.compute.manager [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:58:43 compute-0 kernel: tap146ca52f-0b (unregistering): left promiscuous mode
Oct 14 08:58:43 compute-0 NetworkManager[44885]: <info>  [1760432323.3184] device (tap146ca52f-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:58:43 compute-0 ovn_controller[152662]: 2025-10-14T08:58:43Z|00184|binding|INFO|Releasing lport 146ca52f-0b4f-46f0-9153-1120bf1c9e4e from this chassis (sb_readonly=0)
Oct 14 08:58:43 compute-0 ovn_controller[152662]: 2025-10-14T08:58:43Z|00185|binding|INFO|Setting lport 146ca52f-0b4f-46f0-9153-1120bf1c9e4e down in Southbound
Oct 14 08:58:43 compute-0 ovn_controller[152662]: 2025-10-14T08:58:43Z|00186|binding|INFO|Removing iface tap146ca52f-0b ovn-installed in OVS
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.342 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:a1:5d 10.100.0.10'], port_security=['fa:16:3e:c7:a1:5d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2826d9ce-d739-49a1-abfa-80cee62173fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=146ca52f-0b4f-46f0-9153-1120bf1c9e4e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:58:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.343 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 146ca52f-0b4f-46f0-9153-1120bf1c9e4e in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a unbound from our chassis
Oct 14 08:58:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.343 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:58:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.345 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c67909b3-3304-4995-9006-971f1dc6183e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.345 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a namespace which is not needed anymore
Oct 14 08:58:43 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000018.scope: Deactivated successfully.
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:43 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000018.scope: Consumed 3.455s CPU time.
Oct 14 08:58:43 compute-0 systemd-machined[214636]: Machine qemu-25-instance-00000018 terminated.
Oct 14 08:58:43 compute-0 NetworkManager[44885]: <info>  [1760432323.4956] manager: (tap146ca52f-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Oct 14 08:58:43 compute-0 ceph-mon[74249]: pgmap v1214: 305 pgs: 305 active+clean; 180 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 7.1 MiB/s wr, 413 op/s
Oct 14 08:58:43 compute-0 ceph-mon[74249]: osdmap e140: 3 total, 3 up, 3 in
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.514 2 INFO nova.virt.libvirt.driver [-] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Instance destroyed successfully.
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.515 2 DEBUG nova.objects.instance [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'resources' on Instance uuid 2826d9ce-d739-49a1-abfa-80cee62173fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.537 2 DEBUG nova.virt.libvirt.vif [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1877571750',display_name='tempest-ImagesTestJSON-server-1877571750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1877571750',id=24,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:58:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-3vi6yohy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:58:41Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=2826d9ce-d739-49a1-abfa-80cee62173fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.537 2 DEBUG nova.network.os_vif_util [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.538 2 DEBUG nova.network.os_vif_util [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=146ca52f-0b4f-46f0-9153-1120bf1c9e4e,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146ca52f-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.538 2 DEBUG os_vif [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=146ca52f-0b4f-46f0-9153-1120bf1c9e4e,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146ca52f-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.540 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap146ca52f-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.548 2 INFO os_vif [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=146ca52f-0b4f-46f0-9153-1120bf1c9e4e,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146ca52f-0b')
Oct 14 08:58:43 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[291154]: [NOTICE]   (291158) : haproxy version is 2.8.14-c23fe91
Oct 14 08:58:43 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[291154]: [NOTICE]   (291158) : path to executable is /usr/sbin/haproxy
Oct 14 08:58:43 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[291154]: [WARNING]  (291158) : Exiting Master process...
Oct 14 08:58:43 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[291154]: [WARNING]  (291158) : Exiting Master process...
Oct 14 08:58:43 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[291154]: [ALERT]    (291158) : Current worker (291160) exited with code 143 (Terminated)
Oct 14 08:58:43 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[291154]: [WARNING]  (291158) : All workers exited. Exiting... (0)
Oct 14 08:58:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 180 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 7.4 MiB/s wr, 432 op/s
Oct 14 08:58:43 compute-0 systemd[1]: libpod-66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc.scope: Deactivated successfully.
Oct 14 08:58:43 compute-0 podman[292563]: 2025-10-14 08:58:43.564895492 +0000 UTC m=+0.124840156 container died 66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 08:58:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc-userdata-shm.mount: Deactivated successfully.
Oct 14 08:58:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-48c703f783893fc5cf0d9cbca7dbc962deb1b04986a44b3368c6b7cd5b11d993-merged.mount: Deactivated successfully.
Oct 14 08:58:43 compute-0 podman[292563]: 2025-10-14 08:58:43.605851878 +0000 UTC m=+0.165796512 container cleanup 66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 14 08:58:43 compute-0 systemd[1]: libpod-conmon-66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc.scope: Deactivated successfully.
Oct 14 08:58:43 compute-0 podman[292620]: 2025-10-14 08:58:43.686066438 +0000 UTC m=+0.053973777 container remove 66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:58:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.693 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cc636168-fe0f-4a23-adf9-e1ea25ba9885]: (4, ('Tue Oct 14 08:58:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a (66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc)\n66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc\nTue Oct 14 08:58:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a (66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc)\n66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.695 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe0d9f4-a2dc-412c-8732-da9f87ea12de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.695 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:43 compute-0 kernel: tap2322cf7a-00: left promiscuous mode
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.722 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb388c4-31a7-455f-ab53-ab7775c355fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.759 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[266ca164-e325-4f72-9629-8bbcc30397cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.760 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3feb87-13ac-469d-84b0-c9890f242c6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.775 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f9fcc8c7-cc78-487c-a6e3-fae2778649de]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608559, 'reachable_time': 18363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292635, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d2322cf7a\x2d0090\x2d40fa\x2da558\x2d42d84cc6fc2a.mount: Deactivated successfully.
Oct 14 08:58:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.780 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:58:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.780 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[45cbd338-99e1-4b13-9e48-b49f670811c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.936 2 INFO nova.compute.manager [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Rebuilding instance
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.945 2 INFO nova.virt.libvirt.driver [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Deleting instance files /var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb_del
Oct 14 08:58:43 compute-0 nova_compute[259627]: 2025-10-14 08:58:43.947 2 INFO nova.virt.libvirt.driver [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Deletion of /var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb_del complete
Oct 14 08:58:44 compute-0 nova_compute[259627]: 2025-10-14 08:58:44.023 2 INFO nova.compute.manager [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 14 08:58:44 compute-0 nova_compute[259627]: 2025-10-14 08:58:44.024 2 DEBUG oslo.service.loopingcall [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:58:44 compute-0 nova_compute[259627]: 2025-10-14 08:58:44.024 2 DEBUG nova.compute.manager [-] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:58:44 compute-0 nova_compute[259627]: 2025-10-14 08:58:44.024 2 DEBUG nova.network.neutron [-] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:58:44 compute-0 nova_compute[259627]: 2025-10-14 08:58:44.247 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:44 compute-0 nova_compute[259627]: 2025-10-14 08:58:44.263 2 DEBUG nova.compute.manager [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:44 compute-0 nova_compute[259627]: 2025-10-14 08:58:44.309 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lazy-loading 'pci_requests' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:44 compute-0 nova_compute[259627]: 2025-10-14 08:58:44.327 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lazy-loading 'pci_devices' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:44 compute-0 nova_compute[259627]: 2025-10-14 08:58:44.340 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lazy-loading 'resources' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:44 compute-0 nova_compute[259627]: 2025-10-14 08:58:44.352 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lazy-loading 'migration_context' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:44 compute-0 nova_compute[259627]: 2025-10-14 08:58:44.366 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 08:58:44 compute-0 nova_compute[259627]: 2025-10-14 08:58:44.369 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 08:58:44 compute-0 ceph-mon[74249]: pgmap v1216: 305 pgs: 305 active+clean; 180 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 7.4 MiB/s wr, 432 op/s
Oct 14 08:58:44 compute-0 nova_compute[259627]: 2025-10-14 08:58:44.999 2 DEBUG nova.compute.manager [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Received event network-vif-unplugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:45 compute-0 nova_compute[259627]: 2025-10-14 08:58:45.000 2 DEBUG oslo_concurrency.lockutils [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:45 compute-0 nova_compute[259627]: 2025-10-14 08:58:45.000 2 DEBUG oslo_concurrency.lockutils [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:45 compute-0 nova_compute[259627]: 2025-10-14 08:58:45.001 2 DEBUG oslo_concurrency.lockutils [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:45 compute-0 nova_compute[259627]: 2025-10-14 08:58:45.001 2 DEBUG nova.compute.manager [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] No waiting events found dispatching network-vif-unplugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:58:45 compute-0 nova_compute[259627]: 2025-10-14 08:58:45.002 2 DEBUG nova.compute.manager [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Received event network-vif-unplugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 08:58:45 compute-0 nova_compute[259627]: 2025-10-14 08:58:45.002 2 DEBUG nova.compute.manager [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Received event network-vif-plugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:45 compute-0 nova_compute[259627]: 2025-10-14 08:58:45.003 2 DEBUG oslo_concurrency.lockutils [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:45 compute-0 nova_compute[259627]: 2025-10-14 08:58:45.003 2 DEBUG oslo_concurrency.lockutils [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:45 compute-0 nova_compute[259627]: 2025-10-14 08:58:45.004 2 DEBUG oslo_concurrency.lockutils [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:45 compute-0 nova_compute[259627]: 2025-10-14 08:58:45.004 2 DEBUG nova.compute.manager [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] No waiting events found dispatching network-vif-plugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:58:45 compute-0 nova_compute[259627]: 2025-10-14 08:58:45.005 2 WARNING nova.compute.manager [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Received unexpected event network-vif-plugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e for instance with vm_state paused and task_state deleting.
Oct 14 08:58:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:45.428 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 6.3 MiB/s wr, 602 op/s
Oct 14 08:58:45 compute-0 nova_compute[259627]: 2025-10-14 08:58:45.675 2 DEBUG nova.network.neutron [-] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:45 compute-0 nova_compute[259627]: 2025-10-14 08:58:45.707 2 INFO nova.compute.manager [-] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Took 1.68 seconds to deallocate network for instance.
Oct 14 08:58:45 compute-0 nova_compute[259627]: 2025-10-14 08:58:45.765 2 DEBUG oslo_concurrency.lockutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:45 compute-0 nova_compute[259627]: 2025-10-14 08:58:45.766 2 DEBUG oslo_concurrency.lockutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:45 compute-0 nova_compute[259627]: 2025-10-14 08:58:45.777 2 DEBUG nova.compute.manager [req-a4c94432-1834-43a2-8202-15369689fbdc req-61dc2ddc-dbce-4055-825c-470b65b877b3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Received event network-vif-deleted-146ca52f-0b4f-46f0-9153-1120bf1c9e4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:45 compute-0 nova_compute[259627]: 2025-10-14 08:58:45.834 2 DEBUG oslo_concurrency.processutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:58:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4192626390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:46 compute-0 nova_compute[259627]: 2025-10-14 08:58:46.277 2 DEBUG oslo_concurrency.processutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:46 compute-0 nova_compute[259627]: 2025-10-14 08:58:46.286 2 DEBUG nova.compute.provider_tree [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:58:46 compute-0 nova_compute[259627]: 2025-10-14 08:58:46.309 2 DEBUG nova.scheduler.client.report [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:58:46 compute-0 nova_compute[259627]: 2025-10-14 08:58:46.336 2 DEBUG oslo_concurrency.lockutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:46 compute-0 nova_compute[259627]: 2025-10-14 08:58:46.371 2 INFO nova.scheduler.client.report [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Deleted allocations for instance 2826d9ce-d739-49a1-abfa-80cee62173fb
Oct 14 08:58:46 compute-0 nova_compute[259627]: 2025-10-14 08:58:46.448 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:46 compute-0 nova_compute[259627]: 2025-10-14 08:58:46.448 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:46 compute-0 nova_compute[259627]: 2025-10-14 08:58:46.453 2 DEBUG oslo_concurrency.lockutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:46 compute-0 nova_compute[259627]: 2025-10-14 08:58:46.475 2 DEBUG nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:58:46 compute-0 nova_compute[259627]: 2025-10-14 08:58:46.529 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:46 compute-0 nova_compute[259627]: 2025-10-14 08:58:46.529 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:46 compute-0 nova_compute[259627]: 2025-10-14 08:58:46.537 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:58:46 compute-0 nova_compute[259627]: 2025-10-14 08:58:46.538 2 INFO nova.compute.claims [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:58:46 compute-0 ceph-mon[74249]: pgmap v1217: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 6.3 MiB/s wr, 602 op/s
Oct 14 08:58:46 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4192626390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:46 compute-0 nova_compute[259627]: 2025-10-14 08:58:46.644 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:46 compute-0 nova_compute[259627]: 2025-10-14 08:58:46.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:58:47 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1626172182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.105 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.114 2 DEBUG nova.compute.provider_tree [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.133 2 DEBUG nova.scheduler.client.report [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.158 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.159 2 DEBUG nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.207 2 DEBUG nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.209 2 DEBUG nova.network.neutron [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.233 2 INFO nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.255 2 DEBUG nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.334 2 DEBUG nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.337 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.337 2 INFO nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Creating image(s)
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.370 2 DEBUG nova.storage.rbd_utils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.405 2 DEBUG nova.storage.rbd_utils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.438 2 DEBUG nova.storage.rbd_utils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.452 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:58:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct 14 08:58:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct 14 08:58:47 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct 14 08:58:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 5.5 MiB/s wr, 525 op/s
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.557 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.558 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.559 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.559 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:47 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1626172182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:47 compute-0 ceph-mon[74249]: osdmap e141: 3 total, 3 up, 3 in
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.597 2 DEBUG nova.storage.rbd_utils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.601 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.637 2 DEBUG nova.policy [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a217215c39e41fea2323ff7b3b4e6aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.873 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:47 compute-0 nova_compute[259627]: 2025-10-14 08:58:47.925 2 DEBUG nova.storage.rbd_utils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] resizing rbd image 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:58:48 compute-0 nova_compute[259627]: 2025-10-14 08:58:48.017 2 DEBUG nova.objects.instance [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'migration_context' on Instance uuid 3f3d9640-8200-45d8-ac25-bbc5d016d49f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:48 compute-0 nova_compute[259627]: 2025-10-14 08:58:48.032 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:58:48 compute-0 nova_compute[259627]: 2025-10-14 08:58:48.032 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Ensure instance console log exists: /var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:58:48 compute-0 nova_compute[259627]: 2025-10-14 08:58:48.033 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:48 compute-0 nova_compute[259627]: 2025-10-14 08:58:48.033 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:48 compute-0 nova_compute[259627]: 2025-10-14 08:58:48.033 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:48 compute-0 nova_compute[259627]: 2025-10-14 08:58:48.225 2 DEBUG nova.network.neutron [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Successfully created port: bcdd5079-efdb-47f7-99b0-21394b1d16e2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:58:48 compute-0 nova_compute[259627]: 2025-10-14 08:58:48.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:48 compute-0 ceph-mon[74249]: pgmap v1219: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 5.5 MiB/s wr, 525 op/s
Oct 14 08:58:48 compute-0 nova_compute[259627]: 2025-10-14 08:58:48.938 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432313.9370747, a071857d-db87-4931-95ad-f8c627f74160 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:48 compute-0 nova_compute[259627]: 2025-10-14 08:58:48.938 2 INFO nova.compute.manager [-] [instance: a071857d-db87-4931-95ad-f8c627f74160] VM Stopped (Lifecycle Event)
Oct 14 08:58:48 compute-0 nova_compute[259627]: 2025-10-14 08:58:48.961 2 DEBUG nova.compute.manager [None req-dd6757a8-c68d-4c82-ad37-e2076fc276b3 - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:49 compute-0 nova_compute[259627]: 2025-10-14 08:58:49.500 2 DEBUG nova.network.neutron [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Successfully updated port: bcdd5079-efdb-47f7-99b0-21394b1d16e2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:58:49 compute-0 nova_compute[259627]: 2025-10-14 08:58:49.515 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "refresh_cache-3f3d9640-8200-45d8-ac25-bbc5d016d49f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:49 compute-0 nova_compute[259627]: 2025-10-14 08:58:49.516 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquired lock "refresh_cache-3f3d9640-8200-45d8-ac25-bbc5d016d49f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:49 compute-0 nova_compute[259627]: 2025-10-14 08:58:49.516 2 DEBUG nova.network.neutron [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:58:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 197 op/s
Oct 14 08:58:49 compute-0 nova_compute[259627]: 2025-10-14 08:58:49.654 2 DEBUG nova.network.neutron [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:58:49 compute-0 nova_compute[259627]: 2025-10-14 08:58:49.813 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:49 compute-0 nova_compute[259627]: 2025-10-14 08:58:49.814 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:49 compute-0 nova_compute[259627]: 2025-10-14 08:58:49.833 2 DEBUG nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:58:49 compute-0 nova_compute[259627]: 2025-10-14 08:58:49.914 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:49 compute-0 nova_compute[259627]: 2025-10-14 08:58:49.915 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:49 compute-0 nova_compute[259627]: 2025-10-14 08:58:49.925 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:58:49 compute-0 nova_compute[259627]: 2025-10-14 08:58:49.926 2 INFO nova.compute.claims [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.060 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.424 2 DEBUG nova.compute.manager [req-2ceacc9e-f9ab-4e1a-a93d-ba939f4ece37 req-e070dd0f-c5a1-412a-bf99-55b9e31a08d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Received event network-changed-bcdd5079-efdb-47f7-99b0-21394b1d16e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.425 2 DEBUG nova.compute.manager [req-2ceacc9e-f9ab-4e1a-a93d-ba939f4ece37 req-e070dd0f-c5a1-412a-bf99-55b9e31a08d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Refreshing instance network info cache due to event network-changed-bcdd5079-efdb-47f7-99b0-21394b1d16e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.426 2 DEBUG oslo_concurrency.lockutils [req-2ceacc9e-f9ab-4e1a-a93d-ba939f4ece37 req-e070dd0f-c5a1-412a-bf99-55b9e31a08d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3f3d9640-8200-45d8-ac25-bbc5d016d49f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:58:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1160813913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.507 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.517 2 DEBUG nova.compute.provider_tree [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.533 2 DEBUG nova.scheduler.client.report [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.556 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.558 2 DEBUG nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:58:50 compute-0 ceph-mon[74249]: pgmap v1220: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 197 op/s
Oct 14 08:58:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1160813913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.602 2 DEBUG nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.603 2 DEBUG nova.network.neutron [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.621 2 INFO nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.638 2 DEBUG nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.711 2 DEBUG nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.713 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.713 2 INFO nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Creating image(s)
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.737 2 DEBUG nova.storage.rbd_utils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.759 2 DEBUG nova.storage.rbd_utils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.784 2 DEBUG nova.storage.rbd_utils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.787 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.849 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.850 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.850 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.851 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.871 2 DEBUG nova.storage.rbd_utils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.874 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.926 2 DEBUG nova.network.neutron [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Updating instance_info_cache with network_info: [{"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.930 2 DEBUG nova.policy [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56f2f9bf9b064a208d9ce5fe732c4ff7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.947 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Releasing lock "refresh_cache-3f3d9640-8200-45d8-ac25-bbc5d016d49f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.948 2 DEBUG nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Instance network_info: |[{"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.949 2 DEBUG oslo_concurrency.lockutils [req-2ceacc9e-f9ab-4e1a-a93d-ba939f4ece37 req-e070dd0f-c5a1-412a-bf99-55b9e31a08d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3f3d9640-8200-45d8-ac25-bbc5d016d49f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.949 2 DEBUG nova.network.neutron [req-2ceacc9e-f9ab-4e1a-a93d-ba939f4ece37 req-e070dd0f-c5a1-412a-bf99-55b9e31a08d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Refreshing network info cache for port bcdd5079-efdb-47f7-99b0-21394b1d16e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.952 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Start _get_guest_xml network_info=[{"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.956 2 WARNING nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.963 2 DEBUG nova.virt.libvirt.host [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.964 2 DEBUG nova.virt.libvirt.host [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.969 2 DEBUG nova.virt.libvirt.host [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.969 2 DEBUG nova.virt.libvirt.host [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.970 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.970 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.971 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.971 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.971 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.971 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.972 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.972 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.972 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.973 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.973 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.973 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:58:50 compute-0 nova_compute[259627]: 2025-10-14 08:58:50.976 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.119 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.185 2 DEBUG nova.storage.rbd_utils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] resizing rbd image 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.273 2 DEBUG nova.objects.instance [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'migration_context' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.301 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.302 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Ensure instance console log exists: /var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.302 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.303 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.303 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.400 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.401 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:51 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2447736462' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.418 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.426 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.449 2 DEBUG nova.storage.rbd_utils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.454 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.516 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.517 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.526 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.526 2 INFO nova.compute.claims [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:58:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 134 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 210 op/s
Oct 14 08:58:51 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2447736462' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.673 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.897 2 DEBUG nova.network.neutron [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Successfully created port: 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:51 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2673038839' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.983 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.985 2 DEBUG nova.virt.libvirt.vif [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-107832270',display_name='tempest-ImagesTestJSON-server-107832270',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-107832270',id=26,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-bhq8zg2j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:47Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=3f3d9640-8200-45d8-ac25-bbc5d016d49f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.985 2 DEBUG nova.network.os_vif_util [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.986 2 DEBUG nova.network.os_vif_util [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=bcdd5079-efdb-47f7-99b0-21394b1d16e2,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcdd5079-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:51 compute-0 nova_compute[259627]: 2025-10-14 08:58:51.987 2 DEBUG nova.objects.instance [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'pci_devices' on Instance uuid 3f3d9640-8200-45d8-ac25-bbc5d016d49f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.009 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:58:52 compute-0 nova_compute[259627]:   <uuid>3f3d9640-8200-45d8-ac25-bbc5d016d49f</uuid>
Oct 14 08:58:52 compute-0 nova_compute[259627]:   <name>instance-0000001a</name>
Oct 14 08:58:52 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:58:52 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:58:52 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <nova:name>tempest-ImagesTestJSON-server-107832270</nova:name>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:58:50</nova:creationTime>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:58:52 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:58:52 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:58:52 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:58:52 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:58:52 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:58:52 compute-0 nova_compute[259627]:         <nova:user uuid="3a217215c39e41fea2323ff7b3b4e6aa">tempest-ImagesTestJSON-168259448-project-member</nova:user>
Oct 14 08:58:52 compute-0 nova_compute[259627]:         <nova:project uuid="0d87d2d744db48dc8b32bb4bf6847fce">tempest-ImagesTestJSON-168259448</nova:project>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:58:52 compute-0 nova_compute[259627]:         <nova:port uuid="bcdd5079-efdb-47f7-99b0-21394b1d16e2">
Oct 14 08:58:52 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:58:52 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:58:52 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <system>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <entry name="serial">3f3d9640-8200-45d8-ac25-bbc5d016d49f</entry>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <entry name="uuid">3f3d9640-8200-45d8-ac25-bbc5d016d49f</entry>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     </system>
Oct 14 08:58:52 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:58:52 compute-0 nova_compute[259627]:   <os>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:   </os>
Oct 14 08:58:52 compute-0 nova_compute[259627]:   <features>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:   </features>
Oct 14 08:58:52 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:58:52 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:58:52 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk">
Oct 14 08:58:52 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       </source>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:58:52 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk.config">
Oct 14 08:58:52 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       </source>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:58:52 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:da:fb:42"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <target dev="tapbcdd5079-ef"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f/console.log" append="off"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <video>
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     </video>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:58:52 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:58:52 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:58:52 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:58:52 compute-0 nova_compute[259627]: </domain>
Oct 14 08:58:52 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.015 2 DEBUG nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Preparing to wait for external event network-vif-plugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.016 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.016 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.016 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.017 2 DEBUG nova.virt.libvirt.vif [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-107832270',display_name='tempest-ImagesTestJSON-server-107832270',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-107832270',id=26,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-bhq8zg2j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:47Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=3f3d9640-8200-45d8-ac25-bbc5d016d49f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.017 2 DEBUG nova.network.os_vif_util [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.019 2 DEBUG nova.network.os_vif_util [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=bcdd5079-efdb-47f7-99b0-21394b1d16e2,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcdd5079-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.019 2 DEBUG os_vif [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=bcdd5079-efdb-47f7-99b0-21394b1d16e2,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcdd5079-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.020 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.021 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.025 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbcdd5079-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.025 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbcdd5079-ef, col_values=(('external_ids', {'iface-id': 'bcdd5079-efdb-47f7-99b0-21394b1d16e2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:fb:42', 'vm-uuid': '3f3d9640-8200-45d8-ac25-bbc5d016d49f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:52 compute-0 NetworkManager[44885]: <info>  [1760432332.0278] manager: (tapbcdd5079-ef): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.034 2 INFO os_vif [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=bcdd5079-efdb-47f7-99b0-21394b1d16e2,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcdd5079-ef')
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.106 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.106 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.106 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No VIF found with MAC fa:16:3e:da:fb:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.107 2 INFO nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Using config drive
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.130 2 DEBUG nova.storage.rbd_utils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:58:52 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4022107505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.222 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "eb820455-d45c-4331-9363-124f11537f52" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.222 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.238 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.244 2 DEBUG nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.249 2 DEBUG nova.compute.provider_tree [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.270 2 DEBUG nova.scheduler.client.report [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.293 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.294 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.320 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.321 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.328 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.328 2 INFO nova.compute.claims [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.343 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.344 2 DEBUG nova.network.neutron [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.371 2 INFO nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.391 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.424 2 DEBUG nova.network.neutron [req-2ceacc9e-f9ab-4e1a-a93d-ba939f4ece37 req-e070dd0f-c5a1-412a-bf99-55b9e31a08d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Updated VIF entry in instance network info cache for port bcdd5079-efdb-47f7-99b0-21394b1d16e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.424 2 DEBUG nova.network.neutron [req-2ceacc9e-f9ab-4e1a-a93d-ba939f4ece37 req-e070dd0f-c5a1-412a-bf99-55b9e31a08d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Updating instance_info_cache with network_info: [{"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.446 2 DEBUG oslo_concurrency.lockutils [req-2ceacc9e-f9ab-4e1a-a93d-ba939f4ece37 req-e070dd0f-c5a1-412a-bf99-55b9e31a08d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3f3d9640-8200-45d8-ac25-bbc5d016d49f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.491 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.493 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.493 2 INFO nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Creating image(s)
Oct 14 08:58:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.515 2 DEBUG nova.storage.rbd_utils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image 654413e6-01cd-4e54-a271-6b515a8561e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.539 2 DEBUG nova.storage.rbd_utils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image 654413e6-01cd-4e54-a271-6b515a8561e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.563 2 DEBUG nova.storage.rbd_utils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image 654413e6-01cd-4e54-a271-6b515a8561e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.566 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:52 compute-0 ceph-mon[74249]: pgmap v1221: 305 pgs: 305 active+clean; 134 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 210 op/s
Oct 14 08:58:52 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2673038839' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:52 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4022107505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.608 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.646 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.647 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.647 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.648 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.668 2 DEBUG nova.storage.rbd_utils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image 654413e6-01cd-4e54-a271-6b515a8561e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.671 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 654413e6-01cd-4e54-a271-6b515a8561e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.881 2 INFO nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Creating config drive at /var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f/disk.config
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.885 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxg2gle5n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.920 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 654413e6-01cd-4e54-a271-6b515a8561e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.965 2 DEBUG nova.policy [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5aacb60ad29c418c9161e71bb72da036', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.971 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432317.9163482, aefbf308-7f99-4a76-8d5e-54613f6bdf83 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:52 compute-0 nova_compute[259627]: 2025-10-14 08:58:52.972 2 INFO nova.compute.manager [-] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] VM Stopped (Lifecycle Event)
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.012 2 DEBUG nova.compute.manager [None req-6dab8613-8a65-4fe2-81d4-3029e3fe75b3 - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.016 2 DEBUG nova.storage.rbd_utils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] resizing rbd image 654413e6-01cd-4e54-a271-6b515a8561e6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.041 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxg2gle5n" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.062 2 DEBUG nova.storage.rbd_utils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.065 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f/disk.config 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:58:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3188578432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.097 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.104 2 DEBUG nova.compute.provider_tree [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.140 2 DEBUG nova.scheduler.client.report [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.148 2 DEBUG nova.objects.instance [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lazy-loading 'migration_context' on Instance uuid 654413e6-01cd-4e54-a271-6b515a8561e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.168 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.169 2 DEBUG nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.171 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.172 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Ensure instance console log exists: /var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.172 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.172 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.172 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.245 2 DEBUG nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.245 2 DEBUG nova.network.neutron [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.249 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f/disk.config 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.249 2 INFO nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Deleting local config drive /var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f/disk.config because it was imported into RBD.
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.288 2 INFO nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.307 2 DEBUG nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:58:53 compute-0 kernel: tapbcdd5079-ef: entered promiscuous mode
Oct 14 08:58:53 compute-0 ovn_controller[152662]: 2025-10-14T08:58:53Z|00187|binding|INFO|Claiming lport bcdd5079-efdb-47f7-99b0-21394b1d16e2 for this chassis.
Oct 14 08:58:53 compute-0 ovn_controller[152662]: 2025-10-14T08:58:53Z|00188|binding|INFO|bcdd5079-efdb-47f7-99b0-21394b1d16e2: Claiming fa:16:3e:da:fb:42 10.100.0.3
Oct 14 08:58:53 compute-0 NetworkManager[44885]: <info>  [1760432333.3168] manager: (tapbcdd5079-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.324 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:fb:42 10.100.0.3'], port_security=['fa:16:3e:da:fb:42 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3f3d9640-8200-45d8-ac25-bbc5d016d49f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bcdd5079-efdb-47f7-99b0-21394b1d16e2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.327 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bcdd5079-efdb-47f7-99b0-21394b1d16e2 in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a bound to our chassis
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.330 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.339 2 DEBUG nova.network.neutron [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Successfully updated port: 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:53 compute-0 ovn_controller[152662]: 2025-10-14T08:58:53Z|00189|binding|INFO|Setting lport bcdd5079-efdb-47f7-99b0-21394b1d16e2 ovn-installed in OVS
Oct 14 08:58:53 compute-0 ovn_controller[152662]: 2025-10-14T08:58:53Z|00190|binding|INFO|Setting lport bcdd5079-efdb-47f7-99b0-21394b1d16e2 up in Southbound
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.346 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[16e34ab0-d4c0-4a69-955c-2801729ecb54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.346 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2322cf7a-01 in ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.349 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2322cf7a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.349 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0a5f4a2f-7ff9-4941-b948-af37e2b6fd33]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.350 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f4adc0-119d-4b86-8ebc-ea39f72d1f71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.358 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.358 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquired lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.358 2 DEBUG nova.network.neutron [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:58:53 compute-0 systemd-machined[214636]: New machine qemu-28-instance-0000001a.
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.365 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[b993204b-6414-48ab-a08d-56075fcb1f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:53 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-0000001a.
Oct 14 08:58:53 compute-0 systemd-udevd[293384]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:58:53 compute-0 NetworkManager[44885]: <info>  [1760432333.3910] device (tapbcdd5079-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:58:53 compute-0 NetworkManager[44885]: <info>  [1760432333.3918] device (tapbcdd5079-ef): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.395 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b70e17e1-d948-42cf-a043-33985fa75561]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.400 2 DEBUG nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.402 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.402 2 INFO nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Creating image(s)
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.434 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4340e5fd-c94a-418d-bf78-de6018400c92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.434 2 DEBUG nova.storage.rbd_utils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image eb820455-d45c-4331-9363-124f11537f52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:53 compute-0 systemd-udevd[293386]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.440 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5011ca39-1b82-4cb0-84ab-52c45e79f492]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:53 compute-0 NetworkManager[44885]: <info>  [1760432333.4423] manager: (tap2322cf7a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.490 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[24bc9399-46f8-4bb0-966c-c7cd1448dbb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.493 2 DEBUG nova.storage.rbd_utils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image eb820455-d45c-4331-9363-124f11537f52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.499 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[dab3a45d-5398-4b36-8655-2dfce75e9f66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:53 compute-0 NetworkManager[44885]: <info>  [1760432333.5339] device (tap2322cf7a-00): carrier: link connected
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.547 2 DEBUG nova.storage.rbd_utils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image eb820455-d45c-4331-9363-124f11537f52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.547 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fa7d102e-ffc5-43da-af42-78bd3bbb332c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 134 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.559 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.574 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9c55d118-0756-4c77-ace9-60ca6d3d3cea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611229, 'reachable_time': 23663, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293468, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.597 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[da71f4a1-7d4e-447f-8ef4-0957b8d81259]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed3:956c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611229, 'tstamp': 611229}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293470, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:53 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3188578432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.606 2 DEBUG nova.policy [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56f2f9bf9b064a208d9ce5fe732c4ff7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.617 2 DEBUG nova.network.neutron [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.617 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cc6cd382-b36a-4805-8700-316e2e7e1141]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611229, 'reachable_time': 23663, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293471, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.627 2 DEBUG nova.compute.manager [req-7056293d-66ed-4272-8141-6bdaa61933f3 req-015447ea-74e5-4434-b184-7bffb28ff84f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-changed-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.627 2 DEBUG nova.compute.manager [req-7056293d-66ed-4272-8141-6bdaa61933f3 req-015447ea-74e5-4434-b184-7bffb28ff84f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Refreshing instance network info cache due to event network-changed-0b16cd6a-fe42-4a54-8bbe-810915fcaa93. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.627 2 DEBUG oslo_concurrency.lockutils [req-7056293d-66ed-4272-8141-6bdaa61933f3 req-015447ea-74e5-4434-b184-7bffb28ff84f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.629 2 DEBUG nova.compute.manager [req-9ec8912d-27ad-4ffc-9ac5-900c44e8958c req-11d3b67e-3c48-4374-9104-c54e1f71f306 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Received event network-vif-plugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.630 2 DEBUG oslo_concurrency.lockutils [req-9ec8912d-27ad-4ffc-9ac5-900c44e8958c req-11d3b67e-3c48-4374-9104-c54e1f71f306 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.630 2 DEBUG oslo_concurrency.lockutils [req-9ec8912d-27ad-4ffc-9ac5-900c44e8958c req-11d3b67e-3c48-4374-9104-c54e1f71f306 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.630 2 DEBUG oslo_concurrency.lockutils [req-9ec8912d-27ad-4ffc-9ac5-900c44e8958c req-11d3b67e-3c48-4374-9104-c54e1f71f306 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.631 2 DEBUG nova.compute.manager [req-9ec8912d-27ad-4ffc-9ac5-900c44e8958c req-11d3b67e-3c48-4374-9104-c54e1f71f306 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Processing event network-vif-plugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.646 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.647 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.648 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.648 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.654 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[252727b9-fa8f-45df-936e-48d2bc9683eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.674 2 DEBUG nova.storage.rbd_utils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image eb820455-d45c-4331-9363-124f11537f52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.684 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf eb820455-d45c-4331-9363-124f11537f52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.716 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[73310eb9-5682-469a-8458-4557da0bbf1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.717 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.718 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.718 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2322cf7a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:53 compute-0 kernel: tap2322cf7a-00: entered promiscuous mode
Oct 14 08:58:53 compute-0 NetworkManager[44885]: <info>  [1760432333.7205] manager: (tap2322cf7a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.725 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2322cf7a-00, col_values=(('external_ids', {'iface-id': '0616bbde-729a-4cd4-ba39-5fcdf59ece5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:53 compute-0 ovn_controller[152662]: 2025-10-14T08:58:53Z|00191|binding|INFO|Releasing lport 0616bbde-729a-4cd4-ba39-5fcdf59ece5e from this chassis (sb_readonly=0)
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.729 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.730 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1b40cebe-efd4-465b-9633-9db72da7170e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.731 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:58:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.731 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'env', 'PROCESS_TAG=haproxy-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2322cf7a-0090-40fa-a558-42d84cc6fc2a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.923 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf eb820455-d45c-4331-9363-124f11537f52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:53 compute-0 nova_compute[259627]: 2025-10-14 08:58:53.976 2 DEBUG nova.storage.rbd_utils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] resizing rbd image eb820455-d45c-4331-9363-124f11537f52_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.062 2 DEBUG nova.objects.instance [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'migration_context' on Instance uuid eb820455-d45c-4331-9363-124f11537f52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.088 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.088 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Ensure instance console log exists: /var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.088 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.089 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.089 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:54 compute-0 podman[293655]: 2025-10-14 08:58:54.145856027 +0000 UTC m=+0.047585689 container create 7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.182 2 DEBUG nova.network.neutron [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Successfully created port: 902a062a-858b-4495-936b-47a675567467 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:58:54 compute-0 systemd[1]: Started libpod-conmon-7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb.scope.
Oct 14 08:58:54 compute-0 podman[293655]: 2025-10-14 08:58:54.120238508 +0000 UTC m=+0.021968220 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:58:54 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:58:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7281a4ba77e5cabb29255d89d41089125e8d4a234511b13cf81fb479f2c3742c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:58:54 compute-0 podman[293655]: 2025-10-14 08:58:54.241173878 +0000 UTC m=+0.142903540 container init 7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 08:58:54 compute-0 podman[293655]: 2025-10-14 08:58:54.246617512 +0000 UTC m=+0.148347174 container start 7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.262 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.262 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:54 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[293670]: [NOTICE]   (293674) : New worker (293676) forked
Oct 14 08:58:54 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[293670]: [NOTICE]   (293674) : Loading success.
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.283 2 DEBUG nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.492 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.493 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.501 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.502 2 INFO nova.compute.claims [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.536 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432334.535851, 3f3d9640-8200-45d8-ac25-bbc5d016d49f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.536 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] VM Started (Lifecycle Event)
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.537 2 DEBUG nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.542 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.545 2 INFO nova.virt.libvirt.driver [-] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Instance spawned successfully.
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.545 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.555 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.558 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.570 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.570 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.571 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.571 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.571 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.572 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.596 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.596 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432334.5375664, 3f3d9640-8200-45d8-ac25-bbc5d016d49f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.596 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] VM Paused (Lifecycle Event)
Oct 14 08:58:54 compute-0 ceph-mon[74249]: pgmap v1222: 305 pgs: 305 active+clean; 134 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.622 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.625 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432334.5402305, 3f3d9640-8200-45d8-ac25-bbc5d016d49f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.625 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] VM Resumed (Lifecycle Event)
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.644 2 INFO nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Took 7.31 seconds to spawn the instance on the hypervisor.
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.645 2 DEBUG nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.646 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.646 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.656 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.698 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.701 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.753 2 INFO nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Took 8.24 seconds to build instance.
Oct 14 08:58:54 compute-0 nova_compute[259627]: 2025-10-14 08:58:54.784 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.121 2 DEBUG nova.network.neutron [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Successfully created port: acc7c80f-8812-4bbf-93f8-cc3f1556b62a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:58:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:58:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3523924861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.207 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.211 2 DEBUG nova.compute.provider_tree [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.225 2 DEBUG nova.scheduler.client.report [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.291 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.292 2 DEBUG nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.362 2 DEBUG nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.363 2 DEBUG nova.network.neutron [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.392 2 INFO nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.425 2 DEBUG nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.467 2 DEBUG nova.network.neutron [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Updating instance_info_cache with network_info: [{"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.505 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Releasing lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.505 2 DEBUG nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance network_info: |[{"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.506 2 DEBUG oslo_concurrency.lockutils [req-7056293d-66ed-4272-8141-6bdaa61933f3 req-015447ea-74e5-4434-b184-7bffb28ff84f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.506 2 DEBUG nova.network.neutron [req-7056293d-66ed-4272-8141-6bdaa61933f3 req-015447ea-74e5-4434-b184-7bffb28ff84f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Refreshing network info cache for port 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.509 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Start _get_guest_xml network_info=[{"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.514 2 WARNING nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.527 2 DEBUG nova.virt.libvirt.host [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.528 2 DEBUG nova.virt.libvirt.host [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.531 2 DEBUG nova.virt.libvirt.host [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.532 2 DEBUG nova.virt.libvirt.host [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.532 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.532 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.533 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.533 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.533 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.534 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.534 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.534 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.534 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.534 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.535 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.535 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.538 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 803 KiB/s rd, 11 MiB/s wr, 230 op/s
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.572 2 DEBUG nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.574 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.575 2 INFO nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Creating image(s)
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.601 2 DEBUG nova.storage.rbd_utils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3523924861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.633 2 DEBUG nova.storage.rbd_utils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.658 2 DEBUG nova.storage.rbd_utils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.666 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.695 2 DEBUG nova.policy [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56f2f9bf9b064a208d9ce5fe732c4ff7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.726 2 DEBUG nova.compute.manager [req-d25a0955-4b63-4c9d-88f0-f8e9c5f41ae9 req-95ad4836-e442-4626-93c9-a60404487d25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Received event network-vif-plugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.726 2 DEBUG oslo_concurrency.lockutils [req-d25a0955-4b63-4c9d-88f0-f8e9c5f41ae9 req-95ad4836-e442-4626-93c9-a60404487d25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.727 2 DEBUG oslo_concurrency.lockutils [req-d25a0955-4b63-4c9d-88f0-f8e9c5f41ae9 req-95ad4836-e442-4626-93c9-a60404487d25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.727 2 DEBUG oslo_concurrency.lockutils [req-d25a0955-4b63-4c9d-88f0-f8e9c5f41ae9 req-95ad4836-e442-4626-93c9-a60404487d25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.727 2 DEBUG nova.compute.manager [req-d25a0955-4b63-4c9d-88f0-f8e9c5f41ae9 req-95ad4836-e442-4626-93c9-a60404487d25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] No waiting events found dispatching network-vif-plugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.727 2 WARNING nova.compute.manager [req-d25a0955-4b63-4c9d-88f0-f8e9c5f41ae9 req-95ad4836-e442-4626-93c9-a60404487d25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Received unexpected event network-vif-plugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 for instance with vm_state active and task_state None.
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.741 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.742 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.742 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.742 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.764 2 DEBUG nova.storage.rbd_utils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.767 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.887 2 DEBUG nova.network.neutron [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Successfully created port: fa5f1925-a535-45ee-b96e-f79c725d7960 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:58:55 compute-0 nova_compute[259627]: 2025-10-14 08:58:55.993 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:56 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3769225019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.045 2 DEBUG nova.storage.rbd_utils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] resizing rbd image 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.072 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.093 2 DEBUG nova.storage.rbd_utils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.096 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.162 2 DEBUG nova.objects.instance [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'migration_context' on Instance uuid 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.177 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.178 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Ensure instance console log exists: /var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.178 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.178 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.179 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.193 2 DEBUG nova.network.neutron [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Successfully created port: 2da46865-98ea-42a7-a5cc-44b5bef36a3d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:58:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:56 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1557437425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.501 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.503 2 DEBUG nova.virt.libvirt.vif [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1883268496',display_name='tempest-ListServerFiltersTestJSON-instance-1883268496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1883268496',id=27,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-7u6hc4e7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListServerFiltersTestJSON-1842486796-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:50Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=27fa4cf8-c08c-46a2-af8f-17c8980a2317,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.504 2 DEBUG nova.network.os_vif_util [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.505 2 DEBUG nova.network.os_vif_util [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.507 2 DEBUG nova.objects.instance [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'pci_devices' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.521 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:58:56 compute-0 nova_compute[259627]:   <uuid>27fa4cf8-c08c-46a2-af8f-17c8980a2317</uuid>
Oct 14 08:58:56 compute-0 nova_compute[259627]:   <name>instance-0000001b</name>
Oct 14 08:58:56 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:58:56 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:58:56 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1883268496</nova:name>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:58:55</nova:creationTime>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:58:56 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:58:56 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:58:56 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:58:56 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:58:56 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:58:56 compute-0 nova_compute[259627]:         <nova:user uuid="56f2f9bf9b064a208d9ce5fe732c4ff7">tempest-ListServerFiltersTestJSON-1842486796-project-member</nova:user>
Oct 14 08:58:56 compute-0 nova_compute[259627]:         <nova:project uuid="3d3a647aa3914555a8a2c5fd6fe7a543">tempest-ListServerFiltersTestJSON-1842486796</nova:project>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:58:56 compute-0 nova_compute[259627]:         <nova:port uuid="0b16cd6a-fe42-4a54-8bbe-810915fcaa93">
Oct 14 08:58:56 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:58:56 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:58:56 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <system>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <entry name="serial">27fa4cf8-c08c-46a2-af8f-17c8980a2317</entry>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <entry name="uuid">27fa4cf8-c08c-46a2-af8f-17c8980a2317</entry>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     </system>
Oct 14 08:58:56 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:58:56 compute-0 nova_compute[259627]:   <os>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:   </os>
Oct 14 08:58:56 compute-0 nova_compute[259627]:   <features>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:   </features>
Oct 14 08:58:56 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:58:56 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:58:56 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk">
Oct 14 08:58:56 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       </source>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:58:56 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk.config">
Oct 14 08:58:56 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       </source>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:58:56 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:c3:07:ec"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <target dev="tap0b16cd6a-fe"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317/console.log" append="off"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <video>
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     </video>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:58:56 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:58:56 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:58:56 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:58:56 compute-0 nova_compute[259627]: </domain>
Oct 14 08:58:56 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.521 2 DEBUG nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Preparing to wait for external event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.521 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.521 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.522 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.522 2 DEBUG nova.virt.libvirt.vif [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1883268496',display_name='tempest-ListServerFiltersTestJSON-instance-1883268496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1883268496',id=27,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-7u6hc4e7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListServerFiltersTestJSON-1842486796-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:50Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=27fa4cf8-c08c-46a2-af8f-17c8980a2317,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.522 2 DEBUG nova.network.os_vif_util [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.523 2 DEBUG nova.network.os_vif_util [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.523 2 DEBUG os_vif [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.524 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.525 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.527 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b16cd6a-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.527 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b16cd6a-fe, col_values=(('external_ids', {'iface-id': '0b16cd6a-fe42-4a54-8bbe-810915fcaa93', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:07:ec', 'vm-uuid': '27fa4cf8-c08c-46a2-af8f-17c8980a2317'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:56 compute-0 NetworkManager[44885]: <info>  [1760432336.5291] manager: (tap0b16cd6a-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.540 2 INFO os_vif [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe')
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.595 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.596 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.596 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] No VIF found with MAC fa:16:3e:c3:07:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.597 2 INFO nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Using config drive
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.614 2 DEBUG nova.storage.rbd_utils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:56 compute-0 ceph-mon[74249]: pgmap v1223: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 803 KiB/s rd, 11 MiB/s wr, 230 op/s
Oct 14 08:58:56 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3769225019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:56 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1557437425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:56 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 14 08:58:56 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000016.scope: Consumed 12.571s CPU time.
Oct 14 08:58:56 compute-0 systemd-machined[214636]: Machine qemu-27-instance-00000016 terminated.
Oct 14 08:58:56 compute-0 nova_compute[259627]: 2025-10-14 08:58:56.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.093 2 INFO nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Creating config drive at /var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317/disk.config
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.103 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuxj8b9dc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.171 2 DEBUG nova.network.neutron [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Successfully updated port: 2da46865-98ea-42a7-a5cc-44b5bef36a3d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.191 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "refresh_cache-82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.192 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquired lock "refresh_cache-82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.192 2 DEBUG nova.network.neutron [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.259 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuxj8b9dc" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.282 2 DEBUG nova.storage.rbd_utils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.285 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317/disk.config 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.312 2 DEBUG nova.network.neutron [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Successfully updated port: acc7c80f-8812-4bbf-93f8-cc3f1556b62a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.339 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "refresh_cache-eb820455-d45c-4331-9363-124f11537f52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.339 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquired lock "refresh_cache-eb820455-d45c-4331-9363-124f11537f52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.339 2 DEBUG nova.network.neutron [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.368 2 DEBUG nova.network.neutron [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.427 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317/disk.config 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.430 2 INFO nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Deleting local config drive /var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317/disk.config because it was imported into RBD.
Oct 14 08:58:57 compute-0 kernel: tap0b16cd6a-fe: entered promiscuous mode
Oct 14 08:58:57 compute-0 NetworkManager[44885]: <info>  [1760432337.4854] manager: (tap0b16cd6a-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Oct 14 08:58:57 compute-0 systemd-udevd[293956]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:57 compute-0 ovn_controller[152662]: 2025-10-14T08:58:57Z|00192|binding|INFO|Claiming lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for this chassis.
Oct 14 08:58:57 compute-0 ovn_controller[152662]: 2025-10-14T08:58:57Z|00193|binding|INFO|0b16cd6a-fe42-4a54-8bbe-810915fcaa93: Claiming fa:16:3e:c3:07:ec 10.100.0.11
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.496 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:07:ec 10.100.0.11'], port_security=['fa:16:3e:c3:07:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27fa4cf8-c08c-46a2-af8f-17c8980a2317', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0b16cd6a-fe42-4a54-8bbe-810915fcaa93) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.498 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 bound to our chassis
Oct 14 08:58:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.503 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99
Oct 14 08:58:57 compute-0 NetworkManager[44885]: <info>  [1760432337.5038] device (tap0b16cd6a-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:58:57 compute-0 NetworkManager[44885]: <info>  [1760432337.5057] device (tap0b16cd6a-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.522 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bc1f4e90-6f1a-4f33-852e-26c9b8a82877]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.523 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc4d50d6a-61 in ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:58:57 compute-0 systemd-machined[214636]: New machine qemu-29-instance-0000001b.
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.527 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc4d50d6a-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.527 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ac5f2656-9c5d-40d1-8320-a3b90db3302d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.529 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[855ad7ca-0b67-4ebc-89e4-3e7c09cd4ad6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.547 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[b52e6efe-3ac2-4d54-a612-5f0b1b9f9226]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:57 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-0000001b.
Oct 14 08:58:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 798 KiB/s rd, 11 MiB/s wr, 229 op/s
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.563 2 DEBUG nova.network.neutron [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:57 compute-0 ovn_controller[152662]: 2025-10-14T08:58:57Z|00194|binding|INFO|Setting lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 ovn-installed in OVS
Oct 14 08:58:57 compute-0 ovn_controller[152662]: 2025-10-14T08:58:57Z|00195|binding|INFO|Setting lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 up in Southbound
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.569 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f104c0fd-f235-428e-bb7a-6241c14643c4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.584 2 DEBUG nova.compute.manager [req-5642a759-3aca-4f3e-a65d-200013c51ad9 req-c8125f40-1a11-4414-8b96-f256043b2c14 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Received event network-changed-acc7c80f-8812-4bbf-93f8-cc3f1556b62a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.585 2 DEBUG nova.compute.manager [req-5642a759-3aca-4f3e-a65d-200013c51ad9 req-c8125f40-1a11-4414-8b96-f256043b2c14 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Refreshing instance network info cache due to event network-changed-acc7c80f-8812-4bbf-93f8-cc3f1556b62a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.585 2 DEBUG oslo_concurrency.lockutils [req-5642a759-3aca-4f3e-a65d-200013c51ad9 req-c8125f40-1a11-4414-8b96-f256043b2c14 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-eb820455-d45c-4331-9363-124f11537f52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.598 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b4622c8b-6ce6-4ad1-b414-785f3003f1d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.607 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[19927cf6-ed44-4b6d-987b-2cf0a8852614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:57 compute-0 NetworkManager[44885]: <info>  [1760432337.6096] manager: (tapc4d50d6a-60): new Veth device (/org/freedesktop/NetworkManager/Devices/103)
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.615 2 DEBUG nova.network.neutron [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Successfully updated port: 902a062a-858b-4495-936b-47a675567467 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.631 2 DEBUG nova.network.neutron [req-7056293d-66ed-4272-8141-6bdaa61933f3 req-015447ea-74e5-4434-b184-7bffb28ff84f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Updated VIF entry in instance network info cache for port 0b16cd6a-fe42-4a54-8bbe-810915fcaa93. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.632 2 DEBUG nova.network.neutron [req-7056293d-66ed-4272-8141-6bdaa61933f3 req-015447ea-74e5-4434-b184-7bffb28ff84f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Updating instance_info_cache with network_info: [{"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.640 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[063ef4b5-8f1d-4e7a-af5a-af8188cee13e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.642 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[be6ffd68-c068-45d6-ae61-e42a6b6b0386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.660 2 DEBUG oslo_concurrency.lockutils [req-7056293d-66ed-4272-8141-6bdaa61933f3 req-015447ea-74e5-4434-b184-7bffb28ff84f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:58:57 compute-0 NetworkManager[44885]: <info>  [1760432337.6734] device (tapc4d50d6a-60): carrier: link connected
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.680 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4c6413c2-3820-47f8-a920-e9846fb6751c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.697 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d650797d-c83d-4647-9874-2bfc40d55cfa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294044, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.709 2 DEBUG oslo_concurrency.lockutils [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.710 2 DEBUG oslo_concurrency.lockutils [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.710 2 DEBUG nova.compute.manager [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.711 2 INFO nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance shutdown successfully after 13 seconds.
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.714 2 DEBUG nova.compute.manager [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.715 2 DEBUG nova.objects.instance [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'flavor' on Instance uuid 3f3d9640-8200-45d8-ac25-bbc5d016d49f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.717 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f29c75-de64-4fbc-95e6-3564fc01ba6a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:914e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611643, 'tstamp': 611643}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294045, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.721 2 INFO nova.virt.libvirt.driver [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance destroyed successfully.
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.727 2 INFO nova.virt.libvirt.driver [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance destroyed successfully.
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.739 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7abb9af9-1d7b-4554-9541-12893098fa64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294046, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.767 2 DEBUG nova.virt.libvirt.driver [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.781 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0204a7a8-5d17-4644-a6c2-c05f888c54f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.820 2 DEBUG nova.compute.manager [req-c27f38a9-eaec-48a3-acb6-2b78db5f1ce2 req-31f11ef6-bf94-4c6a-81e6-ac80959e2f76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Received event network-changed-2da46865-98ea-42a7-a5cc-44b5bef36a3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.821 2 DEBUG nova.compute.manager [req-c27f38a9-eaec-48a3-acb6-2b78db5f1ce2 req-31f11ef6-bf94-4c6a-81e6-ac80959e2f76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Refreshing instance network info cache due to event network-changed-2da46865-98ea-42a7-a5cc-44b5bef36a3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.821 2 DEBUG oslo_concurrency.lockutils [req-c27f38a9-eaec-48a3-acb6-2b78db5f1ce2 req-31f11ef6-bf94-4c6a-81e6-ac80959e2f76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.866 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[50161bc8-3a5e-4d49-8338-e3f6875772c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.868 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.868 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.869 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d50d6a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:57 compute-0 NetworkManager[44885]: <info>  [1760432337.8718] manager: (tapc4d50d6a-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Oct 14 08:58:57 compute-0 kernel: tapc4d50d6a-60: entered promiscuous mode
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.875 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d50d6a-60, col_values=(('external_ids', {'iface-id': '650f034e-5333-49ba-9907-b0409944aee7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.878 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c4d50d6a-6686-4b50-b1e5-9f71bae17a99.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c4d50d6a-6686-4b50-b1e5-9f71bae17a99.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.879 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c72a49b-11fc-433f-9c83-983d38f8ae07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.880 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-c4d50d6a-6686-4b50-b1e5-9f71bae17a99
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/c4d50d6a-6686-4b50-b1e5-9f71bae17a99.pid.haproxy
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID c4d50d6a-6686-4b50-b1e5-9f71bae17a99
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:58:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.881 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'env', 'PROCESS_TAG=haproxy-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c4d50d6a-6686-4b50-b1e5-9f71bae17a99.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:58:57 compute-0 ovn_controller[152662]: 2025-10-14T08:58:57Z|00196|binding|INFO|Releasing lport 650f034e-5333-49ba-9907-b0409944aee7 from this chassis (sb_readonly=0)
Oct 14 08:58:57 compute-0 nova_compute[259627]: 2025-10-14 08:58:57.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.197 2 INFO nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deleting instance files /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056_del
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.200 2 INFO nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deletion of /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056_del complete
Oct 14 08:58:58 compute-0 podman[294097]: 2025-10-14 08:58:58.312063803 +0000 UTC m=+0.060282082 container create 212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 14 08:58:58 compute-0 systemd[1]: Started libpod-conmon-212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782.scope.
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.356 2 DEBUG nova.network.neutron [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Updating instance_info_cache with network_info: [{"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.379 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Releasing lock "refresh_cache-82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:58:58 compute-0 podman[294097]: 2025-10-14 08:58:58.287377176 +0000 UTC m=+0.035595495 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.380 2 DEBUG nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Instance network_info: |[{"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.380 2 DEBUG oslo_concurrency.lockutils [req-c27f38a9-eaec-48a3-acb6-2b78db5f1ce2 req-31f11ef6-bf94-4c6a-81e6-ac80959e2f76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.381 2 DEBUG nova.network.neutron [req-c27f38a9-eaec-48a3-acb6-2b78db5f1ce2 req-31f11ef6-bf94-4c6a-81e6-ac80959e2f76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Refreshing network info cache for port 2da46865-98ea-42a7-a5cc-44b5bef36a3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:58:58 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.384 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Start _get_guest_xml network_info=[{"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7915092ded73a0a1e93e8fda5fd9538a3414c7bf089afe2c761ba10dc82e95d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.389 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.390 2 INFO nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Creating image(s)
Oct 14 08:58:58 compute-0 podman[294097]: 2025-10-14 08:58:58.404192745 +0000 UTC m=+0.152411054 container init 212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 08:58:58 compute-0 podman[294097]: 2025-10-14 08:58:58.411428483 +0000 UTC m=+0.159646772 container start 212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.426 2 DEBUG nova.storage.rbd_utils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:58 compute-0 neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99[294112]: [NOTICE]   (294131) : New worker (294143) forked
Oct 14 08:58:58 compute-0 neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99[294112]: [NOTICE]   (294131) : Loading success.
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.462 2 DEBUG nova.storage.rbd_utils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.488 2 DEBUG nova.storage.rbd_utils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.492 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.527 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432323.5125923, 2826d9ce-d739-49a1-abfa-80cee62173fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.528 2 INFO nova.compute.manager [-] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] VM Stopped (Lifecycle Event)
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.540 2 WARNING nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.547 2 DEBUG nova.virt.libvirt.host [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.548 2 DEBUG nova.virt.libvirt.host [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.562 2 DEBUG nova.virt.libvirt.host [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.563 2 DEBUG nova.virt.libvirt.host [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.564 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.564 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='446f2537-86e9-41eb-8a0d-254e85da4be1',id=5,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.565 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.565 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.566 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.566 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.567 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.567 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.568 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.568 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.568 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.569 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.573 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.603 2 DEBUG nova.compute.manager [None req-c9c60327-175e-4545-9dc0-eaecc3c56944 - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.605 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.606 2 DEBUG oslo_concurrency.lockutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.607 2 DEBUG oslo_concurrency.lockutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.607 2 DEBUG oslo_concurrency.lockutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.632 2 DEBUG nova.storage.rbd_utils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.636 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 51c76e0f-284d-4122-83b4-32c4518b9056_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:58 compute-0 ceph-mon[74249]: pgmap v1224: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 798 KiB/s rd, 11 MiB/s wr, 229 op/s
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.718 2 DEBUG nova.network.neutron [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Updating instance_info_cache with network_info: [{"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.848 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Releasing lock "refresh_cache-eb820455-d45c-4331-9363-124f11537f52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.849 2 DEBUG nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Instance network_info: |[{"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.850 2 DEBUG oslo_concurrency.lockutils [req-5642a759-3aca-4f3e-a65d-200013c51ad9 req-c8125f40-1a11-4414-8b96-f256043b2c14 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-eb820455-d45c-4331-9363-124f11537f52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.850 2 DEBUG nova.network.neutron [req-5642a759-3aca-4f3e-a65d-200013c51ad9 req-c8125f40-1a11-4414-8b96-f256043b2c14 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Refreshing network info cache for port acc7c80f-8812-4bbf-93f8-cc3f1556b62a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.855 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Start _get_guest_xml network_info=[{"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'e2368e3e-f504-40e6-a9d3-67df18c845bb'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.861 2 WARNING nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.865 2 DEBUG nova.virt.libvirt.host [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.866 2 DEBUG nova.virt.libvirt.host [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.873 2 DEBUG nova.virt.libvirt.host [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.874 2 DEBUG nova.virt.libvirt.host [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.874 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.875 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.875 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.876 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.877 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.877 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.878 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.878 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.878 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.879 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.879 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.879 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.883 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.913 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 51c76e0f-284d-4122-83b4-32c4518b9056_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.971 2 DEBUG nova.storage.rbd_utils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] resizing rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:58:58 compute-0 nova_compute[259627]: 2025-10-14 08:58:58.995 2 DEBUG nova.network.neutron [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Successfully updated port: fa5f1925-a535-45ee-b96e-f79c725d7960 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.009 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "refresh_cache-654413e6-01cd-4e54-a271-6b515a8561e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.009 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquired lock "refresh_cache-654413e6-01cd-4e54-a271-6b515a8561e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.009 2 DEBUG nova.network.neutron [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:58:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/375667791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.062 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.081 2 DEBUG nova.storage.rbd_utils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.084 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.127 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.127 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Ensure instance console log exists: /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.128 2 DEBUG oslo_concurrency.lockutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.128 2 DEBUG oslo_concurrency.lockutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.129 2 DEBUG oslo_concurrency.lockutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.131 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.134 2 WARNING nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.140 2 DEBUG nova.virt.libvirt.host [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.141 2 DEBUG nova.virt.libvirt.host [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.145 2 DEBUG nova.virt.libvirt.host [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.145 2 DEBUG nova.virt.libvirt.host [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.146 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.146 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.147 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.147 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.148 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.148 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.148 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.149 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.149 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.149 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.150 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.150 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.150 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.166 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.192 2 DEBUG nova.network.neutron [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:58:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2171146991' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.390 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.424 2 DEBUG nova.storage.rbd_utils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image eb820455-d45c-4331-9363-124f11537f52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.429 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.475 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432339.475054, 27fa4cf8-c08c-46a2-af8f-17c8980a2317 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.476 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] VM Started (Lifecycle Event)
Oct 14 08:58:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1816079181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.513 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.515 2 DEBUG nova.virt.libvirt.vif [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-208119549',display_name='tempest-ListServerFiltersTestJSON-instance-208119549',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-208119549',id=30,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-ldwr4ls0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListServ
erFiltersTestJSON-1842486796-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:55Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.516 2 DEBUG nova.network.os_vif_util [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.517 2 DEBUG nova.network.os_vif_util [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:d2:4a,bridge_name='br-int',has_traffic_filtering=True,id=2da46865-98ea-42a7-a5cc-44b5bef36a3d,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2da46865-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.519 2 DEBUG nova.objects.instance [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'pci_devices' on Instance uuid 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 669 KiB/s rd, 9.2 MiB/s wr, 192 op/s
Oct 14 08:58:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3629039864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.616 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.640 2 DEBUG nova.storage.rbd_utils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.646 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:58:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/375667791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2171146991' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1816079181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3629039864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.678 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <uuid>82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9</uuid>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <name>instance-0000001e</name>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <memory>196608</memory>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-208119549</nova:name>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:58:58</nova:creationTime>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <nova:flavor name="m1.micro">
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <nova:memory>192</nova:memory>
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <nova:user uuid="56f2f9bf9b064a208d9ce5fe732c4ff7">tempest-ListServerFiltersTestJSON-1842486796-project-member</nova:user>
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <nova:project uuid="3d3a647aa3914555a8a2c5fd6fe7a543">tempest-ListServerFiltersTestJSON-1842486796</nova:project>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <nova:port uuid="2da46865-98ea-42a7-a5cc-44b5bef36a3d">
Oct 14 08:58:59 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <system>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <entry name="serial">82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9</entry>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <entry name="uuid">82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9</entry>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     </system>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <os>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   </os>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <features>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   </features>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk">
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       </source>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk.config">
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       </source>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:28:d2:4a"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <target dev="tap2da46865-98"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9/console.log" append="off"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <video>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     </video>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:58:59 compute-0 nova_compute[259627]: </domain>
Oct 14 08:58:59 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.678 2 DEBUG nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Preparing to wait for external event network-vif-plugged-2da46865-98ea-42a7-a5cc-44b5bef36a3d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.679 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.679 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.679 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.680 2 DEBUG nova.virt.libvirt.vif [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-208119549',display_name='tempest-ListServerFiltersTestJSON-instance-208119549',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-208119549',id=30,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-ldwr4ls0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempes
t-ListServerFiltersTestJSON-1842486796-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:55Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.680 2 DEBUG nova.network.os_vif_util [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.681 2 DEBUG nova.network.os_vif_util [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:d2:4a,bridge_name='br-int',has_traffic_filtering=True,id=2da46865-98ea-42a7-a5cc-44b5bef36a3d,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2da46865-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.682 2 DEBUG os_vif [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:d2:4a,bridge_name='br-int',has_traffic_filtering=True,id=2da46865-98ea-42a7-a5cc-44b5bef36a3d,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2da46865-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.683 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2da46865-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2da46865-98, col_values=(('external_ids', {'iface-id': '2da46865-98ea-42a7-a5cc-44b5bef36a3d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:d2:4a', 'vm-uuid': '82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.689 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432339.4768302, 27fa4cf8-c08c-46a2-af8f-17c8980a2317 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.689 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] VM Paused (Lifecycle Event)
Oct 14 08:58:59 compute-0 NetworkManager[44885]: <info>  [1760432339.6906] manager: (tap2da46865-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.697 2 INFO os_vif [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:d2:4a,bridge_name='br-int',has_traffic_filtering=True,id=2da46865-98ea-42a7-a5cc-44b5bef36a3d,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2da46865-98')
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.715 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.725 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.744 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.760 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.761 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.761 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] No VIF found with MAC fa:16:3e:28:d2:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.761 2 INFO nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Using config drive
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.782 2 DEBUG nova.storage.rbd_utils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.815 2 DEBUG nova.compute.manager [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-changed-902a062a-858b-4495-936b-47a675567467 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.815 2 DEBUG nova.compute.manager [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Refreshing instance network info cache due to event network-changed-902a062a-858b-4495-936b-47a675567467. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.815 2 DEBUG oslo_concurrency.lockutils [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-654413e6-01cd-4e54-a271-6b515a8561e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:58:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:58:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2841665192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.881 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.883 2 DEBUG nova.virt.libvirt.vif [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-195518745',display_name='tempest-ListServerFiltersTestJSON-instance-195518745',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-195518745',id=29,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-0kd7c49h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListServerFiltersTestJSON-1842486796-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:53Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=eb820455-d45c-4331-9363-124f11537f52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.883 2 DEBUG nova.network.os_vif_util [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.885 2 DEBUG nova.network.os_vif_util [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:46:6c,bridge_name='br-int',has_traffic_filtering=True,id=acc7c80f-8812-4bbf-93f8-cc3f1556b62a,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacc7c80f-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.886 2 DEBUG nova.objects.instance [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'pci_devices' on Instance uuid eb820455-d45c-4331-9363-124f11537f52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.945 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <uuid>eb820455-d45c-4331-9363-124f11537f52</uuid>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <name>instance-0000001d</name>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-195518745</nova:name>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:58:58</nova:creationTime>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <nova:user uuid="56f2f9bf9b064a208d9ce5fe732c4ff7">tempest-ListServerFiltersTestJSON-1842486796-project-member</nova:user>
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <nova:project uuid="3d3a647aa3914555a8a2c5fd6fe7a543">tempest-ListServerFiltersTestJSON-1842486796</nova:project>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="e2368e3e-f504-40e6-a9d3-67df18c845bb"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <nova:port uuid="acc7c80f-8812-4bbf-93f8-cc3f1556b62a">
Oct 14 08:58:59 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <system>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <entry name="serial">eb820455-d45c-4331-9363-124f11537f52</entry>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <entry name="uuid">eb820455-d45c-4331-9363-124f11537f52</entry>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     </system>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <os>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   </os>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <features>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   </features>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/eb820455-d45c-4331-9363-124f11537f52_disk">
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       </source>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/eb820455-d45c-4331-9363-124f11537f52_disk.config">
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       </source>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:58:59 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:da:46:6c"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <target dev="tapacc7c80f-88"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52/console.log" append="off"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <video>
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     </video>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:58:59 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:58:59 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:58:59 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:58:59 compute-0 nova_compute[259627]: </domain>
Oct 14 08:58:59 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.945 2 DEBUG nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Preparing to wait for external event network-vif-plugged-acc7c80f-8812-4bbf-93f8-cc3f1556b62a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.946 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "eb820455-d45c-4331-9363-124f11537f52-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.946 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.947 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.948 2 DEBUG nova.virt.libvirt.vif [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-195518745',display_name='tempest-ListServerFiltersTestJSON-instance-195518745',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-195518745',id=29,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-0kd7c49h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListServerFiltersTestJSON-1842486796-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:53Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=eb820455-d45c-4331-9363-124f11537f52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.949 2 DEBUG nova.network.os_vif_util [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.950 2 DEBUG nova.network.os_vif_util [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:46:6c,bridge_name='br-int',has_traffic_filtering=True,id=acc7c80f-8812-4bbf-93f8-cc3f1556b62a,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacc7c80f-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.950 2 DEBUG os_vif [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:46:6c,bridge_name='br-int',has_traffic_filtering=True,id=acc7c80f-8812-4bbf-93f8-cc3f1556b62a,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacc7c80f-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.956 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.956 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.960 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacc7c80f-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.961 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapacc7c80f-88, col_values=(('external_ids', {'iface-id': 'acc7c80f-8812-4bbf-93f8-cc3f1556b62a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:46:6c', 'vm-uuid': 'eb820455-d45c-4331-9363-124f11537f52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:59 compute-0 NetworkManager[44885]: <info>  [1760432339.9641] manager: (tapacc7c80f-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:58:59 compute-0 nova_compute[259627]: 2025-10-14 08:58:59.973 2 INFO os_vif [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:46:6c,bridge_name='br-int',has_traffic_filtering=True,id=acc7c80f-8812-4bbf-93f8-cc3f1556b62a,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacc7c80f-88')
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.068 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.069 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.069 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] No VIF found with MAC fa:16:3e:da:46:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.070 2 INFO nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Using config drive
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.093 2 DEBUG nova.storage.rbd_utils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image eb820455-d45c-4331-9363-124f11537f52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:59:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4235467809' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.150 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.152 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:59:00 compute-0 nova_compute[259627]:   <uuid>51c76e0f-284d-4122-83b4-32c4518b9056</uuid>
Oct 14 08:59:00 compute-0 nova_compute[259627]:   <name>instance-00000016</name>
Oct 14 08:59:00 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:59:00 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:59:00 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersAdmin275Test-server-546094612</nova:name>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:58:59</nova:creationTime>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:59:00 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:59:00 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:59:00 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:59:00 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:59:00 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:59:00 compute-0 nova_compute[259627]:         <nova:user uuid="24a7b84f511340ae859b668a0e7becf6">tempest-ServersAdmin275Test-1795131452-project-member</nova:user>
Oct 14 08:59:00 compute-0 nova_compute[259627]:         <nova:project uuid="61066a48551647f18a4cfb7a7147e7ed">tempest-ServersAdmin275Test-1795131452</nova:project>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:59:00 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:59:00 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <system>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <entry name="serial">51c76e0f-284d-4122-83b4-32c4518b9056</entry>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <entry name="uuid">51c76e0f-284d-4122-83b4-32c4518b9056</entry>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     </system>
Oct 14 08:59:00 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:59:00 compute-0 nova_compute[259627]:   <os>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:   </os>
Oct 14 08:59:00 compute-0 nova_compute[259627]:   <features>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:   </features>
Oct 14 08:59:00 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:59:00 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:59:00 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/51c76e0f-284d-4122-83b4-32c4518b9056_disk">
Oct 14 08:59:00 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       </source>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:59:00 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/51c76e0f-284d-4122-83b4-32c4518b9056_disk.config">
Oct 14 08:59:00 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       </source>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:59:00 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/console.log" append="off"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <video>
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     </video>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:59:00 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:59:00 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:59:00 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:59:00 compute-0 nova_compute[259627]: </domain>
Oct 14 08:59:00 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.220 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.221 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.221 2 INFO nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Using config drive
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.242 2 DEBUG nova.storage.rbd_utils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.271 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.312 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lazy-loading 'keypairs' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.464 2 DEBUG nova.network.neutron [req-c27f38a9-eaec-48a3-acb6-2b78db5f1ce2 req-31f11ef6-bf94-4c6a-81e6-ac80959e2f76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Updated VIF entry in instance network info cache for port 2da46865-98ea-42a7-a5cc-44b5bef36a3d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.465 2 DEBUG nova.network.neutron [req-c27f38a9-eaec-48a3-acb6-2b78db5f1ce2 req-31f11ef6-bf94-4c6a-81e6-ac80959e2f76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Updating instance_info_cache with network_info: [{"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.482 2 DEBUG oslo_concurrency.lockutils [req-c27f38a9-eaec-48a3-acb6-2b78db5f1ce2 req-31f11ef6-bf94-4c6a-81e6-ac80959e2f76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.593 2 INFO nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Creating config drive at /var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9/disk.config
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.602 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe7conj5k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:00 compute-0 ceph-mon[74249]: pgmap v1225: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 669 KiB/s rd, 9.2 MiB/s wr, 192 op/s
Oct 14 08:59:00 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2841665192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:00 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4235467809' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.669 2 INFO nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Creating config drive at /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.675 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1n0wesz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:00 compute-0 sudo[294580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:59:00 compute-0 sudo[294580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:59:00 compute-0 sudo[294580]: pam_unix(sudo:session): session closed for user root
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.713 2 DEBUG nova.compute.manager [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.715 2 DEBUG oslo_concurrency.lockutils [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.715 2 DEBUG oslo_concurrency.lockutils [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.715 2 DEBUG oslo_concurrency.lockutils [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.716 2 DEBUG nova.compute.manager [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Processing event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.716 2 DEBUG nova.compute.manager [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.717 2 DEBUG oslo_concurrency.lockutils [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.717 2 DEBUG oslo_concurrency.lockutils [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.717 2 DEBUG oslo_concurrency.lockutils [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.717 2 DEBUG nova.compute.manager [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] No waiting events found dispatching network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.718 2 WARNING nova.compute.manager [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received unexpected event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for instance with vm_state building and task_state spawning.
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.726 2 DEBUG nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.738 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432340.7378342, 27fa4cf8-c08c-46a2-af8f-17c8980a2317 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.739 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] VM Resumed (Lifecycle Event)
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.742 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.756 2 INFO nova.virt.libvirt.driver [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance spawned successfully.
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.756 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.773 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.783 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.785 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe7conj5k" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:00 compute-0 sudo[294622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:59:00 compute-0 sudo[294622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:59:00 compute-0 sudo[294622]: pam_unix(sudo:session): session closed for user root
Oct 14 08:59:00 compute-0 podman[294605]: 2025-10-14 08:59:00.822829066 +0000 UTC m=+0.109988661 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.830 2 DEBUG nova.storage.rbd_utils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.835 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9/disk.config 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:00 compute-0 podman[294609]: 2025-10-14 08:59:00.840872 +0000 UTC m=+0.132282530 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:59:00 compute-0 sudo[294678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.875 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1n0wesz" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:00 compute-0 sudo[294678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:59:00 compute-0 sudo[294678]: pam_unix(sudo:session): session closed for user root
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.901 2 DEBUG nova.storage.rbd_utils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.904 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.930 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.934 2 DEBUG nova.network.neutron [req-5642a759-3aca-4f3e-a65d-200013c51ad9 req-c8125f40-1a11-4414-8b96-f256043b2c14 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Updated VIF entry in instance network info cache for port acc7c80f-8812-4bbf-93f8-cc3f1556b62a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.935 2 DEBUG nova.network.neutron [req-5642a759-3aca-4f3e-a65d-200013c51ad9 req-c8125f40-1a11-4414-8b96-f256043b2c14 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Updating instance_info_cache with network_info: [{"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.940 2 INFO nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Creating config drive at /var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52/disk.config
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.946 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0isfzie8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:00 compute-0 sudo[294723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 08:59:00 compute-0 sudo[294723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.982 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.983 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.983 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.984 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.984 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:00 compute-0 nova_compute[259627]: 2025-10-14 08:59:00.987 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.020 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9/disk.config 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.021 2 INFO nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Deleting local config drive /var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9/disk.config because it was imported into RBD.
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.036 2 DEBUG oslo_concurrency.lockutils [req-5642a759-3aca-4f3e-a65d-200013c51ad9 req-c8125f40-1a11-4414-8b96-f256043b2c14 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-eb820455-d45c-4331-9363-124f11537f52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.064 2 INFO nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Took 10.35 seconds to spawn the instance on the hypervisor.
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.065 2 DEBUG nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:01 compute-0 NetworkManager[44885]: <info>  [1760432341.0795] manager: (tap2da46865-98): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Oct 14 08:59:01 compute-0 kernel: tap2da46865-98: entered promiscuous mode
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.084 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0isfzie8" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:01 compute-0 ovn_controller[152662]: 2025-10-14T08:59:01Z|00197|binding|INFO|Claiming lport 2da46865-98ea-42a7-a5cc-44b5bef36a3d for this chassis.
Oct 14 08:59:01 compute-0 ovn_controller[152662]: 2025-10-14T08:59:01Z|00198|binding|INFO|2da46865-98ea-42a7-a5cc-44b5bef36a3d: Claiming fa:16:3e:28:d2:4a 10.100.0.13
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.109 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:d2:4a 10.100.0.13'], port_security=['fa:16:3e:28:d2:4a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2da46865-98ea-42a7-a5cc-44b5bef36a3d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.110 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2da46865-98ea-42a7-a5cc-44b5bef36a3d in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 bound to our chassis
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.111 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99
Oct 14 08:59:01 compute-0 ovn_controller[152662]: 2025-10-14T08:59:01Z|00199|binding|INFO|Setting lport 2da46865-98ea-42a7-a5cc-44b5bef36a3d ovn-installed in OVS
Oct 14 08:59:01 compute-0 ovn_controller[152662]: 2025-10-14T08:59:01Z|00200|binding|INFO|Setting lport 2da46865-98ea-42a7-a5cc-44b5bef36a3d up in Southbound
Oct 14 08:59:01 compute-0 systemd-udevd[294841]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.132 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4bc3f742-6d9f-44ca-b46c-4684d211826e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.140 2 DEBUG nova.storage.rbd_utils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image eb820455-d45c-4331-9363-124f11537f52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:01 compute-0 systemd-machined[214636]: New machine qemu-30-instance-0000001e.
Oct 14 08:59:01 compute-0 NetworkManager[44885]: <info>  [1760432341.1468] device (tap2da46865-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:59:01 compute-0 NetworkManager[44885]: <info>  [1760432341.1476] device (tap2da46865-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:59:01 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-0000001e.
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.170 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52/disk.config eb820455-d45c-4331-9363-124f11537f52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.172 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c44ddd1d-7c22-4d5b-b557-987078c340e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.175 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c7aa3fd7-3569-4cad-86aa-67d107e7887a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.220 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d12f8a-482e-43d7-b853-758b1b23c03a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.221 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.222 2 INFO nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deleting local config drive /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config because it was imported into RBD.
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.224 2 INFO nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Took 11.34 seconds to build instance.
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.239 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[706d20ae-7e55-40cc-abfe-3bdeed30c15d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294860, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.242 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.256 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[12a41923-cd44-4dcd-aa8c-60c8d5234f7f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611657, 'tstamp': 611657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294867, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611661, 'tstamp': 611661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294867, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.258 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.266 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d50d6a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.267 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.267 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d50d6a-60, col_values=(('external_ids', {'iface-id': '650f034e-5333-49ba-9907-b0409944aee7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.268 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:01 compute-0 systemd-machined[214636]: New machine qemu-31-instance-00000016.
Oct 14 08:59:01 compute-0 systemd[1]: Started Virtual Machine qemu-31-instance-00000016.
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.388 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52/disk.config eb820455-d45c-4331-9363-124f11537f52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.389 2 INFO nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Deleting local config drive /var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52/disk.config because it was imported into RBD.
Oct 14 08:59:01 compute-0 kernel: tapacc7c80f-88: entered promiscuous mode
Oct 14 08:59:01 compute-0 NetworkManager[44885]: <info>  [1760432341.4347] manager: (tapacc7c80f-88): new Tun device (/org/freedesktop/NetworkManager/Devices/108)
Oct 14 08:59:01 compute-0 systemd-udevd[294850]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:59:01 compute-0 NetworkManager[44885]: <info>  [1760432341.4511] device (tapacc7c80f-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:59:01 compute-0 NetworkManager[44885]: <info>  [1760432341.4522] device (tapacc7c80f-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:01 compute-0 ovn_controller[152662]: 2025-10-14T08:59:01Z|00201|binding|INFO|Claiming lport acc7c80f-8812-4bbf-93f8-cc3f1556b62a for this chassis.
Oct 14 08:59:01 compute-0 ovn_controller[152662]: 2025-10-14T08:59:01Z|00202|binding|INFO|acc7c80f-8812-4bbf-93f8-cc3f1556b62a: Claiming fa:16:3e:da:46:6c 10.100.0.14
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.479 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:46:6c 10.100.0.14'], port_security=['fa:16:3e:da:46:6c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'eb820455-d45c-4331-9363-124f11537f52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=acc7c80f-8812-4bbf-93f8-cc3f1556b62a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.481 162547 INFO neutron.agent.ovn.metadata.agent [-] Port acc7c80f-8812-4bbf-93f8-cc3f1556b62a in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 bound to our chassis
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.482 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99
Oct 14 08:59:01 compute-0 systemd-machined[214636]: New machine qemu-32-instance-0000001d.
Oct 14 08:59:01 compute-0 ovn_controller[152662]: 2025-10-14T08:59:01Z|00203|binding|INFO|Setting lport acc7c80f-8812-4bbf-93f8-cc3f1556b62a ovn-installed in OVS
Oct 14 08:59:01 compute-0 ovn_controller[152662]: 2025-10-14T08:59:01Z|00204|binding|INFO|Setting lport acc7c80f-8812-4bbf-93f8-cc3f1556b62a up in Southbound
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:01 compute-0 systemd[1]: Started Virtual Machine qemu-32-instance-0000001d.
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.506 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c2829edc-f706-4361-9d03-a94908a5764b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:01 compute-0 sudo[294723]: pam_unix(sudo:session): session closed for user root
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.550 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb11ff9-1d51-44b3-86d0-0cce81fc048c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.560 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2178aaf2-7369-4150-aafa-2d117be85124]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 319 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 13 MiB/s wr, 336 op/s
Oct 14 08:59:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 08:59:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:59:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 08:59:01 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.606 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[35bcf593-1bca-4d3f-9daf-8d12d4f57abb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 08:59:01 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:59:01 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 59204c16-b91d-44e5-95b9-9ae085772098 does not exist
Oct 14 08:59:01 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 01099060-9d07-4938-9e55-b92a8dfa4dd1 does not exist
Oct 14 08:59:01 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev dd8fafc5-f278-4409-a8be-0056ff69e3cf does not exist
Oct 14 08:59:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 08:59:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 08:59:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 08:59:01 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 08:59:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 08:59:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.627 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[558f6d6b-b36a-46ba-aa4c-f99ae00a808c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294938, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.666 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d0758118-aa99-47c1-a386-41b529a37904]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611657, 'tstamp': 611657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294940, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611661, 'tstamp': 611661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294940, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.670 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:59:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 08:59:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:59:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 08:59:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 08:59:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.677 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d50d6a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.677 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.678 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d50d6a-60, col_values=(('external_ids', {'iface-id': '650f034e-5333-49ba-9907-b0409944aee7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.678 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:01 compute-0 sudo[294939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:59:01 compute-0 sudo[294939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:59:01 compute-0 sudo[294939]: pam_unix(sudo:session): session closed for user root
Oct 14 08:59:01 compute-0 sudo[294965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:59:01 compute-0 sudo[294965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:59:01 compute-0 sudo[294965]: pam_unix(sudo:session): session closed for user root
Oct 14 08:59:01 compute-0 sudo[295022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:59:01 compute-0 sudo[295022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:59:01 compute-0 sudo[295022]: pam_unix(sudo:session): session closed for user root
Oct 14 08:59:01 compute-0 anacron[27473]: Job `cron.weekly' started
Oct 14 08:59:01 compute-0 anacron[27473]: Job `cron.weekly' terminated
Oct 14 08:59:01 compute-0 sudo[295061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 08:59:01 compute-0 sudo[295061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:59:01 compute-0 nova_compute[259627]: 2025-10-14 08:59:01.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.232 2 DEBUG nova.compute.manager [req-4e170bb6-5e17-485b-a46a-2b27f7dfc15f req-e8179890-2d35-4296-9538-73c87588762a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Received event network-vif-plugged-acc7c80f-8812-4bbf-93f8-cc3f1556b62a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.232 2 DEBUG oslo_concurrency.lockutils [req-4e170bb6-5e17-485b-a46a-2b27f7dfc15f req-e8179890-2d35-4296-9538-73c87588762a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "eb820455-d45c-4331-9363-124f11537f52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.233 2 DEBUG oslo_concurrency.lockutils [req-4e170bb6-5e17-485b-a46a-2b27f7dfc15f req-e8179890-2d35-4296-9538-73c87588762a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.233 2 DEBUG oslo_concurrency.lockutils [req-4e170bb6-5e17-485b-a46a-2b27f7dfc15f req-e8179890-2d35-4296-9538-73c87588762a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.233 2 DEBUG nova.compute.manager [req-4e170bb6-5e17-485b-a46a-2b27f7dfc15f req-e8179890-2d35-4296-9538-73c87588762a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Processing event network-vif-plugged-acc7c80f-8812-4bbf-93f8-cc3f1556b62a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.242 2 DEBUG nova.network.neutron [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Updating instance_info_cache with network_info: [{"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:02 compute-0 podman[295198]: 2025-10-14 08:59:02.257553128 +0000 UTC m=+0.055656238 container create 71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.267 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Releasing lock "refresh_cache-654413e6-01cd-4e54-a271-6b515a8561e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.267 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Instance network_info: |[{"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.268 2 DEBUG oslo_concurrency.lockutils [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-654413e6-01cd-4e54-a271-6b515a8561e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.268 2 DEBUG nova.network.neutron [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Refreshing network info cache for port 902a062a-858b-4495-936b-47a675567467 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.271 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Start _get_guest_xml network_info=[{"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.275 2 WARNING nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.280 2 DEBUG nova.virt.libvirt.host [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.281 2 DEBUG nova.virt.libvirt.host [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.284 2 DEBUG nova.virt.libvirt.host [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.285 2 DEBUG nova.virt.libvirt.host [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.285 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.285 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.286 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.286 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.286 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.286 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.287 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.287 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.287 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.287 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.287 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.288 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.291 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:02 compute-0 systemd[1]: Started libpod-conmon-71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71.scope.
Oct 14 08:59:02 compute-0 podman[295198]: 2025-10-14 08:59:02.233810705 +0000 UTC m=+0.031913835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:59:02 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:59:02 compute-0 podman[295198]: 2025-10-14 08:59:02.348887611 +0000 UTC m=+0.146990721 container init 71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lederberg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 08:59:02 compute-0 podman[295198]: 2025-10-14 08:59:02.355857972 +0000 UTC m=+0.153961082 container start 71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:59:02 compute-0 podman[295198]: 2025-10-14 08:59:02.359923012 +0000 UTC m=+0.158026122 container attach 71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:59:02 compute-0 suspicious_lederberg[295212]: 167 167
Oct 14 08:59:02 compute-0 systemd[1]: libpod-71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71.scope: Deactivated successfully.
Oct 14 08:59:02 compute-0 conmon[295212]: conmon 71392b55835803ada86e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71.scope/container/memory.events
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.403 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432342.4034772, 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.405 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] VM Started (Lifecycle Event)
Oct 14 08:59:02 compute-0 podman[295217]: 2025-10-14 08:59:02.41238447 +0000 UTC m=+0.026570004 container died 71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.433 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c39da517717033c9b396b95d8fb02eeec0da300b5892765df2772d051a872d46-merged.mount: Deactivated successfully.
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.440 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432342.403573, 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.440 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] VM Paused (Lifecycle Event)
Oct 14 08:59:02 compute-0 podman[295217]: 2025-10-14 08:59:02.453267344 +0000 UTC m=+0.067452858 container remove 71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:59:02 compute-0 systemd[1]: libpod-conmon-71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71.scope: Deactivated successfully.
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.467 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.474 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.496 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:59:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.542 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432342.541677, eb820455-d45c-4331-9363-124f11537f52 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.542 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] VM Started (Lifecycle Event)
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.544 2 DEBUG nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.552 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.560 2 INFO nova.virt.libvirt.driver [-] [instance: eb820455-d45c-4331-9363-124f11537f52] Instance spawned successfully.
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.560 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.567 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.572 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.582 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.582 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.583 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.583 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.584 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.584 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.618 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.618 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432342.5417562, eb820455-d45c-4331-9363-124f11537f52 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.618 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] VM Paused (Lifecycle Event)
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.645 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.648 2 DEBUG nova.compute.manager [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.648 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.656 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432342.546942, eb820455-d45c-4331-9363-124f11537f52 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.656 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] VM Resumed (Lifecycle Event)
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.661 2 INFO nova.virt.libvirt.driver [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance spawned successfully.
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.661 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:59:02 compute-0 podman[295255]: 2025-10-14 08:59:02.666666444 +0000 UTC m=+0.059453251 container create 0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 08:59:02 compute-0 ceph-mon[74249]: pgmap v1226: 305 pgs: 305 active+clean; 319 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 13 MiB/s wr, 336 op/s
Oct 14 08:59:02 compute-0 systemd[1]: Started libpod-conmon-0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe.scope.
Oct 14 08:59:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:59:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.720 2 INFO nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Took 9.32 seconds to spawn the instance on the hypervisor.
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.720 2 DEBUG nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.725 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:02 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.733 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.733 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.734 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.734 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.735 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.735 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:02 compute-0 podman[295255]: 2025-10-14 08:59:02.640910212 +0000 UTC m=+0.033697049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:59:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4f73a84c044caea4fd8237703057b294687d57311c5d195f65fb37aa6b7615/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:59:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4f73a84c044caea4fd8237703057b294687d57311c5d195f65fb37aa6b7615/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:59:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4f73a84c044caea4fd8237703057b294687d57311c5d195f65fb37aa6b7615/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:59:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4f73a84c044caea4fd8237703057b294687d57311c5d195f65fb37aa6b7615/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:59:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4f73a84c044caea4fd8237703057b294687d57311c5d195f65fb37aa6b7615/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 08:59:02 compute-0 podman[295255]: 2025-10-14 08:59:02.755778712 +0000 UTC m=+0.148565559 container init 0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_faraday, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 08:59:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:59:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:59:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:59:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:59:02 compute-0 podman[295255]: 2025-10-14 08:59:02.763576644 +0000 UTC m=+0.156363461 container start 0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_faraday, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 08:59:02 compute-0 podman[295255]: 2025-10-14 08:59:02.766933326 +0000 UTC m=+0.159720163 container attach 0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.796 2 DEBUG nova.compute.manager [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Received event network-vif-plugged-2da46865-98ea-42a7-a5cc-44b5bef36a3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.797 2 DEBUG oslo_concurrency.lockutils [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.797 2 DEBUG oslo_concurrency.lockutils [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.797 2 DEBUG oslo_concurrency.lockutils [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.797 2 DEBUG nova.compute.manager [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Processing event network-vif-plugged-2da46865-98ea-42a7-a5cc-44b5bef36a3d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.797 2 DEBUG nova.compute.manager [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Received event network-vif-plugged-2da46865-98ea-42a7-a5cc-44b5bef36a3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.798 2 DEBUG oslo_concurrency.lockutils [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.798 2 DEBUG oslo_concurrency.lockutils [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.798 2 DEBUG oslo_concurrency.lockutils [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.798 2 DEBUG nova.compute.manager [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] No waiting events found dispatching network-vif-plugged-2da46865-98ea-42a7-a5cc-44b5bef36a3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.798 2 WARNING nova.compute.manager [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Received unexpected event network-vif-plugged-2da46865-98ea-42a7-a5cc-44b5bef36a3d for instance with vm_state building and task_state spawning.
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.802 2 DEBUG nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.803 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:59:02 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3505099324' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.805 2 INFO nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Took 10.51 seconds to build instance.
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.807 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.812 2 DEBUG nova.compute.manager [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.814 2 INFO nova.virt.libvirt.driver [-] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Instance spawned successfully.
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.814 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.828 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 51c76e0f-284d-4122-83b4-32c4518b9056 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.828 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432342.646253, 51c76e0f-284d-4122-83b4-32c4518b9056 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.828 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] VM Resumed (Lifecycle Event)
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.848 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.852 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.853 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.853 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.854 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.854 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.855 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.861 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.864 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.872 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.939 2 DEBUG nova.storage.rbd_utils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image 654413e6-01cd-4e54-a271-6b515a8561e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:02 compute-0 nova_compute[259627]: 2025-10-14 08:59:02.964 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.016 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432342.64632, 51c76e0f-284d-4122-83b4-32c4518b9056 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.016 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] VM Started (Lifecycle Event)
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.024 2 INFO nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Took 7.45 seconds to spawn the instance on the hypervisor.
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.024 2 DEBUG nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.028 2 DEBUG oslo_concurrency.lockutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.030 2 DEBUG oslo_concurrency.lockutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.030 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.061 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.098 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.156 2 DEBUG oslo_concurrency.lockutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.169 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432342.80423, 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.169 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] VM Resumed (Lifecycle Event)
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.210 2 INFO nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Took 8.88 seconds to build instance.
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.219 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.230 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.243 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:59:03 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/794689739' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.557 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.558 2 DEBUG nova.virt.libvirt.vif [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1217516639',display_name='tempest-ServersTestMultiNic-server-1217516639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1217516639',id=28,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-1ec16n6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:52Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=654413e6-01cd-4e54-a271-6b515a8561e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.559 2 DEBUG nova.network.os_vif_util [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.559 2 DEBUG nova.network.os_vif_util [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:b0:d5,bridge_name='br-int',has_traffic_filtering=True,id=902a062a-858b-4495-936b-47a675567467,network=Network(b65b17d8-22d8-41b1-aa72-fd93aefdff30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap902a062a-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 319 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 11 MiB/s wr, 309 op/s
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.562 2 DEBUG nova.virt.libvirt.vif [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1217516639',display_name='tempest-ServersTestMultiNic-server-1217516639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1217516639',id=28,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-1ec16n6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:52Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=654413e6-01cd-4e54-a271-6b515a8561e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.565 2 DEBUG nova.network.os_vif_util [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.567 2 DEBUG nova.network.os_vif_util [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:59:d2,bridge_name='br-int',has_traffic_filtering=True,id=fa5f1925-a535-45ee-b96e-f79c725d7960,network=Network(7b0bcfe7-c41f-42d1-9739-3dab148a181f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa5f1925-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.572 2 DEBUG nova.objects.instance [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 654413e6-01cd-4e54-a271-6b515a8561e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.597 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:59:03 compute-0 nova_compute[259627]:   <uuid>654413e6-01cd-4e54-a271-6b515a8561e6</uuid>
Oct 14 08:59:03 compute-0 nova_compute[259627]:   <name>instance-0000001c</name>
Oct 14 08:59:03 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:59:03 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:59:03 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersTestMultiNic-server-1217516639</nova:name>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:59:02</nova:creationTime>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:59:03 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:59:03 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:59:03 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:59:03 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:59:03 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:59:03 compute-0 nova_compute[259627]:         <nova:user uuid="5aacb60ad29c418c9161e71bb72da036">tempest-ServersTestMultiNic-840673976-project-member</nova:user>
Oct 14 08:59:03 compute-0 nova_compute[259627]:         <nova:project uuid="3566f35c659a45bd9b9bbddf6552ed43">tempest-ServersTestMultiNic-840673976</nova:project>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:59:03 compute-0 nova_compute[259627]:         <nova:port uuid="902a062a-858b-4495-936b-47a675567467">
Oct 14 08:59:03 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.125" ipVersion="4"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:59:03 compute-0 nova_compute[259627]:         <nova:port uuid="fa5f1925-a535-45ee-b96e-f79c725d7960">
Oct 14 08:59:03 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.1.203" ipVersion="4"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:59:03 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:59:03 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <system>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <entry name="serial">654413e6-01cd-4e54-a271-6b515a8561e6</entry>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <entry name="uuid">654413e6-01cd-4e54-a271-6b515a8561e6</entry>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     </system>
Oct 14 08:59:03 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:59:03 compute-0 nova_compute[259627]:   <os>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:   </os>
Oct 14 08:59:03 compute-0 nova_compute[259627]:   <features>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:   </features>
Oct 14 08:59:03 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:59:03 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:59:03 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/654413e6-01cd-4e54-a271-6b515a8561e6_disk">
Oct 14 08:59:03 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       </source>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:59:03 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/654413e6-01cd-4e54-a271-6b515a8561e6_disk.config">
Oct 14 08:59:03 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       </source>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:59:03 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:99:b0:d5"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <target dev="tap902a062a-85"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:9d:59:d2"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <target dev="tapfa5f1925-a5"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6/console.log" append="off"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <video>
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     </video>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:59:03 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:59:03 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:59:03 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:59:03 compute-0 nova_compute[259627]: </domain>
Oct 14 08:59:03 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.601 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Preparing to wait for external event network-vif-plugged-902a062a-858b-4495-936b-47a675567467 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.602 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.602 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.602 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.602 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Preparing to wait for external event network-vif-plugged-fa5f1925-a535-45ee-b96e-f79c725d7960 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.623 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.623 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.624 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.624 2 DEBUG nova.virt.libvirt.vif [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1217516639',display_name='tempest-ServersTestMultiNic-server-1217516639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1217516639',id=28,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-1ec16n6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:52Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=654413e6-01cd-4e54-a271-6b515a8561e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.624 2 DEBUG nova.network.os_vif_util [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.625 2 DEBUG nova.network.os_vif_util [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:b0:d5,bridge_name='br-int',has_traffic_filtering=True,id=902a062a-858b-4495-936b-47a675567467,network=Network(b65b17d8-22d8-41b1-aa72-fd93aefdff30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap902a062a-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.625 2 DEBUG os_vif [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:b0:d5,bridge_name='br-int',has_traffic_filtering=True,id=902a062a-858b-4495-936b-47a675567467,network=Network(b65b17d8-22d8-41b1-aa72-fd93aefdff30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap902a062a-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.626 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.627 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.631 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap902a062a-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap902a062a-85, col_values=(('external_ids', {'iface-id': '902a062a-858b-4495-936b-47a675567467', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:b0:d5', 'vm-uuid': '654413e6-01cd-4e54-a271-6b515a8561e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:03 compute-0 NetworkManager[44885]: <info>  [1760432343.6343] manager: (tap902a062a-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.645 2 INFO os_vif [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:b0:d5,bridge_name='br-int',has_traffic_filtering=True,id=902a062a-858b-4495-936b-47a675567467,network=Network(b65b17d8-22d8-41b1-aa72-fd93aefdff30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap902a062a-85')
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.646 2 DEBUG nova.virt.libvirt.vif [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1217516639',display_name='tempest-ServersTestMultiNic-server-1217516639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1217516639',id=28,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-1ec16n6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:52Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=654413e6-01cd-4e54-a271-6b515a8561e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.646 2 DEBUG nova.network.os_vif_util [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.647 2 DEBUG nova.network.os_vif_util [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:59:d2,bridge_name='br-int',has_traffic_filtering=True,id=fa5f1925-a535-45ee-b96e-f79c725d7960,network=Network(7b0bcfe7-c41f-42d1-9739-3dab148a181f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa5f1925-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.647 2 DEBUG os_vif [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:59:d2,bridge_name='br-int',has_traffic_filtering=True,id=fa5f1925-a535-45ee-b96e-f79c725d7960,network=Network(7b0bcfe7-c41f-42d1-9739-3dab148a181f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa5f1925-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.648 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.650 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa5f1925-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.651 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfa5f1925-a5, col_values=(('external_ids', {'iface-id': 'fa5f1925-a535-45ee-b96e-f79c725d7960', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:59:d2', 'vm-uuid': '654413e6-01cd-4e54-a271-6b515a8561e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:03 compute-0 NetworkManager[44885]: <info>  [1760432343.6526] manager: (tapfa5f1925-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.666 2 INFO os_vif [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:59:d2,bridge_name='br-int',has_traffic_filtering=True,id=fa5f1925-a535-45ee-b96e-f79c725d7960,network=Network(7b0bcfe7-c41f-42d1-9739-3dab148a181f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa5f1925-a5')
Oct 14 08:59:03 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3505099324' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:03 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/794689739' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.739 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.740 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.740 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] No VIF found with MAC fa:16:3e:99:b0:d5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.740 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] No VIF found with MAC fa:16:3e:9d:59:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.740 2 INFO nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Using config drive
Oct 14 08:59:03 compute-0 nova_compute[259627]: 2025-10-14 08:59:03.844 2 DEBUG nova.storage.rbd_utils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image 654413e6-01cd-4e54-a271-6b515a8561e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:03 compute-0 sleepy_faraday[295271]: --> passed data devices: 0 physical, 3 LVM
Oct 14 08:59:03 compute-0 sleepy_faraday[295271]: --> relative data size: 1.0
Oct 14 08:59:03 compute-0 sleepy_faraday[295271]: --> All data devices are unavailable
Oct 14 08:59:04 compute-0 systemd[1]: libpod-0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe.scope: Deactivated successfully.
Oct 14 08:59:04 compute-0 systemd[1]: libpod-0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe.scope: Consumed 1.025s CPU time.
Oct 14 08:59:04 compute-0 conmon[295271]: conmon 0fb67794b0f1e97e0b14 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe.scope/container/memory.events
Oct 14 08:59:04 compute-0 podman[295255]: 2025-10-14 08:59:04.01650615 +0000 UTC m=+1.409292967 container died 0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_faraday, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:59:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf4f73a84c044caea4fd8237703057b294687d57311c5d195f65fb37aa6b7615-merged.mount: Deactivated successfully.
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.088 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "51c76e0f-284d-4122-83b4-32c4518b9056" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.088 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "51c76e0f-284d-4122-83b4-32c4518b9056" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.089 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "51c76e0f-284d-4122-83b4-32c4518b9056-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.089 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "51c76e0f-284d-4122-83b4-32c4518b9056-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.089 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "51c76e0f-284d-4122-83b4-32c4518b9056-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.090 2 INFO nova.compute.manager [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Terminating instance
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.090 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "refresh_cache-51c76e0f-284d-4122-83b4-32c4518b9056" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.091 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquired lock "refresh_cache-51c76e0f-284d-4122-83b4-32c4518b9056" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.091 2 DEBUG nova.network.neutron [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:59:04 compute-0 podman[295255]: 2025-10-14 08:59:04.09226068 +0000 UTC m=+1.485047487 container remove 0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 08:59:04 compute-0 systemd[1]: libpod-conmon-0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe.scope: Deactivated successfully.
Oct 14 08:59:04 compute-0 sudo[295061]: pam_unix(sudo:session): session closed for user root
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.161 2 DEBUG nova.network.neutron [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Updated VIF entry in instance network info cache for port 902a062a-858b-4495-936b-47a675567467. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.161 2 DEBUG nova.network.neutron [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Updating instance_info_cache with network_info: [{"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:04 compute-0 sudo[295375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.182 2 DEBUG oslo_concurrency.lockutils [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-654413e6-01cd-4e54-a271-6b515a8561e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.182 2 DEBUG nova.compute.manager [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-changed-fa5f1925-a535-45ee-b96e-f79c725d7960 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.183 2 DEBUG nova.compute.manager [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Refreshing instance network info cache due to event network-changed-fa5f1925-a535-45ee-b96e-f79c725d7960. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.183 2 DEBUG oslo_concurrency.lockutils [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-654413e6-01cd-4e54-a271-6b515a8561e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.183 2 DEBUG oslo_concurrency.lockutils [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-654413e6-01cd-4e54-a271-6b515a8561e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.183 2 DEBUG nova.network.neutron [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Refreshing network info cache for port fa5f1925-a535-45ee-b96e-f79c725d7960 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:59:04 compute-0 sudo[295375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:59:04 compute-0 sudo[295375]: pam_unix(sudo:session): session closed for user root
Oct 14 08:59:04 compute-0 sudo[295400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:59:04 compute-0 sudo[295400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:59:04 compute-0 sudo[295400]: pam_unix(sudo:session): session closed for user root
Oct 14 08:59:04 compute-0 sudo[295425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:59:04 compute-0 sudo[295425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:59:04 compute-0 sudo[295425]: pam_unix(sudo:session): session closed for user root
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.319 2 DEBUG nova.network.neutron [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:59:04 compute-0 sudo[295450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 08:59:04 compute-0 sudo[295450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.493 2 INFO nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Creating config drive at /var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6/disk.config
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.498 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp050kbihl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.634 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp050kbihl" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.659 2 DEBUG nova.storage.rbd_utils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image 654413e6-01cd-4e54-a271-6b515a8561e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.662 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6/disk.config 654413e6-01cd-4e54-a271-6b515a8561e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:04 compute-0 ceph-mon[74249]: pgmap v1227: 305 pgs: 305 active+clean; 319 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 11 MiB/s wr, 309 op/s
Oct 14 08:59:04 compute-0 podman[295533]: 2025-10-14 08:59:04.788335183 +0000 UTC m=+0.059906622 container create bf2631684d8409ac2f551509b8bff1bce180a8ab633abe358dd55ebaf4535d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_yonath, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.806 2 DEBUG nova.compute.manager [req-3d553a74-7424-4728-8819-1e7a4ca88bb9 req-f3b73f02-bd26-43f3-ab44-3399cfed09e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Received event network-vif-plugged-acc7c80f-8812-4bbf-93f8-cc3f1556b62a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.806 2 DEBUG oslo_concurrency.lockutils [req-3d553a74-7424-4728-8819-1e7a4ca88bb9 req-f3b73f02-bd26-43f3-ab44-3399cfed09e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "eb820455-d45c-4331-9363-124f11537f52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.806 2 DEBUG oslo_concurrency.lockutils [req-3d553a74-7424-4728-8819-1e7a4ca88bb9 req-f3b73f02-bd26-43f3-ab44-3399cfed09e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.809 2 DEBUG oslo_concurrency.lockutils [req-3d553a74-7424-4728-8819-1e7a4ca88bb9 req-f3b73f02-bd26-43f3-ab44-3399cfed09e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.809 2 DEBUG nova.compute.manager [req-3d553a74-7424-4728-8819-1e7a4ca88bb9 req-f3b73f02-bd26-43f3-ab44-3399cfed09e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] No waiting events found dispatching network-vif-plugged-acc7c80f-8812-4bbf-93f8-cc3f1556b62a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.809 2 WARNING nova.compute.manager [req-3d553a74-7424-4728-8819-1e7a4ca88bb9 req-f3b73f02-bd26-43f3-ab44-3399cfed09e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Received unexpected event network-vif-plugged-acc7c80f-8812-4bbf-93f8-cc3f1556b62a for instance with vm_state active and task_state None.
Oct 14 08:59:04 compute-0 systemd[1]: Started libpod-conmon-bf2631684d8409ac2f551509b8bff1bce180a8ab633abe358dd55ebaf4535d7e.scope.
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.857 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6/disk.config 654413e6-01cd-4e54-a271-6b515a8561e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.859 2 INFO nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Deleting local config drive /var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6/disk.config because it was imported into RBD.
Oct 14 08:59:04 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:59:04 compute-0 podman[295533]: 2025-10-14 08:59:04.76703754 +0000 UTC m=+0.038608999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:59:04 compute-0 podman[295533]: 2025-10-14 08:59:04.879636665 +0000 UTC m=+0.151208124 container init bf2631684d8409ac2f551509b8bff1bce180a8ab633abe358dd55ebaf4535d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:59:04 compute-0 podman[295533]: 2025-10-14 08:59:04.888760489 +0000 UTC m=+0.160331928 container start bf2631684d8409ac2f551509b8bff1bce180a8ab633abe358dd55ebaf4535d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:59:04 compute-0 podman[295533]: 2025-10-14 08:59:04.893907776 +0000 UTC m=+0.165479235 container attach bf2631684d8409ac2f551509b8bff1bce180a8ab633abe358dd55ebaf4535d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_yonath, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 08:59:04 compute-0 nifty_yonath[295565]: 167 167
Oct 14 08:59:04 compute-0 systemd[1]: libpod-bf2631684d8409ac2f551509b8bff1bce180a8ab633abe358dd55ebaf4535d7e.scope: Deactivated successfully.
Oct 14 08:59:04 compute-0 podman[295533]: 2025-10-14 08:59:04.899239237 +0000 UTC m=+0.170810676 container died bf2631684d8409ac2f551509b8bff1bce180a8ab633abe358dd55ebaf4535d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_yonath, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 08:59:04 compute-0 kernel: tap902a062a-85: entered promiscuous mode
Oct 14 08:59:04 compute-0 NetworkManager[44885]: <info>  [1760432344.9260] manager: (tap902a062a-85): new Tun device (/org/freedesktop/NetworkManager/Devices/111)
Oct 14 08:59:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0b802cc55b58cbfe12901c139270c1a45029af642c1540c9ddc72bcecd43b56-merged.mount: Deactivated successfully.
Oct 14 08:59:04 compute-0 podman[295533]: 2025-10-14 08:59:04.958400589 +0000 UTC m=+0.229972028 container remove bf2631684d8409ac2f551509b8bff1bce180a8ab633abe358dd55ebaf4535d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 08:59:04 compute-0 nova_compute[259627]: 2025-10-14 08:59:04.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:04 compute-0 ovn_controller[152662]: 2025-10-14T08:59:04Z|00205|binding|INFO|Claiming lport 902a062a-858b-4495-936b-47a675567467 for this chassis.
Oct 14 08:59:04 compute-0 ovn_controller[152662]: 2025-10-14T08:59:04Z|00206|binding|INFO|902a062a-858b-4495-936b-47a675567467: Claiming fa:16:3e:99:b0:d5 10.100.0.125
Oct 14 08:59:04 compute-0 NetworkManager[44885]: <info>  [1760432344.9646] manager: (tapfa5f1925-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Oct 14 08:59:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:04.969 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:b0:d5 10.100.0.125'], port_security=['fa:16:3e:99:b0:d5 10.100.0.125'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.125/24', 'neutron:device_id': '654413e6-01cd-4e54-a271-6b515a8561e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b65b17d8-22d8-41b1-aa72-fd93aefdff30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba2a7d22-c618-4486-a804-fe221f5826d8, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=902a062a-858b-4495-936b-47a675567467) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:04.970 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 902a062a-858b-4495-936b-47a675567467 in datapath b65b17d8-22d8-41b1-aa72-fd93aefdff30 bound to our chassis
Oct 14 08:59:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:04.972 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b65b17d8-22d8-41b1-aa72-fd93aefdff30
Oct 14 08:59:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:04.985 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ee1bc40b-2223-4e11-b02b-3006f612455b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:04.986 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb65b17d8-21 in ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:59:04 compute-0 systemd-udevd[295597]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:59:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:04.990 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb65b17d8-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:59:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:04.990 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3de697-7650-451e-a81a-3dac4a9a40b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:04.992 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b24e65be-1a4b-46bc-ae65-af77185808e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:04 compute-0 systemd-udevd[295600]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:59:05 compute-0 NetworkManager[44885]: <info>  [1760432345.0027] device (tap902a062a-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:59:05 compute-0 NetworkManager[44885]: <info>  [1760432345.0036] device (tap902a062a-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:59:05 compute-0 systemd-machined[214636]: New machine qemu-33-instance-0000001c.
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.017 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad6c9b1-d1b6-4464-a424-ea231b8477a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:05 compute-0 NetworkManager[44885]: <info>  [1760432345.0196] device (tapfa5f1925-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:59:05 compute-0 kernel: tapfa5f1925-a5: entered promiscuous mode
Oct 14 08:59:05 compute-0 NetworkManager[44885]: <info>  [1760432345.0209] device (tapfa5f1925-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:59:05 compute-0 ovn_controller[152662]: 2025-10-14T08:59:05Z|00207|binding|INFO|Claiming lport fa5f1925-a535-45ee-b96e-f79c725d7960 for this chassis.
Oct 14 08:59:05 compute-0 ovn_controller[152662]: 2025-10-14T08:59:05Z|00208|binding|INFO|fa5f1925-a535-45ee-b96e-f79c725d7960: Claiming fa:16:3e:9d:59:d2 10.100.1.203
Oct 14 08:59:05 compute-0 nova_compute[259627]: 2025-10-14 08:59:05.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:05 compute-0 systemd[1]: Started Virtual Machine qemu-33-instance-0000001c.
Oct 14 08:59:05 compute-0 systemd[1]: libpod-conmon-bf2631684d8409ac2f551509b8bff1bce180a8ab633abe358dd55ebaf4535d7e.scope: Deactivated successfully.
Oct 14 08:59:05 compute-0 nova_compute[259627]: 2025-10-14 08:59:05.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:05 compute-0 ovn_controller[152662]: 2025-10-14T08:59:05Z|00209|binding|INFO|Setting lport 902a062a-858b-4495-936b-47a675567467 ovn-installed in OVS
Oct 14 08:59:05 compute-0 nova_compute[259627]: 2025-10-14 08:59:05.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:05 compute-0 ovn_controller[152662]: 2025-10-14T08:59:05Z|00210|binding|INFO|Setting lport 902a062a-858b-4495-936b-47a675567467 up in Southbound
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.040 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:59:d2 10.100.1.203'], port_security=['fa:16:3e:9d:59:d2 10.100.1.203'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.203/24', 'neutron:device_id': '654413e6-01cd-4e54-a271-6b515a8561e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0bcfe7-c41f-42d1-9739-3dab148a181f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26d3dea5-d3a1-43cc-a801-df7cba99d5e2, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=fa5f1925-a535-45ee-b96e-f79c725d7960) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.048 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ba57cc-97df-4191-b376-596c617b221b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:05 compute-0 ovn_controller[152662]: 2025-10-14T08:59:05Z|00211|binding|INFO|Setting lport fa5f1925-a535-45ee-b96e-f79c725d7960 ovn-installed in OVS
Oct 14 08:59:05 compute-0 ovn_controller[152662]: 2025-10-14T08:59:05Z|00212|binding|INFO|Setting lport fa5f1925-a535-45ee-b96e-f79c725d7960 up in Southbound
Oct 14 08:59:05 compute-0 nova_compute[259627]: 2025-10-14 08:59:05.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.084 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab9c019-be07-4389-b46d-28c928ad0e12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:05 compute-0 NetworkManager[44885]: <info>  [1760432345.0962] manager: (tapb65b17d8-20): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.095 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0c8f0134-1d02-4027-8105-169b6f1c3113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:05 compute-0 systemd-udevd[295604]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.146 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9e5690b5-d5a9-4db3-a613-b07532f0d5ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.149 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a489eb21-0b7e-42ee-bb43-c77d8f270d22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:05 compute-0 NetworkManager[44885]: <info>  [1760432345.1924] device (tapb65b17d8-20): carrier: link connected
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.202 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ea9b8f79-2c85-4f1b-8036-a55d26fd4755]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:05 compute-0 podman[295624]: 2025-10-14 08:59:05.209042854 +0000 UTC m=+0.075289430 container create ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.241 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1e40b419-245a-40b4-b8b3-c2133d7fc0f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb65b17d8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:78:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612395, 'reachable_time': 39120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295649, 'error': None, 'target': 'ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:05 compute-0 systemd[1]: Started libpod-conmon-ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a.scope.
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.267 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[67dfa044-9007-46ff-9389-356addff73b2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:7891'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 612395, 'tstamp': 612395}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295652, 'error': None, 'target': 'ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:05 compute-0 podman[295624]: 2025-10-14 08:59:05.180450432 +0000 UTC m=+0.046697038 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:59:05 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b01175819c7b1e2eaee5fac569d9c73e78e8e4f6a01bdfbcab1a48d6bced2fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b01175819c7b1e2eaee5fac569d9c73e78e8e4f6a01bdfbcab1a48d6bced2fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b01175819c7b1e2eaee5fac569d9c73e78e8e4f6a01bdfbcab1a48d6bced2fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b01175819c7b1e2eaee5fac569d9c73e78e8e4f6a01bdfbcab1a48d6bced2fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.293 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[79288cb1-bb27-46ba-ae0f-7e233eb2843a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb65b17d8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:78:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612395, 'reachable_time': 39120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295656, 'error': None, 'target': 'ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:05 compute-0 podman[295624]: 2025-10-14 08:59:05.316518693 +0000 UTC m=+0.182765299 container init ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_vaughan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 08:59:05 compute-0 podman[295624]: 2025-10-14 08:59:05.32613902 +0000 UTC m=+0.192385596 container start ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_vaughan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 08:59:05 compute-0 podman[295624]: 2025-10-14 08:59:05.330102567 +0000 UTC m=+0.196349143 container attach ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_vaughan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.338 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8eba1017-77a3-4c47-861a-a301ca2901a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.438 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b13c4a14-5b90-47ec-8c13-c8b5ed220a3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.441 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb65b17d8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.441 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.442 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb65b17d8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:05 compute-0 kernel: tapb65b17d8-20: entered promiscuous mode
Oct 14 08:59:05 compute-0 NetworkManager[44885]: <info>  [1760432345.4761] manager: (tapb65b17d8-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Oct 14 08:59:05 compute-0 nova_compute[259627]: 2025-10-14 08:59:05.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.488 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb65b17d8-20, col_values=(('external_ids', {'iface-id': 'a9cc546a-2b36-4c1a-bae3-27bbea3be016'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:05 compute-0 ovn_controller[152662]: 2025-10-14T08:59:05Z|00213|binding|INFO|Releasing lport a9cc546a-2b36-4c1a-bae3-27bbea3be016 from this chassis (sb_readonly=0)
Oct 14 08:59:05 compute-0 nova_compute[259627]: 2025-10-14 08:59:05.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.494 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b65b17d8-22d8-41b1-aa72-fd93aefdff30.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b65b17d8-22d8-41b1-aa72-fd93aefdff30.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.508 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3684156a-302b-400b-ae9c-875e5e63594b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:05 compute-0 nova_compute[259627]: 2025-10-14 08:59:05.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.516 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-b65b17d8-22d8-41b1-aa72-fd93aefdff30
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/b65b17d8-22d8-41b1-aa72-fd93aefdff30.pid.haproxy
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID b65b17d8-22d8-41b1-aa72-fd93aefdff30
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:59:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.517 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30', 'env', 'PROCESS_TAG=haproxy-b65b17d8-22d8-41b1-aa72-fd93aefdff30', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b65b17d8-22d8-41b1-aa72-fd93aefdff30.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:59:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 08:59:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1573849996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 08:59:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 08:59:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1573849996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 08:59:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 321 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.8 MiB/s rd, 11 MiB/s wr, 598 op/s
Oct 14 08:59:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1573849996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 08:59:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1573849996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 08:59:05 compute-0 nova_compute[259627]: 2025-10-14 08:59:05.841 2 DEBUG nova.network.neutron [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:05 compute-0 nova_compute[259627]: 2025-10-14 08:59:05.862 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Releasing lock "refresh_cache-51c76e0f-284d-4122-83b4-32c4518b9056" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:59:05 compute-0 nova_compute[259627]: 2025-10-14 08:59:05.862 2 DEBUG nova.compute.manager [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:59:05 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 14 08:59:05 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000016.scope: Consumed 3.820s CPU time.
Oct 14 08:59:05 compute-0 systemd-machined[214636]: Machine qemu-31-instance-00000016 terminated.
Oct 14 08:59:06 compute-0 podman[295729]: 2025-10-14 08:59:06.042223474 +0000 UTC m=+0.057670667 container create eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true)
Oct 14 08:59:06 compute-0 systemd[1]: Started libpod-conmon-eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8.scope.
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.089 2 INFO nova.virt.libvirt.driver [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance destroyed successfully.
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.091 2 DEBUG nova.objects.instance [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'resources' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:06 compute-0 podman[295729]: 2025-10-14 08:59:06.015896537 +0000 UTC m=+0.031343760 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:59:06 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:59:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b2261a989d2df2c2391ffadc57ac59269fc091a983c6619c6804ba5d6a17e8d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:59:06 compute-0 podman[295729]: 2025-10-14 08:59:06.130475441 +0000 UTC m=+0.145922624 container init eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 08:59:06 compute-0 podman[295729]: 2025-10-14 08:59:06.136157221 +0000 UTC m=+0.151604404 container start eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]: {
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:     "0": [
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:         {
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "devices": [
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "/dev/loop3"
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             ],
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "lv_name": "ceph_lv0",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "lv_size": "21470642176",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "name": "ceph_lv0",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "tags": {
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.cluster_name": "ceph",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.crush_device_class": "",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.encrypted": "0",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.osd_id": "0",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.type": "block",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.vdo": "0"
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             },
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "type": "block",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "vg_name": "ceph_vg0"
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:         }
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:     ],
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:     "1": [
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:         {
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "devices": [
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "/dev/loop4"
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             ],
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "lv_name": "ceph_lv1",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "lv_size": "21470642176",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "name": "ceph_lv1",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "tags": {
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.cluster_name": "ceph",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.crush_device_class": "",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.encrypted": "0",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.osd_id": "1",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.type": "block",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.vdo": "0"
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             },
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "type": "block",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "vg_name": "ceph_vg1"
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:         }
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:     ],
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:     "2": [
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:         {
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "devices": [
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "/dev/loop5"
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             ],
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "lv_name": "ceph_lv2",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "lv_size": "21470642176",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "name": "ceph_lv2",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "tags": {
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.cluster_name": "ceph",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.crush_device_class": "",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.encrypted": "0",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.osd_id": "2",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.type": "block",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:                 "ceph.vdo": "0"
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             },
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "type": "block",
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:             "vg_name": "ceph_vg2"
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:         }
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]:     ]
Oct 14 08:59:06 compute-0 cranky_vaughan[295653]: }
Oct 14 08:59:06 compute-0 neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30[295747]: [NOTICE]   (295760) : New worker (295770) forked
Oct 14 08:59:06 compute-0 neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30[295747]: [NOTICE]   (295760) : Loading success.
Oct 14 08:59:06 compute-0 systemd[1]: libpod-ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a.scope: Deactivated successfully.
Oct 14 08:59:06 compute-0 conmon[295653]: conmon ff3296f6e4bc3cd61276 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a.scope/container/memory.events
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.200 162547 INFO neutron.agent.ovn.metadata.agent [-] Port fa5f1925-a535-45ee-b96e-f79c725d7960 in datapath 7b0bcfe7-c41f-42d1-9739-3dab148a181f unbound from our chassis
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.207 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b0bcfe7-c41f-42d1-9739-3dab148a181f
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.218 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3a880936-0638-4877-8834-4de9c3fb7e65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.218 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7b0bcfe7-c1 in ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.220 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7b0bcfe7-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.220 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc54952-ee94-4598-aeec-3134d16d9595]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.221 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[33f6ced0-0357-46ca-b6a1-235483e5da36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.233 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432346.2328403, 654413e6-01cd-4e54-a271-6b515a8561e6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.233 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] VM Started (Lifecycle Event)
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.234 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[582bb1c7-a013-48b2-bf51-91e2877b37fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:06 compute-0 podman[295779]: 2025-10-14 08:59:06.236559136 +0000 UTC m=+0.037798209 container died ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.250 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6e3acf2d-9db5-4fb1-837d-d071af32e114]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.257 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.263 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432346.2346213, 654413e6-01cd-4e54-a271-6b515a8561e6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.263 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] VM Paused (Lifecycle Event)
Oct 14 08:59:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b01175819c7b1e2eaee5fac569d9c73e78e8e4f6a01bdfbcab1a48d6bced2fd-merged.mount: Deactivated successfully.
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.277 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[91785e8e-6d34-40ba-a21e-bc15fe0b42ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.288 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c1fa541-f215-4cf8-b5fe-c16b66876b51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:06 compute-0 NetworkManager[44885]: <info>  [1760432346.2894] manager: (tap7b0bcfe7-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/115)
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.289 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:06 compute-0 systemd-udevd[295640]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.296 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:06 compute-0 podman[295779]: 2025-10-14 08:59:06.326272769 +0000 UTC m=+0.127511832 container remove ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_vaughan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.327 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:59:06 compute-0 systemd[1]: libpod-conmon-ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a.scope: Deactivated successfully.
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.348 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2183cc1e-0c63-495b-bb35-c7bcb991949b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.357 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fb78f660-4207-4d7c-89bb-6a5562a4171f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:06 compute-0 sudo[295450]: pam_unix(sudo:session): session closed for user root
Oct 14 08:59:06 compute-0 NetworkManager[44885]: <info>  [1760432346.3906] device (tap7b0bcfe7-c0): carrier: link connected
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.397 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[120e53de-090f-4516-9e00-ea3d4496eb2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.416 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb5c219-5f56-4e7f-9aa6-91826382f2c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0bcfe7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:a0:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612515, 'reachable_time': 41221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295819, 'error': None, 'target': 'ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.435 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f00095e6-4661-4f08-9be4-c50bf951ac33]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec1:a0ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 612515, 'tstamp': 612515}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295830, 'error': None, 'target': 'ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:06 compute-0 sudo[295806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:59:06 compute-0 sudo[295806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:59:06 compute-0 sudo[295806]: pam_unix(sudo:session): session closed for user root
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.458 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fd986a3d-e95b-41eb-96ca-a03c4bb80ca5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0bcfe7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:a0:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612515, 'reachable_time': 41221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295832, 'error': None, 'target': 'ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.491 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d981d1-6755-454f-9e59-f19960be35c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:06 compute-0 sudo[295834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 08:59:06 compute-0 sudo[295834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:59:06 compute-0 sudo[295834]: pam_unix(sudo:session): session closed for user root
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.554 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[201eeb0b-b771-43af-86f2-a15f86a88923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.556 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0bcfe7-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.556 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.556 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b0bcfe7-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:06 compute-0 NetworkManager[44885]: <info>  [1760432346.5595] manager: (tap7b0bcfe7-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:06 compute-0 kernel: tap7b0bcfe7-c0: entered promiscuous mode
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.563 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b0bcfe7-c0, col_values=(('external_ids', {'iface-id': '807b5c61-142e-48b3-a22e-e804f0499ef7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:06 compute-0 ovn_controller[152662]: 2025-10-14T08:59:06Z|00214|binding|INFO|Releasing lport 807b5c61-142e-48b3-a22e-e804f0499ef7 from this chassis (sb_readonly=0)
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:06 compute-0 sudo[295864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:59:06 compute-0 sudo[295864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:59:06 compute-0 sudo[295864]: pam_unix(sudo:session): session closed for user root
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.593 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7b0bcfe7-c41f-42d1-9739-3dab148a181f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7b0bcfe7-c41f-42d1-9739-3dab148a181f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.595 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[20374efa-4ab3-476c-9e26-469dcc116428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.596 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-7b0bcfe7-c41f-42d1-9739-3dab148a181f
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/7b0bcfe7-c41f-42d1-9739-3dab148a181f.pid.haproxy
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 7b0bcfe7-c41f-42d1-9739-3dab148a181f
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:59:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.597 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f', 'env', 'PROCESS_TAG=haproxy-7b0bcfe7-c41f-42d1-9739-3dab148a181f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7b0bcfe7-c41f-42d1-9739-3dab148a181f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:59:06 compute-0 sudo[295892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 08:59:06 compute-0 sudo[295892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.705 2 INFO nova.virt.libvirt.driver [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deleting instance files /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056_del
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.706 2 INFO nova.virt.libvirt.driver [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deletion of /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056_del complete
Oct 14 08:59:06 compute-0 ceph-mon[74249]: pgmap v1228: 305 pgs: 305 active+clean; 321 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.8 MiB/s rd, 11 MiB/s wr, 598 op/s
Oct 14 08:59:06 compute-0 ovn_controller[152662]: 2025-10-14T08:59:06Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:fb:42 10.100.0.3
Oct 14 08:59:06 compute-0 ovn_controller[152662]: 2025-10-14T08:59:06Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:fb:42 10.100.0.3
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.796 2 DEBUG nova.compute.manager [req-d75ff9b9-6b67-49fd-b910-10c0b8cee8eb req-7a81d7d7-c420-4017-a454-d40254e4db70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-plugged-902a062a-858b-4495-936b-47a675567467 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.797 2 DEBUG oslo_concurrency.lockutils [req-d75ff9b9-6b67-49fd-b910-10c0b8cee8eb req-7a81d7d7-c420-4017-a454-d40254e4db70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.797 2 DEBUG oslo_concurrency.lockutils [req-d75ff9b9-6b67-49fd-b910-10c0b8cee8eb req-7a81d7d7-c420-4017-a454-d40254e4db70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.798 2 DEBUG oslo_concurrency.lockutils [req-d75ff9b9-6b67-49fd-b910-10c0b8cee8eb req-7a81d7d7-c420-4017-a454-d40254e4db70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.798 2 DEBUG nova.compute.manager [req-d75ff9b9-6b67-49fd-b910-10c0b8cee8eb req-7a81d7d7-c420-4017-a454-d40254e4db70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Processing event network-vif-plugged-902a062a-858b-4495-936b-47a675567467 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.799 2 INFO nova.compute.manager [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Took 0.94 seconds to destroy the instance on the hypervisor.
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.799 2 DEBUG oslo.service.loopingcall [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.800 2 DEBUG nova.compute.manager [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.800 2 DEBUG nova.network.neutron [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:59:06 compute-0 nova_compute[259627]: 2025-10-14 08:59:06.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:07.017 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:07.018 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:07.019 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:07 compute-0 podman[295973]: 2025-10-14 08:59:07.078201193 +0000 UTC m=+0.062327142 container create 2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.092 2 DEBUG nova.network.neutron [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.109 2 DEBUG nova.network.neutron [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:07 compute-0 podman[295979]: 2025-10-14 08:59:07.111025569 +0000 UTC m=+0.081743649 container create c8e9fe974979f1bd0bb09550d097bc5d386b401898a73d18de2b62016ba26a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lehmann, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 14 08:59:07 compute-0 systemd[1]: Started libpod-conmon-2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa.scope.
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.127 2 INFO nova.compute.manager [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Took 0.33 seconds to deallocate network for instance.
Oct 14 08:59:07 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:59:07 compute-0 podman[295973]: 2025-10-14 08:59:07.053718961 +0000 UTC m=+0.037844930 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:59:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee5fee482ca97cde7b330eef7b447de260a6448dd9b51c26764ac8cff20927e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:59:07 compute-0 systemd[1]: Started libpod-conmon-c8e9fe974979f1bd0bb09550d097bc5d386b401898a73d18de2b62016ba26a16.scope.
Oct 14 08:59:07 compute-0 podman[295979]: 2025-10-14 08:59:07.073770424 +0000 UTC m=+0.044488524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:59:07 compute-0 podman[295973]: 2025-10-14 08:59:07.169599867 +0000 UTC m=+0.153725826 container init 2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 14 08:59:07 compute-0 podman[295973]: 2025-10-14 08:59:07.178655059 +0000 UTC m=+0.162781018 container start 2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 08:59:07 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.184 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.185 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:07 compute-0 podman[295979]: 2025-10-14 08:59:07.201040229 +0000 UTC m=+0.171758299 container init c8e9fe974979f1bd0bb09550d097bc5d386b401898a73d18de2b62016ba26a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 08:59:07 compute-0 neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f[296003]: [NOTICE]   (296014) : New worker (296017) forked
Oct 14 08:59:07 compute-0 neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f[296003]: [NOTICE]   (296014) : Loading success.
Oct 14 08:59:07 compute-0 podman[295979]: 2025-10-14 08:59:07.207788635 +0000 UTC m=+0.178506715 container start c8e9fe974979f1bd0bb09550d097bc5d386b401898a73d18de2b62016ba26a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:59:07 compute-0 podman[295979]: 2025-10-14 08:59:07.211715191 +0000 UTC m=+0.182433271 container attach c8e9fe974979f1bd0bb09550d097bc5d386b401898a73d18de2b62016ba26a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:59:07 compute-0 jolly_lehmann[296010]: 167 167
Oct 14 08:59:07 compute-0 systemd[1]: libpod-c8e9fe974979f1bd0bb09550d097bc5d386b401898a73d18de2b62016ba26a16.scope: Deactivated successfully.
Oct 14 08:59:07 compute-0 podman[295979]: 2025-10-14 08:59:07.21533054 +0000 UTC m=+0.186048620 container died c8e9fe974979f1bd0bb09550d097bc5d386b401898a73d18de2b62016ba26a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.226 2 DEBUG nova.scheduler.client.report [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 14 08:59:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae668dbdc9ed47c922b5b15d592565495b9ebd0afa22ab974257c776b40a004e-merged.mount: Deactivated successfully.
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.245 2 DEBUG nova.compute.manager [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-plugged-fa5f1925-a535-45ee-b96e-f79c725d7960 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.245 2 DEBUG oslo_concurrency.lockutils [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.246 2 DEBUG oslo_concurrency.lockutils [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.246 2 DEBUG oslo_concurrency.lockutils [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.246 2 DEBUG nova.compute.manager [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Processing event network-vif-plugged-fa5f1925-a535-45ee-b96e-f79c725d7960 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.246 2 DEBUG nova.compute.manager [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-plugged-fa5f1925-a535-45ee-b96e-f79c725d7960 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.247 2 DEBUG oslo_concurrency.lockutils [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.247 2 DEBUG oslo_concurrency.lockutils [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.247 2 DEBUG oslo_concurrency.lockutils [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.247 2 DEBUG nova.compute.manager [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] No waiting events found dispatching network-vif-plugged-fa5f1925-a535-45ee-b96e-f79c725d7960 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.248 2 WARNING nova.compute.manager [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received unexpected event network-vif-plugged-fa5f1925-a535-45ee-b96e-f79c725d7960 for instance with vm_state building and task_state spawning.
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.249 2 DEBUG nova.scheduler.client.report [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 14 08:59:07 compute-0 podman[295979]: 2025-10-14 08:59:07.252666447 +0000 UTC m=+0.223384527 container remove c8e9fe974979f1bd0bb09550d097bc5d386b401898a73d18de2b62016ba26a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.252 2 DEBUG nova.compute.provider_tree [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.254 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.261 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432347.2601912, 654413e6-01cd-4e54-a271-6b515a8561e6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.262 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] VM Resumed (Lifecycle Event)
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.264 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.269 2 INFO nova.virt.libvirt.driver [-] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Instance spawned successfully.
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.270 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.274 2 DEBUG nova.scheduler.client.report [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.294 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.299 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.302 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.303 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:07 compute-0 systemd[1]: libpod-conmon-c8e9fe974979f1bd0bb09550d097bc5d386b401898a73d18de2b62016ba26a16.scope: Deactivated successfully.
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.303 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.304 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.305 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.307 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.314 2 DEBUG nova.scheduler.client.report [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.331 2 DEBUG nova.network.neutron [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Updated VIF entry in instance network info cache for port fa5f1925-a535-45ee-b96e-f79c725d7960. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.332 2 DEBUG nova.network.neutron [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Updating instance_info_cache with network_info: [{"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.336 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.348 2 DEBUG oslo_concurrency.lockutils [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-654413e6-01cd-4e54-a271-6b515a8561e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:59:07 compute-0 podman[296044]: 2025-10-14 08:59:07.45356313 +0000 UTC m=+0.040658479 container create f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cray, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:59:07 compute-0 systemd[1]: Started libpod-conmon-f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35.scope.
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.503 2 INFO nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Took 15.01 seconds to spawn the instance on the hypervisor.
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.504 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:59:07 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:59:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe99259d5987fbc37765d9822372bbaf9f5af48857c4abfa71da3098c06674f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 08:59:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe99259d5987fbc37765d9822372bbaf9f5af48857c4abfa71da3098c06674f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 08:59:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe99259d5987fbc37765d9822372bbaf9f5af48857c4abfa71da3098c06674f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 08:59:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe99259d5987fbc37765d9822372bbaf9f5af48857c4abfa71da3098c06674f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 08:59:07 compute-0 podman[296044]: 2025-10-14 08:59:07.439479464 +0000 UTC m=+0.026574833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 08:59:07 compute-0 podman[296044]: 2025-10-14 08:59:07.55331944 +0000 UTC m=+0.140414809 container init f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cray, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 08:59:07 compute-0 podman[296044]: 2025-10-14 08:59:07.561161062 +0000 UTC m=+0.148256411 container start f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 08:59:07 compute-0 podman[296044]: 2025-10-14 08:59:07.56473297 +0000 UTC m=+0.151828319 container attach f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cray, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 08:59:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 321 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.2 MiB/s rd, 3.9 MiB/s wr, 433 op/s
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.574 2 DEBUG oslo_concurrency.processutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.614 2 INFO nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Took 16.12 seconds to build instance.
Oct 14 08:59:07 compute-0 nova_compute[259627]: 2025-10-14 08:59:07.631 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:59:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/381741753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:08 compute-0 nova_compute[259627]: 2025-10-14 08:59:08.067 2 DEBUG oslo_concurrency.processutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:08 compute-0 nova_compute[259627]: 2025-10-14 08:59:08.075 2 DEBUG nova.compute.provider_tree [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:59:08 compute-0 nova_compute[259627]: 2025-10-14 08:59:08.100 2 DEBUG nova.virt.libvirt.driver [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 08:59:08 compute-0 nova_compute[259627]: 2025-10-14 08:59:08.102 2 DEBUG nova.scheduler.client.report [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:59:08 compute-0 nova_compute[259627]: 2025-10-14 08:59:08.252 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:08 compute-0 nova_compute[259627]: 2025-10-14 08:59:08.405 2 INFO nova.scheduler.client.report [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Deleted allocations for instance 51c76e0f-284d-4122-83b4-32c4518b9056
Oct 14 08:59:08 compute-0 nova_compute[259627]: 2025-10-14 08:59:08.562 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "51c76e0f-284d-4122-83b4-32c4518b9056" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:08 compute-0 awesome_cray[296058]: {
Oct 14 08:59:08 compute-0 awesome_cray[296058]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 08:59:08 compute-0 awesome_cray[296058]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:59:08 compute-0 awesome_cray[296058]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 08:59:08 compute-0 awesome_cray[296058]:         "osd_id": 2,
Oct 14 08:59:08 compute-0 awesome_cray[296058]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 08:59:08 compute-0 awesome_cray[296058]:         "type": "bluestore"
Oct 14 08:59:08 compute-0 awesome_cray[296058]:     },
Oct 14 08:59:08 compute-0 awesome_cray[296058]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 08:59:08 compute-0 awesome_cray[296058]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:59:08 compute-0 awesome_cray[296058]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 08:59:08 compute-0 awesome_cray[296058]:         "osd_id": 1,
Oct 14 08:59:08 compute-0 awesome_cray[296058]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 08:59:08 compute-0 awesome_cray[296058]:         "type": "bluestore"
Oct 14 08:59:08 compute-0 awesome_cray[296058]:     },
Oct 14 08:59:08 compute-0 awesome_cray[296058]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 08:59:08 compute-0 awesome_cray[296058]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 08:59:08 compute-0 awesome_cray[296058]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 08:59:08 compute-0 awesome_cray[296058]:         "osd_id": 0,
Oct 14 08:59:08 compute-0 awesome_cray[296058]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 08:59:08 compute-0 awesome_cray[296058]:         "type": "bluestore"
Oct 14 08:59:08 compute-0 awesome_cray[296058]:     }
Oct 14 08:59:08 compute-0 awesome_cray[296058]: }
Oct 14 08:59:08 compute-0 systemd[1]: libpod-f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35.scope: Deactivated successfully.
Oct 14 08:59:08 compute-0 conmon[296058]: conmon f6fc0e86a3a5b0c8eacf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35.scope/container/memory.events
Oct 14 08:59:08 compute-0 podman[296044]: 2025-10-14 08:59:08.642300371 +0000 UTC m=+1.229395720 container died f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 08:59:08 compute-0 nova_compute[259627]: 2025-10-14 08:59:08.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe99259d5987fbc37765d9822372bbaf9f5af48857c4abfa71da3098c06674f7-merged.mount: Deactivated successfully.
Oct 14 08:59:08 compute-0 podman[296044]: 2025-10-14 08:59:08.694085773 +0000 UTC m=+1.281181122 container remove f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 08:59:08 compute-0 systemd[1]: libpod-conmon-f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35.scope: Deactivated successfully.
Oct 14 08:59:08 compute-0 ceph-mon[74249]: pgmap v1229: 305 pgs: 305 active+clean; 321 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.2 MiB/s rd, 3.9 MiB/s wr, 433 op/s
Oct 14 08:59:08 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/381741753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:08 compute-0 sudo[295892]: pam_unix(sudo:session): session closed for user root
Oct 14 08:59:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 08:59:08 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:59:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 08:59:08 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:59:08 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 18036fd2-3b0e-4a50-bfae-ea198d20e47d does not exist
Oct 14 08:59:08 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev e9b7a5ef-ca1c-4d02-a34b-d3f81914edf9 does not exist
Oct 14 08:59:08 compute-0 sudo[296126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 08:59:08 compute-0 sudo[296126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:59:08 compute-0 sudo[296126]: pam_unix(sudo:session): session closed for user root
Oct 14 08:59:08 compute-0 sudo[296151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 08:59:08 compute-0 sudo[296151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 08:59:08 compute-0 sudo[296151]: pam_unix(sudo:session): session closed for user root
Oct 14 08:59:09 compute-0 nova_compute[259627]: 2025-10-14 08:59:09.014 2 DEBUG nova.compute.manager [req-e26d17a1-19d5-4e07-97c0-c4c699b6e66a req-1abae74d-ca85-473e-a07f-d06d7e0578cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-plugged-902a062a-858b-4495-936b-47a675567467 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:09 compute-0 nova_compute[259627]: 2025-10-14 08:59:09.014 2 DEBUG oslo_concurrency.lockutils [req-e26d17a1-19d5-4e07-97c0-c4c699b6e66a req-1abae74d-ca85-473e-a07f-d06d7e0578cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:09 compute-0 nova_compute[259627]: 2025-10-14 08:59:09.014 2 DEBUG oslo_concurrency.lockutils [req-e26d17a1-19d5-4e07-97c0-c4c699b6e66a req-1abae74d-ca85-473e-a07f-d06d7e0578cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:09 compute-0 nova_compute[259627]: 2025-10-14 08:59:09.014 2 DEBUG oslo_concurrency.lockutils [req-e26d17a1-19d5-4e07-97c0-c4c699b6e66a req-1abae74d-ca85-473e-a07f-d06d7e0578cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:09 compute-0 nova_compute[259627]: 2025-10-14 08:59:09.015 2 DEBUG nova.compute.manager [req-e26d17a1-19d5-4e07-97c0-c4c699b6e66a req-1abae74d-ca85-473e-a07f-d06d7e0578cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] No waiting events found dispatching network-vif-plugged-902a062a-858b-4495-936b-47a675567467 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:09 compute-0 nova_compute[259627]: 2025-10-14 08:59:09.015 2 WARNING nova.compute.manager [req-e26d17a1-19d5-4e07-97c0-c4c699b6e66a req-1abae74d-ca85-473e-a07f-d06d7e0578cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received unexpected event network-vif-plugged-902a062a-858b-4495-936b-47a675567467 for instance with vm_state active and task_state None.
Oct 14 08:59:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 321 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.2 MiB/s rd, 3.9 MiB/s wr, 433 op/s
Oct 14 08:59:09 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:59:09 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.131 2 DEBUG oslo_concurrency.lockutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.132 2 DEBUG oslo_concurrency.lockutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.132 2 DEBUG oslo_concurrency.lockutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.132 2 DEBUG oslo_concurrency.lockutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.133 2 DEBUG oslo_concurrency.lockutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.134 2 INFO nova.compute.manager [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Terminating instance
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.135 2 DEBUG nova.compute.manager [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:59:10 compute-0 kernel: tap902a062a-85 (unregistering): left promiscuous mode
Oct 14 08:59:10 compute-0 NetworkManager[44885]: <info>  [1760432350.1951] device (tap902a062a-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:59:10 compute-0 ovn_controller[152662]: 2025-10-14T08:59:10Z|00215|binding|INFO|Releasing lport 902a062a-858b-4495-936b-47a675567467 from this chassis (sb_readonly=0)
Oct 14 08:59:10 compute-0 ovn_controller[152662]: 2025-10-14T08:59:10Z|00216|binding|INFO|Setting lport 902a062a-858b-4495-936b-47a675567467 down in Southbound
Oct 14 08:59:10 compute-0 ovn_controller[152662]: 2025-10-14T08:59:10Z|00217|binding|INFO|Removing iface tap902a062a-85 ovn-installed in OVS
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.212 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:b0:d5 10.100.0.125'], port_security=['fa:16:3e:99:b0:d5 10.100.0.125'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.125/24', 'neutron:device_id': '654413e6-01cd-4e54-a271-6b515a8561e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b65b17d8-22d8-41b1-aa72-fd93aefdff30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba2a7d22-c618-4486-a804-fe221f5826d8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=902a062a-858b-4495-936b-47a675567467) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.214 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 902a062a-858b-4495-936b-47a675567467 in datapath b65b17d8-22d8-41b1-aa72-fd93aefdff30 unbound from our chassis
Oct 14 08:59:10 compute-0 kernel: tapfa5f1925-a5 (unregistering): left promiscuous mode
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.216 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b65b17d8-22d8-41b1-aa72-fd93aefdff30, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:59:10 compute-0 NetworkManager[44885]: <info>  [1760432350.2214] device (tapfa5f1925-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.218 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec9f351-a8c4-49b9-bb9d-1162a25efb30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.219 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30 namespace which is not needed anymore
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 ovn_controller[152662]: 2025-10-14T08:59:10Z|00218|binding|INFO|Releasing lport fa5f1925-a535-45ee-b96e-f79c725d7960 from this chassis (sb_readonly=0)
Oct 14 08:59:10 compute-0 ovn_controller[152662]: 2025-10-14T08:59:10Z|00219|binding|INFO|Setting lport fa5f1925-a535-45ee-b96e-f79c725d7960 down in Southbound
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 ovn_controller[152662]: 2025-10-14T08:59:10Z|00220|binding|INFO|Removing iface tapfa5f1925-a5 ovn-installed in OVS
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.263 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:59:d2 10.100.1.203'], port_security=['fa:16:3e:9d:59:d2 10.100.1.203'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.203/24', 'neutron:device_id': '654413e6-01cd-4e54-a271-6b515a8561e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0bcfe7-c41f-42d1-9739-3dab148a181f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26d3dea5-d3a1-43cc-a801-df7cba99d5e2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=fa5f1925-a535-45ee-b96e-f79c725d7960) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Oct 14 08:59:10 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001c.scope: Consumed 3.688s CPU time.
Oct 14 08:59:10 compute-0 systemd-machined[214636]: Machine qemu-33-instance-0000001c terminated.
Oct 14 08:59:10 compute-0 NetworkManager[44885]: <info>  [1760432350.3687] manager: (tapfa5f1925-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Oct 14 08:59:10 compute-0 kernel: tapbcdd5079-ef (unregistering): left promiscuous mode
Oct 14 08:59:10 compute-0 NetworkManager[44885]: <info>  [1760432350.3799] device (tapbcdd5079-ef): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:59:10 compute-0 systemd[1]: libpod-eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8.scope: Deactivated successfully.
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.404 2 INFO nova.virt.libvirt.driver [-] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Instance destroyed successfully.
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.404 2 DEBUG nova.objects.instance [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lazy-loading 'resources' on Instance uuid 654413e6-01cd-4e54-a271-6b515a8561e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.430 2 DEBUG nova.virt.libvirt.vif [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1217516639',display_name='tempest-ServersTestMultiNic-server-1217516639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1217516639',id=28,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-1ec16n6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:07Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=654413e6-01cd-4e54-a271-6b515a8561e6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.431 2 DEBUG nova.network.os_vif_util [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.431 2 DEBUG nova.network.os_vif_util [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:b0:d5,bridge_name='br-int',has_traffic_filtering=True,id=902a062a-858b-4495-936b-47a675567467,network=Network(b65b17d8-22d8-41b1-aa72-fd93aefdff30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap902a062a-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.432 2 DEBUG os_vif [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:b0:d5,bridge_name='br-int',has_traffic_filtering=True,id=902a062a-858b-4495-936b-47a675567467,network=Network(b65b17d8-22d8-41b1-aa72-fd93aefdff30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap902a062a-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.433 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap902a062a-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:10 compute-0 neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30[295747]: [NOTICE]   (295760) : haproxy version is 2.8.14-c23fe91
Oct 14 08:59:10 compute-0 neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30[295747]: [NOTICE]   (295760) : path to executable is /usr/sbin/haproxy
Oct 14 08:59:10 compute-0 conmon[295747]: conmon eefa4b1ee19cb0960184 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8.scope/container/memory.events
Oct 14 08:59:10 compute-0 neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30[295747]: [WARNING]  (295760) : Exiting Master process...
Oct 14 08:59:10 compute-0 neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30[295747]: [ALERT]    (295760) : Current worker (295770) exited with code 143 (Terminated)
Oct 14 08:59:10 compute-0 neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30[295747]: [WARNING]  (295760) : All workers exited. Exiting... (0)
Oct 14 08:59:10 compute-0 ovn_controller[152662]: 2025-10-14T08:59:10Z|00221|binding|INFO|Releasing lport bcdd5079-efdb-47f7-99b0-21394b1d16e2 from this chassis (sb_readonly=0)
Oct 14 08:59:10 compute-0 ovn_controller[152662]: 2025-10-14T08:59:10Z|00222|binding|INFO|Setting lport bcdd5079-efdb-47f7-99b0-21394b1d16e2 down in Southbound
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 ovn_controller[152662]: 2025-10-14T08:59:10Z|00223|binding|INFO|Removing iface tapbcdd5079-ef ovn-installed in OVS
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:59:10 compute-0 podman[296203]: 2025-10-14 08:59:10.447241403 +0000 UTC m=+0.109815837 container died eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.450 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:fb:42 10.100.0.3'], port_security=['fa:16:3e:da:fb:42 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3f3d9640-8200-45d8-ac25-bbc5d016d49f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bcdd5079-efdb-47f7-99b0-21394b1d16e2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.480 2 INFO os_vif [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:b0:d5,bridge_name='br-int',has_traffic_filtering=True,id=902a062a-858b-4495-936b-47a675567467,network=Network(b65b17d8-22d8-41b1-aa72-fd93aefdff30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap902a062a-85')
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.481 2 DEBUG nova.virt.libvirt.vif [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1217516639',display_name='tempest-ServersTestMultiNic-server-1217516639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1217516639',id=28,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-1ec16n6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:07Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=654413e6-01cd-4e54-a271-6b515a8561e6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:59:10 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.481 2 DEBUG nova.network.os_vif_util [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:10 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001a.scope: Consumed 12.554s CPU time.
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.482 2 DEBUG nova.network.os_vif_util [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:59:d2,bridge_name='br-int',has_traffic_filtering=True,id=fa5f1925-a535-45ee-b96e-f79c725d7960,network=Network(7b0bcfe7-c41f-42d1-9739-3dab148a181f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa5f1925-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.483 2 DEBUG os_vif [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:59:d2,bridge_name='br-int',has_traffic_filtering=True,id=fa5f1925-a535-45ee-b96e-f79c725d7960,network=Network(7b0bcfe7-c41f-42d1-9739-3dab148a181f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa5f1925-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:59:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b2261a989d2df2c2391ffadc57ac59269fc091a983c6619c6804ba5d6a17e8d-merged.mount: Deactivated successfully.
Oct 14 08:59:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8-userdata-shm.mount: Deactivated successfully.
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.487 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa5f1925-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:59:10 compute-0 systemd-machined[214636]: Machine qemu-28-instance-0000001a terminated.
Oct 14 08:59:10 compute-0 podman[296203]: 2025-10-14 08:59:10.493556291 +0000 UTC m=+0.156130725 container cleanup eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.497 2 INFO os_vif [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:59:d2,bridge_name='br-int',has_traffic_filtering=True,id=fa5f1925-a535-45ee-b96e-f79c725d7960,network=Network(7b0bcfe7-c41f-42d1-9739-3dab148a181f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa5f1925-a5')
Oct 14 08:59:10 compute-0 systemd[1]: libpod-conmon-eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8.scope: Deactivated successfully.
Oct 14 08:59:10 compute-0 podman[296255]: 2025-10-14 08:59:10.563426965 +0000 UTC m=+0.046836260 container remove eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.575 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e9cdffe2-6d07-4b11-87d1-7c9dafe1a6ad]: (4, ('Tue Oct 14 08:59:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30 (eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8)\neefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8\nTue Oct 14 08:59:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30 (eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8)\neefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.580 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[321c6f7c-21c6-47cb-bf3b-a100cdb721c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.581 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb65b17d8-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 kernel: tapb65b17d8-20: left promiscuous mode
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.617 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c5cc2ebc-5cf2-426a-81e6-253e2a1306a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.634 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[240dfe69-7471-4418-9a62-cf7b19a0563a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.635 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1b2d38-017b-46a0-bafe-52612be1b306]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.650 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fda534e3-0578-407c-b434-5d4a789ae11d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612383, 'reachable_time': 27790, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296305, 'error': None, 'target': 'ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:10 compute-0 systemd[1]: run-netns-ovnmeta\x2db65b17d8\x2d22d8\x2d41b1\x2daa72\x2dfd93aefdff30.mount: Deactivated successfully.
Oct 14 08:59:10 compute-0 kernel: tapbcdd5079-ef: entered promiscuous mode
Oct 14 08:59:10 compute-0 kernel: tapbcdd5079-ef (unregistering): left promiscuous mode
Oct 14 08:59:10 compute-0 NetworkManager[44885]: <info>  [1760432350.6584] manager: (tapbcdd5079-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/118)
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.655 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.655 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[8acc7d0f-60e3-4ccf-9bf5-d87d04ee3ffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.656 162547 INFO neutron.agent.ovn.metadata.agent [-] Port fa5f1925-a535-45ee-b96e-f79c725d7960 in datapath 7b0bcfe7-c41f-42d1-9739-3dab148a181f unbound from our chassis
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.657 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b0bcfe7-c41f-42d1-9739-3dab148a181f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:59:10 compute-0 ovn_controller[152662]: 2025-10-14T08:59:10Z|00224|binding|INFO|Claiming lport bcdd5079-efdb-47f7-99b0-21394b1d16e2 for this chassis.
Oct 14 08:59:10 compute-0 ovn_controller[152662]: 2025-10-14T08:59:10Z|00225|binding|INFO|bcdd5079-efdb-47f7-99b0-21394b1d16e2: Claiming fa:16:3e:da:fb:42 10.100.0.3
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.659 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d73894e2-b20e-4933-9be7-3237dd8d5f00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.660 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f namespace which is not needed anymore
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 ovn_controller[152662]: 2025-10-14T08:59:10Z|00226|binding|INFO|Setting lport bcdd5079-efdb-47f7-99b0-21394b1d16e2 ovn-installed in OVS
Oct 14 08:59:10 compute-0 ovn_controller[152662]: 2025-10-14T08:59:10Z|00227|if_status|INFO|Dropped 2 log messages in last 219 seconds (most recently, 219 seconds ago) due to excessive rate
Oct 14 08:59:10 compute-0 ovn_controller[152662]: 2025-10-14T08:59:10Z|00228|if_status|INFO|Not setting lport bcdd5079-efdb-47f7-99b0-21394b1d16e2 down as sb is readonly
Oct 14 08:59:10 compute-0 podman[296284]: 2025-10-14 08:59:10.716255068 +0000 UTC m=+0.106038665 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 14 08:59:10 compute-0 ceph-mon[74249]: pgmap v1230: 305 pgs: 305 active+clean; 321 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.2 MiB/s rd, 3.9 MiB/s wr, 433 op/s
Oct 14 08:59:10 compute-0 podman[296285]: 2025-10-14 08:59:10.76396304 +0000 UTC m=+0.148585390 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 08:59:10 compute-0 ovn_controller[152662]: 2025-10-14T08:59:10Z|00229|binding|INFO|Releasing lport bcdd5079-efdb-47f7-99b0-21394b1d16e2 from this chassis (sb_readonly=0)
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.777 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:fb:42 10.100.0.3'], port_security=['fa:16:3e:da:fb:42 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3f3d9640-8200-45d8-ac25-bbc5d016d49f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bcdd5079-efdb-47f7-99b0-21394b1d16e2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.783 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:fb:42 10.100.0.3'], port_security=['fa:16:3e:da:fb:42 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3f3d9640-8200-45d8-ac25-bbc5d016d49f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bcdd5079-efdb-47f7-99b0-21394b1d16e2) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f[296003]: [NOTICE]   (296014) : haproxy version is 2.8.14-c23fe91
Oct 14 08:59:10 compute-0 neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f[296003]: [NOTICE]   (296014) : path to executable is /usr/sbin/haproxy
Oct 14 08:59:10 compute-0 neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f[296003]: [WARNING]  (296014) : Exiting Master process...
Oct 14 08:59:10 compute-0 neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f[296003]: [ALERT]    (296014) : Current worker (296017) exited with code 143 (Terminated)
Oct 14 08:59:10 compute-0 neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f[296003]: [WARNING]  (296014) : All workers exited. Exiting... (0)
Oct 14 08:59:10 compute-0 systemd[1]: libpod-2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa.scope: Deactivated successfully.
Oct 14 08:59:10 compute-0 podman[296357]: 2025-10-14 08:59:10.854308928 +0000 UTC m=+0.049577868 container died 2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:59:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa-userdata-shm.mount: Deactivated successfully.
Oct 14 08:59:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ee5fee482ca97cde7b330eef7b447de260a6448dd9b51c26764ac8cff20927e-merged.mount: Deactivated successfully.
Oct 14 08:59:10 compute-0 podman[296357]: 2025-10-14 08:59:10.892445455 +0000 UTC m=+0.087714395 container cleanup 2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 08:59:10 compute-0 systemd[1]: libpod-conmon-2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa.scope: Deactivated successfully.
Oct 14 08:59:10 compute-0 podman[296384]: 2025-10-14 08:59:10.956619671 +0000 UTC m=+0.041808798 container remove 2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.964 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1488e198-503a-4a80-a9d6-cd9167eadccd]: (4, ('Tue Oct 14 08:59:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f (2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa)\n2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa\nTue Oct 14 08:59:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f (2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa)\n2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.967 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[66b8c958-3027-46cf-9a76-fd64f4a49f2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.968 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0bcfe7-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 kernel: tap7b0bcfe7-c0: left promiscuous mode
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.983 2 INFO nova.virt.libvirt.driver [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Deleting instance files /var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6_del
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.984 2 INFO nova.virt.libvirt.driver [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Deletion of /var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6_del complete
Oct 14 08:59:10 compute-0 nova_compute[259627]: 2025-10-14 08:59:10.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.993 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[211518f3-7f03-4421-be0c-34024eaad328]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.020 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[77e38c97-5ba2-4c36-b3e4-577daa888dd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.021 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bc32272b-74a6-4222-85e5-7b3378c073ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:11 compute-0 nova_compute[259627]: 2025-10-14 08:59:11.029 2 INFO nova.compute.manager [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Took 0.89 seconds to destroy the instance on the hypervisor.
Oct 14 08:59:11 compute-0 nova_compute[259627]: 2025-10-14 08:59:11.030 2 DEBUG oslo.service.loopingcall [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:59:11 compute-0 nova_compute[259627]: 2025-10-14 08:59:11.030 2 DEBUG nova.compute.manager [-] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:59:11 compute-0 nova_compute[259627]: 2025-10-14 08:59:11.030 2 DEBUG nova.network.neutron [-] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.035 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a8bdfb47-15ab-44f6-a7fe-e5aa03165f6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612503, 'reachable_time': 29925, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296403, 'error': None, 'target': 'ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.036 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.036 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[72ea90f1-71ec-4b3c-bc23-f87cc5c2bcd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.037 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bcdd5079-efdb-47f7-99b0-21394b1d16e2 in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a unbound from our chassis
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.038 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.038 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[73ebc064-a313-4f6c-a167-db4a6a33d2ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.038 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a namespace which is not needed anymore
Oct 14 08:59:11 compute-0 nova_compute[259627]: 2025-10-14 08:59:11.065 2 DEBUG nova.compute.manager [req-2cf86504-33d9-413e-b0e9-22159e84f00a req-067a612e-c852-43f6-97e4-8dd2a68e4554 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-unplugged-fa5f1925-a535-45ee-b96e-f79c725d7960 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:11 compute-0 nova_compute[259627]: 2025-10-14 08:59:11.066 2 DEBUG oslo_concurrency.lockutils [req-2cf86504-33d9-413e-b0e9-22159e84f00a req-067a612e-c852-43f6-97e4-8dd2a68e4554 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:11 compute-0 nova_compute[259627]: 2025-10-14 08:59:11.066 2 DEBUG oslo_concurrency.lockutils [req-2cf86504-33d9-413e-b0e9-22159e84f00a req-067a612e-c852-43f6-97e4-8dd2a68e4554 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:11 compute-0 nova_compute[259627]: 2025-10-14 08:59:11.066 2 DEBUG oslo_concurrency.lockutils [req-2cf86504-33d9-413e-b0e9-22159e84f00a req-067a612e-c852-43f6-97e4-8dd2a68e4554 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:11 compute-0 nova_compute[259627]: 2025-10-14 08:59:11.067 2 DEBUG nova.compute.manager [req-2cf86504-33d9-413e-b0e9-22159e84f00a req-067a612e-c852-43f6-97e4-8dd2a68e4554 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] No waiting events found dispatching network-vif-unplugged-fa5f1925-a535-45ee-b96e-f79c725d7960 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:11 compute-0 nova_compute[259627]: 2025-10-14 08:59:11.067 2 DEBUG nova.compute.manager [req-2cf86504-33d9-413e-b0e9-22159e84f00a req-067a612e-c852-43f6-97e4-8dd2a68e4554 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-unplugged-fa5f1925-a535-45ee-b96e-f79c725d7960 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 08:59:11 compute-0 nova_compute[259627]: 2025-10-14 08:59:11.121 2 INFO nova.virt.libvirt.driver [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Instance shutdown successfully after 13 seconds.
Oct 14 08:59:11 compute-0 nova_compute[259627]: 2025-10-14 08:59:11.125 2 INFO nova.virt.libvirt.driver [-] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Instance destroyed successfully.
Oct 14 08:59:11 compute-0 nova_compute[259627]: 2025-10-14 08:59:11.126 2 DEBUG nova.objects.instance [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'numa_topology' on Instance uuid 3f3d9640-8200-45d8-ac25-bbc5d016d49f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:11 compute-0 nova_compute[259627]: 2025-10-14 08:59:11.139 2 DEBUG nova.compute.manager [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:11 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[293670]: [NOTICE]   (293674) : haproxy version is 2.8.14-c23fe91
Oct 14 08:59:11 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[293670]: [NOTICE]   (293674) : path to executable is /usr/sbin/haproxy
Oct 14 08:59:11 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[293670]: [WARNING]  (293674) : Exiting Master process...
Oct 14 08:59:11 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[293670]: [WARNING]  (293674) : Exiting Master process...
Oct 14 08:59:11 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[293670]: [ALERT]    (293674) : Current worker (293676) exited with code 143 (Terminated)
Oct 14 08:59:11 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[293670]: [WARNING]  (293674) : All workers exited. Exiting... (0)
Oct 14 08:59:11 compute-0 systemd[1]: libpod-7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb.scope: Deactivated successfully.
Oct 14 08:59:11 compute-0 podman[296421]: 2025-10-14 08:59:11.18179827 +0000 UTC m=+0.063390867 container died 7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 08:59:11 compute-0 nova_compute[259627]: 2025-10-14 08:59:11.181 2 DEBUG oslo_concurrency.lockutils [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:11 compute-0 podman[296421]: 2025-10-14 08:59:11.221239229 +0000 UTC m=+0.102831776 container cleanup 7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:59:11 compute-0 systemd[1]: libpod-conmon-7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb.scope: Deactivated successfully.
Oct 14 08:59:11 compute-0 podman[296446]: 2025-10-14 08:59:11.287197408 +0000 UTC m=+0.043867268 container remove 7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.292 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2835110b-0560-48fd-8a13-6d9bb61eef8b]: (4, ('Tue Oct 14 08:59:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a (7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb)\n7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb\nTue Oct 14 08:59:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a (7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb)\n7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.294 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[34b72f21-d6a6-4444-bc69-7677058f51f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.294 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:11 compute-0 nova_compute[259627]: 2025-10-14 08:59:11.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:11 compute-0 kernel: tap2322cf7a-00: left promiscuous mode
Oct 14 08:59:11 compute-0 nova_compute[259627]: 2025-10-14 08:59:11.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.317 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[189c130d-a14a-46ba-a5cb-49a2e581d181]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.341 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8af72eac-5bd1-47ce-9d27-16da14685cd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.342 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d60c3bcf-dd46-4ebc-82ee-9403fcc96a9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.360 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[68c9eec7-480a-4e5e-8e72-2c63d4f99542]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611218, 'reachable_time': 18994, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296464, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.362 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.362 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b5faf9-200a-44c8-aeac-62c42047cbed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.362 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bcdd5079-efdb-47f7-99b0-21394b1d16e2 in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a unbound from our chassis
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.363 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.364 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9f836f2a-4055-460a-b810-dd1187b82f6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.364 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bcdd5079-efdb-47f7-99b0-21394b1d16e2 in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a unbound from our chassis
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.365 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:59:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.366 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1682d087-7129-4a5e-8b64-53f55e24a7cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d7b0bcfe7\x2dc41f\x2d42d1\x2d9739\x2d3dab148a181f.mount: Deactivated successfully.
Oct 14 08:59:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7281a4ba77e5cabb29255d89d41089125e8d4a234511b13cf81fb479f2c3742c-merged.mount: Deactivated successfully.
Oct 14 08:59:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb-userdata-shm.mount: Deactivated successfully.
Oct 14 08:59:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d2322cf7a\x2d0090\x2d40fa\x2da558\x2d42d84cc6fc2a.mount: Deactivated successfully.
Oct 14 08:59:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 306 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 5.8 MiB/s wr, 595 op/s
Oct 14 08:59:11 compute-0 nova_compute[259627]: 2025-10-14 08:59:11.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.200 2 DEBUG nova.compute.manager [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-unplugged-902a062a-858b-4495-936b-47a675567467 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.200 2 DEBUG oslo_concurrency.lockutils [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.203 2 DEBUG oslo_concurrency.lockutils [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.204 2 DEBUG oslo_concurrency.lockutils [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.204 2 DEBUG nova.compute.manager [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] No waiting events found dispatching network-vif-unplugged-902a062a-858b-4495-936b-47a675567467 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.205 2 DEBUG nova.compute.manager [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-unplugged-902a062a-858b-4495-936b-47a675567467 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.205 2 DEBUG nova.compute.manager [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-plugged-902a062a-858b-4495-936b-47a675567467 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.206 2 DEBUG oslo_concurrency.lockutils [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.207 2 DEBUG oslo_concurrency.lockutils [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.207 2 DEBUG oslo_concurrency.lockutils [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.208 2 DEBUG nova.compute.manager [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] No waiting events found dispatching network-vif-plugged-902a062a-858b-4495-936b-47a675567467 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.209 2 WARNING nova.compute.manager [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received unexpected event network-vif-plugged-902a062a-858b-4495-936b-47a675567467 for instance with vm_state active and task_state deleting.
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.210 2 DEBUG nova.compute.manager [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Received event network-vif-unplugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.210 2 DEBUG oslo_concurrency.lockutils [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.211 2 DEBUG oslo_concurrency.lockutils [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.211 2 DEBUG oslo_concurrency.lockutils [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.212 2 DEBUG nova.compute.manager [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] No waiting events found dispatching network-vif-unplugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.212 2 WARNING nova.compute.manager [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Received unexpected event network-vif-unplugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 for instance with vm_state stopped and task_state None.
Oct 14 08:59:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:59:12 compute-0 ceph-mon[74249]: pgmap v1231: 305 pgs: 305 active+clean; 306 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 5.8 MiB/s wr, 595 op/s
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.813 2 DEBUG nova.compute.manager [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.865 2 INFO nova.compute.manager [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] instance snapshotting
Oct 14 08:59:12 compute-0 nova_compute[259627]: 2025-10-14 08:59:12.866 2 WARNING nova.compute.manager [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] trying to snapshot a non-running instance: (state: 4 expected: 1)
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.107 2 DEBUG nova.network.neutron [-] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.126 2 INFO nova.compute.manager [-] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Took 2.10 seconds to deallocate network for instance.
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.171 2 DEBUG oslo_concurrency.lockutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.172 2 DEBUG oslo_concurrency.lockutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.187 2 INFO nova.virt.libvirt.driver [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Beginning cold snapshot process
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.380 2 DEBUG nova.virt.libvirt.imagebackend [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.401 2 DEBUG oslo_concurrency.processutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.437 2 DEBUG nova.compute.manager [req-d6b1cfdc-2a0a-463f-9f06-84622c3db46c req-9dc45364-077f-4312-a845-93844797c240 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-plugged-fa5f1925-a535-45ee-b96e-f79c725d7960 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.437 2 DEBUG oslo_concurrency.lockutils [req-d6b1cfdc-2a0a-463f-9f06-84622c3db46c req-9dc45364-077f-4312-a845-93844797c240 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.438 2 DEBUG oslo_concurrency.lockutils [req-d6b1cfdc-2a0a-463f-9f06-84622c3db46c req-9dc45364-077f-4312-a845-93844797c240 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.438 2 DEBUG oslo_concurrency.lockutils [req-d6b1cfdc-2a0a-463f-9f06-84622c3db46c req-9dc45364-077f-4312-a845-93844797c240 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.438 2 DEBUG nova.compute.manager [req-d6b1cfdc-2a0a-463f-9f06-84622c3db46c req-9dc45364-077f-4312-a845-93844797c240 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] No waiting events found dispatching network-vif-plugged-fa5f1925-a535-45ee-b96e-f79c725d7960 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.438 2 WARNING nova.compute.manager [req-d6b1cfdc-2a0a-463f-9f06-84622c3db46c req-9dc45364-077f-4312-a845-93844797c240 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received unexpected event network-vif-plugged-fa5f1925-a535-45ee-b96e-f79c725d7960 for instance with vm_state deleted and task_state None.
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.438 2 DEBUG nova.compute.manager [req-d6b1cfdc-2a0a-463f-9f06-84622c3db46c req-9dc45364-077f-4312-a845-93844797c240 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-deleted-902a062a-858b-4495-936b-47a675567467 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 306 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 9.9 MiB/s rd, 2.2 MiB/s wr, 451 op/s
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.757 2 DEBUG nova.storage.rbd_utils [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] creating snapshot(3d0fd75ab46944e19a6ca6507c025d43) on rbd image(3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 08:59:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:59:13 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1769614021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.887 2 DEBUG oslo_concurrency.processutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.893 2 DEBUG nova.compute.provider_tree [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.917 2 DEBUG nova.scheduler.client.report [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.938 2 DEBUG oslo_concurrency.lockutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:13 compute-0 nova_compute[259627]: 2025-10-14 08:59:13.961 2 INFO nova.scheduler.client.report [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Deleted allocations for instance 654413e6-01cd-4e54-a271-6b515a8561e6
Oct 14 08:59:14 compute-0 nova_compute[259627]: 2025-10-14 08:59:14.016 2 DEBUG oslo_concurrency.lockutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:14 compute-0 nova_compute[259627]: 2025-10-14 08:59:14.321 2 DEBUG nova.compute.manager [req-349abad4-c214-4742-8ead-fe2929230585 req-c2b054a4-aa8d-4ee5-bc64-ad7a9c12b4c4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Received event network-vif-plugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:14 compute-0 nova_compute[259627]: 2025-10-14 08:59:14.321 2 DEBUG oslo_concurrency.lockutils [req-349abad4-c214-4742-8ead-fe2929230585 req-c2b054a4-aa8d-4ee5-bc64-ad7a9c12b4c4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:14 compute-0 nova_compute[259627]: 2025-10-14 08:59:14.322 2 DEBUG oslo_concurrency.lockutils [req-349abad4-c214-4742-8ead-fe2929230585 req-c2b054a4-aa8d-4ee5-bc64-ad7a9c12b4c4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:14 compute-0 nova_compute[259627]: 2025-10-14 08:59:14.322 2 DEBUG oslo_concurrency.lockutils [req-349abad4-c214-4742-8ead-fe2929230585 req-c2b054a4-aa8d-4ee5-bc64-ad7a9c12b4c4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:14 compute-0 nova_compute[259627]: 2025-10-14 08:59:14.322 2 DEBUG nova.compute.manager [req-349abad4-c214-4742-8ead-fe2929230585 req-c2b054a4-aa8d-4ee5-bc64-ad7a9c12b4c4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] No waiting events found dispatching network-vif-plugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:14 compute-0 nova_compute[259627]: 2025-10-14 08:59:14.323 2 WARNING nova.compute.manager [req-349abad4-c214-4742-8ead-fe2929230585 req-c2b054a4-aa8d-4ee5-bc64-ad7a9c12b4c4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Received unexpected event network-vif-plugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 for instance with vm_state stopped and task_state image_uploading.
Oct 14 08:59:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct 14 08:59:14 compute-0 ceph-mon[74249]: pgmap v1232: 305 pgs: 305 active+clean; 306 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 9.9 MiB/s rd, 2.2 MiB/s wr, 451 op/s
Oct 14 08:59:14 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1769614021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct 14 08:59:14 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct 14 08:59:14 compute-0 nova_compute[259627]: 2025-10-14 08:59:14.817 2 DEBUG oslo_concurrency.lockutils [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:14 compute-0 nova_compute[259627]: 2025-10-14 08:59:14.818 2 DEBUG oslo_concurrency.lockutils [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:14 compute-0 nova_compute[259627]: 2025-10-14 08:59:14.819 2 DEBUG nova.compute.manager [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:14 compute-0 nova_compute[259627]: 2025-10-14 08:59:14.825 2 DEBUG nova.compute.manager [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 14 08:59:14 compute-0 nova_compute[259627]: 2025-10-14 08:59:14.826 2 DEBUG nova.objects.instance [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'flavor' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:14 compute-0 nova_compute[259627]: 2025-10-14 08:59:14.834 2 DEBUG nova.storage.rbd_utils [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] cloning vms/3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk@3d0fd75ab46944e19a6ca6507c025d43 to images/0c9ef4ae-621d-4e37-afca-e810a018589c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 08:59:14 compute-0 nova_compute[259627]: 2025-10-14 08:59:14.895 2 DEBUG nova.virt.libvirt.driver [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 08:59:14 compute-0 nova_compute[259627]: 2025-10-14 08:59:14.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:59:15 compute-0 ovn_controller[152662]: 2025-10-14T08:59:15Z|00230|binding|INFO|Releasing lport 650f034e-5333-49ba-9907-b0409944aee7 from this chassis (sb_readonly=0)
Oct 14 08:59:15 compute-0 ovn_controller[152662]: 2025-10-14T08:59:15Z|00231|binding|INFO|Removing iface tapbcdd5079-ef ovn-installed in OVS
Oct 14 08:59:15 compute-0 nova_compute[259627]: 2025-10-14 08:59:15.011 2 DEBUG nova.storage.rbd_utils [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] flattening images/0c9ef4ae-621d-4e37-afca-e810a018589c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 08:59:15 compute-0 nova_compute[259627]: 2025-10-14 08:59:15.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:15 compute-0 nova_compute[259627]: 2025-10-14 08:59:15.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:15 compute-0 nova_compute[259627]: 2025-10-14 08:59:15.297 2 DEBUG nova.storage.rbd_utils [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] removing snapshot(3d0fd75ab46944e19a6ca6507c025d43) on rbd image(3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 08:59:15 compute-0 nova_compute[259627]: 2025-10-14 08:59:15.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:15 compute-0 nova_compute[259627]: 2025-10-14 08:59:15.515 2 DEBUG nova.compute.manager [req-6e3464f6-6dfd-4b21-8b00-a3fb9f253dce req-26dfb640-8060-47e4-ad2a-dde3d6b7f5ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-deleted-fa5f1925-a535-45ee-b96e-f79c725d7960 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 348 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.4 MiB/s wr, 371 op/s
Oct 14 08:59:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct 14 08:59:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct 14 08:59:15 compute-0 ceph-mon[74249]: osdmap e142: 3 total, 3 up, 3 in
Oct 14 08:59:15 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct 14 08:59:15 compute-0 nova_compute[259627]: 2025-10-14 08:59:15.854 2 DEBUG nova.storage.rbd_utils [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] creating snapshot(snap) on rbd image(0c9ef4ae-621d-4e37-afca-e810a018589c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 08:59:15 compute-0 nova_compute[259627]: 2025-10-14 08:59:15.972 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:59:15 compute-0 nova_compute[259627]: 2025-10-14 08:59:15.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:59:15 compute-0 nova_compute[259627]: 2025-10-14 08:59:15.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:59:15 compute-0 nova_compute[259627]: 2025-10-14 08:59:15.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 08:59:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct 14 08:59:16 compute-0 ceph-mon[74249]: pgmap v1234: 305 pgs: 305 active+clean; 348 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.4 MiB/s wr, 371 op/s
Oct 14 08:59:16 compute-0 ceph-mon[74249]: osdmap e143: 3 total, 3 up, 3 in
Oct 14 08:59:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct 14 08:59:16 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct 14 08:59:16 compute-0 nova_compute[259627]: 2025-10-14 08:59:16.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:16 compute-0 nova_compute[259627]: 2025-10-14 08:59:16.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:59:17 compute-0 kernel: tap0b16cd6a-fe (unregistering): left promiscuous mode
Oct 14 08:59:17 compute-0 NetworkManager[44885]: <info>  [1760432357.1737] device (tap0b16cd6a-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:17 compute-0 ovn_controller[152662]: 2025-10-14T08:59:17Z|00232|binding|INFO|Releasing lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 from this chassis (sb_readonly=0)
Oct 14 08:59:17 compute-0 ovn_controller[152662]: 2025-10-14T08:59:17Z|00233|binding|INFO|Setting lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 down in Southbound
Oct 14 08:59:17 compute-0 ovn_controller[152662]: 2025-10-14T08:59:17Z|00234|binding|INFO|Removing iface tap0b16cd6a-fe ovn-installed in OVS
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.195 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:07:ec 10.100.0.11'], port_security=['fa:16:3e:c3:07:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27fa4cf8-c08c-46a2-af8f-17c8980a2317', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0b16cd6a-fe42-4a54-8bbe-810915fcaa93) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.197 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 unbound from our chassis
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.200 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.213 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[59652a25-5f43-493f-b2ea-2ebfa3c48c85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:17 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.248 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5083a639-0194-4340-89c7-b6d300ac4d35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.256 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8bfd525b-9bb8-44dd-8643-5edd2e8528af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:17 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Oct 14 08:59:17 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001b.scope: Consumed 14.253s CPU time.
Oct 14 08:59:17 compute-0 systemd-machined[214636]: Machine qemu-29-instance-0000001b terminated.
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.285 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[11e0d4e8-e457-4d2f-b841-393ae9591ea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:17 compute-0 ovn_controller[152662]: 2025-10-14T08:59:17Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:d2:4a 10.100.0.13
Oct 14 08:59:17 compute-0 ovn_controller[152662]: 2025-10-14T08:59:17Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:d2:4a 10.100.0.13
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.307 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3183170d-eb3f-42b1-b37c-bb3724f1916a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 832, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 832, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296646, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.327 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb34aa8-4fd0-47ec-93c6-bf594d67e99a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611657, 'tstamp': 611657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296647, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611661, 'tstamp': 611661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296647, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.328 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.339 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d50d6a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.339 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.339 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d50d6a-60, col_values=(('external_ids', {'iface-id': '650f034e-5333-49ba-9907-b0409944aee7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.339 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:17 compute-0 kernel: tap0b16cd6a-fe: entered promiscuous mode
Oct 14 08:59:17 compute-0 NetworkManager[44885]: <info>  [1760432357.4155] manager: (tap0b16cd6a-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/119)
Oct 14 08:59:17 compute-0 kernel: tap0b16cd6a-fe (unregistering): left promiscuous mode
Oct 14 08:59:17 compute-0 ovn_controller[152662]: 2025-10-14T08:59:17Z|00235|binding|INFO|Claiming lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for this chassis.
Oct 14 08:59:17 compute-0 ovn_controller[152662]: 2025-10-14T08:59:17Z|00236|binding|INFO|0b16cd6a-fe42-4a54-8bbe-810915fcaa93: Claiming fa:16:3e:c3:07:ec 10.100.0.11
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.432 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:07:ec 10.100.0.11'], port_security=['fa:16:3e:c3:07:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27fa4cf8-c08c-46a2-af8f-17c8980a2317', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0b16cd6a-fe42-4a54-8bbe-810915fcaa93) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.433 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 bound to our chassis
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.434 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99
Oct 14 08:59:17 compute-0 ovn_controller[152662]: 2025-10-14T08:59:17Z|00237|binding|INFO|Setting lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 ovn-installed in OVS
Oct 14 08:59:17 compute-0 ovn_controller[152662]: 2025-10-14T08:59:17Z|00238|binding|INFO|Setting lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 up in Southbound
Oct 14 08:59:17 compute-0 ovn_controller[152662]: 2025-10-14T08:59:17Z|00239|binding|INFO|Releasing lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 from this chassis (sb_readonly=1)
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:17 compute-0 ovn_controller[152662]: 2025-10-14T08:59:17Z|00240|binding|INFO|Removing iface tap0b16cd6a-fe ovn-installed in OVS
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.451 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[83dd0496-85ce-4e7c-80b0-b87ec206c367]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:17 compute-0 ovn_controller[152662]: 2025-10-14T08:59:17Z|00241|binding|INFO|Releasing lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 from this chassis (sb_readonly=0)
Oct 14 08:59:17 compute-0 ovn_controller[152662]: 2025-10-14T08:59:17Z|00242|binding|INFO|Setting lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 down in Southbound
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.461 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:07:ec 10.100.0.11'], port_security=['fa:16:3e:c3:07:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27fa4cf8-c08c-46a2-af8f-17c8980a2317', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0b16cd6a-fe42-4a54-8bbe-810915fcaa93) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.481 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[029682cb-350c-4281-8204-fc79dca7b2f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.484 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4c26fa53-dbd2-4664-84ff-5096237f2305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.501 2 DEBUG nova.compute.manager [req-5667cab3-adab-44a6-b01a-312c37a2130d req-1b89d1a9-e579-4754-a420-63036fa00685 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-unplugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.501 2 DEBUG oslo_concurrency.lockutils [req-5667cab3-adab-44a6-b01a-312c37a2130d req-1b89d1a9-e579-4754-a420-63036fa00685 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.502 2 DEBUG oslo_concurrency.lockutils [req-5667cab3-adab-44a6-b01a-312c37a2130d req-1b89d1a9-e579-4754-a420-63036fa00685 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.502 2 DEBUG oslo_concurrency.lockutils [req-5667cab3-adab-44a6-b01a-312c37a2130d req-1b89d1a9-e579-4754-a420-63036fa00685 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.502 2 DEBUG nova.compute.manager [req-5667cab3-adab-44a6-b01a-312c37a2130d req-1b89d1a9-e579-4754-a420-63036fa00685 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] No waiting events found dispatching network-vif-unplugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.502 2 WARNING nova.compute.manager [req-5667cab3-adab-44a6-b01a-312c37a2130d req-1b89d1a9-e579-4754-a420-63036fa00685 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received unexpected event network-vif-unplugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for instance with vm_state active and task_state powering-off.
Oct 14 08:59:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.514 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2c07ceb7-5ae3-42b5-9527-1f71969c5a5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.544 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1b2102c0-e624-4f55-b694-980f3b12e5e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 832, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 832, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296661, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 348 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 10 MiB/s wr, 294 op/s
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.580 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bed7a836-39af-425f-8c83-a2328089c1c5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611657, 'tstamp': 611657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296662, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611661, 'tstamp': 611661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296662, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.582 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.632 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d50d6a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.633 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.635 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d50d6a-60, col_values=(('external_ids', {'iface-id': '650f034e-5333-49ba-9907-b0409944aee7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.636 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.639 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 unbound from our chassis
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.641 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.656 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ed9104b2-8b8b-4bb0-a418-d253ce545431]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.690 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[02675c9c-9ed2-44cd-af65-42f5c1f1c7e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.692 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d73719ef-4eb2-49ca-90a1-0b640c5b9038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.730 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4a85c07a-24bc-4bc8-955f-da1fdcaf5fc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.752 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3381c7-8c1b-4a0c-9350-35d35c269f3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 14, 'rx_bytes': 832, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 14, 'rx_bytes': 832, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296670, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.766 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fc760652-7eb3-452a-92dc-9f2e9e9236b8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611657, 'tstamp': 611657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296671, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611661, 'tstamp': 611661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296671, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.767 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.776 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d50d6a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.776 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.776 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d50d6a-60, col_values=(('external_ids', {'iface-id': '650f034e-5333-49ba-9907-b0409944aee7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.776 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:17 compute-0 ceph-mon[74249]: osdmap e144: 3 total, 3 up, 3 in
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.934 2 INFO nova.virt.libvirt.driver [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance shutdown successfully after 3 seconds.
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.941 2 INFO nova.virt.libvirt.driver [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance destroyed successfully.
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.942 2 DEBUG nova.objects.instance [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'numa_topology' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:17 compute-0 ovn_controller[152662]: 2025-10-14T08:59:17Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:46:6c 10.100.0.14
Oct 14 08:59:17 compute-0 ovn_controller[152662]: 2025-10-14T08:59:17Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:46:6c 10.100.0.14
Oct 14 08:59:17 compute-0 nova_compute[259627]: 2025-10-14 08:59:17.987 2 DEBUG nova.compute.manager [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:18 compute-0 nova_compute[259627]: 2025-10-14 08:59:18.065 2 DEBUG oslo_concurrency.lockutils [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:18 compute-0 nova_compute[259627]: 2025-10-14 08:59:18.364 2 INFO nova.virt.libvirt.driver [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Snapshot image upload complete
Oct 14 08:59:18 compute-0 nova_compute[259627]: 2025-10-14 08:59:18.364 2 INFO nova.compute.manager [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Took 5.50 seconds to snapshot the instance on the hypervisor.
Oct 14 08:59:18 compute-0 ceph-mon[74249]: pgmap v1237: 305 pgs: 305 active+clean; 348 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 10 MiB/s wr, 294 op/s
Oct 14 08:59:18 compute-0 nova_compute[259627]: 2025-10-14 08:59:18.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.003 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.005 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.005 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:59:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4190223049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.490 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 348 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 10 MiB/s wr, 294 op/s
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.577 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.577 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.581 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.581 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.584 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.584 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.587 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.588 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:59:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct 14 08:59:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct 14 08:59:19 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4190223049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:19 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.836 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.840 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3899MB free_disk=59.85548782348633GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.840 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.841 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.946 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 3f3d9640-8200-45d8-ac25-bbc5d016d49f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.946 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 27fa4cf8-c08c-46a2-af8f-17c8980a2317 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.946 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance eb820455-d45c-4331-9363-124f11537f52 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.946 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.946 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 08:59:19 compute-0 nova_compute[259627]: 2025-10-14 08:59:19.947 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1088MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.028 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.402 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.403 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.403 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.403 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.403 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] No waiting events found dispatching network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.403 2 WARNING nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received unexpected event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for instance with vm_state stopped and task_state None.
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.403 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.404 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.404 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.404 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.404 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] No waiting events found dispatching network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.404 2 WARNING nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received unexpected event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for instance with vm_state stopped and task_state None.
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.405 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.405 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.405 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.405 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.405 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] No waiting events found dispatching network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.405 2 WARNING nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received unexpected event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for instance with vm_state stopped and task_state None.
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.405 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-unplugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.406 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.406 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.406 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.406 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] No waiting events found dispatching network-vif-unplugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.406 2 WARNING nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received unexpected event network-vif-unplugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for instance with vm_state stopped and task_state None.
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.406 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.407 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.407 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.407 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.407 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] No waiting events found dispatching network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.407 2 WARNING nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received unexpected event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for instance with vm_state stopped and task_state None.
Oct 14 08:59:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:59:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4163746131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.451 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.457 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.471 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.493 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.494 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.581 2 DEBUG oslo_concurrency.lockutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.581 2 DEBUG oslo_concurrency.lockutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.581 2 DEBUG oslo_concurrency.lockutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.581 2 DEBUG oslo_concurrency.lockutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.582 2 DEBUG oslo_concurrency.lockutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.583 2 INFO nova.compute.manager [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Terminating instance
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.583 2 DEBUG nova.compute.manager [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.589 2 INFO nova.virt.libvirt.driver [-] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Instance destroyed successfully.
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.589 2 DEBUG nova.objects.instance [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'resources' on Instance uuid 3f3d9640-8200-45d8-ac25-bbc5d016d49f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.612 2 DEBUG nova.virt.libvirt.vif [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-107832270',display_name='tempest-ImagesTestJSON-server-107832270',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-107832270',id=26,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:58:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-bhq8zg2j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:18Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=3f3d9640-8200-45d8-ac25-bbc5d016d49f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.613 2 DEBUG nova.network.os_vif_util [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.613 2 DEBUG nova.network.os_vif_util [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=bcdd5079-efdb-47f7-99b0-21394b1d16e2,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcdd5079-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.614 2 DEBUG os_vif [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=bcdd5079-efdb-47f7-99b0-21394b1d16e2,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcdd5079-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbcdd5079-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.624 2 INFO os_vif [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=bcdd5079-efdb-47f7-99b0-21394b1d16e2,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcdd5079-ef')
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.674 2 DEBUG nova.objects.instance [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'flavor' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.693 2 DEBUG oslo_concurrency.lockutils [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.693 2 DEBUG oslo_concurrency.lockutils [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquired lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.693 2 DEBUG nova.network.neutron [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:59:20 compute-0 nova_compute[259627]: 2025-10-14 08:59:20.693 2 DEBUG nova.objects.instance [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'info_cache' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:20 compute-0 ceph-mon[74249]: pgmap v1238: 305 pgs: 305 active+clean; 348 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 10 MiB/s wr, 294 op/s
Oct 14 08:59:20 compute-0 ceph-mon[74249]: osdmap e145: 3 total, 3 up, 3 in
Oct 14 08:59:20 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4163746131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:21 compute-0 nova_compute[259627]: 2025-10-14 08:59:21.043 2 INFO nova.virt.libvirt.driver [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Deleting instance files /var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f_del
Oct 14 08:59:21 compute-0 nova_compute[259627]: 2025-10-14 08:59:21.045 2 INFO nova.virt.libvirt.driver [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Deletion of /var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f_del complete
Oct 14 08:59:21 compute-0 nova_compute[259627]: 2025-10-14 08:59:21.086 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432346.0813866, 51c76e0f-284d-4122-83b4-32c4518b9056 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:21 compute-0 nova_compute[259627]: 2025-10-14 08:59:21.086 2 INFO nova.compute.manager [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] VM Stopped (Lifecycle Event)
Oct 14 08:59:21 compute-0 nova_compute[259627]: 2025-10-14 08:59:21.112 2 DEBUG nova.compute.manager [None req-64299c29-53d2-42c0-9b93-89ff9f5a7685 - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:21 compute-0 nova_compute[259627]: 2025-10-14 08:59:21.117 2 INFO nova.compute.manager [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Took 0.53 seconds to destroy the instance on the hypervisor.
Oct 14 08:59:21 compute-0 nova_compute[259627]: 2025-10-14 08:59:21.118 2 DEBUG oslo.service.loopingcall [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:59:21 compute-0 nova_compute[259627]: 2025-10-14 08:59:21.118 2 DEBUG nova.compute.manager [-] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:59:21 compute-0 nova_compute[259627]: 2025-10-14 08:59:21.118 2 DEBUG nova.network.neutron [-] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:59:21 compute-0 nova_compute[259627]: 2025-10-14 08:59:21.495 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:59:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 302 active+clean; 372 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 10 MiB/s wr, 336 op/s
Oct 14 08:59:21 compute-0 nova_compute[259627]: 2025-10-14 08:59:21.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:21 compute-0 nova_compute[259627]: 2025-10-14 08:59:21.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:59:21 compute-0 nova_compute[259627]: 2025-10-14 08:59:21.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 08:59:21 compute-0 nova_compute[259627]: 2025-10-14 08:59:21.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.013 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.013 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.094 2 DEBUG nova.network.neutron [-] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.110 2 INFO nova.compute.manager [-] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Took 0.99 seconds to deallocate network for instance.
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.172 2 DEBUG oslo_concurrency.lockutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.173 2 DEBUG oslo_concurrency.lockutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.196 2 DEBUG nova.network.neutron [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Updating instance_info_cache with network_info: [{"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.212 2 DEBUG oslo_concurrency.lockutils [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Releasing lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.213 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.213 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.214 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.247 2 INFO nova.virt.libvirt.driver [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance destroyed successfully.
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.247 2 DEBUG nova.objects.instance [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'numa_topology' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.258 2 DEBUG nova.objects.instance [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'resources' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.269 2 DEBUG nova.virt.libvirt.vif [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1883268496',display_name='tempest-ListServerFiltersTestJSON-instance-1883268496',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1883268496',id=27,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-7u6hc4e7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListServerFiltersTestJSON-1842486796-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:18Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=27fa4cf8-c08c-46a2-af8f-17c8980a2317,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.270 2 DEBUG nova.network.os_vif_util [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.271 2 DEBUG nova.network.os_vif_util [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.271 2 DEBUG os_vif [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b16cd6a-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.278 2 INFO os_vif [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe')
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.280 2 DEBUG oslo_concurrency.processutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.318 2 DEBUG nova.virt.libvirt.driver [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Start _get_guest_xml network_info=[{"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.324 2 WARNING nova.virt.libvirt.driver [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.331 2 DEBUG nova.virt.libvirt.host [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.332 2 DEBUG nova.virt.libvirt.host [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.343 2 DEBUG nova.virt.libvirt.host [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.344 2 DEBUG nova.virt.libvirt.host [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.344 2 DEBUG nova.virt.libvirt.driver [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.345 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.345 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.345 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.345 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.346 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.346 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.346 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.346 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.347 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.347 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.347 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.348 2 DEBUG nova.objects.instance [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.365 2 DEBUG oslo_concurrency.processutils [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:59:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct 14 08:59:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct 14 08:59:22 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct 14 08:59:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:59:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4155859392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.713 2 DEBUG oslo_concurrency.processutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.720 2 DEBUG nova.compute.provider_tree [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.736 2 DEBUG nova.scheduler.client.report [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.762 2 DEBUG oslo_concurrency.lockutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:59:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1708326235' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.793 2 DEBUG oslo_concurrency.processutils [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:22 compute-0 ceph-mon[74249]: pgmap v1240: 305 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 302 active+clean; 372 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 10 MiB/s wr, 336 op/s
Oct 14 08:59:22 compute-0 ceph-mon[74249]: osdmap e146: 3 total, 3 up, 3 in
Oct 14 08:59:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4155859392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1708326235' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.879 2 INFO nova.scheduler.client.report [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Deleted allocations for instance 3f3d9640-8200-45d8-ac25-bbc5d016d49f
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.883 2 DEBUG oslo_concurrency.processutils [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:22 compute-0 nova_compute[259627]: 2025-10-14 08:59:22.995 2 DEBUG oslo_concurrency.lockutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.086 2 DEBUG nova.compute.manager [req-0aa4a025-0914-432a-8b44-21fc71da2109 req-9a3995f9-f6c8-4a68-9e9f-1b3e2e786e33 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Received event network-vif-deleted-bcdd5079-efdb-47f7-99b0-21394b1d16e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:59:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1183545449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.316 2 DEBUG oslo_concurrency.processutils [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.318 2 DEBUG nova.virt.libvirt.vif [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1883268496',display_name='tempest-ListServerFiltersTestJSON-instance-1883268496',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1883268496',id=27,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-7u6hc4e7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListServerFiltersTestJSON-1842486796-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:18Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=27fa4cf8-c08c-46a2-af8f-17c8980a2317,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.318 2 DEBUG nova.network.os_vif_util [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.319 2 DEBUG nova.network.os_vif_util [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.321 2 DEBUG nova.objects.instance [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'pci_devices' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.362 2 DEBUG nova.virt.libvirt.driver [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:59:23 compute-0 nova_compute[259627]:   <uuid>27fa4cf8-c08c-46a2-af8f-17c8980a2317</uuid>
Oct 14 08:59:23 compute-0 nova_compute[259627]:   <name>instance-0000001b</name>
Oct 14 08:59:23 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:59:23 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:59:23 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1883268496</nova:name>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:59:22</nova:creationTime>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:59:23 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:59:23 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:59:23 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:59:23 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:59:23 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:59:23 compute-0 nova_compute[259627]:         <nova:user uuid="56f2f9bf9b064a208d9ce5fe732c4ff7">tempest-ListServerFiltersTestJSON-1842486796-project-member</nova:user>
Oct 14 08:59:23 compute-0 nova_compute[259627]:         <nova:project uuid="3d3a647aa3914555a8a2c5fd6fe7a543">tempest-ListServerFiltersTestJSON-1842486796</nova:project>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:59:23 compute-0 nova_compute[259627]:         <nova:port uuid="0b16cd6a-fe42-4a54-8bbe-810915fcaa93">
Oct 14 08:59:23 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:59:23 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:59:23 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <system>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <entry name="serial">27fa4cf8-c08c-46a2-af8f-17c8980a2317</entry>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <entry name="uuid">27fa4cf8-c08c-46a2-af8f-17c8980a2317</entry>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     </system>
Oct 14 08:59:23 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:59:23 compute-0 nova_compute[259627]:   <os>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:   </os>
Oct 14 08:59:23 compute-0 nova_compute[259627]:   <features>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:   </features>
Oct 14 08:59:23 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:59:23 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:59:23 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk">
Oct 14 08:59:23 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       </source>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:59:23 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk.config">
Oct 14 08:59:23 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       </source>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:59:23 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:c3:07:ec"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <target dev="tap0b16cd6a-fe"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317/console.log" append="off"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <video>
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     </video>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <input type="keyboard" bus="usb"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:59:23 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:59:23 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:59:23 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:59:23 compute-0 nova_compute[259627]: </domain>
Oct 14 08:59:23 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.363 2 DEBUG nova.virt.libvirt.driver [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.363 2 DEBUG nova.virt.libvirt.driver [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.364 2 DEBUG nova.virt.libvirt.vif [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1883268496',display_name='tempest-ListServerFiltersTestJSON-instance-1883268496',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1883268496',id=27,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-7u6hc4e7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListServerFiltersTestJSON-1842486796-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:18Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=27fa4cf8-c08c-46a2-af8f-17c8980a2317,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.365 2 DEBUG nova.network.os_vif_util [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.365 2 DEBUG nova.network.os_vif_util [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.366 2 DEBUG os_vif [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.367 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.368 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b16cd6a-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.375 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b16cd6a-fe, col_values=(('external_ids', {'iface-id': '0b16cd6a-fe42-4a54-8bbe-810915fcaa93', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:07:ec', 'vm-uuid': '27fa4cf8-c08c-46a2-af8f-17c8980a2317'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:23 compute-0 NetworkManager[44885]: <info>  [1760432363.3782] manager: (tap0b16cd6a-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.384 2 INFO os_vif [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe')
Oct 14 08:59:23 compute-0 kernel: tap0b16cd6a-fe: entered promiscuous mode
Oct 14 08:59:23 compute-0 NetworkManager[44885]: <info>  [1760432363.4552] manager: (tap0b16cd6a-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/121)
Oct 14 08:59:23 compute-0 ovn_controller[152662]: 2025-10-14T08:59:23Z|00243|binding|INFO|Claiming lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for this chassis.
Oct 14 08:59:23 compute-0 ovn_controller[152662]: 2025-10-14T08:59:23Z|00244|binding|INFO|0b16cd6a-fe42-4a54-8bbe-810915fcaa93: Claiming fa:16:3e:c3:07:ec 10.100.0.11
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.464 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:07:ec 10.100.0.11'], port_security=['fa:16:3e:c3:07:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27fa4cf8-c08c-46a2-af8f-17c8980a2317', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0b16cd6a-fe42-4a54-8bbe-810915fcaa93) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.465 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 bound to our chassis
Oct 14 08:59:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.466 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99
Oct 14 08:59:23 compute-0 systemd-udevd[296836]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:59:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.487 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e7c767-0718-47aa-a19d-3f42d5af8ac1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:23 compute-0 ovn_controller[152662]: 2025-10-14T08:59:23Z|00245|binding|INFO|Setting lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 ovn-installed in OVS
Oct 14 08:59:23 compute-0 ovn_controller[152662]: 2025-10-14T08:59:23Z|00246|binding|INFO|Setting lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 up in Southbound
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:23 compute-0 NetworkManager[44885]: <info>  [1760432363.5008] device (tap0b16cd6a-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:59:23 compute-0 NetworkManager[44885]: <info>  [1760432363.5023] device (tap0b16cd6a-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:59:23 compute-0 systemd-machined[214636]: New machine qemu-34-instance-0000001b.
Oct 14 08:59:23 compute-0 systemd[1]: Started Virtual Machine qemu-34-instance-0000001b.
Oct 14 08:59:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.525 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b4635890-bd05-451d-9d14-02d7617453b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.528 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1e08b011-8b0e-4a4f-b29c-6a741e500317]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.559 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[736d29be-abbb-417c-be0a-1cfdf8f69a29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 302 active+clean; 372 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 9.2 MiB/s wr, 298 op/s
Oct 14 08:59:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.577 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e448dda1-e1d7-41af-a140-3f9b3bb8ffa8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 16, 'rx_bytes': 874, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 16, 'rx_bytes': 874, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296851, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.591 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe3e1aa-59c1-4a06-9e78-734c8088ce05]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611657, 'tstamp': 611657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296853, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611661, 'tstamp': 611661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296853, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.592 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:23 compute-0 nova_compute[259627]: 2025-10-14 08:59:23.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.595 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d50d6a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.595 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.595 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d50d6a-60, col_values=(('external_ids', {'iface-id': '650f034e-5333-49ba-9907-b0409944aee7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.595 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1183545449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.056 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "f3dafba3-6472-4921-9ece-b6076172365e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.057 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.096 2 DEBUG nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.161 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.162 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.168 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.169 2 INFO nova.compute.claims [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.316 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Updating instance_info_cache with network_info: [{"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.319 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.363 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.363 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.364 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:59:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:59:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3157234727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.785 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.793 2 DEBUG nova.compute.provider_tree [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.815 2 DEBUG nova.scheduler.client.report [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.826 2 DEBUG nova.compute.manager [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.827 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 27fa4cf8-c08c-46a2-af8f-17c8980a2317 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.828 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432364.8235388, 27fa4cf8-c08c-46a2-af8f-17c8980a2317 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.829 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] VM Resumed (Lifecycle Event)
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.837 2 INFO nova.virt.libvirt.driver [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance rebooted successfully.
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.838 2 DEBUG nova.compute.manager [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.852 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.856 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.857 2 DEBUG nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.866 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:24 compute-0 ceph-mon[74249]: pgmap v1242: 305 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 302 active+clean; 372 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 9.2 MiB/s wr, 298 op/s
Oct 14 08:59:24 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3157234727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.911 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] During sync_power_state the instance has a pending task (powering-on). Skip.
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.912 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432364.8241057, 27fa4cf8-c08c-46a2-af8f-17c8980a2317 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.913 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] VM Started (Lifecycle Event)
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.940 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.944 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.955 2 DEBUG nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.955 2 DEBUG nova.network.neutron [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:59:24 compute-0 nova_compute[259627]: 2025-10-14 08:59:24.982 2 INFO nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.002 2 DEBUG nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.096 2 DEBUG nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.097 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.098 2 INFO nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Creating image(s)
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.120 2 DEBUG nova.storage.rbd_utils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f3dafba3-6472-4921-9ece-b6076172365e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.148 2 DEBUG nova.storage.rbd_utils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f3dafba3-6472-4921-9ece-b6076172365e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.171 2 DEBUG nova.storage.rbd_utils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f3dafba3-6472-4921-9ece-b6076172365e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.175 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.273 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.276 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.277 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.278 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.307 2 DEBUG nova.storage.rbd_utils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f3dafba3-6472-4921-9ece-b6076172365e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.310 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f3dafba3-6472-4921-9ece-b6076172365e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.383 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432350.3816154, 654413e6-01cd-4e54-a271-6b515a8561e6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.383 2 INFO nova.compute.manager [-] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] VM Stopped (Lifecycle Event)
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.406 2 DEBUG nova.compute.manager [None req-cc76175c-4e61-4e2d-ae2e-e4b8d39581d0 - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.559 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f3dafba3-6472-4921-9ece-b6076172365e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 283 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.8 MiB/s wr, 329 op/s
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.621 2 DEBUG nova.storage.rbd_utils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] resizing rbd image f3dafba3-6472-4921-9ece-b6076172365e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.717 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432350.6832848, 3f3d9640-8200-45d8-ac25-bbc5d016d49f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.717 2 INFO nova.compute.manager [-] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] VM Stopped (Lifecycle Event)
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.725 2 DEBUG nova.objects.instance [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'migration_context' on Instance uuid f3dafba3-6472-4921-9ece-b6076172365e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.753 2 DEBUG nova.compute.manager [None req-168f0166-41c9-4407-841c-1f1f2bf95ad7 - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.755 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.756 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Ensure instance console log exists: /var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.756 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.757 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.758 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:25 compute-0 nova_compute[259627]: 2025-10-14 08:59:25.951 2 DEBUG nova.policy [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a217215c39e41fea2323ff7b3b4e6aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:59:26 compute-0 nova_compute[259627]: 2025-10-14 08:59:26.594 2 DEBUG nova.network.neutron [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Successfully created port: ecb526b7-d1ad-4a75-b851-482702018258 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:59:26 compute-0 ceph-mon[74249]: pgmap v1243: 305 pgs: 305 active+clean; 283 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.8 MiB/s wr, 329 op/s
Oct 14 08:59:26 compute-0 nova_compute[259627]: 2025-10-14 08:59:26.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:27 compute-0 nova_compute[259627]: 2025-10-14 08:59:27.498 2 DEBUG nova.network.neutron [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Successfully updated port: ecb526b7-d1ad-4a75-b851-482702018258 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:59:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:59:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct 14 08:59:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct 14 08:59:27 compute-0 nova_compute[259627]: 2025-10-14 08:59:27.524 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "refresh_cache-f3dafba3-6472-4921-9ece-b6076172365e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:59:27 compute-0 nova_compute[259627]: 2025-10-14 08:59:27.524 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquired lock "refresh_cache-f3dafba3-6472-4921-9ece-b6076172365e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:59:27 compute-0 nova_compute[259627]: 2025-10-14 08:59:27.524 2 DEBUG nova.network.neutron [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:59:27 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct 14 08:59:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 283 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 8.1 MiB/s wr, 340 op/s
Oct 14 08:59:27 compute-0 nova_compute[259627]: 2025-10-14 08:59:27.593 2 DEBUG nova.compute.manager [req-63c10960-af23-40db-bd0c-3e4b13064f38 req-abb1cbef-3270-4679-aa12-c317869b69bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Received event network-changed-ecb526b7-d1ad-4a75-b851-482702018258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:27 compute-0 nova_compute[259627]: 2025-10-14 08:59:27.593 2 DEBUG nova.compute.manager [req-63c10960-af23-40db-bd0c-3e4b13064f38 req-abb1cbef-3270-4679-aa12-c317869b69bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Refreshing instance network info cache due to event network-changed-ecb526b7-d1ad-4a75-b851-482702018258. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:59:27 compute-0 nova_compute[259627]: 2025-10-14 08:59:27.594 2 DEBUG oslo_concurrency.lockutils [req-63c10960-af23-40db-bd0c-3e4b13064f38 req-abb1cbef-3270-4679-aa12-c317869b69bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f3dafba3-6472-4921-9ece-b6076172365e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:59:27 compute-0 nova_compute[259627]: 2025-10-14 08:59:27.712 2 DEBUG nova.network.neutron [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:59:28 compute-0 nova_compute[259627]: 2025-10-14 08:59:28.360 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 08:59:28 compute-0 nova_compute[259627]: 2025-10-14 08:59:28.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:28 compute-0 ceph-mon[74249]: osdmap e147: 3 total, 3 up, 3 in
Oct 14 08:59:28 compute-0 ceph-mon[74249]: pgmap v1245: 305 pgs: 305 active+clean; 283 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 8.1 MiB/s wr, 340 op/s
Oct 14 08:59:28 compute-0 nova_compute[259627]: 2025-10-14 08:59:28.956 2 DEBUG nova.network.neutron [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Updating instance_info_cache with network_info: [{"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:28 compute-0 nova_compute[259627]: 2025-10-14 08:59:28.983 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Releasing lock "refresh_cache-f3dafba3-6472-4921-9ece-b6076172365e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:59:28 compute-0 nova_compute[259627]: 2025-10-14 08:59:28.983 2 DEBUG nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Instance network_info: |[{"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:59:28 compute-0 nova_compute[259627]: 2025-10-14 08:59:28.983 2 DEBUG oslo_concurrency.lockutils [req-63c10960-af23-40db-bd0c-3e4b13064f38 req-abb1cbef-3270-4679-aa12-c317869b69bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f3dafba3-6472-4921-9ece-b6076172365e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:59:28 compute-0 nova_compute[259627]: 2025-10-14 08:59:28.983 2 DEBUG nova.network.neutron [req-63c10960-af23-40db-bd0c-3e4b13064f38 req-abb1cbef-3270-4679-aa12-c317869b69bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Refreshing network info cache for port ecb526b7-d1ad-4a75-b851-482702018258 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:59:28 compute-0 nova_compute[259627]: 2025-10-14 08:59:28.986 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Start _get_guest_xml network_info=[{"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:59:28 compute-0 nova_compute[259627]: 2025-10-14 08:59:28.989 2 WARNING nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:59:28 compute-0 nova_compute[259627]: 2025-10-14 08:59:28.995 2 DEBUG nova.virt.libvirt.host [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:59:28 compute-0 nova_compute[259627]: 2025-10-14 08:59:28.995 2 DEBUG nova.virt.libvirt.host [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.000 2 DEBUG nova.virt.libvirt.host [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.001 2 DEBUG nova.virt.libvirt.host [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.001 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.001 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.002 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.002 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.002 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.002 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.003 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.003 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.003 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.003 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.004 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.004 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.006 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:59:29 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1293226448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.426 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.460 2 DEBUG nova.storage.rbd_utils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f3dafba3-6472-4921-9ece-b6076172365e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.465 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:29 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1293226448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 283 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 260 KiB/s rd, 53 KiB/s wr, 76 op/s
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.679 2 DEBUG nova.compute.manager [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.680 2 DEBUG oslo_concurrency.lockutils [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.680 2 DEBUG oslo_concurrency.lockutils [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.680 2 DEBUG oslo_concurrency.lockutils [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.681 2 DEBUG nova.compute.manager [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] No waiting events found dispatching network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.681 2 WARNING nova.compute.manager [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received unexpected event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for instance with vm_state active and task_state None.
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.682 2 DEBUG nova.compute.manager [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.682 2 DEBUG oslo_concurrency.lockutils [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.682 2 DEBUG oslo_concurrency.lockutils [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.682 2 DEBUG oslo_concurrency.lockutils [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.683 2 DEBUG nova.compute.manager [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] No waiting events found dispatching network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.683 2 WARNING nova.compute.manager [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received unexpected event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for instance with vm_state active and task_state None.
Oct 14 08:59:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:59:29 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2911530406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.879 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.881 2 DEBUG nova.virt.libvirt.vif [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1482418899',display_name='tempest-ImagesTestJSON-server-1482418899',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1482418899',id=31,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-dol66h7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:59:25Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=f3dafba3-6472-4921-9ece-b6076172365e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.882 2 DEBUG nova.network.os_vif_util [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.883 2 DEBUG nova.network.os_vif_util [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=ecb526b7-d1ad-4a75-b851-482702018258,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb526b7-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.885 2 DEBUG nova.objects.instance [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'pci_devices' on Instance uuid f3dafba3-6472-4921-9ece-b6076172365e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.904 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:59:29 compute-0 nova_compute[259627]:   <uuid>f3dafba3-6472-4921-9ece-b6076172365e</uuid>
Oct 14 08:59:29 compute-0 nova_compute[259627]:   <name>instance-0000001f</name>
Oct 14 08:59:29 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:59:29 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:59:29 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <nova:name>tempest-ImagesTestJSON-server-1482418899</nova:name>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:59:28</nova:creationTime>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:59:29 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:59:29 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:59:29 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:59:29 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:59:29 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:59:29 compute-0 nova_compute[259627]:         <nova:user uuid="3a217215c39e41fea2323ff7b3b4e6aa">tempest-ImagesTestJSON-168259448-project-member</nova:user>
Oct 14 08:59:29 compute-0 nova_compute[259627]:         <nova:project uuid="0d87d2d744db48dc8b32bb4bf6847fce">tempest-ImagesTestJSON-168259448</nova:project>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:59:29 compute-0 nova_compute[259627]:         <nova:port uuid="ecb526b7-d1ad-4a75-b851-482702018258">
Oct 14 08:59:29 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:59:29 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:59:29 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <system>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <entry name="serial">f3dafba3-6472-4921-9ece-b6076172365e</entry>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <entry name="uuid">f3dafba3-6472-4921-9ece-b6076172365e</entry>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     </system>
Oct 14 08:59:29 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:59:29 compute-0 nova_compute[259627]:   <os>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:   </os>
Oct 14 08:59:29 compute-0 nova_compute[259627]:   <features>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:   </features>
Oct 14 08:59:29 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:59:29 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:59:29 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/f3dafba3-6472-4921-9ece-b6076172365e_disk">
Oct 14 08:59:29 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       </source>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:59:29 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/f3dafba3-6472-4921-9ece-b6076172365e_disk.config">
Oct 14 08:59:29 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       </source>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:59:29 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:e5:8b:13"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <target dev="tapecb526b7-d1"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e/console.log" append="off"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <video>
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     </video>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:59:29 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:59:29 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:59:29 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:59:29 compute-0 nova_compute[259627]: </domain>
Oct 14 08:59:29 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.907 2 DEBUG nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Preparing to wait for external event network-vif-plugged-ecb526b7-d1ad-4a75-b851-482702018258 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.907 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "f3dafba3-6472-4921-9ece-b6076172365e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.908 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.908 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.910 2 DEBUG nova.virt.libvirt.vif [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1482418899',display_name='tempest-ImagesTestJSON-server-1482418899',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1482418899',id=31,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-dol66h7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:59:25Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=f3dafba3-6472-4921-9ece-b6076172365e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.910 2 DEBUG nova.network.os_vif_util [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.911 2 DEBUG nova.network.os_vif_util [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=ecb526b7-d1ad-4a75-b851-482702018258,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb526b7-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.912 2 DEBUG os_vif [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=ecb526b7-d1ad-4a75-b851-482702018258,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb526b7-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.914 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.915 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.920 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapecb526b7-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapecb526b7-d1, col_values=(('external_ids', {'iface-id': 'ecb526b7-d1ad-4a75-b851-482702018258', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:8b:13', 'vm-uuid': 'f3dafba3-6472-4921-9ece-b6076172365e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:29 compute-0 NetworkManager[44885]: <info>  [1760432369.9259] manager: (tapecb526b7-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:29 compute-0 nova_compute[259627]: 2025-10-14 08:59:29.938 2 INFO os_vif [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=ecb526b7-d1ad-4a75-b851-482702018258,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb526b7-d1')
Oct 14 08:59:30 compute-0 nova_compute[259627]: 2025-10-14 08:59:30.023 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:59:30 compute-0 nova_compute[259627]: 2025-10-14 08:59:30.024 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:59:30 compute-0 nova_compute[259627]: 2025-10-14 08:59:30.025 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No VIF found with MAC fa:16:3e:e5:8b:13, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:59:30 compute-0 nova_compute[259627]: 2025-10-14 08:59:30.026 2 INFO nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Using config drive
Oct 14 08:59:30 compute-0 nova_compute[259627]: 2025-10-14 08:59:30.061 2 DEBUG nova.storage.rbd_utils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f3dafba3-6472-4921-9ece-b6076172365e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:30 compute-0 nova_compute[259627]: 2025-10-14 08:59:30.191 2 DEBUG nova.network.neutron [req-63c10960-af23-40db-bd0c-3e4b13064f38 req-abb1cbef-3270-4679-aa12-c317869b69bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Updated VIF entry in instance network info cache for port ecb526b7-d1ad-4a75-b851-482702018258. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:59:30 compute-0 nova_compute[259627]: 2025-10-14 08:59:30.192 2 DEBUG nova.network.neutron [req-63c10960-af23-40db-bd0c-3e4b13064f38 req-abb1cbef-3270-4679-aa12-c317869b69bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Updating instance_info_cache with network_info: [{"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:30 compute-0 nova_compute[259627]: 2025-10-14 08:59:30.221 2 DEBUG oslo_concurrency.lockutils [req-63c10960-af23-40db-bd0c-3e4b13064f38 req-abb1cbef-3270-4679-aa12-c317869b69bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f3dafba3-6472-4921-9ece-b6076172365e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:59:30 compute-0 nova_compute[259627]: 2025-10-14 08:59:30.529 2 INFO nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Creating config drive at /var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e/disk.config
Oct 14 08:59:30 compute-0 nova_compute[259627]: 2025-10-14 08:59:30.535 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoj38svzi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:30 compute-0 ceph-mon[74249]: pgmap v1246: 305 pgs: 305 active+clean; 283 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 260 KiB/s rd, 53 KiB/s wr, 76 op/s
Oct 14 08:59:30 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2911530406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:30 compute-0 nova_compute[259627]: 2025-10-14 08:59:30.672 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoj38svzi" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:30 compute-0 nova_compute[259627]: 2025-10-14 08:59:30.696 2 DEBUG nova.storage.rbd_utils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f3dafba3-6472-4921-9ece-b6076172365e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:30 compute-0 nova_compute[259627]: 2025-10-14 08:59:30.699 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e/disk.config f3dafba3-6472-4921-9ece-b6076172365e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:30 compute-0 nova_compute[259627]: 2025-10-14 08:59:30.856 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e/disk.config f3dafba3-6472-4921-9ece-b6076172365e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:30 compute-0 nova_compute[259627]: 2025-10-14 08:59:30.857 2 INFO nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Deleting local config drive /var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e/disk.config because it was imported into RBD.
Oct 14 08:59:30 compute-0 kernel: tapecb526b7-d1: entered promiscuous mode
Oct 14 08:59:30 compute-0 NetworkManager[44885]: <info>  [1760432370.9149] manager: (tapecb526b7-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/123)
Oct 14 08:59:30 compute-0 ovn_controller[152662]: 2025-10-14T08:59:30Z|00247|binding|INFO|Claiming lport ecb526b7-d1ad-4a75-b851-482702018258 for this chassis.
Oct 14 08:59:30 compute-0 ovn_controller[152662]: 2025-10-14T08:59:30Z|00248|binding|INFO|ecb526b7-d1ad-4a75-b851-482702018258: Claiming fa:16:3e:e5:8b:13 10.100.0.9
Oct 14 08:59:30 compute-0 nova_compute[259627]: 2025-10-14 08:59:30.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:30 compute-0 nova_compute[259627]: 2025-10-14 08:59:30.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:30.941 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:8b:13 10.100.0.9'], port_security=['fa:16:3e:e5:8b:13 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f3dafba3-6472-4921-9ece-b6076172365e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ecb526b7-d1ad-4a75-b851-482702018258) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:30.943 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ecb526b7-d1ad-4a75-b851-482702018258 in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a bound to our chassis
Oct 14 08:59:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:30.954 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 08:59:30 compute-0 systemd-machined[214636]: New machine qemu-35-instance-0000001f.
Oct 14 08:59:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:30.971 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf15dd9-c83b-45b9-940c-e1ed0fddc2e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:30.973 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2322cf7a-01 in ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:59:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:30.976 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2322cf7a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:59:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:30.976 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[85a73d3a-ae43-4901-9283-9d305ed9be01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:30.977 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[07753aaf-2347-4627-a03a-1587c9b33a6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:30 compute-0 systemd[1]: Started Virtual Machine qemu-35-instance-0000001f.
Oct 14 08:59:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:30.990 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[d89ea0b5-20b4-4e94-a0b7-2cfd336e8e02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.017 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb70221-83b5-47e5-88d7-fcb34f1dd7c2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:31 compute-0 systemd-udevd[297243]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:59:31 compute-0 ovn_controller[152662]: 2025-10-14T08:59:31Z|00249|binding|INFO|Setting lport ecb526b7-d1ad-4a75-b851-482702018258 ovn-installed in OVS
Oct 14 08:59:31 compute-0 ovn_controller[152662]: 2025-10-14T08:59:31Z|00250|binding|INFO|Setting lport ecb526b7-d1ad-4a75-b851-482702018258 up in Southbound
Oct 14 08:59:31 compute-0 nova_compute[259627]: 2025-10-14 08:59:31.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:31 compute-0 NetworkManager[44885]: <info>  [1760432371.0368] device (tapecb526b7-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:59:31 compute-0 NetworkManager[44885]: <info>  [1760432371.0376] device (tapecb526b7-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.050 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f5cd3ba2-c4ef-4f37-8f3b-a0a7431fd6c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:31 compute-0 NetworkManager[44885]: <info>  [1760432371.0596] manager: (tap2322cf7a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/124)
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.061 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f541bf-12c8-46d7-94e6-f16e2d822556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:31 compute-0 podman[297216]: 2025-10-14 08:59:31.070324592 +0000 UTC m=+0.100775036 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.091 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[db3b85c6-bc80-477b-9b9b-2bdd3ee3621d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.093 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd61bbb-9fe2-432d-9f2a-d6d36964a369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:31 compute-0 podman[297217]: 2025-10-14 08:59:31.099891378 +0000 UTC m=+0.130903436 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:59:31 compute-0 NetworkManager[44885]: <info>  [1760432371.1185] device (tap2322cf7a-00): carrier: link connected
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.124 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d40835f5-dff0-428a-87d2-c10f98dadd80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.138 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d915ba8c-8cd7-4fe7-9cd3-faee787d7c2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614988, 'reachable_time': 21752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297290, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.153 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[89f0de31-255f-4af4-9910-6159c984f213]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed3:956c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 614988, 'tstamp': 614988}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297291, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.166 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6e18a1a8-7263-427b-ae8a-41e30952ee71]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614988, 'reachable_time': 21752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297292, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.195 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[96b7e563-4c76-4aac-98cb-eee325b8d0e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.254 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[405230d6-1dce-4b63-b2a5-64b3f4436210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.255 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.255 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.256 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2322cf7a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:31 compute-0 NetworkManager[44885]: <info>  [1760432371.3106] manager: (tap2322cf7a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Oct 14 08:59:31 compute-0 kernel: tap2322cf7a-00: entered promiscuous mode
Oct 14 08:59:31 compute-0 nova_compute[259627]: 2025-10-14 08:59:31.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.313 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2322cf7a-00, col_values=(('external_ids', {'iface-id': '0616bbde-729a-4cd4-ba39-5fcdf59ece5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:31 compute-0 nova_compute[259627]: 2025-10-14 08:59:31.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:31 compute-0 ovn_controller[152662]: 2025-10-14T08:59:31Z|00251|binding|INFO|Releasing lport 0616bbde-729a-4cd4-ba39-5fcdf59ece5e from this chassis (sb_readonly=0)
Oct 14 08:59:31 compute-0 nova_compute[259627]: 2025-10-14 08:59:31.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.334 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.335 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9585f4-8d8e-41e0-a7f5-9528059882af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.336 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:59:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.337 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'env', 'PROCESS_TAG=haproxy-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2322cf7a-0090-40fa-a558-42d84cc6fc2a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:59:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 325 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 169 op/s
Oct 14 08:59:31 compute-0 podman[297365]: 2025-10-14 08:59:31.709958279 +0000 UTC m=+0.048049851 container create d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 08:59:31 compute-0 systemd[1]: Started libpod-conmon-d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1.scope.
Oct 14 08:59:31 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:59:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b83f526bc4bd45d25326b47f32c13811952e9e26bf37d62c64b21ce07a2169ec/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:59:31 compute-0 podman[297365]: 2025-10-14 08:59:31.685435887 +0000 UTC m=+0.023527479 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:59:31 compute-0 podman[297365]: 2025-10-14 08:59:31.788946148 +0000 UTC m=+0.127037810 container init d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:59:31 compute-0 podman[297365]: 2025-10-14 08:59:31.802230275 +0000 UTC m=+0.140321887 container start d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 08:59:31 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[297380]: [NOTICE]   (297384) : New worker (297386) forked
Oct 14 08:59:31 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[297380]: [NOTICE]   (297384) : Loading success.
Oct 14 08:59:31 compute-0 nova_compute[259627]: 2025-10-14 08:59:31.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:31 compute-0 nova_compute[259627]: 2025-10-14 08:59:31.967 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432371.9670866, f3dafba3-6472-4921-9ece-b6076172365e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:31 compute-0 nova_compute[259627]: 2025-10-14 08:59:31.968 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] VM Started (Lifecycle Event)
Oct 14 08:59:31 compute-0 nova_compute[259627]: 2025-10-14 08:59:31.996 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:32 compute-0 nova_compute[259627]: 2025-10-14 08:59:32.000 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432371.9682617, f3dafba3-6472-4921-9ece-b6076172365e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:32 compute-0 nova_compute[259627]: 2025-10-14 08:59:32.000 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] VM Paused (Lifecycle Event)
Oct 14 08:59:32 compute-0 nova_compute[259627]: 2025-10-14 08:59:32.021 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:32 compute-0 nova_compute[259627]: 2025-10-14 08:59:32.024 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:32 compute-0 nova_compute[259627]: 2025-10-14 08:59:32.046 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:59:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:59:32 compute-0 ceph-mon[74249]: pgmap v1247: 305 pgs: 305 active+clean; 325 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 169 op/s
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:59:32
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'images', 'volumes', 'vms', 'cephfs.cephfs.meta']
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 08:59:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 08:59:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 325 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Oct 14 08:59:33 compute-0 nova_compute[259627]: 2025-10-14 08:59:33.784 2 DEBUG oslo_concurrency.lockutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:33 compute-0 nova_compute[259627]: 2025-10-14 08:59:33.785 2 DEBUG oslo_concurrency.lockutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:33 compute-0 nova_compute[259627]: 2025-10-14 08:59:33.786 2 DEBUG oslo_concurrency.lockutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:33 compute-0 nova_compute[259627]: 2025-10-14 08:59:33.786 2 DEBUG oslo_concurrency.lockutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:33 compute-0 nova_compute[259627]: 2025-10-14 08:59:33.786 2 DEBUG oslo_concurrency.lockutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:33 compute-0 nova_compute[259627]: 2025-10-14 08:59:33.787 2 INFO nova.compute.manager [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Terminating instance
Oct 14 08:59:33 compute-0 nova_compute[259627]: 2025-10-14 08:59:33.788 2 DEBUG nova.compute.manager [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:59:33 compute-0 kernel: tap2da46865-98 (unregistering): left promiscuous mode
Oct 14 08:59:33 compute-0 NetworkManager[44885]: <info>  [1760432373.8487] device (tap2da46865-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:59:33 compute-0 ovn_controller[152662]: 2025-10-14T08:59:33Z|00252|binding|INFO|Releasing lport 2da46865-98ea-42a7-a5cc-44b5bef36a3d from this chassis (sb_readonly=0)
Oct 14 08:59:33 compute-0 ovn_controller[152662]: 2025-10-14T08:59:33Z|00253|binding|INFO|Setting lport 2da46865-98ea-42a7-a5cc-44b5bef36a3d down in Southbound
Oct 14 08:59:33 compute-0 ovn_controller[152662]: 2025-10-14T08:59:33Z|00254|binding|INFO|Removing iface tap2da46865-98 ovn-installed in OVS
Oct 14 08:59:33 compute-0 nova_compute[259627]: 2025-10-14 08:59:33.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:33.867 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:d2:4a 10.100.0.13'], port_security=['fa:16:3e:28:d2:4a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2da46865-98ea-42a7-a5cc-44b5bef36a3d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:33.868 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2da46865-98ea-42a7-a5cc-44b5bef36a3d in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 unbound from our chassis
Oct 14 08:59:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:33.870 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99
Oct 14 08:59:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:33.890 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f481b5-6877-4192-a24f-8d27b008f68e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:33 compute-0 nova_compute[259627]: 2025-10-14 08:59:33.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:33 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Oct 14 08:59:33 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Consumed 15.044s CPU time.
Oct 14 08:59:33 compute-0 systemd-machined[214636]: Machine qemu-30-instance-0000001e terminated.
Oct 14 08:59:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:33.927 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d1532e2c-86ed-4232-9924-242c102eb91e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:33.930 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1af90aac-58ef-4935-9dfb-2e4fc7590ced]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:33.968 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[70e39bb1-8957-48f2-8086-753121bb153f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:33.990 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3558fc08-6ed5-4374-b846-02f532ae2097]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 18, 'rx_bytes': 1000, 'tx_bytes': 948, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 18, 'rx_bytes': 1000, 'tx_bytes': 948, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297404, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:34.019 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d84689-b16d-4b96-9339-9345c03dd01d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611657, 'tstamp': 611657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297406, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611661, 'tstamp': 611661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297406, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:34.025 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.031 2 INFO nova.virt.libvirt.driver [-] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Instance destroyed successfully.
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.031 2 DEBUG nova.objects.instance [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'resources' on Instance uuid 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:34.036 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d50d6a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:34.036 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:34.036 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d50d6a-60, col_values=(('external_ids', {'iface-id': '650f034e-5333-49ba-9907-b0409944aee7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:34.037 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.043 2 DEBUG nova.virt.libvirt.vif [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-208119549',display_name='tempest-ListServerFiltersTestJSON-instance-208119549',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-208119549',id=30,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-ldwr4ls0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListServerFiltersTestJSON-1842486796-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:03Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.044 2 DEBUG nova.network.os_vif_util [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.044 2 DEBUG nova.network.os_vif_util [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:d2:4a,bridge_name='br-int',has_traffic_filtering=True,id=2da46865-98ea-42a7-a5cc-44b5bef36a3d,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2da46865-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.045 2 DEBUG os_vif [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:d2:4a,bridge_name='br-int',has_traffic_filtering=True,id=2da46865-98ea-42a7-a5cc-44b5bef36a3d,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2da46865-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.047 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2da46865-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.052 2 INFO os_vif [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:d2:4a,bridge_name='br-int',has_traffic_filtering=True,id=2da46865-98ea-42a7-a5cc-44b5bef36a3d,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2da46865-98')
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.431 2 INFO nova.virt.libvirt.driver [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Deleting instance files /var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_del
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.433 2 INFO nova.virt.libvirt.driver [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Deletion of /var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_del complete
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.485 2 INFO nova.compute.manager [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.486 2 DEBUG oslo.service.loopingcall [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.486 2 DEBUG nova.compute.manager [-] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:59:34 compute-0 nova_compute[259627]: 2025-10-14 08:59:34.487 2 DEBUG nova.network.neutron [-] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:59:34 compute-0 ceph-mon[74249]: pgmap v1248: 305 pgs: 305 active+clean; 325 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Oct 14 08:59:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:35.339 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:35 compute-0 nova_compute[259627]: 2025-10-14 08:59:35.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:35.340 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 08:59:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 246 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 155 op/s
Oct 14 08:59:36 compute-0 ovn_controller[152662]: 2025-10-14T08:59:36Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c3:07:ec 10.100.0.11
Oct 14 08:59:36 compute-0 ovn_controller[152662]: 2025-10-14T08:59:36Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c3:07:ec 10.100.0.11
Oct 14 08:59:36 compute-0 ceph-mon[74249]: pgmap v1249: 305 pgs: 305 active+clean; 246 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 155 op/s
Oct 14 08:59:36 compute-0 nova_compute[259627]: 2025-10-14 08:59:36.879 2 DEBUG nova.network.neutron [-] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:36 compute-0 nova_compute[259627]: 2025-10-14 08:59:36.900 2 INFO nova.compute.manager [-] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Took 2.41 seconds to deallocate network for instance.
Oct 14 08:59:36 compute-0 nova_compute[259627]: 2025-10-14 08:59:36.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:36 compute-0 nova_compute[259627]: 2025-10-14 08:59:36.956 2 DEBUG oslo_concurrency.lockutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:36 compute-0 nova_compute[259627]: 2025-10-14 08:59:36.957 2 DEBUG oslo_concurrency.lockutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:36 compute-0 nova_compute[259627]: 2025-10-14 08:59:36.980 2 DEBUG nova.compute.manager [req-295c0c76-6dcc-4348-80f2-9b32b046a165 req-e96e52d0-1e20-4b1f-9e47-148f058bafb4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Received event network-vif-deleted-2da46865-98ea-42a7-a5cc-44b5bef36a3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:37 compute-0 nova_compute[259627]: 2025-10-14 08:59:37.089 2 DEBUG oslo_concurrency.processutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:59:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:59:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/898397418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:37 compute-0 nova_compute[259627]: 2025-10-14 08:59:37.545 2 DEBUG oslo_concurrency.processutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:37 compute-0 nova_compute[259627]: 2025-10-14 08:59:37.554 2 DEBUG nova.compute.provider_tree [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:59:37 compute-0 nova_compute[259627]: 2025-10-14 08:59:37.574 2 DEBUG nova.scheduler.client.report [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:59:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 246 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Oct 14 08:59:37 compute-0 nova_compute[259627]: 2025-10-14 08:59:37.600 2 DEBUG oslo_concurrency.lockutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:37 compute-0 nova_compute[259627]: 2025-10-14 08:59:37.622 2 INFO nova.scheduler.client.report [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Deleted allocations for instance 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9
Oct 14 08:59:37 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/898397418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:37 compute-0 nova_compute[259627]: 2025-10-14 08:59:37.703 2 DEBUG oslo_concurrency.lockutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.205 2 DEBUG oslo_concurrency.lockutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "eb820455-d45c-4331-9363-124f11537f52" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.206 2 DEBUG oslo_concurrency.lockutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.207 2 DEBUG oslo_concurrency.lockutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "eb820455-d45c-4331-9363-124f11537f52-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.207 2 DEBUG oslo_concurrency.lockutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.208 2 DEBUG oslo_concurrency.lockutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.210 2 INFO nova.compute.manager [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Terminating instance
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.211 2 DEBUG nova.compute.manager [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:59:38 compute-0 kernel: tapacc7c80f-88 (unregistering): left promiscuous mode
Oct 14 08:59:38 compute-0 NetworkManager[44885]: <info>  [1760432378.2700] device (tapacc7c80f-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:59:38 compute-0 ovn_controller[152662]: 2025-10-14T08:59:38Z|00255|binding|INFO|Releasing lport acc7c80f-8812-4bbf-93f8-cc3f1556b62a from this chassis (sb_readonly=0)
Oct 14 08:59:38 compute-0 ovn_controller[152662]: 2025-10-14T08:59:38Z|00256|binding|INFO|Setting lport acc7c80f-8812-4bbf-93f8-cc3f1556b62a down in Southbound
Oct 14 08:59:38 compute-0 ovn_controller[152662]: 2025-10-14T08:59:38Z|00257|binding|INFO|Removing iface tapacc7c80f-88 ovn-installed in OVS
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.296 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:46:6c 10.100.0.14'], port_security=['fa:16:3e:da:46:6c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'eb820455-d45c-4331-9363-124f11537f52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=acc7c80f-8812-4bbf-93f8-cc3f1556b62a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.297 162547 INFO neutron.agent.ovn.metadata.agent [-] Port acc7c80f-8812-4bbf-93f8-cc3f1556b62a in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 unbound from our chassis
Oct 14 08:59:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.298 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.317 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[52a0cf4d-e06d-4480-b9f4-da3bfcc203c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:38 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Oct 14 08:59:38 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001d.scope: Consumed 14.992s CPU time.
Oct 14 08:59:38 compute-0 systemd-machined[214636]: Machine qemu-32-instance-0000001d terminated.
Oct 14 08:59:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.356 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[28e0e229-a065-4da8-a592-3222682b5db0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.360 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[138e83e0-dcae-4008-836d-5d207a1c0a2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.393 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[18d40bad-ba20-458d-ae43-a5b638d7d564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.414 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[14862412-4949-4674-86f5-32cc57069159]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 20, 'rx_bytes': 1000, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 20, 'rx_bytes': 1000, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297472, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.427 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5a29042c-65d6-46e7-b628-ab9dec82238d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611657, 'tstamp': 611657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297473, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611661, 'tstamp': 611661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297473, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.428 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.433 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d50d6a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.434 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.434 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d50d6a-60, col_values=(('external_ids', {'iface-id': '650f034e-5333-49ba-9907-b0409944aee7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.434 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.450 2 INFO nova.virt.libvirt.driver [-] [instance: eb820455-d45c-4331-9363-124f11537f52] Instance destroyed successfully.
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.451 2 DEBUG nova.objects.instance [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'resources' on Instance uuid eb820455-d45c-4331-9363-124f11537f52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.467 2 DEBUG nova.virt.libvirt.vif [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-195518745',display_name='tempest-ListServerFiltersTestJSON-instance-195518745',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-195518745',id=29,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-0kd7c49h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListServerFiltersTestJSON-1842486796-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:02Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=eb820455-d45c-4331-9363-124f11537f52,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.468 2 DEBUG nova.network.os_vif_util [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.469 2 DEBUG nova.network.os_vif_util [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:46:6c,bridge_name='br-int',has_traffic_filtering=True,id=acc7c80f-8812-4bbf-93f8-cc3f1556b62a,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacc7c80f-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.469 2 DEBUG os_vif [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:46:6c,bridge_name='br-int',has_traffic_filtering=True,id=acc7c80f-8812-4bbf-93f8-cc3f1556b62a,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacc7c80f-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.471 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacc7c80f-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.477 2 INFO os_vif [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:46:6c,bridge_name='br-int',has_traffic_filtering=True,id=acc7c80f-8812-4bbf-93f8-cc3f1556b62a,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacc7c80f-88')
Oct 14 08:59:38 compute-0 ceph-mon[74249]: pgmap v1250: 305 pgs: 305 active+clean; 246 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.841 2 INFO nova.virt.libvirt.driver [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Deleting instance files /var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52_del
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.842 2 INFO nova.virt.libvirt.driver [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Deletion of /var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52_del complete
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.908 2 INFO nova.compute.manager [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.909 2 DEBUG oslo.service.loopingcall [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.909 2 DEBUG nova.compute.manager [-] [instance: eb820455-d45c-4331-9363-124f11537f52] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.909 2 DEBUG nova.network.neutron [-] [instance: eb820455-d45c-4331-9363-124f11537f52] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.983 2 DEBUG nova.compute.manager [req-f82e92a1-ce29-48c6-b6f5-0b050536a0f0 req-b0d62a3d-c667-4047-86c2-6d5cb0fca915 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Received event network-vif-plugged-ecb526b7-d1ad-4a75-b851-482702018258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.984 2 DEBUG oslo_concurrency.lockutils [req-f82e92a1-ce29-48c6-b6f5-0b050536a0f0 req-b0d62a3d-c667-4047-86c2-6d5cb0fca915 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f3dafba3-6472-4921-9ece-b6076172365e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.984 2 DEBUG oslo_concurrency.lockutils [req-f82e92a1-ce29-48c6-b6f5-0b050536a0f0 req-b0d62a3d-c667-4047-86c2-6d5cb0fca915 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.984 2 DEBUG oslo_concurrency.lockutils [req-f82e92a1-ce29-48c6-b6f5-0b050536a0f0 req-b0d62a3d-c667-4047-86c2-6d5cb0fca915 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.984 2 DEBUG nova.compute.manager [req-f82e92a1-ce29-48c6-b6f5-0b050536a0f0 req-b0d62a3d-c667-4047-86c2-6d5cb0fca915 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Processing event network-vif-plugged-ecb526b7-d1ad-4a75-b851-482702018258 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.985 2 DEBUG nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.990 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432378.9899826, f3dafba3-6472-4921-9ece-b6076172365e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.990 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] VM Resumed (Lifecycle Event)
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.992 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.997 2 INFO nova.virt.libvirt.driver [-] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Instance spawned successfully.
Oct 14 08:59:38 compute-0 nova_compute[259627]: 2025-10-14 08:59:38.997 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.011 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.018 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.021 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.021 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.022 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.022 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.023 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.023 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.057 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.097 2 INFO nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Took 14.00 seconds to spawn the instance on the hypervisor.
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.097 2 DEBUG nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.176 2 INFO nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Took 15.04 seconds to build instance.
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.193 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.540 2 DEBUG nova.network.neutron [-] [instance: eb820455-d45c-4331-9363-124f11537f52] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.559 2 INFO nova.compute.manager [-] [instance: eb820455-d45c-4331-9363-124f11537f52] Took 0.65 seconds to deallocate network for instance.
Oct 14 08:59:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 246 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.596 2 DEBUG oslo_concurrency.lockutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.597 2 DEBUG oslo_concurrency.lockutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.640 2 DEBUG nova.compute.manager [req-6fe65eb0-0eb0-4b32-9175-75c40c74e297 req-186fafee-5646-4f3c-8502-30bf4106e92e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Received event network-vif-deleted-acc7c80f-8812-4bbf-93f8-cc3f1556b62a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:39 compute-0 nova_compute[259627]: 2025-10-14 08:59:39.703 2 DEBUG oslo_concurrency.processutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:59:40 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1694684315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:40 compute-0 nova_compute[259627]: 2025-10-14 08:59:40.202 2 DEBUG oslo_concurrency.processutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:40 compute-0 nova_compute[259627]: 2025-10-14 08:59:40.208 2 DEBUG nova.compute.provider_tree [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:59:40 compute-0 nova_compute[259627]: 2025-10-14 08:59:40.237 2 DEBUG nova.scheduler.client.report [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:59:40 compute-0 nova_compute[259627]: 2025-10-14 08:59:40.304 2 DEBUG oslo_concurrency.lockutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:40 compute-0 nova_compute[259627]: 2025-10-14 08:59:40.338 2 INFO nova.scheduler.client.report [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Deleted allocations for instance eb820455-d45c-4331-9363-124f11537f52
Oct 14 08:59:40 compute-0 nova_compute[259627]: 2025-10-14 08:59:40.415 2 DEBUG oslo_concurrency.lockutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:40 compute-0 ceph-mon[74249]: pgmap v1251: 305 pgs: 305 active+clean; 246 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Oct 14 08:59:40 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1694684315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:41.342 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 167 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 241 op/s
Oct 14 08:59:41 compute-0 podman[297527]: 2025-10-14 08:59:41.687309592 +0000 UTC m=+0.075813803 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 08:59:41 compute-0 podman[297526]: 2025-10-14 08:59:41.77725872 +0000 UTC m=+0.176621418 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:59:41 compute-0 nova_compute[259627]: 2025-10-14 08:59:41.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.407 2 DEBUG nova.compute.manager [req-25dadb79-c3ec-4c8e-815f-3ed163628b03 req-8f876e97-8c20-4fa3-a853-2e295847d81b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Received event network-vif-plugged-ecb526b7-d1ad-4a75-b851-482702018258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.408 2 DEBUG oslo_concurrency.lockutils [req-25dadb79-c3ec-4c8e-815f-3ed163628b03 req-8f876e97-8c20-4fa3-a853-2e295847d81b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f3dafba3-6472-4921-9ece-b6076172365e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.408 2 DEBUG oslo_concurrency.lockutils [req-25dadb79-c3ec-4c8e-815f-3ed163628b03 req-8f876e97-8c20-4fa3-a853-2e295847d81b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.409 2 DEBUG oslo_concurrency.lockutils [req-25dadb79-c3ec-4c8e-815f-3ed163628b03 req-8f876e97-8c20-4fa3-a853-2e295847d81b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.409 2 DEBUG nova.compute.manager [req-25dadb79-c3ec-4c8e-815f-3ed163628b03 req-8f876e97-8c20-4fa3-a853-2e295847d81b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] No waiting events found dispatching network-vif-plugged-ecb526b7-d1ad-4a75-b851-482702018258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.409 2 WARNING nova.compute.manager [req-25dadb79-c3ec-4c8e-815f-3ed163628b03 req-8f876e97-8c20-4fa3-a853-2e295847d81b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Received unexpected event network-vif-plugged-ecb526b7-d1ad-4a75-b851-482702018258 for instance with vm_state active and task_state None.
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.446 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.447 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.464 2 DEBUG nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:59:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.540 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.541 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.561 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.562 2 INFO nova.compute.claims [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.568 2 DEBUG oslo_concurrency.lockutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.569 2 DEBUG oslo_concurrency.lockutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.570 2 DEBUG oslo_concurrency.lockutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.571 2 DEBUG oslo_concurrency.lockutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.572 2 DEBUG oslo_concurrency.lockutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.573 2 INFO nova.compute.manager [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Terminating instance
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.575 2 DEBUG nova.compute.manager [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:59:42 compute-0 kernel: tap0b16cd6a-fe (unregistering): left promiscuous mode
Oct 14 08:59:42 compute-0 NetworkManager[44885]: <info>  [1760432382.6275] device (tap0b16cd6a-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:59:42 compute-0 ovn_controller[152662]: 2025-10-14T08:59:42Z|00258|binding|INFO|Releasing lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 from this chassis (sb_readonly=0)
Oct 14 08:59:42 compute-0 ovn_controller[152662]: 2025-10-14T08:59:42Z|00259|binding|INFO|Setting lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 down in Southbound
Oct 14 08:59:42 compute-0 ovn_controller[152662]: 2025-10-14T08:59:42Z|00260|binding|INFO|Removing iface tap0b16cd6a-fe ovn-installed in OVS
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:42.646 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:07:ec 10.100.0.11'], port_security=['fa:16:3e:c3:07:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27fa4cf8-c08c-46a2-af8f-17c8980a2317', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0b16cd6a-fe42-4a54-8bbe-810915fcaa93) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:42.647 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 unbound from our chassis
Oct 14 08:59:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:42.649 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:59:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:42.650 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bdc4e2ef-a626-460a-8152-098706dd1430]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:42.651 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99 namespace which is not needed anymore
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:42 compute-0 ceph-mon[74249]: pgmap v1252: 305 pgs: 305 active+clean; 167 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 241 op/s
Oct 14 08:59:42 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Oct 14 08:59:42 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001b.scope: Consumed 13.046s CPU time.
Oct 14 08:59:42 compute-0 systemd-machined[214636]: Machine qemu-34-instance-0000001b terminated.
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.727 2 DEBUG nova.objects.instance [None req-af3d57db-6051-4643-905e-4c951fe186b9 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'pci_devices' on Instance uuid f3dafba3-6472-4921-9ece-b6076172365e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.740 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.791 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432382.791429, f3dafba3-6472-4921-9ece-b6076172365e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.792 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] VM Paused (Lifecycle Event)
Oct 14 08:59:42 compute-0 neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99[294112]: [NOTICE]   (294131) : haproxy version is 2.8.14-c23fe91
Oct 14 08:59:42 compute-0 neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99[294112]: [NOTICE]   (294131) : path to executable is /usr/sbin/haproxy
Oct 14 08:59:42 compute-0 neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99[294112]: [WARNING]  (294131) : Exiting Master process...
Oct 14 08:59:42 compute-0 neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99[294112]: [ALERT]    (294131) : Current worker (294143) exited with code 143 (Terminated)
Oct 14 08:59:42 compute-0 neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99[294112]: [WARNING]  (294131) : All workers exited. Exiting... (0)
Oct 14 08:59:42 compute-0 systemd[1]: libpod-212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782.scope: Deactivated successfully.
Oct 14 08:59:42 compute-0 podman[297593]: 2025-10-14 08:59:42.811001924 +0000 UTC m=+0.062033583 container died 212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.814 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.828 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.831 2 INFO nova.virt.libvirt.driver [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance destroyed successfully.
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.832 2 DEBUG nova.objects.instance [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'resources' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782-userdata-shm.mount: Deactivated successfully.
Oct 14 08:59:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7915092ded73a0a1e93e8fda5fd9538a3414c7bf089afe2c761ba10dc82e95d-merged.mount: Deactivated successfully.
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.852 2 DEBUG nova.virt.libvirt.vif [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1883268496',display_name='tempest-ListServerFiltersTestJSON-instance-1883268496',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1883268496',id=27,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-7u6hc4e7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListServerFiltersTestJSON-1842486796-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:24Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=27fa4cf8-c08c-46a2-af8f-17c8980a2317,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.852 2 DEBUG nova.network.os_vif_util [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.853 2 DEBUG nova.network.os_vif_util [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.853 2 DEBUG os_vif [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.857 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b16cd6a-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.858 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:42 compute-0 nova_compute[259627]: 2025-10-14 08:59:42.862 2 INFO os_vif [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe')
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 08:59:42 compute-0 podman[297593]: 2025-10-14 08:59:42.886388905 +0000 UTC m=+0.137420564 container cleanup 212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001106859860324282 of space, bias 1.0, pg target 0.3320579580972846 quantized to 32 (current 32)
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 08:59:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 08:59:42 compute-0 systemd[1]: libpod-conmon-212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782.scope: Deactivated successfully.
Oct 14 08:59:43 compute-0 podman[297662]: 2025-10-14 08:59:43.014952122 +0000 UTC m=+0.098590552 container remove 212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.020 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ef373034-ae03-44c0-9b9a-da327b0a0b96]: (4, ('Tue Oct 14 08:59:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99 (212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782)\n212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782\nTue Oct 14 08:59:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99 (212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782)\n212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.023 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dc8ed956-fb48-43c6-996e-bee168394d57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.023 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:43 compute-0 kernel: tapc4d50d6a-60: left promiscuous mode
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.042 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[46ce3038-3162-436e-874b-9e2f3343a39b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.067 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8f1f4b-2597-4ff3-a1e7-2bc94b92d182]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.068 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f60605-05fe-411f-9357-fedff9a1b730]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.087 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2b9c246a-9c81-4d3a-acfb-83d22adfdb06]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611635, 'reachable_time': 40378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297684, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:43 compute-0 systemd[1]: run-netns-ovnmeta\x2dc4d50d6a\x2d6686\x2d4b50\x2db1e5\x2d9f71bae17a99.mount: Deactivated successfully.
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.092 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.092 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[9b22b370-f459-439b-a91f-96f50969cabc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.113 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.113 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:43 compute-0 kernel: tapecb526b7-d1 (unregistering): left promiscuous mode
Oct 14 08:59:43 compute-0 NetworkManager[44885]: <info>  [1760432383.1474] device (tapecb526b7-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.147 2 DEBUG nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:43 compute-0 ovn_controller[152662]: 2025-10-14T08:59:43Z|00261|binding|INFO|Releasing lport ecb526b7-d1ad-4a75-b851-482702018258 from this chassis (sb_readonly=0)
Oct 14 08:59:43 compute-0 ovn_controller[152662]: 2025-10-14T08:59:43Z|00262|binding|INFO|Setting lport ecb526b7-d1ad-4a75-b851-482702018258 down in Southbound
Oct 14 08:59:43 compute-0 ovn_controller[152662]: 2025-10-14T08:59:43Z|00263|binding|INFO|Removing iface tapecb526b7-d1 ovn-installed in OVS
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.178 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:8b:13 10.100.0.9'], port_security=['fa:16:3e:e5:8b:13 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f3dafba3-6472-4921-9ece-b6076172365e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ecb526b7-d1ad-4a75-b851-482702018258) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.180 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ecb526b7-d1ad-4a75-b851-482702018258 in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a unbound from our chassis
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.182 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.183 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[25dcbd94-66f2-49bb-a2c7-35cd5efbc599]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.184 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a namespace which is not needed anymore
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:43 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Oct 14 08:59:43 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Consumed 4.728s CPU time.
Oct 14 08:59:43 compute-0 systemd-machined[214636]: Machine qemu-35-instance-0000001f terminated.
Oct 14 08:59:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:59:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3953465074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.228 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.233 2 DEBUG nova.compute.provider_tree [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.235 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.247 2 DEBUG nova.scheduler.client.report [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.273 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.274 2 DEBUG nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.276 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.286 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.287 2 INFO nova.compute.claims [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:59:43 compute-0 NetworkManager[44885]: <info>  [1760432383.3065] manager: (tapecb526b7-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/126)
Oct 14 08:59:43 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[297380]: [NOTICE]   (297384) : haproxy version is 2.8.14-c23fe91
Oct 14 08:59:43 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[297380]: [NOTICE]   (297384) : path to executable is /usr/sbin/haproxy
Oct 14 08:59:43 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[297380]: [WARNING]  (297384) : Exiting Master process...
Oct 14 08:59:43 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[297380]: [WARNING]  (297384) : Exiting Master process...
Oct 14 08:59:43 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[297380]: [ALERT]    (297384) : Current worker (297386) exited with code 143 (Terminated)
Oct 14 08:59:43 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[297380]: [WARNING]  (297384) : All workers exited. Exiting... (0)
Oct 14 08:59:43 compute-0 systemd[1]: libpod-d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1.scope: Deactivated successfully.
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.320 2 DEBUG nova.compute.manager [None req-af3d57db-6051-4643-905e-4c951fe186b9 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:43 compute-0 podman[297709]: 2025-10-14 08:59:43.323286704 +0000 UTC m=+0.050051710 container died d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.338 2 DEBUG nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.339 2 DEBUG nova.network.neutron [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:59:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1-userdata-shm.mount: Deactivated successfully.
Oct 14 08:59:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b83f526bc4bd45d25326b47f32c13811952e9e26bf37d62c64b21ce07a2169ec-merged.mount: Deactivated successfully.
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.366 2 INFO nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:59:43 compute-0 podman[297709]: 2025-10-14 08:59:43.368875793 +0000 UTC m=+0.095640799 container cleanup d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 08:59:43 compute-0 systemd[1]: libpod-conmon-d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1.scope: Deactivated successfully.
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.398 2 DEBUG nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:59:43 compute-0 podman[297750]: 2025-10-14 08:59:43.436176046 +0000 UTC m=+0.043205702 container remove d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.442 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[12f2f189-796a-4039-b6f7-aafca1756f51]: (4, ('Tue Oct 14 08:59:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a (d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1)\nd8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1\nTue Oct 14 08:59:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a (d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1)\nd8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.444 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f746ef4a-80be-414b-baa8-60df8e37b995]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.447 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.450 2 INFO nova.virt.libvirt.driver [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Deleting instance files /var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317_del
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.450 2 INFO nova.virt.libvirt.driver [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Deletion of /var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317_del complete
Oct 14 08:59:43 compute-0 kernel: tap2322cf7a-00: left promiscuous mode
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.480 2 DEBUG nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.481 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.481 2 INFO nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Creating image(s)
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.490 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[723f6a83-03ee-4014-aea3-5ec38a4f6e3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.509 2 DEBUG nova.storage.rbd_utils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.517 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a827162c-7ae1-41de-8450-f1685bd67851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.519 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[83dc47b2-d6e1-425a-98c2-ccb820e73cc9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.534 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[41912d64-a70a-4098-b90f-fb5ad0a31c8b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614980, 'reachable_time': 40063, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297798, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.536 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:59:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.537 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[837c011d-1c8b-4767-aaa1-69dd779bca68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.540 2 DEBUG nova.storage.rbd_utils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.561 2 DEBUG nova.storage.rbd_utils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.564 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 167 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 27 KiB/s wr, 165 op/s
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.594 2 DEBUG nova.policy [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56001fe1c9fc432e923f8c57058754db', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.598 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.647 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.648 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.649 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.649 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.672 2 DEBUG nova.storage.rbd_utils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.675 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:43 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3953465074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.715 2 INFO nova.compute.manager [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Took 1.14 seconds to destroy the instance on the hypervisor.
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.716 2 DEBUG oslo.service.loopingcall [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.717 2 DEBUG nova.compute.manager [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.718 2 DEBUG nova.network.neutron [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:59:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d2322cf7a\x2d0090\x2d40fa\x2da558\x2d42d84cc6fc2a.mount: Deactivated successfully.
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.949 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:43 compute-0 nova_compute[259627]: 2025-10-14 08:59:43.997 2 DEBUG nova.storage.rbd_utils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] resizing rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:59:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:59:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2547583162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.086 2 DEBUG nova.objects.instance [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'migration_context' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.090 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.095 2 DEBUG nova.compute.provider_tree [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.101 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.102 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Ensure instance console log exists: /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.102 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.102 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.103 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.108 2 DEBUG nova.scheduler.client.report [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.137 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.138 2 DEBUG nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.185 2 DEBUG nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.185 2 DEBUG nova.network.neutron [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.202 2 INFO nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.219 2 DEBUG nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.329 2 DEBUG nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.332 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.332 2 INFO nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Creating image(s)
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.361 2 DEBUG nova.storage.rbd_utils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] rbd image 1af6c158-005b-4f3c-9044-87158e57378d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.386 2 DEBUG nova.storage.rbd_utils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] rbd image 1af6c158-005b-4f3c-9044-87158e57378d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.406 2 DEBUG nova.storage.rbd_utils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] rbd image 1af6c158-005b-4f3c-9044-87158e57378d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.410 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.437 2 DEBUG nova.network.neutron [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.441 2 DEBUG nova.policy [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9ec2f781b62446cb98129707144b9d37', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd273d79854242779e57eece9a65f7c0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.447 2 DEBUG nova.network.neutron [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Successfully created port: 60379992-d75d-4eff-a6bb-5d1615f35475 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.455 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "b932e3d1-4cf6-4934-9eec-c93284b17b43" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.455 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.486 2 INFO nova.compute.manager [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Took 0.77 seconds to deallocate network for instance.
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.492 2 DEBUG nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.494 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.495 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.495 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.495 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.519 2 DEBUG nova.storage.rbd_utils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] rbd image 1af6c158-005b-4f3c-9044-87158e57378d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.522 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1af6c158-005b-4f3c-9044-87158e57378d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.564 2 DEBUG nova.compute.manager [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.570 2 DEBUG oslo_concurrency.lockutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.571 2 DEBUG oslo_concurrency.lockutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.592 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.614 2 INFO nova.compute.manager [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] instance snapshotting
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.615 2 WARNING nova.compute.manager [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] trying to snapshot a non-running instance: (state: 4 expected: 1)
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.667 2 DEBUG nova.compute.manager [req-f27054b8-bc5c-41bd-b0b7-23084921ea31 req-23d8353f-0b88-4e67-9e33-fe698629c3f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-deleted-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.698 2 DEBUG oslo_concurrency.processutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:44 compute-0 ceph-mon[74249]: pgmap v1253: 305 pgs: 305 active+clean; 167 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 27 KiB/s wr, 165 op/s
Oct 14 08:59:44 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2547583162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.752 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1af6c158-005b-4f3c-9044-87158e57378d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.799 2 DEBUG nova.storage.rbd_utils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] resizing rbd image 1af6c158-005b-4f3c-9044-87158e57378d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.831 2 INFO nova.virt.libvirt.driver [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Beginning cold snapshot process
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.883 2 DEBUG nova.objects.instance [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lazy-loading 'migration_context' on Instance uuid 1af6c158-005b-4f3c-9044-87158e57378d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.916 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.917 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Ensure instance console log exists: /var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.917 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.918 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:44 compute-0 nova_compute[259627]: 2025-10-14 08:59:44.918 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.011 2 DEBUG nova.virt.libvirt.imagebackend [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.030 2 DEBUG nova.network.neutron [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Successfully created port: 36158ae1-8367-4859-a407-565fde315649 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:59:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:59:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211805633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.108 2 DEBUG oslo_concurrency.processutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.113 2 DEBUG nova.compute.provider_tree [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.153 2 DEBUG nova.scheduler.client.report [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.172 2 DEBUG nova.network.neutron [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Successfully updated port: 60379992-d75d-4eff-a6bb-5d1615f35475 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.178 2 DEBUG oslo_concurrency.lockutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.180 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.187 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.187 2 INFO nova.compute.claims [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.191 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "refresh_cache-310ebd88-5fe0-40ad-99dd-c3a1b410d357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.191 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquired lock "refresh_cache-310ebd88-5fe0-40ad-99dd-c3a1b410d357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.191 2 DEBUG nova.network.neutron [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.211 2 INFO nova.scheduler.client.report [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Deleted allocations for instance 27fa4cf8-c08c-46a2-af8f-17c8980a2317
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.278 2 DEBUG nova.compute.manager [req-3c2eab46-0c5a-45f9-b940-900ffd2201b6 req-f0894da6-a494-4984-9a2e-0a209cfa6048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received event network-changed-60379992-d75d-4eff-a6bb-5d1615f35475 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.279 2 DEBUG nova.compute.manager [req-3c2eab46-0c5a-45f9-b940-900ffd2201b6 req-f0894da6-a494-4984-9a2e-0a209cfa6048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Refreshing instance network info cache due to event network-changed-60379992-d75d-4eff-a6bb-5d1615f35475. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.279 2 DEBUG oslo_concurrency.lockutils [req-3c2eab46-0c5a-45f9-b940-900ffd2201b6 req-f0894da6-a494-4984-9a2e-0a209cfa6048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-310ebd88-5fe0-40ad-99dd-c3a1b410d357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.285 2 DEBUG oslo_concurrency.lockutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.298 2 DEBUG nova.storage.rbd_utils [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] creating snapshot(83423e3ebdc24f9ea1f9981cd320a086) on rbd image(f3dafba3-6472-4921-9ece-b6076172365e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.366 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 180 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 256 op/s
Oct 14 08:59:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct 14 08:59:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct 14 08:59:45 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct 14 08:59:45 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1211805633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.806 2 DEBUG nova.storage.rbd_utils [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] cloning vms/f3dafba3-6472-4921-9ece-b6076172365e_disk@83423e3ebdc24f9ea1f9981cd320a086 to images/2762376e-9539-4ed8-bf9b-be2decee774f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 08:59:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:59:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1599579862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.847 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.854 2 DEBUG nova.compute.provider_tree [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.875 2 DEBUG nova.scheduler.client.report [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.907 2 DEBUG nova.storage.rbd_utils [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] flattening images/2762376e-9539-4ed8-bf9b-be2decee774f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.952 2 DEBUG nova.network.neutron [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.956 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:45 compute-0 nova_compute[259627]: 2025-10-14 08:59:45.957 2 DEBUG nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.025 2 DEBUG nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.025 2 DEBUG nova.network.neutron [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.055 2 INFO nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.079 2 DEBUG nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.119 2 DEBUG nova.storage.rbd_utils [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] removing snapshot(83423e3ebdc24f9ea1f9981cd320a086) on rbd image(f3dafba3-6472-4921-9ece-b6076172365e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.179 2 DEBUG nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.180 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.180 2 INFO nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Creating image(s)
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.204 2 DEBUG nova.storage.rbd_utils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image b932e3d1-4cf6-4934-9eec-c93284b17b43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.239 2 DEBUG nova.storage.rbd_utils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image b932e3d1-4cf6-4934-9eec-c93284b17b43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.273 2 DEBUG nova.storage.rbd_utils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image b932e3d1-4cf6-4934-9eec-c93284b17b43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.278 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.323 2 DEBUG nova.policy [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56001fe1c9fc432e923f8c57058754db', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.378 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.379 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.380 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.380 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.397 2 DEBUG nova.storage.rbd_utils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image b932e3d1-4cf6-4934-9eec-c93284b17b43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.400 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 b932e3d1-4cf6-4934-9eec-c93284b17b43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.676 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 b932e3d1-4cf6-4934-9eec-c93284b17b43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct 14 08:59:46 compute-0 ceph-mon[74249]: pgmap v1254: 305 pgs: 305 active+clean; 180 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 256 op/s
Oct 14 08:59:46 compute-0 ceph-mon[74249]: osdmap e148: 3 total, 3 up, 3 in
Oct 14 08:59:46 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1599579862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.730 2 DEBUG nova.storage.rbd_utils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] resizing rbd image b932e3d1-4cf6-4934-9eec-c93284b17b43_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:59:46 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.770 2 DEBUG nova.compute.manager [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Received event network-vif-unplugged-ecb526b7-d1ad-4a75-b851-482702018258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.770 2 DEBUG oslo_concurrency.lockutils [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f3dafba3-6472-4921-9ece-b6076172365e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.770 2 DEBUG oslo_concurrency.lockutils [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.771 2 DEBUG oslo_concurrency.lockutils [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.771 2 DEBUG nova.compute.manager [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] No waiting events found dispatching network-vif-unplugged-ecb526b7-d1ad-4a75-b851-482702018258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.771 2 WARNING nova.compute.manager [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Received unexpected event network-vif-unplugged-ecb526b7-d1ad-4a75-b851-482702018258 for instance with vm_state suspended and task_state image_uploading.
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.771 2 DEBUG nova.compute.manager [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Received event network-vif-plugged-ecb526b7-d1ad-4a75-b851-482702018258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.771 2 DEBUG oslo_concurrency.lockutils [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f3dafba3-6472-4921-9ece-b6076172365e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.772 2 DEBUG oslo_concurrency.lockutils [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.772 2 DEBUG oslo_concurrency.lockutils [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.772 2 DEBUG nova.compute.manager [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] No waiting events found dispatching network-vif-plugged-ecb526b7-d1ad-4a75-b851-482702018258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.772 2 WARNING nova.compute.manager [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Received unexpected event network-vif-plugged-ecb526b7-d1ad-4a75-b851-482702018258 for instance with vm_state suspended and task_state image_uploading.
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.796 2 DEBUG nova.storage.rbd_utils [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] creating snapshot(snap) on rbd image(2762376e-9539-4ed8-bf9b-be2decee774f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.860 2 DEBUG nova.objects.instance [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'migration_context' on Instance uuid b932e3d1-4cf6-4934-9eec-c93284b17b43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.877 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.877 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Ensure instance console log exists: /var/lib/nova/instances/b932e3d1-4cf6-4934-9eec-c93284b17b43/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.878 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.878 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.878 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:46 compute-0 nova_compute[259627]: 2025-10-14 08:59:46.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.255 2 DEBUG nova.network.neutron [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Successfully updated port: 36158ae1-8367-4859-a407-565fde315649 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.274 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "refresh_cache-1af6c158-005b-4f3c-9044-87158e57378d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.274 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquired lock "refresh_cache-1af6c158-005b-4f3c-9044-87158e57378d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.274 2 DEBUG nova.network.neutron [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.492 2 DEBUG nova.network.neutron [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:59:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:59:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 180 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.3 MiB/s wr, 306 op/s
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.616 2 DEBUG nova.network.neutron [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Updating instance_info_cache with network_info: [{"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.635 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Releasing lock "refresh_cache-310ebd88-5fe0-40ad-99dd-c3a1b410d357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.636 2 DEBUG nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance network_info: |[{"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.636 2 DEBUG oslo_concurrency.lockutils [req-3c2eab46-0c5a-45f9-b940-900ffd2201b6 req-f0894da6-a494-4984-9a2e-0a209cfa6048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-310ebd88-5fe0-40ad-99dd-c3a1b410d357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.636 2 DEBUG nova.network.neutron [req-3c2eab46-0c5a-45f9-b940-900ffd2201b6 req-f0894da6-a494-4984-9a2e-0a209cfa6048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Refreshing network info cache for port 60379992-d75d-4eff-a6bb-5d1615f35475 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.638 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Start _get_guest_xml network_info=[{"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.642 2 DEBUG nova.network.neutron [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Successfully created port: 6fb13023-6749-4e1b-b7d9-235dff8e72d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.647 2 WARNING nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.653 2 DEBUG nova.virt.libvirt.host [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.654 2 DEBUG nova.virt.libvirt.host [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.660 2 DEBUG nova.virt.libvirt.host [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.660 2 DEBUG nova.virt.libvirt.host [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.660 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.661 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.661 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.661 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.661 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.662 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.662 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.662 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.662 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.663 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.663 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.663 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.665 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct 14 08:59:47 compute-0 ceph-mon[74249]: osdmap e149: 3 total, 3 up, 3 in
Oct 14 08:59:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct 14 08:59:47 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct 14 08:59:47 compute-0 nova_compute[259627]: 2025-10-14 08:59:47.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:59:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/442874738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.204 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.235 2 DEBUG nova.storage.rbd_utils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.240 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:59:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3924961389' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.692 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.694 2 DEBUG nova.virt.libvirt.vif [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-490112967',display_name='tempest-ServersAdminTestJSON-server-490112967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-490112967',id=32,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-xf25yl7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:59:43Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=310ebd88-5fe0-40ad-99dd-c3a1b410d357,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.694 2 DEBUG nova.network.os_vif_util [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.695 2 DEBUG nova.network.os_vif_util [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.696 2 DEBUG nova.objects.instance [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'pci_devices' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.711 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:59:48 compute-0 nova_compute[259627]:   <uuid>310ebd88-5fe0-40ad-99dd-c3a1b410d357</uuid>
Oct 14 08:59:48 compute-0 nova_compute[259627]:   <name>instance-00000020</name>
Oct 14 08:59:48 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:59:48 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:59:48 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersAdminTestJSON-server-490112967</nova:name>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:59:47</nova:creationTime>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:59:48 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:59:48 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:59:48 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:59:48 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:59:48 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:59:48 compute-0 nova_compute[259627]:         <nova:user uuid="56001fe1c9fc432e923f8c57058754db">tempest-ServersAdminTestJSON-276167539-project-member</nova:user>
Oct 14 08:59:48 compute-0 nova_compute[259627]:         <nova:project uuid="ed7ee17abdbe419cb7d7fd0da2cd2068">tempest-ServersAdminTestJSON-276167539</nova:project>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:59:48 compute-0 nova_compute[259627]:         <nova:port uuid="60379992-d75d-4eff-a6bb-5d1615f35475">
Oct 14 08:59:48 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:59:48 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:59:48 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <system>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <entry name="serial">310ebd88-5fe0-40ad-99dd-c3a1b410d357</entry>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <entry name="uuid">310ebd88-5fe0-40ad-99dd-c3a1b410d357</entry>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     </system>
Oct 14 08:59:48 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:59:48 compute-0 nova_compute[259627]:   <os>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:   </os>
Oct 14 08:59:48 compute-0 nova_compute[259627]:   <features>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:   </features>
Oct 14 08:59:48 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:59:48 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:59:48 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk">
Oct 14 08:59:48 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       </source>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:59:48 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config">
Oct 14 08:59:48 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       </source>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:59:48 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:47:c2:c5"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <target dev="tap60379992-d7"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/console.log" append="off"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <video>
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     </video>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:59:48 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:59:48 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:59:48 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:59:48 compute-0 nova_compute[259627]: </domain>
Oct 14 08:59:48 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.712 2 DEBUG nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Preparing to wait for external event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.713 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.713 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.714 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.715 2 DEBUG nova.virt.libvirt.vif [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-490112967',display_name='tempest-ServersAdminTestJSON-server-490112967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-490112967',id=32,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-xf25yl7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:59:43Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=310ebd88-5fe0-40ad-99dd-c3a1b410d357,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.716 2 DEBUG nova.network.os_vif_util [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.717 2 DEBUG nova.network.os_vif_util [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.718 2 DEBUG os_vif [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.724 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.725 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60379992-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap60379992-d7, col_values=(('external_ids', {'iface-id': '60379992-d75d-4eff-a6bb-5d1615f35475', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:c2:c5', 'vm-uuid': '310ebd88-5fe0-40ad-99dd-c3a1b410d357'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:48 compute-0 NetworkManager[44885]: <info>  [1760432388.7339] manager: (tap60379992-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.741 2 INFO os_vif [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7')
Oct 14 08:59:48 compute-0 ceph-mon[74249]: pgmap v1257: 305 pgs: 305 active+clean; 180 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.3 MiB/s wr, 306 op/s
Oct 14 08:59:48 compute-0 ceph-mon[74249]: osdmap e150: 3 total, 3 up, 3 in
Oct 14 08:59:48 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/442874738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:48 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3924961389' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.879 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.879 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.879 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No VIF found with MAC fa:16:3e:47:c2:c5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.880 2 INFO nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Using config drive
Oct 14 08:59:48 compute-0 nova_compute[259627]: 2025-10-14 08:59:48.897 2 DEBUG nova.storage.rbd_utils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.030 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432374.0295033, 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.030 2 INFO nova.compute.manager [-] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] VM Stopped (Lifecycle Event)
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.051 2 DEBUG nova.compute.manager [None req-6319883c-1dc7-412a-83a0-b380cefd926e - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.108 2 DEBUG nova.network.neutron [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Updating instance_info_cache with network_info: [{"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.134 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Releasing lock "refresh_cache-1af6c158-005b-4f3c-9044-87158e57378d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.134 2 DEBUG nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Instance network_info: |[{"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.137 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Start _get_guest_xml network_info=[{"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.142 2 WARNING nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.145 2 DEBUG nova.network.neutron [req-3c2eab46-0c5a-45f9-b940-900ffd2201b6 req-f0894da6-a494-4984-9a2e-0a209cfa6048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Updated VIF entry in instance network info cache for port 60379992-d75d-4eff-a6bb-5d1615f35475. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.146 2 DEBUG nova.network.neutron [req-3c2eab46-0c5a-45f9-b940-900ffd2201b6 req-f0894da6-a494-4984-9a2e-0a209cfa6048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Updating instance_info_cache with network_info: [{"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.156 2 DEBUG nova.virt.libvirt.host [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.157 2 DEBUG nova.virt.libvirt.host [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.160 2 DEBUG nova.virt.libvirt.host [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.160 2 DEBUG nova.virt.libvirt.host [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.161 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.161 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.162 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.162 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.162 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.162 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.162 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.163 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.163 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.163 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.163 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.164 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.167 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.219 2 DEBUG nova.compute.manager [req-c68e433f-2902-43dd-ab84-9d1f79ad0238 req-a04690de-232a-4717-b286-3bd6edcd33b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received event network-changed-36158ae1-8367-4859-a407-565fde315649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.220 2 DEBUG nova.compute.manager [req-c68e433f-2902-43dd-ab84-9d1f79ad0238 req-a04690de-232a-4717-b286-3bd6edcd33b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Refreshing instance network info cache due to event network-changed-36158ae1-8367-4859-a407-565fde315649. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.220 2 DEBUG oslo_concurrency.lockutils [req-c68e433f-2902-43dd-ab84-9d1f79ad0238 req-a04690de-232a-4717-b286-3bd6edcd33b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1af6c158-005b-4f3c-9044-87158e57378d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.220 2 DEBUG oslo_concurrency.lockutils [req-c68e433f-2902-43dd-ab84-9d1f79ad0238 req-a04690de-232a-4717-b286-3bd6edcd33b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1af6c158-005b-4f3c-9044-87158e57378d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.221 2 DEBUG nova.network.neutron [req-c68e433f-2902-43dd-ab84-9d1f79ad0238 req-a04690de-232a-4717-b286-3bd6edcd33b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Refreshing network info cache for port 36158ae1-8367-4859-a407-565fde315649 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.223 2 DEBUG oslo_concurrency.lockutils [req-3c2eab46-0c5a-45f9-b940-900ffd2201b6 req-f0894da6-a494-4984-9a2e-0a209cfa6048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-310ebd88-5fe0-40ad-99dd-c3a1b410d357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.567 2 INFO nova.virt.libvirt.driver [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Snapshot image upload complete
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.568 2 INFO nova.compute.manager [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Took 4.95 seconds to snapshot the instance on the hypervisor.
Oct 14 08:59:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 180 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 507 KiB/s rd, 7.1 MiB/s wr, 183 op/s
Oct 14 08:59:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:59:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1501644839' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.650 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.675 2 DEBUG nova.storage.rbd_utils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] rbd image 1af6c158-005b-4f3c-9044-87158e57378d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.679 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.725 2 INFO nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Creating config drive at /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.730 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxow3px_z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.774 2 DEBUG nova.network.neutron [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Successfully updated port: 6fb13023-6749-4e1b-b7d9-235dff8e72d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.796 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "refresh_cache-b932e3d1-4cf6-4934-9eec-c93284b17b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.796 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquired lock "refresh_cache-b932e3d1-4cf6-4934-9eec-c93284b17b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.796 2 DEBUG nova.network.neutron [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.816 2 DEBUG nova.compute.manager [req-cfb5cff7-85f3-415a-a1ff-e81b3e6029b3 req-92d4748c-127c-4e3d-a9eb-32f83bdf1061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Received event network-changed-6fb13023-6749-4e1b-b7d9-235dff8e72d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.817 2 DEBUG nova.compute.manager [req-cfb5cff7-85f3-415a-a1ff-e81b3e6029b3 req-92d4748c-127c-4e3d-a9eb-32f83bdf1061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Refreshing instance network info cache due to event network-changed-6fb13023-6749-4e1b-b7d9-235dff8e72d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.817 2 DEBUG oslo_concurrency.lockutils [req-cfb5cff7-85f3-415a-a1ff-e81b3e6029b3 req-92d4748c-127c-4e3d-a9eb-32f83bdf1061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b932e3d1-4cf6-4934-9eec-c93284b17b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:59:49 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1501644839' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.888 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxow3px_z" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.933 2 DEBUG nova.storage.rbd_utils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:49 compute-0 nova_compute[259627]: 2025-10-14 08:59:49.940 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.002 2 DEBUG nova.network.neutron [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:59:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:59:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1183158786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.160 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.162 2 DEBUG nova.virt.libvirt.vif [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-461517423',display_name='tempest-InstanceActionsTestJSON-server-461517423',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-461517423',id=33,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd273d79854242779e57eece9a65f7c0',ramdisk_id='',reservation_id='r-al5tk2ok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-580993042',owner_user_name='tempest-InstanceActionsTestJSON-580993042-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:59:44Z,user_data=None,user_id='9ec2f781b62446cb98129707144b9d37',uuid=1af6c158-005b-4f3c-9044-87158e57378d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.163 2 DEBUG nova.network.os_vif_util [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converting VIF {"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.165 2 DEBUG nova.network.os_vif_util [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.167 2 DEBUG nova.objects.instance [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1af6c158-005b-4f3c-9044-87158e57378d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.170 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.171 2 INFO nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Deleting local config drive /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config because it was imported into RBD.
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.194 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:59:50 compute-0 nova_compute[259627]:   <uuid>1af6c158-005b-4f3c-9044-87158e57378d</uuid>
Oct 14 08:59:50 compute-0 nova_compute[259627]:   <name>instance-00000021</name>
Oct 14 08:59:50 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:59:50 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:59:50 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <nova:name>tempest-InstanceActionsTestJSON-server-461517423</nova:name>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:59:49</nova:creationTime>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:59:50 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:59:50 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:59:50 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:59:50 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:59:50 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:59:50 compute-0 nova_compute[259627]:         <nova:user uuid="9ec2f781b62446cb98129707144b9d37">tempest-InstanceActionsTestJSON-580993042-project-member</nova:user>
Oct 14 08:59:50 compute-0 nova_compute[259627]:         <nova:project uuid="fd273d79854242779e57eece9a65f7c0">tempest-InstanceActionsTestJSON-580993042</nova:project>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:59:50 compute-0 nova_compute[259627]:         <nova:port uuid="36158ae1-8367-4859-a407-565fde315649">
Oct 14 08:59:50 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:59:50 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:59:50 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <system>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <entry name="serial">1af6c158-005b-4f3c-9044-87158e57378d</entry>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <entry name="uuid">1af6c158-005b-4f3c-9044-87158e57378d</entry>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     </system>
Oct 14 08:59:50 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:59:50 compute-0 nova_compute[259627]:   <os>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:   </os>
Oct 14 08:59:50 compute-0 nova_compute[259627]:   <features>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:   </features>
Oct 14 08:59:50 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:59:50 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:59:50 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1af6c158-005b-4f3c-9044-87158e57378d_disk">
Oct 14 08:59:50 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       </source>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:59:50 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1af6c158-005b-4f3c-9044-87158e57378d_disk.config">
Oct 14 08:59:50 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       </source>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:59:50 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:a4:7e:78"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <target dev="tap36158ae1-83"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d/console.log" append="off"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <video>
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     </video>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:59:50 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:59:50 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:59:50 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:59:50 compute-0 nova_compute[259627]: </domain>
Oct 14 08:59:50 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.195 2 DEBUG nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Preparing to wait for external event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.195 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.196 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.196 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.197 2 DEBUG nova.virt.libvirt.vif [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-461517423',display_name='tempest-InstanceActionsTestJSON-server-461517423',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-461517423',id=33,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd273d79854242779e57eece9a65f7c0',ramdisk_id='',reservation_id='r-al5tk2ok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-580993042',owner_user_name='tempest-InstanceActionsTestJSON-580993042-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:59:44Z,user_data=None,user_id='9ec2f781b62446cb98129707144b9d37',uuid=1af6c158-005b-4f3c-9044-87158e57378d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.198 2 DEBUG nova.network.os_vif_util [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converting VIF {"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.199 2 DEBUG nova.network.os_vif_util [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.200 2 DEBUG os_vif [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.202 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.202 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.209 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36158ae1-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.210 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap36158ae1-83, col_values=(('external_ids', {'iface-id': '36158ae1-8367-4859-a407-565fde315649', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:7e:78', 'vm-uuid': '1af6c158-005b-4f3c-9044-87158e57378d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:50 compute-0 NetworkManager[44885]: <info>  [1760432390.2688] manager: (tap36158ae1-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:59:50 compute-0 kernel: tap60379992-d7: entered promiscuous mode
Oct 14 08:59:50 compute-0 ovn_controller[152662]: 2025-10-14T08:59:50Z|00264|binding|INFO|Claiming lport 60379992-d75d-4eff-a6bb-5d1615f35475 for this chassis.
Oct 14 08:59:50 compute-0 ovn_controller[152662]: 2025-10-14T08:59:50Z|00265|binding|INFO|60379992-d75d-4eff-a6bb-5d1615f35475: Claiming fa:16:3e:47:c2:c5 10.100.0.8
Oct 14 08:59:50 compute-0 NetworkManager[44885]: <info>  [1760432390.2846] manager: (tap60379992-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.285 2 INFO os_vif [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83')
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.296 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:c2:c5 10.100.0.8'], port_security=['fa:16:3e:47:c2:c5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '310ebd88-5fe0-40ad-99dd-c3a1b410d357', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a616a5a0-dd86-4326-bbdf-7cf172de843b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1013e89-bd11-44b1-be74-a5ce3b4c520f, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=60379992-d75d-4eff-a6bb-5d1615f35475) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.297 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 60379992-d75d-4eff-a6bb-5d1615f35475 in datapath ea0c857a-d31a-43a0-b285-c89c1ddc920a bound to our chassis
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.299 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ea0c857a-d31a-43a0-b285-c89c1ddc920a
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.311 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a937b47e-1d75-44f5-a073-d7b6636eafbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.312 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapea0c857a-d1 in ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.314 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapea0c857a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.314 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[afcd0e61-98f9-4a02-bc9d-fbad8a79f964]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.316 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8406ec3d-5cd4-465b-8a3d-057b528eb220]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.331 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[a590140c-50ca-468d-abf0-c57d7348858e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:50 compute-0 systemd-machined[214636]: New machine qemu-36-instance-00000020.
Oct 14 08:59:50 compute-0 systemd[1]: Started Virtual Machine qemu-36-instance-00000020.
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.353 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.353 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.354 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] No VIF found with MAC fa:16:3e:a4:7e:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.354 2 INFO nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Using config drive
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.358 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0171ef07-7506-4201-9fbe-e9fc15a5ac56]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:50 compute-0 systemd-udevd[298686]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:59:50 compute-0 NetworkManager[44885]: <info>  [1760432390.3875] device (tap60379992-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:59:50 compute-0 NetworkManager[44885]: <info>  [1760432390.3886] device (tap60379992-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.403 2 DEBUG nova.storage.rbd_utils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] rbd image 1af6c158-005b-4f3c-9044-87158e57378d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.400 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[87aee32a-776e-4f57-8dca-59ee7d185191]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:50 compute-0 NetworkManager[44885]: <info>  [1760432390.4103] manager: (tapea0c857a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/130)
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.409 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[24413726-8a80-4e6c-a6ee-fb9f40fc013c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:50 compute-0 ovn_controller[152662]: 2025-10-14T08:59:50Z|00266|binding|INFO|Setting lport 60379992-d75d-4eff-a6bb-5d1615f35475 ovn-installed in OVS
Oct 14 08:59:50 compute-0 ovn_controller[152662]: 2025-10-14T08:59:50Z|00267|binding|INFO|Setting lport 60379992-d75d-4eff-a6bb-5d1615f35475 up in Southbound
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.462 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3e8ef56c-3490-4bcf-94fd-475eb7f06eae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.464 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e06c500f-fc8b-4259-89d9-9c97a76afa95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:50 compute-0 NetworkManager[44885]: <info>  [1760432390.4980] device (tapea0c857a-d0): carrier: link connected
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.507 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8a0c3e28-fe65-468f-8df3-5329e3455834]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.526 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c229d27c-0e4a-4ca7-9a89-fc0a55445088]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea0c857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:51:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616925, 'reachable_time': 21171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298734, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.547 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[390937dd-6c84-4515-a6d5-b8c5134886c6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:51c4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616925, 'tstamp': 616925}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298735, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.570 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dd27b37c-d2c1-4c8d-8d45-665f033d7afe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea0c857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:51:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616925, 'reachable_time': 21171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298736, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.612 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6d491508-7c98-4af2-a3ef-2b285db1d7a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.692 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ef7ab1-d559-4988-a01a-89e31b0a1d85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.694 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea0c857a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.694 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.694 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea0c857a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:50 compute-0 NetworkManager[44885]: <info>  [1760432390.6977] manager: (tapea0c857a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Oct 14 08:59:50 compute-0 kernel: tapea0c857a-d0: entered promiscuous mode
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.709 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapea0c857a-d0, col_values=(('external_ids', {'iface-id': '6baedd76-8a05-42d6-8356-18b586f58672'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:50 compute-0 ovn_controller[152662]: 2025-10-14T08:59:50Z|00268|binding|INFO|Releasing lport 6baedd76-8a05-42d6-8356-18b586f58672 from this chassis (sb_readonly=0)
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.738 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ea0c857a-d31a-43a0-b285-c89c1ddc920a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ea0c857a-d31a-43a0-b285-c89c1ddc920a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.739 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6196c4-b953-4a69-bd8a-b9d1dc8ceb18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.740 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-ea0c857a-d31a-43a0-b285-c89c1ddc920a
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/ea0c857a-d31a-43a0-b285-c89c1ddc920a.pid.haproxy
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID ea0c857a-d31a-43a0-b285-c89c1ddc920a
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:59:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.740 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'env', 'PROCESS_TAG=haproxy-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ea0c857a-d31a-43a0-b285-c89c1ddc920a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:59:50 compute-0 ceph-mon[74249]: pgmap v1259: 305 pgs: 305 active+clean; 180 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 507 KiB/s rd, 7.1 MiB/s wr, 183 op/s
Oct 14 08:59:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1183158786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.950 2 INFO nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Creating config drive at /var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d/disk.config
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.955 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3ah2_cug execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.986 2 DEBUG nova.network.neutron [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Updating instance_info_cache with network_info: [{"id": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "address": "fa:16:3e:b3:11:58", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb13023-67", "ovs_interfaceid": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.997 2 DEBUG nova.network.neutron [req-c68e433f-2902-43dd-ab84-9d1f79ad0238 req-a04690de-232a-4717-b286-3bd6edcd33b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Updated VIF entry in instance network info cache for port 36158ae1-8367-4859-a407-565fde315649. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:59:50 compute-0 nova_compute[259627]: 2025-10-14 08:59:50.998 2 DEBUG nova.network.neutron [req-c68e433f-2902-43dd-ab84-9d1f79ad0238 req-a04690de-232a-4717-b286-3bd6edcd33b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Updating instance_info_cache with network_info: [{"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.008 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Releasing lock "refresh_cache-b932e3d1-4cf6-4934-9eec-c93284b17b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.009 2 DEBUG nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Instance network_info: |[{"id": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "address": "fa:16:3e:b3:11:58", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb13023-67", "ovs_interfaceid": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.010 2 DEBUG oslo_concurrency.lockutils [req-cfb5cff7-85f3-415a-a1ff-e81b3e6029b3 req-92d4748c-127c-4e3d-a9eb-32f83bdf1061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b932e3d1-4cf6-4934-9eec-c93284b17b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.010 2 DEBUG nova.network.neutron [req-cfb5cff7-85f3-415a-a1ff-e81b3e6029b3 req-92d4748c-127c-4e3d-a9eb-32f83bdf1061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Refreshing network info cache for port 6fb13023-6749-4e1b-b7d9-235dff8e72d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.015 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Start _get_guest_xml network_info=[{"id": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "address": "fa:16:3e:b3:11:58", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb13023-67", "ovs_interfaceid": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.016 2 DEBUG oslo_concurrency.lockutils [req-c68e433f-2902-43dd-ab84-9d1f79ad0238 req-a04690de-232a-4717-b286-3bd6edcd33b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1af6c158-005b-4f3c-9044-87158e57378d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.022 2 WARNING nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.030 2 DEBUG nova.virt.libvirt.host [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.030 2 DEBUG nova.virt.libvirt.host [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.034 2 DEBUG nova.virt.libvirt.host [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.035 2 DEBUG nova.virt.libvirt.host [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.035 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.035 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.036 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.036 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.036 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.036 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.036 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.036 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.037 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.037 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.037 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.037 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.039 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.090 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3ah2_cug" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.127 2 DEBUG nova.storage.rbd_utils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] rbd image 1af6c158-005b-4f3c-9044-87158e57378d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.136 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d/disk.config 1af6c158-005b-4f3c-9044-87158e57378d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:51 compute-0 podman[298796]: 2025-10-14 08:59:51.220938028 +0000 UTC m=+0.071963028 container create 8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 14 08:59:51 compute-0 podman[298796]: 2025-10-14 08:59:51.179450189 +0000 UTC m=+0.030475239 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:59:51 compute-0 systemd[1]: Started libpod-conmon-8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea.scope.
Oct 14 08:59:51 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.324 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d/disk.config 1af6c158-005b-4f3c-9044-87158e57378d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.325 2 INFO nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Deleting local config drive /var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d/disk.config because it was imported into RBD.
Oct 14 08:59:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f93cef988fd9fccba9ac6e4003db9a58536ec9d10518828f62640c8f85b271e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:59:51 compute-0 podman[298796]: 2025-10-14 08:59:51.346976893 +0000 UTC m=+0.198001913 container init 8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:59:51 compute-0 podman[298796]: 2025-10-14 08:59:51.356307462 +0000 UTC m=+0.207332432 container start 8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 08:59:51 compute-0 neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a[298887]: [NOTICE]   (298905) : New worker (298910) forked
Oct 14 08:59:51 compute-0 neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a[298887]: [NOTICE]   (298905) : Loading success.
Oct 14 08:59:51 compute-0 kernel: tap36158ae1-83: entered promiscuous mode
Oct 14 08:59:51 compute-0 NetworkManager[44885]: <info>  [1760432391.3940] manager: (tap36158ae1-83): new Tun device (/org/freedesktop/NetworkManager/Devices/132)
Oct 14 08:59:51 compute-0 NetworkManager[44885]: <info>  [1760432391.4089] device (tap36158ae1-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:59:51 compute-0 NetworkManager[44885]: <info>  [1760432391.4097] device (tap36158ae1-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.417 2 DEBUG nova.compute.manager [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.418 2 DEBUG oslo_concurrency.lockutils [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.418 2 DEBUG oslo_concurrency.lockutils [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.419 2 DEBUG oslo_concurrency.lockutils [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.419 2 DEBUG nova.compute.manager [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Processing event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.419 2 DEBUG nova.compute.manager [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.419 2 DEBUG oslo_concurrency.lockutils [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.420 2 DEBUG oslo_concurrency.lockutils [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.420 2 DEBUG oslo_concurrency.lockutils [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.420 2 DEBUG nova.compute.manager [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] No waiting events found dispatching network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.421 2 WARNING nova.compute.manager [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received unexpected event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 for instance with vm_state building and task_state spawning.
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:51 compute-0 ovn_controller[152662]: 2025-10-14T08:59:51Z|00269|binding|INFO|Claiming lport 36158ae1-8367-4859-a407-565fde315649 for this chassis.
Oct 14 08:59:51 compute-0 ovn_controller[152662]: 2025-10-14T08:59:51Z|00270|binding|INFO|36158ae1-8367-4859-a407-565fde315649: Claiming fa:16:3e:a4:7e:78 10.100.0.7
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.465 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:7e:78 10.100.0.7'], port_security=['fa:16:3e:a4:7e:78 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1af6c158-005b-4f3c-9044-87158e57378d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd273d79854242779e57eece9a65f7c0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4b2e574d-708f-45c3-a256-0b7fe2d0873c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65584940-3c3d-4797-80d1-97beab212175, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=36158ae1-8367-4859-a407-565fde315649) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.466 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 36158ae1-8367-4859-a407-565fde315649 in datapath a0f6e4de-522c-468f-8b55-9e5064a6cce8 bound to our chassis
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.469 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0f6e4de-522c-468f-8b55-9e5064a6cce8
Oct 14 08:59:51 compute-0 systemd-machined[214636]: New machine qemu-37-instance-00000021.
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.479 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f5c1b7-4332-42d9-8543-7f2f565b6df6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.479 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0f6e4de-51 in ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.481 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0f6e4de-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.481 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[df58721d-8466-449d-8f8e-5bb7c3f5b370]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:51 compute-0 systemd[1]: Started Virtual Machine qemu-37-instance-00000021.
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.481 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f1547ac0-41a3-432d-963b-7e6fc9598430]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.503 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[3a1d5512-7ff4-4e1d-b6ee-aaf2064b0458]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.518 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6d37babc-a7f0-4b23-aa37-3e69f267bfac]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:59:51 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1817577410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:51 compute-0 ovn_controller[152662]: 2025-10-14T08:59:51Z|00271|binding|INFO|Setting lport 36158ae1-8367-4859-a407-565fde315649 ovn-installed in OVS
Oct 14 08:59:51 compute-0 ovn_controller[152662]: 2025-10-14T08:59:51Z|00272|binding|INFO|Setting lport 36158ae1-8367-4859-a407-565fde315649 up in Southbound
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.551 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ab856e7a-c9f8-46b2-b16c-a01fd9ebcea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:51 compute-0 NetworkManager[44885]: <info>  [1760432391.5590] manager: (tapa0f6e4de-50): new Veth device (/org/freedesktop/NetworkManager/Devices/133)
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.557 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[67c97a47-0b6f-417f-8d6b-159edadf8afe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.567 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 273 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 7.1 MiB/s wr, 182 op/s
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.598 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e410052b-5aff-4378-9c06-198a6f1c08f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.602 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[27b83a0e-54c7-4f21-b2c0-37932dfdd0de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.602 2 DEBUG nova.storage.rbd_utils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image b932e3d1-4cf6-4934-9eec-c93284b17b43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.612 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:51 compute-0 NetworkManager[44885]: <info>  [1760432391.6247] device (tapa0f6e4de-50): carrier: link connected
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.630 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[92c843ea-5ebd-4b86-8abe-4a80e9d0b657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.650 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[152eb80a-16cf-4178-a083-e8d4a1e29020]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0f6e4de-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:e4:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617038, 'reachable_time': 39186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298967, 'error': None, 'target': 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.670 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e83d068f-809f-49ea-9122-c08a0a817271]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:e471'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 617038, 'tstamp': 617038}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298968, 'error': None, 'target': 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.686 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fae45989-9d59-4607-9321-f8680d598331]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0f6e4de-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:e4:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617038, 'reachable_time': 39186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298969, 'error': None, 'target': 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.721 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[782f7682-4d98-4891-93f3-ea36e398b476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.790 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0fa856-ce66-497b-ba2b-df7ae34b76c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.791 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0f6e4de-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.792 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.792 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0f6e4de-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:51 compute-0 NetworkManager[44885]: <info>  [1760432391.7951] manager: (tapa0f6e4de-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Oct 14 08:59:51 compute-0 kernel: tapa0f6e4de-50: entered promiscuous mode
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.803 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0f6e4de-50, col_values=(('external_ids', {'iface-id': 'f946a06b-cc1d-436a-9eac-cf144d4f5ad3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:51 compute-0 ovn_controller[152662]: 2025-10-14T08:59:51Z|00273|binding|INFO|Releasing lport f946a06b-cc1d-436a-9eac-cf144d4f5ad3 from this chassis (sb_readonly=0)
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.834 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0f6e4de-522c-468f-8b55-9e5064a6cce8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0f6e4de-522c-468f-8b55-9e5064a6cce8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.835 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4ed024-8932-4077-bbbf-87ad749cfd2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.836 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-a0f6e4de-522c-468f-8b55-9e5064a6cce8
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/a0f6e4de-522c-468f-8b55-9e5064a6cce8.pid.haproxy
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID a0f6e4de-522c-468f-8b55-9e5064a6cce8
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:59:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.837 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'env', 'PROCESS_TAG=haproxy-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0f6e4de-522c-468f-8b55-9e5064a6cce8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:59:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct 14 08:59:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct 14 08:59:51 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct 14 08:59:51 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1817577410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.937 2 DEBUG nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.939 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432391.9366655, 310ebd88-5fe0-40ad-99dd-c3a1b410d357 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.939 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] VM Started (Lifecycle Event)
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.945 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.948 2 INFO nova.virt.libvirt.driver [-] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance spawned successfully.
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.948 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.970 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.981 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.982 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.982 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.982 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.983 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.983 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:51 compute-0 nova_compute[259627]: 2025-10-14 08:59:51.996 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.016 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.017 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432391.9379873, 310ebd88-5fe0-40ad-99dd-c3a1b410d357 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.017 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] VM Paused (Lifecycle Event)
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.046 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.052 2 INFO nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Took 8.57 seconds to spawn the instance on the hypervisor.
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.052 2 DEBUG nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.054 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432391.9428499, 310ebd88-5fe0-40ad-99dd-c3a1b410d357 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.054 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] VM Resumed (Lifecycle Event)
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.089 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.095 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:59:52 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/836305436' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.128 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.129 2 DEBUG nova.virt.libvirt.vif [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-618227233',display_name='tempest-ServersAdminTestJSON-server-618227233',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-618227233',id=34,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-wask3hqy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:59:46Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=b932e3d1-4cf6-4934-9eec-c93284b17b43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "address": "fa:16:3e:b3:11:58", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb13023-67", "ovs_interfaceid": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.129 2 DEBUG nova.network.os_vif_util [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "address": "fa:16:3e:b3:11:58", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb13023-67", "ovs_interfaceid": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.130 2 DEBUG nova.network.os_vif_util [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:11:58,bridge_name='br-int',has_traffic_filtering=True,id=6fb13023-6749-4e1b-b7d9-235dff8e72d4,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb13023-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.131 2 DEBUG nova.objects.instance [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'pci_devices' on Instance uuid b932e3d1-4cf6-4934-9eec-c93284b17b43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.133 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.141 2 INFO nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Took 9.63 seconds to build instance.
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.143 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:59:52 compute-0 nova_compute[259627]:   <uuid>b932e3d1-4cf6-4934-9eec-c93284b17b43</uuid>
Oct 14 08:59:52 compute-0 nova_compute[259627]:   <name>instance-00000022</name>
Oct 14 08:59:52 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:59:52 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:59:52 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersAdminTestJSON-server-618227233</nova:name>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:59:51</nova:creationTime>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:59:52 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:59:52 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:59:52 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:59:52 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:59:52 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:59:52 compute-0 nova_compute[259627]:         <nova:user uuid="56001fe1c9fc432e923f8c57058754db">tempest-ServersAdminTestJSON-276167539-project-member</nova:user>
Oct 14 08:59:52 compute-0 nova_compute[259627]:         <nova:project uuid="ed7ee17abdbe419cb7d7fd0da2cd2068">tempest-ServersAdminTestJSON-276167539</nova:project>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:59:52 compute-0 nova_compute[259627]:         <nova:port uuid="6fb13023-6749-4e1b-b7d9-235dff8e72d4">
Oct 14 08:59:52 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:59:52 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:59:52 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <system>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <entry name="serial">b932e3d1-4cf6-4934-9eec-c93284b17b43</entry>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <entry name="uuid">b932e3d1-4cf6-4934-9eec-c93284b17b43</entry>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     </system>
Oct 14 08:59:52 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:59:52 compute-0 nova_compute[259627]:   <os>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:   </os>
Oct 14 08:59:52 compute-0 nova_compute[259627]:   <features>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:   </features>
Oct 14 08:59:52 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:59:52 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:59:52 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/b932e3d1-4cf6-4934-9eec-c93284b17b43_disk">
Oct 14 08:59:52 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       </source>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:59:52 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/b932e3d1-4cf6-4934-9eec-c93284b17b43_disk.config">
Oct 14 08:59:52 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       </source>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:59:52 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:b3:11:58"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <target dev="tap6fb13023-67"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/b932e3d1-4cf6-4934-9eec-c93284b17b43/console.log" append="off"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <video>
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     </video>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:59:52 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:59:52 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:59:52 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:59:52 compute-0 nova_compute[259627]: </domain>
Oct 14 08:59:52 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.144 2 DEBUG nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Preparing to wait for external event network-vif-plugged-6fb13023-6749-4e1b-b7d9-235dff8e72d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.144 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.144 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.144 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.145 2 DEBUG nova.virt.libvirt.vif [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-618227233',display_name='tempest-ServersAdminTestJSON-server-618227233',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-618227233',id=34,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-wask3hqy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:59:46Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=b932e3d1-4cf6-4934-9eec-c93284b17b43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "address": "fa:16:3e:b3:11:58", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb13023-67", "ovs_interfaceid": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.145 2 DEBUG nova.network.os_vif_util [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "address": "fa:16:3e:b3:11:58", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb13023-67", "ovs_interfaceid": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.145 2 DEBUG nova.network.os_vif_util [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:11:58,bridge_name='br-int',has_traffic_filtering=True,id=6fb13023-6749-4e1b-b7d9-235dff8e72d4,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb13023-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.146 2 DEBUG os_vif [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:11:58,bridge_name='br-int',has_traffic_filtering=True,id=6fb13023-6749-4e1b-b7d9-235dff8e72d4,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb13023-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.146 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.147 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.149 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fb13023-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:52 compute-0 NetworkManager[44885]: <info>  [1760432392.1523] manager: (tap6fb13023-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.150 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6fb13023-67, col_values=(('external_ids', {'iface-id': '6fb13023-6749-4e1b-b7d9-235dff8e72d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:11:58', 'vm-uuid': 'b932e3d1-4cf6-4934-9eec-c93284b17b43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.160 2 INFO os_vif [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:11:58,bridge_name='br-int',has_traffic_filtering=True,id=6fb13023-6749-4e1b-b7d9-235dff8e72d4,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb13023-67')
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.164 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.206 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.206 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.206 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No VIF found with MAC fa:16:3e:b3:11:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.207 2 INFO nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Using config drive
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.229 2 DEBUG nova.storage.rbd_utils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image b932e3d1-4cf6-4934-9eec-c93284b17b43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:52 compute-0 podman[299027]: 2025-10-14 08:59:52.285199622 +0000 UTC m=+0.063386557 container create 96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 08:59:52 compute-0 systemd[1]: Started libpod-conmon-96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590.scope.
Oct 14 08:59:52 compute-0 systemd[1]: Started libcrun container.
Oct 14 08:59:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11ba0701e8e75b5e6c026747289ae9028a21947c0206fb85f067034683ad925/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 08:59:52 compute-0 podman[299027]: 2025-10-14 08:59:52.249200988 +0000 UTC m=+0.027388003 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 08:59:52 compute-0 podman[299027]: 2025-10-14 08:59:52.356679208 +0000 UTC m=+0.134866163 container init 96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 08:59:52 compute-0 podman[299027]: 2025-10-14 08:59:52.363167467 +0000 UTC m=+0.141354402 container start 96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 08:59:52 compute-0 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299101]: [NOTICE]   (299109) : New worker (299112) forked
Oct 14 08:59:52 compute-0 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299101]: [NOTICE]   (299109) : Loading success.
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.472 2 DEBUG nova.compute.manager [req-387007d8-6831-45f2-ac88-f25686ba3075 req-18d75bd0-9081-4846-acc6-f18195fd30a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.473 2 DEBUG oslo_concurrency.lockutils [req-387007d8-6831-45f2-ac88-f25686ba3075 req-18d75bd0-9081-4846-acc6-f18195fd30a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.473 2 DEBUG oslo_concurrency.lockutils [req-387007d8-6831-45f2-ac88-f25686ba3075 req-18d75bd0-9081-4846-acc6-f18195fd30a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.473 2 DEBUG oslo_concurrency.lockutils [req-387007d8-6831-45f2-ac88-f25686ba3075 req-18d75bd0-9081-4846-acc6-f18195fd30a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.473 2 DEBUG nova.compute.manager [req-387007d8-6831-45f2-ac88-f25686ba3075 req-18d75bd0-9081-4846-acc6-f18195fd30a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Processing event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:59:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:59:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct 14 08:59:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct 14 08:59:52 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.639 2 INFO nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Creating config drive at /var/lib/nova/instances/b932e3d1-4cf6-4934-9eec-c93284b17b43/disk.config
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.645 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b932e3d1-4cf6-4934-9eec-c93284b17b43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcmqi5jk9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.759 2 DEBUG oslo_concurrency.lockutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "f3dafba3-6472-4921-9ece-b6076172365e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.760 2 DEBUG oslo_concurrency.lockutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.760 2 DEBUG oslo_concurrency.lockutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "f3dafba3-6472-4921-9ece-b6076172365e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.761 2 DEBUG oslo_concurrency.lockutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.761 2 DEBUG oslo_concurrency.lockutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.763 2 INFO nova.compute.manager [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Terminating instance
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.764 2 DEBUG nova.compute.manager [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.773 2 INFO nova.virt.libvirt.driver [-] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Instance destroyed successfully.
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.773 2 DEBUG nova.objects.instance [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'resources' on Instance uuid f3dafba3-6472-4921-9ece-b6076172365e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.786 2 DEBUG nova.virt.libvirt.vif [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:59:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1482418899',display_name='tempest-ImagesTestJSON-server-1482418899',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1482418899',id=31,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-dol66h7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:49Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=f3dafba3-6472-4921-9ece-b6076172365e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.786 2 DEBUG nova.network.os_vif_util [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.787 2 DEBUG nova.network.os_vif_util [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=ecb526b7-d1ad-4a75-b851-482702018258,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb526b7-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.788 2 DEBUG os_vif [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=ecb526b7-d1ad-4a75-b851-482702018258,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb526b7-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.792 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapecb526b7-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.795 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b932e3d1-4cf6-4934-9eec-c93284b17b43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcmqi5jk9" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.820 2 DEBUG nova.storage.rbd_utils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image b932e3d1-4cf6-4934-9eec-c93284b17b43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.825 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b932e3d1-4cf6-4934-9eec-c93284b17b43/disk.config b932e3d1-4cf6-4934-9eec-c93284b17b43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.864 2 INFO os_vif [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=ecb526b7-d1ad-4a75-b851-482702018258,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb526b7-d1')
Oct 14 08:59:52 compute-0 ceph-mon[74249]: pgmap v1260: 305 pgs: 305 active+clean; 273 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 7.1 MiB/s wr, 182 op/s
Oct 14 08:59:52 compute-0 ceph-mon[74249]: osdmap e151: 3 total, 3 up, 3 in
Oct 14 08:59:52 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/836305436' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:52 compute-0 ceph-mon[74249]: osdmap e152: 3 total, 3 up, 3 in
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.882 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432392.8676484, 1af6c158-005b-4f3c-9044-87158e57378d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.882 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] VM Started (Lifecycle Event)
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.885 2 DEBUG nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.890 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.900 2 INFO nova.virt.libvirt.driver [-] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Instance spawned successfully.
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.900 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.912 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.918 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.926 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.927 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.927 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.927 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.928 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.928 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.940 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.941 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432392.8679252, 1af6c158-005b-4f3c-9044-87158e57378d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.941 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] VM Paused (Lifecycle Event)
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.970 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.973 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432392.8897731, 1af6c158-005b-4f3c-9044-87158e57378d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.973 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] VM Resumed (Lifecycle Event)
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.977 2 DEBUG nova.network.neutron [req-cfb5cff7-85f3-415a-a1ff-e81b3e6029b3 req-92d4748c-127c-4e3d-a9eb-32f83bdf1061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Updated VIF entry in instance network info cache for port 6fb13023-6749-4e1b-b7d9-235dff8e72d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 08:59:52 compute-0 nova_compute[259627]: 2025-10-14 08:59:52.977 2 DEBUG nova.network.neutron [req-cfb5cff7-85f3-415a-a1ff-e81b3e6029b3 req-92d4748c-127c-4e3d-a9eb-32f83bdf1061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Updating instance_info_cache with network_info: [{"id": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "address": "fa:16:3e:b3:11:58", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb13023-67", "ovs_interfaceid": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.005 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b932e3d1-4cf6-4934-9eec-c93284b17b43/disk.config b932e3d1-4cf6-4934-9eec-c93284b17b43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.005 2 INFO nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Deleting local config drive /var/lib/nova/instances/b932e3d1-4cf6-4934-9eec-c93284b17b43/disk.config because it was imported into RBD.
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.009 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.010 2 DEBUG oslo_concurrency.lockutils [req-cfb5cff7-85f3-415a-a1ff-e81b3e6029b3 req-92d4748c-127c-4e3d-a9eb-32f83bdf1061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b932e3d1-4cf6-4934-9eec-c93284b17b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.014 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.023 2 INFO nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Took 8.69 seconds to spawn the instance on the hypervisor.
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.023 2 DEBUG nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.036 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:59:53 compute-0 kernel: tap6fb13023-67: entered promiscuous mode
Oct 14 08:59:53 compute-0 NetworkManager[44885]: <info>  [1760432393.0675] manager: (tap6fb13023-67): new Tun device (/org/freedesktop/NetworkManager/Devices/136)
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:53 compute-0 ovn_controller[152662]: 2025-10-14T08:59:53Z|00274|binding|INFO|Claiming lport 6fb13023-6749-4e1b-b7d9-235dff8e72d4 for this chassis.
Oct 14 08:59:53 compute-0 ovn_controller[152662]: 2025-10-14T08:59:53Z|00275|binding|INFO|6fb13023-6749-4e1b-b7d9-235dff8e72d4: Claiming fa:16:3e:b3:11:58 10.100.0.5
Oct 14 08:59:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.076 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:11:58 10.100.0.5'], port_security=['fa:16:3e:b3:11:58 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b932e3d1-4cf6-4934-9eec-c93284b17b43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a616a5a0-dd86-4326-bbdf-7cf172de843b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1013e89-bd11-44b1-be74-a5ce3b4c520f, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=6fb13023-6749-4e1b-b7d9-235dff8e72d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.079 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 6fb13023-6749-4e1b-b7d9-235dff8e72d4 in datapath ea0c857a-d31a-43a0-b285-c89c1ddc920a bound to our chassis
Oct 14 08:59:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.081 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ea0c857a-d31a-43a0-b285-c89c1ddc920a
Oct 14 08:59:53 compute-0 NetworkManager[44885]: <info>  [1760432393.0927] device (tap6fb13023-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:59:53 compute-0 NetworkManager[44885]: <info>  [1760432393.0934] device (tap6fb13023-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.093 2 INFO nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Took 9.88 seconds to build instance.
Oct 14 08:59:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.097 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[af091888-8156-4b18-b7d1-e41b7ff726df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:53 compute-0 ovn_controller[152662]: 2025-10-14T08:59:53Z|00276|binding|INFO|Setting lport 6fb13023-6749-4e1b-b7d9-235dff8e72d4 ovn-installed in OVS
Oct 14 08:59:53 compute-0 ovn_controller[152662]: 2025-10-14T08:59:53Z|00277|binding|INFO|Setting lport 6fb13023-6749-4e1b-b7d9-235dff8e72d4 up in Southbound
Oct 14 08:59:53 compute-0 systemd-machined[214636]: New machine qemu-38-instance-00000022.
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:53 compute-0 systemd[1]: Started Virtual Machine qemu-38-instance-00000022.
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.116 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.140 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[27b7c5dc-53d7-4aa4-8b79-5a441bd40701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.145 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[15f68432-f96a-4000-9272-76f8c923e5c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.177 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[13a8f917-b226-4e05-aece-ee56fc6de246]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.198 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b70cc78d-abab-49c5-88c0-de601d7dd834]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea0c857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:51:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 702, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 702, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616925, 'reachable_time': 21171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 7, 'inoctets': 604, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 7, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 604, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 7, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299208, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.214 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6e7a9926-eecc-44ca-ad18-2a9af773bed9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616940, 'tstamp': 616940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299210, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616944, 'tstamp': 616944}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299210, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.216 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea0c857a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.221 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea0c857a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.221 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.221 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapea0c857a-d0, col_values=(('external_ids', {'iface-id': '6baedd76-8a05-42d6-8356-18b586f58672'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.222 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.310 2 INFO nova.virt.libvirt.driver [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Deleting instance files /var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e_del
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.311 2 INFO nova.virt.libvirt.driver [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Deletion of /var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e_del complete
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.380 2 INFO nova.compute.manager [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Took 0.62 seconds to destroy the instance on the hypervisor.
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.380 2 DEBUG oslo.service.loopingcall [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.380 2 DEBUG nova.compute.manager [-] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.381 2 DEBUG nova.network.neutron [-] [instance: f3dafba3-6472-4921-9ece-b6076172365e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.449 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432378.4484634, eb820455-d45c-4331-9363-124f11537f52 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.450 2 INFO nova.compute.manager [-] [instance: eb820455-d45c-4331-9363-124f11537f52] VM Stopped (Lifecycle Event)
Oct 14 08:59:53 compute-0 nova_compute[259627]: 2025-10-14 08:59:53.471 2 DEBUG nova.compute.manager [None req-04da1bc4-e72c-447c-9151-e66959b83d80 - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 273 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 7.1 MiB/s wr, 182 op/s
Oct 14 08:59:54 compute-0 nova_compute[259627]: 2025-10-14 08:59:54.203 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432394.2030919, b932e3d1-4cf6-4934-9eec-c93284b17b43 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:54 compute-0 nova_compute[259627]: 2025-10-14 08:59:54.203 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] VM Started (Lifecycle Event)
Oct 14 08:59:54 compute-0 nova_compute[259627]: 2025-10-14 08:59:54.231 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:54 compute-0 nova_compute[259627]: 2025-10-14 08:59:54.234 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432394.2049513, b932e3d1-4cf6-4934-9eec-c93284b17b43 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:54 compute-0 nova_compute[259627]: 2025-10-14 08:59:54.234 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] VM Paused (Lifecycle Event)
Oct 14 08:59:54 compute-0 nova_compute[259627]: 2025-10-14 08:59:54.250 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:54 compute-0 nova_compute[259627]: 2025-10-14 08:59:54.252 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:54 compute-0 nova_compute[259627]: 2025-10-14 08:59:54.268 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:59:54 compute-0 ceph-mon[74249]: pgmap v1263: 305 pgs: 305 active+clean; 273 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 7.1 MiB/s wr, 182 op/s
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.023 2 DEBUG nova.network.neutron [-] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.046 2 INFO nova.compute.manager [-] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Took 1.67 seconds to deallocate network for instance.
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.080 2 DEBUG nova.compute.manager [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.080 2 DEBUG oslo_concurrency.lockutils [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.081 2 DEBUG oslo_concurrency.lockutils [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.081 2 DEBUG oslo_concurrency.lockutils [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.081 2 DEBUG nova.compute.manager [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] No waiting events found dispatching network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.081 2 WARNING nova.compute.manager [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received unexpected event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 for instance with vm_state active and task_state None.
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.081 2 DEBUG nova.compute.manager [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Received event network-vif-plugged-6fb13023-6749-4e1b-b7d9-235dff8e72d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.081 2 DEBUG oslo_concurrency.lockutils [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.082 2 DEBUG oslo_concurrency.lockutils [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.082 2 DEBUG oslo_concurrency.lockutils [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.082 2 DEBUG nova.compute.manager [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Processing event network-vif-plugged-6fb13023-6749-4e1b-b7d9-235dff8e72d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.082 2 DEBUG nova.compute.manager [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Received event network-vif-plugged-6fb13023-6749-4e1b-b7d9-235dff8e72d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.082 2 DEBUG oslo_concurrency.lockutils [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.082 2 DEBUG oslo_concurrency.lockutils [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.082 2 DEBUG oslo_concurrency.lockutils [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.083 2 DEBUG nova.compute.manager [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] No waiting events found dispatching network-vif-plugged-6fb13023-6749-4e1b-b7d9-235dff8e72d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.083 2 WARNING nova.compute.manager [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Received unexpected event network-vif-plugged-6fb13023-6749-4e1b-b7d9-235dff8e72d4 for instance with vm_state building and task_state spawning.
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.083 2 DEBUG nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.086 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432395.0865035, b932e3d1-4cf6-4934-9eec-c93284b17b43 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.086 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] VM Resumed (Lifecycle Event)
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.088 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.090 2 INFO nova.virt.libvirt.driver [-] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Instance spawned successfully.
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.090 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.118 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.121 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.124 2 DEBUG oslo_concurrency.lockutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.124 2 DEBUG oslo_concurrency.lockutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.130 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.131 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.131 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.131 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.132 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.132 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.167 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.207 2 INFO nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Took 9.03 seconds to spawn the instance on the hypervisor.
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.207 2 DEBUG nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.250 2 DEBUG oslo_concurrency.processutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.287 2 INFO nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Took 10.71 seconds to build instance.
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.310 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.518 2 DEBUG oslo_concurrency.lockutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.518 2 DEBUG oslo_concurrency.lockutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.518 2 INFO nova.compute.manager [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Rebooting instance
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.536 2 DEBUG oslo_concurrency.lockutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "refresh_cache-1af6c158-005b-4f3c-9044-87158e57378d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.537 2 DEBUG oslo_concurrency.lockutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquired lock "refresh_cache-1af6c158-005b-4f3c-9044-87158e57378d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.537 2 DEBUG nova.network.neutron [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:59:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 181 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 8.7 MiB/s rd, 5.5 MiB/s wr, 452 op/s
Oct 14 08:59:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:59:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2740848897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.757 2 DEBUG oslo_concurrency.processutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.764 2 DEBUG nova.compute.provider_tree [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.782 2 DEBUG nova.scheduler.client.report [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.806 2 DEBUG oslo_concurrency.lockutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.838 2 INFO nova.scheduler.client.report [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Deleted allocations for instance f3dafba3-6472-4921-9ece-b6076172365e
Oct 14 08:59:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2740848897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:55 compute-0 nova_compute[259627]: 2025-10-14 08:59:55.921 2 DEBUG oslo_concurrency.lockutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:56 compute-0 nova_compute[259627]: 2025-10-14 08:59:56.353 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "6de921d2-e251-431d-9333-bae44aa81859" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:56 compute-0 nova_compute[259627]: 2025-10-14 08:59:56.354 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "6de921d2-e251-431d-9333-bae44aa81859" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:56 compute-0 nova_compute[259627]: 2025-10-14 08:59:56.383 2 DEBUG nova.compute.manager [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 08:59:56 compute-0 nova_compute[259627]: 2025-10-14 08:59:56.475 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:56 compute-0 nova_compute[259627]: 2025-10-14 08:59:56.476 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:56 compute-0 nova_compute[259627]: 2025-10-14 08:59:56.486 2 DEBUG nova.virt.hardware [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 08:59:56 compute-0 nova_compute[259627]: 2025-10-14 08:59:56.488 2 INFO nova.compute.claims [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Claim successful on node compute-0.ctlplane.example.com
Oct 14 08:59:56 compute-0 nova_compute[259627]: 2025-10-14 08:59:56.633 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:56 compute-0 ceph-mon[74249]: pgmap v1264: 305 pgs: 305 active+clean; 181 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 8.7 MiB/s rd, 5.5 MiB/s wr, 452 op/s
Oct 14 08:59:56 compute-0 nova_compute[259627]: 2025-10-14 08:59:56.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 08:59:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1258774786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.069 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.074 2 DEBUG nova.compute.provider_tree [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.090 2 DEBUG nova.scheduler.client.report [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.122 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.125 2 DEBUG nova.compute.manager [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.193 2 DEBUG nova.compute.manager [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.194 2 DEBUG nova.network.neutron [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.222 2 INFO nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.253 2 DEBUG nova.compute.manager [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.358 2 DEBUG nova.compute.manager [req-cb8802c7-a0a4-4fa3-81a0-c3e0454005dd req-58dce2ed-9787-495d-900a-6c3f98093051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Received event network-vif-deleted-ecb526b7-d1ad-4a75-b851-482702018258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.363 2 DEBUG nova.compute.manager [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.364 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.365 2 INFO nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Creating image(s)
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.386 2 DEBUG nova.storage.rbd_utils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 6de921d2-e251-431d-9333-bae44aa81859_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.415 2 DEBUG nova.storage.rbd_utils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 6de921d2-e251-431d-9333-bae44aa81859_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.452 2 DEBUG nova.storage.rbd_utils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 6de921d2-e251-431d-9333-bae44aa81859_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.458 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.495 2 DEBUG nova.policy [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a217215c39e41fea2323ff7b3b4e6aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 08:59:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 08:59:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct 14 08:59:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.531 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:57 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.534 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.535 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.537 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.581 2 DEBUG nova.storage.rbd_utils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 6de921d2-e251-431d-9333-bae44aa81859_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 08:59:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 181 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 7.7 MiB/s rd, 55 KiB/s wr, 407 op/s
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.586 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 6de921d2-e251-431d-9333-bae44aa81859_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.698 2 DEBUG nova.network.neutron [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Updating instance_info_cache with network_info: [{"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.746 2 DEBUG oslo_concurrency.lockutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Releasing lock "refresh_cache-1af6c158-005b-4f3c-9044-87158e57378d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.748 2 DEBUG nova.compute.manager [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.830 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 6de921d2-e251-431d-9333-bae44aa81859_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.244s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.879 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432382.813722, 27fa4cf8-c08c-46a2-af8f-17c8980a2317 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.879 2 INFO nova.compute.manager [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] VM Stopped (Lifecycle Event)
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.886 2 DEBUG nova.storage.rbd_utils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] resizing rbd image 6de921d2-e251-431d-9333-bae44aa81859_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 08:59:57 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1258774786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 08:59:57 compute-0 ceph-mon[74249]: osdmap e153: 3 total, 3 up, 3 in
Oct 14 08:59:57 compute-0 nova_compute[259627]: 2025-10-14 08:59:57.917 2 DEBUG nova.compute.manager [None req-ee8e8317-fcf4-4168-96b4-915240082296 - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:57 compute-0 kernel: tap36158ae1-83 (unregistering): left promiscuous mode
Oct 14 08:59:57 compute-0 NetworkManager[44885]: <info>  [1760432397.9549] device (tap36158ae1-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 08:59:57 compute-0 ovn_controller[152662]: 2025-10-14T08:59:57Z|00278|binding|INFO|Releasing lport 36158ae1-8367-4859-a407-565fde315649 from this chassis (sb_readonly=0)
Oct 14 08:59:57 compute-0 ovn_controller[152662]: 2025-10-14T08:59:57Z|00279|binding|INFO|Setting lport 36158ae1-8367-4859-a407-565fde315649 down in Southbound
Oct 14 08:59:57 compute-0 ovn_controller[152662]: 2025-10-14T08:59:57Z|00280|binding|INFO|Removing iface tap36158ae1-83 ovn-installed in OVS
Oct 14 08:59:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:57.969 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:7e:78 10.100.0.7'], port_security=['fa:16:3e:a4:7e:78 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1af6c158-005b-4f3c-9044-87158e57378d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd273d79854242779e57eece9a65f7c0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4b2e574d-708f-45c3-a256-0b7fe2d0873c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65584940-3c3d-4797-80d1-97beab212175, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=36158ae1-8367-4859-a407-565fde315649) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:57.970 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 36158ae1-8367-4859-a407-565fde315649 in datapath a0f6e4de-522c-468f-8b55-9e5064a6cce8 unbound from our chassis
Oct 14 08:59:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:57.971 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0f6e4de-522c-468f-8b55-9e5064a6cce8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 08:59:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:57.972 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[43f36d3c-4a7a-4ce0-8927-824397984e25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:57.972 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8 namespace which is not needed anymore
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:58 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000021.scope: Deactivated successfully.
Oct 14 08:59:58 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000021.scope: Consumed 6.219s CPU time.
Oct 14 08:59:58 compute-0 systemd-machined[214636]: Machine qemu-37-instance-00000021 terminated.
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.016 2 DEBUG nova.objects.instance [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'migration_context' on Instance uuid 6de921d2-e251-431d-9333-bae44aa81859 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.030 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.031 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Ensure instance console log exists: /var/lib/nova/instances/6de921d2-e251-431d-9333-bae44aa81859/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.031 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.031 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.032 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:58 compute-0 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299101]: [NOTICE]   (299109) : haproxy version is 2.8.14-c23fe91
Oct 14 08:59:58 compute-0 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299101]: [NOTICE]   (299109) : path to executable is /usr/sbin/haproxy
Oct 14 08:59:58 compute-0 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299101]: [WARNING]  (299109) : Exiting Master process...
Oct 14 08:59:58 compute-0 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299101]: [ALERT]    (299109) : Current worker (299112) exited with code 143 (Terminated)
Oct 14 08:59:58 compute-0 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299101]: [WARNING]  (299109) : All workers exited. Exiting... (0)
Oct 14 08:59:58 compute-0 systemd[1]: libpod-96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590.scope: Deactivated successfully.
Oct 14 08:59:58 compute-0 podman[299489]: 2025-10-14 08:59:58.109621696 +0000 UTC m=+0.044654248 container died 96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590-userdata-shm.mount: Deactivated successfully.
Oct 14 08:59:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b11ba0701e8e75b5e6c026747289ae9028a21947c0206fb85f067034683ad925-merged.mount: Deactivated successfully.
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.152 2 INFO nova.virt.libvirt.driver [-] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Instance destroyed successfully.
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.153 2 DEBUG nova.objects.instance [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lazy-loading 'resources' on Instance uuid 1af6c158-005b-4f3c-9044-87158e57378d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:58 compute-0 podman[299489]: 2025-10-14 08:59:58.154148089 +0000 UTC m=+0.089180661 container cleanup 96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.161 2 DEBUG nova.network.neutron [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Successfully created port: c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.169 2 DEBUG nova.virt.libvirt.vif [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:59:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-461517423',display_name='tempest-InstanceActionsTestJSON-server-461517423',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-461517423',id=33,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd273d79854242779e57eece9a65f7c0',ramdisk_id='',reservation_id='r-al5tk2ok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-580993042',owner_user_name='tempest-InstanceActionsTestJSON-580993042-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:57Z,user_data=None,user_id='9ec2f781b62446cb98129707144b9d37',uuid=1af6c158-005b-4f3c-9044-87158e57378d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.169 2 DEBUG nova.network.os_vif_util [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converting VIF {"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.170 2 DEBUG nova.network.os_vif_util [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.171 2 DEBUG os_vif [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.174 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36158ae1-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.179 2 INFO os_vif [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83')
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.186 2 DEBUG nova.virt.libvirt.driver [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Start _get_guest_xml network_info=[{"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.189 2 WARNING nova.virt.libvirt.driver [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 08:59:58 compute-0 systemd[1]: libpod-conmon-96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590.scope: Deactivated successfully.
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.196 2 DEBUG nova.virt.libvirt.host [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.196 2 DEBUG nova.virt.libvirt.host [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.199 2 DEBUG nova.virt.libvirt.host [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.200 2 DEBUG nova.virt.libvirt.host [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.200 2 DEBUG nova.virt.libvirt.driver [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.200 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.201 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.201 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.202 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.202 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.202 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.202 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.203 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.203 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.203 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.204 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.204 2 DEBUG nova.objects.instance [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1af6c158-005b-4f3c-9044-87158e57378d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.219 2 DEBUG oslo_concurrency.processutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:58 compute-0 podman[299525]: 2025-10-14 08:59:58.219763461 +0000 UTC m=+0.042231858 container remove 96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 08:59:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:58.226 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[19c417cb-0a9c-4c78-8d42-ed52b9819d28]: (4, ('Tue Oct 14 08:59:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8 (96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590)\n96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590\nTue Oct 14 08:59:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8 (96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590)\n96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:58.228 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[96f9f9b2-5bb5-4327-b1b6-84396eaa876d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:58.231 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0f6e4de-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:58 compute-0 kernel: tapa0f6e4de-50: left promiscuous mode
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:58.259 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[58a197bd-7940-4edf-993d-d1e51003075e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:58.286 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7165e449-f174-42e6-90df-d572966c74f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:58.287 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9f492c00-648b-4aee-aa80-b224ff6fcb39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:58.301 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6a9673e9-f21a-4e3c-81fa-f2581b36978e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617030, 'reachable_time': 41316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299540, 'error': None, 'target': 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:58 compute-0 systemd[1]: run-netns-ovnmeta\x2da0f6e4de\x2d522c\x2d468f\x2d8b55\x2d9e5064a6cce8.mount: Deactivated successfully.
Oct 14 08:59:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:58.305 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 08:59:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:58.305 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a0bb4e-994c-4cc1-bcb6-c114c268e87b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.321 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432383.3203607, f3dafba3-6472-4921-9ece-b6076172365e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.322 2 INFO nova.compute.manager [-] [instance: f3dafba3-6472-4921-9ece-b6076172365e] VM Stopped (Lifecycle Event)
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.341 2 DEBUG nova.compute.manager [None req-ac2b7d07-5e51-43bd-97bd-aabdbc0d8d03 - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 08:59:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:59:58 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4100255372' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.677 2 DEBUG oslo_concurrency.processutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:58 compute-0 nova_compute[259627]: 2025-10-14 08:59:58.715 2 DEBUG oslo_concurrency.processutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 08:59:58 compute-0 ceph-mon[74249]: pgmap v1266: 305 pgs: 305 active+clean; 181 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 7.7 MiB/s rd, 55 KiB/s wr, 407 op/s
Oct 14 08:59:58 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4100255372' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.031 2 DEBUG nova.network.neutron [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Successfully updated port: c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.048 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "refresh_cache-6de921d2-e251-431d-9333-bae44aa81859" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.049 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquired lock "refresh_cache-6de921d2-e251-431d-9333-bae44aa81859" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.049 2 DEBUG nova.network.neutron [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 08:59:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 08:59:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3144157410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.141 2 DEBUG nova.compute.manager [req-2f05c9ec-1afd-442b-ba08-ed3ec9e01b5c req-3ed60ff6-84d6-4167-9fd8-6a8c1e51cb55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Received event network-changed-c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.142 2 DEBUG nova.compute.manager [req-2f05c9ec-1afd-442b-ba08-ed3ec9e01b5c req-3ed60ff6-84d6-4167-9fd8-6a8c1e51cb55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Refreshing instance network info cache due to event network-changed-c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.142 2 DEBUG oslo_concurrency.lockutils [req-2f05c9ec-1afd-442b-ba08-ed3ec9e01b5c req-3ed60ff6-84d6-4167-9fd8-6a8c1e51cb55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-6de921d2-e251-431d-9333-bae44aa81859" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.151 2 DEBUG oslo_concurrency.processutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.153 2 DEBUG nova.virt.libvirt.vif [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:59:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-461517423',display_name='tempest-InstanceActionsTestJSON-server-461517423',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-461517423',id=33,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd273d79854242779e57eece9a65f7c0',ramdisk_id='',reservation_id='r-al5tk2ok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-580993042',owner_user_name='tempest-InstanceActionsTestJSON-580993042-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:57Z,user_data=None,user_id='9ec2f781b62446cb98129707144b9d37',uuid=1af6c158-005b-4f3c-9044-87158e57378d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.153 2 DEBUG nova.network.os_vif_util [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converting VIF {"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.154 2 DEBUG nova.network.os_vif_util [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.155 2 DEBUG nova.objects.instance [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1af6c158-005b-4f3c-9044-87158e57378d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.181 2 DEBUG nova.virt.libvirt.driver [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] End _get_guest_xml xml=<domain type="kvm">
Oct 14 08:59:59 compute-0 nova_compute[259627]:   <uuid>1af6c158-005b-4f3c-9044-87158e57378d</uuid>
Oct 14 08:59:59 compute-0 nova_compute[259627]:   <name>instance-00000021</name>
Oct 14 08:59:59 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 08:59:59 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 08:59:59 compute-0 nova_compute[259627]:   <metadata>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <nova:name>tempest-InstanceActionsTestJSON-server-461517423</nova:name>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 08:59:58</nova:creationTime>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 08:59:59 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 08:59:59 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 08:59:59 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 08:59:59 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 08:59:59 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 08:59:59 compute-0 nova_compute[259627]:         <nova:user uuid="9ec2f781b62446cb98129707144b9d37">tempest-InstanceActionsTestJSON-580993042-project-member</nova:user>
Oct 14 08:59:59 compute-0 nova_compute[259627]:         <nova:project uuid="fd273d79854242779e57eece9a65f7c0">tempest-InstanceActionsTestJSON-580993042</nova:project>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 08:59:59 compute-0 nova_compute[259627]:         <nova:port uuid="36158ae1-8367-4859-a407-565fde315649">
Oct 14 08:59:59 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 08:59:59 compute-0 nova_compute[259627]:   </metadata>
Oct 14 08:59:59 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <system>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <entry name="serial">1af6c158-005b-4f3c-9044-87158e57378d</entry>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <entry name="uuid">1af6c158-005b-4f3c-9044-87158e57378d</entry>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     </system>
Oct 14 08:59:59 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 08:59:59 compute-0 nova_compute[259627]:   <os>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:   </os>
Oct 14 08:59:59 compute-0 nova_compute[259627]:   <features>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <apic/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:   </features>
Oct 14 08:59:59 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:   </clock>
Oct 14 08:59:59 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:   </cpu>
Oct 14 08:59:59 compute-0 nova_compute[259627]:   <devices>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1af6c158-005b-4f3c-9044-87158e57378d_disk">
Oct 14 08:59:59 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       </source>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:59:59 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1af6c158-005b-4f3c-9044-87158e57378d_disk.config">
Oct 14 08:59:59 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       </source>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 08:59:59 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       </auth>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     </disk>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:a4:7e:78"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <target dev="tap36158ae1-83"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     </interface>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d/console.log" append="off"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     </serial>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <video>
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     </video>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <input type="keyboard" bus="usb"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     </rng>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 08:59:59 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 08:59:59 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 08:59:59 compute-0 nova_compute[259627]:   </devices>
Oct 14 08:59:59 compute-0 nova_compute[259627]: </domain>
Oct 14 08:59:59 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.182 2 DEBUG nova.virt.libvirt.driver [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.183 2 DEBUG nova.virt.libvirt.driver [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.184 2 DEBUG nova.virt.libvirt.vif [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:59:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-461517423',display_name='tempest-InstanceActionsTestJSON-server-461517423',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-461517423',id=33,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='fd273d79854242779e57eece9a65f7c0',ramdisk_id='',reservation_id='r-al5tk2ok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',i
mage_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-580993042',owner_user_name='tempest-InstanceActionsTestJSON-580993042-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:57Z,user_data=None,user_id='9ec2f781b62446cb98129707144b9d37',uuid=1af6c158-005b-4f3c-9044-87158e57378d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.184 2 DEBUG nova.network.os_vif_util [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converting VIF {"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.185 2 DEBUG nova.network.os_vif_util [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.185 2 DEBUG os_vif [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.186 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.187 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.189 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36158ae1-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap36158ae1-83, col_values=(('external_ids', {'iface-id': '36158ae1-8367-4859-a407-565fde315649', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:7e:78', 'vm-uuid': '1af6c158-005b-4f3c-9044-87158e57378d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:59 compute-0 NetworkManager[44885]: <info>  [1760432399.1925] manager: (tap36158ae1-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.196 2 INFO os_vif [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83')
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.226 2 DEBUG nova.network.neutron [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 08:59:59 compute-0 kernel: tap36158ae1-83: entered promiscuous mode
Oct 14 08:59:59 compute-0 systemd-udevd[299458]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:59 compute-0 ovn_controller[152662]: 2025-10-14T08:59:59Z|00281|binding|INFO|Claiming lport 36158ae1-8367-4859-a407-565fde315649 for this chassis.
Oct 14 08:59:59 compute-0 ovn_controller[152662]: 2025-10-14T08:59:59Z|00282|binding|INFO|36158ae1-8367-4859-a407-565fde315649: Claiming fa:16:3e:a4:7e:78 10.100.0.7
Oct 14 08:59:59 compute-0 NetworkManager[44885]: <info>  [1760432399.2549] manager: (tap36158ae1-83): new Tun device (/org/freedesktop/NetworkManager/Devices/138)
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.261 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:7e:78 10.100.0.7'], port_security=['fa:16:3e:a4:7e:78 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1af6c158-005b-4f3c-9044-87158e57378d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd273d79854242779e57eece9a65f7c0', 'neutron:revision_number': '5', 'neutron:security_group_ids': '4b2e574d-708f-45c3-a256-0b7fe2d0873c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65584940-3c3d-4797-80d1-97beab212175, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=36158ae1-8367-4859-a407-565fde315649) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.263 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 36158ae1-8367-4859-a407-565fde315649 in datapath a0f6e4de-522c-468f-8b55-9e5064a6cce8 bound to our chassis
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.265 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0f6e4de-522c-468f-8b55-9e5064a6cce8
Oct 14 08:59:59 compute-0 NetworkManager[44885]: <info>  [1760432399.2678] device (tap36158ae1-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 08:59:59 compute-0 NetworkManager[44885]: <info>  [1760432399.2691] device (tap36158ae1-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 08:59:59 compute-0 ovn_controller[152662]: 2025-10-14T08:59:59Z|00283|binding|INFO|Setting lport 36158ae1-8367-4859-a407-565fde315649 ovn-installed in OVS
Oct 14 08:59:59 compute-0 ovn_controller[152662]: 2025-10-14T08:59:59Z|00284|binding|INFO|Setting lport 36158ae1-8367-4859-a407-565fde315649 up in Southbound
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.280 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[257fefcd-e30e-41b3-a04e-fc8d3caa9cbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.282 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0f6e4de-51 in ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.284 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0f6e4de-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.284 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9b681e22-6629-4a45-be01-31c14d5f8829]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.287 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c3bf8722-c5b5-4111-ade8-833fd692029f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:59 compute-0 systemd-machined[214636]: New machine qemu-39-instance-00000021.
Oct 14 08:59:59 compute-0 systemd[1]: Started Virtual Machine qemu-39-instance-00000021.
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.303 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[4616b5bb-fae5-4dee-af08-6fce63c08ae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.336 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[765729e0-e8da-4f14-b502-898fc733549b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.368 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3b53b2e6-b5d4-44f0-9e97-06c10edaaef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:59 compute-0 NetworkManager[44885]: <info>  [1760432399.3807] manager: (tapa0f6e4de-50): new Veth device (/org/freedesktop/NetworkManager/Devices/139)
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.380 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4998212e-a42f-4be1-a18c-c2676d044724]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.420 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cf06f251-d78e-4a0b-84bd-185fe92241d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.422 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d8fd59ed-2239-47cb-be11-8b33c01a3e8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:59 compute-0 NetworkManager[44885]: <info>  [1760432399.4528] device (tapa0f6e4de-50): carrier: link connected
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.457 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[00794ea5-a768-4360-b893-5e5710c898d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.480 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab0a3c7-efc4-4370-bf33-74d4bc0a162e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0f6e4de-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:e4:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617821, 'reachable_time': 15347, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299649, 'error': None, 'target': 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.496 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e2396f-cc14-4c96-bb0a-9a801c3e438d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:e471'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 617821, 'tstamp': 617821}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299650, 'error': None, 'target': 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.515 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0b3e6f0c-7d01-4019-86ca-1ae3a05945c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0f6e4de-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:e4:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617821, 'reachable_time': 15347, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299651, 'error': None, 'target': 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.547 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a504dc58-7e7c-43d8-b023-be6dad2ab9ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 181 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 43 KiB/s wr, 316 op/s
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.600 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1fee0503-a1ff-460c-b7bd-d347349dd2c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.601 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0f6e4de-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.601 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.602 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0f6e4de-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:59 compute-0 NetworkManager[44885]: <info>  [1760432399.6044] manager: (tapa0f6e4de-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Oct 14 08:59:59 compute-0 kernel: tapa0f6e4de-50: entered promiscuous mode
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.612 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0f6e4de-50, col_values=(('external_ids', {'iface-id': 'f946a06b-cc1d-436a-9eac-cf144d4f5ad3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 08:59:59 compute-0 ovn_controller[152662]: 2025-10-14T08:59:59Z|00285|binding|INFO|Releasing lport f946a06b-cc1d-436a-9eac-cf144d4f5ad3 from this chassis (sb_readonly=0)
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.631 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0f6e4de-522c-468f-8b55-9e5064a6cce8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0f6e4de-522c-468f-8b55-9e5064a6cce8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.631 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2e74fcfc-32cd-4573-8ca5-2d1ff768a892]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.632 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: global
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-a0f6e4de-522c-468f-8b55-9e5064a6cce8
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/a0f6e4de-522c-468f-8b55-9e5064a6cce8.pid.haproxy
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID a0f6e4de-522c-468f-8b55-9e5064a6cce8
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 08:59:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.632 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'env', 'PROCESS_TAG=haproxy-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0f6e4de-522c-468f-8b55-9e5064a6cce8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.793 2 DEBUG nova.compute.manager [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received event network-vif-unplugged-36158ae1-8367-4859-a407-565fde315649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.794 2 DEBUG oslo_concurrency.lockutils [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.795 2 DEBUG oslo_concurrency.lockutils [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.795 2 DEBUG oslo_concurrency.lockutils [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.795 2 DEBUG nova.compute.manager [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] No waiting events found dispatching network-vif-unplugged-36158ae1-8367-4859-a407-565fde315649 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.796 2 WARNING nova.compute.manager [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received unexpected event network-vif-unplugged-36158ae1-8367-4859-a407-565fde315649 for instance with vm_state active and task_state reboot_started_hard.
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.796 2 DEBUG nova.compute.manager [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.797 2 DEBUG oslo_concurrency.lockutils [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.797 2 DEBUG oslo_concurrency.lockutils [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.798 2 DEBUG oslo_concurrency.lockutils [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.798 2 DEBUG nova.compute.manager [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] No waiting events found dispatching network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 08:59:59 compute-0 nova_compute[259627]: 2025-10-14 08:59:59.798 2 WARNING nova.compute.manager [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received unexpected event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 for instance with vm_state active and task_state reboot_started_hard.
Oct 14 08:59:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3144157410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:00 compute-0 podman[299681]: 2025-10-14 09:00:00.006090616 +0000 UTC m=+0.056764255 container create ed7f78eb3c41dff983c03eb050de066d7f5c31f6c5ba4552fc79e490802b436c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:00:00 compute-0 systemd[1]: Started libpod-conmon-ed7f78eb3c41dff983c03eb050de066d7f5c31f6c5ba4552fc79e490802b436c.scope.
Oct 14 09:00:00 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:00:00 compute-0 podman[299681]: 2025-10-14 08:59:59.975264459 +0000 UTC m=+0.025938148 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:00:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f7155d9b3a359e6f03029b557be26ab30a97c56f137c4a515e7fc358810474/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:00:00 compute-0 podman[299681]: 2025-10-14 09:00:00.094316022 +0000 UTC m=+0.144989681 container init ed7f78eb3c41dff983c03eb050de066d7f5c31f6c5ba4552fc79e490802b436c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 09:00:00 compute-0 podman[299681]: 2025-10-14 09:00:00.113744809 +0000 UTC m=+0.164418448 container start ed7f78eb3c41dff983c03eb050de066d7f5c31f6c5ba4552fc79e490802b436c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 09:00:00 compute-0 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299696]: [NOTICE]   (299725) : New worker (299734) forked
Oct 14 09:00:00 compute-0 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299696]: [NOTICE]   (299725) : Loading success.
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.151 2 DEBUG nova.network.neutron [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Updating instance_info_cache with network_info: [{"id": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "address": "fa:16:3e:1d:83:ca", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eb9aa7-6f", "ovs_interfaceid": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.198 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Releasing lock "refresh_cache-6de921d2-e251-431d-9333-bae44aa81859" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.198 2 DEBUG nova.compute.manager [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Instance network_info: |[{"id": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "address": "fa:16:3e:1d:83:ca", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eb9aa7-6f", "ovs_interfaceid": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.199 2 DEBUG oslo_concurrency.lockutils [req-2f05c9ec-1afd-442b-ba08-ed3ec9e01b5c req-3ed60ff6-84d6-4167-9fd8-6a8c1e51cb55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-6de921d2-e251-431d-9333-bae44aa81859" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.199 2 DEBUG nova.network.neutron [req-2f05c9ec-1afd-442b-ba08-ed3ec9e01b5c req-3ed60ff6-84d6-4167-9fd8-6a8c1e51cb55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Refreshing network info cache for port c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.202 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Start _get_guest_xml network_info=[{"id": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "address": "fa:16:3e:1d:83:ca", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eb9aa7-6f", "ovs_interfaceid": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.207 2 WARNING nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.220 2 DEBUG nova.virt.libvirt.host [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.222 2 DEBUG nova.virt.libvirt.host [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.227 2 DEBUG nova.virt.libvirt.host [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.228 2 DEBUG nova.virt.libvirt.host [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.228 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.229 2 DEBUG nova.virt.hardware [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.229 2 DEBUG nova.virt.hardware [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.229 2 DEBUG nova.virt.hardware [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.230 2 DEBUG nova.virt.hardware [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.230 2 DEBUG nova.virt.hardware [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.230 2 DEBUG nova.virt.hardware [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.230 2 DEBUG nova.virt.hardware [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.230 2 DEBUG nova.virt.hardware [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.231 2 DEBUG nova.virt.hardware [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.231 2 DEBUG nova.virt.hardware [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.231 2 DEBUG nova.virt.hardware [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.234 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.338 2 DEBUG oslo_concurrency.lockutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "6b630da6-e65a-48aa-9559-1d59beb73a93" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.338 2 DEBUG oslo_concurrency.lockutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "6b630da6-e65a-48aa-9559-1d59beb73a93" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:00 compute-0 rsyslogd[1002]: imjournal from <np0005486808:systemd>: begin to drop messages due to rate-limiting
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.360 2 DEBUG nova.compute.manager [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.423 2 DEBUG oslo_concurrency.lockutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.423 2 DEBUG oslo_concurrency.lockutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.439 2 DEBUG nova.virt.hardware [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.439 2 INFO nova.compute.claims [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:00:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/619233789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.668 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.710 2 DEBUG nova.storage.rbd_utils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 6de921d2-e251-431d-9333-bae44aa81859_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.714 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.740 2 DEBUG oslo_concurrency.processutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.771 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 1af6c158-005b-4f3c-9044-87158e57378d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.772 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432400.6830661, 1af6c158-005b-4f3c-9044-87158e57378d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.773 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] VM Resumed (Lifecycle Event)
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.779 2 DEBUG nova.compute.manager [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.787 2 INFO nova.virt.libvirt.driver [-] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Instance rebooted successfully.
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.788 2 DEBUG nova.compute.manager [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.807 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.815 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.847 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.848 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432400.683176, 1af6c158-005b-4f3c-9044-87158e57378d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.848 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] VM Started (Lifecycle Event)
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.878 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.888 2 DEBUG oslo_concurrency.lockutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:00 compute-0 nova_compute[259627]: 2025-10-14 09:00:00.900 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:00 compute-0 ceph-mon[74249]: pgmap v1267: 305 pgs: 305 active+clean; 181 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 43 KiB/s wr, 316 op/s
Oct 14 09:00:00 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/619233789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/127062736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.196 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.197 2 DEBUG nova.virt.libvirt.vif [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1877945577',display_name='tempest-ImagesTestJSON-server-1877945577',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1877945577',id=35,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-3imcd7e3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:59:57Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=6de921d2-e251-431d-9333-bae44aa81859,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "address": "fa:16:3e:1d:83:ca", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eb9aa7-6f", "ovs_interfaceid": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.198 2 DEBUG nova.network.os_vif_util [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "address": "fa:16:3e:1d:83:ca", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eb9aa7-6f", "ovs_interfaceid": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.198 2 DEBUG nova.network.os_vif_util [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:83:ca,bridge_name='br-int',has_traffic_filtering=True,id=c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eb9aa7-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.200 2 DEBUG nova.objects.instance [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'pci_devices' on Instance uuid 6de921d2-e251-431d-9333-bae44aa81859 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.219 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:00:01 compute-0 nova_compute[259627]:   <uuid>6de921d2-e251-431d-9333-bae44aa81859</uuid>
Oct 14 09:00:01 compute-0 nova_compute[259627]:   <name>instance-00000023</name>
Oct 14 09:00:01 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:00:01 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:00:01 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <nova:name>tempest-ImagesTestJSON-server-1877945577</nova:name>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:00:00</nova:creationTime>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:00:01 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:00:01 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:00:01 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:00:01 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:00:01 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:00:01 compute-0 nova_compute[259627]:         <nova:user uuid="3a217215c39e41fea2323ff7b3b4e6aa">tempest-ImagesTestJSON-168259448-project-member</nova:user>
Oct 14 09:00:01 compute-0 nova_compute[259627]:         <nova:project uuid="0d87d2d744db48dc8b32bb4bf6847fce">tempest-ImagesTestJSON-168259448</nova:project>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:00:01 compute-0 nova_compute[259627]:         <nova:port uuid="c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b">
Oct 14 09:00:01 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:00:01 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:00:01 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <system>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <entry name="serial">6de921d2-e251-431d-9333-bae44aa81859</entry>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <entry name="uuid">6de921d2-e251-431d-9333-bae44aa81859</entry>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     </system>
Oct 14 09:00:01 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:00:01 compute-0 nova_compute[259627]:   <os>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:   </os>
Oct 14 09:00:01 compute-0 nova_compute[259627]:   <features>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:   </features>
Oct 14 09:00:01 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:00:01 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:00:01 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/6de921d2-e251-431d-9333-bae44aa81859_disk">
Oct 14 09:00:01 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:01 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/6de921d2-e251-431d-9333-bae44aa81859_disk.config">
Oct 14 09:00:01 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:01 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:1d:83:ca"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <target dev="tapc0eb9aa7-6f"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/6de921d2-e251-431d-9333-bae44aa81859/console.log" append="off"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <video>
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     </video>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:00:01 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:00:01 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:00:01 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:00:01 compute-0 nova_compute[259627]: </domain>
Oct 14 09:00:01 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.220 2 DEBUG nova.compute.manager [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Preparing to wait for external event network-vif-plugged-c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.220 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "6de921d2-e251-431d-9333-bae44aa81859-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.220 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "6de921d2-e251-431d-9333-bae44aa81859-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.220 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "6de921d2-e251-431d-9333-bae44aa81859-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.221 2 DEBUG nova.virt.libvirt.vif [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1877945577',display_name='tempest-ImagesTestJSON-server-1877945577',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1877945577',id=35,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-3imcd7e3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:59:57Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=6de921d2-e251-431d-9333-bae44aa81859,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "address": "fa:16:3e:1d:83:ca", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eb9aa7-6f", "ovs_interfaceid": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.221 2 DEBUG nova.network.os_vif_util [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "address": "fa:16:3e:1d:83:ca", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eb9aa7-6f", "ovs_interfaceid": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.222 2 DEBUG nova.network.os_vif_util [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:83:ca,bridge_name='br-int',has_traffic_filtering=True,id=c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eb9aa7-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.222 2 DEBUG os_vif [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:83:ca,bridge_name='br-int',has_traffic_filtering=True,id=c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eb9aa7-6f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.223 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.223 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.226 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc0eb9aa7-6f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.226 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc0eb9aa7-6f, col_values=(('external_ids', {'iface-id': 'c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:83:ca', 'vm-uuid': '6de921d2-e251-431d-9333-bae44aa81859'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:00:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1496898898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:01 compute-0 NetworkManager[44885]: <info>  [1760432401.2291] manager: (tapc0eb9aa7-6f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.237 2 INFO os_vif [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:83:ca,bridge_name='br-int',has_traffic_filtering=True,id=c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eb9aa7-6f')
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.258 2 DEBUG oslo_concurrency.processutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.265 2 DEBUG nova.compute.provider_tree [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.281 2 DEBUG nova.scheduler.client.report [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.307 2 DEBUG oslo_concurrency.lockutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.308 2 DEBUG nova.compute.manager [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.311 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.311 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.312 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No VIF found with MAC fa:16:3e:1d:83:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.312 2 INFO nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Using config drive
Oct 14 09:00:01 compute-0 podman[299840]: 2025-10-14 09:00:01.335778757 +0000 UTC m=+0.069393675 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009)
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.349 2 DEBUG nova.storage.rbd_utils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 6de921d2-e251-431d-9333-bae44aa81859_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:01 compute-0 podman[299841]: 2025-10-14 09:00:01.358398952 +0000 UTC m=+0.090027141 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.371 2 DEBUG nova.compute.manager [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.371 2 DEBUG nova.network.neutron [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.394 2 INFO nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.413 2 DEBUG nova.compute.manager [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.501 2 DEBUG nova.compute.manager [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.503 2 DEBUG nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.503 2 INFO nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Creating image(s)
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.522 2 DEBUG nova.storage.rbd_utils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 6b630da6-e65a-48aa-9559-1d59beb73a93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.542 2 DEBUG nova.storage.rbd_utils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 6b630da6-e65a-48aa-9559-1d59beb73a93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.563 2 DEBUG nova.storage.rbd_utils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 6b630da6-e65a-48aa-9559-1d59beb73a93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.566 2 DEBUG oslo_concurrency.processutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 227 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 7.7 MiB/s rd, 2.4 MiB/s wr, 400 op/s
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.628 2 DEBUG oslo_concurrency.processutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.629 2 DEBUG oslo_concurrency.lockutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.630 2 DEBUG oslo_concurrency.lockutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.630 2 DEBUG oslo_concurrency.lockutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.649 2 DEBUG nova.storage.rbd_utils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 6b630da6-e65a-48aa-9559-1d59beb73a93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.653 2 DEBUG oslo_concurrency.processutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 6b630da6-e65a-48aa-9559-1d59beb73a93_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.681 2 DEBUG nova.policy [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56001fe1c9fc432e923f8c57058754db', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.684 2 DEBUG nova.network.neutron [req-2f05c9ec-1afd-442b-ba08-ed3ec9e01b5c req-3ed60ff6-84d6-4167-9fd8-6a8c1e51cb55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Updated VIF entry in instance network info cache for port c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.685 2 DEBUG nova.network.neutron [req-2f05c9ec-1afd-442b-ba08-ed3ec9e01b5c req-3ed60ff6-84d6-4167-9fd8-6a8c1e51cb55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Updating instance_info_cache with network_info: [{"id": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "address": "fa:16:3e:1d:83:ca", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eb9aa7-6f", "ovs_interfaceid": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.703 2 DEBUG oslo_concurrency.lockutils [req-2f05c9ec-1afd-442b-ba08-ed3ec9e01b5c req-3ed60ff6-84d6-4167-9fd8-6a8c1e51cb55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-6de921d2-e251-431d-9333-bae44aa81859" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.746 2 INFO nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Creating config drive at /var/lib/nova/instances/6de921d2-e251-431d-9333-bae44aa81859/disk.config
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.755 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6de921d2-e251-431d-9333-bae44aa81859/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxrlclaaw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.847 2 DEBUG oslo_concurrency.lockutils [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.849 2 DEBUG oslo_concurrency.lockutils [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.849 2 DEBUG oslo_concurrency.lockutils [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.849 2 DEBUG oslo_concurrency.lockutils [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.849 2 DEBUG oslo_concurrency.lockutils [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.850 2 INFO nova.compute.manager [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Terminating instance
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.851 2 DEBUG nova.compute.manager [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.864 2 DEBUG oslo_concurrency.processutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 6b630da6-e65a-48aa-9559-1d59beb73a93_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:01 compute-0 kernel: tap36158ae1-83 (unregistering): left promiscuous mode
Oct 14 09:00:01 compute-0 NetworkManager[44885]: <info>  [1760432401.8903] device (tap36158ae1-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:00:01 compute-0 ovn_controller[152662]: 2025-10-14T09:00:01Z|00286|binding|INFO|Releasing lport 36158ae1-8367-4859-a407-565fde315649 from this chassis (sb_readonly=0)
Oct 14 09:00:01 compute-0 ovn_controller[152662]: 2025-10-14T09:00:01Z|00287|binding|INFO|Setting lport 36158ae1-8367-4859-a407-565fde315649 down in Southbound
Oct 14 09:00:01 compute-0 ovn_controller[152662]: 2025-10-14T09:00:01Z|00288|binding|INFO|Removing iface tap36158ae1-83 ovn-installed in OVS
Oct 14 09:00:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:01.912 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:7e:78 10.100.0.7'], port_security=['fa:16:3e:a4:7e:78 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1af6c158-005b-4f3c-9044-87158e57378d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd273d79854242779e57eece9a65f7c0', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4b2e574d-708f-45c3-a256-0b7fe2d0873c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65584940-3c3d-4797-80d1-97beab212175, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=36158ae1-8367-4859-a407-565fde315649) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:01.913 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 36158ae1-8367-4859-a407-565fde315649 in datapath a0f6e4de-522c-468f-8b55-9e5064a6cce8 unbound from our chassis
Oct 14 09:00:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:01.914 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0f6e4de-522c-468f-8b55-9e5064a6cce8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:00:01 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/127062736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:01 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1496898898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:01.915 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5dfa1ac7-6355-4055-89a2-edbbc51d93c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:01.921 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8 namespace which is not needed anymore
Oct 14 09:00:01 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000021.scope: Deactivated successfully.
Oct 14 09:00:01 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000021.scope: Consumed 2.434s CPU time.
Oct 14 09:00:01 compute-0 systemd-machined[214636]: Machine qemu-39-instance-00000021 terminated.
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.940 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6de921d2-e251-431d-9333-bae44aa81859/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxrlclaaw" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.942 2 DEBUG nova.compute.manager [req-5186a8ff-2e9a-4f47-8dd2-fd76d38d6e50 req-42cd2baa-ceec-4c04-8dab-313bf7012630 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.942 2 DEBUG oslo_concurrency.lockutils [req-5186a8ff-2e9a-4f47-8dd2-fd76d38d6e50 req-42cd2baa-ceec-4c04-8dab-313bf7012630 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.942 2 DEBUG oslo_concurrency.lockutils [req-5186a8ff-2e9a-4f47-8dd2-fd76d38d6e50 req-42cd2baa-ceec-4c04-8dab-313bf7012630 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.942 2 DEBUG oslo_concurrency.lockutils [req-5186a8ff-2e9a-4f47-8dd2-fd76d38d6e50 req-42cd2baa-ceec-4c04-8dab-313bf7012630 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.943 2 DEBUG nova.compute.manager [req-5186a8ff-2e9a-4f47-8dd2-fd76d38d6e50 req-42cd2baa-ceec-4c04-8dab-313bf7012630 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] No waiting events found dispatching network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.943 2 WARNING nova.compute.manager [req-5186a8ff-2e9a-4f47-8dd2-fd76d38d6e50 req-42cd2baa-ceec-4c04-8dab-313bf7012630 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received unexpected event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 for instance with vm_state active and task_state deleting.
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.943 2 DEBUG nova.compute.manager [req-5186a8ff-2e9a-4f47-8dd2-fd76d38d6e50 req-42cd2baa-ceec-4c04-8dab-313bf7012630 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.943 2 DEBUG oslo_concurrency.lockutils [req-5186a8ff-2e9a-4f47-8dd2-fd76d38d6e50 req-42cd2baa-ceec-4c04-8dab-313bf7012630 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.943 2 DEBUG oslo_concurrency.lockutils [req-5186a8ff-2e9a-4f47-8dd2-fd76d38d6e50 req-42cd2baa-ceec-4c04-8dab-313bf7012630 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.943 2 DEBUG oslo_concurrency.lockutils [req-5186a8ff-2e9a-4f47-8dd2-fd76d38d6e50 req-42cd2baa-ceec-4c04-8dab-313bf7012630 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.943 2 DEBUG nova.compute.manager [req-5186a8ff-2e9a-4f47-8dd2-fd76d38d6e50 req-42cd2baa-ceec-4c04-8dab-313bf7012630 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] No waiting events found dispatching network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.944 2 WARNING nova.compute.manager [req-5186a8ff-2e9a-4f47-8dd2-fd76d38d6e50 req-42cd2baa-ceec-4c04-8dab-313bf7012630 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received unexpected event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 for instance with vm_state active and task_state deleting.
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.963 2 DEBUG nova.storage.rbd_utils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 6de921d2-e251-431d-9333-bae44aa81859_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.969 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6de921d2-e251-431d-9333-bae44aa81859/disk.config 6de921d2-e251-431d-9333-bae44aa81859_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:01 compute-0 nova_compute[259627]: 2025-10-14 09:00:01.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.042 2 DEBUG nova.storage.rbd_utils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] resizing rbd image 6b630da6-e65a-48aa-9559-1d59beb73a93_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:00:02 compute-0 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299696]: [NOTICE]   (299725) : haproxy version is 2.8.14-c23fe91
Oct 14 09:00:02 compute-0 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299696]: [NOTICE]   (299725) : path to executable is /usr/sbin/haproxy
Oct 14 09:00:02 compute-0 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299696]: [WARNING]  (299725) : Exiting Master process...
Oct 14 09:00:02 compute-0 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299696]: [WARNING]  (299725) : Exiting Master process...
Oct 14 09:00:02 compute-0 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299696]: [ALERT]    (299725) : Current worker (299734) exited with code 143 (Terminated)
Oct 14 09:00:02 compute-0 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299696]: [WARNING]  (299725) : All workers exited. Exiting... (0)
Oct 14 09:00:02 compute-0 systemd[1]: libpod-ed7f78eb3c41dff983c03eb050de066d7f5c31f6c5ba4552fc79e490802b436c.scope: Deactivated successfully.
Oct 14 09:00:02 compute-0 conmon[299696]: conmon ed7f78eb3c41dff983c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed7f78eb3c41dff983c03eb050de066d7f5c31f6c5ba4552fc79e490802b436c.scope/container/memory.events
Oct 14 09:00:02 compute-0 podman[300054]: 2025-10-14 09:00:02.055328606 +0000 UTC m=+0.047483137 container died ed7f78eb3c41dff983c03eb050de066d7f5c31f6c5ba4552fc79e490802b436c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:00:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed7f78eb3c41dff983c03eb050de066d7f5c31f6c5ba4552fc79e490802b436c-userdata-shm.mount: Deactivated successfully.
Oct 14 09:00:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3f7155d9b3a359e6f03029b557be26ab30a97c56f137c4a515e7fc358810474-merged.mount: Deactivated successfully.
Oct 14 09:00:02 compute-0 podman[300054]: 2025-10-14 09:00:02.109735782 +0000 UTC m=+0.101890293 container cleanup ed7f78eb3c41dff983c03eb050de066d7f5c31f6c5ba4552fc79e490802b436c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 09:00:02 compute-0 systemd[1]: libpod-conmon-ed7f78eb3c41dff983c03eb050de066d7f5c31f6c5ba4552fc79e490802b436c.scope: Deactivated successfully.
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.122 2 INFO nova.virt.libvirt.driver [-] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Instance destroyed successfully.
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.123 2 DEBUG nova.objects.instance [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lazy-loading 'resources' on Instance uuid 1af6c158-005b-4f3c-9044-87158e57378d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.152 2 DEBUG nova.virt.libvirt.vif [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:59:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-461517423',display_name='tempest-InstanceActionsTestJSON-server-461517423',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-461517423',id=33,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd273d79854242779e57eece9a65f7c0',ramdisk_id='',reservation_id='r-al5tk2ok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-580993042',owner_user_name='tempest-InstanceActionsTestJSON-580993042-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:00:00Z,user_data=None,user_id='9ec2f781b62446cb98129707144b9d37',uuid=1af6c158-005b-4f3c-9044-87158e57378d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.153 2 DEBUG nova.network.os_vif_util [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converting VIF {"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.154 2 DEBUG nova.network.os_vif_util [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.154 2 DEBUG os_vif [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.157 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36158ae1-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.169 2 INFO os_vif [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83')
Oct 14 09:00:02 compute-0 podman[300146]: 2025-10-14 09:00:02.176278927 +0000 UTC m=+0.044307090 container remove ed7f78eb3c41dff983c03eb050de066d7f5c31f6c5ba4552fc79e490802b436c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.181 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[67ba5c06-5a20-4c2b-a076-2a67a1f8bfd1]: (4, ('Tue Oct 14 09:00:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8 (ed7f78eb3c41dff983c03eb050de066d7f5c31f6c5ba4552fc79e490802b436c)\ned7f78eb3c41dff983c03eb050de066d7f5c31f6c5ba4552fc79e490802b436c\nTue Oct 14 09:00:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8 (ed7f78eb3c41dff983c03eb050de066d7f5c31f6c5ba4552fc79e490802b436c)\ned7f78eb3c41dff983c03eb050de066d7f5c31f6c5ba4552fc79e490802b436c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.183 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be4564d9-175f-4d6f-90fd-8cce8e44c17b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.183 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0f6e4de-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:02 compute-0 kernel: tapa0f6e4de-50: left promiscuous mode
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.209 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b651c5-1bf3-47e3-a786-807fcf58bac0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.232 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[43fe7d7c-7778-44f3-ac40-0964fe0aa8ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.233 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0d519fa5-5709-4b29-b515-621489972a50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.243 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6de921d2-e251-431d-9333-bae44aa81859/disk.config 6de921d2-e251-431d-9333-bae44aa81859_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.248 2 INFO nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Deleting local config drive /var/lib/nova/instances/6de921d2-e251-431d-9333-bae44aa81859/disk.config because it was imported into RBD.
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.249 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7d4cf5-4b9b-411a-8d05-d371a2fc6f58]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617812, 'reachable_time': 35662, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300198, 'error': None, 'target': 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 systemd[1]: run-netns-ovnmeta\x2da0f6e4de\x2d522c\x2d468f\x2d8b55\x2d9e5064a6cce8.mount: Deactivated successfully.
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.251 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.251 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[01d86d61-8ac5-4ba4-a848-6e64c2fad84a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.258 2 DEBUG nova.objects.instance [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'migration_context' on Instance uuid 6b630da6-e65a-48aa-9559-1d59beb73a93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.273 2 DEBUG nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.273 2 DEBUG nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Ensure instance console log exists: /var/lib/nova/instances/6b630da6-e65a-48aa-9559-1d59beb73a93/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.274 2 DEBUG oslo_concurrency.lockutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.274 2 DEBUG oslo_concurrency.lockutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.276 2 DEBUG oslo_concurrency.lockutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:02 compute-0 kernel: tapc0eb9aa7-6f: entered promiscuous mode
Oct 14 09:00:02 compute-0 systemd-udevd[300016]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:00:02 compute-0 NetworkManager[44885]: <info>  [1760432402.3022] manager: (tapc0eb9aa7-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/142)
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:02 compute-0 ovn_controller[152662]: 2025-10-14T09:00:02Z|00289|binding|INFO|Claiming lport c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b for this chassis.
Oct 14 09:00:02 compute-0 ovn_controller[152662]: 2025-10-14T09:00:02Z|00290|binding|INFO|c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b: Claiming fa:16:3e:1d:83:ca 10.100.0.6
Oct 14 09:00:02 compute-0 NetworkManager[44885]: <info>  [1760432402.3122] device (tapc0eb9aa7-6f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:00:02 compute-0 NetworkManager[44885]: <info>  [1760432402.3139] device (tapc0eb9aa7-6f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.315 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:83:ca 10.100.0.6'], port_security=['fa:16:3e:1d:83:ca 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6de921d2-e251-431d-9333-bae44aa81859', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.316 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a bound to our chassis
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.318 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.328 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[307cd449-9bc3-4718-9d87-97f0fde5f9eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.328 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2322cf7a-01 in ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.331 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2322cf7a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.331 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[003ef481-c38f-498f-8763-ded96a8cfad6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.331 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b66ad847-cb93-4a1e-9d0a-8f785a5e6483]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.345 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[005f8196-755b-4c0a-9737-fcc19de09370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 systemd-machined[214636]: New machine qemu-40-instance-00000023.
Oct 14 09:00:02 compute-0 systemd[1]: Started Virtual Machine qemu-40-instance-00000023.
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.369 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b988d5-0a98-46b7-a679-ced981a77b7d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:02 compute-0 ovn_controller[152662]: 2025-10-14T09:00:02Z|00291|binding|INFO|Setting lport c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b ovn-installed in OVS
Oct 14 09:00:02 compute-0 ovn_controller[152662]: 2025-10-14T09:00:02Z|00292|binding|INFO|Setting lport c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b up in Southbound
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.402 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bd8bab56-bed6-4e1a-b294-56f459c787b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 NetworkManager[44885]: <info>  [1760432402.4229] manager: (tap2322cf7a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/143)
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.424 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7665ccb7-70e4-4c3d-89fc-cfd8c5a884e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.459 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a809fb03-97fd-4c30-b603-3a2fce4113d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.462 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d859eb7d-4e0e-4a15-97eb-498fc2281363]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 NetworkManager[44885]: <info>  [1760432402.4852] device (tap2322cf7a-00): carrier: link connected
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.491 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fe74a47a-4f49-4c15-9cb0-e60278dd922b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.507 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[36be77c5-29da-423a-b638-0f4435fe9ecd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618124, 'reachable_time': 18179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300247, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.527 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c88fa750-7c15-440d-bf5d-eb262cbb775f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed3:956c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618124, 'tstamp': 618124}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300248, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.546 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[38f6176a-ca38-4957-9561-8e8d7817e5dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618124, 'reachable_time': 18179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300256, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.579 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d68f94a6-7ce5-4bf2-8656-55bff15b29f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.645 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eee119f1-bd2f-4c30-8dc3-5ff897be57a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.646 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.646 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.646 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2322cf7a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:02 compute-0 kernel: tap2322cf7a-00: entered promiscuous mode
Oct 14 09:00:02 compute-0 NetworkManager[44885]: <info>  [1760432402.6491] manager: (tap2322cf7a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.651 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2322cf7a-00, col_values=(('external_ids', {'iface-id': '0616bbde-729a-4cd4-ba39-5fcdf59ece5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:02 compute-0 ovn_controller[152662]: 2025-10-14T09:00:02Z|00293|binding|INFO|Releasing lport 0616bbde-729a-4cd4-ba39-5fcdf59ece5e from this chassis (sb_readonly=0)
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.675 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.675 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cf312906-bdfd-4b1b-a297-0c60da408623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.676 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:00:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:02.677 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'env', 'PROCESS_TAG=haproxy-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2322cf7a-0090-40fa-a558-42d84cc6fc2a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.706 2 INFO nova.virt.libvirt.driver [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Deleting instance files /var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d_del
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.707 2 INFO nova.virt.libvirt.driver [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Deletion of /var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d_del complete
Oct 14 09:00:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:00:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:00:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:00:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:00:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:00:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.776 2 INFO nova.compute.manager [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Took 0.92 seconds to destroy the instance on the hypervisor.
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.777 2 DEBUG oslo.service.loopingcall [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.777 2 DEBUG nova.compute.manager [-] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:00:02 compute-0 nova_compute[259627]: 2025-10-14 09:00:02.777 2 DEBUG nova.network.neutron [-] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:00:02 compute-0 ceph-mon[74249]: pgmap v1268: 305 pgs: 305 active+clean; 227 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 7.7 MiB/s rd, 2.4 MiB/s wr, 400 op/s
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.020 2 DEBUG nova.network.neutron [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Successfully created port: f92053b9-1a42-46c2-9abc-77adb8210c62 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:00:03 compute-0 podman[300323]: 2025-10-14 09:00:03.119032707 +0000 UTC m=+0.057383200 container create b5296ae0802c0d60cfee18851bb80b1795c583a1b3d20c062b8e4d4781357b65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 09:00:03 compute-0 systemd[1]: Started libpod-conmon-b5296ae0802c0d60cfee18851bb80b1795c583a1b3d20c062b8e4d4781357b65.scope.
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.185 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432403.1855035, 6de921d2-e251-431d-9333-bae44aa81859 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.186 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6de921d2-e251-431d-9333-bae44aa81859] VM Started (Lifecycle Event)
Oct 14 09:00:03 compute-0 podman[300323]: 2025-10-14 09:00:03.087282377 +0000 UTC m=+0.025632890 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:00:03 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e4364654eb1177932eb1d6868d5f8af1add3539df06012a85c7f076ab965a3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:00:03 compute-0 podman[300323]: 2025-10-14 09:00:03.208538825 +0000 UTC m=+0.146889318 container init b5296ae0802c0d60cfee18851bb80b1795c583a1b3d20c062b8e4d4781357b65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.209 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.213 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432403.1861582, 6de921d2-e251-431d-9333-bae44aa81859 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.214 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6de921d2-e251-431d-9333-bae44aa81859] VM Paused (Lifecycle Event)
Oct 14 09:00:03 compute-0 podman[300323]: 2025-10-14 09:00:03.216325956 +0000 UTC m=+0.154676439 container start b5296ae0802c0d60cfee18851bb80b1795c583a1b3d20c062b8e4d4781357b65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 09:00:03 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[300339]: [NOTICE]   (300343) : New worker (300345) forked
Oct 14 09:00:03 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[300339]: [NOTICE]   (300343) : Loading success.
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.249 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.252 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.278 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6de921d2-e251-431d-9333-bae44aa81859] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.400 2 DEBUG nova.network.neutron [-] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.415 2 INFO nova.compute.manager [-] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Took 0.64 seconds to deallocate network for instance.
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.455 2 DEBUG oslo_concurrency.lockutils [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.455 2 DEBUG oslo_concurrency.lockutils [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.479 2 DEBUG nova.compute.manager [req-876f8fbb-30eb-41a5-a1ea-1d0c9ec6c7bc req-7546ec4b-1ec6-49cb-89ee-3300213d06e1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received event network-vif-deleted-36158ae1-8367-4859-a407-565fde315649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 227 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 2.2 MiB/s wr, 362 op/s
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.588 2 DEBUG oslo_concurrency.processutils [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.767 2 DEBUG nova.network.neutron [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Successfully updated port: f92053b9-1a42-46c2-9abc-77adb8210c62 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.785 2 DEBUG oslo_concurrency.lockutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "refresh_cache-6b630da6-e65a-48aa-9559-1d59beb73a93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.786 2 DEBUG oslo_concurrency.lockutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquired lock "refresh_cache-6b630da6-e65a-48aa-9559-1d59beb73a93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.786 2 DEBUG nova.network.neutron [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:00:03 compute-0 nova_compute[259627]: 2025-10-14 09:00:03.963 2 DEBUG nova.network.neutron [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:00:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:00:04 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1763274830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.037 2 DEBUG oslo_concurrency.processutils [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.044 2 DEBUG nova.compute.provider_tree [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.057 2 DEBUG nova.scheduler.client.report [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.077 2 DEBUG oslo_concurrency.lockutils [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.114 2 INFO nova.scheduler.client.report [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Deleted allocations for instance 1af6c158-005b-4f3c-9044-87158e57378d
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.170 2 DEBUG oslo_concurrency.lockutils [None req-c4552988-eb55-4fa0-b84e-a4424df76cc7 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.201 2 DEBUG nova.compute.manager [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received event network-vif-unplugged-36158ae1-8367-4859-a407-565fde315649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.201 2 DEBUG oslo_concurrency.lockutils [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.203 2 DEBUG oslo_concurrency.lockutils [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.203 2 DEBUG oslo_concurrency.lockutils [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.203 2 DEBUG nova.compute.manager [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] No waiting events found dispatching network-vif-unplugged-36158ae1-8367-4859-a407-565fde315649 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.203 2 WARNING nova.compute.manager [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received unexpected event network-vif-unplugged-36158ae1-8367-4859-a407-565fde315649 for instance with vm_state deleted and task_state None.
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.204 2 DEBUG nova.compute.manager [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.204 2 DEBUG oslo_concurrency.lockutils [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.204 2 DEBUG oslo_concurrency.lockutils [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.205 2 DEBUG oslo_concurrency.lockutils [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.205 2 DEBUG nova.compute.manager [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] No waiting events found dispatching network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.205 2 WARNING nova.compute.manager [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received unexpected event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 for instance with vm_state deleted and task_state None.
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.205 2 DEBUG nova.compute.manager [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Received event network-vif-plugged-c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.206 2 DEBUG oslo_concurrency.lockutils [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6de921d2-e251-431d-9333-bae44aa81859-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.206 2 DEBUG oslo_concurrency.lockutils [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6de921d2-e251-431d-9333-bae44aa81859-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.206 2 DEBUG oslo_concurrency.lockutils [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6de921d2-e251-431d-9333-bae44aa81859-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.206 2 DEBUG nova.compute.manager [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Processing event network-vif-plugged-c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.206 2 DEBUG nova.compute.manager [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Received event network-vif-plugged-c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.206 2 DEBUG oslo_concurrency.lockutils [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6de921d2-e251-431d-9333-bae44aa81859-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.207 2 DEBUG oslo_concurrency.lockutils [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6de921d2-e251-431d-9333-bae44aa81859-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.207 2 DEBUG oslo_concurrency.lockutils [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6de921d2-e251-431d-9333-bae44aa81859-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.207 2 DEBUG nova.compute.manager [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] No waiting events found dispatching network-vif-plugged-c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.207 2 WARNING nova.compute.manager [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Received unexpected event network-vif-plugged-c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b for instance with vm_state building and task_state spawning.
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.207 2 DEBUG nova.compute.manager [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Received event network-changed-f92053b9-1a42-46c2-9abc-77adb8210c62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.208 2 DEBUG nova.compute.manager [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Refreshing instance network info cache due to event network-changed-f92053b9-1a42-46c2-9abc-77adb8210c62. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.208 2 DEBUG oslo_concurrency.lockutils [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-6b630da6-e65a-48aa-9559-1d59beb73a93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.209 2 DEBUG nova.compute.manager [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.227 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432404.2271419, 6de921d2-e251-431d-9333-bae44aa81859 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.227 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6de921d2-e251-431d-9333-bae44aa81859] VM Resumed (Lifecycle Event)
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.229 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.232 2 INFO nova.virt.libvirt.driver [-] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Instance spawned successfully.
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.233 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:00:04 compute-0 ovn_controller[152662]: 2025-10-14T09:00:04Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:c2:c5 10.100.0.8
Oct 14 09:00:04 compute-0 ovn_controller[152662]: 2025-10-14T09:00:04Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:c2:c5 10.100.0.8
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.255 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.260 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.260 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.260 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.261 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.261 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.261 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.266 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.308 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6de921d2-e251-431d-9333-bae44aa81859] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.368 2 INFO nova.compute.manager [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Took 7.00 seconds to spawn the instance on the hypervisor.
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.368 2 DEBUG nova.compute.manager [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.424 2 INFO nova.compute.manager [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Took 7.97 seconds to build instance.
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.441 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "6de921d2-e251-431d-9333-bae44aa81859" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.943 2 DEBUG nova.network.neutron [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Updating instance_info_cache with network_info: [{"id": "f92053b9-1a42-46c2-9abc-77adb8210c62", "address": "fa:16:3e:ef:4e:2c", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92053b9-1a", "ovs_interfaceid": "f92053b9-1a42-46c2-9abc-77adb8210c62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:04 compute-0 ceph-mon[74249]: pgmap v1269: 305 pgs: 305 active+clean; 227 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 2.2 MiB/s wr, 362 op/s
Oct 14 09:00:04 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1763274830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.962 2 DEBUG oslo_concurrency.lockutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Releasing lock "refresh_cache-6b630da6-e65a-48aa-9559-1d59beb73a93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.962 2 DEBUG nova.compute.manager [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Instance network_info: |[{"id": "f92053b9-1a42-46c2-9abc-77adb8210c62", "address": "fa:16:3e:ef:4e:2c", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92053b9-1a", "ovs_interfaceid": "f92053b9-1a42-46c2-9abc-77adb8210c62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.962 2 DEBUG oslo_concurrency.lockutils [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-6b630da6-e65a-48aa-9559-1d59beb73a93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.963 2 DEBUG nova.network.neutron [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Refreshing network info cache for port f92053b9-1a42-46c2-9abc-77adb8210c62 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.965 2 DEBUG nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Start _get_guest_xml network_info=[{"id": "f92053b9-1a42-46c2-9abc-77adb8210c62", "address": "fa:16:3e:ef:4e:2c", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92053b9-1a", "ovs_interfaceid": "f92053b9-1a42-46c2-9abc-77adb8210c62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.969 2 WARNING nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.974 2 DEBUG nova.virt.libvirt.host [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.974 2 DEBUG nova.virt.libvirt.host [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.982 2 DEBUG nova.virt.libvirt.host [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.982 2 DEBUG nova.virt.libvirt.host [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.983 2 DEBUG nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.983 2 DEBUG nova.virt.hardware [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.983 2 DEBUG nova.virt.hardware [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.983 2 DEBUG nova.virt.hardware [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.984 2 DEBUG nova.virt.hardware [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.984 2 DEBUG nova.virt.hardware [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.984 2 DEBUG nova.virt.hardware [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.984 2 DEBUG nova.virt.hardware [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.985 2 DEBUG nova.virt.hardware [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.985 2 DEBUG nova.virt.hardware [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.985 2 DEBUG nova.virt.hardware [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.985 2 DEBUG nova.virt.hardware [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:00:04 compute-0 nova_compute[259627]: 2025-10-14 09:00:04.989 2 DEBUG oslo_concurrency.processutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:05 compute-0 nova_compute[259627]: 2025-10-14 09:00:05.429 2 DEBUG oslo_concurrency.lockutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Acquiring lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:05 compute-0 nova_compute[259627]: 2025-10-14 09:00:05.429 2 DEBUG oslo_concurrency.lockutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:05 compute-0 nova_compute[259627]: 2025-10-14 09:00:05.447 2 DEBUG nova.compute.manager [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:00:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2375495280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:05 compute-0 nova_compute[259627]: 2025-10-14 09:00:05.499 2 DEBUG oslo_concurrency.processutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:00:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/134544278' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:00:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:00:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/134544278' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:00:05 compute-0 nova_compute[259627]: 2025-10-14 09:00:05.521 2 DEBUG nova.storage.rbd_utils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 6b630da6-e65a-48aa-9559-1d59beb73a93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:05 compute-0 nova_compute[259627]: 2025-10-14 09:00:05.527 2 DEBUG oslo_concurrency.processutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:05 compute-0 nova_compute[259627]: 2025-10-14 09:00:05.570 2 DEBUG oslo_concurrency.lockutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:05 compute-0 nova_compute[259627]: 2025-10-14 09:00:05.570 2 DEBUG oslo_concurrency.lockutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:05 compute-0 nova_compute[259627]: 2025-10-14 09:00:05.578 2 DEBUG nova.virt.hardware [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:00:05 compute-0 nova_compute[259627]: 2025-10-14 09:00:05.578 2 INFO nova.compute.claims [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:00:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 260 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.8 MiB/s wr, 385 op/s
Oct 14 09:00:05 compute-0 nova_compute[259627]: 2025-10-14 09:00:05.764 2 DEBUG oslo_concurrency.processutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2375495280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/134544278' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:00:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/134544278' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:00:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/760789946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.126 2 DEBUG oslo_concurrency.processutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.128 2 DEBUG nova.virt.libvirt.vif [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-303012336',display_name='tempest-ServersAdminTestJSON-server-303012336',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-303012336',id=36,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-9e7dvtz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:00:01Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=6b630da6-e65a-48aa-9559-1d59beb73a93,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f92053b9-1a42-46c2-9abc-77adb8210c62", "address": "fa:16:3e:ef:4e:2c", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92053b9-1a", "ovs_interfaceid": "f92053b9-1a42-46c2-9abc-77adb8210c62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.128 2 DEBUG nova.network.os_vif_util [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "f92053b9-1a42-46c2-9abc-77adb8210c62", "address": "fa:16:3e:ef:4e:2c", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92053b9-1a", "ovs_interfaceid": "f92053b9-1a42-46c2-9abc-77adb8210c62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.129 2 DEBUG nova.network.os_vif_util [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:4e:2c,bridge_name='br-int',has_traffic_filtering=True,id=f92053b9-1a42-46c2-9abc-77adb8210c62,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf92053b9-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.133 2 DEBUG nova.objects.instance [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b630da6-e65a-48aa-9559-1d59beb73a93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.149 2 DEBUG nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:00:06 compute-0 nova_compute[259627]:   <uuid>6b630da6-e65a-48aa-9559-1d59beb73a93</uuid>
Oct 14 09:00:06 compute-0 nova_compute[259627]:   <name>instance-00000024</name>
Oct 14 09:00:06 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:00:06 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:00:06 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersAdminTestJSON-server-303012336</nova:name>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:00:04</nova:creationTime>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:00:06 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:00:06 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:00:06 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:00:06 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:00:06 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:00:06 compute-0 nova_compute[259627]:         <nova:user uuid="56001fe1c9fc432e923f8c57058754db">tempest-ServersAdminTestJSON-276167539-project-member</nova:user>
Oct 14 09:00:06 compute-0 nova_compute[259627]:         <nova:project uuid="ed7ee17abdbe419cb7d7fd0da2cd2068">tempest-ServersAdminTestJSON-276167539</nova:project>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:00:06 compute-0 nova_compute[259627]:         <nova:port uuid="f92053b9-1a42-46c2-9abc-77adb8210c62">
Oct 14 09:00:06 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:00:06 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:00:06 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <system>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <entry name="serial">6b630da6-e65a-48aa-9559-1d59beb73a93</entry>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <entry name="uuid">6b630da6-e65a-48aa-9559-1d59beb73a93</entry>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     </system>
Oct 14 09:00:06 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:00:06 compute-0 nova_compute[259627]:   <os>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:   </os>
Oct 14 09:00:06 compute-0 nova_compute[259627]:   <features>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:   </features>
Oct 14 09:00:06 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:00:06 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:00:06 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/6b630da6-e65a-48aa-9559-1d59beb73a93_disk">
Oct 14 09:00:06 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:06 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/6b630da6-e65a-48aa-9559-1d59beb73a93_disk.config">
Oct 14 09:00:06 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:06 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:ef:4e:2c"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <target dev="tapf92053b9-1a"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/6b630da6-e65a-48aa-9559-1d59beb73a93/console.log" append="off"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <video>
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     </video>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:00:06 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:00:06 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:00:06 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:00:06 compute-0 nova_compute[259627]: </domain>
Oct 14 09:00:06 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.150 2 DEBUG nova.compute.manager [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Preparing to wait for external event network-vif-plugged-f92053b9-1a42-46c2-9abc-77adb8210c62 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.150 2 DEBUG oslo_concurrency.lockutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "6b630da6-e65a-48aa-9559-1d59beb73a93-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.150 2 DEBUG oslo_concurrency.lockutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "6b630da6-e65a-48aa-9559-1d59beb73a93-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.151 2 DEBUG oslo_concurrency.lockutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "6b630da6-e65a-48aa-9559-1d59beb73a93-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.152 2 DEBUG nova.virt.libvirt.vif [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-303012336',display_name='tempest-ServersAdminTestJSON-server-303012336',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-303012336',id=36,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-9e7dvtz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:00:01Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=6b630da6-e65a-48aa-9559-1d59beb73a93,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f92053b9-1a42-46c2-9abc-77adb8210c62", "address": "fa:16:3e:ef:4e:2c", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92053b9-1a", "ovs_interfaceid": "f92053b9-1a42-46c2-9abc-77adb8210c62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.152 2 DEBUG nova.network.os_vif_util [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "f92053b9-1a42-46c2-9abc-77adb8210c62", "address": "fa:16:3e:ef:4e:2c", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92053b9-1a", "ovs_interfaceid": "f92053b9-1a42-46c2-9abc-77adb8210c62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.153 2 DEBUG nova.network.os_vif_util [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:4e:2c,bridge_name='br-int',has_traffic_filtering=True,id=f92053b9-1a42-46c2-9abc-77adb8210c62,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf92053b9-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.153 2 DEBUG os_vif [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:4e:2c,bridge_name='br-int',has_traffic_filtering=True,id=f92053b9-1a42-46c2-9abc-77adb8210c62,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf92053b9-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.154 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.155 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.160 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf92053b9-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.161 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf92053b9-1a, col_values=(('external_ids', {'iface-id': 'f92053b9-1a42-46c2-9abc-77adb8210c62', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:4e:2c', 'vm-uuid': '6b630da6-e65a-48aa-9559-1d59beb73a93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:06 compute-0 NetworkManager[44885]: <info>  [1760432406.1971] manager: (tapf92053b9-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.200 2 DEBUG nova.compute.manager [None req-157943ec-bc99-4ada-8315-05beaaa5bded 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.203 2 INFO os_vif [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:4e:2c,bridge_name='br-int',has_traffic_filtering=True,id=f92053b9-1a42-46c2-9abc-77adb8210c62,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf92053b9-1a')
Oct 14 09:00:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:00:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3248272124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.264 2 INFO nova.compute.manager [None req-157943ec-bc99-4ada-8315-05beaaa5bded 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] instance snapshotting
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.276 2 DEBUG nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.276 2 DEBUG nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.276 2 DEBUG nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No VIF found with MAC fa:16:3e:ef:4e:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.277 2 INFO nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Using config drive
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.298 2 DEBUG nova.storage.rbd_utils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 6b630da6-e65a-48aa-9559-1d59beb73a93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.306 2 DEBUG oslo_concurrency.processutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.312 2 DEBUG nova.compute.provider_tree [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.332 2 DEBUG nova.scheduler.client.report [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.371 2 DEBUG oslo_concurrency.lockutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.372 2 DEBUG nova.compute.manager [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.437 2 DEBUG nova.compute.manager [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.438 2 DEBUG nova.network.neutron [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.455 2 INFO nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.472 2 DEBUG nova.compute.manager [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.555 2 DEBUG nova.compute.manager [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.556 2 DEBUG nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.556 2 INFO nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Creating image(s)
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.580 2 DEBUG nova.storage.rbd_utils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] rbd image 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.619 2 DEBUG nova.storage.rbd_utils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] rbd image 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.644 2 DEBUG nova.storage.rbd_utils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] rbd image 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.648 2 DEBUG oslo_concurrency.processutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.682 2 INFO nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Creating config drive at /var/lib/nova/instances/6b630da6-e65a-48aa-9559-1d59beb73a93/disk.config
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.687 2 DEBUG oslo_concurrency.processutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b630da6-e65a-48aa-9559-1d59beb73a93/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg99oys56 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.719 2 DEBUG nova.policy [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '91f3c667c5b24ffa9e97a07a6cfa768f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a8b21807ca224c1daf381f45c9748d90', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.730 2 DEBUG oslo_concurrency.processutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.730 2 DEBUG oslo_concurrency.lockutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.731 2 DEBUG oslo_concurrency.lockutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.731 2 DEBUG oslo_concurrency.lockutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.751 2 DEBUG nova.storage.rbd_utils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] rbd image 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.754 2 DEBUG oslo_concurrency.processutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.779 2 INFO nova.virt.libvirt.driver [None req-157943ec-bc99-4ada-8315-05beaaa5bded 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Beginning live snapshot process
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.826 2 DEBUG oslo_concurrency.processutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b630da6-e65a-48aa-9559-1d59beb73a93/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg99oys56" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.861 2 DEBUG nova.storage.rbd_utils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 6b630da6-e65a-48aa-9559-1d59beb73a93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.866 2 DEBUG oslo_concurrency.processutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6b630da6-e65a-48aa-9559-1d59beb73a93/disk.config 6b630da6-e65a-48aa-9559-1d59beb73a93_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:06 compute-0 ceph-mon[74249]: pgmap v1270: 305 pgs: 305 active+clean; 260 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.8 MiB/s wr, 385 op/s
Oct 14 09:00:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/760789946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3248272124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:06 compute-0 nova_compute[259627]: 2025-10-14 09:00:06.989 2 DEBUG nova.virt.libvirt.imagebackend [None req-157943ec-bc99-4ada-8315-05beaaa5bded 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.004 2 DEBUG oslo_concurrency.processutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:07.018 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:07.018 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:07.019 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.053 2 DEBUG nova.storage.rbd_utils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] resizing rbd image 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.098 2 DEBUG oslo_concurrency.processutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6b630da6-e65a-48aa-9559-1d59beb73a93/disk.config 6b630da6-e65a-48aa-9559-1d59beb73a93_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.099 2 INFO nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Deleting local config drive /var/lib/nova/instances/6b630da6-e65a-48aa-9559-1d59beb73a93/disk.config because it was imported into RBD.
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.115 2 DEBUG nova.network.neutron [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Updated VIF entry in instance network info cache for port f92053b9-1a42-46c2-9abc-77adb8210c62. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.115 2 DEBUG nova.network.neutron [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Updating instance_info_cache with network_info: [{"id": "f92053b9-1a42-46c2-9abc-77adb8210c62", "address": "fa:16:3e:ef:4e:2c", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92053b9-1a", "ovs_interfaceid": "f92053b9-1a42-46c2-9abc-77adb8210c62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:07 compute-0 kernel: tapf92053b9-1a: entered promiscuous mode
Oct 14 09:00:07 compute-0 NetworkManager[44885]: <info>  [1760432407.1393] manager: (tapf92053b9-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/146)
Oct 14 09:00:07 compute-0 ovn_controller[152662]: 2025-10-14T09:00:07Z|00294|binding|INFO|Claiming lport f92053b9-1a42-46c2-9abc-77adb8210c62 for this chassis.
Oct 14 09:00:07 compute-0 ovn_controller[152662]: 2025-10-14T09:00:07Z|00295|binding|INFO|f92053b9-1a42-46c2-9abc-77adb8210c62: Claiming fa:16:3e:ef:4e:2c 10.100.0.11
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:07.149 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:4e:2c 10.100.0.11'], port_security=['fa:16:3e:ef:4e:2c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6b630da6-e65a-48aa-9559-1d59beb73a93', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a616a5a0-dd86-4326-bbdf-7cf172de843b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1013e89-bd11-44b1-be74-a5ce3b4c520f, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f92053b9-1a42-46c2-9abc-77adb8210c62) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:07.150 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f92053b9-1a42-46c2-9abc-77adb8210c62 in datapath ea0c857a-d31a-43a0-b285-c89c1ddc920a bound to our chassis
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.152 2 DEBUG oslo_concurrency.lockutils [req-4f101a51-38f5-44e9-ad8a-fa3ff22b8423 req-864aaafc-50d6-4bce-b688-3b2410844336 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-6b630da6-e65a-48aa-9559-1d59beb73a93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:07.157 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ea0c857a-d31a-43a0-b285-c89c1ddc920a
Oct 14 09:00:07 compute-0 ovn_controller[152662]: 2025-10-14T09:00:07Z|00296|binding|INFO|Setting lport f92053b9-1a42-46c2-9abc-77adb8210c62 ovn-installed in OVS
Oct 14 09:00:07 compute-0 ovn_controller[152662]: 2025-10-14T09:00:07Z|00297|binding|INFO|Setting lport f92053b9-1a42-46c2-9abc-77adb8210c62 up in Southbound
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:07 compute-0 systemd-machined[214636]: New machine qemu-41-instance-00000024.
Oct 14 09:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:07.177 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7de6b24b-4081-4707-94be-f3c72506b1e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:07 compute-0 systemd[1]: Started Virtual Machine qemu-41-instance-00000024.
Oct 14 09:00:07 compute-0 systemd-udevd[300731]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:07.217 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8c5726df-832c-4454-b1c1-d06446ee00fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.219 2 DEBUG nova.storage.rbd_utils [None req-157943ec-bc99-4ada-8315-05beaaa5bded 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] creating snapshot(9d107c98d0ea4da48eda7575da3a0599) on rbd image(6de921d2-e251-431d-9333-bae44aa81859_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:07.220 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b2bc14-ceb5-4bbd-a449-8db23cdb41e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:07 compute-0 NetworkManager[44885]: <info>  [1760432407.2216] device (tapf92053b9-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:00:07 compute-0 NetworkManager[44885]: <info>  [1760432407.2235] device (tapf92053b9-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:07.263 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a5bd5778-055b-447e-b07e-59df2df296b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:07.283 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5fcafea1-8766-4884-8ba5-7887bc116640]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea0c857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:51:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616925, 'reachable_time': 21171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300762, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:07.301 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[060157c8-6b0b-4de6-af34-7be0b1dde768]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616940, 'tstamp': 616940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300765, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616944, 'tstamp': 616944}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300765, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:07.304 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea0c857a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:07.308 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea0c857a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:07.308 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:07.309 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapea0c857a-d0, col_values=(('external_ids', {'iface-id': '6baedd76-8a05-42d6-8356-18b586f58672'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:07.309 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.314 2 DEBUG nova.objects.instance [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lazy-loading 'migration_context' on Instance uuid 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.345 2 DEBUG nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.345 2 DEBUG nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Ensure instance console log exists: /var/lib/nova/instances/24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.346 2 DEBUG oslo_concurrency.lockutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.346 2 DEBUG oslo_concurrency.lockutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.346 2 DEBUG oslo_concurrency.lockutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.388 2 DEBUG nova.compute.manager [req-d9555df9-c4e7-47b5-9b04-4e43da33b252 req-dba9cf7f-53f1-43d3-ac0a-3c3113fddb36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Received event network-vif-plugged-f92053b9-1a42-46c2-9abc-77adb8210c62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.388 2 DEBUG oslo_concurrency.lockutils [req-d9555df9-c4e7-47b5-9b04-4e43da33b252 req-dba9cf7f-53f1-43d3-ac0a-3c3113fddb36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6b630da6-e65a-48aa-9559-1d59beb73a93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.389 2 DEBUG oslo_concurrency.lockutils [req-d9555df9-c4e7-47b5-9b04-4e43da33b252 req-dba9cf7f-53f1-43d3-ac0a-3c3113fddb36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b630da6-e65a-48aa-9559-1d59beb73a93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.389 2 DEBUG oslo_concurrency.lockutils [req-d9555df9-c4e7-47b5-9b04-4e43da33b252 req-dba9cf7f-53f1-43d3-ac0a-3c3113fddb36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b630da6-e65a-48aa-9559-1d59beb73a93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.389 2 DEBUG nova.compute.manager [req-d9555df9-c4e7-47b5-9b04-4e43da33b252 req-dba9cf7f-53f1-43d3-ac0a-3c3113fddb36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Processing event network-vif-plugged-f92053b9-1a42-46c2-9abc-77adb8210c62 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:00:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:00:07 compute-0 ovn_controller[152662]: 2025-10-14T09:00:07Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:11:58 10.100.0.5
Oct 14 09:00:07 compute-0 ovn_controller[152662]: 2025-10-14T09:00:07Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:11:58 10.100.0.5
Oct 14 09:00:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 260 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.8 MiB/s wr, 383 op/s
Oct 14 09:00:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct 14 09:00:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct 14 09:00:07 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 14 09:00:07 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct 14 09:00:07 compute-0 nova_compute[259627]: 2025-10-14 09:00:07.994 2 DEBUG nova.network.neutron [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Successfully created port: 5163f251-371c-409d-9510-db3f0c358877 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.059 2 DEBUG nova.storage.rbd_utils [None req-157943ec-bc99-4ada-8315-05beaaa5bded 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] cloning vms/6de921d2-e251-431d-9333-bae44aa81859_disk@9d107c98d0ea4da48eda7575da3a0599 to images/76c5459a-f996-4fca-ad51-ec29f6551489 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:00:08 compute-0 ovn_controller[152662]: 2025-10-14T09:00:08Z|00298|binding|INFO|Releasing lport 6baedd76-8a05-42d6-8356-18b586f58672 from this chassis (sb_readonly=0)
Oct 14 09:00:08 compute-0 ovn_controller[152662]: 2025-10-14T09:00:08Z|00299|binding|INFO|Releasing lport 0616bbde-729a-4cd4-ba39-5fcdf59ece5e from this chassis (sb_readonly=0)
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.226 2 DEBUG nova.storage.rbd_utils [None req-157943ec-bc99-4ada-8315-05beaaa5bded 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] flattening images/76c5459a-f996-4fca-ad51-ec29f6551489 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.471 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432408.4607966, 6b630da6-e65a-48aa-9559-1d59beb73a93 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.472 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] VM Started (Lifecycle Event)
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.473 2 DEBUG nova.compute.manager [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.482 2 DEBUG nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.489 2 DEBUG nova.storage.rbd_utils [None req-157943ec-bc99-4ada-8315-05beaaa5bded 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] removing snapshot(9d107c98d0ea4da48eda7575da3a0599) on rbd image(6de921d2-e251-431d-9333-bae44aa81859_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.492 2 INFO nova.virt.libvirt.driver [-] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Instance spawned successfully.
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.492 2 DEBUG nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.494 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.497 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.521 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.521 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432408.4609928, 6b630da6-e65a-48aa-9559-1d59beb73a93 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.522 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] VM Paused (Lifecycle Event)
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.525 2 DEBUG nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.526 2 DEBUG nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.526 2 DEBUG nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.527 2 DEBUG nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.527 2 DEBUG nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.528 2 DEBUG nova.virt.libvirt.driver [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.561 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.565 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432408.4758136, 6b630da6-e65a-48aa-9559-1d59beb73a93 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.565 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] VM Resumed (Lifecycle Event)
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.588 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.593 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.602 2 INFO nova.compute.manager [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Took 7.10 seconds to spawn the instance on the hypervisor.
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.602 2 DEBUG nova.compute.manager [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.611 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.658 2 INFO nova.compute.manager [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Took 8.25 seconds to build instance.
Oct 14 09:00:08 compute-0 nova_compute[259627]: 2025-10-14 09:00:08.674 2 DEBUG oslo_concurrency.lockutils [None req-40304f5f-f391-4bbc-a937-eb672cb2120f 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "6b630da6-e65a-48aa-9559-1d59beb73a93" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.336s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:08 compute-0 sudo[300880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:00:08 compute-0 sudo[300880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:00:08 compute-0 sudo[300880]: pam_unix(sudo:session): session closed for user root
Oct 14 09:00:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Oct 14 09:00:08 compute-0 ceph-mon[74249]: pgmap v1271: 305 pgs: 305 active+clean; 260 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.8 MiB/s wr, 383 op/s
Oct 14 09:00:08 compute-0 ceph-mon[74249]: osdmap e154: 3 total, 3 up, 3 in
Oct 14 09:00:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Oct 14 09:00:09 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Oct 14 09:00:09 compute-0 sudo[300905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:00:09 compute-0 sudo[300905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:00:09 compute-0 nova_compute[259627]: 2025-10-14 09:00:09.066 2 DEBUG nova.storage.rbd_utils [None req-157943ec-bc99-4ada-8315-05beaaa5bded 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] creating snapshot(snap) on rbd image(76c5459a-f996-4fca-ad51-ec29f6551489) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:00:09 compute-0 sudo[300905]: pam_unix(sudo:session): session closed for user root
Oct 14 09:00:09 compute-0 sudo[300930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:00:09 compute-0 sudo[300930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:00:09 compute-0 sudo[300930]: pam_unix(sudo:session): session closed for user root
Oct 14 09:00:09 compute-0 sudo[300973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:00:09 compute-0 sudo[300973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:00:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 260 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.9 MiB/s wr, 335 op/s
Oct 14 09:00:09 compute-0 sudo[300973]: pam_unix(sudo:session): session closed for user root
Oct 14 09:00:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:00:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:00:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:00:09 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:00:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:00:09 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:00:09 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 9da5366f-96a5-4d01-86f1-20763419aa8a does not exist
Oct 14 09:00:09 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 43bf2002-346f-48f2-8dd0-ae8c0f5ac8f5 does not exist
Oct 14 09:00:09 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 43a21cb5-71f8-4633-b340-772bc383dc5c does not exist
Oct 14 09:00:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:00:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:00:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:00:09 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:00:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:00:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:00:09 compute-0 sudo[301028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:00:09 compute-0 sudo[301028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:00:09 compute-0 sudo[301028]: pam_unix(sudo:session): session closed for user root
Oct 14 09:00:09 compute-0 sudo[301053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:00:09 compute-0 sudo[301053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:00:09 compute-0 sudo[301053]: pam_unix(sudo:session): session closed for user root
Oct 14 09:00:09 compute-0 sudo[301078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:00:09 compute-0 sudo[301078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:00:09 compute-0 sudo[301078]: pam_unix(sudo:session): session closed for user root
Oct 14 09:00:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Oct 14 09:00:10 compute-0 ceph-mon[74249]: osdmap e155: 3 total, 3 up, 3 in
Oct 14 09:00:10 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:00:10 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:00:10 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:00:10 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:00:10 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:00:10 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:00:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Oct 14 09:00:10 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Oct 14 09:00:10 compute-0 sudo[301103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:00:10 compute-0 sudo[301103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:00:10 compute-0 nova_compute[259627]: 2025-10-14 09:00:10.156 2 DEBUG nova.network.neutron [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Successfully updated port: 5163f251-371c-409d-9510-db3f0c358877 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:00:10 compute-0 nova_compute[259627]: 2025-10-14 09:00:10.180 2 DEBUG oslo_concurrency.lockutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Acquiring lock "refresh_cache-24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:00:10 compute-0 nova_compute[259627]: 2025-10-14 09:00:10.180 2 DEBUG oslo_concurrency.lockutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Acquired lock "refresh_cache-24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:00:10 compute-0 nova_compute[259627]: 2025-10-14 09:00:10.180 2 DEBUG nova.network.neutron [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:00:10 compute-0 nova_compute[259627]: 2025-10-14 09:00:10.212 2 DEBUG nova.compute.manager [req-619d18e4-a0c5-41d6-bcdf-47d954f5f561 req-1fe1e88b-b1e2-4da4-94ed-e57b85123a0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Received event network-vif-plugged-f92053b9-1a42-46c2-9abc-77adb8210c62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:10 compute-0 nova_compute[259627]: 2025-10-14 09:00:10.212 2 DEBUG oslo_concurrency.lockutils [req-619d18e4-a0c5-41d6-bcdf-47d954f5f561 req-1fe1e88b-b1e2-4da4-94ed-e57b85123a0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6b630da6-e65a-48aa-9559-1d59beb73a93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:10 compute-0 nova_compute[259627]: 2025-10-14 09:00:10.213 2 DEBUG oslo_concurrency.lockutils [req-619d18e4-a0c5-41d6-bcdf-47d954f5f561 req-1fe1e88b-b1e2-4da4-94ed-e57b85123a0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b630da6-e65a-48aa-9559-1d59beb73a93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:10 compute-0 nova_compute[259627]: 2025-10-14 09:00:10.213 2 DEBUG oslo_concurrency.lockutils [req-619d18e4-a0c5-41d6-bcdf-47d954f5f561 req-1fe1e88b-b1e2-4da4-94ed-e57b85123a0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b630da6-e65a-48aa-9559-1d59beb73a93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:10 compute-0 nova_compute[259627]: 2025-10-14 09:00:10.213 2 DEBUG nova.compute.manager [req-619d18e4-a0c5-41d6-bcdf-47d954f5f561 req-1fe1e88b-b1e2-4da4-94ed-e57b85123a0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] No waiting events found dispatching network-vif-plugged-f92053b9-1a42-46c2-9abc-77adb8210c62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:10 compute-0 nova_compute[259627]: 2025-10-14 09:00:10.213 2 WARNING nova.compute.manager [req-619d18e4-a0c5-41d6-bcdf-47d954f5f561 req-1fe1e88b-b1e2-4da4-94ed-e57b85123a0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Received unexpected event network-vif-plugged-f92053b9-1a42-46c2-9abc-77adb8210c62 for instance with vm_state active and task_state None.
Oct 14 09:00:10 compute-0 nova_compute[259627]: 2025-10-14 09:00:10.325 2 DEBUG nova.compute.manager [req-780f9f2f-16cc-4dd7-9f9a-7065257198ef req-9254b9bb-7a78-4c3d-8168-548278ecfd73 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Received event network-changed-5163f251-371c-409d-9510-db3f0c358877 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:10 compute-0 nova_compute[259627]: 2025-10-14 09:00:10.325 2 DEBUG nova.compute.manager [req-780f9f2f-16cc-4dd7-9f9a-7065257198ef req-9254b9bb-7a78-4c3d-8168-548278ecfd73 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Refreshing instance network info cache due to event network-changed-5163f251-371c-409d-9510-db3f0c358877. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:00:10 compute-0 nova_compute[259627]: 2025-10-14 09:00:10.325 2 DEBUG oslo_concurrency.lockutils [req-780f9f2f-16cc-4dd7-9f9a-7065257198ef req-9254b9bb-7a78-4c3d-8168-548278ecfd73 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:00:10 compute-0 podman[301167]: 2025-10-14 09:00:10.45830569 +0000 UTC m=+0.040320932 container create 06894fd58604bb8c138dac92d447aae5037d90c3d09ff67ab50287962ce5e1e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_torvalds, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:00:10 compute-0 systemd[1]: Started libpod-conmon-06894fd58604bb8c138dac92d447aae5037d90c3d09ff67ab50287962ce5e1e3.scope.
Oct 14 09:00:10 compute-0 nova_compute[259627]: 2025-10-14 09:00:10.497 2 DEBUG nova.network.neutron [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:00:10 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:00:10 compute-0 podman[301167]: 2025-10-14 09:00:10.441260011 +0000 UTC m=+0.023275273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:00:10 compute-0 podman[301167]: 2025-10-14 09:00:10.548789451 +0000 UTC m=+0.130804713 container init 06894fd58604bb8c138dac92d447aae5037d90c3d09ff67ab50287962ce5e1e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:00:10 compute-0 podman[301167]: 2025-10-14 09:00:10.556815249 +0000 UTC m=+0.138830491 container start 06894fd58604bb8c138dac92d447aae5037d90c3d09ff67ab50287962ce5e1e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_torvalds, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 09:00:10 compute-0 podman[301167]: 2025-10-14 09:00:10.559685299 +0000 UTC m=+0.141700561 container attach 06894fd58604bb8c138dac92d447aae5037d90c3d09ff67ab50287962ce5e1e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_torvalds, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 09:00:10 compute-0 systemd[1]: libpod-06894fd58604bb8c138dac92d447aae5037d90c3d09ff67ab50287962ce5e1e3.scope: Deactivated successfully.
Oct 14 09:00:10 compute-0 nostalgic_torvalds[301183]: 167 167
Oct 14 09:00:10 compute-0 conmon[301183]: conmon 06894fd58604bb8c138d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06894fd58604bb8c138dac92d447aae5037d90c3d09ff67ab50287962ce5e1e3.scope/container/memory.events
Oct 14 09:00:10 compute-0 podman[301188]: 2025-10-14 09:00:10.609594005 +0000 UTC m=+0.030369777 container died 06894fd58604bb8c138dac92d447aae5037d90c3d09ff67ab50287962ce5e1e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 09:00:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bc1c49a8d14e705f1f2b2e740ffba293d2eca029ef64751b06666e6a29b5d9d-merged.mount: Deactivated successfully.
Oct 14 09:00:10 compute-0 podman[301188]: 2025-10-14 09:00:10.645783113 +0000 UTC m=+0.066558855 container remove 06894fd58604bb8c138dac92d447aae5037d90c3d09ff67ab50287962ce5e1e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_torvalds, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 14 09:00:10 compute-0 systemd[1]: libpod-conmon-06894fd58604bb8c138dac92d447aae5037d90c3d09ff67ab50287962ce5e1e3.scope: Deactivated successfully.
Oct 14 09:00:10 compute-0 podman[301210]: 2025-10-14 09:00:10.847415945 +0000 UTC m=+0.055904384 container create 240e6633c33de47cee2487e2105ae299f1af012288a5ed315b540e77f05a9666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 14 09:00:10 compute-0 systemd[1]: Started libpod-conmon-240e6633c33de47cee2487e2105ae299f1af012288a5ed315b540e77f05a9666.scope.
Oct 14 09:00:10 compute-0 podman[301210]: 2025-10-14 09:00:10.828762097 +0000 UTC m=+0.037250586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:00:10 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:00:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04943132d86820968c65ef373c652aaa02b9c2d23c81699da34805901833b656/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:00:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04943132d86820968c65ef373c652aaa02b9c2d23c81699da34805901833b656/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:00:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04943132d86820968c65ef373c652aaa02b9c2d23c81699da34805901833b656/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:00:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04943132d86820968c65ef373c652aaa02b9c2d23c81699da34805901833b656/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:00:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04943132d86820968c65ef373c652aaa02b9c2d23c81699da34805901833b656/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:00:10 compute-0 podman[301210]: 2025-10-14 09:00:10.993051721 +0000 UTC m=+0.201540180 container init 240e6633c33de47cee2487e2105ae299f1af012288a5ed315b540e77f05a9666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:00:10 compute-0 podman[301210]: 2025-10-14 09:00:10.999674823 +0000 UTC m=+0.208163252 container start 240e6633c33de47cee2487e2105ae299f1af012288a5ed315b540e77f05a9666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 09:00:11 compute-0 podman[301210]: 2025-10-14 09:00:11.003394275 +0000 UTC m=+0.211882744 container attach 240e6633c33de47cee2487e2105ae299f1af012288a5ed315b540e77f05a9666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bassi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 09:00:11 compute-0 ceph-mon[74249]: pgmap v1274: 305 pgs: 305 active+clean; 260 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.9 MiB/s wr, 335 op/s
Oct 14 09:00:11 compute-0 ceph-mon[74249]: osdmap e156: 3 total, 3 up, 3 in
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 385 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 9.9 MiB/s rd, 11 MiB/s wr, 496 op/s
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.648 2 DEBUG nova.network.neutron [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Updating instance_info_cache with network_info: [{"id": "5163f251-371c-409d-9510-db3f0c358877", "address": "fa:16:3e:7d:2f:aa", "network": {"id": "0bdbc6e1-68cf-4934-97c2-16531fbc212d", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-459284180-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8b21807ca224c1daf381f45c9748d90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5163f251-37", "ovs_interfaceid": "5163f251-371c-409d-9510-db3f0c358877", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.679 2 DEBUG oslo_concurrency.lockutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Releasing lock "refresh_cache-24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.680 2 DEBUG nova.compute.manager [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Instance network_info: |[{"id": "5163f251-371c-409d-9510-db3f0c358877", "address": "fa:16:3e:7d:2f:aa", "network": {"id": "0bdbc6e1-68cf-4934-97c2-16531fbc212d", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-459284180-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8b21807ca224c1daf381f45c9748d90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5163f251-37", "ovs_interfaceid": "5163f251-371c-409d-9510-db3f0c358877", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.680 2 DEBUG oslo_concurrency.lockutils [req-780f9f2f-16cc-4dd7-9f9a-7065257198ef req-9254b9bb-7a78-4c3d-8168-548278ecfd73 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.680 2 DEBUG nova.network.neutron [req-780f9f2f-16cc-4dd7-9f9a-7065257198ef req-9254b9bb-7a78-4c3d-8168-548278ecfd73 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Refreshing network info cache for port 5163f251-371c-409d-9510-db3f0c358877 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.684 2 DEBUG nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Start _get_guest_xml network_info=[{"id": "5163f251-371c-409d-9510-db3f0c358877", "address": "fa:16:3e:7d:2f:aa", "network": {"id": "0bdbc6e1-68cf-4934-97c2-16531fbc212d", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-459284180-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8b21807ca224c1daf381f45c9748d90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5163f251-37", "ovs_interfaceid": "5163f251-371c-409d-9510-db3f0c358877", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.691 2 WARNING nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.701 2 DEBUG nova.virt.libvirt.host [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.702 2 DEBUG nova.virt.libvirt.host [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.705 2 DEBUG nova.virt.libvirt.host [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.706 2 DEBUG nova.virt.libvirt.host [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.706 2 DEBUG nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.706 2 DEBUG nova.virt.hardware [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.707 2 DEBUG nova.virt.hardware [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.707 2 DEBUG nova.virt.hardware [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.707 2 DEBUG nova.virt.hardware [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.708 2 DEBUG nova.virt.hardware [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.708 2 DEBUG nova.virt.hardware [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.708 2 DEBUG nova.virt.hardware [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.708 2 DEBUG nova.virt.hardware [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.708 2 DEBUG nova.virt.hardware [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.709 2 DEBUG nova.virt.hardware [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.709 2 DEBUG nova.virt.hardware [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.712 2 DEBUG oslo_concurrency.processutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.972 2 INFO nova.virt.libvirt.driver [None req-157943ec-bc99-4ada-8315-05beaaa5bded 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Snapshot image upload complete
Oct 14 09:00:11 compute-0 nova_compute[259627]: 2025-10-14 09:00:11.973 2 INFO nova.compute.manager [None req-157943ec-bc99-4ada-8315-05beaaa5bded 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Took 5.71 seconds to snapshot the instance on the hypervisor.
Oct 14 09:00:12 compute-0 gracious_bassi[301226]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:00:12 compute-0 gracious_bassi[301226]: --> relative data size: 1.0
Oct 14 09:00:12 compute-0 gracious_bassi[301226]: --> All data devices are unavailable
Oct 14 09:00:12 compute-0 systemd[1]: libpod-240e6633c33de47cee2487e2105ae299f1af012288a5ed315b540e77f05a9666.scope: Deactivated successfully.
Oct 14 09:00:12 compute-0 systemd[1]: libpod-240e6633c33de47cee2487e2105ae299f1af012288a5ed315b540e77f05a9666.scope: Consumed 1.030s CPU time.
Oct 14 09:00:12 compute-0 podman[301275]: 2025-10-14 09:00:12.181658098 +0000 UTC m=+0.043230493 container died 240e6633c33de47cee2487e2105ae299f1af012288a5ed315b540e77f05a9666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bassi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:00:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-04943132d86820968c65ef373c652aaa02b9c2d23c81699da34805901833b656-merged.mount: Deactivated successfully.
Oct 14 09:00:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3976660368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:12 compute-0 podman[301282]: 2025-10-14 09:00:12.2306195 +0000 UTC m=+0.068801861 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.239 2 DEBUG oslo_concurrency.processutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:12 compute-0 podman[301275]: 2025-10-14 09:00:12.245873864 +0000 UTC m=+0.107446239 container remove 240e6633c33de47cee2487e2105ae299f1af012288a5ed315b540e77f05a9666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bassi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:00:12 compute-0 systemd[1]: libpod-conmon-240e6633c33de47cee2487e2105ae299f1af012288a5ed315b540e77f05a9666.scope: Deactivated successfully.
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.281 2 DEBUG nova.storage.rbd_utils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] rbd image 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:12 compute-0 podman[301276]: 2025-10-14 09:00:12.287316492 +0000 UTC m=+0.126095217 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.287 2 DEBUG oslo_concurrency.processutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:12 compute-0 sudo[301103]: pam_unix(sudo:session): session closed for user root
Oct 14 09:00:12 compute-0 sudo[301356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:00:12 compute-0 sudo[301356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:00:12 compute-0 sudo[301356]: pam_unix(sudo:session): session closed for user root
Oct 14 09:00:12 compute-0 sudo[301381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:00:12 compute-0 sudo[301381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:00:12 compute-0 sudo[301381]: pam_unix(sudo:session): session closed for user root
Oct 14 09:00:12 compute-0 sudo[301425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:00:12 compute-0 sudo[301425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:00:12 compute-0 sudo[301425]: pam_unix(sudo:session): session closed for user root
Oct 14 09:00:12 compute-0 sudo[301450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:00:12 compute-0 sudo[301450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:00:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:00:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Oct 14 09:00:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Oct 14 09:00:12 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Oct 14 09:00:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1331643089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.722 2 DEBUG oslo_concurrency.processutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.726 2 DEBUG nova.virt.libvirt.vif [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:00:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-1347232807',display_name='tempest-ServerPasswordTestJSON-server-1347232807',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-1347232807',id=37,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a8b21807ca224c1daf381f45c9748d90',ramdisk_id='',reservation_id='r-w20njkep',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-484472080',owner_user_name='tempest-ServerPasswordTestJSON-484472080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:00:06Z,user_data=None,user_id='91f3c667c5b24ffa9e97a07a6cfa768f',uuid=24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5163f251-371c-409d-9510-db3f0c358877", "address": "fa:16:3e:7d:2f:aa", "network": {"id": "0bdbc6e1-68cf-4934-97c2-16531fbc212d", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-459284180-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8b21807ca224c1daf381f45c9748d90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5163f251-37", "ovs_interfaceid": "5163f251-371c-409d-9510-db3f0c358877", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.727 2 DEBUG nova.network.os_vif_util [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Converting VIF {"id": "5163f251-371c-409d-9510-db3f0c358877", "address": "fa:16:3e:7d:2f:aa", "network": {"id": "0bdbc6e1-68cf-4934-97c2-16531fbc212d", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-459284180-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8b21807ca224c1daf381f45c9748d90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5163f251-37", "ovs_interfaceid": "5163f251-371c-409d-9510-db3f0c358877", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.728 2 DEBUG nova.network.os_vif_util [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:2f:aa,bridge_name='br-int',has_traffic_filtering=True,id=5163f251-371c-409d-9510-db3f0c358877,network=Network(0bdbc6e1-68cf-4934-97c2-16531fbc212d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5163f251-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.730 2 DEBUG nova.objects.instance [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lazy-loading 'pci_devices' on Instance uuid 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.755 2 DEBUG nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:00:12 compute-0 nova_compute[259627]:   <uuid>24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe</uuid>
Oct 14 09:00:12 compute-0 nova_compute[259627]:   <name>instance-00000025</name>
Oct 14 09:00:12 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:00:12 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:00:12 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerPasswordTestJSON-server-1347232807</nova:name>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:00:11</nova:creationTime>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:00:12 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:00:12 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:00:12 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:00:12 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:00:12 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:00:12 compute-0 nova_compute[259627]:         <nova:user uuid="91f3c667c5b24ffa9e97a07a6cfa768f">tempest-ServerPasswordTestJSON-484472080-project-member</nova:user>
Oct 14 09:00:12 compute-0 nova_compute[259627]:         <nova:project uuid="a8b21807ca224c1daf381f45c9748d90">tempest-ServerPasswordTestJSON-484472080</nova:project>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:00:12 compute-0 nova_compute[259627]:         <nova:port uuid="5163f251-371c-409d-9510-db3f0c358877">
Oct 14 09:00:12 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:00:12 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:00:12 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <system>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <entry name="serial">24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe</entry>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <entry name="uuid">24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe</entry>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     </system>
Oct 14 09:00:12 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:00:12 compute-0 nova_compute[259627]:   <os>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:   </os>
Oct 14 09:00:12 compute-0 nova_compute[259627]:   <features>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:   </features>
Oct 14 09:00:12 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:00:12 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:00:12 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe_disk">
Oct 14 09:00:12 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:12 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe_disk.config">
Oct 14 09:00:12 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:12 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:7d:2f:aa"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <target dev="tap5163f251-37"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe/console.log" append="off"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <video>
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     </video>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:00:12 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:00:12 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:00:12 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:00:12 compute-0 nova_compute[259627]: </domain>
Oct 14 09:00:12 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.761 2 DEBUG nova.compute.manager [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Preparing to wait for external event network-vif-plugged-5163f251-371c-409d-9510-db3f0c358877 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.762 2 DEBUG oslo_concurrency.lockutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Acquiring lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.762 2 DEBUG oslo_concurrency.lockutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.762 2 DEBUG oslo_concurrency.lockutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.763 2 DEBUG nova.virt.libvirt.vif [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:00:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-1347232807',display_name='tempest-ServerPasswordTestJSON-server-1347232807',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-1347232807',id=37,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a8b21807ca224c1daf381f45c9748d90',ramdisk_id='',reservation_id='r-w20njkep',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-484472080',owner_user_name='tempest-ServerPassword
TestJSON-484472080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:00:06Z,user_data=None,user_id='91f3c667c5b24ffa9e97a07a6cfa768f',uuid=24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5163f251-371c-409d-9510-db3f0c358877", "address": "fa:16:3e:7d:2f:aa", "network": {"id": "0bdbc6e1-68cf-4934-97c2-16531fbc212d", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-459284180-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8b21807ca224c1daf381f45c9748d90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5163f251-37", "ovs_interfaceid": "5163f251-371c-409d-9510-db3f0c358877", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.764 2 DEBUG nova.network.os_vif_util [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Converting VIF {"id": "5163f251-371c-409d-9510-db3f0c358877", "address": "fa:16:3e:7d:2f:aa", "network": {"id": "0bdbc6e1-68cf-4934-97c2-16531fbc212d", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-459284180-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8b21807ca224c1daf381f45c9748d90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5163f251-37", "ovs_interfaceid": "5163f251-371c-409d-9510-db3f0c358877", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.765 2 DEBUG nova.network.os_vif_util [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:2f:aa,bridge_name='br-int',has_traffic_filtering=True,id=5163f251-371c-409d-9510-db3f0c358877,network=Network(0bdbc6e1-68cf-4934-97c2-16531fbc212d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5163f251-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.765 2 DEBUG os_vif [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:2f:aa,bridge_name='br-int',has_traffic_filtering=True,id=5163f251-371c-409d-9510-db3f0c358877,network=Network(0bdbc6e1-68cf-4934-97c2-16531fbc212d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5163f251-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.766 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.767 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.772 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5163f251-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.773 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5163f251-37, col_values=(('external_ids', {'iface-id': '5163f251-371c-409d-9510-db3f0c358877', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:2f:aa', 'vm-uuid': '24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:12 compute-0 NetworkManager[44885]: <info>  [1760432412.7802] manager: (tap5163f251-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.793 2 INFO os_vif [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:2f:aa,bridge_name='br-int',has_traffic_filtering=True,id=5163f251-371c-409d-9510-db3f0c358877,network=Network(0bdbc6e1-68cf-4934-97c2-16531fbc212d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5163f251-37')
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.869 2 DEBUG nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.870 2 DEBUG nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.870 2 DEBUG nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] No VIF found with MAC fa:16:3e:7d:2f:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.871 2 INFO nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Using config drive
Oct 14 09:00:12 compute-0 nova_compute[259627]: 2025-10-14 09:00:12.904 2 DEBUG nova.storage.rbd_utils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] rbd image 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:12 compute-0 podman[301520]: 2025-10-14 09:00:12.919586848 +0000 UTC m=+0.053735430 container create 8173656b73a39323c877cf889ba366c38c1dcafb0a83eb7fd29073ac8f475559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:00:12 compute-0 systemd[1]: Started libpod-conmon-8173656b73a39323c877cf889ba366c38c1dcafb0a83eb7fd29073ac8f475559.scope.
Oct 14 09:00:12 compute-0 podman[301520]: 2025-10-14 09:00:12.893247121 +0000 UTC m=+0.027395703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:00:13 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:00:13 compute-0 ceph-mon[74249]: pgmap v1276: 305 pgs: 305 active+clean; 385 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 9.9 MiB/s rd, 11 MiB/s wr, 496 op/s
Oct 14 09:00:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3976660368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:13 compute-0 ceph-mon[74249]: osdmap e157: 3 total, 3 up, 3 in
Oct 14 09:00:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1331643089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:13 compute-0 podman[301520]: 2025-10-14 09:00:13.02511751 +0000 UTC m=+0.159266102 container init 8173656b73a39323c877cf889ba366c38c1dcafb0a83eb7fd29073ac8f475559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 09:00:13 compute-0 podman[301520]: 2025-10-14 09:00:13.036173711 +0000 UTC m=+0.170322313 container start 8173656b73a39323c877cf889ba366c38c1dcafb0a83eb7fd29073ac8f475559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lalande, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:00:13 compute-0 podman[301520]: 2025-10-14 09:00:13.040309263 +0000 UTC m=+0.174457925 container attach 8173656b73a39323c877cf889ba366c38c1dcafb0a83eb7fd29073ac8f475559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 09:00:13 compute-0 trusting_lalande[301555]: 167 167
Oct 14 09:00:13 compute-0 systemd[1]: libpod-8173656b73a39323c877cf889ba366c38c1dcafb0a83eb7fd29073ac8f475559.scope: Deactivated successfully.
Oct 14 09:00:13 compute-0 podman[301520]: 2025-10-14 09:00:13.04631399 +0000 UTC m=+0.180462592 container died 8173656b73a39323c877cf889ba366c38c1dcafb0a83eb7fd29073ac8f475559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 09:00:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-9660ab1e79bfea6120418900cfa81b4cfee3a8d2d529be76d0d5d5d8f5b3c2d5-merged.mount: Deactivated successfully.
Oct 14 09:00:13 compute-0 podman[301520]: 2025-10-14 09:00:13.087474401 +0000 UTC m=+0.221622963 container remove 8173656b73a39323c877cf889ba366c38c1dcafb0a83eb7fd29073ac8f475559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lalande, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 09:00:13 compute-0 systemd[1]: libpod-conmon-8173656b73a39323c877cf889ba366c38c1dcafb0a83eb7fd29073ac8f475559.scope: Deactivated successfully.
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.218 2 INFO nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Creating config drive at /var/lib/nova/instances/24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe/disk.config
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.225 2 DEBUG oslo_concurrency.processutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9lxlq5_7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.268 2 DEBUG oslo_concurrency.lockutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "f921c880-38a7-40b6-8300-2123889a19c6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.270 2 DEBUG oslo_concurrency.lockutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "f921c880-38a7-40b6-8300-2123889a19c6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.293 2 DEBUG nova.compute.manager [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:00:13 compute-0 podman[301578]: 2025-10-14 09:00:13.319342255 +0000 UTC m=+0.056221832 container create e4b3d18a418f56e87f7fedc7a4715f79363b226979bf6b10c1288000fc4e9f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.363 2 DEBUG oslo_concurrency.lockutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.363 2 DEBUG oslo_concurrency.lockutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:13 compute-0 systemd[1]: Started libpod-conmon-e4b3d18a418f56e87f7fedc7a4715f79363b226979bf6b10c1288000fc4e9f80.scope.
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.369 2 DEBUG oslo_concurrency.processutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9lxlq5_7" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:13 compute-0 podman[301578]: 2025-10-14 09:00:13.299886827 +0000 UTC m=+0.036766414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.391 2 DEBUG nova.storage.rbd_utils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] rbd image 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:13 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:00:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b023744f9bcd9d6a6f418d0a3ca59fdf49c08beb1891cb619d1ab6f7cc5cc7cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:00:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b023744f9bcd9d6a6f418d0a3ca59fdf49c08beb1891cb619d1ab6f7cc5cc7cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:00:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b023744f9bcd9d6a6f418d0a3ca59fdf49c08beb1891cb619d1ab6f7cc5cc7cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:00:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b023744f9bcd9d6a6f418d0a3ca59fdf49c08beb1891cb619d1ab6f7cc5cc7cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.404 2 DEBUG oslo_concurrency.processutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe/disk.config 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:13 compute-0 podman[301578]: 2025-10-14 09:00:13.422261522 +0000 UTC m=+0.159141089 container init e4b3d18a418f56e87f7fedc7a4715f79363b226979bf6b10c1288000fc4e9f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:00:13 compute-0 podman[301578]: 2025-10-14 09:00:13.430534575 +0000 UTC m=+0.167414132 container start e4b3d18a418f56e87f7fedc7a4715f79363b226979bf6b10c1288000fc4e9f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:00:13 compute-0 podman[301578]: 2025-10-14 09:00:13.43440619 +0000 UTC m=+0.171285767 container attach e4b3d18a418f56e87f7fedc7a4715f79363b226979bf6b10c1288000fc4e9f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.434 2 DEBUG nova.virt.hardware [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.435 2 INFO nova.compute.claims [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.543 2 DEBUG oslo_concurrency.processutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe/disk.config 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.544 2 INFO nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Deleting local config drive /var/lib/nova/instances/24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe/disk.config because it was imported into RBD.
Oct 14 09:00:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 385 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 12 MiB/s wr, 532 op/s
Oct 14 09:00:13 compute-0 kernel: tap5163f251-37: entered promiscuous mode
Oct 14 09:00:13 compute-0 NetworkManager[44885]: <info>  [1760432413.6024] manager: (tap5163f251-37): new Tun device (/org/freedesktop/NetworkManager/Devices/148)
Oct 14 09:00:13 compute-0 ovn_controller[152662]: 2025-10-14T09:00:13Z|00300|binding|INFO|Claiming lport 5163f251-371c-409d-9510-db3f0c358877 for this chassis.
Oct 14 09:00:13 compute-0 ovn_controller[152662]: 2025-10-14T09:00:13Z|00301|binding|INFO|5163f251-371c-409d-9510-db3f0c358877: Claiming fa:16:3e:7d:2f:aa 10.100.0.11
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.624 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:2f:aa 10.100.0.11'], port_security=['fa:16:3e:7d:2f:aa 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bdbc6e1-68cf-4934-97c2-16531fbc212d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8b21807ca224c1daf381f45c9748d90', 'neutron:revision_number': '2', 'neutron:security_group_ids': '999b5f9f-a407-4e91-ac97-3b1cf8c15e6c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02b5903f-52bd-43c9-8141-1efd7f9784c9, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=5163f251-371c-409d-9510-db3f0c358877) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.625 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 5163f251-371c-409d-9510-db3f0c358877 in datapath 0bdbc6e1-68cf-4934-97c2-16531fbc212d bound to our chassis
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.627 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0bdbc6e1-68cf-4934-97c2-16531fbc212d
Oct 14 09:00:13 compute-0 systemd-udevd[301648]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.639 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ab91936c-b25f-4e75-97c0-bcf0fda650c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.643 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0bdbc6e1-61 in ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.644 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0bdbc6e1-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.645 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4faf24-139e-4dd4-ac2e-4bccad21050e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.646 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ffe89b-63ea-4095-932f-26863f77b000]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:13 compute-0 NetworkManager[44885]: <info>  [1760432413.6606] device (tap5163f251-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:00:13 compute-0 systemd-machined[214636]: New machine qemu-42-instance-00000025.
Oct 14 09:00:13 compute-0 NetworkManager[44885]: <info>  [1760432413.6629] device (tap5163f251-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.666 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[2d224ea8-6965-4167-8c85-4195271eae3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:13 compute-0 systemd[1]: Started Virtual Machine qemu-42-instance-00000025.
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.680 2 DEBUG oslo_concurrency.processutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.703 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[65789b4a-15ba-4506-99a5-c9450052ad27]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.709 2 DEBUG nova.network.neutron [req-780f9f2f-16cc-4dd7-9f9a-7065257198ef req-9254b9bb-7a78-4c3d-8168-548278ecfd73 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Updated VIF entry in instance network info cache for port 5163f251-371c-409d-9510-db3f0c358877. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.711 2 DEBUG nova.network.neutron [req-780f9f2f-16cc-4dd7-9f9a-7065257198ef req-9254b9bb-7a78-4c3d-8168-548278ecfd73 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Updating instance_info_cache with network_info: [{"id": "5163f251-371c-409d-9510-db3f0c358877", "address": "fa:16:3e:7d:2f:aa", "network": {"id": "0bdbc6e1-68cf-4934-97c2-16531fbc212d", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-459284180-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8b21807ca224c1daf381f45c9748d90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5163f251-37", "ovs_interfaceid": "5163f251-371c-409d-9510-db3f0c358877", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:13 compute-0 ovn_controller[152662]: 2025-10-14T09:00:13Z|00302|binding|INFO|Setting lport 5163f251-371c-409d-9510-db3f0c358877 ovn-installed in OVS
Oct 14 09:00:13 compute-0 ovn_controller[152662]: 2025-10-14T09:00:13Z|00303|binding|INFO|Setting lport 5163f251-371c-409d-9510-db3f0c358877 up in Southbound
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:13 compute-0 nova_compute[259627]: 2025-10-14 09:00:13.745 2 DEBUG oslo_concurrency.lockutils [req-780f9f2f-16cc-4dd7-9f9a-7065257198ef req-9254b9bb-7a78-4c3d-8168-548278ecfd73 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.747 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba63a7c-190e-4b93-9e85-b9fec50350fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:13 compute-0 NetworkManager[44885]: <info>  [1760432413.7532] manager: (tap0bdbc6e1-60): new Veth device (/org/freedesktop/NetworkManager/Devices/149)
Oct 14 09:00:13 compute-0 systemd-udevd[301653]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.752 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e1493550-6f2f-4285-9851-48601e532753]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.802 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff4a902-9fa2-4d42-9631-48e9fb4fbc15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.805 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[509e07a0-806d-48e2-b3d5-61e73eb80ebb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:13 compute-0 NetworkManager[44885]: <info>  [1760432413.8369] device (tap0bdbc6e1-60): carrier: link connected
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.844 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c9cee663-0ba4-42a4-8217-f67f8acf803e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.866 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9aef2ad6-7cb6-4e62-8c94-7d43b8e7122c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0bdbc6e1-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:1c:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619259, 'reachable_time': 41856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301702, 'error': None, 'target': 'ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.880 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[24f9e4f6-3b34-4e19-affb-f441b714f4a1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea3:1ca3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619259, 'tstamp': 619259}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301703, 'error': None, 'target': 'ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.902 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[30dd0e62-d14b-4748-ba1a-a5aa7f02410d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0bdbc6e1-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:1c:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619259, 'reachable_time': 41856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301704, 'error': None, 'target': 'ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:13.935 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[033cc5d1-63c1-4ad5-a79e-faeae16bee1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:14.006 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a85292-c9cb-4beb-babe-aaeab2e918f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:14.007 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0bdbc6e1-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:14.008 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:14.008 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0bdbc6e1-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:14 compute-0 NetworkManager[44885]: <info>  [1760432414.0103] manager: (tap0bdbc6e1-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Oct 14 09:00:14 compute-0 kernel: tap0bdbc6e1-60: entered promiscuous mode
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:14.020 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0bdbc6e1-60, col_values=(('external_ids', {'iface-id': '90d819fa-2499-443a-bd6c-39dd92f5208b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:14 compute-0 ovn_controller[152662]: 2025-10-14T09:00:14Z|00304|binding|INFO|Releasing lport 90d819fa-2499-443a-bd6c-39dd92f5208b from this chassis (sb_readonly=0)
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:14.027 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0bdbc6e1-68cf-4934-97c2-16531fbc212d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0bdbc6e1-68cf-4934-97c2-16531fbc212d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:14.034 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[18ec6411-9797-4824-b236-cdd76fce691a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:14.041 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-0bdbc6e1-68cf-4934-97c2-16531fbc212d
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/0bdbc6e1-68cf-4934-97c2-16531fbc212d.pid.haproxy
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 0bdbc6e1-68cf-4934-97c2-16531fbc212d
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:00:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:14.042 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d', 'env', 'PROCESS_TAG=haproxy-0bdbc6e1-68cf-4934-97c2-16531fbc212d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0bdbc6e1-68cf-4934-97c2-16531fbc212d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:00:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/17238386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.181 2 DEBUG oslo_concurrency.processutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.187 2 DEBUG nova.compute.provider_tree [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.203 2 DEBUG nova.scheduler.client.report [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.224 2 DEBUG oslo_concurrency.lockutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.224 2 DEBUG nova.compute.manager [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:00:14 compute-0 keen_gagarin[301595]: {
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:     "0": [
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:         {
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "devices": [
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "/dev/loop3"
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             ],
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "lv_name": "ceph_lv0",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "lv_size": "21470642176",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "name": "ceph_lv0",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "tags": {
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.cluster_name": "ceph",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.crush_device_class": "",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.encrypted": "0",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.osd_id": "0",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.type": "block",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.vdo": "0"
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             },
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "type": "block",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "vg_name": "ceph_vg0"
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:         }
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:     ],
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:     "1": [
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:         {
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "devices": [
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "/dev/loop4"
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             ],
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "lv_name": "ceph_lv1",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "lv_size": "21470642176",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "name": "ceph_lv1",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "tags": {
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.cluster_name": "ceph",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.crush_device_class": "",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.encrypted": "0",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.osd_id": "1",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.type": "block",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.vdo": "0"
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             },
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "type": "block",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "vg_name": "ceph_vg1"
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:         }
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:     ],
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:     "2": [
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:         {
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "devices": [
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "/dev/loop5"
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             ],
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "lv_name": "ceph_lv2",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "lv_size": "21470642176",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "name": "ceph_lv2",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "tags": {
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.cluster_name": "ceph",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.crush_device_class": "",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.encrypted": "0",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.osd_id": "2",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.type": "block",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:                 "ceph.vdo": "0"
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             },
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "type": "block",
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:             "vg_name": "ceph_vg2"
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:         }
Oct 14 09:00:14 compute-0 keen_gagarin[301595]:     ]
Oct 14 09:00:14 compute-0 keen_gagarin[301595]: }
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.274 2 DEBUG nova.compute.manager [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.274 2 DEBUG nova.network.neutron [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:00:14 compute-0 systemd[1]: libpod-e4b3d18a418f56e87f7fedc7a4715f79363b226979bf6b10c1288000fc4e9f80.scope: Deactivated successfully.
Oct 14 09:00:14 compute-0 podman[301578]: 2025-10-14 09:00:14.287645482 +0000 UTC m=+1.024525039 container died e4b3d18a418f56e87f7fedc7a4715f79363b226979bf6b10c1288000fc4e9f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.295 2 INFO nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.315 2 DEBUG nova.compute.manager [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:00:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b023744f9bcd9d6a6f418d0a3ca59fdf49c08beb1891cb619d1ab6f7cc5cc7cd-merged.mount: Deactivated successfully.
Oct 14 09:00:14 compute-0 podman[301578]: 2025-10-14 09:00:14.387547956 +0000 UTC m=+1.124427513 container remove e4b3d18a418f56e87f7fedc7a4715f79363b226979bf6b10c1288000fc4e9f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 09:00:14 compute-0 systemd[1]: libpod-conmon-e4b3d18a418f56e87f7fedc7a4715f79363b226979bf6b10c1288000fc4e9f80.scope: Deactivated successfully.
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.404 2 DEBUG nova.compute.manager [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.405 2 DEBUG nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.405 2 INFO nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Creating image(s)
Oct 14 09:00:14 compute-0 sudo[301450]: pam_unix(sudo:session): session closed for user root
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.438 2 DEBUG nova.storage.rbd_utils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image f921c880-38a7-40b6-8300-2123889a19c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:14 compute-0 podman[301794]: 2025-10-14 09:00:14.46836911 +0000 UTC m=+0.078369485 container create 75af5515245e53b9ed6a2cf9d725dd1d4beb4d43372f6d5c3153fbd5ddae9bd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.482 2 DEBUG nova.storage.rbd_utils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image f921c880-38a7-40b6-8300-2123889a19c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:14 compute-0 sudo[301825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:00:14 compute-0 sudo[301825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:00:14 compute-0 sudo[301825]: pam_unix(sudo:session): session closed for user root
Oct 14 09:00:14 compute-0 systemd[1]: Started libpod-conmon-75af5515245e53b9ed6a2cf9d725dd1d4beb4d43372f6d5c3153fbd5ddae9bd7.scope.
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.517 2 DEBUG nova.storage.rbd_utils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image f921c880-38a7-40b6-8300-2123889a19c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:14 compute-0 podman[301794]: 2025-10-14 09:00:14.434725454 +0000 UTC m=+0.044725839 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.527 2 DEBUG oslo_concurrency.processutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:14 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:00:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb4826876986e74176533eacf2c86b4940f3954d2bb0db2fb4de3c16c27ff42/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:00:14 compute-0 podman[301794]: 2025-10-14 09:00:14.559319114 +0000 UTC m=+0.169319519 container init 75af5515245e53b9ed6a2cf9d725dd1d4beb4d43372f6d5c3153fbd5ddae9bd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 09:00:14 compute-0 podman[301794]: 2025-10-14 09:00:14.565408833 +0000 UTC m=+0.175409208 container start 75af5515245e53b9ed6a2cf9d725dd1d4beb4d43372f6d5c3153fbd5ddae9bd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 09:00:14 compute-0 sudo[301884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:00:14 compute-0 neutron-haproxy-ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d[301886]: [NOTICE]   (301916) : New worker (301919) forked
Oct 14 09:00:14 compute-0 neutron-haproxy-ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d[301886]: [NOTICE]   (301916) : Loading success.
Oct 14 09:00:14 compute-0 sudo[301884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:00:14 compute-0 sudo[301884]: pam_unix(sudo:session): session closed for user root
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.604 2 DEBUG oslo_concurrency.processutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.604 2 DEBUG oslo_concurrency.lockutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.605 2 DEBUG oslo_concurrency.lockutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.605 2 DEBUG oslo_concurrency.lockutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.631 2 DEBUG nova.storage.rbd_utils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image f921c880-38a7-40b6-8300-2123889a19c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.636 2 DEBUG oslo_concurrency.processutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f921c880-38a7-40b6-8300-2123889a19c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:14 compute-0 sudo[301929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:00:14 compute-0 sudo[301929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:00:14 compute-0 sudo[301929]: pam_unix(sudo:session): session closed for user root
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.663 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432414.6579707, 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.663 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] VM Started (Lifecycle Event)
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.696 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:14 compute-0 sudo[301975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:00:14 compute-0 sudo[301975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.705 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432414.658262, 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.706 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] VM Paused (Lifecycle Event)
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.733 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.740 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.779 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.903 2 DEBUG oslo_concurrency.processutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f921c880-38a7-40b6-8300-2123889a19c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.963 2 DEBUG nova.storage.rbd_utils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] resizing rbd image f921c880-38a7-40b6-8300-2123889a19c6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:00:14 compute-0 nova_compute[259627]: 2025-10-14 09:00:14.989 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:00:15 compute-0 ceph-mon[74249]: pgmap v1278: 305 pgs: 305 active+clean; 385 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 12 MiB/s wr, 532 op/s
Oct 14 09:00:15 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/17238386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.054 2 DEBUG nova.policy [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56001fe1c9fc432e923f8c57058754db', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.057 2 DEBUG nova.compute.manager [req-a9fb4fe1-9728-4441-9d41-6f04b0a78116 req-d32c787d-a141-424a-9a81-ebb201c10d02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Received event network-vif-plugged-5163f251-371c-409d-9510-db3f0c358877 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.058 2 DEBUG oslo_concurrency.lockutils [req-a9fb4fe1-9728-4441-9d41-6f04b0a78116 req-d32c787d-a141-424a-9a81-ebb201c10d02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.058 2 DEBUG oslo_concurrency.lockutils [req-a9fb4fe1-9728-4441-9d41-6f04b0a78116 req-d32c787d-a141-424a-9a81-ebb201c10d02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.058 2 DEBUG oslo_concurrency.lockutils [req-a9fb4fe1-9728-4441-9d41-6f04b0a78116 req-d32c787d-a141-424a-9a81-ebb201c10d02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.058 2 DEBUG nova.compute.manager [req-a9fb4fe1-9728-4441-9d41-6f04b0a78116 req-d32c787d-a141-424a-9a81-ebb201c10d02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Processing event network-vif-plugged-5163f251-371c-409d-9510-db3f0c358877 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.059 2 DEBUG nova.compute.manager [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.062 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432415.062394, 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.062 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] VM Resumed (Lifecycle Event)
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.065 2 DEBUG nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:00:15 compute-0 podman[302112]: 2025-10-14 09:00:15.070385073 +0000 UTC m=+0.056732185 container create a23c5252b6f46a2faea9de983749a240567d4c85064a72f104f67e1202361899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.084 2 DEBUG nova.objects.instance [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'migration_context' on Instance uuid f921c880-38a7-40b6-8300-2123889a19c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.088 2 INFO nova.virt.libvirt.driver [-] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Instance spawned successfully.
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.088 2 DEBUG nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.100 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.105 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:15 compute-0 systemd[1]: Started libpod-conmon-a23c5252b6f46a2faea9de983749a240567d4c85064a72f104f67e1202361899.scope.
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.125 2 DEBUG nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.126 2 DEBUG nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Ensure instance console log exists: /var/lib/nova/instances/f921c880-38a7-40b6-8300-2123889a19c6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.126 2 DEBUG oslo_concurrency.lockutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.126 2 DEBUG oslo_concurrency.lockutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.126 2 DEBUG oslo_concurrency.lockutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.134 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:00:15 compute-0 podman[302112]: 2025-10-14 09:00:15.045123132 +0000 UTC m=+0.031470264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.140 2 DEBUG nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.141 2 DEBUG nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.141 2 DEBUG nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.141 2 DEBUG nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.142 2 DEBUG nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.142 2 DEBUG nova.virt.libvirt.driver [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:15 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:00:15 compute-0 podman[302112]: 2025-10-14 09:00:15.174603532 +0000 UTC m=+0.160950644 container init a23c5252b6f46a2faea9de983749a240567d4c85064a72f104f67e1202361899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_moser, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:00:15 compute-0 podman[302112]: 2025-10-14 09:00:15.184031403 +0000 UTC m=+0.170378495 container start a23c5252b6f46a2faea9de983749a240567d4c85064a72f104f67e1202361899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_moser, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 09:00:15 compute-0 podman[302112]: 2025-10-14 09:00:15.187860997 +0000 UTC m=+0.174208139 container attach a23c5252b6f46a2faea9de983749a240567d4c85064a72f104f67e1202361899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_moser, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 09:00:15 compute-0 heuristic_moser[302146]: 167 167
Oct 14 09:00:15 compute-0 systemd[1]: libpod-a23c5252b6f46a2faea9de983749a240567d4c85064a72f104f67e1202361899.scope: Deactivated successfully.
Oct 14 09:00:15 compute-0 podman[302112]: 2025-10-14 09:00:15.194264185 +0000 UTC m=+0.180611287 container died a23c5252b6f46a2faea9de983749a240567d4c85064a72f104f67e1202361899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_moser, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.207 2 INFO nova.compute.manager [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Took 8.65 seconds to spawn the instance on the hypervisor.
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.208 2 DEBUG nova.compute.manager [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-4900571f660d885aebdc2aa692167ed5623c03ca4e5109b88023bb18f754c3d3-merged.mount: Deactivated successfully.
Oct 14 09:00:15 compute-0 podman[302112]: 2025-10-14 09:00:15.249601404 +0000 UTC m=+0.235948536 container remove a23c5252b6f46a2faea9de983749a240567d4c85064a72f104f67e1202361899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_moser, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:00:15 compute-0 systemd[1]: libpod-conmon-a23c5252b6f46a2faea9de983749a240567d4c85064a72f104f67e1202361899.scope: Deactivated successfully.
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.293 2 INFO nova.compute.manager [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Took 9.80 seconds to build instance.
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.310 2 DEBUG oslo_concurrency.lockutils [None req-78d40e20-6648-43a5-87c5-a29928782ae1 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:15 compute-0 podman[302170]: 2025-10-14 09:00:15.504756349 +0000 UTC m=+0.076860908 container create bf9dfedcbb43fb21097b9ce95d481502d8fd03ff2fb964fc48573b04d6b9a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:00:15 compute-0 podman[302170]: 2025-10-14 09:00:15.470360374 +0000 UTC m=+0.042464953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:00:15 compute-0 systemd[1]: Started libpod-conmon-bf9dfedcbb43fb21097b9ce95d481502d8fd03ff2fb964fc48573b04d6b9a5dd.scope.
Oct 14 09:00:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 436 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 9.1 MiB/s rd, 14 MiB/s wr, 551 op/s
Oct 14 09:00:15 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:00:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbdb9a7e4fc5feddbde349773c66ccfaf0ae3d1aebda322f8c6fcad356739d4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:00:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbdb9a7e4fc5feddbde349773c66ccfaf0ae3d1aebda322f8c6fcad356739d4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:00:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbdb9a7e4fc5feddbde349773c66ccfaf0ae3d1aebda322f8c6fcad356739d4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:00:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbdb9a7e4fc5feddbde349773c66ccfaf0ae3d1aebda322f8c6fcad356739d4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:00:15 compute-0 podman[302170]: 2025-10-14 09:00:15.632580828 +0000 UTC m=+0.204685397 container init bf9dfedcbb43fb21097b9ce95d481502d8fd03ff2fb964fc48573b04d6b9a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:00:15 compute-0 podman[302170]: 2025-10-14 09:00:15.641561899 +0000 UTC m=+0.213666448 container start bf9dfedcbb43fb21097b9ce95d481502d8fd03ff2fb964fc48573b04d6b9a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_boyd, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 14 09:00:15 compute-0 podman[302170]: 2025-10-14 09:00:15.645196178 +0000 UTC m=+0.217300787 container attach bf9dfedcbb43fb21097b9ce95d481502d8fd03ff2fb964fc48573b04d6b9a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_boyd, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 09:00:15 compute-0 nova_compute[259627]: 2025-10-14 09:00:15.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:00:16 compute-0 brave_boyd[302186]: {
Oct 14 09:00:16 compute-0 brave_boyd[302186]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:00:16 compute-0 brave_boyd[302186]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:00:16 compute-0 brave_boyd[302186]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:00:16 compute-0 brave_boyd[302186]:         "osd_id": 2,
Oct 14 09:00:16 compute-0 brave_boyd[302186]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:00:16 compute-0 brave_boyd[302186]:         "type": "bluestore"
Oct 14 09:00:16 compute-0 brave_boyd[302186]:     },
Oct 14 09:00:16 compute-0 brave_boyd[302186]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:00:16 compute-0 brave_boyd[302186]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:00:16 compute-0 brave_boyd[302186]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:00:16 compute-0 brave_boyd[302186]:         "osd_id": 1,
Oct 14 09:00:16 compute-0 brave_boyd[302186]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:00:16 compute-0 brave_boyd[302186]:         "type": "bluestore"
Oct 14 09:00:16 compute-0 brave_boyd[302186]:     },
Oct 14 09:00:16 compute-0 brave_boyd[302186]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:00:16 compute-0 brave_boyd[302186]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:00:16 compute-0 brave_boyd[302186]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:00:16 compute-0 brave_boyd[302186]:         "osd_id": 0,
Oct 14 09:00:16 compute-0 brave_boyd[302186]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:00:16 compute-0 brave_boyd[302186]:         "type": "bluestore"
Oct 14 09:00:16 compute-0 brave_boyd[302186]:     }
Oct 14 09:00:16 compute-0 brave_boyd[302186]: }
Oct 14 09:00:16 compute-0 systemd[1]: libpod-bf9dfedcbb43fb21097b9ce95d481502d8fd03ff2fb964fc48573b04d6b9a5dd.scope: Deactivated successfully.
Oct 14 09:00:16 compute-0 podman[302170]: 2025-10-14 09:00:16.675877077 +0000 UTC m=+1.247981626 container died bf9dfedcbb43fb21097b9ce95d481502d8fd03ff2fb964fc48573b04d6b9a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_boyd, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:00:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbdb9a7e4fc5feddbde349773c66ccfaf0ae3d1aebda322f8c6fcad356739d4e-merged.mount: Deactivated successfully.
Oct 14 09:00:16 compute-0 ovn_controller[152662]: 2025-10-14T09:00:16Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1d:83:ca 10.100.0.6
Oct 14 09:00:16 compute-0 ovn_controller[152662]: 2025-10-14T09:00:16Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1d:83:ca 10.100.0.6
Oct 14 09:00:16 compute-0 podman[302170]: 2025-10-14 09:00:16.740573246 +0000 UTC m=+1.312677795 container remove bf9dfedcbb43fb21097b9ce95d481502d8fd03ff2fb964fc48573b04d6b9a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_boyd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 09:00:16 compute-0 systemd[1]: libpod-conmon-bf9dfedcbb43fb21097b9ce95d481502d8fd03ff2fb964fc48573b04d6b9a5dd.scope: Deactivated successfully.
Oct 14 09:00:16 compute-0 sudo[301975]: pam_unix(sudo:session): session closed for user root
Oct 14 09:00:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:00:16 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:00:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:00:16 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:00:16 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 298350c1-786e-4723-b22b-28c69e49657c does not exist
Oct 14 09:00:16 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 7ef897f7-a56c-42b8-95f2-0dea2251cf30 does not exist
Oct 14 09:00:16 compute-0 sudo[302232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:00:16 compute-0 sudo[302232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:00:16 compute-0 sudo[302232]: pam_unix(sudo:session): session closed for user root
Oct 14 09:00:16 compute-0 sudo[302257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:00:16 compute-0 sudo[302257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:00:16 compute-0 sudo[302257]: pam_unix(sudo:session): session closed for user root
Oct 14 09:00:16 compute-0 nova_compute[259627]: 2025-10-14 09:00:16.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:16 compute-0 nova_compute[259627]: 2025-10-14 09:00:16.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:00:17 compute-0 ceph-mon[74249]: pgmap v1279: 305 pgs: 305 active+clean; 436 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 9.1 MiB/s rd, 14 MiB/s wr, 551 op/s
Oct 14 09:00:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:00:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.069 2 DEBUG nova.network.neutron [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Successfully created port: 07fd6657-ae56-455d-b362-d5f4a3368a9e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.093 2 DEBUG oslo_concurrency.lockutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "6f94ea73-1c62-40a9-9300-0d81c596377c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.093 2 DEBUG oslo_concurrency.lockutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.107 2 DEBUG nova.compute.manager [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.118 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432402.0811214, 1af6c158-005b-4f3c-9044-87158e57378d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.118 2 INFO nova.compute.manager [-] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] VM Stopped (Lifecycle Event)
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.139 2 DEBUG nova.compute.manager [req-48f4edd0-e76e-4998-8622-9965ade6d47e req-e1971dc9-ae6b-4c51-b22b-657183fa9d0d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Received event network-vif-plugged-5163f251-371c-409d-9510-db3f0c358877 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.140 2 DEBUG oslo_concurrency.lockutils [req-48f4edd0-e76e-4998-8622-9965ade6d47e req-e1971dc9-ae6b-4c51-b22b-657183fa9d0d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.140 2 DEBUG oslo_concurrency.lockutils [req-48f4edd0-e76e-4998-8622-9965ade6d47e req-e1971dc9-ae6b-4c51-b22b-657183fa9d0d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.140 2 DEBUG oslo_concurrency.lockutils [req-48f4edd0-e76e-4998-8622-9965ade6d47e req-e1971dc9-ae6b-4c51-b22b-657183fa9d0d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.140 2 DEBUG nova.compute.manager [req-48f4edd0-e76e-4998-8622-9965ade6d47e req-e1971dc9-ae6b-4c51-b22b-657183fa9d0d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] No waiting events found dispatching network-vif-plugged-5163f251-371c-409d-9510-db3f0c358877 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.141 2 WARNING nova.compute.manager [req-48f4edd0-e76e-4998-8622-9965ade6d47e req-e1971dc9-ae6b-4c51-b22b-657183fa9d0d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Received unexpected event network-vif-plugged-5163f251-371c-409d-9510-db3f0c358877 for instance with vm_state active and task_state None.
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.142 2 DEBUG nova.compute.manager [None req-59adff0c-0f0a-4e81-b9bf-655be3db41a1 - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.184 2 DEBUG oslo_concurrency.lockutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.185 2 DEBUG oslo_concurrency.lockutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.192 2 DEBUG nova.virt.hardware [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.192 2 INFO nova.compute.claims [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.428 2 DEBUG oslo_concurrency.processutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:00:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 436 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 12 MiB/s wr, 454 op/s
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:00:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/801582642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.924 2 DEBUG oslo_concurrency.processutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.933 2 DEBUG nova.compute.provider_tree [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.960 2 DEBUG nova.scheduler.client.report [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:00:17 compute-0 nova_compute[259627]: 2025-10-14 09:00:17.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.000 2 DEBUG oslo_concurrency.lockutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.002 2 DEBUG nova.compute.manager [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:00:18 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/801582642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.122 2 DEBUG nova.compute.manager [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.123 2 DEBUG nova.network.neutron [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.148 2 INFO nova.virt.libvirt.driver [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.171 2 DEBUG nova.compute.manager [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.275 2 DEBUG nova.compute.manager [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.277 2 DEBUG nova.virt.libvirt.driver [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.277 2 INFO nova.virt.libvirt.driver [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Creating image(s)
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.304 2 DEBUG nova.storage.rbd_utils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 6f94ea73-1c62-40a9-9300-0d81c596377c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.337 2 DEBUG nova.storage.rbd_utils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 6f94ea73-1c62-40a9-9300-0d81c596377c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.366 2 DEBUG nova.storage.rbd_utils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 6f94ea73-1c62-40a9-9300-0d81c596377c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.371 2 DEBUG oslo_concurrency.lockutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "f585003ec43b6a5fb48428b6ec4bde2adfde01c7" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.372 2 DEBUG oslo_concurrency.lockutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f585003ec43b6a5fb48428b6ec4bde2adfde01c7" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.404 2 DEBUG nova.policy [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a217215c39e41fea2323ff7b3b4e6aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.494 2 DEBUG nova.network.neutron [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Successfully updated port: 07fd6657-ae56-455d-b362-d5f4a3368a9e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.508 2 DEBUG oslo_concurrency.lockutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "refresh_cache-f921c880-38a7-40b6-8300-2123889a19c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.508 2 DEBUG oslo_concurrency.lockutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquired lock "refresh_cache-f921c880-38a7-40b6-8300-2123889a19c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.509 2 DEBUG nova.network.neutron [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.636 2 DEBUG nova.virt.libvirt.imagebackend [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image locations are: [{'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/76c5459a-f996-4fca-ad51-ec29f6551489/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/76c5459a-f996-4fca-ad51-ec29f6551489/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.690 2 DEBUG nova.network.neutron [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.697 2 DEBUG nova.virt.libvirt.imagebackend [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Selected location: {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/76c5459a-f996-4fca-ad51-ec29f6551489/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.697 2 DEBUG nova.storage.rbd_utils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] cloning images/76c5459a-f996-4fca-ad51-ec29f6551489@snap to None/6f94ea73-1c62-40a9-9300-0d81c596377c_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.752 2 DEBUG oslo_concurrency.lockutils [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Acquiring lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.752 2 DEBUG oslo_concurrency.lockutils [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.752 2 DEBUG oslo_concurrency.lockutils [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Acquiring lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.753 2 DEBUG oslo_concurrency.lockutils [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.753 2 DEBUG oslo_concurrency.lockutils [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.754 2 INFO nova.compute.manager [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Terminating instance
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.755 2 DEBUG nova.compute.manager [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:00:18 compute-0 kernel: tap5163f251-37 (unregistering): left promiscuous mode
Oct 14 09:00:18 compute-0 NetworkManager[44885]: <info>  [1760432418.8058] device (tap5163f251-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:00:18 compute-0 ovn_controller[152662]: 2025-10-14T09:00:18Z|00305|binding|INFO|Releasing lport 5163f251-371c-409d-9510-db3f0c358877 from this chassis (sb_readonly=0)
Oct 14 09:00:18 compute-0 ovn_controller[152662]: 2025-10-14T09:00:18Z|00306|binding|INFO|Setting lport 5163f251-371c-409d-9510-db3f0c358877 down in Southbound
Oct 14 09:00:18 compute-0 ovn_controller[152662]: 2025-10-14T09:00:18Z|00307|binding|INFO|Removing iface tap5163f251-37 ovn-installed in OVS
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:18.838 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:2f:aa 10.100.0.11'], port_security=['fa:16:3e:7d:2f:aa 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bdbc6e1-68cf-4934-97c2-16531fbc212d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8b21807ca224c1daf381f45c9748d90', 'neutron:revision_number': '4', 'neutron:security_group_ids': '999b5f9f-a407-4e91-ac97-3b1cf8c15e6c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02b5903f-52bd-43c9-8141-1efd7f9784c9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=5163f251-371c-409d-9510-db3f0c358877) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:18.839 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 5163f251-371c-409d-9510-db3f0c358877 in datapath 0bdbc6e1-68cf-4934-97c2-16531fbc212d unbound from our chassis
Oct 14 09:00:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:18.841 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bdbc6e1-68cf-4934-97c2-16531fbc212d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:00:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:18.842 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3f53e3-7f2b-48bd-84f2-0d8a35b9de67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:18.842 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d namespace which is not needed anymore
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.901 2 DEBUG oslo_concurrency.lockutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f585003ec43b6a5fb48428b6ec4bde2adfde01c7" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:18 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000025.scope: Deactivated successfully.
Oct 14 09:00:18 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000025.scope: Consumed 4.492s CPU time.
Oct 14 09:00:18 compute-0 systemd-machined[214636]: Machine qemu-42-instance-00000025 terminated.
Oct 14 09:00:18 compute-0 nova_compute[259627]: 2025-10-14 09:00:18.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:18 compute-0 kernel: tap5163f251-37: entered promiscuous mode
Oct 14 09:00:18 compute-0 NetworkManager[44885]: <info>  [1760432418.9783] manager: (tap5163f251-37): new Tun device (/org/freedesktop/NetworkManager/Devices/151)
Oct 14 09:00:18 compute-0 ovn_controller[152662]: 2025-10-14T09:00:18Z|00308|binding|INFO|Claiming lport 5163f251-371c-409d-9510-db3f0c358877 for this chassis.
Oct 14 09:00:18 compute-0 ovn_controller[152662]: 2025-10-14T09:00:18Z|00309|binding|INFO|5163f251-371c-409d-9510-db3f0c358877: Claiming fa:16:3e:7d:2f:aa 10.100.0.11
Oct 14 09:00:18 compute-0 kernel: tap5163f251-37 (unregistering): left promiscuous mode
Oct 14 09:00:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:18.989 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:2f:aa 10.100.0.11'], port_security=['fa:16:3e:7d:2f:aa 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bdbc6e1-68cf-4934-97c2-16531fbc212d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8b21807ca224c1daf381f45c9748d90', 'neutron:revision_number': '4', 'neutron:security_group_ids': '999b5f9f-a407-4e91-ac97-3b1cf8c15e6c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02b5903f-52bd-43c9-8141-1efd7f9784c9, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=5163f251-371c-409d-9510-db3f0c358877) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:19 compute-0 neutron-haproxy-ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d[301886]: [NOTICE]   (301916) : haproxy version is 2.8.14-c23fe91
Oct 14 09:00:19 compute-0 neutron-haproxy-ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d[301886]: [NOTICE]   (301916) : path to executable is /usr/sbin/haproxy
Oct 14 09:00:19 compute-0 neutron-haproxy-ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d[301886]: [WARNING]  (301916) : Exiting Master process...
Oct 14 09:00:19 compute-0 neutron-haproxy-ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d[301886]: [ALERT]    (301916) : Current worker (301919) exited with code 143 (Terminated)
Oct 14 09:00:19 compute-0 neutron-haproxy-ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d[301886]: [WARNING]  (301916) : All workers exited. Exiting... (0)
Oct 14 09:00:19 compute-0 systemd[1]: libpod-75af5515245e53b9ed6a2cf9d725dd1d4beb4d43372f6d5c3153fbd5ddae9bd7.scope: Deactivated successfully.
Oct 14 09:00:19 compute-0 ovn_controller[152662]: 2025-10-14T09:00:19Z|00310|binding|INFO|Setting lport 5163f251-371c-409d-9510-db3f0c358877 ovn-installed in OVS
Oct 14 09:00:19 compute-0 ovn_controller[152662]: 2025-10-14T09:00:19Z|00311|binding|INFO|Setting lport 5163f251-371c-409d-9510-db3f0c358877 up in Southbound
Oct 14 09:00:19 compute-0 ovn_controller[152662]: 2025-10-14T09:00:19Z|00312|binding|INFO|Releasing lport 5163f251-371c-409d-9510-db3f0c358877 from this chassis (sb_readonly=1)
Oct 14 09:00:19 compute-0 ovn_controller[152662]: 2025-10-14T09:00:19Z|00313|binding|INFO|Removing iface tap5163f251-37 ovn-installed in OVS
Oct 14 09:00:19 compute-0 ovn_controller[152662]: 2025-10-14T09:00:19Z|00314|if_status|INFO|Dropped 8 log messages in last 68 seconds (most recently, 61 seconds ago) due to excessive rate
Oct 14 09:00:19 compute-0 ovn_controller[152662]: 2025-10-14T09:00:19Z|00315|if_status|INFO|Not setting lport 5163f251-371c-409d-9510-db3f0c358877 down as sb is readonly
Oct 14 09:00:19 compute-0 ovn_controller[152662]: 2025-10-14T09:00:19Z|00316|binding|INFO|Releasing lport 5163f251-371c-409d-9510-db3f0c358877 from this chassis (sb_readonly=0)
Oct 14 09:00:19 compute-0 ovn_controller[152662]: 2025-10-14T09:00:19Z|00317|binding|INFO|Setting lport 5163f251-371c-409d-9510-db3f0c358877 down in Southbound
Oct 14 09:00:19 compute-0 podman[302466]: 2025-10-14 09:00:19.023404813 +0000 UTC m=+0.074147862 container died 75af5515245e53b9ed6a2cf9d725dd1d4beb4d43372f6d5c3153fbd5ddae9bd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:00:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:19.026 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:2f:aa 10.100.0.11'], port_security=['fa:16:3e:7d:2f:aa 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bdbc6e1-68cf-4934-97c2-16531fbc212d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8b21807ca224c1daf381f45c9748d90', 'neutron:revision_number': '4', 'neutron:security_group_ids': '999b5f9f-a407-4e91-ac97-3b1cf8c15e6c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02b5903f-52bd-43c9-8141-1efd7f9784c9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=5163f251-371c-409d-9510-db3f0c358877) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:19 compute-0 ceph-mon[74249]: pgmap v1280: 305 pgs: 305 active+clean; 436 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 12 MiB/s wr, 454 op/s
Oct 14 09:00:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-75af5515245e53b9ed6a2cf9d725dd1d4beb4d43372f6d5c3153fbd5ddae9bd7-userdata-shm.mount: Deactivated successfully.
Oct 14 09:00:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-efb4826876986e74176533eacf2c86b4940f3954d2bb0db2fb4de3c16c27ff42-merged.mount: Deactivated successfully.
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.064 2 DEBUG nova.network.neutron [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Successfully created port: 0ddc0438-5718-411f-bf95-e14886b82478 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:19 compute-0 podman[302466]: 2025-10-14 09:00:19.068638394 +0000 UTC m=+0.119381443 container cleanup 75af5515245e53b9ed6a2cf9d725dd1d4beb4d43372f6d5c3153fbd5ddae9bd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.068 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:00:19 compute-0 systemd[1]: libpod-conmon-75af5515245e53b9ed6a2cf9d725dd1d4beb4d43372f6d5c3153fbd5ddae9bd7.scope: Deactivated successfully.
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.111 2 INFO nova.virt.libvirt.driver [-] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Instance destroyed successfully.
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.111 2 DEBUG nova.objects.instance [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lazy-loading 'resources' on Instance uuid 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.120 2 DEBUG nova.objects.instance [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'migration_context' on Instance uuid 6f94ea73-1c62-40a9-9300-0d81c596377c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:19 compute-0 podman[302519]: 2025-10-14 09:00:19.137360641 +0000 UTC m=+0.042460074 container remove 75af5515245e53b9ed6a2cf9d725dd1d4beb4d43372f6d5c3153fbd5ddae9bd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.138 2 DEBUG nova.virt.libvirt.vif [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:00:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-1347232807',display_name='tempest-ServerPasswordTestJSON-server-1347232807',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-1347232807',id=37,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:00:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8b21807ca224c1daf381f45c9748d90',ramdisk_id='',reservation_id='r-w20njkep',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-484472080',owner_user_name='tempest-ServerPasswordTestJSON-484472080-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:00:18Z,user_data=None,user_id='91f3c667c5b24ffa9e97a07a6cfa768f',uuid=24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5163f251-371c-409d-9510-db3f0c358877", "address": "fa:16:3e:7d:2f:aa", "network": {"id": "0bdbc6e1-68cf-4934-97c2-16531fbc212d", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-459284180-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8b21807ca224c1daf381f45c9748d90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5163f251-37", "ovs_interfaceid": "5163f251-371c-409d-9510-db3f0c358877", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.139 2 DEBUG nova.network.os_vif_util [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Converting VIF {"id": "5163f251-371c-409d-9510-db3f0c358877", "address": "fa:16:3e:7d:2f:aa", "network": {"id": "0bdbc6e1-68cf-4934-97c2-16531fbc212d", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-459284180-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8b21807ca224c1daf381f45c9748d90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5163f251-37", "ovs_interfaceid": "5163f251-371c-409d-9510-db3f0c358877", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.139 2 DEBUG nova.network.os_vif_util [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:2f:aa,bridge_name='br-int',has_traffic_filtering=True,id=5163f251-371c-409d-9510-db3f0c358877,network=Network(0bdbc6e1-68cf-4934-97c2-16531fbc212d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5163f251-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.140 2 DEBUG os_vif [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:2f:aa,bridge_name='br-int',has_traffic_filtering=True,id=5163f251-371c-409d-9510-db3f0c358877,network=Network(0bdbc6e1-68cf-4934-97c2-16531fbc212d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5163f251-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.141 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.142 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.142 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.142 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.142 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:19.146 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3e067345-6bdb-4f53-bd6a-ae3e050d0857]: (4, ('Tue Oct 14 09:00:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d (75af5515245e53b9ed6a2cf9d725dd1d4beb4d43372f6d5c3153fbd5ddae9bd7)\n75af5515245e53b9ed6a2cf9d725dd1d4beb4d43372f6d5c3153fbd5ddae9bd7\nTue Oct 14 09:00:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d (75af5515245e53b9ed6a2cf9d725dd1d4beb4d43372f6d5c3153fbd5ddae9bd7)\n75af5515245e53b9ed6a2cf9d725dd1d4beb4d43372f6d5c3153fbd5ddae9bd7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:19.147 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b83417-c488-4058-b240-8e3c98844839]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:19.148 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0bdbc6e1-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:19 compute-0 kernel: tap0bdbc6e1-60: left promiscuous mode
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.176 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5163f251-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.179 2 DEBUG nova.virt.libvirt.driver [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.179 2 DEBUG nova.virt.libvirt.driver [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Ensure instance console log exists: /var/lib/nova/instances/6f94ea73-1c62-40a9-9300-0d81c596377c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:00:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:19.179 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f7ce6870-f9ba-4e0e-b5db-9327477e6fb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.180 2 DEBUG oslo_concurrency.lockutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.180 2 DEBUG oslo_concurrency.lockutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.180 2 DEBUG oslo_concurrency.lockutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.192 2 INFO os_vif [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:2f:aa,bridge_name='br-int',has_traffic_filtering=True,id=5163f251-371c-409d-9510-db3f0c358877,network=Network(0bdbc6e1-68cf-4934-97c2-16531fbc212d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5163f251-37')
Oct 14 09:00:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:19.197 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7545f1-7ab4-4568-92e0-e0448fc9186f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:19.198 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9d62c5f5-a21a-40d8-bb07-743c44a1bf79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:19.220 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb8ca45-b8f1-4426-9e6d-f6f92fcbaf0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619250, 'reachable_time': 41185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302562, 'error': None, 'target': 'ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:19.222 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0bdbc6e1-68cf-4934-97c2-16531fbc212d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:00:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:19.222 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[e8dc4d7a-17cc-4b4c-8e5f-5f3df6ce4457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:19.223 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 5163f251-371c-409d-9510-db3f0c358877 in datapath 0bdbc6e1-68cf-4934-97c2-16531fbc212d unbound from our chassis
Oct 14 09:00:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d0bdbc6e1\x2d68cf\x2d4934\x2d97c2\x2d16531fbc212d.mount: Deactivated successfully.
Oct 14 09:00:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:19.224 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bdbc6e1-68cf-4934-97c2-16531fbc212d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:00:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:19.225 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7071af91-f09a-4f4d-af12-b8f5a017e10c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:19.225 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 5163f251-371c-409d-9510-db3f0c358877 in datapath 0bdbc6e1-68cf-4934-97c2-16531fbc212d unbound from our chassis
Oct 14 09:00:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:19.226 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bdbc6e1-68cf-4934-97c2-16531fbc212d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:00:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:19.227 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d136bb00-f1d6-4f5c-a235-bcd945080ab7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.268 2 DEBUG nova.compute.manager [req-b735098b-fba7-446f-b5ee-d12a99ae09d6 req-f4d1b816-de10-473c-8fa9-d2106e867832 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Received event network-changed-07fd6657-ae56-455d-b362-d5f4a3368a9e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.268 2 DEBUG nova.compute.manager [req-b735098b-fba7-446f-b5ee-d12a99ae09d6 req-f4d1b816-de10-473c-8fa9-d2106e867832 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Refreshing instance network info cache due to event network-changed-07fd6657-ae56-455d-b362-d5f4a3368a9e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.268 2 DEBUG oslo_concurrency.lockutils [req-b735098b-fba7-446f-b5ee-d12a99ae09d6 req-f4d1b816-de10-473c-8fa9-d2106e867832 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f921c880-38a7-40b6-8300-2123889a19c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:00:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 436 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 7.9 MiB/s wr, 267 op/s
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.616 2 INFO nova.virt.libvirt.driver [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Deleting instance files /var/lib/nova/instances/24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe_del
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.617 2 INFO nova.virt.libvirt.driver [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Deletion of /var/lib/nova/instances/24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe_del complete
Oct 14 09:00:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:00:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4156992053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.660 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.694 2 INFO nova.compute.manager [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Took 0.94 seconds to destroy the instance on the hypervisor.
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.694 2 DEBUG oslo.service.loopingcall [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.694 2 DEBUG nova.compute.manager [-] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.694 2 DEBUG nova.network.neutron [-] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.753 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.753 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.755 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Error from libvirt while getting description of instance-00000025: [Error Code 42] Domain not found: no domain with matching uuid '24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe' (instance-00000025): libvirt.libvirtError: Domain not found: no domain with matching uuid '24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe' (instance-00000025)
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.758 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.759 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.763 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.763 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.768 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.769 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.963 2 DEBUG nova.network.neutron [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Updating instance_info_cache with network_info: [{"id": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "address": "fa:16:3e:86:30:a0", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd6657-ae", "ovs_interfaceid": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.991 2 DEBUG oslo_concurrency.lockutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Releasing lock "refresh_cache-f921c880-38a7-40b6-8300-2123889a19c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.991 2 DEBUG nova.compute.manager [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Instance network_info: |[{"id": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "address": "fa:16:3e:86:30:a0", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd6657-ae", "ovs_interfaceid": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.991 2 DEBUG oslo_concurrency.lockutils [req-b735098b-fba7-446f-b5ee-d12a99ae09d6 req-f4d1b816-de10-473c-8fa9-d2106e867832 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f921c880-38a7-40b6-8300-2123889a19c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.992 2 DEBUG nova.network.neutron [req-b735098b-fba7-446f-b5ee-d12a99ae09d6 req-f4d1b816-de10-473c-8fa9-d2106e867832 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Refreshing network info cache for port 07fd6657-ae56-455d-b362-d5f4a3368a9e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:00:19 compute-0 nova_compute[259627]: 2025-10-14 09:00:19.995 2 DEBUG nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Start _get_guest_xml network_info=[{"id": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "address": "fa:16:3e:86:30:a0", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd6657-ae", "ovs_interfaceid": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.004 2 WARNING nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.011 2 DEBUG nova.virt.libvirt.host [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.012 2 DEBUG nova.virt.libvirt.host [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.018 2 DEBUG nova.virt.libvirt.host [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.019 2 DEBUG nova.virt.libvirt.host [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.020 2 DEBUG nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.020 2 DEBUG nova.virt.hardware [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.021 2 DEBUG nova.virt.hardware [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.021 2 DEBUG nova.virt.hardware [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.022 2 DEBUG nova.virt.hardware [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.022 2 DEBUG nova.virt.hardware [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.023 2 DEBUG nova.virt.hardware [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.023 2 DEBUG nova.virt.hardware [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.023 2 DEBUG nova.virt.hardware [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.024 2 DEBUG nova.virt.hardware [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.024 2 DEBUG nova.virt.hardware [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.024 2 DEBUG nova.virt.hardware [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.029 2 DEBUG oslo_concurrency.processutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:20 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4156992053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.104 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.106 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3564MB free_disk=59.810611724853516GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.106 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.107 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.220 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 310ebd88-5fe0-40ad-99dd-c3a1b410d357 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.221 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance b932e3d1-4cf6-4934-9eec-c93284b17b43 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.221 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 6de921d2-e251-431d-9333-bae44aa81859 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.221 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 6b630da6-e65a-48aa-9559-1d59beb73a93 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.221 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.221 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance f921c880-38a7-40b6-8300-2123889a19c6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.222 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 6f94ea73-1c62-40a9-9300-0d81c596377c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.222 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.222 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1408MB phys_disk=59GB used_disk=7GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.390 2 DEBUG nova.network.neutron [-] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.410 2 INFO nova.compute.manager [-] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Took 0.72 seconds to deallocate network for instance.
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.415 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.461 2 DEBUG oslo_concurrency.lockutils [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1243472728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.511 2 DEBUG oslo_concurrency.processutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.535 2 DEBUG nova.storage.rbd_utils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image f921c880-38a7-40b6-8300-2123889a19c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.539 2 DEBUG oslo_concurrency.processutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.718 2 DEBUG nova.compute.manager [req-a179fef6-5ade-4afc-a8a7-5ff636ba9629 req-57e3e7fa-dd0c-4b03-9428-fd54f57cd49e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Received event network-vif-deleted-5163f251-371c-409d-9510-db3f0c358877 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:00:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2468748519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.917 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.924 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:00:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842131310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.942 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.951 2 DEBUG oslo_concurrency.processutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.953 2 DEBUG nova.virt.libvirt.vif [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:00:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2049488467',display_name='tempest-ServersAdminTestJSON-server-2049488467',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2049488467',id=38,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-v8hrxxla',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:00:14Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=f921c880-38a7-40b6-8300-2123889a19c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "address": "fa:16:3e:86:30:a0", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd6657-ae", "ovs_interfaceid": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.953 2 DEBUG nova.network.os_vif_util [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "address": "fa:16:3e:86:30:a0", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd6657-ae", "ovs_interfaceid": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.955 2 DEBUG nova.network.os_vif_util [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:30:a0,bridge_name='br-int',has_traffic_filtering=True,id=07fd6657-ae56-455d-b362-d5f4a3368a9e,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07fd6657-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.957 2 DEBUG nova.objects.instance [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'pci_devices' on Instance uuid f921c880-38a7-40b6-8300-2123889a19c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.980 2 DEBUG nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:00:20 compute-0 nova_compute[259627]:   <uuid>f921c880-38a7-40b6-8300-2123889a19c6</uuid>
Oct 14 09:00:20 compute-0 nova_compute[259627]:   <name>instance-00000026</name>
Oct 14 09:00:20 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:00:20 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:00:20 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersAdminTestJSON-server-2049488467</nova:name>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:00:20</nova:creationTime>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:00:20 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:00:20 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:00:20 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:00:20 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:00:20 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:00:20 compute-0 nova_compute[259627]:         <nova:user uuid="56001fe1c9fc432e923f8c57058754db">tempest-ServersAdminTestJSON-276167539-project-member</nova:user>
Oct 14 09:00:20 compute-0 nova_compute[259627]:         <nova:project uuid="ed7ee17abdbe419cb7d7fd0da2cd2068">tempest-ServersAdminTestJSON-276167539</nova:project>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:00:20 compute-0 nova_compute[259627]:         <nova:port uuid="07fd6657-ae56-455d-b362-d5f4a3368a9e">
Oct 14 09:00:20 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:00:20 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:00:20 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <system>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <entry name="serial">f921c880-38a7-40b6-8300-2123889a19c6</entry>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <entry name="uuid">f921c880-38a7-40b6-8300-2123889a19c6</entry>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     </system>
Oct 14 09:00:20 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:00:20 compute-0 nova_compute[259627]:   <os>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:   </os>
Oct 14 09:00:20 compute-0 nova_compute[259627]:   <features>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:   </features>
Oct 14 09:00:20 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:00:20 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:00:20 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/f921c880-38a7-40b6-8300-2123889a19c6_disk">
Oct 14 09:00:20 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:20 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/f921c880-38a7-40b6-8300-2123889a19c6_disk.config">
Oct 14 09:00:20 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:20 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:86:30:a0"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <target dev="tap07fd6657-ae"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/f921c880-38a7-40b6-8300-2123889a19c6/console.log" append="off"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <video>
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     </video>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:00:20 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:00:20 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:00:20 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:00:20 compute-0 nova_compute[259627]: </domain>
Oct 14 09:00:20 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.981 2 DEBUG nova.compute.manager [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Preparing to wait for external event network-vif-plugged-07fd6657-ae56-455d-b362-d5f4a3368a9e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.981 2 DEBUG oslo_concurrency.lockutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "f921c880-38a7-40b6-8300-2123889a19c6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.982 2 DEBUG oslo_concurrency.lockutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "f921c880-38a7-40b6-8300-2123889a19c6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.983 2 DEBUG oslo_concurrency.lockutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "f921c880-38a7-40b6-8300-2123889a19c6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.984 2 DEBUG nova.virt.libvirt.vif [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:00:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2049488467',display_name='tempest-ServersAdminTestJSON-server-2049488467',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2049488467',id=38,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-v8hrxxla',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:00:14Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=f921c880-38a7-40b6-8300-2123889a19c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "address": "fa:16:3e:86:30:a0", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd6657-ae", "ovs_interfaceid": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.985 2 DEBUG nova.network.os_vif_util [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "address": "fa:16:3e:86:30:a0", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd6657-ae", "ovs_interfaceid": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.986 2 DEBUG nova.network.os_vif_util [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:30:a0,bridge_name='br-int',has_traffic_filtering=True,id=07fd6657-ae56-455d-b362-d5f4a3368a9e,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07fd6657-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.987 2 DEBUG os_vif [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:30:a0,bridge_name='br-int',has_traffic_filtering=True,id=07fd6657-ae56-455d-b362-d5f4a3368a9e,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07fd6657-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.989 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.990 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.993 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:00:20 compute-0 nova_compute[259627]: 2025-10-14 09:00:20.994 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.000 2 DEBUG oslo_concurrency.lockutils [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.001 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07fd6657-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.002 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07fd6657-ae, col_values=(('external_ids', {'iface-id': '07fd6657-ae56-455d-b362-d5f4a3368a9e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:30:a0', 'vm-uuid': 'f921c880-38a7-40b6-8300-2123889a19c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:21 compute-0 NetworkManager[44885]: <info>  [1760432421.0045] manager: (tap07fd6657-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.012 2 INFO os_vif [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:30:a0,bridge_name='br-int',has_traffic_filtering=True,id=07fd6657-ae56-455d-b362-d5f4a3368a9e,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07fd6657-ae')
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.065 2 DEBUG nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.066 2 DEBUG nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.066 2 DEBUG nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No VIF found with MAC fa:16:3e:86:30:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.067 2 INFO nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Using config drive
Oct 14 09:00:21 compute-0 ceph-mon[74249]: pgmap v1281: 305 pgs: 305 active+clean; 436 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 7.9 MiB/s wr, 267 op/s
Oct 14 09:00:21 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1243472728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:21 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2468748519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:21 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1842131310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.096 2 DEBUG nova.storage.rbd_utils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image f921c880-38a7-40b6-8300-2123889a19c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.207 2 DEBUG nova.network.neutron [req-b735098b-fba7-446f-b5ee-d12a99ae09d6 req-f4d1b816-de10-473c-8fa9-d2106e867832 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Updated VIF entry in instance network info cache for port 07fd6657-ae56-455d-b362-d5f4a3368a9e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.208 2 DEBUG nova.network.neutron [req-b735098b-fba7-446f-b5ee-d12a99ae09d6 req-f4d1b816-de10-473c-8fa9-d2106e867832 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Updating instance_info_cache with network_info: [{"id": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "address": "fa:16:3e:86:30:a0", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd6657-ae", "ovs_interfaceid": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.210 2 DEBUG oslo_concurrency.processutils [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.235 2 DEBUG oslo_concurrency.lockutils [req-b735098b-fba7-446f-b5ee-d12a99ae09d6 req-f4d1b816-de10-473c-8fa9-d2106e867832 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f921c880-38a7-40b6-8300-2123889a19c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.236 2 DEBUG nova.compute.manager [req-b735098b-fba7-446f-b5ee-d12a99ae09d6 req-f4d1b816-de10-473c-8fa9-d2106e867832 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Received event network-vif-unplugged-5163f251-371c-409d-9510-db3f0c358877 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.236 2 DEBUG oslo_concurrency.lockutils [req-b735098b-fba7-446f-b5ee-d12a99ae09d6 req-f4d1b816-de10-473c-8fa9-d2106e867832 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.236 2 DEBUG oslo_concurrency.lockutils [req-b735098b-fba7-446f-b5ee-d12a99ae09d6 req-f4d1b816-de10-473c-8fa9-d2106e867832 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.236 2 DEBUG oslo_concurrency.lockutils [req-b735098b-fba7-446f-b5ee-d12a99ae09d6 req-f4d1b816-de10-473c-8fa9-d2106e867832 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.236 2 DEBUG nova.compute.manager [req-b735098b-fba7-446f-b5ee-d12a99ae09d6 req-f4d1b816-de10-473c-8fa9-d2106e867832 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] No waiting events found dispatching network-vif-unplugged-5163f251-371c-409d-9510-db3f0c358877 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.237 2 DEBUG nova.compute.manager [req-b735098b-fba7-446f-b5ee-d12a99ae09d6 req-f4d1b816-de10-473c-8fa9-d2106e867832 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Received event network-vif-unplugged-5163f251-371c-409d-9510-db3f0c358877 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.445 2 DEBUG nova.network.neutron [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Successfully updated port: 0ddc0438-5718-411f-bf95-e14886b82478 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.473 2 DEBUG oslo_concurrency.lockutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "refresh_cache-6f94ea73-1c62-40a9-9300-0d81c596377c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.474 2 DEBUG oslo_concurrency.lockutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquired lock "refresh_cache-6f94ea73-1c62-40a9-9300-0d81c596377c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.474 2 DEBUG nova.network.neutron [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.569 2 DEBUG nova.compute.manager [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Received event network-vif-plugged-5163f251-371c-409d-9510-db3f0c358877 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.569 2 DEBUG oslo_concurrency.lockutils [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.569 2 DEBUG oslo_concurrency.lockutils [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.570 2 DEBUG oslo_concurrency.lockutils [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.570 2 DEBUG nova.compute.manager [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] No waiting events found dispatching network-vif-plugged-5163f251-371c-409d-9510-db3f0c358877 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.570 2 WARNING nova.compute.manager [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Received unexpected event network-vif-plugged-5163f251-371c-409d-9510-db3f0c358877 for instance with vm_state deleted and task_state None.
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.570 2 DEBUG nova.compute.manager [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Received event network-vif-plugged-5163f251-371c-409d-9510-db3f0c358877 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.571 2 DEBUG oslo_concurrency.lockutils [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.571 2 DEBUG oslo_concurrency.lockutils [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.571 2 DEBUG oslo_concurrency.lockutils [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.571 2 DEBUG nova.compute.manager [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] No waiting events found dispatching network-vif-plugged-5163f251-371c-409d-9510-db3f0c358877 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.571 2 WARNING nova.compute.manager [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Received unexpected event network-vif-plugged-5163f251-371c-409d-9510-db3f0c358877 for instance with vm_state deleted and task_state None.
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.572 2 DEBUG nova.compute.manager [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Received event network-changed-0ddc0438-5718-411f-bf95-e14886b82478 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.572 2 DEBUG nova.compute.manager [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Refreshing instance network info cache due to event network-changed-0ddc0438-5718-411f-bf95-e14886b82478. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.572 2 DEBUG oslo_concurrency.lockutils [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-6f94ea73-1c62-40a9-9300-0d81c596377c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:00:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 430 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.3 MiB/s wr, 296 op/s
Oct 14 09:00:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:00:21 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2983644418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.624 2 DEBUG oslo_concurrency.processutils [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.629 2 DEBUG nova.compute.provider_tree [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.649 2 DEBUG nova.scheduler.client.report [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.668 2 DEBUG oslo_concurrency.lockutils [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.696 2 INFO nova.scheduler.client.report [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Deleted allocations for instance 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe
Oct 14 09:00:21 compute-0 ovn_controller[152662]: 2025-10-14T09:00:21Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ef:4e:2c 10.100.0.11
Oct 14 09:00:21 compute-0 ovn_controller[152662]: 2025-10-14T09:00:21Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ef:4e:2c 10.100.0.11
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.763 2 DEBUG oslo_concurrency.lockutils [None req-3dea0716-3430-4af9-b653-7d4f8a03035e 91f3c667c5b24ffa9e97a07a6cfa768f a8b21807ca224c1daf381f45c9748d90 - - default default] Lock "24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.766 2 DEBUG nova.network.neutron [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.834 2 INFO nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Creating config drive at /var/lib/nova/instances/f921c880-38a7-40b6-8300-2123889a19c6/disk.config
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.844 2 DEBUG oslo_concurrency.processutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f921c880-38a7-40b6-8300-2123889a19c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7vu09q4u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:21 compute-0 nova_compute[259627]: 2025-10-14 09:00:21.987 2 DEBUG oslo_concurrency.processutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f921c880-38a7-40b6-8300-2123889a19c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7vu09q4u" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.030 2 DEBUG nova.storage.rbd_utils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image f921c880-38a7-40b6-8300-2123889a19c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.034 2 DEBUG oslo_concurrency.processutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f921c880-38a7-40b6-8300-2123889a19c6/disk.config f921c880-38a7-40b6-8300-2123889a19c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2983644418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.214 2 DEBUG oslo_concurrency.processutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f921c880-38a7-40b6-8300-2123889a19c6/disk.config f921c880-38a7-40b6-8300-2123889a19c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.216 2 INFO nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Deleting local config drive /var/lib/nova/instances/f921c880-38a7-40b6-8300-2123889a19c6/disk.config because it was imported into RBD.
Oct 14 09:00:22 compute-0 kernel: tap07fd6657-ae: entered promiscuous mode
Oct 14 09:00:22 compute-0 NetworkManager[44885]: <info>  [1760432422.2693] manager: (tap07fd6657-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/153)
Oct 14 09:00:22 compute-0 ovn_controller[152662]: 2025-10-14T09:00:22Z|00318|binding|INFO|Claiming lport 07fd6657-ae56-455d-b362-d5f4a3368a9e for this chassis.
Oct 14 09:00:22 compute-0 ovn_controller[152662]: 2025-10-14T09:00:22Z|00319|binding|INFO|07fd6657-ae56-455d-b362-d5f4a3368a9e: Claiming fa:16:3e:86:30:a0 10.100.0.14
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:22.282 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:30:a0 10.100.0.14'], port_security=['fa:16:3e:86:30:a0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f921c880-38a7-40b6-8300-2123889a19c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a616a5a0-dd86-4326-bbdf-7cf172de843b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1013e89-bd11-44b1-be74-a5ce3b4c520f, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=07fd6657-ae56-455d-b362-d5f4a3368a9e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:22.283 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 07fd6657-ae56-455d-b362-d5f4a3368a9e in datapath ea0c857a-d31a-43a0-b285-c89c1ddc920a bound to our chassis
Oct 14 09:00:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:22.285 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ea0c857a-d31a-43a0-b285-c89c1ddc920a
Oct 14 09:00:22 compute-0 ovn_controller[152662]: 2025-10-14T09:00:22Z|00320|binding|INFO|Setting lport 07fd6657-ae56-455d-b362-d5f4a3368a9e ovn-installed in OVS
Oct 14 09:00:22 compute-0 ovn_controller[152662]: 2025-10-14T09:00:22Z|00321|binding|INFO|Setting lport 07fd6657-ae56-455d-b362-d5f4a3368a9e up in Southbound
Oct 14 09:00:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:22.306 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[668d665f-e67a-43e9-aba6-0d8e83cd235d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:22 compute-0 systemd-machined[214636]: New machine qemu-43-instance-00000026.
Oct 14 09:00:22 compute-0 systemd[1]: Started Virtual Machine qemu-43-instance-00000026.
Oct 14 09:00:22 compute-0 systemd-udevd[302779]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:00:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:22.351 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[98e07d30-31bd-40d9-9a10-2ee8c62b126d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:22.358 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[709aa27c-96f6-441e-a288-10f92066c07e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:22 compute-0 NetworkManager[44885]: <info>  [1760432422.3686] device (tap07fd6657-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:00:22 compute-0 NetworkManager[44885]: <info>  [1760432422.3693] device (tap07fd6657-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:00:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:22.408 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd22cc2-fad1-4b0b-b98a-9232d94fb496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:22.433 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5352b94b-9f6b-44a6-a7ce-5107db07e76f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea0c857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:51:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616925, 'reachable_time': 21171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302789, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:22.453 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4de89a86-03cf-4106-a910-f68f18e31d6a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616940, 'tstamp': 616940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302791, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616944, 'tstamp': 616944}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302791, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:22.454 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea0c857a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:22.458 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea0c857a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:22.458 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:22.458 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapea0c857a-d0, col_values=(('external_ids', {'iface-id': '6baedd76-8a05-42d6-8356-18b586f58672'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:22.459 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.805 2 DEBUG nova.compute.manager [req-518e72c2-eb88-479b-a229-973c0ef45988 req-ca3084ae-a507-4c42-a189-32739080e190 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Received event network-vif-plugged-07fd6657-ae56-455d-b362-d5f4a3368a9e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.805 2 DEBUG oslo_concurrency.lockutils [req-518e72c2-eb88-479b-a229-973c0ef45988 req-ca3084ae-a507-4c42-a189-32739080e190 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f921c880-38a7-40b6-8300-2123889a19c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.806 2 DEBUG oslo_concurrency.lockutils [req-518e72c2-eb88-479b-a229-973c0ef45988 req-ca3084ae-a507-4c42-a189-32739080e190 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f921c880-38a7-40b6-8300-2123889a19c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.806 2 DEBUG oslo_concurrency.lockutils [req-518e72c2-eb88-479b-a229-973c0ef45988 req-ca3084ae-a507-4c42-a189-32739080e190 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f921c880-38a7-40b6-8300-2123889a19c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.807 2 DEBUG nova.compute.manager [req-518e72c2-eb88-479b-a229-973c0ef45988 req-ca3084ae-a507-4c42-a189-32739080e190 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Processing event network-vif-plugged-07fd6657-ae56-455d-b362-d5f4a3368a9e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.909 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.910 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.941 2 DEBUG nova.network.neutron [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Updating instance_info_cache with network_info: [{"id": "0ddc0438-5718-411f-bf95-e14886b82478", "address": "fa:16:3e:90:7f:0c", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddc0438-57", "ovs_interfaceid": "0ddc0438-5718-411f-bf95-e14886b82478", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.950 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.951 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.959 2 DEBUG oslo_concurrency.lockutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Releasing lock "refresh_cache-6f94ea73-1c62-40a9-9300-0d81c596377c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.960 2 DEBUG nova.compute.manager [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Instance network_info: |[{"id": "0ddc0438-5718-411f-bf95-e14886b82478", "address": "fa:16:3e:90:7f:0c", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddc0438-57", "ovs_interfaceid": "0ddc0438-5718-411f-bf95-e14886b82478", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.961 2 DEBUG oslo_concurrency.lockutils [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-6f94ea73-1c62-40a9-9300-0d81c596377c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.961 2 DEBUG nova.network.neutron [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Refreshing network info cache for port 0ddc0438-5718-411f-bf95-e14886b82478 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.968 2 DEBUG nova.virt.libvirt.driver [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Start _get_guest_xml network_info=[{"id": "0ddc0438-5718-411f-bf95-e14886b82478", "address": "fa:16:3e:90:7f:0c", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddc0438-57", "ovs_interfaceid": "0ddc0438-5718-411f-bf95-e14886b82478", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-14T09:00:06Z,direct_url=<?>,disk_format='raw',id=76c5459a-f996-4fca-ad51-ec29f6551489,min_disk=1,min_ram=0,name='tempest-test-snap-761702226',owner='0d87d2d744db48dc8b32bb4bf6847fce',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-14T09:00:11Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '76c5459a-f996-4fca-ad51-ec29f6551489'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.979 2 WARNING nova.virt.libvirt.driver [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.994 2 DEBUG nova.virt.libvirt.host [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:00:22 compute-0 nova_compute[259627]: 2025-10-14 09:00:22.996 2 DEBUG nova.virt.libvirt.host [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.002 2 DEBUG nova.virt.libvirt.host [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.003 2 DEBUG nova.virt.libvirt.host [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.004 2 DEBUG nova.virt.libvirt.driver [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.005 2 DEBUG nova.virt.hardware [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-14T09:00:06Z,direct_url=<?>,disk_format='raw',id=76c5459a-f996-4fca-ad51-ec29f6551489,min_disk=1,min_ram=0,name='tempest-test-snap-761702226',owner='0d87d2d744db48dc8b32bb4bf6847fce',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-14T09:00:11Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.006 2 DEBUG nova.virt.hardware [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.007 2 DEBUG nova.virt.hardware [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.008 2 DEBUG nova.virt.hardware [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.009 2 DEBUG nova.virt.hardware [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.009 2 DEBUG nova.virt.hardware [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.010 2 DEBUG nova.virt.hardware [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.011 2 DEBUG nova.virt.hardware [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.011 2 DEBUG nova.virt.hardware [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.012 2 DEBUG nova.virt.hardware [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.013 2 DEBUG nova.virt.hardware [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.018 2 DEBUG oslo_concurrency.processutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:23 compute-0 ceph-mon[74249]: pgmap v1282: 305 pgs: 305 active+clean; 430 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.3 MiB/s wr, 296 op/s
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.371 2 DEBUG nova.compute.manager [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.374 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432423.371063, f921c880-38a7-40b6-8300-2123889a19c6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.376 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f921c880-38a7-40b6-8300-2123889a19c6] VM Started (Lifecycle Event)
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.379 2 DEBUG nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.384 2 INFO nova.virt.libvirt.driver [-] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Instance spawned successfully.
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.385 2 DEBUG nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.398 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.408 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.424 2 DEBUG nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.425 2 DEBUG nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.426 2 DEBUG nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.427 2 DEBUG nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.428 2 DEBUG nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.429 2 DEBUG nova.virt.libvirt.driver [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.438 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f921c880-38a7-40b6-8300-2123889a19c6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.439 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432423.3732011, f921c880-38a7-40b6-8300-2123889a19c6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.439 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f921c880-38a7-40b6-8300-2123889a19c6] VM Paused (Lifecycle Event)
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.493 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.498 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432423.3796575, f921c880-38a7-40b6-8300-2123889a19c6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.499 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f921c880-38a7-40b6-8300-2123889a19c6] VM Resumed (Lifecycle Event)
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.524 2 INFO nova.compute.manager [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Took 9.12 seconds to spawn the instance on the hypervisor.
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.525 2 DEBUG nova.compute.manager [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.528 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2373132558' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.542 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.560 2 DEBUG oslo_concurrency.processutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.580 2 DEBUG nova.storage.rbd_utils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 6f94ea73-1c62-40a9-9300-0d81c596377c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.585 2 DEBUG oslo_concurrency.processutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 430 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 268 op/s
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.611 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f921c880-38a7-40b6-8300-2123889a19c6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.624 2 INFO nova.compute.manager [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Took 10.29 seconds to build instance.
Oct 14 09:00:23 compute-0 nova_compute[259627]: 2025-10-14 09:00:23.639 2 DEBUG oslo_concurrency.lockutils [None req-1717fb30-143b-49b6-95da-f4a85446ef2a 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "f921c880-38a7-40b6-8300-2123889a19c6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.369s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2863113499' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:24 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2373132558' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:24 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2863113499' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.100 2 DEBUG oslo_concurrency.processutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.102 2 DEBUG nova.virt.libvirt.vif [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:00:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-33419614',display_name='tempest-ImagesTestJSON-server-33419614',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-33419614',id=39,image_ref='76c5459a-f996-4fca-ad51-ec29f6551489',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-5dalrlgt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='6de921d2-e251-431d-9333-bae44aa81859',image_min_disk='1',image_min_ram='0',image_owner_id='0d87d2d744db48dc8b32bb4bf6847fce',image_owner_project_name='tempest-ImagesTestJSON-168259448',image_owner_user_name='tempest-ImagesTestJSON-168259448-project-member',image_user_id='3a217215c39e41fea2323ff7b3b4e6aa',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:00:18Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=6f94ea73-1c62-40a9-9300-0d81c596377c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ddc0438-5718-411f-bf95-e14886b82478", "address": "fa:16:3e:90:7f:0c", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddc0438-57", "ovs_interfaceid": "0ddc0438-5718-411f-bf95-e14886b82478", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.103 2 DEBUG nova.network.os_vif_util [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "0ddc0438-5718-411f-bf95-e14886b82478", "address": "fa:16:3e:90:7f:0c", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddc0438-57", "ovs_interfaceid": "0ddc0438-5718-411f-bf95-e14886b82478", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.105 2 DEBUG nova.network.os_vif_util [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:7f:0c,bridge_name='br-int',has_traffic_filtering=True,id=0ddc0438-5718-411f-bf95-e14886b82478,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddc0438-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.107 2 DEBUG nova.objects.instance [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'pci_devices' on Instance uuid 6f94ea73-1c62-40a9-9300-0d81c596377c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.128 2 DEBUG nova.virt.libvirt.driver [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:00:24 compute-0 nova_compute[259627]:   <uuid>6f94ea73-1c62-40a9-9300-0d81c596377c</uuid>
Oct 14 09:00:24 compute-0 nova_compute[259627]:   <name>instance-00000027</name>
Oct 14 09:00:24 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:00:24 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:00:24 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <nova:name>tempest-ImagesTestJSON-server-33419614</nova:name>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:00:22</nova:creationTime>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:00:24 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:00:24 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:00:24 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:00:24 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:00:24 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:00:24 compute-0 nova_compute[259627]:         <nova:user uuid="3a217215c39e41fea2323ff7b3b4e6aa">tempest-ImagesTestJSON-168259448-project-member</nova:user>
Oct 14 09:00:24 compute-0 nova_compute[259627]:         <nova:project uuid="0d87d2d744db48dc8b32bb4bf6847fce">tempest-ImagesTestJSON-168259448</nova:project>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="76c5459a-f996-4fca-ad51-ec29f6551489"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:00:24 compute-0 nova_compute[259627]:         <nova:port uuid="0ddc0438-5718-411f-bf95-e14886b82478">
Oct 14 09:00:24 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:00:24 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:00:24 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <system>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <entry name="serial">6f94ea73-1c62-40a9-9300-0d81c596377c</entry>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <entry name="uuid">6f94ea73-1c62-40a9-9300-0d81c596377c</entry>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     </system>
Oct 14 09:00:24 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:00:24 compute-0 nova_compute[259627]:   <os>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:   </os>
Oct 14 09:00:24 compute-0 nova_compute[259627]:   <features>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:   </features>
Oct 14 09:00:24 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:00:24 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:00:24 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/6f94ea73-1c62-40a9-9300-0d81c596377c_disk">
Oct 14 09:00:24 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:24 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/6f94ea73-1c62-40a9-9300-0d81c596377c_disk.config">
Oct 14 09:00:24 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:24 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:90:7f:0c"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <target dev="tap0ddc0438-57"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/6f94ea73-1c62-40a9-9300-0d81c596377c/console.log" append="off"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <video>
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     </video>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <input type="keyboard" bus="usb"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:00:24 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:00:24 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:00:24 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:00:24 compute-0 nova_compute[259627]: </domain>
Oct 14 09:00:24 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.129 2 DEBUG nova.compute.manager [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Preparing to wait for external event network-vif-plugged-0ddc0438-5718-411f-bf95-e14886b82478 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.129 2 DEBUG oslo_concurrency.lockutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.130 2 DEBUG oslo_concurrency.lockutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.130 2 DEBUG oslo_concurrency.lockutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.131 2 DEBUG nova.virt.libvirt.vif [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:00:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-33419614',display_name='tempest-ImagesTestJSON-server-33419614',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-33419614',id=39,image_ref='76c5459a-f996-4fca-ad51-ec29f6551489',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-5dalrlgt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='6de921d2-e251-431d-9333-bae44aa81859',image_min_disk='1',image_min_ram='0',image_owner_id='0d87d2d744db48dc8b32bb4bf6847fce',image_owner_project_name='tempest-ImagesTestJSON-168259448',image_owner_user_name='tempest-ImagesTestJSON-168259448-project-member',image_user_id='3a217215c39e41fea2323ff7b3b4e6aa',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:00:18Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=6f94ea73-1c62-40a9-9300-0d81c596377c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ddc0438-5718-411f-bf95-e14886b82478", "address": "fa:16:3e:90:7f:0c", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddc0438-57", "ovs_interfaceid": "0ddc0438-5718-411f-bf95-e14886b82478", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.132 2 DEBUG nova.network.os_vif_util [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "0ddc0438-5718-411f-bf95-e14886b82478", "address": "fa:16:3e:90:7f:0c", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddc0438-57", "ovs_interfaceid": "0ddc0438-5718-411f-bf95-e14886b82478", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.133 2 DEBUG nova.network.os_vif_util [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:7f:0c,bridge_name='br-int',has_traffic_filtering=True,id=0ddc0438-5718-411f-bf95-e14886b82478,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddc0438-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.134 2 DEBUG os_vif [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:7f:0c,bridge_name='br-int',has_traffic_filtering=True,id=0ddc0438-5718-411f-bf95-e14886b82478,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddc0438-57') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.141 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.142 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.149 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ddc0438-57, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.150 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0ddc0438-57, col_values=(('external_ids', {'iface-id': '0ddc0438-5718-411f-bf95-e14886b82478', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:7f:0c', 'vm-uuid': '6f94ea73-1c62-40a9-9300-0d81c596377c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:24 compute-0 NetworkManager[44885]: <info>  [1760432424.1532] manager: (tap0ddc0438-57): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.160 2 INFO os_vif [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:7f:0c,bridge_name='br-int',has_traffic_filtering=True,id=0ddc0438-5718-411f-bf95-e14886b82478,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddc0438-57')
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.218 2 DEBUG nova.virt.libvirt.driver [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.218 2 DEBUG nova.virt.libvirt.driver [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.218 2 DEBUG nova.virt.libvirt.driver [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No VIF found with MAC fa:16:3e:90:7f:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.219 2 INFO nova.virt.libvirt.driver [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Using config drive
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.244 2 DEBUG nova.storage.rbd_utils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 6f94ea73-1c62-40a9-9300-0d81c596377c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.470 2 INFO nova.virt.libvirt.driver [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Creating config drive at /var/lib/nova/instances/6f94ea73-1c62-40a9-9300-0d81c596377c/disk.config
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.476 2 DEBUG oslo_concurrency.processutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6f94ea73-1c62-40a9-9300-0d81c596377c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmycgyaq1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.628 2 DEBUG oslo_concurrency.processutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6f94ea73-1c62-40a9-9300-0d81c596377c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmycgyaq1" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.657 2 DEBUG nova.storage.rbd_utils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 6f94ea73-1c62-40a9-9300-0d81c596377c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.662 2 DEBUG oslo_concurrency.processutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6f94ea73-1c62-40a9-9300-0d81c596377c/disk.config 6f94ea73-1c62-40a9-9300-0d81c596377c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.842 2 DEBUG oslo_concurrency.processutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6f94ea73-1c62-40a9-9300-0d81c596377c/disk.config 6f94ea73-1c62-40a9-9300-0d81c596377c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.843 2 INFO nova.virt.libvirt.driver [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Deleting local config drive /var/lib/nova/instances/6f94ea73-1c62-40a9-9300-0d81c596377c/disk.config because it was imported into RBD.
Oct 14 09:00:24 compute-0 kernel: tap0ddc0438-57: entered promiscuous mode
Oct 14 09:00:24 compute-0 NetworkManager[44885]: <info>  [1760432424.8973] manager: (tap0ddc0438-57): new Tun device (/org/freedesktop/NetworkManager/Devices/155)
Oct 14 09:00:24 compute-0 ovn_controller[152662]: 2025-10-14T09:00:24Z|00322|binding|INFO|Claiming lport 0ddc0438-5718-411f-bf95-e14886b82478 for this chassis.
Oct 14 09:00:24 compute-0 ovn_controller[152662]: 2025-10-14T09:00:24Z|00323|binding|INFO|0ddc0438-5718-411f-bf95-e14886b82478: Claiming fa:16:3e:90:7f:0c 10.100.0.7
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:24.905 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:7f:0c 10.100.0.7'], port_security=['fa:16:3e:90:7f:0c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6f94ea73-1c62-40a9-9300-0d81c596377c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0ddc0438-5718-411f-bf95-e14886b82478) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:24.906 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0ddc0438-5718-411f-bf95-e14886b82478 in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a bound to our chassis
Oct 14 09:00:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:24.908 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.914 2 DEBUG nova.compute.manager [req-2154650c-3fcf-45a6-afef-2b9da19d2975 req-d8301a56-b9e3-49f7-a999-1c56bed7c2a9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Received event network-vif-plugged-07fd6657-ae56-455d-b362-d5f4a3368a9e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.915 2 DEBUG oslo_concurrency.lockutils [req-2154650c-3fcf-45a6-afef-2b9da19d2975 req-d8301a56-b9e3-49f7-a999-1c56bed7c2a9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f921c880-38a7-40b6-8300-2123889a19c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.915 2 DEBUG oslo_concurrency.lockutils [req-2154650c-3fcf-45a6-afef-2b9da19d2975 req-d8301a56-b9e3-49f7-a999-1c56bed7c2a9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f921c880-38a7-40b6-8300-2123889a19c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.916 2 DEBUG oslo_concurrency.lockutils [req-2154650c-3fcf-45a6-afef-2b9da19d2975 req-d8301a56-b9e3-49f7-a999-1c56bed7c2a9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f921c880-38a7-40b6-8300-2123889a19c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.916 2 DEBUG nova.compute.manager [req-2154650c-3fcf-45a6-afef-2b9da19d2975 req-d8301a56-b9e3-49f7-a999-1c56bed7c2a9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] No waiting events found dispatching network-vif-plugged-07fd6657-ae56-455d-b362-d5f4a3368a9e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.916 2 WARNING nova.compute.manager [req-2154650c-3fcf-45a6-afef-2b9da19d2975 req-d8301a56-b9e3-49f7-a999-1c56bed7c2a9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Received unexpected event network-vif-plugged-07fd6657-ae56-455d-b362-d5f4a3368a9e for instance with vm_state active and task_state None.
Oct 14 09:00:24 compute-0 NetworkManager[44885]: <info>  [1760432424.9239] device (tap0ddc0438-57): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:00:24 compute-0 NetworkManager[44885]: <info>  [1760432424.9257] device (tap0ddc0438-57): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:00:24 compute-0 systemd-machined[214636]: New machine qemu-44-instance-00000027.
Oct 14 09:00:24 compute-0 systemd[1]: Started Virtual Machine qemu-44-instance-00000027.
Oct 14 09:00:24 compute-0 ovn_controller[152662]: 2025-10-14T09:00:24Z|00324|binding|INFO|Setting lport 0ddc0438-5718-411f-bf95-e14886b82478 ovn-installed in OVS
Oct 14 09:00:24 compute-0 ovn_controller[152662]: 2025-10-14T09:00:24Z|00325|binding|INFO|Setting lport 0ddc0438-5718-411f-bf95-e14886b82478 up in Southbound
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:24.937 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0de5cd32-8e6c-4224-83e6-ea2350c0a54d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:24.971 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8088e7-a245-4ef5-b729-b0f7519f14d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:24.976 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[799f3809-c0b9-468c-aea5-1e84cadd51b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.985 2 DEBUG nova.network.neutron [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Updated VIF entry in instance network info cache for port 0ddc0438-5718-411f-bf95-e14886b82478. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:00:24 compute-0 nova_compute[259627]: 2025-10-14 09:00:24.985 2 DEBUG nova.network.neutron [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Updating instance_info_cache with network_info: [{"id": "0ddc0438-5718-411f-bf95-e14886b82478", "address": "fa:16:3e:90:7f:0c", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddc0438-57", "ovs_interfaceid": "0ddc0438-5718-411f-bf95-e14886b82478", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:25 compute-0 nova_compute[259627]: 2025-10-14 09:00:25.007 2 DEBUG oslo_concurrency.lockutils [req-8299e852-3897-4e60-8e14-f3da4b481bbb req-e07e6049-3c03-43da-b38a-a4e6206aac18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-6f94ea73-1c62-40a9-9300-0d81c596377c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:00:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:25.025 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9d9031-97f5-43d9-9b4b-431df718cad7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:25.046 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b6526c61-0e27-402c-82c1-686cbaeb1e8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618124, 'reachable_time': 18179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302981, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:25.069 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d1db5f3e-fbd8-47eb-8756-58e26b605ceb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2322cf7a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618137, 'tstamp': 618137}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302982, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2322cf7a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618140, 'tstamp': 618140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302982, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:25.071 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:25 compute-0 nova_compute[259627]: 2025-10-14 09:00:25.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:25 compute-0 nova_compute[259627]: 2025-10-14 09:00:25.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:25.075 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2322cf7a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:25.075 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:25.076 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2322cf7a-00, col_values=(('external_ids', {'iface-id': '0616bbde-729a-4cd4-ba39-5fcdf59ece5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:25.077 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:25 compute-0 ceph-mon[74249]: pgmap v1283: 305 pgs: 305 active+clean; 430 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 268 op/s
Oct 14 09:00:25 compute-0 nova_compute[259627]: 2025-10-14 09:00:25.221 2 DEBUG oslo_concurrency.lockutils [None req-a3062222-6783-40ee-aa26-f94d741ddc1f 08cc6ecbff464ad68e7b883283c9dac5 3560c0a649af405ea342016ed07df8a7 - - default default] Acquiring lock "refresh_cache-f921c880-38a7-40b6-8300-2123889a19c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:00:25 compute-0 nova_compute[259627]: 2025-10-14 09:00:25.222 2 DEBUG oslo_concurrency.lockutils [None req-a3062222-6783-40ee-aa26-f94d741ddc1f 08cc6ecbff464ad68e7b883283c9dac5 3560c0a649af405ea342016ed07df8a7 - - default default] Acquired lock "refresh_cache-f921c880-38a7-40b6-8300-2123889a19c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:00:25 compute-0 nova_compute[259627]: 2025-10-14 09:00:25.223 2 DEBUG nova.network.neutron [None req-a3062222-6783-40ee-aa26-f94d741ddc1f 08cc6ecbff464ad68e7b883283c9dac5 3560c0a649af405ea342016ed07df8a7 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:00:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 451 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.1 MiB/s wr, 378 op/s
Oct 14 09:00:25 compute-0 nova_compute[259627]: 2025-10-14 09:00:25.781 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432425.7806604, 6f94ea73-1c62-40a9-9300-0d81c596377c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:25 compute-0 nova_compute[259627]: 2025-10-14 09:00:25.781 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] VM Started (Lifecycle Event)
Oct 14 09:00:25 compute-0 nova_compute[259627]: 2025-10-14 09:00:25.801 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:25 compute-0 nova_compute[259627]: 2025-10-14 09:00:25.805 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432425.7816396, 6f94ea73-1c62-40a9-9300-0d81c596377c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:25 compute-0 nova_compute[259627]: 2025-10-14 09:00:25.805 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] VM Paused (Lifecycle Event)
Oct 14 09:00:25 compute-0 nova_compute[259627]: 2025-10-14 09:00:25.823 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:25 compute-0 nova_compute[259627]: 2025-10-14 09:00:25.829 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:25 compute-0 nova_compute[259627]: 2025-10-14 09:00:25.849 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:00:27 compute-0 nova_compute[259627]: 2025-10-14 09:00:27.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:27 compute-0 ceph-mon[74249]: pgmap v1284: 305 pgs: 305 active+clean; 451 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.1 MiB/s wr, 378 op/s
Oct 14 09:00:27 compute-0 nova_compute[259627]: 2025-10-14 09:00:27.314 2 DEBUG nova.network.neutron [None req-a3062222-6783-40ee-aa26-f94d741ddc1f 08cc6ecbff464ad68e7b883283c9dac5 3560c0a649af405ea342016ed07df8a7 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Updating instance_info_cache with network_info: [{"id": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "address": "fa:16:3e:86:30:a0", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd6657-ae", "ovs_interfaceid": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:27 compute-0 nova_compute[259627]: 2025-10-14 09:00:27.340 2 DEBUG oslo_concurrency.lockutils [None req-a3062222-6783-40ee-aa26-f94d741ddc1f 08cc6ecbff464ad68e7b883283c9dac5 3560c0a649af405ea342016ed07df8a7 - - default default] Releasing lock "refresh_cache-f921c880-38a7-40b6-8300-2123889a19c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:00:27 compute-0 nova_compute[259627]: 2025-10-14 09:00:27.341 2 DEBUG nova.compute.manager [None req-a3062222-6783-40ee-aa26-f94d741ddc1f 08cc6ecbff464ad68e7b883283c9dac5 3560c0a649af405ea342016ed07df8a7 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Oct 14 09:00:27 compute-0 nova_compute[259627]: 2025-10-14 09:00:27.341 2 DEBUG nova.compute.manager [None req-a3062222-6783-40ee-aa26-f94d741ddc1f 08cc6ecbff464ad68e7b883283c9dac5 3560c0a649af405ea342016ed07df8a7 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] network_info to inject: |[{"id": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "address": "fa:16:3e:86:30:a0", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd6657-ae", "ovs_interfaceid": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Oct 14 09:00:27 compute-0 ovn_controller[152662]: 2025-10-14T09:00:27Z|00326|binding|INFO|Releasing lport 6baedd76-8a05-42d6-8356-18b586f58672 from this chassis (sb_readonly=0)
Oct 14 09:00:27 compute-0 ovn_controller[152662]: 2025-10-14T09:00:27Z|00327|binding|INFO|Releasing lport 0616bbde-729a-4cd4-ba39-5fcdf59ece5e from this chassis (sb_readonly=0)
Oct 14 09:00:27 compute-0 nova_compute[259627]: 2025-10-14 09:00:27.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:00:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 451 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.0 MiB/s wr, 323 op/s
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:29 compute-0 ceph-mon[74249]: pgmap v1285: 305 pgs: 305 active+clean; 451 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.0 MiB/s wr, 323 op/s
Oct 14 09:00:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 451 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.0 MiB/s wr, 323 op/s
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.795 2 DEBUG oslo_concurrency.lockutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Acquiring lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.795 2 DEBUG oslo_concurrency.lockutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.804 2 DEBUG nova.compute.manager [req-ca69288a-78c4-4ec5-9513-f9ce66a7b3c1 req-1b892c0b-3f88-4ae6-a6a9-848ec35b127e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Received event network-vif-plugged-0ddc0438-5718-411f-bf95-e14886b82478 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.805 2 DEBUG oslo_concurrency.lockutils [req-ca69288a-78c4-4ec5-9513-f9ce66a7b3c1 req-1b892c0b-3f88-4ae6-a6a9-848ec35b127e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.805 2 DEBUG oslo_concurrency.lockutils [req-ca69288a-78c4-4ec5-9513-f9ce66a7b3c1 req-1b892c0b-3f88-4ae6-a6a9-848ec35b127e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.806 2 DEBUG oslo_concurrency.lockutils [req-ca69288a-78c4-4ec5-9513-f9ce66a7b3c1 req-1b892c0b-3f88-4ae6-a6a9-848ec35b127e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.806 2 DEBUG nova.compute.manager [req-ca69288a-78c4-4ec5-9513-f9ce66a7b3c1 req-1b892c0b-3f88-4ae6-a6a9-848ec35b127e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Processing event network-vif-plugged-0ddc0438-5718-411f-bf95-e14886b82478 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.807 2 DEBUG nova.compute.manager [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.810 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432429.810111, 6f94ea73-1c62-40a9-9300-0d81c596377c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.810 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] VM Resumed (Lifecycle Event)
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.820 2 DEBUG nova.compute.manager [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.828 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.829 2 DEBUG nova.virt.libvirt.driver [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.833 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.835 2 INFO nova.virt.libvirt.driver [-] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Instance spawned successfully.
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.835 2 INFO nova.compute.manager [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Took 11.56 seconds to spawn the instance on the hypervisor.
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.836 2 DEBUG nova.compute.manager [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.850 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.918 2 INFO nova.compute.manager [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Took 12.75 seconds to build instance.
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.936 2 DEBUG oslo_concurrency.lockutils [None req-07d83c19-c10e-4411-8688-2cf056abfd5e 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.938 2 DEBUG oslo_concurrency.lockutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.939 2 DEBUG oslo_concurrency.lockutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.948 2 DEBUG nova.virt.hardware [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:00:29 compute-0 nova_compute[259627]: 2025-10-14 09:00:29.949 2 INFO nova.compute.claims [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:00:30 compute-0 nova_compute[259627]: 2025-10-14 09:00:30.188 2 DEBUG oslo_concurrency.processutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:00:30 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1348210968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:30 compute-0 nova_compute[259627]: 2025-10-14 09:00:30.647 2 DEBUG oslo_concurrency.processutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:30 compute-0 nova_compute[259627]: 2025-10-14 09:00:30.657 2 DEBUG nova.compute.provider_tree [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:00:30 compute-0 nova_compute[259627]: 2025-10-14 09:00:30.677 2 DEBUG nova.scheduler.client.report [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:00:30 compute-0 nova_compute[259627]: 2025-10-14 09:00:30.705 2 DEBUG oslo_concurrency.lockutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:30 compute-0 nova_compute[259627]: 2025-10-14 09:00:30.706 2 DEBUG nova.compute.manager [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:00:30 compute-0 nova_compute[259627]: 2025-10-14 09:00:30.759 2 DEBUG nova.compute.manager [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:00:30 compute-0 nova_compute[259627]: 2025-10-14 09:00:30.760 2 DEBUG nova.network.neutron [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:00:30 compute-0 nova_compute[259627]: 2025-10-14 09:00:30.782 2 INFO nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:00:30 compute-0 nova_compute[259627]: 2025-10-14 09:00:30.804 2 DEBUG nova.compute.manager [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:00:30 compute-0 nova_compute[259627]: 2025-10-14 09:00:30.904 2 DEBUG nova.compute.manager [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:00:30 compute-0 nova_compute[259627]: 2025-10-14 09:00:30.907 2 DEBUG nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:00:30 compute-0 nova_compute[259627]: 2025-10-14 09:00:30.907 2 INFO nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Creating image(s)
Oct 14 09:00:30 compute-0 nova_compute[259627]: 2025-10-14 09:00:30.932 2 DEBUG nova.storage.rbd_utils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] rbd image ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:30 compute-0 nova_compute[259627]: 2025-10-14 09:00:30.964 2 DEBUG nova.storage.rbd_utils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] rbd image ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:30 compute-0 nova_compute[259627]: 2025-10-14 09:00:30.994 2 DEBUG nova.storage.rbd_utils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] rbd image ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:30.999 2 DEBUG oslo_concurrency.processutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.091 2 DEBUG nova.policy [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cfa85d055bc34b8a8c67b895be1ce2f5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6c678b346288454cb60e84d74446e637', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.098 2 DEBUG oslo_concurrency.lockutils [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "6f94ea73-1c62-40a9-9300-0d81c596377c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.099 2 DEBUG oslo_concurrency.lockutils [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.099 2 DEBUG oslo_concurrency.lockutils [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.100 2 DEBUG oslo_concurrency.lockutils [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.101 2 DEBUG oslo_concurrency.lockutils [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.104 2 INFO nova.compute.manager [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Terminating instance
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.107 2 DEBUG nova.compute.manager [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.108 2 DEBUG oslo_concurrency.processutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.110 2 DEBUG oslo_concurrency.lockutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.111 2 DEBUG oslo_concurrency.lockutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.112 2 DEBUG oslo_concurrency.lockutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.147 2 DEBUG nova.storage.rbd_utils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] rbd image ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.152 2 DEBUG oslo_concurrency.processutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:31 compute-0 ceph-mon[74249]: pgmap v1286: 305 pgs: 305 active+clean; 451 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.0 MiB/s wr, 323 op/s
Oct 14 09:00:31 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1348210968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:31 compute-0 kernel: tap0ddc0438-57 (unregistering): left promiscuous mode
Oct 14 09:00:31 compute-0 NetworkManager[44885]: <info>  [1760432431.2400] device (tap0ddc0438-57): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:00:31 compute-0 ovn_controller[152662]: 2025-10-14T09:00:31Z|00328|binding|INFO|Releasing lport 0ddc0438-5718-411f-bf95-e14886b82478 from this chassis (sb_readonly=0)
Oct 14 09:00:31 compute-0 ovn_controller[152662]: 2025-10-14T09:00:31Z|00329|binding|INFO|Setting lport 0ddc0438-5718-411f-bf95-e14886b82478 down in Southbound
Oct 14 09:00:31 compute-0 ovn_controller[152662]: 2025-10-14T09:00:31Z|00330|binding|INFO|Removing iface tap0ddc0438-57 ovn-installed in OVS
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.265 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:7f:0c 10.100.0.7'], port_security=['fa:16:3e:90:7f:0c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6f94ea73-1c62-40a9-9300-0d81c596377c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0ddc0438-5718-411f-bf95-e14886b82478) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.267 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0ddc0438-5718-411f-bf95-e14886b82478 in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a unbound from our chassis
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.270 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.296 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a5e32d-74c6-488a-bb58-ba5280e3bf8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:31 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000027.scope: Deactivated successfully.
Oct 14 09:00:31 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000027.scope: Consumed 2.288s CPU time.
Oct 14 09:00:31 compute-0 systemd-machined[214636]: Machine qemu-44-instance-00000027 terminated.
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.333 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[501d7eaf-ef1e-4ba6-877d-fc18096d39ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.337 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c67a3df8-d39f-4d75-92a0-fb1a70dc3991]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.377 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[82f12cf9-fc63-4976-8954-cc592c5ed405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:31 compute-0 systemd-udevd[303141]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:00:31 compute-0 kernel: tap0ddc0438-57: entered promiscuous mode
Oct 14 09:00:31 compute-0 NetworkManager[44885]: <info>  [1760432431.4082] manager: (tap0ddc0438-57): new Tun device (/org/freedesktop/NetworkManager/Devices/156)
Oct 14 09:00:31 compute-0 kernel: tap0ddc0438-57 (unregistering): left promiscuous mode
Oct 14 09:00:31 compute-0 ovn_controller[152662]: 2025-10-14T09:00:31Z|00331|binding|INFO|Claiming lport 0ddc0438-5718-411f-bf95-e14886b82478 for this chassis.
Oct 14 09:00:31 compute-0 ovn_controller[152662]: 2025-10-14T09:00:31Z|00332|binding|INFO|0ddc0438-5718-411f-bf95-e14886b82478: Claiming fa:16:3e:90:7f:0c 10.100.0.7
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.424 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:7f:0c 10.100.0.7'], port_security=['fa:16:3e:90:7f:0c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6f94ea73-1c62-40a9-9300-0d81c596377c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0ddc0438-5718-411f-bf95-e14886b82478) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.436 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac74c72-3383-49e9-9e6a-02de82259426]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618124, 'reachable_time': 18179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303154, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.437 2 INFO nova.compute.manager [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Rebuilding instance
Oct 14 09:00:31 compute-0 ovn_controller[152662]: 2025-10-14T09:00:31Z|00333|binding|INFO|Setting lport 0ddc0438-5718-411f-bf95-e14886b82478 ovn-installed in OVS
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.459 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[56f9ee98-afc9-4503-a17f-28cd0f22b9a0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2322cf7a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618137, 'tstamp': 618137}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303161, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2322cf7a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618140, 'tstamp': 618140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303161, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:31 compute-0 ovn_controller[152662]: 2025-10-14T09:00:31Z|00334|binding|INFO|Setting lport 0ddc0438-5718-411f-bf95-e14886b82478 up in Southbound
Oct 14 09:00:31 compute-0 ovn_controller[152662]: 2025-10-14T09:00:31Z|00335|binding|INFO|Releasing lport 0ddc0438-5718-411f-bf95-e14886b82478 from this chassis (sb_readonly=1)
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.464 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:31 compute-0 ovn_controller[152662]: 2025-10-14T09:00:31Z|00336|if_status|INFO|Dropped 1 log messages in last 13 seconds (most recently, 13 seconds ago) due to excessive rate
Oct 14 09:00:31 compute-0 ovn_controller[152662]: 2025-10-14T09:00:31Z|00337|if_status|INFO|Not setting lport 0ddc0438-5718-411f-bf95-e14886b82478 down as sb is readonly
Oct 14 09:00:31 compute-0 ovn_controller[152662]: 2025-10-14T09:00:31Z|00338|binding|INFO|Removing iface tap0ddc0438-57 ovn-installed in OVS
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:31 compute-0 ovn_controller[152662]: 2025-10-14T09:00:31Z|00339|binding|INFO|Releasing lport 0ddc0438-5718-411f-bf95-e14886b82478 from this chassis (sb_readonly=0)
Oct 14 09:00:31 compute-0 ovn_controller[152662]: 2025-10-14T09:00:31Z|00340|binding|INFO|Setting lport 0ddc0438-5718-411f-bf95-e14886b82478 down in Southbound
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.479 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:7f:0c 10.100.0.7'], port_security=['fa:16:3e:90:7f:0c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6f94ea73-1c62-40a9-9300-0d81c596377c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0ddc0438-5718-411f-bf95-e14886b82478) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.480 2 INFO nova.virt.libvirt.driver [-] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Instance destroyed successfully.
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.480 2 DEBUG nova.objects.instance [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'resources' on Instance uuid 6f94ea73-1c62-40a9-9300-0d81c596377c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.496 2 DEBUG nova.virt.libvirt.vif [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:00:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-33419614',display_name='tempest-ImagesTestJSON-server-33419614',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-33419614',id=39,image_ref='76c5459a-f996-4fca-ad51-ec29f6551489',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:00:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-5dalrlgt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='6de921d2-e251-431d-9333-bae44aa81859',image_min_disk='1',image_min_ram='0',image_owner_id='0d87d2d744db48dc8b32bb4bf6847fce',image_owner_project_name='tempest-ImagesTestJSON-168259448',image_owner_user_name='tempest-ImagesTestJSON-168259448-project-member',image_user_id='3a217215c39e41fea2323ff7b3b4e6aa',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:00:29Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=6f94ea73-1c62-40a9-9300-0d81c596377c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ddc0438-5718-411f-bf95-e14886b82478", "address": "fa:16:3e:90:7f:0c", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddc0438-57", "ovs_interfaceid": "0ddc0438-5718-411f-bf95-e14886b82478", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.496 2 DEBUG nova.network.os_vif_util [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "0ddc0438-5718-411f-bf95-e14886b82478", "address": "fa:16:3e:90:7f:0c", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ddc0438-57", "ovs_interfaceid": "0ddc0438-5718-411f-bf95-e14886b82478", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.497 2 DEBUG nova.network.os_vif_util [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:7f:0c,bridge_name='br-int',has_traffic_filtering=True,id=0ddc0438-5718-411f-bf95-e14886b82478,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddc0438-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.498 2 DEBUG os_vif [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:7f:0c,bridge_name='br-int',has_traffic_filtering=True,id=0ddc0438-5718-411f-bf95-e14886b82478,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddc0438-57') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.500 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ddc0438-57, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.504 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2322cf7a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.504 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.504 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2322cf7a-00, col_values=(('external_ids', {'iface-id': '0616bbde-729a-4cd4-ba39-5fcdf59ece5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.505 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.504 2 DEBUG oslo_concurrency.processutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.506 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0ddc0438-5718-411f-bf95-e14886b82478 in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a unbound from our chassis
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.507 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.532 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0b18d6a5-d750-420c-a69e-c3e116ce15cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.538 2 INFO os_vif [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:7f:0c,bridge_name='br-int',has_traffic_filtering=True,id=0ddc0438-5718-411f-bf95-e14886b82478,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ddc0438-57')
Oct 14 09:00:31 compute-0 podman[303158]: 2025-10-14 09:00:31.550543639 +0000 UTC m=+0.101858233 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.565 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fa05dcac-10b4-40b6-ac38-9941e1fbf2bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:31 compute-0 podman[303159]: 2025-10-14 09:00:31.569915714 +0000 UTC m=+0.118808878 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.573 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5106664b-7dd6-4278-9d99-6110ad87f9e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 451 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.0 MiB/s wr, 331 op/s
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.607 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f7faba5b-bf87-4a93-8b76-984546d6d850]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.610 2 DEBUG nova.storage.rbd_utils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] resizing rbd image ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.627 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[682b8ea7-a7bb-4102-ac4d-421db3a2cd74]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618124, 'reachable_time': 18179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303253, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.656 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[45aaa391-1d4f-482b-9e86-e70130545987]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2322cf7a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618137, 'tstamp': 618137}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303273, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2322cf7a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618140, 'tstamp': 618140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303273, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.657 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.698 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2322cf7a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.698 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.698 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2322cf7a-00, col_values=(('external_ids', {'iface-id': '0616bbde-729a-4cd4-ba39-5fcdf59ece5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.699 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.700 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0ddc0438-5718-411f-bf95-e14886b82478 in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a unbound from our chassis
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.701 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.711 2 DEBUG nova.objects.instance [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.722 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a151d6e7-3e97-4bda-a999-b2e8049c9d3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.759 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f5de81-1ea9-4adc-9040-6626a8d75cd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.764 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[22efd74a-431f-4d38-a10b-841e1108fa70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.770 2 DEBUG nova.compute.manager [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.783 2 DEBUG nova.objects.instance [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lazy-loading 'migration_context' on Instance uuid ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.820 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c03574e2-5cb7-45cc-8ca5-e2ed326cf407]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.836 2 DEBUG nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.837 2 DEBUG nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Ensure instance console log exists: /var/lib/nova/instances/ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.838 2 DEBUG oslo_concurrency.lockutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.838 2 DEBUG oslo_concurrency.lockutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.839 2 DEBUG oslo_concurrency.lockutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.841 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e50e08ba-0ee1-4cad-8ea4-5cd724c63ee2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618124, 'reachable_time': 18179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303300, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.862 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fa051786-abb9-49b4-b578-3eaac4e88f6a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2322cf7a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618137, 'tstamp': 618137}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303301, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2322cf7a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618140, 'tstamp': 618140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303301, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.864 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.867 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2322cf7a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.867 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.868 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2322cf7a-00, col_values=(('external_ids', {'iface-id': '0616bbde-729a-4cd4-ba39-5fcdf59ece5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:31.868 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.895 2 DEBUG nova.objects.instance [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'pci_requests' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.912 2 DEBUG nova.objects.instance [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'pci_devices' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.935 2 DEBUG nova.objects.instance [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'resources' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.954 2 DEBUG nova.network.neutron [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Successfully created port: e98aad77-80b5-4685-8387-939d9379c597 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.959 2 DEBUG nova.objects.instance [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'migration_context' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.978 2 DEBUG nova.objects.instance [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.982 2 DEBUG nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:00:31 compute-0 nova_compute[259627]: 2025-10-14 09:00:31.999 2 DEBUG nova.compute.manager [req-b4269bc5-cf0b-4596-9f4c-7e18d35900e6 req-0273cf9b-23e2-4f8b-b7f7-e13811ffc8e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Received event network-vif-plugged-0ddc0438-5718-411f-bf95-e14886b82478 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.000 2 DEBUG oslo_concurrency.lockutils [req-b4269bc5-cf0b-4596-9f4c-7e18d35900e6 req-0273cf9b-23e2-4f8b-b7f7-e13811ffc8e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.000 2 DEBUG oslo_concurrency.lockutils [req-b4269bc5-cf0b-4596-9f4c-7e18d35900e6 req-0273cf9b-23e2-4f8b-b7f7-e13811ffc8e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.001 2 DEBUG oslo_concurrency.lockutils [req-b4269bc5-cf0b-4596-9f4c-7e18d35900e6 req-0273cf9b-23e2-4f8b-b7f7-e13811ffc8e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.001 2 DEBUG nova.compute.manager [req-b4269bc5-cf0b-4596-9f4c-7e18d35900e6 req-0273cf9b-23e2-4f8b-b7f7-e13811ffc8e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] No waiting events found dispatching network-vif-plugged-0ddc0438-5718-411f-bf95-e14886b82478 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.002 2 WARNING nova.compute.manager [req-b4269bc5-cf0b-4596-9f4c-7e18d35900e6 req-0273cf9b-23e2-4f8b-b7f7-e13811ffc8e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Received unexpected event network-vif-plugged-0ddc0438-5718-411f-bf95-e14886b82478 for instance with vm_state active and task_state deleting.
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.003 2 DEBUG nova.compute.manager [req-b4269bc5-cf0b-4596-9f4c-7e18d35900e6 req-0273cf9b-23e2-4f8b-b7f7-e13811ffc8e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Received event network-vif-unplugged-0ddc0438-5718-411f-bf95-e14886b82478 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.003 2 DEBUG oslo_concurrency.lockutils [req-b4269bc5-cf0b-4596-9f4c-7e18d35900e6 req-0273cf9b-23e2-4f8b-b7f7-e13811ffc8e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.003 2 DEBUG oslo_concurrency.lockutils [req-b4269bc5-cf0b-4596-9f4c-7e18d35900e6 req-0273cf9b-23e2-4f8b-b7f7-e13811ffc8e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.004 2 DEBUG oslo_concurrency.lockutils [req-b4269bc5-cf0b-4596-9f4c-7e18d35900e6 req-0273cf9b-23e2-4f8b-b7f7-e13811ffc8e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.004 2 DEBUG nova.compute.manager [req-b4269bc5-cf0b-4596-9f4c-7e18d35900e6 req-0273cf9b-23e2-4f8b-b7f7-e13811ffc8e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] No waiting events found dispatching network-vif-unplugged-0ddc0438-5718-411f-bf95-e14886b82478 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.004 2 DEBUG nova.compute.manager [req-b4269bc5-cf0b-4596-9f4c-7e18d35900e6 req-0273cf9b-23e2-4f8b-b7f7-e13811ffc8e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Received event network-vif-unplugged-0ddc0438-5718-411f-bf95-e14886b82478 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.013 2 INFO nova.virt.libvirt.driver [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Deleting instance files /var/lib/nova/instances/6f94ea73-1c62-40a9-9300-0d81c596377c_del
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.014 2 INFO nova.virt.libvirt.driver [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Deletion of /var/lib/nova/instances/6f94ea73-1c62-40a9-9300-0d81c596377c_del complete
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.090 2 INFO nova.compute.manager [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Took 0.98 seconds to destroy the instance on the hypervisor.
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.091 2 DEBUG oslo.service.loopingcall [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.091 2 DEBUG nova.compute.manager [-] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.091 2 DEBUG nova.network.neutron [-] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:00:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.561 2 DEBUG oslo_concurrency.lockutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Acquiring lock "1fa36bef-0a0f-4f9c-877c-325a656fa127" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.562 2 DEBUG oslo_concurrency.lockutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "1fa36bef-0a0f-4f9c-877c-325a656fa127" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.590 2 DEBUG nova.compute.manager [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.659 2 DEBUG oslo_concurrency.lockutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.660 2 DEBUG oslo_concurrency.lockutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.668 2 DEBUG nova.virt.hardware [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.669 2 INFO nova.compute.claims [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:00:32
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'volumes', 'images', 'vms', 'cephfs.cephfs.meta']
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.753 2 DEBUG nova.network.neutron [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Successfully updated port: e98aad77-80b5-4685-8387-939d9379c597 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.768 2 DEBUG oslo_concurrency.lockutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Acquiring lock "refresh_cache-ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.768 2 DEBUG oslo_concurrency.lockutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Acquired lock "refresh_cache-ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.769 2 DEBUG nova.network.neutron [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.848 2 DEBUG nova.compute.manager [req-fdc186dc-e707-4f5c-9bbe-239de7181792 req-3f2705cd-ef41-42e0-8b4d-272297885664 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Received event network-changed-e98aad77-80b5-4685-8387-939d9379c597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.849 2 DEBUG nova.compute.manager [req-fdc186dc-e707-4f5c-9bbe-239de7181792 req-3f2705cd-ef41-42e0-8b4d-272297885664 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Refreshing instance network info cache due to event network-changed-e98aad77-80b5-4685-8387-939d9379c597. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.850 2 DEBUG oslo_concurrency.lockutils [req-fdc186dc-e707-4f5c-9bbe-239de7181792 req-3f2705cd-ef41-42e0-8b4d-272297885664 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.908 2 DEBUG nova.network.neutron [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:00:32 compute-0 nova_compute[259627]: 2025-10-14 09:00:32.919 2 DEBUG oslo_concurrency.processutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:00:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.081 2 DEBUG nova.network.neutron [-] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.101 2 INFO nova.compute.manager [-] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Took 1.01 seconds to deallocate network for instance.
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.153 2 DEBUG oslo_concurrency.lockutils [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:33 compute-0 ceph-mon[74249]: pgmap v1287: 305 pgs: 305 active+clean; 451 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.0 MiB/s wr, 331 op/s
Oct 14 09:00:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:00:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1236002524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.426 2 DEBUG oslo_concurrency.processutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.433 2 DEBUG nova.compute.provider_tree [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.455 2 DEBUG nova.scheduler.client.report [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.495 2 DEBUG oslo_concurrency.lockutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.496 2 DEBUG nova.compute.manager [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.499 2 DEBUG oslo_concurrency.lockutils [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.571 2 DEBUG nova.compute.manager [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.594 2 INFO nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:00:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 451 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 894 KiB/s wr, 139 op/s
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.616 2 DEBUG nova.compute.manager [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.696 2 DEBUG oslo_concurrency.processutils [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.738 2 DEBUG nova.compute.manager [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.740 2 DEBUG nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.741 2 INFO nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Creating image(s)
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.763 2 DEBUG nova.storage.rbd_utils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] rbd image 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.788 2 DEBUG nova.storage.rbd_utils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] rbd image 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.813 2 DEBUG nova.storage.rbd_utils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] rbd image 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.817 2 DEBUG oslo_concurrency.processutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.915 2 DEBUG oslo_concurrency.processutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.917 2 DEBUG oslo_concurrency.lockutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.918 2 DEBUG oslo_concurrency.lockutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.918 2 DEBUG oslo_concurrency.lockutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.949 2 DEBUG nova.storage.rbd_utils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] rbd image 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.956 2 DEBUG oslo_concurrency.processutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:33 compute-0 nova_compute[259627]: 2025-10-14 09:00:33.998 2 DEBUG nova.network.neutron [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Updating instance_info_cache with network_info: [{"id": "e98aad77-80b5-4685-8387-939d9379c597", "address": "fa:16:3e:15:2f:92", "network": {"id": "079eaf38-8afb-4fe5-89ff-6d124da2b7ce", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-222116053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c678b346288454cb60e84d74446e637", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape98aad77-80", "ovs_interfaceid": "e98aad77-80b5-4685-8387-939d9379c597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.026 2 DEBUG oslo_concurrency.lockutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Releasing lock "refresh_cache-ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.026 2 DEBUG nova.compute.manager [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Instance network_info: |[{"id": "e98aad77-80b5-4685-8387-939d9379c597", "address": "fa:16:3e:15:2f:92", "network": {"id": "079eaf38-8afb-4fe5-89ff-6d124da2b7ce", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-222116053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c678b346288454cb60e84d74446e637", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape98aad77-80", "ovs_interfaceid": "e98aad77-80b5-4685-8387-939d9379c597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.027 2 DEBUG oslo_concurrency.lockutils [req-fdc186dc-e707-4f5c-9bbe-239de7181792 req-3f2705cd-ef41-42e0-8b4d-272297885664 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.027 2 DEBUG nova.network.neutron [req-fdc186dc-e707-4f5c-9bbe-239de7181792 req-3f2705cd-ef41-42e0-8b4d-272297885664 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Refreshing network info cache for port e98aad77-80b5-4685-8387-939d9379c597 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.031 2 DEBUG nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Start _get_guest_xml network_info=[{"id": "e98aad77-80b5-4685-8387-939d9379c597", "address": "fa:16:3e:15:2f:92", "network": {"id": "079eaf38-8afb-4fe5-89ff-6d124da2b7ce", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-222116053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c678b346288454cb60e84d74446e637", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape98aad77-80", "ovs_interfaceid": "e98aad77-80b5-4685-8387-939d9379c597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.036 2 WARNING nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.053 2 DEBUG nova.virt.libvirt.host [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.053 2 DEBUG nova.virt.libvirt.host [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.058 2 DEBUG nova.virt.libvirt.host [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.059 2 DEBUG nova.virt.libvirt.host [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.059 2 DEBUG nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.060 2 DEBUG nova.virt.hardware [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.060 2 DEBUG nova.virt.hardware [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.060 2 DEBUG nova.virt.hardware [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.061 2 DEBUG nova.virt.hardware [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.061 2 DEBUG nova.virt.hardware [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.061 2 DEBUG nova.virt.hardware [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.062 2 DEBUG nova.virt.hardware [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.062 2 DEBUG nova.virt.hardware [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.062 2 DEBUG nova.virt.hardware [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.063 2 DEBUG nova.virt.hardware [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.063 2 DEBUG nova.virt.hardware [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.067 2 DEBUG oslo_concurrency.processutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.112 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432418.998205, 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.112 2 INFO nova.compute.manager [-] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] VM Stopped (Lifecycle Event)
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.151 2 DEBUG nova.compute.manager [None req-5082ddc6-b08c-4bf5-9891-29ea30783759 - - - - - -] [instance: 24f83c83-b3c3-4139-ae1d-f6a3e3fe33fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:00:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3409947649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.159 2 DEBUG nova.compute.manager [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Received event network-vif-plugged-0ddc0438-5718-411f-bf95-e14886b82478 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.160 2 DEBUG oslo_concurrency.lockutils [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.160 2 DEBUG oslo_concurrency.lockutils [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.161 2 DEBUG oslo_concurrency.lockutils [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.162 2 DEBUG nova.compute.manager [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] No waiting events found dispatching network-vif-plugged-0ddc0438-5718-411f-bf95-e14886b82478 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.162 2 WARNING nova.compute.manager [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Received unexpected event network-vif-plugged-0ddc0438-5718-411f-bf95-e14886b82478 for instance with vm_state deleted and task_state None.
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.163 2 DEBUG nova.compute.manager [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Received event network-vif-plugged-0ddc0438-5718-411f-bf95-e14886b82478 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.163 2 DEBUG oslo_concurrency.lockutils [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.164 2 DEBUG oslo_concurrency.lockutils [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.164 2 DEBUG oslo_concurrency.lockutils [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.164 2 DEBUG nova.compute.manager [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] No waiting events found dispatching network-vif-plugged-0ddc0438-5718-411f-bf95-e14886b82478 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.165 2 WARNING nova.compute.manager [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Received unexpected event network-vif-plugged-0ddc0438-5718-411f-bf95-e14886b82478 for instance with vm_state deleted and task_state None.
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.165 2 DEBUG nova.compute.manager [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Received event network-vif-plugged-0ddc0438-5718-411f-bf95-e14886b82478 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.166 2 DEBUG oslo_concurrency.lockutils [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.166 2 DEBUG oslo_concurrency.lockutils [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.166 2 DEBUG oslo_concurrency.lockutils [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.167 2 DEBUG nova.compute.manager [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] No waiting events found dispatching network-vif-plugged-0ddc0438-5718-411f-bf95-e14886b82478 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.167 2 WARNING nova.compute.manager [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Received unexpected event network-vif-plugged-0ddc0438-5718-411f-bf95-e14886b82478 for instance with vm_state deleted and task_state None.
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.167 2 DEBUG nova.compute.manager [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Received event network-vif-unplugged-0ddc0438-5718-411f-bf95-e14886b82478 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.168 2 DEBUG oslo_concurrency.lockutils [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.168 2 DEBUG oslo_concurrency.lockutils [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.168 2 DEBUG oslo_concurrency.lockutils [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.169 2 DEBUG nova.compute.manager [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] No waiting events found dispatching network-vif-unplugged-0ddc0438-5718-411f-bf95-e14886b82478 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.169 2 WARNING nova.compute.manager [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Received unexpected event network-vif-unplugged-0ddc0438-5718-411f-bf95-e14886b82478 for instance with vm_state deleted and task_state None.
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.169 2 DEBUG nova.compute.manager [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Received event network-vif-plugged-0ddc0438-5718-411f-bf95-e14886b82478 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.170 2 DEBUG oslo_concurrency.lockutils [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.170 2 DEBUG oslo_concurrency.lockutils [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.170 2 DEBUG oslo_concurrency.lockutils [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.172 2 DEBUG nova.compute.manager [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] No waiting events found dispatching network-vif-plugged-0ddc0438-5718-411f-bf95-e14886b82478 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.172 2 WARNING nova.compute.manager [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Received unexpected event network-vif-plugged-0ddc0438-5718-411f-bf95-e14886b82478 for instance with vm_state deleted and task_state None.
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.173 2 DEBUG nova.compute.manager [req-363c5358-17ce-458f-910b-f5c7e2dcdcec req-739da6e6-0488-4c08-8d87-c9bbf06c2d05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Received event network-vif-deleted-0ddc0438-5718-411f-bf95-e14886b82478 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.179 2 DEBUG oslo_concurrency.processutils [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.221 2 DEBUG nova.compute.provider_tree [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:00:34 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1236002524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:34 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3409947649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.239 2 DEBUG nova.scheduler.client.report [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.266 2 DEBUG oslo_concurrency.lockutils [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:34 compute-0 kernel: tap60379992-d7 (unregistering): left promiscuous mode
Oct 14 09:00:34 compute-0 NetworkManager[44885]: <info>  [1760432434.2922] device (tap60379992-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:00:34 compute-0 ovn_controller[152662]: 2025-10-14T09:00:34Z|00341|binding|INFO|Releasing lport 60379992-d75d-4eff-a6bb-5d1615f35475 from this chassis (sb_readonly=0)
Oct 14 09:00:34 compute-0 ovn_controller[152662]: 2025-10-14T09:00:34Z|00342|binding|INFO|Setting lport 60379992-d75d-4eff-a6bb-5d1615f35475 down in Southbound
Oct 14 09:00:34 compute-0 ovn_controller[152662]: 2025-10-14T09:00:34Z|00343|binding|INFO|Removing iface tap60379992-d7 ovn-installed in OVS
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.306 2 DEBUG oslo_concurrency.processutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.308 2 INFO nova.scheduler.client.report [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Deleted allocations for instance 6f94ea73-1c62-40a9-9300-0d81c596377c
Oct 14 09:00:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:34.310 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:c2:c5 10.100.0.8'], port_security=['fa:16:3e:47:c2:c5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '310ebd88-5fe0-40ad-99dd-c3a1b410d357', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a616a5a0-dd86-4326-bbdf-7cf172de843b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1013e89-bd11-44b1-be74-a5ce3b4c520f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=60379992-d75d-4eff-a6bb-5d1615f35475) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:34.311 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 60379992-d75d-4eff-a6bb-5d1615f35475 in datapath ea0c857a-d31a-43a0-b285-c89c1ddc920a unbound from our chassis
Oct 14 09:00:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:34.313 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ea0c857a-d31a-43a0-b285-c89c1ddc920a
Oct 14 09:00:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:34.330 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[19d3d70b-53e0-45ed-8807-0814711c0e9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:34 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000020.scope: Deactivated successfully.
Oct 14 09:00:34 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000020.scope: Consumed 14.182s CPU time.
Oct 14 09:00:34 compute-0 systemd-machined[214636]: Machine qemu-36-instance-00000020 terminated.
Oct 14 09:00:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:34.359 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fe241d84-b5bd-491a-8632-7bac40842e7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:34.362 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe86938-a66d-4430-a9dc-c0f86d4d324c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:34.391 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f2502a89-0330-4aae-ba94-a205e001b2a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.409 2 DEBUG nova.storage.rbd_utils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] resizing rbd image 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:00:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:34.410 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[389aa174-3e88-4ba3-b40f-6e68bea02319]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea0c857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:51:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616925, 'reachable_time': 21171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303507, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:34.431 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[821c242c-c8f8-4ceb-a387-544698cbafe2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616940, 'tstamp': 616940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303508, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616944, 'tstamp': 616944}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303508, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:34.432 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea0c857a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:34.438 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea0c857a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:34.438 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:34.438 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapea0c857a-d0, col_values=(('external_ids', {'iface-id': '6baedd76-8a05-42d6-8356-18b586f58672'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:34.439 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.447 2 DEBUG oslo_concurrency.lockutils [None req-c3c3522d-ee3d-4395-b413-c699109f8a94 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "6f94ea73-1c62-40a9-9300-0d81c596377c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.499 2 DEBUG nova.objects.instance [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lazy-loading 'migration_context' on Instance uuid 1fa36bef-0a0f-4f9c-877c-325a656fa127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.512 2 DEBUG nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.512 2 DEBUG nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Ensure instance console log exists: /var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.513 2 DEBUG oslo_concurrency.lockutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.513 2 DEBUG oslo_concurrency.lockutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.513 2 DEBUG oslo_concurrency.lockutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.514 2 DEBUG nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.520 2 WARNING nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.536 2 DEBUG nova.virt.libvirt.host [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.537 2 DEBUG nova.virt.libvirt.host [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.544 2 DEBUG nova.virt.libvirt.host [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.544 2 DEBUG nova.virt.libvirt.host [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.544 2 DEBUG nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.544 2 DEBUG nova.virt.hardware [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.545 2 DEBUG nova.virt.hardware [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.545 2 DEBUG nova.virt.hardware [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.545 2 DEBUG nova.virt.hardware [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.546 2 DEBUG nova.virt.hardware [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.546 2 DEBUG nova.virt.hardware [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.546 2 DEBUG nova.virt.hardware [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.546 2 DEBUG nova.virt.hardware [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.546 2 DEBUG nova.virt.hardware [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.547 2 DEBUG nova.virt.hardware [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.547 2 DEBUG nova.virt.hardware [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.549 2 DEBUG oslo_concurrency.processutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2654967507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.599 2 DEBUG oslo_concurrency.processutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.624 2 DEBUG nova.storage.rbd_utils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] rbd image ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:34 compute-0 nova_compute[259627]: 2025-10-14 09:00:34.628 2 DEBUG oslo_concurrency.processutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.011 2 INFO nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance shutdown successfully after 3 seconds.
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.018 2 INFO nova.virt.libvirt.driver [-] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance destroyed successfully.
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.025 2 INFO nova.virt.libvirt.driver [-] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance destroyed successfully.
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.026 2 DEBUG nova.virt.libvirt.vif [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:59:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-490112967',display_name='tempest-ServersAdminTestJSON-server-490112967',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-490112967',id=32,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-xf25yl7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:00:30Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=310ebd88-5fe0-40ad-99dd-c3a1b410d357,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.027 2 DEBUG nova.network.os_vif_util [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.028 2 DEBUG nova.network.os_vif_util [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.028 2 DEBUG os_vif [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:00:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:35 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1656633032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.030 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60379992-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.038 2 DEBUG nova.compute.manager [req-35894e22-c0f9-4131-b877-fbf74c039853 req-5a595edf-e22a-44a1-8dc4-d737841e1cd0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received event network-vif-unplugged-60379992-d75d-4eff-a6bb-5d1615f35475 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.038 2 DEBUG oslo_concurrency.lockutils [req-35894e22-c0f9-4131-b877-fbf74c039853 req-5a595edf-e22a-44a1-8dc4-d737841e1cd0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.039 2 DEBUG oslo_concurrency.lockutils [req-35894e22-c0f9-4131-b877-fbf74c039853 req-5a595edf-e22a-44a1-8dc4-d737841e1cd0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.039 2 DEBUG oslo_concurrency.lockutils [req-35894e22-c0f9-4131-b877-fbf74c039853 req-5a595edf-e22a-44a1-8dc4-d737841e1cd0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.039 2 DEBUG nova.compute.manager [req-35894e22-c0f9-4131-b877-fbf74c039853 req-5a595edf-e22a-44a1-8dc4-d737841e1cd0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] No waiting events found dispatching network-vif-unplugged-60379992-d75d-4eff-a6bb-5d1615f35475 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.039 2 WARNING nova.compute.manager [req-35894e22-c0f9-4131-b877-fbf74c039853 req-5a595edf-e22a-44a1-8dc4-d737841e1cd0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received unexpected event network-vif-unplugged-60379992-d75d-4eff-a6bb-5d1615f35475 for instance with vm_state error and task_state rebuilding.
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.042 2 INFO os_vif [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7')
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.057 2 DEBUG oslo_concurrency.processutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.090 2 DEBUG nova.storage.rbd_utils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] rbd image 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.095 2 DEBUG oslo_concurrency.processutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:35 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/69817023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.132 2 DEBUG oslo_concurrency.processutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.134 2 DEBUG nova.virt.libvirt.vif [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:00:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-646351129',display_name='tempest-InstanceActionsNegativeTestJSON-server-646351129',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-646351129',id=40,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c678b346288454cb60e84d74446e637',ramdisk_id='',reservation_id='r-9ydju5om',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1092081928',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1092081928-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:00:30Z,user_data=None,user_id='cfa85d055bc34b8a8c67b895be1ce2f5',uuid=ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e98aad77-80b5-4685-8387-939d9379c597", "address": "fa:16:3e:15:2f:92", "network": {"id": "079eaf38-8afb-4fe5-89ff-6d124da2b7ce", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-222116053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c678b346288454cb60e84d74446e637", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape98aad77-80", "ovs_interfaceid": "e98aad77-80b5-4685-8387-939d9379c597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.134 2 DEBUG nova.network.os_vif_util [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Converting VIF {"id": "e98aad77-80b5-4685-8387-939d9379c597", "address": "fa:16:3e:15:2f:92", "network": {"id": "079eaf38-8afb-4fe5-89ff-6d124da2b7ce", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-222116053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c678b346288454cb60e84d74446e637", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape98aad77-80", "ovs_interfaceid": "e98aad77-80b5-4685-8387-939d9379c597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.135 2 DEBUG nova.network.os_vif_util [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:2f:92,bridge_name='br-int',has_traffic_filtering=True,id=e98aad77-80b5-4685-8387-939d9379c597,network=Network(079eaf38-8afb-4fe5-89ff-6d124da2b7ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape98aad77-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.144 2 DEBUG nova.objects.instance [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lazy-loading 'pci_devices' on Instance uuid ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.165 2 DEBUG nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <uuid>ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c</uuid>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <name>instance-00000028</name>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <nova:name>tempest-InstanceActionsNegativeTestJSON-server-646351129</nova:name>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:00:34</nova:creationTime>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <nova:user uuid="cfa85d055bc34b8a8c67b895be1ce2f5">tempest-InstanceActionsNegativeTestJSON-1092081928-project-member</nova:user>
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <nova:project uuid="6c678b346288454cb60e84d74446e637">tempest-InstanceActionsNegativeTestJSON-1092081928</nova:project>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <nova:port uuid="e98aad77-80b5-4685-8387-939d9379c597">
Oct 14 09:00:35 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <system>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <entry name="serial">ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c</entry>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <entry name="uuid">ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c</entry>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     </system>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <os>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   </os>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <features>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   </features>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c_disk">
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c_disk.config">
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:15:2f:92"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <target dev="tape98aad77-80"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c/console.log" append="off"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <video>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     </video>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:00:35 compute-0 nova_compute[259627]: </domain>
Oct 14 09:00:35 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.165 2 DEBUG nova.compute.manager [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Preparing to wait for external event network-vif-plugged-e98aad77-80b5-4685-8387-939d9379c597 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.166 2 DEBUG oslo_concurrency.lockutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Acquiring lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.166 2 DEBUG oslo_concurrency.lockutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.166 2 DEBUG oslo_concurrency.lockutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.168 2 DEBUG nova.virt.libvirt.vif [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:00:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-646351129',display_name='tempest-InstanceActionsNegativeTestJSON-server-646351129',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-646351129',id=40,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c678b346288454cb60e84d74446e637',ramdisk_id='',reservation_id='r-9ydju5om',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1092081928',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1092081928-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:00:30Z,user_data=None,user_id='cfa85d055bc34b8a8c67b895be1ce2f5',uuid=ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e98aad77-80b5-4685-8387-939d9379c597", "address": "fa:16:3e:15:2f:92", "network": {"id": "079eaf38-8afb-4fe5-89ff-6d124da2b7ce", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-222116053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c678b346288454cb60e84d74446e637", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape98aad77-80", "ovs_interfaceid": "e98aad77-80b5-4685-8387-939d9379c597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.168 2 DEBUG nova.network.os_vif_util [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Converting VIF {"id": "e98aad77-80b5-4685-8387-939d9379c597", "address": "fa:16:3e:15:2f:92", "network": {"id": "079eaf38-8afb-4fe5-89ff-6d124da2b7ce", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-222116053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c678b346288454cb60e84d74446e637", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape98aad77-80", "ovs_interfaceid": "e98aad77-80b5-4685-8387-939d9379c597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.169 2 DEBUG nova.network.os_vif_util [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:2f:92,bridge_name='br-int',has_traffic_filtering=True,id=e98aad77-80b5-4685-8387-939d9379c597,network=Network(079eaf38-8afb-4fe5-89ff-6d124da2b7ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape98aad77-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.169 2 DEBUG os_vif [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:2f:92,bridge_name='br-int',has_traffic_filtering=True,id=e98aad77-80b5-4685-8387-939d9379c597,network=Network(079eaf38-8afb-4fe5-89ff-6d124da2b7ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape98aad77-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.175 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape98aad77-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.176 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape98aad77-80, col_values=(('external_ids', {'iface-id': 'e98aad77-80b5-4685-8387-939d9379c597', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:2f:92', 'vm-uuid': 'ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:35 compute-0 NetworkManager[44885]: <info>  [1760432435.1788] manager: (tape98aad77-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.186 2 INFO os_vif [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:2f:92,bridge_name='br-int',has_traffic_filtering=True,id=e98aad77-80b5-4685-8387-939d9379c597,network=Network(079eaf38-8afb-4fe5-89ff-6d124da2b7ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape98aad77-80')
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.238 2 DEBUG nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.238 2 DEBUG nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.239 2 DEBUG nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] No VIF found with MAC fa:16:3e:15:2f:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.239 2 INFO nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Using config drive
Oct 14 09:00:35 compute-0 ceph-mon[74249]: pgmap v1288: 305 pgs: 305 active+clean; 451 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 894 KiB/s wr, 139 op/s
Oct 14 09:00:35 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2654967507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:35 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1656633032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:35 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/69817023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.266 2 DEBUG nova.storage.rbd_utils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] rbd image ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.525 2 INFO nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Deleting instance files /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357_del
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.525 2 INFO nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Deletion of /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357_del complete
Oct 14 09:00:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:35 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/525131440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.552 2 DEBUG oslo_concurrency.processutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.553 2 DEBUG nova.objects.instance [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lazy-loading 'pci_devices' on Instance uuid 1fa36bef-0a0f-4f9c-877c-325a656fa127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.589 2 DEBUG nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <uuid>1fa36bef-0a0f-4f9c-877c-325a656fa127</uuid>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <name>instance-00000029</name>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerShowV254Test-server-272120799</nova:name>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:00:34</nova:creationTime>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <nova:user uuid="ea4260bffbae4126852a6ddd7dbb8c70">tempest-ServerShowV254Test-33884122-project-member</nova:user>
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <nova:project uuid="0dccd092671340e29357b2b2f18f6e3e">tempest-ServerShowV254Test-33884122</nova:project>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <system>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <entry name="serial">1fa36bef-0a0f-4f9c-877c-325a656fa127</entry>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <entry name="uuid">1fa36bef-0a0f-4f9c-877c-325a656fa127</entry>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     </system>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <os>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   </os>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <features>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   </features>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1fa36bef-0a0f-4f9c-877c-325a656fa127_disk">
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1fa36bef-0a0f-4f9c-877c-325a656fa127_disk.config">
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:35 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127/console.log" append="off"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <video>
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     </video>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:00:35 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:00:35 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:00:35 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:00:35 compute-0 nova_compute[259627]: </domain>
Oct 14 09:00:35 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:00:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 527 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.4 MiB/s wr, 280 op/s
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.656 2 DEBUG nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.656 2 DEBUG nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.657 2 INFO nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Using config drive
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.678 2 DEBUG nova.storage.rbd_utils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] rbd image 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.720 2 DEBUG nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.721 2 INFO nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Creating image(s)
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.741 2 DEBUG nova.storage.rbd_utils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.765 2 DEBUG nova.storage.rbd_utils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.787 2 DEBUG nova.storage.rbd_utils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.796 2 DEBUG oslo_concurrency.processutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.871 2 DEBUG oslo_concurrency.processutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.873 2 DEBUG oslo_concurrency.lockutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.874 2 DEBUG oslo_concurrency.lockutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.875 2 DEBUG oslo_concurrency.lockutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.916 2 DEBUG nova.storage.rbd_utils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.926 2 DEBUG oslo_concurrency.processutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.980 2 INFO nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Creating config drive at /var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127/disk.config
Oct 14 09:00:35 compute-0 nova_compute[259627]: 2025-10-14 09:00:35.988 2 DEBUG oslo_concurrency.processutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwzwxongp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.136 2 DEBUG oslo_concurrency.processutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwzwxongp" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.173 2 DEBUG nova.storage.rbd_utils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] rbd image 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.184 2 DEBUG oslo_concurrency.processutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127/disk.config 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.231 2 INFO nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Creating config drive at /var/lib/nova/instances/ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c/disk.config
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.236 2 DEBUG oslo_concurrency.processutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcb2cjciq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Oct 14 09:00:36 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/525131440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Oct 14 09:00:36 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.286 2 DEBUG nova.network.neutron [req-fdc186dc-e707-4f5c-9bbe-239de7181792 req-3f2705cd-ef41-42e0-8b4d-272297885664 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Updated VIF entry in instance network info cache for port e98aad77-80b5-4685-8387-939d9379c597. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.287 2 DEBUG nova.network.neutron [req-fdc186dc-e707-4f5c-9bbe-239de7181792 req-3f2705cd-ef41-42e0-8b4d-272297885664 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Updating instance_info_cache with network_info: [{"id": "e98aad77-80b5-4685-8387-939d9379c597", "address": "fa:16:3e:15:2f:92", "network": {"id": "079eaf38-8afb-4fe5-89ff-6d124da2b7ce", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-222116053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c678b346288454cb60e84d74446e637", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape98aad77-80", "ovs_interfaceid": "e98aad77-80b5-4685-8387-939d9379c597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.290 2 DEBUG oslo_concurrency.processutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.341 2 DEBUG oslo_concurrency.lockutils [req-fdc186dc-e707-4f5c-9bbe-239de7181792 req-3f2705cd-ef41-42e0-8b4d-272297885664 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.378 2 DEBUG nova.storage.rbd_utils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] resizing rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.414 2 DEBUG oslo_concurrency.processutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcb2cjciq" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.440 2 DEBUG nova.storage.rbd_utils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] rbd image ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.443 2 DEBUG oslo_concurrency.processutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c/disk.config ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.476 2 DEBUG oslo_concurrency.processutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127/disk.config 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.478 2 INFO nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Deleting local config drive /var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127/disk.config because it was imported into RBD.
Oct 14 09:00:36 compute-0 systemd-machined[214636]: New machine qemu-45-instance-00000029.
Oct 14 09:00:36 compute-0 systemd[1]: Started Virtual Machine qemu-45-instance-00000029.
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.591 2 DEBUG nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.592 2 DEBUG nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Ensure instance console log exists: /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.592 2 DEBUG oslo_concurrency.lockutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.593 2 DEBUG oslo_concurrency.lockutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.593 2 DEBUG oslo_concurrency.lockutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.596 2 DEBUG nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Start _get_guest_xml network_info=[{"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.604 2 WARNING nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.608 2 DEBUG nova.virt.libvirt.host [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.609 2 DEBUG nova.virt.libvirt.host [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.611 2 DEBUG nova.virt.libvirt.host [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.612 2 DEBUG nova.virt.libvirt.host [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.612 2 DEBUG nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.613 2 DEBUG nova.virt.hardware [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.613 2 DEBUG nova.virt.hardware [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.613 2 DEBUG nova.virt.hardware [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.614 2 DEBUG nova.virt.hardware [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.614 2 DEBUG nova.virt.hardware [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.614 2 DEBUG nova.virt.hardware [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.614 2 DEBUG nova.virt.hardware [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.614 2 DEBUG nova.virt.hardware [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.615 2 DEBUG nova.virt.hardware [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.615 2 DEBUG nova.virt.hardware [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.615 2 DEBUG nova.virt.hardware [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.615 2 DEBUG nova.objects.instance [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.631 2 DEBUG oslo_concurrency.processutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c/disk.config ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.631 2 INFO nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Deleting local config drive /var/lib/nova/instances/ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c/disk.config because it was imported into RBD.
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.646 2 DEBUG oslo_concurrency.processutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:36 compute-0 kernel: tape98aad77-80: entered promiscuous mode
Oct 14 09:00:36 compute-0 NetworkManager[44885]: <info>  [1760432436.6716] manager: (tape98aad77-80): new Tun device (/org/freedesktop/NetworkManager/Devices/158)
Oct 14 09:00:36 compute-0 systemd-udevd[303470]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:00:36 compute-0 ovn_controller[152662]: 2025-10-14T09:00:36Z|00344|binding|INFO|Claiming lport e98aad77-80b5-4685-8387-939d9379c597 for this chassis.
Oct 14 09:00:36 compute-0 ovn_controller[152662]: 2025-10-14T09:00:36Z|00345|binding|INFO|e98aad77-80b5-4685-8387-939d9379c597: Claiming fa:16:3e:15:2f:92 10.100.0.4
Oct 14 09:00:36 compute-0 NetworkManager[44885]: <info>  [1760432436.6867] device (tape98aad77-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:00:36 compute-0 NetworkManager[44885]: <info>  [1760432436.6877] device (tape98aad77-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.688 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:2f:92 10.100.0.4'], port_security=['fa:16:3e:15:2f:92 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-079eaf38-8afb-4fe5-89ff-6d124da2b7ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c678b346288454cb60e84d74446e637', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b9457705-d84b-4d70-9346-4757f784d547', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71c85b37-713a-4740-a2ee-1c8f8f7374a9, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e98aad77-80b5-4685-8387-939d9379c597) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.690 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e98aad77-80b5-4685-8387-939d9379c597 in datapath 079eaf38-8afb-4fe5-89ff-6d124da2b7ce bound to our chassis
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.692 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 079eaf38-8afb-4fe5-89ff-6d124da2b7ce
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.705 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ecdde53a-9a72-4194-9c62-b5a36c2c6d5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.706 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap079eaf38-81 in ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.708 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap079eaf38-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.708 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8a4f6d97-a7d3-473a-8fd7-0df52ce658b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.709 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0155401f-3fac-41ea-b3c1-a9962b2df515]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:36 compute-0 systemd-machined[214636]: New machine qemu-46-instance-00000028.
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.723 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[7622fb12-418e-4d1d-b8b3-c1f7d90984a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:36 compute-0 systemd[1]: Started Virtual Machine qemu-46-instance-00000028.
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.747 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb87295-125f-4123-a639-1e10dcadaafe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:36 compute-0 ovn_controller[152662]: 2025-10-14T09:00:36Z|00346|binding|INFO|Setting lport e98aad77-80b5-4685-8387-939d9379c597 ovn-installed in OVS
Oct 14 09:00:36 compute-0 ovn_controller[152662]: 2025-10-14T09:00:36Z|00347|binding|INFO|Setting lport e98aad77-80b5-4685-8387-939d9379c597 up in Southbound
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.784 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e32f42-0d39-4770-9181-342ee88b47c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:36 compute-0 NetworkManager[44885]: <info>  [1760432436.7928] manager: (tap079eaf38-80): new Veth device (/org/freedesktop/NetworkManager/Devices/159)
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.792 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f2d7cdf4-bb4f-4f31-97fb-141d97058644]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.822 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4c6c34ad-d64b-455c-95c7-e200f9c7cb23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.829 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd0eff3-85fa-4ec8-8ae6-ed8b116ce751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:36 compute-0 NetworkManager[44885]: <info>  [1760432436.8539] device (tap079eaf38-80): carrier: link connected
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.861 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0e27e585-b788-4085-9da9-3cfddf9b3a1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.873 2 DEBUG oslo_concurrency.lockutils [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "6de921d2-e251-431d-9333-bae44aa81859" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.874 2 DEBUG oslo_concurrency.lockutils [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "6de921d2-e251-431d-9333-bae44aa81859" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.874 2 DEBUG oslo_concurrency.lockutils [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "6de921d2-e251-431d-9333-bae44aa81859-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.875 2 DEBUG oslo_concurrency.lockutils [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "6de921d2-e251-431d-9333-bae44aa81859-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.875 2 DEBUG oslo_concurrency.lockutils [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "6de921d2-e251-431d-9333-bae44aa81859-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.876 2 INFO nova.compute.manager [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Terminating instance
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.877 2 DEBUG nova.compute.manager [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.879 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9e5269c0-d350-4561-bfab-8fd30010b15e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap079eaf38-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:e3:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621561, 'reachable_time': 43506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304043, 'error': None, 'target': 'ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.896 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[657665b4-49f6-455d-b4d1-301745b302ed]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe29:e33a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621561, 'tstamp': 621561}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304044, 'error': None, 'target': 'ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.916 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cc799f9d-88cf-4519-8583-129fb7d69c6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap079eaf38-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:e3:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621561, 'reachable_time': 43506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304045, 'error': None, 'target': 'ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:36 compute-0 kernel: tapc0eb9aa7-6f (unregistering): left promiscuous mode
Oct 14 09:00:36 compute-0 NetworkManager[44885]: <info>  [1760432436.9488] device (tapc0eb9aa7-6f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.953 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c536e342-3708-4d05-b8c0-8de2c25f4eff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:36 compute-0 ovn_controller[152662]: 2025-10-14T09:00:36Z|00348|binding|INFO|Releasing lport c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b from this chassis (sb_readonly=0)
Oct 14 09:00:36 compute-0 ovn_controller[152662]: 2025-10-14T09:00:36Z|00349|binding|INFO|Setting lport c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b down in Southbound
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:36 compute-0 ovn_controller[152662]: 2025-10-14T09:00:36Z|00350|binding|INFO|Removing iface tapc0eb9aa7-6f ovn-installed in OVS
Oct 14 09:00:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:36.969 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:83:ca 10.100.0.6'], port_security=['fa:16:3e:1d:83:ca 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6de921d2-e251-431d-9333-bae44aa81859', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:36 compute-0 nova_compute[259627]: 2025-10-14 09:00:36.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:37 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000023.scope: Deactivated successfully.
Oct 14 09:00:37 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000023.scope: Consumed 13.228s CPU time.
Oct 14 09:00:37 compute-0 systemd-machined[214636]: Machine qemu-40-instance-00000023 terminated.
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:37.022 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7139c328-258b-43f4-b03f-12396e572af9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:37.023 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap079eaf38-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:37.023 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:37.024 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap079eaf38-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:37 compute-0 kernel: tap079eaf38-80: entered promiscuous mode
Oct 14 09:00:37 compute-0 NetworkManager[44885]: <info>  [1760432437.0556] manager: (tap079eaf38-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:37.060 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap079eaf38-80, col_values=(('external_ids', {'iface-id': '26b84ac6-5ea4-41bd-86ad-e185a9b36a32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:37 compute-0 ovn_controller[152662]: 2025-10-14T09:00:37Z|00351|binding|INFO|Releasing lport 26b84ac6-5ea4-41bd-86ad-e185a9b36a32 from this chassis (sb_readonly=0)
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:37.087 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/079eaf38-8afb-4fe5-89ff-6d124da2b7ce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/079eaf38-8afb-4fe5-89ff-6d124da2b7ce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:37.089 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ef894580-7787-49c8-9d05-c5f0c39793bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:37.089 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-079eaf38-8afb-4fe5-89ff-6d124da2b7ce
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/079eaf38-8afb-4fe5-89ff-6d124da2b7ce.pid.haproxy
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 079eaf38-8afb-4fe5-89ff-6d124da2b7ce
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:37.090 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce', 'env', 'PROCESS_TAG=haproxy-079eaf38-8afb-4fe5-89ff-6d124da2b7ce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/079eaf38-8afb-4fe5-89ff-6d124da2b7ce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:00:37 compute-0 NetworkManager[44885]: <info>  [1760432437.1065] manager: (tapc0eb9aa7-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/161)
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.126 2 INFO nova.virt.libvirt.driver [-] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Instance destroyed successfully.
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.127 2 DEBUG nova.objects.instance [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'resources' on Instance uuid 6de921d2-e251-431d-9333-bae44aa81859 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.148 2 DEBUG nova.virt.libvirt.vif [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:59:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1877945577',display_name='tempest-ImagesTestJSON-server-1877945577',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1877945577',id=35,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:00:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-3imcd7e3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:00:12Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=6de921d2-e251-431d-9333-bae44aa81859,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "address": "fa:16:3e:1d:83:ca", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eb9aa7-6f", "ovs_interfaceid": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.149 2 DEBUG nova.network.os_vif_util [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "address": "fa:16:3e:1d:83:ca", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eb9aa7-6f", "ovs_interfaceid": "c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.152 2 DEBUG nova.network.os_vif_util [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:83:ca,bridge_name='br-int',has_traffic_filtering=True,id=c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eb9aa7-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.153 2 DEBUG os_vif [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:83:ca,bridge_name='br-int',has_traffic_filtering=True,id=c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eb9aa7-6f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.157 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0eb9aa7-6f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/380031072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.167 2 INFO os_vif [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:83:ca,bridge_name='br-int',has_traffic_filtering=True,id=c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eb9aa7-6f')
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.193 2 DEBUG nova.compute.manager [req-5136912e-b8c5-436d-a92b-f6d45abf103f req-51b32cd6-0b71-498b-afc2-01a19015f0a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.194 2 DEBUG oslo_concurrency.lockutils [req-5136912e-b8c5-436d-a92b-f6d45abf103f req-51b32cd6-0b71-498b-afc2-01a19015f0a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.194 2 DEBUG oslo_concurrency.lockutils [req-5136912e-b8c5-436d-a92b-f6d45abf103f req-51b32cd6-0b71-498b-afc2-01a19015f0a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.194 2 DEBUG oslo_concurrency.lockutils [req-5136912e-b8c5-436d-a92b-f6d45abf103f req-51b32cd6-0b71-498b-afc2-01a19015f0a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.195 2 DEBUG nova.compute.manager [req-5136912e-b8c5-436d-a92b-f6d45abf103f req-51b32cd6-0b71-498b-afc2-01a19015f0a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] No waiting events found dispatching network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.195 2 WARNING nova.compute.manager [req-5136912e-b8c5-436d-a92b-f6d45abf103f req-51b32cd6-0b71-498b-afc2-01a19015f0a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received unexpected event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 for instance with vm_state error and task_state rebuild_spawning.
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.195 2 DEBUG nova.compute.manager [req-5136912e-b8c5-436d-a92b-f6d45abf103f req-51b32cd6-0b71-498b-afc2-01a19015f0a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Received event network-vif-plugged-e98aad77-80b5-4685-8387-939d9379c597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.196 2 DEBUG oslo_concurrency.lockutils [req-5136912e-b8c5-436d-a92b-f6d45abf103f req-51b32cd6-0b71-498b-afc2-01a19015f0a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.196 2 DEBUG oslo_concurrency.lockutils [req-5136912e-b8c5-436d-a92b-f6d45abf103f req-51b32cd6-0b71-498b-afc2-01a19015f0a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.197 2 DEBUG oslo_concurrency.lockutils [req-5136912e-b8c5-436d-a92b-f6d45abf103f req-51b32cd6-0b71-498b-afc2-01a19015f0a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.197 2 DEBUG nova.compute.manager [req-5136912e-b8c5-436d-a92b-f6d45abf103f req-51b32cd6-0b71-498b-afc2-01a19015f0a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Processing event network-vif-plugged-e98aad77-80b5-4685-8387-939d9379c597 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.200 2 DEBUG oslo_concurrency.processutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.228 2 DEBUG nova.storage.rbd_utils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.233 2 DEBUG oslo_concurrency.processutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.266 2 DEBUG nova.compute.manager [req-affbbb37-aecc-49f3-aa9c-f646080f6c82 req-481f3c9d-be50-4ac1-aa0b-37397f61d64e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Received event network-vif-unplugged-c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.267 2 DEBUG oslo_concurrency.lockutils [req-affbbb37-aecc-49f3-aa9c-f646080f6c82 req-481f3c9d-be50-4ac1-aa0b-37397f61d64e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6de921d2-e251-431d-9333-bae44aa81859-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.268 2 DEBUG oslo_concurrency.lockutils [req-affbbb37-aecc-49f3-aa9c-f646080f6c82 req-481f3c9d-be50-4ac1-aa0b-37397f61d64e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6de921d2-e251-431d-9333-bae44aa81859-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.269 2 DEBUG oslo_concurrency.lockutils [req-affbbb37-aecc-49f3-aa9c-f646080f6c82 req-481f3c9d-be50-4ac1-aa0b-37397f61d64e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6de921d2-e251-431d-9333-bae44aa81859-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.269 2 DEBUG nova.compute.manager [req-affbbb37-aecc-49f3-aa9c-f646080f6c82 req-481f3c9d-be50-4ac1-aa0b-37397f61d64e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] No waiting events found dispatching network-vif-unplugged-c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.269 2 DEBUG nova.compute.manager [req-affbbb37-aecc-49f3-aa9c-f646080f6c82 req-481f3c9d-be50-4ac1-aa0b-37397f61d64e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Received event network-vif-unplugged-c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:00:37 compute-0 ceph-mon[74249]: pgmap v1289: 305 pgs: 305 active+clean; 527 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.4 MiB/s wr, 280 op/s
Oct 14 09:00:37 compute-0 ceph-mon[74249]: osdmap e158: 3 total, 3 up, 3 in
Oct 14 09:00:37 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/380031072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:37.285 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:00:37 compute-0 podman[304234]: 2025-10-14 09:00:37.537092423 +0000 UTC m=+0.061245695 container create 332cfea4cb067928333c31c047aa252d68e3f8de031e613bde1b1637f580d14c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:00:37 compute-0 systemd[1]: Started libpod-conmon-332cfea4cb067928333c31c047aa252d68e3f8de031e613bde1b1637f580d14c.scope.
Oct 14 09:00:37 compute-0 podman[304234]: 2025-10-14 09:00:37.501255313 +0000 UTC m=+0.025408655 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:00:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 527 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 178 op/s
Oct 14 09:00:37 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:00:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20cc2d40cbce76ae39682967d3dad1d055d61b8dc9e0a7329c0ef5136a8105a7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:00:37 compute-0 podman[304234]: 2025-10-14 09:00:37.627157265 +0000 UTC m=+0.151310567 container init 332cfea4cb067928333c31c047aa252d68e3f8de031e613bde1b1637f580d14c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS)
Oct 14 09:00:37 compute-0 podman[304234]: 2025-10-14 09:00:37.634065325 +0000 UTC m=+0.158218597 container start 332cfea4cb067928333c31c047aa252d68e3f8de031e613bde1b1637f580d14c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:00:37 compute-0 neutron-haproxy-ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce[304248]: [NOTICE]   (304252) : New worker (304254) forked
Oct 14 09:00:37 compute-0 neutron-haproxy-ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce[304248]: [NOTICE]   (304252) : Loading success.
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:37.690 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a unbound from our chassis
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:37.691 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:37.692 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[55149118-dd6a-4711-a26f-2e59d36837e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:37.692 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a namespace which is not needed anymore
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.704 2 INFO nova.virt.libvirt.driver [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Deleting instance files /var/lib/nova/instances/6de921d2-e251-431d-9333-bae44aa81859_del
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.705 2 INFO nova.virt.libvirt.driver [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Deletion of /var/lib/nova/instances/6de921d2-e251-431d-9333-bae44aa81859_del complete
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.756 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432437.7559593, 1fa36bef-0a0f-4f9c-877c-325a656fa127 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.756 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] VM Resumed (Lifecycle Event)
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.759 2 DEBUG nova.compute.manager [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.759 2 DEBUG nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.762 2 INFO nova.compute.manager [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Took 0.88 seconds to destroy the instance on the hypervisor.
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.762 2 DEBUG oslo.service.loopingcall [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.762 2 DEBUG nova.compute.manager [-] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.763 2 DEBUG nova.network.neutron [-] [instance: 6de921d2-e251-431d-9333-bae44aa81859] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.768 2 INFO nova.virt.libvirt.driver [-] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Instance spawned successfully.
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.769 2 DEBUG nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.788 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.797 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2061999562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.801 2 DEBUG nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.802 2 DEBUG nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.802 2 DEBUG nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.803 2 DEBUG nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.803 2 DEBUG nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.804 2 DEBUG nova.virt.libvirt.driver [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.820 2 DEBUG oslo_concurrency.processutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.821 2 DEBUG nova.virt.libvirt.vif [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T08:59:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-490112967',display_name='tempest-ServersAdminTestJSON-server-490112967',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-490112967',id=32,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-xf25yl7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:00:35Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=310ebd88-5fe0-40ad-99dd-c3a1b410d357,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.821 2 DEBUG nova.network.os_vif_util [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.822 2 DEBUG nova.network.os_vif_util [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.824 2 DEBUG nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:00:37 compute-0 nova_compute[259627]:   <uuid>310ebd88-5fe0-40ad-99dd-c3a1b410d357</uuid>
Oct 14 09:00:37 compute-0 nova_compute[259627]:   <name>instance-00000020</name>
Oct 14 09:00:37 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:00:37 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:00:37 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersAdminTestJSON-server-490112967</nova:name>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:00:36</nova:creationTime>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:00:37 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:00:37 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:00:37 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:00:37 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:00:37 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:00:37 compute-0 nova_compute[259627]:         <nova:user uuid="56001fe1c9fc432e923f8c57058754db">tempest-ServersAdminTestJSON-276167539-project-member</nova:user>
Oct 14 09:00:37 compute-0 nova_compute[259627]:         <nova:project uuid="ed7ee17abdbe419cb7d7fd0da2cd2068">tempest-ServersAdminTestJSON-276167539</nova:project>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="e2368e3e-f504-40e6-a9d3-67df18c845bb"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:00:37 compute-0 nova_compute[259627]:         <nova:port uuid="60379992-d75d-4eff-a6bb-5d1615f35475">
Oct 14 09:00:37 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:00:37 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:00:37 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <system>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <entry name="serial">310ebd88-5fe0-40ad-99dd-c3a1b410d357</entry>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <entry name="uuid">310ebd88-5fe0-40ad-99dd-c3a1b410d357</entry>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     </system>
Oct 14 09:00:37 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:00:37 compute-0 nova_compute[259627]:   <os>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:   </os>
Oct 14 09:00:37 compute-0 nova_compute[259627]:   <features>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:   </features>
Oct 14 09:00:37 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:00:37 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:00:37 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk">
Oct 14 09:00:37 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:37 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config">
Oct 14 09:00:37 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:37 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:47:c2:c5"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <target dev="tap60379992-d7"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/console.log" append="off"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <video>
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     </video>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:00:37 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:00:37 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:00:37 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:00:37 compute-0 nova_compute[259627]: </domain>
Oct 14 09:00:37 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.824 2 DEBUG nova.compute.manager [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Preparing to wait for external event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.825 2 DEBUG oslo_concurrency.lockutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.825 2 DEBUG oslo_concurrency.lockutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.825 2 DEBUG oslo_concurrency.lockutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.825 2 DEBUG nova.virt.libvirt.vif [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T08:59:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-490112967',display_name='tempest-ServersAdminTestJSON-server-490112967',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-490112967',id=32,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-xf25yl7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:00:35Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=310ebd88-5fe0-40ad-99dd-c3a1b410d357,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.826 2 DEBUG nova.network.os_vif_util [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.827 2 DEBUG nova.network.os_vif_util [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.827 2 DEBUG os_vif [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.828 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.828 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.828 2 DEBUG nova.compute.manager [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.837 2 DEBUG nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.837 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60379992-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.838 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap60379992-d7, col_values=(('external_ids', {'iface-id': '60379992-d75d-4eff-a6bb-5d1615f35475', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:c2:c5', 'vm-uuid': '310ebd88-5fe0-40ad-99dd-c3a1b410d357'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:37 compute-0 NetworkManager[44885]: <info>  [1760432437.8402] manager: (tap60379992-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.847 2 INFO os_vif [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7')
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.848 2 INFO nova.virt.libvirt.driver [-] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Instance spawned successfully.
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.848 2 DEBUG nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.851 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.852 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432437.758262, 1fa36bef-0a0f-4f9c-877c-325a656fa127 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.852 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] VM Started (Lifecycle Event)
Oct 14 09:00:37 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[300339]: [NOTICE]   (300343) : haproxy version is 2.8.14-c23fe91
Oct 14 09:00:37 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[300339]: [NOTICE]   (300343) : path to executable is /usr/sbin/haproxy
Oct 14 09:00:37 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[300339]: [WARNING]  (300343) : Exiting Master process...
Oct 14 09:00:37 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[300339]: [WARNING]  (300343) : Exiting Master process...
Oct 14 09:00:37 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[300339]: [ALERT]    (300343) : Current worker (300345) exited with code 143 (Terminated)
Oct 14 09:00:37 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[300339]: [WARNING]  (300343) : All workers exited. Exiting... (0)
Oct 14 09:00:37 compute-0 systemd[1]: libpod-b5296ae0802c0d60cfee18851bb80b1795c583a1b3d20c062b8e4d4781357b65.scope: Deactivated successfully.
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.886 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:37 compute-0 podman[304280]: 2025-10-14 09:00:37.88634725 +0000 UTC m=+0.053678639 container died b5296ae0802c0d60cfee18851bb80b1795c583a1b3d20c062b8e4d4781357b65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.891 2 DEBUG nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.891 2 DEBUG nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.892 2 DEBUG nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.892 2 DEBUG nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.892 2 DEBUG nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.893 2 DEBUG nova.virt.libvirt.driver [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.896 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.899 2 INFO nova.compute.manager [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Took 4.16 seconds to spawn the instance on the hypervisor.
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.899 2 DEBUG nova.compute.manager [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5296ae0802c0d60cfee18851bb80b1795c583a1b3d20c062b8e4d4781357b65-userdata-shm.mount: Deactivated successfully.
Oct 14 09:00:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-88e4364654eb1177932eb1d6868d5f8af1add3539df06012a85c7f076ab965a3-merged.mount: Deactivated successfully.
Oct 14 09:00:37 compute-0 podman[304280]: 2025-10-14 09:00:37.932098993 +0000 UTC m=+0.099430392 container cleanup b5296ae0802c0d60cfee18851bb80b1795c583a1b3d20c062b8e4d4781357b65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:00:37 compute-0 systemd[1]: libpod-conmon-b5296ae0802c0d60cfee18851bb80b1795c583a1b3d20c062b8e4d4781357b65.scope: Deactivated successfully.
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.941 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.941 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432437.8266335, ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.941 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] VM Started (Lifecycle Event)
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.946 2 DEBUG nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.946 2 DEBUG nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.946 2 DEBUG nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No VIF found with MAC fa:16:3e:47:c2:c5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.947 2 INFO nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Using config drive
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.963 2 DEBUG nova.storage.rbd_utils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:37 compute-0 ovn_controller[152662]: 2025-10-14T09:00:37Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:86:30:a0 10.100.0.14
Oct 14 09:00:37 compute-0 ovn_controller[152662]: 2025-10-14T09:00:37Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:86:30:a0 10.100.0.14
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.974 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.977 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.987 2 INFO nova.compute.manager [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Took 5.35 seconds to build instance.
Oct 14 09:00:37 compute-0 nova_compute[259627]: 2025-10-14 09:00:37.990 2 DEBUG nova.objects.instance [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:38 compute-0 podman[304307]: 2025-10-14 09:00:38.003145448 +0000 UTC m=+0.045542819 container remove b5296ae0802c0d60cfee18851bb80b1795c583a1b3d20c062b8e4d4781357b65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.006 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.007 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432437.8272514, ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.007 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] VM Paused (Lifecycle Event)
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.011 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e67963c6-ddf9-4885-b746-26027f7e7075]: (4, ('Tue Oct 14 09:00:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a (b5296ae0802c0d60cfee18851bb80b1795c583a1b3d20c062b8e4d4781357b65)\nb5296ae0802c0d60cfee18851bb80b1795c583a1b3d20c062b8e4d4781357b65\nTue Oct 14 09:00:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a (b5296ae0802c0d60cfee18851bb80b1795c583a1b3d20c062b8e4d4781357b65)\nb5296ae0802c0d60cfee18851bb80b1795c583a1b3d20c062b8e4d4781357b65\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.013 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dd116823-67c1-45a7-9fb8-5634d2233475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.014 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.015 2 INFO nova.compute.manager [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Took 7.11 seconds to spawn the instance on the hypervisor.
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.015 2 DEBUG nova.compute.manager [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:38 compute-0 kernel: tap2322cf7a-00: left promiscuous mode
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.026 2 DEBUG oslo_concurrency.lockutils [None req-e133f0cc-1775-468f-921a-79802650e01a ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "1fa36bef-0a0f-4f9c-877c-325a656fa127" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.028 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.030 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432437.8308938, ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.030 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] VM Resumed (Lifecycle Event)
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.038 2 DEBUG nova.objects.instance [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'keypairs' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.041 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[58459fdd-5a47-4f37-8322-b9dc7a788ab2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.058 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.063 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.064 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[409d537b-fbfb-48ef-87dc-0242170ee094]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.065 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[49551ed5-cdd1-4fea-b184-7f143cdf0f55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.081 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d26aa3-0ad4-4950-8b60-94e2817d5e2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618115, 'reachable_time': 20737, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304340, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d2322cf7a\x2d0090\x2d40fa\x2da558\x2d42d84cc6fc2a.mount: Deactivated successfully.
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.085 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.085 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[268275c0-0e79-445f-a5f4-1f30f4ff026f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.086 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.087 2 INFO nova.compute.manager [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Took 8.19 seconds to build instance.
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.099 2 DEBUG oslo_concurrency.lockutils [None req-0c1bcb33-3459-4241-8a2f-7961da0ead60 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:38 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2061999562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.371 2 INFO nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Creating config drive at /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.375 2 DEBUG oslo_concurrency.processutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk6bjyg_c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.435 2 DEBUG nova.network.neutron [-] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.450 2 INFO nova.compute.manager [-] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Took 0.69 seconds to deallocate network for instance.
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.492 2 DEBUG oslo_concurrency.lockutils [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.493 2 DEBUG oslo_concurrency.lockutils [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.518 2 DEBUG oslo_concurrency.processutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk6bjyg_c" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.544 2 DEBUG nova.storage.rbd_utils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.548 2 DEBUG oslo_concurrency.processutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.701 2 DEBUG oslo_concurrency.processutils [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.739 2 DEBUG oslo_concurrency.processutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.740 2 INFO nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Deleting local config drive /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config because it was imported into RBD.
Oct 14 09:00:38 compute-0 kernel: tap60379992-d7: entered promiscuous mode
Oct 14 09:00:38 compute-0 ovn_controller[152662]: 2025-10-14T09:00:38Z|00352|binding|INFO|Claiming lport 60379992-d75d-4eff-a6bb-5d1615f35475 for this chassis.
Oct 14 09:00:38 compute-0 NetworkManager[44885]: <info>  [1760432438.7870] manager: (tap60379992-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Oct 14 09:00:38 compute-0 ovn_controller[152662]: 2025-10-14T09:00:38Z|00353|binding|INFO|60379992-d75d-4eff-a6bb-5d1615f35475: Claiming fa:16:3e:47:c2:c5 10.100.0.8
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:38 compute-0 systemd-udevd[304341]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.795 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:c2:c5 10.100.0.8'], port_security=['fa:16:3e:47:c2:c5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '310ebd88-5fe0-40ad-99dd-c3a1b410d357', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a616a5a0-dd86-4326-bbdf-7cf172de843b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1013e89-bd11-44b1-be74-a5ce3b4c520f, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=60379992-d75d-4eff-a6bb-5d1615f35475) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.797 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 60379992-d75d-4eff-a6bb-5d1615f35475 in datapath ea0c857a-d31a-43a0-b285-c89c1ddc920a bound to our chassis
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.798 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ea0c857a-d31a-43a0-b285-c89c1ddc920a
Oct 14 09:00:38 compute-0 NetworkManager[44885]: <info>  [1760432438.8009] device (tap60379992-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:00:38 compute-0 NetworkManager[44885]: <info>  [1760432438.8018] device (tap60379992-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:00:38 compute-0 ovn_controller[152662]: 2025-10-14T09:00:38Z|00354|binding|INFO|Setting lport 60379992-d75d-4eff-a6bb-5d1615f35475 ovn-installed in OVS
Oct 14 09:00:38 compute-0 ovn_controller[152662]: 2025-10-14T09:00:38Z|00355|binding|INFO|Setting lport 60379992-d75d-4eff-a6bb-5d1615f35475 up in Southbound
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.820 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf8a71f-70fa-4e22-8eed-b849fbfcff16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:38 compute-0 systemd-machined[214636]: New machine qemu-47-instance-00000020.
Oct 14 09:00:38 compute-0 systemd[1]: Started Virtual Machine qemu-47-instance-00000020.
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.851 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d94f1b49-f9bd-4c56-990d-4c52482e68b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.854 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c628d604-e42e-41af-8d25-3b3186b5ba70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.894 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f65d2e-72c4-415f-9171-b76bfe43955a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.918 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9e0b5356-f2c7-4047-9d33-14347baf553a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea0c857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:51:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 1084, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 1084, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616925, 'reachable_time': 21171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304429, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.939 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b339b1d2-57c9-4571-8d03-4376c25fc082]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616940, 'tstamp': 616940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304430, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616944, 'tstamp': 616944}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304430, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.941 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea0c857a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:38 compute-0 nova_compute[259627]: 2025-10-14 09:00:38.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.945 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea0c857a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.945 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.946 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapea0c857a-d0, col_values=(('external_ids', {'iface-id': '6baedd76-8a05-42d6-8356-18b586f58672'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:38.947 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:00:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506799058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.196 2 DEBUG oslo_concurrency.processutils [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.202 2 DEBUG nova.compute.provider_tree [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.216 2 DEBUG oslo_concurrency.lockutils [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Acquiring lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.217 2 DEBUG oslo_concurrency.lockutils [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.217 2 DEBUG oslo_concurrency.lockutils [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Acquiring lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.217 2 DEBUG oslo_concurrency.lockutils [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.218 2 DEBUG oslo_concurrency.lockutils [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.219 2 INFO nova.compute.manager [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Terminating instance
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.220 2 DEBUG nova.compute.manager [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.223 2 DEBUG nova.scheduler.client.report [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.244 2 DEBUG oslo_concurrency.lockutils [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:39 compute-0 kernel: tape98aad77-80 (unregistering): left promiscuous mode
Oct 14 09:00:39 compute-0 NetworkManager[44885]: <info>  [1760432439.2686] device (tape98aad77-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.273 2 INFO nova.scheduler.client.report [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Deleted allocations for instance 6de921d2-e251-431d-9333-bae44aa81859
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:39 compute-0 ovn_controller[152662]: 2025-10-14T09:00:39Z|00356|binding|INFO|Releasing lport e98aad77-80b5-4685-8387-939d9379c597 from this chassis (sb_readonly=0)
Oct 14 09:00:39 compute-0 ovn_controller[152662]: 2025-10-14T09:00:39Z|00357|binding|INFO|Setting lport e98aad77-80b5-4685-8387-939d9379c597 down in Southbound
Oct 14 09:00:39 compute-0 ovn_controller[152662]: 2025-10-14T09:00:39Z|00358|binding|INFO|Removing iface tape98aad77-80 ovn-installed in OVS
Oct 14 09:00:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:39.282 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:2f:92 10.100.0.4'], port_security=['fa:16:3e:15:2f:92 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-079eaf38-8afb-4fe5-89ff-6d124da2b7ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c678b346288454cb60e84d74446e637', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b9457705-d84b-4d70-9346-4757f784d547', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71c85b37-713a-4740-a2ee-1c8f8f7374a9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e98aad77-80b5-4685-8387-939d9379c597) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:39.283 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e98aad77-80b5-4685-8387-939d9379c597 in datapath 079eaf38-8afb-4fe5-89ff-6d124da2b7ce unbound from our chassis
Oct 14 09:00:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:39.284 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 079eaf38-8afb-4fe5-89ff-6d124da2b7ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:00:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:39.285 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[228461f6-28d6-4422-9f1f-12d57e911fb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:39.285 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce namespace which is not needed anymore
Oct 14 09:00:39 compute-0 ceph-mon[74249]: pgmap v1291: 305 pgs: 305 active+clean; 527 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 178 op/s
Oct 14 09:00:39 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2506799058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.330 2 DEBUG oslo_concurrency.lockutils [None req-cf1de6e2-ee90-4e6d-8b0b-d495814f4f04 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "6de921d2-e251-431d-9333-bae44aa81859" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:39 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000028.scope: Deactivated successfully.
Oct 14 09:00:39 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000028.scope: Consumed 2.261s CPU time.
Oct 14 09:00:39 compute-0 systemd-machined[214636]: Machine qemu-46-instance-00000028 terminated.
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.370 2 DEBUG oslo_concurrency.lockutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.370 2 DEBUG oslo_concurrency.lockutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.396 2 DEBUG nova.compute.manager [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:00:39 compute-0 neutron-haproxy-ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce[304248]: [NOTICE]   (304252) : haproxy version is 2.8.14-c23fe91
Oct 14 09:00:39 compute-0 neutron-haproxy-ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce[304248]: [NOTICE]   (304252) : path to executable is /usr/sbin/haproxy
Oct 14 09:00:39 compute-0 neutron-haproxy-ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce[304248]: [WARNING]  (304252) : Exiting Master process...
Oct 14 09:00:39 compute-0 neutron-haproxy-ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce[304248]: [ALERT]    (304252) : Current worker (304254) exited with code 143 (Terminated)
Oct 14 09:00:39 compute-0 neutron-haproxy-ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce[304248]: [WARNING]  (304252) : All workers exited. Exiting... (0)
Oct 14 09:00:39 compute-0 systemd[1]: libpod-332cfea4cb067928333c31c047aa252d68e3f8de031e613bde1b1637f580d14c.scope: Deactivated successfully.
Oct 14 09:00:39 compute-0 podman[304453]: 2025-10-14 09:00:39.416451403 +0000 UTC m=+0.041842738 container died 332cfea4cb067928333c31c047aa252d68e3f8de031e613bde1b1637f580d14c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:00:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-332cfea4cb067928333c31c047aa252d68e3f8de031e613bde1b1637f580d14c-userdata-shm.mount: Deactivated successfully.
Oct 14 09:00:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-20cc2d40cbce76ae39682967d3dad1d055d61b8dc9e0a7329c0ef5136a8105a7-merged.mount: Deactivated successfully.
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.462 2 INFO nova.virt.libvirt.driver [-] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Instance destroyed successfully.
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.462 2 DEBUG nova.objects.instance [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lazy-loading 'resources' on Instance uuid ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.468 2 DEBUG oslo_concurrency.lockutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.468 2 DEBUG oslo_concurrency.lockutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:39 compute-0 podman[304453]: 2025-10-14 09:00:39.471125456 +0000 UTC m=+0.096516781 container cleanup 332cfea4cb067928333c31c047aa252d68e3f8de031e613bde1b1637f580d14c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.474 2 DEBUG nova.virt.hardware [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.474 2 INFO nova.compute.claims [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.479 2 DEBUG nova.virt.libvirt.vif [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:00:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-646351129',display_name='tempest-InstanceActionsNegativeTestJSON-server-646351129',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-646351129',id=40,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:00:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6c678b346288454cb60e84d74446e637',ramdisk_id='',reservation_id='r-9ydju5om',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1092081928',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1092081928-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:00:38Z,user_data=None,user_id='cfa85d055bc34b8a8c67b895be1ce2f5',uuid=ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e98aad77-80b5-4685-8387-939d9379c597", "address": "fa:16:3e:15:2f:92", "network": {"id": "079eaf38-8afb-4fe5-89ff-6d124da2b7ce", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-222116053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c678b346288454cb60e84d74446e637", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape98aad77-80", "ovs_interfaceid": "e98aad77-80b5-4685-8387-939d9379c597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.480 2 DEBUG nova.network.os_vif_util [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Converting VIF {"id": "e98aad77-80b5-4685-8387-939d9379c597", "address": "fa:16:3e:15:2f:92", "network": {"id": "079eaf38-8afb-4fe5-89ff-6d124da2b7ce", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-222116053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c678b346288454cb60e84d74446e637", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape98aad77-80", "ovs_interfaceid": "e98aad77-80b5-4685-8387-939d9379c597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.481 2 DEBUG nova.network.os_vif_util [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:2f:92,bridge_name='br-int',has_traffic_filtering=True,id=e98aad77-80b5-4685-8387-939d9379c597,network=Network(079eaf38-8afb-4fe5-89ff-6d124da2b7ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape98aad77-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.481 2 DEBUG os_vif [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:2f:92,bridge_name='br-int',has_traffic_filtering=True,id=e98aad77-80b5-4685-8387-939d9379c597,network=Network(079eaf38-8afb-4fe5-89ff-6d124da2b7ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape98aad77-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.484 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape98aad77-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.489 2 INFO os_vif [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:2f:92,bridge_name='br-int',has_traffic_filtering=True,id=e98aad77-80b5-4685-8387-939d9379c597,network=Network(079eaf38-8afb-4fe5-89ff-6d124da2b7ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape98aad77-80')
Oct 14 09:00:39 compute-0 systemd[1]: libpod-conmon-332cfea4cb067928333c31c047aa252d68e3f8de031e613bde1b1637f580d14c.scope: Deactivated successfully.
Oct 14 09:00:39 compute-0 podman[304493]: 2025-10-14 09:00:39.545848321 +0000 UTC m=+0.044050793 container remove 332cfea4cb067928333c31c047aa252d68e3f8de031e613bde1b1637f580d14c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 09:00:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:39.552 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bffaf525-e342-44c9-a754-85d496b3406f]: (4, ('Tue Oct 14 09:00:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce (332cfea4cb067928333c31c047aa252d68e3f8de031e613bde1b1637f580d14c)\n332cfea4cb067928333c31c047aa252d68e3f8de031e613bde1b1637f580d14c\nTue Oct 14 09:00:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce (332cfea4cb067928333c31c047aa252d68e3f8de031e613bde1b1637f580d14c)\n332cfea4cb067928333c31c047aa252d68e3f8de031e613bde1b1637f580d14c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:39.555 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1887b862-bbea-4afc-bf6b-6ba2e5006b26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:39.556 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap079eaf38-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:39 compute-0 kernel: tap079eaf38-80: left promiscuous mode
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:39.585 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ad1ee9-6995-48af-a785-0362c23191a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.593 2 DEBUG nova.compute.manager [req-d08e1811-2c26-47b2-82e7-208117d0e948 req-8604af64-5b77-42b5-b9f8-44a6c4c46c4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Received event network-vif-plugged-e98aad77-80b5-4685-8387-939d9379c597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.593 2 DEBUG oslo_concurrency.lockutils [req-d08e1811-2c26-47b2-82e7-208117d0e948 req-8604af64-5b77-42b5-b9f8-44a6c4c46c4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.594 2 DEBUG oslo_concurrency.lockutils [req-d08e1811-2c26-47b2-82e7-208117d0e948 req-8604af64-5b77-42b5-b9f8-44a6c4c46c4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.594 2 DEBUG oslo_concurrency.lockutils [req-d08e1811-2c26-47b2-82e7-208117d0e948 req-8604af64-5b77-42b5-b9f8-44a6c4c46c4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.595 2 DEBUG nova.compute.manager [req-d08e1811-2c26-47b2-82e7-208117d0e948 req-8604af64-5b77-42b5-b9f8-44a6c4c46c4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] No waiting events found dispatching network-vif-plugged-e98aad77-80b5-4685-8387-939d9379c597 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.597 2 WARNING nova.compute.manager [req-d08e1811-2c26-47b2-82e7-208117d0e948 req-8604af64-5b77-42b5-b9f8-44a6c4c46c4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Received unexpected event network-vif-plugged-e98aad77-80b5-4685-8387-939d9379c597 for instance with vm_state active and task_state deleting.
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.598 2 DEBUG nova.compute.manager [req-d08e1811-2c26-47b2-82e7-208117d0e948 req-8604af64-5b77-42b5-b9f8-44a6c4c46c4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Received event network-vif-deleted-c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.598 2 DEBUG nova.compute.manager [req-d08e1811-2c26-47b2-82e7-208117d0e948 req-8604af64-5b77-42b5-b9f8-44a6c4c46c4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 527 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 178 op/s
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.598 2 DEBUG oslo_concurrency.lockutils [req-d08e1811-2c26-47b2-82e7-208117d0e948 req-8604af64-5b77-42b5-b9f8-44a6c4c46c4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.599 2 DEBUG oslo_concurrency.lockutils [req-d08e1811-2c26-47b2-82e7-208117d0e948 req-8604af64-5b77-42b5-b9f8-44a6c4c46c4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.599 2 DEBUG oslo_concurrency.lockutils [req-d08e1811-2c26-47b2-82e7-208117d0e948 req-8604af64-5b77-42b5-b9f8-44a6c4c46c4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.600 2 DEBUG nova.compute.manager [req-d08e1811-2c26-47b2-82e7-208117d0e948 req-8604af64-5b77-42b5-b9f8-44a6c4c46c4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Processing event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.600 2 DEBUG nova.compute.manager [req-d08e1811-2c26-47b2-82e7-208117d0e948 req-8604af64-5b77-42b5-b9f8-44a6c4c46c4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.600 2 DEBUG oslo_concurrency.lockutils [req-d08e1811-2c26-47b2-82e7-208117d0e948 req-8604af64-5b77-42b5-b9f8-44a6c4c46c4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.600 2 DEBUG oslo_concurrency.lockutils [req-d08e1811-2c26-47b2-82e7-208117d0e948 req-8604af64-5b77-42b5-b9f8-44a6c4c46c4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.600 2 DEBUG oslo_concurrency.lockutils [req-d08e1811-2c26-47b2-82e7-208117d0e948 req-8604af64-5b77-42b5-b9f8-44a6c4c46c4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.601 2 DEBUG nova.compute.manager [req-d08e1811-2c26-47b2-82e7-208117d0e948 req-8604af64-5b77-42b5-b9f8-44a6c4c46c4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] No waiting events found dispatching network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.601 2 WARNING nova.compute.manager [req-d08e1811-2c26-47b2-82e7-208117d0e948 req-8604af64-5b77-42b5-b9f8-44a6c4c46c4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received unexpected event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 for instance with vm_state error and task_state rebuild_spawning.
Oct 14 09:00:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:39.609 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[620ff426-2362-4fc6-a2fd-31e437780bba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:39.610 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6b31c2ee-8611-4f74-9e90-31ebe5d1e223]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:39.632 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f3009da8-5b19-4f33-af3f-383e97e737c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621554, 'reachable_time': 40284, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304544, 'error': None, 'target': 'ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d079eaf38\x2d8afb\x2d4fe5\x2d89ff\x2d6d124da2b7ce.mount: Deactivated successfully.
Oct 14 09:00:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:39.633 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-079eaf38-8afb-4fe5-89ff-6d124da2b7ce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:00:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:39.634 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[5afc4c16-9e65-4900-93f0-af7018a75635]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.691 2 DEBUG oslo_concurrency.processutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.789 2 DEBUG nova.compute.manager [req-4302b50c-9ed3-47d0-b275-27bca2ae1b95 req-7003b15a-b028-4939-9dfa-abeff5894148 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Received event network-vif-plugged-c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.790 2 DEBUG oslo_concurrency.lockutils [req-4302b50c-9ed3-47d0-b275-27bca2ae1b95 req-7003b15a-b028-4939-9dfa-abeff5894148 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6de921d2-e251-431d-9333-bae44aa81859-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.790 2 DEBUG oslo_concurrency.lockutils [req-4302b50c-9ed3-47d0-b275-27bca2ae1b95 req-7003b15a-b028-4939-9dfa-abeff5894148 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6de921d2-e251-431d-9333-bae44aa81859-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.791 2 DEBUG oslo_concurrency.lockutils [req-4302b50c-9ed3-47d0-b275-27bca2ae1b95 req-7003b15a-b028-4939-9dfa-abeff5894148 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6de921d2-e251-431d-9333-bae44aa81859-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.792 2 DEBUG nova.compute.manager [req-4302b50c-9ed3-47d0-b275-27bca2ae1b95 req-7003b15a-b028-4939-9dfa-abeff5894148 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] No waiting events found dispatching network-vif-plugged-c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.792 2 WARNING nova.compute.manager [req-4302b50c-9ed3-47d0-b275-27bca2ae1b95 req-7003b15a-b028-4939-9dfa-abeff5894148 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Received unexpected event network-vif-plugged-c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b for instance with vm_state deleted and task_state None.
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.853 2 INFO nova.compute.manager [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Rebuilding instance
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.994 2 INFO nova.virt.libvirt.driver [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Deleting instance files /var/lib/nova/instances/ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c_del
Oct 14 09:00:39 compute-0 nova_compute[259627]: 2025-10-14 09:00:39.996 2 INFO nova.virt.libvirt.driver [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Deletion of /var/lib/nova/instances/ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c_del complete
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.062 2 INFO nova.compute.manager [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Took 0.84 seconds to destroy the instance on the hypervisor.
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.063 2 DEBUG oslo.service.loopingcall [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.064 2 DEBUG nova.compute.manager [-] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.064 2 DEBUG nova.network.neutron [-] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.131 2 DEBUG nova.objects.instance [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1fa36bef-0a0f-4f9c-877c-325a656fa127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:00:40 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1054247137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.145 2 DEBUG nova.compute.manager [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.155 2 DEBUG oslo_concurrency.processutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.160 2 DEBUG nova.compute.provider_tree [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.172 2 DEBUG nova.scheduler.client.report [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.199 2 DEBUG nova.objects.instance [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lazy-loading 'pci_requests' on Instance uuid 1fa36bef-0a0f-4f9c-877c-325a656fa127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.201 2 DEBUG oslo_concurrency.lockutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.201 2 DEBUG nova.compute.manager [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.226 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 310ebd88-5fe0-40ad-99dd-c3a1b410d357 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.227 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432440.2265544, 310ebd88-5fe0-40ad-99dd-c3a1b410d357 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.227 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] VM Started (Lifecycle Event)
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.229 2 DEBUG nova.objects.instance [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lazy-loading 'pci_devices' on Instance uuid 1fa36bef-0a0f-4f9c-877c-325a656fa127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.230 2 DEBUG nova.compute.manager [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.234 2 DEBUG nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.238 2 INFO nova.virt.libvirt.driver [-] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance spawned successfully.
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.238 2 DEBUG nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.267 2 DEBUG nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.268 2 DEBUG nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.268 2 DEBUG nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.269 2 DEBUG nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.269 2 DEBUG nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.270 2 DEBUG nova.virt.libvirt.driver [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.279 2 DEBUG nova.objects.instance [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lazy-loading 'resources' on Instance uuid 1fa36bef-0a0f-4f9c-877c-325a656fa127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.281 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.282 2 DEBUG nova.compute.manager [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.283 2 DEBUG nova.network.neutron [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.289 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:40 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1054247137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.314 2 DEBUG nova.objects.instance [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lazy-loading 'migration_context' on Instance uuid 1fa36bef-0a0f-4f9c-877c-325a656fa127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.317 2 INFO nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.319 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.319 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432440.2266476, 310ebd88-5fe0-40ad-99dd-c3a1b410d357 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.319 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] VM Paused (Lifecycle Event)
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.342 2 DEBUG nova.objects.instance [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.345 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.346 2 DEBUG nova.compute.manager [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.349 2 DEBUG nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.350 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432440.2345166, 310ebd88-5fe0-40ad-99dd-c3a1b410d357 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.351 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] VM Resumed (Lifecycle Event)
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.354 2 DEBUG nova.compute.manager [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.391 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.395 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.424 2 DEBUG oslo_concurrency.lockutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.424 2 DEBUG oslo_concurrency.lockutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.425 2 DEBUG nova.objects.instance [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.463 2 DEBUG nova.policy [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a217215c39e41fea2323ff7b3b4e6aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.475 2 DEBUG nova.compute.manager [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.477 2 DEBUG nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.477 2 INFO nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Creating image(s)
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.500 2 DEBUG nova.storage.rbd_utils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f5132a2b-83eb-4bb1-a556-480d9fb97fec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.523 2 DEBUG nova.storage.rbd_utils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f5132a2b-83eb-4bb1-a556-480d9fb97fec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.562 2 DEBUG nova.storage.rbd_utils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f5132a2b-83eb-4bb1-a556-480d9fb97fec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.566 2 DEBUG oslo_concurrency.processutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.600 2 DEBUG oslo_concurrency.lockutils [None req-5209e2bd-06f1-4fad-8394-585bd4d6a51b 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.617 2 DEBUG nova.network.neutron [-] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.632 2 INFO nova.compute.manager [-] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Took 0.57 seconds to deallocate network for instance.
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.640 2 DEBUG oslo_concurrency.processutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.640 2 DEBUG oslo_concurrency.lockutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.641 2 DEBUG oslo_concurrency.lockutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.641 2 DEBUG oslo_concurrency.lockutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.664 2 DEBUG nova.storage.rbd_utils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f5132a2b-83eb-4bb1-a556-480d9fb97fec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.669 2 DEBUG oslo_concurrency.processutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f5132a2b-83eb-4bb1-a556-480d9fb97fec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.702 2 DEBUG oslo_concurrency.lockutils [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.703 2 DEBUG oslo_concurrency.lockutils [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.929 2 DEBUG oslo_concurrency.processutils [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:40 compute-0 nova_compute[259627]: 2025-10-14 09:00:40.978 2 DEBUG oslo_concurrency.processutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f5132a2b-83eb-4bb1-a556-480d9fb97fec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.019 2 DEBUG nova.network.neutron [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Successfully created port: 625280ef-3da7-4600-8f28-4ab4e77ff594 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.067 2 DEBUG nova.storage.rbd_utils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] resizing rbd image f5132a2b-83eb-4bb1-a556-480d9fb97fec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.170 2 DEBUG nova.objects.instance [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'migration_context' on Instance uuid f5132a2b-83eb-4bb1-a556-480d9fb97fec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.193 2 DEBUG nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.193 2 DEBUG nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Ensure instance console log exists: /var/lib/nova/instances/f5132a2b-83eb-4bb1-a556-480d9fb97fec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.194 2 DEBUG oslo_concurrency.lockutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.194 2 DEBUG oslo_concurrency.lockutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.194 2 DEBUG oslo_concurrency.lockutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:41 compute-0 ceph-mon[74249]: pgmap v1292: 305 pgs: 305 active+clean; 527 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 178 op/s
Oct 14 09:00:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:00:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1543836990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.432 2 DEBUG oslo_concurrency.processutils [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.439 2 DEBUG nova.compute.provider_tree [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.461 2 DEBUG nova.scheduler.client.report [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.493 2 DEBUG oslo_concurrency.lockutils [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.520 2 INFO nova.scheduler.client.report [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Deleted allocations for instance ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.585 2 DEBUG oslo_concurrency.lockutils [None req-5e3fdcc3-a902-440e-9262-5e4a82db5c47 cfa85d055bc34b8a8c67b895be1ce2f5 6c678b346288454cb60e84d74446e637 - - default default] Lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 372 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 9.0 MiB/s wr, 574 op/s
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.739 2 DEBUG nova.compute.manager [req-2e487b50-510e-4c03-9fcf-30f85b1e95c8 req-51c0a5e2-f719-4bb2-98df-0eb9fd9915ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Received event network-vif-unplugged-e98aad77-80b5-4685-8387-939d9379c597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.740 2 DEBUG oslo_concurrency.lockutils [req-2e487b50-510e-4c03-9fcf-30f85b1e95c8 req-51c0a5e2-f719-4bb2-98df-0eb9fd9915ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.740 2 DEBUG oslo_concurrency.lockutils [req-2e487b50-510e-4c03-9fcf-30f85b1e95c8 req-51c0a5e2-f719-4bb2-98df-0eb9fd9915ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.740 2 DEBUG oslo_concurrency.lockutils [req-2e487b50-510e-4c03-9fcf-30f85b1e95c8 req-51c0a5e2-f719-4bb2-98df-0eb9fd9915ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.740 2 DEBUG nova.compute.manager [req-2e487b50-510e-4c03-9fcf-30f85b1e95c8 req-51c0a5e2-f719-4bb2-98df-0eb9fd9915ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] No waiting events found dispatching network-vif-unplugged-e98aad77-80b5-4685-8387-939d9379c597 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.741 2 WARNING nova.compute.manager [req-2e487b50-510e-4c03-9fcf-30f85b1e95c8 req-51c0a5e2-f719-4bb2-98df-0eb9fd9915ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Received unexpected event network-vif-unplugged-e98aad77-80b5-4685-8387-939d9379c597 for instance with vm_state deleted and task_state None.
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.741 2 DEBUG nova.compute.manager [req-2e487b50-510e-4c03-9fcf-30f85b1e95c8 req-51c0a5e2-f719-4bb2-98df-0eb9fd9915ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Received event network-vif-plugged-e98aad77-80b5-4685-8387-939d9379c597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.741 2 DEBUG oslo_concurrency.lockutils [req-2e487b50-510e-4c03-9fcf-30f85b1e95c8 req-51c0a5e2-f719-4bb2-98df-0eb9fd9915ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.741 2 DEBUG oslo_concurrency.lockutils [req-2e487b50-510e-4c03-9fcf-30f85b1e95c8 req-51c0a5e2-f719-4bb2-98df-0eb9fd9915ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.741 2 DEBUG oslo_concurrency.lockutils [req-2e487b50-510e-4c03-9fcf-30f85b1e95c8 req-51c0a5e2-f719-4bb2-98df-0eb9fd9915ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.742 2 DEBUG nova.compute.manager [req-2e487b50-510e-4c03-9fcf-30f85b1e95c8 req-51c0a5e2-f719-4bb2-98df-0eb9fd9915ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] No waiting events found dispatching network-vif-plugged-e98aad77-80b5-4685-8387-939d9379c597 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.742 2 WARNING nova.compute.manager [req-2e487b50-510e-4c03-9fcf-30f85b1e95c8 req-51c0a5e2-f719-4bb2-98df-0eb9fd9915ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Received unexpected event network-vif-plugged-e98aad77-80b5-4685-8387-939d9379c597 for instance with vm_state deleted and task_state None.
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.742 2 DEBUG nova.compute.manager [req-2e487b50-510e-4c03-9fcf-30f85b1e95c8 req-51c0a5e2-f719-4bb2-98df-0eb9fd9915ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Received event network-vif-deleted-e98aad77-80b5-4685-8387-939d9379c597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.903 2 DEBUG nova.network.neutron [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Successfully updated port: 625280ef-3da7-4600-8f28-4ab4e77ff594 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.937 2 DEBUG oslo_concurrency.lockutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "refresh_cache-f5132a2b-83eb-4bb1-a556-480d9fb97fec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.938 2 DEBUG oslo_concurrency.lockutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquired lock "refresh_cache-f5132a2b-83eb-4bb1-a556-480d9fb97fec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:00:41 compute-0 nova_compute[259627]: 2025-10-14 09:00:41.938 2 DEBUG nova.network.neutron [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:00:42 compute-0 nova_compute[259627]: 2025-10-14 09:00:42.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:42 compute-0 nova_compute[259627]: 2025-10-14 09:00:42.097 2 DEBUG nova.network.neutron [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:00:42 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1543836990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:00:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Oct 14 09:00:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Oct 14 09:00:42 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Oct 14 09:00:42 compute-0 nova_compute[259627]: 2025-10-14 09:00:42.552 2 INFO nova.compute.manager [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Rebuilding instance
Oct 14 09:00:42 compute-0 podman[304781]: 2025-10-14 09:00:42.675994174 +0000 UTC m=+0.075586727 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 14 09:00:42 compute-0 podman[304780]: 2025-10-14 09:00:42.719197015 +0000 UTC m=+0.119035304 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible)
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029694952830956567 of space, bias 1.0, pg target 0.890848584928697 quantized to 32 (current 32)
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:00:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:00:42 compute-0 nova_compute[259627]: 2025-10-14 09:00:42.904 2 DEBUG nova.objects.instance [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:42 compute-0 nova_compute[259627]: 2025-10-14 09:00:42.935 2 DEBUG nova.compute.manager [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:42 compute-0 nova_compute[259627]: 2025-10-14 09:00:42.984 2 DEBUG nova.objects.instance [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'pci_requests' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:42 compute-0 nova_compute[259627]: 2025-10-14 09:00:42.999 2 DEBUG nova.objects.instance [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'pci_devices' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.011 2 DEBUG nova.objects.instance [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'resources' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.023 2 DEBUG nova.objects.instance [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'migration_context' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.029 2 DEBUG nova.network.neutron [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Updating instance_info_cache with network_info: [{"id": "625280ef-3da7-4600-8f28-4ab4e77ff594", "address": "fa:16:3e:ea:6a:d6", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap625280ef-3d", "ovs_interfaceid": "625280ef-3da7-4600-8f28-4ab4e77ff594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.034 2 DEBUG nova.objects.instance [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.038 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.046 2 DEBUG oslo_concurrency.lockutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Releasing lock "refresh_cache-f5132a2b-83eb-4bb1-a556-480d9fb97fec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.047 2 DEBUG nova.compute.manager [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Instance network_info: |[{"id": "625280ef-3da7-4600-8f28-4ab4e77ff594", "address": "fa:16:3e:ea:6a:d6", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap625280ef-3d", "ovs_interfaceid": "625280ef-3da7-4600-8f28-4ab4e77ff594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.050 2 DEBUG nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Start _get_guest_xml network_info=[{"id": "625280ef-3da7-4600-8f28-4ab4e77ff594", "address": "fa:16:3e:ea:6a:d6", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap625280ef-3d", "ovs_interfaceid": "625280ef-3da7-4600-8f28-4ab4e77ff594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.055 2 WARNING nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.060 2 DEBUG nova.virt.libvirt.host [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.060 2 DEBUG nova.virt.libvirt.host [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.064 2 DEBUG nova.virt.libvirt.host [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.064 2 DEBUG nova.virt.libvirt.host [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.064 2 DEBUG nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.065 2 DEBUG nova.virt.hardware [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.065 2 DEBUG nova.virt.hardware [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.065 2 DEBUG nova.virt.hardware [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.065 2 DEBUG nova.virt.hardware [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.066 2 DEBUG nova.virt.hardware [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.066 2 DEBUG nova.virt.hardware [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.066 2 DEBUG nova.virt.hardware [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.066 2 DEBUG nova.virt.hardware [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.066 2 DEBUG nova.virt.hardware [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.067 2 DEBUG nova.virt.hardware [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.067 2 DEBUG nova.virt.hardware [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.070 2 DEBUG oslo_concurrency.processutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:43 compute-0 ceph-mon[74249]: pgmap v1293: 305 pgs: 305 active+clean; 372 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 9.0 MiB/s wr, 574 op/s
Oct 14 09:00:43 compute-0 ceph-mon[74249]: osdmap e159: 3 total, 3 up, 3 in
Oct 14 09:00:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/338944800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.584 2 DEBUG oslo_concurrency.processutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 372 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 5.9 MiB/s wr, 507 op/s
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.614 2 DEBUG nova.storage.rbd_utils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f5132a2b-83eb-4bb1-a556-480d9fb97fec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.624 2 DEBUG oslo_concurrency.processutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.851 2 DEBUG nova.compute.manager [req-f5f3b721-12d4-4d9c-9e44-b6af0a97d82a req-5bbbcf4c-b823-4559-a43f-46b995a53a24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Received event network-changed-625280ef-3da7-4600-8f28-4ab4e77ff594 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.851 2 DEBUG nova.compute.manager [req-f5f3b721-12d4-4d9c-9e44-b6af0a97d82a req-5bbbcf4c-b823-4559-a43f-46b995a53a24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Refreshing instance network info cache due to event network-changed-625280ef-3da7-4600-8f28-4ab4e77ff594. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.852 2 DEBUG oslo_concurrency.lockutils [req-f5f3b721-12d4-4d9c-9e44-b6af0a97d82a req-5bbbcf4c-b823-4559-a43f-46b995a53a24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f5132a2b-83eb-4bb1-a556-480d9fb97fec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.852 2 DEBUG oslo_concurrency.lockutils [req-f5f3b721-12d4-4d9c-9e44-b6af0a97d82a req-5bbbcf4c-b823-4559-a43f-46b995a53a24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f5132a2b-83eb-4bb1-a556-480d9fb97fec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:00:43 compute-0 nova_compute[259627]: 2025-10-14 09:00:43.852 2 DEBUG nova.network.neutron [req-f5f3b721-12d4-4d9c-9e44-b6af0a97d82a req-5bbbcf4c-b823-4559-a43f-46b995a53a24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Refreshing network info cache for port 625280ef-3da7-4600-8f28-4ab4e77ff594 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:00:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4087318805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.043 2 DEBUG oslo_concurrency.processutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.045 2 DEBUG nova.virt.libvirt.vif [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:00:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-936263881',display_name='tempest-ImagesTestJSON-server-936263881',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-936263881',id=42,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-v3x7rtof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=TagList,task
_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:00:40Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=f5132a2b-83eb-4bb1-a556-480d9fb97fec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "625280ef-3da7-4600-8f28-4ab4e77ff594", "address": "fa:16:3e:ea:6a:d6", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap625280ef-3d", "ovs_interfaceid": "625280ef-3da7-4600-8f28-4ab4e77ff594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.046 2 DEBUG nova.network.os_vif_util [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "625280ef-3da7-4600-8f28-4ab4e77ff594", "address": "fa:16:3e:ea:6a:d6", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap625280ef-3d", "ovs_interfaceid": "625280ef-3da7-4600-8f28-4ab4e77ff594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.047 2 DEBUG nova.network.os_vif_util [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:6a:d6,bridge_name='br-int',has_traffic_filtering=True,id=625280ef-3da7-4600-8f28-4ab4e77ff594,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap625280ef-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.048 2 DEBUG nova.objects.instance [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'pci_devices' on Instance uuid f5132a2b-83eb-4bb1-a556-480d9fb97fec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.066 2 DEBUG nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:00:44 compute-0 nova_compute[259627]:   <uuid>f5132a2b-83eb-4bb1-a556-480d9fb97fec</uuid>
Oct 14 09:00:44 compute-0 nova_compute[259627]:   <name>instance-0000002a</name>
Oct 14 09:00:44 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:00:44 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:00:44 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <nova:name>tempest-ImagesTestJSON-server-936263881</nova:name>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:00:43</nova:creationTime>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:00:44 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:00:44 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:00:44 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:00:44 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:00:44 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:00:44 compute-0 nova_compute[259627]:         <nova:user uuid="3a217215c39e41fea2323ff7b3b4e6aa">tempest-ImagesTestJSON-168259448-project-member</nova:user>
Oct 14 09:00:44 compute-0 nova_compute[259627]:         <nova:project uuid="0d87d2d744db48dc8b32bb4bf6847fce">tempest-ImagesTestJSON-168259448</nova:project>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:00:44 compute-0 nova_compute[259627]:         <nova:port uuid="625280ef-3da7-4600-8f28-4ab4e77ff594">
Oct 14 09:00:44 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:00:44 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:00:44 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <system>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <entry name="serial">f5132a2b-83eb-4bb1-a556-480d9fb97fec</entry>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <entry name="uuid">f5132a2b-83eb-4bb1-a556-480d9fb97fec</entry>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     </system>
Oct 14 09:00:44 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:00:44 compute-0 nova_compute[259627]:   <os>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:   </os>
Oct 14 09:00:44 compute-0 nova_compute[259627]:   <features>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:   </features>
Oct 14 09:00:44 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:00:44 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:00:44 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/f5132a2b-83eb-4bb1-a556-480d9fb97fec_disk">
Oct 14 09:00:44 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:44 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/f5132a2b-83eb-4bb1-a556-480d9fb97fec_disk.config">
Oct 14 09:00:44 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:44 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:ea:6a:d6"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <target dev="tap625280ef-3d"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/f5132a2b-83eb-4bb1-a556-480d9fb97fec/console.log" append="off"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <video>
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     </video>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:00:44 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:00:44 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:00:44 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:00:44 compute-0 nova_compute[259627]: </domain>
Oct 14 09:00:44 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.067 2 DEBUG nova.compute.manager [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Preparing to wait for external event network-vif-plugged-625280ef-3da7-4600-8f28-4ab4e77ff594 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.067 2 DEBUG oslo_concurrency.lockutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.067 2 DEBUG oslo_concurrency.lockutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.068 2 DEBUG oslo_concurrency.lockutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.069 2 DEBUG nova.virt.libvirt.vif [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:00:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-936263881',display_name='tempest-ImagesTestJSON-server-936263881',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-936263881',id=42,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-v3x7rtof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:00:40Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=f5132a2b-83eb-4bb1-a556-480d9fb97fec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "625280ef-3da7-4600-8f28-4ab4e77ff594", "address": "fa:16:3e:ea:6a:d6", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap625280ef-3d", "ovs_interfaceid": "625280ef-3da7-4600-8f28-4ab4e77ff594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.070 2 DEBUG nova.network.os_vif_util [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "625280ef-3da7-4600-8f28-4ab4e77ff594", "address": "fa:16:3e:ea:6a:d6", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap625280ef-3d", "ovs_interfaceid": "625280ef-3da7-4600-8f28-4ab4e77ff594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.071 2 DEBUG nova.network.os_vif_util [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:6a:d6,bridge_name='br-int',has_traffic_filtering=True,id=625280ef-3da7-4600-8f28-4ab4e77ff594,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap625280ef-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.072 2 DEBUG os_vif [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:6a:d6,bridge_name='br-int',has_traffic_filtering=True,id=625280ef-3da7-4600-8f28-4ab4e77ff594,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap625280ef-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.074 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.075 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.079 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap625280ef-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.080 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap625280ef-3d, col_values=(('external_ids', {'iface-id': '625280ef-3da7-4600-8f28-4ab4e77ff594', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:6a:d6', 'vm-uuid': 'f5132a2b-83eb-4bb1-a556-480d9fb97fec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:44 compute-0 NetworkManager[44885]: <info>  [1760432444.0817] manager: (tap625280ef-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/164)
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.089 2 INFO os_vif [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:6a:d6,bridge_name='br-int',has_traffic_filtering=True,id=625280ef-3da7-4600-8f28-4ab4e77ff594,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap625280ef-3d')
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.144 2 DEBUG nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.145 2 DEBUG nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.145 2 DEBUG nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No VIF found with MAC fa:16:3e:ea:6a:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.145 2 INFO nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Using config drive
Oct 14 09:00:44 compute-0 nova_compute[259627]: 2025-10-14 09:00:44.167 2 DEBUG nova.storage.rbd_utils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f5132a2b-83eb-4bb1-a556-480d9fb97fec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:44 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/338944800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:44 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4087318805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:45.087 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:45 compute-0 ceph-mon[74249]: pgmap v1295: 305 pgs: 305 active+clean; 372 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 5.9 MiB/s wr, 507 op/s
Oct 14 09:00:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 418 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.4 MiB/s wr, 552 op/s
Oct 14 09:00:45 compute-0 nova_compute[259627]: 2025-10-14 09:00:45.965 2 INFO nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Creating config drive at /var/lib/nova/instances/f5132a2b-83eb-4bb1-a556-480d9fb97fec/disk.config
Oct 14 09:00:45 compute-0 nova_compute[259627]: 2025-10-14 09:00:45.976 2 DEBUG oslo_concurrency.processutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5132a2b-83eb-4bb1-a556-480d9fb97fec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpydkm6q9z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:46 compute-0 nova_compute[259627]: 2025-10-14 09:00:46.135 2 DEBUG oslo_concurrency.processutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5132a2b-83eb-4bb1-a556-480d9fb97fec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpydkm6q9z" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:46 compute-0 nova_compute[259627]: 2025-10-14 09:00:46.169 2 DEBUG nova.storage.rbd_utils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f5132a2b-83eb-4bb1-a556-480d9fb97fec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:46 compute-0 nova_compute[259627]: 2025-10-14 09:00:46.175 2 DEBUG oslo_concurrency.processutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5132a2b-83eb-4bb1-a556-480d9fb97fec/disk.config f5132a2b-83eb-4bb1-a556-480d9fb97fec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:46 compute-0 nova_compute[259627]: 2025-10-14 09:00:46.325 2 DEBUG oslo_concurrency.processutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5132a2b-83eb-4bb1-a556-480d9fb97fec/disk.config f5132a2b-83eb-4bb1-a556-480d9fb97fec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:46 compute-0 nova_compute[259627]: 2025-10-14 09:00:46.326 2 INFO nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Deleting local config drive /var/lib/nova/instances/f5132a2b-83eb-4bb1-a556-480d9fb97fec/disk.config because it was imported into RBD.
Oct 14 09:00:46 compute-0 kernel: tap625280ef-3d: entered promiscuous mode
Oct 14 09:00:46 compute-0 NetworkManager[44885]: <info>  [1760432446.3906] manager: (tap625280ef-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/165)
Oct 14 09:00:46 compute-0 ovn_controller[152662]: 2025-10-14T09:00:46Z|00359|binding|INFO|Claiming lport 625280ef-3da7-4600-8f28-4ab4e77ff594 for this chassis.
Oct 14 09:00:46 compute-0 ovn_controller[152662]: 2025-10-14T09:00:46Z|00360|binding|INFO|625280ef-3da7-4600-8f28-4ab4e77ff594: Claiming fa:16:3e:ea:6a:d6 10.100.0.8
Oct 14 09:00:46 compute-0 nova_compute[259627]: 2025-10-14 09:00:46.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.399 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:6a:d6 10.100.0.8'], port_security=['fa:16:3e:ea:6a:d6 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f5132a2b-83eb-4bb1-a556-480d9fb97fec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=625280ef-3da7-4600-8f28-4ab4e77ff594) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.400 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 625280ef-3da7-4600-8f28-4ab4e77ff594 in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a bound to our chassis
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.401 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 09:00:46 compute-0 systemd-udevd[304958]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.417 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[439f3b60-a7a8-49a0-9f4e-720d9e4543ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:46 compute-0 ovn_controller[152662]: 2025-10-14T09:00:46Z|00361|binding|INFO|Setting lport 625280ef-3da7-4600-8f28-4ab4e77ff594 ovn-installed in OVS
Oct 14 09:00:46 compute-0 ovn_controller[152662]: 2025-10-14T09:00:46Z|00362|binding|INFO|Setting lport 625280ef-3da7-4600-8f28-4ab4e77ff594 up in Southbound
Oct 14 09:00:46 compute-0 nova_compute[259627]: 2025-10-14 09:00:46.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.418 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2322cf7a-01 in ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.420 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2322cf7a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.420 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[12a83b81-97a5-4823-9088-c990ab6873f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.421 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ee8016-83f2-423a-9f52-6d866bb9a244]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:46 compute-0 nova_compute[259627]: 2025-10-14 09:00:46.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:46 compute-0 systemd-machined[214636]: New machine qemu-48-instance-0000002a.
Oct 14 09:00:46 compute-0 NetworkManager[44885]: <info>  [1760432446.4321] device (tap625280ef-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:00:46 compute-0 NetworkManager[44885]: <info>  [1760432446.4329] device (tap625280ef-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.432 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[f15793c4-1dc0-4af0-b52f-44ae0d4019d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:46 compute-0 systemd[1]: Started Virtual Machine qemu-48-instance-0000002a.
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.446 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ff48269b-dfe8-4e7f-b506-66edca77db33]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:46 compute-0 nova_compute[259627]: 2025-10-14 09:00:46.449 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432431.4474368, 6f94ea73-1c62-40a9-9300-0d81c596377c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:46 compute-0 nova_compute[259627]: 2025-10-14 09:00:46.449 2 INFO nova.compute.manager [-] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] VM Stopped (Lifecycle Event)
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.479 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[03c3e3fc-f718-488d-a02c-fa09ac435c69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:46 compute-0 nova_compute[259627]: 2025-10-14 09:00:46.487 2 DEBUG nova.compute.manager [None req-e34a073d-8da7-415d-88ff-5fc81d7d8326 - - - - - -] [instance: 6f94ea73-1c62-40a9-9300-0d81c596377c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:46 compute-0 NetworkManager[44885]: <info>  [1760432446.4926] manager: (tap2322cf7a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/166)
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.491 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f68bd78c-6740-4e8f-a9e8-5ce8bfb878e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.531 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[792744aa-0a21-4a54-baaa-4be4e6510cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.534 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[301fe01c-97aa-4bce-8f14-019bbc0b172a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:46 compute-0 NetworkManager[44885]: <info>  [1760432446.5581] device (tap2322cf7a-00): carrier: link connected
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.564 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3ff0cf11-c7e5-4e22-ac35-eb8ae7da3cbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.586 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4048f86b-fb21-4b59-99fb-1a8ca70fe992]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622531, 'reachable_time': 23226, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304991, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.602 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3722cc-1568-49b8-b600-1d0991548f7e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed3:956c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622531, 'tstamp': 622531}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304992, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.627 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5dbc64f6-7c96-4687-b450-8a6e606310e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622531, 'reachable_time': 23226, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304993, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.656 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e0498ac0-3694-4c28-a31b-619f3d0a572f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.738 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b8c29b6a-964c-4a54-8120-90c71ab09f8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.739 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.739 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.740 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2322cf7a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:46 compute-0 nova_compute[259627]: 2025-10-14 09:00:46.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:46 compute-0 NetworkManager[44885]: <info>  [1760432446.7422] manager: (tap2322cf7a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Oct 14 09:00:46 compute-0 kernel: tap2322cf7a-00: entered promiscuous mode
Oct 14 09:00:46 compute-0 nova_compute[259627]: 2025-10-14 09:00:46.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.746 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2322cf7a-00, col_values=(('external_ids', {'iface-id': '0616bbde-729a-4cd4-ba39-5fcdf59ece5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:46 compute-0 nova_compute[259627]: 2025-10-14 09:00:46.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:46 compute-0 ovn_controller[152662]: 2025-10-14T09:00:46Z|00363|binding|INFO|Releasing lport 0616bbde-729a-4cd4-ba39-5fcdf59ece5e from this chassis (sb_readonly=0)
Oct 14 09:00:46 compute-0 nova_compute[259627]: 2025-10-14 09:00:46.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.771 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.772 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c49d89c-9b6e-4ef6-bcbc-5a370bbba480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.773 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:00:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:46.774 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'env', 'PROCESS_TAG=haproxy-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2322cf7a-0090-40fa-a558-42d84cc6fc2a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:47 compute-0 podman[305041]: 2025-10-14 09:00:47.238821858 +0000 UTC m=+0.072812918 container create 9c2fed4c8ac03214c1ae4bb35af097635af3096c9c5165cef7c0a22c99f224b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.262 2 DEBUG nova.compute.manager [req-249bdd50-7ec7-4ad8-b4bf-b8785008e504 req-7c1145ff-dbcc-4f64-875f-44224c27217e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Received event network-vif-plugged-625280ef-3da7-4600-8f28-4ab4e77ff594 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.262 2 DEBUG oslo_concurrency.lockutils [req-249bdd50-7ec7-4ad8-b4bf-b8785008e504 req-7c1145ff-dbcc-4f64-875f-44224c27217e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.263 2 DEBUG oslo_concurrency.lockutils [req-249bdd50-7ec7-4ad8-b4bf-b8785008e504 req-7c1145ff-dbcc-4f64-875f-44224c27217e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.263 2 DEBUG oslo_concurrency.lockutils [req-249bdd50-7ec7-4ad8-b4bf-b8785008e504 req-7c1145ff-dbcc-4f64-875f-44224c27217e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.263 2 DEBUG nova.compute.manager [req-249bdd50-7ec7-4ad8-b4bf-b8785008e504 req-7c1145ff-dbcc-4f64-875f-44224c27217e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Processing event network-vif-plugged-625280ef-3da7-4600-8f28-4ab4e77ff594 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:00:47 compute-0 systemd[1]: Started libpod-conmon-9c2fed4c8ac03214c1ae4bb35af097635af3096c9c5165cef7c0a22c99f224b9.scope.
Oct 14 09:00:47 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:00:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94473d2644af28465e64fe09d42bd242df1c7f4c209c94cc8abaecea8272e0f7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:00:47 compute-0 podman[305041]: 2025-10-14 09:00:47.213359393 +0000 UTC m=+0.047350483 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:00:47 compute-0 podman[305041]: 2025-10-14 09:00:47.310350294 +0000 UTC m=+0.144341374 container init 9c2fed4c8ac03214c1ae4bb35af097635af3096c9c5165cef7c0a22c99f224b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 14 09:00:47 compute-0 podman[305041]: 2025-10-14 09:00:47.316498245 +0000 UTC m=+0.150489305 container start 9c2fed4c8ac03214c1ae4bb35af097635af3096c9c5165cef7c0a22c99f224b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:00:47 compute-0 ceph-mon[74249]: pgmap v1296: 305 pgs: 305 active+clean; 418 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.4 MiB/s wr, 552 op/s
Oct 14 09:00:47 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[305080]: [NOTICE]   (305085) : New worker (305087) forked
Oct 14 09:00:47 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[305080]: [NOTICE]   (305085) : Loading success.
Oct 14 09:00:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:00:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 418 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 6.9 MiB/s wr, 515 op/s
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.755 2 DEBUG nova.network.neutron [req-f5f3b721-12d4-4d9c-9e44-b6af0a97d82a req-5bbbcf4c-b823-4559-a43f-46b995a53a24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Updated VIF entry in instance network info cache for port 625280ef-3da7-4600-8f28-4ab4e77ff594. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.756 2 DEBUG nova.network.neutron [req-f5f3b721-12d4-4d9c-9e44-b6af0a97d82a req-5bbbcf4c-b823-4559-a43f-46b995a53a24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Updating instance_info_cache with network_info: [{"id": "625280ef-3da7-4600-8f28-4ab4e77ff594", "address": "fa:16:3e:ea:6a:d6", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap625280ef-3d", "ovs_interfaceid": "625280ef-3da7-4600-8f28-4ab4e77ff594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.773 2 DEBUG oslo_concurrency.lockutils [req-f5f3b721-12d4-4d9c-9e44-b6af0a97d82a req-5bbbcf4c-b823-4559-a43f-46b995a53a24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f5132a2b-83eb-4bb1-a556-480d9fb97fec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.777 2 DEBUG nova.compute.manager [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.778 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432447.777133, f5132a2b-83eb-4bb1-a556-480d9fb97fec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.781 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] VM Started (Lifecycle Event)
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.785 2 DEBUG nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.794 2 INFO nova.virt.libvirt.driver [-] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Instance spawned successfully.
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.797 2 DEBUG nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.804 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.809 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.822 2 DEBUG nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.823 2 DEBUG nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.824 2 DEBUG nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.825 2 DEBUG nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.826 2 DEBUG nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.827 2 DEBUG nova.virt.libvirt.driver [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.834 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.835 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432447.7782354, f5132a2b-83eb-4bb1-a556-480d9fb97fec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.835 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] VM Paused (Lifecycle Event)
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.866 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.869 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432447.7867389, f5132a2b-83eb-4bb1-a556-480d9fb97fec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.869 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] VM Resumed (Lifecycle Event)
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.888 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.891 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.897 2 INFO nova.compute.manager [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Took 7.42 seconds to spawn the instance on the hypervisor.
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.898 2 DEBUG nova.compute.manager [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.909 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.951 2 INFO nova.compute.manager [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Took 8.51 seconds to build instance.
Oct 14 09:00:47 compute-0 nova_compute[259627]: 2025-10-14 09:00:47.968 2 DEBUG oslo_concurrency.lockutils [None req-aa36e12a-ebb5-4cd0-bd70-2875f72bc84c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:48 compute-0 ovn_controller[152662]: 2025-10-14T09:00:48Z|00364|binding|INFO|Releasing lport 6baedd76-8a05-42d6-8356-18b586f58672 from this chassis (sb_readonly=0)
Oct 14 09:00:48 compute-0 ovn_controller[152662]: 2025-10-14T09:00:48Z|00365|binding|INFO|Releasing lport 0616bbde-729a-4cd4-ba39-5fcdf59ece5e from this chassis (sb_readonly=0)
Oct 14 09:00:48 compute-0 nova_compute[259627]: 2025-10-14 09:00:48.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:49 compute-0 nova_compute[259627]: 2025-10-14 09:00:49.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:49 compute-0 ceph-mon[74249]: pgmap v1297: 305 pgs: 305 active+clean; 418 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 6.9 MiB/s wr, 515 op/s
Oct 14 09:00:49 compute-0 nova_compute[259627]: 2025-10-14 09:00:49.443 2 DEBUG nova.compute.manager [req-75910186-3703-42cb-9cd8-c88a11c16ceb req-96d087d2-6e61-44bc-acd4-dc70f63cb4ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Received event network-vif-plugged-625280ef-3da7-4600-8f28-4ab4e77ff594 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:49 compute-0 nova_compute[259627]: 2025-10-14 09:00:49.444 2 DEBUG oslo_concurrency.lockutils [req-75910186-3703-42cb-9cd8-c88a11c16ceb req-96d087d2-6e61-44bc-acd4-dc70f63cb4ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:49 compute-0 nova_compute[259627]: 2025-10-14 09:00:49.445 2 DEBUG oslo_concurrency.lockutils [req-75910186-3703-42cb-9cd8-c88a11c16ceb req-96d087d2-6e61-44bc-acd4-dc70f63cb4ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:49 compute-0 nova_compute[259627]: 2025-10-14 09:00:49.445 2 DEBUG oslo_concurrency.lockutils [req-75910186-3703-42cb-9cd8-c88a11c16ceb req-96d087d2-6e61-44bc-acd4-dc70f63cb4ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:49 compute-0 nova_compute[259627]: 2025-10-14 09:00:49.446 2 DEBUG nova.compute.manager [req-75910186-3703-42cb-9cd8-c88a11c16ceb req-96d087d2-6e61-44bc-acd4-dc70f63cb4ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] No waiting events found dispatching network-vif-plugged-625280ef-3da7-4600-8f28-4ab4e77ff594 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:49 compute-0 nova_compute[259627]: 2025-10-14 09:00:49.446 2 WARNING nova.compute.manager [req-75910186-3703-42cb-9cd8-c88a11c16ceb req-96d087d2-6e61-44bc-acd4-dc70f63cb4ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Received unexpected event network-vif-plugged-625280ef-3da7-4600-8f28-4ab4e77ff594 for instance with vm_state active and task_state None.
Oct 14 09:00:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 418 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 6.9 MiB/s wr, 515 op/s
Oct 14 09:00:50 compute-0 nova_compute[259627]: 2025-10-14 09:00:50.036 2 DEBUG nova.compute.manager [None req-364d6257-1f36-427c-b9ec-c1b4b7bfb03c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:50 compute-0 nova_compute[259627]: 2025-10-14 09:00:50.221 2 INFO nova.compute.manager [None req-364d6257-1f36-427c-b9ec-c1b4b7bfb03c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] instance snapshotting
Oct 14 09:00:50 compute-0 nova_compute[259627]: 2025-10-14 09:00:50.413 2 DEBUG nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 09:00:50 compute-0 nova_compute[259627]: 2025-10-14 09:00:50.481 2 INFO nova.virt.libvirt.driver [None req-364d6257-1f36-427c-b9ec-c1b4b7bfb03c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Beginning live snapshot process
Oct 14 09:00:50 compute-0 nova_compute[259627]: 2025-10-14 09:00:50.621 2 DEBUG nova.virt.libvirt.imagebackend [None req-364d6257-1f36-427c-b9ec-c1b4b7bfb03c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 09:00:50 compute-0 nova_compute[259627]: 2025-10-14 09:00:50.898 2 DEBUG nova.storage.rbd_utils [None req-364d6257-1f36-427c-b9ec-c1b4b7bfb03c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] creating snapshot(58b9ec08fb9641398d863b577e68c076) on rbd image(f5132a2b-83eb-4bb1-a556-480d9fb97fec_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:00:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Oct 14 09:00:51 compute-0 ceph-mon[74249]: pgmap v1298: 305 pgs: 305 active+clean; 418 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 6.9 MiB/s wr, 515 op/s
Oct 14 09:00:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Oct 14 09:00:51 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Oct 14 09:00:51 compute-0 nova_compute[259627]: 2025-10-14 09:00:51.422 2 DEBUG nova.storage.rbd_utils [None req-364d6257-1f36-427c-b9ec-c1b4b7bfb03c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] cloning vms/f5132a2b-83eb-4bb1-a556-480d9fb97fec_disk@58b9ec08fb9641398d863b577e68c076 to images/a641ea4f-3256-4d43-ac58-694946a339b0 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:00:51 compute-0 nova_compute[259627]: 2025-10-14 09:00:51.571 2 DEBUG nova.storage.rbd_utils [None req-364d6257-1f36-427c-b9ec-c1b4b7bfb03c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] flattening images/a641ea4f-3256-4d43-ac58-694946a339b0 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:00:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 451 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 5.2 MiB/s wr, 304 op/s
Oct 14 09:00:51 compute-0 nova_compute[259627]: 2025-10-14 09:00:51.949 2 DEBUG nova.storage.rbd_utils [None req-364d6257-1f36-427c-b9ec-c1b4b7bfb03c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] removing snapshot(58b9ec08fb9641398d863b577e68c076) on rbd image(f5132a2b-83eb-4bb1-a556-480d9fb97fec_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:00:52 compute-0 nova_compute[259627]: 2025-10-14 09:00:52.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:52 compute-0 nova_compute[259627]: 2025-10-14 09:00:52.120 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432437.118779, 6de921d2-e251-431d-9333-bae44aa81859 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:52 compute-0 nova_compute[259627]: 2025-10-14 09:00:52.121 2 INFO nova.compute.manager [-] [instance: 6de921d2-e251-431d-9333-bae44aa81859] VM Stopped (Lifecycle Event)
Oct 14 09:00:52 compute-0 nova_compute[259627]: 2025-10-14 09:00:52.139 2 DEBUG nova.compute.manager [None req-91dd490a-0cd8-4675-a465-2a11bdbeea02 - - - - - -] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Oct 14 09:00:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Oct 14 09:00:52 compute-0 ceph-mon[74249]: osdmap e160: 3 total, 3 up, 3 in
Oct 14 09:00:52 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Oct 14 09:00:52 compute-0 nova_compute[259627]: 2025-10-14 09:00:52.451 2 DEBUG nova.storage.rbd_utils [None req-364d6257-1f36-427c-b9ec-c1b4b7bfb03c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] creating snapshot(snap) on rbd image(a641ea4f-3256-4d43-ac58-694946a339b0) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:00:52 compute-0 ovn_controller[152662]: 2025-10-14T09:00:52Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:c2:c5 10.100.0.8
Oct 14 09:00:52 compute-0 ovn_controller[152662]: 2025-10-14T09:00:52Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:c2:c5 10.100.0.8
Oct 14 09:00:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:00:52 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Deactivated successfully.
Oct 14 09:00:52 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Consumed 12.419s CPU time.
Oct 14 09:00:52 compute-0 systemd-machined[214636]: Machine qemu-45-instance-00000029 terminated.
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.093 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 09:00:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Oct 14 09:00:53 compute-0 ceph-mon[74249]: pgmap v1300: 305 pgs: 305 active+clean; 451 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 5.2 MiB/s wr, 304 op/s
Oct 14 09:00:53 compute-0 ceph-mon[74249]: osdmap e161: 3 total, 3 up, 3 in
Oct 14 09:00:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Oct 14 09:00:53 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.503 2 INFO nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Instance shutdown successfully after 13 seconds.
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.508 2 INFO nova.virt.libvirt.driver [-] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Instance destroyed successfully.
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.513 2 INFO nova.virt.libvirt.driver [-] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Instance destroyed successfully.
Oct 14 09:00:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 451 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 276 op/s
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver [None req-364d6257-1f36-427c-b9ec-c1b4b7bfb03c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image a641ea4f-3256-4d43-ac58-694946a339b0 could not be found.
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID a641ea4f-3256-4d43-ac58-694946a339b0
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver 
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver 
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image a641ea4f-3256-4d43-ac58-694946a339b0 could not be found.
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.620 2 ERROR nova.virt.libvirt.driver 
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.675 2 DEBUG nova.storage.rbd_utils [None req-364d6257-1f36-427c-b9ec-c1b4b7bfb03c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] removing snapshot(snap) on rbd image(a641ea4f-3256-4d43-ac58-694946a339b0) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.920 2 INFO nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Deleting instance files /var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127_del
Oct 14 09:00:53 compute-0 nova_compute[259627]: 2025-10-14 09:00:53.921 2 INFO nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Deletion of /var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127_del complete
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.266 2 DEBUG nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.267 2 INFO nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Creating image(s)
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.282 2 DEBUG nova.storage.rbd_utils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] rbd image 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.300 2 DEBUG nova.storage.rbd_utils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] rbd image 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.322 2 DEBUG nova.storage.rbd_utils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] rbd image 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.325 2 DEBUG oslo_concurrency.processutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Oct 14 09:00:54 compute-0 ceph-mon[74249]: osdmap e162: 3 total, 3 up, 3 in
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.389 2 DEBUG oslo_concurrency.processutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.390 2 DEBUG oslo_concurrency.lockutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Acquiring lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.391 2 DEBUG oslo_concurrency.lockutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.391 2 DEBUG oslo_concurrency.lockutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Oct 14 09:00:54 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.425 2 DEBUG nova.storage.rbd_utils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] rbd image 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.430 2 DEBUG oslo_concurrency.processutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.463 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432439.456358, ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.464 2 INFO nova.compute.manager [-] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] VM Stopped (Lifecycle Event)
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.527 2 DEBUG nova.compute.manager [None req-2501a6a4-d66a-4150-bb9c-17cf13f9c902 - - - - - -] [instance: ebf1173a-58d3-4ae2-b641-ca05a1f7ec2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.710 2 DEBUG oslo_concurrency.processutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.800 2 DEBUG nova.storage.rbd_utils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] resizing rbd image 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.921 2 WARNING nova.compute.manager [None req-364d6257-1f36-427c-b9ec-c1b4b7bfb03c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Image not found during snapshot: nova.exception.ImageNotFound: Image a641ea4f-3256-4d43-ac58-694946a339b0 could not be found.
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.928 2 DEBUG nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.928 2 DEBUG nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Ensure instance console log exists: /var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.929 2 DEBUG oslo_concurrency.lockutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.930 2 DEBUG oslo_concurrency.lockutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.930 2 DEBUG oslo_concurrency.lockutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.932 2 DEBUG nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.935 2 WARNING nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.946 2 DEBUG nova.virt.libvirt.host [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.947 2 DEBUG nova.virt.libvirt.host [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.952 2 DEBUG nova.virt.libvirt.host [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.952 2 DEBUG nova.virt.libvirt.host [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.953 2 DEBUG nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.953 2 DEBUG nova.virt.hardware [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.954 2 DEBUG nova.virt.hardware [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.954 2 DEBUG nova.virt.hardware [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.955 2 DEBUG nova.virt.hardware [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.955 2 DEBUG nova.virt.hardware [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.955 2 DEBUG nova.virt.hardware [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.956 2 DEBUG nova.virt.hardware [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.956 2 DEBUG nova.virt.hardware [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.956 2 DEBUG nova.virt.hardware [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.957 2 DEBUG nova.virt.hardware [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.957 2 DEBUG nova.virt.hardware [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.958 2 DEBUG nova.objects.instance [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1fa36bef-0a0f-4f9c-877c-325a656fa127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:54 compute-0 nova_compute[259627]: 2025-10-14 09:00:54.973 2 DEBUG oslo_concurrency.processutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:55 compute-0 ceph-mon[74249]: pgmap v1303: 305 pgs: 305 active+clean; 451 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 276 op/s
Oct 14 09:00:55 compute-0 ceph-mon[74249]: osdmap e163: 3 total, 3 up, 3 in
Oct 14 09:00:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2921861379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:55 compute-0 nova_compute[259627]: 2025-10-14 09:00:55.486 2 DEBUG oslo_concurrency.processutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:55 compute-0 nova_compute[259627]: 2025-10-14 09:00:55.539 2 DEBUG nova.storage.rbd_utils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] rbd image 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:55 compute-0 nova_compute[259627]: 2025-10-14 09:00:55.547 2 DEBUG oslo_concurrency.processutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:55 compute-0 kernel: tap60379992-d7 (unregistering): left promiscuous mode
Oct 14 09:00:55 compute-0 NetworkManager[44885]: <info>  [1760432455.5832] device (tap60379992-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:00:55 compute-0 ovn_controller[152662]: 2025-10-14T09:00:55Z|00366|binding|INFO|Releasing lport 60379992-d75d-4eff-a6bb-5d1615f35475 from this chassis (sb_readonly=0)
Oct 14 09:00:55 compute-0 ovn_controller[152662]: 2025-10-14T09:00:55Z|00367|binding|INFO|Setting lport 60379992-d75d-4eff-a6bb-5d1615f35475 down in Southbound
Oct 14 09:00:55 compute-0 ovn_controller[152662]: 2025-10-14T09:00:55Z|00368|binding|INFO|Removing iface tap60379992-d7 ovn-installed in OVS
Oct 14 09:00:55 compute-0 nova_compute[259627]: 2025-10-14 09:00:55.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:55.600 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:c2:c5 10.100.0.8'], port_security=['fa:16:3e:47:c2:c5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '310ebd88-5fe0-40ad-99dd-c3a1b410d357', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a616a5a0-dd86-4326-bbdf-7cf172de843b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1013e89-bd11-44b1-be74-a5ce3b4c520f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=60379992-d75d-4eff-a6bb-5d1615f35475) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:55.601 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 60379992-d75d-4eff-a6bb-5d1615f35475 in datapath ea0c857a-d31a-43a0-b285-c89c1ddc920a unbound from our chassis
Oct 14 09:00:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:55.602 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ea0c857a-d31a-43a0-b285-c89c1ddc920a
Oct 14 09:00:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 451 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 16 MiB/s wr, 564 op/s
Oct 14 09:00:55 compute-0 nova_compute[259627]: 2025-10-14 09:00:55.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:55.618 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[56a15f73-6682-4f0a-832e-8719ef1aefa1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:55.652 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ab71debc-e01e-4905-92d9-cc0e4fd352c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:55.657 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[dd3ebf0c-342b-4672-bb5f-c92cc37193ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:55 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000020.scope: Deactivated successfully.
Oct 14 09:00:55 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000020.scope: Consumed 13.600s CPU time.
Oct 14 09:00:55 compute-0 systemd-machined[214636]: Machine qemu-47-instance-00000020 terminated.
Oct 14 09:00:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:55.692 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fae756c3-9df8-4566-affa-1e7ba8077d8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:55.710 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[44dac046-22d3-4c60-9294-e737e8fb618b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea0c857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:51:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 15, 'rx_bytes': 1168, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 15, 'rx_bytes': 1168, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616925, 'reachable_time': 21171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305524, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:55.731 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0920796e-60c0-43a1-80d4-462e73177d7d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616940, 'tstamp': 616940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305534, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616944, 'tstamp': 616944}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305534, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:55.733 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea0c857a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:55 compute-0 nova_compute[259627]: 2025-10-14 09:00:55.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:55.739 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea0c857a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:55.739 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:55 compute-0 nova_compute[259627]: 2025-10-14 09:00:55.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:55.740 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapea0c857a-d0, col_values=(('external_ids', {'iface-id': '6baedd76-8a05-42d6-8356-18b586f58672'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:55.740 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/84368138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.017 2 DEBUG oslo_concurrency.processutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.019 2 DEBUG nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:00:56 compute-0 nova_compute[259627]:   <uuid>1fa36bef-0a0f-4f9c-877c-325a656fa127</uuid>
Oct 14 09:00:56 compute-0 nova_compute[259627]:   <name>instance-00000029</name>
Oct 14 09:00:56 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:00:56 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:00:56 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerShowV254Test-server-272120799</nova:name>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:00:54</nova:creationTime>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:00:56 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:00:56 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:00:56 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:00:56 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:00:56 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:00:56 compute-0 nova_compute[259627]:         <nova:user uuid="ea4260bffbae4126852a6ddd7dbb8c70">tempest-ServerShowV254Test-33884122-project-member</nova:user>
Oct 14 09:00:56 compute-0 nova_compute[259627]:         <nova:project uuid="0dccd092671340e29357b2b2f18f6e3e">tempest-ServerShowV254Test-33884122</nova:project>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="e2368e3e-f504-40e6-a9d3-67df18c845bb"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:00:56 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:00:56 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <system>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <entry name="serial">1fa36bef-0a0f-4f9c-877c-325a656fa127</entry>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <entry name="uuid">1fa36bef-0a0f-4f9c-877c-325a656fa127</entry>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     </system>
Oct 14 09:00:56 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:00:56 compute-0 nova_compute[259627]:   <os>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:   </os>
Oct 14 09:00:56 compute-0 nova_compute[259627]:   <features>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:   </features>
Oct 14 09:00:56 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:00:56 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:00:56 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1fa36bef-0a0f-4f9c-877c-325a656fa127_disk">
Oct 14 09:00:56 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:56 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1fa36bef-0a0f-4f9c-877c-325a656fa127_disk.config">
Oct 14 09:00:56 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:56 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127/console.log" append="off"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <video>
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     </video>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:00:56 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:00:56 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:00:56 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:00:56 compute-0 nova_compute[259627]: </domain>
Oct 14 09:00:56 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.074 2 DEBUG nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.074 2 DEBUG nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.074 2 INFO nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Using config drive
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.092 2 DEBUG nova.storage.rbd_utils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] rbd image 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.114 2 DEBUG nova.objects.instance [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1fa36bef-0a0f-4f9c-877c-325a656fa127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.222 2 INFO nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance shutdown successfully after 13 seconds.
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.226 2 INFO nova.virt.libvirt.driver [-] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance destroyed successfully.
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.230 2 INFO nova.virt.libvirt.driver [-] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance destroyed successfully.
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.231 2 DEBUG nova.virt.libvirt.vif [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T08:59:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-490112967',display_name='tempest-ServersAdminTestJSON-server-490112967',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-490112967',id=32,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:00:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-xf25yl7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-project-member'},tags
=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:00:42Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=310ebd88-5fe0-40ad-99dd-c3a1b410d357,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.232 2 DEBUG nova.network.os_vif_util [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.232 2 DEBUG nova.network.os_vif_util [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.233 2 DEBUG os_vif [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.234 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60379992-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.242 2 INFO os_vif [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7')
Oct 14 09:00:56 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2921861379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:56 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/84368138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.655 2 INFO nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Deleting instance files /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357_del
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.657 2 INFO nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Deletion of /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357_del complete
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.929 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.930 2 INFO nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Creating image(s)
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.962 2 DEBUG nova.storage.rbd_utils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:56 compute-0 nova_compute[259627]: 2025-10-14 09:00:56.986 2 DEBUG nova.storage.rbd_utils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.008 2 DEBUG nova.storage.rbd_utils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.011 2 DEBUG oslo_concurrency.processutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.092 2 INFO nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Creating config drive at /var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127/disk.config
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.101 2 DEBUG oslo_concurrency.processutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpes2jc9we execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.145 2 DEBUG oslo_concurrency.processutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.146 2 DEBUG oslo_concurrency.lockutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.147 2 DEBUG oslo_concurrency.lockutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.148 2 DEBUG oslo_concurrency.lockutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.176 2 DEBUG nova.storage.rbd_utils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.181 2 DEBUG oslo_concurrency.processutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.249 2 DEBUG oslo_concurrency.processutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpes2jc9we" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.283 2 DEBUG nova.storage.rbd_utils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] rbd image 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.287 2 DEBUG oslo_concurrency.processutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127/disk.config 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.435 2 DEBUG nova.compute.manager [req-b9b2d293-f7e8-41ab-ac15-2cf609086dbe req-cbc7d3dc-2013-4794-8a3b-dd47b697fa76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received event network-vif-unplugged-60379992-d75d-4eff-a6bb-5d1615f35475 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.435 2 DEBUG oslo_concurrency.lockutils [req-b9b2d293-f7e8-41ab-ac15-2cf609086dbe req-cbc7d3dc-2013-4794-8a3b-dd47b697fa76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.436 2 DEBUG oslo_concurrency.lockutils [req-b9b2d293-f7e8-41ab-ac15-2cf609086dbe req-cbc7d3dc-2013-4794-8a3b-dd47b697fa76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.436 2 DEBUG oslo_concurrency.lockutils [req-b9b2d293-f7e8-41ab-ac15-2cf609086dbe req-cbc7d3dc-2013-4794-8a3b-dd47b697fa76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.436 2 DEBUG nova.compute.manager [req-b9b2d293-f7e8-41ab-ac15-2cf609086dbe req-cbc7d3dc-2013-4794-8a3b-dd47b697fa76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] No waiting events found dispatching network-vif-unplugged-60379992-d75d-4eff-a6bb-5d1615f35475 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.436 2 WARNING nova.compute.manager [req-b9b2d293-f7e8-41ab-ac15-2cf609086dbe req-cbc7d3dc-2013-4794-8a3b-dd47b697fa76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received unexpected event network-vif-unplugged-60379992-d75d-4eff-a6bb-5d1615f35475 for instance with vm_state active and task_state rebuild_spawning.
Oct 14 09:00:57 compute-0 ceph-mon[74249]: pgmap v1305: 305 pgs: 305 active+clean; 451 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 16 MiB/s wr, 564 op/s
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.488 2 DEBUG oslo_concurrency.processutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.538 2 DEBUG oslo_concurrency.processutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127/disk.config 1fa36bef-0a0f-4f9c-877c-325a656fa127_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.538 2 INFO nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Deleting local config drive /var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127/disk.config because it was imported into RBD.
Oct 14 09:00:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:00:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.550 2 DEBUG nova.storage.rbd_utils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] resizing rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:00:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Oct 14 09:00:57 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Oct 14 09:00:57 compute-0 systemd-machined[214636]: New machine qemu-49-instance-00000029.
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.590 2 DEBUG oslo_concurrency.lockutils [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.591 2 DEBUG oslo_concurrency.lockutils [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.591 2 DEBUG oslo_concurrency.lockutils [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.591 2 DEBUG oslo_concurrency.lockutils [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.591 2 DEBUG oslo_concurrency.lockutils [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.593 2 INFO nova.compute.manager [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Terminating instance
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.594 2 DEBUG nova.compute.manager [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:00:57 compute-0 systemd[1]: Started Virtual Machine qemu-49-instance-00000029.
Oct 14 09:00:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 451 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 13 MiB/s wr, 457 op/s
Oct 14 09:00:57 compute-0 kernel: tap625280ef-3d (unregistering): left promiscuous mode
Oct 14 09:00:57 compute-0 NetworkManager[44885]: <info>  [1760432457.6309] device (tap625280ef-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:00:57 compute-0 ovn_controller[152662]: 2025-10-14T09:00:57Z|00369|binding|INFO|Releasing lport 625280ef-3da7-4600-8f28-4ab4e77ff594 from this chassis (sb_readonly=0)
Oct 14 09:00:57 compute-0 ovn_controller[152662]: 2025-10-14T09:00:57Z|00370|binding|INFO|Setting lport 625280ef-3da7-4600-8f28-4ab4e77ff594 down in Southbound
Oct 14 09:00:57 compute-0 ovn_controller[152662]: 2025-10-14T09:00:57Z|00371|binding|INFO|Removing iface tap625280ef-3d ovn-installed in OVS
Oct 14 09:00:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:57.650 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:6a:d6 10.100.0.8'], port_security=['fa:16:3e:ea:6a:d6 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f5132a2b-83eb-4bb1-a556-480d9fb97fec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=625280ef-3da7-4600-8f28-4ab4e77ff594) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:57.651 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 625280ef-3da7-4600-8f28-4ab4e77ff594 in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a unbound from our chassis
Oct 14 09:00:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:57.653 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:00:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:57.653 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[07caad04-2590-460c-bb2f-943cfe488cb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:57.656 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a namespace which is not needed anymore
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:57 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Oct 14 09:00:57 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002a.scope: Consumed 10.932s CPU time.
Oct 14 09:00:57 compute-0 systemd-machined[214636]: Machine qemu-48-instance-0000002a terminated.
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.705 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.705 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Ensure instance console log exists: /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.707 2 DEBUG oslo_concurrency.lockutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.707 2 DEBUG oslo_concurrency.lockutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.708 2 DEBUG oslo_concurrency.lockutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.710 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Start _get_guest_xml network_info=[{"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.714 2 WARNING nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.720 2 DEBUG nova.virt.libvirt.host [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.720 2 DEBUG nova.virt.libvirt.host [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.723 2 DEBUG nova.virt.libvirt.host [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.724 2 DEBUG nova.virt.libvirt.host [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.724 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.724 2 DEBUG nova.virt.hardware [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.725 2 DEBUG nova.virt.hardware [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.725 2 DEBUG nova.virt.hardware [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.725 2 DEBUG nova.virt.hardware [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.726 2 DEBUG nova.virt.hardware [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.726 2 DEBUG nova.virt.hardware [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.726 2 DEBUG nova.virt.hardware [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.726 2 DEBUG nova.virt.hardware [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.727 2 DEBUG nova.virt.hardware [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.727 2 DEBUG nova.virt.hardware [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.727 2 DEBUG nova.virt.hardware [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.728 2 DEBUG nova.objects.instance [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.746 2 DEBUG oslo_concurrency.processutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:57 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[305080]: [NOTICE]   (305085) : haproxy version is 2.8.14-c23fe91
Oct 14 09:00:57 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[305080]: [NOTICE]   (305085) : path to executable is /usr/sbin/haproxy
Oct 14 09:00:57 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[305080]: [WARNING]  (305085) : Exiting Master process...
Oct 14 09:00:57 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[305080]: [ALERT]    (305085) : Current worker (305087) exited with code 143 (Terminated)
Oct 14 09:00:57 compute-0 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[305080]: [WARNING]  (305085) : All workers exited. Exiting... (0)
Oct 14 09:00:57 compute-0 systemd[1]: libpod-9c2fed4c8ac03214c1ae4bb35af097635af3096c9c5165cef7c0a22c99f224b9.scope: Deactivated successfully.
Oct 14 09:00:57 compute-0 podman[305832]: 2025-10-14 09:00:57.792438156 +0000 UTC m=+0.043963042 container died 9c2fed4c8ac03214c1ae4bb35af097635af3096c9c5165cef7c0a22c99f224b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 09:00:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c2fed4c8ac03214c1ae4bb35af097635af3096c9c5165cef7c0a22c99f224b9-userdata-shm.mount: Deactivated successfully.
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-94473d2644af28465e64fe09d42bd242df1c7f4c209c94cc8abaecea8272e0f7-merged.mount: Deactivated successfully.
Oct 14 09:00:57 compute-0 podman[305832]: 2025-10-14 09:00:57.842901616 +0000 UTC m=+0.094426502 container cleanup 9c2fed4c8ac03214c1ae4bb35af097635af3096c9c5165cef7c0a22c99f224b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.848 2 INFO nova.virt.libvirt.driver [-] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Instance destroyed successfully.
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.850 2 DEBUG nova.objects.instance [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'resources' on Instance uuid f5132a2b-83eb-4bb1-a556-480d9fb97fec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.867 2 DEBUG nova.virt.libvirt.vif [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:00:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-936263881',display_name='tempest-ImagesTestJSON-server-936263881',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-936263881',id=42,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:00:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-v3x7rtof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',o
wner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:00:54Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=f5132a2b-83eb-4bb1-a556-480d9fb97fec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "625280ef-3da7-4600-8f28-4ab4e77ff594", "address": "fa:16:3e:ea:6a:d6", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap625280ef-3d", "ovs_interfaceid": "625280ef-3da7-4600-8f28-4ab4e77ff594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.868 2 DEBUG nova.network.os_vif_util [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "625280ef-3da7-4600-8f28-4ab4e77ff594", "address": "fa:16:3e:ea:6a:d6", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap625280ef-3d", "ovs_interfaceid": "625280ef-3da7-4600-8f28-4ab4e77ff594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.869 2 DEBUG nova.network.os_vif_util [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:6a:d6,bridge_name='br-int',has_traffic_filtering=True,id=625280ef-3da7-4600-8f28-4ab4e77ff594,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap625280ef-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.870 2 DEBUG os_vif [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:6a:d6,bridge_name='br-int',has_traffic_filtering=True,id=625280ef-3da7-4600-8f28-4ab4e77ff594,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap625280ef-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.872 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap625280ef-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:57 compute-0 systemd[1]: libpod-conmon-9c2fed4c8ac03214c1ae4bb35af097635af3096c9c5165cef7c0a22c99f224b9.scope: Deactivated successfully.
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.881 2 INFO os_vif [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:6a:d6,bridge_name='br-int',has_traffic_filtering=True,id=625280ef-3da7-4600-8f28-4ab4e77ff594,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap625280ef-3d')
Oct 14 09:00:57 compute-0 podman[305867]: 2025-10-14 09:00:57.923823266 +0000 UTC m=+0.046591176 container remove 9c2fed4c8ac03214c1ae4bb35af097635af3096c9c5165cef7c0a22c99f224b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:00:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:57.931 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d25a3069-1577-445a-952e-8e02fc9f66b1]: (4, ('Tue Oct 14 09:00:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a (9c2fed4c8ac03214c1ae4bb35af097635af3096c9c5165cef7c0a22c99f224b9)\n9c2fed4c8ac03214c1ae4bb35af097635af3096c9c5165cef7c0a22c99f224b9\nTue Oct 14 09:00:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a (9c2fed4c8ac03214c1ae4bb35af097635af3096c9c5165cef7c0a22c99f224b9)\n9c2fed4c8ac03214c1ae4bb35af097635af3096c9c5165cef7c0a22c99f224b9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:57.933 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[806a1046-a9aa-40ce-8bed-3a1f422c6028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:57.934 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:57 compute-0 kernel: tap2322cf7a-00: left promiscuous mode
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:57.946 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d2bd25-ad90-40f0-b9ad-e791d14d8f4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:57 compute-0 nova_compute[259627]: 2025-10-14 09:00:57.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:57.976 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[13bb69ab-4cff-427a-b5f6-5f42feb8f256]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:57.977 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ce089d02-ac72-44b2-83a8-1a6e4bef4a98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:57.992 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b7eda76c-119e-4705-9e29-3c3ccbca885c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622523, 'reachable_time': 41323, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305922, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d2322cf7a\x2d0090\x2d40fa\x2da558\x2d42d84cc6fc2a.mount: Deactivated successfully.
Oct 14 09:00:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:57.997 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:00:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:57.997 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[642586be-273f-4ac8-b9ec-a06e261d7fc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:58 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/483361604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.245 2 DEBUG oslo_concurrency.processutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.269 2 DEBUG nova.storage.rbd_utils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.273 2 DEBUG oslo_concurrency.processutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.319 2 INFO nova.virt.libvirt.driver [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Deleting instance files /var/lib/nova/instances/f5132a2b-83eb-4bb1-a556-480d9fb97fec_del
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.321 2 INFO nova.virt.libvirt.driver [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Deletion of /var/lib/nova/instances/f5132a2b-83eb-4bb1-a556-480d9fb97fec_del complete
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.413 2 INFO nova.compute.manager [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.414 2 DEBUG oslo.service.loopingcall [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.415 2 DEBUG nova.compute.manager [-] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.415 2 DEBUG nova.network.neutron [-] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:00:58 compute-0 ceph-mon[74249]: osdmap e164: 3 total, 3 up, 3 in
Oct 14 09:00:58 compute-0 ceph-mon[74249]: pgmap v1307: 305 pgs: 305 active+clean; 451 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 13 MiB/s wr, 457 op/s
Oct 14 09:00:58 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/483361604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:00:58 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1224932008' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.699 2 DEBUG oslo_concurrency.processutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.702 2 DEBUG nova.virt.libvirt.vif [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T08:59:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-490112967',display_name='tempest-ServersAdminTestJSON-server-490112967',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-490112967',id=32,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:00:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-xf25yl7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276
167539-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:00:56Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=310ebd88-5fe0-40ad-99dd-c3a1b410d357,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.703 2 DEBUG nova.network.os_vif_util [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.704 2 DEBUG nova.network.os_vif_util [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.709 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:00:58 compute-0 nova_compute[259627]:   <uuid>310ebd88-5fe0-40ad-99dd-c3a1b410d357</uuid>
Oct 14 09:00:58 compute-0 nova_compute[259627]:   <name>instance-00000020</name>
Oct 14 09:00:58 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:00:58 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:00:58 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersAdminTestJSON-server-490112967</nova:name>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:00:57</nova:creationTime>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:00:58 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:00:58 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:00:58 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:00:58 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:00:58 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:00:58 compute-0 nova_compute[259627]:         <nova:user uuid="56001fe1c9fc432e923f8c57058754db">tempest-ServersAdminTestJSON-276167539-project-member</nova:user>
Oct 14 09:00:58 compute-0 nova_compute[259627]:         <nova:project uuid="ed7ee17abdbe419cb7d7fd0da2cd2068">tempest-ServersAdminTestJSON-276167539</nova:project>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:00:58 compute-0 nova_compute[259627]:         <nova:port uuid="60379992-d75d-4eff-a6bb-5d1615f35475">
Oct 14 09:00:58 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:00:58 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:00:58 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <system>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <entry name="serial">310ebd88-5fe0-40ad-99dd-c3a1b410d357</entry>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <entry name="uuid">310ebd88-5fe0-40ad-99dd-c3a1b410d357</entry>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     </system>
Oct 14 09:00:58 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:00:58 compute-0 nova_compute[259627]:   <os>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:   </os>
Oct 14 09:00:58 compute-0 nova_compute[259627]:   <features>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:   </features>
Oct 14 09:00:58 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:00:58 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:00:58 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk">
Oct 14 09:00:58 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:58 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config">
Oct 14 09:00:58 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       </source>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:00:58 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:47:c2:c5"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <target dev="tap60379992-d7"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/console.log" append="off"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <video>
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     </video>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:00:58 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:00:58 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:00:58 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:00:58 compute-0 nova_compute[259627]: </domain>
Oct 14 09:00:58 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.711 2 DEBUG nova.compute.manager [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Preparing to wait for external event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.711 2 DEBUG oslo_concurrency.lockutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.712 2 DEBUG oslo_concurrency.lockutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.712 2 DEBUG oslo_concurrency.lockutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.712 2 DEBUG nova.virt.libvirt.vif [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T08:59:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-490112967',display_name='tempest-ServersAdminTestJSON-server-490112967',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-490112967',id=32,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:00:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-xf25yl7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:00:56Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=310ebd88-5fe0-40ad-99dd-c3a1b410d357,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.713 2 DEBUG nova.network.os_vif_util [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.713 2 DEBUG nova.network.os_vif_util [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.713 2 DEBUG os_vif [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.714 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.715 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.716 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60379992-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.717 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap60379992-d7, col_values=(('external_ids', {'iface-id': '60379992-d75d-4eff-a6bb-5d1615f35475', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:c2:c5', 'vm-uuid': '310ebd88-5fe0-40ad-99dd-c3a1b410d357'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:58 compute-0 NetworkManager[44885]: <info>  [1760432458.7203] manager: (tap60379992-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/168)
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.727 2 INFO os_vif [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7')
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.818 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.819 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.819 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No VIF found with MAC fa:16:3e:47:c2:c5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.819 2 INFO nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Using config drive
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.838 2 DEBUG nova.storage.rbd_utils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.861 2 DEBUG nova.objects.instance [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.870 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 1fa36bef-0a0f-4f9c-877c-325a656fa127 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.871 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432458.869705, 1fa36bef-0a0f-4f9c-877c-325a656fa127 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.872 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] VM Resumed (Lifecycle Event)
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.876 2 DEBUG nova.compute.manager [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.877 2 DEBUG nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.881 2 INFO nova.virt.libvirt.driver [-] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Instance spawned successfully.
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.882 2 DEBUG nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.927 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.930 2 DEBUG nova.objects.instance [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'keypairs' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.937 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.940 2 DEBUG nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.941 2 DEBUG nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.942 2 DEBUG nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.943 2 DEBUG nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.943 2 DEBUG nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.944 2 DEBUG nova.virt.libvirt.driver [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.980 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.981 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432458.8709002, 1fa36bef-0a0f-4f9c-877c-325a656fa127 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:00:58 compute-0 nova_compute[259627]: 2025-10-14 09:00:58.981 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] VM Started (Lifecycle Event)
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.012 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.016 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.028 2 DEBUG nova.compute.manager [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.040 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.109 2 DEBUG nova.network.neutron [-] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.111 2 DEBUG oslo_concurrency.lockutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.112 2 DEBUG oslo_concurrency.lockutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.112 2 DEBUG nova.objects.instance [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.147 2 INFO nova.compute.manager [-] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Took 0.73 seconds to deallocate network for instance.
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.186 2 DEBUG oslo_concurrency.lockutils [None req-0ab4c478-b408-4caf-bbb4-e12aad5e6ded ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.204 2 DEBUG oslo_concurrency.lockutils [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.205 2 DEBUG oslo_concurrency.lockutils [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.207 2 DEBUG nova.compute.manager [req-e631a05c-a6cf-40ec-b54a-0654af3d5f19 req-909cc69c-1205-4e0e-92ed-89ec1fdbdaed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Received event network-vif-deleted-625280ef-3da7-4600-8f28-4ab4e77ff594 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.320 2 DEBUG oslo_concurrency.processutils [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.364 2 INFO nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Creating config drive at /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.369 2 DEBUG oslo_concurrency.processutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8z4ep23m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.504 2 DEBUG oslo_concurrency.processutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8z4ep23m" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.528 2 DEBUG nova.storage.rbd_utils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.531 2 DEBUG oslo_concurrency.processutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:00:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1224932008' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.566 2 DEBUG nova.compute.manager [req-702a52b5-04c1-48bc-b6dc-f9b62c312c54 req-dfca6918-f0fe-42a6-85ed-92ce2acf92cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.567 2 DEBUG oslo_concurrency.lockutils [req-702a52b5-04c1-48bc-b6dc-f9b62c312c54 req-dfca6918-f0fe-42a6-85ed-92ce2acf92cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.567 2 DEBUG oslo_concurrency.lockutils [req-702a52b5-04c1-48bc-b6dc-f9b62c312c54 req-dfca6918-f0fe-42a6-85ed-92ce2acf92cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.568 2 DEBUG oslo_concurrency.lockutils [req-702a52b5-04c1-48bc-b6dc-f9b62c312c54 req-dfca6918-f0fe-42a6-85ed-92ce2acf92cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.568 2 DEBUG nova.compute.manager [req-702a52b5-04c1-48bc-b6dc-f9b62c312c54 req-dfca6918-f0fe-42a6-85ed-92ce2acf92cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Processing event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.568 2 DEBUG nova.compute.manager [req-702a52b5-04c1-48bc-b6dc-f9b62c312c54 req-dfca6918-f0fe-42a6-85ed-92ce2acf92cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Received event network-vif-unplugged-625280ef-3da7-4600-8f28-4ab4e77ff594 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.568 2 DEBUG oslo_concurrency.lockutils [req-702a52b5-04c1-48bc-b6dc-f9b62c312c54 req-dfca6918-f0fe-42a6-85ed-92ce2acf92cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.569 2 DEBUG oslo_concurrency.lockutils [req-702a52b5-04c1-48bc-b6dc-f9b62c312c54 req-dfca6918-f0fe-42a6-85ed-92ce2acf92cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.569 2 DEBUG oslo_concurrency.lockutils [req-702a52b5-04c1-48bc-b6dc-f9b62c312c54 req-dfca6918-f0fe-42a6-85ed-92ce2acf92cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.569 2 DEBUG nova.compute.manager [req-702a52b5-04c1-48bc-b6dc-f9b62c312c54 req-dfca6918-f0fe-42a6-85ed-92ce2acf92cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] No waiting events found dispatching network-vif-unplugged-625280ef-3da7-4600-8f28-4ab4e77ff594 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.569 2 WARNING nova.compute.manager [req-702a52b5-04c1-48bc-b6dc-f9b62c312c54 req-dfca6918-f0fe-42a6-85ed-92ce2acf92cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Received unexpected event network-vif-unplugged-625280ef-3da7-4600-8f28-4ab4e77ff594 for instance with vm_state deleted and task_state None.
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.569 2 DEBUG nova.compute.manager [req-702a52b5-04c1-48bc-b6dc-f9b62c312c54 req-dfca6918-f0fe-42a6-85ed-92ce2acf92cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Received event network-vif-plugged-625280ef-3da7-4600-8f28-4ab4e77ff594 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.570 2 DEBUG oslo_concurrency.lockutils [req-702a52b5-04c1-48bc-b6dc-f9b62c312c54 req-dfca6918-f0fe-42a6-85ed-92ce2acf92cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.570 2 DEBUG oslo_concurrency.lockutils [req-702a52b5-04c1-48bc-b6dc-f9b62c312c54 req-dfca6918-f0fe-42a6-85ed-92ce2acf92cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.570 2 DEBUG oslo_concurrency.lockutils [req-702a52b5-04c1-48bc-b6dc-f9b62c312c54 req-dfca6918-f0fe-42a6-85ed-92ce2acf92cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.570 2 DEBUG nova.compute.manager [req-702a52b5-04c1-48bc-b6dc-f9b62c312c54 req-dfca6918-f0fe-42a6-85ed-92ce2acf92cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] No waiting events found dispatching network-vif-plugged-625280ef-3da7-4600-8f28-4ab4e77ff594 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.571 2 WARNING nova.compute.manager [req-702a52b5-04c1-48bc-b6dc-f9b62c312c54 req-dfca6918-f0fe-42a6-85ed-92ce2acf92cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Received unexpected event network-vif-plugged-625280ef-3da7-4600-8f28-4ab4e77ff594 for instance with vm_state deleted and task_state None.
Oct 14 09:00:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 451 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 11 MiB/s wr, 384 op/s
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.737 2 DEBUG oslo_concurrency.processutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.738 2 INFO nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Deleting local config drive /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config because it was imported into RBD.
Oct 14 09:00:59 compute-0 NetworkManager[44885]: <info>  [1760432459.7779] manager: (tap60379992-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/169)
Oct 14 09:00:59 compute-0 kernel: tap60379992-d7: entered promiscuous mode
Oct 14 09:00:59 compute-0 ovn_controller[152662]: 2025-10-14T09:00:59Z|00372|binding|INFO|Claiming lport 60379992-d75d-4eff-a6bb-5d1615f35475 for this chassis.
Oct 14 09:00:59 compute-0 ovn_controller[152662]: 2025-10-14T09:00:59Z|00373|binding|INFO|60379992-d75d-4eff-a6bb-5d1615f35475: Claiming fa:16:3e:47:c2:c5 10.100.0.8
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:59.793 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:c2:c5 10.100.0.8'], port_security=['fa:16:3e:47:c2:c5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '310ebd88-5fe0-40ad-99dd-c3a1b410d357', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'a616a5a0-dd86-4326-bbdf-7cf172de843b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1013e89-bd11-44b1-be74-a5ce3b4c520f, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=60379992-d75d-4eff-a6bb-5d1615f35475) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:00:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:59.794 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 60379992-d75d-4eff-a6bb-5d1615f35475 in datapath ea0c857a-d31a-43a0-b285-c89c1ddc920a bound to our chassis
Oct 14 09:00:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:59.798 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ea0c857a-d31a-43a0-b285-c89c1ddc920a
Oct 14 09:00:59 compute-0 ovn_controller[152662]: 2025-10-14T09:00:59Z|00374|binding|INFO|Setting lport 60379992-d75d-4eff-a6bb-5d1615f35475 ovn-installed in OVS
Oct 14 09:00:59 compute-0 ovn_controller[152662]: 2025-10-14T09:00:59Z|00375|binding|INFO|Setting lport 60379992-d75d-4eff-a6bb-5d1615f35475 up in Southbound
Oct 14 09:00:59 compute-0 systemd-machined[214636]: New machine qemu-50-instance-00000020.
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:59.817 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[65bf7ca8-69a8-4019-bd95-f2b09cdb164a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:59 compute-0 systemd[1]: Started Virtual Machine qemu-50-instance-00000020.
Oct 14 09:00:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:00:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2147589236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:00:59 compute-0 systemd-udevd[306103]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:00:59 compute-0 NetworkManager[44885]: <info>  [1760432459.8456] device (tap60379992-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:00:59 compute-0 NetworkManager[44885]: <info>  [1760432459.8467] device (tap60379992-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.854 2 DEBUG oslo_concurrency.processutils [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:00:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:59.855 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[06c02680-6e30-4a89-a749-cdb111779601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:59.858 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[07609839-6b7e-4ba9-aff9-7d39db8195aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.860 2 DEBUG nova.compute.provider_tree [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.884 2 DEBUG nova.scheduler.client.report [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:00:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:59.886 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ea16a0-3fa4-4d01-9ee4-2bc11f7469ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:59.903 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[957a58aa-ffad-4bf7-9089-535da82788d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea0c857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:51:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 17, 'rx_bytes': 1168, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 17, 'rx_bytes': 1168, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616925, 'reachable_time': 21171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306116, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.907 2 DEBUG oslo_concurrency.lockutils [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:00:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:59.919 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6cfda763-7827-4801-979c-467f0024b579]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616940, 'tstamp': 616940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306117, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616944, 'tstamp': 616944}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306117, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:00:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:59.921 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea0c857a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:00:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:59.924 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea0c857a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:59.924 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:59.924 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapea0c857a-d0, col_values=(('external_ids', {'iface-id': '6baedd76-8a05-42d6-8356-18b586f58672'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:00:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:00:59.924 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:00:59 compute-0 nova_compute[259627]: 2025-10-14 09:00:59.941 2 INFO nova.scheduler.client.report [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Deleted allocations for instance f5132a2b-83eb-4bb1-a556-480d9fb97fec
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.005 2 DEBUG oslo_concurrency.lockutils [None req-89ebfa39-7f78-437b-936f-1a044e0ee9d1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f5132a2b-83eb-4bb1-a556-480d9fb97fec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.477 2 DEBUG oslo_concurrency.lockutils [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Acquiring lock "1fa36bef-0a0f-4f9c-877c-325a656fa127" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.478 2 DEBUG oslo_concurrency.lockutils [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "1fa36bef-0a0f-4f9c-877c-325a656fa127" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.478 2 DEBUG oslo_concurrency.lockutils [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Acquiring lock "1fa36bef-0a0f-4f9c-877c-325a656fa127-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.478 2 DEBUG oslo_concurrency.lockutils [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "1fa36bef-0a0f-4f9c-877c-325a656fa127-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.479 2 DEBUG oslo_concurrency.lockutils [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "1fa36bef-0a0f-4f9c-877c-325a656fa127-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.480 2 INFO nova.compute.manager [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Terminating instance
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.480 2 DEBUG oslo_concurrency.lockutils [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Acquiring lock "refresh_cache-1fa36bef-0a0f-4f9c-877c-325a656fa127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.481 2 DEBUG oslo_concurrency.lockutils [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Acquired lock "refresh_cache-1fa36bef-0a0f-4f9c-877c-325a656fa127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.481 2 DEBUG nova.network.neutron [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:01:00 compute-0 ceph-mon[74249]: pgmap v1308: 305 pgs: 305 active+clean; 451 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 11 MiB/s wr, 384 op/s
Oct 14 09:01:00 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2147589236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.578 2 DEBUG nova.compute.manager [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.580 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 310ebd88-5fe0-40ad-99dd-c3a1b410d357 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.581 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432460.5771081, 310ebd88-5fe0-40ad-99dd-c3a1b410d357 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.581 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] VM Started (Lifecycle Event)
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.587 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.594 2 INFO nova.virt.libvirt.driver [-] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance spawned successfully.
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.596 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.611 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.617 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.633 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.634 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.635 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.636 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.637 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.638 2 DEBUG nova.virt.libvirt.driver [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.656 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.657 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432460.5772529, 310ebd88-5fe0-40ad-99dd-c3a1b410d357 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.657 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] VM Paused (Lifecycle Event)
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.687 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.693 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432460.586821, 310ebd88-5fe0-40ad-99dd-c3a1b410d357 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.693 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] VM Resumed (Lifecycle Event)
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.707 2 DEBUG nova.network.neutron [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.725 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.733 2 DEBUG nova.compute.manager [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.736 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.775 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.817 2 DEBUG oslo_concurrency.lockutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.818 2 DEBUG oslo_concurrency.lockutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.819 2 DEBUG nova.objects.instance [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.889 2 DEBUG oslo_concurrency.lockutils [None req-0fff2c2f-9ad4-4758-ba0d-33c3c36040fd 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.953 2 DEBUG nova.network.neutron [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.968 2 DEBUG oslo_concurrency.lockutils [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Releasing lock "refresh_cache-1fa36bef-0a0f-4f9c-877c-325a656fa127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:01:00 compute-0 nova_compute[259627]: 2025-10-14 09:01:00.969 2 DEBUG nova.compute.manager [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:01:01 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000029.scope: Deactivated successfully.
Oct 14 09:01:01 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000029.scope: Consumed 3.364s CPU time.
Oct 14 09:01:01 compute-0 systemd-machined[214636]: Machine qemu-49-instance-00000029 terminated.
Oct 14 09:01:01 compute-0 nova_compute[259627]: 2025-10-14 09:01:01.190 2 INFO nova.virt.libvirt.driver [-] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Instance destroyed successfully.
Oct 14 09:01:01 compute-0 nova_compute[259627]: 2025-10-14 09:01:01.191 2 DEBUG nova.objects.instance [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lazy-loading 'resources' on Instance uuid 1fa36bef-0a0f-4f9c-877c-325a656fa127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:01 compute-0 nova_compute[259627]: 2025-10-14 09:01:01.584 2 INFO nova.virt.libvirt.driver [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Deleting instance files /var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127_del
Oct 14 09:01:01 compute-0 nova_compute[259627]: 2025-10-14 09:01:01.585 2 INFO nova.virt.libvirt.driver [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Deletion of /var/lib/nova/instances/1fa36bef-0a0f-4f9c-877c-325a656fa127_del complete
Oct 14 09:01:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 372 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 11 MiB/s wr, 530 op/s
Oct 14 09:01:01 compute-0 nova_compute[259627]: 2025-10-14 09:01:01.642 2 INFO nova.compute.manager [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Took 0.67 seconds to destroy the instance on the hypervisor.
Oct 14 09:01:01 compute-0 nova_compute[259627]: 2025-10-14 09:01:01.643 2 DEBUG oslo.service.loopingcall [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:01:01 compute-0 nova_compute[259627]: 2025-10-14 09:01:01.643 2 DEBUG nova.compute.manager [-] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:01:01 compute-0 nova_compute[259627]: 2025-10-14 09:01:01.643 2 DEBUG nova.network.neutron [-] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:01:01 compute-0 CROND[306182]: (root) CMD (run-parts /etc/cron.hourly)
Oct 14 09:01:01 compute-0 run-parts[306185]: (/etc/cron.hourly) starting 0anacron
Oct 14 09:01:01 compute-0 run-parts[306191]: (/etc/cron.hourly) finished 0anacron
Oct 14 09:01:01 compute-0 CROND[306181]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 14 09:01:01 compute-0 nova_compute[259627]: 2025-10-14 09:01:01.935 2 DEBUG nova.network.neutron [-] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:01:01 compute-0 nova_compute[259627]: 2025-10-14 09:01:01.952 2 DEBUG nova.network.neutron [-] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:01:01 compute-0 nova_compute[259627]: 2025-10-14 09:01:01.967 2 INFO nova.compute.manager [-] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Took 0.32 seconds to deallocate network for instance.
Oct 14 09:01:02 compute-0 nova_compute[259627]: 2025-10-14 09:01:02.020 2 DEBUG oslo_concurrency.lockutils [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:02 compute-0 nova_compute[259627]: 2025-10-14 09:01:02.021 2 DEBUG oslo_concurrency.lockutils [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:02 compute-0 nova_compute[259627]: 2025-10-14 09:01:02.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:02 compute-0 nova_compute[259627]: 2025-10-14 09:01:02.150 2 DEBUG oslo_concurrency.processutils [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:01:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:01:02 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/705784633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:02 compute-0 nova_compute[259627]: 2025-10-14 09:01:02.626 2 DEBUG oslo_concurrency.processutils [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:02 compute-0 nova_compute[259627]: 2025-10-14 09:01:02.635 2 DEBUG nova.compute.provider_tree [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:01:02 compute-0 nova_compute[259627]: 2025-10-14 09:01:02.655 2 DEBUG nova.scheduler.client.report [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:01:02 compute-0 podman[306213]: 2025-10-14 09:01:02.659388527 +0000 UTC m=+0.067424429 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 09:01:02 compute-0 ceph-mon[74249]: pgmap v1309: 305 pgs: 305 active+clean; 372 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 11 MiB/s wr, 530 op/s
Oct 14 09:01:02 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/705784633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:02 compute-0 nova_compute[259627]: 2025-10-14 09:01:02.682 2 DEBUG oslo_concurrency.lockutils [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:02 compute-0 podman[306212]: 2025-10-14 09:01:02.686740959 +0000 UTC m=+0.096346330 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 09:01:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:01:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:01:02 compute-0 nova_compute[259627]: 2025-10-14 09:01:02.723 2 INFO nova.scheduler.client.report [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Deleted allocations for instance 1fa36bef-0a0f-4f9c-877c-325a656fa127
Oct 14 09:01:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:01:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:01:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:01:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:01:02 compute-0 nova_compute[259627]: 2025-10-14 09:01:02.811 2 DEBUG oslo_concurrency.lockutils [None req-055c272e-915f-4742-9be7-b880c1057505 ea4260bffbae4126852a6ddd7dbb8c70 0dccd092671340e29357b2b2f18f6e3e - - default default] Lock "1fa36bef-0a0f-4f9c-877c-325a656fa127" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 372 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.8 MiB/s wr, 461 op/s
Oct 14 09:01:03 compute-0 nova_compute[259627]: 2025-10-14 09:01:03.768 2 DEBUG oslo_concurrency.lockutils [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "f921c880-38a7-40b6-8300-2123889a19c6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:03 compute-0 nova_compute[259627]: 2025-10-14 09:01:03.769 2 DEBUG oslo_concurrency.lockutils [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "f921c880-38a7-40b6-8300-2123889a19c6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:03 compute-0 nova_compute[259627]: 2025-10-14 09:01:03.770 2 DEBUG oslo_concurrency.lockutils [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "f921c880-38a7-40b6-8300-2123889a19c6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:03 compute-0 nova_compute[259627]: 2025-10-14 09:01:03.770 2 DEBUG oslo_concurrency.lockutils [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "f921c880-38a7-40b6-8300-2123889a19c6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:03 compute-0 nova_compute[259627]: 2025-10-14 09:01:03.771 2 DEBUG oslo_concurrency.lockutils [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "f921c880-38a7-40b6-8300-2123889a19c6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:03 compute-0 nova_compute[259627]: 2025-10-14 09:01:03.773 2 INFO nova.compute.manager [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Terminating instance
Oct 14 09:01:03 compute-0 nova_compute[259627]: 2025-10-14 09:01:03.776 2 DEBUG nova.compute.manager [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:01:03 compute-0 nova_compute[259627]: 2025-10-14 09:01:03.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:03 compute-0 kernel: tap07fd6657-ae (unregistering): left promiscuous mode
Oct 14 09:01:03 compute-0 NetworkManager[44885]: <info>  [1760432463.8315] device (tap07fd6657-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:01:03 compute-0 nova_compute[259627]: 2025-10-14 09:01:03.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:03 compute-0 ovn_controller[152662]: 2025-10-14T09:01:03Z|00376|binding|INFO|Releasing lport 07fd6657-ae56-455d-b362-d5f4a3368a9e from this chassis (sb_readonly=0)
Oct 14 09:01:03 compute-0 ovn_controller[152662]: 2025-10-14T09:01:03Z|00377|binding|INFO|Setting lport 07fd6657-ae56-455d-b362-d5f4a3368a9e down in Southbound
Oct 14 09:01:03 compute-0 ovn_controller[152662]: 2025-10-14T09:01:03Z|00378|binding|INFO|Removing iface tap07fd6657-ae ovn-installed in OVS
Oct 14 09:01:03 compute-0 nova_compute[259627]: 2025-10-14 09:01:03.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:03.856 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:30:a0 10.100.0.14'], port_security=['fa:16:3e:86:30:a0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f921c880-38a7-40b6-8300-2123889a19c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a616a5a0-dd86-4326-bbdf-7cf172de843b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1013e89-bd11-44b1-be74-a5ce3b4c520f, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=07fd6657-ae56-455d-b362-d5f4a3368a9e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:01:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:03.858 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 07fd6657-ae56-455d-b362-d5f4a3368a9e in datapath ea0c857a-d31a-43a0-b285-c89c1ddc920a unbound from our chassis
Oct 14 09:01:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:03.860 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ea0c857a-d31a-43a0-b285-c89c1ddc920a
Oct 14 09:01:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:03.880 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[36db94d3-29b4-4c5d-9e9b-adb7a7fac59a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:03 compute-0 nova_compute[259627]: 2025-10-14 09:01:03.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:03 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000026.scope: Deactivated successfully.
Oct 14 09:01:03 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000026.scope: Consumed 15.604s CPU time.
Oct 14 09:01:03 compute-0 systemd-machined[214636]: Machine qemu-43-instance-00000026 terminated.
Oct 14 09:01:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:03.924 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[10c6f056-e232-47ad-b213-081945c3e6d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:03.929 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[595a7c5f-fff5-4388-826f-7f075b4fc3b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:03.972 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae523b3-968f-47b0-894c-2db42ea85c58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:03.993 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3c1a95-4f3c-495c-8ffc-3533af8e749a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea0c857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:51:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 19, 'rx_bytes': 1168, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 19, 'rx_bytes': 1168, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616925, 'reachable_time': 21171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306264, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.013 2 INFO nova.virt.libvirt.driver [-] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Instance destroyed successfully.
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.014 2 DEBUG nova.objects.instance [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'resources' on Instance uuid f921c880-38a7-40b6-8300-2123889a19c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:04.016 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5bf653bd-fdaa-4677-ad86-e1363837d455]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616940, 'tstamp': 616940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306269, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616944, 'tstamp': 616944}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306269, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:04.019 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea0c857a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:04.027 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea0c857a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:04.028 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:01:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:04.028 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapea0c857a-d0, col_values=(('external_ids', {'iface-id': '6baedd76-8a05-42d6-8356-18b586f58672'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:04.028 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.036 2 DEBUG nova.virt.libvirt.vif [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:00:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2049488467',display_name='tempest-ServersAdminTestJSON-server-2049488467',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2049488467',id=38,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:00:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-v8hrxxla',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:00:23Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=f921c880-38a7-40b6-8300-2123889a19c6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "address": "fa:16:3e:86:30:a0", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd6657-ae", "ovs_interfaceid": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.036 2 DEBUG nova.network.os_vif_util [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "address": "fa:16:3e:86:30:a0", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07fd6657-ae", "ovs_interfaceid": "07fd6657-ae56-455d-b362-d5f4a3368a9e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.037 2 DEBUG nova.network.os_vif_util [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:30:a0,bridge_name='br-int',has_traffic_filtering=True,id=07fd6657-ae56-455d-b362-d5f4a3368a9e,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07fd6657-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.037 2 DEBUG os_vif [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:30:a0,bridge_name='br-int',has_traffic_filtering=True,id=07fd6657-ae56-455d-b362-d5f4a3368a9e,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07fd6657-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.043 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07fd6657-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.049 2 INFO os_vif [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:30:a0,bridge_name='br-int',has_traffic_filtering=True,id=07fd6657-ae56-455d-b362-d5f4a3368a9e,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07fd6657-ae')
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.473 2 INFO nova.virt.libvirt.driver [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Deleting instance files /var/lib/nova/instances/f921c880-38a7-40b6-8300-2123889a19c6_del
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.474 2 INFO nova.virt.libvirt.driver [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Deletion of /var/lib/nova/instances/f921c880-38a7-40b6-8300-2123889a19c6_del complete
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.546 2 INFO nova.compute.manager [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.547 2 DEBUG oslo.service.loopingcall [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.548 2 DEBUG nova.compute.manager [-] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:01:04 compute-0 nova_compute[259627]: 2025-10-14 09:01:04.548 2 DEBUG nova.network.neutron [-] [instance: f921c880-38a7-40b6-8300-2123889a19c6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:01:04 compute-0 ceph-mon[74249]: pgmap v1310: 305 pgs: 305 active+clean; 372 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.8 MiB/s wr, 461 op/s
Oct 14 09:01:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:01:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1845760629' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:01:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:01:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1845760629' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:01:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 246 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.2 MiB/s wr, 339 op/s
Oct 14 09:01:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1845760629' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:01:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1845760629' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:01:06 compute-0 nova_compute[259627]: 2025-10-14 09:01:06.433 2 DEBUG nova.network.neutron [-] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:01:06 compute-0 nova_compute[259627]: 2025-10-14 09:01:06.454 2 INFO nova.compute.manager [-] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Took 1.91 seconds to deallocate network for instance.
Oct 14 09:01:06 compute-0 nova_compute[259627]: 2025-10-14 09:01:06.502 2 DEBUG oslo_concurrency.lockutils [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:06 compute-0 nova_compute[259627]: 2025-10-14 09:01:06.503 2 DEBUG oslo_concurrency.lockutils [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:06 compute-0 nova_compute[259627]: 2025-10-14 09:01:06.608 2 DEBUG oslo_concurrency.processutils [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:06 compute-0 ceph-mon[74249]: pgmap v1311: 305 pgs: 305 active+clean; 246 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.2 MiB/s wr, 339 op/s
Oct 14 09:01:06 compute-0 nova_compute[259627]: 2025-10-14 09:01:06.751 2 DEBUG nova.compute.manager [req-f5c3329e-130c-4428-82b5-feed9c647661 req-c758df88-1bfd-49f8-8d13-e153678324a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:06 compute-0 nova_compute[259627]: 2025-10-14 09:01:06.752 2 DEBUG oslo_concurrency.lockutils [req-f5c3329e-130c-4428-82b5-feed9c647661 req-c758df88-1bfd-49f8-8d13-e153678324a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:06 compute-0 nova_compute[259627]: 2025-10-14 09:01:06.753 2 DEBUG oslo_concurrency.lockutils [req-f5c3329e-130c-4428-82b5-feed9c647661 req-c758df88-1bfd-49f8-8d13-e153678324a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:06 compute-0 nova_compute[259627]: 2025-10-14 09:01:06.753 2 DEBUG oslo_concurrency.lockutils [req-f5c3329e-130c-4428-82b5-feed9c647661 req-c758df88-1bfd-49f8-8d13-e153678324a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:06 compute-0 nova_compute[259627]: 2025-10-14 09:01:06.754 2 DEBUG nova.compute.manager [req-f5c3329e-130c-4428-82b5-feed9c647661 req-c758df88-1bfd-49f8-8d13-e153678324a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] No waiting events found dispatching network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:01:06 compute-0 nova_compute[259627]: 2025-10-14 09:01:06.754 2 WARNING nova.compute.manager [req-f5c3329e-130c-4428-82b5-feed9c647661 req-c758df88-1bfd-49f8-8d13-e153678324a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received unexpected event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 for instance with vm_state active and task_state None.
Oct 14 09:01:06 compute-0 ovn_controller[152662]: 2025-10-14T09:01:06Z|00379|binding|INFO|Releasing lport 6baedd76-8a05-42d6-8356-18b586f58672 from this chassis (sb_readonly=0)
Oct 14 09:01:06 compute-0 nova_compute[259627]: 2025-10-14 09:01:06.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:07.019 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:07.019 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:07.020 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:07 compute-0 nova_compute[259627]: 2025-10-14 09:01:07.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:01:07 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/619343670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:07 compute-0 nova_compute[259627]: 2025-10-14 09:01:07.207 2 DEBUG oslo_concurrency.processutils [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:07 compute-0 nova_compute[259627]: 2025-10-14 09:01:07.217 2 DEBUG nova.compute.provider_tree [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:01:07 compute-0 nova_compute[259627]: 2025-10-14 09:01:07.237 2 DEBUG nova.scheduler.client.report [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:01:07 compute-0 nova_compute[259627]: 2025-10-14 09:01:07.258 2 DEBUG oslo_concurrency.lockutils [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:07 compute-0 nova_compute[259627]: 2025-10-14 09:01:07.294 2 INFO nova.scheduler.client.report [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Deleted allocations for instance f921c880-38a7-40b6-8300-2123889a19c6
Oct 14 09:01:07 compute-0 nova_compute[259627]: 2025-10-14 09:01:07.376 2 DEBUG oslo_concurrency.lockutils [None req-b62e3c7d-20a0-458a-aa74-2c8caa433925 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "f921c880-38a7-40b6-8300-2123889a19c6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:01:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 246 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.2 MiB/s wr, 337 op/s
Oct 14 09:01:07 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/619343670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:07 compute-0 nova_compute[259627]: 2025-10-14 09:01:07.754 2 DEBUG oslo_concurrency.lockutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:07 compute-0 nova_compute[259627]: 2025-10-14 09:01:07.754 2 DEBUG oslo_concurrency.lockutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:07 compute-0 nova_compute[259627]: 2025-10-14 09:01:07.768 2 DEBUG nova.compute.manager [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:01:07 compute-0 nova_compute[259627]: 2025-10-14 09:01:07.841 2 DEBUG oslo_concurrency.lockutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:07 compute-0 nova_compute[259627]: 2025-10-14 09:01:07.842 2 DEBUG oslo_concurrency.lockutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:07 compute-0 nova_compute[259627]: 2025-10-14 09:01:07.850 2 DEBUG nova.virt.hardware [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:01:07 compute-0 nova_compute[259627]: 2025-10-14 09:01:07.851 2 INFO nova.compute.claims [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.043 2 DEBUG oslo_concurrency.processutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.196 2 DEBUG oslo_concurrency.lockutils [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "6b630da6-e65a-48aa-9559-1d59beb73a93" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.196 2 DEBUG oslo_concurrency.lockutils [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "6b630da6-e65a-48aa-9559-1d59beb73a93" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.197 2 DEBUG oslo_concurrency.lockutils [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "6b630da6-e65a-48aa-9559-1d59beb73a93-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.197 2 DEBUG oslo_concurrency.lockutils [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "6b630da6-e65a-48aa-9559-1d59beb73a93-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.197 2 DEBUG oslo_concurrency.lockutils [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "6b630da6-e65a-48aa-9559-1d59beb73a93-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.199 2 INFO nova.compute.manager [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Terminating instance
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.200 2 DEBUG nova.compute.manager [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:01:08 compute-0 kernel: tapf92053b9-1a (unregistering): left promiscuous mode
Oct 14 09:01:08 compute-0 NetworkManager[44885]: <info>  [1760432468.2650] device (tapf92053b9-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:08 compute-0 ovn_controller[152662]: 2025-10-14T09:01:08Z|00380|binding|INFO|Releasing lport f92053b9-1a42-46c2-9abc-77adb8210c62 from this chassis (sb_readonly=0)
Oct 14 09:01:08 compute-0 ovn_controller[152662]: 2025-10-14T09:01:08Z|00381|binding|INFO|Setting lport f92053b9-1a42-46c2-9abc-77adb8210c62 down in Southbound
Oct 14 09:01:08 compute-0 ovn_controller[152662]: 2025-10-14T09:01:08Z|00382|binding|INFO|Removing iface tapf92053b9-1a ovn-installed in OVS
Oct 14 09:01:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:08.287 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:4e:2c 10.100.0.11'], port_security=['fa:16:3e:ef:4e:2c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6b630da6-e65a-48aa-9559-1d59beb73a93', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a616a5a0-dd86-4326-bbdf-7cf172de843b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1013e89-bd11-44b1-be74-a5ce3b4c520f, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f92053b9-1a42-46c2-9abc-77adb8210c62) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:01:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:08.289 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f92053b9-1a42-46c2-9abc-77adb8210c62 in datapath ea0c857a-d31a-43a0-b285-c89c1ddc920a unbound from our chassis
Oct 14 09:01:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:08.292 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ea0c857a-d31a-43a0-b285-c89c1ddc920a
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:08.330 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ad61ae54-83f9-4854-b650-7c55748ff202]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:08 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000024.scope: Deactivated successfully.
Oct 14 09:01:08 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000024.scope: Consumed 15.308s CPU time.
Oct 14 09:01:08 compute-0 systemd-machined[214636]: Machine qemu-41-instance-00000024 terminated.
Oct 14 09:01:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:08.378 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8e57c11d-0a64-4c38-a7d6-68f47b80465b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:08.382 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[770ef483-b497-4531-aa1e-55e74f7cc057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:08.430 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[795e2381-8118-4f32-a3f3-92bf42715db0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.450 2 INFO nova.virt.libvirt.driver [-] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Instance destroyed successfully.
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.451 2 DEBUG nova.objects.instance [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'resources' on Instance uuid 6b630da6-e65a-48aa-9559-1d59beb73a93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:08.455 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[579f156f-dc83-42b5-bb16-2124d3f58f53]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea0c857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:51:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 21, 'rx_bytes': 1168, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 21, 'rx_bytes': 1168, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616925, 'reachable_time': 34581, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306355, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.467 2 DEBUG nova.virt.libvirt.vif [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:59:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-303012336',display_name='tempest-ServersAdminTestJSON-server-303012336',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-303012336',id=36,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:00:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-9e7dvtz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',i
mage_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:00:08Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=6b630da6-e65a-48aa-9559-1d59beb73a93,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f92053b9-1a42-46c2-9abc-77adb8210c62", "address": "fa:16:3e:ef:4e:2c", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92053b9-1a", "ovs_interfaceid": "f92053b9-1a42-46c2-9abc-77adb8210c62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.470 2 DEBUG nova.network.os_vif_util [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "f92053b9-1a42-46c2-9abc-77adb8210c62", "address": "fa:16:3e:ef:4e:2c", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92053b9-1a", "ovs_interfaceid": "f92053b9-1a42-46c2-9abc-77adb8210c62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.470 2 DEBUG nova.network.os_vif_util [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:4e:2c,bridge_name='br-int',has_traffic_filtering=True,id=f92053b9-1a42-46c2-9abc-77adb8210c62,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf92053b9-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.471 2 DEBUG os_vif [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:4e:2c,bridge_name='br-int',has_traffic_filtering=True,id=f92053b9-1a42-46c2-9abc-77adb8210c62,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf92053b9-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf92053b9-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:01:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:08.476 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b96cb742-a822-4d88-b70d-98dc2dab36ea]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616940, 'tstamp': 616940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306361, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616944, 'tstamp': 616944}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306361, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:08.477 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea0c857a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/935721364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.481 2 INFO os_vif [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:4e:2c,bridge_name='br-int',has_traffic_filtering=True,id=f92053b9-1a42-46c2-9abc-77adb8210c62,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf92053b9-1a')
Oct 14 09:01:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:08.480 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea0c857a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:08.481 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:01:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:08.481 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapea0c857a-d0, col_values=(('external_ids', {'iface-id': '6baedd76-8a05-42d6-8356-18b586f58672'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:08.481 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.507 2 DEBUG oslo_concurrency.processutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.512 2 DEBUG nova.compute.provider_tree [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.530 2 DEBUG nova.scheduler.client.report [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.554 2 DEBUG oslo_concurrency.lockutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.555 2 DEBUG nova.compute.manager [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.621 2 DEBUG nova.compute.manager [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.622 2 DEBUG nova.network.neutron [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.643 2 INFO nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.661 2 DEBUG nova.compute.manager [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:01:08 compute-0 ceph-mon[74249]: pgmap v1312: 305 pgs: 305 active+clean; 246 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.2 MiB/s wr, 337 op/s
Oct 14 09:01:08 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/935721364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.777 2 DEBUG nova.compute.manager [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.778 2 DEBUG nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.779 2 INFO nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Creating image(s)
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.798 2 DEBUG nova.storage.rbd_utils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image de383510-2de3-40bd-b479-c0010b3f2d1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.818 2 DEBUG nova.storage.rbd_utils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image de383510-2de3-40bd-b479-c0010b3f2d1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.838 2 DEBUG nova.storage.rbd_utils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image de383510-2de3-40bd-b479-c0010b3f2d1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.840 2 DEBUG oslo_concurrency.processutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.870 2 DEBUG nova.policy [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aa32af91355a41198fd57121e5c70ec2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '368d762ed02e459d892ad1e5488c2871', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.876 2 DEBUG nova.compute.manager [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.876 2 DEBUG oslo_concurrency.lockutils [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.877 2 DEBUG oslo_concurrency.lockutils [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.877 2 DEBUG oslo_concurrency.lockutils [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.877 2 DEBUG nova.compute.manager [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] No waiting events found dispatching network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.878 2 WARNING nova.compute.manager [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received unexpected event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 for instance with vm_state active and task_state None.
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.878 2 DEBUG nova.compute.manager [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Received event network-vif-deleted-07fd6657-ae56-455d-b362-d5f4a3368a9e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.878 2 DEBUG nova.compute.manager [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Received event network-vif-unplugged-f92053b9-1a42-46c2-9abc-77adb8210c62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.879 2 DEBUG oslo_concurrency.lockutils [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6b630da6-e65a-48aa-9559-1d59beb73a93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.879 2 DEBUG oslo_concurrency.lockutils [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b630da6-e65a-48aa-9559-1d59beb73a93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.879 2 DEBUG oslo_concurrency.lockutils [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b630da6-e65a-48aa-9559-1d59beb73a93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.879 2 DEBUG nova.compute.manager [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] No waiting events found dispatching network-vif-unplugged-f92053b9-1a42-46c2-9abc-77adb8210c62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.880 2 DEBUG nova.compute.manager [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Received event network-vif-unplugged-f92053b9-1a42-46c2-9abc-77adb8210c62 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.880 2 DEBUG nova.compute.manager [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Received event network-vif-plugged-f92053b9-1a42-46c2-9abc-77adb8210c62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.880 2 DEBUG oslo_concurrency.lockutils [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6b630da6-e65a-48aa-9559-1d59beb73a93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.881 2 DEBUG oslo_concurrency.lockutils [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b630da6-e65a-48aa-9559-1d59beb73a93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.881 2 DEBUG oslo_concurrency.lockutils [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b630da6-e65a-48aa-9559-1d59beb73a93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.881 2 DEBUG nova.compute.manager [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] No waiting events found dispatching network-vif-plugged-f92053b9-1a42-46c2-9abc-77adb8210c62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.881 2 WARNING nova.compute.manager [req-4853331f-ca2a-41b3-8e52-acacc6c182b5 req-af1730ef-7e05-4ed4-9b74-b9e239542ed5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Received unexpected event network-vif-plugged-f92053b9-1a42-46c2-9abc-77adb8210c62 for instance with vm_state active and task_state deleting.
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.891 2 INFO nova.virt.libvirt.driver [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Deleting instance files /var/lib/nova/instances/6b630da6-e65a-48aa-9559-1d59beb73a93_del
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.892 2 INFO nova.virt.libvirt.driver [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Deletion of /var/lib/nova/instances/6b630da6-e65a-48aa-9559-1d59beb73a93_del complete
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.901 2 DEBUG oslo_concurrency.processutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.902 2 DEBUG oslo_concurrency.lockutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.902 2 DEBUG oslo_concurrency.lockutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.902 2 DEBUG oslo_concurrency.lockutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.919 2 DEBUG nova.storage.rbd_utils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image de383510-2de3-40bd-b479-c0010b3f2d1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.922 2 DEBUG oslo_concurrency.processutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 de383510-2de3-40bd-b479-c0010b3f2d1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.959 2 INFO nova.compute.manager [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Took 0.76 seconds to destroy the instance on the hypervisor.
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.960 2 DEBUG oslo.service.loopingcall [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.960 2 DEBUG nova.compute.manager [-] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:01:08 compute-0 nova_compute[259627]: 2025-10-14 09:01:08.960 2 DEBUG nova.network.neutron [-] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:01:09 compute-0 nova_compute[259627]: 2025-10-14 09:01:09.163 2 DEBUG oslo_concurrency.processutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 de383510-2de3-40bd-b479-c0010b3f2d1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:09 compute-0 nova_compute[259627]: 2025-10-14 09:01:09.219 2 DEBUG nova.storage.rbd_utils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] resizing rbd image de383510-2de3-40bd-b479-c0010b3f2d1c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:01:09 compute-0 nova_compute[259627]: 2025-10-14 09:01:09.294 2 DEBUG nova.objects.instance [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'migration_context' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:09 compute-0 nova_compute[259627]: 2025-10-14 09:01:09.308 2 DEBUG nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:01:09 compute-0 nova_compute[259627]: 2025-10-14 09:01:09.308 2 DEBUG nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Ensure instance console log exists: /var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:01:09 compute-0 nova_compute[259627]: 2025-10-14 09:01:09.309 2 DEBUG oslo_concurrency.lockutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:09 compute-0 nova_compute[259627]: 2025-10-14 09:01:09.309 2 DEBUG oslo_concurrency.lockutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:09 compute-0 nova_compute[259627]: 2025-10-14 09:01:09.309 2 DEBUG oslo_concurrency.lockutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:09 compute-0 nova_compute[259627]: 2025-10-14 09:01:09.502 2 DEBUG nova.network.neutron [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Successfully created port: 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:01:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 246 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 282 op/s
Oct 14 09:01:09 compute-0 nova_compute[259627]: 2025-10-14 09:01:09.645 2 DEBUG nova.network.neutron [-] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:01:09 compute-0 nova_compute[259627]: 2025-10-14 09:01:09.663 2 INFO nova.compute.manager [-] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Took 0.70 seconds to deallocate network for instance.
Oct 14 09:01:09 compute-0 nova_compute[259627]: 2025-10-14 09:01:09.709 2 DEBUG oslo_concurrency.lockutils [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:09 compute-0 nova_compute[259627]: 2025-10-14 09:01:09.710 2 DEBUG oslo_concurrency.lockutils [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:09 compute-0 nova_compute[259627]: 2025-10-14 09:01:09.735 2 DEBUG nova.compute.manager [req-6cb488f3-943c-4477-a8ff-710054e6f3a8 req-64ca1bee-0e4e-4c39-ab32-bb87e5b9540f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Received event network-vif-deleted-f92053b9-1a42-46c2-9abc-77adb8210c62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:09 compute-0 nova_compute[259627]: 2025-10-14 09:01:09.823 2 DEBUG oslo_concurrency.processutils [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:10 compute-0 nova_compute[259627]: 2025-10-14 09:01:10.237 2 DEBUG nova.network.neutron [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Successfully updated port: 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:01:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:01:10 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4025309207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:10 compute-0 nova_compute[259627]: 2025-10-14 09:01:10.257 2 DEBUG oslo_concurrency.lockutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:01:10 compute-0 nova_compute[259627]: 2025-10-14 09:01:10.257 2 DEBUG oslo_concurrency.lockutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquired lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:01:10 compute-0 nova_compute[259627]: 2025-10-14 09:01:10.257 2 DEBUG nova.network.neutron [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:01:10 compute-0 nova_compute[259627]: 2025-10-14 09:01:10.263 2 DEBUG oslo_concurrency.processutils [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:10 compute-0 nova_compute[259627]: 2025-10-14 09:01:10.268 2 DEBUG nova.compute.provider_tree [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:01:10 compute-0 nova_compute[259627]: 2025-10-14 09:01:10.284 2 DEBUG nova.scheduler.client.report [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:01:10 compute-0 nova_compute[259627]: 2025-10-14 09:01:10.313 2 DEBUG oslo_concurrency.lockutils [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:10 compute-0 nova_compute[259627]: 2025-10-14 09:01:10.359 2 INFO nova.scheduler.client.report [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Deleted allocations for instance 6b630da6-e65a-48aa-9559-1d59beb73a93
Oct 14 09:01:10 compute-0 nova_compute[259627]: 2025-10-14 09:01:10.428 2 DEBUG oslo_concurrency.lockutils [None req-edc10e90-bfbd-4048-ade3-cba5d0ba4cd7 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "6b630da6-e65a-48aa-9559-1d59beb73a93" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:10 compute-0 nova_compute[259627]: 2025-10-14 09:01:10.432 2 DEBUG nova.network.neutron [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:01:10 compute-0 ceph-mon[74249]: pgmap v1313: 305 pgs: 305 active+clean; 246 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 282 op/s
Oct 14 09:01:10 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4025309207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:10 compute-0 nova_compute[259627]: 2025-10-14 09:01:10.952 2 DEBUG nova.compute.manager [req-6cfc9f3a-f6f8-4266-b9c0-d2f96a43249c req-c60e35bf-0195-4f21-8a1e-a8fb9f70f1ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-changed-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:10 compute-0 nova_compute[259627]: 2025-10-14 09:01:10.953 2 DEBUG nova.compute.manager [req-6cfc9f3a-f6f8-4266-b9c0-d2f96a43249c req-c60e35bf-0195-4f21-8a1e-a8fb9f70f1ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Refreshing instance network info cache due to event network-changed-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:01:10 compute-0 nova_compute[259627]: 2025-10-14 09:01:10.954 2 DEBUG oslo_concurrency.lockutils [req-6cfc9f3a-f6f8-4266-b9c0-d2f96a43249c req-c60e35bf-0195-4f21-8a1e-a8fb9f70f1ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.286 2 DEBUG nova.network.neutron [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.314 2 DEBUG oslo_concurrency.lockutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Releasing lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.315 2 DEBUG nova.compute.manager [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance network_info: |[{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.315 2 DEBUG oslo_concurrency.lockutils [req-6cfc9f3a-f6f8-4266-b9c0-d2f96a43249c req-c60e35bf-0195-4f21-8a1e-a8fb9f70f1ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.316 2 DEBUG nova.network.neutron [req-6cfc9f3a-f6f8-4266-b9c0-d2f96a43249c req-c60e35bf-0195-4f21-8a1e-a8fb9f70f1ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Refreshing network info cache for port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.319 2 DEBUG nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Start _get_guest_xml network_info=[{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.324 2 WARNING nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.329 2 DEBUG nova.virt.libvirt.host [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.331 2 DEBUG nova.virt.libvirt.host [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.335 2 DEBUG nova.virt.libvirt.host [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.336 2 DEBUG nova.virt.libvirt.host [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.336 2 DEBUG nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.337 2 DEBUG nova.virt.hardware [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.338 2 DEBUG nova.virt.hardware [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.338 2 DEBUG nova.virt.hardware [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.338 2 DEBUG nova.virt.hardware [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.339 2 DEBUG nova.virt.hardware [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.339 2 DEBUG nova.virt.hardware [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.340 2 DEBUG nova.virt.hardware [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.340 2 DEBUG nova.virt.hardware [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.341 2 DEBUG nova.virt.hardware [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.341 2 DEBUG nova.virt.hardware [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.342 2 DEBUG nova.virt.hardware [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.346 2 DEBUG oslo_concurrency.processutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.440 2 DEBUG oslo_concurrency.lockutils [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "b932e3d1-4cf6-4934-9eec-c93284b17b43" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.441 2 DEBUG oslo_concurrency.lockutils [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.442 2 DEBUG oslo_concurrency.lockutils [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.442 2 DEBUG oslo_concurrency.lockutils [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.443 2 DEBUG oslo_concurrency.lockutils [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.445 2 INFO nova.compute.manager [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Terminating instance
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.449 2 DEBUG nova.compute.manager [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:01:11 compute-0 kernel: tap6fb13023-67 (unregistering): left promiscuous mode
Oct 14 09:01:11 compute-0 NetworkManager[44885]: <info>  [1760432471.5141] device (tap6fb13023-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:01:11 compute-0 ovn_controller[152662]: 2025-10-14T09:01:11Z|00383|binding|INFO|Releasing lport 6fb13023-6749-4e1b-b7d9-235dff8e72d4 from this chassis (sb_readonly=0)
Oct 14 09:01:11 compute-0 ovn_controller[152662]: 2025-10-14T09:01:11Z|00384|binding|INFO|Setting lport 6fb13023-6749-4e1b-b7d9-235dff8e72d4 down in Southbound
Oct 14 09:01:11 compute-0 ovn_controller[152662]: 2025-10-14T09:01:11Z|00385|binding|INFO|Removing iface tap6fb13023-67 ovn-installed in OVS
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:11.537 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:11:58 10.100.0.5'], port_security=['fa:16:3e:b3:11:58 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b932e3d1-4cf6-4934-9eec-c93284b17b43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a616a5a0-dd86-4326-bbdf-7cf172de843b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1013e89-bd11-44b1-be74-a5ce3b4c520f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=6fb13023-6749-4e1b-b7d9-235dff8e72d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:01:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:11.538 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 6fb13023-6749-4e1b-b7d9-235dff8e72d4 in datapath ea0c857a-d31a-43a0-b285-c89c1ddc920a unbound from our chassis
Oct 14 09:01:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:11.539 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ea0c857a-d31a-43a0-b285-c89c1ddc920a
Oct 14 09:01:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:11.556 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dd4879a1-6c85-42f6-b938-329ea199db5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:11 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Deactivated successfully.
Oct 14 09:01:11 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Consumed 15.039s CPU time.
Oct 14 09:01:11 compute-0 systemd-machined[214636]: Machine qemu-38-instance-00000022 terminated.
Oct 14 09:01:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:11.588 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a8db39bd-1c75-48db-bfa4-6a54afbf8c82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:11.591 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[369257f2-3aa3-4f75-abb3-eb0057589d88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 213 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 340 op/s
Oct 14 09:01:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:11.624 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3f8c04a4-7fbc-4ec6-a7b9-98abac20d11c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:11.646 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8831c71f-d9b4-4f24-9ce1-9303421815b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea0c857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:51:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 23, 'rx_bytes': 1168, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 23, 'rx_bytes': 1168, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616925, 'reachable_time': 34581, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306602, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:11.672 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[79bf68ce-082b-4b39-977c-430e06269683]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616940, 'tstamp': 616940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306603, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616944, 'tstamp': 616944}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306603, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:11.675 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea0c857a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.687 2 INFO nova.virt.libvirt.driver [-] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Instance destroyed successfully.
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.688 2 DEBUG nova.objects.instance [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'resources' on Instance uuid b932e3d1-4cf6-4934-9eec-c93284b17b43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:11.688 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea0c857a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:11.689 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:01:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:11.689 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapea0c857a-d0, col_values=(('external_ids', {'iface-id': '6baedd76-8a05-42d6-8356-18b586f58672'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:11.689 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.705 2 DEBUG nova.virt.libvirt.vif [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:59:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-618227233',display_name='tempest-ServersAdminTestJSON-server-618227233',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-618227233',id=34,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-wask3hqy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:55Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=b932e3d1-4cf6-4934-9eec-c93284b17b43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "address": "fa:16:3e:b3:11:58", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb13023-67", "ovs_interfaceid": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.706 2 DEBUG nova.network.os_vif_util [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "address": "fa:16:3e:b3:11:58", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb13023-67", "ovs_interfaceid": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.708 2 DEBUG nova.network.os_vif_util [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:11:58,bridge_name='br-int',has_traffic_filtering=True,id=6fb13023-6749-4e1b-b7d9-235dff8e72d4,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb13023-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.709 2 DEBUG os_vif [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:11:58,bridge_name='br-int',has_traffic_filtering=True,id=6fb13023-6749-4e1b-b7d9-235dff8e72d4,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb13023-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.713 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fb13023-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.717 2 INFO os_vif [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:11:58,bridge_name='br-int',has_traffic_filtering=True,id=6fb13023-6749-4e1b-b7d9-235dff8e72d4,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb13023-67')
Oct 14 09:01:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Oct 14 09:01:11 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Oct 14 09:01:11 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 14 09:01:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:01:11 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2817278315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.829 2 DEBUG oslo_concurrency.processutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.865 2 DEBUG nova.storage.rbd_utils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image de383510-2de3-40bd-b479-c0010b3f2d1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:11 compute-0 nova_compute[259627]: 2025-10-14 09:01:11.871 2 DEBUG oslo_concurrency.processutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.110 2 INFO nova.virt.libvirt.driver [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Deleting instance files /var/lib/nova/instances/b932e3d1-4cf6-4934-9eec-c93284b17b43_del
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.111 2 INFO nova.virt.libvirt.driver [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Deletion of /var/lib/nova/instances/b932e3d1-4cf6-4934-9eec-c93284b17b43_del complete
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.183 2 INFO nova.compute.manager [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.184 2 DEBUG oslo.service.loopingcall [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.184 2 DEBUG nova.compute.manager [-] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.184 2 DEBUG nova.network.neutron [-] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.220 2 DEBUG nova.compute.manager [req-1c0cc4f2-ce85-4158-967d-899f9984d619 req-87bb19d1-d732-4b88-8ffb-161310d62bf9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Received event network-vif-unplugged-6fb13023-6749-4e1b-b7d9-235dff8e72d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.221 2 DEBUG oslo_concurrency.lockutils [req-1c0cc4f2-ce85-4158-967d-899f9984d619 req-87bb19d1-d732-4b88-8ffb-161310d62bf9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.221 2 DEBUG oslo_concurrency.lockutils [req-1c0cc4f2-ce85-4158-967d-899f9984d619 req-87bb19d1-d732-4b88-8ffb-161310d62bf9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.222 2 DEBUG oslo_concurrency.lockutils [req-1c0cc4f2-ce85-4158-967d-899f9984d619 req-87bb19d1-d732-4b88-8ffb-161310d62bf9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.222 2 DEBUG nova.compute.manager [req-1c0cc4f2-ce85-4158-967d-899f9984d619 req-87bb19d1-d732-4b88-8ffb-161310d62bf9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] No waiting events found dispatching network-vif-unplugged-6fb13023-6749-4e1b-b7d9-235dff8e72d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.223 2 DEBUG nova.compute.manager [req-1c0cc4f2-ce85-4158-967d-899f9984d619 req-87bb19d1-d732-4b88-8ffb-161310d62bf9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Received event network-vif-unplugged-6fb13023-6749-4e1b-b7d9-235dff8e72d4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:01:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:01:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1505971606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.336 2 DEBUG oslo_concurrency.processutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.338 2 DEBUG nova.virt.libvirt.vif [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:01:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.338 2 DEBUG nova.network.os_vif_util [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.339 2 DEBUG nova.network.os_vif_util [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.341 2 DEBUG nova.objects.instance [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'pci_devices' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.359 2 DEBUG nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:01:12 compute-0 nova_compute[259627]:   <uuid>de383510-2de3-40bd-b479-c0010b3f2d1c</uuid>
Oct 14 09:01:12 compute-0 nova_compute[259627]:   <name>instance-0000002b</name>
Oct 14 09:01:12 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:01:12 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:01:12 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerActionsTestJSON-server-1794713901</nova:name>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:01:11</nova:creationTime>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:01:12 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:01:12 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:01:12 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:01:12 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:01:12 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:01:12 compute-0 nova_compute[259627]:         <nova:user uuid="aa32af91355a41198fd57121e5c70ec2">tempest-ServerActionsTestJSON-1593617559-project-member</nova:user>
Oct 14 09:01:12 compute-0 nova_compute[259627]:         <nova:project uuid="368d762ed02e459d892ad1e5488c2871">tempest-ServerActionsTestJSON-1593617559</nova:project>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:01:12 compute-0 nova_compute[259627]:         <nova:port uuid="8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2">
Oct 14 09:01:12 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:01:12 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:01:12 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <system>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <entry name="serial">de383510-2de3-40bd-b479-c0010b3f2d1c</entry>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <entry name="uuid">de383510-2de3-40bd-b479-c0010b3f2d1c</entry>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     </system>
Oct 14 09:01:12 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:01:12 compute-0 nova_compute[259627]:   <os>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:   </os>
Oct 14 09:01:12 compute-0 nova_compute[259627]:   <features>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:   </features>
Oct 14 09:01:12 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:01:12 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:01:12 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/de383510-2de3-40bd-b479-c0010b3f2d1c_disk">
Oct 14 09:01:12 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       </source>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:01:12 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/de383510-2de3-40bd-b479-c0010b3f2d1c_disk.config">
Oct 14 09:01:12 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       </source>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:01:12 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:be:e2:1b"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <target dev="tap8ec905f0-b7"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c/console.log" append="off"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <video>
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     </video>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:01:12 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:01:12 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:01:12 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:01:12 compute-0 nova_compute[259627]: </domain>
Oct 14 09:01:12 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.366 2 DEBUG nova.compute.manager [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Preparing to wait for external event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.367 2 DEBUG oslo_concurrency.lockutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.367 2 DEBUG oslo_concurrency.lockutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.368 2 DEBUG oslo_concurrency.lockutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.368 2 DEBUG nova.virt.libvirt.vif [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:01:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.369 2 DEBUG nova.network.os_vif_util [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.370 2 DEBUG nova.network.os_vif_util [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.370 2 DEBUG os_vif [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.371 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.372 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.375 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ec905f0-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ec905f0-b7, col_values=(('external_ids', {'iface-id': '8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:e2:1b', 'vm-uuid': 'de383510-2de3-40bd-b479-c0010b3f2d1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:12 compute-0 NetworkManager[44885]: <info>  [1760432472.3932] manager: (tap8ec905f0-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.397 2 INFO os_vif [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.480 2 DEBUG nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.481 2 DEBUG nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.482 2 DEBUG nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] No VIF found with MAC fa:16:3e:be:e2:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.483 2 INFO nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Using config drive
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.508 2 DEBUG nova.storage.rbd_utils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image de383510-2de3-40bd-b479-c0010b3f2d1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:01:12 compute-0 ceph-mon[74249]: pgmap v1314: 305 pgs: 305 active+clean; 213 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 340 op/s
Oct 14 09:01:12 compute-0 ceph-mon[74249]: osdmap e165: 3 total, 3 up, 3 in
Oct 14 09:01:12 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2817278315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:12 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1505971606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.835 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432457.8343177, f5132a2b-83eb-4bb1-a556-480d9fb97fec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.837 2 INFO nova.compute.manager [-] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] VM Stopped (Lifecycle Event)
Oct 14 09:01:12 compute-0 nova_compute[259627]: 2025-10-14 09:01:12.860 2 DEBUG nova.compute.manager [None req-9bce639b-2341-49ed-92eb-302d1150fcb2 - - - - - -] [instance: f5132a2b-83eb-4bb1-a556-480d9fb97fec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:13 compute-0 nova_compute[259627]: 2025-10-14 09:01:13.168 2 INFO nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Creating config drive at /var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c/disk.config
Oct 14 09:01:13 compute-0 nova_compute[259627]: 2025-10-14 09:01:13.178 2 DEBUG oslo_concurrency.processutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_xdwy45u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:13 compute-0 nova_compute[259627]: 2025-10-14 09:01:13.229 2 DEBUG nova.network.neutron [req-6cfc9f3a-f6f8-4266-b9c0-d2f96a43249c req-c60e35bf-0195-4f21-8a1e-a8fb9f70f1ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updated VIF entry in instance network info cache for port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:01:13 compute-0 nova_compute[259627]: 2025-10-14 09:01:13.231 2 DEBUG nova.network.neutron [req-6cfc9f3a-f6f8-4266-b9c0-d2f96a43249c req-c60e35bf-0195-4f21-8a1e-a8fb9f70f1ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:01:13 compute-0 nova_compute[259627]: 2025-10-14 09:01:13.261 2 DEBUG oslo_concurrency.lockutils [req-6cfc9f3a-f6f8-4266-b9c0-d2f96a43249c req-c60e35bf-0195-4f21-8a1e-a8fb9f70f1ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:01:13 compute-0 nova_compute[259627]: 2025-10-14 09:01:13.334 2 DEBUG oslo_concurrency.processutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_xdwy45u" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:13 compute-0 nova_compute[259627]: 2025-10-14 09:01:13.359 2 DEBUG nova.storage.rbd_utils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image de383510-2de3-40bd-b479-c0010b3f2d1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:13 compute-0 nova_compute[259627]: 2025-10-14 09:01:13.363 2 DEBUG oslo_concurrency.processutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c/disk.config de383510-2de3-40bd-b479-c0010b3f2d1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:13 compute-0 nova_compute[259627]: 2025-10-14 09:01:13.415 2 DEBUG nova.network.neutron [-] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:01:13 compute-0 nova_compute[259627]: 2025-10-14 09:01:13.442 2 INFO nova.compute.manager [-] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Took 1.26 seconds to deallocate network for instance.
Oct 14 09:01:13 compute-0 nova_compute[259627]: 2025-10-14 09:01:13.505 2 DEBUG oslo_concurrency.lockutils [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:13 compute-0 nova_compute[259627]: 2025-10-14 09:01:13.505 2 DEBUG oslo_concurrency.lockutils [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:13 compute-0 nova_compute[259627]: 2025-10-14 09:01:13.549 2 DEBUG oslo_concurrency.processutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c/disk.config de383510-2de3-40bd-b479-c0010b3f2d1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:13 compute-0 nova_compute[259627]: 2025-10-14 09:01:13.549 2 INFO nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Deleting local config drive /var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c/disk.config because it was imported into RBD.
Oct 14 09:01:13 compute-0 nova_compute[259627]: 2025-10-14 09:01:13.588 2 DEBUG oslo_concurrency.processutils [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 213 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 222 op/s
Oct 14 09:01:13 compute-0 kernel: tap8ec905f0-b7: entered promiscuous mode
Oct 14 09:01:13 compute-0 systemd-udevd[306595]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:01:13 compute-0 NetworkManager[44885]: <info>  [1760432473.6241] manager: (tap8ec905f0-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/171)
Oct 14 09:01:13 compute-0 NetworkManager[44885]: <info>  [1760432473.6643] device (tap8ec905f0-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:01:13 compute-0 NetworkManager[44885]: <info>  [1760432473.6653] device (tap8ec905f0-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:01:13 compute-0 ovn_controller[152662]: 2025-10-14T09:01:13Z|00386|binding|INFO|Claiming lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for this chassis.
Oct 14 09:01:13 compute-0 ovn_controller[152662]: 2025-10-14T09:01:13Z|00387|binding|INFO|8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2: Claiming fa:16:3e:be:e2:1b 10.100.0.10
Oct 14 09:01:13 compute-0 nova_compute[259627]: 2025-10-14 09:01:13.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.676 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.677 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 bound to our chassis
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.679 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.691 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4231cdc6-096e-4a16-8523-fcac27318e04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.692 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c6eb8a4-61 in ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.695 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c6eb8a4-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.695 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1eec5a06-62da-482c-9b10-905694ac0e36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.699 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5f4ddc81-ad65-477f-832b-1b71f4f1de6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:13 compute-0 systemd-machined[214636]: New machine qemu-51-instance-0000002b.
Oct 14 09:01:13 compute-0 systemd[1]: Started Virtual Machine qemu-51-instance-0000002b.
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.713 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[07d0047c-a2f8-4dc0-8d57-17c09a7ac4d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.743 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a44b318d-f99a-4ef2-96c0-c70d3ef639b5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:13 compute-0 podman[306741]: 2025-10-14 09:01:13.756163489 +0000 UTC m=+0.156029037 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent)
Oct 14 09:01:13 compute-0 nova_compute[259627]: 2025-10-14 09:01:13.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:13 compute-0 ovn_controller[152662]: 2025-10-14T09:01:13Z|00388|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 ovn-installed in OVS
Oct 14 09:01:13 compute-0 ovn_controller[152662]: 2025-10-14T09:01:13Z|00389|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 up in Southbound
Oct 14 09:01:13 compute-0 nova_compute[259627]: 2025-10-14 09:01:13.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:13 compute-0 podman[306737]: 2025-10-14 09:01:13.780552368 +0000 UTC m=+0.181617096 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.787 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ce974abb-5ade-41c7-abe2-ff5f9ff0ee68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:13 compute-0 ovn_controller[152662]: 2025-10-14T09:01:13Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:c2:c5 10.100.0.8
Oct 14 09:01:13 compute-0 ovn_controller[152662]: 2025-10-14T09:01:13Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:c2:c5 10.100.0.8
Oct 14 09:01:13 compute-0 NetworkManager[44885]: <info>  [1760432473.8019] manager: (tap7c6eb8a4-60): new Veth device (/org/freedesktop/NetworkManager/Devices/172)
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.800 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a943b192-1a4e-4690-925d-76db0d30bd2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.844 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f41305a3-aab0-4e90-99f6-576dce4f1319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.848 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d1c6526a-b8d4-4389-aa49-93b3cb7d3903]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:13 compute-0 NetworkManager[44885]: <info>  [1760432473.8700] device (tap7c6eb8a4-60): carrier: link connected
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.876 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aaf17c0d-87f2-4728-9ab4-73432996d554]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.900 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e18da15d-2b30-4d77-b54a-2254bd4f16eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625263, 'reachable_time': 36003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306843, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.919 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[041f41ce-8ded-4145-86e5-8865d7df1343]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe62:70fd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 625263, 'tstamp': 625263}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306844, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.940 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9e20e7d5-87fc-4646-b301-a2bf7c07e8f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625263, 'reachable_time': 36003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306845, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:13.980 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[90b01ac4-c907-461c-96c2-778ac8c598a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:01:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3901028882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.048 2 DEBUG oslo_concurrency.processutils [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.055 2 DEBUG nova.compute.provider_tree [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:14.064 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c47a0a3-95b8-401c-95c1-02e817032f65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:14.066 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:14.066 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:14.067 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6eb8a4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:14 compute-0 kernel: tap7c6eb8a4-60: entered promiscuous mode
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:14.072 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c6eb8a4-60, col_values=(('external_ids', {'iface-id': '0f8f2393-a280-4a6e-ac22-da67bf8d74da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:14 compute-0 NetworkManager[44885]: <info>  [1760432474.0727] manager: (tap7c6eb8a4-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/173)
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:14 compute-0 ovn_controller[152662]: 2025-10-14T09:01:14Z|00390|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.076 2 DEBUG nova.scheduler.client.report [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.112 2 DEBUG oslo_concurrency.lockutils [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:14.119 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:14.120 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c14fab-7f4e-4da6-8bf5-f9c8005edbbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:14.121 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:01:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:14.121 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'env', 'PROCESS_TAG=haproxy-7c6eb8a4-6604-462a-8730-43f3742053a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c6eb8a4-6604-462a-8730-43f3742053a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.150 2 INFO nova.scheduler.client.report [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Deleted allocations for instance b932e3d1-4cf6-4934-9eec-c93284b17b43
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.229 2 DEBUG oslo_concurrency.lockutils [None req-39e9bacd-ca8a-42ed-b7c8-b01f5b8a32af 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.335 2 DEBUG nova.compute.manager [req-e4eec40c-393c-46ad-9b15-d7ec666a26ab req-a0cc0790-b0bb-4d35-9c6f-e43505f1ff36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Received event network-vif-plugged-6fb13023-6749-4e1b-b7d9-235dff8e72d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.335 2 DEBUG oslo_concurrency.lockutils [req-e4eec40c-393c-46ad-9b15-d7ec666a26ab req-a0cc0790-b0bb-4d35-9c6f-e43505f1ff36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.336 2 DEBUG oslo_concurrency.lockutils [req-e4eec40c-393c-46ad-9b15-d7ec666a26ab req-a0cc0790-b0bb-4d35-9c6f-e43505f1ff36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.336 2 DEBUG oslo_concurrency.lockutils [req-e4eec40c-393c-46ad-9b15-d7ec666a26ab req-a0cc0790-b0bb-4d35-9c6f-e43505f1ff36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.337 2 DEBUG nova.compute.manager [req-e4eec40c-393c-46ad-9b15-d7ec666a26ab req-a0cc0790-b0bb-4d35-9c6f-e43505f1ff36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] No waiting events found dispatching network-vif-plugged-6fb13023-6749-4e1b-b7d9-235dff8e72d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.337 2 WARNING nova.compute.manager [req-e4eec40c-393c-46ad-9b15-d7ec666a26ab req-a0cc0790-b0bb-4d35-9c6f-e43505f1ff36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Received unexpected event network-vif-plugged-6fb13023-6749-4e1b-b7d9-235dff8e72d4 for instance with vm_state deleted and task_state None.
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.337 2 DEBUG nova.compute.manager [req-e4eec40c-393c-46ad-9b15-d7ec666a26ab req-a0cc0790-b0bb-4d35-9c6f-e43505f1ff36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Received event network-vif-deleted-6fb13023-6749-4e1b-b7d9-235dff8e72d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.338 2 DEBUG nova.compute.manager [req-e4eec40c-393c-46ad-9b15-d7ec666a26ab req-a0cc0790-b0bb-4d35-9c6f-e43505f1ff36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.338 2 DEBUG oslo_concurrency.lockutils [req-e4eec40c-393c-46ad-9b15-d7ec666a26ab req-a0cc0790-b0bb-4d35-9c6f-e43505f1ff36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.338 2 DEBUG oslo_concurrency.lockutils [req-e4eec40c-393c-46ad-9b15-d7ec666a26ab req-a0cc0790-b0bb-4d35-9c6f-e43505f1ff36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.339 2 DEBUG oslo_concurrency.lockutils [req-e4eec40c-393c-46ad-9b15-d7ec666a26ab req-a0cc0790-b0bb-4d35-9c6f-e43505f1ff36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.339 2 DEBUG nova.compute.manager [req-e4eec40c-393c-46ad-9b15-d7ec666a26ab req-a0cc0790-b0bb-4d35-9c6f-e43505f1ff36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Processing event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.339 2 DEBUG nova.compute.manager [req-e4eec40c-393c-46ad-9b15-d7ec666a26ab req-a0cc0790-b0bb-4d35-9c6f-e43505f1ff36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.340 2 DEBUG oslo_concurrency.lockutils [req-e4eec40c-393c-46ad-9b15-d7ec666a26ab req-a0cc0790-b0bb-4d35-9c6f-e43505f1ff36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.340 2 DEBUG oslo_concurrency.lockutils [req-e4eec40c-393c-46ad-9b15-d7ec666a26ab req-a0cc0790-b0bb-4d35-9c6f-e43505f1ff36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.341 2 DEBUG oslo_concurrency.lockutils [req-e4eec40c-393c-46ad-9b15-d7ec666a26ab req-a0cc0790-b0bb-4d35-9c6f-e43505f1ff36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.341 2 DEBUG nova.compute.manager [req-e4eec40c-393c-46ad-9b15-d7ec666a26ab req-a0cc0790-b0bb-4d35-9c6f-e43505f1ff36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.341 2 WARNING nova.compute.manager [req-e4eec40c-393c-46ad-9b15-d7ec666a26ab req-a0cc0790-b0bb-4d35-9c6f-e43505f1ff36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state building and task_state spawning.
Oct 14 09:01:14 compute-0 podman[306921]: 2025-10-14 09:01:14.53388285 +0000 UTC m=+0.066235779 container create 14d9dfdc02a86ded9dfc6430111818c7d0785861fa1998b94f4eeaa7ffb2b197 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:01:14 compute-0 systemd[1]: Started libpod-conmon-14d9dfdc02a86ded9dfc6430111818c7d0785861fa1998b94f4eeaa7ffb2b197.scope.
Oct 14 09:01:14 compute-0 podman[306921]: 2025-10-14 09:01:14.497505556 +0000 UTC m=+0.029858495 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:01:14 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b4ff029b0854d60f3170a2a1c6ee7d945a8747cb7bdaf2292da8824edbda19/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:01:14 compute-0 podman[306921]: 2025-10-14 09:01:14.650555229 +0000 UTC m=+0.182908178 container init 14d9dfdc02a86ded9dfc6430111818c7d0785861fa1998b94f4eeaa7ffb2b197 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 09:01:14 compute-0 podman[306921]: 2025-10-14 09:01:14.658111245 +0000 UTC m=+0.190464174 container start 14d9dfdc02a86ded9dfc6430111818c7d0785861fa1998b94f4eeaa7ffb2b197 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 09:01:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Oct 14 09:01:14 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[306936]: [NOTICE]   (306940) : New worker (306942) forked
Oct 14 09:01:14 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[306936]: [NOTICE]   (306940) : Loading success.
Oct 14 09:01:14 compute-0 ceph-mon[74249]: pgmap v1316: 305 pgs: 305 active+clean; 213 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 222 op/s
Oct 14 09:01:14 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3901028882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Oct 14 09:01:14 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.764 2 DEBUG nova.compute.manager [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.766 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432474.7643204, de383510-2de3-40bd-b479-c0010b3f2d1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.766 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Started (Lifecycle Event)
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.775 2 DEBUG nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.780 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance spawned successfully.
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.780 2 DEBUG nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.804 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.808 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.817 2 DEBUG nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.818 2 DEBUG nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.818 2 DEBUG nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.818 2 DEBUG nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.819 2 DEBUG nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.819 2 DEBUG nova.virt.libvirt.driver [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.880 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.880 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432474.7657862, de383510-2de3-40bd-b479-c0010b3f2d1c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.880 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Paused (Lifecycle Event)
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.933 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.933 2 DEBUG oslo_concurrency.lockutils [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.934 2 DEBUG oslo_concurrency.lockutils [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.935 2 DEBUG oslo_concurrency.lockutils [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.936 2 DEBUG oslo_concurrency.lockutils [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.937 2 DEBUG oslo_concurrency.lockutils [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.938 2 INFO nova.compute.manager [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Terminating instance
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.939 2 DEBUG nova.compute.manager [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.943 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432474.772209, de383510-2de3-40bd-b479-c0010b3f2d1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.943 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Resumed (Lifecycle Event)
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.947 2 INFO nova.compute.manager [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Took 6.17 seconds to spawn the instance on the hypervisor.
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.948 2 DEBUG nova.compute.manager [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.978 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:14 compute-0 nova_compute[259627]: 2025-10-14 09:01:14.983 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:01:15 compute-0 kernel: tap60379992-d7 (unregistering): left promiscuous mode
Oct 14 09:01:15 compute-0 NetworkManager[44885]: <info>  [1760432475.0055] device (tap60379992-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.010 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.021 2 INFO nova.compute.manager [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Took 7.21 seconds to build instance.
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:15 compute-0 ovn_controller[152662]: 2025-10-14T09:01:15Z|00391|binding|INFO|Releasing lport 60379992-d75d-4eff-a6bb-5d1615f35475 from this chassis (sb_readonly=0)
Oct 14 09:01:15 compute-0 ovn_controller[152662]: 2025-10-14T09:01:15Z|00392|binding|INFO|Setting lport 60379992-d75d-4eff-a6bb-5d1615f35475 down in Southbound
Oct 14 09:01:15 compute-0 ovn_controller[152662]: 2025-10-14T09:01:15Z|00393|binding|INFO|Removing iface tap60379992-d7 ovn-installed in OVS
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:15.033 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:c2:c5 10.100.0.8'], port_security=['fa:16:3e:47:c2:c5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '310ebd88-5fe0-40ad-99dd-c3a1b410d357', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'a616a5a0-dd86-4326-bbdf-7cf172de843b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1013e89-bd11-44b1-be74-a5ce3b4c520f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=60379992-d75d-4eff-a6bb-5d1615f35475) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:01:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:15.037 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 60379992-d75d-4eff-a6bb-5d1615f35475 in datapath ea0c857a-d31a-43a0-b285-c89c1ddc920a unbound from our chassis
Oct 14 09:01:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:15.038 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ea0c857a-d31a-43a0-b285-c89c1ddc920a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.045 2 DEBUG oslo_concurrency.lockutils [None req-34ac4116-5f20-46a5-a6ce-1bda28a5c25c aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:15.045 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3aa5c3b8-1112-41ab-8231-f3c2726aed74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:15.046 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a namespace which is not needed anymore
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:15 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000020.scope: Deactivated successfully.
Oct 14 09:01:15 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000020.scope: Consumed 12.476s CPU time.
Oct 14 09:01:15 compute-0 systemd-machined[214636]: Machine qemu-50-instance-00000020 terminated.
Oct 14 09:01:15 compute-0 NetworkManager[44885]: <info>  [1760432475.1632] manager: (tap60379992-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/174)
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.191 2 INFO nova.virt.libvirt.driver [-] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance destroyed successfully.
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.192 2 DEBUG nova.objects.instance [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'resources' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.207 2 DEBUG nova.virt.libvirt.vif [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T08:59:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-490112967',display_name='tempest-ServersAdminTestJSON-server-490112967',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-490112967',id=32,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-xf25yl7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:01:02Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=310ebd88-5fe0-40ad-99dd-c3a1b410d357,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.209 2 DEBUG nova.network.os_vif_util [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.209 2 DEBUG nova.network.os_vif_util [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.210 2 DEBUG os_vif [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.212 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60379992-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.218 2 INFO os_vif [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7')
Oct 14 09:01:15 compute-0 neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a[298887]: [NOTICE]   (298905) : haproxy version is 2.8.14-c23fe91
Oct 14 09:01:15 compute-0 neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a[298887]: [NOTICE]   (298905) : path to executable is /usr/sbin/haproxy
Oct 14 09:01:15 compute-0 neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a[298887]: [WARNING]  (298905) : Exiting Master process...
Oct 14 09:01:15 compute-0 neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a[298887]: [ALERT]    (298905) : Current worker (298910) exited with code 143 (Terminated)
Oct 14 09:01:15 compute-0 neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a[298887]: [WARNING]  (298905) : All workers exited. Exiting... (0)
Oct 14 09:01:15 compute-0 systemd[1]: libpod-8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea.scope: Deactivated successfully.
Oct 14 09:01:15 compute-0 conmon[298887]: conmon 8d733b7f9d8d52263e3a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea.scope/container/memory.events
Oct 14 09:01:15 compute-0 podman[306979]: 2025-10-14 09:01:15.273713871 +0000 UTC m=+0.072095274 container died 8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 14 09:01:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea-userdata-shm.mount: Deactivated successfully.
Oct 14 09:01:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f93cef988fd9fccba9ac6e4003db9a58536ec9d10518828f62640c8f85b271e-merged.mount: Deactivated successfully.
Oct 14 09:01:15 compute-0 podman[306979]: 2025-10-14 09:01:15.326190091 +0000 UTC m=+0.124571474 container cleanup 8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:01:15 compute-0 systemd[1]: libpod-conmon-8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea.scope: Deactivated successfully.
Oct 14 09:01:15 compute-0 podman[307030]: 2025-10-14 09:01:15.406645129 +0000 UTC m=+0.049819366 container remove 8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 09:01:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:15.414 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf68836-263f-44f5-ab2f-67f664d4fd5a]: (4, ('Tue Oct 14 09:01:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a (8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea)\n8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea\nTue Oct 14 09:01:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a (8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea)\n8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:15.416 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[06c433e9-9bd9-4223-8a19-4c5050319552]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:15.418 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea0c857a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:15 compute-0 kernel: tapea0c857a-d0: left promiscuous mode
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:15.435 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2133f956-2f91-41f0-8865-aa814901f56b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:15.468 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[914f7350-cc68-492a-b37f-ba12382a04e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:15.470 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e00a90-7aa3-4c86-87b7-cbba2d8ffe2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:15.492 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[23818670-2497-4c60-aac3-4ce7713046aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616915, 'reachable_time': 33130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307045, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:15.496 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:01:15 compute-0 systemd[1]: run-netns-ovnmeta\x2dea0c857a\x2dd31a\x2d43a0\x2db285\x2dc89c1ddc920a.mount: Deactivated successfully.
Oct 14 09:01:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:15.496 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc6964d-bfeb-43c5-8b78-bb3c4387076a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 155 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 632 KiB/s rd, 5.9 MiB/s wr, 287 op/s
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.711 2 INFO nova.virt.libvirt.driver [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Deleting instance files /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357_del
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.713 2 INFO nova.virt.libvirt.driver [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Deletion of /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357_del complete
Oct 14 09:01:15 compute-0 ceph-mon[74249]: osdmap e166: 3 total, 3 up, 3 in
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.774 2 INFO nova.compute.manager [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.775 2 DEBUG oslo.service.loopingcall [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.775 2 DEBUG nova.compute.manager [-] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:01:15 compute-0 nova_compute[259627]: 2025-10-14 09:01:15.776 2 DEBUG nova.network.neutron [-] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:01:16 compute-0 nova_compute[259627]: 2025-10-14 09:01:16.189 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432461.1881418, 1fa36bef-0a0f-4f9c-877c-325a656fa127 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:16 compute-0 nova_compute[259627]: 2025-10-14 09:01:16.189 2 INFO nova.compute.manager [-] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] VM Stopped (Lifecycle Event)
Oct 14 09:01:16 compute-0 nova_compute[259627]: 2025-10-14 09:01:16.212 2 DEBUG nova.compute.manager [None req-0604c017-790e-49e5-94e4-9ac8e1baff31 - - - - - -] [instance: 1fa36bef-0a0f-4f9c-877c-325a656fa127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:16 compute-0 ceph-mon[74249]: pgmap v1318: 305 pgs: 305 active+clean; 155 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 632 KiB/s rd, 5.9 MiB/s wr, 287 op/s
Oct 14 09:01:16 compute-0 nova_compute[259627]: 2025-10-14 09:01:16.826 2 DEBUG nova.network.neutron [-] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:01:16 compute-0 nova_compute[259627]: 2025-10-14 09:01:16.845 2 INFO nova.compute.manager [-] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Took 1.07 seconds to deallocate network for instance.
Oct 14 09:01:16 compute-0 nova_compute[259627]: 2025-10-14 09:01:16.897 2 DEBUG oslo_concurrency.lockutils [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:16 compute-0 nova_compute[259627]: 2025-10-14 09:01:16.897 2 DEBUG oslo_concurrency.lockutils [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:16 compute-0 nova_compute[259627]: 2025-10-14 09:01:16.904 2 DEBUG nova.compute.manager [req-436bfa94-1733-4bb3-9750-be46333b772d req-f1ba26f3-53fb-47e4-bee7-f5498a409855 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received event network-vif-deleted-60379992-d75d-4eff-a6bb-5d1615f35475 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:16 compute-0 nova_compute[259627]: 2025-10-14 09:01:16.972 2 DEBUG oslo_concurrency.processutils [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:16 compute-0 sudo[307047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:01:16 compute-0 sudo[307047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:01:16 compute-0 sudo[307047]: pam_unix(sudo:session): session closed for user root
Oct 14 09:01:17 compute-0 sudo[307073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:01:17 compute-0 sudo[307073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:01:17 compute-0 sudo[307073]: pam_unix(sudo:session): session closed for user root
Oct 14 09:01:17 compute-0 nova_compute[259627]: 2025-10-14 09:01:17.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:17 compute-0 sudo[307098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:01:17 compute-0 sudo[307098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:01:17 compute-0 sudo[307098]: pam_unix(sudo:session): session closed for user root
Oct 14 09:01:17 compute-0 sudo[307133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:01:17 compute-0 sudo[307133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:01:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:01:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2840645882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:17 compute-0 nova_compute[259627]: 2025-10-14 09:01:17.468 2 DEBUG oslo_concurrency.processutils [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:17 compute-0 nova_compute[259627]: 2025-10-14 09:01:17.473 2 DEBUG nova.compute.provider_tree [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:01:17 compute-0 nova_compute[259627]: 2025-10-14 09:01:17.497 2 DEBUG nova.scheduler.client.report [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:01:17 compute-0 nova_compute[259627]: 2025-10-14 09:01:17.535 2 DEBUG oslo_concurrency.lockutils [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:01:17 compute-0 nova_compute[259627]: 2025-10-14 09:01:17.565 2 INFO nova.scheduler.client.report [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Deleted allocations for instance 310ebd88-5fe0-40ad-99dd-c3a1b410d357
Oct 14 09:01:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 155 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 632 KiB/s rd, 5.9 MiB/s wr, 287 op/s
Oct 14 09:01:17 compute-0 nova_compute[259627]: 2025-10-14 09:01:17.637 2 DEBUG oslo_concurrency.lockutils [None req-1efe5d23-7093-4336-873a-a639385833ca 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:17 compute-0 sudo[307133]: pam_unix(sudo:session): session closed for user root
Oct 14 09:01:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:01:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:01:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:01:17 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:01:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:01:17 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:01:17 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 7d0fa55a-2aa8-4d93-8fc6-09efb87cc959 does not exist
Oct 14 09:01:17 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev f45d5273-1462-4d42-afe9-3d1c71bd3aad does not exist
Oct 14 09:01:17 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 9028d109-f200-439d-856a-f48f8950d098 does not exist
Oct 14 09:01:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:01:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:01:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:01:17 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:01:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:01:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:01:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Oct 14 09:01:17 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2840645882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:01:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:01:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:01:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:01:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:01:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:01:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Oct 14 09:01:17 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Oct 14 09:01:17 compute-0 sudo[307201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:01:17 compute-0 sudo[307201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:01:17 compute-0 sudo[307201]: pam_unix(sudo:session): session closed for user root
Oct 14 09:01:17 compute-0 sudo[307226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:01:17 compute-0 sudo[307226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:01:17 compute-0 NetworkManager[44885]: <info>  [1760432477.8553] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Oct 14 09:01:17 compute-0 NetworkManager[44885]: <info>  [1760432477.8562] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/176)
Oct 14 09:01:17 compute-0 sudo[307226]: pam_unix(sudo:session): session closed for user root
Oct 14 09:01:17 compute-0 nova_compute[259627]: 2025-10-14 09:01:17.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:17 compute-0 sudo[307251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:01:17 compute-0 sudo[307251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:01:17 compute-0 sudo[307251]: pam_unix(sudo:session): session closed for user root
Oct 14 09:01:17 compute-0 nova_compute[259627]: 2025-10-14 09:01:17.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:01:17 compute-0 nova_compute[259627]: 2025-10-14 09:01:17.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:01:17 compute-0 nova_compute[259627]: 2025-10-14 09:01:17.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:01:17 compute-0 nova_compute[259627]: 2025-10-14 09:01:17.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:01:17 compute-0 sudo[307276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:01:18 compute-0 sudo[307276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:01:18 compute-0 nova_compute[259627]: 2025-10-14 09:01:18.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:18 compute-0 ovn_controller[152662]: 2025-10-14T09:01:18Z|00394|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 09:01:18 compute-0 nova_compute[259627]: 2025-10-14 09:01:18.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:18 compute-0 nova_compute[259627]: 2025-10-14 09:01:18.212 2 DEBUG nova.compute.manager [req-de277bd0-351f-4c6c-a891-61ffb94338f9 req-1b2b709c-379b-4837-987c-7fba1bf61250 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-changed-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:18 compute-0 nova_compute[259627]: 2025-10-14 09:01:18.213 2 DEBUG nova.compute.manager [req-de277bd0-351f-4c6c-a891-61ffb94338f9 req-1b2b709c-379b-4837-987c-7fba1bf61250 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Refreshing instance network info cache due to event network-changed-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:01:18 compute-0 nova_compute[259627]: 2025-10-14 09:01:18.213 2 DEBUG oslo_concurrency.lockutils [req-de277bd0-351f-4c6c-a891-61ffb94338f9 req-1b2b709c-379b-4837-987c-7fba1bf61250 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:01:18 compute-0 nova_compute[259627]: 2025-10-14 09:01:18.214 2 DEBUG oslo_concurrency.lockutils [req-de277bd0-351f-4c6c-a891-61ffb94338f9 req-1b2b709c-379b-4837-987c-7fba1bf61250 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:01:18 compute-0 nova_compute[259627]: 2025-10-14 09:01:18.214 2 DEBUG nova.network.neutron [req-de277bd0-351f-4c6c-a891-61ffb94338f9 req-1b2b709c-379b-4837-987c-7fba1bf61250 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Refreshing network info cache for port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:01:18 compute-0 podman[307344]: 2025-10-14 09:01:18.382366002 +0000 UTC m=+0.044526626 container create ea4723244bd228d8baf94c6497aee7071e38c9e0da31822f2124e6b2d5249ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hopper, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:01:18 compute-0 systemd[1]: Started libpod-conmon-ea4723244bd228d8baf94c6497aee7071e38c9e0da31822f2124e6b2d5249ed7.scope.
Oct 14 09:01:18 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:01:18 compute-0 podman[307344]: 2025-10-14 09:01:18.35992175 +0000 UTC m=+0.022082424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:01:18 compute-0 podman[307344]: 2025-10-14 09:01:18.472796855 +0000 UTC m=+0.134957489 container init ea4723244bd228d8baf94c6497aee7071e38c9e0da31822f2124e6b2d5249ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hopper, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:01:18 compute-0 podman[307344]: 2025-10-14 09:01:18.479406008 +0000 UTC m=+0.141566632 container start ea4723244bd228d8baf94c6497aee7071e38c9e0da31822f2124e6b2d5249ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:01:18 compute-0 podman[307344]: 2025-10-14 09:01:18.483389066 +0000 UTC m=+0.145549690 container attach ea4723244bd228d8baf94c6497aee7071e38c9e0da31822f2124e6b2d5249ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 09:01:18 compute-0 awesome_hopper[307360]: 167 167
Oct 14 09:01:18 compute-0 systemd[1]: libpod-ea4723244bd228d8baf94c6497aee7071e38c9e0da31822f2124e6b2d5249ed7.scope: Deactivated successfully.
Oct 14 09:01:18 compute-0 conmon[307360]: conmon ea4723244bd228d8baf9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ea4723244bd228d8baf94c6497aee7071e38c9e0da31822f2124e6b2d5249ed7.scope/container/memory.events
Oct 14 09:01:18 compute-0 podman[307344]: 2025-10-14 09:01:18.489638229 +0000 UTC m=+0.151798853 container died ea4723244bd228d8baf94c6497aee7071e38c9e0da31822f2124e6b2d5249ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 09:01:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca3803aa2348cb369d3bee87adeec1587a7c9531e389466101e9bb5a4dfba6dd-merged.mount: Deactivated successfully.
Oct 14 09:01:18 compute-0 podman[307344]: 2025-10-14 09:01:18.531711454 +0000 UTC m=+0.193872038 container remove ea4723244bd228d8baf94c6497aee7071e38c9e0da31822f2124e6b2d5249ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:01:18 compute-0 systemd[1]: libpod-conmon-ea4723244bd228d8baf94c6497aee7071e38c9e0da31822f2124e6b2d5249ed7.scope: Deactivated successfully.
Oct 14 09:01:18 compute-0 podman[307384]: 2025-10-14 09:01:18.741868821 +0000 UTC m=+0.066188599 container create 91d454c0ce49c6870292dfecbc5f7d22799cd03f2458d035bb59ff5f901d52a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wu, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 09:01:18 compute-0 ceph-mon[74249]: pgmap v1319: 305 pgs: 305 active+clean; 155 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 632 KiB/s rd, 5.9 MiB/s wr, 287 op/s
Oct 14 09:01:18 compute-0 ceph-mon[74249]: osdmap e167: 3 total, 3 up, 3 in
Oct 14 09:01:18 compute-0 systemd[1]: Started libpod-conmon-91d454c0ce49c6870292dfecbc5f7d22799cd03f2458d035bb59ff5f901d52a0.scope.
Oct 14 09:01:18 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:01:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f72c56c1cdf0b091b3b10782ef45db9dd2a02701bc8f1b94a949d5ae8ac5c99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:01:18 compute-0 podman[307384]: 2025-10-14 09:01:18.714340024 +0000 UTC m=+0.038659902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:01:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f72c56c1cdf0b091b3b10782ef45db9dd2a02701bc8f1b94a949d5ae8ac5c99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:01:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f72c56c1cdf0b091b3b10782ef45db9dd2a02701bc8f1b94a949d5ae8ac5c99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:01:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f72c56c1cdf0b091b3b10782ef45db9dd2a02701bc8f1b94a949d5ae8ac5c99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:01:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f72c56c1cdf0b091b3b10782ef45db9dd2a02701bc8f1b94a949d5ae8ac5c99/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:01:18 compute-0 podman[307384]: 2025-10-14 09:01:18.831797792 +0000 UTC m=+0.156117580 container init 91d454c0ce49c6870292dfecbc5f7d22799cd03f2458d035bb59ff5f901d52a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wu, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:01:18 compute-0 podman[307384]: 2025-10-14 09:01:18.837681886 +0000 UTC m=+0.162001664 container start 91d454c0ce49c6870292dfecbc5f7d22799cd03f2458d035bb59ff5f901d52a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 09:01:18 compute-0 podman[307384]: 2025-10-14 09:01:18.844634927 +0000 UTC m=+0.168954745 container attach 91d454c0ce49c6870292dfecbc5f7d22799cd03f2458d035bb59ff5f901d52a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:01:18 compute-0 nova_compute[259627]: 2025-10-14 09:01:18.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:01:19 compute-0 nova_compute[259627]: 2025-10-14 09:01:19.010 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432464.0095904, f921c880-38a7-40b6-8300-2123889a19c6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:19 compute-0 nova_compute[259627]: 2025-10-14 09:01:19.012 2 INFO nova.compute.manager [-] [instance: f921c880-38a7-40b6-8300-2123889a19c6] VM Stopped (Lifecycle Event)
Oct 14 09:01:19 compute-0 nova_compute[259627]: 2025-10-14 09:01:19.048 2 DEBUG nova.compute.manager [None req-31dbd029-086c-462a-8270-b562ddb968e1 - - - - - -] [instance: f921c880-38a7-40b6-8300-2123889a19c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 155 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 586 KiB/s rd, 3.3 MiB/s wr, 204 op/s
Oct 14 09:01:19 compute-0 lucid_wu[307401]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:01:19 compute-0 lucid_wu[307401]: --> relative data size: 1.0
Oct 14 09:01:19 compute-0 lucid_wu[307401]: --> All data devices are unavailable
Oct 14 09:01:19 compute-0 systemd[1]: libpod-91d454c0ce49c6870292dfecbc5f7d22799cd03f2458d035bb59ff5f901d52a0.scope: Deactivated successfully.
Oct 14 09:01:19 compute-0 systemd[1]: libpod-91d454c0ce49c6870292dfecbc5f7d22799cd03f2458d035bb59ff5f901d52a0.scope: Consumed 1.002s CPU time.
Oct 14 09:01:19 compute-0 podman[307384]: 2025-10-14 09:01:19.899293817 +0000 UTC m=+1.223613605 container died 91d454c0ce49c6870292dfecbc5f7d22799cd03f2458d035bb59ff5f901d52a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wu, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:01:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f72c56c1cdf0b091b3b10782ef45db9dd2a02701bc8f1b94a949d5ae8ac5c99-merged.mount: Deactivated successfully.
Oct 14 09:01:19 compute-0 podman[307384]: 2025-10-14 09:01:19.963621929 +0000 UTC m=+1.287941707 container remove 91d454c0ce49c6870292dfecbc5f7d22799cd03f2458d035bb59ff5f901d52a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wu, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:01:19 compute-0 nova_compute[259627]: 2025-10-14 09:01:19.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:01:19 compute-0 systemd[1]: libpod-conmon-91d454c0ce49c6870292dfecbc5f7d22799cd03f2458d035bb59ff5f901d52a0.scope: Deactivated successfully.
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.002 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.003 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.004 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.005 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:20 compute-0 sudo[307276]: pam_unix(sudo:session): session closed for user root
Oct 14 09:01:20 compute-0 sudo[307441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:01:20 compute-0 sudo[307441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:01:20 compute-0 sudo[307441]: pam_unix(sudo:session): session closed for user root
Oct 14 09:01:20 compute-0 sudo[307467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:01:20 compute-0 sudo[307467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:01:20 compute-0 sudo[307467]: pam_unix(sudo:session): session closed for user root
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:20 compute-0 sudo[307502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:01:20 compute-0 sudo[307502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:01:20 compute-0 sudo[307502]: pam_unix(sudo:session): session closed for user root
Oct 14 09:01:20 compute-0 sudo[307536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:01:20 compute-0 sudo[307536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.274 2 DEBUG oslo_concurrency.lockutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "333926ec-cf24-467b-b9b1-d1fa70a4feb2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.275 2 DEBUG oslo_concurrency.lockutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "333926ec-cf24-467b-b9b1-d1fa70a4feb2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.292 2 DEBUG nova.compute.manager [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.368 2 DEBUG oslo_concurrency.lockutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.369 2 DEBUG oslo_concurrency.lockutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.379 2 DEBUG nova.virt.hardware [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.379 2 INFO nova.compute.claims [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.409 2 DEBUG nova.network.neutron [req-de277bd0-351f-4c6c-a891-61ffb94338f9 req-1b2b709c-379b-4837-987c-7fba1bf61250 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updated VIF entry in instance network info cache for port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.410 2 DEBUG nova.network.neutron [req-de277bd0-351f-4c6c-a891-61ffb94338f9 req-1b2b709c-379b-4837-987c-7fba1bf61250 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.441 2 DEBUG oslo_concurrency.lockutils [req-de277bd0-351f-4c6c-a891-61ffb94338f9 req-1b2b709c-379b-4837-987c-7fba1bf61250 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:01:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:01:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1933843303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.488 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.512 2 DEBUG oslo_concurrency.processutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.593 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.594 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:01:20 compute-0 podman[307603]: 2025-10-14 09:01:20.648957519 +0000 UTC m=+0.047609742 container create fcdfa65e9869c52f9604d636615983ef4f09f6c408f7653c89d9d3eb95a5adfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:01:20 compute-0 systemd[1]: Started libpod-conmon-fcdfa65e9869c52f9604d636615983ef4f09f6c408f7653c89d9d3eb95a5adfe.scope.
Oct 14 09:01:20 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:01:20 compute-0 podman[307603]: 2025-10-14 09:01:20.721774299 +0000 UTC m=+0.120426542 container init fcdfa65e9869c52f9604d636615983ef4f09f6c408f7653c89d9d3eb95a5adfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_blackburn, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct 14 09:01:20 compute-0 podman[307603]: 2025-10-14 09:01:20.629434709 +0000 UTC m=+0.028086952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:01:20 compute-0 podman[307603]: 2025-10-14 09:01:20.730647448 +0000 UTC m=+0.129299671 container start fcdfa65e9869c52f9604d636615983ef4f09f6c408f7653c89d9d3eb95a5adfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_blackburn, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 09:01:20 compute-0 podman[307603]: 2025-10-14 09:01:20.734295707 +0000 UTC m=+0.132947950 container attach fcdfa65e9869c52f9604d636615983ef4f09f6c408f7653c89d9d3eb95a5adfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_blackburn, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 09:01:20 compute-0 silly_blackburn[307635]: 167 167
Oct 14 09:01:20 compute-0 systemd[1]: libpod-fcdfa65e9869c52f9604d636615983ef4f09f6c408f7653c89d9d3eb95a5adfe.scope: Deactivated successfully.
Oct 14 09:01:20 compute-0 podman[307640]: 2025-10-14 09:01:20.779502009 +0000 UTC m=+0.029303542 container died fcdfa65e9869c52f9604d636615983ef4f09f6c408f7653c89d9d3eb95a5adfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_blackburn, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:01:20 compute-0 ceph-mon[74249]: pgmap v1321: 305 pgs: 305 active+clean; 155 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 586 KiB/s rd, 3.3 MiB/s wr, 204 op/s
Oct 14 09:01:20 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1933843303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.790 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.791 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4121MB free_disk=59.92872619628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.792 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c98939c86e5fe1582014eeeb169366fead7a09880d5e77e8191f89b42ec7e3cb-merged.mount: Deactivated successfully.
Oct 14 09:01:20 compute-0 podman[307640]: 2025-10-14 09:01:20.812713555 +0000 UTC m=+0.062515068 container remove fcdfa65e9869c52f9604d636615983ef4f09f6c408f7653c89d9d3eb95a5adfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 09:01:20 compute-0 systemd[1]: libpod-conmon-fcdfa65e9869c52f9604d636615983ef4f09f6c408f7653c89d9d3eb95a5adfe.scope: Deactivated successfully.
Oct 14 09:01:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:01:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/91393918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.951 2 DEBUG oslo_concurrency.processutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.955 2 DEBUG nova.compute.provider_tree [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:01:20 compute-0 podman[307662]: 2025-10-14 09:01:20.965742878 +0000 UTC m=+0.036539560 container create 80dc31f75e4e8260110b72656351d0dd41a102cb634357f971924c6a15fef343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 09:01:20 compute-0 nova_compute[259627]: 2025-10-14 09:01:20.973 2 DEBUG nova.scheduler.client.report [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:01:21 compute-0 systemd[1]: Started libpod-conmon-80dc31f75e4e8260110b72656351d0dd41a102cb634357f971924c6a15fef343.scope.
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.003 2 DEBUG oslo_concurrency.lockutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.004 2 DEBUG nova.compute.manager [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.006 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:21 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:01:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b46921079402b40d79451ee77022087b5ff3073b6f1d8ca8da250f9919c5d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:01:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b46921079402b40d79451ee77022087b5ff3073b6f1d8ca8da250f9919c5d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:01:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b46921079402b40d79451ee77022087b5ff3073b6f1d8ca8da250f9919c5d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:01:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b46921079402b40d79451ee77022087b5ff3073b6f1d8ca8da250f9919c5d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:01:21 compute-0 podman[307662]: 2025-10-14 09:01:20.948083894 +0000 UTC m=+0.018880596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:01:21 compute-0 podman[307662]: 2025-10-14 09:01:21.060377885 +0000 UTC m=+0.131174657 container init 80dc31f75e4e8260110b72656351d0dd41a102cb634357f971924c6a15fef343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:01:21 compute-0 podman[307662]: 2025-10-14 09:01:21.075165618 +0000 UTC m=+0.145962340 container start 80dc31f75e4e8260110b72656351d0dd41a102cb634357f971924c6a15fef343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 14 09:01:21 compute-0 podman[307662]: 2025-10-14 09:01:21.07931044 +0000 UTC m=+0.150107222 container attach 80dc31f75e4e8260110b72656351d0dd41a102cb634357f971924c6a15fef343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.085 2 DEBUG nova.compute.manager [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.086 2 DEBUG nova.network.neutron [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.108 2 INFO nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.123 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance de383510-2de3-40bd-b479-c0010b3f2d1c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.123 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 333926ec-cf24-467b-b9b1-d1fa70a4feb2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.123 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.124 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.132 2 DEBUG nova.compute.manager [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.235 2 DEBUG nova.compute.manager [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.237 2 DEBUG nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.237 2 INFO nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Creating image(s)
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.264 2 DEBUG nova.storage.rbd_utils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] rbd image 333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.299 2 DEBUG nova.storage.rbd_utils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] rbd image 333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.339 2 DEBUG nova.storage.rbd_utils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] rbd image 333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.345 2 DEBUG oslo_concurrency.processutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.411 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.445 2 DEBUG oslo_concurrency.processutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.447 2 DEBUG oslo_concurrency.lockutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.448 2 DEBUG oslo_concurrency.lockutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.449 2 DEBUG oslo_concurrency.lockutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.481 2 DEBUG nova.storage.rbd_utils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] rbd image 333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.484 2 DEBUG oslo_concurrency.processutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:21 compute-0 ovn_controller[152662]: 2025-10-14T09:01:21Z|00395|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.586 2 DEBUG nova.network.neutron [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.587 2 DEBUG nova.compute.manager [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 344 op/s
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.743 2 DEBUG oslo_concurrency.lockutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "6a03ef41-3cc5-48d2-8796-369687ac6a10" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.744 2 DEBUG oslo_concurrency.lockutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "6a03ef41-3cc5-48d2-8796-369687ac6a10" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.767 2 DEBUG nova.compute.manager [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.775 2 DEBUG oslo_concurrency.processutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:01:21.797843) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432481797878, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2286, "num_deletes": 261, "total_data_size": 3202679, "memory_usage": 3266336, "flush_reason": "Manual Compaction"}
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct 14 09:01:21 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/91393918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432481812152, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3150618, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25642, "largest_seqno": 27927, "table_properties": {"data_size": 3140325, "index_size": 6467, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 22752, "raw_average_key_size": 21, "raw_value_size": 3119158, "raw_average_value_size": 2893, "num_data_blocks": 282, "num_entries": 1078, "num_filter_entries": 1078, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760432306, "oldest_key_time": 1760432306, "file_creation_time": 1760432481, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 14356 microseconds, and 8422 cpu microseconds.
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:01:21.812194) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3150618 bytes OK
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:01:21.812219) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:01:21.814001) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:01:21.814044) EVENT_LOG_v1 {"time_micros": 1760432481814036, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:01:21.814065) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3192812, prev total WAL file size 3192812, number of live WAL files 2.
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:01:21.815191) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3076KB)], [59(7030KB)]
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432481815245, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10349728, "oldest_snapshot_seqno": -1}
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5265 keys, 8630943 bytes, temperature: kUnknown
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432481854977, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8630943, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8594393, "index_size": 22296, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 130934, "raw_average_key_size": 24, "raw_value_size": 8498225, "raw_average_value_size": 1614, "num_data_blocks": 917, "num_entries": 5265, "num_filter_entries": 5265, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760432481, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:01:21.855218) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8630943 bytes
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:01:21.856435) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 259.8 rd, 216.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 6.9 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 5795, records dropped: 530 output_compression: NoCompression
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:01:21.856452) EVENT_LOG_v1 {"time_micros": 1760432481856442, "job": 32, "event": "compaction_finished", "compaction_time_micros": 39834, "compaction_time_cpu_micros": 18568, "output_level": 6, "num_output_files": 1, "total_output_size": 8630943, "num_input_records": 5795, "num_output_records": 5265, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432481856959, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432481857963, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:01:21.815103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:01:21.858044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:01:21.858051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:01:21.858054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:01:21.858056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:01:21 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:01:21.858058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:01:21 compute-0 boring_hugle[307681]: {
Oct 14 09:01:21 compute-0 boring_hugle[307681]:     "0": [
Oct 14 09:01:21 compute-0 boring_hugle[307681]:         {
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "devices": [
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "/dev/loop3"
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             ],
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "lv_name": "ceph_lv0",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "lv_size": "21470642176",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "name": "ceph_lv0",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "tags": {
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.cluster_name": "ceph",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.crush_device_class": "",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.encrypted": "0",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.osd_id": "0",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.type": "block",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.vdo": "0"
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             },
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "type": "block",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "vg_name": "ceph_vg0"
Oct 14 09:01:21 compute-0 boring_hugle[307681]:         }
Oct 14 09:01:21 compute-0 boring_hugle[307681]:     ],
Oct 14 09:01:21 compute-0 boring_hugle[307681]:     "1": [
Oct 14 09:01:21 compute-0 boring_hugle[307681]:         {
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "devices": [
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "/dev/loop4"
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             ],
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "lv_name": "ceph_lv1",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "lv_size": "21470642176",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "name": "ceph_lv1",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "tags": {
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.cluster_name": "ceph",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.crush_device_class": "",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.encrypted": "0",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.osd_id": "1",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.type": "block",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.vdo": "0"
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             },
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "type": "block",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "vg_name": "ceph_vg1"
Oct 14 09:01:21 compute-0 boring_hugle[307681]:         }
Oct 14 09:01:21 compute-0 boring_hugle[307681]:     ],
Oct 14 09:01:21 compute-0 boring_hugle[307681]:     "2": [
Oct 14 09:01:21 compute-0 boring_hugle[307681]:         {
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "devices": [
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "/dev/loop5"
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             ],
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "lv_name": "ceph_lv2",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "lv_size": "21470642176",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "name": "ceph_lv2",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "tags": {
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.cluster_name": "ceph",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.crush_device_class": "",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.encrypted": "0",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.osd_id": "2",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.type": "block",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:                 "ceph.vdo": "0"
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             },
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "type": "block",
Oct 14 09:01:21 compute-0 boring_hugle[307681]:             "vg_name": "ceph_vg2"
Oct 14 09:01:21 compute-0 boring_hugle[307681]:         }
Oct 14 09:01:21 compute-0 boring_hugle[307681]:     ]
Oct 14 09:01:21 compute-0 boring_hugle[307681]: }
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.875 2 DEBUG oslo_concurrency.lockutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.881 2 DEBUG nova.storage.rbd_utils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] resizing rbd image 333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:01:21 compute-0 systemd[1]: libpod-80dc31f75e4e8260110b72656351d0dd41a102cb634357f971924c6a15fef343.scope: Deactivated successfully.
Oct 14 09:01:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:01:21 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3353789005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.930 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.943 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:01:21 compute-0 podman[307855]: 2025-10-14 09:01:21.946693366 +0000 UTC m=+0.035971985 container died 80dc31f75e4e8260110b72656351d0dd41a102cb634357f971924c6a15fef343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:01:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-00b46921079402b40d79451ee77022087b5ff3073b6f1d8ca8da250f9919c5d7-merged.mount: Deactivated successfully.
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.976 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:01:21 compute-0 nova_compute[259627]: 2025-10-14 09:01:21.987 2 DEBUG nova.objects.instance [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lazy-loading 'migration_context' on Instance uuid 333926ec-cf24-467b-b9b1-d1fa70a4feb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:21 compute-0 podman[307855]: 2025-10-14 09:01:21.992073252 +0000 UTC m=+0.081351851 container remove 80dc31f75e4e8260110b72656351d0dd41a102cb634357f971924c6a15fef343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:01:21 compute-0 systemd[1]: libpod-conmon-80dc31f75e4e8260110b72656351d0dd41a102cb634357f971924c6a15fef343.scope: Deactivated successfully.
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.000 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.001 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.001 2 DEBUG oslo_concurrency.lockutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.002 2 DEBUG nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.002 2 DEBUG nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Ensure instance console log exists: /var/lib/nova/instances/333926ec-cf24-467b-b9b1-d1fa70a4feb2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.003 2 DEBUG oslo_concurrency.lockutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.003 2 DEBUG oslo_concurrency.lockutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.003 2 DEBUG oslo_concurrency.lockutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.004 2 DEBUG nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.010 2 DEBUG nova.virt.hardware [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.011 2 INFO nova.compute.claims [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.014 2 WARNING nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.019 2 DEBUG nova.virt.libvirt.host [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.019 2 DEBUG nova.virt.libvirt.host [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.023 2 DEBUG nova.virt.libvirt.host [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.023 2 DEBUG nova.virt.libvirt.host [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.024 2 DEBUG nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.024 2 DEBUG nova.virt.hardware [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.025 2 DEBUG nova.virt.hardware [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.025 2 DEBUG nova.virt.hardware [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.025 2 DEBUG nova.virt.hardware [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.025 2 DEBUG nova.virt.hardware [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.025 2 DEBUG nova.virt.hardware [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.026 2 DEBUG nova.virt.hardware [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.026 2 DEBUG nova.virt.hardware [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.026 2 DEBUG nova.virt.hardware [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.027 2 DEBUG nova.virt.hardware [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.027 2 DEBUG nova.virt.hardware [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.029 2 DEBUG oslo_concurrency.processutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:22 compute-0 sudo[307536]: pam_unix(sudo:session): session closed for user root
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:22 compute-0 sudo[307893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:01:22 compute-0 sudo[307893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:01:22 compute-0 sudo[307893]: pam_unix(sudo:session): session closed for user root
Oct 14 09:01:22 compute-0 sudo[307918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:01:22 compute-0 sudo[307918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:01:22 compute-0 sudo[307918]: pam_unix(sudo:session): session closed for user root
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.185 2 DEBUG oslo_concurrency.processutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:22 compute-0 sudo[307945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:01:22 compute-0 sudo[307945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:01:22 compute-0 sudo[307945]: pam_unix(sudo:session): session closed for user root
Oct 14 09:01:22 compute-0 sudo[307988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:01:22 compute-0 sudo[307988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:01:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:01:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3482916697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.497 2 DEBUG oslo_concurrency.processutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.529 2 DEBUG nova.storage.rbd_utils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] rbd image 333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.532 2 DEBUG oslo_concurrency.processutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:01:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:01:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/454809682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.630 2 DEBUG oslo_concurrency.processutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.636 2 DEBUG nova.compute.provider_tree [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.650 2 DEBUG nova.scheduler.client.report [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:01:22 compute-0 podman[308092]: 2025-10-14 09:01:22.663408688 +0000 UTC m=+0.041706116 container create f85e62312f6d08efe5bab6860df1c39f5e076b53f19069f19cad35ff36181e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_moore, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.671 2 DEBUG oslo_concurrency.lockutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.672 2 DEBUG nova.compute.manager [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:01:22 compute-0 systemd[1]: Started libpod-conmon-f85e62312f6d08efe5bab6860df1c39f5e076b53f19069f19cad35ff36181e9e.scope.
Oct 14 09:01:22 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.724 2 DEBUG nova.compute.manager [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.725 2 DEBUG nova.network.neutron [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:01:22 compute-0 podman[308092]: 2025-10-14 09:01:22.732452016 +0000 UTC m=+0.110749454 container init f85e62312f6d08efe5bab6860df1c39f5e076b53f19069f19cad35ff36181e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:01:22 compute-0 podman[308092]: 2025-10-14 09:01:22.741082538 +0000 UTC m=+0.119379956 container start f85e62312f6d08efe5bab6860df1c39f5e076b53f19069f19cad35ff36181e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_moore, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 09:01:22 compute-0 podman[308092]: 2025-10-14 09:01:22.646313578 +0000 UTC m=+0.024611016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:01:22 compute-0 sad_moore[308128]: 167 167
Oct 14 09:01:22 compute-0 systemd[1]: libpod-f85e62312f6d08efe5bab6860df1c39f5e076b53f19069f19cad35ff36181e9e.scope: Deactivated successfully.
Oct 14 09:01:22 compute-0 podman[308092]: 2025-10-14 09:01:22.744794799 +0000 UTC m=+0.123092227 container attach f85e62312f6d08efe5bab6860df1c39f5e076b53f19069f19cad35ff36181e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_moore, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 09:01:22 compute-0 conmon[308128]: conmon f85e62312f6d08efe5ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f85e62312f6d08efe5bab6860df1c39f5e076b53f19069f19cad35ff36181e9e.scope/container/memory.events
Oct 14 09:01:22 compute-0 podman[308092]: 2025-10-14 09:01:22.745927507 +0000 UTC m=+0.124224935 container died f85e62312f6d08efe5bab6860df1c39f5e076b53f19069f19cad35ff36181e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_moore, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.748 2 INFO nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:01:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-853314bcb007f25144814f901bbceab626eca2880e6f16dd4d9d86f82d732081-merged.mount: Deactivated successfully.
Oct 14 09:01:22 compute-0 podman[308092]: 2025-10-14 09:01:22.779924893 +0000 UTC m=+0.158222311 container remove f85e62312f6d08efe5bab6860df1c39f5e076b53f19069f19cad35ff36181e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:01:22 compute-0 systemd[1]: libpod-conmon-f85e62312f6d08efe5bab6860df1c39f5e076b53f19069f19cad35ff36181e9e.scope: Deactivated successfully.
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.796 2 DEBUG nova.compute.manager [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:01:22 compute-0 ceph-mon[74249]: pgmap v1322: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 344 op/s
Oct 14 09:01:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3353789005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3482916697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/454809682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.920 2 DEBUG nova.compute.manager [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.922 2 DEBUG nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.922 2 INFO nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Creating image(s)
Oct 14 09:01:22 compute-0 podman[308152]: 2025-10-14 09:01:22.940527052 +0000 UTC m=+0.043979363 container create 397839e2b51968179285a7919d040b1745a53c3414d4f12a2bc003086f97b27b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_maxwell, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.952 2 DEBUG nova.storage.rbd_utils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] rbd image 6a03ef41-3cc5-48d2-8796-369687ac6a10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:22 compute-0 systemd[1]: Started libpod-conmon-397839e2b51968179285a7919d040b1745a53c3414d4f12a2bc003086f97b27b.scope.
Oct 14 09:01:22 compute-0 nova_compute[259627]: 2025-10-14 09:01:22.975 2 DEBUG nova.storage.rbd_utils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] rbd image 6a03ef41-3cc5-48d2-8796-369687ac6a10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:01:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2986627789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:22 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14dee570a3457da766b6d5f488da0151452cabe5339f76dd4e9dc7635321a91b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14dee570a3457da766b6d5f488da0151452cabe5339f76dd4e9dc7635321a91b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14dee570a3457da766b6d5f488da0151452cabe5339f76dd4e9dc7635321a91b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:01:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14dee570a3457da766b6d5f488da0151452cabe5339f76dd4e9dc7635321a91b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.003 2 DEBUG nova.storage.rbd_utils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] rbd image 6a03ef41-3cc5-48d2-8796-369687ac6a10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.006 2 DEBUG oslo_concurrency.processutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:23 compute-0 podman[308152]: 2025-10-14 09:01:23.014786916 +0000 UTC m=+0.118239237 container init 397839e2b51968179285a7919d040b1745a53c3414d4f12a2bc003086f97b27b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_maxwell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:01:23 compute-0 podman[308152]: 2025-10-14 09:01:22.919467414 +0000 UTC m=+0.022919765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:01:23 compute-0 podman[308152]: 2025-10-14 09:01:23.023096921 +0000 UTC m=+0.126549242 container start 397839e2b51968179285a7919d040b1745a53c3414d4f12a2bc003086f97b27b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:01:23 compute-0 podman[308152]: 2025-10-14 09:01:23.026571636 +0000 UTC m=+0.130023977 container attach 397839e2b51968179285a7919d040b1745a53c3414d4f12a2bc003086f97b27b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_maxwell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.041 2 DEBUG oslo_concurrency.processutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.043 2 DEBUG nova.objects.instance [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 333926ec-cf24-467b-b9b1-d1fa70a4feb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.073 2 DEBUG oslo_concurrency.processutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.074 2 DEBUG oslo_concurrency.lockutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.075 2 DEBUG oslo_concurrency.lockutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.075 2 DEBUG oslo_concurrency.lockutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.098 2 DEBUG nova.storage.rbd_utils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] rbd image 6a03ef41-3cc5-48d2-8796-369687ac6a10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.102 2 DEBUG oslo_concurrency.processutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 6a03ef41-3cc5-48d2-8796-369687ac6a10_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.191 2 DEBUG nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:01:23 compute-0 nova_compute[259627]:   <uuid>333926ec-cf24-467b-b9b1-d1fa70a4feb2</uuid>
Oct 14 09:01:23 compute-0 nova_compute[259627]:   <name>instance-0000002c</name>
Oct 14 09:01:23 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:01:23 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:01:23 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <nova:name>tempest-ListImageFiltersTestJSON-server-28834106</nova:name>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:01:22</nova:creationTime>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:01:23 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:01:23 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:01:23 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:01:23 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:01:23 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:01:23 compute-0 nova_compute[259627]:         <nova:user uuid="fcce8bcba0284dda83483c56b84f3c0b">tempest-ListImageFiltersTestJSON-666128331-project-member</nova:user>
Oct 14 09:01:23 compute-0 nova_compute[259627]:         <nova:project uuid="996304c8692b4264873203558d043ea2">tempest-ListImageFiltersTestJSON-666128331</nova:project>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:01:23 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:01:23 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <system>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <entry name="serial">333926ec-cf24-467b-b9b1-d1fa70a4feb2</entry>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <entry name="uuid">333926ec-cf24-467b-b9b1-d1fa70a4feb2</entry>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     </system>
Oct 14 09:01:23 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:01:23 compute-0 nova_compute[259627]:   <os>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:   </os>
Oct 14 09:01:23 compute-0 nova_compute[259627]:   <features>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:   </features>
Oct 14 09:01:23 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:01:23 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:01:23 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk">
Oct 14 09:01:23 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       </source>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:01:23 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk.config">
Oct 14 09:01:23 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       </source>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:01:23 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/333926ec-cf24-467b-b9b1-d1fa70a4feb2/console.log" append="off"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <video>
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     </video>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:01:23 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:01:23 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:01:23 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:01:23 compute-0 nova_compute[259627]: </domain>
Oct 14 09:01:23 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.253 2 DEBUG nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.255 2 DEBUG nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.256 2 INFO nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Using config drive
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.288 2 DEBUG nova.storage.rbd_utils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] rbd image 333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.310 2 DEBUG nova.network.neutron [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.311 2 DEBUG nova.compute.manager [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.353 2 DEBUG oslo_concurrency.processutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 6a03ef41-3cc5-48d2-8796-369687ac6a10_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.424 2 DEBUG nova.storage.rbd_utils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] resizing rbd image 6a03ef41-3cc5-48d2-8796-369687ac6a10_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.515 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432468.443003, 6b630da6-e65a-48aa-9559-1d59beb73a93 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.515 2 INFO nova.compute.manager [-] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] VM Stopped (Lifecycle Event)
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.542 2 DEBUG nova.compute.manager [None req-64c1cd2c-9086-40ab-b22b-306396d51b8e - - - - - -] [instance: 6b630da6-e65a-48aa-9559-1d59beb73a93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.564 2 INFO nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Creating config drive at /var/lib/nova/instances/333926ec-cf24-467b-b9b1-d1fa70a4feb2/disk.config
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.569 2 DEBUG oslo_concurrency.processutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/333926ec-cf24-467b-b9b1-d1fa70a4feb2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_hx7ceww execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.9 MiB/s wr, 310 op/s
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.635 2 DEBUG nova.objects.instance [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lazy-loading 'migration_context' on Instance uuid 6a03ef41-3cc5-48d2-8796-369687ac6a10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.668 2 DEBUG nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.668 2 DEBUG nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Ensure instance console log exists: /var/lib/nova/instances/6a03ef41-3cc5-48d2-8796-369687ac6a10/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.669 2 DEBUG oslo_concurrency.lockutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.669 2 DEBUG oslo_concurrency.lockutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.670 2 DEBUG oslo_concurrency.lockutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.671 2 DEBUG nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.706 2 DEBUG oslo_concurrency.processutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/333926ec-cf24-467b-b9b1-d1fa70a4feb2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_hx7ceww" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.741 2 DEBUG nova.storage.rbd_utils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] rbd image 333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.746 2 DEBUG oslo_concurrency.processutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/333926ec-cf24-467b-b9b1-d1fa70a4feb2/disk.config 333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.789 2 WARNING nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.794 2 DEBUG nova.virt.libvirt.host [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.796 2 DEBUG nova.virt.libvirt.host [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.800 2 DEBUG nova.virt.libvirt.host [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.801 2 DEBUG nova.virt.libvirt.host [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.802 2 DEBUG nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.802 2 DEBUG nova.virt.hardware [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.804 2 DEBUG nova.virt.hardware [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.804 2 DEBUG nova.virt.hardware [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.805 2 DEBUG nova.virt.hardware [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.805 2 DEBUG nova.virt.hardware [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.806 2 DEBUG nova.virt.hardware [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.806 2 DEBUG nova.virt.hardware [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.807 2 DEBUG nova.virt.hardware [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.807 2 DEBUG nova.virt.hardware [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.808 2 DEBUG nova.virt.hardware [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.808 2 DEBUG nova.virt.hardware [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:01:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2986627789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.814 2 DEBUG oslo_concurrency.processutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.910 2 DEBUG oslo_concurrency.processutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/333926ec-cf24-467b-b9b1-d1fa70a4feb2/disk.config 333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:23 compute-0 nova_compute[259627]: 2025-10-14 09:01:23.911 2 INFO nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Deleting local config drive /var/lib/nova/instances/333926ec-cf24-467b-b9b1-d1fa70a4feb2/disk.config because it was imported into RBD.
Oct 14 09:01:23 compute-0 systemd-machined[214636]: New machine qemu-52-instance-0000002c.
Oct 14 09:01:23 compute-0 systemd[1]: Started Virtual Machine qemu-52-instance-0000002c.
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]: {
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:         "osd_id": 2,
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:         "type": "bluestore"
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:     },
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:         "osd_id": 1,
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:         "type": "bluestore"
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:     },
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:         "osd_id": 0,
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:         "type": "bluestore"
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]:     }
Oct 14 09:01:23 compute-0 crazy_maxwell[308202]: }
Oct 14 09:01:23 compute-0 systemd[1]: libpod-397839e2b51968179285a7919d040b1745a53c3414d4f12a2bc003086f97b27b.scope: Deactivated successfully.
Oct 14 09:01:24 compute-0 podman[308152]: 2025-10-14 09:01:24.000518332 +0000 UTC m=+1.103970653 container died 397839e2b51968179285a7919d040b1745a53c3414d4f12a2bc003086f97b27b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_maxwell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 09:01:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-14dee570a3457da766b6d5f488da0151452cabe5339f76dd4e9dc7635321a91b-merged.mount: Deactivated successfully.
Oct 14 09:01:24 compute-0 podman[308152]: 2025-10-14 09:01:24.061050261 +0000 UTC m=+1.164502602 container remove 397839e2b51968179285a7919d040b1745a53c3414d4f12a2bc003086f97b27b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 09:01:24 compute-0 systemd[1]: libpod-conmon-397839e2b51968179285a7919d040b1745a53c3414d4f12a2bc003086f97b27b.scope: Deactivated successfully.
Oct 14 09:01:24 compute-0 sudo[307988]: pam_unix(sudo:session): session closed for user root
Oct 14 09:01:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:01:24 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:01:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:01:24 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:01:24 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 32d71d78-0bd5-4ac4-bef9-4259372b2a94 does not exist
Oct 14 09:01:24 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 8fed42f6-f5b6-435c-b7cc-fc88586277da does not exist
Oct 14 09:01:24 compute-0 sudo[308473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:01:24 compute-0 sudo[308473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:01:24 compute-0 sudo[308473]: pam_unix(sudo:session): session closed for user root
Oct 14 09:01:24 compute-0 sudo[308498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:01:24 compute-0 sudo[308498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:01:24 compute-0 sudo[308498]: pam_unix(sudo:session): session closed for user root
Oct 14 09:01:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:01:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3278373157' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.315 2 DEBUG oslo_concurrency.processutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.336 2 DEBUG nova.storage.rbd_utils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] rbd image 6a03ef41-3cc5-48d2-8796-369687ac6a10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.339 2 DEBUG oslo_concurrency.processutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:01:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1262650786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.801 2 DEBUG oslo_concurrency.processutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.803 2 DEBUG nova.objects.instance [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a03ef41-3cc5-48d2-8796-369687ac6a10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.824 2 DEBUG nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:01:24 compute-0 nova_compute[259627]:   <uuid>6a03ef41-3cc5-48d2-8796-369687ac6a10</uuid>
Oct 14 09:01:24 compute-0 nova_compute[259627]:   <name>instance-0000002d</name>
Oct 14 09:01:24 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:01:24 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:01:24 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1592047477</nova:name>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:01:23</nova:creationTime>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:01:24 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:01:24 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:01:24 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:01:24 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:01:24 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:01:24 compute-0 nova_compute[259627]:         <nova:user uuid="fcce8bcba0284dda83483c56b84f3c0b">tempest-ListImageFiltersTestJSON-666128331-project-member</nova:user>
Oct 14 09:01:24 compute-0 nova_compute[259627]:         <nova:project uuid="996304c8692b4264873203558d043ea2">tempest-ListImageFiltersTestJSON-666128331</nova:project>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:01:24 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:01:24 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <system>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <entry name="serial">6a03ef41-3cc5-48d2-8796-369687ac6a10</entry>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <entry name="uuid">6a03ef41-3cc5-48d2-8796-369687ac6a10</entry>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     </system>
Oct 14 09:01:24 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:01:24 compute-0 nova_compute[259627]:   <os>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:   </os>
Oct 14 09:01:24 compute-0 nova_compute[259627]:   <features>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:   </features>
Oct 14 09:01:24 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:01:24 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:01:24 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/6a03ef41-3cc5-48d2-8796-369687ac6a10_disk">
Oct 14 09:01:24 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       </source>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:01:24 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/6a03ef41-3cc5-48d2-8796-369687ac6a10_disk.config">
Oct 14 09:01:24 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       </source>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:01:24 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/6a03ef41-3cc5-48d2-8796-369687ac6a10/console.log" append="off"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <video>
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     </video>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:01:24 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:01:24 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:01:24 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:01:24 compute-0 nova_compute[259627]: </domain>
Oct 14 09:01:24 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.885 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432484.885494, 333926ec-cf24-467b-b9b1-d1fa70a4feb2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.887 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] VM Resumed (Lifecycle Event)
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.890 2 DEBUG nova.compute.manager [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.891 2 DEBUG nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.895 2 INFO nova.virt.libvirt.driver [-] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Instance spawned successfully.
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.896 2 DEBUG nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.905 2 DEBUG nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.905 2 DEBUG nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.906 2 INFO nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Using config drive
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.936 2 DEBUG nova.storage.rbd_utils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] rbd image 6a03ef41-3cc5-48d2-8796-369687ac6a10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.944 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.951 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.956 2 DEBUG nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.957 2 DEBUG nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.957 2 DEBUG nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.958 2 DEBUG nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.959 2 DEBUG nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:24 compute-0 nova_compute[259627]: 2025-10-14 09:01:24.960 2 DEBUG nova.virt.libvirt.driver [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.001 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.002 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.002 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.011 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.012 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432484.886482, 333926ec-cf24-467b-b9b1-d1fa70a4feb2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.012 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] VM Started (Lifecycle Event)
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.039 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.040 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.047 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.051 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.057 2 INFO nova.compute.manager [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Took 3.82 seconds to spawn the instance on the hypervisor.
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.057 2 DEBUG nova.compute.manager [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:25 compute-0 ovn_controller[152662]: 2025-10-14T09:01:25Z|00396|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.087 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.116 2 INFO nova.compute.manager [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Took 4.77 seconds to build instance.
Oct 14 09:01:25 compute-0 ceph-mon[74249]: pgmap v1323: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.9 MiB/s wr, 310 op/s
Oct 14 09:01:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:01:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:01:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3278373157' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1262650786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.161 2 DEBUG oslo_concurrency.lockutils [None req-d45a12d0-c5d8-46ff-b081-375698ba2ad3 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "333926ec-cf24-467b-b9b1-d1fa70a4feb2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.201 2 INFO nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Creating config drive at /var/lib/nova/instances/6a03ef41-3cc5-48d2-8796-369687ac6a10/disk.config
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.206 2 DEBUG oslo_concurrency.processutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a03ef41-3cc5-48d2-8796-369687ac6a10/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpprpkb50m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.345 2 DEBUG oslo_concurrency.processutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a03ef41-3cc5-48d2-8796-369687ac6a10/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpprpkb50m" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.369 2 DEBUG nova.storage.rbd_utils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] rbd image 6a03ef41-3cc5-48d2-8796-369687ac6a10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.373 2 DEBUG oslo_concurrency.processutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a03ef41-3cc5-48d2-8796-369687ac6a10/disk.config 6a03ef41-3cc5-48d2-8796-369687ac6a10_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 181 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 193 op/s
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.648 2 DEBUG oslo_concurrency.processutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a03ef41-3cc5-48d2-8796-369687ac6a10/disk.config 6a03ef41-3cc5-48d2-8796-369687ac6a10_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:25 compute-0 nova_compute[259627]: 2025-10-14 09:01:25.649 2 INFO nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Deleting local config drive /var/lib/nova/instances/6a03ef41-3cc5-48d2-8796-369687ac6a10/disk.config because it was imported into RBD.
Oct 14 09:01:25 compute-0 systemd-machined[214636]: New machine qemu-53-instance-0000002d.
Oct 14 09:01:25 compute-0 systemd[1]: Started Virtual Machine qemu-53-instance-0000002d.
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.006 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.007 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.008 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.685 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432471.684153, b932e3d1-4cf6-4934-9eec-c93284b17b43 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.685 2 INFO nova.compute.manager [-] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] VM Stopped (Lifecycle Event)
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.712 2 DEBUG nova.compute.manager [None req-95576f8a-8493-4491-80f3-180238a6818d - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.720 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432486.7194107, 6a03ef41-3cc5-48d2-8796-369687ac6a10 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.721 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] VM Resumed (Lifecycle Event)
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.722 2 DEBUG nova.compute.manager [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.723 2 DEBUG nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.726 2 INFO nova.virt.libvirt.driver [-] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Instance spawned successfully.
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.727 2 DEBUG nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.756 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.763 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.767 2 DEBUG nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.768 2 DEBUG nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.768 2 DEBUG nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.769 2 DEBUG nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.769 2 DEBUG nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.770 2 DEBUG nova.virt.libvirt.driver [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.803 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.803 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432486.7199926, 6a03ef41-3cc5-48d2-8796-369687ac6a10 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.803 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] VM Started (Lifecycle Event)
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.833 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.836 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.876 2 INFO nova.compute.manager [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Took 3.96 seconds to spawn the instance on the hypervisor.
Oct 14 09:01:26 compute-0 nova_compute[259627]: 2025-10-14 09:01:26.877 2 DEBUG nova.compute.manager [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:27 compute-0 nova_compute[259627]: 2025-10-14 09:01:27.041 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:01:27 compute-0 nova_compute[259627]: 2025-10-14 09:01:27.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:27 compute-0 ceph-mon[74249]: pgmap v1324: 305 pgs: 305 active+clean; 181 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 193 op/s
Oct 14 09:01:27 compute-0 nova_compute[259627]: 2025-10-14 09:01:27.160 2 INFO nova.compute.manager [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Took 5.34 seconds to build instance.
Oct 14 09:01:27 compute-0 nova_compute[259627]: 2025-10-14 09:01:27.181 2 DEBUG oslo_concurrency.lockutils [None req-db1c3cba-0e8a-46c8-bddb-e628b78d57da fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "6a03ef41-3cc5-48d2-8796-369687ac6a10" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:27 compute-0 ovn_controller[152662]: 2025-10-14T09:01:27Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:be:e2:1b 10.100.0.10
Oct 14 09:01:27 compute-0 ovn_controller[152662]: 2025-10-14T09:01:27Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:be:e2:1b 10.100.0.10
Oct 14 09:01:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:01:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 181 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 193 op/s
Oct 14 09:01:27 compute-0 nova_compute[259627]: 2025-10-14 09:01:27.887 2 DEBUG oslo_concurrency.lockutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Acquiring lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:27 compute-0 nova_compute[259627]: 2025-10-14 09:01:27.888 2 DEBUG oslo_concurrency.lockutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:27 compute-0 nova_compute[259627]: 2025-10-14 09:01:27.907 2 DEBUG nova.compute.manager [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:01:27 compute-0 nova_compute[259627]: 2025-10-14 09:01:27.982 2 DEBUG oslo_concurrency.lockutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:27 compute-0 nova_compute[259627]: 2025-10-14 09:01:27.983 2 DEBUG oslo_concurrency.lockutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:27 compute-0 nova_compute[259627]: 2025-10-14 09:01:27.988 2 DEBUG nova.virt.hardware [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:01:27 compute-0 nova_compute[259627]: 2025-10-14 09:01:27.988 2 INFO nova.compute.claims [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.126 2 DEBUG oslo_concurrency.processutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.212 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.225 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.226 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.227 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.227 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:01:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:01:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2777147188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.601 2 DEBUG nova.compute.manager [None req-0de3b6a8-b70c-42a1-bf1e-e0e7d2ca1eef fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.608 2 DEBUG oslo_concurrency.processutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.614 2 DEBUG nova.compute.provider_tree [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.636 2 DEBUG nova.scheduler.client.report [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.664 2 INFO nova.compute.manager [None req-0de3b6a8-b70c-42a1-bf1e-e0e7d2ca1eef fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] instance snapshotting
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.670 2 DEBUG oslo_concurrency.lockutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.671 2 DEBUG nova.compute.manager [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.726 2 DEBUG nova.compute.manager [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.727 2 DEBUG nova.network.neutron [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.744 2 INFO nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.762 2 DEBUG nova.compute.manager [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.844 2 DEBUG nova.compute.manager [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.845 2 DEBUG nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.845 2 INFO nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Creating image(s)
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.867 2 DEBUG nova.storage.rbd_utils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] rbd image c5dc9921-0deb-4a3f-83d2-703f8b5f1f37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.888 2 DEBUG nova.storage.rbd_utils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] rbd image c5dc9921-0deb-4a3f-83d2-703f8b5f1f37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.910 2 DEBUG nova.storage.rbd_utils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] rbd image c5dc9921-0deb-4a3f-83d2-703f8b5f1f37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.916 2 DEBUG oslo_concurrency.processutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:28 compute-0 nova_compute[259627]: 2025-10-14 09:01:28.978 2 INFO nova.virt.libvirt.driver [None req-0de3b6a8-b70c-42a1-bf1e-e0e7d2ca1eef fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Beginning live snapshot process
Oct 14 09:01:29 compute-0 nova_compute[259627]: 2025-10-14 09:01:29.025 2 DEBUG oslo_concurrency.processutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:29 compute-0 nova_compute[259627]: 2025-10-14 09:01:29.025 2 DEBUG oslo_concurrency.lockutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:29 compute-0 nova_compute[259627]: 2025-10-14 09:01:29.026 2 DEBUG oslo_concurrency.lockutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:29 compute-0 nova_compute[259627]: 2025-10-14 09:01:29.027 2 DEBUG oslo_concurrency.lockutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:29 compute-0 nova_compute[259627]: 2025-10-14 09:01:29.050 2 DEBUG nova.storage.rbd_utils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] rbd image c5dc9921-0deb-4a3f-83d2-703f8b5f1f37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:29 compute-0 nova_compute[259627]: 2025-10-14 09:01:29.053 2 DEBUG oslo_concurrency.processutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c5dc9921-0deb-4a3f-83d2-703f8b5f1f37_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:29 compute-0 nova_compute[259627]: 2025-10-14 09:01:29.084 2 DEBUG nova.policy [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '770c2e6a53f04f00b0b44d2f4a0c799b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '98be6c59bba64fefb29ee881cc1d3825', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:01:29 compute-0 ceph-mon[74249]: pgmap v1325: 305 pgs: 305 active+clean; 181 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 193 op/s
Oct 14 09:01:29 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2777147188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:29 compute-0 nova_compute[259627]: 2025-10-14 09:01:29.203 2 DEBUG nova.virt.libvirt.imagebackend [None req-0de3b6a8-b70c-42a1-bf1e-e0e7d2ca1eef fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 09:01:29 compute-0 nova_compute[259627]: 2025-10-14 09:01:29.306 2 DEBUG oslo_concurrency.processutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c5dc9921-0deb-4a3f-83d2-703f8b5f1f37_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:29 compute-0 nova_compute[259627]: 2025-10-14 09:01:29.356 2 DEBUG nova.storage.rbd_utils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] resizing rbd image c5dc9921-0deb-4a3f-83d2-703f8b5f1f37_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:01:29 compute-0 nova_compute[259627]: 2025-10-14 09:01:29.450 2 DEBUG nova.objects.instance [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lazy-loading 'migration_context' on Instance uuid c5dc9921-0deb-4a3f-83d2-703f8b5f1f37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:29 compute-0 nova_compute[259627]: 2025-10-14 09:01:29.471 2 DEBUG nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:01:29 compute-0 nova_compute[259627]: 2025-10-14 09:01:29.472 2 DEBUG nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Ensure instance console log exists: /var/lib/nova/instances/c5dc9921-0deb-4a3f-83d2-703f8b5f1f37/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:01:29 compute-0 nova_compute[259627]: 2025-10-14 09:01:29.472 2 DEBUG oslo_concurrency.lockutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:29 compute-0 nova_compute[259627]: 2025-10-14 09:01:29.472 2 DEBUG oslo_concurrency.lockutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:29 compute-0 nova_compute[259627]: 2025-10-14 09:01:29.473 2 DEBUG oslo_concurrency.lockutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:29 compute-0 nova_compute[259627]: 2025-10-14 09:01:29.521 2 DEBUG nova.storage.rbd_utils [None req-0de3b6a8-b70c-42a1-bf1e-e0e7d2ca1eef fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] creating snapshot(5e8d528818e94824b6784e8845e955d5) on rbd image(333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:01:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 181 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 163 op/s
Oct 14 09:01:29 compute-0 nova_compute[259627]: 2025-10-14 09:01:29.652 2 DEBUG nova.network.neutron [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Successfully created port: 0abd3b45-48d9-4162-8943-889099bb3c13 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:01:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Oct 14 09:01:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Oct 14 09:01:30 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Oct 14 09:01:30 compute-0 nova_compute[259627]: 2025-10-14 09:01:30.185 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432475.1836016, 310ebd88-5fe0-40ad-99dd-c3a1b410d357 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:30 compute-0 nova_compute[259627]: 2025-10-14 09:01:30.186 2 INFO nova.compute.manager [-] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] VM Stopped (Lifecycle Event)
Oct 14 09:01:30 compute-0 nova_compute[259627]: 2025-10-14 09:01:30.212 2 DEBUG nova.compute.manager [None req-55d644c2-90d4-4076-80f3-814b66cea0f8 - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:30 compute-0 nova_compute[259627]: 2025-10-14 09:01:30.241 2 DEBUG nova.storage.rbd_utils [None req-0de3b6a8-b70c-42a1-bf1e-e0e7d2ca1eef fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] cloning vms/333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk@5e8d528818e94824b6784e8845e955d5 to images/d35dabca-3251-4f9e-98a3-4feb2a41ea29 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:01:30 compute-0 nova_compute[259627]: 2025-10-14 09:01:30.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:30 compute-0 nova_compute[259627]: 2025-10-14 09:01:30.336 2 DEBUG nova.storage.rbd_utils [None req-0de3b6a8-b70c-42a1-bf1e-e0e7d2ca1eef fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] flattening images/d35dabca-3251-4f9e-98a3-4feb2a41ea29 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:01:30 compute-0 nova_compute[259627]: 2025-10-14 09:01:30.466 2 DEBUG nova.network.neutron [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Successfully updated port: 0abd3b45-48d9-4162-8943-889099bb3c13 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:01:30 compute-0 nova_compute[259627]: 2025-10-14 09:01:30.511 2 DEBUG oslo_concurrency.lockutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Acquiring lock "refresh_cache-c5dc9921-0deb-4a3f-83d2-703f8b5f1f37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:01:30 compute-0 nova_compute[259627]: 2025-10-14 09:01:30.512 2 DEBUG oslo_concurrency.lockutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Acquired lock "refresh_cache-c5dc9921-0deb-4a3f-83d2-703f8b5f1f37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:01:30 compute-0 nova_compute[259627]: 2025-10-14 09:01:30.512 2 DEBUG nova.network.neutron [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:01:30 compute-0 nova_compute[259627]: 2025-10-14 09:01:30.528 2 DEBUG nova.storage.rbd_utils [None req-0de3b6a8-b70c-42a1-bf1e-e0e7d2ca1eef fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] removing snapshot(5e8d528818e94824b6784e8845e955d5) on rbd image(333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:01:30 compute-0 nova_compute[259627]: 2025-10-14 09:01:30.550 2 DEBUG nova.compute.manager [req-410f5d84-51cf-4034-a076-8e0a722c15d6 req-299c83dd-5486-4d63-9367-baa29ab68223 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Received event network-changed-0abd3b45-48d9-4162-8943-889099bb3c13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:30 compute-0 nova_compute[259627]: 2025-10-14 09:01:30.551 2 DEBUG nova.compute.manager [req-410f5d84-51cf-4034-a076-8e0a722c15d6 req-299c83dd-5486-4d63-9367-baa29ab68223 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Refreshing instance network info cache due to event network-changed-0abd3b45-48d9-4162-8943-889099bb3c13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:01:30 compute-0 nova_compute[259627]: 2025-10-14 09:01:30.551 2 DEBUG oslo_concurrency.lockutils [req-410f5d84-51cf-4034-a076-8e0a722c15d6 req-299c83dd-5486-4d63-9367-baa29ab68223 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c5dc9921-0deb-4a3f-83d2-703f8b5f1f37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:01:30 compute-0 nova_compute[259627]: 2025-10-14 09:01:30.717 2 DEBUG nova.network.neutron [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:01:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Oct 14 09:01:31 compute-0 ceph-mon[74249]: pgmap v1326: 305 pgs: 305 active+clean; 181 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 163 op/s
Oct 14 09:01:31 compute-0 ceph-mon[74249]: osdmap e168: 3 total, 3 up, 3 in
Oct 14 09:01:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Oct 14 09:01:31 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.237 2 DEBUG nova.storage.rbd_utils [None req-0de3b6a8-b70c-42a1-bf1e-e0e7d2ca1eef fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] creating snapshot(snap) on rbd image(d35dabca-3251-4f9e-98a3-4feb2a41ea29) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.491 2 DEBUG nova.network.neutron [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Updating instance_info_cache with network_info: [{"id": "0abd3b45-48d9-4162-8943-889099bb3c13", "address": "fa:16:3e:00:a4:15", "network": {"id": "c4a550d4-9ec5-40fd-abf2-7c1066a244e7", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-634234326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98be6c59bba64fefb29ee881cc1d3825", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0abd3b45-48", "ovs_interfaceid": "0abd3b45-48d9-4162-8943-889099bb3c13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.511 2 DEBUG oslo_concurrency.lockutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Releasing lock "refresh_cache-c5dc9921-0deb-4a3f-83d2-703f8b5f1f37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.512 2 DEBUG nova.compute.manager [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Instance network_info: |[{"id": "0abd3b45-48d9-4162-8943-889099bb3c13", "address": "fa:16:3e:00:a4:15", "network": {"id": "c4a550d4-9ec5-40fd-abf2-7c1066a244e7", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-634234326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98be6c59bba64fefb29ee881cc1d3825", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0abd3b45-48", "ovs_interfaceid": "0abd3b45-48d9-4162-8943-889099bb3c13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.512 2 DEBUG oslo_concurrency.lockutils [req-410f5d84-51cf-4034-a076-8e0a722c15d6 req-299c83dd-5486-4d63-9367-baa29ab68223 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c5dc9921-0deb-4a3f-83d2-703f8b5f1f37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.513 2 DEBUG nova.network.neutron [req-410f5d84-51cf-4034-a076-8e0a722c15d6 req-299c83dd-5486-4d63-9367-baa29ab68223 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Refreshing network info cache for port 0abd3b45-48d9-4162-8943-889099bb3c13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.518 2 DEBUG nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Start _get_guest_xml network_info=[{"id": "0abd3b45-48d9-4162-8943-889099bb3c13", "address": "fa:16:3e:00:a4:15", "network": {"id": "c4a550d4-9ec5-40fd-abf2-7c1066a244e7", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-634234326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98be6c59bba64fefb29ee881cc1d3825", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0abd3b45-48", "ovs_interfaceid": "0abd3b45-48d9-4162-8943-889099bb3c13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.523 2 WARNING nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.532 2 DEBUG nova.virt.libvirt.host [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.533 2 DEBUG nova.virt.libvirt.host [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.537 2 DEBUG nova.virt.libvirt.host [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.537 2 DEBUG nova.virt.libvirt.host [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.538 2 DEBUG nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.538 2 DEBUG nova.virt.hardware [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.539 2 DEBUG nova.virt.hardware [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.539 2 DEBUG nova.virt.hardware [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.540 2 DEBUG nova.virt.hardware [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.540 2 DEBUG nova.virt.hardware [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.540 2 DEBUG nova.virt.hardware [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.541 2 DEBUG nova.virt.hardware [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.541 2 DEBUG nova.virt.hardware [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.541 2 DEBUG nova.virt.hardware [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.542 2 DEBUG nova.virt.hardware [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.542 2 DEBUG nova.virt.hardware [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:01:31 compute-0 nova_compute[259627]: 2025-10-14 09:01:31.545 2 DEBUG oslo_concurrency.processutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 270 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 11 MiB/s wr, 466 op/s
Oct 14 09:01:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:01:32 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/753477380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.021 2 DEBUG oslo_concurrency.processutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.040 2 DEBUG nova.storage.rbd_utils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] rbd image c5dc9921-0deb-4a3f-83d2-703f8b5f1f37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.044 2 DEBUG oslo_concurrency.processutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Oct 14 09:01:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Oct 14 09:01:32 compute-0 ceph-mon[74249]: osdmap e169: 3 total, 3 up, 3 in
Oct 14 09:01:32 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/753477380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:32 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Oct 14 09:01:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:01:32 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3660431528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.467 2 DEBUG oslo_concurrency.processutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.469 2 DEBUG nova.virt.libvirt.vif [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:01:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-223637843',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-223637843',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-223637843',id=46,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98be6c59bba64fefb29ee881cc1d3825',ramdisk_id='',reservation_id='r-h2k3kmv9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-857809385',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-857809385-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:01:28Z,user_data=None,user_id='770c2e6a53f04f00b0b44d2f4a0c799b',uuid=c5dc9921-0deb-4a3f-83d2-703f8b5f1f37,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0abd3b45-48d9-4162-8943-889099bb3c13", "address": "fa:16:3e:00:a4:15", "network": {"id": "c4a550d4-9ec5-40fd-abf2-7c1066a244e7", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-634234326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98be6c59bba64fefb29ee881cc1d3825", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0abd3b45-48", "ovs_interfaceid": "0abd3b45-48d9-4162-8943-889099bb3c13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.470 2 DEBUG nova.network.os_vif_util [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Converting VIF {"id": "0abd3b45-48d9-4162-8943-889099bb3c13", "address": "fa:16:3e:00:a4:15", "network": {"id": "c4a550d4-9ec5-40fd-abf2-7c1066a244e7", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-634234326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98be6c59bba64fefb29ee881cc1d3825", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0abd3b45-48", "ovs_interfaceid": "0abd3b45-48d9-4162-8943-889099bb3c13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.471 2 DEBUG nova.network.os_vif_util [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:a4:15,bridge_name='br-int',has_traffic_filtering=True,id=0abd3b45-48d9-4162-8943-889099bb3c13,network=Network(c4a550d4-9ec5-40fd-abf2-7c1066a244e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0abd3b45-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.473 2 DEBUG nova.objects.instance [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5dc9921-0deb-4a3f-83d2-703f8b5f1f37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.497 2 DEBUG nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:01:32 compute-0 nova_compute[259627]:   <uuid>c5dc9921-0deb-4a3f-83d2-703f8b5f1f37</uuid>
Oct 14 09:01:32 compute-0 nova_compute[259627]:   <name>instance-0000002e</name>
Oct 14 09:01:32 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:01:32 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:01:32 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-223637843</nova:name>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:01:31</nova:creationTime>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:01:32 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:01:32 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:01:32 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:01:32 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:01:32 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:01:32 compute-0 nova_compute[259627]:         <nova:user uuid="770c2e6a53f04f00b0b44d2f4a0c799b">tempest-ServersNegativeTestMultiTenantJSON-857809385-project-member</nova:user>
Oct 14 09:01:32 compute-0 nova_compute[259627]:         <nova:project uuid="98be6c59bba64fefb29ee881cc1d3825">tempest-ServersNegativeTestMultiTenantJSON-857809385</nova:project>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:01:32 compute-0 nova_compute[259627]:         <nova:port uuid="0abd3b45-48d9-4162-8943-889099bb3c13">
Oct 14 09:01:32 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:01:32 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:01:32 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <system>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <entry name="serial">c5dc9921-0deb-4a3f-83d2-703f8b5f1f37</entry>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <entry name="uuid">c5dc9921-0deb-4a3f-83d2-703f8b5f1f37</entry>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     </system>
Oct 14 09:01:32 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:01:32 compute-0 nova_compute[259627]:   <os>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:   </os>
Oct 14 09:01:32 compute-0 nova_compute[259627]:   <features>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:   </features>
Oct 14 09:01:32 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:01:32 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:01:32 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/c5dc9921-0deb-4a3f-83d2-703f8b5f1f37_disk">
Oct 14 09:01:32 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       </source>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:01:32 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/c5dc9921-0deb-4a3f-83d2-703f8b5f1f37_disk.config">
Oct 14 09:01:32 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       </source>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:01:32 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:00:a4:15"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <target dev="tap0abd3b45-48"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/c5dc9921-0deb-4a3f-83d2-703f8b5f1f37/console.log" append="off"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <video>
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     </video>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:01:32 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:01:32 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:01:32 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:01:32 compute-0 nova_compute[259627]: </domain>
Oct 14 09:01:32 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.510 2 DEBUG nova.compute.manager [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Preparing to wait for external event network-vif-plugged-0abd3b45-48d9-4162-8943-889099bb3c13 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.510 2 DEBUG oslo_concurrency.lockutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Acquiring lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.511 2 DEBUG oslo_concurrency.lockutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.511 2 DEBUG oslo_concurrency.lockutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.513 2 DEBUG nova.virt.libvirt.vif [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:01:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-223637843',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-223637843',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-223637843',id=46,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98be6c59bba64fefb29ee881cc1d3825',ramdisk_id='',reservation_id='r-h2k3kmv9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-857809385',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-857809385-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:01:28Z,user_data=None,user_id='770c2e6a53f04f00b0b44d2f4a0c799b',uuid=c5dc9921-0deb-4a3f-83d2-703f8b5f1f37,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0abd3b45-48d9-4162-8943-889099bb3c13", "address": "fa:16:3e:00:a4:15", "network": {"id": "c4a550d4-9ec5-40fd-abf2-7c1066a244e7", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-634234326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98be6c59bba64fefb29ee881cc1d3825", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0abd3b45-48", "ovs_interfaceid": "0abd3b45-48d9-4162-8943-889099bb3c13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.513 2 DEBUG nova.network.os_vif_util [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Converting VIF {"id": "0abd3b45-48d9-4162-8943-889099bb3c13", "address": "fa:16:3e:00:a4:15", "network": {"id": "c4a550d4-9ec5-40fd-abf2-7c1066a244e7", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-634234326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98be6c59bba64fefb29ee881cc1d3825", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0abd3b45-48", "ovs_interfaceid": "0abd3b45-48d9-4162-8943-889099bb3c13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.519 2 DEBUG nova.network.os_vif_util [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:a4:15,bridge_name='br-int',has_traffic_filtering=True,id=0abd3b45-48d9-4162-8943-889099bb3c13,network=Network(c4a550d4-9ec5-40fd-abf2-7c1066a244e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0abd3b45-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.520 2 DEBUG os_vif [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:a4:15,bridge_name='br-int',has_traffic_filtering=True,id=0abd3b45-48d9-4162-8943-889099bb3c13,network=Network(c4a550d4-9ec5-40fd-abf2-7c1066a244e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0abd3b45-48') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.523 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.524 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.530 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0abd3b45-48, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.531 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0abd3b45-48, col_values=(('external_ids', {'iface-id': '0abd3b45-48d9-4162-8943-889099bb3c13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:a4:15', 'vm-uuid': 'c5dc9921-0deb-4a3f-83d2-703f8b5f1f37'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:32 compute-0 NetworkManager[44885]: <info>  [1760432492.5354] manager: (tap0abd3b45-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.542 2 INFO os_vif [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:a4:15,bridge_name='br-int',has_traffic_filtering=True,id=0abd3b45-48d9-4162-8943-889099bb3c13,network=Network(c4a550d4-9ec5-40fd-abf2-7c1066a244e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0abd3b45-48')
Oct 14 09:01:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.593 2 DEBUG nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.594 2 DEBUG nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.594 2 DEBUG nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] No VIF found with MAC fa:16:3e:00:a4:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.594 2 INFO nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Using config drive
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.612 2 DEBUG nova.storage.rbd_utils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] rbd image c5dc9921-0deb-4a3f-83d2-703f8b5f1f37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:01:32
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'volumes', '.mgr', 'images', 'default.rgw.meta', 'vms']
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.911 2 DEBUG nova.network.neutron [req-410f5d84-51cf-4034-a076-8e0a722c15d6 req-299c83dd-5486-4d63-9367-baa29ab68223 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Updated VIF entry in instance network info cache for port 0abd3b45-48d9-4162-8943-889099bb3c13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.912 2 DEBUG nova.network.neutron [req-410f5d84-51cf-4034-a076-8e0a722c15d6 req-299c83dd-5486-4d63-9367-baa29ab68223 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Updating instance_info_cache with network_info: [{"id": "0abd3b45-48d9-4162-8943-889099bb3c13", "address": "fa:16:3e:00:a4:15", "network": {"id": "c4a550d4-9ec5-40fd-abf2-7c1066a244e7", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-634234326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98be6c59bba64fefb29ee881cc1d3825", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0abd3b45-48", "ovs_interfaceid": "0abd3b45-48d9-4162-8943-889099bb3c13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:01:32 compute-0 nova_compute[259627]: 2025-10-14 09:01:32.928 2 DEBUG oslo_concurrency.lockutils [req-410f5d84-51cf-4034-a076-8e0a722c15d6 req-299c83dd-5486-4d63-9367-baa29ab68223 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c5dc9921-0deb-4a3f-83d2-703f8b5f1f37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:01:32 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.001 2 INFO nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Creating config drive at /var/lib/nova/instances/c5dc9921-0deb-4a3f-83d2-703f8b5f1f37/disk.config
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.009 2 DEBUG oslo_concurrency.processutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5dc9921-0deb-4a3f-83d2-703f8b5f1f37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo25ib8vg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.149 2 DEBUG oslo_concurrency.processutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5dc9921-0deb-4a3f-83d2-703f8b5f1f37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo25ib8vg" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.186 2 DEBUG nova.storage.rbd_utils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] rbd image c5dc9921-0deb-4a3f-83d2-703f8b5f1f37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:33 compute-0 ceph-mon[74249]: pgmap v1329: 305 pgs: 305 active+clean; 270 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 11 MiB/s wr, 466 op/s
Oct 14 09:01:33 compute-0 ceph-mon[74249]: osdmap e170: 3 total, 3 up, 3 in
Oct 14 09:01:33 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3660431528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.192 2 DEBUG oslo_concurrency.processutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5dc9921-0deb-4a3f-83d2-703f8b5f1f37/disk.config c5dc9921-0deb-4a3f-83d2-703f8b5f1f37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.364 2 DEBUG oslo_concurrency.processutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5dc9921-0deb-4a3f-83d2-703f8b5f1f37/disk.config c5dc9921-0deb-4a3f-83d2-703f8b5f1f37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.364 2 INFO nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Deleting local config drive /var/lib/nova/instances/c5dc9921-0deb-4a3f-83d2-703f8b5f1f37/disk.config because it was imported into RBD.
Oct 14 09:01:33 compute-0 kernel: tap0abd3b45-48: entered promiscuous mode
Oct 14 09:01:33 compute-0 NetworkManager[44885]: <info>  [1760432493.4124] manager: (tap0abd3b45-48): new Tun device (/org/freedesktop/NetworkManager/Devices/178)
Oct 14 09:01:33 compute-0 ovn_controller[152662]: 2025-10-14T09:01:33Z|00397|binding|INFO|Claiming lport 0abd3b45-48d9-4162-8943-889099bb3c13 for this chassis.
Oct 14 09:01:33 compute-0 ovn_controller[152662]: 2025-10-14T09:01:33Z|00398|binding|INFO|0abd3b45-48d9-4162-8943-889099bb3c13: Claiming fa:16:3e:00:a4:15 10.100.0.13
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.435 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:a4:15 10.100.0.13'], port_security=['fa:16:3e:00:a4:15 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c5dc9921-0deb-4a3f-83d2-703f8b5f1f37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4a550d4-9ec5-40fd-abf2-7c1066a244e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98be6c59bba64fefb29ee881cc1d3825', 'neutron:revision_number': '2', 'neutron:security_group_ids': '32ca5bd3-fb0c-497e-84c2-b7b58a36d3df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162871a5-7922-47e4-a28d-0dc7e6daf9df, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0abd3b45-48d9-4162-8943-889099bb3c13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.437 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0abd3b45-48d9-4162-8943-889099bb3c13 in datapath c4a550d4-9ec5-40fd-abf2-7c1066a244e7 bound to our chassis
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.438 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4a550d4-9ec5-40fd-abf2-7c1066a244e7
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.453 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[20b22453-6500-418f-aa9c-93d02444d184]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.455 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc4a550d4-91 in ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.460 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc4a550d4-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.460 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ce54748e-80eb-48c9-ab61-7425d2a16b2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.462 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[969f2f80-6e23-41b7-adbc-a59eee21d471]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:33 compute-0 ovn_controller[152662]: 2025-10-14T09:01:33Z|00399|binding|INFO|Setting lport 0abd3b45-48d9-4162-8943-889099bb3c13 ovn-installed in OVS
Oct 14 09:01:33 compute-0 ovn_controller[152662]: 2025-10-14T09:01:33Z|00400|binding|INFO|Setting lport 0abd3b45-48d9-4162-8943-889099bb3c13 up in Southbound
Oct 14 09:01:33 compute-0 systemd-udevd[309200]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.480 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[0cdf7f6f-4647-4cc1-83d3-f0f562a2bd3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:33 compute-0 NetworkManager[44885]: <info>  [1760432493.4862] device (tap0abd3b45-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:01:33 compute-0 NetworkManager[44885]: <info>  [1760432493.4875] device (tap0abd3b45-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:33 compute-0 systemd-machined[214636]: New machine qemu-54-instance-0000002e.
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.517 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fc515662-79fa-4db6-aa2b-0eccb82cb662]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:33 compute-0 systemd[1]: Started Virtual Machine qemu-54-instance-0000002e.
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.550 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[eb523d88-d0b3-4d8e-a24c-04ac9ac66beb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.556 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[31b6a0d0-df34-4427-a92b-b887c5c7cae6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:33 compute-0 NetworkManager[44885]: <info>  [1760432493.5576] manager: (tapc4a550d4-90): new Veth device (/org/freedesktop/NetworkManager/Devices/179)
Oct 14 09:01:33 compute-0 systemd-udevd[309205]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.571 2 INFO nova.virt.libvirt.driver [None req-0de3b6a8-b70c-42a1-bf1e-e0e7d2ca1eef fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Snapshot image upload complete
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.573 2 INFO nova.compute.manager [None req-0de3b6a8-b70c-42a1-bf1e-e0e7d2ca1eef fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Took 4.90 seconds to snapshot the instance on the hypervisor.
Oct 14 09:01:33 compute-0 podman[309184]: 2025-10-14 09:01:33.591771209 +0000 UTC m=+0.145478268 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 09:01:33 compute-0 podman[309183]: 2025-10-14 09:01:33.60196449 +0000 UTC m=+0.151624869 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.606 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[37f698f8-f7af-4a21-b222-302fd7171bff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.608 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a06e80a7-19b8-486c-b272-f309cb819d89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 270 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 9.2 MiB/s rd, 8.1 MiB/s wr, 488 op/s
Oct 14 09:01:33 compute-0 NetworkManager[44885]: <info>  [1760432493.6429] device (tapc4a550d4-90): carrier: link connected
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.648 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ee30436d-14d8-4ce7-86b9-8bb6f4cb74bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.665 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c6cb1b89-8c20-4e77-b155-e3641909e152]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4a550d4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:23:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 627240, 'reachable_time': 24184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309256, 'error': None, 'target': 'ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.682 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f47ad342-46a6-4d74-8ef1-39cb64ba155c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:23a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 627240, 'tstamp': 627240}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309257, 'error': None, 'target': 'ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.706 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3da868df-9bf7-4713-9903-5c4287b646e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4a550d4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:23:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 627240, 'reachable_time': 24184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309258, 'error': None, 'target': 'ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.740 2 DEBUG nova.compute.manager [req-776fbe1a-f774-4769-b89f-355ccca17435 req-d44fc2b6-07e6-433b-b9c8-1b00201d9e37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Received event network-vif-plugged-0abd3b45-48d9-4162-8943-889099bb3c13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.740 2 DEBUG oslo_concurrency.lockutils [req-776fbe1a-f774-4769-b89f-355ccca17435 req-d44fc2b6-07e6-433b-b9c8-1b00201d9e37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.741 2 DEBUG oslo_concurrency.lockutils [req-776fbe1a-f774-4769-b89f-355ccca17435 req-d44fc2b6-07e6-433b-b9c8-1b00201d9e37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.741 2 DEBUG oslo_concurrency.lockutils [req-776fbe1a-f774-4769-b89f-355ccca17435 req-d44fc2b6-07e6-433b-b9c8-1b00201d9e37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.741 2 DEBUG nova.compute.manager [req-776fbe1a-f774-4769-b89f-355ccca17435 req-d44fc2b6-07e6-433b-b9c8-1b00201d9e37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Processing event network-vif-plugged-0abd3b45-48d9-4162-8943-889099bb3c13 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.758 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7b6c0289-b564-4fc7-bfa4-a1d668a0f594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.784 2 DEBUG oslo_concurrency.lockutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "15778bbd-fbee-44e9-ba12-9884db0e7afb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.784 2 DEBUG oslo_concurrency.lockutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "15778bbd-fbee-44e9-ba12-9884db0e7afb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.813 2 DEBUG nova.compute.manager [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.827 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[26073dc1-6e77-4375-b744-981451cbb8e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.828 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a550d4-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.828 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.829 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4a550d4-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:33 compute-0 kernel: tapc4a550d4-90: entered promiscuous mode
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:33 compute-0 NetworkManager[44885]: <info>  [1760432493.8313] manager: (tapc4a550d4-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/180)
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.833 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4a550d4-90, col_values=(('external_ids', {'iface-id': '3e748acc-2125-4714-a652-d533c2b69dbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:33 compute-0 ovn_controller[152662]: 2025-10-14T09:01:33Z|00401|binding|INFO|Releasing lport 3e748acc-2125-4714-a652-d533c2b69dbe from this chassis (sb_readonly=0)
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.863 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c4a550d4-9ec5-40fd-abf2-7c1066a244e7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c4a550d4-9ec5-40fd-abf2-7c1066a244e7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.864 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9247090b-fca7-4dc2-9789-97af5b545198]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.864 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-c4a550d4-9ec5-40fd-abf2-7c1066a244e7
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/c4a550d4-9ec5-40fd-abf2-7c1066a244e7.pid.haproxy
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID c4a550d4-9ec5-40fd-abf2-7c1066a244e7
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:33.865 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7', 'env', 'PROCESS_TAG=haproxy-c4a550d4-9ec5-40fd-abf2-7c1066a244e7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c4a550d4-9ec5-40fd-abf2-7c1066a244e7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.892 2 DEBUG oslo_concurrency.lockutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.892 2 DEBUG oslo_concurrency.lockutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.907 2 DEBUG nova.virt.hardware [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:01:33 compute-0 nova_compute[259627]: 2025-10-14 09:01:33.908 2 INFO nova.compute.claims [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.093 2 DEBUG oslo_concurrency.processutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.198 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:01:34 compute-0 podman[309291]: 2025-10-14 09:01:34.293522112 +0000 UTC m=+0.069055679 container create 8348b5d56242af3e1df2cfca2bc7db604bd08332ef9727a3677cbfc6f90b55f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:01:34 compute-0 systemd[1]: Started libpod-conmon-8348b5d56242af3e1df2cfca2bc7db604bd08332ef9727a3677cbfc6f90b55f9.scope.
Oct 14 09:01:34 compute-0 podman[309291]: 2025-10-14 09:01:34.262901179 +0000 UTC m=+0.038434756 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:01:34 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd0e12555e6ab20a155174cd52aa306b7eb55a1432f2d3d85ccd6696d20b670/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:01:34 compute-0 podman[309291]: 2025-10-14 09:01:34.394803112 +0000 UTC m=+0.170336679 container init 8348b5d56242af3e1df2cfca2bc7db604bd08332ef9727a3677cbfc6f90b55f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:01:34 compute-0 podman[309291]: 2025-10-14 09:01:34.400684587 +0000 UTC m=+0.176218134 container start 8348b5d56242af3e1df2cfca2bc7db604bd08332ef9727a3677cbfc6f90b55f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 09:01:34 compute-0 neutron-haproxy-ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7[309324]: [NOTICE]   (309328) : New worker (309330) forked
Oct 14 09:01:34 compute-0 neutron-haproxy-ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7[309324]: [NOTICE]   (309328) : Loading success.
Oct 14 09:01:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:01:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1182785725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.533 2 DEBUG oslo_concurrency.processutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.538 2 DEBUG nova.compute.provider_tree [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.556 2 DEBUG nova.scheduler.client.report [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.574 2 DEBUG oslo_concurrency.lockutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.575 2 DEBUG nova.compute.manager [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.627 2 DEBUG nova.compute.manager [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.627 2 DEBUG nova.network.neutron [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.648 2 INFO nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.662 2 DEBUG nova.compute.manager [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.746 2 DEBUG nova.compute.manager [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.747 2 DEBUG nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.748 2 INFO nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Creating image(s)
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.770 2 DEBUG nova.storage.rbd_utils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] rbd image 15778bbd-fbee-44e9-ba12-9884db0e7afb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.796 2 DEBUG nova.storage.rbd_utils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] rbd image 15778bbd-fbee-44e9-ba12-9884db0e7afb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.823 2 DEBUG nova.storage.rbd_utils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] rbd image 15778bbd-fbee-44e9-ba12-9884db0e7afb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.827 2 DEBUG oslo_concurrency.processutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.923 2 DEBUG oslo_concurrency.processutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.924 2 DEBUG oslo_concurrency.lockutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.925 2 DEBUG oslo_concurrency.lockutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.925 2 DEBUG oslo_concurrency.lockutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.947 2 DEBUG nova.storage.rbd_utils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] rbd image 15778bbd-fbee-44e9-ba12-9884db0e7afb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:34 compute-0 nova_compute[259627]: 2025-10-14 09:01:34.951 2 DEBUG oslo_concurrency.processutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 15778bbd-fbee-44e9-ba12-9884db0e7afb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.008 2 DEBUG nova.compute.manager [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.009 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432495.008, c5dc9921-0deb-4a3f-83d2-703f8b5f1f37 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.010 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] VM Started (Lifecycle Event)
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.013 2 DEBUG nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.015 2 DEBUG nova.network.neutron [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.016 2 DEBUG nova.compute.manager [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.021 2 INFO nova.virt.libvirt.driver [-] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Instance spawned successfully.
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.022 2 DEBUG nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.034 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.042 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.047 2 DEBUG nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.047 2 DEBUG nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.048 2 DEBUG nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.049 2 DEBUG nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.049 2 DEBUG nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.049 2 DEBUG nova.virt.libvirt.driver [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.089 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.090 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432495.0094025, c5dc9921-0deb-4a3f-83d2-703f8b5f1f37 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.090 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] VM Paused (Lifecycle Event)
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.138 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.149 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432495.0127113, c5dc9921-0deb-4a3f-83d2-703f8b5f1f37 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.150 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] VM Resumed (Lifecycle Event)
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.161 2 INFO nova.compute.manager [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Took 6.32 seconds to spawn the instance on the hypervisor.
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.161 2 DEBUG nova.compute.manager [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.179 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.182 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:01:35 compute-0 ceph-mon[74249]: pgmap v1331: 305 pgs: 305 active+clean; 270 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 9.2 MiB/s rd, 8.1 MiB/s wr, 488 op/s
Oct 14 09:01:35 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1182785725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.204 2 DEBUG oslo_concurrency.processutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 15778bbd-fbee-44e9-ba12-9884db0e7afb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.231 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.265 2 INFO nova.compute.manager [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Took 7.31 seconds to build instance.
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.271 2 DEBUG nova.storage.rbd_utils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] resizing rbd image 15778bbd-fbee-44e9-ba12-9884db0e7afb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.300 2 DEBUG oslo_concurrency.lockutils [None req-28292847-40aa-4c40-94df-cc584dac9ee9 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.366 2 DEBUG nova.objects.instance [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lazy-loading 'migration_context' on Instance uuid 15778bbd-fbee-44e9-ba12-9884db0e7afb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.380 2 DEBUG nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.380 2 DEBUG nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Ensure instance console log exists: /var/lib/nova/instances/15778bbd-fbee-44e9-ba12-9884db0e7afb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.380 2 DEBUG oslo_concurrency.lockutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.381 2 DEBUG oslo_concurrency.lockutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.381 2 DEBUG oslo_concurrency.lockutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.383 2 DEBUG nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.386 2 WARNING nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.392 2 DEBUG nova.virt.libvirt.host [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.393 2 DEBUG nova.virt.libvirt.host [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.396 2 DEBUG nova.virt.libvirt.host [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.396 2 DEBUG nova.virt.libvirt.host [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.397 2 DEBUG nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.397 2 DEBUG nova.virt.hardware [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.397 2 DEBUG nova.virt.hardware [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.397 2 DEBUG nova.virt.hardware [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.398 2 DEBUG nova.virt.hardware [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.398 2 DEBUG nova.virt.hardware [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.398 2 DEBUG nova.virt.hardware [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.398 2 DEBUG nova.virt.hardware [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.399 2 DEBUG nova.virt.hardware [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.399 2 DEBUG nova.virt.hardware [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.399 2 DEBUG nova.virt.hardware [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.399 2 DEBUG nova.virt.hardware [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.402 2 DEBUG oslo_concurrency.processutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 344 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 14 MiB/s wr, 628 op/s
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.772 2 DEBUG oslo_concurrency.lockutils [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.773 2 DEBUG oslo_concurrency.lockutils [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.773 2 INFO nova.compute.manager [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Rebooting instance
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.790 2 DEBUG oslo_concurrency.lockutils [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.790 2 DEBUG oslo_concurrency.lockutils [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquired lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.790 2 DEBUG nova.network.neutron [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.928 2 DEBUG nova.compute.manager [None req-532f3301-6a7c-4b14-97c1-5fd0613e7286 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:01:35 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2344808481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.974 2 DEBUG oslo_concurrency.processutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:35 compute-0 nova_compute[259627]: 2025-10-14 09:01:35.996 2 DEBUG nova.storage.rbd_utils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] rbd image 15778bbd-fbee-44e9-ba12-9884db0e7afb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.001 2 DEBUG oslo_concurrency.processutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.040 2 INFO nova.compute.manager [None req-532f3301-6a7c-4b14-97c1-5fd0613e7286 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] instance snapshotting
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.045 2 DEBUG nova.compute.manager [req-8001fde5-1e73-4e3d-935c-e6da4cc5cfc1 req-0d0817a4-b00a-4f13-94e6-16f0b15a9d8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Received event network-vif-plugged-0abd3b45-48d9-4162-8943-889099bb3c13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.046 2 DEBUG oslo_concurrency.lockutils [req-8001fde5-1e73-4e3d-935c-e6da4cc5cfc1 req-0d0817a4-b00a-4f13-94e6-16f0b15a9d8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.046 2 DEBUG oslo_concurrency.lockutils [req-8001fde5-1e73-4e3d-935c-e6da4cc5cfc1 req-0d0817a4-b00a-4f13-94e6-16f0b15a9d8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.047 2 DEBUG oslo_concurrency.lockutils [req-8001fde5-1e73-4e3d-935c-e6da4cc5cfc1 req-0d0817a4-b00a-4f13-94e6-16f0b15a9d8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.047 2 DEBUG nova.compute.manager [req-8001fde5-1e73-4e3d-935c-e6da4cc5cfc1 req-0d0817a4-b00a-4f13-94e6-16f0b15a9d8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] No waiting events found dispatching network-vif-plugged-0abd3b45-48d9-4162-8943-889099bb3c13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.048 2 WARNING nova.compute.manager [req-8001fde5-1e73-4e3d-935c-e6da4cc5cfc1 req-0d0817a4-b00a-4f13-94e6-16f0b15a9d8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Received unexpected event network-vif-plugged-0abd3b45-48d9-4162-8943-889099bb3c13 for instance with vm_state active and task_state None.
Oct 14 09:01:36 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2344808481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.265 2 INFO nova.virt.libvirt.driver [None req-532f3301-6a7c-4b14-97c1-5fd0613e7286 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Beginning live snapshot process
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.413 2 DEBUG nova.virt.libvirt.imagebackend [None req-532f3301-6a7c-4b14-97c1-5fd0613e7286 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 09:01:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:01:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/245091775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.484 2 DEBUG oslo_concurrency.processutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.485 2 DEBUG nova.objects.instance [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lazy-loading 'pci_devices' on Instance uuid 15778bbd-fbee-44e9-ba12-9884db0e7afb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.500 2 DEBUG nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:01:36 compute-0 nova_compute[259627]:   <uuid>15778bbd-fbee-44e9-ba12-9884db0e7afb</uuid>
Oct 14 09:01:36 compute-0 nova_compute[259627]:   <name>instance-0000002f</name>
Oct 14 09:01:36 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:01:36 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:01:36 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-435190778</nova:name>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:01:35</nova:creationTime>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:01:36 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:01:36 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:01:36 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:01:36 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:01:36 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:01:36 compute-0 nova_compute[259627]:         <nova:user uuid="a8208d81c99c41668cc80998cc83bf02">tempest-ServersAdminNegativeTestJSON-114279920-project-member</nova:user>
Oct 14 09:01:36 compute-0 nova_compute[259627]:         <nova:project uuid="f0dbf2d79ae5410d965cc3670bdd26ba">tempest-ServersAdminNegativeTestJSON-114279920</nova:project>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:01:36 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:01:36 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <system>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <entry name="serial">15778bbd-fbee-44e9-ba12-9884db0e7afb</entry>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <entry name="uuid">15778bbd-fbee-44e9-ba12-9884db0e7afb</entry>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     </system>
Oct 14 09:01:36 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:01:36 compute-0 nova_compute[259627]:   <os>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:   </os>
Oct 14 09:01:36 compute-0 nova_compute[259627]:   <features>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:   </features>
Oct 14 09:01:36 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:01:36 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:01:36 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/15778bbd-fbee-44e9-ba12-9884db0e7afb_disk">
Oct 14 09:01:36 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       </source>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:01:36 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/15778bbd-fbee-44e9-ba12-9884db0e7afb_disk.config">
Oct 14 09:01:36 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       </source>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:01:36 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/15778bbd-fbee-44e9-ba12-9884db0e7afb/console.log" append="off"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <video>
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     </video>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:01:36 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:01:36 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:01:36 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:01:36 compute-0 nova_compute[259627]: </domain>
Oct 14 09:01:36 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.556 2 DEBUG nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.557 2 DEBUG nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.558 2 INFO nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Using config drive
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.578 2 DEBUG nova.storage.rbd_utils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] rbd image 15778bbd-fbee-44e9-ba12-9884db0e7afb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.663 2 DEBUG nova.storage.rbd_utils [None req-532f3301-6a7c-4b14-97c1-5fd0613e7286 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] creating snapshot(931ded9dd6394571ab3a756fc4b8d88b) on rbd image(6a03ef41-3cc5-48d2-8796-369687ac6a10_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.961 2 INFO nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Creating config drive at /var/lib/nova/instances/15778bbd-fbee-44e9-ba12-9884db0e7afb/disk.config
Oct 14 09:01:36 compute-0 nova_compute[259627]: 2025-10-14 09:01:36.970 2 DEBUG oslo_concurrency.processutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/15778bbd-fbee-44e9-ba12-9884db0e7afb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3txjo3lj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:37 compute-0 nova_compute[259627]: 2025-10-14 09:01:37.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:37 compute-0 nova_compute[259627]: 2025-10-14 09:01:37.118 2 DEBUG oslo_concurrency.processutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/15778bbd-fbee-44e9-ba12-9884db0e7afb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3txjo3lj" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:37 compute-0 nova_compute[259627]: 2025-10-14 09:01:37.145 2 DEBUG nova.storage.rbd_utils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] rbd image 15778bbd-fbee-44e9-ba12-9884db0e7afb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:37 compute-0 nova_compute[259627]: 2025-10-14 09:01:37.149 2 DEBUG oslo_concurrency.processutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/15778bbd-fbee-44e9-ba12-9884db0e7afb/disk.config 15778bbd-fbee-44e9-ba12-9884db0e7afb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Oct 14 09:01:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Oct 14 09:01:37 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Oct 14 09:01:37 compute-0 ceph-mon[74249]: pgmap v1332: 305 pgs: 305 active+clean; 344 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 14 MiB/s wr, 628 op/s
Oct 14 09:01:37 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/245091775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:37 compute-0 nova_compute[259627]: 2025-10-14 09:01:37.282 2 DEBUG nova.storage.rbd_utils [None req-532f3301-6a7c-4b14-97c1-5fd0613e7286 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] cloning vms/6a03ef41-3cc5-48d2-8796-369687ac6a10_disk@931ded9dd6394571ab3a756fc4b8d88b to images/01d92147-a4cb-4451-9af2-7e425ac97593 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:01:37 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 14 09:01:37 compute-0 nova_compute[259627]: 2025-10-14 09:01:37.394 2 DEBUG oslo_concurrency.processutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/15778bbd-fbee-44e9-ba12-9884db0e7afb/disk.config 15778bbd-fbee-44e9-ba12-9884db0e7afb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:37 compute-0 nova_compute[259627]: 2025-10-14 09:01:37.395 2 INFO nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Deleting local config drive /var/lib/nova/instances/15778bbd-fbee-44e9-ba12-9884db0e7afb/disk.config because it was imported into RBD.
Oct 14 09:01:37 compute-0 systemd-machined[214636]: New machine qemu-55-instance-0000002f.
Oct 14 09:01:37 compute-0 systemd[1]: Started Virtual Machine qemu-55-instance-0000002f.
Oct 14 09:01:37 compute-0 nova_compute[259627]: 2025-10-14 09:01:37.493 2 DEBUG nova.storage.rbd_utils [None req-532f3301-6a7c-4b14-97c1-5fd0613e7286 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] flattening images/01d92147-a4cb-4451-9af2-7e425ac97593 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:01:37 compute-0 nova_compute[259627]: 2025-10-14 09:01:37.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:01:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Oct 14 09:01:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Oct 14 09:01:37 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Oct 14 09:01:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 344 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 139 op/s
Oct 14 09:01:37 compute-0 nova_compute[259627]: 2025-10-14 09:01:37.783 2 DEBUG nova.storage.rbd_utils [None req-532f3301-6a7c-4b14-97c1-5fd0613e7286 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] removing snapshot(931ded9dd6394571ab3a756fc4b8d88b) on rbd image(6a03ef41-3cc5-48d2-8796-369687ac6a10_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:01:38 compute-0 ceph-mon[74249]: osdmap e171: 3 total, 3 up, 3 in
Oct 14 09:01:38 compute-0 ceph-mon[74249]: osdmap e172: 3 total, 3 up, 3 in
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.497 2 DEBUG nova.compute.manager [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.500 2 DEBUG nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.501 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432498.4974265, 15778bbd-fbee-44e9-ba12-9884db0e7afb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.501 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] VM Resumed (Lifecycle Event)
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.510 2 INFO nova.virt.libvirt.driver [-] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Instance spawned successfully.
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.511 2 DEBUG nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.526 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.528 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.537 2 DEBUG nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.538 2 DEBUG nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.539 2 DEBUG nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.539 2 DEBUG nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.540 2 DEBUG nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.540 2 DEBUG nova.virt.libvirt.driver [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.545 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.546 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432498.4984787, 15778bbd-fbee-44e9-ba12-9884db0e7afb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.546 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] VM Started (Lifecycle Event)
Oct 14 09:01:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.571 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.574 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:01:38 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.599 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.608 2 DEBUG nova.storage.rbd_utils [None req-532f3301-6a7c-4b14-97c1-5fd0613e7286 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] creating snapshot(snap) on rbd image(01d92147-a4cb-4451-9af2-7e425ac97593) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.646 2 INFO nova.compute.manager [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Took 3.90 seconds to spawn the instance on the hypervisor.
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.647 2 DEBUG nova.compute.manager [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.705 2 INFO nova.compute.manager [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Took 4.84 seconds to build instance.
Oct 14 09:01:38 compute-0 nova_compute[259627]: 2025-10-14 09:01:38.723 2 DEBUG oslo_concurrency.lockutils [None req-aeeef32f-3fc5-40d6-a7c6-aab5b0eaa9c5 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "15778bbd-fbee-44e9-ba12-9884db0e7afb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:39 compute-0 ceph-mon[74249]: pgmap v1335: 305 pgs: 305 active+clean; 344 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 139 op/s
Oct 14 09:01:39 compute-0 ceph-mon[74249]: osdmap e173: 3 total, 3 up, 3 in
Oct 14 09:01:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Oct 14 09:01:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Oct 14 09:01:39 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.583 2 DEBUG oslo_concurrency.lockutils [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Acquiring lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.584 2 DEBUG oslo_concurrency.lockutils [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.585 2 DEBUG oslo_concurrency.lockutils [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Acquiring lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.586 2 DEBUG oslo_concurrency.lockutils [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.586 2 DEBUG oslo_concurrency.lockutils [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.588 2 INFO nova.compute.manager [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Terminating instance
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.590 2 DEBUG nova.compute.manager [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.620 2 DEBUG nova.network.neutron [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:01:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 344 MiB data, 545 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:01:39 compute-0 kernel: tap0abd3b45-48 (unregistering): left promiscuous mode
Oct 14 09:01:39 compute-0 NetworkManager[44885]: <info>  [1760432499.6380] device (tap0abd3b45-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.644 2 DEBUG oslo_concurrency.lockutils [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Releasing lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.645 2 DEBUG nova.compute.manager [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:39 compute-0 ovn_controller[152662]: 2025-10-14T09:01:39Z|00402|binding|INFO|Releasing lport 0abd3b45-48d9-4162-8943-889099bb3c13 from this chassis (sb_readonly=0)
Oct 14 09:01:39 compute-0 ovn_controller[152662]: 2025-10-14T09:01:39Z|00403|binding|INFO|Setting lport 0abd3b45-48d9-4162-8943-889099bb3c13 down in Southbound
Oct 14 09:01:39 compute-0 ovn_controller[152662]: 2025-10-14T09:01:39Z|00404|binding|INFO|Removing iface tap0abd3b45-48 ovn-installed in OVS
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:39.654 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:a4:15 10.100.0.13'], port_security=['fa:16:3e:00:a4:15 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c5dc9921-0deb-4a3f-83d2-703f8b5f1f37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4a550d4-9ec5-40fd-abf2-7c1066a244e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98be6c59bba64fefb29ee881cc1d3825', 'neutron:revision_number': '4', 'neutron:security_group_ids': '32ca5bd3-fb0c-497e-84c2-b7b58a36d3df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162871a5-7922-47e4-a28d-0dc7e6daf9df, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0abd3b45-48d9-4162-8943-889099bb3c13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:01:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:39.655 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0abd3b45-48d9-4162-8943-889099bb3c13 in datapath c4a550d4-9ec5-40fd-abf2-7c1066a244e7 unbound from our chassis
Oct 14 09:01:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:39.656 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c4a550d4-9ec5-40fd-abf2-7c1066a244e7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:01:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:39.657 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2d212d3c-4aa6-468b-b7da-b5dd5cc6394e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:39.658 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7 namespace which is not needed anymore
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:39 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Oct 14 09:01:39 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000002e.scope: Consumed 5.961s CPU time.
Oct 14 09:01:39 compute-0 systemd-machined[214636]: Machine qemu-54-instance-0000002e terminated.
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:39 compute-0 neutron-haproxy-ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7[309324]: [NOTICE]   (309328) : haproxy version is 2.8.14-c23fe91
Oct 14 09:01:39 compute-0 neutron-haproxy-ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7[309324]: [NOTICE]   (309328) : path to executable is /usr/sbin/haproxy
Oct 14 09:01:39 compute-0 neutron-haproxy-ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7[309324]: [WARNING]  (309328) : Exiting Master process...
Oct 14 09:01:39 compute-0 neutron-haproxy-ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7[309324]: [WARNING]  (309328) : Exiting Master process...
Oct 14 09:01:39 compute-0 neutron-haproxy-ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7[309324]: [ALERT]    (309328) : Current worker (309330) exited with code 143 (Terminated)
Oct 14 09:01:39 compute-0 neutron-haproxy-ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7[309324]: [WARNING]  (309328) : All workers exited. Exiting... (0)
Oct 14 09:01:39 compute-0 systemd[1]: libpod-8348b5d56242af3e1df2cfca2bc7db604bd08332ef9727a3677cbfc6f90b55f9.scope: Deactivated successfully.
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.850 2 DEBUG nova.compute.manager [req-31f3cfee-a1bf-45f0-83f0-406b6bc3400c req-d23c479e-1ddd-42ad-94d3-de73d32e4444 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Received event network-vif-unplugged-0abd3b45-48d9-4162-8943-889099bb3c13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.851 2 DEBUG oslo_concurrency.lockutils [req-31f3cfee-a1bf-45f0-83f0-406b6bc3400c req-d23c479e-1ddd-42ad-94d3-de73d32e4444 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.851 2 DEBUG oslo_concurrency.lockutils [req-31f3cfee-a1bf-45f0-83f0-406b6bc3400c req-d23c479e-1ddd-42ad-94d3-de73d32e4444 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.851 2 DEBUG oslo_concurrency.lockutils [req-31f3cfee-a1bf-45f0-83f0-406b6bc3400c req-d23c479e-1ddd-42ad-94d3-de73d32e4444 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.852 2 DEBUG nova.compute.manager [req-31f3cfee-a1bf-45f0-83f0-406b6bc3400c req-d23c479e-1ddd-42ad-94d3-de73d32e4444 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] No waiting events found dispatching network-vif-unplugged-0abd3b45-48d9-4162-8943-889099bb3c13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.852 2 DEBUG nova.compute.manager [req-31f3cfee-a1bf-45f0-83f0-406b6bc3400c req-d23c479e-1ddd-42ad-94d3-de73d32e4444 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Received event network-vif-unplugged-0abd3b45-48d9-4162-8943-889099bb3c13 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:01:39 compute-0 podman[309890]: 2025-10-14 09:01:39.854216201 +0000 UTC m=+0.070426833 container died 8348b5d56242af3e1df2cfca2bc7db604bd08332ef9727a3677cbfc6f90b55f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.857 2 INFO nova.virt.libvirt.driver [-] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Instance destroyed successfully.
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.858 2 DEBUG nova.objects.instance [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lazy-loading 'resources' on Instance uuid c5dc9921-0deb-4a3f-83d2-703f8b5f1f37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:39 compute-0 kernel: tap8ec905f0-b7 (unregistering): left promiscuous mode
Oct 14 09:01:39 compute-0 NetworkManager[44885]: <info>  [1760432499.8842] device (tap8ec905f0-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:01:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8348b5d56242af3e1df2cfca2bc7db604bd08332ef9727a3677cbfc6f90b55f9-userdata-shm.mount: Deactivated successfully.
Oct 14 09:01:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fd0e12555e6ab20a155174cd52aa306b7eb55a1432f2d3d85ccd6696d20b670-merged.mount: Deactivated successfully.
Oct 14 09:01:39 compute-0 ovn_controller[152662]: 2025-10-14T09:01:39Z|00405|binding|INFO|Releasing lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 from this chassis (sb_readonly=0)
Oct 14 09:01:39 compute-0 ovn_controller[152662]: 2025-10-14T09:01:39Z|00406|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 down in Southbound
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:39 compute-0 ovn_controller[152662]: 2025-10-14T09:01:39Z|00407|binding|INFO|Removing iface tap8ec905f0-b7 ovn-installed in OVS
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.898 2 DEBUG nova.virt.libvirt.vif [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-223637843',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-223637843',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-223637843',id=46,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98be6c59bba64fefb29ee881cc1d3825',ramdisk_id='',reservation_id='r-h2k3kmv9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-857809385',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-857809385-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:01:35Z,user_data=None,user_id='770c2e6a53f04f00b0b44d2f4a0c799b',uuid=c5dc9921-0deb-4a3f-83d2-703f8b5f1f37,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0abd3b45-48d9-4162-8943-889099bb3c13", "address": "fa:16:3e:00:a4:15", "network": {"id": "c4a550d4-9ec5-40fd-abf2-7c1066a244e7", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-634234326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98be6c59bba64fefb29ee881cc1d3825", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0abd3b45-48", "ovs_interfaceid": "0abd3b45-48d9-4162-8943-889099bb3c13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.899 2 DEBUG nova.network.os_vif_util [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Converting VIF {"id": "0abd3b45-48d9-4162-8943-889099bb3c13", "address": "fa:16:3e:00:a4:15", "network": {"id": "c4a550d4-9ec5-40fd-abf2-7c1066a244e7", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-634234326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98be6c59bba64fefb29ee881cc1d3825", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0abd3b45-48", "ovs_interfaceid": "0abd3b45-48d9-4162-8943-889099bb3c13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.899 2 DEBUG nova.network.os_vif_util [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:a4:15,bridge_name='br-int',has_traffic_filtering=True,id=0abd3b45-48d9-4162-8943-889099bb3c13,network=Network(c4a550d4-9ec5-40fd-abf2-7c1066a244e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0abd3b45-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.899 2 DEBUG os_vif [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:a4:15,bridge_name='br-int',has_traffic_filtering=True,id=0abd3b45-48d9-4162-8943-889099bb3c13,network=Network(c4a550d4-9ec5-40fd-abf2-7c1066a244e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0abd3b45-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:01:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:39.901 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0abd3b45-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:39 compute-0 podman[309890]: 2025-10-14 09:01:39.905955213 +0000 UTC m=+0.122165855 container cleanup 8348b5d56242af3e1df2cfca2bc7db604bd08332ef9727a3677cbfc6f90b55f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:39 compute-0 systemd[1]: libpod-conmon-8348b5d56242af3e1df2cfca2bc7db604bd08332ef9727a3677cbfc6f90b55f9.scope: Deactivated successfully.
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.933 2 INFO os_vif [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:a4:15,bridge_name='br-int',has_traffic_filtering=True,id=0abd3b45-48d9-4162-8943-889099bb3c13,network=Network(c4a550d4-9ec5-40fd-abf2-7c1066a244e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0abd3b45-48')
Oct 14 09:01:39 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 14 09:01:39 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002b.scope: Consumed 13.157s CPU time.
Oct 14 09:01:39 compute-0 systemd-machined[214636]: Machine qemu-51-instance-0000002b terminated.
Oct 14 09:01:39 compute-0 podman[309930]: 2025-10-14 09:01:39.969245619 +0000 UTC m=+0.043003108 container remove 8348b5d56242af3e1df2cfca2bc7db604bd08332ef9727a3677cbfc6f90b55f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 09:01:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:39.969 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:01:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:39.976 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8b241266-1e34-4d9b-bd73-70469226d299]: (4, ('Tue Oct 14 09:01:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7 (8348b5d56242af3e1df2cfca2bc7db604bd08332ef9727a3677cbfc6f90b55f9)\n8348b5d56242af3e1df2cfca2bc7db604bd08332ef9727a3677cbfc6f90b55f9\nTue Oct 14 09:01:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7 (8348b5d56242af3e1df2cfca2bc7db604bd08332ef9727a3677cbfc6f90b55f9)\n8348b5d56242af3e1df2cfca2bc7db604bd08332ef9727a3677cbfc6f90b55f9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:39.979 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e23e4b0c-fd87-4597-999a-1f83dabda926]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:39.982 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a550d4-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:39 compute-0 kernel: tapc4a550d4-90: left promiscuous mode
Oct 14 09:01:39 compute-0 NetworkManager[44885]: <info>  [1760432499.9905] manager: (tap8ec905f0-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/181)
Oct 14 09:01:39 compute-0 nova_compute[259627]: 2025-10-14 09:01:39.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.012 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8accca30-3850-41f1-baf5-c012b07782ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.018 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance destroyed successfully.
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.018 2 DEBUG nova.objects.instance [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'resources' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.040 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eae7d9b9-71d5-46a8-b10a-8b8ea416a3d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.041 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2e83b3f4-29ab-46a2-9865-278c9032d69c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.042 2 DEBUG nova.virt.libvirt.vif [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:01:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.042 2 DEBUG nova.network.os_vif_util [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.043 2 DEBUG nova.network.os_vif_util [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.043 2 DEBUG os_vif [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.045 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ec905f0-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.049 2 INFO os_vif [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.055 2 DEBUG nova.virt.libvirt.driver [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Start _get_guest_xml network_info=[{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.056 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c3babd0f-d8b6-49c4-adb5-0d56deee24e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 627230, 'reachable_time': 15886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309978, 'error': None, 'target': 'ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:40 compute-0 systemd[1]: run-netns-ovnmeta\x2dc4a550d4\x2d9ec5\x2d40fd\x2dabf2\x2d7c1066a244e7.mount: Deactivated successfully.
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.061 2 WARNING nova.virt.libvirt.driver [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.058 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c4a550d4-9ec5-40fd-abf2-7c1066a244e7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.058 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[980a9401-3099-4724-84d4-665e4b5b873f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.061 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 unbound from our chassis
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.062 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c6eb8a4-6604-462a-8730-43f3742053a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.062 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f238501e-ae7f-45c2-a7c3-4503730d9719]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.063 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace which is not needed anymore
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.065 2 DEBUG nova.virt.libvirt.host [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.066 2 DEBUG nova.virt.libvirt.host [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.070 2 DEBUG nova.virt.libvirt.host [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.070 2 DEBUG nova.virt.libvirt.host [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.071 2 DEBUG nova.virt.libvirt.driver [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.071 2 DEBUG nova.virt.hardware [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.071 2 DEBUG nova.virt.hardware [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.071 2 DEBUG nova.virt.hardware [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.072 2 DEBUG nova.virt.hardware [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.072 2 DEBUG nova.virt.hardware [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.072 2 DEBUG nova.virt.hardware [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.073 2 DEBUG nova.virt.hardware [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.073 2 DEBUG nova.virt.hardware [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.073 2 DEBUG nova.virt.hardware [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.073 2 DEBUG nova.virt.hardware [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.073 2 DEBUG nova.virt.hardware [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.074 2 DEBUG nova.objects.instance [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'vcpu_model' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.094 2 DEBUG oslo_concurrency.processutils [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.144 2 DEBUG nova.compute.manager [req-b6246234-2db1-4833-8804-8a9e89cebc28 req-dad1d86d-2a28-4e81-b6be-458aec3af838 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.144 2 DEBUG oslo_concurrency.lockutils [req-b6246234-2db1-4833-8804-8a9e89cebc28 req-dad1d86d-2a28-4e81-b6be-458aec3af838 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.145 2 DEBUG oslo_concurrency.lockutils [req-b6246234-2db1-4833-8804-8a9e89cebc28 req-dad1d86d-2a28-4e81-b6be-458aec3af838 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.145 2 DEBUG oslo_concurrency.lockutils [req-b6246234-2db1-4833-8804-8a9e89cebc28 req-dad1d86d-2a28-4e81-b6be-458aec3af838 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.145 2 DEBUG nova.compute.manager [req-b6246234-2db1-4833-8804-8a9e89cebc28 req-dad1d86d-2a28-4e81-b6be-458aec3af838 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.145 2 WARNING nova.compute.manager [req-b6246234-2db1-4833-8804-8a9e89cebc28 req-dad1d86d-2a28-4e81-b6be-458aec3af838 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state reboot_started_hard.
Oct 14 09:01:40 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[306936]: [NOTICE]   (306940) : haproxy version is 2.8.14-c23fe91
Oct 14 09:01:40 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[306936]: [NOTICE]   (306940) : path to executable is /usr/sbin/haproxy
Oct 14 09:01:40 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[306936]: [WARNING]  (306940) : Exiting Master process...
Oct 14 09:01:40 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[306936]: [WARNING]  (306940) : Exiting Master process...
Oct 14 09:01:40 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[306936]: [ALERT]    (306940) : Current worker (306942) exited with code 143 (Terminated)
Oct 14 09:01:40 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[306936]: [WARNING]  (306940) : All workers exited. Exiting... (0)
Oct 14 09:01:40 compute-0 systemd[1]: libpod-14d9dfdc02a86ded9dfc6430111818c7d0785861fa1998b94f4eeaa7ffb2b197.scope: Deactivated successfully.
Oct 14 09:01:40 compute-0 podman[309996]: 2025-10-14 09:01:40.231711492 +0000 UTC m=+0.051726122 container died 14d9dfdc02a86ded9dfc6430111818c7d0785861fa1998b94f4eeaa7ffb2b197 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:01:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14d9dfdc02a86ded9dfc6430111818c7d0785861fa1998b94f4eeaa7ffb2b197-userdata-shm.mount: Deactivated successfully.
Oct 14 09:01:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2b4ff029b0854d60f3170a2a1c6ee7d945a8747cb7bdaf2292da8824edbda19-merged.mount: Deactivated successfully.
Oct 14 09:01:40 compute-0 podman[309996]: 2025-10-14 09:01:40.283503056 +0000 UTC m=+0.103517686 container cleanup 14d9dfdc02a86ded9dfc6430111818c7d0785861fa1998b94f4eeaa7ffb2b197 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 09:01:40 compute-0 systemd[1]: libpod-conmon-14d9dfdc02a86ded9dfc6430111818c7d0785861fa1998b94f4eeaa7ffb2b197.scope: Deactivated successfully.
Oct 14 09:01:40 compute-0 podman[310045]: 2025-10-14 09:01:40.369125391 +0000 UTC m=+0.049297833 container remove 14d9dfdc02a86ded9dfc6430111818c7d0785861fa1998b94f4eeaa7ffb2b197 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.376 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cceca59a-74c6-4347-9b12-2e41854767ff]: (4, ('Tue Oct 14 09:01:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (14d9dfdc02a86ded9dfc6430111818c7d0785861fa1998b94f4eeaa7ffb2b197)\n14d9dfdc02a86ded9dfc6430111818c7d0785861fa1998b94f4eeaa7ffb2b197\nTue Oct 14 09:01:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (14d9dfdc02a86ded9dfc6430111818c7d0785861fa1998b94f4eeaa7ffb2b197)\n14d9dfdc02a86ded9dfc6430111818c7d0785861fa1998b94f4eeaa7ffb2b197\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.378 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2f9b2b1f-462c-4003-8ab1-bc424d294bdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.379 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:40 compute-0 kernel: tap7c6eb8a4-60: left promiscuous mode
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.404 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba874a2-ae54-489e-980c-a8ec0bbcda88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.418 2 INFO nova.virt.libvirt.driver [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Deleting instance files /var/lib/nova/instances/c5dc9921-0deb-4a3f-83d2-703f8b5f1f37_del
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.419 2 INFO nova.virt.libvirt.driver [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Deletion of /var/lib/nova/instances/c5dc9921-0deb-4a3f-83d2-703f8b5f1f37_del complete
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.441 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d24569bc-6182-4ef9-9cb9-896ff7331f1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.442 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[73d9ee09-b517-47a1-9f0e-71415f41d6db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.458 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba97a13-12d6-48d5-afe1-a97b3b6274f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625253, 'reachable_time': 35655, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310061, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.459 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.459 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2b3fab-4b9d-4e7f-a453-8a746fc92434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:40.460 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.483 2 INFO nova.compute.manager [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Took 0.89 seconds to destroy the instance on the hypervisor.
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.484 2 DEBUG oslo.service.loopingcall [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.484 2 DEBUG nova.compute.manager [-] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.484 2 DEBUG nova.network.neutron [-] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:01:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:01:40 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/944126965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.570 2 DEBUG oslo_concurrency.processutils [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:40 compute-0 ceph-mon[74249]: osdmap e174: 3 total, 3 up, 3 in
Oct 14 09:01:40 compute-0 ceph-mon[74249]: pgmap v1338: 305 pgs: 305 active+clean; 344 MiB data, 545 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:01:40 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/944126965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:40 compute-0 nova_compute[259627]: 2025-10-14 09:01:40.616 2 DEBUG oslo_concurrency.processutils [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d7c6eb8a4\x2d6604\x2d462a\x2d8730\x2d43f3742053a7.mount: Deactivated successfully.
Oct 14 09:01:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:01:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1837727409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.059 2 DEBUG oslo_concurrency.processutils [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.062 2 DEBUG nova.virt.libvirt.vif [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:01:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.062 2 DEBUG nova.network.os_vif_util [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:01:41 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 09:01:41 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.072 2 DEBUG nova.network.os_vif_util [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.073 2 DEBUG nova.objects.instance [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'pci_devices' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.076 2 DEBUG nova.network.neutron [-] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.092 2 DEBUG nova.virt.libvirt.driver [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:01:41 compute-0 nova_compute[259627]:   <uuid>de383510-2de3-40bd-b479-c0010b3f2d1c</uuid>
Oct 14 09:01:41 compute-0 nova_compute[259627]:   <name>instance-0000002b</name>
Oct 14 09:01:41 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:01:41 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:01:41 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerActionsTestJSON-server-1794713901</nova:name>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:01:40</nova:creationTime>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:01:41 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:01:41 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:01:41 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:01:41 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:01:41 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:01:41 compute-0 nova_compute[259627]:         <nova:user uuid="aa32af91355a41198fd57121e5c70ec2">tempest-ServerActionsTestJSON-1593617559-project-member</nova:user>
Oct 14 09:01:41 compute-0 nova_compute[259627]:         <nova:project uuid="368d762ed02e459d892ad1e5488c2871">tempest-ServerActionsTestJSON-1593617559</nova:project>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:01:41 compute-0 nova_compute[259627]:         <nova:port uuid="8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2">
Oct 14 09:01:41 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:01:41 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:01:41 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <system>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <entry name="serial">de383510-2de3-40bd-b479-c0010b3f2d1c</entry>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <entry name="uuid">de383510-2de3-40bd-b479-c0010b3f2d1c</entry>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     </system>
Oct 14 09:01:41 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:01:41 compute-0 nova_compute[259627]:   <os>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:   </os>
Oct 14 09:01:41 compute-0 nova_compute[259627]:   <features>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:   </features>
Oct 14 09:01:41 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:01:41 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:01:41 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/de383510-2de3-40bd-b479-c0010b3f2d1c_disk">
Oct 14 09:01:41 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       </source>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:01:41 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/de383510-2de3-40bd-b479-c0010b3f2d1c_disk.config">
Oct 14 09:01:41 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       </source>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:01:41 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:be:e2:1b"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <target dev="tap8ec905f0-b7"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c/console.log" append="off"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <video>
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     </video>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <input type="keyboard" bus="usb"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:01:41 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:01:41 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:01:41 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:01:41 compute-0 nova_compute[259627]: </domain>
Oct 14 09:01:41 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.093 2 DEBUG nova.virt.libvirt.driver [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.094 2 DEBUG nova.virt.libvirt.driver [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.094 2 DEBUG nova.virt.libvirt.vif [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:01:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.099 2 DEBUG nova.network.os_vif_util [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.100 2 DEBUG nova.network.os_vif_util [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.100 2 DEBUG os_vif [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.102 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.102 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.105 2 INFO nova.compute.manager [-] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Took 0.62 seconds to deallocate network for instance.
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ec905f0-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ec905f0-b7, col_values=(('external_ids', {'iface-id': '8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:e2:1b', 'vm-uuid': 'de383510-2de3-40bd-b479-c0010b3f2d1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:41 compute-0 NetworkManager[44885]: <info>  [1760432501.1561] manager: (tap8ec905f0-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/182)
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.167 2 DEBUG oslo_concurrency.lockutils [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.167 2 DEBUG oslo_concurrency.lockutils [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.174 2 INFO os_vif [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.219 2 INFO nova.virt.libvirt.driver [None req-532f3301-6a7c-4b14-97c1-5fd0613e7286 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Snapshot image upload complete
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.220 2 INFO nova.compute.manager [None req-532f3301-6a7c-4b14-97c1-5fd0613e7286 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Took 5.18 seconds to snapshot the instance on the hypervisor.
Oct 14 09:01:41 compute-0 kernel: tap8ec905f0-b7: entered promiscuous mode
Oct 14 09:01:41 compute-0 NetworkManager[44885]: <info>  [1760432501.2914] manager: (tap8ec905f0-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/183)
Oct 14 09:01:41 compute-0 ovn_controller[152662]: 2025-10-14T09:01:41Z|00408|binding|INFO|Claiming lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for this chassis.
Oct 14 09:01:41 compute-0 ovn_controller[152662]: 2025-10-14T09:01:41Z|00409|binding|INFO|8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2: Claiming fa:16:3e:be:e2:1b 10.100.0.10
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.303 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.304 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 bound to our chassis
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.305 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.318 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb02003-6ef0-4124-b435-9702daefbb37]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.319 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c6eb8a4-61 in ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.320 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c6eb8a4-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.320 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8a98f093-07b0-424d-9387-1f0871d9fa0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:41 compute-0 systemd-udevd[310119]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.321 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[da16d993-8ca4-4998-bcb0-01692b173358]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:41 compute-0 ovn_controller[152662]: 2025-10-14T09:01:41Z|00410|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 ovn-installed in OVS
Oct 14 09:01:41 compute-0 ovn_controller[152662]: 2025-10-14T09:01:41Z|00411|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 up in Southbound
Oct 14 09:01:41 compute-0 NetworkManager[44885]: <info>  [1760432501.3396] device (tap8ec905f0-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:01:41 compute-0 systemd-machined[214636]: New machine qemu-56-instance-0000002b.
Oct 14 09:01:41 compute-0 NetworkManager[44885]: <info>  [1760432501.3405] device (tap8ec905f0-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.340 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[64328d13-aa3b-495d-b3f7-25b4c009ae2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:41 compute-0 systemd[1]: Started Virtual Machine qemu-56-instance-0000002b.
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.366 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[55dc4d54-80f2-4d37-ad82-9985238b4913]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.369 2 DEBUG oslo_concurrency.processutils [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.394 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[41b50d39-9aa8-4195-a5ec-1442a7ebd39c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.399 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[67ea3982-f883-442e-b13e-4a5ab47fc77a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:41 compute-0 NetworkManager[44885]: <info>  [1760432501.4013] manager: (tap7c6eb8a4-60): new Veth device (/org/freedesktop/NetworkManager/Devices/184)
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.422 2 DEBUG oslo_concurrency.lockutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "f5ecec2f-eb67-49e5-abb8-15e2be8db618" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.423 2 DEBUG oslo_concurrency.lockutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "f5ecec2f-eb67-49e5-abb8-15e2be8db618" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.434 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e61743-d466-4df0-bf5f-dfc2f249a6f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.437 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f1338462-3a10-4367-b57c-c33b994f25a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.439 2 DEBUG nova.compute.manager [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:01:41 compute-0 NetworkManager[44885]: <info>  [1760432501.4605] device (tap7c6eb8a4-60): carrier: link connected
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.467 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3e8949-f167-494e-9bda-b340a7c2df6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.482 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3c885160-0a51-4fb8-921e-46c2b8dac58e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 124], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628022, 'reachable_time': 32081, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310153, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.495 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc20b19-5de1-472d-933a-1403a5efe5d7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe62:70fd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 628022, 'tstamp': 628022}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310154, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.500 2 DEBUG oslo_concurrency.lockutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.511 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a77fa010-9c08-4204-95fe-222db3031253]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 124], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628022, 'reachable_time': 32081, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310155, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.539 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d39198-ce84-43ea-af64-1b9c35e8ace8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:41 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1837727409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.611 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f8c122f4-36b3-440b-a733-a164d82a2143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6eb8a4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:41 compute-0 NetworkManager[44885]: <info>  [1760432501.6154] manager: (tap7c6eb8a4-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Oct 14 09:01:41 compute-0 kernel: tap7c6eb8a4-60: entered promiscuous mode
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.618 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c6eb8a4-60, col_values=(('external_ids', {'iface-id': '0f8f2393-a280-4a6e-ac22-da67bf8d74da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:41 compute-0 ovn_controller[152662]: 2025-10-14T09:01:41Z|00412|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.621 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:01:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 431 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 17 MiB/s rd, 18 MiB/s wr, 979 op/s
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.622 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c96be955-862e-407a-b88e-4b69022e6c62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.632 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:01:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:41.632 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'env', 'PROCESS_TAG=haproxy-7c6eb8a4-6604-462a-8730-43f3742053a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c6eb8a4-6604-462a-8730-43f3742053a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:01:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2903638867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.841 2 DEBUG oslo_concurrency.processutils [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.850 2 DEBUG nova.compute.provider_tree [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.875 2 DEBUG nova.scheduler.client.report [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.907 2 DEBUG oslo_concurrency.lockutils [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.909 2 DEBUG oslo_concurrency.lockutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.921 2 DEBUG nova.virt.hardware [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.922 2 INFO nova.compute.claims [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.953 2 INFO nova.scheduler.client.report [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Deleted allocations for instance c5dc9921-0deb-4a3f-83d2-703f8b5f1f37
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.958 2 DEBUG nova.compute.manager [req-b187fb22-1b5f-4e31-a73b-7a68a420b9b9 req-499c51a2-d76e-4279-85fc-c99e178c55cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Received event network-vif-plugged-0abd3b45-48d9-4162-8943-889099bb3c13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.959 2 DEBUG oslo_concurrency.lockutils [req-b187fb22-1b5f-4e31-a73b-7a68a420b9b9 req-499c51a2-d76e-4279-85fc-c99e178c55cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.960 2 DEBUG oslo_concurrency.lockutils [req-b187fb22-1b5f-4e31-a73b-7a68a420b9b9 req-499c51a2-d76e-4279-85fc-c99e178c55cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.960 2 DEBUG oslo_concurrency.lockutils [req-b187fb22-1b5f-4e31-a73b-7a68a420b9b9 req-499c51a2-d76e-4279-85fc-c99e178c55cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.960 2 DEBUG nova.compute.manager [req-b187fb22-1b5f-4e31-a73b-7a68a420b9b9 req-499c51a2-d76e-4279-85fc-c99e178c55cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] No waiting events found dispatching network-vif-plugged-0abd3b45-48d9-4162-8943-889099bb3c13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.960 2 WARNING nova.compute.manager [req-b187fb22-1b5f-4e31-a73b-7a68a420b9b9 req-499c51a2-d76e-4279-85fc-c99e178c55cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Received unexpected event network-vif-plugged-0abd3b45-48d9-4162-8943-889099bb3c13 for instance with vm_state deleted and task_state None.
Oct 14 09:01:41 compute-0 nova_compute[259627]: 2025-10-14 09:01:41.961 2 DEBUG nova.compute.manager [req-b187fb22-1b5f-4e31-a73b-7a68a420b9b9 req-499c51a2-d76e-4279-85fc-c99e178c55cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Received event network-vif-deleted-0abd3b45-48d9-4162-8943-889099bb3c13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:42 compute-0 podman[310251]: 2025-10-14 09:01:42.001235158 +0000 UTC m=+0.050386810 container create 087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.031 2 DEBUG oslo_concurrency.lockutils [None req-f8c8d22c-0f08-4fa8-804c-3662d82f46f6 770c2e6a53f04f00b0b44d2f4a0c799b 98be6c59bba64fefb29ee881cc1d3825 - - default default] Lock "c5dc9921-0deb-4a3f-83d2-703f8b5f1f37" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.447s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:42 compute-0 systemd[1]: Started libpod-conmon-087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a.scope.
Oct 14 09:01:42 compute-0 podman[310251]: 2025-10-14 09:01:41.974819219 +0000 UTC m=+0.023970921 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:42 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:01:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4be05b7989e5d3ca75dc62592041c7de6672de773f72392d1c6ba933683c24/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.125 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:42 compute-0 podman[310251]: 2025-10-14 09:01:42.130137588 +0000 UTC m=+0.179289270 container init 087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0)
Oct 14 09:01:42 compute-0 podman[310251]: 2025-10-14 09:01:42.135424778 +0000 UTC m=+0.184576430 container start 087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 14 09:01:42 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[310266]: [NOTICE]   (310270) : New worker (310273) forked
Oct 14 09:01:42 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[310266]: [NOTICE]   (310270) : Loading success.
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.224 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for de383510-2de3-40bd-b479-c0010b3f2d1c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.224 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432502.222744, de383510-2de3-40bd-b479-c0010b3f2d1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.225 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Resumed (Lifecycle Event)
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.228 2 DEBUG nova.compute.manager [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.233 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance rebooted successfully.
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.233 2 DEBUG nova.compute.manager [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.254 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.256 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.282 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.283 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432502.227795, de383510-2de3-40bd-b479-c0010b3f2d1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.283 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Started (Lifecycle Event)
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.291 2 DEBUG oslo_concurrency.lockutils [None req-38a06d56-703d-45cd-8e93-5817c36e5cc8 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.308 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.311 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.316 2 DEBUG nova.compute.manager [req-e37ebb29-4348-4935-ae48-16ef31366d44 req-02f3adae-0967-4c40-a527-0055ffdcfd48 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.317 2 DEBUG oslo_concurrency.lockutils [req-e37ebb29-4348-4935-ae48-16ef31366d44 req-02f3adae-0967-4c40-a527-0055ffdcfd48 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.317 2 DEBUG oslo_concurrency.lockutils [req-e37ebb29-4348-4935-ae48-16ef31366d44 req-02f3adae-0967-4c40-a527-0055ffdcfd48 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.317 2 DEBUG oslo_concurrency.lockutils [req-e37ebb29-4348-4935-ae48-16ef31366d44 req-02f3adae-0967-4c40-a527-0055ffdcfd48 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.317 2 DEBUG nova.compute.manager [req-e37ebb29-4348-4935-ae48-16ef31366d44 req-02f3adae-0967-4c40-a527-0055ffdcfd48 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.317 2 WARNING nova.compute.manager [req-e37ebb29-4348-4935-ae48-16ef31366d44 req-02f3adae-0967-4c40-a527-0055ffdcfd48 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state None.
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.318 2 DEBUG nova.compute.manager [req-e37ebb29-4348-4935-ae48-16ef31366d44 req-02f3adae-0967-4c40-a527-0055ffdcfd48 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.318 2 DEBUG oslo_concurrency.lockutils [req-e37ebb29-4348-4935-ae48-16ef31366d44 req-02f3adae-0967-4c40-a527-0055ffdcfd48 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.318 2 DEBUG oslo_concurrency.lockutils [req-e37ebb29-4348-4935-ae48-16ef31366d44 req-02f3adae-0967-4c40-a527-0055ffdcfd48 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.318 2 DEBUG oslo_concurrency.lockutils [req-e37ebb29-4348-4935-ae48-16ef31366d44 req-02f3adae-0967-4c40-a527-0055ffdcfd48 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.318 2 DEBUG nova.compute.manager [req-e37ebb29-4348-4935-ae48-16ef31366d44 req-02f3adae-0967-4c40-a527-0055ffdcfd48 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.319 2 WARNING nova.compute.manager [req-e37ebb29-4348-4935-ae48-16ef31366d44 req-02f3adae-0967-4c40-a527-0055ffdcfd48 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state None.
Oct 14 09:01:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:01:42 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3282387832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.565 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.571 2 DEBUG nova.compute.provider_tree [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:01:42 compute-0 ceph-mon[74249]: pgmap v1339: 305 pgs: 305 active+clean; 431 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 17 MiB/s rd, 18 MiB/s wr, 979 op/s
Oct 14 09:01:42 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2903638867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:42 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3282387832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.609 2 DEBUG nova.scheduler.client.report [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.649 2 DEBUG oslo_concurrency.lockutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.651 2 DEBUG nova.compute.manager [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.738 2 DEBUG nova.compute.manager [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.739 2 DEBUG nova.network.neutron [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.772 2 INFO nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.802 2 DEBUG nova.compute.manager [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.916 2 DEBUG nova.compute.manager [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.917 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.917 2 INFO nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Creating image(s)
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002729758314612229 of space, bias 1.0, pg target 0.8189274943836687 quantized to 32 (current 32)
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013588698354170736 of space, bias 1.0, pg target 0.4076609506251221 quantized to 32 (current 32)
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:01:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.942 2 DEBUG nova.storage.rbd_utils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] rbd image f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.966 2 DEBUG nova.storage.rbd_utils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] rbd image f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.988 2 DEBUG nova.storage.rbd_utils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] rbd image f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:42 compute-0 nova_compute[259627]: 2025-10-14 09:01:42.992 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.064 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.065 2 DEBUG oslo_concurrency.lockutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.066 2 DEBUG oslo_concurrency.lockutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.067 2 DEBUG oslo_concurrency.lockutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.094 2 DEBUG nova.storage.rbd_utils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] rbd image f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.098 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.259 2 DEBUG nova.network.neutron [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.259 2 DEBUG nova.compute.manager [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.329 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.393 2 DEBUG nova.storage.rbd_utils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] resizing rbd image f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.475 2 DEBUG nova.objects.instance [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lazy-loading 'migration_context' on Instance uuid f5ecec2f-eb67-49e5-abb8-15e2be8db618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.489 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.489 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Ensure instance console log exists: /var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.490 2 DEBUG oslo_concurrency.lockutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.490 2 DEBUG oslo_concurrency.lockutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.491 2 DEBUG oslo_concurrency.lockutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.492 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.495 2 WARNING nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.501 2 DEBUG nova.virt.libvirt.host [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.501 2 DEBUG nova.virt.libvirt.host [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.504 2 DEBUG nova.virt.libvirt.host [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.504 2 DEBUG nova.virt.libvirt.host [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.505 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.505 2 DEBUG nova.virt.hardware [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.506 2 DEBUG nova.virt.hardware [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.506 2 DEBUG nova.virt.hardware [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.506 2 DEBUG nova.virt.hardware [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.507 2 DEBUG nova.virt.hardware [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.507 2 DEBUG nova.virt.hardware [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.507 2 DEBUG nova.virt.hardware [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.507 2 DEBUG nova.virt.hardware [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.508 2 DEBUG nova.virt.hardware [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.508 2 DEBUG nova.virt.hardware [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.508 2 DEBUG nova.virt.hardware [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.511 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 431 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 13 MiB/s wr, 711 op/s
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.792 2 DEBUG nova.compute.manager [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.844 2 INFO nova.compute.manager [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] instance snapshotting
Oct 14 09:01:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:01:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/742082035' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.911 2 INFO nova.compute.manager [None req-fea0fdf5-9c6a-4762-93fa-ef8925083a89 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Get console output
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.918 2 INFO oslo.privsep.daemon [None req-fea0fdf5-9c6a-4762-93fa-ef8925083a89 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpmq09rj3_/privsep.sock']
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.947 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.972 2 DEBUG nova.storage.rbd_utils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] rbd image f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:43 compute-0 nova_compute[259627]: 2025-10-14 09:01:43.976 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.145 2 INFO nova.virt.libvirt.driver [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Beginning live snapshot process
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.297 2 DEBUG nova.virt.libvirt.imagebackend [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 09:01:44 compute-0 rsyslogd[1002]: imjournal: 7799 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.447 2 DEBUG nova.compute.manager [req-b8d53763-3b2a-4b95-8dd8-9d637f109d9b req-8bf0d66b-bfb6-442a-9519-305ad88556f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.448 2 DEBUG oslo_concurrency.lockutils [req-b8d53763-3b2a-4b95-8dd8-9d637f109d9b req-8bf0d66b-bfb6-442a-9519-305ad88556f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.449 2 DEBUG oslo_concurrency.lockutils [req-b8d53763-3b2a-4b95-8dd8-9d637f109d9b req-8bf0d66b-bfb6-442a-9519-305ad88556f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.450 2 DEBUG oslo_concurrency.lockutils [req-b8d53763-3b2a-4b95-8dd8-9d637f109d9b req-8bf0d66b-bfb6-442a-9519-305ad88556f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.451 2 DEBUG nova.compute.manager [req-b8d53763-3b2a-4b95-8dd8-9d637f109d9b req-8bf0d66b-bfb6-442a-9519-305ad88556f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.451 2 WARNING nova.compute.manager [req-b8d53763-3b2a-4b95-8dd8-9d637f109d9b req-8bf0d66b-bfb6-442a-9519-305ad88556f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state None.
Oct 14 09:01:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:01:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1476326120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.479 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.482 2 DEBUG nova.objects.instance [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lazy-loading 'pci_devices' on Instance uuid f5ecec2f-eb67-49e5-abb8-15e2be8db618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.502 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:01:44 compute-0 nova_compute[259627]:   <uuid>f5ecec2f-eb67-49e5-abb8-15e2be8db618</uuid>
Oct 14 09:01:44 compute-0 nova_compute[259627]:   <name>instance-00000030</name>
Oct 14 09:01:44 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:01:44 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:01:44 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-818396468</nova:name>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:01:43</nova:creationTime>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:01:44 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:01:44 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:01:44 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:01:44 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:01:44 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:01:44 compute-0 nova_compute[259627]:         <nova:user uuid="a8208d81c99c41668cc80998cc83bf02">tempest-ServersAdminNegativeTestJSON-114279920-project-member</nova:user>
Oct 14 09:01:44 compute-0 nova_compute[259627]:         <nova:project uuid="f0dbf2d79ae5410d965cc3670bdd26ba">tempest-ServersAdminNegativeTestJSON-114279920</nova:project>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:01:44 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:01:44 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <system>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <entry name="serial">f5ecec2f-eb67-49e5-abb8-15e2be8db618</entry>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <entry name="uuid">f5ecec2f-eb67-49e5-abb8-15e2be8db618</entry>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     </system>
Oct 14 09:01:44 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:01:44 compute-0 nova_compute[259627]:   <os>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:   </os>
Oct 14 09:01:44 compute-0 nova_compute[259627]:   <features>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:   </features>
Oct 14 09:01:44 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:01:44 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:01:44 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk">
Oct 14 09:01:44 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       </source>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:01:44 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk.config">
Oct 14 09:01:44 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       </source>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:01:44 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618/console.log" append="off"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <video>
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     </video>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:01:44 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:01:44 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:01:44 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:01:44 compute-0 nova_compute[259627]: </domain>
Oct 14 09:01:44 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.554 2 DEBUG nova.storage.rbd_utils [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] creating snapshot(56598d4525d541c693ae71ee1e24ae4b) on rbd image(333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:01:44 compute-0 podman[310570]: 2025-10-14 09:01:44.609312052 +0000 UTC m=+0.056064270 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.623 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.623 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.624 2 INFO nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Using config drive
Oct 14 09:01:44 compute-0 podman[310569]: 2025-10-14 09:01:44.628253027 +0000 UTC m=+0.082971091 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.648 2 DEBUG nova.storage.rbd_utils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] rbd image f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Oct 14 09:01:44 compute-0 ceph-mon[74249]: pgmap v1340: 305 pgs: 305 active+clean; 431 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 13 MiB/s wr, 711 op/s
Oct 14 09:01:44 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/742082035' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:44 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1476326120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:01:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Oct 14 09:01:44 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.746 2 DEBUG nova.storage.rbd_utils [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] cloning vms/333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk@56598d4525d541c693ae71ee1e24ae4b to images/44bff268-5d5f-42ed-bb32-357e8412a37d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.828 2 INFO oslo.privsep.daemon [None req-fea0fdf5-9c6a-4762-93fa-ef8925083a89 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Spawned new privsep daemon via rootwrap
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.690 20781 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.700 20781 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.705 20781 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.705 20781 INFO oslo.privsep.daemon [-] privsep daemon running as pid 20781
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.848 2 DEBUG nova.storage.rbd_utils [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] flattening images/44bff268-5d5f-42ed-bb32-357e8412a37d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.907 2 INFO nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Creating config drive at /var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618/disk.config
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.914 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2p4stps_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:44 compute-0 nova_compute[259627]: 2025-10-14 09:01:44.929 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:01:45 compute-0 nova_compute[259627]: 2025-10-14 09:01:45.077 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2p4stps_" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:45 compute-0 nova_compute[259627]: 2025-10-14 09:01:45.109 2 DEBUG nova.storage.rbd_utils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] rbd image f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:01:45 compute-0 nova_compute[259627]: 2025-10-14 09:01:45.115 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618/disk.config f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:45 compute-0 nova_compute[259627]: 2025-10-14 09:01:45.217 2 DEBUG nova.storage.rbd_utils [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] removing snapshot(56598d4525d541c693ae71ee1e24ae4b) on rbd image(333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:01:45 compute-0 nova_compute[259627]: 2025-10-14 09:01:45.275 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618/disk.config f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:45 compute-0 nova_compute[259627]: 2025-10-14 09:01:45.275 2 INFO nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Deleting local config drive /var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618/disk.config because it was imported into RBD.
Oct 14 09:01:45 compute-0 systemd-machined[214636]: New machine qemu-57-instance-00000030.
Oct 14 09:01:45 compute-0 systemd[1]: Started Virtual Machine qemu-57-instance-00000030.
Oct 14 09:01:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 533 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 21 MiB/s rd, 20 MiB/s wr, 936 op/s
Oct 14 09:01:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Oct 14 09:01:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Oct 14 09:01:45 compute-0 ceph-mon[74249]: osdmap e175: 3 total, 3 up, 3 in
Oct 14 09:01:45 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Oct 14 09:01:45 compute-0 nova_compute[259627]: 2025-10-14 09:01:45.726 2 DEBUG nova.storage.rbd_utils [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] creating snapshot(snap) on rbd image(44bff268-5d5f-42ed-bb32-357e8412a37d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:01:46 compute-0 ovn_controller[152662]: 2025-10-14T09:01:46Z|00413|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.299 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432506.2941918, f5ecec2f-eb67-49e5-abb8-15e2be8db618 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.299 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] VM Resumed (Lifecycle Event)
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.301 2 DEBUG nova.compute.manager [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.301 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.305 2 INFO nova.virt.libvirt.driver [-] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Instance spawned successfully.
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.305 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.339 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.352 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.359 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.359 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.359 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.360 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.360 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.360 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.374 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.374 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432506.2956293, f5ecec2f-eb67-49e5-abb8-15e2be8db618 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.374 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] VM Started (Lifecycle Event)
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.418 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.421 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.488 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.534 2 INFO nova.compute.manager [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Took 3.62 seconds to spawn the instance on the hypervisor.
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.535 2 DEBUG nova.compute.manager [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.674 2 INFO nova.compute.manager [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Took 5.19 seconds to build instance.
Oct 14 09:01:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Oct 14 09:01:46 compute-0 ceph-mon[74249]: pgmap v1342: 305 pgs: 305 active+clean; 533 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 21 MiB/s rd, 20 MiB/s wr, 936 op/s
Oct 14 09:01:46 compute-0 ceph-mon[74249]: osdmap e176: 3 total, 3 up, 3 in
Oct 14 09:01:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Oct 14 09:01:46 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Oct 14 09:01:46 compute-0 nova_compute[259627]: 2025-10-14 09:01:46.755 2 DEBUG oslo_concurrency.lockutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "f5ecec2f-eb67-49e5-abb8-15e2be8db618" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:46 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:01:46 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6268 writes, 28K keys, 6268 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6268 writes, 6268 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1633 writes, 7351 keys, 1633 commit groups, 1.0 writes per commit group, ingest: 9.84 MB, 0.02 MB/s
                                           Interval WAL: 1633 writes, 1633 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    138.3      0.24              0.11        16    0.015       0      0       0.0       0.0
                                             L6      1/0    8.23 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    209.5    169.8      0.63              0.34        15    0.042     69K   8346       0.0       0.0
                                            Sum      1/0    8.23 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    151.7    161.1      0.87              0.45        31    0.028     69K   8346       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.4    201.7    207.2      0.19              0.09         8    0.024     21K   2569       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    209.5    169.8      0.63              0.34        15    0.042     69K   8346       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    142.4      0.23              0.11        15    0.016       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.032, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 0.9 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5646f3b2b1f0#2 capacity: 304.00 MB usage: 15.40 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000196 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(995,14.84 MB,4.88211%) FilterBlock(32,199.23 KB,0.0640016%) IndexBlock(32,372.81 KB,0.119761%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 14 09:01:47 compute-0 nova_compute[259627]: 2025-10-14 09:01:47.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:47 compute-0 nova_compute[259627]: 2025-10-14 09:01:47.266 2 DEBUG nova.objects.instance [None req-a43656ad-bcdb-4ae4-9031-85c5d6a52320 873b066be596492e8aa5f2a9f2a01a7f f93dae4c4a5f4a16921d0424a47c214e - - default default] Lazy-loading 'pci_devices' on Instance uuid f5ecec2f-eb67-49e5-abb8-15e2be8db618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:47 compute-0 nova_compute[259627]: 2025-10-14 09:01:47.291 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432507.2911167, f5ecec2f-eb67-49e5-abb8-15e2be8db618 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:47 compute-0 nova_compute[259627]: 2025-10-14 09:01:47.291 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] VM Paused (Lifecycle Event)
Oct 14 09:01:47 compute-0 nova_compute[259627]: 2025-10-14 09:01:47.311 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:47 compute-0 nova_compute[259627]: 2025-10-14 09:01:47.316 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:01:47 compute-0 nova_compute[259627]: 2025-10-14 09:01:47.339 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 14 09:01:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:01:47.511 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:01:47 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000030.scope: Deactivated successfully.
Oct 14 09:01:47 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000030.scope: Consumed 1.934s CPU time.
Oct 14 09:01:47 compute-0 systemd-machined[214636]: Machine qemu-57-instance-00000030 terminated.
Oct 14 09:01:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:01:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Oct 14 09:01:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Oct 14 09:01:47 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Oct 14 09:01:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 533 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 18 MiB/s rd, 16 MiB/s wr, 571 op/s
Oct 14 09:01:47 compute-0 nova_compute[259627]: 2025-10-14 09:01:47.632 2 DEBUG nova.compute.manager [None req-a43656ad-bcdb-4ae4-9031-85c5d6a52320 873b066be596492e8aa5f2a9f2a01a7f f93dae4c4a5f4a16921d0424a47c214e - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:47 compute-0 ceph-mon[74249]: osdmap e177: 3 total, 3 up, 3 in
Oct 14 09:01:47 compute-0 ceph-mon[74249]: osdmap e178: 3 total, 3 up, 3 in
Oct 14 09:01:48 compute-0 nova_compute[259627]: 2025-10-14 09:01:48.251 2 INFO nova.virt.libvirt.driver [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Snapshot image upload complete
Oct 14 09:01:48 compute-0 nova_compute[259627]: 2025-10-14 09:01:48.252 2 INFO nova.compute.manager [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Took 4.41 seconds to snapshot the instance on the hypervisor.
Oct 14 09:01:48 compute-0 nova_compute[259627]: 2025-10-14 09:01:48.580 2 DEBUG oslo_concurrency.lockutils [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:48 compute-0 nova_compute[259627]: 2025-10-14 09:01:48.581 2 DEBUG oslo_concurrency.lockutils [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:48 compute-0 nova_compute[259627]: 2025-10-14 09:01:48.581 2 DEBUG nova.compute.manager [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:48 compute-0 nova_compute[259627]: 2025-10-14 09:01:48.586 2 DEBUG nova.compute.manager [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 14 09:01:48 compute-0 nova_compute[259627]: 2025-10-14 09:01:48.588 2 DEBUG nova.objects.instance [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'flavor' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:48 compute-0 nova_compute[259627]: 2025-10-14 09:01:48.627 2 DEBUG nova.virt.libvirt.driver [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:01:48 compute-0 ceph-mon[74249]: pgmap v1346: 305 pgs: 305 active+clean; 533 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 18 MiB/s rd, 16 MiB/s wr, 571 op/s
Oct 14 09:01:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 533 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 13 MiB/s wr, 463 op/s
Oct 14 09:01:49 compute-0 nova_compute[259627]: 2025-10-14 09:01:49.857 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "f5ecec2f-eb67-49e5-abb8-15e2be8db618" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:49 compute-0 nova_compute[259627]: 2025-10-14 09:01:49.857 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "f5ecec2f-eb67-49e5-abb8-15e2be8db618" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:49 compute-0 nova_compute[259627]: 2025-10-14 09:01:49.858 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "f5ecec2f-eb67-49e5-abb8-15e2be8db618-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:49 compute-0 nova_compute[259627]: 2025-10-14 09:01:49.858 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "f5ecec2f-eb67-49e5-abb8-15e2be8db618-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:49 compute-0 nova_compute[259627]: 2025-10-14 09:01:49.858 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "f5ecec2f-eb67-49e5-abb8-15e2be8db618-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:49 compute-0 nova_compute[259627]: 2025-10-14 09:01:49.859 2 INFO nova.compute.manager [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Terminating instance
Oct 14 09:01:49 compute-0 nova_compute[259627]: 2025-10-14 09:01:49.860 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "refresh_cache-f5ecec2f-eb67-49e5-abb8-15e2be8db618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:01:49 compute-0 nova_compute[259627]: 2025-10-14 09:01:49.860 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquired lock "refresh_cache-f5ecec2f-eb67-49e5-abb8-15e2be8db618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:01:49 compute-0 nova_compute[259627]: 2025-10-14 09:01:49.860 2 DEBUG nova.network.neutron [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:01:50 compute-0 nova_compute[259627]: 2025-10-14 09:01:50.079 2 DEBUG nova.network.neutron [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:01:50 compute-0 ovn_controller[152662]: 2025-10-14T09:01:50Z|00414|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 09:01:50 compute-0 nova_compute[259627]: 2025-10-14 09:01:50.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:50 compute-0 nova_compute[259627]: 2025-10-14 09:01:50.582 2 DEBUG nova.network.neutron [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:01:50 compute-0 nova_compute[259627]: 2025-10-14 09:01:50.598 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Releasing lock "refresh_cache-f5ecec2f-eb67-49e5-abb8-15e2be8db618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:01:50 compute-0 nova_compute[259627]: 2025-10-14 09:01:50.598 2 DEBUG nova.compute.manager [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:01:50 compute-0 nova_compute[259627]: 2025-10-14 09:01:50.603 2 INFO nova.virt.libvirt.driver [-] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Instance destroyed successfully.
Oct 14 09:01:50 compute-0 nova_compute[259627]: 2025-10-14 09:01:50.604 2 DEBUG nova.objects.instance [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lazy-loading 'resources' on Instance uuid f5ecec2f-eb67-49e5-abb8-15e2be8db618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:50 compute-0 ceph-mon[74249]: pgmap v1347: 305 pgs: 305 active+clean; 533 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 13 MiB/s wr, 463 op/s
Oct 14 09:01:50 compute-0 nova_compute[259627]: 2025-10-14 09:01:50.972 2 INFO nova.virt.libvirt.driver [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Deleting instance files /var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618_del
Oct 14 09:01:50 compute-0 nova_compute[259627]: 2025-10-14 09:01:50.974 2 INFO nova.virt.libvirt.driver [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Deletion of /var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618_del complete
Oct 14 09:01:51 compute-0 nova_compute[259627]: 2025-10-14 09:01:51.046 2 INFO nova.compute.manager [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Took 0.45 seconds to destroy the instance on the hypervisor.
Oct 14 09:01:51 compute-0 nova_compute[259627]: 2025-10-14 09:01:51.046 2 DEBUG oslo.service.loopingcall [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:01:51 compute-0 nova_compute[259627]: 2025-10-14 09:01:51.047 2 DEBUG nova.compute.manager [-] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:01:51 compute-0 nova_compute[259627]: 2025-10-14 09:01:51.047 2 DEBUG nova.network.neutron [-] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:01:51 compute-0 nova_compute[259627]: 2025-10-14 09:01:51.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:51 compute-0 nova_compute[259627]: 2025-10-14 09:01:51.341 2 DEBUG nova.network.neutron [-] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:01:51 compute-0 nova_compute[259627]: 2025-10-14 09:01:51.358 2 DEBUG nova.network.neutron [-] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:01:51 compute-0 nova_compute[259627]: 2025-10-14 09:01:51.376 2 INFO nova.compute.manager [-] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Took 0.33 seconds to deallocate network for instance.
Oct 14 09:01:51 compute-0 nova_compute[259627]: 2025-10-14 09:01:51.435 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:51 compute-0 nova_compute[259627]: 2025-10-14 09:01:51.435 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:51 compute-0 nova_compute[259627]: 2025-10-14 09:01:51.566 2 DEBUG oslo_concurrency.processutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 571 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.0 MiB/s wr, 243 op/s
Oct 14 09:01:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:01:52 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3557958970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:52 compute-0 nova_compute[259627]: 2025-10-14 09:01:52.054 2 DEBUG oslo_concurrency.processutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:52 compute-0 nova_compute[259627]: 2025-10-14 09:01:52.062 2 DEBUG nova.compute.provider_tree [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:01:52 compute-0 nova_compute[259627]: 2025-10-14 09:01:52.084 2 DEBUG nova.scheduler.client.report [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:01:52 compute-0 nova_compute[259627]: 2025-10-14 09:01:52.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:52 compute-0 nova_compute[259627]: 2025-10-14 09:01:52.111 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:52 compute-0 nova_compute[259627]: 2025-10-14 09:01:52.146 2 INFO nova.scheduler.client.report [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Deleted allocations for instance f5ecec2f-eb67-49e5-abb8-15e2be8db618
Oct 14 09:01:52 compute-0 nova_compute[259627]: 2025-10-14 09:01:52.223 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "f5ecec2f-eb67-49e5-abb8-15e2be8db618" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.366s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:01:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Oct 14 09:01:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Oct 14 09:01:52 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Oct 14 09:01:52 compute-0 ceph-mon[74249]: pgmap v1348: 305 pgs: 305 active+clean; 571 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.0 MiB/s wr, 243 op/s
Oct 14 09:01:52 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3557958970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:52 compute-0 ceph-mon[74249]: osdmap e179: 3 total, 3 up, 3 in
Oct 14 09:01:52 compute-0 nova_compute[259627]: 2025-10-14 09:01:52.869 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "15778bbd-fbee-44e9-ba12-9884db0e7afb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:52 compute-0 nova_compute[259627]: 2025-10-14 09:01:52.869 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "15778bbd-fbee-44e9-ba12-9884db0e7afb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:52 compute-0 nova_compute[259627]: 2025-10-14 09:01:52.870 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "15778bbd-fbee-44e9-ba12-9884db0e7afb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:52 compute-0 nova_compute[259627]: 2025-10-14 09:01:52.870 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "15778bbd-fbee-44e9-ba12-9884db0e7afb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:52 compute-0 nova_compute[259627]: 2025-10-14 09:01:52.870 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "15778bbd-fbee-44e9-ba12-9884db0e7afb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:52 compute-0 nova_compute[259627]: 2025-10-14 09:01:52.871 2 INFO nova.compute.manager [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Terminating instance
Oct 14 09:01:52 compute-0 nova_compute[259627]: 2025-10-14 09:01:52.872 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "refresh_cache-15778bbd-fbee-44e9-ba12-9884db0e7afb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:01:52 compute-0 nova_compute[259627]: 2025-10-14 09:01:52.872 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquired lock "refresh_cache-15778bbd-fbee-44e9-ba12-9884db0e7afb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:01:52 compute-0 nova_compute[259627]: 2025-10-14 09:01:52.872 2 DEBUG nova.network.neutron [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:01:53 compute-0 nova_compute[259627]: 2025-10-14 09:01:53.195 2 DEBUG nova.network.neutron [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:01:53 compute-0 nova_compute[259627]: 2025-10-14 09:01:53.607 2 DEBUG nova.network.neutron [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:01:53 compute-0 nova_compute[259627]: 2025-10-14 09:01:53.626 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Releasing lock "refresh_cache-15778bbd-fbee-44e9-ba12-9884db0e7afb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:01:53 compute-0 nova_compute[259627]: 2025-10-14 09:01:53.626 2 DEBUG nova.compute.manager [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:01:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 571 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 211 op/s
Oct 14 09:01:53 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Oct 14 09:01:53 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000002f.scope: Consumed 12.366s CPU time.
Oct 14 09:01:53 compute-0 systemd-machined[214636]: Machine qemu-55-instance-0000002f terminated.
Oct 14 09:01:53 compute-0 ovn_controller[152662]: 2025-10-14T09:01:53Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:be:e2:1b 10.100.0.10
Oct 14 09:01:53 compute-0 nova_compute[259627]: 2025-10-14 09:01:53.845 2 INFO nova.virt.libvirt.driver [-] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Instance destroyed successfully.
Oct 14 09:01:53 compute-0 nova_compute[259627]: 2025-10-14 09:01:53.845 2 DEBUG nova.objects.instance [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lazy-loading 'resources' on Instance uuid 15778bbd-fbee-44e9-ba12-9884db0e7afb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:01:54 compute-0 nova_compute[259627]: 2025-10-14 09:01:54.277 2 INFO nova.virt.libvirt.driver [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Deleting instance files /var/lib/nova/instances/15778bbd-fbee-44e9-ba12-9884db0e7afb_del
Oct 14 09:01:54 compute-0 nova_compute[259627]: 2025-10-14 09:01:54.278 2 INFO nova.virt.libvirt.driver [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Deletion of /var/lib/nova/instances/15778bbd-fbee-44e9-ba12-9884db0e7afb_del complete
Oct 14 09:01:54 compute-0 nova_compute[259627]: 2025-10-14 09:01:54.329 2 INFO nova.compute.manager [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 14 09:01:54 compute-0 nova_compute[259627]: 2025-10-14 09:01:54.330 2 DEBUG oslo.service.loopingcall [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:01:54 compute-0 nova_compute[259627]: 2025-10-14 09:01:54.331 2 DEBUG nova.compute.manager [-] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:01:54 compute-0 nova_compute[259627]: 2025-10-14 09:01:54.331 2 DEBUG nova.network.neutron [-] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:01:54 compute-0 ceph-mon[74249]: pgmap v1350: 305 pgs: 305 active+clean; 571 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 211 op/s
Oct 14 09:01:54 compute-0 nova_compute[259627]: 2025-10-14 09:01:54.853 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432499.8486285, c5dc9921-0deb-4a3f-83d2-703f8b5f1f37 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:01:54 compute-0 nova_compute[259627]: 2025-10-14 09:01:54.853 2 INFO nova.compute.manager [-] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] VM Stopped (Lifecycle Event)
Oct 14 09:01:54 compute-0 nova_compute[259627]: 2025-10-14 09:01:54.881 2 DEBUG nova.compute.manager [None req-a12e7eff-6ea4-4d4b-855d-b9f30261f910 - - - - - -] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:01:54 compute-0 nova_compute[259627]: 2025-10-14 09:01:54.899 2 DEBUG nova.network.neutron [-] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:01:54 compute-0 nova_compute[259627]: 2025-10-14 09:01:54.921 2 DEBUG nova.network.neutron [-] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:01:54 compute-0 nova_compute[259627]: 2025-10-14 09:01:54.942 2 INFO nova.compute.manager [-] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Took 0.61 seconds to deallocate network for instance.
Oct 14 09:01:55 compute-0 nova_compute[259627]: 2025-10-14 09:01:55.005 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:01:55 compute-0 nova_compute[259627]: 2025-10-14 09:01:55.005 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:01:55 compute-0 nova_compute[259627]: 2025-10-14 09:01:55.111 2 DEBUG oslo_concurrency.processutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:01:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:01:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/701239625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:55 compute-0 nova_compute[259627]: 2025-10-14 09:01:55.554 2 DEBUG oslo_concurrency.processutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:01:55 compute-0 nova_compute[259627]: 2025-10-14 09:01:55.560 2 DEBUG nova.compute.provider_tree [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:01:55 compute-0 nova_compute[259627]: 2025-10-14 09:01:55.579 2 DEBUG nova.scheduler.client.report [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:01:55 compute-0 nova_compute[259627]: 2025-10-14 09:01:55.615 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 451 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.8 MiB/s wr, 342 op/s
Oct 14 09:01:55 compute-0 nova_compute[259627]: 2025-10-14 09:01:55.650 2 INFO nova.scheduler.client.report [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Deleted allocations for instance 15778bbd-fbee-44e9-ba12-9884db0e7afb
Oct 14 09:01:55 compute-0 nova_compute[259627]: 2025-10-14 09:01:55.707 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "15778bbd-fbee-44e9-ba12-9884db0e7afb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:01:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/701239625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:01:56 compute-0 nova_compute[259627]: 2025-10-14 09:01:56.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:56 compute-0 ceph-mon[74249]: pgmap v1351: 305 pgs: 305 active+clean; 451 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.8 MiB/s wr, 342 op/s
Oct 14 09:01:57 compute-0 nova_compute[259627]: 2025-10-14 09:01:57.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:01:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:01:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 451 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.1 MiB/s wr, 276 op/s
Oct 14 09:01:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Oct 14 09:01:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Oct 14 09:01:57 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Oct 14 09:01:58 compute-0 nova_compute[259627]: 2025-10-14 09:01:58.682 2 DEBUG nova.virt.libvirt.driver [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 09:01:58 compute-0 ceph-mon[74249]: pgmap v1352: 305 pgs: 305 active+clean; 451 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.1 MiB/s wr, 276 op/s
Oct 14 09:01:58 compute-0 ceph-mon[74249]: osdmap e180: 3 total, 3 up, 3 in
Oct 14 09:01:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 451 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 910 KiB/s rd, 96 KiB/s wr, 162 op/s
Oct 14 09:01:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Oct 14 09:01:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Oct 14 09:01:59 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Oct 14 09:02:00 compute-0 ceph-mon[74249]: pgmap v1354: 305 pgs: 305 active+clean; 451 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 910 KiB/s rd, 96 KiB/s wr, 162 op/s
Oct 14 09:02:00 compute-0 ceph-mon[74249]: osdmap e181: 3 total, 3 up, 3 in
Oct 14 09:02:00 compute-0 kernel: tap8ec905f0-b7 (unregistering): left promiscuous mode
Oct 14 09:02:00 compute-0 NetworkManager[44885]: <info>  [1760432520.9233] device (tap8ec905f0-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:02:00 compute-0 ovn_controller[152662]: 2025-10-14T09:02:00Z|00415|binding|INFO|Releasing lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 from this chassis (sb_readonly=0)
Oct 14 09:02:00 compute-0 nova_compute[259627]: 2025-10-14 09:02:00.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:00 compute-0 ovn_controller[152662]: 2025-10-14T09:02:00Z|00416|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 down in Southbound
Oct 14 09:02:00 compute-0 ovn_controller[152662]: 2025-10-14T09:02:00Z|00417|binding|INFO|Removing iface tap8ec905f0-b7 ovn-installed in OVS
Oct 14 09:02:00 compute-0 nova_compute[259627]: 2025-10-14 09:02:00.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:00.976 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:02:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:00.977 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 unbound from our chassis
Oct 14 09:02:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:00.978 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c6eb8a4-6604-462a-8730-43f3742053a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:02:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:00.979 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a652163c-afd2-4d32-aead-cc3a984ec147]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:00.980 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace which is not needed anymore
Oct 14 09:02:01 compute-0 nova_compute[259627]: 2025-10-14 09:02:00.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:01 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 14 09:02:01 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000002b.scope: Consumed 12.695s CPU time.
Oct 14 09:02:01 compute-0 systemd-machined[214636]: Machine qemu-56-instance-0000002b terminated.
Oct 14 09:02:01 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[310266]: [NOTICE]   (310270) : haproxy version is 2.8.14-c23fe91
Oct 14 09:02:01 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[310266]: [NOTICE]   (310270) : path to executable is /usr/sbin/haproxy
Oct 14 09:02:01 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[310266]: [ALERT]    (310270) : Current worker (310273) exited with code 143 (Terminated)
Oct 14 09:02:01 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[310266]: [WARNING]  (310270) : All workers exited. Exiting... (0)
Oct 14 09:02:01 compute-0 systemd[1]: libpod-087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a.scope: Deactivated successfully.
Oct 14 09:02:01 compute-0 podman[310951]: 2025-10-14 09:02:01.153103408 +0000 UTC m=+0.068353731 container died 087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:02:01 compute-0 nova_compute[259627]: 2025-10-14 09:02:01.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a-userdata-shm.mount: Deactivated successfully.
Oct 14 09:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b4be05b7989e5d3ca75dc62592041c7de6672de773f72392d1c6ba933683c24-merged.mount: Deactivated successfully.
Oct 14 09:02:01 compute-0 podman[310951]: 2025-10-14 09:02:01.189235267 +0000 UTC m=+0.104485580 container cleanup 087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:02:01 compute-0 nova_compute[259627]: 2025-10-14 09:02:01.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:01 compute-0 nova_compute[259627]: 2025-10-14 09:02:01.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:01 compute-0 systemd[1]: libpod-conmon-087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a.scope: Deactivated successfully.
Oct 14 09:02:01 compute-0 podman[310983]: 2025-10-14 09:02:01.273128949 +0000 UTC m=+0.055645349 container remove 087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:02:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:01.279 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4c3463c3-7942-428b-b51b-dab7f16175f9]: (4, ('Tue Oct 14 09:02:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a)\n087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a\nTue Oct 14 09:02:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a)\n087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:01.281 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6e7dc0b1-50a4-446f-9217-f5c98b02655b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:01.281 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:02:01 compute-0 nova_compute[259627]: 2025-10-14 09:02:01.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:01 compute-0 kernel: tap7c6eb8a4-60: left promiscuous mode
Oct 14 09:02:01 compute-0 nova_compute[259627]: 2025-10-14 09:02:01.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:01.308 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0efad08-24a0-41ba-b2a9-de19e05150cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:01 compute-0 nova_compute[259627]: 2025-10-14 09:02:01.310 2 DEBUG nova.compute.manager [req-0f05712d-3ebd-4467-bc2f-d1c399583db4 req-368bfd9a-c6d9-4657-bce6-032ea3a2242a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:02:01 compute-0 nova_compute[259627]: 2025-10-14 09:02:01.310 2 DEBUG oslo_concurrency.lockutils [req-0f05712d-3ebd-4467-bc2f-d1c399583db4 req-368bfd9a-c6d9-4657-bce6-032ea3a2242a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:02:01 compute-0 nova_compute[259627]: 2025-10-14 09:02:01.310 2 DEBUG oslo_concurrency.lockutils [req-0f05712d-3ebd-4467-bc2f-d1c399583db4 req-368bfd9a-c6d9-4657-bce6-032ea3a2242a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:02:01 compute-0 nova_compute[259627]: 2025-10-14 09:02:01.311 2 DEBUG oslo_concurrency.lockutils [req-0f05712d-3ebd-4467-bc2f-d1c399583db4 req-368bfd9a-c6d9-4657-bce6-032ea3a2242a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:01 compute-0 nova_compute[259627]: 2025-10-14 09:02:01.311 2 DEBUG nova.compute.manager [req-0f05712d-3ebd-4467-bc2f-d1c399583db4 req-368bfd9a-c6d9-4657-bce6-032ea3a2242a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:02:01 compute-0 nova_compute[259627]: 2025-10-14 09:02:01.311 2 WARNING nova.compute.manager [req-0f05712d-3ebd-4467-bc2f-d1c399583db4 req-368bfd9a-c6d9-4657-bce6-032ea3a2242a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state powering-off.
Oct 14 09:02:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:01.347 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[60eff4f5-30c8-4f1b-8929-d5e563561710]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:01.349 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[17ab300c-f636-40d6-88df-a9037afa3fd3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:01.362 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[82a2b9e8-3a4b-463a-95c7-3791f483739b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628014, 'reachable_time': 20762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311008, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d7c6eb8a4\x2d6604\x2d462a\x2d8730\x2d43f3742053a7.mount: Deactivated successfully.
Oct 14 09:02:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:01.366 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:02:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:01.366 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9e991e-1f48-445e-a68c-1a64e1af9a10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 346 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 959 KiB/s rd, 138 KiB/s wr, 233 op/s
Oct 14 09:02:01 compute-0 nova_compute[259627]: 2025-10-14 09:02:01.695 2 INFO nova.virt.libvirt.driver [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance shutdown successfully after 13 seconds.
Oct 14 09:02:01 compute-0 nova_compute[259627]: 2025-10-14 09:02:01.701 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance destroyed successfully.
Oct 14 09:02:01 compute-0 nova_compute[259627]: 2025-10-14 09:02:01.701 2 DEBUG nova.objects.instance [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'numa_topology' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:02:01 compute-0 nova_compute[259627]: 2025-10-14 09:02:01.715 2 DEBUG nova.compute.manager [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:02:01 compute-0 nova_compute[259627]: 2025-10-14 09:02:01.760 2 DEBUG oslo_concurrency.lockutils [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Oct 14 09:02:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Oct 14 09:02:01 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Oct 14 09:02:02 compute-0 nova_compute[259627]: 2025-10-14 09:02:02.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:02 compute-0 nova_compute[259627]: 2025-10-14 09:02:02.485 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "6a03ef41-3cc5-48d2-8796-369687ac6a10" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:02:02 compute-0 nova_compute[259627]: 2025-10-14 09:02:02.487 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "6a03ef41-3cc5-48d2-8796-369687ac6a10" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:02:02 compute-0 nova_compute[259627]: 2025-10-14 09:02:02.488 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "6a03ef41-3cc5-48d2-8796-369687ac6a10-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:02:02 compute-0 nova_compute[259627]: 2025-10-14 09:02:02.488 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "6a03ef41-3cc5-48d2-8796-369687ac6a10-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:02:02 compute-0 nova_compute[259627]: 2025-10-14 09:02:02.489 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "6a03ef41-3cc5-48d2-8796-369687ac6a10-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:02 compute-0 nova_compute[259627]: 2025-10-14 09:02:02.491 2 INFO nova.compute.manager [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Terminating instance
Oct 14 09:02:02 compute-0 nova_compute[259627]: 2025-10-14 09:02:02.493 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "refresh_cache-6a03ef41-3cc5-48d2-8796-369687ac6a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:02:02 compute-0 nova_compute[259627]: 2025-10-14 09:02:02.493 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquired lock "refresh_cache-6a03ef41-3cc5-48d2-8796-369687ac6a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:02:02 compute-0 nova_compute[259627]: 2025-10-14 09:02:02.495 2 DEBUG nova.network.neutron [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:02:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:02:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Oct 14 09:02:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Oct 14 09:02:02 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Oct 14 09:02:02 compute-0 nova_compute[259627]: 2025-10-14 09:02:02.633 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432507.6316822, f5ecec2f-eb67-49e5-abb8-15e2be8db618 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:02:02 compute-0 nova_compute[259627]: 2025-10-14 09:02:02.633 2 INFO nova.compute.manager [-] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] VM Stopped (Lifecycle Event)
Oct 14 09:02:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:02:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:02:02 compute-0 nova_compute[259627]: 2025-10-14 09:02:02.744 2 DEBUG nova.compute.manager [None req-73ae561a-ea9c-4d91-b002-84bd8a6699fa - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:02:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:02:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:02:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:02:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:02:02 compute-0 ceph-mon[74249]: pgmap v1356: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 346 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 959 KiB/s rd, 138 KiB/s wr, 233 op/s
Oct 14 09:02:02 compute-0 ceph-mon[74249]: osdmap e182: 3 total, 3 up, 3 in
Oct 14 09:02:02 compute-0 ceph-mon[74249]: osdmap e183: 3 total, 3 up, 3 in
Oct 14 09:02:02 compute-0 nova_compute[259627]: 2025-10-14 09:02:02.869 2 DEBUG nova.network.neutron [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:02:03 compute-0 nova_compute[259627]: 2025-10-14 09:02:03.436 2 DEBUG nova.objects.instance [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'flavor' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:02:03 compute-0 nova_compute[259627]: 2025-10-14 09:02:03.456 2 DEBUG oslo_concurrency.lockutils [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:02:03 compute-0 nova_compute[259627]: 2025-10-14 09:02:03.456 2 DEBUG oslo_concurrency.lockutils [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquired lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:02:03 compute-0 nova_compute[259627]: 2025-10-14 09:02:03.456 2 DEBUG nova.network.neutron [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:02:03 compute-0 nova_compute[259627]: 2025-10-14 09:02:03.457 2 DEBUG nova.objects.instance [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'info_cache' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:02:03 compute-0 nova_compute[259627]: 2025-10-14 09:02:03.470 2 DEBUG nova.compute.manager [req-68237a80-3206-44fc-b531-2da235417d93 req-c965a987-dc9f-4303-9a6c-8b727d737c23 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:02:03 compute-0 nova_compute[259627]: 2025-10-14 09:02:03.470 2 DEBUG oslo_concurrency.lockutils [req-68237a80-3206-44fc-b531-2da235417d93 req-c965a987-dc9f-4303-9a6c-8b727d737c23 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:02:03 compute-0 nova_compute[259627]: 2025-10-14 09:02:03.471 2 DEBUG oslo_concurrency.lockutils [req-68237a80-3206-44fc-b531-2da235417d93 req-c965a987-dc9f-4303-9a6c-8b727d737c23 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:02:03 compute-0 nova_compute[259627]: 2025-10-14 09:02:03.471 2 DEBUG oslo_concurrency.lockutils [req-68237a80-3206-44fc-b531-2da235417d93 req-c965a987-dc9f-4303-9a6c-8b727d737c23 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:03 compute-0 nova_compute[259627]: 2025-10-14 09:02:03.471 2 DEBUG nova.compute.manager [req-68237a80-3206-44fc-b531-2da235417d93 req-c965a987-dc9f-4303-9a6c-8b727d737c23 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:02:03 compute-0 nova_compute[259627]: 2025-10-14 09:02:03.471 2 WARNING nova.compute.manager [req-68237a80-3206-44fc-b531-2da235417d93 req-c965a987-dc9f-4303-9a6c-8b727d737c23 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state stopped and task_state powering-on.
Oct 14 09:02:03 compute-0 nova_compute[259627]: 2025-10-14 09:02:03.503 2 DEBUG nova.network.neutron [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:02:03 compute-0 nova_compute[259627]: 2025-10-14 09:02:03.519 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Releasing lock "refresh_cache-6a03ef41-3cc5-48d2-8796-369687ac6a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:02:03 compute-0 nova_compute[259627]: 2025-10-14 09:02:03.520 2 DEBUG nova.compute.manager [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:02:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 346 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 57 KiB/s wr, 97 op/s
Oct 14 09:02:03 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Oct 14 09:02:03 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002d.scope: Consumed 12.845s CPU time.
Oct 14 09:02:03 compute-0 systemd-machined[214636]: Machine qemu-53-instance-0000002d terminated.
Oct 14 09:02:03 compute-0 nova_compute[259627]: 2025-10-14 09:02:03.740 2 INFO nova.virt.libvirt.driver [-] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Instance destroyed successfully.
Oct 14 09:02:03 compute-0 nova_compute[259627]: 2025-10-14 09:02:03.741 2 DEBUG nova.objects.instance [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lazy-loading 'resources' on Instance uuid 6a03ef41-3cc5-48d2-8796-369687ac6a10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:02:03 compute-0 podman[311010]: 2025-10-14 09:02:03.7965177 +0000 UTC m=+0.058864998 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:02:03 compute-0 podman[311009]: 2025-10-14 09:02:03.802008205 +0000 UTC m=+0.062324703 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:02:04 compute-0 nova_compute[259627]: 2025-10-14 09:02:04.422 2 INFO nova.virt.libvirt.driver [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Deleting instance files /var/lib/nova/instances/6a03ef41-3cc5-48d2-8796-369687ac6a10_del
Oct 14 09:02:04 compute-0 nova_compute[259627]: 2025-10-14 09:02:04.423 2 INFO nova.virt.libvirt.driver [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Deletion of /var/lib/nova/instances/6a03ef41-3cc5-48d2-8796-369687ac6a10_del complete
Oct 14 09:02:04 compute-0 nova_compute[259627]: 2025-10-14 09:02:04.467 2 INFO nova.compute.manager [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Took 0.95 seconds to destroy the instance on the hypervisor.
Oct 14 09:02:04 compute-0 nova_compute[259627]: 2025-10-14 09:02:04.468 2 DEBUG oslo.service.loopingcall [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:02:04 compute-0 nova_compute[259627]: 2025-10-14 09:02:04.468 2 DEBUG nova.compute.manager [-] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:02:04 compute-0 nova_compute[259627]: 2025-10-14 09:02:04.468 2 DEBUG nova.network.neutron [-] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:02:04 compute-0 nova_compute[259627]: 2025-10-14 09:02:04.781 2 DEBUG nova.network.neutron [-] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:02:04 compute-0 nova_compute[259627]: 2025-10-14 09:02:04.804 2 DEBUG nova.network.neutron [-] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:02:04 compute-0 nova_compute[259627]: 2025-10-14 09:02:04.819 2 INFO nova.compute.manager [-] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Took 0.35 seconds to deallocate network for instance.
Oct 14 09:02:04 compute-0 nova_compute[259627]: 2025-10-14 09:02:04.871 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:02:04 compute-0 nova_compute[259627]: 2025-10-14 09:02:04.872 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:02:04 compute-0 ceph-mon[74249]: pgmap v1359: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 346 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 57 KiB/s wr, 97 op/s
Oct 14 09:02:04 compute-0 nova_compute[259627]: 2025-10-14 09:02:04.969 2 DEBUG oslo_concurrency.processutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.266 2 DEBUG nova.network.neutron [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.297 2 DEBUG oslo_concurrency.lockutils [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Releasing lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.328 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance destroyed successfully.
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.328 2 DEBUG nova.objects.instance [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'numa_topology' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.345 2 DEBUG nova.objects.instance [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'resources' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:02:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:02:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3340642275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.362 2 DEBUG nova.virt.libvirt.vif [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:02:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.362 2 DEBUG nova.network.os_vif_util [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.363 2 DEBUG nova.network.os_vif_util [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.363 2 DEBUG os_vif [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.366 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ec905f0-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.374 2 INFO os_vif [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.375 2 DEBUG oslo_concurrency.processutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.382 2 DEBUG nova.virt.libvirt.driver [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Start _get_guest_xml network_info=[{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.384 2 DEBUG nova.compute.provider_tree [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.388 2 WARNING nova.virt.libvirt.driver [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.393 2 DEBUG nova.virt.libvirt.host [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.394 2 DEBUG nova.virt.libvirt.host [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.398 2 DEBUG nova.scheduler.client.report [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.401 2 DEBUG nova.virt.libvirt.host [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.402 2 DEBUG nova.virt.libvirt.host [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.402 2 DEBUG nova.virt.libvirt.driver [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.402 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.403 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.403 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.403 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.403 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.404 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.404 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.404 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.404 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.405 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.405 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.405 2 DEBUG nova.objects.instance [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'vcpu_model' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.419 2 DEBUG oslo_concurrency.processutils [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.452 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.489 2 INFO nova.scheduler.client.report [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Deleted allocations for instance 6a03ef41-3cc5-48d2-8796-369687ac6a10
Oct 14 09:02:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:02:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/793083822' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:02:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:02:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/793083822' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.566 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "6a03ef41-3cc5-48d2-8796-369687ac6a10" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 202 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 134 KiB/s rd, 61 KiB/s wr, 196 op/s
Oct 14 09:02:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:02:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1401632240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.835 2 DEBUG oslo_concurrency.processutils [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:02:05 compute-0 nova_compute[259627]: 2025-10-14 09:02:05.882 2 DEBUG oslo_concurrency.processutils [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:02:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3340642275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:02:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/793083822' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:02:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/793083822' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:02:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1401632240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:02:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:02:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2373227759' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.328 2 DEBUG oslo_concurrency.processutils [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.330 2 DEBUG nova.virt.libvirt.vif [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:02:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.331 2 DEBUG nova.network.os_vif_util [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.332 2 DEBUG nova.network.os_vif_util [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.334 2 DEBUG nova.objects.instance [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'pci_devices' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.353 2 DEBUG nova.virt.libvirt.driver [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:02:06 compute-0 nova_compute[259627]:   <uuid>de383510-2de3-40bd-b479-c0010b3f2d1c</uuid>
Oct 14 09:02:06 compute-0 nova_compute[259627]:   <name>instance-0000002b</name>
Oct 14 09:02:06 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:02:06 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:02:06 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerActionsTestJSON-server-1794713901</nova:name>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:02:05</nova:creationTime>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:02:06 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:02:06 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:02:06 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:02:06 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:02:06 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:02:06 compute-0 nova_compute[259627]:         <nova:user uuid="aa32af91355a41198fd57121e5c70ec2">tempest-ServerActionsTestJSON-1593617559-project-member</nova:user>
Oct 14 09:02:06 compute-0 nova_compute[259627]:         <nova:project uuid="368d762ed02e459d892ad1e5488c2871">tempest-ServerActionsTestJSON-1593617559</nova:project>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:02:06 compute-0 nova_compute[259627]:         <nova:port uuid="8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2">
Oct 14 09:02:06 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:02:06 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:02:06 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <system>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <entry name="serial">de383510-2de3-40bd-b479-c0010b3f2d1c</entry>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <entry name="uuid">de383510-2de3-40bd-b479-c0010b3f2d1c</entry>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     </system>
Oct 14 09:02:06 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:02:06 compute-0 nova_compute[259627]:   <os>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:   </os>
Oct 14 09:02:06 compute-0 nova_compute[259627]:   <features>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:   </features>
Oct 14 09:02:06 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:02:06 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:02:06 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/de383510-2de3-40bd-b479-c0010b3f2d1c_disk">
Oct 14 09:02:06 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       </source>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:02:06 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/de383510-2de3-40bd-b479-c0010b3f2d1c_disk.config">
Oct 14 09:02:06 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       </source>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:02:06 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:be:e2:1b"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <target dev="tap8ec905f0-b7"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c/console.log" append="off"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <video>
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     </video>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <input type="keyboard" bus="usb"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:02:06 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:02:06 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:02:06 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:02:06 compute-0 nova_compute[259627]: </domain>
Oct 14 09:02:06 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.356 2 DEBUG nova.virt.libvirt.driver [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.356 2 DEBUG nova.virt.libvirt.driver [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.357 2 DEBUG nova.virt.libvirt.vif [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:02:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.358 2 DEBUG nova.network.os_vif_util [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.358 2 DEBUG nova.network.os_vif_util [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.359 2 DEBUG os_vif [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.360 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.361 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.364 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ec905f0-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.365 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ec905f0-b7, col_values=(('external_ids', {'iface-id': '8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:e2:1b', 'vm-uuid': 'de383510-2de3-40bd-b479-c0010b3f2d1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:06 compute-0 NetworkManager[44885]: <info>  [1760432526.3701] manager: (tap8ec905f0-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.375 2 INFO os_vif [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.406 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "333926ec-cf24-467b-b9b1-d1fa70a4feb2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.407 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "333926ec-cf24-467b-b9b1-d1fa70a4feb2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.408 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "333926ec-cf24-467b-b9b1-d1fa70a4feb2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.408 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "333926ec-cf24-467b-b9b1-d1fa70a4feb2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.408 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "333926ec-cf24-467b-b9b1-d1fa70a4feb2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.409 2 INFO nova.compute.manager [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Terminating instance
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.410 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "refresh_cache-333926ec-cf24-467b-b9b1-d1fa70a4feb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.410 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquired lock "refresh_cache-333926ec-cf24-467b-b9b1-d1fa70a4feb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.411 2 DEBUG nova.network.neutron [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:02:06 compute-0 kernel: tap8ec905f0-b7: entered promiscuous mode
Oct 14 09:02:06 compute-0 NetworkManager[44885]: <info>  [1760432526.4519] manager: (tap8ec905f0-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/187)
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:06 compute-0 ovn_controller[152662]: 2025-10-14T09:02:06Z|00418|binding|INFO|Claiming lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for this chassis.
Oct 14 09:02:06 compute-0 ovn_controller[152662]: 2025-10-14T09:02:06Z|00419|binding|INFO|8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2: Claiming fa:16:3e:be:e2:1b 10.100.0.10
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.466 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.468 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 bound to our chassis
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.470 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:02:06 compute-0 ovn_controller[152662]: 2025-10-14T09:02:06Z|00420|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 ovn-installed in OVS
Oct 14 09:02:06 compute-0 ovn_controller[152662]: 2025-10-14T09:02:06Z|00421|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 up in Southbound
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:06 compute-0 systemd-udevd[311167]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:02:06 compute-0 systemd-machined[214636]: New machine qemu-58-instance-0000002b.
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.489 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fe0ddf9c-6a87-450b-98fe-1bcf993864c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.490 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c6eb8a4-61 in ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.492 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c6eb8a4-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.492 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e0eb5401-8f70-4b48-ad33-e533ca237ab2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.492 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d07268-d87b-45d4-82d7-75866d54ba5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:06 compute-0 NetworkManager[44885]: <info>  [1760432526.5014] device (tap8ec905f0-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:02:06 compute-0 NetworkManager[44885]: <info>  [1760432526.5029] device (tap8ec905f0-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:02:06 compute-0 systemd[1]: Started Virtual Machine qemu-58-instance-0000002b.
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.512 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4fd39d-eb5a-42ee-89bd-9cbdb2467531]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.543 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d31e8f25-31ca-4778-8da8-d257e38b818c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.594 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[38742248-42eb-4e79-97a0-1bfe1b3a85a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:06 compute-0 NetworkManager[44885]: <info>  [1760432526.6051] manager: (tap7c6eb8a4-60): new Veth device (/org/freedesktop/NetworkManager/Devices/188)
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.605 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2d65f4ad-7113-4191-a285-7099f8184795]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.641 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2c044f-3b6f-497a-af77-49d653b37e3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.645 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[77c8cdbb-2630-4086-8677-2af75176119d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:06 compute-0 NetworkManager[44885]: <info>  [1760432526.6751] device (tap7c6eb8a4-60): carrier: link connected
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.683 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[655f5591-b0d7-47e1-8f2f-e9334f674207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.704 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd70405-eda6-490c-a355-5b817cdb6aa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630543, 'reachable_time': 20601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311202, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.726 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d5eaa3ec-ab2f-4e9e-9155-9ca0c5d9fe31]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe62:70fd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630543, 'tstamp': 630543}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311203, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.748 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eb91610a-9d90-41de-af83-eb552ce617aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630543, 'reachable_time': 20601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311204, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.788 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f85e6cf8-63e9-4322-933a-470b626f40b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.866 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a707ed85-f579-447a-a6bc-08e70f2c9a8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.868 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.868 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.869 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6eb8a4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:06 compute-0 kernel: tap7c6eb8a4-60: entered promiscuous mode
Oct 14 09:02:06 compute-0 NetworkManager[44885]: <info>  [1760432526.8731] manager: (tap7c6eb8a4-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.876 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c6eb8a4-60, col_values=(('external_ids', {'iface-id': '0f8f2393-a280-4a6e-ac22-da67bf8d74da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:02:06 compute-0 ovn_controller[152662]: 2025-10-14T09:02:06Z|00422|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.902 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.903 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9647d1ac-e513-4640-8fed-468bddea9d16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.903 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:02:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.904 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'env', 'PROCESS_TAG=haproxy-7c6eb8a4-6604-462a-8730-43f3742053a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c6eb8a4-6604-462a-8730-43f3742053a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:02:06 compute-0 ceph-mon[74249]: pgmap v1360: 305 pgs: 305 active+clean; 202 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 134 KiB/s rd, 61 KiB/s wr, 196 op/s
Oct 14 09:02:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2373227759' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:02:06 compute-0 nova_compute[259627]: 2025-10-14 09:02:06.974 2 DEBUG nova.network.neutron [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:02:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:07.020 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:02:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:07.021 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:02:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:07.022 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.295 2 DEBUG nova.compute.manager [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.296 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for de383510-2de3-40bd-b479-c0010b3f2d1c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.296 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432527.29517, de383510-2de3-40bd-b479-c0010b3f2d1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.296 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Resumed (Lifecycle Event)
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.301 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance rebooted successfully.
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.302 2 DEBUG nova.compute.manager [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:02:07 compute-0 podman[311278]: 2025-10-14 09:02:07.302686496 +0000 UTC m=+0.066245800 container create ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.325 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.329 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.333 2 DEBUG nova.compute.manager [req-9f05155f-7eac-4759-a4ca-13663549a8c0 req-eff67994-ac2a-4f69-972d-4642ccea8c0e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.334 2 DEBUG oslo_concurrency.lockutils [req-9f05155f-7eac-4759-a4ca-13663549a8c0 req-eff67994-ac2a-4f69-972d-4642ccea8c0e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.334 2 DEBUG oslo_concurrency.lockutils [req-9f05155f-7eac-4759-a4ca-13663549a8c0 req-eff67994-ac2a-4f69-972d-4642ccea8c0e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.334 2 DEBUG oslo_concurrency.lockutils [req-9f05155f-7eac-4759-a4ca-13663549a8c0 req-eff67994-ac2a-4f69-972d-4642ccea8c0e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.334 2 DEBUG nova.compute.manager [req-9f05155f-7eac-4759-a4ca-13663549a8c0 req-eff67994-ac2a-4f69-972d-4642ccea8c0e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.334 2 WARNING nova.compute.manager [req-9f05155f-7eac-4759-a4ca-13663549a8c0 req-eff67994-ac2a-4f69-972d-4642ccea8c0e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state stopped and task_state powering-on.
Oct 14 09:02:07 compute-0 systemd[1]: Started libpod-conmon-ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6.scope.
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.351 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] During sync_power_state the instance has a pending task (powering-on). Skip.
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.351 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432527.2958403, de383510-2de3-40bd-b479-c0010b3f2d1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.351 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Started (Lifecycle Event)
Oct 14 09:02:07 compute-0 podman[311278]: 2025-10-14 09:02:07.266943957 +0000 UTC m=+0.030503261 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.371 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.374 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:02:07 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87bfab2c1e353f2bf941f3b9e1e0de39f9c4d7dd1ebe63d54aaa2d4f59edff49/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.428 2 DEBUG nova.network.neutron [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:02:07 compute-0 podman[311278]: 2025-10-14 09:02:07.442487713 +0000 UTC m=+0.206047037 container init ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.448 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Releasing lock "refresh_cache-333926ec-cf24-467b-b9b1-d1fa70a4feb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.449 2 DEBUG nova.compute.manager [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:02:07 compute-0 podman[311278]: 2025-10-14 09:02:07.450401057 +0000 UTC m=+0.213960331 container start ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 14 09:02:07 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[311293]: [NOTICE]   (311297) : New worker (311299) forked
Oct 14 09:02:07 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[311293]: [NOTICE]   (311297) : Loading success.
Oct 14 09:02:07 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Oct 14 09:02:07 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002c.scope: Consumed 13.496s CPU time.
Oct 14 09:02:07 compute-0 systemd-machined[214636]: Machine qemu-52-instance-0000002c terminated.
Oct 14 09:02:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:02:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Oct 14 09:02:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Oct 14 09:02:07 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Oct 14 09:02:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 202 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 4.7 KiB/s wr, 101 op/s
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.667 2 INFO nova.virt.libvirt.driver [-] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Instance destroyed successfully.
Oct 14 09:02:07 compute-0 nova_compute[259627]: 2025-10-14 09:02:07.667 2 DEBUG nova.objects.instance [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lazy-loading 'resources' on Instance uuid 333926ec-cf24-467b-b9b1-d1fa70a4feb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:02:08 compute-0 nova_compute[259627]: 2025-10-14 09:02:08.126 2 INFO nova.virt.libvirt.driver [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Deleting instance files /var/lib/nova/instances/333926ec-cf24-467b-b9b1-d1fa70a4feb2_del
Oct 14 09:02:08 compute-0 nova_compute[259627]: 2025-10-14 09:02:08.127 2 INFO nova.virt.libvirt.driver [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Deletion of /var/lib/nova/instances/333926ec-cf24-467b-b9b1-d1fa70a4feb2_del complete
Oct 14 09:02:08 compute-0 nova_compute[259627]: 2025-10-14 09:02:08.184 2 INFO nova.compute.manager [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 14 09:02:08 compute-0 nova_compute[259627]: 2025-10-14 09:02:08.185 2 DEBUG oslo.service.loopingcall [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:02:08 compute-0 nova_compute[259627]: 2025-10-14 09:02:08.185 2 DEBUG nova.compute.manager [-] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:02:08 compute-0 nova_compute[259627]: 2025-10-14 09:02:08.185 2 DEBUG nova.network.neutron [-] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:02:08 compute-0 nova_compute[259627]: 2025-10-14 09:02:08.528 2 DEBUG nova.network.neutron [-] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:02:08 compute-0 nova_compute[259627]: 2025-10-14 09:02:08.542 2 DEBUG nova.network.neutron [-] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:02:08 compute-0 nova_compute[259627]: 2025-10-14 09:02:08.559 2 INFO nova.compute.manager [-] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Took 0.37 seconds to deallocate network for instance.
Oct 14 09:02:08 compute-0 ceph-mon[74249]: osdmap e184: 3 total, 3 up, 3 in
Oct 14 09:02:08 compute-0 ceph-mon[74249]: pgmap v1362: 305 pgs: 305 active+clean; 202 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 4.7 KiB/s wr, 101 op/s
Oct 14 09:02:08 compute-0 nova_compute[259627]: 2025-10-14 09:02:08.609 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:02:08 compute-0 nova_compute[259627]: 2025-10-14 09:02:08.609 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:02:08 compute-0 nova_compute[259627]: 2025-10-14 09:02:08.677 2 DEBUG oslo_concurrency.processutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:02:08 compute-0 nova_compute[259627]: 2025-10-14 09:02:08.844 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432513.8432994, 15778bbd-fbee-44e9-ba12-9884db0e7afb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:02:08 compute-0 nova_compute[259627]: 2025-10-14 09:02:08.845 2 INFO nova.compute.manager [-] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] VM Stopped (Lifecycle Event)
Oct 14 09:02:08 compute-0 nova_compute[259627]: 2025-10-14 09:02:08.864 2 DEBUG nova.compute.manager [None req-e849734f-f36c-40ce-8b36-8665a06bd042 - - - - - -] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:02:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:02:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/104698738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:02:09 compute-0 nova_compute[259627]: 2025-10-14 09:02:09.126 2 DEBUG oslo_concurrency.processutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:02:09 compute-0 nova_compute[259627]: 2025-10-14 09:02:09.134 2 DEBUG nova.compute.provider_tree [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:02:09 compute-0 nova_compute[259627]: 2025-10-14 09:02:09.161 2 DEBUG nova.scheduler.client.report [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:02:09 compute-0 nova_compute[259627]: 2025-10-14 09:02:09.199 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:09 compute-0 nova_compute[259627]: 2025-10-14 09:02:09.223 2 INFO nova.scheduler.client.report [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Deleted allocations for instance 333926ec-cf24-467b-b9b1-d1fa70a4feb2
Oct 14 09:02:09 compute-0 nova_compute[259627]: 2025-10-14 09:02:09.293 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "333926ec-cf24-467b-b9b1-d1fa70a4feb2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:09 compute-0 nova_compute[259627]: 2025-10-14 09:02:09.573 2 DEBUG nova.compute.manager [req-47f558f2-782e-4220-b133-2f9e2f93de66 req-efebf50d-ba4d-4652-8cb8-5b65a8ec482a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:02:09 compute-0 nova_compute[259627]: 2025-10-14 09:02:09.573 2 DEBUG oslo_concurrency.lockutils [req-47f558f2-782e-4220-b133-2f9e2f93de66 req-efebf50d-ba4d-4652-8cb8-5b65a8ec482a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:02:09 compute-0 nova_compute[259627]: 2025-10-14 09:02:09.573 2 DEBUG oslo_concurrency.lockutils [req-47f558f2-782e-4220-b133-2f9e2f93de66 req-efebf50d-ba4d-4652-8cb8-5b65a8ec482a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:02:09 compute-0 nova_compute[259627]: 2025-10-14 09:02:09.574 2 DEBUG oslo_concurrency.lockutils [req-47f558f2-782e-4220-b133-2f9e2f93de66 req-efebf50d-ba4d-4652-8cb8-5b65a8ec482a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:09 compute-0 nova_compute[259627]: 2025-10-14 09:02:09.574 2 DEBUG nova.compute.manager [req-47f558f2-782e-4220-b133-2f9e2f93de66 req-efebf50d-ba4d-4652-8cb8-5b65a8ec482a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:02:09 compute-0 nova_compute[259627]: 2025-10-14 09:02:09.574 2 WARNING nova.compute.manager [req-47f558f2-782e-4220-b133-2f9e2f93de66 req-efebf50d-ba4d-4652-8cb8-5b65a8ec482a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state None.
Oct 14 09:02:09 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/104698738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:02:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 202 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 KiB/s wr, 77 op/s
Oct 14 09:02:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Oct 14 09:02:10 compute-0 ceph-mon[74249]: pgmap v1363: 305 pgs: 305 active+clean; 202 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 KiB/s wr, 77 op/s
Oct 14 09:02:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Oct 14 09:02:10 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Oct 14 09:02:11 compute-0 nova_compute[259627]: 2025-10-14 09:02:11.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:11 compute-0 nova_compute[259627]: 2025-10-14 09:02:11.423 2 INFO nova.compute.manager [None req-bd9712f3-4522-41ab-9b35-19d5f9f70aa3 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Pausing
Oct 14 09:02:11 compute-0 nova_compute[259627]: 2025-10-14 09:02:11.424 2 DEBUG nova.objects.instance [None req-bd9712f3-4522-41ab-9b35-19d5f9f70aa3 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'flavor' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:02:11 compute-0 nova_compute[259627]: 2025-10-14 09:02:11.448 2 DEBUG nova.compute.manager [None req-bd9712f3-4522-41ab-9b35-19d5f9f70aa3 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:02:11 compute-0 nova_compute[259627]: 2025-10-14 09:02:11.450 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432531.448608, de383510-2de3-40bd-b479-c0010b3f2d1c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:02:11 compute-0 nova_compute[259627]: 2025-10-14 09:02:11.450 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Paused (Lifecycle Event)
Oct 14 09:02:11 compute-0 nova_compute[259627]: 2025-10-14 09:02:11.477 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:02:11 compute-0 nova_compute[259627]: 2025-10-14 09:02:11.481 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:02:11 compute-0 nova_compute[259627]: 2025-10-14 09:02:11.526 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 14 09:02:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Oct 14 09:02:11 compute-0 ceph-mon[74249]: osdmap e185: 3 total, 3 up, 3 in
Oct 14 09:02:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Oct 14 09:02:11 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Oct 14 09:02:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 123 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.3 KiB/s wr, 251 op/s
Oct 14 09:02:12 compute-0 nova_compute[259627]: 2025-10-14 09:02:12.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:02:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Oct 14 09:02:12 compute-0 ceph-mon[74249]: osdmap e186: 3 total, 3 up, 3 in
Oct 14 09:02:12 compute-0 ceph-mon[74249]: pgmap v1366: 305 pgs: 305 active+clean; 123 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.3 KiB/s wr, 251 op/s
Oct 14 09:02:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Oct 14 09:02:12 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Oct 14 09:02:13 compute-0 ceph-mon[74249]: osdmap e187: 3 total, 3 up, 3 in
Oct 14 09:02:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 123 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.3 KiB/s wr, 251 op/s
Oct 14 09:02:14 compute-0 nova_compute[259627]: 2025-10-14 09:02:14.323 2 INFO nova.compute.manager [None req-23959e8a-65dc-4a61-9312-1e002c0bb5a1 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Unpausing
Oct 14 09:02:14 compute-0 nova_compute[259627]: 2025-10-14 09:02:14.324 2 DEBUG nova.objects.instance [None req-23959e8a-65dc-4a61-9312-1e002c0bb5a1 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'flavor' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:02:14 compute-0 nova_compute[259627]: 2025-10-14 09:02:14.360 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432534.3604438, de383510-2de3-40bd-b479-c0010b3f2d1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:02:14 compute-0 nova_compute[259627]: 2025-10-14 09:02:14.361 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Resumed (Lifecycle Event)
Oct 14 09:02:14 compute-0 virtqemud[259351]: argument unsupported: QEMU guest agent is not configured
Oct 14 09:02:14 compute-0 nova_compute[259627]: 2025-10-14 09:02:14.368 2 DEBUG nova.virt.libvirt.guest [None req-23959e8a-65dc-4a61-9312-1e002c0bb5a1 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 14 09:02:14 compute-0 nova_compute[259627]: 2025-10-14 09:02:14.369 2 DEBUG nova.compute.manager [None req-23959e8a-65dc-4a61-9312-1e002c0bb5a1 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:02:14 compute-0 nova_compute[259627]: 2025-10-14 09:02:14.380 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:02:14 compute-0 nova_compute[259627]: 2025-10-14 09:02:14.384 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:02:14 compute-0 nova_compute[259627]: 2025-10-14 09:02:14.409 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] During sync_power_state the instance has a pending task (unpausing). Skip.
Oct 14 09:02:14 compute-0 ceph-mon[74249]: pgmap v1368: 305 pgs: 305 active+clean; 123 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.3 KiB/s wr, 251 op/s
Oct 14 09:02:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 15 KiB/s wr, 389 op/s
Oct 14 09:02:15 compute-0 podman[311354]: 2025-10-14 09:02:15.68617884 +0000 UTC m=+0.077461085 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 09:02:15 compute-0 podman[311353]: 2025-10-14 09:02:15.730996582 +0000 UTC m=+0.129371172 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 14 09:02:15 compute-0 nova_compute[259627]: 2025-10-14 09:02:15.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:02:16 compute-0 nova_compute[259627]: 2025-10-14 09:02:16.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:16 compute-0 ceph-mon[74249]: pgmap v1369: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 15 KiB/s wr, 389 op/s
Oct 14 09:02:17 compute-0 nova_compute[259627]: 2025-10-14 09:02:17.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:02:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Oct 14 09:02:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Oct 14 09:02:17 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Oct 14 09:02:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 7.3 KiB/s wr, 137 op/s
Oct 14 09:02:17 compute-0 nova_compute[259627]: 2025-10-14 09:02:17.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:18 compute-0 ceph-mon[74249]: osdmap e188: 3 total, 3 up, 3 in
Oct 14 09:02:18 compute-0 ceph-mon[74249]: pgmap v1371: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 7.3 KiB/s wr, 137 op/s
Oct 14 09:02:18 compute-0 nova_compute[259627]: 2025-10-14 09:02:18.739 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432523.7385423, 6a03ef41-3cc5-48d2-8796-369687ac6a10 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:02:18 compute-0 nova_compute[259627]: 2025-10-14 09:02:18.741 2 INFO nova.compute.manager [-] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] VM Stopped (Lifecycle Event)
Oct 14 09:02:18 compute-0 nova_compute[259627]: 2025-10-14 09:02:18.775 2 DEBUG nova.compute.manager [None req-d33ac651-06f7-45f4-a734-3396fc03802f - - - - - -] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:02:18 compute-0 nova_compute[259627]: 2025-10-14 09:02:18.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:02:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 5.5 KiB/s wr, 103 op/s
Oct 14 09:02:19 compute-0 nova_compute[259627]: 2025-10-14 09:02:19.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:02:19 compute-0 nova_compute[259627]: 2025-10-14 09:02:19.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:02:19 compute-0 nova_compute[259627]: 2025-10-14 09:02:19.980 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:02:19 compute-0 nova_compute[259627]: 2025-10-14 09:02:19.980 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:02:20 compute-0 ceph-mon[74249]: pgmap v1372: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 5.5 KiB/s wr, 103 op/s
Oct 14 09:02:20 compute-0 nova_compute[259627]: 2025-10-14 09:02:20.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:20 compute-0 nova_compute[259627]: 2025-10-14 09:02:20.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:02:21 compute-0 nova_compute[259627]: 2025-10-14 09:02:21.002 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:02:21 compute-0 nova_compute[259627]: 2025-10-14 09:02:21.003 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:02:21 compute-0 nova_compute[259627]: 2025-10-14 09:02:21.003 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:21 compute-0 nova_compute[259627]: 2025-10-14 09:02:21.004 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:02:21 compute-0 nova_compute[259627]: 2025-10-14 09:02:21.004 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:02:21 compute-0 nova_compute[259627]: 2025-10-14 09:02:21.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:02:21 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691515915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:02:21 compute-0 nova_compute[259627]: 2025-10-14 09:02:21.460 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:02:21 compute-0 nova_compute[259627]: 2025-10-14 09:02:21.545 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:02:21 compute-0 nova_compute[259627]: 2025-10-14 09:02:21.546 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:02:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 6.0 KiB/s wr, 98 op/s
Oct 14 09:02:21 compute-0 nova_compute[259627]: 2025-10-14 09:02:21.679 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:02:21 compute-0 nova_compute[259627]: 2025-10-14 09:02:21.680 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3961MB free_disk=59.942630767822266GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:02:21 compute-0 nova_compute[259627]: 2025-10-14 09:02:21.680 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:02:21 compute-0 nova_compute[259627]: 2025-10-14 09:02:21.681 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:02:21 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1691515915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:02:21 compute-0 nova_compute[259627]: 2025-10-14 09:02:21.761 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance de383510-2de3-40bd-b479-c0010b3f2d1c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:02:21 compute-0 nova_compute[259627]: 2025-10-14 09:02:21.761 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:02:21 compute-0 nova_compute[259627]: 2025-10-14 09:02:21.761 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:02:21 compute-0 nova_compute[259627]: 2025-10-14 09:02:21.808 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:02:21 compute-0 ovn_controller[152662]: 2025-10-14T09:02:21Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:be:e2:1b 10.100.0.10
Oct 14 09:02:22 compute-0 nova_compute[259627]: 2025-10-14 09:02:22.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:02:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4036963260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:02:22 compute-0 nova_compute[259627]: 2025-10-14 09:02:22.233 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:02:22 compute-0 nova_compute[259627]: 2025-10-14 09:02:22.240 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:02:22 compute-0 nova_compute[259627]: 2025-10-14 09:02:22.267 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:02:22 compute-0 nova_compute[259627]: 2025-10-14 09:02:22.305 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:02:22 compute-0 nova_compute[259627]: 2025-10-14 09:02:22.306 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:02:22 compute-0 nova_compute[259627]: 2025-10-14 09:02:22.666 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432527.6653194, 333926ec-cf24-467b-b9b1-d1fa70a4feb2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:02:22 compute-0 nova_compute[259627]: 2025-10-14 09:02:22.666 2 INFO nova.compute.manager [-] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] VM Stopped (Lifecycle Event)
Oct 14 09:02:22 compute-0 ceph-mon[74249]: pgmap v1373: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 6.0 KiB/s wr, 98 op/s
Oct 14 09:02:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4036963260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:02:22 compute-0 nova_compute[259627]: 2025-10-14 09:02:22.735 2 DEBUG nova.compute.manager [None req-2e6cb24c-3a7a-46a8-a5a9-6dbd39f94e74 - - - - - -] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:02:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 5.4 KiB/s wr, 88 op/s
Oct 14 09:02:24 compute-0 sudo[311443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:02:24 compute-0 sudo[311443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:02:24 compute-0 sudo[311443]: pam_unix(sudo:session): session closed for user root
Oct 14 09:02:24 compute-0 sudo[311468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:02:24 compute-0 sudo[311468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:02:24 compute-0 sudo[311468]: pam_unix(sudo:session): session closed for user root
Oct 14 09:02:24 compute-0 sudo[311493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:02:24 compute-0 sudo[311493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:02:24 compute-0 sudo[311493]: pam_unix(sudo:session): session closed for user root
Oct 14 09:02:24 compute-0 sudo[311518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:02:24 compute-0 sudo[311518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:02:24 compute-0 ceph-mon[74249]: pgmap v1374: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 5.4 KiB/s wr, 88 op/s
Oct 14 09:02:25 compute-0 sudo[311518]: pam_unix(sudo:session): session closed for user root
Oct 14 09:02:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:02:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:02:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:02:25 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:02:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:02:25 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:02:25 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 962703ad-148d-439c-84fa-162ca297d84c does not exist
Oct 14 09:02:25 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 59aaea5b-eae7-4285-8106-bec33bb28e18 does not exist
Oct 14 09:02:25 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev a815fb60-2811-4271-8913-bf9986ab71fa does not exist
Oct 14 09:02:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:02:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:02:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:02:25 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:02:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:02:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:02:25 compute-0 sudo[311574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:02:25 compute-0 sudo[311574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:02:25 compute-0 sudo[311574]: pam_unix(sudo:session): session closed for user root
Oct 14 09:02:25 compute-0 sudo[311599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:02:25 compute-0 sudo[311599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:02:25 compute-0 sudo[311599]: pam_unix(sudo:session): session closed for user root
Oct 14 09:02:25 compute-0 sudo[311624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:02:25 compute-0 sudo[311624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:02:25 compute-0 sudo[311624]: pam_unix(sudo:session): session closed for user root
Oct 14 09:02:25 compute-0 sudo[311649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:02:25 compute-0 sudo[311649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:02:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 640 KiB/s rd, 13 KiB/s wr, 53 op/s
Oct 14 09:02:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:02:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:02:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:02:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:02:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:02:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:02:25 compute-0 podman[311715]: 2025-10-14 09:02:25.862383398 +0000 UTC m=+0.097845366 container create a81c63fe1725eade62319234cb666e854b9b05e314dcebd8d6218e1ec35a4fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:02:25 compute-0 podman[311715]: 2025-10-14 09:02:25.800460136 +0000 UTC m=+0.035922114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:02:25 compute-0 systemd[1]: Started libpod-conmon-a81c63fe1725eade62319234cb666e854b9b05e314dcebd8d6218e1ec35a4fda.scope.
Oct 14 09:02:25 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:02:25 compute-0 podman[311715]: 2025-10-14 09:02:25.979372385 +0000 UTC m=+0.214834383 container init a81c63fe1725eade62319234cb666e854b9b05e314dcebd8d6218e1ec35a4fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:02:25 compute-0 podman[311715]: 2025-10-14 09:02:25.986970372 +0000 UTC m=+0.222432340 container start a81c63fe1725eade62319234cb666e854b9b05e314dcebd8d6218e1ec35a4fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:02:25 compute-0 brave_lumiere[311731]: 167 167
Oct 14 09:02:25 compute-0 systemd[1]: libpod-a81c63fe1725eade62319234cb666e854b9b05e314dcebd8d6218e1ec35a4fda.scope: Deactivated successfully.
Oct 14 09:02:26 compute-0 podman[311715]: 2025-10-14 09:02:26.004577125 +0000 UTC m=+0.240039143 container attach a81c63fe1725eade62319234cb666e854b9b05e314dcebd8d6218e1ec35a4fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 14 09:02:26 compute-0 podman[311715]: 2025-10-14 09:02:26.006139643 +0000 UTC m=+0.241601631 container died a81c63fe1725eade62319234cb666e854b9b05e314dcebd8d6218e1ec35a4fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:02:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c97b4ef3ad9edd5c353ea609f574c31a2417d5416dc9ff18dd9eb1f53096e0b-merged.mount: Deactivated successfully.
Oct 14 09:02:26 compute-0 podman[311715]: 2025-10-14 09:02:26.183900483 +0000 UTC m=+0.419362451 container remove a81c63fe1725eade62319234cb666e854b9b05e314dcebd8d6218e1ec35a4fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:02:26 compute-0 systemd[1]: libpod-conmon-a81c63fe1725eade62319234cb666e854b9b05e314dcebd8d6218e1ec35a4fda.scope: Deactivated successfully.
Oct 14 09:02:26 compute-0 nova_compute[259627]: 2025-10-14 09:02:26.307 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:02:26 compute-0 nova_compute[259627]: 2025-10-14 09:02:26.309 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:02:26 compute-0 nova_compute[259627]: 2025-10-14 09:02:26.309 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:02:26 compute-0 podman[311756]: 2025-10-14 09:02:26.374276844 +0000 UTC m=+0.050044711 container create 351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 09:02:26 compute-0 nova_compute[259627]: 2025-10-14 09:02:26.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:26 compute-0 systemd[1]: Started libpod-conmon-351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1.scope.
Oct 14 09:02:26 compute-0 podman[311756]: 2025-10-14 09:02:26.347194868 +0000 UTC m=+0.022962775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:02:26 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bb19c1046bcb9a765cc9baa4e85c3d187190f1dffdf75ba81578374adcac48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bb19c1046bcb9a765cc9baa4e85c3d187190f1dffdf75ba81578374adcac48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bb19c1046bcb9a765cc9baa4e85c3d187190f1dffdf75ba81578374adcac48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bb19c1046bcb9a765cc9baa4e85c3d187190f1dffdf75ba81578374adcac48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bb19c1046bcb9a765cc9baa4e85c3d187190f1dffdf75ba81578374adcac48/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:02:26 compute-0 podman[311756]: 2025-10-14 09:02:26.478818995 +0000 UTC m=+0.154586882 container init 351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:02:26 compute-0 podman[311756]: 2025-10-14 09:02:26.48515081 +0000 UTC m=+0.160918687 container start 351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 09:02:26 compute-0 podman[311756]: 2025-10-14 09:02:26.496054808 +0000 UTC m=+0.171822725 container attach 351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 09:02:26 compute-0 nova_compute[259627]: 2025-10-14 09:02:26.581 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:02:26 compute-0 nova_compute[259627]: 2025-10-14 09:02:26.581 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:02:26 compute-0 nova_compute[259627]: 2025-10-14 09:02:26.582 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:02:26 compute-0 nova_compute[259627]: 2025-10-14 09:02:26.582 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:02:27 compute-0 nova_compute[259627]: 2025-10-14 09:02:27.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:27 compute-0 ceph-mon[74249]: pgmap v1375: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 640 KiB/s rd, 13 KiB/s wr, 53 op/s
Oct 14 09:02:27 compute-0 crazy_shockley[311773]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:02:27 compute-0 crazy_shockley[311773]: --> relative data size: 1.0
Oct 14 09:02:27 compute-0 crazy_shockley[311773]: --> All data devices are unavailable
Oct 14 09:02:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:02:27 compute-0 systemd[1]: libpod-351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1.scope: Deactivated successfully.
Oct 14 09:02:27 compute-0 systemd[1]: libpod-351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1.scope: Consumed 1.030s CPU time.
Oct 14 09:02:27 compute-0 podman[311756]: 2025-10-14 09:02:27.583537455 +0000 UTC m=+1.259305372 container died 351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 09:02:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 636 KiB/s rd, 13 KiB/s wr, 52 op/s
Oct 14 09:02:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9bb19c1046bcb9a765cc9baa4e85c3d187190f1dffdf75ba81578374adcac48-merged.mount: Deactivated successfully.
Oct 14 09:02:27 compute-0 podman[311756]: 2025-10-14 09:02:27.695282783 +0000 UTC m=+1.371050650 container remove 351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 09:02:27 compute-0 systemd[1]: libpod-conmon-351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1.scope: Deactivated successfully.
Oct 14 09:02:27 compute-0 sudo[311649]: pam_unix(sudo:session): session closed for user root
Oct 14 09:02:27 compute-0 sudo[311816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:02:27 compute-0 sudo[311816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:02:27 compute-0 sudo[311816]: pam_unix(sudo:session): session closed for user root
Oct 14 09:02:27 compute-0 sudo[311841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:02:27 compute-0 sudo[311841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:02:27 compute-0 sudo[311841]: pam_unix(sudo:session): session closed for user root
Oct 14 09:02:27 compute-0 sudo[311866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:02:27 compute-0 sudo[311866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:02:27 compute-0 sudo[311866]: pam_unix(sudo:session): session closed for user root
Oct 14 09:02:28 compute-0 sudo[311891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:02:28 compute-0 sudo[311891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:02:28 compute-0 podman[311956]: 2025-10-14 09:02:28.410398115 +0000 UTC m=+0.056916250 container create 3f3abee674a680782d2b3e623a2e9f879d558835fe85d1d7a515ff685bed1a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_borg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 09:02:28 compute-0 systemd[1]: Started libpod-conmon-3f3abee674a680782d2b3e623a2e9f879d558835fe85d1d7a515ff685bed1a5f.scope.
Oct 14 09:02:28 compute-0 podman[311956]: 2025-10-14 09:02:28.381957946 +0000 UTC m=+0.028476111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:02:28 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:02:28 compute-0 podman[311956]: 2025-10-14 09:02:28.501051914 +0000 UTC m=+0.147570039 container init 3f3abee674a680782d2b3e623a2e9f879d558835fe85d1d7a515ff685bed1a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:02:28 compute-0 podman[311956]: 2025-10-14 09:02:28.508975559 +0000 UTC m=+0.155493674 container start 3f3abee674a680782d2b3e623a2e9f879d558835fe85d1d7a515ff685bed1a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 09:02:28 compute-0 focused_borg[311973]: 167 167
Oct 14 09:02:28 compute-0 systemd[1]: libpod-3f3abee674a680782d2b3e623a2e9f879d558835fe85d1d7a515ff685bed1a5f.scope: Deactivated successfully.
Oct 14 09:02:28 compute-0 podman[311956]: 2025-10-14 09:02:28.520863491 +0000 UTC m=+0.167381636 container attach 3f3abee674a680782d2b3e623a2e9f879d558835fe85d1d7a515ff685bed1a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 09:02:28 compute-0 podman[311956]: 2025-10-14 09:02:28.52120928 +0000 UTC m=+0.167727395 container died 3f3abee674a680782d2b3e623a2e9f879d558835fe85d1d7a515ff685bed1a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_borg, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 09:02:28 compute-0 nova_compute[259627]: 2025-10-14 09:02:28.531 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:02:28 compute-0 nova_compute[259627]: 2025-10-14 09:02:28.550 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:02:28 compute-0 nova_compute[259627]: 2025-10-14 09:02:28.551 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:02:28 compute-0 nova_compute[259627]: 2025-10-14 09:02:28.552 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:02:28 compute-0 nova_compute[259627]: 2025-10-14 09:02:28.552 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:02:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebf3bc561a275aafede5d37eee356d629fa0679de8415fc19cf02944ff6b89fb-merged.mount: Deactivated successfully.
Oct 14 09:02:28 compute-0 podman[311956]: 2025-10-14 09:02:28.640264177 +0000 UTC m=+0.286782332 container remove 3f3abee674a680782d2b3e623a2e9f879d558835fe85d1d7a515ff685bed1a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 09:02:28 compute-0 systemd[1]: libpod-conmon-3f3abee674a680782d2b3e623a2e9f879d558835fe85d1d7a515ff685bed1a5f.scope: Deactivated successfully.
Oct 14 09:02:28 compute-0 podman[311997]: 2025-10-14 09:02:28.831309124 +0000 UTC m=+0.052962583 container create 8de6a954bc9214eddc7d05a273fa97588fdb7d0e6875685dc97036faf93d9046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:02:28 compute-0 systemd[1]: Started libpod-conmon-8de6a954bc9214eddc7d05a273fa97588fdb7d0e6875685dc97036faf93d9046.scope.
Oct 14 09:02:28 compute-0 podman[311997]: 2025-10-14 09:02:28.805224593 +0000 UTC m=+0.026878102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:02:28 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c101f4539c5d576d3707776cb5d149273b66f8b7a6dc6023ce06227012df4bed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c101f4539c5d576d3707776cb5d149273b66f8b7a6dc6023ce06227012df4bed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c101f4539c5d576d3707776cb5d149273b66f8b7a6dc6023ce06227012df4bed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c101f4539c5d576d3707776cb5d149273b66f8b7a6dc6023ce06227012df4bed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:02:28 compute-0 podman[311997]: 2025-10-14 09:02:28.937841863 +0000 UTC m=+0.159495402 container init 8de6a954bc9214eddc7d05a273fa97588fdb7d0e6875685dc97036faf93d9046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:02:28 compute-0 podman[311997]: 2025-10-14 09:02:28.952294019 +0000 UTC m=+0.173947508 container start 8de6a954bc9214eddc7d05a273fa97588fdb7d0e6875685dc97036faf93d9046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_faraday, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:02:28 compute-0 podman[311997]: 2025-10-14 09:02:28.975085429 +0000 UTC m=+0.196738928 container attach 8de6a954bc9214eddc7d05a273fa97588fdb7d0e6875685dc97036faf93d9046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_faraday, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:02:29 compute-0 ceph-mon[74249]: pgmap v1376: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 636 KiB/s rd, 13 KiB/s wr, 52 op/s
Oct 14 09:02:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 533 KiB/s rd, 11 KiB/s wr, 44 op/s
Oct 14 09:02:29 compute-0 clever_faraday[312013]: {
Oct 14 09:02:29 compute-0 clever_faraday[312013]:     "0": [
Oct 14 09:02:29 compute-0 clever_faraday[312013]:         {
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "devices": [
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "/dev/loop3"
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             ],
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "lv_name": "ceph_lv0",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "lv_size": "21470642176",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "name": "ceph_lv0",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "tags": {
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.cluster_name": "ceph",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.crush_device_class": "",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.encrypted": "0",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.osd_id": "0",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.type": "block",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.vdo": "0"
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             },
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "type": "block",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "vg_name": "ceph_vg0"
Oct 14 09:02:29 compute-0 clever_faraday[312013]:         }
Oct 14 09:02:29 compute-0 clever_faraday[312013]:     ],
Oct 14 09:02:29 compute-0 clever_faraday[312013]:     "1": [
Oct 14 09:02:29 compute-0 clever_faraday[312013]:         {
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "devices": [
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "/dev/loop4"
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             ],
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "lv_name": "ceph_lv1",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "lv_size": "21470642176",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "name": "ceph_lv1",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "tags": {
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.cluster_name": "ceph",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.crush_device_class": "",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.encrypted": "0",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.osd_id": "1",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.type": "block",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.vdo": "0"
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             },
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "type": "block",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "vg_name": "ceph_vg1"
Oct 14 09:02:29 compute-0 clever_faraday[312013]:         }
Oct 14 09:02:29 compute-0 clever_faraday[312013]:     ],
Oct 14 09:02:29 compute-0 clever_faraday[312013]:     "2": [
Oct 14 09:02:29 compute-0 clever_faraday[312013]:         {
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "devices": [
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "/dev/loop5"
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             ],
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "lv_name": "ceph_lv2",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "lv_size": "21470642176",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "name": "ceph_lv2",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "tags": {
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.cluster_name": "ceph",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.crush_device_class": "",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.encrypted": "0",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.osd_id": "2",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.type": "block",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:                 "ceph.vdo": "0"
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             },
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "type": "block",
Oct 14 09:02:29 compute-0 clever_faraday[312013]:             "vg_name": "ceph_vg2"
Oct 14 09:02:29 compute-0 clever_faraday[312013]:         }
Oct 14 09:02:29 compute-0 clever_faraday[312013]:     ]
Oct 14 09:02:29 compute-0 clever_faraday[312013]: }
Oct 14 09:02:29 compute-0 systemd[1]: libpod-8de6a954bc9214eddc7d05a273fa97588fdb7d0e6875685dc97036faf93d9046.scope: Deactivated successfully.
Oct 14 09:02:29 compute-0 podman[311997]: 2025-10-14 09:02:29.848960365 +0000 UTC m=+1.070613834 container died 8de6a954bc9214eddc7d05a273fa97588fdb7d0e6875685dc97036faf93d9046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_faraday, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 09:02:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c101f4539c5d576d3707776cb5d149273b66f8b7a6dc6023ce06227012df4bed-merged.mount: Deactivated successfully.
Oct 14 09:02:29 compute-0 podman[311997]: 2025-10-14 09:02:29.929191997 +0000 UTC m=+1.150845496 container remove 8de6a954bc9214eddc7d05a273fa97588fdb7d0e6875685dc97036faf93d9046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 14 09:02:29 compute-0 systemd[1]: libpod-conmon-8de6a954bc9214eddc7d05a273fa97588fdb7d0e6875685dc97036faf93d9046.scope: Deactivated successfully.
Oct 14 09:02:29 compute-0 sudo[311891]: pam_unix(sudo:session): session closed for user root
Oct 14 09:02:30 compute-0 sudo[312036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:02:30 compute-0 sudo[312036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:02:30 compute-0 sudo[312036]: pam_unix(sudo:session): session closed for user root
Oct 14 09:02:30 compute-0 sudo[312061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:02:30 compute-0 sudo[312061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:02:30 compute-0 sudo[312061]: pam_unix(sudo:session): session closed for user root
Oct 14 09:02:30 compute-0 sudo[312086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:02:30 compute-0 sudo[312086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:02:30 compute-0 sudo[312086]: pam_unix(sudo:session): session closed for user root
Oct 14 09:02:30 compute-0 sudo[312111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:02:30 compute-0 sudo[312111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:02:30 compute-0 podman[312175]: 2025-10-14 09:02:30.599228442 +0000 UTC m=+0.046970706 container create c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 09:02:30 compute-0 systemd[1]: Started libpod-conmon-c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684.scope.
Oct 14 09:02:30 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:02:30 compute-0 podman[312175]: 2025-10-14 09:02:30.57680851 +0000 UTC m=+0.024550754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:02:30 compute-0 podman[312175]: 2025-10-14 09:02:30.691819458 +0000 UTC m=+0.139561702 container init c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 09:02:30 compute-0 podman[312175]: 2025-10-14 09:02:30.703965067 +0000 UTC m=+0.151707301 container start c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 09:02:30 compute-0 podman[312175]: 2025-10-14 09:02:30.707700238 +0000 UTC m=+0.155442502 container attach c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 09:02:30 compute-0 busy_maxwell[312191]: 167 167
Oct 14 09:02:30 compute-0 systemd[1]: libpod-c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684.scope: Deactivated successfully.
Oct 14 09:02:30 compute-0 conmon[312191]: conmon c7d5ea9e01bd71e79574 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684.scope/container/memory.events
Oct 14 09:02:30 compute-0 podman[312175]: 2025-10-14 09:02:30.713485661 +0000 UTC m=+0.161227955 container died c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 09:02:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-90ec6ca40dcc481486dc66b368804c0bd1bd2a227b0a8d1c42c0a91794cdb63b-merged.mount: Deactivated successfully.
Oct 14 09:02:30 compute-0 podman[312175]: 2025-10-14 09:02:30.783361979 +0000 UTC m=+0.231104233 container remove c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:02:30 compute-0 systemd[1]: libpod-conmon-c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684.scope: Deactivated successfully.
Oct 14 09:02:30 compute-0 podman[312216]: 2025-10-14 09:02:30.988159924 +0000 UTC m=+0.038416445 container create 6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:02:31 compute-0 systemd[1]: Started libpod-conmon-6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca.scope.
Oct 14 09:02:31 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:02:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27fd5e43c38c3b91e10868ec246bc444375bfd3f0799ba58001ad1136bc1ef7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:02:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27fd5e43c38c3b91e10868ec246bc444375bfd3f0799ba58001ad1136bc1ef7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:02:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27fd5e43c38c3b91e10868ec246bc444375bfd3f0799ba58001ad1136bc1ef7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:02:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27fd5e43c38c3b91e10868ec246bc444375bfd3f0799ba58001ad1136bc1ef7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:02:31 compute-0 podman[312216]: 2025-10-14 09:02:30.970191732 +0000 UTC m=+0.020448243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:02:31 compute-0 podman[312216]: 2025-10-14 09:02:31.078458473 +0000 UTC m=+0.128714984 container init 6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 09:02:31 compute-0 podman[312216]: 2025-10-14 09:02:31.086430799 +0000 UTC m=+0.136687290 container start 6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 09:02:31 compute-0 podman[312216]: 2025-10-14 09:02:31.089993217 +0000 UTC m=+0.140249738 container attach 6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:02:31 compute-0 ceph-mon[74249]: pgmap v1377: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 533 KiB/s rd, 11 KiB/s wr, 44 op/s
Oct 14 09:02:31 compute-0 nova_compute[259627]: 2025-10-14 09:02:31.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 533 KiB/s rd, 21 KiB/s wr, 44 op/s
Oct 14 09:02:32 compute-0 amazing_hoover[312232]: {
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:         "osd_id": 2,
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:         "type": "bluestore"
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:     },
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:         "osd_id": 1,
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:         "type": "bluestore"
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:     },
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:         "osd_id": 0,
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:         "type": "bluestore"
Oct 14 09:02:32 compute-0 amazing_hoover[312232]:     }
Oct 14 09:02:32 compute-0 amazing_hoover[312232]: }
Oct 14 09:02:32 compute-0 nova_compute[259627]: 2025-10-14 09:02:32.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:32 compute-0 systemd[1]: libpod-6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca.scope: Deactivated successfully.
Oct 14 09:02:32 compute-0 conmon[312232]: conmon 6913a6f396f1cc07d80e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca.scope/container/memory.events
Oct 14 09:02:32 compute-0 systemd[1]: libpod-6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca.scope: Consumed 1.052s CPU time.
Oct 14 09:02:32 compute-0 podman[312216]: 2025-10-14 09:02:32.136669971 +0000 UTC m=+1.186926472 container died 6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:02:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a27fd5e43c38c3b91e10868ec246bc444375bfd3f0799ba58001ad1136bc1ef7-merged.mount: Deactivated successfully.
Oct 14 09:02:32 compute-0 podman[312216]: 2025-10-14 09:02:32.278133669 +0000 UTC m=+1.328390160 container remove 6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:02:32 compute-0 systemd[1]: libpod-conmon-6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca.scope: Deactivated successfully.
Oct 14 09:02:32 compute-0 sudo[312111]: pam_unix(sudo:session): session closed for user root
Oct 14 09:02:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:02:32 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:02:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:02:32 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:02:32 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 473321c2-10e0-4ace-ac05-3b901317db64 does not exist
Oct 14 09:02:32 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev af3cac45-41d7-4a50-884c-0174968db107 does not exist
Oct 14 09:02:32 compute-0 sudo[312278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:02:32 compute-0 sudo[312278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:02:32 compute-0 sudo[312278]: pam_unix(sudo:session): session closed for user root
Oct 14 09:02:32 compute-0 sudo[312303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:02:32 compute-0 sudo[312303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:02:32 compute-0 sudo[312303]: pam_unix(sudo:session): session closed for user root
Oct 14 09:02:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:02:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:02:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:02:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:02:32
Oct 14 09:02:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:02:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:02:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['vms', '.rgw.root', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta']
Oct 14 09:02:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:02:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:02:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:02:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:02:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:02:33 compute-0 ceph-mon[74249]: pgmap v1378: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 533 KiB/s rd, 21 KiB/s wr, 44 op/s
Oct 14 09:02:33 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:02:33 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:02:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 496 KiB/s rd, 20 KiB/s wr, 40 op/s
Oct 14 09:02:33 compute-0 ovn_controller[152662]: 2025-10-14T09:02:33Z|00423|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 09:02:33 compute-0 nova_compute[259627]: 2025-10-14 09:02:33.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:34 compute-0 nova_compute[259627]: 2025-10-14 09:02:34.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:34 compute-0 podman[312329]: 2025-10-14 09:02:34.655309186 +0000 UTC m=+0.057525706 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 09:02:34 compute-0 podman[312328]: 2025-10-14 09:02:34.677223025 +0000 UTC m=+0.081813553 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS)
Oct 14 09:02:35 compute-0 nova_compute[259627]: 2025-10-14 09:02:35.112 2 DEBUG oslo_concurrency.lockutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:02:35 compute-0 nova_compute[259627]: 2025-10-14 09:02:35.112 2 DEBUG oslo_concurrency.lockutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:02:35 compute-0 nova_compute[259627]: 2025-10-14 09:02:35.113 2 INFO nova.compute.manager [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Rebooting instance
Oct 14 09:02:35 compute-0 nova_compute[259627]: 2025-10-14 09:02:35.139 2 DEBUG oslo_concurrency.lockutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:02:35 compute-0 nova_compute[259627]: 2025-10-14 09:02:35.140 2 DEBUG oslo_concurrency.lockutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquired lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:02:35 compute-0 nova_compute[259627]: 2025-10-14 09:02:35.140 2 DEBUG nova.network.neutron [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:02:35 compute-0 ceph-mon[74249]: pgmap v1379: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 496 KiB/s rd, 20 KiB/s wr, 40 op/s
Oct 14 09:02:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 498 KiB/s rd, 34 KiB/s wr, 42 op/s
Oct 14 09:02:36 compute-0 nova_compute[259627]: 2025-10-14 09:02:36.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.155 2 DEBUG nova.network.neutron [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.182 2 DEBUG oslo_concurrency.lockutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Releasing lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.184 2 DEBUG nova.compute.manager [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:37 compute-0 kernel: tap8ec905f0-b7 (unregistering): left promiscuous mode
Oct 14 09:02:37 compute-0 NetworkManager[44885]: <info>  [1760432557.3901] device (tap8ec905f0-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:02:37 compute-0 ovn_controller[152662]: 2025-10-14T09:02:37Z|00424|binding|INFO|Releasing lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 from this chassis (sb_readonly=0)
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:37 compute-0 ovn_controller[152662]: 2025-10-14T09:02:37Z|00425|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 down in Southbound
Oct 14 09:02:37 compute-0 ovn_controller[152662]: 2025-10-14T09:02:37Z|00426|binding|INFO|Removing iface tap8ec905f0-b7 ovn-installed in OVS
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.455 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:02:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.457 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 unbound from our chassis
Oct 14 09:02:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.458 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c6eb8a4-6604-462a-8730-43f3742053a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:02:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.459 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8f107c14-0a88-4923-8364-7812c3d0c22e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.460 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace which is not needed anymore
Oct 14 09:02:37 compute-0 ceph-mon[74249]: pgmap v1380: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 498 KiB/s rd, 34 KiB/s wr, 42 op/s
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:37 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 14 09:02:37 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000002b.scope: Consumed 12.884s CPU time.
Oct 14 09:02:37 compute-0 systemd-machined[214636]: Machine qemu-58-instance-0000002b terminated.
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.555 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance destroyed successfully.
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.556 2 DEBUG nova.objects.instance [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'resources' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.570 2 DEBUG nova.virt.libvirt.vif [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:02:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.570 2 DEBUG nova.network.os_vif_util [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.571 2 DEBUG nova.network.os_vif_util [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.571 2 DEBUG os_vif [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.573 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ec905f0-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.577 2 INFO os_vif [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')
Oct 14 09:02:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:02:37 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[311293]: [NOTICE]   (311297) : haproxy version is 2.8.14-c23fe91
Oct 14 09:02:37 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[311293]: [NOTICE]   (311297) : path to executable is /usr/sbin/haproxy
Oct 14 09:02:37 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[311293]: [WARNING]  (311297) : Exiting Master process...
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.583 2 DEBUG nova.virt.libvirt.driver [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Start _get_guest_xml network_info=[{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:02:37 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[311293]: [ALERT]    (311297) : Current worker (311299) exited with code 143 (Terminated)
Oct 14 09:02:37 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[311293]: [WARNING]  (311297) : All workers exited. Exiting... (0)
Oct 14 09:02:37 compute-0 systemd[1]: libpod-ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6.scope: Deactivated successfully.
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.588 2 WARNING nova.virt.libvirt.driver [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:02:37 compute-0 podman[312394]: 2025-10-14 09:02:37.593481116 +0000 UTC m=+0.046608137 container died ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.594 2 DEBUG nova.virt.libvirt.host [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.595 2 DEBUG nova.virt.libvirt.host [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.599 2 DEBUG nova.virt.libvirt.host [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.599 2 DEBUG nova.virt.libvirt.host [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.599 2 DEBUG nova.virt.libvirt.driver [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.600 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.600 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.600 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.601 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.601 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.601 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.601 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.602 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.602 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.602 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.603 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.603 2 DEBUG nova.objects.instance [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'vcpu_model' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.619 2 DEBUG oslo_concurrency.processutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:02:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6-userdata-shm.mount: Deactivated successfully.
Oct 14 09:02:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-87bfab2c1e353f2bf941f3b9e1e0de39f9c4d7dd1ebe63d54aaa2d4f59edff49-merged.mount: Deactivated successfully.
Oct 14 09:02:37 compute-0 podman[312394]: 2025-10-14 09:02:37.640671657 +0000 UTC m=+0.093798668 container cleanup ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 09:02:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 24 KiB/s wr, 3 op/s
Oct 14 09:02:37 compute-0 systemd[1]: libpod-conmon-ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6.scope: Deactivated successfully.
Oct 14 09:02:37 compute-0 podman[312435]: 2025-10-14 09:02:37.704132917 +0000 UTC m=+0.042566608 container remove ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 09:02:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[77074e26-c1ec-4856-a42a-245b929ac869]: (4, ('Tue Oct 14 09:02:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6)\nab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6\nTue Oct 14 09:02:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6)\nab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.714 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[700a38ec-b656-45ca-b334-ad990b3f7468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.714 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:37 compute-0 kernel: tap7c6eb8a4-60: left promiscuous mode
Oct 14 09:02:37 compute-0 nova_compute[259627]: 2025-10-14 09:02:37.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.744 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[616ed5bb-37dd-4b3a-84be-dedb66bbb965]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.765 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8a3794-c370-4a30-83fe-d05afb5cd381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.767 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9c00199a-ece1-4af9-a1cb-58482db23cd7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.783 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f8626e6f-2bb0-4227-aa1c-e88eb03fdcec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630534, 'reachable_time': 22969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312469, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d7c6eb8a4\x2d6604\x2d462a\x2d8730\x2d43f3742053a7.mount: Deactivated successfully.
Oct 14 09:02:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.786 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:02:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.786 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8e784a-51fb-4a7a-9b62-42ee63deffda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:02:38 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1125458685' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.079 2 DEBUG oslo_concurrency.processutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.112 2 DEBUG oslo_concurrency.processutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:02:38 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1125458685' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:02:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:02:38 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/975625554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.623 2 DEBUG oslo_concurrency.processutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.625 2 DEBUG nova.virt.libvirt.vif [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:02:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.626 2 DEBUG nova.network.os_vif_util [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.627 2 DEBUG nova.network.os_vif_util [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.629 2 DEBUG nova.objects.instance [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'pci_devices' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.647 2 DEBUG nova.virt.libvirt.driver [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:02:38 compute-0 nova_compute[259627]:   <uuid>de383510-2de3-40bd-b479-c0010b3f2d1c</uuid>
Oct 14 09:02:38 compute-0 nova_compute[259627]:   <name>instance-0000002b</name>
Oct 14 09:02:38 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:02:38 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:02:38 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerActionsTestJSON-server-1794713901</nova:name>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:02:37</nova:creationTime>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:02:38 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:02:38 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:02:38 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:02:38 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:02:38 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:02:38 compute-0 nova_compute[259627]:         <nova:user uuid="aa32af91355a41198fd57121e5c70ec2">tempest-ServerActionsTestJSON-1593617559-project-member</nova:user>
Oct 14 09:02:38 compute-0 nova_compute[259627]:         <nova:project uuid="368d762ed02e459d892ad1e5488c2871">tempest-ServerActionsTestJSON-1593617559</nova:project>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:02:38 compute-0 nova_compute[259627]:         <nova:port uuid="8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2">
Oct 14 09:02:38 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:02:38 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:02:38 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <system>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <entry name="serial">de383510-2de3-40bd-b479-c0010b3f2d1c</entry>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <entry name="uuid">de383510-2de3-40bd-b479-c0010b3f2d1c</entry>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     </system>
Oct 14 09:02:38 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:02:38 compute-0 nova_compute[259627]:   <os>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:   </os>
Oct 14 09:02:38 compute-0 nova_compute[259627]:   <features>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:   </features>
Oct 14 09:02:38 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:02:38 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:02:38 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/de383510-2de3-40bd-b479-c0010b3f2d1c_disk">
Oct 14 09:02:38 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       </source>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:02:38 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/de383510-2de3-40bd-b479-c0010b3f2d1c_disk.config">
Oct 14 09:02:38 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       </source>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:02:38 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:be:e2:1b"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <target dev="tap8ec905f0-b7"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c/console.log" append="off"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <video>
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     </video>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <input type="keyboard" bus="usb"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:02:38 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:02:38 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:02:38 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:02:38 compute-0 nova_compute[259627]: </domain>
Oct 14 09:02:38 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.650 2 DEBUG nova.virt.libvirt.driver [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.651 2 DEBUG nova.virt.libvirt.driver [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.653 2 DEBUG nova.virt.libvirt.vif [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:02:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.654 2 DEBUG nova.network.os_vif_util [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.655 2 DEBUG nova.network.os_vif_util [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.655 2 DEBUG os_vif [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.657 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.658 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.662 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ec905f0-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.663 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ec905f0-b7, col_values=(('external_ids', {'iface-id': '8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:e2:1b', 'vm-uuid': 'de383510-2de3-40bd-b479-c0010b3f2d1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:02:38 compute-0 NetworkManager[44885]: <info>  [1760432558.7112] manager: (tap8ec905f0-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.716 2 INFO os_vif [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')
Oct 14 09:02:38 compute-0 kernel: tap8ec905f0-b7: entered promiscuous mode
Oct 14 09:02:38 compute-0 NetworkManager[44885]: <info>  [1760432558.7969] manager: (tap8ec905f0-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Oct 14 09:02:38 compute-0 systemd-udevd[312371]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:02:38 compute-0 ovn_controller[152662]: 2025-10-14T09:02:38Z|00427|binding|INFO|Claiming lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for this chassis.
Oct 14 09:02:38 compute-0 ovn_controller[152662]: 2025-10-14T09:02:38Z|00428|binding|INFO|8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2: Claiming fa:16:3e:be:e2:1b 10.100.0.10
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:38 compute-0 NetworkManager[44885]: <info>  [1760432558.8188] device (tap8ec905f0-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:02:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.816 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:02:38 compute-0 NetworkManager[44885]: <info>  [1760432558.8203] device (tap8ec905f0-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:02:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.820 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 bound to our chassis
Oct 14 09:02:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.821 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:02:38 compute-0 ovn_controller[152662]: 2025-10-14T09:02:38Z|00429|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 ovn-installed in OVS
Oct 14 09:02:38 compute-0 ovn_controller[152662]: 2025-10-14T09:02:38Z|00430|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 up in Southbound
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:38 compute-0 nova_compute[259627]: 2025-10-14 09:02:38.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.831 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[112103ed-ea69-4c24-9778-de3ca4da2ebd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.833 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c6eb8a4-61 in ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:02:38 compute-0 systemd-machined[214636]: New machine qemu-59-instance-0000002b.
Oct 14 09:02:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.838 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c6eb8a4-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:02:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.838 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb540b4-8414-4cc5-b705-315bcaa242e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.839 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d5b97c-51d2-4b1c-9c28-5942806aa673]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.850 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[341d8424-d80f-4ceb-b9b7-a74e66ce0a8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:38 compute-0 systemd[1]: Started Virtual Machine qemu-59-instance-0000002b.
Oct 14 09:02:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.862 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3c6ae106-1ae9-4ad1-9490-266bfb3dd44e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.913 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0eed56d8-81cc-4a50-b50d-f76b029c3e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.919 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e581e25f-f129-4f41-8338-3469360b9a71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:38 compute-0 NetworkManager[44885]: <info>  [1760432558.9209] manager: (tap7c6eb8a4-60): new Veth device (/org/freedesktop/NetworkManager/Devices/192)
Oct 14 09:02:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.960 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2acf514e-218b-40ff-8943-d850ab49bedf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.965 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9982b274-6bfe-49b2-8b2e-79bdceac3d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:38 compute-0 NetworkManager[44885]: <info>  [1760432558.9913] device (tap7c6eb8a4-60): carrier: link connected
Oct 14 09:02:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.999 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[41af675a-b1ab-4a5b-982f-23bbfd0cd486]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.016 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[515895c8-0dc3-496d-a6fa-7f0b0b509eb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633775, 'reachable_time': 24446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312556, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.031 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b45aa3-1bf7-48b6-bc7e-dd75926290c9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe62:70fd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633775, 'tstamp': 633775}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312557, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.046 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9796f1aa-3192-474c-9fe9-512287c195e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633775, 'reachable_time': 24446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312558, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.090 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e32a5e0a-efa9-4a5f-b429-37a6bad21314]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.162 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[143d2519-effb-4934-b40a-f4af4824df74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.164 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.164 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.164 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6eb8a4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:02:39 compute-0 nova_compute[259627]: 2025-10-14 09:02:39.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:39 compute-0 NetworkManager[44885]: <info>  [1760432559.1673] manager: (tap7c6eb8a4-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Oct 14 09:02:39 compute-0 kernel: tap7c6eb8a4-60: entered promiscuous mode
Oct 14 09:02:39 compute-0 nova_compute[259627]: 2025-10-14 09:02:39.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.171 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c6eb8a4-60, col_values=(('external_ids', {'iface-id': '0f8f2393-a280-4a6e-ac22-da67bf8d74da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:02:39 compute-0 ovn_controller[152662]: 2025-10-14T09:02:39Z|00431|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 09:02:39 compute-0 nova_compute[259627]: 2025-10-14 09:02:39.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.196 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.198 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[baf7973e-4334-4a71-bdaf-272e5c9e06c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.199 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:02:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.200 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'env', 'PROCESS_TAG=haproxy-7c6eb8a4-6604-462a-8730-43f3742053a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c6eb8a4-6604-462a-8730-43f3742053a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:02:39 compute-0 ceph-mon[74249]: pgmap v1381: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 24 KiB/s wr, 3 op/s
Oct 14 09:02:39 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/975625554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:02:39 compute-0 podman[312628]: 2025-10-14 09:02:39.579912435 +0000 UTC m=+0.052257755 container create 123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:02:39 compute-0 systemd[1]: Started libpod-conmon-123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6.scope.
Oct 14 09:02:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 24 KiB/s wr, 3 op/s
Oct 14 09:02:39 compute-0 podman[312628]: 2025-10-14 09:02:39.550618855 +0000 UTC m=+0.022964205 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:02:39 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b394997ed82cbbabad23e931b57790be0862fb55a64f523707c20df554392e6a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:02:39 compute-0 podman[312628]: 2025-10-14 09:02:39.678191422 +0000 UTC m=+0.150536762 container init 123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 09:02:39 compute-0 podman[312628]: 2025-10-14 09:02:39.68340717 +0000 UTC m=+0.155752480 container start 123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 09:02:39 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[312645]: [NOTICE]   (312649) : New worker (312651) forked
Oct 14 09:02:39 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[312645]: [NOTICE]   (312649) : Loading success.
Oct 14 09:02:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:40.014 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:02:40 compute-0 nova_compute[259627]: 2025-10-14 09:02:40.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:40.015 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:02:40 compute-0 nova_compute[259627]: 2025-10-14 09:02:40.138 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for de383510-2de3-40bd-b479-c0010b3f2d1c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:02:40 compute-0 nova_compute[259627]: 2025-10-14 09:02:40.138 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432560.137207, de383510-2de3-40bd-b479-c0010b3f2d1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:02:40 compute-0 nova_compute[259627]: 2025-10-14 09:02:40.138 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Resumed (Lifecycle Event)
Oct 14 09:02:40 compute-0 nova_compute[259627]: 2025-10-14 09:02:40.141 2 DEBUG nova.compute.manager [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:02:40 compute-0 nova_compute[259627]: 2025-10-14 09:02:40.146 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance rebooted successfully.
Oct 14 09:02:40 compute-0 nova_compute[259627]: 2025-10-14 09:02:40.146 2 DEBUG nova.compute.manager [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:02:40 compute-0 nova_compute[259627]: 2025-10-14 09:02:40.155 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:02:40 compute-0 nova_compute[259627]: 2025-10-14 09:02:40.158 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:02:40 compute-0 nova_compute[259627]: 2025-10-14 09:02:40.189 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 14 09:02:40 compute-0 nova_compute[259627]: 2025-10-14 09:02:40.189 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432560.1374612, de383510-2de3-40bd-b479-c0010b3f2d1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:02:40 compute-0 nova_compute[259627]: 2025-10-14 09:02:40.189 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Started (Lifecycle Event)
Oct 14 09:02:40 compute-0 nova_compute[259627]: 2025-10-14 09:02:40.200 2 DEBUG oslo_concurrency.lockutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:40 compute-0 nova_compute[259627]: 2025-10-14 09:02:40.220 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:02:40 compute-0 nova_compute[259627]: 2025-10-14 09:02:40.224 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:02:41 compute-0 ceph-mon[74249]: pgmap v1382: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 24 KiB/s wr, 3 op/s
Oct 14 09:02:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 24 KiB/s wr, 10 op/s
Oct 14 09:02:41 compute-0 nova_compute[259627]: 2025-10-14 09:02:41.984 2 DEBUG nova.compute.manager [req-aed2b03e-ce80-4432-a286-8cb0ef7c1fed req-7fcde271-2ef0-411f-890b-d50e723b2b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:02:41 compute-0 nova_compute[259627]: 2025-10-14 09:02:41.984 2 DEBUG oslo_concurrency.lockutils [req-aed2b03e-ce80-4432-a286-8cb0ef7c1fed req-7fcde271-2ef0-411f-890b-d50e723b2b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:02:41 compute-0 nova_compute[259627]: 2025-10-14 09:02:41.985 2 DEBUG oslo_concurrency.lockutils [req-aed2b03e-ce80-4432-a286-8cb0ef7c1fed req-7fcde271-2ef0-411f-890b-d50e723b2b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:02:41 compute-0 nova_compute[259627]: 2025-10-14 09:02:41.985 2 DEBUG oslo_concurrency.lockutils [req-aed2b03e-ce80-4432-a286-8cb0ef7c1fed req-7fcde271-2ef0-411f-890b-d50e723b2b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:41 compute-0 nova_compute[259627]: 2025-10-14 09:02:41.986 2 DEBUG nova.compute.manager [req-aed2b03e-ce80-4432-a286-8cb0ef7c1fed req-7fcde271-2ef0-411f-890b-d50e723b2b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:02:41 compute-0 nova_compute[259627]: 2025-10-14 09:02:41.986 2 WARNING nova.compute.manager [req-aed2b03e-ce80-4432-a286-8cb0ef7c1fed req-7fcde271-2ef0-411f-890b-d50e723b2b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state None.
Oct 14 09:02:42 compute-0 nova_compute[259627]: 2025-10-14 09:02:42.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007613715447352999 of space, bias 1.0, pg target 0.22841146342058996 quantized to 32 (current 32)
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:02:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:02:43 compute-0 nova_compute[259627]: 2025-10-14 09:02:43.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:43 compute-0 ceph-mon[74249]: pgmap v1383: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 24 KiB/s wr, 10 op/s
Oct 14 09:02:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 14 KiB/s wr, 10 op/s
Oct 14 09:02:43 compute-0 nova_compute[259627]: 2025-10-14 09:02:43.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:44 compute-0 nova_compute[259627]: 2025-10-14 09:02:44.086 2 DEBUG nova.compute.manager [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:02:44 compute-0 nova_compute[259627]: 2025-10-14 09:02:44.087 2 DEBUG oslo_concurrency.lockutils [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:02:44 compute-0 nova_compute[259627]: 2025-10-14 09:02:44.087 2 DEBUG oslo_concurrency.lockutils [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:02:44 compute-0 nova_compute[259627]: 2025-10-14 09:02:44.087 2 DEBUG oslo_concurrency.lockutils [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:44 compute-0 nova_compute[259627]: 2025-10-14 09:02:44.088 2 DEBUG nova.compute.manager [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:02:44 compute-0 nova_compute[259627]: 2025-10-14 09:02:44.088 2 WARNING nova.compute.manager [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state None.
Oct 14 09:02:44 compute-0 nova_compute[259627]: 2025-10-14 09:02:44.088 2 DEBUG nova.compute.manager [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:02:44 compute-0 nova_compute[259627]: 2025-10-14 09:02:44.088 2 DEBUG oslo_concurrency.lockutils [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:02:44 compute-0 nova_compute[259627]: 2025-10-14 09:02:44.089 2 DEBUG oslo_concurrency.lockutils [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:02:44 compute-0 nova_compute[259627]: 2025-10-14 09:02:44.089 2 DEBUG oslo_concurrency.lockutils [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:44 compute-0 nova_compute[259627]: 2025-10-14 09:02:44.089 2 DEBUG nova.compute.manager [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:02:44 compute-0 nova_compute[259627]: 2025-10-14 09:02:44.090 2 WARNING nova.compute.manager [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state None.
Oct 14 09:02:44 compute-0 nova_compute[259627]: 2025-10-14 09:02:44.090 2 DEBUG nova.compute.manager [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:02:44 compute-0 nova_compute[259627]: 2025-10-14 09:02:44.090 2 DEBUG oslo_concurrency.lockutils [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:02:44 compute-0 nova_compute[259627]: 2025-10-14 09:02:44.090 2 DEBUG oslo_concurrency.lockutils [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:02:44 compute-0 nova_compute[259627]: 2025-10-14 09:02:44.091 2 DEBUG oslo_concurrency.lockutils [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:02:44 compute-0 nova_compute[259627]: 2025-10-14 09:02:44.091 2 DEBUG nova.compute.manager [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:02:44 compute-0 nova_compute[259627]: 2025-10-14 09:02:44.091 2 WARNING nova.compute.manager [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state None.
Oct 14 09:02:45 compute-0 nova_compute[259627]: 2025-10-14 09:02:45.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:45 compute-0 ceph-mon[74249]: pgmap v1384: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 14 KiB/s wr, 10 op/s
Oct 14 09:02:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 14 09:02:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:02:46.017 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:02:46 compute-0 podman[312661]: 2025-10-14 09:02:46.647254127 +0000 UTC m=+0.062704273 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009)
Oct 14 09:02:46 compute-0 podman[312660]: 2025-10-14 09:02:46.706547085 +0000 UTC m=+0.121403386 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 09:02:47 compute-0 nova_compute[259627]: 2025-10-14 09:02:47.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:47 compute-0 ceph-mon[74249]: pgmap v1385: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 14 09:02:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:02:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Oct 14 09:02:48 compute-0 nova_compute[259627]: 2025-10-14 09:02:48.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:49 compute-0 ovn_controller[152662]: 2025-10-14T09:02:49Z|00432|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 09:02:49 compute-0 nova_compute[259627]: 2025-10-14 09:02:49.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:49 compute-0 ceph-mon[74249]: pgmap v1386: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Oct 14 09:02:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Oct 14 09:02:51 compute-0 ceph-mon[74249]: pgmap v1387: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Oct 14 09:02:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 73 op/s
Oct 14 09:02:52 compute-0 nova_compute[259627]: 2025-10-14 09:02:52.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:02:52 compute-0 ovn_controller[152662]: 2025-10-14T09:02:52Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:be:e2:1b 10.100.0.10
Oct 14 09:02:53 compute-0 ceph-mon[74249]: pgmap v1388: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 73 op/s
Oct 14 09:02:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Oct 14 09:02:53 compute-0 nova_compute[259627]: 2025-10-14 09:02:53.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:54 compute-0 ceph-mon[74249]: pgmap v1389: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Oct 14 09:02:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 9.3 KiB/s wr, 107 op/s
Oct 14 09:02:56 compute-0 ceph-mon[74249]: pgmap v1390: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 9.3 KiB/s wr, 107 op/s
Oct 14 09:02:57 compute-0 nova_compute[259627]: 2025-10-14 09:02:57.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:02:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 533 KiB/s rd, 9.3 KiB/s wr, 44 op/s
Oct 14 09:02:58 compute-0 ceph-mon[74249]: pgmap v1391: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 533 KiB/s rd, 9.3 KiB/s wr, 44 op/s
Oct 14 09:02:58 compute-0 nova_compute[259627]: 2025-10-14 09:02:58.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:58 compute-0 ovn_controller[152662]: 2025-10-14T09:02:58Z|00433|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 09:02:58 compute-0 nova_compute[259627]: 2025-10-14 09:02:58.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:02:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 533 KiB/s rd, 9.3 KiB/s wr, 44 op/s
Oct 14 09:03:00 compute-0 ceph-mon[74249]: pgmap v1392: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 533 KiB/s rd, 9.3 KiB/s wr, 44 op/s
Oct 14 09:03:01 compute-0 nova_compute[259627]: 2025-10-14 09:03:01.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 533 KiB/s rd, 20 KiB/s wr, 44 op/s
Oct 14 09:03:02 compute-0 nova_compute[259627]: 2025-10-14 09:03:02.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:03:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:03:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:03:02 compute-0 ceph-mon[74249]: pgmap v1393: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 533 KiB/s rd, 20 KiB/s wr, 44 op/s
Oct 14 09:03:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:03:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:03:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:03:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:03:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 526 KiB/s rd, 20 KiB/s wr, 42 op/s
Oct 14 09:03:03 compute-0 nova_compute[259627]: 2025-10-14 09:03:03.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:04 compute-0 nova_compute[259627]: 2025-10-14 09:03:04.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:04 compute-0 ceph-mon[74249]: pgmap v1394: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 526 KiB/s rd, 20 KiB/s wr, 42 op/s
Oct 14 09:03:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:03:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2589874852' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:03:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:03:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2589874852' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:03:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 526 KiB/s rd, 22 KiB/s wr, 43 op/s
Oct 14 09:03:05 compute-0 podman[312704]: 2025-10-14 09:03:05.667488239 +0000 UTC m=+0.085206446 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct 14 09:03:05 compute-0 podman[312705]: 2025-10-14 09:03:05.667514749 +0000 UTC m=+0.074508673 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:03:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2589874852' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:03:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2589874852' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:03:06 compute-0 ceph-mon[74249]: pgmap v1395: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 526 KiB/s rd, 22 KiB/s wr, 43 op/s
Oct 14 09:03:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:07.021 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:07.021 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:07.022 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:07 compute-0 nova_compute[259627]: 2025-10-14 09:03:07.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:07 compute-0 nova_compute[259627]: 2025-10-14 09:03:07.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:03:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Oct 14 09:03:08 compute-0 nova_compute[259627]: 2025-10-14 09:03:08.476 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquiring lock "e0bc2109-5f5c-4797-98c7-866f2d11f513" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:08 compute-0 nova_compute[259627]: 2025-10-14 09:03:08.476 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:08 compute-0 nova_compute[259627]: 2025-10-14 09:03:08.501 2 DEBUG nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:03:08 compute-0 nova_compute[259627]: 2025-10-14 09:03:08.580 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:08 compute-0 nova_compute[259627]: 2025-10-14 09:03:08.580 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:08 compute-0 nova_compute[259627]: 2025-10-14 09:03:08.588 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:03:08 compute-0 nova_compute[259627]: 2025-10-14 09:03:08.589 2 INFO nova.compute.claims [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:03:08 compute-0 nova_compute[259627]: 2025-10-14 09:03:08.698 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:08 compute-0 ceph-mon[74249]: pgmap v1396: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Oct 14 09:03:08 compute-0 nova_compute[259627]: 2025-10-14 09:03:08.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:03:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1451314201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.147 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.153 2 DEBUG nova.compute.provider_tree [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.183 2 DEBUG nova.scheduler.client.report [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.214 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.215 2 DEBUG nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.265 2 DEBUG nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.265 2 DEBUG nova.network.neutron [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.289 2 INFO nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.308 2 DEBUG nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.406 2 DEBUG nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.407 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.407 2 INFO nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Creating image(s)
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.431 2 DEBUG nova.storage.rbd_utils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] rbd image e0bc2109-5f5c-4797-98c7-866f2d11f513_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.453 2 DEBUG nova.storage.rbd_utils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] rbd image e0bc2109-5f5c-4797-98c7-866f2d11f513_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.472 2 DEBUG nova.storage.rbd_utils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] rbd image e0bc2109-5f5c-4797-98c7-866f2d11f513_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.476 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.542 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.543 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.544 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.544 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.563 2 DEBUG nova.storage.rbd_utils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] rbd image e0bc2109-5f5c-4797-98c7-866f2d11f513_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.566 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e0bc2109-5f5c-4797-98c7-866f2d11f513_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.761 2 DEBUG nova.policy [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f302a20e13b14bb999539ee5df041036', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '197096bf838b4b289aed810f1495a6c5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:03:09 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1451314201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.836 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e0bc2109-5f5c-4797-98c7-866f2d11f513_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.915 2 DEBUG nova.storage.rbd_utils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] resizing rbd image e0bc2109-5f5c-4797-98c7-866f2d11f513_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:03:09 compute-0 nova_compute[259627]: 2025-10-14 09:03:09.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 09:03:10 compute-0 nova_compute[259627]: 2025-10-14 09:03:10.024 2 DEBUG nova.objects.instance [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lazy-loading 'migration_context' on Instance uuid e0bc2109-5f5c-4797-98c7-866f2d11f513 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:03:10 compute-0 nova_compute[259627]: 2025-10-14 09:03:10.048 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:03:10 compute-0 nova_compute[259627]: 2025-10-14 09:03:10.048 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Ensure instance console log exists: /var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:03:10 compute-0 nova_compute[259627]: 2025-10-14 09:03:10.049 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:10 compute-0 nova_compute[259627]: 2025-10-14 09:03:10.050 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:10 compute-0 nova_compute[259627]: 2025-10-14 09:03:10.050 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:10 compute-0 nova_compute[259627]: 2025-10-14 09:03:10.753 2 DEBUG nova.network.neutron [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Successfully created port: ec24b957-093d-460e-a2cf-925bbfd2d421 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:03:10 compute-0 ceph-mon[74249]: pgmap v1397: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Oct 14 09:03:10 compute-0 nova_compute[259627]: 2025-10-14 09:03:10.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 169 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 14 09:03:11 compute-0 nova_compute[259627]: 2025-10-14 09:03:11.718 2 DEBUG nova.network.neutron [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Successfully updated port: ec24b957-093d-460e-a2cf-925bbfd2d421 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:03:11 compute-0 nova_compute[259627]: 2025-10-14 09:03:11.747 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquiring lock "refresh_cache-e0bc2109-5f5c-4797-98c7-866f2d11f513" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:03:11 compute-0 nova_compute[259627]: 2025-10-14 09:03:11.747 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquired lock "refresh_cache-e0bc2109-5f5c-4797-98c7-866f2d11f513" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:03:11 compute-0 nova_compute[259627]: 2025-10-14 09:03:11.748 2 DEBUG nova.network.neutron [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:03:11 compute-0 nova_compute[259627]: 2025-10-14 09:03:11.895 2 DEBUG nova.compute.manager [req-90a4705e-a2f9-4992-8985-a4e3465016e8 req-fd20d0fb-719a-4fbf-bc0b-5d666380f08d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Received event network-changed-ec24b957-093d-460e-a2cf-925bbfd2d421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:03:11 compute-0 nova_compute[259627]: 2025-10-14 09:03:11.896 2 DEBUG nova.compute.manager [req-90a4705e-a2f9-4992-8985-a4e3465016e8 req-fd20d0fb-719a-4fbf-bc0b-5d666380f08d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Refreshing instance network info cache due to event network-changed-ec24b957-093d-460e-a2cf-925bbfd2d421. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:03:11 compute-0 nova_compute[259627]: 2025-10-14 09:03:11.896 2 DEBUG oslo_concurrency.lockutils [req-90a4705e-a2f9-4992-8985-a4e3465016e8 req-fd20d0fb-719a-4fbf-bc0b-5d666380f08d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e0bc2109-5f5c-4797-98c7-866f2d11f513" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:03:12 compute-0 nova_compute[259627]: 2025-10-14 09:03:12.061 2 DEBUG nova.network.neutron [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:03:12 compute-0 nova_compute[259627]: 2025-10-14 09:03:12.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:03:12 compute-0 ceph-mon[74249]: pgmap v1398: 305 pgs: 305 active+clean; 169 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.120 2 DEBUG nova.network.neutron [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Updating instance_info_cache with network_info: [{"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.149 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Releasing lock "refresh_cache-e0bc2109-5f5c-4797-98c7-866f2d11f513" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.150 2 DEBUG nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Instance network_info: |[{"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.151 2 DEBUG oslo_concurrency.lockutils [req-90a4705e-a2f9-4992-8985-a4e3465016e8 req-fd20d0fb-719a-4fbf-bc0b-5d666380f08d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e0bc2109-5f5c-4797-98c7-866f2d11f513" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.152 2 DEBUG nova.network.neutron [req-90a4705e-a2f9-4992-8985-a4e3465016e8 req-fd20d0fb-719a-4fbf-bc0b-5d666380f08d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Refreshing network info cache for port ec24b957-093d-460e-a2cf-925bbfd2d421 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.158 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Start _get_guest_xml network_info=[{"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.165 2 WARNING nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.176 2 DEBUG nova.virt.libvirt.host [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.177 2 DEBUG nova.virt.libvirt.host [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.182 2 DEBUG nova.virt.libvirt.host [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.182 2 DEBUG nova.virt.libvirt.host [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.183 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.184 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.185 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.185 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.186 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.186 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.186 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.187 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.187 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.188 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.188 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.189 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.193 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:03:13 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/958762108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.647 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 169 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.673 2 DEBUG nova.storage.rbd_utils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] rbd image e0bc2109-5f5c-4797-98c7-866f2d11f513_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.677 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:13 compute-0 nova_compute[259627]: 2025-10-14 09:03:13.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/958762108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:03:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:03:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4056529507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.115 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.117 2 DEBUG nova.virt.libvirt.vif [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:03:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-664198077',display_name='tempest-ImagesOneServerTestJSON-server-664198077',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-664198077',id=49,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='197096bf838b4b289aed810f1495a6c5',ramdisk_id='',reservation_id='r-youk5hnv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1747208535',owner_user_name='tempest-ImagesOneServerTestJSO
N-1747208535-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:03:09Z,user_data=None,user_id='f302a20e13b14bb999539ee5df041036',uuid=e0bc2109-5f5c-4797-98c7-866f2d11f513,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.117 2 DEBUG nova.network.os_vif_util [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Converting VIF {"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.118 2 DEBUG nova.network.os_vif_util [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b3:92,bridge_name='br-int',has_traffic_filtering=True,id=ec24b957-093d-460e-a2cf-925bbfd2d421,network=Network(17ac22f4-a94a-4a44-af02-3207d6bbc30c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec24b957-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.119 2 DEBUG nova.objects.instance [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lazy-loading 'pci_devices' on Instance uuid e0bc2109-5f5c-4797-98c7-866f2d11f513 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.141 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:03:14 compute-0 nova_compute[259627]:   <uuid>e0bc2109-5f5c-4797-98c7-866f2d11f513</uuid>
Oct 14 09:03:14 compute-0 nova_compute[259627]:   <name>instance-00000031</name>
Oct 14 09:03:14 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:03:14 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:03:14 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <nova:name>tempest-ImagesOneServerTestJSON-server-664198077</nova:name>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:03:13</nova:creationTime>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:03:14 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:03:14 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:03:14 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:03:14 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:03:14 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:03:14 compute-0 nova_compute[259627]:         <nova:user uuid="f302a20e13b14bb999539ee5df041036">tempest-ImagesOneServerTestJSON-1747208535-project-member</nova:user>
Oct 14 09:03:14 compute-0 nova_compute[259627]:         <nova:project uuid="197096bf838b4b289aed810f1495a6c5">tempest-ImagesOneServerTestJSON-1747208535</nova:project>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:03:14 compute-0 nova_compute[259627]:         <nova:port uuid="ec24b957-093d-460e-a2cf-925bbfd2d421">
Oct 14 09:03:14 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:03:14 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:03:14 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <system>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <entry name="serial">e0bc2109-5f5c-4797-98c7-866f2d11f513</entry>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <entry name="uuid">e0bc2109-5f5c-4797-98c7-866f2d11f513</entry>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     </system>
Oct 14 09:03:14 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:03:14 compute-0 nova_compute[259627]:   <os>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:   </os>
Oct 14 09:03:14 compute-0 nova_compute[259627]:   <features>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:   </features>
Oct 14 09:03:14 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:03:14 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:03:14 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e0bc2109-5f5c-4797-98c7-866f2d11f513_disk">
Oct 14 09:03:14 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       </source>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:03:14 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e0bc2109-5f5c-4797-98c7-866f2d11f513_disk.config">
Oct 14 09:03:14 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       </source>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:03:14 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:b8:b3:92"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <target dev="tapec24b957-09"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513/console.log" append="off"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <video>
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     </video>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:03:14 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:03:14 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:03:14 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:03:14 compute-0 nova_compute[259627]: </domain>
Oct 14 09:03:14 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.143 2 DEBUG nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Preparing to wait for external event network-vif-plugged-ec24b957-093d-460e-a2cf-925bbfd2d421 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.143 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquiring lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.143 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.144 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.144 2 DEBUG nova.virt.libvirt.vif [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:03:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-664198077',display_name='tempest-ImagesOneServerTestJSON-server-664198077',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-664198077',id=49,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='197096bf838b4b289aed810f1495a6c5',ramdisk_id='',reservation_id='r-youk5hnv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1747208535',owner_user_name='tempest-ImagesOneServerTestJSON-1747208535-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:03:09Z,user_data=None,user_id='f302a20e13b14bb999539ee5df041036',uuid=e0bc2109-5f5c-4797-98c7-866f2d11f513,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.145 2 DEBUG nova.network.os_vif_util [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Converting VIF {"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.146 2 DEBUG nova.network.os_vif_util [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b3:92,bridge_name='br-int',has_traffic_filtering=True,id=ec24b957-093d-460e-a2cf-925bbfd2d421,network=Network(17ac22f4-a94a-4a44-af02-3207d6bbc30c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec24b957-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.146 2 DEBUG os_vif [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b3:92,bridge_name='br-int',has_traffic_filtering=True,id=ec24b957-093d-460e-a2cf-925bbfd2d421,network=Network(17ac22f4-a94a-4a44-af02-3207d6bbc30c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec24b957-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.147 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.148 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.152 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec24b957-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.152 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec24b957-09, col_values=(('external_ids', {'iface-id': 'ec24b957-093d-460e-a2cf-925bbfd2d421', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:b3:92', 'vm-uuid': 'e0bc2109-5f5c-4797-98c7-866f2d11f513'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:14 compute-0 NetworkManager[44885]: <info>  [1760432594.1997] manager: (tapec24b957-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.208 2 INFO os_vif [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b3:92,bridge_name='br-int',has_traffic_filtering=True,id=ec24b957-093d-460e-a2cf-925bbfd2d421,network=Network(17ac22f4-a94a-4a44-af02-3207d6bbc30c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec24b957-09')
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.263 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.264 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.264 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] No VIF found with MAC fa:16:3e:b8:b3:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.265 2 INFO nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Using config drive
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.300 2 DEBUG nova.storage.rbd_utils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] rbd image e0bc2109-5f5c-4797-98c7-866f2d11f513_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.719 2 DEBUG nova.network.neutron [req-90a4705e-a2f9-4992-8985-a4e3465016e8 req-fd20d0fb-719a-4fbf-bc0b-5d666380f08d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Updated VIF entry in instance network info cache for port ec24b957-093d-460e-a2cf-925bbfd2d421. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.720 2 DEBUG nova.network.neutron [req-90a4705e-a2f9-4992-8985-a4e3465016e8 req-fd20d0fb-719a-4fbf-bc0b-5d666380f08d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Updating instance_info_cache with network_info: [{"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.742 2 DEBUG oslo_concurrency.lockutils [req-90a4705e-a2f9-4992-8985-a4e3465016e8 req-fd20d0fb-719a-4fbf-bc0b-5d666380f08d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e0bc2109-5f5c-4797-98c7-866f2d11f513" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.764 2 INFO nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Creating config drive at /var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513/disk.config
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.774 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgaksstgg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:14 compute-0 ceph-mon[74249]: pgmap v1399: 305 pgs: 305 active+clean; 169 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:03:14 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4056529507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.936 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgaksstgg" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.980 2 DEBUG nova.storage.rbd_utils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] rbd image e0bc2109-5f5c-4797-98c7-866f2d11f513_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:14 compute-0 nova_compute[259627]: 2025-10-14 09:03:14.986 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513/disk.config e0bc2109-5f5c-4797-98c7-866f2d11f513_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:15 compute-0 nova_compute[259627]: 2025-10-14 09:03:15.155 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513/disk.config e0bc2109-5f5c-4797-98c7-866f2d11f513_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:15 compute-0 nova_compute[259627]: 2025-10-14 09:03:15.156 2 INFO nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Deleting local config drive /var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513/disk.config because it was imported into RBD.
Oct 14 09:03:15 compute-0 NetworkManager[44885]: <info>  [1760432595.2123] manager: (tapec24b957-09): new Tun device (/org/freedesktop/NetworkManager/Devices/195)
Oct 14 09:03:15 compute-0 kernel: tapec24b957-09: entered promiscuous mode
Oct 14 09:03:15 compute-0 ovn_controller[152662]: 2025-10-14T09:03:15Z|00434|binding|INFO|Claiming lport ec24b957-093d-460e-a2cf-925bbfd2d421 for this chassis.
Oct 14 09:03:15 compute-0 ovn_controller[152662]: 2025-10-14T09:03:15Z|00435|binding|INFO|ec24b957-093d-460e-a2cf-925bbfd2d421: Claiming fa:16:3e:b8:b3:92 10.100.0.8
Oct 14 09:03:15 compute-0 nova_compute[259627]: 2025-10-14 09:03:15.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.223 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:b3:92 10.100.0.8'], port_security=['fa:16:3e:b8:b3:92 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e0bc2109-5f5c-4797-98c7-866f2d11f513', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '197096bf838b4b289aed810f1495a6c5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43932228-589c-42e3-996e-587f7969918e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f7b5f101-b0f2-4232-8035-0864b53402a3, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ec24b957-093d-460e-a2cf-925bbfd2d421) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.224 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ec24b957-093d-460e-a2cf-925bbfd2d421 in datapath 17ac22f4-a94a-4a44-af02-3207d6bbc30c bound to our chassis
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.225 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17ac22f4-a94a-4a44-af02-3207d6bbc30c
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.236 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4a0e49-1270-4bed-9021-7c81d5cc8ab7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.237 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap17ac22f4-a1 in ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.239 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap17ac22f4-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.239 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7bf11bf4-db18-4bf4-ab6b-8fd6dad10875]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.240 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9abbdddf-8f37-4415-90ce-6368bf31c12e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:15 compute-0 ovn_controller[152662]: 2025-10-14T09:03:15Z|00436|binding|INFO|Setting lport ec24b957-093d-460e-a2cf-925bbfd2d421 ovn-installed in OVS
Oct 14 09:03:15 compute-0 ovn_controller[152662]: 2025-10-14T09:03:15Z|00437|binding|INFO|Setting lport ec24b957-093d-460e-a2cf-925bbfd2d421 up in Southbound
Oct 14 09:03:15 compute-0 systemd-machined[214636]: New machine qemu-60-instance-00000031.
Oct 14 09:03:15 compute-0 nova_compute[259627]: 2025-10-14 09:03:15.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.252 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[1efd37da-9841-4048-9b77-6f9453fc6212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:15 compute-0 nova_compute[259627]: 2025-10-14 09:03:15.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:15 compute-0 systemd[1]: Started Virtual Machine qemu-60-instance-00000031.
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.275 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fba52819-f2ad-4969-bf71-15f45dada596]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:15 compute-0 systemd-udevd[313070]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:03:15 compute-0 NetworkManager[44885]: <info>  [1760432595.2900] device (tapec24b957-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:03:15 compute-0 NetworkManager[44885]: <info>  [1760432595.2909] device (tapec24b957-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.303 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca19724-3ff9-490c-a722-97e02a12fc3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.312 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d472a8d7-8bc1-4e48-bffa-d3483e606678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:15 compute-0 NetworkManager[44885]: <info>  [1760432595.3131] manager: (tap17ac22f4-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/196)
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.344 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[adfc49ee-abfe-467f-af61-49d7b67a62ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.348 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bc19d1f7-5a73-4ee8-9183-1bd43df28510]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:15 compute-0 NetworkManager[44885]: <info>  [1760432595.3708] device (tap17ac22f4-a0): carrier: link connected
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.375 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb78aa0-9018-4739-8c02-84499071860e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.390 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[88e7809b-91c2-4c6f-a22e-f79fc5bd8171]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17ac22f4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:2e:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637413, 'reachable_time': 38862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313100, 'error': None, 'target': 'ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.403 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a5227661-4da6-4911-b915-c8288d87ca39]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed7:2ece'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637413, 'tstamp': 637413}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313101, 'error': None, 'target': 'ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.423 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4ce3b6-d676-4ebb-b648-1d48231614c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17ac22f4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:2e:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637413, 'reachable_time': 38862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313109, 'error': None, 'target': 'ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.453 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6c5432dc-da3c-4505-b25b-ac57b9e7697a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:15 compute-0 nova_compute[259627]: 2025-10-14 09:03:15.511 2 DEBUG nova.compute.manager [req-6cc07c3f-bbf9-4dbd-a2fd-9f177946ed5d req-4630216f-985b-4e3b-acf9-457d33dbd238 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Received event network-vif-plugged-ec24b957-093d-460e-a2cf-925bbfd2d421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:03:15 compute-0 nova_compute[259627]: 2025-10-14 09:03:15.511 2 DEBUG oslo_concurrency.lockutils [req-6cc07c3f-bbf9-4dbd-a2fd-9f177946ed5d req-4630216f-985b-4e3b-acf9-457d33dbd238 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:15 compute-0 nova_compute[259627]: 2025-10-14 09:03:15.512 2 DEBUG oslo_concurrency.lockutils [req-6cc07c3f-bbf9-4dbd-a2fd-9f177946ed5d req-4630216f-985b-4e3b-acf9-457d33dbd238 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:15 compute-0 nova_compute[259627]: 2025-10-14 09:03:15.512 2 DEBUG oslo_concurrency.lockutils [req-6cc07c3f-bbf9-4dbd-a2fd-9f177946ed5d req-4630216f-985b-4e3b-acf9-457d33dbd238 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:15 compute-0 nova_compute[259627]: 2025-10-14 09:03:15.512 2 DEBUG nova.compute.manager [req-6cc07c3f-bbf9-4dbd-a2fd-9f177946ed5d req-4630216f-985b-4e3b-acf9-457d33dbd238 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Processing event network-vif-plugged-ec24b957-093d-460e-a2cf-925bbfd2d421 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.544 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[df449c85-a2ad-4775-966d-350361734dad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.547 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17ac22f4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.547 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.548 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17ac22f4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:15 compute-0 NetworkManager[44885]: <info>  [1760432595.5509] manager: (tap17ac22f4-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Oct 14 09:03:15 compute-0 nova_compute[259627]: 2025-10-14 09:03:15.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:15 compute-0 kernel: tap17ac22f4-a0: entered promiscuous mode
Oct 14 09:03:15 compute-0 nova_compute[259627]: 2025-10-14 09:03:15.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.556 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17ac22f4-a0, col_values=(('external_ids', {'iface-id': '90534a6a-0aa1-48a2-852b-3056843e4924'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:15 compute-0 nova_compute[259627]: 2025-10-14 09:03:15.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:15 compute-0 ovn_controller[152662]: 2025-10-14T09:03:15Z|00438|binding|INFO|Releasing lport 90534a6a-0aa1-48a2-852b-3056843e4924 from this chassis (sb_readonly=0)
Oct 14 09:03:15 compute-0 nova_compute[259627]: 2025-10-14 09:03:15.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:15 compute-0 nova_compute[259627]: 2025-10-14 09:03:15.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.576 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/17ac22f4-a94a-4a44-af02-3207d6bbc30c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/17ac22f4-a94a-4a44-af02-3207d6bbc30c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.578 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc1b56c-2c04-47df-b51d-db23bc853700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.579 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-17ac22f4-a94a-4a44-af02-3207d6bbc30c
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/17ac22f4-a94a-4a44-af02-3207d6bbc30c.pid.haproxy
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 17ac22f4-a94a-4a44-af02-3207d6bbc30c
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:03:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.580 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'env', 'PROCESS_TAG=haproxy-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/17ac22f4-a94a-4a44-af02-3207d6bbc30c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:03:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 169 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 09:03:15 compute-0 nova_compute[259627]: 2025-10-14 09:03:15.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.020 2 DEBUG nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.021 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432596.01992, e0bc2109-5f5c-4797-98c7-866f2d11f513 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.022 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] VM Started (Lifecycle Event)
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.031 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.037 2 INFO nova.virt.libvirt.driver [-] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Instance spawned successfully.
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.037 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:03:16 compute-0 podman[313176]: 2025-10-14 09:03:16.059233165 +0000 UTC m=+0.084324854 container create 2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.061 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.067 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.070 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.070 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.071 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.071 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.071 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.072 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.098 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.098 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432596.0209563, e0bc2109-5f5c-4797-98c7-866f2d11f513 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.098 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] VM Paused (Lifecycle Event)
Oct 14 09:03:16 compute-0 systemd[1]: Started libpod-conmon-2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed.scope.
Oct 14 09:03:16 compute-0 podman[313176]: 2025-10-14 09:03:16.027204977 +0000 UTC m=+0.052296666 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.128 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.131 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432596.0301926, e0bc2109-5f5c-4797-98c7-866f2d11f513 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.131 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] VM Resumed (Lifecycle Event)
Oct 14 09:03:16 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.139 2 INFO nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Took 6.73 seconds to spawn the instance on the hypervisor.
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.139 2 DEBUG nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0f27f2d659583d74823866ac940d88f699c60f5ba72cd023b094e06f8a3931c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.146 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.148 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.188 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:03:16 compute-0 podman[313176]: 2025-10-14 09:03:16.197342431 +0000 UTC m=+0.222434130 container init 2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 09:03:16 compute-0 podman[313176]: 2025-10-14 09:03:16.203371589 +0000 UTC m=+0.228463288 container start 2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.207 2 INFO nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Took 7.66 seconds to build instance.
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.222 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:16 compute-0 neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c[313189]: [NOTICE]   (313193) : New worker (313195) forked
Oct 14 09:03:16 compute-0 neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c[313189]: [NOTICE]   (313193) : Loading success.
Oct 14 09:03:16 compute-0 ceph-mon[74249]: pgmap v1400: 305 pgs: 305 active+clean; 169 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 09:03:16 compute-0 nova_compute[259627]: 2025-10-14 09:03:16.989 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:03:17 compute-0 nova_compute[259627]: 2025-10-14 09:03:17.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:17 compute-0 nova_compute[259627]: 2025-10-14 09:03:17.460 2 DEBUG nova.compute.manager [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:03:17 compute-0 nova_compute[259627]: 2025-10-14 09:03:17.500 2 INFO nova.compute.manager [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] instance snapshotting
Oct 14 09:03:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:03:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 169 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 09:03:17 compute-0 podman[313205]: 2025-10-14 09:03:17.693348092 +0000 UTC m=+0.086819886 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:03:17 compute-0 podman[313204]: 2025-10-14 09:03:17.727300607 +0000 UTC m=+0.124855041 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:03:17 compute-0 nova_compute[259627]: 2025-10-14 09:03:17.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:03:17 compute-0 nova_compute[259627]: 2025-10-14 09:03:17.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 14 09:03:17 compute-0 nova_compute[259627]: 2025-10-14 09:03:17.997 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 14 09:03:18 compute-0 nova_compute[259627]: 2025-10-14 09:03:18.050 2 INFO nova.virt.libvirt.driver [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Beginning live snapshot process
Oct 14 09:03:18 compute-0 nova_compute[259627]: 2025-10-14 09:03:18.281 2 DEBUG nova.virt.libvirt.imagebackend [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 09:03:18 compute-0 nova_compute[259627]: 2025-10-14 09:03:18.399 2 DEBUG nova.compute.manager [req-450404e1-aa30-42d4-b948-6c333ee15559 req-49dff267-0a51-49d6-b8f9-548d4b5a5c83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Received event network-vif-plugged-ec24b957-093d-460e-a2cf-925bbfd2d421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:03:18 compute-0 nova_compute[259627]: 2025-10-14 09:03:18.400 2 DEBUG oslo_concurrency.lockutils [req-450404e1-aa30-42d4-b948-6c333ee15559 req-49dff267-0a51-49d6-b8f9-548d4b5a5c83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:18 compute-0 nova_compute[259627]: 2025-10-14 09:03:18.400 2 DEBUG oslo_concurrency.lockutils [req-450404e1-aa30-42d4-b948-6c333ee15559 req-49dff267-0a51-49d6-b8f9-548d4b5a5c83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:18 compute-0 nova_compute[259627]: 2025-10-14 09:03:18.400 2 DEBUG oslo_concurrency.lockutils [req-450404e1-aa30-42d4-b948-6c333ee15559 req-49dff267-0a51-49d6-b8f9-548d4b5a5c83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:18 compute-0 nova_compute[259627]: 2025-10-14 09:03:18.401 2 DEBUG nova.compute.manager [req-450404e1-aa30-42d4-b948-6c333ee15559 req-49dff267-0a51-49d6-b8f9-548d4b5a5c83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] No waiting events found dispatching network-vif-plugged-ec24b957-093d-460e-a2cf-925bbfd2d421 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:03:18 compute-0 nova_compute[259627]: 2025-10-14 09:03:18.401 2 WARNING nova.compute.manager [req-450404e1-aa30-42d4-b948-6c333ee15559 req-49dff267-0a51-49d6-b8f9-548d4b5a5c83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Received unexpected event network-vif-plugged-ec24b957-093d-460e-a2cf-925bbfd2d421 for instance with vm_state active and task_state image_uploading.
Oct 14 09:03:18 compute-0 nova_compute[259627]: 2025-10-14 09:03:18.572 2 DEBUG nova.storage.rbd_utils [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] creating snapshot(ec9a431003054ee8811038a4d62f7339) on rbd image(e0bc2109-5f5c-4797-98c7-866f2d11f513_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:03:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Oct 14 09:03:18 compute-0 ceph-mon[74249]: pgmap v1401: 305 pgs: 305 active+clean; 169 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 09:03:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Oct 14 09:03:18 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Oct 14 09:03:18 compute-0 nova_compute[259627]: 2025-10-14 09:03:18.949 2 DEBUG nova.storage.rbd_utils [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] cloning vms/e0bc2109-5f5c-4797-98c7-866f2d11f513_disk@ec9a431003054ee8811038a4d62f7339 to images/72577407-a8f9-488a-b219-b5d5d896d73d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:03:19 compute-0 nova_compute[259627]: 2025-10-14 09:03:19.103 2 DEBUG nova.storage.rbd_utils [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] flattening images/72577407-a8f9-488a-b219-b5d5d896d73d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:03:19 compute-0 nova_compute[259627]: 2025-10-14 09:03:19.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:19 compute-0 nova_compute[259627]: 2025-10-14 09:03:19.242 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:19 compute-0 nova_compute[259627]: 2025-10-14 09:03:19.243 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:19 compute-0 nova_compute[259627]: 2025-10-14 09:03:19.262 2 DEBUG nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:03:19 compute-0 nova_compute[259627]: 2025-10-14 09:03:19.404 2 DEBUG nova.storage.rbd_utils [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] removing snapshot(ec9a431003054ee8811038a4d62f7339) on rbd image(e0bc2109-5f5c-4797-98c7-866f2d11f513_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:03:19 compute-0 nova_compute[259627]: 2025-10-14 09:03:19.460 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:19 compute-0 nova_compute[259627]: 2025-10-14 09:03:19.461 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:19 compute-0 nova_compute[259627]: 2025-10-14 09:03:19.470 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:03:19 compute-0 nova_compute[259627]: 2025-10-14 09:03:19.471 2 INFO nova.compute.claims [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:03:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 169 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.1 MiB/s wr, 36 op/s
Oct 14 09:03:19 compute-0 nova_compute[259627]: 2025-10-14 09:03:19.700 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Oct 14 09:03:19 compute-0 ceph-mon[74249]: osdmap e189: 3 total, 3 up, 3 in
Oct 14 09:03:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Oct 14 09:03:19 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Oct 14 09:03:19 compute-0 nova_compute[259627]: 2025-10-14 09:03:19.994 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:03:19 compute-0 nova_compute[259627]: 2025-10-14 09:03:19.995 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:03:19 compute-0 nova_compute[259627]: 2025-10-14 09:03:19.995 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:03:19 compute-0 nova_compute[259627]: 2025-10-14 09:03:19.996 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.081 2 DEBUG nova.storage.rbd_utils [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] creating snapshot(snap) on rbd image(72577407-a8f9-488a-b219-b5d5d896d73d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:03:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:03:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1494532258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.259 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.267 2 DEBUG nova.compute.provider_tree [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.294 2 DEBUG nova.scheduler.client.report [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.355 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.357 2 DEBUG nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.418 2 DEBUG nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.419 2 DEBUG nova.network.neutron [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.441 2 INFO nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.457 2 DEBUG nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.543 2 DEBUG nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.545 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.545 2 INFO nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Creating image(s)
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.573 2 DEBUG nova.storage.rbd_utils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.609 2 DEBUG nova.storage.rbd_utils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.643 2 DEBUG nova.storage.rbd_utils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.649 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.736 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.739 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.740 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.740 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.768 2 DEBUG nova.storage.rbd_utils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.773 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:20 compute-0 nova_compute[259627]: 2025-10-14 09:03:20.897 2 DEBUG nova.policy [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aa32af91355a41198fd57121e5c70ec2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '368d762ed02e459d892ad1e5488c2871', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:03:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Oct 14 09:03:20 compute-0 ceph-mon[74249]: pgmap v1403: 305 pgs: 305 active+clean; 169 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.1 MiB/s wr, 36 op/s
Oct 14 09:03:20 compute-0 ceph-mon[74249]: osdmap e190: 3 total, 3 up, 3 in
Oct 14 09:03:20 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1494532258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:03:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Oct 14 09:03:20 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Oct 14 09:03:21 compute-0 nova_compute[259627]: 2025-10-14 09:03:21.117 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:21 compute-0 nova_compute[259627]: 2025-10-14 09:03:21.177 2 DEBUG nova.storage.rbd_utils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] resizing rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:03:21 compute-0 nova_compute[259627]: 2025-10-14 09:03:21.265 2 DEBUG nova.objects.instance [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'migration_context' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:03:21 compute-0 nova_compute[259627]: 2025-10-14 09:03:21.281 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:03:21 compute-0 nova_compute[259627]: 2025-10-14 09:03:21.281 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Ensure instance console log exists: /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:03:21 compute-0 nova_compute[259627]: 2025-10-14 09:03:21.282 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:21 compute-0 nova_compute[259627]: 2025-10-14 09:03:21.282 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:21 compute-0 nova_compute[259627]: 2025-10-14 09:03:21.283 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 215 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.6 MiB/s wr, 245 op/s
Oct 14 09:03:21 compute-0 nova_compute[259627]: 2025-10-14 09:03:21.888 2 DEBUG nova.network.neutron [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Successfully created port: d1066ec7-d932-4d99-aff7-7f7e80c54724 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:03:21 compute-0 ceph-mon[74249]: osdmap e191: 3 total, 3 up, 3 in
Oct 14 09:03:21 compute-0 nova_compute[259627]: 2025-10-14 09:03:21.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:03:21 compute-0 nova_compute[259627]: 2025-10-14 09:03:21.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.001 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.002 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.003 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.003 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.003 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:03:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1178111964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.455 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.585 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.585 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:03:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.589 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.590 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.730 2 DEBUG nova.network.neutron [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Successfully updated port: d1066ec7-d932-4d99-aff7-7f7e80c54724 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.743 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "refresh_cache-dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.744 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquired lock "refresh_cache-dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.744 2 DEBUG nova.network.neutron [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.770 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.771 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3835MB free_disk=59.92169189453125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.771 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.772 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.818 2 DEBUG nova.compute.manager [req-6a3a9b33-071e-45f5-a0c3-208a7ce7f382 req-7cc2fc0e-01d0-41b9-bfb5-77567be59efa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-changed-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.819 2 DEBUG nova.compute.manager [req-6a3a9b33-071e-45f5-a0c3-208a7ce7f382 req-7cc2fc0e-01d0-41b9-bfb5-77567be59efa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Refreshing instance network info cache due to event network-changed-d1066ec7-d932-4d99-aff7-7f7e80c54724. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.819 2 DEBUG oslo_concurrency.lockutils [req-6a3a9b33-071e-45f5-a0c3-208a7ce7f382 req-7cc2fc0e-01d0-41b9-bfb5-77567be59efa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.851 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance de383510-2de3-40bd-b479-c0010b3f2d1c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.851 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance e0bc2109-5f5c-4797-98c7-866f2d11f513 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.852 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.852 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.852 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.897 2 DEBUG nova.network.neutron [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:03:22 compute-0 nova_compute[259627]: 2025-10-14 09:03:22.929 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:22 compute-0 ceph-mon[74249]: pgmap v1406: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 215 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.6 MiB/s wr, 245 op/s
Oct 14 09:03:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1178111964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:03:23 compute-0 nova_compute[259627]: 2025-10-14 09:03:23.014 2 INFO nova.virt.libvirt.driver [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Snapshot image upload complete
Oct 14 09:03:23 compute-0 nova_compute[259627]: 2025-10-14 09:03:23.016 2 INFO nova.compute.manager [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Took 5.51 seconds to snapshot the instance on the hypervisor.
Oct 14 09:03:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:03:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3378566224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:03:23 compute-0 nova_compute[259627]: 2025-10-14 09:03:23.459 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:23 compute-0 nova_compute[259627]: 2025-10-14 09:03:23.467 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:03:23 compute-0 nova_compute[259627]: 2025-10-14 09:03:23.503 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:03:23 compute-0 nova_compute[259627]: 2025-10-14 09:03:23.541 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:03:23 compute-0 nova_compute[259627]: 2025-10-14 09:03:23.542 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 215 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.6 MiB/s wr, 245 op/s
Oct 14 09:03:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3378566224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.473 2 DEBUG nova.network.neutron [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Updating instance_info_cache with network_info: [{"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.500 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Releasing lock "refresh_cache-dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.500 2 DEBUG nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance network_info: |[{"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.501 2 DEBUG oslo_concurrency.lockutils [req-6a3a9b33-071e-45f5-a0c3-208a7ce7f382 req-7cc2fc0e-01d0-41b9-bfb5-77567be59efa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.501 2 DEBUG nova.network.neutron [req-6a3a9b33-071e-45f5-a0c3-208a7ce7f382 req-7cc2fc0e-01d0-41b9-bfb5-77567be59efa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Refreshing network info cache for port d1066ec7-d932-4d99-aff7-7f7e80c54724 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.505 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Start _get_guest_xml network_info=[{"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.511 2 WARNING nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.519 2 DEBUG nova.virt.libvirt.host [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.520 2 DEBUG nova.virt.libvirt.host [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.524 2 DEBUG nova.virt.libvirt.host [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.525 2 DEBUG nova.virt.libvirt.host [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.525 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.525 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.526 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.526 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.526 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.526 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.527 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.527 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.527 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.527 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.527 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.528 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.530 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:24 compute-0 ceph-mon[74249]: pgmap v1407: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 215 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.6 MiB/s wr, 245 op/s
Oct 14 09:03:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:03:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3667482358' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:03:24 compute-0 nova_compute[259627]: 2025-10-14 09:03:24.996 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.016 2 DEBUG nova.storage.rbd_utils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.020 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:03:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1077889941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.451 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.453 2 DEBUG nova.virt.libvirt.vif [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:03:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1481502960',display_name='tempest-tempest.common.compute-instance-1481502960',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1481502960',id=50,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-hqb4aruj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:03:20Z,user_data=None,user_id='aa32af91355a41198fd57121e5c70ec2',uuid=dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.453 2 DEBUG nova.network.os_vif_util [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.454 2 DEBUG nova.network.os_vif_util [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.455 2 DEBUG nova.objects.instance [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'pci_devices' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.483 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:03:25 compute-0 nova_compute[259627]:   <uuid>dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9</uuid>
Oct 14 09:03:25 compute-0 nova_compute[259627]:   <name>instance-00000032</name>
Oct 14 09:03:25 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:03:25 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:03:25 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <nova:name>tempest-tempest.common.compute-instance-1481502960</nova:name>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:03:24</nova:creationTime>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:03:25 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:03:25 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:03:25 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:03:25 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:03:25 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:03:25 compute-0 nova_compute[259627]:         <nova:user uuid="aa32af91355a41198fd57121e5c70ec2">tempest-ServerActionsTestJSON-1593617559-project-member</nova:user>
Oct 14 09:03:25 compute-0 nova_compute[259627]:         <nova:project uuid="368d762ed02e459d892ad1e5488c2871">tempest-ServerActionsTestJSON-1593617559</nova:project>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:03:25 compute-0 nova_compute[259627]:         <nova:port uuid="d1066ec7-d932-4d99-aff7-7f7e80c54724">
Oct 14 09:03:25 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:03:25 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:03:25 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <system>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <entry name="serial">dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9</entry>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <entry name="uuid">dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9</entry>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     </system>
Oct 14 09:03:25 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:03:25 compute-0 nova_compute[259627]:   <os>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:   </os>
Oct 14 09:03:25 compute-0 nova_compute[259627]:   <features>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:   </features>
Oct 14 09:03:25 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:03:25 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:03:25 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk">
Oct 14 09:03:25 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       </source>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:03:25 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config">
Oct 14 09:03:25 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       </source>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:03:25 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:ba:e4:cb"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <target dev="tapd1066ec7-d9"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/console.log" append="off"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <video>
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     </video>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:03:25 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:03:25 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:03:25 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:03:25 compute-0 nova_compute[259627]: </domain>
Oct 14 09:03:25 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.486 2 DEBUG nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Preparing to wait for external event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.487 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.488 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.488 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.490 2 DEBUG nova.virt.libvirt.vif [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:03:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1481502960',display_name='tempest-tempest.common.compute-instance-1481502960',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1481502960',id=50,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-hqb4aruj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:03:20Z,user_data=None,user_id='aa32af91355a41198fd57121e5c70ec2',uuid=dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.490 2 DEBUG nova.network.os_vif_util [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.492 2 DEBUG nova.network.os_vif_util [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.492 2 DEBUG os_vif [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.494 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.495 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.500 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd1066ec7-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.501 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd1066ec7-d9, col_values=(('external_ids', {'iface-id': 'd1066ec7-d932-4d99-aff7-7f7e80c54724', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:e4:cb', 'vm-uuid': 'dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:25 compute-0 NetworkManager[44885]: <info>  [1760432605.5050] manager: (tapd1066ec7-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.515 2 INFO os_vif [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9')
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.592 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.592 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.593 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] No VIF found with MAC fa:16:3e:ba:e4:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.593 2 INFO nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Using config drive
Oct 14 09:03:25 compute-0 nova_compute[259627]: 2025-10-14 09:03:25.621 2 DEBUG nova.storage.rbd_utils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 262 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 6.3 MiB/s wr, 289 op/s
Oct 14 09:03:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3667482358' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:03:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1077889941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:03:26 compute-0 ovn_controller[152662]: 2025-10-14T09:03:26Z|00439|binding|INFO|Releasing lport 90534a6a-0aa1-48a2-852b-3056843e4924 from this chassis (sb_readonly=0)
Oct 14 09:03:26 compute-0 ovn_controller[152662]: 2025-10-14T09:03:26Z|00440|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 09:03:26 compute-0 nova_compute[259627]: 2025-10-14 09:03:26.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:26 compute-0 nova_compute[259627]: 2025-10-14 09:03:26.328 2 INFO nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Creating config drive at /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config
Oct 14 09:03:26 compute-0 nova_compute[259627]: 2025-10-14 09:03:26.338 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwe02j6kv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:26 compute-0 nova_compute[259627]: 2025-10-14 09:03:26.486 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwe02j6kv" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:26 compute-0 nova_compute[259627]: 2025-10-14 09:03:26.514 2 DEBUG nova.storage.rbd_utils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:26 compute-0 nova_compute[259627]: 2025-10-14 09:03:26.518 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:26 compute-0 nova_compute[259627]: 2025-10-14 09:03:26.550 2 DEBUG nova.network.neutron [req-6a3a9b33-071e-45f5-a0c3-208a7ce7f382 req-7cc2fc0e-01d0-41b9-bfb5-77567be59efa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Updated VIF entry in instance network info cache for port d1066ec7-d932-4d99-aff7-7f7e80c54724. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:03:26 compute-0 nova_compute[259627]: 2025-10-14 09:03:26.551 2 DEBUG nova.network.neutron [req-6a3a9b33-071e-45f5-a0c3-208a7ce7f382 req-7cc2fc0e-01d0-41b9-bfb5-77567be59efa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Updating instance_info_cache with network_info: [{"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:03:26 compute-0 nova_compute[259627]: 2025-10-14 09:03:26.570 2 DEBUG oslo_concurrency.lockutils [req-6a3a9b33-071e-45f5-a0c3-208a7ce7f382 req-7cc2fc0e-01d0-41b9-bfb5-77567be59efa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:03:26 compute-0 nova_compute[259627]: 2025-10-14 09:03:26.670 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:26 compute-0 nova_compute[259627]: 2025-10-14 09:03:26.671 2 INFO nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Deleting local config drive /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config because it was imported into RBD.
Oct 14 09:03:26 compute-0 kernel: tapd1066ec7-d9: entered promiscuous mode
Oct 14 09:03:26 compute-0 NetworkManager[44885]: <info>  [1760432606.7163] manager: (tapd1066ec7-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/199)
Oct 14 09:03:26 compute-0 ovn_controller[152662]: 2025-10-14T09:03:26Z|00441|binding|INFO|Claiming lport d1066ec7-d932-4d99-aff7-7f7e80c54724 for this chassis.
Oct 14 09:03:26 compute-0 ovn_controller[152662]: 2025-10-14T09:03:26Z|00442|binding|INFO|d1066ec7-d932-4d99-aff7-7f7e80c54724: Claiming fa:16:3e:ba:e4:cb 10.100.0.7
Oct 14 09:03:26 compute-0 nova_compute[259627]: 2025-10-14 09:03:26.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.726 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:e4:cb 10.100.0.7'], port_security=['fa:16:3e:ba:e4:cb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0e3897ff-4300-4387-bfc4-36acf3f6c752', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d1066ec7-d932-4d99-aff7-7f7e80c54724) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:03:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.728 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d1066ec7-d932-4d99-aff7-7f7e80c54724 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 bound to our chassis
Oct 14 09:03:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.729 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:03:26 compute-0 ovn_controller[152662]: 2025-10-14T09:03:26Z|00443|binding|INFO|Setting lport d1066ec7-d932-4d99-aff7-7f7e80c54724 ovn-installed in OVS
Oct 14 09:03:26 compute-0 ovn_controller[152662]: 2025-10-14T09:03:26Z|00444|binding|INFO|Setting lport d1066ec7-d932-4d99-aff7-7f7e80c54724 up in Southbound
Oct 14 09:03:26 compute-0 nova_compute[259627]: 2025-10-14 09:03:26.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:26 compute-0 nova_compute[259627]: 2025-10-14 09:03:26.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:26 compute-0 systemd-udevd[313760]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:03:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.752 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[db47fdfd-d0c5-4806-a2a0-2960c05b5153]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:26 compute-0 NetworkManager[44885]: <info>  [1760432606.7579] device (tapd1066ec7-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:03:26 compute-0 NetworkManager[44885]: <info>  [1760432606.7586] device (tapd1066ec7-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:03:26 compute-0 systemd-machined[214636]: New machine qemu-61-instance-00000032.
Oct 14 09:03:26 compute-0 systemd[1]: Started Virtual Machine qemu-61-instance-00000032.
Oct 14 09:03:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.786 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[95faa109-5686-4f4c-a654-14f062658dcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.789 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[79218951-7899-4646-b041-2e3eecc14d1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.827 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1eeb58f3-610f-4a14-a18f-d92098154bd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.854 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[06a95283-777f-4994-be60-5d4b4e0b7869]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633775, 'reachable_time': 24446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313773, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.873 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d794515b-7249-4d1b-8a32-45e2cc5bb63d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7c6eb8a4-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633788, 'tstamp': 633788}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313775, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7c6eb8a4-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633791, 'tstamp': 633791}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313775, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.875 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:26 compute-0 nova_compute[259627]: 2025-10-14 09:03:26.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:26 compute-0 nova_compute[259627]: 2025-10-14 09:03:26.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.878 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6eb8a4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.879 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:03:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.879 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c6eb8a4-60, col_values=(('external_ids', {'iface-id': '0f8f2393-a280-4a6e-ac22-da67bf8d74da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.880 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:03:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Oct 14 09:03:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Oct 14 09:03:26 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Oct 14 09:03:26 compute-0 ceph-mon[74249]: pgmap v1408: 305 pgs: 305 active+clean; 262 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 6.3 MiB/s wr, 289 op/s
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.335 2 DEBUG nova.compute.manager [req-2daa6508-c044-4394-a246-e8cd2c8182f0 req-36b19759-38fa-470e-8aaf-1e092380dcfe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.336 2 DEBUG oslo_concurrency.lockutils [req-2daa6508-c044-4394-a246-e8cd2c8182f0 req-36b19759-38fa-470e-8aaf-1e092380dcfe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.336 2 DEBUG oslo_concurrency.lockutils [req-2daa6508-c044-4394-a246-e8cd2c8182f0 req-36b19759-38fa-470e-8aaf-1e092380dcfe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.336 2 DEBUG oslo_concurrency.lockutils [req-2daa6508-c044-4394-a246-e8cd2c8182f0 req-36b19759-38fa-470e-8aaf-1e092380dcfe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.336 2 DEBUG nova.compute.manager [req-2daa6508-c044-4394-a246-e8cd2c8182f0 req-36b19759-38fa-470e-8aaf-1e092380dcfe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Processing event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.542 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.543 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.544 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.581 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 14 09:03:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:03:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Oct 14 09:03:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Oct 14 09:03:27 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Oct 14 09:03:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 262 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.2 MiB/s wr, 73 op/s
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.848 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.848 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.848 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.849 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.905 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432607.9047358, dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.905 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] VM Started (Lifecycle Event)
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.907 2 DEBUG nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.915 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.918 2 INFO nova.virt.libvirt.driver [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance spawned successfully.
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.919 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.925 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.927 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.939 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.939 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.940 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.940 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.941 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.941 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.948 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.949 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432607.9049146, dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.949 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] VM Paused (Lifecycle Event)
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.971 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.973 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432607.91331, dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:03:27 compute-0 nova_compute[259627]: 2025-10-14 09:03:27.973 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] VM Resumed (Lifecycle Event)
Oct 14 09:03:27 compute-0 ceph-mon[74249]: osdmap e192: 3 total, 3 up, 3 in
Oct 14 09:03:27 compute-0 ceph-mon[74249]: osdmap e193: 3 total, 3 up, 3 in
Oct 14 09:03:28 compute-0 nova_compute[259627]: 2025-10-14 09:03:28.002 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:03:28 compute-0 nova_compute[259627]: 2025-10-14 09:03:28.005 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:03:28 compute-0 nova_compute[259627]: 2025-10-14 09:03:28.024 2 INFO nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Took 7.48 seconds to spawn the instance on the hypervisor.
Oct 14 09:03:28 compute-0 nova_compute[259627]: 2025-10-14 09:03:28.024 2 DEBUG nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:03:28 compute-0 nova_compute[259627]: 2025-10-14 09:03:28.025 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:03:28 compute-0 nova_compute[259627]: 2025-10-14 09:03:28.089 2 INFO nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Took 8.78 seconds to build instance.
Oct 14 09:03:28 compute-0 nova_compute[259627]: 2025-10-14 09:03:28.109 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:28 compute-0 ovn_controller[152662]: 2025-10-14T09:03:28Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:b3:92 10.100.0.8
Oct 14 09:03:28 compute-0 ovn_controller[152662]: 2025-10-14T09:03:28Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:b3:92 10.100.0.8
Oct 14 09:03:28 compute-0 nova_compute[259627]: 2025-10-14 09:03:28.662 2 DEBUG nova.compute.manager [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:03:28 compute-0 nova_compute[259627]: 2025-10-14 09:03:28.713 2 INFO nova.compute.manager [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] instance snapshotting
Oct 14 09:03:28 compute-0 ceph-mon[74249]: pgmap v1411: 305 pgs: 305 active+clean; 262 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.2 MiB/s wr, 73 op/s
Oct 14 09:03:29 compute-0 nova_compute[259627]: 2025-10-14 09:03:29.042 2 INFO nova.virt.libvirt.driver [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Beginning live snapshot process
Oct 14 09:03:29 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:03:29 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 18K writes, 71K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 18K writes, 5948 syncs, 3.06 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 48.06 MB, 0.08 MB/s
                                           Interval WAL: 12K writes, 4849 syncs, 2.51 writes per sync, written: 0.05 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:03:29 compute-0 nova_compute[259627]: 2025-10-14 09:03:29.221 2 DEBUG nova.virt.libvirt.imagebackend [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 09:03:29 compute-0 nova_compute[259627]: 2025-10-14 09:03:29.529 2 DEBUG nova.compute.manager [req-2ef443aa-d15a-4221-9629-4e3c375af1ed req-31a1ccce-21e8-47cd-adbf-d2d4415105a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:03:29 compute-0 nova_compute[259627]: 2025-10-14 09:03:29.530 2 DEBUG oslo_concurrency.lockutils [req-2ef443aa-d15a-4221-9629-4e3c375af1ed req-31a1ccce-21e8-47cd-adbf-d2d4415105a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:29 compute-0 nova_compute[259627]: 2025-10-14 09:03:29.531 2 DEBUG oslo_concurrency.lockutils [req-2ef443aa-d15a-4221-9629-4e3c375af1ed req-31a1ccce-21e8-47cd-adbf-d2d4415105a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:29 compute-0 nova_compute[259627]: 2025-10-14 09:03:29.531 2 DEBUG oslo_concurrency.lockutils [req-2ef443aa-d15a-4221-9629-4e3c375af1ed req-31a1ccce-21e8-47cd-adbf-d2d4415105a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:29 compute-0 nova_compute[259627]: 2025-10-14 09:03:29.532 2 DEBUG nova.compute.manager [req-2ef443aa-d15a-4221-9629-4e3c375af1ed req-31a1ccce-21e8-47cd-adbf-d2d4415105a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] No waiting events found dispatching network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:03:29 compute-0 nova_compute[259627]: 2025-10-14 09:03:29.532 2 WARNING nova.compute.manager [req-2ef443aa-d15a-4221-9629-4e3c375af1ed req-31a1ccce-21e8-47cd-adbf-d2d4415105a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received unexpected event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 for instance with vm_state active and task_state None.
Oct 14 09:03:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 262 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.7 MiB/s wr, 61 op/s
Oct 14 09:03:29 compute-0 nova_compute[259627]: 2025-10-14 09:03:29.932 2 DEBUG nova.storage.rbd_utils [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] creating snapshot(2af6a09057ae4279bdcc856d858c397f) on rbd image(e0bc2109-5f5c-4797-98c7-866f2d11f513_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:03:29 compute-0 nova_compute[259627]: 2025-10-14 09:03:29.980 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:03:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Oct 14 09:03:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.001 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.001 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.002 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.002 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:03:30 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.053 2 DEBUG nova.storage.rbd_utils [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] cloning vms/e0bc2109-5f5c-4797-98c7-866f2d11f513_disk@2af6a09057ae4279bdcc856d858c397f to images/03a1c53c-9452-4b5f-b9d0-29539ac9e6c6 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.135 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.160 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Triggering sync for uuid de383510-2de3-40bd-b479-c0010b3f2d1c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.160 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Triggering sync for uuid e0bc2109-5f5c-4797-98c7-866f2d11f513 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.160 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Triggering sync for uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.161 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.161 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.161 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "e0bc2109-5f5c-4797-98c7-866f2d11f513" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.162 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.162 2 INFO nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] During sync_power_state the instance has a pending task (image_uploading). Skip.
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.162 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.163 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.163 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.170 2 DEBUG nova.storage.rbd_utils [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] flattening images/03a1c53c-9452-4b5f-b9d0-29539ac9e6c6 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.255 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.256 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.473 2 DEBUG nova.storage.rbd_utils [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] removing snapshot(2af6a09057ae4279bdcc856d858c397f) on rbd image(e0bc2109-5f5c-4797-98c7-866f2d11f513_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:03:30 compute-0 nova_compute[259627]: 2025-10-14 09:03:30.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Oct 14 09:03:31 compute-0 ceph-mon[74249]: pgmap v1412: 305 pgs: 305 active+clean; 262 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.7 MiB/s wr, 61 op/s
Oct 14 09:03:31 compute-0 ceph-mon[74249]: osdmap e194: 3 total, 3 up, 3 in
Oct 14 09:03:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Oct 14 09:03:31 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Oct 14 09:03:31 compute-0 nova_compute[259627]: 2025-10-14 09:03:31.066 2 DEBUG nova.storage.rbd_utils [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] creating snapshot(snap) on rbd image(03a1c53c-9452-4b5f-b9d0-29539ac9e6c6) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:03:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 285 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 547 op/s
Oct 14 09:03:31 compute-0 nova_compute[259627]: 2025-10-14 09:03:31.961 2 INFO nova.compute.manager [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Rebuilding instance
Oct 14 09:03:32 compute-0 nova_compute[259627]: 2025-10-14 09:03:32.001 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:03:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Oct 14 09:03:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Oct 14 09:03:32 compute-0 ceph-mon[74249]: osdmap e195: 3 total, 3 up, 3 in
Oct 14 09:03:32 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Oct 14 09:03:32 compute-0 nova_compute[259627]: 2025-10-14 09:03:32.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:32 compute-0 nova_compute[259627]: 2025-10-14 09:03:32.221 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'trusted_certs' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:03:32 compute-0 nova_compute[259627]: 2025-10-14 09:03:32.236 2 DEBUG nova.compute.manager [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:03:32 compute-0 nova_compute[259627]: 2025-10-14 09:03:32.295 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'pci_requests' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:03:32 compute-0 nova_compute[259627]: 2025-10-14 09:03:32.307 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'pci_devices' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:03:32 compute-0 nova_compute[259627]: 2025-10-14 09:03:32.319 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'resources' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:03:32 compute-0 nova_compute[259627]: 2025-10-14 09:03:32.339 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'migration_context' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:03:32 compute-0 nova_compute[259627]: 2025-10-14 09:03:32.354 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:03:32 compute-0 nova_compute[259627]: 2025-10-14 09:03:32.358 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:03:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:03:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Oct 14 09:03:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Oct 14 09:03:32 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Oct 14 09:03:32 compute-0 sudo[313960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:03:32 compute-0 sudo[313960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:03:32 compute-0 sudo[313960]: pam_unix(sudo:session): session closed for user root
Oct 14 09:03:32 compute-0 sudo[313985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:03:32 compute-0 sudo[313985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:03:32 compute-0 sudo[313985]: pam_unix(sudo:session): session closed for user root
Oct 14 09:03:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:03:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:03:32 compute-0 sudo[314010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:03:32 compute-0 sudo[314010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:03:32 compute-0 sudo[314010]: pam_unix(sudo:session): session closed for user root
Oct 14 09:03:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:03:32
Oct 14 09:03:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:03:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:03:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'vms', 'backups', 'cephfs.cephfs.data', 'images']
Oct 14 09:03:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:03:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:03:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:03:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:03:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:03:32 compute-0 sudo[314035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:03:32 compute-0 sudo[314035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:03:33 compute-0 ceph-mon[74249]: pgmap v1415: 305 pgs: 305 active+clean; 285 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 547 op/s
Oct 14 09:03:33 compute-0 ceph-mon[74249]: osdmap e196: 3 total, 3 up, 3 in
Oct 14 09:03:33 compute-0 ceph-mon[74249]: osdmap e197: 3 total, 3 up, 3 in
Oct 14 09:03:33 compute-0 sudo[314035]: pam_unix(sudo:session): session closed for user root
Oct 14 09:03:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:03:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:03:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:03:33 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:03:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:03:33 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:03:33 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 097101cc-f3e2-4a4e-be06-0d77e2cd731b does not exist
Oct 14 09:03:33 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 6f1e9222-7c98-4a36-a2d2-7b69ece705d1 does not exist
Oct 14 09:03:33 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 29a49308-7056-4d18-b833-90d996acc8c0 does not exist
Oct 14 09:03:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:03:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:03:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:03:33 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:03:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:03:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:03:33 compute-0 sudo[314091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:03:33 compute-0 sudo[314091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:03:33 compute-0 sudo[314091]: pam_unix(sudo:session): session closed for user root
Oct 14 09:03:33 compute-0 sudo[314116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:03:33 compute-0 sudo[314116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:03:33 compute-0 sudo[314116]: pam_unix(sudo:session): session closed for user root
Oct 14 09:03:33 compute-0 sudo[314141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:03:33 compute-0 sudo[314141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:03:33 compute-0 sudo[314141]: pam_unix(sudo:session): session closed for user root
Oct 14 09:03:33 compute-0 nova_compute[259627]: 2025-10-14 09:03:33.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 285 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 14 MiB/s wr, 642 op/s
Oct 14 09:03:33 compute-0 sudo[314166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:03:33 compute-0 sudo[314166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:03:33 compute-0 nova_compute[259627]: 2025-10-14 09:03:33.862 2 INFO nova.virt.libvirt.driver [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Snapshot image upload complete
Oct 14 09:03:33 compute-0 nova_compute[259627]: 2025-10-14 09:03:33.863 2 INFO nova.compute.manager [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Took 5.15 seconds to snapshot the instance on the hypervisor.
Oct 14 09:03:34 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:03:34 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:03:34 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:03:34 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:03:34 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:03:34 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:03:34 compute-0 podman[314229]: 2025-10-14 09:03:34.095545877 +0000 UTC m=+0.049624241 container create a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_knuth, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 09:03:34 compute-0 systemd[1]: Started libpod-conmon-a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3.scope.
Oct 14 09:03:34 compute-0 podman[314229]: 2025-10-14 09:03:34.072261435 +0000 UTC m=+0.026339799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:03:34 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:03:34 compute-0 podman[314229]: 2025-10-14 09:03:34.203242185 +0000 UTC m=+0.157320549 container init a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_knuth, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:03:34 compute-0 podman[314229]: 2025-10-14 09:03:34.211925119 +0000 UTC m=+0.166003463 container start a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_knuth, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 09:03:34 compute-0 podman[314229]: 2025-10-14 09:03:34.216024609 +0000 UTC m=+0.170102973 container attach a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_knuth, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 09:03:34 compute-0 optimistic_knuth[314245]: 167 167
Oct 14 09:03:34 compute-0 systemd[1]: libpod-a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3.scope: Deactivated successfully.
Oct 14 09:03:34 compute-0 conmon[314245]: conmon a05546547dd6590ca339 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3.scope/container/memory.events
Oct 14 09:03:34 compute-0 podman[314229]: 2025-10-14 09:03:34.220777706 +0000 UTC m=+0.174856060 container died a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_knuth, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 09:03:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-69ddd5c392fb4b2841d7d90b2cd5e1ce1bd71a01e8b92a455db2491b6aa66e6b-merged.mount: Deactivated successfully.
Oct 14 09:03:34 compute-0 podman[314229]: 2025-10-14 09:03:34.265816004 +0000 UTC m=+0.219894348 container remove a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:03:34 compute-0 systemd[1]: libpod-conmon-a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3.scope: Deactivated successfully.
Oct 14 09:03:34 compute-0 podman[314269]: 2025-10-14 09:03:34.46090021 +0000 UTC m=+0.038021346 container create 318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 09:03:34 compute-0 systemd[1]: Started libpod-conmon-318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc.scope.
Oct 14 09:03:34 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9909f8af47df212e6ab354b00f7672253ec4351cc80ca0a0939fc8f910613455/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9909f8af47df212e6ab354b00f7672253ec4351cc80ca0a0939fc8f910613455/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9909f8af47df212e6ab354b00f7672253ec4351cc80ca0a0939fc8f910613455/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:03:34 compute-0 podman[314269]: 2025-10-14 09:03:34.445343338 +0000 UTC m=+0.022464494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9909f8af47df212e6ab354b00f7672253ec4351cc80ca0a0939fc8f910613455/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9909f8af47df212e6ab354b00f7672253ec4351cc80ca0a0939fc8f910613455/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:03:34 compute-0 podman[314269]: 2025-10-14 09:03:34.556739066 +0000 UTC m=+0.133860242 container init 318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_germain, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct 14 09:03:34 compute-0 podman[314269]: 2025-10-14 09:03:34.562384495 +0000 UTC m=+0.139505631 container start 318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_germain, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:03:34 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:03:34 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 19K writes, 76K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 19K writes, 6199 syncs, 3.11 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 12K writes, 47K keys, 12K commit groups, 1.0 writes per commit group, ingest: 49.38 MB, 0.08 MB/s
                                           Interval WAL: 12K writes, 4738 syncs, 2.55 writes per sync, written: 0.05 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:03:34 compute-0 podman[314269]: 2025-10-14 09:03:34.577335033 +0000 UTC m=+0.154456179 container attach 318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_germain, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 09:03:35 compute-0 ceph-mon[74249]: pgmap v1418: 305 pgs: 305 active+clean; 285 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 14 MiB/s wr, 642 op/s
Oct 14 09:03:35 compute-0 nova_compute[259627]: 2025-10-14 09:03:35.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:35 compute-0 suspicious_germain[314286]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:03:35 compute-0 suspicious_germain[314286]: --> relative data size: 1.0
Oct 14 09:03:35 compute-0 suspicious_germain[314286]: --> All data devices are unavailable
Oct 14 09:03:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 327 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 13 MiB/s wr, 528 op/s
Oct 14 09:03:35 compute-0 systemd[1]: libpod-318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc.scope: Deactivated successfully.
Oct 14 09:03:35 compute-0 podman[314269]: 2025-10-14 09:03:35.702811404 +0000 UTC m=+1.279932560 container died 318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_germain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 09:03:35 compute-0 systemd[1]: libpod-318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc.scope: Consumed 1.056s CPU time.
Oct 14 09:03:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9909f8af47df212e6ab354b00f7672253ec4351cc80ca0a0939fc8f910613455-merged.mount: Deactivated successfully.
Oct 14 09:03:35 compute-0 podman[314269]: 2025-10-14 09:03:35.776206998 +0000 UTC m=+1.353328134 container remove 318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct 14 09:03:35 compute-0 systemd[1]: libpod-conmon-318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc.scope: Deactivated successfully.
Oct 14 09:03:35 compute-0 sudo[314166]: pam_unix(sudo:session): session closed for user root
Oct 14 09:03:35 compute-0 podman[314323]: 2025-10-14 09:03:35.818307784 +0000 UTC m=+0.070105705 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:03:35 compute-0 podman[314315]: 2025-10-14 09:03:35.844361774 +0000 UTC m=+0.109711638 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 09:03:35 compute-0 sudo[314365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:03:35 compute-0 sudo[314365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:03:35 compute-0 sudo[314365]: pam_unix(sudo:session): session closed for user root
Oct 14 09:03:35 compute-0 sudo[314391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:03:35 compute-0 sudo[314391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:03:35 compute-0 sudo[314391]: pam_unix(sudo:session): session closed for user root
Oct 14 09:03:35 compute-0 sudo[314416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:03:35 compute-0 sudo[314416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:03:35 compute-0 sudo[314416]: pam_unix(sudo:session): session closed for user root
Oct 14 09:03:36 compute-0 sudo[314441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:03:36 compute-0 sudo[314441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:03:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Oct 14 09:03:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Oct 14 09:03:36 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Oct 14 09:03:36 compute-0 podman[314506]: 2025-10-14 09:03:36.421963346 +0000 UTC m=+0.047032448 container create 44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:03:36 compute-0 systemd[1]: Started libpod-conmon-44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5.scope.
Oct 14 09:03:36 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:03:36 compute-0 podman[314506]: 2025-10-14 09:03:36.396149091 +0000 UTC m=+0.021218243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:03:36 compute-0 podman[314506]: 2025-10-14 09:03:36.516538951 +0000 UTC m=+0.141608073 container init 44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lehmann, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 09:03:36 compute-0 podman[314506]: 2025-10-14 09:03:36.524141048 +0000 UTC m=+0.149210150 container start 44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 09:03:36 compute-0 priceless_lehmann[314522]: 167 167
Oct 14 09:03:36 compute-0 systemd[1]: libpod-44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5.scope: Deactivated successfully.
Oct 14 09:03:36 compute-0 podman[314506]: 2025-10-14 09:03:36.531064668 +0000 UTC m=+0.156133790 container attach 44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lehmann, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:03:36 compute-0 conmon[314522]: conmon 44180dd25a3b4424ff15 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5.scope/container/memory.events
Oct 14 09:03:36 compute-0 podman[314506]: 2025-10-14 09:03:36.532268038 +0000 UTC m=+0.157337140 container died 44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lehmann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 09:03:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-55c20d58613d94b246b9551e6c77513bf642172b0189d31023452c72f6119fcc-merged.mount: Deactivated successfully.
Oct 14 09:03:36 compute-0 podman[314506]: 2025-10-14 09:03:36.576923876 +0000 UTC m=+0.201992978 container remove 44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lehmann, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:03:36 compute-0 systemd[1]: libpod-conmon-44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5.scope: Deactivated successfully.
Oct 14 09:03:36 compute-0 podman[314545]: 2025-10-14 09:03:36.751479948 +0000 UTC m=+0.038010996 container create 6e0ac239049ee9f02b8b4d22c4278ec3259ca482b23f12da131d172e50201391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:03:36 compute-0 systemd[1]: Started libpod-conmon-6e0ac239049ee9f02b8b4d22c4278ec3259ca482b23f12da131d172e50201391.scope.
Oct 14 09:03:36 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb19d5391c275edbd84971213124be18c190e7f231074cc87b57e7eccc1e8b32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb19d5391c275edbd84971213124be18c190e7f231074cc87b57e7eccc1e8b32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb19d5391c275edbd84971213124be18c190e7f231074cc87b57e7eccc1e8b32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb19d5391c275edbd84971213124be18c190e7f231074cc87b57e7eccc1e8b32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:03:36 compute-0 podman[314545]: 2025-10-14 09:03:36.735540776 +0000 UTC m=+0.022071854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:03:36 compute-0 podman[314545]: 2025-10-14 09:03:36.835224997 +0000 UTC m=+0.121756075 container init 6e0ac239049ee9f02b8b4d22c4278ec3259ca482b23f12da131d172e50201391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:03:36 compute-0 podman[314545]: 2025-10-14 09:03:36.84269133 +0000 UTC m=+0.129222388 container start 6e0ac239049ee9f02b8b4d22c4278ec3259ca482b23f12da131d172e50201391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:03:36 compute-0 podman[314545]: 2025-10-14 09:03:36.847060518 +0000 UTC m=+0.133591626 container attach 6e0ac239049ee9f02b8b4d22c4278ec3259ca482b23f12da131d172e50201391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 09:03:36 compute-0 nova_compute[259627]: 2025-10-14 09:03:36.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.251 2 DEBUG oslo_concurrency.lockutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquiring lock "e0bc2109-5f5c-4797-98c7-866f2d11f513" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.251 2 DEBUG oslo_concurrency.lockutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.252 2 DEBUG oslo_concurrency.lockutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquiring lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.252 2 DEBUG oslo_concurrency.lockutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.252 2 DEBUG oslo_concurrency.lockutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.253 2 INFO nova.compute.manager [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Terminating instance
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.254 2 DEBUG nova.compute.manager [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:03:37 compute-0 kernel: tapec24b957-09 (unregistering): left promiscuous mode
Oct 14 09:03:37 compute-0 NetworkManager[44885]: <info>  [1760432617.3158] device (tapec24b957-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:37 compute-0 ovn_controller[152662]: 2025-10-14T09:03:37Z|00445|binding|INFO|Releasing lport ec24b957-093d-460e-a2cf-925bbfd2d421 from this chassis (sb_readonly=0)
Oct 14 09:03:37 compute-0 ovn_controller[152662]: 2025-10-14T09:03:37Z|00446|binding|INFO|Setting lport ec24b957-093d-460e-a2cf-925bbfd2d421 down in Southbound
Oct 14 09:03:37 compute-0 ovn_controller[152662]: 2025-10-14T09:03:37Z|00447|binding|INFO|Removing iface tapec24b957-09 ovn-installed in OVS
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.339 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:b3:92 10.100.0.8'], port_security=['fa:16:3e:b8:b3:92 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e0bc2109-5f5c-4797-98c7-866f2d11f513', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '197096bf838b4b289aed810f1495a6c5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43932228-589c-42e3-996e-587f7969918e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f7b5f101-b0f2-4232-8035-0864b53402a3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ec24b957-093d-460e-a2cf-925bbfd2d421) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.340 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ec24b957-093d-460e-a2cf-925bbfd2d421 in datapath 17ac22f4-a94a-4a44-af02-3207d6bbc30c unbound from our chassis
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.342 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 17ac22f4-a94a-4a44-af02-3207d6bbc30c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.344 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b9a882c0-9aa6-466b-85d9-542dd97b52a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.345 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c namespace which is not needed anymore
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:37 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000031.scope: Deactivated successfully.
Oct 14 09:03:37 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000031.scope: Consumed 12.428s CPU time.
Oct 14 09:03:37 compute-0 systemd-machined[214636]: Machine qemu-60-instance-00000031 terminated.
Oct 14 09:03:37 compute-0 ceph-mon[74249]: pgmap v1419: 305 pgs: 305 active+clean; 327 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 13 MiB/s wr, 528 op/s
Oct 14 09:03:37 compute-0 ceph-mon[74249]: osdmap e198: 3 total, 3 up, 3 in
Oct 14 09:03:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Oct 14 09:03:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Oct 14 09:03:37 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Oct 14 09:03:37 compute-0 kernel: tapec24b957-09: entered promiscuous mode
Oct 14 09:03:37 compute-0 NetworkManager[44885]: <info>  [1760432617.4712] manager: (tapec24b957-09): new Tun device (/org/freedesktop/NetworkManager/Devices/200)
Oct 14 09:03:37 compute-0 ovn_controller[152662]: 2025-10-14T09:03:37Z|00448|binding|INFO|Claiming lport ec24b957-093d-460e-a2cf-925bbfd2d421 for this chassis.
Oct 14 09:03:37 compute-0 ovn_controller[152662]: 2025-10-14T09:03:37Z|00449|binding|INFO|ec24b957-093d-460e-a2cf-925bbfd2d421: Claiming fa:16:3e:b8:b3:92 10.100.0.8
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:37 compute-0 kernel: tapec24b957-09 (unregistering): left promiscuous mode
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.496 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:b3:92 10.100.0.8'], port_security=['fa:16:3e:b8:b3:92 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e0bc2109-5f5c-4797-98c7-866f2d11f513', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '197096bf838b4b289aed810f1495a6c5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43932228-589c-42e3-996e-587f7969918e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f7b5f101-b0f2-4232-8035-0864b53402a3, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ec24b957-093d-460e-a2cf-925bbfd2d421) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.498 2 INFO nova.virt.libvirt.driver [-] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Instance destroyed successfully.
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.498 2 DEBUG nova.objects.instance [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lazy-loading 'resources' on Instance uuid e0bc2109-5f5c-4797-98c7-866f2d11f513 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:03:37 compute-0 neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c[313189]: [NOTICE]   (313193) : haproxy version is 2.8.14-c23fe91
Oct 14 09:03:37 compute-0 neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c[313189]: [NOTICE]   (313193) : path to executable is /usr/sbin/haproxy
Oct 14 09:03:37 compute-0 neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c[313189]: [WARNING]  (313193) : Exiting Master process...
Oct 14 09:03:37 compute-0 neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c[313189]: [ALERT]    (313193) : Current worker (313195) exited with code 143 (Terminated)
Oct 14 09:03:37 compute-0 neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c[313189]: [WARNING]  (313193) : All workers exited. Exiting... (0)
Oct 14 09:03:37 compute-0 systemd[1]: libpod-2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed.scope: Deactivated successfully.
Oct 14 09:03:37 compute-0 conmon[313189]: conmon 2d35b2e3998973554c79 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed.scope/container/memory.events
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:37 compute-0 ovn_controller[152662]: 2025-10-14T09:03:37Z|00450|binding|INFO|Releasing lport ec24b957-093d-460e-a2cf-925bbfd2d421 from this chassis (sb_readonly=0)
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.513 2 DEBUG nova.virt.libvirt.vif [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:03:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-664198077',display_name='tempest-ImagesOneServerTestJSON-server-664198077',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-664198077',id=49,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:03:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='197096bf838b4b289aed810f1495a6c5',ramdisk_id='',reservation_id='r-youk5hnv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-1747208535',owner_user_name='tempest-ImagesOneServerTestJSON-1747208535-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:03:33Z,user_data=None,user_id='f302a20e13b14bb999539ee5df041036',uuid=e0bc2109-5f5c-4797-98c7-866f2d11f513,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.513 2 DEBUG nova.network.os_vif_util [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Converting VIF {"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.515 2 DEBUG nova.network.os_vif_util [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b3:92,bridge_name='br-int',has_traffic_filtering=True,id=ec24b957-093d-460e-a2cf-925bbfd2d421,network=Network(17ac22f4-a94a-4a44-af02-3207d6bbc30c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec24b957-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.515 2 DEBUG os_vif [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b3:92,bridge_name='br-int',has_traffic_filtering=True,id=ec24b957-093d-460e-a2cf-925bbfd2d421,network=Network(17ac22f4-a94a-4a44-af02-3207d6bbc30c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec24b957-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec24b957-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:37 compute-0 podman[314591]: 2025-10-14 09:03:37.518827914 +0000 UTC m=+0.070419482 container died 2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.525 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:b3:92 10.100.0.8'], port_security=['fa:16:3e:b8:b3:92 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e0bc2109-5f5c-4797-98c7-866f2d11f513', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '197096bf838b4b289aed810f1495a6c5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43932228-589c-42e3-996e-587f7969918e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f7b5f101-b0f2-4232-8035-0864b53402a3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ec24b957-093d-460e-a2cf-925bbfd2d421) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.530 2 INFO os_vif [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b3:92,bridge_name='br-int',has_traffic_filtering=True,id=ec24b957-093d-460e-a2cf-925bbfd2d421,network=Network(17ac22f4-a94a-4a44-af02-3207d6bbc30c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec24b957-09')
Oct 14 09:03:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed-userdata-shm.mount: Deactivated successfully.
Oct 14 09:03:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0f27f2d659583d74823866ac940d88f699c60f5ba72cd023b094e06f8a3931c-merged.mount: Deactivated successfully.
Oct 14 09:03:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:03:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Oct 14 09:03:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Oct 14 09:03:37 compute-0 podman[314591]: 2025-10-14 09:03:37.631449005 +0000 UTC m=+0.183040563 container cleanup 2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]: {
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:     "0": [
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:         {
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "devices": [
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "/dev/loop3"
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             ],
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "lv_name": "ceph_lv0",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "lv_size": "21470642176",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "name": "ceph_lv0",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "tags": {
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.cluster_name": "ceph",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.crush_device_class": "",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.encrypted": "0",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.osd_id": "0",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.type": "block",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.vdo": "0"
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             },
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "type": "block",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "vg_name": "ceph_vg0"
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:         }
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:     ],
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:     "1": [
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:         {
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "devices": [
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "/dev/loop4"
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             ],
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "lv_name": "ceph_lv1",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "lv_size": "21470642176",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "name": "ceph_lv1",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "tags": {
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.cluster_name": "ceph",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.crush_device_class": "",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.encrypted": "0",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.osd_id": "1",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.type": "block",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.vdo": "0"
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             },
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "type": "block",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "vg_name": "ceph_vg1"
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:         }
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:     ],
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:     "2": [
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:         {
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "devices": [
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "/dev/loop5"
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             ],
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "lv_name": "ceph_lv2",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "lv_size": "21470642176",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "name": "ceph_lv2",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "tags": {
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.cluster_name": "ceph",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.crush_device_class": "",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.encrypted": "0",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.osd_id": "2",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.type": "block",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:                 "ceph.vdo": "0"
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             },
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "type": "block",
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:             "vg_name": "ceph_vg2"
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:         }
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]:     ]
Oct 14 09:03:37 compute-0 funny_chandrasekhar[314563]: }
Oct 14 09:03:37 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Oct 14 09:03:37 compute-0 systemd[1]: libpod-conmon-2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed.scope: Deactivated successfully.
Oct 14 09:03:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 327 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 83 op/s
Oct 14 09:03:37 compute-0 systemd[1]: libpod-6e0ac239049ee9f02b8b4d22c4278ec3259ca482b23f12da131d172e50201391.scope: Deactivated successfully.
Oct 14 09:03:37 compute-0 podman[314545]: 2025-10-14 09:03:37.671414536 +0000 UTC m=+0.957945594 container died 6e0ac239049ee9f02b8b4d22c4278ec3259ca482b23f12da131d172e50201391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:03:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb19d5391c275edbd84971213124be18c190e7f231074cc87b57e7eccc1e8b32-merged.mount: Deactivated successfully.
Oct 14 09:03:37 compute-0 podman[314643]: 2025-10-14 09:03:37.720738938 +0000 UTC m=+0.052343037 container remove 2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.728 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[defa7945-225b-43bf-a2d9-3ee5c7a527dd]: (4, ('Tue Oct 14 09:03:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c (2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed)\n2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed\nTue Oct 14 09:03:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c (2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed)\n2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.731 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[693aa322-4bd3-4c9d-a847-87c0de9a4923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.732 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17ac22f4-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:37 compute-0 kernel: tap17ac22f4-a0: left promiscuous mode
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:37 compute-0 podman[314545]: 2025-10-14 09:03:37.742485713 +0000 UTC m=+1.029016771 container remove 6e0ac239049ee9f02b8b4d22c4278ec3259ca482b23f12da131d172e50201391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct 14 09:03:37 compute-0 systemd[1]: libpod-conmon-6e0ac239049ee9f02b8b4d22c4278ec3259ca482b23f12da131d172e50201391.scope: Deactivated successfully.
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.758 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e0251ad9-b43f-4723-b118-42551af44172]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:37 compute-0 nova_compute[259627]: 2025-10-14 09:03:37.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:37 compute-0 sudo[314441]: pam_unix(sudo:session): session closed for user root
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.792 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1317886b-7355-44ed-ac96-b8d1a7cb9843]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.795 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1b98da0b-38ba-468f-99d1-ff20c4a461a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.810 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b935ff-05aa-4424-b273-c685a20aec2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637406, 'reachable_time': 18355, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314677, 'error': None, 'target': 'ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d17ac22f4\x2da94a\x2d4a44\x2daf02\x2d3207d6bbc30c.mount: Deactivated successfully.
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.813 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.813 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1cabe8-835f-44c2-96c3-7833e53c4fff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.816 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ec24b957-093d-460e-a2cf-925bbfd2d421 in datapath 17ac22f4-a94a-4a44-af02-3207d6bbc30c unbound from our chassis
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.818 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 17ac22f4-a94a-4a44-af02-3207d6bbc30c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.818 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3b112c84-0e8d-440a-859e-fd29d07545d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.819 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ec24b957-093d-460e-a2cf-925bbfd2d421 in datapath 17ac22f4-a94a-4a44-af02-3207d6bbc30c unbound from our chassis
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.820 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 17ac22f4-a94a-4a44-af02-3207d6bbc30c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:03:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.821 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f09731ae-e82d-4e68-943b-c3ab79850612]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:37 compute-0 sudo[314669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:03:37 compute-0 sudo[314669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:03:37 compute-0 sudo[314669]: pam_unix(sudo:session): session closed for user root
Oct 14 09:03:37 compute-0 sudo[314695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:03:37 compute-0 sudo[314695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:03:37 compute-0 sudo[314695]: pam_unix(sudo:session): session closed for user root
Oct 14 09:03:37 compute-0 sudo[314721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:03:37 compute-0 sudo[314721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:03:37 compute-0 sudo[314721]: pam_unix(sudo:session): session closed for user root
Oct 14 09:03:38 compute-0 sudo[314746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:03:38 compute-0 sudo[314746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:03:38 compute-0 nova_compute[259627]: 2025-10-14 09:03:38.057 2 INFO nova.virt.libvirt.driver [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Deleting instance files /var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513_del
Oct 14 09:03:38 compute-0 nova_compute[259627]: 2025-10-14 09:03:38.058 2 INFO nova.virt.libvirt.driver [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Deletion of /var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513_del complete
Oct 14 09:03:38 compute-0 nova_compute[259627]: 2025-10-14 09:03:38.116 2 INFO nova.compute.manager [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Took 0.86 seconds to destroy the instance on the hypervisor.
Oct 14 09:03:38 compute-0 nova_compute[259627]: 2025-10-14 09:03:38.116 2 DEBUG oslo.service.loopingcall [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:03:38 compute-0 nova_compute[259627]: 2025-10-14 09:03:38.117 2 DEBUG nova.compute.manager [-] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:03:38 compute-0 nova_compute[259627]: 2025-10-14 09:03:38.117 2 DEBUG nova.network.neutron [-] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:03:38 compute-0 podman[314812]: 2025-10-14 09:03:38.369167801 +0000 UTC m=+0.048885853 container create 056837eaaa9e966815e18c016cf7c693fc2dd9a4ad420e1c5f0ad80da9d23828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:03:38 compute-0 systemd[1]: Started libpod-conmon-056837eaaa9e966815e18c016cf7c693fc2dd9a4ad420e1c5f0ad80da9d23828.scope.
Oct 14 09:03:38 compute-0 ceph-mon[74249]: osdmap e199: 3 total, 3 up, 3 in
Oct 14 09:03:38 compute-0 ceph-mon[74249]: osdmap e200: 3 total, 3 up, 3 in
Oct 14 09:03:38 compute-0 podman[314812]: 2025-10-14 09:03:38.349717533 +0000 UTC m=+0.029435595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:03:38 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:03:38 compute-0 podman[314812]: 2025-10-14 09:03:38.477596957 +0000 UTC m=+0.157315029 container init 056837eaaa9e966815e18c016cf7c693fc2dd9a4ad420e1c5f0ad80da9d23828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 14 09:03:38 compute-0 podman[314812]: 2025-10-14 09:03:38.48622932 +0000 UTC m=+0.165947332 container start 056837eaaa9e966815e18c016cf7c693fc2dd9a4ad420e1c5f0ad80da9d23828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:03:38 compute-0 podman[314812]: 2025-10-14 09:03:38.492154985 +0000 UTC m=+0.171873047 container attach 056837eaaa9e966815e18c016cf7c693fc2dd9a4ad420e1c5f0ad80da9d23828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 09:03:38 compute-0 bold_merkle[314828]: 167 167
Oct 14 09:03:38 compute-0 systemd[1]: libpod-056837eaaa9e966815e18c016cf7c693fc2dd9a4ad420e1c5f0ad80da9d23828.scope: Deactivated successfully.
Oct 14 09:03:38 compute-0 podman[314812]: 2025-10-14 09:03:38.497473686 +0000 UTC m=+0.177191698 container died 056837eaaa9e966815e18c016cf7c693fc2dd9a4ad420e1c5f0ad80da9d23828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:03:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ace89244b44c3003ba7262e9f0b76d9c34dfc1e2306e1bb9c822a9ab49baf77-merged.mount: Deactivated successfully.
Oct 14 09:03:38 compute-0 podman[314812]: 2025-10-14 09:03:38.546897261 +0000 UTC m=+0.226615273 container remove 056837eaaa9e966815e18c016cf7c693fc2dd9a4ad420e1c5f0ad80da9d23828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 09:03:38 compute-0 systemd[1]: libpod-conmon-056837eaaa9e966815e18c016cf7c693fc2dd9a4ad420e1c5f0ad80da9d23828.scope: Deactivated successfully.
Oct 14 09:03:38 compute-0 podman[314853]: 2025-10-14 09:03:38.727796359 +0000 UTC m=+0.044802573 container create 452eb90a05a7257aa1dee4ac2dfb8f473ddfbee4df864013134fef60ca693415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 09:03:38 compute-0 systemd[1]: Started libpod-conmon-452eb90a05a7257aa1dee4ac2dfb8f473ddfbee4df864013134fef60ca693415.scope.
Oct 14 09:03:38 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4abce03d93b81154ba28c5170f29ded231cc1aa44e4926ae9f1a6a39156508/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4abce03d93b81154ba28c5170f29ded231cc1aa44e4926ae9f1a6a39156508/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4abce03d93b81154ba28c5170f29ded231cc1aa44e4926ae9f1a6a39156508/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4abce03d93b81154ba28c5170f29ded231cc1aa44e4926ae9f1a6a39156508/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:03:38 compute-0 podman[314853]: 2025-10-14 09:03:38.709845328 +0000 UTC m=+0.026851562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:03:38 compute-0 podman[314853]: 2025-10-14 09:03:38.805636863 +0000 UTC m=+0.122643097 container init 452eb90a05a7257aa1dee4ac2dfb8f473ddfbee4df864013134fef60ca693415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:03:38 compute-0 podman[314853]: 2025-10-14 09:03:38.811949808 +0000 UTC m=+0.128956022 container start 452eb90a05a7257aa1dee4ac2dfb8f473ddfbee4df864013134fef60ca693415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_archimedes, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:03:38 compute-0 podman[314853]: 2025-10-14 09:03:38.815217588 +0000 UTC m=+0.132223852 container attach 452eb90a05a7257aa1dee4ac2dfb8f473ddfbee4df864013134fef60ca693415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_archimedes, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 09:03:39 compute-0 nova_compute[259627]: 2025-10-14 09:03:39.393 2 DEBUG nova.network.neutron [-] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:03:39 compute-0 nova_compute[259627]: 2025-10-14 09:03:39.417 2 INFO nova.compute.manager [-] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Took 1.30 seconds to deallocate network for instance.
Oct 14 09:03:39 compute-0 ceph-mon[74249]: pgmap v1423: 305 pgs: 305 active+clean; 327 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 83 op/s
Oct 14 09:03:39 compute-0 nova_compute[259627]: 2025-10-14 09:03:39.474 2 DEBUG oslo_concurrency.lockutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:39 compute-0 nova_compute[259627]: 2025-10-14 09:03:39.475 2 DEBUG oslo_concurrency.lockutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:39 compute-0 nova_compute[259627]: 2025-10-14 09:03:39.540 2 DEBUG nova.compute.manager [req-88538669-7805-4271-bd0b-689c2c37ac80 req-6a82fc19-be16-4856-b06f-8ec2d7a009b2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Received event network-vif-deleted-ec24b957-093d-460e-a2cf-925bbfd2d421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:03:39 compute-0 nova_compute[259627]: 2025-10-14 09:03:39.561 2 DEBUG oslo_concurrency.processutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 327 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.0 MiB/s wr, 70 op/s
Oct 14 09:03:39 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:03:39 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 16K writes, 66K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 16K writes, 5485 syncs, 3.09 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 41.20 MB, 0.07 MB/s
                                           Interval WAL: 10K writes, 4359 syncs, 2.48 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:03:39 compute-0 funny_archimedes[314870]: {
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:         "osd_id": 2,
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:         "type": "bluestore"
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:     },
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:         "osd_id": 1,
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:         "type": "bluestore"
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:     },
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:         "osd_id": 0,
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:         "type": "bluestore"
Oct 14 09:03:39 compute-0 funny_archimedes[314870]:     }
Oct 14 09:03:39 compute-0 funny_archimedes[314870]: }
Oct 14 09:03:39 compute-0 systemd[1]: libpod-452eb90a05a7257aa1dee4ac2dfb8f473ddfbee4df864013134fef60ca693415.scope: Deactivated successfully.
Oct 14 09:03:39 compute-0 podman[314853]: 2025-10-14 09:03:39.818502695 +0000 UTC m=+1.135508919 container died 452eb90a05a7257aa1dee4ac2dfb8f473ddfbee4df864013134fef60ca693415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:03:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f4abce03d93b81154ba28c5170f29ded231cc1aa44e4926ae9f1a6a39156508-merged.mount: Deactivated successfully.
Oct 14 09:03:39 compute-0 podman[314853]: 2025-10-14 09:03:39.879928675 +0000 UTC m=+1.196934889 container remove 452eb90a05a7257aa1dee4ac2dfb8f473ddfbee4df864013134fef60ca693415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 09:03:39 compute-0 systemd[1]: libpod-conmon-452eb90a05a7257aa1dee4ac2dfb8f473ddfbee4df864013134fef60ca693415.scope: Deactivated successfully.
Oct 14 09:03:39 compute-0 sudo[314746]: pam_unix(sudo:session): session closed for user root
Oct 14 09:03:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:03:39 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:03:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:03:39 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:03:39 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev d27e72a3-6703-4150-8a05-07e01c2038f5 does not exist
Oct 14 09:03:39 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 1a418c26-05a4-4720-906c-4e5eb1164e93 does not exist
Oct 14 09:03:39 compute-0 sudo[314935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:03:39 compute-0 sudo[314935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:03:39 compute-0 sudo[314935]: pam_unix(sudo:session): session closed for user root
Oct 14 09:03:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:03:40 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2747435737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:03:40 compute-0 sudo[314960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:03:40 compute-0 sudo[314960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:03:40 compute-0 sudo[314960]: pam_unix(sudo:session): session closed for user root
Oct 14 09:03:40 compute-0 nova_compute[259627]: 2025-10-14 09:03:40.062 2 DEBUG oslo_concurrency.processutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:40 compute-0 nova_compute[259627]: 2025-10-14 09:03:40.068 2 DEBUG nova.compute.provider_tree [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:03:40 compute-0 nova_compute[259627]: 2025-10-14 09:03:40.090 2 DEBUG nova.scheduler.client.report [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:03:40 compute-0 nova_compute[259627]: 2025-10-14 09:03:40.184 2 DEBUG oslo_concurrency.lockutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:40 compute-0 nova_compute[259627]: 2025-10-14 09:03:40.215 2 INFO nova.scheduler.client.report [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Deleted allocations for instance e0bc2109-5f5c-4797-98c7-866f2d11f513
Oct 14 09:03:40 compute-0 nova_compute[259627]: 2025-10-14 09:03:40.281 2 DEBUG oslo_concurrency.lockutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:40 compute-0 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 09:03:40 compute-0 ceph-mon[74249]: pgmap v1424: 305 pgs: 305 active+clean; 327 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.0 MiB/s wr, 70 op/s
Oct 14 09:03:40 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:03:40 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:03:40 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2747435737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:03:40 compute-0 ovn_controller[152662]: 2025-10-14T09:03:40Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:e4:cb 10.100.0.7
Oct 14 09:03:40 compute-0 ovn_controller[152662]: 2025-10-14T09:03:40Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:e4:cb 10.100.0.7
Oct 14 09:03:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:41.068 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:03:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:41.070 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:03:41 compute-0 nova_compute[259627]: 2025-10-14 09:03:41.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 202 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 709 KiB/s rd, 4.3 MiB/s wr, 247 op/s
Oct 14 09:03:42 compute-0 nova_compute[259627]: 2025-10-14 09:03:42.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:42 compute-0 nova_compute[259627]: 2025-10-14 09:03:42.412 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 09:03:42 compute-0 nova_compute[259627]: 2025-10-14 09:03:42.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:03:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Oct 14 09:03:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Oct 14 09:03:42 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Oct 14 09:03:42 compute-0 ovn_controller[152662]: 2025-10-14T09:03:42Z|00451|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 09:03:42 compute-0 nova_compute[259627]: 2025-10-14 09:03:42.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:42 compute-0 ceph-mon[74249]: pgmap v1425: 305 pgs: 305 active+clean; 202 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 709 KiB/s rd, 4.3 MiB/s wr, 247 op/s
Oct 14 09:03:42 compute-0 ceph-mon[74249]: osdmap e201: 3 total, 3 up, 3 in
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015175922421371362 of space, bias 1.0, pg target 0.4552776726411409 quantized to 32 (current 32)
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663670272514163 of space, bias 1.0, pg target 0.19991010817542487 quantized to 32 (current 32)
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:03:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:03:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 202 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 681 KiB/s rd, 4.1 MiB/s wr, 237 op/s
Oct 14 09:03:43 compute-0 ovn_controller[152662]: 2025-10-14T09:03:43Z|00452|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 09:03:43 compute-0 nova_compute[259627]: 2025-10-14 09:03:43.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:44 compute-0 kernel: tapd1066ec7-d9 (unregistering): left promiscuous mode
Oct 14 09:03:44 compute-0 NetworkManager[44885]: <info>  [1760432624.7222] device (tapd1066ec7-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:03:44 compute-0 nova_compute[259627]: 2025-10-14 09:03:44.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:44 compute-0 ovn_controller[152662]: 2025-10-14T09:03:44Z|00453|binding|INFO|Releasing lport d1066ec7-d932-4d99-aff7-7f7e80c54724 from this chassis (sb_readonly=0)
Oct 14 09:03:44 compute-0 ovn_controller[152662]: 2025-10-14T09:03:44Z|00454|binding|INFO|Setting lport d1066ec7-d932-4d99-aff7-7f7e80c54724 down in Southbound
Oct 14 09:03:44 compute-0 ovn_controller[152662]: 2025-10-14T09:03:44Z|00455|binding|INFO|Removing iface tapd1066ec7-d9 ovn-installed in OVS
Oct 14 09:03:44 compute-0 nova_compute[259627]: 2025-10-14 09:03:44.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.757 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:e4:cb 10.100.0.7'], port_security=['fa:16:3e:ba:e4:cb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0e3897ff-4300-4387-bfc4-36acf3f6c752', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d1066ec7-d932-4d99-aff7-7f7e80c54724) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:03:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.758 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d1066ec7-d932-4d99-aff7-7f7e80c54724 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 unbound from our chassis
Oct 14 09:03:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.759 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:03:44 compute-0 nova_compute[259627]: 2025-10-14 09:03:44.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:44 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000032.scope: Deactivated successfully.
Oct 14 09:03:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.780 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ca23a335-5a49-490b-a2bd-f81ba21df87f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:44 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000032.scope: Consumed 12.853s CPU time.
Oct 14 09:03:44 compute-0 systemd-machined[214636]: Machine qemu-61-instance-00000032 terminated.
Oct 14 09:03:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.811 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc26749-eff9-49dc-880b-aa9091a782f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.815 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b5ac4c-94d1-48d4-864e-c04df80c264f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.844 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ed81fbeb-0a4f-4b33-9cbb-88438574347e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.862 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e5531dba-8f4b-4e7f-b21d-2ea07a3140f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633775, 'reachable_time': 24446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314999, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.879 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[40fb4e1a-52f8-4de6-9bc1-e3d90272f3e8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7c6eb8a4-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633788, 'tstamp': 633788}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315000, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7c6eb8a4-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633791, 'tstamp': 633791}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315000, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.881 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:44 compute-0 nova_compute[259627]: 2025-10-14 09:03:44.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:44 compute-0 nova_compute[259627]: 2025-10-14 09:03:44.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.887 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6eb8a4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.887 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:03:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.887 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c6eb8a4-60, col_values=(('external_ids', {'iface-id': '0f8f2393-a280-4a6e-ac22-da67bf8d74da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.888 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:03:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Oct 14 09:03:44 compute-0 ceph-mon[74249]: pgmap v1427: 305 pgs: 305 active+clean; 202 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 681 KiB/s rd, 4.1 MiB/s wr, 237 op/s
Oct 14 09:03:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Oct 14 09:03:44 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Oct 14 09:03:44 compute-0 nova_compute[259627]: 2025-10-14 09:03:44.981 2 DEBUG nova.compute.manager [req-5fc091f0-a8b8-489b-bff5-bf41e137c271 req-45ec606c-324b-4b59-85af-71cbcdf5fe79 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-unplugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:03:44 compute-0 nova_compute[259627]: 2025-10-14 09:03:44.982 2 DEBUG oslo_concurrency.lockutils [req-5fc091f0-a8b8-489b-bff5-bf41e137c271 req-45ec606c-324b-4b59-85af-71cbcdf5fe79 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:44 compute-0 nova_compute[259627]: 2025-10-14 09:03:44.982 2 DEBUG oslo_concurrency.lockutils [req-5fc091f0-a8b8-489b-bff5-bf41e137c271 req-45ec606c-324b-4b59-85af-71cbcdf5fe79 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:44 compute-0 nova_compute[259627]: 2025-10-14 09:03:44.983 2 DEBUG oslo_concurrency.lockutils [req-5fc091f0-a8b8-489b-bff5-bf41e137c271 req-45ec606c-324b-4b59-85af-71cbcdf5fe79 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:44 compute-0 nova_compute[259627]: 2025-10-14 09:03:44.983 2 DEBUG nova.compute.manager [req-5fc091f0-a8b8-489b-bff5-bf41e137c271 req-45ec606c-324b-4b59-85af-71cbcdf5fe79 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] No waiting events found dispatching network-vif-unplugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:03:44 compute-0 nova_compute[259627]: 2025-10-14 09:03:44.983 2 WARNING nova.compute.manager [req-5fc091f0-a8b8-489b-bff5-bf41e137c271 req-45ec606c-324b-4b59-85af-71cbcdf5fe79 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received unexpected event network-vif-unplugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 for instance with vm_state active and task_state rebuilding.
Oct 14 09:03:45 compute-0 nova_compute[259627]: 2025-10-14 09:03:45.430 2 INFO nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance shutdown successfully after 13 seconds.
Oct 14 09:03:45 compute-0 nova_compute[259627]: 2025-10-14 09:03:45.437 2 INFO nova.virt.libvirt.driver [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance destroyed successfully.
Oct 14 09:03:45 compute-0 nova_compute[259627]: 2025-10-14 09:03:45.443 2 INFO nova.virt.libvirt.driver [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance destroyed successfully.
Oct 14 09:03:45 compute-0 nova_compute[259627]: 2025-10-14 09:03:45.444 2 DEBUG nova.virt.libvirt.vif [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:03:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1481502960',display_name='tempest-ServerActionsTestJSON-server-1331294446',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1481502960',id=50,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:03:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-hqb4aruj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project
-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:03:31Z,user_data=None,user_id='aa32af91355a41198fd57121e5c70ec2',uuid=dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:03:45 compute-0 nova_compute[259627]: 2025-10-14 09:03:45.445 2 DEBUG nova.network.os_vif_util [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:03:45 compute-0 nova_compute[259627]: 2025-10-14 09:03:45.446 2 DEBUG nova.network.os_vif_util [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:03:45 compute-0 nova_compute[259627]: 2025-10-14 09:03:45.447 2 DEBUG os_vif [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:03:45 compute-0 nova_compute[259627]: 2025-10-14 09:03:45.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:45 compute-0 nova_compute[259627]: 2025-10-14 09:03:45.449 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1066ec7-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:45 compute-0 nova_compute[259627]: 2025-10-14 09:03:45.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:45 compute-0 nova_compute[259627]: 2025-10-14 09:03:45.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:03:45 compute-0 nova_compute[259627]: 2025-10-14 09:03:45.457 2 INFO os_vif [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9')
Oct 14 09:03:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 226 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 583 KiB/s rd, 17 MiB/s wr, 259 op/s
Oct 14 09:03:45 compute-0 nova_compute[259627]: 2025-10-14 09:03:45.900 2 INFO nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Deleting instance files /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_del
Oct 14 09:03:45 compute-0 nova_compute[259627]: 2025-10-14 09:03:45.901 2 INFO nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Deletion of /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_del complete
Oct 14 09:03:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Oct 14 09:03:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Oct 14 09:03:45 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Oct 14 09:03:45 compute-0 ceph-mon[74249]: osdmap e202: 3 total, 3 up, 3 in
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.065 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.066 2 INFO nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Creating image(s)
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.088 2 DEBUG nova.storage.rbd_utils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.113 2 DEBUG nova.storage.rbd_utils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.139 2 DEBUG nova.storage.rbd_utils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.143 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.207 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.208 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.208 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.209 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.236 2 DEBUG nova.storage.rbd_utils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.242 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.494 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.582 2 DEBUG nova.storage.rbd_utils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] resizing rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.672 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.673 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Ensure instance console log exists: /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.674 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.674 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.675 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.678 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Start _get_guest_xml network_info=[{"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.682 2 WARNING nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.687 2 DEBUG nova.virt.libvirt.host [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.688 2 DEBUG nova.virt.libvirt.host [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.692 2 DEBUG nova.virt.libvirt.host [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.692 2 DEBUG nova.virt.libvirt.host [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.692 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.693 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.693 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.694 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.694 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.694 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.695 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.695 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.695 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.696 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.696 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.696 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.697 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'vcpu_model' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:03:46 compute-0 nova_compute[259627]: 2025-10-14 09:03:46.719 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:46 compute-0 ceph-mon[74249]: pgmap v1429: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 226 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 583 KiB/s rd, 17 MiB/s wr, 259 op/s
Oct 14 09:03:46 compute-0 ceph-mon[74249]: osdmap e203: 3 total, 3 up, 3 in
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.136 2 DEBUG nova.compute.manager [req-028a353d-05cd-4e6c-aa8e-974cd331a1bd req-badc62d4-89be-48f5-828b-86794c8cde3f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.137 2 DEBUG oslo_concurrency.lockutils [req-028a353d-05cd-4e6c-aa8e-974cd331a1bd req-badc62d4-89be-48f5-828b-86794c8cde3f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.138 2 DEBUG oslo_concurrency.lockutils [req-028a353d-05cd-4e6c-aa8e-974cd331a1bd req-badc62d4-89be-48f5-828b-86794c8cde3f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.138 2 DEBUG oslo_concurrency.lockutils [req-028a353d-05cd-4e6c-aa8e-974cd331a1bd req-badc62d4-89be-48f5-828b-86794c8cde3f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.138 2 DEBUG nova.compute.manager [req-028a353d-05cd-4e6c-aa8e-974cd331a1bd req-badc62d4-89be-48f5-828b-86794c8cde3f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] No waiting events found dispatching network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.139 2 WARNING nova.compute.manager [req-028a353d-05cd-4e6c-aa8e-974cd331a1bd req-badc62d4-89be-48f5-828b-86794c8cde3f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received unexpected event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 for instance with vm_state active and task_state rebuild_spawning.
Oct 14 09:03:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:03:47 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510293326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.207 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.240 2 DEBUG nova.storage.rbd_utils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.244 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:03:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 226 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 19 MiB/s wr, 98 op/s
Oct 14 09:03:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:03:47 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1047009119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.697 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.698 2 DEBUG nova.virt.libvirt.vif [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:03:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1481502960',display_name='tempest-ServerActionsTestJSON-server-1331294446',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1481502960',id=50,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:03:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-hqb4aruj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:03:46Z,user_data=None,user_id='aa32af91355a41198fd57121e5c70ec2',uuid=dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.699 2 DEBUG nova.network.os_vif_util [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.700 2 DEBUG nova.network.os_vif_util [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.702 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:03:47 compute-0 nova_compute[259627]:   <uuid>dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9</uuid>
Oct 14 09:03:47 compute-0 nova_compute[259627]:   <name>instance-00000032</name>
Oct 14 09:03:47 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:03:47 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:03:47 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerActionsTestJSON-server-1331294446</nova:name>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:03:46</nova:creationTime>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:03:47 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:03:47 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:03:47 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:03:47 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:03:47 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:03:47 compute-0 nova_compute[259627]:         <nova:user uuid="aa32af91355a41198fd57121e5c70ec2">tempest-ServerActionsTestJSON-1593617559-project-member</nova:user>
Oct 14 09:03:47 compute-0 nova_compute[259627]:         <nova:project uuid="368d762ed02e459d892ad1e5488c2871">tempest-ServerActionsTestJSON-1593617559</nova:project>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="e2368e3e-f504-40e6-a9d3-67df18c845bb"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:03:47 compute-0 nova_compute[259627]:         <nova:port uuid="d1066ec7-d932-4d99-aff7-7f7e80c54724">
Oct 14 09:03:47 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:03:47 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:03:47 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <system>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <entry name="serial">dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9</entry>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <entry name="uuid">dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9</entry>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     </system>
Oct 14 09:03:47 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:03:47 compute-0 nova_compute[259627]:   <os>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:   </os>
Oct 14 09:03:47 compute-0 nova_compute[259627]:   <features>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:   </features>
Oct 14 09:03:47 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:03:47 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:03:47 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk">
Oct 14 09:03:47 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       </source>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:03:47 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config">
Oct 14 09:03:47 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       </source>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:03:47 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:ba:e4:cb"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <target dev="tapd1066ec7-d9"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/console.log" append="off"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <video>
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     </video>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:03:47 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:03:47 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:03:47 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:03:47 compute-0 nova_compute[259627]: </domain>
Oct 14 09:03:47 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.703 2 DEBUG nova.compute.manager [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Preparing to wait for external event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.703 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.704 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.704 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.704 2 DEBUG nova.virt.libvirt.vif [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:03:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1481502960',display_name='tempest-ServerActionsTestJSON-server-1331294446',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1481502960',id=50,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:03:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-hqb4aruj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:03:46Z,user_data=None,user_id='aa32af91355a41198fd57121e5c70ec2',uuid=dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.705 2 DEBUG nova.network.os_vif_util [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.705 2 DEBUG nova.network.os_vif_util [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.705 2 DEBUG os_vif [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.706 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.707 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd1066ec7-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd1066ec7-d9, col_values=(('external_ids', {'iface-id': 'd1066ec7-d932-4d99-aff7-7f7e80c54724', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:e4:cb', 'vm-uuid': 'dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:47 compute-0 NetworkManager[44885]: <info>  [1760432627.7120] manager: (tapd1066ec7-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.720 2 INFO os_vif [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9')
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.791 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.792 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.793 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] No VIF found with MAC fa:16:3e:ba:e4:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.794 2 INFO nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Using config drive
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.839 2 DEBUG nova.storage.rbd_utils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.859 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'ec2_ids' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:03:47 compute-0 podman[315264]: 2025-10-14 09:03:47.871229696 +0000 UTC m=+0.095247473 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:03:47 compute-0 nova_compute[259627]: 2025-10-14 09:03:47.895 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'keypairs' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:03:47 compute-0 podman[315265]: 2025-10-14 09:03:47.900997318 +0000 UTC m=+0.123717173 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller)
Oct 14 09:03:48 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3510293326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:03:48 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1047009119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:03:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:48.072 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:48 compute-0 nova_compute[259627]: 2025-10-14 09:03:48.384 2 INFO nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Creating config drive at /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config
Oct 14 09:03:48 compute-0 nova_compute[259627]: 2025-10-14 09:03:48.393 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7zcyj4oo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:48 compute-0 nova_compute[259627]: 2025-10-14 09:03:48.549 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7zcyj4oo" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:48 compute-0 nova_compute[259627]: 2025-10-14 09:03:48.590 2 DEBUG nova.storage.rbd_utils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:03:48 compute-0 nova_compute[259627]: 2025-10-14 09:03:48.596 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:48 compute-0 nova_compute[259627]: 2025-10-14 09:03:48.793 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:48 compute-0 nova_compute[259627]: 2025-10-14 09:03:48.794 2 INFO nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Deleting local config drive /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config because it was imported into RBD.
Oct 14 09:03:48 compute-0 NetworkManager[44885]: <info>  [1760432628.8646] manager: (tapd1066ec7-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/202)
Oct 14 09:03:48 compute-0 kernel: tapd1066ec7-d9: entered promiscuous mode
Oct 14 09:03:48 compute-0 ovn_controller[152662]: 2025-10-14T09:03:48Z|00456|binding|INFO|Claiming lport d1066ec7-d932-4d99-aff7-7f7e80c54724 for this chassis.
Oct 14 09:03:48 compute-0 ovn_controller[152662]: 2025-10-14T09:03:48Z|00457|binding|INFO|d1066ec7-d932-4d99-aff7-7f7e80c54724: Claiming fa:16:3e:ba:e4:cb 10.100.0.7
Oct 14 09:03:48 compute-0 nova_compute[259627]: 2025-10-14 09:03:48.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:48.916 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:e4:cb 10.100.0.7'], port_security=['fa:16:3e:ba:e4:cb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0e3897ff-4300-4387-bfc4-36acf3f6c752', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d1066ec7-d932-4d99-aff7-7f7e80c54724) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:03:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:48.917 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d1066ec7-d932-4d99-aff7-7f7e80c54724 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 bound to our chassis
Oct 14 09:03:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:48.918 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:03:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:48.938 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bbbe5a6b-1e3e-4235-8af8-1c70fd119388]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:48 compute-0 systemd-machined[214636]: New machine qemu-62-instance-00000032.
Oct 14 09:03:48 compute-0 ovn_controller[152662]: 2025-10-14T09:03:48Z|00458|binding|INFO|Setting lport d1066ec7-d932-4d99-aff7-7f7e80c54724 ovn-installed in OVS
Oct 14 09:03:48 compute-0 ovn_controller[152662]: 2025-10-14T09:03:48Z|00459|binding|INFO|Setting lport d1066ec7-d932-4d99-aff7-7f7e80c54724 up in Southbound
Oct 14 09:03:48 compute-0 nova_compute[259627]: 2025-10-14 09:03:48.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:48 compute-0 systemd[1]: Started Virtual Machine qemu-62-instance-00000032.
Oct 14 09:03:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:48.978 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a81b7eb2-e865-4f71-ac2e-c51a3e721ba3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:48 compute-0 systemd-udevd[315382]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:03:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:48.982 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa1734c-5ce0-4537-a965-47675e4cd3f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:48 compute-0 NetworkManager[44885]: <info>  [1760432628.9995] device (tapd1066ec7-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:03:49 compute-0 NetworkManager[44885]: <info>  [1760432629.0004] device (tapd1066ec7-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:03:49 compute-0 ceph-mon[74249]: pgmap v1431: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 226 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 19 MiB/s wr, 98 op/s
Oct 14 09:03:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:49.016 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[73c7d8a4-c912-465f-9ae9-a13c13381480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:49.031 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d448a334-ba8f-4655-9691-96e8c21a15bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633775, 'reachable_time': 24446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315390, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:49.048 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5fef6653-b1cd-42a4-9b6e-737ce5f5e331]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7c6eb8a4-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633788, 'tstamp': 633788}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315393, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7c6eb8a4-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633791, 'tstamp': 633791}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315393, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:49.049 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:49 compute-0 nova_compute[259627]: 2025-10-14 09:03:49.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:49 compute-0 nova_compute[259627]: 2025-10-14 09:03:49.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:49.054 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6eb8a4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:49.054 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:03:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:49.055 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c6eb8a4-60, col_values=(('external_ids', {'iface-id': '0f8f2393-a280-4a6e-ac22-da67bf8d74da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:49.055 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:03:49 compute-0 nova_compute[259627]: 2025-10-14 09:03:49.314 2 DEBUG nova.compute.manager [req-911fd691-8691-49df-831a-1bf2f3d5b28e req-4cbd9bda-193a-4020-8aa3-a4ca6fa92146 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:03:49 compute-0 nova_compute[259627]: 2025-10-14 09:03:49.315 2 DEBUG oslo_concurrency.lockutils [req-911fd691-8691-49df-831a-1bf2f3d5b28e req-4cbd9bda-193a-4020-8aa3-a4ca6fa92146 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:49 compute-0 nova_compute[259627]: 2025-10-14 09:03:49.315 2 DEBUG oslo_concurrency.lockutils [req-911fd691-8691-49df-831a-1bf2f3d5b28e req-4cbd9bda-193a-4020-8aa3-a4ca6fa92146 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:49 compute-0 nova_compute[259627]: 2025-10-14 09:03:49.316 2 DEBUG oslo_concurrency.lockutils [req-911fd691-8691-49df-831a-1bf2f3d5b28e req-4cbd9bda-193a-4020-8aa3-a4ca6fa92146 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:49 compute-0 nova_compute[259627]: 2025-10-14 09:03:49.317 2 DEBUG nova.compute.manager [req-911fd691-8691-49df-831a-1bf2f3d5b28e req-4cbd9bda-193a-4020-8aa3-a4ca6fa92146 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Processing event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:03:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 226 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 16 MiB/s wr, 84 op/s
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.187 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.188 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432630.186601, dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.188 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] VM Started (Lifecycle Event)
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.191 2 DEBUG nova.compute.manager [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.194 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.199 2 INFO nova.virt.libvirt.driver [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance spawned successfully.
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.199 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.215 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.221 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.226 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.228 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.229 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.229 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.230 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.230 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.269 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.270 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432630.1867893, dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.270 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] VM Paused (Lifecycle Event)
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.292 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.295 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432630.1939597, dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.296 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] VM Resumed (Lifecycle Event)
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.303 2 DEBUG nova.compute.manager [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.316 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.319 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.339 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.367 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.367 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.367 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:03:50 compute-0 nova_compute[259627]: 2025-10-14 09:03:50.423 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Oct 14 09:03:51 compute-0 ceph-mon[74249]: pgmap v1432: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 226 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 16 MiB/s wr, 84 op/s
Oct 14 09:03:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Oct 14 09:03:51 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Oct 14 09:03:51 compute-0 nova_compute[259627]: 2025-10-14 09:03:51.460 2 DEBUG nova.compute.manager [req-a3eae20f-5ce2-4153-a8b7-8bc6bd75858e req-86ad07b5-028b-48cc-8913-9670234193f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:03:51 compute-0 nova_compute[259627]: 2025-10-14 09:03:51.461 2 DEBUG oslo_concurrency.lockutils [req-a3eae20f-5ce2-4153-a8b7-8bc6bd75858e req-86ad07b5-028b-48cc-8913-9670234193f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:51 compute-0 nova_compute[259627]: 2025-10-14 09:03:51.461 2 DEBUG oslo_concurrency.lockutils [req-a3eae20f-5ce2-4153-a8b7-8bc6bd75858e req-86ad07b5-028b-48cc-8913-9670234193f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:51 compute-0 nova_compute[259627]: 2025-10-14 09:03:51.462 2 DEBUG oslo_concurrency.lockutils [req-a3eae20f-5ce2-4153-a8b7-8bc6bd75858e req-86ad07b5-028b-48cc-8913-9670234193f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:51 compute-0 nova_compute[259627]: 2025-10-14 09:03:51.462 2 DEBUG nova.compute.manager [req-a3eae20f-5ce2-4153-a8b7-8bc6bd75858e req-86ad07b5-028b-48cc-8913-9670234193f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] No waiting events found dispatching network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:03:51 compute-0 nova_compute[259627]: 2025-10-14 09:03:51.463 2 WARNING nova.compute.manager [req-a3eae20f-5ce2-4153-a8b7-8bc6bd75858e req-86ad07b5-028b-48cc-8913-9670234193f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received unexpected event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 for instance with vm_state active and task_state None.
Oct 14 09:03:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 169 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 171 KiB/s rd, 20 MiB/s wr, 250 op/s
Oct 14 09:03:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Oct 14 09:03:52 compute-0 ceph-mon[74249]: osdmap e204: 3 total, 3 up, 3 in
Oct 14 09:03:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Oct 14 09:03:52 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Oct 14 09:03:52 compute-0 nova_compute[259627]: 2025-10-14 09:03:52.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:52 compute-0 nova_compute[259627]: 2025-10-14 09:03:52.497 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432617.496041, e0bc2109-5f5c-4797-98c7-866f2d11f513 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:03:52 compute-0 nova_compute[259627]: 2025-10-14 09:03:52.498 2 INFO nova.compute.manager [-] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] VM Stopped (Lifecycle Event)
Oct 14 09:03:52 compute-0 nova_compute[259627]: 2025-10-14 09:03:52.519 2 DEBUG nova.compute.manager [None req-efd05d70-8ac7-429a-a0ee-90c067b76042 - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:03:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:03:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Oct 14 09:03:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Oct 14 09:03:52 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Oct 14 09:03:52 compute-0 nova_compute[259627]: 2025-10-14 09:03:52.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:53 compute-0 ceph-mon[74249]: pgmap v1434: 305 pgs: 305 active+clean; 169 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 171 KiB/s rd, 20 MiB/s wr, 250 op/s
Oct 14 09:03:53 compute-0 ceph-mon[74249]: osdmap e205: 3 total, 3 up, 3 in
Oct 14 09:03:53 compute-0 ceph-mon[74249]: osdmap e206: 3 total, 3 up, 3 in
Oct 14 09:03:53 compute-0 nova_compute[259627]: 2025-10-14 09:03:53.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Oct 14 09:03:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Oct 14 09:03:53 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Oct 14 09:03:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 169 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 5.4 MiB/s wr, 271 op/s
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.519 2 DEBUG oslo_concurrency.lockutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.520 2 DEBUG oslo_concurrency.lockutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.521 2 DEBUG oslo_concurrency.lockutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.521 2 DEBUG oslo_concurrency.lockutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.521 2 DEBUG oslo_concurrency.lockutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.523 2 INFO nova.compute.manager [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Terminating instance
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.524 2 DEBUG nova.compute.manager [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:03:54 compute-0 kernel: tapd1066ec7-d9 (unregistering): left promiscuous mode
Oct 14 09:03:54 compute-0 NetworkManager[44885]: <info>  [1760432634.5734] device (tapd1066ec7-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:54 compute-0 ovn_controller[152662]: 2025-10-14T09:03:54Z|00460|binding|INFO|Releasing lport d1066ec7-d932-4d99-aff7-7f7e80c54724 from this chassis (sb_readonly=0)
Oct 14 09:03:54 compute-0 ovn_controller[152662]: 2025-10-14T09:03:54Z|00461|binding|INFO|Setting lport d1066ec7-d932-4d99-aff7-7f7e80c54724 down in Southbound
Oct 14 09:03:54 compute-0 ovn_controller[152662]: 2025-10-14T09:03:54Z|00462|binding|INFO|Removing iface tapd1066ec7-d9 ovn-installed in OVS
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.593 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:e4:cb 10.100.0.7'], port_security=['fa:16:3e:ba:e4:cb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0e3897ff-4300-4387-bfc4-36acf3f6c752', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d1066ec7-d932-4d99-aff7-7f7e80c54724) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:03:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.594 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d1066ec7-d932-4d99-aff7-7f7e80c54724 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 unbound from our chassis
Oct 14 09:03:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.595 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.621 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[85bfd522-9a5d-4671-8a15-619f13c87bd7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:54 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000032.scope: Deactivated successfully.
Oct 14 09:03:54 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000032.scope: Consumed 5.586s CPU time.
Oct 14 09:03:54 compute-0 systemd-machined[214636]: Machine qemu-62-instance-00000032 terminated.
Oct 14 09:03:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Oct 14 09:03:54 compute-0 ceph-mon[74249]: osdmap e207: 3 total, 3 up, 3 in
Oct 14 09:03:54 compute-0 ceph-mon[74249]: pgmap v1438: 305 pgs: 305 active+clean; 169 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 5.4 MiB/s wr, 271 op/s
Oct 14 09:03:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Oct 14 09:03:54 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Oct 14 09:03:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.662 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b15bf109-c2d2-40be-a556-5cbf2d172a4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.665 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[dc806668-b34a-41c0-887b-f5d5651d19fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.694 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[03e3b7e5-a143-47ca-9c94-b7d1fc7d6651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a29a4c55-f2d8-4746-90b0-b404c3695243]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633775, 'reachable_time': 24446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315448, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.728 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b8db2b76-f935-4e2d-9a14-e455eb1e5f64]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7c6eb8a4-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633788, 'tstamp': 633788}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315449, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7c6eb8a4-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633791, 'tstamp': 633791}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315449, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:03:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.730 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.738 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6eb8a4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.738 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:03:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.739 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c6eb8a4-60, col_values=(('external_ids', {'iface-id': '0f8f2393-a280-4a6e-ac22-da67bf8d74da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.739 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.759 2 INFO nova.virt.libvirt.driver [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance destroyed successfully.
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.760 2 DEBUG nova.objects.instance [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'resources' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.781 2 DEBUG nova.virt.libvirt.vif [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:03:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1481502960',display_name='tempest-ServerActionsTestJSON-server-1331294446',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1481502960',id=50,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:03:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-hqb4aruj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',im
age_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:03:50Z,user_data=None,user_id='aa32af91355a41198fd57121e5c70ec2',uuid=dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.782 2 DEBUG nova.network.os_vif_util [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.783 2 DEBUG nova.network.os_vif_util [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.784 2 DEBUG os_vif [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.787 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1066ec7-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.797 2 DEBUG nova.compute.manager [req-0c1afaad-7053-488e-bf61-92446001ad44 req-0f7a834f-3a15-4a20-ba55-029778ce4c5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-unplugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.798 2 DEBUG oslo_concurrency.lockutils [req-0c1afaad-7053-488e-bf61-92446001ad44 req-0f7a834f-3a15-4a20-ba55-029778ce4c5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.798 2 DEBUG oslo_concurrency.lockutils [req-0c1afaad-7053-488e-bf61-92446001ad44 req-0f7a834f-3a15-4a20-ba55-029778ce4c5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.799 2 DEBUG oslo_concurrency.lockutils [req-0c1afaad-7053-488e-bf61-92446001ad44 req-0f7a834f-3a15-4a20-ba55-029778ce4c5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.799 2 DEBUG nova.compute.manager [req-0c1afaad-7053-488e-bf61-92446001ad44 req-0f7a834f-3a15-4a20-ba55-029778ce4c5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] No waiting events found dispatching network-vif-unplugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.800 2 DEBUG nova.compute.manager [req-0c1afaad-7053-488e-bf61-92446001ad44 req-0f7a834f-3a15-4a20-ba55-029778ce4c5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-unplugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:03:54 compute-0 nova_compute[259627]: 2025-10-14 09:03:54.800 2 INFO os_vif [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9')
Oct 14 09:03:55 compute-0 nova_compute[259627]: 2025-10-14 09:03:55.170 2 INFO nova.virt.libvirt.driver [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Deleting instance files /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_del
Oct 14 09:03:55 compute-0 nova_compute[259627]: 2025-10-14 09:03:55.171 2 INFO nova.virt.libvirt.driver [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Deletion of /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_del complete
Oct 14 09:03:55 compute-0 nova_compute[259627]: 2025-10-14 09:03:55.239 2 INFO nova.compute.manager [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Took 0.71 seconds to destroy the instance on the hypervisor.
Oct 14 09:03:55 compute-0 nova_compute[259627]: 2025-10-14 09:03:55.239 2 DEBUG oslo.service.loopingcall [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:03:55 compute-0 nova_compute[259627]: 2025-10-14 09:03:55.240 2 DEBUG nova.compute.manager [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:03:55 compute-0 nova_compute[259627]: 2025-10-14 09:03:55.240 2 DEBUG nova.network.neutron [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:03:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Oct 14 09:03:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Oct 14 09:03:55 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Oct 14 09:03:55 compute-0 ceph-mon[74249]: osdmap e208: 3 total, 3 up, 3 in
Oct 14 09:03:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 123 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 49 KiB/s wr, 481 op/s
Oct 14 09:03:55 compute-0 nova_compute[259627]: 2025-10-14 09:03:55.868 2 DEBUG nova.network.neutron [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:03:55 compute-0 nova_compute[259627]: 2025-10-14 09:03:55.894 2 INFO nova.compute.manager [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Took 0.65 seconds to deallocate network for instance.
Oct 14 09:03:55 compute-0 nova_compute[259627]: 2025-10-14 09:03:55.939 2 DEBUG oslo_concurrency.lockutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:55 compute-0 nova_compute[259627]: 2025-10-14 09:03:55.940 2 DEBUG oslo_concurrency.lockutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:56 compute-0 nova_compute[259627]: 2025-10-14 09:03:56.026 2 DEBUG oslo_concurrency.processutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:03:56 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3283183907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:03:56 compute-0 nova_compute[259627]: 2025-10-14 09:03:56.481 2 DEBUG oslo_concurrency.processutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:03:56 compute-0 nova_compute[259627]: 2025-10-14 09:03:56.487 2 DEBUG nova.compute.provider_tree [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:03:56 compute-0 nova_compute[259627]: 2025-10-14 09:03:56.510 2 DEBUG nova.scheduler.client.report [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:03:56 compute-0 nova_compute[259627]: 2025-10-14 09:03:56.539 2 DEBUG oslo_concurrency.lockutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:56 compute-0 nova_compute[259627]: 2025-10-14 09:03:56.573 2 INFO nova.scheduler.client.report [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Deleted allocations for instance dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9
Oct 14 09:03:56 compute-0 nova_compute[259627]: 2025-10-14 09:03:56.653 2 DEBUG oslo_concurrency.lockutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:56 compute-0 ceph-mon[74249]: osdmap e209: 3 total, 3 up, 3 in
Oct 14 09:03:56 compute-0 ceph-mon[74249]: pgmap v1441: 305 pgs: 305 active+clean; 123 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 49 KiB/s wr, 481 op/s
Oct 14 09:03:56 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3283183907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:03:56 compute-0 nova_compute[259627]: 2025-10-14 09:03:56.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:56 compute-0 nova_compute[259627]: 2025-10-14 09:03:56.928 2 DEBUG nova.compute.manager [req-130b3d9c-d85d-4351-936f-b4eb80683c55 req-74eb1b3e-5cab-4eab-b4dd-8e7f54f39156 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:03:56 compute-0 nova_compute[259627]: 2025-10-14 09:03:56.929 2 DEBUG oslo_concurrency.lockutils [req-130b3d9c-d85d-4351-936f-b4eb80683c55 req-74eb1b3e-5cab-4eab-b4dd-8e7f54f39156 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:56 compute-0 nova_compute[259627]: 2025-10-14 09:03:56.929 2 DEBUG oslo_concurrency.lockutils [req-130b3d9c-d85d-4351-936f-b4eb80683c55 req-74eb1b3e-5cab-4eab-b4dd-8e7f54f39156 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:56 compute-0 nova_compute[259627]: 2025-10-14 09:03:56.930 2 DEBUG oslo_concurrency.lockutils [req-130b3d9c-d85d-4351-936f-b4eb80683c55 req-74eb1b3e-5cab-4eab-b4dd-8e7f54f39156 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:03:56 compute-0 nova_compute[259627]: 2025-10-14 09:03:56.930 2 DEBUG nova.compute.manager [req-130b3d9c-d85d-4351-936f-b4eb80683c55 req-74eb1b3e-5cab-4eab-b4dd-8e7f54f39156 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] No waiting events found dispatching network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:03:56 compute-0 nova_compute[259627]: 2025-10-14 09:03:56.931 2 WARNING nova.compute.manager [req-130b3d9c-d85d-4351-936f-b4eb80683c55 req-74eb1b3e-5cab-4eab-b4dd-8e7f54f39156 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received unexpected event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 for instance with vm_state deleted and task_state None.
Oct 14 09:03:56 compute-0 nova_compute[259627]: 2025-10-14 09:03:56.931 2 DEBUG nova.compute.manager [req-130b3d9c-d85d-4351-936f-b4eb80683c55 req-74eb1b3e-5cab-4eab-b4dd-8e7f54f39156 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-deleted-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:03:57 compute-0 nova_compute[259627]: 2025-10-14 09:03:57.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:03:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:03:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 123 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 35 KiB/s wr, 347 op/s
Oct 14 09:03:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Oct 14 09:03:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Oct 14 09:03:57 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Oct 14 09:03:58 compute-0 ceph-mon[74249]: pgmap v1442: 305 pgs: 305 active+clean; 123 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 35 KiB/s wr, 347 op/s
Oct 14 09:03:58 compute-0 ceph-mon[74249]: osdmap e210: 3 total, 3 up, 3 in
Oct 14 09:03:58 compute-0 nova_compute[259627]: 2025-10-14 09:03:58.951 2 DEBUG oslo_concurrency.lockutils [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:58 compute-0 nova_compute[259627]: 2025-10-14 09:03:58.951 2 DEBUG oslo_concurrency.lockutils [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:58 compute-0 nova_compute[259627]: 2025-10-14 09:03:58.951 2 DEBUG nova.compute.manager [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:03:58 compute-0 nova_compute[259627]: 2025-10-14 09:03:58.955 2 DEBUG nova.compute.manager [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 14 09:03:58 compute-0 nova_compute[259627]: 2025-10-14 09:03:58.956 2 DEBUG nova.objects.instance [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'flavor' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:03:58 compute-0 nova_compute[259627]: 2025-10-14 09:03:58.982 2 DEBUG nova.virt.libvirt.driver [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:03:59 compute-0 nova_compute[259627]: 2025-10-14 09:03:59.483 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:59 compute-0 nova_compute[259627]: 2025-10-14 09:03:59.483 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:59 compute-0 nova_compute[259627]: 2025-10-14 09:03:59.499 2 DEBUG nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:03:59 compute-0 nova_compute[259627]: 2025-10-14 09:03:59.568 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:03:59 compute-0 nova_compute[259627]: 2025-10-14 09:03:59.569 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:03:59 compute-0 nova_compute[259627]: 2025-10-14 09:03:59.574 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:03:59 compute-0 nova_compute[259627]: 2025-10-14 09:03:59.574 2 INFO nova.compute.claims [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:03:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 123 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 30 KiB/s wr, 291 op/s
Oct 14 09:03:59 compute-0 nova_compute[259627]: 2025-10-14 09:03:59.699 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:03:59 compute-0 nova_compute[259627]: 2025-10-14 09:03:59.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:04:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2916589333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.163 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.169 2 DEBUG nova.compute.provider_tree [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.191 2 DEBUG nova.scheduler.client.report [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.251 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.252 2 DEBUG nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.351 2 DEBUG nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.351 2 DEBUG nova.network.neutron [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.373 2 INFO nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.393 2 DEBUG nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.574 2 DEBUG nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.576 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.576 2 INFO nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Creating image(s)
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.599 2 DEBUG nova.storage.rbd_utils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.622 2 DEBUG nova.storage.rbd_utils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.644 2 DEBUG nova.storage.rbd_utils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.648 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Oct 14 09:04:00 compute-0 ceph-mon[74249]: pgmap v1444: 305 pgs: 305 active+clean; 123 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 30 KiB/s wr, 291 op/s
Oct 14 09:04:00 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2916589333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.695 2 DEBUG nova.policy [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:04:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Oct 14 09:04:00 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.721 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.722 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.723 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.723 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.748 2 DEBUG nova.storage.rbd_utils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.751 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:00 compute-0 nova_compute[259627]: 2025-10-14 09:04:00.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:00 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.046 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.126 2 DEBUG nova.storage.rbd_utils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] resizing rbd image 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:04:01 compute-0 kernel: tap8ec905f0-b7 (unregistering): left promiscuous mode
Oct 14 09:04:01 compute-0 NetworkManager[44885]: <info>  [1760432641.2221] device (tap8ec905f0-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:04:01 compute-0 ovn_controller[152662]: 2025-10-14T09:04:01Z|00463|binding|INFO|Releasing lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 from this chassis (sb_readonly=0)
Oct 14 09:04:01 compute-0 ovn_controller[152662]: 2025-10-14T09:04:01Z|00464|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 down in Southbound
Oct 14 09:04:01 compute-0 ovn_controller[152662]: 2025-10-14T09:04:01Z|00465|binding|INFO|Removing iface tap8ec905f0-b7 ovn-installed in OVS
Oct 14 09:04:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.245 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:04:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.246 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 unbound from our chassis
Oct 14 09:04:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.247 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c6eb8a4-6604-462a-8730-43f3742053a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:04:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.248 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[16e8c3be-9ec0-469f-a3f3-5f5ba5f0630a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.249 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace which is not needed anymore
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.271 2 DEBUG nova.objects.instance [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'migration_context' on Instance uuid 3c8eac8e-dcfa-41ba-9c01-1185061baf5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.294 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.295 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Ensure instance console log exists: /var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.295 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.296 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.296 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:01 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 14 09:04:01 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000002b.scope: Consumed 16.264s CPU time.
Oct 14 09:04:01 compute-0 systemd-machined[214636]: Machine qemu-59-instance-0000002b terminated.
Oct 14 09:04:01 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[312645]: [NOTICE]   (312649) : haproxy version is 2.8.14-c23fe91
Oct 14 09:04:01 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[312645]: [NOTICE]   (312649) : path to executable is /usr/sbin/haproxy
Oct 14 09:04:01 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[312645]: [WARNING]  (312649) : Exiting Master process...
Oct 14 09:04:01 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[312645]: [ALERT]    (312649) : Current worker (312651) exited with code 143 (Terminated)
Oct 14 09:04:01 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[312645]: [WARNING]  (312649) : All workers exited. Exiting... (0)
Oct 14 09:04:01 compute-0 systemd[1]: libpod-123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6.scope: Deactivated successfully.
Oct 14 09:04:01 compute-0 podman[315715]: 2025-10-14 09:04:01.403724894 +0000 UTC m=+0.048359540 container died 123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:04:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6-userdata-shm.mount: Deactivated successfully.
Oct 14 09:04:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-b394997ed82cbbabad23e931b57790be0862fb55a64f523707c20df554392e6a-merged.mount: Deactivated successfully.
Oct 14 09:04:01 compute-0 podman[315715]: 2025-10-14 09:04:01.455971009 +0000 UTC m=+0.100605635 container cleanup 123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:04:01 compute-0 systemd[1]: libpod-conmon-123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6.scope: Deactivated successfully.
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.510 2 DEBUG nova.compute.manager [req-ccacdefe-73b9-4ffd-9022-a076de0f195a req-d6561182-b391-4ce3-9092-c1af6a7f1016 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.511 2 DEBUG oslo_concurrency.lockutils [req-ccacdefe-73b9-4ffd-9022-a076de0f195a req-d6561182-b391-4ce3-9092-c1af6a7f1016 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.511 2 DEBUG oslo_concurrency.lockutils [req-ccacdefe-73b9-4ffd-9022-a076de0f195a req-d6561182-b391-4ce3-9092-c1af6a7f1016 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.511 2 DEBUG oslo_concurrency.lockutils [req-ccacdefe-73b9-4ffd-9022-a076de0f195a req-d6561182-b391-4ce3-9092-c1af6a7f1016 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.511 2 DEBUG nova.compute.manager [req-ccacdefe-73b9-4ffd-9022-a076de0f195a req-d6561182-b391-4ce3-9092-c1af6a7f1016 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.513 2 WARNING nova.compute.manager [req-ccacdefe-73b9-4ffd-9022-a076de0f195a req-d6561182-b391-4ce3-9092-c1af6a7f1016 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state powering-off.
Oct 14 09:04:01 compute-0 podman[315747]: 2025-10-14 09:04:01.517947843 +0000 UTC m=+0.041485241 container remove 123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 09:04:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.523 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8a9722ad-69f4-459c-82e2-230a6417f59c]: (4, ('Tue Oct 14 09:04:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6)\n123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6\nTue Oct 14 09:04:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6)\n123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.525 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4f25aea4-fef7-4e30-a3a6-807adce82155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.525 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:01 compute-0 kernel: tap7c6eb8a4-60: left promiscuous mode
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.551 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[539da1f5-05fe-4a4d-a4e1-9785d0158dec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.589 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0b3b0f85-19df-4011-9537-41c32bb1413f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.590 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff1b751-f043-4ea4-8746-d47c1d42a779]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.605 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ce4cca35-4b91-4340-8e6a-332d1411265a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633766, 'reachable_time': 17100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315775, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d7c6eb8a4\x2d6604\x2d462a\x2d8730\x2d43f3742053a7.mount: Deactivated successfully.
Oct 14 09:04:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.610 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:04:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.610 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[55c5f20f-c8d9-46f9-8a5d-91b52583dcfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:01 compute-0 nova_compute[259627]: 2025-10-14 09:04:01.641 2 DEBUG nova.network.neutron [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Successfully created port: 09d03bcf-f719-4ec4-91a0-3c14e350a342 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:04:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 123 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 19 KiB/s wr, 41 op/s
Oct 14 09:04:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Oct 14 09:04:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Oct 14 09:04:01 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Oct 14 09:04:01 compute-0 ceph-mon[74249]: osdmap e211: 3 total, 3 up, 3 in
Oct 14 09:04:02 compute-0 nova_compute[259627]: 2025-10-14 09:04:02.003 2 INFO nova.virt.libvirt.driver [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance shutdown successfully after 3 seconds.
Oct 14 09:04:02 compute-0 nova_compute[259627]: 2025-10-14 09:04:02.008 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance destroyed successfully.
Oct 14 09:04:02 compute-0 nova_compute[259627]: 2025-10-14 09:04:02.009 2 DEBUG nova.objects.instance [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'numa_topology' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:02 compute-0 nova_compute[259627]: 2025-10-14 09:04:02.027 2 DEBUG nova.compute.manager [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:02 compute-0 nova_compute[259627]: 2025-10-14 09:04:02.082 2 DEBUG oslo_concurrency.lockutils [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:02 compute-0 nova_compute[259627]: 2025-10-14 09:04:02.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:02 compute-0 nova_compute[259627]: 2025-10-14 09:04:02.535 2 DEBUG nova.network.neutron [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Successfully updated port: 09d03bcf-f719-4ec4-91a0-3c14e350a342 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:04:02 compute-0 nova_compute[259627]: 2025-10-14 09:04:02.549 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:02 compute-0 nova_compute[259627]: 2025-10-14 09:04:02.549 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:02 compute-0 nova_compute[259627]: 2025-10-14 09:04:02.550 2 DEBUG nova.network.neutron [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:04:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:04:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:04:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:04:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Oct 14 09:04:02 compute-0 ceph-mon[74249]: pgmap v1446: 305 pgs: 305 active+clean; 123 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 19 KiB/s wr, 41 op/s
Oct 14 09:04:02 compute-0 ceph-mon[74249]: osdmap e212: 3 total, 3 up, 3 in
Oct 14 09:04:02 compute-0 nova_compute[259627]: 2025-10-14 09:04:02.750 2 DEBUG nova.network.neutron [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:04:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:04:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:04:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:04:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:04:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Oct 14 09:04:02 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.005 2 DEBUG nova.objects.instance [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'flavor' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.035 2 DEBUG oslo_concurrency.lockutils [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.036 2 DEBUG oslo_concurrency.lockutils [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquired lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.036 2 DEBUG nova.network.neutron [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.037 2 DEBUG nova.objects.instance [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'info_cache' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.610 2 DEBUG nova.compute.manager [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.611 2 DEBUG oslo_concurrency.lockutils [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.612 2 DEBUG oslo_concurrency.lockutils [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.612 2 DEBUG oslo_concurrency.lockutils [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.613 2 DEBUG nova.compute.manager [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.613 2 WARNING nova.compute.manager [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state stopped and task_state powering-on.
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.614 2 DEBUG nova.compute.manager [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-changed-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.614 2 DEBUG nova.compute.manager [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Refreshing instance network info cache due to event network-changed-09d03bcf-f719-4ec4-91a0-3c14e350a342. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.615 2 DEBUG oslo_concurrency.lockutils [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.660 2 DEBUG nova.network.neutron [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updating instance_info_cache with network_info: [{"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 123 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 19 KiB/s wr, 42 op/s
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.682 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.682 2 DEBUG nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Instance network_info: |[{"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.682 2 DEBUG oslo_concurrency.lockutils [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.683 2 DEBUG nova.network.neutron [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Refreshing network info cache for port 09d03bcf-f719-4ec4-91a0-3c14e350a342 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.686 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Start _get_guest_xml network_info=[{"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.693 2 WARNING nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.698 2 DEBUG nova.virt.libvirt.host [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.699 2 DEBUG nova.virt.libvirt.host [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.702 2 DEBUG nova.virt.libvirt.host [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.702 2 DEBUG nova.virt.libvirt.host [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.703 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.703 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.704 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.704 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.704 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.704 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.705 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.705 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.705 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.706 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.706 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.706 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:04:03 compute-0 nova_compute[259627]: 2025-10-14 09:04:03.710 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Oct 14 09:04:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Oct 14 09:04:03 compute-0 ceph-mon[74249]: osdmap e213: 3 total, 3 up, 3 in
Oct 14 09:04:03 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Oct 14 09:04:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:04:04 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/371345489' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.153 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.178 2 DEBUG nova.storage.rbd_utils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.182 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.408 2 DEBUG nova.network.neutron [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.430 2 DEBUG oslo_concurrency.lockutils [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Releasing lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.454 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance destroyed successfully.
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.455 2 DEBUG nova.objects.instance [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'numa_topology' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.466 2 DEBUG nova.objects.instance [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'resources' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.476 2 DEBUG nova.virt.libvirt.vif [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.477 2 DEBUG nova.network.os_vif_util [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.477 2 DEBUG nova.network.os_vif_util [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.478 2 DEBUG os_vif [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.480 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ec905f0-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.486 2 INFO os_vif [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.492 2 DEBUG nova.virt.libvirt.driver [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Start _get_guest_xml network_info=[{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.495 2 WARNING nova.virt.libvirt.driver [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.501 2 DEBUG nova.virt.libvirt.host [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.502 2 DEBUG nova.virt.libvirt.host [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.505 2 DEBUG nova.virt.libvirt.host [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.505 2 DEBUG nova.virt.libvirt.host [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.505 2 DEBUG nova.virt.libvirt.driver [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.505 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.506 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.506 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.506 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.506 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.507 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.507 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.507 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.507 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.507 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.508 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.508 2 DEBUG nova.objects.instance [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'vcpu_model' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.523 2 DEBUG oslo_concurrency.processutils [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:04:04 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3495734449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.632 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.634 2 DEBUG nova.virt.libvirt.vif [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:03:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-880311128',display_name='tempest-AttachInterfacesTestJSON-server-880311128',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-880311128',id=51,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJeC/x7PID63fa+XdM1puH3fFuAGCvZuvVgTscxZNhVHcJaK6qOoyzTmbsI3sgqD2j7CCpP6E241ZXLsn//8ic8awTSD83uMaP7AvsnDddi9mfK+5p/w2g8zmdvw2fjPQ==',key_name='tempest-keypair-1232666789',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-vco8xn00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:04:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=3c8eac8e-dcfa-41ba-9c01-1185061baf5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.634 2 DEBUG nova.network.os_vif_util [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.635 2 DEBUG nova.network.os_vif_util [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:7e:a5,bridge_name='br-int',has_traffic_filtering=True,id=09d03bcf-f719-4ec4-91a0-3c14e350a342,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09d03bcf-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.636 2 DEBUG nova.objects.instance [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_devices' on Instance uuid 3c8eac8e-dcfa-41ba-9c01-1185061baf5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.663 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:04:04 compute-0 nova_compute[259627]:   <uuid>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</uuid>
Oct 14 09:04:04 compute-0 nova_compute[259627]:   <name>instance-00000033</name>
Oct 14 09:04:04 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:04:04 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:04:04 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <nova:name>tempest-AttachInterfacesTestJSON-server-880311128</nova:name>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:04:03</nova:creationTime>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:04:04 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:04:04 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:04:04 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:04:04 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:04:04 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:04:04 compute-0 nova_compute[259627]:         <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:04:04 compute-0 nova_compute[259627]:         <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:04:04 compute-0 nova_compute[259627]:         <nova:port uuid="09d03bcf-f719-4ec4-91a0-3c14e350a342">
Oct 14 09:04:04 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:04:04 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:04:04 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <system>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <entry name="serial">3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <entry name="uuid">3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     </system>
Oct 14 09:04:04 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:04:04 compute-0 nova_compute[259627]:   <os>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:   </os>
Oct 14 09:04:04 compute-0 nova_compute[259627]:   <features>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:   </features>
Oct 14 09:04:04 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:04:04 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:04:04 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk">
Oct 14 09:04:04 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       </source>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:04:04 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config">
Oct 14 09:04:04 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       </source>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:04:04 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:48:7e:a5"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <target dev="tap09d03bcf-f7"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log" append="off"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <video>
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     </video>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:04:04 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:04:04 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:04:04 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:04:04 compute-0 nova_compute[259627]: </domain>
Oct 14 09:04:04 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.669 2 DEBUG nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Preparing to wait for external event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.669 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.670 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.670 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.671 2 DEBUG nova.virt.libvirt.vif [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:03:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-880311128',display_name='tempest-AttachInterfacesTestJSON-server-880311128',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-880311128',id=51,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJeC/x7PID63fa+XdM1puH3fFuAGCvZuvVgTscxZNhVHcJaK6qOoyzTmbsI3sgqD2j7CCpP6E241ZXLsn//8ic8awTSD83uMaP7AvsnDddi9mfK+5p/w2g8zmdvw2fjPQ==',key_name='tempest-keypair-1232666789',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-vco8xn00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:04:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=3c8eac8e-dcfa-41ba-9c01-1185061baf5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.671 2 DEBUG nova.network.os_vif_util [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.672 2 DEBUG nova.network.os_vif_util [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:7e:a5,bridge_name='br-int',has_traffic_filtering=True,id=09d03bcf-f719-4ec4-91a0-3c14e350a342,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09d03bcf-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.672 2 DEBUG os_vif [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:7e:a5,bridge_name='br-int',has_traffic_filtering=True,id=09d03bcf-f719-4ec4-91a0-3c14e350a342,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09d03bcf-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.673 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.673 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.676 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09d03bcf-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.677 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap09d03bcf-f7, col_values=(('external_ids', {'iface-id': '09d03bcf-f719-4ec4-91a0-3c14e350a342', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:7e:a5', 'vm-uuid': '3c8eac8e-dcfa-41ba-9c01-1185061baf5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:04 compute-0 NetworkManager[44885]: <info>  [1760432644.6795] manager: (tap09d03bcf-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.684 2 INFO os_vif [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:7e:a5,bridge_name='br-int',has_traffic_filtering=True,id=09d03bcf-f719-4ec4-91a0-3c14e350a342,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09d03bcf-f7')
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.738 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.739 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.739 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:48:7e:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.740 2 INFO nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Using config drive
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.764 2 DEBUG nova.storage.rbd_utils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Oct 14 09:04:04 compute-0 ceph-mon[74249]: pgmap v1449: 305 pgs: 305 active+clean; 123 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 19 KiB/s wr, 42 op/s
Oct 14 09:04:04 compute-0 ceph-mon[74249]: osdmap e214: 3 total, 3 up, 3 in
Oct 14 09:04:04 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/371345489' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:04 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3495734449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Oct 14 09:04:04 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Oct 14 09:04:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:04:04 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3063659031' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.960 2 DEBUG oslo_concurrency.processutils [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.993 2 DEBUG nova.network.neutron [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updated VIF entry in instance network info cache for port 09d03bcf-f719-4ec4-91a0-3c14e350a342. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.993 2 DEBUG nova.network.neutron [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updating instance_info_cache with network_info: [{"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:04 compute-0 nova_compute[259627]: 2025-10-14 09:04:04.998 2 DEBUG oslo_concurrency.processutils [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.027 2 DEBUG oslo_concurrency.lockutils [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.105 2 INFO nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Creating config drive at /var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/disk.config
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.115 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3pwvt98p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.259 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3pwvt98p" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.288 2 DEBUG nova.storage.rbd_utils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.291 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/disk.config 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:04:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/292753674' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.415 2 DEBUG oslo_concurrency.processutils [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.417 2 DEBUG nova.virt.libvirt.vif [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.417 2 DEBUG nova.network.os_vif_util [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.418 2 DEBUG nova.network.os_vif_util [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.419 2 DEBUG nova.objects.instance [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'pci_devices' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.434 2 DEBUG nova.virt.libvirt.driver [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:04:05 compute-0 nova_compute[259627]:   <uuid>de383510-2de3-40bd-b479-c0010b3f2d1c</uuid>
Oct 14 09:04:05 compute-0 nova_compute[259627]:   <name>instance-0000002b</name>
Oct 14 09:04:05 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:04:05 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:04:05 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerActionsTestJSON-server-1794713901</nova:name>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:04:04</nova:creationTime>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:04:05 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:04:05 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:04:05 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:04:05 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:04:05 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:04:05 compute-0 nova_compute[259627]:         <nova:user uuid="aa32af91355a41198fd57121e5c70ec2">tempest-ServerActionsTestJSON-1593617559-project-member</nova:user>
Oct 14 09:04:05 compute-0 nova_compute[259627]:         <nova:project uuid="368d762ed02e459d892ad1e5488c2871">tempest-ServerActionsTestJSON-1593617559</nova:project>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:04:05 compute-0 nova_compute[259627]:         <nova:port uuid="8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2">
Oct 14 09:04:05 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:04:05 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:04:05 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <system>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <entry name="serial">de383510-2de3-40bd-b479-c0010b3f2d1c</entry>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <entry name="uuid">de383510-2de3-40bd-b479-c0010b3f2d1c</entry>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     </system>
Oct 14 09:04:05 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:04:05 compute-0 nova_compute[259627]:   <os>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:   </os>
Oct 14 09:04:05 compute-0 nova_compute[259627]:   <features>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:   </features>
Oct 14 09:04:05 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:04:05 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:04:05 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/de383510-2de3-40bd-b479-c0010b3f2d1c_disk">
Oct 14 09:04:05 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       </source>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:04:05 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/de383510-2de3-40bd-b479-c0010b3f2d1c_disk.config">
Oct 14 09:04:05 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       </source>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:04:05 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:be:e2:1b"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <target dev="tap8ec905f0-b7"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c/console.log" append="off"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <video>
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     </video>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <input type="keyboard" bus="usb"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:04:05 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:04:05 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:04:05 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:04:05 compute-0 nova_compute[259627]: </domain>
Oct 14 09:04:05 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.435 2 DEBUG nova.virt.libvirt.driver [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.436 2 DEBUG nova.virt.libvirt.driver [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.436 2 DEBUG nova.virt.libvirt.vif [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.437 2 DEBUG nova.network.os_vif_util [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.437 2 DEBUG nova.network.os_vif_util [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.437 2 DEBUG os_vif [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.438 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ec905f0-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ec905f0-b7, col_values=(('external_ids', {'iface-id': '8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:e2:1b', 'vm-uuid': 'de383510-2de3-40bd-b479-c0010b3f2d1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:05 compute-0 NetworkManager[44885]: <info>  [1760432645.4459] manager: (tap8ec905f0-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.455 2 INFO os_vif [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')
Oct 14 09:04:05 compute-0 NetworkManager[44885]: <info>  [1760432645.5238] manager: (tap8ec905f0-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/205)
Oct 14 09:04:05 compute-0 kernel: tap8ec905f0-b7: entered promiscuous mode
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:05 compute-0 ovn_controller[152662]: 2025-10-14T09:04:05Z|00466|binding|INFO|Claiming lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for this chassis.
Oct 14 09:04:05 compute-0 ovn_controller[152662]: 2025-10-14T09:04:05Z|00467|binding|INFO|8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2: Claiming fa:16:3e:be:e2:1b 10.100.0.10
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.537 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.538 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 bound to our chassis
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.539 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:04:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:04:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2918559064' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.551 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0b55295c-18d8-4347-a670-4f04a7bb42b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.549 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/disk.config 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.552 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c6eb8a4-61 in ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.550 2 INFO nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Deleting local config drive /var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/disk.config because it was imported into RBD.
Oct 14 09:04:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:04:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2918559064' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.554 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c6eb8a4-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.554 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d8a1e9-46d8-4467-9abb-eb145be19564]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:05 compute-0 ovn_controller[152662]: 2025-10-14T09:04:05Z|00468|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 ovn-installed in OVS
Oct 14 09:04:05 compute-0 ovn_controller[152662]: 2025-10-14T09:04:05Z|00469|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 up in Southbound
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.555 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7483f5-ae23-4b9d-8ba6-3d0b69dbe69e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:05 compute-0 systemd-machined[214636]: New machine qemu-63-instance-0000002b.
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.569 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea9c7dd-eeab-44c4-887a-d047272d4fbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:05 compute-0 systemd-udevd[315980]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:04:05 compute-0 NetworkManager[44885]: <info>  [1760432645.5866] device (tap8ec905f0-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:04:05 compute-0 NetworkManager[44885]: <info>  [1760432645.5877] device (tap8ec905f0-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:04:05 compute-0 systemd[1]: Started Virtual Machine qemu-63-instance-0000002b.
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.588 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[07789fe6-9207-445b-b0d0-cca72a24621d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.614 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[19ff0ecc-88d8-4cd5-ad58-ef67b1fadf3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:05 compute-0 NetworkManager[44885]: <info>  [1760432645.6220] manager: (tap7c6eb8a4-60): new Veth device (/org/freedesktop/NetworkManager/Devices/206)
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.621 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[76d3be43-bcc5-48a2-85df-65dcad2c6ff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:05 compute-0 systemd-udevd[315986]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:04:05 compute-0 kernel: tap09d03bcf-f7: entered promiscuous mode
Oct 14 09:04:05 compute-0 NetworkManager[44885]: <info>  [1760432645.6352] manager: (tap09d03bcf-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/207)
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:05 compute-0 ovn_controller[152662]: 2025-10-14T09:04:05Z|00470|binding|INFO|Claiming lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 for this chassis.
Oct 14 09:04:05 compute-0 ovn_controller[152662]: 2025-10-14T09:04:05Z|00471|binding|INFO|09d03bcf-f719-4ec4-91a0-3c14e350a342: Claiming fa:16:3e:48:7e:a5 10.100.0.8
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.648 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:7e:a5 10.100.0.8'], port_security=['fa:16:3e:48:7e:a5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3c8eac8e-dcfa-41ba-9c01-1185061baf5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c6f10764-f6df-4d21-b829-68562680623d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=09d03bcf-f719-4ec4-91a0-3c14e350a342) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:04:05 compute-0 systemd-udevd[316008]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:04:05 compute-0 ovn_controller[152662]: 2025-10-14T09:04:05Z|00472|binding|INFO|Setting lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 ovn-installed in OVS
Oct 14 09:04:05 compute-0 ovn_controller[152662]: 2025-10-14T09:04:05Z|00473|binding|INFO|Setting lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 up in Southbound
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.664 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[badd8954-20f3-405a-8a0f-80733c1706ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:05 compute-0 NetworkManager[44885]: <info>  [1760432645.6686] device (tap09d03bcf-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:04:05 compute-0 NetworkManager[44885]: <info>  [1760432645.6695] device (tap09d03bcf-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.670 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f7690746-286b-415e-942e-68238157e8dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 169 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 5.3 MiB/s wr, 401 op/s
Oct 14 09:04:05 compute-0 systemd-machined[214636]: New machine qemu-64-instance-00000033.
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.695 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "e038df86-1323-4b04-afae-9fe68c98c22c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.696 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:05 compute-0 NetworkManager[44885]: <info>  [1760432645.6977] device (tap7c6eb8a4-60): carrier: link connected
Oct 14 09:04:05 compute-0 systemd[1]: Started Virtual Machine qemu-64-instance-00000033.
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.705 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f3c63953-2eb4-4b4b-9490-b20899a26f86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.717 2 DEBUG nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.728 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e7bd21-e1f6-457a-9d9a-6a2574afcb2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642445, 'reachable_time': 24937, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316026, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.746 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[322282eb-cf0d-44c5-b91b-21b42754d5b4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe62:70fd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642445, 'tstamp': 642445}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316032, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.763 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4d137cde-be04-489d-8c24-841c67e7a5a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642445, 'reachable_time': 24937, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316033, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Oct 14 09:04:05 compute-0 ceph-mon[74249]: osdmap e215: 3 total, 3 up, 3 in
Oct 14 09:04:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3063659031' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/292753674' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2918559064' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:04:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2918559064' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:04:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Oct 14 09:04:05 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.814 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d85a2892-3c8e-48dc-8728-750c79fc5852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.821 2 DEBUG nova.compute.manager [req-4b4f1500-1c30-4b40-a4dc-74b89f342a94 req-8fa138de-509b-4139-963d-bd9b937a671d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.822 2 DEBUG oslo_concurrency.lockutils [req-4b4f1500-1c30-4b40-a4dc-74b89f342a94 req-8fa138de-509b-4139-963d-bd9b937a671d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.822 2 DEBUG oslo_concurrency.lockutils [req-4b4f1500-1c30-4b40-a4dc-74b89f342a94 req-8fa138de-509b-4139-963d-bd9b937a671d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.822 2 DEBUG oslo_concurrency.lockutils [req-4b4f1500-1c30-4b40-a4dc-74b89f342a94 req-8fa138de-509b-4139-963d-bd9b937a671d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.823 2 DEBUG nova.compute.manager [req-4b4f1500-1c30-4b40-a4dc-74b89f342a94 req-8fa138de-509b-4139-963d-bd9b937a671d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.823 2 WARNING nova.compute.manager [req-4b4f1500-1c30-4b40-a4dc-74b89f342a94 req-8fa138de-509b-4139-963d-bd9b937a671d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state stopped and task_state powering-on.
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.826 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.826 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.834 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.835 2 INFO nova.compute.claims [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.874 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c49725-bb73-4591-a724-e7f28f41dc7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.875 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.876 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.876 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6eb8a4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:05 compute-0 NetworkManager[44885]: <info>  [1760432645.8783] manager: (tap7c6eb8a4-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/208)
Oct 14 09:04:05 compute-0 kernel: tap7c6eb8a4-60: entered promiscuous mode
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.880 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c6eb8a4-60, col_values=(('external_ids', {'iface-id': '0f8f2393-a280-4a6e-ac22-da67bf8d74da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:05 compute-0 ovn_controller[152662]: 2025-10-14T09:04:05Z|00474|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.896 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.898 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c300575d-465a-4743-bcc2-fea554af3374]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.899 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.900 2 DEBUG nova.compute.manager [req-62d4dff4-3dd5-40a6-9ff0-e9756c6b3387 req-123df882-1c3a-43d2-868d-bd948f52c64b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.900 2 DEBUG oslo_concurrency.lockutils [req-62d4dff4-3dd5-40a6-9ff0-e9756c6b3387 req-123df882-1c3a-43d2-868d-bd948f52c64b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.901 2 DEBUG oslo_concurrency.lockutils [req-62d4dff4-3dd5-40a6-9ff0-e9756c6b3387 req-123df882-1c3a-43d2-868d-bd948f52c64b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.901 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'env', 'PROCESS_TAG=haproxy-7c6eb8a4-6604-462a-8730-43f3742053a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c6eb8a4-6604-462a-8730-43f3742053a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.901 2 DEBUG oslo_concurrency.lockutils [req-62d4dff4-3dd5-40a6-9ff0-e9756c6b3387 req-123df882-1c3a-43d2-868d-bd948f52c64b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.901 2 DEBUG nova.compute.manager [req-62d4dff4-3dd5-40a6-9ff0-e9756c6b3387 req-123df882-1c3a-43d2-868d-bd948f52c64b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Processing event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:04:05 compute-0 nova_compute[259627]: 2025-10-14 09:04:05.982 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:06 compute-0 podman[316170]: 2025-10-14 09:04:06.369738962 +0000 UTC m=+0.071160761 container create fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:04:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:04:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/675114764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:04:06 compute-0 podman[316170]: 2025-10-14 09:04:06.338860692 +0000 UTC m=+0.040282531 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:04:06 compute-0 systemd[1]: Started libpod-conmon-fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246.scope.
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.448 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.460 2 DEBUG nova.compute.provider_tree [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:04:06 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8301634e7ba8182a15159584c4f3aa16339d156ed90bf5af3faa623c7d035d54/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:06 compute-0 podman[316184]: 2025-10-14 09:04:06.485137439 +0000 UTC m=+0.071978911 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.491 2 DEBUG nova.scheduler.client.report [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:04:06 compute-0 podman[316170]: 2025-10-14 09:04:06.493165286 +0000 UTC m=+0.194587105 container init fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 09:04:06 compute-0 podman[316170]: 2025-10-14 09:04:06.498790835 +0000 UTC m=+0.200212634 container start fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 09:04:06 compute-0 podman[316183]: 2025-10-14 09:04:06.512693827 +0000 UTC m=+0.097218302 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.519 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:06 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316208]: [NOTICE]   (316227) : New worker (316230) forked
Oct 14 09:04:06 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316208]: [NOTICE]   (316227) : Loading success.
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.519 2 DEBUG nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.524 2 DEBUG nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.525 2 DEBUG nova.compute.manager [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.525 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432646.5244722, 3c8eac8e-dcfa-41ba-9c01-1185061baf5d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.526 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] VM Started (Lifecycle Event)
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.532 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.536 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance rebooted successfully.
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.536 2 DEBUG nova.compute.manager [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.537 2 INFO nova.virt.libvirt.driver [-] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Instance spawned successfully.
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.537 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.562 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 09d03bcf-f719-4ec4-91a0-3c14e350a342 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.564 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.574 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.574 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b69e1da0-5d01-42dc-be84-1f619af67ff8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.575 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfc2d149f-a1 in ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.577 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfc2d149f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.577 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9c74b36a-7c7a-4600-a1cd-ed0c59531fd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.578 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.579 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee1b09b-563d-4f0c-96f6-f0685432a61e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.595 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.595 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.595 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d13c65-2fe1-438f-a027-14df14bc9449]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.595 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.596 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.596 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.596 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.607 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.607 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432646.5255911, 3c8eac8e-dcfa-41ba-9c01-1185061baf5d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.607 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] VM Paused (Lifecycle Event)
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.618 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6385c6-31cd-4cec-8a79-cff24cf6e585]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.624 2 DEBUG nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.624 2 DEBUG nova.network.neutron [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.637 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.640 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for de383510-2de3-40bd-b479-c0010b3f2d1c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.640 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432646.5273564, de383510-2de3-40bd-b479-c0010b3f2d1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.640 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Resumed (Lifecycle Event)
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.642 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[49146578-8d02-4406-af0c-0074f85d1a1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.649 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b3152a-6630-4f82-919e-60afd918d371]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:06 compute-0 systemd-udevd[316010]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:04:06 compute-0 NetworkManager[44885]: <info>  [1760432646.6502] manager: (tapfc2d149f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/209)
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.658 2 INFO nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.665 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.668 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.675 2 INFO nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Took 6.10 seconds to spawn the instance on the hypervisor.
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.675 2 DEBUG nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.680 2 DEBUG nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.681 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6f6847-5866-4243-bc31-89db5e3275ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.684 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[17b1d391-8091-44a9-b244-0566cca5b489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:06 compute-0 NetworkManager[44885]: <info>  [1760432646.7071] device (tapfc2d149f-a0): carrier: link connected
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.712 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bcdfe06e-dbfb-446f-8a5c-0e0ca07af71c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.713 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432646.5324605, de383510-2de3-40bd-b479-c0010b3f2d1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.713 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Started (Lifecycle Event)
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.730 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[361e1a3a-b4c2-4575-97fc-e2c1032b6f08]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642546, 'reachable_time': 32167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316249, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.745 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.749 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.749 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[033d1f2a-853f-45bd-995c-a8fb28305745]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:e73e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642546, 'tstamp': 642546}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316250, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.764 2 INFO nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Took 7.22 seconds to build instance.
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.769 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a93009c4-883c-4e13-8ba9-ae63b2d137da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642546, 'reachable_time': 32167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316251, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.780 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432646.5330865, 3c8eac8e-dcfa-41ba-9c01-1185061baf5d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.781 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] VM Resumed (Lifecycle Event)
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.796 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.313s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:06 compute-0 ceph-mon[74249]: pgmap v1452: 305 pgs: 305 active+clean; 169 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 5.3 MiB/s wr, 401 op/s
Oct 14 09:04:06 compute-0 ceph-mon[74249]: osdmap e216: 3 total, 3 up, 3 in
Oct 14 09:04:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/675114764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.802 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.802 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ebfe5e95-53dd-4beb-bf4b-4dccc4e5da07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.804 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.807 2 DEBUG nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.808 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.809 2 INFO nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Creating image(s)
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.832 2 DEBUG nova.storage.rbd_utils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] rbd image e038df86-1323-4b04-afae-9fe68c98c22c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.862 2 DEBUG nova.storage.rbd_utils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] rbd image e038df86-1323-4b04-afae-9fe68c98c22c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.871 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[980f086b-f67f-43fe-9a2f-8a7a7349ef57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.873 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.873 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.874 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:06 compute-0 NetworkManager[44885]: <info>  [1760432646.8774] manager: (tapfc2d149f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Oct 14 09:04:06 compute-0 kernel: tapfc2d149f-a0: entered promiscuous mode
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.879 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:06 compute-0 ovn_controller[152662]: 2025-10-14T09:04:06Z|00475|binding|INFO|Releasing lport 156432ee-35b5-40a9-aded-8066933d9972 from this chassis (sb_readonly=0)
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.887 2 DEBUG nova.storage.rbd_utils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] rbd image e038df86-1323-4b04-afae-9fe68c98c22c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.891 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.905 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fc2d149f-aebf-406a-aed2-5161dd22b079.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fc2d149f-aebf-406a-aed2-5161dd22b079.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.906 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d74be060-c40f-41ba-a7c5-c218871ce118]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.907 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/fc2d149f-aebf-406a-aed2-5161dd22b079.pid.haproxy
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:04:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.909 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'env', 'PROCESS_TAG=haproxy-fc2d149f-aebf-406a-aed2-5161dd22b079', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fc2d149f-aebf-406a-aed2-5161dd22b079.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.931 2 DEBUG nova.policy [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4907b291c4c64d2eb768d0036817a00b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e520d17b20a44440b176c2297c35286a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.967 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.967 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.968 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.968 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.987 2 DEBUG nova.storage.rbd_utils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] rbd image e038df86-1323-4b04-afae-9fe68c98c22c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:06 compute-0 nova_compute[259627]: 2025-10-14 09:04:06.989 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e038df86-1323-4b04-afae-9fe68c98c22c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:07.022 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:07.023 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:07.024 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:07 compute-0 nova_compute[259627]: 2025-10-14 09:04:07.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:07 compute-0 nova_compute[259627]: 2025-10-14 09:04:07.239 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e038df86-1323-4b04-afae-9fe68c98c22c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:07 compute-0 nova_compute[259627]: 2025-10-14 09:04:07.295 2 DEBUG nova.storage.rbd_utils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] resizing rbd image e038df86-1323-4b04-afae-9fe68c98c22c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:04:07 compute-0 podman[316377]: 2025-10-14 09:04:07.335033915 +0000 UTC m=+0.072432402 container create 167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 14 09:04:07 compute-0 systemd[1]: Started libpod-conmon-167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6.scope.
Oct 14 09:04:07 compute-0 podman[316377]: 2025-10-14 09:04:07.30350945 +0000 UTC m=+0.040907947 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:04:07 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6ef93dddd05609592d54febe7e864f7239987c158375c17766114f7714daa3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:07 compute-0 podman[316377]: 2025-10-14 09:04:07.436771117 +0000 UTC m=+0.174169624 container init 167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:04:07 compute-0 podman[316377]: 2025-10-14 09:04:07.4454395 +0000 UTC m=+0.182837987 container start 167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 09:04:07 compute-0 nova_compute[259627]: 2025-10-14 09:04:07.461 2 DEBUG nova.objects.instance [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lazy-loading 'migration_context' on Instance uuid e038df86-1323-4b04-afae-9fe68c98c22c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:07 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[316446]: [NOTICE]   (316466) : New worker (316470) forked
Oct 14 09:04:07 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[316446]: [NOTICE]   (316466) : Loading success.
Oct 14 09:04:07 compute-0 nova_compute[259627]: 2025-10-14 09:04:07.476 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:04:07 compute-0 nova_compute[259627]: 2025-10-14 09:04:07.477 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Ensure instance console log exists: /var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:04:07 compute-0 nova_compute[259627]: 2025-10-14 09:04:07.477 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:07 compute-0 nova_compute[259627]: 2025-10-14 09:04:07.478 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:07 compute-0 nova_compute[259627]: 2025-10-14 09:04:07.478 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:04:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Oct 14 09:04:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Oct 14 09:04:07 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Oct 14 09:04:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 169 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 5.3 MiB/s wr, 401 op/s
Oct 14 09:04:07 compute-0 nova_compute[259627]: 2025-10-14 09:04:07.969 2 DEBUG nova.network.neutron [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Successfully created port: 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:04:08 compute-0 nova_compute[259627]: 2025-10-14 09:04:08.216 2 DEBUG nova.compute.manager [req-260486ea-3ee0-461c-a7d9-5c79ee0a6f53 req-c3d0efb2-34e0-4404-a3a1-16dbb37e0a7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:08 compute-0 nova_compute[259627]: 2025-10-14 09:04:08.217 2 DEBUG oslo_concurrency.lockutils [req-260486ea-3ee0-461c-a7d9-5c79ee0a6f53 req-c3d0efb2-34e0-4404-a3a1-16dbb37e0a7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:08 compute-0 nova_compute[259627]: 2025-10-14 09:04:08.218 2 DEBUG oslo_concurrency.lockutils [req-260486ea-3ee0-461c-a7d9-5c79ee0a6f53 req-c3d0efb2-34e0-4404-a3a1-16dbb37e0a7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:08 compute-0 nova_compute[259627]: 2025-10-14 09:04:08.218 2 DEBUG oslo_concurrency.lockutils [req-260486ea-3ee0-461c-a7d9-5c79ee0a6f53 req-c3d0efb2-34e0-4404-a3a1-16dbb37e0a7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:08 compute-0 nova_compute[259627]: 2025-10-14 09:04:08.219 2 DEBUG nova.compute.manager [req-260486ea-3ee0-461c-a7d9-5c79ee0a6f53 req-c3d0efb2-34e0-4404-a3a1-16dbb37e0a7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:08 compute-0 nova_compute[259627]: 2025-10-14 09:04:08.219 2 WARNING nova.compute.manager [req-260486ea-3ee0-461c-a7d9-5c79ee0a6f53 req-c3d0efb2-34e0-4404-a3a1-16dbb37e0a7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state None.
Oct 14 09:04:08 compute-0 nova_compute[259627]: 2025-10-14 09:04:08.221 2 DEBUG nova.compute.manager [req-2de1bf1d-88cd-4357-93a5-7618305617d6 req-95974312-357e-4a45-8250-70cd2ca3e38a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:08 compute-0 nova_compute[259627]: 2025-10-14 09:04:08.222 2 DEBUG oslo_concurrency.lockutils [req-2de1bf1d-88cd-4357-93a5-7618305617d6 req-95974312-357e-4a45-8250-70cd2ca3e38a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:08 compute-0 nova_compute[259627]: 2025-10-14 09:04:08.223 2 DEBUG oslo_concurrency.lockutils [req-2de1bf1d-88cd-4357-93a5-7618305617d6 req-95974312-357e-4a45-8250-70cd2ca3e38a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:08 compute-0 nova_compute[259627]: 2025-10-14 09:04:08.223 2 DEBUG oslo_concurrency.lockutils [req-2de1bf1d-88cd-4357-93a5-7618305617d6 req-95974312-357e-4a45-8250-70cd2ca3e38a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:08 compute-0 nova_compute[259627]: 2025-10-14 09:04:08.224 2 DEBUG nova.compute.manager [req-2de1bf1d-88cd-4357-93a5-7618305617d6 req-95974312-357e-4a45-8250-70cd2ca3e38a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:08 compute-0 nova_compute[259627]: 2025-10-14 09:04:08.225 2 WARNING nova.compute.manager [req-2de1bf1d-88cd-4357-93a5-7618305617d6 req-95974312-357e-4a45-8250-70cd2ca3e38a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received unexpected event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 for instance with vm_state active and task_state None.
Oct 14 09:04:08 compute-0 ceph-mon[74249]: osdmap e217: 3 total, 3 up, 3 in
Oct 14 09:04:08 compute-0 ceph-mon[74249]: pgmap v1455: 305 pgs: 305 active+clean; 169 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 5.3 MiB/s wr, 401 op/s
Oct 14 09:04:09 compute-0 nova_compute[259627]: 2025-10-14 09:04:09.041 2 DEBUG nova.network.neutron [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Successfully updated port: 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:04:09 compute-0 nova_compute[259627]: 2025-10-14 09:04:09.058 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:09 compute-0 nova_compute[259627]: 2025-10-14 09:04:09.058 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquired lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:09 compute-0 nova_compute[259627]: 2025-10-14 09:04:09.058 2 DEBUG nova.network.neutron [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:04:09 compute-0 nova_compute[259627]: 2025-10-14 09:04:09.178 2 DEBUG nova.network.neutron [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:04:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 169 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 3.6 MiB/s wr, 272 op/s
Oct 14 09:04:09 compute-0 nova_compute[259627]: 2025-10-14 09:04:09.758 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432634.757353, dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:09 compute-0 nova_compute[259627]: 2025-10-14 09:04:09.759 2 INFO nova.compute.manager [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] VM Stopped (Lifecycle Event)
Oct 14 09:04:09 compute-0 nova_compute[259627]: 2025-10-14 09:04:09.778 2 DEBUG nova.compute.manager [None req-919cec2c-8cd9-41f2-bfb8-c9c338aef802 - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.162 2 DEBUG nova.network.neutron [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updating instance_info_cache with network_info: [{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.186 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Releasing lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.186 2 DEBUG nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Instance network_info: |[{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.188 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Start _get_guest_xml network_info=[{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.193 2 WARNING nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.204 2 DEBUG nova.virt.libvirt.host [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.205 2 DEBUG nova.virt.libvirt.host [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.208 2 DEBUG nova.virt.libvirt.host [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.208 2 DEBUG nova.virt.libvirt.host [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.209 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.209 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.209 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.210 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.210 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.210 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.210 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.210 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.211 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.211 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.211 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.211 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.214 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.310 2 DEBUG nova.compute.manager [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-changed-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.310 2 DEBUG nova.compute.manager [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Refreshing instance network info cache due to event network-changed-09d03bcf-f719-4ec4-91a0-3c14e350a342. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.311 2 DEBUG oslo_concurrency.lockutils [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.311 2 DEBUG oslo_concurrency.lockutils [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.311 2 DEBUG nova.network.neutron [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Refreshing network info cache for port 09d03bcf-f719-4ec4-91a0-3c14e350a342 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.514 2 DEBUG nova.objects.instance [None req-a2bcc681-167a-46c3-a9a5-3a1afb58fa73 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'pci_devices' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.541 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432650.53431, de383510-2de3-40bd-b479-c0010b3f2d1c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.542 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Paused (Lifecycle Event)
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.557 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.568 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.584 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 14 09:04:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:04:10 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3197499560' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.686 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.705 2 DEBUG nova.storage.rbd_utils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] rbd image e038df86-1323-4b04-afae-9fe68c98c22c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.708 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:10 compute-0 ceph-mon[74249]: pgmap v1456: 305 pgs: 305 active+clean; 169 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 3.6 MiB/s wr, 272 op/s
Oct 14 09:04:10 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3197499560' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:10 compute-0 kernel: tap8ec905f0-b7 (unregistering): left promiscuous mode
Oct 14 09:04:10 compute-0 NetworkManager[44885]: <info>  [1760432650.9741] device (tap8ec905f0-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:04:10 compute-0 ovn_controller[152662]: 2025-10-14T09:04:10Z|00476|binding|INFO|Releasing lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 from this chassis (sb_readonly=0)
Oct 14 09:04:10 compute-0 ovn_controller[152662]: 2025-10-14T09:04:10Z|00477|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 down in Southbound
Oct 14 09:04:10 compute-0 ovn_controller[152662]: 2025-10-14T09:04:10Z|00478|binding|INFO|Removing iface tap8ec905f0-b7 ovn-installed in OVS
Oct 14 09:04:10 compute-0 nova_compute[259627]: 2025-10-14 09:04:10.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.003 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:04:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.004 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 unbound from our chassis
Oct 14 09:04:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.006 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c6eb8a4-6604-462a-8730-43f3742053a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:04:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.007 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf99e61-3bc4-4a4a-889f-632ad6fd15a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.008 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace which is not needed anymore
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:11 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 14 09:04:11 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000002b.scope: Consumed 4.898s CPU time.
Oct 14 09:04:11 compute-0 systemd-machined[214636]: Machine qemu-63-instance-0000002b terminated.
Oct 14 09:04:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:04:11 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2705211513' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:11 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316208]: [NOTICE]   (316227) : haproxy version is 2.8.14-c23fe91
Oct 14 09:04:11 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316208]: [NOTICE]   (316227) : path to executable is /usr/sbin/haproxy
Oct 14 09:04:11 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316208]: [WARNING]  (316227) : Exiting Master process...
Oct 14 09:04:11 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316208]: [ALERT]    (316227) : Current worker (316230) exited with code 143 (Terminated)
Oct 14 09:04:11 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316208]: [WARNING]  (316227) : All workers exited. Exiting... (0)
Oct 14 09:04:11 compute-0 systemd[1]: libpod-fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246.scope: Deactivated successfully.
Oct 14 09:04:11 compute-0 conmon[316208]: conmon fb10a26491d9f1f52952 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246.scope/container/memory.events
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:11 compute-0 podman[316566]: 2025-10-14 09:04:11.150740691 +0000 UTC m=+0.051547849 container died fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.156 2 DEBUG nova.compute.manager [None req-a2bcc681-167a-46c3-a9a5-3a1afb58fa73 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.161 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.162 2 DEBUG nova.virt.libvirt.vif [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:04:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1327010057',display_name='tempest-AttachInterfacesUnderV243Test-server-1327010057',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1327010057',id=52,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEoL8Eizmz78I7kJk+2faYxVDwYlZ7Qa0JVnSyW4HvPt6t6qpenjELhDNQJBBgBLKQxH+hNzILHY6YG4gLNrrM0gadWtg4ztrg1o/Wi2tCk6CtSq2N27wHKOX5s993gLcg==',key_name='tempest-keypair-1836165188',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e520d17b20a44440b176c2297c35286a',ramdisk_id='',reservation_id='r-oukj60f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1413244718',owner_user_name='tempest-AttachInterfacesUnderV243Test-1413244718-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:04:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4907b291c4c64d2eb768d0036817a00b',uuid=e038df86-1323-4b04-afae-9fe68c98c22c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.163 2 DEBUG nova.network.os_vif_util [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Converting VIF {"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.163 2 DEBUG nova.network.os_vif_util [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:80:d4,bridge_name='br-int',has_traffic_filtering=True,id=4f8c1944-ec5d-4de3-82f1-19760c6b4dd4,network=Network(e964bc94-eb23-4bb9-b6af-2d14c0f7d764),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f8c1944-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.165 2 DEBUG nova.objects.instance [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lazy-loading 'pci_devices' on Instance uuid e038df86-1323-4b04-afae-9fe68c98c22c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246-userdata-shm.mount: Deactivated successfully.
Oct 14 09:04:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-8301634e7ba8182a15159584c4f3aa16339d156ed90bf5af3faa623c7d035d54-merged.mount: Deactivated successfully.
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.190 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:04:11 compute-0 nova_compute[259627]:   <uuid>e038df86-1323-4b04-afae-9fe68c98c22c</uuid>
Oct 14 09:04:11 compute-0 nova_compute[259627]:   <name>instance-00000034</name>
Oct 14 09:04:11 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:04:11 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:04:11 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-1327010057</nova:name>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:04:10</nova:creationTime>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:04:11 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:04:11 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:04:11 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:04:11 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:04:11 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:04:11 compute-0 nova_compute[259627]:         <nova:user uuid="4907b291c4c64d2eb768d0036817a00b">tempest-AttachInterfacesUnderV243Test-1413244718-project-member</nova:user>
Oct 14 09:04:11 compute-0 nova_compute[259627]:         <nova:project uuid="e520d17b20a44440b176c2297c35286a">tempest-AttachInterfacesUnderV243Test-1413244718</nova:project>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:04:11 compute-0 nova_compute[259627]:         <nova:port uuid="4f8c1944-ec5d-4de3-82f1-19760c6b4dd4">
Oct 14 09:04:11 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:04:11 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:04:11 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <system>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <entry name="serial">e038df86-1323-4b04-afae-9fe68c98c22c</entry>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <entry name="uuid">e038df86-1323-4b04-afae-9fe68c98c22c</entry>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     </system>
Oct 14 09:04:11 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:04:11 compute-0 nova_compute[259627]:   <os>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:   </os>
Oct 14 09:04:11 compute-0 nova_compute[259627]:   <features>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:   </features>
Oct 14 09:04:11 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:04:11 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:04:11 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e038df86-1323-4b04-afae-9fe68c98c22c_disk">
Oct 14 09:04:11 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       </source>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:04:11 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e038df86-1323-4b04-afae-9fe68c98c22c_disk.config">
Oct 14 09:04:11 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       </source>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:04:11 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:fb:80:d4"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <target dev="tap4f8c1944-ec"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c/console.log" append="off"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <video>
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     </video>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:04:11 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:04:11 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:04:11 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:04:11 compute-0 nova_compute[259627]: </domain>
Oct 14 09:04:11 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.191 2 DEBUG nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Preparing to wait for external event network-vif-plugged-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.191 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.191 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.191 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.192 2 DEBUG nova.virt.libvirt.vif [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:04:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1327010057',display_name='tempest-AttachInterfacesUnderV243Test-server-1327010057',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1327010057',id=52,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEoL8Eizmz78I7kJk+2faYxVDwYlZ7Qa0JVnSyW4HvPt6t6qpenjELhDNQJBBgBLKQxH+hNzILHY6YG4gLNrrM0gadWtg4ztrg1o/Wi2tCk6CtSq2N27wHKOX5s993gLcg==',key_name='tempest-keypair-1836165188',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e520d17b20a44440b176c2297c35286a',ramdisk_id='',reservation_id='r-oukj60f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1413244718',owner_user_name='tempest-AttachInterfacesUnderV243Test-1413244718-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:04:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4907b291c4c64d2eb768d0036817a00b',uuid=e038df86-1323-4b04-afae-9fe68c98c22c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.192 2 DEBUG nova.network.os_vif_util [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Converting VIF {"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.193 2 DEBUG nova.network.os_vif_util [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:80:d4,bridge_name='br-int',has_traffic_filtering=True,id=4f8c1944-ec5d-4de3-82f1-19760c6b4dd4,network=Network(e964bc94-eb23-4bb9-b6af-2d14c0f7d764),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f8c1944-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.193 2 DEBUG os_vif [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:80:d4,bridge_name='br-int',has_traffic_filtering=True,id=4f8c1944-ec5d-4de3-82f1-19760c6b4dd4,network=Network(e964bc94-eb23-4bb9-b6af-2d14c0f7d764),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f8c1944-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.194 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.194 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:11 compute-0 podman[316566]: 2025-10-14 09:04:11.195826939 +0000 UTC m=+0.096634107 container cleanup fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.196 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f8c1944-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.196 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f8c1944-ec, col_values=(('external_ids', {'iface-id': '4f8c1944-ec5d-4de3-82f1-19760c6b4dd4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:80:d4', 'vm-uuid': 'e038df86-1323-4b04-afae-9fe68c98c22c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:11 compute-0 NetworkManager[44885]: <info>  [1760432651.1985] manager: (tap4f8c1944-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:04:11 compute-0 systemd[1]: libpod-conmon-fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246.scope: Deactivated successfully.
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.205 2 INFO os_vif [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:80:d4,bridge_name='br-int',has_traffic_filtering=True,id=4f8c1944-ec5d-4de3-82f1-19760c6b4dd4,network=Network(e964bc94-eb23-4bb9-b6af-2d14c0f7d764),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f8c1944-ec')
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.267 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.267 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.267 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] No VIF found with MAC fa:16:3e:fb:80:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:04:11 compute-0 podman[316607]: 2025-10-14 09:04:11.2678827 +0000 UTC m=+0.046097085 container remove fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.268 2 INFO nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Using config drive
Oct 14 09:04:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.279 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[38ce6d25-dcc5-40ed-995b-188bd2898478]: (4, ('Tue Oct 14 09:04:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246)\nfb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246\nTue Oct 14 09:04:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246)\nfb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.281 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[92fa0b98-c7a6-4cd3-a484-cabe738709a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.282 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:11 compute-0 kernel: tap7c6eb8a4-60: left promiscuous mode
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.296 2 DEBUG nova.storage.rbd_utils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] rbd image e038df86-1323-4b04-afae-9fe68c98c22c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.311 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f14cc1af-51f1-49ad-a5f1-d8c5818ca6d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.338 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba31b3d-70f2-49e8-a43f-14b3d95ecd53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.341 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[de14f6ce-27b5-4a2c-b364-c6c5669bcf11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.357 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d9380612-9d07-4e59-9802-5ade09921f2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642437, 'reachable_time': 44473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316647, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d7c6eb8a4\x2d6604\x2d462a\x2d8730\x2d43f3742053a7.mount: Deactivated successfully.
Oct 14 09:04:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.361 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:04:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.361 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ed95b2-e952-4b95-9c41-40be868e774a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.500 2 DEBUG nova.compute.manager [req-e5f2f79d-d0ff-4741-a2c6-2f9672cc8764 req-d0398d22-7442-45f6-b1ca-fe2117f0ef7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.500 2 DEBUG oslo_concurrency.lockutils [req-e5f2f79d-d0ff-4741-a2c6-2f9672cc8764 req-d0398d22-7442-45f6-b1ca-fe2117f0ef7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.501 2 DEBUG oslo_concurrency.lockutils [req-e5f2f79d-d0ff-4741-a2c6-2f9672cc8764 req-d0398d22-7442-45f6-b1ca-fe2117f0ef7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.501 2 DEBUG oslo_concurrency.lockutils [req-e5f2f79d-d0ff-4741-a2c6-2f9672cc8764 req-d0398d22-7442-45f6-b1ca-fe2117f0ef7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.501 2 DEBUG nova.compute.manager [req-e5f2f79d-d0ff-4741-a2c6-2f9672cc8764 req-d0398d22-7442-45f6-b1ca-fe2117f0ef7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.501 2 WARNING nova.compute.manager [req-e5f2f79d-d0ff-4741-a2c6-2f9672cc8764 req-d0398d22-7442-45f6-b1ca-fe2117f0ef7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state suspended and task_state None.
Oct 14 09:04:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 215 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 6.2 MiB/s wr, 563 op/s
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.686 2 INFO nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Creating config drive at /var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c/disk.config
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.692 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn8lj4pft execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.826 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn8lj4pft" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.852 2 DEBUG nova.storage.rbd_utils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] rbd image e038df86-1323-4b04-afae-9fe68c98c22c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:11 compute-0 nova_compute[259627]: 2025-10-14 09:04:11.855 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c/disk.config e038df86-1323-4b04-afae-9fe68c98c22c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:11 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2705211513' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:11.935947) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432651936002, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2612, "num_deletes": 529, "total_data_size": 3416722, "memory_usage": 3475456, "flush_reason": "Manual Compaction"}
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432651952285, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3349392, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27928, "largest_seqno": 30539, "table_properties": {"data_size": 3337760, "index_size": 7166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 27116, "raw_average_key_size": 20, "raw_value_size": 3312510, "raw_average_value_size": 2481, "num_data_blocks": 311, "num_entries": 1335, "num_filter_entries": 1335, "num_deletions": 529, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760432482, "oldest_key_time": 1760432482, "file_creation_time": 1760432651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 16375 microseconds, and 7398 cpu microseconds.
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:11.952328) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3349392 bytes OK
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:11.952349) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:11.954967) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:11.954984) EVENT_LOG_v1 {"time_micros": 1760432651954978, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:11.955001) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3404533, prev total WAL file size 3404533, number of live WAL files 2.
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:11.956144) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3270KB)], [62(8428KB)]
Oct 14 09:04:11 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432651956212, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 11980335, "oldest_snapshot_seqno": -1}
Oct 14 09:04:12 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5543 keys, 10230631 bytes, temperature: kUnknown
Oct 14 09:04:12 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432652004149, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10230631, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10189175, "index_size": 26498, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 139135, "raw_average_key_size": 25, "raw_value_size": 10085005, "raw_average_value_size": 1819, "num_data_blocks": 1083, "num_entries": 5543, "num_filter_entries": 5543, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760432651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:04:12 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:04:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:12.004346) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10230631 bytes
Oct 14 09:04:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:12.008824) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 249.6 rd, 213.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.2 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(6.6) write-amplify(3.1) OK, records in: 6600, records dropped: 1057 output_compression: NoCompression
Oct 14 09:04:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:12.008844) EVENT_LOG_v1 {"time_micros": 1760432652008834, "job": 34, "event": "compaction_finished", "compaction_time_micros": 47998, "compaction_time_cpu_micros": 21010, "output_level": 6, "num_output_files": 1, "total_output_size": 10230631, "num_input_records": 6600, "num_output_records": 5543, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:04:12 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:04:12 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432652009544, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct 14 09:04:12 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:04:12 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432652010731, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct 14 09:04:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:11.956039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:04:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:12.010781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:04:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:12.010786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:04:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:12.010788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:04:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:12.010790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:04:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:12.010792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:04:12 compute-0 nova_compute[259627]: 2025-10-14 09:04:12.060 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c/disk.config e038df86-1323-4b04-afae-9fe68c98c22c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:12 compute-0 nova_compute[259627]: 2025-10-14 09:04:12.061 2 INFO nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Deleting local config drive /var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c/disk.config because it was imported into RBD.
Oct 14 09:04:12 compute-0 kernel: tap4f8c1944-ec: entered promiscuous mode
Oct 14 09:04:12 compute-0 systemd-udevd[316545]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:04:12 compute-0 NetworkManager[44885]: <info>  [1760432652.1099] manager: (tap4f8c1944-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/212)
Oct 14 09:04:12 compute-0 ovn_controller[152662]: 2025-10-14T09:04:12Z|00479|binding|INFO|Claiming lport 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 for this chassis.
Oct 14 09:04:12 compute-0 nova_compute[259627]: 2025-10-14 09:04:12.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:12 compute-0 ovn_controller[152662]: 2025-10-14T09:04:12Z|00480|binding|INFO|4f8c1944-ec5d-4de3-82f1-19760c6b4dd4: Claiming fa:16:3e:fb:80:d4 10.100.0.7
Oct 14 09:04:12 compute-0 NetworkManager[44885]: <info>  [1760432652.1241] device (tap4f8c1944-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.121 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:80:d4 10.100.0.7'], port_security=['fa:16:3e:fb:80:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e038df86-1323-4b04-afae-9fe68c98c22c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e964bc94-eb23-4bb9-b6af-2d14c0f7d764', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e520d17b20a44440b176c2297c35286a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dfbcaeb0-59cf-4ea6-aad2-32a400918089', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c03d4d8-729d-49db-b443-08ab27defcda, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4f8c1944-ec5d-4de3-82f1-19760c6b4dd4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.122 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 in datapath e964bc94-eb23-4bb9-b6af-2d14c0f7d764 bound to our chassis
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.124 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e964bc94-eb23-4bb9-b6af-2d14c0f7d764
Oct 14 09:04:12 compute-0 NetworkManager[44885]: <info>  [1760432652.1253] device (tap4f8c1944-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.134 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8106ade3-b0c7-4750-8cf0-691b1b251272]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.136 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape964bc94-e1 in ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.140 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape964bc94-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.140 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[007e7b43-5aec-457e-90dc-3eef11599246]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.143 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6c466e13-59a9-4205-b16c-a1188a139728]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:12 compute-0 systemd-machined[214636]: New machine qemu-65-instance-00000034.
Oct 14 09:04:12 compute-0 ovn_controller[152662]: 2025-10-14T09:04:12Z|00481|binding|INFO|Setting lport 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 ovn-installed in OVS
Oct 14 09:04:12 compute-0 ovn_controller[152662]: 2025-10-14T09:04:12Z|00482|binding|INFO|Setting lport 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 up in Southbound
Oct 14 09:04:12 compute-0 nova_compute[259627]: 2025-10-14 09:04:12.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:12 compute-0 nova_compute[259627]: 2025-10-14 09:04:12.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.161 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[5d8664c9-aeaa-4852-9a61-0c8e75eca6f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:12 compute-0 systemd[1]: Started Virtual Machine qemu-65-instance-00000034.
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.184 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[52a6d86b-ef41-4cb2-9fc8-07187a65cc94]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.212 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e401cd-4397-4b26-b993-1af6655bed49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:12 compute-0 NetworkManager[44885]: <info>  [1760432652.2265] manager: (tape964bc94-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/213)
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.225 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab719fd-614b-4573-8a44-ee420663fab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:12 compute-0 nova_compute[259627]: 2025-10-14 09:04:12.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:12 compute-0 nova_compute[259627]: 2025-10-14 09:04:12.242 2 DEBUG nova.network.neutron [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updated VIF entry in instance network info cache for port 09d03bcf-f719-4ec4-91a0-3c14e350a342. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:04:12 compute-0 nova_compute[259627]: 2025-10-14 09:04:12.243 2 DEBUG nova.network.neutron [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updating instance_info_cache with network_info: [{"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:12 compute-0 nova_compute[259627]: 2025-10-14 09:04:12.269 2 DEBUG oslo_concurrency.lockutils [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:12 compute-0 nova_compute[259627]: 2025-10-14 09:04:12.270 2 DEBUG nova.compute.manager [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received event network-changed-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:12 compute-0 nova_compute[259627]: 2025-10-14 09:04:12.270 2 DEBUG nova.compute.manager [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Refreshing instance network info cache due to event network-changed-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:04:12 compute-0 nova_compute[259627]: 2025-10-14 09:04:12.270 2 DEBUG oslo_concurrency.lockutils [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:12 compute-0 nova_compute[259627]: 2025-10-14 09:04:12.271 2 DEBUG oslo_concurrency.lockutils [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:12 compute-0 nova_compute[259627]: 2025-10-14 09:04:12.271 2 DEBUG nova.network.neutron [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Refreshing network info cache for port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.271 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[336403d9-b402-4c3e-a97b-acce9d74fdd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.275 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ecf19a-5859-44cf-a86f-40ab00ce3550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:12 compute-0 NetworkManager[44885]: <info>  [1760432652.3014] device (tape964bc94-e0): carrier: link connected
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.311 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ef882f33-b618-41bc-aef0-0b797be442a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.334 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5d5f53a6-c7d9-4215-bb25-760171a8b7f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape964bc94-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:2c:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643106, 'reachable_time': 27840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316735, 'error': None, 'target': 'ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.355 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[19885a57-f883-4fb1-8c19-27bc9bbd2f11]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:2c10'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 643106, 'tstamp': 643106}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316736, 'error': None, 'target': 'ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.374 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[971c94d6-7932-4dda-a927-d52e7ac6a5ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape964bc94-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:2c:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643106, 'reachable_time': 27840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316737, 'error': None, 'target': 'ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.406 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[76d72c4e-7197-4998-a499-2f428b0b4aa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.456 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[26f36671-bb20-4f6e-8d41-52c575543d61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.457 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape964bc94-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.458 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.458 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape964bc94-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:12 compute-0 NetworkManager[44885]: <info>  [1760432652.4605] manager: (tape964bc94-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Oct 14 09:04:12 compute-0 kernel: tape964bc94-e0: entered promiscuous mode
Oct 14 09:04:12 compute-0 nova_compute[259627]: 2025-10-14 09:04:12.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.465 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape964bc94-e0, col_values=(('external_ids', {'iface-id': 'bcf643fa-2c1a-444e-ad03-f473ae6c9565'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:12 compute-0 nova_compute[259627]: 2025-10-14 09:04:12.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:12 compute-0 ovn_controller[152662]: 2025-10-14T09:04:12Z|00483|binding|INFO|Releasing lport bcf643fa-2c1a-444e-ad03-f473ae6c9565 from this chassis (sb_readonly=0)
Oct 14 09:04:12 compute-0 nova_compute[259627]: 2025-10-14 09:04:12.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:12 compute-0 nova_compute[259627]: 2025-10-14 09:04:12.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.488 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e964bc94-eb23-4bb9-b6af-2d14c0f7d764.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e964bc94-eb23-4bb9-b6af-2d14c0f7d764.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.489 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[61f10717-e2dd-46de-bb35-a84d9b9c226e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.490 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-e964bc94-eb23-4bb9-b6af-2d14c0f7d764
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/e964bc94-eb23-4bb9-b6af-2d14c0f7d764.pid.haproxy
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID e964bc94-eb23-4bb9-b6af-2d14c0f7d764
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:04:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.492 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764', 'env', 'PROCESS_TAG=haproxy-e964bc94-eb23-4bb9-b6af-2d14c0f7d764', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e964bc94-eb23-4bb9-b6af-2d14c0f7d764.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:04:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:04:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Oct 14 09:04:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Oct 14 09:04:12 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Oct 14 09:04:12 compute-0 podman[316813]: 2025-10-14 09:04:12.849805874 +0000 UTC m=+0.047398036 container create a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 09:04:12 compute-0 systemd[1]: Started libpod-conmon-a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436.scope.
Oct 14 09:04:12 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:04:12 compute-0 podman[316813]: 2025-10-14 09:04:12.825374473 +0000 UTC m=+0.022966685 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7bf3898c6dcba520ee1a299c42284442b12fb92ae3baa3607f44b8c13a94b30/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:12 compute-0 podman[316813]: 2025-10-14 09:04:12.931640496 +0000 UTC m=+0.129232668 container init a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:04:12 compute-0 podman[316813]: 2025-10-14 09:04:12.938779792 +0000 UTC m=+0.136371954 container start a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 14 09:04:12 compute-0 ceph-mon[74249]: pgmap v1457: 305 pgs: 305 active+clean; 215 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 6.2 MiB/s wr, 563 op/s
Oct 14 09:04:12 compute-0 ceph-mon[74249]: osdmap e218: 3 total, 3 up, 3 in
Oct 14 09:04:12 compute-0 neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764[316831]: [NOTICE]   (316835) : New worker (316837) forked
Oct 14 09:04:12 compute-0 neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764[316831]: [NOTICE]   (316835) : Loading success.
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.148 2 INFO nova.compute.manager [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Resuming
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.149 2 DEBUG nova.objects.instance [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'flavor' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.187 2 DEBUG oslo_concurrency.lockutils [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.187 2 DEBUG oslo_concurrency.lockutils [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquired lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.187 2 DEBUG nova.network.neutron [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.240 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432653.2396312, e038df86-1323-4b04-afae-9fe68c98c22c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.240 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] VM Started (Lifecycle Event)
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.262 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.266 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432653.2398152, e038df86-1323-4b04-afae-9fe68c98c22c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.267 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] VM Paused (Lifecycle Event)
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.283 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.286 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.308 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.590 2 DEBUG nova.compute.manager [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.590 2 DEBUG oslo_concurrency.lockutils [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.590 2 DEBUG oslo_concurrency.lockutils [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.590 2 DEBUG oslo_concurrency.lockutils [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.591 2 DEBUG nova.compute.manager [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.591 2 WARNING nova.compute.manager [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state suspended and task_state resuming.
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.591 2 DEBUG nova.compute.manager [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received event network-vif-plugged-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.591 2 DEBUG oslo_concurrency.lockutils [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.591 2 DEBUG oslo_concurrency.lockutils [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.591 2 DEBUG oslo_concurrency.lockutils [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.591 2 DEBUG nova.compute.manager [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Processing event network-vif-plugged-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.592 2 DEBUG nova.compute.manager [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received event network-vif-plugged-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.592 2 DEBUG oslo_concurrency.lockutils [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.592 2 DEBUG oslo_concurrency.lockutils [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.592 2 DEBUG oslo_concurrency.lockutils [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.592 2 DEBUG nova.compute.manager [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] No waiting events found dispatching network-vif-plugged-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.592 2 WARNING nova.compute.manager [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received unexpected event network-vif-plugged-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 for instance with vm_state building and task_state spawning.
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.593 2 DEBUG nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.596 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432653.5961614, e038df86-1323-4b04-afae-9fe68c98c22c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.596 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] VM Resumed (Lifecycle Event)
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.598 2 DEBUG nova.network.neutron [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updated VIF entry in instance network info cache for port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.598 2 DEBUG nova.network.neutron [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updating instance_info_cache with network_info: [{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.600 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.603 2 INFO nova.virt.libvirt.driver [-] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Instance spawned successfully.
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.603 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.619 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.623 2 DEBUG oslo_concurrency.lockutils [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.627 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.632 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.633 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.633 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.633 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.634 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.634 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.655 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:04:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 215 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.7 MiB/s wr, 289 op/s
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.697 2 INFO nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Took 6.89 seconds to spawn the instance on the hypervisor.
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.697 2 DEBUG nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.770 2 INFO nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Took 7.98 seconds to build instance.
Oct 14 09:04:13 compute-0 nova_compute[259627]: 2025-10-14 09:04:13.793 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:14 compute-0 ceph-mon[74249]: pgmap v1459: 305 pgs: 305 active+clean; 215 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.7 MiB/s wr, 289 op/s
Oct 14 09:04:14 compute-0 nova_compute[259627]: 2025-10-14 09:04:14.979 2 DEBUG nova.network.neutron [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:14 compute-0 nova_compute[259627]: 2025-10-14 09:04:14.996 2 DEBUG oslo_concurrency.lockutils [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Releasing lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.000 2 DEBUG nova.virt.libvirt.vif [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.001 2 DEBUG nova.network.os_vif_util [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.001 2 DEBUG nova.network.os_vif_util [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.002 2 DEBUG os_vif [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.003 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.003 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.005 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ec905f0-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.006 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ec905f0-b7, col_values=(('external_ids', {'iface-id': '8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:e2:1b', 'vm-uuid': 'de383510-2de3-40bd-b479-c0010b3f2d1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.006 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.007 2 INFO os_vif [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.026 2 DEBUG nova.objects.instance [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'numa_topology' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:15 compute-0 NetworkManager[44885]: <info>  [1760432655.0983] manager: (tap8ec905f0-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/215)
Oct 14 09:04:15 compute-0 kernel: tap8ec905f0-b7: entered promiscuous mode
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:15 compute-0 ovn_controller[152662]: 2025-10-14T09:04:15Z|00484|binding|INFO|Claiming lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for this chassis.
Oct 14 09:04:15 compute-0 ovn_controller[152662]: 2025-10-14T09:04:15Z|00485|binding|INFO|8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2: Claiming fa:16:3e:be:e2:1b 10.100.0.10
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.114 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.116 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 bound to our chassis
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.118 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:04:15 compute-0 ovn_controller[152662]: 2025-10-14T09:04:15Z|00486|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 ovn-installed in OVS
Oct 14 09:04:15 compute-0 ovn_controller[152662]: 2025-10-14T09:04:15Z|00487|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 up in Southbound
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.130 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e331b1-4b36-453e-b0a8-38fc01345b29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.131 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c6eb8a4-61 in ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.136 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c6eb8a4-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.136 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f151d475-75f5-4ade-a9a2-373178afec13]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.141 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d81f2785-49fd-498a-b979-d068a4b2ea36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:15 compute-0 systemd-machined[214636]: New machine qemu-66-instance-0000002b.
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.161 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[62c85fa7-d635-4ab0-87e3-8251764ac726]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:15 compute-0 systemd[1]: Started Virtual Machine qemu-66-instance-0000002b.
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.176 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bc619c9e-6ad1-4d86-8648-115c2d60f412]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:15 compute-0 systemd-udevd[316864]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:04:15 compute-0 NetworkManager[44885]: <info>  [1760432655.2090] device (tap8ec905f0-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:04:15 compute-0 NetworkManager[44885]: <info>  [1760432655.2099] device (tap8ec905f0-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.211 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[202fbd0d-9d3a-428d-9659-5ba0caeb26b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.217 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9aab6e59-4286-4a50-8352-070eb7ac0e39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:15 compute-0 NetworkManager[44885]: <info>  [1760432655.2234] manager: (tap7c6eb8a4-60): new Veth device (/org/freedesktop/NetworkManager/Devices/216)
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.262 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cd737a75-e947-419d-8c20-5c451a062ff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.267 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b606665a-d0f8-45b3-9d0a-211f1f2d7865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:15 compute-0 NetworkManager[44885]: <info>  [1760432655.2925] device (tap7c6eb8a4-60): carrier: link connected
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.303 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e884e930-eee0-45c0-9fe4-b7a04dcebf9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.321 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[27414336-5c80-4941-bcae-7bc1fc10bbc8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 147], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643405, 'reachable_time': 44987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316893, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.338 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[52e65582-023a-46a0-9193-9d612fecbfe6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe62:70fd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 643405, 'tstamp': 643405}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316894, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.358 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c684ce15-acdc-4d52-b741-fa814a4c1653]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 147], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643405, 'reachable_time': 44987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316895, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.390 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f21281-9ac5-428e-9888-c115c612cc16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.445 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b07e4f39-2d63-46f6-9e60-075ee0ea7df0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.447 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.447 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.448 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6eb8a4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:15 compute-0 NetworkManager[44885]: <info>  [1760432655.4505] manager: (tap7c6eb8a4-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Oct 14 09:04:15 compute-0 kernel: tap7c6eb8a4-60: entered promiscuous mode
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.457 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c6eb8a4-60, col_values=(('external_ids', {'iface-id': '0f8f2393-a280-4a6e-ac22-da67bf8d74da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:15 compute-0 ovn_controller[152662]: 2025-10-14T09:04:15Z|00488|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.462 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.468 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aec749a2-d3d9-4213-88ce-9f01420a18bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.473 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.478 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'env', 'PROCESS_TAG=haproxy-7c6eb8a4-6604-462a-8730-43f3742053a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c6eb8a4-6604-462a-8730-43f3742053a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:04:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 216 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 2.7 MiB/s wr, 393 op/s
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.791 2 DEBUG nova.compute.manager [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.795 2 DEBUG oslo_concurrency.lockutils [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.795 2 DEBUG oslo_concurrency.lockutils [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.795 2 DEBUG oslo_concurrency.lockutils [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.795 2 DEBUG nova.compute.manager [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.796 2 WARNING nova.compute.manager [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state suspended and task_state resuming.
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.796 2 DEBUG nova.compute.manager [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.796 2 DEBUG oslo_concurrency.lockutils [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.797 2 DEBUG oslo_concurrency.lockutils [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.797 2 DEBUG oslo_concurrency.lockutils [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.797 2 DEBUG nova.compute.manager [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:15 compute-0 nova_compute[259627]: 2025-10-14 09:04:15.797 2 WARNING nova.compute.manager [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state suspended and task_state resuming.
Oct 14 09:04:15 compute-0 podman[316925]: 2025-10-14 09:04:15.867731845 +0000 UTC m=+0.055857854 container create c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:04:15 compute-0 systemd[1]: Started libpod-conmon-c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97.scope.
Oct 14 09:04:15 compute-0 podman[316925]: 2025-10-14 09:04:15.833737059 +0000 UTC m=+0.021863108 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:04:15 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4085b80beecbc83e40c99bbc55ff06db8151404b833b2eae4640168b2b3becb0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:15 compute-0 podman[316925]: 2025-10-14 09:04:15.970963043 +0000 UTC m=+0.159089082 container init c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 09:04:15 compute-0 podman[316925]: 2025-10-14 09:04:15.977714359 +0000 UTC m=+0.165840398 container start c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 09:04:16 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316951]: [NOTICE]   (316981) : New worker (316987) forked
Oct 14 09:04:16 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316951]: [NOTICE]   (316981) : Loading success.
Oct 14 09:04:16 compute-0 nova_compute[259627]: 2025-10-14 09:04:16.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:16 compute-0 nova_compute[259627]: 2025-10-14 09:04:16.507 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for de383510-2de3-40bd-b479-c0010b3f2d1c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:04:16 compute-0 nova_compute[259627]: 2025-10-14 09:04:16.508 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432656.5071824, de383510-2de3-40bd-b479-c0010b3f2d1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:16 compute-0 nova_compute[259627]: 2025-10-14 09:04:16.508 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Started (Lifecycle Event)
Oct 14 09:04:16 compute-0 nova_compute[259627]: 2025-10-14 09:04:16.523 2 DEBUG nova.compute.manager [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:04:16 compute-0 nova_compute[259627]: 2025-10-14 09:04:16.524 2 DEBUG nova.objects.instance [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'pci_devices' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:16 compute-0 nova_compute[259627]: 2025-10-14 09:04:16.543 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:16 compute-0 nova_compute[259627]: 2025-10-14 09:04:16.547 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:04:16 compute-0 nova_compute[259627]: 2025-10-14 09:04:16.550 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance running successfully.
Oct 14 09:04:16 compute-0 virtqemud[259351]: argument unsupported: QEMU guest agent is not configured
Oct 14 09:04:16 compute-0 nova_compute[259627]: 2025-10-14 09:04:16.553 2 DEBUG nova.virt.libvirt.guest [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 14 09:04:16 compute-0 nova_compute[259627]: 2025-10-14 09:04:16.553 2 DEBUG nova.compute.manager [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:16 compute-0 nova_compute[259627]: 2025-10-14 09:04:16.573 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 14 09:04:16 compute-0 nova_compute[259627]: 2025-10-14 09:04:16.574 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432656.5109954, de383510-2de3-40bd-b479-c0010b3f2d1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:16 compute-0 nova_compute[259627]: 2025-10-14 09:04:16.574 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Resumed (Lifecycle Event)
Oct 14 09:04:16 compute-0 nova_compute[259627]: 2025-10-14 09:04:16.596 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:16 compute-0 nova_compute[259627]: 2025-10-14 09:04:16.600 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:04:16 compute-0 ceph-mon[74249]: pgmap v1460: 305 pgs: 305 active+clean; 216 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 2.7 MiB/s wr, 393 op/s
Oct 14 09:04:17 compute-0 nova_compute[259627]: 2025-10-14 09:04:17.042 2 DEBUG nova.compute.manager [req-9fa77482-3c2a-44bc-b5f4-bf7c1dba100d req-26a2649b-2d5a-4c0b-905f-ca4af90e3f5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received event network-changed-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:17 compute-0 nova_compute[259627]: 2025-10-14 09:04:17.042 2 DEBUG nova.compute.manager [req-9fa77482-3c2a-44bc-b5f4-bf7c1dba100d req-26a2649b-2d5a-4c0b-905f-ca4af90e3f5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Refreshing instance network info cache due to event network-changed-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:04:17 compute-0 nova_compute[259627]: 2025-10-14 09:04:17.043 2 DEBUG oslo_concurrency.lockutils [req-9fa77482-3c2a-44bc-b5f4-bf7c1dba100d req-26a2649b-2d5a-4c0b-905f-ca4af90e3f5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:17 compute-0 nova_compute[259627]: 2025-10-14 09:04:17.043 2 DEBUG oslo_concurrency.lockutils [req-9fa77482-3c2a-44bc-b5f4-bf7c1dba100d req-26a2649b-2d5a-4c0b-905f-ca4af90e3f5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:17 compute-0 nova_compute[259627]: 2025-10-14 09:04:17.043 2 DEBUG nova.network.neutron [req-9fa77482-3c2a-44bc-b5f4-bf7c1dba100d req-26a2649b-2d5a-4c0b-905f-ca4af90e3f5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Refreshing network info cache for port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:04:17 compute-0 nova_compute[259627]: 2025-10-14 09:04:17.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:04:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 216 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 2.2 MiB/s wr, 316 op/s
Oct 14 09:04:17 compute-0 nova_compute[259627]: 2025-10-14 09:04:17.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:04:18 compute-0 ovn_controller[152662]: 2025-10-14T09:04:18Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:7e:a5 10.100.0.8
Oct 14 09:04:18 compute-0 ovn_controller[152662]: 2025-10-14T09:04:18Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:7e:a5 10.100.0.8
Oct 14 09:04:18 compute-0 nova_compute[259627]: 2025-10-14 09:04:18.599 2 DEBUG nova.network.neutron [req-9fa77482-3c2a-44bc-b5f4-bf7c1dba100d req-26a2649b-2d5a-4c0b-905f-ca4af90e3f5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updated VIF entry in instance network info cache for port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:04:18 compute-0 nova_compute[259627]: 2025-10-14 09:04:18.600 2 DEBUG nova.network.neutron [req-9fa77482-3c2a-44bc-b5f4-bf7c1dba100d req-26a2649b-2d5a-4c0b-905f-ca4af90e3f5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updating instance_info_cache with network_info: [{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:18 compute-0 nova_compute[259627]: 2025-10-14 09:04:18.618 2 DEBUG oslo_concurrency.lockutils [req-9fa77482-3c2a-44bc-b5f4-bf7c1dba100d req-26a2649b-2d5a-4c0b-905f-ca4af90e3f5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:18 compute-0 podman[316997]: 2025-10-14 09:04:18.692888196 +0000 UTC m=+0.108723734 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:04:18 compute-0 podman[316998]: 2025-10-14 09:04:18.694407614 +0000 UTC m=+0.106288185 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:04:18 compute-0 ceph-mon[74249]: pgmap v1461: 305 pgs: 305 active+clean; 216 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 2.2 MiB/s wr, 316 op/s
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.091 2 DEBUG oslo_concurrency.lockutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.092 2 DEBUG oslo_concurrency.lockutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.092 2 DEBUG oslo_concurrency.lockutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.092 2 DEBUG oslo_concurrency.lockutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.092 2 DEBUG oslo_concurrency.lockutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.093 2 INFO nova.compute.manager [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Terminating instance
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.094 2 DEBUG nova.compute.manager [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:04:19 compute-0 kernel: tap8ec905f0-b7 (unregistering): left promiscuous mode
Oct 14 09:04:19 compute-0 NetworkManager[44885]: <info>  [1760432659.1410] device (tap8ec905f0-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:04:19 compute-0 ovn_controller[152662]: 2025-10-14T09:04:19Z|00489|binding|INFO|Releasing lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 from this chassis (sb_readonly=0)
Oct 14 09:04:19 compute-0 ovn_controller[152662]: 2025-10-14T09:04:19Z|00490|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 down in Southbound
Oct 14 09:04:19 compute-0 ovn_controller[152662]: 2025-10-14T09:04:19Z|00491|binding|INFO|Removing iface tap8ec905f0-b7 ovn-installed in OVS
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.170 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '14', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:04:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.171 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 unbound from our chassis
Oct 14 09:04:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.173 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c6eb8a4-6604-462a-8730-43f3742053a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:04:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.174 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[60ffc238-f780-4a34-8ef8-2a7944aebc1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.174 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace which is not needed anymore
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:19 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 14 09:04:19 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000002b.scope: Consumed 3.817s CPU time.
Oct 14 09:04:19 compute-0 systemd-machined[214636]: Machine qemu-66-instance-0000002b terminated.
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.334 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance destroyed successfully.
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.335 2 DEBUG nova.objects.instance [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'resources' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:19 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316951]: [NOTICE]   (316981) : haproxy version is 2.8.14-c23fe91
Oct 14 09:04:19 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316951]: [NOTICE]   (316981) : path to executable is /usr/sbin/haproxy
Oct 14 09:04:19 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316951]: [WARNING]  (316981) : Exiting Master process...
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.351 2 DEBUG nova.virt.libvirt.vif [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.352 2 DEBUG nova.network.os_vif_util [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.352 2 DEBUG nova.network.os_vif_util [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.353 2 DEBUG os_vif [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:04:19 compute-0 systemd[1]: libpod-c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97.scope: Deactivated successfully.
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:19 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316951]: [ALERT]    (316981) : Current worker (316987) exited with code 143 (Terminated)
Oct 14 09:04:19 compute-0 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316951]: [WARNING]  (316981) : All workers exited. Exiting... (0)
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.355 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ec905f0-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:19 compute-0 conmon[316951]: conmon c0ed9eadfcd606c190c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97.scope/container/memory.events
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:19 compute-0 podman[317061]: 2025-10-14 09:04:19.359761402 +0000 UTC m=+0.060841967 container died c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.365 2 INFO os_vif [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.414 2 DEBUG nova.compute.manager [req-d64ca07c-41f4-45c2-9ecd-3b5498fb1db8 req-b184a85e-05dd-4af3-ad55-3c3232611617 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.415 2 DEBUG oslo_concurrency.lockutils [req-d64ca07c-41f4-45c2-9ecd-3b5498fb1db8 req-b184a85e-05dd-4af3-ad55-3c3232611617 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.415 2 DEBUG oslo_concurrency.lockutils [req-d64ca07c-41f4-45c2-9ecd-3b5498fb1db8 req-b184a85e-05dd-4af3-ad55-3c3232611617 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97-userdata-shm.mount: Deactivated successfully.
Oct 14 09:04:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4085b80beecbc83e40c99bbc55ff06db8151404b833b2eae4640168b2b3becb0-merged.mount: Deactivated successfully.
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.416 2 DEBUG oslo_concurrency.lockutils [req-d64ca07c-41f4-45c2-9ecd-3b5498fb1db8 req-b184a85e-05dd-4af3-ad55-3c3232611617 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.421 2 DEBUG nova.compute.manager [req-d64ca07c-41f4-45c2-9ecd-3b5498fb1db8 req-b184a85e-05dd-4af3-ad55-3c3232611617 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.421 2 DEBUG nova.compute.manager [req-d64ca07c-41f4-45c2-9ecd-3b5498fb1db8 req-b184a85e-05dd-4af3-ad55-3c3232611617 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:04:19 compute-0 podman[317061]: 2025-10-14 09:04:19.434252334 +0000 UTC m=+0.135332879 container cleanup c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:04:19 compute-0 systemd[1]: libpod-conmon-c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97.scope: Deactivated successfully.
Oct 14 09:04:19 compute-0 podman[317116]: 2025-10-14 09:04:19.495499289 +0000 UTC m=+0.038037696 container remove c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:04:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.501 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[01beda27-41b0-4a84-971d-f7e82b4cf92e]: (4, ('Tue Oct 14 09:04:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97)\nc0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97\nTue Oct 14 09:04:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97)\nc0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.502 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8e3219-b228-49bb-b78e-81b2a6ea4e7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.503 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:19 compute-0 kernel: tap7c6eb8a4-60: left promiscuous mode
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.521 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[01897d1b-179c-447b-9025-861e8476acef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.554 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[96be702e-e042-4339-be3c-9855756e83ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.556 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0b68542e-c823-4438-8c8f-dd529cd6e437]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.577 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[216c2a44-9b5e-472b-87ce-3b67dae34592]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643396, 'reachable_time': 28838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317133, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d7c6eb8a4\x2d6604\x2d462a\x2d8730\x2d43f3742053a7.mount: Deactivated successfully.
Oct 14 09:04:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.580 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:04:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.581 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[191aaa96-eee4-424c-9d28-a6b7e74f815a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 216 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 2.2 MiB/s wr, 316 op/s
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.781 2 INFO nova.virt.libvirt.driver [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Deleting instance files /var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c_del
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.782 2 INFO nova.virt.libvirt.driver [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Deletion of /var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c_del complete
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.885 2 INFO nova.compute.manager [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.886 2 DEBUG oslo.service.loopingcall [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.887 2 DEBUG nova.compute.manager [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:04:19 compute-0 nova_compute[259627]: 2025-10-14 09:04:19.887 2 DEBUG nova.network.neutron [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:04:20 compute-0 nova_compute[259627]: 2025-10-14 09:04:20.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:04:20 compute-0 nova_compute[259627]: 2025-10-14 09:04:20.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:04:20 compute-0 ceph-mon[74249]: pgmap v1462: 305 pgs: 305 active+clean; 216 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 2.2 MiB/s wr, 316 op/s
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.063 2 DEBUG nova.network.neutron [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.084 2 INFO nova.compute.manager [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Took 1.20 seconds to deallocate network for instance.
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.120 2 DEBUG nova.compute.manager [req-be2304d6-7415-4da6-84e4-1cda39ba1827 req-506e0de1-7515-47e6-8bcb-8aabc6ec5c0a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-deleted-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.128 2 DEBUG oslo_concurrency.lockutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.129 2 DEBUG oslo_concurrency.lockutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.171 2 DEBUG nova.scheduler.client.report [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.188 2 DEBUG nova.scheduler.client.report [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.189 2 DEBUG nova.compute.provider_tree [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.208 2 DEBUG nova.scheduler.client.report [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.245 2 DEBUG nova.scheduler.client.report [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.529 2 DEBUG oslo_concurrency.processutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.605 2 DEBUG nova.compute.manager [req-013e9db2-113e-4ed7-9798-694ceed68b90 req-776f15fb-ea1b-4ec6-aaf1-586049b9ae26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.606 2 DEBUG oslo_concurrency.lockutils [req-013e9db2-113e-4ed7-9798-694ceed68b90 req-776f15fb-ea1b-4ec6-aaf1-586049b9ae26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.607 2 DEBUG oslo_concurrency.lockutils [req-013e9db2-113e-4ed7-9798-694ceed68b90 req-776f15fb-ea1b-4ec6-aaf1-586049b9ae26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.608 2 DEBUG oslo_concurrency.lockutils [req-013e9db2-113e-4ed7-9798-694ceed68b90 req-776f15fb-ea1b-4ec6-aaf1-586049b9ae26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.608 2 DEBUG nova.compute.manager [req-013e9db2-113e-4ed7-9798-694ceed68b90 req-776f15fb-ea1b-4ec6-aaf1-586049b9ae26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.609 2 WARNING nova.compute.manager [req-013e9db2-113e-4ed7-9798-694ceed68b90 req-776f15fb-ea1b-4ec6-aaf1-586049b9ae26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state deleted and task_state None.
Oct 14 09:04:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 167 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 205 op/s
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:04:21 compute-0 nova_compute[259627]: 2025-10-14 09:04:21.980 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:04:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:04:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2453293817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:04:22 compute-0 nova_compute[259627]: 2025-10-14 09:04:22.023 2 DEBUG oslo_concurrency.processutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:22 compute-0 nova_compute[259627]: 2025-10-14 09:04:22.029 2 DEBUG nova.compute.provider_tree [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:04:22 compute-0 nova_compute[259627]: 2025-10-14 09:04:22.050 2 DEBUG nova.scheduler.client.report [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:04:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2453293817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:04:22 compute-0 nova_compute[259627]: 2025-10-14 09:04:22.082 2 DEBUG oslo_concurrency.lockutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:22 compute-0 nova_compute[259627]: 2025-10-14 09:04:22.108 2 INFO nova.scheduler.client.report [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Deleted allocations for instance de383510-2de3-40bd-b479-c0010b3f2d1c
Oct 14 09:04:22 compute-0 nova_compute[259627]: 2025-10-14 09:04:22.178 2 DEBUG oslo_concurrency.lockutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:22 compute-0 nova_compute[259627]: 2025-10-14 09:04:22.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:04:22 compute-0 nova_compute[259627]: 2025-10-14 09:04:22.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.016 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.018 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.018 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:23 compute-0 ceph-mon[74249]: pgmap v1463: 305 pgs: 305 active+clean; 167 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 205 op/s
Oct 14 09:04:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:04:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1895446000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.446 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.508 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.508 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.512 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.512 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.662 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.663 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3753MB free_disk=59.921939849853516GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.663 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.664 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 167 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.3 MiB/s wr, 186 op/s
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.742 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 3c8eac8e-dcfa-41ba-9c01-1185061baf5d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.743 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance e038df86-1323-4b04-afae-9fe68c98c22c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.743 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.743 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:04:23 compute-0 nova_compute[259627]: 2025-10-14 09:04:23.812 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:24 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1895446000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:04:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:04:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3495202466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:04:24 compute-0 nova_compute[259627]: 2025-10-14 09:04:24.253 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:24 compute-0 nova_compute[259627]: 2025-10-14 09:04:24.258 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:04:24 compute-0 nova_compute[259627]: 2025-10-14 09:04:24.272 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:04:24 compute-0 nova_compute[259627]: 2025-10-14 09:04:24.308 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:04:24 compute-0 nova_compute[259627]: 2025-10-14 09:04:24.308 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:24 compute-0 nova_compute[259627]: 2025-10-14 09:04:24.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:25 compute-0 ovn_controller[152662]: 2025-10-14T09:04:25Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:80:d4 10.100.0.7
Oct 14 09:04:25 compute-0 ovn_controller[152662]: 2025-10-14T09:04:25Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:80:d4 10.100.0.7
Oct 14 09:04:25 compute-0 ceph-mon[74249]: pgmap v1464: 305 pgs: 305 active+clean; 167 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.3 MiB/s wr, 186 op/s
Oct 14 09:04:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3495202466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:04:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 189 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.2 MiB/s wr, 215 op/s
Oct 14 09:04:27 compute-0 ceph-mon[74249]: pgmap v1465: 305 pgs: 305 active+clean; 189 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.2 MiB/s wr, 215 op/s
Oct 14 09:04:27 compute-0 nova_compute[259627]: 2025-10-14 09:04:27.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:27 compute-0 nova_compute[259627]: 2025-10-14 09:04:27.310 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:04:27 compute-0 nova_compute[259627]: 2025-10-14 09:04:27.310 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:04:27 compute-0 nova_compute[259627]: 2025-10-14 09:04:27.341 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:04:27 compute-0 ovn_controller[152662]: 2025-10-14T09:04:27Z|00492|binding|INFO|Releasing lport 156432ee-35b5-40a9-aded-8066933d9972 from this chassis (sb_readonly=0)
Oct 14 09:04:27 compute-0 ovn_controller[152662]: 2025-10-14T09:04:27Z|00493|binding|INFO|Releasing lport bcf643fa-2c1a-444e-ad03-f473ae6c9565 from this chassis (sb_readonly=0)
Oct 14 09:04:27 compute-0 nova_compute[259627]: 2025-10-14 09:04:27.605 2 DEBUG oslo_concurrency.lockutils [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-3c8eac8e-dcfa-41ba-9c01-1185061baf5d-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:27 compute-0 nova_compute[259627]: 2025-10-14 09:04:27.606 2 DEBUG oslo_concurrency.lockutils [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-3c8eac8e-dcfa-41ba-9c01-1185061baf5d-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:27 compute-0 nova_compute[259627]: 2025-10-14 09:04:27.607 2 DEBUG nova.objects.instance [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid 3c8eac8e-dcfa-41ba-9c01-1185061baf5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:27 compute-0 nova_compute[259627]: 2025-10-14 09:04:27.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:04:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 189 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 4.2 MiB/s wr, 141 op/s
Oct 14 09:04:27 compute-0 nova_compute[259627]: 2025-10-14 09:04:27.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:04:28 compute-0 nova_compute[259627]: 2025-10-14 09:04:28.073 2 DEBUG nova.objects.instance [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_requests' on Instance uuid 3c8eac8e-dcfa-41ba-9c01-1185061baf5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:28 compute-0 nova_compute[259627]: 2025-10-14 09:04:28.088 2 DEBUG nova.network.neutron [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:04:28 compute-0 nova_compute[259627]: 2025-10-14 09:04:28.293 2 DEBUG nova.policy [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:04:28 compute-0 nova_compute[259627]: 2025-10-14 09:04:28.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:04:29 compute-0 ceph-mon[74249]: pgmap v1466: 305 pgs: 305 active+clean; 189 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 4.2 MiB/s wr, 141 op/s
Oct 14 09:04:29 compute-0 nova_compute[259627]: 2025-10-14 09:04:29.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 189 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 4.2 MiB/s wr, 141 op/s
Oct 14 09:04:29 compute-0 nova_compute[259627]: 2025-10-14 09:04:29.768 2 DEBUG nova.network.neutron [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Successfully created port: 194dc9cd-03af-4e2c-b8d6-107081204a25 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:04:30 compute-0 nova_compute[259627]: 2025-10-14 09:04:30.875 2 DEBUG nova.network.neutron [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Successfully updated port: 194dc9cd-03af-4e2c-b8d6-107081204a25 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:04:30 compute-0 nova_compute[259627]: 2025-10-14 09:04:30.896 2 DEBUG oslo_concurrency.lockutils [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:30 compute-0 nova_compute[259627]: 2025-10-14 09:04:30.897 2 DEBUG oslo_concurrency.lockutils [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:30 compute-0 nova_compute[259627]: 2025-10-14 09:04:30.898 2 DEBUG nova.network.neutron [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:04:31 compute-0 nova_compute[259627]: 2025-10-14 09:04:31.033 2 DEBUG nova.compute.manager [req-c1a00de1-ea2b-4561-b3fa-f53164ebe3f3 req-5cdc9e9f-267b-4d80-acd6-f427cad6007e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-changed-194dc9cd-03af-4e2c-b8d6-107081204a25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:31 compute-0 nova_compute[259627]: 2025-10-14 09:04:31.034 2 DEBUG nova.compute.manager [req-c1a00de1-ea2b-4561-b3fa-f53164ebe3f3 req-5cdc9e9f-267b-4d80-acd6-f427cad6007e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Refreshing instance network info cache due to event network-changed-194dc9cd-03af-4e2c-b8d6-107081204a25. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:04:31 compute-0 nova_compute[259627]: 2025-10-14 09:04:31.034 2 DEBUG oslo_concurrency.lockutils [req-c1a00de1-ea2b-4561-b3fa-f53164ebe3f3 req-5cdc9e9f-267b-4d80-acd6-f427cad6007e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:31 compute-0 nova_compute[259627]: 2025-10-14 09:04:31.116 2 WARNING nova.network.neutron [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] fc2d149f-aebf-406a-aed2-5161dd22b079 already exists in list: networks containing: ['fc2d149f-aebf-406a-aed2-5161dd22b079']. ignoring it
Oct 14 09:04:31 compute-0 ceph-mon[74249]: pgmap v1467: 305 pgs: 305 active+clean; 189 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 4.2 MiB/s wr, 141 op/s
Oct 14 09:04:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 200 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 699 KiB/s rd, 4.3 MiB/s wr, 159 op/s
Oct 14 09:04:32 compute-0 nova_compute[259627]: 2025-10-14 09:04:32.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:04:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:04:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:04:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:04:32
Oct 14 09:04:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:04:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:04:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.control']
Oct 14 09:04:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:04:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:04:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:04:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:04:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:04:32 compute-0 nova_compute[259627]: 2025-10-14 09:04:32.970 2 DEBUG nova.network.neutron [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updating instance_info_cache with network_info: [{"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:32 compute-0 nova_compute[259627]: 2025-10-14 09:04:32.995 2 DEBUG oslo_concurrency.lockutils [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:32 compute-0 nova_compute[259627]: 2025-10-14 09:04:32.997 2 DEBUG oslo_concurrency.lockutils [req-c1a00de1-ea2b-4561-b3fa-f53164ebe3f3 req-5cdc9e9f-267b-4d80-acd6-f427cad6007e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:32 compute-0 nova_compute[259627]: 2025-10-14 09:04:32.997 2 DEBUG nova.network.neutron [req-c1a00de1-ea2b-4561-b3fa-f53164ebe3f3 req-5cdc9e9f-267b-4d80-acd6-f427cad6007e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Refreshing network info cache for port 194dc9cd-03af-4e2c-b8d6-107081204a25 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.001 2 DEBUG nova.virt.libvirt.vif [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:03:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-880311128',display_name='tempest-AttachInterfacesTestJSON-server-880311128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-880311128',id=51,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJeC/x7PID63fa+XdM1puH3fFuAGCvZuvVgTscxZNhVHcJaK6qOoyzTmbsI3sgqD2j7CCpP6E241ZXLsn//8ic8awTSD83uMaP7AvsnDddi9mfK+5p/w2g8zmdvw2fjPQ==',key_name='tempest-keypair-1232666789',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-vco8xn00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=3c8eac8e-dcfa-41ba-9c01-1185061baf5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.001 2 DEBUG nova.network.os_vif_util [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.003 2 DEBUG nova.network.os_vif_util [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.003 2 DEBUG os_vif [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.004 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.005 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.011 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap194dc9cd-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.012 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap194dc9cd-03, col_values=(('external_ids', {'iface-id': '194dc9cd-03af-4e2c-b8d6-107081204a25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:f7:fb', 'vm-uuid': '3c8eac8e-dcfa-41ba-9c01-1185061baf5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:04:33 compute-0 NetworkManager[44885]: <info>  [1760432673.0141] manager: (tap194dc9cd-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Oct 14 09:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.023 2 INFO os_vif [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03')
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.024 2 DEBUG nova.virt.libvirt.vif [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:03:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-880311128',display_name='tempest-AttachInterfacesTestJSON-server-880311128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-880311128',id=51,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJeC/x7PID63fa+XdM1puH3fFuAGCvZuvVgTscxZNhVHcJaK6qOoyzTmbsI3sgqD2j7CCpP6E241ZXLsn//8ic8awTSD83uMaP7AvsnDddi9mfK+5p/w2g8zmdvw2fjPQ==',key_name='tempest-keypair-1232666789',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-vco8xn00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=3c8eac8e-dcfa-41ba-9c01-1185061baf5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.024 2 DEBUG nova.network.os_vif_util [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.025 2 DEBUG nova.network.os_vif_util [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.027 2 DEBUG nova.virt.libvirt.guest [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] attach device xml: <interface type="ethernet">
Oct 14 09:04:33 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:5f:f7:fb"/>
Oct 14 09:04:33 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:04:33 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:04:33 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:04:33 compute-0 nova_compute[259627]:   <target dev="tap194dc9cd-03"/>
Oct 14 09:04:33 compute-0 nova_compute[259627]: </interface>
Oct 14 09:04:33 compute-0 nova_compute[259627]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 14 09:04:33 compute-0 NetworkManager[44885]: <info>  [1760432673.0405] manager: (tap194dc9cd-03): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Oct 14 09:04:33 compute-0 kernel: tap194dc9cd-03: entered promiscuous mode
Oct 14 09:04:33 compute-0 ovn_controller[152662]: 2025-10-14T09:04:33Z|00494|binding|INFO|Claiming lport 194dc9cd-03af-4e2c-b8d6-107081204a25 for this chassis.
Oct 14 09:04:33 compute-0 ovn_controller[152662]: 2025-10-14T09:04:33Z|00495|binding|INFO|194dc9cd-03af-4e2c-b8d6-107081204a25: Claiming fa:16:3e:5f:f7:fb 10.100.0.14
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.049 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:f7:fb 10.100.0.14'], port_security=['fa:16:3e:5f:f7:fb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3c8eac8e-dcfa-41ba-9c01-1185061baf5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=194dc9cd-03af-4e2c-b8d6-107081204a25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:04:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.053 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 194dc9cd-03af-4e2c-b8d6-107081204a25 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 bound to our chassis
Oct 14 09:04:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.054 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:04:33 compute-0 ovn_controller[152662]: 2025-10-14T09:04:33Z|00496|binding|INFO|Setting lport 194dc9cd-03af-4e2c-b8d6-107081204a25 ovn-installed in OVS
Oct 14 09:04:33 compute-0 ovn_controller[152662]: 2025-10-14T09:04:33Z|00497|binding|INFO|Setting lport 194dc9cd-03af-4e2c-b8d6-107081204a25 up in Southbound
Oct 14 09:04:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.076 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[db394068-a4fb-44a8-9be8-35f4aaeb8368]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:33 compute-0 systemd-udevd[317211]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:04:33 compute-0 NetworkManager[44885]: <info>  [1760432673.1106] device (tap194dc9cd-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:04:33 compute-0 NetworkManager[44885]: <info>  [1760432673.1129] device (tap194dc9cd-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:04:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.121 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e921f0-7a1e-4ea4-ba46-ed87857c0820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.122 2 DEBUG nova.virt.libvirt.driver [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.122 2 DEBUG nova.virt.libvirt.driver [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.123 2 DEBUG nova.virt.libvirt.driver [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:48:7e:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.123 2 DEBUG nova.virt.libvirt.driver [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:5f:f7:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:04:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.125 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ea50318c-ca23-4328-b9cf-21216de8d085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.143 2 DEBUG nova.virt.libvirt.guest [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:04:33 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:04:33 compute-0 nova_compute[259627]:   <nova:name>tempest-AttachInterfacesTestJSON-server-880311128</nova:name>
Oct 14 09:04:33 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:04:33</nova:creationTime>
Oct 14 09:04:33 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:04:33 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:04:33 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:04:33 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:04:33 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:04:33 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:04:33 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:04:33 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:04:33 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:04:33 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:04:33 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:04:33 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:04:33 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:04:33 compute-0 nova_compute[259627]:     <nova:port uuid="09d03bcf-f719-4ec4-91a0-3c14e350a342">
Oct 14 09:04:33 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:04:33 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:04:33 compute-0 nova_compute[259627]:     <nova:port uuid="194dc9cd-03af-4e2c-b8d6-107081204a25">
Oct 14 09:04:33 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 09:04:33 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:04:33 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:04:33 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:04:33 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 09:04:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.154 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[02720ab9-0740-491d-a072-5cfa496d3a86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.164 2 DEBUG oslo_concurrency.lockutils [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-3c8eac8e-dcfa-41ba-9c01-1185061baf5d-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.168 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c8de1f2e-47ce-4870-b329-d27df913e96e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642546, 'reachable_time': 32167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317217, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:33 compute-0 ceph-mon[74249]: pgmap v1468: 305 pgs: 305 active+clean; 200 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 699 KiB/s rd, 4.3 MiB/s wr, 159 op/s
Oct 14 09:04:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.186 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[56c4368f-b955-44dc-899e-ddeabc36ce9a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642559, 'tstamp': 642559}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317218, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642562, 'tstamp': 642562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317218, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.188 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.191 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.191 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.192 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.192 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.450 2 DEBUG nova.compute.manager [req-1245325e-20cd-49ad-b8c3-e65361b3edbe req-32658566-b9c2-45ec-a48b-16b111df6119 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-plugged-194dc9cd-03af-4e2c-b8d6-107081204a25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.450 2 DEBUG oslo_concurrency.lockutils [req-1245325e-20cd-49ad-b8c3-e65361b3edbe req-32658566-b9c2-45ec-a48b-16b111df6119 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.451 2 DEBUG oslo_concurrency.lockutils [req-1245325e-20cd-49ad-b8c3-e65361b3edbe req-32658566-b9c2-45ec-a48b-16b111df6119 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.451 2 DEBUG oslo_concurrency.lockutils [req-1245325e-20cd-49ad-b8c3-e65361b3edbe req-32658566-b9c2-45ec-a48b-16b111df6119 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.452 2 DEBUG nova.compute.manager [req-1245325e-20cd-49ad-b8c3-e65361b3edbe req-32658566-b9c2-45ec-a48b-16b111df6119 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-plugged-194dc9cd-03af-4e2c-b8d6-107081204a25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:33 compute-0 nova_compute[259627]: 2025-10-14 09:04:33.452 2 WARNING nova.compute.manager [req-1245325e-20cd-49ad-b8c3-e65361b3edbe req-32658566-b9c2-45ec-a48b-16b111df6119 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received unexpected event network-vif-plugged-194dc9cd-03af-4e2c-b8d6-107081204a25 for instance with vm_state active and task_state None.
Oct 14 09:04:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 200 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 14 09:04:34 compute-0 nova_compute[259627]: 2025-10-14 09:04:34.246 2 DEBUG nova.objects.instance [None req-c8980aa4-5084-4acd-8050-212b7bed928b 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lazy-loading 'flavor' on Instance uuid e038df86-1323-4b04-afae-9fe68c98c22c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:34 compute-0 nova_compute[259627]: 2025-10-14 09:04:34.285 2 DEBUG oslo_concurrency.lockutils [None req-c8980aa4-5084-4acd-8050-212b7bed928b 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:34 compute-0 nova_compute[259627]: 2025-10-14 09:04:34.286 2 DEBUG oslo_concurrency.lockutils [None req-c8980aa4-5084-4acd-8050-212b7bed928b 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquired lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:34 compute-0 nova_compute[259627]: 2025-10-14 09:04:34.331 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432659.3304415, de383510-2de3-40bd-b479-c0010b3f2d1c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:34 compute-0 nova_compute[259627]: 2025-10-14 09:04:34.332 2 INFO nova.compute.manager [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Stopped (Lifecycle Event)
Oct 14 09:04:34 compute-0 nova_compute[259627]: 2025-10-14 09:04:34.353 2 DEBUG nova.compute.manager [None req-f30420e9-2a73-4f1c-981e-e81862b060d3 - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.140 2 DEBUG oslo_concurrency.lockutils [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-3c8eac8e-dcfa-41ba-9c01-1185061baf5d-194dc9cd-03af-4e2c-b8d6-107081204a25" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.140 2 DEBUG oslo_concurrency.lockutils [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-3c8eac8e-dcfa-41ba-9c01-1185061baf5d-194dc9cd-03af-4e2c-b8d6-107081204a25" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.165 2 DEBUG nova.objects.instance [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid 3c8eac8e-dcfa-41ba-9c01-1185061baf5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:35 compute-0 ceph-mon[74249]: pgmap v1469: 305 pgs: 305 active+clean; 200 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.193 2 DEBUG nova.virt.libvirt.vif [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:03:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-880311128',display_name='tempest-AttachInterfacesTestJSON-server-880311128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-880311128',id=51,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJeC/x7PID63fa+XdM1puH3fFuAGCvZuvVgTscxZNhVHcJaK6qOoyzTmbsI3sgqD2j7CCpP6E241ZXLsn//8ic8awTSD83uMaP7AvsnDddi9mfK+5p/w2g8zmdvw2fjPQ==',key_name='tempest-keypair-1232666789',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-vco8xn00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=3c8eac8e-dcfa-41ba-9c01-1185061baf5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.194 2 DEBUG nova.network.os_vif_util [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.195 2 DEBUG nova.network.os_vif_util [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.199 2 DEBUG nova.virt.libvirt.guest [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.203 2 DEBUG nova.virt.libvirt.guest [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.207 2 DEBUG nova.virt.libvirt.driver [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Attempting to detach device tap194dc9cd-03 from instance 3c8eac8e-dcfa-41ba-9c01-1185061baf5d from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.207 2 DEBUG nova.virt.libvirt.guest [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] detach device xml: <interface type="ethernet">
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:5f:f7:fb"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <target dev="tap194dc9cd-03"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]: </interface>
Oct 14 09:04:35 compute-0 nova_compute[259627]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.214 2 DEBUG nova.virt.libvirt.guest [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.218 2 DEBUG nova.virt.libvirt.guest [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface>not found in domain: <domain type='kvm' id='64'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <name>instance-00000033</name>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <uuid>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</uuid>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:name>tempest-AttachInterfacesTestJSON-server-880311128</nova:name>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:04:33</nova:creationTime>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:port uuid="09d03bcf-f719-4ec4-91a0-3c14e350a342">
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:port uuid="194dc9cd-03af-4e2c-b8d6-107081204a25">
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:04:35 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <system>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <entry name='serial'>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <entry name='uuid'>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </system>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <os>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </os>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <features>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </features>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk' index='2'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       </source>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config' index='1'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       </source>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:48:7e:a5'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target dev='tap09d03bcf-f7'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:5f:f7:fb'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target dev='tap194dc9cd-03'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='net1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <source path='/dev/pts/1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log' append='off'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       </target>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/1'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <source path='/dev/pts/1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log' append='off'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </console>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </input>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </input>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </input>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <video>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </video>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c247,c365</label>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c247,c365</imagelabel>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:04:35 compute-0 nova_compute[259627]: </domain>
Oct 14 09:04:35 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.219 2 INFO nova.virt.libvirt.driver [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully detached device tap194dc9cd-03 from instance 3c8eac8e-dcfa-41ba-9c01-1185061baf5d from the persistent domain config.
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.220 2 DEBUG nova.virt.libvirt.driver [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] (1/8): Attempting to detach device tap194dc9cd-03 with device alias net1 from instance 3c8eac8e-dcfa-41ba-9c01-1185061baf5d from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.220 2 DEBUG nova.virt.libvirt.guest [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] detach device xml: <interface type="ethernet">
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:5f:f7:fb"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <target dev="tap194dc9cd-03"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]: </interface>
Oct 14 09:04:35 compute-0 nova_compute[259627]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.293 2 DEBUG nova.network.neutron [req-c1a00de1-ea2b-4561-b3fa-f53164ebe3f3 req-5cdc9e9f-267b-4d80-acd6-f427cad6007e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updated VIF entry in instance network info cache for port 194dc9cd-03af-4e2c-b8d6-107081204a25. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.294 2 DEBUG nova.network.neutron [req-c1a00de1-ea2b-4561-b3fa-f53164ebe3f3 req-5cdc9e9f-267b-4d80-acd6-f427cad6007e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updating instance_info_cache with network_info: [{"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.317 2 DEBUG oslo_concurrency.lockutils [req-c1a00de1-ea2b-4561-b3fa-f53164ebe3f3 req-5cdc9e9f-267b-4d80-acd6-f427cad6007e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:35 compute-0 kernel: tap194dc9cd-03 (unregistering): left promiscuous mode
Oct 14 09:04:35 compute-0 NetworkManager[44885]: <info>  [1760432675.3376] device (tap194dc9cd-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:04:35 compute-0 ovn_controller[152662]: 2025-10-14T09:04:35Z|00498|binding|INFO|Releasing lport 194dc9cd-03af-4e2c-b8d6-107081204a25 from this chassis (sb_readonly=0)
Oct 14 09:04:35 compute-0 ovn_controller[152662]: 2025-10-14T09:04:35Z|00499|binding|INFO|Setting lport 194dc9cd-03af-4e2c-b8d6-107081204a25 down in Southbound
Oct 14 09:04:35 compute-0 ovn_controller[152662]: 2025-10-14T09:04:35Z|00500|binding|INFO|Removing iface tap194dc9cd-03 ovn-installed in OVS
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.357 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:f7:fb 10.100.0.14'], port_security=['fa:16:3e:5f:f7:fb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3c8eac8e-dcfa-41ba-9c01-1185061baf5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=194dc9cd-03af-4e2c-b8d6-107081204a25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.358 2 DEBUG nova.virt.libvirt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Received event <DeviceRemovedEvent: 1760432675.3579726, 3c8eac8e-dcfa-41ba-9c01-1185061baf5d => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 14 09:04:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.358 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 194dc9cd-03af-4e2c-b8d6-107081204a25 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.360 2 DEBUG nova.virt.libvirt.driver [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Start waiting for the detach event from libvirt for device tap194dc9cd-03 with device alias net1 for instance 3c8eac8e-dcfa-41ba-9c01-1185061baf5d _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.360 2 DEBUG nova.virt.libvirt.guest [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:04:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.360 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.364 2 DEBUG nova.virt.libvirt.guest [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface>not found in domain: <domain type='kvm' id='64'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <name>instance-00000033</name>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <uuid>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</uuid>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:name>tempest-AttachInterfacesTestJSON-server-880311128</nova:name>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:04:33</nova:creationTime>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:port uuid="09d03bcf-f719-4ec4-91a0-3c14e350a342">
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:port uuid="194dc9cd-03af-4e2c-b8d6-107081204a25">
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:04:35 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <system>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <entry name='serial'>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <entry name='uuid'>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </system>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <os>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </os>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <features>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </features>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk' index='2'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       </source>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config' index='1'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       </source>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:48:7e:a5'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target dev='tap09d03bcf-f7'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <source path='/dev/pts/1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log' append='off'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       </target>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/1'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <source path='/dev/pts/1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log' append='off'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </console>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </input>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </input>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </input>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <video>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </video>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c247,c365</label>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c247,c365</imagelabel>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:04:35 compute-0 nova_compute[259627]: </domain>
Oct 14 09:04:35 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.364 2 INFO nova.virt.libvirt.driver [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully detached device tap194dc9cd-03 from instance 3c8eac8e-dcfa-41ba-9c01-1185061baf5d from the live domain config.
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.365 2 DEBUG nova.virt.libvirt.vif [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:03:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-880311128',display_name='tempest-AttachInterfacesTestJSON-server-880311128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-880311128',id=51,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJeC/x7PID63fa+XdM1puH3fFuAGCvZuvVgTscxZNhVHcJaK6qOoyzTmbsI3sgqD2j7CCpP6E241ZXLsn//8ic8awTSD83uMaP7AvsnDddi9mfK+5p/w2g8zmdvw2fjPQ==',key_name='tempest-keypair-1232666789',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-vco8xn00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=3c8eac8e-dcfa-41ba-9c01-1185061baf5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.365 2 DEBUG nova.network.os_vif_util [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.368 2 DEBUG nova.network.os_vif_util [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.369 2 DEBUG os_vif [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.371 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap194dc9cd-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.379 2 INFO os_vif [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03')
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.380 2 DEBUG nova.virt.libvirt.guest [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:name>tempest-AttachInterfacesTestJSON-server-880311128</nova:name>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:04:35</nova:creationTime>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     <nova:port uuid="09d03bcf-f719-4ec4-91a0-3c14e350a342">
Oct 14 09:04:35 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:04:35 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:04:35 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:04:35 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:04:35 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 09:04:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.394 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6dbba2fe-e9cc-4225-a02c-d6417fa54866]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.439 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[865acbdb-f6d4-42ed-a762-0b2fed629e40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.443 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[02ee724d-54dd-4ec7-8e8a-c3ee46f4c13a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.475 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2bda4eb9-a08d-4566-a00a-5866b938ffd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.494 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4429ce0c-0800-4709-94ed-44f8855286e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642546, 'reachable_time': 32167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317228, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.511 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e4fac5ee-465b-475f-9273-e3065c0c3028]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642559, 'tstamp': 642559}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317229, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642562, 'tstamp': 642562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317229, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.513 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.516 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.516 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.516 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.516 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.682 2 DEBUG nova.compute.manager [req-3142e40d-5615-43c3-8234-34d5d8b4a9b4 req-f161c668-f9b7-4254-a0d7-59651d4d96b4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-plugged-194dc9cd-03af-4e2c-b8d6-107081204a25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.682 2 DEBUG oslo_concurrency.lockutils [req-3142e40d-5615-43c3-8234-34d5d8b4a9b4 req-f161c668-f9b7-4254-a0d7-59651d4d96b4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.683 2 DEBUG oslo_concurrency.lockutils [req-3142e40d-5615-43c3-8234-34d5d8b4a9b4 req-f161c668-f9b7-4254-a0d7-59651d4d96b4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.683 2 DEBUG oslo_concurrency.lockutils [req-3142e40d-5615-43c3-8234-34d5d8b4a9b4 req-f161c668-f9b7-4254-a0d7-59651d4d96b4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.683 2 DEBUG nova.compute.manager [req-3142e40d-5615-43c3-8234-34d5d8b4a9b4 req-f161c668-f9b7-4254-a0d7-59651d4d96b4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-plugged-194dc9cd-03af-4e2c-b8d6-107081204a25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:35 compute-0 nova_compute[259627]: 2025-10-14 09:04:35.683 2 WARNING nova.compute.manager [req-3142e40d-5615-43c3-8234-34d5d8b4a9b4 req-f161c668-f9b7-4254-a0d7-59651d4d96b4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received unexpected event network-vif-plugged-194dc9cd-03af-4e2c-b8d6-107081204a25 for instance with vm_state active and task_state None.
Oct 14 09:04:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 200 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 312 KiB/s rd, 2.2 MiB/s wr, 62 op/s
Oct 14 09:04:36 compute-0 nova_compute[259627]: 2025-10-14 09:04:36.289 2 DEBUG nova.network.neutron [None req-c8980aa4-5084-4acd-8050-212b7bed928b 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:04:36 compute-0 nova_compute[259627]: 2025-10-14 09:04:36.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:36 compute-0 podman[317230]: 2025-10-14 09:04:36.664834225 +0000 UTC m=+0.064356833 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:04:36 compute-0 podman[317231]: 2025-10-14 09:04:36.6670683 +0000 UTC m=+0.062664462 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 14 09:04:36 compute-0 nova_compute[259627]: 2025-10-14 09:04:36.877 2 DEBUG oslo_concurrency.lockutils [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:36 compute-0 nova_compute[259627]: 2025-10-14 09:04:36.878 2 DEBUG oslo_concurrency.lockutils [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:36 compute-0 nova_compute[259627]: 2025-10-14 09:04:36.878 2 DEBUG nova.network.neutron [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.057 2 DEBUG nova.compute.manager [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-deleted-194dc9cd-03af-4e2c-b8d6-107081204a25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.058 2 INFO nova.compute.manager [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Neutron deleted interface 194dc9cd-03af-4e2c-b8d6-107081204a25; detaching it from the instance and deleting it from the info cache
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.058 2 DEBUG nova.network.neutron [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updating instance_info_cache with network_info: [{"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.087 2 DEBUG nova.objects.instance [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lazy-loading 'system_metadata' on Instance uuid 3c8eac8e-dcfa-41ba-9c01-1185061baf5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.137 2 DEBUG nova.objects.instance [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lazy-loading 'flavor' on Instance uuid 3c8eac8e-dcfa-41ba-9c01-1185061baf5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.179 2 DEBUG nova.virt.libvirt.vif [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:03:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-880311128',display_name='tempest-AttachInterfacesTestJSON-server-880311128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-880311128',id=51,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJeC/x7PID63fa+XdM1puH3fFuAGCvZuvVgTscxZNhVHcJaK6qOoyzTmbsI3sgqD2j7CCpP6E241ZXLsn//8ic8awTSD83uMaP7AvsnDddi9mfK+5p/w2g8zmdvw2fjPQ==',key_name='tempest-keypair-1232666789',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-vco8xn00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=3c8eac8e-dcfa-41ba-9c01-1185061baf5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.180 2 DEBUG nova.network.os_vif_util [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converting VIF {"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.181 2 DEBUG nova.network.os_vif_util [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.185 2 DEBUG nova.virt.libvirt.guest [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:04:37 compute-0 ceph-mon[74249]: pgmap v1470: 305 pgs: 305 active+clean; 200 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 312 KiB/s rd, 2.2 MiB/s wr, 62 op/s
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.190 2 DEBUG nova.virt.libvirt.guest [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface>not found in domain: <domain type='kvm' id='64'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <name>instance-00000033</name>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <uuid>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</uuid>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:name>tempest-AttachInterfacesTestJSON-server-880311128</nova:name>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:04:35</nova:creationTime>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:port uuid="09d03bcf-f719-4ec4-91a0-3c14e350a342">
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:04:37 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <system>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <entry name='serial'>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <entry name='uuid'>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </system>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <os>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </os>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <features>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </features>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk' index='2'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       </source>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config' index='1'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       </source>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:48:7e:a5'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target dev='tap09d03bcf-f7'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <source path='/dev/pts/1'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log' append='off'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       </target>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/1'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <source path='/dev/pts/1'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log' append='off'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </console>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </input>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </input>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </input>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <video>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </video>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c247,c365</label>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c247,c365</imagelabel>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:04:37 compute-0 nova_compute[259627]: </domain>
Oct 14 09:04:37 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.191 2 DEBUG nova.virt.libvirt.guest [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.199 2 DEBUG nova.virt.libvirt.guest [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface>not found in domain: <domain type='kvm' id='64'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <name>instance-00000033</name>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <uuid>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</uuid>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:name>tempest-AttachInterfacesTestJSON-server-880311128</nova:name>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:04:35</nova:creationTime>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:port uuid="09d03bcf-f719-4ec4-91a0-3c14e350a342">
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:04:37 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <system>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <entry name='serial'>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <entry name='uuid'>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </system>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <os>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </os>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <features>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </features>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk' index='2'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       </source>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config' index='1'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       </source>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:48:7e:a5'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target dev='tap09d03bcf-f7'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <source path='/dev/pts/1'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log' append='off'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       </target>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/1'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <source path='/dev/pts/1'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log' append='off'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </console>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </input>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </input>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </input>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <video>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </video>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c247,c365</label>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c247,c365</imagelabel>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:04:37 compute-0 nova_compute[259627]: </domain>
Oct 14 09:04:37 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.200 2 WARNING nova.virt.libvirt.driver [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Detaching interface fa:16:3e:5f:f7:fb failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap194dc9cd-03' not found.
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.201 2 DEBUG nova.virt.libvirt.vif [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:03:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-880311128',display_name='tempest-AttachInterfacesTestJSON-server-880311128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-880311128',id=51,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJeC/x7PID63fa+XdM1puH3fFuAGCvZuvVgTscxZNhVHcJaK6qOoyzTmbsI3sgqD2j7CCpP6E241ZXLsn//8ic8awTSD83uMaP7AvsnDddi9mfK+5p/w2g8zmdvw2fjPQ==',key_name='tempest-keypair-1232666789',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-vco8xn00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=3c8eac8e-dcfa-41ba-9c01-1185061baf5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.201 2 DEBUG nova.network.os_vif_util [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converting VIF {"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.202 2 DEBUG nova.network.os_vif_util [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.203 2 DEBUG os_vif [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.211 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap194dc9cd-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.212 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.214 2 INFO os_vif [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03')
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.215 2 DEBUG nova.virt.libvirt.guest [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:name>tempest-AttachInterfacesTestJSON-server-880311128</nova:name>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:04:37</nova:creationTime>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     <nova:port uuid="09d03bcf-f719-4ec4-91a0-3c14e350a342">
Oct 14 09:04:37 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:04:37 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:04:37 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:04:37 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:04:37 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.296 2 DEBUG oslo_concurrency.lockutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.296 2 DEBUG oslo_concurrency.lockutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.297 2 DEBUG oslo_concurrency.lockutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.297 2 DEBUG oslo_concurrency.lockutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.297 2 DEBUG oslo_concurrency.lockutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.298 2 INFO nova.compute.manager [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Terminating instance
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.299 2 DEBUG nova.compute.manager [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:04:37 compute-0 kernel: tap09d03bcf-f7 (unregistering): left promiscuous mode
Oct 14 09:04:37 compute-0 NetworkManager[44885]: <info>  [1760432677.3676] device (tap09d03bcf-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:04:37 compute-0 ovn_controller[152662]: 2025-10-14T09:04:37Z|00501|binding|INFO|Releasing lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 from this chassis (sb_readonly=0)
Oct 14 09:04:37 compute-0 ovn_controller[152662]: 2025-10-14T09:04:37Z|00502|binding|INFO|Setting lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 down in Southbound
Oct 14 09:04:37 compute-0 ovn_controller[152662]: 2025-10-14T09:04:37Z|00503|binding|INFO|Removing iface tap09d03bcf-f7 ovn-installed in OVS
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.407 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:7e:a5 10.100.0.8'], port_security=['fa:16:3e:48:7e:a5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3c8eac8e-dcfa-41ba-9c01-1185061baf5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c6f10764-f6df-4d21-b829-68562680623d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=09d03bcf-f719-4ec4-91a0-3c14e350a342) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.408 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 09d03bcf-f719-4ec4-91a0-3c14e350a342 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.409 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc2d149f-aebf-406a-aed2-5161dd22b079, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.410 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[014c12f8-24c4-4981-814e-aaaba87e246a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.411 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 namespace which is not needed anymore
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:37 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000033.scope: Deactivated successfully.
Oct 14 09:04:37 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000033.scope: Consumed 12.886s CPU time.
Oct 14 09:04:37 compute-0 systemd-machined[214636]: Machine qemu-64-instance-00000033 terminated.
Oct 14 09:04:37 compute-0 kernel: tap09d03bcf-f7: entered promiscuous mode
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.516 2 DEBUG nova.network.neutron [None req-c8980aa4-5084-4acd-8050-212b7bed928b 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updating instance_info_cache with network_info: [{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:37 compute-0 systemd-udevd[317276]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:04:37 compute-0 NetworkManager[44885]: <info>  [1760432677.5212] manager: (tap09d03bcf-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/220)
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:37 compute-0 ovn_controller[152662]: 2025-10-14T09:04:37Z|00504|binding|INFO|Claiming lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 for this chassis.
Oct 14 09:04:37 compute-0 ovn_controller[152662]: 2025-10-14T09:04:37Z|00505|binding|INFO|09d03bcf-f719-4ec4-91a0-3c14e350a342: Claiming fa:16:3e:48:7e:a5 10.100.0.8
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.529 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:7e:a5 10.100.0.8'], port_security=['fa:16:3e:48:7e:a5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3c8eac8e-dcfa-41ba-9c01-1185061baf5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c6f10764-f6df-4d21-b829-68562680623d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=09d03bcf-f719-4ec4-91a0-3c14e350a342) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:04:37 compute-0 kernel: tap09d03bcf-f7 (unregistering): left promiscuous mode
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.534 2 DEBUG oslo_concurrency.lockutils [None req-c8980aa4-5084-4acd-8050-212b7bed928b 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Releasing lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.535 2 DEBUG nova.compute.manager [None req-c8980aa4-5084-4acd-8050-212b7bed928b 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.535 2 DEBUG nova.compute.manager [None req-c8980aa4-5084-4acd-8050-212b7bed928b 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] network_info to inject: |[{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Oct 14 09:04:37 compute-0 ovn_controller[152662]: 2025-10-14T09:04:37Z|00506|binding|INFO|Setting lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 ovn-installed in OVS
Oct 14 09:04:37 compute-0 ovn_controller[152662]: 2025-10-14T09:04:37Z|00507|binding|INFO|Setting lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 up in Southbound
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:37 compute-0 ovn_controller[152662]: 2025-10-14T09:04:37Z|00508|binding|INFO|Releasing lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 from this chassis (sb_readonly=1)
Oct 14 09:04:37 compute-0 ovn_controller[152662]: 2025-10-14T09:04:37Z|00509|if_status|INFO|Dropped 2 log messages in last 246 seconds (most recently, 246 seconds ago) due to excessive rate
Oct 14 09:04:37 compute-0 ovn_controller[152662]: 2025-10-14T09:04:37Z|00510|if_status|INFO|Not setting lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 down as sb is readonly
Oct 14 09:04:37 compute-0 ovn_controller[152662]: 2025-10-14T09:04:37Z|00511|binding|INFO|Releasing lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 from this chassis (sb_readonly=0)
Oct 14 09:04:37 compute-0 ovn_controller[152662]: 2025-10-14T09:04:37Z|00512|binding|INFO|Removing iface tap09d03bcf-f7 ovn-installed in OVS
Oct 14 09:04:37 compute-0 ovn_controller[152662]: 2025-10-14T09:04:37Z|00513|binding|INFO|Setting lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 down in Southbound
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.556 2 INFO nova.virt.libvirt.driver [-] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Instance destroyed successfully.
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.557 2 DEBUG nova.objects.instance [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'resources' on Instance uuid 3c8eac8e-dcfa-41ba-9c01-1185061baf5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.562 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:7e:a5 10.100.0.8'], port_security=['fa:16:3e:48:7e:a5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3c8eac8e-dcfa-41ba-9c01-1185061baf5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c6f10764-f6df-4d21-b829-68562680623d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=09d03bcf-f719-4ec4-91a0-3c14e350a342) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:04:37 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[316446]: [NOTICE]   (316466) : haproxy version is 2.8.14-c23fe91
Oct 14 09:04:37 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[316446]: [NOTICE]   (316466) : path to executable is /usr/sbin/haproxy
Oct 14 09:04:37 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[316446]: [WARNING]  (316466) : Exiting Master process...
Oct 14 09:04:37 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[316446]: [WARNING]  (316466) : Exiting Master process...
Oct 14 09:04:37 compute-0 systemd[1]: libpod-167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6.scope: Deactivated successfully.
Oct 14 09:04:37 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[316446]: [ALERT]    (316466) : Current worker (316470) exited with code 143 (Terminated)
Oct 14 09:04:37 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[316446]: [WARNING]  (316466) : All workers exited. Exiting... (0)
Oct 14 09:04:37 compute-0 conmon[316446]: conmon 167ba1bfdc1cafb6e28f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6.scope/container/memory.events
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:37 compute-0 podman[317295]: 2025-10-14 09:04:37.574000108 +0000 UTC m=+0.058428407 container died 167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.583 2 DEBUG nova.virt.libvirt.vif [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:03:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-880311128',display_name='tempest-AttachInterfacesTestJSON-server-880311128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-880311128',id=51,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJeC/x7PID63fa+XdM1puH3fFuAGCvZuvVgTscxZNhVHcJaK6qOoyzTmbsI3sgqD2j7CCpP6E241ZXLsn//8ic8awTSD83uMaP7AvsnDddi9mfK+5p/w2g8zmdvw2fjPQ==',key_name='tempest-keypair-1232666789',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-vco8xn00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=3c8eac8e-dcfa-41ba-9c01-1185061baf5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.583 2 DEBUG nova.network.os_vif_util [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.584 2 DEBUG nova.network.os_vif_util [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:7e:a5,bridge_name='br-int',has_traffic_filtering=True,id=09d03bcf-f719-4ec4-91a0-3c14e350a342,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09d03bcf-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.584 2 DEBUG os_vif [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:7e:a5,bridge_name='br-int',has_traffic_filtering=True,id=09d03bcf-f719-4ec4-91a0-3c14e350a342,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09d03bcf-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09d03bcf-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.592 2 INFO os_vif [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:7e:a5,bridge_name='br-int',has_traffic_filtering=True,id=09d03bcf-f719-4ec4-91a0-3c14e350a342,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09d03bcf-f7')
Oct 14 09:04:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f6ef93dddd05609592d54febe7e864f7239987c158375c17766114f7714daa3-merged.mount: Deactivated successfully.
Oct 14 09:04:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6-userdata-shm.mount: Deactivated successfully.
Oct 14 09:04:37 compute-0 podman[317295]: 2025-10-14 09:04:37.618163574 +0000 UTC m=+0.102591873 container cleanup 167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 09:04:37 compute-0 systemd[1]: libpod-conmon-167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6.scope: Deactivated successfully.
Oct 14 09:04:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:04:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 200 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 110 KiB/s wr, 18 op/s
Oct 14 09:04:37 compute-0 podman[317344]: 2025-10-14 09:04:37.706955907 +0000 UTC m=+0.054319256 container remove 167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[79a58077-9dd4-4d1c-b4af-ca5f67a4943a]: (4, ('Tue Oct 14 09:04:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 (167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6)\n167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6\nTue Oct 14 09:04:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 (167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6)\n167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.714 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[32e1385d-e783-4b1f-b0a9-82a86f8137aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.715 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:37 compute-0 kernel: tapfc2d149f-a0: left promiscuous mode
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.725 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0f75c01f-6c65-45ee-9bcb-10133315eb89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.752 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fdb4d3-36eb-4aec-b27b-2cef8863dcb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.753 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dfffda8e-d2d4-47ae-a543-f2eeb4bdec9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.771 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0386ad51-dac0-44f3-8ef9-3ddec6a0aa61]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642540, 'reachable_time': 44268, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317362, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.775 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.775 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[16ed3af3-e068-40cc-860c-3190dc746d57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:37 compute-0 systemd[1]: run-netns-ovnmeta\x2dfc2d149f\x2daebf\x2d406a\x2daed2\x2d5161dd22b079.mount: Deactivated successfully.
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.776 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 09d03bcf-f719-4ec4-91a0-3c14e350a342 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.779 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc2d149f-aebf-406a-aed2-5161dd22b079, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.780 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d37b3e1b-c837-4420-ae9f-a53d4bf5ad7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.781 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 09d03bcf-f719-4ec4-91a0-3c14e350a342 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.783 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc2d149f-aebf-406a-aed2-5161dd22b079, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:04:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.784 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc72115-d228-4ef6-8de9-7ad6cbddc464]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.856 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-unplugged-194dc9cd-03af-4e2c-b8d6-107081204a25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.856 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.857 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.857 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.857 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-unplugged-194dc9cd-03af-4e2c-b8d6-107081204a25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.857 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-unplugged-194dc9cd-03af-4e2c-b8d6-107081204a25 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.857 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-plugged-194dc9cd-03af-4e2c-b8d6-107081204a25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.858 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.858 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.858 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.858 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-plugged-194dc9cd-03af-4e2c-b8d6-107081204a25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.858 2 WARNING nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received unexpected event network-vif-plugged-194dc9cd-03af-4e2c-b8d6-107081204a25 for instance with vm_state active and task_state deleting.
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.858 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received event network-changed-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.859 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Refreshing instance network info cache due to event network-changed-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.859 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.859 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.859 2 DEBUG nova.network.neutron [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Refreshing network info cache for port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.984 2 INFO nova.virt.libvirt.driver [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Deleting instance files /var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_del
Oct 14 09:04:37 compute-0 nova_compute[259627]: 2025-10-14 09:04:37.985 2 INFO nova.virt.libvirt.driver [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Deletion of /var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_del complete
Oct 14 09:04:38 compute-0 nova_compute[259627]: 2025-10-14 09:04:38.147 2 INFO nova.compute.manager [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Took 0.85 seconds to destroy the instance on the hypervisor.
Oct 14 09:04:38 compute-0 nova_compute[259627]: 2025-10-14 09:04:38.148 2 DEBUG oslo.service.loopingcall [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:04:38 compute-0 nova_compute[259627]: 2025-10-14 09:04:38.155 2 DEBUG nova.compute.manager [-] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:04:38 compute-0 nova_compute[259627]: 2025-10-14 09:04:38.155 2 DEBUG nova.network.neutron [-] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:04:39 compute-0 ceph-mon[74249]: pgmap v1471: 305 pgs: 305 active+clean; 200 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 110 KiB/s wr, 18 op/s
Oct 14 09:04:39 compute-0 nova_compute[259627]: 2025-10-14 09:04:39.675 2 INFO nova.network.neutron [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Port 194dc9cd-03af-4e2c-b8d6-107081204a25 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 14 09:04:39 compute-0 nova_compute[259627]: 2025-10-14 09:04:39.676 2 DEBUG nova.network.neutron [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updating instance_info_cache with network_info: [{"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 200 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 110 KiB/s wr, 18 op/s
Oct 14 09:04:39 compute-0 nova_compute[259627]: 2025-10-14 09:04:39.706 2 DEBUG oslo_concurrency.lockutils [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:39 compute-0 nova_compute[259627]: 2025-10-14 09:04:39.754 2 DEBUG oslo_concurrency.lockutils [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-3c8eac8e-dcfa-41ba-9c01-1185061baf5d-194dc9cd-03af-4e2c-b8d6-107081204a25" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:39 compute-0 nova_compute[259627]: 2025-10-14 09:04:39.908 2 DEBUG nova.network.neutron [-] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:39 compute-0 nova_compute[259627]: 2025-10-14 09:04:39.923 2 DEBUG nova.objects.instance [None req-77405f18-bf68-4de8-bec8-f1699284fe07 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lazy-loading 'flavor' on Instance uuid e038df86-1323-4b04-afae-9fe68c98c22c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:39 compute-0 nova_compute[259627]: 2025-10-14 09:04:39.927 2 INFO nova.compute.manager [-] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Took 1.77 seconds to deallocate network for instance.
Oct 14 09:04:39 compute-0 nova_compute[259627]: 2025-10-14 09:04:39.956 2 DEBUG oslo_concurrency.lockutils [None req-77405f18-bf68-4de8-bec8-f1699284fe07 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:39 compute-0 nova_compute[259627]: 2025-10-14 09:04:39.985 2 DEBUG oslo_concurrency.lockutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:39 compute-0 nova_compute[259627]: 2025-10-14 09:04:39.985 2 DEBUG oslo_concurrency.lockutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.040 2 DEBUG nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.041 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.041 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.041 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.041 2 DEBUG nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.041 2 WARNING nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received unexpected event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 for instance with vm_state deleted and task_state None.
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.041 2 DEBUG nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.042 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.042 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.042 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.042 2 DEBUG nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.042 2 WARNING nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received unexpected event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 for instance with vm_state deleted and task_state None.
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.042 2 DEBUG nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-unplugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.043 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.043 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.043 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.043 2 DEBUG nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-unplugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.043 2 WARNING nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received unexpected event network-vif-unplugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 for instance with vm_state deleted and task_state None.
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.043 2 DEBUG nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.044 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.044 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.044 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.044 2 DEBUG nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.044 2 WARNING nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received unexpected event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 for instance with vm_state deleted and task_state None.
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.072 2 DEBUG oslo_concurrency.processutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:40 compute-0 sudo[317364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:04:40 compute-0 sudo[317364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:40 compute-0 sudo[317364]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.176 2 DEBUG nova.network.neutron [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updated VIF entry in instance network info cache for port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.177 2 DEBUG nova.network.neutron [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updating instance_info_cache with network_info: [{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:40 compute-0 sudo[317390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:04:40 compute-0 sudo[317390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.194 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.195 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-unplugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.195 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.195 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.196 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:40 compute-0 sudo[317390]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.196 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-unplugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.197 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-unplugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.198 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.198 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.198 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.198 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.203 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.204 2 WARNING nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received unexpected event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 for instance with vm_state active and task_state deleting.
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.204 2 DEBUG oslo_concurrency.lockutils [None req-77405f18-bf68-4de8-bec8-f1699284fe07 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquired lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:40 compute-0 sudo[317415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:04:40 compute-0 sudo[317415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:40 compute-0 sudo[317415]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:40 compute-0 sudo[317459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 14 09:04:40 compute-0 sudo[317459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:04:40 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1377394577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.541 2 DEBUG oslo_concurrency.processutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.546 2 DEBUG nova.compute.provider_tree [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.571 2 DEBUG nova.scheduler.client.report [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.603 2 DEBUG oslo_concurrency.lockutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:40 compute-0 sudo[317459]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:04:40 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:04:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:04:40 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.632 2 INFO nova.scheduler.client.report [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Deleted allocations for instance 3c8eac8e-dcfa-41ba-9c01-1185061baf5d
Oct 14 09:04:40 compute-0 sudo[317505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:04:40 compute-0 sudo[317505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:40 compute-0 sudo[317505]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:40 compute-0 nova_compute[259627]: 2025-10-14 09:04:40.699 2 DEBUG oslo_concurrency.lockutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:40 compute-0 sudo[317530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:04:40 compute-0 sudo[317530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:40 compute-0 sudo[317530]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:40 compute-0 sudo[317555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:04:40 compute-0 sudo[317555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:40 compute-0 sudo[317555]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:40 compute-0 sudo[317580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:04:40 compute-0 sudo[317580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:41 compute-0 ceph-mon[74249]: pgmap v1472: 305 pgs: 305 active+clean; 200 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 110 KiB/s wr, 18 op/s
Oct 14 09:04:41 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1377394577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:04:41 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:04:41 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:04:41 compute-0 sudo[317580]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:41 compute-0 nova_compute[259627]: 2025-10-14 09:04:41.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 14 09:04:41 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 09:04:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:04:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:04:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:04:41 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:04:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:04:41 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:04:41 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 64edb15e-1350-41f6-a412-c158d20679fe does not exist
Oct 14 09:04:41 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev fb5d74a8-ad22-4e2f-a4cc-28a9a4ad79c8 does not exist
Oct 14 09:04:41 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev ac65fe4d-22be-4cf1-9b7a-e588513f9fce does not exist
Oct 14 09:04:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:04:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:04:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:04:41 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:04:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:04:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:04:41 compute-0 sudo[317636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:04:41 compute-0 sudo[317636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:41 compute-0 sudo[317636]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:41.512 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:04:41 compute-0 nova_compute[259627]: 2025-10-14 09:04:41.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:41.514 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:04:41 compute-0 sudo[317661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:04:41 compute-0 sudo[317661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:41 compute-0 sudo[317661]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:41 compute-0 nova_compute[259627]: 2025-10-14 09:04:41.577 2 DEBUG nova.network.neutron [None req-77405f18-bf68-4de8-bec8-f1699284fe07 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:04:41 compute-0 sudo[317686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:04:41 compute-0 sudo[317686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:41 compute-0 sudo[317686]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:41 compute-0 nova_compute[259627]: 2025-10-14 09:04:41.659 2 DEBUG nova.compute.manager [req-85611c4a-335a-4979-9638-696387ab8bf6 req-4d53921d-2ce3-43ee-acf9-01efbd62cb3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received event network-changed-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:41 compute-0 nova_compute[259627]: 2025-10-14 09:04:41.659 2 DEBUG nova.compute.manager [req-85611c4a-335a-4979-9638-696387ab8bf6 req-4d53921d-2ce3-43ee-acf9-01efbd62cb3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Refreshing instance network info cache due to event network-changed-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:04:41 compute-0 nova_compute[259627]: 2025-10-14 09:04:41.660 2 DEBUG oslo_concurrency.lockutils [req-85611c4a-335a-4979-9638-696387ab8bf6 req-4d53921d-2ce3-43ee-acf9-01efbd62cb3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:41 compute-0 sudo[317711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:04:41 compute-0 sudo[317711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 121 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 114 KiB/s wr, 105 op/s
Oct 14 09:04:42 compute-0 podman[317776]: 2025-10-14 09:04:42.055211515 +0000 UTC m=+0.041461460 container create 59d5d97d93eed29c00ead58c46895eb506aa6fcf3b35470d3661299c348ab68f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_edison, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:04:42 compute-0 systemd[1]: Started libpod-conmon-59d5d97d93eed29c00ead58c46895eb506aa6fcf3b35470d3661299c348ab68f.scope.
Oct 14 09:04:42 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:04:42 compute-0 podman[317776]: 2025-10-14 09:04:42.03670135 +0000 UTC m=+0.022951335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:04:42 compute-0 podman[317776]: 2025-10-14 09:04:42.146182732 +0000 UTC m=+0.132432717 container init 59d5d97d93eed29c00ead58c46895eb506aa6fcf3b35470d3661299c348ab68f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_edison, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:04:42 compute-0 podman[317776]: 2025-10-14 09:04:42.153803799 +0000 UTC m=+0.140053764 container start 59d5d97d93eed29c00ead58c46895eb506aa6fcf3b35470d3661299c348ab68f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_edison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:04:42 compute-0 nova_compute[259627]: 2025-10-14 09:04:42.152 2 DEBUG nova.compute.manager [req-f72d75b5-46d2-414a-b667-223f5cf49cce req-0d7b2539-c0ec-42da-98a8-e4e838fde150 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-deleted-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:42 compute-0 podman[317776]: 2025-10-14 09:04:42.157629693 +0000 UTC m=+0.143879638 container attach 59d5d97d93eed29c00ead58c46895eb506aa6fcf3b35470d3661299c348ab68f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct 14 09:04:42 compute-0 cool_edison[317793]: 167 167
Oct 14 09:04:42 compute-0 podman[317776]: 2025-10-14 09:04:42.158659228 +0000 UTC m=+0.144909173 container died 59d5d97d93eed29c00ead58c46895eb506aa6fcf3b35470d3661299c348ab68f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 09:04:42 compute-0 systemd[1]: libpod-59d5d97d93eed29c00ead58c46895eb506aa6fcf3b35470d3661299c348ab68f.scope: Deactivated successfully.
Oct 14 09:04:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-151d15ba7a3999ab1fddf68fab7262dce235815330dfc81cff9456f3f8eac6d4-merged.mount: Deactivated successfully.
Oct 14 09:04:42 compute-0 podman[317776]: 2025-10-14 09:04:42.211650771 +0000 UTC m=+0.197900716 container remove 59d5d97d93eed29c00ead58c46895eb506aa6fcf3b35470d3661299c348ab68f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_edison, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:04:42 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 09:04:42 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:04:42 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:04:42 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:04:42 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:04:42 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:04:42 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:04:42 compute-0 systemd[1]: libpod-conmon-59d5d97d93eed29c00ead58c46895eb506aa6fcf3b35470d3661299c348ab68f.scope: Deactivated successfully.
Oct 14 09:04:42 compute-0 nova_compute[259627]: 2025-10-14 09:04:42.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:42 compute-0 podman[317817]: 2025-10-14 09:04:42.373344987 +0000 UTC m=+0.039374069 container create eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 09:04:42 compute-0 systemd[1]: Started libpod-conmon-eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992.scope.
Oct 14 09:04:42 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:04:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b48ee913fb1e4ef71280ddc26fb7e26a7d00e83e57eb2404442d52130fe60c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:42 compute-0 podman[317817]: 2025-10-14 09:04:42.356709798 +0000 UTC m=+0.022738900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:04:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b48ee913fb1e4ef71280ddc26fb7e26a7d00e83e57eb2404442d52130fe60c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b48ee913fb1e4ef71280ddc26fb7e26a7d00e83e57eb2404442d52130fe60c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b48ee913fb1e4ef71280ddc26fb7e26a7d00e83e57eb2404442d52130fe60c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b48ee913fb1e4ef71280ddc26fb7e26a7d00e83e57eb2404442d52130fe60c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:42 compute-0 podman[317817]: 2025-10-14 09:04:42.474091554 +0000 UTC m=+0.140120666 container init eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:04:42 compute-0 podman[317817]: 2025-10-14 09:04:42.480949663 +0000 UTC m=+0.146978755 container start eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 09:04:42 compute-0 podman[317817]: 2025-10-14 09:04:42.484863069 +0000 UTC m=+0.150892181 container attach eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:04:42 compute-0 nova_compute[259627]: 2025-10-14 09:04:42.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.653832) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432682653866, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 531, "num_deletes": 251, "total_data_size": 508241, "memory_usage": 518768, "flush_reason": "Manual Compaction"}
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432682658212, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 384812, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30540, "largest_seqno": 31070, "table_properties": {"data_size": 382026, "index_size": 758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7617, "raw_average_key_size": 20, "raw_value_size": 376193, "raw_average_value_size": 1033, "num_data_blocks": 33, "num_entries": 364, "num_filter_entries": 364, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760432652, "oldest_key_time": 1760432652, "file_creation_time": 1760432682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 4403 microseconds, and 1559 cpu microseconds.
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.658239) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 384812 bytes OK
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.658253) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.659746) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.659755) EVENT_LOG_v1 {"time_micros": 1760432682659752, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.659767) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 505197, prev total WAL file size 505197, number of live WAL files 2.
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.660197) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303030' seq:72057594037927935, type:22 .. '6D6772737461740031323531' seq:0, type:0; will stop at (end)
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(375KB)], [65(9990KB)]
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432682660228, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 10615443, "oldest_snapshot_seqno": -1}
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5398 keys, 7432390 bytes, temperature: kUnknown
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432682700579, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7432390, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7396384, "index_size": 21432, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 136314, "raw_average_key_size": 25, "raw_value_size": 7299152, "raw_average_value_size": 1352, "num_data_blocks": 873, "num_entries": 5398, "num_filter_entries": 5398, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760432682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.700790) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7432390 bytes
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.702407) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 262.5 rd, 183.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.8 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(46.9) write-amplify(19.3) OK, records in: 5907, records dropped: 509 output_compression: NoCompression
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.702421) EVENT_LOG_v1 {"time_micros": 1760432682702414, "job": 36, "event": "compaction_finished", "compaction_time_micros": 40440, "compaction_time_cpu_micros": 18068, "output_level": 6, "num_output_files": 1, "total_output_size": 7432390, "num_input_records": 5907, "num_output_records": 5398, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432682702569, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432682703961, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.660122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.704050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.704055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.704058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.704060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:04:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.704061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000759845367747607 of space, bias 1.0, pg target 0.2279536103242821 quantized to 32 (current 32)
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:04:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:04:43 compute-0 ceph-mon[74249]: pgmap v1473: 305 pgs: 305 active+clean; 121 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 114 KiB/s wr, 105 op/s
Oct 14 09:04:43 compute-0 nova_compute[259627]: 2025-10-14 09:04:43.419 2 DEBUG nova.network.neutron [None req-77405f18-bf68-4de8-bec8-f1699284fe07 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updating instance_info_cache with network_info: [{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:43 compute-0 nova_compute[259627]: 2025-10-14 09:04:43.442 2 DEBUG oslo_concurrency.lockutils [None req-77405f18-bf68-4de8-bec8-f1699284fe07 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Releasing lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:43 compute-0 nova_compute[259627]: 2025-10-14 09:04:43.443 2 DEBUG nova.compute.manager [None req-77405f18-bf68-4de8-bec8-f1699284fe07 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Oct 14 09:04:43 compute-0 nova_compute[259627]: 2025-10-14 09:04:43.443 2 DEBUG nova.compute.manager [None req-77405f18-bf68-4de8-bec8-f1699284fe07 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] network_info to inject: |[{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Oct 14 09:04:43 compute-0 nova_compute[259627]: 2025-10-14 09:04:43.446 2 DEBUG oslo_concurrency.lockutils [req-85611c4a-335a-4979-9638-696387ab8bf6 req-4d53921d-2ce3-43ee-acf9-01efbd62cb3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:43 compute-0 nova_compute[259627]: 2025-10-14 09:04:43.446 2 DEBUG nova.network.neutron [req-85611c4a-335a-4979-9638-696387ab8bf6 req-4d53921d-2ce3-43ee-acf9-01efbd62cb3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Refreshing network info cache for port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:04:43 compute-0 practical_solomon[317834]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:04:43 compute-0 practical_solomon[317834]: --> relative data size: 1.0
Oct 14 09:04:43 compute-0 practical_solomon[317834]: --> All data devices are unavailable
Oct 14 09:04:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 121 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 16 KiB/s wr, 87 op/s
Oct 14 09:04:43 compute-0 systemd[1]: libpod-eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992.scope: Deactivated successfully.
Oct 14 09:04:43 compute-0 systemd[1]: libpod-eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992.scope: Consumed 1.161s CPU time.
Oct 14 09:04:43 compute-0 conmon[317834]: conmon eaaa6a24ab2126dd0802 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992.scope/container/memory.events
Oct 14 09:04:43 compute-0 podman[317817]: 2025-10-14 09:04:43.717572276 +0000 UTC m=+1.383601358 container died eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:04:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-41b48ee913fb1e4ef71280ddc26fb7e26a7d00e83e57eb2404442d52130fe60c-merged.mount: Deactivated successfully.
Oct 14 09:04:43 compute-0 podman[317817]: 2025-10-14 09:04:43.790596482 +0000 UTC m=+1.456625564 container remove eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:04:43 compute-0 systemd[1]: libpod-conmon-eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992.scope: Deactivated successfully.
Oct 14 09:04:43 compute-0 sudo[317711]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:43 compute-0 sudo[317875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:04:43 compute-0 sudo[317875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:43 compute-0 sudo[317875]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:44 compute-0 sudo[317900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:04:44 compute-0 sudo[317900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:44 compute-0 sudo[317900]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:44 compute-0 sudo[317925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:04:44 compute-0 sudo[317925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:44 compute-0 sudo[317925]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:44 compute-0 sudo[317950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:04:44 compute-0 sudo[317950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:44 compute-0 podman[318016]: 2025-10-14 09:04:44.489784562 +0000 UTC m=+0.040609839 container create 3b4e8a89e6f584c57b7cf6b429f67c8f21d865803573e97f67efa31c808adc2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_austin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:04:44 compute-0 systemd[1]: Started libpod-conmon-3b4e8a89e6f584c57b7cf6b429f67c8f21d865803573e97f67efa31c808adc2e.scope.
Oct 14 09:04:44 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:04:44 compute-0 podman[318016]: 2025-10-14 09:04:44.470790095 +0000 UTC m=+0.021615402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:04:44 compute-0 podman[318016]: 2025-10-14 09:04:44.574363502 +0000 UTC m=+0.125188789 container init 3b4e8a89e6f584c57b7cf6b429f67c8f21d865803573e97f67efa31c808adc2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:04:44 compute-0 podman[318016]: 2025-10-14 09:04:44.582514372 +0000 UTC m=+0.133339649 container start 3b4e8a89e6f584c57b7cf6b429f67c8f21d865803573e97f67efa31c808adc2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_austin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 09:04:44 compute-0 infallible_austin[318033]: 167 167
Oct 14 09:04:44 compute-0 systemd[1]: libpod-3b4e8a89e6f584c57b7cf6b429f67c8f21d865803573e97f67efa31c808adc2e.scope: Deactivated successfully.
Oct 14 09:04:44 compute-0 podman[318016]: 2025-10-14 09:04:44.624388532 +0000 UTC m=+0.175213909 container attach 3b4e8a89e6f584c57b7cf6b429f67c8f21d865803573e97f67efa31c808adc2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:04:44 compute-0 podman[318016]: 2025-10-14 09:04:44.624955426 +0000 UTC m=+0.175780713 container died 3b4e8a89e6f584c57b7cf6b429f67c8f21d865803573e97f67efa31c808adc2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 09:04:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9444ddadd17af2c923978a01c31124bf4633214eab3e50b74c29cc84a1b4479-merged.mount: Deactivated successfully.
Oct 14 09:04:44 compute-0 podman[318016]: 2025-10-14 09:04:44.685121915 +0000 UTC m=+0.235947202 container remove 3b4e8a89e6f584c57b7cf6b429f67c8f21d865803573e97f67efa31c808adc2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_austin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 09:04:44 compute-0 systemd[1]: libpod-conmon-3b4e8a89e6f584c57b7cf6b429f67c8f21d865803573e97f67efa31c808adc2e.scope: Deactivated successfully.
Oct 14 09:04:44 compute-0 nova_compute[259627]: 2025-10-14 09:04:44.729 2 DEBUG oslo_concurrency.lockutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "e038df86-1323-4b04-afae-9fe68c98c22c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:44 compute-0 nova_compute[259627]: 2025-10-14 09:04:44.732 2 DEBUG oslo_concurrency.lockutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:44 compute-0 nova_compute[259627]: 2025-10-14 09:04:44.732 2 DEBUG oslo_concurrency.lockutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:44 compute-0 nova_compute[259627]: 2025-10-14 09:04:44.733 2 DEBUG oslo_concurrency.lockutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:44 compute-0 nova_compute[259627]: 2025-10-14 09:04:44.733 2 DEBUG oslo_concurrency.lockutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:44 compute-0 nova_compute[259627]: 2025-10-14 09:04:44.734 2 INFO nova.compute.manager [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Terminating instance
Oct 14 09:04:44 compute-0 nova_compute[259627]: 2025-10-14 09:04:44.735 2 DEBUG nova.compute.manager [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:04:44 compute-0 kernel: tap4f8c1944-ec (unregistering): left promiscuous mode
Oct 14 09:04:44 compute-0 NetworkManager[44885]: <info>  [1760432684.7956] device (tap4f8c1944-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:04:44 compute-0 ovn_controller[152662]: 2025-10-14T09:04:44Z|00514|binding|INFO|Releasing lport 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 from this chassis (sb_readonly=0)
Oct 14 09:04:44 compute-0 ovn_controller[152662]: 2025-10-14T09:04:44Z|00515|binding|INFO|Setting lport 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 down in Southbound
Oct 14 09:04:44 compute-0 ovn_controller[152662]: 2025-10-14T09:04:44Z|00516|binding|INFO|Removing iface tap4f8c1944-ec ovn-installed in OVS
Oct 14 09:04:44 compute-0 nova_compute[259627]: 2025-10-14 09:04:44.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:44.812 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:80:d4 10.100.0.7'], port_security=['fa:16:3e:fb:80:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e038df86-1323-4b04-afae-9fe68c98c22c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e964bc94-eb23-4bb9-b6af-2d14c0f7d764', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e520d17b20a44440b176c2297c35286a', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'dfbcaeb0-59cf-4ea6-aad2-32a400918089', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c03d4d8-729d-49db-b443-08ab27defcda, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4f8c1944-ec5d-4de3-82f1-19760c6b4dd4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:04:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:44.814 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 in datapath e964bc94-eb23-4bb9-b6af-2d14c0f7d764 unbound from our chassis
Oct 14 09:04:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:44.816 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e964bc94-eb23-4bb9-b6af-2d14c0f7d764, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:04:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:44.818 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[188e5e8a-9c0d-4096-b04c-db058a149b63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:44.818 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764 namespace which is not needed anymore
Oct 14 09:04:44 compute-0 nova_compute[259627]: 2025-10-14 09:04:44.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:44 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000034.scope: Deactivated successfully.
Oct 14 09:04:44 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000034.scope: Consumed 13.534s CPU time.
Oct 14 09:04:44 compute-0 systemd-machined[214636]: Machine qemu-65-instance-00000034 terminated.
Oct 14 09:04:44 compute-0 podman[318068]: 2025-10-14 09:04:44.932475037 +0000 UTC m=+0.059058773 container create 342c51a3b1ea399f0460e22e233166681a8fc50fab6cdd0ff9e74474638ce7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_roentgen, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:04:44 compute-0 nova_compute[259627]: 2025-10-14 09:04:44.972 2 INFO nova.virt.libvirt.driver [-] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Instance destroyed successfully.
Oct 14 09:04:44 compute-0 nova_compute[259627]: 2025-10-14 09:04:44.973 2 DEBUG nova.objects.instance [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lazy-loading 'resources' on Instance uuid e038df86-1323-4b04-afae-9fe68c98c22c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:44 compute-0 systemd[1]: Started libpod-conmon-342c51a3b1ea399f0460e22e233166681a8fc50fab6cdd0ff9e74474638ce7d9.scope.
Oct 14 09:04:44 compute-0 neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764[316831]: [NOTICE]   (316835) : haproxy version is 2.8.14-c23fe91
Oct 14 09:04:44 compute-0 neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764[316831]: [NOTICE]   (316835) : path to executable is /usr/sbin/haproxy
Oct 14 09:04:44 compute-0 neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764[316831]: [WARNING]  (316835) : Exiting Master process...
Oct 14 09:04:44 compute-0 neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764[316831]: [WARNING]  (316835) : Exiting Master process...
Oct 14 09:04:44 compute-0 neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764[316831]: [ALERT]    (316835) : Current worker (316837) exited with code 143 (Terminated)
Oct 14 09:04:44 compute-0 neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764[316831]: [WARNING]  (316835) : All workers exited. Exiting... (0)
Oct 14 09:04:44 compute-0 systemd[1]: libpod-a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436.scope: Deactivated successfully.
Oct 14 09:04:44 compute-0 nova_compute[259627]: 2025-10-14 09:04:44.997 2 DEBUG nova.virt.libvirt.vif [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1327010057',display_name='tempest-AttachInterfacesUnderV243Test-server-1327010057',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1327010057',id=52,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEoL8Eizmz78I7kJk+2faYxVDwYlZ7Qa0JVnSyW4HvPt6t6qpenjELhDNQJBBgBLKQxH+hNzILHY6YG4gLNrrM0gadWtg4ztrg1o/Wi2tCk6CtSq2N27wHKOX5s993gLcg==',key_name='tempest-keypair-1836165188',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e520d17b20a44440b176c2297c35286a',ramdisk_id='',reservation_id='r-oukj60f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1413244718',owner_user_name='tempest-AttachInterfacesUnderV243Test-1413244718-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4907b291c4c64d2eb768d0036817a00b',uuid=e038df86-1323-4b04-afae-9fe68c98c22c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:44.999 2 DEBUG nova.network.os_vif_util [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Converting VIF {"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:45.000 2 DEBUG nova.network.os_vif_util [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fb:80:d4,bridge_name='br-int',has_traffic_filtering=True,id=4f8c1944-ec5d-4de3-82f1-19760c6b4dd4,network=Network(e964bc94-eb23-4bb9-b6af-2d14c0f7d764),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f8c1944-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:45 compute-0 podman[318068]: 2025-10-14 09:04:44.905648557 +0000 UTC m=+0.032232293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:45.000 2 DEBUG os_vif [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:80:d4,bridge_name='br-int',has_traffic_filtering=True,id=4f8c1944-ec5d-4de3-82f1-19760c6b4dd4,network=Network(e964bc94-eb23-4bb9-b6af-2d14c0f7d764),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f8c1944-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:45.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:45.002 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f8c1944-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:45 compute-0 podman[318096]: 2025-10-14 09:04:45.003912903 +0000 UTC m=+0.064690521 container died a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:45.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:45.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:45.009 2 INFO os_vif [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:80:d4,bridge_name='br-int',has_traffic_filtering=True,id=4f8c1944-ec5d-4de3-82f1-19760c6b4dd4,network=Network(e964bc94-eb23-4bb9-b6af-2d14c0f7d764),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f8c1944-ec')
Oct 14 09:04:45 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0856283cf378d79a7e9897247cf7a3c254d6cc79a79408a6c9956ff22096f8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0856283cf378d79a7e9897247cf7a3c254d6cc79a79408a6c9956ff22096f8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0856283cf378d79a7e9897247cf7a3c254d6cc79a79408a6c9956ff22096f8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0856283cf378d79a7e9897247cf7a3c254d6cc79a79408a6c9956ff22096f8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:45 compute-0 podman[318068]: 2025-10-14 09:04:45.048193262 +0000 UTC m=+0.174777048 container init 342c51a3b1ea399f0460e22e233166681a8fc50fab6cdd0ff9e74474638ce7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:04:45 compute-0 podman[318068]: 2025-10-14 09:04:45.056149128 +0000 UTC m=+0.182732854 container start 342c51a3b1ea399f0460e22e233166681a8fc50fab6cdd0ff9e74474638ce7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:04:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436-userdata-shm.mount: Deactivated successfully.
Oct 14 09:04:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7bf3898c6dcba520ee1a299c42284442b12fb92ae3baa3607f44b8c13a94b30-merged.mount: Deactivated successfully.
Oct 14 09:04:45 compute-0 podman[318068]: 2025-10-14 09:04:45.068214054 +0000 UTC m=+0.194797780 container attach 342c51a3b1ea399f0460e22e233166681a8fc50fab6cdd0ff9e74474638ce7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_roentgen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 09:04:45 compute-0 podman[318096]: 2025-10-14 09:04:45.072891849 +0000 UTC m=+0.133669447 container cleanup a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 09:04:45 compute-0 systemd[1]: libpod-conmon-a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436.scope: Deactivated successfully.
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:45.141 2 DEBUG nova.network.neutron [req-85611c4a-335a-4979-9638-696387ab8bf6 req-4d53921d-2ce3-43ee-acf9-01efbd62cb3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updated VIF entry in instance network info cache for port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:45.142 2 DEBUG nova.network.neutron [req-85611c4a-335a-4979-9638-696387ab8bf6 req-4d53921d-2ce3-43ee-acf9-01efbd62cb3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updating instance_info_cache with network_info: [{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:45 compute-0 podman[318161]: 2025-10-14 09:04:45.156942706 +0000 UTC m=+0.051182710 container remove a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:45.162 2 DEBUG oslo_concurrency.lockutils [req-85611c4a-335a-4979-9638-696387ab8bf6 req-4d53921d-2ce3-43ee-acf9-01efbd62cb3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:45.162 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2a60d988-abf5-496a-ad2c-a6e8f908e0b7]: (4, ('Tue Oct 14 09:04:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764 (a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436)\na036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436\nTue Oct 14 09:04:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764 (a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436)\na036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:45.165 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ade080df-8f08-4d51-a2d5-638edac27dae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:45.166 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape964bc94-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:45 compute-0 kernel: tape964bc94-e0: left promiscuous mode
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:45.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:45.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:45.195 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[398a5f8e-cf05-491c-b051-e9aab1a05501]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:45.227 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[149ffd2d-92a7-4644-8fe6-7f0395abf22f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:45.229 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[47f9412c-95e0-4749-a0f4-e3bddb53e3b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:45.256 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ec64dec2-44e7-4098-a053-12bf1c983021]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643096, 'reachable_time': 25660, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318175, 'error': None, 'target': 'ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:45.258 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:04:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:45.258 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[cf44bed7-cbcf-4e4f-b2cd-13be280674b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:45 compute-0 systemd[1]: run-netns-ovnmeta\x2de964bc94\x2deb23\x2d4bb9\x2db6af\x2d2d14c0f7d764.mount: Deactivated successfully.
Oct 14 09:04:45 compute-0 ceph-mon[74249]: pgmap v1474: 305 pgs: 305 active+clean; 121 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 16 KiB/s wr, 87 op/s
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:45.464 2 INFO nova.virt.libvirt.driver [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Deleting instance files /var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c_del
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:45.465 2 INFO nova.virt.libvirt.driver [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Deletion of /var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c_del complete
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:45.518 2 INFO nova.compute.manager [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:45.519 2 DEBUG oslo.service.loopingcall [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:45.519 2 DEBUG nova.compute.manager [-] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:04:45 compute-0 nova_compute[259627]: 2025-10-14 09:04:45.519 2 DEBUG nova.network.neutron [-] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:04:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 73 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 17 KiB/s wr, 99 op/s
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]: {
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:     "0": [
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:         {
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "devices": [
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "/dev/loop3"
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             ],
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "lv_name": "ceph_lv0",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "lv_size": "21470642176",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "name": "ceph_lv0",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "tags": {
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.cluster_name": "ceph",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.crush_device_class": "",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.encrypted": "0",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.osd_id": "0",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.type": "block",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.vdo": "0"
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             },
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "type": "block",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "vg_name": "ceph_vg0"
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:         }
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:     ],
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:     "1": [
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:         {
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "devices": [
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "/dev/loop4"
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             ],
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "lv_name": "ceph_lv1",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "lv_size": "21470642176",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "name": "ceph_lv1",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "tags": {
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.cluster_name": "ceph",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.crush_device_class": "",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.encrypted": "0",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.osd_id": "1",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.type": "block",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.vdo": "0"
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             },
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "type": "block",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "vg_name": "ceph_vg1"
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:         }
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:     ],
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:     "2": [
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:         {
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "devices": [
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "/dev/loop5"
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             ],
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "lv_name": "ceph_lv2",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "lv_size": "21470642176",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "name": "ceph_lv2",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "tags": {
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.cluster_name": "ceph",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.crush_device_class": "",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.encrypted": "0",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.osd_id": "2",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.type": "block",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:                 "ceph.vdo": "0"
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             },
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "type": "block",
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:             "vg_name": "ceph_vg2"
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:         }
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]:     ]
Oct 14 09:04:45 compute-0 ecstatic_roentgen[318121]: }
Oct 14 09:04:45 compute-0 systemd[1]: libpod-342c51a3b1ea399f0460e22e233166681a8fc50fab6cdd0ff9e74474638ce7d9.scope: Deactivated successfully.
Oct 14 09:04:45 compute-0 podman[318068]: 2025-10-14 09:04:45.832407293 +0000 UTC m=+0.958991029 container died 342c51a3b1ea399f0460e22e233166681a8fc50fab6cdd0ff9e74474638ce7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_roentgen, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:04:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0856283cf378d79a7e9897247cf7a3c254d6cc79a79408a6c9956ff22096f8a-merged.mount: Deactivated successfully.
Oct 14 09:04:45 compute-0 podman[318068]: 2025-10-14 09:04:45.897865553 +0000 UTC m=+1.024449249 container remove 342c51a3b1ea399f0460e22e233166681a8fc50fab6cdd0ff9e74474638ce7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_roentgen, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 09:04:45 compute-0 systemd[1]: libpod-conmon-342c51a3b1ea399f0460e22e233166681a8fc50fab6cdd0ff9e74474638ce7d9.scope: Deactivated successfully.
Oct 14 09:04:45 compute-0 sudo[317950]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:46 compute-0 sudo[318196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:04:46 compute-0 sudo[318196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:46 compute-0 sudo[318196]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:46 compute-0 sudo[318221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:04:46 compute-0 sudo[318221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:46 compute-0 sudo[318221]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:46 compute-0 sudo[318246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:04:46 compute-0 sudo[318246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:46 compute-0 sudo[318246]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:46 compute-0 sudo[318271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:04:46 compute-0 sudo[318271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:46 compute-0 nova_compute[259627]: 2025-10-14 09:04:46.404 2 DEBUG nova.network.neutron [-] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:46 compute-0 nova_compute[259627]: 2025-10-14 09:04:46.431 2 INFO nova.compute.manager [-] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Took 0.91 seconds to deallocate network for instance.
Oct 14 09:04:46 compute-0 nova_compute[259627]: 2025-10-14 09:04:46.494 2 DEBUG oslo_concurrency.lockutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:46 compute-0 nova_compute[259627]: 2025-10-14 09:04:46.495 2 DEBUG oslo_concurrency.lockutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:46 compute-0 nova_compute[259627]: 2025-10-14 09:04:46.571 2 DEBUG oslo_concurrency.processutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:46 compute-0 podman[318339]: 2025-10-14 09:04:46.724032156 +0000 UTC m=+0.043381748 container create a26f3446342fe5f4fb5a56b9ca3db3662353ce605f670c7895f8b07505a34604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 09:04:46 compute-0 systemd[1]: Started libpod-conmon-a26f3446342fe5f4fb5a56b9ca3db3662353ce605f670c7895f8b07505a34604.scope.
Oct 14 09:04:46 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:04:46 compute-0 podman[318339]: 2025-10-14 09:04:46.704026404 +0000 UTC m=+0.023376026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:04:46 compute-0 podman[318339]: 2025-10-14 09:04:46.810322567 +0000 UTC m=+0.129672189 container init a26f3446342fe5f4fb5a56b9ca3db3662353ce605f670c7895f8b07505a34604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 09:04:46 compute-0 podman[318339]: 2025-10-14 09:04:46.817130654 +0000 UTC m=+0.136480266 container start a26f3446342fe5f4fb5a56b9ca3db3662353ce605f670c7895f8b07505a34604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 09:04:46 compute-0 kind_kepler[318374]: 167 167
Oct 14 09:04:46 compute-0 podman[318339]: 2025-10-14 09:04:46.821880431 +0000 UTC m=+0.141230103 container attach a26f3446342fe5f4fb5a56b9ca3db3662353ce605f670c7895f8b07505a34604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 09:04:46 compute-0 systemd[1]: libpod-a26f3446342fe5f4fb5a56b9ca3db3662353ce605f670c7895f8b07505a34604.scope: Deactivated successfully.
Oct 14 09:04:46 compute-0 podman[318339]: 2025-10-14 09:04:46.822676461 +0000 UTC m=+0.142026053 container died a26f3446342fe5f4fb5a56b9ca3db3662353ce605f670c7895f8b07505a34604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:04:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-669081a5f319c5adcf99d7323d93857029afe17de2f02a3d0d8fccfaf4c2ab00-merged.mount: Deactivated successfully.
Oct 14 09:04:46 compute-0 podman[318339]: 2025-10-14 09:04:46.861860544 +0000 UTC m=+0.181210136 container remove a26f3446342fe5f4fb5a56b9ca3db3662353ce605f670c7895f8b07505a34604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kepler, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 09:04:46 compute-0 nova_compute[259627]: 2025-10-14 09:04:46.867 2 DEBUG nova.compute.manager [req-d20e3b74-c664-4e61-902d-55dcec15b960 req-94adbed2-27b5-4d39-a03f-873b4681c62c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received event network-vif-plugged-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:46 compute-0 nova_compute[259627]: 2025-10-14 09:04:46.867 2 DEBUG oslo_concurrency.lockutils [req-d20e3b74-c664-4e61-902d-55dcec15b960 req-94adbed2-27b5-4d39-a03f-873b4681c62c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:46 compute-0 nova_compute[259627]: 2025-10-14 09:04:46.868 2 DEBUG oslo_concurrency.lockutils [req-d20e3b74-c664-4e61-902d-55dcec15b960 req-94adbed2-27b5-4d39-a03f-873b4681c62c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:46 compute-0 nova_compute[259627]: 2025-10-14 09:04:46.868 2 DEBUG oslo_concurrency.lockutils [req-d20e3b74-c664-4e61-902d-55dcec15b960 req-94adbed2-27b5-4d39-a03f-873b4681c62c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:46 compute-0 nova_compute[259627]: 2025-10-14 09:04:46.868 2 DEBUG nova.compute.manager [req-d20e3b74-c664-4e61-902d-55dcec15b960 req-94adbed2-27b5-4d39-a03f-873b4681c62c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] No waiting events found dispatching network-vif-plugged-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:46 compute-0 nova_compute[259627]: 2025-10-14 09:04:46.868 2 WARNING nova.compute.manager [req-d20e3b74-c664-4e61-902d-55dcec15b960 req-94adbed2-27b5-4d39-a03f-873b4681c62c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received unexpected event network-vif-plugged-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 for instance with vm_state deleted and task_state None.
Oct 14 09:04:46 compute-0 systemd[1]: libpod-conmon-a26f3446342fe5f4fb5a56b9ca3db3662353ce605f670c7895f8b07505a34604.scope: Deactivated successfully.
Oct 14 09:04:46 compute-0 nova_compute[259627]: 2025-10-14 09:04:46.936 2 DEBUG nova.compute.manager [req-e0b42417-c412-444c-a4bf-4f8c387bb3c6 req-c2fbe515-f97a-4b86-ad58-ed719c4870e0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received event network-vif-deleted-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:04:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2105800694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.019 2 DEBUG oslo_concurrency.processutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.028 2 DEBUG nova.compute.provider_tree [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.048 2 DEBUG nova.scheduler.client.report [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:04:47 compute-0 podman[318398]: 2025-10-14 09:04:47.055508884 +0000 UTC m=+0.070548444 container create 3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.084 2 DEBUG oslo_concurrency.lockutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:47 compute-0 systemd[1]: Started libpod-conmon-3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de.scope.
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.116 2 INFO nova.scheduler.client.report [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Deleted allocations for instance e038df86-1323-4b04-afae-9fe68c98c22c
Oct 14 09:04:47 compute-0 podman[318398]: 2025-10-14 09:04:47.02685595 +0000 UTC m=+0.041895470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:04:47 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67e4b92f1e67f68d2cdfb2cb402cc800814ee9f9eba3d0a2943244ae352b352/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67e4b92f1e67f68d2cdfb2cb402cc800814ee9f9eba3d0a2943244ae352b352/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67e4b92f1e67f68d2cdfb2cb402cc800814ee9f9eba3d0a2943244ae352b352/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67e4b92f1e67f68d2cdfb2cb402cc800814ee9f9eba3d0a2943244ae352b352/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:47 compute-0 podman[318398]: 2025-10-14 09:04:47.157476482 +0000 UTC m=+0.172516022 container init 3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:04:47 compute-0 podman[318398]: 2025-10-14 09:04:47.174433218 +0000 UTC m=+0.189472778 container start 3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 09:04:47 compute-0 podman[318398]: 2025-10-14 09:04:47.178531899 +0000 UTC m=+0.193571429 container attach 3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.221 2 DEBUG oslo_concurrency.lockutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.370 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.374 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.394 2 DEBUG nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:04:47 compute-0 ceph-mon[74249]: pgmap v1475: 305 pgs: 305 active+clean; 73 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 17 KiB/s wr, 99 op/s
Oct 14 09:04:47 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2105800694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.483 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.486 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.497 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.498 2 INFO nova.compute.claims [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:04:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:47.515 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.616 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:04:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 73 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 4.1 KiB/s wr, 99 op/s
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.858 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.859 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.878 2 DEBUG nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:04:47 compute-0 nova_compute[259627]: 2025-10-14 09:04:47.959 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:04:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/575635753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.107 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.112 2 DEBUG nova.compute.provider_tree [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.131 2 DEBUG nova.scheduler.client.report [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.150 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.150 2 DEBUG nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.152 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.159 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.159 2 INFO nova.compute.claims [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:04:48 compute-0 infallible_hertz[318416]: {
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:         "osd_id": 2,
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:         "type": "bluestore"
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:     },
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:         "osd_id": 1,
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:         "type": "bluestore"
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:     },
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:         "osd_id": 0,
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:         "type": "bluestore"
Oct 14 09:04:48 compute-0 infallible_hertz[318416]:     }
Oct 14 09:04:48 compute-0 infallible_hertz[318416]: }
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.212 2 DEBUG nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.214 2 DEBUG nova.network.neutron [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:04:48 compute-0 systemd[1]: libpod-3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de.scope: Deactivated successfully.
Oct 14 09:04:48 compute-0 podman[318398]: 2025-10-14 09:04:48.219103304 +0000 UTC m=+1.234142864 container died 3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 09:04:48 compute-0 systemd[1]: libpod-3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de.scope: Consumed 1.047s CPU time.
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.242 2 INFO nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:04:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e67e4b92f1e67f68d2cdfb2cb402cc800814ee9f9eba3d0a2943244ae352b352-merged.mount: Deactivated successfully.
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.275 2 DEBUG nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:04:48 compute-0 podman[318398]: 2025-10-14 09:04:48.276250599 +0000 UTC m=+1.291290119 container remove 3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:04:48 compute-0 systemd[1]: libpod-conmon-3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de.scope: Deactivated successfully.
Oct 14 09:04:48 compute-0 sudo[318271]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:04:48 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:04:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:04:48 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:04:48 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 43bfb54d-6621-4c8f-9561-3d1a3e3b0989 does not exist
Oct 14 09:04:48 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev ff13fbef-07d3-4797-ab9f-737deb07e2d5 does not exist
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.344 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.375 2 DEBUG nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.376 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.377 2 INFO nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Creating image(s)
Oct 14 09:04:48 compute-0 sudo[318484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:04:48 compute-0 sudo[318484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:48 compute-0 sudo[318484]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.399 2 DEBUG nova.storage.rbd_utils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:48 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/575635753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:04:48 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:04:48 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.428 2 DEBUG nova.storage.rbd_utils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:48 compute-0 sudo[318528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.449 2 DEBUG nova.storage.rbd_utils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:48 compute-0 sudo[318528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.452 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:48 compute-0 sudo[318528]: pam_unix(sudo:session): session closed for user root
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.516 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.517 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.517 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.518 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.535 2 DEBUG nova.storage.rbd_utils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.537 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.772 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:04:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1887727024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.807 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.849 2 DEBUG nova.storage.rbd_utils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] resizing rbd image 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.888 2 DEBUG nova.compute.provider_tree [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.905 2 DEBUG nova.scheduler.client.report [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.953 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.954 2 DEBUG nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.961 2 DEBUG nova.objects.instance [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'migration_context' on Instance uuid 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.989 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.989 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Ensure instance console log exists: /var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.990 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.991 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:48 compute-0 nova_compute[259627]: 2025-10-14 09:04:48.991 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.008 2 DEBUG nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.009 2 DEBUG nova.network.neutron [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.032 2 INFO nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.056 2 DEBUG nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.126 2 DEBUG nova.policy [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd952679a4e6a4fc6bacf42c02d3e92d0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.189 2 DEBUG nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.191 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.192 2 INFO nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Creating image(s)
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.218 2 DEBUG nova.storage.rbd_utils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.239 2 DEBUG nova.storage.rbd_utils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.264 2 DEBUG nova.storage.rbd_utils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.268 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.310 2 DEBUG nova.policy [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.359 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.360 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.361 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.361 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.389 2 DEBUG nova.storage.rbd_utils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.393 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:49 compute-0 ceph-mon[74249]: pgmap v1476: 305 pgs: 305 active+clean; 73 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 4.1 KiB/s wr, 99 op/s
Oct 14 09:04:49 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1887727024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:04:49 compute-0 podman[318817]: 2025-10-14 09:04:49.664097171 +0000 UTC m=+0.070561186 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:04:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 73 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 4.1 KiB/s wr, 99 op/s
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.704 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:49 compute-0 podman[318816]: 2025-10-14 09:04:49.725263625 +0000 UTC m=+0.123100718 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.755 2 DEBUG nova.storage.rbd_utils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] resizing rbd image 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.830 2 DEBUG nova.objects.instance [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'migration_context' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.853 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.854 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Ensure instance console log exists: /var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.854 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.855 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:49 compute-0 nova_compute[259627]: 2025-10-14 09:04:49.855 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:50 compute-0 nova_compute[259627]: 2025-10-14 09:04:50.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:50 compute-0 nova_compute[259627]: 2025-10-14 09:04:50.275 2 DEBUG nova.network.neutron [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Successfully created port: 58429c4c-bdab-4d51-8440-95fb6e0fab00 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:04:50 compute-0 nova_compute[259627]: 2025-10-14 09:04:50.515 2 DEBUG nova.network.neutron [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Successfully created port: 971d99c2-5a60-4cac-8f99-e819d71e419c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:04:51 compute-0 nova_compute[259627]: 2025-10-14 09:04:51.300 2 DEBUG nova.network.neutron [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Successfully updated port: 971d99c2-5a60-4cac-8f99-e819d71e419c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:04:51 compute-0 nova_compute[259627]: 2025-10-14 09:04:51.317 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:51 compute-0 nova_compute[259627]: 2025-10-14 09:04:51.317 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:51 compute-0 nova_compute[259627]: 2025-10-14 09:04:51.317 2 DEBUG nova.network.neutron [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:04:51 compute-0 nova_compute[259627]: 2025-10-14 09:04:51.351 2 DEBUG nova.network.neutron [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Successfully updated port: 58429c4c-bdab-4d51-8440-95fb6e0fab00 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:04:51 compute-0 nova_compute[259627]: 2025-10-14 09:04:51.364 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:51 compute-0 nova_compute[259627]: 2025-10-14 09:04:51.364 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquired lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:51 compute-0 nova_compute[259627]: 2025-10-14 09:04:51.365 2 DEBUG nova.network.neutron [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:04:51 compute-0 nova_compute[259627]: 2025-10-14 09:04:51.413 2 DEBUG nova.compute.manager [req-53c5855b-48e3-49ba-80b2-9d725c97d068 req-7121235a-3875-4843-914d-bd515e829902 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-changed-971d99c2-5a60-4cac-8f99-e819d71e419c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:51 compute-0 nova_compute[259627]: 2025-10-14 09:04:51.413 2 DEBUG nova.compute.manager [req-53c5855b-48e3-49ba-80b2-9d725c97d068 req-7121235a-3875-4843-914d-bd515e829902 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing instance network info cache due to event network-changed-971d99c2-5a60-4cac-8f99-e819d71e419c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:04:51 compute-0 nova_compute[259627]: 2025-10-14 09:04:51.413 2 DEBUG oslo_concurrency.lockutils [req-53c5855b-48e3-49ba-80b2-9d725c97d068 req-7121235a-3875-4843-914d-bd515e829902 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:51 compute-0 ceph-mon[74249]: pgmap v1477: 305 pgs: 305 active+clean; 73 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 4.1 KiB/s wr, 99 op/s
Oct 14 09:04:51 compute-0 nova_compute[259627]: 2025-10-14 09:04:51.516 2 DEBUG nova.compute.manager [req-6a3b91ee-b0ee-4fe0-ab87-d67a73ae97a3 req-da0629c3-ace9-4d55-9cd1-514a7b0fc421 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received event network-changed-58429c4c-bdab-4d51-8440-95fb6e0fab00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:51 compute-0 nova_compute[259627]: 2025-10-14 09:04:51.516 2 DEBUG nova.compute.manager [req-6a3b91ee-b0ee-4fe0-ab87-d67a73ae97a3 req-da0629c3-ace9-4d55-9cd1-514a7b0fc421 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Refreshing instance network info cache due to event network-changed-58429c4c-bdab-4d51-8440-95fb6e0fab00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:04:51 compute-0 nova_compute[259627]: 2025-10-14 09:04:51.516 2 DEBUG oslo_concurrency.lockutils [req-6a3b91ee-b0ee-4fe0-ab87-d67a73ae97a3 req-da0629c3-ace9-4d55-9cd1-514a7b0fc421 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:04:51 compute-0 nova_compute[259627]: 2025-10-14 09:04:51.555 2 DEBUG nova.network.neutron [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:04:51 compute-0 nova_compute[259627]: 2025-10-14 09:04:51.563 2 DEBUG nova.network.neutron [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:04:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 134 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 3.5 MiB/s wr, 169 op/s
Oct 14 09:04:52 compute-0 nova_compute[259627]: 2025-10-14 09:04:52.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:52 compute-0 nova_compute[259627]: 2025-10-14 09:04:52.553 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432677.5520945, 3c8eac8e-dcfa-41ba-9c01-1185061baf5d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:52 compute-0 nova_compute[259627]: 2025-10-14 09:04:52.554 2 INFO nova.compute.manager [-] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] VM Stopped (Lifecycle Event)
Oct 14 09:04:52 compute-0 nova_compute[259627]: 2025-10-14 09:04:52.576 2 DEBUG nova.compute.manager [None req-e06b9750-cf6b-499e-a701-deea2b3443b8 - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.379 2 DEBUG nova.network.neutron [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.406 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.407 2 DEBUG nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Instance network_info: |[{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.408 2 DEBUG oslo_concurrency.lockutils [req-53c5855b-48e3-49ba-80b2-9d725c97d068 req-7121235a-3875-4843-914d-bd515e829902 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.408 2 DEBUG nova.network.neutron [req-53c5855b-48e3-49ba-80b2-9d725c97d068 req-7121235a-3875-4843-914d-bd515e829902 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing network info cache for port 971d99c2-5a60-4cac-8f99-e819d71e419c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.413 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Start _get_guest_xml network_info=[{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.420 2 WARNING nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:04:53 compute-0 ceph-mon[74249]: pgmap v1478: 305 pgs: 305 active+clean; 134 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 3.5 MiB/s wr, 169 op/s
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.434 2 DEBUG nova.virt.libvirt.host [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.435 2 DEBUG nova.virt.libvirt.host [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.439 2 DEBUG nova.virt.libvirt.host [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.440 2 DEBUG nova.virt.libvirt.host [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.441 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.441 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.442 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.443 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.443 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.444 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.444 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.445 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.445 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.446 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.446 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.447 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.452 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 134 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.5 MiB/s wr, 82 op/s
Oct 14 09:04:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:04:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1971864206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.942 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.974 2 DEBUG nova.storage.rbd_utils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:53 compute-0 nova_compute[259627]: 2025-10-14 09:04:53.979 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.221 2 DEBUG nova.network.neutron [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Updating instance_info_cache with network_info: [{"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.248 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Releasing lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.249 2 DEBUG nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Instance network_info: |[{"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.250 2 DEBUG oslo_concurrency.lockutils [req-6a3b91ee-b0ee-4fe0-ab87-d67a73ae97a3 req-da0629c3-ace9-4d55-9cd1-514a7b0fc421 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.251 2 DEBUG nova.network.neutron [req-6a3b91ee-b0ee-4fe0-ab87-d67a73ae97a3 req-da0629c3-ace9-4d55-9cd1-514a7b0fc421 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Refreshing network info cache for port 58429c4c-bdab-4d51-8440-95fb6e0fab00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.255 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Start _get_guest_xml network_info=[{"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.260 2 WARNING nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.265 2 DEBUG nova.virt.libvirt.host [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.266 2 DEBUG nova.virt.libvirt.host [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.268 2 DEBUG nova.virt.libvirt.host [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.269 2 DEBUG nova.virt.libvirt.host [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.269 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.270 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.270 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.271 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.271 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.272 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.272 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.272 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.273 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.273 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.274 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.274 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.278 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:54 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1971864206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:04:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1743216292' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.491 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.493 2 DEBUG nova.virt.libvirt.vif [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:04:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.493 2 DEBUG nova.network.os_vif_util [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.494 2 DEBUG nova.network.os_vif_util [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:79:ab,bridge_name='br-int',has_traffic_filtering=True,id=971d99c2-5a60-4cac-8f99-e819d71e419c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap971d99c2-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.495 2 DEBUG nova.objects.instance [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_devices' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.515 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:04:54 compute-0 nova_compute[259627]:   <uuid>47257c6e-4d10-4d8e-af5a-b57db20048ea</uuid>
Oct 14 09:04:54 compute-0 nova_compute[259627]:   <name>instance-00000036</name>
Oct 14 09:04:54 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:04:54 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:04:54 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:04:53</nova:creationTime>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:04:54 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:04:54 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:04:54 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:04:54 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:04:54 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:04:54 compute-0 nova_compute[259627]:         <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:04:54 compute-0 nova_compute[259627]:         <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:04:54 compute-0 nova_compute[259627]:         <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 09:04:54 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:04:54 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:04:54 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <system>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <entry name="serial">47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <entry name="uuid">47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     </system>
Oct 14 09:04:54 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:04:54 compute-0 nova_compute[259627]:   <os>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:   </os>
Oct 14 09:04:54 compute-0 nova_compute[259627]:   <features>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:   </features>
Oct 14 09:04:54 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:04:54 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:04:54 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk">
Oct 14 09:04:54 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       </source>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:04:54 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config">
Oct 14 09:04:54 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       </source>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:04:54 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:9a:79:ab"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <target dev="tap971d99c2-5a"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log" append="off"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <video>
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     </video>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:04:54 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:04:54 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:04:54 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:04:54 compute-0 nova_compute[259627]: </domain>
Oct 14 09:04:54 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.521 2 DEBUG nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Preparing to wait for external event network-vif-plugged-971d99c2-5a60-4cac-8f99-e819d71e419c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.521 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.521 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.521 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.522 2 DEBUG nova.virt.libvirt.vif [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:04:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.522 2 DEBUG nova.network.os_vif_util [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.522 2 DEBUG nova.network.os_vif_util [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:79:ab,bridge_name='br-int',has_traffic_filtering=True,id=971d99c2-5a60-4cac-8f99-e819d71e419c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap971d99c2-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.525 2 DEBUG os_vif [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:79:ab,bridge_name='br-int',has_traffic_filtering=True,id=971d99c2-5a60-4cac-8f99-e819d71e419c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap971d99c2-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.526 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.526 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.530 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap971d99c2-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.530 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap971d99c2-5a, col_values=(('external_ids', {'iface-id': '971d99c2-5a60-4cac-8f99-e819d71e419c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:79:ab', 'vm-uuid': '47257c6e-4d10-4d8e-af5a-b57db20048ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:54 compute-0 NetworkManager[44885]: <info>  [1760432694.5336] manager: (tap971d99c2-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.539 2 INFO os_vif [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:79:ab,bridge_name='br-int',has_traffic_filtering=True,id=971d99c2-5a60-4cac-8f99-e819d71e419c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap971d99c2-5a')
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.596 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.597 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.597 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:9a:79:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.598 2 INFO nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Using config drive
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.618 2 DEBUG nova.storage.rbd_utils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:04:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2218737721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.761 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.791 2 DEBUG nova.storage.rbd_utils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:54 compute-0 nova_compute[259627]: 2025-10-14 09:04:54.797 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.103 2 INFO nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Creating config drive at /var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/disk.config
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.116 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps9th7ndf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.280 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps9th7ndf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:04:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/460046403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.319 2 DEBUG nova.storage.rbd_utils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.323 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/disk.config 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.364 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.365 2 DEBUG nova.virt.libvirt.vif [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:04:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1496520165',display_name='tempest-ServerActionsTestOtherA-server-1496520165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1496520165',id=53,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHn+mln6XiHS3Dbrh5f5r23+s3Q61qobcQwb2UzGhsgS1DhTJSEpJGmS/ZP0w8jiE9rcTktB/Gz7RvHBySi5EJz+HH+wa+mTFVBHeaIG5cz8L5ypIzO20Wa3eu2dAxGK5A==',key_name='tempest-keypair-1288175355',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-60w53hyn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA-894139105-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:04:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=1ce7a863-d0bf-4ea3-80f5-18675b16ac93,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.366 2 DEBUG nova.network.os_vif_util [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.367 2 DEBUG nova.network.os_vif_util [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:a5:80,bridge_name='br-int',has_traffic_filtering=True,id=58429c4c-bdab-4d51-8440-95fb6e0fab00,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58429c4c-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.369 2 DEBUG nova.objects.instance [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.386 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:04:55 compute-0 nova_compute[259627]:   <uuid>1ce7a863-d0bf-4ea3-80f5-18675b16ac93</uuid>
Oct 14 09:04:55 compute-0 nova_compute[259627]:   <name>instance-00000035</name>
Oct 14 09:04:55 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:04:55 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:04:55 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerActionsTestOtherA-server-1496520165</nova:name>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:04:54</nova:creationTime>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:04:55 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:04:55 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:04:55 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:04:55 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:04:55 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:04:55 compute-0 nova_compute[259627]:         <nova:user uuid="d952679a4e6a4fc6bacf42c02d3e92d0">tempest-ServerActionsTestOtherA-894139105-project-member</nova:user>
Oct 14 09:04:55 compute-0 nova_compute[259627]:         <nova:project uuid="4e47722c609640d3a70fee8dd6ff94cc">tempest-ServerActionsTestOtherA-894139105</nova:project>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:04:55 compute-0 nova_compute[259627]:         <nova:port uuid="58429c4c-bdab-4d51-8440-95fb6e0fab00">
Oct 14 09:04:55 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:04:55 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:04:55 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <system>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <entry name="serial">1ce7a863-d0bf-4ea3-80f5-18675b16ac93</entry>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <entry name="uuid">1ce7a863-d0bf-4ea3-80f5-18675b16ac93</entry>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     </system>
Oct 14 09:04:55 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:04:55 compute-0 nova_compute[259627]:   <os>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:   </os>
Oct 14 09:04:55 compute-0 nova_compute[259627]:   <features>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:   </features>
Oct 14 09:04:55 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:04:55 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:04:55 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk">
Oct 14 09:04:55 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       </source>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:04:55 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk.config">
Oct 14 09:04:55 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       </source>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:04:55 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:53:a5:80"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <target dev="tap58429c4c-bd"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93/console.log" append="off"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <video>
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     </video>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:04:55 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:04:55 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:04:55 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:04:55 compute-0 nova_compute[259627]: </domain>
Oct 14 09:04:55 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.394 2 DEBUG nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Preparing to wait for external event network-vif-plugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.394 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.394 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.394 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.395 2 DEBUG nova.virt.libvirt.vif [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:04:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1496520165',display_name='tempest-ServerActionsTestOtherA-server-1496520165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1496520165',id=53,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHn+mln6XiHS3Dbrh5f5r23+s3Q61qobcQwb2UzGhsgS1DhTJSEpJGmS/ZP0w8jiE9rcTktB/Gz7RvHBySi5EJz+HH+wa+mTFVBHeaIG5cz8L5ypIzO20Wa3eu2dAxGK5A==',key_name='tempest-keypair-1288175355',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-60w53hyn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA-894139105-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:04:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=1ce7a863-d0bf-4ea3-80f5-18675b16ac93,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.395 2 DEBUG nova.network.os_vif_util [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.396 2 DEBUG nova.network.os_vif_util [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:a5:80,bridge_name='br-int',has_traffic_filtering=True,id=58429c4c-bdab-4d51-8440-95fb6e0fab00,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58429c4c-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.397 2 DEBUG os_vif [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:a5:80,bridge_name='br-int',has_traffic_filtering=True,id=58429c4c-bdab-4d51-8440-95fb6e0fab00,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58429c4c-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.399 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.400 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.403 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58429c4c-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.404 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap58429c4c-bd, col_values=(('external_ids', {'iface-id': '58429c4c-bdab-4d51-8440-95fb6e0fab00', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:a5:80', 'vm-uuid': '1ce7a863-d0bf-4ea3-80f5-18675b16ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:55 compute-0 NetworkManager[44885]: <info>  [1760432695.4064] manager: (tap58429c4c-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:04:55 compute-0 ceph-mon[74249]: pgmap v1479: 305 pgs: 305 active+clean; 134 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.5 MiB/s wr, 82 op/s
Oct 14 09:04:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1743216292' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2218737721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/460046403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.480 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/disk.config 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.480 2 INFO nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Deleting local config drive /var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/disk.config because it was imported into RBD.
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.505 2 INFO os_vif [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:a5:80,bridge_name='br-int',has_traffic_filtering=True,id=58429c4c-bdab-4d51-8440-95fb6e0fab00,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58429c4c-bd')
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:55 compute-0 NetworkManager[44885]: <info>  [1760432695.5385] manager: (tap971d99c2-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/223)
Oct 14 09:04:55 compute-0 kernel: tap971d99c2-5a: entered promiscuous mode
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:55 compute-0 ovn_controller[152662]: 2025-10-14T09:04:55Z|00517|binding|INFO|Claiming lport 971d99c2-5a60-4cac-8f99-e819d71e419c for this chassis.
Oct 14 09:04:55 compute-0 ovn_controller[152662]: 2025-10-14T09:04:55Z|00518|binding|INFO|971d99c2-5a60-4cac-8f99-e819d71e419c: Claiming fa:16:3e:9a:79:ab 10.100.0.3
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.550 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:79:ab 10.100.0.3'], port_security=['fa:16:3e:9a:79:ab 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '47257c6e-4d10-4d8e-af5a-b57db20048ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c96b2336-ed00-4da6-b121-ce1c9aa6f017', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=971d99c2-5a60-4cac-8f99-e819d71e419c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.552 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 971d99c2-5a60-4cac-8f99-e819d71e419c in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 bound to our chassis
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.553 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.564 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[84c1bf6c-7a60-4e8b-9d77-d0220961d135]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.565 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfc2d149f-a1 in ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.566 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfc2d149f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.566 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9acc79-2ebe-4314-b343-0b1ea71dbbcf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.567 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0846a2ba-bcb9-4948-b9c8-d5abfce2d104]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.565 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.566 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.566 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No VIF found with MAC fa:16:3e:53:a5:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.566 2 INFO nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Using config drive
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.579 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[64b8b4fb-3d3e-4dad-9b6e-63fb506d22a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:55 compute-0 systemd-machined[214636]: New machine qemu-67-instance-00000036.
Oct 14 09:04:55 compute-0 systemd[1]: Started Virtual Machine qemu-67-instance-00000036.
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.600 2 DEBUG nova.storage.rbd_utils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:55 compute-0 systemd-udevd[319154]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.605 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[26ccb375-d5ad-4d8b-933d-db110e1db7b8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:55 compute-0 NetworkManager[44885]: <info>  [1760432695.6269] device (tap971d99c2-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:04:55 compute-0 NetworkManager[44885]: <info>  [1760432695.6285] device (tap971d99c2-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.637 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9004165e-f9f7-4bcd-951f-aaafc5196771]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.644 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a37dab74-74ff-4e29-a6de-a2fdf18e91a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:55 compute-0 NetworkManager[44885]: <info>  [1760432695.6468] manager: (tapfc2d149f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/224)
Oct 14 09:04:55 compute-0 ovn_controller[152662]: 2025-10-14T09:04:55Z|00519|binding|INFO|Setting lport 971d99c2-5a60-4cac-8f99-e819d71e419c ovn-installed in OVS
Oct 14 09:04:55 compute-0 ovn_controller[152662]: 2025-10-14T09:04:55Z|00520|binding|INFO|Setting lport 971d99c2-5a60-4cac-8f99-e819d71e419c up in Southbound
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.685 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4dff792e-feb7-492a-83d2-4f96daebcd69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.690 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb8ad08-1aa0-4ad1-8d83-1af6e58c31f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 134 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 3.5 MiB/s wr, 83 op/s
Oct 14 09:04:55 compute-0 NetworkManager[44885]: <info>  [1760432695.7115] device (tapfc2d149f-a0): carrier: link connected
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.713 2 DEBUG nova.network.neutron [req-53c5855b-48e3-49ba-80b2-9d725c97d068 req-7121235a-3875-4843-914d-bd515e829902 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updated VIF entry in instance network info cache for port 971d99c2-5a60-4cac-8f99-e819d71e419c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.713 2 DEBUG nova.network.neutron [req-53c5855b-48e3-49ba-80b2-9d725c97d068 req-7121235a-3875-4843-914d-bd515e829902 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.718 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c51fd0-8ec7-4824-aaa1-3512d2caedbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.731 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[88bd7c63-f57f-416c-affe-550e260ab681]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647447, 'reachable_time': 19128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319189, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.740 2 DEBUG oslo_concurrency.lockutils [req-53c5855b-48e3-49ba-80b2-9d725c97d068 req-7121235a-3875-4843-914d-bd515e829902 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.746 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1a7034-2103-4a23-bbdc-467525b35da6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:e73e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647447, 'tstamp': 647447}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319190, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.763 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a770bc77-6923-4a5d-8285-553b420408fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647447, 'reachable_time': 19128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319198, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.796 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[457f0635-b6a6-49f8-92ee-ac11e1992d38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.879 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[243685fd-1512-4a7e-b59e-1ccd6ea87abd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.880 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.881 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.882 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:55 compute-0 NetworkManager[44885]: <info>  [1760432695.9215] manager: (tapfc2d149f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.923 2 DEBUG nova.network.neutron [req-6a3b91ee-b0ee-4fe0-ab87-d67a73ae97a3 req-da0629c3-ace9-4d55-9cd1-514a7b0fc421 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Updated VIF entry in instance network info cache for port 58429c4c-bdab-4d51-8440-95fb6e0fab00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.923 2 DEBUG nova.network.neutron [req-6a3b91ee-b0ee-4fe0-ab87-d67a73ae97a3 req-da0629c3-ace9-4d55-9cd1-514a7b0fc421 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Updating instance_info_cache with network_info: [{"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:04:55 compute-0 kernel: tapfc2d149f-a0: entered promiscuous mode
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.927 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:55 compute-0 ovn_controller[152662]: 2025-10-14T09:04:55Z|00521|binding|INFO|Releasing lport 156432ee-35b5-40a9-aded-8066933d9972 from this chassis (sb_readonly=0)
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.940 2 DEBUG oslo_concurrency.lockutils [req-6a3b91ee-b0ee-4fe0-ab87-d67a73ae97a3 req-da0629c3-ace9-4d55-9cd1-514a7b0fc421 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:55 compute-0 nova_compute[259627]: 2025-10-14 09:04:55.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.951 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fc2d149f-aebf-406a-aed2-5161dd22b079.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fc2d149f-aebf-406a-aed2-5161dd22b079.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.952 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6afe58ff-f0d4-40bc-9f3c-c5470ff1cff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.952 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/fc2d149f-aebf-406a-aed2-5161dd22b079.pid.haproxy
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:04:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.953 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'env', 'PROCESS_TAG=haproxy-fc2d149f-aebf-406a-aed2-5161dd22b079', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fc2d149f-aebf-406a-aed2-5161dd22b079.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.071 2 INFO nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Creating config drive at /var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93/disk.config
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.078 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0t4ksl32 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.137 2 DEBUG nova.compute.manager [req-ec8ccaee-da92-48d9-9fcf-4b476afc9b6c req-8d252e40-5090-41eb-95d0-b3c0b297d93f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-971d99c2-5a60-4cac-8f99-e819d71e419c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.138 2 DEBUG oslo_concurrency.lockutils [req-ec8ccaee-da92-48d9-9fcf-4b476afc9b6c req-8d252e40-5090-41eb-95d0-b3c0b297d93f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.138 2 DEBUG oslo_concurrency.lockutils [req-ec8ccaee-da92-48d9-9fcf-4b476afc9b6c req-8d252e40-5090-41eb-95d0-b3c0b297d93f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.138 2 DEBUG oslo_concurrency.lockutils [req-ec8ccaee-da92-48d9-9fcf-4b476afc9b6c req-8d252e40-5090-41eb-95d0-b3c0b297d93f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.138 2 DEBUG nova.compute.manager [req-ec8ccaee-da92-48d9-9fcf-4b476afc9b6c req-8d252e40-5090-41eb-95d0-b3c0b297d93f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Processing event network-vif-plugged-971d99c2-5a60-4cac-8f99-e819d71e419c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.215 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0t4ksl32" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.239 2 DEBUG nova.storage.rbd_utils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.242 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93/disk.config 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:04:56 compute-0 podman[319291]: 2025-10-14 09:04:56.308981356 +0000 UTC m=+0.041458410 container create 7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:04:56 compute-0 systemd[1]: Started libpod-conmon-7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3.scope.
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.340 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432696.3396842, 47257c6e-4d10-4d8e-af5a-b57db20048ea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.341 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] VM Started (Lifecycle Event)
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.344 2 DEBUG nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.348 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.355 2 INFO nova.virt.libvirt.driver [-] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Instance spawned successfully.
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.355 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:04:56 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85f03912ee5e1d4e48b9e053b6890e84bd070633917fc91412b280f99baf2363/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:56 compute-0 podman[319291]: 2025-10-14 09:04:56.373234726 +0000 UTC m=+0.105711800 container init 7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.375 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:56 compute-0 podman[319291]: 2025-10-14 09:04:56.378570107 +0000 UTC m=+0.111047161 container start 7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:04:56 compute-0 podman[319291]: 2025-10-14 09:04:56.287775745 +0000 UTC m=+0.020252819 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.382 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.388 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.389 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.389 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.389 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.390 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.390 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:04:56 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[319321]: [NOTICE]   (319328) : New worker (319330) forked
Oct 14 09:04:56 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[319321]: [NOTICE]   (319328) : Loading success.
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.416 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.416 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432696.3398838, 47257c6e-4d10-4d8e-af5a-b57db20048ea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.416 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] VM Paused (Lifecycle Event)
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.423 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93/disk.config 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.424 2 INFO nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Deleting local config drive /var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93/disk.config because it was imported into RBD.
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.441 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.446 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432696.3480465, 47257c6e-4d10-4d8e-af5a-b57db20048ea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.447 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] VM Resumed (Lifecycle Event)
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.449 2 INFO nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Took 7.26 seconds to spawn the instance on the hypervisor.
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.450 2 DEBUG nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.461 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.467 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:04:56 compute-0 kernel: tap58429c4c-bd: entered promiscuous mode
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:56 compute-0 systemd-udevd[319184]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:04:56 compute-0 ovn_controller[152662]: 2025-10-14T09:04:56Z|00522|binding|INFO|Claiming lport 58429c4c-bdab-4d51-8440-95fb6e0fab00 for this chassis.
Oct 14 09:04:56 compute-0 ovn_controller[152662]: 2025-10-14T09:04:56Z|00523|binding|INFO|58429c4c-bdab-4d51-8440-95fb6e0fab00: Claiming fa:16:3e:53:a5:80 10.100.0.3
Oct 14 09:04:56 compute-0 NetworkManager[44885]: <info>  [1760432696.4812] manager: (tap58429c4c-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.489 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:a5:80 10.100.0.3'], port_security=['fa:16:3e:53:a5:80 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1ce7a863-d0bf-4ea3-80f5-18675b16ac93', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f73d8240-1201-4e28-9385-26f0dd3955ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=58429c4c-bdab-4d51-8440-95fb6e0fab00) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.490 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 58429c4c-bdab-4d51-8440-95fb6e0fab00 in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 bound to our chassis
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.491 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3b87118-f516-4f2d-8696-aa7290af9d83
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.492 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:04:56 compute-0 NetworkManager[44885]: <info>  [1760432696.4979] device (tap58429c4c-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:04:56 compute-0 NetworkManager[44885]: <info>  [1760432696.4987] device (tap58429c4c-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.502 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cee8e013-d14b-4c47-b0c9-26c0926837b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.502 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3b87118-f1 in ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.504 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3b87118-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.504 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fe50a562-e558-45ec-8620-51476a0deeb5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.505 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[770b029d-6550-41d9-acb8-0de9a436cee7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:56 compute-0 systemd-machined[214636]: New machine qemu-68-instance-00000035.
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.516 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[0637f4a5-a866-4ab8-ac8d-8cf73d69f443]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.524 2 INFO nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Took 8.58 seconds to build instance.
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.538 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.545 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f0123846-582c-4f22-a298-de67e87822e7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:56 compute-0 systemd[1]: Started Virtual Machine qemu-68-instance-00000035.
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:56 compute-0 ovn_controller[152662]: 2025-10-14T09:04:56Z|00524|binding|INFO|Setting lport 58429c4c-bdab-4d51-8440-95fb6e0fab00 ovn-installed in OVS
Oct 14 09:04:56 compute-0 ovn_controller[152662]: 2025-10-14T09:04:56Z|00525|binding|INFO|Setting lport 58429c4c-bdab-4d51-8440-95fb6e0fab00 up in Southbound
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.589 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8905815d-155c-4041-ad4b-f8738a1b862f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.594 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be01a6c2-deb2-412e-b990-78376f91033d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:56 compute-0 NetworkManager[44885]: <info>  [1760432696.5951] manager: (tapf3b87118-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/227)
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.626 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ad9c4591-5e3c-456d-a61a-7f863f125fa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.629 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ac860218-bea7-43e5-9289-a607ce083673]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:56 compute-0 NetworkManager[44885]: <info>  [1760432696.6470] device (tapf3b87118-f0): carrier: link connected
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.652 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e8bcf45d-2f42-47de-bb77-6ad4623de003]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.679 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3f2de1b4-cc87-48e4-9f55-36c7b1dff1df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 24755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319369, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.693 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a05d899b-054c-4853-8db8-672fe9353d19]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe87:43f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647540, 'tstamp': 647540}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319370, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.705 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[29b7710c-39c0-4f73-8eba-2df4905384b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 24755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319371, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.735 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f51ad508-9adc-43f5-8156-9dc12f3de803]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.804 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dccbcfd9-260d-413d-a9bb-f8a2ecb02a21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.806 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.806 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.807 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3b87118-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:56 compute-0 NetworkManager[44885]: <info>  [1760432696.8103] manager: (tapf3b87118-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Oct 14 09:04:56 compute-0 kernel: tapf3b87118-f0: entered promiscuous mode
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.821 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3b87118-f0, col_values=(('external_ids', {'iface-id': 'a7f44223-dee5-4a2f-b975-1f04f03b78f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:04:56 compute-0 ovn_controller[152662]: 2025-10-14T09:04:56Z|00526|binding|INFO|Releasing lport a7f44223-dee5-4a2f-b975-1f04f03b78f7 from this chassis (sb_readonly=0)
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.829 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3b87118-f516-4f2d-8696-aa7290af9d83.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3b87118-f516-4f2d-8696-aa7290af9d83.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.830 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3df8fe5d-da64-41f5-82d7-580cd80052bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.831 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-f3b87118-f516-4f2d-8696-aa7290af9d83
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/f3b87118-f516-4f2d-8696-aa7290af9d83.pid.haproxy
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID f3b87118-f516-4f2d-8696-aa7290af9d83
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:04:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.832 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'env', 'PROCESS_TAG=haproxy-f3b87118-f516-4f2d-8696-aa7290af9d83', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3b87118-f516-4f2d-8696-aa7290af9d83.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:04:56 compute-0 nova_compute[259627]: 2025-10-14 09:04:56.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:57 compute-0 podman[319445]: 2025-10-14 09:04:57.246676431 +0000 UTC m=+0.051892267 container create 5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 09:04:57 compute-0 nova_compute[259627]: 2025-10-14 09:04:57.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:57 compute-0 systemd[1]: Started libpod-conmon-5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b.scope.
Oct 14 09:04:57 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c59812b275a7f08f721c19ad28122ba9e2d64d1426f442032d01ef6fa1360d6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:04:57 compute-0 podman[319445]: 2025-10-14 09:04:57.219970455 +0000 UTC m=+0.025186341 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:04:57 compute-0 podman[319445]: 2025-10-14 09:04:57.317072152 +0000 UTC m=+0.122288008 container init 5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:04:57 compute-0 podman[319445]: 2025-10-14 09:04:57.323362097 +0000 UTC m=+0.128577933 container start 5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 09:04:57 compute-0 neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83[319460]: [NOTICE]   (319464) : New worker (319466) forked
Oct 14 09:04:57 compute-0 neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83[319460]: [NOTICE]   (319464) : Loading success.
Oct 14 09:04:57 compute-0 nova_compute[259627]: 2025-10-14 09:04:57.444 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432697.4445388, 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:57 compute-0 nova_compute[259627]: 2025-10-14 09:04:57.445 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] VM Started (Lifecycle Event)
Oct 14 09:04:57 compute-0 ceph-mon[74249]: pgmap v1480: 305 pgs: 305 active+clean; 134 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 3.5 MiB/s wr, 83 op/s
Oct 14 09:04:57 compute-0 nova_compute[259627]: 2025-10-14 09:04:57.495 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:57 compute-0 nova_compute[259627]: 2025-10-14 09:04:57.499 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432697.4446344, 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:57 compute-0 nova_compute[259627]: 2025-10-14 09:04:57.501 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] VM Paused (Lifecycle Event)
Oct 14 09:04:57 compute-0 nova_compute[259627]: 2025-10-14 09:04:57.521 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:04:57 compute-0 nova_compute[259627]: 2025-10-14 09:04:57.525 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:04:57 compute-0 nova_compute[259627]: 2025-10-14 09:04:57.546 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:04:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:04:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 134 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 MiB/s wr, 71 op/s
Oct 14 09:04:58 compute-0 nova_compute[259627]: 2025-10-14 09:04:58.236 2 DEBUG nova.compute.manager [req-33b01b59-861c-42fa-b77d-ffdd1687bf17 req-80cc02fb-9b96-4215-a909-912224c91490 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-971d99c2-5a60-4cac-8f99-e819d71e419c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:04:58 compute-0 nova_compute[259627]: 2025-10-14 09:04:58.237 2 DEBUG oslo_concurrency.lockutils [req-33b01b59-861c-42fa-b77d-ffdd1687bf17 req-80cc02fb-9b96-4215-a909-912224c91490 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:04:58 compute-0 nova_compute[259627]: 2025-10-14 09:04:58.238 2 DEBUG oslo_concurrency.lockutils [req-33b01b59-861c-42fa-b77d-ffdd1687bf17 req-80cc02fb-9b96-4215-a909-912224c91490 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:04:58 compute-0 nova_compute[259627]: 2025-10-14 09:04:58.239 2 DEBUG oslo_concurrency.lockutils [req-33b01b59-861c-42fa-b77d-ffdd1687bf17 req-80cc02fb-9b96-4215-a909-912224c91490 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:04:58 compute-0 nova_compute[259627]: 2025-10-14 09:04:58.239 2 DEBUG nova.compute.manager [req-33b01b59-861c-42fa-b77d-ffdd1687bf17 req-80cc02fb-9b96-4215-a909-912224c91490 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-971d99c2-5a60-4cac-8f99-e819d71e419c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:04:58 compute-0 nova_compute[259627]: 2025-10-14 09:04:58.240 2 WARNING nova.compute.manager [req-33b01b59-861c-42fa-b77d-ffdd1687bf17 req-80cc02fb-9b96-4215-a909-912224c91490 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-971d99c2-5a60-4cac-8f99-e819d71e419c for instance with vm_state active and task_state None.
Oct 14 09:04:58 compute-0 ovn_controller[152662]: 2025-10-14T09:04:58Z|00527|binding|INFO|Releasing lport 156432ee-35b5-40a9-aded-8066933d9972 from this chassis (sb_readonly=0)
Oct 14 09:04:58 compute-0 ovn_controller[152662]: 2025-10-14T09:04:58Z|00528|binding|INFO|Releasing lport a7f44223-dee5-4a2f-b975-1f04f03b78f7 from this chassis (sb_readonly=0)
Oct 14 09:04:58 compute-0 NetworkManager[44885]: <info>  [1760432698.7622] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Oct 14 09:04:58 compute-0 nova_compute[259627]: 2025-10-14 09:04:58.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:58 compute-0 NetworkManager[44885]: <info>  [1760432698.7645] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Oct 14 09:04:58 compute-0 nova_compute[259627]: 2025-10-14 09:04:58.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:58 compute-0 ovn_controller[152662]: 2025-10-14T09:04:58Z|00529|binding|INFO|Releasing lport 156432ee-35b5-40a9-aded-8066933d9972 from this chassis (sb_readonly=0)
Oct 14 09:04:58 compute-0 ovn_controller[152662]: 2025-10-14T09:04:58Z|00530|binding|INFO|Releasing lport a7f44223-dee5-4a2f-b975-1f04f03b78f7 from this chassis (sb_readonly=0)
Oct 14 09:04:58 compute-0 nova_compute[259627]: 2025-10-14 09:04:58.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:04:59 compute-0 ceph-mon[74249]: pgmap v1481: 305 pgs: 305 active+clean; 134 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 MiB/s wr, 71 op/s
Oct 14 09:04:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 134 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 MiB/s wr, 71 op/s
Oct 14 09:04:59 compute-0 nova_compute[259627]: 2025-10-14 09:04:59.971 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432684.9695382, e038df86-1323-4b04-afae-9fe68c98c22c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:04:59 compute-0 nova_compute[259627]: 2025-10-14 09:04:59.972 2 INFO nova.compute.manager [-] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] VM Stopped (Lifecycle Event)
Oct 14 09:04:59 compute-0 nova_compute[259627]: 2025-10-14 09:04:59.992 2 DEBUG nova.compute.manager [None req-9be7d0c4-fd42-495a-bc55-ce86df4f0d9a - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:05:00 compute-0 nova_compute[259627]: 2025-10-14 09:05:00.342 2 DEBUG nova.compute.manager [req-fdee0557-a2fc-43c2-b802-871c8b485dc1 req-4d4a0d2b-edef-4edf-bec9-a08f06fab450 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-changed-971d99c2-5a60-4cac-8f99-e819d71e419c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:00 compute-0 nova_compute[259627]: 2025-10-14 09:05:00.343 2 DEBUG nova.compute.manager [req-fdee0557-a2fc-43c2-b802-871c8b485dc1 req-4d4a0d2b-edef-4edf-bec9-a08f06fab450 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing instance network info cache due to event network-changed-971d99c2-5a60-4cac-8f99-e819d71e419c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:05:00 compute-0 nova_compute[259627]: 2025-10-14 09:05:00.344 2 DEBUG oslo_concurrency.lockutils [req-fdee0557-a2fc-43c2-b802-871c8b485dc1 req-4d4a0d2b-edef-4edf-bec9-a08f06fab450 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:05:00 compute-0 nova_compute[259627]: 2025-10-14 09:05:00.344 2 DEBUG oslo_concurrency.lockutils [req-fdee0557-a2fc-43c2-b802-871c8b485dc1 req-4d4a0d2b-edef-4edf-bec9-a08f06fab450 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:05:00 compute-0 nova_compute[259627]: 2025-10-14 09:05:00.345 2 DEBUG nova.network.neutron [req-fdee0557-a2fc-43c2-b802-871c8b485dc1 req-4d4a0d2b-edef-4edf-bec9-a08f06fab450 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing network info cache for port 971d99c2-5a60-4cac-8f99-e819d71e419c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:05:00 compute-0 nova_compute[259627]: 2025-10-14 09:05:00.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:01 compute-0 ceph-mon[74249]: pgmap v1482: 305 pgs: 305 active+clean; 134 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 MiB/s wr, 71 op/s
Oct 14 09:05:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 134 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.040 2 DEBUG nova.network.neutron [req-fdee0557-a2fc-43c2-b802-871c8b485dc1 req-4d4a0d2b-edef-4edf-bec9-a08f06fab450 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updated VIF entry in instance network info cache for port 971d99c2-5a60-4cac-8f99-e819d71e419c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.041 2 DEBUG nova.network.neutron [req-fdee0557-a2fc-43c2-b802-871c8b485dc1 req-4d4a0d2b-edef-4edf-bec9-a08f06fab450 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.064 2 DEBUG oslo_concurrency.lockutils [req-fdee0557-a2fc-43c2-b802-871c8b485dc1 req-4d4a0d2b-edef-4edf-bec9-a08f06fab450 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.116 2 DEBUG nova.compute.manager [req-08a9b2ea-28ae-48cd-95af-5ee2491bd5d6 req-4a8abe34-a1f3-4e5b-83e5-0e431220d1dc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received event network-vif-plugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.117 2 DEBUG oslo_concurrency.lockutils [req-08a9b2ea-28ae-48cd-95af-5ee2491bd5d6 req-4a8abe34-a1f3-4e5b-83e5-0e431220d1dc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.118 2 DEBUG oslo_concurrency.lockutils [req-08a9b2ea-28ae-48cd-95af-5ee2491bd5d6 req-4a8abe34-a1f3-4e5b-83e5-0e431220d1dc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.119 2 DEBUG oslo_concurrency.lockutils [req-08a9b2ea-28ae-48cd-95af-5ee2491bd5d6 req-4a8abe34-a1f3-4e5b-83e5-0e431220d1dc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.119 2 DEBUG nova.compute.manager [req-08a9b2ea-28ae-48cd-95af-5ee2491bd5d6 req-4a8abe34-a1f3-4e5b-83e5-0e431220d1dc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Processing event network-vif-plugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.120 2 DEBUG nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.124 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432702.1243072, 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.125 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] VM Resumed (Lifecycle Event)
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.127 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.132 2 INFO nova.virt.libvirt.driver [-] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Instance spawned successfully.
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.132 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.155 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.158 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.164 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.165 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.165 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.165 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.166 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.166 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.204 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.241 2 INFO nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Took 13.87 seconds to spawn the instance on the hypervisor.
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.243 2 DEBUG nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.320 2 INFO nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Took 14.87 seconds to build instance.
Oct 14 09:05:02 compute-0 nova_compute[259627]: 2025-10-14 09:05:02.347 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:05:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:05:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:05:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:05:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:05:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:05:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:05:03 compute-0 ceph-mon[74249]: pgmap v1483: 305 pgs: 305 active+clean; 134 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct 14 09:05:03 compute-0 nova_compute[259627]: 2025-10-14 09:05:03.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 134 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 83 op/s
Oct 14 09:05:04 compute-0 nova_compute[259627]: 2025-10-14 09:05:04.238 2 DEBUG nova.compute.manager [req-1416b009-f3a0-4755-a0c1-4d4ee8f19798 req-1c8af559-a8fa-4126-ae83-c0e57b0abc36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received event network-vif-plugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:04 compute-0 nova_compute[259627]: 2025-10-14 09:05:04.239 2 DEBUG oslo_concurrency.lockutils [req-1416b009-f3a0-4755-a0c1-4d4ee8f19798 req-1c8af559-a8fa-4126-ae83-c0e57b0abc36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:04 compute-0 nova_compute[259627]: 2025-10-14 09:05:04.240 2 DEBUG oslo_concurrency.lockutils [req-1416b009-f3a0-4755-a0c1-4d4ee8f19798 req-1c8af559-a8fa-4126-ae83-c0e57b0abc36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:04 compute-0 nova_compute[259627]: 2025-10-14 09:05:04.241 2 DEBUG oslo_concurrency.lockutils [req-1416b009-f3a0-4755-a0c1-4d4ee8f19798 req-1c8af559-a8fa-4126-ae83-c0e57b0abc36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:04 compute-0 nova_compute[259627]: 2025-10-14 09:05:04.241 2 DEBUG nova.compute.manager [req-1416b009-f3a0-4755-a0c1-4d4ee8f19798 req-1c8af559-a8fa-4126-ae83-c0e57b0abc36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] No waiting events found dispatching network-vif-plugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:05:04 compute-0 nova_compute[259627]: 2025-10-14 09:05:04.242 2 WARNING nova.compute.manager [req-1416b009-f3a0-4755-a0c1-4d4ee8f19798 req-1c8af559-a8fa-4126-ae83-c0e57b0abc36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received unexpected event network-vif-plugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 for instance with vm_state active and task_state None.
Oct 14 09:05:05 compute-0 nova_compute[259627]: 2025-10-14 09:05:05.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:05 compute-0 nova_compute[259627]: 2025-10-14 09:05:05.485 2 DEBUG nova.compute.manager [req-58ca31a3-974c-4021-9b35-1967b1e1fb52 req-45515b9a-03f7-4224-bb1f-9ce62132760a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received event network-changed-58429c4c-bdab-4d51-8440-95fb6e0fab00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:05 compute-0 nova_compute[259627]: 2025-10-14 09:05:05.485 2 DEBUG nova.compute.manager [req-58ca31a3-974c-4021-9b35-1967b1e1fb52 req-45515b9a-03f7-4224-bb1f-9ce62132760a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Refreshing instance network info cache due to event network-changed-58429c4c-bdab-4d51-8440-95fb6e0fab00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:05:05 compute-0 nova_compute[259627]: 2025-10-14 09:05:05.486 2 DEBUG oslo_concurrency.lockutils [req-58ca31a3-974c-4021-9b35-1967b1e1fb52 req-45515b9a-03f7-4224-bb1f-9ce62132760a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:05:05 compute-0 nova_compute[259627]: 2025-10-14 09:05:05.486 2 DEBUG oslo_concurrency.lockutils [req-58ca31a3-974c-4021-9b35-1967b1e1fb52 req-45515b9a-03f7-4224-bb1f-9ce62132760a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:05:05 compute-0 nova_compute[259627]: 2025-10-14 09:05:05.487 2 DEBUG nova.network.neutron [req-58ca31a3-974c-4021-9b35-1967b1e1fb52 req-45515b9a-03f7-4224-bb1f-9ce62132760a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Refreshing network info cache for port 58429c4c-bdab-4d51-8440-95fb6e0fab00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:05:05 compute-0 ceph-mon[74249]: pgmap v1484: 305 pgs: 305 active+clean; 134 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 83 op/s
Oct 14 09:05:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:05:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3543546620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:05:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:05:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3543546620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:05:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 134 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 147 op/s
Oct 14 09:05:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3543546620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:05:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3543546620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:05:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:07.023 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:07.024 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:07.024 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:07 compute-0 podman[319476]: 2025-10-14 09:05:07.12721482 +0000 UTC m=+0.063214326 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:05:07 compute-0 podman[319477]: 2025-10-14 09:05:07.135734759 +0000 UTC m=+0.065676176 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:05:07 compute-0 nova_compute[259627]: 2025-10-14 09:05:07.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:07 compute-0 nova_compute[259627]: 2025-10-14 09:05:07.269 2 DEBUG nova.network.neutron [req-58ca31a3-974c-4021-9b35-1967b1e1fb52 req-45515b9a-03f7-4224-bb1f-9ce62132760a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Updated VIF entry in instance network info cache for port 58429c4c-bdab-4d51-8440-95fb6e0fab00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:05:07 compute-0 nova_compute[259627]: 2025-10-14 09:05:07.269 2 DEBUG nova.network.neutron [req-58ca31a3-974c-4021-9b35-1967b1e1fb52 req-45515b9a-03f7-4224-bb1f-9ce62132760a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Updating instance_info_cache with network_info: [{"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:07 compute-0 nova_compute[259627]: 2025-10-14 09:05:07.292 2 DEBUG oslo_concurrency.lockutils [req-58ca31a3-974c-4021-9b35-1967b1e1fb52 req-45515b9a-03f7-4224-bb1f-9ce62132760a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:05:07 compute-0 ceph-mon[74249]: pgmap v1485: 305 pgs: 305 active+clean; 134 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 147 op/s
Oct 14 09:05:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:05:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 134 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 146 op/s
Oct 14 09:05:08 compute-0 ovn_controller[152662]: 2025-10-14T09:05:08Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9a:79:ab 10.100.0.3
Oct 14 09:05:08 compute-0 ovn_controller[152662]: 2025-10-14T09:05:08Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9a:79:ab 10.100.0.3
Oct 14 09:05:08 compute-0 nova_compute[259627]: 2025-10-14 09:05:08.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:09 compute-0 ceph-mon[74249]: pgmap v1486: 305 pgs: 305 active+clean; 134 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 146 op/s
Oct 14 09:05:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 134 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 146 op/s
Oct 14 09:05:10 compute-0 nova_compute[259627]: 2025-10-14 09:05:10.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:11 compute-0 ceph-mon[74249]: pgmap v1487: 305 pgs: 305 active+clean; 134 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 146 op/s
Oct 14 09:05:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 167 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 211 op/s
Oct 14 09:05:12 compute-0 nova_compute[259627]: 2025-10-14 09:05:12.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:05:13 compute-0 ceph-mon[74249]: pgmap v1488: 305 pgs: 305 active+clean; 167 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 211 op/s
Oct 14 09:05:13 compute-0 ovn_controller[152662]: 2025-10-14T09:05:13Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:a5:80 10.100.0.3
Oct 14 09:05:13 compute-0 ovn_controller[152662]: 2025-10-14T09:05:13Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:a5:80 10.100.0.3
Oct 14 09:05:13 compute-0 nova_compute[259627]: 2025-10-14 09:05:13.655 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquiring lock "65c7e6ed-131f-4bca-af69-a1241d048bdb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:13 compute-0 nova_compute[259627]: 2025-10-14 09:05:13.656 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:13 compute-0 nova_compute[259627]: 2025-10-14 09:05:13.674 2 DEBUG nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:05:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 167 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 14 09:05:13 compute-0 nova_compute[259627]: 2025-10-14 09:05:13.794 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:13 compute-0 nova_compute[259627]: 2025-10-14 09:05:13.796 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:13 compute-0 nova_compute[259627]: 2025-10-14 09:05:13.811 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:05:13 compute-0 nova_compute[259627]: 2025-10-14 09:05:13.812 2 INFO nova.compute.claims [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:05:13 compute-0 nova_compute[259627]: 2025-10-14 09:05:13.999 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:05:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:05:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2894978399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.474 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.479 2 DEBUG nova.compute.provider_tree [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.500 2 DEBUG nova.scheduler.client.report [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.530 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.531 2 DEBUG nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:05:14 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2894978399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.590 2 DEBUG nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.591 2 DEBUG nova.network.neutron [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.608 2 INFO nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.628 2 DEBUG nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.717 2 DEBUG nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.718 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.718 2 INFO nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Creating image(s)
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.743 2 DEBUG nova.storage.rbd_utils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] rbd image 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.768 2 DEBUG nova.storage.rbd_utils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] rbd image 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.791 2 DEBUG nova.storage.rbd_utils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] rbd image 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.796 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.881 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.882 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.883 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.883 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.929 2 DEBUG nova.storage.rbd_utils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] rbd image 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.934 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:05:14 compute-0 nova_compute[259627]: 2025-10-14 09:05:14.998 2 DEBUG nova.policy [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '64a22b9370d049c0b189508f3f58f0ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '55e6e0201a064f1390a998f830140354', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:05:15 compute-0 nova_compute[259627]: 2025-10-14 09:05:15.228 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:05:15 compute-0 nova_compute[259627]: 2025-10-14 09:05:15.317 2 DEBUG nova.storage.rbd_utils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] resizing rbd image 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:05:15 compute-0 nova_compute[259627]: 2025-10-14 09:05:15.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:15 compute-0 nova_compute[259627]: 2025-10-14 09:05:15.440 2 DEBUG nova.objects.instance [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lazy-loading 'migration_context' on Instance uuid 65c7e6ed-131f-4bca-af69-a1241d048bdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:05:15 compute-0 nova_compute[259627]: 2025-10-14 09:05:15.462 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:05:15 compute-0 nova_compute[259627]: 2025-10-14 09:05:15.463 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Ensure instance console log exists: /var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:05:15 compute-0 nova_compute[259627]: 2025-10-14 09:05:15.464 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:15 compute-0 nova_compute[259627]: 2025-10-14 09:05:15.465 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:15 compute-0 nova_compute[259627]: 2025-10-14 09:05:15.465 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:15 compute-0 ceph-mon[74249]: pgmap v1489: 305 pgs: 305 active+clean; 167 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 14 09:05:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 224 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.4 MiB/s wr, 217 op/s
Oct 14 09:05:15 compute-0 nova_compute[259627]: 2025-10-14 09:05:15.807 2 DEBUG nova.network.neutron [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Successfully created port: 282dfd9e-9e84-450c-a306-8bc55428feb4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:05:16 compute-0 nova_compute[259627]: 2025-10-14 09:05:16.246 2 DEBUG oslo_concurrency.lockutils [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:16 compute-0 nova_compute[259627]: 2025-10-14 09:05:16.248 2 DEBUG oslo_concurrency.lockutils [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:16 compute-0 nova_compute[259627]: 2025-10-14 09:05:16.248 2 DEBUG nova.objects.instance [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:05:16 compute-0 nova_compute[259627]: 2025-10-14 09:05:16.280 2 DEBUG nova.objects.instance [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_requests' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:05:16 compute-0 nova_compute[259627]: 2025-10-14 09:05:16.299 2 DEBUG nova.network.neutron [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:05:16 compute-0 ceph-mon[74249]: pgmap v1490: 305 pgs: 305 active+clean; 224 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.4 MiB/s wr, 217 op/s
Oct 14 09:05:16 compute-0 nova_compute[259627]: 2025-10-14 09:05:16.781 2 DEBUG nova.policy [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:05:16 compute-0 nova_compute[259627]: 2025-10-14 09:05:16.872 2 DEBUG nova.network.neutron [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Successfully updated port: 282dfd9e-9e84-450c-a306-8bc55428feb4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:05:16 compute-0 nova_compute[259627]: 2025-10-14 09:05:16.894 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquiring lock "refresh_cache-65c7e6ed-131f-4bca-af69-a1241d048bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:05:16 compute-0 nova_compute[259627]: 2025-10-14 09:05:16.895 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquired lock "refresh_cache-65c7e6ed-131f-4bca-af69-a1241d048bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:05:16 compute-0 nova_compute[259627]: 2025-10-14 09:05:16.895 2 DEBUG nova.network.neutron [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:05:16 compute-0 nova_compute[259627]: 2025-10-14 09:05:16.970 2 DEBUG nova.compute.manager [req-ab09288e-0e5f-4955-95ed-1552b848b6d3 req-8663a462-1055-4319-ab85-596d2825084f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received event network-changed-282dfd9e-9e84-450c-a306-8bc55428feb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:16 compute-0 nova_compute[259627]: 2025-10-14 09:05:16.970 2 DEBUG nova.compute.manager [req-ab09288e-0e5f-4955-95ed-1552b848b6d3 req-8663a462-1055-4319-ab85-596d2825084f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Refreshing instance network info cache due to event network-changed-282dfd9e-9e84-450c-a306-8bc55428feb4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:05:16 compute-0 nova_compute[259627]: 2025-10-14 09:05:16.971 2 DEBUG oslo_concurrency.lockutils [req-ab09288e-0e5f-4955-95ed-1552b848b6d3 req-8663a462-1055-4319-ab85-596d2825084f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-65c7e6ed-131f-4bca-af69-a1241d048bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:05:17 compute-0 nova_compute[259627]: 2025-10-14 09:05:17.069 2 DEBUG nova.network.neutron [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:05:17 compute-0 nova_compute[259627]: 2025-10-14 09:05:17.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:17 compute-0 nova_compute[259627]: 2025-10-14 09:05:17.437 2 DEBUG nova.network.neutron [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Successfully created port: e3bc3ac3-6147-40d0-a19c-df111dcf23a5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:05:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:05:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 224 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 5.4 MiB/s wr, 153 op/s
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.151 2 DEBUG nova.network.neutron [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Updating instance_info_cache with network_info: [{"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.159 2 DEBUG nova.network.neutron [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Successfully updated port: e3bc3ac3-6147-40d0-a19c-df111dcf23a5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.171 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Releasing lock "refresh_cache-65c7e6ed-131f-4bca-af69-a1241d048bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.172 2 DEBUG nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Instance network_info: |[{"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.172 2 DEBUG oslo_concurrency.lockutils [req-ab09288e-0e5f-4955-95ed-1552b848b6d3 req-8663a462-1055-4319-ab85-596d2825084f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-65c7e6ed-131f-4bca-af69-a1241d048bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.172 2 DEBUG nova.network.neutron [req-ab09288e-0e5f-4955-95ed-1552b848b6d3 req-8663a462-1055-4319-ab85-596d2825084f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Refreshing network info cache for port 282dfd9e-9e84-450c-a306-8bc55428feb4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.175 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Start _get_guest_xml network_info=[{"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.177 2 DEBUG oslo_concurrency.lockutils [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.177 2 DEBUG oslo_concurrency.lockutils [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.178 2 DEBUG nova.network.neutron [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.182 2 WARNING nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.187 2 DEBUG nova.virt.libvirt.host [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.188 2 DEBUG nova.virt.libvirt.host [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.201 2 DEBUG nova.virt.libvirt.host [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.202 2 DEBUG nova.virt.libvirt.host [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.202 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.203 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.204 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.204 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.205 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.205 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.206 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.206 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.207 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.207 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.207 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.208 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.212 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.342 2 WARNING nova.network.neutron [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] fc2d149f-aebf-406a-aed2-5161dd22b079 already exists in list: networks containing: ['fc2d149f-aebf-406a-aed2-5161dd22b079']. ignoring it
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.427 2 DEBUG nova.compute.manager [req-0c7393a9-714f-4565-88ed-7b182557e0c7 req-8a06e4e4-ef4b-43d9-8643-bb14f65edda3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-changed-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.428 2 DEBUG nova.compute.manager [req-0c7393a9-714f-4565-88ed-7b182557e0c7 req-8a06e4e4-ef4b-43d9-8643-bb14f65edda3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing instance network info cache due to event network-changed-e3bc3ac3-6147-40d0-a19c-df111dcf23a5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.428 2 DEBUG oslo_concurrency.lockutils [req-0c7393a9-714f-4565-88ed-7b182557e0c7 req-8a06e4e4-ef4b-43d9-8643-bb14f65edda3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:05:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:05:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2727335350' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.707 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.742 2 DEBUG nova.storage.rbd_utils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] rbd image 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:05:18 compute-0 nova_compute[259627]: 2025-10-14 09:05:18.747 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:05:18 compute-0 ceph-mon[74249]: pgmap v1491: 305 pgs: 305 active+clean; 224 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 5.4 MiB/s wr, 153 op/s
Oct 14 09:05:18 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2727335350' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:05:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:05:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2178235096' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.249 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.252 2 DEBUG nova.virt.libvirt.vif [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:05:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-100646485',display_name='tempest-ServersTestManualDisk-server-100646485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-100646485',id=55,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBFq4tcZMNkmwexYMay7CcSUnt45X5jGbu/ngQCrGssdHqitjMlfE2R1DP+cztwj+Jbcg255ZB+kgmwp3pbM6el/CrOnrVr2V0onKRN9dF6T7lO2ORJc789YDLKzPg0Nog==',key_name='tempest-keypair-532072477',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55e6e0201a064f1390a998f830140354',ramdisk_id='',reservation_id='r-5wf160ka',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-748280037',owner_user_name='tempest-ServersTestManualDisk-748280037-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:05:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='64a22b9370d049c0b189508f3f58f0ca',uuid=65c7e6ed-131f-4bca-af69-a1241d048bdb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.253 2 DEBUG nova.network.os_vif_util [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Converting VIF {"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.254 2 DEBUG nova.network.os_vif_util [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:50:93,bridge_name='br-int',has_traffic_filtering=True,id=282dfd9e-9e84-450c-a306-8bc55428feb4,network=Network(77fb75b2-483b-47a5-99a5-ae91248b8ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282dfd9e-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.257 2 DEBUG nova.objects.instance [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lazy-loading 'pci_devices' on Instance uuid 65c7e6ed-131f-4bca-af69-a1241d048bdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.285 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:05:19 compute-0 nova_compute[259627]:   <uuid>65c7e6ed-131f-4bca-af69-a1241d048bdb</uuid>
Oct 14 09:05:19 compute-0 nova_compute[259627]:   <name>instance-00000037</name>
Oct 14 09:05:19 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:05:19 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:05:19 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersTestManualDisk-server-100646485</nova:name>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:05:18</nova:creationTime>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:05:19 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:05:19 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:05:19 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:05:19 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:05:19 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:05:19 compute-0 nova_compute[259627]:         <nova:user uuid="64a22b9370d049c0b189508f3f58f0ca">tempest-ServersTestManualDisk-748280037-project-member</nova:user>
Oct 14 09:05:19 compute-0 nova_compute[259627]:         <nova:project uuid="55e6e0201a064f1390a998f830140354">tempest-ServersTestManualDisk-748280037</nova:project>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:05:19 compute-0 nova_compute[259627]:         <nova:port uuid="282dfd9e-9e84-450c-a306-8bc55428feb4">
Oct 14 09:05:19 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:05:19 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:05:19 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <system>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <entry name="serial">65c7e6ed-131f-4bca-af69-a1241d048bdb</entry>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <entry name="uuid">65c7e6ed-131f-4bca-af69-a1241d048bdb</entry>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     </system>
Oct 14 09:05:19 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:05:19 compute-0 nova_compute[259627]:   <os>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:   </os>
Oct 14 09:05:19 compute-0 nova_compute[259627]:   <features>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:   </features>
Oct 14 09:05:19 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:05:19 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:05:19 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/65c7e6ed-131f-4bca-af69-a1241d048bdb_disk">
Oct 14 09:05:19 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       </source>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:05:19 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/65c7e6ed-131f-4bca-af69-a1241d048bdb_disk.config">
Oct 14 09:05:19 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       </source>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:05:19 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:de:50:93"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <target dev="tap282dfd9e-9e"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb/console.log" append="off"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <video>
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     </video>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:05:19 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:05:19 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:05:19 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:05:19 compute-0 nova_compute[259627]: </domain>
Oct 14 09:05:19 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.288 2 DEBUG nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Preparing to wait for external event network-vif-plugged-282dfd9e-9e84-450c-a306-8bc55428feb4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.288 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquiring lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.289 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.289 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.290 2 DEBUG nova.virt.libvirt.vif [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:05:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-100646485',display_name='tempest-ServersTestManualDisk-server-100646485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-100646485',id=55,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBFq4tcZMNkmwexYMay7CcSUnt45X5jGbu/ngQCrGssdHqitjMlfE2R1DP+cztwj+Jbcg255ZB+kgmwp3pbM6el/CrOnrVr2V0onKRN9dF6T7lO2ORJc789YDLKzPg0Nog==',key_name='tempest-keypair-532072477',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55e6e0201a064f1390a998f830140354',ramdisk_id='',reservation_id='r-5wf160ka',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-748280037',owner_user_name='tempest-ServersTestManualDisk-748280037-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:05:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='64a22b9370d049c0b189508f3f58f0ca',uuid=65c7e6ed-131f-4bca-af69-a1241d048bdb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.291 2 DEBUG nova.network.os_vif_util [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Converting VIF {"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.292 2 DEBUG nova.network.os_vif_util [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:50:93,bridge_name='br-int',has_traffic_filtering=True,id=282dfd9e-9e84-450c-a306-8bc55428feb4,network=Network(77fb75b2-483b-47a5-99a5-ae91248b8ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282dfd9e-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.293 2 DEBUG os_vif [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:50:93,bridge_name='br-int',has_traffic_filtering=True,id=282dfd9e-9e84-450c-a306-8bc55428feb4,network=Network(77fb75b2-483b-47a5-99a5-ae91248b8ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282dfd9e-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.295 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.296 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.302 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap282dfd9e-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.304 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap282dfd9e-9e, col_values=(('external_ids', {'iface-id': '282dfd9e-9e84-450c-a306-8bc55428feb4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:50:93', 'vm-uuid': '65c7e6ed-131f-4bca-af69-a1241d048bdb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:19 compute-0 NetworkManager[44885]: <info>  [1760432719.3070] manager: (tap282dfd9e-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.318 2 INFO os_vif [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:50:93,bridge_name='br-int',has_traffic_filtering=True,id=282dfd9e-9e84-450c-a306-8bc55428feb4,network=Network(77fb75b2-483b-47a5-99a5-ae91248b8ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282dfd9e-9e')
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.395 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.396 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.397 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] No VIF found with MAC fa:16:3e:de:50:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.397 2 INFO nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Using config drive
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.431 2 DEBUG nova.storage.rbd_utils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] rbd image 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:05:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 224 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 5.4 MiB/s wr, 153 op/s
Oct 14 09:05:19 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2178235096' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.839 2 INFO nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Creating config drive at /var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb/disk.config
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.852 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu7ac9h4b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.906 2 DEBUG nova.network.neutron [req-ab09288e-0e5f-4955-95ed-1552b848b6d3 req-8663a462-1055-4319-ab85-596d2825084f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Updated VIF entry in instance network info cache for port 282dfd9e-9e84-450c-a306-8bc55428feb4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.907 2 DEBUG nova.network.neutron [req-ab09288e-0e5f-4955-95ed-1552b848b6d3 req-8663a462-1055-4319-ab85-596d2825084f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Updating instance_info_cache with network_info: [{"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.926 2 DEBUG oslo_concurrency.lockutils [req-ab09288e-0e5f-4955-95ed-1552b848b6d3 req-8663a462-1055-4319-ab85-596d2825084f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-65c7e6ed-131f-4bca-af69-a1241d048bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:05:19 compute-0 nova_compute[259627]: 2025-10-14 09:05:19.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.019 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu7ac9h4b" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.066 2 DEBUG nova.storage.rbd_utils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] rbd image 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.070 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb/disk.config 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.316 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb/disk.config 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.318 2 INFO nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Deleting local config drive /var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb/disk.config because it was imported into RBD.
Oct 14 09:05:20 compute-0 kernel: tap282dfd9e-9e: entered promiscuous mode
Oct 14 09:05:20 compute-0 NetworkManager[44885]: <info>  [1760432720.3758] manager: (tap282dfd9e-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/232)
Oct 14 09:05:20 compute-0 ovn_controller[152662]: 2025-10-14T09:05:20Z|00531|binding|INFO|Claiming lport 282dfd9e-9e84-450c-a306-8bc55428feb4 for this chassis.
Oct 14 09:05:20 compute-0 ovn_controller[152662]: 2025-10-14T09:05:20Z|00532|binding|INFO|282dfd9e-9e84-450c-a306-8bc55428feb4: Claiming fa:16:3e:de:50:93 10.100.0.10
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.385 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:50:93 10.100.0.10'], port_security=['fa:16:3e:de:50:93 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '65c7e6ed-131f-4bca-af69-a1241d048bdb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77fb75b2-483b-47a5-99a5-ae91248b8ed8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55e6e0201a064f1390a998f830140354', 'neutron:revision_number': '2', 'neutron:security_group_ids': '98209f09-275f-46ee-a2c6-16214403e3de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8d21053-8b98-4816-ad89-107cc4743794, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=282dfd9e-9e84-450c-a306-8bc55428feb4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.387 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 282dfd9e-9e84-450c-a306-8bc55428feb4 in datapath 77fb75b2-483b-47a5-99a5-ae91248b8ed8 bound to our chassis
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.389 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77fb75b2-483b-47a5-99a5-ae91248b8ed8
Oct 14 09:05:20 compute-0 ovn_controller[152662]: 2025-10-14T09:05:20Z|00533|binding|INFO|Setting lport 282dfd9e-9e84-450c-a306-8bc55428feb4 ovn-installed in OVS
Oct 14 09:05:20 compute-0 ovn_controller[152662]: 2025-10-14T09:05:20Z|00534|binding|INFO|Setting lport 282dfd9e-9e84-450c-a306-8bc55428feb4 up in Southbound
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.401 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff1e6d8-b370-485a-9b11-4a6a8ec2c9a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.403 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77fb75b2-41 in ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.405 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77fb75b2-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.405 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e68ae350-3d1a-4e29-8f18-1fef832515f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.406 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb337d5-b4b6-4a4d-9bf4-3805f38be0a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:20 compute-0 systemd-machined[214636]: New machine qemu-69-instance-00000037.
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.421 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1b811a-43ff-4d6e-bc73-e32068d3cdf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:20 compute-0 systemd[1]: Started Virtual Machine qemu-69-instance-00000037.
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.439 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4cbf94-cd9b-49d3-bde3-70338a03660a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:20 compute-0 systemd-udevd[319865]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:05:20 compute-0 NetworkManager[44885]: <info>  [1760432720.4599] device (tap282dfd9e-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:05:20 compute-0 NetworkManager[44885]: <info>  [1760432720.4623] device (tap282dfd9e-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:05:20 compute-0 podman[319839]: 2025-10-14 09:05:20.476307072 +0000 UTC m=+0.065040303 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.476 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[780c1315-d5e6-4ec6-a936-7eff9266caef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:20 compute-0 NetworkManager[44885]: <info>  [1760432720.4883] manager: (tap77fb75b2-40): new Veth device (/org/freedesktop/NetworkManager/Devices/233)
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.487 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5299bd08-63d6-405c-a4cf-488c7883dacd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:20 compute-0 podman[319837]: 2025-10-14 09:05:20.512820831 +0000 UTC m=+0.101612483 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.523 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[316fb446-40f4-43b2-93e5-59c97ede23a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.526 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d3480b53-be5d-4eb2-83d3-f3f8eb79de53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:20 compute-0 NetworkManager[44885]: <info>  [1760432720.5540] device (tap77fb75b2-40): carrier: link connected
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.559 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[92ce509a-9787-4da5-ae95-c3b6ceba1d09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.576 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[687d236e-0705-4f1c-b3e3-d09cab6f42ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77fb75b2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:16:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649931, 'reachable_time': 44929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319915, 'error': None, 'target': 'ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.593 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cacb4314-fe02-4472-965c-085cb45ee493]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe79:16e4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 649931, 'tstamp': 649931}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319916, 'error': None, 'target': 'ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.608 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2a9f8c94-83ec-4d35-b15a-1e2fa50ec9b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77fb75b2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:16:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649931, 'reachable_time': 44929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319917, 'error': None, 'target': 'ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.640 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad9f010-7a0c-49d3-a4d4-24ac23439e06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.699 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba158c3-0473-454a-84f7-db2fce86bc2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.700 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77fb75b2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.701 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.701 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77fb75b2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:20 compute-0 NetworkManager[44885]: <info>  [1760432720.7043] manager: (tap77fb75b2-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Oct 14 09:05:20 compute-0 kernel: tap77fb75b2-40: entered promiscuous mode
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.718 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77fb75b2-40, col_values=(('external_ids', {'iface-id': '43fea039-06e3-4d15-8c57-031bfdc08664'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:20 compute-0 ovn_controller[152662]: 2025-10-14T09:05:20Z|00535|binding|INFO|Releasing lport 43fea039-06e3-4d15-8c57-031bfdc08664 from this chassis (sb_readonly=0)
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.724 2 DEBUG nova.compute.manager [req-f8b91534-e8f9-44e0-bb7f-86ce1e9234fa req-5231b6aa-5ac2-45b9-9706-70bdc631b472 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received event network-vif-plugged-282dfd9e-9e84-450c-a306-8bc55428feb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.724 2 DEBUG oslo_concurrency.lockutils [req-f8b91534-e8f9-44e0-bb7f-86ce1e9234fa req-5231b6aa-5ac2-45b9-9706-70bdc631b472 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.725 2 DEBUG oslo_concurrency.lockutils [req-f8b91534-e8f9-44e0-bb7f-86ce1e9234fa req-5231b6aa-5ac2-45b9-9706-70bdc631b472 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.725 2 DEBUG oslo_concurrency.lockutils [req-f8b91534-e8f9-44e0-bb7f-86ce1e9234fa req-5231b6aa-5ac2-45b9-9706-70bdc631b472 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.725 2 DEBUG nova.compute.manager [req-f8b91534-e8f9-44e0-bb7f-86ce1e9234fa req-5231b6aa-5ac2-45b9-9706-70bdc631b472 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Processing event network-vif-plugged-282dfd9e-9e84-450c-a306-8bc55428feb4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.737 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77fb75b2-483b-47a5-99a5-ae91248b8ed8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77fb75b2-483b-47a5-99a5-ae91248b8ed8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.738 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4e6b4088-f5da-429d-9c83-85c6c3e2d044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.739 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-77fb75b2-483b-47a5-99a5-ae91248b8ed8
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/77fb75b2-483b-47a5-99a5-ae91248b8ed8.pid.haproxy
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 77fb75b2-483b-47a5-99a5-ae91248b8ed8
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.741 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8', 'env', 'PROCESS_TAG=haproxy-77fb75b2-483b-47a5-99a5-ae91248b8ed8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77fb75b2-483b-47a5-99a5-ae91248b8ed8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.811 2 DEBUG nova.network.neutron [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:20 compute-0 ceph-mon[74249]: pgmap v1492: 305 pgs: 305 active+clean; 224 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 5.4 MiB/s wr, 153 op/s
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.834 2 DEBUG oslo_concurrency.lockutils [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.836 2 DEBUG oslo_concurrency.lockutils [req-0c7393a9-714f-4565-88ed-7b182557e0c7 req-8a06e4e4-ef4b-43d9-8643-bb14f65edda3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.837 2 DEBUG nova.network.neutron [req-0c7393a9-714f-4565-88ed-7b182557e0c7 req-8a06e4e4-ef4b-43d9-8643-bb14f65edda3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing network info cache for port e3bc3ac3-6147-40d0-a19c-df111dcf23a5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.844 2 DEBUG nova.virt.libvirt.vif [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.845 2 DEBUG nova.network.os_vif_util [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.846 2 DEBUG nova.network.os_vif_util [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.846 2 DEBUG os_vif [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.847 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.848 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.850 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3bc3ac3-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.851 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape3bc3ac3-61, col_values=(('external_ids', {'iface-id': 'e3bc3ac3-6147-40d0-a19c-df111dcf23a5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:94:02', 'vm-uuid': '47257c6e-4d10-4d8e-af5a-b57db20048ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:20 compute-0 NetworkManager[44885]: <info>  [1760432720.8532] manager: (tape3bc3ac3-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.859 2 INFO os_vif [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61')
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.860 2 DEBUG nova.virt.libvirt.vif [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.861 2 DEBUG nova.network.os_vif_util [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.861 2 DEBUG nova.network.os_vif_util [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.864 2 DEBUG nova.virt.libvirt.guest [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] attach device xml: <interface type="ethernet">
Oct 14 09:05:20 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:54:94:02"/>
Oct 14 09:05:20 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:05:20 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:05:20 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:05:20 compute-0 nova_compute[259627]:   <target dev="tape3bc3ac3-61"/>
Oct 14 09:05:20 compute-0 nova_compute[259627]: </interface>
Oct 14 09:05:20 compute-0 nova_compute[259627]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 14 09:05:20 compute-0 kernel: tape3bc3ac3-61: entered promiscuous mode
Oct 14 09:05:20 compute-0 NetworkManager[44885]: <info>  [1760432720.8745] manager: (tape3bc3ac3-61): new Tun device (/org/freedesktop/NetworkManager/Devices/236)
Oct 14 09:05:20 compute-0 ovn_controller[152662]: 2025-10-14T09:05:20Z|00536|binding|INFO|Claiming lport e3bc3ac3-6147-40d0-a19c-df111dcf23a5 for this chassis.
Oct 14 09:05:20 compute-0 ovn_controller[152662]: 2025-10-14T09:05:20Z|00537|binding|INFO|e3bc3ac3-6147-40d0-a19c-df111dcf23a5: Claiming fa:16:3e:54:94:02 10.100.0.4
Oct 14 09:05:20 compute-0 systemd-udevd[319904]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.887 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:94:02 10.100.0.4'], port_security=['fa:16:3e:54:94:02 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '47257c6e-4d10-4d8e-af5a-b57db20048ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e3bc3ac3-6147-40d0-a19c-df111dcf23a5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:05:20 compute-0 NetworkManager[44885]: <info>  [1760432720.8941] device (tape3bc3ac3-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:05:20 compute-0 NetworkManager[44885]: <info>  [1760432720.8951] device (tape3bc3ac3-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:05:20 compute-0 ovn_controller[152662]: 2025-10-14T09:05:20Z|00538|binding|INFO|Setting lport e3bc3ac3-6147-40d0-a19c-df111dcf23a5 ovn-installed in OVS
Oct 14 09:05:20 compute-0 ovn_controller[152662]: 2025-10-14T09:05:20Z|00539|binding|INFO|Setting lport e3bc3ac3-6147-40d0-a19c-df111dcf23a5 up in Southbound
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.964 2 DEBUG nova.virt.libvirt.driver [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.964 2 DEBUG nova.virt.libvirt.driver [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.964 2 DEBUG nova.virt.libvirt.driver [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:9a:79:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.965 2 DEBUG nova.virt.libvirt.driver [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:54:94:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:05:20 compute-0 nova_compute[259627]: 2025-10-14 09:05:20.992 2 DEBUG nova.virt.libvirt.guest [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:05:20 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:05:20 compute-0 nova_compute[259627]:   <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 09:05:20 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:05:20</nova:creationTime>
Oct 14 09:05:20 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:05:20 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:05:20 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:05:20 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:05:20 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:05:20 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:05:20 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:05:20 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:05:20 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:05:20 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:05:20 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:05:20 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:05:20 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:05:20 compute-0 nova_compute[259627]:     <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 09:05:20 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 09:05:20 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:20 compute-0 nova_compute[259627]:     <nova:port uuid="e3bc3ac3-6147-40d0-a19c-df111dcf23a5">
Oct 14 09:05:20 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 09:05:20 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:20 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:05:20 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:05:20 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.024 2 DEBUG oslo_concurrency.lockutils [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:21 compute-0 podman[319953]: 2025-10-14 09:05:21.127624482 +0000 UTC m=+0.066813656 container create 4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 14 09:05:21 compute-0 podman[319953]: 2025-10-14 09:05:21.087221287 +0000 UTC m=+0.026410491 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:05:21 compute-0 systemd[1]: Started libpod-conmon-4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d.scope.
Oct 14 09:05:21 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8ef72225e5c259b1a8bbd67e6e8947df3e042df52c25d7dffcdfcd0e7b97cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:05:21 compute-0 podman[319953]: 2025-10-14 09:05:21.217530286 +0000 UTC m=+0.156719480 container init 4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 14 09:05:21 compute-0 podman[319953]: 2025-10-14 09:05:21.224108289 +0000 UTC m=+0.163297453 container start 4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 09:05:21 compute-0 neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8[320010]: [NOTICE]   (320015) : New worker (320017) forked
Oct 14 09:05:21 compute-0 neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8[320010]: [NOTICE]   (320015) : Loading success.
Oct 14 09:05:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.292 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e3bc3ac3-6147-40d0-a19c-df111dcf23a5 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis
Oct 14 09:05:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.294 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:05:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.314 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a6049538-3459-46cc-9b4b-364fb6dc6e07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.347 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2bf88c-2efb-4ba7-a721-0dcfdf8a667c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.350 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc36a96-2a4a-4b1e-ade4-30b4542086e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.377 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4646b6-6108-4466-9ad8-0b871770dfc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.395 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[344aa24b-5f55-48a3-8f64-cf0b89b87382]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647447, 'reachable_time': 19128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320031, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.410 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f988e5c1-ab05-41b7-a548-644914e0c080]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647459, 'tstamp': 647459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320032, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647463, 'tstamp': 647463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320032, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.412 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.416 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.416 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.417 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.417 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.694 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432721.694389, 65c7e6ed-131f-4bca-af69-a1241d048bdb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.695 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] VM Started (Lifecycle Event)
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.698 2 DEBUG nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.701 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.704 2 INFO nova.virt.libvirt.driver [-] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Instance spawned successfully.
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.704 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:05:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 247 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 6.1 MiB/s wr, 156 op/s
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.738 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.738 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.739 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.740 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.740 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.741 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.745 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.748 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.813 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.814 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432721.694497, 65c7e6ed-131f-4bca-af69-a1241d048bdb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.814 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] VM Paused (Lifecycle Event)
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.841 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.845 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432721.701136, 65c7e6ed-131f-4bca-af69-a1241d048bdb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.845 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] VM Resumed (Lifecycle Event)
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.850 2 INFO nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Took 7.13 seconds to spawn the instance on the hypervisor.
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.851 2 DEBUG nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.865 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.868 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.895 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.914 2 INFO nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Took 8.16 seconds to build instance.
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.928 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:05:21 compute-0 nova_compute[259627]: 2025-10-14 09:05:21.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:05:22 compute-0 nova_compute[259627]: 2025-10-14 09:05:22.258 2 DEBUG nova.compute.manager [req-26d8ff9f-75d8-4868-930c-21cdc33b4a2b req-2d9575dc-6d5e-47bf-9d1c-3eddda0e70ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:22 compute-0 nova_compute[259627]: 2025-10-14 09:05:22.258 2 DEBUG oslo_concurrency.lockutils [req-26d8ff9f-75d8-4868-930c-21cdc33b4a2b req-2d9575dc-6d5e-47bf-9d1c-3eddda0e70ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:22 compute-0 nova_compute[259627]: 2025-10-14 09:05:22.259 2 DEBUG oslo_concurrency.lockutils [req-26d8ff9f-75d8-4868-930c-21cdc33b4a2b req-2d9575dc-6d5e-47bf-9d1c-3eddda0e70ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:22 compute-0 nova_compute[259627]: 2025-10-14 09:05:22.260 2 DEBUG oslo_concurrency.lockutils [req-26d8ff9f-75d8-4868-930c-21cdc33b4a2b req-2d9575dc-6d5e-47bf-9d1c-3eddda0e70ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:22 compute-0 nova_compute[259627]: 2025-10-14 09:05:22.260 2 DEBUG nova.compute.manager [req-26d8ff9f-75d8-4868-930c-21cdc33b4a2b req-2d9575dc-6d5e-47bf-9d1c-3eddda0e70ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:05:22 compute-0 nova_compute[259627]: 2025-10-14 09:05:22.261 2 WARNING nova.compute.manager [req-26d8ff9f-75d8-4868-930c-21cdc33b4a2b req-2d9575dc-6d5e-47bf-9d1c-3eddda0e70ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 for instance with vm_state active and task_state None.
Oct 14 09:05:22 compute-0 nova_compute[259627]: 2025-10-14 09:05:22.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:05:22 compute-0 ceph-mon[74249]: pgmap v1493: 305 pgs: 305 active+clean; 247 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 6.1 MiB/s wr, 156 op/s
Oct 14 09:05:22 compute-0 nova_compute[259627]: 2025-10-14 09:05:22.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:05:22 compute-0 nova_compute[259627]: 2025-10-14 09:05:22.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.003 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.004 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.005 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:05:23 compute-0 ovn_controller[152662]: 2025-10-14T09:05:23Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:94:02 10.100.0.4
Oct 14 09:05:23 compute-0 ovn_controller[152662]: 2025-10-14T09:05:23Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:94:02 10.100.0.4
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.228 2 DEBUG nova.compute.manager [req-a73c5357-9237-4c17-b459-c80553936cf0 req-5f59dff7-d981-47c6-a260-383ac89d4c90 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received event network-vif-plugged-282dfd9e-9e84-450c-a306-8bc55428feb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.229 2 DEBUG oslo_concurrency.lockutils [req-a73c5357-9237-4c17-b459-c80553936cf0 req-5f59dff7-d981-47c6-a260-383ac89d4c90 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.230 2 DEBUG oslo_concurrency.lockutils [req-a73c5357-9237-4c17-b459-c80553936cf0 req-5f59dff7-d981-47c6-a260-383ac89d4c90 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.230 2 DEBUG oslo_concurrency.lockutils [req-a73c5357-9237-4c17-b459-c80553936cf0 req-5f59dff7-d981-47c6-a260-383ac89d4c90 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.230 2 DEBUG nova.compute.manager [req-a73c5357-9237-4c17-b459-c80553936cf0 req-5f59dff7-d981-47c6-a260-383ac89d4c90 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] No waiting events found dispatching network-vif-plugged-282dfd9e-9e84-450c-a306-8bc55428feb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.231 2 WARNING nova.compute.manager [req-a73c5357-9237-4c17-b459-c80553936cf0 req-5f59dff7-d981-47c6-a260-383ac89d4c90 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received unexpected event network-vif-plugged-282dfd9e-9e84-450c-a306-8bc55428feb4 for instance with vm_state active and task_state None.
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.345 2 DEBUG oslo_concurrency.lockutils [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.345 2 DEBUG oslo_concurrency.lockutils [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.346 2 DEBUG nova.objects.instance [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:05:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:05:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4066470348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.523 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.616 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.617 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.619 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.620 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.622 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.622 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:05:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 247 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.805 2 DEBUG nova.objects.instance [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_requests' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.825 2 DEBUG nova.network.neutron [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:05:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4066470348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.838 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.839 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3556MB free_disk=59.876243591308594GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.839 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.840 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.931 2 DEBUG nova.network.neutron [req-0c7393a9-714f-4565-88ed-7b182557e0c7 req-8a06e4e4-ef4b-43d9-8643-bb14f65edda3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updated VIF entry in instance network info cache for port e3bc3ac3-6147-40d0-a19c-df111dcf23a5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.931 2 DEBUG nova.network.neutron [req-0c7393a9-714f-4565-88ed-7b182557e0c7 req-8a06e4e4-ef4b-43d9-8643-bb14f65edda3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.947 2 DEBUG oslo_concurrency.lockutils [req-0c7393a9-714f-4565-88ed-7b182557e0c7 req-8a06e4e4-ef4b-43d9-8643-bb14f65edda3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.976 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.977 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 47257c6e-4d10-4d8e-af5a-b57db20048ea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.977 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 65c7e6ed-131f-4bca-af69-a1241d048bdb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.979 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:05:23 compute-0 nova_compute[259627]: 2025-10-14 09:05:23.979 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:05:24 compute-0 nova_compute[259627]: 2025-10-14 09:05:24.026 2 DEBUG nova.policy [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:05:24 compute-0 nova_compute[259627]: 2025-10-14 09:05:24.064 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:05:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:05:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/546375787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:05:24 compute-0 nova_compute[259627]: 2025-10-14 09:05:24.520 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:05:24 compute-0 nova_compute[259627]: 2025-10-14 09:05:24.525 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:05:24 compute-0 nova_compute[259627]: 2025-10-14 09:05:24.541 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:05:24 compute-0 nova_compute[259627]: 2025-10-14 09:05:24.560 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:05:24 compute-0 nova_compute[259627]: 2025-10-14 09:05:24.560 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:24 compute-0 ceph-mon[74249]: pgmap v1494: 305 pgs: 305 active+clean; 247 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 14 09:05:24 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/546375787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:05:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 247 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 14 09:05:25 compute-0 nova_compute[259627]: 2025-10-14 09:05:25.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:26 compute-0 nova_compute[259627]: 2025-10-14 09:05:26.201 2 DEBUG nova.network.neutron [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Successfully created port: b624404b-6681-4fc8-a870-dc9418e2de0f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:05:26 compute-0 ceph-mon[74249]: pgmap v1495: 305 pgs: 305 active+clean; 247 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 14 09:05:27 compute-0 nova_compute[259627]: 2025-10-14 09:05:27.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:27 compute-0 nova_compute[259627]: 2025-10-14 09:05:27.291 2 DEBUG nova.compute.manager [req-ae1621d3-df36-4282-b160-9d4b50dffdbc req-20fbc73a-9635-4a6c-ab18-03b14136eecf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:27 compute-0 nova_compute[259627]: 2025-10-14 09:05:27.291 2 DEBUG oslo_concurrency.lockutils [req-ae1621d3-df36-4282-b160-9d4b50dffdbc req-20fbc73a-9635-4a6c-ab18-03b14136eecf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:27 compute-0 nova_compute[259627]: 2025-10-14 09:05:27.292 2 DEBUG oslo_concurrency.lockutils [req-ae1621d3-df36-4282-b160-9d4b50dffdbc req-20fbc73a-9635-4a6c-ab18-03b14136eecf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:27 compute-0 nova_compute[259627]: 2025-10-14 09:05:27.292 2 DEBUG oslo_concurrency.lockutils [req-ae1621d3-df36-4282-b160-9d4b50dffdbc req-20fbc73a-9635-4a6c-ab18-03b14136eecf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:27 compute-0 nova_compute[259627]: 2025-10-14 09:05:27.293 2 DEBUG nova.compute.manager [req-ae1621d3-df36-4282-b160-9d4b50dffdbc req-20fbc73a-9635-4a6c-ab18-03b14136eecf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:05:27 compute-0 nova_compute[259627]: 2025-10-14 09:05:27.293 2 WARNING nova.compute.manager [req-ae1621d3-df36-4282-b160-9d4b50dffdbc req-20fbc73a-9635-4a6c-ab18-03b14136eecf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 for instance with vm_state active and task_state None.
Oct 14 09:05:27 compute-0 nova_compute[259627]: 2025-10-14 09:05:27.634 2 DEBUG nova.network.neutron [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Successfully updated port: b624404b-6681-4fc8-a870-dc9418e2de0f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:05:27 compute-0 nova_compute[259627]: 2025-10-14 09:05:27.655 2 DEBUG oslo_concurrency.lockutils [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:05:27 compute-0 nova_compute[259627]: 2025-10-14 09:05:27.656 2 DEBUG oslo_concurrency.lockutils [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:05:27 compute-0 nova_compute[259627]: 2025-10-14 09:05:27.656 2 DEBUG nova.network.neutron [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:05:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:05:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 247 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 717 KiB/s wr, 76 op/s
Oct 14 09:05:27 compute-0 nova_compute[259627]: 2025-10-14 09:05:27.871 2 WARNING nova.network.neutron [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] fc2d149f-aebf-406a-aed2-5161dd22b079 already exists in list: networks containing: ['fc2d149f-aebf-406a-aed2-5161dd22b079']. ignoring it
Oct 14 09:05:27 compute-0 nova_compute[259627]: 2025-10-14 09:05:27.871 2 WARNING nova.network.neutron [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] fc2d149f-aebf-406a-aed2-5161dd22b079 already exists in list: networks containing: ['fc2d149f-aebf-406a-aed2-5161dd22b079']. ignoring it
Oct 14 09:05:28 compute-0 ceph-mon[74249]: pgmap v1496: 305 pgs: 305 active+clean; 247 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 717 KiB/s wr, 76 op/s
Oct 14 09:05:28 compute-0 nova_compute[259627]: 2025-10-14 09:05:28.989 2 DEBUG nova.compute.manager [req-c0b4a458-49fe-4a5d-bc42-c15c187610af req-cd876a1a-f52c-45f0-9f8a-49d405f0234c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received event network-changed-282dfd9e-9e84-450c-a306-8bc55428feb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:28 compute-0 nova_compute[259627]: 2025-10-14 09:05:28.989 2 DEBUG nova.compute.manager [req-c0b4a458-49fe-4a5d-bc42-c15c187610af req-cd876a1a-f52c-45f0-9f8a-49d405f0234c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Refreshing instance network info cache due to event network-changed-282dfd9e-9e84-450c-a306-8bc55428feb4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:05:28 compute-0 nova_compute[259627]: 2025-10-14 09:05:28.990 2 DEBUG oslo_concurrency.lockutils [req-c0b4a458-49fe-4a5d-bc42-c15c187610af req-cd876a1a-f52c-45f0-9f8a-49d405f0234c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-65c7e6ed-131f-4bca-af69-a1241d048bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:05:28 compute-0 nova_compute[259627]: 2025-10-14 09:05:28.990 2 DEBUG oslo_concurrency.lockutils [req-c0b4a458-49fe-4a5d-bc42-c15c187610af req-cd876a1a-f52c-45f0-9f8a-49d405f0234c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-65c7e6ed-131f-4bca-af69-a1241d048bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:05:28 compute-0 nova_compute[259627]: 2025-10-14 09:05:28.990 2 DEBUG nova.network.neutron [req-c0b4a458-49fe-4a5d-bc42-c15c187610af req-cd876a1a-f52c-45f0-9f8a-49d405f0234c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Refreshing network info cache for port 282dfd9e-9e84-450c-a306-8bc55428feb4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:05:29 compute-0 nova_compute[259627]: 2025-10-14 09:05:29.560 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:05:29 compute-0 nova_compute[259627]: 2025-10-14 09:05:29.561 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:05:29 compute-0 nova_compute[259627]: 2025-10-14 09:05:29.562 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:05:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 247 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 717 KiB/s wr, 76 op/s
Oct 14 09:05:30 compute-0 nova_compute[259627]: 2025-10-14 09:05:30.037 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:05:30 compute-0 nova_compute[259627]: 2025-10-14 09:05:30.039 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:05:30 compute-0 nova_compute[259627]: 2025-10-14 09:05:30.040 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:05:30 compute-0 nova_compute[259627]: 2025-10-14 09:05:30.040 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:05:30 compute-0 nova_compute[259627]: 2025-10-14 09:05:30.598 2 DEBUG nova.network.neutron [req-c0b4a458-49fe-4a5d-bc42-c15c187610af req-cd876a1a-f52c-45f0-9f8a-49d405f0234c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Updated VIF entry in instance network info cache for port 282dfd9e-9e84-450c-a306-8bc55428feb4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:05:30 compute-0 nova_compute[259627]: 2025-10-14 09:05:30.599 2 DEBUG nova.network.neutron [req-c0b4a458-49fe-4a5d-bc42-c15c187610af req-cd876a1a-f52c-45f0-9f8a-49d405f0234c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Updating instance_info_cache with network_info: [{"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:30 compute-0 nova_compute[259627]: 2025-10-14 09:05:30.617 2 DEBUG oslo_concurrency.lockutils [req-c0b4a458-49fe-4a5d-bc42-c15c187610af req-cd876a1a-f52c-45f0-9f8a-49d405f0234c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-65c7e6ed-131f-4bca-af69-a1241d048bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:05:30 compute-0 nova_compute[259627]: 2025-10-14 09:05:30.741 2 DEBUG nova.compute.manager [req-c70ba67e-64f8-40f0-970a-b1e4fe5f6c6a req-46faa1ce-6199-4fbb-a531-32a1b4b3ca76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-changed-b624404b-6681-4fc8-a870-dc9418e2de0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:30 compute-0 nova_compute[259627]: 2025-10-14 09:05:30.742 2 DEBUG nova.compute.manager [req-c70ba67e-64f8-40f0-970a-b1e4fe5f6c6a req-46faa1ce-6199-4fbb-a531-32a1b4b3ca76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing instance network info cache due to event network-changed-b624404b-6681-4fc8-a870-dc9418e2de0f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:05:30 compute-0 nova_compute[259627]: 2025-10-14 09:05:30.743 2 DEBUG oslo_concurrency.lockutils [req-c70ba67e-64f8-40f0-970a-b1e4fe5f6c6a req-46faa1ce-6199-4fbb-a531-32a1b4b3ca76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:05:30 compute-0 nova_compute[259627]: 2025-10-14 09:05:30.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:30 compute-0 ceph-mon[74249]: pgmap v1497: 305 pgs: 305 active+clean; 247 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 717 KiB/s wr, 76 op/s
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.244 2 DEBUG nova.network.neutron [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.265 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.266 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.270 2 DEBUG oslo_concurrency.lockutils [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.272 2 DEBUG oslo_concurrency.lockutils [req-c70ba67e-64f8-40f0-970a-b1e4fe5f6c6a req-46faa1ce-6199-4fbb-a531-32a1b4b3ca76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.273 2 DEBUG nova.network.neutron [req-c70ba67e-64f8-40f0-970a-b1e4fe5f6c6a req-46faa1ce-6199-4fbb-a531-32a1b4b3ca76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing network info cache for port b624404b-6681-4fc8-a870-dc9418e2de0f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.278 2 DEBUG nova.virt.libvirt.vif [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.279 2 DEBUG nova.network.os_vif_util [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.280 2 DEBUG nova.network.os_vif_util [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:02:94,bridge_name='br-int',has_traffic_filtering=True,id=b624404b-6681-4fc8-a870-dc9418e2de0f,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb624404b-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.281 2 DEBUG os_vif [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:02:94,bridge_name='br-int',has_traffic_filtering=True,id=b624404b-6681-4fc8-a870-dc9418e2de0f,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb624404b-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.284 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.285 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.296 2 DEBUG nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.301 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb624404b-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.302 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb624404b-66, col_values=(('external_ids', {'iface-id': 'b624404b-6681-4fc8-a870-dc9418e2de0f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:02:94', 'vm-uuid': '47257c6e-4d10-4d8e-af5a-b57db20048ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:31 compute-0 NetworkManager[44885]: <info>  [1760432731.3054] manager: (tapb624404b-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.317 2 INFO os_vif [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:02:94,bridge_name='br-int',has_traffic_filtering=True,id=b624404b-6681-4fc8-a870-dc9418e2de0f,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb624404b-66')
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.318 2 DEBUG nova.virt.libvirt.vif [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.318 2 DEBUG nova.network.os_vif_util [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.319 2 DEBUG nova.network.os_vif_util [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:02:94,bridge_name='br-int',has_traffic_filtering=True,id=b624404b-6681-4fc8-a870-dc9418e2de0f,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb624404b-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.321 2 DEBUG nova.virt.libvirt.guest [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] attach device xml: <interface type="ethernet">
Oct 14 09:05:31 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:bb:02:94"/>
Oct 14 09:05:31 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:05:31 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:05:31 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:05:31 compute-0 nova_compute[259627]:   <target dev="tapb624404b-66"/>
Oct 14 09:05:31 compute-0 nova_compute[259627]: </interface>
Oct 14 09:05:31 compute-0 nova_compute[259627]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 14 09:05:31 compute-0 kernel: tapb624404b-66: entered promiscuous mode
Oct 14 09:05:31 compute-0 NetworkManager[44885]: <info>  [1760432731.3464] manager: (tapb624404b-66): new Tun device (/org/freedesktop/NetworkManager/Devices/238)
Oct 14 09:05:31 compute-0 ovn_controller[152662]: 2025-10-14T09:05:31Z|00540|binding|INFO|Claiming lport b624404b-6681-4fc8-a870-dc9418e2de0f for this chassis.
Oct 14 09:05:31 compute-0 ovn_controller[152662]: 2025-10-14T09:05:31Z|00541|binding|INFO|b624404b-6681-4fc8-a870-dc9418e2de0f: Claiming fa:16:3e:bb:02:94 10.100.0.9
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.359 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:02:94 10.100.0.9'], port_security=['fa:16:3e:bb:02:94 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '47257c6e-4d10-4d8e-af5a-b57db20048ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b624404b-6681-4fc8-a870-dc9418e2de0f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:05:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.362 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b624404b-6681-4fc8-a870-dc9418e2de0f in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 bound to our chassis
Oct 14 09:05:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.366 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.368 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.369 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.377 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.378 2 INFO nova.compute.claims [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:05:31 compute-0 ovn_controller[152662]: 2025-10-14T09:05:31Z|00542|binding|INFO|Setting lport b624404b-6681-4fc8-a870-dc9418e2de0f ovn-installed in OVS
Oct 14 09:05:31 compute-0 ovn_controller[152662]: 2025-10-14T09:05:31Z|00543|binding|INFO|Setting lport b624404b-6681-4fc8-a870-dc9418e2de0f up in Southbound
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.396 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[216c130a-7112-4e78-a96f-ec3d8e7b8df8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:31 compute-0 systemd-udevd[320085]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:05:31 compute-0 NetworkManager[44885]: <info>  [1760432731.4260] device (tapb624404b-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:05:31 compute-0 NetworkManager[44885]: <info>  [1760432731.4274] device (tapb624404b-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:05:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.454 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4d5aad-ac26-4ad1-a127-7a0a1355bb20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.459 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[92413a12-66c9-4972-86ad-76306d8f6503]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.474 2 DEBUG nova.virt.libvirt.driver [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.475 2 DEBUG nova.virt.libvirt.driver [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.476 2 DEBUG nova.virt.libvirt.driver [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:9a:79:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.477 2 DEBUG nova.virt.libvirt.driver [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:54:94:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.477 2 DEBUG nova.virt.libvirt.driver [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:bb:02:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:05:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.495 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[29dfe148-4f73-4b17-8217-226249fd8ae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.505 2 DEBUG nova.virt.libvirt.guest [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:05:31 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:05:31 compute-0 nova_compute[259627]:   <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 09:05:31 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:05:31</nova:creationTime>
Oct 14 09:05:31 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:05:31 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:05:31 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:05:31 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:05:31 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:05:31 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:05:31 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:05:31 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:05:31 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:05:31 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:05:31 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:05:31 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:05:31 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:05:31 compute-0 nova_compute[259627]:     <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 09:05:31 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 09:05:31 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:31 compute-0 nova_compute[259627]:     <nova:port uuid="e3bc3ac3-6147-40d0-a19c-df111dcf23a5">
Oct 14 09:05:31 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 09:05:31 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:31 compute-0 nova_compute[259627]:     <nova:port uuid="b624404b-6681-4fc8-a870-dc9418e2de0f">
Oct 14 09:05:31 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:05:31 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:31 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:05:31 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:05:31 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 09:05:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.515 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d170d8ed-4503-470a-9680-89db84052cfb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647447, 'reachable_time': 19128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320092, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.536 2 DEBUG oslo_concurrency.lockutils [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.540 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[af54e1ad-26fb-4325-bf52-0908ad594719]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647459, 'tstamp': 647459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320093, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647463, 'tstamp': 647463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320093, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.543 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.548 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.549 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.549 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.549 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.552 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:05:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 247 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 718 KiB/s wr, 76 op/s
Oct 14 09:05:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:05:31 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4242352205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:05:31 compute-0 nova_compute[259627]: 2025-10-14 09:05:31.995 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.000 2 DEBUG nova.compute.provider_tree [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.027 2 DEBUG nova.scheduler.client.report [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.099 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.100 2 DEBUG nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.152 2 DEBUG nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.152 2 DEBUG nova.network.neutron [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.169 2 INFO nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.183 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Updating instance_info_cache with network_info: [{"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.198 2 DEBUG nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.202 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.202 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.202 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.203 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.286 2 DEBUG nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.288 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.289 2 INFO nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Creating image(s)
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.325 2 DEBUG nova.storage.rbd_utils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.360 2 DEBUG nova.storage.rbd_utils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.390 2 DEBUG nova.storage.rbd_utils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.394 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.436 2 DEBUG nova.policy [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd952679a4e6a4fc6bacf42c02d3e92d0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.519 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.520 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.521 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.522 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.548 2 DEBUG nova.storage.rbd_utils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.554 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ec31b9ab-88ab-4085-a46b-76cb9825061a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:05:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:05:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.724 2 DEBUG nova.compute.manager [req-0ec652a0-09fc-46a7-b9a1-09f313c720c4 req-5d7c0b4f-3dbc-4483-bdc5-d7ae457087c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-b624404b-6681-4fc8-a870-dc9418e2de0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.726 2 DEBUG oslo_concurrency.lockutils [req-0ec652a0-09fc-46a7-b9a1-09f313c720c4 req-5d7c0b4f-3dbc-4483-bdc5-d7ae457087c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.727 2 DEBUG oslo_concurrency.lockutils [req-0ec652a0-09fc-46a7-b9a1-09f313c720c4 req-5d7c0b4f-3dbc-4483-bdc5-d7ae457087c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.728 2 DEBUG oslo_concurrency.lockutils [req-0ec652a0-09fc-46a7-b9a1-09f313c720c4 req-5d7c0b4f-3dbc-4483-bdc5-d7ae457087c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.729 2 DEBUG nova.compute.manager [req-0ec652a0-09fc-46a7-b9a1-09f313c720c4 req-5d7c0b4f-3dbc-4483-bdc5-d7ae457087c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-b624404b-6681-4fc8-a870-dc9418e2de0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.730 2 WARNING nova.compute.manager [req-0ec652a0-09fc-46a7-b9a1-09f313c720c4 req-5d7c0b4f-3dbc-4483-bdc5-d7ae457087c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-b624404b-6681-4fc8-a870-dc9418e2de0f for instance with vm_state active and task_state None.
Oct 14 09:05:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:05:32
Oct 14 09:05:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:05:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:05:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['images', '.mgr', 'volumes', 'default.rgw.control', '.rgw.root', 'backups', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct 14 09:05:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:05:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:05:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:05:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:05:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.848 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ec31b9ab-88ab-4085-a46b-76cb9825061a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:05:32 compute-0 ceph-mon[74249]: pgmap v1498: 305 pgs: 305 active+clean; 247 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 718 KiB/s wr, 76 op/s
Oct 14 09:05:32 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4242352205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.911 2 DEBUG nova.storage.rbd_utils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] resizing rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:05:32 compute-0 nova_compute[259627]: 2025-10-14 09:05:32.996 2 DEBUG nova.objects.instance [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'migration_context' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:05:33 compute-0 nova_compute[259627]: 2025-10-14 09:05:33.016 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:05:33 compute-0 nova_compute[259627]: 2025-10-14 09:05:33.016 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Ensure instance console log exists: /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:05:33 compute-0 nova_compute[259627]: 2025-10-14 09:05:33.017 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:33 compute-0 nova_compute[259627]: 2025-10-14 09:05:33.017 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:33 compute-0 nova_compute[259627]: 2025-10-14 09:05:33.017 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:05:33 compute-0 nova_compute[259627]: 2025-10-14 09:05:33.095 2 DEBUG nova.network.neutron [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Successfully created port: 31797867-f0bc-4632-a658-bd0fae609c23 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:05:33 compute-0 nova_compute[259627]: 2025-10-14 09:05:33.404 2 DEBUG nova.network.neutron [req-c70ba67e-64f8-40f0-970a-b1e4fe5f6c6a req-46faa1ce-6199-4fbb-a531-32a1b4b3ca76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updated VIF entry in instance network info cache for port b624404b-6681-4fc8-a870-dc9418e2de0f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:05:33 compute-0 nova_compute[259627]: 2025-10-14 09:05:33.405 2 DEBUG nova.network.neutron [req-c70ba67e-64f8-40f0-970a-b1e4fe5f6c6a req-46faa1ce-6199-4fbb-a531-32a1b4b3ca76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:33 compute-0 nova_compute[259627]: 2025-10-14 09:05:33.430 2 DEBUG oslo_concurrency.lockutils [req-c70ba67e-64f8-40f0-970a-b1e4fe5f6c6a req-46faa1ce-6199-4fbb-a531-32a1b4b3ca76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:05:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 247 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.2 KiB/s wr, 73 op/s
Oct 14 09:05:33 compute-0 nova_compute[259627]: 2025-10-14 09:05:33.844 2 DEBUG nova.network.neutron [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Successfully updated port: 31797867-f0bc-4632-a658-bd0fae609c23 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:05:33 compute-0 nova_compute[259627]: 2025-10-14 09:05:33.871 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "refresh_cache-ec31b9ab-88ab-4085-a46b-76cb9825061a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:05:33 compute-0 nova_compute[259627]: 2025-10-14 09:05:33.871 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquired lock "refresh_cache-ec31b9ab-88ab-4085-a46b-76cb9825061a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:05:33 compute-0 nova_compute[259627]: 2025-10-14 09:05:33.871 2 DEBUG nova.network.neutron [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:05:33 compute-0 ovn_controller[152662]: 2025-10-14T09:05:33Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:50:93 10.100.0.10
Oct 14 09:05:33 compute-0 ovn_controller[152662]: 2025-10-14T09:05:33Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:50:93 10.100.0.10
Oct 14 09:05:34 compute-0 nova_compute[259627]: 2025-10-14 09:05:34.028 2 DEBUG nova.compute.manager [req-deea44ea-5ab1-4c22-bcbb-8d05cf9673f9 req-0aa19476-444d-4121-9d1a-709a0c5f5567 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-changed-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:34 compute-0 nova_compute[259627]: 2025-10-14 09:05:34.029 2 DEBUG nova.compute.manager [req-deea44ea-5ab1-4c22-bcbb-8d05cf9673f9 req-0aa19476-444d-4121-9d1a-709a0c5f5567 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Refreshing instance network info cache due to event network-changed-31797867-f0bc-4632-a658-bd0fae609c23. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:05:34 compute-0 nova_compute[259627]: 2025-10-14 09:05:34.029 2 DEBUG oslo_concurrency.lockutils [req-deea44ea-5ab1-4c22-bcbb-8d05cf9673f9 req-0aa19476-444d-4121-9d1a-709a0c5f5567 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-ec31b9ab-88ab-4085-a46b-76cb9825061a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:05:34 compute-0 nova_compute[259627]: 2025-10-14 09:05:34.070 2 DEBUG nova.network.neutron [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:05:34 compute-0 ovn_controller[152662]: 2025-10-14T09:05:34Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bb:02:94 10.100.0.9
Oct 14 09:05:34 compute-0 ovn_controller[152662]: 2025-10-14T09:05:34Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:02:94 10.100.0.9
Oct 14 09:05:34 compute-0 nova_compute[259627]: 2025-10-14 09:05:34.806 2 DEBUG nova.compute.manager [req-5d21e53d-f485-4bbc-bd30-4a6585238b41 req-223fe539-43f7-40c9-87c1-05d880c86560 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-b624404b-6681-4fc8-a870-dc9418e2de0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:34 compute-0 nova_compute[259627]: 2025-10-14 09:05:34.807 2 DEBUG oslo_concurrency.lockutils [req-5d21e53d-f485-4bbc-bd30-4a6585238b41 req-223fe539-43f7-40c9-87c1-05d880c86560 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:34 compute-0 nova_compute[259627]: 2025-10-14 09:05:34.807 2 DEBUG oslo_concurrency.lockutils [req-5d21e53d-f485-4bbc-bd30-4a6585238b41 req-223fe539-43f7-40c9-87c1-05d880c86560 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:34 compute-0 nova_compute[259627]: 2025-10-14 09:05:34.808 2 DEBUG oslo_concurrency.lockutils [req-5d21e53d-f485-4bbc-bd30-4a6585238b41 req-223fe539-43f7-40c9-87c1-05d880c86560 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:34 compute-0 nova_compute[259627]: 2025-10-14 09:05:34.808 2 DEBUG nova.compute.manager [req-5d21e53d-f485-4bbc-bd30-4a6585238b41 req-223fe539-43f7-40c9-87c1-05d880c86560 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-b624404b-6681-4fc8-a870-dc9418e2de0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:05:34 compute-0 nova_compute[259627]: 2025-10-14 09:05:34.809 2 WARNING nova.compute.manager [req-5d21e53d-f485-4bbc-bd30-4a6585238b41 req-223fe539-43f7-40c9-87c1-05d880c86560 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-b624404b-6681-4fc8-a870-dc9418e2de0f for instance with vm_state active and task_state None.
Oct 14 09:05:34 compute-0 ceph-mon[74249]: pgmap v1499: 305 pgs: 305 active+clean; 247 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.2 KiB/s wr, 73 op/s
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.390 2 DEBUG nova.network.neutron [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Updating instance_info_cache with network_info: [{"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.409 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Releasing lock "refresh_cache-ec31b9ab-88ab-4085-a46b-76cb9825061a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.410 2 DEBUG nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance network_info: |[{"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.411 2 DEBUG oslo_concurrency.lockutils [req-deea44ea-5ab1-4c22-bcbb-8d05cf9673f9 req-0aa19476-444d-4121-9d1a-709a0c5f5567 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-ec31b9ab-88ab-4085-a46b-76cb9825061a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.411 2 DEBUG nova.network.neutron [req-deea44ea-5ab1-4c22-bcbb-8d05cf9673f9 req-0aa19476-444d-4121-9d1a-709a0c5f5567 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Refreshing network info cache for port 31797867-f0bc-4632-a658-bd0fae609c23 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.416 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Start _get_guest_xml network_info=[{"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.423 2 WARNING nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.429 2 DEBUG nova.virt.libvirt.host [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.430 2 DEBUG nova.virt.libvirt.host [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.434 2 DEBUG nova.virt.libvirt.host [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.435 2 DEBUG nova.virt.libvirt.host [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.435 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.436 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.437 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.437 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.438 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.438 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.439 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.440 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.440 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.441 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.441 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.442 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.447 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.529 2 DEBUG oslo_concurrency.lockutils [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-be6943f6-df97-4a84-854b-858cc7d3ea1a" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.530 2 DEBUG oslo_concurrency.lockutils [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-be6943f6-df97-4a84-854b-858cc7d3ea1a" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.531 2 DEBUG nova.objects.instance [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:05:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 14 09:05:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:05:35 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1078694914' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.944 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.970 2 DEBUG nova.storage.rbd_utils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:05:35 compute-0 nova_compute[259627]: 2025-10-14 09:05:35.974 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:05:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944170748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.416 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.418 2 DEBUG nova.virt.libvirt.vif [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-412231426',display_name='tempest-tempest.common.compute-instance-412231426',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-412231426',id=56,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-v50tykwv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOth
erA-894139105-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:05:32Z,user_data=None,user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=ec31b9ab-88ab-4085-a46b-76cb9825061a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.419 2 DEBUG nova.network.os_vif_util [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.421 2 DEBUG nova.network.os_vif_util [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.423 2 DEBUG nova.objects.instance [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'pci_devices' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.441 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:05:36 compute-0 nova_compute[259627]:   <uuid>ec31b9ab-88ab-4085-a46b-76cb9825061a</uuid>
Oct 14 09:05:36 compute-0 nova_compute[259627]:   <name>instance-00000038</name>
Oct 14 09:05:36 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:05:36 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:05:36 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <nova:name>tempest-tempest.common.compute-instance-412231426</nova:name>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:05:35</nova:creationTime>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:05:36 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:05:36 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:05:36 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:05:36 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:05:36 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:05:36 compute-0 nova_compute[259627]:         <nova:user uuid="d952679a4e6a4fc6bacf42c02d3e92d0">tempest-ServerActionsTestOtherA-894139105-project-member</nova:user>
Oct 14 09:05:36 compute-0 nova_compute[259627]:         <nova:project uuid="4e47722c609640d3a70fee8dd6ff94cc">tempest-ServerActionsTestOtherA-894139105</nova:project>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:05:36 compute-0 nova_compute[259627]:         <nova:port uuid="31797867-f0bc-4632-a658-bd0fae609c23">
Oct 14 09:05:36 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:05:36 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:05:36 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <system>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <entry name="serial">ec31b9ab-88ab-4085-a46b-76cb9825061a</entry>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <entry name="uuid">ec31b9ab-88ab-4085-a46b-76cb9825061a</entry>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     </system>
Oct 14 09:05:36 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:05:36 compute-0 nova_compute[259627]:   <os>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:   </os>
Oct 14 09:05:36 compute-0 nova_compute[259627]:   <features>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:   </features>
Oct 14 09:05:36 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:05:36 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:05:36 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/ec31b9ab-88ab-4085-a46b-76cb9825061a_disk">
Oct 14 09:05:36 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       </source>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:05:36 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config">
Oct 14 09:05:36 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       </source>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:05:36 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:01:3d:6b"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <target dev="tap31797867-f0"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/console.log" append="off"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <video>
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     </video>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:05:36 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:05:36 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:05:36 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:05:36 compute-0 nova_compute[259627]: </domain>
Oct 14 09:05:36 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.444 2 DEBUG nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Preparing to wait for external event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.445 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.445 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.446 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.448 2 DEBUG nova.virt.libvirt.vif [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-412231426',display_name='tempest-tempest.common.compute-instance-412231426',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-412231426',id=56,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-v50tykwv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA-894139105-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:05:32Z,user_data=None,user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=ec31b9ab-88ab-4085-a46b-76cb9825061a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.449 2 DEBUG nova.network.os_vif_util [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.450 2 DEBUG nova.network.os_vif_util [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.451 2 DEBUG os_vif [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.458 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31797867-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.459 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31797867-f0, col_values=(('external_ids', {'iface-id': '31797867-f0bc-4632-a658-bd0fae609c23', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:3d:6b', 'vm-uuid': 'ec31b9ab-88ab-4085-a46b-76cb9825061a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:36 compute-0 NetworkManager[44885]: <info>  [1760432736.4625] manager: (tap31797867-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.473 2 INFO os_vif [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0')
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.537 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.538 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.539 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No VIF found with MAC fa:16:3e:01:3d:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.539 2 INFO nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Using config drive
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.572 2 DEBUG nova.storage.rbd_utils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.820 2 DEBUG nova.objects.instance [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_requests' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:05:36 compute-0 nova_compute[259627]: 2025-10-14 09:05:36.851 2 DEBUG nova.network.neutron [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:05:36 compute-0 ceph-mon[74249]: pgmap v1500: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 14 09:05:36 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1078694914' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:05:36 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2944170748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:05:37 compute-0 nova_compute[259627]: 2025-10-14 09:05:37.239 2 INFO nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Creating config drive at /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config
Oct 14 09:05:37 compute-0 nova_compute[259627]: 2025-10-14 09:05:37.249 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfl5ihuf9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:05:37 compute-0 nova_compute[259627]: 2025-10-14 09:05:37.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:37 compute-0 nova_compute[259627]: 2025-10-14 09:05:37.407 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfl5ihuf9" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:05:37 compute-0 nova_compute[259627]: 2025-10-14 09:05:37.438 2 DEBUG nova.storage.rbd_utils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:05:37 compute-0 nova_compute[259627]: 2025-10-14 09:05:37.443 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:05:37 compute-0 nova_compute[259627]: 2025-10-14 09:05:37.540 2 DEBUG nova.policy [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:05:37 compute-0 nova_compute[259627]: 2025-10-14 09:05:37.546 2 DEBUG nova.network.neutron [req-deea44ea-5ab1-4c22-bcbb-8d05cf9673f9 req-0aa19476-444d-4121-9d1a-709a0c5f5567 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Updated VIF entry in instance network info cache for port 31797867-f0bc-4632-a658-bd0fae609c23. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:05:37 compute-0 nova_compute[259627]: 2025-10-14 09:05:37.547 2 DEBUG nova.network.neutron [req-deea44ea-5ab1-4c22-bcbb-8d05cf9673f9 req-0aa19476-444d-4121-9d1a-709a0c5f5567 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Updating instance_info_cache with network_info: [{"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:37 compute-0 nova_compute[259627]: 2025-10-14 09:05:37.581 2 DEBUG oslo_concurrency.lockutils [req-deea44ea-5ab1-4c22-bcbb-8d05cf9673f9 req-0aa19476-444d-4121-9d1a-709a0c5f5567 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-ec31b9ab-88ab-4085-a46b-76cb9825061a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:05:37 compute-0 nova_compute[259627]: 2025-10-14 09:05:37.636 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:05:37 compute-0 nova_compute[259627]: 2025-10-14 09:05:37.637 2 INFO nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Deleting local config drive /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config because it was imported into RBD.
Oct 14 09:05:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:05:37 compute-0 podman[320401]: 2025-10-14 09:05:37.667919503 +0000 UTC m=+0.078350470 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 09:05:37 compute-0 podman[320405]: 2025-10-14 09:05:37.668621191 +0000 UTC m=+0.074527977 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 09:05:37 compute-0 kernel: tap31797867-f0: entered promiscuous mode
Oct 14 09:05:37 compute-0 NetworkManager[44885]: <info>  [1760432737.6892] manager: (tap31797867-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/240)
Oct 14 09:05:37 compute-0 ovn_controller[152662]: 2025-10-14T09:05:37Z|00544|binding|INFO|Claiming lport 31797867-f0bc-4632-a658-bd0fae609c23 for this chassis.
Oct 14 09:05:37 compute-0 ovn_controller[152662]: 2025-10-14T09:05:37Z|00545|binding|INFO|31797867-f0bc-4632-a658-bd0fae609c23: Claiming fa:16:3e:01:3d:6b 10.100.0.5
Oct 14 09:05:37 compute-0 nova_compute[259627]: 2025-10-14 09:05:37.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.701 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:3d:6b 10.100.0.5'], port_security=['fa:16:3e:01:3d:6b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ec31b9ab-88ab-4085-a46b-76cb9825061a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1858677-a8a5-4b0c-b70b-3875847a67c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=31797867-f0bc-4632-a658-bd0fae609c23) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:05:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.702 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 31797867-f0bc-4632-a658-bd0fae609c23 in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 bound to our chassis
Oct 14 09:05:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.704 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3b87118-f516-4f2d-8696-aa7290af9d83
Oct 14 09:05:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 14 09:05:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.720 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc76a4d-e989-4fa1-869e-17355d966261]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:37 compute-0 systemd-udevd[320457]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:05:37 compute-0 ovn_controller[152662]: 2025-10-14T09:05:37Z|00546|binding|INFO|Setting lport 31797867-f0bc-4632-a658-bd0fae609c23 ovn-installed in OVS
Oct 14 09:05:37 compute-0 ovn_controller[152662]: 2025-10-14T09:05:37Z|00547|binding|INFO|Setting lport 31797867-f0bc-4632-a658-bd0fae609c23 up in Southbound
Oct 14 09:05:37 compute-0 nova_compute[259627]: 2025-10-14 09:05:37.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:37 compute-0 systemd-machined[214636]: New machine qemu-70-instance-00000038.
Oct 14 09:05:37 compute-0 nova_compute[259627]: 2025-10-14 09:05:37.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:37 compute-0 NetworkManager[44885]: <info>  [1760432737.7418] device (tap31797867-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:05:37 compute-0 NetworkManager[44885]: <info>  [1760432737.7426] device (tap31797867-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:05:37 compute-0 systemd[1]: Started Virtual Machine qemu-70-instance-00000038.
Oct 14 09:05:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.761 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[38ed69bf-17ee-4971-9e83-956dae57dd2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.764 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[37d00dd9-737f-4d86-8439-806a210bed45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.792 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5e13cd21-df20-4980-bfcd-95aff84c453f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.811 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b046edea-1984-4c80-ad51-3d5855d31262]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 24755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320469, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.828 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b1bb134a-f7b9-483f-a20b-47ac5f8c4f27]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647552, 'tstamp': 647552}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320470, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647555, 'tstamp': 647555}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320470, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.829 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:37 compute-0 nova_compute[259627]: 2025-10-14 09:05:37.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:37 compute-0 nova_compute[259627]: 2025-10-14 09:05:37.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.833 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3b87118-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.834 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.834 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3b87118-f0, col_values=(('external_ids', {'iface-id': 'a7f44223-dee5-4a2f-b975-1f04f03b78f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.834 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:38 compute-0 nova_compute[259627]: 2025-10-14 09:05:38.529 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432738.528594, ec31b9ab-88ab-4085-a46b-76cb9825061a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:05:38 compute-0 nova_compute[259627]: 2025-10-14 09:05:38.530 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] VM Started (Lifecycle Event)
Oct 14 09:05:38 compute-0 nova_compute[259627]: 2025-10-14 09:05:38.551 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:05:38 compute-0 nova_compute[259627]: 2025-10-14 09:05:38.556 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432738.5288987, ec31b9ab-88ab-4085-a46b-76cb9825061a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:05:38 compute-0 nova_compute[259627]: 2025-10-14 09:05:38.557 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] VM Paused (Lifecycle Event)
Oct 14 09:05:38 compute-0 nova_compute[259627]: 2025-10-14 09:05:38.573 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:05:38 compute-0 nova_compute[259627]: 2025-10-14 09:05:38.578 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:05:38 compute-0 nova_compute[259627]: 2025-10-14 09:05:38.598 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:05:38 compute-0 nova_compute[259627]: 2025-10-14 09:05:38.614 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:05:38 compute-0 nova_compute[259627]: 2025-10-14 09:05:38.834 2 DEBUG nova.network.neutron [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Successfully updated port: be6943f6-df97-4a84-854b-858cc7d3ea1a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:05:38 compute-0 nova_compute[259627]: 2025-10-14 09:05:38.862 2 DEBUG oslo_concurrency.lockutils [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:05:38 compute-0 nova_compute[259627]: 2025-10-14 09:05:38.862 2 DEBUG oslo_concurrency.lockutils [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:05:38 compute-0 nova_compute[259627]: 2025-10-14 09:05:38.863 2 DEBUG nova.network.neutron [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:05:38 compute-0 ceph-mon[74249]: pgmap v1501: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 14 09:05:38 compute-0 nova_compute[259627]: 2025-10-14 09:05:38.935 2 DEBUG nova.compute.manager [req-203aecfe-ae4b-4775-a71e-cee18813aaab req-a613fc90-b079-4c32-bf01-82f25eb27e9f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-changed-be6943f6-df97-4a84-854b-858cc7d3ea1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:38 compute-0 nova_compute[259627]: 2025-10-14 09:05:38.936 2 DEBUG nova.compute.manager [req-203aecfe-ae4b-4775-a71e-cee18813aaab req-a613fc90-b079-4c32-bf01-82f25eb27e9f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing instance network info cache due to event network-changed-be6943f6-df97-4a84-854b-858cc7d3ea1a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:05:38 compute-0 nova_compute[259627]: 2025-10-14 09:05:38.936 2 DEBUG oslo_concurrency.lockutils [req-203aecfe-ae4b-4775-a71e-cee18813aaab req-a613fc90-b079-4c32-bf01-82f25eb27e9f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:05:39 compute-0 nova_compute[259627]: 2025-10-14 09:05:39.115 2 WARNING nova.network.neutron [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] fc2d149f-aebf-406a-aed2-5161dd22b079 already exists in list: networks containing: ['fc2d149f-aebf-406a-aed2-5161dd22b079']. ignoring it
Oct 14 09:05:39 compute-0 nova_compute[259627]: 2025-10-14 09:05:39.116 2 WARNING nova.network.neutron [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] fc2d149f-aebf-406a-aed2-5161dd22b079 already exists in list: networks containing: ['fc2d149f-aebf-406a-aed2-5161dd22b079']. ignoring it
Oct 14 09:05:39 compute-0 nova_compute[259627]: 2025-10-14 09:05:39.117 2 WARNING nova.network.neutron [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] fc2d149f-aebf-406a-aed2-5161dd22b079 already exists in list: networks containing: ['fc2d149f-aebf-406a-aed2-5161dd22b079']. ignoring it
Oct 14 09:05:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 14 09:05:40 compute-0 ceph-mon[74249]: pgmap v1502: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 14 09:05:41 compute-0 nova_compute[259627]: 2025-10-14 09:05:41.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 3.9 MiB/s wr, 103 op/s
Oct 14 09:05:42 compute-0 nova_compute[259627]: 2025-10-14 09:05:42.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:05:42 compute-0 ceph-mon[74249]: pgmap v1503: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 3.9 MiB/s wr, 103 op/s
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002629157814840138 of space, bias 1.0, pg target 0.7887473444520414 quantized to 32 (current 32)
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:05:42 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.680 2 DEBUG nova.network.neutron [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.701 2 DEBUG oslo_concurrency.lockutils [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.703 2 DEBUG oslo_concurrency.lockutils [req-203aecfe-ae4b-4775-a71e-cee18813aaab req-a613fc90-b079-4c32-bf01-82f25eb27e9f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.703 2 DEBUG nova.network.neutron [req-203aecfe-ae4b-4775-a71e-cee18813aaab req-a613fc90-b079-4c32-bf01-82f25eb27e9f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing network info cache for port be6943f6-df97-4a84-854b-858cc7d3ea1a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.708 2 DEBUG nova.virt.libvirt.vif [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.709 2 DEBUG nova.network.os_vif_util [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.710 2 DEBUG nova.network.os_vif_util [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:ae:0e,bridge_name='br-int',has_traffic_filtering=True,id=be6943f6-df97-4a84-854b-858cc7d3ea1a,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe6943f6-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.711 2 DEBUG os_vif [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:ae:0e,bridge_name='br-int',has_traffic_filtering=True,id=be6943f6-df97-4a84-854b-858cc7d3ea1a,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe6943f6-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.712 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.713 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 3.9 MiB/s wr, 103 op/s
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.717 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe6943f6-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.718 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe6943f6-df, col_values=(('external_ids', {'iface-id': 'be6943f6-df97-4a84-854b-858cc7d3ea1a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:ae:0e', 'vm-uuid': '47257c6e-4d10-4d8e-af5a-b57db20048ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:43 compute-0 NetworkManager[44885]: <info>  [1760432743.7515] manager: (tapbe6943f6-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/241)
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.761 2 INFO os_vif [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:ae:0e,bridge_name='br-int',has_traffic_filtering=True,id=be6943f6-df97-4a84-854b-858cc7d3ea1a,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe6943f6-df')
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.762 2 DEBUG nova.virt.libvirt.vif [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.763 2 DEBUG nova.network.os_vif_util [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.764 2 DEBUG nova.network.os_vif_util [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:ae:0e,bridge_name='br-int',has_traffic_filtering=True,id=be6943f6-df97-4a84-854b-858cc7d3ea1a,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe6943f6-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.768 2 DEBUG nova.virt.libvirt.guest [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] attach device xml: <interface type="ethernet">
Oct 14 09:05:43 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:a4:ae:0e"/>
Oct 14 09:05:43 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:05:43 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:05:43 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:05:43 compute-0 nova_compute[259627]:   <target dev="tapbe6943f6-df"/>
Oct 14 09:05:43 compute-0 nova_compute[259627]: </interface>
Oct 14 09:05:43 compute-0 nova_compute[259627]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 14 09:05:43 compute-0 kernel: tapbe6943f6-df: entered promiscuous mode
Oct 14 09:05:43 compute-0 NetworkManager[44885]: <info>  [1760432743.7878] manager: (tapbe6943f6-df): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Oct 14 09:05:43 compute-0 ovn_controller[152662]: 2025-10-14T09:05:43Z|00548|binding|INFO|Claiming lport be6943f6-df97-4a84-854b-858cc7d3ea1a for this chassis.
Oct 14 09:05:43 compute-0 ovn_controller[152662]: 2025-10-14T09:05:43Z|00549|binding|INFO|be6943f6-df97-4a84-854b-858cc7d3ea1a: Claiming fa:16:3e:a4:ae:0e 10.100.0.14
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.802 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:ae:0e 10.100.0.14'], port_security=['fa:16:3e:a4:ae:0e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-4377624', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '47257c6e-4d10-4d8e-af5a-b57db20048ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-4377624', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=be6943f6-df97-4a84-854b-858cc7d3ea1a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:05:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.805 162547 INFO neutron.agent.ovn.metadata.agent [-] Port be6943f6-df97-4a84-854b-858cc7d3ea1a in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 bound to our chassis
Oct 14 09:05:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.809 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:05:43 compute-0 ovn_controller[152662]: 2025-10-14T09:05:43Z|00550|binding|INFO|Setting lport be6943f6-df97-4a84-854b-858cc7d3ea1a ovn-installed in OVS
Oct 14 09:05:43 compute-0 ovn_controller[152662]: 2025-10-14T09:05:43Z|00551|binding|INFO|Setting lport be6943f6-df97-4a84-854b-858cc7d3ea1a up in Southbound
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.834 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c1009ad7-e8ed-4885-b347-4c101924dd5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:43 compute-0 systemd-udevd[320522]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:05:43 compute-0 NetworkManager[44885]: <info>  [1760432743.8743] device (tapbe6943f6-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:05:43 compute-0 NetworkManager[44885]: <info>  [1760432743.8759] device (tapbe6943f6-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:05:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.878 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fa55af88-ecdb-4028-9f2f-52ee6cb2635f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.883 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[413abdce-a805-4f7b-8a3b-90fa35720837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.918 2 DEBUG nova.virt.libvirt.driver [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.919 2 DEBUG nova.virt.libvirt.driver [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.919 2 DEBUG nova.virt.libvirt.driver [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:9a:79:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.920 2 DEBUG nova.virt.libvirt.driver [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:54:94:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.921 2 DEBUG nova.virt.libvirt.driver [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:bb:02:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.921 2 DEBUG nova.virt.libvirt.driver [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:a4:ae:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:05:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.922 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[75bc0043-8508-40d9-8ec6-88e27c65eaa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.947 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c1fda5e3-467e-4d3c-895a-12722dd78789]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647447, 'reachable_time': 19128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320528, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.963 2 DEBUG nova.virt.libvirt.guest [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:05:43 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:05:43 compute-0 nova_compute[259627]:   <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 09:05:43 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:05:43</nova:creationTime>
Oct 14 09:05:43 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:05:43 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:05:43 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:05:43 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:05:43 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:05:43 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:05:43 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:05:43 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:05:43 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:05:43 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:05:43 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:05:43 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:05:43 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:05:43 compute-0 nova_compute[259627]:     <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 09:05:43 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 09:05:43 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:43 compute-0 nova_compute[259627]:     <nova:port uuid="e3bc3ac3-6147-40d0-a19c-df111dcf23a5">
Oct 14 09:05:43 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 09:05:43 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:43 compute-0 nova_compute[259627]:     <nova:port uuid="b624404b-6681-4fc8-a870-dc9418e2de0f">
Oct 14 09:05:43 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:05:43 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:43 compute-0 nova_compute[259627]:     <nova:port uuid="be6943f6-df97-4a84-854b-858cc7d3ea1a">
Oct 14 09:05:43 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 09:05:43 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:43 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:05:43 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:05:43 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 09:05:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.968 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5475455e-6a04-4ac4-9656-849ca3fdaf03]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647459, 'tstamp': 647459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320529, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647463, 'tstamp': 647463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320529, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.970 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:43 compute-0 nova_compute[259627]: 2025-10-14 09:05:43.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.974 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.974 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.974 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.975 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:44 compute-0 nova_compute[259627]: 2025-10-14 09:05:44.000 2 DEBUG oslo_concurrency.lockutils [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-be6943f6-df97-4a84-854b-858cc7d3ea1a" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:44 compute-0 nova_compute[259627]: 2025-10-14 09:05:44.653 2 DEBUG nova.compute.manager [req-f9678190-bed6-432d-b498-6c24d0e5ba38 req-e5c4f3f9-6709-47a1-a30d-fad8d54ddb26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-be6943f6-df97-4a84-854b-858cc7d3ea1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:44 compute-0 nova_compute[259627]: 2025-10-14 09:05:44.654 2 DEBUG oslo_concurrency.lockutils [req-f9678190-bed6-432d-b498-6c24d0e5ba38 req-e5c4f3f9-6709-47a1-a30d-fad8d54ddb26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:44 compute-0 nova_compute[259627]: 2025-10-14 09:05:44.654 2 DEBUG oslo_concurrency.lockutils [req-f9678190-bed6-432d-b498-6c24d0e5ba38 req-e5c4f3f9-6709-47a1-a30d-fad8d54ddb26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:44 compute-0 nova_compute[259627]: 2025-10-14 09:05:44.655 2 DEBUG oslo_concurrency.lockutils [req-f9678190-bed6-432d-b498-6c24d0e5ba38 req-e5c4f3f9-6709-47a1-a30d-fad8d54ddb26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:44 compute-0 nova_compute[259627]: 2025-10-14 09:05:44.655 2 DEBUG nova.compute.manager [req-f9678190-bed6-432d-b498-6c24d0e5ba38 req-e5c4f3f9-6709-47a1-a30d-fad8d54ddb26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-be6943f6-df97-4a84-854b-858cc7d3ea1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:05:44 compute-0 nova_compute[259627]: 2025-10-14 09:05:44.656 2 WARNING nova.compute.manager [req-f9678190-bed6-432d-b498-6c24d0e5ba38 req-e5c4f3f9-6709-47a1-a30d-fad8d54ddb26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-be6943f6-df97-4a84-854b-858cc7d3ea1a for instance with vm_state active and task_state None.
Oct 14 09:05:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:44.771 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:05:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:44.772 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:05:44 compute-0 nova_compute[259627]: 2025-10-14 09:05:44.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:44 compute-0 ceph-mon[74249]: pgmap v1504: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 3.9 MiB/s wr, 103 op/s
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.483 2 DEBUG nova.compute.manager [req-29868350-49f9-4820-a782-8a031a506e9f req-e53e4ce6-965a-4a25-bb87-b1dfed6d8f86 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.484 2 DEBUG oslo_concurrency.lockutils [req-29868350-49f9-4820-a782-8a031a506e9f req-e53e4ce6-965a-4a25-bb87-b1dfed6d8f86 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.485 2 DEBUG oslo_concurrency.lockutils [req-29868350-49f9-4820-a782-8a031a506e9f req-e53e4ce6-965a-4a25-bb87-b1dfed6d8f86 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.485 2 DEBUG oslo_concurrency.lockutils [req-29868350-49f9-4820-a782-8a031a506e9f req-e53e4ce6-965a-4a25-bb87-b1dfed6d8f86 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.486 2 DEBUG nova.compute.manager [req-29868350-49f9-4820-a782-8a031a506e9f req-e53e4ce6-965a-4a25-bb87-b1dfed6d8f86 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Processing event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.487 2 DEBUG nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.498 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432745.497848, ec31b9ab-88ab-4085-a46b-76cb9825061a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.498 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] VM Resumed (Lifecycle Event)
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.501 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.506 2 INFO nova.virt.libvirt.driver [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance spawned successfully.
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.507 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.525 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.533 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.538 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.539 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.540 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.540 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.541 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.542 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.556 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.610 2 INFO nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Took 13.32 seconds to spawn the instance on the hypervisor.
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.610 2 DEBUG nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.671 2 INFO nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Took 14.33 seconds to build instance.
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.687 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 3.9 MiB/s wr, 104 op/s
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.942 2 DEBUG nova.network.neutron [req-203aecfe-ae4b-4775-a71e-cee18813aaab req-a613fc90-b079-4c32-bf01-82f25eb27e9f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updated VIF entry in instance network info cache for port be6943f6-df97-4a84-854b-858cc7d3ea1a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.943 2 DEBUG nova.network.neutron [req-203aecfe-ae4b-4775-a71e-cee18813aaab req-a613fc90-b079-4c32-bf01-82f25eb27e9f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:45 compute-0 nova_compute[259627]: 2025-10-14 09:05:45.964 2 DEBUG oslo_concurrency.lockutils [req-203aecfe-ae4b-4775-a71e-cee18813aaab req-a613fc90-b079-4c32-bf01-82f25eb27e9f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.247 2 DEBUG oslo_concurrency.lockutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquiring lock "65c7e6ed-131f-4bca-af69-a1241d048bdb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.247 2 DEBUG oslo_concurrency.lockutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.247 2 DEBUG oslo_concurrency.lockutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquiring lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.248 2 DEBUG oslo_concurrency.lockutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.248 2 DEBUG oslo_concurrency.lockutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.249 2 INFO nova.compute.manager [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Terminating instance
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.250 2 DEBUG nova.compute.manager [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:05:46 compute-0 kernel: tap282dfd9e-9e (unregistering): left promiscuous mode
Oct 14 09:05:46 compute-0 NetworkManager[44885]: <info>  [1760432746.3040] device (tap282dfd9e-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:05:46 compute-0 ovn_controller[152662]: 2025-10-14T09:05:46Z|00552|binding|INFO|Releasing lport 282dfd9e-9e84-450c-a306-8bc55428feb4 from this chassis (sb_readonly=0)
Oct 14 09:05:46 compute-0 ovn_controller[152662]: 2025-10-14T09:05:46Z|00553|binding|INFO|Setting lport 282dfd9e-9e84-450c-a306-8bc55428feb4 down in Southbound
Oct 14 09:05:46 compute-0 ovn_controller[152662]: 2025-10-14T09:05:46Z|00554|binding|INFO|Removing iface tap282dfd9e-9e ovn-installed in OVS
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.376 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:50:93 10.100.0.10'], port_security=['fa:16:3e:de:50:93 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '65c7e6ed-131f-4bca-af69-a1241d048bdb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77fb75b2-483b-47a5-99a5-ae91248b8ed8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55e6e0201a064f1390a998f830140354', 'neutron:revision_number': '4', 'neutron:security_group_ids': '98209f09-275f-46ee-a2c6-16214403e3de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8d21053-8b98-4816-ad89-107cc4743794, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=282dfd9e-9e84-450c-a306-8bc55428feb4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.377 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 282dfd9e-9e84-450c-a306-8bc55428feb4 in datapath 77fb75b2-483b-47a5-99a5-ae91248b8ed8 unbound from our chassis
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.379 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77fb75b2-483b-47a5-99a5-ae91248b8ed8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.380 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6ed1ff-4b40-4c52-b20e-014a1f584aeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.381 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8 namespace which is not needed anymore
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.382 2 DEBUG oslo_concurrency.lockutils [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-e3bc3ac3-6147-40d0-a19c-df111dcf23a5" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.382 2 DEBUG oslo_concurrency.lockutils [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-e3bc3ac3-6147-40d0-a19c-df111dcf23a5" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:46 compute-0 ovn_controller[152662]: 2025-10-14T09:05:46Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a4:ae:0e 10.100.0.14
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:46 compute-0 ovn_controller[152662]: 2025-10-14T09:05:46Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a4:ae:0e 10.100.0.14
Oct 14 09:05:46 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000037.scope: Deactivated successfully.
Oct 14 09:05:46 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000037.scope: Consumed 12.805s CPU time.
Oct 14 09:05:46 compute-0 systemd-machined[214636]: Machine qemu-69-instance-00000037 terminated.
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.401 2 DEBUG nova.objects.instance [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.423 2 DEBUG nova.virt.libvirt.vif [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.423 2 DEBUG nova.network.os_vif_util [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.424 2 DEBUG nova.network.os_vif_util [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.430 2 DEBUG nova.virt.libvirt.guest [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.432 2 DEBUG nova.virt.libvirt.guest [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.436 2 DEBUG nova.virt.libvirt.driver [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Attempting to detach device tape3bc3ac3-61 from instance 47257c6e-4d10-4d8e-af5a-b57db20048ea from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.437 2 DEBUG nova.virt.libvirt.guest [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] detach device xml: <interface type="ethernet">
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:54:94:02"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <target dev="tape3bc3ac3-61"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]: </interface>
Oct 14 09:05:46 compute-0 nova_compute[259627]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.443 2 DEBUG nova.virt.libvirt.guest [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.447 2 DEBUG nova.virt.libvirt.guest [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface>not found in domain: <domain type='kvm' id='67'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <name>instance-00000036</name>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <uuid>47257c6e-4d10-4d8e-af5a-b57db20048ea</uuid>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:05:43</nova:creationTime>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:port uuid="e3bc3ac3-6147-40d0-a19c-df111dcf23a5">
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:port uuid="b624404b-6681-4fc8-a870-dc9418e2de0f">
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:port uuid="be6943f6-df97-4a84-854b-858cc7d3ea1a">
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:05:46 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <system>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <entry name='serial'>47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <entry name='uuid'>47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </system>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <os>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </os>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <features>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </features>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk' index='2'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       </source>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config' index='1'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       </source>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:9a:79:ab'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target dev='tap971d99c2-5a'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:54:94:02'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target dev='tape3bc3ac3-61'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='net1'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:bb:02:94'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target dev='tapb624404b-66'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='net2'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:a4:ae:0e'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target dev='tapbe6943f6-df'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='net3'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log' append='off'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       </target>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/0'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log' append='off'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </console>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </input>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </input>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </input>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <video>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </video>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c314,c342</label>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c314,c342</imagelabel>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:05:46 compute-0 nova_compute[259627]: </domain>
Oct 14 09:05:46 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.450 2 INFO nova.virt.libvirt.driver [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully detached device tape3bc3ac3-61 from instance 47257c6e-4d10-4d8e-af5a-b57db20048ea from the persistent domain config.
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.451 2 DEBUG nova.virt.libvirt.driver [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] (1/8): Attempting to detach device tape3bc3ac3-61 with device alias net1 from instance 47257c6e-4d10-4d8e-af5a-b57db20048ea from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.451 2 DEBUG nova.virt.libvirt.guest [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] detach device xml: <interface type="ethernet">
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:54:94:02"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <target dev="tape3bc3ac3-61"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]: </interface>
Oct 14 09:05:46 compute-0 nova_compute[259627]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.492 2 INFO nova.virt.libvirt.driver [-] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Instance destroyed successfully.
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.493 2 DEBUG nova.objects.instance [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lazy-loading 'resources' on Instance uuid 65c7e6ed-131f-4bca-af69-a1241d048bdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.507 2 DEBUG nova.virt.libvirt.vif [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:05:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-100646485',display_name='tempest-ServersTestManualDisk-server-100646485',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-100646485',id=55,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBFq4tcZMNkmwexYMay7CcSUnt45X5jGbu/ngQCrGssdHqitjMlfE2R1DP+cztwj+Jbcg255ZB+kgmwp3pbM6el/CrOnrVr2V0onKRN9dF6T7lO2ORJc789YDLKzPg0Nog==',key_name='tempest-keypair-532072477',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:05:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='55e6e0201a064f1390a998f830140354',ramdisk_id='',reservation_id='r-5wf160ka',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-748280037',owner_user_name='tempest-ServersTestManualDisk-748280037-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:05:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='64a22b9370d049c0b189508f3f58f0ca',uuid=65c7e6ed-131f-4bca-af69-a1241d048bdb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.507 2 DEBUG nova.network.os_vif_util [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Converting VIF {"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.508 2 DEBUG nova.network.os_vif_util [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:de:50:93,bridge_name='br-int',has_traffic_filtering=True,id=282dfd9e-9e84-450c-a306-8bc55428feb4,network=Network(77fb75b2-483b-47a5-99a5-ae91248b8ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282dfd9e-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.508 2 DEBUG os_vif [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:50:93,bridge_name='br-int',has_traffic_filtering=True,id=282dfd9e-9e84-450c-a306-8bc55428feb4,network=Network(77fb75b2-483b-47a5-99a5-ae91248b8ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282dfd9e-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap282dfd9e-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.517 2 INFO os_vif [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:50:93,bridge_name='br-int',has_traffic_filtering=True,id=282dfd9e-9e84-450c-a306-8bc55428feb4,network=Network(77fb75b2-483b-47a5-99a5-ae91248b8ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282dfd9e-9e')
Oct 14 09:05:46 compute-0 kernel: tape3bc3ac3-61 (unregistering): left promiscuous mode
Oct 14 09:05:46 compute-0 NetworkManager[44885]: <info>  [1760432746.5574] device (tape3bc3ac3-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.569 2 DEBUG nova.virt.libvirt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Received event <DeviceRemovedEvent: 1760432746.5693483, 47257c6e-4d10-4d8e-af5a-b57db20048ea => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 14 09:05:46 compute-0 ovn_controller[152662]: 2025-10-14T09:05:46Z|00555|binding|INFO|Releasing lport e3bc3ac3-6147-40d0-a19c-df111dcf23a5 from this chassis (sb_readonly=0)
Oct 14 09:05:46 compute-0 ovn_controller[152662]: 2025-10-14T09:05:46Z|00556|binding|INFO|Setting lport e3bc3ac3-6147-40d0-a19c-df111dcf23a5 down in Southbound
Oct 14 09:05:46 compute-0 ovn_controller[152662]: 2025-10-14T09:05:46Z|00557|binding|INFO|Removing iface tape3bc3ac3-61 ovn-installed in OVS
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.579 2 DEBUG nova.virt.libvirt.driver [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Start waiting for the detach event from libvirt for device tape3bc3ac3-61 with device alias net1 for instance 47257c6e-4d10-4d8e-af5a-b57db20048ea _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 14 09:05:46 compute-0 neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8[320010]: [NOTICE]   (320015) : haproxy version is 2.8.14-c23fe91
Oct 14 09:05:46 compute-0 neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8[320010]: [NOTICE]   (320015) : path to executable is /usr/sbin/haproxy
Oct 14 09:05:46 compute-0 neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8[320010]: [WARNING]  (320015) : Exiting Master process...
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.580 2 DEBUG nova.virt.libvirt.guest [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:05:46 compute-0 neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8[320010]: [ALERT]    (320015) : Current worker (320017) exited with code 143 (Terminated)
Oct 14 09:05:46 compute-0 neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8[320010]: [WARNING]  (320015) : All workers exited. Exiting... (0)
Oct 14 09:05:46 compute-0 systemd[1]: libpod-4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d.scope: Deactivated successfully.
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.587 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:94:02 10.100.0.4'], port_security=['fa:16:3e:54:94:02 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '47257c6e-4d10-4d8e-af5a-b57db20048ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e3bc3ac3-6147-40d0-a19c-df111dcf23a5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:05:46 compute-0 podman[320558]: 2025-10-14 09:05:46.59491666 +0000 UTC m=+0.089830413 container died 4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.598 2 DEBUG nova.virt.libvirt.guest [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface>not found in domain: <domain type='kvm' id='67'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <name>instance-00000036</name>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <uuid>47257c6e-4d10-4d8e-af5a-b57db20048ea</uuid>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:05:43</nova:creationTime>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:port uuid="e3bc3ac3-6147-40d0-a19c-df111dcf23a5">
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:port uuid="b624404b-6681-4fc8-a870-dc9418e2de0f">
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:port uuid="be6943f6-df97-4a84-854b-858cc7d3ea1a">
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:05:46 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <system>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <entry name='serial'>47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <entry name='uuid'>47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </system>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <os>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </os>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <features>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </features>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk' index='2'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       </source>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config' index='1'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       </source>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:9a:79:ab'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target dev='tap971d99c2-5a'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:bb:02:94'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target dev='tapb624404b-66'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='net2'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:a4:ae:0e'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target dev='tapbe6943f6-df'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='net3'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log' append='off'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       </target>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/0'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log' append='off'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </console>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </input>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </input>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </input>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <video>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </video>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c314,c342</label>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c314,c342</imagelabel>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:05:46 compute-0 nova_compute[259627]: </domain>
Oct 14 09:05:46 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.599 2 INFO nova.virt.libvirt.driver [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully detached device tape3bc3ac3-61 from instance 47257c6e-4d10-4d8e-af5a-b57db20048ea from the live domain config.
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.600 2 DEBUG nova.virt.libvirt.vif [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.601 2 DEBUG nova.network.os_vif_util [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.602 2 DEBUG nova.network.os_vif_util [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.603 2 DEBUG os_vif [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.605 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3bc3ac3-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.612 2 INFO os_vif [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61')
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.613 2 DEBUG nova.virt.libvirt.guest [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:05:46</nova:creationTime>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:port uuid="b624404b-6681-4fc8-a870-dc9418e2de0f">
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     <nova:port uuid="be6943f6-df97-4a84-854b-858cc7d3ea1a">
Oct 14 09:05:46 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 09:05:46 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:46 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:05:46 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:05:46 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 09:05:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d-userdata-shm.mount: Deactivated successfully.
Oct 14 09:05:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc8ef72225e5c259b1a8bbd67e6e8947df3e042df52c25d7dffcdfcd0e7b97cd-merged.mount: Deactivated successfully.
Oct 14 09:05:46 compute-0 podman[320558]: 2025-10-14 09:05:46.646598403 +0000 UTC m=+0.141512156 container cleanup 4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:05:46 compute-0 systemd[1]: libpod-conmon-4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d.scope: Deactivated successfully.
Oct 14 09:05:46 compute-0 podman[320607]: 2025-10-14 09:05:46.712114896 +0000 UTC m=+0.042131478 container remove 4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.723 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d15b8f0e-a6d3-4fe6-91ce-67c024c07df3]: (4, ('Tue Oct 14 09:05:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8 (4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d)\n4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d\nTue Oct 14 09:05:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8 (4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d)\n4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.725 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a2abaf2a-87ee-4bb8-a4b2-88dd5c5c098f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.726 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77fb75b2-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:46 compute-0 kernel: tap77fb75b2-40: left promiscuous mode
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.736 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4b1df0-582e-4e37-8e3e-2f10d7364c7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.766 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[de879427-2cff-4fa1-8210-f2ae941c0494]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.767 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2efda798-bb92-4a39-a761-3b4ee4d6772d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.782 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7f23c88d-3d27-4df2-9c2f-a63079ce4436]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649923, 'reachable_time': 42307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320622, 'error': None, 'target': 'ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d77fb75b2\x2d483b\x2d47a5\x2d99a5\x2dae91248b8ed8.mount: Deactivated successfully.
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.786 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.786 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7ef62e-6f2c-4aa1-be82-4c84993434fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.787 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e3bc3ac3-6147-40d0-a19c-df111dcf23a5 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.789 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.806 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e1474c50-2308-4562-a915-35987ed8d302]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.846 2 DEBUG nova.compute.manager [req-d8b59368-ff0d-4b96-913a-89709495df09 req-44dea7d5-cb50-4469-b83b-ea3abfee54f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-be6943f6-df97-4a84-854b-858cc7d3ea1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.847 2 DEBUG oslo_concurrency.lockutils [req-d8b59368-ff0d-4b96-913a-89709495df09 req-44dea7d5-cb50-4469-b83b-ea3abfee54f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.848 2 DEBUG oslo_concurrency.lockutils [req-d8b59368-ff0d-4b96-913a-89709495df09 req-44dea7d5-cb50-4469-b83b-ea3abfee54f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.847 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6a8b2511-45d1-49b5-ba94-235d2281658f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.848 2 DEBUG oslo_concurrency.lockutils [req-d8b59368-ff0d-4b96-913a-89709495df09 req-44dea7d5-cb50-4469-b83b-ea3abfee54f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.848 2 DEBUG nova.compute.manager [req-d8b59368-ff0d-4b96-913a-89709495df09 req-44dea7d5-cb50-4469-b83b-ea3abfee54f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-be6943f6-df97-4a84-854b-858cc7d3ea1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.848 2 WARNING nova.compute.manager [req-d8b59368-ff0d-4b96-913a-89709495df09 req-44dea7d5-cb50-4469-b83b-ea3abfee54f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-be6943f6-df97-4a84-854b-858cc7d3ea1a for instance with vm_state active and task_state None.
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.852 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9c53a2-71a2-4d48-b59a-97f49ece8d2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.884 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0db64d-65de-4638-a4f1-3326086cca03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.909 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1321533e-1eee-46e1-b800-29be6c80ebde]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647447, 'reachable_time': 19128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320631, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.929 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a5dd413c-75e8-4a18-bc49-3a0f059268bc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647459, 'tstamp': 647459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320632, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647463, 'tstamp': 647463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320632, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.930 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.934 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.934 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.934 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.935 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.960 2 DEBUG oslo_concurrency.lockutils [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:46 compute-0 ceph-mon[74249]: pgmap v1505: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 3.9 MiB/s wr, 104 op/s
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.961 2 DEBUG oslo_concurrency.lockutils [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.961 2 DEBUG nova.compute.manager [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.964 2 DEBUG nova.compute.manager [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.965 2 DEBUG nova.objects.instance [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'flavor' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.975 2 INFO nova.virt.libvirt.driver [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Deleting instance files /var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb_del
Oct 14 09:05:46 compute-0 nova_compute[259627]: 2025-10-14 09:05:46.976 2 INFO nova.virt.libvirt.driver [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Deletion of /var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb_del complete
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.007 2 DEBUG nova.virt.libvirt.driver [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.034 2 INFO nova.compute.manager [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.034 2 DEBUG oslo.service.loopingcall [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.035 2 DEBUG nova.compute.manager [-] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.035 2 DEBUG nova.network.neutron [-] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.541 2 DEBUG oslo_concurrency.lockutils [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.541 2 DEBUG oslo_concurrency.lockutils [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.542 2 DEBUG nova.network.neutron [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.629 2 DEBUG nova.compute.manager [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.629 2 DEBUG oslo_concurrency.lockutils [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.629 2 DEBUG oslo_concurrency.lockutils [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.629 2 DEBUG oslo_concurrency.lockutils [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.630 2 DEBUG nova.compute.manager [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] No waiting events found dispatching network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.630 2 WARNING nova.compute.manager [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received unexpected event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 for instance with vm_state active and task_state powering-off.
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.630 2 DEBUG nova.compute.manager [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received event network-vif-unplugged-282dfd9e-9e84-450c-a306-8bc55428feb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.630 2 DEBUG oslo_concurrency.lockutils [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.630 2 DEBUG oslo_concurrency.lockutils [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.630 2 DEBUG oslo_concurrency.lockutils [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.631 2 DEBUG nova.compute.manager [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] No waiting events found dispatching network-vif-unplugged-282dfd9e-9e84-450c-a306-8bc55428feb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.631 2 DEBUG nova.compute.manager [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received event network-vif-unplugged-282dfd9e-9e84-450c-a306-8bc55428feb4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.631 2 DEBUG nova.compute.manager [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received event network-vif-plugged-282dfd9e-9e84-450c-a306-8bc55428feb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.631 2 DEBUG oslo_concurrency.lockutils [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.631 2 DEBUG oslo_concurrency.lockutils [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.631 2 DEBUG oslo_concurrency.lockutils [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.632 2 DEBUG nova.compute.manager [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] No waiting events found dispatching network-vif-plugged-282dfd9e-9e84-450c-a306-8bc55428feb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:05:47 compute-0 nova_compute[259627]: 2025-10-14 09:05:47.632 2 WARNING nova.compute.manager [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received unexpected event network-vif-plugged-282dfd9e-9e84-450c-a306-8bc55428feb4 for instance with vm_state active and task_state deleting.
Oct 14 09:05:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:05:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 37 KiB/s wr, 12 op/s
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.010 2 DEBUG nova.network.neutron [-] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.030 2 INFO nova.compute.manager [-] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Took 1.00 seconds to deallocate network for instance.
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.084 2 DEBUG oslo_concurrency.lockutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.085 2 DEBUG oslo_concurrency.lockutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.186 2 DEBUG oslo_concurrency.processutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:05:48 compute-0 sudo[320653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:05:48 compute-0 sudo[320653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:05:48 compute-0 sudo[320653]: pam_unix(sudo:session): session closed for user root
Oct 14 09:05:48 compute-0 sudo[320678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:05:48 compute-0 sudo[320678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:05:48 compute-0 sudo[320678]: pam_unix(sudo:session): session closed for user root
Oct 14 09:05:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:05:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/990524553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.670 2 DEBUG oslo_concurrency.processutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.681 2 DEBUG nova.compute.provider_tree [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:05:48 compute-0 sudo[320703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:05:48 compute-0 sudo[320703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:05:48 compute-0 sudo[320703]: pam_unix(sudo:session): session closed for user root
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.709 2 DEBUG nova.scheduler.client.report [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.747 2 DEBUG oslo_concurrency.lockutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:48 compute-0 sudo[320730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:05:48 compute-0 sudo[320730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.781 2 INFO nova.scheduler.client.report [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Deleted allocations for instance 65c7e6ed-131f-4bca-af69-a1241d048bdb
Oct 14 09:05:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:48.826 162547 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 0e55393a-241b-4e02-88ac-00433344f70e with type ""
Oct 14 09:05:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:48.827 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:ae:0e 10.100.0.14'], port_security=['fa:16:3e:a4:ae:0e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-4377624', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '47257c6e-4d10-4d8e-af5a-b57db20048ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-4377624', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=be6943f6-df97-4a84-854b-858cc7d3ea1a) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:05:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:48.828 162547 INFO neutron.agent.ovn.metadata.agent [-] Port be6943f6-df97-4a84-854b-858cc7d3ea1a in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis
Oct 14 09:05:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:48.830 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:05:48 compute-0 ovn_controller[152662]: 2025-10-14T09:05:48Z|00558|binding|INFO|Removing iface tapbe6943f6-df ovn-installed in OVS
Oct 14 09:05:48 compute-0 ovn_controller[152662]: 2025-10-14T09:05:48Z|00559|binding|INFO|Removing lport be6943f6-df97-4a84-854b-858cc7d3ea1a ovn-installed in OVS
Oct 14 09:05:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:48.876 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8d4988-93d3-4214-b8fe-84014cf0cd75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.877 2 INFO nova.network.neutron [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Port e3bc3ac3-6147-40d0-a19c-df111dcf23a5 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.888 2 DEBUG oslo_concurrency.lockutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:48.915 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[51e06dcb-082b-4124-914b-f451ed2dd85d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:48.919 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a55b1c-6829-426c-a519-a6ba9a2965b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.945 2 DEBUG nova.compute.manager [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-unplugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.946 2 DEBUG oslo_concurrency.lockutils [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.946 2 DEBUG oslo_concurrency.lockutils [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.947 2 DEBUG oslo_concurrency.lockutils [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.947 2 DEBUG nova.compute.manager [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-unplugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.948 2 WARNING nova.compute.manager [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-unplugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 for instance with vm_state active and task_state None.
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.948 2 DEBUG nova.compute.manager [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.948 2 DEBUG oslo_concurrency.lockutils [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.949 2 DEBUG oslo_concurrency.lockutils [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.949 2 DEBUG oslo_concurrency.lockutils [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.949 2 DEBUG nova.compute.manager [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.950 2 WARNING nova.compute.manager [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 for instance with vm_state active and task_state None.
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.950 2 DEBUG nova.compute.manager [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-deleted-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.951 2 INFO nova.compute.manager [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Neutron deleted interface e3bc3ac3-6147-40d0-a19c-df111dcf23a5; detaching it from the instance and deleting it from the info cache
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.951 2 DEBUG nova.network.neutron [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:48.958 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3d1b34-47d6-44c4-b912-fac68af83c04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:48 compute-0 ceph-mon[74249]: pgmap v1506: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 37 KiB/s wr, 12 op/s
Oct 14 09:05:48 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/990524553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:05:48 compute-0 nova_compute[259627]: 2025-10-14 09:05:48.973 2 DEBUG nova.objects.instance [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lazy-loading 'system_metadata' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:05:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:48.982 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[142d9c40-16bf-48ea-b865-c6f3d6ca2fc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 1084, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 1084, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647447, 'reachable_time': 19128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320762, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.002 2 DEBUG nova.objects.instance [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lazy-loading 'flavor' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.003 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0d079179-1bc3-45dd-90dd-c96aedac38ff]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647459, 'tstamp': 647459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320772, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647463, 'tstamp': 647463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320772, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.005 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.009 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.010 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.011 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.011 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.029 2 DEBUG nova.virt.libvirt.vif [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.029 2 DEBUG nova.network.os_vif_util [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converting VIF {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.030 2 DEBUG nova.network.os_vif_util [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.035 2 DEBUG nova.virt.libvirt.guest [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.040 2 DEBUG nova.virt.libvirt.guest [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface>not found in domain: <domain type='kvm' id='67'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <name>instance-00000036</name>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <uuid>47257c6e-4d10-4d8e-af5a-b57db20048ea</uuid>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:05:46</nova:creationTime>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:port uuid="b624404b-6681-4fc8-a870-dc9418e2de0f">
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:port uuid="be6943f6-df97-4a84-854b-858cc7d3ea1a">
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:05:49 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <system>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <entry name='serial'>47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <entry name='uuid'>47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </system>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <os>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </os>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <features>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </features>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk' index='2'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       </source>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config' index='1'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       </source>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:9a:79:ab'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target dev='tap971d99c2-5a'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:bb:02:94'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target dev='tapb624404b-66'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='net2'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:a4:ae:0e'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target dev='tapbe6943f6-df'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='net3'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log' append='off'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       </target>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/0'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log' append='off'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </console>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </input>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </input>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </input>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <video>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </video>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c314,c342</label>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c314,c342</imagelabel>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:05:49 compute-0 nova_compute[259627]: </domain>
Oct 14 09:05:49 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.043 2 DEBUG nova.virt.libvirt.guest [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.045 2 DEBUG oslo_concurrency.lockutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.046 2 DEBUG oslo_concurrency.lockutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.046 2 DEBUG oslo_concurrency.lockutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.047 2 DEBUG oslo_concurrency.lockutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.047 2 DEBUG oslo_concurrency.lockutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.048 2 INFO nova.compute.manager [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Terminating instance
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.049 2 DEBUG nova.compute.manager [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.057 2 DEBUG nova.virt.libvirt.guest [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface>not found in domain: <domain type='kvm' id='67'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <name>instance-00000036</name>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <uuid>47257c6e-4d10-4d8e-af5a-b57db20048ea</uuid>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:05:46</nova:creationTime>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:port uuid="b624404b-6681-4fc8-a870-dc9418e2de0f">
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:port uuid="be6943f6-df97-4a84-854b-858cc7d3ea1a">
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:05:49 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <system>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <entry name='serial'>47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <entry name='uuid'>47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </system>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <os>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </os>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <features>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </features>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk' index='2'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       </source>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config' index='1'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       </source>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:9a:79:ab'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target dev='tap971d99c2-5a'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:bb:02:94'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target dev='tapb624404b-66'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='net2'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:a4:ae:0e'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target dev='tapbe6943f6-df'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='net3'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log' append='off'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       </target>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/0'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log' append='off'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </console>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </input>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </input>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </input>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <video>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </video>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c314,c342</label>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c314,c342</imagelabel>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:05:49 compute-0 nova_compute[259627]: </domain>
Oct 14 09:05:49 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.061 2 WARNING nova.virt.libvirt.driver [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Detaching interface fa:16:3e:54:94:02 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tape3bc3ac3-61' not found.
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.063 2 DEBUG nova.virt.libvirt.vif [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.063 2 DEBUG nova.network.os_vif_util [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converting VIF {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.065 2 DEBUG nova.network.os_vif_util [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.066 2 DEBUG os_vif [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.068 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3bc3ac3-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.069 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.071 2 INFO os_vif [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61')
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.072 2 DEBUG nova.virt.libvirt.guest [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:05:49</nova:creationTime>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:port uuid="b624404b-6681-4fc8-a870-dc9418e2de0f">
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     <nova:port uuid="be6943f6-df97-4a84-854b-858cc7d3ea1a">
Oct 14 09:05:49 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 09:05:49 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:05:49 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:05:49 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:05:49 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.075 2 DEBUG nova.compute.manager [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received event network-vif-deleted-282dfd9e-9e84-450c-a306-8bc55428feb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:49 compute-0 kernel: tap971d99c2-5a (unregistering): left promiscuous mode
Oct 14 09:05:49 compute-0 NetworkManager[44885]: <info>  [1760432749.1254] device (tap971d99c2-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:05:49 compute-0 ovn_controller[152662]: 2025-10-14T09:05:49Z|00560|binding|INFO|Releasing lport 971d99c2-5a60-4cac-8f99-e819d71e419c from this chassis (sb_readonly=0)
Oct 14 09:05:49 compute-0 ovn_controller[152662]: 2025-10-14T09:05:49Z|00561|binding|INFO|Setting lport 971d99c2-5a60-4cac-8f99-e819d71e419c down in Southbound
Oct 14 09:05:49 compute-0 ovn_controller[152662]: 2025-10-14T09:05:49Z|00562|binding|INFO|Removing iface tap971d99c2-5a ovn-installed in OVS
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.149 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:79:ab 10.100.0.3'], port_security=['fa:16:3e:9a:79:ab 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '47257c6e-4d10-4d8e-af5a-b57db20048ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c96b2336-ed00-4da6-b121-ce1c9aa6f017', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.209'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=971d99c2-5a60-4cac-8f99-e819d71e419c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.151 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 971d99c2-5a60-4cac-8f99-e819d71e419c in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.154 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:05:49 compute-0 kernel: tapb624404b-66 (unregistering): left promiscuous mode
Oct 14 09:05:49 compute-0 NetworkManager[44885]: <info>  [1760432749.1653] device (tapb624404b-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.174 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5312c828-f5c5-43de-9618-02ad51c39bfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:49 compute-0 ovn_controller[152662]: 2025-10-14T09:05:49Z|00563|binding|INFO|Releasing lport b624404b-6681-4fc8-a870-dc9418e2de0f from this chassis (sb_readonly=0)
Oct 14 09:05:49 compute-0 ovn_controller[152662]: 2025-10-14T09:05:49Z|00564|binding|INFO|Setting lport b624404b-6681-4fc8-a870-dc9418e2de0f down in Southbound
Oct 14 09:05:49 compute-0 ovn_controller[152662]: 2025-10-14T09:05:49Z|00565|binding|INFO|Removing iface tapb624404b-66 ovn-installed in OVS
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.183 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:02:94 10.100.0.9'], port_security=['fa:16:3e:bb:02:94 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '47257c6e-4d10-4d8e-af5a-b57db20048ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b624404b-6681-4fc8-a870-dc9418e2de0f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:05:49 compute-0 kernel: tapbe6943f6-df (unregistering): left promiscuous mode
Oct 14 09:05:49 compute-0 NetworkManager[44885]: <info>  [1760432749.2048] device (tapbe6943f6-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.223 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[def7ee9f-9db3-4b05-ac1b-558473daeaca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.227 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9d90a313-c399-443c-a413-afe743c5c818]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.256 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[27db2119-07d4-4b91-b779-dd1a3a97e210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:49 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000036.scope: Deactivated successfully.
Oct 14 09:05:49 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000036.scope: Consumed 14.566s CPU time.
Oct 14 09:05:49 compute-0 systemd-machined[214636]: Machine qemu-67-instance-00000036 terminated.
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.283 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f61496-b923-42a6-9d23-553ab9df2513]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 1084, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 1084, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647447, 'reachable_time': 19128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320801, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.301 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cff62ce2-7d35-4014-b56c-bf31ca34f74e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647459, 'tstamp': 647459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320804, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647463, 'tstamp': 647463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320804, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.303 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:49 compute-0 NetworkManager[44885]: <info>  [1760432749.3096] manager: (tapb624404b-66): new Tun device (/org/freedesktop/NetworkManager/Devices/243)
Oct 14 09:05:49 compute-0 NetworkManager[44885]: <info>  [1760432749.3169] manager: (tapbe6943f6-df): new Tun device (/org/freedesktop/NetworkManager/Devices/244)
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.324 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.324 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.325 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.325 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.327 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b624404b-6681-4fc8-a870-dc9418e2de0f in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.331 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc2d149f-aebf-406a-aed2-5161dd22b079, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.332 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e30b31a8-c25c-4fad-a877-c7702eee1123]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.333 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 namespace which is not needed anymore
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.347 2 INFO nova.virt.libvirt.driver [-] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Instance destroyed successfully.
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.347 2 DEBUG nova.objects.instance [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'resources' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.379 2 DEBUG nova.virt.libvirt.vif [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.379 2 DEBUG nova.network.os_vif_util [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.380 2 DEBUG nova.network.os_vif_util [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:79:ab,bridge_name='br-int',has_traffic_filtering=True,id=971d99c2-5a60-4cac-8f99-e819d71e419c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap971d99c2-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.381 2 DEBUG os_vif [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:79:ab,bridge_name='br-int',has_traffic_filtering=True,id=971d99c2-5a60-4cac-8f99-e819d71e419c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap971d99c2-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap971d99c2-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.393 2 INFO os_vif [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:79:ab,bridge_name='br-int',has_traffic_filtering=True,id=971d99c2-5a60-4cac-8f99-e819d71e419c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap971d99c2-5a')
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.394 2 DEBUG nova.virt.libvirt.vif [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.394 2 DEBUG nova.network.os_vif_util [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.395 2 DEBUG nova.network.os_vif_util [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:02:94,bridge_name='br-int',has_traffic_filtering=True,id=b624404b-6681-4fc8-a870-dc9418e2de0f,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb624404b-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.395 2 DEBUG os_vif [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:02:94,bridge_name='br-int',has_traffic_filtering=True,id=b624404b-6681-4fc8-a870-dc9418e2de0f,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb624404b-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.397 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb624404b-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.403 2 INFO os_vif [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:02:94,bridge_name='br-int',has_traffic_filtering=True,id=b624404b-6681-4fc8-a870-dc9418e2de0f,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb624404b-66')
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.404 2 DEBUG nova.virt.libvirt.vif [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.404 2 DEBUG nova.network.os_vif_util [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.405 2 DEBUG nova.network.os_vif_util [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:ae:0e,bridge_name='br-int',has_traffic_filtering=True,id=be6943f6-df97-4a84-854b-858cc7d3ea1a,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe6943f6-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.406 2 DEBUG os_vif [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:ae:0e,bridge_name='br-int',has_traffic_filtering=True,id=be6943f6-df97-4a84-854b-858cc7d3ea1a,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe6943f6-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.408 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe6943f6-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.414 2 INFO os_vif [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:ae:0e,bridge_name='br-int',has_traffic_filtering=True,id=be6943f6-df97-4a84-854b-858cc7d3ea1a,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe6943f6-df')
Oct 14 09:05:49 compute-0 sudo[320730]: pam_unix(sudo:session): session closed for user root
Oct 14 09:05:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:05:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:05:49 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[319321]: [NOTICE]   (319328) : haproxy version is 2.8.14-c23fe91
Oct 14 09:05:49 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[319321]: [NOTICE]   (319328) : path to executable is /usr/sbin/haproxy
Oct 14 09:05:49 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[319321]: [WARNING]  (319328) : Exiting Master process...
Oct 14 09:05:49 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[319321]: [ALERT]    (319328) : Current worker (319330) exited with code 143 (Terminated)
Oct 14 09:05:49 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[319321]: [WARNING]  (319328) : All workers exited. Exiting... (0)
Oct 14 09:05:49 compute-0 systemd[1]: libpod-7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3.scope: Deactivated successfully.
Oct 14 09:05:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:05:49 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:05:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:05:49 compute-0 podman[320876]: 2025-10-14 09:05:49.510521263 +0000 UTC m=+0.050273569 container died 7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:05:49 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:05:49 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev e626c556-85d5-404a-8e2e-b7dc993cefd4 does not exist
Oct 14 09:05:49 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev e1add75d-7bb7-41bd-ad9d-8219cc6836fd does not exist
Oct 14 09:05:49 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 8a2c7db3-d1fc-4234-b570-814908acebbd does not exist
Oct 14 09:05:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:05:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:05:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:05:49 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:05:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:05:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:05:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3-userdata-shm.mount: Deactivated successfully.
Oct 14 09:05:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-85f03912ee5e1d4e48b9e053b6890e84bd070633917fc91412b280f99baf2363-merged.mount: Deactivated successfully.
Oct 14 09:05:49 compute-0 podman[320876]: 2025-10-14 09:05:49.556581848 +0000 UTC m=+0.096334144 container cleanup 7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:05:49 compute-0 systemd[1]: libpod-conmon-7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3.scope: Deactivated successfully.
Oct 14 09:05:49 compute-0 sudo[320905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:05:49 compute-0 sudo[320905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:05:49 compute-0 sudo[320905]: pam_unix(sudo:session): session closed for user root
Oct 14 09:05:49 compute-0 podman[320929]: 2025-10-14 09:05:49.635539312 +0000 UTC m=+0.058434690 container remove 7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:05:49 compute-0 sudo[320938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:05:49 compute-0 sudo[320938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:05:49 compute-0 sudo[320938]: pam_unix(sudo:session): session closed for user root
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.643 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5d32840c-1212-4a1d-b23b-72ca7a8555ad]: (4, ('Tue Oct 14 09:05:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 (7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3)\n7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3\nTue Oct 14 09:05:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 (7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3)\n7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.646 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f62f71b6-2ce9-4b47-b720-93894ceb8b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.647 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:49 compute-0 kernel: tapfc2d149f-a0: left promiscuous mode
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.674 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[60d73d7d-6ba3-4237-a6e4-1fe3affa48f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.694 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[df039852-63de-4763-9738-f31916cc927e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.696 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[851ff194-a27f-4898-a361-b9b22ab38ed6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:49 compute-0 sudo[320970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:05:49 compute-0 sudo[320970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:05:49 compute-0 sudo[320970]: pam_unix(sudo:session): session closed for user root
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.712 2 DEBUG nova.compute.manager [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-unplugged-971d99c2-5a60-4cac-8f99-e819d71e419c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.712 2 DEBUG oslo_concurrency.lockutils [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.713 2 DEBUG oslo_concurrency.lockutils [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.713 2 DEBUG oslo_concurrency.lockutils [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.713 2 DEBUG nova.compute.manager [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-unplugged-971d99c2-5a60-4cac-8f99-e819d71e419c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.713 2 DEBUG nova.compute.manager [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-unplugged-971d99c2-5a60-4cac-8f99-e819d71e419c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.714 2 DEBUG nova.compute.manager [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-971d99c2-5a60-4cac-8f99-e819d71e419c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.714 2 DEBUG oslo_concurrency.lockutils [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.714 2 DEBUG oslo_concurrency.lockutils [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.714 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f961b8d0-b59d-4393-bc1e-3ab92fa24f15]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647439, 'reachable_time': 28959, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320996, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.715 2 DEBUG oslo_concurrency.lockutils [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.715 2 DEBUG nova.compute.manager [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-971d99c2-5a60-4cac-8f99-e819d71e419c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.715 2 WARNING nova.compute.manager [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-971d99c2-5a60-4cac-8f99-e819d71e419c for instance with vm_state active and task_state deleting.
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.715 2 DEBUG nova.compute.manager [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-unplugged-b624404b-6681-4fc8-a870-dc9418e2de0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.716 2 DEBUG oslo_concurrency.lockutils [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.716 2 DEBUG oslo_concurrency.lockutils [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.716 2 DEBUG oslo_concurrency.lockutils [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:49 compute-0 systemd[1]: run-netns-ovnmeta\x2dfc2d149f\x2daebf\x2d406a\x2daed2\x2d5161dd22b079.mount: Deactivated successfully.
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.717 2 DEBUG nova.compute.manager [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-unplugged-b624404b-6681-4fc8-a870-dc9418e2de0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:05:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 37 KiB/s wr, 12 op/s
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.717 2 DEBUG nova.compute.manager [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-unplugged-b624404b-6681-4fc8-a870-dc9418e2de0f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.718 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:05:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.718 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[13a9c168-3405-4de1-b6ce-05b26f392044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:05:49 compute-0 sudo[320998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:05:49 compute-0 sudo[320998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.852 2 INFO nova.virt.libvirt.driver [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Deleting instance files /var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea_del
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.853 2 INFO nova.virt.libvirt.driver [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Deletion of /var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea_del complete
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.914 2 INFO nova.compute.manager [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Took 0.86 seconds to destroy the instance on the hypervisor.
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.915 2 DEBUG oslo.service.loopingcall [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.916 2 DEBUG nova.compute.manager [-] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:05:49 compute-0 nova_compute[259627]: 2025-10-14 09:05:49.916 2 DEBUG nova.network.neutron [-] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:05:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:05:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:05:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:05:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:05:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:05:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:05:50 compute-0 podman[321064]: 2025-10-14 09:05:50.107228139 +0000 UTC m=+0.037048634 container create b5d8a2a28e097d584ae7137389fdcdb8a677adf6424b0fb02d999f5c437b8856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nobel, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 14 09:05:50 compute-0 systemd[1]: Started libpod-conmon-b5d8a2a28e097d584ae7137389fdcdb8a677adf6424b0fb02d999f5c437b8856.scope.
Oct 14 09:05:50 compute-0 podman[321064]: 2025-10-14 09:05:50.091987893 +0000 UTC m=+0.021808408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:05:50 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:05:50 compute-0 podman[321064]: 2025-10-14 09:05:50.207089938 +0000 UTC m=+0.136910443 container init b5d8a2a28e097d584ae7137389fdcdb8a677adf6424b0fb02d999f5c437b8856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nobel, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:05:50 compute-0 podman[321064]: 2025-10-14 09:05:50.217089884 +0000 UTC m=+0.146910409 container start b5d8a2a28e097d584ae7137389fdcdb8a677adf6424b0fb02d999f5c437b8856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nobel, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:05:50 compute-0 podman[321064]: 2025-10-14 09:05:50.220997021 +0000 UTC m=+0.150817536 container attach b5d8a2a28e097d584ae7137389fdcdb8a677adf6424b0fb02d999f5c437b8856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:05:50 compute-0 busy_nobel[321080]: 167 167
Oct 14 09:05:50 compute-0 systemd[1]: libpod-b5d8a2a28e097d584ae7137389fdcdb8a677adf6424b0fb02d999f5c437b8856.scope: Deactivated successfully.
Oct 14 09:05:50 compute-0 podman[321064]: 2025-10-14 09:05:50.223167154 +0000 UTC m=+0.152987649 container died b5d8a2a28e097d584ae7137389fdcdb8a677adf6424b0fb02d999f5c437b8856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nobel, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 14 09:05:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2a91237c91560e6ddf1a565d87879be4336ff570fa77eab1610301310c752ab-merged.mount: Deactivated successfully.
Oct 14 09:05:50 compute-0 podman[321064]: 2025-10-14 09:05:50.262658747 +0000 UTC m=+0.192479252 container remove b5d8a2a28e097d584ae7137389fdcdb8a677adf6424b0fb02d999f5c437b8856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nobel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:05:50 compute-0 systemd[1]: libpod-conmon-b5d8a2a28e097d584ae7137389fdcdb8a677adf6424b0fb02d999f5c437b8856.scope: Deactivated successfully.
Oct 14 09:05:50 compute-0 nova_compute[259627]: 2025-10-14 09:05:50.317 2 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"type": "PortNotFound", "message": "Port be6943f6-df97-4a84-854b-858cc7d3ea1a could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 14 09:05:50 compute-0 nova_compute[259627]: 2025-10-14 09:05:50.318 2 DEBUG nova.network.neutron [-] Unable to show port be6943f6-df97-4a84-854b-858cc7d3ea1a as it no longer exists. _unbind_ports /usr/lib/python3.9/site-packages/nova/network/neutron.py:666
Oct 14 09:05:50 compute-0 podman[321103]: 2025-10-14 09:05:50.512329445 +0000 UTC m=+0.074939566 container create 4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pascal, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:05:50 compute-0 systemd[1]: Started libpod-conmon-4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c.scope.
Oct 14 09:05:50 compute-0 podman[321103]: 2025-10-14 09:05:50.487539565 +0000 UTC m=+0.050149686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:05:50 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af863575d0e18abe218bca6d0bd5ecf07c1d32ffd7a883805ec7f69f3bb3e71b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af863575d0e18abe218bca6d0bd5ecf07c1d32ffd7a883805ec7f69f3bb3e71b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af863575d0e18abe218bca6d0bd5ecf07c1d32ffd7a883805ec7f69f3bb3e71b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af863575d0e18abe218bca6d0bd5ecf07c1d32ffd7a883805ec7f69f3bb3e71b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af863575d0e18abe218bca6d0bd5ecf07c1d32ffd7a883805ec7f69f3bb3e71b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:05:50 compute-0 podman[321103]: 2025-10-14 09:05:50.631421978 +0000 UTC m=+0.194032109 container init 4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pascal, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:05:50 compute-0 podman[321103]: 2025-10-14 09:05:50.638615136 +0000 UTC m=+0.201225217 container start 4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 09:05:50 compute-0 podman[321103]: 2025-10-14 09:05:50.642698296 +0000 UTC m=+0.205308407 container attach 4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:05:50 compute-0 podman[321119]: 2025-10-14 09:05:50.643752042 +0000 UTC m=+0.085397754 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 09:05:50 compute-0 podman[321117]: 2025-10-14 09:05:50.687450368 +0000 UTC m=+0.131841978 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:05:51 compute-0 nova_compute[259627]: 2025-10-14 09:05:51.079 2 DEBUG nova.network.neutron [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:51 compute-0 nova_compute[259627]: 2025-10-14 09:05:51.103 2 DEBUG oslo_concurrency.lockutils [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:05:51 compute-0 nova_compute[259627]: 2025-10-14 09:05:51.137 2 DEBUG oslo_concurrency.lockutils [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-e3bc3ac3-6147-40d0-a19c-df111dcf23a5" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:51 compute-0 nova_compute[259627]: 2025-10-14 09:05:51.191 2 DEBUG nova.compute.manager [req-9add1c9a-008a-40f9-a517-2700431f6bd6 req-933a9304-8036-4206-9777-20625cd897d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-deleted-be6943f6-df97-4a84-854b-858cc7d3ea1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:51 compute-0 nova_compute[259627]: 2025-10-14 09:05:51.192 2 INFO nova.compute.manager [req-9add1c9a-008a-40f9-a517-2700431f6bd6 req-933a9304-8036-4206-9777-20625cd897d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Neutron deleted interface be6943f6-df97-4a84-854b-858cc7d3ea1a; detaching it from the instance and deleting it from the info cache
Oct 14 09:05:51 compute-0 nova_compute[259627]: 2025-10-14 09:05:51.193 2 DEBUG nova.network.neutron [req-9add1c9a-008a-40f9-a517-2700431f6bd6 req-933a9304-8036-4206-9777-20625cd897d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:51 compute-0 nova_compute[259627]: 2025-10-14 09:05:51.219 2 DEBUG nova.compute.manager [req-9add1c9a-008a-40f9-a517-2700431f6bd6 req-933a9304-8036-4206-9777-20625cd897d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Detach interface failed, port_id=be6943f6-df97-4a84-854b-858cc7d3ea1a, reason: Instance 47257c6e-4d10-4d8e-af5a-b57db20048ea could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 09:05:51 compute-0 ceph-mon[74249]: pgmap v1507: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 37 KiB/s wr, 12 op/s
Oct 14 09:05:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 167 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 40 KiB/s wr, 131 op/s
Oct 14 09:05:51 compute-0 distracted_pascal[321132]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:05:51 compute-0 distracted_pascal[321132]: --> relative data size: 1.0
Oct 14 09:05:51 compute-0 distracted_pascal[321132]: --> All data devices are unavailable
Oct 14 09:05:51 compute-0 systemd[1]: libpod-4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c.scope: Deactivated successfully.
Oct 14 09:05:51 compute-0 systemd[1]: libpod-4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c.scope: Consumed 1.083s CPU time.
Oct 14 09:05:51 compute-0 podman[321103]: 2025-10-14 09:05:51.793811714 +0000 UTC m=+1.356421825 container died 4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:05:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-af863575d0e18abe218bca6d0bd5ecf07c1d32ffd7a883805ec7f69f3bb3e71b-merged.mount: Deactivated successfully.
Oct 14 09:05:51 compute-0 nova_compute[259627]: 2025-10-14 09:05:51.832 2 DEBUG nova.compute.manager [req-7a40f110-454c-4ef0-a4e0-ec870662757e req-871bcf16-b738-462a-8184-a59a90117f21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-b624404b-6681-4fc8-a870-dc9418e2de0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:51 compute-0 nova_compute[259627]: 2025-10-14 09:05:51.833 2 DEBUG oslo_concurrency.lockutils [req-7a40f110-454c-4ef0-a4e0-ec870662757e req-871bcf16-b738-462a-8184-a59a90117f21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:51 compute-0 nova_compute[259627]: 2025-10-14 09:05:51.833 2 DEBUG oslo_concurrency.lockutils [req-7a40f110-454c-4ef0-a4e0-ec870662757e req-871bcf16-b738-462a-8184-a59a90117f21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:51 compute-0 nova_compute[259627]: 2025-10-14 09:05:51.833 2 DEBUG oslo_concurrency.lockutils [req-7a40f110-454c-4ef0-a4e0-ec870662757e req-871bcf16-b738-462a-8184-a59a90117f21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:51 compute-0 nova_compute[259627]: 2025-10-14 09:05:51.835 2 DEBUG nova.compute.manager [req-7a40f110-454c-4ef0-a4e0-ec870662757e req-871bcf16-b738-462a-8184-a59a90117f21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-b624404b-6681-4fc8-a870-dc9418e2de0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:05:51 compute-0 nova_compute[259627]: 2025-10-14 09:05:51.836 2 WARNING nova.compute.manager [req-7a40f110-454c-4ef0-a4e0-ec870662757e req-871bcf16-b738-462a-8184-a59a90117f21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-b624404b-6681-4fc8-a870-dc9418e2de0f for instance with vm_state active and task_state deleting.
Oct 14 09:05:51 compute-0 podman[321103]: 2025-10-14 09:05:51.857629316 +0000 UTC m=+1.420239397 container remove 4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 09:05:51 compute-0 systemd[1]: libpod-conmon-4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c.scope: Deactivated successfully.
Oct 14 09:05:51 compute-0 sudo[320998]: pam_unix(sudo:session): session closed for user root
Oct 14 09:05:51 compute-0 sudo[321204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:05:51 compute-0 sudo[321204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:05:51 compute-0 sudo[321204]: pam_unix(sudo:session): session closed for user root
Oct 14 09:05:52 compute-0 sudo[321229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:05:52 compute-0 sudo[321229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:05:52 compute-0 sudo[321229]: pam_unix(sudo:session): session closed for user root
Oct 14 09:05:52 compute-0 sudo[321254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:05:52 compute-0 sudo[321254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:05:52 compute-0 sudo[321254]: pam_unix(sudo:session): session closed for user root
Oct 14 09:05:52 compute-0 sudo[321279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:05:52 compute-0 sudo[321279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:05:52 compute-0 nova_compute[259627]: 2025-10-14 09:05:52.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:52 compute-0 nova_compute[259627]: 2025-10-14 09:05:52.401 2 DEBUG nova.network.neutron [-] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:05:52 compute-0 nova_compute[259627]: 2025-10-14 09:05:52.428 2 INFO nova.compute.manager [-] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Took 2.51 seconds to deallocate network for instance.
Oct 14 09:05:52 compute-0 nova_compute[259627]: 2025-10-14 09:05:52.498 2 DEBUG oslo_concurrency.lockutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:05:52 compute-0 nova_compute[259627]: 2025-10-14 09:05:52.498 2 DEBUG oslo_concurrency.lockutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:05:52 compute-0 podman[321345]: 2025-10-14 09:05:52.607404271 +0000 UTC m=+0.045778678 container create 17474af3a58e7e73a91998799d6e1816607d5ab6351abd7c43aff4e6f533ccb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_satoshi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:05:52 compute-0 nova_compute[259627]: 2025-10-14 09:05:52.607 2 DEBUG oslo_concurrency.processutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:05:52 compute-0 systemd[1]: Started libpod-conmon-17474af3a58e7e73a91998799d6e1816607d5ab6351abd7c43aff4e6f533ccb3.scope.
Oct 14 09:05:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:05:52 compute-0 podman[321345]: 2025-10-14 09:05:52.583766509 +0000 UTC m=+0.022140966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:05:52 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:05:52 compute-0 podman[321345]: 2025-10-14 09:05:52.718507278 +0000 UTC m=+0.156881705 container init 17474af3a58e7e73a91998799d6e1816607d5ab6351abd7c43aff4e6f533ccb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:05:52 compute-0 podman[321345]: 2025-10-14 09:05:52.728968525 +0000 UTC m=+0.167342932 container start 17474af3a58e7e73a91998799d6e1816607d5ab6351abd7c43aff4e6f533ccb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 09:05:52 compute-0 podman[321345]: 2025-10-14 09:05:52.732166264 +0000 UTC m=+0.170540671 container attach 17474af3a58e7e73a91998799d6e1816607d5ab6351abd7c43aff4e6f533ccb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:05:52 compute-0 lucid_satoshi[321363]: 167 167
Oct 14 09:05:52 compute-0 systemd[1]: libpod-17474af3a58e7e73a91998799d6e1816607d5ab6351abd7c43aff4e6f533ccb3.scope: Deactivated successfully.
Oct 14 09:05:52 compute-0 podman[321345]: 2025-10-14 09:05:52.736031359 +0000 UTC m=+0.174405776 container died 17474af3a58e7e73a91998799d6e1816607d5ab6351abd7c43aff4e6f533ccb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_satoshi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:05:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-31fcad3c161ead70133cb8861b37b5d1ca947bb44f35f1a757bb6fe91677b8ff-merged.mount: Deactivated successfully.
Oct 14 09:05:52 compute-0 podman[321345]: 2025-10-14 09:05:52.773399859 +0000 UTC m=+0.211774276 container remove 17474af3a58e7e73a91998799d6e1816607d5ab6351abd7c43aff4e6f533ccb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_satoshi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 09:05:52 compute-0 systemd[1]: libpod-conmon-17474af3a58e7e73a91998799d6e1816607d5ab6351abd7c43aff4e6f533ccb3.scope: Deactivated successfully.
Oct 14 09:05:53 compute-0 podman[321404]: 2025-10-14 09:05:53.004562112 +0000 UTC m=+0.054615916 container create 30cd1b98728fc4e3a46085d2eb865b5ccee743c0397f69cd3ae7d53db6aa491a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 09:05:53 compute-0 systemd[1]: Started libpod-conmon-30cd1b98728fc4e3a46085d2eb865b5ccee743c0397f69cd3ae7d53db6aa491a.scope.
Oct 14 09:05:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:05:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4273249229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:05:53 compute-0 podman[321404]: 2025-10-14 09:05:52.977969438 +0000 UTC m=+0.028023332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:05:53 compute-0 nova_compute[259627]: 2025-10-14 09:05:53.080 2 DEBUG oslo_concurrency.processutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:05:53 compute-0 nova_compute[259627]: 2025-10-14 09:05:53.091 2 DEBUG nova.compute.provider_tree [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:05:53 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:05:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4482da78a91e7f8021ed624beef42956cfcab71a067d37c282899f7df8d6975f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:05:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4482da78a91e7f8021ed624beef42956cfcab71a067d37c282899f7df8d6975f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:05:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4482da78a91e7f8021ed624beef42956cfcab71a067d37c282899f7df8d6975f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:05:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4482da78a91e7f8021ed624beef42956cfcab71a067d37c282899f7df8d6975f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:05:53 compute-0 nova_compute[259627]: 2025-10-14 09:05:53.113 2 DEBUG nova.scheduler.client.report [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:05:53 compute-0 podman[321404]: 2025-10-14 09:05:53.119971325 +0000 UTC m=+0.170025149 container init 30cd1b98728fc4e3a46085d2eb865b5ccee743c0397f69cd3ae7d53db6aa491a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:05:53 compute-0 podman[321404]: 2025-10-14 09:05:53.132544534 +0000 UTC m=+0.182598368 container start 30cd1b98728fc4e3a46085d2eb865b5ccee743c0397f69cd3ae7d53db6aa491a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_roentgen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:05:53 compute-0 podman[321404]: 2025-10-14 09:05:53.137052895 +0000 UTC m=+0.187106719 container attach 30cd1b98728fc4e3a46085d2eb865b5ccee743c0397f69cd3ae7d53db6aa491a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 09:05:53 compute-0 nova_compute[259627]: 2025-10-14 09:05:53.158 2 DEBUG oslo_concurrency.lockutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:53 compute-0 nova_compute[259627]: 2025-10-14 09:05:53.186 2 INFO nova.scheduler.client.report [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Deleted allocations for instance 47257c6e-4d10-4d8e-af5a-b57db20048ea
Oct 14 09:05:53 compute-0 nova_compute[259627]: 2025-10-14 09:05:53.291 2 DEBUG oslo_concurrency.lockutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:05:53 compute-0 nova_compute[259627]: 2025-10-14 09:05:53.336 2 DEBUG nova.compute.manager [req-4c24f124-f3bd-411c-984c-573a990e904e req-7d0ba147-42a2-4942-8668-c5041f55e739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-deleted-b624404b-6681-4fc8-a870-dc9418e2de0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:53 compute-0 nova_compute[259627]: 2025-10-14 09:05:53.337 2 DEBUG nova.compute.manager [req-4c24f124-f3bd-411c-984c-573a990e904e req-7d0ba147-42a2-4942-8668-c5041f55e739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-deleted-971d99c2-5a60-4cac-8f99-e819d71e419c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:05:53 compute-0 ceph-mon[74249]: pgmap v1508: 305 pgs: 305 active+clean; 167 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 40 KiB/s wr, 131 op/s
Oct 14 09:05:53 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4273249229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:05:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 167 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 9.3 KiB/s wr, 120 op/s
Oct 14 09:05:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:53.774 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]: {
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:     "0": [
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:         {
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "devices": [
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "/dev/loop3"
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             ],
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "lv_name": "ceph_lv0",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "lv_size": "21470642176",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "name": "ceph_lv0",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "tags": {
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.cluster_name": "ceph",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.crush_device_class": "",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.encrypted": "0",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.osd_id": "0",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.type": "block",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.vdo": "0"
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             },
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "type": "block",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "vg_name": "ceph_vg0"
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:         }
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:     ],
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:     "1": [
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:         {
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "devices": [
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "/dev/loop4"
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             ],
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "lv_name": "ceph_lv1",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "lv_size": "21470642176",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "name": "ceph_lv1",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "tags": {
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.cluster_name": "ceph",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.crush_device_class": "",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.encrypted": "0",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.osd_id": "1",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.type": "block",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.vdo": "0"
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             },
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "type": "block",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "vg_name": "ceph_vg1"
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:         }
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:     ],
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:     "2": [
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:         {
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "devices": [
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "/dev/loop5"
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             ],
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "lv_name": "ceph_lv2",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "lv_size": "21470642176",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "name": "ceph_lv2",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "tags": {
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.cluster_name": "ceph",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.crush_device_class": "",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.encrypted": "0",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.osd_id": "2",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.type": "block",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:                 "ceph.vdo": "0"
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             },
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "type": "block",
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:             "vg_name": "ceph_vg2"
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:         }
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]:     ]
Oct 14 09:05:53 compute-0 naughty_roentgen[321419]: }
Oct 14 09:05:53 compute-0 systemd[1]: libpod-30cd1b98728fc4e3a46085d2eb865b5ccee743c0397f69cd3ae7d53db6aa491a.scope: Deactivated successfully.
Oct 14 09:05:53 compute-0 podman[321404]: 2025-10-14 09:05:53.911681173 +0000 UTC m=+0.961735027 container died 30cd1b98728fc4e3a46085d2eb865b5ccee743c0397f69cd3ae7d53db6aa491a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_roentgen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 09:05:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-4482da78a91e7f8021ed624beef42956cfcab71a067d37c282899f7df8d6975f-merged.mount: Deactivated successfully.
Oct 14 09:05:53 compute-0 podman[321404]: 2025-10-14 09:05:53.996361868 +0000 UTC m=+1.046415712 container remove 30cd1b98728fc4e3a46085d2eb865b5ccee743c0397f69cd3ae7d53db6aa491a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:05:54 compute-0 systemd[1]: libpod-conmon-30cd1b98728fc4e3a46085d2eb865b5ccee743c0397f69cd3ae7d53db6aa491a.scope: Deactivated successfully.
Oct 14 09:05:54 compute-0 sudo[321279]: pam_unix(sudo:session): session closed for user root
Oct 14 09:05:54 compute-0 sudo[321442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:05:54 compute-0 sudo[321442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:05:54 compute-0 sudo[321442]: pam_unix(sudo:session): session closed for user root
Oct 14 09:05:54 compute-0 sudo[321467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:05:54 compute-0 sudo[321467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:05:54 compute-0 sudo[321467]: pam_unix(sudo:session): session closed for user root
Oct 14 09:05:54 compute-0 sudo[321492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:05:54 compute-0 sudo[321492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:05:54 compute-0 sudo[321492]: pam_unix(sudo:session): session closed for user root
Oct 14 09:05:54 compute-0 sudo[321517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:05:54 compute-0 sudo[321517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:05:54 compute-0 nova_compute[259627]: 2025-10-14 09:05:54.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:54 compute-0 ovn_controller[152662]: 2025-10-14T09:05:54Z|00566|binding|INFO|Releasing lport a7f44223-dee5-4a2f-b975-1f04f03b78f7 from this chassis (sb_readonly=0)
Oct 14 09:05:54 compute-0 nova_compute[259627]: 2025-10-14 09:05:54.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:54 compute-0 podman[321580]: 2025-10-14 09:05:54.776222494 +0000 UTC m=+0.047069230 container create b3ae6da816a8a12ea0a58ade8791ed4e5dee68f853b3499dacdd7e10acb46232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wiles, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:05:54 compute-0 systemd[1]: Started libpod-conmon-b3ae6da816a8a12ea0a58ade8791ed4e5dee68f853b3499dacdd7e10acb46232.scope.
Oct 14 09:05:54 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:05:54 compute-0 podman[321580]: 2025-10-14 09:05:54.757893013 +0000 UTC m=+0.028739799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:05:54 compute-0 podman[321580]: 2025-10-14 09:05:54.854385759 +0000 UTC m=+0.125232535 container init b3ae6da816a8a12ea0a58ade8791ed4e5dee68f853b3499dacdd7e10acb46232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wiles, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 09:05:54 compute-0 podman[321580]: 2025-10-14 09:05:54.865420341 +0000 UTC m=+0.136267087 container start b3ae6da816a8a12ea0a58ade8791ed4e5dee68f853b3499dacdd7e10acb46232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wiles, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 09:05:54 compute-0 podman[321580]: 2025-10-14 09:05:54.869114912 +0000 UTC m=+0.139961698 container attach b3ae6da816a8a12ea0a58ade8791ed4e5dee68f853b3499dacdd7e10acb46232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:05:54 compute-0 charming_wiles[321597]: 167 167
Oct 14 09:05:54 compute-0 systemd[1]: libpod-b3ae6da816a8a12ea0a58ade8791ed4e5dee68f853b3499dacdd7e10acb46232.scope: Deactivated successfully.
Oct 14 09:05:54 compute-0 podman[321580]: 2025-10-14 09:05:54.871367967 +0000 UTC m=+0.142214753 container died b3ae6da816a8a12ea0a58ade8791ed4e5dee68f853b3499dacdd7e10acb46232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct 14 09:05:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-009bb0019069b9a70717fd636c9d836b03ad0a55afa41f021eaffef872c079f5-merged.mount: Deactivated successfully.
Oct 14 09:05:54 compute-0 podman[321580]: 2025-10-14 09:05:54.911502836 +0000 UTC m=+0.182349582 container remove b3ae6da816a8a12ea0a58ade8791ed4e5dee68f853b3499dacdd7e10acb46232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 09:05:54 compute-0 systemd[1]: libpod-conmon-b3ae6da816a8a12ea0a58ade8791ed4e5dee68f853b3499dacdd7e10acb46232.scope: Deactivated successfully.
Oct 14 09:05:55 compute-0 podman[321621]: 2025-10-14 09:05:55.131532894 +0000 UTC m=+0.062009849 container create 206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_burnell, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 09:05:55 compute-0 systemd[1]: Started libpod-conmon-206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5.scope.
Oct 14 09:05:55 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:05:55 compute-0 podman[321621]: 2025-10-14 09:05:55.106839595 +0000 UTC m=+0.037316640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1be7dae2d998ee5791d7061f420967fa622ddc585e3d5c2b95045eea425985/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1be7dae2d998ee5791d7061f420967fa622ddc585e3d5c2b95045eea425985/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1be7dae2d998ee5791d7061f420967fa622ddc585e3d5c2b95045eea425985/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1be7dae2d998ee5791d7061f420967fa622ddc585e3d5c2b95045eea425985/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:05:55 compute-0 podman[321621]: 2025-10-14 09:05:55.222382001 +0000 UTC m=+0.152858996 container init 206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 09:05:55 compute-0 podman[321621]: 2025-10-14 09:05:55.240487097 +0000 UTC m=+0.170964052 container start 206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_burnell, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:05:55 compute-0 podman[321621]: 2025-10-14 09:05:55.24467098 +0000 UTC m=+0.175147975 container attach 206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 09:05:55 compute-0 ceph-mon[74249]: pgmap v1509: 305 pgs: 305 active+clean; 167 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 9.3 KiB/s wr, 120 op/s
Oct 14 09:05:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 167 MiB data, 548 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 9.3 KiB/s wr, 120 op/s
Oct 14 09:05:56 compute-0 angry_burnell[321638]: {
Oct 14 09:05:56 compute-0 angry_burnell[321638]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:05:56 compute-0 angry_burnell[321638]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:05:56 compute-0 angry_burnell[321638]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:05:56 compute-0 angry_burnell[321638]:         "osd_id": 2,
Oct 14 09:05:56 compute-0 angry_burnell[321638]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:05:56 compute-0 angry_burnell[321638]:         "type": "bluestore"
Oct 14 09:05:56 compute-0 angry_burnell[321638]:     },
Oct 14 09:05:56 compute-0 angry_burnell[321638]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:05:56 compute-0 angry_burnell[321638]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:05:56 compute-0 angry_burnell[321638]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:05:56 compute-0 angry_burnell[321638]:         "osd_id": 1,
Oct 14 09:05:56 compute-0 angry_burnell[321638]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:05:56 compute-0 angry_burnell[321638]:         "type": "bluestore"
Oct 14 09:05:56 compute-0 angry_burnell[321638]:     },
Oct 14 09:05:56 compute-0 angry_burnell[321638]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:05:56 compute-0 angry_burnell[321638]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:05:56 compute-0 angry_burnell[321638]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:05:56 compute-0 angry_burnell[321638]:         "osd_id": 0,
Oct 14 09:05:56 compute-0 angry_burnell[321638]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:05:56 compute-0 angry_burnell[321638]:         "type": "bluestore"
Oct 14 09:05:56 compute-0 angry_burnell[321638]:     }
Oct 14 09:05:56 compute-0 angry_burnell[321638]: }
Oct 14 09:05:56 compute-0 systemd[1]: libpod-206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5.scope: Deactivated successfully.
Oct 14 09:05:56 compute-0 systemd[1]: libpod-206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5.scope: Consumed 1.033s CPU time.
Oct 14 09:05:56 compute-0 podman[321672]: 2025-10-14 09:05:56.330773038 +0000 UTC m=+0.022643369 container died 206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:05:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f1be7dae2d998ee5791d7061f420967fa622ddc585e3d5c2b95045eea425985-merged.mount: Deactivated successfully.
Oct 14 09:05:56 compute-0 podman[321672]: 2025-10-14 09:05:56.386225994 +0000 UTC m=+0.078096285 container remove 206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_burnell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 09:05:56 compute-0 systemd[1]: libpod-conmon-206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5.scope: Deactivated successfully.
Oct 14 09:05:56 compute-0 sudo[321517]: pam_unix(sudo:session): session closed for user root
Oct 14 09:05:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:05:56 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:05:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:05:56 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:05:56 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 513df777-fbca-47a6-9604-d15c356c4c0c does not exist
Oct 14 09:05:56 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 320f7e08-2c41-41ed-8536-052e0e1a8fb1 does not exist
Oct 14 09:05:56 compute-0 sudo[321687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:05:56 compute-0 sudo[321687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:05:56 compute-0 sudo[321687]: pam_unix(sudo:session): session closed for user root
Oct 14 09:05:56 compute-0 sudo[321712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:05:56 compute-0 sudo[321712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:05:56 compute-0 sudo[321712]: pam_unix(sudo:session): session closed for user root
Oct 14 09:05:56 compute-0 ovn_controller[152662]: 2025-10-14T09:05:56Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:01:3d:6b 10.100.0.5
Oct 14 09:05:56 compute-0 ovn_controller[152662]: 2025-10-14T09:05:56Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:3d:6b 10.100.0.5
Oct 14 09:05:57 compute-0 nova_compute[259627]: 2025-10-14 09:05:57.075 2 DEBUG nova.virt.libvirt.driver [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 09:05:57 compute-0 nova_compute[259627]: 2025-10-14 09:05:57.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:57 compute-0 ceph-mon[74249]: pgmap v1510: 305 pgs: 305 active+clean; 167 MiB data, 548 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 9.3 KiB/s wr, 120 op/s
Oct 14 09:05:57 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:05:57 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:05:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:05:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 167 MiB data, 548 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 119 op/s
Oct 14 09:05:59 compute-0 nova_compute[259627]: 2025-10-14 09:05:59.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:59 compute-0 ceph-mon[74249]: pgmap v1511: 305 pgs: 305 active+clean; 167 MiB data, 548 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 119 op/s
Oct 14 09:05:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 167 MiB data, 548 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 119 op/s
Oct 14 09:05:59 compute-0 kernel: tap31797867-f0 (unregistering): left promiscuous mode
Oct 14 09:05:59 compute-0 NetworkManager[44885]: <info>  [1760432759.9560] device (tap31797867-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:05:59 compute-0 ovn_controller[152662]: 2025-10-14T09:05:59Z|00567|binding|INFO|Releasing lport 31797867-f0bc-4632-a658-bd0fae609c23 from this chassis (sb_readonly=0)
Oct 14 09:05:59 compute-0 nova_compute[259627]: 2025-10-14 09:05:59.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:05:59 compute-0 ovn_controller[152662]: 2025-10-14T09:05:59Z|00568|binding|INFO|Setting lport 31797867-f0bc-4632-a658-bd0fae609c23 down in Southbound
Oct 14 09:05:59 compute-0 ovn_controller[152662]: 2025-10-14T09:05:59Z|00569|binding|INFO|Removing iface tap31797867-f0 ovn-installed in OVS
Oct 14 09:05:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:59.980 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:3d:6b 10.100.0.5'], port_security=['fa:16:3e:01:3d:6b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ec31b9ab-88ab-4085-a46b-76cb9825061a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1858677-a8a5-4b0c-b70b-3875847a67c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=31797867-f0bc-4632-a658-bd0fae609c23) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:05:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:59.982 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 31797867-f0bc-4632-a658-bd0fae609c23 in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 unbound from our chassis
Oct 14 09:05:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:05:59.985 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3b87118-f516-4f2d-8696-aa7290af9d83
Oct 14 09:06:00 compute-0 nova_compute[259627]: 2025-10-14 09:06:00.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.012 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[71043b8a-f09f-43a4-aea9-7f5eb9414fc3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:00 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000038.scope: Deactivated successfully.
Oct 14 09:06:00 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000038.scope: Consumed 12.366s CPU time.
Oct 14 09:06:00 compute-0 systemd-machined[214636]: Machine qemu-70-instance-00000038 terminated.
Oct 14 09:06:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.050 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1a24d984-75f8-4e31-8cc2-13b91ce88388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.053 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2bba4ff7-f1cc-4f26-8025-a4dc9cbc6e52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.095 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1075c7c0-6659-4a65-a436-bc754cc5a7d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.116 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[01e5c3d2-568c-4a56-a764-8e31be967520]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 24755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321749, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.146 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1e64c342-0833-47a5-a332-1442463af942]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647552, 'tstamp': 647552}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321750, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647555, 'tstamp': 647555}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321750, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.148 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:00 compute-0 nova_compute[259627]: 2025-10-14 09:06:00.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:00 compute-0 nova_compute[259627]: 2025-10-14 09:06:00.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.155 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3b87118-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.156 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.157 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3b87118-f0, col_values=(('external_ids', {'iface-id': 'a7f44223-dee5-4a2f-b975-1f04f03b78f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.158 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:00 compute-0 nova_compute[259627]: 2025-10-14 09:06:00.195 2 INFO nova.virt.libvirt.driver [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance shutdown successfully after 13 seconds.
Oct 14 09:06:00 compute-0 nova_compute[259627]: 2025-10-14 09:06:00.201 2 INFO nova.virt.libvirt.driver [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance destroyed successfully.
Oct 14 09:06:00 compute-0 nova_compute[259627]: 2025-10-14 09:06:00.202 2 DEBUG nova.objects.instance [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'numa_topology' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:00 compute-0 nova_compute[259627]: 2025-10-14 09:06:00.217 2 DEBUG nova.compute.manager [req-8e9bb163-9c7b-4ff7-bcac-ced074977563 req-dbc5516f-4ff0-4536-943d-db0dd99e920d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-vif-unplugged-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:00 compute-0 nova_compute[259627]: 2025-10-14 09:06:00.217 2 DEBUG oslo_concurrency.lockutils [req-8e9bb163-9c7b-4ff7-bcac-ced074977563 req-dbc5516f-4ff0-4536-943d-db0dd99e920d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:00 compute-0 nova_compute[259627]: 2025-10-14 09:06:00.217 2 DEBUG oslo_concurrency.lockutils [req-8e9bb163-9c7b-4ff7-bcac-ced074977563 req-dbc5516f-4ff0-4536-943d-db0dd99e920d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:00 compute-0 nova_compute[259627]: 2025-10-14 09:06:00.218 2 DEBUG oslo_concurrency.lockutils [req-8e9bb163-9c7b-4ff7-bcac-ced074977563 req-dbc5516f-4ff0-4536-943d-db0dd99e920d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:00 compute-0 nova_compute[259627]: 2025-10-14 09:06:00.218 2 DEBUG nova.compute.manager [req-8e9bb163-9c7b-4ff7-bcac-ced074977563 req-dbc5516f-4ff0-4536-943d-db0dd99e920d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] No waiting events found dispatching network-vif-unplugged-31797867-f0bc-4632-a658-bd0fae609c23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:00 compute-0 nova_compute[259627]: 2025-10-14 09:06:00.218 2 WARNING nova.compute.manager [req-8e9bb163-9c7b-4ff7-bcac-ced074977563 req-dbc5516f-4ff0-4536-943d-db0dd99e920d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received unexpected event network-vif-unplugged-31797867-f0bc-4632-a658-bd0fae609c23 for instance with vm_state active and task_state powering-off.
Oct 14 09:06:00 compute-0 nova_compute[259627]: 2025-10-14 09:06:00.220 2 DEBUG nova.compute.manager [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:00 compute-0 nova_compute[259627]: 2025-10-14 09:06:00.282 2 DEBUG oslo_concurrency.lockutils [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:01 compute-0 ceph-mon[74249]: pgmap v1512: 305 pgs: 305 active+clean; 167 MiB data, 548 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 119 op/s
Oct 14 09:06:01 compute-0 nova_compute[259627]: 2025-10-14 09:06:01.485 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432746.483594, 65c7e6ed-131f-4bca-af69-a1241d048bdb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:01 compute-0 nova_compute[259627]: 2025-10-14 09:06:01.486 2 INFO nova.compute.manager [-] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] VM Stopped (Lifecycle Event)
Oct 14 09:06:01 compute-0 nova_compute[259627]: 2025-10-14 09:06:01.510 2 DEBUG nova.compute.manager [None req-7beafdc1-0bdb-44b7-b400-48b22d2d1265 - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 200 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 185 op/s
Oct 14 09:06:01 compute-0 nova_compute[259627]: 2025-10-14 09:06:01.934 2 INFO nova.compute.manager [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Rebuilding instance
Oct 14 09:06:01 compute-0 nova_compute[259627]: 2025-10-14 09:06:01.992 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:01 compute-0 nova_compute[259627]: 2025-10-14 09:06:01.992 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.019 2 DEBUG nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.108 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.109 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.117 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.117 2 INFO nova.compute.claims [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.280 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'trusted_certs' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.298 2 DEBUG nova.compute.manager [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.301 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.352 2 DEBUG nova.compute.manager [req-365626f2-2d9e-4194-9fe7-76d6cc903760 req-535e6bf5-ec9b-4144-9017-cd8627d55467 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.353 2 DEBUG oslo_concurrency.lockutils [req-365626f2-2d9e-4194-9fe7-76d6cc903760 req-535e6bf5-ec9b-4144-9017-cd8627d55467 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.353 2 DEBUG oslo_concurrency.lockutils [req-365626f2-2d9e-4194-9fe7-76d6cc903760 req-535e6bf5-ec9b-4144-9017-cd8627d55467 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.354 2 DEBUG oslo_concurrency.lockutils [req-365626f2-2d9e-4194-9fe7-76d6cc903760 req-535e6bf5-ec9b-4144-9017-cd8627d55467 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.354 2 DEBUG nova.compute.manager [req-365626f2-2d9e-4194-9fe7-76d6cc903760 req-535e6bf5-ec9b-4144-9017-cd8627d55467 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] No waiting events found dispatching network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.355 2 WARNING nova.compute.manager [req-365626f2-2d9e-4194-9fe7-76d6cc903760 req-535e6bf5-ec9b-4144-9017-cd8627d55467 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received unexpected event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 for instance with vm_state stopped and task_state rebuilding.
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.406 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'pci_requests' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.418 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'pci_devices' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.429 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'resources' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.437 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'migration_context' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.453 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.456 2 INFO nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance already shutdown.
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.463 2 INFO nova.virt.libvirt.driver [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance destroyed successfully.
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.469 2 INFO nova.virt.libvirt.driver [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance destroyed successfully.
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.470 2 DEBUG nova.virt.libvirt.vif [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-412231426',display_name='tempest-tempest.common.compute-instance-412231426',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-412231426',id=56,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:05:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-v50tykwv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA-894139105-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:01Z,user_data=None,user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=ec31b9ab-88ab-4085-a46b-76cb9825061a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.470 2 DEBUG nova.network.os_vif_util [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.471 2 DEBUG nova.network.os_vif_util [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.471 2 DEBUG os_vif [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.474 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31797867-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.479 2 INFO os_vif [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0')
Oct 14 09:06:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:06:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:06:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:06:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:06:02 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1390791536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:06:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:06:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:06:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.779 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.786 2 DEBUG nova.compute.provider_tree [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.813 2 DEBUG nova.scheduler.client.report [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.830 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.831 2 DEBUG nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.872 2 DEBUG nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.872 2 DEBUG nova.network.neutron [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.879 2 INFO nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Deleting instance files /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a_del
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.880 2 INFO nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Deletion of /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a_del complete
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.887 2 INFO nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:06:02 compute-0 nova_compute[259627]: 2025-10-14 09:06:02.915 2 DEBUG nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.020 2 DEBUG nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.021 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.022 2 INFO nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Creating image(s)
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.039 2 DEBUG nova.storage.rbd_utils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.057 2 DEBUG nova.storage.rbd_utils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.078 2 DEBUG nova.storage.rbd_utils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.081 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.122 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.123 2 INFO nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Creating image(s)
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.144 2 DEBUG nova.storage.rbd_utils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.164 2 DEBUG nova.storage.rbd_utils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.184 2 DEBUG nova.storage.rbd_utils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.188 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.230 2 DEBUG nova.policy [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.233 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.233 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.234 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.234 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.254 2 DEBUG nova.storage.rbd_utils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.257 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.285 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.286 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.286 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.287 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.310 2 DEBUG nova.storage.rbd_utils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.313 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf ec31b9ab-88ab-4085-a46b-76cb9825061a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:03 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 14 09:06:03 compute-0 ceph-mon[74249]: pgmap v1513: 305 pgs: 305 active+clean; 200 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 185 op/s
Oct 14 09:06:03 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1390791536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.624 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.678 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf ec31b9ab-88ab-4085-a46b-76cb9825061a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 200 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.740 2 DEBUG nova.storage.rbd_utils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] resizing rbd image a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.786 2 DEBUG nova.storage.rbd_utils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] resizing rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.883 2 DEBUG nova.objects.instance [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'migration_context' on Instance uuid a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.923 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.924 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Ensure instance console log exists: /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.925 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.925 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.926 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.928 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Start _get_guest_xml network_info=[{"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:06:03 compute-0 nova_compute[259627]: 2025-10-14 09:06:03.932 2 WARNING nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.030 2 DEBUG nova.virt.libvirt.host [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.031 2 DEBUG nova.virt.libvirt.host [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.062 2 DEBUG nova.virt.libvirt.host [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.063 2 DEBUG nova.virt.libvirt.host [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.063 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.063 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.064 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.064 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.065 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.065 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.065 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.066 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.066 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.067 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.067 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.067 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.068 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'vcpu_model' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.108 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.108 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Ensure instance console log exists: /var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.109 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.109 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.110 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.122 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.346 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432749.342114, 47257c6e-4d10-4d8e-af5a-b57db20048ea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.348 2 INFO nova.compute.manager [-] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] VM Stopped (Lifecycle Event)
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.371 2 DEBUG nova.compute.manager [None req-7204f379-0335-478a-b117-3ebf82d8f1c3 - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.516 2 DEBUG nova.network.neutron [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Successfully created port: dffa5a1f-657b-498e-bbe5-6540fead7fb6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:06:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:06:04 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2297244430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.591 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.617 2 DEBUG nova.storage.rbd_utils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:04 compute-0 nova_compute[259627]: 2025-10-14 09:06:04.621 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:06:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/528567082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.062 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.064 2 DEBUG nova.virt.libvirt.vif [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-412231426',display_name='tempest-tempest.common.compute-instance-412231426',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-412231426',id=56,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:05:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-v50tykwv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA-894139105-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:02Z,user_data=None,user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=ec31b9ab-88ab-4085-a46b-76cb9825061a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.065 2 DEBUG nova.network.os_vif_util [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.067 2 DEBUG nova.network.os_vif_util [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.073 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:06:05 compute-0 nova_compute[259627]:   <uuid>ec31b9ab-88ab-4085-a46b-76cb9825061a</uuid>
Oct 14 09:06:05 compute-0 nova_compute[259627]:   <name>instance-00000038</name>
Oct 14 09:06:05 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:06:05 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:06:05 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <nova:name>tempest-tempest.common.compute-instance-412231426</nova:name>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:06:03</nova:creationTime>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:06:05 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:06:05 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:06:05 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:06:05 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:06:05 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:06:05 compute-0 nova_compute[259627]:         <nova:user uuid="d952679a4e6a4fc6bacf42c02d3e92d0">tempest-ServerActionsTestOtherA-894139105-project-member</nova:user>
Oct 14 09:06:05 compute-0 nova_compute[259627]:         <nova:project uuid="4e47722c609640d3a70fee8dd6ff94cc">tempest-ServerActionsTestOtherA-894139105</nova:project>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="e2368e3e-f504-40e6-a9d3-67df18c845bb"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:06:05 compute-0 nova_compute[259627]:         <nova:port uuid="31797867-f0bc-4632-a658-bd0fae609c23">
Oct 14 09:06:05 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:06:05 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:06:05 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <system>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <entry name="serial">ec31b9ab-88ab-4085-a46b-76cb9825061a</entry>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <entry name="uuid">ec31b9ab-88ab-4085-a46b-76cb9825061a</entry>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     </system>
Oct 14 09:06:05 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:06:05 compute-0 nova_compute[259627]:   <os>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:   </os>
Oct 14 09:06:05 compute-0 nova_compute[259627]:   <features>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:   </features>
Oct 14 09:06:05 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:06:05 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:06:05 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/ec31b9ab-88ab-4085-a46b-76cb9825061a_disk">
Oct 14 09:06:05 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:06:05 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config">
Oct 14 09:06:05 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:06:05 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:01:3d:6b"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <target dev="tap31797867-f0"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/console.log" append="off"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <video>
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     </video>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:06:05 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:06:05 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:06:05 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:06:05 compute-0 nova_compute[259627]: </domain>
Oct 14 09:06:05 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.075 2 DEBUG nova.compute.manager [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Preparing to wait for external event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.076 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.077 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.077 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.079 2 DEBUG nova.virt.libvirt.vif [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-412231426',display_name='tempest-tempest.common.compute-instance-412231426',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-412231426',id=56,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:05:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-v50tykwv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA-894139105-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:02Z,user_data=None,user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=ec31b9ab-88ab-4085-a46b-76cb9825061a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.079 2 DEBUG nova.network.os_vif_util [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.080 2 DEBUG nova.network.os_vif_util [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.081 2 DEBUG os_vif [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.083 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.084 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.088 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31797867-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.088 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31797867-f0, col_values=(('external_ids', {'iface-id': '31797867-f0bc-4632-a658-bd0fae609c23', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:3d:6b', 'vm-uuid': 'ec31b9ab-88ab-4085-a46b-76cb9825061a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:05 compute-0 NetworkManager[44885]: <info>  [1760432765.0917] manager: (tap31797867-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.097 2 INFO os_vif [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0')
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.166 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.167 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.168 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No VIF found with MAC fa:16:3e:01:3d:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.168 2 INFO nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Using config drive
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.204 2 DEBUG nova.storage.rbd_utils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.229 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'ec2_ids' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:05 compute-0 nova_compute[259627]: 2025-10-14 09:06:05.277 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'keypairs' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:05 compute-0 ceph-mon[74249]: pgmap v1514: 305 pgs: 305 active+clean; 200 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 09:06:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2297244430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/528567082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:06:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/651548379' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:06:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:06:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/651548379' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:06:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 5.7 MiB/s wr, 147 op/s
Oct 14 09:06:06 compute-0 nova_compute[259627]: 2025-10-14 09:06:06.389 2 DEBUG nova.network.neutron [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Successfully updated port: dffa5a1f-657b-498e-bbe5-6540fead7fb6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:06:06 compute-0 nova_compute[259627]: 2025-10-14 09:06:06.414 2 INFO nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Creating config drive at /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config
Oct 14 09:06:06 compute-0 nova_compute[259627]: 2025-10-14 09:06:06.423 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6cd_7y4e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:06 compute-0 nova_compute[259627]: 2025-10-14 09:06:06.461 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:06 compute-0 nova_compute[259627]: 2025-10-14 09:06:06.462 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:06 compute-0 nova_compute[259627]: 2025-10-14 09:06:06.463 2 DEBUG nova.network.neutron [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:06:06 compute-0 nova_compute[259627]: 2025-10-14 09:06:06.512 2 DEBUG nova.compute.manager [req-67da2cf9-7eb6-467e-9c70-977fab2d0e4a req-122c47f3-79c5-4368-85bc-c1ae63a0b8c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:06 compute-0 nova_compute[259627]: 2025-10-14 09:06:06.513 2 DEBUG nova.compute.manager [req-67da2cf9-7eb6-467e-9c70-977fab2d0e4a req-122c47f3-79c5-4368-85bc-c1ae63a0b8c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing instance network info cache due to event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:06:06 compute-0 nova_compute[259627]: 2025-10-14 09:06:06.513 2 DEBUG oslo_concurrency.lockutils [req-67da2cf9-7eb6-467e-9c70-977fab2d0e4a req-122c47f3-79c5-4368-85bc-c1ae63a0b8c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/651548379' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:06:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/651548379' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:06:06 compute-0 nova_compute[259627]: 2025-10-14 09:06:06.565 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6cd_7y4e" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:06 compute-0 nova_compute[259627]: 2025-10-14 09:06:06.589 2 DEBUG nova.storage.rbd_utils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:06 compute-0 nova_compute[259627]: 2025-10-14 09:06:06.593 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:06 compute-0 nova_compute[259627]: 2025-10-14 09:06:06.620 2 DEBUG nova.network.neutron [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:06:06 compute-0 nova_compute[259627]: 2025-10-14 09:06:06.764 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:06 compute-0 nova_compute[259627]: 2025-10-14 09:06:06.765 2 INFO nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Deleting local config drive /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config because it was imported into RBD.
Oct 14 09:06:06 compute-0 kernel: tap31797867-f0: entered promiscuous mode
Oct 14 09:06:06 compute-0 NetworkManager[44885]: <info>  [1760432766.8304] manager: (tap31797867-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/246)
Oct 14 09:06:06 compute-0 ovn_controller[152662]: 2025-10-14T09:06:06Z|00570|binding|INFO|Claiming lport 31797867-f0bc-4632-a658-bd0fae609c23 for this chassis.
Oct 14 09:06:06 compute-0 ovn_controller[152662]: 2025-10-14T09:06:06Z|00571|binding|INFO|31797867-f0bc-4632-a658-bd0fae609c23: Claiming fa:16:3e:01:3d:6b 10.100.0.5
Oct 14 09:06:06 compute-0 nova_compute[259627]: 2025-10-14 09:06:06.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.840 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:3d:6b 10.100.0.5'], port_security=['fa:16:3e:01:3d:6b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ec31b9ab-88ab-4085-a46b-76cb9825061a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a1858677-a8a5-4b0c-b70b-3875847a67c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=31797867-f0bc-4632-a658-bd0fae609c23) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:06:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.841 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 31797867-f0bc-4632-a658-bd0fae609c23 in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 bound to our chassis
Oct 14 09:06:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.843 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3b87118-f516-4f2d-8696-aa7290af9d83
Oct 14 09:06:06 compute-0 systemd-udevd[322272]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:06:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.864 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ce6e97-195c-41bf-bea0-fc72ae95c090]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:06 compute-0 ovn_controller[152662]: 2025-10-14T09:06:06Z|00572|binding|INFO|Setting lport 31797867-f0bc-4632-a658-bd0fae609c23 ovn-installed in OVS
Oct 14 09:06:06 compute-0 ovn_controller[152662]: 2025-10-14T09:06:06Z|00573|binding|INFO|Setting lport 31797867-f0bc-4632-a658-bd0fae609c23 up in Southbound
Oct 14 09:06:06 compute-0 nova_compute[259627]: 2025-10-14 09:06:06.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:06 compute-0 systemd-machined[214636]: New machine qemu-71-instance-00000038.
Oct 14 09:06:06 compute-0 NetworkManager[44885]: <info>  [1760432766.8810] device (tap31797867-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:06:06 compute-0 NetworkManager[44885]: <info>  [1760432766.8818] device (tap31797867-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:06:06 compute-0 systemd[1]: Started Virtual Machine qemu-71-instance-00000038.
Oct 14 09:06:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.900 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0b263b14-8b9e-4066-a723-45d58d204427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.905 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b96b2597-c7e2-4e3d-9b8e-0e3a4a259db0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.930 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c79ef636-ff32-4be3-b943-91b856b02ea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.950 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[367f7224-72f5-4c79-bd4c-4b9aced97d59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 24755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322285, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.969 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f44213b2-cc9c-4f3a-93ce-64d2a666ad46]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647552, 'tstamp': 647552}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322287, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647555, 'tstamp': 647555}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322287, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.971 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:06 compute-0 nova_compute[259627]: 2025-10-14 09:06:06.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.973 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3b87118-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.974 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.974 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3b87118-f0, col_values=(('external_ids', {'iface-id': 'a7f44223-dee5-4a2f-b975-1f04f03b78f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.974 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:07.024 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:07.025 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:07.025 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.217 2 DEBUG nova.compute.manager [req-d97bd3df-f1e0-440b-938b-318b71ac5f77 req-578ffb40-1c8e-45cd-80ca-495b85b4c4ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.218 2 DEBUG oslo_concurrency.lockutils [req-d97bd3df-f1e0-440b-938b-318b71ac5f77 req-578ffb40-1c8e-45cd-80ca-495b85b4c4ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.218 2 DEBUG oslo_concurrency.lockutils [req-d97bd3df-f1e0-440b-938b-318b71ac5f77 req-578ffb40-1c8e-45cd-80ca-495b85b4c4ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.219 2 DEBUG oslo_concurrency.lockutils [req-d97bd3df-f1e0-440b-938b-318b71ac5f77 req-578ffb40-1c8e-45cd-80ca-495b85b4c4ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.220 2 DEBUG nova.compute.manager [req-d97bd3df-f1e0-440b-938b-318b71ac5f77 req-578ffb40-1c8e-45cd-80ca-495b85b4c4ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Processing event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.385 2 DEBUG nova.network.neutron [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.422 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.423 2 DEBUG nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Instance network_info: |[{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.424 2 DEBUG oslo_concurrency.lockutils [req-67da2cf9-7eb6-467e-9c70-977fab2d0e4a req-122c47f3-79c5-4368-85bc-c1ae63a0b8c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.424 2 DEBUG nova.network.neutron [req-67da2cf9-7eb6-467e-9c70-977fab2d0e4a req-122c47f3-79c5-4368-85bc-c1ae63a0b8c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.426 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Start _get_guest_xml network_info=[{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.433 2 WARNING nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.437 2 DEBUG nova.virt.libvirt.host [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.437 2 DEBUG nova.virt.libvirt.host [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.444 2 DEBUG nova.virt.libvirt.host [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.446 2 DEBUG nova.virt.libvirt.host [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.447 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.447 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.447 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.448 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.448 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.448 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.448 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.448 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.449 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.449 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.449 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.449 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.452 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:07 compute-0 ceph-mon[74249]: pgmap v1515: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 5.7 MiB/s wr, 147 op/s
Oct 14 09:06:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:06:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 5.7 MiB/s wr, 147 op/s
Oct 14 09:06:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:06:07 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4292982815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.921 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.944 2 DEBUG nova.storage.rbd_utils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.948 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.975 2 DEBUG nova.compute.manager [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.976 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for ec31b9ab-88ab-4085-a46b-76cb9825061a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.976 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432767.9354346, ec31b9ab-88ab-4085-a46b-76cb9825061a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.976 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] VM Started (Lifecycle Event)
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.980 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.982 2 INFO nova.virt.libvirt.driver [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance spawned successfully.
Oct 14 09:06:07 compute-0 nova_compute[259627]: 2025-10-14 09:06:07.982 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.020 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.022 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.022 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.023 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.023 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.023 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.024 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.029 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.062 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.063 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432767.9356983, ec31b9ab-88ab-4085-a46b-76cb9825061a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.063 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] VM Paused (Lifecycle Event)
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.090 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.094 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432767.979517, ec31b9ab-88ab-4085-a46b-76cb9825061a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.094 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] VM Resumed (Lifecycle Event)
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.111 2 DEBUG nova.compute.manager [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.119 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.121 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.153 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.161 2 INFO nova.compute.manager [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] bringing vm to original state: 'stopped'
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.220 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.221 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.221 2 DEBUG nova.compute.manager [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.225 2 DEBUG nova.compute.manager [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 14 09:06:08 compute-0 kernel: tap31797867-f0 (unregistering): left promiscuous mode
Oct 14 09:06:08 compute-0 NetworkManager[44885]: <info>  [1760432768.2667] device (tap31797867-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:08 compute-0 ovn_controller[152662]: 2025-10-14T09:06:08Z|00574|binding|INFO|Releasing lport 31797867-f0bc-4632-a658-bd0fae609c23 from this chassis (sb_readonly=0)
Oct 14 09:06:08 compute-0 ovn_controller[152662]: 2025-10-14T09:06:08Z|00575|binding|INFO|Setting lport 31797867-f0bc-4632-a658-bd0fae609c23 down in Southbound
Oct 14 09:06:08 compute-0 ovn_controller[152662]: 2025-10-14T09:06:08Z|00576|binding|INFO|Removing iface tap31797867-f0 ovn-installed in OVS
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.294 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:3d:6b 10.100.0.5'], port_security=['fa:16:3e:01:3d:6b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ec31b9ab-88ab-4085-a46b-76cb9825061a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a1858677-a8a5-4b0c-b70b-3875847a67c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=31797867-f0bc-4632-a658-bd0fae609c23) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:06:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.296 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 31797867-f0bc-4632-a658-bd0fae609c23 in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 unbound from our chassis
Oct 14 09:06:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.299 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3b87118-f516-4f2d-8696-aa7290af9d83
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:08 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000038.scope: Deactivated successfully.
Oct 14 09:06:08 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000038.scope: Consumed 1.110s CPU time.
Oct 14 09:06:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.313 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4f32c478-88ed-4f04-b040-7ec3fe8bd947]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:08 compute-0 systemd-machined[214636]: Machine qemu-71-instance-00000038 terminated.
Oct 14 09:06:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.339 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac1bf6d-680a-4ed4-be3d-5e272e6bc4fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.342 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2e5e7b53-9aaf-4851-8359-3d4287c95b52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:08 compute-0 podman[322390]: 2025-10-14 09:06:08.358719273 +0000 UTC m=+0.065815541 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:06:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:06:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3327380294' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.378 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[463a2360-204b-48b5-b3c7-237105f7f287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:08 compute-0 podman[322395]: 2025-10-14 09:06:08.386690712 +0000 UTC m=+0.082991875 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.388 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.389 2 DEBUG nova.virt.libvirt.vif [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1758133214',display_name='tempest-tempest.common.compute-instance-1758133214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1758133214',id=57,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-6x4gt3p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.390 2 DEBUG nova.network.os_vif_util [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.391 2 DEBUG nova.network.os_vif_util [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:40:de,bridge_name='br-int',has_traffic_filtering=True,id=dffa5a1f-657b-498e-bbe5-6540fead7fb6,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdffa5a1f-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.392 2 DEBUG nova.objects.instance [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_devices' on Instance uuid a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.404 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3cccba09-16b6-4c15-ae8b-91170c167e46]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 24755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322440, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.408 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:06:08 compute-0 nova_compute[259627]:   <uuid>a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8</uuid>
Oct 14 09:06:08 compute-0 nova_compute[259627]:   <name>instance-00000039</name>
Oct 14 09:06:08 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:06:08 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:06:08 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <nova:name>tempest-tempest.common.compute-instance-1758133214</nova:name>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:06:07</nova:creationTime>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:06:08 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:06:08 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:06:08 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:06:08 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:06:08 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:06:08 compute-0 nova_compute[259627]:         <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:06:08 compute-0 nova_compute[259627]:         <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:06:08 compute-0 nova_compute[259627]:         <nova:port uuid="dffa5a1f-657b-498e-bbe5-6540fead7fb6">
Oct 14 09:06:08 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:06:08 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:06:08 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <system>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <entry name="serial">a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8</entry>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <entry name="uuid">a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8</entry>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     </system>
Oct 14 09:06:08 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:06:08 compute-0 nova_compute[259627]:   <os>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:   </os>
Oct 14 09:06:08 compute-0 nova_compute[259627]:   <features>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:   </features>
Oct 14 09:06:08 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:06:08 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:06:08 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk">
Oct 14 09:06:08 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:06:08 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk.config">
Oct 14 09:06:08 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:06:08 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:b4:40:de"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <target dev="tapdffa5a1f-65"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/console.log" append="off"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <video>
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     </video>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:06:08 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:06:08 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:06:08 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:06:08 compute-0 nova_compute[259627]: </domain>
Oct 14 09:06:08 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.409 2 DEBUG nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Preparing to wait for external event network-vif-plugged-dffa5a1f-657b-498e-bbe5-6540fead7fb6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.409 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.409 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.410 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.410 2 DEBUG nova.virt.libvirt.vif [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1758133214',display_name='tempest-tempest.common.compute-instance-1758133214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1758133214',id=57,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-6x4gt3p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.411 2 DEBUG nova.network.os_vif_util [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.411 2 DEBUG nova.network.os_vif_util [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:40:de,bridge_name='br-int',has_traffic_filtering=True,id=dffa5a1f-657b-498e-bbe5-6540fead7fb6,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdffa5a1f-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.412 2 DEBUG os_vif [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:40:de,bridge_name='br-int',has_traffic_filtering=True,id=dffa5a1f-657b-498e-bbe5-6540fead7fb6,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdffa5a1f-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.413 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.413 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.416 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdffa5a1f-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.416 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdffa5a1f-65, col_values=(('external_ids', {'iface-id': 'dffa5a1f-657b-498e-bbe5-6540fead7fb6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:40:de', 'vm-uuid': 'a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:08 compute-0 NetworkManager[44885]: <info>  [1760432768.4188] manager: (tapdffa5a1f-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:06:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.421 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[56a3fa7d-ebb9-4dcc-b595-b11712020405]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647552, 'tstamp': 647552}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322441, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647555, 'tstamp': 647555}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322441, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.423 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.426 2 INFO os_vif [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:40:de,bridge_name='br-int',has_traffic_filtering=True,id=dffa5a1f-657b-498e-bbe5-6540fead7fb6,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdffa5a1f-65')
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.433 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3b87118-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.436 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.438 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3b87118-f0, col_values=(('external_ids', {'iface-id': 'a7f44223-dee5-4a2f-b975-1f04f03b78f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.438 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:08 compute-0 NetworkManager[44885]: <info>  [1760432768.4463] manager: (tap31797867-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/248)
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.453 2 INFO nova.virt.libvirt.driver [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance destroyed successfully.
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.454 2 DEBUG nova.compute.manager [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.515 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.523 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.524 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.524 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:b4:40:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.524 2 INFO nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Using config drive
Oct 14 09:06:08 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4292982815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:08 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3327380294' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.553 2 DEBUG nova.storage.rbd_utils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.562 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.563 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.563 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.661 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.984 2 DEBUG nova.network.neutron [req-67da2cf9-7eb6-467e-9c70-977fab2d0e4a req-122c47f3-79c5-4368-85bc-c1ae63a0b8c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updated VIF entry in instance network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:06:08 compute-0 nova_compute[259627]: 2025-10-14 09:06:08.985 2 DEBUG nova.network.neutron [req-67da2cf9-7eb6-467e-9c70-977fab2d0e4a req-122c47f3-79c5-4368-85bc-c1ae63a0b8c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.004 2 DEBUG oslo_concurrency.lockutils [req-67da2cf9-7eb6-467e-9c70-977fab2d0e4a req-122c47f3-79c5-4368-85bc-c1ae63a0b8c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.113 2 INFO nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Creating config drive at /var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/disk.config
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.123 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfxhlo5n0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.271 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfxhlo5n0" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.315 2 DEBUG nova.storage.rbd_utils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.319 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/disk.config a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.375 2 DEBUG nova.compute.manager [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.376 2 DEBUG oslo_concurrency.lockutils [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.377 2 DEBUG oslo_concurrency.lockutils [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.378 2 DEBUG oslo_concurrency.lockutils [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.378 2 DEBUG nova.compute.manager [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] No waiting events found dispatching network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.379 2 WARNING nova.compute.manager [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received unexpected event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 for instance with vm_state stopped and task_state None.
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.380 2 DEBUG nova.compute.manager [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-vif-unplugged-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.380 2 DEBUG oslo_concurrency.lockutils [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.381 2 DEBUG oslo_concurrency.lockutils [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.382 2 DEBUG oslo_concurrency.lockutils [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.382 2 DEBUG nova.compute.manager [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] No waiting events found dispatching network-vif-unplugged-31797867-f0bc-4632-a658-bd0fae609c23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.383 2 WARNING nova.compute.manager [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received unexpected event network-vif-unplugged-31797867-f0bc-4632-a658-bd0fae609c23 for instance with vm_state stopped and task_state None.
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.384 2 DEBUG nova.compute.manager [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.384 2 DEBUG oslo_concurrency.lockutils [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.385 2 DEBUG oslo_concurrency.lockutils [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.386 2 DEBUG oslo_concurrency.lockutils [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.386 2 DEBUG nova.compute.manager [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] No waiting events found dispatching network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.387 2 WARNING nova.compute.manager [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received unexpected event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 for instance with vm_state stopped and task_state None.
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.517 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/disk.config a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.518 2 INFO nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Deleting local config drive /var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/disk.config because it was imported into RBD.
Oct 14 09:06:09 compute-0 ceph-mon[74249]: pgmap v1516: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 5.7 MiB/s wr, 147 op/s
Oct 14 09:06:09 compute-0 NetworkManager[44885]: <info>  [1760432769.5841] manager: (tapdffa5a1f-65): new Tun device (/org/freedesktop/NetworkManager/Devices/249)
Oct 14 09:06:09 compute-0 systemd-udevd[322277]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:06:09 compute-0 kernel: tapdffa5a1f-65: entered promiscuous mode
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:09 compute-0 ovn_controller[152662]: 2025-10-14T09:06:09Z|00577|binding|INFO|Claiming lport dffa5a1f-657b-498e-bbe5-6540fead7fb6 for this chassis.
Oct 14 09:06:09 compute-0 ovn_controller[152662]: 2025-10-14T09:06:09Z|00578|binding|INFO|dffa5a1f-657b-498e-bbe5-6540fead7fb6: Claiming fa:16:3e:b4:40:de 10.100.0.8
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.605 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:40:de 10.100.0.8'], port_security=['fa:16:3e:b4:40:de 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e6192a40-cbd8-43eb-9955-4fede99ddb79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=dffa5a1f-657b-498e-bbe5-6540fead7fb6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.607 162547 INFO neutron.agent.ovn.metadata.agent [-] Port dffa5a1f-657b-498e-bbe5-6540fead7fb6 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 bound to our chassis
Oct 14 09:06:09 compute-0 NetworkManager[44885]: <info>  [1760432769.6074] device (tapdffa5a1f-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.608 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:06:09 compute-0 NetworkManager[44885]: <info>  [1760432769.6105] device (tapdffa5a1f-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:06:09 compute-0 systemd-machined[214636]: New machine qemu-72-instance-00000039.
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.626 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[260f0b8f-5acd-4740-8acf-7def4ebcafeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.627 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfc2d149f-a1 in ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:09 compute-0 ovn_controller[152662]: 2025-10-14T09:06:09Z|00579|binding|INFO|Setting lport dffa5a1f-657b-498e-bbe5-6540fead7fb6 up in Southbound
Oct 14 09:06:09 compute-0 ovn_controller[152662]: 2025-10-14T09:06:09Z|00580|binding|INFO|Setting lport dffa5a1f-657b-498e-bbe5-6540fead7fb6 ovn-installed in OVS
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.630 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfc2d149f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.630 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b12460ce-3873-458a-971d-8baecf560a3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.631 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a92cde-f141-4e25-90d9-e20436e88940]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:09 compute-0 systemd[1]: Started Virtual Machine qemu-72-instance-00000039.
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.649 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[40bc4ee7-6b9f-46bc-a5fd-a7b39cefdd6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.668 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c2760e-2eb5-47bd-96e0-f54813032ecc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.703 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed334ad-96c7-48dd-8e10-ece9f5181485]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.709 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6347d271-5f30-45b9-81fc-d3b37d4f6717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:09 compute-0 NetworkManager[44885]: <info>  [1760432769.7142] manager: (tapfc2d149f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/250)
Oct 14 09:06:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 5.7 MiB/s wr, 147 op/s
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.754 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[572114a0-1cc4-4e86-9954-94f872e5dcf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.756 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9a693c6a-aa7a-42f0-ba0a-1d4090e73c89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:09 compute-0 NetworkManager[44885]: <info>  [1760432769.7813] device (tapfc2d149f-a0): carrier: link connected
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.785 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8966265a-595b-4902-bc00-20074f99e212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.804 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[45a0b58b-304b-4371-bbdf-32d80b65cccf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654854, 'reachable_time': 26994, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322561, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.820 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6d07ef93-f315-4105-bbb1-f8a9f4857e7c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:e73e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654854, 'tstamp': 654854}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322562, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.837 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6437eb22-466d-4df0-9ac6-9218db57a5fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654854, 'reachable_time': 26994, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322563, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.866 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[93a77616-6792-4dea-a5a3-a1f46f838e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.922 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b27565be-df30-42f2-89d6-f8cd945a9d8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.923 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.924 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.924 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:09 compute-0 NetworkManager[44885]: <info>  [1760432769.9270] manager: (tapfc2d149f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Oct 14 09:06:09 compute-0 kernel: tapfc2d149f-a0: entered promiscuous mode
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.934 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:09 compute-0 ovn_controller[152662]: 2025-10-14T09:06:09Z|00581|binding|INFO|Releasing lport 156432ee-35b5-40a9-aded-8066933d9972 from this chassis (sb_readonly=0)
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:09 compute-0 nova_compute[259627]: 2025-10-14 09:06:09.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.963 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fc2d149f-aebf-406a-aed2-5161dd22b079.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fc2d149f-aebf-406a-aed2-5161dd22b079.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.964 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4bbc39f1-ebab-4f8c-90fa-38587495581c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.964 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/fc2d149f-aebf-406a-aed2-5161dd22b079.pid.haproxy
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:06:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.965 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'env', 'PROCESS_TAG=haproxy-fc2d149f-aebf-406a-aed2-5161dd22b079', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fc2d149f-aebf-406a-aed2-5161dd22b079.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:06:10 compute-0 podman[322643]: 2025-10-14 09:06:10.402221319 +0000 UTC m=+0.059754553 container create 0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:06:10 compute-0 systemd[1]: Started libpod-conmon-0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777.scope.
Oct 14 09:06:10 compute-0 podman[322643]: 2025-10-14 09:06:10.366393767 +0000 UTC m=+0.023927051 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:06:10 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d2b9ef6811a39d51e9705ae92738e1dfb3f318ac7ca2b4d0ea3d271eb3fe0ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:06:10 compute-0 podman[322643]: 2025-10-14 09:06:10.482572718 +0000 UTC m=+0.140105972 container init 0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:06:10 compute-0 podman[322643]: 2025-10-14 09:06:10.488058423 +0000 UTC m=+0.145591657 container start 0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 09:06:10 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[322658]: [NOTICE]   (322662) : New worker (322664) forked
Oct 14 09:06:10 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[322658]: [NOTICE]   (322662) : Loading success.
Oct 14 09:06:10 compute-0 nova_compute[259627]: 2025-10-14 09:06:10.545 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432770.5443027, a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:10 compute-0 nova_compute[259627]: 2025-10-14 09:06:10.545 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] VM Started (Lifecycle Event)
Oct 14 09:06:10 compute-0 nova_compute[259627]: 2025-10-14 09:06:10.569 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:10 compute-0 nova_compute[259627]: 2025-10-14 09:06:10.573 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432770.544775, a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:10 compute-0 nova_compute[259627]: 2025-10-14 09:06:10.574 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] VM Paused (Lifecycle Event)
Oct 14 09:06:10 compute-0 nova_compute[259627]: 2025-10-14 09:06:10.591 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:10 compute-0 nova_compute[259627]: 2025-10-14 09:06:10.594 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:06:10 compute-0 nova_compute[259627]: 2025-10-14 09:06:10.615 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:06:11 compute-0 ceph-mon[74249]: pgmap v1517: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 5.7 MiB/s wr, 147 op/s
Oct 14 09:06:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 214 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 5.7 MiB/s wr, 167 op/s
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.525 2 DEBUG nova.compute.manager [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-vif-plugged-dffa5a1f-657b-498e-bbe5-6540fead7fb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.526 2 DEBUG oslo_concurrency.lockutils [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.526 2 DEBUG oslo_concurrency.lockutils [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.527 2 DEBUG oslo_concurrency.lockutils [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.527 2 DEBUG nova.compute.manager [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Processing event network-vif-plugged-dffa5a1f-657b-498e-bbe5-6540fead7fb6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.528 2 DEBUG nova.compute.manager [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-vif-plugged-dffa5a1f-657b-498e-bbe5-6540fead7fb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.528 2 DEBUG oslo_concurrency.lockutils [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.529 2 DEBUG oslo_concurrency.lockutils [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.530 2 DEBUG oslo_concurrency.lockutils [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.530 2 DEBUG nova.compute.manager [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] No waiting events found dispatching network-vif-plugged-dffa5a1f-657b-498e-bbe5-6540fead7fb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.531 2 WARNING nova.compute.manager [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received unexpected event network-vif-plugged-dffa5a1f-657b-498e-bbe5-6540fead7fb6 for instance with vm_state building and task_state spawning.
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.532 2 DEBUG nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.536 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432772.5359058, a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.536 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] VM Resumed (Lifecycle Event)
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.540 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.545 2 INFO nova.virt.libvirt.driver [-] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Instance spawned successfully.
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.545 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.570 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.579 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.586 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.587 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.587 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.588 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.589 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.589 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.619 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.657 2 INFO nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Took 9.64 seconds to spawn the instance on the hypervisor.
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.657 2 DEBUG nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.732 2 DEBUG oslo_concurrency.lockutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.732 2 DEBUG oslo_concurrency.lockutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.733 2 DEBUG oslo_concurrency.lockutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.733 2 DEBUG oslo_concurrency.lockutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.733 2 DEBUG oslo_concurrency.lockutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.734 2 INFO nova.compute.manager [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Terminating instance
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.736 2 DEBUG nova.compute.manager [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.743 2 INFO nova.virt.libvirt.driver [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance destroyed successfully.
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.745 2 DEBUG nova.objects.instance [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'resources' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.756 2 INFO nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Took 10.69 seconds to build instance.
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.773 2 DEBUG nova.virt.libvirt.vif [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-412231426',display_name='tempest-tempest.common.compute-instance-412231426',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-412231426',id=56,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-v50tykwv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA-894139105-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:08Z,user_data=None,user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=ec31b9ab-88ab-4085-a46b-76cb9825061a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.774 2 DEBUG nova.network.os_vif_util [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.775 2 DEBUG nova.network.os_vif_util [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.775 2 DEBUG os_vif [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.779 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31797867-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.780 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:06:12 compute-0 nova_compute[259627]: 2025-10-14 09:06:12.786 2 INFO os_vif [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0')
Oct 14 09:06:13 compute-0 nova_compute[259627]: 2025-10-14 09:06:13.196 2 INFO nova.virt.libvirt.driver [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Deleting instance files /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a_del
Oct 14 09:06:13 compute-0 nova_compute[259627]: 2025-10-14 09:06:13.198 2 INFO nova.virt.libvirt.driver [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Deletion of /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a_del complete
Oct 14 09:06:13 compute-0 nova_compute[259627]: 2025-10-14 09:06:13.282 2 INFO nova.compute.manager [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Took 0.55 seconds to destroy the instance on the hypervisor.
Oct 14 09:06:13 compute-0 nova_compute[259627]: 2025-10-14 09:06:13.283 2 DEBUG oslo.service.loopingcall [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:06:13 compute-0 nova_compute[259627]: 2025-10-14 09:06:13.284 2 DEBUG nova.compute.manager [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:06:13 compute-0 nova_compute[259627]: 2025-10-14 09:06:13.285 2 DEBUG nova.network.neutron [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:06:13 compute-0 ceph-mon[74249]: pgmap v1518: 305 pgs: 305 active+clean; 214 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 5.7 MiB/s wr, 167 op/s
Oct 14 09:06:13 compute-0 nova_compute[259627]: 2025-10-14 09:06:13.577 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:13 compute-0 nova_compute[259627]: 2025-10-14 09:06:13.578 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:13 compute-0 nova_compute[259627]: 2025-10-14 09:06:13.600 2 DEBUG nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:06:13 compute-0 nova_compute[259627]: 2025-10-14 09:06:13.682 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:13 compute-0 nova_compute[259627]: 2025-10-14 09:06:13.683 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:13 compute-0 nova_compute[259627]: 2025-10-14 09:06:13.695 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:06:13 compute-0 nova_compute[259627]: 2025-10-14 09:06:13.696 2 INFO nova.compute.claims [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:06:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 214 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 3.6 MiB/s wr, 101 op/s
Oct 14 09:06:13 compute-0 nova_compute[259627]: 2025-10-14 09:06:13.868 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:13 compute-0 nova_compute[259627]: 2025-10-14 09:06:13.916 2 DEBUG nova.network.neutron [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:13 compute-0 nova_compute[259627]: 2025-10-14 09:06:13.934 2 INFO nova.compute.manager [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Took 0.65 seconds to deallocate network for instance.
Oct 14 09:06:13 compute-0 nova_compute[259627]: 2025-10-14 09:06:13.976 2 DEBUG oslo_concurrency.lockutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:06:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4270437326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.351 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.357 2 DEBUG nova.compute.provider_tree [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.372 2 DEBUG nova.scheduler.client.report [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.398 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.399 2 DEBUG nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.403 2 DEBUG oslo_concurrency.lockutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.467 2 DEBUG nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.468 2 DEBUG nova.network.neutron [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.488 2 INFO nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.511 2 DEBUG nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.544 2 DEBUG oslo_concurrency.processutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:14 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4270437326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.645 2 DEBUG nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.648 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.649 2 INFO nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Creating image(s)
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.685 2 DEBUG nova.storage.rbd_utils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.717 2 DEBUG nova.storage.rbd_utils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.740 2 DEBUG nova.storage.rbd_utils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.743 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.781 2 DEBUG nova.policy [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a72439ec330b476ca4bb358682159b61', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd39581efff7d48fb83412ca1f615d412', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.786 2 DEBUG nova.compute.manager [req-b1510d54-4efa-4083-b982-b6ddd24a2d2f req-da68a5f3-ce98-4562-812c-c7d56eb33f72 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-vif-deleted-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.822 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.823 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.824 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.824 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.849 2 DEBUG nova.storage.rbd_utils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.852 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:06:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/69457542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.976 2 DEBUG oslo_concurrency.processutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:14 compute-0 nova_compute[259627]: 2025-10-14 09:06:14.982 2 DEBUG nova.compute.provider_tree [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:06:15 compute-0 nova_compute[259627]: 2025-10-14 09:06:15.006 2 DEBUG nova.scheduler.client.report [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:06:15 compute-0 nova_compute[259627]: 2025-10-14 09:06:15.045 2 DEBUG oslo_concurrency.lockutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:15 compute-0 nova_compute[259627]: 2025-10-14 09:06:15.089 2 INFO nova.scheduler.client.report [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Deleted allocations for instance ec31b9ab-88ab-4085-a46b-76cb9825061a
Oct 14 09:06:15 compute-0 nova_compute[259627]: 2025-10-14 09:06:15.145 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:15 compute-0 nova_compute[259627]: 2025-10-14 09:06:15.199 2 DEBUG oslo_concurrency.lockutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:15 compute-0 nova_compute[259627]: 2025-10-14 09:06:15.205 2 DEBUG nova.storage.rbd_utils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] resizing rbd image 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:06:15 compute-0 nova_compute[259627]: 2025-10-14 09:06:15.292 2 DEBUG nova.objects.instance [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'migration_context' on Instance uuid 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:15 compute-0 nova_compute[259627]: 2025-10-14 09:06:15.309 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:06:15 compute-0 nova_compute[259627]: 2025-10-14 09:06:15.310 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Ensure instance console log exists: /var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:06:15 compute-0 nova_compute[259627]: 2025-10-14 09:06:15.310 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:15 compute-0 nova_compute[259627]: 2025-10-14 09:06:15.310 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:15 compute-0 nova_compute[259627]: 2025-10-14 09:06:15.311 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:15 compute-0 nova_compute[259627]: 2025-10-14 09:06:15.473 2 DEBUG nova.network.neutron [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Successfully created port: 1550cd45-1c1e-4505-8762-fb1668990b8f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:06:15 compute-0 nova_compute[259627]: 2025-10-14 09:06:15.514 2 DEBUG nova.compute.manager [req-554431a8-d8c8-434a-a779-4a07ca2a9a3b req-9d4ece7e-c96b-48e4-a8ac-c67b01702d9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:15 compute-0 nova_compute[259627]: 2025-10-14 09:06:15.515 2 DEBUG nova.compute.manager [req-554431a8-d8c8-434a-a779-4a07ca2a9a3b req-9d4ece7e-c96b-48e4-a8ac-c67b01702d9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing instance network info cache due to event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:06:15 compute-0 nova_compute[259627]: 2025-10-14 09:06:15.516 2 DEBUG oslo_concurrency.lockutils [req-554431a8-d8c8-434a-a779-4a07ca2a9a3b req-9d4ece7e-c96b-48e4-a8ac-c67b01702d9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:15 compute-0 nova_compute[259627]: 2025-10-14 09:06:15.516 2 DEBUG oslo_concurrency.lockutils [req-554431a8-d8c8-434a-a779-4a07ca2a9a3b req-9d4ece7e-c96b-48e4-a8ac-c67b01702d9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:15 compute-0 nova_compute[259627]: 2025-10-14 09:06:15.516 2 DEBUG nova.network.neutron [req-554431a8-d8c8-434a-a779-4a07ca2a9a3b req-9d4ece7e-c96b-48e4-a8ac-c67b01702d9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:06:15 compute-0 ceph-mon[74249]: pgmap v1519: 305 pgs: 305 active+clean; 214 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 3.6 MiB/s wr, 101 op/s
Oct 14 09:06:15 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/69457542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 200 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.8 MiB/s wr, 218 op/s
Oct 14 09:06:16 compute-0 nova_compute[259627]: 2025-10-14 09:06:16.245 2 DEBUG nova.network.neutron [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Successfully updated port: 1550cd45-1c1e-4505-8762-fb1668990b8f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:06:16 compute-0 nova_compute[259627]: 2025-10-14 09:06:16.266 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "refresh_cache-97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:16 compute-0 nova_compute[259627]: 2025-10-14 09:06:16.267 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquired lock "refresh_cache-97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:16 compute-0 nova_compute[259627]: 2025-10-14 09:06:16.267 2 DEBUG nova.network.neutron [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:06:16 compute-0 nova_compute[259627]: 2025-10-14 09:06:16.519 2 DEBUG nova.network.neutron [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.002 2 DEBUG nova.compute.manager [req-eb3ab860-32b2-4d78-b25b-62b4c4f4975f req-22e98623-ca12-4710-98c8-91910e53ecbb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Received event network-changed-1550cd45-1c1e-4505-8762-fb1668990b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.003 2 DEBUG nova.compute.manager [req-eb3ab860-32b2-4d78-b25b-62b4c4f4975f req-22e98623-ca12-4710-98c8-91910e53ecbb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Refreshing instance network info cache due to event network-changed-1550cd45-1c1e-4505-8762-fb1668990b8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.003 2 DEBUG oslo_concurrency.lockutils [req-eb3ab860-32b2-4d78-b25b-62b4c4f4975f req-22e98623-ca12-4710-98c8-91910e53ecbb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.142 2 DEBUG nova.network.neutron [req-554431a8-d8c8-434a-a779-4a07ca2a9a3b req-9d4ece7e-c96b-48e4-a8ac-c67b01702d9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updated VIF entry in instance network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.143 2 DEBUG nova.network.neutron [req-554431a8-d8c8-434a-a779-4a07ca2a9a3b req-9d4ece7e-c96b-48e4-a8ac-c67b01702d9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.167 2 DEBUG oslo_concurrency.lockutils [req-554431a8-d8c8-434a-a779-4a07ca2a9a3b req-9d4ece7e-c96b-48e4-a8ac-c67b01702d9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:17 compute-0 ceph-mon[74249]: pgmap v1520: 305 pgs: 305 active+clean; 200 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.8 MiB/s wr, 218 op/s
Oct 14 09:06:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.719 2 DEBUG nova.network.neutron [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Updating instance_info_cache with network_info: [{"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 200 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 136 op/s
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.760 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Releasing lock "refresh_cache-97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.761 2 DEBUG nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Instance network_info: |[{"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.763 2 DEBUG oslo_concurrency.lockutils [req-eb3ab860-32b2-4d78-b25b-62b4c4f4975f req-22e98623-ca12-4710-98c8-91910e53ecbb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.763 2 DEBUG nova.network.neutron [req-eb3ab860-32b2-4d78-b25b-62b4c4f4975f req-22e98623-ca12-4710-98c8-91910e53ecbb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Refreshing network info cache for port 1550cd45-1c1e-4505-8762-fb1668990b8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.768 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Start _get_guest_xml network_info=[{"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.775 2 WARNING nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.781 2 DEBUG nova.virt.libvirt.host [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.782 2 DEBUG nova.virt.libvirt.host [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.795 2 DEBUG nova.virt.libvirt.host [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.796 2 DEBUG nova.virt.libvirt.host [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.797 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.797 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.798 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.799 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.800 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.800 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.801 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.801 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.802 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.802 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.803 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.804 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.808 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.882 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.883 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.904 2 DEBUG nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.974 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.974 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.981 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:06:17 compute-0 nova_compute[259627]: 2025-10-14 09:06:17.982 2 INFO nova.compute.claims [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.148 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:06:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/510970167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.266 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.290 2 DEBUG nova.storage.rbd_utils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.294 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:18 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/510970167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:06:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2757796224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.674 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.683 2 DEBUG nova.compute.provider_tree [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.707 2 DEBUG nova.scheduler.client.report [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.728 2 DEBUG nova.network.neutron [req-eb3ab860-32b2-4d78-b25b-62b4c4f4975f req-22e98623-ca12-4710-98c8-91910e53ecbb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Updated VIF entry in instance network info cache for port 1550cd45-1c1e-4505-8762-fb1668990b8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.729 2 DEBUG nova.network.neutron [req-eb3ab860-32b2-4d78-b25b-62b4c4f4975f req-22e98623-ca12-4710-98c8-91910e53ecbb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Updating instance_info_cache with network_info: [{"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.738 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.739 2 DEBUG nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.746 2 DEBUG oslo_concurrency.lockutils [req-eb3ab860-32b2-4d78-b25b-62b4c4f4975f req-22e98623-ca12-4710-98c8-91910e53ecbb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:06:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3568574214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.798 2 DEBUG nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.799 2 DEBUG nova.network.neutron [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.812 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.814 2 DEBUG nova.virt.libvirt.vif [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1220502864',display_name='tempest-DeleteServersTestJSON-server-1220502864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1220502864',id=58,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-a7evba7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285
866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:14Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.815 2 DEBUG nova.network.os_vif_util [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.816 2 DEBUG nova.network.os_vif_util [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=1550cd45-1c1e-4505-8762-fb1668990b8f,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1550cd45-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.818 2 DEBUG nova.objects.instance [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'pci_devices' on Instance uuid 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.821 2 INFO nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.839 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:06:18 compute-0 nova_compute[259627]:   <uuid>97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce</uuid>
Oct 14 09:06:18 compute-0 nova_compute[259627]:   <name>instance-0000003a</name>
Oct 14 09:06:18 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:06:18 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:06:18 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <nova:name>tempest-DeleteServersTestJSON-server-1220502864</nova:name>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:06:17</nova:creationTime>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:06:18 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:06:18 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:06:18 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:06:18 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:06:18 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:06:18 compute-0 nova_compute[259627]:         <nova:user uuid="a72439ec330b476ca4bb358682159b61">tempest-DeleteServersTestJSON-555285866-project-member</nova:user>
Oct 14 09:06:18 compute-0 nova_compute[259627]:         <nova:project uuid="d39581efff7d48fb83412ca1f615d412">tempest-DeleteServersTestJSON-555285866</nova:project>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:06:18 compute-0 nova_compute[259627]:         <nova:port uuid="1550cd45-1c1e-4505-8762-fb1668990b8f">
Oct 14 09:06:18 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:06:18 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:06:18 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <system>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <entry name="serial">97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce</entry>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <entry name="uuid">97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce</entry>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     </system>
Oct 14 09:06:18 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:06:18 compute-0 nova_compute[259627]:   <os>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:   </os>
Oct 14 09:06:18 compute-0 nova_compute[259627]:   <features>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:   </features>
Oct 14 09:06:18 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:06:18 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:06:18 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk">
Oct 14 09:06:18 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:06:18 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk.config">
Oct 14 09:06:18 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:06:18 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:c5:61:e3"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <target dev="tap1550cd45-1c"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce/console.log" append="off"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <video>
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     </video>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:06:18 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:06:18 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:06:18 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:06:18 compute-0 nova_compute[259627]: </domain>
Oct 14 09:06:18 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.850 2 DEBUG nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Preparing to wait for external event network-vif-plugged-1550cd45-1c1e-4505-8762-fb1668990b8f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.850 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.851 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.851 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.852 2 DEBUG nova.virt.libvirt.vif [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1220502864',display_name='tempest-DeleteServersTestJSON-server-1220502864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1220502864',id=58,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-a7evba7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJ
SON-555285866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:14Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.853 2 DEBUG nova.network.os_vif_util [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.854 2 DEBUG nova.network.os_vif_util [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=1550cd45-1c1e-4505-8762-fb1668990b8f,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1550cd45-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.854 2 DEBUG os_vif [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=1550cd45-1c1e-4505-8762-fb1668990b8f,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1550cd45-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.860 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.861 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.861 2 DEBUG nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.867 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1550cd45-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.868 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1550cd45-1c, col_values=(('external_ids', {'iface-id': '1550cd45-1c1e-4505-8762-fb1668990b8f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:61:e3', 'vm-uuid': '97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:18 compute-0 NetworkManager[44885]: <info>  [1760432778.9140] manager: (tap1550cd45-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.925 2 INFO os_vif [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=1550cd45-1c1e-4505-8762-fb1668990b8f,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1550cd45-1c')
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.951 2 DEBUG nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.953 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.954 2 INFO nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Creating image(s)
Oct 14 09:06:18 compute-0 nova_compute[259627]: 2025-10-14 09:06:18.991 2 DEBUG nova.storage.rbd_utils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.027 2 DEBUG nova.storage.rbd_utils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.062 2 DEBUG nova.storage.rbd_utils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.067 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.136 2 DEBUG nova.policy [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.160 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.160 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.161 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.161 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.184 2 DEBUG nova.storage.rbd_utils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.188 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.220 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.221 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.221 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No VIF found with MAC fa:16:3e:c5:61:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.222 2 INFO nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Using config drive
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.248 2 DEBUG nova.storage.rbd_utils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.455 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.515 2 DEBUG nova.storage.rbd_utils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] resizing rbd image 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:06:19 compute-0 ceph-mon[74249]: pgmap v1521: 305 pgs: 305 active+clean; 200 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 136 op/s
Oct 14 09:06:19 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2757796224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:19 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3568574214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.652 2 DEBUG nova.objects.instance [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'migration_context' on Instance uuid 2189eac5-238f-4f09-ae1c-1cf47c3b6030 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.664 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.665 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Ensure instance console log exists: /var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.666 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.666 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.667 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 200 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 136 op/s
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.781 2 INFO nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Creating config drive at /var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce/disk.config
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.791 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuvfr0hdm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.954 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuvfr0hdm" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:19 compute-0 nova_compute[259627]: 2025-10-14 09:06:19.998 2 DEBUG nova.storage.rbd_utils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:20 compute-0 nova_compute[259627]: 2025-10-14 09:06:20.004 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce/disk.config 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:20 compute-0 nova_compute[259627]: 2025-10-14 09:06:20.046 2 DEBUG nova.network.neutron [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Successfully created port: 350a3bec-5dbd-4a83-8d80-5796be0319fd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:06:20 compute-0 nova_compute[259627]: 2025-10-14 09:06:20.182 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce/disk.config 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:20 compute-0 nova_compute[259627]: 2025-10-14 09:06:20.183 2 INFO nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Deleting local config drive /var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce/disk.config because it was imported into RBD.
Oct 14 09:06:20 compute-0 NetworkManager[44885]: <info>  [1760432780.2250] manager: (tap1550cd45-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/253)
Oct 14 09:06:20 compute-0 kernel: tap1550cd45-1c: entered promiscuous mode
Oct 14 09:06:20 compute-0 nova_compute[259627]: 2025-10-14 09:06:20.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:20 compute-0 ovn_controller[152662]: 2025-10-14T09:06:20Z|00582|binding|INFO|Claiming lport 1550cd45-1c1e-4505-8762-fb1668990b8f for this chassis.
Oct 14 09:06:20 compute-0 ovn_controller[152662]: 2025-10-14T09:06:20Z|00583|binding|INFO|1550cd45-1c1e-4505-8762-fb1668990b8f: Claiming fa:16:3e:c5:61:e3 10.100.0.9
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.278 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:61:e3 10.100.0.9'], port_security=['fa:16:3e:c5:61:e3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd39581efff7d48fb83412ca1f615d412', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd5e08c6-d8c1-45fb-85db-6ce5c46fea39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c165cf5-3bad-4110-8321-7f7bda9723ad, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1550cd45-1c1e-4505-8762-fb1668990b8f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.279 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1550cd45-1c1e-4505-8762-fb1668990b8f in datapath 0a07d59e-be8b-4d41-a103-fb5a64bf6f88 bound to our chassis
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.280 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 09:06:20 compute-0 ovn_controller[152662]: 2025-10-14T09:06:20Z|00584|binding|INFO|Setting lport 1550cd45-1c1e-4505-8762-fb1668990b8f ovn-installed in OVS
Oct 14 09:06:20 compute-0 ovn_controller[152662]: 2025-10-14T09:06:20Z|00585|binding|INFO|Setting lport 1550cd45-1c1e-4505-8762-fb1668990b8f up in Southbound
Oct 14 09:06:20 compute-0 nova_compute[259627]: 2025-10-14 09:06:20.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.300 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c81774-9bc6-42a1-a32f-795f022fc506]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.301 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a07d59e-b1 in ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:06:20 compute-0 systemd-machined[214636]: New machine qemu-73-instance-0000003a.
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.303 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a07d59e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.303 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[931ae6e0-2572-4d68-aeef-b9880caf1d3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.304 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dc2c3fac-b5e7-4347-a0bc-b116691cd3a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:20 compute-0 systemd[1]: Started Virtual Machine qemu-73-instance-0000003a.
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.314 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[ce274265-7eb9-4868-ad3e-7b4ba7af8af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:20 compute-0 systemd-udevd[323229]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.326 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8fcf0b1b-cc4b-4bce-ab16-72c31d6ec2a2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:20 compute-0 NetworkManager[44885]: <info>  [1760432780.3421] device (tap1550cd45-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:06:20 compute-0 NetworkManager[44885]: <info>  [1760432780.3429] device (tap1550cd45-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.360 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[96a6ef19-39ee-4494-a7ec-6723523bb5d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.365 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[10af2398-9734-4465-b47e-7a26a896094d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:20 compute-0 NetworkManager[44885]: <info>  [1760432780.3662] manager: (tap0a07d59e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/254)
Oct 14 09:06:20 compute-0 systemd-udevd[323233]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.408 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[180a7f93-12d1-45d7-9d38-d9db2e160b33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.411 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4a50f6cc-8781-4ebb-b05a-d78e42a90b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:20 compute-0 NetworkManager[44885]: <info>  [1760432780.4315] device (tap0a07d59e-b0): carrier: link connected
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.435 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7505253c-1888-4ebc-9e39-177b6aa12fb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.455 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b92e6009-66ee-4217-9a14-1c584a31a343]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a07d59e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2e:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655919, 'reachable_time': 33913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323259, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.471 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[16d51723-c8c1-4fec-8811-3795b67a8c36]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:2e1a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655919, 'tstamp': 655919}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323260, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.493 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd93ee1-8268-47ec-9227-d02ae541eb26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a07d59e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2e:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655919, 'reachable_time': 33913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323261, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.528 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a4115a7c-c777-49d2-8c99-5f8cb10cbdae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:20 compute-0 nova_compute[259627]: 2025-10-14 09:06:20.601 2 DEBUG nova.compute.manager [req-b52afbf6-46a5-4cba-a48d-f9ee2ec2d506 req-138f48fd-516f-40e0-840c-6c1aedd2dcf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Received event network-vif-plugged-1550cd45-1c1e-4505-8762-fb1668990b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:20 compute-0 nova_compute[259627]: 2025-10-14 09:06:20.603 2 DEBUG oslo_concurrency.lockutils [req-b52afbf6-46a5-4cba-a48d-f9ee2ec2d506 req-138f48fd-516f-40e0-840c-6c1aedd2dcf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:20 compute-0 nova_compute[259627]: 2025-10-14 09:06:20.603 2 DEBUG oslo_concurrency.lockutils [req-b52afbf6-46a5-4cba-a48d-f9ee2ec2d506 req-138f48fd-516f-40e0-840c-6c1aedd2dcf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:20 compute-0 nova_compute[259627]: 2025-10-14 09:06:20.604 2 DEBUG oslo_concurrency.lockutils [req-b52afbf6-46a5-4cba-a48d-f9ee2ec2d506 req-138f48fd-516f-40e0-840c-6c1aedd2dcf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:20 compute-0 nova_compute[259627]: 2025-10-14 09:06:20.605 2 DEBUG nova.compute.manager [req-b52afbf6-46a5-4cba-a48d-f9ee2ec2d506 req-138f48fd-516f-40e0-840c-6c1aedd2dcf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Processing event network-vif-plugged-1550cd45-1c1e-4505-8762-fb1668990b8f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.616 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2e25d245-c90b-4304-ad3c-4f2c8ea05625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.617 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a07d59e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.618 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.618 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a07d59e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:20 compute-0 nova_compute[259627]: 2025-10-14 09:06:20.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:20 compute-0 kernel: tap0a07d59e-b0: entered promiscuous mode
Oct 14 09:06:20 compute-0 NetworkManager[44885]: <info>  [1760432780.6209] manager: (tap0a07d59e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.622 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a07d59e-b0, col_values=(('external_ids', {'iface-id': '31ed66d8-7c3d-4486-83f3-5ccb9a199aa1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:20 compute-0 ovn_controller[152662]: 2025-10-14T09:06:20Z|00586|binding|INFO|Releasing lport 31ed66d8-7c3d-4486-83f3-5ccb9a199aa1 from this chassis (sb_readonly=0)
Oct 14 09:06:20 compute-0 nova_compute[259627]: 2025-10-14 09:06:20.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:20 compute-0 nova_compute[259627]: 2025-10-14 09:06:20.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.639 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.641 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1a5916cc-49ec-491c-b920-c78c7a48862b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.642 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:06:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.643 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'env', 'PROCESS_TAG=haproxy-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:06:20 compute-0 nova_compute[259627]: 2025-10-14 09:06:20.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:06:20 compute-0 nova_compute[259627]: 2025-10-14 09:06:20.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:06:21 compute-0 podman[323335]: 2025-10-14 09:06:21.089060287 +0000 UTC m=+0.065171836 container create f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:06:21 compute-0 podman[323335]: 2025-10-14 09:06:21.057235133 +0000 UTC m=+0.033346642 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:06:21 compute-0 systemd[1]: Started libpod-conmon-f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb.scope.
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.176 2 DEBUG nova.network.neutron [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Successfully updated port: 350a3bec-5dbd-4a83-8d80-5796be0319fd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:06:21 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.195 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.195 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.195 2 DEBUG nova.network.neutron [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:06:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfe46c2d430d558e2727a9bdbf703eb7324aebea7de1e5e4f99b74c1e21eb6fb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:06:21 compute-0 podman[323335]: 2025-10-14 09:06:21.223265252 +0000 UTC m=+0.199376771 container init f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:06:21 compute-0 podman[323335]: 2025-10-14 09:06:21.232637323 +0000 UTC m=+0.208748822 container start f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 09:06:21 compute-0 podman[323348]: 2025-10-14 09:06:21.236776145 +0000 UTC m=+0.089369452 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.247 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432781.2466352, 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.247 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] VM Started (Lifecycle Event)
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.249 2 DEBUG nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.252 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.255 2 INFO nova.virt.libvirt.driver [-] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Instance spawned successfully.
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.255 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:06:21 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[323363]: [NOTICE]   (323388) : New worker (323394) forked
Oct 14 09:06:21 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[323363]: [NOTICE]   (323388) : Loading success.
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.273 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.275 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.289 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.289 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.289 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.290 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.291 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.292 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.296 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.297 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432781.2471485, 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.297 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] VM Paused (Lifecycle Event)
Oct 14 09:06:21 compute-0 podman[323347]: 2025-10-14 09:06:21.307778473 +0000 UTC m=+0.163110988 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.325 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.328 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432781.2509403, 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.328 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] VM Resumed (Lifecycle Event)
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.354 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.356 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.365 2 INFO nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Took 6.72 seconds to spawn the instance on the hypervisor.
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.366 2 DEBUG nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.374 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.380 2 DEBUG nova.network.neutron [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.419 2 INFO nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Took 7.77 seconds to build instance.
Oct 14 09:06:21 compute-0 nova_compute[259627]: 2025-10-14 09:06:21.436 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:21 compute-0 ceph-mon[74249]: pgmap v1522: 305 pgs: 305 active+clean; 200 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 136 op/s
Oct 14 09:06:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 260 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.423 2 DEBUG nova.network.neutron [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updating instance_info_cache with network_info: [{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.451 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.452 2 DEBUG nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Instance network_info: |[{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.454 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Start _get_guest_xml network_info=[{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.459 2 WARNING nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.467 2 DEBUG nova.virt.libvirt.host [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.468 2 DEBUG nova.virt.libvirt.host [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.472 2 DEBUG nova.virt.libvirt.host [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.473 2 DEBUG nova.virt.libvirt.host [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.473 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.473 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.474 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.474 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.475 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.475 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.475 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.476 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.476 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.476 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.477 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.477 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.481 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:22 compute-0 ceph-mon[74249]: pgmap v1523: 305 pgs: 305 active+clean; 260 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Oct 14 09:06:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.722 2 DEBUG nova.compute.manager [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Received event network-vif-plugged-1550cd45-1c1e-4505-8762-fb1668990b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.722 2 DEBUG oslo_concurrency.lockutils [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.723 2 DEBUG oslo_concurrency.lockutils [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.723 2 DEBUG oslo_concurrency.lockutils [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.723 2 DEBUG nova.compute.manager [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] No waiting events found dispatching network-vif-plugged-1550cd45-1c1e-4505-8762-fb1668990b8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.723 2 WARNING nova.compute.manager [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Received unexpected event network-vif-plugged-1550cd45-1c1e-4505-8762-fb1668990b8f for instance with vm_state active and task_state None.
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.723 2 DEBUG nova.compute.manager [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-changed-350a3bec-5dbd-4a83-8d80-5796be0319fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.724 2 DEBUG nova.compute.manager [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing instance network info cache due to event network-changed-350a3bec-5dbd-4a83-8d80-5796be0319fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.724 2 DEBUG oslo_concurrency.lockutils [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.724 2 DEBUG oslo_concurrency.lockutils [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.724 2 DEBUG nova.network.neutron [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing network info cache for port 350a3bec-5dbd-4a83-8d80-5796be0319fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:06:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:06:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2504711210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.909 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.929 2 DEBUG nova.storage.rbd_utils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.933 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:06:22 compute-0 nova_compute[259627]: 2025-10-14 09:06:22.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.005 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.005 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.054 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.054 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.073 2 DEBUG nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.130 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.131 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.139 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.139 2 INFO nova.compute.claims [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.288 2 DEBUG oslo_concurrency.lockutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.289 2 DEBUG oslo_concurrency.lockutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.291 2 DEBUG oslo_concurrency.lockutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.291 2 DEBUG oslo_concurrency.lockutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.292 2 DEBUG oslo_concurrency.lockutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.293 2 INFO nova.compute.manager [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Terminating instance
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.294 2 DEBUG nova.compute.manager [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:06:23 compute-0 kernel: tap1550cd45-1c (unregistering): left promiscuous mode
Oct 14 09:06:23 compute-0 NetworkManager[44885]: <info>  [1760432783.3502] device (tap1550cd45-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.350 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:23 compute-0 ovn_controller[152662]: 2025-10-14T09:06:23Z|00587|binding|INFO|Releasing lport 1550cd45-1c1e-4505-8762-fb1668990b8f from this chassis (sb_readonly=0)
Oct 14 09:06:23 compute-0 ovn_controller[152662]: 2025-10-14T09:06:23Z|00588|binding|INFO|Setting lport 1550cd45-1c1e-4505-8762-fb1668990b8f down in Southbound
Oct 14 09:06:23 compute-0 ovn_controller[152662]: 2025-10-14T09:06:23Z|00589|binding|INFO|Removing iface tap1550cd45-1c ovn-installed in OVS
Oct 14 09:06:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.367 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:61:e3 10.100.0.9'], port_security=['fa:16:3e:c5:61:e3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd39581efff7d48fb83412ca1f615d412', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd5e08c6-d8c1-45fb-85db-6ce5c46fea39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c165cf5-3bad-4110-8321-7f7bda9723ad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1550cd45-1c1e-4505-8762-fb1668990b8f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:06:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.368 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1550cd45-1c1e-4505-8762-fb1668990b8f in datapath 0a07d59e-be8b-4d41-a103-fb5a64bf6f88 unbound from our chassis
Oct 14 09:06:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.369 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a07d59e-be8b-4d41-a103-fb5a64bf6f88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:06:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.371 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5de8ab-2995-4eaf-af55-ef797764113b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.371 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 namespace which is not needed anymore
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:23 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Oct 14 09:06:23 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000003a.scope: Consumed 2.894s CPU time.
Oct 14 09:06:23 compute-0 systemd-machined[214636]: Machine qemu-73-instance-0000003a terminated.
Oct 14 09:06:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:06:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2989097755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.429 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.430 2 DEBUG nova.virt.libvirt.vif [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2131193327',display_name='tempest-tempest.common.compute-instance-2131193327',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2131193327',id=59,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-dzeywdhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=2189eac5-238f-4f09-ae1c-1cf47c3b6030,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.430 2 DEBUG nova.network.os_vif_util [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.431 2 DEBUG nova.network.os_vif_util [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:3f:ae,bridge_name='br-int',has_traffic_filtering=True,id=350a3bec-5dbd-4a83-8d80-5796be0319fd,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap350a3bec-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.433 2 DEBUG nova.objects.instance [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_devices' on Instance uuid 2189eac5-238f-4f09-ae1c-1cf47c3b6030 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.451 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432768.4496799, ec31b9ab-88ab-4085-a46b-76cb9825061a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.451 2 INFO nova.compute.manager [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] VM Stopped (Lifecycle Event)
Oct 14 09:06:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:06:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3424773956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.460 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:06:23 compute-0 nova_compute[259627]:   <uuid>2189eac5-238f-4f09-ae1c-1cf47c3b6030</uuid>
Oct 14 09:06:23 compute-0 nova_compute[259627]:   <name>instance-0000003b</name>
Oct 14 09:06:23 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:06:23 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:06:23 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <nova:name>tempest-tempest.common.compute-instance-2131193327</nova:name>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:06:22</nova:creationTime>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:06:23 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:06:23 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:06:23 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:06:23 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:06:23 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:06:23 compute-0 nova_compute[259627]:         <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:06:23 compute-0 nova_compute[259627]:         <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:06:23 compute-0 nova_compute[259627]:         <nova:port uuid="350a3bec-5dbd-4a83-8d80-5796be0319fd">
Oct 14 09:06:23 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:06:23 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:06:23 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <system>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <entry name="serial">2189eac5-238f-4f09-ae1c-1cf47c3b6030</entry>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <entry name="uuid">2189eac5-238f-4f09-ae1c-1cf47c3b6030</entry>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     </system>
Oct 14 09:06:23 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:06:23 compute-0 nova_compute[259627]:   <os>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:   </os>
Oct 14 09:06:23 compute-0 nova_compute[259627]:   <features>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:   </features>
Oct 14 09:06:23 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:06:23 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:06:23 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk">
Oct 14 09:06:23 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:06:23 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk.config">
Oct 14 09:06:23 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:06:23 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:9c:3f:ae"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <target dev="tap350a3bec-5d"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/console.log" append="off"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <video>
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     </video>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:06:23 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:06:23 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:06:23 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:06:23 compute-0 nova_compute[259627]: </domain>
Oct 14 09:06:23 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.467 2 DEBUG nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Preparing to wait for external event network-vif-plugged-350a3bec-5dbd-4a83-8d80-5796be0319fd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.468 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.468 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.469 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.470 2 DEBUG nova.virt.libvirt.vif [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2131193327',display_name='tempest-tempest.common.compute-instance-2131193327',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2131193327',id=59,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-dzeywdhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=2189eac5-238f-4f09-ae1c-1cf47c3b6030,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.470 2 DEBUG nova.network.os_vif_util [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.471 2 DEBUG nova.network.os_vif_util [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:3f:ae,bridge_name='br-int',has_traffic_filtering=True,id=350a3bec-5dbd-4a83-8d80-5796be0319fd,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap350a3bec-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.471 2 DEBUG os_vif [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:3f:ae,bridge_name='br-int',has_traffic_filtering=True,id=350a3bec-5dbd-4a83-8d80-5796be0319fd,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap350a3bec-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.472 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.476 2 DEBUG nova.compute.manager [None req-6fdef719-9673-41b2-b5ce-10bea6d7191d - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.477 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap350a3bec-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap350a3bec-5d, col_values=(('external_ids', {'iface-id': '350a3bec-5dbd-4a83-8d80-5796be0319fd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:3f:ae', 'vm-uuid': '2189eac5-238f-4f09-ae1c-1cf47c3b6030'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.479 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:23 compute-0 NetworkManager[44885]: <info>  [1760432783.4805] manager: (tap350a3bec-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.488 2 INFO os_vif [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:3f:ae,bridge_name='br-int',has_traffic_filtering=True,id=350a3bec-5dbd-4a83-8d80-5796be0319fd,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap350a3bec-5d')
Oct 14 09:06:23 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[323363]: [NOTICE]   (323388) : haproxy version is 2.8.14-c23fe91
Oct 14 09:06:23 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[323363]: [NOTICE]   (323388) : path to executable is /usr/sbin/haproxy
Oct 14 09:06:23 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[323363]: [WARNING]  (323388) : Exiting Master process...
Oct 14 09:06:23 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[323363]: [WARNING]  (323388) : Exiting Master process...
Oct 14 09:06:23 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[323363]: [ALERT]    (323388) : Current worker (323394) exited with code 143 (Terminated)
Oct 14 09:06:23 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[323363]: [WARNING]  (323388) : All workers exited. Exiting... (0)
Oct 14 09:06:23 compute-0 systemd[1]: libpod-f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb.scope: Deactivated successfully.
Oct 14 09:06:23 compute-0 podman[323512]: 2025-10-14 09:06:23.517412501 +0000 UTC m=+0.051985301 container died f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.538 2 INFO nova.virt.libvirt.driver [-] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Instance destroyed successfully.
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.539 2 DEBUG nova.objects.instance [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'resources' on Instance uuid 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb-userdata-shm.mount: Deactivated successfully.
Oct 14 09:06:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfe46c2d430d558e2727a9bdbf703eb7324aebea7de1e5e4f99b74c1e21eb6fb-merged.mount: Deactivated successfully.
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.556 2 DEBUG nova.virt.libvirt.vif [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1220502864',display_name='tempest-DeleteServersTestJSON-server-1220502864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1220502864',id=58,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-a7evba7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:21Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.557 2 DEBUG nova.network.os_vif_util [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.558 2 DEBUG nova.network.os_vif_util [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=1550cd45-1c1e-4505-8762-fb1668990b8f,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1550cd45-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.558 2 DEBUG os_vif [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=1550cd45-1c1e-4505-8762-fb1668990b8f,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1550cd45-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.561 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1550cd45-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.574 2 INFO os_vif [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=1550cd45-1c1e-4505-8762-fb1668990b8f,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1550cd45-1c')
Oct 14 09:06:23 compute-0 podman[323512]: 2025-10-14 09:06:23.579472009 +0000 UTC m=+0.114044799 container cleanup f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.600 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.600 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.601 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:9c:3f:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.602 2 INFO nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Using config drive
Oct 14 09:06:23 compute-0 systemd[1]: libpod-conmon-f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb.scope: Deactivated successfully.
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.628 2 DEBUG nova.storage.rbd_utils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2504711210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2989097755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3424773956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:23 compute-0 podman[323592]: 2025-10-14 09:06:23.669470855 +0000 UTC m=+0.063028342 container remove f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:06:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.677 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[91c83b0f-d956-425e-9964-16a2424a2fa9]: (4, ('Tue Oct 14 09:06:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 (f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb)\nf152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb\nTue Oct 14 09:06:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 (f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb)\nf152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.679 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f2d926-a85c-415d-9ac5-fed947636ca0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.680 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a07d59e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:23 compute-0 kernel: tap0a07d59e-b0: left promiscuous mode
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.701 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.702 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.709 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.709 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4d75fcaf-f2fc-4466-9c89-0ff331af734d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.713 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.713 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.717 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.717 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:06:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 260 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 147 op/s
Oct 14 09:06:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.734 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[15a32f35-5d2a-4a2f-a98c-6156a19ef6af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.736 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b84a851e-401a-4a94-9cf5-16cc6b2c3a3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.758 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9962208e-f0be-4f98-a927-1b371fa5beda]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655911, 'reachable_time': 21762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323633, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d0a07d59e\x2dbe8b\x2d4d41\x2da103\x2dfb5a64bf6f88.mount: Deactivated successfully.
Oct 14 09:06:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.763 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:06:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.763 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[9ef93816-bf2f-45b2-93ce-294c724dcb07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:06:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3843737089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.915 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.916 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3652MB free_disk=59.8802490234375GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.916 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.925 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.930 2 DEBUG nova.compute.provider_tree [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.943 2 DEBUG nova.scheduler.client.report [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.967 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.967 2 DEBUG nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:06:23 compute-0 nova_compute[259627]: 2025-10-14 09:06:23.970 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.041 2 DEBUG nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.041 2 DEBUG nova.network.neutron [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.057 2 INFO nova.virt.libvirt.driver [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Deleting instance files /var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_del
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.058 2 INFO nova.virt.libvirt.driver [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Deletion of /var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_del complete
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.062 2 INFO nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.078 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.078 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.079 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.079 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 2189eac5-238f-4f09-ae1c-1cf47c3b6030 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.079 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance dd55716e-2330-42a4-8963-33bdc9c7bbf8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.079 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.080 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.084 2 DEBUG nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.104 2 INFO nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Creating config drive at /var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/disk.config
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.114 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbnheo7sr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.157 2 INFO nova.compute.manager [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Took 0.86 seconds to destroy the instance on the hypervisor.
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.158 2 DEBUG oslo.service.loopingcall [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.158 2 DEBUG nova.compute.manager [-] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.159 2 DEBUG nova.network.neutron [-] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.201 2 DEBUG nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.202 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.203 2 INFO nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Creating image(s)
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.223 2 DEBUG nova.storage.rbd_utils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.248 2 DEBUG nova.storage.rbd_utils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.275 2 DEBUG nova.storage.rbd_utils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.279 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.327 2 DEBUG nova.policy [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd952679a4e6a4fc6bacf42c02d3e92d0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.329 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbnheo7sr" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.362 2 DEBUG nova.storage.rbd_utils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.367 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/disk.config 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.410 2 DEBUG nova.network.neutron [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updated VIF entry in instance network info cache for port 350a3bec-5dbd-4a83-8d80-5796be0319fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.411 2 DEBUG nova.network.neutron [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updating instance_info_cache with network_info: [{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.419 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.420 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.421 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.421 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.448 2 DEBUG nova.storage.rbd_utils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.453 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.513 2 DEBUG oslo_concurrency.lockutils [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.530 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.583 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/disk.config 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.585 2 INFO nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Deleting local config drive /var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/disk.config because it was imported into RBD.
Oct 14 09:06:24 compute-0 ceph-mon[74249]: pgmap v1524: 305 pgs: 305 active+clean; 260 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 147 op/s
Oct 14 09:06:24 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3843737089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:24 compute-0 kernel: tap350a3bec-5d: entered promiscuous mode
Oct 14 09:06:24 compute-0 NetworkManager[44885]: <info>  [1760432784.6465] manager: (tap350a3bec-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/257)
Oct 14 09:06:24 compute-0 systemd-udevd[323497]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:24 compute-0 ovn_controller[152662]: 2025-10-14T09:06:24Z|00590|binding|INFO|Claiming lport 350a3bec-5dbd-4a83-8d80-5796be0319fd for this chassis.
Oct 14 09:06:24 compute-0 ovn_controller[152662]: 2025-10-14T09:06:24Z|00591|binding|INFO|350a3bec-5dbd-4a83-8d80-5796be0319fd: Claiming fa:16:3e:9c:3f:ae 10.100.0.6
Oct 14 09:06:24 compute-0 NetworkManager[44885]: <info>  [1760432784.6588] device (tap350a3bec-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:06:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.657 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:3f:ae 10.100.0.6'], port_security=['fa:16:3e:9c:3f:ae 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2189eac5-238f-4f09-ae1c-1cf47c3b6030', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e6192a40-cbd8-43eb-9955-4fede99ddb79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=350a3bec-5dbd-4a83-8d80-5796be0319fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:06:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.659 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 350a3bec-5dbd-4a83-8d80-5796be0319fd in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 bound to our chassis
Oct 14 09:06:24 compute-0 NetworkManager[44885]: <info>  [1760432784.6604] device (tap350a3bec-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:06:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.662 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:06:24 compute-0 ovn_controller[152662]: 2025-10-14T09:06:24Z|00592|binding|INFO|Setting lport 350a3bec-5dbd-4a83-8d80-5796be0319fd ovn-installed in OVS
Oct 14 09:06:24 compute-0 ovn_controller[152662]: 2025-10-14T09:06:24Z|00593|binding|INFO|Setting lport 350a3bec-5dbd-4a83-8d80-5796be0319fd up in Southbound
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.683 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7dc762-1897-48cf-b94d-da6e8b40a937]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:24 compute-0 systemd-machined[214636]: New machine qemu-74-instance-0000003b.
Oct 14 09:06:24 compute-0 systemd[1]: Started Virtual Machine qemu-74-instance-0000003b.
Oct 14 09:06:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.721 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[01120e2e-1356-4e74-9611-31c6334a2348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.727 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[313b1781-f359-4d35-a6b1-56612ff5680e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.757 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b98cf3b2-0680-4504-88f5-4dc9027589e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.774 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[08a4f83a-c187-4015-9bd2-7fa3d32b1e89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654854, 'reachable_time': 41314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323814, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.782 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.791 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bb87511f-ec7a-4bb2-b5b5-ef45a5076cd2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654865, 'tstamp': 654865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323816, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654867, 'tstamp': 654867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323816, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.793 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.796 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.796 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.796 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.797 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.839 2 DEBUG nova.storage.rbd_utils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] resizing rbd image dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.867 2 DEBUG nova.compute.manager [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Received event network-vif-unplugged-1550cd45-1c1e-4505-8762-fb1668990b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.867 2 DEBUG oslo_concurrency.lockutils [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.868 2 DEBUG oslo_concurrency.lockutils [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.868 2 DEBUG oslo_concurrency.lockutils [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.868 2 DEBUG nova.compute.manager [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] No waiting events found dispatching network-vif-unplugged-1550cd45-1c1e-4505-8762-fb1668990b8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.868 2 DEBUG nova.compute.manager [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Received event network-vif-unplugged-1550cd45-1c1e-4505-8762-fb1668990b8f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.868 2 DEBUG nova.compute.manager [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Received event network-vif-plugged-1550cd45-1c1e-4505-8762-fb1668990b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.869 2 DEBUG oslo_concurrency.lockutils [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.869 2 DEBUG oslo_concurrency.lockutils [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.869 2 DEBUG oslo_concurrency.lockutils [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.869 2 DEBUG nova.compute.manager [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] No waiting events found dispatching network-vif-plugged-1550cd45-1c1e-4505-8762-fb1668990b8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.869 2 WARNING nova.compute.manager [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Received unexpected event network-vif-plugged-1550cd45-1c1e-4505-8762-fb1668990b8f for instance with vm_state active and task_state deleting.
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.942 2 DEBUG nova.network.neutron [-] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.952 2 DEBUG nova.objects.instance [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'migration_context' on Instance uuid dd55716e-2330-42a4-8963-33bdc9c7bbf8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.966 2 INFO nova.compute.manager [-] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Took 0.81 seconds to deallocate network for instance.
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.971 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.971 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Ensure instance console log exists: /var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.972 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.972 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:24 compute-0 nova_compute[259627]: 2025-10-14 09:06:24.972 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.001 2 DEBUG oslo_concurrency.lockutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:06:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4286670197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.018 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.024 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.037 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.056 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.056 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.056 2 DEBUG oslo_concurrency.lockutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.069 2 DEBUG nova.network.neutron [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Successfully created port: deb48802-bfee-42af-882c-3632b1fbb2cf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:06:25 compute-0 ovn_controller[152662]: 2025-10-14T09:06:25Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b4:40:de 10.100.0.8
Oct 14 09:06:25 compute-0 ovn_controller[152662]: 2025-10-14T09:06:25Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b4:40:de 10.100.0.8
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.234 2 DEBUG oslo_concurrency.processutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.277 2 DEBUG nova.compute.manager [req-e99c7aba-7c7f-4a63-a270-4070fdbe5519 req-f4925044-c723-4ab1-87a7-2d7be0564b47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-plugged-350a3bec-5dbd-4a83-8d80-5796be0319fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.277 2 DEBUG oslo_concurrency.lockutils [req-e99c7aba-7c7f-4a63-a270-4070fdbe5519 req-f4925044-c723-4ab1-87a7-2d7be0564b47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.278 2 DEBUG oslo_concurrency.lockutils [req-e99c7aba-7c7f-4a63-a270-4070fdbe5519 req-f4925044-c723-4ab1-87a7-2d7be0564b47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.278 2 DEBUG oslo_concurrency.lockutils [req-e99c7aba-7c7f-4a63-a270-4070fdbe5519 req-f4925044-c723-4ab1-87a7-2d7be0564b47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.278 2 DEBUG nova.compute.manager [req-e99c7aba-7c7f-4a63-a270-4070fdbe5519 req-f4925044-c723-4ab1-87a7-2d7be0564b47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Processing event network-vif-plugged-350a3bec-5dbd-4a83-8d80-5796be0319fd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:06:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4286670197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:06:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2414840640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.699 2 DEBUG oslo_concurrency.processutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.707 2 DEBUG nova.compute.provider_tree [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:06:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 291 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 7.5 MiB/s wr, 339 op/s
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.742 2 DEBUG nova.scheduler.client.report [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.775 2 DEBUG oslo_concurrency.lockutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.812 2 INFO nova.scheduler.client.report [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Deleted allocations for instance 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.893 2 DEBUG nova.network.neutron [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Successfully updated port: deb48802-bfee-42af-882c-3632b1fbb2cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.897 2 DEBUG oslo_concurrency.lockutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.913 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "refresh_cache-dd55716e-2330-42a4-8963-33bdc9c7bbf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.913 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquired lock "refresh_cache-dd55716e-2330-42a4-8963-33bdc9c7bbf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:25 compute-0 nova_compute[259627]: 2025-10-14 09:06:25.915 2 DEBUG nova.network.neutron [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.057 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.058 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.111 2 DEBUG nova.network.neutron [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.263 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432786.2628214, 2189eac5-238f-4f09-ae1c-1cf47c3b6030 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.263 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] VM Started (Lifecycle Event)
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.266 2 DEBUG nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.270 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.273 2 INFO nova.virt.libvirt.driver [-] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Instance spawned successfully.
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.273 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.288 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.297 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.303 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.304 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.305 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.305 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.306 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.307 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.322 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.323 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432786.2629719, 2189eac5-238f-4f09-ae1c-1cf47c3b6030 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.323 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] VM Paused (Lifecycle Event)
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.365 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.369 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432786.2688997, 2189eac5-238f-4f09-ae1c-1cf47c3b6030 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.369 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] VM Resumed (Lifecycle Event)
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.402 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.406 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.421 2 INFO nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Took 7.47 seconds to spawn the instance on the hypervisor.
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.421 2 DEBUG nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.434 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.516 2 INFO nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Took 8.56 seconds to build instance.
Oct 14 09:06:26 compute-0 nova_compute[259627]: 2025-10-14 09:06:26.534 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:26 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2414840640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:26 compute-0 ceph-mon[74249]: pgmap v1525: 305 pgs: 305 active+clean; 291 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 7.5 MiB/s wr, 339 op/s
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.030 2 DEBUG nova.network.neutron [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Updating instance_info_cache with network_info: [{"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.060 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Releasing lock "refresh_cache-dd55716e-2330-42a4-8963-33bdc9c7bbf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.060 2 DEBUG nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Instance network_info: |[{"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.065 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Start _get_guest_xml network_info=[{"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.069 2 WARNING nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.075 2 DEBUG nova.virt.libvirt.host [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.076 2 DEBUG nova.virt.libvirt.host [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.083 2 DEBUG nova.virt.libvirt.host [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.084 2 DEBUG nova.virt.libvirt.host [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.084 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.084 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.085 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.086 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.086 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.086 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.087 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.087 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.087 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.088 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.088 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.088 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.093 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.331 2 DEBUG nova.compute.manager [req-1ea95aa5-9c3d-4cc9-aeab-fe841a04816d req-aeb823b2-2bf5-4331-8a7e-26aabb410479 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Received event network-vif-deleted-1550cd45-1c1e-4505-8762-fb1668990b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.332 2 DEBUG nova.compute.manager [req-1ea95aa5-9c3d-4cc9-aeab-fe841a04816d req-aeb823b2-2bf5-4331-8a7e-26aabb410479 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-changed-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.332 2 DEBUG nova.compute.manager [req-1ea95aa5-9c3d-4cc9-aeab-fe841a04816d req-aeb823b2-2bf5-4331-8a7e-26aabb410479 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Refreshing instance network info cache due to event network-changed-deb48802-bfee-42af-882c-3632b1fbb2cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.332 2 DEBUG oslo_concurrency.lockutils [req-1ea95aa5-9c3d-4cc9-aeab-fe841a04816d req-aeb823b2-2bf5-4331-8a7e-26aabb410479 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-dd55716e-2330-42a4-8963-33bdc9c7bbf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.333 2 DEBUG oslo_concurrency.lockutils [req-1ea95aa5-9c3d-4cc9-aeab-fe841a04816d req-aeb823b2-2bf5-4331-8a7e-26aabb410479 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-dd55716e-2330-42a4-8963-33bdc9c7bbf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.333 2 DEBUG nova.network.neutron [req-1ea95aa5-9c3d-4cc9-aeab-fe841a04816d req-aeb823b2-2bf5-4331-8a7e-26aabb410479 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Refreshing network info cache for port deb48802-bfee-42af-882c-3632b1fbb2cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:06:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:06:27 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/230294850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.557 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.576 2 DEBUG nova.storage.rbd_utils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.579 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:27 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/230294850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:06:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 291 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.2 MiB/s wr, 222 op/s
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:06:27 compute-0 nova_compute[259627]: 2025-10-14 09:06:27.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.041 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.041 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:06:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:06:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/440266417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.106 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.108 2 DEBUG nova.virt.libvirt.vif [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-138998221',display_name='tempest-ServerActionsTestOtherA-server-138998221',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-138998221',id=60,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-kgnyrp0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA
-894139105-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:24Z,user_data=None,user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=dd55716e-2330-42a4-8963-33bdc9c7bbf8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.109 2 DEBUG nova.network.os_vif_util [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.111 2 DEBUG nova.network.os_vif_util [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3f:2b,bridge_name='br-int',has_traffic_filtering=True,id=deb48802-bfee-42af-882c-3632b1fbb2cf,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb48802-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.113 2 DEBUG nova.objects.instance [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'pci_devices' on Instance uuid dd55716e-2330-42a4-8963-33bdc9c7bbf8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.130 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:06:28 compute-0 nova_compute[259627]:   <uuid>dd55716e-2330-42a4-8963-33bdc9c7bbf8</uuid>
Oct 14 09:06:28 compute-0 nova_compute[259627]:   <name>instance-0000003c</name>
Oct 14 09:06:28 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:06:28 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:06:28 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerActionsTestOtherA-server-138998221</nova:name>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:06:27</nova:creationTime>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:06:28 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:06:28 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:06:28 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:06:28 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:06:28 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:06:28 compute-0 nova_compute[259627]:         <nova:user uuid="d952679a4e6a4fc6bacf42c02d3e92d0">tempest-ServerActionsTestOtherA-894139105-project-member</nova:user>
Oct 14 09:06:28 compute-0 nova_compute[259627]:         <nova:project uuid="4e47722c609640d3a70fee8dd6ff94cc">tempest-ServerActionsTestOtherA-894139105</nova:project>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:06:28 compute-0 nova_compute[259627]:         <nova:port uuid="deb48802-bfee-42af-882c-3632b1fbb2cf">
Oct 14 09:06:28 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:06:28 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:06:28 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <system>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <entry name="serial">dd55716e-2330-42a4-8963-33bdc9c7bbf8</entry>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <entry name="uuid">dd55716e-2330-42a4-8963-33bdc9c7bbf8</entry>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     </system>
Oct 14 09:06:28 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:06:28 compute-0 nova_compute[259627]:   <os>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:   </os>
Oct 14 09:06:28 compute-0 nova_compute[259627]:   <features>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:   </features>
Oct 14 09:06:28 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:06:28 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:06:28 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk">
Oct 14 09:06:28 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:06:28 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk.config">
Oct 14 09:06:28 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:06:28 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:e0:3f:2b"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <target dev="tapdeb48802-bf"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8/console.log" append="off"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <video>
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     </video>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:06:28 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:06:28 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:06:28 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:06:28 compute-0 nova_compute[259627]: </domain>
Oct 14 09:06:28 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.130 2 DEBUG nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Preparing to wait for external event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.131 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.131 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.132 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.133 2 DEBUG nova.virt.libvirt.vif [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-138998221',display_name='tempest-ServerActionsTestOtherA-server-138998221',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-138998221',id=60,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-kgnyrp0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA-894139105-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:24Z,user_data=None,user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=dd55716e-2330-42a4-8963-33bdc9c7bbf8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.133 2 DEBUG nova.network.os_vif_util [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.134 2 DEBUG nova.network.os_vif_util [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3f:2b,bridge_name='br-int',has_traffic_filtering=True,id=deb48802-bfee-42af-882c-3632b1fbb2cf,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb48802-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.134 2 DEBUG os_vif [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3f:2b,bridge_name='br-int',has_traffic_filtering=True,id=deb48802-bfee-42af-882c-3632b1fbb2cf,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb48802-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.136 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.137 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.140 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdeb48802-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.141 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdeb48802-bf, col_values=(('external_ids', {'iface-id': 'deb48802-bfee-42af-882c-3632b1fbb2cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:3f:2b', 'vm-uuid': 'dd55716e-2330-42a4-8963-33bdc9c7bbf8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:28 compute-0 NetworkManager[44885]: <info>  [1760432788.1448] manager: (tapdeb48802-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.154 2 INFO os_vif [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3f:2b,bridge_name='br-int',has_traffic_filtering=True,id=deb48802-bfee-42af-882c-3632b1fbb2cf,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb48802-bf')
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.183 2 DEBUG nova.compute.manager [req-f5ae087b-46ed-40f9-8a3b-fd04bc8beda6 req-82495ac5-90b8-4ea8-a72b-239232056c30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-plugged-350a3bec-5dbd-4a83-8d80-5796be0319fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.183 2 DEBUG oslo_concurrency.lockutils [req-f5ae087b-46ed-40f9-8a3b-fd04bc8beda6 req-82495ac5-90b8-4ea8-a72b-239232056c30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.183 2 DEBUG oslo_concurrency.lockutils [req-f5ae087b-46ed-40f9-8a3b-fd04bc8beda6 req-82495ac5-90b8-4ea8-a72b-239232056c30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.183 2 DEBUG oslo_concurrency.lockutils [req-f5ae087b-46ed-40f9-8a3b-fd04bc8beda6 req-82495ac5-90b8-4ea8-a72b-239232056c30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.184 2 DEBUG nova.compute.manager [req-f5ae087b-46ed-40f9-8a3b-fd04bc8beda6 req-82495ac5-90b8-4ea8-a72b-239232056c30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] No waiting events found dispatching network-vif-plugged-350a3bec-5dbd-4a83-8d80-5796be0319fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.184 2 WARNING nova.compute.manager [req-f5ae087b-46ed-40f9-8a3b-fd04bc8beda6 req-82495ac5-90b8-4ea8-a72b-239232056c30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received unexpected event network-vif-plugged-350a3bec-5dbd-4a83-8d80-5796be0319fd for instance with vm_state active and task_state None.
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.203 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.205 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.205 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No VIF found with MAC fa:16:3e:e0:3f:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.206 2 INFO nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Using config drive
Oct 14 09:06:28 compute-0 nova_compute[259627]: 2025-10-14 09:06:28.230 2 DEBUG nova.storage.rbd_utils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:28 compute-0 ceph-mon[74249]: pgmap v1526: 305 pgs: 305 active+clean; 291 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.2 MiB/s wr, 222 op/s
Oct 14 09:06:28 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/440266417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:29 compute-0 nova_compute[259627]: 2025-10-14 09:06:29.370 2 INFO nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Creating config drive at /var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8/disk.config
Oct 14 09:06:29 compute-0 nova_compute[259627]: 2025-10-14 09:06:29.374 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpizfx1fli execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:29 compute-0 nova_compute[259627]: 2025-10-14 09:06:29.506 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpizfx1fli" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:29 compute-0 nova_compute[259627]: 2025-10-14 09:06:29.532 2 DEBUG nova.storage.rbd_utils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:29 compute-0 nova_compute[259627]: 2025-10-14 09:06:29.535 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8/disk.config dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:29 compute-0 nova_compute[259627]: 2025-10-14 09:06:29.690 2 DEBUG nova.network.neutron [req-1ea95aa5-9c3d-4cc9-aeab-fe841a04816d req-aeb823b2-2bf5-4331-8a7e-26aabb410479 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Updated VIF entry in instance network info cache for port deb48802-bfee-42af-882c-3632b1fbb2cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:06:29 compute-0 nova_compute[259627]: 2025-10-14 09:06:29.692 2 DEBUG nova.network.neutron [req-1ea95aa5-9c3d-4cc9-aeab-fe841a04816d req-aeb823b2-2bf5-4331-8a7e-26aabb410479 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Updating instance_info_cache with network_info: [{"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:29 compute-0 nova_compute[259627]: 2025-10-14 09:06:29.728 2 DEBUG oslo_concurrency.lockutils [req-1ea95aa5-9c3d-4cc9-aeab-fe841a04816d req-aeb823b2-2bf5-4331-8a7e-26aabb410479 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-dd55716e-2330-42a4-8963-33bdc9c7bbf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 291 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.2 MiB/s wr, 222 op/s
Oct 14 09:06:29 compute-0 nova_compute[259627]: 2025-10-14 09:06:29.758 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8/disk.config dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:29 compute-0 nova_compute[259627]: 2025-10-14 09:06:29.759 2 INFO nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Deleting local config drive /var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8/disk.config because it was imported into RBD.
Oct 14 09:06:29 compute-0 NetworkManager[44885]: <info>  [1760432789.8202] manager: (tapdeb48802-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/259)
Oct 14 09:06:29 compute-0 kernel: tapdeb48802-bf: entered promiscuous mode
Oct 14 09:06:29 compute-0 ovn_controller[152662]: 2025-10-14T09:06:29Z|00594|binding|INFO|Claiming lport deb48802-bfee-42af-882c-3632b1fbb2cf for this chassis.
Oct 14 09:06:29 compute-0 ovn_controller[152662]: 2025-10-14T09:06:29Z|00595|binding|INFO|deb48802-bfee-42af-882c-3632b1fbb2cf: Claiming fa:16:3e:e0:3f:2b 10.100.0.9
Oct 14 09:06:29 compute-0 nova_compute[259627]: 2025-10-14 09:06:29.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.834 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:3f:2b 10.100.0.9'], port_security=['fa:16:3e:e0:3f:2b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dd55716e-2330-42a4-8963-33bdc9c7bbf8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1858677-a8a5-4b0c-b70b-3875847a67c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=deb48802-bfee-42af-882c-3632b1fbb2cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:06:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.835 162547 INFO neutron.agent.ovn.metadata.agent [-] Port deb48802-bfee-42af-882c-3632b1fbb2cf in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 bound to our chassis
Oct 14 09:06:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.839 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3b87118-f516-4f2d-8696-aa7290af9d83
Oct 14 09:06:29 compute-0 systemd-udevd[324092]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:06:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.868 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6b154422-7708-47c6-9c0c-66c750ee150a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:29 compute-0 ovn_controller[152662]: 2025-10-14T09:06:29Z|00596|binding|INFO|Setting lport deb48802-bfee-42af-882c-3632b1fbb2cf up in Southbound
Oct 14 09:06:29 compute-0 ovn_controller[152662]: 2025-10-14T09:06:29Z|00597|binding|INFO|Setting lport deb48802-bfee-42af-882c-3632b1fbb2cf ovn-installed in OVS
Oct 14 09:06:29 compute-0 nova_compute[259627]: 2025-10-14 09:06:29.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:29 compute-0 nova_compute[259627]: 2025-10-14 09:06:29.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:29 compute-0 nova_compute[259627]: 2025-10-14 09:06:29.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:29 compute-0 systemd-machined[214636]: New machine qemu-75-instance-0000003c.
Oct 14 09:06:29 compute-0 NetworkManager[44885]: <info>  [1760432789.8819] device (tapdeb48802-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:06:29 compute-0 NetworkManager[44885]: <info>  [1760432789.8830] device (tapdeb48802-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:06:29 compute-0 systemd[1]: Started Virtual Machine qemu-75-instance-0000003c.
Oct 14 09:06:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.902 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a23b2566-d9c7-45e8-9a71-ca5c19749bf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.906 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aadac84c-cbe6-4af6-8515-21b419de18c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.936 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[899c5ce2-ae09-4ebf-bd65-573abb45fea2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.955 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eb94a0f9-3857-46f8-92eb-868842c8861a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 35719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324104, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.974 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a30fb6ba-42a8-4be2-ba28-e0d0e3c493f2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647552, 'tstamp': 647552}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324106, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647555, 'tstamp': 647555}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324106, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.976 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:29 compute-0 nova_compute[259627]: 2025-10-14 09:06:29.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:29 compute-0 nova_compute[259627]: 2025-10-14 09:06:29.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.979 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3b87118-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.979 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.979 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3b87118-f0, col_values=(('external_ids', {'iface-id': 'a7f44223-dee5-4a2f-b975-1f04f03b78f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.979 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:30 compute-0 nova_compute[259627]: 2025-10-14 09:06:30.049 2 DEBUG nova.compute.manager [req-667fd1d2-47be-4f15-a16f-2c59b7f20a8f req-247ac8a2-59af-4c9c-8cb2-1b09dbf6b3d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:30 compute-0 nova_compute[259627]: 2025-10-14 09:06:30.050 2 DEBUG nova.compute.manager [req-667fd1d2-47be-4f15-a16f-2c59b7f20a8f req-247ac8a2-59af-4c9c-8cb2-1b09dbf6b3d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing instance network info cache due to event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:06:30 compute-0 nova_compute[259627]: 2025-10-14 09:06:30.050 2 DEBUG oslo_concurrency.lockutils [req-667fd1d2-47be-4f15-a16f-2c59b7f20a8f req-247ac8a2-59af-4c9c-8cb2-1b09dbf6b3d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:30 compute-0 nova_compute[259627]: 2025-10-14 09:06:30.051 2 DEBUG oslo_concurrency.lockutils [req-667fd1d2-47be-4f15-a16f-2c59b7f20a8f req-247ac8a2-59af-4c9c-8cb2-1b09dbf6b3d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:30 compute-0 nova_compute[259627]: 2025-10-14 09:06:30.051 2 DEBUG nova.network.neutron [req-667fd1d2-47be-4f15-a16f-2c59b7f20a8f req-247ac8a2-59af-4c9c-8cb2-1b09dbf6b3d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:06:30 compute-0 ceph-mon[74249]: pgmap v1527: 305 pgs: 305 active+clean; 291 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.2 MiB/s wr, 222 op/s
Oct 14 09:06:30 compute-0 nova_compute[259627]: 2025-10-14 09:06:30.961 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432790.9609225, dd55716e-2330-42a4-8963-33bdc9c7bbf8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:30 compute-0 nova_compute[259627]: 2025-10-14 09:06:30.962 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] VM Started (Lifecycle Event)
Oct 14 09:06:30 compute-0 nova_compute[259627]: 2025-10-14 09:06:30.985 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:30 compute-0 nova_compute[259627]: 2025-10-14 09:06:30.990 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432790.961066, dd55716e-2330-42a4-8963-33bdc9c7bbf8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:30 compute-0 nova_compute[259627]: 2025-10-14 09:06:30.990 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] VM Paused (Lifecycle Event)
Oct 14 09:06:31 compute-0 nova_compute[259627]: 2025-10-14 09:06:31.008 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:31 compute-0 nova_compute[259627]: 2025-10-14 09:06:31.012 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:06:31 compute-0 nova_compute[259627]: 2025-10-14 09:06:31.035 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:06:31 compute-0 nova_compute[259627]: 2025-10-14 09:06:31.322 2 DEBUG nova.network.neutron [req-667fd1d2-47be-4f15-a16f-2c59b7f20a8f req-247ac8a2-59af-4c9c-8cb2-1b09dbf6b3d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updated VIF entry in instance network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:06:31 compute-0 nova_compute[259627]: 2025-10-14 09:06:31.323 2 DEBUG nova.network.neutron [req-667fd1d2-47be-4f15-a16f-2c59b7f20a8f req-247ac8a2-59af-4c9c-8cb2-1b09dbf6b3d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:31 compute-0 nova_compute[259627]: 2025-10-14 09:06:31.348 2 DEBUG oslo_concurrency.lockutils [req-667fd1d2-47be-4f15-a16f-2c59b7f20a8f req-247ac8a2-59af-4c9c-8cb2-1b09dbf6b3d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 293 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.3 MiB/s wr, 299 op/s
Oct 14 09:06:31 compute-0 nova_compute[259627]: 2025-10-14 09:06:31.924 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "73d6be04-84dc-4b80-81f8-a9bbf9938051" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:31 compute-0 nova_compute[259627]: 2025-10-14 09:06:31.925 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:31 compute-0 nova_compute[259627]: 2025-10-14 09:06:31.946 2 DEBUG nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:06:31 compute-0 nova_compute[259627]: 2025-10-14 09:06:31.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.028 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.028 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.037 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.038 2 INFO nova.compute.claims [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.175 2 DEBUG nova.compute.manager [req-81a023e9-249f-40d2-9dce-9dac0ed1fb01 req-9b61009e-7ce2-4571-9bb6-ca80a54174ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-changed-350a3bec-5dbd-4a83-8d80-5796be0319fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.176 2 DEBUG nova.compute.manager [req-81a023e9-249f-40d2-9dce-9dac0ed1fb01 req-9b61009e-7ce2-4571-9bb6-ca80a54174ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing instance network info cache due to event network-changed-350a3bec-5dbd-4a83-8d80-5796be0319fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.176 2 DEBUG oslo_concurrency.lockutils [req-81a023e9-249f-40d2-9dce-9dac0ed1fb01 req-9b61009e-7ce2-4571-9bb6-ca80a54174ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.177 2 DEBUG oslo_concurrency.lockutils [req-81a023e9-249f-40d2-9dce-9dac0ed1fb01 req-9b61009e-7ce2-4571-9bb6-ca80a54174ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.177 2 DEBUG nova.network.neutron [req-81a023e9-249f-40d2-9dce-9dac0ed1fb01 req-9b61009e-7ce2-4571-9bb6-ca80a54174ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing network info cache for port 350a3bec-5dbd-4a83-8d80-5796be0319fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.240 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:06:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:06:32 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3670485446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.699 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.706 2 DEBUG nova.compute.provider_tree [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:06:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:06:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.728 2 DEBUG nova.scheduler.client.report [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:06:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:06:32
Oct 14 09:06:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:06:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:06:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'volumes', 'backups', 'images', '.mgr', 'cephfs.cephfs.meta']
Oct 14 09:06:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:06:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:06:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:06:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:06:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.766 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.767 2 DEBUG nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.811 2 DEBUG nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.811 2 DEBUG nova.network.neutron [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:06:32 compute-0 ceph-mon[74249]: pgmap v1528: 305 pgs: 305 active+clean; 293 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.3 MiB/s wr, 299 op/s
Oct 14 09:06:32 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3670485446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.829 2 INFO nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.849 2 DEBUG nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.949 2 DEBUG nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.950 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.951 2 INFO nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Creating image(s)
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.969 2 DEBUG nova.storage.rbd_utils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:32 compute-0 nova_compute[259627]: 2025-10-14 09:06:32.991 2 DEBUG nova.storage.rbd_utils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.011 2 DEBUG nova.storage.rbd_utils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.015 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:06:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:06:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:06:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:06:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:06:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:06:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:06:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:06:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:06:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.102 2 DEBUG nova.policy [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a72439ec330b476ca4bb358682159b61', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd39581efff7d48fb83412ca1f615d412', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.111 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.112 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.112 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.112 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.130 2 DEBUG nova.storage.rbd_utils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.133 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.386 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.444 2 DEBUG nova.storage.rbd_utils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] resizing rbd image 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.475 2 DEBUG oslo_concurrency.lockutils [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.476 2 DEBUG oslo_concurrency.lockutils [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.476 2 DEBUG nova.objects.instance [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.556 2 DEBUG nova.objects.instance [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'migration_context' on Instance uuid 73d6be04-84dc-4b80-81f8-a9bbf9938051 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.572 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.572 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Ensure instance console log exists: /var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.573 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.573 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.573 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 293 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 269 op/s
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.956 2 DEBUG nova.network.neutron [req-81a023e9-249f-40d2-9dce-9dac0ed1fb01 req-9b61009e-7ce2-4571-9bb6-ca80a54174ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updated VIF entry in instance network info cache for port 350a3bec-5dbd-4a83-8d80-5796be0319fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.957 2 DEBUG nova.network.neutron [req-81a023e9-249f-40d2-9dce-9dac0ed1fb01 req-9b61009e-7ce2-4571-9bb6-ca80a54174ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updating instance_info_cache with network_info: [{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.961 2 DEBUG nova.network.neutron [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Successfully created port: 26dd6404-2018-471b-a387-2b045d236164 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:06:33 compute-0 nova_compute[259627]: 2025-10-14 09:06:33.983 2 DEBUG oslo_concurrency.lockutils [req-81a023e9-249f-40d2-9dce-9dac0ed1fb01 req-9b61009e-7ce2-4571-9bb6-ca80a54174ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:34 compute-0 nova_compute[259627]: 2025-10-14 09:06:34.072 2 DEBUG nova.objects.instance [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_requests' on Instance uuid a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:34 compute-0 nova_compute[259627]: 2025-10-14 09:06:34.086 2 DEBUG nova.network.neutron [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:06:34 compute-0 ceph-mon[74249]: pgmap v1529: 305 pgs: 305 active+clean; 293 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 269 op/s
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.069 2 DEBUG nova.policy [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.370 2 DEBUG nova.network.neutron [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Successfully updated port: 26dd6404-2018-471b-a387-2b045d236164 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.385 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "refresh_cache-73d6be04-84dc-4b80-81f8-a9bbf9938051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.385 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquired lock "refresh_cache-73d6be04-84dc-4b80-81f8-a9bbf9938051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.386 2 DEBUG nova.network.neutron [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.408 2 DEBUG nova.compute.manager [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.409 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.409 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.410 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.410 2 DEBUG nova.compute.manager [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Processing event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.411 2 DEBUG nova.compute.manager [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-changed-350a3bec-5dbd-4a83-8d80-5796be0319fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.412 2 DEBUG nova.compute.manager [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing instance network info cache due to event network-changed-350a3bec-5dbd-4a83-8d80-5796be0319fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.412 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.413 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.413 2 DEBUG nova.network.neutron [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing network info cache for port 350a3bec-5dbd-4a83-8d80-5796be0319fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.416 2 DEBUG nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.422 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432795.421274, dd55716e-2330-42a4-8963-33bdc9c7bbf8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.422 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] VM Resumed (Lifecycle Event)
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.424 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.431 2 INFO nova.virt.libvirt.driver [-] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Instance spawned successfully.
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.431 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.449 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.471 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.475 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.476 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.477 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.477 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.478 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.478 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.527 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.568 2 INFO nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Took 11.37 seconds to spawn the instance on the hypervisor.
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.569 2 DEBUG nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.690 2 INFO nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Took 12.58 seconds to build instance.
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.705 2 DEBUG nova.network.neutron [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.715 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 339 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 301 op/s
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.970 2 DEBUG nova.network.neutron [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Successfully updated port: df1ec4d8-f543-4899-9d98-b60a6a46cc7c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:06:35 compute-0 nova_compute[259627]: 2025-10-14 09:06:35.999 2 DEBUG oslo_concurrency.lockutils [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.000 2 DEBUG oslo_concurrency.lockutils [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.000 2 DEBUG nova.network.neutron [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.108 2 DEBUG nova.compute.manager [req-d47760ce-3470-490d-9a3f-3985bd84be00 req-079a45d1-3533-480f-97f0-fa9a542ef5cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Received event network-changed-26dd6404-2018-471b-a387-2b045d236164 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.108 2 DEBUG nova.compute.manager [req-d47760ce-3470-490d-9a3f-3985bd84be00 req-079a45d1-3533-480f-97f0-fa9a542ef5cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Refreshing instance network info cache due to event network-changed-26dd6404-2018-471b-a387-2b045d236164. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.109 2 DEBUG oslo_concurrency.lockutils [req-d47760ce-3470-490d-9a3f-3985bd84be00 req-079a45d1-3533-480f-97f0-fa9a542ef5cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-73d6be04-84dc-4b80-81f8-a9bbf9938051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.177 2 WARNING nova.network.neutron [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] fc2d149f-aebf-406a-aed2-5161dd22b079 already exists in list: networks containing: ['fc2d149f-aebf-406a-aed2-5161dd22b079']. ignoring it
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.628 2 DEBUG nova.network.neutron [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Updating instance_info_cache with network_info: [{"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.635 2 DEBUG nova.network.neutron [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updated VIF entry in instance network info cache for port 350a3bec-5dbd-4a83-8d80-5796be0319fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.635 2 DEBUG nova.network.neutron [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updating instance_info_cache with network_info: [{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.651 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.651 2 DEBUG nova.compute.manager [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.652 2 DEBUG nova.compute.manager [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing instance network info cache due to event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.652 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.653 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Releasing lock "refresh_cache-73d6be04-84dc-4b80-81f8-a9bbf9938051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.654 2 DEBUG nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Instance network_info: |[{"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.654 2 DEBUG oslo_concurrency.lockutils [req-d47760ce-3470-490d-9a3f-3985bd84be00 req-079a45d1-3533-480f-97f0-fa9a542ef5cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-73d6be04-84dc-4b80-81f8-a9bbf9938051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.655 2 DEBUG nova.network.neutron [req-d47760ce-3470-490d-9a3f-3985bd84be00 req-079a45d1-3533-480f-97f0-fa9a542ef5cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Refreshing network info cache for port 26dd6404-2018-471b-a387-2b045d236164 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.658 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Start _get_guest_xml network_info=[{"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.662 2 WARNING nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.666 2 DEBUG nova.virt.libvirt.host [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.667 2 DEBUG nova.virt.libvirt.host [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.673 2 DEBUG nova.virt.libvirt.host [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.674 2 DEBUG nova.virt.libvirt.host [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.674 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.675 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.675 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.676 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.676 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.676 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.677 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.677 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.677 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.677 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.678 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.678 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:06:36 compute-0 nova_compute[259627]: 2025-10-14 09:06:36.680 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:36 compute-0 ceph-mon[74249]: pgmap v1530: 305 pgs: 305 active+clean; 339 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 301 op/s
Oct 14 09:06:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:06:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/302745539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.163 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.184 2 DEBUG nova.storage.rbd_utils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.188 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:06:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:06:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/80336960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.723 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.725 2 DEBUG nova.virt.libvirt.vif [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-715549256',display_name='tempest-DeleteServersTestJSON-server-715549256',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-715549256',id=62,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-vdf1bahn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:32Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=73d6be04-84dc-4b80-81f8-a9bbf9938051,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.726 2 DEBUG nova.network.os_vif_util [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.727 2 DEBUG nova.network.os_vif_util [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:a0:6d,bridge_name='br-int',has_traffic_filtering=True,id=26dd6404-2018-471b-a387-2b045d236164,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26dd6404-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.729 2 DEBUG nova.objects.instance [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'pci_devices' on Instance uuid 73d6be04-84dc-4b80-81f8-a9bbf9938051 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 339 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.744 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:06:37 compute-0 nova_compute[259627]:   <uuid>73d6be04-84dc-4b80-81f8-a9bbf9938051</uuid>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   <name>instance-0000003e</name>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <nova:name>tempest-DeleteServersTestJSON-server-715549256</nova:name>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:06:36</nova:creationTime>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:06:37 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:06:37 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:06:37 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:06:37 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:06:37 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:06:37 compute-0 nova_compute[259627]:         <nova:user uuid="a72439ec330b476ca4bb358682159b61">tempest-DeleteServersTestJSON-555285866-project-member</nova:user>
Oct 14 09:06:37 compute-0 nova_compute[259627]:         <nova:project uuid="d39581efff7d48fb83412ca1f615d412">tempest-DeleteServersTestJSON-555285866</nova:project>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:06:37 compute-0 nova_compute[259627]:         <nova:port uuid="26dd6404-2018-471b-a387-2b045d236164">
Oct 14 09:06:37 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <system>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <entry name="serial">73d6be04-84dc-4b80-81f8-a9bbf9938051</entry>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <entry name="uuid">73d6be04-84dc-4b80-81f8-a9bbf9938051</entry>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     </system>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   <os>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   </os>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   <features>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   </features>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/73d6be04-84dc-4b80-81f8-a9bbf9938051_disk">
Oct 14 09:06:37 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:06:37 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/73d6be04-84dc-4b80-81f8-a9bbf9938051_disk.config">
Oct 14 09:06:37 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:06:37 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:95:a0:6d"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <target dev="tap26dd6404-20"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051/console.log" append="off"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <video>
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     </video>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:06:37 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:06:37 compute-0 nova_compute[259627]: </domain>
Oct 14 09:06:37 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.746 2 DEBUG nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Preparing to wait for external event network-vif-plugged-26dd6404-2018-471b-a387-2b045d236164 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.747 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.747 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.748 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.750 2 DEBUG nova.virt.libvirt.vif [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-715549256',display_name='tempest-DeleteServersTestJSON-server-715549256',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-715549256',id=62,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-vdf1bahn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON
-555285866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:32Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=73d6be04-84dc-4b80-81f8-a9bbf9938051,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.750 2 DEBUG nova.network.os_vif_util [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.752 2 DEBUG nova.network.os_vif_util [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:a0:6d,bridge_name='br-int',has_traffic_filtering=True,id=26dd6404-2018-471b-a387-2b045d236164,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26dd6404-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.752 2 DEBUG os_vif [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:a0:6d,bridge_name='br-int',has_traffic_filtering=True,id=26dd6404-2018-471b-a387-2b045d236164,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26dd6404-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.754 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.755 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.763 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26dd6404-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.763 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap26dd6404-20, col_values=(('external_ids', {'iface-id': '26dd6404-2018-471b-a387-2b045d236164', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:a0:6d', 'vm-uuid': '73d6be04-84dc-4b80-81f8-a9bbf9938051'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:37 compute-0 NetworkManager[44885]: <info>  [1760432797.7655] manager: (tap26dd6404-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/260)
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.772 2 INFO os_vif [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:a0:6d,bridge_name='br-int',has_traffic_filtering=True,id=26dd6404-2018-471b-a387-2b045d236164,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26dd6404-20')
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.815 2 DEBUG nova.network.neutron [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.819 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.820 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.820 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No VIF found with MAC fa:16:3e:95:a0:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.820 2 INFO nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Using config drive
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.838 2 DEBUG nova.storage.rbd_utils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:37 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/302745539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:37 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/80336960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.844 2 DEBUG oslo_concurrency.lockutils [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.844 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.845 2 DEBUG nova.network.neutron [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.849 2 DEBUG nova.virt.libvirt.vif [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1758133214',display_name='tempest-tempest.common.compute-instance-1758133214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1758133214',id=57,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-6x4gt3p2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.850 2 DEBUG nova.network.os_vif_util [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.850 2 DEBUG nova.network.os_vif_util [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.851 2 DEBUG os_vif [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.852 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.853 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.868 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf1ec4d8-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.870 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf1ec4d8-f5, col_values=(('external_ids', {'iface-id': 'df1ec4d8-f543-4899-9d98-b60a6a46cc7c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:db:60', 'vm-uuid': 'a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:37 compute-0 NetworkManager[44885]: <info>  [1760432797.9247] manager: (tapdf1ec4d8-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.941 2 DEBUG nova.compute.manager [req-8916f69c-4dea-474f-a0c6-7140f1c881e8 req-c7ddecf3-e5dd-44ba-9938-a2c44ca1e366 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-changed-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.941 2 DEBUG nova.compute.manager [req-8916f69c-4dea-474f-a0c6-7140f1c881e8 req-c7ddecf3-e5dd-44ba-9938-a2c44ca1e366 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing instance network info cache due to event network-changed-df1ec4d8-f543-4899-9d98-b60a6a46cc7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.941 2 DEBUG oslo_concurrency.lockutils [req-8916f69c-4dea-474f-a0c6-7140f1c881e8 req-c7ddecf3-e5dd-44ba-9938-a2c44ca1e366 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.943 2 INFO os_vif [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5')
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.944 2 DEBUG nova.virt.libvirt.vif [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1758133214',display_name='tempest-tempest.common.compute-instance-1758133214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1758133214',id=57,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-6x4gt3p2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.945 2 DEBUG nova.network.os_vif_util [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.945 2 DEBUG nova.network.os_vif_util [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.949 2 DEBUG nova.virt.libvirt.guest [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] attach device xml: <interface type="ethernet">
Oct 14 09:06:37 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:38:db:60"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:06:37 compute-0 kernel: tapdf1ec4d8-f5: entered promiscuous mode
Oct 14 09:06:37 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]:   <target dev="tapdf1ec4d8-f5"/>
Oct 14 09:06:37 compute-0 nova_compute[259627]: </interface>
Oct 14 09:06:37 compute-0 nova_compute[259627]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 14 09:06:37 compute-0 NetworkManager[44885]: <info>  [1760432797.9637] manager: (tapdf1ec4d8-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Oct 14 09:06:37 compute-0 nova_compute[259627]: 2025-10-14 09:06:37.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:37 compute-0 ovn_controller[152662]: 2025-10-14T09:06:37Z|00598|binding|INFO|Claiming lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c for this chassis.
Oct 14 09:06:37 compute-0 ovn_controller[152662]: 2025-10-14T09:06:37Z|00599|binding|INFO|df1ec4d8-f543-4899-9d98-b60a6a46cc7c: Claiming fa:16:3e:38:db:60 10.100.0.12
Oct 14 09:06:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:37.981 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:db:60 10.100.0.12'], port_security=['fa:16:3e:38:db:60 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-278583686', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-278583686', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=df1ec4d8-f543-4899-9d98-b60a6a46cc7c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:06:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:37.985 162547 INFO neutron.agent.ovn.metadata.agent [-] Port df1ec4d8-f543-4899-9d98-b60a6a46cc7c in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 bound to our chassis
Oct 14 09:06:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:37.987 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:06:37 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.013 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a8cd74-a8a2-4a16-8337-49ba6e3f3820]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:38 compute-0 ovn_controller[152662]: 2025-10-14T09:06:38Z|00600|binding|INFO|Setting lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c ovn-installed in OVS
Oct 14 09:06:38 compute-0 ovn_controller[152662]: 2025-10-14T09:06:38Z|00601|binding|INFO|Setting lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c up in Southbound
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:38 compute-0 systemd-udevd[324432]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:38 compute-0 NetworkManager[44885]: <info>  [1760432798.0514] device (tapdf1ec4d8-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:06:38 compute-0 NetworkManager[44885]: <info>  [1760432798.0526] device (tapdf1ec4d8-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.067 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5439e85b-90ea-4e69-8628-9ddc1f8336d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.074 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[58d03d06-0dd6-4bd5-a0a8-26f9f7308957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.094 2 DEBUG nova.virt.libvirt.driver [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.095 2 DEBUG nova.virt.libvirt.driver [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.095 2 DEBUG nova.virt.libvirt.driver [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:b4:40:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.095 2 DEBUG nova.virt.libvirt.driver [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:38:db:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.118 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f099b6fa-5cb7-42bd-8197-79369eac313c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.139 2 DEBUG nova.virt.libvirt.guest [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:06:38 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:06:38 compute-0 nova_compute[259627]:   <nova:name>tempest-tempest.common.compute-instance-1758133214</nova:name>
Oct 14 09:06:38 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:06:38</nova:creationTime>
Oct 14 09:06:38 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:06:38 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:06:38 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:06:38 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:06:38 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:06:38 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:06:38 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:06:38 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:06:38 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:06:38 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:06:38 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:06:38 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:06:38 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:06:38 compute-0 nova_compute[259627]:     <nova:port uuid="dffa5a1f-657b-498e-bbe5-6540fead7fb6">
Oct 14 09:06:38 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:06:38 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:06:38 compute-0 nova_compute[259627]:     <nova:port uuid="df1ec4d8-f543-4899-9d98-b60a6a46cc7c">
Oct 14 09:06:38 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 09:06:38 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:06:38 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:06:38 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:06:38 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.156 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[039723d2-1aef-475f-b20c-2b66a2f000a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654854, 'reachable_time': 41314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324441, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.172 2 DEBUG oslo_concurrency.lockutils [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.183 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2be36841-9e71-4797-8539-d7357b15502b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654865, 'tstamp': 654865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324442, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654867, 'tstamp': 654867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324442, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.189 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.196 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.197 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.197 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.197 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.290 2 INFO nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Creating config drive at /var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051/disk.config
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.300 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzbfwhdte execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.452 2 DEBUG nova.network.neutron [req-d47760ce-3470-490d-9a3f-3985bd84be00 req-079a45d1-3533-480f-97f0-fa9a542ef5cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Updated VIF entry in instance network info cache for port 26dd6404-2018-471b-a387-2b045d236164. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.453 2 DEBUG nova.network.neutron [req-d47760ce-3470-490d-9a3f-3985bd84be00 req-079a45d1-3533-480f-97f0-fa9a542ef5cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Updating instance_info_cache with network_info: [{"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.463 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzbfwhdte" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.504 2 DEBUG nova.storage.rbd_utils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.510 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051/disk.config 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.579 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432783.5275261, 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.580 2 INFO nova.compute.manager [-] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] VM Stopped (Lifecycle Event)
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.583 2 DEBUG oslo_concurrency.lockutils [req-d47760ce-3470-490d-9a3f-3985bd84be00 req-079a45d1-3533-480f-97f0-fa9a542ef5cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-73d6be04-84dc-4b80-81f8-a9bbf9938051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.603 2 DEBUG nova.compute.manager [None req-58735ac5-659f-4bd0-84bd-3d42920154d2 - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:38 compute-0 podman[324466]: 2025-10-14 09:06:38.669888145 +0000 UTC m=+0.074126006 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 14 09:06:38 compute-0 podman[324467]: 2025-10-14 09:06:38.682291691 +0000 UTC m=+0.069082492 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0)
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.733 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051/disk.config 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.734 2 INFO nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Deleting local config drive /var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051/disk.config because it was imported into RBD.
Oct 14 09:06:38 compute-0 kernel: tap26dd6404-20: entered promiscuous mode
Oct 14 09:06:38 compute-0 NetworkManager[44885]: <info>  [1760432798.7928] manager: (tap26dd6404-20): new Tun device (/org/freedesktop/NetworkManager/Devices/263)
Oct 14 09:06:38 compute-0 systemd-udevd[324436]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:38 compute-0 ovn_controller[152662]: 2025-10-14T09:06:38Z|00602|binding|INFO|Claiming lport 26dd6404-2018-471b-a387-2b045d236164 for this chassis.
Oct 14 09:06:38 compute-0 ovn_controller[152662]: 2025-10-14T09:06:38Z|00603|binding|INFO|26dd6404-2018-471b-a387-2b045d236164: Claiming fa:16:3e:95:a0:6d 10.100.0.12
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.802 2 DEBUG nova.compute.manager [req-05afcb3c-8612-4b17-8d60-5a7575edb2d5 req-75197282-af0e-4d08-91e6-296b5b1deac6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.803 2 DEBUG oslo_concurrency.lockutils [req-05afcb3c-8612-4b17-8d60-5a7575edb2d5 req-75197282-af0e-4d08-91e6-296b5b1deac6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.803 2 DEBUG oslo_concurrency.lockutils [req-05afcb3c-8612-4b17-8d60-5a7575edb2d5 req-75197282-af0e-4d08-91e6-296b5b1deac6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.803 2 DEBUG oslo_concurrency.lockutils [req-05afcb3c-8612-4b17-8d60-5a7575edb2d5 req-75197282-af0e-4d08-91e6-296b5b1deac6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.803 2 DEBUG nova.compute.manager [req-05afcb3c-8612-4b17-8d60-5a7575edb2d5 req-75197282-af0e-4d08-91e6-296b5b1deac6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] No waiting events found dispatching network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.803 2 WARNING nova.compute.manager [req-05afcb3c-8612-4b17-8d60-5a7575edb2d5 req-75197282-af0e-4d08-91e6-296b5b1deac6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received unexpected event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c for instance with vm_state active and task_state None.
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.808 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:a0:6d 10.100.0.12'], port_security=['fa:16:3e:95:a0:6d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '73d6be04-84dc-4b80-81f8-a9bbf9938051', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd39581efff7d48fb83412ca1f615d412', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd5e08c6-d8c1-45fb-85db-6ce5c46fea39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c165cf5-3bad-4110-8321-7f7bda9723ad, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=26dd6404-2018-471b-a387-2b045d236164) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.809 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 26dd6404-2018-471b-a387-2b045d236164 in datapath 0a07d59e-be8b-4d41-a103-fb5a64bf6f88 bound to our chassis
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.810 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 09:06:38 compute-0 NetworkManager[44885]: <info>  [1760432798.8134] device (tap26dd6404-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:06:38 compute-0 NetworkManager[44885]: <info>  [1760432798.8154] device (tap26dd6404-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:06:38 compute-0 ovn_controller[152662]: 2025-10-14T09:06:38Z|00604|binding|INFO|Setting lport 26dd6404-2018-471b-a387-2b045d236164 ovn-installed in OVS
Oct 14 09:06:38 compute-0 ovn_controller[152662]: 2025-10-14T09:06:38Z|00605|binding|INFO|Setting lport 26dd6404-2018-471b-a387-2b045d236164 up in Southbound
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:38 compute-0 nova_compute[259627]: 2025-10-14 09:06:38.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.822 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[68b12a3a-99e7-4fa2-bfa4-4bed2c19b994]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.829 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a07d59e-b1 in ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.835 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a07d59e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.835 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9773d949-5c64-43c9-8dd0-b8d817c42a4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:38 compute-0 systemd-machined[214636]: New machine qemu-76-instance-0000003e.
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.844 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f351b6ed-36e0-43d7-abed-8787a619dffb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:38 compute-0 ceph-mon[74249]: pgmap v1531: 305 pgs: 305 active+clean; 339 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.858 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[618d36e3-75bb-4ed3-aa08-53c4a9be5021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:38 compute-0 systemd[1]: Started Virtual Machine qemu-76-instance-0000003e.
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.882 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[05aa377c-da5f-4944-9a8b-44cacc5a80a1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.911 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[323b0c7d-bc87-440b-867a-282473219ece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.917 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd4dde8-c93f-4f85-a8b8-a25398b1a4bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:38 compute-0 NetworkManager[44885]: <info>  [1760432798.9193] manager: (tap0a07d59e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/264)
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.962 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bffeab60-ea6a-480a-bd05-07b9f32e072e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.965 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3978780b-1b50-4c72-9796-f2818b50a9a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:38 compute-0 NetworkManager[44885]: <info>  [1760432798.9913] device (tap0a07d59e-b0): carrier: link connected
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.003 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ba71907c-c197-4b37-b215-8f06b68b8cf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.022 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[627a1e21-f952-450b-8e9f-26d896d03d02]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a07d59e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2e:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657775, 'reachable_time': 18074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324565, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.053 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1627ee3b-ea20-49d2-95d2-2e463f1d529b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:2e1a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 657775, 'tstamp': 657775}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324566, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.076 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[12221c71-9daf-4522-8ba4-f6621128e9a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a07d59e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2e:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657775, 'reachable_time': 18074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324567, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:39 compute-0 ovn_controller[152662]: 2025-10-14T09:06:39Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:3f:ae 10.100.0.6
Oct 14 09:06:39 compute-0 ovn_controller[152662]: 2025-10-14T09:06:39Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:3f:ae 10.100.0.6
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.134 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3e581e01-437e-4c7c-8cab-7bd177e85c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.246 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e1352d88-6aa1-4511-a46c-250e929c8a8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.248 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a07d59e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.248 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.248 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a07d59e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:39 compute-0 NetworkManager[44885]: <info>  [1760432799.2518] manager: (tap0a07d59e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Oct 14 09:06:39 compute-0 kernel: tap0a07d59e-b0: entered promiscuous mode
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.260 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a07d59e-b0, col_values=(('external_ids', {'iface-id': '31ed66d8-7c3d-4486-83f3-5ccb9a199aa1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:39 compute-0 ovn_controller[152662]: 2025-10-14T09:06:39Z|00606|binding|INFO|Releasing lport 31ed66d8-7c3d-4486-83f3-5ccb9a199aa1 from this chassis (sb_readonly=0)
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.293 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.294 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8397ca08-0357-49a4-a8e6-16fbc0414dde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.295 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.296 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'env', 'PROCESS_TAG=haproxy-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.308 2 DEBUG oslo_concurrency.lockutils [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.308 2 DEBUG oslo_concurrency.lockutils [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.328 2 DEBUG nova.objects.instance [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.352 2 DEBUG nova.virt.libvirt.vif [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1758133214',display_name='tempest-tempest.common.compute-instance-1758133214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1758133214',id=57,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-6x4gt3p2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.352 2 DEBUG nova.network.os_vif_util [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.361 2 DEBUG nova.network.os_vif_util [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.366 2 DEBUG nova.virt.libvirt.guest [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.369 2 DEBUG nova.virt.libvirt.guest [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.374 2 DEBUG nova.virt.libvirt.driver [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Attempting to detach device tapdf1ec4d8-f5 from instance a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.374 2 DEBUG nova.virt.libvirt.guest [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] detach device xml: <interface type="ethernet">
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:38:db:60"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <target dev="tapdf1ec4d8-f5"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]: </interface>
Oct 14 09:06:39 compute-0 nova_compute[259627]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.383 2 DEBUG nova.virt.libvirt.guest [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.386 2 DEBUG nova.virt.libvirt.guest [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface>not found in domain: <domain type='kvm' id='72'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <name>instance-00000039</name>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <uuid>a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8</uuid>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:name>tempest-tempest.common.compute-instance-1758133214</nova:name>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:06:38</nova:creationTime>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:port uuid="dffa5a1f-657b-498e-bbe5-6540fead7fb6">
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:port uuid="df1ec4d8-f543-4899-9d98-b60a6a46cc7c">
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:06:39 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <system>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <entry name='serial'>a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8</entry>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <entry name='uuid'>a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8</entry>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </system>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <os>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </os>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <features>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </features>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk' index='2'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk.config' index='1'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:b4:40:de'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target dev='tapdffa5a1f-65'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:38:db:60'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target dev='tapdf1ec4d8-f5'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='net1'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/console.log' append='off'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       </target>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/0'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/console.log' append='off'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </console>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </input>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </input>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </input>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <video>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </video>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c397,c877</label>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c397,c877</imagelabel>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:06:39 compute-0 nova_compute[259627]: </domain>
Oct 14 09:06:39 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.387 2 INFO nova.virt.libvirt.driver [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully detached device tapdf1ec4d8-f5 from instance a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 from the persistent domain config.
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.387 2 DEBUG nova.virt.libvirt.driver [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] (1/8): Attempting to detach device tapdf1ec4d8-f5 with device alias net1 from instance a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.388 2 DEBUG nova.virt.libvirt.guest [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] detach device xml: <interface type="ethernet">
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:38:db:60"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <target dev="tapdf1ec4d8-f5"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]: </interface>
Oct 14 09:06:39 compute-0 nova_compute[259627]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.471 2 DEBUG nova.network.neutron [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updated VIF entry in instance network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.472 2 DEBUG nova.network.neutron [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:39 compute-0 kernel: tapdf1ec4d8-f5 (unregistering): left promiscuous mode
Oct 14 09:06:39 compute-0 NetworkManager[44885]: <info>  [1760432799.5029] device (tapdf1ec4d8-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.507 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.507 2 DEBUG nova.compute.manager [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.508 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.508 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.509 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.509 2 DEBUG nova.compute.manager [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] No waiting events found dispatching network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.509 2 WARNING nova.compute.manager [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received unexpected event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf for instance with vm_state building and task_state spawning.
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.510 2 DEBUG oslo_concurrency.lockutils [req-8916f69c-4dea-474f-a0c6-7140f1c881e8 req-c7ddecf3-e5dd-44ba-9938-a2c44ca1e366 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.513 2 DEBUG nova.network.neutron [req-8916f69c-4dea-474f-a0c6-7140f1c881e8 req-c7ddecf3-e5dd-44ba-9938-a2c44ca1e366 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing network info cache for port df1ec4d8-f543-4899-9d98-b60a6a46cc7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:06:39 compute-0 ovn_controller[152662]: 2025-10-14T09:06:39Z|00607|binding|INFO|Releasing lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c from this chassis (sb_readonly=0)
Oct 14 09:06:39 compute-0 ovn_controller[152662]: 2025-10-14T09:06:39Z|00608|binding|INFO|Setting lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c down in Southbound
Oct 14 09:06:39 compute-0 ovn_controller[152662]: 2025-10-14T09:06:39Z|00609|binding|INFO|Removing iface tapdf1ec4d8-f5 ovn-installed in OVS
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.525 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:db:60 10.100.0.12'], port_security=['fa:16:3e:38:db:60 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-278583686', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-278583686', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=df1ec4d8-f543-4899-9d98-b60a6a46cc7c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.527 2 DEBUG nova.virt.libvirt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Received event <DeviceRemovedEvent: 1760432799.5268383, a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.527 2 DEBUG nova.virt.libvirt.driver [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Start waiting for the detach event from libvirt for device tapdf1ec4d8-f5 with device alias net1 for instance a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.528 2 DEBUG nova.virt.libvirt.guest [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.536 2 DEBUG nova.virt.libvirt.guest [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface>not found in domain: <domain type='kvm' id='72'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <name>instance-00000039</name>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <uuid>a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8</uuid>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:name>tempest-tempest.common.compute-instance-1758133214</nova:name>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:06:38</nova:creationTime>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:port uuid="dffa5a1f-657b-498e-bbe5-6540fead7fb6">
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:port uuid="df1ec4d8-f543-4899-9d98-b60a6a46cc7c">
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:06:39 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <system>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <entry name='serial'>a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8</entry>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <entry name='uuid'>a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8</entry>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </system>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <os>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </os>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <features>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </features>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk' index='2'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk.config' index='1'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:b4:40:de'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target dev='tapdffa5a1f-65'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/console.log' append='off'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       </target>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/0'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/console.log' append='off'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </console>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </input>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </input>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </input>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <video>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </video>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c397,c877</label>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c397,c877</imagelabel>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:06:39 compute-0 nova_compute[259627]: </domain>
Oct 14 09:06:39 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.536 2 INFO nova.virt.libvirt.driver [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully detached device tapdf1ec4d8-f5 from instance a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 from the live domain config.
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.537 2 DEBUG nova.virt.libvirt.vif [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1758133214',display_name='tempest-tempest.common.compute-instance-1758133214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1758133214',id=57,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-6x4gt3p2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.537 2 DEBUG nova.network.os_vif_util [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.538 2 DEBUG nova.network.os_vif_util [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.538 2 DEBUG os_vif [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.541 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf1ec4d8-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.550 2 INFO os_vif [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5')
Oct 14 09:06:39 compute-0 nova_compute[259627]: 2025-10-14 09:06:39.551 2 DEBUG nova.virt.libvirt.guest [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:name>tempest-tempest.common.compute-instance-1758133214</nova:name>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:06:39</nova:creationTime>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     <nova:port uuid="dffa5a1f-657b-498e-bbe5-6540fead7fb6">
Oct 14 09:06:39 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:06:39 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:06:39 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:06:39 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:06:39 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 09:06:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 339 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 14 09:06:39 compute-0 podman[324602]: 2025-10-14 09:06:39.869321685 +0000 UTC m=+0.112654296 container create 026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:06:39 compute-0 podman[324602]: 2025-10-14 09:06:39.806661461 +0000 UTC m=+0.049994132 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:06:39 compute-0 systemd[1]: Started libpod-conmon-026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd.scope.
Oct 14 09:06:39 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:06:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ebe327e9b036dd3fe73fd0e1546dc84881dfa7dffba911ed9994c43ab1d96a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:06:40 compute-0 podman[324602]: 2025-10-14 09:06:40.003744815 +0000 UTC m=+0.247077446 container init 026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 09:06:40 compute-0 podman[324602]: 2025-10-14 09:06:40.010643665 +0000 UTC m=+0.253976266 container start 026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 14 09:06:40 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[324617]: [NOTICE]   (324621) : New worker (324623) forked
Oct 14 09:06:40 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[324617]: [NOTICE]   (324621) : Loading success.
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.109 162547 INFO neutron.agent.ovn.metadata.agent [-] Port df1ec4d8-f543-4899-9d98-b60a6a46cc7c in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.110 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.121 2 DEBUG nova.compute.manager [req-bf0a7f32-df0c-48e5-ac5d-48640f97aee6 req-3ab1d71a-19f5-4075-9dae-232044b7c2e1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-changed-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.122 2 DEBUG nova.compute.manager [req-bf0a7f32-df0c-48e5-ac5d-48640f97aee6 req-3ab1d71a-19f5-4075-9dae-232044b7c2e1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Refreshing instance network info cache due to event network-changed-deb48802-bfee-42af-882c-3632b1fbb2cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.122 2 DEBUG oslo_concurrency.lockutils [req-bf0a7f32-df0c-48e5-ac5d-48640f97aee6 req-3ab1d71a-19f5-4075-9dae-232044b7c2e1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-dd55716e-2330-42a4-8963-33bdc9c7bbf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.122 2 DEBUG oslo_concurrency.lockutils [req-bf0a7f32-df0c-48e5-ac5d-48640f97aee6 req-3ab1d71a-19f5-4075-9dae-232044b7c2e1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-dd55716e-2330-42a4-8963-33bdc9c7bbf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.122 2 DEBUG nova.network.neutron [req-bf0a7f32-df0c-48e5-ac5d-48640f97aee6 req-3ab1d71a-19f5-4075-9dae-232044b7c2e1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Refreshing network info cache for port deb48802-bfee-42af-882c-3632b1fbb2cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.124 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[27ebe0f6-3ee1-402d-9d38-4d10025bd6e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.150 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c4143b-c17b-432a-8256-16accd1026ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.153 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[04b04c9f-0bb9-4d20-a624-1092d53e7751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.186 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[68762ebe-b631-4b67-ad4e-23fc1e224361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.205 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[37c05338-da39-4565-8d49-a64f279655f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654854, 'reachable_time': 41314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324637, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.215 2 DEBUG oslo_concurrency.lockutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.216 2 DEBUG oslo_concurrency.lockutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.216 2 DEBUG oslo_concurrency.lockutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.216 2 DEBUG oslo_concurrency.lockutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.216 2 DEBUG oslo_concurrency.lockutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.218 2 INFO nova.compute.manager [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Terminating instance
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.218 2 DEBUG nova.compute.manager [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.222 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a6aec952-db56-47c7-95c2-c5f8930a6158]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654865, 'tstamp': 654865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324638, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654867, 'tstamp': 654867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324638, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.225 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.230 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.231 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.231 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.231 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:40 compute-0 kernel: tapdeb48802-bf (unregistering): left promiscuous mode
Oct 14 09:06:40 compute-0 NetworkManager[44885]: <info>  [1760432800.2681] device (tapdeb48802-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:06:40 compute-0 ovn_controller[152662]: 2025-10-14T09:06:40Z|00610|binding|INFO|Releasing lport deb48802-bfee-42af-882c-3632b1fbb2cf from this chassis (sb_readonly=0)
Oct 14 09:06:40 compute-0 ovn_controller[152662]: 2025-10-14T09:06:40Z|00611|binding|INFO|Setting lport deb48802-bfee-42af-882c-3632b1fbb2cf down in Southbound
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:40 compute-0 ovn_controller[152662]: 2025-10-14T09:06:40Z|00612|binding|INFO|Removing iface tapdeb48802-bf ovn-installed in OVS
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.290 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:3f:2b 10.100.0.9'], port_security=['fa:16:3e:e0:3f:2b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dd55716e-2330-42a4-8963-33bdc9c7bbf8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=deb48802-bfee-42af-882c-3632b1fbb2cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.291 162547 INFO neutron.agent.ovn.metadata.agent [-] Port deb48802-bfee-42af-882c-3632b1fbb2cf in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 unbound from our chassis
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.293 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3b87118-f516-4f2d-8696-aa7290af9d83
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.310 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[88a40c6c-6ad5-4492-907c-18a5141e8a32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Oct 14 09:06:40 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000003c.scope: Consumed 5.852s CPU time.
Oct 14 09:06:40 compute-0 systemd-machined[214636]: Machine qemu-75-instance-0000003c terminated.
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.351 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ad729ba7-896e-4901-a8b0-be76db64d7ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.354 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6f23a12c-a10c-49c0-8c88-2519db38182e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.393 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[55b8a40c-dd93-48d2-8f71-f3135c8a10a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.411 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e7048650-72cd-43c1-aefa-992523d6f8d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 35719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324690, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.428 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a8d2d5-0a69-4fa9-9029-44986c2d3714]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647552, 'tstamp': 647552}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324692, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647555, 'tstamp': 647555}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324692, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.430 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.439 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3b87118-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.440 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.440 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3b87118-f0, col_values=(('external_ids', {'iface-id': 'a7f44223-dee5-4a2f-b975-1f04f03b78f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.440 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:40 compute-0 kernel: tapdeb48802-bf: entered promiscuous mode
Oct 14 09:06:40 compute-0 NetworkManager[44885]: <info>  [1760432800.4430] manager: (tapdeb48802-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/266)
Oct 14 09:06:40 compute-0 kernel: tapdeb48802-bf (unregistering): left promiscuous mode
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:40 compute-0 ovn_controller[152662]: 2025-10-14T09:06:40Z|00613|binding|INFO|Claiming lport deb48802-bfee-42af-882c-3632b1fbb2cf for this chassis.
Oct 14 09:06:40 compute-0 ovn_controller[152662]: 2025-10-14T09:06:40Z|00614|binding|INFO|deb48802-bfee-42af-882c-3632b1fbb2cf: Claiming fa:16:3e:e0:3f:2b 10.100.0.9
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.457 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:3f:2b 10.100.0.9'], port_security=['fa:16:3e:e0:3f:2b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dd55716e-2330-42a4-8963-33bdc9c7bbf8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=deb48802-bfee-42af-882c-3632b1fbb2cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.458 162547 INFO neutron.agent.ovn.metadata.agent [-] Port deb48802-bfee-42af-882c-3632b1fbb2cf in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 bound to our chassis
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.459 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3b87118-f516-4f2d-8696-aa7290af9d83
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.474 2 INFO nova.virt.libvirt.driver [-] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Instance destroyed successfully.
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.474 2 DEBUG nova.objects.instance [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'resources' on Instance uuid dd55716e-2330-42a4-8963-33bdc9c7bbf8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:40 compute-0 ovn_controller[152662]: 2025-10-14T09:06:40Z|00615|binding|INFO|Setting lport deb48802-bfee-42af-882c-3632b1fbb2cf ovn-installed in OVS
Oct 14 09:06:40 compute-0 ovn_controller[152662]: 2025-10-14T09:06:40Z|00616|binding|INFO|Setting lport deb48802-bfee-42af-882c-3632b1fbb2cf up in Southbound
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.478 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[449c8e2c-4928-4c25-b17d-4f1c910fccef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_controller[152662]: 2025-10-14T09:06:40Z|00617|binding|INFO|Releasing lport deb48802-bfee-42af-882c-3632b1fbb2cf from this chassis (sb_readonly=1)
Oct 14 09:06:40 compute-0 ovn_controller[152662]: 2025-10-14T09:06:40Z|00618|binding|INFO|Removing iface tapdeb48802-bf ovn-installed in OVS
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:40 compute-0 ovn_controller[152662]: 2025-10-14T09:06:40Z|00619|if_status|INFO|Not setting lport deb48802-bfee-42af-882c-3632b1fbb2cf down as sb is readonly
Oct 14 09:06:40 compute-0 ovn_controller[152662]: 2025-10-14T09:06:40Z|00620|binding|INFO|Releasing lport deb48802-bfee-42af-882c-3632b1fbb2cf from this chassis (sb_readonly=0)
Oct 14 09:06:40 compute-0 ovn_controller[152662]: 2025-10-14T09:06:40Z|00621|binding|INFO|Setting lport deb48802-bfee-42af-882c-3632b1fbb2cf down in Southbound
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.497 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:3f:2b 10.100.0.9'], port_security=['fa:16:3e:e0:3f:2b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dd55716e-2330-42a4-8963-33bdc9c7bbf8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=deb48802-bfee-42af-882c-3632b1fbb2cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.507 2 DEBUG nova.virt.libvirt.vif [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-138998221',display_name='tempest-ServerActionsTestOtherA-server-138998221',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-138998221',id=60,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-kgnyrp0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA-894139105-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:35Z,user_data=None,user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=dd55716e-2330-42a4-8963-33bdc9c7bbf8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.507 2 DEBUG nova.network.os_vif_util [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.508 2 DEBUG nova.network.os_vif_util [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3f:2b,bridge_name='br-int',has_traffic_filtering=True,id=deb48802-bfee-42af-882c-3632b1fbb2cf,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb48802-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.508 2 DEBUG os_vif [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3f:2b,bridge_name='br-int',has_traffic_filtering=True,id=deb48802-bfee-42af-882c-3632b1fbb2cf,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb48802-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.509 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdeb48802-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.517 2 INFO os_vif [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3f:2b,bridge_name='br-int',has_traffic_filtering=True,id=deb48802-bfee-42af-882c-3632b1fbb2cf,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb48802-bf')
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.527 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1662c53a-ac81-4d41-ae07-745a5de23cfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.530 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[142804ff-5b36-4a00-a1eb-b71ecdf26c17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.566 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[41e994f2-c619-457f-8a35-03278f299dff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.583 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6aba2fde-b222-4222-8bb5-c0ad9e416e8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 916, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 916, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 35719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324727, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.601 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[90bd0788-e696-4048-8abf-6fe9ae6ffcb1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647552, 'tstamp': 647552}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324728, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647555, 'tstamp': 647555}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324728, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.602 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.604 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3b87118-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.604 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.605 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3b87118-f0, col_values=(('external_ids', {'iface-id': 'a7f44223-dee5-4a2f-b975-1f04f03b78f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.605 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.606 162547 INFO neutron.agent.ovn.metadata.agent [-] Port deb48802-bfee-42af-882c-3632b1fbb2cf in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 unbound from our chassis
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.607 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3b87118-f516-4f2d-8696-aa7290af9d83
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.621 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[253ae6a6-9bd8-49cd-8c44-c6b415de2b79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.649 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[68154786-4523-43d6-9343-4d2ead5e1515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.653 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1003c6db-ff7f-44c5-a589-201ae1415b73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.682 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9c9ded30-fae1-44d9-9165-a96ae70c1f3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.710 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1610eb89-9b82-40fb-beb8-13487c40ee65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 916, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 916, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 35719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324734, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.726 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c334b8b0-7798-4d30-b0c8-d5596f2bbaa6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647552, 'tstamp': 647552}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324736, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647555, 'tstamp': 647555}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324736, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.728 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.732 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3b87118-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.733 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.733 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3b87118-f0, col_values=(('external_ids', {'iface-id': 'a7f44223-dee5-4a2f-b975-1f04f03b78f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.734 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:40 compute-0 ceph-mon[74249]: pgmap v1532: 305 pgs: 305 active+clean; 339 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.872 2 DEBUG nova.network.neutron [req-8916f69c-4dea-474f-a0c6-7140f1c881e8 req-c7ddecf3-e5dd-44ba-9938-a2c44ca1e366 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updated VIF entry in instance network info cache for port df1ec4d8-f543-4899-9d98-b60a6a46cc7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.872 2 DEBUG nova.network.neutron [req-8916f69c-4dea-474f-a0c6-7140f1c881e8 req-c7ddecf3-e5dd-44ba-9938-a2c44ca1e366 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.881 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.881 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.881 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.882 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.882 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] No waiting events found dispatching network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.882 2 WARNING nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received unexpected event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c for instance with vm_state active and task_state None.
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.882 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Received event network-vif-plugged-26dd6404-2018-471b-a387-2b045d236164 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.883 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.883 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.883 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.883 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Processing event network-vif-plugged-26dd6404-2018-471b-a387-2b045d236164 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.884 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Received event network-vif-plugged-26dd6404-2018-471b-a387-2b045d236164 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.884 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.884 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.884 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.885 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] No waiting events found dispatching network-vif-plugged-26dd6404-2018-471b-a387-2b045d236164 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.885 2 WARNING nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Received unexpected event network-vif-plugged-26dd6404-2018-471b-a387-2b045d236164 for instance with vm_state building and task_state spawning.
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.885 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-vif-unplugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.886 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.886 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.886 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.886 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] No waiting events found dispatching network-vif-unplugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.887 2 WARNING nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received unexpected event network-vif-unplugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c for instance with vm_state active and task_state None.
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.887 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.887 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.887 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.887 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.888 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] No waiting events found dispatching network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.888 2 WARNING nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received unexpected event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c for instance with vm_state active and task_state None.
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.902 2 DEBUG oslo_concurrency.lockutils [req-8916f69c-4dea-474f-a0c6-7140f1c881e8 req-c7ddecf3-e5dd-44ba-9938-a2c44ca1e366 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.913 2 INFO nova.virt.libvirt.driver [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Deleting instance files /var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8_del
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.914 2 INFO nova.virt.libvirt.driver [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Deletion of /var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8_del complete
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.957 2 DEBUG nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.958 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432800.9565084, 73d6be04-84dc-4b80-81f8-a9bbf9938051 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.958 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] VM Started (Lifecycle Event)
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.963 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.968 2 INFO nova.virt.libvirt.driver [-] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Instance spawned successfully.
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.968 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:06:40 compute-0 nova_compute[259627]: 2025-10-14 09:06:40.990 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.009 2 INFO nova.compute.manager [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.010 2 DEBUG oslo.service.loopingcall [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.011 2 DEBUG nova.compute.manager [-] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.011 2 DEBUG nova.network.neutron [-] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.022 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.030 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.031 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.032 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.033 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.034 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.035 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.041 2 DEBUG oslo_concurrency.lockutils [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.042 2 DEBUG oslo_concurrency.lockutils [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.042 2 DEBUG nova.network.neutron [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.053 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.054 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432800.9566607, 73d6be04-84dc-4b80-81f8-a9bbf9938051 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.054 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] VM Paused (Lifecycle Event)
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.084 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.087 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432800.9620774, 73d6be04-84dc-4b80-81f8-a9bbf9938051 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.088 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] VM Resumed (Lifecycle Event)
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.106 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.110 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.115 2 INFO nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Took 8.17 seconds to spawn the instance on the hypervisor.
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.116 2 DEBUG nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.131 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.181 2 INFO nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Took 9.19 seconds to build instance.
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.200 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.431 2 DEBUG nova.network.neutron [req-bf0a7f32-df0c-48e5-ac5d-48640f97aee6 req-3ab1d71a-19f5-4075-9dae-232044b7c2e1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Updated VIF entry in instance network info cache for port deb48802-bfee-42af-882c-3632b1fbb2cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.432 2 DEBUG nova.network.neutron [req-bf0a7f32-df0c-48e5-ac5d-48640f97aee6 req-3ab1d71a-19f5-4075-9dae-232044b7c2e1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Updating instance_info_cache with network_info: [{"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:41 compute-0 nova_compute[259627]: 2025-10-14 09:06:41.453 2 DEBUG oslo_concurrency.lockutils [req-bf0a7f32-df0c-48e5-ac5d-48640f97aee6 req-3ab1d71a-19f5-4075-9dae-232044b7c2e1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-dd55716e-2330-42a4-8963-33bdc9c7bbf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 372 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.0 MiB/s wr, 245 op/s
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.281 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-unplugged-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.282 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.282 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.282 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.282 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] No waiting events found dispatching network-vif-unplugged-deb48802-bfee-42af-882c-3632b1fbb2cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.282 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-unplugged-deb48802-bfee-42af-882c-3632b1fbb2cf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.283 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.283 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.283 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.283 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.283 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] No waiting events found dispatching network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.283 2 WARNING nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received unexpected event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf for instance with vm_state active and task_state deleting.
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.283 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.284 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.284 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.284 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.284 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] No waiting events found dispatching network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.284 2 WARNING nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received unexpected event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf for instance with vm_state active and task_state deleting.
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.284 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.284 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.285 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.285 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.285 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] No waiting events found dispatching network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.285 2 WARNING nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received unexpected event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf for instance with vm_state active and task_state deleting.
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.285 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-unplugged-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.285 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.285 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.286 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.286 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] No waiting events found dispatching network-vif-unplugged-deb48802-bfee-42af-882c-3632b1fbb2cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.286 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-unplugged-deb48802-bfee-42af-882c-3632b1fbb2cf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:06:42 compute-0 nova_compute[259627]: 2025-10-14 09:06:42.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:06:42 compute-0 ceph-mon[74249]: pgmap v1533: 305 pgs: 305 active+clean; 372 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.0 MiB/s wr, 245 op/s
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029738194512274533 of space, bias 1.0, pg target 0.892145835368236 quantized to 32 (current 32)
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:06:43 compute-0 nova_compute[259627]: 2025-10-14 09:06:43.168 2 DEBUG nova.network.neutron [-] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:43 compute-0 nova_compute[259627]: 2025-10-14 09:06:43.193 2 INFO nova.compute.manager [-] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Took 2.18 seconds to deallocate network for instance.
Oct 14 09:06:43 compute-0 nova_compute[259627]: 2025-10-14 09:06:43.250 2 DEBUG oslo_concurrency.lockutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:43 compute-0 nova_compute[259627]: 2025-10-14 09:06:43.251 2 DEBUG oslo_concurrency.lockutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:43 compute-0 nova_compute[259627]: 2025-10-14 09:06:43.392 2 DEBUG oslo_concurrency.processutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 372 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Oct 14 09:06:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:06:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/317642531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:43 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/317642531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:43 compute-0 nova_compute[259627]: 2025-10-14 09:06:43.889 2 DEBUG oslo_concurrency.processutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:43 compute-0 nova_compute[259627]: 2025-10-14 09:06:43.895 2 DEBUG nova.compute.provider_tree [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:06:43 compute-0 nova_compute[259627]: 2025-10-14 09:06:43.910 2 DEBUG nova.scheduler.client.report [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:06:43 compute-0 nova_compute[259627]: 2025-10-14 09:06:43.925 2 DEBUG oslo_concurrency.lockutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:43 compute-0 nova_compute[259627]: 2025-10-14 09:06:43.949 2 INFO nova.scheduler.client.report [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Deleted allocations for instance dd55716e-2330-42a4-8963-33bdc9c7bbf8
Oct 14 09:06:43 compute-0 nova_compute[259627]: 2025-10-14 09:06:43.999 2 DEBUG oslo_concurrency.lockutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.132 2 INFO nova.network.neutron [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Port df1ec4d8-f543-4899-9d98-b60a6a46cc7c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.133 2 DEBUG nova.network.neutron [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.150 2 DEBUG oslo_concurrency.lockutils [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.174 2 DEBUG oslo_concurrency.lockutils [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.233 2 INFO nova.compute.manager [None req-d673ad26-1c19-4c12-9521-927131ab42de a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Pausing
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.235 2 DEBUG nova.objects.instance [None req-d673ad26-1c19-4c12-9521-927131ab42de a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'flavor' on Instance uuid 73d6be04-84dc-4b80-81f8-a9bbf9938051 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.264 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432804.2639887, 73d6be04-84dc-4b80-81f8-a9bbf9938051 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.264 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] VM Paused (Lifecycle Event)
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.270 2 DEBUG nova.compute.manager [None req-d673ad26-1c19-4c12-9521-927131ab42de a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.283 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.287 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.324 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.383 2 DEBUG nova.compute.manager [req-99f0620a-9434-4398-9791-041be0771cb6 req-8f86781f-ebfd-4f07-a45c-a55ff16f32cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.384 2 DEBUG oslo_concurrency.lockutils [req-99f0620a-9434-4398-9791-041be0771cb6 req-8f86781f-ebfd-4f07-a45c-a55ff16f32cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.385 2 DEBUG oslo_concurrency.lockutils [req-99f0620a-9434-4398-9791-041be0771cb6 req-8f86781f-ebfd-4f07-a45c-a55ff16f32cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.385 2 DEBUG oslo_concurrency.lockutils [req-99f0620a-9434-4398-9791-041be0771cb6 req-8f86781f-ebfd-4f07-a45c-a55ff16f32cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.386 2 DEBUG nova.compute.manager [req-99f0620a-9434-4398-9791-041be0771cb6 req-8f86781f-ebfd-4f07-a45c-a55ff16f32cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] No waiting events found dispatching network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.386 2 WARNING nova.compute.manager [req-99f0620a-9434-4398-9791-041be0771cb6 req-8f86781f-ebfd-4f07-a45c-a55ff16f32cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received unexpected event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf for instance with vm_state deleted and task_state None.
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.387 2 DEBUG nova.compute.manager [req-99f0620a-9434-4398-9791-041be0771cb6 req-8f86781f-ebfd-4f07-a45c-a55ff16f32cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-deleted-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.872 2 DEBUG oslo_concurrency.lockutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.873 2 DEBUG oslo_concurrency.lockutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.873 2 DEBUG oslo_concurrency.lockutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.874 2 DEBUG oslo_concurrency.lockutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.874 2 DEBUG oslo_concurrency.lockutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.875 2 INFO nova.compute.manager [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Terminating instance
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.877 2 DEBUG nova.compute.manager [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:06:44 compute-0 ceph-mon[74249]: pgmap v1534: 305 pgs: 305 active+clean; 372 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Oct 14 09:06:44 compute-0 kernel: tap58429c4c-bd (unregistering): left promiscuous mode
Oct 14 09:06:44 compute-0 NetworkManager[44885]: <info>  [1760432804.9518] device (tap58429c4c-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:06:44 compute-0 ovn_controller[152662]: 2025-10-14T09:06:44Z|00622|binding|INFO|Releasing lport 58429c4c-bdab-4d51-8440-95fb6e0fab00 from this chassis (sb_readonly=0)
Oct 14 09:06:44 compute-0 ovn_controller[152662]: 2025-10-14T09:06:44Z|00623|binding|INFO|Setting lport 58429c4c-bdab-4d51-8440-95fb6e0fab00 down in Southbound
Oct 14 09:06:44 compute-0 ovn_controller[152662]: 2025-10-14T09:06:44Z|00624|binding|INFO|Removing iface tap58429c4c-bd ovn-installed in OVS
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:44 compute-0 nova_compute[259627]: 2025-10-14 09:06:44.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:44.978 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:a5:80 10.100.0.3'], port_security=['fa:16:3e:53:a5:80 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1ce7a863-d0bf-4ea3-80f5-18675b16ac93', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f73d8240-1201-4e28-9385-26f0dd3955ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=58429c4c-bdab-4d51-8440-95fb6e0fab00) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:06:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:44.979 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 58429c4c-bdab-4d51-8440-95fb6e0fab00 in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 unbound from our chassis
Oct 14 09:06:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:44.980 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3b87118-f516-4f2d-8696-aa7290af9d83, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:06:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:44.981 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[544cdfae-da75-480c-b6f3-7d0bf1e609ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:44.981 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83 namespace which is not needed anymore
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:45 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000035.scope: Deactivated successfully.
Oct 14 09:06:45 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000035.scope: Consumed 16.481s CPU time.
Oct 14 09:06:45 compute-0 systemd-machined[214636]: Machine qemu-68-instance-00000035 terminated.
Oct 14 09:06:45 compute-0 neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83[319460]: [NOTICE]   (319464) : haproxy version is 2.8.14-c23fe91
Oct 14 09:06:45 compute-0 neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83[319460]: [NOTICE]   (319464) : path to executable is /usr/sbin/haproxy
Oct 14 09:06:45 compute-0 neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83[319460]: [WARNING]  (319464) : Exiting Master process...
Oct 14 09:06:45 compute-0 neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83[319460]: [ALERT]    (319464) : Current worker (319466) exited with code 143 (Terminated)
Oct 14 09:06:45 compute-0 neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83[319460]: [WARNING]  (319464) : All workers exited. Exiting... (0)
Oct 14 09:06:45 compute-0 systemd[1]: libpod-5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b.scope: Deactivated successfully.
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.121 2 INFO nova.virt.libvirt.driver [-] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Instance destroyed successfully.
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.121 2 DEBUG nova.objects.instance [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'resources' on Instance uuid 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:45 compute-0 podman[324782]: 2025-10-14 09:06:45.123652464 +0000 UTC m=+0.044743402 container died 5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.139 2 DEBUG nova.compute.manager [req-bb19f989-ebe3-4f1d-a7d6-bea6a65e8cfb req-b5755d17-863c-4bb4-a134-4c448f76baed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received event network-vif-unplugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.139 2 DEBUG oslo_concurrency.lockutils [req-bb19f989-ebe3-4f1d-a7d6-bea6a65e8cfb req-b5755d17-863c-4bb4-a134-4c448f76baed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.139 2 DEBUG oslo_concurrency.lockutils [req-bb19f989-ebe3-4f1d-a7d6-bea6a65e8cfb req-b5755d17-863c-4bb4-a134-4c448f76baed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.140 2 DEBUG oslo_concurrency.lockutils [req-bb19f989-ebe3-4f1d-a7d6-bea6a65e8cfb req-b5755d17-863c-4bb4-a134-4c448f76baed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.140 2 DEBUG nova.compute.manager [req-bb19f989-ebe3-4f1d-a7d6-bea6a65e8cfb req-b5755d17-863c-4bb4-a134-4c448f76baed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] No waiting events found dispatching network-vif-unplugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.140 2 DEBUG nova.compute.manager [req-bb19f989-ebe3-4f1d-a7d6-bea6a65e8cfb req-b5755d17-863c-4bb4-a134-4c448f76baed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received event network-vif-unplugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.142 2 DEBUG nova.virt.libvirt.vif [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1496520165',display_name='tempest-ServerActionsTestOtherA-server-1496520165',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1496520165',id=53,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHn+mln6XiHS3Dbrh5f5r23+s3Q61qobcQwb2UzGhsgS1DhTJSEpJGmS/ZP0w8jiE9rcTktB/Gz7RvHBySi5EJz+HH+wa+mTFVBHeaIG5cz8L5ypIzO20Wa3eu2dAxGK5A==',key_name='tempest-keypair-1288175355',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:05:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-60w53hyn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA-894139105-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:05:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=1ce7a863-d0bf-4ea3-80f5-18675b16ac93,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.142 2 DEBUG nova.network.os_vif_util [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.143 2 DEBUG nova.network.os_vif_util [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:a5:80,bridge_name='br-int',has_traffic_filtering=True,id=58429c4c-bdab-4d51-8440-95fb6e0fab00,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58429c4c-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.143 2 DEBUG os_vif [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:a5:80,bridge_name='br-int',has_traffic_filtering=True,id=58429c4c-bdab-4d51-8440-95fb6e0fab00,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58429c4c-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.146 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58429c4c-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.152 2 INFO os_vif [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:a5:80,bridge_name='br-int',has_traffic_filtering=True,id=58429c4c-bdab-4d51-8440-95fb6e0fab00,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58429c4c-bd')
Oct 14 09:06:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b-userdata-shm.mount: Deactivated successfully.
Oct 14 09:06:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c59812b275a7f08f721c19ad28122ba9e2d64d1426f442032d01ef6fa1360d6-merged.mount: Deactivated successfully.
Oct 14 09:06:45 compute-0 podman[324782]: 2025-10-14 09:06:45.177749546 +0000 UTC m=+0.098840494 container cleanup 5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 09:06:45 compute-0 systemd[1]: libpod-conmon-5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b.scope: Deactivated successfully.
Oct 14 09:06:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.245 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:06:45 compute-0 podman[324836]: 2025-10-14 09:06:45.262239077 +0000 UTC m=+0.054933644 container remove 5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.280 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[13d93744-f6f0-4ce3-ad3a-12121c2a2b1d]: (4, ('Tue Oct 14 09:06:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83 (5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b)\n5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b\nTue Oct 14 09:06:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83 (5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b)\n5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.282 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[84b9a732-3b13-42d2-9243-ebdf2a16c823]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.283 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:45 compute-0 kernel: tapf3b87118-f0: left promiscuous mode
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.319 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb4de62-e69b-46d0-bfe8-3d3fdc75b982]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.349 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[231e6f90-204f-411e-9bef-d5ffcdb39bdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.351 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa99f71-1268-4c7c-9409-d7fcabe04d2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.370 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[45343fca-1b34-4673-ba6a-fbadf0dde668]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647534, 'reachable_time': 25406, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324854, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.372 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:06:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.372 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[9dfa4f97-b4ef-468d-8237-7ebff1e4e65a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.373 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:06:45 compute-0 systemd[1]: run-netns-ovnmeta\x2df3b87118\x2df516\x2d4f2d\x2d8696\x2daa7290af9d83.mount: Deactivated successfully.
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.488 2 DEBUG oslo_concurrency.lockutils [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-2189eac5-238f-4f09-ae1c-1cf47c3b6030-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.489 2 DEBUG oslo_concurrency.lockutils [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-2189eac5-238f-4f09-ae1c-1cf47c3b6030-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.489 2 DEBUG nova.objects.instance [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid 2189eac5-238f-4f09-ae1c-1cf47c3b6030 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.534 2 INFO nova.virt.libvirt.driver [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Deleting instance files /var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93_del
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.534 2 INFO nova.virt.libvirt.driver [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Deletion of /var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93_del complete
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.579 2 INFO nova.compute.manager [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.579 2 DEBUG oslo.service.loopingcall [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.580 2 DEBUG nova.compute.manager [-] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:06:45 compute-0 nova_compute[259627]: 2025-10-14 09:06:45.580 2 DEBUG nova.network.neutron [-] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:06:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 309 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 280 op/s
Oct 14 09:06:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:46.374 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:46 compute-0 nova_compute[259627]: 2025-10-14 09:06:46.566 2 DEBUG nova.objects.instance [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_requests' on Instance uuid 2189eac5-238f-4f09-ae1c-1cf47c3b6030 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:46 compute-0 nova_compute[259627]: 2025-10-14 09:06:46.591 2 DEBUG nova.network.neutron [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:06:46 compute-0 nova_compute[259627]: 2025-10-14 09:06:46.660 2 DEBUG nova.compute.manager [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:46 compute-0 nova_compute[259627]: 2025-10-14 09:06:46.661 2 DEBUG nova.compute.manager [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing instance network info cache due to event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:06:46 compute-0 nova_compute[259627]: 2025-10-14 09:06:46.661 2 DEBUG oslo_concurrency.lockutils [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:46 compute-0 nova_compute[259627]: 2025-10-14 09:06:46.661 2 DEBUG oslo_concurrency.lockutils [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:46 compute-0 nova_compute[259627]: 2025-10-14 09:06:46.662 2 DEBUG nova.network.neutron [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:06:46 compute-0 ceph-mon[74249]: pgmap v1535: 305 pgs: 305 active+clean; 309 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 280 op/s
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.181 2 DEBUG nova.network.neutron [-] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.197 2 INFO nova.compute.manager [-] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Took 1.62 seconds to deallocate network for instance.
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.207 2 DEBUG oslo_concurrency.lockutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "73d6be04-84dc-4b80-81f8-a9bbf9938051" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.207 2 DEBUG oslo_concurrency.lockutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.208 2 DEBUG oslo_concurrency.lockutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.208 2 DEBUG oslo_concurrency.lockutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.209 2 DEBUG oslo_concurrency.lockutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.210 2 INFO nova.compute.manager [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Terminating instance
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.212 2 DEBUG nova.compute.manager [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.242 2 DEBUG oslo_concurrency.lockutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.242 2 DEBUG oslo_concurrency.lockutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:47 compute-0 kernel: tap26dd6404-20 (unregistering): left promiscuous mode
Oct 14 09:06:47 compute-0 NetworkManager[44885]: <info>  [1760432807.2622] device (tap26dd6404-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:47 compute-0 ovn_controller[152662]: 2025-10-14T09:06:47Z|00625|binding|INFO|Releasing lport 26dd6404-2018-471b-a387-2b045d236164 from this chassis (sb_readonly=0)
Oct 14 09:06:47 compute-0 ovn_controller[152662]: 2025-10-14T09:06:47Z|00626|binding|INFO|Setting lport 26dd6404-2018-471b-a387-2b045d236164 down in Southbound
Oct 14 09:06:47 compute-0 ovn_controller[152662]: 2025-10-14T09:06:47Z|00627|binding|INFO|Removing iface tap26dd6404-20 ovn-installed in OVS
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.294 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:a0:6d 10.100.0.12'], port_security=['fa:16:3e:95:a0:6d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '73d6be04-84dc-4b80-81f8-a9bbf9938051', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd39581efff7d48fb83412ca1f615d412', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd5e08c6-d8c1-45fb-85db-6ce5c46fea39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c165cf5-3bad-4110-8321-7f7bda9723ad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=26dd6404-2018-471b-a387-2b045d236164) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:06:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.295 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 26dd6404-2018-471b-a387-2b045d236164 in datapath 0a07d59e-be8b-4d41-a103-fb5a64bf6f88 unbound from our chassis
Oct 14 09:06:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.296 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a07d59e-be8b-4d41-a103-fb5a64bf6f88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.296 2 DEBUG nova.compute.manager [req-9a0c4ed4-7f9f-436b-9e53-88030f966b8a req-e726e686-2c12-438b-80db-a3ab50a2d271 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received event network-vif-plugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.297 2 DEBUG oslo_concurrency.lockutils [req-9a0c4ed4-7f9f-436b-9e53-88030f966b8a req-e726e686-2c12-438b-80db-a3ab50a2d271 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.297 2 DEBUG oslo_concurrency.lockutils [req-9a0c4ed4-7f9f-436b-9e53-88030f966b8a req-e726e686-2c12-438b-80db-a3ab50a2d271 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.298 2 DEBUG oslo_concurrency.lockutils [req-9a0c4ed4-7f9f-436b-9e53-88030f966b8a req-e726e686-2c12-438b-80db-a3ab50a2d271 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.298 2 DEBUG nova.compute.manager [req-9a0c4ed4-7f9f-436b-9e53-88030f966b8a req-e726e686-2c12-438b-80db-a3ab50a2d271 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] No waiting events found dispatching network-vif-plugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.298 2 WARNING nova.compute.manager [req-9a0c4ed4-7f9f-436b-9e53-88030f966b8a req-e726e686-2c12-438b-80db-a3ab50a2d271 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received unexpected event network-vif-plugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 for instance with vm_state deleted and task_state None.
Oct 14 09:06:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.299 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cff322e0-c925-44ee-9cef-3cb2de5751e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.300 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 namespace which is not needed anymore
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:47 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Oct 14 09:06:47 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d0000003e.scope: Consumed 5.270s CPU time.
Oct 14 09:06:47 compute-0 systemd-machined[214636]: Machine qemu-76-instance-0000003e terminated.
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.361 2 DEBUG oslo_concurrency.processutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.458 2 INFO nova.virt.libvirt.driver [-] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Instance destroyed successfully.
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.460 2 DEBUG nova.objects.instance [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'resources' on Instance uuid 73d6be04-84dc-4b80-81f8-a9bbf9938051 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.487 2 DEBUG nova.virt.libvirt.vif [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-715549256',display_name='tempest-DeleteServersTestJSON-server-715549256',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-715549256',id=62,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-vdf1bahn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:44Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=73d6be04-84dc-4b80-81f8-a9bbf9938051,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.488 2 DEBUG nova.network.os_vif_util [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.489 2 DEBUG nova.network.os_vif_util [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:a0:6d,bridge_name='br-int',has_traffic_filtering=True,id=26dd6404-2018-471b-a387-2b045d236164,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26dd6404-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.490 2 DEBUG os_vif [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:a0:6d,bridge_name='br-int',has_traffic_filtering=True,id=26dd6404-2018-471b-a387-2b045d236164,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26dd6404-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.493 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26dd6404-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:47 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[324617]: [NOTICE]   (324621) : haproxy version is 2.8.14-c23fe91
Oct 14 09:06:47 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[324617]: [NOTICE]   (324621) : path to executable is /usr/sbin/haproxy
Oct 14 09:06:47 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[324617]: [WARNING]  (324621) : Exiting Master process...
Oct 14 09:06:47 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[324617]: [ALERT]    (324621) : Current worker (324623) exited with code 143 (Terminated)
Oct 14 09:06:47 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[324617]: [WARNING]  (324621) : All workers exited. Exiting... (0)
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:06:47 compute-0 systemd[1]: libpod-026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd.scope: Deactivated successfully.
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.503 2 INFO os_vif [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:a0:6d,bridge_name='br-int',has_traffic_filtering=True,id=26dd6404-2018-471b-a387-2b045d236164,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26dd6404-20')
Oct 14 09:06:47 compute-0 podman[324878]: 2025-10-14 09:06:47.508379534 +0000 UTC m=+0.075689695 container died 026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd-userdata-shm.mount: Deactivated successfully.
Oct 14 09:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-00ebe327e9b036dd3fe73fd0e1546dc84881dfa7dffba911ed9994c43ab1d96a-merged.mount: Deactivated successfully.
Oct 14 09:06:47 compute-0 podman[324878]: 2025-10-14 09:06:47.549510717 +0000 UTC m=+0.116820868 container cleanup 026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 14 09:06:47 compute-0 systemd[1]: libpod-conmon-026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd.scope: Deactivated successfully.
Oct 14 09:06:47 compute-0 podman[324950]: 2025-10-14 09:06:47.633989367 +0000 UTC m=+0.053765055 container remove 026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 09:06:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.639 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a66ac1b2-18c8-4ebf-8d7f-92d63a59ab16]: (4, ('Tue Oct 14 09:06:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 (026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd)\n026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd\nTue Oct 14 09:06:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 (026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd)\n026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.641 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd460d3-80d4-46a8-b63f-7ac8c8ee0e65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.642 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a07d59e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:47 compute-0 kernel: tap0a07d59e-b0: left promiscuous mode
Oct 14 09:06:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.649 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7ebc68f0-58fb-4ba2-880f-dc1e0f638e90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.656 2 DEBUG nova.policy [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:06:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.679 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9ef6bf79-8b1f-4a64-852e-3a922840b71e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.680 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4be7d14e-4c93-4bac-a2ec-848799ad5cbe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.694 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1894dc91-4fcf-4529-bdef-057d980dd2ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657766, 'reachable_time': 39793, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324967, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.696 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:06:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d0a07d59e\x2dbe8b\x2d4d41\x2da103\x2dfb5a64bf6f88.mount: Deactivated successfully.
Oct 14 09:06:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.696 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[fa289942-25c1-475e-93c0-f2b6dd947adf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 309 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 248 op/s
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.916 2 INFO nova.virt.libvirt.driver [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Deleting instance files /var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051_del
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.917 2 INFO nova.virt.libvirt.driver [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Deletion of /var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051_del complete
Oct 14 09:06:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:06:47 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3819414765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.941 2 DEBUG oslo_concurrency.processutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.946 2 DEBUG nova.compute.provider_tree [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.958 2 INFO nova.compute.manager [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.959 2 DEBUG oslo.service.loopingcall [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.959 2 DEBUG nova.compute.manager [-] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.959 2 DEBUG nova.network.neutron [-] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.962 2 DEBUG nova.scheduler.client.report [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:06:47 compute-0 nova_compute[259627]: 2025-10-14 09:06:47.982 2 DEBUG oslo_concurrency.lockutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:48 compute-0 nova_compute[259627]: 2025-10-14 09:06:48.034 2 INFO nova.scheduler.client.report [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Deleted allocations for instance 1ce7a863-d0bf-4ea3-80f5-18675b16ac93
Oct 14 09:06:48 compute-0 nova_compute[259627]: 2025-10-14 09:06:48.108 2 DEBUG oslo_concurrency.lockutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:48 compute-0 nova_compute[259627]: 2025-10-14 09:06:48.744 2 DEBUG nova.network.neutron [-] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:48 compute-0 nova_compute[259627]: 2025-10-14 09:06:48.761 2 INFO nova.compute.manager [-] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Took 0.80 seconds to deallocate network for instance.
Oct 14 09:06:48 compute-0 nova_compute[259627]: 2025-10-14 09:06:48.902 2 DEBUG nova.compute.manager [req-c5aff32e-cd7b-4146-9f9b-5974489fc49d req-eaba8b44-a8f1-46ea-9cc5-992308f66606 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received event network-vif-deleted-58429c4c-bdab-4d51-8440-95fb6e0fab00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:48 compute-0 ceph-mon[74249]: pgmap v1536: 305 pgs: 305 active+clean; 309 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 248 op/s
Oct 14 09:06:48 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3819414765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:48 compute-0 nova_compute[259627]: 2025-10-14 09:06:48.937 2 DEBUG oslo_concurrency.lockutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:48 compute-0 nova_compute[259627]: 2025-10-14 09:06:48.937 2 DEBUG oslo_concurrency.lockutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.020 2 DEBUG oslo_concurrency.processutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.064 2 DEBUG nova.network.neutron [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Successfully updated port: df1ec4d8-f543-4899-9d98-b60a6a46cc7c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.071 2 DEBUG nova.network.neutron [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updated VIF entry in instance network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.072 2 DEBUG nova.network.neutron [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.088 2 DEBUG oslo_concurrency.lockutils [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.089 2 DEBUG oslo_concurrency.lockutils [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.090 2 DEBUG nova.network.neutron [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.093 2 DEBUG oslo_concurrency.lockutils [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.094 2 DEBUG nova.compute.manager [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-changed-350a3bec-5dbd-4a83-8d80-5796be0319fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.094 2 DEBUG nova.compute.manager [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing instance network info cache due to event network-changed-350a3bec-5dbd-4a83-8d80-5796be0319fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.095 2 DEBUG oslo_concurrency.lockutils [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.341 2 WARNING nova.network.neutron [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] fc2d149f-aebf-406a-aed2-5161dd22b079 already exists in list: networks containing: ['fc2d149f-aebf-406a-aed2-5161dd22b079']. ignoring it
Oct 14 09:06:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:06:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186077694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.523 2 DEBUG oslo_concurrency.processutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.531 2 DEBUG nova.compute.provider_tree [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.562 2 DEBUG nova.scheduler.client.report [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.594 2 DEBUG oslo_concurrency.lockutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.623 2 DEBUG nova.compute.manager [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Received event network-vif-unplugged-26dd6404-2018-471b-a387-2b045d236164 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.624 2 DEBUG oslo_concurrency.lockutils [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.624 2 DEBUG oslo_concurrency.lockutils [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.624 2 DEBUG oslo_concurrency.lockutils [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.624 2 DEBUG nova.compute.manager [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] No waiting events found dispatching network-vif-unplugged-26dd6404-2018-471b-a387-2b045d236164 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.625 2 WARNING nova.compute.manager [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Received unexpected event network-vif-unplugged-26dd6404-2018-471b-a387-2b045d236164 for instance with vm_state deleted and task_state None.
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.625 2 DEBUG nova.compute.manager [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Received event network-vif-plugged-26dd6404-2018-471b-a387-2b045d236164 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.625 2 DEBUG oslo_concurrency.lockutils [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.625 2 DEBUG oslo_concurrency.lockutils [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.626 2 DEBUG oslo_concurrency.lockutils [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.626 2 DEBUG nova.compute.manager [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] No waiting events found dispatching network-vif-plugged-26dd6404-2018-471b-a387-2b045d236164 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.626 2 WARNING nova.compute.manager [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Received unexpected event network-vif-plugged-26dd6404-2018-471b-a387-2b045d236164 for instance with vm_state deleted and task_state None.
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.627 2 INFO nova.scheduler.client.report [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Deleted allocations for instance 73d6be04-84dc-4b80-81f8-a9bbf9938051
Oct 14 09:06:49 compute-0 nova_compute[259627]: 2025-10-14 09:06:49.698 2 DEBUG oslo_concurrency.lockutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 309 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 248 op/s
Oct 14 09:06:49 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1186077694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:50 compute-0 nova_compute[259627]: 2025-10-14 09:06:50.894 2 DEBUG nova.compute.manager [req-9ab2f455-455a-452f-a560-8669676d9786 req-a2971ba0-4069-4a6a-b172-a9810d6accc3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Received event network-vif-deleted-26dd6404-2018-471b-a387-2b045d236164 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:50 compute-0 nova_compute[259627]: 2025-10-14 09:06:50.894 2 DEBUG nova.compute.manager [req-9ab2f455-455a-452f-a560-8669676d9786 req-a2971ba0-4069-4a6a-b172-a9810d6accc3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-changed-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:50 compute-0 nova_compute[259627]: 2025-10-14 09:06:50.895 2 DEBUG nova.compute.manager [req-9ab2f455-455a-452f-a560-8669676d9786 req-a2971ba0-4069-4a6a-b172-a9810d6accc3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing instance network info cache due to event network-changed-df1ec4d8-f543-4899-9d98-b60a6a46cc7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:06:50 compute-0 nova_compute[259627]: 2025-10-14 09:06:50.895 2 DEBUG oslo_concurrency.lockutils [req-9ab2f455-455a-452f-a560-8669676d9786 req-a2971ba0-4069-4a6a-b172-a9810d6accc3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:50 compute-0 ceph-mon[74249]: pgmap v1537: 305 pgs: 305 active+clean; 309 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 248 op/s
Oct 14 09:06:51 compute-0 podman[324994]: 2025-10-14 09:06:51.67146846 +0000 UTC m=+0.077164042 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:06:51 compute-0 podman[324993]: 2025-10-14 09:06:51.711991028 +0000 UTC m=+0.117743961 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:06:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 200 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 283 op/s
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.390 2 DEBUG nova.network.neutron [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updating instance_info_cache with network_info: [{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.424 2 DEBUG oslo_concurrency.lockutils [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.426 2 DEBUG oslo_concurrency.lockutils [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.427 2 DEBUG nova.network.neutron [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing network info cache for port 350a3bec-5dbd-4a83-8d80-5796be0319fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.433 2 DEBUG nova.virt.libvirt.vif [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2131193327',display_name='tempest-tempest.common.compute-instance-2131193327',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2131193327',id=59,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-dzeywdhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=2189eac5-238f-4f09-ae1c-1cf47c3b6030,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.434 2 DEBUG nova.network.os_vif_util [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.436 2 DEBUG nova.network.os_vif_util [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.437 2 DEBUG os_vif [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.440 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.444 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf1ec4d8-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.445 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf1ec4d8-f5, col_values=(('external_ids', {'iface-id': 'df1ec4d8-f543-4899-9d98-b60a6a46cc7c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:db:60', 'vm-uuid': '2189eac5-238f-4f09-ae1c-1cf47c3b6030'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:52 compute-0 NetworkManager[44885]: <info>  [1760432812.4483] manager: (tapdf1ec4d8-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.455 2 INFO os_vif [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5')
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.457 2 DEBUG nova.virt.libvirt.vif [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2131193327',display_name='tempest-tempest.common.compute-instance-2131193327',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2131193327',id=59,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-dzeywdhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=2189eac5-238f-4f09-ae1c-1cf47c3b6030,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.457 2 DEBUG nova.network.os_vif_util [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.458 2 DEBUG nova.network.os_vif_util [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.466 2 DEBUG nova.virt.libvirt.guest [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] attach device xml: <interface type="ethernet">
Oct 14 09:06:52 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:38:db:60"/>
Oct 14 09:06:52 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:06:52 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:06:52 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:06:52 compute-0 nova_compute[259627]:   <target dev="tapdf1ec4d8-f5"/>
Oct 14 09:06:52 compute-0 nova_compute[259627]: </interface>
Oct 14 09:06:52 compute-0 nova_compute[259627]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 14 09:06:52 compute-0 kernel: tapdf1ec4d8-f5: entered promiscuous mode
Oct 14 09:06:52 compute-0 NetworkManager[44885]: <info>  [1760432812.4808] manager: (tapdf1ec4d8-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/268)
Oct 14 09:06:52 compute-0 systemd-udevd[325041]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:06:52 compute-0 ovn_controller[152662]: 2025-10-14T09:06:52Z|00628|binding|INFO|Claiming lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c for this chassis.
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:52 compute-0 ovn_controller[152662]: 2025-10-14T09:06:52Z|00629|binding|INFO|df1ec4d8-f543-4899-9d98-b60a6a46cc7c: Claiming fa:16:3e:38:db:60 10.100.0.12
Oct 14 09:06:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.532 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:db:60 10.100.0.12'], port_security=['fa:16:3e:38:db:60 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-278583686', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2189eac5-238f-4f09-ae1c-1cf47c3b6030', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-278583686', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '7', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=df1ec4d8-f543-4899-9d98-b60a6a46cc7c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:06:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.533 162547 INFO neutron.agent.ovn.metadata.agent [-] Port df1ec4d8-f543-4899-9d98-b60a6a46cc7c in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 bound to our chassis
Oct 14 09:06:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.534 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:06:52 compute-0 NetworkManager[44885]: <info>  [1760432812.5450] device (tapdf1ec4d8-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:06:52 compute-0 NetworkManager[44885]: <info>  [1760432812.5457] device (tapdf1ec4d8-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:06:52 compute-0 ovn_controller[152662]: 2025-10-14T09:06:52Z|00630|binding|INFO|Setting lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c ovn-installed in OVS
Oct 14 09:06:52 compute-0 ovn_controller[152662]: 2025-10-14T09:06:52Z|00631|binding|INFO|Setting lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c up in Southbound
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.553 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1faebdad-a438-4394-b9d0-022734ef13c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.593 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[111b7490-4d1f-4f3b-be36-6e1ff98cadcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.600 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[894dedba-8680-4323-a998-197920e0462f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.612 2 DEBUG nova.virt.libvirt.driver [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.613 2 DEBUG nova.virt.libvirt.driver [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.613 2 DEBUG nova.virt.libvirt.driver [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:9c:3f:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.613 2 DEBUG nova.virt.libvirt.driver [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:38:db:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:06:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.635 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[be6986ea-c8d3-42da-8d9c-4b0ab413a53d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.644 2 DEBUG nova.virt.libvirt.guest [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:06:52 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:06:52 compute-0 nova_compute[259627]:   <nova:name>tempest-tempest.common.compute-instance-2131193327</nova:name>
Oct 14 09:06:52 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:06:52</nova:creationTime>
Oct 14 09:06:52 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:06:52 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:06:52 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:06:52 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:06:52 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:06:52 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:06:52 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:06:52 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:06:52 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:06:52 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:06:52 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:06:52 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:06:52 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:06:52 compute-0 nova_compute[259627]:     <nova:port uuid="350a3bec-5dbd-4a83-8d80-5796be0319fd">
Oct 14 09:06:52 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 14 09:06:52 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:06:52 compute-0 nova_compute[259627]:     <nova:port uuid="df1ec4d8-f543-4899-9d98-b60a6a46cc7c">
Oct 14 09:06:52 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 09:06:52 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:06:52 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:06:52 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:06:52 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 09:06:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.666 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dcfe2d4e-f7c7-4585-a6e1-928c506fe467]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654854, 'reachable_time': 41314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325050, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.672 2 DEBUG oslo_concurrency.lockutils [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-2189eac5-238f-4f09-ae1c-1cf47c3b6030-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:06:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.687 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a01ac541-da31-496d-b05e-7221a40350a2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654865, 'tstamp': 654865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325051, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654867, 'tstamp': 654867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325051, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.689 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.693 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.693 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.694 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.694 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:52 compute-0 ovn_controller[152662]: 2025-10-14T09:06:52Z|00632|binding|INFO|Releasing lport 156432ee-35b5-40a9-aded-8066933d9972 from this chassis (sb_readonly=0)
Oct 14 09:06:52 compute-0 nova_compute[259627]: 2025-10-14 09:06:52.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:52 compute-0 ceph-mon[74249]: pgmap v1538: 305 pgs: 305 active+clean; 200 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 283 op/s
Oct 14 09:06:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 200 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 146 op/s
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.104 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.105 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.124 2 DEBUG nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.206 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.206 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.216 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.217 2 INFO nova.compute.claims [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.341 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.395 2 DEBUG nova.network.neutron [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updated VIF entry in instance network info cache for port 350a3bec-5dbd-4a83-8d80-5796be0319fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.396 2 DEBUG nova.network.neutron [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updating instance_info_cache with network_info: [{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.435 2 DEBUG oslo_concurrency.lockutils [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.436 2 DEBUG oslo_concurrency.lockutils [req-9ab2f455-455a-452f-a560-8669676d9786 req-a2971ba0-4069-4a6a-b172-a9810d6accc3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.437 2 DEBUG nova.network.neutron [req-9ab2f455-455a-452f-a560-8669676d9786 req-a2971ba0-4069-4a6a-b172-a9810d6accc3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing network info cache for port df1ec4d8-f543-4899-9d98-b60a6a46cc7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:06:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:06:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/737460509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.841 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.849 2 DEBUG nova.compute.provider_tree [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.870 2 DEBUG nova.scheduler.client.report [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.896 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.897 2 DEBUG nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:06:54 compute-0 ceph-mon[74249]: pgmap v1539: 305 pgs: 305 active+clean; 200 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 146 op/s
Oct 14 09:06:54 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/737460509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.947 2 DEBUG nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.948 2 DEBUG nova.network.neutron [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:06:54 compute-0 nova_compute[259627]: 2025-10-14 09:06:54.971 2 INFO nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.003 2 DEBUG nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.132 2 DEBUG nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.133 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.134 2 INFO nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Creating image(s)
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.161 2 DEBUG nova.storage.rbd_utils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.191 2 DEBUG nova.storage.rbd_utils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.220 2 DEBUG nova.storage.rbd_utils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.224 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.275 2 DEBUG nova.policy [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a72439ec330b476ca4bb358682159b61', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd39581efff7d48fb83412ca1f615d412', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:06:55 compute-0 ovn_controller[152662]: 2025-10-14T09:06:55Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:38:db:60 10.100.0.12
Oct 14 09:06:55 compute-0 ovn_controller[152662]: 2025-10-14T09:06:55Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:38:db:60 10.100.0.12
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.324 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.324 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.325 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.326 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.350 2 DEBUG nova.storage.rbd_utils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.355 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.468 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432800.4674513, dd55716e-2330-42a4-8963-33bdc9c7bbf8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.469 2 INFO nova.compute.manager [-] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] VM Stopped (Lifecycle Event)
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.501 2 DEBUG nova.compute.manager [None req-2e68dc7e-0c7d-460d-b35e-7ca2d32c5203 - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.643 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.708 2 DEBUG nova.storage.rbd_utils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] resizing rbd image c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:06:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 208 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 89 KiB/s wr, 157 op/s
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.827 2 DEBUG nova.objects.instance [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'migration_context' on Instance uuid c8d53ba7-c60f-4e5c-899f-fd95996ea742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.841 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.842 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Ensure instance console log exists: /var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.843 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.844 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:55 compute-0 nova_compute[259627]: 2025-10-14 09:06:55.844 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:56 compute-0 nova_compute[259627]: 2025-10-14 09:06:56.212 2 DEBUG nova.network.neutron [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Successfully created port: 93e6162a-d037-4440-9c8c-1cb9b293f249 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:06:56 compute-0 nova_compute[259627]: 2025-10-14 09:06:56.702 2 DEBUG nova.network.neutron [req-9ab2f455-455a-452f-a560-8669676d9786 req-a2971ba0-4069-4a6a-b172-a9810d6accc3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updated VIF entry in instance network info cache for port df1ec4d8-f543-4899-9d98-b60a6a46cc7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:06:56 compute-0 nova_compute[259627]: 2025-10-14 09:06:56.703 2 DEBUG nova.network.neutron [req-9ab2f455-455a-452f-a560-8669676d9786 req-a2971ba0-4069-4a6a-b172-a9810d6accc3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updating instance_info_cache with network_info: [{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:56 compute-0 sudo[325241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:06:56 compute-0 sudo[325241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:06:56 compute-0 sudo[325241]: pam_unix(sudo:session): session closed for user root
Oct 14 09:06:56 compute-0 nova_compute[259627]: 2025-10-14 09:06:56.734 2 DEBUG oslo_concurrency.lockutils [req-9ab2f455-455a-452f-a560-8669676d9786 req-a2971ba0-4069-4a6a-b172-a9810d6accc3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:56 compute-0 sudo[325266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:06:56 compute-0 sudo[325266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:06:56 compute-0 sudo[325266]: pam_unix(sudo:session): session closed for user root
Oct 14 09:06:56 compute-0 sudo[325291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:06:56 compute-0 sudo[325291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:06:56 compute-0 sudo[325291]: pam_unix(sudo:session): session closed for user root
Oct 14 09:06:56 compute-0 nova_compute[259627]: 2025-10-14 09:06:56.941 2 DEBUG nova.compute.manager [req-3d661b63-e68d-4265-9270-12385b449894 req-abf9133c-c9ee-4eda-88f5-fcabf30234ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:56 compute-0 nova_compute[259627]: 2025-10-14 09:06:56.942 2 DEBUG oslo_concurrency.lockutils [req-3d661b63-e68d-4265-9270-12385b449894 req-abf9133c-c9ee-4eda-88f5-fcabf30234ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:56 compute-0 nova_compute[259627]: 2025-10-14 09:06:56.942 2 DEBUG oslo_concurrency.lockutils [req-3d661b63-e68d-4265-9270-12385b449894 req-abf9133c-c9ee-4eda-88f5-fcabf30234ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:56 compute-0 nova_compute[259627]: 2025-10-14 09:06:56.942 2 DEBUG oslo_concurrency.lockutils [req-3d661b63-e68d-4265-9270-12385b449894 req-abf9133c-c9ee-4eda-88f5-fcabf30234ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:56 compute-0 nova_compute[259627]: 2025-10-14 09:06:56.942 2 DEBUG nova.compute.manager [req-3d661b63-e68d-4265-9270-12385b449894 req-abf9133c-c9ee-4eda-88f5-fcabf30234ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] No waiting events found dispatching network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:56 compute-0 nova_compute[259627]: 2025-10-14 09:06:56.942 2 WARNING nova.compute.manager [req-3d661b63-e68d-4265-9270-12385b449894 req-abf9133c-c9ee-4eda-88f5-fcabf30234ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received unexpected event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c for instance with vm_state active and task_state None.
Oct 14 09:06:56 compute-0 sudo[325316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 14 09:06:56 compute-0 ceph-mon[74249]: pgmap v1540: 305 pgs: 305 active+clean; 208 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 89 KiB/s wr, 157 op/s
Oct 14 09:06:56 compute-0 sudo[325316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:06:57 compute-0 nova_compute[259627]: 2025-10-14 09:06:57.259 2 DEBUG nova.network.neutron [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Successfully updated port: 93e6162a-d037-4440-9c8c-1cb9b293f249 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:06:57 compute-0 nova_compute[259627]: 2025-10-14 09:06:57.277 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:57 compute-0 nova_compute[259627]: 2025-10-14 09:06:57.278 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquired lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:57 compute-0 nova_compute[259627]: 2025-10-14 09:06:57.278 2 DEBUG nova.network.neutron [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:06:57 compute-0 nova_compute[259627]: 2025-10-14 09:06:57.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:57 compute-0 nova_compute[259627]: 2025-10-14 09:06:57.368 2 DEBUG nova.compute.manager [req-8f090518-51c4-4f4e-8942-84c6cb1406d0 req-57c6fb81-5c16-4623-ac24-2cd6c68ed593 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Received event network-changed-93e6162a-d037-4440-9c8c-1cb9b293f249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:57 compute-0 nova_compute[259627]: 2025-10-14 09:06:57.368 2 DEBUG nova.compute.manager [req-8f090518-51c4-4f4e-8942-84c6cb1406d0 req-57c6fb81-5c16-4623-ac24-2cd6c68ed593 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Refreshing instance network info cache due to event network-changed-93e6162a-d037-4440-9c8c-1cb9b293f249. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:06:57 compute-0 nova_compute[259627]: 2025-10-14 09:06:57.369 2 DEBUG oslo_concurrency.lockutils [req-8f090518-51c4-4f4e-8942-84c6cb1406d0 req-57c6fb81-5c16-4623-ac24-2cd6c68ed593 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:06:57 compute-0 nova_compute[259627]: 2025-10-14 09:06:57.448 2 DEBUG nova.network.neutron [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:06:57 compute-0 nova_compute[259627]: 2025-10-14 09:06:57.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:57 compute-0 podman[325414]: 2025-10-14 09:06:57.643629187 +0000 UTC m=+0.098432135 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 09:06:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:06:57 compute-0 podman[325414]: 2025-10-14 09:06:57.745337032 +0000 UTC m=+0.200139870 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:06:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 208 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 74 KiB/s wr, 45 op/s
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.164 2 DEBUG oslo_concurrency.lockutils [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-2189eac5-238f-4f09-ae1c-1cf47c3b6030-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.164 2 DEBUG oslo_concurrency.lockutils [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-2189eac5-238f-4f09-ae1c-1cf47c3b6030-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.186 2 DEBUG nova.objects.instance [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid 2189eac5-238f-4f09-ae1c-1cf47c3b6030 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.211 2 DEBUG nova.virt.libvirt.vif [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2131193327',display_name='tempest-tempest.common.compute-instance-2131193327',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2131193327',id=59,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-dzeywdhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=2189eac5-238f-4f09-ae1c-1cf47c3b6030,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.211 2 DEBUG nova.network.os_vif_util [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.213 2 DEBUG nova.network.os_vif_util [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.216 2 DEBUG nova.virt.libvirt.guest [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.219 2 DEBUG nova.virt.libvirt.guest [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.225 2 DEBUG nova.virt.libvirt.driver [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Attempting to detach device tapdf1ec4d8-f5 from instance 2189eac5-238f-4f09-ae1c-1cf47c3b6030 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.226 2 DEBUG nova.virt.libvirt.guest [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] detach device xml: <interface type="ethernet">
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:38:db:60"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <target dev="tapdf1ec4d8-f5"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]: </interface>
Oct 14 09:06:58 compute-0 nova_compute[259627]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.234 2 DEBUG nova.virt.libvirt.guest [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.238 2 DEBUG nova.virt.libvirt.guest [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface>not found in domain: <domain type='kvm' id='74'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <name>instance-0000003b</name>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <uuid>2189eac5-238f-4f09-ae1c-1cf47c3b6030</uuid>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:name>tempest-tempest.common.compute-instance-2131193327</nova:name>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:06:52</nova:creationTime>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:port uuid="350a3bec-5dbd-4a83-8d80-5796be0319fd">
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:port uuid="df1ec4d8-f543-4899-9d98-b60a6a46cc7c">
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:06:58 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <system>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <entry name='serial'>2189eac5-238f-4f09-ae1c-1cf47c3b6030</entry>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <entry name='uuid'>2189eac5-238f-4f09-ae1c-1cf47c3b6030</entry>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </system>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <os>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </os>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <features>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </features>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk' index='2'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk.config' index='1'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:9c:3f:ae'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target dev='tap350a3bec-5d'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:38:db:60'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target dev='tapdf1ec4d8-f5'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='net1'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <source path='/dev/pts/2'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/console.log' append='off'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       </target>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/2'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <source path='/dev/pts/2'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/console.log' append='off'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </console>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </input>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </input>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </input>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <video>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </video>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c87,c674</label>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c87,c674</imagelabel>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:06:58 compute-0 nova_compute[259627]: </domain>
Oct 14 09:06:58 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.239 2 INFO nova.virt.libvirt.driver [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully detached device tapdf1ec4d8-f5 from instance 2189eac5-238f-4f09-ae1c-1cf47c3b6030 from the persistent domain config.
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.239 2 DEBUG nova.virt.libvirt.driver [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] (1/8): Attempting to detach device tapdf1ec4d8-f5 with device alias net1 from instance 2189eac5-238f-4f09-ae1c-1cf47c3b6030 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.240 2 DEBUG nova.virt.libvirt.guest [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] detach device xml: <interface type="ethernet">
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:38:db:60"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <target dev="tapdf1ec4d8-f5"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]: </interface>
Oct 14 09:06:58 compute-0 nova_compute[259627]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 14 09:06:58 compute-0 kernel: tapdf1ec4d8-f5 (unregistering): left promiscuous mode
Oct 14 09:06:58 compute-0 NetworkManager[44885]: <info>  [1760432818.3459] device (tapdf1ec4d8-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:58 compute-0 ovn_controller[152662]: 2025-10-14T09:06:58Z|00633|binding|INFO|Releasing lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c from this chassis (sb_readonly=0)
Oct 14 09:06:58 compute-0 ovn_controller[152662]: 2025-10-14T09:06:58Z|00634|binding|INFO|Setting lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c down in Southbound
Oct 14 09:06:58 compute-0 ovn_controller[152662]: 2025-10-14T09:06:58Z|00635|binding|INFO|Removing iface tapdf1ec4d8-f5 ovn-installed in OVS
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.370 2 DEBUG nova.virt.libvirt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Received event <DeviceRemovedEvent: 1760432818.3700905, 2189eac5-238f-4f09-ae1c-1cf47c3b6030 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.371 2 DEBUG nova.virt.libvirt.driver [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Start waiting for the detach event from libvirt for device tapdf1ec4d8-f5 with device alias net1 for instance 2189eac5-238f-4f09-ae1c-1cf47c3b6030 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.372 2 DEBUG nova.virt.libvirt.guest [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:06:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.374 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:db:60 10.100.0.12'], port_security=['fa:16:3e:38:db:60 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-278583686', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2189eac5-238f-4f09-ae1c-1cf47c3b6030', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-278583686', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '9', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=df1ec4d8-f543-4899-9d98-b60a6a46cc7c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:06:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.375 162547 INFO neutron.agent.ovn.metadata.agent [-] Port df1ec4d8-f543-4899-9d98-b60a6a46cc7c in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis
Oct 14 09:06:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.378 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.378 2 DEBUG nova.virt.libvirt.guest [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface>not found in domain: <domain type='kvm' id='74'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <name>instance-0000003b</name>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <uuid>2189eac5-238f-4f09-ae1c-1cf47c3b6030</uuid>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:name>tempest-tempest.common.compute-instance-2131193327</nova:name>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:06:52</nova:creationTime>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:port uuid="350a3bec-5dbd-4a83-8d80-5796be0319fd">
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:port uuid="df1ec4d8-f543-4899-9d98-b60a6a46cc7c">
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:06:58 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <system>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <entry name='serial'>2189eac5-238f-4f09-ae1c-1cf47c3b6030</entry>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <entry name='uuid'>2189eac5-238f-4f09-ae1c-1cf47c3b6030</entry>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </system>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <os>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </os>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <features>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </features>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk' index='2'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk.config' index='1'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:9c:3f:ae'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target dev='tap350a3bec-5d'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <source path='/dev/pts/2'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/console.log' append='off'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       </target>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/2'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <source path='/dev/pts/2'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/console.log' append='off'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </console>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </input>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </input>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </input>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <video>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </video>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c87,c674</label>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c87,c674</imagelabel>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:06:58 compute-0 nova_compute[259627]: </domain>
Oct 14 09:06:58 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.379 2 INFO nova.virt.libvirt.driver [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully detached device tapdf1ec4d8-f5 from instance 2189eac5-238f-4f09-ae1c-1cf47c3b6030 from the live domain config.
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.380 2 DEBUG nova.virt.libvirt.vif [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2131193327',display_name='tempest-tempest.common.compute-instance-2131193327',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2131193327',id=59,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-dzeywdhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=2189eac5-238f-4f09-ae1c-1cf47c3b6030,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.380 2 DEBUG nova.network.os_vif_util [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.381 2 DEBUG nova.network.os_vif_util [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.382 2 DEBUG os_vif [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.385 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf1ec4d8-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.395 2 INFO os_vif [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5')
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.396 2 DEBUG nova.virt.libvirt.guest [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:name>tempest-tempest.common.compute-instance-2131193327</nova:name>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:06:58</nova:creationTime>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     <nova:port uuid="350a3bec-5dbd-4a83-8d80-5796be0319fd">
Oct 14 09:06:58 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 14 09:06:58 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:06:58 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:06:58 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:06:58 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 09:06:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.398 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b9546b44-9c1f-4c4f-90ca-df6cf57044d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.433 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f39c19ef-88a4-4cb5-aa62-fe11bf116c4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.436 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4bac93-517a-4bac-999f-61203d97480a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:58 compute-0 sudo[325316]: pam_unix(sudo:session): session closed for user root
Oct 14 09:06:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:06:58 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:06:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:06:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.468 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ae564788-0044-4918-90ae-0c8860a71f4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:58 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:06:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.483 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[84cac4af-f6c4-414d-a099-68d42e6f7375]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654854, 'reachable_time': 41314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325585, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.497 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[18bcfa63-2bce-4379-8d37-a09084ecc168]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654865, 'tstamp': 654865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325592, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654867, 'tstamp': 654867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325592, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:06:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.499 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.502 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.502 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.502 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.503 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:58 compute-0 sudo[325586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:06:58 compute-0 sudo[325586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:06:58 compute-0 sudo[325586]: pam_unix(sudo:session): session closed for user root
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.584 2 DEBUG nova.network.neutron [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Updating instance_info_cache with network_info: [{"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.600 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Releasing lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.601 2 DEBUG nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Instance network_info: |[{"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.603 2 DEBUG oslo_concurrency.lockutils [req-8f090518-51c4-4f4e-8942-84c6cb1406d0 req-57c6fb81-5c16-4623-ac24-2cd6c68ed593 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.604 2 DEBUG nova.network.neutron [req-8f090518-51c4-4f4e-8942-84c6cb1406d0 req-57c6fb81-5c16-4623-ac24-2cd6c68ed593 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Refreshing network info cache for port 93e6162a-d037-4440-9c8c-1cb9b293f249 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:06:58 compute-0 sudo[325612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:06:58 compute-0 sudo[325612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:06:58 compute-0 sudo[325612]: pam_unix(sudo:session): session closed for user root
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.611 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Start _get_guest_xml network_info=[{"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.624 2 WARNING nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.631 2 DEBUG nova.virt.libvirt.host [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.633 2 DEBUG nova.virt.libvirt.host [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.642 2 DEBUG nova.virt.libvirt.host [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.644 2 DEBUG nova.virt.libvirt.host [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.645 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.645 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.646 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.646 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.647 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.647 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.648 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.648 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.648 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.649 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.649 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.649 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:06:58 compute-0 nova_compute[259627]: 2025-10-14 09:06:58.654 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:58 compute-0 sudo[325637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:06:58 compute-0 sudo[325637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:06:58 compute-0 sudo[325637]: pam_unix(sudo:session): session closed for user root
Oct 14 09:06:58 compute-0 sudo[325663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:06:58 compute-0 sudo[325663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:06:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:06:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1543137327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.092 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.119 2 DEBUG nova.storage.rbd_utils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.122 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.201 2 DEBUG nova.compute.manager [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.201 2 DEBUG oslo_concurrency.lockutils [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.202 2 DEBUG oslo_concurrency.lockutils [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.202 2 DEBUG oslo_concurrency.lockutils [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.202 2 DEBUG nova.compute.manager [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] No waiting events found dispatching network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.203 2 WARNING nova.compute.manager [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received unexpected event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c for instance with vm_state active and task_state None.
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.203 2 DEBUG nova.compute.manager [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-unplugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.203 2 DEBUG oslo_concurrency.lockutils [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.203 2 DEBUG oslo_concurrency.lockutils [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.203 2 DEBUG oslo_concurrency.lockutils [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.204 2 DEBUG nova.compute.manager [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] No waiting events found dispatching network-vif-unplugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.204 2 WARNING nova.compute.manager [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received unexpected event network-vif-unplugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c for instance with vm_state active and task_state None.
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.204 2 DEBUG nova.compute.manager [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.204 2 DEBUG oslo_concurrency.lockutils [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.205 2 DEBUG oslo_concurrency.lockutils [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.205 2 DEBUG oslo_concurrency.lockutils [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.205 2 DEBUG nova.compute.manager [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] No waiting events found dispatching network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.205 2 WARNING nova.compute.manager [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received unexpected event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c for instance with vm_state active and task_state None.
Oct 14 09:06:59 compute-0 sudo[325663]: pam_unix(sudo:session): session closed for user root
Oct 14 09:06:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:06:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:06:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:06:59 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:06:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:06:59 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:06:59 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 88bd820f-451c-4a7f-aa33-881c67a4960b does not exist
Oct 14 09:06:59 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 31fe626f-6c82-4b46-a17d-13b4d0ea4349 does not exist
Oct 14 09:06:59 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev fb9b39aa-4b7c-4ac2-b09a-7718b7f44dcd does not exist
Oct 14 09:06:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:06:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:06:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:06:59 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:06:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:06:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:06:59 compute-0 ceph-mon[74249]: pgmap v1541: 305 pgs: 305 active+clean; 208 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 74 KiB/s wr, 45 op/s
Oct 14 09:06:59 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:06:59 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:06:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1543137327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:59 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:06:59 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:06:59 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:06:59 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:06:59 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:06:59 compute-0 sudo[325779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:06:59 compute-0 sudo[325779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:06:59 compute-0 sudo[325779]: pam_unix(sudo:session): session closed for user root
Oct 14 09:06:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:06:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204475229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.621 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.625 2 DEBUG nova.virt.libvirt.vif [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1969379492',display_name='tempest-DeleteServersTestJSON-server-1969379492',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1969379492',id=63,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-399cflte',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:55Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=c8d53ba7-c60f-4e5c-899f-fd95996ea742,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.626 2 DEBUG nova.network.os_vif_util [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.628 2 DEBUG nova.network.os_vif_util [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:f7:ed,bridge_name='br-int',has_traffic_filtering=True,id=93e6162a-d037-4440-9c8c-1cb9b293f249,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93e6162a-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.631 2 DEBUG nova.objects.instance [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'pci_devices' on Instance uuid c8d53ba7-c60f-4e5c-899f-fd95996ea742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:06:59 compute-0 sudo[325804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.655 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:06:59 compute-0 nova_compute[259627]:   <uuid>c8d53ba7-c60f-4e5c-899f-fd95996ea742</uuid>
Oct 14 09:06:59 compute-0 nova_compute[259627]:   <name>instance-0000003f</name>
Oct 14 09:06:59 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:06:59 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:06:59 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <nova:name>tempest-DeleteServersTestJSON-server-1969379492</nova:name>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:06:58</nova:creationTime>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:06:59 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:06:59 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:06:59 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:06:59 compute-0 sudo[325804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:06:59 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:06:59 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:06:59 compute-0 nova_compute[259627]:         <nova:user uuid="a72439ec330b476ca4bb358682159b61">tempest-DeleteServersTestJSON-555285866-project-member</nova:user>
Oct 14 09:06:59 compute-0 nova_compute[259627]:         <nova:project uuid="d39581efff7d48fb83412ca1f615d412">tempest-DeleteServersTestJSON-555285866</nova:project>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:06:59 compute-0 nova_compute[259627]:         <nova:port uuid="93e6162a-d037-4440-9c8c-1cb9b293f249">
Oct 14 09:06:59 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:06:59 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:06:59 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <system>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <entry name="serial">c8d53ba7-c60f-4e5c-899f-fd95996ea742</entry>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <entry name="uuid">c8d53ba7-c60f-4e5c-899f-fd95996ea742</entry>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     </system>
Oct 14 09:06:59 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:06:59 compute-0 nova_compute[259627]:   <os>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:   </os>
Oct 14 09:06:59 compute-0 nova_compute[259627]:   <features>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:   </features>
Oct 14 09:06:59 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:06:59 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:06:59 compute-0 sudo[325804]: pam_unix(sudo:session): session closed for user root
Oct 14 09:06:59 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:06:59 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk">
Oct 14 09:06:59 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:06:59 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk.config">
Oct 14 09:06:59 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       </source>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:06:59 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:13:f7:ed"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <target dev="tap93e6162a-d0"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742/console.log" append="off"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <video>
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     </video>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:06:59 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:06:59 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:06:59 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:06:59 compute-0 nova_compute[259627]: </domain>
Oct 14 09:06:59 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.656 2 DEBUG nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Preparing to wait for external event network-vif-plugged-93e6162a-d037-4440-9c8c-1cb9b293f249 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.657 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.657 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.658 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.660 2 DEBUG nova.virt.libvirt.vif [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1969379492',display_name='tempest-DeleteServersTestJSON-server-1969379492',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1969379492',id=63,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-399cflte',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:55Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=c8d53ba7-c60f-4e5c-899f-fd95996ea742,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.660 2 DEBUG nova.network.os_vif_util [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.661 2 DEBUG nova.network.os_vif_util [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:f7:ed,bridge_name='br-int',has_traffic_filtering=True,id=93e6162a-d037-4440-9c8c-1cb9b293f249,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93e6162a-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.662 2 DEBUG os_vif [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:f7:ed,bridge_name='br-int',has_traffic_filtering=True,id=93e6162a-d037-4440-9c8c-1cb9b293f249,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93e6162a-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.664 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.665 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.677 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93e6162a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.678 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap93e6162a-d0, col_values=(('external_ids', {'iface-id': '93e6162a-d037-4440-9c8c-1cb9b293f249', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:f7:ed', 'vm-uuid': 'c8d53ba7-c60f-4e5c-899f-fd95996ea742'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:06:59 compute-0 NetworkManager[44885]: <info>  [1760432819.6821] manager: (tap93e6162a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.692 2 INFO os_vif [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:f7:ed,bridge_name='br-int',has_traffic_filtering=True,id=93e6162a-d037-4440-9c8c-1cb9b293f249,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93e6162a-d0')
Oct 14 09:06:59 compute-0 sudo[325831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:06:59 compute-0 sudo[325831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:06:59 compute-0 sudo[325831]: pam_unix(sudo:session): session closed for user root
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.751 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.752 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.752 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No VIF found with MAC fa:16:3e:13:f7:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.753 2 INFO nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Using config drive
Oct 14 09:06:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 208 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 74 KiB/s wr, 45 op/s
Oct 14 09:06:59 compute-0 nova_compute[259627]: 2025-10-14 09:06:59.787 2 DEBUG nova.storage.rbd_utils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:06:59 compute-0 sudo[325859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:06:59 compute-0 sudo[325859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:07:00 compute-0 nova_compute[259627]: 2025-10-14 09:07:00.116 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432805.115147, 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:07:00 compute-0 nova_compute[259627]: 2025-10-14 09:07:00.117 2 INFO nova.compute.manager [-] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] VM Stopped (Lifecycle Event)
Oct 14 09:07:00 compute-0 nova_compute[259627]: 2025-10-14 09:07:00.145 2 DEBUG nova.compute.manager [None req-26f19152-467c-4624-8edd-49861307e52e - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:00 compute-0 podman[325942]: 2025-10-14 09:07:00.225431638 +0000 UTC m=+0.060040749 container create 2c8420a61a5cbd7a903d27c33fbd380b0c68b3d1de48de4f314188da7b121fe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_snyder, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:07:00 compute-0 systemd[1]: Started libpod-conmon-2c8420a61a5cbd7a903d27c33fbd380b0c68b3d1de48de4f314188da7b121fe3.scope.
Oct 14 09:07:00 compute-0 podman[325942]: 2025-10-14 09:07:00.205158499 +0000 UTC m=+0.039767640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:07:00 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:07:00 compute-0 podman[325942]: 2025-10-14 09:07:00.333422848 +0000 UTC m=+0.168032029 container init 2c8420a61a5cbd7a903d27c33fbd380b0c68b3d1de48de4f314188da7b121fe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_snyder, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 09:07:00 compute-0 podman[325942]: 2025-10-14 09:07:00.346609213 +0000 UTC m=+0.181218314 container start 2c8420a61a5cbd7a903d27c33fbd380b0c68b3d1de48de4f314188da7b121fe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_snyder, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 09:07:00 compute-0 podman[325942]: 2025-10-14 09:07:00.35015382 +0000 UTC m=+0.184762981 container attach 2c8420a61a5cbd7a903d27c33fbd380b0c68b3d1de48de4f314188da7b121fe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:07:00 compute-0 condescending_snyder[325956]: 167 167
Oct 14 09:07:00 compute-0 systemd[1]: libpod-2c8420a61a5cbd7a903d27c33fbd380b0c68b3d1de48de4f314188da7b121fe3.scope: Deactivated successfully.
Oct 14 09:07:00 compute-0 podman[325942]: 2025-10-14 09:07:00.35542119 +0000 UTC m=+0.190030331 container died 2c8420a61a5cbd7a903d27c33fbd380b0c68b3d1de48de4f314188da7b121fe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:07:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca784fa99719cf0a1715223159a56a3be4c3e34e6880899df326d954924e1da7-merged.mount: Deactivated successfully.
Oct 14 09:07:00 compute-0 podman[325942]: 2025-10-14 09:07:00.412617938 +0000 UTC m=+0.247227049 container remove 2c8420a61a5cbd7a903d27c33fbd380b0c68b3d1de48de4f314188da7b121fe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 09:07:00 compute-0 systemd[1]: libpod-conmon-2c8420a61a5cbd7a903d27c33fbd380b0c68b3d1de48de4f314188da7b121fe3.scope: Deactivated successfully.
Oct 14 09:07:00 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:07:00 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4204475229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:07:00 compute-0 podman[325981]: 2025-10-14 09:07:00.616221322 +0000 UTC m=+0.044767743 container create 4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_black, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 09:07:00 compute-0 podman[325981]: 2025-10-14 09:07:00.595859921 +0000 UTC m=+0.024406312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:07:00 compute-0 systemd[1]: Started libpod-conmon-4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3.scope.
Oct 14 09:07:00 compute-0 nova_compute[259627]: 2025-10-14 09:07:00.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:00 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:07:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3489470a31c6ea86aa2b03cf10cd0a194066e3e54c47c38781a7a9fd32824514/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:07:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3489470a31c6ea86aa2b03cf10cd0a194066e3e54c47c38781a7a9fd32824514/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:07:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3489470a31c6ea86aa2b03cf10cd0a194066e3e54c47c38781a7a9fd32824514/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:07:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3489470a31c6ea86aa2b03cf10cd0a194066e3e54c47c38781a7a9fd32824514/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:07:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3489470a31c6ea86aa2b03cf10cd0a194066e3e54c47c38781a7a9fd32824514/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:07:00 compute-0 nova_compute[259627]: 2025-10-14 09:07:00.740 2 DEBUG nova.network.neutron [req-8f090518-51c4-4f4e-8942-84c6cb1406d0 req-57c6fb81-5c16-4623-ac24-2cd6c68ed593 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Updated VIF entry in instance network info cache for port 93e6162a-d037-4440-9c8c-1cb9b293f249. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:07:00 compute-0 nova_compute[259627]: 2025-10-14 09:07:00.740 2 DEBUG nova.network.neutron [req-8f090518-51c4-4f4e-8942-84c6cb1406d0 req-57c6fb81-5c16-4623-ac24-2cd6c68ed593 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Updating instance_info_cache with network_info: [{"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:07:00 compute-0 podman[325981]: 2025-10-14 09:07:00.743964828 +0000 UTC m=+0.172511229 container init 4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_black, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 09:07:00 compute-0 podman[325981]: 2025-10-14 09:07:00.753466852 +0000 UTC m=+0.182013233 container start 4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:07:00 compute-0 podman[325981]: 2025-10-14 09:07:00.757370738 +0000 UTC m=+0.185917149 container attach 4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 09:07:00 compute-0 nova_compute[259627]: 2025-10-14 09:07:00.780 2 DEBUG oslo_concurrency.lockutils [req-8f090518-51c4-4f4e-8942-84c6cb1406d0 req-57c6fb81-5c16-4623-ac24-2cd6c68ed593 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:07:00 compute-0 nova_compute[259627]: 2025-10-14 09:07:00.796 2 DEBUG oslo_concurrency.lockutils [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:07:00 compute-0 nova_compute[259627]: 2025-10-14 09:07:00.797 2 DEBUG oslo_concurrency.lockutils [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:07:00 compute-0 nova_compute[259627]: 2025-10-14 09:07:00.797 2 DEBUG nova.network.neutron [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:07:00 compute-0 nova_compute[259627]: 2025-10-14 09:07:00.997 2 INFO nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Creating config drive at /var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742/disk.config
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.007 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn7kes8fn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.157 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn7kes8fn" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.184 2 DEBUG nova.storage.rbd_utils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.188 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742/disk.config c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.242 2 DEBUG oslo_concurrency.lockutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.243 2 DEBUG oslo_concurrency.lockutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.243 2 DEBUG oslo_concurrency.lockutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.243 2 DEBUG oslo_concurrency.lockutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.243 2 DEBUG oslo_concurrency.lockutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.244 2 INFO nova.compute.manager [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Terminating instance
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.245 2 DEBUG nova.compute.manager [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:07:01 compute-0 kernel: tap350a3bec-5d (unregistering): left promiscuous mode
Oct 14 09:07:01 compute-0 NetworkManager[44885]: <info>  [1760432821.3051] device (tap350a3bec-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:07:01 compute-0 ovn_controller[152662]: 2025-10-14T09:07:01Z|00636|binding|INFO|Releasing lport 350a3bec-5dbd-4a83-8d80-5796be0319fd from this chassis (sb_readonly=0)
Oct 14 09:07:01 compute-0 ovn_controller[152662]: 2025-10-14T09:07:01Z|00637|binding|INFO|Setting lport 350a3bec-5dbd-4a83-8d80-5796be0319fd down in Southbound
Oct 14 09:07:01 compute-0 ovn_controller[152662]: 2025-10-14T09:07:01Z|00638|binding|INFO|Removing iface tap350a3bec-5d ovn-installed in OVS
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.331 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:3f:ae 10.100.0.6'], port_security=['fa:16:3e:9c:3f:ae 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2189eac5-238f-4f09-ae1c-1cf47c3b6030', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e6192a40-cbd8-43eb-9955-4fede99ddb79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=350a3bec-5dbd-4a83-8d80-5796be0319fd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.333 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 350a3bec-5dbd-4a83-8d80-5796be0319fd in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.336 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.359 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4d761962-34e4-4cdd-b30b-b650dfe2e7b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Oct 14 09:07:01 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000003b.scope: Consumed 14.114s CPU time.
Oct 14 09:07:01 compute-0 systemd-machined[214636]: Machine qemu-74-instance-0000003b terminated.
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.418 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5abc18-24eb-4296-8a3b-a60bdf934b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.423 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c160f1ef-84f8-49b8-821a-028e5b989f7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.427 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742/disk.config c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.428 2 INFO nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Deleting local config drive /var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742/disk.config because it was imported into RBD.
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.458 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6e1ed5d4-5082-4d13-9a6a-d0bb865d4964]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.474 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e234958a-0765-4ad0-aff1-f93a3e09a06d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 1000, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 1000, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654854, 'reachable_time': 41314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326061, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.499 2 INFO nova.virt.libvirt.driver [-] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Instance destroyed successfully.
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.499 2 DEBUG nova.objects.instance [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'resources' on Instance uuid 2189eac5-238f-4f09-ae1c-1cf47c3b6030 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:07:01 compute-0 ceph-mon[74249]: pgmap v1542: 305 pgs: 305 active+clean; 208 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 74 KiB/s wr, 45 op/s
Oct 14 09:07:01 compute-0 NetworkManager[44885]: <info>  [1760432821.5048] manager: (tap93e6162a-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/270)
Oct 14 09:07:01 compute-0 systemd-udevd[326085]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.508 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7e5109b6-0761-4c44-9ced-43e27887d87e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654865, 'tstamp': 654865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326073, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654867, 'tstamp': 654867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326073, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.510 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:01 compute-0 kernel: tap93e6162a-d0: entered promiscuous mode
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.514 2 DEBUG nova.virt.libvirt.vif [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2131193327',display_name='tempest-tempest.common.compute-instance-2131193327',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2131193327',id=59,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-dzeywdhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=2189eac5-238f-4f09-ae1c-1cf47c3b6030,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.514 2 DEBUG nova.network.os_vif_util [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.515 2 DEBUG nova.network.os_vif_util [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:3f:ae,bridge_name='br-int',has_traffic_filtering=True,id=350a3bec-5dbd-4a83-8d80-5796be0319fd,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap350a3bec-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.515 2 DEBUG os_vif [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:3f:ae,bridge_name='br-int',has_traffic_filtering=True,id=350a3bec-5dbd-4a83-8d80-5796be0319fd,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap350a3bec-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap350a3bec-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:01 compute-0 NetworkManager[44885]: <info>  [1760432821.5215] device (tap93e6162a-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:07:01 compute-0 ovn_controller[152662]: 2025-10-14T09:07:01Z|00639|binding|INFO|Claiming lport 93e6162a-d037-4440-9c8c-1cb9b293f249 for this chassis.
Oct 14 09:07:01 compute-0 ovn_controller[152662]: 2025-10-14T09:07:01Z|00640|binding|INFO|93e6162a-d037-4440-9c8c-1cb9b293f249: Claiming fa:16:3e:13:f7:ed 10.100.0.13
Oct 14 09:07:01 compute-0 NetworkManager[44885]: <info>  [1760432821.5226] device (tap93e6162a-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.532 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:f7:ed 10.100.0.13'], port_security=['fa:16:3e:13:f7:ed 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c8d53ba7-c60f-4e5c-899f-fd95996ea742', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd39581efff7d48fb83412ca1f615d412', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd5e08c6-d8c1-45fb-85db-6ce5c46fea39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c165cf5-3bad-4110-8321-7f7bda9723ad, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=93e6162a-d037-4440-9c8c-1cb9b293f249) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:07:01 compute-0 ovn_controller[152662]: 2025-10-14T09:07:01Z|00641|binding|INFO|Setting lport 93e6162a-d037-4440-9c8c-1cb9b293f249 ovn-installed in OVS
Oct 14 09:07:01 compute-0 ovn_controller[152662]: 2025-10-14T09:07:01Z|00642|binding|INFO|Setting lport 93e6162a-d037-4440-9c8c-1cb9b293f249 up in Southbound
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.553 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.553 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.554 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.554 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.555 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 93e6162a-d037-4440-9c8c-1cb9b293f249 in datapath 0a07d59e-be8b-4d41-a103-fb5a64bf6f88 bound to our chassis
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.555 2 INFO os_vif [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:3f:ae,bridge_name='br-int',has_traffic_filtering=True,id=350a3bec-5dbd-4a83-8d80-5796be0319fd,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap350a3bec-5d')
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.556 2 DEBUG nova.virt.libvirt.vif [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2131193327',display_name='tempest-tempest.common.compute-instance-2131193327',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2131193327',id=59,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-dzeywdhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=2189eac5-238f-4f09-ae1c-1cf47c3b6030,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.556 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.556 2 DEBUG nova.network.os_vif_util [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.557 2 DEBUG nova.network.os_vif_util [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.557 2 DEBUG os_vif [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.559 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf1ec4d8-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.559 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.561 2 INFO os_vif [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5')
Oct 14 09:07:01 compute-0 systemd-machined[214636]: New machine qemu-77-instance-0000003f.
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.568 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[55623d5c-e7c4-4478-8800-fa751827bc78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.569 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a07d59e-b1 in ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.570 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a07d59e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.571 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[04c6920b-c0fa-469b-a7d5-365801dd9248]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.571 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e85350-26c8-49ab-8ffa-cf3a48307929]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 systemd[1]: Started Virtual Machine qemu-77-instance-0000003f.
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.587 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[23aa2b60-ab27-44b1-a327-7aaff715328b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.601 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0d8a58a9-8245-4f34-90c8-31779fb8bfd6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.629 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e62667d1-412a-4b67-8ca3-35a4752dde56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.637 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c568cc-2ba4-41c0-b67a-3664da9fa653]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 NetworkManager[44885]: <info>  [1760432821.6384] manager: (tap0a07d59e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/271)
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.669 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba4775b-0ceb-43b6-afdd-a5365a4d4e5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.672 2 DEBUG nova.compute.manager [req-e50e99a7-7ae1-4e52-9f55-32911d2d73ed req-3d24af0b-2106-4041-9da2-d569b73079ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-unplugged-350a3bec-5dbd-4a83-8d80-5796be0319fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.673 2 DEBUG oslo_concurrency.lockutils [req-e50e99a7-7ae1-4e52-9f55-32911d2d73ed req-3d24af0b-2106-4041-9da2-d569b73079ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.673 2 DEBUG oslo_concurrency.lockutils [req-e50e99a7-7ae1-4e52-9f55-32911d2d73ed req-3d24af0b-2106-4041-9da2-d569b73079ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.673 2 DEBUG oslo_concurrency.lockutils [req-e50e99a7-7ae1-4e52-9f55-32911d2d73ed req-3d24af0b-2106-4041-9da2-d569b73079ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.673 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3287b4bf-1e38-45cd-b3b4-21661b5156f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.673 2 DEBUG nova.compute.manager [req-e50e99a7-7ae1-4e52-9f55-32911d2d73ed req-3d24af0b-2106-4041-9da2-d569b73079ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] No waiting events found dispatching network-vif-unplugged-350a3bec-5dbd-4a83-8d80-5796be0319fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.673 2 DEBUG nova.compute.manager [req-e50e99a7-7ae1-4e52-9f55-32911d2d73ed req-3d24af0b-2106-4041-9da2-d569b73079ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-unplugged-350a3bec-5dbd-4a83-8d80-5796be0319fd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:07:01 compute-0 NetworkManager[44885]: <info>  [1760432821.7035] device (tap0a07d59e-b0): carrier: link connected
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.708 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c2629d-b819-49f7-8634-b6f9fd8df6c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.725 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6866a829-d2d0-446a-8f6a-cd43519d7ed6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a07d59e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2e:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660046, 'reachable_time': 29093, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326152, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.748 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[761a8d1d-e3b6-4c71-923a-1a28cd57c9cd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:2e1a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 660046, 'tstamp': 660046}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326155, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 246 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.771 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ed30fc04-b496-4fb6-828d-ee35eea984f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a07d59e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2e:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660046, 'reachable_time': 29093, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326156, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 epic_black[325998]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:07:01 compute-0 epic_black[325998]: --> relative data size: 1.0
Oct 14 09:07:01 compute-0 epic_black[325998]: --> All data devices are unavailable
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.806 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c941cb75-6649-4e9a-af56-52f88fdbce8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 systemd[1]: libpod-4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3.scope: Deactivated successfully.
Oct 14 09:07:01 compute-0 conmon[325998]: conmon 4422c108f94c6a1d0de0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3.scope/container/memory.events
Oct 14 09:07:01 compute-0 podman[325981]: 2025-10-14 09:07:01.833414279 +0000 UTC m=+1.261960670 container died 4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 09:07:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-3489470a31c6ea86aa2b03cf10cd0a194066e3e54c47c38781a7a9fd32824514-merged.mount: Deactivated successfully.
Oct 14 09:07:01 compute-0 podman[325981]: 2025-10-14 09:07:01.896084362 +0000 UTC m=+1.324630743 container remove 4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 09:07:01 compute-0 systemd[1]: libpod-conmon-4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3.scope: Deactivated successfully.
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.910 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0096230b-9213-49c9-a309-f811cd147f31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.912 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a07d59e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.913 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.913 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a07d59e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:01 compute-0 sudo[325859]: pam_unix(sudo:session): session closed for user root
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:01 compute-0 kernel: tap0a07d59e-b0: entered promiscuous mode
Oct 14 09:07:01 compute-0 NetworkManager[44885]: <info>  [1760432821.9715] manager: (tap0a07d59e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/272)
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.974 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a07d59e-b0, col_values=(('external_ids', {'iface-id': '31ed66d8-7c3d-4486-83f3-5ccb9a199aa1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:01 compute-0 ovn_controller[152662]: 2025-10-14T09:07:01Z|00643|binding|INFO|Releasing lport 31ed66d8-7c3d-4486-83f3-5ccb9a199aa1 from this chassis (sb_readonly=0)
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.978 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.979 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[046f6f94-bd15-4d2f-9c8c-fbcab087e455]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.980 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:07:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.981 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'env', 'PROCESS_TAG=haproxy-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:07:01 compute-0 nova_compute[259627]: 2025-10-14 09:07:01.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:02 compute-0 sudo[326179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:07:02 compute-0 sudo[326179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:07:02 compute-0 sudo[326179]: pam_unix(sudo:session): session closed for user root
Oct 14 09:07:02 compute-0 nova_compute[259627]: 2025-10-14 09:07:02.070 2 INFO nova.virt.libvirt.driver [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Deleting instance files /var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030_del
Oct 14 09:07:02 compute-0 nova_compute[259627]: 2025-10-14 09:07:02.071 2 INFO nova.virt.libvirt.driver [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Deletion of /var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030_del complete
Oct 14 09:07:02 compute-0 sudo[326207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:07:02 compute-0 sudo[326207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:07:02 compute-0 sudo[326207]: pam_unix(sudo:session): session closed for user root
Oct 14 09:07:02 compute-0 nova_compute[259627]: 2025-10-14 09:07:02.130 2 INFO nova.compute.manager [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Took 0.88 seconds to destroy the instance on the hypervisor.
Oct 14 09:07:02 compute-0 nova_compute[259627]: 2025-10-14 09:07:02.131 2 DEBUG oslo.service.loopingcall [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:07:02 compute-0 nova_compute[259627]: 2025-10-14 09:07:02.131 2 DEBUG nova.compute.manager [-] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:07:02 compute-0 nova_compute[259627]: 2025-10-14 09:07:02.131 2 DEBUG nova.network.neutron [-] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:07:02 compute-0 sudo[326232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:07:02 compute-0 sudo[326232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:07:02 compute-0 sudo[326232]: pam_unix(sudo:session): session closed for user root
Oct 14 09:07:02 compute-0 sudo[326264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:07:02 compute-0 sudo[326264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:07:02 compute-0 nova_compute[259627]: 2025-10-14 09:07:02.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:02 compute-0 podman[326346]: 2025-10-14 09:07:02.360209822 +0000 UTC m=+0.049545641 container create 92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:07:02 compute-0 systemd[1]: Started libpod-conmon-92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1.scope.
Oct 14 09:07:02 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:07:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5f8a07d67777e3bae26757fdb9f288e4a77887d356ee44526796fcfd258e250/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:07:02 compute-0 podman[326346]: 2025-10-14 09:07:02.337468832 +0000 UTC m=+0.026804681 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:07:02 compute-0 podman[326346]: 2025-10-14 09:07:02.436885131 +0000 UTC m=+0.126220960 container init 92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 09:07:02 compute-0 podman[326346]: 2025-10-14 09:07:02.441661099 +0000 UTC m=+0.130996928 container start 92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:07:02 compute-0 nova_compute[259627]: 2025-10-14 09:07:02.453 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432807.4486067, 73d6be04-84dc-4b80-81f8-a9bbf9938051 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:07:02 compute-0 nova_compute[259627]: 2025-10-14 09:07:02.453 2 INFO nova.compute.manager [-] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] VM Stopped (Lifecycle Event)
Oct 14 09:07:02 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[326384]: [NOTICE]   (326390) : New worker (326392) forked
Oct 14 09:07:02 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[326384]: [NOTICE]   (326390) : Loading success.
Oct 14 09:07:02 compute-0 nova_compute[259627]: 2025-10-14 09:07:02.476 2 DEBUG nova.compute.manager [None req-0f87be0d-b499-42af-8c6a-97b195807800 - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:02 compute-0 podman[326415]: 2025-10-14 09:07:02.575825763 +0000 UTC m=+0.042063877 container create 82fc413ec974b31b4866a9edbdb1fe58113bfbf57f95d851a8760d9a546cecfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:07:02 compute-0 systemd[1]: Started libpod-conmon-82fc413ec974b31b4866a9edbdb1fe58113bfbf57f95d851a8760d9a546cecfa.scope.
Oct 14 09:07:02 compute-0 podman[326415]: 2025-10-14 09:07:02.55824085 +0000 UTC m=+0.024478984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:07:02 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:07:02 compute-0 podman[326415]: 2025-10-14 09:07:02.674587145 +0000 UTC m=+0.140825309 container init 82fc413ec974b31b4866a9edbdb1fe58113bfbf57f95d851a8760d9a546cecfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_taussig, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 09:07:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:07:02 compute-0 podman[326415]: 2025-10-14 09:07:02.686721714 +0000 UTC m=+0.152959848 container start 82fc413ec974b31b4866a9edbdb1fe58113bfbf57f95d851a8760d9a546cecfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_taussig, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:07:02 compute-0 podman[326415]: 2025-10-14 09:07:02.689718188 +0000 UTC m=+0.155956352 container attach 82fc413ec974b31b4866a9edbdb1fe58113bfbf57f95d851a8760d9a546cecfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 09:07:02 compute-0 stoic_taussig[326431]: 167 167
Oct 14 09:07:02 compute-0 systemd[1]: libpod-82fc413ec974b31b4866a9edbdb1fe58113bfbf57f95d851a8760d9a546cecfa.scope: Deactivated successfully.
Oct 14 09:07:02 compute-0 podman[326415]: 2025-10-14 09:07:02.696692019 +0000 UTC m=+0.162930163 container died 82fc413ec974b31b4866a9edbdb1fe58113bfbf57f95d851a8760d9a546cecfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:07:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-498faadb83c147a9a25133648bf65545008379dfa5661921ba75760583ea92e4-merged.mount: Deactivated successfully.
Oct 14 09:07:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:07:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:07:02 compute-0 podman[326415]: 2025-10-14 09:07:02.744326492 +0000 UTC m=+0.210564616 container remove 82fc413ec974b31b4866a9edbdb1fe58113bfbf57f95d851a8760d9a546cecfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct 14 09:07:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:07:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:07:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:07:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:07:02 compute-0 systemd[1]: libpod-conmon-82fc413ec974b31b4866a9edbdb1fe58113bfbf57f95d851a8760d9a546cecfa.scope: Deactivated successfully.
Oct 14 09:07:02 compute-0 nova_compute[259627]: 2025-10-14 09:07:02.788 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432822.7872407, c8d53ba7-c60f-4e5c-899f-fd95996ea742 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:07:02 compute-0 nova_compute[259627]: 2025-10-14 09:07:02.789 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] VM Started (Lifecycle Event)
Oct 14 09:07:02 compute-0 nova_compute[259627]: 2025-10-14 09:07:02.823 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:02 compute-0 nova_compute[259627]: 2025-10-14 09:07:02.827 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432822.787327, c8d53ba7-c60f-4e5c-899f-fd95996ea742 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:07:02 compute-0 nova_compute[259627]: 2025-10-14 09:07:02.827 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] VM Paused (Lifecycle Event)
Oct 14 09:07:02 compute-0 nova_compute[259627]: 2025-10-14 09:07:02.851 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:02 compute-0 nova_compute[259627]: 2025-10-14 09:07:02.855 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:07:02 compute-0 nova_compute[259627]: 2025-10-14 09:07:02.875 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:07:03 compute-0 podman[326453]: 2025-10-14 09:07:03.004474359 +0000 UTC m=+0.061209218 container create b78143c8355e10002fc86ce044455eea6cead2b18249ee134cbd00f75897ea4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct 14 09:07:03 compute-0 systemd[1]: Started libpod-conmon-b78143c8355e10002fc86ce044455eea6cead2b18249ee134cbd00f75897ea4b.scope.
Oct 14 09:07:03 compute-0 podman[326453]: 2025-10-14 09:07:02.975729891 +0000 UTC m=+0.032464810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:07:03 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5eb8e17047fe15406f852744907350edb2f66a5d921d637a5927ee16a04d804/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5eb8e17047fe15406f852744907350edb2f66a5d921d637a5927ee16a04d804/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5eb8e17047fe15406f852744907350edb2f66a5d921d637a5927ee16a04d804/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5eb8e17047fe15406f852744907350edb2f66a5d921d637a5927ee16a04d804/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:07:03 compute-0 podman[326453]: 2025-10-14 09:07:03.111505834 +0000 UTC m=+0.168240733 container init b78143c8355e10002fc86ce044455eea6cead2b18249ee134cbd00f75897ea4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:07:03 compute-0 podman[326453]: 2025-10-14 09:07:03.11984637 +0000 UTC m=+0.176581229 container start b78143c8355e10002fc86ce044455eea6cead2b18249ee134cbd00f75897ea4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 09:07:03 compute-0 podman[326453]: 2025-10-14 09:07:03.123568381 +0000 UTC m=+0.180303240 container attach b78143c8355e10002fc86ce044455eea6cead2b18249ee134cbd00f75897ea4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:07:03 compute-0 ceph-mon[74249]: pgmap v1543: 305 pgs: 305 active+clean; 246 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.531 2 INFO nova.network.neutron [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Port df1ec4d8-f543-4899-9d98-b60a6a46cc7c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.532 2 DEBUG nova.network.neutron [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updating instance_info_cache with network_info: [{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.552 2 DEBUG oslo_concurrency.lockutils [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.586 2 DEBUG oslo_concurrency.lockutils [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-2189eac5-238f-4f09-ae1c-1cf47c3b6030-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 246 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.766 2 DEBUG nova.compute.manager [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-plugged-350a3bec-5dbd-4a83-8d80-5796be0319fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.766 2 DEBUG oslo_concurrency.lockutils [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.767 2 DEBUG oslo_concurrency.lockutils [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.767 2 DEBUG oslo_concurrency.lockutils [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.767 2 DEBUG nova.compute.manager [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] No waiting events found dispatching network-vif-plugged-350a3bec-5dbd-4a83-8d80-5796be0319fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.767 2 WARNING nova.compute.manager [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received unexpected event network-vif-plugged-350a3bec-5dbd-4a83-8d80-5796be0319fd for instance with vm_state active and task_state deleting.
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.768 2 DEBUG nova.compute.manager [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Received event network-vif-plugged-93e6162a-d037-4440-9c8c-1cb9b293f249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.768 2 DEBUG oslo_concurrency.lockutils [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.768 2 DEBUG oslo_concurrency.lockutils [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.768 2 DEBUG oslo_concurrency.lockutils [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.769 2 DEBUG nova.compute.manager [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Processing event network-vif-plugged-93e6162a-d037-4440-9c8c-1cb9b293f249 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.769 2 DEBUG nova.compute.manager [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Received event network-vif-plugged-93e6162a-d037-4440-9c8c-1cb9b293f249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.769 2 DEBUG oslo_concurrency.lockutils [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.770 2 DEBUG oslo_concurrency.lockutils [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.770 2 DEBUG oslo_concurrency.lockutils [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.771 2 DEBUG nova.compute.manager [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] No waiting events found dispatching network-vif-plugged-93e6162a-d037-4440-9c8c-1cb9b293f249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.771 2 WARNING nova.compute.manager [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Received unexpected event network-vif-plugged-93e6162a-d037-4440-9c8c-1cb9b293f249 for instance with vm_state building and task_state spawning.
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.772 2 DEBUG nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.776 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432823.7761073, c8d53ba7-c60f-4e5c-899f-fd95996ea742 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.776 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] VM Resumed (Lifecycle Event)
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.778 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.781 2 INFO nova.virt.libvirt.driver [-] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Instance spawned successfully.
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.781 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.800 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.806 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.810 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.810 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.811 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.811 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.812 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.812 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.835 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.865 2 INFO nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Took 8.73 seconds to spawn the instance on the hypervisor.
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.866 2 DEBUG nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.930 2 INFO nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Took 9.75 seconds to build instance.
Oct 14 09:07:03 compute-0 peaceful_colden[326471]: {
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:     "0": [
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:         {
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "devices": [
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "/dev/loop3"
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             ],
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "lv_name": "ceph_lv0",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "lv_size": "21470642176",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "name": "ceph_lv0",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "tags": {
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.cluster_name": "ceph",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.crush_device_class": "",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.encrypted": "0",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.osd_id": "0",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.type": "block",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.vdo": "0"
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             },
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "type": "block",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "vg_name": "ceph_vg0"
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:         }
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:     ],
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:     "1": [
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:         {
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "devices": [
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "/dev/loop4"
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             ],
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "lv_name": "ceph_lv1",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "lv_size": "21470642176",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "name": "ceph_lv1",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "tags": {
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.cluster_name": "ceph",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.crush_device_class": "",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.encrypted": "0",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.osd_id": "1",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.type": "block",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.vdo": "0"
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             },
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "type": "block",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "vg_name": "ceph_vg1"
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:         }
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:     ],
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:     "2": [
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:         {
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "devices": [
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "/dev/loop5"
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             ],
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "lv_name": "ceph_lv2",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "lv_size": "21470642176",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "name": "ceph_lv2",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "tags": {
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.cluster_name": "ceph",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.crush_device_class": "",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.encrypted": "0",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.osd_id": "2",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.type": "block",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:                 "ceph.vdo": "0"
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             },
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "type": "block",
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:             "vg_name": "ceph_vg2"
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:         }
Oct 14 09:07:03 compute-0 peaceful_colden[326471]:     ]
Oct 14 09:07:03 compute-0 peaceful_colden[326471]: }
Oct 14 09:07:03 compute-0 nova_compute[259627]: 2025-10-14 09:07:03.961 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:03 compute-0 systemd[1]: libpod-b78143c8355e10002fc86ce044455eea6cead2b18249ee134cbd00f75897ea4b.scope: Deactivated successfully.
Oct 14 09:07:03 compute-0 podman[326453]: 2025-10-14 09:07:03.963537638 +0000 UTC m=+1.020272457 container died b78143c8355e10002fc86ce044455eea6cead2b18249ee134cbd00f75897ea4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:07:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5eb8e17047fe15406f852744907350edb2f66a5d921d637a5927ee16a04d804-merged.mount: Deactivated successfully.
Oct 14 09:07:04 compute-0 podman[326453]: 2025-10-14 09:07:04.039082798 +0000 UTC m=+1.095817617 container remove b78143c8355e10002fc86ce044455eea6cead2b18249ee134cbd00f75897ea4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:07:04 compute-0 systemd[1]: libpod-conmon-b78143c8355e10002fc86ce044455eea6cead2b18249ee134cbd00f75897ea4b.scope: Deactivated successfully.
Oct 14 09:07:04 compute-0 sudo[326264]: pam_unix(sudo:session): session closed for user root
Oct 14 09:07:04 compute-0 sudo[326491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:07:04 compute-0 sudo[326491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:07:04 compute-0 sudo[326491]: pam_unix(sudo:session): session closed for user root
Oct 14 09:07:04 compute-0 sudo[326516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:07:04 compute-0 sudo[326516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:07:04 compute-0 sudo[326516]: pam_unix(sudo:session): session closed for user root
Oct 14 09:07:04 compute-0 sudo[326541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:07:04 compute-0 sudo[326541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:07:04 compute-0 sudo[326541]: pam_unix(sudo:session): session closed for user root
Oct 14 09:07:04 compute-0 sudo[326566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:07:04 compute-0 sudo[326566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:07:04 compute-0 podman[326631]: 2025-10-14 09:07:04.712433691 +0000 UTC m=+0.037830122 container create 0b24b025199f87d72f67bcc1e8cf5ae5e8fc53bacdd2bd76d3c1b42e3e80cf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_boyd, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 14 09:07:04 compute-0 systemd[1]: Started libpod-conmon-0b24b025199f87d72f67bcc1e8cf5ae5e8fc53bacdd2bd76d3c1b42e3e80cf28.scope.
Oct 14 09:07:04 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:07:04 compute-0 podman[326631]: 2025-10-14 09:07:04.789333125 +0000 UTC m=+0.114729586 container init 0b24b025199f87d72f67bcc1e8cf5ae5e8fc53bacdd2bd76d3c1b42e3e80cf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_boyd, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:07:04 compute-0 podman[326631]: 2025-10-14 09:07:04.696643952 +0000 UTC m=+0.022040393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:07:04 compute-0 podman[326631]: 2025-10-14 09:07:04.794886352 +0000 UTC m=+0.120282783 container start 0b24b025199f87d72f67bcc1e8cf5ae5e8fc53bacdd2bd76d3c1b42e3e80cf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 09:07:04 compute-0 podman[326631]: 2025-10-14 09:07:04.798146842 +0000 UTC m=+0.123543293 container attach 0b24b025199f87d72f67bcc1e8cf5ae5e8fc53bacdd2bd76d3c1b42e3e80cf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_boyd, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:07:04 compute-0 tender_boyd[326648]: 167 167
Oct 14 09:07:04 compute-0 systemd[1]: libpod-0b24b025199f87d72f67bcc1e8cf5ae5e8fc53bacdd2bd76d3c1b42e3e80cf28.scope: Deactivated successfully.
Oct 14 09:07:04 compute-0 podman[326631]: 2025-10-14 09:07:04.799710581 +0000 UTC m=+0.125107042 container died 0b24b025199f87d72f67bcc1e8cf5ae5e8fc53bacdd2bd76d3c1b42e3e80cf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_boyd, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-5edda76d64926d90015d1bb673852e248a0bfe5f0b8e13db4e854b075878724e-merged.mount: Deactivated successfully.
Oct 14 09:07:04 compute-0 podman[326631]: 2025-10-14 09:07:04.831657507 +0000 UTC m=+0.157053938 container remove 0b24b025199f87d72f67bcc1e8cf5ae5e8fc53bacdd2bd76d3c1b42e3e80cf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_boyd, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 09:07:04 compute-0 systemd[1]: libpod-conmon-0b24b025199f87d72f67bcc1e8cf5ae5e8fc53bacdd2bd76d3c1b42e3e80cf28.scope: Deactivated successfully.
Oct 14 09:07:05 compute-0 podman[326673]: 2025-10-14 09:07:05.027002368 +0000 UTC m=+0.041797320 container create 1e60f86bcbad9fa2e431c43770923842cdcb140ef1d033fbc7504033e4b3c66e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:07:05 compute-0 systemd[1]: Started libpod-conmon-1e60f86bcbad9fa2e431c43770923842cdcb140ef1d033fbc7504033e4b3c66e.scope.
Oct 14 09:07:05 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cec5f5138eda5a20fcb10ca1f15523490ba3e1ec8c836377b5060d64630c36cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cec5f5138eda5a20fcb10ca1f15523490ba3e1ec8c836377b5060d64630c36cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cec5f5138eda5a20fcb10ca1f15523490ba3e1ec8c836377b5060d64630c36cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cec5f5138eda5a20fcb10ca1f15523490ba3e1ec8c836377b5060d64630c36cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:07:05 compute-0 podman[326673]: 2025-10-14 09:07:05.007322914 +0000 UTC m=+0.022117886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:07:05 compute-0 podman[326673]: 2025-10-14 09:07:05.108796983 +0000 UTC m=+0.123591935 container init 1e60f86bcbad9fa2e431c43770923842cdcb140ef1d033fbc7504033e4b3c66e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 09:07:05 compute-0 podman[326673]: 2025-10-14 09:07:05.114211796 +0000 UTC m=+0.129006748 container start 1e60f86bcbad9fa2e431c43770923842cdcb140ef1d033fbc7504033e4b3c66e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:07:05 compute-0 podman[326673]: 2025-10-14 09:07:05.117286592 +0000 UTC m=+0.132081574 container attach 1e60f86bcbad9fa2e431c43770923842cdcb140ef1d033fbc7504033e4b3c66e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hawking, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:07:05 compute-0 nova_compute[259627]: 2025-10-14 09:07:05.334 2 DEBUG oslo_concurrency.lockutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:05 compute-0 nova_compute[259627]: 2025-10-14 09:07:05.335 2 DEBUG oslo_concurrency.lockutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:05 compute-0 nova_compute[259627]: 2025-10-14 09:07:05.335 2 INFO nova.compute.manager [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Shelving
Oct 14 09:07:05 compute-0 nova_compute[259627]: 2025-10-14 09:07:05.361 2 DEBUG nova.virt.libvirt.driver [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:07:05 compute-0 ceph-mon[74249]: pgmap v1544: 305 pgs: 305 active+clean; 246 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:07:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:07:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2368455642' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:07:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:07:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2368455642' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:07:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 167 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Oct 14 09:07:06 compute-0 sad_hawking[326690]: {
Oct 14 09:07:06 compute-0 sad_hawking[326690]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:07:06 compute-0 sad_hawking[326690]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:07:06 compute-0 sad_hawking[326690]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:07:06 compute-0 sad_hawking[326690]:         "osd_id": 2,
Oct 14 09:07:06 compute-0 sad_hawking[326690]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:07:06 compute-0 sad_hawking[326690]:         "type": "bluestore"
Oct 14 09:07:06 compute-0 sad_hawking[326690]:     },
Oct 14 09:07:06 compute-0 sad_hawking[326690]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:07:06 compute-0 sad_hawking[326690]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:07:06 compute-0 sad_hawking[326690]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:07:06 compute-0 sad_hawking[326690]:         "osd_id": 1,
Oct 14 09:07:06 compute-0 sad_hawking[326690]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:07:06 compute-0 sad_hawking[326690]:         "type": "bluestore"
Oct 14 09:07:06 compute-0 sad_hawking[326690]:     },
Oct 14 09:07:06 compute-0 sad_hawking[326690]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:07:06 compute-0 sad_hawking[326690]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:07:06 compute-0 sad_hawking[326690]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:07:06 compute-0 sad_hawking[326690]:         "osd_id": 0,
Oct 14 09:07:06 compute-0 sad_hawking[326690]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:07:06 compute-0 sad_hawking[326690]:         "type": "bluestore"
Oct 14 09:07:06 compute-0 sad_hawking[326690]:     }
Oct 14 09:07:06 compute-0 sad_hawking[326690]: }
Oct 14 09:07:06 compute-0 systemd[1]: libpod-1e60f86bcbad9fa2e431c43770923842cdcb140ef1d033fbc7504033e4b3c66e.scope: Deactivated successfully.
Oct 14 09:07:06 compute-0 podman[326673]: 2025-10-14 09:07:06.090001497 +0000 UTC m=+1.104796439 container died 1e60f86bcbad9fa2e431c43770923842cdcb140ef1d033fbc7504033e4b3c66e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hawking, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:07:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-cec5f5138eda5a20fcb10ca1f15523490ba3e1ec8c836377b5060d64630c36cb-merged.mount: Deactivated successfully.
Oct 14 09:07:06 compute-0 podman[326673]: 2025-10-14 09:07:06.30610762 +0000 UTC m=+1.320902572 container remove 1e60f86bcbad9fa2e431c43770923842cdcb140ef1d033fbc7504033e4b3c66e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:07:06 compute-0 systemd[1]: libpod-conmon-1e60f86bcbad9fa2e431c43770923842cdcb140ef1d033fbc7504033e4b3c66e.scope: Deactivated successfully.
Oct 14 09:07:06 compute-0 sudo[326566]: pam_unix(sudo:session): session closed for user root
Oct 14 09:07:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:07:06 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:07:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:07:06 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:07:06 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 3fde45db-e93e-4225-bdb9-9fcd731233f5 does not exist
Oct 14 09:07:06 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 32b68a08-15ad-4ad4-a555-0df41ca7eb73 does not exist
Oct 14 09:07:06 compute-0 sudo[326736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:07:06 compute-0 sudo[326736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:07:06 compute-0 sudo[326736]: pam_unix(sudo:session): session closed for user root
Oct 14 09:07:06 compute-0 nova_compute[259627]: 2025-10-14 09:07:06.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2368455642' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:07:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2368455642' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:07:06 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:07:06 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:07:06 compute-0 sudo[326761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:07:06 compute-0 sudo[326761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:07:06 compute-0 sudo[326761]: pam_unix(sudo:session): session closed for user root
Oct 14 09:07:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:07.025 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:07.026 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:07.028 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:07 compute-0 nova_compute[259627]: 2025-10-14 09:07:07.185 2 DEBUG nova.network.neutron [-] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:07:07 compute-0 nova_compute[259627]: 2025-10-14 09:07:07.213 2 INFO nova.compute.manager [-] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Took 5.08 seconds to deallocate network for instance.
Oct 14 09:07:07 compute-0 nova_compute[259627]: 2025-10-14 09:07:07.268 2 DEBUG oslo_concurrency.lockutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:07 compute-0 nova_compute[259627]: 2025-10-14 09:07:07.268 2 DEBUG oslo_concurrency.lockutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:07 compute-0 nova_compute[259627]: 2025-10-14 09:07:07.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:07 compute-0 nova_compute[259627]: 2025-10-14 09:07:07.388 2 DEBUG nova.compute.manager [req-74caf18f-c655-4ef4-92f6-1fa4f23bbab8 req-95789484-10ed-4041-8c97-7eec329f7f60 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-deleted-350a3bec-5dbd-4a83-8d80-5796be0319fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:07 compute-0 nova_compute[259627]: 2025-10-14 09:07:07.403 2 DEBUG oslo_concurrency.processutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:07 compute-0 ceph-mon[74249]: pgmap v1545: 305 pgs: 305 active+clean; 167 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Oct 14 09:07:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:07:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 167 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 118 op/s
Oct 14 09:07:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:07:07 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2918270055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:07:07 compute-0 nova_compute[259627]: 2025-10-14 09:07:07.808 2 DEBUG oslo_concurrency.processutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:07 compute-0 nova_compute[259627]: 2025-10-14 09:07:07.819 2 DEBUG nova.compute.provider_tree [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:07:07 compute-0 nova_compute[259627]: 2025-10-14 09:07:07.860 2 DEBUG nova.scheduler.client.report [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:07:07 compute-0 nova_compute[259627]: 2025-10-14 09:07:07.891 2 DEBUG oslo_concurrency.lockutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:07 compute-0 nova_compute[259627]: 2025-10-14 09:07:07.951 2 INFO nova.scheduler.client.report [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Deleted allocations for instance 2189eac5-238f-4f09-ae1c-1cf47c3b6030
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.033 2 DEBUG oslo_concurrency.lockutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.427 2 DEBUG oslo_concurrency.lockutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.428 2 DEBUG oslo_concurrency.lockutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.428 2 DEBUG oslo_concurrency.lockutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.428 2 DEBUG oslo_concurrency.lockutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.429 2 DEBUG oslo_concurrency.lockutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.430 2 INFO nova.compute.manager [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Terminating instance
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.431 2 DEBUG nova.compute.manager [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:07:08 compute-0 kernel: tapdffa5a1f-65 (unregistering): left promiscuous mode
Oct 14 09:07:08 compute-0 NetworkManager[44885]: <info>  [1760432828.5013] device (tapdffa5a1f-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:08 compute-0 ovn_controller[152662]: 2025-10-14T09:07:08Z|00644|binding|INFO|Releasing lport dffa5a1f-657b-498e-bbe5-6540fead7fb6 from this chassis (sb_readonly=0)
Oct 14 09:07:08 compute-0 ovn_controller[152662]: 2025-10-14T09:07:08Z|00645|binding|INFO|Setting lport dffa5a1f-657b-498e-bbe5-6540fead7fb6 down in Southbound
Oct 14 09:07:08 compute-0 ovn_controller[152662]: 2025-10-14T09:07:08Z|00646|binding|INFO|Removing iface tapdffa5a1f-65 ovn-installed in OVS
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.528 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:40:de 10.100.0.8'], port_security=['fa:16:3e:b4:40:de 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e6192a40-cbd8-43eb-9955-4fede99ddb79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=dffa5a1f-657b-498e-bbe5-6540fead7fb6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:07:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.529 162547 INFO neutron.agent.ovn.metadata.agent [-] Port dffa5a1f-657b-498e-bbe5-6540fead7fb6 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis
Oct 14 09:07:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.530 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc2d149f-aebf-406a-aed2-5161dd22b079, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:07:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.531 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bd05692f-7cff-4f1f-b700-3efff555e451]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.539 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 namespace which is not needed anymore
Oct 14 09:07:08 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2918270055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:08 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000039.scope: Deactivated successfully.
Oct 14 09:07:08 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000039.scope: Consumed 15.091s CPU time.
Oct 14 09:07:08 compute-0 systemd-machined[214636]: Machine qemu-72-instance-00000039 terminated.
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.687 2 INFO nova.virt.libvirt.driver [-] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Instance destroyed successfully.
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.689 2 DEBUG nova.objects.instance [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'resources' on Instance uuid a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.711 2 DEBUG nova.virt.libvirt.vif [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1758133214',display_name='tempest-tempest.common.compute-instance-1758133214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1758133214',id=57,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-6x4gt3p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.712 2 DEBUG nova.network.os_vif_util [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.713 2 DEBUG nova.network.os_vif_util [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b4:40:de,bridge_name='br-int',has_traffic_filtering=True,id=dffa5a1f-657b-498e-bbe5-6540fead7fb6,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdffa5a1f-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.713 2 DEBUG os_vif [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:40:de,bridge_name='br-int',has_traffic_filtering=True,id=dffa5a1f-657b-498e-bbe5-6540fead7fb6,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdffa5a1f-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.723 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdffa5a1f-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.738 2 INFO os_vif [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:40:de,bridge_name='br-int',has_traffic_filtering=True,id=dffa5a1f-657b-498e-bbe5-6540fead7fb6,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdffa5a1f-65')
Oct 14 09:07:08 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[322658]: [NOTICE]   (322662) : haproxy version is 2.8.14-c23fe91
Oct 14 09:07:08 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[322658]: [NOTICE]   (322662) : path to executable is /usr/sbin/haproxy
Oct 14 09:07:08 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[322658]: [WARNING]  (322662) : Exiting Master process...
Oct 14 09:07:08 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[322658]: [ALERT]    (322662) : Current worker (322664) exited with code 143 (Terminated)
Oct 14 09:07:08 compute-0 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[322658]: [WARNING]  (322662) : All workers exited. Exiting... (0)
Oct 14 09:07:08 compute-0 systemd[1]: libpod-0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777.scope: Deactivated successfully.
Oct 14 09:07:08 compute-0 podman[326836]: 2025-10-14 09:07:08.756350382 +0000 UTC m=+0.074035894 container died 0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 09:07:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d2b9ef6811a39d51e9705ae92738e1dfb3f318ac7ca2b4d0ea3d271eb3fe0ad-merged.mount: Deactivated successfully.
Oct 14 09:07:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777-userdata-shm.mount: Deactivated successfully.
Oct 14 09:07:08 compute-0 podman[326836]: 2025-10-14 09:07:08.808676621 +0000 UTC m=+0.126362133 container cleanup 0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:07:08 compute-0 podman[326852]: 2025-10-14 09:07:08.810381223 +0000 UTC m=+0.081467677 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3)
Oct 14 09:07:08 compute-0 podman[326853]: 2025-10-14 09:07:08.818464052 +0000 UTC m=+0.083782144 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 09:07:08 compute-0 systemd[1]: libpod-conmon-0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777.scope: Deactivated successfully.
Oct 14 09:07:08 compute-0 podman[326924]: 2025-10-14 09:07:08.886223021 +0000 UTC m=+0.053974711 container remove 0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:07:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.893 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ad9d36-00c9-4105-8269-8be3462095de]: (4, ('Tue Oct 14 09:07:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 (0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777)\n0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777\nTue Oct 14 09:07:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 (0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777)\n0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.895 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e533ec11-e723-47a8-b9f8-7725e38f540c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.896 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:08 compute-0 kernel: tapfc2d149f-a0: left promiscuous mode
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:08 compute-0 nova_compute[259627]: 2025-10-14 09:07:08.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.916 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[23434a19-3690-461f-b9d5-5123c27766d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.952 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[15c4170a-636e-46d9-88dc-83df76842f48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.954 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0730380c-c421-4fbf-97f3-0b386cc75d90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.979 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c494b9a3-b121-468e-b935-6295f3087bf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654845, 'reachable_time': 37215, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326939, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.982 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:07:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.982 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[97921249-f904-4ee0-bd9d-dd991b1dc0e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:08 compute-0 systemd[1]: run-netns-ovnmeta\x2dfc2d149f\x2daebf\x2d406a\x2daed2\x2d5161dd22b079.mount: Deactivated successfully.
Oct 14 09:07:09 compute-0 nova_compute[259627]: 2025-10-14 09:07:09.354 2 INFO nova.virt.libvirt.driver [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Deleting instance files /var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_del
Oct 14 09:07:09 compute-0 nova_compute[259627]: 2025-10-14 09:07:09.355 2 INFO nova.virt.libvirt.driver [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Deletion of /var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_del complete
Oct 14 09:07:09 compute-0 nova_compute[259627]: 2025-10-14 09:07:09.427 2 INFO nova.compute.manager [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Took 1.00 seconds to destroy the instance on the hypervisor.
Oct 14 09:07:09 compute-0 nova_compute[259627]: 2025-10-14 09:07:09.428 2 DEBUG oslo.service.loopingcall [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:07:09 compute-0 nova_compute[259627]: 2025-10-14 09:07:09.429 2 DEBUG nova.compute.manager [-] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:07:09 compute-0 nova_compute[259627]: 2025-10-14 09:07:09.429 2 DEBUG nova.network.neutron [-] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:07:09 compute-0 ceph-mon[74249]: pgmap v1546: 305 pgs: 305 active+clean; 167 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 118 op/s
Oct 14 09:07:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 167 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 118 op/s
Oct 14 09:07:10 compute-0 nova_compute[259627]: 2025-10-14 09:07:10.462 2 DEBUG nova.network.neutron [-] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:07:10 compute-0 nova_compute[259627]: 2025-10-14 09:07:10.489 2 INFO nova.compute.manager [-] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Took 1.06 seconds to deallocate network for instance.
Oct 14 09:07:10 compute-0 nova_compute[259627]: 2025-10-14 09:07:10.549 2 DEBUG oslo_concurrency.lockutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:10 compute-0 nova_compute[259627]: 2025-10-14 09:07:10.549 2 DEBUG oslo_concurrency.lockutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:10 compute-0 nova_compute[259627]: 2025-10-14 09:07:10.563 2 DEBUG nova.compute.manager [req-7c2c448f-f8d4-40b8-9ad1-6e238694daef req-64fa60b9-d74c-4f2a-bd11-eb28a042eef0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-vif-deleted-dffa5a1f-657b-498e-bbe5-6540fead7fb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:10 compute-0 nova_compute[259627]: 2025-10-14 09:07:10.616 2 DEBUG oslo_concurrency.processutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:07:11 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1537167008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:07:11 compute-0 nova_compute[259627]: 2025-10-14 09:07:11.041 2 DEBUG oslo_concurrency.processutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:11 compute-0 nova_compute[259627]: 2025-10-14 09:07:11.048 2 DEBUG nova.compute.provider_tree [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:07:11 compute-0 nova_compute[259627]: 2025-10-14 09:07:11.085 2 DEBUG nova.scheduler.client.report [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:07:11 compute-0 nova_compute[259627]: 2025-10-14 09:07:11.122 2 DEBUG oslo_concurrency.lockutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:11 compute-0 nova_compute[259627]: 2025-10-14 09:07:11.171 2 INFO nova.scheduler.client.report [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Deleted allocations for instance a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8
Oct 14 09:07:11 compute-0 nova_compute[259627]: 2025-10-14 09:07:11.264 2 DEBUG oslo_concurrency.lockutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:11 compute-0 ceph-mon[74249]: pgmap v1547: 305 pgs: 305 active+clean; 167 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 118 op/s
Oct 14 09:07:11 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1537167008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:07:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 146 op/s
Oct 14 09:07:12 compute-0 nova_compute[259627]: 2025-10-14 09:07:12.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:07:13 compute-0 ceph-mon[74249]: pgmap v1548: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 146 op/s
Oct 14 09:07:13 compute-0 nova_compute[259627]: 2025-10-14 09:07:13.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 129 op/s
Oct 14 09:07:15 compute-0 nova_compute[259627]: 2025-10-14 09:07:15.406 2 DEBUG nova.virt.libvirt.driver [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 09:07:15 compute-0 nova_compute[259627]: 2025-10-14 09:07:15.421 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:15 compute-0 nova_compute[259627]: 2025-10-14 09:07:15.422 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:15 compute-0 nova_compute[259627]: 2025-10-14 09:07:15.440 2 DEBUG nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:07:15 compute-0 nova_compute[259627]: 2025-10-14 09:07:15.517 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:15 compute-0 nova_compute[259627]: 2025-10-14 09:07:15.518 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:15 compute-0 nova_compute[259627]: 2025-10-14 09:07:15.524 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:07:15 compute-0 nova_compute[259627]: 2025-10-14 09:07:15.524 2 INFO nova.compute.claims [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:07:15 compute-0 ceph-mon[74249]: pgmap v1549: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 129 op/s
Oct 14 09:07:15 compute-0 nova_compute[259627]: 2025-10-14 09:07:15.623 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 109 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 172 op/s
Oct 14 09:07:15 compute-0 ovn_controller[152662]: 2025-10-14T09:07:15Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:13:f7:ed 10.100.0.13
Oct 14 09:07:15 compute-0 ovn_controller[152662]: 2025-10-14T09:07:15Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:13:f7:ed 10.100.0.13
Oct 14 09:07:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:07:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1570839737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.048 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.054 2 DEBUG nova.compute.provider_tree [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.073 2 DEBUG nova.scheduler.client.report [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.106 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.107 2 DEBUG nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.185 2 DEBUG nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.186 2 DEBUG nova.network.neutron [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.215 2 INFO nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.236 2 DEBUG nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.377 2 DEBUG nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.378 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.379 2 INFO nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Creating image(s)
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.403 2 DEBUG nova.storage.rbd_utils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.429 2 DEBUG nova.storage.rbd_utils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.461 2 DEBUG nova.storage.rbd_utils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.465 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.497 2 DEBUG nova.policy [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '695c749a8dce4506a31e2cec4f02876b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4bda6775f81f403e83269a5f798c9853', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.504 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432821.496341, 2189eac5-238f-4f09-ae1c-1cf47c3b6030 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.505 2 INFO nova.compute.manager [-] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] VM Stopped (Lifecycle Event)
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.525 2 DEBUG nova.compute.manager [None req-be725eb6-9ea9-41d1-820b-75daebfa89dc - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.538 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.539 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.539 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.539 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.560 2 DEBUG nova.storage.rbd_utils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.563 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e065d857-2df9-4199-aa98-41ca3c436bad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:16 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1570839737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.829 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e065d857-2df9-4199-aa98-41ca3c436bad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.894 2 DEBUG nova.storage.rbd_utils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] resizing rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.981 2 DEBUG nova.objects.instance [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'migration_context' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.993 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.993 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Ensure instance console log exists: /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.993 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.994 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:16 compute-0 nova_compute[259627]: 2025-10-14 09:07:16.994 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:17 compute-0 nova_compute[259627]: 2025-10-14 09:07:17.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:17 compute-0 ceph-mon[74249]: pgmap v1550: 305 pgs: 305 active+clean; 109 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 172 op/s
Oct 14 09:07:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:07:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 109 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 14 09:07:17 compute-0 nova_compute[259627]: 2025-10-14 09:07:17.891 2 DEBUG nova.network.neutron [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Successfully created port: e18648ba-6112-40fa-85f6-bdf82a012079 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:07:18 compute-0 ceph-mon[74249]: pgmap v1551: 305 pgs: 305 active+clean; 109 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 14 09:07:18 compute-0 nova_compute[259627]: 2025-10-14 09:07:18.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:18 compute-0 nova_compute[259627]: 2025-10-14 09:07:18.852 2 DEBUG nova.network.neutron [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Successfully updated port: e18648ba-6112-40fa-85f6-bdf82a012079 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:07:18 compute-0 nova_compute[259627]: 2025-10-14 09:07:18.866 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:07:18 compute-0 nova_compute[259627]: 2025-10-14 09:07:18.866 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquired lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:07:18 compute-0 nova_compute[259627]: 2025-10-14 09:07:18.866 2 DEBUG nova.network.neutron [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:07:18 compute-0 nova_compute[259627]: 2025-10-14 09:07:18.972 2 DEBUG nova.compute.manager [req-a7d5f779-4abe-4596-8ae3-5e31059778bc req-945f8585-9151-4a0a-a18a-25120e212d54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received event network-changed-e18648ba-6112-40fa-85f6-bdf82a012079 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:18 compute-0 nova_compute[259627]: 2025-10-14 09:07:18.973 2 DEBUG nova.compute.manager [req-a7d5f779-4abe-4596-8ae3-5e31059778bc req-945f8585-9151-4a0a-a18a-25120e212d54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Refreshing instance network info cache due to event network-changed-e18648ba-6112-40fa-85f6-bdf82a012079. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:07:18 compute-0 nova_compute[259627]: 2025-10-14 09:07:18.974 2 DEBUG oslo_concurrency.lockutils [req-a7d5f779-4abe-4596-8ae3-5e31059778bc req-945f8585-9151-4a0a-a18a-25120e212d54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.046 2 DEBUG nova.network.neutron [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:07:19 compute-0 ovn_controller[152662]: 2025-10-14T09:07:19Z|00647|binding|INFO|Releasing lport 31ed66d8-7c3d-4486-83f3-5ccb9a199aa1 from this chassis (sb_readonly=0)
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:19 compute-0 kernel: tap93e6162a-d0 (unregistering): left promiscuous mode
Oct 14 09:07:19 compute-0 NetworkManager[44885]: <info>  [1760432839.2944] device (tap93e6162a-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:19 compute-0 ovn_controller[152662]: 2025-10-14T09:07:19Z|00648|binding|INFO|Releasing lport 31ed66d8-7c3d-4486-83f3-5ccb9a199aa1 from this chassis (sb_readonly=0)
Oct 14 09:07:19 compute-0 ovn_controller[152662]: 2025-10-14T09:07:19Z|00649|binding|INFO|Releasing lport 93e6162a-d037-4440-9c8c-1cb9b293f249 from this chassis (sb_readonly=0)
Oct 14 09:07:19 compute-0 ovn_controller[152662]: 2025-10-14T09:07:19Z|00650|binding|INFO|Removing iface tap93e6162a-d0 ovn-installed in OVS
Oct 14 09:07:19 compute-0 ovn_controller[152662]: 2025-10-14T09:07:19Z|00651|binding|INFO|Setting lport 93e6162a-d037-4440-9c8c-1cb9b293f249 down in Southbound
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.323 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:f7:ed 10.100.0.13'], port_security=['fa:16:3e:13:f7:ed 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c8d53ba7-c60f-4e5c-899f-fd95996ea742', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd39581efff7d48fb83412ca1f615d412', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd5e08c6-d8c1-45fb-85db-6ce5c46fea39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c165cf5-3bad-4110-8321-7f7bda9723ad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=93e6162a-d037-4440-9c8c-1cb9b293f249) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:07:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.324 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 93e6162a-d037-4440-9c8c-1cb9b293f249 in datapath 0a07d59e-be8b-4d41-a103-fb5a64bf6f88 unbound from our chassis
Oct 14 09:07:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.325 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a07d59e-be8b-4d41-a103-fb5a64bf6f88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:07:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.326 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0613a9a-8ce4-402e-90e5-d356f9c09b5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.327 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 namespace which is not needed anymore
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:19 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Oct 14 09:07:19 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000003f.scope: Consumed 13.659s CPU time.
Oct 14 09:07:19 compute-0 systemd-machined[214636]: Machine qemu-77-instance-0000003f terminated.
Oct 14 09:07:19 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[326384]: [NOTICE]   (326390) : haproxy version is 2.8.14-c23fe91
Oct 14 09:07:19 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[326384]: [NOTICE]   (326390) : path to executable is /usr/sbin/haproxy
Oct 14 09:07:19 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[326384]: [WARNING]  (326390) : Exiting Master process...
Oct 14 09:07:19 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[326384]: [WARNING]  (326390) : Exiting Master process...
Oct 14 09:07:19 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[326384]: [ALERT]    (326390) : Current worker (326392) exited with code 143 (Terminated)
Oct 14 09:07:19 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[326384]: [WARNING]  (326390) : All workers exited. Exiting... (0)
Oct 14 09:07:19 compute-0 systemd[1]: libpod-92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1.scope: Deactivated successfully.
Oct 14 09:07:19 compute-0 podman[327175]: 2025-10-14 09:07:19.492890365 +0000 UTC m=+0.052801531 container died 92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 09:07:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1-userdata-shm.mount: Deactivated successfully.
Oct 14 09:07:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5f8a07d67777e3bae26757fdb9f288e4a77887d356ee44526796fcfd258e250-merged.mount: Deactivated successfully.
Oct 14 09:07:19 compute-0 podman[327175]: 2025-10-14 09:07:19.547974942 +0000 UTC m=+0.107886128 container cleanup 92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 09:07:19 compute-0 systemd[1]: libpod-conmon-92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1.scope: Deactivated successfully.
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.573 2 INFO nova.virt.libvirt.driver [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Instance shutdown successfully after 14 seconds.
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.581 2 INFO nova.virt.libvirt.driver [-] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Instance destroyed successfully.
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.582 2 DEBUG nova.objects.instance [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'numa_topology' on Instance uuid c8d53ba7-c60f-4e5c-899f-fd95996ea742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:07:19 compute-0 podman[327211]: 2025-10-14 09:07:19.640132051 +0000 UTC m=+0.061410583 container remove 92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:07:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.646 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f4cb37-9544-4817-8377-a9c845b00de1]: (4, ('Tue Oct 14 09:07:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 (92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1)\n92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1\nTue Oct 14 09:07:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 (92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1)\n92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.648 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8860767b-08b7-4164-8bf7-2ad56e58a316]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.650 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a07d59e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:19 compute-0 kernel: tap0a07d59e-b0: left promiscuous mode
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.677 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ed549f94-1c50-40d4-98e8-9f0086aee950]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.704 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[37bb19a8-0256-49f3-9993-c7e817f85975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.706 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a0283821-7bff-4f1f-8f6d-42ebc5a3695e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.735 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[96f0754b-29d7-495c-a367-98ef21f39d4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660038, 'reachable_time': 17866, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327235, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.738 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:07:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.739 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[4201e93f-320f-4aa0-85e9-50053550c98b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d0a07d59e\x2dbe8b\x2d4d41\x2da103\x2dfb5a64bf6f88.mount: Deactivated successfully.
Oct 14 09:07:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 109 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.802 2 DEBUG nova.network.neutron [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Updating instance_info_cache with network_info: [{"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.825 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Releasing lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.825 2 DEBUG nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Instance network_info: |[{"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.826 2 DEBUG oslo_concurrency.lockutils [req-a7d5f779-4abe-4596-8ae3-5e31059778bc req-945f8585-9151-4a0a-a18a-25120e212d54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.826 2 DEBUG nova.network.neutron [req-a7d5f779-4abe-4596-8ae3-5e31059778bc req-945f8585-9151-4a0a-a18a-25120e212d54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Refreshing network info cache for port e18648ba-6112-40fa-85f6-bdf82a012079 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.831 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Start _get_guest_xml network_info=[{"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.839 2 WARNING nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.851 2 DEBUG nova.virt.libvirt.host [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.852 2 DEBUG nova.virt.libvirt.host [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.857 2 DEBUG nova.virt.libvirt.host [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.858 2 DEBUG nova.virt.libvirt.host [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.859 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.859 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.860 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.861 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.861 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.862 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.862 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.862 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.863 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.863 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.864 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.864 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.869 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:19 compute-0 nova_compute[259627]: 2025-10-14 09:07:19.921 2 INFO nova.virt.libvirt.driver [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Beginning cold snapshot process
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.080 2 DEBUG nova.virt.libvirt.imagebackend [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.335 2 DEBUG nova.storage.rbd_utils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] creating snapshot(0ec03d9c9ffa40e4b2f3296f6173a878) on rbd image(c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:07:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:07:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3620390014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.405 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.424 2 DEBUG nova.storage.rbd_utils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.428 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Oct 14 09:07:20 compute-0 ceph-mon[74249]: pgmap v1552: 305 pgs: 305 active+clean; 109 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 14 09:07:20 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3620390014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:07:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Oct 14 09:07:20 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Oct 14 09:07:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:07:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/107769105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.869 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.872 2 DEBUG nova.virt.libvirt.vif [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1278548098',display_name='tempest-ServerActionsTestOtherB-server-1278548098',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1278548098',id=64,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN/kVgKkHzFM6KgYtJMEi52k+/MuBrPIt79IRFIgFmNTlVvXooEFluDr37nozPBAZXSiIdNHa7h8jeIafiglGDw1A5mNs3hIQ2Rxweba0GKcdCWJKvOM6RPyHsBm/r09+g==',key_name='tempest-keypair-1307751836',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4bda6775f81f403e83269a5f798c9853',ramdisk_id='',reservation_id='r-j6ifs0px',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-381012378',owner_user_name='tempest-ServerActionsTestOtherB-381012378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:07:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='695c749a8dce4506a31e2cec4f02876b',uuid=e065d857-2df9-4199-aa98-41ca3c436bad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.872 2 DEBUG nova.network.os_vif_util [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converting VIF {"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.874 2 DEBUG nova.network.os_vif_util [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:9e:35,bridge_name='br-int',has_traffic_filtering=True,id=e18648ba-6112-40fa-85f6-bdf82a012079,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18648ba-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.876 2 DEBUG nova.objects.instance [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'pci_devices' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.894 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:07:20 compute-0 nova_compute[259627]:   <uuid>e065d857-2df9-4199-aa98-41ca3c436bad</uuid>
Oct 14 09:07:20 compute-0 nova_compute[259627]:   <name>instance-00000040</name>
Oct 14 09:07:20 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:07:20 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:07:20 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerActionsTestOtherB-server-1278548098</nova:name>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:07:19</nova:creationTime>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:07:20 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:07:20 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:07:20 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:07:20 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:07:20 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:07:20 compute-0 nova_compute[259627]:         <nova:user uuid="695c749a8dce4506a31e2cec4f02876b">tempest-ServerActionsTestOtherB-381012378-project-member</nova:user>
Oct 14 09:07:20 compute-0 nova_compute[259627]:         <nova:project uuid="4bda6775f81f403e83269a5f798c9853">tempest-ServerActionsTestOtherB-381012378</nova:project>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:07:20 compute-0 nova_compute[259627]:         <nova:port uuid="e18648ba-6112-40fa-85f6-bdf82a012079">
Oct 14 09:07:20 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:07:20 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:07:20 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <system>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <entry name="serial">e065d857-2df9-4199-aa98-41ca3c436bad</entry>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <entry name="uuid">e065d857-2df9-4199-aa98-41ca3c436bad</entry>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     </system>
Oct 14 09:07:20 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:07:20 compute-0 nova_compute[259627]:   <os>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:   </os>
Oct 14 09:07:20 compute-0 nova_compute[259627]:   <features>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:   </features>
Oct 14 09:07:20 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:07:20 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:07:20 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e065d857-2df9-4199-aa98-41ca3c436bad_disk">
Oct 14 09:07:20 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       </source>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:07:20 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e065d857-2df9-4199-aa98-41ca3c436bad_disk.config">
Oct 14 09:07:20 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       </source>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:07:20 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:0b:9e:35"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <target dev="tape18648ba-61"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/console.log" append="off"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <video>
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     </video>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:07:20 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:07:20 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:07:20 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:07:20 compute-0 nova_compute[259627]: </domain>
Oct 14 09:07:20 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.896 2 DEBUG nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Preparing to wait for external event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.896 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.897 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.897 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.898 2 DEBUG nova.virt.libvirt.vif [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1278548098',display_name='tempest-ServerActionsTestOtherB-server-1278548098',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1278548098',id=64,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN/kVgKkHzFM6KgYtJMEi52k+/MuBrPIt79IRFIgFmNTlVvXooEFluDr37nozPBAZXSiIdNHa7h8jeIafiglGDw1A5mNs3hIQ2Rxweba0GKcdCWJKvOM6RPyHsBm/r09+g==',key_name='tempest-keypair-1307751836',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4bda6775f81f403e83269a5f798c9853',ramdisk_id='',reservation_id='r-j6ifs0px',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-381012378',owner_user_name='tempest-ServerActionsTestOtherB-381012378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:07:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='695c749a8dce4506a31e2cec4f02876b',uuid=e065d857-2df9-4199-aa98-41ca3c436bad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.899 2 DEBUG nova.network.os_vif_util [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converting VIF {"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.899 2 DEBUG nova.network.os_vif_util [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:9e:35,bridge_name='br-int',has_traffic_filtering=True,id=e18648ba-6112-40fa-85f6-bdf82a012079,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18648ba-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.900 2 DEBUG os_vif [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:9e:35,bridge_name='br-int',has_traffic_filtering=True,id=e18648ba-6112-40fa-85f6-bdf82a012079,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18648ba-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.903 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.907 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape18648ba-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.908 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape18648ba-61, col_values=(('external_ids', {'iface-id': 'e18648ba-6112-40fa-85f6-bdf82a012079', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:9e:35', 'vm-uuid': 'e065d857-2df9-4199-aa98-41ca3c436bad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:20 compute-0 NetworkManager[44885]: <info>  [1760432840.9115] manager: (tape18648ba-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.920 2 DEBUG nova.storage.rbd_utils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] cloning vms/c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk@0ec03d9c9ffa40e4b2f3296f6173a878 to images/64285d7a-1987-45f5-9cf9-f0e67a4d8856 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.969 2 INFO os_vif [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:9e:35,bridge_name='br-int',has_traffic_filtering=True,id=e18648ba-6112-40fa-85f6-bdf82a012079,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18648ba-61')
Oct 14 09:07:20 compute-0 nova_compute[259627]: 2025-10-14 09:07:20.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:07:21 compute-0 nova_compute[259627]: 2025-10-14 09:07:21.043 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:07:21 compute-0 nova_compute[259627]: 2025-10-14 09:07:21.044 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:07:21 compute-0 nova_compute[259627]: 2025-10-14 09:07:21.044 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No VIF found with MAC fa:16:3e:0b:9e:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:07:21 compute-0 nova_compute[259627]: 2025-10-14 09:07:21.045 2 INFO nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Using config drive
Oct 14 09:07:21 compute-0 nova_compute[259627]: 2025-10-14 09:07:21.077 2 DEBUG nova.storage.rbd_utils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:21 compute-0 nova_compute[259627]: 2025-10-14 09:07:21.092 2 DEBUG nova.storage.rbd_utils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] flattening images/64285d7a-1987-45f5-9cf9-f0e67a4d8856 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:07:21 compute-0 nova_compute[259627]: 2025-10-14 09:07:21.500 2 DEBUG nova.compute.manager [req-3e7aa7a7-98f1-4e95-9715-b7510752d56a req-ae085b6d-b8e4-4c15-889a-9ec37d9b4e72 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Received event network-vif-unplugged-93e6162a-d037-4440-9c8c-1cb9b293f249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:21 compute-0 nova_compute[259627]: 2025-10-14 09:07:21.500 2 DEBUG oslo_concurrency.lockutils [req-3e7aa7a7-98f1-4e95-9715-b7510752d56a req-ae085b6d-b8e4-4c15-889a-9ec37d9b4e72 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:21 compute-0 nova_compute[259627]: 2025-10-14 09:07:21.501 2 DEBUG oslo_concurrency.lockutils [req-3e7aa7a7-98f1-4e95-9715-b7510752d56a req-ae085b6d-b8e4-4c15-889a-9ec37d9b4e72 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:21 compute-0 nova_compute[259627]: 2025-10-14 09:07:21.501 2 DEBUG oslo_concurrency.lockutils [req-3e7aa7a7-98f1-4e95-9715-b7510752d56a req-ae085b6d-b8e4-4c15-889a-9ec37d9b4e72 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:21 compute-0 nova_compute[259627]: 2025-10-14 09:07:21.501 2 DEBUG nova.compute.manager [req-3e7aa7a7-98f1-4e95-9715-b7510752d56a req-ae085b6d-b8e4-4c15-889a-9ec37d9b4e72 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] No waiting events found dispatching network-vif-unplugged-93e6162a-d037-4440-9c8c-1cb9b293f249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:07:21 compute-0 nova_compute[259627]: 2025-10-14 09:07:21.501 2 WARNING nova.compute.manager [req-3e7aa7a7-98f1-4e95-9715-b7510752d56a req-ae085b6d-b8e4-4c15-889a-9ec37d9b4e72 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Received unexpected event network-vif-unplugged-93e6162a-d037-4440-9c8c-1cb9b293f249 for instance with vm_state active and task_state shelving_image_uploading.
Oct 14 09:07:21 compute-0 nova_compute[259627]: 2025-10-14 09:07:21.574 2 DEBUG nova.storage.rbd_utils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] removing snapshot(0ec03d9c9ffa40e4b2f3296f6173a878) on rbd image(c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:07:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 167 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 491 KiB/s rd, 4.7 MiB/s wr, 118 op/s
Oct 14 09:07:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Oct 14 09:07:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Oct 14 09:07:21 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Oct 14 09:07:21 compute-0 ceph-mon[74249]: osdmap e219: 3 total, 3 up, 3 in
Oct 14 09:07:21 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/107769105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:07:21 compute-0 nova_compute[259627]: 2025-10-14 09:07:21.890 2 DEBUG nova.storage.rbd_utils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] creating snapshot(snap) on rbd image(64285d7a-1987-45f5-9cf9-f0e67a4d8856) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:07:21 compute-0 nova_compute[259627]: 2025-10-14 09:07:21.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:07:22 compute-0 nova_compute[259627]: 2025-10-14 09:07:22.141 2 DEBUG nova.network.neutron [req-a7d5f779-4abe-4596-8ae3-5e31059778bc req-945f8585-9151-4a0a-a18a-25120e212d54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Updated VIF entry in instance network info cache for port e18648ba-6112-40fa-85f6-bdf82a012079. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:07:22 compute-0 nova_compute[259627]: 2025-10-14 09:07:22.142 2 DEBUG nova.network.neutron [req-a7d5f779-4abe-4596-8ae3-5e31059778bc req-945f8585-9151-4a0a-a18a-25120e212d54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Updating instance_info_cache with network_info: [{"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:07:22 compute-0 nova_compute[259627]: 2025-10-14 09:07:22.160 2 DEBUG oslo_concurrency.lockutils [req-a7d5f779-4abe-4596-8ae3-5e31059778bc req-945f8585-9151-4a0a-a18a-25120e212d54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:07:22 compute-0 nova_compute[259627]: 2025-10-14 09:07:22.186 2 INFO nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Creating config drive at /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/disk.config
Oct 14 09:07:22 compute-0 nova_compute[259627]: 2025-10-14 09:07:22.191 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp85ehh5sb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:22 compute-0 nova_compute[259627]: 2025-10-14 09:07:22.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:22 compute-0 nova_compute[259627]: 2025-10-14 09:07:22.348 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp85ehh5sb" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:22 compute-0 nova_compute[259627]: 2025-10-14 09:07:22.379 2 DEBUG nova.storage.rbd_utils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:22 compute-0 nova_compute[259627]: 2025-10-14 09:07:22.384 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/disk.config e065d857-2df9-4199-aa98-41ca3c436bad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:22 compute-0 nova_compute[259627]: 2025-10-14 09:07:22.589 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/disk.config e065d857-2df9-4199-aa98-41ca3c436bad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:22 compute-0 nova_compute[259627]: 2025-10-14 09:07:22.590 2 INFO nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Deleting local config drive /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/disk.config because it was imported into RBD.
Oct 14 09:07:22 compute-0 kernel: tape18648ba-61: entered promiscuous mode
Oct 14 09:07:22 compute-0 NetworkManager[44885]: <info>  [1760432842.6495] manager: (tape18648ba-61): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Oct 14 09:07:22 compute-0 ovn_controller[152662]: 2025-10-14T09:07:22Z|00652|binding|INFO|Claiming lport e18648ba-6112-40fa-85f6-bdf82a012079 for this chassis.
Oct 14 09:07:22 compute-0 nova_compute[259627]: 2025-10-14 09:07:22.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:22 compute-0 ovn_controller[152662]: 2025-10-14T09:07:22Z|00653|binding|INFO|e18648ba-6112-40fa-85f6-bdf82a012079: Claiming fa:16:3e:0b:9e:35 10.100.0.9
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.663 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:9e:35 10.100.0.9'], port_security=['fa:16:3e:0b:9e:35 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e065d857-2df9-4199-aa98-41ca3c436bad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bda6775f81f403e83269a5f798c9853', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'baab55cf-9843-49b9-a43b-28ca1ab122c2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88e90b59-4c4c-42c1-a4ed-574ac64367e5, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e18648ba-6112-40fa-85f6-bdf82a012079) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.664 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e18648ba-6112-40fa-85f6-bdf82a012079 in datapath 9d540b01-e9c4-4dc5-9a51-94512ad9a409 bound to our chassis
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.666 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d540b01-e9c4-4dc5-9a51-94512ad9a409
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.679 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ba9883f8-3397-42d6-860c-3bf33d181c83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.681 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9d540b01-e1 in ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:07:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:07:22 compute-0 systemd-machined[214636]: New machine qemu-78-instance-00000040.
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.683 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9d540b01-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.683 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9978c729-1597-48c4-bb26-61dd715ad288]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.684 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0bb5ce0-de24-4739-b664-3463c4a63f76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:22 compute-0 podman[327502]: 2025-10-14 09:07:22.689457158 +0000 UTC m=+0.093981836 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.700 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[323d18cc-cbbc-4f96-bffb-d11b97d61a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:22 compute-0 systemd[1]: Started Virtual Machine qemu-78-instance-00000040.
Oct 14 09:07:22 compute-0 systemd-udevd[327562]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.725 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[72267736-3691-4925-b339-771102f2cc97]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:22 compute-0 nova_compute[259627]: 2025-10-14 09:07:22.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:22 compute-0 ovn_controller[152662]: 2025-10-14T09:07:22Z|00654|binding|INFO|Setting lport e18648ba-6112-40fa-85f6-bdf82a012079 ovn-installed in OVS
Oct 14 09:07:22 compute-0 ovn_controller[152662]: 2025-10-14T09:07:22Z|00655|binding|INFO|Setting lport e18648ba-6112-40fa-85f6-bdf82a012079 up in Southbound
Oct 14 09:07:22 compute-0 podman[327501]: 2025-10-14 09:07:22.738090125 +0000 UTC m=+0.141860944 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 09:07:22 compute-0 NetworkManager[44885]: <info>  [1760432842.7422] device (tape18648ba-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:07:22 compute-0 NetworkManager[44885]: <info>  [1760432842.7433] device (tape18648ba-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:07:22 compute-0 nova_compute[259627]: 2025-10-14 09:07:22.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.761 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cdcc2f1d-0d4f-4a73-bfb0-1c8dc0862df7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:22 compute-0 systemd-udevd[327573]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:07:22 compute-0 NetworkManager[44885]: <info>  [1760432842.7670] manager: (tap9d540b01-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/275)
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.768 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[71ecb441-2634-4630-bc58-c4687b912e34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.796 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0256ed-8efd-450d-b066-4250373be3ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.799 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[91b71b32-a17e-4733-ad2f-5d877c1c0981]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:22 compute-0 NetworkManager[44885]: <info>  [1760432842.8197] device (tap9d540b01-e0): carrier: link connected
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.825 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fda78a3f-147b-4035-a5b6-9dbae9f6df83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.839 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e26d7dcc-7503-4eb7-bf9c-83181bf89450]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d540b01-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:6a:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662158, 'reachable_time': 22381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327597, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.856 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f70463fa-cb58-4e13-a85f-8e8364e57479]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:6aec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662158, 'tstamp': 662158}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327598, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.870 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[17ebc2c2-ace0-460c-870a-bdb9d04e323a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d540b01-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:6a:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662158, 'reachable_time': 22381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327599, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:22 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Oct 14 09:07:22 compute-0 ceph-mon[74249]: pgmap v1554: 305 pgs: 305 active+clean; 167 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 491 KiB/s rd, 4.7 MiB/s wr, 118 op/s
Oct 14 09:07:22 compute-0 ceph-mon[74249]: osdmap e220: 3 total, 3 up, 3 in
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.903 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[82698df5-b05f-4fea-8126-9356de91dd93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:22 compute-0 nova_compute[259627]: 2025-10-14 09:07:22.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.981 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6c4a500e-89a5-48b1-92ea-86c6c978839d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.982 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d540b01-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.983 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.983 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d540b01-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:22 compute-0 NetworkManager[44885]: <info>  [1760432842.9864] manager: (tap9d540b01-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/276)
Oct 14 09:07:22 compute-0 kernel: tap9d540b01-e0: entered promiscuous mode
Oct 14 09:07:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.992 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d540b01-e0, col_values=(('external_ids', {'iface-id': 'fcca615a-5470-4880-844d-73adc425bce1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:22 compute-0 ovn_controller[152662]: 2025-10-14T09:07:22Z|00656|binding|INFO|Releasing lport fcca615a-5470-4880-844d-73adc425bce1 from this chassis (sb_readonly=0)
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:23.060 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9d540b01-e9c4-4dc5-9a51-94512ad9a409.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9d540b01-e9c4-4dc5-9a51-94512ad9a409.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:23.062 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[591caace-b4df-4d4f-93a6-15e9e7264bcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:23.063 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-9d540b01-e9c4-4dc5-9a51-94512ad9a409
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/9d540b01-e9c4-4dc5-9a51-94512ad9a409.pid.haproxy
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 9d540b01-e9c4-4dc5-9a51-94512ad9a409
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:07:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:23.067 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'env', 'PROCESS_TAG=haproxy-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9d540b01-e9c4-4dc5-9a51-94512ad9a409.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.163 2 DEBUG nova.compute.manager [req-34525acc-0d9b-4ebc-bb56-12bcfdec9f58 req-7afacfb9-3892-4370-b029-e68c60e5862c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.164 2 DEBUG oslo_concurrency.lockutils [req-34525acc-0d9b-4ebc-bb56-12bcfdec9f58 req-7afacfb9-3892-4370-b029-e68c60e5862c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.164 2 DEBUG oslo_concurrency.lockutils [req-34525acc-0d9b-4ebc-bb56-12bcfdec9f58 req-7afacfb9-3892-4370-b029-e68c60e5862c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.166 2 DEBUG oslo_concurrency.lockutils [req-34525acc-0d9b-4ebc-bb56-12bcfdec9f58 req-7afacfb9-3892-4370-b029-e68c60e5862c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.167 2 DEBUG nova.compute.manager [req-34525acc-0d9b-4ebc-bb56-12bcfdec9f58 req-7afacfb9-3892-4370-b029-e68c60e5862c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Processing event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:07:23 compute-0 podman[327677]: 2025-10-14 09:07:23.452662454 +0000 UTC m=+0.072834675 container create d7ba7ebc0e91d20684f519c582dc85351814f5e3e7324b64ad8f9ffcff668f4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:07:23 compute-0 systemd[1]: Started libpod-conmon-d7ba7ebc0e91d20684f519c582dc85351814f5e3e7324b64ad8f9ffcff668f4a.scope.
Oct 14 09:07:23 compute-0 podman[327677]: 2025-10-14 09:07:23.409757077 +0000 UTC m=+0.029929398 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:07:23 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670696918cb8387e8757572bf34cc188f3c339f0a3acc9d8f14f93b3255809dc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:07:23 compute-0 podman[327677]: 2025-10-14 09:07:23.549942499 +0000 UTC m=+0.170114720 container init d7ba7ebc0e91d20684f519c582dc85351814f5e3e7324b64ad8f9ffcff668f4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:07:23 compute-0 podman[327677]: 2025-10-14 09:07:23.559390612 +0000 UTC m=+0.179562833 container start d7ba7ebc0e91d20684f519c582dc85351814f5e3e7324b64ad8f9ffcff668f4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 09:07:23 compute-0 neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409[327692]: [NOTICE]   (327696) : New worker (327698) forked
Oct 14 09:07:23 compute-0 neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409[327692]: [NOTICE]   (327696) : Loading success.
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.606 2 DEBUG nova.compute.manager [req-8a624555-d31d-497f-9ebb-94bebd0f798f req-419e2f16-55c9-449e-a97f-fcfc1a2f1386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Received event network-vif-plugged-93e6162a-d037-4440-9c8c-1cb9b293f249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.606 2 DEBUG oslo_concurrency.lockutils [req-8a624555-d31d-497f-9ebb-94bebd0f798f req-419e2f16-55c9-449e-a97f-fcfc1a2f1386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.606 2 DEBUG oslo_concurrency.lockutils [req-8a624555-d31d-497f-9ebb-94bebd0f798f req-419e2f16-55c9-449e-a97f-fcfc1a2f1386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.606 2 DEBUG oslo_concurrency.lockutils [req-8a624555-d31d-497f-9ebb-94bebd0f798f req-419e2f16-55c9-449e-a97f-fcfc1a2f1386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.606 2 DEBUG nova.compute.manager [req-8a624555-d31d-497f-9ebb-94bebd0f798f req-419e2f16-55c9-449e-a97f-fcfc1a2f1386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] No waiting events found dispatching network-vif-plugged-93e6162a-d037-4440-9c8c-1cb9b293f249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.607 2 WARNING nova.compute.manager [req-8a624555-d31d-497f-9ebb-94bebd0f798f req-419e2f16-55c9-449e-a97f-fcfc1a2f1386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Received unexpected event network-vif-plugged-93e6162a-d037-4440-9c8c-1cb9b293f249 for instance with vm_state active and task_state shelving_image_uploading.
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.686 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432828.6852503, a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.687 2 INFO nova.compute.manager [-] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] VM Stopped (Lifecycle Event)
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.711 2 DEBUG nova.compute.manager [None req-713cd2c8-4ff2-4fd7-873c-844becfb6b10 - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 167 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 3.8 MiB/s wr, 110 op/s
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.792 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432843.7921097, e065d857-2df9-4199-aa98-41ca3c436bad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.792 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] VM Started (Lifecycle Event)
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.794 2 DEBUG nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.800 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.805 2 INFO nova.virt.libvirt.driver [-] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Instance spawned successfully.
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.805 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.816 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.827 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.834 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.835 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.836 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.836 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.837 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.838 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.850 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.851 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432843.796003, e065d857-2df9-4199-aa98-41ca3c436bad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.851 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] VM Paused (Lifecycle Event)
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.880 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.885 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432843.8001757, e065d857-2df9-4199-aa98-41ca3c436bad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.885 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] VM Resumed (Lifecycle Event)
Oct 14 09:07:23 compute-0 ceph-mon[74249]: osdmap e221: 3 total, 3 up, 3 in
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.908 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.912 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.916 2 INFO nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Took 7.54 seconds to spawn the instance on the hypervisor.
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.916 2 DEBUG nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.929 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.973 2 INFO nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Took 8.47 seconds to build instance.
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:07:23 compute-0 nova_compute[259627]: 2025-10-14 09:07:23.989 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:24 compute-0 nova_compute[259627]: 2025-10-14 09:07:24.680 2 INFO nova.virt.libvirt.driver [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Snapshot image upload complete
Oct 14 09:07:24 compute-0 nova_compute[259627]: 2025-10-14 09:07:24.680 2 DEBUG nova.compute.manager [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:24 compute-0 nova_compute[259627]: 2025-10-14 09:07:24.727 2 INFO nova.compute.manager [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Shelve offloading
Oct 14 09:07:24 compute-0 nova_compute[259627]: 2025-10-14 09:07:24.733 2 INFO nova.virt.libvirt.driver [-] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Instance destroyed successfully.
Oct 14 09:07:24 compute-0 nova_compute[259627]: 2025-10-14 09:07:24.733 2 DEBUG nova.compute.manager [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:24 compute-0 nova_compute[259627]: 2025-10-14 09:07:24.736 2 DEBUG oslo_concurrency.lockutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:07:24 compute-0 nova_compute[259627]: 2025-10-14 09:07:24.736 2 DEBUG oslo_concurrency.lockutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquired lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:07:24 compute-0 nova_compute[259627]: 2025-10-14 09:07:24.736 2 DEBUG nova.network.neutron [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:07:24 compute-0 nova_compute[259627]: 2025-10-14 09:07:24.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:07:24 compute-0 nova_compute[259627]: 2025-10-14 09:07:24.997 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:24 compute-0 nova_compute[259627]: 2025-10-14 09:07:24.997 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:24 compute-0 nova_compute[259627]: 2025-10-14 09:07:24.998 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:24 compute-0 nova_compute[259627]: 2025-10-14 09:07:24.998 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:07:24 compute-0 nova_compute[259627]: 2025-10-14 09:07:24.998 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:25 compute-0 ceph-mon[74249]: pgmap v1557: 305 pgs: 305 active+clean; 167 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 3.8 MiB/s wr, 110 op/s
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.265 2 DEBUG nova.compute.manager [req-319179a8-3a80-4a0b-83bc-7b798ee8d8fd req-18e79236-b022-4501-bc08-73d722291fab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.266 2 DEBUG oslo_concurrency.lockutils [req-319179a8-3a80-4a0b-83bc-7b798ee8d8fd req-18e79236-b022-4501-bc08-73d722291fab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.267 2 DEBUG oslo_concurrency.lockutils [req-319179a8-3a80-4a0b-83bc-7b798ee8d8fd req-18e79236-b022-4501-bc08-73d722291fab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.268 2 DEBUG oslo_concurrency.lockutils [req-319179a8-3a80-4a0b-83bc-7b798ee8d8fd req-18e79236-b022-4501-bc08-73d722291fab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.268 2 DEBUG nova.compute.manager [req-319179a8-3a80-4a0b-83bc-7b798ee8d8fd req-18e79236-b022-4501-bc08-73d722291fab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] No waiting events found dispatching network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.268 2 WARNING nova.compute.manager [req-319179a8-3a80-4a0b-83bc-7b798ee8d8fd req-18e79236-b022-4501-bc08-73d722291fab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received unexpected event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 for instance with vm_state active and task_state None.
Oct 14 09:07:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:07:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2982781137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.457 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.521 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.522 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.526 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.527 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.679 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.680 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3863MB free_disk=59.921966552734375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.680 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.681 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.759 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance c8d53ba7-c60f-4e5c-899f-fd95996ea742 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.759 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance e065d857-2df9-4199-aa98-41ca3c436bad actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.760 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.761 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:07:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 246 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 415 op/s
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.820 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:25 compute-0 nova_compute[259627]: 2025-10-14 09:07:25.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:26 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2982781137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:07:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:07:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1828970195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:07:26 compute-0 nova_compute[259627]: 2025-10-14 09:07:26.310 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:26 compute-0 nova_compute[259627]: 2025-10-14 09:07:26.320 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:07:26 compute-0 nova_compute[259627]: 2025-10-14 09:07:26.351 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:07:26 compute-0 nova_compute[259627]: 2025-10-14 09:07:26.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:26 compute-0 NetworkManager[44885]: <info>  [1760432846.3609] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Oct 14 09:07:26 compute-0 NetworkManager[44885]: <info>  [1760432846.3629] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Oct 14 09:07:26 compute-0 nova_compute[259627]: 2025-10-14 09:07:26.389 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:07:26 compute-0 nova_compute[259627]: 2025-10-14 09:07:26.389 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:26 compute-0 nova_compute[259627]: 2025-10-14 09:07:26.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:26 compute-0 ovn_controller[152662]: 2025-10-14T09:07:26Z|00657|binding|INFO|Releasing lport fcca615a-5470-4880-844d-73adc425bce1 from this chassis (sb_readonly=0)
Oct 14 09:07:26 compute-0 nova_compute[259627]: 2025-10-14 09:07:26.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:26 compute-0 nova_compute[259627]: 2025-10-14 09:07:26.706 2 DEBUG nova.compute.manager [req-14231968-17b5-455d-94a3-ebda6b503b0f req-a9c31bde-c241-40d9-92f7-46d9e34fc8b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received event network-changed-e18648ba-6112-40fa-85f6-bdf82a012079 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:26 compute-0 nova_compute[259627]: 2025-10-14 09:07:26.707 2 DEBUG nova.compute.manager [req-14231968-17b5-455d-94a3-ebda6b503b0f req-a9c31bde-c241-40d9-92f7-46d9e34fc8b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Refreshing instance network info cache due to event network-changed-e18648ba-6112-40fa-85f6-bdf82a012079. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:07:26 compute-0 nova_compute[259627]: 2025-10-14 09:07:26.707 2 DEBUG oslo_concurrency.lockutils [req-14231968-17b5-455d-94a3-ebda6b503b0f req-a9c31bde-c241-40d9-92f7-46d9e34fc8b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:07:26 compute-0 nova_compute[259627]: 2025-10-14 09:07:26.708 2 DEBUG oslo_concurrency.lockutils [req-14231968-17b5-455d-94a3-ebda6b503b0f req-a9c31bde-c241-40d9-92f7-46d9e34fc8b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:07:26 compute-0 nova_compute[259627]: 2025-10-14 09:07:26.708 2 DEBUG nova.network.neutron [req-14231968-17b5-455d-94a3-ebda6b503b0f req-a9c31bde-c241-40d9-92f7-46d9e34fc8b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Refreshing network info cache for port e18648ba-6112-40fa-85f6-bdf82a012079 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:07:26 compute-0 nova_compute[259627]: 2025-10-14 09:07:26.806 2 DEBUG nova.network.neutron [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Updating instance_info_cache with network_info: [{"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:07:26 compute-0 nova_compute[259627]: 2025-10-14 09:07:26.829 2 DEBUG oslo_concurrency.lockutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Releasing lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:07:27 compute-0 ceph-mon[74249]: pgmap v1558: 305 pgs: 305 active+clean; 246 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 415 op/s
Oct 14 09:07:27 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1828970195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:07:27 compute-0 nova_compute[259627]: 2025-10-14 09:07:27.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:27 compute-0 nova_compute[259627]: 2025-10-14 09:07:27.391 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:07:27 compute-0 nova_compute[259627]: 2025-10-14 09:07:27.391 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:07:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:07:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Oct 14 09:07:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Oct 14 09:07:27 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Oct 14 09:07:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 246 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 305 op/s
Oct 14 09:07:27 compute-0 nova_compute[259627]: 2025-10-14 09:07:27.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:07:27 compute-0 nova_compute[259627]: 2025-10-14 09:07:27.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:07:27 compute-0 nova_compute[259627]: 2025-10-14 09:07:27.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:07:27 compute-0 nova_compute[259627]: 2025-10-14 09:07:27.996 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:07:27 compute-0 nova_compute[259627]: 2025-10-14 09:07:27.996 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:07:27 compute-0 nova_compute[259627]: 2025-10-14 09:07:27.996 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:07:27 compute-0 nova_compute[259627]: 2025-10-14 09:07:27.997 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c8d53ba7-c60f-4e5c-899f-fd95996ea742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:07:28 compute-0 ceph-mon[74249]: osdmap e222: 3 total, 3 up, 3 in
Oct 14 09:07:28 compute-0 ceph-mon[74249]: pgmap v1560: 305 pgs: 305 active+clean; 246 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 305 op/s
Oct 14 09:07:28 compute-0 nova_compute[259627]: 2025-10-14 09:07:28.772 2 INFO nova.virt.libvirt.driver [-] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Instance destroyed successfully.
Oct 14 09:07:28 compute-0 nova_compute[259627]: 2025-10-14 09:07:28.773 2 DEBUG nova.objects.instance [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'resources' on Instance uuid c8d53ba7-c60f-4e5c-899f-fd95996ea742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:07:28 compute-0 nova_compute[259627]: 2025-10-14 09:07:28.791 2 DEBUG nova.virt.libvirt.vif [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1969379492',display_name='tempest-DeleteServersTestJSON-server-1969379492',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1969379492',id=63,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:07:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-399cflte',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member',shelved_at='2025-10-14T09:07:24.680673',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='64285d7a-1987-45f5-9cf9-f0e67a4d8856'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:07:20Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=c8d53ba7-c60f-4e5c-899f-fd95996ea742,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:07:28 compute-0 nova_compute[259627]: 2025-10-14 09:07:28.791 2 DEBUG nova.network.os_vif_util [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:07:28 compute-0 nova_compute[259627]: 2025-10-14 09:07:28.793 2 DEBUG nova.network.os_vif_util [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:f7:ed,bridge_name='br-int',has_traffic_filtering=True,id=93e6162a-d037-4440-9c8c-1cb9b293f249,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93e6162a-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:07:28 compute-0 nova_compute[259627]: 2025-10-14 09:07:28.793 2 DEBUG os_vif [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:f7:ed,bridge_name='br-int',has_traffic_filtering=True,id=93e6162a-d037-4440-9c8c-1cb9b293f249,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93e6162a-d0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:07:28 compute-0 nova_compute[259627]: 2025-10-14 09:07:28.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:28 compute-0 nova_compute[259627]: 2025-10-14 09:07:28.797 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93e6162a-d0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:28 compute-0 nova_compute[259627]: 2025-10-14 09:07:28.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:28 compute-0 nova_compute[259627]: 2025-10-14 09:07:28.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:07:28 compute-0 nova_compute[259627]: 2025-10-14 09:07:28.815 2 INFO os_vif [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:f7:ed,bridge_name='br-int',has_traffic_filtering=True,id=93e6162a-d037-4440-9c8c-1cb9b293f249,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93e6162a-d0')
Oct 14 09:07:28 compute-0 nova_compute[259627]: 2025-10-14 09:07:28.872 2 DEBUG nova.compute.manager [req-53844b2b-abb2-43ac-9bbc-f3dfee100ff5 req-39deca61-8cea-4d9a-a37c-c19ae38e1285 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Received event network-changed-93e6162a-d037-4440-9c8c-1cb9b293f249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:28 compute-0 nova_compute[259627]: 2025-10-14 09:07:28.873 2 DEBUG nova.compute.manager [req-53844b2b-abb2-43ac-9bbc-f3dfee100ff5 req-39deca61-8cea-4d9a-a37c-c19ae38e1285 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Refreshing instance network info cache due to event network-changed-93e6162a-d037-4440-9c8c-1cb9b293f249. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:07:28 compute-0 nova_compute[259627]: 2025-10-14 09:07:28.874 2 DEBUG oslo_concurrency.lockutils [req-53844b2b-abb2-43ac-9bbc-f3dfee100ff5 req-39deca61-8cea-4d9a-a37c-c19ae38e1285 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:07:28 compute-0 nova_compute[259627]: 2025-10-14 09:07:28.889 2 DEBUG nova.network.neutron [req-14231968-17b5-455d-94a3-ebda6b503b0f req-a9c31bde-c241-40d9-92f7-46d9e34fc8b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Updated VIF entry in instance network info cache for port e18648ba-6112-40fa-85f6-bdf82a012079. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:07:28 compute-0 nova_compute[259627]: 2025-10-14 09:07:28.890 2 DEBUG nova.network.neutron [req-14231968-17b5-455d-94a3-ebda6b503b0f req-a9c31bde-c241-40d9-92f7-46d9e34fc8b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Updating instance_info_cache with network_info: [{"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:07:28 compute-0 nova_compute[259627]: 2025-10-14 09:07:28.919 2 DEBUG oslo_concurrency.lockutils [req-14231968-17b5-455d-94a3-ebda6b503b0f req-a9c31bde-c241-40d9-92f7-46d9e34fc8b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:07:29 compute-0 nova_compute[259627]: 2025-10-14 09:07:29.341 2 INFO nova.virt.libvirt.driver [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Deleting instance files /var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742_del
Oct 14 09:07:29 compute-0 nova_compute[259627]: 2025-10-14 09:07:29.342 2 INFO nova.virt.libvirt.driver [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Deletion of /var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742_del complete
Oct 14 09:07:29 compute-0 nova_compute[259627]: 2025-10-14 09:07:29.457 2 INFO nova.scheduler.client.report [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Deleted allocations for instance c8d53ba7-c60f-4e5c-899f-fd95996ea742
Oct 14 09:07:29 compute-0 nova_compute[259627]: 2025-10-14 09:07:29.511 2 DEBUG oslo_concurrency.lockutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:29 compute-0 nova_compute[259627]: 2025-10-14 09:07:29.511 2 DEBUG oslo_concurrency.lockutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:29 compute-0 nova_compute[259627]: 2025-10-14 09:07:29.564 2 DEBUG oslo_concurrency.processutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:29 compute-0 nova_compute[259627]: 2025-10-14 09:07:29.622 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Updating instance_info_cache with network_info: [{"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:07:29 compute-0 nova_compute[259627]: 2025-10-14 09:07:29.657 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:07:29 compute-0 nova_compute[259627]: 2025-10-14 09:07:29.658 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:07:29 compute-0 nova_compute[259627]: 2025-10-14 09:07:29.659 2 DEBUG oslo_concurrency.lockutils [req-53844b2b-abb2-43ac-9bbc-f3dfee100ff5 req-39deca61-8cea-4d9a-a37c-c19ae38e1285 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:07:29 compute-0 nova_compute[259627]: 2025-10-14 09:07:29.660 2 DEBUG nova.network.neutron [req-53844b2b-abb2-43ac-9bbc-f3dfee100ff5 req-39deca61-8cea-4d9a-a37c-c19ae38e1285 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Refreshing network info cache for port 93e6162a-d037-4440-9c8c-1cb9b293f249 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:07:29 compute-0 nova_compute[259627]: 2025-10-14 09:07:29.662 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:07:29 compute-0 nova_compute[259627]: 2025-10-14 09:07:29.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 246 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 5.9 MiB/s wr, 231 op/s
Oct 14 09:07:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:07:30 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3148408731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:07:30 compute-0 nova_compute[259627]: 2025-10-14 09:07:30.085 2 DEBUG oslo_concurrency.processutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:30 compute-0 nova_compute[259627]: 2025-10-14 09:07:30.091 2 DEBUG nova.compute.provider_tree [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:07:30 compute-0 nova_compute[259627]: 2025-10-14 09:07:30.112 2 DEBUG nova.scheduler.client.report [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:07:30 compute-0 nova_compute[259627]: 2025-10-14 09:07:30.144 2 DEBUG oslo_concurrency.lockutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:30 compute-0 nova_compute[259627]: 2025-10-14 09:07:30.212 2 DEBUG oslo_concurrency.lockutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 24.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Oct 14 09:07:30 compute-0 ceph-mon[74249]: pgmap v1561: 305 pgs: 305 active+clean; 246 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 5.9 MiB/s wr, 231 op/s
Oct 14 09:07:30 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3148408731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:07:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Oct 14 09:07:30 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Oct 14 09:07:30 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 14 09:07:31 compute-0 nova_compute[259627]: 2025-10-14 09:07:31.514 2 DEBUG nova.network.neutron [req-53844b2b-abb2-43ac-9bbc-f3dfee100ff5 req-39deca61-8cea-4d9a-a37c-c19ae38e1285 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Updated VIF entry in instance network info cache for port 93e6162a-d037-4440-9c8c-1cb9b293f249. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:07:31 compute-0 nova_compute[259627]: 2025-10-14 09:07:31.515 2 DEBUG nova.network.neutron [req-53844b2b-abb2-43ac-9bbc-f3dfee100ff5 req-39deca61-8cea-4d9a-a37c-c19ae38e1285 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Updating instance_info_cache with network_info: [{"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": null, "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap93e6162a-d0", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:07:31 compute-0 nova_compute[259627]: 2025-10-14 09:07:31.531 2 DEBUG oslo_concurrency.lockutils [req-53844b2b-abb2-43ac-9bbc-f3dfee100ff5 req-39deca61-8cea-4d9a-a37c-c19ae38e1285 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:07:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 167 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 5.8 MiB/s wr, 277 op/s
Oct 14 09:07:31 compute-0 ceph-mon[74249]: osdmap e223: 3 total, 3 up, 3 in
Oct 14 09:07:32 compute-0 nova_compute[259627]: 2025-10-14 09:07:32.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:32 compute-0 nova_compute[259627]: 2025-10-14 09:07:32.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:07:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:07:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:07:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:07:32
Oct 14 09:07:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:07:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:07:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', 'vms', '.rgw.root', '.mgr', 'default.rgw.log', 'images', 'cephfs.cephfs.meta']
Oct 14 09:07:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:07:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:07:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:07:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:07:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:07:32 compute-0 ceph-mon[74249]: pgmap v1563: 305 pgs: 305 active+clean; 167 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 5.8 MiB/s wr, 277 op/s
Oct 14 09:07:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:07:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:07:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:07:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:07:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:07:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:07:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:07:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:07:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:07:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:07:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 167 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 1.7 KiB/s wr, 47 op/s
Oct 14 09:07:33 compute-0 nova_compute[259627]: 2025-10-14 09:07:33.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:33 compute-0 nova_compute[259627]: 2025-10-14 09:07:33.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:07:34 compute-0 nova_compute[259627]: 2025-10-14 09:07:34.445 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:34 compute-0 nova_compute[259627]: 2025-10-14 09:07:34.446 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:34 compute-0 nova_compute[259627]: 2025-10-14 09:07:34.464 2 DEBUG nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:07:34 compute-0 nova_compute[259627]: 2025-10-14 09:07:34.566 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:34 compute-0 nova_compute[259627]: 2025-10-14 09:07:34.567 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:34 compute-0 nova_compute[259627]: 2025-10-14 09:07:34.574 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:07:34 compute-0 nova_compute[259627]: 2025-10-14 09:07:34.575 2 INFO nova.compute.claims [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:07:34 compute-0 nova_compute[259627]: 2025-10-14 09:07:34.578 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432839.572321, c8d53ba7-c60f-4e5c-899f-fd95996ea742 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:07:34 compute-0 nova_compute[259627]: 2025-10-14 09:07:34.578 2 INFO nova.compute.manager [-] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] VM Stopped (Lifecycle Event)
Oct 14 09:07:34 compute-0 nova_compute[259627]: 2025-10-14 09:07:34.616 2 DEBUG nova.compute.manager [None req-eba3def5-debe-48f8-8117-be9e13e18554 - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:34 compute-0 nova_compute[259627]: 2025-10-14 09:07:34.727 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:34 compute-0 ceph-mon[74249]: pgmap v1564: 305 pgs: 305 active+clean; 167 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 1.7 KiB/s wr, 47 op/s
Oct 14 09:07:34 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct 14 09:07:34 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:34.953904) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:07:34 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct 14 09:07:34 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432854953954, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1825, "num_deletes": 252, "total_data_size": 2767758, "memory_usage": 2817408, "flush_reason": "Manual Compaction"}
Oct 14 09:07:34 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432855116847, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 2705619, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31071, "largest_seqno": 32895, "table_properties": {"data_size": 2697358, "index_size": 4947, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17817, "raw_average_key_size": 20, "raw_value_size": 2680512, "raw_average_value_size": 3066, "num_data_blocks": 220, "num_entries": 874, "num_filter_entries": 874, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760432683, "oldest_key_time": 1760432683, "file_creation_time": 1760432854, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 162972 microseconds, and 5377 cpu microseconds.
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.116882) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 2705619 bytes OK
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.116899) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.143223) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.143265) EVENT_LOG_v1 {"time_micros": 1760432855143253, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.143315) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 2759903, prev total WAL file size 2759903, number of live WAL files 2.
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.146076) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(2642KB)], [68(7258KB)]
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432855146156, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10138009, "oldest_snapshot_seqno": -1}
Oct 14 09:07:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:07:35 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469170859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.285 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.294 2 DEBUG nova.compute.provider_tree [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5752 keys, 8516455 bytes, temperature: kUnknown
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432855305280, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8516455, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8477216, "index_size": 23769, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 144502, "raw_average_key_size": 25, "raw_value_size": 8373086, "raw_average_value_size": 1455, "num_data_blocks": 967, "num_entries": 5752, "num_filter_entries": 5752, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760432855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.314 2 DEBUG nova.scheduler.client.report [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.305571) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8516455 bytes
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.323831) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.7 rd, 53.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 7.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(6.9) write-amplify(3.1) OK, records in: 6272, records dropped: 520 output_compression: NoCompression
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.323863) EVENT_LOG_v1 {"time_micros": 1760432855323850, "job": 38, "event": "compaction_finished", "compaction_time_micros": 159255, "compaction_time_cpu_micros": 32612, "output_level": 6, "num_output_files": 1, "total_output_size": 8516455, "num_input_records": 6272, "num_output_records": 5752, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432855324918, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432855326931, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.145885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.327097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.327105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.327109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.327112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:07:35 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.327115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.347 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.349 2 DEBUG nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.405 2 DEBUG nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.405 2 DEBUG nova.network.neutron [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.427 2 INFO nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.451 2 DEBUG nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.539 2 DEBUG nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.541 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.541 2 INFO nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Creating image(s)
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.571 2 DEBUG nova.storage.rbd_utils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.602 2 DEBUG nova.storage.rbd_utils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.626 2 DEBUG nova.storage.rbd_utils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.629 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.712 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.713 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.714 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.714 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.734 2 DEBUG nova.storage.rbd_utils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.737 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 106 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 300 KiB/s rd, 2.5 MiB/s wr, 113 op/s
Oct 14 09:07:35 compute-0 nova_compute[259627]: 2025-10-14 09:07:35.847 2 DEBUG nova.policy [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a72439ec330b476ca4bb358682159b61', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd39581efff7d48fb83412ca1f615d412', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:07:35 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3469170859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:07:36 compute-0 nova_compute[259627]: 2025-10-14 09:07:36.003 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:36 compute-0 nova_compute[259627]: 2025-10-14 09:07:36.081 2 DEBUG nova.storage.rbd_utils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] resizing rbd image 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:07:36 compute-0 ovn_controller[152662]: 2025-10-14T09:07:36Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:9e:35 10.100.0.9
Oct 14 09:07:36 compute-0 ovn_controller[152662]: 2025-10-14T09:07:36Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:9e:35 10.100.0.9
Oct 14 09:07:36 compute-0 nova_compute[259627]: 2025-10-14 09:07:36.176 2 DEBUG nova.objects.instance [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'migration_context' on Instance uuid 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:07:36 compute-0 nova_compute[259627]: 2025-10-14 09:07:36.191 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:07:36 compute-0 nova_compute[259627]: 2025-10-14 09:07:36.191 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Ensure instance console log exists: /var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:07:36 compute-0 nova_compute[259627]: 2025-10-14 09:07:36.192 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:36 compute-0 nova_compute[259627]: 2025-10-14 09:07:36.192 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:36 compute-0 nova_compute[259627]: 2025-10-14 09:07:36.193 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:36 compute-0 nova_compute[259627]: 2025-10-14 09:07:36.642 2 DEBUG nova.network.neutron [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Successfully created port: 1dc4ac40-9a94-49bf-a098-664b98599004 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:07:36 compute-0 ceph-mon[74249]: pgmap v1565: 305 pgs: 305 active+clean; 106 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 300 KiB/s rd, 2.5 MiB/s wr, 113 op/s
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.229 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquiring lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.230 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.245 2 DEBUG nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.302 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.303 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.310 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.310 2 INFO nova.compute.claims [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.350 2 DEBUG nova.network.neutron [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Successfully updated port: 1dc4ac40-9a94-49bf-a098-664b98599004 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.371 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "refresh_cache-58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.372 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquired lock "refresh_cache-58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.372 2 DEBUG nova.network.neutron [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.446 2 DEBUG nova.compute.manager [req-6ec3329f-7a07-4e0c-ae10-b5e561891e15 req-b70d3e1e-2140-462f-9fce-de57ea714f2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Received event network-changed-1dc4ac40-9a94-49bf-a098-664b98599004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.447 2 DEBUG nova.compute.manager [req-6ec3329f-7a07-4e0c-ae10-b5e561891e15 req-b70d3e1e-2140-462f-9fce-de57ea714f2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Refreshing instance network info cache due to event network-changed-1dc4ac40-9a94-49bf-a098-664b98599004. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.447 2 DEBUG oslo_concurrency.lockutils [req-6ec3329f-7a07-4e0c-ae10-b5e561891e15 req-b70d3e1e-2140-462f-9fce-de57ea714f2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.456 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.543 2 DEBUG nova.network.neutron [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:07:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:07:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Oct 14 09:07:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Oct 14 09:07:37 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Oct 14 09:07:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 106 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 2.5 MiB/s wr, 114 op/s
Oct 14 09:07:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:07:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2935533830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.979 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:37 compute-0 nova_compute[259627]: 2025-10-14 09:07:37.987 2 DEBUG nova.compute.provider_tree [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.005 2 DEBUG nova.scheduler.client.report [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.037 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.038 2 DEBUG nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.105 2 DEBUG nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.106 2 DEBUG nova.network.neutron [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.132 2 INFO nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.167 2 DEBUG nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.298 2 DEBUG nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.300 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.300 2 INFO nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Creating image(s)
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.334 2 DEBUG nova.storage.rbd_utils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] rbd image 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.363 2 DEBUG nova.storage.rbd_utils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] rbd image 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.389 2 DEBUG nova.storage.rbd_utils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] rbd image 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.392 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.425 2 DEBUG nova.network.neutron [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Updating instance_info_cache with network_info: [{"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.429 2 DEBUG nova.policy [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '40a7a5045f164fb3bc6f8ae8a40f6bac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f9883ad901fc41c2a340b171d7165a0e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.473 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.473 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.474 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.475 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.497 2 DEBUG nova.storage.rbd_utils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] rbd image 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.500 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.656 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Releasing lock "refresh_cache-58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.657 2 DEBUG nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Instance network_info: |[{"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.658 2 DEBUG oslo_concurrency.lockutils [req-6ec3329f-7a07-4e0c-ae10-b5e561891e15 req-b70d3e1e-2140-462f-9fce-de57ea714f2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.658 2 DEBUG nova.network.neutron [req-6ec3329f-7a07-4e0c-ae10-b5e561891e15 req-b70d3e1e-2140-462f-9fce-de57ea714f2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Refreshing network info cache for port 1dc4ac40-9a94-49bf-a098-664b98599004 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.663 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Start _get_guest_xml network_info=[{"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.668 2 WARNING nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.678 2 DEBUG nova.virt.libvirt.host [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.679 2 DEBUG nova.virt.libvirt.host [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.684 2 DEBUG nova.virt.libvirt.host [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.685 2 DEBUG nova.virt.libvirt.host [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.685 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.686 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.687 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.687 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.688 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.689 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.689 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.689 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.690 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.690 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.691 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.691 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.698 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:38 compute-0 ceph-mon[74249]: osdmap e224: 3 total, 3 up, 3 in
Oct 14 09:07:38 compute-0 ceph-mon[74249]: pgmap v1567: 305 pgs: 305 active+clean; 106 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 2.5 MiB/s wr, 114 op/s
Oct 14 09:07:38 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2935533830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:07:38 compute-0 nova_compute[259627]: 2025-10-14 09:07:38.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:07:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3434854446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.207 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.231 2 DEBUG nova.storage.rbd_utils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.235 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.428 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.928s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.505 2 DEBUG nova.storage.rbd_utils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] resizing rbd image 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.646 2 DEBUG nova.objects.instance [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lazy-loading 'migration_context' on Instance uuid 8a58a504-85a5-44e6-b815-99abb4ca2fc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.669 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.670 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Ensure instance console log exists: /var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.671 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.671 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.672 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:07:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/673018092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:07:39 compute-0 podman[328214]: 2025-10-14 09:07:39.685476424 +0000 UTC m=+0.087426314 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:07:39 compute-0 podman[328215]: 2025-10-14 09:07:39.688188481 +0000 UTC m=+0.081432007 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.699 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.700 2 DEBUG nova.virt.libvirt.vif [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2035284822',display_name='tempest-DeleteServersTestJSON-server-2035284822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2035284822',id=65,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-47ja83kl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:07:35Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=58b61a7a-1a2e-4e3a-9444-3a89da64c5f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.700 2 DEBUG nova.network.os_vif_util [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.701 2 DEBUG nova.network.os_vif_util [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:81:c4,bridge_name='br-int',has_traffic_filtering=True,id=1dc4ac40-9a94-49bf-a098-664b98599004,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dc4ac40-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.702 2 DEBUG nova.objects.instance [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.726 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:07:39 compute-0 nova_compute[259627]:   <uuid>58b61a7a-1a2e-4e3a-9444-3a89da64c5f3</uuid>
Oct 14 09:07:39 compute-0 nova_compute[259627]:   <name>instance-00000041</name>
Oct 14 09:07:39 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:07:39 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:07:39 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <nova:name>tempest-DeleteServersTestJSON-server-2035284822</nova:name>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:07:38</nova:creationTime>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:07:39 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:07:39 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:07:39 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:07:39 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:07:39 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:07:39 compute-0 nova_compute[259627]:         <nova:user uuid="a72439ec330b476ca4bb358682159b61">tempest-DeleteServersTestJSON-555285866-project-member</nova:user>
Oct 14 09:07:39 compute-0 nova_compute[259627]:         <nova:project uuid="d39581efff7d48fb83412ca1f615d412">tempest-DeleteServersTestJSON-555285866</nova:project>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:07:39 compute-0 nova_compute[259627]:         <nova:port uuid="1dc4ac40-9a94-49bf-a098-664b98599004">
Oct 14 09:07:39 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:07:39 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:07:39 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <system>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <entry name="serial">58b61a7a-1a2e-4e3a-9444-3a89da64c5f3</entry>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <entry name="uuid">58b61a7a-1a2e-4e3a-9444-3a89da64c5f3</entry>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     </system>
Oct 14 09:07:39 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:07:39 compute-0 nova_compute[259627]:   <os>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:   </os>
Oct 14 09:07:39 compute-0 nova_compute[259627]:   <features>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:   </features>
Oct 14 09:07:39 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:07:39 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:07:39 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk">
Oct 14 09:07:39 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       </source>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:07:39 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk.config">
Oct 14 09:07:39 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       </source>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:07:39 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:0d:81:c4"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <target dev="tap1dc4ac40-9a"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3/console.log" append="off"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <video>
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     </video>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:07:39 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:07:39 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:07:39 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:07:39 compute-0 nova_compute[259627]: </domain>
Oct 14 09:07:39 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.727 2 DEBUG nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Preparing to wait for external event network-vif-plugged-1dc4ac40-9a94-49bf-a098-664b98599004 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.727 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.727 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.728 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.728 2 DEBUG nova.virt.libvirt.vif [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2035284822',display_name='tempest-DeleteServersTestJSON-server-2035284822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2035284822',id=65,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-47ja83kl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:07:35Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=58b61a7a-1a2e-4e3a-9444-3a89da64c5f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.729 2 DEBUG nova.network.os_vif_util [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.729 2 DEBUG nova.network.os_vif_util [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:81:c4,bridge_name='br-int',has_traffic_filtering=True,id=1dc4ac40-9a94-49bf-a098-664b98599004,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dc4ac40-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.730 2 DEBUG os_vif [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:81:c4,bridge_name='br-int',has_traffic_filtering=True,id=1dc4ac40-9a94-49bf-a098-664b98599004,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dc4ac40-9a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.731 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.731 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1dc4ac40-9a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1dc4ac40-9a, col_values=(('external_ids', {'iface-id': '1dc4ac40-9a94-49bf-a098-664b98599004', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:81:c4', 'vm-uuid': '58b61a7a-1a2e-4e3a-9444-3a89da64c5f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:39 compute-0 NetworkManager[44885]: <info>  [1760432859.7367] manager: (tap1dc4ac40-9a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.742 2 INFO os_vif [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:81:c4,bridge_name='br-int',has_traffic_filtering=True,id=1dc4ac40-9a94-49bf-a098-664b98599004,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dc4ac40-9a')
Oct 14 09:07:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 106 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 2.3 MiB/s wr, 59 op/s
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.787 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.787 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.788 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No VIF found with MAC fa:16:3e:0d:81:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.788 2 INFO nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Using config drive
Oct 14 09:07:39 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3434854446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:07:39 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/673018092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.814 2 DEBUG nova.storage.rbd_utils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:39 compute-0 nova_compute[259627]: 2025-10-14 09:07:39.820 2 DEBUG nova.network.neutron [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Successfully created port: 2d7c67a0-10d0-4de1-a430-60e038fcf537 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.339 2 INFO nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Creating config drive at /var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3/disk.config
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.347 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc0f95boe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.506 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc0f95boe" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.531 2 DEBUG nova.storage.rbd_utils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.534 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3/disk.config 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.746 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3/disk.config 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.747 2 INFO nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Deleting local config drive /var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3/disk.config because it was imported into RBD.
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.771 2 DEBUG nova.network.neutron [req-6ec3329f-7a07-4e0c-ae10-b5e561891e15 req-b70d3e1e-2140-462f-9fce-de57ea714f2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Updated VIF entry in instance network info cache for port 1dc4ac40-9a94-49bf-a098-664b98599004. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.772 2 DEBUG nova.network.neutron [req-6ec3329f-7a07-4e0c-ae10-b5e561891e15 req-b70d3e1e-2140-462f-9fce-de57ea714f2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Updating instance_info_cache with network_info: [{"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.798 2 DEBUG oslo_concurrency.lockutils [req-6ec3329f-7a07-4e0c-ae10-b5e561891e15 req-b70d3e1e-2140-462f-9fce-de57ea714f2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:07:40 compute-0 kernel: tap1dc4ac40-9a: entered promiscuous mode
Oct 14 09:07:40 compute-0 NetworkManager[44885]: <info>  [1760432860.8137] manager: (tap1dc4ac40-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/280)
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:40 compute-0 ovn_controller[152662]: 2025-10-14T09:07:40Z|00658|binding|INFO|Claiming lport 1dc4ac40-9a94-49bf-a098-664b98599004 for this chassis.
Oct 14 09:07:40 compute-0 ovn_controller[152662]: 2025-10-14T09:07:40Z|00659|binding|INFO|1dc4ac40-9a94-49bf-a098-664b98599004: Claiming fa:16:3e:0d:81:c4 10.100.0.8
Oct 14 09:07:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.828 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:81:c4 10.100.0.8'], port_security=['fa:16:3e:0d:81:c4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '58b61a7a-1a2e-4e3a-9444-3a89da64c5f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd39581efff7d48fb83412ca1f615d412', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd5e08c6-d8c1-45fb-85db-6ce5c46fea39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c165cf5-3bad-4110-8321-7f7bda9723ad, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1dc4ac40-9a94-49bf-a098-664b98599004) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:07:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.830 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1dc4ac40-9a94-49bf-a098-664b98599004 in datapath 0a07d59e-be8b-4d41-a103-fb5a64bf6f88 bound to our chassis
Oct 14 09:07:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.833 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 09:07:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.847 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cd95ffce-2f26-400f-bd3d-523867fed3b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.848 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a07d59e-b1 in ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:07:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.851 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a07d59e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:07:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.851 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1d089a92-3d67-4eb2-9bcc-af6b31d5c629]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.852 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ed6ba1c3-1cf3-4a16-827b-ad125b2a1d53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:40 compute-0 ovn_controller[152662]: 2025-10-14T09:07:40Z|00660|binding|INFO|Setting lport 1dc4ac40-9a94-49bf-a098-664b98599004 ovn-installed in OVS
Oct 14 09:07:40 compute-0 ovn_controller[152662]: 2025-10-14T09:07:40Z|00661|binding|INFO|Setting lport 1dc4ac40-9a94-49bf-a098-664b98599004 up in Southbound
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:40 compute-0 systemd-udevd[328346]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:40 compute-0 systemd-machined[214636]: New machine qemu-79-instance-00000041.
Oct 14 09:07:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.867 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7c6fb8-93ab-4a1d-baad-f0ea72f5df10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:40 compute-0 NetworkManager[44885]: <info>  [1760432860.8768] device (tap1dc4ac40-9a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:07:40 compute-0 NetworkManager[44885]: <info>  [1760432860.8778] device (tap1dc4ac40-9a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:07:40 compute-0 systemd[1]: Started Virtual Machine qemu-79-instance-00000041.
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.881 2 DEBUG nova.network.neutron [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Successfully updated port: 2d7c67a0-10d0-4de1-a430-60e038fcf537 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:07:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.896 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4844db9b-a422-42a7-8310-b3da24378811]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.898 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquiring lock "refresh_cache-8a58a504-85a5-44e6-b815-99abb4ca2fc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.899 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquired lock "refresh_cache-8a58a504-85a5-44e6-b815-99abb4ca2fc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.899 2 DEBUG nova.network.neutron [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:07:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.927 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e59c8d-8f08-448e-8cb1-42ae3c8962f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.932 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[140d3c18-0bb8-4b4e-b62a-baba91a25db8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:40 compute-0 NetworkManager[44885]: <info>  [1760432860.9331] manager: (tap0a07d59e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/281)
Oct 14 09:07:40 compute-0 systemd-udevd[328349]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:07:40 compute-0 ceph-mon[74249]: pgmap v1568: 305 pgs: 305 active+clean; 106 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 2.3 MiB/s wr, 59 op/s
Oct 14 09:07:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.973 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[24d4cddf-cbef-4987-b342-44849877768c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.976 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a4eadc-79d8-485a-ab31-4f2d31646b46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.993 2 DEBUG nova.compute.manager [req-b344db00-4be9-4c1c-8afa-ed3fb1fbbae6 req-a8b71dae-21aa-41af-9e21-6235b1dadebc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received event network-changed-2d7c67a0-10d0-4de1-a430-60e038fcf537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.994 2 DEBUG nova.compute.manager [req-b344db00-4be9-4c1c-8afa-ed3fb1fbbae6 req-a8b71dae-21aa-41af-9e21-6235b1dadebc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Refreshing instance network info cache due to event network-changed-2d7c67a0-10d0-4de1-a430-60e038fcf537. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:07:40 compute-0 nova_compute[259627]: 2025-10-14 09:07:40.994 2 DEBUG oslo_concurrency.lockutils [req-b344db00-4be9-4c1c-8afa-ed3fb1fbbae6 req-a8b71dae-21aa-41af-9e21-6235b1dadebc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8a58a504-85a5-44e6-b815-99abb4ca2fc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:07:40 compute-0 NetworkManager[44885]: <info>  [1760432860.9968] device (tap0a07d59e-b0): carrier: link connected
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.002 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[31d9581b-11a8-41d4-a10f-fc4975c6e848]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.022 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e91161fa-fd59-475e-b808-2eef186d2fdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a07d59e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2e:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 191], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663975, 'reachable_time': 38303, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328377, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.038 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[121782ad-6040-4c43-90c7-094d3dec68fd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:2e1a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663975, 'tstamp': 663975}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328378, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.061 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1d146ffd-c679-4632-8d3c-477687f7517a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a07d59e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2e:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 306, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 306, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 191], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663975, 'reachable_time': 38303, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328379, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.106 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1c99f2ea-bdf1-4211-b419-b990f3a51cdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:41 compute-0 nova_compute[259627]: 2025-10-14 09:07:41.162 2 DEBUG nova.network.neutron [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.180 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aa3b4229-99d8-4e1b-a73d-f7bde3ddfa9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.182 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a07d59e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.183 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.184 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a07d59e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:41 compute-0 kernel: tap0a07d59e-b0: entered promiscuous mode
Oct 14 09:07:41 compute-0 NetworkManager[44885]: <info>  [1760432861.1873] manager: (tap0a07d59e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Oct 14 09:07:41 compute-0 nova_compute[259627]: 2025-10-14 09:07:41.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.201 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a07d59e-b0, col_values=(('external_ids', {'iface-id': '31ed66d8-7c3d-4486-83f3-5ccb9a199aa1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:41 compute-0 ovn_controller[152662]: 2025-10-14T09:07:41Z|00662|binding|INFO|Releasing lport 31ed66d8-7c3d-4486-83f3-5ccb9a199aa1 from this chassis (sb_readonly=0)
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.210 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.212 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6992e8b7-a464-43e5-9bfb-3b821bb58705]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.213 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:07:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.214 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'env', 'PROCESS_TAG=haproxy-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:07:41 compute-0 nova_compute[259627]: 2025-10-14 09:07:41.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:41 compute-0 podman[328411]: 2025-10-14 09:07:41.664096492 +0000 UTC m=+0.082853171 container create 3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 14 09:07:41 compute-0 podman[328411]: 2025-10-14 09:07:41.616069609 +0000 UTC m=+0.034826348 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:07:41 compute-0 systemd[1]: Started libpod-conmon-3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6.scope.
Oct 14 09:07:41 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:07:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ae4ef622a69186680b309d3a53a544ac1d19dfd134061b342523d307a4ae263/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:07:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 213 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 511 KiB/s rd, 6.8 MiB/s wr, 171 op/s
Oct 14 09:07:41 compute-0 podman[328411]: 2025-10-14 09:07:41.792488824 +0000 UTC m=+0.211245553 container init 3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 09:07:41 compute-0 podman[328411]: 2025-10-14 09:07:41.803192338 +0000 UTC m=+0.221949007 container start 3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:07:41 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[328467]: [NOTICE]   (328472) : New worker (328474) forked
Oct 14 09:07:41 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[328467]: [NOTICE]   (328472) : Loading success.
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.071 2 DEBUG nova.network.neutron [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Updating instance_info_cache with network_info: [{"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.089 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Releasing lock "refresh_cache-8a58a504-85a5-44e6-b815-99abb4ca2fc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.090 2 DEBUG nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Instance network_info: |[{"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.091 2 DEBUG oslo_concurrency.lockutils [req-b344db00-4be9-4c1c-8afa-ed3fb1fbbae6 req-a8b71dae-21aa-41af-9e21-6235b1dadebc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8a58a504-85a5-44e6-b815-99abb4ca2fc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.092 2 DEBUG nova.network.neutron [req-b344db00-4be9-4c1c-8afa-ed3fb1fbbae6 req-a8b71dae-21aa-41af-9e21-6235b1dadebc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Refreshing network info cache for port 2d7c67a0-10d0-4de1-a430-60e038fcf537 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.097 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Start _get_guest_xml network_info=[{"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.105 2 WARNING nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.114 2 DEBUG nova.virt.libvirt.host [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.115 2 DEBUG nova.virt.libvirt.host [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.120 2 DEBUG nova.virt.libvirt.host [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.121 2 DEBUG nova.virt.libvirt.host [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.122 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.122 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.123 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.124 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.124 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.125 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.125 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.126 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.127 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.127 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.128 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.129 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.133 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.287 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432862.2867455, 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.288 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] VM Started (Lifecycle Event)
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.319 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.325 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432862.288286, 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.326 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] VM Paused (Lifecycle Event)
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.348 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.351 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.381 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:07:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:07:42 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/38985500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.648 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.682 2 DEBUG nova.storage.rbd_utils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] rbd image 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:42 compute-0 nova_compute[259627]: 2025-10-14 09:07:42.686 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:07:42 compute-0 ceph-mon[74249]: pgmap v1569: 305 pgs: 305 active+clean; 213 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 511 KiB/s rd, 6.8 MiB/s wr, 171 op/s
Oct 14 09:07:42 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/38985500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014490414591065964 of space, bias 1.0, pg target 0.43471243773197893 quantized to 32 (current 32)
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:07:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:07:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/182743236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.167 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.172 2 DEBUG nova.virt.libvirt.vif [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:07:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1860549828',display_name='tempest-ServersTestJSON-server-1860549828',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1860549828',id=66,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJPBjCQeMHoUcI/GHoNmPZT13/aYbG1GbRQyp9yA777AdaKu728JKactgc6o+aymRL18NOp98nhjzfD96xfaginRg8v3g0mj8wP4FAzm7DJKAkKlO+Gseanq7GXJCgToDw==',key_name='tempest-keypair-1047945045',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f9883ad901fc41c2a340b171d7165a0e',ramdisk_id='',reservation_id='r-owfgitz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1568732941',owner_user_name='tempest-ServersTestJSON-1568732941-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:07:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='40a7a5045f164fb3bc6f8ae8a40f6bac',uuid=8a58a504-85a5-44e6-b815-99abb4ca2fc8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.173 2 DEBUG nova.network.os_vif_util [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Converting VIF {"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.174 2 DEBUG nova.network.os_vif_util [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:5f:31,bridge_name='br-int',has_traffic_filtering=True,id=2d7c67a0-10d0-4de1-a430-60e038fcf537,network=Network(df3e044c-d77c-4323-a9d7-2b0425933df0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d7c67a0-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.176 2 DEBUG nova.objects.instance [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lazy-loading 'pci_devices' on Instance uuid 8a58a504-85a5-44e6-b815-99abb4ca2fc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.201 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:07:43 compute-0 nova_compute[259627]:   <uuid>8a58a504-85a5-44e6-b815-99abb4ca2fc8</uuid>
Oct 14 09:07:43 compute-0 nova_compute[259627]:   <name>instance-00000042</name>
Oct 14 09:07:43 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:07:43 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:07:43 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersTestJSON-server-1860549828</nova:name>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:07:42</nova:creationTime>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:07:43 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:07:43 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:07:43 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:07:43 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:07:43 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:07:43 compute-0 nova_compute[259627]:         <nova:user uuid="40a7a5045f164fb3bc6f8ae8a40f6bac">tempest-ServersTestJSON-1568732941-project-member</nova:user>
Oct 14 09:07:43 compute-0 nova_compute[259627]:         <nova:project uuid="f9883ad901fc41c2a340b171d7165a0e">tempest-ServersTestJSON-1568732941</nova:project>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:07:43 compute-0 nova_compute[259627]:         <nova:port uuid="2d7c67a0-10d0-4de1-a430-60e038fcf537">
Oct 14 09:07:43 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:07:43 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:07:43 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <system>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <entry name="serial">8a58a504-85a5-44e6-b815-99abb4ca2fc8</entry>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <entry name="uuid">8a58a504-85a5-44e6-b815-99abb4ca2fc8</entry>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     </system>
Oct 14 09:07:43 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:07:43 compute-0 nova_compute[259627]:   <os>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:   </os>
Oct 14 09:07:43 compute-0 nova_compute[259627]:   <features>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:   </features>
Oct 14 09:07:43 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:07:43 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:07:43 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk">
Oct 14 09:07:43 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       </source>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:07:43 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk.config">
Oct 14 09:07:43 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       </source>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:07:43 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:b6:5f:31"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <target dev="tap2d7c67a0-10"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8/console.log" append="off"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <video>
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     </video>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:07:43 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:07:43 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:07:43 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:07:43 compute-0 nova_compute[259627]: </domain>
Oct 14 09:07:43 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.202 2 DEBUG nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Preparing to wait for external event network-vif-plugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.203 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquiring lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.203 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.203 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.204 2 DEBUG nova.virt.libvirt.vif [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:07:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1860549828',display_name='tempest-ServersTestJSON-server-1860549828',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1860549828',id=66,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJPBjCQeMHoUcI/GHoNmPZT13/aYbG1GbRQyp9yA777AdaKu728JKactgc6o+aymRL18NOp98nhjzfD96xfaginRg8v3g0mj8wP4FAzm7DJKAkKlO+Gseanq7GXJCgToDw==',key_name='tempest-keypair-1047945045',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f9883ad901fc41c2a340b171d7165a0e',ramdisk_id='',reservation_id='r-owfgitz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1568732941',owner_user_name='tempest-ServersTestJSON-1568732941-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:07:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='40a7a5045f164fb3bc6f8ae8a40f6bac',uuid=8a58a504-85a5-44e6-b815-99abb4ca2fc8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.205 2 DEBUG nova.network.os_vif_util [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Converting VIF {"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.206 2 DEBUG nova.network.os_vif_util [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:5f:31,bridge_name='br-int',has_traffic_filtering=True,id=2d7c67a0-10d0-4de1-a430-60e038fcf537,network=Network(df3e044c-d77c-4323-a9d7-2b0425933df0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d7c67a0-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.206 2 DEBUG os_vif [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:5f:31,bridge_name='br-int',has_traffic_filtering=True,id=2d7c67a0-10d0-4de1-a430-60e038fcf537,network=Network(df3e044c-d77c-4323-a9d7-2b0425933df0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d7c67a0-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.207 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.208 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.211 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d7c67a0-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.212 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d7c67a0-10, col_values=(('external_ids', {'iface-id': '2d7c67a0-10d0-4de1-a430-60e038fcf537', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b6:5f:31', 'vm-uuid': '8a58a504-85a5-44e6-b815-99abb4ca2fc8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:43 compute-0 NetworkManager[44885]: <info>  [1760432863.2508] manager: (tap2d7c67a0-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.261 2 INFO os_vif [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:5f:31,bridge_name='br-int',has_traffic_filtering=True,id=2d7c67a0-10d0-4de1-a430-60e038fcf537,network=Network(df3e044c-d77c-4323-a9d7-2b0425933df0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d7c67a0-10')
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.328 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.328 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.329 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] No VIF found with MAC fa:16:3e:b6:5f:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.329 2 INFO nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Using config drive
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.357 2 DEBUG nova.storage.rbd_utils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] rbd image 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 213 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 511 KiB/s rd, 6.8 MiB/s wr, 171 op/s
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.961 2 INFO nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Creating config drive at /var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8/disk.config
Oct 14 09:07:43 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/182743236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:07:43 compute-0 nova_compute[259627]: 2025-10-14 09:07:43.973 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq8mdkh22 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:44 compute-0 nova_compute[259627]: 2025-10-14 09:07:44.134 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq8mdkh22" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:44 compute-0 nova_compute[259627]: 2025-10-14 09:07:44.171 2 DEBUG nova.storage.rbd_utils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] rbd image 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:07:44 compute-0 nova_compute[259627]: 2025-10-14 09:07:44.177 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8/disk.config 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:07:44 compute-0 nova_compute[259627]: 2025-10-14 09:07:44.392 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8/disk.config 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:07:44 compute-0 nova_compute[259627]: 2025-10-14 09:07:44.394 2 INFO nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Deleting local config drive /var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8/disk.config because it was imported into RBD.
Oct 14 09:07:44 compute-0 NetworkManager[44885]: <info>  [1760432864.4682] manager: (tap2d7c67a0-10): new Tun device (/org/freedesktop/NetworkManager/Devices/284)
Oct 14 09:07:44 compute-0 kernel: tap2d7c67a0-10: entered promiscuous mode
Oct 14 09:07:44 compute-0 ovn_controller[152662]: 2025-10-14T09:07:44Z|00663|binding|INFO|Claiming lport 2d7c67a0-10d0-4de1-a430-60e038fcf537 for this chassis.
Oct 14 09:07:44 compute-0 ovn_controller[152662]: 2025-10-14T09:07:44Z|00664|binding|INFO|2d7c67a0-10d0-4de1-a430-60e038fcf537: Claiming fa:16:3e:b6:5f:31 10.100.0.3
Oct 14 09:07:44 compute-0 nova_compute[259627]: 2025-10-14 09:07:44.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.532 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:5f:31 10.100.0.3'], port_security=['fa:16:3e:b6:5f:31 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8a58a504-85a5-44e6-b815-99abb4ca2fc8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df3e044c-d77c-4323-a9d7-2b0425933df0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f9883ad901fc41c2a340b171d7165a0e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '38f16233-f765-495d-b104-7721931f5384', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8aeb3f0-29e9-44d2-ad79-ad7dad88caab, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2d7c67a0-10d0-4de1-a430-60e038fcf537) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.534 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2d7c67a0-10d0-4de1-a430-60e038fcf537 in datapath df3e044c-d77c-4323-a9d7-2b0425933df0 bound to our chassis
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.536 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df3e044c-d77c-4323-a9d7-2b0425933df0
Oct 14 09:07:44 compute-0 systemd-udevd[328619]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.549 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[045e2ecd-d6a1-473a-b751-bc2c1c2a5f52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.551 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdf3e044c-d1 in ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.552 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdf3e044c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.553 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e96524c6-0230-48b4-852b-f61150439b3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:44 compute-0 ovn_controller[152662]: 2025-10-14T09:07:44Z|00665|binding|INFO|Setting lport 2d7c67a0-10d0-4de1-a430-60e038fcf537 ovn-installed in OVS
Oct 14 09:07:44 compute-0 ovn_controller[152662]: 2025-10-14T09:07:44Z|00666|binding|INFO|Setting lport 2d7c67a0-10d0-4de1-a430-60e038fcf537 up in Southbound
Oct 14 09:07:44 compute-0 nova_compute[259627]: 2025-10-14 09:07:44.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.553 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[70814f65-df00-48fa-8e16-0c8c8ee44b5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:44 compute-0 systemd-machined[214636]: New machine qemu-80-instance-00000042.
Oct 14 09:07:44 compute-0 NetworkManager[44885]: <info>  [1760432864.5651] device (tap2d7c67a0-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:07:44 compute-0 NetworkManager[44885]: <info>  [1760432864.5664] device (tap2d7c67a0-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.570 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb76f71-7d96-423d-9016-83b1bcce7fc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:44 compute-0 systemd[1]: Started Virtual Machine qemu-80-instance-00000042.
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.596 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f457a325-81d3-4883-a454-6a54a116daba]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.627 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e1723f40-072e-495b-a558-467e7d27109d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:44 compute-0 NetworkManager[44885]: <info>  [1760432864.6355] manager: (tapdf3e044c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/285)
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.634 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c9abff52-a3a3-41eb-8183-4e8b359ddafe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.679 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[64d76807-1ef8-424e-8aa6-98bad444f061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.682 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3c9438-6e76-44e7-9bcf-5bf8bdbfcacd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:44 compute-0 NetworkManager[44885]: <info>  [1760432864.7089] device (tapdf3e044c-d0): carrier: link connected
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.717 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[11e2decd-0cdc-4d38-b343-865b73de2450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.738 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab1e218-9699-400a-ba7b-17472cd73f3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf3e044c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:44:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664347, 'reachable_time': 17983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328652, 'error': None, 'target': 'ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.765 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8da52f18-d28c-4337-ada8-138e8884cb72]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe34:4429'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664347, 'tstamp': 664347}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328653, 'error': None, 'target': 'ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.788 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1d6f85-3936-4cab-9e10-d5e389ff381a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf3e044c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:44:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664347, 'reachable_time': 17983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328654, 'error': None, 'target': 'ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.835 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d11cedfd-32ba-482c-aead-5855ffc4d135]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.900 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[34fe9948-5c8e-467b-81e0-d1b2c5f4e000]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.902 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf3e044c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.903 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.904 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf3e044c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:44 compute-0 NetworkManager[44885]: <info>  [1760432864.9078] manager: (tapdf3e044c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Oct 14 09:07:44 compute-0 kernel: tapdf3e044c-d0: entered promiscuous mode
Oct 14 09:07:44 compute-0 nova_compute[259627]: 2025-10-14 09:07:44.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.912 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf3e044c-d0, col_values=(('external_ids', {'iface-id': '7ee05a46-b477-4dd4-add4-3044883f8018'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:44 compute-0 nova_compute[259627]: 2025-10-14 09:07:44.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:44 compute-0 ovn_controller[152662]: 2025-10-14T09:07:44Z|00667|binding|INFO|Releasing lport 7ee05a46-b477-4dd4-add4-3044883f8018 from this chassis (sb_readonly=0)
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.915 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/df3e044c-d77c-4323-a9d7-2b0425933df0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/df3e044c-d77c-4323-a9d7-2b0425933df0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.920 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[108ab0ce-b410-4e24-93bd-98446f79fa09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.922 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-df3e044c-d77c-4323-a9d7-2b0425933df0
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/df3e044c-d77c-4323-a9d7-2b0425933df0.pid.haproxy
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID df3e044c-d77c-4323-a9d7-2b0425933df0
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:07:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.923 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0', 'env', 'PROCESS_TAG=haproxy-df3e044c-d77c-4323-a9d7-2b0425933df0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/df3e044c-d77c-4323-a9d7-2b0425933df0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:07:44 compute-0 nova_compute[259627]: 2025-10-14 09:07:44.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:44 compute-0 ceph-mon[74249]: pgmap v1570: 305 pgs: 305 active+clean; 213 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 511 KiB/s rd, 6.8 MiB/s wr, 171 op/s
Oct 14 09:07:45 compute-0 nova_compute[259627]: 2025-10-14 09:07:45.135 2 DEBUG nova.network.neutron [req-b344db00-4be9-4c1c-8afa-ed3fb1fbbae6 req-a8b71dae-21aa-41af-9e21-6235b1dadebc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Updated VIF entry in instance network info cache for port 2d7c67a0-10d0-4de1-a430-60e038fcf537. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:07:45 compute-0 nova_compute[259627]: 2025-10-14 09:07:45.137 2 DEBUG nova.network.neutron [req-b344db00-4be9-4c1c-8afa-ed3fb1fbbae6 req-a8b71dae-21aa-41af-9e21-6235b1dadebc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Updating instance_info_cache with network_info: [{"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:07:45 compute-0 nova_compute[259627]: 2025-10-14 09:07:45.162 2 DEBUG oslo_concurrency.lockutils [req-b344db00-4be9-4c1c-8afa-ed3fb1fbbae6 req-a8b71dae-21aa-41af-9e21-6235b1dadebc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8a58a504-85a5-44e6-b815-99abb4ca2fc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:07:45 compute-0 podman[328728]: 2025-10-14 09:07:45.353206345 +0000 UTC m=+0.054557354 container create 5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 09:07:45 compute-0 systemd[1]: Started libpod-conmon-5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b.scope.
Oct 14 09:07:45 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:07:45 compute-0 podman[328728]: 2025-10-14 09:07:45.327997184 +0000 UTC m=+0.029348183 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:07:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18ff3ed9243abc8a60fd8bac38117c2979ed9d105d233f8f10fb601e76b89352/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:07:45 compute-0 podman[328728]: 2025-10-14 09:07:45.438828494 +0000 UTC m=+0.140179523 container init 5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:07:45 compute-0 podman[328728]: 2025-10-14 09:07:45.444715909 +0000 UTC m=+0.146066918 container start 5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:07:45 compute-0 neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0[328743]: [NOTICE]   (328747) : New worker (328749) forked
Oct 14 09:07:45 compute-0 neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0[328743]: [NOTICE]   (328747) : Loading success.
Oct 14 09:07:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 214 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 472 KiB/s rd, 4.8 MiB/s wr, 140 op/s
Oct 14 09:07:45 compute-0 nova_compute[259627]: 2025-10-14 09:07:45.789 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432865.7882369, 8a58a504-85a5-44e6-b815-99abb4ca2fc8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:07:45 compute-0 nova_compute[259627]: 2025-10-14 09:07:45.789 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] VM Started (Lifecycle Event)
Oct 14 09:07:45 compute-0 nova_compute[259627]: 2025-10-14 09:07:45.819 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:45 compute-0 nova_compute[259627]: 2025-10-14 09:07:45.824 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432865.7893763, 8a58a504-85a5-44e6-b815-99abb4ca2fc8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:07:45 compute-0 nova_compute[259627]: 2025-10-14 09:07:45.825 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] VM Paused (Lifecycle Event)
Oct 14 09:07:45 compute-0 nova_compute[259627]: 2025-10-14 09:07:45.851 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:45 compute-0 nova_compute[259627]: 2025-10-14 09:07:45.855 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:07:45 compute-0 nova_compute[259627]: 2025-10-14 09:07:45.891 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.616 2 DEBUG nova.compute.manager [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.656 2 INFO nova.compute.manager [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] instance snapshotting
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.657 2 DEBUG nova.objects.instance [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'flavor' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.798 2 DEBUG nova.compute.manager [req-5a82374e-0eed-4793-85de-f88abfd10841 req-7887a4ef-6899-49b6-bf11-af1a246188a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received event network-vif-plugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.799 2 DEBUG oslo_concurrency.lockutils [req-5a82374e-0eed-4793-85de-f88abfd10841 req-7887a4ef-6899-49b6-bf11-af1a246188a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.800 2 DEBUG oslo_concurrency.lockutils [req-5a82374e-0eed-4793-85de-f88abfd10841 req-7887a4ef-6899-49b6-bf11-af1a246188a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.801 2 DEBUG oslo_concurrency.lockutils [req-5a82374e-0eed-4793-85de-f88abfd10841 req-7887a4ef-6899-49b6-bf11-af1a246188a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.801 2 DEBUG nova.compute.manager [req-5a82374e-0eed-4793-85de-f88abfd10841 req-7887a4ef-6899-49b6-bf11-af1a246188a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Processing event network-vif-plugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.803 2 DEBUG nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.807 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432866.8070023, 8a58a504-85a5-44e6-b815-99abb4ca2fc8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.807 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] VM Resumed (Lifecycle Event)
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.810 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.815 2 INFO nova.virt.libvirt.driver [-] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Instance spawned successfully.
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.816 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.850 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.860 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.861 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.862 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.863 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.864 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.865 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.872 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.927 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:07:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:46.934 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:07:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:46.936 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.943 2 INFO nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Took 8.64 seconds to spawn the instance on the hypervisor.
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.944 2 DEBUG nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:46 compute-0 nova_compute[259627]: 2025-10-14 09:07:46.982 2 INFO nova.virt.libvirt.driver [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Beginning live snapshot process
Oct 14 09:07:46 compute-0 ceph-mon[74249]: pgmap v1571: 305 pgs: 305 active+clean; 214 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 472 KiB/s rd, 4.8 MiB/s wr, 140 op/s
Oct 14 09:07:47 compute-0 nova_compute[259627]: 2025-10-14 09:07:47.009 2 INFO nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Took 9.72 seconds to build instance.
Oct 14 09:07:47 compute-0 nova_compute[259627]: 2025-10-14 09:07:47.034 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:47 compute-0 nova_compute[259627]: 2025-10-14 09:07:47.121 2 DEBUG nova.virt.libvirt.imagebackend [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 09:07:47 compute-0 nova_compute[259627]: 2025-10-14 09:07:47.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:47 compute-0 nova_compute[259627]: 2025-10-14 09:07:47.390 2 DEBUG nova.storage.rbd_utils [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] creating snapshot(8269a39a6fd14d9d8ed1b9ad0f12c4f5) on rbd image(e065d857-2df9-4199-aa98-41ca3c436bad_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:07:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:07:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 214 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 468 KiB/s rd, 4.8 MiB/s wr, 139 op/s
Oct 14 09:07:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Oct 14 09:07:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Oct 14 09:07:48 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.047 2 DEBUG nova.storage.rbd_utils [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] cloning vms/e065d857-2df9-4199-aa98-41ca3c436bad_disk@8269a39a6fd14d9d8ed1b9ad0f12c4f5 to images/4d823dba-1513-4c47-907f-2858c26de3c1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.091 2 DEBUG nova.compute.manager [req-984e67bb-31df-45af-915c-1236d89fd33d req-71010bed-08e1-404c-a8c6-28ba254a2594 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Received event network-vif-plugged-1dc4ac40-9a94-49bf-a098-664b98599004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.092 2 DEBUG oslo_concurrency.lockutils [req-984e67bb-31df-45af-915c-1236d89fd33d req-71010bed-08e1-404c-a8c6-28ba254a2594 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.092 2 DEBUG oslo_concurrency.lockutils [req-984e67bb-31df-45af-915c-1236d89fd33d req-71010bed-08e1-404c-a8c6-28ba254a2594 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.093 2 DEBUG oslo_concurrency.lockutils [req-984e67bb-31df-45af-915c-1236d89fd33d req-71010bed-08e1-404c-a8c6-28ba254a2594 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.094 2 DEBUG nova.compute.manager [req-984e67bb-31df-45af-915c-1236d89fd33d req-71010bed-08e1-404c-a8c6-28ba254a2594 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Processing event network-vif-plugged-1dc4ac40-9a94-49bf-a098-664b98599004 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.095 2 DEBUG nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.100 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.102 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432868.1023886, 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.103 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] VM Resumed (Lifecycle Event)
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.107 2 INFO nova.virt.libvirt.driver [-] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Instance spawned successfully.
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.108 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.133 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.141 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.146 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.147 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.148 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.148 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.149 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.151 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.170 2 DEBUG nova.storage.rbd_utils [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] flattening images/4d823dba-1513-4c47-907f-2858c26de3c1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.220 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.236 2 INFO nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Took 12.70 seconds to spawn the instance on the hypervisor.
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.238 2 DEBUG nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.308 2 INFO nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Took 13.80 seconds to build instance.
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.328 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.547 2 DEBUG nova.storage.rbd_utils [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] removing snapshot(8269a39a6fd14d9d8ed1b9ad0f12c4f5) on rbd image(e065d857-2df9-4199-aa98-41ca3c436bad_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.945 2 DEBUG nova.compute.manager [req-b7224b85-92b3-4a93-9d19-cf55e3bcb2ce req-63089f25-6a15-4ce6-9a6f-5f1c0525190b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received event network-vif-plugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.946 2 DEBUG oslo_concurrency.lockutils [req-b7224b85-92b3-4a93-9d19-cf55e3bcb2ce req-63089f25-6a15-4ce6-9a6f-5f1c0525190b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.946 2 DEBUG oslo_concurrency.lockutils [req-b7224b85-92b3-4a93-9d19-cf55e3bcb2ce req-63089f25-6a15-4ce6-9a6f-5f1c0525190b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.946 2 DEBUG oslo_concurrency.lockutils [req-b7224b85-92b3-4a93-9d19-cf55e3bcb2ce req-63089f25-6a15-4ce6-9a6f-5f1c0525190b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.946 2 DEBUG nova.compute.manager [req-b7224b85-92b3-4a93-9d19-cf55e3bcb2ce req-63089f25-6a15-4ce6-9a6f-5f1c0525190b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] No waiting events found dispatching network-vif-plugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:07:48 compute-0 nova_compute[259627]: 2025-10-14 09:07:48.947 2 WARNING nova.compute.manager [req-b7224b85-92b3-4a93-9d19-cf55e3bcb2ce req-63089f25-6a15-4ce6-9a6f-5f1c0525190b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received unexpected event network-vif-plugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 for instance with vm_state active and task_state None.
Oct 14 09:07:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Oct 14 09:07:49 compute-0 ceph-mon[74249]: pgmap v1572: 305 pgs: 305 active+clean; 214 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 468 KiB/s rd, 4.8 MiB/s wr, 139 op/s
Oct 14 09:07:49 compute-0 ceph-mon[74249]: osdmap e225: 3 total, 3 up, 3 in
Oct 14 09:07:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Oct 14 09:07:49 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Oct 14 09:07:49 compute-0 nova_compute[259627]: 2025-10-14 09:07:49.041 2 DEBUG nova.storage.rbd_utils [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] creating snapshot(snap) on rbd image(4d823dba-1513-4c47-907f-2858c26de3c1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:07:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 214 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 59 KiB/s wr, 27 op/s
Oct 14 09:07:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Oct 14 09:07:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Oct 14 09:07:50 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Oct 14 09:07:50 compute-0 ceph-mon[74249]: osdmap e226: 3 total, 3 up, 3 in
Oct 14 09:07:50 compute-0 nova_compute[259627]: 2025-10-14 09:07:50.210 2 DEBUG nova.compute.manager [req-7793ce39-ec53-4b26-8efa-1b2e02cacf64 req-9f8c4958-abf9-4dd1-83aa-84fa72ee9ff3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Received event network-vif-plugged-1dc4ac40-9a94-49bf-a098-664b98599004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:50 compute-0 nova_compute[259627]: 2025-10-14 09:07:50.211 2 DEBUG oslo_concurrency.lockutils [req-7793ce39-ec53-4b26-8efa-1b2e02cacf64 req-9f8c4958-abf9-4dd1-83aa-84fa72ee9ff3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:50 compute-0 nova_compute[259627]: 2025-10-14 09:07:50.211 2 DEBUG oslo_concurrency.lockutils [req-7793ce39-ec53-4b26-8efa-1b2e02cacf64 req-9f8c4958-abf9-4dd1-83aa-84fa72ee9ff3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:50 compute-0 nova_compute[259627]: 2025-10-14 09:07:50.211 2 DEBUG oslo_concurrency.lockutils [req-7793ce39-ec53-4b26-8efa-1b2e02cacf64 req-9f8c4958-abf9-4dd1-83aa-84fa72ee9ff3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:07:50 compute-0 nova_compute[259627]: 2025-10-14 09:07:50.211 2 DEBUG nova.compute.manager [req-7793ce39-ec53-4b26-8efa-1b2e02cacf64 req-9f8c4958-abf9-4dd1-83aa-84fa72ee9ff3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] No waiting events found dispatching network-vif-plugged-1dc4ac40-9a94-49bf-a098-664b98599004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:07:50 compute-0 nova_compute[259627]: 2025-10-14 09:07:50.212 2 WARNING nova.compute.manager [req-7793ce39-ec53-4b26-8efa-1b2e02cacf64 req-9f8c4958-abf9-4dd1-83aa-84fa72ee9ff3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Received unexpected event network-vif-plugged-1dc4ac40-9a94-49bf-a098-664b98599004 for instance with vm_state active and task_state None.
Oct 14 09:07:50 compute-0 nova_compute[259627]: 2025-10-14 09:07:50.289 2 DEBUG oslo_concurrency.lockutils [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:07:50 compute-0 nova_compute[259627]: 2025-10-14 09:07:50.290 2 DEBUG oslo_concurrency.lockutils [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:07:50 compute-0 nova_compute[259627]: 2025-10-14 09:07:50.290 2 DEBUG nova.compute.manager [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:50 compute-0 nova_compute[259627]: 2025-10-14 09:07:50.293 2 DEBUG nova.compute.manager [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 14 09:07:50 compute-0 nova_compute[259627]: 2025-10-14 09:07:50.294 2 DEBUG nova.objects.instance [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'flavor' on Instance uuid 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:07:50 compute-0 nova_compute[259627]: 2025-10-14 09:07:50.314 2 DEBUG nova.virt.libvirt.driver [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:07:51 compute-0 ceph-mon[74249]: pgmap v1575: 305 pgs: 305 active+clean; 214 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 59 KiB/s wr, 27 op/s
Oct 14 09:07:51 compute-0 ceph-mon[74249]: osdmap e227: 3 total, 3 up, 3 in
Oct 14 09:07:51 compute-0 nova_compute[259627]: 2025-10-14 09:07:51.432 2 INFO nova.virt.libvirt.driver [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Snapshot image upload complete
Oct 14 09:07:51 compute-0 nova_compute[259627]: 2025-10-14 09:07:51.434 2 INFO nova.compute.manager [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Took 4.74 seconds to snapshot the instance on the hypervisor.
Oct 14 09:07:51 compute-0 nova_compute[259627]: 2025-10-14 09:07:51.731 2 DEBUG nova.compute.manager [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct 14 09:07:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 293 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 7.8 MiB/s wr, 413 op/s
Oct 14 09:07:52 compute-0 nova_compute[259627]: 2025-10-14 09:07:52.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:07:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Oct 14 09:07:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Oct 14 09:07:52 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Oct 14 09:07:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:07:52.939 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:07:53 compute-0 ceph-mon[74249]: pgmap v1577: 305 pgs: 305 active+clean; 293 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 7.8 MiB/s wr, 413 op/s
Oct 14 09:07:53 compute-0 ceph-mon[74249]: osdmap e228: 3 total, 3 up, 3 in
Oct 14 09:07:53 compute-0 nova_compute[259627]: 2025-10-14 09:07:53.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:53 compute-0 podman[328900]: 2025-10-14 09:07:53.665820012 +0000 UTC m=+0.073468680 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 14 09:07:53 compute-0 podman[328899]: 2025-10-14 09:07:53.702132866 +0000 UTC m=+0.115049984 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller)
Oct 14 09:07:53 compute-0 nova_compute[259627]: 2025-10-14 09:07:53.730 2 DEBUG nova.compute.manager [req-0a2695f8-7c32-4cff-8fb0-8cbf29567543 req-c58125f2-8fd1-4093-88f3-feb1be6a3d0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received event network-changed-2d7c67a0-10d0-4de1-a430-60e038fcf537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:07:53 compute-0 nova_compute[259627]: 2025-10-14 09:07:53.730 2 DEBUG nova.compute.manager [req-0a2695f8-7c32-4cff-8fb0-8cbf29567543 req-c58125f2-8fd1-4093-88f3-feb1be6a3d0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Refreshing instance network info cache due to event network-changed-2d7c67a0-10d0-4de1-a430-60e038fcf537. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:07:53 compute-0 nova_compute[259627]: 2025-10-14 09:07:53.731 2 DEBUG oslo_concurrency.lockutils [req-0a2695f8-7c32-4cff-8fb0-8cbf29567543 req-c58125f2-8fd1-4093-88f3-feb1be6a3d0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8a58a504-85a5-44e6-b815-99abb4ca2fc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:07:53 compute-0 nova_compute[259627]: 2025-10-14 09:07:53.731 2 DEBUG oslo_concurrency.lockutils [req-0a2695f8-7c32-4cff-8fb0-8cbf29567543 req-c58125f2-8fd1-4093-88f3-feb1be6a3d0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8a58a504-85a5-44e6-b815-99abb4ca2fc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:07:53 compute-0 nova_compute[259627]: 2025-10-14 09:07:53.732 2 DEBUG nova.network.neutron [req-0a2695f8-7c32-4cff-8fb0-8cbf29567543 req-c58125f2-8fd1-4093-88f3-feb1be6a3d0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Refreshing network info cache for port 2d7c67a0-10d0-4de1-a430-60e038fcf537 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:07:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 293 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 16 MiB/s rd, 8.1 MiB/s wr, 429 op/s
Oct 14 09:07:55 compute-0 ceph-mon[74249]: pgmap v1579: 305 pgs: 305 active+clean; 293 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 16 MiB/s rd, 8.1 MiB/s wr, 429 op/s
Oct 14 09:07:55 compute-0 nova_compute[259627]: 2025-10-14 09:07:55.651 2 DEBUG nova.compute.manager [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:07:55 compute-0 nova_compute[259627]: 2025-10-14 09:07:55.727 2 INFO nova.compute.manager [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] instance snapshotting
Oct 14 09:07:55 compute-0 nova_compute[259627]: 2025-10-14 09:07:55.728 2 DEBUG nova.objects.instance [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'flavor' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:07:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 293 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 6.9 MiB/s wr, 386 op/s
Oct 14 09:07:56 compute-0 nova_compute[259627]: 2025-10-14 09:07:56.033 2 INFO nova.virt.libvirt.driver [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Beginning live snapshot process
Oct 14 09:07:56 compute-0 nova_compute[259627]: 2025-10-14 09:07:56.223 2 DEBUG nova.virt.libvirt.imagebackend [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 09:07:56 compute-0 nova_compute[259627]: 2025-10-14 09:07:56.446 2 DEBUG nova.network.neutron [req-0a2695f8-7c32-4cff-8fb0-8cbf29567543 req-c58125f2-8fd1-4093-88f3-feb1be6a3d0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Updated VIF entry in instance network info cache for port 2d7c67a0-10d0-4de1-a430-60e038fcf537. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:07:56 compute-0 nova_compute[259627]: 2025-10-14 09:07:56.447 2 DEBUG nova.network.neutron [req-0a2695f8-7c32-4cff-8fb0-8cbf29567543 req-c58125f2-8fd1-4093-88f3-feb1be6a3d0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Updating instance_info_cache with network_info: [{"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:07:56 compute-0 nova_compute[259627]: 2025-10-14 09:07:56.477 2 DEBUG oslo_concurrency.lockutils [req-0a2695f8-7c32-4cff-8fb0-8cbf29567543 req-c58125f2-8fd1-4093-88f3-feb1be6a3d0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8a58a504-85a5-44e6-b815-99abb4ca2fc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:07:56 compute-0 nova_compute[259627]: 2025-10-14 09:07:56.510 2 DEBUG nova.storage.rbd_utils [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] creating snapshot(4ccb5be71bd24d81a2819bc7d0631fd9) on rbd image(e065d857-2df9-4199-aa98-41ca3c436bad_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:07:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Oct 14 09:07:57 compute-0 ceph-mon[74249]: pgmap v1580: 305 pgs: 305 active+clean; 293 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 6.9 MiB/s wr, 386 op/s
Oct 14 09:07:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Oct 14 09:07:57 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Oct 14 09:07:57 compute-0 nova_compute[259627]: 2025-10-14 09:07:57.154 2 DEBUG nova.storage.rbd_utils [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] cloning vms/e065d857-2df9-4199-aa98-41ca3c436bad_disk@4ccb5be71bd24d81a2819bc7d0631fd9 to images/71b4e8a7-8830-426f-b3a3-9271d6f6992b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:07:57 compute-0 nova_compute[259627]: 2025-10-14 09:07:57.292 2 DEBUG nova.storage.rbd_utils [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] flattening images/71b4e8a7-8830-426f-b3a3-9271d6f6992b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:07:57 compute-0 nova_compute[259627]: 2025-10-14 09:07:57.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:07:57 compute-0 nova_compute[259627]: 2025-10-14 09:07:57.719 2 DEBUG nova.storage.rbd_utils [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] removing snapshot(4ccb5be71bd24d81a2819bc7d0631fd9) on rbd image(e065d857-2df9-4199-aa98-41ca3c436bad_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:07:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 293 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.0 MiB/s wr, 337 op/s
Oct 14 09:07:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Oct 14 09:07:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Oct 14 09:07:58 compute-0 ceph-mon[74249]: osdmap e229: 3 total, 3 up, 3 in
Oct 14 09:07:58 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Oct 14 09:07:58 compute-0 nova_compute[259627]: 2025-10-14 09:07:58.148 2 DEBUG nova.storage.rbd_utils [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] creating snapshot(snap) on rbd image(71b4e8a7-8830-426f-b3a3-9271d6f6992b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:07:58 compute-0 nova_compute[259627]: 2025-10-14 09:07:58.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:07:58 compute-0 ovn_controller[152662]: 2025-10-14T09:07:58Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b6:5f:31 10.100.0.3
Oct 14 09:07:58 compute-0 ovn_controller[152662]: 2025-10-14T09:07:58Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b6:5f:31 10.100.0.3
Oct 14 09:07:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Oct 14 09:07:59 compute-0 ceph-mon[74249]: pgmap v1582: 305 pgs: 305 active+clean; 293 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.0 MiB/s wr, 337 op/s
Oct 14 09:07:59 compute-0 ceph-mon[74249]: osdmap e230: 3 total, 3 up, 3 in
Oct 14 09:07:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Oct 14 09:07:59 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Oct 14 09:07:59 compute-0 ovn_controller[152662]: 2025-10-14T09:07:59Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0d:81:c4 10.100.0.8
Oct 14 09:07:59 compute-0 ovn_controller[152662]: 2025-10-14T09:07:59Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0d:81:c4 10.100.0.8
Oct 14 09:07:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 293 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.0 KiB/s wr, 22 op/s
Oct 14 09:08:00 compute-0 ceph-mon[74249]: osdmap e231: 3 total, 3 up, 3 in
Oct 14 09:08:00 compute-0 nova_compute[259627]: 2025-10-14 09:08:00.377 2 DEBUG nova.virt.libvirt.driver [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 09:08:00 compute-0 nova_compute[259627]: 2025-10-14 09:08:00.664 2 INFO nova.virt.libvirt.driver [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Snapshot image upload complete
Oct 14 09:08:00 compute-0 nova_compute[259627]: 2025-10-14 09:08:00.664 2 INFO nova.compute.manager [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Took 4.92 seconds to snapshot the instance on the hypervisor.
Oct 14 09:08:00 compute-0 nova_compute[259627]: 2025-10-14 09:08:00.937 2 DEBUG nova.compute.manager [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct 14 09:08:01 compute-0 ceph-mon[74249]: pgmap v1585: 305 pgs: 305 active+clean; 293 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.0 KiB/s wr, 22 op/s
Oct 14 09:08:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 432 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 16 MiB/s wr, 389 op/s
Oct 14 09:08:02 compute-0 nova_compute[259627]: 2025-10-14 09:08:02.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:02 compute-0 nova_compute[259627]: 2025-10-14 09:08:02.442 2 DEBUG nova.compute.manager [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:02 compute-0 nova_compute[259627]: 2025-10-14 09:08:02.488 2 INFO nova.compute.manager [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] instance snapshotting
Oct 14 09:08:02 compute-0 nova_compute[259627]: 2025-10-14 09:08:02.489 2 DEBUG nova.objects.instance [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'flavor' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:08:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:08:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Oct 14 09:08:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Oct 14 09:08:02 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Oct 14 09:08:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:08:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:08:02 compute-0 nova_compute[259627]: 2025-10-14 09:08:02.759 2 INFO nova.virt.libvirt.driver [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Beginning live snapshot process
Oct 14 09:08:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:08:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:08:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:08:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:08:02 compute-0 kernel: tap1dc4ac40-9a (unregistering): left promiscuous mode
Oct 14 09:08:02 compute-0 NetworkManager[44885]: <info>  [1760432882.8673] device (tap1dc4ac40-9a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:08:02 compute-0 ovn_controller[152662]: 2025-10-14T09:08:02Z|00668|binding|INFO|Releasing lport 1dc4ac40-9a94-49bf-a098-664b98599004 from this chassis (sb_readonly=0)
Oct 14 09:08:02 compute-0 ovn_controller[152662]: 2025-10-14T09:08:02Z|00669|binding|INFO|Setting lport 1dc4ac40-9a94-49bf-a098-664b98599004 down in Southbound
Oct 14 09:08:02 compute-0 ovn_controller[152662]: 2025-10-14T09:08:02Z|00670|binding|INFO|Removing iface tap1dc4ac40-9a ovn-installed in OVS
Oct 14 09:08:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:02.898 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:81:c4 10.100.0.8'], port_security=['fa:16:3e:0d:81:c4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '58b61a7a-1a2e-4e3a-9444-3a89da64c5f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd39581efff7d48fb83412ca1f615d412', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd5e08c6-d8c1-45fb-85db-6ce5c46fea39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c165cf5-3bad-4110-8321-7f7bda9723ad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1dc4ac40-9a94-49bf-a098-664b98599004) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:08:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:02.903 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1dc4ac40-9a94-49bf-a098-664b98599004 in datapath 0a07d59e-be8b-4d41-a103-fb5a64bf6f88 unbound from our chassis
Oct 14 09:08:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:02.905 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a07d59e-be8b-4d41-a103-fb5a64bf6f88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:08:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:02.908 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fdaf8160-309e-4316-8dfb-7d757ee42ca0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:02.909 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 namespace which is not needed anymore
Oct 14 09:08:02 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000041.scope: Deactivated successfully.
Oct 14 09:08:02 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000041.scope: Consumed 12.999s CPU time.
Oct 14 09:08:02 compute-0 systemd-machined[214636]: Machine qemu-79-instance-00000041 terminated.
Oct 14 09:08:02 compute-0 nova_compute[259627]: 2025-10-14 09:08:02.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:02 compute-0 nova_compute[259627]: 2025-10-14 09:08:02.967 2 DEBUG nova.virt.libvirt.imagebackend [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 09:08:03 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[328467]: [NOTICE]   (328472) : haproxy version is 2.8.14-c23fe91
Oct 14 09:08:03 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[328467]: [NOTICE]   (328472) : path to executable is /usr/sbin/haproxy
Oct 14 09:08:03 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[328467]: [WARNING]  (328472) : Exiting Master process...
Oct 14 09:08:03 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[328467]: [ALERT]    (328472) : Current worker (328474) exited with code 143 (Terminated)
Oct 14 09:08:03 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[328467]: [WARNING]  (328472) : All workers exited. Exiting... (0)
Oct 14 09:08:03 compute-0 systemd[1]: libpod-3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6.scope: Deactivated successfully.
Oct 14 09:08:03 compute-0 podman[329144]: 2025-10-14 09:08:03.098749971 +0000 UTC m=+0.053772215 container died 3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 09:08:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ae4ef622a69186680b309d3a53a544ac1d19dfd134061b342523d307a4ae263-merged.mount: Deactivated successfully.
Oct 14 09:08:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6-userdata-shm.mount: Deactivated successfully.
Oct 14 09:08:03 compute-0 podman[329144]: 2025-10-14 09:08:03.141827782 +0000 UTC m=+0.096850006 container cleanup 3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 09:08:03 compute-0 ceph-mon[74249]: pgmap v1586: 305 pgs: 305 active+clean; 432 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 16 MiB/s wr, 389 op/s
Oct 14 09:08:03 compute-0 ceph-mon[74249]: osdmap e232: 3 total, 3 up, 3 in
Oct 14 09:08:03 compute-0 systemd[1]: libpod-conmon-3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6.scope: Deactivated successfully.
Oct 14 09:08:03 compute-0 nova_compute[259627]: 2025-10-14 09:08:03.213 2 DEBUG nova.storage.rbd_utils [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] creating snapshot(646213f4699848f899233453071617e5) on rbd image(e065d857-2df9-4199-aa98-41ca3c436bad_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:08:03 compute-0 podman[329183]: 2025-10-14 09:08:03.23758406 +0000 UTC m=+0.062303935 container remove 3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 09:08:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:03.319 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[11cb850d-f471-4304-837d-853250d9707b]: (4, ('Tue Oct 14 09:08:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 (3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6)\n3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6\nTue Oct 14 09:08:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 (3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6)\n3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:03.320 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb9e86c-96f9-4cd8-bd54-ca3721744814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:03.322 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a07d59e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:03 compute-0 kernel: tap0a07d59e-b0: left promiscuous mode
Oct 14 09:08:03 compute-0 nova_compute[259627]: 2025-10-14 09:08:03.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:03 compute-0 nova_compute[259627]: 2025-10-14 09:08:03.327 2 DEBUG nova.compute.manager [req-ea258189-9eef-4861-9af9-178f81ec79cd req-ccdc7b2b-85e7-4b8b-b843-c8c2a3e1d513 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Received event network-vif-unplugged-1dc4ac40-9a94-49bf-a098-664b98599004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:03 compute-0 nova_compute[259627]: 2025-10-14 09:08:03.328 2 DEBUG oslo_concurrency.lockutils [req-ea258189-9eef-4861-9af9-178f81ec79cd req-ccdc7b2b-85e7-4b8b-b843-c8c2a3e1d513 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:03 compute-0 nova_compute[259627]: 2025-10-14 09:08:03.328 2 DEBUG oslo_concurrency.lockutils [req-ea258189-9eef-4861-9af9-178f81ec79cd req-ccdc7b2b-85e7-4b8b-b843-c8c2a3e1d513 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:03 compute-0 nova_compute[259627]: 2025-10-14 09:08:03.328 2 DEBUG oslo_concurrency.lockutils [req-ea258189-9eef-4861-9af9-178f81ec79cd req-ccdc7b2b-85e7-4b8b-b843-c8c2a3e1d513 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:03 compute-0 nova_compute[259627]: 2025-10-14 09:08:03.328 2 DEBUG nova.compute.manager [req-ea258189-9eef-4861-9af9-178f81ec79cd req-ccdc7b2b-85e7-4b8b-b843-c8c2a3e1d513 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] No waiting events found dispatching network-vif-unplugged-1dc4ac40-9a94-49bf-a098-664b98599004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:08:03 compute-0 nova_compute[259627]: 2025-10-14 09:08:03.328 2 WARNING nova.compute.manager [req-ea258189-9eef-4861-9af9-178f81ec79cd req-ccdc7b2b-85e7-4b8b-b843-c8c2a3e1d513 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Received unexpected event network-vif-unplugged-1dc4ac40-9a94-49bf-a098-664b98599004 for instance with vm_state active and task_state powering-off.
Oct 14 09:08:03 compute-0 nova_compute[259627]: 2025-10-14 09:08:03.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:08:03 compute-0 nova_compute[259627]: 2025-10-14 09:08:03.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:03.345 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[841e1498-0d55-4790-8b5c-0e984eb65647]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:03.368 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7090f9e4-ddd2-4bae-a663-2df1a623f3fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:03.369 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b0566a7b-4238-49ee-b1a4-df94a0cb38a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:03.383 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9414bd68-8f8b-48e1-9388-48e6266630be]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663968, 'reachable_time': 22368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329225, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:03.386 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:08:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:03.386 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[3d17b999-6b81-4a87-8817-39c0e26843b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:03 compute-0 systemd[1]: run-netns-ovnmeta\x2d0a07d59e\x2dbe8b\x2d4d41\x2da103\x2dfb5a64bf6f88.mount: Deactivated successfully.
Oct 14 09:08:03 compute-0 nova_compute[259627]: 2025-10-14 09:08:03.395 2 INFO nova.virt.libvirt.driver [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Instance shutdown successfully after 13 seconds.
Oct 14 09:08:03 compute-0 nova_compute[259627]: 2025-10-14 09:08:03.399 2 INFO nova.virt.libvirt.driver [-] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Instance destroyed successfully.
Oct 14 09:08:03 compute-0 nova_compute[259627]: 2025-10-14 09:08:03.399 2 DEBUG nova.objects.instance [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'numa_topology' on Instance uuid 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:08:03 compute-0 nova_compute[259627]: 2025-10-14 09:08:03.412 2 DEBUG nova.compute.manager [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:03 compute-0 nova_compute[259627]: 2025-10-14 09:08:03.459 2 DEBUG oslo_concurrency.lockutils [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 432 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 16 MiB/s wr, 389 op/s
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.051 2 DEBUG oslo_concurrency.lockutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.052 2 DEBUG oslo_concurrency.lockutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.052 2 DEBUG oslo_concurrency.lockutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.052 2 DEBUG oslo_concurrency.lockutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.053 2 DEBUG oslo_concurrency.lockutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.054 2 INFO nova.compute.manager [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Terminating instance
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.055 2 DEBUG nova.compute.manager [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.061 2 INFO nova.virt.libvirt.driver [-] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Instance destroyed successfully.
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.061 2 DEBUG nova.objects.instance [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'resources' on Instance uuid 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.075 2 DEBUG nova.virt.libvirt.vif [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2035284822',display_name='tempest-DeleteServersTestJSON-server-2035284822',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2035284822',id=65,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:07:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-47ja83kl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:08:03Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=58b61a7a-1a2e-4e3a-9444-3a89da64c5f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.076 2 DEBUG nova.network.os_vif_util [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.077 2 DEBUG nova.network.os_vif_util [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:81:c4,bridge_name='br-int',has_traffic_filtering=True,id=1dc4ac40-9a94-49bf-a098-664b98599004,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dc4ac40-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.077 2 DEBUG os_vif [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:81:c4,bridge_name='br-int',has_traffic_filtering=True,id=1dc4ac40-9a94-49bf-a098-664b98599004,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dc4ac40-9a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.080 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1dc4ac40-9a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.086 2 INFO os_vif [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:81:c4,bridge_name='br-int',has_traffic_filtering=True,id=1dc4ac40-9a94-49bf-a098-664b98599004,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dc4ac40-9a')
Oct 14 09:08:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Oct 14 09:08:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Oct 14 09:08:04 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.247 2 DEBUG nova.storage.rbd_utils [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] cloning vms/e065d857-2df9-4199-aa98-41ca3c436bad_disk@646213f4699848f899233453071617e5 to images/0dc1ba26-0588-4dc2-8af6-e697669fc950 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.369 2 DEBUG nova.storage.rbd_utils [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] flattening images/0dc1ba26-0588-4dc2-8af6-e697669fc950 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.712 2 INFO nova.virt.libvirt.driver [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Deleting instance files /var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_del
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.712 2 INFO nova.virt.libvirt.driver [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Deletion of /var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_del complete
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.725 2 DEBUG nova.storage.rbd_utils [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] removing snapshot(646213f4699848f899233453071617e5) on rbd image(e065d857-2df9-4199-aa98-41ca3c436bad_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.787 2 INFO nova.compute.manager [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.788 2 DEBUG oslo.service.loopingcall [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.788 2 DEBUG nova.compute.manager [-] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:08:04 compute-0 nova_compute[259627]: 2025-10-14 09:08:04.789 2 DEBUG nova.network.neutron [-] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:08:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Oct 14 09:08:05 compute-0 ceph-mon[74249]: pgmap v1588: 305 pgs: 305 active+clean; 432 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 16 MiB/s wr, 389 op/s
Oct 14 09:08:05 compute-0 ceph-mon[74249]: osdmap e233: 3 total, 3 up, 3 in
Oct 14 09:08:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Oct 14 09:08:05 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Oct 14 09:08:05 compute-0 nova_compute[259627]: 2025-10-14 09:08:05.226 2 DEBUG nova.storage.rbd_utils [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] creating snapshot(snap) on rbd image(0dc1ba26-0588-4dc2-8af6-e697669fc950) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:08:05 compute-0 nova_compute[259627]: 2025-10-14 09:08:05.424 2 DEBUG nova.compute.manager [req-e5d5a8ff-a69c-42ca-ba06-f0a259b12765 req-203c45e9-f7df-49d8-ab86-2000d4fd0bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Received event network-vif-plugged-1dc4ac40-9a94-49bf-a098-664b98599004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:05 compute-0 nova_compute[259627]: 2025-10-14 09:08:05.424 2 DEBUG oslo_concurrency.lockutils [req-e5d5a8ff-a69c-42ca-ba06-f0a259b12765 req-203c45e9-f7df-49d8-ab86-2000d4fd0bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:05 compute-0 nova_compute[259627]: 2025-10-14 09:08:05.425 2 DEBUG oslo_concurrency.lockutils [req-e5d5a8ff-a69c-42ca-ba06-f0a259b12765 req-203c45e9-f7df-49d8-ab86-2000d4fd0bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:05 compute-0 nova_compute[259627]: 2025-10-14 09:08:05.426 2 DEBUG oslo_concurrency.lockutils [req-e5d5a8ff-a69c-42ca-ba06-f0a259b12765 req-203c45e9-f7df-49d8-ab86-2000d4fd0bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:05 compute-0 nova_compute[259627]: 2025-10-14 09:08:05.426 2 DEBUG nova.compute.manager [req-e5d5a8ff-a69c-42ca-ba06-f0a259b12765 req-203c45e9-f7df-49d8-ab86-2000d4fd0bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] No waiting events found dispatching network-vif-plugged-1dc4ac40-9a94-49bf-a098-664b98599004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:08:05 compute-0 nova_compute[259627]: 2025-10-14 09:08:05.427 2 WARNING nova.compute.manager [req-e5d5a8ff-a69c-42ca-ba06-f0a259b12765 req-203c45e9-f7df-49d8-ab86-2000d4fd0bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Received unexpected event network-vif-plugged-1dc4ac40-9a94-49bf-a098-664b98599004 for instance with vm_state stopped and task_state deleting.
Oct 14 09:08:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:08:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3566661624' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:08:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:08:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3566661624' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:08:05 compute-0 nova_compute[259627]: 2025-10-14 09:08:05.613 2 DEBUG nova.network.neutron [-] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:08:05 compute-0 nova_compute[259627]: 2025-10-14 09:08:05.630 2 INFO nova.compute.manager [-] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Took 0.84 seconds to deallocate network for instance.
Oct 14 09:08:05 compute-0 nova_compute[259627]: 2025-10-14 09:08:05.680 2 DEBUG oslo_concurrency.lockutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:05 compute-0 nova_compute[259627]: 2025-10-14 09:08:05.681 2 DEBUG oslo_concurrency.lockutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 438 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 17 MiB/s rd, 24 MiB/s wr, 633 op/s
Oct 14 09:08:05 compute-0 nova_compute[259627]: 2025-10-14 09:08:05.849 2 DEBUG oslo_concurrency.processutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Oct 14 09:08:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Oct 14 09:08:06 compute-0 ceph-mon[74249]: osdmap e234: 3 total, 3 up, 3 in
Oct 14 09:08:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3566661624' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:08:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3566661624' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:08:06 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Oct 14 09:08:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:08:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1209689776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:06 compute-0 nova_compute[259627]: 2025-10-14 09:08:06.295 2 DEBUG oslo_concurrency.processutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:06 compute-0 nova_compute[259627]: 2025-10-14 09:08:06.305 2 DEBUG nova.compute.provider_tree [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:08:06 compute-0 nova_compute[259627]: 2025-10-14 09:08:06.330 2 DEBUG nova.scheduler.client.report [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:08:06 compute-0 nova_compute[259627]: 2025-10-14 09:08:06.353 2 DEBUG oslo_concurrency.lockutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:06 compute-0 nova_compute[259627]: 2025-10-14 09:08:06.382 2 INFO nova.scheduler.client.report [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Deleted allocations for instance 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3
Oct 14 09:08:06 compute-0 nova_compute[259627]: 2025-10-14 09:08:06.467 2 DEBUG oslo_concurrency.lockutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:06 compute-0 sudo[329357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:08:06 compute-0 sudo[329357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:08:06 compute-0 sudo[329357]: pam_unix(sudo:session): session closed for user root
Oct 14 09:08:06 compute-0 sudo[329382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:08:06 compute-0 sudo[329382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:08:06 compute-0 sudo[329382]: pam_unix(sudo:session): session closed for user root
Oct 14 09:08:06 compute-0 sudo[329407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:08:06 compute-0 sudo[329407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:08:06 compute-0 sudo[329407]: pam_unix(sudo:session): session closed for user root
Oct 14 09:08:06 compute-0 sudo[329432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:08:06 compute-0 sudo[329432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:08:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:07.026 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:07.026 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:07.028 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:07 compute-0 ceph-mon[74249]: pgmap v1591: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 438 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 17 MiB/s rd, 24 MiB/s wr, 633 op/s
Oct 14 09:08:07 compute-0 ceph-mon[74249]: osdmap e235: 3 total, 3 up, 3 in
Oct 14 09:08:07 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1209689776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:07 compute-0 nova_compute[259627]: 2025-10-14 09:08:07.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:07 compute-0 sudo[329432]: pam_unix(sudo:session): session closed for user root
Oct 14 09:08:07 compute-0 nova_compute[259627]: 2025-10-14 09:08:07.529 2 DEBUG nova.compute.manager [req-73dc8602-ac19-446d-8dc6-187466d6aa6a req-6126bc4d-c521-4da2-99ed-bf2af8bd2687 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Received event network-vif-deleted-1dc4ac40-9a94-49bf-a098-664b98599004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:08:07 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:08:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:08:07 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:08:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:08:07 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:08:07 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 1704267d-3faa-49a0-a0d8-c233c578e95f does not exist
Oct 14 09:08:07 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 4b559b02-acae-4197-9af7-41d866e9bd68 does not exist
Oct 14 09:08:07 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 800c208f-8024-40be-9cba-425996bd2a51 does not exist
Oct 14 09:08:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:08:07 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:08:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:08:07 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:08:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:08:07 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:08:07 compute-0 sudo[329489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:08:07 compute-0 sudo[329489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:08:07 compute-0 sudo[329489]: pam_unix(sudo:session): session closed for user root
Oct 14 09:08:07 compute-0 nova_compute[259627]: 2025-10-14 09:08:07.692 2 INFO nova.virt.libvirt.driver [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Snapshot image upload complete
Oct 14 09:08:07 compute-0 nova_compute[259627]: 2025-10-14 09:08:07.693 2 INFO nova.compute.manager [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Took 5.18 seconds to snapshot the instance on the hypervisor.
Oct 14 09:08:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:08:07 compute-0 sudo[329514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:08:07 compute-0 sudo[329514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:08:07 compute-0 sudo[329514]: pam_unix(sudo:session): session closed for user root
Oct 14 09:08:07 compute-0 sudo[329539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:08:07 compute-0 sudo[329539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:08:07 compute-0 sudo[329539]: pam_unix(sudo:session): session closed for user root
Oct 14 09:08:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 438 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 9.5 MiB/s rd, 9.3 MiB/s wr, 289 op/s
Oct 14 09:08:07 compute-0 sudo[329564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:08:07 compute-0 sudo[329564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:08:08 compute-0 nova_compute[259627]: 2025-10-14 09:08:08.018 2 DEBUG nova.compute.manager [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct 14 09:08:08 compute-0 nova_compute[259627]: 2025-10-14 09:08:08.019 2 DEBUG nova.compute.manager [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458
Oct 14 09:08:08 compute-0 nova_compute[259627]: 2025-10-14 09:08:08.019 2 DEBUG nova.compute.manager [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Deleting image 4d823dba-1513-4c47-907f-2858c26de3c1 _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463
Oct 14 09:08:08 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:08:08 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:08:08 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:08:08 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:08:08 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:08:08 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:08:08 compute-0 podman[329631]: 2025-10-14 09:08:08.28361002 +0000 UTC m=+0.049668695 container create b6282746cc1e2580e2ea2ab1ba9a0d77a4313e6afd4216ecb82cf76048b10791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_swartz, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:08:08 compute-0 systemd[1]: Started libpod-conmon-b6282746cc1e2580e2ea2ab1ba9a0d77a4313e6afd4216ecb82cf76048b10791.scope.
Oct 14 09:08:08 compute-0 podman[329631]: 2025-10-14 09:08:08.263399442 +0000 UTC m=+0.029458157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:08:08 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:08:08 compute-0 podman[329631]: 2025-10-14 09:08:08.383004387 +0000 UTC m=+0.149063062 container init b6282746cc1e2580e2ea2ab1ba9a0d77a4313e6afd4216ecb82cf76048b10791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_swartz, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 09:08:08 compute-0 podman[329631]: 2025-10-14 09:08:08.390633355 +0000 UTC m=+0.156692050 container start b6282746cc1e2580e2ea2ab1ba9a0d77a4313e6afd4216ecb82cf76048b10791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 09:08:08 compute-0 podman[329631]: 2025-10-14 09:08:08.394443779 +0000 UTC m=+0.160502464 container attach b6282746cc1e2580e2ea2ab1ba9a0d77a4313e6afd4216ecb82cf76048b10791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct 14 09:08:08 compute-0 determined_swartz[329649]: 167 167
Oct 14 09:08:08 compute-0 systemd[1]: libpod-b6282746cc1e2580e2ea2ab1ba9a0d77a4313e6afd4216ecb82cf76048b10791.scope: Deactivated successfully.
Oct 14 09:08:08 compute-0 podman[329631]: 2025-10-14 09:08:08.398788106 +0000 UTC m=+0.164846801 container died b6282746cc1e2580e2ea2ab1ba9a0d77a4313e6afd4216ecb82cf76048b10791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_swartz, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 09:08:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-186e18801a1ec1db3f37637e3894e08760957ca5c530d0429ededd8c2656f4ff-merged.mount: Deactivated successfully.
Oct 14 09:08:08 compute-0 podman[329631]: 2025-10-14 09:08:08.447118276 +0000 UTC m=+0.213176951 container remove b6282746cc1e2580e2ea2ab1ba9a0d77a4313e6afd4216ecb82cf76048b10791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_swartz, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 09:08:08 compute-0 systemd[1]: libpod-conmon-b6282746cc1e2580e2ea2ab1ba9a0d77a4313e6afd4216ecb82cf76048b10791.scope: Deactivated successfully.
Oct 14 09:08:08 compute-0 podman[329673]: 2025-10-14 09:08:08.722102249 +0000 UTC m=+0.073779598 container create abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_sanderson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:08:08 compute-0 systemd[1]: Started libpod-conmon-abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8.scope.
Oct 14 09:08:08 compute-0 podman[329673]: 2025-10-14 09:08:08.691825523 +0000 UTC m=+0.043502912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:08:08 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14271cfda9c02fb47d7f1fbb1be3eb81c22ebb9669934f26b9e773ac27511367/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14271cfda9c02fb47d7f1fbb1be3eb81c22ebb9669934f26b9e773ac27511367/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14271cfda9c02fb47d7f1fbb1be3eb81c22ebb9669934f26b9e773ac27511367/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14271cfda9c02fb47d7f1fbb1be3eb81c22ebb9669934f26b9e773ac27511367/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14271cfda9c02fb47d7f1fbb1be3eb81c22ebb9669934f26b9e773ac27511367/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:08:08 compute-0 podman[329673]: 2025-10-14 09:08:08.839985802 +0000 UTC m=+0.191663141 container init abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 09:08:08 compute-0 podman[329673]: 2025-10-14 09:08:08.84964889 +0000 UTC m=+0.201326239 container start abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:08:08 compute-0 podman[329673]: 2025-10-14 09:08:08.854338675 +0000 UTC m=+0.206016004 container attach abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_sanderson, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 09:08:09 compute-0 nova_compute[259627]: 2025-10-14 09:08:09.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Oct 14 09:08:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Oct 14 09:08:09 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Oct 14 09:08:09 compute-0 nova_compute[259627]: 2025-10-14 09:08:09.544 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "9512792b-fd02-459a-8377-c2815c130684" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:09 compute-0 nova_compute[259627]: 2025-10-14 09:08:09.545 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "9512792b-fd02-459a-8377-c2815c130684" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:09 compute-0 nova_compute[259627]: 2025-10-14 09:08:09.566 2 DEBUG nova.compute.manager [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:08:09 compute-0 ceph-mon[74249]: pgmap v1593: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 438 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 9.5 MiB/s rd, 9.3 MiB/s wr, 289 op/s
Oct 14 09:08:09 compute-0 ceph-mon[74249]: osdmap e236: 3 total, 3 up, 3 in
Oct 14 09:08:09 compute-0 nova_compute[259627]: 2025-10-14 09:08:09.635 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:09 compute-0 nova_compute[259627]: 2025-10-14 09:08:09.635 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:09 compute-0 nova_compute[259627]: 2025-10-14 09:08:09.644 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:08:09 compute-0 nova_compute[259627]: 2025-10-14 09:08:09.645 2 INFO nova.compute.claims [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:08:09 compute-0 nova_compute[259627]: 2025-10-14 09:08:09.775 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 438 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 8.5 MiB/s wr, 262 op/s
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.042 2 DEBUG oslo_concurrency.lockutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquiring lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.043 2 DEBUG oslo_concurrency.lockutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.044 2 DEBUG oslo_concurrency.lockutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquiring lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.044 2 DEBUG oslo_concurrency.lockutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.045 2 DEBUG oslo_concurrency.lockutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.048 2 INFO nova.compute.manager [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Terminating instance
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.050 2 DEBUG nova.compute.manager [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:08:10 compute-0 frosty_sanderson[329691]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:08:10 compute-0 frosty_sanderson[329691]: --> relative data size: 1.0
Oct 14 09:08:10 compute-0 frosty_sanderson[329691]: --> All data devices are unavailable
Oct 14 09:08:10 compute-0 systemd[1]: libpod-abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8.scope: Deactivated successfully.
Oct 14 09:08:10 compute-0 systemd[1]: libpod-abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8.scope: Consumed 1.143s CPU time.
Oct 14 09:08:10 compute-0 podman[329673]: 2025-10-14 09:08:10.11610707 +0000 UTC m=+1.467784379 container died abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_sanderson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 09:08:10 compute-0 kernel: tap2d7c67a0-10 (unregistering): left promiscuous mode
Oct 14 09:08:10 compute-0 NetworkManager[44885]: <info>  [1760432890.1318] device (tap2d7c67a0-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-14271cfda9c02fb47d7f1fbb1be3eb81c22ebb9669934f26b9e773ac27511367-merged.mount: Deactivated successfully.
Oct 14 09:08:10 compute-0 ovn_controller[152662]: 2025-10-14T09:08:10Z|00671|binding|INFO|Releasing lport 2d7c67a0-10d0-4de1-a430-60e038fcf537 from this chassis (sb_readonly=0)
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:10 compute-0 ovn_controller[152662]: 2025-10-14T09:08:10Z|00672|binding|INFO|Setting lport 2d7c67a0-10d0-4de1-a430-60e038fcf537 down in Southbound
Oct 14 09:08:10 compute-0 ovn_controller[152662]: 2025-10-14T09:08:10Z|00673|binding|INFO|Removing iface tap2d7c67a0-10 ovn-installed in OVS
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:10 compute-0 podman[329673]: 2025-10-14 09:08:10.176524698 +0000 UTC m=+1.528201997 container remove abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_sanderson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:08:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.177 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:5f:31 10.100.0.3'], port_security=['fa:16:3e:b6:5f:31 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8a58a504-85a5-44e6-b815-99abb4ca2fc8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df3e044c-d77c-4323-a9d7-2b0425933df0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f9883ad901fc41c2a340b171d7165a0e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '38f16233-f765-495d-b104-7721931f5384', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8aeb3f0-29e9-44d2-ad79-ad7dad88caab, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2d7c67a0-10d0-4de1-a430-60e038fcf537) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:08:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.178 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2d7c67a0-10d0-4de1-a430-60e038fcf537 in datapath df3e044c-d77c-4323-a9d7-2b0425933df0 unbound from our chassis
Oct 14 09:08:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.180 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df3e044c-d77c-4323-a9d7-2b0425933df0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:08:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.181 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[58a0146c-8013-4ff0-9a19-da24aefe086c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.182 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0 namespace which is not needed anymore
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:10 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000042.scope: Deactivated successfully.
Oct 14 09:08:10 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000042.scope: Consumed 14.580s CPU time.
Oct 14 09:08:10 compute-0 systemd[1]: libpod-conmon-abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8.scope: Deactivated successfully.
Oct 14 09:08:10 compute-0 systemd-machined[214636]: Machine qemu-80-instance-00000042 terminated.
Oct 14 09:08:10 compute-0 sudo[329564]: pam_unix(sudo:session): session closed for user root
Oct 14 09:08:10 compute-0 podman[329741]: 2025-10-14 09:08:10.256888167 +0000 UTC m=+0.108623056 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:08:10 compute-0 podman[329743]: 2025-10-14 09:08:10.260965987 +0000 UTC m=+0.100213609 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 14 09:08:10 compute-0 sudo[329792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:08:10 compute-0 sudo[329792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:08:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:08:10 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3120816586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:10 compute-0 sudo[329792]: pam_unix(sudo:session): session closed for user root
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.309 2 INFO nova.virt.libvirt.driver [-] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Instance destroyed successfully.
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.310 2 DEBUG nova.objects.instance [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lazy-loading 'resources' on Instance uuid 8a58a504-85a5-44e6-b815-99abb4ca2fc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.316 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.323 2 DEBUG nova.compute.provider_tree [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:08:10 compute-0 neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0[328743]: [NOTICE]   (328747) : haproxy version is 2.8.14-c23fe91
Oct 14 09:08:10 compute-0 neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0[328743]: [NOTICE]   (328747) : path to executable is /usr/sbin/haproxy
Oct 14 09:08:10 compute-0 neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0[328743]: [WARNING]  (328747) : Exiting Master process...
Oct 14 09:08:10 compute-0 neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0[328743]: [WARNING]  (328747) : Exiting Master process...
Oct 14 09:08:10 compute-0 neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0[328743]: [ALERT]    (328747) : Current worker (328749) exited with code 143 (Terminated)
Oct 14 09:08:10 compute-0 neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0[328743]: [WARNING]  (328747) : All workers exited. Exiting... (0)
Oct 14 09:08:10 compute-0 systemd[1]: libpod-5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b.scope: Deactivated successfully.
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.330 2 DEBUG nova.virt.libvirt.vif [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:07:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1860549828',display_name='tempest-ServersTestJSON-server-1860549828',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1860549828',id=66,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJPBjCQeMHoUcI/GHoNmPZT13/aYbG1GbRQyp9yA777AdaKu728JKactgc6o+aymRL18NOp98nhjzfD96xfaginRg8v3g0mj8wP4FAzm7DJKAkKlO+Gseanq7GXJCgToDw==',key_name='tempest-keypair-1047945045',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:07:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f9883ad901fc41c2a340b171d7165a0e',ramdisk_id='',reservation_id='r-owfgitz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1568732941',owner_user_name='tempest-ServersTestJSON-1568732941-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:07:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='40a7a5045f164fb3bc6f8ae8a40f6bac',uuid=8a58a504-85a5-44e6-b815-99abb4ca2fc8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.331 2 DEBUG nova.network.os_vif_util [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Converting VIF {"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.331 2 DEBUG nova.network.os_vif_util [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b6:5f:31,bridge_name='br-int',has_traffic_filtering=True,id=2d7c67a0-10d0-4de1-a430-60e038fcf537,network=Network(df3e044c-d77c-4323-a9d7-2b0425933df0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d7c67a0-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:08:10 compute-0 podman[329837]: 2025-10-14 09:08:10.331991636 +0000 UTC m=+0.045226795 container died 5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.332 2 DEBUG os_vif [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b6:5f:31,bridge_name='br-int',has_traffic_filtering=True,id=2d7c67a0-10d0-4de1-a430-60e038fcf537,network=Network(df3e044c-d77c-4323-a9d7-2b0425933df0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d7c67a0-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.334 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d7c67a0-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:10 compute-0 sudo[329851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.339 2 INFO os_vif [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b6:5f:31,bridge_name='br-int',has_traffic_filtering=True,id=2d7c67a0-10d0-4de1-a430-60e038fcf537,network=Network(df3e044c-d77c-4323-a9d7-2b0425933df0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d7c67a0-10')
Oct 14 09:08:10 compute-0 sudo[329851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:08:10 compute-0 sudo[329851]: pam_unix(sudo:session): session closed for user root
Oct 14 09:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-18ff3ed9243abc8a60fd8bac38117c2979ed9d105d233f8f10fb601e76b89352-merged.mount: Deactivated successfully.
Oct 14 09:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b-userdata-shm.mount: Deactivated successfully.
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.368 2 DEBUG nova.scheduler.client.report [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:08:10 compute-0 podman[329837]: 2025-10-14 09:08:10.370737991 +0000 UTC m=+0.083973150 container cleanup 5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:08:10 compute-0 systemd[1]: libpod-conmon-5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b.scope: Deactivated successfully.
Oct 14 09:08:10 compute-0 sudo[329902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.398 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.399 2 DEBUG nova.compute.manager [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:08:10 compute-0 sudo[329902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:08:10 compute-0 sudo[329902]: pam_unix(sudo:session): session closed for user root
Oct 14 09:08:10 compute-0 podman[329942]: 2025-10-14 09:08:10.443299028 +0000 UTC m=+0.050749221 container remove 5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.446 2 DEBUG nova.compute.manager [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.447 2 DEBUG nova.network.neutron [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:08:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.452 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf654df-ce30-41cd-a484-eae3a3f95442]: (4, ('Tue Oct 14 09:08:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0 (5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b)\n5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b\nTue Oct 14 09:08:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0 (5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b)\n5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.453 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[08b230af-fa1c-4485-b2a4-43847492cc07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.454 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf3e044c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:10 compute-0 sudo[329957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:08:10 compute-0 sudo[329957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:08:10 compute-0 kernel: tapdf3e044c-d0: left promiscuous mode
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.466 2 DEBUG nova.compute.manager [req-44607642-f72d-43d8-8613-763d1da2ad60 req-4746acfd-b1ac-493c-81b5-4735ab7378ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received event network-vif-unplugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.466 2 DEBUG oslo_concurrency.lockutils [req-44607642-f72d-43d8-8613-763d1da2ad60 req-4746acfd-b1ac-493c-81b5-4735ab7378ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.467 2 DEBUG oslo_concurrency.lockutils [req-44607642-f72d-43d8-8613-763d1da2ad60 req-4746acfd-b1ac-493c-81b5-4735ab7378ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.467 2 DEBUG oslo_concurrency.lockutils [req-44607642-f72d-43d8-8613-763d1da2ad60 req-4746acfd-b1ac-493c-81b5-4735ab7378ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.467 2 DEBUG nova.compute.manager [req-44607642-f72d-43d8-8613-763d1da2ad60 req-4746acfd-b1ac-493c-81b5-4735ab7378ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] No waiting events found dispatching network-vif-unplugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.467 2 DEBUG nova.compute.manager [req-44607642-f72d-43d8-8613-763d1da2ad60 req-4746acfd-b1ac-493c-81b5-4735ab7378ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received event network-vif-unplugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.471 2 INFO nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.474 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6df5c1a4-f5cd-4564-acc1-10d9a01153df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.499 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[24e5a6fe-4617-4b37-bca9-dffdbc126640]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.502 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6761c6-9a32-496f-bd1f-745eac736a54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.508 2 DEBUG nova.compute.manager [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:08:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.520 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a9bec1-57f0-42ac-b55c-2d17e9fd69d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664338, 'reachable_time': 32890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329989, 'error': None, 'target': 'ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:10 compute-0 systemd[1]: run-netns-ovnmeta\x2ddf3e044c\x2dd77c\x2d4323\x2da9d7\x2d2b0425933df0.mount: Deactivated successfully.
Oct 14 09:08:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.523 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:08:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.524 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[8aae0a5c-11f3-4759-b046-3363f03dda00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:10 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3120816586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.606 2 DEBUG nova.compute.manager [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.607 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.608 2 INFO nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Creating image(s)
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.631 2 DEBUG nova.storage.rbd_utils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 9512792b-fd02-459a-8377-c2815c130684_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.652 2 DEBUG nova.storage.rbd_utils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 9512792b-fd02-459a-8377-c2815c130684_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.674 2 DEBUG nova.storage.rbd_utils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 9512792b-fd02-459a-8377-c2815c130684_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.677 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.708 2 DEBUG nova.policy [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a72439ec330b476ca4bb358682159b61', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd39581efff7d48fb83412ca1f615d412', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.745 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.746 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.747 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.747 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.768 2 DEBUG nova.storage.rbd_utils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 9512792b-fd02-459a-8377-c2815c130684_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.774 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9512792b-fd02-459a-8377-c2815c130684_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:10 compute-0 podman[330085]: 2025-10-14 09:08:10.798793252 +0000 UTC m=+0.044822414 container create c397790cf39a4dd655d7f0782b378e43cb9341eb7704640e3c617d5f45c8bde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bhabha, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 09:08:10 compute-0 systemd[1]: Started libpod-conmon-c397790cf39a4dd655d7f0782b378e43cb9341eb7704640e3c617d5f45c8bde0.scope.
Oct 14 09:08:10 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:08:10 compute-0 podman[330085]: 2025-10-14 09:08:10.778073822 +0000 UTC m=+0.024103004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:08:10 compute-0 podman[330085]: 2025-10-14 09:08:10.89128737 +0000 UTC m=+0.137316562 container init c397790cf39a4dd655d7f0782b378e43cb9341eb7704640e3c617d5f45c8bde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:08:10 compute-0 podman[330085]: 2025-10-14 09:08:10.90225036 +0000 UTC m=+0.148279522 container start c397790cf39a4dd655d7f0782b378e43cb9341eb7704640e3c617d5f45c8bde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:08:10 compute-0 podman[330085]: 2025-10-14 09:08:10.906941616 +0000 UTC m=+0.152970878 container attach c397790cf39a4dd655d7f0782b378e43cb9341eb7704640e3c617d5f45c8bde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 09:08:10 compute-0 dazzling_bhabha[330121]: 167 167
Oct 14 09:08:10 compute-0 systemd[1]: libpod-c397790cf39a4dd655d7f0782b378e43cb9341eb7704640e3c617d5f45c8bde0.scope: Deactivated successfully.
Oct 14 09:08:10 compute-0 podman[330085]: 2025-10-14 09:08:10.908832143 +0000 UTC m=+0.154861305 container died c397790cf39a4dd655d7f0782b378e43cb9341eb7704640e3c617d5f45c8bde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bhabha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.923 2 INFO nova.virt.libvirt.driver [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Deleting instance files /var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8_del
Oct 14 09:08:10 compute-0 nova_compute[259627]: 2025-10-14 09:08:10.925 2 INFO nova.virt.libvirt.driver [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Deletion of /var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8_del complete
Oct 14 09:08:10 compute-0 podman[330085]: 2025-10-14 09:08:10.965070507 +0000 UTC m=+0.211099669 container remove c397790cf39a4dd655d7f0782b378e43cb9341eb7704640e3c617d5f45c8bde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 09:08:10 compute-0 systemd[1]: libpod-conmon-c397790cf39a4dd655d7f0782b378e43cb9341eb7704640e3c617d5f45c8bde0.scope: Deactivated successfully.
Oct 14 09:08:11 compute-0 nova_compute[259627]: 2025-10-14 09:08:11.009 2 INFO nova.compute.manager [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Took 0.96 seconds to destroy the instance on the hypervisor.
Oct 14 09:08:11 compute-0 nova_compute[259627]: 2025-10-14 09:08:11.009 2 DEBUG oslo.service.loopingcall [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:08:11 compute-0 nova_compute[259627]: 2025-10-14 09:08:11.010 2 DEBUG nova.compute.manager [-] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:08:11 compute-0 nova_compute[259627]: 2025-10-14 09:08:11.010 2 DEBUG nova.network.neutron [-] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:08:11 compute-0 nova_compute[259627]: 2025-10-14 09:08:11.052 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9512792b-fd02-459a-8377-c2815c130684_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:11 compute-0 nova_compute[259627]: 2025-10-14 09:08:11.126 2 DEBUG nova.storage.rbd_utils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] resizing rbd image 9512792b-fd02-459a-8377-c2815c130684_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:08:11 compute-0 podman[330181]: 2025-10-14 09:08:11.138558089 +0000 UTC m=+0.046170698 container create 44d0c59f83c9faa91ecfe6e419ca7081f70c21e974a2a4b4b532f63c4fb5a454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nightingale, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 09:08:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-716d01a2a33a9c8f91bc7cddf4d80919629e78d4b79a94c61bc4f95e70a75d8a-merged.mount: Deactivated successfully.
Oct 14 09:08:11 compute-0 systemd[1]: Started libpod-conmon-44d0c59f83c9faa91ecfe6e419ca7081f70c21e974a2a4b4b532f63c4fb5a454.scope.
Oct 14 09:08:11 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:08:11 compute-0 podman[330181]: 2025-10-14 09:08:11.120528995 +0000 UTC m=+0.028141644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abcb2b83750f548e1284412cdc1de5b8df5d849e041651f9a42b98a15f58d761/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abcb2b83750f548e1284412cdc1de5b8df5d849e041651f9a42b98a15f58d761/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abcb2b83750f548e1284412cdc1de5b8df5d849e041651f9a42b98a15f58d761/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abcb2b83750f548e1284412cdc1de5b8df5d849e041651f9a42b98a15f58d761/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:08:11 compute-0 podman[330181]: 2025-10-14 09:08:11.236338217 +0000 UTC m=+0.143950896 container init 44d0c59f83c9faa91ecfe6e419ca7081f70c21e974a2a4b4b532f63c4fb5a454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:08:11 compute-0 nova_compute[259627]: 2025-10-14 09:08:11.239 2 DEBUG nova.objects.instance [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'migration_context' on Instance uuid 9512792b-fd02-459a-8377-c2815c130684 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:08:11 compute-0 podman[330181]: 2025-10-14 09:08:11.24661158 +0000 UTC m=+0.154224199 container start 44d0c59f83c9faa91ecfe6e419ca7081f70c21e974a2a4b4b532f63c4fb5a454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 09:08:11 compute-0 podman[330181]: 2025-10-14 09:08:11.250872245 +0000 UTC m=+0.158484944 container attach 44d0c59f83c9faa91ecfe6e419ca7081f70c21e974a2a4b4b532f63c4fb5a454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nightingale, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 09:08:11 compute-0 nova_compute[259627]: 2025-10-14 09:08:11.262 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:08:11 compute-0 nova_compute[259627]: 2025-10-14 09:08:11.263 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Ensure instance console log exists: /var/lib/nova/instances/9512792b-fd02-459a-8377-c2815c130684/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:08:11 compute-0 nova_compute[259627]: 2025-10-14 09:08:11.263 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:11 compute-0 nova_compute[259627]: 2025-10-14 09:08:11.264 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:11 compute-0 nova_compute[259627]: 2025-10-14 09:08:11.264 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Oct 14 09:08:11 compute-0 ceph-mon[74249]: pgmap v1595: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 438 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 8.5 MiB/s wr, 262 op/s
Oct 14 09:08:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Oct 14 09:08:11 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Oct 14 09:08:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 339 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 33 KiB/s wr, 82 op/s
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]: {
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:     "0": [
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:         {
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "devices": [
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "/dev/loop3"
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             ],
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "lv_name": "ceph_lv0",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "lv_size": "21470642176",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "name": "ceph_lv0",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "tags": {
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.cluster_name": "ceph",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.crush_device_class": "",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.encrypted": "0",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.osd_id": "0",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.type": "block",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.vdo": "0"
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             },
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "type": "block",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "vg_name": "ceph_vg0"
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:         }
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:     ],
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:     "1": [
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:         {
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "devices": [
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "/dev/loop4"
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             ],
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "lv_name": "ceph_lv1",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "lv_size": "21470642176",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "name": "ceph_lv1",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "tags": {
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.cluster_name": "ceph",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.crush_device_class": "",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.encrypted": "0",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.osd_id": "1",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.type": "block",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.vdo": "0"
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             },
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "type": "block",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "vg_name": "ceph_vg1"
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:         }
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:     ],
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:     "2": [
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:         {
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "devices": [
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "/dev/loop5"
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             ],
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "lv_name": "ceph_lv2",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "lv_size": "21470642176",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "name": "ceph_lv2",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "tags": {
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.cluster_name": "ceph",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.crush_device_class": "",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.encrypted": "0",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.osd_id": "2",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.type": "block",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:                 "ceph.vdo": "0"
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             },
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "type": "block",
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:             "vg_name": "ceph_vg2"
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:         }
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]:     ]
Oct 14 09:08:12 compute-0 hopeful_nightingale[330234]: }
Oct 14 09:08:12 compute-0 nova_compute[259627]: 2025-10-14 09:08:12.098 2 DEBUG nova.network.neutron [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Successfully created port: 1c4cb1d0-4a75-436b-965e-910994f941f6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:08:12 compute-0 systemd[1]: libpod-44d0c59f83c9faa91ecfe6e419ca7081f70c21e974a2a4b4b532f63c4fb5a454.scope: Deactivated successfully.
Oct 14 09:08:12 compute-0 podman[330181]: 2025-10-14 09:08:12.13584482 +0000 UTC m=+1.043457439 container died 44d0c59f83c9faa91ecfe6e419ca7081f70c21e974a2a4b4b532f63c4fb5a454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:08:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-abcb2b83750f548e1284412cdc1de5b8df5d849e041651f9a42b98a15f58d761-merged.mount: Deactivated successfully.
Oct 14 09:08:12 compute-0 podman[330181]: 2025-10-14 09:08:12.203768143 +0000 UTC m=+1.111380752 container remove 44d0c59f83c9faa91ecfe6e419ca7081f70c21e974a2a4b4b532f63c4fb5a454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nightingale, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 09:08:12 compute-0 systemd[1]: libpod-conmon-44d0c59f83c9faa91ecfe6e419ca7081f70c21e974a2a4b4b532f63c4fb5a454.scope: Deactivated successfully.
Oct 14 09:08:12 compute-0 sudo[329957]: pam_unix(sudo:session): session closed for user root
Oct 14 09:08:12 compute-0 sudo[330274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:08:12 compute-0 sudo[330274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:08:12 compute-0 sudo[330274]: pam_unix(sudo:session): session closed for user root
Oct 14 09:08:12 compute-0 nova_compute[259627]: 2025-10-14 09:08:12.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:12 compute-0 sudo[330299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:08:12 compute-0 sudo[330299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:08:12 compute-0 sudo[330299]: pam_unix(sudo:session): session closed for user root
Oct 14 09:08:12 compute-0 sudo[330324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:08:12 compute-0 sudo[330324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:08:12 compute-0 sudo[330324]: pam_unix(sudo:session): session closed for user root
Oct 14 09:08:12 compute-0 nova_compute[259627]: 2025-10-14 09:08:12.580 2 DEBUG nova.compute.manager [req-84e6100c-6718-4acf-aa57-72982b3ba251 req-2880fcd9-d244-4af6-8765-2df8322db760 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received event network-vif-plugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:12 compute-0 nova_compute[259627]: 2025-10-14 09:08:12.580 2 DEBUG oslo_concurrency.lockutils [req-84e6100c-6718-4acf-aa57-72982b3ba251 req-2880fcd9-d244-4af6-8765-2df8322db760 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:12 compute-0 nova_compute[259627]: 2025-10-14 09:08:12.581 2 DEBUG oslo_concurrency.lockutils [req-84e6100c-6718-4acf-aa57-72982b3ba251 req-2880fcd9-d244-4af6-8765-2df8322db760 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:12 compute-0 nova_compute[259627]: 2025-10-14 09:08:12.581 2 DEBUG oslo_concurrency.lockutils [req-84e6100c-6718-4acf-aa57-72982b3ba251 req-2880fcd9-d244-4af6-8765-2df8322db760 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:12 compute-0 nova_compute[259627]: 2025-10-14 09:08:12.581 2 DEBUG nova.compute.manager [req-84e6100c-6718-4acf-aa57-72982b3ba251 req-2880fcd9-d244-4af6-8765-2df8322db760 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] No waiting events found dispatching network-vif-plugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:08:12 compute-0 nova_compute[259627]: 2025-10-14 09:08:12.581 2 WARNING nova.compute.manager [req-84e6100c-6718-4acf-aa57-72982b3ba251 req-2880fcd9-d244-4af6-8765-2df8322db760 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received unexpected event network-vif-plugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 for instance with vm_state active and task_state deleting.
Oct 14 09:08:12 compute-0 sudo[330349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:08:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Oct 14 09:08:12 compute-0 ceph-mon[74249]: osdmap e237: 3 total, 3 up, 3 in
Oct 14 09:08:12 compute-0 sudo[330349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:08:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Oct 14 09:08:12 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Oct 14 09:08:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:08:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Oct 14 09:08:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Oct 14 09:08:12 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Oct 14 09:08:12 compute-0 nova_compute[259627]: 2025-10-14 09:08:12.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:08:12 compute-0 nova_compute[259627]: 2025-10-14 09:08:12.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 09:08:13 compute-0 podman[330418]: 2025-10-14 09:08:13.155123422 +0000 UTC m=+0.072433754 container create ae62a4ba604867d11a1575132e08b08411528fed824e286011b694803dc581d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bose, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 09:08:13 compute-0 systemd[1]: Started libpod-conmon-ae62a4ba604867d11a1575132e08b08411528fed824e286011b694803dc581d1.scope.
Oct 14 09:08:13 compute-0 podman[330418]: 2025-10-14 09:08:13.12661162 +0000 UTC m=+0.043922012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:08:13 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:08:13 compute-0 podman[330418]: 2025-10-14 09:08:13.259215086 +0000 UTC m=+0.176525458 container init ae62a4ba604867d11a1575132e08b08411528fed824e286011b694803dc581d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bose, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:08:13 compute-0 podman[330418]: 2025-10-14 09:08:13.268247438 +0000 UTC m=+0.185557770 container start ae62a4ba604867d11a1575132e08b08411528fed824e286011b694803dc581d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:08:13 compute-0 podman[330418]: 2025-10-14 09:08:13.272684118 +0000 UTC m=+0.189994500 container attach ae62a4ba604867d11a1575132e08b08411528fed824e286011b694803dc581d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:08:13 compute-0 zen_bose[330434]: 167 167
Oct 14 09:08:13 compute-0 systemd[1]: libpod-ae62a4ba604867d11a1575132e08b08411528fed824e286011b694803dc581d1.scope: Deactivated successfully.
Oct 14 09:08:13 compute-0 podman[330418]: 2025-10-14 09:08:13.279089545 +0000 UTC m=+0.196399907 container died ae62a4ba604867d11a1575132e08b08411528fed824e286011b694803dc581d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 09:08:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-405d2089b4f3d21fb561626d20247e354995d2206fe8bcceaefa47b3d8c2e42d-merged.mount: Deactivated successfully.
Oct 14 09:08:13 compute-0 podman[330418]: 2025-10-14 09:08:13.339121444 +0000 UTC m=+0.256431766 container remove ae62a4ba604867d11a1575132e08b08411528fed824e286011b694803dc581d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bose, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:08:13 compute-0 systemd[1]: libpod-conmon-ae62a4ba604867d11a1575132e08b08411528fed824e286011b694803dc581d1.scope: Deactivated successfully.
Oct 14 09:08:13 compute-0 podman[330458]: 2025-10-14 09:08:13.557151113 +0000 UTC m=+0.045005149 container create 5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 09:08:13 compute-0 systemd[1]: Started libpod-conmon-5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6.scope.
Oct 14 09:08:13 compute-0 podman[330458]: 2025-10-14 09:08:13.538929455 +0000 UTC m=+0.026783531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:08:13 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:08:13 compute-0 ceph-mon[74249]: pgmap v1597: 305 pgs: 305 active+clean; 339 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 33 KiB/s wr, 82 op/s
Oct 14 09:08:13 compute-0 ceph-mon[74249]: osdmap e238: 3 total, 3 up, 3 in
Oct 14 09:08:13 compute-0 ceph-mon[74249]: osdmap e239: 3 total, 3 up, 3 in
Oct 14 09:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2153f7690c5386b0dcdd623d84639df483eb75202d3d19957e138eb75c00876e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2153f7690c5386b0dcdd623d84639df483eb75202d3d19957e138eb75c00876e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2153f7690c5386b0dcdd623d84639df483eb75202d3d19957e138eb75c00876e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2153f7690c5386b0dcdd623d84639df483eb75202d3d19957e138eb75c00876e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:08:13 compute-0 podman[330458]: 2025-10-14 09:08:13.66543262 +0000 UTC m=+0.153286736 container init 5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 09:08:13 compute-0 podman[330458]: 2025-10-14 09:08:13.672539695 +0000 UTC m=+0.160393721 container start 5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:08:13 compute-0 podman[330458]: 2025-10-14 09:08:13.675883847 +0000 UTC m=+0.163737923 container attach 5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:08:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 339 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 44 KiB/s wr, 109 op/s
Oct 14 09:08:13 compute-0 nova_compute[259627]: 2025-10-14 09:08:13.957 2 DEBUG nova.network.neutron [-] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:08:13 compute-0 nova_compute[259627]: 2025-10-14 09:08:13.977 2 INFO nova.compute.manager [-] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Took 2.97 seconds to deallocate network for instance.
Oct 14 09:08:14 compute-0 nova_compute[259627]: 2025-10-14 09:08:14.030 2 DEBUG oslo_concurrency.lockutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:14 compute-0 nova_compute[259627]: 2025-10-14 09:08:14.031 2 DEBUG oslo_concurrency.lockutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:14 compute-0 nova_compute[259627]: 2025-10-14 09:08:14.069 2 DEBUG nova.compute.manager [req-391e3273-d6cf-4ac3-b7da-854de9da06d7 req-7de161ba-b6f5-4d16-bfab-c8a135290f17 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received event network-vif-deleted-2d7c67a0-10d0-4de1-a430-60e038fcf537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:14 compute-0 nova_compute[259627]: 2025-10-14 09:08:14.106 2 DEBUG oslo_concurrency.processutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:14 compute-0 nova_compute[259627]: 2025-10-14 09:08:14.295 2 DEBUG nova.network.neutron [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Successfully updated port: 1c4cb1d0-4a75-436b-965e-910994f941f6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:08:14 compute-0 nova_compute[259627]: 2025-10-14 09:08:14.318 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "refresh_cache-9512792b-fd02-459a-8377-c2815c130684" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:08:14 compute-0 nova_compute[259627]: 2025-10-14 09:08:14.319 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquired lock "refresh_cache-9512792b-fd02-459a-8377-c2815c130684" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:08:14 compute-0 nova_compute[259627]: 2025-10-14 09:08:14.319 2 DEBUG nova.network.neutron [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:08:14 compute-0 nova_compute[259627]: 2025-10-14 09:08:14.509 2 DEBUG nova.network.neutron [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:08:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:08:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1454732831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:14 compute-0 nova_compute[259627]: 2025-10-14 09:08:14.557 2 DEBUG oslo_concurrency.processutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:14 compute-0 nova_compute[259627]: 2025-10-14 09:08:14.566 2 DEBUG nova.compute.provider_tree [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:08:14 compute-0 nova_compute[259627]: 2025-10-14 09:08:14.586 2 DEBUG nova.scheduler.client.report [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:08:14 compute-0 nova_compute[259627]: 2025-10-14 09:08:14.611 2 DEBUG oslo_concurrency.lockutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:14 compute-0 ceph-mon[74249]: pgmap v1600: 305 pgs: 305 active+clean; 339 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 44 KiB/s wr, 109 op/s
Oct 14 09:08:14 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1454732831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:14 compute-0 nova_compute[259627]: 2025-10-14 09:08:14.650 2 INFO nova.scheduler.client.report [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Deleted allocations for instance 8a58a504-85a5-44e6-b815-99abb4ca2fc8
Oct 14 09:08:14 compute-0 serene_gates[330474]: {
Oct 14 09:08:14 compute-0 serene_gates[330474]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:08:14 compute-0 serene_gates[330474]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:08:14 compute-0 serene_gates[330474]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:08:14 compute-0 serene_gates[330474]:         "osd_id": 2,
Oct 14 09:08:14 compute-0 serene_gates[330474]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:08:14 compute-0 serene_gates[330474]:         "type": "bluestore"
Oct 14 09:08:14 compute-0 serene_gates[330474]:     },
Oct 14 09:08:14 compute-0 serene_gates[330474]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:08:14 compute-0 serene_gates[330474]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:08:14 compute-0 serene_gates[330474]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:08:14 compute-0 serene_gates[330474]:         "osd_id": 1,
Oct 14 09:08:14 compute-0 serene_gates[330474]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:08:14 compute-0 serene_gates[330474]:         "type": "bluestore"
Oct 14 09:08:14 compute-0 serene_gates[330474]:     },
Oct 14 09:08:14 compute-0 serene_gates[330474]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:08:14 compute-0 serene_gates[330474]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:08:14 compute-0 serene_gates[330474]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:08:14 compute-0 serene_gates[330474]:         "osd_id": 0,
Oct 14 09:08:14 compute-0 serene_gates[330474]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:08:14 compute-0 serene_gates[330474]:         "type": "bluestore"
Oct 14 09:08:14 compute-0 serene_gates[330474]:     }
Oct 14 09:08:14 compute-0 serene_gates[330474]: }
Oct 14 09:08:14 compute-0 nova_compute[259627]: 2025-10-14 09:08:14.718 2 DEBUG oslo_concurrency.lockutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:14 compute-0 systemd[1]: libpod-5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6.scope: Deactivated successfully.
Oct 14 09:08:14 compute-0 systemd[1]: libpod-5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6.scope: Consumed 1.084s CPU time.
Oct 14 09:08:14 compute-0 podman[330458]: 2025-10-14 09:08:14.762109198 +0000 UTC m=+1.249963244 container died 5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:08:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2153f7690c5386b0dcdd623d84639df483eb75202d3d19957e138eb75c00876e-merged.mount: Deactivated successfully.
Oct 14 09:08:14 compute-0 podman[330458]: 2025-10-14 09:08:14.843282827 +0000 UTC m=+1.331136883 container remove 5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:08:14 compute-0 systemd[1]: libpod-conmon-5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6.scope: Deactivated successfully.
Oct 14 09:08:14 compute-0 sudo[330349]: pam_unix(sudo:session): session closed for user root
Oct 14 09:08:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:08:14 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:08:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:08:14 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:08:14 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev eb7ae002-7cd9-49ec-aecf-1ebfb1887d68 does not exist
Oct 14 09:08:14 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 1bd5afc2-1141-42f6-b8d1-c9a2ebae3473 does not exist
Oct 14 09:08:15 compute-0 sudo[330542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:08:15 compute-0 sudo[330542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:08:15 compute-0 sudo[330542]: pam_unix(sudo:session): session closed for user root
Oct 14 09:08:15 compute-0 sudo[330567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:08:15 compute-0 sudo[330567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:08:15 compute-0 sudo[330567]: pam_unix(sudo:session): session closed for user root
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.584 2 DEBUG nova.network.neutron [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Updating instance_info_cache with network_info: [{"id": "1c4cb1d0-4a75-436b-965e-910994f941f6", "address": "fa:16:3e:3c:c5:30", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c4cb1d0-4a", "ovs_interfaceid": "1c4cb1d0-4a75-436b-965e-910994f941f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.606 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Releasing lock "refresh_cache-9512792b-fd02-459a-8377-c2815c130684" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.606 2 DEBUG nova.compute.manager [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Instance network_info: |[{"id": "1c4cb1d0-4a75-436b-965e-910994f941f6", "address": "fa:16:3e:3c:c5:30", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c4cb1d0-4a", "ovs_interfaceid": "1c4cb1d0-4a75-436b-965e-910994f941f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.608 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Start _get_guest_xml network_info=[{"id": "1c4cb1d0-4a75-436b-965e-910994f941f6", "address": "fa:16:3e:3c:c5:30", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c4cb1d0-4a", "ovs_interfaceid": "1c4cb1d0-4a75-436b-965e-910994f941f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.613 2 WARNING nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.620 2 DEBUG nova.virt.libvirt.host [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.621 2 DEBUG nova.virt.libvirt.host [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.624 2 DEBUG nova.virt.libvirt.host [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.625 2 DEBUG nova.virt.libvirt.host [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.625 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.625 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.625 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.626 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.626 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.626 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.626 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.626 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.626 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.627 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.627 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.627 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:08:15 compute-0 nova_compute[259627]: 2025-10-14 09:08:15.630 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 167 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 188 KiB/s rd, 3.6 MiB/s wr, 281 op/s
Oct 14 09:08:15 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:08:15 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:08:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:08:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2190284725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.055 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.087 2 DEBUG nova.storage.rbd_utils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 9512792b-fd02-459a-8377-c2815c130684_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.091 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.261 2 DEBUG nova.compute.manager [req-a9d7873f-29cf-44a5-808c-81497367085e req-04701e68-af69-4176-8e6f-cf011f6533b9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Received event network-changed-1c4cb1d0-4a75-436b-965e-910994f941f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.262 2 DEBUG nova.compute.manager [req-a9d7873f-29cf-44a5-808c-81497367085e req-04701e68-af69-4176-8e6f-cf011f6533b9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Refreshing instance network info cache due to event network-changed-1c4cb1d0-4a75-436b-965e-910994f941f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.262 2 DEBUG oslo_concurrency.lockutils [req-a9d7873f-29cf-44a5-808c-81497367085e req-04701e68-af69-4176-8e6f-cf011f6533b9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9512792b-fd02-459a-8377-c2815c130684" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:08:16 compute-0 rsyslogd[1002]: imjournal from <np0005486808:ceph-mon>: begin to drop messages due to rate-limiting
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.262 2 DEBUG oslo_concurrency.lockutils [req-a9d7873f-29cf-44a5-808c-81497367085e req-04701e68-af69-4176-8e6f-cf011f6533b9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9512792b-fd02-459a-8377-c2815c130684" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.263 2 DEBUG nova.network.neutron [req-a9d7873f-29cf-44a5-808c-81497367085e req-04701e68-af69-4176-8e6f-cf011f6533b9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Refreshing network info cache for port 1c4cb1d0-4a75-436b-965e-910994f941f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:08:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:08:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2921368215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.532 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.534 2 DEBUG nova.virt.libvirt.vif [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:08:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1229366743',display_name='tempest-DeleteServersTestJSON-server-1229366743',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1229366743',id=67,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-gdrhmtlp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285
866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:08:10Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=9512792b-fd02-459a-8377-c2815c130684,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c4cb1d0-4a75-436b-965e-910994f941f6", "address": "fa:16:3e:3c:c5:30", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c4cb1d0-4a", "ovs_interfaceid": "1c4cb1d0-4a75-436b-965e-910994f941f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.534 2 DEBUG nova.network.os_vif_util [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "1c4cb1d0-4a75-436b-965e-910994f941f6", "address": "fa:16:3e:3c:c5:30", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c4cb1d0-4a", "ovs_interfaceid": "1c4cb1d0-4a75-436b-965e-910994f941f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.536 2 DEBUG nova.network.os_vif_util [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:c5:30,bridge_name='br-int',has_traffic_filtering=True,id=1c4cb1d0-4a75-436b-965e-910994f941f6,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c4cb1d0-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.539 2 DEBUG nova.objects.instance [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9512792b-fd02-459a-8377-c2815c130684 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.565 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:08:16 compute-0 nova_compute[259627]:   <uuid>9512792b-fd02-459a-8377-c2815c130684</uuid>
Oct 14 09:08:16 compute-0 nova_compute[259627]:   <name>instance-00000043</name>
Oct 14 09:08:16 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:08:16 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:08:16 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <nova:name>tempest-DeleteServersTestJSON-server-1229366743</nova:name>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:08:15</nova:creationTime>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:08:16 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:08:16 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:08:16 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:08:16 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:08:16 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:08:16 compute-0 nova_compute[259627]:         <nova:user uuid="a72439ec330b476ca4bb358682159b61">tempest-DeleteServersTestJSON-555285866-project-member</nova:user>
Oct 14 09:08:16 compute-0 nova_compute[259627]:         <nova:project uuid="d39581efff7d48fb83412ca1f615d412">tempest-DeleteServersTestJSON-555285866</nova:project>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:08:16 compute-0 nova_compute[259627]:         <nova:port uuid="1c4cb1d0-4a75-436b-965e-910994f941f6">
Oct 14 09:08:16 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:08:16 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:08:16 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <system>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <entry name="serial">9512792b-fd02-459a-8377-c2815c130684</entry>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <entry name="uuid">9512792b-fd02-459a-8377-c2815c130684</entry>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     </system>
Oct 14 09:08:16 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:08:16 compute-0 nova_compute[259627]:   <os>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:   </os>
Oct 14 09:08:16 compute-0 nova_compute[259627]:   <features>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:   </features>
Oct 14 09:08:16 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:08:16 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:08:16 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/9512792b-fd02-459a-8377-c2815c130684_disk">
Oct 14 09:08:16 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       </source>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:08:16 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/9512792b-fd02-459a-8377-c2815c130684_disk.config">
Oct 14 09:08:16 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       </source>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:08:16 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:3c:c5:30"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <target dev="tap1c4cb1d0-4a"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/9512792b-fd02-459a-8377-c2815c130684/console.log" append="off"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <video>
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     </video>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:08:16 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:08:16 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:08:16 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:08:16 compute-0 nova_compute[259627]: </domain>
Oct 14 09:08:16 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.566 2 DEBUG nova.compute.manager [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Preparing to wait for external event network-vif-plugged-1c4cb1d0-4a75-436b-965e-910994f941f6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.567 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "9512792b-fd02-459a-8377-c2815c130684-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.567 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "9512792b-fd02-459a-8377-c2815c130684-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.567 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "9512792b-fd02-459a-8377-c2815c130684-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.568 2 DEBUG nova.virt.libvirt.vif [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:08:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1229366743',display_name='tempest-DeleteServersTestJSON-server-1229366743',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1229366743',id=67,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-gdrhmtlp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:08:10Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=9512792b-fd02-459a-8377-c2815c130684,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c4cb1d0-4a75-436b-965e-910994f941f6", "address": "fa:16:3e:3c:c5:30", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c4cb1d0-4a", "ovs_interfaceid": "1c4cb1d0-4a75-436b-965e-910994f941f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.569 2 DEBUG nova.network.os_vif_util [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "1c4cb1d0-4a75-436b-965e-910994f941f6", "address": "fa:16:3e:3c:c5:30", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c4cb1d0-4a", "ovs_interfaceid": "1c4cb1d0-4a75-436b-965e-910994f941f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.570 2 DEBUG nova.network.os_vif_util [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:c5:30,bridge_name='br-int',has_traffic_filtering=True,id=1c4cb1d0-4a75-436b-965e-910994f941f6,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c4cb1d0-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.570 2 DEBUG os_vif [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:c5:30,bridge_name='br-int',has_traffic_filtering=True,id=1c4cb1d0-4a75-436b-965e-910994f941f6,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c4cb1d0-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.572 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.572 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c4cb1d0-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c4cb1d0-4a, col_values=(('external_ids', {'iface-id': '1c4cb1d0-4a75-436b-965e-910994f941f6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:c5:30', 'vm-uuid': '9512792b-fd02-459a-8377-c2815c130684'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:16 compute-0 NetworkManager[44885]: <info>  [1760432896.5819] manager: (tap1c4cb1d0-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.592 2 INFO os_vif [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:c5:30,bridge_name='br-int',has_traffic_filtering=True,id=1c4cb1d0-4a75-436b-965e-910994f941f6,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c4cb1d0-4a')
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.658 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.658 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.659 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No VIF found with MAC fa:16:3e:3c:c5:30, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.659 2 INFO nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Using config drive
Oct 14 09:08:16 compute-0 nova_compute[259627]: 2025-10-14 09:08:16.679 2 DEBUG nova.storage.rbd_utils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 9512792b-fd02-459a-8377-c2815c130684_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:16 compute-0 ceph-mon[74249]: pgmap v1601: 305 pgs: 305 active+clean; 167 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 188 KiB/s rd, 3.6 MiB/s wr, 281 op/s
Oct 14 09:08:16 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2190284725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:08:16 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2921368215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:08:17 compute-0 nova_compute[259627]: 2025-10-14 09:08:17.101 2 INFO nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Creating config drive at /var/lib/nova/instances/9512792b-fd02-459a-8377-c2815c130684/disk.config
Oct 14 09:08:17 compute-0 nova_compute[259627]: 2025-10-14 09:08:17.107 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9512792b-fd02-459a-8377-c2815c130684/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ivf7cvx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:17 compute-0 nova_compute[259627]: 2025-10-14 09:08:17.267 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9512792b-fd02-459a-8377-c2815c130684/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ivf7cvx" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:17 compute-0 nova_compute[259627]: 2025-10-14 09:08:17.309 2 DEBUG nova.storage.rbd_utils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 9512792b-fd02-459a-8377-c2815c130684_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:17 compute-0 nova_compute[259627]: 2025-10-14 09:08:17.314 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9512792b-fd02-459a-8377-c2815c130684/disk.config 9512792b-fd02-459a-8377-c2815c130684_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:17 compute-0 nova_compute[259627]: 2025-10-14 09:08:17.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:17 compute-0 nova_compute[259627]: 2025-10-14 09:08:17.512 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9512792b-fd02-459a-8377-c2815c130684/disk.config 9512792b-fd02-459a-8377-c2815c130684_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:17 compute-0 nova_compute[259627]: 2025-10-14 09:08:17.513 2 INFO nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Deleting local config drive /var/lib/nova/instances/9512792b-fd02-459a-8377-c2815c130684/disk.config because it was imported into RBD.
Oct 14 09:08:17 compute-0 kernel: tap1c4cb1d0-4a: entered promiscuous mode
Oct 14 09:08:17 compute-0 NetworkManager[44885]: <info>  [1760432897.5849] manager: (tap1c4cb1d0-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/288)
Oct 14 09:08:17 compute-0 nova_compute[259627]: 2025-10-14 09:08:17.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:17 compute-0 ovn_controller[152662]: 2025-10-14T09:08:17Z|00674|binding|INFO|Claiming lport 1c4cb1d0-4a75-436b-965e-910994f941f6 for this chassis.
Oct 14 09:08:17 compute-0 ovn_controller[152662]: 2025-10-14T09:08:17Z|00675|binding|INFO|1c4cb1d0-4a75-436b-965e-910994f941f6: Claiming fa:16:3e:3c:c5:30 10.100.0.10
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.595 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:c5:30 10.100.0.10'], port_security=['fa:16:3e:3c:c5:30 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9512792b-fd02-459a-8377-c2815c130684', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd39581efff7d48fb83412ca1f615d412', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd5e08c6-d8c1-45fb-85db-6ce5c46fea39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c165cf5-3bad-4110-8321-7f7bda9723ad, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1c4cb1d0-4a75-436b-965e-910994f941f6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.598 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1c4cb1d0-4a75-436b-965e-910994f941f6 in datapath 0a07d59e-be8b-4d41-a103-fb5a64bf6f88 bound to our chassis
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.600 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.613 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ef78e9-7b82-4d71-afe7-d7c2d08305a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.615 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a07d59e-b1 in ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:08:17 compute-0 ovn_controller[152662]: 2025-10-14T09:08:17Z|00676|binding|INFO|Setting lport 1c4cb1d0-4a75-436b-965e-910994f941f6 ovn-installed in OVS
Oct 14 09:08:17 compute-0 ovn_controller[152662]: 2025-10-14T09:08:17Z|00677|binding|INFO|Setting lport 1c4cb1d0-4a75-436b-965e-910994f941f6 up in Southbound
Oct 14 09:08:17 compute-0 nova_compute[259627]: 2025-10-14 09:08:17.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.617 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a07d59e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.617 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b2f7d4-22bd-47b9-a163-8ad5103257fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.619 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a7ad47-06c1-4f6a-b7a5-26c886032f0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:17 compute-0 nova_compute[259627]: 2025-10-14 09:08:17.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.633 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[47e92f87-8c50-4004-b149-5ae433523d61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:17 compute-0 systemd-machined[214636]: New machine qemu-81-instance-00000043.
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.651 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae3ecda-5064-47fa-ac4f-9af302fcff2f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:17 compute-0 systemd[1]: Started Virtual Machine qemu-81-instance-00000043.
Oct 14 09:08:17 compute-0 systemd-udevd[330732]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.695 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8e4c17-5afe-43ae-a109-3ded0392cd1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:17 compute-0 NetworkManager[44885]: <info>  [1760432897.6980] device (tap1c4cb1d0-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:08:17 compute-0 NetworkManager[44885]: <info>  [1760432897.6997] device (tap1c4cb1d0-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:08:17 compute-0 NetworkManager[44885]: <info>  [1760432897.7028] manager: (tap0a07d59e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/289)
Oct 14 09:08:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:08:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Oct 14 09:08:17 compute-0 systemd-udevd[330735]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.700 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d3b39443-3d61-462a-8322-70e701ae920e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Oct 14 09:08:17 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.734 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[11320414-a150-402a-bc00-379e5fdf46da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.737 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4eddc8ec-3bc2-4b45-b00a-868f4d613862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:17 compute-0 NetworkManager[44885]: <info>  [1760432897.7648] device (tap0a07d59e-b0): carrier: link connected
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.772 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b28a56cc-06b2-4fc0-b920-e6140973bd91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 167 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 132 KiB/s rd, 3.6 MiB/s wr, 199 op/s
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.800 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[044d7bab-9270-446d-a0f5-e162d19a2f70]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a07d59e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2e:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667652, 'reachable_time': 19984, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330760, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.821 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[31d4a1f9-8654-4235-9266-a4be10b4bc0e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:2e1a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667652, 'tstamp': 667652}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330761, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.847 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0babde-a368-4a82-96d5-5a5a6560f4e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a07d59e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2e:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667652, 'reachable_time': 19984, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 330762, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.885 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2fc7394c-7f49-411b-91f9-8801587a6fe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.966 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[335a40c9-5aec-4cc4-9978-6a4b50204bb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.967 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a07d59e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.967 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.968 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a07d59e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:17 compute-0 nova_compute[259627]: 2025-10-14 09:08:17.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:17 compute-0 NetworkManager[44885]: <info>  [1760432897.9702] manager: (tap0a07d59e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/290)
Oct 14 09:08:17 compute-0 kernel: tap0a07d59e-b0: entered promiscuous mode
Oct 14 09:08:17 compute-0 nova_compute[259627]: 2025-10-14 09:08:17.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.978 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a07d59e-b0, col_values=(('external_ids', {'iface-id': '31ed66d8-7c3d-4486-83f3-5ccb9a199aa1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:17 compute-0 nova_compute[259627]: 2025-10-14 09:08:17.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:17 compute-0 ovn_controller[152662]: 2025-10-14T09:08:17Z|00678|binding|INFO|Releasing lport 31ed66d8-7c3d-4486-83f3-5ccb9a199aa1 from this chassis (sb_readonly=0)
Oct 14 09:08:17 compute-0 nova_compute[259627]: 2025-10-14 09:08:17.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.980 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.981 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd9cf3d-338e-4b6c-b4fd-6bc5101e0a41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.982 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:08:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:17.983 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'env', 'PROCESS_TAG=haproxy-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:08:17 compute-0 nova_compute[259627]: 2025-10-14 09:08:17.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.128 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432883.1267385, 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.128 2 INFO nova.compute.manager [-] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] VM Stopped (Lifecycle Event)
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.170 2 DEBUG nova.compute.manager [None req-bff29091-d339-46da-8b3e-1c8a73fdc5f4 - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.376 2 DEBUG nova.compute.manager [req-f9cf47fa-13f0-4acf-b6e9-912f264c3145 req-b6c576ed-8686-40eb-b6bb-9cc19863ae38 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Received event network-vif-plugged-1c4cb1d0-4a75-436b-965e-910994f941f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.377 2 DEBUG oslo_concurrency.lockutils [req-f9cf47fa-13f0-4acf-b6e9-912f264c3145 req-b6c576ed-8686-40eb-b6bb-9cc19863ae38 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9512792b-fd02-459a-8377-c2815c130684-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.378 2 DEBUG oslo_concurrency.lockutils [req-f9cf47fa-13f0-4acf-b6e9-912f264c3145 req-b6c576ed-8686-40eb-b6bb-9cc19863ae38 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9512792b-fd02-459a-8377-c2815c130684-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.379 2 DEBUG oslo_concurrency.lockutils [req-f9cf47fa-13f0-4acf-b6e9-912f264c3145 req-b6c576ed-8686-40eb-b6bb-9cc19863ae38 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9512792b-fd02-459a-8377-c2815c130684-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.379 2 DEBUG nova.compute.manager [req-f9cf47fa-13f0-4acf-b6e9-912f264c3145 req-b6c576ed-8686-40eb-b6bb-9cc19863ae38 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Processing event network-vif-plugged-1c4cb1d0-4a75-436b-965e-910994f941f6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:08:18 compute-0 podman[330835]: 2025-10-14 09:08:18.422825341 +0000 UTC m=+0.067119334 container create 802da8df677c41370850e2ad9c48358291d685bc626ee5aa05b8b8e48afb5cc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:08:18 compute-0 systemd[1]: Started libpod-conmon-802da8df677c41370850e2ad9c48358291d685bc626ee5aa05b8b8e48afb5cc1.scope.
Oct 14 09:08:18 compute-0 podman[330835]: 2025-10-14 09:08:18.393491049 +0000 UTC m=+0.037785092 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:08:18 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:08:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b94aa42d3f4937c466f2cfcc3b721652aceadaae275cde84f3f32e5ffbbc78a4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:08:18 compute-0 podman[330835]: 2025-10-14 09:08:18.523775877 +0000 UTC m=+0.168069950 container init 802da8df677c41370850e2ad9c48358291d685bc626ee5aa05b8b8e48afb5cc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:08:18 compute-0 podman[330835]: 2025-10-14 09:08:18.532359389 +0000 UTC m=+0.176653412 container start 802da8df677c41370850e2ad9c48358291d685bc626ee5aa05b8b8e48afb5cc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 09:08:18 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[330850]: [NOTICE]   (330854) : New worker (330856) forked
Oct 14 09:08:18 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[330850]: [NOTICE]   (330854) : Loading success.
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.576 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432898.5758305, 9512792b-fd02-459a-8377-c2815c130684 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.577 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9512792b-fd02-459a-8377-c2815c130684] VM Started (Lifecycle Event)
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.581 2 DEBUG nova.compute.manager [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.588 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.593 2 INFO nova.virt.libvirt.driver [-] [instance: 9512792b-fd02-459a-8377-c2815c130684] Instance spawned successfully.
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.593 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.600 2 DEBUG nova.network.neutron [req-a9d7873f-29cf-44a5-808c-81497367085e req-04701e68-af69-4176-8e6f-cf011f6533b9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Updated VIF entry in instance network info cache for port 1c4cb1d0-4a75-436b-965e-910994f941f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.601 2 DEBUG nova.network.neutron [req-a9d7873f-29cf-44a5-808c-81497367085e req-04701e68-af69-4176-8e6f-cf011f6533b9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Updating instance_info_cache with network_info: [{"id": "1c4cb1d0-4a75-436b-965e-910994f941f6", "address": "fa:16:3e:3c:c5:30", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c4cb1d0-4a", "ovs_interfaceid": "1c4cb1d0-4a75-436b-965e-910994f941f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.626 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9512792b-fd02-459a-8377-c2815c130684] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.627 2 DEBUG oslo_concurrency.lockutils [req-a9d7873f-29cf-44a5-808c-81497367085e req-04701e68-af69-4176-8e6f-cf011f6533b9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9512792b-fd02-459a-8377-c2815c130684" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.635 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9512792b-fd02-459a-8377-c2815c130684] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.644 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.645 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.646 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.646 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.647 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.648 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.669 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9512792b-fd02-459a-8377-c2815c130684] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.670 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432898.5761492, 9512792b-fd02-459a-8377-c2815c130684 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.670 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9512792b-fd02-459a-8377-c2815c130684] VM Paused (Lifecycle Event)
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.691 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9512792b-fd02-459a-8377-c2815c130684] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.694 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432898.5894856, 9512792b-fd02-459a-8377-c2815c130684 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.694 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9512792b-fd02-459a-8377-c2815c130684] VM Resumed (Lifecycle Event)
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.698 2 INFO nova.compute.manager [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Took 8.09 seconds to spawn the instance on the hypervisor.
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.698 2 DEBUG nova.compute.manager [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.717 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9512792b-fd02-459a-8377-c2815c130684] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:18 compute-0 ceph-mon[74249]: osdmap e240: 3 total, 3 up, 3 in
Oct 14 09:08:18 compute-0 ceph-mon[74249]: pgmap v1603: 305 pgs: 305 active+clean; 167 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 132 KiB/s rd, 3.6 MiB/s wr, 199 op/s
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.726 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9512792b-fd02-459a-8377-c2815c130684] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.752 2 INFO nova.compute.manager [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Took 9.14 seconds to build instance.
Oct 14 09:08:18 compute-0 nova_compute[259627]: 2025-10-14 09:08:18.765 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "9512792b-fd02-459a-8377-c2815c130684" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 167 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 111 KiB/s rd, 3.0 MiB/s wr, 167 op/s
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.436 2 DEBUG nova.objects.instance [None req-5dd5de0d-d896-4e91-b770-923cca3f218e a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9512792b-fd02-459a-8377-c2815c130684 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.461 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432900.4610863, 9512792b-fd02-459a-8377-c2815c130684 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.461 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9512792b-fd02-459a-8377-c2815c130684] VM Paused (Lifecycle Event)
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.487 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9512792b-fd02-459a-8377-c2815c130684] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.491 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9512792b-fd02-459a-8377-c2815c130684] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.507 2 DEBUG nova.compute.manager [req-0836d3a1-1eb6-4e3b-8c7d-4f427d788f8a req-d8b518b2-3123-4008-ac96-f801d3120121 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Received event network-vif-plugged-1c4cb1d0-4a75-436b-965e-910994f941f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.507 2 DEBUG oslo_concurrency.lockutils [req-0836d3a1-1eb6-4e3b-8c7d-4f427d788f8a req-d8b518b2-3123-4008-ac96-f801d3120121 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9512792b-fd02-459a-8377-c2815c130684-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.507 2 DEBUG oslo_concurrency.lockutils [req-0836d3a1-1eb6-4e3b-8c7d-4f427d788f8a req-d8b518b2-3123-4008-ac96-f801d3120121 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9512792b-fd02-459a-8377-c2815c130684-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.507 2 DEBUG oslo_concurrency.lockutils [req-0836d3a1-1eb6-4e3b-8c7d-4f427d788f8a req-d8b518b2-3123-4008-ac96-f801d3120121 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9512792b-fd02-459a-8377-c2815c130684-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.508 2 DEBUG nova.compute.manager [req-0836d3a1-1eb6-4e3b-8c7d-4f427d788f8a req-d8b518b2-3123-4008-ac96-f801d3120121 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] No waiting events found dispatching network-vif-plugged-1c4cb1d0-4a75-436b-965e-910994f941f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.508 2 WARNING nova.compute.manager [req-0836d3a1-1eb6-4e3b-8c7d-4f427d788f8a req-d8b518b2-3123-4008-ac96-f801d3120121 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Received unexpected event network-vif-plugged-1c4cb1d0-4a75-436b-965e-910994f941f6 for instance with vm_state active and task_state suspending.
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.521 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9512792b-fd02-459a-8377-c2815c130684] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.636 2 DEBUG oslo_concurrency.lockutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "7167ef21-b041-47b9-8d93-55b5853f4d01" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.637 2 DEBUG oslo_concurrency.lockutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "7167ef21-b041-47b9-8d93-55b5853f4d01" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.656 2 DEBUG nova.compute.manager [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.832 2 DEBUG oslo_concurrency.lockutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.833 2 DEBUG oslo_concurrency.lockutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.842 2 DEBUG nova.virt.hardware [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.842 2 INFO nova.compute.claims [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:08:20 compute-0 ceph-mon[74249]: pgmap v1604: 305 pgs: 305 active+clean; 167 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 111 KiB/s rd, 3.0 MiB/s wr, 167 op/s
Oct 14 09:08:20 compute-0 kernel: tap1c4cb1d0-4a (unregistering): left promiscuous mode
Oct 14 09:08:20 compute-0 NetworkManager[44885]: <info>  [1760432900.9372] device (tap1c4cb1d0-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:20 compute-0 ovn_controller[152662]: 2025-10-14T09:08:20Z|00679|binding|INFO|Releasing lport 1c4cb1d0-4a75-436b-965e-910994f941f6 from this chassis (sb_readonly=0)
Oct 14 09:08:20 compute-0 ovn_controller[152662]: 2025-10-14T09:08:20Z|00680|binding|INFO|Setting lport 1c4cb1d0-4a75-436b-965e-910994f941f6 down in Southbound
Oct 14 09:08:20 compute-0 ovn_controller[152662]: 2025-10-14T09:08:20Z|00681|binding|INFO|Removing iface tap1c4cb1d0-4a ovn-installed in OVS
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:20.971 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:c5:30 10.100.0.10'], port_security=['fa:16:3e:3c:c5:30 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9512792b-fd02-459a-8377-c2815c130684', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd39581efff7d48fb83412ca1f615d412', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd5e08c6-d8c1-45fb-85db-6ce5c46fea39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c165cf5-3bad-4110-8321-7f7bda9723ad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1c4cb1d0-4a75-436b-965e-910994f941f6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:08:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:20.973 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1c4cb1d0-4a75-436b-965e-910994f941f6 in datapath 0a07d59e-be8b-4d41-a103-fb5a64bf6f88 unbound from our chassis
Oct 14 09:08:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:20.976 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a07d59e-be8b-4d41-a103-fb5a64bf6f88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:08:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:20.977 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d7415ed7-c19e-4f39-a594-4dff93b272d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:20.978 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 namespace which is not needed anymore
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:08:20 compute-0 nova_compute[259627]: 2025-10-14 09:08:20.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:21 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000043.scope: Deactivated successfully.
Oct 14 09:08:21 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000043.scope: Consumed 2.892s CPU time.
Oct 14 09:08:21 compute-0 systemd-machined[214636]: Machine qemu-81-instance-00000043 terminated.
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.077 2 DEBUG oslo_concurrency.processutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.142 2 DEBUG nova.compute.manager [None req-5dd5de0d-d896-4e91-b770-923cca3f218e a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:21 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[330850]: [NOTICE]   (330854) : haproxy version is 2.8.14-c23fe91
Oct 14 09:08:21 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[330850]: [NOTICE]   (330854) : path to executable is /usr/sbin/haproxy
Oct 14 09:08:21 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[330850]: [WARNING]  (330854) : Exiting Master process...
Oct 14 09:08:21 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[330850]: [ALERT]    (330854) : Current worker (330856) exited with code 143 (Terminated)
Oct 14 09:08:21 compute-0 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[330850]: [WARNING]  (330854) : All workers exited. Exiting... (0)
Oct 14 09:08:21 compute-0 systemd[1]: libpod-802da8df677c41370850e2ad9c48358291d685bc626ee5aa05b8b8e48afb5cc1.scope: Deactivated successfully.
Oct 14 09:08:21 compute-0 conmon[330850]: conmon 802da8df677c41370850 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-802da8df677c41370850e2ad9c48358291d685bc626ee5aa05b8b8e48afb5cc1.scope/container/memory.events
Oct 14 09:08:21 compute-0 podman[330892]: 2025-10-14 09:08:21.186122634 +0000 UTC m=+0.070787224 container died 802da8df677c41370850e2ad9c48358291d685bc626ee5aa05b8b8e48afb5cc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:08:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-802da8df677c41370850e2ad9c48358291d685bc626ee5aa05b8b8e48afb5cc1-userdata-shm.mount: Deactivated successfully.
Oct 14 09:08:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b94aa42d3f4937c466f2cfcc3b721652aceadaae275cde84f3f32e5ffbbc78a4-merged.mount: Deactivated successfully.
Oct 14 09:08:21 compute-0 podman[330892]: 2025-10-14 09:08:21.27284751 +0000 UTC m=+0.157512040 container cleanup 802da8df677c41370850e2ad9c48358291d685bc626ee5aa05b8b8e48afb5cc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:08:21 compute-0 systemd[1]: libpod-conmon-802da8df677c41370850e2ad9c48358291d685bc626ee5aa05b8b8e48afb5cc1.scope: Deactivated successfully.
Oct 14 09:08:21 compute-0 ovn_controller[152662]: 2025-10-14T09:08:21Z|00682|binding|INFO|Releasing lport 31ed66d8-7c3d-4486-83f3-5ccb9a199aa1 from this chassis (sb_readonly=0)
Oct 14 09:08:21 compute-0 ovn_controller[152662]: 2025-10-14T09:08:21Z|00683|binding|INFO|Releasing lport fcca615a-5470-4880-844d-73adc425bce1 from this chassis (sb_readonly=0)
Oct 14 09:08:21 compute-0 podman[330948]: 2025-10-14 09:08:21.350774379 +0000 UTC m=+0.045963683 container remove 802da8df677c41370850e2ad9c48358291d685bc626ee5aa05b8b8e48afb5cc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 09:08:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:21.357 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9b4c01a7-01d5-419b-9d92-e30d959d2cbf]: (4, ('Tue Oct 14 09:08:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 (802da8df677c41370850e2ad9c48358291d685bc626ee5aa05b8b8e48afb5cc1)\n802da8df677c41370850e2ad9c48358291d685bc626ee5aa05b8b8e48afb5cc1\nTue Oct 14 09:08:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 (802da8df677c41370850e2ad9c48358291d685bc626ee5aa05b8b8e48afb5cc1)\n802da8df677c41370850e2ad9c48358291d685bc626ee5aa05b8b8e48afb5cc1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:21.359 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f1cbfb99-8f00-4737-9236-ad5637f2a710]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:21.361 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a07d59e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:21 compute-0 kernel: tap0a07d59e-b0: left promiscuous mode
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:21.424 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f5d5bc-f374-4eb2-b6ad-5dc322368d6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:21.444 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c306ea-f668-49e1-bb08-59dba2980cf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:21.445 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3f639fd7-cf3e-48ad-a0e6-aefa6ced3d7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:21.466 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[db070229-383b-445d-bf32-ee0262280025]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667645, 'reachable_time': 24703, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330964, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d0a07d59e\x2dbe8b\x2d4d41\x2da103\x2dfb5a64bf6f88.mount: Deactivated successfully.
Oct 14 09:08:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:21.471 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:08:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:21.471 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[8832a1f4-b7af-4d19-a11e-7528c636df57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:08:21 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1102186980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.626 2 DEBUG oslo_concurrency.processutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.634 2 DEBUG nova.compute.provider_tree [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.656 2 DEBUG nova.scheduler.client.report [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.685 2 DEBUG oslo_concurrency.lockutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.686 2 DEBUG nova.compute.manager [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.752 2 DEBUG nova.compute.manager [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.753 2 DEBUG nova.network.neutron [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.771 2 INFO nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.787 2 DEBUG nova.compute.manager [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:08:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 167 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 229 op/s
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.874 2 DEBUG nova.compute.manager [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.876 2 DEBUG nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.877 2 INFO nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Creating image(s)
Oct 14 09:08:21 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1102186980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.919 2 DEBUG nova.storage.rbd_utils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image 7167ef21-b041-47b9-8d93-55b5853f4d01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.947 2 DEBUG nova.storage.rbd_utils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image 7167ef21-b041-47b9-8d93-55b5853f4d01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.977 2 DEBUG nova.storage.rbd_utils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image 7167ef21-b041-47b9-8d93-55b5853f4d01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:21 compute-0 nova_compute[259627]: 2025-10-14 09:08:21.981 2 DEBUG oslo_concurrency.processutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.024 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.078 2 DEBUG oslo_concurrency.processutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.079 2 DEBUG oslo_concurrency.lockutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.079 2 DEBUG oslo_concurrency.lockutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.079 2 DEBUG oslo_concurrency.lockutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.103 2 DEBUG nova.storage.rbd_utils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image 7167ef21-b041-47b9-8d93-55b5853f4d01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.106 2 DEBUG oslo_concurrency.processutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 7167ef21-b041-47b9-8d93-55b5853f4d01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.253 2 DEBUG nova.policy [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '695c749a8dce4506a31e2cec4f02876b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4bda6775f81f403e83269a5f798c9853', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.487 2 DEBUG oslo_concurrency.processutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 7167ef21-b041-47b9-8d93-55b5853f4d01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.554 2 DEBUG nova.storage.rbd_utils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] resizing rbd image 7167ef21-b041-47b9-8d93-55b5853f4d01_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.672 2 DEBUG nova.objects.instance [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'migration_context' on Instance uuid 7167ef21-b041-47b9-8d93-55b5853f4d01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.684 2 DEBUG nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.684 2 DEBUG nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Ensure instance console log exists: /var/lib/nova/instances/7167ef21-b041-47b9-8d93-55b5853f4d01/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.684 2 DEBUG oslo_concurrency.lockutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.685 2 DEBUG oslo_concurrency.lockutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.685 2 DEBUG oslo_concurrency.lockutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:08:22 compute-0 ceph-mon[74249]: pgmap v1605: 305 pgs: 305 active+clean; 167 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 229 op/s
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.950 2 DEBUG nova.compute.manager [req-c85ff162-a575-445b-87db-986986a63314 req-80a19e2d-ef4d-490f-892c-3e8f79eb35b6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Received event network-vif-unplugged-1c4cb1d0-4a75-436b-965e-910994f941f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.951 2 DEBUG oslo_concurrency.lockutils [req-c85ff162-a575-445b-87db-986986a63314 req-80a19e2d-ef4d-490f-892c-3e8f79eb35b6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9512792b-fd02-459a-8377-c2815c130684-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.952 2 DEBUG oslo_concurrency.lockutils [req-c85ff162-a575-445b-87db-986986a63314 req-80a19e2d-ef4d-490f-892c-3e8f79eb35b6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9512792b-fd02-459a-8377-c2815c130684-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.952 2 DEBUG oslo_concurrency.lockutils [req-c85ff162-a575-445b-87db-986986a63314 req-80a19e2d-ef4d-490f-892c-3e8f79eb35b6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9512792b-fd02-459a-8377-c2815c130684-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.952 2 DEBUG nova.compute.manager [req-c85ff162-a575-445b-87db-986986a63314 req-80a19e2d-ef4d-490f-892c-3e8f79eb35b6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] No waiting events found dispatching network-vif-unplugged-1c4cb1d0-4a75-436b-965e-910994f941f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.953 2 WARNING nova.compute.manager [req-c85ff162-a575-445b-87db-986986a63314 req-80a19e2d-ef4d-490f-892c-3e8f79eb35b6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Received unexpected event network-vif-unplugged-1c4cb1d0-4a75-436b-965e-910994f941f6 for instance with vm_state suspended and task_state None.
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.953 2 DEBUG nova.compute.manager [req-c85ff162-a575-445b-87db-986986a63314 req-80a19e2d-ef4d-490f-892c-3e8f79eb35b6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Received event network-vif-plugged-1c4cb1d0-4a75-436b-965e-910994f941f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.954 2 DEBUG oslo_concurrency.lockutils [req-c85ff162-a575-445b-87db-986986a63314 req-80a19e2d-ef4d-490f-892c-3e8f79eb35b6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9512792b-fd02-459a-8377-c2815c130684-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.954 2 DEBUG oslo_concurrency.lockutils [req-c85ff162-a575-445b-87db-986986a63314 req-80a19e2d-ef4d-490f-892c-3e8f79eb35b6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9512792b-fd02-459a-8377-c2815c130684-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.955 2 DEBUG oslo_concurrency.lockutils [req-c85ff162-a575-445b-87db-986986a63314 req-80a19e2d-ef4d-490f-892c-3e8f79eb35b6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9512792b-fd02-459a-8377-c2815c130684-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.955 2 DEBUG nova.compute.manager [req-c85ff162-a575-445b-87db-986986a63314 req-80a19e2d-ef4d-490f-892c-3e8f79eb35b6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] No waiting events found dispatching network-vif-plugged-1c4cb1d0-4a75-436b-965e-910994f941f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.955 2 WARNING nova.compute.manager [req-c85ff162-a575-445b-87db-986986a63314 req-80a19e2d-ef4d-490f-892c-3e8f79eb35b6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Received unexpected event network-vif-plugged-1c4cb1d0-4a75-436b-965e-910994f941f6 for instance with vm_state suspended and task_state None.
Oct 14 09:08:22 compute-0 nova_compute[259627]: 2025-10-14 09:08:22.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.110 2 DEBUG oslo_concurrency.lockutils [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "9512792b-fd02-459a-8377-c2815c130684" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.110 2 DEBUG oslo_concurrency.lockutils [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "9512792b-fd02-459a-8377-c2815c130684" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.111 2 DEBUG oslo_concurrency.lockutils [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "9512792b-fd02-459a-8377-c2815c130684-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.111 2 DEBUG oslo_concurrency.lockutils [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "9512792b-fd02-459a-8377-c2815c130684-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.112 2 DEBUG oslo_concurrency.lockutils [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "9512792b-fd02-459a-8377-c2815c130684-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.113 2 INFO nova.compute.manager [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Terminating instance
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.114 2 DEBUG nova.compute.manager [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.123 2 INFO nova.virt.libvirt.driver [-] [instance: 9512792b-fd02-459a-8377-c2815c130684] Instance destroyed successfully.
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.123 2 DEBUG nova.objects.instance [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'resources' on Instance uuid 9512792b-fd02-459a-8377-c2815c130684 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.148 2 DEBUG nova.virt.libvirt.vif [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:08:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1229366743',display_name='tempest-DeleteServersTestJSON-server-1229366743',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1229366743',id=67,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:08:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-gdrhmtlp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:08:21Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=9512792b-fd02-459a-8377-c2815c130684,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "1c4cb1d0-4a75-436b-965e-910994f941f6", "address": "fa:16:3e:3c:c5:30", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c4cb1d0-4a", "ovs_interfaceid": "1c4cb1d0-4a75-436b-965e-910994f941f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.148 2 DEBUG nova.network.os_vif_util [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "1c4cb1d0-4a75-436b-965e-910994f941f6", "address": "fa:16:3e:3c:c5:30", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c4cb1d0-4a", "ovs_interfaceid": "1c4cb1d0-4a75-436b-965e-910994f941f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.150 2 DEBUG nova.network.os_vif_util [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:c5:30,bridge_name='br-int',has_traffic_filtering=True,id=1c4cb1d0-4a75-436b-965e-910994f941f6,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c4cb1d0-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.150 2 DEBUG os_vif [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:c5:30,bridge_name='br-int',has_traffic_filtering=True,id=1c4cb1d0-4a75-436b-965e-910994f941f6,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c4cb1d0-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.153 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c4cb1d0-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.161 2 INFO os_vif [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:c5:30,bridge_name='br-int',has_traffic_filtering=True,id=1c4cb1d0-4a75-436b-965e-910994f941f6,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c4cb1d0-4a')
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.550 2 DEBUG nova.network.neutron [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Successfully created port: 6383be3b-4378-4c3f-a2be-af1c6ec32afb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.626 2 INFO nova.virt.libvirt.driver [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Deleting instance files /var/lib/nova/instances/9512792b-fd02-459a-8377-c2815c130684_del
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.627 2 INFO nova.virt.libvirt.driver [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Deletion of /var/lib/nova/instances/9512792b-fd02-459a-8377-c2815c130684_del complete
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.706 2 INFO nova.compute.manager [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Took 0.59 seconds to destroy the instance on the hypervisor.
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.707 2 DEBUG oslo.service.loopingcall [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.707 2 DEBUG nova.compute.manager [-] [instance: 9512792b-fd02-459a-8377-c2815c130684] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:08:23 compute-0 nova_compute[259627]: 2025-10-14 09:08:23.707 2 DEBUG nova.network.neutron [-] [instance: 9512792b-fd02-459a-8377-c2815c130684] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:08:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 167 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 207 op/s
Oct 14 09:08:24 compute-0 podman[331153]: 2025-10-14 09:08:24.659766251 +0000 UTC m=+0.065802962 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 09:08:24 compute-0 podman[331152]: 2025-10-14 09:08:24.69222625 +0000 UTC m=+0.099942882 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller)
Oct 14 09:08:24 compute-0 nova_compute[259627]: 2025-10-14 09:08:24.692 2 DEBUG nova.network.neutron [-] [instance: 9512792b-fd02-459a-8377-c2815c130684] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:08:24 compute-0 nova_compute[259627]: 2025-10-14 09:08:24.712 2 INFO nova.compute.manager [-] [instance: 9512792b-fd02-459a-8377-c2815c130684] Took 1.01 seconds to deallocate network for instance.
Oct 14 09:08:24 compute-0 nova_compute[259627]: 2025-10-14 09:08:24.760 2 DEBUG oslo_concurrency.lockutils [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:24 compute-0 nova_compute[259627]: 2025-10-14 09:08:24.761 2 DEBUG oslo_concurrency.lockutils [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:24 compute-0 nova_compute[259627]: 2025-10-14 09:08:24.870 2 DEBUG oslo_concurrency.processutils [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:24 compute-0 ceph-mon[74249]: pgmap v1606: 305 pgs: 305 active+clean; 167 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 207 op/s
Oct 14 09:08:24 compute-0 nova_compute[259627]: 2025-10-14 09:08:24.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.006 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.143 2 DEBUG nova.network.neutron [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Successfully updated port: 6383be3b-4378-4c3f-a2be-af1c6ec32afb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.168 2 DEBUG oslo_concurrency.lockutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "refresh_cache-7167ef21-b041-47b9-8d93-55b5853f4d01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.169 2 DEBUG oslo_concurrency.lockutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquired lock "refresh_cache-7167ef21-b041-47b9-8d93-55b5853f4d01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.169 2 DEBUG nova.network.neutron [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.179 2 DEBUG nova.compute.manager [req-5c14f1b8-e345-4090-b268-c50ecbc5f179 req-280850c1-bfd6-400f-9d51-27b5bb9cc20e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Received event network-vif-deleted-1c4cb1d0-4a75-436b-965e-910994f941f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.261 2 DEBUG nova.compute.manager [req-b315ebc1-e696-4895-8c16-82629c6ed367 req-fc561498-7779-42be-943d-6c4b2427e5e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Received event network-changed-6383be3b-4378-4c3f-a2be-af1c6ec32afb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.262 2 DEBUG nova.compute.manager [req-b315ebc1-e696-4895-8c16-82629c6ed367 req-fc561498-7779-42be-943d-6c4b2427e5e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Refreshing instance network info cache due to event network-changed-6383be3b-4378-4c3f-a2be-af1c6ec32afb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.264 2 DEBUG oslo_concurrency.lockutils [req-b315ebc1-e696-4895-8c16-82629c6ed367 req-fc561498-7779-42be-943d-6c4b2427e5e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-7167ef21-b041-47b9-8d93-55b5853f4d01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.301 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432890.3002074, 8a58a504-85a5-44e6-b815-99abb4ca2fc8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.301 2 INFO nova.compute.manager [-] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] VM Stopped (Lifecycle Event)
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.322 2 DEBUG nova.compute.manager [None req-df04959c-dfde-40d9-b66f-010a95734e27 - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.398 2 DEBUG nova.network.neutron [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:08:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:08:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3792147824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.447 2 DEBUG oslo_concurrency.processutils [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.452 2 DEBUG nova.compute.provider_tree [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.466 2 DEBUG nova.scheduler.client.report [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.492 2 DEBUG oslo_concurrency.lockutils [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.494 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.495 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.495 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.495 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.546 2 INFO nova.scheduler.client.report [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Deleted allocations for instance 9512792b-fd02-459a-8377-c2815c130684
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.613 2 DEBUG oslo_concurrency.lockutils [None req-15c081a6-041c-4589-aaa4-fea2f568a111 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "9512792b-fd02-459a-8377-c2815c130684" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 154 op/s
Oct 14 09:08:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:08:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2115172739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3792147824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2115172739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:25 compute-0 nova_compute[259627]: 2025-10-14 09:08:25.930 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.006 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.006 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.245 2 DEBUG nova.network.neutron [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Updating instance_info_cache with network_info: [{"id": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "address": "fa:16:3e:01:e8:10", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6383be3b-43", "ovs_interfaceid": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.266 2 DEBUG oslo_concurrency.lockutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Releasing lock "refresh_cache-7167ef21-b041-47b9-8d93-55b5853f4d01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.267 2 DEBUG nova.compute.manager [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Instance network_info: |[{"id": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "address": "fa:16:3e:01:e8:10", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6383be3b-43", "ovs_interfaceid": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.268 2 DEBUG oslo_concurrency.lockutils [req-b315ebc1-e696-4895-8c16-82629c6ed367 req-fc561498-7779-42be-943d-6c4b2427e5e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-7167ef21-b041-47b9-8d93-55b5853f4d01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.269 2 DEBUG nova.network.neutron [req-b315ebc1-e696-4895-8c16-82629c6ed367 req-fc561498-7779-42be-943d-6c4b2427e5e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Refreshing network info cache for port 6383be3b-4378-4c3f-a2be-af1c6ec32afb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.275 2 DEBUG nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Start _get_guest_xml network_info=[{"id": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "address": "fa:16:3e:01:e8:10", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6383be3b-43", "ovs_interfaceid": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.283 2 WARNING nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.291 2 DEBUG nova.virt.libvirt.host [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.292 2 DEBUG nova.virt.libvirt.host [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.297 2 DEBUG nova.virt.libvirt.host [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.298 2 DEBUG nova.virt.libvirt.host [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.299 2 DEBUG nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.299 2 DEBUG nova.virt.hardware [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.300 2 DEBUG nova.virt.hardware [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.301 2 DEBUG nova.virt.hardware [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.301 2 DEBUG nova.virt.hardware [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.302 2 DEBUG nova.virt.hardware [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.302 2 DEBUG nova.virt.hardware [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.303 2 DEBUG nova.virt.hardware [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.303 2 DEBUG nova.virt.hardware [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.304 2 DEBUG nova.virt.hardware [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.304 2 DEBUG nova.virt.hardware [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.305 2 DEBUG nova.virt.hardware [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.310 2 DEBUG oslo_concurrency.processutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.384 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.388 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3706MB free_disk=59.921783447265625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.388 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.389 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.455 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance e065d857-2df9-4199-aa98-41ca3c436bad actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.456 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 7167ef21-b041-47b9-8d93-55b5853f4d01 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.456 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.457 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.519 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:08:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2989834504' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.801 2 DEBUG oslo_concurrency.processutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.822 2 DEBUG nova.storage.rbd_utils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image 7167ef21-b041-47b9-8d93-55b5853f4d01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.826 2 DEBUG oslo_concurrency.processutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:26 compute-0 ceph-mon[74249]: pgmap v1607: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 154 op/s
Oct 14 09:08:26 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2989834504' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:08:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:08:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/837343065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.966 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:26 compute-0 nova_compute[259627]: 2025-10-14 09:08:26.973 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.127 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.155 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.156 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:08:27 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4119659457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.281 2 DEBUG oslo_concurrency.processutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.283 2 DEBUG nova.virt.libvirt.vif [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:08:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-741371027',display_name='tempest-ServerActionsTestOtherB-server-741371027',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-741371027',id=68,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4bda6775f81f403e83269a5f798c9853',ramdisk_id='',reservation_id='r-xivhqpt7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-381012378',owner_user_name='tempest-ServerActionsTestOtherB-381012378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:08:21Z,user_data=None,user_id='695c749a8dce4506a31e2cec4f02876b',uuid=7167ef21-b041-47b9-8d93-55b5853f4d01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "address": "fa:16:3e:01:e8:10", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6383be3b-43", "ovs_interfaceid": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.284 2 DEBUG nova.network.os_vif_util [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converting VIF {"id": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "address": "fa:16:3e:01:e8:10", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6383be3b-43", "ovs_interfaceid": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.286 2 DEBUG nova.network.os_vif_util [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:e8:10,bridge_name='br-int',has_traffic_filtering=True,id=6383be3b-4378-4c3f-a2be-af1c6ec32afb,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6383be3b-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.288 2 DEBUG nova.objects.instance [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7167ef21-b041-47b9-8d93-55b5853f4d01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.317 2 DEBUG nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:08:27 compute-0 nova_compute[259627]:   <uuid>7167ef21-b041-47b9-8d93-55b5853f4d01</uuid>
Oct 14 09:08:27 compute-0 nova_compute[259627]:   <name>instance-00000044</name>
Oct 14 09:08:27 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:08:27 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:08:27 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerActionsTestOtherB-server-741371027</nova:name>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:08:26</nova:creationTime>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:08:27 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:08:27 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:08:27 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:08:27 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:08:27 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:08:27 compute-0 nova_compute[259627]:         <nova:user uuid="695c749a8dce4506a31e2cec4f02876b">tempest-ServerActionsTestOtherB-381012378-project-member</nova:user>
Oct 14 09:08:27 compute-0 nova_compute[259627]:         <nova:project uuid="4bda6775f81f403e83269a5f798c9853">tempest-ServerActionsTestOtherB-381012378</nova:project>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:08:27 compute-0 nova_compute[259627]:         <nova:port uuid="6383be3b-4378-4c3f-a2be-af1c6ec32afb">
Oct 14 09:08:27 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:08:27 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:08:27 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <system>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <entry name="serial">7167ef21-b041-47b9-8d93-55b5853f4d01</entry>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <entry name="uuid">7167ef21-b041-47b9-8d93-55b5853f4d01</entry>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     </system>
Oct 14 09:08:27 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:08:27 compute-0 nova_compute[259627]:   <os>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:   </os>
Oct 14 09:08:27 compute-0 nova_compute[259627]:   <features>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:   </features>
Oct 14 09:08:27 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:08:27 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:08:27 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/7167ef21-b041-47b9-8d93-55b5853f4d01_disk">
Oct 14 09:08:27 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       </source>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:08:27 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/7167ef21-b041-47b9-8d93-55b5853f4d01_disk.config">
Oct 14 09:08:27 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       </source>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:08:27 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:01:e8:10"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <target dev="tap6383be3b-43"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/7167ef21-b041-47b9-8d93-55b5853f4d01/console.log" append="off"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <video>
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     </video>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:08:27 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:08:27 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:08:27 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:08:27 compute-0 nova_compute[259627]: </domain>
Oct 14 09:08:27 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.319 2 DEBUG nova.compute.manager [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Preparing to wait for external event network-vif-plugged-6383be3b-4378-4c3f-a2be-af1c6ec32afb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.320 2 DEBUG oslo_concurrency.lockutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "7167ef21-b041-47b9-8d93-55b5853f4d01-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.321 2 DEBUG oslo_concurrency.lockutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "7167ef21-b041-47b9-8d93-55b5853f4d01-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.322 2 DEBUG oslo_concurrency.lockutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "7167ef21-b041-47b9-8d93-55b5853f4d01-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.323 2 DEBUG nova.virt.libvirt.vif [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:08:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-741371027',display_name='tempest-ServerActionsTestOtherB-server-741371027',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-741371027',id=68,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4bda6775f81f403e83269a5f798c9853',ramdisk_id='',reservation_id='r-xivhqpt7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-381012378',owner_user_name='tempest-ServerActionsTestOtherB-381012378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:08:21Z,user_data=None,user_id='695c749a8dce4506a31e2cec4f02876b',uuid=7167ef21-b041-47b9-8d93-55b5853f4d01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "address": "fa:16:3e:01:e8:10", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6383be3b-43", "ovs_interfaceid": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.324 2 DEBUG nova.network.os_vif_util [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converting VIF {"id": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "address": "fa:16:3e:01:e8:10", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6383be3b-43", "ovs_interfaceid": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.325 2 DEBUG nova.network.os_vif_util [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:e8:10,bridge_name='br-int',has_traffic_filtering=True,id=6383be3b-4378-4c3f-a2be-af1c6ec32afb,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6383be3b-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.326 2 DEBUG os_vif [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:e8:10,bridge_name='br-int',has_traffic_filtering=True,id=6383be3b-4378-4c3f-a2be-af1c6ec32afb,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6383be3b-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.328 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.329 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.336 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6383be3b-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.337 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6383be3b-43, col_values=(('external_ids', {'iface-id': '6383be3b-4378-4c3f-a2be-af1c6ec32afb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:e8:10', 'vm-uuid': '7167ef21-b041-47b9-8d93-55b5853f4d01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:27 compute-0 NetworkManager[44885]: <info>  [1760432907.3408] manager: (tap6383be3b-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.349 2 INFO os_vif [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:e8:10,bridge_name='br-int',has_traffic_filtering=True,id=6383be3b-4378-4c3f-a2be-af1c6ec32afb,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6383be3b-43')
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.409 2 DEBUG nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.410 2 DEBUG nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.410 2 DEBUG nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No VIF found with MAC fa:16:3e:01:e8:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.410 2 INFO nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Using config drive
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.444 2 DEBUG nova.storage.rbd_utils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image 7167ef21-b041-47b9-8d93-55b5853f4d01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:08:27.714134) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432907714193, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 948, "num_deletes": 265, "total_data_size": 1085744, "memory_usage": 1115184, "flush_reason": "Manual Compaction"}
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432907724567, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1071602, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32896, "largest_seqno": 33843, "table_properties": {"data_size": 1066797, "index_size": 2329, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10894, "raw_average_key_size": 20, "raw_value_size": 1056959, "raw_average_value_size": 1953, "num_data_blocks": 102, "num_entries": 541, "num_filter_entries": 541, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760432855, "oldest_key_time": 1760432855, "file_creation_time": 1760432907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 10489 microseconds, and 6553 cpu microseconds.
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:08:27.724624) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1071602 bytes OK
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:08:27.724650) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:08:27.726477) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:08:27.726499) EVENT_LOG_v1 {"time_micros": 1760432907726492, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:08:27.726521) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1081015, prev total WAL file size 1081015, number of live WAL files 2.
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:08:27.727430) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323538' seq:0, type:0; will stop at (end)
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1046KB)], [71(8316KB)]
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432907727505, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9588057, "oldest_snapshot_seqno": -1}
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5752 keys, 9472986 bytes, temperature: kUnknown
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432907791776, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9472986, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9431839, "index_size": 25677, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 145593, "raw_average_key_size": 25, "raw_value_size": 9325714, "raw_average_value_size": 1621, "num_data_blocks": 1045, "num_entries": 5752, "num_filter_entries": 5752, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760432907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:08:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:08:27.792984) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9472986 bytes
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:08:27.795201) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.2 rd, 146.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.1 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(17.8) write-amplify(8.8) OK, records in: 6293, records dropped: 541 output_compression: NoCompression
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:08:27.795234) EVENT_LOG_v1 {"time_micros": 1760432907795218, "job": 40, "event": "compaction_finished", "compaction_time_micros": 64713, "compaction_time_cpu_micros": 44013, "output_level": 6, "num_output_files": 1, "total_output_size": 9472986, "num_input_records": 6293, "num_output_records": 5752, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432907796659, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432907800231, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:08:27.727216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:08:27.800435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:08:27.800442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:08:27.800445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:08:27.800448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:08:27 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:08:27.800451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.942 2 DEBUG nova.network.neutron [req-b315ebc1-e696-4895-8c16-82629c6ed367 req-fc561498-7779-42be-943d-6c4b2427e5e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Updated VIF entry in instance network info cache for port 6383be3b-4378-4c3f-a2be-af1c6ec32afb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.943 2 DEBUG nova.network.neutron [req-b315ebc1-e696-4895-8c16-82629c6ed367 req-fc561498-7779-42be-943d-6c4b2427e5e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Updating instance_info_cache with network_info: [{"id": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "address": "fa:16:3e:01:e8:10", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6383be3b-43", "ovs_interfaceid": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:08:27 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/837343065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:27 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4119659457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:08:27 compute-0 nova_compute[259627]: 2025-10-14 09:08:27.970 2 DEBUG oslo_concurrency.lockutils [req-b315ebc1-e696-4895-8c16-82629c6ed367 req-fc561498-7779-42be-943d-6c4b2427e5e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-7167ef21-b041-47b9-8d93-55b5853f4d01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:08:28 compute-0 nova_compute[259627]: 2025-10-14 09:08:28.156 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:08:28 compute-0 nova_compute[259627]: 2025-10-14 09:08:28.157 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:08:28 compute-0 nova_compute[259627]: 2025-10-14 09:08:28.157 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:08:28 compute-0 nova_compute[259627]: 2025-10-14 09:08:28.549 2 INFO nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Creating config drive at /var/lib/nova/instances/7167ef21-b041-47b9-8d93-55b5853f4d01/disk.config
Oct 14 09:08:28 compute-0 nova_compute[259627]: 2025-10-14 09:08:28.560 2 DEBUG oslo_concurrency.processutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7167ef21-b041-47b9-8d93-55b5853f4d01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjljsa0h4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:28 compute-0 nova_compute[259627]: 2025-10-14 09:08:28.725 2 DEBUG oslo_concurrency.processutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7167ef21-b041-47b9-8d93-55b5853f4d01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjljsa0h4" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:28 compute-0 nova_compute[259627]: 2025-10-14 09:08:28.764 2 DEBUG nova.storage.rbd_utils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image 7167ef21-b041-47b9-8d93-55b5853f4d01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:28 compute-0 nova_compute[259627]: 2025-10-14 09:08:28.769 2 DEBUG oslo_concurrency.processutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7167ef21-b041-47b9-8d93-55b5853f4d01/disk.config 7167ef21-b041-47b9-8d93-55b5853f4d01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:28 compute-0 ceph-mon[74249]: pgmap v1608: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Oct 14 09:08:28 compute-0 nova_compute[259627]: 2025-10-14 09:08:28.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:08:28 compute-0 nova_compute[259627]: 2025-10-14 09:08:28.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 14 09:08:28 compute-0 nova_compute[259627]: 2025-10-14 09:08:28.989 2 DEBUG oslo_concurrency.processutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7167ef21-b041-47b9-8d93-55b5853f4d01/disk.config 7167ef21-b041-47b9-8d93-55b5853f4d01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:28 compute-0 nova_compute[259627]: 2025-10-14 09:08:28.990 2 INFO nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Deleting local config drive /var/lib/nova/instances/7167ef21-b041-47b9-8d93-55b5853f4d01/disk.config because it was imported into RBD.
Oct 14 09:08:28 compute-0 nova_compute[259627]: 2025-10-14 09:08:28.997 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 14 09:08:29 compute-0 kernel: tap6383be3b-43: entered promiscuous mode
Oct 14 09:08:29 compute-0 nova_compute[259627]: 2025-10-14 09:08:29.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:29 compute-0 NetworkManager[44885]: <info>  [1760432909.0694] manager: (tap6383be3b-43): new Tun device (/org/freedesktop/NetworkManager/Devices/292)
Oct 14 09:08:29 compute-0 ovn_controller[152662]: 2025-10-14T09:08:29Z|00684|binding|INFO|Claiming lport 6383be3b-4378-4c3f-a2be-af1c6ec32afb for this chassis.
Oct 14 09:08:29 compute-0 ovn_controller[152662]: 2025-10-14T09:08:29Z|00685|binding|INFO|6383be3b-4378-4c3f-a2be-af1c6ec32afb: Claiming fa:16:3e:01:e8:10 10.100.0.7
Oct 14 09:08:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:29.078 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:e8:10 10.100.0.7'], port_security=['fa:16:3e:01:e8:10 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7167ef21-b041-47b9-8d93-55b5853f4d01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bda6775f81f403e83269a5f798c9853', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd4ffd682-0c28-40f2-a6f1-d3d67aecef45', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88e90b59-4c4c-42c1-a4ed-574ac64367e5, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=6383be3b-4378-4c3f-a2be-af1c6ec32afb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:08:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:29.081 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 6383be3b-4378-4c3f-a2be-af1c6ec32afb in datapath 9d540b01-e9c4-4dc5-9a51-94512ad9a409 bound to our chassis
Oct 14 09:08:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:29.083 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d540b01-e9c4-4dc5-9a51-94512ad9a409
Oct 14 09:08:29 compute-0 ovn_controller[152662]: 2025-10-14T09:08:29Z|00686|binding|INFO|Setting lport 6383be3b-4378-4c3f-a2be-af1c6ec32afb ovn-installed in OVS
Oct 14 09:08:29 compute-0 ovn_controller[152662]: 2025-10-14T09:08:29Z|00687|binding|INFO|Setting lport 6383be3b-4378-4c3f-a2be-af1c6ec32afb up in Southbound
Oct 14 09:08:29 compute-0 nova_compute[259627]: 2025-10-14 09:08:29.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:29.112 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cee2c295-b949-45f6-8f7c-94d6fb0f32d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:29 compute-0 nova_compute[259627]: 2025-10-14 09:08:29.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:29 compute-0 systemd-machined[214636]: New machine qemu-82-instance-00000044.
Oct 14 09:08:29 compute-0 systemd[1]: Started Virtual Machine qemu-82-instance-00000044.
Oct 14 09:08:29 compute-0 systemd-udevd[331398]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:08:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:29.165 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7bcb74-ffb6-4c60-8324-c1fb141dfbbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:29.169 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[86f8f62e-3e8a-4459-bc99-5bbf7a88f618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:29 compute-0 NetworkManager[44885]: <info>  [1760432909.1735] device (tap6383be3b-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:08:29 compute-0 NetworkManager[44885]: <info>  [1760432909.1749] device (tap6383be3b-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:08:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:29.213 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[143c908d-73da-4870-bf15-8f23016b40eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:29.241 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[25d57d71-a5fc-4449-be05-4644b15bfd4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d540b01-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:6a:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662158, 'reachable_time': 22381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331408, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:29.269 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f897f4-09c2-4f90-9fd0-ea37b4ff4ec8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9d540b01-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662169, 'tstamp': 662169}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331409, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9d540b01-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662173, 'tstamp': 662173}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331409, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:29.272 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d540b01-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:29 compute-0 nova_compute[259627]: 2025-10-14 09:08:29.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:29 compute-0 nova_compute[259627]: 2025-10-14 09:08:29.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:29.276 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d540b01-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:29.277 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:08:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:29.278 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d540b01-e0, col_values=(('external_ids', {'iface-id': 'fcca615a-5470-4880-844d-73adc425bce1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:29.278 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:08:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 14 09:08:29 compute-0 nova_compute[259627]: 2025-10-14 09:08:29.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:08:29 compute-0 nova_compute[259627]: 2025-10-14 09:08:29.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:08:30 compute-0 ovn_controller[152662]: 2025-10-14T09:08:30Z|00688|binding|INFO|Releasing lport fcca615a-5470-4880-844d-73adc425bce1 from this chassis (sb_readonly=0)
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.191 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.192 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.192 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.259 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432910.258246, 7167ef21-b041-47b9-8d93-55b5853f4d01 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.259 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] VM Started (Lifecycle Event)
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.283 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.290 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432910.2598057, 7167ef21-b041-47b9-8d93-55b5853f4d01 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.291 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] VM Paused (Lifecycle Event)
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.325 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.328 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.351 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.591 2 DEBUG nova.compute.manager [req-4f0bcb14-39c8-4d4a-af82-25d4cb6de83c req-4521b3dd-6155-4786-a272-943b6c7c94c2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Received event network-vif-plugged-6383be3b-4378-4c3f-a2be-af1c6ec32afb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.592 2 DEBUG oslo_concurrency.lockutils [req-4f0bcb14-39c8-4d4a-af82-25d4cb6de83c req-4521b3dd-6155-4786-a272-943b6c7c94c2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "7167ef21-b041-47b9-8d93-55b5853f4d01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.593 2 DEBUG oslo_concurrency.lockutils [req-4f0bcb14-39c8-4d4a-af82-25d4cb6de83c req-4521b3dd-6155-4786-a272-943b6c7c94c2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7167ef21-b041-47b9-8d93-55b5853f4d01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.593 2 DEBUG oslo_concurrency.lockutils [req-4f0bcb14-39c8-4d4a-af82-25d4cb6de83c req-4521b3dd-6155-4786-a272-943b6c7c94c2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7167ef21-b041-47b9-8d93-55b5853f4d01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.593 2 DEBUG nova.compute.manager [req-4f0bcb14-39c8-4d4a-af82-25d4cb6de83c req-4521b3dd-6155-4786-a272-943b6c7c94c2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Processing event network-vif-plugged-6383be3b-4378-4c3f-a2be-af1c6ec32afb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.594 2 DEBUG nova.compute.manager [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.600 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432910.6002207, 7167ef21-b041-47b9-8d93-55b5853f4d01 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.601 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] VM Resumed (Lifecycle Event)
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.604 2 DEBUG nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.609 2 INFO nova.virt.libvirt.driver [-] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Instance spawned successfully.
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.609 2 DEBUG nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.632 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.641 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.649 2 DEBUG nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.650 2 DEBUG nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.650 2 DEBUG nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.651 2 DEBUG nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.652 2 DEBUG nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.652 2 DEBUG nova.virt.libvirt.driver [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.661 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.716 2 INFO nova.compute.manager [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Took 8.84 seconds to spawn the instance on the hypervisor.
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.717 2 DEBUG nova.compute.manager [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.811 2 INFO nova.compute.manager [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Took 10.10 seconds to build instance.
Oct 14 09:08:30 compute-0 nova_compute[259627]: 2025-10-14 09:08:30.836 2 DEBUG oslo_concurrency.lockutils [None req-932ffa6a-7513-4585-bf7d-6876a8bfb2f3 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "7167ef21-b041-47b9-8d93-55b5853f4d01" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:30 compute-0 ceph-mon[74249]: pgmap v1609: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 14 09:08:31 compute-0 nova_compute[259627]: 2025-10-14 09:08:31.775 2 INFO nova.compute.manager [None req-08f920fe-1d0e-43ef-8f68-65dd879a3108 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Get console output
Oct 14 09:08:31 compute-0 nova_compute[259627]: 2025-10-14 09:08:31.784 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Updating instance_info_cache with network_info: [{"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:08:31 compute-0 nova_compute[259627]: 2025-10-14 09:08:31.783 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:08:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Oct 14 09:08:31 compute-0 nova_compute[259627]: 2025-10-14 09:08:31.801 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:08:31 compute-0 nova_compute[259627]: 2025-10-14 09:08:31.802 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:08:31 compute-0 nova_compute[259627]: 2025-10-14 09:08:31.802 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:08:31 compute-0 nova_compute[259627]: 2025-10-14 09:08:31.802 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:08:32 compute-0 nova_compute[259627]: 2025-10-14 09:08:32.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:32 compute-0 nova_compute[259627]: 2025-10-14 09:08:32.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:08:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:08:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:08:32 compute-0 nova_compute[259627]: 2025-10-14 09:08:32.734 2 DEBUG nova.compute.manager [req-5d38acc4-3f9c-4140-aedc-6514d3d0a78c req-7719d49d-7d4d-4149-9387-588051292dc7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Received event network-vif-plugged-6383be3b-4378-4c3f-a2be-af1c6ec32afb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:32 compute-0 nova_compute[259627]: 2025-10-14 09:08:32.734 2 DEBUG oslo_concurrency.lockutils [req-5d38acc4-3f9c-4140-aedc-6514d3d0a78c req-7719d49d-7d4d-4149-9387-588051292dc7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "7167ef21-b041-47b9-8d93-55b5853f4d01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:32 compute-0 nova_compute[259627]: 2025-10-14 09:08:32.735 2 DEBUG oslo_concurrency.lockutils [req-5d38acc4-3f9c-4140-aedc-6514d3d0a78c req-7719d49d-7d4d-4149-9387-588051292dc7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7167ef21-b041-47b9-8d93-55b5853f4d01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:32 compute-0 nova_compute[259627]: 2025-10-14 09:08:32.735 2 DEBUG oslo_concurrency.lockutils [req-5d38acc4-3f9c-4140-aedc-6514d3d0a78c req-7719d49d-7d4d-4149-9387-588051292dc7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7167ef21-b041-47b9-8d93-55b5853f4d01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:32 compute-0 nova_compute[259627]: 2025-10-14 09:08:32.735 2 DEBUG nova.compute.manager [req-5d38acc4-3f9c-4140-aedc-6514d3d0a78c req-7719d49d-7d4d-4149-9387-588051292dc7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] No waiting events found dispatching network-vif-plugged-6383be3b-4378-4c3f-a2be-af1c6ec32afb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:08:32 compute-0 nova_compute[259627]: 2025-10-14 09:08:32.735 2 WARNING nova.compute.manager [req-5d38acc4-3f9c-4140-aedc-6514d3d0a78c req-7719d49d-7d4d-4149-9387-588051292dc7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Received unexpected event network-vif-plugged-6383be3b-4378-4c3f-a2be-af1c6ec32afb for instance with vm_state active and task_state None.
Oct 14 09:08:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:08:32
Oct 14 09:08:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:08:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:08:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'vms', 'volumes', '.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups']
Oct 14 09:08:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:08:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:08:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:08:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:08:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:08:32 compute-0 nova_compute[259627]: 2025-10-14 09:08:32.928 2 INFO nova.compute.manager [None req-fb9bb58a-a2d7-437f-b4c2-b0976636f1f8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Get console output
Oct 14 09:08:32 compute-0 ceph-mon[74249]: pgmap v1610: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Oct 14 09:08:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:08:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:08:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:08:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:08:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:08:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:08:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:08:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:08:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:08:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:08:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 14 09:08:34 compute-0 nova_compute[259627]: 2025-10-14 09:08:34.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:34 compute-0 ceph-mon[74249]: pgmap v1611: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 14 09:08:34 compute-0 nova_compute[259627]: 2025-10-14 09:08:34.985 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:08:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 14 09:08:36 compute-0 nova_compute[259627]: 2025-10-14 09:08:36.143 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432901.1405594, 9512792b-fd02-459a-8377-c2815c130684 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:08:36 compute-0 nova_compute[259627]: 2025-10-14 09:08:36.144 2 INFO nova.compute.manager [-] [instance: 9512792b-fd02-459a-8377-c2815c130684] VM Stopped (Lifecycle Event)
Oct 14 09:08:36 compute-0 nova_compute[259627]: 2025-10-14 09:08:36.175 2 DEBUG nova.compute.manager [None req-46acfe53-5940-4dee-a33c-3b5d885e86e8 - - - - - -] [instance: 9512792b-fd02-459a-8377-c2815c130684] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:36 compute-0 ceph-mon[74249]: pgmap v1612: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 14 09:08:37 compute-0 nova_compute[259627]: 2025-10-14 09:08:37.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:37 compute-0 nova_compute[259627]: 2025-10-14 09:08:37.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:08:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:08:38 compute-0 ceph-mon[74249]: pgmap v1613: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:08:39 compute-0 nova_compute[259627]: 2025-10-14 09:08:39.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:39 compute-0 ovn_controller[152662]: 2025-10-14T09:08:39Z|00689|binding|INFO|Releasing lport fcca615a-5470-4880-844d-73adc425bce1 from this chassis (sb_readonly=0)
Oct 14 09:08:39 compute-0 nova_compute[259627]: 2025-10-14 09:08:39.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:39 compute-0 nova_compute[259627]: 2025-10-14 09:08:39.766 2 DEBUG oslo_concurrency.lockutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "f74daf08-3420-4644-8fc8-b1450b733908" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:39 compute-0 nova_compute[259627]: 2025-10-14 09:08:39.767 2 DEBUG oslo_concurrency.lockutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "f74daf08-3420-4644-8fc8-b1450b733908" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:39 compute-0 nova_compute[259627]: 2025-10-14 09:08:39.783 2 DEBUG nova.compute.manager [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:08:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:08:39 compute-0 nova_compute[259627]: 2025-10-14 09:08:39.861 2 DEBUG oslo_concurrency.lockutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:39 compute-0 nova_compute[259627]: 2025-10-14 09:08:39.861 2 DEBUG oslo_concurrency.lockutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:39 compute-0 nova_compute[259627]: 2025-10-14 09:08:39.869 2 DEBUG nova.virt.hardware [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:08:39 compute-0 nova_compute[259627]: 2025-10-14 09:08:39.869 2 INFO nova.compute.claims [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.036 2 DEBUG oslo_concurrency.processutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:08:40 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1521781679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.538 2 DEBUG oslo_concurrency.processutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.549 2 DEBUG nova.compute.provider_tree [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.569 2 DEBUG nova.scheduler.client.report [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.605 2 DEBUG oslo_concurrency.lockutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.606 2 DEBUG nova.compute.manager [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:08:40 compute-0 podman[331476]: 2025-10-14 09:08:40.657781878 +0000 UTC m=+0.069633866 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 09:08:40 compute-0 podman[331475]: 2025-10-14 09:08:40.672547262 +0000 UTC m=+0.075821838 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.671 2 DEBUG nova.compute.manager [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.672 2 DEBUG nova.network.neutron [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.690 2 INFO nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.712 2 DEBUG nova.compute.manager [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.809 2 DEBUG nova.compute.manager [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.810 2 DEBUG nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.810 2 INFO nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Creating image(s)
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.828 2 DEBUG nova.storage.rbd_utils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image f74daf08-3420-4644-8fc8-b1450b733908_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.854 2 DEBUG nova.storage.rbd_utils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image f74daf08-3420-4644-8fc8-b1450b733908_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.876 2 DEBUG nova.storage.rbd_utils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image f74daf08-3420-4644-8fc8-b1450b733908_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.879 2 DEBUG oslo_concurrency.processutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.951 2 DEBUG oslo_concurrency.processutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.952 2 DEBUG oslo_concurrency.lockutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.952 2 DEBUG oslo_concurrency.lockutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.953 2 DEBUG oslo_concurrency.lockutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.977 2 DEBUG nova.storage.rbd_utils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image f74daf08-3420-4644-8fc8-b1450b733908_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:40 compute-0 nova_compute[259627]: 2025-10-14 09:08:40.982 2 DEBUG oslo_concurrency.processutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f74daf08-3420-4644-8fc8-b1450b733908_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:41 compute-0 ceph-mon[74249]: pgmap v1614: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:08:41 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1521781679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:41 compute-0 nova_compute[259627]: 2025-10-14 09:08:41.077 2 DEBUG nova.policy [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '695c749a8dce4506a31e2cec4f02876b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4bda6775f81f403e83269a5f798c9853', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:08:41 compute-0 nova_compute[259627]: 2025-10-14 09:08:41.293 2 DEBUG oslo_concurrency.processutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f74daf08-3420-4644-8fc8-b1450b733908_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:41 compute-0 nova_compute[259627]: 2025-10-14 09:08:41.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:41 compute-0 nova_compute[259627]: 2025-10-14 09:08:41.375 2 DEBUG nova.storage.rbd_utils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] resizing rbd image f74daf08-3420-4644-8fc8-b1450b733908_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:08:41 compute-0 nova_compute[259627]: 2025-10-14 09:08:41.467 2 DEBUG nova.objects.instance [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'migration_context' on Instance uuid f74daf08-3420-4644-8fc8-b1450b733908 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:08:41 compute-0 nova_compute[259627]: 2025-10-14 09:08:41.488 2 DEBUG nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:08:41 compute-0 nova_compute[259627]: 2025-10-14 09:08:41.489 2 DEBUG nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Ensure instance console log exists: /var/lib/nova/instances/f74daf08-3420-4644-8fc8-b1450b733908/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:08:41 compute-0 nova_compute[259627]: 2025-10-14 09:08:41.489 2 DEBUG oslo_concurrency.lockutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:41 compute-0 nova_compute[259627]: 2025-10-14 09:08:41.489 2 DEBUG oslo_concurrency.lockutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:41 compute-0 nova_compute[259627]: 2025-10-14 09:08:41.490 2 DEBUG oslo_concurrency.lockutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:41 compute-0 ovn_controller[152662]: 2025-10-14T09:08:41Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:01:e8:10 10.100.0.7
Oct 14 09:08:41 compute-0 ovn_controller[152662]: 2025-10-14T09:08:41Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:e8:10 10.100.0.7
Oct 14 09:08:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:08:42 compute-0 nova_compute[259627]: 2025-10-14 09:08:42.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:42 compute-0 nova_compute[259627]: 2025-10-14 09:08:42.391 2 DEBUG nova.network.neutron [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Successfully created port: 28b0076a-cf34-43f8-b4e7-11c60a4699bd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:08:42 compute-0 nova_compute[259627]: 2025-10-14 09:08:42.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:08:43 compute-0 ceph-mon[74249]: pgmap v1615: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001108513218727616 of space, bias 1.0, pg target 0.3325539656182848 quantized to 32 (current 32)
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:08:43 compute-0 nova_compute[259627]: 2025-10-14 09:08:43.667 2 DEBUG nova.network.neutron [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Successfully updated port: 28b0076a-cf34-43f8-b4e7-11c60a4699bd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:08:43 compute-0 nova_compute[259627]: 2025-10-14 09:08:43.683 2 DEBUG oslo_concurrency.lockutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "refresh_cache-f74daf08-3420-4644-8fc8-b1450b733908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:08:43 compute-0 nova_compute[259627]: 2025-10-14 09:08:43.684 2 DEBUG oslo_concurrency.lockutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquired lock "refresh_cache-f74daf08-3420-4644-8fc8-b1450b733908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:08:43 compute-0 nova_compute[259627]: 2025-10-14 09:08:43.684 2 DEBUG nova.network.neutron [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:08:43 compute-0 nova_compute[259627]: 2025-10-14 09:08:43.765 2 DEBUG nova.compute.manager [req-11efca39-3948-4215-86fd-7aec3cacbf8f req-185df1b2-42ea-45f9-a0e4-2d915284c938 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Received event network-changed-28b0076a-cf34-43f8-b4e7-11c60a4699bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:43 compute-0 nova_compute[259627]: 2025-10-14 09:08:43.766 2 DEBUG nova.compute.manager [req-11efca39-3948-4215-86fd-7aec3cacbf8f req-185df1b2-42ea-45f9-a0e4-2d915284c938 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Refreshing instance network info cache due to event network-changed-28b0076a-cf34-43f8-b4e7-11c60a4699bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:08:43 compute-0 nova_compute[259627]: 2025-10-14 09:08:43.767 2 DEBUG oslo_concurrency.lockutils [req-11efca39-3948-4215-86fd-7aec3cacbf8f req-185df1b2-42ea-45f9-a0e4-2d915284c938 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f74daf08-3420-4644-8fc8-b1450b733908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:08:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 14 09:08:43 compute-0 nova_compute[259627]: 2025-10-14 09:08:43.881 2 DEBUG nova.network.neutron [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:08:45 compute-0 ceph-mon[74249]: pgmap v1616: 305 pgs: 305 active+clean; 167 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 14 09:08:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 246 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 151 op/s
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.823 2 DEBUG nova.network.neutron [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Updating instance_info_cache with network_info: [{"id": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "address": "fa:16:3e:b0:7c:0b", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28b0076a-cf", "ovs_interfaceid": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.842 2 DEBUG oslo_concurrency.lockutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Releasing lock "refresh_cache-f74daf08-3420-4644-8fc8-b1450b733908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.842 2 DEBUG nova.compute.manager [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Instance network_info: |[{"id": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "address": "fa:16:3e:b0:7c:0b", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28b0076a-cf", "ovs_interfaceid": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.843 2 DEBUG oslo_concurrency.lockutils [req-11efca39-3948-4215-86fd-7aec3cacbf8f req-185df1b2-42ea-45f9-a0e4-2d915284c938 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f74daf08-3420-4644-8fc8-b1450b733908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.843 2 DEBUG nova.network.neutron [req-11efca39-3948-4215-86fd-7aec3cacbf8f req-185df1b2-42ea-45f9-a0e4-2d915284c938 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Refreshing network info cache for port 28b0076a-cf34-43f8-b4e7-11c60a4699bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.849 2 DEBUG nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Start _get_guest_xml network_info=[{"id": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "address": "fa:16:3e:b0:7c:0b", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28b0076a-cf", "ovs_interfaceid": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.856 2 WARNING nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.867 2 DEBUG nova.virt.libvirt.host [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.868 2 DEBUG nova.virt.libvirt.host [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.873 2 DEBUG nova.virt.libvirt.host [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.874 2 DEBUG nova.virt.libvirt.host [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.875 2 DEBUG nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.875 2 DEBUG nova.virt.hardware [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.876 2 DEBUG nova.virt.hardware [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.876 2 DEBUG nova.virt.hardware [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.877 2 DEBUG nova.virt.hardware [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.877 2 DEBUG nova.virt.hardware [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.878 2 DEBUG nova.virt.hardware [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.878 2 DEBUG nova.virt.hardware [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.879 2 DEBUG nova.virt.hardware [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.879 2 DEBUG nova.virt.hardware [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.880 2 DEBUG nova.virt.hardware [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.880 2 DEBUG nova.virt.hardware [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:08:45 compute-0 nova_compute[259627]: 2025-10-14 09:08:45.885 2 DEBUG oslo_concurrency.processutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.217 2 DEBUG oslo_concurrency.lockutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.218 2 DEBUG oslo_concurrency.lockutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.241 2 DEBUG nova.compute.manager [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.358 2 DEBUG oslo_concurrency.lockutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.359 2 DEBUG oslo_concurrency.lockutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.368 2 DEBUG nova.virt.hardware [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.368 2 INFO nova.compute.claims [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:08:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:08:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1648911438' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.424 2 DEBUG oslo_concurrency.processutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.461 2 DEBUG nova.storage.rbd_utils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image f74daf08-3420-4644-8fc8-b1450b733908_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.469 2 DEBUG oslo_concurrency.processutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.607 2 DEBUG oslo_concurrency.processutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:08:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1910589750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.947 2 DEBUG oslo_concurrency.processutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.950 2 DEBUG nova.virt.libvirt.vif [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:08:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-485914830',display_name='tempest-ServerActionsTestOtherB-server-485914830',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-485914830',id=69,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4bda6775f81f403e83269a5f798c9853',ramdisk_id='',reservation_id='r-j3infv6r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-381012378',owner_user_name='tempest-ServerActionsTestOtherB
-381012378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:08:40Z,user_data=None,user_id='695c749a8dce4506a31e2cec4f02876b',uuid=f74daf08-3420-4644-8fc8-b1450b733908,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "address": "fa:16:3e:b0:7c:0b", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28b0076a-cf", "ovs_interfaceid": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.951 2 DEBUG nova.network.os_vif_util [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converting VIF {"id": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "address": "fa:16:3e:b0:7c:0b", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28b0076a-cf", "ovs_interfaceid": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.952 2 DEBUG nova.network.os_vif_util [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7c:0b,bridge_name='br-int',has_traffic_filtering=True,id=28b0076a-cf34-43f8-b4e7-11c60a4699bd,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28b0076a-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.955 2 DEBUG nova.objects.instance [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'pci_devices' on Instance uuid f74daf08-3420-4644-8fc8-b1450b733908 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.977 2 DEBUG nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:08:46 compute-0 nova_compute[259627]:   <uuid>f74daf08-3420-4644-8fc8-b1450b733908</uuid>
Oct 14 09:08:46 compute-0 nova_compute[259627]:   <name>instance-00000045</name>
Oct 14 09:08:46 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:08:46 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:08:46 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerActionsTestOtherB-server-485914830</nova:name>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:08:45</nova:creationTime>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:08:46 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:08:46 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:08:46 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:08:46 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:08:46 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:08:46 compute-0 nova_compute[259627]:         <nova:user uuid="695c749a8dce4506a31e2cec4f02876b">tempest-ServerActionsTestOtherB-381012378-project-member</nova:user>
Oct 14 09:08:46 compute-0 nova_compute[259627]:         <nova:project uuid="4bda6775f81f403e83269a5f798c9853">tempest-ServerActionsTestOtherB-381012378</nova:project>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:08:46 compute-0 nova_compute[259627]:         <nova:port uuid="28b0076a-cf34-43f8-b4e7-11c60a4699bd">
Oct 14 09:08:46 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:08:46 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:08:46 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <system>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <entry name="serial">f74daf08-3420-4644-8fc8-b1450b733908</entry>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <entry name="uuid">f74daf08-3420-4644-8fc8-b1450b733908</entry>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     </system>
Oct 14 09:08:46 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:08:46 compute-0 nova_compute[259627]:   <os>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:   </os>
Oct 14 09:08:46 compute-0 nova_compute[259627]:   <features>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:   </features>
Oct 14 09:08:46 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:08:46 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:08:46 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/f74daf08-3420-4644-8fc8-b1450b733908_disk">
Oct 14 09:08:46 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       </source>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:08:46 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/f74daf08-3420-4644-8fc8-b1450b733908_disk.config">
Oct 14 09:08:46 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       </source>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:08:46 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:b0:7c:0b"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <target dev="tap28b0076a-cf"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/f74daf08-3420-4644-8fc8-b1450b733908/console.log" append="off"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <video>
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     </video>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:08:46 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:08:46 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:08:46 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:08:46 compute-0 nova_compute[259627]: </domain>
Oct 14 09:08:46 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.980 2 DEBUG nova.compute.manager [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Preparing to wait for external event network-vif-plugged-28b0076a-cf34-43f8-b4e7-11c60a4699bd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.981 2 DEBUG oslo_concurrency.lockutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "f74daf08-3420-4644-8fc8-b1450b733908-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.981 2 DEBUG oslo_concurrency.lockutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "f74daf08-3420-4644-8fc8-b1450b733908-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.982 2 DEBUG oslo_concurrency.lockutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "f74daf08-3420-4644-8fc8-b1450b733908-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.984 2 DEBUG nova.virt.libvirt.vif [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:08:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-485914830',display_name='tempest-ServerActionsTestOtherB-server-485914830',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-485914830',id=69,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4bda6775f81f403e83269a5f798c9853',ramdisk_id='',reservation_id='r-j3infv6r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-381012378',owner_user_name='tempest-ServerActionsTestOtherB-381012378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:08:40Z,user_data=None,user_id='695c749a8dce4506a31e2cec4f02876b',uuid=f74daf08-3420-4644-8fc8-b1450b733908,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "address": "fa:16:3e:b0:7c:0b", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28b0076a-cf", "ovs_interfaceid": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.985 2 DEBUG nova.network.os_vif_util [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converting VIF {"id": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "address": "fa:16:3e:b0:7c:0b", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28b0076a-cf", "ovs_interfaceid": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.986 2 DEBUG nova.network.os_vif_util [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7c:0b,bridge_name='br-int',has_traffic_filtering=True,id=28b0076a-cf34-43f8-b4e7-11c60a4699bd,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28b0076a-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.987 2 DEBUG os_vif [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7c:0b,bridge_name='br-int',has_traffic_filtering=True,id=28b0076a-cf34-43f8-b4e7-11c60a4699bd,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28b0076a-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.997 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:46 compute-0 nova_compute[259627]: 2025-10-14 09:08:46.998 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.004 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28b0076a-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.005 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap28b0076a-cf, col_values=(('external_ids', {'iface-id': '28b0076a-cf34-43f8-b4e7-11c60a4699bd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:7c:0b', 'vm-uuid': 'f74daf08-3420-4644-8fc8-b1450b733908'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:47 compute-0 NetworkManager[44885]: <info>  [1760432927.0076] manager: (tap28b0076a-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.014 2 INFO os_vif [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7c:0b,bridge_name='br-int',has_traffic_filtering=True,id=28b0076a-cf34-43f8-b4e7-11c60a4699bd,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28b0076a-cf')
Oct 14 09:08:47 compute-0 ceph-mon[74249]: pgmap v1617: 305 pgs: 305 active+clean; 246 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 151 op/s
Oct 14 09:08:47 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1648911438' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:08:47 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1910589750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:08:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:08:47 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/761850762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.072 2 DEBUG nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.073 2 DEBUG nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.073 2 DEBUG nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No VIF found with MAC fa:16:3e:b0:7c:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.074 2 INFO nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Using config drive
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.097 2 DEBUG nova.storage.rbd_utils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image f74daf08-3420-4644-8fc8-b1450b733908_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.103 2 DEBUG oslo_concurrency.processutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.110 2 DEBUG nova.compute.provider_tree [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.142 2 DEBUG nova.scheduler.client.report [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.171 2 DEBUG oslo_concurrency.lockutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.172 2 DEBUG nova.compute.manager [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.223 2 DEBUG nova.compute.manager [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.223 2 DEBUG nova.network.neutron [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.245 2 INFO nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.254 2 DEBUG nova.network.neutron [req-11efca39-3948-4215-86fd-7aec3cacbf8f req-185df1b2-42ea-45f9-a0e4-2d915284c938 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Updated VIF entry in instance network info cache for port 28b0076a-cf34-43f8-b4e7-11c60a4699bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.254 2 DEBUG nova.network.neutron [req-11efca39-3948-4215-86fd-7aec3cacbf8f req-185df1b2-42ea-45f9-a0e4-2d915284c938 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Updating instance_info_cache with network_info: [{"id": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "address": "fa:16:3e:b0:7c:0b", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28b0076a-cf", "ovs_interfaceid": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.269 2 DEBUG nova.compute.manager [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.274 2 DEBUG oslo_concurrency.lockutils [req-11efca39-3948-4215-86fd-7aec3cacbf8f req-185df1b2-42ea-45f9-a0e4-2d915284c938 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f74daf08-3420-4644-8fc8-b1450b733908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.363 2 DEBUG nova.compute.manager [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.365 2 DEBUG nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.366 2 INFO nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Creating image(s)
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.401 2 DEBUG nova.storage.rbd_utils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.438 2 DEBUG nova.storage.rbd_utils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.473 2 DEBUG nova.storage.rbd_utils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.478 2 DEBUG oslo_concurrency.processutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.573 2 DEBUG oslo_concurrency.processutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.574 2 DEBUG oslo_concurrency.lockutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.575 2 DEBUG oslo_concurrency.lockutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.576 2 DEBUG oslo_concurrency.lockutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:47.603 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:08:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:47.604 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.616 2 DEBUG nova.storage.rbd_utils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.637 2 DEBUG oslo_concurrency.processutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.683 2 DEBUG nova.policy [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '979aa20794dc414f91c59f224a0db083', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9099e3128b584ff7a140b8021451223e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:08:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:08:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 246 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 302 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.926 2 DEBUG oslo_concurrency.processutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.960 2 INFO nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Creating config drive at /var/lib/nova/instances/f74daf08-3420-4644-8fc8-b1450b733908/disk.config
Oct 14 09:08:47 compute-0 nova_compute[259627]: 2025-10-14 09:08:47.966 2 DEBUG oslo_concurrency.processutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f74daf08-3420-4644-8fc8-b1450b733908/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppshw5h6y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:48 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/761850762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:48 compute-0 nova_compute[259627]: 2025-10-14 09:08:48.042 2 DEBUG nova.storage.rbd_utils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] resizing rbd image e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:08:48 compute-0 nova_compute[259627]: 2025-10-14 09:08:48.139 2 DEBUG oslo_concurrency.processutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f74daf08-3420-4644-8fc8-b1450b733908/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppshw5h6y" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:48 compute-0 nova_compute[259627]: 2025-10-14 09:08:48.164 2 DEBUG nova.storage.rbd_utils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image f74daf08-3420-4644-8fc8-b1450b733908_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:48 compute-0 nova_compute[259627]: 2025-10-14 09:08:48.167 2 DEBUG oslo_concurrency.processutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f74daf08-3420-4644-8fc8-b1450b733908/disk.config f74daf08-3420-4644-8fc8-b1450b733908_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:48 compute-0 nova_compute[259627]: 2025-10-14 09:08:48.215 2 DEBUG nova.objects.instance [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'migration_context' on Instance uuid e9ccbc44-715e-4419-9286-f0cb6e41d9cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:08:48 compute-0 nova_compute[259627]: 2025-10-14 09:08:48.237 2 DEBUG nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:08:48 compute-0 nova_compute[259627]: 2025-10-14 09:08:48.238 2 DEBUG nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Ensure instance console log exists: /var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:08:48 compute-0 nova_compute[259627]: 2025-10-14 09:08:48.239 2 DEBUG oslo_concurrency.lockutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:48 compute-0 nova_compute[259627]: 2025-10-14 09:08:48.239 2 DEBUG oslo_concurrency.lockutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:48 compute-0 nova_compute[259627]: 2025-10-14 09:08:48.239 2 DEBUG oslo_concurrency.lockutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:48 compute-0 nova_compute[259627]: 2025-10-14 09:08:48.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:48 compute-0 nova_compute[259627]: 2025-10-14 09:08:48.350 2 DEBUG oslo_concurrency.processutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f74daf08-3420-4644-8fc8-b1450b733908/disk.config f74daf08-3420-4644-8fc8-b1450b733908_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:48 compute-0 nova_compute[259627]: 2025-10-14 09:08:48.351 2 INFO nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Deleting local config drive /var/lib/nova/instances/f74daf08-3420-4644-8fc8-b1450b733908/disk.config because it was imported into RBD.
Oct 14 09:08:48 compute-0 NetworkManager[44885]: <info>  [1760432928.4095] manager: (tap28b0076a-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/294)
Oct 14 09:08:48 compute-0 kernel: tap28b0076a-cf: entered promiscuous mode
Oct 14 09:08:48 compute-0 nova_compute[259627]: 2025-10-14 09:08:48.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:48 compute-0 ovn_controller[152662]: 2025-10-14T09:08:48Z|00690|binding|INFO|Claiming lport 28b0076a-cf34-43f8-b4e7-11c60a4699bd for this chassis.
Oct 14 09:08:48 compute-0 ovn_controller[152662]: 2025-10-14T09:08:48Z|00691|binding|INFO|28b0076a-cf34-43f8-b4e7-11c60a4699bd: Claiming fa:16:3e:b0:7c:0b 10.100.0.10
Oct 14 09:08:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:48.423 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:7c:0b 10.100.0.10'], port_security=['fa:16:3e:b0:7c:0b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f74daf08-3420-4644-8fc8-b1450b733908', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bda6775f81f403e83269a5f798c9853', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd4ffd682-0c28-40f2-a6f1-d3d67aecef45', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88e90b59-4c4c-42c1-a4ed-574ac64367e5, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=28b0076a-cf34-43f8-b4e7-11c60a4699bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:08:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:48.424 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 28b0076a-cf34-43f8-b4e7-11c60a4699bd in datapath 9d540b01-e9c4-4dc5-9a51-94512ad9a409 bound to our chassis
Oct 14 09:08:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:48.426 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d540b01-e9c4-4dc5-9a51-94512ad9a409
Oct 14 09:08:48 compute-0 ovn_controller[152662]: 2025-10-14T09:08:48Z|00692|binding|INFO|Setting lport 28b0076a-cf34-43f8-b4e7-11c60a4699bd ovn-installed in OVS
Oct 14 09:08:48 compute-0 ovn_controller[152662]: 2025-10-14T09:08:48Z|00693|binding|INFO|Setting lport 28b0076a-cf34-43f8-b4e7-11c60a4699bd up in Southbound
Oct 14 09:08:48 compute-0 nova_compute[259627]: 2025-10-14 09:08:48.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:48 compute-0 nova_compute[259627]: 2025-10-14 09:08:48.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:48.448 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc1f07b-c459-4f19-8ee0-b63aef8c2ed3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:48 compute-0 systemd-machined[214636]: New machine qemu-83-instance-00000045.
Oct 14 09:08:48 compute-0 systemd[1]: Started Virtual Machine qemu-83-instance-00000045.
Oct 14 09:08:48 compute-0 systemd-udevd[332007]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:08:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:48.481 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3790f705-be98-469a-907c-ed4e4c28e2fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:48.486 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[89f0180c-ac90-42ad-b068-33262fac1e46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:48 compute-0 NetworkManager[44885]: <info>  [1760432928.4987] device (tap28b0076a-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:08:48 compute-0 NetworkManager[44885]: <info>  [1760432928.4997] device (tap28b0076a-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:08:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:48.516 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f81285e0-c3ef-454e-8183-81a0611915d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:48.542 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0212957c-b621-43c8-9995-b6756f642613]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d540b01-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:6a:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662158, 'reachable_time': 22381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332015, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:48.566 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f5875c28-549b-48cc-8499-b4f24a5879db]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9d540b01-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662169, 'tstamp': 662169}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332018, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9d540b01-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662173, 'tstamp': 662173}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332018, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:48.568 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d540b01-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:48 compute-0 nova_compute[259627]: 2025-10-14 09:08:48.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:48 compute-0 nova_compute[259627]: 2025-10-14 09:08:48.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:48.573 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d540b01-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:48.573 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:08:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:48.574 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d540b01-e0, col_values=(('external_ids', {'iface-id': 'fcca615a-5470-4880-844d-73adc425bce1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:48.574 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:08:49 compute-0 ceph-mon[74249]: pgmap v1618: 305 pgs: 305 active+clean; 246 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 302 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Oct 14 09:08:49 compute-0 nova_compute[259627]: 2025-10-14 09:08:49.242 2 DEBUG nova.network.neutron [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Successfully created port: 67728610-6776-4496-98b2-a14f59c9674d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:08:49 compute-0 nova_compute[259627]: 2025-10-14 09:08:49.255 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432929.255382, f74daf08-3420-4644-8fc8-b1450b733908 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:08:49 compute-0 nova_compute[259627]: 2025-10-14 09:08:49.256 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f74daf08-3420-4644-8fc8-b1450b733908] VM Started (Lifecycle Event)
Oct 14 09:08:49 compute-0 nova_compute[259627]: 2025-10-14 09:08:49.288 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:49 compute-0 nova_compute[259627]: 2025-10-14 09:08:49.296 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432929.2555583, f74daf08-3420-4644-8fc8-b1450b733908 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:08:49 compute-0 nova_compute[259627]: 2025-10-14 09:08:49.297 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f74daf08-3420-4644-8fc8-b1450b733908] VM Paused (Lifecycle Event)
Oct 14 09:08:49 compute-0 nova_compute[259627]: 2025-10-14 09:08:49.323 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:49 compute-0 nova_compute[259627]: 2025-10-14 09:08:49.328 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:08:49 compute-0 nova_compute[259627]: 2025-10-14 09:08:49.352 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f74daf08-3420-4644-8fc8-b1450b733908] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:08:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 246 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 302 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Oct 14 09:08:49 compute-0 sshd-session[332062]: Connection closed by 194.0.234.20 port 65105
Oct 14 09:08:51 compute-0 ceph-mon[74249]: pgmap v1619: 305 pgs: 305 active+clean; 246 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 302 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Oct 14 09:08:51 compute-0 nova_compute[259627]: 2025-10-14 09:08:51.103 2 DEBUG nova.network.neutron [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Successfully updated port: 67728610-6776-4496-98b2-a14f59c9674d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:08:51 compute-0 nova_compute[259627]: 2025-10-14 09:08:51.119 2 DEBUG oslo_concurrency.lockutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "refresh_cache-e9ccbc44-715e-4419-9286-f0cb6e41d9cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:08:51 compute-0 nova_compute[259627]: 2025-10-14 09:08:51.120 2 DEBUG oslo_concurrency.lockutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquired lock "refresh_cache-e9ccbc44-715e-4419-9286-f0cb6e41d9cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:08:51 compute-0 nova_compute[259627]: 2025-10-14 09:08:51.120 2 DEBUG nova.network.neutron [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:08:51 compute-0 nova_compute[259627]: 2025-10-14 09:08:51.508 2 DEBUG nova.network.neutron [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:08:51 compute-0 nova_compute[259627]: 2025-10-14 09:08:51.642 2 DEBUG nova.compute.manager [req-1a1efa1e-1a9f-4cfb-9c10-c34efcc89695 req-7e363309-6e9c-43cf-8963-7532704445db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Received event network-changed-67728610-6776-4496-98b2-a14f59c9674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:51 compute-0 nova_compute[259627]: 2025-10-14 09:08:51.642 2 DEBUG nova.compute.manager [req-1a1efa1e-1a9f-4cfb-9c10-c34efcc89695 req-7e363309-6e9c-43cf-8963-7532704445db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Refreshing instance network info cache due to event network-changed-67728610-6776-4496-98b2-a14f59c9674d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:08:51 compute-0 nova_compute[259627]: 2025-10-14 09:08:51.643 2 DEBUG oslo_concurrency.lockutils [req-1a1efa1e-1a9f-4cfb-9c10-c34efcc89695 req-7e363309-6e9c-43cf-8963-7532704445db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e9ccbc44-715e-4419-9286-f0cb6e41d9cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:08:51 compute-0 nova_compute[259627]: 2025-10-14 09:08:51.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 293 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 5.7 MiB/s wr, 124 op/s
Oct 14 09:08:52 compute-0 nova_compute[259627]: 2025-10-14 09:08:52.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:52 compute-0 nova_compute[259627]: 2025-10-14 09:08:52.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:08:53 compute-0 ceph-mon[74249]: pgmap v1620: 305 pgs: 305 active+clean; 293 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 5.7 MiB/s wr, 124 op/s
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.484 2 DEBUG nova.network.neutron [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Updating instance_info_cache with network_info: [{"id": "67728610-6776-4496-98b2-a14f59c9674d", "address": "fa:16:3e:c3:77:95", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67728610-67", "ovs_interfaceid": "67728610-6776-4496-98b2-a14f59c9674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.503 2 DEBUG oslo_concurrency.lockutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Releasing lock "refresh_cache-e9ccbc44-715e-4419-9286-f0cb6e41d9cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.504 2 DEBUG nova.compute.manager [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Instance network_info: |[{"id": "67728610-6776-4496-98b2-a14f59c9674d", "address": "fa:16:3e:c3:77:95", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67728610-67", "ovs_interfaceid": "67728610-6776-4496-98b2-a14f59c9674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.505 2 DEBUG oslo_concurrency.lockutils [req-1a1efa1e-1a9f-4cfb-9c10-c34efcc89695 req-7e363309-6e9c-43cf-8963-7532704445db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e9ccbc44-715e-4419-9286-f0cb6e41d9cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.505 2 DEBUG nova.network.neutron [req-1a1efa1e-1a9f-4cfb-9c10-c34efcc89695 req-7e363309-6e9c-43cf-8963-7532704445db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Refreshing network info cache for port 67728610-6776-4496-98b2-a14f59c9674d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.508 2 DEBUG nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Start _get_guest_xml network_info=[{"id": "67728610-6776-4496-98b2-a14f59c9674d", "address": "fa:16:3e:c3:77:95", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67728610-67", "ovs_interfaceid": "67728610-6776-4496-98b2-a14f59c9674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.513 2 WARNING nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.518 2 DEBUG nova.virt.libvirt.host [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.518 2 DEBUG nova.virt.libvirt.host [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.526 2 DEBUG nova.virt.libvirt.host [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.527 2 DEBUG nova.virt.libvirt.host [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.527 2 DEBUG nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.528 2 DEBUG nova.virt.hardware [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.528 2 DEBUG nova.virt.hardware [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.528 2 DEBUG nova.virt.hardware [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.529 2 DEBUG nova.virt.hardware [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.529 2 DEBUG nova.virt.hardware [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.529 2 DEBUG nova.virt.hardware [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.529 2 DEBUG nova.virt.hardware [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.530 2 DEBUG nova.virt.hardware [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.530 2 DEBUG nova.virt.hardware [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.530 2 DEBUG nova.virt.hardware [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.531 2 DEBUG nova.virt.hardware [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.533 2 DEBUG oslo_concurrency.processutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 293 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 5.7 MiB/s wr, 124 op/s
Oct 14 09:08:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:08:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1687252426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.959 2 DEBUG oslo_concurrency.processutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.980 2 DEBUG nova.storage.rbd_utils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:53 compute-0 nova_compute[259627]: 2025-10-14 09:08:53.984 2 DEBUG oslo_concurrency.processutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:54 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1687252426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:08:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:08:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2172675224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.419 2 DEBUG oslo_concurrency.processutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.422 2 DEBUG nova.virt.libvirt.vif [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:08:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1155536075',display_name='tempest-ServerDiskConfigTestJSON-server-1155536075',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1155536075',id=70,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9099e3128b584ff7a140b8021451223e',ramdisk_id='',reservation_id='r-ak09v01b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1253454894',owner_user_name='tempest-ServerDiskConfigTestJSON-1253454894-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:08:47Z,user_data=None,user_id='979aa20794dc414f91c59f224a0db083',uuid=e9ccbc44-715e-4419-9286-f0cb6e41d9cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67728610-6776-4496-98b2-a14f59c9674d", "address": "fa:16:3e:c3:77:95", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67728610-67", "ovs_interfaceid": "67728610-6776-4496-98b2-a14f59c9674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.423 2 DEBUG nova.network.os_vif_util [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converting VIF {"id": "67728610-6776-4496-98b2-a14f59c9674d", "address": "fa:16:3e:c3:77:95", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67728610-67", "ovs_interfaceid": "67728610-6776-4496-98b2-a14f59c9674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.424 2 DEBUG nova.network.os_vif_util [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:77:95,bridge_name='br-int',has_traffic_filtering=True,id=67728610-6776-4496-98b2-a14f59c9674d,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67728610-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.427 2 DEBUG nova.objects.instance [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'pci_devices' on Instance uuid e9ccbc44-715e-4419-9286-f0cb6e41d9cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.449 2 DEBUG nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:08:54 compute-0 nova_compute[259627]:   <uuid>e9ccbc44-715e-4419-9286-f0cb6e41d9cd</uuid>
Oct 14 09:08:54 compute-0 nova_compute[259627]:   <name>instance-00000046</name>
Oct 14 09:08:54 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:08:54 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:08:54 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1155536075</nova:name>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:08:53</nova:creationTime>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:08:54 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:08:54 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:08:54 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:08:54 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:08:54 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:08:54 compute-0 nova_compute[259627]:         <nova:user uuid="979aa20794dc414f91c59f224a0db083">tempest-ServerDiskConfigTestJSON-1253454894-project-member</nova:user>
Oct 14 09:08:54 compute-0 nova_compute[259627]:         <nova:project uuid="9099e3128b584ff7a140b8021451223e">tempest-ServerDiskConfigTestJSON-1253454894</nova:project>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:08:54 compute-0 nova_compute[259627]:         <nova:port uuid="67728610-6776-4496-98b2-a14f59c9674d">
Oct 14 09:08:54 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:08:54 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:08:54 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <system>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <entry name="serial">e9ccbc44-715e-4419-9286-f0cb6e41d9cd</entry>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <entry name="uuid">e9ccbc44-715e-4419-9286-f0cb6e41d9cd</entry>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     </system>
Oct 14 09:08:54 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:08:54 compute-0 nova_compute[259627]:   <os>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:   </os>
Oct 14 09:08:54 compute-0 nova_compute[259627]:   <features>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:   </features>
Oct 14 09:08:54 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:08:54 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:08:54 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk">
Oct 14 09:08:54 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       </source>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:08:54 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk.config">
Oct 14 09:08:54 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       </source>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:08:54 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:c3:77:95"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <target dev="tap67728610-67"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd/console.log" append="off"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <video>
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     </video>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:08:54 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:08:54 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:08:54 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:08:54 compute-0 nova_compute[259627]: </domain>
Oct 14 09:08:54 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.451 2 DEBUG nova.compute.manager [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Preparing to wait for external event network-vif-plugged-67728610-6776-4496-98b2-a14f59c9674d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.452 2 DEBUG oslo_concurrency.lockutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.452 2 DEBUG oslo_concurrency.lockutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.453 2 DEBUG oslo_concurrency.lockutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.454 2 DEBUG nova.virt.libvirt.vif [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:08:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1155536075',display_name='tempest-ServerDiskConfigTestJSON-server-1155536075',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1155536075',id=70,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9099e3128b584ff7a140b8021451223e',ramdisk_id='',reservation_id='r-ak09v01b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1253454894',owner_user_name='tempest-ServerDiskConfigTestJSON-1253454894-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:08:47Z,user_data=None,user_id='979aa20794dc414f91c59f224a0db083',uuid=e9ccbc44-715e-4419-9286-f0cb6e41d9cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67728610-6776-4496-98b2-a14f59c9674d", "address": "fa:16:3e:c3:77:95", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67728610-67", "ovs_interfaceid": "67728610-6776-4496-98b2-a14f59c9674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.455 2 DEBUG nova.network.os_vif_util [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converting VIF {"id": "67728610-6776-4496-98b2-a14f59c9674d", "address": "fa:16:3e:c3:77:95", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67728610-67", "ovs_interfaceid": "67728610-6776-4496-98b2-a14f59c9674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.456 2 DEBUG nova.network.os_vif_util [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:77:95,bridge_name='br-int',has_traffic_filtering=True,id=67728610-6776-4496-98b2-a14f59c9674d,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67728610-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.457 2 DEBUG os_vif [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:77:95,bridge_name='br-int',has_traffic_filtering=True,id=67728610-6776-4496-98b2-a14f59c9674d,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67728610-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.459 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.459 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.464 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67728610-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.465 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap67728610-67, col_values=(('external_ids', {'iface-id': '67728610-6776-4496-98b2-a14f59c9674d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:77:95', 'vm-uuid': 'e9ccbc44-715e-4419-9286-f0cb6e41d9cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:54 compute-0 NetworkManager[44885]: <info>  [1760432934.4683] manager: (tap67728610-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/295)
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.479 2 INFO os_vif [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:77:95,bridge_name='br-int',has_traffic_filtering=True,id=67728610-6776-4496-98b2-a14f59c9674d,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67728610-67')
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.564 2 DEBUG nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.565 2 DEBUG nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.566 2 DEBUG nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] No VIF found with MAC fa:16:3e:c3:77:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.567 2 INFO nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Using config drive
Oct 14 09:08:54 compute-0 nova_compute[259627]: 2025-10-14 09:08:54.602 2 DEBUG nova.storage.rbd_utils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:55 compute-0 ceph-mon[74249]: pgmap v1621: 305 pgs: 305 active+clean; 293 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 5.7 MiB/s wr, 124 op/s
Oct 14 09:08:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2172675224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.322 2 INFO nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Creating config drive at /var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd/disk.config
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.326 2 DEBUG oslo_concurrency.processutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9_qmz9u2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.471 2 DEBUG oslo_concurrency.processutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9_qmz9u2" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.510 2 DEBUG nova.storage.rbd_utils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.514 2 DEBUG oslo_concurrency.processutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd/disk.config e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.566 2 DEBUG nova.compute.manager [req-da165e73-7c6a-4f9c-bc77-41d7e52ce01a req-35e9b991-6d3b-423d-a929-be2e8aae8eb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Received event network-vif-plugged-28b0076a-cf34-43f8-b4e7-11c60a4699bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.567 2 DEBUG oslo_concurrency.lockutils [req-da165e73-7c6a-4f9c-bc77-41d7e52ce01a req-35e9b991-6d3b-423d-a929-be2e8aae8eb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f74daf08-3420-4644-8fc8-b1450b733908-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.568 2 DEBUG oslo_concurrency.lockutils [req-da165e73-7c6a-4f9c-bc77-41d7e52ce01a req-35e9b991-6d3b-423d-a929-be2e8aae8eb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f74daf08-3420-4644-8fc8-b1450b733908-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.568 2 DEBUG oslo_concurrency.lockutils [req-da165e73-7c6a-4f9c-bc77-41d7e52ce01a req-35e9b991-6d3b-423d-a929-be2e8aae8eb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f74daf08-3420-4644-8fc8-b1450b733908-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.569 2 DEBUG nova.compute.manager [req-da165e73-7c6a-4f9c-bc77-41d7e52ce01a req-35e9b991-6d3b-423d-a929-be2e8aae8eb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Processing event network-vif-plugged-28b0076a-cf34-43f8-b4e7-11c60a4699bd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.569 2 DEBUG nova.compute.manager [req-da165e73-7c6a-4f9c-bc77-41d7e52ce01a req-35e9b991-6d3b-423d-a929-be2e8aae8eb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Received event network-vif-plugged-28b0076a-cf34-43f8-b4e7-11c60a4699bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.570 2 DEBUG oslo_concurrency.lockutils [req-da165e73-7c6a-4f9c-bc77-41d7e52ce01a req-35e9b991-6d3b-423d-a929-be2e8aae8eb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f74daf08-3420-4644-8fc8-b1450b733908-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.570 2 DEBUG oslo_concurrency.lockutils [req-da165e73-7c6a-4f9c-bc77-41d7e52ce01a req-35e9b991-6d3b-423d-a929-be2e8aae8eb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f74daf08-3420-4644-8fc8-b1450b733908-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.571 2 DEBUG oslo_concurrency.lockutils [req-da165e73-7c6a-4f9c-bc77-41d7e52ce01a req-35e9b991-6d3b-423d-a929-be2e8aae8eb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f74daf08-3420-4644-8fc8-b1450b733908-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.571 2 DEBUG nova.compute.manager [req-da165e73-7c6a-4f9c-bc77-41d7e52ce01a req-35e9b991-6d3b-423d-a929-be2e8aae8eb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] No waiting events found dispatching network-vif-plugged-28b0076a-cf34-43f8-b4e7-11c60a4699bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.571 2 WARNING nova.compute.manager [req-da165e73-7c6a-4f9c-bc77-41d7e52ce01a req-35e9b991-6d3b-423d-a929-be2e8aae8eb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Received unexpected event network-vif-plugged-28b0076a-cf34-43f8-b4e7-11c60a4699bd for instance with vm_state building and task_state spawning.
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.573 2 DEBUG nova.compute.manager [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.582 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432935.581481, f74daf08-3420-4644-8fc8-b1450b733908 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.582 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f74daf08-3420-4644-8fc8-b1450b733908] VM Resumed (Lifecycle Event)
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.586 2 DEBUG nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.590 2 INFO nova.virt.libvirt.driver [-] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Instance spawned successfully.
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.590 2 DEBUG nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.606 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.613 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.619 2 DEBUG nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.620 2 DEBUG nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.621 2 DEBUG nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.621 2 DEBUG nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.622 2 DEBUG nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.624 2 DEBUG nova.virt.libvirt.driver [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.630 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f74daf08-3420-4644-8fc8-b1450b733908] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.683 2 INFO nova.compute.manager [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Took 14.87 seconds to spawn the instance on the hypervisor.
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.684 2 DEBUG nova.compute.manager [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:55 compute-0 podman[332169]: 2025-10-14 09:08:55.705795941 +0000 UTC m=+0.107466527 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:08:55 compute-0 podman[332168]: 2025-10-14 09:08:55.709142644 +0000 UTC m=+0.119554106 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.737 2 INFO nova.compute.manager [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Took 15.92 seconds to build instance.
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.743 2 DEBUG oslo_concurrency.processutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd/disk.config e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.744 2 INFO nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Deleting local config drive /var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd/disk.config because it was imported into RBD.
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.751 2 DEBUG oslo_concurrency.lockutils [None req-fe63e927-9813-4f86-8c7a-7d114ea8dac8 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "f74daf08-3420-4644-8fc8-b1450b733908" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.984s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 293 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 5.7 MiB/s wr, 124 op/s
Oct 14 09:08:55 compute-0 kernel: tap67728610-67: entered promiscuous mode
Oct 14 09:08:55 compute-0 NetworkManager[44885]: <info>  [1760432935.8120] manager: (tap67728610-67): new Tun device (/org/freedesktop/NetworkManager/Devices/296)
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:55 compute-0 ovn_controller[152662]: 2025-10-14T09:08:55Z|00694|binding|INFO|Claiming lport 67728610-6776-4496-98b2-a14f59c9674d for this chassis.
Oct 14 09:08:55 compute-0 ovn_controller[152662]: 2025-10-14T09:08:55Z|00695|binding|INFO|67728610-6776-4496-98b2-a14f59c9674d: Claiming fa:16:3e:c3:77:95 10.100.0.13
Oct 14 09:08:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:55.826 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:77:95 10.100.0.13'], port_security=['fa:16:3e:c3:77:95 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e9ccbc44-715e-4419-9286-f0cb6e41d9cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99db3452-8467-4a2b-a51d-30679c346bb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9099e3128b584ff7a140b8021451223e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '14cbcf9b-48b8-496d-985e-160ef22d10a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cb6279-57ca-4d4a-8018-5af2d7c42670, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=67728610-6776-4496-98b2-a14f59c9674d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:08:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:55.828 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 67728610-6776-4496-98b2-a14f59c9674d in datapath 99db3452-8467-4a2b-a51d-30679c346bb2 bound to our chassis
Oct 14 09:08:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:55.830 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99db3452-8467-4a2b-a51d-30679c346bb2
Oct 14 09:08:55 compute-0 systemd-udevd[332243]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:08:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:55.850 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7855c93a-90dd-4eec-aece-bea764512589]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:55.853 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap99db3452-81 in ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:08:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:55.855 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap99db3452-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:08:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:55.855 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d6fddf61-0116-4ba8-971c-6011f84d2165]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:55.856 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[035163c1-1a2d-4c43-a90a-c14d39eb338e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:55 compute-0 ovn_controller[152662]: 2025-10-14T09:08:55Z|00696|binding|INFO|Setting lport 67728610-6776-4496-98b2-a14f59c9674d ovn-installed in OVS
Oct 14 09:08:55 compute-0 ovn_controller[152662]: 2025-10-14T09:08:55Z|00697|binding|INFO|Setting lport 67728610-6776-4496-98b2-a14f59c9674d up in Southbound
Oct 14 09:08:55 compute-0 nova_compute[259627]: 2025-10-14 09:08:55.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:55 compute-0 NetworkManager[44885]: <info>  [1760432935.8683] device (tap67728610-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:08:55 compute-0 NetworkManager[44885]: <info>  [1760432935.8706] device (tap67728610-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:08:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:55.884 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[cd08f49a-01f3-4829-a94d-50febca60d5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:55 compute-0 systemd-machined[214636]: New machine qemu-84-instance-00000046.
Oct 14 09:08:55 compute-0 systemd[1]: Started Virtual Machine qemu-84-instance-00000046.
Oct 14 09:08:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:55.919 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c59dba10-e6c6-4c50-9f05-ceb917bffb03]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:55.958 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3de68863-dd12-487d-b2e8-95d20a11483b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:55 compute-0 NetworkManager[44885]: <info>  [1760432935.9704] manager: (tap99db3452-80): new Veth device (/org/freedesktop/NetworkManager/Devices/297)
Oct 14 09:08:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:55.969 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4e25d790-88fb-4422-a4f2-60c33ab243c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:56.021 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[eb113484-7d9e-42f6-b2ba-228fc57561f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:56.024 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[406fe92d-5569-4afd-b743-264e6a7547f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:56 compute-0 NetworkManager[44885]: <info>  [1760432936.0523] device (tap99db3452-80): carrier: link connected
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:56.066 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8533f015-88aa-440b-8a07-0db407d4eceb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:56.090 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ced2caa3-29e6-4990-84e6-73dd8e4ce2a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99db3452-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:a6:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 202], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671481, 'reachable_time': 30140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332278, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:56.108 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aa521212-e2d7-44c0-a490-20be973aff8b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:a670'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671481, 'tstamp': 671481}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332279, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:56.127 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[942cebb6-1bf7-48aa-936d-6248b66d3510]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99db3452-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:a6:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 202], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671481, 'reachable_time': 30140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 332280, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:56.162 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d59ab2ed-4ab4-4a3f-b7a8-5df2830f3321]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:56.228 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7105df0d-4a1c-4e42-abc6-c0e176e3419c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:56.230 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99db3452-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:56.230 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:56.232 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99db3452-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:56 compute-0 nova_compute[259627]: 2025-10-14 09:08:56.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:56 compute-0 NetworkManager[44885]: <info>  [1760432936.2359] manager: (tap99db3452-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Oct 14 09:08:56 compute-0 kernel: tap99db3452-80: entered promiscuous mode
Oct 14 09:08:56 compute-0 nova_compute[259627]: 2025-10-14 09:08:56.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:56.257 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99db3452-80, col_values=(('external_ids', {'iface-id': '59e7d558-49d1-48cf-b926-27e93fe381b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:56 compute-0 ovn_controller[152662]: 2025-10-14T09:08:56Z|00698|binding|INFO|Releasing lport 59e7d558-49d1-48cf-b926-27e93fe381b1 from this chassis (sb_readonly=0)
Oct 14 09:08:56 compute-0 nova_compute[259627]: 2025-10-14 09:08:56.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:56.275 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/99db3452-8467-4a2b-a51d-30679c346bb2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/99db3452-8467-4a2b-a51d-30679c346bb2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:56.276 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e7088252-93f1-4a48-bbe4-5b3d254fae21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:56.277 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-99db3452-8467-4a2b-a51d-30679c346bb2
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/99db3452-8467-4a2b-a51d-30679c346bb2.pid.haproxy
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 99db3452-8467-4a2b-a51d-30679c346bb2
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:08:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:56.277 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'env', 'PROCESS_TAG=haproxy-99db3452-8467-4a2b-a51d-30679c346bb2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/99db3452-8467-4a2b-a51d-30679c346bb2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:08:56 compute-0 nova_compute[259627]: 2025-10-14 09:08:56.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:56 compute-0 nova_compute[259627]: 2025-10-14 09:08:56.417 2 DEBUG nova.network.neutron [req-1a1efa1e-1a9f-4cfb-9c10-c34efcc89695 req-7e363309-6e9c-43cf-8963-7532704445db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Updated VIF entry in instance network info cache for port 67728610-6776-4496-98b2-a14f59c9674d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:08:56 compute-0 nova_compute[259627]: 2025-10-14 09:08:56.418 2 DEBUG nova.network.neutron [req-1a1efa1e-1a9f-4cfb-9c10-c34efcc89695 req-7e363309-6e9c-43cf-8963-7532704445db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Updating instance_info_cache with network_info: [{"id": "67728610-6776-4496-98b2-a14f59c9674d", "address": "fa:16:3e:c3:77:95", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67728610-67", "ovs_interfaceid": "67728610-6776-4496-98b2-a14f59c9674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:08:56 compute-0 nova_compute[259627]: 2025-10-14 09:08:56.435 2 DEBUG oslo_concurrency.lockutils [req-1a1efa1e-1a9f-4cfb-9c10-c34efcc89695 req-7e363309-6e9c-43cf-8963-7532704445db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e9ccbc44-715e-4419-9286-f0cb6e41d9cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:08:56 compute-0 nova_compute[259627]: 2025-10-14 09:08:56.501 2 DEBUG oslo_concurrency.lockutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Acquiring lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:56 compute-0 nova_compute[259627]: 2025-10-14 09:08:56.501 2 DEBUG oslo_concurrency.lockutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:56 compute-0 nova_compute[259627]: 2025-10-14 09:08:56.515 2 DEBUG nova.compute.manager [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:08:56 compute-0 nova_compute[259627]: 2025-10-14 09:08:56.572 2 DEBUG oslo_concurrency.lockutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:56 compute-0 nova_compute[259627]: 2025-10-14 09:08:56.572 2 DEBUG oslo_concurrency.lockutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:56 compute-0 nova_compute[259627]: 2025-10-14 09:08:56.578 2 DEBUG nova.virt.hardware [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:08:56 compute-0 nova_compute[259627]: 2025-10-14 09:08:56.578 2 INFO nova.compute.claims [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:08:56 compute-0 podman[332312]: 2025-10-14 09:08:56.683390797 +0000 UTC m=+0.049907790 container create a67c9cafd189f26f4bee01f4597d52555bacfc28cff95f7963bd7b4c41e1b3ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:08:56 compute-0 nova_compute[259627]: 2025-10-14 09:08:56.709 2 DEBUG oslo_concurrency.processutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:56 compute-0 systemd[1]: Started libpod-conmon-a67c9cafd189f26f4bee01f4597d52555bacfc28cff95f7963bd7b4c41e1b3ff.scope.
Oct 14 09:08:56 compute-0 podman[332312]: 2025-10-14 09:08:56.65875981 +0000 UTC m=+0.025276783 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:08:56 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28b23b051526deddaa336b2f5c3fec20155e9d4d43143be390798695acdb5cf3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:08:56 compute-0 podman[332312]: 2025-10-14 09:08:56.804153441 +0000 UTC m=+0.170670514 container init a67c9cafd189f26f4bee01f4597d52555bacfc28cff95f7963bd7b4c41e1b3ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct 14 09:08:56 compute-0 podman[332312]: 2025-10-14 09:08:56.812847375 +0000 UTC m=+0.179364388 container start a67c9cafd189f26f4bee01f4597d52555bacfc28cff95f7963bd7b4c41e1b3ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:08:56 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[332329]: [NOTICE]   (332333) : New worker (332369) forked
Oct 14 09:08:56 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[332329]: [NOTICE]   (332333) : Loading success.
Oct 14 09:08:57 compute-0 ceph-mon[74249]: pgmap v1622: 305 pgs: 305 active+clean; 293 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 5.7 MiB/s wr, 124 op/s
Oct 14 09:08:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:08:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/968840312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.193 2 DEBUG oslo_concurrency.processutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.200 2 DEBUG nova.compute.provider_tree [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.218 2 DEBUG nova.scheduler.client.report [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.250 2 DEBUG oslo_concurrency.lockutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.251 2 DEBUG nova.compute.manager [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.329 2 DEBUG nova.compute.manager [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.330 2 DEBUG nova.network.neutron [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.363 2 INFO nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.399 2 DEBUG nova.compute.manager [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.485 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432937.4844265, e9ccbc44-715e-4419-9286-f0cb6e41d9cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.485 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] VM Started (Lifecycle Event)
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.513 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.517 2 DEBUG nova.compute.manager [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.519 2 DEBUG nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.519 2 INFO nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Creating image(s)
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.547 2 DEBUG nova.storage.rbd_utils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] rbd image 9a0796fa-b0ab-49c7-ac7f-a66013f58074_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.574 2 DEBUG nova.storage.rbd_utils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] rbd image 9a0796fa-b0ab-49c7-ac7f-a66013f58074_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.602 2 DEBUG nova.storage.rbd_utils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] rbd image 9a0796fa-b0ab-49c7-ac7f-a66013f58074_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:08:57.606 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.615 2 DEBUG oslo_concurrency.processutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.661 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432937.4847496, e9ccbc44-715e-4419-9286-f0cb6e41d9cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.662 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] VM Paused (Lifecycle Event)
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.680 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.685 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.702 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.706 2 DEBUG oslo_concurrency.processutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.707 2 DEBUG oslo_concurrency.lockutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.707 2 DEBUG oslo_concurrency.lockutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.708 2 DEBUG oslo_concurrency.lockutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.730 2 DEBUG nova.storage.rbd_utils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] rbd image 9a0796fa-b0ab-49c7-ac7f-a66013f58074_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:08:57 compute-0 nova_compute[259627]: 2025-10-14 09:08:57.738 2 DEBUG oslo_concurrency.processutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9a0796fa-b0ab-49c7-ac7f-a66013f58074_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:08:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 293 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 14 09:08:58 compute-0 nova_compute[259627]: 2025-10-14 09:08:58.001 2 DEBUG oslo_concurrency.processutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9a0796fa-b0ab-49c7-ac7f-a66013f58074_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:08:58 compute-0 nova_compute[259627]: 2025-10-14 09:08:58.065 2 DEBUG nova.storage.rbd_utils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] resizing rbd image 9a0796fa-b0ab-49c7-ac7f-a66013f58074_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:08:58 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/968840312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:08:58 compute-0 nova_compute[259627]: 2025-10-14 09:08:58.147 2 DEBUG nova.objects.instance [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lazy-loading 'migration_context' on Instance uuid 9a0796fa-b0ab-49c7-ac7f-a66013f58074 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:08:58 compute-0 nova_compute[259627]: 2025-10-14 09:08:58.163 2 DEBUG nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:08:58 compute-0 nova_compute[259627]: 2025-10-14 09:08:58.164 2 DEBUG nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Ensure instance console log exists: /var/lib/nova/instances/9a0796fa-b0ab-49c7-ac7f-a66013f58074/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:08:58 compute-0 nova_compute[259627]: 2025-10-14 09:08:58.165 2 DEBUG oslo_concurrency.lockutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:08:58 compute-0 nova_compute[259627]: 2025-10-14 09:08:58.165 2 DEBUG oslo_concurrency.lockutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:08:58 compute-0 nova_compute[259627]: 2025-10-14 09:08:58.166 2 DEBUG oslo_concurrency.lockutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:08:58 compute-0 nova_compute[259627]: 2025-10-14 09:08:58.200 2 DEBUG nova.policy [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '851077659f3a499b97ce67237e198aab', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '38ea96e617b14d28be8fd9f65647a849', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:08:58 compute-0 nova_compute[259627]: 2025-10-14 09:08:58.618 2 INFO nova.compute.manager [None req-3f8b277a-310c-470a-b278-93e0e2ee1156 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Pausing
Oct 14 09:08:58 compute-0 nova_compute[259627]: 2025-10-14 09:08:58.620 2 DEBUG nova.objects.instance [None req-3f8b277a-310c-470a-b278-93e0e2ee1156 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'flavor' on Instance uuid f74daf08-3420-4644-8fc8-b1450b733908 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:08:58 compute-0 nova_compute[259627]: 2025-10-14 09:08:58.664 2 DEBUG nova.compute.manager [None req-3f8b277a-310c-470a-b278-93e0e2ee1156 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:58 compute-0 nova_compute[259627]: 2025-10-14 09:08:58.666 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432938.663621, f74daf08-3420-4644-8fc8-b1450b733908 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:08:58 compute-0 nova_compute[259627]: 2025-10-14 09:08:58.667 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f74daf08-3420-4644-8fc8-b1450b733908] VM Paused (Lifecycle Event)
Oct 14 09:08:58 compute-0 nova_compute[259627]: 2025-10-14 09:08:58.710 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:08:58 compute-0 nova_compute[259627]: 2025-10-14 09:08:58.714 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:08:58 compute-0 nova_compute[259627]: 2025-10-14 09:08:58.751 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f74daf08-3420-4644-8fc8-b1450b733908] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 14 09:08:58 compute-0 nova_compute[259627]: 2025-10-14 09:08:58.934 2 DEBUG nova.network.neutron [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Successfully created port: eade513b-fd09-42b6-ae4f-0d1913be50f8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:08:59 compute-0 ceph-mon[74249]: pgmap v1623: 305 pgs: 305 active+clean; 293 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 14 09:08:59 compute-0 nova_compute[259627]: 2025-10-14 09:08:59.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:08:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 293 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 14 09:08:59 compute-0 nova_compute[259627]: 2025-10-14 09:08:59.985 2 DEBUG nova.network.neutron [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Successfully updated port: eade513b-fd09-42b6-ae4f-0d1913be50f8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:09:00 compute-0 nova_compute[259627]: 2025-10-14 09:09:00.006 2 DEBUG oslo_concurrency.lockutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Acquiring lock "refresh_cache-9a0796fa-b0ab-49c7-ac7f-a66013f58074" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:09:00 compute-0 nova_compute[259627]: 2025-10-14 09:09:00.007 2 DEBUG oslo_concurrency.lockutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Acquired lock "refresh_cache-9a0796fa-b0ab-49c7-ac7f-a66013f58074" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:09:00 compute-0 nova_compute[259627]: 2025-10-14 09:09:00.007 2 DEBUG nova.network.neutron [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:09:00 compute-0 nova_compute[259627]: 2025-10-14 09:09:00.090 2 DEBUG nova.compute.manager [req-3a8ecf3c-554a-42f9-9f71-c7ffc4a5feb6 req-a53a5970-fcf2-4f07-a3f2-3cf3aec07ab7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Received event network-changed-eade513b-fd09-42b6-ae4f-0d1913be50f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:00 compute-0 nova_compute[259627]: 2025-10-14 09:09:00.091 2 DEBUG nova.compute.manager [req-3a8ecf3c-554a-42f9-9f71-c7ffc4a5feb6 req-a53a5970-fcf2-4f07-a3f2-3cf3aec07ab7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Refreshing instance network info cache due to event network-changed-eade513b-fd09-42b6-ae4f-0d1913be50f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:09:00 compute-0 nova_compute[259627]: 2025-10-14 09:09:00.091 2 DEBUG oslo_concurrency.lockutils [req-3a8ecf3c-554a-42f9-9f71-c7ffc4a5feb6 req-a53a5970-fcf2-4f07-a3f2-3cf3aec07ab7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9a0796fa-b0ab-49c7-ac7f-a66013f58074" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:09:00 compute-0 nova_compute[259627]: 2025-10-14 09:09:00.184 2 DEBUG nova.network.neutron [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:09:00 compute-0 nova_compute[259627]: 2025-10-14 09:09:00.237 2 DEBUG oslo_concurrency.lockutils [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "f74daf08-3420-4644-8fc8-b1450b733908" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:00 compute-0 nova_compute[259627]: 2025-10-14 09:09:00.238 2 DEBUG oslo_concurrency.lockutils [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "f74daf08-3420-4644-8fc8-b1450b733908" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:00 compute-0 nova_compute[259627]: 2025-10-14 09:09:00.238 2 INFO nova.compute.manager [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Shelving
Oct 14 09:09:00 compute-0 kernel: tap28b0076a-cf (unregistering): left promiscuous mode
Oct 14 09:09:00 compute-0 NetworkManager[44885]: <info>  [1760432940.3164] device (tap28b0076a-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:09:00 compute-0 nova_compute[259627]: 2025-10-14 09:09:00.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:00 compute-0 ovn_controller[152662]: 2025-10-14T09:09:00Z|00699|binding|INFO|Releasing lport 28b0076a-cf34-43f8-b4e7-11c60a4699bd from this chassis (sb_readonly=0)
Oct 14 09:09:00 compute-0 ovn_controller[152662]: 2025-10-14T09:09:00Z|00700|binding|INFO|Setting lport 28b0076a-cf34-43f8-b4e7-11c60a4699bd down in Southbound
Oct 14 09:09:00 compute-0 ovn_controller[152662]: 2025-10-14T09:09:00Z|00701|binding|INFO|Removing iface tap28b0076a-cf ovn-installed in OVS
Oct 14 09:09:00 compute-0 nova_compute[259627]: 2025-10-14 09:09:00.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:00.346 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:7c:0b 10.100.0.10'], port_security=['fa:16:3e:b0:7c:0b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f74daf08-3420-4644-8fc8-b1450b733908', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bda6775f81f403e83269a5f798c9853', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd4ffd682-0c28-40f2-a6f1-d3d67aecef45', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88e90b59-4c4c-42c1-a4ed-574ac64367e5, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=28b0076a-cf34-43f8-b4e7-11c60a4699bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:09:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:00.348 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 28b0076a-cf34-43f8-b4e7-11c60a4699bd in datapath 9d540b01-e9c4-4dc5-9a51-94512ad9a409 unbound from our chassis
Oct 14 09:09:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:00.351 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d540b01-e9c4-4dc5-9a51-94512ad9a409
Oct 14 09:09:00 compute-0 nova_compute[259627]: 2025-10-14 09:09:00.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:00 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000045.scope: Deactivated successfully.
Oct 14 09:09:00 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000045.scope: Consumed 3.901s CPU time.
Oct 14 09:09:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:00.378 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[62eed223-7815-4448-85dc-4fb3d9f9fb25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:00 compute-0 systemd-machined[214636]: Machine qemu-83-instance-00000045 terminated.
Oct 14 09:09:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:00.418 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[25afadaa-2e82-4fcb-98dd-2dd78f318890]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:00.424 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[80534e52-2bd8-44cb-8eb9-a350120395dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:00.462 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fce6b9c9-cd35-4deb-becd-475fd788bacd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:00.491 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8948e084-e5d5-4be8-9bde-f0cf5d56f204]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d540b01-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:6a:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662158, 'reachable_time': 22381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332584, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:00 compute-0 nova_compute[259627]: 2025-10-14 09:09:00.507 2 INFO nova.virt.libvirt.driver [-] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Instance destroyed successfully.
Oct 14 09:09:00 compute-0 nova_compute[259627]: 2025-10-14 09:09:00.507 2 DEBUG nova.objects.instance [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'numa_topology' on Instance uuid f74daf08-3420-4644-8fc8-b1450b733908 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:00.515 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[18ca5b90-a37f-4de1-8191-e44ca6908da4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9d540b01-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662169, 'tstamp': 662169}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332594, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9d540b01-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662173, 'tstamp': 662173}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332594, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:00.517 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d540b01-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:00.568 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d540b01-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:00 compute-0 nova_compute[259627]: 2025-10-14 09:09:00.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:00.568 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:09:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:00.569 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d540b01-e0, col_values=(('external_ids', {'iface-id': 'fcca615a-5470-4880-844d-73adc425bce1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:00.570 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:09:00 compute-0 nova_compute[259627]: 2025-10-14 09:09:00.893 2 INFO nova.virt.libvirt.driver [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Beginning cold snapshot process
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.062 2 DEBUG nova.virt.libvirt.imagebackend [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 09:09:01 compute-0 ceph-mon[74249]: pgmap v1624: 305 pgs: 305 active+clean; 293 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.292 2 DEBUG nova.compute.manager [req-434df4df-caca-48ad-841d-61c1c65dfce0 req-475f1df6-52c8-46ea-9c0d-6a41c9146b9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Received event network-vif-plugged-67728610-6776-4496-98b2-a14f59c9674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.293 2 DEBUG oslo_concurrency.lockutils [req-434df4df-caca-48ad-841d-61c1c65dfce0 req-475f1df6-52c8-46ea-9c0d-6a41c9146b9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.294 2 DEBUG oslo_concurrency.lockutils [req-434df4df-caca-48ad-841d-61c1c65dfce0 req-475f1df6-52c8-46ea-9c0d-6a41c9146b9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.294 2 DEBUG oslo_concurrency.lockutils [req-434df4df-caca-48ad-841d-61c1c65dfce0 req-475f1df6-52c8-46ea-9c0d-6a41c9146b9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.295 2 DEBUG nova.compute.manager [req-434df4df-caca-48ad-841d-61c1c65dfce0 req-475f1df6-52c8-46ea-9c0d-6a41c9146b9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Processing event network-vif-plugged-67728610-6776-4496-98b2-a14f59c9674d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.296 2 DEBUG nova.compute.manager [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.301 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432941.299841, e9ccbc44-715e-4419-9286-f0cb6e41d9cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.302 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] VM Resumed (Lifecycle Event)
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.304 2 DEBUG nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.309 2 INFO nova.virt.libvirt.driver [-] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Instance spawned successfully.
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.310 2 DEBUG nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.318 2 DEBUG nova.storage.rbd_utils [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] creating snapshot(465fb1df79454bb58df9b28792721c2f) on rbd image(f74daf08-3420-4644-8fc8-b1450b733908_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.365 2 DEBUG nova.network.neutron [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Updating instance_info_cache with network_info: [{"id": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "address": "fa:16:3e:16:3b:5f", "network": {"id": "327c2ca5-7a4d-4421-a285-7bcc1ce806d2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1727148232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38ea96e617b14d28be8fd9f65647a849", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeade513b-fd", "ovs_interfaceid": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.368 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.377 2 DEBUG nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.377 2 DEBUG nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.378 2 DEBUG nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.379 2 DEBUG nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.379 2 DEBUG nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.380 2 DEBUG nova.virt.libvirt.driver [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.387 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.412 2 DEBUG oslo_concurrency.lockutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Releasing lock "refresh_cache-9a0796fa-b0ab-49c7-ac7f-a66013f58074" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.412 2 DEBUG nova.compute.manager [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Instance network_info: |[{"id": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "address": "fa:16:3e:16:3b:5f", "network": {"id": "327c2ca5-7a4d-4421-a285-7bcc1ce806d2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1727148232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38ea96e617b14d28be8fd9f65647a849", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeade513b-fd", "ovs_interfaceid": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.414 2 DEBUG oslo_concurrency.lockutils [req-3a8ecf3c-554a-42f9-9f71-c7ffc4a5feb6 req-a53a5970-fcf2-4f07-a3f2-3cf3aec07ab7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9a0796fa-b0ab-49c7-ac7f-a66013f58074" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.414 2 DEBUG nova.network.neutron [req-3a8ecf3c-554a-42f9-9f71-c7ffc4a5feb6 req-a53a5970-fcf2-4f07-a3f2-3cf3aec07ab7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Refreshing network info cache for port eade513b-fd09-42b6-ae4f-0d1913be50f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.420 2 DEBUG nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Start _get_guest_xml network_info=[{"id": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "address": "fa:16:3e:16:3b:5f", "network": {"id": "327c2ca5-7a4d-4421-a285-7bcc1ce806d2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1727148232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38ea96e617b14d28be8fd9f65647a849", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeade513b-fd", "ovs_interfaceid": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.422 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.428 2 WARNING nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.435 2 DEBUG nova.virt.libvirt.host [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.436 2 DEBUG nova.virt.libvirt.host [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.445 2 INFO nova.compute.manager [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Took 14.08 seconds to spawn the instance on the hypervisor.
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.445 2 DEBUG nova.compute.manager [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.449 2 DEBUG nova.virt.libvirt.host [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.449 2 DEBUG nova.virt.libvirt.host [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.450 2 DEBUG nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.450 2 DEBUG nova.virt.hardware [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.451 2 DEBUG nova.virt.hardware [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.452 2 DEBUG nova.virt.hardware [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.452 2 DEBUG nova.virt.hardware [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.452 2 DEBUG nova.virt.hardware [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.453 2 DEBUG nova.virt.hardware [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.453 2 DEBUG nova.virt.hardware [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.453 2 DEBUG nova.virt.hardware [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.454 2 DEBUG nova.virt.hardware [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.454 2 DEBUG nova.virt.hardware [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.454 2 DEBUG nova.virt.hardware [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.459 2 DEBUG oslo_concurrency.processutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.555 2 INFO nova.compute.manager [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Took 15.25 seconds to build instance.
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.572 2 DEBUG oslo_concurrency.lockutils [None req-c4350d34-d463-4e3c-a857-952ea48c36ff 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 339 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 138 op/s
Oct 14 09:09:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:09:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3616922686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.915 2 DEBUG oslo_concurrency.processutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.948 2 DEBUG nova.storage.rbd_utils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] rbd image 9a0796fa-b0ab-49c7-ac7f-a66013f58074_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:01 compute-0 nova_compute[259627]: 2025-10-14 09:09:01.953 2 DEBUG oslo_concurrency.processutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Oct 14 09:09:02 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3616922686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Oct 14 09:09:02 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.192 2 DEBUG nova.storage.rbd_utils [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] cloning vms/f74daf08-3420-4644-8fc8-b1450b733908_disk@465fb1df79454bb58df9b28792721c2f to images/5f7c88ba-e0d8-4af1-880e-7b2d04232e26 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.300 2 DEBUG nova.storage.rbd_utils [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] flattening images/5f7c88ba-e0d8-4af1-880e-7b2d04232e26 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:09:02 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3104391325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.508 2 DEBUG oslo_concurrency.processutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.509 2 DEBUG nova.virt.libvirt.vif [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:08:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-371646039',display_name='tempest-ServerMetadataNegativeTestJSON-server-371646039',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-371646039',id=71,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='38ea96e617b14d28be8fd9f65647a849',ramdisk_id='',reservation_id='r-ngqpevh1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-62363335',owner_user_name='tempest-ServerMetadataNegativeTestJSON-62363335-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:08:57Z,user_data=None,user_id='851077659f3a499b97ce67237e198aab',uuid=9a0796fa-b0ab-49c7-ac7f-a66013f58074,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "address": "fa:16:3e:16:3b:5f", "network": {"id": "327c2ca5-7a4d-4421-a285-7bcc1ce806d2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1727148232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38ea96e617b14d28be8fd9f65647a849", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeade513b-fd", "ovs_interfaceid": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.510 2 DEBUG nova.network.os_vif_util [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Converting VIF {"id": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "address": "fa:16:3e:16:3b:5f", "network": {"id": "327c2ca5-7a4d-4421-a285-7bcc1ce806d2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1727148232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38ea96e617b14d28be8fd9f65647a849", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeade513b-fd", "ovs_interfaceid": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.510 2 DEBUG nova.network.os_vif_util [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:3b:5f,bridge_name='br-int',has_traffic_filtering=True,id=eade513b-fd09-42b6-ae4f-0d1913be50f8,network=Network(327c2ca5-7a4d-4421-a285-7bcc1ce806d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeade513b-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.512 2 DEBUG nova.objects.instance [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a0796fa-b0ab-49c7-ac7f-a66013f58074 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.524 2 DEBUG nova.network.neutron [req-3a8ecf3c-554a-42f9-9f71-c7ffc4a5feb6 req-a53a5970-fcf2-4f07-a3f2-3cf3aec07ab7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Updated VIF entry in instance network info cache for port eade513b-fd09-42b6-ae4f-0d1913be50f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.524 2 DEBUG nova.network.neutron [req-3a8ecf3c-554a-42f9-9f71-c7ffc4a5feb6 req-a53a5970-fcf2-4f07-a3f2-3cf3aec07ab7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Updating instance_info_cache with network_info: [{"id": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "address": "fa:16:3e:16:3b:5f", "network": {"id": "327c2ca5-7a4d-4421-a285-7bcc1ce806d2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1727148232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38ea96e617b14d28be8fd9f65647a849", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeade513b-fd", "ovs_interfaceid": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.526 2 DEBUG nova.storage.rbd_utils [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] removing snapshot(465fb1df79454bb58df9b28792721c2f) on rbd image(f74daf08-3420-4644-8fc8-b1450b733908_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.542 2 DEBUG nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:09:02 compute-0 nova_compute[259627]:   <uuid>9a0796fa-b0ab-49c7-ac7f-a66013f58074</uuid>
Oct 14 09:09:02 compute-0 nova_compute[259627]:   <name>instance-00000047</name>
Oct 14 09:09:02 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:09:02 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:09:02 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerMetadataNegativeTestJSON-server-371646039</nova:name>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:09:01</nova:creationTime>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:09:02 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:09:02 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:09:02 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:09:02 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:09:02 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:09:02 compute-0 nova_compute[259627]:         <nova:user uuid="851077659f3a499b97ce67237e198aab">tempest-ServerMetadataNegativeTestJSON-62363335-project-member</nova:user>
Oct 14 09:09:02 compute-0 nova_compute[259627]:         <nova:project uuid="38ea96e617b14d28be8fd9f65647a849">tempest-ServerMetadataNegativeTestJSON-62363335</nova:project>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:09:02 compute-0 nova_compute[259627]:         <nova:port uuid="eade513b-fd09-42b6-ae4f-0d1913be50f8">
Oct 14 09:09:02 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:09:02 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:09:02 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <system>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <entry name="serial">9a0796fa-b0ab-49c7-ac7f-a66013f58074</entry>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <entry name="uuid">9a0796fa-b0ab-49c7-ac7f-a66013f58074</entry>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     </system>
Oct 14 09:09:02 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:09:02 compute-0 nova_compute[259627]:   <os>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:   </os>
Oct 14 09:09:02 compute-0 nova_compute[259627]:   <features>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:   </features>
Oct 14 09:09:02 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:09:02 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:09:02 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/9a0796fa-b0ab-49c7-ac7f-a66013f58074_disk">
Oct 14 09:09:02 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       </source>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:09:02 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/9a0796fa-b0ab-49c7-ac7f-a66013f58074_disk.config">
Oct 14 09:09:02 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       </source>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:09:02 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:16:3b:5f"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <target dev="tapeade513b-fd"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/9a0796fa-b0ab-49c7-ac7f-a66013f58074/console.log" append="off"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <video>
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     </video>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:09:02 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:09:02 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:09:02 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:09:02 compute-0 nova_compute[259627]: </domain>
Oct 14 09:09:02 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.543 2 DEBUG nova.compute.manager [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Preparing to wait for external event network-vif-plugged-eade513b-fd09-42b6-ae4f-0d1913be50f8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.543 2 DEBUG oslo_concurrency.lockutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Acquiring lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.543 2 DEBUG oslo_concurrency.lockutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.544 2 DEBUG oslo_concurrency.lockutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.545 2 DEBUG nova.virt.libvirt.vif [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:08:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-371646039',display_name='tempest-ServerMetadataNegativeTestJSON-server-371646039',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-371646039',id=71,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='38ea96e617b14d28be8fd9f65647a849',ramdisk_id='',reservation_id='r-ngqpevh1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-62363335',owner_user_name='tempest-ServerMetadataNegativeTestJSON-62363335-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:08:57Z,user_data=None,user_id='851077659f3a499b97ce67237e198aab',uuid=9a0796fa-b0ab-49c7-ac7f-a66013f58074,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "address": "fa:16:3e:16:3b:5f", "network": {"id": "327c2ca5-7a4d-4421-a285-7bcc1ce806d2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1727148232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38ea96e617b14d28be8fd9f65647a849", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeade513b-fd", "ovs_interfaceid": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.545 2 DEBUG nova.network.os_vif_util [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Converting VIF {"id": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "address": "fa:16:3e:16:3b:5f", "network": {"id": "327c2ca5-7a4d-4421-a285-7bcc1ce806d2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1727148232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38ea96e617b14d28be8fd9f65647a849", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeade513b-fd", "ovs_interfaceid": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.546 2 DEBUG nova.network.os_vif_util [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:3b:5f,bridge_name='br-int',has_traffic_filtering=True,id=eade513b-fd09-42b6-ae4f-0d1913be50f8,network=Network(327c2ca5-7a4d-4421-a285-7bcc1ce806d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeade513b-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.547 2 DEBUG os_vif [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:3b:5f,bridge_name='br-int',has_traffic_filtering=True,id=eade513b-fd09-42b6-ae4f-0d1913be50f8,network=Network(327c2ca5-7a4d-4421-a285-7bcc1ce806d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeade513b-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.549 2 DEBUG oslo_concurrency.lockutils [req-3a8ecf3c-554a-42f9-9f71-c7ffc4a5feb6 req-a53a5970-fcf2-4f07-a3f2-3cf3aec07ab7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9a0796fa-b0ab-49c7-ac7f-a66013f58074" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.551 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeade513b-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.552 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeade513b-fd, col_values=(('external_ids', {'iface-id': 'eade513b-fd09-42b6-ae4f-0d1913be50f8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:3b:5f', 'vm-uuid': '9a0796fa-b0ab-49c7-ac7f-a66013f58074'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:02 compute-0 NetworkManager[44885]: <info>  [1760432942.5548] manager: (tapeade513b-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.563 2 INFO os_vif [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:3b:5f,bridge_name='br-int',has_traffic_filtering=True,id=eade513b-fd09-42b6-ae4f-0d1913be50f8,network=Network(327c2ca5-7a4d-4421-a285-7bcc1ce806d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeade513b-fd')
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.614 2 DEBUG nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.615 2 DEBUG nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.615 2 DEBUG nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] No VIF found with MAC fa:16:3e:16:3b:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.616 2 INFO nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Using config drive
Oct 14 09:09:02 compute-0 nova_compute[259627]: 2025-10-14 09:09:02.634 2 DEBUG nova.storage.rbd_utils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] rbd image 9a0796fa-b0ab-49c7-ac7f-a66013f58074_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:09:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:09:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:09:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:09:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:09:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:09:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:09:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Oct 14 09:09:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Oct 14 09:09:03 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Oct 14 09:09:03 compute-0 ceph-mon[74249]: pgmap v1625: 305 pgs: 305 active+clean; 339 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 138 op/s
Oct 14 09:09:03 compute-0 ceph-mon[74249]: osdmap e241: 3 total, 3 up, 3 in
Oct 14 09:09:03 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3104391325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.188 2 DEBUG nova.storage.rbd_utils [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] creating snapshot(snap) on rbd image(5f7c88ba-e0d8-4af1-880e-7b2d04232e26) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.442 2 INFO nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Creating config drive at /var/lib/nova/instances/9a0796fa-b0ab-49c7-ac7f-a66013f58074/disk.config
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.450 2 DEBUG oslo_concurrency.processutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a0796fa-b0ab-49c7-ac7f-a66013f58074/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkth1x1t9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.531 2 DEBUG nova.compute.manager [req-d8ce7ffc-f0ad-4781-bfcd-fb386e1e922f req-841523d4-9478-4be6-8ae8-1602ee5c201a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Received event network-vif-plugged-67728610-6776-4496-98b2-a14f59c9674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.532 2 DEBUG oslo_concurrency.lockutils [req-d8ce7ffc-f0ad-4781-bfcd-fb386e1e922f req-841523d4-9478-4be6-8ae8-1602ee5c201a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.532 2 DEBUG oslo_concurrency.lockutils [req-d8ce7ffc-f0ad-4781-bfcd-fb386e1e922f req-841523d4-9478-4be6-8ae8-1602ee5c201a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.532 2 DEBUG oslo_concurrency.lockutils [req-d8ce7ffc-f0ad-4781-bfcd-fb386e1e922f req-841523d4-9478-4be6-8ae8-1602ee5c201a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.532 2 DEBUG nova.compute.manager [req-d8ce7ffc-f0ad-4781-bfcd-fb386e1e922f req-841523d4-9478-4be6-8ae8-1602ee5c201a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] No waiting events found dispatching network-vif-plugged-67728610-6776-4496-98b2-a14f59c9674d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.532 2 WARNING nova.compute.manager [req-d8ce7ffc-f0ad-4781-bfcd-fb386e1e922f req-841523d4-9478-4be6-8ae8-1602ee5c201a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Received unexpected event network-vif-plugged-67728610-6776-4496-98b2-a14f59c9674d for instance with vm_state active and task_state None.
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.533 2 DEBUG nova.compute.manager [req-d8ce7ffc-f0ad-4781-bfcd-fb386e1e922f req-841523d4-9478-4be6-8ae8-1602ee5c201a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Received event network-vif-unplugged-28b0076a-cf34-43f8-b4e7-11c60a4699bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.533 2 DEBUG oslo_concurrency.lockutils [req-d8ce7ffc-f0ad-4781-bfcd-fb386e1e922f req-841523d4-9478-4be6-8ae8-1602ee5c201a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f74daf08-3420-4644-8fc8-b1450b733908-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.533 2 DEBUG oslo_concurrency.lockutils [req-d8ce7ffc-f0ad-4781-bfcd-fb386e1e922f req-841523d4-9478-4be6-8ae8-1602ee5c201a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f74daf08-3420-4644-8fc8-b1450b733908-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.534 2 DEBUG oslo_concurrency.lockutils [req-d8ce7ffc-f0ad-4781-bfcd-fb386e1e922f req-841523d4-9478-4be6-8ae8-1602ee5c201a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f74daf08-3420-4644-8fc8-b1450b733908-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.534 2 DEBUG nova.compute.manager [req-d8ce7ffc-f0ad-4781-bfcd-fb386e1e922f req-841523d4-9478-4be6-8ae8-1602ee5c201a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] No waiting events found dispatching network-vif-unplugged-28b0076a-cf34-43f8-b4e7-11c60a4699bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.534 2 WARNING nova.compute.manager [req-d8ce7ffc-f0ad-4781-bfcd-fb386e1e922f req-841523d4-9478-4be6-8ae8-1602ee5c201a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Received unexpected event network-vif-unplugged-28b0076a-cf34-43f8-b4e7-11c60a4699bd for instance with vm_state paused and task_state shelving_image_uploading.
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.534 2 DEBUG nova.compute.manager [req-d8ce7ffc-f0ad-4781-bfcd-fb386e1e922f req-841523d4-9478-4be6-8ae8-1602ee5c201a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Received event network-vif-plugged-28b0076a-cf34-43f8-b4e7-11c60a4699bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.535 2 DEBUG oslo_concurrency.lockutils [req-d8ce7ffc-f0ad-4781-bfcd-fb386e1e922f req-841523d4-9478-4be6-8ae8-1602ee5c201a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f74daf08-3420-4644-8fc8-b1450b733908-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.535 2 DEBUG oslo_concurrency.lockutils [req-d8ce7ffc-f0ad-4781-bfcd-fb386e1e922f req-841523d4-9478-4be6-8ae8-1602ee5c201a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f74daf08-3420-4644-8fc8-b1450b733908-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.535 2 DEBUG oslo_concurrency.lockutils [req-d8ce7ffc-f0ad-4781-bfcd-fb386e1e922f req-841523d4-9478-4be6-8ae8-1602ee5c201a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f74daf08-3420-4644-8fc8-b1450b733908-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.535 2 DEBUG nova.compute.manager [req-d8ce7ffc-f0ad-4781-bfcd-fb386e1e922f req-841523d4-9478-4be6-8ae8-1602ee5c201a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] No waiting events found dispatching network-vif-plugged-28b0076a-cf34-43f8-b4e7-11c60a4699bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.536 2 WARNING nova.compute.manager [req-d8ce7ffc-f0ad-4781-bfcd-fb386e1e922f req-841523d4-9478-4be6-8ae8-1602ee5c201a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Received unexpected event network-vif-plugged-28b0076a-cf34-43f8-b4e7-11c60a4699bd for instance with vm_state paused and task_state shelving_image_uploading.
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.611 2 DEBUG oslo_concurrency.processutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a0796fa-b0ab-49c7-ac7f-a66013f58074/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkth1x1t9" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.633 2 DEBUG nova.storage.rbd_utils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] rbd image 9a0796fa-b0ab-49c7-ac7f-a66013f58074_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.637 2 DEBUG oslo_concurrency.processutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a0796fa-b0ab-49c7-ac7f-a66013f58074/disk.config 9a0796fa-b0ab-49c7-ac7f-a66013f58074_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 339 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.811 2 DEBUG oslo_concurrency.processutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a0796fa-b0ab-49c7-ac7f-a66013f58074/disk.config 9a0796fa-b0ab-49c7-ac7f-a66013f58074_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.812 2 INFO nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Deleting local config drive /var/lib/nova/instances/9a0796fa-b0ab-49c7-ac7f-a66013f58074/disk.config because it was imported into RBD.
Oct 14 09:09:03 compute-0 NetworkManager[44885]: <info>  [1760432943.8686] manager: (tapeade513b-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/300)
Oct 14 09:09:03 compute-0 kernel: tapeade513b-fd: entered promiscuous mode
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:03 compute-0 ovn_controller[152662]: 2025-10-14T09:09:03Z|00702|binding|INFO|Claiming lport eade513b-fd09-42b6-ae4f-0d1913be50f8 for this chassis.
Oct 14 09:09:03 compute-0 ovn_controller[152662]: 2025-10-14T09:09:03Z|00703|binding|INFO|eade513b-fd09-42b6-ae4f-0d1913be50f8: Claiming fa:16:3e:16:3b:5f 10.100.0.4
Oct 14 09:09:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:03.883 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:3b:5f 10.100.0.4'], port_security=['fa:16:3e:16:3b:5f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9a0796fa-b0ab-49c7-ac7f-a66013f58074', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-327c2ca5-7a4d-4421-a285-7bcc1ce806d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '38ea96e617b14d28be8fd9f65647a849', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'da7fc45b-3de4-4937-861d-d9b4c2429bfa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67c8fc0d-000b-4c12-8ff9-0fd1ab8f32c9, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=eade513b-fd09-42b6-ae4f-0d1913be50f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:09:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:03.884 162547 INFO neutron.agent.ovn.metadata.agent [-] Port eade513b-fd09-42b6-ae4f-0d1913be50f8 in datapath 327c2ca5-7a4d-4421-a285-7bcc1ce806d2 bound to our chassis
Oct 14 09:09:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:03.886 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 327c2ca5-7a4d-4421-a285-7bcc1ce806d2
Oct 14 09:09:03 compute-0 ovn_controller[152662]: 2025-10-14T09:09:03Z|00704|binding|INFO|Setting lport eade513b-fd09-42b6-ae4f-0d1913be50f8 ovn-installed in OVS
Oct 14 09:09:03 compute-0 ovn_controller[152662]: 2025-10-14T09:09:03Z|00705|binding|INFO|Setting lport eade513b-fd09-42b6-ae4f-0d1913be50f8 up in Southbound
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:03.905 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ab0a0bad-c7cf-47da-86c6-54a163e52e10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:03.906 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap327c2ca5-71 in ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:09:03 compute-0 systemd-udevd[332879]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:09:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:03.911 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap327c2ca5-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:09:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:03.911 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7648c77f-03c0-491a-a669-f29552b0df44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:03.913 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[83fc62cb-acf1-4fd7-a880-77ad9820e071]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:03 compute-0 nova_compute[259627]: 2025-10-14 09:09:03.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:03 compute-0 systemd-machined[214636]: New machine qemu-85-instance-00000047.
Oct 14 09:09:03 compute-0 NetworkManager[44885]: <info>  [1760432943.9229] device (tapeade513b-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:09:03 compute-0 NetworkManager[44885]: <info>  [1760432943.9240] device (tapeade513b-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:09:03 compute-0 systemd[1]: Started Virtual Machine qemu-85-instance-00000047.
Oct 14 09:09:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:03.930 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[094f0c7f-0930-41f4-ba3a-b420e057441e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:03.945 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5539c59b-53ed-478b-8cf6-42d0bbce0fc5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:03.970 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0d446c49-1b6c-4964-975b-0af4d4a298a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:03 compute-0 NetworkManager[44885]: <info>  [1760432943.9782] manager: (tap327c2ca5-70): new Veth device (/org/freedesktop/NetworkManager/Devices/301)
Oct 14 09:09:03 compute-0 systemd-udevd[332883]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:09:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:03.977 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aa40149d-e5c4-405f-a59f-016060240c05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:04.027 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e490426b-e720-4bbd-a9b2-b22b90215449]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:04.031 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[34e9163d-9601-42a6-a29e-b6baf50adc70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:04 compute-0 NetworkManager[44885]: <info>  [1760432944.0616] device (tap327c2ca5-70): carrier: link connected
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:04.071 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[35ade41f-b393-4e91-9c38-cdf21927662e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:04.092 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fccc3c32-715d-4d86-a61e-15377119627b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap327c2ca5-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:ba:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672282, 'reachable_time': 20986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332915, 'error': None, 'target': 'ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:04.113 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b78dc185-e6a2-4479-a864-ffc66638a87c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe70:ba24'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672282, 'tstamp': 672282}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332916, 'error': None, 'target': 'ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:04.130 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[67146075-7113-4c82-b884-b8864e42c0a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap327c2ca5-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:ba:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672282, 'reachable_time': 20986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 332917, 'error': None, 'target': 'ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Oct 14 09:09:04 compute-0 nova_compute[259627]: 2025-10-14 09:09:04.157 2 DEBUG nova.compute.manager [req-5b6af15b-b29d-49fa-944d-64d4b1a77acb req-a750b030-da9d-4b7e-b526-531edd6f0865 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Received event network-vif-plugged-eade513b-fd09-42b6-ae4f-0d1913be50f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:04 compute-0 nova_compute[259627]: 2025-10-14 09:09:04.157 2 DEBUG oslo_concurrency.lockutils [req-5b6af15b-b29d-49fa-944d-64d4b1a77acb req-a750b030-da9d-4b7e-b526-531edd6f0865 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:04 compute-0 nova_compute[259627]: 2025-10-14 09:09:04.158 2 DEBUG oslo_concurrency.lockutils [req-5b6af15b-b29d-49fa-944d-64d4b1a77acb req-a750b030-da9d-4b7e-b526-531edd6f0865 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:04 compute-0 nova_compute[259627]: 2025-10-14 09:09:04.158 2 DEBUG oslo_concurrency.lockutils [req-5b6af15b-b29d-49fa-944d-64d4b1a77acb req-a750b030-da9d-4b7e-b526-531edd6f0865 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:04 compute-0 nova_compute[259627]: 2025-10-14 09:09:04.158 2 DEBUG nova.compute.manager [req-5b6af15b-b29d-49fa-944d-64d4b1a77acb req-a750b030-da9d-4b7e-b526-531edd6f0865 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Processing event network-vif-plugged-eade513b-fd09-42b6-ae4f-0d1913be50f8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:09:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Oct 14 09:09:04 compute-0 ceph-mon[74249]: osdmap e242: 3 total, 3 up, 3 in
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:04.170 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1f873bfd-de18-4e49-9131-81fa43214953]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:04 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Oct 14 09:09:04 compute-0 nova_compute[259627]: 2025-10-14 09:09:04.232 2 INFO nova.compute.manager [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Rebuilding instance
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:04.249 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[414ba17d-214c-41ad-b16c-3d54c2b5b1ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:04.250 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap327c2ca5-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:04.250 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:04.251 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap327c2ca5-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:04 compute-0 NetworkManager[44885]: <info>  [1760432944.2531] manager: (tap327c2ca5-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Oct 14 09:09:04 compute-0 nova_compute[259627]: 2025-10-14 09:09:04.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:04 compute-0 kernel: tap327c2ca5-70: entered promiscuous mode
Oct 14 09:09:04 compute-0 nova_compute[259627]: 2025-10-14 09:09:04.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:04.258 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap327c2ca5-70, col_values=(('external_ids', {'iface-id': '3e283180-dbf4-4a04-aa80-8c3594c202ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:04 compute-0 ovn_controller[152662]: 2025-10-14T09:09:04Z|00706|binding|INFO|Releasing lport 3e283180-dbf4-4a04-aa80-8c3594c202ae from this chassis (sb_readonly=0)
Oct 14 09:09:04 compute-0 nova_compute[259627]: 2025-10-14 09:09:04.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:04 compute-0 nova_compute[259627]: 2025-10-14 09:09:04.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:04.281 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/327c2ca5-7a4d-4421-a285-7bcc1ce806d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/327c2ca5-7a4d-4421-a285-7bcc1ce806d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:04.281 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c2809eba-3654-4690-85dd-de2e5c028e38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:04.282 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-327c2ca5-7a4d-4421-a285-7bcc1ce806d2
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/327c2ca5-7a4d-4421-a285-7bcc1ce806d2.pid.haproxy
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 327c2ca5-7a4d-4421-a285-7bcc1ce806d2
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:09:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:04.282 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2', 'env', 'PROCESS_TAG=haproxy-327c2ca5-7a4d-4421-a285-7bcc1ce806d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/327c2ca5-7a4d-4421-a285-7bcc1ce806d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:09:04 compute-0 nova_compute[259627]: 2025-10-14 09:09:04.552 2 DEBUG nova.objects.instance [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'trusted_certs' on Instance uuid e9ccbc44-715e-4419-9286-f0cb6e41d9cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:04 compute-0 nova_compute[259627]: 2025-10-14 09:09:04.575 2 DEBUG nova.compute.manager [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:04 compute-0 nova_compute[259627]: 2025-10-14 09:09:04.636 2 DEBUG nova.objects.instance [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'pci_requests' on Instance uuid e9ccbc44-715e-4419-9286-f0cb6e41d9cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:04 compute-0 nova_compute[259627]: 2025-10-14 09:09:04.649 2 DEBUG nova.objects.instance [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'pci_devices' on Instance uuid e9ccbc44-715e-4419-9286-f0cb6e41d9cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:04 compute-0 nova_compute[259627]: 2025-10-14 09:09:04.662 2 DEBUG nova.objects.instance [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'resources' on Instance uuid e9ccbc44-715e-4419-9286-f0cb6e41d9cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:04 compute-0 nova_compute[259627]: 2025-10-14 09:09:04.675 2 DEBUG nova.objects.instance [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'migration_context' on Instance uuid e9ccbc44-715e-4419-9286-f0cb6e41d9cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:04 compute-0 nova_compute[259627]: 2025-10-14 09:09:04.690 2 DEBUG nova.objects.instance [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:09:04 compute-0 nova_compute[259627]: 2025-10-14 09:09:04.693 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:09:04 compute-0 podman[332996]: 2025-10-14 09:09:04.708131175 +0000 UTC m=+0.062404758 container create a46e75ff5c5905a13bb8168c5ce12e161c523f5d4627000db8613938bd7171bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:09:04 compute-0 systemd[1]: Started libpod-conmon-a46e75ff5c5905a13bb8168c5ce12e161c523f5d4627000db8613938bd7171bd.scope.
Oct 14 09:09:04 compute-0 podman[332996]: 2025-10-14 09:09:04.676720232 +0000 UTC m=+0.030993845 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:09:04 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac994cc8898f2009254ad2034042d3fe3674a7c5c7f52510a42b9fa37a963f86/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:09:04 compute-0 podman[332996]: 2025-10-14 09:09:04.832865616 +0000 UTC m=+0.187139259 container init a46e75ff5c5905a13bb8168c5ce12e161c523f5d4627000db8613938bd7171bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 09:09:04 compute-0 podman[332996]: 2025-10-14 09:09:04.843616331 +0000 UTC m=+0.197889924 container start a46e75ff5c5905a13bb8168c5ce12e161c523f5d4627000db8613938bd7171bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 14 09:09:04 compute-0 neutron-haproxy-ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2[333009]: [NOTICE]   (333015) : New worker (333017) forked
Oct 14 09:09:04 compute-0 neutron-haproxy-ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2[333009]: [NOTICE]   (333015) : Loading success.
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.045 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432945.045192, 9a0796fa-b0ab-49c7-ac7f-a66013f58074 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.046 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] VM Started (Lifecycle Event)
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.047 2 DEBUG nova.compute.manager [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.054 2 DEBUG nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.058 2 INFO nova.virt.libvirt.driver [-] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Instance spawned successfully.
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.058 2 DEBUG nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.071 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.078 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.081 2 DEBUG nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.082 2 DEBUG nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.082 2 DEBUG nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.083 2 DEBUG nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.083 2 DEBUG nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.084 2 DEBUG nova.virt.libvirt.driver [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.109 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.109 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432945.0453432, 9a0796fa-b0ab-49c7-ac7f-a66013f58074 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.109 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] VM Paused (Lifecycle Event)
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.152 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.157 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432945.0540998, 9a0796fa-b0ab-49c7-ac7f-a66013f58074 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.157 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] VM Resumed (Lifecycle Event)
Oct 14 09:09:05 compute-0 ceph-mon[74249]: pgmap v1628: 305 pgs: 305 active+clean; 339 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Oct 14 09:09:05 compute-0 ceph-mon[74249]: osdmap e243: 3 total, 3 up, 3 in
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.186 2 INFO nova.compute.manager [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Took 7.67 seconds to spawn the instance on the hypervisor.
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.187 2 DEBUG nova.compute.manager [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.188 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.195 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.247 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.272 2 INFO nova.compute.manager [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Took 8.72 seconds to build instance.
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.291 2 DEBUG oslo_concurrency.lockutils [None req-e6efe38a-8fc7-4247-824d-d91454fe632b 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:09:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1232290442' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:09:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:09:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1232290442' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.683 2 INFO nova.virt.libvirt.driver [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Snapshot image upload complete
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.683 2 DEBUG nova.compute.manager [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.728 2 INFO nova.compute.manager [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Shelve offloading
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.734 2 INFO nova.virt.libvirt.driver [-] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Instance destroyed successfully.
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.734 2 DEBUG nova.compute.manager [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.736 2 DEBUG oslo_concurrency.lockutils [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "refresh_cache-f74daf08-3420-4644-8fc8-b1450b733908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.737 2 DEBUG oslo_concurrency.lockutils [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquired lock "refresh_cache-f74daf08-3420-4644-8fc8-b1450b733908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:09:05 compute-0 nova_compute[259627]: 2025-10-14 09:09:05.737 2 DEBUG nova.network.neutron [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:09:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 386 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 7.1 MiB/s wr, 465 op/s
Oct 14 09:09:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1232290442' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:09:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1232290442' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:09:06 compute-0 nova_compute[259627]: 2025-10-14 09:09:06.509 2 DEBUG nova.compute.manager [req-8c6a2757-f118-4d21-859a-a61dabcd4fc7 req-c399364b-02f6-4e17-a64f-8549b977a211 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Received event network-vif-plugged-eade513b-fd09-42b6-ae4f-0d1913be50f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:06 compute-0 nova_compute[259627]: 2025-10-14 09:09:06.509 2 DEBUG oslo_concurrency.lockutils [req-8c6a2757-f118-4d21-859a-a61dabcd4fc7 req-c399364b-02f6-4e17-a64f-8549b977a211 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:06 compute-0 nova_compute[259627]: 2025-10-14 09:09:06.510 2 DEBUG oslo_concurrency.lockutils [req-8c6a2757-f118-4d21-859a-a61dabcd4fc7 req-c399364b-02f6-4e17-a64f-8549b977a211 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:06 compute-0 nova_compute[259627]: 2025-10-14 09:09:06.511 2 DEBUG oslo_concurrency.lockutils [req-8c6a2757-f118-4d21-859a-a61dabcd4fc7 req-c399364b-02f6-4e17-a64f-8549b977a211 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:06 compute-0 nova_compute[259627]: 2025-10-14 09:09:06.511 2 DEBUG nova.compute.manager [req-8c6a2757-f118-4d21-859a-a61dabcd4fc7 req-c399364b-02f6-4e17-a64f-8549b977a211 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] No waiting events found dispatching network-vif-plugged-eade513b-fd09-42b6-ae4f-0d1913be50f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:06 compute-0 nova_compute[259627]: 2025-10-14 09:09:06.511 2 WARNING nova.compute.manager [req-8c6a2757-f118-4d21-859a-a61dabcd4fc7 req-c399364b-02f6-4e17-a64f-8549b977a211 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Received unexpected event network-vif-plugged-eade513b-fd09-42b6-ae4f-0d1913be50f8 for instance with vm_state active and task_state None.
Oct 14 09:09:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:07.026 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:07.027 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:07.028 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:07 compute-0 ceph-mon[74249]: pgmap v1630: 305 pgs: 305 active+clean; 386 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 7.1 MiB/s wr, 465 op/s
Oct 14 09:09:07 compute-0 nova_compute[259627]: 2025-10-14 09:09:07.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:07 compute-0 nova_compute[259627]: 2025-10-14 09:09:07.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:07 compute-0 nova_compute[259627]: 2025-10-14 09:09:07.656 2 DEBUG nova.network.neutron [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Updating instance_info_cache with network_info: [{"id": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "address": "fa:16:3e:b0:7c:0b", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28b0076a-cf", "ovs_interfaceid": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:09:07 compute-0 nova_compute[259627]: 2025-10-14 09:09:07.672 2 DEBUG oslo_concurrency.lockutils [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Releasing lock "refresh_cache-f74daf08-3420-4644-8fc8-b1450b733908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:09:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:09:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Oct 14 09:09:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Oct 14 09:09:07 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Oct 14 09:09:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 386 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 3.8 MiB/s wr, 278 op/s
Oct 14 09:09:08 compute-0 ceph-mon[74249]: osdmap e244: 3 total, 3 up, 3 in
Oct 14 09:09:08 compute-0 ceph-mon[74249]: pgmap v1632: 305 pgs: 305 active+clean; 386 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 3.8 MiB/s wr, 278 op/s
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.045 2 INFO nova.virt.libvirt.driver [-] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Instance destroyed successfully.
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.046 2 DEBUG nova.objects.instance [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'resources' on Instance uuid f74daf08-3420-4644-8fc8-b1450b733908 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.064 2 DEBUG nova.virt.libvirt.vif [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:08:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-485914830',display_name='tempest-ServerActionsTestOtherB-server-485914830',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-485914830',id=69,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:08:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4bda6775f81f403e83269a5f798c9853',ramdisk_id='',reservation_id='r-j3infv6r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-381012378',owner_user_name='tempest-ServerActionsTestOtherB-381012378-project-member',shelved_at='2025-10-14T09:09:05.683717',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='5f7c88ba-e0d8-4af1-880e-7b2d04232e26'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:09:00Z,user_data=None,user_id='695c749a8dce4506a31e2cec4f02876b',uuid=f74daf08-3420-4644-8fc8-b1450b733908,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "address": "fa:16:3e:b0:7c:0b", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28b0076a-cf", "ovs_interfaceid": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.064 2 DEBUG nova.network.os_vif_util [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converting VIF {"id": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "address": "fa:16:3e:b0:7c:0b", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28b0076a-cf", "ovs_interfaceid": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.065 2 DEBUG nova.network.os_vif_util [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7c:0b,bridge_name='br-int',has_traffic_filtering=True,id=28b0076a-cf34-43f8-b4e7-11c60a4699bd,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28b0076a-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.066 2 DEBUG os_vif [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7c:0b,bridge_name='br-int',has_traffic_filtering=True,id=28b0076a-cf34-43f8-b4e7-11c60a4699bd,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28b0076a-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.068 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28b0076a-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.076 2 INFO os_vif [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7c:0b,bridge_name='br-int',has_traffic_filtering=True,id=28b0076a-cf34-43f8-b4e7-11c60a4699bd,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28b0076a-cf')
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.131 2 DEBUG nova.compute.manager [req-edab9801-f642-4306-95c3-f6d2929872e4 req-3376a539-7aec-40c7-aea6-7c46fb1fadbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Received event network-changed-28b0076a-cf34-43f8-b4e7-11c60a4699bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.132 2 DEBUG nova.compute.manager [req-edab9801-f642-4306-95c3-f6d2929872e4 req-3376a539-7aec-40c7-aea6-7c46fb1fadbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Refreshing instance network info cache due to event network-changed-28b0076a-cf34-43f8-b4e7-11c60a4699bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.133 2 DEBUG oslo_concurrency.lockutils [req-edab9801-f642-4306-95c3-f6d2929872e4 req-3376a539-7aec-40c7-aea6-7c46fb1fadbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f74daf08-3420-4644-8fc8-b1450b733908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.133 2 DEBUG oslo_concurrency.lockutils [req-edab9801-f642-4306-95c3-f6d2929872e4 req-3376a539-7aec-40c7-aea6-7c46fb1fadbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f74daf08-3420-4644-8fc8-b1450b733908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.134 2 DEBUG nova.network.neutron [req-edab9801-f642-4306-95c3-f6d2929872e4 req-3376a539-7aec-40c7-aea6-7c46fb1fadbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Refreshing network info cache for port 28b0076a-cf34-43f8-b4e7-11c60a4699bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.489 2 INFO nova.virt.libvirt.driver [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Deleting instance files /var/lib/nova/instances/f74daf08-3420-4644-8fc8-b1450b733908_del
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.490 2 INFO nova.virt.libvirt.driver [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Deletion of /var/lib/nova/instances/f74daf08-3420-4644-8fc8-b1450b733908_del complete
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.618 2 INFO nova.scheduler.client.report [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Deleted allocations for instance f74daf08-3420-4644-8fc8-b1450b733908
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.676 2 DEBUG oslo_concurrency.lockutils [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.677 2 DEBUG oslo_concurrency.lockutils [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:09 compute-0 nova_compute[259627]: 2025-10-14 09:09:09.773 2 DEBUG oslo_concurrency.processutils [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 386 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 3.2 MiB/s wr, 237 op/s
Oct 14 09:09:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:09:10 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1537024332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:10 compute-0 nova_compute[259627]: 2025-10-14 09:09:10.259 2 DEBUG oslo_concurrency.processutils [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:10 compute-0 nova_compute[259627]: 2025-10-14 09:09:10.267 2 DEBUG nova.compute.provider_tree [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:09:10 compute-0 nova_compute[259627]: 2025-10-14 09:09:10.286 2 DEBUG nova.scheduler.client.report [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:09:10 compute-0 nova_compute[259627]: 2025-10-14 09:09:10.314 2 DEBUG oslo_concurrency.lockutils [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:10 compute-0 nova_compute[259627]: 2025-10-14 09:09:10.389 2 DEBUG oslo_concurrency.lockutils [None req-9940d680-db51-42a8-a9f5-7c1d4fa60e9a 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "f74daf08-3420-4644-8fc8-b1450b733908" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 10.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:10 compute-0 nova_compute[259627]: 2025-10-14 09:09:10.765 2 DEBUG nova.network.neutron [req-edab9801-f642-4306-95c3-f6d2929872e4 req-3376a539-7aec-40c7-aea6-7c46fb1fadbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Updated VIF entry in instance network info cache for port 28b0076a-cf34-43f8-b4e7-11c60a4699bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:09:10 compute-0 nova_compute[259627]: 2025-10-14 09:09:10.766 2 DEBUG nova.network.neutron [req-edab9801-f642-4306-95c3-f6d2929872e4 req-3376a539-7aec-40c7-aea6-7c46fb1fadbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Updating instance_info_cache with network_info: [{"id": "28b0076a-cf34-43f8-b4e7-11c60a4699bd", "address": "fa:16:3e:b0:7c:0b", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": null, "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap28b0076a-cf", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:09:10 compute-0 nova_compute[259627]: 2025-10-14 09:09:10.785 2 DEBUG oslo_concurrency.lockutils [req-edab9801-f642-4306-95c3-f6d2929872e4 req-3376a539-7aec-40c7-aea6-7c46fb1fadbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f74daf08-3420-4644-8fc8-b1450b733908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:09:10 compute-0 ceph-mon[74249]: pgmap v1633: 305 pgs: 305 active+clean; 386 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 3.2 MiB/s wr, 237 op/s
Oct 14 09:09:10 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1537024332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:11 compute-0 podman[333068]: 2025-10-14 09:09:11.684937335 +0000 UTC m=+0.096052486 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:09:11 compute-0 podman[333067]: 2025-10-14 09:09:11.688102973 +0000 UTC m=+0.092417207 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:09:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 339 MiB data, 703 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 2.7 MiB/s wr, 343 op/s
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.561 2 DEBUG oslo_concurrency.lockutils [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Acquiring lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.563 2 DEBUG oslo_concurrency.lockutils [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.564 2 DEBUG oslo_concurrency.lockutils [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Acquiring lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.565 2 DEBUG oslo_concurrency.lockutils [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.566 2 DEBUG oslo_concurrency.lockutils [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.569 2 INFO nova.compute.manager [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Terminating instance
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.573 2 DEBUG nova.compute.manager [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:09:12 compute-0 kernel: tapeade513b-fd (unregistering): left promiscuous mode
Oct 14 09:09:12 compute-0 NetworkManager[44885]: <info>  [1760432952.6268] device (tapeade513b-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:09:12 compute-0 ovn_controller[152662]: 2025-10-14T09:09:12Z|00707|binding|INFO|Releasing lport eade513b-fd09-42b6-ae4f-0d1913be50f8 from this chassis (sb_readonly=0)
Oct 14 09:09:12 compute-0 ovn_controller[152662]: 2025-10-14T09:09:12Z|00708|binding|INFO|Setting lport eade513b-fd09-42b6-ae4f-0d1913be50f8 down in Southbound
Oct 14 09:09:12 compute-0 ovn_controller[152662]: 2025-10-14T09:09:12Z|00709|binding|INFO|Removing iface tapeade513b-fd ovn-installed in OVS
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:12.654 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:3b:5f 10.100.0.4'], port_security=['fa:16:3e:16:3b:5f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9a0796fa-b0ab-49c7-ac7f-a66013f58074', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-327c2ca5-7a4d-4421-a285-7bcc1ce806d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '38ea96e617b14d28be8fd9f65647a849', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'da7fc45b-3de4-4937-861d-d9b4c2429bfa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67c8fc0d-000b-4c12-8ff9-0fd1ab8f32c9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=eade513b-fd09-42b6-ae4f-0d1913be50f8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:09:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:12.655 162547 INFO neutron.agent.ovn.metadata.agent [-] Port eade513b-fd09-42b6-ae4f-0d1913be50f8 in datapath 327c2ca5-7a4d-4421-a285-7bcc1ce806d2 unbound from our chassis
Oct 14 09:09:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:12.656 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 327c2ca5-7a4d-4421-a285-7bcc1ce806d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:09:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:12.657 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ea837839-b37f-4915-8912-956a04edf68e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:12.657 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2 namespace which is not needed anymore
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:12 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000047.scope: Deactivated successfully.
Oct 14 09:09:12 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000047.scope: Consumed 8.587s CPU time.
Oct 14 09:09:12 compute-0 systemd-machined[214636]: Machine qemu-85-instance-00000047 terminated.
Oct 14 09:09:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:09:12 compute-0 neutron-haproxy-ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2[333009]: [NOTICE]   (333015) : haproxy version is 2.8.14-c23fe91
Oct 14 09:09:12 compute-0 neutron-haproxy-ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2[333009]: [NOTICE]   (333015) : path to executable is /usr/sbin/haproxy
Oct 14 09:09:12 compute-0 neutron-haproxy-ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2[333009]: [WARNING]  (333015) : Exiting Master process...
Oct 14 09:09:12 compute-0 neutron-haproxy-ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2[333009]: [WARNING]  (333015) : Exiting Master process...
Oct 14 09:09:12 compute-0 neutron-haproxy-ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2[333009]: [ALERT]    (333015) : Current worker (333017) exited with code 143 (Terminated)
Oct 14 09:09:12 compute-0 neutron-haproxy-ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2[333009]: [WARNING]  (333015) : All workers exited. Exiting... (0)
Oct 14 09:09:12 compute-0 systemd[1]: libpod-a46e75ff5c5905a13bb8168c5ce12e161c523f5d4627000db8613938bd7171bd.scope: Deactivated successfully.
Oct 14 09:09:12 compute-0 podman[333128]: 2025-10-14 09:09:12.802255361 +0000 UTC m=+0.049090860 container died a46e75ff5c5905a13bb8168c5ce12e161c523f5d4627000db8613938bd7171bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.813 2 INFO nova.virt.libvirt.driver [-] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Instance destroyed successfully.
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.815 2 DEBUG nova.objects.instance [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lazy-loading 'resources' on Instance uuid 9a0796fa-b0ab-49c7-ac7f-a66013f58074 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.839 2 DEBUG nova.virt.libvirt.vif [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:08:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-371646039',display_name='tempest-ServerMetadataNegativeTestJSON-server-371646039',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-371646039',id=71,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:09:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='38ea96e617b14d28be8fd9f65647a849',ramdisk_id='',reservation_id='r-ngqpevh1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-62363335',owner_user_name='tempest-ServerMetadataNegativeTestJSON-62363335-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:09:05Z,user_data=None,user_id='851077659f3a499b97ce67237e198aab',uuid=9a0796fa-b0ab-49c7-ac7f-a66013f58074,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "address": "fa:16:3e:16:3b:5f", "network": {"id": "327c2ca5-7a4d-4421-a285-7bcc1ce806d2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1727148232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38ea96e617b14d28be8fd9f65647a849", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeade513b-fd", "ovs_interfaceid": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.840 2 DEBUG nova.network.os_vif_util [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Converting VIF {"id": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "address": "fa:16:3e:16:3b:5f", "network": {"id": "327c2ca5-7a4d-4421-a285-7bcc1ce806d2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1727148232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38ea96e617b14d28be8fd9f65647a849", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeade513b-fd", "ovs_interfaceid": "eade513b-fd09-42b6-ae4f-0d1913be50f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.841 2 DEBUG nova.network.os_vif_util [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:3b:5f,bridge_name='br-int',has_traffic_filtering=True,id=eade513b-fd09-42b6-ae4f-0d1913be50f8,network=Network(327c2ca5-7a4d-4421-a285-7bcc1ce806d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeade513b-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.842 2 DEBUG os_vif [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:3b:5f,bridge_name='br-int',has_traffic_filtering=True,id=eade513b-fd09-42b6-ae4f-0d1913be50f8,network=Network(327c2ca5-7a4d-4421-a285-7bcc1ce806d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeade513b-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:09:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a46e75ff5c5905a13bb8168c5ce12e161c523f5d4627000db8613938bd7171bd-userdata-shm.mount: Deactivated successfully.
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.847 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeade513b-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac994cc8898f2009254ad2034042d3fe3674a7c5c7f52510a42b9fa37a963f86-merged.mount: Deactivated successfully.
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.854 2 INFO os_vif [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:3b:5f,bridge_name='br-int',has_traffic_filtering=True,id=eade513b-fd09-42b6-ae4f-0d1913be50f8,network=Network(327c2ca5-7a4d-4421-a285-7bcc1ce806d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeade513b-fd')
Oct 14 09:09:12 compute-0 podman[333128]: 2025-10-14 09:09:12.863729785 +0000 UTC m=+0.110565274 container cleanup a46e75ff5c5905a13bb8168c5ce12e161c523f5d4627000db8613938bd7171bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:09:12 compute-0 ceph-mon[74249]: pgmap v1634: 305 pgs: 305 active+clean; 339 MiB data, 703 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 2.7 MiB/s wr, 343 op/s
Oct 14 09:09:12 compute-0 systemd[1]: libpod-conmon-a46e75ff5c5905a13bb8168c5ce12e161c523f5d4627000db8613938bd7171bd.scope: Deactivated successfully.
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.895 2 DEBUG nova.compute.manager [req-3187ea95-608f-4b5a-9ac0-16ea5f4372da req-d63fc8ad-31a4-4fb4-bb9d-b00465498f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Received event network-vif-unplugged-eade513b-fd09-42b6-ae4f-0d1913be50f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.896 2 DEBUG oslo_concurrency.lockutils [req-3187ea95-608f-4b5a-9ac0-16ea5f4372da req-d63fc8ad-31a4-4fb4-bb9d-b00465498f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.896 2 DEBUG oslo_concurrency.lockutils [req-3187ea95-608f-4b5a-9ac0-16ea5f4372da req-d63fc8ad-31a4-4fb4-bb9d-b00465498f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.897 2 DEBUG oslo_concurrency.lockutils [req-3187ea95-608f-4b5a-9ac0-16ea5f4372da req-d63fc8ad-31a4-4fb4-bb9d-b00465498f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.897 2 DEBUG nova.compute.manager [req-3187ea95-608f-4b5a-9ac0-16ea5f4372da req-d63fc8ad-31a4-4fb4-bb9d-b00465498f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] No waiting events found dispatching network-vif-unplugged-eade513b-fd09-42b6-ae4f-0d1913be50f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.897 2 DEBUG nova.compute.manager [req-3187ea95-608f-4b5a-9ac0-16ea5f4372da req-d63fc8ad-31a4-4fb4-bb9d-b00465498f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Received event network-vif-unplugged-eade513b-fd09-42b6-ae4f-0d1913be50f8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:09:12 compute-0 podman[333184]: 2025-10-14 09:09:12.945997631 +0000 UTC m=+0.052797981 container remove a46e75ff5c5905a13bb8168c5ce12e161c523f5d4627000db8613938bd7171bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 09:09:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:12.953 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[603d0877-f973-47d5-9acd-e30174d524a6]: (4, ('Tue Oct 14 09:09:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2 (a46e75ff5c5905a13bb8168c5ce12e161c523f5d4627000db8613938bd7171bd)\na46e75ff5c5905a13bb8168c5ce12e161c523f5d4627000db8613938bd7171bd\nTue Oct 14 09:09:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2 (a46e75ff5c5905a13bb8168c5ce12e161c523f5d4627000db8613938bd7171bd)\na46e75ff5c5905a13bb8168c5ce12e161c523f5d4627000db8613938bd7171bd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:12.955 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f518bb-8811-463e-9c78-dd954125701e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:12.959 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap327c2ca5-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:12 compute-0 kernel: tap327c2ca5-70: left promiscuous mode
Oct 14 09:09:12 compute-0 nova_compute[259627]: 2025-10-14 09:09:12.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:12.984 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4d2fd090-c7bb-4169-9944-ba2a7abd37b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:13.010 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[31be56c8-6964-4488-9d94-2477979a0158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:13.013 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d8911844-976c-4cf3-9794-7ba07ae5815a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:13.030 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0f719b4f-4503-4bf6-a6ae-418e00867d9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672272, 'reachable_time': 21317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333202, 'error': None, 'target': 'ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d327c2ca5\x2d7a4d\x2d4421\x2da285\x2d7bcc1ce806d2.mount: Deactivated successfully.
Oct 14 09:09:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:13.036 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-327c2ca5-7a4d-4421-a285-7bcc1ce806d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:09:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:13.036 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c86e9a-047a-4bd6-b225-cdeba1a24620]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:13 compute-0 ovn_controller[152662]: 2025-10-14T09:09:13Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c3:77:95 10.100.0.13
Oct 14 09:09:13 compute-0 ovn_controller[152662]: 2025-10-14T09:09:13Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c3:77:95 10.100.0.13
Oct 14 09:09:13 compute-0 nova_compute[259627]: 2025-10-14 09:09:13.319 2 INFO nova.virt.libvirt.driver [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Deleting instance files /var/lib/nova/instances/9a0796fa-b0ab-49c7-ac7f-a66013f58074_del
Oct 14 09:09:13 compute-0 nova_compute[259627]: 2025-10-14 09:09:13.320 2 INFO nova.virt.libvirt.driver [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Deletion of /var/lib/nova/instances/9a0796fa-b0ab-49c7-ac7f-a66013f58074_del complete
Oct 14 09:09:13 compute-0 nova_compute[259627]: 2025-10-14 09:09:13.379 2 INFO nova.compute.manager [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 14 09:09:13 compute-0 nova_compute[259627]: 2025-10-14 09:09:13.379 2 DEBUG oslo.service.loopingcall [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:09:13 compute-0 nova_compute[259627]: 2025-10-14 09:09:13.380 2 DEBUG nova.compute.manager [-] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:09:13 compute-0 nova_compute[259627]: 2025-10-14 09:09:13.380 2 DEBUG nova.network.neutron [-] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:09:13 compute-0 nova_compute[259627]: 2025-10-14 09:09:13.525 2 DEBUG oslo_concurrency.lockutils [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:13 compute-0 nova_compute[259627]: 2025-10-14 09:09:13.525 2 DEBUG oslo_concurrency.lockutils [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:13 compute-0 nova_compute[259627]: 2025-10-14 09:09:13.525 2 INFO nova.compute.manager [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Shelving
Oct 14 09:09:13 compute-0 nova_compute[259627]: 2025-10-14 09:09:13.546 2 DEBUG nova.virt.libvirt.driver [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:09:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 339 MiB data, 703 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 2.2 MiB/s wr, 285 op/s
Oct 14 09:09:13 compute-0 nova_compute[259627]: 2025-10-14 09:09:13.995 2 DEBUG nova.network.neutron [-] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:09:14 compute-0 nova_compute[259627]: 2025-10-14 09:09:14.014 2 INFO nova.compute.manager [-] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Took 0.63 seconds to deallocate network for instance.
Oct 14 09:09:14 compute-0 nova_compute[259627]: 2025-10-14 09:09:14.064 2 DEBUG oslo_concurrency.lockutils [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:14 compute-0 nova_compute[259627]: 2025-10-14 09:09:14.065 2 DEBUG oslo_concurrency.lockutils [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:14 compute-0 nova_compute[259627]: 2025-10-14 09:09:14.163 2 DEBUG oslo_concurrency.processutils [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:09:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1012451383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:14 compute-0 nova_compute[259627]: 2025-10-14 09:09:14.609 2 DEBUG oslo_concurrency.processutils [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:14 compute-0 nova_compute[259627]: 2025-10-14 09:09:14.618 2 DEBUG nova.compute.provider_tree [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:09:14 compute-0 nova_compute[259627]: 2025-10-14 09:09:14.637 2 DEBUG nova.scheduler.client.report [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:09:14 compute-0 nova_compute[259627]: 2025-10-14 09:09:14.670 2 DEBUG oslo_concurrency.lockutils [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:14 compute-0 nova_compute[259627]: 2025-10-14 09:09:14.706 2 INFO nova.scheduler.client.report [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Deleted allocations for instance 9a0796fa-b0ab-49c7-ac7f-a66013f58074
Oct 14 09:09:14 compute-0 nova_compute[259627]: 2025-10-14 09:09:14.751 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 09:09:14 compute-0 nova_compute[259627]: 2025-10-14 09:09:14.797 2 DEBUG oslo_concurrency.lockutils [None req-78519d8c-cee3-4c6e-8c08-dae6b93c7eb1 851077659f3a499b97ce67237e198aab 38ea96e617b14d28be8fd9f65647a849 - - default default] Lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:14 compute-0 ceph-mon[74249]: pgmap v1635: 305 pgs: 305 active+clean; 339 MiB data, 703 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 2.2 MiB/s wr, 285 op/s
Oct 14 09:09:14 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1012451383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:15 compute-0 sudo[333227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:09:15 compute-0 sudo[333227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:09:15 compute-0 sudo[333227]: pam_unix(sudo:session): session closed for user root
Oct 14 09:09:15 compute-0 sudo[333252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:09:15 compute-0 sudo[333252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:09:15 compute-0 sudo[333252]: pam_unix(sudo:session): session closed for user root
Oct 14 09:09:15 compute-0 sudo[333277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:09:15 compute-0 sudo[333277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:09:15 compute-0 sudo[333277]: pam_unix(sudo:session): session closed for user root
Oct 14 09:09:15 compute-0 sudo[333302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:09:15 compute-0 sudo[333302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:09:15 compute-0 nova_compute[259627]: 2025-10-14 09:09:15.504 2 DEBUG nova.compute.manager [req-4b4ade9a-6c9b-4693-8399-17436d61e3f9 req-b789b9e2-b574-4863-9d6b-f3a8eef7aab8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Received event network-vif-plugged-eade513b-fd09-42b6-ae4f-0d1913be50f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:15 compute-0 nova_compute[259627]: 2025-10-14 09:09:15.506 2 DEBUG oslo_concurrency.lockutils [req-4b4ade9a-6c9b-4693-8399-17436d61e3f9 req-b789b9e2-b574-4863-9d6b-f3a8eef7aab8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:15 compute-0 nova_compute[259627]: 2025-10-14 09:09:15.506 2 DEBUG oslo_concurrency.lockutils [req-4b4ade9a-6c9b-4693-8399-17436d61e3f9 req-b789b9e2-b574-4863-9d6b-f3a8eef7aab8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:15 compute-0 nova_compute[259627]: 2025-10-14 09:09:15.507 2 DEBUG oslo_concurrency.lockutils [req-4b4ade9a-6c9b-4693-8399-17436d61e3f9 req-b789b9e2-b574-4863-9d6b-f3a8eef7aab8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9a0796fa-b0ab-49c7-ac7f-a66013f58074-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:15 compute-0 nova_compute[259627]: 2025-10-14 09:09:15.508 2 DEBUG nova.compute.manager [req-4b4ade9a-6c9b-4693-8399-17436d61e3f9 req-b789b9e2-b574-4863-9d6b-f3a8eef7aab8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] No waiting events found dispatching network-vif-plugged-eade513b-fd09-42b6-ae4f-0d1913be50f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:15 compute-0 nova_compute[259627]: 2025-10-14 09:09:15.508 2 WARNING nova.compute.manager [req-4b4ade9a-6c9b-4693-8399-17436d61e3f9 req-b789b9e2-b574-4863-9d6b-f3a8eef7aab8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Received unexpected event network-vif-plugged-eade513b-fd09-42b6-ae4f-0d1913be50f8 for instance with vm_state deleted and task_state None.
Oct 14 09:09:15 compute-0 nova_compute[259627]: 2025-10-14 09:09:15.509 2 DEBUG nova.compute.manager [req-4b4ade9a-6c9b-4693-8399-17436d61e3f9 req-b789b9e2-b574-4863-9d6b-f3a8eef7aab8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Received event network-vif-deleted-eade513b-fd09-42b6-ae4f-0d1913be50f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:15 compute-0 nova_compute[259627]: 2025-10-14 09:09:15.510 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432940.5042844, f74daf08-3420-4644-8fc8-b1450b733908 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:15 compute-0 nova_compute[259627]: 2025-10-14 09:09:15.510 2 INFO nova.compute.manager [-] [instance: f74daf08-3420-4644-8fc8-b1450b733908] VM Stopped (Lifecycle Event)
Oct 14 09:09:15 compute-0 nova_compute[259627]: 2025-10-14 09:09:15.530 2 DEBUG nova.compute.manager [None req-baa23098-9b0b-4a61-be65-c90f2387bde1 - - - - - -] [instance: f74daf08-3420-4644-8fc8-b1450b733908] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 326 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 232 op/s
Oct 14 09:09:15 compute-0 kernel: tape18648ba-61 (unregistering): left promiscuous mode
Oct 14 09:09:15 compute-0 NetworkManager[44885]: <info>  [1760432955.8187] device (tape18648ba-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:09:15 compute-0 ovn_controller[152662]: 2025-10-14T09:09:15Z|00710|binding|INFO|Releasing lport e18648ba-6112-40fa-85f6-bdf82a012079 from this chassis (sb_readonly=0)
Oct 14 09:09:15 compute-0 ovn_controller[152662]: 2025-10-14T09:09:15Z|00711|binding|INFO|Setting lport e18648ba-6112-40fa-85f6-bdf82a012079 down in Southbound
Oct 14 09:09:15 compute-0 ovn_controller[152662]: 2025-10-14T09:09:15Z|00712|binding|INFO|Removing iface tape18648ba-61 ovn-installed in OVS
Oct 14 09:09:15 compute-0 nova_compute[259627]: 2025-10-14 09:09:15.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:15.892 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:9e:35 10.100.0.9'], port_security=['fa:16:3e:0b:9e:35 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e065d857-2df9-4199-aa98-41ca3c436bad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bda6775f81f403e83269a5f798c9853', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'baab55cf-9843-49b9-a43b-28ca1ab122c2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88e90b59-4c4c-42c1-a4ed-574ac64367e5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e18648ba-6112-40fa-85f6-bdf82a012079) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:09:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:15.894 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e18648ba-6112-40fa-85f6-bdf82a012079 in datapath 9d540b01-e9c4-4dc5-9a51-94512ad9a409 unbound from our chassis
Oct 14 09:09:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:15.896 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d540b01-e9c4-4dc5-9a51-94512ad9a409
Oct 14 09:09:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:15.915 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a8555fce-c3ed-4990-986e-9d893ee62c03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:15 compute-0 nova_compute[259627]: 2025-10-14 09:09:15.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:15 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000040.scope: Deactivated successfully.
Oct 14 09:09:15 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000040.scope: Consumed 17.737s CPU time.
Oct 14 09:09:15 compute-0 systemd-machined[214636]: Machine qemu-78-instance-00000040 terminated.
Oct 14 09:09:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:15.963 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[27adcbf6-0787-4765-ba80-20594664b501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:15.967 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb7d9a2-9a59-4479-9c82-e2eb5dd93fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:15.996 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[53ce2449-d92f-4e5e-aa73-57d6df62ecb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:16.018 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[09ca0d6f-8528-4930-a59f-1e1a7eebacc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d540b01-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:6a:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662158, 'reachable_time': 22381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333369, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:16.035 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[764492aa-cb3f-454e-8a6a-aa9e032942ce]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9d540b01-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662169, 'tstamp': 662169}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333370, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9d540b01-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662173, 'tstamp': 662173}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333370, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:16.037 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d540b01-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:16 compute-0 nova_compute[259627]: 2025-10-14 09:09:16.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:16 compute-0 nova_compute[259627]: 2025-10-14 09:09:16.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:16.045 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d540b01-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:16.046 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:09:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:16.046 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d540b01-e0, col_values=(('external_ids', {'iface-id': 'fcca615a-5470-4880-844d-73adc425bce1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:16.047 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:09:16 compute-0 sudo[333302]: pam_unix(sudo:session): session closed for user root
Oct 14 09:09:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:09:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:09:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:09:16 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:09:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:09:16 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:09:16 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 1b1b514c-a9f3-4e39-a4e9-c93bd9f94341 does not exist
Oct 14 09:09:16 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 6e6febfa-ba26-46a4-99b5-3d5b43770cde does not exist
Oct 14 09:09:16 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 25ad7aff-bd9a-41e0-bc1a-0e9553746175 does not exist
Oct 14 09:09:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:09:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:09:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:09:16 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:09:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:09:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:09:16 compute-0 nova_compute[259627]: 2025-10-14 09:09:16.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:16 compute-0 nova_compute[259627]: 2025-10-14 09:09:16.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:16 compute-0 sudo[333376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:09:16 compute-0 sudo[333376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:09:16 compute-0 sudo[333376]: pam_unix(sudo:session): session closed for user root
Oct 14 09:09:16 compute-0 sudo[333407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:09:16 compute-0 sudo[333407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:09:16 compute-0 sudo[333407]: pam_unix(sudo:session): session closed for user root
Oct 14 09:09:16 compute-0 sudo[333432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:09:16 compute-0 sudo[333432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:09:16 compute-0 sudo[333432]: pam_unix(sudo:session): session closed for user root
Oct 14 09:09:16 compute-0 sudo[333457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:09:16 compute-0 sudo[333457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:09:16 compute-0 nova_compute[259627]: 2025-10-14 09:09:16.440 2 DEBUG nova.compute.manager [req-bd992bc2-b424-45b8-8600-a4e4cb03d3b1 req-35896ab8-845f-441d-b930-f67a62d5bf5c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received event network-vif-unplugged-e18648ba-6112-40fa-85f6-bdf82a012079 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:16 compute-0 nova_compute[259627]: 2025-10-14 09:09:16.441 2 DEBUG oslo_concurrency.lockutils [req-bd992bc2-b424-45b8-8600-a4e4cb03d3b1 req-35896ab8-845f-441d-b930-f67a62d5bf5c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:16 compute-0 nova_compute[259627]: 2025-10-14 09:09:16.441 2 DEBUG oslo_concurrency.lockutils [req-bd992bc2-b424-45b8-8600-a4e4cb03d3b1 req-35896ab8-845f-441d-b930-f67a62d5bf5c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:16 compute-0 nova_compute[259627]: 2025-10-14 09:09:16.442 2 DEBUG oslo_concurrency.lockutils [req-bd992bc2-b424-45b8-8600-a4e4cb03d3b1 req-35896ab8-845f-441d-b930-f67a62d5bf5c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:16 compute-0 nova_compute[259627]: 2025-10-14 09:09:16.443 2 DEBUG nova.compute.manager [req-bd992bc2-b424-45b8-8600-a4e4cb03d3b1 req-35896ab8-845f-441d-b930-f67a62d5bf5c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] No waiting events found dispatching network-vif-unplugged-e18648ba-6112-40fa-85f6-bdf82a012079 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:16 compute-0 nova_compute[259627]: 2025-10-14 09:09:16.443 2 WARNING nova.compute.manager [req-bd992bc2-b424-45b8-8600-a4e4cb03d3b1 req-35896ab8-845f-441d-b930-f67a62d5bf5c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received unexpected event network-vif-unplugged-e18648ba-6112-40fa-85f6-bdf82a012079 for instance with vm_state active and task_state shelving.
Oct 14 09:09:16 compute-0 nova_compute[259627]: 2025-10-14 09:09:16.606 2 INFO nova.virt.libvirt.driver [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Instance shutdown successfully after 3 seconds.
Oct 14 09:09:16 compute-0 nova_compute[259627]: 2025-10-14 09:09:16.613 2 INFO nova.virt.libvirt.driver [-] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Instance destroyed successfully.
Oct 14 09:09:16 compute-0 nova_compute[259627]: 2025-10-14 09:09:16.613 2 DEBUG nova.objects.instance [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'numa_topology' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:16 compute-0 podman[333524]: 2025-10-14 09:09:16.799972574 +0000 UTC m=+0.058427199 container create fb165a023919810d77538f6332c830e7e61208bd86be6ace12c11281c46e6364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goodall, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:09:16 compute-0 nova_compute[259627]: 2025-10-14 09:09:16.832 2 INFO nova.virt.libvirt.driver [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Beginning cold snapshot process
Oct 14 09:09:16 compute-0 systemd[1]: Started libpod-conmon-fb165a023919810d77538f6332c830e7e61208bd86be6ace12c11281c46e6364.scope.
Oct 14 09:09:16 compute-0 podman[333524]: 2025-10-14 09:09:16.762888921 +0000 UTC m=+0.021343576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:09:16 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:09:16 compute-0 ceph-mon[74249]: pgmap v1636: 305 pgs: 305 active+clean; 326 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 232 op/s
Oct 14 09:09:16 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:09:16 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:09:16 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:09:16 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:09:16 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:09:16 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:09:16 compute-0 podman[333524]: 2025-10-14 09:09:16.917642212 +0000 UTC m=+0.176096867 container init fb165a023919810d77538f6332c830e7e61208bd86be6ace12c11281c46e6364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 09:09:16 compute-0 podman[333524]: 2025-10-14 09:09:16.926830939 +0000 UTC m=+0.185285564 container start fb165a023919810d77538f6332c830e7e61208bd86be6ace12c11281c46e6364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goodall, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 09:09:16 compute-0 podman[333524]: 2025-10-14 09:09:16.931644357 +0000 UTC m=+0.190099012 container attach fb165a023919810d77538f6332c830e7e61208bd86be6ace12c11281c46e6364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goodall, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:09:16 compute-0 blissful_goodall[333540]: 167 167
Oct 14 09:09:16 compute-0 systemd[1]: libpod-fb165a023919810d77538f6332c830e7e61208bd86be6ace12c11281c46e6364.scope: Deactivated successfully.
Oct 14 09:09:16 compute-0 podman[333524]: 2025-10-14 09:09:16.935162154 +0000 UTC m=+0.193616789 container died fb165a023919810d77538f6332c830e7e61208bd86be6ace12c11281c46e6364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goodall, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:09:16 compute-0 nova_compute[259627]: 2025-10-14 09:09:16.962 2 DEBUG nova.virt.libvirt.imagebackend [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 09:09:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b3565eb2f97060ebcab0df8995c9b7c49c9c2e677a6841e19dd0cd2a552705a-merged.mount: Deactivated successfully.
Oct 14 09:09:17 compute-0 podman[333524]: 2025-10-14 09:09:17.023990052 +0000 UTC m=+0.282444677 container remove fb165a023919810d77538f6332c830e7e61208bd86be6ace12c11281c46e6364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 09:09:17 compute-0 systemd[1]: libpod-conmon-fb165a023919810d77538f6332c830e7e61208bd86be6ace12c11281c46e6364.scope: Deactivated successfully.
Oct 14 09:09:17 compute-0 kernel: tap67728610-67 (unregistering): left promiscuous mode
Oct 14 09:09:17 compute-0 NetworkManager[44885]: <info>  [1760432957.0920] device (tap67728610-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:17 compute-0 ovn_controller[152662]: 2025-10-14T09:09:17Z|00713|binding|INFO|Releasing lport 67728610-6776-4496-98b2-a14f59c9674d from this chassis (sb_readonly=0)
Oct 14 09:09:17 compute-0 ovn_controller[152662]: 2025-10-14T09:09:17Z|00714|binding|INFO|Setting lport 67728610-6776-4496-98b2-a14f59c9674d down in Southbound
Oct 14 09:09:17 compute-0 ovn_controller[152662]: 2025-10-14T09:09:17Z|00715|binding|INFO|Removing iface tap67728610-67 ovn-installed in OVS
Oct 14 09:09:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:17.119 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:77:95 10.100.0.13'], port_security=['fa:16:3e:c3:77:95 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e9ccbc44-715e-4419-9286-f0cb6e41d9cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99db3452-8467-4a2b-a51d-30679c346bb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9099e3128b584ff7a140b8021451223e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '14cbcf9b-48b8-496d-985e-160ef22d10a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cb6279-57ca-4d4a-8018-5af2d7c42670, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=67728610-6776-4496-98b2-a14f59c9674d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:09:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:17.120 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 67728610-6776-4496-98b2-a14f59c9674d in datapath 99db3452-8467-4a2b-a51d-30679c346bb2 unbound from our chassis
Oct 14 09:09:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:17.121 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99db3452-8467-4a2b-a51d-30679c346bb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:09:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:17.122 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[10022e6c-7772-4b8d-bdb6-6e9601fa2a0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:17.122 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 namespace which is not needed anymore
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:17 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000046.scope: Deactivated successfully.
Oct 14 09:09:17 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000046.scope: Consumed 13.464s CPU time.
Oct 14 09:09:17 compute-0 systemd-machined[214636]: Machine qemu-84-instance-00000046 terminated.
Oct 14 09:09:17 compute-0 podman[333606]: 2025-10-14 09:09:17.208382503 +0000 UTC m=+0.049560492 container create 04b7bee6393103d4ba8624fc8d0ecbac1317d0b5d97fce483bf47ccbbb80608d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:09:17 compute-0 systemd[1]: Started libpod-conmon-04b7bee6393103d4ba8624fc8d0ecbac1317d0b5d97fce483bf47ccbbb80608d.scope.
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.238 2 DEBUG nova.storage.rbd_utils [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] creating snapshot(2e8a3dd0b2d247a1a466176807ce33f3) on rbd image(e065d857-2df9-4199-aa98-41ca3c436bad_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:09:17 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:09:17 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[332329]: [NOTICE]   (332333) : haproxy version is 2.8.14-c23fe91
Oct 14 09:09:17 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[332329]: [NOTICE]   (332333) : path to executable is /usr/sbin/haproxy
Oct 14 09:09:17 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[332329]: [WARNING]  (332333) : Exiting Master process...
Oct 14 09:09:17 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[332329]: [WARNING]  (332333) : Exiting Master process...
Oct 14 09:09:17 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[332329]: [ALERT]    (332333) : Current worker (332369) exited with code 143 (Terminated)
Oct 14 09:09:17 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[332329]: [WARNING]  (332333) : All workers exited. Exiting... (0)
Oct 14 09:09:17 compute-0 systemd[1]: libpod-a67c9cafd189f26f4bee01f4597d52555bacfc28cff95f7963bd7b4c41e1b3ff.scope: Deactivated successfully.
Oct 14 09:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f1451e17cdeefe9555e43b5a1d0196c4f70eeecf3359b11bd5dc373c231734/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:09:17 compute-0 podman[333632]: 2025-10-14 09:09:17.27812644 +0000 UTC m=+0.059519416 container died a67c9cafd189f26f4bee01f4597d52555bacfc28cff95f7963bd7b4c41e1b3ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 09:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f1451e17cdeefe9555e43b5a1d0196c4f70eeecf3359b11bd5dc373c231734/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f1451e17cdeefe9555e43b5a1d0196c4f70eeecf3359b11bd5dc373c231734/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f1451e17cdeefe9555e43b5a1d0196c4f70eeecf3359b11bd5dc373c231734/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f1451e17cdeefe9555e43b5a1d0196c4f70eeecf3359b11bd5dc373c231734/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:09:17 compute-0 podman[333606]: 2025-10-14 09:09:17.184927815 +0000 UTC m=+0.026105844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:09:17 compute-0 podman[333606]: 2025-10-14 09:09:17.302093281 +0000 UTC m=+0.143271270 container init 04b7bee6393103d4ba8624fc8d0ecbac1317d0b5d97fce483bf47ccbbb80608d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 09:09:17 compute-0 podman[333606]: 2025-10-14 09:09:17.312240661 +0000 UTC m=+0.153418660 container start 04b7bee6393103d4ba8624fc8d0ecbac1317d0b5d97fce483bf47ccbbb80608d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 09:09:17 compute-0 podman[333606]: 2025-10-14 09:09:17.317297135 +0000 UTC m=+0.158475154 container attach 04b7bee6393103d4ba8624fc8d0ecbac1317d0b5d97fce483bf47ccbbb80608d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_noether, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:09:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a67c9cafd189f26f4bee01f4597d52555bacfc28cff95f7963bd7b4c41e1b3ff-userdata-shm.mount: Deactivated successfully.
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-28b23b051526deddaa336b2f5c3fec20155e9d4d43143be390798695acdb5cf3-merged.mount: Deactivated successfully.
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:17 compute-0 podman[333632]: 2025-10-14 09:09:17.343997973 +0000 UTC m=+0.125390949 container cleanup a67c9cafd189f26f4bee01f4597d52555bacfc28cff95f7963bd7b4c41e1b3ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 09:09:17 compute-0 systemd[1]: libpod-conmon-a67c9cafd189f26f4bee01f4597d52555bacfc28cff95f7963bd7b4c41e1b3ff.scope: Deactivated successfully.
Oct 14 09:09:17 compute-0 podman[333698]: 2025-10-14 09:09:17.424734831 +0000 UTC m=+0.046870565 container remove a67c9cafd189f26f4bee01f4597d52555bacfc28cff95f7963bd7b4c41e1b3ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:09:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:17.431 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f1266f22-a85e-4916-9ce0-e3f2817d6266]: (4, ('Tue Oct 14 09:09:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 (a67c9cafd189f26f4bee01f4597d52555bacfc28cff95f7963bd7b4c41e1b3ff)\na67c9cafd189f26f4bee01f4597d52555bacfc28cff95f7963bd7b4c41e1b3ff\nTue Oct 14 09:09:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 (a67c9cafd189f26f4bee01f4597d52555bacfc28cff95f7963bd7b4c41e1b3ff)\na67c9cafd189f26f4bee01f4597d52555bacfc28cff95f7963bd7b4c41e1b3ff\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:17.433 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7935e8fc-2c55-45d7-8d2c-5be764b37dd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:17.433 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99db3452-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:17 compute-0 kernel: tap99db3452-80: left promiscuous mode
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:17.460 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[68ff58e8-1ce6-4937-bf41-970d17d9d68f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:17.482 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4fd2d6d4-6e8e-40bb-868b-e37130a5afa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:17.483 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4f79baa8-f644-492f-b9d6-b479758cb5f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:17.497 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[31d09952-a3dd-402c-978a-646f357596d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671471, 'reachable_time': 25715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333719, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:17.499 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:09:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:17.499 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[62e3b49c-f845-498a-aed1-503caf4e00f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.767 2 INFO nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Instance shutdown successfully after 13 seconds.
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.773 2 INFO nova.virt.libvirt.driver [-] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Instance destroyed successfully.
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.781 2 INFO nova.virt.libvirt.driver [-] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Instance destroyed successfully.
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.782 2 DEBUG nova.virt.libvirt.vif [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:08:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1155536075',display_name='tempest-ServerDiskConfigTestJSON-server-1155536075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1155536075',id=70,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:09:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9099e3128b584ff7a140b8021451223e',ramdisk_id='',reservation_id='r-ak09v01b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1253454894',owner_user_name='tempest-ServerDiskConfigTestJSON-1253454894-project-member'
},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:09:03Z,user_data=None,user_id='979aa20794dc414f91c59f224a0db083',uuid=e9ccbc44-715e-4419-9286-f0cb6e41d9cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "67728610-6776-4496-98b2-a14f59c9674d", "address": "fa:16:3e:c3:77:95", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67728610-67", "ovs_interfaceid": "67728610-6776-4496-98b2-a14f59c9674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.783 2 DEBUG nova.network.os_vif_util [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converting VIF {"id": "67728610-6776-4496-98b2-a14f59c9674d", "address": "fa:16:3e:c3:77:95", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67728610-67", "ovs_interfaceid": "67728610-6776-4496-98b2-a14f59c9674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.784 2 DEBUG nova.network.os_vif_util [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:77:95,bridge_name='br-int',has_traffic_filtering=True,id=67728610-6776-4496-98b2-a14f59c9674d,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67728610-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.785 2 DEBUG os_vif [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:77:95,bridge_name='br-int',has_traffic_filtering=True,id=67728610-6776-4496-98b2-a14f59c9674d,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67728610-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.789 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67728610-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:17 compute-0 nova_compute[259627]: 2025-10-14 09:09:17.800 2 INFO os_vif [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:77:95,bridge_name='br-int',has_traffic_filtering=True,id=67728610-6776-4496-98b2-a14f59c9674d,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67728610-67')
Oct 14 09:09:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d99db3452\x2d8467\x2d4a2b\x2da51d\x2d30679c346bb2.mount: Deactivated successfully.
Oct 14 09:09:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 326 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 230 op/s
Oct 14 09:09:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Oct 14 09:09:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Oct 14 09:09:17 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.055 2 DEBUG nova.storage.rbd_utils [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] cloning vms/e065d857-2df9-4199-aa98-41ca3c436bad_disk@2e8a3dd0b2d247a1a466176807ce33f3 to images/afaa5b96-a65d-4840-9509-b25dadfeafa7 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.214 2 DEBUG nova.storage.rbd_utils [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] flattening images/afaa5b96-a65d-4840-9509-b25dadfeafa7 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:09:18 compute-0 exciting_noether[333646]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:09:18 compute-0 exciting_noether[333646]: --> relative data size: 1.0
Oct 14 09:09:18 compute-0 exciting_noether[333646]: --> All data devices are unavailable
Oct 14 09:09:18 compute-0 systemd[1]: libpod-04b7bee6393103d4ba8624fc8d0ecbac1317d0b5d97fce483bf47ccbbb80608d.scope: Deactivated successfully.
Oct 14 09:09:18 compute-0 podman[333606]: 2025-10-14 09:09:18.504310788 +0000 UTC m=+1.345488777 container died 04b7bee6393103d4ba8624fc8d0ecbac1317d0b5d97fce483bf47ccbbb80608d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 09:09:18 compute-0 systemd[1]: libpod-04b7bee6393103d4ba8624fc8d0ecbac1317d0b5d97fce483bf47ccbbb80608d.scope: Consumed 1.102s CPU time.
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.590 2 DEBUG nova.compute.manager [req-83c2c08e-e247-4816-97e4-949aed60cc44 req-fecd2364-6d7f-4a43-864d-9318a9fa0b3b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.590 2 DEBUG oslo_concurrency.lockutils [req-83c2c08e-e247-4816-97e4-949aed60cc44 req-fecd2364-6d7f-4a43-864d-9318a9fa0b3b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.591 2 DEBUG oslo_concurrency.lockutils [req-83c2c08e-e247-4816-97e4-949aed60cc44 req-fecd2364-6d7f-4a43-864d-9318a9fa0b3b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.591 2 DEBUG oslo_concurrency.lockutils [req-83c2c08e-e247-4816-97e4-949aed60cc44 req-fecd2364-6d7f-4a43-864d-9318a9fa0b3b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.591 2 DEBUG nova.compute.manager [req-83c2c08e-e247-4816-97e4-949aed60cc44 req-fecd2364-6d7f-4a43-864d-9318a9fa0b3b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] No waiting events found dispatching network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.592 2 WARNING nova.compute.manager [req-83c2c08e-e247-4816-97e4-949aed60cc44 req-fecd2364-6d7f-4a43-864d-9318a9fa0b3b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received unexpected event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 for instance with vm_state active and task_state shelving_image_uploading.
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.592 2 DEBUG nova.compute.manager [req-83c2c08e-e247-4816-97e4-949aed60cc44 req-fecd2364-6d7f-4a43-864d-9318a9fa0b3b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Received event network-vif-unplugged-67728610-6776-4496-98b2-a14f59c9674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.592 2 DEBUG oslo_concurrency.lockutils [req-83c2c08e-e247-4816-97e4-949aed60cc44 req-fecd2364-6d7f-4a43-864d-9318a9fa0b3b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.592 2 DEBUG oslo_concurrency.lockutils [req-83c2c08e-e247-4816-97e4-949aed60cc44 req-fecd2364-6d7f-4a43-864d-9318a9fa0b3b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.593 2 DEBUG oslo_concurrency.lockutils [req-83c2c08e-e247-4816-97e4-949aed60cc44 req-fecd2364-6d7f-4a43-864d-9318a9fa0b3b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.593 2 DEBUG nova.compute.manager [req-83c2c08e-e247-4816-97e4-949aed60cc44 req-fecd2364-6d7f-4a43-864d-9318a9fa0b3b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] No waiting events found dispatching network-vif-unplugged-67728610-6776-4496-98b2-a14f59c9674d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.593 2 WARNING nova.compute.manager [req-83c2c08e-e247-4816-97e4-949aed60cc44 req-fecd2364-6d7f-4a43-864d-9318a9fa0b3b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Received unexpected event network-vif-unplugged-67728610-6776-4496-98b2-a14f59c9674d for instance with vm_state active and task_state rebuilding.
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.593 2 DEBUG nova.compute.manager [req-83c2c08e-e247-4816-97e4-949aed60cc44 req-fecd2364-6d7f-4a43-864d-9318a9fa0b3b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Received event network-vif-plugged-67728610-6776-4496-98b2-a14f59c9674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.594 2 DEBUG oslo_concurrency.lockutils [req-83c2c08e-e247-4816-97e4-949aed60cc44 req-fecd2364-6d7f-4a43-864d-9318a9fa0b3b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.594 2 DEBUG oslo_concurrency.lockutils [req-83c2c08e-e247-4816-97e4-949aed60cc44 req-fecd2364-6d7f-4a43-864d-9318a9fa0b3b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.594 2 DEBUG oslo_concurrency.lockutils [req-83c2c08e-e247-4816-97e4-949aed60cc44 req-fecd2364-6d7f-4a43-864d-9318a9fa0b3b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.595 2 DEBUG nova.compute.manager [req-83c2c08e-e247-4816-97e4-949aed60cc44 req-fecd2364-6d7f-4a43-864d-9318a9fa0b3b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] No waiting events found dispatching network-vif-plugged-67728610-6776-4496-98b2-a14f59c9674d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:18 compute-0 nova_compute[259627]: 2025-10-14 09:09:18.595 2 WARNING nova.compute.manager [req-83c2c08e-e247-4816-97e4-949aed60cc44 req-fecd2364-6d7f-4a43-864d-9318a9fa0b3b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Received unexpected event network-vif-plugged-67728610-6776-4496-98b2-a14f59c9674d for instance with vm_state active and task_state rebuilding.
Oct 14 09:09:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-31f1451e17cdeefe9555e43b5a1d0196c4f70eeecf3359b11bd5dc373c231734-merged.mount: Deactivated successfully.
Oct 14 09:09:18 compute-0 podman[333606]: 2025-10-14 09:09:18.864181371 +0000 UTC m=+1.705359390 container remove 04b7bee6393103d4ba8624fc8d0ecbac1317d0b5d97fce483bf47ccbbb80608d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_noether, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:09:18 compute-0 systemd[1]: libpod-conmon-04b7bee6393103d4ba8624fc8d0ecbac1317d0b5d97fce483bf47ccbbb80608d.scope: Deactivated successfully.
Oct 14 09:09:18 compute-0 sudo[333457]: pam_unix(sudo:session): session closed for user root
Oct 14 09:09:18 compute-0 ceph-mon[74249]: pgmap v1637: 305 pgs: 305 active+clean; 326 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 230 op/s
Oct 14 09:09:18 compute-0 ceph-mon[74249]: osdmap e245: 3 total, 3 up, 3 in
Oct 14 09:09:19 compute-0 sudo[333832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:09:19 compute-0 sudo[333832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:09:19 compute-0 sudo[333832]: pam_unix(sudo:session): session closed for user root
Oct 14 09:09:19 compute-0 nova_compute[259627]: 2025-10-14 09:09:19.051 2 DEBUG nova.storage.rbd_utils [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] removing snapshot(2e8a3dd0b2d247a1a466176807ce33f3) on rbd image(e065d857-2df9-4199-aa98-41ca3c436bad_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:09:19 compute-0 nova_compute[259627]: 2025-10-14 09:09:19.092 2 INFO nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Deleting instance files /var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd_del
Oct 14 09:09:19 compute-0 nova_compute[259627]: 2025-10-14 09:09:19.093 2 INFO nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Deletion of /var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd_del complete
Oct 14 09:09:19 compute-0 sudo[333875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:09:19 compute-0 sudo[333875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:09:19 compute-0 sudo[333875]: pam_unix(sudo:session): session closed for user root
Oct 14 09:09:19 compute-0 sudo[333900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:09:19 compute-0 sudo[333900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:09:19 compute-0 sudo[333900]: pam_unix(sudo:session): session closed for user root
Oct 14 09:09:19 compute-0 sudo[333925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:09:19 compute-0 sudo[333925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:09:19 compute-0 nova_compute[259627]: 2025-10-14 09:09:19.312 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:09:19 compute-0 nova_compute[259627]: 2025-10-14 09:09:19.313 2 INFO nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Creating image(s)
Oct 14 09:09:19 compute-0 nova_compute[259627]: 2025-10-14 09:09:19.342 2 DEBUG nova.storage.rbd_utils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:19 compute-0 nova_compute[259627]: 2025-10-14 09:09:19.376 2 DEBUG nova.storage.rbd_utils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:19 compute-0 nova_compute[259627]: 2025-10-14 09:09:19.408 2 DEBUG nova.storage.rbd_utils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:19 compute-0 nova_compute[259627]: 2025-10-14 09:09:19.413 2 DEBUG oslo_concurrency.processutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:19 compute-0 nova_compute[259627]: 2025-10-14 09:09:19.529 2 DEBUG oslo_concurrency.processutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:19 compute-0 nova_compute[259627]: 2025-10-14 09:09:19.531 2 DEBUG oslo_concurrency.lockutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:19 compute-0 nova_compute[259627]: 2025-10-14 09:09:19.531 2 DEBUG oslo_concurrency.lockutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:19 compute-0 nova_compute[259627]: 2025-10-14 09:09:19.532 2 DEBUG oslo_concurrency.lockutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:19 compute-0 nova_compute[259627]: 2025-10-14 09:09:19.557 2 DEBUG nova.storage.rbd_utils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:19 compute-0 nova_compute[259627]: 2025-10-14 09:09:19.561 2 DEBUG oslo_concurrency.processutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:19 compute-0 podman[334046]: 2025-10-14 09:09:19.601279823 +0000 UTC m=+0.049173022 container create 6b35ee099f183821d42441e50f356183606b218fb145c6efb449925e03c36a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hugle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 09:09:19 compute-0 systemd[1]: Started libpod-conmon-6b35ee099f183821d42441e50f356183606b218fb145c6efb449925e03c36a40.scope.
Oct 14 09:09:19 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:09:19 compute-0 podman[334046]: 2025-10-14 09:09:19.58449204 +0000 UTC m=+0.032385259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:09:19 compute-0 podman[334046]: 2025-10-14 09:09:19.680687689 +0000 UTC m=+0.128580898 container init 6b35ee099f183821d42441e50f356183606b218fb145c6efb449925e03c36a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:09:19 compute-0 podman[334046]: 2025-10-14 09:09:19.689595628 +0000 UTC m=+0.137488837 container start 6b35ee099f183821d42441e50f356183606b218fb145c6efb449925e03c36a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:09:19 compute-0 pensive_hugle[334081]: 167 167
Oct 14 09:09:19 compute-0 systemd[1]: libpod-6b35ee099f183821d42441e50f356183606b218fb145c6efb449925e03c36a40.scope: Deactivated successfully.
Oct 14 09:09:19 compute-0 podman[334046]: 2025-10-14 09:09:19.694608381 +0000 UTC m=+0.142501590 container attach 6b35ee099f183821d42441e50f356183606b218fb145c6efb449925e03c36a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:09:19 compute-0 podman[334046]: 2025-10-14 09:09:19.697034071 +0000 UTC m=+0.144927310 container died 6b35ee099f183821d42441e50f356183606b218fb145c6efb449925e03c36a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hugle, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 09:09:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-26c60bc89555879a319fe049958c0d132a291d266d4b6723391f9c2ff8fe7aa5-merged.mount: Deactivated successfully.
Oct 14 09:09:19 compute-0 podman[334046]: 2025-10-14 09:09:19.7546371 +0000 UTC m=+0.202530289 container remove 6b35ee099f183821d42441e50f356183606b218fb145c6efb449925e03c36a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hugle, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 09:09:19 compute-0 systemd[1]: libpod-conmon-6b35ee099f183821d42441e50f356183606b218fb145c6efb449925e03c36a40.scope: Deactivated successfully.
Oct 14 09:09:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 326 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 232 op/s
Oct 14 09:09:19 compute-0 nova_compute[259627]: 2025-10-14 09:09:19.880 2 DEBUG oslo_concurrency.processutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Oct 14 09:09:19 compute-0 nova_compute[259627]: 2025-10-14 09:09:19.947 2 DEBUG nova.storage.rbd_utils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] resizing rbd image e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:09:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Oct 14 09:09:19 compute-0 podman[334124]: 2025-10-14 09:09:19.956334707 +0000 UTC m=+0.060240664 container create 75fd4e769932ba6dc5372c39264aad0a86419fc8e0fe68285d5779c009318dc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 14 09:09:19 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Oct 14 09:09:20 compute-0 systemd[1]: Started libpod-conmon-75fd4e769932ba6dc5372c39264aad0a86419fc8e0fe68285d5779c009318dc4.scope.
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.011 2 DEBUG nova.storage.rbd_utils [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] creating snapshot(snap) on rbd image(afaa5b96-a65d-4840-9509-b25dadfeafa7) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:09:20 compute-0 podman[334124]: 2025-10-14 09:09:19.928170133 +0000 UTC m=+0.032076090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:09:20 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac3b3eb8c3939b3b1e4b9ad43b7b183ac28049dcff2fa0a293806e45fa622e6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac3b3eb8c3939b3b1e4b9ad43b7b183ac28049dcff2fa0a293806e45fa622e6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac3b3eb8c3939b3b1e4b9ad43b7b183ac28049dcff2fa0a293806e45fa622e6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac3b3eb8c3939b3b1e4b9ad43b7b183ac28049dcff2fa0a293806e45fa622e6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:09:20 compute-0 podman[334124]: 2025-10-14 09:09:20.094683424 +0000 UTC m=+0.198589391 container init 75fd4e769932ba6dc5372c39264aad0a86419fc8e0fe68285d5779c009318dc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:09:20 compute-0 podman[334124]: 2025-10-14 09:09:20.103970843 +0000 UTC m=+0.207876760 container start 75fd4e769932ba6dc5372c39264aad0a86419fc8e0fe68285d5779c009318dc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 09:09:20 compute-0 podman[334124]: 2025-10-14 09:09:20.107352646 +0000 UTC m=+0.211258573 container attach 75fd4e769932ba6dc5372c39264aad0a86419fc8e0fe68285d5779c009318dc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 09:09:20 compute-0 ovn_controller[152662]: 2025-10-14T09:09:20Z|00716|binding|INFO|Releasing lport fcca615a-5470-4880-844d-73adc425bce1 from this chassis (sb_readonly=0)
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.180 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.181 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Ensure instance console log exists: /var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.181 2 DEBUG oslo_concurrency.lockutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.182 2 DEBUG oslo_concurrency.lockutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.182 2 DEBUG oslo_concurrency.lockutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.185 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Start _get_guest_xml network_info=[{"id": "67728610-6776-4496-98b2-a14f59c9674d", "address": "fa:16:3e:c3:77:95", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67728610-67", "ovs_interfaceid": "67728610-6776-4496-98b2-a14f59c9674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.195 2 WARNING nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.206 2 DEBUG nova.virt.libvirt.host [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.208 2 DEBUG nova.virt.libvirt.host [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.211 2 DEBUG nova.virt.libvirt.host [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.212 2 DEBUG nova.virt.libvirt.host [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.212 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.213 2 DEBUG nova.virt.hardware [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.213 2 DEBUG nova.virt.hardware [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.214 2 DEBUG nova.virt.hardware [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.214 2 DEBUG nova.virt.hardware [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.214 2 DEBUG nova.virt.hardware [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.215 2 DEBUG nova.virt.hardware [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.215 2 DEBUG nova.virt.hardware [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.215 2 DEBUG nova.virt.hardware [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.216 2 DEBUG nova.virt.hardware [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.216 2 DEBUG nova.virt.hardware [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.216 2 DEBUG nova.virt.hardware [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.217 2 DEBUG nova.objects.instance [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'vcpu_model' on Instance uuid e9ccbc44-715e-4419-9286-f0cb6e41d9cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.242 2 DEBUG oslo_concurrency.processutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:09:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/336993601' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.699 2 DEBUG oslo_concurrency.processutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.736 2 DEBUG nova.storage.rbd_utils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:20 compute-0 nova_compute[259627]: 2025-10-14 09:09:20.742 2 DEBUG oslo_concurrency.processutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]: {
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:     "0": [
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:         {
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "devices": [
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "/dev/loop3"
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             ],
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "lv_name": "ceph_lv0",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "lv_size": "21470642176",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "name": "ceph_lv0",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "tags": {
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.cluster_name": "ceph",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.crush_device_class": "",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.encrypted": "0",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.osd_id": "0",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.type": "block",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.vdo": "0"
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             },
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "type": "block",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "vg_name": "ceph_vg0"
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:         }
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:     ],
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:     "1": [
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:         {
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "devices": [
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "/dev/loop4"
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             ],
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "lv_name": "ceph_lv1",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "lv_size": "21470642176",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "name": "ceph_lv1",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "tags": {
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.cluster_name": "ceph",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.crush_device_class": "",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.encrypted": "0",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.osd_id": "1",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.type": "block",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.vdo": "0"
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             },
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "type": "block",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "vg_name": "ceph_vg1"
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:         }
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:     ],
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:     "2": [
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:         {
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "devices": [
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "/dev/loop5"
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             ],
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "lv_name": "ceph_lv2",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "lv_size": "21470642176",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "name": "ceph_lv2",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "tags": {
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.cluster_name": "ceph",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.crush_device_class": "",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.encrypted": "0",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.osd_id": "2",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.type": "block",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:                 "ceph.vdo": "0"
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             },
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "type": "block",
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:             "vg_name": "ceph_vg2"
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:         }
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]:     ]
Oct 14 09:09:20 compute-0 flamboyant_wu[334194]: }
Oct 14 09:09:20 compute-0 systemd[1]: libpod-75fd4e769932ba6dc5372c39264aad0a86419fc8e0fe68285d5779c009318dc4.scope: Deactivated successfully.
Oct 14 09:09:20 compute-0 conmon[334194]: conmon 75fd4e769932ba6dc537 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-75fd4e769932ba6dc5372c39264aad0a86419fc8e0fe68285d5779c009318dc4.scope/container/memory.events
Oct 14 09:09:20 compute-0 podman[334124]: 2025-10-14 09:09:20.911780108 +0000 UTC m=+1.015686055 container died 75fd4e769932ba6dc5372c39264aad0a86419fc8e0fe68285d5779c009318dc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 09:09:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac3b3eb8c3939b3b1e4b9ad43b7b183ac28049dcff2fa0a293806e45fa622e6c-merged.mount: Deactivated successfully.
Oct 14 09:09:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Oct 14 09:09:20 compute-0 ceph-mon[74249]: pgmap v1639: 305 pgs: 305 active+clean; 326 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 232 op/s
Oct 14 09:09:20 compute-0 ceph-mon[74249]: osdmap e246: 3 total, 3 up, 3 in
Oct 14 09:09:20 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/336993601' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Oct 14 09:09:20 compute-0 podman[334124]: 2025-10-14 09:09:20.972628026 +0000 UTC m=+1.076533953 container remove 75fd4e769932ba6dc5372c39264aad0a86419fc8e0fe68285d5779c009318dc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wu, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 09:09:20 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Oct 14 09:09:20 compute-0 systemd[1]: libpod-conmon-75fd4e769932ba6dc5372c39264aad0a86419fc8e0fe68285d5779c009318dc4.scope: Deactivated successfully.
Oct 14 09:09:21 compute-0 sudo[333925]: pam_unix(sudo:session): session closed for user root
Oct 14 09:09:21 compute-0 sudo[334310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:09:21 compute-0 sudo[334310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:09:21 compute-0 sudo[334310]: pam_unix(sudo:session): session closed for user root
Oct 14 09:09:21 compute-0 sudo[334335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:09:21 compute-0 sudo[334335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:09:21 compute-0 sudo[334335]: pam_unix(sudo:session): session closed for user root
Oct 14 09:09:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:09:21 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1725097460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.244 2 DEBUG oslo_concurrency.processutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.246 2 DEBUG nova.virt.libvirt.vif [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:08:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1155536075',display_name='tempest-ServerDiskConfigTestJSON-server-1155536075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1155536075',id=70,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:09:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9099e3128b584ff7a140b8021451223e',ramdisk_id='',reservation_id='r-ak09v01b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1253454894',owner_user_name='tempest-ServerDiskConfigTestJSON-1253454894-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:09:19Z,user_data=None,user_id='979aa20794dc414f91c59f224a0db083',uuid=e9ccbc44-715e-4419-9286-f0cb6e41d9cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "67728610-6776-4496-98b2-a14f59c9674d", "address": "fa:16:3e:c3:77:95", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67728610-67", "ovs_interfaceid": "67728610-6776-4496-98b2-a14f59c9674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.247 2 DEBUG nova.network.os_vif_util [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converting VIF {"id": "67728610-6776-4496-98b2-a14f59c9674d", "address": "fa:16:3e:c3:77:95", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67728610-67", "ovs_interfaceid": "67728610-6776-4496-98b2-a14f59c9674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.248 2 DEBUG nova.network.os_vif_util [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:77:95,bridge_name='br-int',has_traffic_filtering=True,id=67728610-6776-4496-98b2-a14f59c9674d,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67728610-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.252 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:09:21 compute-0 nova_compute[259627]:   <uuid>e9ccbc44-715e-4419-9286-f0cb6e41d9cd</uuid>
Oct 14 09:09:21 compute-0 nova_compute[259627]:   <name>instance-00000046</name>
Oct 14 09:09:21 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:09:21 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:09:21 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1155536075</nova:name>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:09:20</nova:creationTime>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:09:21 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:09:21 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:09:21 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:09:21 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:09:21 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:09:21 compute-0 nova_compute[259627]:         <nova:user uuid="979aa20794dc414f91c59f224a0db083">tempest-ServerDiskConfigTestJSON-1253454894-project-member</nova:user>
Oct 14 09:09:21 compute-0 nova_compute[259627]:         <nova:project uuid="9099e3128b584ff7a140b8021451223e">tempest-ServerDiskConfigTestJSON-1253454894</nova:project>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="e2368e3e-f504-40e6-a9d3-67df18c845bb"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:09:21 compute-0 nova_compute[259627]:         <nova:port uuid="67728610-6776-4496-98b2-a14f59c9674d">
Oct 14 09:09:21 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:09:21 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:09:21 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <system>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <entry name="serial">e9ccbc44-715e-4419-9286-f0cb6e41d9cd</entry>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <entry name="uuid">e9ccbc44-715e-4419-9286-f0cb6e41d9cd</entry>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     </system>
Oct 14 09:09:21 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:09:21 compute-0 nova_compute[259627]:   <os>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:   </os>
Oct 14 09:09:21 compute-0 nova_compute[259627]:   <features>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:   </features>
Oct 14 09:09:21 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:09:21 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:09:21 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk">
Oct 14 09:09:21 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       </source>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:09:21 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk.config">
Oct 14 09:09:21 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       </source>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:09:21 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:c3:77:95"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <target dev="tap67728610-67"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd/console.log" append="off"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <video>
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     </video>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:09:21 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:09:21 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:09:21 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:09:21 compute-0 nova_compute[259627]: </domain>
Oct 14 09:09:21 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.253 2 DEBUG nova.compute.manager [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Preparing to wait for external event network-vif-plugged-67728610-6776-4496-98b2-a14f59c9674d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.254 2 DEBUG oslo_concurrency.lockutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.254 2 DEBUG oslo_concurrency.lockutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.254 2 DEBUG oslo_concurrency.lockutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.256 2 DEBUG nova.virt.libvirt.vif [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:08:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1155536075',display_name='tempest-ServerDiskConfigTestJSON-server-1155536075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1155536075',id=70,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:09:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9099e3128b584ff7a140b8021451223e',ramdisk_id='',reservation_id='r-ak09v01b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1253454894',owner_user_name='tempest-ServerDiskConfigTestJSON-1253454894-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:09:19Z,user_data=None,user_id='979aa20794dc414f91c59f224a0db083',uuid=e9ccbc44-715e-4419-9286-f0cb6e41d9cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "67728610-6776-4496-98b2-a14f59c9674d", "address": "fa:16:3e:c3:77:95", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67728610-67", "ovs_interfaceid": "67728610-6776-4496-98b2-a14f59c9674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.256 2 DEBUG nova.network.os_vif_util [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converting VIF {"id": "67728610-6776-4496-98b2-a14f59c9674d", "address": "fa:16:3e:c3:77:95", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67728610-67", "ovs_interfaceid": "67728610-6776-4496-98b2-a14f59c9674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.257 2 DEBUG nova.network.os_vif_util [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:77:95,bridge_name='br-int',has_traffic_filtering=True,id=67728610-6776-4496-98b2-a14f59c9674d,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67728610-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.258 2 DEBUG os_vif [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:77:95,bridge_name='br-int',has_traffic_filtering=True,id=67728610-6776-4496-98b2-a14f59c9674d,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67728610-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.260 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.261 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:09:21 compute-0 sudo[334360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.265 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67728610-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:21 compute-0 sudo[334360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap67728610-67, col_values=(('external_ids', {'iface-id': '67728610-6776-4496-98b2-a14f59c9674d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:77:95', 'vm-uuid': 'e9ccbc44-715e-4419-9286-f0cb6e41d9cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:21 compute-0 NetworkManager[44885]: <info>  [1760432961.2710] manager: (tap67728610-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Oct 14 09:09:21 compute-0 sudo[334360]: pam_unix(sudo:session): session closed for user root
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.284 2 INFO os_vif [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:77:95,bridge_name='br-int',has_traffic_filtering=True,id=67728610-6776-4496-98b2-a14f59c9674d,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67728610-67')
Oct 14 09:09:21 compute-0 sudo[334390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.340 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.341 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.342 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] No VIF found with MAC fa:16:3e:c3:77:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.342 2 INFO nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Using config drive
Oct 14 09:09:21 compute-0 sudo[334390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.366 2 DEBUG nova.storage.rbd_utils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.387 2 DEBUG nova.objects.instance [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'ec2_ids' on Instance uuid e9ccbc44-715e-4419-9286-f0cb6e41d9cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:21 compute-0 nova_compute[259627]: 2025-10-14 09:09:21.421 2 DEBUG nova.objects.instance [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'keypairs' on Instance uuid e9ccbc44-715e-4419-9286-f0cb6e41d9cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:21 compute-0 podman[334476]: 2025-10-14 09:09:21.798200728 +0000 UTC m=+0.063725790 container create 437533bb4bfb15748b4bc02fb50d11e5906424f6076d1a5566844ad12c8e8c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:09:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 372 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 252 op/s
Oct 14 09:09:21 compute-0 systemd[1]: Started libpod-conmon-437533bb4bfb15748b4bc02fb50d11e5906424f6076d1a5566844ad12c8e8c14.scope.
Oct 14 09:09:21 compute-0 podman[334476]: 2025-10-14 09:09:21.772578997 +0000 UTC m=+0.038104109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:09:21 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:09:21 compute-0 podman[334476]: 2025-10-14 09:09:21.899610165 +0000 UTC m=+0.165135277 container init 437533bb4bfb15748b4bc02fb50d11e5906424f6076d1a5566844ad12c8e8c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hertz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:09:21 compute-0 podman[334476]: 2025-10-14 09:09:21.907887669 +0000 UTC m=+0.173412741 container start 437533bb4bfb15748b4bc02fb50d11e5906424f6076d1a5566844ad12c8e8c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 09:09:21 compute-0 podman[334476]: 2025-10-14 09:09:21.911867597 +0000 UTC m=+0.177392669 container attach 437533bb4bfb15748b4bc02fb50d11e5906424f6076d1a5566844ad12c8e8c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hertz, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:09:21 compute-0 charming_hertz[334492]: 167 167
Oct 14 09:09:21 compute-0 systemd[1]: libpod-437533bb4bfb15748b4bc02fb50d11e5906424f6076d1a5566844ad12c8e8c14.scope: Deactivated successfully.
Oct 14 09:09:21 compute-0 podman[334476]: 2025-10-14 09:09:21.917360633 +0000 UTC m=+0.182885675 container died 437533bb4bfb15748b4bc02fb50d11e5906424f6076d1a5566844ad12c8e8c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hertz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 09:09:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-75c8f25dc01e07d33140049f561af15da31830a150e026fb897d1ad6beb107f8-merged.mount: Deactivated successfully.
Oct 14 09:09:21 compute-0 podman[334476]: 2025-10-14 09:09:21.967225991 +0000 UTC m=+0.232751063 container remove 437533bb4bfb15748b4bc02fb50d11e5906424f6076d1a5566844ad12c8e8c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 09:09:21 compute-0 ceph-mon[74249]: osdmap e247: 3 total, 3 up, 3 in
Oct 14 09:09:21 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1725097460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:22 compute-0 systemd[1]: libpod-conmon-437533bb4bfb15748b4bc02fb50d11e5906424f6076d1a5566844ad12c8e8c14.scope: Deactivated successfully.
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.004 2 INFO nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Creating config drive at /var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd/disk.config
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.013 2 DEBUG oslo_concurrency.processutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpms1hqiy5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.172 2 DEBUG oslo_concurrency.processutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpms1hqiy5" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:22 compute-0 podman[334518]: 2025-10-14 09:09:22.197684946 +0000 UTC m=+0.059492536 container create 8749c2320b16f0e86985318193518e94637d6769e399ca4235aa0c80325ae103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_knuth, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.222 2 DEBUG nova.storage.rbd_utils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.228 2 DEBUG oslo_concurrency.processutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd/disk.config e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:22 compute-0 systemd[1]: Started libpod-conmon-8749c2320b16f0e86985318193518e94637d6769e399ca4235aa0c80325ae103.scope.
Oct 14 09:09:22 compute-0 podman[334518]: 2025-10-14 09:09:22.172893056 +0000 UTC m=+0.034700736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:09:22 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e822d3e56788d41e8cdfc775a4eb35e1e88712100dbd55592de4e3394d797c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e822d3e56788d41e8cdfc775a4eb35e1e88712100dbd55592de4e3394d797c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e822d3e56788d41e8cdfc775a4eb35e1e88712100dbd55592de4e3394d797c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e822d3e56788d41e8cdfc775a4eb35e1e88712100dbd55592de4e3394d797c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:09:22 compute-0 podman[334518]: 2025-10-14 09:09:22.339435057 +0000 UTC m=+0.201242737 container init 8749c2320b16f0e86985318193518e94637d6769e399ca4235aa0c80325ae103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_knuth, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:09:22 compute-0 podman[334518]: 2025-10-14 09:09:22.354371795 +0000 UTC m=+0.216179375 container start 8749c2320b16f0e86985318193518e94637d6769e399ca4235aa0c80325ae103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_knuth, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:09:22 compute-0 podman[334518]: 2025-10-14 09:09:22.358680581 +0000 UTC m=+0.220488221 container attach 8749c2320b16f0e86985318193518e94637d6769e399ca4235aa0c80325ae103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_knuth, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.442 2 DEBUG oslo_concurrency.processutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd/disk.config e9ccbc44-715e-4419-9286-f0cb6e41d9cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.444 2 INFO nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Deleting local config drive /var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd/disk.config because it was imported into RBD.
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:22 compute-0 kernel: tap67728610-67: entered promiscuous mode
Oct 14 09:09:22 compute-0 NetworkManager[44885]: <info>  [1760432962.5178] manager: (tap67728610-67): new Tun device (/org/freedesktop/NetworkManager/Devices/304)
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:22 compute-0 ovn_controller[152662]: 2025-10-14T09:09:22Z|00717|binding|INFO|Claiming lport 67728610-6776-4496-98b2-a14f59c9674d for this chassis.
Oct 14 09:09:22 compute-0 ovn_controller[152662]: 2025-10-14T09:09:22Z|00718|binding|INFO|67728610-6776-4496-98b2-a14f59c9674d: Claiming fa:16:3e:c3:77:95 10.100.0.13
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.528 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:77:95 10.100.0.13'], port_security=['fa:16:3e:c3:77:95 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e9ccbc44-715e-4419-9286-f0cb6e41d9cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99db3452-8467-4a2b-a51d-30679c346bb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9099e3128b584ff7a140b8021451223e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '14cbcf9b-48b8-496d-985e-160ef22d10a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cb6279-57ca-4d4a-8018-5af2d7c42670, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=67728610-6776-4496-98b2-a14f59c9674d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.529 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 67728610-6776-4496-98b2-a14f59c9674d in datapath 99db3452-8467-4a2b-a51d-30679c346bb2 bound to our chassis
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.531 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99db3452-8467-4a2b-a51d-30679c346bb2
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.542 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[af76920a-7c57-4eb3-acd3-faf2c36f1333]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.543 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap99db3452-81 in ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.545 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap99db3452-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.545 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d9dc6345-4b97-4f5c-8c33-563939c02529]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.546 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe4bede-5f03-4ca1-abc4-4324c243b168]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:22 compute-0 ovn_controller[152662]: 2025-10-14T09:09:22Z|00719|binding|INFO|Setting lport 67728610-6776-4496-98b2-a14f59c9674d ovn-installed in OVS
Oct 14 09:09:22 compute-0 ovn_controller[152662]: 2025-10-14T09:09:22Z|00720|binding|INFO|Setting lport 67728610-6776-4496-98b2-a14f59c9674d up in Southbound
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:22 compute-0 systemd-udevd[334593]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.562 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[42d506dc-b0bf-4480-980b-2d4778b2d503]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:22 compute-0 systemd-machined[214636]: New machine qemu-86-instance-00000046.
Oct 14 09:09:22 compute-0 NetworkManager[44885]: <info>  [1760432962.5709] device (tap67728610-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:09:22 compute-0 NetworkManager[44885]: <info>  [1760432962.5717] device (tap67728610-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:09:22 compute-0 systemd[1]: Started Virtual Machine qemu-86-instance-00000046.
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.587 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a2cf6924-1904-4090-b8e0-544250a58882]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.618 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc40f5a-6440-4f17-8414-368284898a4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.622 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a33932d1-0815-46e2-81ac-38fc95619735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:22 compute-0 NetworkManager[44885]: <info>  [1760432962.6233] manager: (tap99db3452-80): new Veth device (/org/freedesktop/NetworkManager/Devices/305)
Oct 14 09:09:22 compute-0 systemd-udevd[334599]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.660 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cd017b8e-8a7e-4224-8fb8-09d0cd54f5e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.662 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aa12052d-7723-49cd-b875-672bc359a85f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:22 compute-0 NetworkManager[44885]: <info>  [1760432962.6946] device (tap99db3452-80): carrier: link connected
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.698 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8a35c434-3375-4c60-a616-914d589456c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.718 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[047218bf-0505-46a4-9cf7-f868d1e0041e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99db3452-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:a6:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 210], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674145, 'reachable_time': 33521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334630, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.738 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4321b29e-3a92-4c6a-89a8-18726ae1d19f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:a670'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674145, 'tstamp': 674145}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334631, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.757 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[99e287fa-637a-4042-a092-ff8ae6686867]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99db3452-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:a6:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 210], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674145, 'reachable_time': 33521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334632, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.786 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[deb4e269-eaab-4c3f-a6e9-82f584b8240c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.853 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6dabf021-0884-42a4-8ad4-abeddc5417de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.855 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99db3452-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.855 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.855 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99db3452-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:22 compute-0 NetworkManager[44885]: <info>  [1760432962.8582] manager: (tap99db3452-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Oct 14 09:09:22 compute-0 kernel: tap99db3452-80: entered promiscuous mode
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.867 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99db3452-80, col_values=(('external_ids', {'iface-id': '59e7d558-49d1-48cf-b926-27e93fe381b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:22 compute-0 ovn_controller[152662]: 2025-10-14T09:09:22Z|00721|binding|INFO|Releasing lport 59e7d558-49d1-48cf-b926-27e93fe381b1 from this chassis (sb_readonly=0)
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.897 2 INFO nova.virt.libvirt.driver [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Snapshot image upload complete
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.898 2 DEBUG nova.compute.manager [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.912 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/99db3452-8467-4a2b-a51d-30679c346bb2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/99db3452-8467-4a2b-a51d-30679c346bb2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.913 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4bbd0f-a19e-4706-9b31-afed12103fe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.914 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-99db3452-8467-4a2b-a51d-30679c346bb2
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/99db3452-8467-4a2b-a51d-30679c346bb2.pid.haproxy
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 99db3452-8467-4a2b-a51d-30679c346bb2
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:09:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:22.915 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'env', 'PROCESS_TAG=haproxy-99db3452-8467-4a2b-a51d-30679c346bb2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/99db3452-8467-4a2b-a51d-30679c346bb2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.987 2 INFO nova.compute.manager [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Shelve offloading
Oct 14 09:09:22 compute-0 ceph-mon[74249]: pgmap v1642: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 372 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 252 op/s
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.997 2 INFO nova.virt.libvirt.driver [-] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Instance destroyed successfully.
Oct 14 09:09:22 compute-0 nova_compute[259627]: 2025-10-14 09:09:22.998 2 DEBUG nova.compute.manager [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.000 2 DEBUG oslo_concurrency.lockutils [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.001 2 DEBUG oslo_concurrency.lockutils [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquired lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.001 2 DEBUG nova.network.neutron [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:09:23 compute-0 podman[334724]: 2025-10-14 09:09:23.302080274 +0000 UTC m=+0.060902211 container create 291fe9950a590dfa0cc25447548490935959b6013b39a918c05a90d5f1bcfdcf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:09:23 compute-0 systemd[1]: Started libpod-conmon-291fe9950a590dfa0cc25447548490935959b6013b39a918c05a90d5f1bcfdcf.scope.
Oct 14 09:09:23 compute-0 podman[334724]: 2025-10-14 09:09:23.267266417 +0000 UTC m=+0.026088444 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:09:23 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4231ee64cf8db13854b13115eae4c000cd19a824f4301b43ef48ee6008e944/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:09:23 compute-0 interesting_knuth[334555]: {
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:         "osd_id": 2,
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:         "type": "bluestore"
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:     },
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:         "osd_id": 1,
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:         "type": "bluestore"
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:     },
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:         "osd_id": 0,
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:         "type": "bluestore"
Oct 14 09:09:23 compute-0 interesting_knuth[334555]:     }
Oct 14 09:09:23 compute-0 interesting_knuth[334555]: }
Oct 14 09:09:23 compute-0 podman[334724]: 2025-10-14 09:09:23.403180414 +0000 UTC m=+0.162002371 container init 291fe9950a590dfa0cc25447548490935959b6013b39a918c05a90d5f1bcfdcf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:09:23 compute-0 podman[334724]: 2025-10-14 09:09:23.412128924 +0000 UTC m=+0.170950881 container start 291fe9950a590dfa0cc25447548490935959b6013b39a918c05a90d5f1bcfdcf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:09:23 compute-0 systemd[1]: libpod-8749c2320b16f0e86985318193518e94637d6769e399ca4235aa0c80325ae103.scope: Deactivated successfully.
Oct 14 09:09:23 compute-0 systemd[1]: libpod-8749c2320b16f0e86985318193518e94637d6769e399ca4235aa0c80325ae103.scope: Consumed 1.026s CPU time.
Oct 14 09:09:23 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[334751]: [NOTICE]   (334758) : New worker (334766) forked
Oct 14 09:09:23 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[334751]: [NOTICE]   (334758) : Loading success.
Oct 14 09:09:23 compute-0 podman[334760]: 2025-10-14 09:09:23.459593293 +0000 UTC m=+0.022232048 container died 8749c2320b16f0e86985318193518e94637d6769e399ca4235aa0c80325ae103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_knuth, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.466 2 DEBUG nova.compute.manager [req-47c531fa-1b0e-4c8a-810a-8765a7e3f52c req-38dd61de-bb2b-4346-9ccf-9e5fcf30eb0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Received event network-vif-plugged-67728610-6776-4496-98b2-a14f59c9674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.467 2 DEBUG oslo_concurrency.lockutils [req-47c531fa-1b0e-4c8a-810a-8765a7e3f52c req-38dd61de-bb2b-4346-9ccf-9e5fcf30eb0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.468 2 DEBUG oslo_concurrency.lockutils [req-47c531fa-1b0e-4c8a-810a-8765a7e3f52c req-38dd61de-bb2b-4346-9ccf-9e5fcf30eb0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.468 2 DEBUG oslo_concurrency.lockutils [req-47c531fa-1b0e-4c8a-810a-8765a7e3f52c req-38dd61de-bb2b-4346-9ccf-9e5fcf30eb0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.469 2 DEBUG nova.compute.manager [req-47c531fa-1b0e-4c8a-810a-8765a7e3f52c req-38dd61de-bb2b-4346-9ccf-9e5fcf30eb0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Processing event network-vif-plugged-67728610-6776-4496-98b2-a14f59c9674d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:09:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e822d3e56788d41e8cdfc775a4eb35e1e88712100dbd55592de4e3394d797c9-merged.mount: Deactivated successfully.
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.501 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for e9ccbc44-715e-4419-9286-f0cb6e41d9cd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.502 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432963.5009644, e9ccbc44-715e-4419-9286-f0cb6e41d9cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.502 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] VM Started (Lifecycle Event)
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.506 2 DEBUG nova.compute.manager [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.510 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.518 2 INFO nova.virt.libvirt.driver [-] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Instance spawned successfully.
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.519 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:09:23 compute-0 podman[334760]: 2025-10-14 09:09:23.522091112 +0000 UTC m=+0.084729857 container remove 8749c2320b16f0e86985318193518e94637d6769e399ca4235aa0c80325ae103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_knuth, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.524 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:23 compute-0 systemd[1]: libpod-conmon-8749c2320b16f0e86985318193518e94637d6769e399ca4235aa0c80325ae103.scope: Deactivated successfully.
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.532 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.546 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.547 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.548 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.549 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.550 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.551 2 DEBUG nova.virt.libvirt.driver [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:23 compute-0 sudo[334390]: pam_unix(sudo:session): session closed for user root
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.557 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.558 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432963.5040069, e9ccbc44-715e-4419-9286-f0cb6e41d9cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.558 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] VM Paused (Lifecycle Event)
Oct 14 09:09:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:09:23 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:09:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:09:23 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:09:23 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 7e2c35b7-b8af-44ef-93f7-6abe978339f1 does not exist
Oct 14 09:09:23 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev cc5dbe48-5550-437d-af0f-2512a035a652 does not exist
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.588 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.591 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432963.509453, e9ccbc44-715e-4419-9286-f0cb6e41d9cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.591 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] VM Resumed (Lifecycle Event)
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.610 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.614 2 DEBUG nova.compute.manager [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.615 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.640 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 09:09:23 compute-0 sudo[334784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:09:23 compute-0 sudo[334784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:09:23 compute-0 sudo[334784]: pam_unix(sudo:session): session closed for user root
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.669 2 DEBUG oslo_concurrency.lockutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.669 2 DEBUG oslo_concurrency.lockutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.669 2 DEBUG nova.objects.instance [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:09:23 compute-0 sudo[334809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:09:23 compute-0 sudo[334809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:09:23 compute-0 sudo[334809]: pam_unix(sudo:session): session closed for user root
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.723 2 DEBUG oslo_concurrency.lockutils [None req-f99662b0-3100-4a8e-9a03-05d24fda1d7a 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 372 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 252 op/s
Oct 14 09:09:23 compute-0 nova_compute[259627]: 2025-10-14 09:09:23.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:09:24 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:09:24 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:09:25 compute-0 nova_compute[259627]: 2025-10-14 09:09:25.069 2 DEBUG nova.network.neutron [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Updating instance_info_cache with network_info: [{"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:09:25 compute-0 nova_compute[259627]: 2025-10-14 09:09:25.090 2 DEBUG oslo_concurrency.lockutils [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Releasing lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:09:25 compute-0 ceph-mon[74249]: pgmap v1643: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 372 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 252 op/s
Oct 14 09:09:25 compute-0 nova_compute[259627]: 2025-10-14 09:09:25.679 2 DEBUG nova.compute.manager [req-614ede7a-7404-4368-9294-7d301b416070 req-a31243a4-da5a-4aaf-a4df-c2a4362aac60 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Received event network-vif-plugged-67728610-6776-4496-98b2-a14f59c9674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:25 compute-0 nova_compute[259627]: 2025-10-14 09:09:25.680 2 DEBUG oslo_concurrency.lockutils [req-614ede7a-7404-4368-9294-7d301b416070 req-a31243a4-da5a-4aaf-a4df-c2a4362aac60 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:25 compute-0 nova_compute[259627]: 2025-10-14 09:09:25.680 2 DEBUG oslo_concurrency.lockutils [req-614ede7a-7404-4368-9294-7d301b416070 req-a31243a4-da5a-4aaf-a4df-c2a4362aac60 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:25 compute-0 nova_compute[259627]: 2025-10-14 09:09:25.681 2 DEBUG oslo_concurrency.lockutils [req-614ede7a-7404-4368-9294-7d301b416070 req-a31243a4-da5a-4aaf-a4df-c2a4362aac60 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:25 compute-0 nova_compute[259627]: 2025-10-14 09:09:25.681 2 DEBUG nova.compute.manager [req-614ede7a-7404-4368-9294-7d301b416070 req-a31243a4-da5a-4aaf-a4df-c2a4362aac60 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] No waiting events found dispatching network-vif-plugged-67728610-6776-4496-98b2-a14f59c9674d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:25 compute-0 nova_compute[259627]: 2025-10-14 09:09:25.682 2 WARNING nova.compute.manager [req-614ede7a-7404-4368-9294-7d301b416070 req-a31243a4-da5a-4aaf-a4df-c2a4362aac60 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Received unexpected event network-vif-plugged-67728610-6776-4496-98b2-a14f59c9674d for instance with vm_state active and task_state None.
Oct 14 09:09:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 372 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 9.0 MiB/s rd, 8.7 MiB/s wr, 328 op/s
Oct 14 09:09:25 compute-0 nova_compute[259627]: 2025-10-14 09:09:25.872 2 DEBUG oslo_concurrency.lockutils [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:25 compute-0 nova_compute[259627]: 2025-10-14 09:09:25.872 2 DEBUG oslo_concurrency.lockutils [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:25 compute-0 nova_compute[259627]: 2025-10-14 09:09:25.873 2 DEBUG oslo_concurrency.lockutils [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:25 compute-0 nova_compute[259627]: 2025-10-14 09:09:25.873 2 DEBUG oslo_concurrency.lockutils [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:25 compute-0 nova_compute[259627]: 2025-10-14 09:09:25.873 2 DEBUG oslo_concurrency.lockutils [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:25 compute-0 nova_compute[259627]: 2025-10-14 09:09:25.875 2 INFO nova.compute.manager [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Terminating instance
Oct 14 09:09:25 compute-0 nova_compute[259627]: 2025-10-14 09:09:25.876 2 DEBUG nova.compute.manager [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:09:25 compute-0 kernel: tap67728610-67 (unregistering): left promiscuous mode
Oct 14 09:09:25 compute-0 NetworkManager[44885]: <info>  [1760432965.9267] device (tap67728610-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:09:25 compute-0 ovn_controller[152662]: 2025-10-14T09:09:25Z|00722|binding|INFO|Releasing lport 67728610-6776-4496-98b2-a14f59c9674d from this chassis (sb_readonly=0)
Oct 14 09:09:25 compute-0 ovn_controller[152662]: 2025-10-14T09:09:25Z|00723|binding|INFO|Setting lport 67728610-6776-4496-98b2-a14f59c9674d down in Southbound
Oct 14 09:09:25 compute-0 ovn_controller[152662]: 2025-10-14T09:09:25Z|00724|binding|INFO|Removing iface tap67728610-67 ovn-installed in OVS
Oct 14 09:09:25 compute-0 nova_compute[259627]: 2025-10-14 09:09:25.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:25.949 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:77:95 10.100.0.13'], port_security=['fa:16:3e:c3:77:95 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e9ccbc44-715e-4419-9286-f0cb6e41d9cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99db3452-8467-4a2b-a51d-30679c346bb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9099e3128b584ff7a140b8021451223e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '14cbcf9b-48b8-496d-985e-160ef22d10a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cb6279-57ca-4d4a-8018-5af2d7c42670, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=67728610-6776-4496-98b2-a14f59c9674d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:09:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:25.951 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 67728610-6776-4496-98b2-a14f59c9674d in datapath 99db3452-8467-4a2b-a51d-30679c346bb2 unbound from our chassis
Oct 14 09:09:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:25.954 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99db3452-8467-4a2b-a51d-30679c346bb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:09:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:25.955 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8742e18d-cbc6-4d70-be75-e6d76d5a6d97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:25.956 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 namespace which is not needed anymore
Oct 14 09:09:25 compute-0 nova_compute[259627]: 2025-10-14 09:09:25.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:26 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d00000046.scope: Deactivated successfully.
Oct 14 09:09:26 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d00000046.scope: Consumed 3.331s CPU time.
Oct 14 09:09:26 compute-0 systemd-machined[214636]: Machine qemu-86-instance-00000046 terminated.
Oct 14 09:09:26 compute-0 podman[334838]: 2025-10-14 09:09:26.042056883 +0000 UTC m=+0.077899449 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:09:26 compute-0 podman[334834]: 2025-10-14 09:09:26.077973008 +0000 UTC m=+0.115560197 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 14 09:09:26 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[334751]: [NOTICE]   (334758) : haproxy version is 2.8.14-c23fe91
Oct 14 09:09:26 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[334751]: [NOTICE]   (334758) : path to executable is /usr/sbin/haproxy
Oct 14 09:09:26 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[334751]: [WARNING]  (334758) : Exiting Master process...
Oct 14 09:09:26 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[334751]: [WARNING]  (334758) : Exiting Master process...
Oct 14 09:09:26 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[334751]: [ALERT]    (334758) : Current worker (334766) exited with code 143 (Terminated)
Oct 14 09:09:26 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[334751]: [WARNING]  (334758) : All workers exited. Exiting... (0)
Oct 14 09:09:26 compute-0 kernel: tap67728610-67: entered promiscuous mode
Oct 14 09:09:26 compute-0 systemd-udevd[334860]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:09:26 compute-0 NetworkManager[44885]: <info>  [1760432966.1012] manager: (tap67728610-67): new Tun device (/org/freedesktop/NetworkManager/Devices/307)
Oct 14 09:09:26 compute-0 systemd[1]: libpod-291fe9950a590dfa0cc25447548490935959b6013b39a918c05a90d5f1bcfdcf.scope: Deactivated successfully.
Oct 14 09:09:26 compute-0 kernel: tap67728610-67 (unregistering): left promiscuous mode
Oct 14 09:09:26 compute-0 podman[334897]: 2025-10-14 09:09:26.112917118 +0000 UTC m=+0.049777047 container died 291fe9950a590dfa0cc25447548490935959b6013b39a918c05a90d5f1bcfdcf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:26 compute-0 ovn_controller[152662]: 2025-10-14T09:09:26Z|00725|binding|INFO|Claiming lport 67728610-6776-4496-98b2-a14f59c9674d for this chassis.
Oct 14 09:09:26 compute-0 ovn_controller[152662]: 2025-10-14T09:09:26Z|00726|binding|INFO|67728610-6776-4496-98b2-a14f59c9674d: Claiming fa:16:3e:c3:77:95 10.100.0.13
Oct 14 09:09:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:26.120 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:77:95 10.100.0.13'], port_security=['fa:16:3e:c3:77:95 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e9ccbc44-715e-4419-9286-f0cb6e41d9cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99db3452-8467-4a2b-a51d-30679c346bb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9099e3128b584ff7a140b8021451223e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '14cbcf9b-48b8-496d-985e-160ef22d10a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cb6279-57ca-4d4a-8018-5af2d7c42670, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=67728610-6776-4496-98b2-a14f59c9674d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.120 2 INFO nova.virt.libvirt.driver [-] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Instance destroyed successfully.
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.121 2 DEBUG nova.objects.instance [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'resources' on Instance uuid e9ccbc44-715e-4419-9286-f0cb6e41d9cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:26 compute-0 ovn_controller[152662]: 2025-10-14T09:09:26Z|00727|binding|INFO|Releasing lport 67728610-6776-4496-98b2-a14f59c9674d from this chassis (sb_readonly=0)
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.140 2 DEBUG nova.virt.libvirt.vif [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:08:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1155536075',display_name='tempest-ServerDiskConfigTestJSON-server-1155536075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1155536075',id=70,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:09:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9099e3128b584ff7a140b8021451223e',ramdisk_id='',reservation_id='r-ak09v01b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1253454894',owner_user_name='tempest-ServerDiskConfigTestJSON-1253454894-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:09:23Z,user_data=None,user_id='979aa20794dc414f91c59f224a0db083',uuid=e9ccbc44-715e-4419-9286-f0cb6e41d9cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "67728610-6776-4496-98b2-a14f59c9674d", "address": "fa:16:3e:c3:77:95", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67728610-67", "ovs_interfaceid": "67728610-6776-4496-98b2-a14f59c9674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.141 2 DEBUG nova.network.os_vif_util [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converting VIF {"id": "67728610-6776-4496-98b2-a14f59c9674d", "address": "fa:16:3e:c3:77:95", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67728610-67", "ovs_interfaceid": "67728610-6776-4496-98b2-a14f59c9674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.142 2 DEBUG nova.network.os_vif_util [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:77:95,bridge_name='br-int',has_traffic_filtering=True,id=67728610-6776-4496-98b2-a14f59c9674d,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67728610-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.142 2 DEBUG os_vif [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:77:95,bridge_name='br-int',has_traffic_filtering=True,id=67728610-6776-4496-98b2-a14f59c9674d,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67728610-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.146 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67728610-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:26.150 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:77:95 10.100.0.13'], port_security=['fa:16:3e:c3:77:95 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e9ccbc44-715e-4419-9286-f0cb6e41d9cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99db3452-8467-4a2b-a51d-30679c346bb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9099e3128b584ff7a140b8021451223e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '14cbcf9b-48b8-496d-985e-160ef22d10a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cb6279-57ca-4d4a-8018-5af2d7c42670, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=67728610-6776-4496-98b2-a14f59c9674d) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:09:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-291fe9950a590dfa0cc25447548490935959b6013b39a918c05a90d5f1bcfdcf-userdata-shm.mount: Deactivated successfully.
Oct 14 09:09:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d4231ee64cf8db13854b13115eae4c000cd19a824f4301b43ef48ee6008e944-merged.mount: Deactivated successfully.
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.167 2 INFO os_vif [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:77:95,bridge_name='br-int',has_traffic_filtering=True,id=67728610-6776-4496-98b2-a14f59c9674d,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67728610-67')
Oct 14 09:09:26 compute-0 podman[334897]: 2025-10-14 09:09:26.171080851 +0000 UTC m=+0.107940770 container cleanup 291fe9950a590dfa0cc25447548490935959b6013b39a918c05a90d5f1bcfdcf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 09:09:26 compute-0 systemd[1]: libpod-conmon-291fe9950a590dfa0cc25447548490935959b6013b39a918c05a90d5f1bcfdcf.scope: Deactivated successfully.
Oct 14 09:09:26 compute-0 podman[334953]: 2025-10-14 09:09:26.238257544 +0000 UTC m=+0.042874327 container remove 291fe9950a590dfa0cc25447548490935959b6013b39a918c05a90d5f1bcfdcf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.244 2 INFO nova.virt.libvirt.driver [-] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Instance destroyed successfully.
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.245 2 DEBUG nova.objects.instance [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'resources' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:26.245 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a38e473b-ce5a-4e88-ba8c-9faa0ffb2abc]: (4, ('Tue Oct 14 09:09:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 (291fe9950a590dfa0cc25447548490935959b6013b39a918c05a90d5f1bcfdcf)\n291fe9950a590dfa0cc25447548490935959b6013b39a918c05a90d5f1bcfdcf\nTue Oct 14 09:09:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 (291fe9950a590dfa0cc25447548490935959b6013b39a918c05a90d5f1bcfdcf)\n291fe9950a590dfa0cc25447548490935959b6013b39a918c05a90d5f1bcfdcf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:26.247 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[afef2355-69e0-4f77-b902-16641ce8f235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:26.248 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99db3452-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:26 compute-0 kernel: tap99db3452-80: left promiscuous mode
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.273 2 DEBUG nova.virt.libvirt.vif [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1278548098',display_name='tempest-ServerActionsTestOtherB-server-1278548098',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1278548098',id=64,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN/kVgKkHzFM6KgYtJMEi52k+/MuBrPIt79IRFIgFmNTlVvXooEFluDr37nozPBAZXSiIdNHa7h8jeIafiglGDw1A5mNs3hIQ2Rxweba0GKcdCWJKvOM6RPyHsBm/r09+g==',key_name='tempest-keypair-1307751836',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:07:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4bda6775f81f403e83269a5f798c9853',ramdisk_id='',reservation_id='r-j6ifs0px',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-381012378',owner_user_name='tempest-ServerActionsTestOtherB-381012378-project-member',shelved_at='2025-10-14T09:09:22.898515',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='afaa5b96-a65d-4840-9509-b25dadfeafa7'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:09:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='695c749a8dce4506a31e2cec4f02876b',uuid=e065d857-2df9-4199-aa98-41ca3c436bad,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.274 2 DEBUG nova.network.os_vif_util [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converting VIF {"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.274 2 DEBUG nova.network.os_vif_util [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:9e:35,bridge_name='br-int',has_traffic_filtering=True,id=e18648ba-6112-40fa-85f6-bdf82a012079,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18648ba-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.275 2 DEBUG os_vif [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:9e:35,bridge_name='br-int',has_traffic_filtering=True,id=e18648ba-6112-40fa-85f6-bdf82a012079,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18648ba-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.276 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape18648ba-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:26.290 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5f9065f3-d24f-4d59-8f24-f70fa66a3fe6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.291 2 INFO os_vif [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:9e:35,bridge_name='br-int',has_traffic_filtering=True,id=e18648ba-6112-40fa-85f6-bdf82a012079,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18648ba-61')
Oct 14 09:09:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:26.324 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[559e465f-1729-4600-b2bb-c349c153b151]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:26.325 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[79b94ad0-c3f5-4ed1-92fe-49162bf5d3dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:26.341 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9897e826-a095-42fc-b130-f91d3525b851]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674137, 'reachable_time': 29993, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334992, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d99db3452\x2d8467\x2d4a2b\x2da51d\x2d30679c346bb2.mount: Deactivated successfully.
Oct 14 09:09:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:26.347 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:09:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:26.347 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[3d3b8e48-e0e8-410b-8f11-0e6259c7913a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:26.349 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 67728610-6776-4496-98b2-a14f59c9674d in datapath 99db3452-8467-4a2b-a51d-30679c346bb2 unbound from our chassis
Oct 14 09:09:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:26.350 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99db3452-8467-4a2b-a51d-30679c346bb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:09:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:26.351 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[852512dc-226f-48c2-a1dc-a165c85fc8ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:26.351 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 67728610-6776-4496-98b2-a14f59c9674d in datapath 99db3452-8467-4a2b-a51d-30679c346bb2 unbound from our chassis
Oct 14 09:09:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:26.352 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99db3452-8467-4a2b-a51d-30679c346bb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:09:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:26.353 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4d1583e0-19de-461e-9942-81c9b64a5889]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.650 2 INFO nova.virt.libvirt.driver [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Deleting instance files /var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd_del
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.652 2 INFO nova.virt.libvirt.driver [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Deletion of /var/lib/nova/instances/e9ccbc44-715e-4419-9286-f0cb6e41d9cd_del complete
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.699 2 INFO nova.compute.manager [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.700 2 DEBUG oslo.service.loopingcall [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.700 2 DEBUG nova.compute.manager [-] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.700 2 DEBUG nova.network.neutron [-] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.734 2 INFO nova.virt.libvirt.driver [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Deleting instance files /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad_del
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.735 2 INFO nova.virt.libvirt.driver [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Deletion of /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad_del complete
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.824 2 INFO nova.scheduler.client.report [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Deleted allocations for instance e065d857-2df9-4199-aa98-41ca3c436bad
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.881 2 DEBUG oslo_concurrency.lockutils [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.882 2 DEBUG oslo_concurrency.lockutils [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.903 2 DEBUG nova.scheduler.client.report [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.930 2 DEBUG nova.scheduler.client.report [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.930 2 DEBUG nova.compute.provider_tree [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.947 2 DEBUG nova.scheduler.client.report [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.977 2 DEBUG nova.scheduler.client.report [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 14 09:09:26 compute-0 nova_compute[259627]: 2025-10-14 09:09:26.981 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:09:27 compute-0 nova_compute[259627]: 2025-10-14 09:09:27.006 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:27 compute-0 nova_compute[259627]: 2025-10-14 09:09:27.040 2 DEBUG oslo_concurrency.processutils [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:09:27 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2990697939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:27 compute-0 nova_compute[259627]: 2025-10-14 09:09:27.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:27 compute-0 nova_compute[259627]: 2025-10-14 09:09:27.498 2 DEBUG oslo_concurrency.processutils [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:27 compute-0 nova_compute[259627]: 2025-10-14 09:09:27.503 2 DEBUG nova.compute.provider_tree [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:09:27 compute-0 nova_compute[259627]: 2025-10-14 09:09:27.522 2 DEBUG nova.scheduler.client.report [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:09:27 compute-0 nova_compute[259627]: 2025-10-14 09:09:27.562 2 DEBUG oslo_concurrency.lockutils [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:27 compute-0 nova_compute[259627]: 2025-10-14 09:09:27.567 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:27 compute-0 nova_compute[259627]: 2025-10-14 09:09:27.568 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:27 compute-0 nova_compute[259627]: 2025-10-14 09:09:27.568 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:09:27 compute-0 nova_compute[259627]: 2025-10-14 09:09:27.569 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:27 compute-0 ceph-mon[74249]: pgmap v1644: 305 pgs: 305 active+clean; 372 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 9.0 MiB/s rd, 8.7 MiB/s wr, 328 op/s
Oct 14 09:09:27 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2990697939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:27 compute-0 nova_compute[259627]: 2025-10-14 09:09:27.696 2 DEBUG oslo_concurrency.lockutils [None req-0bae4bc2-4cf2-4779-8686-dd4a73df6a3e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:09:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Oct 14 09:09:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Oct 14 09:09:27 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Oct 14 09:09:27 compute-0 nova_compute[259627]: 2025-10-14 09:09:27.811 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432952.8069916, 9a0796fa-b0ab-49c7-ac7f-a66013f58074 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:27 compute-0 nova_compute[259627]: 2025-10-14 09:09:27.812 2 INFO nova.compute.manager [-] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] VM Stopped (Lifecycle Event)
Oct 14 09:09:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 372 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 6.4 MiB/s wr, 265 op/s
Oct 14 09:09:27 compute-0 nova_compute[259627]: 2025-10-14 09:09:27.830 2 DEBUG nova.compute.manager [None req-182cf1ea-a7ae-48d5-95ae-c354a572a985 - - - - - -] [instance: 9a0796fa-b0ab-49c7-ac7f-a66013f58074] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:27 compute-0 nova_compute[259627]: 2025-10-14 09:09:27.939 2 DEBUG nova.network.neutron [-] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:09:27 compute-0 nova_compute[259627]: 2025-10-14 09:09:27.959 2 INFO nova.compute.manager [-] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Took 1.26 seconds to deallocate network for instance.
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.021 2 DEBUG oslo_concurrency.lockutils [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.021 2 DEBUG oslo_concurrency.lockutils [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:09:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3670169084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.073 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.107 2 DEBUG oslo_concurrency.processutils [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.196 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.197 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.299 2 DEBUG nova.compute.manager [req-172def84-9440-4be4-94c0-4bd4d3b2a17b req-132f03be-3d9c-45ae-aad2-81b60dc55c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Received event network-vif-unplugged-67728610-6776-4496-98b2-a14f59c9674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.300 2 DEBUG oslo_concurrency.lockutils [req-172def84-9440-4be4-94c0-4bd4d3b2a17b req-132f03be-3d9c-45ae-aad2-81b60dc55c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.302 2 DEBUG oslo_concurrency.lockutils [req-172def84-9440-4be4-94c0-4bd4d3b2a17b req-132f03be-3d9c-45ae-aad2-81b60dc55c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.302 2 DEBUG oslo_concurrency.lockutils [req-172def84-9440-4be4-94c0-4bd4d3b2a17b req-132f03be-3d9c-45ae-aad2-81b60dc55c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.303 2 DEBUG nova.compute.manager [req-172def84-9440-4be4-94c0-4bd4d3b2a17b req-132f03be-3d9c-45ae-aad2-81b60dc55c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] No waiting events found dispatching network-vif-unplugged-67728610-6776-4496-98b2-a14f59c9674d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.303 2 WARNING nova.compute.manager [req-172def84-9440-4be4-94c0-4bd4d3b2a17b req-132f03be-3d9c-45ae-aad2-81b60dc55c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Received unexpected event network-vif-unplugged-67728610-6776-4496-98b2-a14f59c9674d for instance with vm_state deleted and task_state None.
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.304 2 DEBUG nova.compute.manager [req-172def84-9440-4be4-94c0-4bd4d3b2a17b req-132f03be-3d9c-45ae-aad2-81b60dc55c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Received event network-vif-plugged-67728610-6776-4496-98b2-a14f59c9674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.304 2 DEBUG oslo_concurrency.lockutils [req-172def84-9440-4be4-94c0-4bd4d3b2a17b req-132f03be-3d9c-45ae-aad2-81b60dc55c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.305 2 DEBUG oslo_concurrency.lockutils [req-172def84-9440-4be4-94c0-4bd4d3b2a17b req-132f03be-3d9c-45ae-aad2-81b60dc55c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.306 2 DEBUG oslo_concurrency.lockutils [req-172def84-9440-4be4-94c0-4bd4d3b2a17b req-132f03be-3d9c-45ae-aad2-81b60dc55c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.306 2 DEBUG nova.compute.manager [req-172def84-9440-4be4-94c0-4bd4d3b2a17b req-132f03be-3d9c-45ae-aad2-81b60dc55c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] No waiting events found dispatching network-vif-plugged-67728610-6776-4496-98b2-a14f59c9674d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.307 2 WARNING nova.compute.manager [req-172def84-9440-4be4-94c0-4bd4d3b2a17b req-132f03be-3d9c-45ae-aad2-81b60dc55c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Received unexpected event network-vif-plugged-67728610-6776-4496-98b2-a14f59c9674d for instance with vm_state deleted and task_state None.
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.422 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.423 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3734MB free_disk=59.8762321472168GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.423 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:09:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1797307335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.607 2 DEBUG oslo_concurrency.processutils [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.612 2 DEBUG nova.compute.provider_tree [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.634 2 DEBUG nova.scheduler.client.report [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.675 2 DEBUG oslo_concurrency.lockutils [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.678 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.724 2 INFO nova.scheduler.client.report [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Deleted allocations for instance e9ccbc44-715e-4419-9286-f0cb6e41d9cd
Oct 14 09:09:28 compute-0 ceph-mon[74249]: osdmap e248: 3 total, 3 up, 3 in
Oct 14 09:09:28 compute-0 ceph-mon[74249]: pgmap v1646: 305 pgs: 305 active+clean; 372 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 6.4 MiB/s wr, 265 op/s
Oct 14 09:09:28 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3670169084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:28 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1797307335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.780 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 7167ef21-b041-47b9-8d93-55b5853f4d01 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.780 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.780 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.811 2 DEBUG oslo_concurrency.lockutils [None req-32f3c885-03ef-43cd-b38b-3a824384975d 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "e9ccbc44-715e-4419-9286-f0cb6e41d9cd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:28 compute-0 nova_compute[259627]: 2025-10-14 09:09:28.832 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:09:29 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3584715696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:29 compute-0 nova_compute[259627]: 2025-10-14 09:09:29.259 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:29 compute-0 nova_compute[259627]: 2025-10-14 09:09:29.264 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:09:29 compute-0 nova_compute[259627]: 2025-10-14 09:09:29.317 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:09:29 compute-0 nova_compute[259627]: 2025-10-14 09:09:29.325 2 DEBUG oslo_concurrency.lockutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:29 compute-0 nova_compute[259627]: 2025-10-14 09:09:29.325 2 DEBUG oslo_concurrency.lockutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:29 compute-0 nova_compute[259627]: 2025-10-14 09:09:29.345 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:09:29 compute-0 nova_compute[259627]: 2025-10-14 09:09:29.345 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:29 compute-0 nova_compute[259627]: 2025-10-14 09:09:29.346 2 DEBUG nova.compute.manager [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:09:29 compute-0 nova_compute[259627]: 2025-10-14 09:09:29.434 2 DEBUG oslo_concurrency.lockutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:29 compute-0 nova_compute[259627]: 2025-10-14 09:09:29.435 2 DEBUG oslo_concurrency.lockutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:29 compute-0 nova_compute[259627]: 2025-10-14 09:09:29.440 2 DEBUG nova.virt.hardware [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:09:29 compute-0 nova_compute[259627]: 2025-10-14 09:09:29.440 2 INFO nova.compute.claims [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:09:29 compute-0 nova_compute[259627]: 2025-10-14 09:09:29.583 2 DEBUG oslo_concurrency.processutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:29 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3584715696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 372 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 18 KiB/s wr, 120 op/s
Oct 14 09:09:29 compute-0 nova_compute[259627]: 2025-10-14 09:09:29.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:09:30 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814540553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.057 2 DEBUG oslo_concurrency.processutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.065 2 DEBUG nova.compute.provider_tree [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.082 2 DEBUG nova.scheduler.client.report [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.113 2 DEBUG oslo_concurrency.lockutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.114 2 DEBUG nova.compute.manager [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.166 2 DEBUG nova.compute.manager [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.166 2 DEBUG nova.network.neutron [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.194 2 INFO nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.219 2 DEBUG nova.compute.manager [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.311 2 DEBUG nova.compute.manager [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.313 2 DEBUG nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.313 2 INFO nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Creating image(s)
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.342 2 DEBUG nova.storage.rbd_utils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.365 2 DEBUG nova.storage.rbd_utils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.389 2 DEBUG nova.storage.rbd_utils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.393 2 DEBUG oslo_concurrency.processutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.427 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.429 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.429 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.464 2 DEBUG oslo_concurrency.processutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.464 2 DEBUG oslo_concurrency.lockutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.465 2 DEBUG oslo_concurrency.lockutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.465 2 DEBUG oslo_concurrency.lockutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.486 2 DEBUG nova.storage.rbd_utils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.490 2 DEBUG oslo_concurrency.processutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.530 2 DEBUG nova.policy [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '979aa20794dc414f91c59f224a0db083', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9099e3128b584ff7a140b8021451223e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.589 2 DEBUG nova.compute.manager [req-f4f18a8c-666b-4a2b-b90b-f574330a1f94 req-5e15ede5-b116-40b0-a2db-ad26daa0e32e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Received event network-vif-deleted-67728610-6776-4496-98b2-a14f59c9674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.760 2 DEBUG oslo_concurrency.processutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:30 compute-0 ceph-mon[74249]: pgmap v1647: 305 pgs: 305 active+clean; 372 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 18 KiB/s wr, 120 op/s
Oct 14 09:09:30 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/814540553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.842 2 DEBUG nova.storage.rbd_utils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] resizing rbd image dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.925 2 DEBUG nova.objects.instance [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'migration_context' on Instance uuid dfb68a47-709d-40e3-8a17-01d9c3fb084b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.941 2 DEBUG nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.941 2 DEBUG nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Ensure instance console log exists: /var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.942 2 DEBUG oslo_concurrency.lockutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.942 2 DEBUG oslo_concurrency.lockutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.943 2 DEBUG oslo_concurrency.lockutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:30 compute-0 nova_compute[259627]: 2025-10-14 09:09:30.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:09:31 compute-0 nova_compute[259627]: 2025-10-14 09:09:31.159 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432956.1584353, e065d857-2df9-4199-aa98-41ca3c436bad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:31 compute-0 nova_compute[259627]: 2025-10-14 09:09:31.159 2 INFO nova.compute.manager [-] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] VM Stopped (Lifecycle Event)
Oct 14 09:09:31 compute-0 nova_compute[259627]: 2025-10-14 09:09:31.207 2 DEBUG nova.compute.manager [None req-6fcdd9fc-09a8-481b-bd54-bb9af09adbc2 - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:31 compute-0 nova_compute[259627]: 2025-10-14 09:09:31.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:31 compute-0 nova_compute[259627]: 2025-10-14 09:09:31.679 2 DEBUG nova.network.neutron [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Successfully created port: 705e7559-ae3c-461c-be70-b8dee31808cf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:09:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 246 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 28 KiB/s wr, 173 op/s
Oct 14 09:09:31 compute-0 nova_compute[259627]: 2025-10-14 09:09:31.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:09:31 compute-0 nova_compute[259627]: 2025-10-14 09:09:31.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:09:31 compute-0 nova_compute[259627]: 2025-10-14 09:09:31.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:09:32 compute-0 nova_compute[259627]: 2025-10-14 09:09:32.007 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 14 09:09:32 compute-0 nova_compute[259627]: 2025-10-14 09:09:32.049 2 DEBUG oslo_concurrency.lockutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:32 compute-0 nova_compute[259627]: 2025-10-14 09:09:32.051 2 DEBUG oslo_concurrency.lockutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:32 compute-0 nova_compute[259627]: 2025-10-14 09:09:32.051 2 INFO nova.compute.manager [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Unshelving
Oct 14 09:09:32 compute-0 nova_compute[259627]: 2025-10-14 09:09:32.260 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-7167ef21-b041-47b9-8d93-55b5853f4d01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:09:32 compute-0 nova_compute[259627]: 2025-10-14 09:09:32.261 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-7167ef21-b041-47b9-8d93-55b5853f4d01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:09:32 compute-0 nova_compute[259627]: 2025-10-14 09:09:32.261 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:09:32 compute-0 nova_compute[259627]: 2025-10-14 09:09:32.262 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7167ef21-b041-47b9-8d93-55b5853f4d01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:32 compute-0 nova_compute[259627]: 2025-10-14 09:09:32.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:09:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:09:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:09:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:09:32
Oct 14 09:09:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:09:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:09:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'backups', 'vms', '.rgw.root', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'images']
Oct 14 09:09:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:09:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:09:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:09:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:09:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:09:32 compute-0 ceph-mon[74249]: pgmap v1648: 305 pgs: 305 active+clean; 246 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 28 KiB/s wr, 173 op/s
Oct 14 09:09:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:09:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:09:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:09:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:09:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:09:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:09:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:09:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:09:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:09:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.075 2 DEBUG oslo_concurrency.lockutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.075 2 DEBUG oslo_concurrency.lockutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.080 2 DEBUG nova.objects.instance [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'pci_requests' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.100 2 DEBUG nova.objects.instance [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'numa_topology' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.115 2 DEBUG nova.virt.hardware [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.115 2 INFO nova.compute.claims [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.279 2 DEBUG oslo_concurrency.processutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.321 2 DEBUG nova.network.neutron [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Successfully updated port: 705e7559-ae3c-461c-be70-b8dee31808cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.347 2 DEBUG oslo_concurrency.lockutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "refresh_cache-dfb68a47-709d-40e3-8a17-01d9c3fb084b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.348 2 DEBUG oslo_concurrency.lockutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquired lock "refresh_cache-dfb68a47-709d-40e3-8a17-01d9c3fb084b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.348 2 DEBUG nova.network.neutron [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.472 2 DEBUG nova.compute.manager [req-3beac19f-22e4-4b64-81af-250a1845e51a req-55c7a426-9e24-46a5-9dd4-d45129d8c281 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Received event network-changed-705e7559-ae3c-461c-be70-b8dee31808cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.472 2 DEBUG nova.compute.manager [req-3beac19f-22e4-4b64-81af-250a1845e51a req-55c7a426-9e24-46a5-9dd4-d45129d8c281 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Refreshing instance network info cache due to event network-changed-705e7559-ae3c-461c-be70-b8dee31808cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.473 2 DEBUG oslo_concurrency.lockutils [req-3beac19f-22e4-4b64-81af-250a1845e51a req-55c7a426-9e24-46a5-9dd4-d45129d8c281 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-dfb68a47-709d-40e3-8a17-01d9c3fb084b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.555 2 DEBUG nova.network.neutron [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:09:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:09:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3351120282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.703 2 DEBUG oslo_concurrency.processutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.711 2 DEBUG nova.compute.provider_tree [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.744 2 DEBUG nova.scheduler.client.report [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.767 2 DEBUG oslo_concurrency.lockutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 246 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 28 KiB/s wr, 173 op/s
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.840 2 DEBUG oslo_concurrency.lockutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "16c93e17-00f2-4710-a0e4-83eb60430088" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.840 2 DEBUG oslo_concurrency.lockutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.870 2 DEBUG nova.compute.manager [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:09:33 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3351120282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.947 2 DEBUG oslo_concurrency.lockutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.948 2 DEBUG oslo_concurrency.lockutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.958 2 DEBUG nova.virt.hardware [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:09:33 compute-0 nova_compute[259627]: 2025-10-14 09:09:33.959 2 INFO nova.compute.claims [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.022 2 INFO nova.network.neutron [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Updating port e18648ba-6112-40fa-85f6-bdf82a012079 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.174 2 DEBUG oslo_concurrency.processutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.583 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Updating instance_info_cache with network_info: [{"id": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "address": "fa:16:3e:01:e8:10", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6383be3b-43", "ovs_interfaceid": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.607 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-7167ef21-b041-47b9-8d93-55b5853f4d01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.607 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:09:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:09:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2789884846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.663 2 DEBUG nova.network.neutron [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Updating instance_info_cache with network_info: [{"id": "705e7559-ae3c-461c-be70-b8dee31808cf", "address": "fa:16:3e:60:ea:f5", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705e7559-ae", "ovs_interfaceid": "705e7559-ae3c-461c-be70-b8dee31808cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.675 2 DEBUG oslo_concurrency.processutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.681 2 DEBUG oslo_concurrency.lockutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Releasing lock "refresh_cache-dfb68a47-709d-40e3-8a17-01d9c3fb084b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.682 2 DEBUG nova.compute.manager [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Instance network_info: |[{"id": "705e7559-ae3c-461c-be70-b8dee31808cf", "address": "fa:16:3e:60:ea:f5", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705e7559-ae", "ovs_interfaceid": "705e7559-ae3c-461c-be70-b8dee31808cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.683 2 DEBUG oslo_concurrency.lockutils [req-3beac19f-22e4-4b64-81af-250a1845e51a req-55c7a426-9e24-46a5-9dd4-d45129d8c281 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-dfb68a47-709d-40e3-8a17-01d9c3fb084b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.683 2 DEBUG nova.network.neutron [req-3beac19f-22e4-4b64-81af-250a1845e51a req-55c7a426-9e24-46a5-9dd4-d45129d8c281 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Refreshing network info cache for port 705e7559-ae3c-461c-be70-b8dee31808cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.688 2 DEBUG nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Start _get_guest_xml network_info=[{"id": "705e7559-ae3c-461c-be70-b8dee31808cf", "address": "fa:16:3e:60:ea:f5", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705e7559-ae", "ovs_interfaceid": "705e7559-ae3c-461c-be70-b8dee31808cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.690 2 DEBUG nova.compute.provider_tree [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.696 2 WARNING nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.702 2 DEBUG nova.virt.libvirt.host [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.703 2 DEBUG nova.virt.libvirt.host [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.705 2 DEBUG nova.virt.libvirt.host [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.706 2 DEBUG nova.virt.libvirt.host [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.707 2 DEBUG nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.707 2 DEBUG nova.virt.hardware [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.708 2 DEBUG nova.virt.hardware [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.708 2 DEBUG nova.virt.hardware [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.708 2 DEBUG nova.virt.hardware [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.708 2 DEBUG nova.virt.hardware [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.709 2 DEBUG nova.virt.hardware [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.709 2 DEBUG nova.virt.hardware [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.709 2 DEBUG nova.virt.hardware [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.710 2 DEBUG nova.virt.hardware [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.710 2 DEBUG nova.virt.hardware [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.710 2 DEBUG nova.virt.hardware [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.714 2 DEBUG oslo_concurrency.processutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.769 2 DEBUG nova.scheduler.client.report [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.797 2 DEBUG oslo_concurrency.lockutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.797 2 DEBUG nova.compute.manager [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.836 2 DEBUG nova.compute.manager [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.836 2 DEBUG nova.network.neutron [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.854 2 INFO nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.873 2 DEBUG nova.compute.manager [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:09:34 compute-0 ceph-mon[74249]: pgmap v1649: 305 pgs: 305 active+clean; 246 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 28 KiB/s wr, 173 op/s
Oct 14 09:09:34 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2789884846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.959 2 DEBUG nova.compute.manager [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.961 2 DEBUG nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.961 2 INFO nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Creating image(s)
Oct 14 09:09:34 compute-0 nova_compute[259627]: 2025-10-14 09:09:34.997 2 DEBUG nova.storage.rbd_utils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image 16c93e17-00f2-4710-a0e4-83eb60430088_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.033 2 DEBUG nova.storage.rbd_utils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image 16c93e17-00f2-4710-a0e4-83eb60430088_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.060 2 DEBUG nova.storage.rbd_utils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image 16c93e17-00f2-4710-a0e4-83eb60430088_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.063 2 DEBUG oslo_concurrency.processutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.103 2 DEBUG oslo_concurrency.lockutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.104 2 DEBUG oslo_concurrency.lockutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquired lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.104 2 DEBUG nova.network.neutron [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.108 2 DEBUG nova.policy [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a268118aae14d449097f4a26371415e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '51ae58236f6a432e93764d455a502033', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.150 2 DEBUG oslo_concurrency.processutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.151 2 DEBUG oslo_concurrency.lockutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.151 2 DEBUG oslo_concurrency.lockutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.152 2 DEBUG oslo_concurrency.lockutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.175 2 DEBUG nova.storage.rbd_utils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image 16c93e17-00f2-4710-a0e4-83eb60430088_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.179 2 DEBUG oslo_concurrency.processutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 16c93e17-00f2-4710-a0e4-83eb60430088_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:09:35 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/207914715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.231 2 DEBUG nova.compute.manager [req-aa484df5-3d62-4089-94a9-fe5eed9fd0c2 req-40644700-0ce4-4d02-91e7-520e2eaeb3ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received event network-changed-e18648ba-6112-40fa-85f6-bdf82a012079 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.231 2 DEBUG nova.compute.manager [req-aa484df5-3d62-4089-94a9-fe5eed9fd0c2 req-40644700-0ce4-4d02-91e7-520e2eaeb3ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Refreshing instance network info cache due to event network-changed-e18648ba-6112-40fa-85f6-bdf82a012079. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.232 2 DEBUG oslo_concurrency.lockutils [req-aa484df5-3d62-4089-94a9-fe5eed9fd0c2 req-40644700-0ce4-4d02-91e7-520e2eaeb3ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.232 2 DEBUG oslo_concurrency.processutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.256 2 DEBUG nova.storage.rbd_utils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.260 2 DEBUG oslo_concurrency.processutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.507 2 DEBUG oslo_concurrency.processutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 16c93e17-00f2-4710-a0e4-83eb60430088_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.575 2 DEBUG nova.storage.rbd_utils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] resizing rbd image 16c93e17-00f2-4710-a0e4-83eb60430088_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.665 2 DEBUG nova.objects.instance [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'migration_context' on Instance uuid 16c93e17-00f2-4710-a0e4-83eb60430088 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.683 2 DEBUG nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.683 2 DEBUG nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Ensure instance console log exists: /var/lib/nova/instances/16c93e17-00f2-4710-a0e4-83eb60430088/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.683 2 DEBUG oslo_concurrency.lockutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.684 2 DEBUG oslo_concurrency.lockutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.684 2 DEBUG oslo_concurrency.lockutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:09:35 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2795057250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.763 2 DEBUG oslo_concurrency.processutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.766 2 DEBUG nova.virt.libvirt.vif [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:09:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1083913759',display_name='tempest-ServerDiskConfigTestJSON-server-1083913759',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1083913759',id=72,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9099e3128b584ff7a140b8021451223e',ramdisk_id='',reservation_id='r-0sy3orhw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1253454894',owner_user_name='tempest-ServerDiskConfigTestJSON-1253454894-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:09:30Z,user_data=None,user_id='979aa20794dc414f91c59f224a0db083',uuid=dfb68a47-709d-40e3-8a17-01d9c3fb084b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "705e7559-ae3c-461c-be70-b8dee31808cf", "address": "fa:16:3e:60:ea:f5", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705e7559-ae", "ovs_interfaceid": "705e7559-ae3c-461c-be70-b8dee31808cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.767 2 DEBUG nova.network.os_vif_util [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converting VIF {"id": "705e7559-ae3c-461c-be70-b8dee31808cf", "address": "fa:16:3e:60:ea:f5", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705e7559-ae", "ovs_interfaceid": "705e7559-ae3c-461c-be70-b8dee31808cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.769 2 DEBUG nova.network.os_vif_util [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:ea:f5,bridge_name='br-int',has_traffic_filtering=True,id=705e7559-ae3c-461c-be70-b8dee31808cf,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap705e7559-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.771 2 DEBUG nova.objects.instance [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'pci_devices' on Instance uuid dfb68a47-709d-40e3-8a17-01d9c3fb084b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.786 2 DEBUG nova.network.neutron [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Successfully created port: 2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.793 2 DEBUG nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:09:35 compute-0 nova_compute[259627]:   <uuid>dfb68a47-709d-40e3-8a17-01d9c3fb084b</uuid>
Oct 14 09:09:35 compute-0 nova_compute[259627]:   <name>instance-00000048</name>
Oct 14 09:09:35 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:09:35 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:09:35 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1083913759</nova:name>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:09:34</nova:creationTime>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:09:35 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:09:35 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:09:35 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:09:35 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:09:35 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:09:35 compute-0 nova_compute[259627]:         <nova:user uuid="979aa20794dc414f91c59f224a0db083">tempest-ServerDiskConfigTestJSON-1253454894-project-member</nova:user>
Oct 14 09:09:35 compute-0 nova_compute[259627]:         <nova:project uuid="9099e3128b584ff7a140b8021451223e">tempest-ServerDiskConfigTestJSON-1253454894</nova:project>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:09:35 compute-0 nova_compute[259627]:         <nova:port uuid="705e7559-ae3c-461c-be70-b8dee31808cf">
Oct 14 09:09:35 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:09:35 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:09:35 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <system>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <entry name="serial">dfb68a47-709d-40e3-8a17-01d9c3fb084b</entry>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <entry name="uuid">dfb68a47-709d-40e3-8a17-01d9c3fb084b</entry>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     </system>
Oct 14 09:09:35 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:09:35 compute-0 nova_compute[259627]:   <os>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:   </os>
Oct 14 09:09:35 compute-0 nova_compute[259627]:   <features>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:   </features>
Oct 14 09:09:35 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:09:35 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:09:35 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk">
Oct 14 09:09:35 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       </source>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:09:35 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk.config">
Oct 14 09:09:35 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       </source>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:09:35 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:60:ea:f5"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <target dev="tap705e7559-ae"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b/console.log" append="off"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <video>
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     </video>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:09:35 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:09:35 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:09:35 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:09:35 compute-0 nova_compute[259627]: </domain>
Oct 14 09:09:35 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.795 2 DEBUG nova.compute.manager [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Preparing to wait for external event network-vif-plugged-705e7559-ae3c-461c-be70-b8dee31808cf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.796 2 DEBUG oslo_concurrency.lockutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.796 2 DEBUG oslo_concurrency.lockutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.797 2 DEBUG oslo_concurrency.lockutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.798 2 DEBUG nova.virt.libvirt.vif [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:09:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1083913759',display_name='tempest-ServerDiskConfigTestJSON-server-1083913759',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1083913759',id=72,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9099e3128b584ff7a140b8021451223e',ramdisk_id='',reservation_id='r-0sy3orhw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1253454894',owner_user_name='tempest-ServerDiskConfigTestJSON-1253454894-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:09:30Z,user_data=None,user_id='979aa20794dc414f91c59f224a0db083',uuid=dfb68a47-709d-40e3-8a17-01d9c3fb084b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "705e7559-ae3c-461c-be70-b8dee31808cf", "address": "fa:16:3e:60:ea:f5", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705e7559-ae", "ovs_interfaceid": "705e7559-ae3c-461c-be70-b8dee31808cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.799 2 DEBUG nova.network.os_vif_util [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converting VIF {"id": "705e7559-ae3c-461c-be70-b8dee31808cf", "address": "fa:16:3e:60:ea:f5", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705e7559-ae", "ovs_interfaceid": "705e7559-ae3c-461c-be70-b8dee31808cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.800 2 DEBUG nova.network.os_vif_util [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:ea:f5,bridge_name='br-int',has_traffic_filtering=True,id=705e7559-ae3c-461c-be70-b8dee31808cf,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap705e7559-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.800 2 DEBUG os_vif [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:ea:f5,bridge_name='br-int',has_traffic_filtering=True,id=705e7559-ae3c-461c-be70-b8dee31808cf,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap705e7559-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.802 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.803 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.808 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap705e7559-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap705e7559-ae, col_values=(('external_ids', {'iface-id': '705e7559-ae3c-461c-be70-b8dee31808cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:ea:f5', 'vm-uuid': 'dfb68a47-709d-40e3-8a17-01d9c3fb084b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:35 compute-0 NetworkManager[44885]: <info>  [1760432975.8130] manager: (tap705e7559-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/308)
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 308 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 2.6 MiB/s wr, 112 op/s
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.820 2 INFO os_vif [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:ea:f5,bridge_name='br-int',has_traffic_filtering=True,id=705e7559-ae3c-461c-be70-b8dee31808cf,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap705e7559-ae')
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.897 2 DEBUG nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.897 2 DEBUG nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.898 2 DEBUG nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] No VIF found with MAC fa:16:3e:60:ea:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.899 2 INFO nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Using config drive
Oct 14 09:09:35 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/207914715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:35 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2795057250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.929 2 DEBUG nova.storage.rbd_utils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:35 compute-0 nova_compute[259627]: 2025-10-14 09:09:35.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.378 2 INFO nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Creating config drive at /var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b/disk.config
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.388 2 DEBUG oslo_concurrency.processutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxg8hr136 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.554 2 DEBUG oslo_concurrency.processutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxg8hr136" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.581 2 DEBUG nova.storage.rbd_utils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.584 2 DEBUG oslo_concurrency.processutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b/disk.config dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.638 2 DEBUG nova.network.neutron [req-3beac19f-22e4-4b64-81af-250a1845e51a req-55c7a426-9e24-46a5-9dd4-d45129d8c281 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Updated VIF entry in instance network info cache for port 705e7559-ae3c-461c-be70-b8dee31808cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.639 2 DEBUG nova.network.neutron [req-3beac19f-22e4-4b64-81af-250a1845e51a req-55c7a426-9e24-46a5-9dd4-d45129d8c281 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Updating instance_info_cache with network_info: [{"id": "705e7559-ae3c-461c-be70-b8dee31808cf", "address": "fa:16:3e:60:ea:f5", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705e7559-ae", "ovs_interfaceid": "705e7559-ae3c-461c-be70-b8dee31808cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.660 2 DEBUG oslo_concurrency.lockutils [req-3beac19f-22e4-4b64-81af-250a1845e51a req-55c7a426-9e24-46a5-9dd4-d45129d8c281 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-dfb68a47-709d-40e3-8a17-01d9c3fb084b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.771 2 DEBUG oslo_concurrency.processutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b/disk.config dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.772 2 INFO nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Deleting local config drive /var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b/disk.config because it was imported into RBD.
Oct 14 09:09:36 compute-0 kernel: tap705e7559-ae: entered promiscuous mode
Oct 14 09:09:36 compute-0 NetworkManager[44885]: <info>  [1760432976.8342] manager: (tap705e7559-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/309)
Oct 14 09:09:36 compute-0 ovn_controller[152662]: 2025-10-14T09:09:36Z|00728|binding|INFO|Claiming lport 705e7559-ae3c-461c-be70-b8dee31808cf for this chassis.
Oct 14 09:09:36 compute-0 ovn_controller[152662]: 2025-10-14T09:09:36Z|00729|binding|INFO|705e7559-ae3c-461c-be70-b8dee31808cf: Claiming fa:16:3e:60:ea:f5 10.100.0.11
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:36.845 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:ea:f5 10.100.0.11'], port_security=['fa:16:3e:60:ea:f5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'dfb68a47-709d-40e3-8a17-01d9c3fb084b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99db3452-8467-4a2b-a51d-30679c346bb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9099e3128b584ff7a140b8021451223e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '14cbcf9b-48b8-496d-985e-160ef22d10a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cb6279-57ca-4d4a-8018-5af2d7c42670, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=705e7559-ae3c-461c-be70-b8dee31808cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:09:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:36.846 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 705e7559-ae3c-461c-be70-b8dee31808cf in datapath 99db3452-8467-4a2b-a51d-30679c346bb2 bound to our chassis
Oct 14 09:09:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:36.848 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99db3452-8467-4a2b-a51d-30679c346bb2
Oct 14 09:09:36 compute-0 ovn_controller[152662]: 2025-10-14T09:09:36Z|00730|binding|INFO|Setting lport 705e7559-ae3c-461c-be70-b8dee31808cf ovn-installed in OVS
Oct 14 09:09:36 compute-0 ovn_controller[152662]: 2025-10-14T09:09:36Z|00731|binding|INFO|Setting lport 705e7559-ae3c-461c-be70-b8dee31808cf up in Southbound
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.863 2 DEBUG nova.network.neutron [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Updating instance_info_cache with network_info: [{"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:36.866 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a04443-6aa3-435d-995e-8492f01a221b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:36.868 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap99db3452-81 in ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.870 2 DEBUG nova.network.neutron [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Successfully updated port: 2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:09:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:36.871 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap99db3452-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:09:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:36.872 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2fc68883-e174-487b-8c07-80238e1cb86a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:36.873 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2157faa3-25dd-4f3e-a920-65b87da9f050]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:36 compute-0 systemd-machined[214636]: New machine qemu-87-instance-00000048.
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.887 2 DEBUG oslo_concurrency.lockutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Releasing lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:09:36 compute-0 systemd-udevd[335619]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.890 2 DEBUG nova.virt.libvirt.driver [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.891 2 INFO nova.virt.libvirt.driver [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Creating image(s)
Oct 14 09:09:36 compute-0 systemd[1]: Started Virtual Machine qemu-87-instance-00000048.
Oct 14 09:09:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:36.896 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[cf8765a5-4eb2-49a5-9ebf-1cb711709dc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:36 compute-0 NetworkManager[44885]: <info>  [1760432976.9126] device (tap705e7559-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:09:36 compute-0 NetworkManager[44885]: <info>  [1760432976.9152] device (tap705e7559-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:09:36 compute-0 ceph-mon[74249]: pgmap v1650: 305 pgs: 305 active+clean; 308 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 2.6 MiB/s wr, 112 op/s
Oct 14 09:09:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:36.929 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd4d075-4470-4e82-bc45-51e36a9aa6d5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.942 2 DEBUG nova.storage.rbd_utils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.947 2 DEBUG nova.objects.instance [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.950 2 DEBUG oslo_concurrency.lockutils [req-aa484df5-3d62-4089-94a9-fe5eed9fd0c2 req-40644700-0ce4-4d02-91e7-520e2eaeb3ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.951 2 DEBUG nova.network.neutron [req-aa484df5-3d62-4089-94a9-fe5eed9fd0c2 req-40644700-0ce4-4d02-91e7-520e2eaeb3ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Refreshing network info cache for port e18648ba-6112-40fa-85f6-bdf82a012079 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.954 2 DEBUG oslo_concurrency.lockutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "refresh_cache-16c93e17-00f2-4710-a0e4-83eb60430088" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.954 2 DEBUG oslo_concurrency.lockutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquired lock "refresh_cache-16c93e17-00f2-4710-a0e4-83eb60430088" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.958 2 DEBUG nova.network.neutron [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.961 2 DEBUG nova.compute.manager [req-58d987b9-4efe-41e8-bcf3-37819e774ccf req-1cc5b6d0-6db9-49a3-be60-0a7eb2816c99 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Received event network-changed-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.962 2 DEBUG nova.compute.manager [req-58d987b9-4efe-41e8-bcf3-37819e774ccf req-1cc5b6d0-6db9-49a3-be60-0a7eb2816c99 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Refreshing instance network info cache due to event network-changed-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:09:36 compute-0 nova_compute[259627]: 2025-10-14 09:09:36.962 2 DEBUG oslo_concurrency.lockutils [req-58d987b9-4efe-41e8-bcf3-37819e774ccf req-1cc5b6d0-6db9-49a3-be60-0a7eb2816c99 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-16c93e17-00f2-4710-a0e4-83eb60430088" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:09:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:36.963 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[76ecab27-1c7f-43db-8299-e66c8276bb85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:36.970 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f310fdd0-08d2-47b0-9640-591c55a54298]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:36 compute-0 NetworkManager[44885]: <info>  [1760432976.9721] manager: (tap99db3452-80): new Veth device (/org/freedesktop/NetworkManager/Devices/310)
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:37.014 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[83ab965f-c5a1-4423-95c2-9fcf0be32a57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:37.018 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ce8e6d59-be4e-49bc-800b-123c4a2dd427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:37 compute-0 nova_compute[259627]: 2025-10-14 09:09:37.038 2 DEBUG nova.storage.rbd_utils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:37 compute-0 NetworkManager[44885]: <info>  [1760432977.0650] device (tap99db3452-80): carrier: link connected
Oct 14 09:09:37 compute-0 nova_compute[259627]: 2025-10-14 09:09:37.073 2 DEBUG nova.storage.rbd_utils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:37.075 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ff2e030f-9718-4713-a08f-bba3d09beb9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:37 compute-0 nova_compute[259627]: 2025-10-14 09:09:37.078 2 DEBUG oslo_concurrency.lockutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "c23c9639cd55db9982545898c1312631f4e2a778" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:37 compute-0 nova_compute[259627]: 2025-10-14 09:09:37.079 2 DEBUG oslo_concurrency.lockutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "c23c9639cd55db9982545898c1312631f4e2a778" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:37.101 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[24a5f4ba-ca8a-43b0-9c1e-cd598d98658c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99db3452-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:a6:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 213], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675582, 'reachable_time': 41210, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335704, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:37.124 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa09d5c-de30-4c8f-a727-340d627b8394]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:a670'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675582, 'tstamp': 675582}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335705, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:37.145 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f154fb70-45aa-47ff-8d72-c5fb1097c2c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99db3452-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:a6:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 213], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675582, 'reachable_time': 41210, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335706, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:37.187 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6a8bdc1d-e19c-4a77-81f1-1b288569beaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:37.253 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ec673225-d269-41b3-a4e2-c9c6a86cc9f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:37.255 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99db3452-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:37.255 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:37.256 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99db3452-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:37 compute-0 nova_compute[259627]: 2025-10-14 09:09:37.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:37 compute-0 kernel: tap99db3452-80: entered promiscuous mode
Oct 14 09:09:37 compute-0 NetworkManager[44885]: <info>  [1760432977.2628] manager: (tap99db3452-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:37.268 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99db3452-80, col_values=(('external_ids', {'iface-id': '59e7d558-49d1-48cf-b926-27e93fe381b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:37 compute-0 ovn_controller[152662]: 2025-10-14T09:09:37Z|00732|binding|INFO|Releasing lport 59e7d558-49d1-48cf-b926-27e93fe381b1 from this chassis (sb_readonly=0)
Oct 14 09:09:37 compute-0 nova_compute[259627]: 2025-10-14 09:09:37.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:37 compute-0 nova_compute[259627]: 2025-10-14 09:09:37.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:37.308 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/99db3452-8467-4a2b-a51d-30679c346bb2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/99db3452-8467-4a2b-a51d-30679c346bb2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:37.310 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a812100e-7d0e-4ef7-85e1-2bb2d008dc3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:37.311 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-99db3452-8467-4a2b-a51d-30679c346bb2
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/99db3452-8467-4a2b-a51d-30679c346bb2.pid.haproxy
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 99db3452-8467-4a2b-a51d-30679c346bb2
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:09:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:37.312 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'env', 'PROCESS_TAG=haproxy-99db3452-8467-4a2b-a51d-30679c346bb2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/99db3452-8467-4a2b-a51d-30679c346bb2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:09:37 compute-0 nova_compute[259627]: 2025-10-14 09:09:37.329 2 DEBUG nova.network.neutron [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:09:37 compute-0 nova_compute[259627]: 2025-10-14 09:09:37.415 2 DEBUG nova.virt.libvirt.imagebackend [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Image locations are: [{'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/afaa5b96-a65d-4840-9509-b25dadfeafa7/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/afaa5b96-a65d-4840-9509-b25dadfeafa7/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 14 09:09:37 compute-0 nova_compute[259627]: 2025-10-14 09:09:37.476 2 DEBUG nova.virt.libvirt.imagebackend [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Selected location: {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/afaa5b96-a65d-4840-9509-b25dadfeafa7/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 14 09:09:37 compute-0 nova_compute[259627]: 2025-10-14 09:09:37.477 2 DEBUG nova.storage.rbd_utils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] cloning images/afaa5b96-a65d-4840-9509-b25dadfeafa7@snap to None/e065d857-2df9-4199-aa98-41ca3c436bad_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:09:37 compute-0 nova_compute[259627]: 2025-10-14 09:09:37.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:37 compute-0 nova_compute[259627]: 2025-10-14 09:09:37.592 2 DEBUG oslo_concurrency.lockutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "c23c9639cd55db9982545898c1312631f4e2a778" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:09:37 compute-0 podman[335860]: 2025-10-14 09:09:37.742885333 +0000 UTC m=+0.059554738 container create 308add926168e52dd9667134d4fc2d5e0cf016525ba6b3b962363812902baf96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 09:09:37 compute-0 nova_compute[259627]: 2025-10-14 09:09:37.777 2 DEBUG nova.objects.instance [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'migration_context' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:37 compute-0 systemd[1]: Started libpod-conmon-308add926168e52dd9667134d4fc2d5e0cf016525ba6b3b962363812902baf96.scope.
Oct 14 09:09:37 compute-0 podman[335860]: 2025-10-14 09:09:37.711730176 +0000 UTC m=+0.028399631 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:09:37 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:09:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6374582ad90e28564932ca32b29633cb4772002e9ea004e655259bf072460bdb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:09:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 308 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 2.6 MiB/s wr, 111 op/s
Oct 14 09:09:37 compute-0 podman[335860]: 2025-10-14 09:09:37.839167344 +0000 UTC m=+0.155836769 container init 308add926168e52dd9667134d4fc2d5e0cf016525ba6b3b962363812902baf96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:09:37 compute-0 podman[335860]: 2025-10-14 09:09:37.844713141 +0000 UTC m=+0.161382546 container start 308add926168e52dd9667134d4fc2d5e0cf016525ba6b3b962363812902baf96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:09:37 compute-0 nova_compute[259627]: 2025-10-14 09:09:37.846 2 DEBUG nova.storage.rbd_utils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] flattening vms/e065d857-2df9-4199-aa98-41ca3c436bad_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:09:37 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[335916]: [NOTICE]   (335938) : New worker (335953) forked
Oct 14 09:09:37 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[335916]: [NOTICE]   (335938) : Loading success.
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.177 2 DEBUG nova.virt.libvirt.driver [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Image rbd:vms/e065d857-2df9-4199-aa98-41ca3c436bad_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.178 2 DEBUG nova.virt.libvirt.driver [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.178 2 DEBUG nova.virt.libvirt.driver [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Ensure instance console log exists: /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.178 2 DEBUG oslo_concurrency.lockutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.179 2 DEBUG oslo_concurrency.lockutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.179 2 DEBUG oslo_concurrency.lockutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.181 2 DEBUG nova.virt.libvirt.driver [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Start _get_guest_xml network_info=[{"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-14T09:09:13Z,direct_url=<?>,disk_format='raw',id=afaa5b96-a65d-4840-9509-b25dadfeafa7,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1278548098-shelved',owner='4bda6775f81f403e83269a5f798c9853',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-14T09:09:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.186 2 WARNING nova.virt.libvirt.driver [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.190 2 DEBUG nova.virt.libvirt.host [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.191 2 DEBUG nova.virt.libvirt.host [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.194 2 DEBUG nova.virt.libvirt.host [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.194 2 DEBUG nova.virt.libvirt.host [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.195 2 DEBUG nova.virt.libvirt.driver [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.195 2 DEBUG nova.virt.hardware [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-14T09:09:13Z,direct_url=<?>,disk_format='raw',id=afaa5b96-a65d-4840-9509-b25dadfeafa7,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1278548098-shelved',owner='4bda6775f81f403e83269a5f798c9853',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-14T09:09:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.195 2 DEBUG nova.virt.hardware [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.195 2 DEBUG nova.virt.hardware [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.196 2 DEBUG nova.virt.hardware [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.196 2 DEBUG nova.virt.hardware [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.196 2 DEBUG nova.virt.hardware [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.196 2 DEBUG nova.virt.hardware [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.196 2 DEBUG nova.virt.hardware [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.197 2 DEBUG nova.virt.hardware [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.197 2 DEBUG nova.virt.hardware [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.197 2 DEBUG nova.virt.hardware [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.197 2 DEBUG nova.objects.instance [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.217 2 DEBUG oslo_concurrency.processutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.261 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432978.2313175, dfb68a47-709d-40e3-8a17-01d9c3fb084b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.262 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] VM Started (Lifecycle Event)
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.291 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.294 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432978.231648, dfb68a47-709d-40e3-8a17-01d9c3fb084b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.295 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] VM Paused (Lifecycle Event)
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.320 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.323 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.344 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.577 2 DEBUG nova.network.neutron [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Updating instance_info_cache with network_info: [{"id": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "address": "fa:16:3e:f5:8d:59", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2aa7bfa4-d8", "ovs_interfaceid": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.581 2 DEBUG nova.network.neutron [req-aa484df5-3d62-4089-94a9-fe5eed9fd0c2 req-40644700-0ce4-4d02-91e7-520e2eaeb3ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Updated VIF entry in instance network info cache for port e18648ba-6112-40fa-85f6-bdf82a012079. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.582 2 DEBUG nova.network.neutron [req-aa484df5-3d62-4089-94a9-fe5eed9fd0c2 req-40644700-0ce4-4d02-91e7-520e2eaeb3ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Updating instance_info_cache with network_info: [{"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.600 2 DEBUG oslo_concurrency.lockutils [req-aa484df5-3d62-4089-94a9-fe5eed9fd0c2 req-40644700-0ce4-4d02-91e7-520e2eaeb3ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.604 2 DEBUG oslo_concurrency.lockutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Releasing lock "refresh_cache-16c93e17-00f2-4710-a0e4-83eb60430088" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.604 2 DEBUG nova.compute.manager [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Instance network_info: |[{"id": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "address": "fa:16:3e:f5:8d:59", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2aa7bfa4-d8", "ovs_interfaceid": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.605 2 DEBUG oslo_concurrency.lockutils [req-58d987b9-4efe-41e8-bcf3-37819e774ccf req-1cc5b6d0-6db9-49a3-be60-0a7eb2816c99 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-16c93e17-00f2-4710-a0e4-83eb60430088" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.605 2 DEBUG nova.network.neutron [req-58d987b9-4efe-41e8-bcf3-37819e774ccf req-1cc5b6d0-6db9-49a3-be60-0a7eb2816c99 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Refreshing network info cache for port 2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.607 2 DEBUG nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Start _get_guest_xml network_info=[{"id": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "address": "fa:16:3e:f5:8d:59", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2aa7bfa4-d8", "ovs_interfaceid": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.611 2 WARNING nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.615 2 DEBUG nova.virt.libvirt.host [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.616 2 DEBUG nova.virt.libvirt.host [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.619 2 DEBUG nova.virt.libvirt.host [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.619 2 DEBUG nova.virt.libvirt.host [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.620 2 DEBUG nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.620 2 DEBUG nova.virt.hardware [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.621 2 DEBUG nova.virt.hardware [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.621 2 DEBUG nova.virt.hardware [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.621 2 DEBUG nova.virt.hardware [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.622 2 DEBUG nova.virt.hardware [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.622 2 DEBUG nova.virt.hardware [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.622 2 DEBUG nova.virt.hardware [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.622 2 DEBUG nova.virt.hardware [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.623 2 DEBUG nova.virt.hardware [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.623 2 DEBUG nova.virt.hardware [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.623 2 DEBUG nova.virt.hardware [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.626 2 DEBUG oslo_concurrency.processutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:09:38 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2791695149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.670 2 DEBUG oslo_concurrency.processutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.692 2 DEBUG nova.storage.rbd_utils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:38 compute-0 nova_compute[259627]: 2025-10-14 09:09:38.697 2 DEBUG oslo_concurrency.processutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:38 compute-0 ceph-mon[74249]: pgmap v1651: 305 pgs: 305 active+clean; 308 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 2.6 MiB/s wr, 111 op/s
Oct 14 09:09:38 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2791695149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:09:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944600786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.060 2 DEBUG oslo_concurrency.processutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.087 2 DEBUG nova.storage.rbd_utils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image 16c93e17-00f2-4710-a0e4-83eb60430088_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.092 2 DEBUG oslo_concurrency.processutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.139 2 DEBUG nova.compute.manager [req-4f04821a-0d12-4faf-97ba-707d730bc8fc req-d8ba70f0-3b36-479e-9fb5-fd549ffa0814 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Received event network-vif-plugged-705e7559-ae3c-461c-be70-b8dee31808cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.140 2 DEBUG oslo_concurrency.lockutils [req-4f04821a-0d12-4faf-97ba-707d730bc8fc req-d8ba70f0-3b36-479e-9fb5-fd549ffa0814 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:09:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1620035789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.141 2 DEBUG oslo_concurrency.lockutils [req-4f04821a-0d12-4faf-97ba-707d730bc8fc req-d8ba70f0-3b36-479e-9fb5-fd549ffa0814 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.141 2 DEBUG oslo_concurrency.lockutils [req-4f04821a-0d12-4faf-97ba-707d730bc8fc req-d8ba70f0-3b36-479e-9fb5-fd549ffa0814 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.141 2 DEBUG nova.compute.manager [req-4f04821a-0d12-4faf-97ba-707d730bc8fc req-d8ba70f0-3b36-479e-9fb5-fd549ffa0814 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Processing event network-vif-plugged-705e7559-ae3c-461c-be70-b8dee31808cf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.142 2 DEBUG nova.compute.manager [req-4f04821a-0d12-4faf-97ba-707d730bc8fc req-d8ba70f0-3b36-479e-9fb5-fd549ffa0814 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Received event network-vif-plugged-705e7559-ae3c-461c-be70-b8dee31808cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.142 2 DEBUG oslo_concurrency.lockutils [req-4f04821a-0d12-4faf-97ba-707d730bc8fc req-d8ba70f0-3b36-479e-9fb5-fd549ffa0814 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.142 2 DEBUG oslo_concurrency.lockutils [req-4f04821a-0d12-4faf-97ba-707d730bc8fc req-d8ba70f0-3b36-479e-9fb5-fd549ffa0814 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.143 2 DEBUG oslo_concurrency.lockutils [req-4f04821a-0d12-4faf-97ba-707d730bc8fc req-d8ba70f0-3b36-479e-9fb5-fd549ffa0814 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.143 2 DEBUG nova.compute.manager [req-4f04821a-0d12-4faf-97ba-707d730bc8fc req-d8ba70f0-3b36-479e-9fb5-fd549ffa0814 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] No waiting events found dispatching network-vif-plugged-705e7559-ae3c-461c-be70-b8dee31808cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.144 2 WARNING nova.compute.manager [req-4f04821a-0d12-4faf-97ba-707d730bc8fc req-d8ba70f0-3b36-479e-9fb5-fd549ffa0814 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Received unexpected event network-vif-plugged-705e7559-ae3c-461c-be70-b8dee31808cf for instance with vm_state building and task_state spawning.
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.145 2 DEBUG nova.compute.manager [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.149 2 DEBUG nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.152 2 INFO nova.virt.libvirt.driver [-] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Instance spawned successfully.
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.153 2 DEBUG nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.156 2 DEBUG oslo_concurrency.processutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.158 2 DEBUG nova.virt.libvirt.vif [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1278548098',display_name='tempest-ServerActionsTestOtherB-server-1278548098',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1278548098',id=64,image_ref='afaa5b96-a65d-4840-9509-b25dadfeafa7',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1307751836',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:07:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='4bda6775f81f403e83269a5f798c9853',ramdisk_id='',reservation_id='r-j6ifs0px',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-381012378',owner_user_name='tempest-ServerActionsTestOtherB-381012378-project-member',shelved_at='2025-10-14T09:09:22.898515',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='afaa5b96-a65d-4840-9509-b25dadfeafa7'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:09:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='695c749a8dce4506a31e2cec4f02876b',uuid=e065d857-2df9-4199-aa98-41ca3c436bad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.158 2 DEBUG nova.network.os_vif_util [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converting VIF {"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.160 2 DEBUG nova.network.os_vif_util [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:9e:35,bridge_name='br-int',has_traffic_filtering=True,id=e18648ba-6112-40fa-85f6-bdf82a012079,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18648ba-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.161 2 DEBUG nova.objects.instance [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'pci_devices' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.163 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432979.1572485, dfb68a47-709d-40e3-8a17-01d9c3fb084b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.164 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] VM Resumed (Lifecycle Event)
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.179 2 DEBUG nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.179 2 DEBUG nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.180 2 DEBUG nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.180 2 DEBUG nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.181 2 DEBUG nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.182 2 DEBUG nova.virt.libvirt.driver [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.187 2 DEBUG nova.virt.libvirt.driver [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <uuid>e065d857-2df9-4199-aa98-41ca3c436bad</uuid>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <name>instance-00000040</name>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerActionsTestOtherB-server-1278548098</nova:name>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:09:38</nova:creationTime>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <nova:user uuid="695c749a8dce4506a31e2cec4f02876b">tempest-ServerActionsTestOtherB-381012378-project-member</nova:user>
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <nova:project uuid="4bda6775f81f403e83269a5f798c9853">tempest-ServerActionsTestOtherB-381012378</nova:project>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="afaa5b96-a65d-4840-9509-b25dadfeafa7"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <nova:port uuid="e18648ba-6112-40fa-85f6-bdf82a012079">
Oct 14 09:09:39 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <system>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <entry name="serial">e065d857-2df9-4199-aa98-41ca3c436bad</entry>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <entry name="uuid">e065d857-2df9-4199-aa98-41ca3c436bad</entry>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     </system>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <os>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   </os>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <features>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   </features>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e065d857-2df9-4199-aa98-41ca3c436bad_disk">
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       </source>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e065d857-2df9-4199-aa98-41ca3c436bad_disk.config">
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       </source>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:0b:9e:35"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <target dev="tape18648ba-61"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/console.log" append="off"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <video>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     </video>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <input type="keyboard" bus="usb"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:09:39 compute-0 nova_compute[259627]: </domain>
Oct 14 09:09:39 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.195 2 DEBUG nova.compute.manager [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Preparing to wait for external event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.195 2 DEBUG oslo_concurrency.lockutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.195 2 DEBUG oslo_concurrency.lockutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.196 2 DEBUG oslo_concurrency.lockutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.197 2 DEBUG nova.virt.libvirt.vif [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1278548098',display_name='tempest-ServerActionsTestOtherB-server-1278548098',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1278548098',id=64,image_ref='afaa5b96-a65d-4840-9509-b25dadfeafa7',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1307751836',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:07:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='4bda6775f81f403e83269a5f798c9853',ramdisk_id='',reservation_id='r-j6ifs0px',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-381012378',owner_user_name='tempest-ServerActionsTestOtherB-381012378-project-member',shelved_at='2025-10-14T09:09:22.898515',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='afaa5b96-a65d-4840-9509-b25dadfeafa7'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:09:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='695c749a8dce4506a31e2cec4f02876b',uuid=e065d857-2df9-4199-aa98-41ca3c436bad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.197 2 DEBUG nova.network.os_vif_util [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converting VIF {"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.198 2 DEBUG nova.network.os_vif_util [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:9e:35,bridge_name='br-int',has_traffic_filtering=True,id=e18648ba-6112-40fa-85f6-bdf82a012079,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18648ba-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.199 2 DEBUG os_vif [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:9e:35,bridge_name='br-int',has_traffic_filtering=True,id=e18648ba-6112-40fa-85f6-bdf82a012079,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18648ba-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.200 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.201 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.201 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.204 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape18648ba-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.204 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape18648ba-61, col_values=(('external_ids', {'iface-id': 'e18648ba-6112-40fa-85f6-bdf82a012079', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:9e:35', 'vm-uuid': 'e065d857-2df9-4199-aa98-41ca3c436bad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.205 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:39 compute-0 NetworkManager[44885]: <info>  [1760432979.2076] manager: (tape18648ba-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.217 2 INFO os_vif [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:9e:35,bridge_name='br-int',has_traffic_filtering=True,id=e18648ba-6112-40fa-85f6-bdf82a012079,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18648ba-61')
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.230 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.256 2 INFO nova.compute.manager [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Took 8.94 seconds to spawn the instance on the hypervisor.
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.256 2 DEBUG nova.compute.manager [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.294 2 DEBUG nova.virt.libvirt.driver [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.295 2 DEBUG nova.virt.libvirt.driver [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.295 2 DEBUG nova.virt.libvirt.driver [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No VIF found with MAC fa:16:3e:0b:9e:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.296 2 INFO nova.virt.libvirt.driver [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Using config drive
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.316 2 DEBUG nova.storage.rbd_utils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.330 2 INFO nova.compute.manager [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Took 9.94 seconds to build instance.
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.333 2 DEBUG nova.objects.instance [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.363 2 DEBUG oslo_concurrency.lockutils [None req-fcbc0c43-a8cd-4b4a-a244-e3eca63e639e 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.381 2 DEBUG nova.objects.instance [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'keypairs' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:09:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/774757120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.559 2 DEBUG oslo_concurrency.processutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.561 2 DEBUG nova.virt.libvirt.vif [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:09:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-963382784',display_name='tempest-ServerRescueTestJSON-server-963382784',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-963382784',id=73,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='51ae58236f6a432e93764d455a502033',ramdisk_id='',reservation_id='r-x8za0q3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-156376826',owner_user_name='tempest-ServerRescueTestJSON-156376826-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:09:34Z,user_data=None,user_id='7a268118aae14d449097f4a26371415e',uuid=16c93e17-00f2-4710-a0e4-83eb60430088,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "address": "fa:16:3e:f5:8d:59", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2aa7bfa4-d8", "ovs_interfaceid": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.562 2 DEBUG nova.network.os_vif_util [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Converting VIF {"id": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "address": "fa:16:3e:f5:8d:59", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2aa7bfa4-d8", "ovs_interfaceid": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.564 2 DEBUG nova.network.os_vif_util [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:8d:59,bridge_name='br-int',has_traffic_filtering=True,id=2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549,network=Network(fb52c397-b1e0-4244-ac38-60ca1e4abace),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2aa7bfa4-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.565 2 DEBUG nova.objects.instance [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'pci_devices' on Instance uuid 16c93e17-00f2-4710-a0e4-83eb60430088 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.584 2 DEBUG nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <uuid>16c93e17-00f2-4710-a0e4-83eb60430088</uuid>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <name>instance-00000049</name>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerRescueTestJSON-server-963382784</nova:name>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:09:38</nova:creationTime>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <nova:user uuid="7a268118aae14d449097f4a26371415e">tempest-ServerRescueTestJSON-156376826-project-member</nova:user>
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <nova:project uuid="51ae58236f6a432e93764d455a502033">tempest-ServerRescueTestJSON-156376826</nova:project>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <nova:port uuid="2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549">
Oct 14 09:09:39 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <system>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <entry name="serial">16c93e17-00f2-4710-a0e4-83eb60430088</entry>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <entry name="uuid">16c93e17-00f2-4710-a0e4-83eb60430088</entry>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     </system>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <os>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   </os>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <features>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   </features>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/16c93e17-00f2-4710-a0e4-83eb60430088_disk">
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       </source>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/16c93e17-00f2-4710-a0e4-83eb60430088_disk.config">
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       </source>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:09:39 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:f5:8d:59"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <target dev="tap2aa7bfa4-d8"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/16c93e17-00f2-4710-a0e4-83eb60430088/console.log" append="off"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <video>
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     </video>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:09:39 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:09:39 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:09:39 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:09:39 compute-0 nova_compute[259627]: </domain>
Oct 14 09:09:39 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.592 2 DEBUG nova.compute.manager [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Preparing to wait for external event network-vif-plugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.592 2 DEBUG oslo_concurrency.lockutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.592 2 DEBUG oslo_concurrency.lockutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.593 2 DEBUG oslo_concurrency.lockutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.593 2 DEBUG nova.virt.libvirt.vif [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:09:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-963382784',display_name='tempest-ServerRescueTestJSON-server-963382784',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-963382784',id=73,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='51ae58236f6a432e93764d455a502033',ramdisk_id='',reservation_id='r-x8za0q3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-156376826',owner_user_name='tempest-ServerRescueTestJSON-1563
76826-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:09:34Z,user_data=None,user_id='7a268118aae14d449097f4a26371415e',uuid=16c93e17-00f2-4710-a0e4-83eb60430088,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "address": "fa:16:3e:f5:8d:59", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2aa7bfa4-d8", "ovs_interfaceid": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.594 2 DEBUG nova.network.os_vif_util [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Converting VIF {"id": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "address": "fa:16:3e:f5:8d:59", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2aa7bfa4-d8", "ovs_interfaceid": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.594 2 DEBUG nova.network.os_vif_util [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:8d:59,bridge_name='br-int',has_traffic_filtering=True,id=2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549,network=Network(fb52c397-b1e0-4244-ac38-60ca1e4abace),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2aa7bfa4-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.594 2 DEBUG os_vif [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:8d:59,bridge_name='br-int',has_traffic_filtering=True,id=2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549,network=Network(fb52c397-b1e0-4244-ac38-60ca1e4abace),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2aa7bfa4-d8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.595 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.595 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2aa7bfa4-d8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2aa7bfa4-d8, col_values=(('external_ids', {'iface-id': '2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:8d:59', 'vm-uuid': '16c93e17-00f2-4710-a0e4-83eb60430088'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:39 compute-0 NetworkManager[44885]: <info>  [1760432979.6036] manager: (tap2aa7bfa4-d8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/313)
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.614 2 INFO os_vif [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:8d:59,bridge_name='br-int',has_traffic_filtering=True,id=2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549,network=Network(fb52c397-b1e0-4244-ac38-60ca1e4abace),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2aa7bfa4-d8')
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.663 2 DEBUG nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.664 2 DEBUG nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.664 2 DEBUG nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] No VIF found with MAC fa:16:3e:f5:8d:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.664 2 INFO nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Using config drive
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.688 2 DEBUG nova.storage.rbd_utils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image 16c93e17-00f2-4710-a0e4-83eb60430088_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 308 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.846 2 INFO nova.virt.libvirt.driver [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Creating config drive at /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/disk.config
Oct 14 09:09:39 compute-0 nova_compute[259627]: 2025-10-14 09:09:39.852 2 DEBUG oslo_concurrency.processutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp7llpk50 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:39 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2944600786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:39 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1620035789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:39 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/774757120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.020 2 DEBUG oslo_concurrency.processutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp7llpk50" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.048 2 DEBUG nova.storage.rbd_utils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.052 2 DEBUG oslo_concurrency.processutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/disk.config e065d857-2df9-4199-aa98-41ca3c436bad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.201 2 INFO nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Creating config drive at /var/lib/nova/instances/16c93e17-00f2-4710-a0e4-83eb60430088/disk.config
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.212 2 DEBUG oslo_concurrency.processutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/16c93e17-00f2-4710-a0e4-83eb60430088/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe8zwum9x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.273 2 DEBUG oslo_concurrency.processutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/disk.config e065d857-2df9-4199-aa98-41ca3c436bad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.274 2 INFO nova.virt.libvirt.driver [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Deleting local config drive /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/disk.config because it was imported into RBD.
Oct 14 09:09:40 compute-0 NetworkManager[44885]: <info>  [1760432980.3350] manager: (tape18648ba-61): new Tun device (/org/freedesktop/NetworkManager/Devices/314)
Oct 14 09:09:40 compute-0 kernel: tape18648ba-61: entered promiscuous mode
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.382 2 DEBUG oslo_concurrency.processutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/16c93e17-00f2-4710-a0e4-83eb60430088/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe8zwum9x" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:40 compute-0 ovn_controller[152662]: 2025-10-14T09:09:40Z|00733|binding|INFO|Claiming lport e18648ba-6112-40fa-85f6-bdf82a012079 for this chassis.
Oct 14 09:09:40 compute-0 ovn_controller[152662]: 2025-10-14T09:09:40Z|00734|binding|INFO|e18648ba-6112-40fa-85f6-bdf82a012079: Claiming fa:16:3e:0b:9e:35 10.100.0.9
Oct 14 09:09:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:40.391 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:9e:35 10.100.0.9'], port_security=['fa:16:3e:0b:9e:35 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e065d857-2df9-4199-aa98-41ca3c436bad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bda6775f81f403e83269a5f798c9853', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'baab55cf-9843-49b9-a43b-28ca1ab122c2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88e90b59-4c4c-42c1-a4ed-574ac64367e5, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e18648ba-6112-40fa-85f6-bdf82a012079) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:09:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:40.394 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e18648ba-6112-40fa-85f6-bdf82a012079 in datapath 9d540b01-e9c4-4dc5-9a51-94512ad9a409 bound to our chassis
Oct 14 09:09:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:40.396 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d540b01-e9c4-4dc5-9a51-94512ad9a409
Oct 14 09:09:40 compute-0 systemd-machined[214636]: New machine qemu-88-instance-00000040.
Oct 14 09:09:40 compute-0 ovn_controller[152662]: 2025-10-14T09:09:40Z|00735|binding|INFO|Setting lport e18648ba-6112-40fa-85f6-bdf82a012079 ovn-installed in OVS
Oct 14 09:09:40 compute-0 ovn_controller[152662]: 2025-10-14T09:09:40Z|00736|binding|INFO|Setting lport e18648ba-6112-40fa-85f6-bdf82a012079 up in Southbound
Oct 14 09:09:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:40.415 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f25eb00b-e84f-4f04-89f5-fa509fdf03a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:40 compute-0 systemd[1]: Started Virtual Machine qemu-88-instance-00000040.
Oct 14 09:09:40 compute-0 systemd-udevd[336207]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.440 2 DEBUG nova.storage.rbd_utils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image 16c93e17-00f2-4710-a0e4-83eb60430088_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:40 compute-0 NetworkManager[44885]: <info>  [1760432980.4451] device (tape18648ba-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:09:40 compute-0 NetworkManager[44885]: <info>  [1760432980.4462] device (tape18648ba-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.462 2 DEBUG oslo_concurrency.processutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/16c93e17-00f2-4710-a0e4-83eb60430088/disk.config 16c93e17-00f2-4710-a0e4-83eb60430088_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:40.467 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cfdfe373-ad8b-42cb-84ff-cbe3320b8afb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:40.470 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8b6c94c8-5407-4591-a61a-14ee80dd4616]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:40.502 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[61e5a4b6-b8bf-4006-b1c3-befefcbf7ece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:40.526 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8744ce74-8d1d-4c1c-9937-8c94236bcc99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d540b01-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:6a:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662158, 'reachable_time': 22381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336224, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.536 2 DEBUG nova.network.neutron [req-58d987b9-4efe-41e8-bcf3-37819e774ccf req-1cc5b6d0-6db9-49a3-be60-0a7eb2816c99 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Updated VIF entry in instance network info cache for port 2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.537 2 DEBUG nova.network.neutron [req-58d987b9-4efe-41e8-bcf3-37819e774ccf req-1cc5b6d0-6db9-49a3-be60-0a7eb2816c99 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Updating instance_info_cache with network_info: [{"id": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "address": "fa:16:3e:f5:8d:59", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2aa7bfa4-d8", "ovs_interfaceid": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:09:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:40.544 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[236b6d9c-e82f-4ee8-aa59-843ef0e03f09]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9d540b01-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662169, 'tstamp': 662169}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336225, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9d540b01-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662173, 'tstamp': 662173}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336225, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:40.547 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d540b01-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.554 2 DEBUG oslo_concurrency.lockutils [req-58d987b9-4efe-41e8-bcf3-37819e774ccf req-1cc5b6d0-6db9-49a3-be60-0a7eb2816c99 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-16c93e17-00f2-4710-a0e4-83eb60430088" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:40.555 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d540b01-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:40.555 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:09:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:40.556 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d540b01-e0, col_values=(('external_ids', {'iface-id': 'fcca615a-5470-4880-844d-73adc425bce1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:40.556 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.620 2 DEBUG oslo_concurrency.processutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/16c93e17-00f2-4710-a0e4-83eb60430088/disk.config 16c93e17-00f2-4710-a0e4-83eb60430088_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.621 2 INFO nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Deleting local config drive /var/lib/nova/instances/16c93e17-00f2-4710-a0e4-83eb60430088/disk.config because it was imported into RBD.
Oct 14 09:09:40 compute-0 NetworkManager[44885]: <info>  [1760432980.6598] manager: (tap2aa7bfa4-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/315)
Oct 14 09:09:40 compute-0 kernel: tap2aa7bfa4-d8: entered promiscuous mode
Oct 14 09:09:40 compute-0 ovn_controller[152662]: 2025-10-14T09:09:40Z|00737|binding|INFO|Claiming lport 2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 for this chassis.
Oct 14 09:09:40 compute-0 ovn_controller[152662]: 2025-10-14T09:09:40Z|00738|binding|INFO|2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549: Claiming fa:16:3e:f5:8d:59 10.100.0.4
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:40 compute-0 NetworkManager[44885]: <info>  [1760432980.6731] device (tap2aa7bfa4-d8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:09:40 compute-0 NetworkManager[44885]: <info>  [1760432980.6746] device (tap2aa7bfa4-d8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:09:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:40.676 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:8d:59 10.100.0.4'], port_security=['fa:16:3e:f5:8d:59 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '16c93e17-00f2-4710-a0e4-83eb60430088', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb52c397-b1e0-4244-ac38-60ca1e4abace', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51ae58236f6a432e93764d455a502033', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fcb895cc-6512-4f14-be87-dd3d6289c2b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=400ed612-fb0d-4559-97c6-466a993e3d60, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:09:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:40.677 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 in datapath fb52c397-b1e0-4244-ac38-60ca1e4abace bound to our chassis
Oct 14 09:09:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:40.678 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fb52c397-b1e0-4244-ac38-60ca1e4abace or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:09:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:40.679 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[134fb283-ecee-419d-94d7-fefaeefc827b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:40 compute-0 ovn_controller[152662]: 2025-10-14T09:09:40Z|00739|binding|INFO|Setting lport 2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 ovn-installed in OVS
Oct 14 09:09:40 compute-0 ovn_controller[152662]: 2025-10-14T09:09:40Z|00740|binding|INFO|Setting lport 2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 up in Southbound
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:40 compute-0 systemd-machined[214636]: New machine qemu-89-instance-00000049.
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:40 compute-0 systemd[1]: Started Virtual Machine qemu-89-instance-00000049.
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.939 2 DEBUG nova.compute.manager [req-12c0d771-3926-4034-ba84-12b53ebd0bbd req-130a3c06-a9a3-4ef4-b735-8ead45797801 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Received event network-vif-plugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.939 2 DEBUG oslo_concurrency.lockutils [req-12c0d771-3926-4034-ba84-12b53ebd0bbd req-130a3c06-a9a3-4ef4-b735-8ead45797801 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.939 2 DEBUG oslo_concurrency.lockutils [req-12c0d771-3926-4034-ba84-12b53ebd0bbd req-130a3c06-a9a3-4ef4-b735-8ead45797801 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.939 2 DEBUG oslo_concurrency.lockutils [req-12c0d771-3926-4034-ba84-12b53ebd0bbd req-130a3c06-a9a3-4ef4-b735-8ead45797801 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:40 compute-0 nova_compute[259627]: 2025-10-14 09:09:40.939 2 DEBUG nova.compute.manager [req-12c0d771-3926-4034-ba84-12b53ebd0bbd req-130a3c06-a9a3-4ef4-b735-8ead45797801 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Processing event network-vif-plugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:09:40 compute-0 ceph-mon[74249]: pgmap v1652: 305 pgs: 305 active+clean; 308 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.118 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432966.1178534, e9ccbc44-715e-4419-9286-f0cb6e41d9cd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.119 2 INFO nova.compute.manager [-] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] VM Stopped (Lifecycle Event)
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.160 2 DEBUG nova.compute.manager [None req-1626f7cc-7b5f-4a6d-8306-2750475a82b2 - - - - - -] [instance: e9ccbc44-715e-4419-9286-f0cb6e41d9cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.175 2 DEBUG nova.compute.manager [req-0ea65429-0cd4-46bc-8e08-3083602972ee req-e6aa0162-a230-4c8f-84d7-cb058bd326d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.176 2 DEBUG oslo_concurrency.lockutils [req-0ea65429-0cd4-46bc-8e08-3083602972ee req-e6aa0162-a230-4c8f-84d7-cb058bd326d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.176 2 DEBUG oslo_concurrency.lockutils [req-0ea65429-0cd4-46bc-8e08-3083602972ee req-e6aa0162-a230-4c8f-84d7-cb058bd326d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.176 2 DEBUG oslo_concurrency.lockutils [req-0ea65429-0cd4-46bc-8e08-3083602972ee req-e6aa0162-a230-4c8f-84d7-cb058bd326d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.176 2 DEBUG nova.compute.manager [req-0ea65429-0cd4-46bc-8e08-3083602972ee req-e6aa0162-a230-4c8f-84d7-cb058bd326d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Processing event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.177 2 DEBUG nova.compute.manager [req-0ea65429-0cd4-46bc-8e08-3083602972ee req-e6aa0162-a230-4c8f-84d7-cb058bd326d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.177 2 DEBUG oslo_concurrency.lockutils [req-0ea65429-0cd4-46bc-8e08-3083602972ee req-e6aa0162-a230-4c8f-84d7-cb058bd326d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.177 2 DEBUG oslo_concurrency.lockutils [req-0ea65429-0cd4-46bc-8e08-3083602972ee req-e6aa0162-a230-4c8f-84d7-cb058bd326d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.178 2 DEBUG oslo_concurrency.lockutils [req-0ea65429-0cd4-46bc-8e08-3083602972ee req-e6aa0162-a230-4c8f-84d7-cb058bd326d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.178 2 DEBUG nova.compute.manager [req-0ea65429-0cd4-46bc-8e08-3083602972ee req-e6aa0162-a230-4c8f-84d7-cb058bd326d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] No waiting events found dispatching network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.178 2 WARNING nova.compute.manager [req-0ea65429-0cd4-46bc-8e08-3083602972ee req-e6aa0162-a230-4c8f-84d7-cb058bd326d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received unexpected event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 for instance with vm_state shelved_offloaded and task_state spawning.
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.535 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432981.534891, 16c93e17-00f2-4710-a0e4-83eb60430088 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.535 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] VM Started (Lifecycle Event)
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.537 2 DEBUG nova.compute.manager [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.540 2 DEBUG nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.544 2 INFO nova.virt.libvirt.driver [-] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Instance spawned successfully.
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.544 2 DEBUG nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.558 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.562 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.568 2 DEBUG nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.568 2 DEBUG nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.569 2 DEBUG nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.570 2 DEBUG nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.570 2 DEBUG nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.571 2 DEBUG nova.virt.libvirt.driver [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.581 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.581 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432981.5360813, 16c93e17-00f2-4710-a0e4-83eb60430088 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.582 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] VM Paused (Lifecycle Event)
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.603 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.607 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432981.539457, 16c93e17-00f2-4710-a0e4-83eb60430088 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.607 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] VM Resumed (Lifecycle Event)
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.659 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.661 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.672 2 INFO nova.compute.manager [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Took 6.71 seconds to spawn the instance on the hypervisor.
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.673 2 DEBUG nova.compute.manager [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.684 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.743 2 INFO nova.compute.manager [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Took 7.82 seconds to build instance.
Oct 14 09:09:41 compute-0 nova_compute[259627]: 2025-10-14 09:09:41.764 2 DEBUG oslo_concurrency.lockutils [None req-01f2ba4d-83b3-4931-a2dd-7d2fea4962ab 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 418 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 7.5 MiB/s wr, 244 op/s
Oct 14 09:09:42 compute-0 nova_compute[259627]: 2025-10-14 09:09:42.085 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432982.0849123, e065d857-2df9-4199-aa98-41ca3c436bad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:42 compute-0 nova_compute[259627]: 2025-10-14 09:09:42.085 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] VM Started (Lifecycle Event)
Oct 14 09:09:42 compute-0 nova_compute[259627]: 2025-10-14 09:09:42.087 2 DEBUG nova.compute.manager [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:09:42 compute-0 nova_compute[259627]: 2025-10-14 09:09:42.091 2 DEBUG nova.virt.libvirt.driver [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:09:42 compute-0 nova_compute[259627]: 2025-10-14 09:09:42.095 2 INFO nova.virt.libvirt.driver [-] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Instance spawned successfully.
Oct 14 09:09:42 compute-0 nova_compute[259627]: 2025-10-14 09:09:42.116 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:42 compute-0 nova_compute[259627]: 2025-10-14 09:09:42.120 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:09:42 compute-0 nova_compute[259627]: 2025-10-14 09:09:42.141 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:09:42 compute-0 nova_compute[259627]: 2025-10-14 09:09:42.142 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432982.0855193, e065d857-2df9-4199-aa98-41ca3c436bad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:42 compute-0 nova_compute[259627]: 2025-10-14 09:09:42.142 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] VM Paused (Lifecycle Event)
Oct 14 09:09:42 compute-0 nova_compute[259627]: 2025-10-14 09:09:42.172 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:42 compute-0 nova_compute[259627]: 2025-10-14 09:09:42.175 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432982.0911224, e065d857-2df9-4199-aa98-41ca3c436bad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:09:42 compute-0 nova_compute[259627]: 2025-10-14 09:09:42.175 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] VM Resumed (Lifecycle Event)
Oct 14 09:09:42 compute-0 nova_compute[259627]: 2025-10-14 09:09:42.195 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:42 compute-0 nova_compute[259627]: 2025-10-14 09:09:42.199 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:09:42 compute-0 nova_compute[259627]: 2025-10-14 09:09:42.224 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:09:42 compute-0 nova_compute[259627]: 2025-10-14 09:09:42.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:42 compute-0 podman[336349]: 2025-10-14 09:09:42.677251963 +0000 UTC m=+0.092832717 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009)
Oct 14 09:09:42 compute-0 podman[336348]: 2025-10-14 09:09:42.694878897 +0000 UTC m=+0.099218064 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:09:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:09:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Oct 14 09:09:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Oct 14 09:09:42 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Oct 14 09:09:42 compute-0 ceph-mon[74249]: pgmap v1653: 305 pgs: 305 active+clean; 418 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 7.5 MiB/s wr, 244 op/s
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022110489109201017 of space, bias 1.0, pg target 0.6633146732760306 quantized to 32 (current 32)
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001770365305723774 of space, bias 1.0, pg target 0.5311095917171322 quantized to 32 (current 32)
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.058 2 INFO nova.compute.manager [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Rebuilding instance
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.376 2 DEBUG nova.compute.manager [req-b0ea00df-09d8-4416-bd13-f0789099c1bf req-5ac2c0a1-a273-49e5-a5f6-3b37195fc942 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Received event network-vif-plugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.377 2 DEBUG oslo_concurrency.lockutils [req-b0ea00df-09d8-4416-bd13-f0789099c1bf req-5ac2c0a1-a273-49e5-a5f6-3b37195fc942 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.377 2 DEBUG oslo_concurrency.lockutils [req-b0ea00df-09d8-4416-bd13-f0789099c1bf req-5ac2c0a1-a273-49e5-a5f6-3b37195fc942 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.377 2 DEBUG oslo_concurrency.lockutils [req-b0ea00df-09d8-4416-bd13-f0789099c1bf req-5ac2c0a1-a273-49e5-a5f6-3b37195fc942 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.378 2 DEBUG nova.compute.manager [req-b0ea00df-09d8-4416-bd13-f0789099c1bf req-5ac2c0a1-a273-49e5-a5f6-3b37195fc942 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] No waiting events found dispatching network-vif-plugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.378 2 WARNING nova.compute.manager [req-b0ea00df-09d8-4416-bd13-f0789099c1bf req-5ac2c0a1-a273-49e5-a5f6-3b37195fc942 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Received unexpected event network-vif-plugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 for instance with vm_state active and task_state None.
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.414 2 DEBUG nova.objects.instance [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'trusted_certs' on Instance uuid dfb68a47-709d-40e3-8a17-01d9c3fb084b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.430 2 DEBUG nova.compute.manager [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.479 2 DEBUG nova.compute.manager [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.509 2 DEBUG nova.objects.instance [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'pci_requests' on Instance uuid dfb68a47-709d-40e3-8a17-01d9c3fb084b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.528 2 DEBUG nova.objects.instance [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'pci_devices' on Instance uuid dfb68a47-709d-40e3-8a17-01d9c3fb084b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.542 2 DEBUG nova.objects.instance [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'resources' on Instance uuid dfb68a47-709d-40e3-8a17-01d9c3fb084b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.553 2 DEBUG oslo_concurrency.lockutils [None req-eab34a06-6010-4f66-89ad-70b58af6c2ee 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 11.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.555 2 DEBUG nova.objects.instance [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'migration_context' on Instance uuid dfb68a47-709d-40e3-8a17-01d9c3fb084b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.572 2 DEBUG nova.objects.instance [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.574 2 INFO nova.compute.manager [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Rescuing
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.575 2 DEBUG oslo_concurrency.lockutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "refresh_cache-16c93e17-00f2-4710-a0e4-83eb60430088" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.575 2 DEBUG oslo_concurrency.lockutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquired lock "refresh_cache-16c93e17-00f2-4710-a0e4-83eb60430088" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.576 2 DEBUG nova.network.neutron [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:09:43 compute-0 nova_compute[259627]: 2025-10-14 09:09:43.582 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:09:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 418 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 8.9 MiB/s wr, 227 op/s
Oct 14 09:09:43 compute-0 ceph-mon[74249]: osdmap e249: 3 total, 3 up, 3 in
Oct 14 09:09:44 compute-0 nova_compute[259627]: 2025-10-14 09:09:44.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:44 compute-0 nova_compute[259627]: 2025-10-14 09:09:44.972 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:09:44 compute-0 ceph-mon[74249]: pgmap v1655: 305 pgs: 305 active+clean; 418 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 8.9 MiB/s wr, 227 op/s
Oct 14 09:09:44 compute-0 nova_compute[259627]: 2025-10-14 09:09:44.985 2 DEBUG nova.network.neutron [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Updating instance_info_cache with network_info: [{"id": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "address": "fa:16:3e:f5:8d:59", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2aa7bfa4-d8", "ovs_interfaceid": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:09:45 compute-0 nova_compute[259627]: 2025-10-14 09:09:45.010 2 DEBUG oslo_concurrency.lockutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Releasing lock "refresh_cache-16c93e17-00f2-4710-a0e4-83eb60430088" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:09:45 compute-0 nova_compute[259627]: 2025-10-14 09:09:45.312 2 DEBUG nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:09:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 339 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.3 MiB/s wr, 403 op/s
Oct 14 09:09:46 compute-0 ceph-mon[74249]: pgmap v1656: 305 pgs: 305 active+clean; 339 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.3 MiB/s wr, 403 op/s
Oct 14 09:09:47 compute-0 nova_compute[259627]: 2025-10-14 09:09:47.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:09:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Oct 14 09:09:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Oct 14 09:09:47 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Oct 14 09:09:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 339 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 7.9 MiB/s wr, 504 op/s
Oct 14 09:09:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Oct 14 09:09:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Oct 14 09:09:48 compute-0 ceph-mon[74249]: osdmap e250: 3 total, 3 up, 3 in
Oct 14 09:09:48 compute-0 ceph-mon[74249]: pgmap v1658: 305 pgs: 305 active+clean; 339 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 7.9 MiB/s wr, 504 op/s
Oct 14 09:09:48 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Oct 14 09:09:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:49.179 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:09:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:49.179 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:09:49 compute-0 nova_compute[259627]: 2025-10-14 09:09:49.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:49 compute-0 nova_compute[259627]: 2025-10-14 09:09:49.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:49 compute-0 ceph-mon[74249]: osdmap e251: 3 total, 3 up, 3 in
Oct 14 09:09:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 339 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 49 KiB/s wr, 323 op/s
Oct 14 09:09:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:50.181 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.695 2 DEBUG oslo_concurrency.lockutils [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "7167ef21-b041-47b9-8d93-55b5853f4d01" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.696 2 DEBUG oslo_concurrency.lockutils [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "7167ef21-b041-47b9-8d93-55b5853f4d01" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.696 2 DEBUG oslo_concurrency.lockutils [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "7167ef21-b041-47b9-8d93-55b5853f4d01-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.696 2 DEBUG oslo_concurrency.lockutils [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "7167ef21-b041-47b9-8d93-55b5853f4d01-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.696 2 DEBUG oslo_concurrency.lockutils [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "7167ef21-b041-47b9-8d93-55b5853f4d01-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.697 2 INFO nova.compute.manager [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Terminating instance
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.698 2 DEBUG nova.compute.manager [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:09:50 compute-0 kernel: tap6383be3b-43 (unregistering): left promiscuous mode
Oct 14 09:09:50 compute-0 NetworkManager[44885]: <info>  [1760432990.7549] device (tap6383be3b-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:09:50 compute-0 ceph-mon[74249]: pgmap v1660: 305 pgs: 305 active+clean; 339 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 49 KiB/s wr, 323 op/s
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:50 compute-0 ovn_controller[152662]: 2025-10-14T09:09:50Z|00741|binding|INFO|Releasing lport 6383be3b-4378-4c3f-a2be-af1c6ec32afb from this chassis (sb_readonly=0)
Oct 14 09:09:50 compute-0 ovn_controller[152662]: 2025-10-14T09:09:50Z|00742|binding|INFO|Setting lport 6383be3b-4378-4c3f-a2be-af1c6ec32afb down in Southbound
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:50 compute-0 ovn_controller[152662]: 2025-10-14T09:09:50Z|00743|binding|INFO|Removing iface tap6383be3b-43 ovn-installed in OVS
Oct 14 09:09:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:50.777 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:e8:10 10.100.0.7'], port_security=['fa:16:3e:01:e8:10 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7167ef21-b041-47b9-8d93-55b5853f4d01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bda6775f81f403e83269a5f798c9853', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd4ffd682-0c28-40f2-a6f1-d3d67aecef45', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88e90b59-4c4c-42c1-a4ed-574ac64367e5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=6383be3b-4378-4c3f-a2be-af1c6ec32afb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:09:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:50.779 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 6383be3b-4378-4c3f-a2be-af1c6ec32afb in datapath 9d540b01-e9c4-4dc5-9a51-94512ad9a409 unbound from our chassis
Oct 14 09:09:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:50.782 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d540b01-e9c4-4dc5-9a51-94512ad9a409
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:50.796 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3c320738-47d7-4d67-a560-ef566c0d4adf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:50 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000044.scope: Deactivated successfully.
Oct 14 09:09:50 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000044.scope: Consumed 15.772s CPU time.
Oct 14 09:09:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:50.843 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[70baeccc-807d-4c15-906f-4bf8a19627d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:50 compute-0 systemd-machined[214636]: Machine qemu-82-instance-00000044 terminated.
Oct 14 09:09:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:50.851 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[61c9e16c-f1c5-4754-b8a9-75705b6e4130]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:50.889 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2861eb29-97cb-4c4b-b893-d53584c9b85b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:50.913 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec937f4-3711-405b-adfb-0d4df5e9964d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d540b01-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:6a:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 1000, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 1000, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662158, 'reachable_time': 22381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336395, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.934 2 INFO nova.virt.libvirt.driver [-] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Instance destroyed successfully.
Oct 14 09:09:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:50.933 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eb779cd0-96b9-4a54-857e-99cfb0198ce6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9d540b01-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662169, 'tstamp': 662169}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336403, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9d540b01-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662173, 'tstamp': 662173}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336403, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.935 2 DEBUG nova.objects.instance [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'resources' on Instance uuid 7167ef21-b041-47b9-8d93-55b5853f4d01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:50.937 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d540b01-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.950 2 DEBUG nova.virt.libvirt.vif [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:08:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-741371027',display_name='tempest-ServerActionsTestOtherB-server-741371027',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-741371027',id=68,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:08:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4bda6775f81f403e83269a5f798c9853',ramdisk_id='',reservation_id='r-xivhqpt7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-381012378',owner_user_name='tempest-ServerActionsTestOtherB-381012378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:08:30Z,user_data=None,user_id='695c749a8dce4506a31e2cec4f02876b',uuid=7167ef21-b041-47b9-8d93-55b5853f4d01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "address": "fa:16:3e:01:e8:10", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6383be3b-43", "ovs_interfaceid": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.951 2 DEBUG nova.network.os_vif_util [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converting VIF {"id": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "address": "fa:16:3e:01:e8:10", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6383be3b-43", "ovs_interfaceid": "6383be3b-4378-4c3f-a2be-af1c6ec32afb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.952 2 DEBUG nova.network.os_vif_util [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:e8:10,bridge_name='br-int',has_traffic_filtering=True,id=6383be3b-4378-4c3f-a2be-af1c6ec32afb,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6383be3b-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.952 2 DEBUG os_vif [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:e8:10,bridge_name='br-int',has_traffic_filtering=True,id=6383be3b-4378-4c3f-a2be-af1c6ec32afb,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6383be3b-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.954 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6383be3b-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:50.965 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d540b01-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:50.965 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:09:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:50.965 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d540b01-e0, col_values=(('external_ids', {'iface-id': 'fcca615a-5470-4880-844d-73adc425bce1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:50.966 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:09:50 compute-0 nova_compute[259627]: 2025-10-14 09:09:50.968 2 INFO os_vif [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:e8:10,bridge_name='br-int',has_traffic_filtering=True,id=6383be3b-4378-4c3f-a2be-af1c6ec32afb,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6383be3b-43')
Oct 14 09:09:51 compute-0 nova_compute[259627]: 2025-10-14 09:09:51.385 2 INFO nova.virt.libvirt.driver [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Deleting instance files /var/lib/nova/instances/7167ef21-b041-47b9-8d93-55b5853f4d01_del
Oct 14 09:09:51 compute-0 nova_compute[259627]: 2025-10-14 09:09:51.386 2 INFO nova.virt.libvirt.driver [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Deletion of /var/lib/nova/instances/7167ef21-b041-47b9-8d93-55b5853f4d01_del complete
Oct 14 09:09:51 compute-0 nova_compute[259627]: 2025-10-14 09:09:51.463 2 INFO nova.compute.manager [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Took 0.76 seconds to destroy the instance on the hypervisor.
Oct 14 09:09:51 compute-0 nova_compute[259627]: 2025-10-14 09:09:51.464 2 DEBUG oslo.service.loopingcall [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:09:51 compute-0 nova_compute[259627]: 2025-10-14 09:09:51.464 2 DEBUG nova.compute.manager [-] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:09:51 compute-0 nova_compute[259627]: 2025-10-14 09:09:51.464 2 DEBUG nova.network.neutron [-] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:09:51 compute-0 ovn_controller[152662]: 2025-10-14T09:09:51Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:60:ea:f5 10.100.0.11
Oct 14 09:09:51 compute-0 ovn_controller[152662]: 2025-10-14T09:09:51Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:60:ea:f5 10.100.0.11
Oct 14 09:09:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 293 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 43 KiB/s wr, 311 op/s
Oct 14 09:09:52 compute-0 nova_compute[259627]: 2025-10-14 09:09:52.396 2 DEBUG nova.compute.manager [req-ef558a44-de28-42b2-8f55-c11e4173773d req-03b9f066-0636-4580-b8f7-6647481bb12f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Received event network-vif-unplugged-6383be3b-4378-4c3f-a2be-af1c6ec32afb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:52 compute-0 nova_compute[259627]: 2025-10-14 09:09:52.398 2 DEBUG oslo_concurrency.lockutils [req-ef558a44-de28-42b2-8f55-c11e4173773d req-03b9f066-0636-4580-b8f7-6647481bb12f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "7167ef21-b041-47b9-8d93-55b5853f4d01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:52 compute-0 nova_compute[259627]: 2025-10-14 09:09:52.398 2 DEBUG oslo_concurrency.lockutils [req-ef558a44-de28-42b2-8f55-c11e4173773d req-03b9f066-0636-4580-b8f7-6647481bb12f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7167ef21-b041-47b9-8d93-55b5853f4d01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:52 compute-0 nova_compute[259627]: 2025-10-14 09:09:52.399 2 DEBUG oslo_concurrency.lockutils [req-ef558a44-de28-42b2-8f55-c11e4173773d req-03b9f066-0636-4580-b8f7-6647481bb12f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7167ef21-b041-47b9-8d93-55b5853f4d01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:52 compute-0 nova_compute[259627]: 2025-10-14 09:09:52.399 2 DEBUG nova.compute.manager [req-ef558a44-de28-42b2-8f55-c11e4173773d req-03b9f066-0636-4580-b8f7-6647481bb12f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] No waiting events found dispatching network-vif-unplugged-6383be3b-4378-4c3f-a2be-af1c6ec32afb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:52 compute-0 nova_compute[259627]: 2025-10-14 09:09:52.399 2 DEBUG nova.compute.manager [req-ef558a44-de28-42b2-8f55-c11e4173773d req-03b9f066-0636-4580-b8f7-6647481bb12f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Received event network-vif-unplugged-6383be3b-4378-4c3f-a2be-af1c6ec32afb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:09:52 compute-0 nova_compute[259627]: 2025-10-14 09:09:52.400 2 DEBUG nova.compute.manager [req-ef558a44-de28-42b2-8f55-c11e4173773d req-03b9f066-0636-4580-b8f7-6647481bb12f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Received event network-vif-plugged-6383be3b-4378-4c3f-a2be-af1c6ec32afb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:52 compute-0 nova_compute[259627]: 2025-10-14 09:09:52.400 2 DEBUG oslo_concurrency.lockutils [req-ef558a44-de28-42b2-8f55-c11e4173773d req-03b9f066-0636-4580-b8f7-6647481bb12f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "7167ef21-b041-47b9-8d93-55b5853f4d01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:52 compute-0 nova_compute[259627]: 2025-10-14 09:09:52.401 2 DEBUG oslo_concurrency.lockutils [req-ef558a44-de28-42b2-8f55-c11e4173773d req-03b9f066-0636-4580-b8f7-6647481bb12f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7167ef21-b041-47b9-8d93-55b5853f4d01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:52 compute-0 nova_compute[259627]: 2025-10-14 09:09:52.401 2 DEBUG oslo_concurrency.lockutils [req-ef558a44-de28-42b2-8f55-c11e4173773d req-03b9f066-0636-4580-b8f7-6647481bb12f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7167ef21-b041-47b9-8d93-55b5853f4d01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:52 compute-0 nova_compute[259627]: 2025-10-14 09:09:52.401 2 DEBUG nova.compute.manager [req-ef558a44-de28-42b2-8f55-c11e4173773d req-03b9f066-0636-4580-b8f7-6647481bb12f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] No waiting events found dispatching network-vif-plugged-6383be3b-4378-4c3f-a2be-af1c6ec32afb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:52 compute-0 nova_compute[259627]: 2025-10-14 09:09:52.402 2 WARNING nova.compute.manager [req-ef558a44-de28-42b2-8f55-c11e4173773d req-03b9f066-0636-4580-b8f7-6647481bb12f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Received unexpected event network-vif-plugged-6383be3b-4378-4c3f-a2be-af1c6ec32afb for instance with vm_state active and task_state deleting.
Oct 14 09:09:52 compute-0 nova_compute[259627]: 2025-10-14 09:09:52.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:52 compute-0 nova_compute[259627]: 2025-10-14 09:09:52.718 2 DEBUG nova.network.neutron [-] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:09:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:09:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Oct 14 09:09:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Oct 14 09:09:52 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Oct 14 09:09:52 compute-0 nova_compute[259627]: 2025-10-14 09:09:52.741 2 INFO nova.compute.manager [-] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Took 1.28 seconds to deallocate network for instance.
Oct 14 09:09:52 compute-0 nova_compute[259627]: 2025-10-14 09:09:52.813 2 DEBUG oslo_concurrency.lockutils [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:52 compute-0 nova_compute[259627]: 2025-10-14 09:09:52.814 2 DEBUG oslo_concurrency.lockutils [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:52 compute-0 ceph-mon[74249]: pgmap v1661: 305 pgs: 305 active+clean; 293 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 43 KiB/s wr, 311 op/s
Oct 14 09:09:52 compute-0 ceph-mon[74249]: osdmap e252: 3 total, 3 up, 3 in
Oct 14 09:09:52 compute-0 nova_compute[259627]: 2025-10-14 09:09:52.908 2 DEBUG oslo_concurrency.processutils [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:09:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3750195773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:53 compute-0 nova_compute[259627]: 2025-10-14 09:09:53.357 2 DEBUG oslo_concurrency.processutils [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:53 compute-0 nova_compute[259627]: 2025-10-14 09:09:53.362 2 DEBUG nova.compute.provider_tree [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:09:53 compute-0 nova_compute[259627]: 2025-10-14 09:09:53.376 2 DEBUG nova.scheduler.client.report [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:09:53 compute-0 nova_compute[259627]: 2025-10-14 09:09:53.405 2 DEBUG oslo_concurrency.lockutils [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:53 compute-0 nova_compute[259627]: 2025-10-14 09:09:53.428 2 INFO nova.scheduler.client.report [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Deleted allocations for instance 7167ef21-b041-47b9-8d93-55b5853f4d01
Oct 14 09:09:53 compute-0 nova_compute[259627]: 2025-10-14 09:09:53.517 2 DEBUG oslo_concurrency.lockutils [None req-ac50a6fd-4906-4c5e-8cde-9daa51b94846 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "7167ef21-b041-47b9-8d93-55b5853f4d01" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:53 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 14 09:09:53 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 14 09:09:53 compute-0 nova_compute[259627]: 2025-10-14 09:09:53.638 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 09:09:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 293 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.3 KiB/s wr, 44 op/s
Oct 14 09:09:53 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3750195773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.072 2 DEBUG oslo_concurrency.lockutils [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.072 2 DEBUG oslo_concurrency.lockutils [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.072 2 DEBUG oslo_concurrency.lockutils [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.073 2 DEBUG oslo_concurrency.lockutils [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.073 2 DEBUG oslo_concurrency.lockutils [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.074 2 INFO nova.compute.manager [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Terminating instance
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.075 2 DEBUG nova.compute.manager [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:09:54 compute-0 kernel: tape18648ba-61 (unregistering): left promiscuous mode
Oct 14 09:09:54 compute-0 NetworkManager[44885]: <info>  [1760432994.1206] device (tape18648ba-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:54 compute-0 ovn_controller[152662]: 2025-10-14T09:09:54Z|00744|binding|INFO|Releasing lport e18648ba-6112-40fa-85f6-bdf82a012079 from this chassis (sb_readonly=0)
Oct 14 09:09:54 compute-0 ovn_controller[152662]: 2025-10-14T09:09:54Z|00745|binding|INFO|Setting lport e18648ba-6112-40fa-85f6-bdf82a012079 down in Southbound
Oct 14 09:09:54 compute-0 ovn_controller[152662]: 2025-10-14T09:09:54Z|00746|binding|INFO|Removing iface tape18648ba-61 ovn-installed in OVS
Oct 14 09:09:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:54.142 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:9e:35 10.100.0.9'], port_security=['fa:16:3e:0b:9e:35 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e065d857-2df9-4199-aa98-41ca3c436bad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bda6775f81f403e83269a5f798c9853', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'baab55cf-9843-49b9-a43b-28ca1ab122c2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88e90b59-4c4c-42c1-a4ed-574ac64367e5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e18648ba-6112-40fa-85f6-bdf82a012079) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:09:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:54.143 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e18648ba-6112-40fa-85f6-bdf82a012079 in datapath 9d540b01-e9c4-4dc5-9a51-94512ad9a409 unbound from our chassis
Oct 14 09:09:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:54.144 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9d540b01-e9c4-4dc5-9a51-94512ad9a409, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:09:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:54.145 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f3187967-3415-4038-8ff4-09b9add71092]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:54.146 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409 namespace which is not needed anymore
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:54 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d00000040.scope: Deactivated successfully.
Oct 14 09:09:54 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d00000040.scope: Consumed 13.221s CPU time.
Oct 14 09:09:54 compute-0 systemd-machined[214636]: Machine qemu-88-instance-00000040 terminated.
Oct 14 09:09:54 compute-0 neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409[327692]: [NOTICE]   (327696) : haproxy version is 2.8.14-c23fe91
Oct 14 09:09:54 compute-0 neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409[327692]: [NOTICE]   (327696) : path to executable is /usr/sbin/haproxy
Oct 14 09:09:54 compute-0 neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409[327692]: [WARNING]  (327696) : Exiting Master process...
Oct 14 09:09:54 compute-0 neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409[327692]: [WARNING]  (327696) : Exiting Master process...
Oct 14 09:09:54 compute-0 neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409[327692]: [ALERT]    (327696) : Current worker (327698) exited with code 143 (Terminated)
Oct 14 09:09:54 compute-0 neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409[327692]: [WARNING]  (327696) : All workers exited. Exiting... (0)
Oct 14 09:09:54 compute-0 systemd[1]: libpod-d7ba7ebc0e91d20684f519c582dc85351814f5e3e7324b64ad8f9ffcff668f4a.scope: Deactivated successfully.
Oct 14 09:09:54 compute-0 podman[336470]: 2025-10-14 09:09:54.301124398 +0000 UTC m=+0.054652147 container died d7ba7ebc0e91d20684f519c582dc85351814f5e3e7324b64ad8f9ffcff668f4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.317 2 INFO nova.virt.libvirt.driver [-] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Instance destroyed successfully.
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.318 2 DEBUG nova.objects.instance [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'resources' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-670696918cb8387e8757572bf34cc188f3c339f0a3acc9d8f14f93b3255809dc-merged.mount: Deactivated successfully.
Oct 14 09:09:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d7ba7ebc0e91d20684f519c582dc85351814f5e3e7324b64ad8f9ffcff668f4a-userdata-shm.mount: Deactivated successfully.
Oct 14 09:09:54 compute-0 podman[336470]: 2025-10-14 09:09:54.356922083 +0000 UTC m=+0.110449852 container cleanup d7ba7ebc0e91d20684f519c582dc85351814f5e3e7324b64ad8f9ffcff668f4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:09:54 compute-0 systemd[1]: libpod-conmon-d7ba7ebc0e91d20684f519c582dc85351814f5e3e7324b64ad8f9ffcff668f4a.scope: Deactivated successfully.
Oct 14 09:09:54 compute-0 podman[336508]: 2025-10-14 09:09:54.440142352 +0000 UTC m=+0.050971406 container remove d7ba7ebc0e91d20684f519c582dc85351814f5e3e7324b64ad8f9ffcff668f4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.449 2 DEBUG nova.virt.libvirt.vif [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1278548098',display_name='tempest-ServerActionsTestOtherB-server-1278548098',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1278548098',id=64,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN/kVgKkHzFM6KgYtJMEi52k+/MuBrPIt79IRFIgFmNTlVvXooEFluDr37nozPBAZXSiIdNHa7h8jeIafiglGDw1A5mNs3hIQ2Rxweba0GKcdCWJKvOM6RPyHsBm/r09+g==',key_name='tempest-keypair-1307751836',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:09:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4bda6775f81f403e83269a5f798c9853',ramdisk_id='',reservation_id='r-j6ifs0px',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-381012378',owner_user_name='tempest-ServerActionsTestOtherB-381012378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:09:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='695c749a8dce4506a31e2cec4f02876b',uuid=e065d857-2df9-4199-aa98-41ca3c436bad,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.450 2 DEBUG nova.network.os_vif_util [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converting VIF {"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.451 2 DEBUG nova.network.os_vif_util [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:9e:35,bridge_name='br-int',has_traffic_filtering=True,id=e18648ba-6112-40fa-85f6-bdf82a012079,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18648ba-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.451 2 DEBUG os_vif [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:9e:35,bridge_name='br-int',has_traffic_filtering=True,id=e18648ba-6112-40fa-85f6-bdf82a012079,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18648ba-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape18648ba-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.458 2 INFO os_vif [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:9e:35,bridge_name='br-int',has_traffic_filtering=True,id=e18648ba-6112-40fa-85f6-bdf82a012079,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18648ba-61')
Oct 14 09:09:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:54.458 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[20ec2b91-c959-4ee7-b839-44877bd46b67]: (4, ('Tue Oct 14 09:09:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409 (d7ba7ebc0e91d20684f519c582dc85351814f5e3e7324b64ad8f9ffcff668f4a)\nd7ba7ebc0e91d20684f519c582dc85351814f5e3e7324b64ad8f9ffcff668f4a\nTue Oct 14 09:09:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409 (d7ba7ebc0e91d20684f519c582dc85351814f5e3e7324b64ad8f9ffcff668f4a)\nd7ba7ebc0e91d20684f519c582dc85351814f5e3e7324b64ad8f9ffcff668f4a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:54.465 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6f2a5da1-f7c5-4d5e-9ee0-43d71888fa71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:54.466 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d540b01-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:54 compute-0 kernel: tap9d540b01-e0: left promiscuous mode
Oct 14 09:09:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:54.490 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9343ba9c-f623-4ca7-b7de-948f688473bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:54.516 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[860fc929-e472-4467-b055-7de93b8ceee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:54.518 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f66c4d19-3a3b-464b-8895-5b21669eee4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:54.541 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[da4db830-6f74-4727-b7ab-ebb01ae160f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662151, 'reachable_time': 26136, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336539, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d9d540b01\x2de9c4\x2d4dc5\x2d9a51\x2d94512ad9a409.mount: Deactivated successfully.
Oct 14 09:09:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:54.544 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:09:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:54.544 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[5916263e-59af-4440-8975-613b2369ebfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.899 2 INFO nova.virt.libvirt.driver [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Deleting instance files /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad_del
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.900 2 INFO nova.virt.libvirt.driver [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Deletion of /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad_del complete
Oct 14 09:09:54 compute-0 ceph-mon[74249]: pgmap v1663: 305 pgs: 305 active+clean; 293 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.3 KiB/s wr, 44 op/s
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.906 2 DEBUG nova.compute.manager [req-b583bafb-43db-47b1-8e92-54df06fb7eb8 req-93171fd4-451f-4dda-be0a-a1a7dba44529 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received event network-vif-unplugged-e18648ba-6112-40fa-85f6-bdf82a012079 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.906 2 DEBUG oslo_concurrency.lockutils [req-b583bafb-43db-47b1-8e92-54df06fb7eb8 req-93171fd4-451f-4dda-be0a-a1a7dba44529 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.906 2 DEBUG oslo_concurrency.lockutils [req-b583bafb-43db-47b1-8e92-54df06fb7eb8 req-93171fd4-451f-4dda-be0a-a1a7dba44529 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.906 2 DEBUG oslo_concurrency.lockutils [req-b583bafb-43db-47b1-8e92-54df06fb7eb8 req-93171fd4-451f-4dda-be0a-a1a7dba44529 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.906 2 DEBUG nova.compute.manager [req-b583bafb-43db-47b1-8e92-54df06fb7eb8 req-93171fd4-451f-4dda-be0a-a1a7dba44529 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] No waiting events found dispatching network-vif-unplugged-e18648ba-6112-40fa-85f6-bdf82a012079 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.907 2 DEBUG nova.compute.manager [req-b583bafb-43db-47b1-8e92-54df06fb7eb8 req-93171fd4-451f-4dda-be0a-a1a7dba44529 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received event network-vif-unplugged-e18648ba-6112-40fa-85f6-bdf82a012079 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.966 2 INFO nova.compute.manager [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Took 0.89 seconds to destroy the instance on the hypervisor.
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.966 2 DEBUG oslo.service.loopingcall [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.966 2 DEBUG nova.compute.manager [-] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:09:54 compute-0 nova_compute[259627]: 2025-10-14 09:09:54.967 2 DEBUG nova.network.neutron [-] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:09:55 compute-0 nova_compute[259627]: 2025-10-14 09:09:55.008 2 DEBUG nova.compute.manager [req-5d92f34c-0757-4dd2-87f4-61b1d471669a req-6cb00f09-5975-4efd-9beb-afafebe989b9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Received event network-vif-deleted-6383be3b-4378-4c3f-a2be-af1c6ec32afb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:55 compute-0 nova_compute[259627]: 2025-10-14 09:09:55.361 2 DEBUG nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 09:09:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 195 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 6.3 MiB/s wr, 347 op/s
Oct 14 09:09:55 compute-0 nova_compute[259627]: 2025-10-14 09:09:55.937 2 DEBUG nova.network.neutron [-] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:09:55 compute-0 nova_compute[259627]: 2025-10-14 09:09:55.959 2 INFO nova.compute.manager [-] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Took 0.99 seconds to deallocate network for instance.
Oct 14 09:09:55 compute-0 kernel: tap705e7559-ae (unregistering): left promiscuous mode
Oct 14 09:09:56 compute-0 NetworkManager[44885]: <info>  [1760432996.0008] device (tap705e7559-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:56 compute-0 ovn_controller[152662]: 2025-10-14T09:09:56Z|00747|binding|INFO|Releasing lport 705e7559-ae3c-461c-be70-b8dee31808cf from this chassis (sb_readonly=0)
Oct 14 09:09:56 compute-0 ovn_controller[152662]: 2025-10-14T09:09:56Z|00748|binding|INFO|Setting lport 705e7559-ae3c-461c-be70-b8dee31808cf down in Southbound
Oct 14 09:09:56 compute-0 ovn_controller[152662]: 2025-10-14T09:09:56Z|00749|binding|INFO|Removing iface tap705e7559-ae ovn-installed in OVS
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.017 2 DEBUG oslo_concurrency.lockutils [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.017 2 DEBUG oslo_concurrency.lockutils [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:56.022 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:ea:f5 10.100.0.11'], port_security=['fa:16:3e:60:ea:f5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'dfb68a47-709d-40e3-8a17-01d9c3fb084b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99db3452-8467-4a2b-a51d-30679c346bb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9099e3128b584ff7a140b8021451223e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '14cbcf9b-48b8-496d-985e-160ef22d10a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cb6279-57ca-4d4a-8018-5af2d7c42670, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=705e7559-ae3c-461c-be70-b8dee31808cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:09:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:56.023 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 705e7559-ae3c-461c-be70-b8dee31808cf in datapath 99db3452-8467-4a2b-a51d-30679c346bb2 unbound from our chassis
Oct 14 09:09:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:56.025 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99db3452-8467-4a2b-a51d-30679c346bb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:09:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:56.026 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4776d54d-f2c4-431c-bf4f-9c943fda14b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:56.026 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 namespace which is not needed anymore
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:56 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d00000048.scope: Deactivated successfully.
Oct 14 09:09:56 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d00000048.scope: Consumed 14.113s CPU time.
Oct 14 09:09:56 compute-0 systemd-machined[214636]: Machine qemu-87-instance-00000048 terminated.
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.141 2 DEBUG oslo_concurrency.processutils [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:56 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[335916]: [NOTICE]   (335938) : haproxy version is 2.8.14-c23fe91
Oct 14 09:09:56 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[335916]: [NOTICE]   (335938) : path to executable is /usr/sbin/haproxy
Oct 14 09:09:56 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[335916]: [WARNING]  (335938) : Exiting Master process...
Oct 14 09:09:56 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[335916]: [ALERT]    (335938) : Current worker (335953) exited with code 143 (Terminated)
Oct 14 09:09:56 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[335916]: [WARNING]  (335938) : All workers exited. Exiting... (0)
Oct 14 09:09:56 compute-0 podman[336562]: 2025-10-14 09:09:56.215071953 +0000 UTC m=+0.078971276 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:09:56 compute-0 systemd[1]: libpod-308add926168e52dd9667134d4fc2d5e0cf016525ba6b3b962363812902baf96.scope: Deactivated successfully.
Oct 14 09:09:56 compute-0 podman[336581]: 2025-10-14 09:09:56.223814459 +0000 UTC m=+0.047593344 container died 308add926168e52dd9667134d4fc2d5e0cf016525ba6b3b962363812902baf96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:09:56 compute-0 podman[336556]: 2025-10-14 09:09:56.245674187 +0000 UTC m=+0.113258540 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:09:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6374582ad90e28564932ca32b29633cb4772002e9ea004e655259bf072460bdb-merged.mount: Deactivated successfully.
Oct 14 09:09:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-308add926168e52dd9667134d4fc2d5e0cf016525ba6b3b962363812902baf96-userdata-shm.mount: Deactivated successfully.
Oct 14 09:09:56 compute-0 podman[336581]: 2025-10-14 09:09:56.263984698 +0000 UTC m=+0.087763583 container cleanup 308add926168e52dd9667134d4fc2d5e0cf016525ba6b3b962363812902baf96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 09:09:56 compute-0 systemd[1]: libpod-conmon-308add926168e52dd9667134d4fc2d5e0cf016525ba6b3b962363812902baf96.scope: Deactivated successfully.
Oct 14 09:09:56 compute-0 podman[336642]: 2025-10-14 09:09:56.328815124 +0000 UTC m=+0.042186049 container remove 308add926168e52dd9667134d4fc2d5e0cf016525ba6b3b962363812902baf96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:09:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:56.336 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae82307-1627-466e-9879-5e10daa8affa]: (4, ('Tue Oct 14 09:09:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 (308add926168e52dd9667134d4fc2d5e0cf016525ba6b3b962363812902baf96)\n308add926168e52dd9667134d4fc2d5e0cf016525ba6b3b962363812902baf96\nTue Oct 14 09:09:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 (308add926168e52dd9667134d4fc2d5e0cf016525ba6b3b962363812902baf96)\n308add926168e52dd9667134d4fc2d5e0cf016525ba6b3b962363812902baf96\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:56.338 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[919893b0-f1d9-4714-ab8c-b788ea124a2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:56.339 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99db3452-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:56 compute-0 kernel: tap99db3452-80: left promiscuous mode
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:56.365 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3f906b8d-621a-417c-807a-305d48388f93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:56.392 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2780116c-78f1-456b-8539-979792497e76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:56.393 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ea59b19c-d56f-4132-97fb-384e919d8ed8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:56.407 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5010cd4c-9620-44d6-9396-33dc0fecefbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675571, 'reachable_time': 20297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336679, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d99db3452\x2d8467\x2d4a2b\x2da51d\x2d30679c346bb2.mount: Deactivated successfully.
Oct 14 09:09:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:56.409 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:09:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:56.409 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2159f1-729e-4f3c-93a5-266bf42c04f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:09:56 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3414965148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.616 2 DEBUG oslo_concurrency.processutils [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.624 2 DEBUG nova.compute.provider_tree [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.650 2 DEBUG nova.scheduler.client.report [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.660 2 INFO nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Instance shutdown successfully after 13 seconds.
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.668 2 INFO nova.virt.libvirt.driver [-] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Instance destroyed successfully.
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.676 2 INFO nova.virt.libvirt.driver [-] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Instance destroyed successfully.
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.677 2 DEBUG nova.virt.libvirt.vif [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:09:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1083913759',display_name='tempest-ServerDiskConfigTestJSON-server-1083913759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1083913759',id=72,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:09:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9099e3128b584ff7a140b8021451223e',ramdisk_id='',reservation_id='r-0sy3orhw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1253454894',owner_user_name='tempest-ServerDiskConfigTestJSON-1253454894-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:09:42Z,user_data=None,user_id='979aa20794dc414f91c59f224a0db083',uuid=dfb68a47-709d-40e3-8a17-01d9c3fb084b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "705e7559-ae3c-461c-be70-b8dee31808cf", "address": "fa:16:3e:60:ea:f5", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705e7559-ae", "ovs_interfaceid": "705e7559-ae3c-461c-be70-b8dee31808cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.678 2 DEBUG nova.network.os_vif_util [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converting VIF {"id": "705e7559-ae3c-461c-be70-b8dee31808cf", "address": "fa:16:3e:60:ea:f5", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705e7559-ae", "ovs_interfaceid": "705e7559-ae3c-461c-be70-b8dee31808cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.680 2 DEBUG nova.network.os_vif_util [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:ea:f5,bridge_name='br-int',has_traffic_filtering=True,id=705e7559-ae3c-461c-be70-b8dee31808cf,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap705e7559-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.681 2 DEBUG os_vif [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:ea:f5,bridge_name='br-int',has_traffic_filtering=True,id=705e7559-ae3c-461c-be70-b8dee31808cf,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap705e7559-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.686 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap705e7559-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.690 2 DEBUG oslo_concurrency.lockutils [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.700 2 INFO os_vif [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:ea:f5,bridge_name='br-int',has_traffic_filtering=True,id=705e7559-ae3c-461c-be70-b8dee31808cf,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap705e7559-ae')
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.724 2 INFO nova.scheduler.client.report [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Deleted allocations for instance e065d857-2df9-4199-aa98-41ca3c436bad
Oct 14 09:09:56 compute-0 nova_compute[259627]: 2025-10-14 09:09:56.865 2 DEBUG oslo_concurrency.lockutils [None req-4a42b940-4db2-48fa-8d8c-2ac1379762fa 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:56 compute-0 ceph-mon[74249]: pgmap v1664: 305 pgs: 305 active+clean; 195 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 6.3 MiB/s wr, 347 op/s
Oct 14 09:09:56 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3414965148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.015 2 DEBUG nova.compute.manager [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.016 2 DEBUG oslo_concurrency.lockutils [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.018 2 DEBUG oslo_concurrency.lockutils [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.018 2 DEBUG oslo_concurrency.lockutils [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.019 2 DEBUG nova.compute.manager [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] No waiting events found dispatching network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.019 2 WARNING nova.compute.manager [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received unexpected event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 for instance with vm_state deleted and task_state None.
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.020 2 DEBUG nova.compute.manager [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received event network-vif-deleted-e18648ba-6112-40fa-85f6-bdf82a012079 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.021 2 DEBUG nova.compute.manager [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Received event network-vif-unplugged-705e7559-ae3c-461c-be70-b8dee31808cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.021 2 DEBUG oslo_concurrency.lockutils [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.022 2 DEBUG oslo_concurrency.lockutils [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.022 2 DEBUG oslo_concurrency.lockutils [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.023 2 DEBUG nova.compute.manager [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] No waiting events found dispatching network-vif-unplugged-705e7559-ae3c-461c-be70-b8dee31808cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.023 2 WARNING nova.compute.manager [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Received unexpected event network-vif-unplugged-705e7559-ae3c-461c-be70-b8dee31808cf for instance with vm_state active and task_state rebuilding.
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.024 2 DEBUG nova.compute.manager [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Received event network-vif-plugged-705e7559-ae3c-461c-be70-b8dee31808cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.025 2 DEBUG oslo_concurrency.lockutils [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.025 2 DEBUG oslo_concurrency.lockutils [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.026 2 DEBUG oslo_concurrency.lockutils [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.026 2 DEBUG nova.compute.manager [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] No waiting events found dispatching network-vif-plugged-705e7559-ae3c-461c-be70-b8dee31808cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.027 2 WARNING nova.compute.manager [req-4f2930a7-6a41-4a6b-883a-861e3e08bc0d req-8ca48455-6d7d-4223-a52d-b6eae17697d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Received unexpected event network-vif-plugged-705e7559-ae3c-461c-be70-b8dee31808cf for instance with vm_state active and task_state rebuilding.
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.181 2 INFO nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Deleting instance files /var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b_del
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.183 2 INFO nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Deletion of /var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b_del complete
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.358 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.359 2 INFO nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Creating image(s)
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.393 2 DEBUG nova.storage.rbd_utils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.429 2 DEBUG nova.storage.rbd_utils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.456 2 DEBUG nova.storage.rbd_utils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.461 2 DEBUG oslo_concurrency.processutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.564 2 DEBUG oslo_concurrency.processutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.564 2 DEBUG oslo_concurrency.lockutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.565 2 DEBUG oslo_concurrency.lockutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.565 2 DEBUG oslo_concurrency.lockutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.590 2 DEBUG nova.storage.rbd_utils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.594 2 DEBUG oslo_concurrency.processutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:09:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 195 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.6 MiB/s wr, 305 op/s
Oct 14 09:09:57 compute-0 nova_compute[259627]: 2025-10-14 09:09:57.963 2 DEBUG oslo_concurrency.processutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:57 compute-0 kernel: tap2aa7bfa4-d8 (unregistering): left promiscuous mode
Oct 14 09:09:57 compute-0 NetworkManager[44885]: <info>  [1760432997.9853] device (tap2aa7bfa4-d8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:09:57 compute-0 ovn_controller[152662]: 2025-10-14T09:09:57Z|00750|binding|INFO|Releasing lport 2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 from this chassis (sb_readonly=0)
Oct 14 09:09:57 compute-0 ovn_controller[152662]: 2025-10-14T09:09:57Z|00751|binding|INFO|Setting lport 2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 down in Southbound
Oct 14 09:09:57 compute-0 ovn_controller[152662]: 2025-10-14T09:09:57Z|00752|binding|INFO|Removing iface tap2aa7bfa4-d8 ovn-installed in OVS
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:58.004 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:8d:59 10.100.0.4'], port_security=['fa:16:3e:f5:8d:59 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '16c93e17-00f2-4710-a0e4-83eb60430088', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb52c397-b1e0-4244-ac38-60ca1e4abace', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51ae58236f6a432e93764d455a502033', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fcb895cc-6512-4f14-be87-dd3d6289c2b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=400ed612-fb0d-4559-97c6-466a993e3d60, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:09:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:58.009 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 in datapath fb52c397-b1e0-4244-ac38-60ca1e4abace unbound from our chassis
Oct 14 09:09:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:58.010 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fb52c397-b1e0-4244-ac38-60ca1e4abace or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:09:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:09:58.012 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[95af7357-583d-46ca-bd28-ec42fdc97a9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.050 2 DEBUG nova.storage.rbd_utils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] resizing rbd image dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:09:58 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d00000049.scope: Deactivated successfully.
Oct 14 09:09:58 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d00000049.scope: Consumed 14.349s CPU time.
Oct 14 09:09:58 compute-0 systemd-machined[214636]: Machine qemu-89-instance-00000049 terminated.
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.137 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.138 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Ensure instance console log exists: /var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.138 2 DEBUG oslo_concurrency.lockutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.139 2 DEBUG oslo_concurrency.lockutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.139 2 DEBUG oslo_concurrency.lockutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.142 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Start _get_guest_xml network_info=[{"id": "705e7559-ae3c-461c-be70-b8dee31808cf", "address": "fa:16:3e:60:ea:f5", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705e7559-ae", "ovs_interfaceid": "705e7559-ae3c-461c-be70-b8dee31808cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.146 2 WARNING nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.151 2 DEBUG nova.virt.libvirt.host [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.152 2 DEBUG nova.virt.libvirt.host [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.156 2 DEBUG nova.virt.libvirt.host [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.157 2 DEBUG nova.virt.libvirt.host [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.158 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.158 2 DEBUG nova.virt.hardware [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.159 2 DEBUG nova.virt.hardware [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.159 2 DEBUG nova.virt.hardware [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.160 2 DEBUG nova.virt.hardware [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.160 2 DEBUG nova.virt.hardware [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.160 2 DEBUG nova.virt.hardware [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.161 2 DEBUG nova.virt.hardware [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.161 2 DEBUG nova.virt.hardware [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.161 2 DEBUG nova.virt.hardware [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.162 2 DEBUG nova.virt.hardware [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.162 2 DEBUG nova.virt.hardware [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.163 2 DEBUG nova.objects.instance [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'vcpu_model' on Instance uuid dfb68a47-709d-40e3-8a17-01d9c3fb084b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.183 2 DEBUG oslo_concurrency.processutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.368 2 DEBUG nova.compute.manager [req-50233ca3-583e-40d0-af5e-8b6ff03e59af req-7ad90e44-1a02-400f-a471-9daab7852c76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Received event network-vif-unplugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.369 2 DEBUG oslo_concurrency.lockutils [req-50233ca3-583e-40d0-af5e-8b6ff03e59af req-7ad90e44-1a02-400f-a471-9daab7852c76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.369 2 DEBUG oslo_concurrency.lockutils [req-50233ca3-583e-40d0-af5e-8b6ff03e59af req-7ad90e44-1a02-400f-a471-9daab7852c76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.369 2 DEBUG oslo_concurrency.lockutils [req-50233ca3-583e-40d0-af5e-8b6ff03e59af req-7ad90e44-1a02-400f-a471-9daab7852c76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.370 2 DEBUG nova.compute.manager [req-50233ca3-583e-40d0-af5e-8b6ff03e59af req-7ad90e44-1a02-400f-a471-9daab7852c76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] No waiting events found dispatching network-vif-unplugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.370 2 WARNING nova.compute.manager [req-50233ca3-583e-40d0-af5e-8b6ff03e59af req-7ad90e44-1a02-400f-a471-9daab7852c76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Received unexpected event network-vif-unplugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 for instance with vm_state active and task_state rescuing.
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.514 2 INFO nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Instance shutdown successfully after 13 seconds.
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.519 2 INFO nova.virt.libvirt.driver [-] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Instance destroyed successfully.
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.520 2 DEBUG nova.objects.instance [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'numa_topology' on Instance uuid 16c93e17-00f2-4710-a0e4-83eb60430088 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.536 2 INFO nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Attempting rescue
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.536 2 DEBUG nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.540 2 DEBUG nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.541 2 INFO nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Creating image(s)
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.564 2 DEBUG nova.storage.rbd_utils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image 16c93e17-00f2-4710-a0e4-83eb60430088_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.567 2 DEBUG nova.objects.instance [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 16c93e17-00f2-4710-a0e4-83eb60430088 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.618 2 DEBUG nova.storage.rbd_utils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image 16c93e17-00f2-4710-a0e4-83eb60430088_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.652 2 DEBUG nova.storage.rbd_utils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image 16c93e17-00f2-4710-a0e4-83eb60430088_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:09:58 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2732764564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.657 2 DEBUG oslo_concurrency.processutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.713 2 DEBUG oslo_concurrency.processutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.738 2 DEBUG nova.storage.rbd_utils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.743 2 DEBUG oslo_concurrency.processutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.791 2 DEBUG oslo_concurrency.processutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.792 2 DEBUG oslo_concurrency.lockutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.792 2 DEBUG oslo_concurrency.lockutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.792 2 DEBUG oslo_concurrency.lockutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.814 2 DEBUG nova.storage.rbd_utils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image 16c93e17-00f2-4710-a0e4-83eb60430088_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:58 compute-0 nova_compute[259627]: 2025-10-14 09:09:58.817 2 DEBUG oslo_concurrency.processutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 16c93e17-00f2-4710-a0e4-83eb60430088_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:58 compute-0 ceph-mon[74249]: pgmap v1665: 305 pgs: 305 active+clean; 195 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.6 MiB/s wr, 305 op/s
Oct 14 09:09:58 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2732764564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.144 2 DEBUG oslo_concurrency.processutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 16c93e17-00f2-4710-a0e4-83eb60430088_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.145 2 DEBUG nova.objects.instance [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'migration_context' on Instance uuid 16c93e17-00f2-4710-a0e4-83eb60430088 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.173 2 DEBUG nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.175 2 DEBUG nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Start _get_guest_xml network_info=[{"id": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "address": "fa:16:3e:f5:8d:59", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-237057849-network", "vif_mac": "fa:16:3e:f5:8d:59"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2aa7bfa4-d8", "ovs_interfaceid": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.176 2 DEBUG nova.objects.instance [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'resources' on Instance uuid 16c93e17-00f2-4710-a0e4-83eb60430088 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.201 2 WARNING nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.208 2 DEBUG nova.virt.libvirt.host [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.209 2 DEBUG nova.virt.libvirt.host [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.214 2 DEBUG nova.virt.libvirt.host [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.215 2 DEBUG nova.virt.libvirt.host [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.216 2 DEBUG nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.217 2 DEBUG nova.virt.hardware [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.218 2 DEBUG nova.virt.hardware [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.218 2 DEBUG nova.virt.hardware [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.219 2 DEBUG nova.virt.hardware [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.219 2 DEBUG nova.virt.hardware [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.220 2 DEBUG nova.virt.hardware [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.221 2 DEBUG nova.virt.hardware [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.221 2 DEBUG nova.virt.hardware [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.222 2 DEBUG nova.virt.hardware [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.223 2 DEBUG nova.virt.hardware [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.223 2 DEBUG nova.virt.hardware [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.224 2 DEBUG nova.objects.instance [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 16c93e17-00f2-4710-a0e4-83eb60430088 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:09:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/503815270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.242 2 DEBUG oslo_concurrency.processutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.244 2 DEBUG nova.virt.libvirt.vif [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:09:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1083913759',display_name='tempest-ServerDiskConfigTestJSON-server-1083913759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1083913759',id=72,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:09:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9099e3128b584ff7a140b8021451223e',ramdisk_id='',reservation_id='r-0sy3orhw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1253454894',owner_user_name='tempest-ServerDiskConfigTestJSON-1253454894-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:09:57Z,user_data=None,user_id='979aa20794dc414f91c59f224a0db083',uuid=dfb68a47-709d-40e3-8a17-01d9c3fb084b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "705e7559-ae3c-461c-be70-b8dee31808cf", "address": "fa:16:3e:60:ea:f5", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705e7559-ae", "ovs_interfaceid": "705e7559-ae3c-461c-be70-b8dee31808cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.245 2 DEBUG nova.network.os_vif_util [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converting VIF {"id": "705e7559-ae3c-461c-be70-b8dee31808cf", "address": "fa:16:3e:60:ea:f5", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705e7559-ae", "ovs_interfaceid": "705e7559-ae3c-461c-be70-b8dee31808cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.246 2 DEBUG nova.network.os_vif_util [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:ea:f5,bridge_name='br-int',has_traffic_filtering=True,id=705e7559-ae3c-461c-be70-b8dee31808cf,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap705e7559-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.251 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:09:59 compute-0 nova_compute[259627]:   <uuid>dfb68a47-709d-40e3-8a17-01d9c3fb084b</uuid>
Oct 14 09:09:59 compute-0 nova_compute[259627]:   <name>instance-00000048</name>
Oct 14 09:09:59 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:09:59 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:09:59 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1083913759</nova:name>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:09:58</nova:creationTime>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:09:59 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:09:59 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:09:59 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:09:59 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:09:59 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:09:59 compute-0 nova_compute[259627]:         <nova:user uuid="979aa20794dc414f91c59f224a0db083">tempest-ServerDiskConfigTestJSON-1253454894-project-member</nova:user>
Oct 14 09:09:59 compute-0 nova_compute[259627]:         <nova:project uuid="9099e3128b584ff7a140b8021451223e">tempest-ServerDiskConfigTestJSON-1253454894</nova:project>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="e2368e3e-f504-40e6-a9d3-67df18c845bb"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:09:59 compute-0 nova_compute[259627]:         <nova:port uuid="705e7559-ae3c-461c-be70-b8dee31808cf">
Oct 14 09:09:59 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:09:59 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:09:59 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <system>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <entry name="serial">dfb68a47-709d-40e3-8a17-01d9c3fb084b</entry>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <entry name="uuid">dfb68a47-709d-40e3-8a17-01d9c3fb084b</entry>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     </system>
Oct 14 09:09:59 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:09:59 compute-0 nova_compute[259627]:   <os>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:   </os>
Oct 14 09:09:59 compute-0 nova_compute[259627]:   <features>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:   </features>
Oct 14 09:09:59 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:09:59 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:09:59 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk">
Oct 14 09:09:59 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       </source>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:09:59 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk.config">
Oct 14 09:09:59 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       </source>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:09:59 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:60:ea:f5"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <target dev="tap705e7559-ae"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b/console.log" append="off"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <video>
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     </video>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:09:59 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:09:59 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:09:59 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:09:59 compute-0 nova_compute[259627]: </domain>
Oct 14 09:09:59 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.254 2 DEBUG nova.compute.manager [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Preparing to wait for external event network-vif-plugged-705e7559-ae3c-461c-be70-b8dee31808cf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.254 2 DEBUG oslo_concurrency.lockutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.255 2 DEBUG oslo_concurrency.lockutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.255 2 DEBUG oslo_concurrency.lockutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.257 2 DEBUG nova.virt.libvirt.vif [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:09:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1083913759',display_name='tempest-ServerDiskConfigTestJSON-server-1083913759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1083913759',id=72,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:09:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9099e3128b584ff7a140b8021451223e',ramdisk_id='',reservation_id='r-0sy3orhw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1253454894',owner_user_name='tempest-ServerDiskConfigTestJSON-1253454894-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:09:57Z,user_data=None,user_id='979aa20794dc414f91c59f224a0db083',uuid=dfb68a47-709d-40e3-8a17-01d9c3fb084b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "705e7559-ae3c-461c-be70-b8dee31808cf", "address": "fa:16:3e:60:ea:f5", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705e7559-ae", "ovs_interfaceid": "705e7559-ae3c-461c-be70-b8dee31808cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.257 2 DEBUG nova.network.os_vif_util [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converting VIF {"id": "705e7559-ae3c-461c-be70-b8dee31808cf", "address": "fa:16:3e:60:ea:f5", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705e7559-ae", "ovs_interfaceid": "705e7559-ae3c-461c-be70-b8dee31808cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.258 2 DEBUG nova.network.os_vif_util [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:ea:f5,bridge_name='br-int',has_traffic_filtering=True,id=705e7559-ae3c-461c-be70-b8dee31808cf,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap705e7559-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.259 2 DEBUG os_vif [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:ea:f5,bridge_name='br-int',has_traffic_filtering=True,id=705e7559-ae3c-461c-be70-b8dee31808cf,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap705e7559-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.262 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.263 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.268 2 DEBUG oslo_concurrency.processutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.311 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap705e7559-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.312 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap705e7559-ae, col_values=(('external_ids', {'iface-id': '705e7559-ae3c-461c-be70-b8dee31808cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:ea:f5', 'vm-uuid': 'dfb68a47-709d-40e3-8a17-01d9c3fb084b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:09:59 compute-0 NetworkManager[44885]: <info>  [1760432999.3151] manager: (tap705e7559-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/316)
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.327 2 INFO os_vif [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:ea:f5,bridge_name='br-int',has_traffic_filtering=True,id=705e7559-ae3c-461c-be70-b8dee31808cf,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap705e7559-ae')
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.537 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.537 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.538 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] No VIF found with MAC fa:16:3e:60:ea:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.539 2 INFO nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Using config drive
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.570 2 DEBUG nova.storage.rbd_utils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.589 2 DEBUG nova.objects.instance [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'ec2_ids' on Instance uuid dfb68a47-709d-40e3-8a17-01d9c3fb084b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.612 2 DEBUG nova.objects.instance [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'keypairs' on Instance uuid dfb68a47-709d-40e3-8a17-01d9c3fb084b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:09:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:09:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478550959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.706 2 DEBUG oslo_concurrency.processutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:09:59 compute-0 nova_compute[259627]: 2025-10-14 09:09:59.707 2 DEBUG oslo_concurrency.processutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:09:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 195 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 5.1 MiB/s wr, 277 op/s
Oct 14 09:09:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/503815270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:09:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3478550959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:10:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2881019098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.132 2 DEBUG oslo_concurrency.processutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.135 2 DEBUG oslo_concurrency.processutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.494 2 DEBUG nova.compute.manager [req-adb3b66f-1bc6-4be0-bb55-d4b2b793eb57 req-d0452d1f-d0a2-4544-b8ec-154b074980ac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Received event network-vif-plugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.495 2 DEBUG oslo_concurrency.lockutils [req-adb3b66f-1bc6-4be0-bb55-d4b2b793eb57 req-d0452d1f-d0a2-4544-b8ec-154b074980ac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.495 2 DEBUG oslo_concurrency.lockutils [req-adb3b66f-1bc6-4be0-bb55-d4b2b793eb57 req-d0452d1f-d0a2-4544-b8ec-154b074980ac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.496 2 DEBUG oslo_concurrency.lockutils [req-adb3b66f-1bc6-4be0-bb55-d4b2b793eb57 req-d0452d1f-d0a2-4544-b8ec-154b074980ac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.496 2 DEBUG nova.compute.manager [req-adb3b66f-1bc6-4be0-bb55-d4b2b793eb57 req-d0452d1f-d0a2-4544-b8ec-154b074980ac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] No waiting events found dispatching network-vif-plugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.497 2 WARNING nova.compute.manager [req-adb3b66f-1bc6-4be0-bb55-d4b2b793eb57 req-d0452d1f-d0a2-4544-b8ec-154b074980ac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Received unexpected event network-vif-plugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 for instance with vm_state active and task_state rescuing.
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.501 2 INFO nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Creating config drive at /var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b/disk.config
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.509 2 DEBUG oslo_concurrency.processutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb92eu5w8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:10:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2665364063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.555 2 DEBUG oslo_concurrency.processutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.559 2 DEBUG nova.virt.libvirt.vif [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:09:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-963382784',display_name='tempest-ServerRescueTestJSON-server-963382784',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-963382784',id=73,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:09:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='51ae58236f6a432e93764d455a502033',ramdisk_id='',reservation_id='r-x8za0q3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',
image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-156376826',owner_user_name='tempest-ServerRescueTestJSON-156376826-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:09:41Z,user_data=None,user_id='7a268118aae14d449097f4a26371415e',uuid=16c93e17-00f2-4710-a0e4-83eb60430088,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "address": "fa:16:3e:f5:8d:59", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-237057849-network", "vif_mac": "fa:16:3e:f5:8d:59"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2aa7bfa4-d8", "ovs_interfaceid": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.560 2 DEBUG nova.network.os_vif_util [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Converting VIF {"id": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "address": "fa:16:3e:f5:8d:59", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-237057849-network", "vif_mac": "fa:16:3e:f5:8d:59"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2aa7bfa4-d8", "ovs_interfaceid": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.562 2 DEBUG nova.network.os_vif_util [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f5:8d:59,bridge_name='br-int',has_traffic_filtering=True,id=2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549,network=Network(fb52c397-b1e0-4244-ac38-60ca1e4abace),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2aa7bfa4-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.564 2 DEBUG nova.objects.instance [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'pci_devices' on Instance uuid 16c93e17-00f2-4710-a0e4-83eb60430088 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.587 2 DEBUG nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:10:00 compute-0 nova_compute[259627]:   <uuid>16c93e17-00f2-4710-a0e4-83eb60430088</uuid>
Oct 14 09:10:00 compute-0 nova_compute[259627]:   <name>instance-00000049</name>
Oct 14 09:10:00 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:10:00 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:10:00 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerRescueTestJSON-server-963382784</nova:name>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:09:59</nova:creationTime>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:10:00 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:10:00 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:10:00 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:10:00 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:10:00 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:10:00 compute-0 nova_compute[259627]:         <nova:user uuid="7a268118aae14d449097f4a26371415e">tempest-ServerRescueTestJSON-156376826-project-member</nova:user>
Oct 14 09:10:00 compute-0 nova_compute[259627]:         <nova:project uuid="51ae58236f6a432e93764d455a502033">tempest-ServerRescueTestJSON-156376826</nova:project>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:10:00 compute-0 nova_compute[259627]:         <nova:port uuid="2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549">
Oct 14 09:10:00 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:10:00 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:10:00 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <system>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <entry name="serial">16c93e17-00f2-4710-a0e4-83eb60430088</entry>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <entry name="uuid">16c93e17-00f2-4710-a0e4-83eb60430088</entry>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     </system>
Oct 14 09:10:00 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:10:00 compute-0 nova_compute[259627]:   <os>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:   </os>
Oct 14 09:10:00 compute-0 nova_compute[259627]:   <features>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:   </features>
Oct 14 09:10:00 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:10:00 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:10:00 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/16c93e17-00f2-4710-a0e4-83eb60430088_disk.rescue">
Oct 14 09:10:00 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       </source>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:10:00 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/16c93e17-00f2-4710-a0e4-83eb60430088_disk">
Oct 14 09:10:00 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       </source>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:10:00 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <target dev="vdb" bus="virtio"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/16c93e17-00f2-4710-a0e4-83eb60430088_disk.config.rescue">
Oct 14 09:10:00 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       </source>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:10:00 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:f5:8d:59"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <target dev="tap2aa7bfa4-d8"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/16c93e17-00f2-4710-a0e4-83eb60430088/console.log" append="off"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <video>
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     </video>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:10:00 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:10:00 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:10:00 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:10:00 compute-0 nova_compute[259627]: </domain>
Oct 14 09:10:00 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.599 2 INFO nova.virt.libvirt.driver [-] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Instance destroyed successfully.
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.655 2 DEBUG oslo_concurrency.processutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb92eu5w8" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.684 2 DEBUG nova.storage.rbd_utils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.688 2 DEBUG oslo_concurrency.processutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b/disk.config dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.746 2 DEBUG nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.747 2 DEBUG nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.747 2 DEBUG nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.748 2 DEBUG nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] No VIF found with MAC fa:16:3e:f5:8d:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.748 2 INFO nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Using config drive
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.775 2 DEBUG nova.storage.rbd_utils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image 16c93e17-00f2-4710-a0e4-83eb60430088_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.795 2 DEBUG nova.objects.instance [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 16c93e17-00f2-4710-a0e4-83eb60430088 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.828 2 DEBUG nova.objects.instance [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'keypairs' on Instance uuid 16c93e17-00f2-4710-a0e4-83eb60430088 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.880 2 DEBUG oslo_concurrency.processutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b/disk.config dfb68a47-709d-40e3-8a17-01d9c3fb084b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.881 2 INFO nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Deleting local config drive /var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b/disk.config because it was imported into RBD.
Oct 14 09:10:00 compute-0 systemd-udevd[336824]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:10:00 compute-0 NetworkManager[44885]: <info>  [1760433000.9342] manager: (tap705e7559-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/317)
Oct 14 09:10:00 compute-0 kernel: tap705e7559-ae: entered promiscuous mode
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:00 compute-0 ovn_controller[152662]: 2025-10-14T09:10:00Z|00753|binding|INFO|Claiming lport 705e7559-ae3c-461c-be70-b8dee31808cf for this chassis.
Oct 14 09:10:00 compute-0 ovn_controller[152662]: 2025-10-14T09:10:00Z|00754|binding|INFO|705e7559-ae3c-461c-be70-b8dee31808cf: Claiming fa:16:3e:60:ea:f5 10.100.0.11
Oct 14 09:10:00 compute-0 NetworkManager[44885]: <info>  [1760433000.9484] device (tap705e7559-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:10:00 compute-0 NetworkManager[44885]: <info>  [1760433000.9507] device (tap705e7559-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:10:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:00.950 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:ea:f5 10.100.0.11'], port_security=['fa:16:3e:60:ea:f5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'dfb68a47-709d-40e3-8a17-01d9c3fb084b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99db3452-8467-4a2b-a51d-30679c346bb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9099e3128b584ff7a140b8021451223e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '14cbcf9b-48b8-496d-985e-160ef22d10a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cb6279-57ca-4d4a-8018-5af2d7c42670, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=705e7559-ae3c-461c-be70-b8dee31808cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:10:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:00.952 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 705e7559-ae3c-461c-be70-b8dee31808cf in datapath 99db3452-8467-4a2b-a51d-30679c346bb2 bound to our chassis
Oct 14 09:10:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:00.954 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99db3452-8467-4a2b-a51d-30679c346bb2
Oct 14 09:10:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:00.965 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[91efb336-dfb4-458a-916b-54c3469b8556]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:00.966 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap99db3452-81 in ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:10:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:00.968 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap99db3452-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:10:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:00.968 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7b8fac64-c4e4-4148-9665-a1a7b5413f40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:00.969 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[93fd7887-474e-4d18-bf9b-1cb22666d828]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:00 compute-0 systemd-machined[214636]: New machine qemu-90-instance-00000048.
Oct 14 09:10:00 compute-0 ceph-mon[74249]: pgmap v1666: 305 pgs: 305 active+clean; 195 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 5.1 MiB/s wr, 277 op/s
Oct 14 09:10:00 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2881019098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:00 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2665364063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:00 compute-0 systemd[1]: Started Virtual Machine qemu-90-instance-00000048.
Oct 14 09:10:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:00.982 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[62619cf6-8ec5-4df0-a5a4-bc6040fcb800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:00 compute-0 ovn_controller[152662]: 2025-10-14T09:10:00Z|00755|binding|INFO|Setting lport 705e7559-ae3c-461c-be70-b8dee31808cf ovn-installed in OVS
Oct 14 09:10:00 compute-0 ovn_controller[152662]: 2025-10-14T09:10:00Z|00756|binding|INFO|Setting lport 705e7559-ae3c-461c-be70-b8dee31808cf up in Southbound
Oct 14 09:10:00 compute-0 nova_compute[259627]: 2025-10-14 09:10:00.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.005 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[053d94af-cde6-4463-a60a-c351eef8ccae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.036 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d236d39c-7ee2-4b20-b6b4-b706e794b45a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.044 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4b18fa25-4daa-43ac-a82c-8290be3170f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:01 compute-0 NetworkManager[44885]: <info>  [1760433001.0461] manager: (tap99db3452-80): new Veth device (/org/freedesktop/NetworkManager/Devices/318)
Oct 14 09:10:01 compute-0 systemd-udevd[337218]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.077 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[71f91723-916f-4d26-8cf5-1bada5a6c386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.080 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[38ad361c-b947-4fad-9a04-c36761259cab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:01 compute-0 NetworkManager[44885]: <info>  [1760433001.1086] device (tap99db3452-80): carrier: link connected
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.115 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9eaba517-f991-4404-92d0-efca66ccfabf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.129 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[34a3e631-f6af-4168-a699-1de073ced1fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99db3452-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:a6:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677987, 'reachable_time': 17801, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337240, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.147 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ef079191-b2f9-419a-ba49-d7058eab8973]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:a670'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677987, 'tstamp': 677987}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337241, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.160 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[32214f98-8ac8-40a2-aaab-25870c0903b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99db3452-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:a6:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677987, 'reachable_time': 17801, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 337242, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.191 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5e7eb8fa-f02c-4b93-8d25-b25246170352]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.258 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f84af08f-4475-47c0-a444-1613586aea84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.259 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99db3452-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.259 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.260 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99db3452-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:01 compute-0 NetworkManager[44885]: <info>  [1760433001.2629] manager: (tap99db3452-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Oct 14 09:10:01 compute-0 kernel: tap99db3452-80: entered promiscuous mode
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.270 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99db3452-80, col_values=(('external_ids', {'iface-id': '59e7d558-49d1-48cf-b926-27e93fe381b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:01 compute-0 ovn_controller[152662]: 2025-10-14T09:10:01Z|00757|binding|INFO|Releasing lport 59e7d558-49d1-48cf-b926-27e93fe381b1 from this chassis (sb_readonly=0)
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.299 2 DEBUG nova.compute.manager [req-8188657d-21b2-4446-855b-4ca668ae99b7 req-c816b5ec-5e2b-413a-8767-66a557f905b6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Received event network-vif-plugged-705e7559-ae3c-461c-be70-b8dee31808cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.299 2 DEBUG oslo_concurrency.lockutils [req-8188657d-21b2-4446-855b-4ca668ae99b7 req-c816b5ec-5e2b-413a-8767-66a557f905b6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.300 2 DEBUG oslo_concurrency.lockutils [req-8188657d-21b2-4446-855b-4ca668ae99b7 req-c816b5ec-5e2b-413a-8767-66a557f905b6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.300 2 DEBUG oslo_concurrency.lockutils [req-8188657d-21b2-4446-855b-4ca668ae99b7 req-c816b5ec-5e2b-413a-8767-66a557f905b6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.301 2 DEBUG nova.compute.manager [req-8188657d-21b2-4446-855b-4ca668ae99b7 req-c816b5ec-5e2b-413a-8767-66a557f905b6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Processing event network-vif-plugged-705e7559-ae3c-461c-be70-b8dee31808cf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.308 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/99db3452-8467-4a2b-a51d-30679c346bb2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/99db3452-8467-4a2b-a51d-30679c346bb2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.309 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7b640f30-bdad-40a9-81ef-451037a77da9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.309 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-99db3452-8467-4a2b-a51d-30679c346bb2
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/99db3452-8467-4a2b-a51d-30679c346bb2.pid.haproxy
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 99db3452-8467-4a2b-a51d-30679c346bb2
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.310 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'env', 'PROCESS_TAG=haproxy-99db3452-8467-4a2b-a51d-30679c346bb2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/99db3452-8467-4a2b-a51d-30679c346bb2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.429 2 INFO nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Creating config drive at /var/lib/nova/instances/16c93e17-00f2-4710-a0e4-83eb60430088/disk.config.rescue
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.437 2 DEBUG oslo_concurrency.processutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/16c93e17-00f2-4710-a0e4-83eb60430088/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgos8w110 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.598 2 DEBUG oslo_concurrency.processutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/16c93e17-00f2-4710-a0e4-83eb60430088/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgos8w110" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.634 2 DEBUG nova.storage.rbd_utils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image 16c93e17-00f2-4710-a0e4-83eb60430088_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.640 2 DEBUG oslo_concurrency.processutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/16c93e17-00f2-4710-a0e4-83eb60430088/disk.config.rescue 16c93e17-00f2-4710-a0e4-83eb60430088_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:01 compute-0 podman[337295]: 2025-10-14 09:10:01.715400072 +0000 UTC m=+0.073011979 container create 63d8349d0d90ef5ce5e9ee9f63b581b6a576957391b29b83e801ef4dc4101903 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 09:10:01 compute-0 systemd[1]: Started libpod-conmon-63d8349d0d90ef5ce5e9ee9f63b581b6a576957391b29b83e801ef4dc4101903.scope.
Oct 14 09:10:01 compute-0 podman[337295]: 2025-10-14 09:10:01.671420309 +0000 UTC m=+0.029032216 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:10:01 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2dffb398b53bc36ddea72a8fdf813e83b91c33fabb79b3429d94f804380a9ca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:10:01 compute-0 podman[337295]: 2025-10-14 09:10:01.814751549 +0000 UTC m=+0.172363546 container init 63d8349d0d90ef5ce5e9ee9f63b581b6a576957391b29b83e801ef4dc4101903 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 09:10:01 compute-0 podman[337295]: 2025-10-14 09:10:01.824427887 +0000 UTC m=+0.182039834 container start 63d8349d0d90ef5ce5e9ee9f63b581b6a576957391b29b83e801ef4dc4101903 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:10:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 213 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 9.4 MiB/s wr, 345 op/s
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.832 2 DEBUG oslo_concurrency.processutils [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/16c93e17-00f2-4710-a0e4-83eb60430088/disk.config.rescue 16c93e17-00f2-4710-a0e4-83eb60430088_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.833 2 INFO nova.virt.libvirt.driver [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Deleting local config drive /var/lib/nova/instances/16c93e17-00f2-4710-a0e4-83eb60430088/disk.config.rescue because it was imported into RBD.
Oct 14 09:10:01 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[337329]: [NOTICE]   (337336) : New worker (337338) forked
Oct 14 09:10:01 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[337329]: [NOTICE]   (337336) : Loading success.
Oct 14 09:10:01 compute-0 kernel: tap2aa7bfa4-d8: entered promiscuous mode
Oct 14 09:10:01 compute-0 NetworkManager[44885]: <info>  [1760433001.9163] manager: (tap2aa7bfa4-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/320)
Oct 14 09:10:01 compute-0 systemd-udevd[337231]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:10:01 compute-0 ovn_controller[152662]: 2025-10-14T09:10:01Z|00758|binding|INFO|Claiming lport 2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 for this chassis.
Oct 14 09:10:01 compute-0 ovn_controller[152662]: 2025-10-14T09:10:01Z|00759|binding|INFO|2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549: Claiming fa:16:3e:f5:8d:59 10.100.0.4
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:01 compute-0 NetworkManager[44885]: <info>  [1760433001.9332] device (tap2aa7bfa4-d8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:10:01 compute-0 NetworkManager[44885]: <info>  [1760433001.9340] device (tap2aa7bfa4-d8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.936 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:8d:59 10.100.0.4'], port_security=['fa:16:3e:f5:8d:59 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '16c93e17-00f2-4710-a0e4-83eb60430088', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb52c397-b1e0-4244-ac38-60ca1e4abace', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51ae58236f6a432e93764d455a502033', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'fcb895cc-6512-4f14-be87-dd3d6289c2b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=400ed612-fb0d-4559-97c6-466a993e3d60, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.937 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 in datapath fb52c397-b1e0-4244-ac38-60ca1e4abace bound to our chassis
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.938 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fb52c397-b1e0-4244-ac38-60ca1e4abace or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:10:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:01.939 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[26679a3b-1cc9-486b-bd55-1187cf11d764]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:01 compute-0 ovn_controller[152662]: 2025-10-14T09:10:01Z|00760|binding|INFO|Setting lport 2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 ovn-installed in OVS
Oct 14 09:10:01 compute-0 ovn_controller[152662]: 2025-10-14T09:10:01Z|00761|binding|INFO|Setting lport 2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 up in Southbound
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:01 compute-0 nova_compute[259627]: 2025-10-14 09:10:01.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:01 compute-0 systemd-machined[214636]: New machine qemu-91-instance-00000049.
Oct 14 09:10:01 compute-0 systemd[1]: Started Virtual Machine qemu-91-instance-00000049.
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.601 2 DEBUG nova.compute.manager [req-88c4b3c1-f4b3-427e-ba3f-38870fd63046 req-4a13573a-a1c5-468b-a518-1dcdfe494c80 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Received event network-vif-plugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.602 2 DEBUG oslo_concurrency.lockutils [req-88c4b3c1-f4b3-427e-ba3f-38870fd63046 req-4a13573a-a1c5-468b-a518-1dcdfe494c80 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.602 2 DEBUG oslo_concurrency.lockutils [req-88c4b3c1-f4b3-427e-ba3f-38870fd63046 req-4a13573a-a1c5-468b-a518-1dcdfe494c80 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.602 2 DEBUG oslo_concurrency.lockutils [req-88c4b3c1-f4b3-427e-ba3f-38870fd63046 req-4a13573a-a1c5-468b-a518-1dcdfe494c80 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.602 2 DEBUG nova.compute.manager [req-88c4b3c1-f4b3-427e-ba3f-38870fd63046 req-4a13573a-a1c5-468b-a518-1dcdfe494c80 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] No waiting events found dispatching network-vif-plugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.603 2 WARNING nova.compute.manager [req-88c4b3c1-f4b3-427e-ba3f-38870fd63046 req-4a13573a-a1c5-468b-a518-1dcdfe494c80 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Received unexpected event network-vif-plugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 for instance with vm_state active and task_state rescuing.
Oct 14 09:10:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:10:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:10:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:10:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:10:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:10:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:10:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.894 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 16c93e17-00f2-4710-a0e4-83eb60430088 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.895 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433002.8935013, 16c93e17-00f2-4710-a0e4-83eb60430088 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.895 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] VM Resumed (Lifecycle Event)
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.900 2 DEBUG nova.compute.manager [None req-c758ad4d-637f-4527-84eb-bd9f15980c02 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.912 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.915 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.924 2 DEBUG nova.compute.manager [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.927 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.930 2 INFO nova.virt.libvirt.driver [-] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Instance spawned successfully.
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.931 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.953 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.954 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433002.8945792, 16c93e17-00f2-4710-a0e4-83eb60430088 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.954 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] VM Started (Lifecycle Event)
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.963 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.964 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.965 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.965 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.966 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.966 2 DEBUG nova.virt.libvirt.driver [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.975 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:02 compute-0 nova_compute[259627]: 2025-10-14 09:10:02.979 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.005 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for dfb68a47-709d-40e3-8a17-01d9c3fb084b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.006 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433002.92356, dfb68a47-709d-40e3-8a17-01d9c3fb084b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.006 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] VM Started (Lifecycle Event)
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.025 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.029 2 DEBUG nova.compute.manager [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.031 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:10:03 compute-0 ceph-mon[74249]: pgmap v1667: 305 pgs: 305 active+clean; 213 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 9.4 MiB/s wr, 345 op/s
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.057 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.057 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433002.9236345, dfb68a47-709d-40e3-8a17-01d9c3fb084b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.058 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] VM Paused (Lifecycle Event)
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.080 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.085 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433002.926429, dfb68a47-709d-40e3-8a17-01d9c3fb084b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.085 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] VM Resumed (Lifecycle Event)
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.100 2 DEBUG oslo_concurrency.lockutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.101 2 DEBUG oslo_concurrency.lockutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.101 2 DEBUG nova.objects.instance [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.107 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.111 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.165 2 DEBUG oslo_concurrency.lockutils [None req-2924ac11-1eff-44d0-bed7-446585b2fafb 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.544 2 DEBUG nova.compute.manager [req-157fcb74-fcf4-4893-a23f-62d93d983b6d req-de7ef463-fedc-4523-ad85-d8107b4a2317 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Received event network-vif-plugged-705e7559-ae3c-461c-be70-b8dee31808cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.545 2 DEBUG oslo_concurrency.lockutils [req-157fcb74-fcf4-4893-a23f-62d93d983b6d req-de7ef463-fedc-4523-ad85-d8107b4a2317 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.546 2 DEBUG oslo_concurrency.lockutils [req-157fcb74-fcf4-4893-a23f-62d93d983b6d req-de7ef463-fedc-4523-ad85-d8107b4a2317 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.547 2 DEBUG oslo_concurrency.lockutils [req-157fcb74-fcf4-4893-a23f-62d93d983b6d req-de7ef463-fedc-4523-ad85-d8107b4a2317 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.547 2 DEBUG nova.compute.manager [req-157fcb74-fcf4-4893-a23f-62d93d983b6d req-de7ef463-fedc-4523-ad85-d8107b4a2317 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] No waiting events found dispatching network-vif-plugged-705e7559-ae3c-461c-be70-b8dee31808cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.548 2 WARNING nova.compute.manager [req-157fcb74-fcf4-4893-a23f-62d93d983b6d req-de7ef463-fedc-4523-ad85-d8107b4a2317 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Received unexpected event network-vif-plugged-705e7559-ae3c-461c-be70-b8dee31808cf for instance with vm_state active and task_state None.
Oct 14 09:10:03 compute-0 ovn_controller[152662]: 2025-10-14T09:10:03Z|00762|binding|INFO|Releasing lport 59e7d558-49d1-48cf-b926-27e93fe381b1 from this chassis (sb_readonly=0)
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:03 compute-0 ovn_controller[152662]: 2025-10-14T09:10:03Z|00763|binding|INFO|Releasing lport 59e7d558-49d1-48cf-b926-27e93fe381b1 from this chassis (sb_readonly=0)
Oct 14 09:10:03 compute-0 nova_compute[259627]: 2025-10-14 09:10:03.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 213 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 8.5 MiB/s wr, 311 op/s
Oct 14 09:10:04 compute-0 nova_compute[259627]: 2025-10-14 09:10:04.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:05 compute-0 ceph-mon[74249]: pgmap v1668: 305 pgs: 305 active+clean; 213 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 8.5 MiB/s wr, 311 op/s
Oct 14 09:10:05 compute-0 nova_compute[259627]: 2025-10-14 09:10:05.198 2 DEBUG nova.compute.manager [req-91ea278e-a619-4b5c-9d2a-c3006e47677f req-23de3105-cf4b-4f03-b408-46de60c40032 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Received event network-vif-plugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:05 compute-0 nova_compute[259627]: 2025-10-14 09:10:05.198 2 DEBUG oslo_concurrency.lockutils [req-91ea278e-a619-4b5c-9d2a-c3006e47677f req-23de3105-cf4b-4f03-b408-46de60c40032 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:05 compute-0 nova_compute[259627]: 2025-10-14 09:10:05.199 2 DEBUG oslo_concurrency.lockutils [req-91ea278e-a619-4b5c-9d2a-c3006e47677f req-23de3105-cf4b-4f03-b408-46de60c40032 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:05 compute-0 nova_compute[259627]: 2025-10-14 09:10:05.199 2 DEBUG oslo_concurrency.lockutils [req-91ea278e-a619-4b5c-9d2a-c3006e47677f req-23de3105-cf4b-4f03-b408-46de60c40032 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:05 compute-0 nova_compute[259627]: 2025-10-14 09:10:05.200 2 DEBUG nova.compute.manager [req-91ea278e-a619-4b5c-9d2a-c3006e47677f req-23de3105-cf4b-4f03-b408-46de60c40032 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] No waiting events found dispatching network-vif-plugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:05 compute-0 nova_compute[259627]: 2025-10-14 09:10:05.200 2 WARNING nova.compute.manager [req-91ea278e-a619-4b5c-9d2a-c3006e47677f req-23de3105-cf4b-4f03-b408-46de60c40032 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Received unexpected event network-vif-plugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 for instance with vm_state rescued and task_state None.
Oct 14 09:10:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:10:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/36587151' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:10:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:10:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/36587151' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:10:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 214 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 7.9 MiB/s wr, 438 op/s
Oct 14 09:10:05 compute-0 nova_compute[259627]: 2025-10-14 09:10:05.932 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432990.930514, 7167ef21-b041-47b9-8d93-55b5853f4d01 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:05 compute-0 nova_compute[259627]: 2025-10-14 09:10:05.932 2 INFO nova.compute.manager [-] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] VM Stopped (Lifecycle Event)
Oct 14 09:10:05 compute-0 nova_compute[259627]: 2025-10-14 09:10:05.955 2 DEBUG nova.compute.manager [None req-2d31c334-2a53-46a4-b6ae-b94d973f31f0 - - - - - -] [instance: 7167ef21-b041-47b9-8d93-55b5853f4d01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:05 compute-0 nova_compute[259627]: 2025-10-14 09:10:05.988 2 DEBUG oslo_concurrency.lockutils [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:05 compute-0 nova_compute[259627]: 2025-10-14 09:10:05.989 2 DEBUG oslo_concurrency.lockutils [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:05 compute-0 nova_compute[259627]: 2025-10-14 09:10:05.989 2 DEBUG oslo_concurrency.lockutils [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:05 compute-0 nova_compute[259627]: 2025-10-14 09:10:05.989 2 DEBUG oslo_concurrency.lockutils [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:05 compute-0 nova_compute[259627]: 2025-10-14 09:10:05.990 2 DEBUG oslo_concurrency.lockutils [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:05 compute-0 nova_compute[259627]: 2025-10-14 09:10:05.991 2 INFO nova.compute.manager [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Terminating instance
Oct 14 09:10:05 compute-0 nova_compute[259627]: 2025-10-14 09:10:05.992 2 DEBUG nova.compute.manager [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:10:06 compute-0 kernel: tap705e7559-ae (unregistering): left promiscuous mode
Oct 14 09:10:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/36587151' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:10:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/36587151' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:10:06 compute-0 NetworkManager[44885]: <info>  [1760433006.0606] device (tap705e7559-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:06 compute-0 ovn_controller[152662]: 2025-10-14T09:10:06Z|00764|binding|INFO|Releasing lport 705e7559-ae3c-461c-be70-b8dee31808cf from this chassis (sb_readonly=0)
Oct 14 09:10:06 compute-0 ovn_controller[152662]: 2025-10-14T09:10:06Z|00765|binding|INFO|Setting lport 705e7559-ae3c-461c-be70-b8dee31808cf down in Southbound
Oct 14 09:10:06 compute-0 ovn_controller[152662]: 2025-10-14T09:10:06Z|00766|binding|INFO|Removing iface tap705e7559-ae ovn-installed in OVS
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:06.134 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:ea:f5 10.100.0.11'], port_security=['fa:16:3e:60:ea:f5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'dfb68a47-709d-40e3-8a17-01d9c3fb084b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99db3452-8467-4a2b-a51d-30679c346bb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9099e3128b584ff7a140b8021451223e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '14cbcf9b-48b8-496d-985e-160ef22d10a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cb6279-57ca-4d4a-8018-5af2d7c42670, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=705e7559-ae3c-461c-be70-b8dee31808cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:10:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:06.137 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 705e7559-ae3c-461c-be70-b8dee31808cf in datapath 99db3452-8467-4a2b-a51d-30679c346bb2 unbound from our chassis
Oct 14 09:10:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:06.139 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99db3452-8467-4a2b-a51d-30679c346bb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:10:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:06.141 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e602073e-ff4b-4131-8763-5be68a20e654]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:06.142 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 namespace which is not needed anymore
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:06 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d00000048.scope: Deactivated successfully.
Oct 14 09:10:06 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d00000048.scope: Consumed 4.963s CPU time.
Oct 14 09:10:06 compute-0 systemd-machined[214636]: Machine qemu-90-instance-00000048 terminated.
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.238 2 INFO nova.virt.libvirt.driver [-] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Instance destroyed successfully.
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.240 2 DEBUG nova.objects.instance [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'resources' on Instance uuid dfb68a47-709d-40e3-8a17-01d9c3fb084b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.255 2 DEBUG nova.virt.libvirt.vif [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:09:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1083913759',display_name='tempest-ServerDiskConfigTestJSON-server-1083913759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1083913759',id=72,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:10:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9099e3128b584ff7a140b8021451223e',ramdisk_id='',reservation_id='r-0sy3orhw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1253454894',owner_user_name='tempest-ServerDiskConfigTestJSON-1253454894-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:10:03Z,user_data=None,user_id='979aa20794dc414f91c59f224a0db083',uuid=dfb68a47-709d-40e3-8a17-01d9c3fb084b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "705e7559-ae3c-461c-be70-b8dee31808cf", "address": "fa:16:3e:60:ea:f5", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705e7559-ae", "ovs_interfaceid": "705e7559-ae3c-461c-be70-b8dee31808cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.256 2 DEBUG nova.network.os_vif_util [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converting VIF {"id": "705e7559-ae3c-461c-be70-b8dee31808cf", "address": "fa:16:3e:60:ea:f5", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705e7559-ae", "ovs_interfaceid": "705e7559-ae3c-461c-be70-b8dee31808cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.256 2 DEBUG nova.network.os_vif_util [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:ea:f5,bridge_name='br-int',has_traffic_filtering=True,id=705e7559-ae3c-461c-be70-b8dee31808cf,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap705e7559-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.257 2 DEBUG os_vif [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:ea:f5,bridge_name='br-int',has_traffic_filtering=True,id=705e7559-ae3c-461c-be70-b8dee31808cf,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap705e7559-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.259 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap705e7559-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.268 2 INFO os_vif [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:ea:f5,bridge_name='br-int',has_traffic_filtering=True,id=705e7559-ae3c-461c-be70-b8dee31808cf,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap705e7559-ae')
Oct 14 09:10:06 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[337329]: [NOTICE]   (337336) : haproxy version is 2.8.14-c23fe91
Oct 14 09:10:06 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[337329]: [NOTICE]   (337336) : path to executable is /usr/sbin/haproxy
Oct 14 09:10:06 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[337329]: [WARNING]  (337336) : Exiting Master process...
Oct 14 09:10:06 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[337329]: [ALERT]    (337336) : Current worker (337338) exited with code 143 (Terminated)
Oct 14 09:10:06 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[337329]: [WARNING]  (337336) : All workers exited. Exiting... (0)
Oct 14 09:10:06 compute-0 systemd[1]: libpod-63d8349d0d90ef5ce5e9ee9f63b581b6a576957391b29b83e801ef4dc4101903.scope: Deactivated successfully.
Oct 14 09:10:06 compute-0 podman[337500]: 2025-10-14 09:10:06.334398545 +0000 UTC m=+0.047118702 container died 63d8349d0d90ef5ce5e9ee9f63b581b6a576957391b29b83e801ef4dc4101903 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:10:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63d8349d0d90ef5ce5e9ee9f63b581b6a576957391b29b83e801ef4dc4101903-userdata-shm.mount: Deactivated successfully.
Oct 14 09:10:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2dffb398b53bc36ddea72a8fdf813e83b91c33fabb79b3429d94f804380a9ca-merged.mount: Deactivated successfully.
Oct 14 09:10:06 compute-0 podman[337500]: 2025-10-14 09:10:06.37319683 +0000 UTC m=+0.085916997 container cleanup 63d8349d0d90ef5ce5e9ee9f63b581b6a576957391b29b83e801ef4dc4101903 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:10:06 compute-0 systemd[1]: libpod-conmon-63d8349d0d90ef5ce5e9ee9f63b581b6a576957391b29b83e801ef4dc4101903.scope: Deactivated successfully.
Oct 14 09:10:06 compute-0 podman[337547]: 2025-10-14 09:10:06.458951572 +0000 UTC m=+0.056854621 container remove 63d8349d0d90ef5ce5e9ee9f63b581b6a576957391b29b83e801ef4dc4101903 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:10:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:06.466 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b99d8def-9770-49f8-ab62-1495696c1bb0]: (4, ('Tue Oct 14 09:10:06 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 (63d8349d0d90ef5ce5e9ee9f63b581b6a576957391b29b83e801ef4dc4101903)\n63d8349d0d90ef5ce5e9ee9f63b581b6a576957391b29b83e801ef4dc4101903\nTue Oct 14 09:10:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 (63d8349d0d90ef5ce5e9ee9f63b581b6a576957391b29b83e801ef4dc4101903)\n63d8349d0d90ef5ce5e9ee9f63b581b6a576957391b29b83e801ef4dc4101903\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:06.468 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[11e950c0-5106-4613-801f-c5d8018c71ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:06.470 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99db3452-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:06 compute-0 kernel: tap99db3452-80: left promiscuous mode
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:06.503 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[11dd6d5c-0ff4-4ea0-b7f8-3b86b301d144]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:06.533 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e234275e-7de1-4744-9cae-ffd24bfb7059]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:06.535 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a91eb2e6-678b-4fbc-b082-531999231508]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:06.551 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[732e0d0d-f5e1-4011-8bfa-3f5ac3c4716b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677979, 'reachable_time': 42412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337563, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d99db3452\x2d8467\x2d4a2b\x2da51d\x2d30679c346bb2.mount: Deactivated successfully.
Oct 14 09:10:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:06.557 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:10:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:06.557 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[43a1e696-4a3a-4327-b0c2-5296fca93407]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.666 2 INFO nova.virt.libvirt.driver [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Deleting instance files /var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b_del
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.667 2 INFO nova.virt.libvirt.driver [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Deletion of /var/lib/nova/instances/dfb68a47-709d-40e3-8a17-01d9c3fb084b_del complete
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.720 2 INFO nova.compute.manager [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.721 2 DEBUG oslo.service.loopingcall [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.721 2 DEBUG nova.compute.manager [-] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:10:06 compute-0 nova_compute[259627]: 2025-10-14 09:10:06.722 2 DEBUG nova.network.neutron [-] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:10:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:07.027 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:07.028 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:07.028 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:07 compute-0 ceph-mon[74249]: pgmap v1669: 305 pgs: 305 active+clean; 214 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 7.9 MiB/s wr, 438 op/s
Oct 14 09:10:07 compute-0 nova_compute[259627]: 2025-10-14 09:10:07.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:07 compute-0 nova_compute[259627]: 2025-10-14 09:10:07.551 2 DEBUG nova.compute.manager [req-546c1375-795d-44bc-966c-ddb91a99be71 req-38ff9d42-5f60-4040-b201-25a89e3d1886 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Received event network-vif-unplugged-705e7559-ae3c-461c-be70-b8dee31808cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:07 compute-0 nova_compute[259627]: 2025-10-14 09:10:07.552 2 DEBUG oslo_concurrency.lockutils [req-546c1375-795d-44bc-966c-ddb91a99be71 req-38ff9d42-5f60-4040-b201-25a89e3d1886 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:07 compute-0 nova_compute[259627]: 2025-10-14 09:10:07.552 2 DEBUG oslo_concurrency.lockutils [req-546c1375-795d-44bc-966c-ddb91a99be71 req-38ff9d42-5f60-4040-b201-25a89e3d1886 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:07 compute-0 nova_compute[259627]: 2025-10-14 09:10:07.553 2 DEBUG oslo_concurrency.lockutils [req-546c1375-795d-44bc-966c-ddb91a99be71 req-38ff9d42-5f60-4040-b201-25a89e3d1886 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:07 compute-0 nova_compute[259627]: 2025-10-14 09:10:07.553 2 DEBUG nova.compute.manager [req-546c1375-795d-44bc-966c-ddb91a99be71 req-38ff9d42-5f60-4040-b201-25a89e3d1886 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] No waiting events found dispatching network-vif-unplugged-705e7559-ae3c-461c-be70-b8dee31808cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:07 compute-0 nova_compute[259627]: 2025-10-14 09:10:07.554 2 DEBUG nova.compute.manager [req-546c1375-795d-44bc-966c-ddb91a99be71 req-38ff9d42-5f60-4040-b201-25a89e3d1886 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Received event network-vif-unplugged-705e7559-ae3c-461c-be70-b8dee31808cf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:10:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:10:07 compute-0 nova_compute[259627]: 2025-10-14 09:10:07.760 2 DEBUG nova.network.neutron [-] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:10:07 compute-0 nova_compute[259627]: 2025-10-14 09:10:07.780 2 INFO nova.compute.manager [-] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Took 1.06 seconds to deallocate network for instance.
Oct 14 09:10:07 compute-0 nova_compute[259627]: 2025-10-14 09:10:07.829 2 DEBUG oslo_concurrency.lockutils [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:07 compute-0 nova_compute[259627]: 2025-10-14 09:10:07.830 2 DEBUG oslo_concurrency.lockutils [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 214 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 229 op/s
Oct 14 09:10:07 compute-0 nova_compute[259627]: 2025-10-14 09:10:07.875 2 DEBUG nova.compute.manager [req-e714ced1-bfda-4320-98c9-9c73dea50ad0 req-89da6d79-7fb1-402b-84ef-0c8581820a93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Received event network-vif-deleted-705e7559-ae3c-461c-be70-b8dee31808cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:07 compute-0 nova_compute[259627]: 2025-10-14 09:10:07.901 2 DEBUG oslo_concurrency.processutils [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.167 2 DEBUG oslo_concurrency.lockutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.168 2 DEBUG oslo_concurrency.lockutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.187 2 DEBUG nova.compute.manager [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.250 2 DEBUG oslo_concurrency.lockutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:10:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1423723725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.347 2 DEBUG oslo_concurrency.processutils [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.354 2 DEBUG nova.compute.provider_tree [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.368 2 DEBUG nova.scheduler.client.report [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.390 2 DEBUG oslo_concurrency.lockutils [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.392 2 DEBUG oslo_concurrency.lockutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.398 2 DEBUG nova.virt.hardware [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.398 2 INFO nova.compute.claims [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.418 2 INFO nova.scheduler.client.report [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Deleted allocations for instance dfb68a47-709d-40e3-8a17-01d9c3fb084b
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.486 2 DEBUG oslo_concurrency.lockutils [None req-05a296b2-89d1-4e25-9c27-f435d4305e87 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.545 2 DEBUG oslo_concurrency.processutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.594 2 DEBUG oslo_concurrency.lockutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "2f8845c8-0bc8-4ce6-bf89-f775899b2666" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.595 2 DEBUG oslo_concurrency.lockutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "2f8845c8-0bc8-4ce6-bf89-f775899b2666" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.612 2 DEBUG nova.compute.manager [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.680 2 DEBUG oslo_concurrency.lockutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.963 2 DEBUG oslo_concurrency.lockutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Acquiring lock "2a2a6043-9b27-4acb-b7db-927b576ffafc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.964 2 DEBUG oslo_concurrency.lockutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Lock "2a2a6043-9b27-4acb-b7db-927b576ffafc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:10:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183969461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.982 2 DEBUG nova.compute.manager [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:10:08 compute-0 nova_compute[259627]: 2025-10-14 09:10:08.993 2 DEBUG oslo_concurrency.processutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.002 2 DEBUG nova.compute.provider_tree [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.035 2 DEBUG nova.scheduler.client.report [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.064 2 DEBUG oslo_concurrency.lockutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.068 2 DEBUG oslo_concurrency.lockutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.069 2 DEBUG nova.compute.manager [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:10:09 compute-0 ceph-mon[74249]: pgmap v1670: 305 pgs: 305 active+clean; 214 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 229 op/s
Oct 14 09:10:09 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1423723725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:09 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2183969461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.073 2 DEBUG oslo_concurrency.lockutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.081 2 DEBUG nova.virt.hardware [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.081 2 INFO nova.compute.claims [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.175 2 DEBUG nova.compute.manager [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.176 2 DEBUG nova.network.neutron [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.199 2 INFO nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.220 2 DEBUG nova.compute.manager [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.298 2 DEBUG oslo_concurrency.processutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.349 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432994.3129086, e065d857-2df9-4199-aa98-41ca3c436bad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.351 2 INFO nova.compute.manager [-] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] VM Stopped (Lifecycle Event)
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.355 2 DEBUG nova.compute.manager [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.358 2 DEBUG nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.359 2 INFO nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Creating image(s)
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.396 2 DEBUG nova.storage.rbd_utils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.422 2 DEBUG nova.storage.rbd_utils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.450 2 DEBUG nova.storage.rbd_utils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.454 2 DEBUG oslo_concurrency.processutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.522 2 DEBUG nova.compute.manager [None req-26bd2753-aaa8-43f7-9e20-25a0696331a1 - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.556 2 DEBUG oslo_concurrency.processutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.559 2 DEBUG oslo_concurrency.lockutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.559 2 DEBUG oslo_concurrency.lockutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.559 2 DEBUG oslo_concurrency.lockutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.578 2 DEBUG nova.storage.rbd_utils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.581 2 DEBUG oslo_concurrency.processutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.696 2 DEBUG nova.policy [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a268118aae14d449097f4a26371415e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '51ae58236f6a432e93764d455a502033', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.717 2 DEBUG nova.compute.manager [req-ba9c4b14-a683-4231-ba33-14b87f6c7976 req-5c479441-ed72-4ac4-874b-efde0b2901f9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Received event network-vif-plugged-705e7559-ae3c-461c-be70-b8dee31808cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.717 2 DEBUG oslo_concurrency.lockutils [req-ba9c4b14-a683-4231-ba33-14b87f6c7976 req-5c479441-ed72-4ac4-874b-efde0b2901f9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.718 2 DEBUG oslo_concurrency.lockutils [req-ba9c4b14-a683-4231-ba33-14b87f6c7976 req-5c479441-ed72-4ac4-874b-efde0b2901f9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.720 2 DEBUG oslo_concurrency.lockutils [req-ba9c4b14-a683-4231-ba33-14b87f6c7976 req-5c479441-ed72-4ac4-874b-efde0b2901f9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dfb68a47-709d-40e3-8a17-01d9c3fb084b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.721 2 DEBUG nova.compute.manager [req-ba9c4b14-a683-4231-ba33-14b87f6c7976 req-5c479441-ed72-4ac4-874b-efde0b2901f9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] No waiting events found dispatching network-vif-plugged-705e7559-ae3c-461c-be70-b8dee31808cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.723 2 WARNING nova.compute.manager [req-ba9c4b14-a683-4231-ba33-14b87f6c7976 req-5c479441-ed72-4ac4-874b-efde0b2901f9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Received unexpected event network-vif-plugged-705e7559-ae3c-461c-be70-b8dee31808cf for instance with vm_state deleted and task_state None.
Oct 14 09:10:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:10:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/599321733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.805 2 DEBUG oslo_concurrency.processutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.814 2 DEBUG nova.compute.provider_tree [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:10:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 214 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 229 op/s
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.840 2 DEBUG nova.scheduler.client.report [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.863 2 DEBUG oslo_concurrency.lockutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.864 2 DEBUG nova.compute.manager [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.866 2 DEBUG oslo_concurrency.processutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.866 2 DEBUG oslo_concurrency.lockutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.926 2 DEBUG nova.virt.hardware [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.927 2 INFO nova.compute.claims [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.930 2 DEBUG nova.compute.manager [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.931 2 DEBUG nova.network.neutron [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.940 2 DEBUG nova.storage.rbd_utils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] resizing rbd image e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:10:09 compute-0 nova_compute[259627]: 2025-10-14 09:10:09.977 2 INFO nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.000 2 DEBUG nova.compute.manager [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.046 2 DEBUG nova.objects.instance [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'migration_context' on Instance uuid e503e351-20ec-43fb-b5f8-7af68dff5bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.058 2 DEBUG nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.058 2 DEBUG nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Ensure instance console log exists: /var/lib/nova/instances/e503e351-20ec-43fb-b5f8-7af68dff5bcd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.059 2 DEBUG oslo_concurrency.lockutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.059 2 DEBUG oslo_concurrency.lockutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.060 2 DEBUG oslo_concurrency.lockutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:10 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/599321733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.098 2 DEBUG nova.compute.manager [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.099 2 DEBUG nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.099 2 INFO nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Creating image(s)
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.118 2 DEBUG nova.storage.rbd_utils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image 2f8845c8-0bc8-4ce6-bf89-f775899b2666_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.137 2 DEBUG nova.storage.rbd_utils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image 2f8845c8-0bc8-4ce6-bf89-f775899b2666_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.156 2 DEBUG nova.storage.rbd_utils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image 2f8845c8-0bc8-4ce6-bf89-f775899b2666_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.160 2 DEBUG oslo_concurrency.processutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.206 2 DEBUG nova.policy [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '979aa20794dc414f91c59f224a0db083', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9099e3128b584ff7a140b8021451223e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.233 2 DEBUG oslo_concurrency.processutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.234 2 DEBUG oslo_concurrency.lockutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.234 2 DEBUG oslo_concurrency.lockutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.234 2 DEBUG oslo_concurrency.lockutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.256 2 DEBUG nova.storage.rbd_utils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image 2f8845c8-0bc8-4ce6-bf89-f775899b2666_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.260 2 DEBUG oslo_concurrency.processutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2f8845c8-0bc8-4ce6-bf89-f775899b2666_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.308 2 DEBUG oslo_concurrency.processutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.353 2 DEBUG nova.network.neutron [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Successfully created port: 5bb94dcb-f594-4577-9c20-65816d7c57a0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.627 2 DEBUG oslo_concurrency.processutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2f8845c8-0bc8-4ce6-bf89-f775899b2666_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.679 2 DEBUG nova.storage.rbd_utils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] resizing rbd image 2f8845c8-0bc8-4ce6-bf89-f775899b2666_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.757 2 DEBUG nova.objects.instance [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'migration_context' on Instance uuid 2f8845c8-0bc8-4ce6-bf89-f775899b2666 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.771 2 DEBUG nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.772 2 DEBUG nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Ensure instance console log exists: /var/lib/nova/instances/2f8845c8-0bc8-4ce6-bf89-f775899b2666/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.772 2 DEBUG oslo_concurrency.lockutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.773 2 DEBUG oslo_concurrency.lockutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.773 2 DEBUG oslo_concurrency.lockutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:10:10 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2921864910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.818 2 DEBUG oslo_concurrency.processutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.824 2 DEBUG nova.compute.provider_tree [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.839 2 DEBUG nova.scheduler.client.report [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.861 2 DEBUG oslo_concurrency.lockutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.862 2 DEBUG nova.compute.manager [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.924 2 DEBUG nova.compute.manager [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.943 2 INFO nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:10:10 compute-0 nova_compute[259627]: 2025-10-14 09:10:10.959 2 DEBUG nova.compute.manager [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:10:11 compute-0 ceph-mon[74249]: pgmap v1671: 305 pgs: 305 active+clean; 214 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 229 op/s
Oct 14 09:10:11 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2921864910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.091 2 DEBUG nova.compute.manager [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.093 2 DEBUG nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.094 2 INFO nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Creating image(s)
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.125 2 DEBUG nova.storage.rbd_utils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] rbd image 2a2a6043-9b27-4acb-b7db-927b576ffafc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.150 2 DEBUG nova.storage.rbd_utils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] rbd image 2a2a6043-9b27-4acb-b7db-927b576ffafc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.175 2 DEBUG nova.storage.rbd_utils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] rbd image 2a2a6043-9b27-4acb-b7db-927b576ffafc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.181 2 DEBUG oslo_concurrency.processutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.237 2 DEBUG nova.network.neutron [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Successfully created port: 9ecc282f-cf6e-46ad-96b5-188f7693ce1f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.289 2 DEBUG oslo_concurrency.processutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.290 2 DEBUG oslo_concurrency.lockutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.292 2 DEBUG oslo_concurrency.lockutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.292 2 DEBUG oslo_concurrency.lockutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.323 2 DEBUG nova.storage.rbd_utils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] rbd image 2a2a6043-9b27-4acb-b7db-927b576ffafc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.327 2 DEBUG oslo_concurrency.processutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2a2a6043-9b27-4acb-b7db-927b576ffafc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.375 2 DEBUG nova.network.neutron [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Successfully updated port: 5bb94dcb-f594-4577-9c20-65816d7c57a0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.409 2 DEBUG oslo_concurrency.lockutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "refresh_cache-e503e351-20ec-43fb-b5f8-7af68dff5bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.410 2 DEBUG oslo_concurrency.lockutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquired lock "refresh_cache-e503e351-20ec-43fb-b5f8-7af68dff5bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.411 2 DEBUG nova.network.neutron [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.440 2 DEBUG nova.compute.manager [req-3c128b88-d39a-4a5b-99a4-da733a4e1ded req-6a752e9d-5de1-486d-a6a6-4039e166cea9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received event network-changed-5bb94dcb-f594-4577-9c20-65816d7c57a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.441 2 DEBUG nova.compute.manager [req-3c128b88-d39a-4a5b-99a4-da733a4e1ded req-6a752e9d-5de1-486d-a6a6-4039e166cea9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Refreshing instance network info cache due to event network-changed-5bb94dcb-f594-4577-9c20-65816d7c57a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.441 2 DEBUG oslo_concurrency.lockutils [req-3c128b88-d39a-4a5b-99a4-da733a4e1ded req-6a752e9d-5de1-486d-a6a6-4039e166cea9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e503e351-20ec-43fb-b5f8-7af68dff5bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.630 2 DEBUG nova.network.neutron [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.655 2 DEBUG oslo_concurrency.processutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2a2a6043-9b27-4acb-b7db-927b576ffafc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.709 2 DEBUG nova.storage.rbd_utils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] resizing rbd image 2a2a6043-9b27-4acb-b7db-927b576ffafc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.817 2 DEBUG nova.objects.instance [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Lazy-loading 'migration_context' on Instance uuid 2a2a6043-9b27-4acb-b7db-927b576ffafc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.831 2 DEBUG nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.831 2 DEBUG nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Ensure instance console log exists: /var/lib/nova/instances/2a2a6043-9b27-4acb-b7db-927b576ffafc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.832 2 DEBUG oslo_concurrency.lockutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.832 2 DEBUG oslo_concurrency.lockutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.833 2 DEBUG oslo_concurrency.lockutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 220 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.9 MiB/s wr, 280 op/s
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.835 2 DEBUG nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.839 2 WARNING nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.844 2 DEBUG nova.virt.libvirt.host [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.845 2 DEBUG nova.virt.libvirt.host [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.849 2 DEBUG nova.virt.libvirt.host [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.850 2 DEBUG nova.virt.libvirt.host [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.850 2 DEBUG nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.851 2 DEBUG nova.virt.hardware [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.851 2 DEBUG nova.virt.hardware [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.851 2 DEBUG nova.virt.hardware [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.852 2 DEBUG nova.virt.hardware [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.852 2 DEBUG nova.virt.hardware [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.852 2 DEBUG nova.virt.hardware [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.853 2 DEBUG nova.virt.hardware [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.853 2 DEBUG nova.virt.hardware [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.853 2 DEBUG nova.virt.hardware [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.853 2 DEBUG nova.virt.hardware [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.854 2 DEBUG nova.virt.hardware [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:10:11 compute-0 nova_compute[259627]: 2025-10-14 09:10:11.857 2 DEBUG oslo_concurrency.processutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:10:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2970095419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:12 compute-0 nova_compute[259627]: 2025-10-14 09:10:12.334 2 DEBUG oslo_concurrency.processutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:12 compute-0 nova_compute[259627]: 2025-10-14 09:10:12.359 2 DEBUG nova.storage.rbd_utils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] rbd image 2a2a6043-9b27-4acb-b7db-927b576ffafc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:12 compute-0 nova_compute[259627]: 2025-10-14 09:10:12.364 2 DEBUG oslo_concurrency.processutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:12 compute-0 nova_compute[259627]: 2025-10-14 09:10:12.411 2 DEBUG nova.network.neutron [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Successfully updated port: 9ecc282f-cf6e-46ad-96b5-188f7693ce1f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:10:12 compute-0 nova_compute[259627]: 2025-10-14 09:10:12.431 2 DEBUG oslo_concurrency.lockutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "refresh_cache-2f8845c8-0bc8-4ce6-bf89-f775899b2666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:10:12 compute-0 nova_compute[259627]: 2025-10-14 09:10:12.431 2 DEBUG oslo_concurrency.lockutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquired lock "refresh_cache-2f8845c8-0bc8-4ce6-bf89-f775899b2666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:10:12 compute-0 nova_compute[259627]: 2025-10-14 09:10:12.432 2 DEBUG nova.network.neutron [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:10:12 compute-0 nova_compute[259627]: 2025-10-14 09:10:12.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:10:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:10:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3151824805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:12 compute-0 nova_compute[259627]: 2025-10-14 09:10:12.819 2 DEBUG oslo_concurrency.processutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:12 compute-0 nova_compute[259627]: 2025-10-14 09:10:12.821 2 DEBUG nova.objects.instance [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2a2a6043-9b27-4acb-b7db-927b576ffafc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:12 compute-0 nova_compute[259627]: 2025-10-14 09:10:12.842 2 DEBUG nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:10:12 compute-0 nova_compute[259627]:   <uuid>2a2a6043-9b27-4acb-b7db-927b576ffafc</uuid>
Oct 14 09:10:12 compute-0 nova_compute[259627]:   <name>instance-0000004c</name>
Oct 14 09:10:12 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:10:12 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:10:12 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersAaction247Test-server-192233660</nova:name>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:10:11</nova:creationTime>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:10:12 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:10:12 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:10:12 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:10:12 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:10:12 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:10:12 compute-0 nova_compute[259627]:         <nova:user uuid="0c82bf3580174f1486f9188a0fcf0e2f">tempest-ServersAaction247Test-419477546-project-member</nova:user>
Oct 14 09:10:12 compute-0 nova_compute[259627]:         <nova:project uuid="1aebe9af1e584ceda68760afadd7b7e1">tempest-ServersAaction247Test-419477546</nova:project>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:10:12 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:10:12 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <system>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <entry name="serial">2a2a6043-9b27-4acb-b7db-927b576ffafc</entry>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <entry name="uuid">2a2a6043-9b27-4acb-b7db-927b576ffafc</entry>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     </system>
Oct 14 09:10:12 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:10:12 compute-0 nova_compute[259627]:   <os>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:   </os>
Oct 14 09:10:12 compute-0 nova_compute[259627]:   <features>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:   </features>
Oct 14 09:10:12 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:10:12 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:10:12 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2a2a6043-9b27-4acb-b7db-927b576ffafc_disk">
Oct 14 09:10:12 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       </source>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:10:12 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2a2a6043-9b27-4acb-b7db-927b576ffafc_disk.config">
Oct 14 09:10:12 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       </source>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:10:12 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/2a2a6043-9b27-4acb-b7db-927b576ffafc/console.log" append="off"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <video>
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     </video>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:10:12 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:10:12 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:10:12 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:10:12 compute-0 nova_compute[259627]: </domain>
Oct 14 09:10:12 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:10:12 compute-0 nova_compute[259627]: 2025-10-14 09:10:12.905 2 DEBUG nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:10:12 compute-0 nova_compute[259627]: 2025-10-14 09:10:12.906 2 DEBUG nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:10:12 compute-0 nova_compute[259627]: 2025-10-14 09:10:12.906 2 INFO nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Using config drive
Oct 14 09:10:12 compute-0 nova_compute[259627]: 2025-10-14 09:10:12.947 2 DEBUG nova.storage.rbd_utils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] rbd image 2a2a6043-9b27-4acb-b7db-927b576ffafc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:12 compute-0 podman[338214]: 2025-10-14 09:10:12.97396565 +0000 UTC m=+0.085308472 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:10:12 compute-0 podman[338213]: 2025-10-14 09:10:12.987479863 +0000 UTC m=+0.094611352 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 14 09:10:13 compute-0 ceph-mon[74249]: pgmap v1672: 305 pgs: 305 active+clean; 220 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.9 MiB/s wr, 280 op/s
Oct 14 09:10:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2970095419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3151824805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.194 2 INFO nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Creating config drive at /var/lib/nova/instances/2a2a6043-9b27-4acb-b7db-927b576ffafc/disk.config
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.204 2 DEBUG oslo_concurrency.processutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2a2a6043-9b27-4acb-b7db-927b576ffafc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_z34ydh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.264 2 DEBUG nova.network.neutron [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Updating instance_info_cache with network_info: [{"id": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "address": "fa:16:3e:bb:fd:b8", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bb94dcb-f5", "ovs_interfaceid": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.267 2 DEBUG nova.network.neutron [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.299 2 DEBUG oslo_concurrency.lockutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Releasing lock "refresh_cache-e503e351-20ec-43fb-b5f8-7af68dff5bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.300 2 DEBUG nova.compute.manager [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Instance network_info: |[{"id": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "address": "fa:16:3e:bb:fd:b8", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bb94dcb-f5", "ovs_interfaceid": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.300 2 DEBUG oslo_concurrency.lockutils [req-3c128b88-d39a-4a5b-99a4-da733a4e1ded req-6a752e9d-5de1-486d-a6a6-4039e166cea9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e503e351-20ec-43fb-b5f8-7af68dff5bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.301 2 DEBUG nova.network.neutron [req-3c128b88-d39a-4a5b-99a4-da733a4e1ded req-6a752e9d-5de1-486d-a6a6-4039e166cea9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Refreshing network info cache for port 5bb94dcb-f594-4577-9c20-65816d7c57a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.304 2 DEBUG nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Start _get_guest_xml network_info=[{"id": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "address": "fa:16:3e:bb:fd:b8", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bb94dcb-f5", "ovs_interfaceid": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.310 2 WARNING nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.315 2 DEBUG nova.virt.libvirt.host [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.316 2 DEBUG nova.virt.libvirt.host [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.329 2 DEBUG nova.virt.libvirt.host [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.330 2 DEBUG nova.virt.libvirt.host [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.331 2 DEBUG nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.331 2 DEBUG nova.virt.hardware [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.332 2 DEBUG nova.virt.hardware [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.333 2 DEBUG nova.virt.hardware [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.333 2 DEBUG nova.virt.hardware [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.334 2 DEBUG nova.virt.hardware [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.334 2 DEBUG nova.virt.hardware [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.335 2 DEBUG nova.virt.hardware [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.335 2 DEBUG nova.virt.hardware [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.336 2 DEBUG nova.virt.hardware [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.336 2 DEBUG nova.virt.hardware [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.337 2 DEBUG nova.virt.hardware [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.341 2 DEBUG oslo_concurrency.processutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.381 2 DEBUG oslo_concurrency.processutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2a2a6043-9b27-4acb-b7db-927b576ffafc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_z34ydh" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.410 2 DEBUG nova.storage.rbd_utils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] rbd image 2a2a6043-9b27-4acb-b7db-927b576ffafc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.414 2 DEBUG oslo_concurrency.processutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2a2a6043-9b27-4acb-b7db-927b576ffafc/disk.config 2a2a6043-9b27-4acb-b7db-927b576ffafc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.531 2 DEBUG nova.compute.manager [req-43f7326f-7d7a-4347-be4e-7c33acc77fff req-fd488c85-c86f-4c2f-9bae-a8ff9c8fac45 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Received event network-changed-9ecc282f-cf6e-46ad-96b5-188f7693ce1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.532 2 DEBUG nova.compute.manager [req-43f7326f-7d7a-4347-be4e-7c33acc77fff req-fd488c85-c86f-4c2f-9bae-a8ff9c8fac45 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Refreshing instance network info cache due to event network-changed-9ecc282f-cf6e-46ad-96b5-188f7693ce1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.532 2 DEBUG oslo_concurrency.lockutils [req-43f7326f-7d7a-4347-be4e-7c33acc77fff req-fd488c85-c86f-4c2f-9bae-a8ff9c8fac45 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2f8845c8-0bc8-4ce6-bf89-f775899b2666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.595 2 DEBUG oslo_concurrency.processutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2a2a6043-9b27-4acb-b7db-927b576ffafc/disk.config 2a2a6043-9b27-4acb-b7db-927b576ffafc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.596 2 INFO nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Deleting local config drive /var/lib/nova/instances/2a2a6043-9b27-4acb-b7db-927b576ffafc/disk.config because it was imported into RBD.
Oct 14 09:10:13 compute-0 systemd-machined[214636]: New machine qemu-92-instance-0000004c.
Oct 14 09:10:13 compute-0 systemd[1]: Started Virtual Machine qemu-92-instance-0000004c.
Oct 14 09:10:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:10:13 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2135866274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 220 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 201 op/s
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.849 2 DEBUG oslo_concurrency.processutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.881 2 DEBUG nova.storage.rbd_utils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:13 compute-0 nova_compute[259627]: 2025-10-14 09:10:13.885 2 DEBUG oslo_concurrency.processutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:14 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2135866274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:10:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3278650194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.405 2 DEBUG oslo_concurrency.processutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.408 2 DEBUG nova.virt.libvirt.vif [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1594806687',display_name='tempest-ServerRescueTestJSON-server-1594806687',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1594806687',id=74,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='51ae58236f6a432e93764d455a502033',ramdisk_id='',reservation_id='r-ojp0uarx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-156376826',owner_user_name='tempest-ServerRescueTestJSON-156376826-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:10:09Z,user_data=None,user_id='7a268118aae14d449097f4a26371415e',uuid=e503e351-20ec-43fb-b5f8-7af68dff5bcd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "address": "fa:16:3e:bb:fd:b8", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bb94dcb-f5", "ovs_interfaceid": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.408 2 DEBUG nova.network.os_vif_util [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Converting VIF {"id": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "address": "fa:16:3e:bb:fd:b8", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bb94dcb-f5", "ovs_interfaceid": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.410 2 DEBUG nova.network.os_vif_util [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:fd:b8,bridge_name='br-int',has_traffic_filtering=True,id=5bb94dcb-f594-4577-9c20-65816d7c57a0,network=Network(fb52c397-b1e0-4244-ac38-60ca1e4abace),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bb94dcb-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.412 2 DEBUG nova.objects.instance [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'pci_devices' on Instance uuid e503e351-20ec-43fb-b5f8-7af68dff5bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.438 2 DEBUG nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:10:14 compute-0 nova_compute[259627]:   <uuid>e503e351-20ec-43fb-b5f8-7af68dff5bcd</uuid>
Oct 14 09:10:14 compute-0 nova_compute[259627]:   <name>instance-0000004a</name>
Oct 14 09:10:14 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:10:14 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:10:14 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerRescueTestJSON-server-1594806687</nova:name>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:10:13</nova:creationTime>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:10:14 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:10:14 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:10:14 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:10:14 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:10:14 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:10:14 compute-0 nova_compute[259627]:         <nova:user uuid="7a268118aae14d449097f4a26371415e">tempest-ServerRescueTestJSON-156376826-project-member</nova:user>
Oct 14 09:10:14 compute-0 nova_compute[259627]:         <nova:project uuid="51ae58236f6a432e93764d455a502033">tempest-ServerRescueTestJSON-156376826</nova:project>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:10:14 compute-0 nova_compute[259627]:         <nova:port uuid="5bb94dcb-f594-4577-9c20-65816d7c57a0">
Oct 14 09:10:14 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:10:14 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:10:14 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <system>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <entry name="serial">e503e351-20ec-43fb-b5f8-7af68dff5bcd</entry>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <entry name="uuid">e503e351-20ec-43fb-b5f8-7af68dff5bcd</entry>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     </system>
Oct 14 09:10:14 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:10:14 compute-0 nova_compute[259627]:   <os>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:   </os>
Oct 14 09:10:14 compute-0 nova_compute[259627]:   <features>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:   </features>
Oct 14 09:10:14 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:10:14 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:10:14 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk">
Oct 14 09:10:14 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       </source>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:10:14 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk.config">
Oct 14 09:10:14 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       </source>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:10:14 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:bb:fd:b8"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <target dev="tap5bb94dcb-f5"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/e503e351-20ec-43fb-b5f8-7af68dff5bcd/console.log" append="off"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <video>
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     </video>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:10:14 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:10:14 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:10:14 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:10:14 compute-0 nova_compute[259627]: </domain>
Oct 14 09:10:14 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.440 2 DEBUG nova.compute.manager [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Preparing to wait for external event network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.441 2 DEBUG oslo_concurrency.lockutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.441 2 DEBUG oslo_concurrency.lockutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.442 2 DEBUG oslo_concurrency.lockutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.443 2 DEBUG nova.virt.libvirt.vif [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1594806687',display_name='tempest-ServerRescueTestJSON-server-1594806687',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1594806687',id=74,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='51ae58236f6a432e93764d455a502033',ramdisk_id='',reservation_id='r-ojp0uarx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-156376826',owner_user_name='tempest-ServerRescueTestJSON-1
56376826-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:10:09Z,user_data=None,user_id='7a268118aae14d449097f4a26371415e',uuid=e503e351-20ec-43fb-b5f8-7af68dff5bcd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "address": "fa:16:3e:bb:fd:b8", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bb94dcb-f5", "ovs_interfaceid": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.443 2 DEBUG nova.network.os_vif_util [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Converting VIF {"id": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "address": "fa:16:3e:bb:fd:b8", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bb94dcb-f5", "ovs_interfaceid": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.444 2 DEBUG nova.network.os_vif_util [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:fd:b8,bridge_name='br-int',has_traffic_filtering=True,id=5bb94dcb-f594-4577-9c20-65816d7c57a0,network=Network(fb52c397-b1e0-4244-ac38-60ca1e4abace),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bb94dcb-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.445 2 DEBUG os_vif [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:fd:b8,bridge_name='br-int',has_traffic_filtering=True,id=5bb94dcb-f594-4577-9c20-65816d7c57a0,network=Network(fb52c397-b1e0-4244-ac38-60ca1e4abace),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bb94dcb-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.447 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.447 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.456 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb94dcb-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.457 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5bb94dcb-f5, col_values=(('external_ids', {'iface-id': '5bb94dcb-f594-4577-9c20-65816d7c57a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:fd:b8', 'vm-uuid': 'e503e351-20ec-43fb-b5f8-7af68dff5bcd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:14 compute-0 NetworkManager[44885]: <info>  [1760433014.4606] manager: (tap5bb94dcb-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.467 2 INFO os_vif [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:fd:b8,bridge_name='br-int',has_traffic_filtering=True,id=5bb94dcb-f594-4577-9c20-65816d7c57a0,network=Network(fb52c397-b1e0-4244-ac38-60ca1e4abace),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bb94dcb-f5')
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.533 2 DEBUG nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.534 2 DEBUG nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.534 2 DEBUG nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] No VIF found with MAC fa:16:3e:bb:fd:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.536 2 INFO nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Using config drive
Oct 14 09:10:14 compute-0 nova_compute[259627]: 2025-10-14 09:10:14.565 2 DEBUG nova.storage.rbd_utils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:15 compute-0 ceph-mon[74249]: pgmap v1673: 305 pgs: 305 active+clean; 220 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 201 op/s
Oct 14 09:10:15 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3278650194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.208 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433015.2077727, 2a2a6043-9b27-4acb-b7db-927b576ffafc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.210 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] VM Resumed (Lifecycle Event)
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.213 2 DEBUG nova.compute.manager [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.213 2 DEBUG nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.219 2 INFO nova.virt.libvirt.driver [-] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Instance spawned successfully.
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.221 2 DEBUG nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.236 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.245 2 INFO nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Creating config drive at /var/lib/nova/instances/e503e351-20ec-43fb-b5f8-7af68dff5bcd/disk.config
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.252 2 DEBUG oslo_concurrency.processutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e503e351-20ec-43fb-b5f8-7af68dff5bcd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphwi05rdf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.323 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.329 2 DEBUG nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.330 2 DEBUG nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.331 2 DEBUG nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.332 2 DEBUG nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.333 2 DEBUG nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.333 2 DEBUG nova.virt.libvirt.driver [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.350 2 DEBUG nova.network.neutron [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Updating instance_info_cache with network_info: [{"id": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "address": "fa:16:3e:70:96:2b", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc282f-cf", "ovs_interfaceid": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.362 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.363 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433015.208607, 2a2a6043-9b27-4acb-b7db-927b576ffafc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.363 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] VM Started (Lifecycle Event)
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.381 2 DEBUG oslo_concurrency.lockutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Releasing lock "refresh_cache-2f8845c8-0bc8-4ce6-bf89-f775899b2666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.381 2 DEBUG nova.compute.manager [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Instance network_info: |[{"id": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "address": "fa:16:3e:70:96:2b", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc282f-cf", "ovs_interfaceid": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.382 2 DEBUG oslo_concurrency.lockutils [req-43f7326f-7d7a-4347-be4e-7c33acc77fff req-fd488c85-c86f-4c2f-9bae-a8ff9c8fac45 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2f8845c8-0bc8-4ce6-bf89-f775899b2666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.382 2 DEBUG nova.network.neutron [req-43f7326f-7d7a-4347-be4e-7c33acc77fff req-fd488c85-c86f-4c2f-9bae-a8ff9c8fac45 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Refreshing network info cache for port 9ecc282f-cf6e-46ad-96b5-188f7693ce1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.386 2 DEBUG nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Start _get_guest_xml network_info=[{"id": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "address": "fa:16:3e:70:96:2b", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc282f-cf", "ovs_interfaceid": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.391 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.395 2 WARNING nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.397 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.409 2 DEBUG nova.virt.libvirt.host [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.410 2 DEBUG nova.virt.libvirt.host [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.413 2 INFO nova.compute.manager [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Took 4.32 seconds to spawn the instance on the hypervisor.
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.414 2 DEBUG nova.compute.manager [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.422 2 DEBUG nova.virt.libvirt.host [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.423 2 DEBUG nova.virt.libvirt.host [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.423 2 DEBUG nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.424 2 DEBUG nova.virt.hardware [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.424 2 DEBUG nova.virt.hardware [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.425 2 DEBUG nova.virt.hardware [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.425 2 DEBUG nova.virt.hardware [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.425 2 DEBUG nova.virt.hardware [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.426 2 DEBUG nova.virt.hardware [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.426 2 DEBUG nova.virt.hardware [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.426 2 DEBUG nova.virt.hardware [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.427 2 DEBUG nova.virt.hardware [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.427 2 DEBUG nova.virt.hardware [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.427 2 DEBUG nova.virt.hardware [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.431 2 DEBUG oslo_concurrency.processutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.482 2 DEBUG oslo_concurrency.processutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e503e351-20ec-43fb-b5f8-7af68dff5bcd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphwi05rdf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.519 2 DEBUG nova.storage.rbd_utils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.524 2 DEBUG oslo_concurrency.processutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e503e351-20ec-43fb-b5f8-7af68dff5bcd/disk.config e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.573 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.593 2 INFO nova.compute.manager [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Took 6.55 seconds to build instance.
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.624 2 DEBUG oslo_concurrency.lockutils [None req-d4e540c5-f5a3-4972-ac13-fe8931d0b10f 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Lock "2a2a6043-9b27-4acb-b7db-927b576ffafc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.723 2 DEBUG oslo_concurrency.processutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e503e351-20ec-43fb-b5f8-7af68dff5bcd/disk.config e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.724 2 INFO nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Deleting local config drive /var/lib/nova/instances/e503e351-20ec-43fb-b5f8-7af68dff5bcd/disk.config because it was imported into RBD.
Oct 14 09:10:15 compute-0 NetworkManager[44885]: <info>  [1760433015.7884] manager: (tap5bb94dcb-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/322)
Oct 14 09:10:15 compute-0 systemd-udevd[338448]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:10:15 compute-0 kernel: tap5bb94dcb-f5: entered promiscuous mode
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:15 compute-0 ovn_controller[152662]: 2025-10-14T09:10:15Z|00767|binding|INFO|Claiming lport 5bb94dcb-f594-4577-9c20-65816d7c57a0 for this chassis.
Oct 14 09:10:15 compute-0 ovn_controller[152662]: 2025-10-14T09:10:15Z|00768|binding|INFO|5bb94dcb-f594-4577-9c20-65816d7c57a0: Claiming fa:16:3e:bb:fd:b8 10.100.0.9
Oct 14 09:10:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:15.803 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:fd:b8 10.100.0.9'], port_security=['fa:16:3e:bb:fd:b8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e503e351-20ec-43fb-b5f8-7af68dff5bcd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb52c397-b1e0-4244-ac38-60ca1e4abace', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51ae58236f6a432e93764d455a502033', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fcb895cc-6512-4f14-be87-dd3d6289c2b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=400ed612-fb0d-4559-97c6-466a993e3d60, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=5bb94dcb-f594-4577-9c20-65816d7c57a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:10:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:15.805 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 5bb94dcb-f594-4577-9c20-65816d7c57a0 in datapath fb52c397-b1e0-4244-ac38-60ca1e4abace bound to our chassis
Oct 14 09:10:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:15.806 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fb52c397-b1e0-4244-ac38-60ca1e4abace or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:10:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:15.809 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9934b4a7-bc70-49d3-925a-75c44f3befa9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:15 compute-0 NetworkManager[44885]: <info>  [1760433015.8162] device (tap5bb94dcb-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:10:15 compute-0 NetworkManager[44885]: <info>  [1760433015.8194] device (tap5bb94dcb-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:10:15 compute-0 ovn_controller[152662]: 2025-10-14T09:10:15Z|00769|binding|INFO|Setting lport 5bb94dcb-f594-4577-9c20-65816d7c57a0 ovn-installed in OVS
Oct 14 09:10:15 compute-0 ovn_controller[152662]: 2025-10-14T09:10:15Z|00770|binding|INFO|Setting lport 5bb94dcb-f594-4577-9c20-65816d7c57a0 up in Southbound
Oct 14 09:10:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 306 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 5.4 MiB/s wr, 325 op/s
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:15 compute-0 systemd-machined[214636]: New machine qemu-93-instance-0000004a.
Oct 14 09:10:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:10:15 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3200175896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:15 compute-0 systemd[1]: Started Virtual Machine qemu-93-instance-0000004a.
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.900 2 DEBUG oslo_concurrency.processutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.918 2 DEBUG nova.storage.rbd_utils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image 2f8845c8-0bc8-4ce6-bf89-f775899b2666_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:15 compute-0 nova_compute[259627]: 2025-10-14 09:10:15.923 2 DEBUG oslo_concurrency.processutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:16 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3200175896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.276 2 DEBUG nova.network.neutron [req-3c128b88-d39a-4a5b-99a4-da733a4e1ded req-6a752e9d-5de1-486d-a6a6-4039e166cea9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Updated VIF entry in instance network info cache for port 5bb94dcb-f594-4577-9c20-65816d7c57a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.279 2 DEBUG nova.network.neutron [req-3c128b88-d39a-4a5b-99a4-da733a4e1ded req-6a752e9d-5de1-486d-a6a6-4039e166cea9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Updating instance_info_cache with network_info: [{"id": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "address": "fa:16:3e:bb:fd:b8", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bb94dcb-f5", "ovs_interfaceid": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.301 2 DEBUG oslo_concurrency.lockutils [req-3c128b88-d39a-4a5b-99a4-da733a4e1ded req-6a752e9d-5de1-486d-a6a6-4039e166cea9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e503e351-20ec-43fb-b5f8-7af68dff5bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:10:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:10:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/470951332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.406 2 DEBUG oslo_concurrency.processutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.409 2 DEBUG nova.virt.libvirt.vif [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1316946340',display_name='tempest-ServerDiskConfigTestJSON-server-1316946340',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1316946340',id=75,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9099e3128b584ff7a140b8021451223e',ramdisk_id='',reservation_id='r-eokpjxke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1253454894',owner_user_name='tempest-ServerDiskConfigTestJSON-1253454894-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:10:10Z,user_data=None,user_id='979aa20794dc414f91c59f224a0db083',uuid=2f8845c8-0bc8-4ce6-bf89-f775899b2666,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "address": "fa:16:3e:70:96:2b", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc282f-cf", "ovs_interfaceid": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.409 2 DEBUG nova.network.os_vif_util [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converting VIF {"id": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "address": "fa:16:3e:70:96:2b", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc282f-cf", "ovs_interfaceid": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.411 2 DEBUG nova.network.os_vif_util [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:96:2b,bridge_name='br-int',has_traffic_filtering=True,id=9ecc282f-cf6e-46ad-96b5-188f7693ce1f,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc282f-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.414 2 DEBUG nova.objects.instance [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f8845c8-0bc8-4ce6-bf89-f775899b2666 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.428 2 DEBUG nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:10:16 compute-0 nova_compute[259627]:   <uuid>2f8845c8-0bc8-4ce6-bf89-f775899b2666</uuid>
Oct 14 09:10:16 compute-0 nova_compute[259627]:   <name>instance-0000004b</name>
Oct 14 09:10:16 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:10:16 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:10:16 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1316946340</nova:name>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:10:15</nova:creationTime>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:10:16 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:10:16 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:10:16 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:10:16 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:10:16 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:10:16 compute-0 nova_compute[259627]:         <nova:user uuid="979aa20794dc414f91c59f224a0db083">tempest-ServerDiskConfigTestJSON-1253454894-project-member</nova:user>
Oct 14 09:10:16 compute-0 nova_compute[259627]:         <nova:project uuid="9099e3128b584ff7a140b8021451223e">tempest-ServerDiskConfigTestJSON-1253454894</nova:project>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:10:16 compute-0 nova_compute[259627]:         <nova:port uuid="9ecc282f-cf6e-46ad-96b5-188f7693ce1f">
Oct 14 09:10:16 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:10:16 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:10:16 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <system>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <entry name="serial">2f8845c8-0bc8-4ce6-bf89-f775899b2666</entry>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <entry name="uuid">2f8845c8-0bc8-4ce6-bf89-f775899b2666</entry>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     </system>
Oct 14 09:10:16 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:10:16 compute-0 nova_compute[259627]:   <os>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:   </os>
Oct 14 09:10:16 compute-0 nova_compute[259627]:   <features>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:   </features>
Oct 14 09:10:16 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:10:16 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:10:16 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2f8845c8-0bc8-4ce6-bf89-f775899b2666_disk">
Oct 14 09:10:16 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       </source>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:10:16 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2f8845c8-0bc8-4ce6-bf89-f775899b2666_disk.config">
Oct 14 09:10:16 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       </source>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:10:16 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:70:96:2b"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <target dev="tap9ecc282f-cf"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/2f8845c8-0bc8-4ce6-bf89-f775899b2666/console.log" append="off"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <video>
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     </video>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:10:16 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:10:16 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:10:16 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:10:16 compute-0 nova_compute[259627]: </domain>
Oct 14 09:10:16 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.436 2 DEBUG nova.compute.manager [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Preparing to wait for external event network-vif-plugged-9ecc282f-cf6e-46ad-96b5-188f7693ce1f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.436 2 DEBUG oslo_concurrency.lockutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "2f8845c8-0bc8-4ce6-bf89-f775899b2666-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.437 2 DEBUG oslo_concurrency.lockutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "2f8845c8-0bc8-4ce6-bf89-f775899b2666-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.437 2 DEBUG oslo_concurrency.lockutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "2f8845c8-0bc8-4ce6-bf89-f775899b2666-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.438 2 DEBUG nova.virt.libvirt.vif [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1316946340',display_name='tempest-ServerDiskConfigTestJSON-server-1316946340',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1316946340',id=75,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9099e3128b584ff7a140b8021451223e',ramdisk_id='',reservation_id='r-eokpjxke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1253454894',owner_user_name='tempest-ServerDiskConfigTestJSON-1253454894-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:10:10Z,user_data=None,user_id='979aa20794dc414f91c59f224a0db083',uuid=2f8845c8-0bc8-4ce6-bf89-f775899b2666,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "address": "fa:16:3e:70:96:2b", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc282f-cf", "ovs_interfaceid": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.439 2 DEBUG nova.network.os_vif_util [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converting VIF {"id": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "address": "fa:16:3e:70:96:2b", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc282f-cf", "ovs_interfaceid": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.440 2 DEBUG nova.network.os_vif_util [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:96:2b,bridge_name='br-int',has_traffic_filtering=True,id=9ecc282f-cf6e-46ad-96b5-188f7693ce1f,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc282f-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.441 2 DEBUG os_vif [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:96:2b,bridge_name='br-int',has_traffic_filtering=True,id=9ecc282f-cf6e-46ad-96b5-188f7693ce1f,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc282f-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.448 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ecc282f-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.449 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9ecc282f-cf, col_values=(('external_ids', {'iface-id': '9ecc282f-cf6e-46ad-96b5-188f7693ce1f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:96:2b', 'vm-uuid': '2f8845c8-0bc8-4ce6-bf89-f775899b2666'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:16 compute-0 NetworkManager[44885]: <info>  [1760433016.4514] manager: (tap9ecc282f-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/323)
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.456 2 INFO os_vif [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:96:2b,bridge_name='br-int',has_traffic_filtering=True,id=9ecc282f-cf6e-46ad-96b5-188f7693ce1f,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc282f-cf')
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.504 2 DEBUG nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.504 2 DEBUG nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.504 2 DEBUG nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] No VIF found with MAC fa:16:3e:70:96:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.505 2 INFO nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Using config drive
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.525 2 DEBUG nova.storage.rbd_utils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image 2f8845c8-0bc8-4ce6-bf89-f775899b2666_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.552 2 DEBUG nova.compute.manager [req-fe13b420-10d4-4885-a678-7ac8454438e7 req-3d52cc67-4236-4e89-80e9-6b8103c1a8d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received event network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.552 2 DEBUG oslo_concurrency.lockutils [req-fe13b420-10d4-4885-a678-7ac8454438e7 req-3d52cc67-4236-4e89-80e9-6b8103c1a8d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.553 2 DEBUG oslo_concurrency.lockutils [req-fe13b420-10d4-4885-a678-7ac8454438e7 req-3d52cc67-4236-4e89-80e9-6b8103c1a8d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.553 2 DEBUG oslo_concurrency.lockutils [req-fe13b420-10d4-4885-a678-7ac8454438e7 req-3d52cc67-4236-4e89-80e9-6b8103c1a8d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.553 2 DEBUG nova.compute.manager [req-fe13b420-10d4-4885-a678-7ac8454438e7 req-3d52cc67-4236-4e89-80e9-6b8103c1a8d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Processing event network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.902 2 DEBUG nova.compute.manager [None req-860f943d-18a2-4ac0-b0db-48d4d2d4d0b0 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.946 2 INFO nova.compute.manager [None req-860f943d-18a2-4ac0-b0db-48d4d2d4d0b0 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] instance snapshotting
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.947 2 DEBUG nova.objects.instance [None req-860f943d-18a2-4ac0-b0db-48d4d2d4d0b0 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Lazy-loading 'flavor' on Instance uuid 2a2a6043-9b27-4acb-b7db-927b576ffafc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.978 2 INFO nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Creating config drive at /var/lib/nova/instances/2f8845c8-0bc8-4ce6-bf89-f775899b2666/disk.config
Oct 14 09:10:16 compute-0 nova_compute[259627]: 2025-10-14 09:10:16.983 2 DEBUG oslo_concurrency.processutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2f8845c8-0bc8-4ce6-bf89-f775899b2666/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5mmd67ku execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.045 2 DEBUG oslo_concurrency.lockutils [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Acquiring lock "2a2a6043-9b27-4acb-b7db-927b576ffafc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.045 2 DEBUG oslo_concurrency.lockutils [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Lock "2a2a6043-9b27-4acb-b7db-927b576ffafc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.046 2 DEBUG oslo_concurrency.lockutils [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Acquiring lock "2a2a6043-9b27-4acb-b7db-927b576ffafc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.046 2 DEBUG oslo_concurrency.lockutils [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Lock "2a2a6043-9b27-4acb-b7db-927b576ffafc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.046 2 DEBUG oslo_concurrency.lockutils [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Lock "2a2a6043-9b27-4acb-b7db-927b576ffafc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.047 2 INFO nova.compute.manager [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Terminating instance
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.048 2 DEBUG oslo_concurrency.lockutils [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Acquiring lock "refresh_cache-2a2a6043-9b27-4acb-b7db-927b576ffafc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.048 2 DEBUG oslo_concurrency.lockutils [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Acquired lock "refresh_cache-2a2a6043-9b27-4acb-b7db-927b576ffafc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.048 2 DEBUG nova.network.neutron [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.120 2 DEBUG oslo_concurrency.processutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2f8845c8-0bc8-4ce6-bf89-f775899b2666/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5mmd67ku" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:17 compute-0 ceph-mon[74249]: pgmap v1674: 305 pgs: 305 active+clean; 306 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 5.4 MiB/s wr, 325 op/s
Oct 14 09:10:17 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/470951332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.145 2 DEBUG nova.storage.rbd_utils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] rbd image 2f8845c8-0bc8-4ce6-bf89-f775899b2666_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.148 2 DEBUG oslo_concurrency.processutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2f8845c8-0bc8-4ce6-bf89-f775899b2666/disk.config 2f8845c8-0bc8-4ce6-bf89-f775899b2666_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.192 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433017.1621656, e503e351-20ec-43fb-b5f8-7af68dff5bcd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.193 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] VM Started (Lifecycle Event)
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.196 2 DEBUG nova.compute.manager [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.207 2 DEBUG nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.216 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.238 2 INFO nova.virt.libvirt.driver [-] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Instance spawned successfully.
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.239 2 DEBUG nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.243 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.256 2 DEBUG nova.network.neutron [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.265 2 DEBUG nova.network.neutron [req-43f7326f-7d7a-4347-be4e-7c33acc77fff req-fd488c85-c86f-4c2f-9bae-a8ff9c8fac45 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Updated VIF entry in instance network info cache for port 9ecc282f-cf6e-46ad-96b5-188f7693ce1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.267 2 DEBUG nova.network.neutron [req-43f7326f-7d7a-4347-be4e-7c33acc77fff req-fd488c85-c86f-4c2f-9bae-a8ff9c8fac45 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Updating instance_info_cache with network_info: [{"id": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "address": "fa:16:3e:70:96:2b", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc282f-cf", "ovs_interfaceid": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.281 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.282 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433017.162603, e503e351-20ec-43fb-b5f8-7af68dff5bcd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.283 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] VM Paused (Lifecycle Event)
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.292 2 DEBUG oslo_concurrency.lockutils [req-43f7326f-7d7a-4347-be4e-7c33acc77fff req-fd488c85-c86f-4c2f-9bae-a8ff9c8fac45 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2f8845c8-0bc8-4ce6-bf89-f775899b2666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.301 2 DEBUG nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.301 2 DEBUG nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.302 2 DEBUG nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.302 2 DEBUG nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.302 2 DEBUG nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.303 2 DEBUG nova.virt.libvirt.driver [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.309 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.312 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433017.205572, e503e351-20ec-43fb-b5f8-7af68dff5bcd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.312 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] VM Resumed (Lifecycle Event)
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.327 2 DEBUG oslo_concurrency.processutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2f8845c8-0bc8-4ce6-bf89-f775899b2666/disk.config 2f8845c8-0bc8-4ce6-bf89-f775899b2666_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.329 2 INFO nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Deleting local config drive /var/lib/nova/instances/2f8845c8-0bc8-4ce6-bf89-f775899b2666/disk.config because it was imported into RBD.
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.335 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.340 2 INFO nova.virt.libvirt.driver [None req-860f943d-18a2-4ac0-b0db-48d4d2d4d0b0 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Beginning live snapshot process
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.346 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.372 2 INFO nova.compute.manager [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Took 8.02 seconds to spawn the instance on the hypervisor.
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.372 2 DEBUG nova.compute.manager [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.376 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:10:17 compute-0 kernel: tap9ecc282f-cf: entered promiscuous mode
Oct 14 09:10:17 compute-0 NetworkManager[44885]: <info>  [1760433017.3897] manager: (tap9ecc282f-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/324)
Oct 14 09:10:17 compute-0 systemd-udevd[338522]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:10:17 compute-0 ovn_controller[152662]: 2025-10-14T09:10:17Z|00771|binding|INFO|Claiming lport 9ecc282f-cf6e-46ad-96b5-188f7693ce1f for this chassis.
Oct 14 09:10:17 compute-0 ovn_controller[152662]: 2025-10-14T09:10:17Z|00772|binding|INFO|9ecc282f-cf6e-46ad-96b5-188f7693ce1f: Claiming fa:16:3e:70:96:2b 10.100.0.12
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.399 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:96:2b 10.100.0.12'], port_security=['fa:16:3e:70:96:2b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2f8845c8-0bc8-4ce6-bf89-f775899b2666', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99db3452-8467-4a2b-a51d-30679c346bb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9099e3128b584ff7a140b8021451223e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '14cbcf9b-48b8-496d-985e-160ef22d10a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cb6279-57ca-4d4a-8018-5af2d7c42670, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=9ecc282f-cf6e-46ad-96b5-188f7693ce1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.400 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 9ecc282f-cf6e-46ad-96b5-188f7693ce1f in datapath 99db3452-8467-4a2b-a51d-30679c346bb2 bound to our chassis
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.402 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99db3452-8467-4a2b-a51d-30679c346bb2
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.402 2 DEBUG nova.compute.manager [None req-860f943d-18a2-4ac0-b0db-48d4d2d4d0b0 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Oct 14 09:10:17 compute-0 NetworkManager[44885]: <info>  [1760433017.4048] device (tap9ecc282f-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:10:17 compute-0 NetworkManager[44885]: <info>  [1760433017.4060] device (tap9ecc282f-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.414 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[099459e5-8c67-46f8-8fac-2914fd1a586c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.415 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap99db3452-81 in ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.420 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap99db3452-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.420 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aed3d44c-d8a0-456d-a467-4890a6efb3b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:17 compute-0 ovn_controller[152662]: 2025-10-14T09:10:17Z|00773|binding|INFO|Setting lport 9ecc282f-cf6e-46ad-96b5-188f7693ce1f ovn-installed in OVS
Oct 14 09:10:17 compute-0 ovn_controller[152662]: 2025-10-14T09:10:17Z|00774|binding|INFO|Setting lport 9ecc282f-cf6e-46ad-96b5-188f7693ce1f up in Southbound
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.423 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0cedacf2-6305-4804-8609-c35c75630fa8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:17 compute-0 systemd-machined[214636]: New machine qemu-94-instance-0000004b.
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.451 2 INFO nova.compute.manager [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Took 9.22 seconds to build instance.
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.452 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[aa2e63dd-8752-49ca-a25b-a8561bddd02b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:17 compute-0 systemd[1]: Started Virtual Machine qemu-94-instance-0000004b.
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.469 2 DEBUG oslo_concurrency.lockutils [None req-b22d57c6-4e43-4dd5-adc8-544346411fcf 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.473 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f9a52903-90d0-4680-b5c9-347908f96719]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.500 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbfec52-b514-47a3-b9ca-f36a48c3c28c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.507 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f04c1124-8921-4c1d-8a8e-d0753cbef7e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:17 compute-0 NetworkManager[44885]: <info>  [1760433017.5085] manager: (tap99db3452-80): new Veth device (/org/freedesktop/NetworkManager/Devices/325)
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.540 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f07bb5a9-3837-4cfd-a16d-7cdde97bb638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.544 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[73501a3d-2ec8-4293-9bf8-6b96f3ce60f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:17 compute-0 NetworkManager[44885]: <info>  [1760433017.5685] device (tap99db3452-80): carrier: link connected
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.575 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[efe9152b-733b-46a6-a148-5104d15f12ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.593 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4636f8de-4648-412f-a3a2-03d576598a30]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99db3452-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:a6:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 226], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679633, 'reachable_time': 42731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338721, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.611 2 DEBUG nova.network.neutron [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.612 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[24aadbae-33ab-4a1f-9868-ac9cb5d1bfbf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:a670'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 679633, 'tstamp': 679633}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338722, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.624 2 DEBUG oslo_concurrency.lockutils [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Releasing lock "refresh_cache-2a2a6043-9b27-4acb-b7db-927b576ffafc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.624 2 DEBUG nova.compute.manager [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.637 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[69d180a7-a132-4745-be70-705a07859c6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99db3452-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:a6:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 226], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679633, 'reachable_time': 42731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338723, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.679 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1918fe-9331-4123-9806-96d7428d4100]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:17 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Oct 14 09:10:17 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d0000004c.scope: Consumed 3.837s CPU time.
Oct 14 09:10:17 compute-0 systemd-machined[214636]: Machine qemu-92-instance-0000004c terminated.
Oct 14 09:10:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.763 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[63ecedd7-8cc5-413b-8894-3a4d9097fc00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.764 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99db3452-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.764 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.764 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99db3452-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:17 compute-0 kernel: tap99db3452-80: entered promiscuous mode
Oct 14 09:10:17 compute-0 NetworkManager[44885]: <info>  [1760433017.7669] manager: (tap99db3452-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/326)
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.768 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99db3452-80, col_values=(('external_ids', {'iface-id': '59e7d558-49d1-48cf-b926-27e93fe381b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:17 compute-0 ovn_controller[152662]: 2025-10-14T09:10:17Z|00775|binding|INFO|Releasing lport 59e7d558-49d1-48cf-b926-27e93fe381b1 from this chassis (sb_readonly=0)
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.787 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/99db3452-8467-4a2b-a51d-30679c346bb2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/99db3452-8467-4a2b-a51d-30679c346bb2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.789 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[02f7e4c4-f20e-4ead-839b-964b6bd8a8e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.790 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-99db3452-8467-4a2b-a51d-30679c346bb2
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/99db3452-8467-4a2b-a51d-30679c346bb2.pid.haproxy
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 99db3452-8467-4a2b-a51d-30679c346bb2
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:10:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:17.790 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'env', 'PROCESS_TAG=haproxy-99db3452-8467-4a2b-a51d-30679c346bb2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/99db3452-8467-4a2b-a51d-30679c346bb2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.846 2 INFO nova.virt.libvirt.driver [-] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Instance destroyed successfully.
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.846 2 DEBUG nova.objects.instance [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Lazy-loading 'resources' on Instance uuid 2a2a6043-9b27-4acb-b7db-927b576ffafc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 306 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 683 KiB/s rd, 5.3 MiB/s wr, 176 op/s
Oct 14 09:10:17 compute-0 nova_compute[259627]: 2025-10-14 09:10:17.954 2 DEBUG nova.compute.manager [None req-860f943d-18a2-4ac0-b0db-48d4d2d4d0b0 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.257 2 INFO nova.virt.libvirt.driver [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Deleting instance files /var/lib/nova/instances/2a2a6043-9b27-4acb-b7db-927b576ffafc_del
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.258 2 INFO nova.virt.libvirt.driver [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Deletion of /var/lib/nova/instances/2a2a6043-9b27-4acb-b7db-927b576ffafc_del complete
Oct 14 09:10:18 compute-0 podman[338817]: 2025-10-14 09:10:18.260078033 +0000 UTC m=+0.084076982 container create e3412da2ed47ba97fcaa6ed26556c30c1ae75777e219f160f225041967bd2e40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 14 09:10:18 compute-0 systemd[1]: Started libpod-conmon-e3412da2ed47ba97fcaa6ed26556c30c1ae75777e219f160f225041967bd2e40.scope.
Oct 14 09:10:18 compute-0 podman[338817]: 2025-10-14 09:10:18.216971361 +0000 UTC m=+0.040970360 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:10:18 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b28811f0eba69b926f8c12aae3330bb9c962d0919cdc644ab96129ab8804f6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:10:18 compute-0 podman[338817]: 2025-10-14 09:10:18.358998929 +0000 UTC m=+0.182997898 container init e3412da2ed47ba97fcaa6ed26556c30c1ae75777e219f160f225041967bd2e40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:10:18 compute-0 podman[338817]: 2025-10-14 09:10:18.367066398 +0000 UTC m=+0.191065347 container start e3412da2ed47ba97fcaa6ed26556c30c1ae75777e219f160f225041967bd2e40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.372 2 INFO nova.compute.manager [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.373 2 DEBUG oslo.service.loopingcall [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.373 2 DEBUG nova.compute.manager [-] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.373 2 DEBUG nova.network.neutron [-] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:10:18 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[338835]: [NOTICE]   (338839) : New worker (338841) forked
Oct 14 09:10:18 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[338835]: [NOTICE]   (338839) : Loading success.
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.567 2 DEBUG nova.network.neutron [-] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.580 2 DEBUG nova.network.neutron [-] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.594 2 INFO nova.compute.manager [-] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Took 0.22 seconds to deallocate network for instance.
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.644 2 DEBUG oslo_concurrency.lockutils [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.646 2 DEBUG oslo_concurrency.lockutils [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.705 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433018.7051144, 2f8845c8-0bc8-4ce6-bf89-f775899b2666 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.706 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] VM Started (Lifecycle Event)
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.729 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.736 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433018.7064757, 2f8845c8-0bc8-4ce6-bf89-f775899b2666 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.736 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] VM Paused (Lifecycle Event)
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.753 2 DEBUG nova.compute.manager [req-e4a27dc9-1bea-446f-b724-cb9a2be62990 req-d8ee2264-0c1b-4151-b1a1-79659b58ed21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received event network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.754 2 DEBUG oslo_concurrency.lockutils [req-e4a27dc9-1bea-446f-b724-cb9a2be62990 req-d8ee2264-0c1b-4151-b1a1-79659b58ed21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.754 2 DEBUG oslo_concurrency.lockutils [req-e4a27dc9-1bea-446f-b724-cb9a2be62990 req-d8ee2264-0c1b-4151-b1a1-79659b58ed21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.754 2 DEBUG oslo_concurrency.lockutils [req-e4a27dc9-1bea-446f-b724-cb9a2be62990 req-d8ee2264-0c1b-4151-b1a1-79659b58ed21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.755 2 DEBUG nova.compute.manager [req-e4a27dc9-1bea-446f-b724-cb9a2be62990 req-d8ee2264-0c1b-4151-b1a1-79659b58ed21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] No waiting events found dispatching network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.756 2 WARNING nova.compute.manager [req-e4a27dc9-1bea-446f-b724-cb9a2be62990 req-d8ee2264-0c1b-4151-b1a1-79659b58ed21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received unexpected event network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 for instance with vm_state active and task_state None.
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.756 2 DEBUG nova.compute.manager [req-e4a27dc9-1bea-446f-b724-cb9a2be62990 req-d8ee2264-0c1b-4151-b1a1-79659b58ed21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Received event network-vif-plugged-9ecc282f-cf6e-46ad-96b5-188f7693ce1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.757 2 DEBUG oslo_concurrency.lockutils [req-e4a27dc9-1bea-446f-b724-cb9a2be62990 req-d8ee2264-0c1b-4151-b1a1-79659b58ed21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2f8845c8-0bc8-4ce6-bf89-f775899b2666-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.757 2 DEBUG oslo_concurrency.lockutils [req-e4a27dc9-1bea-446f-b724-cb9a2be62990 req-d8ee2264-0c1b-4151-b1a1-79659b58ed21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2f8845c8-0bc8-4ce6-bf89-f775899b2666-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.757 2 DEBUG oslo_concurrency.lockutils [req-e4a27dc9-1bea-446f-b724-cb9a2be62990 req-d8ee2264-0c1b-4151-b1a1-79659b58ed21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2f8845c8-0bc8-4ce6-bf89-f775899b2666-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.758 2 DEBUG nova.compute.manager [req-e4a27dc9-1bea-446f-b724-cb9a2be62990 req-d8ee2264-0c1b-4151-b1a1-79659b58ed21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Processing event network-vif-plugged-9ecc282f-cf6e-46ad-96b5-188f7693ce1f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.758 2 DEBUG nova.compute.manager [req-e4a27dc9-1bea-446f-b724-cb9a2be62990 req-d8ee2264-0c1b-4151-b1a1-79659b58ed21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Received event network-vif-plugged-9ecc282f-cf6e-46ad-96b5-188f7693ce1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.759 2 DEBUG oslo_concurrency.lockutils [req-e4a27dc9-1bea-446f-b724-cb9a2be62990 req-d8ee2264-0c1b-4151-b1a1-79659b58ed21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2f8845c8-0bc8-4ce6-bf89-f775899b2666-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.759 2 DEBUG oslo_concurrency.lockutils [req-e4a27dc9-1bea-446f-b724-cb9a2be62990 req-d8ee2264-0c1b-4151-b1a1-79659b58ed21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2f8845c8-0bc8-4ce6-bf89-f775899b2666-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.759 2 DEBUG oslo_concurrency.lockutils [req-e4a27dc9-1bea-446f-b724-cb9a2be62990 req-d8ee2264-0c1b-4151-b1a1-79659b58ed21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2f8845c8-0bc8-4ce6-bf89-f775899b2666-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.760 2 DEBUG nova.compute.manager [req-e4a27dc9-1bea-446f-b724-cb9a2be62990 req-d8ee2264-0c1b-4151-b1a1-79659b58ed21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] No waiting events found dispatching network-vif-plugged-9ecc282f-cf6e-46ad-96b5-188f7693ce1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.760 2 WARNING nova.compute.manager [req-e4a27dc9-1bea-446f-b724-cb9a2be62990 req-d8ee2264-0c1b-4151-b1a1-79659b58ed21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Received unexpected event network-vif-plugged-9ecc282f-cf6e-46ad-96b5-188f7693ce1f for instance with vm_state building and task_state spawning.
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.762 2 DEBUG nova.compute.manager [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.767 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.769 2 DEBUG nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.773 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433018.7666354, 2f8845c8-0bc8-4ce6-bf89-f775899b2666 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.774 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] VM Resumed (Lifecycle Event)
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.779 2 INFO nova.virt.libvirt.driver [-] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Instance spawned successfully.
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.779 2 DEBUG nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.799 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.804 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.810 2 DEBUG nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.811 2 DEBUG nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.811 2 DEBUG nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.812 2 DEBUG nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.812 2 DEBUG nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.813 2 DEBUG nova.virt.libvirt.driver [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.832 2 DEBUG oslo_concurrency.processutils [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.881 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.883 2 INFO nova.compute.manager [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Took 8.79 seconds to spawn the instance on the hypervisor.
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.884 2 DEBUG nova.compute.manager [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.934 2 INFO nova.compute.manager [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Rescuing
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.935 2 DEBUG oslo_concurrency.lockutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "refresh_cache-e503e351-20ec-43fb-b5f8-7af68dff5bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.935 2 DEBUG oslo_concurrency.lockutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquired lock "refresh_cache-e503e351-20ec-43fb-b5f8-7af68dff5bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.935 2 DEBUG nova.network.neutron [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.947 2 INFO nova.compute.manager [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Took 10.29 seconds to build instance.
Oct 14 09:10:18 compute-0 nova_compute[259627]: 2025-10-14 09:10:18.964 2 DEBUG oslo_concurrency.lockutils [None req-258f3b3c-42e7-46b6-8fa4-f8bf2d034a4f 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "2f8845c8-0bc8-4ce6-bf89-f775899b2666" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.369s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:19 compute-0 ceph-mon[74249]: pgmap v1675: 305 pgs: 305 active+clean; 306 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 683 KiB/s rd, 5.3 MiB/s wr, 176 op/s
Oct 14 09:10:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:10:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1031114954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:19 compute-0 nova_compute[259627]: 2025-10-14 09:10:19.341 2 DEBUG oslo_concurrency.processutils [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:19 compute-0 nova_compute[259627]: 2025-10-14 09:10:19.349 2 DEBUG nova.compute.provider_tree [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:10:19 compute-0 nova_compute[259627]: 2025-10-14 09:10:19.372 2 DEBUG nova.scheduler.client.report [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:10:19 compute-0 nova_compute[259627]: 2025-10-14 09:10:19.415 2 DEBUG oslo_concurrency.lockutils [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:19 compute-0 nova_compute[259627]: 2025-10-14 09:10:19.458 2 INFO nova.scheduler.client.report [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Deleted allocations for instance 2a2a6043-9b27-4acb-b7db-927b576ffafc
Oct 14 09:10:19 compute-0 nova_compute[259627]: 2025-10-14 09:10:19.559 2 DEBUG oslo_concurrency.lockutils [None req-670d8ea8-bc66-4b85-9c8e-5b31a2cc2763 0c82bf3580174f1486f9188a0fcf0e2f 1aebe9af1e584ceda68760afadd7b7e1 - - default default] Lock "2a2a6043-9b27-4acb-b7db-927b576ffafc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 306 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 683 KiB/s rd, 5.3 MiB/s wr, 176 op/s
Oct 14 09:10:20 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1031114954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:20 compute-0 nova_compute[259627]: 2025-10-14 09:10:20.575 2 DEBUG nova.network.neutron [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Updating instance_info_cache with network_info: [{"id": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "address": "fa:16:3e:bb:fd:b8", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bb94dcb-f5", "ovs_interfaceid": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:10:20 compute-0 nova_compute[259627]: 2025-10-14 09:10:20.603 2 DEBUG oslo_concurrency.lockutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Releasing lock "refresh_cache-e503e351-20ec-43fb-b5f8-7af68dff5bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:10:20 compute-0 nova_compute[259627]: 2025-10-14 09:10:20.929 2 DEBUG nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:10:21 compute-0 ceph-mon[74249]: pgmap v1676: 305 pgs: 305 active+clean; 306 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 683 KiB/s rd, 5.3 MiB/s wr, 176 op/s
Oct 14 09:10:21 compute-0 nova_compute[259627]: 2025-10-14 09:10:21.230 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433006.2283092, dfb68a47-709d-40e3-8a17-01d9c3fb084b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:21 compute-0 nova_compute[259627]: 2025-10-14 09:10:21.231 2 INFO nova.compute.manager [-] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] VM Stopped (Lifecycle Event)
Oct 14 09:10:21 compute-0 nova_compute[259627]: 2025-10-14 09:10:21.252 2 DEBUG nova.compute.manager [None req-d59b9a78-61f2-4672-bcba-6b1835c6b99b - - - - - -] [instance: dfb68a47-709d-40e3-8a17-01d9c3fb084b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:21 compute-0 nova_compute[259627]: 2025-10-14 09:10:21.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 262 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 5.4 MiB/s wr, 409 op/s
Oct 14 09:10:22 compute-0 nova_compute[259627]: 2025-10-14 09:10:22.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:10:22 compute-0 nova_compute[259627]: 2025-10-14 09:10:22.959 2 DEBUG oslo_concurrency.lockutils [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "2f8845c8-0bc8-4ce6-bf89-f775899b2666" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:22 compute-0 nova_compute[259627]: 2025-10-14 09:10:22.959 2 DEBUG oslo_concurrency.lockutils [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "2f8845c8-0bc8-4ce6-bf89-f775899b2666" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:22 compute-0 nova_compute[259627]: 2025-10-14 09:10:22.960 2 DEBUG oslo_concurrency.lockutils [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "2f8845c8-0bc8-4ce6-bf89-f775899b2666-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:22 compute-0 nova_compute[259627]: 2025-10-14 09:10:22.960 2 DEBUG oslo_concurrency.lockutils [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "2f8845c8-0bc8-4ce6-bf89-f775899b2666-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:22 compute-0 nova_compute[259627]: 2025-10-14 09:10:22.960 2 DEBUG oslo_concurrency.lockutils [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "2f8845c8-0bc8-4ce6-bf89-f775899b2666-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:22 compute-0 nova_compute[259627]: 2025-10-14 09:10:22.962 2 INFO nova.compute.manager [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Terminating instance
Oct 14 09:10:22 compute-0 nova_compute[259627]: 2025-10-14 09:10:22.963 2 DEBUG nova.compute.manager [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:10:22 compute-0 nova_compute[259627]: 2025-10-14 09:10:22.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:10:23 compute-0 kernel: tap9ecc282f-cf (unregistering): left promiscuous mode
Oct 14 09:10:23 compute-0 NetworkManager[44885]: <info>  [1760433023.0127] device (tap9ecc282f-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:23 compute-0 ovn_controller[152662]: 2025-10-14T09:10:23Z|00776|binding|INFO|Releasing lport 9ecc282f-cf6e-46ad-96b5-188f7693ce1f from this chassis (sb_readonly=0)
Oct 14 09:10:23 compute-0 ovn_controller[152662]: 2025-10-14T09:10:23Z|00777|binding|INFO|Setting lport 9ecc282f-cf6e-46ad-96b5-188f7693ce1f down in Southbound
Oct 14 09:10:23 compute-0 ovn_controller[152662]: 2025-10-14T09:10:23Z|00778|binding|INFO|Removing iface tap9ecc282f-cf ovn-installed in OVS
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:23.030 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:96:2b 10.100.0.12'], port_security=['fa:16:3e:70:96:2b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2f8845c8-0bc8-4ce6-bf89-f775899b2666', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99db3452-8467-4a2b-a51d-30679c346bb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9099e3128b584ff7a140b8021451223e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '14cbcf9b-48b8-496d-985e-160ef22d10a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cb6279-57ca-4d4a-8018-5af2d7c42670, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=9ecc282f-cf6e-46ad-96b5-188f7693ce1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:10:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:23.032 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 9ecc282f-cf6e-46ad-96b5-188f7693ce1f in datapath 99db3452-8467-4a2b-a51d-30679c346bb2 unbound from our chassis
Oct 14 09:10:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:23.034 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99db3452-8467-4a2b-a51d-30679c346bb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:10:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:23.035 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[07b9f289-4ca5-4c82-95e9-3c45218d5f60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:23.036 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 namespace which is not needed anymore
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:23 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Oct 14 09:10:23 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d0000004b.scope: Consumed 5.405s CPU time.
Oct 14 09:10:23 compute-0 systemd-machined[214636]: Machine qemu-94-instance-0000004b terminated.
Oct 14 09:10:23 compute-0 ceph-mon[74249]: pgmap v1677: 305 pgs: 305 active+clean; 262 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 5.4 MiB/s wr, 409 op/s
Oct 14 09:10:23 compute-0 kernel: tap9ecc282f-cf: entered promiscuous mode
Oct 14 09:10:23 compute-0 kernel: tap9ecc282f-cf (unregistering): left promiscuous mode
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.213 2 INFO nova.virt.libvirt.driver [-] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Instance destroyed successfully.
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.214 2 DEBUG nova.objects.instance [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lazy-loading 'resources' on Instance uuid 2f8845c8-0bc8-4ce6-bf89-f775899b2666 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.236 2 DEBUG nova.virt.libvirt.vif [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1316946340',display_name='tempest-ServerDiskConfigTestJSON-server-1316946340',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1316946340',id=75,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:10:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9099e3128b584ff7a140b8021451223e',ramdisk_id='',reservation_id='r-eokpjxke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1253454894',owner_user_name='tempest-ServerDiskConfigTestJSON-1253454894-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:10:21Z,user_data=None,user_id='979aa20794dc414f91c59f224a0db083',uuid=2f8845c8-0bc8-4ce6-bf89-f775899b2666,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "address": "fa:16:3e:70:96:2b", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc282f-cf", "ovs_interfaceid": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.237 2 DEBUG nova.network.os_vif_util [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converting VIF {"id": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "address": "fa:16:3e:70:96:2b", "network": {"id": "99db3452-8467-4a2b-a51d-30679c346bb2", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-255539657-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9099e3128b584ff7a140b8021451223e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc282f-cf", "ovs_interfaceid": "9ecc282f-cf6e-46ad-96b5-188f7693ce1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.238 2 DEBUG nova.network.os_vif_util [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:96:2b,bridge_name='br-int',has_traffic_filtering=True,id=9ecc282f-cf6e-46ad-96b5-188f7693ce1f,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc282f-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.239 2 DEBUG os_vif [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:96:2b,bridge_name='br-int',has_traffic_filtering=True,id=9ecc282f-cf6e-46ad-96b5-188f7693ce1f,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc282f-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.243 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ecc282f-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:23 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[338835]: [NOTICE]   (338839) : haproxy version is 2.8.14-c23fe91
Oct 14 09:10:23 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[338835]: [NOTICE]   (338839) : path to executable is /usr/sbin/haproxy
Oct 14 09:10:23 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[338835]: [WARNING]  (338839) : Exiting Master process...
Oct 14 09:10:23 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[338835]: [WARNING]  (338839) : Exiting Master process...
Oct 14 09:10:23 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[338835]: [ALERT]    (338839) : Current worker (338841) exited with code 143 (Terminated)
Oct 14 09:10:23 compute-0 neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2[338835]: [WARNING]  (338839) : All workers exited. Exiting... (0)
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.251 2 INFO os_vif [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:96:2b,bridge_name='br-int',has_traffic_filtering=True,id=9ecc282f-cf6e-46ad-96b5-188f7693ce1f,network=Network(99db3452-8467-4a2b-a51d-30679c346bb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc282f-cf')
Oct 14 09:10:23 compute-0 systemd[1]: libpod-e3412da2ed47ba97fcaa6ed26556c30c1ae75777e219f160f225041967bd2e40.scope: Deactivated successfully.
Oct 14 09:10:23 compute-0 podman[338893]: 2025-10-14 09:10:23.260734837 +0000 UTC m=+0.085358393 container died e3412da2ed47ba97fcaa6ed26556c30c1ae75777e219f160f225041967bd2e40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 09:10:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e3412da2ed47ba97fcaa6ed26556c30c1ae75777e219f160f225041967bd2e40-userdata-shm.mount: Deactivated successfully.
Oct 14 09:10:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-09b28811f0eba69b926f8c12aae3330bb9c962d0919cdc644ab96129ab8804f6-merged.mount: Deactivated successfully.
Oct 14 09:10:23 compute-0 podman[338893]: 2025-10-14 09:10:23.305583501 +0000 UTC m=+0.130207057 container cleanup e3412da2ed47ba97fcaa6ed26556c30c1ae75777e219f160f225041967bd2e40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 09:10:23 compute-0 systemd[1]: libpod-conmon-e3412da2ed47ba97fcaa6ed26556c30c1ae75777e219f160f225041967bd2e40.scope: Deactivated successfully.
Oct 14 09:10:23 compute-0 podman[338949]: 2025-10-14 09:10:23.400070238 +0000 UTC m=+0.059546527 container remove e3412da2ed47ba97fcaa6ed26556c30c1ae75777e219f160f225041967bd2e40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 09:10:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:23.406 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8d8cc606-d2ff-4d08-9d75-d5870369032c]: (4, ('Tue Oct 14 09:10:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 (e3412da2ed47ba97fcaa6ed26556c30c1ae75777e219f160f225041967bd2e40)\ne3412da2ed47ba97fcaa6ed26556c30c1ae75777e219f160f225041967bd2e40\nTue Oct 14 09:10:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 (e3412da2ed47ba97fcaa6ed26556c30c1ae75777e219f160f225041967bd2e40)\ne3412da2ed47ba97fcaa6ed26556c30c1ae75777e219f160f225041967bd2e40\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:23.409 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[61bdd61e-2d1f-4994-80e8-8f5201032ed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:23.410 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99db3452-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:23 compute-0 kernel: tap99db3452-80: left promiscuous mode
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:23.457 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[04c51e5a-3e28-4659-a774-756fedddb108]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:23.489 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[11f72dcc-4081-4347-b78f-d0a737d2949c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:23.491 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eb38d031-d8f1-47be-a201-988aa5e79337]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:23.513 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8c91e9a2-cd03-4d6e-8a3b-4f917f57d114]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679625, 'reachable_time': 24889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338965, 'error': None, 'target': 'ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d99db3452\x2d8467\x2d4a2b\x2da51d\x2d30679c346bb2.mount: Deactivated successfully.
Oct 14 09:10:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:23.519 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-99db3452-8467-4a2b-a51d-30679c346bb2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:10:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:23.519 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[6147b003-23f6-4e63-ba65-7718049bd683]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.676 2 INFO nova.virt.libvirt.driver [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Deleting instance files /var/lib/nova/instances/2f8845c8-0bc8-4ce6-bf89-f775899b2666_del
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.677 2 INFO nova.virt.libvirt.driver [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Deletion of /var/lib/nova/instances/2f8845c8-0bc8-4ce6-bf89-f775899b2666_del complete
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.742 2 INFO nova.compute.manager [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.743 2 DEBUG oslo.service.loopingcall [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.743 2 DEBUG nova.compute.manager [-] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.743 2 DEBUG nova.network.neutron [-] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:10:23 compute-0 sudo[338966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:10:23 compute-0 sudo[338966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:10:23 compute-0 sudo[338966]: pam_unix(sudo:session): session closed for user root
Oct 14 09:10:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 262 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 3.2 MiB/s wr, 357 op/s
Oct 14 09:10:23 compute-0 sudo[338991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:10:23 compute-0 sudo[338991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:10:23 compute-0 sudo[338991]: pam_unix(sudo:session): session closed for user root
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.972 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:10:23 compute-0 nova_compute[259627]: 2025-10-14 09:10:23.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:10:23 compute-0 sudo[339016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:10:23 compute-0 sudo[339016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:10:23 compute-0 sudo[339016]: pam_unix(sudo:session): session closed for user root
Oct 14 09:10:24 compute-0 sudo[339041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:10:24 compute-0 sudo[339041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:10:24 compute-0 nova_compute[259627]: 2025-10-14 09:10:24.314 2 DEBUG nova.network.neutron [-] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:10:24 compute-0 nova_compute[259627]: 2025-10-14 09:10:24.336 2 INFO nova.compute.manager [-] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Took 0.59 seconds to deallocate network for instance.
Oct 14 09:10:24 compute-0 nova_compute[259627]: 2025-10-14 09:10:24.409 2 DEBUG oslo_concurrency.lockutils [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:24 compute-0 nova_compute[259627]: 2025-10-14 09:10:24.410 2 DEBUG oslo_concurrency.lockutils [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:24 compute-0 nova_compute[259627]: 2025-10-14 09:10:24.414 2 DEBUG nova.compute.manager [req-2a9f0099-28cc-4477-8ec1-240d1a223007 req-ca217188-e48e-4b4e-aa3e-a589c1fb92b7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Received event network-vif-deleted-9ecc282f-cf6e-46ad-96b5-188f7693ce1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:24 compute-0 nova_compute[259627]: 2025-10-14 09:10:24.519 2 DEBUG oslo_concurrency.processutils [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:24 compute-0 sudo[339041]: pam_unix(sudo:session): session closed for user root
Oct 14 09:10:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:10:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:10:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:10:24 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:10:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:10:24 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:10:24 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev d672299d-31b9-4d79-8321-18b902e8b6dd does not exist
Oct 14 09:10:24 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev dac189f2-f3e4-4fa1-8afa-d797f3260646 does not exist
Oct 14 09:10:24 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 8dac579d-58cc-42ca-9e47-8cb373463946 does not exist
Oct 14 09:10:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:10:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:10:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:10:24 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:10:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:10:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:10:24 compute-0 sudo[339117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:10:24 compute-0 sudo[339117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:10:24 compute-0 sudo[339117]: pam_unix(sudo:session): session closed for user root
Oct 14 09:10:24 compute-0 sudo[339142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:10:24 compute-0 sudo[339142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:10:24 compute-0 sudo[339142]: pam_unix(sudo:session): session closed for user root
Oct 14 09:10:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:10:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/950983331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:25 compute-0 nova_compute[259627]: 2025-10-14 09:10:25.031 2 DEBUG oslo_concurrency.processutils [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:25 compute-0 nova_compute[259627]: 2025-10-14 09:10:25.042 2 DEBUG nova.compute.provider_tree [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:10:25 compute-0 nova_compute[259627]: 2025-10-14 09:10:25.064 2 DEBUG nova.scheduler.client.report [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:10:25 compute-0 sudo[339167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:10:25 compute-0 sudo[339167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:10:25 compute-0 sudo[339167]: pam_unix(sudo:session): session closed for user root
Oct 14 09:10:25 compute-0 nova_compute[259627]: 2025-10-14 09:10:25.100 2 DEBUG oslo_concurrency.lockutils [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:25 compute-0 nova_compute[259627]: 2025-10-14 09:10:25.139 2 INFO nova.scheduler.client.report [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Deleted allocations for instance 2f8845c8-0bc8-4ce6-bf89-f775899b2666
Oct 14 09:10:25 compute-0 sudo[339194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:10:25 compute-0 ceph-mon[74249]: pgmap v1678: 305 pgs: 305 active+clean; 262 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 3.2 MiB/s wr, 357 op/s
Oct 14 09:10:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:10:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:10:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:10:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:10:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:10:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:10:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/950983331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:25 compute-0 sudo[339194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:10:25 compute-0 nova_compute[259627]: 2025-10-14 09:10:25.234 2 DEBUG oslo_concurrency.lockutils [None req-3262bcda-a228-4bce-8962-5c74411cba91 979aa20794dc414f91c59f224a0db083 9099e3128b584ff7a140b8021451223e - - default default] Lock "2f8845c8-0bc8-4ce6-bf89-f775899b2666" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:25 compute-0 podman[339258]: 2025-10-14 09:10:25.619332622 +0000 UTC m=+0.048160927 container create d33e1e4a9e8a7fa8da701d44e075c9a7ddf84317bf11fdfe4a93a0dd0cfd914c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_driscoll, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:10:25 compute-0 systemd[1]: Started libpod-conmon-d33e1e4a9e8a7fa8da701d44e075c9a7ddf84317bf11fdfe4a93a0dd0cfd914c.scope.
Oct 14 09:10:25 compute-0 podman[339258]: 2025-10-14 09:10:25.600310903 +0000 UTC m=+0.029139188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:10:25 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:10:25 compute-0 podman[339258]: 2025-10-14 09:10:25.710960609 +0000 UTC m=+0.139788894 container init d33e1e4a9e8a7fa8da701d44e075c9a7ddf84317bf11fdfe4a93a0dd0cfd914c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_driscoll, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:10:25 compute-0 podman[339258]: 2025-10-14 09:10:25.717430848 +0000 UTC m=+0.146259113 container start d33e1e4a9e8a7fa8da701d44e075c9a7ddf84317bf11fdfe4a93a0dd0cfd914c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_driscoll, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:10:25 compute-0 podman[339258]: 2025-10-14 09:10:25.720622587 +0000 UTC m=+0.149450892 container attach d33e1e4a9e8a7fa8da701d44e075c9a7ddf84317bf11fdfe4a93a0dd0cfd914c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 09:10:25 compute-0 sleepy_driscoll[339274]: 167 167
Oct 14 09:10:25 compute-0 systemd[1]: libpod-d33e1e4a9e8a7fa8da701d44e075c9a7ddf84317bf11fdfe4a93a0dd0cfd914c.scope: Deactivated successfully.
Oct 14 09:10:25 compute-0 conmon[339274]: conmon d33e1e4a9e8a7fa8da70 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d33e1e4a9e8a7fa8da701d44e075c9a7ddf84317bf11fdfe4a93a0dd0cfd914c.scope/container/memory.events
Oct 14 09:10:25 compute-0 podman[339258]: 2025-10-14 09:10:25.724597764 +0000 UTC m=+0.153426039 container died d33e1e4a9e8a7fa8da701d44e075c9a7ddf84317bf11fdfe4a93a0dd0cfd914c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_driscoll, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct 14 09:10:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6054e824848c7f35db780334534b0e085b96f4e66e7baa21e715338cc8b10fc-merged.mount: Deactivated successfully.
Oct 14 09:10:25 compute-0 podman[339258]: 2025-10-14 09:10:25.765079171 +0000 UTC m=+0.193907436 container remove d33e1e4a9e8a7fa8da701d44e075c9a7ddf84317bf11fdfe4a93a0dd0cfd914c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 09:10:25 compute-0 systemd[1]: libpod-conmon-d33e1e4a9e8a7fa8da701d44e075c9a7ddf84317bf11fdfe4a93a0dd0cfd914c.scope: Deactivated successfully.
Oct 14 09:10:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 215 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 3.2 MiB/s wr, 388 op/s
Oct 14 09:10:26 compute-0 podman[339299]: 2025-10-14 09:10:26.006825775 +0000 UTC m=+0.075360507 container create 46237730f6ec4cc6120b5408b90ffac37d1e95bbc8738bc874396552ced51396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 09:10:26 compute-0 podman[339299]: 2025-10-14 09:10:25.97412625 +0000 UTC m=+0.042661022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:10:26 compute-0 systemd[1]: Started libpod-conmon-46237730f6ec4cc6120b5408b90ffac37d1e95bbc8738bc874396552ced51396.scope.
Oct 14 09:10:26 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afad33a17de1be77f5d69bbcd9b640370c99bb82d7671deddf33b9683fead366/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afad33a17de1be77f5d69bbcd9b640370c99bb82d7671deddf33b9683fead366/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afad33a17de1be77f5d69bbcd9b640370c99bb82d7671deddf33b9683fead366/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afad33a17de1be77f5d69bbcd9b640370c99bb82d7671deddf33b9683fead366/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afad33a17de1be77f5d69bbcd9b640370c99bb82d7671deddf33b9683fead366/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:10:26 compute-0 podman[339299]: 2025-10-14 09:10:26.146262629 +0000 UTC m=+0.214797401 container init 46237730f6ec4cc6120b5408b90ffac37d1e95bbc8738bc874396552ced51396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shockley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:10:26 compute-0 podman[339299]: 2025-10-14 09:10:26.157392073 +0000 UTC m=+0.225926805 container start 46237730f6ec4cc6120b5408b90ffac37d1e95bbc8738bc874396552ced51396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shockley, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:10:26 compute-0 podman[339299]: 2025-10-14 09:10:26.163335239 +0000 UTC m=+0.231869971 container attach 46237730f6ec4cc6120b5408b90ffac37d1e95bbc8738bc874396552ced51396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shockley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:10:26 compute-0 podman[339322]: 2025-10-14 09:10:26.700077908 +0000 UTC m=+0.097906582 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:10:26 compute-0 podman[339321]: 2025-10-14 09:10:26.750443889 +0000 UTC m=+0.150924968 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller)
Oct 14 09:10:27 compute-0 ceph-mon[74249]: pgmap v1679: 305 pgs: 305 active+clean; 215 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 3.2 MiB/s wr, 388 op/s
Oct 14 09:10:27 compute-0 intelligent_shockley[339315]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:10:27 compute-0 intelligent_shockley[339315]: --> relative data size: 1.0
Oct 14 09:10:27 compute-0 intelligent_shockley[339315]: --> All data devices are unavailable
Oct 14 09:10:27 compute-0 systemd[1]: libpod-46237730f6ec4cc6120b5408b90ffac37d1e95bbc8738bc874396552ced51396.scope: Deactivated successfully.
Oct 14 09:10:27 compute-0 podman[339299]: 2025-10-14 09:10:27.220398601 +0000 UTC m=+1.288933303 container died 46237730f6ec4cc6120b5408b90ffac37d1e95bbc8738bc874396552ced51396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:10:27 compute-0 systemd[1]: libpod-46237730f6ec4cc6120b5408b90ffac37d1e95bbc8738bc874396552ced51396.scope: Consumed 1.011s CPU time.
Oct 14 09:10:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-afad33a17de1be77f5d69bbcd9b640370c99bb82d7671deddf33b9683fead366-merged.mount: Deactivated successfully.
Oct 14 09:10:27 compute-0 podman[339299]: 2025-10-14 09:10:27.278311738 +0000 UTC m=+1.346846440 container remove 46237730f6ec4cc6120b5408b90ffac37d1e95bbc8738bc874396552ced51396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shockley, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 09:10:27 compute-0 systemd[1]: libpod-conmon-46237730f6ec4cc6120b5408b90ffac37d1e95bbc8738bc874396552ced51396.scope: Deactivated successfully.
Oct 14 09:10:27 compute-0 sudo[339194]: pam_unix(sudo:session): session closed for user root
Oct 14 09:10:27 compute-0 sudo[339402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:10:27 compute-0 sudo[339402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:10:27 compute-0 sudo[339402]: pam_unix(sudo:session): session closed for user root
Oct 14 09:10:27 compute-0 sudo[339427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:10:27 compute-0 sudo[339427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:10:27 compute-0 sudo[339427]: pam_unix(sudo:session): session closed for user root
Oct 14 09:10:27 compute-0 nova_compute[259627]: 2025-10-14 09:10:27.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:27 compute-0 sudo[339452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:10:27 compute-0 sudo[339452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:10:27 compute-0 sudo[339452]: pam_unix(sudo:session): session closed for user root
Oct 14 09:10:27 compute-0 sudo[339477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:10:27 compute-0 sudo[339477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:10:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:10:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 215 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 39 KiB/s wr, 264 op/s
Oct 14 09:10:27 compute-0 nova_compute[259627]: 2025-10-14 09:10:27.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.012 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.012 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.012 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.013 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.013 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:28 compute-0 podman[339540]: 2025-10-14 09:10:28.055499018 +0000 UTC m=+0.067270828 container create f9f16f9069f82a359c80bb7fbe8182fd0b3495a1acba95b371ef63efec224c79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mcnulty, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 09:10:28 compute-0 systemd[1]: Started libpod-conmon-f9f16f9069f82a359c80bb7fbe8182fd0b3495a1acba95b371ef63efec224c79.scope.
Oct 14 09:10:28 compute-0 podman[339540]: 2025-10-14 09:10:28.025687324 +0000 UTC m=+0.037459114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:10:28 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:10:28 compute-0 podman[339540]: 2025-10-14 09:10:28.155949942 +0000 UTC m=+0.167721802 container init f9f16f9069f82a359c80bb7fbe8182fd0b3495a1acba95b371ef63efec224c79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 09:10:28 compute-0 podman[339540]: 2025-10-14 09:10:28.16603199 +0000 UTC m=+0.177803770 container start f9f16f9069f82a359c80bb7fbe8182fd0b3495a1acba95b371ef63efec224c79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mcnulty, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 09:10:28 compute-0 charming_mcnulty[339557]: 167 167
Oct 14 09:10:28 compute-0 systemd[1]: libpod-f9f16f9069f82a359c80bb7fbe8182fd0b3495a1acba95b371ef63efec224c79.scope: Deactivated successfully.
Oct 14 09:10:28 compute-0 podman[339540]: 2025-10-14 09:10:28.172741715 +0000 UTC m=+0.184513525 container attach f9f16f9069f82a359c80bb7fbe8182fd0b3495a1acba95b371ef63efec224c79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:10:28 compute-0 conmon[339557]: conmon f9f16f9069f82a359c80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f9f16f9069f82a359c80bb7fbe8182fd0b3495a1acba95b371ef63efec224c79.scope/container/memory.events
Oct 14 09:10:28 compute-0 podman[339540]: 2025-10-14 09:10:28.173660028 +0000 UTC m=+0.185431838 container died f9f16f9069f82a359c80bb7fbe8182fd0b3495a1acba95b371ef63efec224c79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mcnulty, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 09:10:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a19d23c48a05dba8f8c4bc207a654ef49dfe790a1ab3669864a6e18ac5219d38-merged.mount: Deactivated successfully.
Oct 14 09:10:28 compute-0 podman[339540]: 2025-10-14 09:10:28.217069007 +0000 UTC m=+0.228840787 container remove f9f16f9069f82a359c80bb7fbe8182fd0b3495a1acba95b371ef63efec224c79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 09:10:28 compute-0 systemd[1]: libpod-conmon-f9f16f9069f82a359c80bb7fbe8182fd0b3495a1acba95b371ef63efec224c79.scope: Deactivated successfully.
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:10:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2101907560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.492 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:28 compute-0 podman[339599]: 2025-10-14 09:10:28.499672497 +0000 UTC m=+0.112582424 container create 0bfbf05c30e0207ce93923bea78b7456c71679438ff0bd635b9a84444faa6d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_driscoll, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 09:10:28 compute-0 podman[339599]: 2025-10-14 09:10:28.467513325 +0000 UTC m=+0.080423342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:10:28 compute-0 systemd[1]: Started libpod-conmon-0bfbf05c30e0207ce93923bea78b7456c71679438ff0bd635b9a84444faa6d53.scope.
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.587 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.588 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:10:28 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d518725fb089d33582d580be7c6bcf5e166bd7b83e367096752df014f62de9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d518725fb089d33582d580be7c6bcf5e166bd7b83e367096752df014f62de9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d518725fb089d33582d580be7c6bcf5e166bd7b83e367096752df014f62de9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d518725fb089d33582d580be7c6bcf5e166bd7b83e367096752df014f62de9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:10:28 compute-0 podman[339599]: 2025-10-14 09:10:28.608657401 +0000 UTC m=+0.221567368 container init 0bfbf05c30e0207ce93923bea78b7456c71679438ff0bd635b9a84444faa6d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.614 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.614 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.615 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:10:28 compute-0 podman[339599]: 2025-10-14 09:10:28.619635571 +0000 UTC m=+0.232545518 container start 0bfbf05c30e0207ce93923bea78b7456c71679438ff0bd635b9a84444faa6d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:10:28 compute-0 podman[339599]: 2025-10-14 09:10:28.623044375 +0000 UTC m=+0.235954312 container attach 0bfbf05c30e0207ce93923bea78b7456c71679438ff0bd635b9a84444faa6d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.805 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.808 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3523MB free_disk=59.90094757080078GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.809 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.810 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.897 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 16c93e17-00f2-4710-a0e4-83eb60430088 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.898 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance e503e351-20ec-43fb-b5f8-7af68dff5bcd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.898 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.899 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:10:28 compute-0 nova_compute[259627]: 2025-10-14 09:10:28.975 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:29 compute-0 ceph-mon[74249]: pgmap v1680: 305 pgs: 305 active+clean; 215 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 39 KiB/s wr, 264 op/s
Oct 14 09:10:29 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2101907560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:29 compute-0 serene_driscoll[339618]: {
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:     "0": [
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:         {
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "devices": [
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "/dev/loop3"
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             ],
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "lv_name": "ceph_lv0",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "lv_size": "21470642176",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "name": "ceph_lv0",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "tags": {
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.cluster_name": "ceph",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.crush_device_class": "",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.encrypted": "0",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.osd_id": "0",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.type": "block",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.vdo": "0"
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             },
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "type": "block",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "vg_name": "ceph_vg0"
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:         }
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:     ],
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:     "1": [
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:         {
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "devices": [
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "/dev/loop4"
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             ],
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "lv_name": "ceph_lv1",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "lv_size": "21470642176",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "name": "ceph_lv1",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "tags": {
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.cluster_name": "ceph",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.crush_device_class": "",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.encrypted": "0",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.osd_id": "1",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.type": "block",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.vdo": "0"
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             },
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "type": "block",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "vg_name": "ceph_vg1"
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:         }
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:     ],
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:     "2": [
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:         {
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "devices": [
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "/dev/loop5"
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             ],
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "lv_name": "ceph_lv2",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "lv_size": "21470642176",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "name": "ceph_lv2",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "tags": {
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.cluster_name": "ceph",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.crush_device_class": "",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.encrypted": "0",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.osd_id": "2",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.type": "block",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:                 "ceph.vdo": "0"
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             },
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "type": "block",
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:             "vg_name": "ceph_vg2"
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:         }
Oct 14 09:10:29 compute-0 serene_driscoll[339618]:     ]
Oct 14 09:10:29 compute-0 serene_driscoll[339618]: }
Oct 14 09:10:29 compute-0 systemd[1]: libpod-0bfbf05c30e0207ce93923bea78b7456c71679438ff0bd635b9a84444faa6d53.scope: Deactivated successfully.
Oct 14 09:10:29 compute-0 podman[339599]: 2025-10-14 09:10:29.420261109 +0000 UTC m=+1.033171096 container died 0bfbf05c30e0207ce93923bea78b7456c71679438ff0bd635b9a84444faa6d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:10:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2d518725fb089d33582d580be7c6bcf5e166bd7b83e367096752df014f62de9-merged.mount: Deactivated successfully.
Oct 14 09:10:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:10:29 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/338346173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:29 compute-0 podman[339599]: 2025-10-14 09:10:29.497605333 +0000 UTC m=+1.110515300 container remove 0bfbf05c30e0207ce93923bea78b7456c71679438ff0bd635b9a84444faa6d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_driscoll, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:10:29 compute-0 nova_compute[259627]: 2025-10-14 09:10:29.506 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:29 compute-0 nova_compute[259627]: 2025-10-14 09:10:29.514 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:10:29 compute-0 systemd[1]: libpod-conmon-0bfbf05c30e0207ce93923bea78b7456c71679438ff0bd635b9a84444faa6d53.scope: Deactivated successfully.
Oct 14 09:10:29 compute-0 sudo[339477]: pam_unix(sudo:session): session closed for user root
Oct 14 09:10:29 compute-0 nova_compute[259627]: 2025-10-14 09:10:29.534 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:10:29 compute-0 nova_compute[259627]: 2025-10-14 09:10:29.563 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:10:29 compute-0 nova_compute[259627]: 2025-10-14 09:10:29.564 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:29 compute-0 sudo[339660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:10:29 compute-0 sudo[339660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:10:29 compute-0 sudo[339660]: pam_unix(sudo:session): session closed for user root
Oct 14 09:10:29 compute-0 sudo[339685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:10:29 compute-0 sudo[339685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:10:29 compute-0 sudo[339685]: pam_unix(sudo:session): session closed for user root
Oct 14 09:10:29 compute-0 sudo[339710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:10:29 compute-0 sudo[339710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:10:29 compute-0 sudo[339710]: pam_unix(sudo:session): session closed for user root
Oct 14 09:10:29 compute-0 sudo[339735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:10:29 compute-0 sudo[339735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:10:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 215 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 39 KiB/s wr, 264 op/s
Oct 14 09:10:30 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/338346173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:30 compute-0 podman[339800]: 2025-10-14 09:10:30.221680226 +0000 UTC m=+0.060272286 container create 2d2893647e25ce1e957f5022b056f79db09a37a8ad435d016d6ac1285802e9db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 09:10:30 compute-0 systemd[1]: Started libpod-conmon-2d2893647e25ce1e957f5022b056f79db09a37a8ad435d016d6ac1285802e9db.scope.
Oct 14 09:10:30 compute-0 podman[339800]: 2025-10-14 09:10:30.189902963 +0000 UTC m=+0.028495073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:10:30 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:10:30 compute-0 podman[339800]: 2025-10-14 09:10:30.326974749 +0000 UTC m=+0.165566849 container init 2d2893647e25ce1e957f5022b056f79db09a37a8ad435d016d6ac1285802e9db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 09:10:30 compute-0 podman[339800]: 2025-10-14 09:10:30.338163784 +0000 UTC m=+0.176755804 container start 2d2893647e25ce1e957f5022b056f79db09a37a8ad435d016d6ac1285802e9db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 09:10:30 compute-0 podman[339800]: 2025-10-14 09:10:30.341963278 +0000 UTC m=+0.180555328 container attach 2d2893647e25ce1e957f5022b056f79db09a37a8ad435d016d6ac1285802e9db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:10:30 compute-0 dazzling_ritchie[339816]: 167 167
Oct 14 09:10:30 compute-0 systemd[1]: libpod-2d2893647e25ce1e957f5022b056f79db09a37a8ad435d016d6ac1285802e9db.scope: Deactivated successfully.
Oct 14 09:10:30 compute-0 podman[339800]: 2025-10-14 09:10:30.34487131 +0000 UTC m=+0.183463380 container died 2d2893647e25ce1e957f5022b056f79db09a37a8ad435d016d6ac1285802e9db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 09:10:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b6ed53816313679c1b792a234b76af8e4910205df74b92dbb7a3b5196da2691-merged.mount: Deactivated successfully.
Oct 14 09:10:30 compute-0 podman[339800]: 2025-10-14 09:10:30.400704145 +0000 UTC m=+0.239296195 container remove 2d2893647e25ce1e957f5022b056f79db09a37a8ad435d016d6ac1285802e9db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 09:10:30 compute-0 systemd[1]: libpod-conmon-2d2893647e25ce1e957f5022b056f79db09a37a8ad435d016d6ac1285802e9db.scope: Deactivated successfully.
Oct 14 09:10:30 compute-0 nova_compute[259627]: 2025-10-14 09:10:30.565 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:10:30 compute-0 nova_compute[259627]: 2025-10-14 09:10:30.565 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:10:30 compute-0 nova_compute[259627]: 2025-10-14 09:10:30.566 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:10:30 compute-0 podman[339840]: 2025-10-14 09:10:30.646585779 +0000 UTC m=+0.077131129 container create 8d95a93e00ab75c07a9790604d2b295d48269e4aa5462f71006f82b5bb922bd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elion, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 09:10:30 compute-0 systemd[1]: Started libpod-conmon-8d95a93e00ab75c07a9790604d2b295d48269e4aa5462f71006f82b5bb922bd5.scope.
Oct 14 09:10:30 compute-0 podman[339840]: 2025-10-14 09:10:30.615446062 +0000 UTC m=+0.045991472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:10:30 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb7920f08431eac287f5132c0bc37c47b2a1cac8da062d62102329db58315269/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb7920f08431eac287f5132c0bc37c47b2a1cac8da062d62102329db58315269/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb7920f08431eac287f5132c0bc37c47b2a1cac8da062d62102329db58315269/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb7920f08431eac287f5132c0bc37c47b2a1cac8da062d62102329db58315269/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:10:30 compute-0 podman[339840]: 2025-10-14 09:10:30.767096847 +0000 UTC m=+0.197642227 container init 8d95a93e00ab75c07a9790604d2b295d48269e4aa5462f71006f82b5bb922bd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 09:10:30 compute-0 podman[339840]: 2025-10-14 09:10:30.782792374 +0000 UTC m=+0.213337724 container start 8d95a93e00ab75c07a9790604d2b295d48269e4aa5462f71006f82b5bb922bd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elion, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:10:30 compute-0 podman[339840]: 2025-10-14 09:10:30.787736955 +0000 UTC m=+0.218282365 container attach 8d95a93e00ab75c07a9790604d2b295d48269e4aa5462f71006f82b5bb922bd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elion, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:10:31 compute-0 nova_compute[259627]: 2025-10-14 09:10:31.042 2 DEBUG nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 09:10:31 compute-0 ceph-mon[74249]: pgmap v1681: 305 pgs: 305 active+clean; 215 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 39 KiB/s wr, 264 op/s
Oct 14 09:10:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 248 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 2.2 MiB/s wr, 329 op/s
Oct 14 09:10:31 compute-0 hopeful_elion[339857]: {
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:         "osd_id": 2,
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:         "type": "bluestore"
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:     },
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:         "osd_id": 1,
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:         "type": "bluestore"
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:     },
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:         "osd_id": 0,
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:         "type": "bluestore"
Oct 14 09:10:31 compute-0 hopeful_elion[339857]:     }
Oct 14 09:10:31 compute-0 hopeful_elion[339857]: }
Oct 14 09:10:31 compute-0 systemd[1]: libpod-8d95a93e00ab75c07a9790604d2b295d48269e4aa5462f71006f82b5bb922bd5.scope: Deactivated successfully.
Oct 14 09:10:31 compute-0 systemd[1]: libpod-8d95a93e00ab75c07a9790604d2b295d48269e4aa5462f71006f82b5bb922bd5.scope: Consumed 1.162s CPU time.
Oct 14 09:10:31 compute-0 podman[339840]: 2025-10-14 09:10:31.941814718 +0000 UTC m=+1.372360038 container died 8d95a93e00ab75c07a9790604d2b295d48269e4aa5462f71006f82b5bb922bd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elion, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 09:10:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb7920f08431eac287f5132c0bc37c47b2a1cac8da062d62102329db58315269-merged.mount: Deactivated successfully.
Oct 14 09:10:31 compute-0 nova_compute[259627]: 2025-10-14 09:10:31.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:10:31 compute-0 nova_compute[259627]: 2025-10-14 09:10:31.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:10:31 compute-0 nova_compute[259627]: 2025-10-14 09:10:31.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:10:32 compute-0 podman[339840]: 2025-10-14 09:10:32.016978769 +0000 UTC m=+1.447524079 container remove 8d95a93e00ab75c07a9790604d2b295d48269e4aa5462f71006f82b5bb922bd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 09:10:32 compute-0 systemd[1]: libpod-conmon-8d95a93e00ab75c07a9790604d2b295d48269e4aa5462f71006f82b5bb922bd5.scope: Deactivated successfully.
Oct 14 09:10:32 compute-0 sudo[339735]: pam_unix(sudo:session): session closed for user root
Oct 14 09:10:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:10:32 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:10:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:10:32 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:10:32 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 4d21a876-1f78-41af-956b-371d0eeb8c88 does not exist
Oct 14 09:10:32 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev a69a663c-279f-438a-9cbc-1a911dd0d4ab does not exist
Oct 14 09:10:32 compute-0 sudo[339904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:10:32 compute-0 sudo[339904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:10:32 compute-0 sudo[339904]: pam_unix(sudo:session): session closed for user root
Oct 14 09:10:32 compute-0 sudo[339929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:10:32 compute-0 sudo[339929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:10:32 compute-0 sudo[339929]: pam_unix(sudo:session): session closed for user root
Oct 14 09:10:32 compute-0 nova_compute[259627]: 2025-10-14 09:10:32.305 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-16c93e17-00f2-4710-a0e4-83eb60430088" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:10:32 compute-0 nova_compute[259627]: 2025-10-14 09:10:32.306 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-16c93e17-00f2-4710-a0e4-83eb60430088" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:10:32 compute-0 nova_compute[259627]: 2025-10-14 09:10:32.306 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:10:32 compute-0 nova_compute[259627]: 2025-10-14 09:10:32.306 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 16c93e17-00f2-4710-a0e4-83eb60430088 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:32 compute-0 nova_compute[259627]: 2025-10-14 09:10:32.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:10:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:10:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:10:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:10:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:10:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:10:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:10:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:10:32
Oct 14 09:10:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:10:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:10:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.log', 'default.rgw.meta', 'images', 'volumes', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.mgr', '.rgw.root']
Oct 14 09:10:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:10:32 compute-0 nova_compute[259627]: 2025-10-14 09:10:32.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:32 compute-0 nova_compute[259627]: 2025-10-14 09:10:32.844 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433017.8431153, 2a2a6043-9b27-4acb-b7db-927b576ffafc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:32 compute-0 nova_compute[259627]: 2025-10-14 09:10:32.844 2 INFO nova.compute.manager [-] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] VM Stopped (Lifecycle Event)
Oct 14 09:10:32 compute-0 nova_compute[259627]: 2025-10-14 09:10:32.864 2 DEBUG nova.compute.manager [None req-769ba335-bbb8-48ad-b23f-2e7a15a840f2 - - - - - -] [instance: 2a2a6043-9b27-4acb-b7db-927b576ffafc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:10:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:10:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:10:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:10:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:10:33 compute-0 ceph-mon[74249]: pgmap v1682: 305 pgs: 305 active+clean; 248 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 2.2 MiB/s wr, 329 op/s
Oct 14 09:10:33 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:10:33 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:10:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:10:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:10:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:10:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:10:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:10:33 compute-0 nova_compute[259627]: 2025-10-14 09:10:33.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:33 compute-0 kernel: tap5bb94dcb-f5 (unregistering): left promiscuous mode
Oct 14 09:10:33 compute-0 NetworkManager[44885]: <info>  [1760433033.3803] device (tap5bb94dcb-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:10:33 compute-0 nova_compute[259627]: 2025-10-14 09:10:33.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:33 compute-0 ovn_controller[152662]: 2025-10-14T09:10:33Z|00779|binding|INFO|Releasing lport 5bb94dcb-f594-4577-9c20-65816d7c57a0 from this chassis (sb_readonly=0)
Oct 14 09:10:33 compute-0 ovn_controller[152662]: 2025-10-14T09:10:33Z|00780|binding|INFO|Setting lport 5bb94dcb-f594-4577-9c20-65816d7c57a0 down in Southbound
Oct 14 09:10:33 compute-0 ovn_controller[152662]: 2025-10-14T09:10:33Z|00781|binding|INFO|Removing iface tap5bb94dcb-f5 ovn-installed in OVS
Oct 14 09:10:33 compute-0 nova_compute[259627]: 2025-10-14 09:10:33.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:33.401 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:fd:b8 10.100.0.9'], port_security=['fa:16:3e:bb:fd:b8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e503e351-20ec-43fb-b5f8-7af68dff5bcd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb52c397-b1e0-4244-ac38-60ca1e4abace', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51ae58236f6a432e93764d455a502033', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fcb895cc-6512-4f14-be87-dd3d6289c2b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=400ed612-fb0d-4559-97c6-466a993e3d60, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=5bb94dcb-f594-4577-9c20-65816d7c57a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:10:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:33.404 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 5bb94dcb-f594-4577-9c20-65816d7c57a0 in datapath fb52c397-b1e0-4244-ac38-60ca1e4abace unbound from our chassis
Oct 14 09:10:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:33.406 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fb52c397-b1e0-4244-ac38-60ca1e4abace or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:10:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:33.407 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ea4a1572-59cb-4512-834d-441217371ef3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:33 compute-0 nova_compute[259627]: 2025-10-14 09:10:33.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:33 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Oct 14 09:10:33 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004a.scope: Consumed 13.151s CPU time.
Oct 14 09:10:33 compute-0 systemd-machined[214636]: Machine qemu-93-instance-0000004a terminated.
Oct 14 09:10:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 248 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 549 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Oct 14 09:10:33 compute-0 nova_compute[259627]: 2025-10-14 09:10:33.898 2 DEBUG oslo_concurrency.lockutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Acquiring lock "654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:33 compute-0 nova_compute[259627]: 2025-10-14 09:10:33.899 2 DEBUG oslo_concurrency.lockutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lock "654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:33 compute-0 nova_compute[259627]: 2025-10-14 09:10:33.924 2 DEBUG nova.compute.manager [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.010 2 DEBUG oslo_concurrency.lockutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.010 2 DEBUG oslo_concurrency.lockutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.019 2 DEBUG nova.virt.hardware [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.020 2 INFO nova.compute.claims [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.066 2 INFO nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Instance shutdown successfully after 13 seconds.
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.072 2 INFO nova.virt.libvirt.driver [-] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Instance destroyed successfully.
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.072 2 DEBUG nova.objects.instance [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'numa_topology' on Instance uuid e503e351-20ec-43fb-b5f8-7af68dff5bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.087 2 INFO nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Attempting rescue
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.088 2 DEBUG nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.092 2 DEBUG nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.092 2 INFO nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Creating image(s)
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.126 2 DEBUG nova.storage.rbd_utils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.132 2 DEBUG nova.objects.instance [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e503e351-20ec-43fb-b5f8-7af68dff5bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.204 2 DEBUG nova.storage.rbd_utils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.244 2 DEBUG nova.storage.rbd_utils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.249 2 DEBUG oslo_concurrency.processutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.314 2 DEBUG oslo_concurrency.processutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.350 2 DEBUG oslo_concurrency.processutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.351 2 DEBUG oslo_concurrency.lockutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.352 2 DEBUG oslo_concurrency.lockutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.352 2 DEBUG oslo_concurrency.lockutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.380 2 DEBUG nova.storage.rbd_utils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.384 2 DEBUG oslo_concurrency.processutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.487 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Updating instance_info_cache with network_info: [{"id": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "address": "fa:16:3e:f5:8d:59", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2aa7bfa4-d8", "ovs_interfaceid": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.506 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-16c93e17-00f2-4710-a0e4-83eb60430088" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.506 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.507 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.747 2 DEBUG oslo_concurrency.processutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.748 2 DEBUG nova.objects.instance [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'migration_context' on Instance uuid e503e351-20ec-43fb-b5f8-7af68dff5bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:10:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1263609671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.769 2 DEBUG nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.770 2 DEBUG nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Start _get_guest_xml network_info=[{"id": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "address": "fa:16:3e:bb:fd:b8", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-237057849-network", "vif_mac": "fa:16:3e:bb:fd:b8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bb94dcb-f5", "ovs_interfaceid": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.771 2 DEBUG nova.objects.instance [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'resources' on Instance uuid e503e351-20ec-43fb-b5f8-7af68dff5bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.780 2 DEBUG oslo_concurrency.processutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.788 2 DEBUG nova.compute.provider_tree [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.796 2 WARNING nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.802 2 DEBUG nova.virt.libvirt.host [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.803 2 DEBUG nova.virt.libvirt.host [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.809 2 DEBUG nova.scheduler.client.report [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.817 2 DEBUG nova.virt.libvirt.host [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.818 2 DEBUG nova.virt.libvirt.host [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.819 2 DEBUG nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.819 2 DEBUG nova.virt.hardware [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.820 2 DEBUG nova.virt.hardware [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.820 2 DEBUG nova.virt.hardware [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.821 2 DEBUG nova.virt.hardware [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.822 2 DEBUG nova.virt.hardware [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.822 2 DEBUG nova.virt.hardware [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.823 2 DEBUG nova.virt.hardware [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.823 2 DEBUG nova.virt.hardware [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.824 2 DEBUG nova.virt.hardware [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.824 2 DEBUG nova.virt.hardware [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.824 2 DEBUG nova.virt.hardware [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.825 2 DEBUG nova.objects.instance [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e503e351-20ec-43fb-b5f8-7af68dff5bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.844 2 DEBUG oslo_concurrency.lockutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.845 2 DEBUG nova.compute.manager [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.859 2 DEBUG oslo_concurrency.processutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.919 2 DEBUG nova.compute.manager [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.920 2 DEBUG nova.network.neutron [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.948 2 INFO nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:10:34 compute-0 nova_compute[259627]: 2025-10-14 09:10:34.974 2 DEBUG nova.compute.manager [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:10:35 compute-0 ceph-mon[74249]: pgmap v1683: 305 pgs: 305 active+clean; 248 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 549 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Oct 14 09:10:35 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1263609671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.083 2 DEBUG nova.compute.manager [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.085 2 DEBUG nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.086 2 INFO nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Creating image(s)
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.122 2 DEBUG nova.storage.rbd_utils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] rbd image 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.155 2 DEBUG nova.storage.rbd_utils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] rbd image 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.185 2 DEBUG nova.storage.rbd_utils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] rbd image 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.189 2 DEBUG oslo_concurrency.processutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.230 2 DEBUG nova.policy [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5d68fca3e4934055a82c08459fe7da4f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '043faa2f5adf405d84227d60182cd0c3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.274 2 DEBUG oslo_concurrency.processutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.275 2 DEBUG oslo_concurrency.lockutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.276 2 DEBUG oslo_concurrency.lockutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.276 2 DEBUG oslo_concurrency.lockutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.300 2 DEBUG nova.storage.rbd_utils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] rbd image 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.304 2 DEBUG oslo_concurrency.processutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:10:35 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3679000467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.357 2 DEBUG oslo_concurrency.processutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.359 2 DEBUG oslo_concurrency.processutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.655 2 DEBUG oslo_concurrency.processutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.716 2 DEBUG nova.storage.rbd_utils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] resizing rbd image 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.782 2 DEBUG nova.network.neutron [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Successfully created port: 08a396fa-6247-4d31-89a2-80d630d0fb9c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.834 2 DEBUG nova.objects.instance [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lazy-loading 'migration_context' on Instance uuid 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.853 2 DEBUG nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.854 2 DEBUG nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Ensure instance console log exists: /var/lib/nova/instances/654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.854 2 DEBUG oslo_concurrency.lockutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.855 2 DEBUG oslo_concurrency.lockutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:35 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/707455795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.855 2 DEBUG oslo_concurrency.lockutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 294 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 559 KiB/s rd, 3.9 MiB/s wr, 116 op/s
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.872 2 DEBUG oslo_concurrency.processutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:35 compute-0 nova_compute[259627]: 2025-10-14 09:10:35.873 2 DEBUG oslo_concurrency.processutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:36 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3679000467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:36 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/707455795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:10:36.101542) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433036101603, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1557, "num_deletes": 255, "total_data_size": 2202010, "memory_usage": 2246064, "flush_reason": "Manual Compaction"}
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433036117256, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 2165938, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33844, "largest_seqno": 35400, "table_properties": {"data_size": 2158721, "index_size": 4160, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16057, "raw_average_key_size": 20, "raw_value_size": 2143867, "raw_average_value_size": 2752, "num_data_blocks": 184, "num_entries": 779, "num_filter_entries": 779, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760432907, "oldest_key_time": 1760432907, "file_creation_time": 1760433036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 15861 microseconds, and 10598 cpu microseconds.
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:10:36.117410) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 2165938 bytes OK
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:10:36.117491) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:10:36.119664) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:10:36.119685) EVENT_LOG_v1 {"time_micros": 1760433036119678, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:10:36.119709) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2195056, prev total WAL file size 2195056, number of live WAL files 2.
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:10:36.121267) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(2115KB)], [74(9250KB)]
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433036121322, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11638924, "oldest_snapshot_seqno": -1}
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6007 keys, 9908247 bytes, temperature: kUnknown
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433036182885, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 9908247, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9865194, "index_size": 26943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15045, "raw_key_size": 151833, "raw_average_key_size": 25, "raw_value_size": 9754417, "raw_average_value_size": 1623, "num_data_blocks": 1091, "num_entries": 6007, "num_filter_entries": 6007, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760433036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:10:36.183156) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 9908247 bytes
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:10:36.184499) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.9 rd, 160.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 9.0 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(9.9) write-amplify(4.6) OK, records in: 6531, records dropped: 524 output_compression: NoCompression
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:10:36.184518) EVENT_LOG_v1 {"time_micros": 1760433036184509, "job": 42, "event": "compaction_finished", "compaction_time_micros": 61620, "compaction_time_cpu_micros": 29125, "output_level": 6, "num_output_files": 1, "total_output_size": 9908247, "num_input_records": 6531, "num_output_records": 6007, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433036185221, "job": 42, "event": "table_file_deletion", "file_number": 76}
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433036187114, "job": 42, "event": "table_file_deletion", "file_number": 74}
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:10:36.121206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:10:36.187145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:10:36.187150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:10:36.187152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:10:36.187154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:10:36 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:10:36.187156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:10:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:10:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3421827097' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.446 2 DEBUG oslo_concurrency.processutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.449 2 DEBUG nova.virt.libvirt.vif [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1594806687',display_name='tempest-ServerRescueTestJSON-server-1594806687',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1594806687',id=74,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:10:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='51ae58236f6a432e93764d455a502033',ramdisk_id='',reservation_id='r-ojp0uarx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-156376826',owner_user_name='tempest-ServerRescueTestJSON-156376826-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:10:17Z,user_data=None,user_id='7a268118aae14d449097f4a26371415e',uuid=e503e351-20ec-43fb-b5f8-7af68dff5bcd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "address": "fa:16:3e:bb:fd:b8", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-237057849-network", "vif_mac": "fa:16:3e:bb:fd:b8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bb94dcb-f5", "ovs_interfaceid": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.450 2 DEBUG nova.network.os_vif_util [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Converting VIF {"id": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "address": "fa:16:3e:bb:fd:b8", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-237057849-network", "vif_mac": "fa:16:3e:bb:fd:b8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bb94dcb-f5", "ovs_interfaceid": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.451 2 DEBUG nova.network.os_vif_util [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:fd:b8,bridge_name='br-int',has_traffic_filtering=True,id=5bb94dcb-f594-4577-9c20-65816d7c57a0,network=Network(fb52c397-b1e0-4244-ac38-60ca1e4abace),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bb94dcb-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.453 2 DEBUG nova.objects.instance [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'pci_devices' on Instance uuid e503e351-20ec-43fb-b5f8-7af68dff5bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.478 2 DEBUG nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:10:36 compute-0 nova_compute[259627]:   <uuid>e503e351-20ec-43fb-b5f8-7af68dff5bcd</uuid>
Oct 14 09:10:36 compute-0 nova_compute[259627]:   <name>instance-0000004a</name>
Oct 14 09:10:36 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:10:36 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:10:36 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerRescueTestJSON-server-1594806687</nova:name>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:10:34</nova:creationTime>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:10:36 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:10:36 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:10:36 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:10:36 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:10:36 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:10:36 compute-0 nova_compute[259627]:         <nova:user uuid="7a268118aae14d449097f4a26371415e">tempest-ServerRescueTestJSON-156376826-project-member</nova:user>
Oct 14 09:10:36 compute-0 nova_compute[259627]:         <nova:project uuid="51ae58236f6a432e93764d455a502033">tempest-ServerRescueTestJSON-156376826</nova:project>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:10:36 compute-0 nova_compute[259627]:         <nova:port uuid="5bb94dcb-f594-4577-9c20-65816d7c57a0">
Oct 14 09:10:36 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:10:36 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:10:36 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <system>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <entry name="serial">e503e351-20ec-43fb-b5f8-7af68dff5bcd</entry>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <entry name="uuid">e503e351-20ec-43fb-b5f8-7af68dff5bcd</entry>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     </system>
Oct 14 09:10:36 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:10:36 compute-0 nova_compute[259627]:   <os>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:   </os>
Oct 14 09:10:36 compute-0 nova_compute[259627]:   <features>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:   </features>
Oct 14 09:10:36 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:10:36 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:10:36 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk.rescue">
Oct 14 09:10:36 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       </source>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:10:36 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk">
Oct 14 09:10:36 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       </source>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:10:36 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <target dev="vdb" bus="virtio"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk.config.rescue">
Oct 14 09:10:36 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       </source>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:10:36 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:bb:fd:b8"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <target dev="tap5bb94dcb-f5"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/e503e351-20ec-43fb-b5f8-7af68dff5bcd/console.log" append="off"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <video>
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     </video>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:10:36 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:10:36 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:10:36 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:10:36 compute-0 nova_compute[259627]: </domain>
Oct 14 09:10:36 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.490 2 INFO nova.virt.libvirt.driver [-] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Instance destroyed successfully.
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.511 2 DEBUG nova.network.neutron [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Successfully updated port: 08a396fa-6247-4d31-89a2-80d630d0fb9c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.539 2 DEBUG oslo_concurrency.lockutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Acquiring lock "refresh_cache-654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.539 2 DEBUG oslo_concurrency.lockutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Acquired lock "refresh_cache-654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.539 2 DEBUG nova.network.neutron [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.569 2 DEBUG nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.570 2 DEBUG nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.570 2 DEBUG nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.571 2 DEBUG nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] No VIF found with MAC fa:16:3e:bb:fd:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.571 2 INFO nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Using config drive
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.603 2 DEBUG nova.storage.rbd_utils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.626 2 DEBUG nova.objects.instance [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e503e351-20ec-43fb-b5f8-7af68dff5bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.659 2 DEBUG nova.objects.instance [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'keypairs' on Instance uuid e503e351-20ec-43fb-b5f8-7af68dff5bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.819 2 DEBUG nova.network.neutron [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.934 2 DEBUG nova.compute.manager [req-9625127b-db46-4cf7-ac2a-853cff6b8f29 req-fbaaf24d-3e30-4271-afd1-2dee085d1b9c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Received event network-changed-08a396fa-6247-4d31-89a2-80d630d0fb9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.935 2 DEBUG nova.compute.manager [req-9625127b-db46-4cf7-ac2a-853cff6b8f29 req-fbaaf24d-3e30-4271-afd1-2dee085d1b9c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Refreshing instance network info cache due to event network-changed-08a396fa-6247-4d31-89a2-80d630d0fb9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.935 2 DEBUG oslo_concurrency.lockutils [req-9625127b-db46-4cf7-ac2a-853cff6b8f29 req-fbaaf24d-3e30-4271-afd1-2dee085d1b9c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:10:36 compute-0 nova_compute[259627]: 2025-10-14 09:10:36.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:10:37 compute-0 nova_compute[259627]: 2025-10-14 09:10:37.031 2 INFO nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Creating config drive at /var/lib/nova/instances/e503e351-20ec-43fb-b5f8-7af68dff5bcd/disk.config.rescue
Oct 14 09:10:37 compute-0 nova_compute[259627]: 2025-10-14 09:10:37.039 2 DEBUG oslo_concurrency.processutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e503e351-20ec-43fb-b5f8-7af68dff5bcd/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp68_qhkbr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:37 compute-0 ceph-mon[74249]: pgmap v1684: 305 pgs: 305 active+clean; 294 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 559 KiB/s rd, 3.9 MiB/s wr, 116 op/s
Oct 14 09:10:37 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3421827097' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:37 compute-0 nova_compute[259627]: 2025-10-14 09:10:37.199 2 DEBUG oslo_concurrency.processutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e503e351-20ec-43fb-b5f8-7af68dff5bcd/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp68_qhkbr" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:37 compute-0 nova_compute[259627]: 2025-10-14 09:10:37.235 2 DEBUG nova.storage.rbd_utils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] rbd image e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:37 compute-0 nova_compute[259627]: 2025-10-14 09:10:37.240 2 DEBUG oslo_concurrency.processutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e503e351-20ec-43fb-b5f8-7af68dff5bcd/disk.config.rescue e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:37 compute-0 nova_compute[259627]: 2025-10-14 09:10:37.437 2 DEBUG oslo_concurrency.processutils [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e503e351-20ec-43fb-b5f8-7af68dff5bcd/disk.config.rescue e503e351-20ec-43fb-b5f8-7af68dff5bcd_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:37 compute-0 nova_compute[259627]: 2025-10-14 09:10:37.438 2 INFO nova.virt.libvirt.driver [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Deleting local config drive /var/lib/nova/instances/e503e351-20ec-43fb-b5f8-7af68dff5bcd/disk.config.rescue because it was imported into RBD.
Oct 14 09:10:37 compute-0 kernel: tap5bb94dcb-f5: entered promiscuous mode
Oct 14 09:10:37 compute-0 ovn_controller[152662]: 2025-10-14T09:10:37Z|00782|binding|INFO|Claiming lport 5bb94dcb-f594-4577-9c20-65816d7c57a0 for this chassis.
Oct 14 09:10:37 compute-0 nova_compute[259627]: 2025-10-14 09:10:37.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:37 compute-0 ovn_controller[152662]: 2025-10-14T09:10:37Z|00783|binding|INFO|5bb94dcb-f594-4577-9c20-65816d7c57a0: Claiming fa:16:3e:bb:fd:b8 10.100.0.9
Oct 14 09:10:37 compute-0 NetworkManager[44885]: <info>  [1760433037.5202] manager: (tap5bb94dcb-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/327)
Oct 14 09:10:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:37.525 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:fd:b8 10.100.0.9'], port_security=['fa:16:3e:bb:fd:b8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e503e351-20ec-43fb-b5f8-7af68dff5bcd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb52c397-b1e0-4244-ac38-60ca1e4abace', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51ae58236f6a432e93764d455a502033', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fcb895cc-6512-4f14-be87-dd3d6289c2b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=400ed612-fb0d-4559-97c6-466a993e3d60, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=5bb94dcb-f594-4577-9c20-65816d7c57a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:10:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:37.527 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 5bb94dcb-f594-4577-9c20-65816d7c57a0 in datapath fb52c397-b1e0-4244-ac38-60ca1e4abace bound to our chassis
Oct 14 09:10:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:37.529 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fb52c397-b1e0-4244-ac38-60ca1e4abace or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:10:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:37.530 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4f87af59-0380-4236-8d12-d414d4fe90a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:37 compute-0 ovn_controller[152662]: 2025-10-14T09:10:37Z|00784|binding|INFO|Setting lport 5bb94dcb-f594-4577-9c20-65816d7c57a0 ovn-installed in OVS
Oct 14 09:10:37 compute-0 ovn_controller[152662]: 2025-10-14T09:10:37Z|00785|binding|INFO|Setting lport 5bb94dcb-f594-4577-9c20-65816d7c57a0 up in Southbound
Oct 14 09:10:37 compute-0 nova_compute[259627]: 2025-10-14 09:10:37.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:37 compute-0 nova_compute[259627]: 2025-10-14 09:10:37.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:37 compute-0 systemd-udevd[340392]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:10:37 compute-0 systemd-machined[214636]: New machine qemu-95-instance-0000004a.
Oct 14 09:10:37 compute-0 systemd[1]: Started Virtual Machine qemu-95-instance-0000004a.
Oct 14 09:10:37 compute-0 NetworkManager[44885]: <info>  [1760433037.5867] device (tap5bb94dcb-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:10:37 compute-0 NetworkManager[44885]: <info>  [1760433037.5883] device (tap5bb94dcb-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:10:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:10:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 294 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 398 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Oct 14 09:10:38 compute-0 nova_compute[259627]: 2025-10-14 09:10:38.210 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433023.2096512, 2f8845c8-0bc8-4ce6-bf89-f775899b2666 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:38 compute-0 nova_compute[259627]: 2025-10-14 09:10:38.211 2 INFO nova.compute.manager [-] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] VM Stopped (Lifecycle Event)
Oct 14 09:10:38 compute-0 nova_compute[259627]: 2025-10-14 09:10:38.251 2 DEBUG nova.compute.manager [None req-3ac85e0c-884e-4599-9a85-49cccf0de999 - - - - - -] [instance: 2f8845c8-0bc8-4ce6-bf89-f775899b2666] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:38 compute-0 nova_compute[259627]: 2025-10-14 09:10:38.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:38 compute-0 nova_compute[259627]: 2025-10-14 09:10:38.512 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for e503e351-20ec-43fb-b5f8-7af68dff5bcd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:10:38 compute-0 nova_compute[259627]: 2025-10-14 09:10:38.513 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433038.5125241, e503e351-20ec-43fb-b5f8-7af68dff5bcd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:38 compute-0 nova_compute[259627]: 2025-10-14 09:10:38.513 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] VM Resumed (Lifecycle Event)
Oct 14 09:10:38 compute-0 nova_compute[259627]: 2025-10-14 09:10:38.517 2 DEBUG nova.compute.manager [None req-aaddb70e-30c0-4990-b3ea-c1e746daf7ea 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:38 compute-0 nova_compute[259627]: 2025-10-14 09:10:38.551 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:38 compute-0 nova_compute[259627]: 2025-10-14 09:10:38.554 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:10:38 compute-0 nova_compute[259627]: 2025-10-14 09:10:38.585 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct 14 09:10:38 compute-0 nova_compute[259627]: 2025-10-14 09:10:38.586 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433038.5180073, e503e351-20ec-43fb-b5f8-7af68dff5bcd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:38 compute-0 nova_compute[259627]: 2025-10-14 09:10:38.586 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] VM Started (Lifecycle Event)
Oct 14 09:10:38 compute-0 nova_compute[259627]: 2025-10-14 09:10:38.607 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:38 compute-0 nova_compute[259627]: 2025-10-14 09:10:38.612 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:10:39 compute-0 ceph-mon[74249]: pgmap v1685: 305 pgs: 305 active+clean; 294 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 398 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.353 2 DEBUG nova.network.neutron [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Updating instance_info_cache with network_info: [{"id": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "address": "fa:16:3e:4d:cf:39", "network": {"id": "3c61a1c5-c59e-4518-bc34-110b2cb730d8", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1535219491-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "043faa2f5adf405d84227d60182cd0c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08a396fa-62", "ovs_interfaceid": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.375 2 DEBUG oslo_concurrency.lockutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Releasing lock "refresh_cache-654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.376 2 DEBUG nova.compute.manager [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Instance network_info: |[{"id": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "address": "fa:16:3e:4d:cf:39", "network": {"id": "3c61a1c5-c59e-4518-bc34-110b2cb730d8", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1535219491-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "043faa2f5adf405d84227d60182cd0c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08a396fa-62", "ovs_interfaceid": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.376 2 DEBUG oslo_concurrency.lockutils [req-9625127b-db46-4cf7-ac2a-853cff6b8f29 req-fbaaf24d-3e30-4271-afd1-2dee085d1b9c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.377 2 DEBUG nova.network.neutron [req-9625127b-db46-4cf7-ac2a-853cff6b8f29 req-fbaaf24d-3e30-4271-afd1-2dee085d1b9c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Refreshing network info cache for port 08a396fa-6247-4d31-89a2-80d630d0fb9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.383 2 DEBUG nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Start _get_guest_xml network_info=[{"id": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "address": "fa:16:3e:4d:cf:39", "network": {"id": "3c61a1c5-c59e-4518-bc34-110b2cb730d8", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1535219491-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "043faa2f5adf405d84227d60182cd0c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08a396fa-62", "ovs_interfaceid": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.390 2 WARNING nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.396 2 DEBUG nova.virt.libvirt.host [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.397 2 DEBUG nova.virt.libvirt.host [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.408 2 DEBUG nova.virt.libvirt.host [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.409 2 DEBUG nova.virt.libvirt.host [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.409 2 DEBUG nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.410 2 DEBUG nova.virt.hardware [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.410 2 DEBUG nova.virt.hardware [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.411 2 DEBUG nova.virt.hardware [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.411 2 DEBUG nova.virt.hardware [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.412 2 DEBUG nova.virt.hardware [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.412 2 DEBUG nova.virt.hardware [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.412 2 DEBUG nova.virt.hardware [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.413 2 DEBUG nova.virt.hardware [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.413 2 DEBUG nova.virt.hardware [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.413 2 DEBUG nova.virt.hardware [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.414 2 DEBUG nova.virt.hardware [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.419 2 DEBUG oslo_concurrency.processutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 294 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 398 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Oct 14 09:10:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:10:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2789044835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.936 2 DEBUG oslo_concurrency.processutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.960 2 DEBUG nova.storage.rbd_utils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] rbd image 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:39 compute-0 nova_compute[259627]: 2025-10-14 09:10:39.964 2 DEBUG oslo_concurrency.processutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:40 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2789044835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:10:40 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1580437364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.396 2 DEBUG oslo_concurrency.processutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.398 2 DEBUG nova.virt.libvirt.vif [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:10:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-850128583',display_name='tempest-ServerAddressesNegativeTestJSON-server-850128583',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-850128583',id=77,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='043faa2f5adf405d84227d60182cd0c3',ramdisk_id='',reservation_id='r-ll1o50cb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-2075326915',owner_user_name
='tempest-ServerAddressesNegativeTestJSON-2075326915-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:10:35Z,user_data=None,user_id='5d68fca3e4934055a82c08459fe7da4f',uuid=654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "address": "fa:16:3e:4d:cf:39", "network": {"id": "3c61a1c5-c59e-4518-bc34-110b2cb730d8", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1535219491-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "043faa2f5adf405d84227d60182cd0c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08a396fa-62", "ovs_interfaceid": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.398 2 DEBUG nova.network.os_vif_util [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Converting VIF {"id": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "address": "fa:16:3e:4d:cf:39", "network": {"id": "3c61a1c5-c59e-4518-bc34-110b2cb730d8", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1535219491-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "043faa2f5adf405d84227d60182cd0c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08a396fa-62", "ovs_interfaceid": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.399 2 DEBUG nova.network.os_vif_util [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:cf:39,bridge_name='br-int',has_traffic_filtering=True,id=08a396fa-6247-4d31-89a2-80d630d0fb9c,network=Network(3c61a1c5-c59e-4518-bc34-110b2cb730d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08a396fa-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.401 2 DEBUG nova.objects.instance [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.424 2 DEBUG nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:10:40 compute-0 nova_compute[259627]:   <uuid>654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3</uuid>
Oct 14 09:10:40 compute-0 nova_compute[259627]:   <name>instance-0000004d</name>
Oct 14 09:10:40 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:10:40 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:10:40 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerAddressesNegativeTestJSON-server-850128583</nova:name>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:10:39</nova:creationTime>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:10:40 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:10:40 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:10:40 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:10:40 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:10:40 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:10:40 compute-0 nova_compute[259627]:         <nova:user uuid="5d68fca3e4934055a82c08459fe7da4f">tempest-ServerAddressesNegativeTestJSON-2075326915-project-member</nova:user>
Oct 14 09:10:40 compute-0 nova_compute[259627]:         <nova:project uuid="043faa2f5adf405d84227d60182cd0c3">tempest-ServerAddressesNegativeTestJSON-2075326915</nova:project>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:10:40 compute-0 nova_compute[259627]:         <nova:port uuid="08a396fa-6247-4d31-89a2-80d630d0fb9c">
Oct 14 09:10:40 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:10:40 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:10:40 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <system>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <entry name="serial">654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3</entry>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <entry name="uuid">654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3</entry>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     </system>
Oct 14 09:10:40 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:10:40 compute-0 nova_compute[259627]:   <os>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:   </os>
Oct 14 09:10:40 compute-0 nova_compute[259627]:   <features>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:   </features>
Oct 14 09:10:40 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:10:40 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:10:40 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3_disk">
Oct 14 09:10:40 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       </source>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:10:40 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3_disk.config">
Oct 14 09:10:40 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       </source>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:10:40 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:4d:cf:39"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <target dev="tap08a396fa-62"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3/console.log" append="off"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <video>
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     </video>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:10:40 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:10:40 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:10:40 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:10:40 compute-0 nova_compute[259627]: </domain>
Oct 14 09:10:40 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.425 2 DEBUG nova.compute.manager [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Preparing to wait for external event network-vif-plugged-08a396fa-6247-4d31-89a2-80d630d0fb9c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.425 2 DEBUG oslo_concurrency.lockutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Acquiring lock "654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.426 2 DEBUG oslo_concurrency.lockutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lock "654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.426 2 DEBUG oslo_concurrency.lockutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lock "654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.427 2 DEBUG nova.virt.libvirt.vif [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:10:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-850128583',display_name='tempest-ServerAddressesNegativeTestJSON-server-850128583',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-850128583',id=77,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='043faa2f5adf405d84227d60182cd0c3',ramdisk_id='',reservation_id='r-ll1o50cb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-2075326915',owner
_user_name='tempest-ServerAddressesNegativeTestJSON-2075326915-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:10:35Z,user_data=None,user_id='5d68fca3e4934055a82c08459fe7da4f',uuid=654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "address": "fa:16:3e:4d:cf:39", "network": {"id": "3c61a1c5-c59e-4518-bc34-110b2cb730d8", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1535219491-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "043faa2f5adf405d84227d60182cd0c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08a396fa-62", "ovs_interfaceid": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.427 2 DEBUG nova.network.os_vif_util [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Converting VIF {"id": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "address": "fa:16:3e:4d:cf:39", "network": {"id": "3c61a1c5-c59e-4518-bc34-110b2cb730d8", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1535219491-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "043faa2f5adf405d84227d60182cd0c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08a396fa-62", "ovs_interfaceid": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.427 2 DEBUG nova.network.os_vif_util [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:cf:39,bridge_name='br-int',has_traffic_filtering=True,id=08a396fa-6247-4d31-89a2-80d630d0fb9c,network=Network(3c61a1c5-c59e-4518-bc34-110b2cb730d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08a396fa-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.428 2 DEBUG os_vif [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:cf:39,bridge_name='br-int',has_traffic_filtering=True,id=08a396fa-6247-4d31-89a2-80d630d0fb9c,network=Network(3c61a1c5-c59e-4518-bc34-110b2cb730d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08a396fa-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.432 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08a396fa-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.432 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap08a396fa-62, col_values=(('external_ids', {'iface-id': '08a396fa-6247-4d31-89a2-80d630d0fb9c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:cf:39', 'vm-uuid': '654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:40 compute-0 NetworkManager[44885]: <info>  [1760433040.4347] manager: (tap08a396fa-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.441 2 INFO os_vif [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:cf:39,bridge_name='br-int',has_traffic_filtering=True,id=08a396fa-6247-4d31-89a2-80d630d0fb9c,network=Network(3c61a1c5-c59e-4518-bc34-110b2cb730d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08a396fa-62')
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.495 2 DEBUG nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.495 2 DEBUG nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.495 2 DEBUG nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] No VIF found with MAC fa:16:3e:4d:cf:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.495 2 INFO nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Using config drive
Oct 14 09:10:40 compute-0 nova_compute[259627]: 2025-10-14 09:10:40.516 2 DEBUG nova.storage.rbd_utils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] rbd image 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.001 2 INFO nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Creating config drive at /var/lib/nova/instances/654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3/disk.config
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.011 2 DEBUG oslo_concurrency.processutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzkvz1vg0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.097 2 INFO nova.compute.manager [None req-867a10c1-eccb-4af4-99c5-a5d6265b4f9f 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Unrescuing
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.099 2 DEBUG oslo_concurrency.lockutils [None req-867a10c1-eccb-4af4-99c5-a5d6265b4f9f 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "refresh_cache-e503e351-20ec-43fb-b5f8-7af68dff5bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.099 2 DEBUG oslo_concurrency.lockutils [None req-867a10c1-eccb-4af4-99c5-a5d6265b4f9f 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquired lock "refresh_cache-e503e351-20ec-43fb-b5f8-7af68dff5bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.100 2 DEBUG nova.network.neutron [None req-867a10c1-eccb-4af4-99c5-a5d6265b4f9f 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:10:41 compute-0 ceph-mon[74249]: pgmap v1686: 305 pgs: 305 active+clean; 294 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 398 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Oct 14 09:10:41 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1580437364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.162 2 DEBUG oslo_concurrency.processutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzkvz1vg0" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.202 2 DEBUG nova.storage.rbd_utils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] rbd image 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.207 2 DEBUG oslo_concurrency.processutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3/disk.config 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.398 2 DEBUG oslo_concurrency.processutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3/disk.config 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.399 2 INFO nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Deleting local config drive /var/lib/nova/instances/654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3/disk.config because it was imported into RBD.
Oct 14 09:10:41 compute-0 kernel: tap08a396fa-62: entered promiscuous mode
Oct 14 09:10:41 compute-0 NetworkManager[44885]: <info>  [1760433041.4444] manager: (tap08a396fa-62): new Tun device (/org/freedesktop/NetworkManager/Devices/329)
Oct 14 09:10:41 compute-0 ovn_controller[152662]: 2025-10-14T09:10:41Z|00786|binding|INFO|Claiming lport 08a396fa-6247-4d31-89a2-80d630d0fb9c for this chassis.
Oct 14 09:10:41 compute-0 ovn_controller[152662]: 2025-10-14T09:10:41Z|00787|binding|INFO|08a396fa-6247-4d31-89a2-80d630d0fb9c: Claiming fa:16:3e:4d:cf:39 10.100.0.4
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.465 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:cf:39 10.100.0.4'], port_security=['fa:16:3e:4d:cf:39 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c61a1c5-c59e-4518-bc34-110b2cb730d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '043faa2f5adf405d84227d60182cd0c3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f1902ee8-52a4-43ff-892a-de64b4e25b7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=714ba0db-616b-4427-ae01-f109dd1694f7, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=08a396fa-6247-4d31-89a2-80d630d0fb9c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.466 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 08a396fa-6247-4d31-89a2-80d630d0fb9c in datapath 3c61a1c5-c59e-4518-bc34-110b2cb730d8 bound to our chassis
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.467 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c61a1c5-c59e-4518-bc34-110b2cb730d8
Oct 14 09:10:41 compute-0 systemd-machined[214636]: New machine qemu-96-instance-0000004d.
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.481 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[304f1f38-afb5-4463-8dde-b5b886d91560]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.482 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c61a1c5-c1 in ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.483 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c61a1c5-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.483 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[736fc3de-a472-431f-b4d2-d2bf597f15c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.484 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[67c4400b-7221-462e-8ac0-5acba6d7d5cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:41 compute-0 systemd[1]: Started Virtual Machine qemu-96-instance-0000004d.
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.496 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[4188bb22-e23d-4917-9102-f31b5acd8ef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:41 compute-0 systemd-udevd[340600]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:10:41 compute-0 NetworkManager[44885]: <info>  [1760433041.5209] device (tap08a396fa-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:10:41 compute-0 NetworkManager[44885]: <info>  [1760433041.5219] device (tap08a396fa-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.523 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7f79a68e-f0e5-4459-97dd-9775c0862429]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:41 compute-0 ovn_controller[152662]: 2025-10-14T09:10:41Z|00788|binding|INFO|Setting lport 08a396fa-6247-4d31-89a2-80d630d0fb9c ovn-installed in OVS
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:41 compute-0 ovn_controller[152662]: 2025-10-14T09:10:41Z|00789|binding|INFO|Setting lport 08a396fa-6247-4d31-89a2-80d630d0fb9c up in Southbound
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.564 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc04ddc-0ed7-4dea-8dba-a5b7a094c414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:41 compute-0 systemd-udevd[340602]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.569 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[17aac972-08d2-499d-a33a-b9ba68132a56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:41 compute-0 NetworkManager[44885]: <info>  [1760433041.5709] manager: (tap3c61a1c5-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/330)
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.610 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d45819d0-7668-419a-bd11-39e78d1806de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.614 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b9088b-a891-496b-a9c3-8ad262efb234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:41 compute-0 NetworkManager[44885]: <info>  [1760433041.6399] device (tap3c61a1c5-c0): carrier: link connected
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.646 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e203a21d-6871-4923-a450-9efa7c027a33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.670 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd68a21-989c-41c2-b2bf-e830878b9492]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c61a1c5-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:a5:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 231], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682040, 'reachable_time': 33214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340630, 'error': None, 'target': 'ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.691 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[47e6204d-beac-4ba4-ab59-500ac1376489]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:a5a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 682040, 'tstamp': 682040}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340631, 'error': None, 'target': 'ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.709 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3d56414a-7aa0-4962-9aed-618a465de9a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c61a1c5-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:a5:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 231], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682040, 'reachable_time': 33214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340632, 'error': None, 'target': 'ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.741 2 DEBUG nova.network.neutron [req-9625127b-db46-4cf7-ac2a-853cff6b8f29 req-fbaaf24d-3e30-4271-afd1-2dee085d1b9c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Updated VIF entry in instance network info cache for port 08a396fa-6247-4d31-89a2-80d630d0fb9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.741 2 DEBUG nova.network.neutron [req-9625127b-db46-4cf7-ac2a-853cff6b8f29 req-fbaaf24d-3e30-4271-afd1-2dee085d1b9c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Updating instance_info_cache with network_info: [{"id": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "address": "fa:16:3e:4d:cf:39", "network": {"id": "3c61a1c5-c59e-4518-bc34-110b2cb730d8", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1535219491-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "043faa2f5adf405d84227d60182cd0c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08a396fa-62", "ovs_interfaceid": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.750 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ff023084-4a6d-461c-a32c-934c5eb7a6c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.756 2 DEBUG oslo_concurrency.lockutils [req-9625127b-db46-4cf7-ac2a-853cff6b8f29 req-fbaaf24d-3e30-4271-afd1-2dee085d1b9c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.820 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5106d799-aee3-4e06-bb67-0d22a59a769b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.822 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c61a1c5-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.822 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.822 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c61a1c5-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:41 compute-0 NetworkManager[44885]: <info>  [1760433041.8246] manager: (tap3c61a1c5-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Oct 14 09:10:41 compute-0 kernel: tap3c61a1c5-c0: entered promiscuous mode
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.830 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c61a1c5-c0, col_values=(('external_ids', {'iface-id': 'e1093a18-ad03-4852-9164-2910132d4e56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:41 compute-0 ovn_controller[152662]: 2025-10-14T09:10:41Z|00790|binding|INFO|Releasing lport e1093a18-ad03-4852-9164-2910132d4e56 from this chassis (sb_readonly=0)
Oct 14 09:10:41 compute-0 nova_compute[259627]: 2025-10-14 09:10:41.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.867 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c61a1c5-c59e-4518-bc34-110b2cb730d8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c61a1c5-c59e-4518-bc34-110b2cb730d8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.868 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0c675b8b-6c99-41d5-ad0f-85e58609f37a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.868 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-3c61a1c5-c59e-4518-bc34-110b2cb730d8
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/3c61a1c5-c59e-4518-bc34-110b2cb730d8.pid.haproxy
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 3c61a1c5-c59e-4518-bc34-110b2cb730d8
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:10:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:41.870 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8', 'env', 'PROCESS_TAG=haproxy-3c61a1c5-c59e-4518-bc34-110b2cb730d8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c61a1c5-c59e-4518-bc34-110b2cb730d8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:10:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 341 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 188 op/s
Oct 14 09:10:42 compute-0 podman[340704]: 2025-10-14 09:10:42.363151016 +0000 UTC m=+0.081954769 container create 8ccff7061e6aaca6e0e154e9812d6a1c124bdde4ec1cd4636e487fa590d76ef2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:10:42 compute-0 podman[340704]: 2025-10-14 09:10:42.322376142 +0000 UTC m=+0.041179925 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:10:42 compute-0 systemd[1]: Started libpod-conmon-8ccff7061e6aaca6e0e154e9812d6a1c124bdde4ec1cd4636e487fa590d76ef2.scope.
Oct 14 09:10:42 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:10:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5be722c88682ab45963701a7a47dc1f788d7d499872dbbcb1d212710957dae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:10:42 compute-0 podman[340704]: 2025-10-14 09:10:42.503332859 +0000 UTC m=+0.222136692 container init 8ccff7061e6aaca6e0e154e9812d6a1c124bdde4ec1cd4636e487fa590d76ef2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:10:42 compute-0 podman[340704]: 2025-10-14 09:10:42.513599402 +0000 UTC m=+0.232403175 container start 8ccff7061e6aaca6e0e154e9812d6a1c124bdde4ec1cd4636e487fa590d76ef2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:10:42 compute-0 nova_compute[259627]: 2025-10-14 09:10:42.534 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433042.5338728, 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:42 compute-0 nova_compute[259627]: 2025-10-14 09:10:42.535 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] VM Started (Lifecycle Event)
Oct 14 09:10:42 compute-0 nova_compute[259627]: 2025-10-14 09:10:42.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:42 compute-0 nova_compute[259627]: 2025-10-14 09:10:42.558 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:42 compute-0 neutron-haproxy-ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8[340720]: [NOTICE]   (340724) : New worker (340726) forked
Oct 14 09:10:42 compute-0 neutron-haproxy-ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8[340720]: [NOTICE]   (340724) : Loading success.
Oct 14 09:10:42 compute-0 nova_compute[259627]: 2025-10-14 09:10:42.563 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433042.5341072, 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:42 compute-0 nova_compute[259627]: 2025-10-14 09:10:42.563 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] VM Paused (Lifecycle Event)
Oct 14 09:10:42 compute-0 nova_compute[259627]: 2025-10-14 09:10:42.581 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:42 compute-0 nova_compute[259627]: 2025-10-14 09:10:42.585 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:10:42 compute-0 nova_compute[259627]: 2025-10-14 09:10:42.605 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:10:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002560225487562675 of space, bias 1.0, pg target 0.7680676462688024 quantized to 32 (current 32)
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:10:43 compute-0 ceph-mon[74249]: pgmap v1687: 305 pgs: 305 active+clean; 341 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 188 op/s
Oct 14 09:10:43 compute-0 nova_compute[259627]: 2025-10-14 09:10:43.674 2 DEBUG nova.network.neutron [None req-867a10c1-eccb-4af4-99c5-a5d6265b4f9f 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Updating instance_info_cache with network_info: [{"id": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "address": "fa:16:3e:bb:fd:b8", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bb94dcb-f5", "ovs_interfaceid": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:10:43 compute-0 nova_compute[259627]: 2025-10-14 09:10:43.693 2 DEBUG oslo_concurrency.lockutils [None req-867a10c1-eccb-4af4-99c5-a5d6265b4f9f 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Releasing lock "refresh_cache-e503e351-20ec-43fb-b5f8-7af68dff5bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:10:43 compute-0 nova_compute[259627]: 2025-10-14 09:10:43.694 2 DEBUG nova.objects.instance [None req-867a10c1-eccb-4af4-99c5-a5d6265b4f9f 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'flavor' on Instance uuid e503e351-20ec-43fb-b5f8-7af68dff5bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:43 compute-0 podman[340736]: 2025-10-14 09:10:43.705383172 +0000 UTC m=+0.099988433 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 09:10:43 compute-0 podman[340735]: 2025-10-14 09:10:43.710386956 +0000 UTC m=+0.107504789 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 14 09:10:43 compute-0 kernel: tap5bb94dcb-f5 (unregistering): left promiscuous mode
Oct 14 09:10:43 compute-0 NetworkManager[44885]: <info>  [1760433043.7805] device (tap5bb94dcb-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:10:43 compute-0 nova_compute[259627]: 2025-10-14 09:10:43.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:43 compute-0 ovn_controller[152662]: 2025-10-14T09:10:43Z|00791|binding|INFO|Releasing lport 5bb94dcb-f594-4577-9c20-65816d7c57a0 from this chassis (sb_readonly=0)
Oct 14 09:10:43 compute-0 ovn_controller[152662]: 2025-10-14T09:10:43Z|00792|binding|INFO|Setting lport 5bb94dcb-f594-4577-9c20-65816d7c57a0 down in Southbound
Oct 14 09:10:43 compute-0 ovn_controller[152662]: 2025-10-14T09:10:43Z|00793|binding|INFO|Removing iface tap5bb94dcb-f5 ovn-installed in OVS
Oct 14 09:10:43 compute-0 nova_compute[259627]: 2025-10-14 09:10:43.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:43.806 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:fd:b8 10.100.0.9'], port_security=['fa:16:3e:bb:fd:b8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e503e351-20ec-43fb-b5f8-7af68dff5bcd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb52c397-b1e0-4244-ac38-60ca1e4abace', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51ae58236f6a432e93764d455a502033', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fcb895cc-6512-4f14-be87-dd3d6289c2b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=400ed612-fb0d-4559-97c6-466a993e3d60, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=5bb94dcb-f594-4577-9c20-65816d7c57a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:10:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:43.808 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 5bb94dcb-f594-4577-9c20-65816d7c57a0 in datapath fb52c397-b1e0-4244-ac38-60ca1e4abace unbound from our chassis
Oct 14 09:10:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:43.810 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fb52c397-b1e0-4244-ac38-60ca1e4abace or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:10:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:43.812 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[23539f61-2b32-418c-bfe5-b6948b184669]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:43 compute-0 nova_compute[259627]: 2025-10-14 09:10:43.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:43 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Oct 14 09:10:43 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d0000004a.scope: Consumed 6.249s CPU time.
Oct 14 09:10:43 compute-0 systemd-machined[214636]: Machine qemu-95-instance-0000004a terminated.
Oct 14 09:10:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 341 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 122 op/s
Oct 14 09:10:43 compute-0 nova_compute[259627]: 2025-10-14 09:10:43.968 2 INFO nova.virt.libvirt.driver [-] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Instance destroyed successfully.
Oct 14 09:10:43 compute-0 nova_compute[259627]: 2025-10-14 09:10:43.969 2 DEBUG nova.objects.instance [None req-867a10c1-eccb-4af4-99c5-a5d6265b4f9f 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'numa_topology' on Instance uuid e503e351-20ec-43fb-b5f8-7af68dff5bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:44 compute-0 kernel: tap5bb94dcb-f5: entered promiscuous mode
Oct 14 09:10:44 compute-0 NetworkManager[44885]: <info>  [1760433044.1026] manager: (tap5bb94dcb-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/332)
Oct 14 09:10:44 compute-0 nova_compute[259627]: 2025-10-14 09:10:44.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:44 compute-0 systemd-udevd[340618]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:10:44 compute-0 ovn_controller[152662]: 2025-10-14T09:10:44Z|00794|binding|INFO|Claiming lport 5bb94dcb-f594-4577-9c20-65816d7c57a0 for this chassis.
Oct 14 09:10:44 compute-0 ovn_controller[152662]: 2025-10-14T09:10:44Z|00795|binding|INFO|5bb94dcb-f594-4577-9c20-65816d7c57a0: Claiming fa:16:3e:bb:fd:b8 10.100.0.9
Oct 14 09:10:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:44.115 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:fd:b8 10.100.0.9'], port_security=['fa:16:3e:bb:fd:b8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e503e351-20ec-43fb-b5f8-7af68dff5bcd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb52c397-b1e0-4244-ac38-60ca1e4abace', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51ae58236f6a432e93764d455a502033', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fcb895cc-6512-4f14-be87-dd3d6289c2b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=400ed612-fb0d-4559-97c6-466a993e3d60, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=5bb94dcb-f594-4577-9c20-65816d7c57a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:10:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:44.117 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 5bb94dcb-f594-4577-9c20-65816d7c57a0 in datapath fb52c397-b1e0-4244-ac38-60ca1e4abace bound to our chassis
Oct 14 09:10:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:44.119 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fb52c397-b1e0-4244-ac38-60ca1e4abace or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:10:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:44.120 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3d69a4f8-5e8a-4897-8917-8ba318fa88dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:44 compute-0 NetworkManager[44885]: <info>  [1760433044.1283] device (tap5bb94dcb-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:10:44 compute-0 NetworkManager[44885]: <info>  [1760433044.1307] device (tap5bb94dcb-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:10:44 compute-0 ovn_controller[152662]: 2025-10-14T09:10:44Z|00796|binding|INFO|Setting lport 5bb94dcb-f594-4577-9c20-65816d7c57a0 ovn-installed in OVS
Oct 14 09:10:44 compute-0 ovn_controller[152662]: 2025-10-14T09:10:44Z|00797|binding|INFO|Setting lport 5bb94dcb-f594-4577-9c20-65816d7c57a0 up in Southbound
Oct 14 09:10:44 compute-0 nova_compute[259627]: 2025-10-14 09:10:44.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:44 compute-0 systemd-machined[214636]: New machine qemu-97-instance-0000004a.
Oct 14 09:10:44 compute-0 systemd[1]: Started Virtual Machine qemu-97-instance-0000004a.
Oct 14 09:10:45 compute-0 ceph-mon[74249]: pgmap v1688: 305 pgs: 305 active+clean; 341 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 122 op/s
Oct 14 09:10:45 compute-0 nova_compute[259627]: 2025-10-14 09:10:45.222 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for e503e351-20ec-43fb-b5f8-7af68dff5bcd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:10:45 compute-0 nova_compute[259627]: 2025-10-14 09:10:45.223 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433045.2034614, e503e351-20ec-43fb-b5f8-7af68dff5bcd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:45 compute-0 nova_compute[259627]: 2025-10-14 09:10:45.223 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] VM Resumed (Lifecycle Event)
Oct 14 09:10:45 compute-0 nova_compute[259627]: 2025-10-14 09:10:45.248 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:45 compute-0 nova_compute[259627]: 2025-10-14 09:10:45.251 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:10:45 compute-0 nova_compute[259627]: 2025-10-14 09:10:45.273 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 14 09:10:45 compute-0 nova_compute[259627]: 2025-10-14 09:10:45.273 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433045.203873, e503e351-20ec-43fb-b5f8-7af68dff5bcd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:45 compute-0 nova_compute[259627]: 2025-10-14 09:10:45.273 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] VM Started (Lifecycle Event)
Oct 14 09:10:45 compute-0 nova_compute[259627]: 2025-10-14 09:10:45.298 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:45 compute-0 nova_compute[259627]: 2025-10-14 09:10:45.301 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:10:45 compute-0 nova_compute[259627]: 2025-10-14 09:10:45.318 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 14 09:10:45 compute-0 nova_compute[259627]: 2025-10-14 09:10:45.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:45 compute-0 nova_compute[259627]: 2025-10-14 09:10:45.619 2 DEBUG nova.compute.manager [None req-867a10c1-eccb-4af4-99c5-a5d6265b4f9f 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 319 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 146 op/s
Oct 14 09:10:46 compute-0 nova_compute[259627]: 2025-10-14 09:10:46.431 2 DEBUG nova.compute.manager [req-e55e1e3c-ea3d-4b83-96c6-1b0dc1bd61c0 req-d995f15d-ed2b-472b-8b1c-905e45df89aa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received event network-vif-unplugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:46 compute-0 nova_compute[259627]: 2025-10-14 09:10:46.432 2 DEBUG oslo_concurrency.lockutils [req-e55e1e3c-ea3d-4b83-96c6-1b0dc1bd61c0 req-d995f15d-ed2b-472b-8b1c-905e45df89aa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:46 compute-0 nova_compute[259627]: 2025-10-14 09:10:46.432 2 DEBUG oslo_concurrency.lockutils [req-e55e1e3c-ea3d-4b83-96c6-1b0dc1bd61c0 req-d995f15d-ed2b-472b-8b1c-905e45df89aa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:46 compute-0 nova_compute[259627]: 2025-10-14 09:10:46.433 2 DEBUG oslo_concurrency.lockutils [req-e55e1e3c-ea3d-4b83-96c6-1b0dc1bd61c0 req-d995f15d-ed2b-472b-8b1c-905e45df89aa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:46 compute-0 nova_compute[259627]: 2025-10-14 09:10:46.433 2 DEBUG nova.compute.manager [req-e55e1e3c-ea3d-4b83-96c6-1b0dc1bd61c0 req-d995f15d-ed2b-472b-8b1c-905e45df89aa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] No waiting events found dispatching network-vif-unplugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:46 compute-0 nova_compute[259627]: 2025-10-14 09:10:46.434 2 WARNING nova.compute.manager [req-e55e1e3c-ea3d-4b83-96c6-1b0dc1bd61c0 req-d995f15d-ed2b-472b-8b1c-905e45df89aa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received unexpected event network-vif-unplugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 for instance with vm_state active and task_state None.
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.091 2 DEBUG oslo_concurrency.lockutils [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.092 2 DEBUG oslo_concurrency.lockutils [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.092 2 DEBUG oslo_concurrency.lockutils [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.092 2 DEBUG oslo_concurrency.lockutils [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.092 2 DEBUG oslo_concurrency.lockutils [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.093 2 INFO nova.compute.manager [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Terminating instance
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.094 2 DEBUG nova.compute.manager [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:10:47 compute-0 kernel: tap5bb94dcb-f5 (unregistering): left promiscuous mode
Oct 14 09:10:47 compute-0 NetworkManager[44885]: <info>  [1760433047.1350] device (tap5bb94dcb-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:47 compute-0 ovn_controller[152662]: 2025-10-14T09:10:47Z|00798|binding|INFO|Releasing lport 5bb94dcb-f594-4577-9c20-65816d7c57a0 from this chassis (sb_readonly=0)
Oct 14 09:10:47 compute-0 ovn_controller[152662]: 2025-10-14T09:10:47Z|00799|binding|INFO|Setting lport 5bb94dcb-f594-4577-9c20-65816d7c57a0 down in Southbound
Oct 14 09:10:47 compute-0 ovn_controller[152662]: 2025-10-14T09:10:47Z|00800|binding|INFO|Removing iface tap5bb94dcb-f5 ovn-installed in OVS
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:47.153 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:fd:b8 10.100.0.9'], port_security=['fa:16:3e:bb:fd:b8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e503e351-20ec-43fb-b5f8-7af68dff5bcd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb52c397-b1e0-4244-ac38-60ca1e4abace', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51ae58236f6a432e93764d455a502033', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'fcb895cc-6512-4f14-be87-dd3d6289c2b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=400ed612-fb0d-4559-97c6-466a993e3d60, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=5bb94dcb-f594-4577-9c20-65816d7c57a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:10:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:47.155 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 5bb94dcb-f594-4577-9c20-65816d7c57a0 in datapath fb52c397-b1e0-4244-ac38-60ca1e4abace unbound from our chassis
Oct 14 09:10:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:47.157 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fb52c397-b1e0-4244-ac38-60ca1e4abace or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:10:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:47.157 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[33634de9-ae74-42ff-9405-bad9fd74c13e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:47 compute-0 ceph-mon[74249]: pgmap v1689: 305 pgs: 305 active+clean; 319 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 146 op/s
Oct 14 09:10:47 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Oct 14 09:10:47 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d0000004a.scope: Consumed 2.958s CPU time.
Oct 14 09:10:47 compute-0 systemd-machined[214636]: Machine qemu-97-instance-0000004a terminated.
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.342 2 INFO nova.virt.libvirt.driver [-] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Instance destroyed successfully.
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.342 2 DEBUG nova.objects.instance [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'resources' on Instance uuid e503e351-20ec-43fb-b5f8-7af68dff5bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.358 2 DEBUG nova.virt.libvirt.vif [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1594806687',display_name='tempest-ServerRescueTestJSON-server-1594806687',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1594806687',id=74,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:10:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='51ae58236f6a432e93764d455a502033',ramdisk_id='',reservation_id='r-ojp0uarx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-156376826',owner_user_name='tempest-ServerRescueTestJSON-156376826-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:10:45Z,user_data=None,user_id='7a268118aae14d449097f4a26371415e',uuid=e503e351-20ec-43fb-b5f8-7af68dff5bcd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "address": "fa:16:3e:bb:fd:b8", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bb94dcb-f5", "ovs_interfaceid": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.358 2 DEBUG nova.network.os_vif_util [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Converting VIF {"id": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "address": "fa:16:3e:bb:fd:b8", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bb94dcb-f5", "ovs_interfaceid": "5bb94dcb-f594-4577-9c20-65816d7c57a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.359 2 DEBUG nova.network.os_vif_util [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:fd:b8,bridge_name='br-int',has_traffic_filtering=True,id=5bb94dcb-f594-4577-9c20-65816d7c57a0,network=Network(fb52c397-b1e0-4244-ac38-60ca1e4abace),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bb94dcb-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.360 2 DEBUG os_vif [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:fd:b8,bridge_name='br-int',has_traffic_filtering=True,id=5bb94dcb-f594-4577-9c20-65816d7c57a0,network=Network(fb52c397-b1e0-4244-ac38-60ca1e4abace),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bb94dcb-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.363 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb94dcb-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.371 2 INFO os_vif [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:fd:b8,bridge_name='br-int',has_traffic_filtering=True,id=5bb94dcb-f594-4577-9c20-65816d7c57a0,network=Network(fb52c397-b1e0-4244-ac38-60ca1e4abace),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bb94dcb-f5')
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.765 2 INFO nova.virt.libvirt.driver [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Deleting instance files /var/lib/nova/instances/e503e351-20ec-43fb-b5f8-7af68dff5bcd_del
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.766 2 INFO nova.virt.libvirt.driver [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Deletion of /var/lib/nova/instances/e503e351-20ec-43fb-b5f8-7af68dff5bcd_del complete
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.822 2 INFO nova.compute.manager [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.823 2 DEBUG oslo.service.loopingcall [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.823 2 DEBUG nova.compute.manager [-] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:10:47 compute-0 nova_compute[259627]: 2025-10-14 09:10:47.823 2 DEBUG nova.network.neutron [-] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:10:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 319 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.854 2 DEBUG nova.compute.manager [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received event network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.855 2 DEBUG oslo_concurrency.lockutils [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.855 2 DEBUG oslo_concurrency.lockutils [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.856 2 DEBUG oslo_concurrency.lockutils [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.856 2 DEBUG nova.compute.manager [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] No waiting events found dispatching network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.857 2 WARNING nova.compute.manager [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received unexpected event network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 for instance with vm_state active and task_state deleting.
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.857 2 DEBUG nova.compute.manager [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received event network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.858 2 DEBUG oslo_concurrency.lockutils [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.858 2 DEBUG oslo_concurrency.lockutils [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.859 2 DEBUG oslo_concurrency.lockutils [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.859 2 DEBUG nova.compute.manager [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] No waiting events found dispatching network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.860 2 WARNING nova.compute.manager [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received unexpected event network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 for instance with vm_state active and task_state deleting.
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.860 2 DEBUG nova.compute.manager [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received event network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.861 2 DEBUG oslo_concurrency.lockutils [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.861 2 DEBUG oslo_concurrency.lockutils [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.862 2 DEBUG oslo_concurrency.lockutils [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.863 2 DEBUG nova.compute.manager [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] No waiting events found dispatching network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.863 2 WARNING nova.compute.manager [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received unexpected event network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 for instance with vm_state active and task_state deleting.
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.864 2 DEBUG nova.compute.manager [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Received event network-vif-plugged-08a396fa-6247-4d31-89a2-80d630d0fb9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.864 2 DEBUG oslo_concurrency.lockutils [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.865 2 DEBUG oslo_concurrency.lockutils [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.866 2 DEBUG oslo_concurrency.lockutils [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.866 2 DEBUG nova.compute.manager [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Processing event network-vif-plugged-08a396fa-6247-4d31-89a2-80d630d0fb9c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.867 2 DEBUG nova.compute.manager [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Received event network-vif-plugged-08a396fa-6247-4d31-89a2-80d630d0fb9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.867 2 DEBUG oslo_concurrency.lockutils [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.868 2 DEBUG oslo_concurrency.lockutils [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.868 2 DEBUG oslo_concurrency.lockutils [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.868 2 DEBUG nova.compute.manager [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] No waiting events found dispatching network-vif-plugged-08a396fa-6247-4d31-89a2-80d630d0fb9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.869 2 WARNING nova.compute.manager [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Received unexpected event network-vif-plugged-08a396fa-6247-4d31-89a2-80d630d0fb9c for instance with vm_state building and task_state spawning.
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.869 2 DEBUG nova.compute.manager [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received event network-vif-unplugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.870 2 DEBUG oslo_concurrency.lockutils [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.870 2 DEBUG oslo_concurrency.lockutils [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.871 2 DEBUG oslo_concurrency.lockutils [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.871 2 DEBUG nova.compute.manager [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] No waiting events found dispatching network-vif-unplugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.872 2 DEBUG nova.compute.manager [req-f84b4d09-77f5-43c8-843b-de6fc89b27e8 req-282b164d-e510-4532-b82d-cfc049503a6d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received event network-vif-unplugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.873 2 DEBUG nova.compute.manager [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.878 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433048.878156, 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.878 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] VM Resumed (Lifecycle Event)
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.888 2 DEBUG nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.894 2 INFO nova.virt.libvirt.driver [-] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Instance spawned successfully.
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.895 2 DEBUG nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.907 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.912 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.925 2 DEBUG nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.925 2 DEBUG nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.926 2 DEBUG nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.927 2 DEBUG nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.928 2 DEBUG nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.928 2 DEBUG nova.virt.libvirt.driver [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.934 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.990 2 INFO nova.compute.manager [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Took 13.91 seconds to spawn the instance on the hypervisor.
Oct 14 09:10:48 compute-0 nova_compute[259627]: 2025-10-14 09:10:48.990 2 DEBUG nova.compute.manager [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.015 2 DEBUG nova.network.neutron [-] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.045 2 INFO nova.compute.manager [-] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Took 1.22 seconds to deallocate network for instance.
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.065 2 INFO nova.compute.manager [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Took 15.09 seconds to build instance.
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.091 2 DEBUG oslo_concurrency.lockutils [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.092 2 DEBUG oslo_concurrency.lockutils [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.094 2 DEBUG oslo_concurrency.lockutils [None req-df3ec9e3-77e5-44d0-af8f-84133d129910 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lock "654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:49 compute-0 ceph-mon[74249]: pgmap v1690: 305 pgs: 305 active+clean; 319 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.203 2 DEBUG oslo_concurrency.processutils [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:10:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/657568009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.707 2 DEBUG oslo_concurrency.processutils [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.715 2 DEBUG nova.compute.provider_tree [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.751 2 DEBUG nova.scheduler.client.report [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.779 2 DEBUG oslo_concurrency.lockutils [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.803 2 DEBUG oslo_concurrency.lockutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Acquiring lock "b21251cd-e46b-49fa-aebe-6250e4c587a0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.803 2 DEBUG oslo_concurrency.lockutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lock "b21251cd-e46b-49fa-aebe-6250e4c587a0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.813 2 INFO nova.scheduler.client.report [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Deleted allocations for instance e503e351-20ec-43fb-b5f8-7af68dff5bcd
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.833 2 DEBUG nova.compute.manager [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:10:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 319 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.955 2 DEBUG oslo_concurrency.lockutils [None req-5991cccd-f6fe-4168-90e8-20d660c01c28 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.957 2 DEBUG oslo_concurrency.lockutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.958 2 DEBUG oslo_concurrency.lockutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.966 2 DEBUG nova.virt.hardware [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:10:49 compute-0 nova_compute[259627]: 2025-10-14 09:10:49.967 2 INFO nova.compute.claims [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.132 2 DEBUG oslo_concurrency.processutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/657568009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:10:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2793902925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.662 2 DEBUG oslo_concurrency.processutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.671 2 DEBUG nova.compute.provider_tree [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.707 2 DEBUG nova.scheduler.client.report [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.742 2 DEBUG oslo_concurrency.lockutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.743 2 DEBUG nova.compute.manager [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.746 2 DEBUG oslo_concurrency.lockutils [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "16c93e17-00f2-4710-a0e4-83eb60430088" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.746 2 DEBUG oslo_concurrency.lockutils [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.746 2 DEBUG oslo_concurrency.lockutils [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.746 2 DEBUG oslo_concurrency.lockutils [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.747 2 DEBUG oslo_concurrency.lockutils [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.748 2 INFO nova.compute.manager [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Terminating instance
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.749 2 DEBUG nova.compute.manager [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.791 2 DEBUG nova.compute.manager [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.792 2 DEBUG nova.network.neutron [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:10:50 compute-0 kernel: tap2aa7bfa4-d8 (unregistering): left promiscuous mode
Oct 14 09:10:50 compute-0 NetworkManager[44885]: <info>  [1760433050.8062] device (tap2aa7bfa4-d8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.817 2 INFO nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:10:50 compute-0 ovn_controller[152662]: 2025-10-14T09:10:50Z|00801|binding|INFO|Releasing lport 2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 from this chassis (sb_readonly=0)
Oct 14 09:10:50 compute-0 ovn_controller[152662]: 2025-10-14T09:10:50Z|00802|binding|INFO|Setting lport 2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 down in Southbound
Oct 14 09:10:50 compute-0 ovn_controller[152662]: 2025-10-14T09:10:50Z|00803|binding|INFO|Removing iface tap2aa7bfa4-d8 ovn-installed in OVS
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:50.827 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:8d:59 10.100.0.4'], port_security=['fa:16:3e:f5:8d:59 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '16c93e17-00f2-4710-a0e4-83eb60430088', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb52c397-b1e0-4244-ac38-60ca1e4abace', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51ae58236f6a432e93764d455a502033', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fcb895cc-6512-4f14-be87-dd3d6289c2b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=400ed612-fb0d-4559-97c6-466a993e3d60, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:10:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:50.828 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 in datapath fb52c397-b1e0-4244-ac38-60ca1e4abace unbound from our chassis
Oct 14 09:10:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:50.829 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fb52c397-b1e0-4244-ac38-60ca1e4abace or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:10:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:50.830 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3bfc4b6a-3c69-4ad7-a150-1e1f0aedbcbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.848 2 DEBUG nova.compute.manager [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:50 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d00000049.scope: Deactivated successfully.
Oct 14 09:10:50 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d00000049.scope: Consumed 13.725s CPU time.
Oct 14 09:10:50 compute-0 systemd-machined[214636]: Machine qemu-91-instance-00000049 terminated.
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.991 2 INFO nova.virt.libvirt.driver [-] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Instance destroyed successfully.
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.992 2 DEBUG nova.objects.instance [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lazy-loading 'resources' on Instance uuid 16c93e17-00f2-4710-a0e4-83eb60430088 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.995 2 DEBUG nova.compute.manager [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.996 2 DEBUG nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:10:50 compute-0 nova_compute[259627]: 2025-10-14 09:10:50.996 2 INFO nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Creating image(s)
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.025 2 DEBUG nova.storage.rbd_utils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] rbd image b21251cd-e46b-49fa-aebe-6250e4c587a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.048 2 DEBUG nova.storage.rbd_utils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] rbd image b21251cd-e46b-49fa-aebe-6250e4c587a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.071 2 DEBUG nova.storage.rbd_utils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] rbd image b21251cd-e46b-49fa-aebe-6250e4c587a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.074 2 DEBUG oslo_concurrency.processutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.113 2 DEBUG nova.policy [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7dc57afe994e4351943a7d3a7213a8b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '02c85c54d57243a8821bac819c665be9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.117 2 DEBUG nova.virt.libvirt.vif [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:09:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-963382784',display_name='tempest-ServerRescueTestJSON-server-963382784',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-963382784',id=73,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:10:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='51ae58236f6a432e93764d455a502033',ramdisk_id='',reservation_id='r-x8za0q3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-156376826',owner_user_name='tempest-ServerRescueTestJSON-156376826-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:10:02Z,user_data=None,user_id='7a268118aae14d449097f4a26371415e',uuid=16c93e17-00f2-4710-a0e4-83eb60430088,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "address": "fa:16:3e:f5:8d:59", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2aa7bfa4-d8", "ovs_interfaceid": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.117 2 DEBUG nova.network.os_vif_util [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Converting VIF {"id": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "address": "fa:16:3e:f5:8d:59", "network": {"id": "fb52c397-b1e0-4244-ac38-60ca1e4abace", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-237057849-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "51ae58236f6a432e93764d455a502033", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2aa7bfa4-d8", "ovs_interfaceid": "2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.119 2 DEBUG nova.network.os_vif_util [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f5:8d:59,bridge_name='br-int',has_traffic_filtering=True,id=2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549,network=Network(fb52c397-b1e0-4244-ac38-60ca1e4abace),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2aa7bfa4-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.119 2 DEBUG os_vif [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:8d:59,bridge_name='br-int',has_traffic_filtering=True,id=2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549,network=Network(fb52c397-b1e0-4244-ac38-60ca1e4abace),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2aa7bfa4-d8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.122 2 DEBUG oslo_concurrency.lockutils [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Acquiring lock "654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.122 2 DEBUG oslo_concurrency.lockutils [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lock "654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.123 2 DEBUG oslo_concurrency.lockutils [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Acquiring lock "654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.124 2 DEBUG oslo_concurrency.lockutils [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lock "654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.124 2 DEBUG oslo_concurrency.lockutils [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lock "654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.126 2 INFO nova.compute.manager [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Terminating instance
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.127 2 DEBUG nova.compute.manager [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.128 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2aa7bfa4-d8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.136 2 INFO os_vif [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:8d:59,bridge_name='br-int',has_traffic_filtering=True,id=2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549,network=Network(fb52c397-b1e0-4244-ac38-60ca1e4abace),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2aa7bfa4-d8')
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.155 2 DEBUG oslo_concurrency.processutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.156 2 DEBUG oslo_concurrency.lockutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.157 2 DEBUG oslo_concurrency.lockutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.157 2 DEBUG oslo_concurrency.lockutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.179 2 DEBUG nova.storage.rbd_utils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] rbd image b21251cd-e46b-49fa-aebe-6250e4c587a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.181 2 DEBUG oslo_concurrency.processutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 b21251cd-e46b-49fa-aebe-6250e4c587a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:51 compute-0 kernel: tap08a396fa-62 (unregistering): left promiscuous mode
Oct 14 09:10:51 compute-0 NetworkManager[44885]: <info>  [1760433051.2003] device (tap08a396fa-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:10:51 compute-0 ovn_controller[152662]: 2025-10-14T09:10:51Z|00804|binding|INFO|Releasing lport 08a396fa-6247-4d31-89a2-80d630d0fb9c from this chassis (sb_readonly=0)
Oct 14 09:10:51 compute-0 ovn_controller[152662]: 2025-10-14T09:10:51Z|00805|binding|INFO|Setting lport 08a396fa-6247-4d31-89a2-80d630d0fb9c down in Southbound
Oct 14 09:10:51 compute-0 ovn_controller[152662]: 2025-10-14T09:10:51Z|00806|binding|INFO|Removing iface tap08a396fa-62 ovn-installed in OVS
Oct 14 09:10:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:51.215 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:cf:39 10.100.0.4'], port_security=['fa:16:3e:4d:cf:39 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c61a1c5-c59e-4518-bc34-110b2cb730d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '043faa2f5adf405d84227d60182cd0c3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f1902ee8-52a4-43ff-892a-de64b4e25b7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=714ba0db-616b-4427-ae01-f109dd1694f7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=08a396fa-6247-4d31-89a2-80d630d0fb9c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:10:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:51.216 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 08a396fa-6247-4d31-89a2-80d630d0fb9c in datapath 3c61a1c5-c59e-4518-bc34-110b2cb730d8 unbound from our chassis
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:51.217 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c61a1c5-c59e-4518-bc34-110b2cb730d8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:10:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:51.219 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e92980a3-c386-461a-a7a1-9170448a5649]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:51.219 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8 namespace which is not needed anymore
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.220 2 DEBUG nova.compute.manager [req-8f5855d2-8cd6-4ef2-ae6b-bb955288db2e req-90f40a56-2997-4f35-8902-141a91b43337 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received event network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.221 2 DEBUG oslo_concurrency.lockutils [req-8f5855d2-8cd6-4ef2-ae6b-bb955288db2e req-90f40a56-2997-4f35-8902-141a91b43337 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.221 2 DEBUG oslo_concurrency.lockutils [req-8f5855d2-8cd6-4ef2-ae6b-bb955288db2e req-90f40a56-2997-4f35-8902-141a91b43337 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.221 2 DEBUG oslo_concurrency.lockutils [req-8f5855d2-8cd6-4ef2-ae6b-bb955288db2e req-90f40a56-2997-4f35-8902-141a91b43337 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.221 2 DEBUG nova.compute.manager [req-8f5855d2-8cd6-4ef2-ae6b-bb955288db2e req-90f40a56-2997-4f35-8902-141a91b43337 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] No waiting events found dispatching network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.221 2 WARNING nova.compute.manager [req-8f5855d2-8cd6-4ef2-ae6b-bb955288db2e req-90f40a56-2997-4f35-8902-141a91b43337 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received unexpected event network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 for instance with vm_state deleted and task_state None.
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.222 2 DEBUG nova.compute.manager [req-8f5855d2-8cd6-4ef2-ae6b-bb955288db2e req-90f40a56-2997-4f35-8902-141a91b43337 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received event network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.222 2 DEBUG oslo_concurrency.lockutils [req-8f5855d2-8cd6-4ef2-ae6b-bb955288db2e req-90f40a56-2997-4f35-8902-141a91b43337 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.222 2 DEBUG oslo_concurrency.lockutils [req-8f5855d2-8cd6-4ef2-ae6b-bb955288db2e req-90f40a56-2997-4f35-8902-141a91b43337 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.222 2 DEBUG oslo_concurrency.lockutils [req-8f5855d2-8cd6-4ef2-ae6b-bb955288db2e req-90f40a56-2997-4f35-8902-141a91b43337 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e503e351-20ec-43fb-b5f8-7af68dff5bcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.222 2 DEBUG nova.compute.manager [req-8f5855d2-8cd6-4ef2-ae6b-bb955288db2e req-90f40a56-2997-4f35-8902-141a91b43337 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] No waiting events found dispatching network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.222 2 WARNING nova.compute.manager [req-8f5855d2-8cd6-4ef2-ae6b-bb955288db2e req-90f40a56-2997-4f35-8902-141a91b43337 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received unexpected event network-vif-plugged-5bb94dcb-f594-4577-9c20-65816d7c57a0 for instance with vm_state deleted and task_state None.
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.223 2 DEBUG nova.compute.manager [req-8f5855d2-8cd6-4ef2-ae6b-bb955288db2e req-90f40a56-2997-4f35-8902-141a91b43337 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Received event network-vif-deleted-5bb94dcb-f594-4577-9c20-65816d7c57a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:51 compute-0 ceph-mon[74249]: pgmap v1691: 305 pgs: 305 active+clean; 319 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct 14 09:10:51 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2793902925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:51 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Oct 14 09:10:51 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d0000004d.scope: Consumed 3.256s CPU time.
Oct 14 09:10:51 compute-0 systemd-machined[214636]: Machine qemu-96-instance-0000004d terminated.
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.282 2 DEBUG nova.compute.manager [req-aeb18476-5416-45f4-b5d8-a4216c1b4647 req-b6ebb7aa-af07-4201-b271-3103cad4aa66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Received event network-vif-unplugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.283 2 DEBUG oslo_concurrency.lockutils [req-aeb18476-5416-45f4-b5d8-a4216c1b4647 req-b6ebb7aa-af07-4201-b271-3103cad4aa66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.283 2 DEBUG oslo_concurrency.lockutils [req-aeb18476-5416-45f4-b5d8-a4216c1b4647 req-b6ebb7aa-af07-4201-b271-3103cad4aa66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.284 2 DEBUG oslo_concurrency.lockutils [req-aeb18476-5416-45f4-b5d8-a4216c1b4647 req-b6ebb7aa-af07-4201-b271-3103cad4aa66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.284 2 DEBUG nova.compute.manager [req-aeb18476-5416-45f4-b5d8-a4216c1b4647 req-b6ebb7aa-af07-4201-b271-3103cad4aa66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] No waiting events found dispatching network-vif-unplugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.284 2 DEBUG nova.compute.manager [req-aeb18476-5416-45f4-b5d8-a4216c1b4647 req-b6ebb7aa-af07-4201-b271-3103cad4aa66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Received event network-vif-unplugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:10:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:51.295 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:51 compute-0 neutron-haproxy-ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8[340720]: [NOTICE]   (340724) : haproxy version is 2.8.14-c23fe91
Oct 14 09:10:51 compute-0 neutron-haproxy-ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8[340720]: [NOTICE]   (340724) : path to executable is /usr/sbin/haproxy
Oct 14 09:10:51 compute-0 neutron-haproxy-ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8[340720]: [WARNING]  (340724) : Exiting Master process...
Oct 14 09:10:51 compute-0 neutron-haproxy-ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8[340720]: [WARNING]  (340724) : Exiting Master process...
Oct 14 09:10:51 compute-0 neutron-haproxy-ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8[340720]: [ALERT]    (340724) : Current worker (340726) exited with code 143 (Terminated)
Oct 14 09:10:51 compute-0 neutron-haproxy-ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8[340720]: [WARNING]  (340724) : All workers exited. Exiting... (0)
Oct 14 09:10:51 compute-0 systemd[1]: libpod-8ccff7061e6aaca6e0e154e9812d6a1c124bdde4ec1cd4636e487fa590d76ef2.scope: Deactivated successfully.
Oct 14 09:10:51 compute-0 podman[341098]: 2025-10-14 09:10:51.361210745 +0000 UTC m=+0.044214360 container died 8ccff7061e6aaca6e0e154e9812d6a1c124bdde4ec1cd4636e487fa590d76ef2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:10:51 compute-0 NetworkManager[44885]: <info>  [1760433051.3739] manager: (tap08a396fa-62): new Tun device (/org/freedesktop/NetworkManager/Devices/333)
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.389 2 INFO nova.virt.libvirt.driver [-] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Instance destroyed successfully.
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.390 2 DEBUG nova.objects.instance [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lazy-loading 'resources' on Instance uuid 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.409 2 DEBUG nova.virt.libvirt.vif [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:10:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-850128583',display_name='tempest-ServerAddressesNegativeTestJSON-server-850128583',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-850128583',id=77,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:10:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='043faa2f5adf405d84227d60182cd0c3',ramdisk_id='',reservation_id='r-ll1o50cb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-2075326915',owner_user_name='tempest-ServerAddressesNegativeTestJSON-2075326915-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:10:49Z,user_data=None,user_id='5d68fca3e4934055a82c08459fe7da4f',uuid=654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "address": "fa:16:3e:4d:cf:39", "network": {"id": "3c61a1c5-c59e-4518-bc34-110b2cb730d8", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1535219491-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "043faa2f5adf405d84227d60182cd0c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08a396fa-62", "ovs_interfaceid": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.410 2 DEBUG nova.network.os_vif_util [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Converting VIF {"id": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "address": "fa:16:3e:4d:cf:39", "network": {"id": "3c61a1c5-c59e-4518-bc34-110b2cb730d8", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1535219491-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "043faa2f5adf405d84227d60182cd0c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08a396fa-62", "ovs_interfaceid": "08a396fa-6247-4d31-89a2-80d630d0fb9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.411 2 DEBUG nova.network.os_vif_util [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:cf:39,bridge_name='br-int',has_traffic_filtering=True,id=08a396fa-6247-4d31-89a2-80d630d0fb9c,network=Network(3c61a1c5-c59e-4518-bc34-110b2cb730d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08a396fa-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.412 2 DEBUG os_vif [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:cf:39,bridge_name='br-int',has_traffic_filtering=True,id=08a396fa-6247-4d31-89a2-80d630d0fb9c,network=Network(3c61a1c5-c59e-4518-bc34-110b2cb730d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08a396fa-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.413 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08a396fa-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.418 2 INFO os_vif [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:cf:39,bridge_name='br-int',has_traffic_filtering=True,id=08a396fa-6247-4d31-89a2-80d630d0fb9c,network=Network(3c61a1c5-c59e-4518-bc34-110b2cb730d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08a396fa-62')
Oct 14 09:10:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ccff7061e6aaca6e0e154e9812d6a1c124bdde4ec1cd4636e487fa590d76ef2-userdata-shm.mount: Deactivated successfully.
Oct 14 09:10:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb5be722c88682ab45963701a7a47dc1f788d7d499872dbbcb1d212710957dae-merged.mount: Deactivated successfully.
Oct 14 09:10:51 compute-0 podman[341098]: 2025-10-14 09:10:51.469748808 +0000 UTC m=+0.152752413 container cleanup 8ccff7061e6aaca6e0e154e9812d6a1c124bdde4ec1cd4636e487fa590d76ef2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:10:51 compute-0 systemd[1]: libpod-conmon-8ccff7061e6aaca6e0e154e9812d6a1c124bdde4ec1cd4636e487fa590d76ef2.scope: Deactivated successfully.
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.513 2 DEBUG oslo_concurrency.processutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 b21251cd-e46b-49fa-aebe-6250e4c587a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:51 compute-0 podman[341159]: 2025-10-14 09:10:51.53924188 +0000 UTC m=+0.048353552 container remove 8ccff7061e6aaca6e0e154e9812d6a1c124bdde4ec1cd4636e487fa590d76ef2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 14 09:10:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:51.544 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f592e78c-f5df-47cd-b842-f1d968055385]: (4, ('Tue Oct 14 09:10:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8 (8ccff7061e6aaca6e0e154e9812d6a1c124bdde4ec1cd4636e487fa590d76ef2)\n8ccff7061e6aaca6e0e154e9812d6a1c124bdde4ec1cd4636e487fa590d76ef2\nTue Oct 14 09:10:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8 (8ccff7061e6aaca6e0e154e9812d6a1c124bdde4ec1cd4636e487fa590d76ef2)\n8ccff7061e6aaca6e0e154e9812d6a1c124bdde4ec1cd4636e487fa590d76ef2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:51.546 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef8c813-d7c4-4166-b671-354133755281]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:51.549 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c61a1c5-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:51 compute-0 kernel: tap3c61a1c5-c0: left promiscuous mode
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:51.572 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[afce2f15-69fd-4882-a02f-5e2a48575087]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.574 2 DEBUG nova.storage.rbd_utils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] resizing rbd image b21251cd-e46b-49fa-aebe-6250e4c587a0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:10:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:51.594 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf11b89-bd2b-4957-939a-632743fa4caf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:51.595 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e01ea7a4-a2dc-493b-9c64-80ffebcf1b64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:51.608 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5fa6b0b1-20d2-44f5-b85f-e0afad006d46]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682031, 'reachable_time': 35592, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341229, 'error': None, 'target': 'ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:51.611 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c61a1c5-c59e-4518-bc34-110b2cb730d8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:10:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:51.611 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d01966-9d2a-47ed-bbe2-2566fca2e673]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:51.611 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:10:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d3c61a1c5\x2dc59e\x2d4518\x2dbc34\x2d110b2cb730d8.mount: Deactivated successfully.
Oct 14 09:10:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:51.612 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.720 2 DEBUG nova.network.neutron [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Successfully created port: 1e22b95e-0347-486e-98fa-fa15c7eb422d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.732 2 DEBUG nova.objects.instance [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lazy-loading 'migration_context' on Instance uuid b21251cd-e46b-49fa-aebe-6250e4c587a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.752 2 DEBUG nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.752 2 DEBUG nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Ensure instance console log exists: /var/lib/nova/instances/b21251cd-e46b-49fa-aebe-6250e4c587a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.753 2 DEBUG oslo_concurrency.lockutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.753 2 DEBUG oslo_concurrency.lockutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.754 2 DEBUG oslo_concurrency.lockutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 215 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 1.8 MiB/s wr, 278 op/s
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.979 2 INFO nova.virt.libvirt.driver [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Deleting instance files /var/lib/nova/instances/654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3_del
Oct 14 09:10:51 compute-0 nova_compute[259627]: 2025-10-14 09:10:51.980 2 INFO nova.virt.libvirt.driver [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Deletion of /var/lib/nova/instances/654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3_del complete
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.067 2 INFO nova.compute.manager [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Took 0.94 seconds to destroy the instance on the hypervisor.
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.068 2 DEBUG oslo.service.loopingcall [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.068 2 DEBUG nova.compute.manager [-] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.068 2 DEBUG nova.network.neutron [-] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.272 2 INFO nova.virt.libvirt.driver [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Deleting instance files /var/lib/nova/instances/16c93e17-00f2-4710-a0e4-83eb60430088_del
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.273 2 INFO nova.virt.libvirt.driver [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Deletion of /var/lib/nova/instances/16c93e17-00f2-4710-a0e4-83eb60430088_del complete
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.329 2 INFO nova.compute.manager [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Took 1.58 seconds to destroy the instance on the hypervisor.
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.329 2 DEBUG oslo.service.loopingcall [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.330 2 DEBUG nova.compute.manager [-] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.330 2 DEBUG nova.network.neutron [-] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.625 2 DEBUG nova.network.neutron [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Successfully updated port: 1e22b95e-0347-486e-98fa-fa15c7eb422d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.642 2 DEBUG oslo_concurrency.lockutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Acquiring lock "refresh_cache-b21251cd-e46b-49fa-aebe-6250e4c587a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.642 2 DEBUG oslo_concurrency.lockutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Acquired lock "refresh_cache-b21251cd-e46b-49fa-aebe-6250e4c587a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.642 2 DEBUG nova.network.neutron [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:10:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.873 2 DEBUG nova.network.neutron [-] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.895 2 INFO nova.compute.manager [-] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Took 0.83 seconds to deallocate network for instance.
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.915 2 DEBUG nova.network.neutron [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.952 2 DEBUG oslo_concurrency.lockutils [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:52 compute-0 nova_compute[259627]: 2025-10-14 09:10:52.953 2 DEBUG oslo_concurrency.lockutils [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.052 2 DEBUG nova.network.neutron [-] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.069 2 DEBUG oslo_concurrency.processutils [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.139 2 INFO nova.compute.manager [-] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Took 0.81 seconds to deallocate network for instance.
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.193 2 DEBUG oslo_concurrency.lockutils [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:53 compute-0 ceph-mon[74249]: pgmap v1692: 305 pgs: 305 active+clean; 215 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 1.8 MiB/s wr, 278 op/s
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.377 2 DEBUG nova.compute.manager [req-94b6c6d8-9fe0-48a4-b895-38ca6ba5af01 req-d602dd0f-5baf-4d5c-a5ae-012984134f1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Received event network-changed-1e22b95e-0347-486e-98fa-fa15c7eb422d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.377 2 DEBUG nova.compute.manager [req-94b6c6d8-9fe0-48a4-b895-38ca6ba5af01 req-d602dd0f-5baf-4d5c-a5ae-012984134f1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Refreshing instance network info cache due to event network-changed-1e22b95e-0347-486e-98fa-fa15c7eb422d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.378 2 DEBUG oslo_concurrency.lockutils [req-94b6c6d8-9fe0-48a4-b895-38ca6ba5af01 req-d602dd0f-5baf-4d5c-a5ae-012984134f1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b21251cd-e46b-49fa-aebe-6250e4c587a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:10:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:10:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2621079942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.537 2 DEBUG oslo_concurrency.processutils [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.541 2 DEBUG nova.compute.manager [req-fb64c3a6-ea4c-4e7f-963d-ecdbd10f07ab req-d55e9c62-a273-4755-a120-10dc7a3fe80f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Received event network-vif-plugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.541 2 DEBUG oslo_concurrency.lockutils [req-fb64c3a6-ea4c-4e7f-963d-ecdbd10f07ab req-d55e9c62-a273-4755-a120-10dc7a3fe80f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.541 2 DEBUG oslo_concurrency.lockutils [req-fb64c3a6-ea4c-4e7f-963d-ecdbd10f07ab req-d55e9c62-a273-4755-a120-10dc7a3fe80f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.542 2 DEBUG oslo_concurrency.lockutils [req-fb64c3a6-ea4c-4e7f-963d-ecdbd10f07ab req-d55e9c62-a273-4755-a120-10dc7a3fe80f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.542 2 DEBUG nova.compute.manager [req-fb64c3a6-ea4c-4e7f-963d-ecdbd10f07ab req-d55e9c62-a273-4755-a120-10dc7a3fe80f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] No waiting events found dispatching network-vif-plugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.542 2 WARNING nova.compute.manager [req-fb64c3a6-ea4c-4e7f-963d-ecdbd10f07ab req-d55e9c62-a273-4755-a120-10dc7a3fe80f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Received unexpected event network-vif-plugged-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 for instance with vm_state deleted and task_state None.
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.542 2 DEBUG nova.compute.manager [req-fb64c3a6-ea4c-4e7f-963d-ecdbd10f07ab req-d55e9c62-a273-4755-a120-10dc7a3fe80f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Received event network-vif-deleted-08a396fa-6247-4d31-89a2-80d630d0fb9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.549 2 DEBUG nova.compute.provider_tree [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.571 2 DEBUG nova.scheduler.client.report [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.602 2 DEBUG oslo_concurrency.lockutils [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.604 2 DEBUG oslo_concurrency.lockutils [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.628 2 INFO nova.scheduler.client.report [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Deleted allocations for instance 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.705 2 DEBUG oslo_concurrency.processutils [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:53 compute-0 nova_compute[259627]: 2025-10-14 09:10:53.758 2 DEBUG oslo_concurrency.lockutils [None req-95cdd427-8764-4a48-9b43-826474109835 5d68fca3e4934055a82c08459fe7da4f 043faa2f5adf405d84227d60182cd0c3 - - default default] Lock "654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 215 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 17 KiB/s wr, 175 op/s
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.036 2 DEBUG nova.network.neutron [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Updating instance_info_cache with network_info: [{"id": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "address": "fa:16:3e:a8:8e:a4", "network": {"id": "2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-59590451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02c85c54d57243a8821bac819c665be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e22b95e-03", "ovs_interfaceid": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.074 2 DEBUG oslo_concurrency.lockutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Releasing lock "refresh_cache-b21251cd-e46b-49fa-aebe-6250e4c587a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.074 2 DEBUG nova.compute.manager [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Instance network_info: |[{"id": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "address": "fa:16:3e:a8:8e:a4", "network": {"id": "2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-59590451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02c85c54d57243a8821bac819c665be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e22b95e-03", "ovs_interfaceid": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.075 2 DEBUG oslo_concurrency.lockutils [req-94b6c6d8-9fe0-48a4-b895-38ca6ba5af01 req-d602dd0f-5baf-4d5c-a5ae-012984134f1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b21251cd-e46b-49fa-aebe-6250e4c587a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.075 2 DEBUG nova.network.neutron [req-94b6c6d8-9fe0-48a4-b895-38ca6ba5af01 req-d602dd0f-5baf-4d5c-a5ae-012984134f1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Refreshing network info cache for port 1e22b95e-0347-486e-98fa-fa15c7eb422d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.078 2 DEBUG nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Start _get_guest_xml network_info=[{"id": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "address": "fa:16:3e:a8:8e:a4", "network": {"id": "2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-59590451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02c85c54d57243a8821bac819c665be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e22b95e-03", "ovs_interfaceid": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.084 2 WARNING nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.089 2 DEBUG nova.virt.libvirt.host [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.090 2 DEBUG nova.virt.libvirt.host [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.097 2 DEBUG nova.virt.libvirt.host [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.098 2 DEBUG nova.virt.libvirt.host [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.098 2 DEBUG nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.099 2 DEBUG nova.virt.hardware [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.099 2 DEBUG nova.virt.hardware [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.100 2 DEBUG nova.virt.hardware [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.100 2 DEBUG nova.virt.hardware [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.100 2 DEBUG nova.virt.hardware [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.100 2 DEBUG nova.virt.hardware [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.101 2 DEBUG nova.virt.hardware [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.101 2 DEBUG nova.virt.hardware [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.101 2 DEBUG nova.virt.hardware [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.101 2 DEBUG nova.virt.hardware [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.102 2 DEBUG nova.virt.hardware [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.106 2 DEBUG oslo_concurrency.processutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:10:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3908162110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.177 2 DEBUG oslo_concurrency.processutils [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.184 2 DEBUG nova.compute.provider_tree [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.202 2 DEBUG nova.scheduler.client.report [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.232 2 DEBUG oslo_concurrency.lockutils [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:54 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2621079942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:54 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3908162110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.266 2 INFO nova.scheduler.client.report [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Deleted allocations for instance 16c93e17-00f2-4710-a0e4-83eb60430088
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.341 2 DEBUG oslo_concurrency.lockutils [None req-e07d638f-9aad-4e48-9ca2-1a856b8202fd 7a268118aae14d449097f4a26371415e 51ae58236f6a432e93764d455a502033 - - default default] Lock "16c93e17-00f2-4710-a0e4-83eb60430088" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:10:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2300239225' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.524 2 DEBUG oslo_concurrency.processutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.551 2 DEBUG nova.storage.rbd_utils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] rbd image b21251cd-e46b-49fa-aebe-6250e4c587a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:54 compute-0 nova_compute[259627]: 2025-10-14 09:10:54.557 2 DEBUG oslo_concurrency.processutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:10:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1461967249' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.001 2 DEBUG oslo_concurrency.processutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.004 2 DEBUG nova.virt.libvirt.vif [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:10:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-485089637',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-485089637',id=78,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='02c85c54d57243a8821bac819c665be9',ramdisk_id='',reservation_id='r-ybz8fre0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-303004445',owner_user_name='tempest-InstanceActionsV221TestJSON-303004445-project-member'},ta
gs=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:10:50Z,user_data=None,user_id='7dc57afe994e4351943a7d3a7213a8b3',uuid=b21251cd-e46b-49fa-aebe-6250e4c587a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "address": "fa:16:3e:a8:8e:a4", "network": {"id": "2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-59590451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02c85c54d57243a8821bac819c665be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e22b95e-03", "ovs_interfaceid": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.005 2 DEBUG nova.network.os_vif_util [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Converting VIF {"id": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "address": "fa:16:3e:a8:8e:a4", "network": {"id": "2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-59590451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02c85c54d57243a8821bac819c665be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e22b95e-03", "ovs_interfaceid": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.008 2 DEBUG nova.network.os_vif_util [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:8e:a4,bridge_name='br-int',has_traffic_filtering=True,id=1e22b95e-0347-486e-98fa-fa15c7eb422d,network=Network(2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e22b95e-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.010 2 DEBUG nova.objects.instance [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lazy-loading 'pci_devices' on Instance uuid b21251cd-e46b-49fa-aebe-6250e4c587a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.032 2 DEBUG nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:10:55 compute-0 nova_compute[259627]:   <uuid>b21251cd-e46b-49fa-aebe-6250e4c587a0</uuid>
Oct 14 09:10:55 compute-0 nova_compute[259627]:   <name>instance-0000004e</name>
Oct 14 09:10:55 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:10:55 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:10:55 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <nova:name>tempest-InstanceActionsV221TestJSON-server-485089637</nova:name>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:10:54</nova:creationTime>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:10:55 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:10:55 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:10:55 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:10:55 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:10:55 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:10:55 compute-0 nova_compute[259627]:         <nova:user uuid="7dc57afe994e4351943a7d3a7213a8b3">tempest-InstanceActionsV221TestJSON-303004445-project-member</nova:user>
Oct 14 09:10:55 compute-0 nova_compute[259627]:         <nova:project uuid="02c85c54d57243a8821bac819c665be9">tempest-InstanceActionsV221TestJSON-303004445</nova:project>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:10:55 compute-0 nova_compute[259627]:         <nova:port uuid="1e22b95e-0347-486e-98fa-fa15c7eb422d">
Oct 14 09:10:55 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:10:55 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:10:55 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <system>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <entry name="serial">b21251cd-e46b-49fa-aebe-6250e4c587a0</entry>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <entry name="uuid">b21251cd-e46b-49fa-aebe-6250e4c587a0</entry>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     </system>
Oct 14 09:10:55 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:10:55 compute-0 nova_compute[259627]:   <os>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:   </os>
Oct 14 09:10:55 compute-0 nova_compute[259627]:   <features>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:   </features>
Oct 14 09:10:55 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:10:55 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:10:55 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/b21251cd-e46b-49fa-aebe-6250e4c587a0_disk">
Oct 14 09:10:55 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       </source>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:10:55 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/b21251cd-e46b-49fa-aebe-6250e4c587a0_disk.config">
Oct 14 09:10:55 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       </source>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:10:55 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:a8:8e:a4"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <target dev="tap1e22b95e-03"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/b21251cd-e46b-49fa-aebe-6250e4c587a0/console.log" append="off"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <video>
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     </video>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:10:55 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:10:55 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:10:55 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:10:55 compute-0 nova_compute[259627]: </domain>
Oct 14 09:10:55 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.034 2 DEBUG nova.compute.manager [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Preparing to wait for external event network-vif-plugged-1e22b95e-0347-486e-98fa-fa15c7eb422d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.035 2 DEBUG oslo_concurrency.lockutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Acquiring lock "b21251cd-e46b-49fa-aebe-6250e4c587a0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.035 2 DEBUG oslo_concurrency.lockutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lock "b21251cd-e46b-49fa-aebe-6250e4c587a0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.036 2 DEBUG oslo_concurrency.lockutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lock "b21251cd-e46b-49fa-aebe-6250e4c587a0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.037 2 DEBUG nova.virt.libvirt.vif [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:10:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-485089637',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-485089637',id=78,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='02c85c54d57243a8821bac819c665be9',ramdisk_id='',reservation_id='r-ybz8fre0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-303004445',owner_user_name='tempest-InstanceActionsV221TestJSON-303004445-project-m
ember'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:10:50Z,user_data=None,user_id='7dc57afe994e4351943a7d3a7213a8b3',uuid=b21251cd-e46b-49fa-aebe-6250e4c587a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "address": "fa:16:3e:a8:8e:a4", "network": {"id": "2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-59590451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02c85c54d57243a8821bac819c665be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e22b95e-03", "ovs_interfaceid": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.038 2 DEBUG nova.network.os_vif_util [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Converting VIF {"id": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "address": "fa:16:3e:a8:8e:a4", "network": {"id": "2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-59590451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02c85c54d57243a8821bac819c665be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e22b95e-03", "ovs_interfaceid": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.039 2 DEBUG nova.network.os_vif_util [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:8e:a4,bridge_name='br-int',has_traffic_filtering=True,id=1e22b95e-0347-486e-98fa-fa15c7eb422d,network=Network(2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e22b95e-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.040 2 DEBUG os_vif [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:8e:a4,bridge_name='br-int',has_traffic_filtering=True,id=1e22b95e-0347-486e-98fa-fa15c7eb422d,network=Network(2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e22b95e-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.041 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.042 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e22b95e-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.047 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1e22b95e-03, col_values=(('external_ids', {'iface-id': '1e22b95e-0347-486e-98fa-fa15c7eb422d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:8e:a4', 'vm-uuid': 'b21251cd-e46b-49fa-aebe-6250e4c587a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:55 compute-0 NetworkManager[44885]: <info>  [1760433055.0513] manager: (tap1e22b95e-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/334)
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.060 2 INFO os_vif [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:8e:a4,bridge_name='br-int',has_traffic_filtering=True,id=1e22b95e-0347-486e-98fa-fa15c7eb422d,network=Network(2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e22b95e-03')
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.133 2 DEBUG nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.134 2 DEBUG nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.134 2 DEBUG nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] No VIF found with MAC fa:16:3e:a8:8e:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.136 2 INFO nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Using config drive
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.170 2 DEBUG nova.storage.rbd_utils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] rbd image b21251cd-e46b-49fa-aebe-6250e4c587a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:55 compute-0 ceph-mon[74249]: pgmap v1693: 305 pgs: 305 active+clean; 215 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 17 KiB/s wr, 175 op/s
Oct 14 09:10:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2300239225' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1461967249' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.578 2 INFO nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Creating config drive at /var/lib/nova/instances/b21251cd-e46b-49fa-aebe-6250e4c587a0/disk.config
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.584 2 DEBUG oslo_concurrency.processutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b21251cd-e46b-49fa-aebe-6250e4c587a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0sivsqrt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.723 2 DEBUG oslo_concurrency.processutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b21251cd-e46b-49fa-aebe-6250e4c587a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0sivsqrt" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.754 2 DEBUG nova.storage.rbd_utils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] rbd image b21251cd-e46b-49fa-aebe-6250e4c587a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.758 2 DEBUG oslo_concurrency.processutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b21251cd-e46b-49fa-aebe-6250e4c587a0/disk.config b21251cd-e46b-49fa-aebe-6250e4c587a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:10:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 88 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 303 op/s
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.943 2 DEBUG oslo_concurrency.processutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b21251cd-e46b-49fa-aebe-6250e4c587a0/disk.config b21251cd-e46b-49fa-aebe-6250e4c587a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:10:55 compute-0 nova_compute[259627]: 2025-10-14 09:10:55.944 2 INFO nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Deleting local config drive /var/lib/nova/instances/b21251cd-e46b-49fa-aebe-6250e4c587a0/disk.config because it was imported into RBD.
Oct 14 09:10:56 compute-0 kernel: tap1e22b95e-03: entered promiscuous mode
Oct 14 09:10:56 compute-0 NetworkManager[44885]: <info>  [1760433056.0229] manager: (tap1e22b95e-03): new Tun device (/org/freedesktop/NetworkManager/Devices/335)
Oct 14 09:10:56 compute-0 ovn_controller[152662]: 2025-10-14T09:10:56Z|00807|binding|INFO|Claiming lport 1e22b95e-0347-486e-98fa-fa15c7eb422d for this chassis.
Oct 14 09:10:56 compute-0 ovn_controller[152662]: 2025-10-14T09:10:56Z|00808|binding|INFO|1e22b95e-0347-486e-98fa-fa15c7eb422d: Claiming fa:16:3e:a8:8e:a4 10.100.0.5
Oct 14 09:10:56 compute-0 nova_compute[259627]: 2025-10-14 09:10:56.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:56 compute-0 nova_compute[259627]: 2025-10-14 09:10:56.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.052 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:8e:a4 10.100.0.5'], port_security=['fa:16:3e:a8:8e:a4 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b21251cd-e46b-49fa-aebe-6250e4c587a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '02c85c54d57243a8821bac819c665be9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b7eea1c-aa0d-4728-856a-aa229574a14b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=35be9187-f52a-4c1f-9463-ee1a62c06211, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1e22b95e-0347-486e-98fa-fa15c7eb422d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.055 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1e22b95e-0347-486e-98fa-fa15c7eb422d in datapath 2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76 bound to our chassis
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.058 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76
Oct 14 09:10:56 compute-0 systemd-udevd[341429]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:10:56 compute-0 systemd-machined[214636]: New machine qemu-98-instance-0000004e.
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.076 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[01be3260-731f-4fe1-b966-a8f313053441]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.085 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2fa36ac5-b1 in ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.088 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2fa36ac5-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.088 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f3cd1222-9132-4dc3-9f4d-eb364d6f50be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.089 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aee6923a-3507-47d5-8300-6c8f7da3c2eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:56 compute-0 systemd[1]: Started Virtual Machine qemu-98-instance-0000004e.
Oct 14 09:10:56 compute-0 NetworkManager[44885]: <info>  [1760433056.0995] device (tap1e22b95e-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:10:56 compute-0 NetworkManager[44885]: <info>  [1760433056.1008] device (tap1e22b95e-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.104 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad9f586-6649-41b4-bd42-1337b660eeca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.131 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[764c49b7-6bf2-4ef6-a4ea-91984fa6549d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:56 compute-0 nova_compute[259627]: 2025-10-14 09:10:56.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:56 compute-0 ovn_controller[152662]: 2025-10-14T09:10:56Z|00809|binding|INFO|Setting lport 1e22b95e-0347-486e-98fa-fa15c7eb422d ovn-installed in OVS
Oct 14 09:10:56 compute-0 ovn_controller[152662]: 2025-10-14T09:10:56Z|00810|binding|INFO|Setting lport 1e22b95e-0347-486e-98fa-fa15c7eb422d up in Southbound
Oct 14 09:10:56 compute-0 nova_compute[259627]: 2025-10-14 09:10:56.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.168 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d1456c87-1731-4983-a78d-b67fab05e6c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.173 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3fc9b6-c2f7-448a-8412-9c0ee5e83702]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:56 compute-0 NetworkManager[44885]: <info>  [1760433056.1783] manager: (tap2fa36ac5-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/336)
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.209 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7ecc0b20-6501-4985-a1c8-a9ef151fd6d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.213 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d281059c-af81-4224-a005-ddd625d27abb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:56 compute-0 NetworkManager[44885]: <info>  [1760433056.2355] device (tap2fa36ac5-b0): carrier: link connected
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.241 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[21ba7e1b-7465-42ee-b6e6-f6a377491da0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.259 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[65c4c3bb-3355-4a57-a2ff-a3f6a36354b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fa36ac5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:e8:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683499, 'reachable_time': 24237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341461, 'error': None, 'target': 'ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.274 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6f0da051-fa74-4bd9-a024-89aa43c3f325]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe27:e8d9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683499, 'tstamp': 683499}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341462, 'error': None, 'target': 'ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.294 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8d4698df-834a-4979-a442-58d2183ea44d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fa36ac5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:e8:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683499, 'reachable_time': 24237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 341463, 'error': None, 'target': 'ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:56 compute-0 nova_compute[259627]: 2025-10-14 09:10:56.326 2 DEBUG nova.compute.manager [req-6e845325-a66d-46b8-aebf-3e90d36bcf82 req-2ff2534e-b1cc-4097-a4d1-14402414a892 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Received event network-vif-plugged-1e22b95e-0347-486e-98fa-fa15c7eb422d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:56 compute-0 nova_compute[259627]: 2025-10-14 09:10:56.327 2 DEBUG oslo_concurrency.lockutils [req-6e845325-a66d-46b8-aebf-3e90d36bcf82 req-2ff2534e-b1cc-4097-a4d1-14402414a892 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b21251cd-e46b-49fa-aebe-6250e4c587a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:56 compute-0 nova_compute[259627]: 2025-10-14 09:10:56.327 2 DEBUG oslo_concurrency.lockutils [req-6e845325-a66d-46b8-aebf-3e90d36bcf82 req-2ff2534e-b1cc-4097-a4d1-14402414a892 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b21251cd-e46b-49fa-aebe-6250e4c587a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:56 compute-0 nova_compute[259627]: 2025-10-14 09:10:56.328 2 DEBUG oslo_concurrency.lockutils [req-6e845325-a66d-46b8-aebf-3e90d36bcf82 req-2ff2534e-b1cc-4097-a4d1-14402414a892 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b21251cd-e46b-49fa-aebe-6250e4c587a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:56 compute-0 nova_compute[259627]: 2025-10-14 09:10:56.328 2 DEBUG nova.compute.manager [req-6e845325-a66d-46b8-aebf-3e90d36bcf82 req-2ff2534e-b1cc-4097-a4d1-14402414a892 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Processing event network-vif-plugged-1e22b95e-0347-486e-98fa-fa15c7eb422d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.347 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c813a95d-90af-442f-b640-af9d559c0cbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.413 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5c6455-d21c-4853-8d74-d56a9f2eb0a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.414 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fa36ac5-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.415 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.415 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fa36ac5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:56 compute-0 nova_compute[259627]: 2025-10-14 09:10:56.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:56 compute-0 kernel: tap2fa36ac5-b0: entered promiscuous mode
Oct 14 09:10:56 compute-0 NetworkManager[44885]: <info>  [1760433056.4179] manager: (tap2fa36ac5-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Oct 14 09:10:56 compute-0 nova_compute[259627]: 2025-10-14 09:10:56.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.420 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2fa36ac5-b0, col_values=(('external_ids', {'iface-id': '1eb87687-9e48-456c-8b30-e425f2155f70'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:56 compute-0 nova_compute[259627]: 2025-10-14 09:10:56.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:56 compute-0 ovn_controller[152662]: 2025-10-14T09:10:56Z|00811|binding|INFO|Releasing lport 1eb87687-9e48-456c-8b30-e425f2155f70 from this chassis (sb_readonly=0)
Oct 14 09:10:56 compute-0 nova_compute[259627]: 2025-10-14 09:10:56.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.444 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.445 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1973d70b-2949-46bc-bf0d-05e41ca28f23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.445 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76.pid.haproxy
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:10:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:56.446 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76', 'env', 'PROCESS_TAG=haproxy-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:10:56 compute-0 nova_compute[259627]: 2025-10-14 09:10:56.517 2 DEBUG nova.network.neutron [req-94b6c6d8-9fe0-48a4-b895-38ca6ba5af01 req-d602dd0f-5baf-4d5c-a5ae-012984134f1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Updated VIF entry in instance network info cache for port 1e22b95e-0347-486e-98fa-fa15c7eb422d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:10:56 compute-0 nova_compute[259627]: 2025-10-14 09:10:56.518 2 DEBUG nova.network.neutron [req-94b6c6d8-9fe0-48a4-b895-38ca6ba5af01 req-d602dd0f-5baf-4d5c-a5ae-012984134f1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Updating instance_info_cache with network_info: [{"id": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "address": "fa:16:3e:a8:8e:a4", "network": {"id": "2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-59590451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02c85c54d57243a8821bac819c665be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e22b95e-03", "ovs_interfaceid": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:10:56 compute-0 nova_compute[259627]: 2025-10-14 09:10:56.537 2 DEBUG oslo_concurrency.lockutils [req-94b6c6d8-9fe0-48a4-b895-38ca6ba5af01 req-d602dd0f-5baf-4d5c-a5ae-012984134f1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b21251cd-e46b-49fa-aebe-6250e4c587a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:10:56 compute-0 nova_compute[259627]: 2025-10-14 09:10:56.537 2 DEBUG nova.compute.manager [req-94b6c6d8-9fe0-48a4-b895-38ca6ba5af01 req-d602dd0f-5baf-4d5c-a5ae-012984134f1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Received event network-vif-deleted-2aa7bfa4-d82d-4c07-8fe8-d5266e5b7549 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:56 compute-0 podman[341527]: 2025-10-14 09:10:56.896714449 +0000 UTC m=+0.071487901 container create 0122aeff301eb853be27c15e7eda127087534da89a4b617f1d4d5293333a39d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:10:56 compute-0 systemd[1]: Started libpod-conmon-0122aeff301eb853be27c15e7eda127087534da89a4b617f1d4d5293333a39d9.scope.
Oct 14 09:10:56 compute-0 podman[341527]: 2025-10-14 09:10:56.856457808 +0000 UTC m=+0.031231320 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:10:56 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b322f6d683cb3a7341eb3e634cb012bd1509db226e45cac3cdd9352f8197226/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:10:56 compute-0 podman[341527]: 2025-10-14 09:10:56.983869986 +0000 UTC m=+0.158643428 container init 0122aeff301eb853be27c15e7eda127087534da89a4b617f1d4d5293333a39d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:10:56 compute-0 podman[341527]: 2025-10-14 09:10:56.992798585 +0000 UTC m=+0.167572017 container start 0122aeff301eb853be27c15e7eda127087534da89a4b617f1d4d5293333a39d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:10:57 compute-0 neutron-haproxy-ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76[341554]: [NOTICE]   (341579) : New worker (341591) forked
Oct 14 09:10:57 compute-0 neutron-haproxy-ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76[341554]: [NOTICE]   (341579) : Loading success.
Oct 14 09:10:57 compute-0 podman[341553]: 2025-10-14 09:10:57.059779515 +0000 UTC m=+0.112947433 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:10:57 compute-0 podman[341549]: 2025-10-14 09:10:57.060142684 +0000 UTC m=+0.120132850 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 14 09:10:57 compute-0 ceph-mon[74249]: pgmap v1694: 305 pgs: 305 active+clean; 88 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 303 op/s
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.375 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433057.374798, b21251cd-e46b-49fa-aebe-6250e4c587a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.375 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] VM Started (Lifecycle Event)
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.377 2 DEBUG nova.compute.manager [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.379 2 DEBUG nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.383 2 INFO nova.virt.libvirt.driver [-] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Instance spawned successfully.
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.383 2 DEBUG nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.407 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.412 2 DEBUG nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.413 2 DEBUG nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.414 2 DEBUG nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.414 2 DEBUG nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.415 2 DEBUG nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.415 2 DEBUG nova.virt.libvirt.driver [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.420 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.460 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.460 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433057.377991, b21251cd-e46b-49fa-aebe-6250e4c587a0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.461 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] VM Paused (Lifecycle Event)
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.484 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.486 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433057.3820944, b21251cd-e46b-49fa-aebe-6250e4c587a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.487 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] VM Resumed (Lifecycle Event)
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.489 2 INFO nova.compute.manager [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Took 6.49 seconds to spawn the instance on the hypervisor.
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.489 2 DEBUG nova.compute.manager [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.515 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.518 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.542 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.551 2 INFO nova.compute.manager [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Took 7.62 seconds to build instance.
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:57 compute-0 nova_compute[259627]: 2025-10-14 09:10:57.567 2 DEBUG oslo_concurrency.lockutils [None req-e83a159a-6c06-4f3d-82fd-3e7e6b9f4ae3 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lock "b21251cd-e46b-49fa-aebe-6250e4c587a0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:10:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 88 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 279 op/s
Oct 14 09:10:58 compute-0 nova_compute[259627]: 2025-10-14 09:10:58.456 2 DEBUG nova.compute.manager [req-9d4202f1-3b14-48f8-a9e2-76e0bba0bcf9 req-d0e102f2-fb88-47b5-859c-e12ab06a4616 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Received event network-vif-plugged-1e22b95e-0347-486e-98fa-fa15c7eb422d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:10:58 compute-0 nova_compute[259627]: 2025-10-14 09:10:58.456 2 DEBUG oslo_concurrency.lockutils [req-9d4202f1-3b14-48f8-a9e2-76e0bba0bcf9 req-d0e102f2-fb88-47b5-859c-e12ab06a4616 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b21251cd-e46b-49fa-aebe-6250e4c587a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:58 compute-0 nova_compute[259627]: 2025-10-14 09:10:58.457 2 DEBUG oslo_concurrency.lockutils [req-9d4202f1-3b14-48f8-a9e2-76e0bba0bcf9 req-d0e102f2-fb88-47b5-859c-e12ab06a4616 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b21251cd-e46b-49fa-aebe-6250e4c587a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:58 compute-0 nova_compute[259627]: 2025-10-14 09:10:58.457 2 DEBUG oslo_concurrency.lockutils [req-9d4202f1-3b14-48f8-a9e2-76e0bba0bcf9 req-d0e102f2-fb88-47b5-859c-e12ab06a4616 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b21251cd-e46b-49fa-aebe-6250e4c587a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:58 compute-0 nova_compute[259627]: 2025-10-14 09:10:58.457 2 DEBUG nova.compute.manager [req-9d4202f1-3b14-48f8-a9e2-76e0bba0bcf9 req-d0e102f2-fb88-47b5-859c-e12ab06a4616 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] No waiting events found dispatching network-vif-plugged-1e22b95e-0347-486e-98fa-fa15c7eb422d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:10:58 compute-0 nova_compute[259627]: 2025-10-14 09:10:58.457 2 WARNING nova.compute.manager [req-9d4202f1-3b14-48f8-a9e2-76e0bba0bcf9 req-d0e102f2-fb88-47b5-859c-e12ab06a4616 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Received unexpected event network-vif-plugged-1e22b95e-0347-486e-98fa-fa15c7eb422d for instance with vm_state active and task_state None.
Oct 14 09:10:59 compute-0 ceph-mon[74249]: pgmap v1695: 305 pgs: 305 active+clean; 88 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 279 op/s
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.451 2 DEBUG oslo_concurrency.lockutils [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Acquiring lock "b21251cd-e46b-49fa-aebe-6250e4c587a0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.452 2 DEBUG oslo_concurrency.lockutils [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lock "b21251cd-e46b-49fa-aebe-6250e4c587a0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.452 2 DEBUG oslo_concurrency.lockutils [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Acquiring lock "b21251cd-e46b-49fa-aebe-6250e4c587a0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.452 2 DEBUG oslo_concurrency.lockutils [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lock "b21251cd-e46b-49fa-aebe-6250e4c587a0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.452 2 DEBUG oslo_concurrency.lockutils [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lock "b21251cd-e46b-49fa-aebe-6250e4c587a0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.453 2 INFO nova.compute.manager [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Terminating instance
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.454 2 DEBUG nova.compute.manager [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:10:59 compute-0 kernel: tap1e22b95e-03 (unregistering): left promiscuous mode
Oct 14 09:10:59 compute-0 NetworkManager[44885]: <info>  [1760433059.5032] device (tap1e22b95e-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:10:59 compute-0 ovn_controller[152662]: 2025-10-14T09:10:59Z|00812|binding|INFO|Releasing lport 1e22b95e-0347-486e-98fa-fa15c7eb422d from this chassis (sb_readonly=0)
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:59 compute-0 ovn_controller[152662]: 2025-10-14T09:10:59Z|00813|binding|INFO|Setting lport 1e22b95e-0347-486e-98fa-fa15c7eb422d down in Southbound
Oct 14 09:10:59 compute-0 ovn_controller[152662]: 2025-10-14T09:10:59Z|00814|binding|INFO|Removing iface tap1e22b95e-03 ovn-installed in OVS
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:59.527 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:8e:a4 10.100.0.5'], port_security=['fa:16:3e:a8:8e:a4 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b21251cd-e46b-49fa-aebe-6250e4c587a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '02c85c54d57243a8821bac819c665be9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b7eea1c-aa0d-4728-856a-aa229574a14b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=35be9187-f52a-4c1f-9463-ee1a62c06211, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1e22b95e-0347-486e-98fa-fa15c7eb422d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:10:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:59.529 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1e22b95e-0347-486e-98fa-fa15c7eb422d in datapath 2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76 unbound from our chassis
Oct 14 09:10:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:59.531 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:10:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:59.531 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1a927b-050c-48fa-8592-59b861cdf9f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:59.532 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76 namespace which is not needed anymore
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:59 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Oct 14 09:10:59 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d0000004e.scope: Consumed 3.335s CPU time.
Oct 14 09:10:59 compute-0 systemd-machined[214636]: Machine qemu-98-instance-0000004e terminated.
Oct 14 09:10:59 compute-0 neutron-haproxy-ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76[341554]: [NOTICE]   (341579) : haproxy version is 2.8.14-c23fe91
Oct 14 09:10:59 compute-0 neutron-haproxy-ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76[341554]: [NOTICE]   (341579) : path to executable is /usr/sbin/haproxy
Oct 14 09:10:59 compute-0 neutron-haproxy-ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76[341554]: [WARNING]  (341579) : Exiting Master process...
Oct 14 09:10:59 compute-0 neutron-haproxy-ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76[341554]: [WARNING]  (341579) : Exiting Master process...
Oct 14 09:10:59 compute-0 neutron-haproxy-ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76[341554]: [ALERT]    (341579) : Current worker (341591) exited with code 143 (Terminated)
Oct 14 09:10:59 compute-0 neutron-haproxy-ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76[341554]: [WARNING]  (341579) : All workers exited. Exiting... (0)
Oct 14 09:10:59 compute-0 systemd[1]: libpod-0122aeff301eb853be27c15e7eda127087534da89a4b617f1d4d5293333a39d9.scope: Deactivated successfully.
Oct 14 09:10:59 compute-0 conmon[341554]: conmon 0122aeff301eb853be27 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0122aeff301eb853be27c15e7eda127087534da89a4b617f1d4d5293333a39d9.scope/container/memory.events
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.696 2 INFO nova.virt.libvirt.driver [-] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Instance destroyed successfully.
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.696 2 DEBUG nova.objects.instance [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lazy-loading 'resources' on Instance uuid b21251cd-e46b-49fa-aebe-6250e4c587a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:10:59 compute-0 podman[341633]: 2025-10-14 09:10:59.699653497 +0000 UTC m=+0.057329372 container died 0122aeff301eb853be27c15e7eda127087534da89a4b617f1d4d5293333a39d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.711 2 DEBUG nova.virt.libvirt.vif [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:10:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-485089637',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-485089637',id=78,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:10:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='02c85c54d57243a8821bac819c665be9',ramdisk_id='',reservation_id='r-ybz8fre0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsV221TestJSON-303004445',owner_user_name='tempest-InstanceActionsV221TestJSON-303004445-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:10:57Z,user_data=None,user_id='7dc57afe994e4351943a7d3a7213a8b3',uuid=b21251cd-e46b-49fa-aebe-6250e4c587a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "address": "fa:16:3e:a8:8e:a4", "network": {"id": "2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-59590451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02c85c54d57243a8821bac819c665be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e22b95e-03", "ovs_interfaceid": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.711 2 DEBUG nova.network.os_vif_util [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Converting VIF {"id": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "address": "fa:16:3e:a8:8e:a4", "network": {"id": "2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-59590451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02c85c54d57243a8821bac819c665be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e22b95e-03", "ovs_interfaceid": "1e22b95e-0347-486e-98fa-fa15c7eb422d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.712 2 DEBUG nova.network.os_vif_util [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:8e:a4,bridge_name='br-int',has_traffic_filtering=True,id=1e22b95e-0347-486e-98fa-fa15c7eb422d,network=Network(2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e22b95e-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.713 2 DEBUG os_vif [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:8e:a4,bridge_name='br-int',has_traffic_filtering=True,id=1e22b95e-0347-486e-98fa-fa15c7eb422d,network=Network(2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e22b95e-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.718 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e22b95e-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.725 2 INFO os_vif [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:8e:a4,bridge_name='br-int',has_traffic_filtering=True,id=1e22b95e-0347-486e-98fa-fa15c7eb422d,network=Network(2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e22b95e-03')
Oct 14 09:10:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0122aeff301eb853be27c15e7eda127087534da89a4b617f1d4d5293333a39d9-userdata-shm.mount: Deactivated successfully.
Oct 14 09:10:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b322f6d683cb3a7341eb3e634cb012bd1509db226e45cac3cdd9352f8197226-merged.mount: Deactivated successfully.
Oct 14 09:10:59 compute-0 podman[341633]: 2025-10-14 09:10:59.757583814 +0000 UTC m=+0.115259659 container cleanup 0122aeff301eb853be27c15e7eda127087534da89a4b617f1d4d5293333a39d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:10:59 compute-0 systemd[1]: libpod-conmon-0122aeff301eb853be27c15e7eda127087534da89a4b617f1d4d5293333a39d9.scope: Deactivated successfully.
Oct 14 09:10:59 compute-0 podman[341688]: 2025-10-14 09:10:59.838751583 +0000 UTC m=+0.055350964 container remove 0122aeff301eb853be27c15e7eda127087534da89a4b617f1d4d5293333a39d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:10:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:59.849 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d343d32e-7f3f-4635-86ed-2161b4eb61b5]: (4, ('Tue Oct 14 09:10:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76 (0122aeff301eb853be27c15e7eda127087534da89a4b617f1d4d5293333a39d9)\n0122aeff301eb853be27c15e7eda127087534da89a4b617f1d4d5293333a39d9\nTue Oct 14 09:10:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76 (0122aeff301eb853be27c15e7eda127087534da89a4b617f1d4d5293333a39d9)\n0122aeff301eb853be27c15e7eda127087534da89a4b617f1d4d5293333a39d9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:59.851 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[77dfc97b-8f43-4def-a41e-2d1e1dab1fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:59.852 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fa36ac5-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:59 compute-0 kernel: tap2fa36ac5-b0: left promiscuous mode
Oct 14 09:10:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 88 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 279 op/s
Oct 14 09:10:59 compute-0 nova_compute[259627]: 2025-10-14 09:10:59.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:10:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:59.891 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6ae4f5-0c21-431e-bc14-d020abe427d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:59.915 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dfa37e45-4f9e-4876-8a47-779e3e00d7fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:59.916 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[09fe6260-8c77-4179-9af0-311cb9d790e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:59.938 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5439e242-7b48-4e6a-9807-5d091376fd30]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683492, 'reachable_time': 21422, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341706, 'error': None, 'target': 'ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:10:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d2fa36ac5\x2dbbba\x2d4aa0\x2d91b7\x2dc3ffc6dfba76.mount: Deactivated successfully.
Oct 14 09:10:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:59.943 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2fa36ac5-bbba-4aa0-91b7-c3ffc6dfba76 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:10:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:10:59.943 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c246eb-6d09-491d-9224-e4799c222a19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:00 compute-0 nova_compute[259627]: 2025-10-14 09:11:00.161 2 INFO nova.virt.libvirt.driver [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Deleting instance files /var/lib/nova/instances/b21251cd-e46b-49fa-aebe-6250e4c587a0_del
Oct 14 09:11:00 compute-0 nova_compute[259627]: 2025-10-14 09:11:00.162 2 INFO nova.virt.libvirt.driver [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Deletion of /var/lib/nova/instances/b21251cd-e46b-49fa-aebe-6250e4c587a0_del complete
Oct 14 09:11:00 compute-0 nova_compute[259627]: 2025-10-14 09:11:00.229 2 INFO nova.compute.manager [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 14 09:11:00 compute-0 nova_compute[259627]: 2025-10-14 09:11:00.230 2 DEBUG oslo.service.loopingcall [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:11:00 compute-0 nova_compute[259627]: 2025-10-14 09:11:00.230 2 DEBUG nova.compute.manager [-] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:11:00 compute-0 nova_compute[259627]: 2025-10-14 09:11:00.231 2 DEBUG nova.network.neutron [-] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:11:00 compute-0 nova_compute[259627]: 2025-10-14 09:11:00.550 2 DEBUG nova.compute.manager [req-070d1604-706e-4fce-9526-02b458639eb3 req-1f8adbb0-f67e-48f9-b94d-1910ef08b908 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Received event network-vif-unplugged-1e22b95e-0347-486e-98fa-fa15c7eb422d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:00 compute-0 nova_compute[259627]: 2025-10-14 09:11:00.551 2 DEBUG oslo_concurrency.lockutils [req-070d1604-706e-4fce-9526-02b458639eb3 req-1f8adbb0-f67e-48f9-b94d-1910ef08b908 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b21251cd-e46b-49fa-aebe-6250e4c587a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:00 compute-0 nova_compute[259627]: 2025-10-14 09:11:00.552 2 DEBUG oslo_concurrency.lockutils [req-070d1604-706e-4fce-9526-02b458639eb3 req-1f8adbb0-f67e-48f9-b94d-1910ef08b908 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b21251cd-e46b-49fa-aebe-6250e4c587a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:00 compute-0 nova_compute[259627]: 2025-10-14 09:11:00.553 2 DEBUG oslo_concurrency.lockutils [req-070d1604-706e-4fce-9526-02b458639eb3 req-1f8adbb0-f67e-48f9-b94d-1910ef08b908 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b21251cd-e46b-49fa-aebe-6250e4c587a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:00 compute-0 nova_compute[259627]: 2025-10-14 09:11:00.554 2 DEBUG nova.compute.manager [req-070d1604-706e-4fce-9526-02b458639eb3 req-1f8adbb0-f67e-48f9-b94d-1910ef08b908 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] No waiting events found dispatching network-vif-unplugged-1e22b95e-0347-486e-98fa-fa15c7eb422d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:00 compute-0 nova_compute[259627]: 2025-10-14 09:11:00.554 2 DEBUG nova.compute.manager [req-070d1604-706e-4fce-9526-02b458639eb3 req-1f8adbb0-f67e-48f9-b94d-1910ef08b908 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Received event network-vif-unplugged-1e22b95e-0347-486e-98fa-fa15c7eb422d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:11:01 compute-0 ceph-mon[74249]: pgmap v1696: 305 pgs: 305 active+clean; 88 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 279 op/s
Oct 14 09:11:01 compute-0 nova_compute[259627]: 2025-10-14 09:11:01.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:01 compute-0 nova_compute[259627]: 2025-10-14 09:11:01.609 2 DEBUG nova.network.neutron [-] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:01 compute-0 nova_compute[259627]: 2025-10-14 09:11:01.646 2 INFO nova.compute.manager [-] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Took 1.42 seconds to deallocate network for instance.
Oct 14 09:11:01 compute-0 nova_compute[259627]: 2025-10-14 09:11:01.704 2 DEBUG oslo_concurrency.lockutils [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:01 compute-0 nova_compute[259627]: 2025-10-14 09:11:01.705 2 DEBUG oslo_concurrency.lockutils [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:01 compute-0 nova_compute[259627]: 2025-10-14 09:11:01.731 2 DEBUG nova.compute.manager [req-cd9031f4-3a94-4ae3-84b2-5d6918fbbdbc req-bd426dc5-0893-40a9-be65-1c505cfcdc2a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Received event network-vif-deleted-1e22b95e-0347-486e-98fa-fa15c7eb422d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:01 compute-0 nova_compute[259627]: 2025-10-14 09:11:01.779 2 DEBUG oslo_concurrency.processutils [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 41 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 1.8 MiB/s wr, 371 op/s
Oct 14 09:11:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:11:02 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/8297160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:02 compute-0 nova_compute[259627]: 2025-10-14 09:11:02.271 2 DEBUG oslo_concurrency.processutils [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:02 compute-0 nova_compute[259627]: 2025-10-14 09:11:02.276 2 DEBUG nova.compute.provider_tree [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:11:02 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/8297160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:02 compute-0 nova_compute[259627]: 2025-10-14 09:11:02.304 2 DEBUG nova.scheduler.client.report [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:11:02 compute-0 nova_compute[259627]: 2025-10-14 09:11:02.340 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433047.3388267, e503e351-20ec-43fb-b5f8-7af68dff5bcd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:02 compute-0 nova_compute[259627]: 2025-10-14 09:11:02.341 2 INFO nova.compute.manager [-] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] VM Stopped (Lifecycle Event)
Oct 14 09:11:02 compute-0 nova_compute[259627]: 2025-10-14 09:11:02.355 2 DEBUG oslo_concurrency.lockutils [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:02 compute-0 nova_compute[259627]: 2025-10-14 09:11:02.377 2 DEBUG nova.compute.manager [None req-f5714eb8-dac2-4e39-b802-c3674409425e - - - - - -] [instance: e503e351-20ec-43fb-b5f8-7af68dff5bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:02 compute-0 nova_compute[259627]: 2025-10-14 09:11:02.428 2 INFO nova.scheduler.client.report [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Deleted allocations for instance b21251cd-e46b-49fa-aebe-6250e4c587a0
Oct 14 09:11:02 compute-0 nova_compute[259627]: 2025-10-14 09:11:02.537 2 DEBUG oslo_concurrency.lockutils [None req-95f48f8b-bf5f-4dac-a0c3-67537bc14d9f 7dc57afe994e4351943a7d3a7213a8b3 02c85c54d57243a8821bac819c665be9 - - default default] Lock "b21251cd-e46b-49fa-aebe-6250e4c587a0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:02 compute-0 nova_compute[259627]: 2025-10-14 09:11:02.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:11:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:11:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:11:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:11:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:11:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:11:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:11:02 compute-0 nova_compute[259627]: 2025-10-14 09:11:02.790 2 DEBUG nova.compute.manager [req-b82ef661-654c-4016-9496-9609f4c89e91 req-c6d17c6f-761a-45a6-8f55-911a4d3be556 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Received event network-vif-plugged-1e22b95e-0347-486e-98fa-fa15c7eb422d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:02 compute-0 nova_compute[259627]: 2025-10-14 09:11:02.791 2 DEBUG oslo_concurrency.lockutils [req-b82ef661-654c-4016-9496-9609f4c89e91 req-c6d17c6f-761a-45a6-8f55-911a4d3be556 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b21251cd-e46b-49fa-aebe-6250e4c587a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:02 compute-0 nova_compute[259627]: 2025-10-14 09:11:02.792 2 DEBUG oslo_concurrency.lockutils [req-b82ef661-654c-4016-9496-9609f4c89e91 req-c6d17c6f-761a-45a6-8f55-911a4d3be556 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b21251cd-e46b-49fa-aebe-6250e4c587a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:02 compute-0 nova_compute[259627]: 2025-10-14 09:11:02.792 2 DEBUG oslo_concurrency.lockutils [req-b82ef661-654c-4016-9496-9609f4c89e91 req-c6d17c6f-761a-45a6-8f55-911a4d3be556 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b21251cd-e46b-49fa-aebe-6250e4c587a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:02 compute-0 nova_compute[259627]: 2025-10-14 09:11:02.793 2 DEBUG nova.compute.manager [req-b82ef661-654c-4016-9496-9609f4c89e91 req-c6d17c6f-761a-45a6-8f55-911a4d3be556 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] No waiting events found dispatching network-vif-plugged-1e22b95e-0347-486e-98fa-fa15c7eb422d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:02 compute-0 nova_compute[259627]: 2025-10-14 09:11:02.793 2 WARNING nova.compute.manager [req-b82ef661-654c-4016-9496-9609f4c89e91 req-c6d17c6f-761a-45a6-8f55-911a4d3be556 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Received unexpected event network-vif-plugged-1e22b95e-0347-486e-98fa-fa15c7eb422d for instance with vm_state deleted and task_state None.
Oct 14 09:11:03 compute-0 ceph-mon[74249]: pgmap v1697: 305 pgs: 305 active+clean; 41 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 1.8 MiB/s wr, 371 op/s
Oct 14 09:11:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 41 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Oct 14 09:11:04 compute-0 nova_compute[259627]: 2025-10-14 09:11:04.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:05 compute-0 ceph-mon[74249]: pgmap v1698: 305 pgs: 305 active+clean; 41 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Oct 14 09:11:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:11:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4219576076' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:11:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:11:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4219576076' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:11:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 41 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 227 op/s
Oct 14 09:11:05 compute-0 nova_compute[259627]: 2025-10-14 09:11:05.990 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433050.989361, 16c93e17-00f2-4710-a0e4-83eb60430088 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:05 compute-0 nova_compute[259627]: 2025-10-14 09:11:05.990 2 INFO nova.compute.manager [-] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] VM Stopped (Lifecycle Event)
Oct 14 09:11:06 compute-0 nova_compute[259627]: 2025-10-14 09:11:06.017 2 DEBUG nova.compute.manager [None req-f0c3520b-3048-428e-b5ee-074df5c27c23 - - - - - -] [instance: 16c93e17-00f2-4710-a0e4-83eb60430088] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/4219576076' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:11:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/4219576076' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:11:06 compute-0 nova_compute[259627]: 2025-10-14 09:11:06.395 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433051.3949115, 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:06 compute-0 nova_compute[259627]: 2025-10-14 09:11:06.396 2 INFO nova.compute.manager [-] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] VM Stopped (Lifecycle Event)
Oct 14 09:11:06 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 09:11:06 compute-0 nova_compute[259627]: 2025-10-14 09:11:06.413 2 DEBUG nova.compute.manager [None req-67dd76dc-ae4e-4326-b352-a99582875172 - - - - - -] [instance: 654ea66c-b372-4b53-b1aa-1ddc9bb3a9a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:07.027 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:07.028 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:07.028 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:07 compute-0 ceph-mon[74249]: pgmap v1699: 305 pgs: 305 active+clean; 41 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 227 op/s
Oct 14 09:11:07 compute-0 nova_compute[259627]: 2025-10-14 09:11:07.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:11:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 41 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 14 09:11:09 compute-0 ceph-mon[74249]: pgmap v1700: 305 pgs: 305 active+clean; 41 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 14 09:11:09 compute-0 nova_compute[259627]: 2025-10-14 09:11:09.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 41 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 14 09:11:11 compute-0 ceph-mon[74249]: pgmap v1701: 305 pgs: 305 active+clean; 41 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 14 09:11:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 41 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 14 09:11:12 compute-0 nova_compute[259627]: 2025-10-14 09:11:12.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:11:13 compute-0 ceph-mon[74249]: pgmap v1702: 305 pgs: 305 active+clean; 41 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 14 09:11:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 41 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Oct 14 09:11:14 compute-0 nova_compute[259627]: 2025-10-14 09:11:14.647 2 DEBUG oslo_concurrency.lockutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "290980d2-08b4-4029-a1c3-becd3457a410" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:14 compute-0 nova_compute[259627]: 2025-10-14 09:11:14.648 2 DEBUG oslo_concurrency.lockutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "290980d2-08b4-4029-a1c3-becd3457a410" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:14 compute-0 nova_compute[259627]: 2025-10-14 09:11:14.672 2 DEBUG nova.compute.manager [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:11:14 compute-0 podman[341731]: 2025-10-14 09:11:14.687567967 +0000 UTC m=+0.098292092 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:11:14 compute-0 nova_compute[259627]: 2025-10-14 09:11:14.693 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433059.6899798, b21251cd-e46b-49fa-aebe-6250e4c587a0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:14 compute-0 nova_compute[259627]: 2025-10-14 09:11:14.693 2 INFO nova.compute.manager [-] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] VM Stopped (Lifecycle Event)
Oct 14 09:11:14 compute-0 podman[341732]: 2025-10-14 09:11:14.701526131 +0000 UTC m=+0.099060661 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 09:11:14 compute-0 nova_compute[259627]: 2025-10-14 09:11:14.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:14 compute-0 nova_compute[259627]: 2025-10-14 09:11:14.733 2 DEBUG nova.compute.manager [None req-5a119abd-e6d4-4512-95d7-fa66e094ce8d - - - - - -] [instance: b21251cd-e46b-49fa-aebe-6250e4c587a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:14 compute-0 nova_compute[259627]: 2025-10-14 09:11:14.789 2 DEBUG oslo_concurrency.lockutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:14 compute-0 nova_compute[259627]: 2025-10-14 09:11:14.790 2 DEBUG oslo_concurrency.lockutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:14 compute-0 nova_compute[259627]: 2025-10-14 09:11:14.799 2 DEBUG nova.virt.hardware [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:11:14 compute-0 nova_compute[259627]: 2025-10-14 09:11:14.799 2 INFO nova.compute.claims [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:11:14 compute-0 nova_compute[259627]: 2025-10-14 09:11:14.950 2 DEBUG oslo_concurrency.processutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:15 compute-0 ceph-mon[74249]: pgmap v1703: 305 pgs: 305 active+clean; 41 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Oct 14 09:11:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:11:15 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1661997175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.436 2 DEBUG oslo_concurrency.processutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.446 2 DEBUG nova.compute.provider_tree [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.467 2 DEBUG nova.scheduler.client.report [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.506 2 DEBUG oslo_concurrency.lockutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.508 2 DEBUG nova.compute.manager [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.560 2 DEBUG nova.compute.manager [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.561 2 DEBUG nova.network.neutron [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.586 2 INFO nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.608 2 DEBUG nova.compute.manager [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.717 2 DEBUG nova.compute.manager [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.719 2 DEBUG nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.719 2 INFO nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Creating image(s)
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.754 2 DEBUG nova.storage.rbd_utils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 290980d2-08b4-4029-a1c3-becd3457a410_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.793 2 DEBUG nova.storage.rbd_utils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 290980d2-08b4-4029-a1c3-becd3457a410_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.829 2 DEBUG nova.storage.rbd_utils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 290980d2-08b4-4029-a1c3-becd3457a410_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.834 2 DEBUG oslo_concurrency.processutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 41 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.893 2 DEBUG nova.policy [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e992bcb79c4946a8985e3df25eb216ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d24993a343a425dbddac7e32be0c86b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.941 2 DEBUG oslo_concurrency.processutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.943 2 DEBUG oslo_concurrency.lockutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.944 2 DEBUG oslo_concurrency.lockutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.944 2 DEBUG oslo_concurrency.lockutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.979 2 DEBUG nova.storage.rbd_utils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 290980d2-08b4-4029-a1c3-becd3457a410_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:15 compute-0 nova_compute[259627]: 2025-10-14 09:11:15.984 2 DEBUG oslo_concurrency.processutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 290980d2-08b4-4029-a1c3-becd3457a410_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:16 compute-0 nova_compute[259627]: 2025-10-14 09:11:16.300 2 DEBUG oslo_concurrency.processutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 290980d2-08b4-4029-a1c3-becd3457a410_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:16 compute-0 nova_compute[259627]: 2025-10-14 09:11:16.352 2 DEBUG nova.storage.rbd_utils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] resizing rbd image 290980d2-08b4-4029-a1c3-becd3457a410_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:11:16 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1661997175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:16 compute-0 nova_compute[259627]: 2025-10-14 09:11:16.450 2 DEBUG nova.objects.instance [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'migration_context' on Instance uuid 290980d2-08b4-4029-a1c3-becd3457a410 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:16 compute-0 nova_compute[259627]: 2025-10-14 09:11:16.466 2 DEBUG nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:11:16 compute-0 nova_compute[259627]: 2025-10-14 09:11:16.467 2 DEBUG nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Ensure instance console log exists: /var/lib/nova/instances/290980d2-08b4-4029-a1c3-becd3457a410/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:11:16 compute-0 nova_compute[259627]: 2025-10-14 09:11:16.467 2 DEBUG oslo_concurrency.lockutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:16 compute-0 nova_compute[259627]: 2025-10-14 09:11:16.467 2 DEBUG oslo_concurrency.lockutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:16 compute-0 nova_compute[259627]: 2025-10-14 09:11:16.467 2 DEBUG oslo_concurrency.lockutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:17 compute-0 ceph-mon[74249]: pgmap v1704: 305 pgs: 305 active+clean; 41 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Oct 14 09:11:17 compute-0 nova_compute[259627]: 2025-10-14 09:11:17.570 2 DEBUG nova.network.neutron [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Successfully created port: 106349ae-cfaa-43ec-9bda-16f36a6ac3d6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:11:17 compute-0 nova_compute[259627]: 2025-10-14 09:11:17.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:11:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 41 MiB data, 612 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:11:19 compute-0 ceph-mon[74249]: pgmap v1705: 305 pgs: 305 active+clean; 41 MiB data, 612 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:11:19 compute-0 nova_compute[259627]: 2025-10-14 09:11:19.653 2 DEBUG nova.network.neutron [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Successfully updated port: 106349ae-cfaa-43ec-9bda-16f36a6ac3d6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:11:19 compute-0 nova_compute[259627]: 2025-10-14 09:11:19.670 2 DEBUG oslo_concurrency.lockutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "refresh_cache-290980d2-08b4-4029-a1c3-becd3457a410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:11:19 compute-0 nova_compute[259627]: 2025-10-14 09:11:19.671 2 DEBUG oslo_concurrency.lockutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquired lock "refresh_cache-290980d2-08b4-4029-a1c3-becd3457a410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:11:19 compute-0 nova_compute[259627]: 2025-10-14 09:11:19.671 2 DEBUG nova.network.neutron [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:11:19 compute-0 nova_compute[259627]: 2025-10-14 09:11:19.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 41 MiB data, 612 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:11:20 compute-0 nova_compute[259627]: 2025-10-14 09:11:20.035 2 DEBUG nova.network.neutron [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:11:21 compute-0 ceph-mon[74249]: pgmap v1706: 305 pgs: 305 active+clean; 41 MiB data, 612 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:11:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 88 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.617 2 DEBUG nova.compute.manager [req-24aab028-4207-471f-ad3d-92acc1dc9aa8 req-6f5eaabf-f77d-44ab-a11b-4e523d4a7522 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Received event network-changed-106349ae-cfaa-43ec-9bda-16f36a6ac3d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.617 2 DEBUG nova.compute.manager [req-24aab028-4207-471f-ad3d-92acc1dc9aa8 req-6f5eaabf-f77d-44ab-a11b-4e523d4a7522 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Refreshing instance network info cache due to event network-changed-106349ae-cfaa-43ec-9bda-16f36a6ac3d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.618 2 DEBUG oslo_concurrency.lockutils [req-24aab028-4207-471f-ad3d-92acc1dc9aa8 req-6f5eaabf-f77d-44ab-a11b-4e523d4a7522 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-290980d2-08b4-4029-a1c3-becd3457a410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.753 2 DEBUG nova.network.neutron [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Updating instance_info_cache with network_info: [{"id": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "address": "fa:16:3e:07:23:80", "network": {"id": "b2389730-ae66-46f4-aea3-6a67311703e9", "bridge": "br-int", "label": "tempest-network-smoke--924972702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap106349ae-cf", "ovs_interfaceid": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.770 2 DEBUG oslo_concurrency.lockutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Releasing lock "refresh_cache-290980d2-08b4-4029-a1c3-becd3457a410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.770 2 DEBUG nova.compute.manager [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Instance network_info: |[{"id": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "address": "fa:16:3e:07:23:80", "network": {"id": "b2389730-ae66-46f4-aea3-6a67311703e9", "bridge": "br-int", "label": "tempest-network-smoke--924972702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap106349ae-cf", "ovs_interfaceid": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.771 2 DEBUG oslo_concurrency.lockutils [req-24aab028-4207-471f-ad3d-92acc1dc9aa8 req-6f5eaabf-f77d-44ab-a11b-4e523d4a7522 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-290980d2-08b4-4029-a1c3-becd3457a410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.771 2 DEBUG nova.network.neutron [req-24aab028-4207-471f-ad3d-92acc1dc9aa8 req-6f5eaabf-f77d-44ab-a11b-4e523d4a7522 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Refreshing network info cache for port 106349ae-cfaa-43ec-9bda-16f36a6ac3d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.774 2 DEBUG nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Start _get_guest_xml network_info=[{"id": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "address": "fa:16:3e:07:23:80", "network": {"id": "b2389730-ae66-46f4-aea3-6a67311703e9", "bridge": "br-int", "label": "tempest-network-smoke--924972702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap106349ae-cf", "ovs_interfaceid": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.779 2 WARNING nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.788 2 DEBUG nova.virt.libvirt.host [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.789 2 DEBUG nova.virt.libvirt.host [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.794 2 DEBUG nova.virt.libvirt.host [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.794 2 DEBUG nova.virt.libvirt.host [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.795 2 DEBUG nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.795 2 DEBUG nova.virt.hardware [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.795 2 DEBUG nova.virt.hardware [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.796 2 DEBUG nova.virt.hardware [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.796 2 DEBUG nova.virt.hardware [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.796 2 DEBUG nova.virt.hardware [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.796 2 DEBUG nova.virt.hardware [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.797 2 DEBUG nova.virt.hardware [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.797 2 DEBUG nova.virt.hardware [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.797 2 DEBUG nova.virt.hardware [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.797 2 DEBUG nova.virt.hardware [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.797 2 DEBUG nova.virt.hardware [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.800 2 DEBUG oslo_concurrency.processutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.853 2 DEBUG oslo_concurrency.lockutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.853 2 DEBUG oslo_concurrency.lockutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:22 compute-0 nova_compute[259627]: 2025-10-14 09:11:22.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.018 2 DEBUG nova.compute.manager [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.100 2 DEBUG oslo_concurrency.lockutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.101 2 DEBUG oslo_concurrency.lockutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.108 2 DEBUG nova.virt.hardware [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.109 2 INFO nova.compute.claims [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.261 2 DEBUG oslo_concurrency.processutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:11:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/155789488' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.314 2 DEBUG oslo_concurrency.processutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.351 2 DEBUG nova.storage.rbd_utils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 290980d2-08b4-4029-a1c3-becd3457a410_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.356 2 DEBUG oslo_concurrency.processutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:23 compute-0 ceph-mon[74249]: pgmap v1707: 305 pgs: 305 active+clean; 88 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:11:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/155789488' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:11:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1247305526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.759 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Acquiring lock "d1c0470c-5f74-43c3-a206-07147fa01d5e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.760 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lock "d1c0470c-5f74-43c3-a206-07147fa01d5e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.767 2 DEBUG oslo_concurrency.processutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.780 2 DEBUG nova.compute.provider_tree [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.794 2 DEBUG nova.compute.manager [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.806 2 DEBUG nova.scheduler.client.report [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:11:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:11:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1653402711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.830 2 DEBUG oslo_concurrency.processutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.832 2 DEBUG nova.virt.libvirt.vif [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2139849378',display_name='tempest-TestNetworkAdvancedServerOps-server-2139849378',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2139849378',id=79,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEhJsgZxd+OWpD7zaKfBBpwRrzG5y2svlcIl3JOB/vCmtEivnvjVbVGSOAYp3d5tHvQ3QI5rFGEYwlQUr4eteTREKUKNxmmu7QDjX9h1ezH3YG5N/CgGPytwUcQasRNPJg==',key_name='tempest-TestNetworkAdvancedServerOps-1442263360',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-se9y79xx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:11:15Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=290980d2-08b4-4029-a1c3-becd3457a410,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "address": "fa:16:3e:07:23:80", "network": {"id": "b2389730-ae66-46f4-aea3-6a67311703e9", "bridge": "br-int", "label": "tempest-network-smoke--924972702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap106349ae-cf", "ovs_interfaceid": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.833 2 DEBUG nova.network.os_vif_util [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "address": "fa:16:3e:07:23:80", "network": {"id": "b2389730-ae66-46f4-aea3-6a67311703e9", "bridge": "br-int", "label": "tempest-network-smoke--924972702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap106349ae-cf", "ovs_interfaceid": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.835 2 DEBUG nova.network.os_vif_util [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:23:80,bridge_name='br-int',has_traffic_filtering=True,id=106349ae-cfaa-43ec-9bda-16f36a6ac3d6,network=Network(b2389730-ae66-46f4-aea3-6a67311703e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap106349ae-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.837 2 DEBUG nova.objects.instance [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_devices' on Instance uuid 290980d2-08b4-4029-a1c3-becd3457a410 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.840 2 DEBUG oslo_concurrency.lockutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.841 2 DEBUG nova.compute.manager [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.889 2 DEBUG nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:11:23 compute-0 nova_compute[259627]:   <uuid>290980d2-08b4-4029-a1c3-becd3457a410</uuid>
Oct 14 09:11:23 compute-0 nova_compute[259627]:   <name>instance-0000004f</name>
Oct 14 09:11:23 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:11:23 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:11:23 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-2139849378</nova:name>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:11:22</nova:creationTime>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:11:23 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:11:23 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:11:23 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:11:23 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:11:23 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:11:23 compute-0 nova_compute[259627]:         <nova:user uuid="e992bcb79c4946a8985e3df25eb216ca">tempest-TestNetworkAdvancedServerOps-94788416-project-member</nova:user>
Oct 14 09:11:23 compute-0 nova_compute[259627]:         <nova:project uuid="2d24993a343a425dbddac7e32be0c86b">tempest-TestNetworkAdvancedServerOps-94788416</nova:project>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:11:23 compute-0 nova_compute[259627]:         <nova:port uuid="106349ae-cfaa-43ec-9bda-16f36a6ac3d6">
Oct 14 09:11:23 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:11:23 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:11:23 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <system>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <entry name="serial">290980d2-08b4-4029-a1c3-becd3457a410</entry>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <entry name="uuid">290980d2-08b4-4029-a1c3-becd3457a410</entry>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     </system>
Oct 14 09:11:23 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:11:23 compute-0 nova_compute[259627]:   <os>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:   </os>
Oct 14 09:11:23 compute-0 nova_compute[259627]:   <features>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:   </features>
Oct 14 09:11:23 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:11:23 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:11:23 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/290980d2-08b4-4029-a1c3-becd3457a410_disk">
Oct 14 09:11:23 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       </source>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:11:23 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/290980d2-08b4-4029-a1c3-becd3457a410_disk.config">
Oct 14 09:11:23 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       </source>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:11:23 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:07:23:80"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <target dev="tap106349ae-cf"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/290980d2-08b4-4029-a1c3-becd3457a410/console.log" append="off"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <video>
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     </video>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:11:23 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:11:23 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:11:23 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:11:23 compute-0 nova_compute[259627]: </domain>
Oct 14 09:11:23 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:11:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 88 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.891 2 DEBUG nova.compute.manager [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Preparing to wait for external event network-vif-plugged-106349ae-cfaa-43ec-9bda-16f36a6ac3d6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.892 2 DEBUG oslo_concurrency.lockutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "290980d2-08b4-4029-a1c3-becd3457a410-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.892 2 DEBUG oslo_concurrency.lockutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "290980d2-08b4-4029-a1c3-becd3457a410-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.893 2 DEBUG oslo_concurrency.lockutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "290980d2-08b4-4029-a1c3-becd3457a410-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.893 2 DEBUG nova.virt.libvirt.vif [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2139849378',display_name='tempest-TestNetworkAdvancedServerOps-server-2139849378',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2139849378',id=79,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEhJsgZxd+OWpD7zaKfBBpwRrzG5y2svlcIl3JOB/vCmtEivnvjVbVGSOAYp3d5tHvQ3QI5rFGEYwlQUr4eteTREKUKNxmmu7QDjX9h1ezH3YG5N/CgGPytwUcQasRNPJg==',key_name='tempest-TestNetworkAdvancedServerOps-1442263360',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-se9y79xx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:11:15Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=290980d2-08b4-4029-a1c3-becd3457a410,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "address": "fa:16:3e:07:23:80", "network": {"id": "b2389730-ae66-46f4-aea3-6a67311703e9", "bridge": "br-int", "label": "tempest-network-smoke--924972702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap106349ae-cf", "ovs_interfaceid": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.894 2 DEBUG nova.network.os_vif_util [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "address": "fa:16:3e:07:23:80", "network": {"id": "b2389730-ae66-46f4-aea3-6a67311703e9", "bridge": "br-int", "label": "tempest-network-smoke--924972702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap106349ae-cf", "ovs_interfaceid": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.895 2 DEBUG nova.network.os_vif_util [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:23:80,bridge_name='br-int',has_traffic_filtering=True,id=106349ae-cfaa-43ec-9bda-16f36a6ac3d6,network=Network(b2389730-ae66-46f4-aea3-6a67311703e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap106349ae-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.895 2 DEBUG os_vif [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:23:80,bridge_name='br-int',has_traffic_filtering=True,id=106349ae-cfaa-43ec-9bda-16f36a6ac3d6,network=Network(b2389730-ae66-46f4-aea3-6a67311703e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap106349ae-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.896 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.897 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.902 2 DEBUG nova.compute.manager [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.902 2 DEBUG nova.network.neutron [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.907 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap106349ae-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.908 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap106349ae-cf, col_values=(('external_ids', {'iface-id': '106349ae-cfaa-43ec-9bda-16f36a6ac3d6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:23:80', 'vm-uuid': '290980d2-08b4-4029-a1c3-becd3457a410'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:23 compute-0 NetworkManager[44885]: <info>  [1760433083.9136] manager: (tap106349ae-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.918 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.919 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.923 2 INFO os_vif [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:23:80,bridge_name='br-int',has_traffic_filtering=True,id=106349ae-cfaa-43ec-9bda-16f36a6ac3d6,network=Network(b2389730-ae66-46f4-aea3-6a67311703e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap106349ae-cf')
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.925 2 INFO nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.929 2 DEBUG nova.virt.hardware [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.929 2 INFO nova.compute.claims [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.946 2 DEBUG nova.compute.manager [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:11:23 compute-0 nova_compute[259627]: 2025-10-14 09:11:23.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.021 2 DEBUG nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.021 2 DEBUG nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.021 2 DEBUG nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No VIF found with MAC fa:16:3e:07:23:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.028 2 INFO nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Using config drive
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.062 2 DEBUG nova.storage.rbd_utils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 290980d2-08b4-4029-a1c3-becd3457a410_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.077 2 DEBUG nova.compute.manager [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.078 2 DEBUG nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.078 2 INFO nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Creating image(s)
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.107 2 DEBUG nova.storage.rbd_utils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 70e3c250-cd38-4718-9a7f-0fbf7bf471fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.132 2 DEBUG nova.storage.rbd_utils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 70e3c250-cd38-4718-9a7f-0fbf7bf471fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.158 2 DEBUG nova.storage.rbd_utils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 70e3c250-cd38-4718-9a7f-0fbf7bf471fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.162 2 DEBUG oslo_concurrency.processutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.263 2 DEBUG oslo_concurrency.processutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.264 2 DEBUG oslo_concurrency.lockutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.265 2 DEBUG oslo_concurrency.lockutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.265 2 DEBUG oslo_concurrency.lockutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.290 2 DEBUG nova.storage.rbd_utils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 70e3c250-cd38-4718-9a7f-0fbf7bf471fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.296 2 DEBUG oslo_concurrency.processutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 70e3c250-cd38-4718-9a7f-0fbf7bf471fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.337 2 DEBUG oslo_concurrency.processutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.379 2 DEBUG nova.policy [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aa1425f7fdfc4218bdabfe2458cd1c60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f10ae705d9a34608a922683282b952b5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:11:24 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1247305526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:24 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1653402711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.571 2 DEBUG oslo_concurrency.processutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 70e3c250-cd38-4718-9a7f-0fbf7bf471fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.677 2 DEBUG nova.storage.rbd_utils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] resizing rbd image 70e3c250-cd38-4718-9a7f-0fbf7bf471fe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.717 2 INFO nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Creating config drive at /var/lib/nova/instances/290980d2-08b4-4029-a1c3-becd3457a410/disk.config
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.727 2 DEBUG oslo_concurrency.processutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/290980d2-08b4-4029-a1c3-becd3457a410/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppzs7bpb6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.762 2 DEBUG nova.network.neutron [req-24aab028-4207-471f-ad3d-92acc1dc9aa8 req-6f5eaabf-f77d-44ab-a11b-4e523d4a7522 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Updated VIF entry in instance network info cache for port 106349ae-cfaa-43ec-9bda-16f36a6ac3d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.762 2 DEBUG nova.network.neutron [req-24aab028-4207-471f-ad3d-92acc1dc9aa8 req-6f5eaabf-f77d-44ab-a11b-4e523d4a7522 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Updating instance_info_cache with network_info: [{"id": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "address": "fa:16:3e:07:23:80", "network": {"id": "b2389730-ae66-46f4-aea3-6a67311703e9", "bridge": "br-int", "label": "tempest-network-smoke--924972702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap106349ae-cf", "ovs_interfaceid": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.777 2 DEBUG oslo_concurrency.lockutils [req-24aab028-4207-471f-ad3d-92acc1dc9aa8 req-6f5eaabf-f77d-44ab-a11b-4e523d4a7522 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-290980d2-08b4-4029-a1c3-becd3457a410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.818 2 DEBUG nova.objects.instance [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'migration_context' on Instance uuid 70e3c250-cd38-4718-9a7f-0fbf7bf471fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:11:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3289197509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.838 2 DEBUG oslo_concurrency.processutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.840 2 DEBUG nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.841 2 DEBUG nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Ensure instance console log exists: /var/lib/nova/instances/70e3c250-cd38-4718-9a7f-0fbf7bf471fe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.841 2 DEBUG oslo_concurrency.lockutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.841 2 DEBUG oslo_concurrency.lockutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.842 2 DEBUG oslo_concurrency.lockutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.846 2 DEBUG nova.compute.provider_tree [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.865 2 DEBUG nova.scheduler.client.report [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.868 2 DEBUG oslo_concurrency.processutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/290980d2-08b4-4029-a1c3-becd3457a410/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppzs7bpb6" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.888 2 DEBUG nova.storage.rbd_utils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 290980d2-08b4-4029-a1c3-becd3457a410_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.891 2 DEBUG oslo_concurrency.processutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/290980d2-08b4-4029-a1c3-becd3457a410/disk.config 290980d2-08b4-4029-a1c3-becd3457a410_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.943 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.975 2 DEBUG nova.network.neutron [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Successfully created port: f027d20e-665b-4bd0-836c-7e8edb2b6bf7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.981 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.993 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Acquiring lock "71690f51-95cf-4ec1-a234-d2c54b68a0ca" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:24 compute-0 nova_compute[259627]: 2025-10-14 09:11:24.993 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lock "71690f51-95cf-4ec1-a234-d2c54b68a0ca" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.002 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lock "71690f51-95cf-4ec1-a234-d2c54b68a0ca" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.003 2 DEBUG nova.compute.manager [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.083 2 DEBUG nova.compute.manager [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.083 2 DEBUG nova.network.neutron [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.105 2 DEBUG oslo_concurrency.processutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/290980d2-08b4-4029-a1c3-becd3457a410/disk.config 290980d2-08b4-4029-a1c3-becd3457a410_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.106 2 INFO nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Deleting local config drive /var/lib/nova/instances/290980d2-08b4-4029-a1c3-becd3457a410/disk.config because it was imported into RBD.
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.147 2 INFO nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:11:25 compute-0 kernel: tap106349ae-cf: entered promiscuous mode
Oct 14 09:11:25 compute-0 ovn_controller[152662]: 2025-10-14T09:11:25Z|00815|binding|INFO|Claiming lport 106349ae-cfaa-43ec-9bda-16f36a6ac3d6 for this chassis.
Oct 14 09:11:25 compute-0 ovn_controller[152662]: 2025-10-14T09:11:25Z|00816|binding|INFO|106349ae-cfaa-43ec-9bda-16f36a6ac3d6: Claiming fa:16:3e:07:23:80 10.100.0.11
Oct 14 09:11:25 compute-0 NetworkManager[44885]: <info>  [1760433085.1762] manager: (tap106349ae-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/339)
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.179 2 DEBUG nova.compute.manager [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.186 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:23:80 10.100.0.11'], port_security=['fa:16:3e:07:23:80 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '290980d2-08b4-4029-a1c3-becd3457a410', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2389730-ae66-46f4-aea3-6a67311703e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fef16ebb-8e3c-4cd9-b046-59c4109d4508', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45135636-9d1b-4fd5-951f-3d4d3d97b1e9, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=106349ae-cfaa-43ec-9bda-16f36a6ac3d6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.187 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 106349ae-cfaa-43ec-9bda-16f36a6ac3d6 in datapath b2389730-ae66-46f4-aea3-6a67311703e9 bound to our chassis
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.189 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b2389730-ae66-46f4-aea3-6a67311703e9
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.200 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f36399-697f-41a5-a1e1-9637c3595383]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.201 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb2389730-a1 in ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.204 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb2389730-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.204 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3e3641-4d1e-4b8b-947d-80554e73f2e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.205 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0b8496dd-7035-41b8-b4ee-8c60e068e4a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:25 compute-0 systemd-machined[214636]: New machine qemu-99-instance-0000004f.
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.225 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[4c12eb63-3c63-48b1-a979-0e9c68b3ce7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:25 compute-0 systemd[1]: Started Virtual Machine qemu-99-instance-0000004f.
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.252 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8bcabd1e-cf7a-4d84-a659-d4fbb43ea2b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:25 compute-0 ovn_controller[152662]: 2025-10-14T09:11:25Z|00817|binding|INFO|Setting lport 106349ae-cfaa-43ec-9bda-16f36a6ac3d6 ovn-installed in OVS
Oct 14 09:11:25 compute-0 ovn_controller[152662]: 2025-10-14T09:11:25Z|00818|binding|INFO|Setting lport 106349ae-cfaa-43ec-9bda-16f36a6ac3d6 up in Southbound
Oct 14 09:11:25 compute-0 systemd-udevd[342309]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.282 2 DEBUG nova.compute.manager [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.283 2 DEBUG nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.283 2 INFO nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Creating image(s)
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.290 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4120b431-8cf6-4c89-8ab6-1c325a5f281c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:25 compute-0 NetworkManager[44885]: <info>  [1760433085.2965] device (tap106349ae-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:11:25 compute-0 NetworkManager[44885]: <info>  [1760433085.2974] device (tap106349ae-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:11:25 compute-0 systemd-udevd[342323]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.300 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf62598-50ce-4af5-9d88-42525244daf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:25 compute-0 NetworkManager[44885]: <info>  [1760433085.3017] manager: (tapb2389730-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/340)
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.335 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[22427786-2299-4a38-aae9-3f60b85d12dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.337 2 DEBUG nova.storage.rbd_utils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] rbd image d1c0470c-5f74-43c3-a206-07147fa01d5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.339 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3cba3344-4f0d-4edc-8629-cf05d3bc1dc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:25 compute-0 NetworkManager[44885]: <info>  [1760433085.3656] device (tapb2389730-a0): carrier: link connected
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.370 2 DEBUG nova.storage.rbd_utils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] rbd image d1c0470c-5f74-43c3-a206-07147fa01d5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.372 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[77aaace3-5322-49c1-829e-bc366ba94047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.390 2 DEBUG nova.storage.rbd_utils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] rbd image d1c0470c-5f74-43c3-a206-07147fa01d5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.392 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5b0067-dc66-4252-b198-8acb392a46bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2389730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:78:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686412, 'reachable_time': 34890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342373, 'error': None, 'target': 'ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.393 2 DEBUG oslo_concurrency.processutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.414 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a8ea4f-49cc-42e4-b50f-b9d0a5a368bf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:784b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686412, 'tstamp': 686412}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342392, 'error': None, 'target': 'ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.433 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5b709377-6d7a-4636-807f-77a8094cfc3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2389730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:78:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686412, 'reachable_time': 34890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 342394, 'error': None, 'target': 'ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:25 compute-0 ceph-mon[74249]: pgmap v1708: 305 pgs: 305 active+clean; 88 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:11:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3289197509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.451 2 DEBUG nova.policy [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9bf2c63a22b4957a35bb3b62129ab7c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e735ef38811e4376af72e0a380aba1bb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.471 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ff765ce1-43d6-4837-95c4-13113abaa374]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.501 2 DEBUG oslo_concurrency.processutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.502 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.503 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.504 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.529 2 DEBUG nova.storage.rbd_utils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] rbd image d1c0470c-5f74-43c3-a206-07147fa01d5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.534 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ca136904-f362-4579-bbe0-bacf838a7427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.535 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2389730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.535 2 DEBUG oslo_concurrency.processutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 d1c0470c-5f74-43c3-a206-07147fa01d5e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.536 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.536 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2389730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:25 compute-0 kernel: tapb2389730-a0: entered promiscuous mode
Oct 14 09:11:25 compute-0 NetworkManager[44885]: <info>  [1760433085.5405] manager: (tapb2389730-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.541 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb2389730-a0, col_values=(('external_ids', {'iface-id': 'd2ea0b85-033d-47f5-af89-2df076f2ce40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:25 compute-0 ovn_controller[152662]: 2025-10-14T09:11:25Z|00819|binding|INFO|Releasing lport d2ea0b85-033d-47f5-af89-2df076f2ce40 from this chassis (sb_readonly=0)
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.544 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b2389730-ae66-46f4-aea3-6a67311703e9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b2389730-ae66-46f4-aea3-6a67311703e9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.545 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[974a31c6-9b4d-4b63-8680-175852dcee94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.546 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-b2389730-ae66-46f4-aea3-6a67311703e9
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/b2389730-ae66-46f4-aea3-6a67311703e9.pid.haproxy
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID b2389730-ae66-46f4-aea3-6a67311703e9
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:11:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:25.548 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9', 'env', 'PROCESS_TAG=haproxy-b2389730-ae66-46f4-aea3-6a67311703e9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b2389730-ae66-46f4-aea3-6a67311703e9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.833 2 DEBUG oslo_concurrency.processutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 d1c0470c-5f74-43c3-a206-07147fa01d5e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 134 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.5 MiB/s wr, 61 op/s
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.920 2 DEBUG nova.storage.rbd_utils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] resizing rbd image d1c0470c-5f74-43c3-a206-07147fa01d5e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:11:25 compute-0 podman[342524]: 2025-10-14 09:11:25.941279906 +0000 UTC m=+0.051832768 container create 519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:11:25 compute-0 systemd[1]: Started libpod-conmon-519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc.scope.
Oct 14 09:11:25 compute-0 nova_compute[259627]: 2025-10-14 09:11:25.982 2 DEBUG nova.network.neutron [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Successfully created port: 85b2e7ea-f418-4eba-9a49-be2af576436f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:11:26 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:11:26 compute-0 podman[342524]: 2025-10-14 09:11:25.913555993 +0000 UTC m=+0.024108905 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:11:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d3769d648c5314a067d97a55b05a97e81509531f785e9c48153acfc6141c6f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.024 2 DEBUG nova.objects.instance [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lazy-loading 'migration_context' on Instance uuid d1c0470c-5f74-43c3-a206-07147fa01d5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:26 compute-0 podman[342524]: 2025-10-14 09:11:26.031762854 +0000 UTC m=+0.142315736 container init 519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:11:26 compute-0 podman[342524]: 2025-10-14 09:11:26.036645145 +0000 UTC m=+0.147198007 container start 519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.039 2 DEBUG nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.039 2 DEBUG nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Ensure instance console log exists: /var/lib/nova/instances/d1c0470c-5f74-43c3-a206-07147fa01d5e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.041 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.041 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.041 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:26 compute-0 neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9[342575]: [NOTICE]   (342597) : New worker (342599) forked
Oct 14 09:11:26 compute-0 neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9[342575]: [NOTICE]   (342597) : Loading success.
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.196 2 DEBUG oslo_concurrency.lockutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.197 2 DEBUG oslo_concurrency.lockutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.218 2 DEBUG nova.compute.manager [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.232 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433086.2324317, 290980d2-08b4-4029-a1c3-becd3457a410 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.233 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] VM Started (Lifecycle Event)
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.258 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.262 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433086.2330706, 290980d2-08b4-4029-a1c3-becd3457a410 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.262 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] VM Paused (Lifecycle Event)
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.265 2 DEBUG nova.network.neutron [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Successfully updated port: f027d20e-665b-4bd0-836c-7e8edb2b6bf7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.296 2 DEBUG oslo_concurrency.lockutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "refresh_cache-70e3c250-cd38-4718-9a7f-0fbf7bf471fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.296 2 DEBUG oslo_concurrency.lockutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquired lock "refresh_cache-70e3c250-cd38-4718-9a7f-0fbf7bf471fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.296 2 DEBUG nova.network.neutron [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.299 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.302 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.321 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.324 2 DEBUG oslo_concurrency.lockutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.325 2 DEBUG oslo_concurrency.lockutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.330 2 DEBUG nova.virt.hardware [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.330 2 INFO nova.compute.claims [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.365 2 DEBUG nova.compute.manager [req-29f4891f-21ea-4d56-a0d1-8970050325ee req-f4f35b49-4c41-4ce9-9f6b-27dec00d2cca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Received event network-changed-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.365 2 DEBUG nova.compute.manager [req-29f4891f-21ea-4d56-a0d1-8970050325ee req-f4f35b49-4c41-4ce9-9f6b-27dec00d2cca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Refreshing instance network info cache due to event network-changed-f027d20e-665b-4bd0-836c-7e8edb2b6bf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.365 2 DEBUG oslo_concurrency.lockutils [req-29f4891f-21ea-4d56-a0d1-8970050325ee req-f4f35b49-4c41-4ce9-9f6b-27dec00d2cca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-70e3c250-cd38-4718-9a7f-0fbf7bf471fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.445 2 DEBUG nova.network.neutron [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.479 2 DEBUG oslo_concurrency.processutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.560 2 DEBUG nova.network.neutron [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Successfully updated port: 85b2e7ea-f418-4eba-9a49-be2af576436f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.579 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Acquiring lock "refresh_cache-d1c0470c-5f74-43c3-a206-07147fa01d5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.580 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Acquired lock "refresh_cache-d1c0470c-5f74-43c3-a206-07147fa01d5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.580 2 DEBUG nova.network.neutron [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.734 2 DEBUG nova.network.neutron [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:11:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:11:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/671837221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.955 2 DEBUG oslo_concurrency.processutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.962 2 DEBUG nova.compute.provider_tree [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:11:26 compute-0 nova_compute[259627]: 2025-10-14 09:11:26.985 2 DEBUG nova.scheduler.client.report [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.020 2 DEBUG oslo_concurrency.lockutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.021 2 DEBUG nova.compute.manager [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.093 2 DEBUG nova.compute.manager [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.093 2 DEBUG nova.network.neutron [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.124 2 INFO nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.156 2 DEBUG nova.compute.manager [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.297 2 DEBUG nova.compute.manager [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.299 2 DEBUG nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.300 2 INFO nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Creating image(s)
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.335 2 DEBUG nova.storage.rbd_utils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.373 2 DEBUG nova.storage.rbd_utils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.408 2 DEBUG nova.storage.rbd_utils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.413 2 DEBUG oslo_concurrency.processutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:27 compute-0 ceph-mon[74249]: pgmap v1709: 305 pgs: 305 active+clean; 134 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.5 MiB/s wr, 61 op/s
Oct 14 09:11:27 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/671837221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.469 2 DEBUG nova.policy [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aa1425f7fdfc4218bdabfe2458cd1c60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f10ae705d9a34608a922683282b952b5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.488 2 DEBUG nova.compute.manager [req-9386519e-7dcf-44f2-9d43-a3f2e8194377 req-a7662684-7bc0-42df-bd8b-7b71f0369f97 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Received event network-changed-85b2e7ea-f418-4eba-9a49-be2af576436f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.488 2 DEBUG nova.compute.manager [req-9386519e-7dcf-44f2-9d43-a3f2e8194377 req-a7662684-7bc0-42df-bd8b-7b71f0369f97 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Refreshing instance network info cache due to event network-changed-85b2e7ea-f418-4eba-9a49-be2af576436f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.489 2 DEBUG oslo_concurrency.lockutils [req-9386519e-7dcf-44f2-9d43-a3f2e8194377 req-a7662684-7bc0-42df-bd8b-7b71f0369f97 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-d1c0470c-5f74-43c3-a206-07147fa01d5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.517 2 DEBUG oslo_concurrency.processutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.518 2 DEBUG oslo_concurrency.lockutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.519 2 DEBUG oslo_concurrency.lockutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.519 2 DEBUG oslo_concurrency.lockutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.544 2 DEBUG nova.storage.rbd_utils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.548 2 DEBUG oslo_concurrency.processutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1141f79e-2e47-40f1-91b0-275a9fac765c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:27 compute-0 podman[342705]: 2025-10-14 09:11:27.673585359 +0000 UTC m=+0.091145716 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:11:27 compute-0 podman[342707]: 2025-10-14 09:11:27.684412515 +0000 UTC m=+0.089945826 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.865 2 DEBUG oslo_concurrency.processutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1141f79e-2e47-40f1-91b0-275a9fac765c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 134 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.5 MiB/s wr, 61 op/s
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.951 2 DEBUG nova.storage.rbd_utils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] resizing rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:11:27 compute-0 nova_compute[259627]: 2025-10-14 09:11:27.998 2 DEBUG nova.network.neutron [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Updating instance_info_cache with network_info: [{"id": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "address": "fa:16:3e:1a:5e:c8", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf027d20e-66", "ovs_interfaceid": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.022 2 DEBUG oslo_concurrency.lockutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Releasing lock "refresh_cache-70e3c250-cd38-4718-9a7f-0fbf7bf471fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.022 2 DEBUG nova.compute.manager [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Instance network_info: |[{"id": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "address": "fa:16:3e:1a:5e:c8", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf027d20e-66", "ovs_interfaceid": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.022 2 DEBUG oslo_concurrency.lockutils [req-29f4891f-21ea-4d56-a0d1-8970050325ee req-f4f35b49-4c41-4ce9-9f6b-27dec00d2cca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-70e3c250-cd38-4718-9a7f-0fbf7bf471fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.023 2 DEBUG nova.network.neutron [req-29f4891f-21ea-4d56-a0d1-8970050325ee req-f4f35b49-4c41-4ce9-9f6b-27dec00d2cca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Refreshing network info cache for port f027d20e-665b-4bd0-836c-7e8edb2b6bf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.026 2 DEBUG nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Start _get_guest_xml network_info=[{"id": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "address": "fa:16:3e:1a:5e:c8", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf027d20e-66", "ovs_interfaceid": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.065 2 DEBUG nova.network.neutron [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Successfully created port: 7ce99440-fa49-4876-bb38-fce631d40400 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.075 2 DEBUG nova.objects.instance [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'migration_context' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.077 2 WARNING nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.082 2 DEBUG nova.virt.libvirt.host [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.082 2 DEBUG nova.virt.libvirt.host [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.085 2 DEBUG nova.virt.libvirt.host [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.086 2 DEBUG nova.virt.libvirt.host [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.086 2 DEBUG nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.086 2 DEBUG nova.virt.hardware [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.087 2 DEBUG nova.virt.hardware [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.087 2 DEBUG nova.virt.hardware [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.087 2 DEBUG nova.virt.hardware [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.088 2 DEBUG nova.virt.hardware [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.088 2 DEBUG nova.virt.hardware [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.088 2 DEBUG nova.virt.hardware [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.088 2 DEBUG nova.virt.hardware [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.089 2 DEBUG nova.virt.hardware [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.089 2 DEBUG nova.virt.hardware [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.089 2 DEBUG nova.virt.hardware [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.092 2 DEBUG oslo_concurrency.processutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.128 2 DEBUG nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.129 2 DEBUG nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Ensure instance console log exists: /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.129 2 DEBUG oslo_concurrency.lockutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.129 2 DEBUG oslo_concurrency.lockutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.130 2 DEBUG oslo_concurrency.lockutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.342 2 DEBUG nova.network.neutron [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Updating instance_info_cache with network_info: [{"id": "85b2e7ea-f418-4eba-9a49-be2af576436f", "address": "fa:16:3e:64:6f:91", "network": {"id": "10ee435a-b254-4b1c-8c18-f92d44f39cd7", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-141924003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e735ef38811e4376af72e0a380aba1bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b2e7ea-f4", "ovs_interfaceid": "85b2e7ea-f418-4eba-9a49-be2af576436f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.370 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Releasing lock "refresh_cache-d1c0470c-5f74-43c3-a206-07147fa01d5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.370 2 DEBUG nova.compute.manager [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Instance network_info: |[{"id": "85b2e7ea-f418-4eba-9a49-be2af576436f", "address": "fa:16:3e:64:6f:91", "network": {"id": "10ee435a-b254-4b1c-8c18-f92d44f39cd7", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-141924003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e735ef38811e4376af72e0a380aba1bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b2e7ea-f4", "ovs_interfaceid": "85b2e7ea-f418-4eba-9a49-be2af576436f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.372 2 DEBUG oslo_concurrency.lockutils [req-9386519e-7dcf-44f2-9d43-a3f2e8194377 req-a7662684-7bc0-42df-bd8b-7b71f0369f97 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-d1c0470c-5f74-43c3-a206-07147fa01d5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.372 2 DEBUG nova.network.neutron [req-9386519e-7dcf-44f2-9d43-a3f2e8194377 req-a7662684-7bc0-42df-bd8b-7b71f0369f97 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Refreshing network info cache for port 85b2e7ea-f418-4eba-9a49-be2af576436f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.377 2 DEBUG nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Start _get_guest_xml network_info=[{"id": "85b2e7ea-f418-4eba-9a49-be2af576436f", "address": "fa:16:3e:64:6f:91", "network": {"id": "10ee435a-b254-4b1c-8c18-f92d44f39cd7", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-141924003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e735ef38811e4376af72e0a380aba1bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b2e7ea-f4", "ovs_interfaceid": "85b2e7ea-f418-4eba-9a49-be2af576436f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.383 2 WARNING nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.388 2 DEBUG nova.virt.libvirt.host [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.389 2 DEBUG nova.virt.libvirt.host [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.402 2 DEBUG nova.virt.libvirt.host [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.403 2 DEBUG nova.virt.libvirt.host [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.404 2 DEBUG nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.404 2 DEBUG nova.virt.hardware [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.405 2 DEBUG nova.virt.hardware [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.405 2 DEBUG nova.virt.hardware [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.406 2 DEBUG nova.virt.hardware [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.406 2 DEBUG nova.virt.hardware [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.407 2 DEBUG nova.virt.hardware [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.407 2 DEBUG nova.virt.hardware [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.408 2 DEBUG nova.virt.hardware [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.408 2 DEBUG nova.virt.hardware [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.409 2 DEBUG nova.virt.hardware [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.409 2 DEBUG nova.virt.hardware [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.415 2 DEBUG oslo_concurrency.processutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.476 2 DEBUG nova.compute.manager [req-c36991f9-43e5-4bd3-8a4d-73e8a5e4f169 req-048d4482-2d61-4efb-826b-1913325511c7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Received event network-vif-plugged-106349ae-cfaa-43ec-9bda-16f36a6ac3d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.477 2 DEBUG oslo_concurrency.lockutils [req-c36991f9-43e5-4bd3-8a4d-73e8a5e4f169 req-048d4482-2d61-4efb-826b-1913325511c7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "290980d2-08b4-4029-a1c3-becd3457a410-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.477 2 DEBUG oslo_concurrency.lockutils [req-c36991f9-43e5-4bd3-8a4d-73e8a5e4f169 req-048d4482-2d61-4efb-826b-1913325511c7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "290980d2-08b4-4029-a1c3-becd3457a410-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.477 2 DEBUG oslo_concurrency.lockutils [req-c36991f9-43e5-4bd3-8a4d-73e8a5e4f169 req-048d4482-2d61-4efb-826b-1913325511c7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "290980d2-08b4-4029-a1c3-becd3457a410-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.478 2 DEBUG nova.compute.manager [req-c36991f9-43e5-4bd3-8a4d-73e8a5e4f169 req-048d4482-2d61-4efb-826b-1913325511c7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Processing event network-vif-plugged-106349ae-cfaa-43ec-9bda-16f36a6ac3d6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.478 2 DEBUG nova.compute.manager [req-c36991f9-43e5-4bd3-8a4d-73e8a5e4f169 req-048d4482-2d61-4efb-826b-1913325511c7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Received event network-vif-plugged-106349ae-cfaa-43ec-9bda-16f36a6ac3d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.478 2 DEBUG oslo_concurrency.lockutils [req-c36991f9-43e5-4bd3-8a4d-73e8a5e4f169 req-048d4482-2d61-4efb-826b-1913325511c7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "290980d2-08b4-4029-a1c3-becd3457a410-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.478 2 DEBUG oslo_concurrency.lockutils [req-c36991f9-43e5-4bd3-8a4d-73e8a5e4f169 req-048d4482-2d61-4efb-826b-1913325511c7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "290980d2-08b4-4029-a1c3-becd3457a410-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.478 2 DEBUG oslo_concurrency.lockutils [req-c36991f9-43e5-4bd3-8a4d-73e8a5e4f169 req-048d4482-2d61-4efb-826b-1913325511c7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "290980d2-08b4-4029-a1c3-becd3457a410-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.479 2 DEBUG nova.compute.manager [req-c36991f9-43e5-4bd3-8a4d-73e8a5e4f169 req-048d4482-2d61-4efb-826b-1913325511c7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] No waiting events found dispatching network-vif-plugged-106349ae-cfaa-43ec-9bda-16f36a6ac3d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.479 2 WARNING nova.compute.manager [req-c36991f9-43e5-4bd3-8a4d-73e8a5e4f169 req-048d4482-2d61-4efb-826b-1913325511c7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Received unexpected event network-vif-plugged-106349ae-cfaa-43ec-9bda-16f36a6ac3d6 for instance with vm_state building and task_state spawning.
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.479 2 DEBUG nova.compute.manager [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.487 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433088.4827783, 290980d2-08b4-4029-a1c3-becd3457a410 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.487 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] VM Resumed (Lifecycle Event)
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.491 2 DEBUG nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.496 2 INFO nova.virt.libvirt.driver [-] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Instance spawned successfully.
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.497 2 DEBUG nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.510 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.516 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:11:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:11:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3751700693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.527 2 DEBUG nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.528 2 DEBUG nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.528 2 DEBUG nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.529 2 DEBUG nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.529 2 DEBUG nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.529 2 DEBUG nova.virt.libvirt.driver [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.533 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.537 2 DEBUG oslo_concurrency.processutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.556 2 DEBUG nova.storage.rbd_utils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 70e3c250-cd38-4718-9a7f-0fbf7bf471fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.560 2 DEBUG oslo_concurrency.processutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.599 2 INFO nova.compute.manager [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Took 12.88 seconds to spawn the instance on the hypervisor.
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.600 2 DEBUG nova.compute.manager [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.667 2 INFO nova.compute.manager [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Took 13.92 seconds to build instance.
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.692 2 DEBUG oslo_concurrency.lockutils [None req-1e4cb4fc-ce22-4995-bb64-4b6d697cd78a e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "290980d2-08b4-4029-a1c3-becd3457a410" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.807 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "ab77dbf7-4458-4b16-a2e7-ed73be047838" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.808 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:11:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2017152192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.839 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.847 2 DEBUG oslo_concurrency.processutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.848 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.848 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.872 2 DEBUG nova.storage.rbd_utils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] rbd image d1c0470c-5f74-43c3-a206-07147fa01d5e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.877 2 DEBUG oslo_concurrency.processutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:11:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/963436889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.990 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.991 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.994 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "62fe725d-b24a-477a-a275-06d2cd960aaf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.994 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:28 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.995 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:28.999 2 DEBUG oslo_concurrency.processutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.001 2 DEBUG nova.virt.libvirt.vif [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:11:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1354708936',display_name='tempest-ServerRescueNegativeTestJSON-server-1354708936',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1354708936',id=80,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f10ae705d9a34608a922683282b952b5',ramdisk_id='',reservation_id='r-vlzoqsi0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1031174086',owner_user_name='tempest-ServerRescueNegativeTestJSON-1031174086-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:11:23Z,user_data=None,user_id='aa1425f7fdfc4218bdabfe2458cd1c60',uuid=70e3c250-cd38-4718-9a7f-0fbf7bf471fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "address": "fa:16:3e:1a:5e:c8", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf027d20e-66", "ovs_interfaceid": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.001 2 DEBUG nova.network.os_vif_util [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converting VIF {"id": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "address": "fa:16:3e:1a:5e:c8", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf027d20e-66", "ovs_interfaceid": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.002 2 DEBUG nova.network.os_vif_util [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:5e:c8,bridge_name='br-int',has_traffic_filtering=True,id=f027d20e-665b-4bd0-836c-7e8edb2b6bf7,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf027d20e-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.004 2 DEBUG nova.objects.instance [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 70e3c250-cd38-4718-9a7f-0fbf7bf471fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.029 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.029 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.036 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.036 2 INFO nova.compute.claims [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.087 2 DEBUG nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <uuid>70e3c250-cd38-4718-9a7f-0fbf7bf471fe</uuid>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <name>instance-00000050</name>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1354708936</nova:name>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:11:28</nova:creationTime>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <nova:user uuid="aa1425f7fdfc4218bdabfe2458cd1c60">tempest-ServerRescueNegativeTestJSON-1031174086-project-member</nova:user>
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <nova:project uuid="f10ae705d9a34608a922683282b952b5">tempest-ServerRescueNegativeTestJSON-1031174086</nova:project>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <nova:port uuid="f027d20e-665b-4bd0-836c-7e8edb2b6bf7">
Oct 14 09:11:29 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <system>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <entry name="serial">70e3c250-cd38-4718-9a7f-0fbf7bf471fe</entry>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <entry name="uuid">70e3c250-cd38-4718-9a7f-0fbf7bf471fe</entry>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     </system>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <os>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   </os>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <features>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   </features>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/70e3c250-cd38-4718-9a7f-0fbf7bf471fe_disk">
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       </source>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/70e3c250-cd38-4718-9a7f-0fbf7bf471fe_disk.config">
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       </source>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:1a:5e:c8"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <target dev="tapf027d20e-66"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/70e3c250-cd38-4718-9a7f-0fbf7bf471fe/console.log" append="off"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <video>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     </video>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:11:29 compute-0 nova_compute[259627]: </domain>
Oct 14 09:11:29 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.087 2 DEBUG nova.compute.manager [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Preparing to wait for external event network-vif-plugged-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.088 2 DEBUG oslo_concurrency.lockutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.088 2 DEBUG oslo_concurrency.lockutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.088 2 DEBUG oslo_concurrency.lockutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.090 2 DEBUG nova.virt.libvirt.vif [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:11:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1354708936',display_name='tempest-ServerRescueNegativeTestJSON-server-1354708936',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1354708936',id=80,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f10ae705d9a34608a922683282b952b5',ramdisk_id='',reservation_id='r-vlzoqsi0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1031174086',owner_user_name='tempest-ServerRescueNegativeTestJSON-1031174086-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:11:23Z,user_data=None,user_id='aa1425f7fdfc4218bdabfe2458cd1c60',uuid=70e3c250-cd38-4718-9a7f-0fbf7bf471fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "address": "fa:16:3e:1a:5e:c8", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf027d20e-66", "ovs_interfaceid": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.090 2 DEBUG nova.network.os_vif_util [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converting VIF {"id": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "address": "fa:16:3e:1a:5e:c8", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf027d20e-66", "ovs_interfaceid": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.091 2 DEBUG nova.network.os_vif_util [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:5e:c8,bridge_name='br-int',has_traffic_filtering=True,id=f027d20e-665b-4bd0-836c-7e8edb2b6bf7,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf027d20e-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.092 2 DEBUG os_vif [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:5e:c8,bridge_name='br-int',has_traffic_filtering=True,id=f027d20e-665b-4bd0-836c-7e8edb2b6bf7,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf027d20e-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.093 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.095 2 DEBUG nova.network.neutron [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Successfully updated port: 7ce99440-fa49-4876-bb38-fce631d40400 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.098 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.099 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.100 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.110 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf027d20e-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.110 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf027d20e-66, col_values=(('external_ids', {'iface-id': 'f027d20e-665b-4bd0-836c-7e8edb2b6bf7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:5e:c8', 'vm-uuid': '70e3c250-cd38-4718-9a7f-0fbf7bf471fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:29 compute-0 NetworkManager[44885]: <info>  [1760433089.1134] manager: (tapf027d20e-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.126 2 INFO os_vif [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:5e:c8,bridge_name='br-int',has_traffic_filtering=True,id=f027d20e-665b-4bd0-836c-7e8edb2b6bf7,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf027d20e-66')
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.129 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.147 2 DEBUG nova.network.neutron [req-29f4891f-21ea-4d56-a0d1-8970050325ee req-f4f35b49-4c41-4ce9-9f6b-27dec00d2cca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Updated VIF entry in instance network info cache for port f027d20e-665b-4bd0-836c-7e8edb2b6bf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.147 2 DEBUG nova.network.neutron [req-29f4891f-21ea-4d56-a0d1-8970050325ee req-f4f35b49-4c41-4ce9-9f6b-27dec00d2cca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Updating instance_info_cache with network_info: [{"id": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "address": "fa:16:3e:1a:5e:c8", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf027d20e-66", "ovs_interfaceid": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.251 2 DEBUG oslo_concurrency.lockutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "refresh_cache-1141f79e-2e47-40f1-91b0-275a9fac765c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.251 2 DEBUG oslo_concurrency.lockutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquired lock "refresh_cache-1141f79e-2e47-40f1-91b0-275a9fac765c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.251 2 DEBUG nova.network.neutron [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.279 2 DEBUG oslo_concurrency.lockutils [req-29f4891f-21ea-4d56-a0d1-8970050325ee req-f4f35b49-4c41-4ce9-9f6b-27dec00d2cca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-70e3c250-cd38-4718-9a7f-0fbf7bf471fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:11:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:11:29 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2610423191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.306 2 DEBUG nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.306 2 DEBUG nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.307 2 DEBUG nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] No VIF found with MAC fa:16:3e:1a:5e:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.307 2 INFO nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Using config drive
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.332 2 DEBUG nova.storage.rbd_utils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 70e3c250-cd38-4718-9a7f-0fbf7bf471fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.338 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.339 2 DEBUG oslo_concurrency.processutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.340 2 DEBUG nova.virt.libvirt.vif [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:11:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-846214533',display_name='tempest-ServerGroupTestJSON-server-846214533',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-846214533',id=81,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e735ef38811e4376af72e0a380aba1bb',ramdisk_id='',reservation_id='r-j643qn5d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1602782331',owner_user_name='tempest-ServerGroupTestJSON-1602782331-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:11:25Z,user_data=None,user_id='f9bf2c63a22b4957a35bb3b62129ab7c',uuid=d1c0470c-5f74-43c3-a206-07147fa01d5e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "85b2e7ea-f418-4eba-9a49-be2af576436f", "address": "fa:16:3e:64:6f:91", "network": {"id": "10ee435a-b254-4b1c-8c18-f92d44f39cd7", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-141924003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e735ef38811e4376af72e0a380aba1bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b2e7ea-f4", "ovs_interfaceid": "85b2e7ea-f418-4eba-9a49-be2af576436f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.340 2 DEBUG nova.network.os_vif_util [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Converting VIF {"id": "85b2e7ea-f418-4eba-9a49-be2af576436f", "address": "fa:16:3e:64:6f:91", "network": {"id": "10ee435a-b254-4b1c-8c18-f92d44f39cd7", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-141924003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e735ef38811e4376af72e0a380aba1bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b2e7ea-f4", "ovs_interfaceid": "85b2e7ea-f418-4eba-9a49-be2af576436f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.340 2 DEBUG nova.network.os_vif_util [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:6f:91,bridge_name='br-int',has_traffic_filtering=True,id=85b2e7ea-f418-4eba-9a49-be2af576436f,network=Network(10ee435a-b254-4b1c-8c18-f92d44f39cd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b2e7ea-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.341 2 DEBUG nova.objects.instance [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lazy-loading 'pci_devices' on Instance uuid d1c0470c-5f74-43c3-a206-07147fa01d5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.367 2 DEBUG nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <uuid>d1c0470c-5f74-43c3-a206-07147fa01d5e</uuid>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <name>instance-00000051</name>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerGroupTestJSON-server-846214533</nova:name>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:11:28</nova:creationTime>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <nova:user uuid="f9bf2c63a22b4957a35bb3b62129ab7c">tempest-ServerGroupTestJSON-1602782331-project-member</nova:user>
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <nova:project uuid="e735ef38811e4376af72e0a380aba1bb">tempest-ServerGroupTestJSON-1602782331</nova:project>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <nova:port uuid="85b2e7ea-f418-4eba-9a49-be2af576436f">
Oct 14 09:11:29 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <system>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <entry name="serial">d1c0470c-5f74-43c3-a206-07147fa01d5e</entry>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <entry name="uuid">d1c0470c-5f74-43c3-a206-07147fa01d5e</entry>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     </system>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <os>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   </os>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <features>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   </features>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/d1c0470c-5f74-43c3-a206-07147fa01d5e_disk">
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       </source>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/d1c0470c-5f74-43c3-a206-07147fa01d5e_disk.config">
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       </source>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:11:29 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:64:6f:91"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <target dev="tap85b2e7ea-f4"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/d1c0470c-5f74-43c3-a206-07147fa01d5e/console.log" append="off"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <video>
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     </video>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:11:29 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:11:29 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:11:29 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:11:29 compute-0 nova_compute[259627]: </domain>
Oct 14 09:11:29 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.368 2 DEBUG nova.compute.manager [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Preparing to wait for external event network-vif-plugged-85b2e7ea-f418-4eba-9a49-be2af576436f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.368 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Acquiring lock "d1c0470c-5f74-43c3-a206-07147fa01d5e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.368 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lock "d1c0470c-5f74-43c3-a206-07147fa01d5e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.368 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lock "d1c0470c-5f74-43c3-a206-07147fa01d5e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.369 2 DEBUG nova.virt.libvirt.vif [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:11:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-846214533',display_name='tempest-ServerGroupTestJSON-server-846214533',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-846214533',id=81,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e735ef38811e4376af72e0a380aba1bb',ramdisk_id='',reservation_id='r-j643qn5d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1602782331',owner_user_name='tempest-ServerGroupTestJSON-1602782331-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:11:25Z,user_data=None,user_id='f9bf2c63a22b4957a35bb3b62129ab7c',uuid=d1c0470c-5f74-43c3-a206-07147fa01d5e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "85b2e7ea-f418-4eba-9a49-be2af576436f", "address": "fa:16:3e:64:6f:91", "network": {"id": "10ee435a-b254-4b1c-8c18-f92d44f39cd7", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-141924003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e735ef38811e4376af72e0a380aba1bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b2e7ea-f4", "ovs_interfaceid": "85b2e7ea-f418-4eba-9a49-be2af576436f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.369 2 DEBUG nova.network.os_vif_util [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Converting VIF {"id": "85b2e7ea-f418-4eba-9a49-be2af576436f", "address": "fa:16:3e:64:6f:91", "network": {"id": "10ee435a-b254-4b1c-8c18-f92d44f39cd7", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-141924003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e735ef38811e4376af72e0a380aba1bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b2e7ea-f4", "ovs_interfaceid": "85b2e7ea-f418-4eba-9a49-be2af576436f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.370 2 DEBUG nova.network.os_vif_util [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:6f:91,bridge_name='br-int',has_traffic_filtering=True,id=85b2e7ea-f418-4eba-9a49-be2af576436f,network=Network(10ee435a-b254-4b1c-8c18-f92d44f39cd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b2e7ea-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.370 2 DEBUG os_vif [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:6f:91,bridge_name='br-int',has_traffic_filtering=True,id=85b2e7ea-f418-4eba-9a49-be2af576436f,network=Network(10ee435a-b254-4b1c-8c18-f92d44f39cd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b2e7ea-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.371 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.371 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.374 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85b2e7ea-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.374 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap85b2e7ea-f4, col_values=(('external_ids', {'iface-id': '85b2e7ea-f418-4eba-9a49-be2af576436f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:6f:91', 'vm-uuid': 'd1c0470c-5f74-43c3-a206-07147fa01d5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:29 compute-0 NetworkManager[44885]: <info>  [1760433089.3777] manager: (tap85b2e7ea-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.388 2 INFO os_vif [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:6f:91,bridge_name='br-int',has_traffic_filtering=True,id=85b2e7ea-f418-4eba-9a49-be2af576436f,network=Network(10ee435a-b254-4b1c-8c18-f92d44f39cd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b2e7ea-f4')
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.450 2 DEBUG nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.451 2 DEBUG nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.451 2 DEBUG nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] No VIF found with MAC fa:16:3e:64:6f:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.451 2 INFO nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Using config drive
Oct 14 09:11:29 compute-0 ceph-mon[74249]: pgmap v1710: 305 pgs: 305 active+clean; 134 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.5 MiB/s wr, 61 op/s
Oct 14 09:11:29 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3751700693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:29 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2017152192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:29 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/963436889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:29 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2610423191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.481 2 DEBUG nova.storage.rbd_utils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] rbd image d1c0470c-5f74-43c3-a206-07147fa01d5e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.488 2 DEBUG nova.network.neutron [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.501 2 DEBUG nova.compute.manager [req-18d19ff5-6031-4a6a-abd1-4a07b0401ce1 req-8820d0a8-394c-4c4f-afa4-e702cae91e55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-changed-7ce99440-fa49-4876-bb38-fce631d40400 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.501 2 DEBUG nova.compute.manager [req-18d19ff5-6031-4a6a-abd1-4a07b0401ce1 req-8820d0a8-394c-4c4f-afa4-e702cae91e55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Refreshing instance network info cache due to event network-changed-7ce99440-fa49-4876-bb38-fce631d40400. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.502 2 DEBUG oslo_concurrency.lockutils [req-18d19ff5-6031-4a6a-abd1-4a07b0401ce1 req-8820d0a8-394c-4c4f-afa4-e702cae91e55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1141f79e-2e47-40f1-91b0-275a9fac765c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.586 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.715 2 INFO nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Creating config drive at /var/lib/nova/instances/70e3c250-cd38-4718-9a7f-0fbf7bf471fe/disk.config
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.722 2 DEBUG oslo_concurrency.processutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/70e3c250-cd38-4718-9a7f-0fbf7bf471fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_yrpsh2p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.814 2 INFO nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Creating config drive at /var/lib/nova/instances/d1c0470c-5f74-43c3-a206-07147fa01d5e/disk.config
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.822 2 DEBUG oslo_concurrency.processutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d1c0470c-5f74-43c3-a206-07147fa01d5e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl4h6a9zs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 134 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.5 MiB/s wr, 61 op/s
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.893 2 DEBUG oslo_concurrency.processutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/70e3c250-cd38-4718-9a7f-0fbf7bf471fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_yrpsh2p" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.927 2 DEBUG nova.storage.rbd_utils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 70e3c250-cd38-4718-9a7f-0fbf7bf471fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.930 2 DEBUG oslo_concurrency.processutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/70e3c250-cd38-4718-9a7f-0fbf7bf471fe/disk.config 70e3c250-cd38-4718-9a7f-0fbf7bf471fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:29 compute-0 nova_compute[259627]: 2025-10-14 09:11:29.975 2 DEBUG oslo_concurrency.processutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d1c0470c-5f74-43c3-a206-07147fa01d5e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl4h6a9zs" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.003 2 DEBUG nova.storage.rbd_utils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] rbd image d1c0470c-5f74-43c3-a206-07147fa01d5e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.009 2 DEBUG oslo_concurrency.processutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d1c0470c-5f74-43c3-a206-07147fa01d5e/disk.config d1c0470c-5f74-43c3-a206-07147fa01d5e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:11:30 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/992613432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.079 2 DEBUG oslo_concurrency.processutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/70e3c250-cd38-4718-9a7f-0fbf7bf471fe/disk.config 70e3c250-cd38-4718-9a7f-0fbf7bf471fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.080 2 INFO nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Deleting local config drive /var/lib/nova/instances/70e3c250-cd38-4718-9a7f-0fbf7bf471fe/disk.config because it was imported into RBD.
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.087 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.095 2 DEBUG nova.compute.provider_tree [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.119 2 DEBUG nova.scheduler.client.report [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:11:30 compute-0 NetworkManager[44885]: <info>  [1760433090.1273] manager: (tapf027d20e-66): new Tun device (/org/freedesktop/NetworkManager/Devices/344)
Oct 14 09:11:30 compute-0 kernel: tapf027d20e-66: entered promiscuous mode
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.150 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.151 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.153 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 1.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.153 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.153 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.153 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:30 compute-0 systemd-udevd[343118]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:11:30 compute-0 ovn_controller[152662]: 2025-10-14T09:11:30Z|00820|binding|INFO|Claiming lport f027d20e-665b-4bd0-836c-7e8edb2b6bf7 for this chassis.
Oct 14 09:11:30 compute-0 ovn_controller[152662]: 2025-10-14T09:11:30Z|00821|binding|INFO|f027d20e-665b-4bd0-836c-7e8edb2b6bf7: Claiming fa:16:3e:1a:5e:c8 10.100.0.5
Oct 14 09:11:30 compute-0 NetworkManager[44885]: <info>  [1760433090.1852] device (tapf027d20e-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:11:30 compute-0 NetworkManager[44885]: <info>  [1760433090.1865] device (tapf027d20e-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.188 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:5e:c8 10.100.0.5'], port_security=['fa:16:3e:1a:5e:c8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '70e3c250-cd38-4718-9a7f-0fbf7bf471fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f10ae705d9a34608a922683282b952b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '908d59ff-8d75-4488-a358-b4621923e921', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a29a9f-f981-430b-8a8f-ff453e5ae1a6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f027d20e-665b-4bd0-836c-7e8edb2b6bf7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.189 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f027d20e-665b-4bd0-836c-7e8edb2b6bf7 in datapath 15e6e95c-6cd2-4631-98f6-9ed276458c39 bound to our chassis
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.191 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15e6e95c-6cd2-4631-98f6-9ed276458c39
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.198 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.204 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8e633c-d308-499d-a481-da82deede479]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.205 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap15e6e95c-61 in ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.207 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap15e6e95c-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.207 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[18c75245-f15f-4626-8288-b31a58247964]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.210 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aac49a2b-7640-4d8c-a92f-b74e887c0391]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:30 compute-0 systemd-machined[214636]: New machine qemu-100-instance-00000050.
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.218 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.218 2 INFO nova.compute.claims [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:11:30 compute-0 systemd[1]: Started Virtual Machine qemu-100-instance-00000050.
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.224 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[8eebf490-d646-40b3-9deb-81a40d512a66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.255 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[63d7a0ec-73cf-4ba8-9c69-95c937c589bf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.278 2 DEBUG oslo_concurrency.processutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d1c0470c-5f74-43c3-a206-07147fa01d5e/disk.config d1c0470c-5f74-43c3-a206-07147fa01d5e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.278 2 INFO nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Deleting local config drive /var/lib/nova/instances/d1c0470c-5f74-43c3-a206-07147fa01d5e/disk.config because it was imported into RBD.
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:30 compute-0 ovn_controller[152662]: 2025-10-14T09:11:30Z|00822|binding|INFO|Setting lport f027d20e-665b-4bd0-836c-7e8edb2b6bf7 ovn-installed in OVS
Oct 14 09:11:30 compute-0 ovn_controller[152662]: 2025-10-14T09:11:30Z|00823|binding|INFO|Setting lport f027d20e-665b-4bd0-836c-7e8edb2b6bf7 up in Southbound
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.287 2 DEBUG nova.network.neutron [req-9386519e-7dcf-44f2-9d43-a3f2e8194377 req-a7662684-7bc0-42df-bd8b-7b71f0369f97 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Updated VIF entry in instance network info cache for port 85b2e7ea-f418-4eba-9a49-be2af576436f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.287 2 DEBUG nova.network.neutron [req-9386519e-7dcf-44f2-9d43-a3f2e8194377 req-a7662684-7bc0-42df-bd8b-7b71f0369f97 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Updating instance_info_cache with network_info: [{"id": "85b2e7ea-f418-4eba-9a49-be2af576436f", "address": "fa:16:3e:64:6f:91", "network": {"id": "10ee435a-b254-4b1c-8c18-f92d44f39cd7", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-141924003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e735ef38811e4376af72e0a380aba1bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b2e7ea-f4", "ovs_interfaceid": "85b2e7ea-f418-4eba-9a49-be2af576436f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.295 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.296 2 DEBUG nova.network.neutron [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.301 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4e15ce82-e93a-44ef-aafa-b8efbab100a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:30 compute-0 NetworkManager[44885]: <info>  [1760433090.3080] manager: (tap15e6e95c-60): new Veth device (/org/freedesktop/NetworkManager/Devices/345)
Oct 14 09:11:30 compute-0 systemd-udevd[343126]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.307 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[07bb96c8-1769-4098-b269-05176f7d48b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:30 compute-0 kernel: tap85b2e7ea-f4: entered promiscuous mode
Oct 14 09:11:30 compute-0 NetworkManager[44885]: <info>  [1760433090.3415] manager: (tap85b2e7ea-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/346)
Oct 14 09:11:30 compute-0 ovn_controller[152662]: 2025-10-14T09:11:30Z|00824|binding|INFO|Claiming lport 85b2e7ea-f418-4eba-9a49-be2af576436f for this chassis.
Oct 14 09:11:30 compute-0 ovn_controller[152662]: 2025-10-14T09:11:30Z|00825|binding|INFO|85b2e7ea-f418-4eba-9a49-be2af576436f: Claiming fa:16:3e:64:6f:91 10.100.0.12
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:30 compute-0 systemd-udevd[343176]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.349 2 DEBUG oslo_concurrency.lockutils [req-9386519e-7dcf-44f2-9d43-a3f2e8194377 req-a7662684-7bc0-42df-bd8b-7b71f0369f97 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-d1c0470c-5f74-43c3-a206-07147fa01d5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.350 2 INFO nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.351 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:6f:91 10.100.0.12'], port_security=['fa:16:3e:64:6f:91 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd1c0470c-5f74-43c3-a206-07147fa01d5e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10ee435a-b254-4b1c-8c18-f92d44f39cd7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e735ef38811e4376af72e0a380aba1bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dc264b84-ffad-463b-86d4-19fe6d17f564', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cba23c9-07ca-4841-bf24-574c4d75ed7e, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=85b2e7ea-f418-4eba-9a49-be2af576436f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.359 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[87b1933f-2ac3-4a3e-a6bc-22f99976fcf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:30 compute-0 NetworkManager[44885]: <info>  [1760433090.3625] device (tap85b2e7ea-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:11:30 compute-0 NetworkManager[44885]: <info>  [1760433090.3640] device (tap85b2e7ea-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.363 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bf060896-bafa-40e5-8f01-0d193c50256e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.379 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:11:30 compute-0 NetworkManager[44885]: <info>  [1760433090.3861] device (tap15e6e95c-60): carrier: link connected
Oct 14 09:11:30 compute-0 systemd-machined[214636]: New machine qemu-101-instance-00000051.
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.390 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4ab226a2-6c7a-4a97-afe0-2bb60d2b5a8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.408 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[261cba2d-6015-4b58-aa44-020c124dc4f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15e6e95c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:6b:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686914, 'reachable_time': 20198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343191, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.423 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[65cc20eb-d198-49aa-9df8-ec3880e1e75d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:6b0e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686914, 'tstamp': 686914}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343192, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:30 compute-0 ovn_controller[152662]: 2025-10-14T09:11:30Z|00826|binding|INFO|Setting lport 85b2e7ea-f418-4eba-9a49-be2af576436f ovn-installed in OVS
Oct 14 09:11:30 compute-0 ovn_controller[152662]: 2025-10-14T09:11:30Z|00827|binding|INFO|Setting lport 85b2e7ea-f418-4eba-9a49-be2af576436f up in Southbound
Oct 14 09:11:30 compute-0 systemd[1]: Started Virtual Machine qemu-101-instance-00000051.
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.444 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[de59896e-94b3-42d3-a627-54f0ce616539]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15e6e95c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:6b:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686914, 'reachable_time': 20198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343193, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:30 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/992613432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.485 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f1098d6d-883e-4052-ad9b-4de18b93540f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.507 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.508 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.509 2 INFO nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Creating image(s)
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.529 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image ab77dbf7-4458-4b16-a2e7-ed73be047838_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.557 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image ab77dbf7-4458-4b16-a2e7-ed73be047838_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:11:30 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2308289792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.577 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image ab77dbf7-4458-4b16-a2e7-ed73be047838_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.582 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.614 2 DEBUG nova.policy [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bca591d7e37e4881bc4de44ee172b2f4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d2f1337472a4407869e16f2271280ef', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.619 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.636 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.663 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d10ce345-d7e4-4dd5-aebe-f89da67dc2b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.664 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15e6e95c-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.664 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.664 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15e6e95c-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:30 compute-0 kernel: tap15e6e95c-60: entered promiscuous mode
Oct 14 09:11:30 compute-0 NetworkManager[44885]: <info>  [1760433090.6669] manager: (tap15e6e95c-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/347)
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.670 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15e6e95c-60, col_values=(('external_ids', {'iface-id': '970f2645-7ec3-4b7f-8527-871800c728d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:30 compute-0 ovn_controller[152662]: 2025-10-14T09:11:30Z|00828|binding|INFO|Releasing lport 970f2645-7ec3-4b7f-8527-871800c728d8 from this chassis (sb_readonly=0)
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.671 2 DEBUG nova.network.neutron [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Updating instance_info_cache with network_info: [{"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.676 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.676 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.677 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.677 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.698 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image ab77dbf7-4458-4b16-a2e7-ed73be047838_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.701 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ab77dbf7-4458-4b16-a2e7-ed73be047838_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.701 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/15e6e95c-6cd2-4631-98f6-9ed276458c39.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/15e6e95c-6cd2-4631-98f6-9ed276458c39.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.703 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5312244c-3f4f-4867-9517-dbe3c5452651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.707 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-15e6e95c-6cd2-4631-98f6-9ed276458c39
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/15e6e95c-6cd2-4631-98f6-9ed276458c39.pid.haproxy
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 15e6e95c-6cd2-4631-98f6-9ed276458c39
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:11:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:30.709 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'env', 'PROCESS_TAG=haproxy-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/15e6e95c-6cd2-4631-98f6-9ed276458c39.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.736 2 DEBUG oslo_concurrency.lockutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Releasing lock "refresh_cache-1141f79e-2e47-40f1-91b0-275a9fac765c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.737 2 DEBUG nova.compute.manager [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Instance network_info: |[{"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.737 2 DEBUG oslo_concurrency.lockutils [req-18d19ff5-6031-4a6a-abd1-4a07b0401ce1 req-8820d0a8-394c-4c4f-afa4-e702cae91e55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1141f79e-2e47-40f1-91b0-275a9fac765c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.737 2 DEBUG nova.network.neutron [req-18d19ff5-6031-4a6a-abd1-4a07b0401ce1 req-8820d0a8-394c-4c4f-afa4-e702cae91e55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Refreshing network info cache for port 7ce99440-fa49-4876-bb38-fce631d40400 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.740 2 DEBUG nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Start _get_guest_xml network_info=[{"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.749 2 WARNING nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.757 2 DEBUG nova.virt.libvirt.host [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.757 2 DEBUG nova.virt.libvirt.host [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.761 2 DEBUG nova.virt.libvirt.host [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.761 2 DEBUG nova.virt.libvirt.host [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.762 2 DEBUG nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.762 2 DEBUG nova.virt.hardware [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.762 2 DEBUG nova.virt.hardware [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.762 2 DEBUG nova.virt.hardware [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.763 2 DEBUG nova.virt.hardware [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.763 2 DEBUG nova.virt.hardware [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.763 2 DEBUG nova.virt.hardware [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.763 2 DEBUG nova.virt.hardware [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.763 2 DEBUG nova.virt.hardware [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.763 2 DEBUG nova.virt.hardware [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.763 2 DEBUG nova.virt.hardware [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.764 2 DEBUG nova.virt.hardware [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.767 2 DEBUG oslo_concurrency.processutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.898 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.899 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.905 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.905 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.911 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.911 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:11:30 compute-0 nova_compute[259627]: 2025-10-14 09:11:30.982 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ab77dbf7-4458-4b16-a2e7-ed73be047838_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.121 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] resizing rbd image ab77dbf7-4458-4b16-a2e7-ed73be047838_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:11:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:11:31 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2339625777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.157 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433091.139845, 70e3c250-cd38-4718-9a7f-0fbf7bf471fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.157 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] VM Started (Lifecycle Event)
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.162 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.170 2 DEBUG nova.compute.provider_tree [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.181 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.186 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433091.1399314, 70e3c250-cd38-4718-9a7f-0fbf7bf471fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.186 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] VM Paused (Lifecycle Event)
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.189 2 DEBUG nova.scheduler.client.report [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:11:31 compute-0 podman[343443]: 2025-10-14 09:11:31.209416577 +0000 UTC m=+0.067380591 container create 73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:11:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:11:31 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/397503356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.246 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.248 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.249 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.251 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:31 compute-0 systemd[1]: Started libpod-conmon-73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7.scope.
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.255 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.259 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.260 2 INFO nova.compute.claims [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.262 2 DEBUG oslo_concurrency.processutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:31 compute-0 podman[343443]: 2025-10-14 09:11:31.178901635 +0000 UTC m=+0.036865669 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:11:31 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.282 2 DEBUG nova.storage.rbd_utils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.287 2 DEBUG oslo_concurrency.processutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a5ab24f7466258d49f499f0da82e5d8b268704cbd206b0609a9e9ddb16bcab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:11:31 compute-0 podman[343443]: 2025-10-14 09:11:31.301771931 +0000 UTC m=+0.159735965 container init 73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 09:11:31 compute-0 podman[343443]: 2025-10-14 09:11:31.307317438 +0000 UTC m=+0.165281452 container start 73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:11:31 compute-0 neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39[343540]: [NOTICE]   (343564) : New worker (343566) forked
Oct 14 09:11:31 compute-0 neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39[343540]: [NOTICE]   (343564) : Loading success.
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.345 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.354 2 DEBUG nova.objects.instance [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lazy-loading 'migration_context' on Instance uuid ab77dbf7-4458-4b16-a2e7-ed73be047838 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.361 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.361 2 DEBUG nova.network.neutron [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.369 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 85b2e7ea-f418-4eba-9a49-be2af576436f in datapath 10ee435a-b254-4b1c-8c18-f92d44f39cd7 unbound from our chassis
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.372 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 10ee435a-b254-4b1c-8c18-f92d44f39cd7
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.374 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.375 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Ensure instance console log exists: /var/lib/nova/instances/ab77dbf7-4458-4b16-a2e7-ed73be047838/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.375 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.375 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.376 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.386 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb6491f-4bd1-4505-b4af-170dabd24243]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.388 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap10ee435a-b1 in ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.390 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap10ee435a-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.390 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[85537d6b-0a16-454b-92a4-8816e1cfe855]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.391 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b0c804-ac28-4edd-ade6-8e84eb9ef043]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.401 2 INFO nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.408 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[ac0b4cb1-e630-4778-8820-ebf93c911d36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.420 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.426 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcebcc5-43d7-4044-a21b-39854691146a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.458 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c75ba344-2fc8-4830-aeab-3ae60a8239cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:31 compute-0 NetworkManager[44885]: <info>  [1760433091.4656] manager: (tap10ee435a-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/348)
Oct 14 09:11:31 compute-0 systemd-udevd[343179]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.464 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2f98f728-e7c6-4ff6-af69-aa94725a2388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:31 compute-0 ceph-mon[74249]: pgmap v1711: 305 pgs: 305 active+clean; 134 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.5 MiB/s wr, 61 op/s
Oct 14 09:11:31 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2308289792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:31 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2339625777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:31 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/397503356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.503 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.504 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.504 2 INFO nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Creating image(s)
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.515 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1dcebac3-3996-4ec4-8a8e-694daba6ea65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.518 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bb059e88-b1bc-4aeb-ab3a-c4b290ca34d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.531 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image ed2aee1e-f632-4d7f-ae03-f5d9c41e9104_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:31 compute-0 NetworkManager[44885]: <info>  [1760433091.5473] device (tap10ee435a-b0): carrier: link connected
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.551 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aefbc871-a55a-41ed-bf06-34f318965282]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.574 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image ed2aee1e-f632-4d7f-ae03-f5d9c41e9104_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.575 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f08ff476-d329-4224-a8ac-88f61db783dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10ee435a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:5f:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 245], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687030, 'reachable_time': 34820, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343637, 'error': None, 'target': 'ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.595 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image ed2aee1e-f632-4d7f-ae03-f5d9c41e9104_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.602 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9dead019-fe11-4083-8843-ac5acf4da8a1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe37:5f2e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687030, 'tstamp': 687030}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343641, 'error': None, 'target': 'ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.603 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.625 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f9895c56-4f1c-4ea9-8e75-81b71c8bc3a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10ee435a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:5f:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 245], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687030, 'reachable_time': 34820, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343660, 'error': None, 'target': 'ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.647 2 DEBUG nova.policy [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bca591d7e37e4881bc4de44ee172b2f4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d2f1337472a4407869e16f2271280ef', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.657 2 DEBUG nova.compute.manager [req-02a255ff-7dc0-42af-b2a3-74d31898f04e req-34128d41-3910-409f-a6ac-f53fe0686182 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Received event network-vif-plugged-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.657 2 DEBUG oslo_concurrency.lockutils [req-02a255ff-7dc0-42af-b2a3-74d31898f04e req-34128d41-3910-409f-a6ac-f53fe0686182 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.658 2 DEBUG oslo_concurrency.lockutils [req-02a255ff-7dc0-42af-b2a3-74d31898f04e req-34128d41-3910-409f-a6ac-f53fe0686182 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.658 2 DEBUG oslo_concurrency.lockutils [req-02a255ff-7dc0-42af-b2a3-74d31898f04e req-34128d41-3910-409f-a6ac-f53fe0686182 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.658 2 DEBUG nova.compute.manager [req-02a255ff-7dc0-42af-b2a3-74d31898f04e req-34128d41-3910-409f-a6ac-f53fe0686182 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Processing event network-vif-plugged-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.660 2 DEBUG nova.compute.manager [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.666 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433091.6661086, 70e3c250-cd38-4718-9a7f-0fbf7bf471fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.666 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] VM Resumed (Lifecycle Event)
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.668 2 DEBUG nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.672 2 INFO nova.virt.libvirt.driver [-] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Instance spawned successfully.
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.672 2 DEBUG nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.684 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[74c6e937-e64a-4eec-a9ce-5122df5e66e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.685 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.688 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.693 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.694 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.695 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.695 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.738 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image ed2aee1e-f632-4d7f-ae03-f5d9c41e9104_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.746 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ed2aee1e-f632-4d7f-ae03-f5d9c41e9104_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.753 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[17d6faa5-6658-4465-8cdc-1b21aec9899d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.754 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10ee435a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.754 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.755 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10ee435a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:31 compute-0 NetworkManager[44885]: <info>  [1760433091.7571] manager: (tap10ee435a-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Oct 14 09:11:31 compute-0 kernel: tap10ee435a-b0: entered promiscuous mode
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.760 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap10ee435a-b0, col_values=(('external_ids', {'iface-id': '6a2de077-dcb5-47e7-be2d-17a595dbafb6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:31 compute-0 ovn_controller[152662]: 2025-10-14T09:11:31Z|00829|binding|INFO|Releasing lport 6a2de077-dcb5-47e7-be2d-17a595dbafb6 from this chassis (sb_readonly=0)
Oct 14 09:11:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:11:31 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3512426425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.785 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/10ee435a-b254-4b1c-8c18-f92d44f39cd7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/10ee435a-b254-4b1c-8c18-f92d44f39cd7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.786 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2daf24f8-09ca-4372-b2f3-8439191948c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.787 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-10ee435a-b254-4b1c-8c18-f92d44f39cd7
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/10ee435a-b254-4b1c-8c18-f92d44f39cd7.pid.haproxy
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 10ee435a-b254-4b1c-8c18-f92d44f39cd7
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:11:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:31.789 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7', 'env', 'PROCESS_TAG=haproxy-10ee435a-b254-4b1c-8c18-f92d44f39cd7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/10ee435a-b254-4b1c-8c18-f92d44f39cd7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.801 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.804 2 DEBUG oslo_concurrency.processutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.805 2 DEBUG nova.virt.libvirt.vif [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:11:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-624290930',display_name='tempest-ServerRescueNegativeTestJSON-server-624290930',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-624290930',id=82,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f10ae705d9a34608a922683282b952b5',ramdisk_id='',reservation_id='r-njr7lxft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1031174086',owner_user_name='tempest-ServerRescueNegativeTestJSON-1031174086-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:11:27Z,user_data=None,user_id='aa1425f7fdfc4218bdabfe2458cd1c60',uuid=1141f79e-2e47-40f1-91b0-275a9fac765c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.806 2 DEBUG nova.network.os_vif_util [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converting VIF {"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.806 2 DEBUG nova.network.os_vif_util [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:08:ac,bridge_name='br-int',has_traffic_filtering=True,id=7ce99440-fa49-4876-bb38-fce631d40400,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ce99440-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.807 2 DEBUG nova.objects.instance [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.812 2 DEBUG nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.812 2 DEBUG nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.813 2 DEBUG nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.813 2 DEBUG nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.814 2 DEBUG nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.814 2 DEBUG nova.virt.libvirt.driver [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.824 2 DEBUG nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:11:31 compute-0 nova_compute[259627]:   <uuid>1141f79e-2e47-40f1-91b0-275a9fac765c</uuid>
Oct 14 09:11:31 compute-0 nova_compute[259627]:   <name>instance-00000052</name>
Oct 14 09:11:31 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:11:31 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:11:31 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-624290930</nova:name>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:11:30</nova:creationTime>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:11:31 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:11:31 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:11:31 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:11:31 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:11:31 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:11:31 compute-0 nova_compute[259627]:         <nova:user uuid="aa1425f7fdfc4218bdabfe2458cd1c60">tempest-ServerRescueNegativeTestJSON-1031174086-project-member</nova:user>
Oct 14 09:11:31 compute-0 nova_compute[259627]:         <nova:project uuid="f10ae705d9a34608a922683282b952b5">tempest-ServerRescueNegativeTestJSON-1031174086</nova:project>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:11:31 compute-0 nova_compute[259627]:         <nova:port uuid="7ce99440-fa49-4876-bb38-fce631d40400">
Oct 14 09:11:31 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:11:31 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:11:31 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <system>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <entry name="serial">1141f79e-2e47-40f1-91b0-275a9fac765c</entry>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <entry name="uuid">1141f79e-2e47-40f1-91b0-275a9fac765c</entry>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     </system>
Oct 14 09:11:31 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:11:31 compute-0 nova_compute[259627]:   <os>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:   </os>
Oct 14 09:11:31 compute-0 nova_compute[259627]:   <features>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:   </features>
Oct 14 09:11:31 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:11:31 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:11:31 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1141f79e-2e47-40f1-91b0-275a9fac765c_disk">
Oct 14 09:11:31 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       </source>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:11:31 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1141f79e-2e47-40f1-91b0-275a9fac765c_disk.config">
Oct 14 09:11:31 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       </source>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:11:31 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:2b:08:ac"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <target dev="tap7ce99440-fa"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/console.log" append="off"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <video>
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     </video>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:11:31 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:11:31 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:11:31 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:11:31 compute-0 nova_compute[259627]: </domain>
Oct 14 09:11:31 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.825 2 DEBUG nova.compute.manager [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Preparing to wait for external event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.825 2 DEBUG oslo_concurrency.lockutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.825 2 DEBUG oslo_concurrency.lockutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.826 2 DEBUG oslo_concurrency.lockutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.826 2 DEBUG nova.virt.libvirt.vif [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:11:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-624290930',display_name='tempest-ServerRescueNegativeTestJSON-server-624290930',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-624290930',id=82,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f10ae705d9a34608a922683282b952b5',ramdisk_id='',reservation_id='r-njr7lxft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1031174086',owner_user_name='tempest-ServerRescueNegativeTestJSON-1031174086-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:11:27Z,user_data=None,user_id='aa1425f7fdfc4218bdabfe2458cd1c60',uuid=1141f79e-2e47-40f1-91b0-275a9fac765c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.826 2 DEBUG nova.network.os_vif_util [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converting VIF {"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.827 2 DEBUG nova.network.os_vif_util [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:08:ac,bridge_name='br-int',has_traffic_filtering=True,id=7ce99440-fa49-4876-bb38-fce631d40400,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ce99440-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.827 2 DEBUG os_vif [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:08:ac,bridge_name='br-int',has_traffic_filtering=True,id=7ce99440-fa49-4876-bb38-fce631d40400,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ce99440-fa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.829 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.829 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.832 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ce99440-fa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.835 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7ce99440-fa, col_values=(('external_ids', {'iface-id': '7ce99440-fa49-4876-bb38-fce631d40400', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:08:ac', 'vm-uuid': '1141f79e-2e47-40f1-91b0-275a9fac765c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:31 compute-0 NetworkManager[44885]: <info>  [1760433091.8372] manager: (tap7ce99440-fa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/350)
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.837 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433091.8367493, d1c0470c-5f74-43c3-a206-07147fa01d5e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.838 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] VM Started (Lifecycle Event)
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.840 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:11:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 227 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 188 op/s
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.899 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.901 2 INFO os_vif [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:08:ac,bridge_name='br-int',has_traffic_filtering=True,id=7ce99440-fa49-4876-bb38-fce631d40400,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ce99440-fa')
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.904 2 INFO nova.compute.manager [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Took 7.83 seconds to spawn the instance on the hypervisor.
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.905 2 DEBUG nova.compute.manager [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.912 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433091.8368437, d1c0470c-5f74-43c3-a206-07147fa01d5e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.913 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] VM Paused (Lifecycle Event)
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.956 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:31 compute-0 nova_compute[259627]: 2025-10-14 09:11:31.981 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.016 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.020 2 DEBUG nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.021 2 DEBUG nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.021 2 DEBUG nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] No VIF found with MAC fa:16:3e:2b:08:ac, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.021 2 INFO nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Using config drive
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.040 2 DEBUG nova.storage.rbd_utils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.057 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ed2aee1e-f632-4d7f-ae03-f5d9c41e9104_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.083 2 INFO nova.compute.manager [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Took 9.02 seconds to build instance.
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.130 2 DEBUG oslo_concurrency.lockutils [None req-11b51a48-0ee6-48fd-b9c8-2d788a9549cb aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.140 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] resizing rbd image ed2aee1e-f632-4d7f-ae03-f5d9c41e9104_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.166 2 DEBUG nova.network.neutron [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Successfully created port: c9b0841b-401d-4f72-aa51-209173353afe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:11:32 compute-0 podman[343808]: 2025-10-14 09:11:32.218304492 +0000 UTC m=+0.063918825 container create 5d3442b6e39a5e06c2d73c1f47f648874019f854629b64f2476eb965e72d2719 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.255 2 DEBUG nova.objects.instance [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lazy-loading 'migration_context' on Instance uuid ed2aee1e-f632-4d7f-ae03-f5d9c41e9104 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:32 compute-0 systemd[1]: Started libpod-conmon-5d3442b6e39a5e06c2d73c1f47f648874019f854629b64f2476eb965e72d2719.scope.
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.273 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.274 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Ensure instance console log exists: /var/lib/nova/instances/ed2aee1e-f632-4d7f-ae03-f5d9c41e9104/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.274 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.275 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.275 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:32 compute-0 podman[343808]: 2025-10-14 09:11:32.188294783 +0000 UTC m=+0.033909126 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:11:32 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:11:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0b53b799c8ab532c2d51b1503f0766a825e5739e1660f083ea2ea939ffa4b34/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:11:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:11:32 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183628897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.295 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.296 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3738MB free_disk=59.9467658996582GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.297 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:32 compute-0 podman[343808]: 2025-10-14 09:11:32.302390973 +0000 UTC m=+0.148005326 container init 5d3442b6e39a5e06c2d73c1f47f648874019f854629b64f2476eb965e72d2719 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:11:32 compute-0 sudo[343856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:11:32 compute-0 sudo[343856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.310 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:32 compute-0 podman[343808]: 2025-10-14 09:11:32.31160489 +0000 UTC m=+0.157219223 container start 5d3442b6e39a5e06c2d73c1f47f648874019f854629b64f2476eb965e72d2719 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:11:32 compute-0 sudo[343856]: pam_unix(sudo:session): session closed for user root
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.320 2 DEBUG nova.compute.provider_tree [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.337 2 DEBUG nova.scheduler.client.report [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:11:32 compute-0 neutron-haproxy-ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7[343873]: [NOTICE]   (343889) : New worker (343912) forked
Oct 14 09:11:32 compute-0 neutron-haproxy-ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7[343873]: [NOTICE]   (343889) : Loading success.
Oct 14 09:11:32 compute-0 sudo[343890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.366 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.367 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:11:32 compute-0 sudo[343890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.369 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:32 compute-0 sudo[343890]: pam_unix(sudo:session): session closed for user root
Oct 14 09:11:32 compute-0 sudo[343925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:11:32 compute-0 sudo[343925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:11:32 compute-0 sudo[343925]: pam_unix(sudo:session): session closed for user root
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.435 2 DEBUG nova.network.neutron [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Successfully created port: 8fb091af-e492-4374-b3c2-7ab4157389a6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.437 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.437 2 DEBUG nova.network.neutron [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.461 2 INFO nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:11:32 compute-0 sudo[343950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:11:32 compute-0 sudo[343950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.478 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 290980d2-08b4-4029-a1c3-becd3457a410 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.479 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 70e3c250-cd38-4718-9a7f-0fbf7bf471fe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.479 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance d1c0470c-5f74-43c3-a206-07147fa01d5e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.479 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 1141f79e-2e47-40f1-91b0-275a9fac765c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.479 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance ab77dbf7-4458-4b16-a2e7-ed73be047838 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.479 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance ed2aee1e-f632-4d7f-ae03-f5d9c41e9104 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.479 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 62fe725d-b24a-477a-a275-06d2cd960aaf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.479 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.480 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1408MB phys_disk=59GB used_disk=7GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.489 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:11:32 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3512426425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:32 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2183628897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.596 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.597 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.597 2 INFO nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Creating image(s)
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.617 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image 62fe725d-b24a-477a-a275-06d2cd960aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.649 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image 62fe725d-b24a-477a-a275-06d2cd960aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.672 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image 62fe725d-b24a-477a-a275-06d2cd960aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.681 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:11:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.743 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:11:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:11:32
Oct 14 09:11:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:11:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:11:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups']
Oct 14 09:11:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:11:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:11:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:11:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:11:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.783 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.785 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.786 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.787 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.823 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image 62fe725d-b24a-477a-a275-06d2cd960aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.828 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 62fe725d-b24a-477a-a275-06d2cd960aaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.890 2 INFO nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Creating config drive at /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/disk.config
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.900 2 DEBUG oslo_concurrency.processutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxf985708 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:32 compute-0 nova_compute[259627]: 2025-10-14 09:11:32.943 2 DEBUG nova.policy [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bca591d7e37e4881bc4de44ee172b2f4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d2f1337472a4407869e16f2271280ef', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.044 2 DEBUG oslo_concurrency.processutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxf985708" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:11:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:11:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:11:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:11:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:11:33 compute-0 sudo[343950]: pam_unix(sudo:session): session closed for user root
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.094 2 DEBUG nova.storage.rbd_utils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:11:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.107 2 DEBUG oslo_concurrency.processutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/disk.config 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:11:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:11:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:11:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:11:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:11:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:11:33 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:11:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:11:33 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:11:33 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 0d3e67c3-3157-4b87-9db8-d773b5e0c01f does not exist
Oct 14 09:11:33 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 89f15118-1f4e-461d-9002-986f35d982ea does not exist
Oct 14 09:11:33 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev c4a4015d-a3c4-4efd-80da-ca5ec8d8a1f5 does not exist
Oct 14 09:11:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:11:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:11:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:11:33 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:11:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:11:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.160 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 62fe725d-b24a-477a-a275-06d2cd960aaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:11:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1898762317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.209 2 DEBUG nova.network.neutron [req-18d19ff5-6031-4a6a-abd1-4a07b0401ce1 req-8820d0a8-394c-4c4f-afa4-e702cae91e55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Updated VIF entry in instance network info cache for port 7ce99440-fa49-4876-bb38-fce631d40400. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.211 2 DEBUG nova.network.neutron [req-18d19ff5-6031-4a6a-abd1-4a07b0401ce1 req-8820d0a8-394c-4c4f-afa4-e702cae91e55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Updating instance_info_cache with network_info: [{"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:33 compute-0 sudo[344141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:11:33 compute-0 sudo[344141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:11:33 compute-0 sudo[344141]: pam_unix(sudo:session): session closed for user root
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.269 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.270 2 DEBUG oslo_concurrency.lockutils [req-18d19ff5-6031-4a6a-abd1-4a07b0401ce1 req-8820d0a8-394c-4c4f-afa4-e702cae91e55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1141f79e-2e47-40f1-91b0-275a9fac765c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.282 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] resizing rbd image 62fe725d-b24a-477a-a275-06d2cd960aaf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:11:33 compute-0 sudo[344218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:11:33 compute-0 sudo[344218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:11:33 compute-0 sudo[344218]: pam_unix(sudo:session): session closed for user root
Oct 14 09:11:33 compute-0 NetworkManager[44885]: <info>  [1760433093.3296] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Oct 14 09:11:33 compute-0 NetworkManager[44885]: <info>  [1760433093.3306] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.376 2 DEBUG oslo_concurrency.processutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/disk.config 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.376 2 INFO nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Deleting local config drive /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/disk.config because it was imported into RBD.
Oct 14 09:11:33 compute-0 sudo[344261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.389 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:11:33 compute-0 sudo[344261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:11:33 compute-0 sudo[344261]: pam_unix(sudo:session): session closed for user root
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.410 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:11:33 compute-0 NetworkManager[44885]: <info>  [1760433093.4492] manager: (tap7ce99440-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/353)
Oct 14 09:11:33 compute-0 kernel: tap7ce99440-fa: entered promiscuous mode
Oct 14 09:11:33 compute-0 ovn_controller[152662]: 2025-10-14T09:11:33Z|00830|binding|INFO|Releasing lport 6a2de077-dcb5-47e7-be2d-17a595dbafb6 from this chassis (sb_readonly=0)
Oct 14 09:11:33 compute-0 ovn_controller[152662]: 2025-10-14T09:11:33Z|00831|binding|INFO|Releasing lport 970f2645-7ec3-4b7f-8527-871800c728d8 from this chassis (sb_readonly=0)
Oct 14 09:11:33 compute-0 sudo[344293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:11:33 compute-0 ovn_controller[152662]: 2025-10-14T09:11:33Z|00832|binding|INFO|Claiming lport 7ce99440-fa49-4876-bb38-fce631d40400 for this chassis.
Oct 14 09:11:33 compute-0 ovn_controller[152662]: 2025-10-14T09:11:33Z|00833|binding|INFO|7ce99440-fa49-4876-bb38-fce631d40400: Claiming fa:16:3e:2b:08:ac 10.100.0.4
Oct 14 09:11:33 compute-0 ovn_controller[152662]: 2025-10-14T09:11:33Z|00834|binding|INFO|Releasing lport d2ea0b85-033d-47f5-af89-2df076f2ce40 from this chassis (sb_readonly=0)
Oct 14 09:11:33 compute-0 sudo[344293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:11:33 compute-0 systemd-machined[214636]: New machine qemu-102-instance-00000052.
Oct 14 09:11:33 compute-0 ceph-mon[74249]: pgmap v1712: 305 pgs: 305 active+clean; 227 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 188 op/s
Oct 14 09:11:33 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:11:33 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:11:33 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:11:33 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:11:33 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:11:33 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:11:33 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1898762317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:33 compute-0 systemd[1]: Started Virtual Machine qemu-102-instance-00000052.
Oct 14 09:11:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:33.509 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:08:ac 10.100.0.4'], port_security=['fa:16:3e:2b:08:ac 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1141f79e-2e47-40f1-91b0-275a9fac765c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f10ae705d9a34608a922683282b952b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '908d59ff-8d75-4488-a358-b4621923e921', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a29a9f-f981-430b-8a8f-ff453e5ae1a6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7ce99440-fa49-4876-bb38-fce631d40400) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:11:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:33.510 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7ce99440-fa49-4876-bb38-fce631d40400 in datapath 15e6e95c-6cd2-4631-98f6-9ed276458c39 bound to our chassis
Oct 14 09:11:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:33.512 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15e6e95c-6cd2-4631-98f6-9ed276458c39
Oct 14 09:11:33 compute-0 ovn_controller[152662]: 2025-10-14T09:11:33Z|00835|binding|INFO|Setting lport 7ce99440-fa49-4876-bb38-fce631d40400 ovn-installed in OVS
Oct 14 09:11:33 compute-0 ovn_controller[152662]: 2025-10-14T09:11:33Z|00836|binding|INFO|Setting lport 7ce99440-fa49-4876-bb38-fce631d40400 up in Southbound
Oct 14 09:11:33 compute-0 systemd-udevd[344346]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:11:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:33.537 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d0dbb4e9-1315-40d4-837d-d31509cdbe0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:33 compute-0 NetworkManager[44885]: <info>  [1760433093.5503] device (tap7ce99440-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:11:33 compute-0 NetworkManager[44885]: <info>  [1760433093.5528] device (tap7ce99440-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.560 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.561 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:33.571 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6e410f59-b923-4701-baab-fc760f8d6c66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:33.574 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[703fd801-8b0b-4f8b-834a-6febfe5540e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.579 2 DEBUG nova.objects.instance [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lazy-loading 'migration_context' on Instance uuid 62fe725d-b24a-477a-a275-06d2cd960aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:33.600 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ebeaf463-0f91-4a3c-9824-ff0e5404a2a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.604 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.605 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Ensure instance console log exists: /var/lib/nova/instances/62fe725d-b24a-477a-a275-06d2cd960aaf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.605 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.606 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.606 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:33.617 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6997cd-214c-43a4-ae98-ef50dc5c0238]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15e6e95c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:6b:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686914, 'reachable_time': 20198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344363, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:33.633 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ee457211-64ce-48b3-bb47-f48c9d866145]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap15e6e95c-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686938, 'tstamp': 686938}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344369, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap15e6e95c-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686941, 'tstamp': 686941}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344369, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:33.634 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15e6e95c-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:33.637 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15e6e95c-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:33.638 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:33.638 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15e6e95c-60, col_values=(('external_ids', {'iface-id': '970f2645-7ec3-4b7f-8527-871800c728d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:33.638 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.761 2 DEBUG nova.compute.manager [req-62e49842-b1c9-4b3a-9f14-8a87293ee587 req-4ba0d972-06e3-40f9-8f88-d18c32ed6f8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Received event network-vif-plugged-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.761 2 DEBUG oslo_concurrency.lockutils [req-62e49842-b1c9-4b3a-9f14-8a87293ee587 req-4ba0d972-06e3-40f9-8f88-d18c32ed6f8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.761 2 DEBUG oslo_concurrency.lockutils [req-62e49842-b1c9-4b3a-9f14-8a87293ee587 req-4ba0d972-06e3-40f9-8f88-d18c32ed6f8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.762 2 DEBUG oslo_concurrency.lockutils [req-62e49842-b1c9-4b3a-9f14-8a87293ee587 req-4ba0d972-06e3-40f9-8f88-d18c32ed6f8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.762 2 DEBUG nova.compute.manager [req-62e49842-b1c9-4b3a-9f14-8a87293ee587 req-4ba0d972-06e3-40f9-8f88-d18c32ed6f8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] No waiting events found dispatching network-vif-plugged-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.762 2 WARNING nova.compute.manager [req-62e49842-b1c9-4b3a-9f14-8a87293ee587 req-4ba0d972-06e3-40f9-8f88-d18c32ed6f8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Received unexpected event network-vif-plugged-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 for instance with vm_state active and task_state None.
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.762 2 DEBUG nova.compute.manager [req-62e49842-b1c9-4b3a-9f14-8a87293ee587 req-4ba0d972-06e3-40f9-8f88-d18c32ed6f8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Received event network-vif-plugged-85b2e7ea-f418-4eba-9a49-be2af576436f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.763 2 DEBUG oslo_concurrency.lockutils [req-62e49842-b1c9-4b3a-9f14-8a87293ee587 req-4ba0d972-06e3-40f9-8f88-d18c32ed6f8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d1c0470c-5f74-43c3-a206-07147fa01d5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.763 2 DEBUG oslo_concurrency.lockutils [req-62e49842-b1c9-4b3a-9f14-8a87293ee587 req-4ba0d972-06e3-40f9-8f88-d18c32ed6f8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d1c0470c-5f74-43c3-a206-07147fa01d5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.763 2 DEBUG oslo_concurrency.lockutils [req-62e49842-b1c9-4b3a-9f14-8a87293ee587 req-4ba0d972-06e3-40f9-8f88-d18c32ed6f8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d1c0470c-5f74-43c3-a206-07147fa01d5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.763 2 DEBUG nova.compute.manager [req-62e49842-b1c9-4b3a-9f14-8a87293ee587 req-4ba0d972-06e3-40f9-8f88-d18c32ed6f8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Processing event network-vif-plugged-85b2e7ea-f418-4eba-9a49-be2af576436f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.763 2 DEBUG nova.compute.manager [req-62e49842-b1c9-4b3a-9f14-8a87293ee587 req-4ba0d972-06e3-40f9-8f88-d18c32ed6f8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Received event network-vif-plugged-85b2e7ea-f418-4eba-9a49-be2af576436f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.764 2 DEBUG oslo_concurrency.lockutils [req-62e49842-b1c9-4b3a-9f14-8a87293ee587 req-4ba0d972-06e3-40f9-8f88-d18c32ed6f8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d1c0470c-5f74-43c3-a206-07147fa01d5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.764 2 DEBUG oslo_concurrency.lockutils [req-62e49842-b1c9-4b3a-9f14-8a87293ee587 req-4ba0d972-06e3-40f9-8f88-d18c32ed6f8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d1c0470c-5f74-43c3-a206-07147fa01d5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.764 2 DEBUG oslo_concurrency.lockutils [req-62e49842-b1c9-4b3a-9f14-8a87293ee587 req-4ba0d972-06e3-40f9-8f88-d18c32ed6f8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d1c0470c-5f74-43c3-a206-07147fa01d5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.764 2 DEBUG nova.compute.manager [req-62e49842-b1c9-4b3a-9f14-8a87293ee587 req-4ba0d972-06e3-40f9-8f88-d18c32ed6f8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] No waiting events found dispatching network-vif-plugged-85b2e7ea-f418-4eba-9a49-be2af576436f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.765 2 WARNING nova.compute.manager [req-62e49842-b1c9-4b3a-9f14-8a87293ee587 req-4ba0d972-06e3-40f9-8f88-d18c32ed6f8c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Received unexpected event network-vif-plugged-85b2e7ea-f418-4eba-9a49-be2af576436f for instance with vm_state building and task_state spawning.
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.765 2 DEBUG nova.compute.manager [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.780 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433093.7732406, d1c0470c-5f74-43c3-a206-07147fa01d5e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.780 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] VM Resumed (Lifecycle Event)
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.781 2 DEBUG nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.797 2 INFO nova.virt.libvirt.driver [-] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Instance spawned successfully.
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.797 2 DEBUG nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.812 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.823 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.828 2 DEBUG nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.829 2 DEBUG nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.829 2 DEBUG nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.829 2 DEBUG nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.830 2 DEBUG nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.830 2 DEBUG nova.virt.libvirt.driver [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.852 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.859 2 DEBUG nova.network.neutron [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Successfully created port: 4006119e-fa08-4095-bba7-d338c82ac066 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:11:33 compute-0 podman[344401]: 2025-10-14 09:11:33.877444593 +0000 UTC m=+0.058881281 container create 2e755ff4157429b12b92ed9d6e42c2666aaca7b012276ae9fbb6622e69c6e3bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cerf, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.889 2 INFO nova.compute.manager [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Took 8.61 seconds to spawn the instance on the hypervisor.
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.889 2 DEBUG nova.compute.manager [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 227 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 161 op/s
Oct 14 09:11:33 compute-0 systemd[1]: Started libpod-conmon-2e755ff4157429b12b92ed9d6e42c2666aaca7b012276ae9fbb6622e69c6e3bf.scope.
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.936 2 DEBUG nova.network.neutron [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Successfully updated port: c9b0841b-401d-4f72-aa51-209173353afe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:11:33 compute-0 podman[344401]: 2025-10-14 09:11:33.852179941 +0000 UTC m=+0.033616649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:11:33 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.957 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "refresh_cache-ab77dbf7-4458-4b16-a2e7-ed73be047838" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.957 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquired lock "refresh_cache-ab77dbf7-4458-4b16-a2e7-ed73be047838" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.957 2 DEBUG nova.network.neutron [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.961 2 INFO nova.compute.manager [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Took 10.08 seconds to build instance.
Oct 14 09:11:33 compute-0 podman[344401]: 2025-10-14 09:11:33.965349588 +0000 UTC m=+0.146786346 container init 2e755ff4157429b12b92ed9d6e42c2666aaca7b012276ae9fbb6622e69c6e3bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:11:33 compute-0 podman[344401]: 2025-10-14 09:11:33.977707522 +0000 UTC m=+0.159144190 container start 2e755ff4157429b12b92ed9d6e42c2666aaca7b012276ae9fbb6622e69c6e3bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cerf, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 09:11:33 compute-0 podman[344401]: 2025-10-14 09:11:33.98128398 +0000 UTC m=+0.162720678 container attach 2e755ff4157429b12b92ed9d6e42c2666aaca7b012276ae9fbb6622e69c6e3bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cerf, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 09:11:33 compute-0 systemd[1]: libpod-2e755ff4157429b12b92ed9d6e42c2666aaca7b012276ae9fbb6622e69c6e3bf.scope: Deactivated successfully.
Oct 14 09:11:33 compute-0 fervent_cerf[344439]: 167 167
Oct 14 09:11:33 compute-0 conmon[344439]: conmon 2e755ff4157429b12b92 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e755ff4157429b12b92ed9d6e42c2666aaca7b012276ae9fbb6622e69c6e3bf.scope/container/memory.events
Oct 14 09:11:33 compute-0 podman[344401]: 2025-10-14 09:11:33.984767656 +0000 UTC m=+0.166204324 container died 2e755ff4157429b12b92ed9d6e42c2666aaca7b012276ae9fbb6622e69c6e3bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cerf, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct 14 09:11:33 compute-0 nova_compute[259627]: 2025-10-14 09:11:33.991 2 DEBUG oslo_concurrency.lockutils [None req-59ff065c-de83-472c-a720-2412d5d45aee f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lock "d1c0470c-5f74-43c3-a206-07147fa01d5e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a328003e34dd9b8dd8a2d353504c2b9e9c56817ed93de460c20c40df2082b344-merged.mount: Deactivated successfully.
Oct 14 09:11:34 compute-0 podman[344401]: 2025-10-14 09:11:34.032584734 +0000 UTC m=+0.214021402 container remove 2e755ff4157429b12b92ed9d6e42c2666aaca7b012276ae9fbb6622e69c6e3bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 09:11:34 compute-0 systemd[1]: libpod-conmon-2e755ff4157429b12b92ed9d6e42c2666aaca7b012276ae9fbb6622e69c6e3bf.scope: Deactivated successfully.
Oct 14 09:11:34 compute-0 podman[344479]: 2025-10-14 09:11:34.238679059 +0000 UTC m=+0.049733646 container create 8cbc584856cc29a574c0029000e03a763c51bf297731e42e7b81a1d3de114fdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 09:11:34 compute-0 nova_compute[259627]: 2025-10-14 09:11:34.282 2 DEBUG nova.network.neutron [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:11:34 compute-0 nova_compute[259627]: 2025-10-14 09:11:34.285 2 DEBUG nova.network.neutron [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Successfully updated port: 8fb091af-e492-4374-b3c2-7ab4157389a6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:11:34 compute-0 systemd[1]: Started libpod-conmon-8cbc584856cc29a574c0029000e03a763c51bf297731e42e7b81a1d3de114fdf.scope.
Oct 14 09:11:34 compute-0 nova_compute[259627]: 2025-10-14 09:11:34.312 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "refresh_cache-ed2aee1e-f632-4d7f-ae03-f5d9c41e9104" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:11:34 compute-0 nova_compute[259627]: 2025-10-14 09:11:34.313 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquired lock "refresh_cache-ed2aee1e-f632-4d7f-ae03-f5d9c41e9104" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:11:34 compute-0 nova_compute[259627]: 2025-10-14 09:11:34.313 2 DEBUG nova.network.neutron [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:11:34 compute-0 podman[344479]: 2025-10-14 09:11:34.221894786 +0000 UTC m=+0.032949403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:11:34 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4df8e3d0c223607856f90c2695e345afeeadbf3ec9f8fd5c00008b8cfb355c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4df8e3d0c223607856f90c2695e345afeeadbf3ec9f8fd5c00008b8cfb355c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4df8e3d0c223607856f90c2695e345afeeadbf3ec9f8fd5c00008b8cfb355c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4df8e3d0c223607856f90c2695e345afeeadbf3ec9f8fd5c00008b8cfb355c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4df8e3d0c223607856f90c2695e345afeeadbf3ec9f8fd5c00008b8cfb355c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:11:34 compute-0 podman[344479]: 2025-10-14 09:11:34.361060753 +0000 UTC m=+0.172115340 container init 8cbc584856cc29a574c0029000e03a763c51bf297731e42e7b81a1d3de114fdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 14 09:11:34 compute-0 podman[344479]: 2025-10-14 09:11:34.376032662 +0000 UTC m=+0.187087249 container start 8cbc584856cc29a574c0029000e03a763c51bf297731e42e7b81a1d3de114fdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moser, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:11:34 compute-0 podman[344479]: 2025-10-14 09:11:34.379150669 +0000 UTC m=+0.190205256 container attach 8cbc584856cc29a574c0029000e03a763c51bf297731e42e7b81a1d3de114fdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:11:34 compute-0 nova_compute[259627]: 2025-10-14 09:11:34.480 2 DEBUG nova.compute.manager [req-633978d9-3711-4df3-9902-b7f9c3f08912 req-7c187418-9c56-4b09-b09d-afbc1079d602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Received event network-changed-8fb091af-e492-4374-b3c2-7ab4157389a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:34 compute-0 nova_compute[259627]: 2025-10-14 09:11:34.481 2 DEBUG nova.compute.manager [req-633978d9-3711-4df3-9902-b7f9c3f08912 req-7c187418-9c56-4b09-b09d-afbc1079d602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Refreshing instance network info cache due to event network-changed-8fb091af-e492-4374-b3c2-7ab4157389a6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:11:34 compute-0 nova_compute[259627]: 2025-10-14 09:11:34.481 2 DEBUG oslo_concurrency.lockutils [req-633978d9-3711-4df3-9902-b7f9c3f08912 req-7c187418-9c56-4b09-b09d-afbc1079d602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-ed2aee1e-f632-4d7f-ae03-f5d9c41e9104" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:11:34 compute-0 nova_compute[259627]: 2025-10-14 09:11:34.573 2 DEBUG nova.network.neutron [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:11:34 compute-0 nova_compute[259627]: 2025-10-14 09:11:34.579 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433094.579645, 1141f79e-2e47-40f1-91b0-275a9fac765c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:34 compute-0 nova_compute[259627]: 2025-10-14 09:11:34.580 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] VM Started (Lifecycle Event)
Oct 14 09:11:34 compute-0 nova_compute[259627]: 2025-10-14 09:11:34.608 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:34 compute-0 nova_compute[259627]: 2025-10-14 09:11:34.611 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433094.5797484, 1141f79e-2e47-40f1-91b0-275a9fac765c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:34 compute-0 nova_compute[259627]: 2025-10-14 09:11:34.612 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] VM Paused (Lifecycle Event)
Oct 14 09:11:34 compute-0 nova_compute[259627]: 2025-10-14 09:11:34.633 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:34 compute-0 nova_compute[259627]: 2025-10-14 09:11:34.636 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:11:34 compute-0 nova_compute[259627]: 2025-10-14 09:11:34.681 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.302 2 DEBUG nova.network.neutron [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Updating instance_info_cache with network_info: [{"id": "c9b0841b-401d-4f72-aa51-209173353afe", "address": "fa:16:3e:6d:e1:a1", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b0841b-40", "ovs_interfaceid": "c9b0841b-401d-4f72-aa51-209173353afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.328 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Releasing lock "refresh_cache-ab77dbf7-4458-4b16-a2e7-ed73be047838" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.329 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Instance network_info: |[{"id": "c9b0841b-401d-4f72-aa51-209173353afe", "address": "fa:16:3e:6d:e1:a1", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b0841b-40", "ovs_interfaceid": "c9b0841b-401d-4f72-aa51-209173353afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.331 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Start _get_guest_xml network_info=[{"id": "c9b0841b-401d-4f72-aa51-209173353afe", "address": "fa:16:3e:6d:e1:a1", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b0841b-40", "ovs_interfaceid": "c9b0841b-401d-4f72-aa51-209173353afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.336 2 WARNING nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.342 2 DEBUG nova.virt.libvirt.host [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.343 2 DEBUG nova.virt.libvirt.host [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.346 2 DEBUG nova.virt.libvirt.host [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.347 2 DEBUG nova.virt.libvirt.host [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.348 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.348 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.349 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.349 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.349 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.350 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.350 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.350 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.351 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.351 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.351 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.352 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.354 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.389 2 DEBUG nova.network.neutron [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Successfully updated port: 4006119e-fa08-4095-bba7-d338c82ac066 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.410 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "refresh_cache-62fe725d-b24a-477a-a275-06d2cd960aaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.411 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquired lock "refresh_cache-62fe725d-b24a-477a-a275-06d2cd960aaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.411 2 DEBUG nova.network.neutron [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.415 2 DEBUG oslo_concurrency.lockutils [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Acquiring lock "d1c0470c-5f74-43c3-a206-07147fa01d5e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.415 2 DEBUG oslo_concurrency.lockutils [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lock "d1c0470c-5f74-43c3-a206-07147fa01d5e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.416 2 DEBUG oslo_concurrency.lockutils [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Acquiring lock "d1c0470c-5f74-43c3-a206-07147fa01d5e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.416 2 DEBUG oslo_concurrency.lockutils [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lock "d1c0470c-5f74-43c3-a206-07147fa01d5e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.416 2 DEBUG oslo_concurrency.lockutils [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lock "d1c0470c-5f74-43c3-a206-07147fa01d5e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.418 2 INFO nova.compute.manager [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Terminating instance
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.418 2 DEBUG nova.compute.manager [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:11:35 compute-0 kernel: tap85b2e7ea-f4 (unregistering): left promiscuous mode
Oct 14 09:11:35 compute-0 NetworkManager[44885]: <info>  [1760433095.4710] device (tap85b2e7ea-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:11:35 compute-0 ceph-mon[74249]: pgmap v1713: 305 pgs: 305 active+clean; 227 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 161 op/s
Oct 14 09:11:35 compute-0 ovn_controller[152662]: 2025-10-14T09:11:35Z|00837|binding|INFO|Releasing lport 85b2e7ea-f418-4eba-9a49-be2af576436f from this chassis (sb_readonly=0)
Oct 14 09:11:35 compute-0 ovn_controller[152662]: 2025-10-14T09:11:35Z|00838|binding|INFO|Setting lport 85b2e7ea-f418-4eba-9a49-be2af576436f down in Southbound
Oct 14 09:11:35 compute-0 ovn_controller[152662]: 2025-10-14T09:11:35Z|00839|binding|INFO|Removing iface tap85b2e7ea-f4 ovn-installed in OVS
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.520 2 DEBUG nova.network.neutron [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Updating instance_info_cache with network_info: [{"id": "8fb091af-e492-4374-b3c2-7ab4157389a6", "address": "fa:16:3e:fa:b2:b6", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fb091af-e4", "ovs_interfaceid": "8fb091af-e492-4374-b3c2-7ab4157389a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:35.523 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:6f:91 10.100.0.12'], port_security=['fa:16:3e:64:6f:91 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd1c0470c-5f74-43c3-a206-07147fa01d5e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10ee435a-b254-4b1c-8c18-f92d44f39cd7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e735ef38811e4376af72e0a380aba1bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dc264b84-ffad-463b-86d4-19fe6d17f564', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cba23c9-07ca-4841-bf24-574c4d75ed7e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=85b2e7ea-f418-4eba-9a49-be2af576436f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:11:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:35.524 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 85b2e7ea-f418-4eba-9a49-be2af576436f in datapath 10ee435a-b254-4b1c-8c18-f92d44f39cd7 unbound from our chassis
Oct 14 09:11:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:35.526 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 10ee435a-b254-4b1c-8c18-f92d44f39cd7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:11:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:35.527 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[05b5d15e-1211-479e-90c9-1b45776025e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:35.529 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7 namespace which is not needed anymore
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:35 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000051.scope: Deactivated successfully.
Oct 14 09:11:35 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000051.scope: Consumed 2.785s CPU time.
Oct 14 09:11:35 compute-0 systemd-machined[214636]: Machine qemu-101-instance-00000051 terminated.
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.546 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.547 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.574 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Releasing lock "refresh_cache-ed2aee1e-f632-4d7f-ae03-f5d9c41e9104" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.575 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Instance network_info: |[{"id": "8fb091af-e492-4374-b3c2-7ab4157389a6", "address": "fa:16:3e:fa:b2:b6", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fb091af-e4", "ovs_interfaceid": "8fb091af-e492-4374-b3c2-7ab4157389a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.575 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.576 2 DEBUG oslo_concurrency.lockutils [req-633978d9-3711-4df3-9902-b7f9c3f08912 req-7c187418-9c56-4b09-b09d-afbc1079d602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-ed2aee1e-f632-4d7f-ae03-f5d9c41e9104" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.576 2 DEBUG nova.network.neutron [req-633978d9-3711-4df3-9902-b7f9c3f08912 req-7c187418-9c56-4b09-b09d-afbc1079d602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Refreshing network info cache for port 8fb091af-e492-4374-b3c2-7ab4157389a6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.584 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Start _get_guest_xml network_info=[{"id": "8fb091af-e492-4374-b3c2-7ab4157389a6", "address": "fa:16:3e:fa:b2:b6", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fb091af-e4", "ovs_interfaceid": "8fb091af-e492-4374-b3c2-7ab4157389a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.584 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.586 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.586 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.590 2 DEBUG nova.network.neutron [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.595 2 WARNING nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.606 2 DEBUG nova.virt.libvirt.host [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.606 2 DEBUG nova.virt.libvirt.host [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.610 2 DEBUG nova.virt.libvirt.host [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.610 2 DEBUG nova.virt.libvirt.host [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.611 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.611 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.612 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.612 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.612 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.612 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.613 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.613 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.613 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.613 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.614 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.614 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.617 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:35 compute-0 optimistic_moser[344496]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:11:35 compute-0 optimistic_moser[344496]: --> relative data size: 1.0
Oct 14 09:11:35 compute-0 optimistic_moser[344496]: --> All data devices are unavailable
Oct 14 09:11:35 compute-0 systemd[1]: libpod-8cbc584856cc29a574c0029000e03a763c51bf297731e42e7b81a1d3de114fdf.scope: Deactivated successfully.
Oct 14 09:11:35 compute-0 systemd[1]: libpod-8cbc584856cc29a574c0029000e03a763c51bf297731e42e7b81a1d3de114fdf.scope: Consumed 1.110s CPU time.
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.689 2 INFO nova.virt.libvirt.driver [-] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Instance destroyed successfully.
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.690 2 DEBUG nova.objects.instance [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lazy-loading 'resources' on Instance uuid d1c0470c-5f74-43c3-a206-07147fa01d5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.704 2 DEBUG nova.virt.libvirt.vif [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:11:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-846214533',display_name='tempest-ServerGroupTestJSON-server-846214533',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-846214533',id=81,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:11:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e735ef38811e4376af72e0a380aba1bb',ramdisk_id='',reservation_id='r-j643qn5d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-1602782331',owner_user_name='tempest-ServerGroupTestJSON-1602782331-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:11:33Z,user_data=None,user_id='f9bf2c63a22b4957a35bb3b62129ab7c',uuid=d1c0470c-5f74-43c3-a206-07147fa01d5e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "85b2e7ea-f418-4eba-9a49-be2af576436f", "address": "fa:16:3e:64:6f:91", "network": {"id": "10ee435a-b254-4b1c-8c18-f92d44f39cd7", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-141924003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e735ef38811e4376af72e0a380aba1bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b2e7ea-f4", "ovs_interfaceid": "85b2e7ea-f418-4eba-9a49-be2af576436f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.705 2 DEBUG nova.network.os_vif_util [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Converting VIF {"id": "85b2e7ea-f418-4eba-9a49-be2af576436f", "address": "fa:16:3e:64:6f:91", "network": {"id": "10ee435a-b254-4b1c-8c18-f92d44f39cd7", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-141924003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e735ef38811e4376af72e0a380aba1bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b2e7ea-f4", "ovs_interfaceid": "85b2e7ea-f418-4eba-9a49-be2af576436f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.706 2 DEBUG nova.network.os_vif_util [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:6f:91,bridge_name='br-int',has_traffic_filtering=True,id=85b2e7ea-f418-4eba-9a49-be2af576436f,network=Network(10ee435a-b254-4b1c-8c18-f92d44f39cd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b2e7ea-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.707 2 DEBUG os_vif [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:6f:91,bridge_name='br-int',has_traffic_filtering=True,id=85b2e7ea-f418-4eba-9a49-be2af576436f,network=Network(10ee435a-b254-4b1c-8c18-f92d44f39cd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b2e7ea-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85b2e7ea-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:35 compute-0 neutron-haproxy-ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7[343873]: [NOTICE]   (343889) : haproxy version is 2.8.14-c23fe91
Oct 14 09:11:35 compute-0 neutron-haproxy-ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7[343873]: [NOTICE]   (343889) : path to executable is /usr/sbin/haproxy
Oct 14 09:11:35 compute-0 neutron-haproxy-ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7[343873]: [WARNING]  (343889) : Exiting Master process...
Oct 14 09:11:35 compute-0 neutron-haproxy-ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7[343873]: [ALERT]    (343889) : Current worker (343912) exited with code 143 (Terminated)
Oct 14 09:11:35 compute-0 neutron-haproxy-ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7[343873]: [WARNING]  (343889) : All workers exited. Exiting... (0)
Oct 14 09:11:35 compute-0 systemd[1]: libpod-5d3442b6e39a5e06c2d73c1f47f648874019f854629b64f2476eb965e72d2719.scope: Deactivated successfully.
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.718 2 INFO os_vif [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:6f:91,bridge_name='br-int',has_traffic_filtering=True,id=85b2e7ea-f418-4eba-9a49-be2af576436f,network=Network(10ee435a-b254-4b1c-8c18-f92d44f39cd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b2e7ea-f4')
Oct 14 09:11:35 compute-0 podman[344584]: 2025-10-14 09:11:35.720135493 +0000 UTC m=+0.032749678 container died 8cbc584856cc29a574c0029000e03a763c51bf297731e42e7b81a1d3de114fdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 09:11:35 compute-0 podman[344566]: 2025-10-14 09:11:35.728092259 +0000 UTC m=+0.073355808 container died 5d3442b6e39a5e06c2d73c1f47f648874019f854629b64f2476eb965e72d2719 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:11:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4df8e3d0c223607856f90c2695e345afeeadbf3ec9f8fd5c00008b8cfb355c4-merged.mount: Deactivated successfully.
Oct 14 09:11:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d3442b6e39a5e06c2d73c1f47f648874019f854629b64f2476eb965e72d2719-userdata-shm.mount: Deactivated successfully.
Oct 14 09:11:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0b53b799c8ab532c2d51b1503f0766a825e5739e1660f083ea2ea939ffa4b34-merged.mount: Deactivated successfully.
Oct 14 09:11:35 compute-0 podman[344566]: 2025-10-14 09:11:35.799721003 +0000 UTC m=+0.144984552 container cleanup 5d3442b6e39a5e06c2d73c1f47f648874019f854629b64f2476eb965e72d2719 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:11:35 compute-0 podman[344584]: 2025-10-14 09:11:35.80610833 +0000 UTC m=+0.118722485 container remove 8cbc584856cc29a574c0029000e03a763c51bf297731e42e7b81a1d3de114fdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moser, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 09:11:35 compute-0 systemd[1]: libpod-conmon-8cbc584856cc29a574c0029000e03a763c51bf297731e42e7b81a1d3de114fdf.scope: Deactivated successfully.
Oct 14 09:11:35 compute-0 sudo[344293]: pam_unix(sudo:session): session closed for user root
Oct 14 09:11:35 compute-0 systemd[1]: libpod-conmon-5d3442b6e39a5e06c2d73c1f47f648874019f854629b64f2476eb965e72d2719.scope: Deactivated successfully.
Oct 14 09:11:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:11:35 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/483647968' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.877 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:35 compute-0 podman[344640]: 2025-10-14 09:11:35.889104124 +0000 UTC m=+0.058682256 container remove 5d3442b6e39a5e06c2d73c1f47f648874019f854629b64f2476eb965e72d2719 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 09:11:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 366 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 11 MiB/s wr, 389 op/s
Oct 14 09:11:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:35.897 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[602daf05-37b9-417d-9c8f-8d89acd7c9d8]: (4, ('Tue Oct 14 09:11:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7 (5d3442b6e39a5e06c2d73c1f47f648874019f854629b64f2476eb965e72d2719)\n5d3442b6e39a5e06c2d73c1f47f648874019f854629b64f2476eb965e72d2719\nTue Oct 14 09:11:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7 (5d3442b6e39a5e06c2d73c1f47f648874019f854629b64f2476eb965e72d2719)\n5d3442b6e39a5e06c2d73c1f47f648874019f854629b64f2476eb965e72d2719\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:35.901 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0a041f6a-33bb-4503-9af8-7276e59d3f6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:35.902 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10ee435a-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:35 compute-0 kernel: tap10ee435a-b0: left promiscuous mode
Oct 14 09:11:35 compute-0 sudo[344660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:11:35 compute-0 sudo[344660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:11:35 compute-0 sudo[344660]: pam_unix(sudo:session): session closed for user root
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.922 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image ab77dbf7-4458-4b16-a2e7-ed73be047838_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:35.926 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[04dabc0b-ce70-49be-a5a6-4bfdb86a85d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.941 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:35.943 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7214a15f-f070-4b58-be61-535739b5f2f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:35.944 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cad2d156-59a1-4d97-af7c-ed96c694877a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:35.963 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e63fc06d-dd7a-4e88-b3f5-3a6b430cc65c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687021, 'reachable_time': 22045, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344730, 'error': None, 'target': 'ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:35 compute-0 systemd[1]: run-netns-ovnmeta\x2d10ee435a\x2db254\x2d4b1c\x2d8c18\x2df92d44f39cd7.mount: Deactivated successfully.
Oct 14 09:11:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:35.968 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-10ee435a-b254-4b1c-8c18-f92d44f39cd7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:11:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:35.968 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[dc8fcf31-d64f-457a-9d38-7eda7901a13b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:35 compute-0 sudo[344708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:11:35 compute-0 sudo[344708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:11:35 compute-0 sudo[344708]: pam_unix(sudo:session): session closed for user root
Oct 14 09:11:35 compute-0 nova_compute[259627]: 2025-10-14 09:11:35.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:36 compute-0 sudo[344737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:11:36 compute-0 sudo[344737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:11:36 compute-0 sudo[344737]: pam_unix(sudo:session): session closed for user root
Oct 14 09:11:36 compute-0 sudo[344763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:11:36 compute-0 sudo[344763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.190 2 INFO nova.virt.libvirt.driver [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Deleting instance files /var/lib/nova/instances/d1c0470c-5f74-43c3-a206-07147fa01d5e_del
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.191 2 INFO nova.virt.libvirt.driver [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Deletion of /var/lib/nova/instances/d1c0470c-5f74-43c3-a206-07147fa01d5e_del complete
Oct 14 09:11:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:11:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3900333616' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.240 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.271 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image ed2aee1e-f632-4d7f-ae03-f5d9c41e9104_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.279 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.325 2 INFO nova.compute.manager [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Took 0.91 seconds to destroy the instance on the hypervisor.
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.325 2 DEBUG oslo.service.loopingcall [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.326 2 DEBUG nova.compute.manager [-] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.326 2 DEBUG nova.network.neutron [-] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:11:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:11:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2260673164' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.433 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.435 2 DEBUG nova.virt.libvirt.vif [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:11:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-792258866',display_name='tempest-ListServersNegativeTestJSON-server-792258866-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-792258866-1',id=83,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d2f1337472a4407869e16f2271280ef',ramdisk_id='',reservation_id='r-f3phfmf2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1864242738',owner_user_name='tempest-Li
stServersNegativeTestJSON-1864242738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:11:30Z,user_data=None,user_id='bca591d7e37e4881bc4de44ee172b2f4',uuid=ab77dbf7-4458-4b16-a2e7-ed73be047838,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9b0841b-401d-4f72-aa51-209173353afe", "address": "fa:16:3e:6d:e1:a1", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b0841b-40", "ovs_interfaceid": "c9b0841b-401d-4f72-aa51-209173353afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.435 2 DEBUG nova.network.os_vif_util [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converting VIF {"id": "c9b0841b-401d-4f72-aa51-209173353afe", "address": "fa:16:3e:6d:e1:a1", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b0841b-40", "ovs_interfaceid": "c9b0841b-401d-4f72-aa51-209173353afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.436 2 DEBUG nova.network.os_vif_util [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:e1:a1,bridge_name='br-int',has_traffic_filtering=True,id=c9b0841b-401d-4f72-aa51-209173353afe,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9b0841b-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.437 2 DEBUG nova.objects.instance [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lazy-loading 'pci_devices' on Instance uuid ab77dbf7-4458-4b16-a2e7-ed73be047838 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:36 compute-0 podman[344862]: 2025-10-14 09:11:36.448631414 +0000 UTC m=+0.051036098 container create 7f3b2662b2b1648ab2596551ece3d4c1b91932c56eb40fcba87c4b69104e1beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_matsumoto, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.459 2 DEBUG nova.compute.manager [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Received event network-changed-106349ae-cfaa-43ec-9bda-16f36a6ac3d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.459 2 DEBUG nova.compute.manager [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Refreshing instance network info cache due to event network-changed-106349ae-cfaa-43ec-9bda-16f36a6ac3d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.460 2 DEBUG oslo_concurrency.lockutils [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-290980d2-08b4-4029-a1c3-becd3457a410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.461 2 DEBUG oslo_concurrency.lockutils [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-290980d2-08b4-4029-a1c3-becd3457a410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.461 2 DEBUG nova.network.neutron [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Refreshing network info cache for port 106349ae-cfaa-43ec-9bda-16f36a6ac3d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.487 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <uuid>ab77dbf7-4458-4b16-a2e7-ed73be047838</uuid>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <name>instance-00000053</name>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <nova:name>tempest-ListServersNegativeTestJSON-server-792258866-1</nova:name>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:11:35</nova:creationTime>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <nova:user uuid="bca591d7e37e4881bc4de44ee172b2f4">tempest-ListServersNegativeTestJSON-1864242738-project-member</nova:user>
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <nova:project uuid="6d2f1337472a4407869e16f2271280ef">tempest-ListServersNegativeTestJSON-1864242738</nova:project>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <nova:port uuid="c9b0841b-401d-4f72-aa51-209173353afe">
Oct 14 09:11:36 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <system>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <entry name="serial">ab77dbf7-4458-4b16-a2e7-ed73be047838</entry>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <entry name="uuid">ab77dbf7-4458-4b16-a2e7-ed73be047838</entry>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     </system>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <os>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   </os>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <features>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   </features>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/ab77dbf7-4458-4b16-a2e7-ed73be047838_disk">
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       </source>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/ab77dbf7-4458-4b16-a2e7-ed73be047838_disk.config">
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       </source>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:6d:e1:a1"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <target dev="tapc9b0841b-40"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/ab77dbf7-4458-4b16-a2e7-ed73be047838/console.log" append="off"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <video>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     </video>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:11:36 compute-0 nova_compute[259627]: </domain>
Oct 14 09:11:36 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.487 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Preparing to wait for external event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.487 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.488 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.488 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.488 2 DEBUG nova.virt.libvirt.vif [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:11:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-792258866',display_name='tempest-ListServersNegativeTestJSON-server-792258866-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-792258866-1',id=83,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d2f1337472a4407869e16f2271280ef',ramdisk_id='',reservation_id='r-f3phfmf2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1864242738',owner_user_name='tempest-ListServersNegativeTestJSON-1864242738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:11:30Z,user_data=None,user_id='bca591d7e37e4881bc4de44ee172b2f4',uuid=ab77dbf7-4458-4b16-a2e7-ed73be047838,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9b0841b-401d-4f72-aa51-209173353afe", "address": "fa:16:3e:6d:e1:a1", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b0841b-40", "ovs_interfaceid": "c9b0841b-401d-4f72-aa51-209173353afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.488 2 DEBUG nova.network.os_vif_util [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converting VIF {"id": "c9b0841b-401d-4f72-aa51-209173353afe", "address": "fa:16:3e:6d:e1:a1", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b0841b-40", "ovs_interfaceid": "c9b0841b-401d-4f72-aa51-209173353afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.489 2 DEBUG nova.network.os_vif_util [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:e1:a1,bridge_name='br-int',has_traffic_filtering=True,id=c9b0841b-401d-4f72-aa51-209173353afe,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9b0841b-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.489 2 DEBUG os_vif [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:e1:a1,bridge_name='br-int',has_traffic_filtering=True,id=c9b0841b-401d-4f72-aa51-209173353afe,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9b0841b-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.492 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.492 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.495 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9b0841b-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.496 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc9b0841b-40, col_values=(('external_ids', {'iface-id': 'c9b0841b-401d-4f72-aa51-209173353afe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:e1:a1', 'vm-uuid': 'ab77dbf7-4458-4b16-a2e7-ed73be047838'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:36 compute-0 systemd[1]: Started libpod-conmon-7f3b2662b2b1648ab2596551ece3d4c1b91932c56eb40fcba87c4b69104e1beb.scope.
Oct 14 09:11:36 compute-0 NetworkManager[44885]: <info>  [1760433096.4984] manager: (tapc9b0841b-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.504 2 INFO os_vif [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:e1:a1,bridge_name='br-int',has_traffic_filtering=True,id=c9b0841b-401d-4f72-aa51-209173353afe,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9b0841b-40')
Oct 14 09:11:36 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/483647968' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:36 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3900333616' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:36 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2260673164' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:36 compute-0 podman[344862]: 2025-10-14 09:11:36.420569303 +0000 UTC m=+0.022974007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:11:36 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:11:36 compute-0 podman[344862]: 2025-10-14 09:11:36.553758333 +0000 UTC m=+0.156163037 container init 7f3b2662b2b1648ab2596551ece3d4c1b91932c56eb40fcba87c4b69104e1beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_matsumoto, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.555 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.555 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.555 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] No VIF found with MAC fa:16:3e:6d:e1:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.556 2 INFO nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Using config drive
Oct 14 09:11:36 compute-0 podman[344862]: 2025-10-14 09:11:36.561901364 +0000 UTC m=+0.164306048 container start 7f3b2662b2b1648ab2596551ece3d4c1b91932c56eb40fcba87c4b69104e1beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_matsumoto, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:11:36 compute-0 podman[344862]: 2025-10-14 09:11:36.566112367 +0000 UTC m=+0.168517051 container attach 7f3b2662b2b1648ab2596551ece3d4c1b91932c56eb40fcba87c4b69104e1beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_matsumoto, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 09:11:36 compute-0 cranky_matsumoto[344900]: 167 167
Oct 14 09:11:36 compute-0 systemd[1]: libpod-7f3b2662b2b1648ab2596551ece3d4c1b91932c56eb40fcba87c4b69104e1beb.scope: Deactivated successfully.
Oct 14 09:11:36 compute-0 conmon[344900]: conmon 7f3b2662b2b1648ab259 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f3b2662b2b1648ab2596551ece3d4c1b91932c56eb40fcba87c4b69104e1beb.scope/container/memory.events
Oct 14 09:11:36 compute-0 podman[344862]: 2025-10-14 09:11:36.569396998 +0000 UTC m=+0.171801692 container died 7f3b2662b2b1648ab2596551ece3d4c1b91932c56eb40fcba87c4b69104e1beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_matsumoto, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.585 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image ab77dbf7-4458-4b16-a2e7-ed73be047838_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.601 2 DEBUG nova.compute.manager [req-960f745b-a182-48d2-922b-845f86d73237 req-d707c736-c5a9-4975-91a2-1d5b3f6aa885 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Received event network-changed-4006119e-fa08-4095-bba7-d338c82ac066 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.602 2 DEBUG nova.compute.manager [req-960f745b-a182-48d2-922b-845f86d73237 req-d707c736-c5a9-4975-91a2-1d5b3f6aa885 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Refreshing instance network info cache due to event network-changed-4006119e-fa08-4095-bba7-d338c82ac066. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.602 2 DEBUG oslo_concurrency.lockutils [req-960f745b-a182-48d2-922b-845f86d73237 req-d707c736-c5a9-4975-91a2-1d5b3f6aa885 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-62fe725d-b24a-477a-a275-06d2cd960aaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:11:36 compute-0 podman[344862]: 2025-10-14 09:11:36.609052515 +0000 UTC m=+0.211457209 container remove 7f3b2662b2b1648ab2596551ece3d4c1b91932c56eb40fcba87c4b69104e1beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:11:36 compute-0 systemd[1]: libpod-conmon-7f3b2662b2b1648ab2596551ece3d4c1b91932c56eb40fcba87c4b69104e1beb.scope: Deactivated successfully.
Oct 14 09:11:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-97be7c6c2b10d87afdfa6a376f24603b69d2c811312e11ca36eccb00b001260a-merged.mount: Deactivated successfully.
Oct 14 09:11:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:11:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1416368212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.783 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.784 2 DEBUG nova.virt.libvirt.vif [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:11:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-792258866',display_name='tempest-ListServersNegativeTestJSON-server-792258866-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-792258866-2',id=84,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d2f1337472a4407869e16f2271280ef',ramdisk_id='',reservation_id='r-f3phfmf2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1864242738',owner_user_name='tempest-ListServersNegativeTestJSON-1864242738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:11:31Z,user_data=None,user_id='bca591d7e37e4881bc4de44ee172b2f4',uuid=ed2aee1e-f632-4d7f-ae03-f5d9c41e9104,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8fb091af-e492-4374-b3c2-7ab4157389a6", "address": "fa:16:3e:fa:b2:b6", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fb091af-e4", "ovs_interfaceid": "8fb091af-e492-4374-b3c2-7ab4157389a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.784 2 DEBUG nova.network.os_vif_util [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converting VIF {"id": "8fb091af-e492-4374-b3c2-7ab4157389a6", "address": "fa:16:3e:fa:b2:b6", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fb091af-e4", "ovs_interfaceid": "8fb091af-e492-4374-b3c2-7ab4157389a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.785 2 DEBUG nova.network.os_vif_util [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b2:b6,bridge_name='br-int',has_traffic_filtering=True,id=8fb091af-e492-4374-b3c2-7ab4157389a6,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8fb091af-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.785 2 DEBUG nova.objects.instance [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lazy-loading 'pci_devices' on Instance uuid ed2aee1e-f632-4d7f-ae03-f5d9c41e9104 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:36 compute-0 podman[344941]: 2025-10-14 09:11:36.79685593 +0000 UTC m=+0.050977776 container create 2cb960ddff107a1d0957853b49c40a54f106d58563902a5fcc29bbf38cefc82a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dijkstra, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.800 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <uuid>ed2aee1e-f632-4d7f-ae03-f5d9c41e9104</uuid>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <name>instance-00000054</name>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <nova:name>tempest-ListServersNegativeTestJSON-server-792258866-2</nova:name>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:11:35</nova:creationTime>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <nova:user uuid="bca591d7e37e4881bc4de44ee172b2f4">tempest-ListServersNegativeTestJSON-1864242738-project-member</nova:user>
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <nova:project uuid="6d2f1337472a4407869e16f2271280ef">tempest-ListServersNegativeTestJSON-1864242738</nova:project>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <nova:port uuid="8fb091af-e492-4374-b3c2-7ab4157389a6">
Oct 14 09:11:36 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <system>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <entry name="serial">ed2aee1e-f632-4d7f-ae03-f5d9c41e9104</entry>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <entry name="uuid">ed2aee1e-f632-4d7f-ae03-f5d9c41e9104</entry>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     </system>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <os>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   </os>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <features>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   </features>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/ed2aee1e-f632-4d7f-ae03-f5d9c41e9104_disk">
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       </source>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/ed2aee1e-f632-4d7f-ae03-f5d9c41e9104_disk.config">
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       </source>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:11:36 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:fa:b2:b6"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <target dev="tap8fb091af-e4"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/ed2aee1e-f632-4d7f-ae03-f5d9c41e9104/console.log" append="off"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <video>
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     </video>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:11:36 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:11:36 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:11:36 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:11:36 compute-0 nova_compute[259627]: </domain>
Oct 14 09:11:36 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.800 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Preparing to wait for external event network-vif-plugged-8fb091af-e492-4374-b3c2-7ab4157389a6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.800 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.801 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.801 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.801 2 DEBUG nova.virt.libvirt.vif [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:11:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-792258866',display_name='tempest-ListServersNegativeTestJSON-server-792258866-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-792258866-2',id=84,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d2f1337472a4407869e16f2271280ef',ramdisk_id='',reservation_id='r-f3phfmf2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1864242738',owner_user_name='tempest-ListServersNegativeTestJSON-1864242738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:11:31Z,user_data=None,user_id='bca591d7e37e4881bc4de44ee172b2f4',uuid=ed2aee1e-f632-4d7f-ae03-f5d9c41e9104,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8fb091af-e492-4374-b3c2-7ab4157389a6", "address": "fa:16:3e:fa:b2:b6", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fb091af-e4", "ovs_interfaceid": "8fb091af-e492-4374-b3c2-7ab4157389a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.802 2 DEBUG nova.network.os_vif_util [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converting VIF {"id": "8fb091af-e492-4374-b3c2-7ab4157389a6", "address": "fa:16:3e:fa:b2:b6", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fb091af-e4", "ovs_interfaceid": "8fb091af-e492-4374-b3c2-7ab4157389a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.802 2 DEBUG nova.network.os_vif_util [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b2:b6,bridge_name='br-int',has_traffic_filtering=True,id=8fb091af-e492-4374-b3c2-7ab4157389a6,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8fb091af-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.802 2 DEBUG os_vif [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b2:b6,bridge_name='br-int',has_traffic_filtering=True,id=8fb091af-e492-4374-b3c2-7ab4157389a6,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8fb091af-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.803 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.803 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.805 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8fb091af-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.806 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8fb091af-e4, col_values=(('external_ids', {'iface-id': '8fb091af-e492-4374-b3c2-7ab4157389a6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fa:b2:b6', 'vm-uuid': 'ed2aee1e-f632-4d7f-ae03-f5d9c41e9104'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:36 compute-0 NetworkManager[44885]: <info>  [1760433096.8079] manager: (tap8fb091af-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/355)
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.817 2 INFO os_vif [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b2:b6,bridge_name='br-int',has_traffic_filtering=True,id=8fb091af-e492-4374-b3c2-7ab4157389a6,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8fb091af-e4')
Oct 14 09:11:36 compute-0 systemd[1]: Started libpod-conmon-2cb960ddff107a1d0957853b49c40a54f106d58563902a5fcc29bbf38cefc82a.scope.
Oct 14 09:11:36 compute-0 podman[344941]: 2025-10-14 09:11:36.771918556 +0000 UTC m=+0.026040402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.866 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.866 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.867 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] No VIF found with MAC fa:16:3e:fa:b2:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.867 2 INFO nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Using config drive
Oct 14 09:11:36 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd12a532c573cdca57bb6e234b8065b935ab89add6c86070ebebbe5100871d5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd12a532c573cdca57bb6e234b8065b935ab89add6c86070ebebbe5100871d5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd12a532c573cdca57bb6e234b8065b935ab89add6c86070ebebbe5100871d5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd12a532c573cdca57bb6e234b8065b935ab89add6c86070ebebbe5100871d5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:11:36 compute-0 podman[344941]: 2025-10-14 09:11:36.905999668 +0000 UTC m=+0.160121604 container init 2cb960ddff107a1d0957853b49c40a54f106d58563902a5fcc29bbf38cefc82a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:11:36 compute-0 podman[344941]: 2025-10-14 09:11:36.916413915 +0000 UTC m=+0.170535761 container start 2cb960ddff107a1d0957853b49c40a54f106d58563902a5fcc29bbf38cefc82a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct 14 09:11:36 compute-0 podman[344941]: 2025-10-14 09:11:36.920466444 +0000 UTC m=+0.174588360 container attach 2cb960ddff107a1d0957853b49c40a54f106d58563902a5fcc29bbf38cefc82a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dijkstra, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:11:36 compute-0 nova_compute[259627]: 2025-10-14 09:11:36.924 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image ed2aee1e-f632-4d7f-ae03-f5d9c41e9104_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.274 2 INFO nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Creating config drive at /var/lib/nova/instances/ab77dbf7-4458-4b16-a2e7-ed73be047838/disk.config
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.284 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ab77dbf7-4458-4b16-a2e7-ed73be047838/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprpefteq9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.324 2 DEBUG nova.network.neutron [req-633978d9-3711-4df3-9902-b7f9c3f08912 req-7c187418-9c56-4b09-b09d-afbc1079d602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Updated VIF entry in instance network info cache for port 8fb091af-e492-4374-b3c2-7ab4157389a6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.325 2 DEBUG nova.network.neutron [req-633978d9-3711-4df3-9902-b7f9c3f08912 req-7c187418-9c56-4b09-b09d-afbc1079d602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Updating instance_info_cache with network_info: [{"id": "8fb091af-e492-4374-b3c2-7ab4157389a6", "address": "fa:16:3e:fa:b2:b6", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fb091af-e4", "ovs_interfaceid": "8fb091af-e492-4374-b3c2-7ab4157389a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.327 2 DEBUG nova.network.neutron [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Updating instance_info_cache with network_info: [{"id": "4006119e-fa08-4095-bba7-d338c82ac066", "address": "fa:16:3e:be:3e:79", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4006119e-fa", "ovs_interfaceid": "4006119e-fa08-4095-bba7-d338c82ac066", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.353 2 DEBUG oslo_concurrency.lockutils [req-633978d9-3711-4df3-9902-b7f9c3f08912 req-7c187418-9c56-4b09-b09d-afbc1079d602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-ed2aee1e-f632-4d7f-ae03-f5d9c41e9104" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.354 2 DEBUG nova.network.neutron [-] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.357 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Releasing lock "refresh_cache-62fe725d-b24a-477a-a275-06d2cd960aaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.357 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Instance network_info: |[{"id": "4006119e-fa08-4095-bba7-d338c82ac066", "address": "fa:16:3e:be:3e:79", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4006119e-fa", "ovs_interfaceid": "4006119e-fa08-4095-bba7-d338c82ac066", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.358 2 DEBUG oslo_concurrency.lockutils [req-960f745b-a182-48d2-922b-845f86d73237 req-d707c736-c5a9-4975-91a2-1d5b3f6aa885 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-62fe725d-b24a-477a-a275-06d2cd960aaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.358 2 DEBUG nova.network.neutron [req-960f745b-a182-48d2-922b-845f86d73237 req-d707c736-c5a9-4975-91a2-1d5b3f6aa885 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Refreshing network info cache for port 4006119e-fa08-4095-bba7-d338c82ac066 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.362 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Start _get_guest_xml network_info=[{"id": "4006119e-fa08-4095-bba7-d338c82ac066", "address": "fa:16:3e:be:3e:79", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4006119e-fa", "ovs_interfaceid": "4006119e-fa08-4095-bba7-d338c82ac066", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.367 2 WARNING nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.374 2 DEBUG nova.virt.libvirt.host [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.375 2 DEBUG nova.virt.libvirt.host [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.378 2 DEBUG nova.virt.libvirt.host [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.378 2 DEBUG nova.virt.libvirt.host [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.379 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.379 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.379 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.379 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.380 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.380 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.380 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.380 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.380 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.380 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.381 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.381 2 DEBUG nova.virt.hardware [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.384 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.426 2 INFO nova.compute.manager [-] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Took 1.10 seconds to deallocate network for instance.
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.433 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ab77dbf7-4458-4b16-a2e7-ed73be047838/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprpefteq9" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.465 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image ab77dbf7-4458-4b16-a2e7-ed73be047838_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.468 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ab77dbf7-4458-4b16-a2e7-ed73be047838/disk.config ab77dbf7-4458-4b16-a2e7-ed73be047838_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.508 2 INFO nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Creating config drive at /var/lib/nova/instances/ed2aee1e-f632-4d7f-ae03-f5d9c41e9104/disk.config
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.515 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ed2aee1e-f632-4d7f-ae03-f5d9c41e9104/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzozo5c9e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:37 compute-0 ceph-mon[74249]: pgmap v1714: 305 pgs: 305 active+clean; 366 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 11 MiB/s wr, 389 op/s
Oct 14 09:11:37 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1416368212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.559 2 DEBUG oslo_concurrency.lockutils [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.560 2 DEBUG oslo_concurrency.lockutils [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.640 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ab77dbf7-4458-4b16-a2e7-ed73be047838/disk.config ab77dbf7-4458-4b16-a2e7-ed73be047838_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.640 2 INFO nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Deleting local config drive /var/lib/nova/instances/ab77dbf7-4458-4b16-a2e7-ed73be047838/disk.config because it was imported into RBD.
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.666 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ed2aee1e-f632-4d7f-ae03-f5d9c41e9104/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzozo5c9e" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:37 compute-0 NetworkManager[44885]: <info>  [1760433097.6945] manager: (tapc9b0841b-40): new Tun device (/org/freedesktop/NetworkManager/Devices/356)
Oct 14 09:11:37 compute-0 kernel: tapc9b0841b-40: entered promiscuous mode
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.720 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image ed2aee1e-f632-4d7f-ae03-f5d9c41e9104_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:37 compute-0 systemd-udevd[345078]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.730 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ed2aee1e-f632-4d7f-ae03-f5d9c41e9104/disk.config ed2aee1e-f632-4d7f-ae03-f5d9c41e9104_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:37 compute-0 NetworkManager[44885]: <info>  [1760433097.7368] device (tapc9b0841b-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:11:37 compute-0 NetworkManager[44885]: <info>  [1760433097.7382] device (tapc9b0841b-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:11:37 compute-0 ovn_controller[152662]: 2025-10-14T09:11:37Z|00840|binding|INFO|Claiming lport c9b0841b-401d-4f72-aa51-209173353afe for this chassis.
Oct 14 09:11:37 compute-0 ovn_controller[152662]: 2025-10-14T09:11:37Z|00841|binding|INFO|c9b0841b-401d-4f72-aa51-209173353afe: Claiming fa:16:3e:6d:e1:a1 10.100.0.10
Oct 14 09:11:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:11:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:37.764 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:e1:a1 10.100.0.10'], port_security=['fa:16:3e:6d:e1:a1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ab77dbf7-4458-4b16-a2e7-ed73be047838', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d2f1337472a4407869e16f2271280ef', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6cb91c5c-0b16-4305-8e1d-968c62c3d3c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9bea81a-55e6-4c5b-a0e6-3add7d9a646a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c9b0841b-401d-4f72-aa51-209173353afe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:11:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:37.765 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c9b0841b-401d-4f72-aa51-209173353afe in datapath 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 bound to our chassis
Oct 14 09:11:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:37.767 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]: {
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:     "0": [
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:         {
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "devices": [
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "/dev/loop3"
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             ],
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "lv_name": "ceph_lv0",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "lv_size": "21470642176",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "name": "ceph_lv0",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "tags": {
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.cluster_name": "ceph",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.crush_device_class": "",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.encrypted": "0",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.osd_id": "0",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.type": "block",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.vdo": "0"
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             },
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "type": "block",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "vg_name": "ceph_vg0"
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:         }
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:     ],
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:     "1": [
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:         {
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "devices": [
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "/dev/loop4"
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             ],
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "lv_name": "ceph_lv1",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "lv_size": "21470642176",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "name": "ceph_lv1",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "tags": {
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.cluster_name": "ceph",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.crush_device_class": "",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.encrypted": "0",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.osd_id": "1",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.type": "block",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.vdo": "0"
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             },
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "type": "block",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "vg_name": "ceph_vg1"
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:         }
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:     ],
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:     "2": [
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:         {
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "devices": [
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "/dev/loop5"
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             ],
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "lv_name": "ceph_lv2",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "lv_size": "21470642176",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "name": "ceph_lv2",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "tags": {
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.cluster_name": "ceph",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.crush_device_class": "",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.encrypted": "0",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.osd_id": "2",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.type": "block",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:                 "ceph.vdo": "0"
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             },
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "type": "block",
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:             "vg_name": "ceph_vg2"
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:         }
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]:     ]
Oct 14 09:11:37 compute-0 vibrant_dijkstra[344961]: }
Oct 14 09:11:37 compute-0 ovn_controller[152662]: 2025-10-14T09:11:37Z|00842|binding|INFO|Setting lport c9b0841b-401d-4f72-aa51-209173353afe ovn-installed in OVS
Oct 14 09:11:37 compute-0 ovn_controller[152662]: 2025-10-14T09:11:37Z|00843|binding|INFO|Setting lport c9b0841b-401d-4f72-aa51-209173353afe up in Southbound
Oct 14 09:11:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:37.788 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bee6f24d-4d5e-45b1-80da-cf367ca4e731]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:37.792 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7b7bccdd-41 in ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:37.794 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7b7bccdd-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:11:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:37.794 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c572d304-275e-4e79-865a-ac51e6189ec8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:37.795 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[71f733cf-22d1-4743-bae2-c3a9981b7601]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:37 compute-0 systemd-machined[214636]: New machine qemu-103-instance-00000053.
Oct 14 09:11:37 compute-0 systemd[1]: Started Virtual Machine qemu-103-instance-00000053.
Oct 14 09:11:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:37.807 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[000ca7b1-7a66-47d5-8c70-67a8a157e0db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:37 compute-0 systemd[1]: libpod-2cb960ddff107a1d0957853b49c40a54f106d58563902a5fcc29bbf38cefc82a.scope: Deactivated successfully.
Oct 14 09:11:37 compute-0 conmon[344961]: conmon 2cb960ddff107a1d0957 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2cb960ddff107a1d0957853b49c40a54f106d58563902a5fcc29bbf38cefc82a.scope/container/memory.events
Oct 14 09:11:37 compute-0 podman[344941]: 2025-10-14 09:11:37.819793313 +0000 UTC m=+1.073915159 container died 2cb960ddff107a1d0957853b49c40a54f106d58563902a5fcc29bbf38cefc82a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dijkstra, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:11:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:37.824 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7517942a-f78b-4689-849b-5d1c192a6447]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd12a532c573cdca57bb6e234b8065b935ab89add6c86070ebebbe5100871d5f-merged.mount: Deactivated successfully.
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.851 2 DEBUG oslo_concurrency.processutils [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:37.866 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[38779195-ebb3-4c53-ab2a-633bedd9df2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:11:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4203135980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:37 compute-0 podman[344941]: 2025-10-14 09:11:37.876236413 +0000 UTC m=+1.130358259 container remove 2cb960ddff107a1d0957853b49c40a54f106d58563902a5fcc29bbf38cefc82a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 09:11:37 compute-0 NetworkManager[44885]: <info>  [1760433097.8864] manager: (tap7b7bccdd-40): new Veth device (/org/freedesktop/NetworkManager/Devices/357)
Oct 14 09:11:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:37.888 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bdcafa50-9d2a-43da-8bff-151737416c38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 366 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 8.9 MiB/s wr, 354 op/s
Oct 14 09:11:37 compute-0 sudo[344763]: pam_unix(sudo:session): session closed for user root
Oct 14 09:11:37 compute-0 systemd[1]: libpod-conmon-2cb960ddff107a1d0957853b49c40a54f106d58563902a5fcc29bbf38cefc82a.scope: Deactivated successfully.
Oct 14 09:11:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:37.925 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b0207864-dac8-4f27-aad4-491fe5c3e7c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:37.928 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1b860d9d-97df-4bbb-8fdc-af624519a47b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:37 compute-0 nova_compute[259627]: 2025-10-14 09:11:37.950 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:37 compute-0 NetworkManager[44885]: <info>  [1760433097.9564] device (tap7b7bccdd-40): carrier: link connected
Oct 14 09:11:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:37.962 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d529fe3a-2831-4d0b-8608-66fb95cca25e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:37.987 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[741360c2-bf85-4bad-b5eb-9803762eda6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b7bccdd-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687671, 'reachable_time': 27157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345180, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:37 compute-0 sudo[345148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:11:38 compute-0 sudo[345148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.000 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image 62fe725d-b24a-477a-a275-06d2cd960aaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:38 compute-0 sudo[345148]: pam_unix(sudo:session): session closed for user root
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.010 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[da069ab1-0731-4910-ae47-619e20a94866]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:8080'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687671, 'tstamp': 687671}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345194, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.012 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.029 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d573d960-6e90-45ba-8220-ea16d91ea6dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b7bccdd-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687671, 'reachable_time': 27157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 345205, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:38 compute-0 sudo[345200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:11:38 compute-0 sudo[345200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.059 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.060 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ed2aee1e-f632-4d7f-ae03-f5d9c41e9104/disk.config ed2aee1e-f632-4d7f-ae03-f5d9c41e9104_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.061 2 INFO nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Deleting local config drive /var/lib/nova/instances/ed2aee1e-f632-4d7f-ae03-f5d9c41e9104/disk.config because it was imported into RBD.
Oct 14 09:11:38 compute-0 sudo[345200]: pam_unix(sudo:session): session closed for user root
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.087 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[87d95c41-f11b-4c1e-bd4b-ab42f7aec523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:38 compute-0 NetworkManager[44885]: <info>  [1760433098.1188] manager: (tap8fb091af-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Oct 14 09:11:38 compute-0 kernel: tap8fb091af-e4: entered promiscuous mode
Oct 14 09:11:38 compute-0 ovn_controller[152662]: 2025-10-14T09:11:38Z|00844|binding|INFO|Claiming lport 8fb091af-e492-4374-b3c2-7ab4157389a6 for this chassis.
Oct 14 09:11:38 compute-0 ovn_controller[152662]: 2025-10-14T09:11:38Z|00845|binding|INFO|8fb091af-e492-4374-b3c2-7ab4157389a6: Claiming fa:16:3e:fa:b2:b6 10.100.0.6
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:38 compute-0 systemd-udevd[345141]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.129 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:b2:b6 10.100.0.6'], port_security=['fa:16:3e:fa:b2:b6 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'ed2aee1e-f632-4d7f-ae03-f5d9c41e9104', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d2f1337472a4407869e16f2271280ef', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6cb91c5c-0b16-4305-8e1d-968c62c3d3c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9bea81a-55e6-4c5b-a0e6-3add7d9a646a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8fb091af-e492-4374-b3c2-7ab4157389a6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:11:38 compute-0 sudo[345229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:11:38 compute-0 sudo[345229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:11:38 compute-0 NetworkManager[44885]: <info>  [1760433098.1444] device (tap8fb091af-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:11:38 compute-0 NetworkManager[44885]: <info>  [1760433098.1454] device (tap8fb091af-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:11:38 compute-0 sudo[345229]: pam_unix(sudo:session): session closed for user root
Oct 14 09:11:38 compute-0 ovn_controller[152662]: 2025-10-14T09:11:38Z|00846|binding|INFO|Setting lport 8fb091af-e492-4374-b3c2-7ab4157389a6 ovn-installed in OVS
Oct 14 09:11:38 compute-0 ovn_controller[152662]: 2025-10-14T09:11:38Z|00847|binding|INFO|Setting lport 8fb091af-e492-4374-b3c2-7ab4157389a6 up in Southbound
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:38 compute-0 systemd-machined[214636]: New machine qemu-104-instance-00000054.
Oct 14 09:11:38 compute-0 systemd[1]: Started Virtual Machine qemu-104-instance-00000054.
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.200 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8db200-f951-44cc-8899-ae4b82764364]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.202 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b7bccdd-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.202 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.203 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b7bccdd-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:38 compute-0 sudo[345285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:11:38 compute-0 NetworkManager[44885]: <info>  [1760433098.2055] manager: (tap7b7bccdd-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/359)
Oct 14 09:11:38 compute-0 kernel: tap7b7bccdd-40: entered promiscuous mode
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.207 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b7bccdd-40, col_values=(('external_ids', {'iface-id': '8bb0b8f1-e510-498b-862e-2d74544dc8a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:38 compute-0 ovn_controller[152662]: 2025-10-14T09:11:38Z|00848|binding|INFO|Releasing lport 8bb0b8f1-e510-498b-862e-2d74544dc8a6 from this chassis (sb_readonly=0)
Oct 14 09:11:38 compute-0 sudo[345285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.235 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.236 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[92685dac-ccb7-4535-8c71-4f8b028a2117]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.236 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57.pid.haproxy
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.238 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'env', 'PROCESS_TAG=haproxy-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.318 2 DEBUG nova.network.neutron [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Updated VIF entry in instance network info cache for port 106349ae-cfaa-43ec-9bda-16f36a6ac3d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.319 2 DEBUG nova.network.neutron [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Updating instance_info_cache with network_info: [{"id": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "address": "fa:16:3e:07:23:80", "network": {"id": "b2389730-ae66-46f4-aea3-6a67311703e9", "bridge": "br-int", "label": "tempest-network-smoke--924972702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap106349ae-cf", "ovs_interfaceid": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.352 2 DEBUG oslo_concurrency.lockutils [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-290980d2-08b4-4029-a1c3-becd3457a410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.352 2 DEBUG nova.compute.manager [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-changed-c9b0841b-401d-4f72-aa51-209173353afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.352 2 DEBUG nova.compute.manager [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Refreshing instance network info cache due to event network-changed-c9b0841b-401d-4f72-aa51-209173353afe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.353 2 DEBUG oslo_concurrency.lockutils [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-ab77dbf7-4458-4b16-a2e7-ed73be047838" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.353 2 DEBUG oslo_concurrency.lockutils [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-ab77dbf7-4458-4b16-a2e7-ed73be047838" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.353 2 DEBUG nova.network.neutron [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Refreshing network info cache for port c9b0841b-401d-4f72-aa51-209173353afe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:11:38 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4203135980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:11:38 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1119848970' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.559 2 DEBUG nova.compute.manager [req-caee29fe-2540-46db-9a67-a1c09c0d14f2 req-36e28779-07f8-44a2-a6ba-9bb1fef4d49f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Received event network-vif-plugged-85b2e7ea-f418-4eba-9a49-be2af576436f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.561 2 DEBUG oslo_concurrency.lockutils [req-caee29fe-2540-46db-9a67-a1c09c0d14f2 req-36e28779-07f8-44a2-a6ba-9bb1fef4d49f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d1c0470c-5f74-43c3-a206-07147fa01d5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.561 2 DEBUG oslo_concurrency.lockutils [req-caee29fe-2540-46db-9a67-a1c09c0d14f2 req-36e28779-07f8-44a2-a6ba-9bb1fef4d49f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d1c0470c-5f74-43c3-a206-07147fa01d5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.561 2 DEBUG oslo_concurrency.lockutils [req-caee29fe-2540-46db-9a67-a1c09c0d14f2 req-36e28779-07f8-44a2-a6ba-9bb1fef4d49f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d1c0470c-5f74-43c3-a206-07147fa01d5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.562 2 DEBUG nova.compute.manager [req-caee29fe-2540-46db-9a67-a1c09c0d14f2 req-36e28779-07f8-44a2-a6ba-9bb1fef4d49f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] No waiting events found dispatching network-vif-plugged-85b2e7ea-f418-4eba-9a49-be2af576436f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.562 2 WARNING nova.compute.manager [req-caee29fe-2540-46db-9a67-a1c09c0d14f2 req-36e28779-07f8-44a2-a6ba-9bb1fef4d49f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Received unexpected event network-vif-plugged-85b2e7ea-f418-4eba-9a49-be2af576436f for instance with vm_state deleted and task_state None.
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.583 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.586 2 DEBUG nova.virt.libvirt.vif [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:11:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-792258866',display_name='tempest-ListServersNegativeTestJSON-server-792258866-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-792258866-3',id=85,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d2f1337472a4407869e16f2271280ef',ramdisk_id='',reservation_id='r-f3phfmf2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1864242738',owner_user_name='tempest-Li
stServersNegativeTestJSON-1864242738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:11:32Z,user_data=None,user_id='bca591d7e37e4881bc4de44ee172b2f4',uuid=62fe725d-b24a-477a-a275-06d2cd960aaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4006119e-fa08-4095-bba7-d338c82ac066", "address": "fa:16:3e:be:3e:79", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4006119e-fa", "ovs_interfaceid": "4006119e-fa08-4095-bba7-d338c82ac066", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.587 2 DEBUG nova.network.os_vif_util [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converting VIF {"id": "4006119e-fa08-4095-bba7-d338c82ac066", "address": "fa:16:3e:be:3e:79", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4006119e-fa", "ovs_interfaceid": "4006119e-fa08-4095-bba7-d338c82ac066", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.587 2 DEBUG nova.network.os_vif_util [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:3e:79,bridge_name='br-int',has_traffic_filtering=True,id=4006119e-fa08-4095-bba7-d338c82ac066,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4006119e-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.589 2 DEBUG nova.objects.instance [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lazy-loading 'pci_devices' on Instance uuid 62fe725d-b24a-477a-a275-06d2cd960aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:38 compute-0 podman[345462]: 2025-10-14 09:11:38.602892918 +0000 UTC m=+0.077287134 container create 1e17184c792263540febcf83ad458e65c31aa2d06df673aef247da0a6209e889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.608 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:11:38 compute-0 nova_compute[259627]:   <uuid>62fe725d-b24a-477a-a275-06d2cd960aaf</uuid>
Oct 14 09:11:38 compute-0 nova_compute[259627]:   <name>instance-00000055</name>
Oct 14 09:11:38 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:11:38 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:11:38 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <nova:name>tempest-ListServersNegativeTestJSON-server-792258866-3</nova:name>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:11:37</nova:creationTime>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:11:38 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:11:38 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:11:38 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:11:38 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:11:38 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:11:38 compute-0 nova_compute[259627]:         <nova:user uuid="bca591d7e37e4881bc4de44ee172b2f4">tempest-ListServersNegativeTestJSON-1864242738-project-member</nova:user>
Oct 14 09:11:38 compute-0 nova_compute[259627]:         <nova:project uuid="6d2f1337472a4407869e16f2271280ef">tempest-ListServersNegativeTestJSON-1864242738</nova:project>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:11:38 compute-0 nova_compute[259627]:         <nova:port uuid="4006119e-fa08-4095-bba7-d338c82ac066">
Oct 14 09:11:38 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:11:38 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:11:38 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <system>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <entry name="serial">62fe725d-b24a-477a-a275-06d2cd960aaf</entry>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <entry name="uuid">62fe725d-b24a-477a-a275-06d2cd960aaf</entry>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     </system>
Oct 14 09:11:38 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:11:38 compute-0 nova_compute[259627]:   <os>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:   </os>
Oct 14 09:11:38 compute-0 nova_compute[259627]:   <features>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:   </features>
Oct 14 09:11:38 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:11:38 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:11:38 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/62fe725d-b24a-477a-a275-06d2cd960aaf_disk">
Oct 14 09:11:38 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       </source>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:11:38 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/62fe725d-b24a-477a-a275-06d2cd960aaf_disk.config">
Oct 14 09:11:38 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       </source>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:11:38 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:be:3e:79"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <target dev="tap4006119e-fa"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/62fe725d-b24a-477a-a275-06d2cd960aaf/console.log" append="off"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <video>
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     </video>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:11:38 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:11:38 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:11:38 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:11:38 compute-0 nova_compute[259627]: </domain>
Oct 14 09:11:38 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.609 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Preparing to wait for external event network-vif-plugged-4006119e-fa08-4095-bba7-d338c82ac066 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.609 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.610 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.610 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.610 2 DEBUG nova.virt.libvirt.vif [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:11:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-792258866',display_name='tempest-ListServersNegativeTestJSON-server-792258866-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-792258866-3',id=85,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d2f1337472a4407869e16f2271280ef',ramdisk_id='',reservation_id='r-f3phfmf2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1864242738',owner_user_name='tempest-ListServersNegativeTestJSON-1864242738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:11:32Z,user_data=None,user_id='bca591d7e37e4881bc4de44ee172b2f4',uuid=62fe725d-b24a-477a-a275-06d2cd960aaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4006119e-fa08-4095-bba7-d338c82ac066", "address": "fa:16:3e:be:3e:79", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4006119e-fa", "ovs_interfaceid": "4006119e-fa08-4095-bba7-d338c82ac066", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.611 2 DEBUG nova.network.os_vif_util [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converting VIF {"id": "4006119e-fa08-4095-bba7-d338c82ac066", "address": "fa:16:3e:be:3e:79", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4006119e-fa", "ovs_interfaceid": "4006119e-fa08-4095-bba7-d338c82ac066", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.611 2 DEBUG nova.network.os_vif_util [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:3e:79,bridge_name='br-int',has_traffic_filtering=True,id=4006119e-fa08-4095-bba7-d338c82ac066,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4006119e-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.612 2 DEBUG os_vif [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:3e:79,bridge_name='br-int',has_traffic_filtering=True,id=4006119e-fa08-4095-bba7-d338c82ac066,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4006119e-fa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.612 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.613 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.615 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4006119e-fa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.620 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4006119e-fa, col_values=(('external_ids', {'iface-id': '4006119e-fa08-4095-bba7-d338c82ac066', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:3e:79', 'vm-uuid': '62fe725d-b24a-477a-a275-06d2cd960aaf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:38 compute-0 NetworkManager[44885]: <info>  [1760433098.6225] manager: (tap4006119e-fa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.630 2 INFO os_vif [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:3e:79,bridge_name='br-int',has_traffic_filtering=True,id=4006119e-fa08-4095-bba7-d338c82ac066,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4006119e-fa')
Oct 14 09:11:38 compute-0 systemd[1]: Started libpod-conmon-1e17184c792263540febcf83ad458e65c31aa2d06df673aef247da0a6209e889.scope.
Oct 14 09:11:38 compute-0 podman[345462]: 2025-10-14 09:11:38.567956628 +0000 UTC m=+0.042350864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:11:38 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:11:38 compute-0 podman[345462]: 2025-10-14 09:11:38.699714332 +0000 UTC m=+0.174108568 container init 1e17184c792263540febcf83ad458e65c31aa2d06df673aef247da0a6209e889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.701 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.701 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.701 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] No VIF found with MAC fa:16:3e:be:3e:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.702 2 INFO nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Using config drive
Oct 14 09:11:38 compute-0 podman[345462]: 2025-10-14 09:11:38.708397656 +0000 UTC m=+0.182791872 container start 1e17184c792263540febcf83ad458e65c31aa2d06df673aef247da0a6209e889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 09:11:38 compute-0 podman[345462]: 2025-10-14 09:11:38.712589239 +0000 UTC m=+0.186983475 container attach 1e17184c792263540febcf83ad458e65c31aa2d06df673aef247da0a6209e889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 09:11:38 compute-0 compassionate_hermann[345506]: 167 167
Oct 14 09:11:38 compute-0 systemd[1]: libpod-1e17184c792263540febcf83ad458e65c31aa2d06df673aef247da0a6209e889.scope: Deactivated successfully.
Oct 14 09:11:38 compute-0 podman[345486]: 2025-10-14 09:11:38.71750878 +0000 UTC m=+0.131245742 container create 46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 09:11:38 compute-0 podman[345462]: 2025-10-14 09:11:38.722752659 +0000 UTC m=+0.197146875 container died 1e17184c792263540febcf83ad458e65c31aa2d06df673aef247da0a6209e889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:11:38 compute-0 podman[345486]: 2025-10-14 09:11:38.65496491 +0000 UTC m=+0.068701872 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.759 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image 62fe725d-b24a-477a-a275-06d2cd960aaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:38 compute-0 systemd[1]: Started libpod-conmon-46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89.scope.
Oct 14 09:11:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:11:38 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1039755345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:38 compute-0 podman[345462]: 2025-10-14 09:11:38.764571619 +0000 UTC m=+0.238965835 container remove 1e17184c792263540febcf83ad458e65c31aa2d06df673aef247da0a6209e889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 09:11:38 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:11:38 compute-0 systemd[1]: libpod-conmon-1e17184c792263540febcf83ad458e65c31aa2d06df673aef247da0a6209e889.scope: Deactivated successfully.
Oct 14 09:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f890f6c75aac33a3bd8ad553f6090efd1c052d214d952027d2bd4c2c78b53ada/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.799 2 DEBUG nova.network.neutron [req-960f745b-a182-48d2-922b-845f86d73237 req-d707c736-c5a9-4975-91a2-1d5b3f6aa885 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Updated VIF entry in instance network info cache for port 4006119e-fa08-4095-bba7-d338c82ac066. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.800 2 DEBUG nova.network.neutron [req-960f745b-a182-48d2-922b-845f86d73237 req-d707c736-c5a9-4975-91a2-1d5b3f6aa885 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Updating instance_info_cache with network_info: [{"id": "4006119e-fa08-4095-bba7-d338c82ac066", "address": "fa:16:3e:be:3e:79", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4006119e-fa", "ovs_interfaceid": "4006119e-fa08-4095-bba7-d338c82ac066", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-8110313d6f547bde66f71b5d297ada7a0b0767b9e8349280e502cd05a758cc91-merged.mount: Deactivated successfully.
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.806 2 DEBUG nova.compute.manager [req-2be3bcfa-92c4-45c8-8694-bb5cf72353cc req-4b024a57-4075-4650-acef-8237ac54270c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Received event network-vif-deleted-85b2e7ea-f418-4eba-9a49-be2af576436f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.807 2 DEBUG nova.compute.manager [req-2be3bcfa-92c4-45c8-8694-bb5cf72353cc req-4b024a57-4075-4650-acef-8237ac54270c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.807 2 DEBUG oslo_concurrency.lockutils [req-2be3bcfa-92c4-45c8-8694-bb5cf72353cc req-4b024a57-4075-4650-acef-8237ac54270c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.807 2 DEBUG oslo_concurrency.lockutils [req-2be3bcfa-92c4-45c8-8694-bb5cf72353cc req-4b024a57-4075-4650-acef-8237ac54270c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.808 2 DEBUG oslo_concurrency.lockutils [req-2be3bcfa-92c4-45c8-8694-bb5cf72353cc req-4b024a57-4075-4650-acef-8237ac54270c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.808 2 DEBUG nova.compute.manager [req-2be3bcfa-92c4-45c8-8694-bb5cf72353cc req-4b024a57-4075-4650-acef-8237ac54270c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Processing event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.808 2 DEBUG nova.compute.manager [req-2be3bcfa-92c4-45c8-8694-bb5cf72353cc req-4b024a57-4075-4650-acef-8237ac54270c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.808 2 DEBUG oslo_concurrency.lockutils [req-2be3bcfa-92c4-45c8-8694-bb5cf72353cc req-4b024a57-4075-4650-acef-8237ac54270c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.809 2 DEBUG oslo_concurrency.lockutils [req-2be3bcfa-92c4-45c8-8694-bb5cf72353cc req-4b024a57-4075-4650-acef-8237ac54270c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.809 2 DEBUG oslo_concurrency.lockutils [req-2be3bcfa-92c4-45c8-8694-bb5cf72353cc req-4b024a57-4075-4650-acef-8237ac54270c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.809 2 DEBUG nova.compute.manager [req-2be3bcfa-92c4-45c8-8694-bb5cf72353cc req-4b024a57-4075-4650-acef-8237ac54270c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] No waiting events found dispatching network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.809 2 WARNING nova.compute.manager [req-2be3bcfa-92c4-45c8-8694-bb5cf72353cc req-4b024a57-4075-4650-acef-8237ac54270c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received unexpected event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe for instance with vm_state building and task_state spawning.
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.810 2 DEBUG nova.compute.manager [req-2be3bcfa-92c4-45c8-8694-bb5cf72353cc req-4b024a57-4075-4650-acef-8237ac54270c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Received event network-vif-plugged-8fb091af-e492-4374-b3c2-7ab4157389a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.810 2 DEBUG oslo_concurrency.lockutils [req-2be3bcfa-92c4-45c8-8694-bb5cf72353cc req-4b024a57-4075-4650-acef-8237ac54270c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.810 2 DEBUG oslo_concurrency.lockutils [req-2be3bcfa-92c4-45c8-8694-bb5cf72353cc req-4b024a57-4075-4650-acef-8237ac54270c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.810 2 DEBUG oslo_concurrency.lockutils [req-2be3bcfa-92c4-45c8-8694-bb5cf72353cc req-4b024a57-4075-4650-acef-8237ac54270c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.811 2 DEBUG nova.compute.manager [req-2be3bcfa-92c4-45c8-8694-bb5cf72353cc req-4b024a57-4075-4650-acef-8237ac54270c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Processing event network-vif-plugged-8fb091af-e492-4374-b3c2-7ab4157389a6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:11:38 compute-0 podman[345486]: 2025-10-14 09:11:38.812623593 +0000 UTC m=+0.226360555 container init 46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.813 2 DEBUG oslo_concurrency.processutils [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.962s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:38 compute-0 podman[345486]: 2025-10-14 09:11:38.819655636 +0000 UTC m=+0.233392598 container start 46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.826 2 DEBUG nova.compute.provider_tree [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.832 2 DEBUG oslo_concurrency.lockutils [req-960f745b-a182-48d2-922b-845f86d73237 req-d707c736-c5a9-4975-91a2-1d5b3f6aa885 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-62fe725d-b24a-477a-a275-06d2cd960aaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:11:38 compute-0 neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57[345545]: [NOTICE]   (345552) : New worker (345554) forked
Oct 14 09:11:38 compute-0 neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57[345545]: [NOTICE]   (345552) : Loading success.
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.849 2 DEBUG nova.scheduler.client.report [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.883 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8fb091af-e492-4374-b3c2-7ab4157389a6 in datapath 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 unbound from our chassis
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.886 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.892 2 DEBUG oslo_concurrency.lockutils [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.904 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bffc3cb6-0d01-469e-8e13-a02ee880aeae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:38 compute-0 nova_compute[259627]: 2025-10-14 09:11:38.927 2 INFO nova.scheduler.client.report [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Deleted allocations for instance d1c0470c-5f74-43c3-a206-07147fa01d5e
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.935 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0d82ec5b-dd15-44e4-b030-cfc95ebb9c45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.939 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[02a949fd-2cab-49f7-97a1-1251fdd74416]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:38 compute-0 podman[345570]: 2025-10-14 09:11:38.958123476 +0000 UTC m=+0.046778773 container create 421d3afc6f180db4672a2d2f282b1327825e116b300a6045bd185f7fe6ebdb2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.972 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e279f4c7-6c77-4d9d-bcd4-2cbd7dd8fd1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:38.993 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[11d9186d-71f9-41da-b859-f84542746263]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b7bccdd-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 306, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 306, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687671, 'reachable_time': 27157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345586, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:39.010 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c66609-096c-4d34-853c-c55d00fc133d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7b7bccdd-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687688, 'tstamp': 687688}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345589, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7b7bccdd-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687695, 'tstamp': 687695}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345589, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:39.012 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b7bccdd-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:39 compute-0 systemd[1]: Started libpod-conmon-421d3afc6f180db4672a2d2f282b1327825e116b300a6045bd185f7fe6ebdb2f.scope.
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:39.019 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b7bccdd-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:39.019 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:39.020 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b7bccdd-40, col_values=(('external_ids', {'iface-id': '8bb0b8f1-e510-498b-862e-2d74544dc8a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:39.020 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.030 2 DEBUG oslo_concurrency.lockutils [None req-c1af6a8f-83ff-415b-b8ca-fb665a5a8866 f9bf2c63a22b4957a35bb3b62129ab7c e735ef38811e4376af72e0a380aba1bb - - default default] Lock "d1c0470c-5f74-43c3-a206-07147fa01d5e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:39 compute-0 podman[345570]: 2025-10-14 09:11:38.938238896 +0000 UTC m=+0.026894223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:11:39 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a8e21184f62971629e1a30a26e09da171559e121ff14d128605f1c645658ec9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a8e21184f62971629e1a30a26e09da171559e121ff14d128605f1c645658ec9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a8e21184f62971629e1a30a26e09da171559e121ff14d128605f1c645658ec9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a8e21184f62971629e1a30a26e09da171559e121ff14d128605f1c645658ec9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:11:39 compute-0 podman[345570]: 2025-10-14 09:11:39.069255663 +0000 UTC m=+0.157910970 container init 421d3afc6f180db4672a2d2f282b1327825e116b300a6045bd185f7fe6ebdb2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.069 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433099.0695472, ab77dbf7-4458-4b16-a2e7-ed73be047838 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.070 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] VM Started (Lifecycle Event)
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.071 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.076 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.079 2 INFO nova.virt.libvirt.driver [-] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Instance spawned successfully.
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.079 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:11:39 compute-0 podman[345570]: 2025-10-14 09:11:39.081486444 +0000 UTC m=+0.170141741 container start 421d3afc6f180db4672a2d2f282b1327825e116b300a6045bd185f7fe6ebdb2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_fermi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Oct 14 09:11:39 compute-0 podman[345570]: 2025-10-14 09:11:39.086097798 +0000 UTC m=+0.174753145 container attach 421d3afc6f180db4672a2d2f282b1327825e116b300a6045bd185f7fe6ebdb2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.088 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.091 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.099 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.099 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.099 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.099 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.100 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.100 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.108 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.108 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433099.0711691, ab77dbf7-4458-4b16-a2e7-ed73be047838 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.109 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] VM Paused (Lifecycle Event)
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.135 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.138 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433099.075336, ab77dbf7-4458-4b16-a2e7-ed73be047838 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.138 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] VM Resumed (Lifecycle Event)
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.150 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.153 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.160 2 INFO nova.virt.libvirt.driver [-] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Instance spawned successfully.
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.160 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.183 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.186 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.194 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.195 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.195 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.195 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.196 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.196 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.207 2 INFO nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Took 8.70 seconds to spawn the instance on the hypervisor.
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.207 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.230 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.230 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433099.1507213, ed2aee1e-f632-4d7f-ae03-f5d9c41e9104 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.230 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] VM Started (Lifecycle Event)
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.260 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.263 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.280 2 INFO nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Took 7.78 seconds to spawn the instance on the hypervisor.
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.280 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.281 2 INFO nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Took 10.30 seconds to build instance.
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.289 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.289 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433099.1513348, ed2aee1e-f632-4d7f-ae03-f5d9c41e9104 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.289 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] VM Paused (Lifecycle Event)
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.306 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.312 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433099.1559663, ed2aee1e-f632-4d7f-ae03-f5d9c41e9104 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.313 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] VM Resumed (Lifecycle Event)
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.316 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.334 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.337 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.351 2 INFO nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Took 10.24 seconds to build instance.
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.366 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.517 2 INFO nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Creating config drive at /var/lib/nova/instances/62fe725d-b24a-477a-a275-06d2cd960aaf/disk.config
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.523 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/62fe725d-b24a-477a-a275-06d2cd960aaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpptdrrahj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:39 compute-0 ceph-mon[74249]: pgmap v1715: 305 pgs: 305 active+clean; 366 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 8.9 MiB/s wr, 354 op/s
Oct 14 09:11:39 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1119848970' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:11:39 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1039755345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.690 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/62fe725d-b24a-477a-a275-06d2cd960aaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpptdrrahj" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.761 2 DEBUG nova.storage.rbd_utils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] rbd image 62fe725d-b24a-477a-a275-06d2cd960aaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.768 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/62fe725d-b24a-477a-a275-06d2cd960aaf/disk.config 62fe725d-b24a-477a-a275-06d2cd960aaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 366 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 8.9 MiB/s wr, 354 op/s
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.966 2 DEBUG oslo_concurrency.processutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/62fe725d-b24a-477a-a275-06d2cd960aaf/disk.config 62fe725d-b24a-477a-a275-06d2cd960aaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:39 compute-0 nova_compute[259627]: 2025-10-14 09:11:39.967 2 INFO nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Deleting local config drive /var/lib/nova/instances/62fe725d-b24a-477a-a275-06d2cd960aaf/disk.config because it was imported into RBD.
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.012 2 DEBUG nova.network.neutron [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Updated VIF entry in instance network info cache for port c9b0841b-401d-4f72-aa51-209173353afe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.013 2 DEBUG nova.network.neutron [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Updating instance_info_cache with network_info: [{"id": "c9b0841b-401d-4f72-aa51-209173353afe", "address": "fa:16:3e:6d:e1:a1", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b0841b-40", "ovs_interfaceid": "c9b0841b-401d-4f72-aa51-209173353afe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:40 compute-0 kernel: tap4006119e-fa: entered promiscuous mode
Oct 14 09:11:40 compute-0 NetworkManager[44885]: <info>  [1760433100.0267] manager: (tap4006119e-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/361)
Oct 14 09:11:40 compute-0 ovn_controller[152662]: 2025-10-14T09:11:40Z|00849|binding|INFO|Claiming lport 4006119e-fa08-4095-bba7-d338c82ac066 for this chassis.
Oct 14 09:11:40 compute-0 ovn_controller[152662]: 2025-10-14T09:11:40Z|00850|binding|INFO|4006119e-fa08-4095-bba7-d338c82ac066: Claiming fa:16:3e:be:3e:79 10.100.0.11
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:40 compute-0 NetworkManager[44885]: <info>  [1760433100.0393] device (tap4006119e-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:11:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:40.036 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:3e:79 10.100.0.11'], port_security=['fa:16:3e:be:3e:79 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '62fe725d-b24a-477a-a275-06d2cd960aaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d2f1337472a4407869e16f2271280ef', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6cb91c5c-0b16-4305-8e1d-968c62c3d3c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9bea81a-55e6-4c5b-a0e6-3add7d9a646a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4006119e-fa08-4095-bba7-d338c82ac066) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:11:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:40.037 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4006119e-fa08-4095-bba7-d338c82ac066 in datapath 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 bound to our chassis
Oct 14 09:11:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:40.038 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57
Oct 14 09:11:40 compute-0 NetworkManager[44885]: <info>  [1760433100.0402] device (tap4006119e-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.038 2 DEBUG oslo_concurrency.lockutils [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-ab77dbf7-4458-4b16-a2e7-ed73be047838" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.038 2 DEBUG nova.compute.manager [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.038 2 DEBUG oslo_concurrency.lockutils [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.039 2 DEBUG oslo_concurrency.lockutils [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.039 2 DEBUG oslo_concurrency.lockutils [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.039 2 DEBUG nova.compute.manager [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Processing event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.039 2 DEBUG nova.compute.manager [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.039 2 DEBUG oslo_concurrency.lockutils [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.039 2 DEBUG oslo_concurrency.lockutils [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.040 2 DEBUG oslo_concurrency.lockutils [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.040 2 DEBUG nova.compute.manager [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] No waiting events found dispatching network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.040 2 WARNING nova.compute.manager [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received unexpected event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 for instance with vm_state building and task_state spawning.
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.040 2 DEBUG nova.compute.manager [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Received event network-vif-unplugged-85b2e7ea-f418-4eba-9a49-be2af576436f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.040 2 DEBUG oslo_concurrency.lockutils [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d1c0470c-5f74-43c3-a206-07147fa01d5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.040 2 DEBUG oslo_concurrency.lockutils [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d1c0470c-5f74-43c3-a206-07147fa01d5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.040 2 DEBUG oslo_concurrency.lockutils [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d1c0470c-5f74-43c3-a206-07147fa01d5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.041 2 DEBUG nova.compute.manager [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] No waiting events found dispatching network-vif-unplugged-85b2e7ea-f418-4eba-9a49-be2af576436f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.041 2 DEBUG nova.compute.manager [req-36c4febc-7742-4a81-8536-d9867c457eb8 req-12116d60-37e8-47b3-80be-61acb65513a6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Received event network-vif-unplugged-85b2e7ea-f418-4eba-9a49-be2af576436f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.041 2 DEBUG nova.compute.manager [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.050 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433100.0500762, 1141f79e-2e47-40f1-91b0-275a9fac765c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.050 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] VM Resumed (Lifecycle Event)
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.053 2 DEBUG nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:11:40 compute-0 ovn_controller[152662]: 2025-10-14T09:11:40Z|00851|binding|INFO|Setting lport 4006119e-fa08-4095-bba7-d338c82ac066 ovn-installed in OVS
Oct 14 09:11:40 compute-0 ovn_controller[152662]: 2025-10-14T09:11:40Z|00852|binding|INFO|Setting lport 4006119e-fa08-4095-bba7-d338c82ac066 up in Southbound
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.063 2 INFO nova.virt.libvirt.driver [-] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Instance spawned successfully.
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.064 2 DEBUG nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:11:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:40.063 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4344e426-7c76-4acb-9979-11b2b7cd12e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.069 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:40 compute-0 systemd-machined[214636]: New machine qemu-105-instance-00000055.
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.079 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.084 2 DEBUG nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.084 2 DEBUG nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.085 2 DEBUG nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.085 2 DEBUG nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.085 2 DEBUG nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.086 2 DEBUG nova.virt.libvirt.driver [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:40 compute-0 systemd[1]: Started Virtual Machine qemu-105-instance-00000055.
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.108 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:11:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:40.114 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8f44ac-d2a6-4865-8253-5608ce9858b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:40.117 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[08fbaacd-5073-4f76-8a3e-7e661a21fe45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.139 2 INFO nova.compute.manager [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Took 12.84 seconds to spawn the instance on the hypervisor.
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.139 2 DEBUG nova.compute.manager [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:40.167 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8af5a9-250e-46c6-82e1-70c882b82d6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:40 compute-0 sharp_fermi[345591]: {
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:         "osd_id": 2,
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:         "type": "bluestore"
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:     },
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:         "osd_id": 1,
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:         "type": "bluestore"
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:     },
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:         "osd_id": 0,
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:         "type": "bluestore"
Oct 14 09:11:40 compute-0 sharp_fermi[345591]:     }
Oct 14 09:11:40 compute-0 sharp_fermi[345591]: }
Oct 14 09:11:40 compute-0 systemd[1]: libpod-421d3afc6f180db4672a2d2f282b1327825e116b300a6045bd185f7fe6ebdb2f.scope: Deactivated successfully.
Oct 14 09:11:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:40.205 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a5a0ca9a-10a3-4dca-8c75-f528be845458]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b7bccdd-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687671, 'reachable_time': 27157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345692, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:40 compute-0 podman[345570]: 2025-10-14 09:11:40.206378566 +0000 UTC m=+1.295033863 container died 421d3afc6f180db4672a2d2f282b1327825e116b300a6045bd185f7fe6ebdb2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_fermi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.216 2 INFO nova.compute.manager [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Took 13.91 seconds to build instance.
Oct 14 09:11:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a8e21184f62971629e1a30a26e09da171559e121ff14d128605f1c645658ec9-merged.mount: Deactivated successfully.
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.238 2 DEBUG oslo_concurrency.lockutils [None req-9e48382c-e0b5-4040-9f08-e09efcd2e683 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:40 compute-0 podman[345570]: 2025-10-14 09:11:40.259561606 +0000 UTC m=+1.348216903 container remove 421d3afc6f180db4672a2d2f282b1327825e116b300a6045bd185f7fe6ebdb2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_fermi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 09:11:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:40.262 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[280d715c-6b19-43e3-878a-a475a6bb4586]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7b7bccdd-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687688, 'tstamp': 687688}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345695, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7b7bccdd-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687695, 'tstamp': 687695}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345695, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:40.263 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b7bccdd-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:40.269 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b7bccdd-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:40.270 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:40.270 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b7bccdd-40, col_values=(('external_ids', {'iface-id': '8bb0b8f1-e510-498b-862e-2d74544dc8a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:40.270 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:40 compute-0 systemd[1]: libpod-conmon-421d3afc6f180db4672a2d2f282b1327825e116b300a6045bd185f7fe6ebdb2f.scope: Deactivated successfully.
Oct 14 09:11:40 compute-0 sudo[345285]: pam_unix(sudo:session): session closed for user root
Oct 14 09:11:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:11:40 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:11:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:11:40 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:11:40 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 82265f9d-b95b-4a1d-8685-d3e21d2a2cab does not exist
Oct 14 09:11:40 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev d703a5f0-a7cd-4bc2-80f5-510e2c060c96 does not exist
Oct 14 09:11:40 compute-0 sudo[345719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:11:40 compute-0 sudo[345719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:11:40 compute-0 sudo[345719]: pam_unix(sudo:session): session closed for user root
Oct 14 09:11:40 compute-0 sudo[345765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:11:40 compute-0 sudo[345765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:11:40 compute-0 sudo[345765]: pam_unix(sudo:session): session closed for user root
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.640 2 DEBUG nova.compute.manager [req-39687059-2d02-46e2-921b-aaebb1d38fda req-e6e5ed99-7316-4326-b4b5-b41c6a8d4b5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Received event network-vif-plugged-4006119e-fa08-4095-bba7-d338c82ac066 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.640 2 DEBUG oslo_concurrency.lockutils [req-39687059-2d02-46e2-921b-aaebb1d38fda req-e6e5ed99-7316-4326-b4b5-b41c6a8d4b5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.640 2 DEBUG oslo_concurrency.lockutils [req-39687059-2d02-46e2-921b-aaebb1d38fda req-e6e5ed99-7316-4326-b4b5-b41c6a8d4b5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.640 2 DEBUG oslo_concurrency.lockutils [req-39687059-2d02-46e2-921b-aaebb1d38fda req-e6e5ed99-7316-4326-b4b5-b41c6a8d4b5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.640 2 DEBUG nova.compute.manager [req-39687059-2d02-46e2-921b-aaebb1d38fda req-e6e5ed99-7316-4326-b4b5-b41c6a8d4b5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Processing event network-vif-plugged-4006119e-fa08-4095-bba7-d338c82ac066 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.641 2 DEBUG nova.compute.manager [req-39687059-2d02-46e2-921b-aaebb1d38fda req-e6e5ed99-7316-4326-b4b5-b41c6a8d4b5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Received event network-vif-plugged-4006119e-fa08-4095-bba7-d338c82ac066 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.641 2 DEBUG oslo_concurrency.lockutils [req-39687059-2d02-46e2-921b-aaebb1d38fda req-e6e5ed99-7316-4326-b4b5-b41c6a8d4b5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.641 2 DEBUG oslo_concurrency.lockutils [req-39687059-2d02-46e2-921b-aaebb1d38fda req-e6e5ed99-7316-4326-b4b5-b41c6a8d4b5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.641 2 DEBUG oslo_concurrency.lockutils [req-39687059-2d02-46e2-921b-aaebb1d38fda req-e6e5ed99-7316-4326-b4b5-b41c6a8d4b5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.641 2 DEBUG nova.compute.manager [req-39687059-2d02-46e2-921b-aaebb1d38fda req-e6e5ed99-7316-4326-b4b5-b41c6a8d4b5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] No waiting events found dispatching network-vif-plugged-4006119e-fa08-4095-bba7-d338c82ac066 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.641 2 WARNING nova.compute.manager [req-39687059-2d02-46e2-921b-aaebb1d38fda req-e6e5ed99-7316-4326-b4b5-b41c6a8d4b5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Received unexpected event network-vif-plugged-4006119e-fa08-4095-bba7-d338c82ac066 for instance with vm_state building and task_state spawning.
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.855 2 DEBUG nova.compute.manager [req-ccbb51eb-4ccd-42b5-93ab-8cc47dd984fe req-98e5fa31-593b-42f3-8532-1fa906b90824 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Received event network-vif-plugged-8fb091af-e492-4374-b3c2-7ab4157389a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.855 2 DEBUG oslo_concurrency.lockutils [req-ccbb51eb-4ccd-42b5-93ab-8cc47dd984fe req-98e5fa31-593b-42f3-8532-1fa906b90824 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.856 2 DEBUG oslo_concurrency.lockutils [req-ccbb51eb-4ccd-42b5-93ab-8cc47dd984fe req-98e5fa31-593b-42f3-8532-1fa906b90824 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.856 2 DEBUG oslo_concurrency.lockutils [req-ccbb51eb-4ccd-42b5-93ab-8cc47dd984fe req-98e5fa31-593b-42f3-8532-1fa906b90824 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.856 2 DEBUG nova.compute.manager [req-ccbb51eb-4ccd-42b5-93ab-8cc47dd984fe req-98e5fa31-593b-42f3-8532-1fa906b90824 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] No waiting events found dispatching network-vif-plugged-8fb091af-e492-4374-b3c2-7ab4157389a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.856 2 WARNING nova.compute.manager [req-ccbb51eb-4ccd-42b5-93ab-8cc47dd984fe req-98e5fa31-593b-42f3-8532-1fa906b90824 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Received unexpected event network-vif-plugged-8fb091af-e492-4374-b3c2-7ab4157389a6 for instance with vm_state active and task_state None.
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.985 2 INFO nova.compute.manager [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Rescuing
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.986 2 DEBUG oslo_concurrency.lockutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "refresh_cache-1141f79e-2e47-40f1-91b0-275a9fac765c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.986 2 DEBUG oslo_concurrency.lockutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquired lock "refresh_cache-1141f79e-2e47-40f1-91b0-275a9fac765c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:11:40 compute-0 nova_compute[259627]: 2025-10-14 09:11:40.986 2 DEBUG nova.network.neutron [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.170 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433101.1698232, 62fe725d-b24a-477a-a275-06d2cd960aaf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.171 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] VM Started (Lifecycle Event)
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.173 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.177 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.180 2 INFO nova.virt.libvirt.driver [-] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Instance spawned successfully.
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.180 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.187 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.190 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.198 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.199 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.199 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.200 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.200 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.201 2 DEBUG nova.virt.libvirt.driver [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.206 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.206 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433101.1699388, 62fe725d-b24a-477a-a275-06d2cd960aaf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.206 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] VM Paused (Lifecycle Event)
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.236 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.239 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433101.175263, 62fe725d-b24a-477a-a275-06d2cd960aaf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.239 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] VM Resumed (Lifecycle Event)
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.261 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.265 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.274 2 INFO nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Took 8.68 seconds to spawn the instance on the hypervisor.
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.275 2 DEBUG nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.285 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:11:41 compute-0 ceph-mon[74249]: pgmap v1716: 305 pgs: 305 active+clean; 366 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 8.9 MiB/s wr, 354 op/s
Oct 14 09:11:41 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:11:41 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.333 2 INFO nova.compute.manager [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Took 12.04 seconds to build instance.
Oct 14 09:11:41 compute-0 nova_compute[259627]: 2025-10-14 09:11:41.348 2 DEBUG oslo_concurrency.lockutils [None req-cbea5373-e616-46de-8e81-0595f48dbd3c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 341 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 11 MiB/s wr, 502 op/s
Oct 14 09:11:41 compute-0 ovn_controller[152662]: 2025-10-14T09:11:41Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:07:23:80 10.100.0.11
Oct 14 09:11:41 compute-0 ovn_controller[152662]: 2025-10-14T09:11:41Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:23:80 10.100.0.11
Oct 14 09:11:42 compute-0 nova_compute[259627]: 2025-10-14 09:11:42.460 2 DEBUG nova.network.neutron [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Updating instance_info_cache with network_info: [{"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:42 compute-0 nova_compute[259627]: 2025-10-14 09:11:42.507 2 DEBUG oslo_concurrency.lockutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Releasing lock "refresh_cache-1141f79e-2e47-40f1-91b0-275a9fac765c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:11:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:11:42 compute-0 nova_compute[259627]: 2025-10-14 09:11:42.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:42 compute-0 nova_compute[259627]: 2025-10-14 09:11:42.852 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0024856971779970045 of space, bias 1.0, pg target 0.7457091533991014 quantized to 32 (current 32)
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:11:43 compute-0 ceph-mon[74249]: pgmap v1717: 305 pgs: 305 active+clean; 341 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 11 MiB/s wr, 502 op/s
Oct 14 09:11:43 compute-0 nova_compute[259627]: 2025-10-14 09:11:43.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:43 compute-0 ovn_controller[152662]: 2025-10-14T09:11:43Z|00853|binding|INFO|Releasing lport 8bb0b8f1-e510-498b-862e-2d74544dc8a6 from this chassis (sb_readonly=0)
Oct 14 09:11:43 compute-0 ovn_controller[152662]: 2025-10-14T09:11:43Z|00854|binding|INFO|Releasing lport 970f2645-7ec3-4b7f-8527-871800c728d8 from this chassis (sb_readonly=0)
Oct 14 09:11:43 compute-0 ovn_controller[152662]: 2025-10-14T09:11:43Z|00855|binding|INFO|Releasing lport d2ea0b85-033d-47f5-af89-2df076f2ce40 from this chassis (sb_readonly=0)
Oct 14 09:11:43 compute-0 nova_compute[259627]: 2025-10-14 09:11:43.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 341 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 7.4 MiB/s wr, 375 op/s
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.464 2 DEBUG oslo_concurrency.lockutils [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "ab77dbf7-4458-4b16-a2e7-ed73be047838" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.465 2 DEBUG oslo_concurrency.lockutils [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.465 2 DEBUG oslo_concurrency.lockutils [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.465 2 DEBUG oslo_concurrency.lockutils [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.466 2 DEBUG oslo_concurrency.lockutils [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.467 2 INFO nova.compute.manager [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Terminating instance
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.468 2 DEBUG nova.compute.manager [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:11:44 compute-0 kernel: tapc9b0841b-40 (unregistering): left promiscuous mode
Oct 14 09:11:44 compute-0 NetworkManager[44885]: <info>  [1760433104.5321] device (tapc9b0841b-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:44 compute-0 ovn_controller[152662]: 2025-10-14T09:11:44Z|00856|binding|INFO|Releasing lport c9b0841b-401d-4f72-aa51-209173353afe from this chassis (sb_readonly=0)
Oct 14 09:11:44 compute-0 ovn_controller[152662]: 2025-10-14T09:11:44Z|00857|binding|INFO|Setting lport c9b0841b-401d-4f72-aa51-209173353afe down in Southbound
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:44 compute-0 ovn_controller[152662]: 2025-10-14T09:11:44Z|00858|binding|INFO|Removing iface tapc9b0841b-40 ovn-installed in OVS
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.555 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:e1:a1 10.100.0.10'], port_security=['fa:16:3e:6d:e1:a1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ab77dbf7-4458-4b16-a2e7-ed73be047838', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d2f1337472a4407869e16f2271280ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6cb91c5c-0b16-4305-8e1d-968c62c3d3c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9bea81a-55e6-4c5b-a0e6-3add7d9a646a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c9b0841b-401d-4f72-aa51-209173353afe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.557 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c9b0841b-401d-4f72-aa51-209173353afe in datapath 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 unbound from our chassis
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.559 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.582 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b5af41c7-e4b8-4bb2-a7dd-1fa2b503f14f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:44 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000053.scope: Deactivated successfully.
Oct 14 09:11:44 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000053.scope: Consumed 5.982s CPU time.
Oct 14 09:11:44 compute-0 systemd-machined[214636]: Machine qemu-103-instance-00000053 terminated.
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.627 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d26c8b-19f3-441a-8ce4-92e3c371a9b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.631 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6c025865-80fd-4700-b24a-2b458b6663ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.670 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0097a940-5af9-4b9c-9736-1b60c4358c5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:44 compute-0 kernel: tapc9b0841b-40: entered promiscuous mode
Oct 14 09:11:44 compute-0 NetworkManager[44885]: <info>  [1760433104.6895] manager: (tapc9b0841b-40): new Tun device (/org/freedesktop/NetworkManager/Devices/362)
Oct 14 09:11:44 compute-0 ovn_controller[152662]: 2025-10-14T09:11:44Z|00859|binding|INFO|Claiming lport c9b0841b-401d-4f72-aa51-209173353afe for this chassis.
Oct 14 09:11:44 compute-0 kernel: tapc9b0841b-40 (unregistering): left promiscuous mode
Oct 14 09:11:44 compute-0 ovn_controller[152662]: 2025-10-14T09:11:44Z|00860|binding|INFO|c9b0841b-401d-4f72-aa51-209173353afe: Claiming fa:16:3e:6d:e1:a1 10.100.0.10
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.699 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:e1:a1 10.100.0.10'], port_security=['fa:16:3e:6d:e1:a1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ab77dbf7-4458-4b16-a2e7-ed73be047838', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d2f1337472a4407869e16f2271280ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6cb91c5c-0b16-4305-8e1d-968c62c3d3c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9bea81a-55e6-4c5b-a0e6-3add7d9a646a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c9b0841b-401d-4f72-aa51-209173353afe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.699 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf3843b-04b6-4c92-9dee-9af406eaaf68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b7bccdd-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687671, 'reachable_time': 27157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345807, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.726 2 INFO nova.virt.libvirt.driver [-] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Instance destroyed successfully.
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.727 2 DEBUG nova.objects.instance [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lazy-loading 'resources' on Instance uuid ab77dbf7-4458-4b16-a2e7-ed73be047838 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.724 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[161d58ed-2788-4b78-868b-68619635cf01]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7b7bccdd-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687688, 'tstamp': 687688}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345811, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7b7bccdd-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687695, 'tstamp': 687695}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345811, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.729 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b7bccdd-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:44 compute-0 ovn_controller[152662]: 2025-10-14T09:11:44Z|00861|binding|INFO|Setting lport c9b0841b-401d-4f72-aa51-209173353afe ovn-installed in OVS
Oct 14 09:11:44 compute-0 ovn_controller[152662]: 2025-10-14T09:11:44Z|00862|binding|INFO|Setting lport c9b0841b-401d-4f72-aa51-209173353afe up in Southbound
Oct 14 09:11:44 compute-0 ovn_controller[152662]: 2025-10-14T09:11:44Z|00863|binding|INFO|Releasing lport c9b0841b-401d-4f72-aa51-209173353afe from this chassis (sb_readonly=1)
Oct 14 09:11:44 compute-0 ovn_controller[152662]: 2025-10-14T09:11:44Z|00864|if_status|INFO|Dropped 1 log messages in last 304 seconds (most recently, 304 seconds ago) due to excessive rate
Oct 14 09:11:44 compute-0 ovn_controller[152662]: 2025-10-14T09:11:44Z|00865|if_status|INFO|Not setting lport c9b0841b-401d-4f72-aa51-209173353afe down as sb is readonly
Oct 14 09:11:44 compute-0 ovn_controller[152662]: 2025-10-14T09:11:44Z|00866|binding|INFO|Removing iface tapc9b0841b-40 ovn-installed in OVS
Oct 14 09:11:44 compute-0 ovn_controller[152662]: 2025-10-14T09:11:44Z|00867|binding|INFO|Releasing lport c9b0841b-401d-4f72-aa51-209173353afe from this chassis (sb_readonly=0)
Oct 14 09:11:44 compute-0 ovn_controller[152662]: 2025-10-14T09:11:44Z|00868|binding|INFO|Setting lport c9b0841b-401d-4f72-aa51-209173353afe down in Southbound
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.746 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:e1:a1 10.100.0.10'], port_security=['fa:16:3e:6d:e1:a1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ab77dbf7-4458-4b16-a2e7-ed73be047838', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d2f1337472a4407869e16f2271280ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6cb91c5c-0b16-4305-8e1d-968c62c3d3c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9bea81a-55e6-4c5b-a0e6-3add7d9a646a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c9b0841b-401d-4f72-aa51-209173353afe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.748 2 DEBUG nova.virt.libvirt.vif [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:11:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-792258866',display_name='tempest-ListServersNegativeTestJSON-server-792258866-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-792258866-1',id=83,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:11:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d2f1337472a4407869e16f2271280ef',ramdisk_id='',reservation_id='r-f3phfmf2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vir
tio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1864242738',owner_user_name='tempest-ListServersNegativeTestJSON-1864242738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:11:39Z,user_data=None,user_id='bca591d7e37e4881bc4de44ee172b2f4',uuid=ab77dbf7-4458-4b16-a2e7-ed73be047838,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9b0841b-401d-4f72-aa51-209173353afe", "address": "fa:16:3e:6d:e1:a1", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b0841b-40", "ovs_interfaceid": "c9b0841b-401d-4f72-aa51-209173353afe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.748 2 DEBUG nova.network.os_vif_util [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converting VIF {"id": "c9b0841b-401d-4f72-aa51-209173353afe", "address": "fa:16:3e:6d:e1:a1", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b0841b-40", "ovs_interfaceid": "c9b0841b-401d-4f72-aa51-209173353afe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.749 2 DEBUG nova.network.os_vif_util [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:e1:a1,bridge_name='br-int',has_traffic_filtering=True,id=c9b0841b-401d-4f72-aa51-209173353afe,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9b0841b-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.749 2 DEBUG os_vif [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:e1:a1,bridge_name='br-int',has_traffic_filtering=True,id=c9b0841b-401d-4f72-aa51-209173353afe,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9b0841b-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.751 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9b0841b-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.759 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b7bccdd-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.760 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.760 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b7bccdd-40, col_values=(('external_ids', {'iface-id': '8bb0b8f1-e510-498b-862e-2d74544dc8a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.761 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.763 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c9b0841b-401d-4f72-aa51-209173353afe in datapath 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 unbound from our chassis
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.764 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.767 2 INFO os_vif [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:e1:a1,bridge_name='br-int',has_traffic_filtering=True,id=c9b0841b-401d-4f72-aa51-209173353afe,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9b0841b-40')
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.782 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[38d138bd-38de-4741-8b56-f796641e2975]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.817 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd9ddec-309e-47f1-9304-37d03c3442c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.820 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4d91c704-7ee0-4233-b3c3-696325105cfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:44 compute-0 podman[345814]: 2025-10-14 09:11:44.840665015 +0000 UTC m=+0.088288195 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:11:44 compute-0 podman[345812]: 2025-10-14 09:11:44.85794176 +0000 UTC m=+0.110097202 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.859 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[96e631cd-b17a-4b72-b118-522c64d1da52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.878 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[19e26bd4-cb53-4f1e-a6ad-44046e03f45f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b7bccdd-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687671, 'reachable_time': 27157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345872, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.890 2 DEBUG nova.compute.manager [req-ad4692a2-889d-47de-bb5f-ca2c87865c58 req-2fcbd82d-1f55-4f76-b744-ce5ba0502bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-vif-unplugged-c9b0841b-401d-4f72-aa51-209173353afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.890 2 DEBUG oslo_concurrency.lockutils [req-ad4692a2-889d-47de-bb5f-ca2c87865c58 req-2fcbd82d-1f55-4f76-b744-ce5ba0502bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.890 2 DEBUG oslo_concurrency.lockutils [req-ad4692a2-889d-47de-bb5f-ca2c87865c58 req-2fcbd82d-1f55-4f76-b744-ce5ba0502bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.891 2 DEBUG oslo_concurrency.lockutils [req-ad4692a2-889d-47de-bb5f-ca2c87865c58 req-2fcbd82d-1f55-4f76-b744-ce5ba0502bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.891 2 DEBUG nova.compute.manager [req-ad4692a2-889d-47de-bb5f-ca2c87865c58 req-2fcbd82d-1f55-4f76-b744-ce5ba0502bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] No waiting events found dispatching network-vif-unplugged-c9b0841b-401d-4f72-aa51-209173353afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.892 2 DEBUG nova.compute.manager [req-ad4692a2-889d-47de-bb5f-ca2c87865c58 req-2fcbd82d-1f55-4f76-b744-ce5ba0502bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-vif-unplugged-c9b0841b-401d-4f72-aa51-209173353afe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.901 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe9daed-d78f-4185-8445-e4290eff02be]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7b7bccdd-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687688, 'tstamp': 687688}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345876, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7b7bccdd-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687695, 'tstamp': 687695}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345876, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.904 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b7bccdd-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:44 compute-0 nova_compute[259627]: 2025-10-14 09:11:44.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.907 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b7bccdd-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.908 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.908 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b7bccdd-40, col_values=(('external_ids', {'iface-id': '8bb0b8f1-e510-498b-862e-2d74544dc8a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.909 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.910 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c9b0841b-401d-4f72-aa51-209173353afe in datapath 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 unbound from our chassis
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.912 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.929 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[21dd3b47-bc0f-452c-ba2b-22e58352b3d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.968 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[51b1f586-33e7-4f65-bfd2-2abc1b640fd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.972 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f1dbe09c-4fb9-47a7-9f57-459b57caf7d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:45.006 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[10dc728b-642f-466f-bd85-569d68ea0393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:45 compute-0 rsyslogd[1002]: imjournal: 11491 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 14 09:11:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:45.051 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0beea3a8-fc0a-456a-8903-842fa4f6a797]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b7bccdd-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 832, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 832, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687671, 'reachable_time': 27157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345882, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:45.073 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5aaa5cc1-b595-4811-b355-40ebd707945a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7b7bccdd-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687688, 'tstamp': 687688}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345883, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7b7bccdd-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687695, 'tstamp': 687695}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345883, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:45.076 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b7bccdd-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:45 compute-0 nova_compute[259627]: 2025-10-14 09:11:45.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:45.078 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b7bccdd-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:45.079 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:45.079 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b7bccdd-40, col_values=(('external_ids', {'iface-id': '8bb0b8f1-e510-498b-862e-2d74544dc8a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:45.079 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:45 compute-0 nova_compute[259627]: 2025-10-14 09:11:45.238 2 INFO nova.virt.libvirt.driver [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Deleting instance files /var/lib/nova/instances/ab77dbf7-4458-4b16-a2e7-ed73be047838_del
Oct 14 09:11:45 compute-0 nova_compute[259627]: 2025-10-14 09:11:45.240 2 INFO nova.virt.libvirt.driver [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Deletion of /var/lib/nova/instances/ab77dbf7-4458-4b16-a2e7-ed73be047838_del complete
Oct 14 09:11:45 compute-0 nova_compute[259627]: 2025-10-14 09:11:45.298 2 INFO nova.compute.manager [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 14 09:11:45 compute-0 nova_compute[259627]: 2025-10-14 09:11:45.299 2 DEBUG oslo.service.loopingcall [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:11:45 compute-0 nova_compute[259627]: 2025-10-14 09:11:45.299 2 DEBUG nova.compute.manager [-] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:11:45 compute-0 nova_compute[259627]: 2025-10-14 09:11:45.299 2 DEBUG nova.network.neutron [-] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:11:45 compute-0 ceph-mon[74249]: pgmap v1718: 305 pgs: 305 active+clean; 341 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 7.4 MiB/s wr, 375 op/s
Oct 14 09:11:45 compute-0 ovn_controller[152662]: 2025-10-14T09:11:45Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:5e:c8 10.100.0.5
Oct 14 09:11:45 compute-0 ovn_controller[152662]: 2025-10-14T09:11:45Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:5e:c8 10.100.0.5
Oct 14 09:11:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 350 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 9.6 MiB/s wr, 656 op/s
Oct 14 09:11:45 compute-0 nova_compute[259627]: 2025-10-14 09:11:45.979 2 DEBUG nova.network.neutron [-] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:45 compute-0 nova_compute[259627]: 2025-10-14 09:11:45.997 2 INFO nova.compute.manager [-] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Took 0.70 seconds to deallocate network for instance.
Oct 14 09:11:46 compute-0 nova_compute[259627]: 2025-10-14 09:11:46.057 2 DEBUG oslo_concurrency.lockutils [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:46 compute-0 nova_compute[259627]: 2025-10-14 09:11:46.057 2 DEBUG oslo_concurrency.lockutils [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:46 compute-0 nova_compute[259627]: 2025-10-14 09:11:46.069 2 DEBUG nova.compute.manager [req-4d84d474-ddc8-4f15-bf22-2adfc94f0b35 req-0e5ef6dc-4f64-4ba9-bd1b-ecb1545a0acc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-vif-deleted-c9b0841b-401d-4f72-aa51-209173353afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:46 compute-0 nova_compute[259627]: 2025-10-14 09:11:46.220 2 DEBUG oslo_concurrency.processutils [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:11:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4254349499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:46 compute-0 nova_compute[259627]: 2025-10-14 09:11:46.714 2 DEBUG oslo_concurrency.processutils [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:46 compute-0 nova_compute[259627]: 2025-10-14 09:11:46.727 2 DEBUG nova.compute.provider_tree [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:11:46 compute-0 nova_compute[259627]: 2025-10-14 09:11:46.751 2 DEBUG nova.scheduler.client.report [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:11:46 compute-0 nova_compute[259627]: 2025-10-14 09:11:46.779 2 DEBUG oslo_concurrency.lockutils [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:46 compute-0 nova_compute[259627]: 2025-10-14 09:11:46.801 2 INFO nova.scheduler.client.report [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Deleted allocations for instance ab77dbf7-4458-4b16-a2e7-ed73be047838
Oct 14 09:11:46 compute-0 nova_compute[259627]: 2025-10-14 09:11:46.864 2 DEBUG oslo_concurrency.lockutils [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.399s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:46 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:11:46 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 7937 writes, 35K keys, 7937 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 7937 writes, 7937 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1669 writes, 7750 keys, 1669 commit groups, 1.0 writes per commit group, ingest: 9.97 MB, 0.02 MB/s
                                           Interval WAL: 1669 writes, 1669 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     94.3      0.45              0.14        21    0.021       0      0       0.0       0.0
                                             L6      1/0    9.45 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.5    182.7    149.9      1.00              0.48        20    0.050    100K    11K       0.0       0.0
                                            Sum      1/0    9.45 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   4.5    126.2    132.6      1.45              0.62        41    0.035    100K    11K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.7     88.1     90.2      0.58              0.18        10    0.058     31K   3151       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    182.7    149.9      1.00              0.48        20    0.050    100K    11K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     95.7      0.44              0.14        20    0.022       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.041, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.19 GB write, 0.06 MB/s write, 0.18 GB read, 0.06 MB/s read, 1.5 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5646f3b2b1f0#2 capacity: 304.00 MB usage: 22.02 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000158 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1452,21.23 MB,6.98202%) FilterBlock(42,290.30 KB,0.0932543%) IndexBlock(42,526.34 KB,0.169081%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.118 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.119 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.120 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.120 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.121 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] No waiting events found dispatching network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.121 2 WARNING nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received unexpected event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe for instance with vm_state deleted and task_state None.
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.121 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.122 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.122 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.123 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.123 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] No waiting events found dispatching network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.123 2 WARNING nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received unexpected event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe for instance with vm_state deleted and task_state None.
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.124 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.124 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.125 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.125 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.125 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] No waiting events found dispatching network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.126 2 WARNING nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received unexpected event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe for instance with vm_state deleted and task_state None.
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.126 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-vif-unplugged-c9b0841b-401d-4f72-aa51-209173353afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.127 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.127 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.128 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.128 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] No waiting events found dispatching network-vif-unplugged-c9b0841b-401d-4f72-aa51-209173353afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.128 2 WARNING nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received unexpected event network-vif-unplugged-c9b0841b-401d-4f72-aa51-209173353afe for instance with vm_state deleted and task_state None.
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.129 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.129 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.129 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.130 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.130 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] No waiting events found dispatching network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.131 2 WARNING nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received unexpected event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe for instance with vm_state deleted and task_state None.
Oct 14 09:11:47 compute-0 ceph-mon[74249]: pgmap v1719: 305 pgs: 305 active+clean; 350 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 9.6 MiB/s wr, 656 op/s
Oct 14 09:11:47 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4254349499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.497 2 INFO nova.compute.manager [None req-8482ca0e-7f08-4d01-a98d-1b05db4bcb53 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Get console output
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.504 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.746 2 INFO nova.compute.manager [None req-12f90051-391d-48ea-8fbb-12aa75a3f8d2 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Pausing
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.748 2 DEBUG nova.objects.instance [None req-12f90051-391d-48ea-8fbb-12aa75a3f8d2 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'flavor' on Instance uuid 290980d2-08b4-4029-a1c3-becd3457a410 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.782 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433107.7814875, 290980d2-08b4-4029-a1c3-becd3457a410 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.782 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] VM Paused (Lifecycle Event)
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.784 2 DEBUG nova.compute.manager [None req-12f90051-391d-48ea-8fbb-12aa75a3f8d2 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.817 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.822 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 350 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 4.2 MiB/s wr, 428 op/s
Oct 14 09:11:47 compute-0 nova_compute[259627]: 2025-10-14 09:11:47.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:11:49 compute-0 ceph-mon[74249]: pgmap v1720: 305 pgs: 305 active+clean; 350 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 4.2 MiB/s wr, 428 op/s
Oct 14 09:11:49 compute-0 nova_compute[259627]: 2025-10-14 09:11:49.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 350 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 4.2 MiB/s wr, 428 op/s
Oct 14 09:11:49 compute-0 nova_compute[259627]: 2025-10-14 09:11:49.994 2 DEBUG oslo_concurrency.lockutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:49 compute-0 nova_compute[259627]: 2025-10-14 09:11:49.994 2 DEBUG oslo_concurrency.lockutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:49 compute-0 nova_compute[259627]: 2025-10-14 09:11:49.994 2 DEBUG oslo_concurrency.lockutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:49 compute-0 nova_compute[259627]: 2025-10-14 09:11:49.995 2 DEBUG oslo_concurrency.lockutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:49 compute-0 nova_compute[259627]: 2025-10-14 09:11:49.995 2 DEBUG oslo_concurrency.lockutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:49 compute-0 nova_compute[259627]: 2025-10-14 09:11:49.996 2 INFO nova.compute.manager [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Terminating instance
Oct 14 09:11:49 compute-0 nova_compute[259627]: 2025-10-14 09:11:49.997 2 DEBUG nova.compute.manager [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:11:50 compute-0 kernel: tap8fb091af-e4 (unregistering): left promiscuous mode
Oct 14 09:11:50 compute-0 NetworkManager[44885]: <info>  [1760433110.0447] device (tap8fb091af-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:50 compute-0 ovn_controller[152662]: 2025-10-14T09:11:50Z|00869|binding|INFO|Releasing lport 8fb091af-e492-4374-b3c2-7ab4157389a6 from this chassis (sb_readonly=0)
Oct 14 09:11:50 compute-0 ovn_controller[152662]: 2025-10-14T09:11:50Z|00870|binding|INFO|Setting lport 8fb091af-e492-4374-b3c2-7ab4157389a6 down in Southbound
Oct 14 09:11:50 compute-0 ovn_controller[152662]: 2025-10-14T09:11:50Z|00871|binding|INFO|Removing iface tap8fb091af-e4 ovn-installed in OVS
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.134 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:b2:b6 10.100.0.6'], port_security=['fa:16:3e:fa:b2:b6 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'ed2aee1e-f632-4d7f-ae03-f5d9c41e9104', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d2f1337472a4407869e16f2271280ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6cb91c5c-0b16-4305-8e1d-968c62c3d3c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9bea81a-55e6-4c5b-a0e6-3add7d9a646a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8fb091af-e492-4374-b3c2-7ab4157389a6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.135 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8fb091af-e492-4374-b3c2-7ab4157389a6 in datapath 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 unbound from our chassis
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.140 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57
Oct 14 09:11:50 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000054.scope: Deactivated successfully.
Oct 14 09:11:50 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000054.scope: Consumed 11.236s CPU time.
Oct 14 09:11:50 compute-0 systemd-machined[214636]: Machine qemu-104-instance-00000054 terminated.
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.163 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b3f98f-4fe3-4729-aace-332cf73df229]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.197 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8e446223-f12a-4eb9-8ae1-d73114bf4cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.201 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3a7ee235-6633-4658-b543-e81f9c735425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.241 2 INFO nova.virt.libvirt.driver [-] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Instance destroyed successfully.
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.242 2 DEBUG nova.objects.instance [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lazy-loading 'resources' on Instance uuid ed2aee1e-f632-4d7f-ae03-f5d9c41e9104 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.245 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[770fea98-dc18-4e1c-9b93-5108e41c023f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.266 2 DEBUG nova.virt.libvirt.vif [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:11:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-792258866',display_name='tempest-ListServersNegativeTestJSON-server-792258866-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-792258866-2',id=84,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-14T09:11:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d2f1337472a4407869e16f2271280ef',ramdisk_id='',reservation_id='r-f3phfmf2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1864242738',owner_user_name='tempest-ListServersNegativeTestJSON-1864242738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:11:39Z,user_data=None,user_id='bca591d7e37e4881bc4de44ee172b2f4',uuid=ed2aee1e-f632-4d7f-ae03-f5d9c41e9104,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8fb091af-e492-4374-b3c2-7ab4157389a6", "address": "fa:16:3e:fa:b2:b6", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fb091af-e4", "ovs_interfaceid": "8fb091af-e492-4374-b3c2-7ab4157389a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.267 2 DEBUG nova.network.os_vif_util [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converting VIF {"id": "8fb091af-e492-4374-b3c2-7ab4157389a6", "address": "fa:16:3e:fa:b2:b6", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fb091af-e4", "ovs_interfaceid": "8fb091af-e492-4374-b3c2-7ab4157389a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.268 2 DEBUG nova.network.os_vif_util [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b2:b6,bridge_name='br-int',has_traffic_filtering=True,id=8fb091af-e492-4374-b3c2-7ab4157389a6,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8fb091af-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.268 2 DEBUG os_vif [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b2:b6,bridge_name='br-int',has_traffic_filtering=True,id=8fb091af-e492-4374-b3c2-7ab4157389a6,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8fb091af-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.270 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8fb091af-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.275 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a30fcdd0-1ef3-409f-953a-636287532609]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b7bccdd-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 832, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 832, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687671, 'reachable_time': 27157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345929, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.277 2 INFO os_vif [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b2:b6,bridge_name='br-int',has_traffic_filtering=True,id=8fb091af-e492-4374-b3c2-7ab4157389a6,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8fb091af-e4')
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.302 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8df2b51b-dff4-4332-bda7-73d517ed5891]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7b7bccdd-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687688, 'tstamp': 687688}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345934, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7b7bccdd-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687695, 'tstamp': 687695}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345934, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.305 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b7bccdd-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.310 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b7bccdd-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.310 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.311 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b7bccdd-40, col_values=(('external_ids', {'iface-id': '8bb0b8f1-e510-498b-862e-2d74544dc8a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.311 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.315 2 DEBUG oslo_concurrency.lockutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "62fe725d-b24a-477a-a275-06d2cd960aaf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.315 2 DEBUG oslo_concurrency.lockutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.316 2 DEBUG oslo_concurrency.lockutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.316 2 DEBUG oslo_concurrency.lockutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.316 2 DEBUG oslo_concurrency.lockutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.317 2 INFO nova.compute.manager [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Terminating instance
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.318 2 DEBUG nova.compute.manager [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:11:50 compute-0 kernel: tap4006119e-fa (unregistering): left promiscuous mode
Oct 14 09:11:50 compute-0 NetworkManager[44885]: <info>  [1760433110.3620] device (tap4006119e-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:50 compute-0 ovn_controller[152662]: 2025-10-14T09:11:50Z|00872|binding|INFO|Releasing lport 4006119e-fa08-4095-bba7-d338c82ac066 from this chassis (sb_readonly=0)
Oct 14 09:11:50 compute-0 ovn_controller[152662]: 2025-10-14T09:11:50Z|00873|binding|INFO|Setting lport 4006119e-fa08-4095-bba7-d338c82ac066 down in Southbound
Oct 14 09:11:50 compute-0 ovn_controller[152662]: 2025-10-14T09:11:50Z|00874|binding|INFO|Removing iface tap4006119e-fa ovn-installed in OVS
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.388 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:3e:79 10.100.0.11'], port_security=['fa:16:3e:be:3e:79 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '62fe725d-b24a-477a-a275-06d2cd960aaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d2f1337472a4407869e16f2271280ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6cb91c5c-0b16-4305-8e1d-968c62c3d3c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9bea81a-55e6-4c5b-a0e6-3add7d9a646a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4006119e-fa08-4095-bba7-d338c82ac066) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.389 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4006119e-fa08-4095-bba7-d338c82ac066 in datapath 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 unbound from our chassis
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.390 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.392 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[363379de-1c6e-4c72-b722-f525a9ca9740]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.396 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 namespace which is not needed anymore
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:50 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000055.scope: Deactivated successfully.
Oct 14 09:11:50 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000055.scope: Consumed 9.797s CPU time.
Oct 14 09:11:50 compute-0 systemd-machined[214636]: Machine qemu-105-instance-00000055 terminated.
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.434 2 DEBUG nova.compute.manager [req-3578af30-a264-4f85-9b55-cc6702ba094b req-1f7aef61-b963-4719-bac4-0db59f18bee8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Received event network-vif-unplugged-8fb091af-e492-4374-b3c2-7ab4157389a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.435 2 DEBUG oslo_concurrency.lockutils [req-3578af30-a264-4f85-9b55-cc6702ba094b req-1f7aef61-b963-4719-bac4-0db59f18bee8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.436 2 DEBUG oslo_concurrency.lockutils [req-3578af30-a264-4f85-9b55-cc6702ba094b req-1f7aef61-b963-4719-bac4-0db59f18bee8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.436 2 DEBUG oslo_concurrency.lockutils [req-3578af30-a264-4f85-9b55-cc6702ba094b req-1f7aef61-b963-4719-bac4-0db59f18bee8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.437 2 DEBUG nova.compute.manager [req-3578af30-a264-4f85-9b55-cc6702ba094b req-1f7aef61-b963-4719-bac4-0db59f18bee8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] No waiting events found dispatching network-vif-unplugged-8fb091af-e492-4374-b3c2-7ab4157389a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.437 2 DEBUG nova.compute.manager [req-3578af30-a264-4f85-9b55-cc6702ba094b req-1f7aef61-b963-4719-bac4-0db59f18bee8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Received event network-vif-unplugged-8fb091af-e492-4374-b3c2-7ab4157389a6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:11:50 compute-0 neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57[345545]: [NOTICE]   (345552) : haproxy version is 2.8.14-c23fe91
Oct 14 09:11:50 compute-0 neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57[345545]: [NOTICE]   (345552) : path to executable is /usr/sbin/haproxy
Oct 14 09:11:50 compute-0 neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57[345545]: [WARNING]  (345552) : Exiting Master process...
Oct 14 09:11:50 compute-0 neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57[345545]: [WARNING]  (345552) : Exiting Master process...
Oct 14 09:11:50 compute-0 systemd-udevd[345910]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:11:50 compute-0 kernel: tap4006119e-fa: entered promiscuous mode
Oct 14 09:11:50 compute-0 NetworkManager[44885]: <info>  [1760433110.5440] manager: (tap4006119e-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/363)
Oct 14 09:11:50 compute-0 neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57[345545]: [ALERT]    (345552) : Current worker (345554) exited with code 143 (Terminated)
Oct 14 09:11:50 compute-0 neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57[345545]: [WARNING]  (345552) : All workers exited. Exiting... (0)
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:50 compute-0 ovn_controller[152662]: 2025-10-14T09:11:50Z|00875|binding|INFO|Claiming lport 4006119e-fa08-4095-bba7-d338c82ac066 for this chassis.
Oct 14 09:11:50 compute-0 kernel: tap4006119e-fa (unregistering): left promiscuous mode
Oct 14 09:11:50 compute-0 ovn_controller[152662]: 2025-10-14T09:11:50Z|00876|binding|INFO|4006119e-fa08-4095-bba7-d338c82ac066: Claiming fa:16:3e:be:3e:79 10.100.0.11
Oct 14 09:11:50 compute-0 systemd[1]: libpod-46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89.scope: Deactivated successfully.
Oct 14 09:11:50 compute-0 conmon[345545]: conmon 46e961f84509ad1e08e7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89.scope/container/memory.events
Oct 14 09:11:50 compute-0 podman[345970]: 2025-10-14 09:11:50.556736706 +0000 UTC m=+0.063817693 container died 46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.567 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:3e:79 10.100.0.11'], port_security=['fa:16:3e:be:3e:79 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '62fe725d-b24a-477a-a275-06d2cd960aaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d2f1337472a4407869e16f2271280ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6cb91c5c-0b16-4305-8e1d-968c62c3d3c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9bea81a-55e6-4c5b-a0e6-3add7d9a646a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4006119e-fa08-4095-bba7-d338c82ac066) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.571 2 INFO nova.virt.libvirt.driver [-] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Instance destroyed successfully.
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.571 2 DEBUG nova.objects.instance [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lazy-loading 'resources' on Instance uuid 62fe725d-b24a-477a-a275-06d2cd960aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89-userdata-shm.mount: Deactivated successfully.
Oct 14 09:11:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f890f6c75aac33a3bd8ad553f6090efd1c052d214d952027d2bd4c2c78b53ada-merged.mount: Deactivated successfully.
Oct 14 09:11:50 compute-0 ovn_controller[152662]: 2025-10-14T09:11:50Z|00877|binding|INFO|Setting lport 4006119e-fa08-4095-bba7-d338c82ac066 ovn-installed in OVS
Oct 14 09:11:50 compute-0 ovn_controller[152662]: 2025-10-14T09:11:50Z|00878|binding|INFO|Setting lport 4006119e-fa08-4095-bba7-d338c82ac066 up in Southbound
Oct 14 09:11:50 compute-0 ovn_controller[152662]: 2025-10-14T09:11:50Z|00879|binding|INFO|Releasing lport 4006119e-fa08-4095-bba7-d338c82ac066 from this chassis (sb_readonly=1)
Oct 14 09:11:50 compute-0 ovn_controller[152662]: 2025-10-14T09:11:50Z|00880|binding|INFO|Removing iface tap4006119e-fa ovn-installed in OVS
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:50 compute-0 ovn_controller[152662]: 2025-10-14T09:11:50Z|00881|binding|INFO|Releasing lport 4006119e-fa08-4095-bba7-d338c82ac066 from this chassis (sb_readonly=0)
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.594 2 DEBUG nova.virt.libvirt.vif [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:11:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-792258866',display_name='tempest-ListServersNegativeTestJSON-server-792258866-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-792258866-3',id=85,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-10-14T09:11:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d2f1337472a4407869e16f2271280ef',ramdisk_id='',reservation_id='r-f3phfmf2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1864242738',owner_user_name='tempest-ListServersNegativeTestJSON-1864242738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:11:41Z,user_data=None,user_id='bca591d7e37e4881bc4de44ee172b2f4',uuid=62fe725d-b24a-477a-a275-06d2cd960aaf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4006119e-fa08-4095-bba7-d338c82ac066", "address": "fa:16:3e:be:3e:79", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4006119e-fa", "ovs_interfaceid": "4006119e-fa08-4095-bba7-d338c82ac066", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.595 2 DEBUG nova.network.os_vif_util [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converting VIF {"id": "4006119e-fa08-4095-bba7-d338c82ac066", "address": "fa:16:3e:be:3e:79", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4006119e-fa", "ovs_interfaceid": "4006119e-fa08-4095-bba7-d338c82ac066", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:50 compute-0 ovn_controller[152662]: 2025-10-14T09:11:50Z|00882|binding|INFO|Setting lport 4006119e-fa08-4095-bba7-d338c82ac066 down in Southbound
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.595 2 DEBUG nova.network.os_vif_util [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:3e:79,bridge_name='br-int',has_traffic_filtering=True,id=4006119e-fa08-4095-bba7-d338c82ac066,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4006119e-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.596 2 DEBUG os_vif [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:3e:79,bridge_name='br-int',has_traffic_filtering=True,id=4006119e-fa08-4095-bba7-d338c82ac066,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4006119e-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.598 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4006119e-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:50 compute-0 podman[345970]: 2025-10-14 09:11:50.600110494 +0000 UTC m=+0.107191481 container cleanup 46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.607 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:3e:79 10.100.0.11'], port_security=['fa:16:3e:be:3e:79 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '62fe725d-b24a-477a-a275-06d2cd960aaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d2f1337472a4407869e16f2271280ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6cb91c5c-0b16-4305-8e1d-968c62c3d3c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9bea81a-55e6-4c5b-a0e6-3add7d9a646a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4006119e-fa08-4095-bba7-d338c82ac066) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:11:50 compute-0 systemd[1]: libpod-conmon-46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89.scope: Deactivated successfully.
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.619 2 INFO os_vif [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:3e:79,bridge_name='br-int',has_traffic_filtering=True,id=4006119e-fa08-4095-bba7-d338c82ac066,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4006119e-fa')
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.683 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433095.655628, d1c0470c-5f74-43c3-a206-07147fa01d5e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.683 2 INFO nova.compute.manager [-] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] VM Stopped (Lifecycle Event)
Oct 14 09:11:50 compute-0 podman[346002]: 2025-10-14 09:11:50.683556689 +0000 UTC m=+0.048266009 container remove 46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.692 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f71cbf70-7e6d-4f83-a024-a591930d5302]: (4, ('Tue Oct 14 09:11:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 (46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89)\n46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89\nTue Oct 14 09:11:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 (46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89)\n46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.697 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a029ed42-bc3e-4b6a-b55a-4dbb1dfcf4c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.701 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b7bccdd-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:50 compute-0 kernel: tap7b7bccdd-40: left promiscuous mode
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.705 2 DEBUG nova.compute.manager [None req-7cf7bd16-ce9a-4c4d-b745-0475e483aefe - - - - - -] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[122617be-aaa0-463d-b9aa-b426f00f5c53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.736 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[03cafd5c-7a45-4743-b983-9b406aee6197]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.737 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6ebb7c4a-43ec-458f-af63-680d4dc65c49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.764 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7eeb8f58-9bd0-4dab-8715-fe4fa5075a4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687661, 'reachable_time': 42969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346034, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:50 compute-0 systemd[1]: run-netns-ovnmeta\x2d7b7bccdd\x2d4e3c\x2d4e4b\x2da2d8\x2d8f7cf1dbab57.mount: Deactivated successfully.
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.770 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.770 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[43d17145-e24d-49b3-be13-6af8de1f63e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.771 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4006119e-fa08-4095-bba7-d338c82ac066 in datapath 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 unbound from our chassis
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.773 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.773 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b49d7389-dd27-49ef-96f9-97ecb4760028]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.774 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4006119e-fa08-4095-bba7-d338c82ac066 in datapath 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 unbound from our chassis
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.776 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:11:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.776 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f61744ba-b0be-4930-b5df-8aeeb1e436c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.785 2 INFO nova.virt.libvirt.driver [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Deleting instance files /var/lib/nova/instances/ed2aee1e-f632-4d7f-ae03-f5d9c41e9104_del
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.786 2 INFO nova.virt.libvirt.driver [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Deletion of /var/lib/nova/instances/ed2aee1e-f632-4d7f-ae03-f5d9c41e9104_del complete
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.829 2 INFO nova.compute.manager [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.830 2 DEBUG oslo.service.loopingcall [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.830 2 DEBUG nova.compute.manager [-] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:11:50 compute-0 nova_compute[259627]: 2025-10-14 09:11:50.831 2 DEBUG nova.network.neutron [-] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.035 2 INFO nova.virt.libvirt.driver [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Deleting instance files /var/lib/nova/instances/62fe725d-b24a-477a-a275-06d2cd960aaf_del
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.036 2 INFO nova.virt.libvirt.driver [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Deletion of /var/lib/nova/instances/62fe725d-b24a-477a-a275-06d2cd960aaf_del complete
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.084 2 INFO nova.compute.manager [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.085 2 DEBUG oslo.service.loopingcall [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.085 2 DEBUG nova.compute.manager [-] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.086 2 DEBUG nova.network.neutron [-] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:11:51 compute-0 ceph-mon[74249]: pgmap v1721: 305 pgs: 305 active+clean; 350 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 4.2 MiB/s wr, 428 op/s
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.544 2 DEBUG nova.compute.manager [req-493506f6-083a-47aa-8b6b-c47de4c89185 req-d3aa670d-ce47-4ffe-b2e6-40c5d452cb7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Received event network-vif-unplugged-4006119e-fa08-4095-bba7-d338c82ac066 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.544 2 DEBUG oslo_concurrency.lockutils [req-493506f6-083a-47aa-8b6b-c47de4c89185 req-d3aa670d-ce47-4ffe-b2e6-40c5d452cb7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.544 2 DEBUG oslo_concurrency.lockutils [req-493506f6-083a-47aa-8b6b-c47de4c89185 req-d3aa670d-ce47-4ffe-b2e6-40c5d452cb7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.545 2 DEBUG oslo_concurrency.lockutils [req-493506f6-083a-47aa-8b6b-c47de4c89185 req-d3aa670d-ce47-4ffe-b2e6-40c5d452cb7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.545 2 DEBUG nova.compute.manager [req-493506f6-083a-47aa-8b6b-c47de4c89185 req-d3aa670d-ce47-4ffe-b2e6-40c5d452cb7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] No waiting events found dispatching network-vif-unplugged-4006119e-fa08-4095-bba7-d338c82ac066 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.545 2 DEBUG nova.compute.manager [req-493506f6-083a-47aa-8b6b-c47de4c89185 req-d3aa670d-ce47-4ffe-b2e6-40c5d452cb7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Received event network-vif-unplugged-4006119e-fa08-4095-bba7-d338c82ac066 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.694 2 INFO nova.compute.manager [None req-370ac7de-a92b-4ded-8491-9abf4c6f9131 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Get console output
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.702 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.852 2 INFO nova.compute.manager [None req-7cb6c8d7-6aeb-4f5d-89a4-1c9ad1ddf5d9 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Unpausing
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.853 2 DEBUG nova.objects.instance [None req-7cb6c8d7-6aeb-4f5d-89a4-1c9ad1ddf5d9 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'flavor' on Instance uuid 290980d2-08b4-4029-a1c3-becd3457a410 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.880 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433111.8806348, 290980d2-08b4-4029-a1c3-becd3457a410 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.881 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] VM Resumed (Lifecycle Event)
Oct 14 09:11:51 compute-0 virtqemud[259351]: argument unsupported: QEMU guest agent is not configured
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.888 2 DEBUG nova.virt.libvirt.guest [None req-7cb6c8d7-6aeb-4f5d-89a4-1c9ad1ddf5d9 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.889 2 DEBUG nova.compute.manager [None req-7cb6c8d7-6aeb-4f5d-89a4-1c9ad1ddf5d9 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 330 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 4.3 MiB/s wr, 468 op/s
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.903 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.907 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:11:51 compute-0 nova_compute[259627]: 2025-10-14 09:11:51.940 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] During sync_power_state the instance has a pending task (unpausing). Skip.
Oct 14 09:11:52 compute-0 nova_compute[259627]: 2025-10-14 09:11:52.515 2 DEBUG nova.compute.manager [req-ccedff70-cc7e-4036-bd43-15ad3969670c req-0d226e1c-1e49-4fa7-8107-f9be53616209 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Received event network-vif-plugged-8fb091af-e492-4374-b3c2-7ab4157389a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:52 compute-0 nova_compute[259627]: 2025-10-14 09:11:52.516 2 DEBUG oslo_concurrency.lockutils [req-ccedff70-cc7e-4036-bd43-15ad3969670c req-0d226e1c-1e49-4fa7-8107-f9be53616209 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:52 compute-0 nova_compute[259627]: 2025-10-14 09:11:52.516 2 DEBUG oslo_concurrency.lockutils [req-ccedff70-cc7e-4036-bd43-15ad3969670c req-0d226e1c-1e49-4fa7-8107-f9be53616209 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:52 compute-0 nova_compute[259627]: 2025-10-14 09:11:52.516 2 DEBUG oslo_concurrency.lockutils [req-ccedff70-cc7e-4036-bd43-15ad3969670c req-0d226e1c-1e49-4fa7-8107-f9be53616209 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:52 compute-0 nova_compute[259627]: 2025-10-14 09:11:52.516 2 DEBUG nova.compute.manager [req-ccedff70-cc7e-4036-bd43-15ad3969670c req-0d226e1c-1e49-4fa7-8107-f9be53616209 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] No waiting events found dispatching network-vif-plugged-8fb091af-e492-4374-b3c2-7ab4157389a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:52 compute-0 nova_compute[259627]: 2025-10-14 09:11:52.517 2 WARNING nova.compute.manager [req-ccedff70-cc7e-4036-bd43-15ad3969670c req-0d226e1c-1e49-4fa7-8107-f9be53616209 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Received unexpected event network-vif-plugged-8fb091af-e492-4374-b3c2-7ab4157389a6 for instance with vm_state active and task_state deleting.
Oct 14 09:11:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:11:52 compute-0 nova_compute[259627]: 2025-10-14 09:11:52.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:52 compute-0 nova_compute[259627]: 2025-10-14 09:11:52.933 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.238 2 DEBUG nova.network.neutron [-] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.272 2 INFO nova.compute.manager [-] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Took 2.19 seconds to deallocate network for instance.
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.312 2 DEBUG oslo_concurrency.lockutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.313 2 DEBUG oslo_concurrency.lockutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.362 2 DEBUG nova.network.neutron [-] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:53 compute-0 ceph-mon[74249]: pgmap v1722: 305 pgs: 305 active+clean; 330 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 4.3 MiB/s wr, 468 op/s
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.404 2 INFO nova.compute.manager [-] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Took 2.57 seconds to deallocate network for instance.
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.454 2 DEBUG oslo_concurrency.lockutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.484 2 DEBUG oslo_concurrency.processutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.662 2 DEBUG nova.compute.manager [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Received event network-vif-plugged-4006119e-fa08-4095-bba7-d338c82ac066 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.663 2 DEBUG oslo_concurrency.lockutils [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.664 2 DEBUG oslo_concurrency.lockutils [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.665 2 DEBUG oslo_concurrency.lockutils [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.665 2 DEBUG nova.compute.manager [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] No waiting events found dispatching network-vif-plugged-4006119e-fa08-4095-bba7-d338c82ac066 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.666 2 WARNING nova.compute.manager [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Received unexpected event network-vif-plugged-4006119e-fa08-4095-bba7-d338c82ac066 for instance with vm_state deleted and task_state None.
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.667 2 DEBUG nova.compute.manager [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Received event network-vif-plugged-4006119e-fa08-4095-bba7-d338c82ac066 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.668 2 DEBUG oslo_concurrency.lockutils [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.668 2 DEBUG oslo_concurrency.lockutils [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.669 2 DEBUG oslo_concurrency.lockutils [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.670 2 DEBUG nova.compute.manager [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] No waiting events found dispatching network-vif-plugged-4006119e-fa08-4095-bba7-d338c82ac066 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.670 2 WARNING nova.compute.manager [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Received unexpected event network-vif-plugged-4006119e-fa08-4095-bba7-d338c82ac066 for instance with vm_state deleted and task_state None.
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.671 2 DEBUG nova.compute.manager [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Received event network-vif-deleted-4006119e-fa08-4095-bba7-d338c82ac066 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.672 2 DEBUG nova.compute.manager [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Received event network-vif-deleted-8fb091af-e492-4374-b3c2-7ab4157389a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 330 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 2.2 MiB/s wr, 320 op/s
Oct 14 09:11:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:11:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2556266415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:53 compute-0 nova_compute[259627]: 2025-10-14 09:11:53.995 2 DEBUG oslo_concurrency.processutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:54 compute-0 nova_compute[259627]: 2025-10-14 09:11:54.002 2 DEBUG nova.compute.provider_tree [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:11:54 compute-0 nova_compute[259627]: 2025-10-14 09:11:54.044 2 DEBUG nova.scheduler.client.report [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:11:54 compute-0 nova_compute[259627]: 2025-10-14 09:11:54.081 2 DEBUG oslo_concurrency.lockutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:54 compute-0 nova_compute[259627]: 2025-10-14 09:11:54.083 2 DEBUG oslo_concurrency.lockutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:54 compute-0 nova_compute[259627]: 2025-10-14 09:11:54.107 2 INFO nova.scheduler.client.report [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Deleted allocations for instance 62fe725d-b24a-477a-a275-06d2cd960aaf
Oct 14 09:11:54 compute-0 nova_compute[259627]: 2025-10-14 09:11:54.211 2 DEBUG oslo_concurrency.lockutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:54 compute-0 nova_compute[259627]: 2025-10-14 09:11:54.234 2 DEBUG oslo_concurrency.processutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:54 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2556266415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:11:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2384852915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:54 compute-0 nova_compute[259627]: 2025-10-14 09:11:54.781 2 DEBUG oslo_concurrency.processutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:11:54 compute-0 nova_compute[259627]: 2025-10-14 09:11:54.787 2 DEBUG nova.compute.provider_tree [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:11:54 compute-0 nova_compute[259627]: 2025-10-14 09:11:54.807 2 DEBUG nova.scheduler.client.report [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:11:54 compute-0 nova_compute[259627]: 2025-10-14 09:11:54.832 2 DEBUG oslo_concurrency.lockutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:54 compute-0 nova_compute[259627]: 2025-10-14 09:11:54.871 2 INFO nova.scheduler.client.report [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Deleted allocations for instance ed2aee1e-f632-4d7f-ae03-f5d9c41e9104
Oct 14 09:11:54 compute-0 nova_compute[259627]: 2025-10-14 09:11:54.950 2 DEBUG oslo_concurrency.lockutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:55 compute-0 nova_compute[259627]: 2025-10-14 09:11:55.385 2 INFO nova.compute.manager [None req-ac9fd134-ff5c-444b-af36-42a88ab82c00 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Get console output
Oct 14 09:11:55 compute-0 ceph-mon[74249]: pgmap v1723: 305 pgs: 305 active+clean; 330 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 2.2 MiB/s wr, 320 op/s
Oct 14 09:11:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2384852915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:11:55 compute-0 nova_compute[259627]: 2025-10-14 09:11:55.391 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:11:55 compute-0 nova_compute[259627]: 2025-10-14 09:11:55.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 268 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 4.3 MiB/s wr, 411 op/s
Oct 14 09:11:56 compute-0 ovn_controller[152662]: 2025-10-14T09:11:56Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2b:08:ac 10.100.0.4
Oct 14 09:11:56 compute-0 ovn_controller[152662]: 2025-10-14T09:11:56Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2b:08:ac 10.100.0.4
Oct 14 09:11:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:56.860 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:11:56 compute-0 nova_compute[259627]: 2025-10-14 09:11:56.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:56.863 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.211 2 DEBUG nova.compute.manager [req-d96ec37f-46dd-4ca6-b3d4-60d10e0ca218 req-d6e116af-ffa4-4e87-bb17-a77d56c97ffe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Received event network-changed-106349ae-cfaa-43ec-9bda-16f36a6ac3d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.212 2 DEBUG nova.compute.manager [req-d96ec37f-46dd-4ca6-b3d4-60d10e0ca218 req-d6e116af-ffa4-4e87-bb17-a77d56c97ffe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Refreshing instance network info cache due to event network-changed-106349ae-cfaa-43ec-9bda-16f36a6ac3d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.213 2 DEBUG oslo_concurrency.lockutils [req-d96ec37f-46dd-4ca6-b3d4-60d10e0ca218 req-d6e116af-ffa4-4e87-bb17-a77d56c97ffe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-290980d2-08b4-4029-a1c3-becd3457a410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.213 2 DEBUG oslo_concurrency.lockutils [req-d96ec37f-46dd-4ca6-b3d4-60d10e0ca218 req-d6e116af-ffa4-4e87-bb17-a77d56c97ffe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-290980d2-08b4-4029-a1c3-becd3457a410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.214 2 DEBUG nova.network.neutron [req-d96ec37f-46dd-4ca6-b3d4-60d10e0ca218 req-d6e116af-ffa4-4e87-bb17-a77d56c97ffe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Refreshing network info cache for port 106349ae-cfaa-43ec-9bda-16f36a6ac3d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.305 2 DEBUG oslo_concurrency.lockutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "290980d2-08b4-4029-a1c3-becd3457a410" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.306 2 DEBUG oslo_concurrency.lockutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "290980d2-08b4-4029-a1c3-becd3457a410" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.307 2 DEBUG oslo_concurrency.lockutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "290980d2-08b4-4029-a1c3-becd3457a410-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.307 2 DEBUG oslo_concurrency.lockutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "290980d2-08b4-4029-a1c3-becd3457a410-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.308 2 DEBUG oslo_concurrency.lockutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "290980d2-08b4-4029-a1c3-becd3457a410-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.310 2 INFO nova.compute.manager [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Terminating instance
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.311 2 DEBUG nova.compute.manager [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:11:57 compute-0 kernel: tap106349ae-cf (unregistering): left promiscuous mode
Oct 14 09:11:57 compute-0 NetworkManager[44885]: <info>  [1760433117.3711] device (tap106349ae-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:11:57 compute-0 ovn_controller[152662]: 2025-10-14T09:11:57Z|00883|binding|INFO|Releasing lport 106349ae-cfaa-43ec-9bda-16f36a6ac3d6 from this chassis (sb_readonly=0)
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:57 compute-0 ovn_controller[152662]: 2025-10-14T09:11:57Z|00884|binding|INFO|Setting lport 106349ae-cfaa-43ec-9bda-16f36a6ac3d6 down in Southbound
Oct 14 09:11:57 compute-0 ovn_controller[152662]: 2025-10-14T09:11:57Z|00885|binding|INFO|Removing iface tap106349ae-cf ovn-installed in OVS
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:57 compute-0 ceph-mon[74249]: pgmap v1724: 305 pgs: 305 active+clean; 268 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 4.3 MiB/s wr, 411 op/s
Oct 14 09:11:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.402 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:23:80 10.100.0.11'], port_security=['fa:16:3e:07:23:80 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '290980d2-08b4-4029-a1c3-becd3457a410', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2389730-ae66-46f4-aea3-6a67311703e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fef16ebb-8e3c-4cd9-b046-59c4109d4508', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45135636-9d1b-4fd5-951f-3d4d3d97b1e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=106349ae-cfaa-43ec-9bda-16f36a6ac3d6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:11:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.404 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 106349ae-cfaa-43ec-9bda-16f36a6ac3d6 in datapath b2389730-ae66-46f4-aea3-6a67311703e9 unbound from our chassis
Oct 14 09:11:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.406 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2389730-ae66-46f4-aea3-6a67311703e9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:11:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.407 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2473e52a-d4b5-4350-a0dd-4e824771fd10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.407 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9 namespace which is not needed anymore
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:57 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Oct 14 09:11:57 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d0000004f.scope: Consumed 13.312s CPU time.
Oct 14 09:11:57 compute-0 systemd-machined[214636]: Machine qemu-99-instance-0000004f terminated.
Oct 14 09:11:57 compute-0 neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9[342575]: [NOTICE]   (342597) : haproxy version is 2.8.14-c23fe91
Oct 14 09:11:57 compute-0 neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9[342575]: [NOTICE]   (342597) : path to executable is /usr/sbin/haproxy
Oct 14 09:11:57 compute-0 neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9[342575]: [WARNING]  (342597) : Exiting Master process...
Oct 14 09:11:57 compute-0 neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9[342575]: [ALERT]    (342597) : Current worker (342599) exited with code 143 (Terminated)
Oct 14 09:11:57 compute-0 neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9[342575]: [WARNING]  (342597) : All workers exited. Exiting... (0)
Oct 14 09:11:57 compute-0 systemd[1]: libpod-519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc.scope: Deactivated successfully.
Oct 14 09:11:57 compute-0 podman[346105]: 2025-10-14 09:11:57.537610825 +0000 UTC m=+0.042945448 container died 519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.557 2 INFO nova.virt.libvirt.driver [-] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Instance destroyed successfully.
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.557 2 DEBUG nova.objects.instance [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'resources' on Instance uuid 290980d2-08b4-4029-a1c3-becd3457a410 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:11:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc-userdata-shm.mount: Deactivated successfully.
Oct 14 09:11:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5d3769d648c5314a067d97a55b05a97e81509531f785e9c48153acfc6141c6f-merged.mount: Deactivated successfully.
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.574 2 DEBUG nova.virt.libvirt.vif [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2139849378',display_name='tempest-TestNetworkAdvancedServerOps-server-2139849378',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2139849378',id=79,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEhJsgZxd+OWpD7zaKfBBpwRrzG5y2svlcIl3JOB/vCmtEivnvjVbVGSOAYp3d5tHvQ3QI5rFGEYwlQUr4eteTREKUKNxmmu7QDjX9h1ezH3YG5N/CgGPytwUcQasRNPJg==',key_name='tempest-TestNetworkAdvancedServerOps-1442263360',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:11:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-se9y79xx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:11:51Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=290980d2-08b4-4029-a1c3-becd3457a410,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "address": "fa:16:3e:07:23:80", "network": {"id": "b2389730-ae66-46f4-aea3-6a67311703e9", "bridge": "br-int", "label": "tempest-network-smoke--924972702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap106349ae-cf", "ovs_interfaceid": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.574 2 DEBUG nova.network.os_vif_util [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "address": "fa:16:3e:07:23:80", "network": {"id": "b2389730-ae66-46f4-aea3-6a67311703e9", "bridge": "br-int", "label": "tempest-network-smoke--924972702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap106349ae-cf", "ovs_interfaceid": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.575 2 DEBUG nova.network.os_vif_util [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:23:80,bridge_name='br-int',has_traffic_filtering=True,id=106349ae-cfaa-43ec-9bda-16f36a6ac3d6,network=Network(b2389730-ae66-46f4-aea3-6a67311703e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap106349ae-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.575 2 DEBUG os_vif [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:23:80,bridge_name='br-int',has_traffic_filtering=True,id=106349ae-cfaa-43ec-9bda-16f36a6ac3d6,network=Network(b2389730-ae66-46f4-aea3-6a67311703e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap106349ae-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:57 compute-0 podman[346105]: 2025-10-14 09:11:57.577145839 +0000 UTC m=+0.082480472 container cleanup 519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap106349ae-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.583 2 INFO os_vif [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:23:80,bridge_name='br-int',has_traffic_filtering=True,id=106349ae-cfaa-43ec-9bda-16f36a6ac3d6,network=Network(b2389730-ae66-46f4-aea3-6a67311703e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap106349ae-cf')
Oct 14 09:11:57 compute-0 systemd[1]: libpod-conmon-519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc.scope: Deactivated successfully.
Oct 14 09:11:57 compute-0 podman[346150]: 2025-10-14 09:11:57.647974843 +0000 UTC m=+0.044411415 container remove 519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 14 09:11:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.657 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[40886af5-ebb5-4698-b6eb-9bf8954cbe27]: (4, ('Tue Oct 14 09:11:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9 (519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc)\n519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc\nTue Oct 14 09:11:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9 (519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc)\n519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.659 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[89168fc5-f086-43de-a4d8-88e8a8b704e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.661 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2389730-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:11:57 compute-0 kernel: tapb2389730-a0: left promiscuous mode
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.673 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b3ebd7-93b8-4c84-b1d6-f204fd510bc1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.702 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4142d2-8d96-4d11-9f1c-a2e2637150c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.703 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[afd35977-7ac0-46ad-850a-c86e8f16637b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.720 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e20157-1633-432c-931b-3c088554d6a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686404, 'reachable_time': 28202, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346181, 'error': None, 'target': 'ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:57 compute-0 systemd[1]: run-netns-ovnmeta\x2db2389730\x2dae66\x2d46f4\x2daea3\x2d6a67311703e9.mount: Deactivated successfully.
Oct 14 09:11:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.726 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:11:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.726 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[7eda5965-7af6-40cc-a5e9-262831385fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:11:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:11:57 compute-0 podman[346180]: 2025-10-14 09:11:57.784156597 +0000 UTC m=+0.058366058 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent)
Oct 14 09:11:57 compute-0 podman[346177]: 2025-10-14 09:11:57.850268845 +0000 UTC m=+0.135812425 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 14 09:11:57 compute-0 nova_compute[259627]: 2025-10-14 09:11:57.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 268 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.2 MiB/s wr, 130 op/s
Oct 14 09:11:58 compute-0 nova_compute[259627]: 2025-10-14 09:11:58.008 2 INFO nova.virt.libvirt.driver [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Deleting instance files /var/lib/nova/instances/290980d2-08b4-4029-a1c3-becd3457a410_del
Oct 14 09:11:58 compute-0 nova_compute[259627]: 2025-10-14 09:11:58.008 2 INFO nova.virt.libvirt.driver [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Deletion of /var/lib/nova/instances/290980d2-08b4-4029-a1c3-becd3457a410_del complete
Oct 14 09:11:58 compute-0 nova_compute[259627]: 2025-10-14 09:11:58.064 2 INFO nova.compute.manager [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 14 09:11:58 compute-0 nova_compute[259627]: 2025-10-14 09:11:58.065 2 DEBUG oslo.service.loopingcall [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:11:58 compute-0 nova_compute[259627]: 2025-10-14 09:11:58.066 2 DEBUG nova.compute.manager [-] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:11:58 compute-0 nova_compute[259627]: 2025-10-14 09:11:58.066 2 DEBUG nova.network.neutron [-] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:11:59 compute-0 ovn_controller[152662]: 2025-10-14T09:11:59Z|00886|binding|INFO|Releasing lport 970f2645-7ec3-4b7f-8527-871800c728d8 from this chassis (sb_readonly=0)
Oct 14 09:11:59 compute-0 nova_compute[259627]: 2025-10-14 09:11:59.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:59 compute-0 ovn_controller[152662]: 2025-10-14T09:11:59Z|00887|binding|INFO|Releasing lport 970f2645-7ec3-4b7f-8527-871800c728d8 from this chassis (sb_readonly=0)
Oct 14 09:11:59 compute-0 nova_compute[259627]: 2025-10-14 09:11:59.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:11:59 compute-0 ceph-mon[74249]: pgmap v1725: 305 pgs: 305 active+clean; 268 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.2 MiB/s wr, 130 op/s
Oct 14 09:11:59 compute-0 nova_compute[259627]: 2025-10-14 09:11:59.428 2 DEBUG nova.network.neutron [-] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:59 compute-0 nova_compute[259627]: 2025-10-14 09:11:59.465 2 INFO nova.compute.manager [-] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Took 1.40 seconds to deallocate network for instance.
Oct 14 09:11:59 compute-0 nova_compute[259627]: 2025-10-14 09:11:59.516 2 DEBUG nova.compute.manager [req-7f99e947-9e06-4f64-8cca-f84d49c8902e req-04e98afc-848d-4a67-807c-6305af4cbb54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Received event network-vif-deleted-106349ae-cfaa-43ec-9bda-16f36a6ac3d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:11:59 compute-0 nova_compute[259627]: 2025-10-14 09:11:59.536 2 DEBUG oslo_concurrency.lockutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:11:59 compute-0 nova_compute[259627]: 2025-10-14 09:11:59.536 2 DEBUG oslo_concurrency.lockutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:11:59 compute-0 nova_compute[259627]: 2025-10-14 09:11:59.646 2 DEBUG oslo_concurrency.processutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:11:59 compute-0 nova_compute[259627]: 2025-10-14 09:11:59.714 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433104.7121098, ab77dbf7-4458-4b16-a2e7-ed73be047838 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:11:59 compute-0 nova_compute[259627]: 2025-10-14 09:11:59.714 2 INFO nova.compute.manager [-] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] VM Stopped (Lifecycle Event)
Oct 14 09:11:59 compute-0 nova_compute[259627]: 2025-10-14 09:11:59.735 2 DEBUG nova.compute.manager [None req-6c9acea0-f74e-4e3a-b75d-0fcd07dd8726 - - - - - -] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:11:59 compute-0 nova_compute[259627]: 2025-10-14 09:11:59.777 2 DEBUG nova.network.neutron [req-d96ec37f-46dd-4ca6-b3d4-60d10e0ca218 req-d6e116af-ffa4-4e87-bb17-a77d56c97ffe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Updated VIF entry in instance network info cache for port 106349ae-cfaa-43ec-9bda-16f36a6ac3d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:11:59 compute-0 nova_compute[259627]: 2025-10-14 09:11:59.778 2 DEBUG nova.network.neutron [req-d96ec37f-46dd-4ca6-b3d4-60d10e0ca218 req-d6e116af-ffa4-4e87-bb17-a77d56c97ffe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Updating instance_info_cache with network_info: [{"id": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "address": "fa:16:3e:07:23:80", "network": {"id": "b2389730-ae66-46f4-aea3-6a67311703e9", "bridge": "br-int", "label": "tempest-network-smoke--924972702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap106349ae-cf", "ovs_interfaceid": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:11:59 compute-0 nova_compute[259627]: 2025-10-14 09:11:59.796 2 DEBUG oslo_concurrency.lockutils [req-d96ec37f-46dd-4ca6-b3d4-60d10e0ca218 req-d6e116af-ffa4-4e87-bb17-a77d56c97ffe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-290980d2-08b4-4029-a1c3-becd3457a410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:11:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 268 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.2 MiB/s wr, 130 op/s
Oct 14 09:12:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:12:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3860183317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:00 compute-0 nova_compute[259627]: 2025-10-14 09:12:00.107 2 DEBUG oslo_concurrency.processutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:00 compute-0 nova_compute[259627]: 2025-10-14 09:12:00.116 2 DEBUG nova.compute.provider_tree [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:12:00 compute-0 nova_compute[259627]: 2025-10-14 09:12:00.134 2 DEBUG nova.scheduler.client.report [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:12:00 compute-0 nova_compute[259627]: 2025-10-14 09:12:00.169 2 DEBUG oslo_concurrency.lockutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:00 compute-0 nova_compute[259627]: 2025-10-14 09:12:00.205 2 INFO nova.scheduler.client.report [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Deleted allocations for instance 290980d2-08b4-4029-a1c3-becd3457a410
Oct 14 09:12:00 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3860183317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:00 compute-0 nova_compute[259627]: 2025-10-14 09:12:00.417 2 DEBUG oslo_concurrency.lockutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "290980d2-08b4-4029-a1c3-becd3457a410" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:01 compute-0 ceph-mon[74249]: pgmap v1726: 305 pgs: 305 active+clean; 268 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.2 MiB/s wr, 130 op/s
Oct 14 09:12:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 200 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 2.2 MiB/s wr, 178 op/s
Oct 14 09:12:02 compute-0 nova_compute[259627]: 2025-10-14 09:12:02.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:12:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:12:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:12:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:12:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:12:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:12:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:12:02 compute-0 nova_compute[259627]: 2025-10-14 09:12:02.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:03 compute-0 ceph-mon[74249]: pgmap v1727: 305 pgs: 305 active+clean; 200 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 2.2 MiB/s wr, 178 op/s
Oct 14 09:12:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 200 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 248 KiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 14 09:12:03 compute-0 nova_compute[259627]: 2025-10-14 09:12:03.982 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 09:12:04 compute-0 nova_compute[259627]: 2025-10-14 09:12:04.874 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:04 compute-0 nova_compute[259627]: 2025-10-14 09:12:04.875 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:04 compute-0 nova_compute[259627]: 2025-10-14 09:12:04.909 2 DEBUG nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:12:04 compute-0 nova_compute[259627]: 2025-10-14 09:12:04.993 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:04 compute-0 nova_compute[259627]: 2025-10-14 09:12:04.994 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.002 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.002 2 INFO nova.compute.claims [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.152 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.239 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433110.2384858, ed2aee1e-f632-4d7f-ae03-f5d9c41e9104 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.240 2 INFO nova.compute.manager [-] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] VM Stopped (Lifecycle Event)
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.267 2 DEBUG nova.compute.manager [None req-01c83d0d-08f1-4ac7-8f95-0ce1abda8691 - - - - - -] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:05 compute-0 ceph-mon[74249]: pgmap v1728: 305 pgs: 305 active+clean; 200 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 248 KiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.566 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433110.565926, 62fe725d-b24a-477a-a275-06d2cd960aaf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.567 2 INFO nova.compute.manager [-] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] VM Stopped (Lifecycle Event)
Oct 14 09:12:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:12:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712468327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.586 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.593 2 DEBUG nova.compute.provider_tree [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.697 2 DEBUG nova.compute.manager [None req-45e68e01-a61f-4783-ba94-7fa2bf0ee9df - - - - - -] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.700 2 DEBUG nova.scheduler.client.report [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:12:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:12:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1078909986' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:12:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:12:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1078909986' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.726 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.727 2 DEBUG nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.772 2 DEBUG nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.772 2 DEBUG nova.network.neutron [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.793 2 INFO nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.812 2 DEBUG nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.891 2 DEBUG nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.892 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.893 2 INFO nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Creating image(s)
Oct 14 09:12:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 200 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 249 KiB/s rd, 2.2 MiB/s wr, 142 op/s
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.914 2 DEBUG nova.storage.rbd_utils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.936 2 DEBUG nova.storage.rbd_utils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.958 2 DEBUG nova.storage.rbd_utils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:05 compute-0 nova_compute[259627]: 2025-10-14 09:12:05.964 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.061 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.062 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.062 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.063 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.083 2 DEBUG nova.storage.rbd_utils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.086 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2534f8b9-e832-4b78-ada4-e551429bdc75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:06 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 14 09:12:06 compute-0 kernel: tap7ce99440-fa (unregistering): left promiscuous mode
Oct 14 09:12:06 compute-0 NetworkManager[44885]: <info>  [1760433126.2682] device (tap7ce99440-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:06 compute-0 ovn_controller[152662]: 2025-10-14T09:12:06Z|00888|binding|INFO|Releasing lport 7ce99440-fa49-4876-bb38-fce631d40400 from this chassis (sb_readonly=0)
Oct 14 09:12:06 compute-0 ovn_controller[152662]: 2025-10-14T09:12:06Z|00889|binding|INFO|Setting lport 7ce99440-fa49-4876-bb38-fce631d40400 down in Southbound
Oct 14 09:12:06 compute-0 ovn_controller[152662]: 2025-10-14T09:12:06Z|00890|binding|INFO|Removing iface tap7ce99440-fa ovn-installed in OVS
Oct 14 09:12:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.289 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:08:ac 10.100.0.4'], port_security=['fa:16:3e:2b:08:ac 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1141f79e-2e47-40f1-91b0-275a9fac765c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f10ae705d9a34608a922683282b952b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '908d59ff-8d75-4488-a358-b4621923e921', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a29a9f-f981-430b-8a8f-ff453e5ae1a6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7ce99440-fa49-4876-bb38-fce631d40400) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.291 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7ce99440-fa49-4876-bb38-fce631d40400 in datapath 15e6e95c-6cd2-4631-98f6-9ed276458c39 unbound from our chassis
Oct 14 09:12:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.292 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15e6e95c-6cd2-4631-98f6-9ed276458c39
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.315 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f5399713-90a2-4c73-9e06-a2f97a7698cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.348 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0992ef2d-c3e8-4e63-bd81-9ba7ce7a03c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.351 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cd071236-e3ce-490a-ba9c-3623be67e13b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:06 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000052.scope: Deactivated successfully.
Oct 14 09:12:06 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000052.scope: Consumed 16.797s CPU time.
Oct 14 09:12:06 compute-0 systemd-machined[214636]: Machine qemu-102-instance-00000052 terminated.
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.370 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2534f8b9-e832-4b78-ada4-e551429bdc75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.385 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6c89295c-7333-4574-bfc9-ea3a3bc04014]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.408 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[913fde3e-d92d-4a95-96cd-be0530ffb2ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15e6e95c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:6b:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686914, 'reachable_time': 20198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346393, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.426 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb1f9aa-80d5-44c0-91bc-e45eb24dc5d2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap15e6e95c-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686938, 'tstamp': 686938}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346411, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap15e6e95c-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686941, 'tstamp': 686941}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346411, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.428 2 DEBUG nova.policy [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '92e59e145f6942b78d0ffbebc4d89e76', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '517aafb84156407c8672042097e3ef4f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:12:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.428 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15e6e95c-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.435 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15e6e95c-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.436 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.436 2 DEBUG nova.storage.rbd_utils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] resizing rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:12:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.436 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15e6e95c-60, col_values=(('external_ids', {'iface-id': '970f2645-7ec3-4b7f-8527-871800c728d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.437 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/712468327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1078909986' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:12:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1078909986' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.530 2 DEBUG nova.objects.instance [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'migration_context' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.546 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.547 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Ensure instance console log exists: /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.547 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.548 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.548 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.852 2 DEBUG nova.compute.manager [req-0aab396c-b5ab-463a-8dbb-35a015f7e8a6 req-3843d1d3-9975-41a6-96c7-ebf2326d6e18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-vif-unplugged-7ce99440-fa49-4876-bb38-fce631d40400 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.853 2 DEBUG oslo_concurrency.lockutils [req-0aab396c-b5ab-463a-8dbb-35a015f7e8a6 req-3843d1d3-9975-41a6-96c7-ebf2326d6e18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.853 2 DEBUG oslo_concurrency.lockutils [req-0aab396c-b5ab-463a-8dbb-35a015f7e8a6 req-3843d1d3-9975-41a6-96c7-ebf2326d6e18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.854 2 DEBUG oslo_concurrency.lockutils [req-0aab396c-b5ab-463a-8dbb-35a015f7e8a6 req-3843d1d3-9975-41a6-96c7-ebf2326d6e18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.854 2 DEBUG nova.compute.manager [req-0aab396c-b5ab-463a-8dbb-35a015f7e8a6 req-3843d1d3-9975-41a6-96c7-ebf2326d6e18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] No waiting events found dispatching network-vif-unplugged-7ce99440-fa49-4876-bb38-fce631d40400 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:06 compute-0 nova_compute[259627]: 2025-10-14 09:12:06.854 2 WARNING nova.compute.manager [req-0aab396c-b5ab-463a-8dbb-35a015f7e8a6 req-3843d1d3-9975-41a6-96c7-ebf2326d6e18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received unexpected event network-vif-unplugged-7ce99440-fa49-4876-bb38-fce631d40400 for instance with vm_state active and task_state rescuing.
Oct 14 09:12:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.864 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.018 2 INFO nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Instance shutdown successfully after 24 seconds.
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.025 2 INFO nova.virt.libvirt.driver [-] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Instance destroyed successfully.
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.026 2 DEBUG nova.objects.instance [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'numa_topology' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:07.028 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:07.029 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:07.029 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.049 2 INFO nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Attempting rescue
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.051 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.055 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.056 2 INFO nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Creating image(s)
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.101 2 DEBUG nova.storage.rbd_utils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.105 2 DEBUG nova.objects.instance [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.145 2 DEBUG nova.storage.rbd_utils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.164 2 DEBUG nova.storage.rbd_utils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.168 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.225 2 DEBUG nova.network.neutron [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Successfully created port: 4f827284-f357-43c5-bdde-c69731b52914 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.240 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.241 2 DEBUG oslo_concurrency.lockutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.241 2 DEBUG oslo_concurrency.lockutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.241 2 DEBUG oslo_concurrency.lockutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.260 2 DEBUG nova.storage.rbd_utils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.263 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:07 compute-0 ceph-mon[74249]: pgmap v1729: 305 pgs: 305 active+clean; 200 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 249 KiB/s rd, 2.2 MiB/s wr, 142 op/s
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.546 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.547 2 DEBUG nova.objects.instance [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'migration_context' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.560 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.561 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Start _get_guest_xml network_info=[{"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "vif_mac": "fa:16:3e:2b:08:ac"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.561 2 DEBUG nova.objects.instance [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'resources' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.578 2 WARNING nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.584 2 DEBUG nova.virt.libvirt.host [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.585 2 DEBUG nova.virt.libvirt.host [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.587 2 DEBUG nova.virt.libvirt.host [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.588 2 DEBUG nova.virt.libvirt.host [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.588 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.588 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.589 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.589 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.590 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.590 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.590 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.590 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.590 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.591 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.591 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.591 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.591 2 DEBUG nova.objects.instance [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.606 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:12:07 compute-0 nova_compute[259627]: 2025-10-14 09:12:07.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 200 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 108 KiB/s wr, 50 op/s
Oct 14 09:12:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1344636585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:08 compute-0 nova_compute[259627]: 2025-10-14 09:12:08.162 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:08 compute-0 nova_compute[259627]: 2025-10-14 09:12:08.164 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:08 compute-0 nova_compute[259627]: 2025-10-14 09:12:08.219 2 DEBUG nova.network.neutron [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Successfully updated port: 4f827284-f357-43c5-bdde-c69731b52914 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:12:08 compute-0 nova_compute[259627]: 2025-10-14 09:12:08.240 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:08 compute-0 nova_compute[259627]: 2025-10-14 09:12:08.240 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquired lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:08 compute-0 nova_compute[259627]: 2025-10-14 09:12:08.240 2 DEBUG nova.network.neutron [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:12:08 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1344636585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:08 compute-0 nova_compute[259627]: 2025-10-14 09:12:08.516 2 DEBUG nova.network.neutron [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:12:08 compute-0 nova_compute[259627]: 2025-10-14 09:12:08.560 2 DEBUG nova.compute.manager [req-e18c3c15-496b-4e24-a4ce-1c34aeceabf1 req-8541c6a7-68ab-405b-a9ec-8742a1f222e9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-changed-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:08 compute-0 nova_compute[259627]: 2025-10-14 09:12:08.561 2 DEBUG nova.compute.manager [req-e18c3c15-496b-4e24-a4ce-1c34aeceabf1 req-8541c6a7-68ab-405b-a9ec-8742a1f222e9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Refreshing instance network info cache due to event network-changed-4f827284-f357-43c5-bdde-c69731b52914. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:12:08 compute-0 nova_compute[259627]: 2025-10-14 09:12:08.561 2 DEBUG oslo_concurrency.lockutils [req-e18c3c15-496b-4e24-a4ce-1c34aeceabf1 req-8541c6a7-68ab-405b-a9ec-8742a1f222e9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2272901987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:08 compute-0 nova_compute[259627]: 2025-10-14 09:12:08.693 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:08 compute-0 nova_compute[259627]: 2025-10-14 09:12:08.695 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:08 compute-0 nova_compute[259627]: 2025-10-14 09:12:08.955 2 DEBUG nova.compute.manager [req-3887c8d7-41e5-4939-8f46-e92d0e411cf3 req-c6290408-fcf9-4465-8aa0-7b647c452da6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:08 compute-0 nova_compute[259627]: 2025-10-14 09:12:08.956 2 DEBUG oslo_concurrency.lockutils [req-3887c8d7-41e5-4939-8f46-e92d0e411cf3 req-c6290408-fcf9-4465-8aa0-7b647c452da6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:08 compute-0 nova_compute[259627]: 2025-10-14 09:12:08.956 2 DEBUG oslo_concurrency.lockutils [req-3887c8d7-41e5-4939-8f46-e92d0e411cf3 req-c6290408-fcf9-4465-8aa0-7b647c452da6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:08 compute-0 nova_compute[259627]: 2025-10-14 09:12:08.956 2 DEBUG oslo_concurrency.lockutils [req-3887c8d7-41e5-4939-8f46-e92d0e411cf3 req-c6290408-fcf9-4465-8aa0-7b647c452da6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:08 compute-0 nova_compute[259627]: 2025-10-14 09:12:08.956 2 DEBUG nova.compute.manager [req-3887c8d7-41e5-4939-8f46-e92d0e411cf3 req-c6290408-fcf9-4465-8aa0-7b647c452da6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] No waiting events found dispatching network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:08 compute-0 nova_compute[259627]: 2025-10-14 09:12:08.957 2 WARNING nova.compute.manager [req-3887c8d7-41e5-4939-8f46-e92d0e411cf3 req-c6290408-fcf9-4465-8aa0-7b647c452da6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received unexpected event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 for instance with vm_state active and task_state rescuing.
Oct 14 09:12:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2962736074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.130 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.131 2 DEBUG nova.virt.libvirt.vif [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:11:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-624290930',display_name='tempest-ServerRescueNegativeTestJSON-server-624290930',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-624290930',id=82,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:11:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f10ae705d9a34608a922683282b952b5',ramdisk_id='',reservation_id='r-njr7lxft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vir
tio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1031174086',owner_user_name='tempest-ServerRescueNegativeTestJSON-1031174086-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:11:40Z,user_data=None,user_id='aa1425f7fdfc4218bdabfe2458cd1c60',uuid=1141f79e-2e47-40f1-91b0-275a9fac765c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "vif_mac": "fa:16:3e:2b:08:ac"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.131 2 DEBUG nova.network.os_vif_util [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converting VIF {"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "vif_mac": "fa:16:3e:2b:08:ac"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.132 2 DEBUG nova.network.os_vif_util [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:08:ac,bridge_name='br-int',has_traffic_filtering=True,id=7ce99440-fa49-4876-bb38-fce631d40400,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ce99440-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.133 2 DEBUG nova.objects.instance [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.151 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:12:09 compute-0 nova_compute[259627]:   <uuid>1141f79e-2e47-40f1-91b0-275a9fac765c</uuid>
Oct 14 09:12:09 compute-0 nova_compute[259627]:   <name>instance-00000052</name>
Oct 14 09:12:09 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:12:09 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:12:09 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-624290930</nova:name>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:12:07</nova:creationTime>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:12:09 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:12:09 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:12:09 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:12:09 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:12:09 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:12:09 compute-0 nova_compute[259627]:         <nova:user uuid="aa1425f7fdfc4218bdabfe2458cd1c60">tempest-ServerRescueNegativeTestJSON-1031174086-project-member</nova:user>
Oct 14 09:12:09 compute-0 nova_compute[259627]:         <nova:project uuid="f10ae705d9a34608a922683282b952b5">tempest-ServerRescueNegativeTestJSON-1031174086</nova:project>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:12:09 compute-0 nova_compute[259627]:         <nova:port uuid="7ce99440-fa49-4876-bb38-fce631d40400">
Oct 14 09:12:09 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:12:09 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:12:09 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <system>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <entry name="serial">1141f79e-2e47-40f1-91b0-275a9fac765c</entry>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <entry name="uuid">1141f79e-2e47-40f1-91b0-275a9fac765c</entry>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     </system>
Oct 14 09:12:09 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:12:09 compute-0 nova_compute[259627]:   <os>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:   </os>
Oct 14 09:12:09 compute-0 nova_compute[259627]:   <features>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:   </features>
Oct 14 09:12:09 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:12:09 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:12:09 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1141f79e-2e47-40f1-91b0-275a9fac765c_disk.rescue">
Oct 14 09:12:09 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:09 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1141f79e-2e47-40f1-91b0-275a9fac765c_disk">
Oct 14 09:12:09 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:09 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <target dev="vdb" bus="virtio"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1141f79e-2e47-40f1-91b0-275a9fac765c_disk.config.rescue">
Oct 14 09:12:09 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:09 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:2b:08:ac"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <target dev="tap7ce99440-fa"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/console.log" append="off"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <video>
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     </video>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:12:09 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:12:09 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:12:09 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:12:09 compute-0 nova_compute[259627]: </domain>
Oct 14 09:12:09 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.165 2 INFO nova.virt.libvirt.driver [-] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Instance destroyed successfully.
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.215 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.216 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.217 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.218 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] No VIF found with MAC fa:16:3e:2b:08:ac, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.219 2 INFO nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Using config drive
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.260 2 DEBUG nova.storage.rbd_utils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.291 2 DEBUG nova.objects.instance [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.323 2 DEBUG nova.objects.instance [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'keypairs' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:09 compute-0 ceph-mon[74249]: pgmap v1730: 305 pgs: 305 active+clean; 200 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 108 KiB/s wr, 50 op/s
Oct 14 09:12:09 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2272901987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:09 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2962736074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.678 2 INFO nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Creating config drive at /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/disk.config.rescue
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.688 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo4qu42cl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.834 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo4qu42cl" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.863 2 DEBUG nova.storage.rbd_utils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.868 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/disk.config.rescue 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 200 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 108 KiB/s wr, 50 op/s
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.955 2 DEBUG nova.network.neutron [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating instance_info_cache with network_info: [{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.976 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Releasing lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.977 2 DEBUG nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance network_info: |[{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.978 2 DEBUG oslo_concurrency.lockutils [req-e18c3c15-496b-4e24-a4ce-1c34aeceabf1 req-8541c6a7-68ab-405b-a9ec-8742a1f222e9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.978 2 DEBUG nova.network.neutron [req-e18c3c15-496b-4e24-a4ce-1c34aeceabf1 req-8541c6a7-68ab-405b-a9ec-8742a1f222e9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Refreshing network info cache for port 4f827284-f357-43c5-bdde-c69731b52914 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.983 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Start _get_guest_xml network_info=[{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.991 2 WARNING nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.996 2 DEBUG nova.virt.libvirt.host [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:12:09 compute-0 nova_compute[259627]: 2025-10-14 09:12:09.997 2 DEBUG nova.virt.libvirt.host [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.023 2 DEBUG nova.virt.libvirt.host [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.023 2 DEBUG nova.virt.libvirt.host [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.024 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.024 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.025 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.025 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.025 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.026 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.026 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.026 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.026 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.027 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.027 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.027 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.031 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.080 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/disk.config.rescue 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.081 2 INFO nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Deleting local config drive /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/disk.config.rescue because it was imported into RBD.
Oct 14 09:12:10 compute-0 kernel: tap7ce99440-fa: entered promiscuous mode
Oct 14 09:12:10 compute-0 NetworkManager[44885]: <info>  [1760433130.1378] manager: (tap7ce99440-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/364)
Oct 14 09:12:10 compute-0 ovn_controller[152662]: 2025-10-14T09:12:10Z|00891|binding|INFO|Claiming lport 7ce99440-fa49-4876-bb38-fce631d40400 for this chassis.
Oct 14 09:12:10 compute-0 ovn_controller[152662]: 2025-10-14T09:12:10Z|00892|binding|INFO|7ce99440-fa49-4876-bb38-fce631d40400: Claiming fa:16:3e:2b:08:ac 10.100.0.4
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:10 compute-0 ovn_controller[152662]: 2025-10-14T09:12:10Z|00893|binding|INFO|Setting lport 7ce99440-fa49-4876-bb38-fce631d40400 ovn-installed in OVS
Oct 14 09:12:10 compute-0 ovn_controller[152662]: 2025-10-14T09:12:10Z|00894|binding|INFO|Setting lport 7ce99440-fa49-4876-bb38-fce631d40400 up in Southbound
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.177 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:08:ac 10.100.0.4'], port_security=['fa:16:3e:2b:08:ac 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1141f79e-2e47-40f1-91b0-275a9fac765c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f10ae705d9a34608a922683282b952b5', 'neutron:revision_number': '5', 'neutron:security_group_ids': '908d59ff-8d75-4488-a358-b4621923e921', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a29a9f-f981-430b-8a8f-ff453e5ae1a6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7ce99440-fa49-4876-bb38-fce631d40400) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.178 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7ce99440-fa49-4876-bb38-fce631d40400 in datapath 15e6e95c-6cd2-4631-98f6-9ed276458c39 bound to our chassis
Oct 14 09:12:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.180 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15e6e95c-6cd2-4631-98f6-9ed276458c39
Oct 14 09:12:10 compute-0 systemd-udevd[346695]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:12:10 compute-0 systemd-machined[214636]: New machine qemu-106-instance-00000052.
Oct 14 09:12:10 compute-0 NetworkManager[44885]: <info>  [1760433130.2088] device (tap7ce99440-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:12:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.208 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0974daa4-0c34-4429-88f2-efd937ee79fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:10 compute-0 NetworkManager[44885]: <info>  [1760433130.2098] device (tap7ce99440-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:12:10 compute-0 systemd[1]: Started Virtual Machine qemu-106-instance-00000052.
Oct 14 09:12:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.246 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7c18133c-8dec-48ad-9c87-5f307552f6b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.250 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d5fcae90-8924-46e3-9f0c-53bc1dedb9ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.286 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[66101c9c-3394-4e17-82f7-ae4220e7df1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.305 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e71c40df-1ee7-411e-aeab-4c467163f76c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15e6e95c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:6b:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 1000, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 1000, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686914, 'reachable_time': 20198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346729, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.322 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[98859a24-71ed-4096-bc44-ac703bed3900]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap15e6e95c-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686938, 'tstamp': 686938}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346730, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap15e6e95c-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686941, 'tstamp': 686941}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346730, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.324 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15e6e95c-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.327 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15e6e95c-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.328 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.328 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15e6e95c-60, col_values=(('external_ids', {'iface-id': '970f2645-7ec3-4b7f-8527-871800c728d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.329 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:10 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2229050100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.555 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.577 2 DEBUG nova.storage.rbd_utils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:10 compute-0 nova_compute[259627]: 2025-10-14 09:12:10.581 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:10 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/831358236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.006 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.009 2 DEBUG nova.virt.libvirt.vif [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-17250352',display_name='tempest-ServersNegativeTestJSON-server-17250352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-17250352',id=86,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-rj00rja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegativeTestJSON-1
475695514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:05Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=2534f8b9-e832-4b78-ada4-e551429bdc75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.009 2 DEBUG nova.network.os_vif_util [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.011 2 DEBUG nova.network.os_vif_util [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.013 2 DEBUG nova.objects.instance [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'pci_devices' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.031 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:12:11 compute-0 nova_compute[259627]:   <uuid>2534f8b9-e832-4b78-ada4-e551429bdc75</uuid>
Oct 14 09:12:11 compute-0 nova_compute[259627]:   <name>instance-00000056</name>
Oct 14 09:12:11 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:12:11 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:12:11 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersNegativeTestJSON-server-17250352</nova:name>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:12:09</nova:creationTime>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:12:11 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:12:11 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:12:11 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:12:11 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:12:11 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:12:11 compute-0 nova_compute[259627]:         <nova:user uuid="92e59e145f6942b78d0ffbebc4d89e76">tempest-ServersNegativeTestJSON-1475695514-project-member</nova:user>
Oct 14 09:12:11 compute-0 nova_compute[259627]:         <nova:project uuid="517aafb84156407c8672042097e3ef4f">tempest-ServersNegativeTestJSON-1475695514</nova:project>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:12:11 compute-0 nova_compute[259627]:         <nova:port uuid="4f827284-f357-43c5-bdde-c69731b52914">
Oct 14 09:12:11 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:12:11 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:12:11 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <system>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <entry name="serial">2534f8b9-e832-4b78-ada4-e551429bdc75</entry>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <entry name="uuid">2534f8b9-e832-4b78-ada4-e551429bdc75</entry>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     </system>
Oct 14 09:12:11 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:12:11 compute-0 nova_compute[259627]:   <os>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:   </os>
Oct 14 09:12:11 compute-0 nova_compute[259627]:   <features>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:   </features>
Oct 14 09:12:11 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:12:11 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:12:11 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2534f8b9-e832-4b78-ada4-e551429bdc75_disk">
Oct 14 09:12:11 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:11 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config">
Oct 14 09:12:11 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:11 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:8b:d7:f7"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <target dev="tap4f827284-f3"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/console.log" append="off"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <video>
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     </video>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:12:11 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:12:11 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:12:11 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:12:11 compute-0 nova_compute[259627]: </domain>
Oct 14 09:12:11 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.032 2 DEBUG nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Preparing to wait for external event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.033 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.034 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.034 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.036 2 DEBUG nova.virt.libvirt.vif [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-17250352',display_name='tempest-ServersNegativeTestJSON-server-17250352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-17250352',id=86,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-rj00rja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegative
TestJSON-1475695514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:05Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=2534f8b9-e832-4b78-ada4-e551429bdc75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.036 2 DEBUG nova.network.os_vif_util [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.037 2 DEBUG nova.network.os_vif_util [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.038 2 DEBUG os_vif [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.041 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.041 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.047 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f827284-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.047 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f827284-f3, col_values=(('external_ids', {'iface-id': '4f827284-f357-43c5-bdde-c69731b52914', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:d7:f7', 'vm-uuid': '2534f8b9-e832-4b78-ada4-e551429bdc75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:11 compute-0 NetworkManager[44885]: <info>  [1760433131.0505] manager: (tap4f827284-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.060 2 INFO os_vif [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3')
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.085 2 DEBUG nova.compute.manager [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.086 2 DEBUG oslo_concurrency.lockutils [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.086 2 DEBUG oslo_concurrency.lockutils [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.086 2 DEBUG oslo_concurrency.lockutils [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.086 2 DEBUG nova.compute.manager [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] No waiting events found dispatching network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.087 2 WARNING nova.compute.manager [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received unexpected event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 for instance with vm_state active and task_state rescuing.
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.087 2 DEBUG nova.compute.manager [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.087 2 DEBUG oslo_concurrency.lockutils [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.087 2 DEBUG oslo_concurrency.lockutils [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.087 2 DEBUG oslo_concurrency.lockutils [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.088 2 DEBUG nova.compute.manager [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] No waiting events found dispatching network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.088 2 WARNING nova.compute.manager [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received unexpected event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 for instance with vm_state active and task_state rescuing.
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.127 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.128 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.128 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No VIF found with MAC fa:16:3e:8b:d7:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.128 2 INFO nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Using config drive
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.153 2 DEBUG nova.storage.rbd_utils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:11 compute-0 ceph-mon[74249]: pgmap v1731: 305 pgs: 305 active+clean; 200 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 108 KiB/s wr, 50 op/s
Oct 14 09:12:11 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2229050100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:11 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/831358236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.517 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 1141f79e-2e47-40f1-91b0-275a9fac765c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.518 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433131.517287, 1141f79e-2e47-40f1-91b0-275a9fac765c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.518 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] VM Resumed (Lifecycle Event)
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.522 2 DEBUG nova.compute.manager [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.565 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.569 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.600 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.600 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433131.5199463, 1141f79e-2e47-40f1-91b0-275a9fac765c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.601 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] VM Started (Lifecycle Event)
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.617 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:11 compute-0 nova_compute[259627]: 2025-10-14 09:12:11.620 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:12:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 293 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 142 KiB/s rd, 3.7 MiB/s wr, 99 op/s
Oct 14 09:12:12 compute-0 nova_compute[259627]: 2025-10-14 09:12:12.554 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433117.552755, 290980d2-08b4-4029-a1c3-becd3457a410 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:12 compute-0 nova_compute[259627]: 2025-10-14 09:12:12.554 2 INFO nova.compute.manager [-] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] VM Stopped (Lifecycle Event)
Oct 14 09:12:12 compute-0 nova_compute[259627]: 2025-10-14 09:12:12.579 2 DEBUG nova.compute.manager [None req-4e80ac2d-c88e-487a-a4a7-7a853cca56ee - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:12 compute-0 nova_compute[259627]: 2025-10-14 09:12:12.671 2 INFO nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Creating config drive at /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config
Oct 14 09:12:12 compute-0 nova_compute[259627]: 2025-10-14 09:12:12.677 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4cb_lo9u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:12:12 compute-0 nova_compute[259627]: 2025-10-14 09:12:12.827 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4cb_lo9u" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:12 compute-0 nova_compute[259627]: 2025-10-14 09:12:12.874 2 DEBUG nova.storage.rbd_utils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:12 compute-0 nova_compute[259627]: 2025-10-14 09:12:12.880 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:12 compute-0 nova_compute[259627]: 2025-10-14 09:12:12.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:13 compute-0 nova_compute[259627]: 2025-10-14 09:12:13.057 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:13 compute-0 nova_compute[259627]: 2025-10-14 09:12:13.058 2 INFO nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Deleting local config drive /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config because it was imported into RBD.
Oct 14 09:12:13 compute-0 kernel: tap4f827284-f3: entered promiscuous mode
Oct 14 09:12:13 compute-0 NetworkManager[44885]: <info>  [1760433133.1242] manager: (tap4f827284-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/366)
Oct 14 09:12:13 compute-0 ovn_controller[152662]: 2025-10-14T09:12:13Z|00895|binding|INFO|Claiming lport 4f827284-f357-43c5-bdde-c69731b52914 for this chassis.
Oct 14 09:12:13 compute-0 ovn_controller[152662]: 2025-10-14T09:12:13Z|00896|binding|INFO|4f827284-f357-43c5-bdde-c69731b52914: Claiming fa:16:3e:8b:d7:f7 10.100.0.7
Oct 14 09:12:13 compute-0 nova_compute[259627]: 2025-10-14 09:12:13.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.180 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:d7:f7 10.100.0.7'], port_security=['fa:16:3e:8b:d7:f7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2534f8b9-e832-4b78-ada4-e551429bdc75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '517aafb84156407c8672042097e3ef4f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '572acc55-453a-444a-ab8d-a15e14283f7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927296e1-b389-4596-b9be-8cf735b93ca2, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4f827284-f357-43c5-bdde-c69731b52914) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.183 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4f827284-f357-43c5-bdde-c69731b52914 in datapath a49b41b4-2559-4a22-a274-a6c7bbe75f2c bound to our chassis
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.185 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a49b41b4-2559-4a22-a274-a6c7bbe75f2c
Oct 14 09:12:13 compute-0 NetworkManager[44885]: <info>  [1760433133.1986] device (tap4f827284-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:12:13 compute-0 NetworkManager[44885]: <info>  [1760433133.2005] device (tap4f827284-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.212 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dbad8275-c74a-4226-9942-21021150aca0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.213 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa49b41b4-21 in ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:12:13 compute-0 systemd-machined[214636]: New machine qemu-107-instance-00000056.
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.216 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa49b41b4-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.216 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[81669ea8-a53f-4394-89e2-842265fa0df9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.220 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[39b49bde-800e-46f1-8499-b2a7c3af0a0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:13 compute-0 systemd[1]: Started Virtual Machine qemu-107-instance-00000056.
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.239 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[96fc59ea-9b4e-4479-9752-1592b6d9706c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:13 compute-0 nova_compute[259627]: 2025-10-14 09:12:13.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.276 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5a6238d4-0a67-4f5f-b60a-04002fa1de71]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:13 compute-0 ovn_controller[152662]: 2025-10-14T09:12:13Z|00897|binding|INFO|Setting lport 4f827284-f357-43c5-bdde-c69731b52914 ovn-installed in OVS
Oct 14 09:12:13 compute-0 ovn_controller[152662]: 2025-10-14T09:12:13Z|00898|binding|INFO|Setting lport 4f827284-f357-43c5-bdde-c69731b52914 up in Southbound
Oct 14 09:12:13 compute-0 nova_compute[259627]: 2025-10-14 09:12:13.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.308 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[98c71466-8809-4493-80ab-e56c32259f1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:13 compute-0 NetworkManager[44885]: <info>  [1760433133.3145] manager: (tapa49b41b4-20): new Veth device (/org/freedesktop/NetworkManager/Devices/367)
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.314 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4cfb9253-bb0a-4464-82ac-61519a518f0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:13 compute-0 systemd-udevd[346920]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.361 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5fcb9480-66b1-4848-9c4c-a30ac7a6b71e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.364 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[049d73a4-69ef-4835-9682-ed32d55fe28f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:13 compute-0 nova_compute[259627]: 2025-10-14 09:12:13.388 2 DEBUG nova.network.neutron [req-e18c3c15-496b-4e24-a4ce-1c34aeceabf1 req-8541c6a7-68ab-405b-a9ec-8742a1f222e9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updated VIF entry in instance network info cache for port 4f827284-f357-43c5-bdde-c69731b52914. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:12:13 compute-0 nova_compute[259627]: 2025-10-14 09:12:13.388 2 DEBUG nova.network.neutron [req-e18c3c15-496b-4e24-a4ce-1c34aeceabf1 req-8541c6a7-68ab-405b-a9ec-8742a1f222e9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating instance_info_cache with network_info: [{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:13 compute-0 NetworkManager[44885]: <info>  [1760433133.3893] device (tapa49b41b4-20): carrier: link connected
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.400 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[46a1706a-ec1c-4a49-8403-7392233c48f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.416 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ad67ee35-7868-4bdf-ba4d-3cdf288fa5e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa49b41b4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:5b:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 259], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691215, 'reachable_time': 16714, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346939, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:13 compute-0 nova_compute[259627]: 2025-10-14 09:12:13.419 2 DEBUG oslo_concurrency.lockutils [req-e18c3c15-496b-4e24-a4ce-1c34aeceabf1 req-8541c6a7-68ab-405b-a9ec-8742a1f222e9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.431 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[758fd87f-8614-47a2-99f4-dc7c0312c018]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:5b6b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691215, 'tstamp': 691215}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346940, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.448 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b6af0847-2333-4280-bf21-fbfa09d25c18]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa49b41b4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:5b:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 259], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691215, 'reachable_time': 16714, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346941, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.479 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0d125a32-e81c-4a4b-b3a7-5980814da3f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:13 compute-0 ceph-mon[74249]: pgmap v1732: 305 pgs: 305 active+clean; 293 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 142 KiB/s rd, 3.7 MiB/s wr, 99 op/s
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.543 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a56944c3-e15a-451f-b269-4b8653732bb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.545 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49b41b4-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.545 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.545 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa49b41b4-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:13 compute-0 nova_compute[259627]: 2025-10-14 09:12:13.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:13 compute-0 NetworkManager[44885]: <info>  [1760433133.5481] manager: (tapa49b41b4-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Oct 14 09:12:13 compute-0 kernel: tapa49b41b4-20: entered promiscuous mode
Oct 14 09:12:13 compute-0 nova_compute[259627]: 2025-10-14 09:12:13.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.551 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa49b41b4-20, col_values=(('external_ids', {'iface-id': '61fe5571-a8eb-446a-8c4c-1f6f6758b146'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:13 compute-0 nova_compute[259627]: 2025-10-14 09:12:13.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:13 compute-0 ovn_controller[152662]: 2025-10-14T09:12:13Z|00899|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 09:12:13 compute-0 nova_compute[259627]: 2025-10-14 09:12:13.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.554 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.556 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a73130c2-2444-44f1-8adf-7124c5f4550a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.557 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-a49b41b4-2559-4a22-a274-a6c7bbe75f2c
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.pid.haproxy
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID a49b41b4-2559-4a22-a274-a6c7bbe75f2c
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:12:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.557 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'env', 'PROCESS_TAG=haproxy-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:12:13 compute-0 nova_compute[259627]: 2025-10-14 09:12:13.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 293 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.6 MiB/s wr, 51 op/s
Oct 14 09:12:14 compute-0 podman[347015]: 2025-10-14 09:12:14.013200377 +0000 UTC m=+0.086334677 container create 993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 09:12:14 compute-0 podman[347015]: 2025-10-14 09:12:13.96539384 +0000 UTC m=+0.038528210 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:12:14 compute-0 systemd[1]: Started libpod-conmon-993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864.scope.
Oct 14 09:12:14 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:12:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a5b768a60b37b396b810a626d7725822e051f41211fb6861ae019660974357e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:12:14 compute-0 podman[347015]: 2025-10-14 09:12:14.107660513 +0000 UTC m=+0.180794853 container init 993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:12:14 compute-0 podman[347015]: 2025-10-14 09:12:14.11727722 +0000 UTC m=+0.190411530 container start 993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:12:14 compute-0 nova_compute[259627]: 2025-10-14 09:12:14.136 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433134.135862, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:14 compute-0 nova_compute[259627]: 2025-10-14 09:12:14.136 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Started (Lifecycle Event)
Oct 14 09:12:14 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[347030]: [NOTICE]   (347034) : New worker (347036) forked
Oct 14 09:12:14 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[347030]: [NOTICE]   (347034) : Loading success.
Oct 14 09:12:14 compute-0 nova_compute[259627]: 2025-10-14 09:12:14.168 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:14 compute-0 nova_compute[259627]: 2025-10-14 09:12:14.172 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433134.136061, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:14 compute-0 nova_compute[259627]: 2025-10-14 09:12:14.172 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Paused (Lifecycle Event)
Oct 14 09:12:14 compute-0 nova_compute[259627]: 2025-10-14 09:12:14.198 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:14 compute-0 nova_compute[259627]: 2025-10-14 09:12:14.202 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:12:14 compute-0 nova_compute[259627]: 2025-10-14 09:12:14.226 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:12:14 compute-0 nova_compute[259627]: 2025-10-14 09:12:14.568 2 INFO nova.compute.manager [None req-d8b55714-50d7-4e4c-937f-1db455c68914 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Pausing
Oct 14 09:12:14 compute-0 nova_compute[259627]: 2025-10-14 09:12:14.569 2 DEBUG nova.objects.instance [None req-d8b55714-50d7-4e4c-937f-1db455c68914 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'flavor' on Instance uuid 70e3c250-cd38-4718-9a7f-0fbf7bf471fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:14 compute-0 nova_compute[259627]: 2025-10-14 09:12:14.604 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433134.6047902, 70e3c250-cd38-4718-9a7f-0fbf7bf471fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:14 compute-0 nova_compute[259627]: 2025-10-14 09:12:14.605 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] VM Paused (Lifecycle Event)
Oct 14 09:12:14 compute-0 nova_compute[259627]: 2025-10-14 09:12:14.607 2 DEBUG nova.compute.manager [None req-d8b55714-50d7-4e4c-937f-1db455c68914 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:14 compute-0 nova_compute[259627]: 2025-10-14 09:12:14.627 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:14 compute-0 nova_compute[259627]: 2025-10-14 09:12:14.632 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:12:14 compute-0 nova_compute[259627]: 2025-10-14 09:12:14.654 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 14 09:12:15 compute-0 ceph-mon[74249]: pgmap v1733: 305 pgs: 305 active+clean; 293 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.6 MiB/s wr, 51 op/s
Oct 14 09:12:15 compute-0 podman[347046]: 2025-10-14 09:12:15.69683769 +0000 UTC m=+0.095170365 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:12:15 compute-0 podman[347045]: 2025-10-14 09:12:15.706522388 +0000 UTC m=+0.110693977 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 14 09:12:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 293 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 133 op/s
Oct 14 09:12:16 compute-0 nova_compute[259627]: 2025-10-14 09:12:16.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:17 compute-0 nova_compute[259627]: 2025-10-14 09:12:17.122 2 INFO nova.compute.manager [None req-770b5ff0-1343-49c9-972e-2396b40bab6d aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Unpausing
Oct 14 09:12:17 compute-0 nova_compute[259627]: 2025-10-14 09:12:17.123 2 DEBUG nova.objects.instance [None req-770b5ff0-1343-49c9-972e-2396b40bab6d aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'flavor' on Instance uuid 70e3c250-cd38-4718-9a7f-0fbf7bf471fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:17 compute-0 nova_compute[259627]: 2025-10-14 09:12:17.149 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433137.1494572, 70e3c250-cd38-4718-9a7f-0fbf7bf471fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:17 compute-0 nova_compute[259627]: 2025-10-14 09:12:17.149 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] VM Resumed (Lifecycle Event)
Oct 14 09:12:17 compute-0 virtqemud[259351]: argument unsupported: QEMU guest agent is not configured
Oct 14 09:12:17 compute-0 nova_compute[259627]: 2025-10-14 09:12:17.153 2 DEBUG nova.virt.libvirt.guest [None req-770b5ff0-1343-49c9-972e-2396b40bab6d aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 14 09:12:17 compute-0 nova_compute[259627]: 2025-10-14 09:12:17.153 2 DEBUG nova.compute.manager [None req-770b5ff0-1343-49c9-972e-2396b40bab6d aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:17 compute-0 nova_compute[259627]: 2025-10-14 09:12:17.175 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:17 compute-0 nova_compute[259627]: 2025-10-14 09:12:17.177 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:12:17 compute-0 nova_compute[259627]: 2025-10-14 09:12:17.206 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] During sync_power_state the instance has a pending task (unpausing). Skip.
Oct 14 09:12:17 compute-0 ceph-mon[74249]: pgmap v1734: 305 pgs: 305 active+clean; 293 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 133 op/s
Oct 14 09:12:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:12:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 293 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 129 op/s
Oct 14 09:12:17 compute-0 nova_compute[259627]: 2025-10-14 09:12:17.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.757 2 DEBUG nova.compute.manager [req-c1e2c84c-bf81-4aff-928a-2197569568f6 req-adefe643-e70e-4366-82bb-ace31311a7f1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.758 2 DEBUG oslo_concurrency.lockutils [req-c1e2c84c-bf81-4aff-928a-2197569568f6 req-adefe643-e70e-4366-82bb-ace31311a7f1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.758 2 DEBUG oslo_concurrency.lockutils [req-c1e2c84c-bf81-4aff-928a-2197569568f6 req-adefe643-e70e-4366-82bb-ace31311a7f1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.759 2 DEBUG oslo_concurrency.lockutils [req-c1e2c84c-bf81-4aff-928a-2197569568f6 req-adefe643-e70e-4366-82bb-ace31311a7f1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.760 2 DEBUG nova.compute.manager [req-c1e2c84c-bf81-4aff-928a-2197569568f6 req-adefe643-e70e-4366-82bb-ace31311a7f1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Processing event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.761 2 DEBUG nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.765 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433138.7648256, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.765 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Resumed (Lifecycle Event)
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.768 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.772 2 INFO nova.virt.libvirt.driver [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance spawned successfully.
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.773 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.799 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.808 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.809 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.810 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.810 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.811 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.812 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.819 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.851 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.883 2 INFO nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Took 12.99 seconds to spawn the instance on the hypervisor.
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.883 2 DEBUG nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.948 2 INFO nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Took 13.98 seconds to build instance.
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.968 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.990 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:18 compute-0 nova_compute[259627]: 2025-10-14 09:12:18.990 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.022 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.040 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.041 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.068 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.151 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.152 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.161 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.161 2 INFO nova.compute.claims [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.169 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.369 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:19 compute-0 ceph-mon[74249]: pgmap v1735: 305 pgs: 305 active+clean; 293 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 129 op/s
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.811 2 DEBUG oslo_concurrency.lockutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.811 2 DEBUG oslo_concurrency.lockutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.812 2 DEBUG oslo_concurrency.lockutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.812 2 DEBUG oslo_concurrency.lockutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.812 2 DEBUG oslo_concurrency.lockutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.814 2 INFO nova.compute.manager [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Terminating instance
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.816 2 DEBUG nova.compute.manager [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:12:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:12:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4011629649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:19 compute-0 kernel: tap7ce99440-fa (unregistering): left promiscuous mode
Oct 14 09:12:19 compute-0 NetworkManager[44885]: <info>  [1760433139.8601] device (tap7ce99440-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.866 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:19 compute-0 ovn_controller[152662]: 2025-10-14T09:12:19Z|00900|binding|INFO|Releasing lport 7ce99440-fa49-4876-bb38-fce631d40400 from this chassis (sb_readonly=0)
Oct 14 09:12:19 compute-0 ovn_controller[152662]: 2025-10-14T09:12:19Z|00901|binding|INFO|Setting lport 7ce99440-fa49-4876-bb38-fce631d40400 down in Southbound
Oct 14 09:12:19 compute-0 ovn_controller[152662]: 2025-10-14T09:12:19Z|00902|binding|INFO|Removing iface tap7ce99440-fa ovn-installed in OVS
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:19.879 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:08:ac 10.100.0.4'], port_security=['fa:16:3e:2b:08:ac 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1141f79e-2e47-40f1-91b0-275a9fac765c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f10ae705d9a34608a922683282b952b5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '908d59ff-8d75-4488-a358-b4621923e921', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a29a9f-f981-430b-8a8f-ff453e5ae1a6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7ce99440-fa49-4876-bb38-fce631d40400) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:19.880 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7ce99440-fa49-4876-bb38-fce631d40400 in datapath 15e6e95c-6cd2-4631-98f6-9ed276458c39 unbound from our chassis
Oct 14 09:12:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:19.882 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15e6e95c-6cd2-4631-98f6-9ed276458c39
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.897 2 DEBUG nova.compute.provider_tree [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:19 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000052.scope: Deactivated successfully.
Oct 14 09:12:19 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000052.scope: Consumed 9.710s CPU time.
Oct 14 09:12:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 293 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 129 op/s
Oct 14 09:12:19 compute-0 systemd-machined[214636]: Machine qemu-106-instance-00000052 terminated.
Oct 14 09:12:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:19.913 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fc135c42-77bf-458f-b76d-a7a64c3dd2fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.918 2 DEBUG nova.scheduler.client.report [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.943 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.944 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:12:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:19.945 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[52f999af-5ec3-4b22-b20f-8ef8ba9e4a29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:19.948 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d2668f-44ce-4c7c-bcb7-023b3acbb558]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.950 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.959 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.959 2 INFO nova.compute.claims [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:12:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:19.985 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b872de77-1c75-45b3-a700-10bf5f691315]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.993 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:12:19 compute-0 nova_compute[259627]: 2025-10-14 09:12:19.994 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:12:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:20.014 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[90dcba27-85cf-4c08-b0d0-c901d5ef3206]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15e6e95c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:6b:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 1000, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 1000, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686914, 'reachable_time': 20198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347118, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.030 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:12:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:20.043 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bf3c76c5-081a-43e8-9489-f3f276e782a7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap15e6e95c-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686938, 'tstamp': 686938}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347119, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap15e6e95c-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686941, 'tstamp': 686941}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347119, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:20.045 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15e6e95c-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.056 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:12:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:20.058 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15e6e95c-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:20.059 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:20.060 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15e6e95c-60, col_values=(('external_ids', {'iface-id': '970f2645-7ec3-4b7f-8527-871800c728d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:20.060 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.067 2 INFO nova.virt.libvirt.driver [-] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Instance destroyed successfully.
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.068 2 DEBUG nova.objects.instance [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'resources' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.098 2 DEBUG nova.virt.libvirt.vif [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:11:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-624290930',display_name='tempest-ServerRescueNegativeTestJSON-server-624290930',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-624290930',id=82,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:12:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f10ae705d9a34608a922683282b952b5',ramdisk_id='',reservation_id='r-njr7lxft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1031174086',owner_user_name='tempest-ServerRescueNegativeTestJSON-1031174086-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:12:11Z,user_data=None,user_id='aa1425f7fdfc4218bdabfe2458cd1c60',uuid=1141f79e-2e47-40f1-91b0-275a9fac765c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.098 2 DEBUG nova.network.os_vif_util [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converting VIF {"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.099 2 DEBUG nova.network.os_vif_util [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:08:ac,bridge_name='br-int',has_traffic_filtering=True,id=7ce99440-fa49-4876-bb38-fce631d40400,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ce99440-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.100 2 DEBUG os_vif [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:08:ac,bridge_name='br-int',has_traffic_filtering=True,id=7ce99440-fa49-4876-bb38-fce631d40400,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ce99440-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.105 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ce99440-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.111 2 INFO os_vif [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:08:ac,bridge_name='br-int',has_traffic_filtering=True,id=7ce99440-fa49-4876-bb38-fce631d40400,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ce99440-fa')
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.180 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.183 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.184 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Creating image(s)
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.207 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.227 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.250 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.254 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.318 2 DEBUG nova.policy [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f50b95774c384c5a8414b197ed5d7b82', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5a057db932754d6eae91f0d2f359f1ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.323 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.363 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.365 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.365 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.366 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.391 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.395 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:20 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4011629649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.715 2 DEBUG nova.compute.manager [req-403689e3-213e-442b-9ca9-a735b5ceee7d req-5d9ea70f-3394-4e00-b6bc-fc5b2462fd70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-vif-unplugged-7ce99440-fa49-4876-bb38-fce631d40400 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.715 2 DEBUG oslo_concurrency.lockutils [req-403689e3-213e-442b-9ca9-a735b5ceee7d req-5d9ea70f-3394-4e00-b6bc-fc5b2462fd70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.715 2 DEBUG oslo_concurrency.lockutils [req-403689e3-213e-442b-9ca9-a735b5ceee7d req-5d9ea70f-3394-4e00-b6bc-fc5b2462fd70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.716 2 DEBUG oslo_concurrency.lockutils [req-403689e3-213e-442b-9ca9-a735b5ceee7d req-5d9ea70f-3394-4e00-b6bc-fc5b2462fd70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.716 2 DEBUG nova.compute.manager [req-403689e3-213e-442b-9ca9-a735b5ceee7d req-5d9ea70f-3394-4e00-b6bc-fc5b2462fd70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] No waiting events found dispatching network-vif-unplugged-7ce99440-fa49-4876-bb38-fce631d40400 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.716 2 DEBUG nova.compute.manager [req-403689e3-213e-442b-9ca9-a735b5ceee7d req-5d9ea70f-3394-4e00-b6bc-fc5b2462fd70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-vif-unplugged-7ce99440-fa49-4876-bb38-fce631d40400 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.717 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:12:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/770145722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.797 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.806 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] resizing rbd image 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.852 2 DEBUG nova.compute.provider_tree [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.871 2 DEBUG nova.scheduler.client.report [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.923 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.924 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.932 2 DEBUG nova.objects.instance [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'migration_context' on Instance uuid 16c1b8b8-cda9-45f9-994f-3f102eb85e1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.952 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.952 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Ensure instance console log exists: /var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.952 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.953 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.953 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.980 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.981 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:12:20 compute-0 nova_compute[259627]: 2025-10-14 09:12:20.998 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.016 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.033 2 DEBUG nova.compute.manager [req-a2901f3b-c347-4221-8719-54b8fafc13a7 req-cb1fa33f-032d-4ac9-a1a6-66edf9e29083 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.033 2 DEBUG oslo_concurrency.lockutils [req-a2901f3b-c347-4221-8719-54b8fafc13a7 req-cb1fa33f-032d-4ac9-a1a6-66edf9e29083 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.034 2 DEBUG oslo_concurrency.lockutils [req-a2901f3b-c347-4221-8719-54b8fafc13a7 req-cb1fa33f-032d-4ac9-a1a6-66edf9e29083 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.034 2 DEBUG oslo_concurrency.lockutils [req-a2901f3b-c347-4221-8719-54b8fafc13a7 req-cb1fa33f-032d-4ac9-a1a6-66edf9e29083 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.034 2 DEBUG nova.compute.manager [req-a2901f3b-c347-4221-8719-54b8fafc13a7 req-cb1fa33f-032d-4ac9-a1a6-66edf9e29083 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.034 2 WARNING nova.compute.manager [req-a2901f3b-c347-4221-8719-54b8fafc13a7 req-cb1fa33f-032d-4ac9-a1a6-66edf9e29083 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state active and task_state None.
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.113 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.115 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.115 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Creating image(s)
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.155 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.180 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.210 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.214 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.259 2 DEBUG nova.policy [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f50b95774c384c5a8414b197ed5d7b82', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5a057db932754d6eae91f0d2f359f1ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.271 2 INFO nova.virt.libvirt.driver [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Deleting instance files /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c_del
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.272 2 INFO nova.virt.libvirt.driver [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Deletion of /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c_del complete
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.297 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Successfully created port: b0cc5216-2023-4add-97a9-4bafe30fd8c3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.304 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.304 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.305 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.305 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.327 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.332 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.386 2 INFO nova.compute.manager [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Took 1.57 seconds to destroy the instance on the hypervisor.
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.387 2 DEBUG oslo.service.loopingcall [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.387 2 DEBUG nova.compute.manager [-] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.387 2 DEBUG nova.network.neutron [-] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:12:21 compute-0 ceph-mon[74249]: pgmap v1736: 305 pgs: 305 active+clean; 293 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 129 op/s
Oct 14 09:12:21 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/770145722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.648 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.728 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] resizing rbd image ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.865 2 DEBUG nova.objects.instance [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'migration_context' on Instance uuid ab89bfba-67b2-4767-90f2-7ef5dab476c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.888 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.889 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Ensure instance console log exists: /var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.890 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.891 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.891 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 277 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 206 op/s
Oct 14 09:12:21 compute-0 nova_compute[259627]: 2025-10-14 09:12:21.926 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Successfully created port: 1037d287-c167-4691-9393-55be86ecbab2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.660 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "4774788b-1dc2-40c6-87d0-db4e3f54a609" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.661 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.686 2 DEBUG nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.781 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.781 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.787 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.788 2 INFO nova.compute.claims [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.812 2 DEBUG nova.network.neutron [-] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.820 2 DEBUG nova.compute.manager [req-eb06740f-9ab0-430c-8156-632875b89e02 req-a1b77354-0add-4fc7-991f-2ce66977b46d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.820 2 DEBUG oslo_concurrency.lockutils [req-eb06740f-9ab0-430c-8156-632875b89e02 req-a1b77354-0add-4fc7-991f-2ce66977b46d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.820 2 DEBUG oslo_concurrency.lockutils [req-eb06740f-9ab0-430c-8156-632875b89e02 req-a1b77354-0add-4fc7-991f-2ce66977b46d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.821 2 DEBUG oslo_concurrency.lockutils [req-eb06740f-9ab0-430c-8156-632875b89e02 req-a1b77354-0add-4fc7-991f-2ce66977b46d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.821 2 DEBUG nova.compute.manager [req-eb06740f-9ab0-430c-8156-632875b89e02 req-a1b77354-0add-4fc7-991f-2ce66977b46d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] No waiting events found dispatching network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.821 2 WARNING nova.compute.manager [req-eb06740f-9ab0-430c-8156-632875b89e02 req-a1b77354-0add-4fc7-991f-2ce66977b46d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received unexpected event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 for instance with vm_state rescued and task_state deleting.
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.849 2 INFO nova.compute.manager [-] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Took 1.46 seconds to deallocate network for instance.
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.857 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Successfully updated port: b0cc5216-2023-4add-97a9-4bafe30fd8c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.897 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "refresh_cache-16c1b8b8-cda9-45f9-994f-3f102eb85e1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.897 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquired lock "refresh_cache-16c1b8b8-cda9-45f9-994f-3f102eb85e1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.897 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.906 2 DEBUG oslo_concurrency.lockutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:22 compute-0 nova_compute[259627]: 2025-10-14 09:12:22.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.046 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.117 2 DEBUG nova.compute.manager [req-b64e8aa8-cd40-40ed-9dec-eae8af35a2f1 req-bb893528-5658-40b2-8cc8-aab02ca5b407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-vif-deleted-7ce99440-fa49-4876-bb38-fce631d40400 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.118 2 DEBUG nova.compute.manager [req-b64e8aa8-cd40-40ed-9dec-eae8af35a2f1 req-bb893528-5658-40b2-8cc8-aab02ca5b407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Received event network-changed-b0cc5216-2023-4add-97a9-4bafe30fd8c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.119 2 DEBUG nova.compute.manager [req-b64e8aa8-cd40-40ed-9dec-eae8af35a2f1 req-bb893528-5658-40b2-8cc8-aab02ca5b407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Refreshing instance network info cache due to event network-changed-b0cc5216-2023-4add-97a9-4bafe30fd8c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.119 2 DEBUG oslo_concurrency.lockutils [req-b64e8aa8-cd40-40ed-9dec-eae8af35a2f1 req-bb893528-5658-40b2-8cc8-aab02ca5b407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-16c1b8b8-cda9-45f9-994f-3f102eb85e1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.249 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Successfully updated port: 1037d287-c167-4691-9393-55be86ecbab2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.283 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.291 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "refresh_cache-ab89bfba-67b2-4767-90f2-7ef5dab476c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.291 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquired lock "refresh_cache-ab89bfba-67b2-4767-90f2-7ef5dab476c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.291 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:12:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:12:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3034072506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.502 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.511 2 DEBUG nova.compute.provider_tree [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.528 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.534 2 DEBUG nova.scheduler.client.report [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.562 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.563 2 DEBUG nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.567 2 DEBUG oslo_concurrency.lockutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:23 compute-0 ceph-mon[74249]: pgmap v1737: 305 pgs: 305 active+clean; 277 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 206 op/s
Oct 14 09:12:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3034072506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.631 2 DEBUG nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.632 2 DEBUG nova.network.neutron [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.655 2 INFO nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.677 2 DEBUG nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.731 2 DEBUG oslo_concurrency.processutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.832 2 DEBUG nova.policy [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '92e59e145f6942b78d0ffbebc4d89e76', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '517aafb84156407c8672042097e3ef4f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.836 2 DEBUG nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.838 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.838 2 INFO nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Creating image(s)
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.862 2 DEBUG nova.storage.rbd_utils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.887 2 DEBUG nova.storage.rbd_utils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 277 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 69 KiB/s wr, 158 op/s
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.921 2 DEBUG nova.storage.rbd_utils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:23 compute-0 nova_compute[259627]: 2025-10-14 09:12:23.925 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.025 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.026 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.027 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.027 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.047 2 DEBUG nova.storage.rbd_utils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.051 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:12:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3552245106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.230 2 DEBUG oslo_concurrency.processutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.260 2 DEBUG nova.compute.provider_tree [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.288 2 DEBUG nova.scheduler.client.report [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.318 2 DEBUG oslo_concurrency.lockutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.336 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Updating instance_info_cache with network_info: [{"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.346 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.386 2 INFO nova.scheduler.client.report [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Deleted allocations for instance 1141f79e-2e47-40f1-91b0-275a9fac765c
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.388 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Releasing lock "refresh_cache-16c1b8b8-cda9-45f9-994f-3f102eb85e1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.388 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Instance network_info: |[{"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.390 2 DEBUG oslo_concurrency.lockutils [req-b64e8aa8-cd40-40ed-9dec-eae8af35a2f1 req-bb893528-5658-40b2-8cc8-aab02ca5b407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-16c1b8b8-cda9-45f9-994f-3f102eb85e1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.391 2 DEBUG nova.network.neutron [req-b64e8aa8-cd40-40ed-9dec-eae8af35a2f1 req-bb893528-5658-40b2-8cc8-aab02ca5b407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Refreshing network info cache for port b0cc5216-2023-4add-97a9-4bafe30fd8c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.396 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Start _get_guest_xml network_info=[{"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.446 2 DEBUG nova.storage.rbd_utils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] resizing rbd image 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.492 2 WARNING nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.496 2 DEBUG nova.virt.libvirt.host [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.497 2 DEBUG nova.virt.libvirt.host [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.501 2 DEBUG nova.virt.libvirt.host [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.502 2 DEBUG nova.virt.libvirt.host [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.502 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.502 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.503 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.503 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.503 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.503 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.503 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.504 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.504 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.504 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.504 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.504 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.507 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.563 2 DEBUG oslo_concurrency.lockutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:24 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3552245106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.626 2 DEBUG nova.objects.instance [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'migration_context' on Instance uuid 4774788b-1dc2-40c6-87d0-db4e3f54a609 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.642 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.643 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Ensure instance console log exists: /var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.643 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.644 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.644 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.645 2 DEBUG nova.network.neutron [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Successfully created port: 26bcc700-59c6-4e79-904d-988cd11152c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:12:24 compute-0 nova_compute[259627]: 2025-10-14 09:12:24.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.025 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Updating instance_info_cache with network_info: [{"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.049 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Releasing lock "refresh_cache-ab89bfba-67b2-4767-90f2-7ef5dab476c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.050 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Instance network_info: |[{"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.054 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Start _get_guest_xml network_info=[{"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.060 2 WARNING nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.066 2 DEBUG nova.virt.libvirt.host [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:12:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3541068764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.068 2 DEBUG nova.virt.libvirt.host [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.071 2 DEBUG nova.virt.libvirt.host [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.072 2 DEBUG nova.virt.libvirt.host [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.073 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.073 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.074 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.074 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.075 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.075 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.076 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.076 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.077 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.077 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.078 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.078 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.083 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.125 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.152 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.157 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.377 2 DEBUG nova.compute.manager [req-9a426975-42fd-4497-a4fc-4b853b057ebc req-1df1fe81-05bd-4ba2-ad17-1885ca1372c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Received event network-changed-1037d287-c167-4691-9393-55be86ecbab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.378 2 DEBUG nova.compute.manager [req-9a426975-42fd-4497-a4fc-4b853b057ebc req-1df1fe81-05bd-4ba2-ad17-1885ca1372c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Refreshing instance network info cache due to event network-changed-1037d287-c167-4691-9393-55be86ecbab2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.379 2 DEBUG oslo_concurrency.lockutils [req-9a426975-42fd-4497-a4fc-4b853b057ebc req-1df1fe81-05bd-4ba2-ad17-1885ca1372c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-ab89bfba-67b2-4767-90f2-7ef5dab476c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.379 2 DEBUG oslo_concurrency.lockutils [req-9a426975-42fd-4497-a4fc-4b853b057ebc req-1df1fe81-05bd-4ba2-ad17-1885ca1372c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-ab89bfba-67b2-4767-90f2-7ef5dab476c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.380 2 DEBUG nova.network.neutron [req-9a426975-42fd-4497-a4fc-4b853b057ebc req-1df1fe81-05bd-4ba2-ad17-1885ca1372c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Refreshing network info cache for port 1037d287-c167-4691-9393-55be86ecbab2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:12:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1852324471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.560 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.584 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:25 compute-0 ceph-mon[74249]: pgmap v1738: 305 pgs: 305 active+clean; 277 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 69 KiB/s wr, 158 op/s
Oct 14 09:12:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3541068764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1852324471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.588 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/749511871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.638 2 DEBUG nova.network.neutron [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Successfully updated port: 26bcc700-59c6-4e79-904d-988cd11152c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.644 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.646 2 DEBUG nova.virt.libvirt.vif [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1135349362',display_name='tempest-tempest.common.compute-instance-1135349362-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1135349362-1',id=87,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-zo7br5c0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCreateTestJSON-2115206001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:20Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=16c1b8b8-cda9-45f9-994f-3f102eb85e1e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.647 2 DEBUG nova.network.os_vif_util [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.648 2 DEBUG nova.network.os_vif_util [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:4e:ee,bridge_name='br-int',has_traffic_filtering=True,id=b0cc5216-2023-4add-97a9-4bafe30fd8c3,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cc5216-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.649 2 DEBUG nova.objects.instance [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'pci_devices' on Instance uuid 16c1b8b8-cda9-45f9-994f-3f102eb85e1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.670 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "refresh_cache-4774788b-1dc2-40c6-87d0-db4e3f54a609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.670 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquired lock "refresh_cache-4774788b-1dc2-40c6-87d0-db4e3f54a609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.670 2 DEBUG nova.network.neutron [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.675 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:12:25 compute-0 nova_compute[259627]:   <uuid>16c1b8b8-cda9-45f9-994f-3f102eb85e1e</uuid>
Oct 14 09:12:25 compute-0 nova_compute[259627]:   <name>instance-00000057</name>
Oct 14 09:12:25 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:12:25 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:12:25 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <nova:name>tempest-tempest.common.compute-instance-1135349362-1</nova:name>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:12:24</nova:creationTime>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:12:25 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:12:25 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:12:25 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:12:25 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:12:25 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:12:25 compute-0 nova_compute[259627]:         <nova:user uuid="f50b95774c384c5a8414b197ed5d7b82">tempest-MultipleCreateTestJSON-2115206001-project-member</nova:user>
Oct 14 09:12:25 compute-0 nova_compute[259627]:         <nova:project uuid="5a057db932754d6eae91f0d2f359f1ff">tempest-MultipleCreateTestJSON-2115206001</nova:project>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:12:25 compute-0 nova_compute[259627]:         <nova:port uuid="b0cc5216-2023-4add-97a9-4bafe30fd8c3">
Oct 14 09:12:25 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:12:25 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:12:25 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <system>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <entry name="serial">16c1b8b8-cda9-45f9-994f-3f102eb85e1e</entry>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <entry name="uuid">16c1b8b8-cda9-45f9-994f-3f102eb85e1e</entry>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     </system>
Oct 14 09:12:25 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:12:25 compute-0 nova_compute[259627]:   <os>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:   </os>
Oct 14 09:12:25 compute-0 nova_compute[259627]:   <features>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:   </features>
Oct 14 09:12:25 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:12:25 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:12:25 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk">
Oct 14 09:12:25 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:25 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk.config">
Oct 14 09:12:25 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:25 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:52:4e:ee"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <target dev="tapb0cc5216-20"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e/console.log" append="off"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <video>
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     </video>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:12:25 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:12:25 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:12:25 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:12:25 compute-0 nova_compute[259627]: </domain>
Oct 14 09:12:25 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.677 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Preparing to wait for external event network-vif-plugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.677 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.678 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.678 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.679 2 DEBUG nova.virt.libvirt.vif [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1135349362',display_name='tempest-tempest.common.compute-instance-1135349362-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1135349362-1',id=87,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-zo7br5c0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCreateTestJSON-2115206001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:20Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=16c1b8b8-cda9-45f9-994f-3f102eb85e1e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.680 2 DEBUG nova.network.os_vif_util [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.681 2 DEBUG nova.network.os_vif_util [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:4e:ee,bridge_name='br-int',has_traffic_filtering=True,id=b0cc5216-2023-4add-97a9-4bafe30fd8c3,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cc5216-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.682 2 DEBUG os_vif [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:4e:ee,bridge_name='br-int',has_traffic_filtering=True,id=b0cc5216-2023-4add-97a9-4bafe30fd8c3,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cc5216-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.683 2 DEBUG oslo_concurrency.lockutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.683 2 DEBUG oslo_concurrency.lockutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.683 2 DEBUG oslo_concurrency.lockutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.684 2 DEBUG oslo_concurrency.lockutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.684 2 DEBUG oslo_concurrency.lockutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.686 2 INFO nova.compute.manager [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Terminating instance
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.687 2 DEBUG nova.compute.manager [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.689 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.689 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.697 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0cc5216-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.697 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb0cc5216-20, col_values=(('external_ids', {'iface-id': 'b0cc5216-2023-4add-97a9-4bafe30fd8c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:4e:ee', 'vm-uuid': '16c1b8b8-cda9-45f9-994f-3f102eb85e1e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:25 compute-0 NetworkManager[44885]: <info>  [1760433145.7001] manager: (tapb0cc5216-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.707 2 INFO os_vif [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:4e:ee,bridge_name='br-int',has_traffic_filtering=True,id=b0cc5216-2023-4add-97a9-4bafe30fd8c3,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cc5216-20')
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.717 2 DEBUG nova.compute.manager [req-ac608442-b94d-4664-aeff-b75fbff5c4c7 req-5e6c74a1-66d1-4db8-b791-c4b89c2d26d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Received event network-changed-26bcc700-59c6-4e79-904d-988cd11152c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.717 2 DEBUG nova.compute.manager [req-ac608442-b94d-4664-aeff-b75fbff5c4c7 req-5e6c74a1-66d1-4db8-b791-c4b89c2d26d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Refreshing instance network info cache due to event network-changed-26bcc700-59c6-4e79-904d-988cd11152c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.717 2 DEBUG oslo_concurrency.lockutils [req-ac608442-b94d-4664-aeff-b75fbff5c4c7 req-5e6c74a1-66d1-4db8-b791-c4b89c2d26d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4774788b-1dc2-40c6-87d0-db4e3f54a609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:25 compute-0 kernel: tapf027d20e-66 (unregistering): left promiscuous mode
Oct 14 09:12:25 compute-0 NetworkManager[44885]: <info>  [1760433145.7638] device (tapf027d20e-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.776 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.777 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.778 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No VIF found with MAC fa:16:3e:52:4e:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.778 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Using config drive
Oct 14 09:12:25 compute-0 ovn_controller[152662]: 2025-10-14T09:12:25Z|00903|binding|INFO|Releasing lport f027d20e-665b-4bd0-836c-7e8edb2b6bf7 from this chassis (sb_readonly=0)
Oct 14 09:12:25 compute-0 ovn_controller[152662]: 2025-10-14T09:12:25Z|00904|binding|INFO|Setting lport f027d20e-665b-4bd0-836c-7e8edb2b6bf7 down in Southbound
Oct 14 09:12:25 compute-0 ovn_controller[152662]: 2025-10-14T09:12:25Z|00905|binding|INFO|Removing iface tapf027d20e-66 ovn-installed in OVS
Oct 14 09:12:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:25.833 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:5e:c8 10.100.0.5'], port_security=['fa:16:3e:1a:5e:c8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '70e3c250-cd38-4718-9a7f-0fbf7bf471fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f10ae705d9a34608a922683282b952b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '908d59ff-8d75-4488-a358-b4621923e921', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a29a9f-f981-430b-8a8f-ff453e5ae1a6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f027d20e-665b-4bd0-836c-7e8edb2b6bf7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:25.835 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f027d20e-665b-4bd0-836c-7e8edb2b6bf7 in datapath 15e6e95c-6cd2-4631-98f6-9ed276458c39 unbound from our chassis
Oct 14 09:12:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:25.836 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 15e6e95c-6cd2-4631-98f6-9ed276458c39, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.845 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:25 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000050.scope: Deactivated successfully.
Oct 14 09:12:25 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000050.scope: Consumed 15.581s CPU time.
Oct 14 09:12:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:25.837 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1e3ba941-96fe-41ed-aecd-80fcf8900198]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:25.849 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39 namespace which is not needed anymore
Oct 14 09:12:25 compute-0 systemd-machined[214636]: Machine qemu-100-instance-00000050 terminated.
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.900 2 DEBUG nova.network.neutron [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:12:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 306 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 280 op/s
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.941 2 DEBUG nova.network.neutron [req-b64e8aa8-cd40-40ed-9dec-eae8af35a2f1 req-bb893528-5658-40b2-8cc8-aab02ca5b407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Updated VIF entry in instance network info cache for port b0cc5216-2023-4add-97a9-4bafe30fd8c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.942 2 DEBUG nova.network.neutron [req-b64e8aa8-cd40-40ed-9dec-eae8af35a2f1 req-bb893528-5658-40b2-8cc8-aab02ca5b407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Updating instance_info_cache with network_info: [{"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.952 2 INFO nova.virt.libvirt.driver [-] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Instance destroyed successfully.
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.953 2 DEBUG nova.objects.instance [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'resources' on Instance uuid 70e3c250-cd38-4718-9a7f-0fbf7bf471fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.964 2 DEBUG oslo_concurrency.lockutils [req-b64e8aa8-cd40-40ed-9dec-eae8af35a2f1 req-bb893528-5658-40b2-8cc8-aab02ca5b407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-16c1b8b8-cda9-45f9-994f-3f102eb85e1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.966 2 DEBUG nova.virt.libvirt.vif [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:11:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1354708936',display_name='tempest-ServerRescueNegativeTestJSON-server-1354708936',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1354708936',id=80,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:11:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f10ae705d9a34608a922683282b952b5',ramdisk_id='',reservation_id='r-vlzoqsi0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1031174086',owner_user_name='tempest-ServerRescueNegativeTestJSON-1031174086-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:12:17Z,user_data=None,user_id='aa1425f7fdfc4218bdabfe2458cd1c60',uuid=70e3c250-cd38-4718-9a7f-0fbf7bf471fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "address": "fa:16:3e:1a:5e:c8", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf027d20e-66", "ovs_interfaceid": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.966 2 DEBUG nova.network.os_vif_util [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converting VIF {"id": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "address": "fa:16:3e:1a:5e:c8", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf027d20e-66", "ovs_interfaceid": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.967 2 DEBUG nova.network.os_vif_util [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:5e:c8,bridge_name='br-int',has_traffic_filtering=True,id=f027d20e-665b-4bd0-836c-7e8edb2b6bf7,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf027d20e-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.967 2 DEBUG os_vif [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:5e:c8,bridge_name='br-int',has_traffic_filtering=True,id=f027d20e-665b-4bd0-836c-7e8edb2b6bf7,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf027d20e-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.970 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf027d20e-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:25 compute-0 nova_compute[259627]: 2025-10-14 09:12:25.985 2 INFO os_vif [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:5e:c8,bridge_name='br-int',has_traffic_filtering=True,id=f027d20e-665b-4bd0-836c-7e8edb2b6bf7,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf027d20e-66')
Oct 14 09:12:26 compute-0 neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39[343540]: [NOTICE]   (343564) : haproxy version is 2.8.14-c23fe91
Oct 14 09:12:26 compute-0 neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39[343540]: [NOTICE]   (343564) : path to executable is /usr/sbin/haproxy
Oct 14 09:12:26 compute-0 neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39[343540]: [WARNING]  (343564) : Exiting Master process...
Oct 14 09:12:26 compute-0 neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39[343540]: [WARNING]  (343564) : Exiting Master process...
Oct 14 09:12:26 compute-0 neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39[343540]: [ALERT]    (343564) : Current worker (343566) exited with code 143 (Terminated)
Oct 14 09:12:26 compute-0 neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39[343540]: [WARNING]  (343564) : All workers exited. Exiting... (0)
Oct 14 09:12:26 compute-0 systemd[1]: libpod-73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7.scope: Deactivated successfully.
Oct 14 09:12:26 compute-0 podman[347894]: 2025-10-14 09:12:26.036890786 +0000 UTC m=+0.067295508 container died 73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.058 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.058 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7-userdata-shm.mount: Deactivated successfully.
Oct 14 09:12:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-52a5ab24f7466258d49f499f0da82e5d8b268704cbd206b0609a9e9ddb16bcab-merged.mount: Deactivated successfully.
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.073 2 DEBUG nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:12:26 compute-0 podman[347894]: 2025-10-14 09:12:26.083894374 +0000 UTC m=+0.114299096 container cleanup 73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 09:12:26 compute-0 systemd[1]: libpod-conmon-73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7.scope: Deactivated successfully.
Oct 14 09:12:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684799037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.131 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.132 2 DEBUG nova.virt.libvirt.vif [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1135349362',display_name='tempest-tempest.common.compute-instance-1135349362-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1135349362-2',id=88,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-zo7br5c0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCreateTestJSON-2115206001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:21Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=ab89bfba-67b2-4767-90f2-7ef5dab476c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.132 2 DEBUG nova.network.os_vif_util [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.133 2 DEBUG nova.network.os_vif_util [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:ea:3e,bridge_name='br-int',has_traffic_filtering=True,id=1037d287-c167-4691-9393-55be86ecbab2,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1037d287-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.134 2 DEBUG nova.objects.instance [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'pci_devices' on Instance uuid ab89bfba-67b2-4767-90f2-7ef5dab476c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.148 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:12:26 compute-0 nova_compute[259627]:   <uuid>ab89bfba-67b2-4767-90f2-7ef5dab476c0</uuid>
Oct 14 09:12:26 compute-0 nova_compute[259627]:   <name>instance-00000058</name>
Oct 14 09:12:26 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:12:26 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:12:26 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <nova:name>tempest-tempest.common.compute-instance-1135349362-2</nova:name>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:12:25</nova:creationTime>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:12:26 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:12:26 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:12:26 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:12:26 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:12:26 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:12:26 compute-0 nova_compute[259627]:         <nova:user uuid="f50b95774c384c5a8414b197ed5d7b82">tempest-MultipleCreateTestJSON-2115206001-project-member</nova:user>
Oct 14 09:12:26 compute-0 nova_compute[259627]:         <nova:project uuid="5a057db932754d6eae91f0d2f359f1ff">tempest-MultipleCreateTestJSON-2115206001</nova:project>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:12:26 compute-0 nova_compute[259627]:         <nova:port uuid="1037d287-c167-4691-9393-55be86ecbab2">
Oct 14 09:12:26 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:12:26 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:12:26 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <system>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <entry name="serial">ab89bfba-67b2-4767-90f2-7ef5dab476c0</entry>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <entry name="uuid">ab89bfba-67b2-4767-90f2-7ef5dab476c0</entry>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     </system>
Oct 14 09:12:26 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:12:26 compute-0 nova_compute[259627]:   <os>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:   </os>
Oct 14 09:12:26 compute-0 nova_compute[259627]:   <features>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:   </features>
Oct 14 09:12:26 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:12:26 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:12:26 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk">
Oct 14 09:12:26 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:26 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk.config">
Oct 14 09:12:26 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:26 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:08:ea:3e"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <target dev="tap1037d287-c1"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0/console.log" append="off"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <video>
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     </video>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:12:26 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:12:26 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:12:26 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:12:26 compute-0 nova_compute[259627]: </domain>
Oct 14 09:12:26 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.148 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Preparing to wait for external event network-vif-plugged-1037d287-c167-4691-9393-55be86ecbab2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.148 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.149 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.149 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.150 2 DEBUG nova.virt.libvirt.vif [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1135349362',display_name='tempest-tempest.common.compute-instance-1135349362-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1135349362-2',id=88,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-zo7br5c0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCreateTestJSON-2115206001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:21Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=ab89bfba-67b2-4767-90f2-7ef5dab476c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.150 2 DEBUG nova.network.os_vif_util [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.151 2 DEBUG nova.network.os_vif_util [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:ea:3e,bridge_name='br-int',has_traffic_filtering=True,id=1037d287-c167-4691-9393-55be86ecbab2,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1037d287-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.152 2 DEBUG os_vif [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:ea:3e,bridge_name='br-int',has_traffic_filtering=True,id=1037d287-c167-4691-9393-55be86ecbab2,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1037d287-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.153 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.153 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.158 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.158 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:26 compute-0 podman[347939]: 2025-10-14 09:12:26.159630359 +0000 UTC m=+0.052159475 container remove 73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.160 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1037d287-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.160 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1037d287-c1, col_values=(('external_ids', {'iface-id': '1037d287-c167-4691-9393-55be86ecbab2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:ea:3e', 'vm-uuid': 'ab89bfba-67b2-4767-90f2-7ef5dab476c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:26 compute-0 NetworkManager[44885]: <info>  [1760433146.1629] manager: (tap1037d287-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.171 2 INFO os_vif [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:ea:3e,bridge_name='br-int',has_traffic_filtering=True,id=1037d287-c167-4691-9393-55be86ecbab2,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1037d287-c1')
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.175 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5912bbef-d71f-4d6b-b87a-a58c46428da5]: (4, ('Tue Oct 14 09:12:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39 (73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7)\n73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7\nTue Oct 14 09:12:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39 (73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7)\n73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.176 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[144e75cf-df58-415e-bf40-0e31ddec41f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.177 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15e6e95c-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:26 compute-0 kernel: tap15e6e95c-60: left promiscuous mode
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.187 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.188 2 INFO nova.compute.claims [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.202 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ba906da6-c73f-4ff3-9576-1aea58c8c6f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.223 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5685ed45-13da-46de-a8b4-aa2b52f846bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.224 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eb30d3f3-bc49-4ad1-9fa8-074b51c05ebd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.239 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.240 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.240 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No VIF found with MAC fa:16:3e:08:ea:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.240 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Using config drive
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.242 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1b41c2-7c2f-4213-b53a-3dc18a5bad03]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686905, 'reachable_time': 22784, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347967, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d15e6e95c\x2d6cd2\x2d4631\x2d98f6\x2d9ed276458c39.mount: Deactivated successfully.
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.247 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.248 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[5d588f75-c96e-42c9-9c65-3b08bbba7ab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.277 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.286 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Creating config drive at /var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e/disk.config
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.296 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpatg9khtx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.399 2 INFO nova.virt.libvirt.driver [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Deleting instance files /var/lib/nova/instances/70e3c250-cd38-4718-9a7f-0fbf7bf471fe_del
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.400 2 INFO nova.virt.libvirt.driver [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Deletion of /var/lib/nova/instances/70e3c250-cd38-4718-9a7f-0fbf7bf471fe_del complete
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.446 2 INFO nova.compute.manager [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Took 0.76 seconds to destroy the instance on the hypervisor.
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.447 2 DEBUG oslo.service.loopingcall [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.449 2 DEBUG nova.compute.manager [-] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.449 2 DEBUG nova.network.neutron [-] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.450 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpatg9khtx" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.472 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.476 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e/disk.config 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.542 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:26 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/749511871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:26 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2684799037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.670 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e/disk.config 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.671 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Deleting local config drive /var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e/disk.config because it was imported into RBD.
Oct 14 09:12:26 compute-0 kernel: tapb0cc5216-20: entered promiscuous mode
Oct 14 09:12:26 compute-0 systemd-udevd[347860]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:26 compute-0 ovn_controller[152662]: 2025-10-14T09:12:26Z|00906|binding|INFO|Claiming lport b0cc5216-2023-4add-97a9-4bafe30fd8c3 for this chassis.
Oct 14 09:12:26 compute-0 ovn_controller[152662]: 2025-10-14T09:12:26Z|00907|binding|INFO|b0cc5216-2023-4add-97a9-4bafe30fd8c3: Claiming fa:16:3e:52:4e:ee 10.100.0.14
Oct 14 09:12:26 compute-0 NetworkManager[44885]: <info>  [1760433146.7531] manager: (tapb0cc5216-20): new Tun device (/org/freedesktop/NetworkManager/Devices/371)
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.765 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Creating config drive at /var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0/disk.config
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.769 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_d4jcbsn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.768 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:4e:ee 10.100.0.14'], port_security=['fa:16:3e:52:4e:ee 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '16c1b8b8-cda9-45f9-994f-3f102eb85e1e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0506bb08-7957-44ca-9a0f-014c548c7b40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a057db932754d6eae91f0d2f359f1ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b9d7c47-8a9b-4622-8756-36a8a6e40174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc552e0c-ff32-4755-9304-f9703ae8cc71, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b0cc5216-2023-4add-97a9-4bafe30fd8c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.770 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b0cc5216-2023-4add-97a9-4bafe30fd8c3 in datapath 0506bb08-7957-44ca-9a0f-014c548c7b40 bound to our chassis
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.771 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0506bb08-7957-44ca-9a0f-014c548c7b40
Oct 14 09:12:26 compute-0 NetworkManager[44885]: <info>  [1760433146.7792] device (tapb0cc5216-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:12:26 compute-0 NetworkManager[44885]: <info>  [1760433146.7823] device (tapb0cc5216-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.800 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[67438e9a-6338-4066-a34e-7c5bb6b68b8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.801 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0506bb08-71 in ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.803 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0506bb08-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.804 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a6243c-63d3-4cf2-93f5-5bce5f535e72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.804 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7c567fb6-d519-4023-b31c-1ee2ec2dfca7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:26 compute-0 systemd-machined[214636]: New machine qemu-108-instance-00000057.
Oct 14 09:12:26 compute-0 systemd[1]: Started Virtual Machine qemu-108-instance-00000057.
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.821 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[a11174d4-9edf-4a04-9290-efedb87f0a45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.854 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[52f0e096-4aeb-49b8-ba32-3f4df3f7b152]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:26 compute-0 ovn_controller[152662]: 2025-10-14T09:12:26Z|00908|binding|INFO|Setting lport b0cc5216-2023-4add-97a9-4bafe30fd8c3 ovn-installed in OVS
Oct 14 09:12:26 compute-0 ovn_controller[152662]: 2025-10-14T09:12:26Z|00909|binding|INFO|Setting lport b0cc5216-2023-4add-97a9-4bafe30fd8c3 up in Southbound
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.893 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5570dd81-c1f3-4ab5-8cc7-11cb76d39f17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.911 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7e67d0-7f60-4361-bc1b-8c8861acffb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:26 compute-0 NetworkManager[44885]: <info>  [1760433146.9129] manager: (tap0506bb08-70): new Veth device (/org/freedesktop/NetworkManager/Devices/372)
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.920 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_d4jcbsn" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.957 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d5cf87-0cff-4317-85bd-698133206b0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.961 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[77890ec3-62df-4478-a7c2-c187f0f974f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.982 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:12:26 compute-0 NetworkManager[44885]: <info>  [1760433146.9933] device (tap0506bb08-70): carrier: link connected
Oct 14 09:12:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3286221075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:26 compute-0 nova_compute[259627]: 2025-10-14 09:12:26.995 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0/disk.config ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.999 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0e25fa4c-cef2-466a-90cc-de53bb675581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.035 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[63bea90d-a09a-45a3-90b2-f544a281de85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0506bb08-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c3:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692575, 'reachable_time': 26707, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348118, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.042 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.052 2 DEBUG nova.compute.provider_tree [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.056 2 DEBUG nova.network.neutron [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Updating instance_info_cache with network_info: [{"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.061 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8af3493e-df76-4ad2-92b7-fbcb76c68b17]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:c30c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 692575, 'tstamp': 692575}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348120, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.063 2 DEBUG nova.network.neutron [req-9a426975-42fd-4497-a4fc-4b853b057ebc req-1df1fe81-05bd-4ba2-ad17-1885ca1372c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Updated VIF entry in instance network info cache for port 1037d287-c167-4691-9393-55be86ecbab2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.063 2 DEBUG nova.network.neutron [req-9a426975-42fd-4497-a4fc-4b853b057ebc req-1df1fe81-05bd-4ba2-ad17-1885ca1372c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Updating instance_info_cache with network_info: [{"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.090 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eef4573c-342c-47cd-af0f-29a0d6f0de28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0506bb08-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c3:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692575, 'reachable_time': 26707, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348121, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.096 2 DEBUG nova.scheduler.client.report [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.100 2 DEBUG oslo_concurrency.lockutils [req-9a426975-42fd-4497-a4fc-4b853b057ebc req-1df1fe81-05bd-4ba2-ad17-1885ca1372c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-ab89bfba-67b2-4767-90f2-7ef5dab476c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.101 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Releasing lock "refresh_cache-4774788b-1dc2-40c6-87d0-db4e3f54a609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.101 2 DEBUG nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Instance network_info: |[{"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.102 2 DEBUG oslo_concurrency.lockutils [req-ac608442-b94d-4664-aeff-b75fbff5c4c7 req-5e6c74a1-66d1-4db8-b791-c4b89c2d26d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4774788b-1dc2-40c6-87d0-db4e3f54a609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.102 2 DEBUG nova.network.neutron [req-ac608442-b94d-4664-aeff-b75fbff5c4c7 req-5e6c74a1-66d1-4db8-b791-c4b89c2d26d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Refreshing network info cache for port 26bcc700-59c6-4e79-904d-988cd11152c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.105 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Start _get_guest_xml network_info=[{"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.124 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.125 2 DEBUG nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.127 2 WARNING nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.136 2 DEBUG nova.virt.libvirt.host [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.136 2 DEBUG nova.virt.libvirt.host [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.139 2 DEBUG nova.virt.libvirt.host [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.140 2 DEBUG nova.virt.libvirt.host [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.140 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.141 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.141 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.142 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.142 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.142 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.142 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.143 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.143 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.143 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.143 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.144 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.147 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.148 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[68ec43b4-958d-44cc-9974-a201c33b01f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.208 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0/disk.config ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.209 2 DEBUG nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.210 2 DEBUG nova.network.neutron [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.213 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Deleting local config drive /var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0/disk.config because it was imported into RBD.
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.229 2 INFO nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.242 2 DEBUG nova.network.neutron [-] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.241 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[246229e7-6d07-4117-97de-5eaf9fdacf8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.243 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0506bb08-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.243 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.243 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0506bb08-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:27 compute-0 NetworkManager[44885]: <info>  [1760433147.2464] manager: (tap0506bb08-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Oct 14 09:12:27 compute-0 kernel: tap0506bb08-70: entered promiscuous mode
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.254 2 DEBUG nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.254 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0506bb08-70, col_values=(('external_ids', {'iface-id': '6cfe11a6-55c2-4d2e-880b-8832ad317040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:27 compute-0 ovn_controller[152662]: 2025-10-14T09:12:27Z|00910|binding|INFO|Releasing lport 6cfe11a6-55c2-4d2e-880b-8832ad317040 from this chassis (sb_readonly=0)
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.264 2 INFO nova.compute.manager [-] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Took 0.81 seconds to deallocate network for instance.
Oct 14 09:12:27 compute-0 NetworkManager[44885]: <info>  [1760433147.2681] manager: (tap1037d287-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/374)
Oct 14 09:12:27 compute-0 systemd-udevd[348091]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.281 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0506bb08-7957-44ca-9a0f-014c548c7b40.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0506bb08-7957-44ca-9a0f-014c548c7b40.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:27 compute-0 NetworkManager[44885]: <info>  [1760433147.2827] device (tap1037d287-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:12:27 compute-0 kernel: tap1037d287-c1: entered promiscuous mode
Oct 14 09:12:27 compute-0 NetworkManager[44885]: <info>  [1760433147.2840] device (tap1037d287-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.282 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[534869fc-de70-49a2-be92-91da630b2b77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.283 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-0506bb08-7957-44ca-9a0f-014c548c7b40
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/0506bb08-7957-44ca-9a0f-014c548c7b40.pid.haproxy
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 0506bb08-7957-44ca-9a0f-014c548c7b40
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.283 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'env', 'PROCESS_TAG=haproxy-0506bb08-7957-44ca-9a0f-014c548c7b40', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0506bb08-7957-44ca-9a0f-014c548c7b40.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:12:27 compute-0 ovn_controller[152662]: 2025-10-14T09:12:27Z|00911|binding|INFO|Claiming lport 1037d287-c167-4691-9393-55be86ecbab2 for this chassis.
Oct 14 09:12:27 compute-0 ovn_controller[152662]: 2025-10-14T09:12:27Z|00912|binding|INFO|1037d287-c167-4691-9393-55be86ecbab2: Claiming fa:16:3e:08:ea:3e 10.100.0.5
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.296 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:ea:3e 10.100.0.5'], port_security=['fa:16:3e:08:ea:3e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ab89bfba-67b2-4767-90f2-7ef5dab476c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0506bb08-7957-44ca-9a0f-014c548c7b40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a057db932754d6eae91f0d2f359f1ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b9d7c47-8a9b-4622-8756-36a8a6e40174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc552e0c-ff32-4755-9304-f9703ae8cc71, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1037d287-c167-4691-9393-55be86ecbab2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:27 compute-0 ovn_controller[152662]: 2025-10-14T09:12:27Z|00913|binding|INFO|Setting lport 1037d287-c167-4691-9393-55be86ecbab2 ovn-installed in OVS
Oct 14 09:12:27 compute-0 ovn_controller[152662]: 2025-10-14T09:12:27Z|00914|binding|INFO|Setting lport 1037d287-c167-4691-9393-55be86ecbab2 up in Southbound
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:27 compute-0 systemd-machined[214636]: New machine qemu-109-instance-00000058.
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.316 2 DEBUG oslo_concurrency.lockutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.316 2 DEBUG oslo_concurrency.lockutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:27 compute-0 systemd[1]: Started Virtual Machine qemu-109-instance-00000058.
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.364 2 DEBUG nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.365 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.366 2 INFO nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Creating image(s)
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.398 2 DEBUG nova.storage.rbd_utils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 9e354e27-d674-43c3-890b-caf8731cb827_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.421 2 DEBUG nova.storage.rbd_utils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 9e354e27-d674-43c3-890b-caf8731cb827_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.445 2 DEBUG nova.storage.rbd_utils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 9e354e27-d674-43c3-890b-caf8731cb827_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.448 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.497 2 DEBUG nova.policy [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e992bcb79c4946a8985e3df25eb216ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d24993a343a425dbddac7e32be0c86b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.503 2 DEBUG nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Received event network-vif-unplugged-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.504 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.504 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.504 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.504 2 DEBUG nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] No waiting events found dispatching network-vif-unplugged-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.505 2 WARNING nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Received unexpected event network-vif-unplugged-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 for instance with vm_state deleted and task_state None.
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.505 2 DEBUG nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Received event network-vif-plugged-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.505 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.506 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.506 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.506 2 DEBUG nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] No waiting events found dispatching network-vif-plugged-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.506 2 WARNING nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Received unexpected event network-vif-plugged-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 for instance with vm_state deleted and task_state None.
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.507 2 DEBUG nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Received event network-vif-plugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.507 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.507 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.507 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.508 2 DEBUG nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Processing event network-vif-plugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.508 2 DEBUG nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Received event network-vif-plugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.508 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.508 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.509 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.509 2 DEBUG nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] No waiting events found dispatching network-vif-plugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.509 2 WARNING nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Received unexpected event network-vif-plugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 for instance with vm_state building and task_state spawning.
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.545 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.546 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.546 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.547 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.577 2 DEBUG nova.storage.rbd_utils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 9e354e27-d674-43c3-890b-caf8731cb827_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.587 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9e354e27-d674-43c3-890b-caf8731cb827_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:27 compute-0 ceph-mon[74249]: pgmap v1739: 305 pgs: 305 active+clean; 306 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 280 op/s
Oct 14 09:12:27 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3286221075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:27 compute-0 podman[348308]: 2025-10-14 09:12:27.65593395 +0000 UTC m=+0.065330820 container create 7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 09:12:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:27 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1320841696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:27 compute-0 systemd[1]: Started libpod-conmon-7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4.scope.
Oct 14 09:12:27 compute-0 podman[348308]: 2025-10-14 09:12:27.625785637 +0000 UTC m=+0.035182557 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.720 2 DEBUG oslo_concurrency.processutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:27 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31049def98d964650864fccfae84a5b472505ff9f0ff1c2e6df68f12f050e09d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:12:27 compute-0 podman[348308]: 2025-10-14 09:12:27.750177571 +0000 UTC m=+0.159574491 container init 7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 09:12:27 compute-0 podman[348308]: 2025-10-14 09:12:27.75541222 +0000 UTC m=+0.164809110 container start 7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.771 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:27 compute-0 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[348398]: [NOTICE]   (348411) : New worker (348420) forked
Oct 14 09:12:27 compute-0 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[348398]: [NOTICE]   (348411) : Loading success.
Oct 14 09:12:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.812 2 DEBUG nova.storage.rbd_utils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.826 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.830 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1037d287-c167-4691-9393-55be86ecbab2 in datapath 0506bb08-7957-44ca-9a0f-014c548c7b40 unbound from our chassis
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.833 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0506bb08-7957-44ca-9a0f-014c548c7b40
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.864 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd386ea-fa5c-4934-b4fe-74d941f4bd01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.894 2 DEBUG nova.compute.manager [req-3a256bfd-c96d-418e-a6b9-1dfdfe710423 req-b1becbdd-1b49-430d-bd66-827c835c8745 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Received event network-vif-deleted-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 306 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 199 op/s
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.916 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ebdad488-057b-4943-b6eb-3f45b3c8c325]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.921 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[674c1197-5fcd-45b6-8b0f-fec3c4fde67d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.952 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a3467f58-e333-4923-88eb-893ab423ca96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:27 compute-0 nova_compute[259627]: 2025-10-14 09:12:27.956 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9e354e27-d674-43c3-890b-caf8731cb827_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.978 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f037273c-5073-4c8b-817b-01548f526ebf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0506bb08-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c3:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692575, 'reachable_time': 26707, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348467, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.992 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d96ac93e-e0b9-41cc-a807-8162d0e3b251]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0506bb08-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 692594, 'tstamp': 692594}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348483, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0506bb08-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 692598, 'tstamp': 692598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348483, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.993 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0506bb08-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.996 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0506bb08-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.996 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.997 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0506bb08-70, col_values=(('external_ids', {'iface-id': '6cfe11a6-55c2-4d2e-880b-8832ad317040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.997 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.046 2 DEBUG nova.storage.rbd_utils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] resizing rbd image 9e354e27-d674-43c3-890b-caf8731cb827_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.146 2 DEBUG nova.objects.instance [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'migration_context' on Instance uuid 9e354e27-d674-43c3-890b-caf8731cb827 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.163 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.164 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Ensure instance console log exists: /var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.164 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.164 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.165 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.237 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433148.2373714, 16c1b8b8-cda9-45f9-994f-3f102eb85e1e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.238 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] VM Started (Lifecycle Event)
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.241 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.251 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.259 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.264 2 INFO nova.virt.libvirt.driver [-] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Instance spawned successfully.
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.264 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:12:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:12:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/755585408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.272 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.291 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.292 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433148.2374704, 16c1b8b8-cda9-45f9-994f-3f102eb85e1e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.292 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] VM Paused (Lifecycle Event)
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.295 2 DEBUG oslo_concurrency.processutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.299 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.299 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.300 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.301 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.302 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.303 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.313 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.324 2 DEBUG nova.compute.provider_tree [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.331 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433148.2509418, 16c1b8b8-cda9-45f9-994f-3f102eb85e1e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.332 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] VM Resumed (Lifecycle Event)
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.361 2 DEBUG nova.scheduler.client.report [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.366 2 DEBUG nova.network.neutron [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Successfully created port: ce4eb1a6-2221-4519-98fa-44a39da77b71 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.369 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.373 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:12:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1764184911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.390 2 INFO nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Took 8.21 seconds to spawn the instance on the hypervisor.
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.390 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.395 2 DEBUG oslo_concurrency.lockutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.399 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.399 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433148.3402908, ab89bfba-67b2-4767-90f2-7ef5dab476c0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.399 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] VM Started (Lifecycle Event)
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.405 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.407 2 DEBUG nova.virt.libvirt.vif [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1339714429',display_name='tempest-ServersNegativeTestJSON-server-1339714429',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1339714429',id=89,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-3mdh02k5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegativeTest
JSON-1475695514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:23Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=4774788b-1dc2-40c6-87d0-db4e3f54a609,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.407 2 DEBUG nova.network.os_vif_util [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.408 2 DEBUG nova.network.os_vif_util [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:94:f3,bridge_name='br-int',has_traffic_filtering=True,id=26bcc700-59c6-4e79-904d-988cd11152c8,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26bcc700-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.408 2 DEBUG nova.objects.instance [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4774788b-1dc2-40c6-87d0-db4e3f54a609 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.419 2 INFO nova.scheduler.client.report [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Deleted allocations for instance 70e3c250-cd38-4718-9a7f-0fbf7bf471fe
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.426 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.430 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433148.3403986, ab89bfba-67b2-4767-90f2-7ef5dab476c0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.430 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] VM Paused (Lifecycle Event)
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.433 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:12:28 compute-0 nova_compute[259627]:   <uuid>4774788b-1dc2-40c6-87d0-db4e3f54a609</uuid>
Oct 14 09:12:28 compute-0 nova_compute[259627]:   <name>instance-00000059</name>
Oct 14 09:12:28 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:12:28 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:12:28 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersNegativeTestJSON-server-1339714429</nova:name>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:12:27</nova:creationTime>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:12:28 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:12:28 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:12:28 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:12:28 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:12:28 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:12:28 compute-0 nova_compute[259627]:         <nova:user uuid="92e59e145f6942b78d0ffbebc4d89e76">tempest-ServersNegativeTestJSON-1475695514-project-member</nova:user>
Oct 14 09:12:28 compute-0 nova_compute[259627]:         <nova:project uuid="517aafb84156407c8672042097e3ef4f">tempest-ServersNegativeTestJSON-1475695514</nova:project>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:12:28 compute-0 nova_compute[259627]:         <nova:port uuid="26bcc700-59c6-4e79-904d-988cd11152c8">
Oct 14 09:12:28 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:12:28 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:12:28 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <system>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <entry name="serial">4774788b-1dc2-40c6-87d0-db4e3f54a609</entry>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <entry name="uuid">4774788b-1dc2-40c6-87d0-db4e3f54a609</entry>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     </system>
Oct 14 09:12:28 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:12:28 compute-0 nova_compute[259627]:   <os>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:   </os>
Oct 14 09:12:28 compute-0 nova_compute[259627]:   <features>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:   </features>
Oct 14 09:12:28 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:12:28 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:12:28 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/4774788b-1dc2-40c6-87d0-db4e3f54a609_disk">
Oct 14 09:12:28 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:28 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/4774788b-1dc2-40c6-87d0-db4e3f54a609_disk.config">
Oct 14 09:12:28 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:28 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:46:94:f3"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <target dev="tap26bcc700-59"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609/console.log" append="off"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <video>
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     </video>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:12:28 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:12:28 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:12:28 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:12:28 compute-0 nova_compute[259627]: </domain>
Oct 14 09:12:28 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.434 2 DEBUG nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Preparing to wait for external event network-vif-plugged-26bcc700-59c6-4e79-904d-988cd11152c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.435 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.435 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.435 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.436 2 DEBUG nova.virt.libvirt.vif [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1339714429',display_name='tempest-ServersNegativeTestJSON-server-1339714429',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1339714429',id=89,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-3mdh02k5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegativeTestJSON-1475695514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:23Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=4774788b-1dc2-40c6-87d0-db4e3f54a609,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.436 2 DEBUG nova.network.os_vif_util [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.437 2 DEBUG nova.network.os_vif_util [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:94:f3,bridge_name='br-int',has_traffic_filtering=True,id=26bcc700-59c6-4e79-904d-988cd11152c8,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26bcc700-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.437 2 DEBUG os_vif [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:94:f3,bridge_name='br-int',has_traffic_filtering=True,id=26bcc700-59c6-4e79-904d-988cd11152c8,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26bcc700-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.438 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.441 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26bcc700-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.442 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap26bcc700-59, col_values=(('external_ids', {'iface-id': '26bcc700-59c6-4e79-904d-988cd11152c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:94:f3', 'vm-uuid': '4774788b-1dc2-40c6-87d0-db4e3f54a609'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:28 compute-0 NetworkManager[44885]: <info>  [1760433148.4758] manager: (tap26bcc700-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.481 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.487 2 INFO os_vif [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:94:f3,bridge_name='br-int',has_traffic_filtering=True,id=26bcc700-59c6-4e79-904d-988cd11152c8,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26bcc700-59')
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.489 2 INFO nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Took 9.37 seconds to build instance.
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.499 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.519 2 DEBUG nova.network.neutron [req-ac608442-b94d-4664-aeff-b75fbff5c4c7 req-5e6c74a1-66d1-4db8-b791-c4b89c2d26d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Updated VIF entry in instance network info cache for port 26bcc700-59c6-4e79-904d-988cd11152c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.520 2 DEBUG nova.network.neutron [req-ac608442-b94d-4664-aeff-b75fbff5c4c7 req-5e6c74a1-66d1-4db8-b791-c4b89c2d26d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Updating instance_info_cache with network_info: [{"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.523 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.523 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.535 2 DEBUG oslo_concurrency.lockutils [req-ac608442-b94d-4664-aeff-b75fbff5c4c7 req-5e6c74a1-66d1-4db8-b791-c4b89c2d26d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4774788b-1dc2-40c6-87d0-db4e3f54a609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.540 2 DEBUG oslo_concurrency.lockutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.552 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.552 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.552 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No VIF found with MAC fa:16:3e:46:94:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.553 2 INFO nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Using config drive
Oct 14 09:12:28 compute-0 podman[348568]: 2025-10-14 09:12:28.581469743 +0000 UTC m=+0.062784617 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.599 2 DEBUG nova.storage.rbd_utils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:28 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1320841696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:28 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/755585408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:28 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1764184911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:28 compute-0 podman[348566]: 2025-10-14 09:12:28.627389444 +0000 UTC m=+0.101245604 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:12:28 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:28.999 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.000 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.000 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.001 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.001 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.048 2 INFO nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Creating config drive at /var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609/disk.config
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.053 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps3bg1nyf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.139 2 DEBUG nova.network.neutron [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Successfully updated port: ce4eb1a6-2221-4519-98fa-44a39da77b71 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.159 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.159 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquired lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.160 2 DEBUG nova.network.neutron [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.218 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps3bg1nyf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.264 2 DEBUG nova.storage.rbd_utils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.270 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609/disk.config 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.317 2 DEBUG nova.network.neutron [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.564 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609/disk.config 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.565 2 INFO nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Deleting local config drive /var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609/disk.config because it was imported into RBD.
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.591 2 DEBUG nova.compute.manager [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Received event network-vif-plugged-1037d287-c167-4691-9393-55be86ecbab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.592 2 DEBUG oslo_concurrency.lockutils [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.592 2 DEBUG oslo_concurrency.lockutils [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.592 2 DEBUG oslo_concurrency.lockutils [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:29 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2369680037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.593 2 DEBUG nova.compute.manager [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Processing event network-vif-plugged-1037d287-c167-4691-9393-55be86ecbab2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.594 2 DEBUG nova.compute.manager [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Received event network-vif-plugged-1037d287-c167-4691-9393-55be86ecbab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.594 2 DEBUG oslo_concurrency.lockutils [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.594 2 DEBUG oslo_concurrency.lockutils [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.595 2 DEBUG oslo_concurrency.lockutils [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.595 2 DEBUG nova.compute.manager [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] No waiting events found dispatching network-vif-plugged-1037d287-c167-4691-9393-55be86ecbab2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.595 2 WARNING nova.compute.manager [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Received unexpected event network-vif-plugged-1037d287-c167-4691-9393-55be86ecbab2 for instance with vm_state building and task_state spawning.
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.596 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.610 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433149.6104703, ab89bfba-67b2-4767-90f2-7ef5dab476c0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.611 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] VM Resumed (Lifecycle Event)
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.613 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.621 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:29 compute-0 ceph-mon[74249]: pgmap v1740: 305 pgs: 305 active+clean; 306 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 199 op/s
Oct 14 09:12:29 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2369680037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.633 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.643 2 INFO nova.virt.libvirt.driver [-] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Instance spawned successfully.
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.644 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.646 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:12:29 compute-0 NetworkManager[44885]: <info>  [1760433149.6573] manager: (tap26bcc700-59): new Tun device (/org/freedesktop/NetworkManager/Devices/376)
Oct 14 09:12:29 compute-0 kernel: tap26bcc700-59: entered promiscuous mode
Oct 14 09:12:29 compute-0 ovn_controller[152662]: 2025-10-14T09:12:29Z|00915|binding|INFO|Claiming lport 26bcc700-59c6-4e79-904d-988cd11152c8 for this chassis.
Oct 14 09:12:29 compute-0 ovn_controller[152662]: 2025-10-14T09:12:29Z|00916|binding|INFO|26bcc700-59c6-4e79-904d-988cd11152c8: Claiming fa:16:3e:46:94:f3 10.100.0.13
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.670 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:94:f3 10.100.0.13'], port_security=['fa:16:3e:46:94:f3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '4774788b-1dc2-40c6-87d0-db4e3f54a609', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '517aafb84156407c8672042097e3ef4f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '572acc55-453a-444a-ab8d-a15e14283f7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927296e1-b389-4596-b9be-8cf735b93ca2, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=26bcc700-59c6-4e79-904d-988cd11152c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.672 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 26bcc700-59c6-4e79-904d-988cd11152c8 in datapath a49b41b4-2559-4a22-a274-a6c7bbe75f2c bound to our chassis
Oct 14 09:12:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.675 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a49b41b4-2559-4a22-a274-a6c7bbe75f2c
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.682 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:29 compute-0 ovn_controller[152662]: 2025-10-14T09:12:29Z|00917|binding|INFO|Setting lport 26bcc700-59c6-4e79-904d-988cd11152c8 ovn-installed in OVS
Oct 14 09:12:29 compute-0 ovn_controller[152662]: 2025-10-14T09:12:29Z|00918|binding|INFO|Setting lport 26bcc700-59c6-4e79-904d-988cd11152c8 up in Southbound
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.698 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.698 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.699 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.699 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.700 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.700 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:29 compute-0 systemd-udevd[348703]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:12:29 compute-0 systemd-machined[214636]: New machine qemu-110-instance-00000059.
Oct 14 09:12:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f04d1d2f-42ce-4f36-968e-96ba05a17a02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:29 compute-0 NetworkManager[44885]: <info>  [1760433149.7218] device (tap26bcc700-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:12:29 compute-0 NetworkManager[44885]: <info>  [1760433149.7225] device (tap26bcc700-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:12:29 compute-0 systemd[1]: Started Virtual Machine qemu-110-instance-00000059.
Oct 14 09:12:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.750 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[57d282fb-dfaa-4bc1-ad96-732f72195b11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.754 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a941a3de-b190-4654-89c0-a682dc5602a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.785 2 INFO nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Took 8.67 seconds to spawn the instance on the hypervisor.
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.785 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.794 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[04af6222-75ed-4abb-a0c1-40b0392e1826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.831 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca12aa7-0715-4389-ab28-23512eb5b0a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa49b41b4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:5b:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 259], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691215, 'reachable_time': 16714, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348716, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.856 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d48790-0b46-4a6e-8285-65090308b0c8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa49b41b4-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691226, 'tstamp': 691226}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348717, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa49b41b4-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691229, 'tstamp': 691229}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348717, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.858 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49b41b4-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.865 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa49b41b4-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.865 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.866 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa49b41b4-20, col_values=(('external_ids', {'iface-id': '61fe5571-a8eb-446a-8c4c-1f6f6758b146'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.867 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.894 2 INFO nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Took 10.75 seconds to build instance.
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.899 2 DEBUG nova.compute.manager [req-e3287fff-2c2c-451b-9915-169f72028a6a req-b24b6db1-5df2-4384-8f48-eafe76a39094 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-changed-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.900 2 DEBUG nova.compute.manager [req-e3287fff-2c2c-451b-9915-169f72028a6a req-b24b6db1-5df2-4384-8f48-eafe76a39094 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Refreshing instance network info cache due to event network-changed-ce4eb1a6-2221-4519-98fa-44a39da77b71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.900 2 DEBUG oslo_concurrency.lockutils [req-e3287fff-2c2c-451b-9915-169f72028a6a req-b24b6db1-5df2-4384-8f48-eafe76a39094 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.904 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.904 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.912 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.913 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:12:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 306 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 199 op/s
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.916 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.918 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.918 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.922 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.922 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.929 2 DEBUG nova.network.neutron [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Updating instance_info_cache with network_info: [{"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.948 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Releasing lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.948 2 DEBUG nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Instance network_info: |[{"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.949 2 DEBUG oslo_concurrency.lockutils [req-e3287fff-2c2c-451b-9915-169f72028a6a req-b24b6db1-5df2-4384-8f48-eafe76a39094 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.949 2 DEBUG nova.network.neutron [req-e3287fff-2c2c-451b-9915-169f72028a6a req-b24b6db1-5df2-4384-8f48-eafe76a39094 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Refreshing network info cache for port ce4eb1a6-2221-4519-98fa-44a39da77b71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.951 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Start _get_guest_xml network_info=[{"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.954 2 WARNING nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.959 2 DEBUG nova.virt.libvirt.host [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.960 2 DEBUG nova.virt.libvirt.host [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.966 2 DEBUG nova.virt.libvirt.host [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.966 2 DEBUG nova.virt.libvirt.host [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.966 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.967 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.967 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.967 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.967 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.967 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.968 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.968 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.968 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.968 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.968 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.968 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:12:29 compute-0 nova_compute[259627]: 2025-10-14 09:12:29.970 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.258 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.259 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3563MB free_disk=59.85959243774414GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.259 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.259 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.372 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 2534f8b9-e832-4b78-ada4-e551429bdc75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.373 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 16c1b8b8-cda9-45f9-994f-3f102eb85e1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.373 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance ab89bfba-67b2-4767-90f2-7ef5dab476c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.373 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 4774788b-1dc2-40c6-87d0-db4e3f54a609 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.374 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 9e354e27-d674-43c3-890b-caf8731cb827 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.374 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.374 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:12:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:30 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1667893954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.495 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.534 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.573 2 DEBUG nova.storage.rbd_utils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 9e354e27-d674-43c3-890b-caf8731cb827_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.577 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.619 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433150.5444643, 4774788b-1dc2-40c6-87d0-db4e3f54a609 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.620 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] VM Started (Lifecycle Event)
Oct 14 09:12:30 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1667893954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.644 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.648 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433150.5445905, 4774788b-1dc2-40c6-87d0-db4e3f54a609 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.649 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] VM Paused (Lifecycle Event)
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.683 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.687 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.711 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:12:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:12:30 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/699491605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.941 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.945 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.958 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.979 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:12:30 compute-0 nova_compute[259627]: 2025-10-14 09:12:30.979 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:31 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1050899084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.022 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.023 2 DEBUG nova.virt.libvirt.vif [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-461250371',display_name='tempest-TestNetworkAdvancedServerOps-server-461250371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-461250371',id=90,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCcVmsUIwo920oPyHLmJGrkrYCQ5UunB9Yv0/Buc9cCiSWB2ZOXdvOp0s2cEsPfEAttRSx6VdIWgt0joL5sdVyP2CI3WgYA2zF+RirB/x5531ApwlIJzNgUQx7hgxyfijg==',key_name='tempest-TestNetworkAdvancedServerOps-806517333',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-dt6vqls1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:27Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=9e354e27-d674-43c3-890b-caf8731cb827,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.023 2 DEBUG nova.network.os_vif_util [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.024 2 DEBUG nova.network.os_vif_util [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9c:bd,bridge_name='br-int',has_traffic_filtering=True,id=ce4eb1a6-2221-4519-98fa-44a39da77b71,network=Network(7cb8e394-ebca-4b27-8174-62c6b6f3a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce4eb1a6-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.025 2 DEBUG nova.objects.instance [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_devices' on Instance uuid 9e354e27-d674-43c3-890b-caf8731cb827 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.042 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:12:31 compute-0 nova_compute[259627]:   <uuid>9e354e27-d674-43c3-890b-caf8731cb827</uuid>
Oct 14 09:12:31 compute-0 nova_compute[259627]:   <name>instance-0000005a</name>
Oct 14 09:12:31 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:12:31 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:12:31 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-461250371</nova:name>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:12:29</nova:creationTime>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:12:31 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:12:31 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:12:31 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:12:31 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:12:31 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:12:31 compute-0 nova_compute[259627]:         <nova:user uuid="e992bcb79c4946a8985e3df25eb216ca">tempest-TestNetworkAdvancedServerOps-94788416-project-member</nova:user>
Oct 14 09:12:31 compute-0 nova_compute[259627]:         <nova:project uuid="2d24993a343a425dbddac7e32be0c86b">tempest-TestNetworkAdvancedServerOps-94788416</nova:project>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:12:31 compute-0 nova_compute[259627]:         <nova:port uuid="ce4eb1a6-2221-4519-98fa-44a39da77b71">
Oct 14 09:12:31 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:12:31 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:12:31 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <system>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <entry name="serial">9e354e27-d674-43c3-890b-caf8731cb827</entry>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <entry name="uuid">9e354e27-d674-43c3-890b-caf8731cb827</entry>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     </system>
Oct 14 09:12:31 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:12:31 compute-0 nova_compute[259627]:   <os>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:   </os>
Oct 14 09:12:31 compute-0 nova_compute[259627]:   <features>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:   </features>
Oct 14 09:12:31 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:12:31 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:12:31 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/9e354e27-d674-43c3-890b-caf8731cb827_disk">
Oct 14 09:12:31 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:31 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/9e354e27-d674-43c3-890b-caf8731cb827_disk.config">
Oct 14 09:12:31 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:31 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:7f:9c:bd"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <target dev="tapce4eb1a6-22"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827/console.log" append="off"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <video>
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     </video>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:12:31 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:12:31 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:12:31 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:12:31 compute-0 nova_compute[259627]: </domain>
Oct 14 09:12:31 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.043 2 DEBUG nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Preparing to wait for external event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.043 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.044 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.044 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.045 2 DEBUG nova.virt.libvirt.vif [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-461250371',display_name='tempest-TestNetworkAdvancedServerOps-server-461250371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-461250371',id=90,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCcVmsUIwo920oPyHLmJGrkrYCQ5UunB9Yv0/Buc9cCiSWB2ZOXdvOp0s2cEsPfEAttRSx6VdIWgt0joL5sdVyP2CI3WgYA2zF+RirB/x5531ApwlIJzNgUQx7hgxyfijg==',key_name='tempest-TestNetworkAdvancedServerOps-806517333',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-dt6vqls1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:27Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=9e354e27-d674-43c3-890b-caf8731cb827,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.045 2 DEBUG nova.network.os_vif_util [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.045 2 DEBUG nova.network.os_vif_util [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9c:bd,bridge_name='br-int',has_traffic_filtering=True,id=ce4eb1a6-2221-4519-98fa-44a39da77b71,network=Network(7cb8e394-ebca-4b27-8174-62c6b6f3a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce4eb1a6-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.046 2 DEBUG os_vif [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9c:bd,bridge_name='br-int',has_traffic_filtering=True,id=ce4eb1a6-2221-4519-98fa-44a39da77b71,network=Network(7cb8e394-ebca-4b27-8174-62c6b6f3a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce4eb1a6-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.047 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.047 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce4eb1a6-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce4eb1a6-22, col_values=(('external_ids', {'iface-id': 'ce4eb1a6-2221-4519-98fa-44a39da77b71', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:9c:bd', 'vm-uuid': '9e354e27-d674-43c3-890b-caf8731cb827'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:31 compute-0 NetworkManager[44885]: <info>  [1760433151.1022] manager: (tapce4eb1a6-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.107 2 INFO os_vif [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9c:bd,bridge_name='br-int',has_traffic_filtering=True,id=ce4eb1a6-2221-4519-98fa-44a39da77b71,network=Network(7cb8e394-ebca-4b27-8174-62c6b6f3a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce4eb1a6-22')
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.173 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.173 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.174 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No VIF found with MAC fa:16:3e:7f:9c:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.175 2 INFO nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Using config drive
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.213 2 DEBUG nova.storage.rbd_utils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 9e354e27-d674-43c3-890b-caf8731cb827_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.239 2 DEBUG nova.network.neutron [req-e3287fff-2c2c-451b-9915-169f72028a6a req-b24b6db1-5df2-4384-8f48-eafe76a39094 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Updated VIF entry in instance network info cache for port ce4eb1a6-2221-4519-98fa-44a39da77b71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.240 2 DEBUG nova.network.neutron [req-e3287fff-2c2c-451b-9915-169f72028a6a req-b24b6db1-5df2-4384-8f48-eafe76a39094 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Updating instance_info_cache with network_info: [{"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.274 2 DEBUG oslo_concurrency.lockutils [req-e3287fff-2c2c-451b-9915-169f72028a6a req-b24b6db1-5df2-4384-8f48-eafe76a39094 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:31 compute-0 ovn_controller[152662]: 2025-10-14T09:12:31Z|00919|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 09:12:31 compute-0 ovn_controller[152662]: 2025-10-14T09:12:31Z|00920|binding|INFO|Releasing lport 6cfe11a6-55c2-4d2e-880b-8832ad317040 from this chassis (sb_readonly=0)
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.613 2 INFO nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Creating config drive at /var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827/disk.config
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.618 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ff1dqto execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:31 compute-0 ceph-mon[74249]: pgmap v1741: 305 pgs: 305 active+clean; 306 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 199 op/s
Oct 14 09:12:31 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/699491605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:31 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1050899084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:31 compute-0 ovn_controller[152662]: 2025-10-14T09:12:31Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:d7:f7 10.100.0.7
Oct 14 09:12:31 compute-0 ovn_controller[152662]: 2025-10-14T09:12:31Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:d7:f7 10.100.0.7
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.698 2 DEBUG nova.compute.manager [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Received event network-vif-plugged-26bcc700-59c6-4e79-904d-988cd11152c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.699 2 DEBUG oslo_concurrency.lockutils [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.699 2 DEBUG oslo_concurrency.lockutils [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.699 2 DEBUG oslo_concurrency.lockutils [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.700 2 DEBUG nova.compute.manager [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Processing event network-vif-plugged-26bcc700-59c6-4e79-904d-988cd11152c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.700 2 DEBUG nova.compute.manager [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Received event network-vif-plugged-26bcc700-59c6-4e79-904d-988cd11152c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.700 2 DEBUG oslo_concurrency.lockutils [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.701 2 DEBUG oslo_concurrency.lockutils [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.701 2 DEBUG oslo_concurrency.lockutils [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.701 2 DEBUG nova.compute.manager [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] No waiting events found dispatching network-vif-plugged-26bcc700-59c6-4e79-904d-988cd11152c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.701 2 WARNING nova.compute.manager [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Received unexpected event network-vif-plugged-26bcc700-59c6-4e79-904d-988cd11152c8 for instance with vm_state building and task_state spawning.
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.702 2 DEBUG nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.706 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433151.7058816, 4774788b-1dc2-40c6-87d0-db4e3f54a609 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.706 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] VM Resumed (Lifecycle Event)
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.708 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.711 2 INFO nova.virt.libvirt.driver [-] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Instance spawned successfully.
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.712 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.738 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.742 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.743 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.743 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.744 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.744 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.744 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.748 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.763 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ff1dqto" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.785 2 DEBUG nova.storage.rbd_utils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 9e354e27-d674-43c3-890b-caf8731cb827_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.788 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827/disk.config 9e354e27-d674-43c3-890b-caf8731cb827_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.831 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.833 2 INFO nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Took 8.00 seconds to spawn the instance on the hypervisor.
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.833 2 DEBUG nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.848 2 DEBUG oslo_concurrency.lockutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.849 2 DEBUG oslo_concurrency.lockutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.849 2 DEBUG oslo_concurrency.lockutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.849 2 DEBUG oslo_concurrency.lockutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.850 2 DEBUG oslo_concurrency.lockutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.852 2 INFO nova.compute.manager [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Terminating instance
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.852 2 DEBUG nova.compute.manager [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:12:31 compute-0 kernel: tapb0cc5216-20 (unregistering): left promiscuous mode
Oct 14 09:12:31 compute-0 NetworkManager[44885]: <info>  [1760433151.9028] device (tapb0cc5216-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.910 2 INFO nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Took 9.15 seconds to build instance.
Oct 14 09:12:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 295 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 9.2 MiB/s wr, 400 op/s
Oct 14 09:12:31 compute-0 ovn_controller[152662]: 2025-10-14T09:12:31Z|00921|binding|INFO|Releasing lport b0cc5216-2023-4add-97a9-4bafe30fd8c3 from this chassis (sb_readonly=0)
Oct 14 09:12:31 compute-0 ovn_controller[152662]: 2025-10-14T09:12:31Z|00922|binding|INFO|Setting lport b0cc5216-2023-4add-97a9-4bafe30fd8c3 down in Southbound
Oct 14 09:12:31 compute-0 ovn_controller[152662]: 2025-10-14T09:12:31Z|00923|binding|INFO|Removing iface tapb0cc5216-20 ovn-installed in OVS
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:31.936 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:4e:ee 10.100.0.14'], port_security=['fa:16:3e:52:4e:ee 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '16c1b8b8-cda9-45f9-994f-3f102eb85e1e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0506bb08-7957-44ca-9a0f-014c548c7b40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a057db932754d6eae91f0d2f359f1ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b9d7c47-8a9b-4622-8756-36a8a6e40174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc552e0c-ff32-4755-9304-f9703ae8cc71, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b0cc5216-2023-4add-97a9-4bafe30fd8c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.936 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:31.938 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b0cc5216-2023-4add-97a9-4bafe30fd8c3 in datapath 0506bb08-7957-44ca-9a0f-014c548c7b40 unbound from our chassis
Oct 14 09:12:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:31.940 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0506bb08-7957-44ca-9a0f-014c548c7b40
Oct 14 09:12:31 compute-0 nova_compute[259627]: 2025-10-14 09:12:31.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:31.954 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f60024ef-4980-4503-9f17-a47bfa648a51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:31 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000057.scope: Deactivated successfully.
Oct 14 09:12:31 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000057.scope: Consumed 4.704s CPU time.
Oct 14 09:12:31 compute-0 systemd-machined[214636]: Machine qemu-108-instance-00000057 terminated.
Oct 14 09:12:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:31.990 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[25e6a651-ee82-4418-aa2f-88a4d4f2c060]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:31.992 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[39b76fcf-8731-4ecb-a029-14d1f0a7cdc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.008 2 DEBUG oslo_concurrency.lockutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.009 2 DEBUG oslo_concurrency.lockutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.009 2 DEBUG oslo_concurrency.lockutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.009 2 DEBUG oslo_concurrency.lockutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.009 2 DEBUG oslo_concurrency.lockutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.011 2 INFO nova.compute.manager [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Terminating instance
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.011 2 DEBUG nova.compute.manager [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.025 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[077bbb5c-b57d-47cf-9173-f1cbdcf91a8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.043 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0fdccda3-319e-44b6-a887-480d63538274]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0506bb08-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c3:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692575, 'reachable_time': 26707, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348915, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.064 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[26bf25c5-8ec1-464b-a9a4-b23a5f6fd6b4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0506bb08-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 692594, 'tstamp': 692594}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348916, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0506bb08-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 692598, 'tstamp': 692598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348916, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.067 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0506bb08-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.078 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0506bb08-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.079 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.080 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0506bb08-70, col_values=(('external_ids', {'iface-id': '6cfe11a6-55c2-4d2e-880b-8832ad317040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.081 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.089 2 INFO nova.virt.libvirt.driver [-] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Instance destroyed successfully.
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.089 2 DEBUG nova.objects.instance [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'resources' on Instance uuid 16c1b8b8-cda9-45f9-994f-3f102eb85e1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.106 2 DEBUG nova.virt.libvirt.vif [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:12:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1135349362',display_name='tempest-tempest.common.compute-instance-1135349362-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1135349362-1',id=87,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:12:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-zo7br5c0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',i
mage_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCreateTestJSON-2115206001-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:12:28Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=16c1b8b8-cda9-45f9-994f-3f102eb85e1e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.108 2 DEBUG nova.network.os_vif_util [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.109 2 DEBUG nova.network.os_vif_util [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:4e:ee,bridge_name='br-int',has_traffic_filtering=True,id=b0cc5216-2023-4add-97a9-4bafe30fd8c3,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cc5216-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.110 2 DEBUG os_vif [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:4e:ee,bridge_name='br-int',has_traffic_filtering=True,id=b0cc5216-2023-4add-97a9-4bafe30fd8c3,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cc5216-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.113 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0cc5216-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:12:32 compute-0 kernel: tap1037d287-c1 (unregistering): left promiscuous mode
Oct 14 09:12:32 compute-0 NetworkManager[44885]: <info>  [1760433152.1523] device (tap1037d287-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.157 2 INFO os_vif [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:4e:ee,bridge_name='br-int',has_traffic_filtering=True,id=b0cc5216-2023-4add-97a9-4bafe30fd8c3,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cc5216-20')
Oct 14 09:12:32 compute-0 ovn_controller[152662]: 2025-10-14T09:12:32Z|00924|binding|INFO|Releasing lport 1037d287-c167-4691-9393-55be86ecbab2 from this chassis (sb_readonly=0)
Oct 14 09:12:32 compute-0 ovn_controller[152662]: 2025-10-14T09:12:32Z|00925|binding|INFO|Setting lport 1037d287-c167-4691-9393-55be86ecbab2 down in Southbound
Oct 14 09:12:32 compute-0 ovn_controller[152662]: 2025-10-14T09:12:32Z|00926|binding|INFO|Removing iface tap1037d287-c1 ovn-installed in OVS
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.177 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:ea:3e 10.100.0.5'], port_security=['fa:16:3e:08:ea:3e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ab89bfba-67b2-4767-90f2-7ef5dab476c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0506bb08-7957-44ca-9a0f-014c548c7b40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a057db932754d6eae91f0d2f359f1ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b9d7c47-8a9b-4622-8756-36a8a6e40174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc552e0c-ff32-4755-9304-f9703ae8cc71, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1037d287-c167-4691-9393-55be86ecbab2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.178 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1037d287-c167-4691-9393-55be86ecbab2 in datapath 0506bb08-7957-44ca-9a0f-014c548c7b40 unbound from our chassis
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.179 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0506bb08-7957-44ca-9a0f-014c548c7b40, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.179 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827/disk.config 9e354e27-d674-43c3-890b-caf8731cb827_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.180 2 INFO nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Deleting local config drive /var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827/disk.config because it was imported into RBD.
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.180 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[37142f37-1bad-4104-b73c-89013b86a4c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.181 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 namespace which is not needed anymore
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:32 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000058.scope: Deactivated successfully.
Oct 14 09:12:32 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000058.scope: Consumed 3.208s CPU time.
Oct 14 09:12:32 compute-0 systemd-machined[214636]: Machine qemu-109-instance-00000058 terminated.
Oct 14 09:12:32 compute-0 kernel: tapce4eb1a6-22: entered promiscuous mode
Oct 14 09:12:32 compute-0 NetworkManager[44885]: <info>  [1760433152.2566] manager: (tapce4eb1a6-22): new Tun device (/org/freedesktop/NetworkManager/Devices/378)
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:32 compute-0 ovn_controller[152662]: 2025-10-14T09:12:32Z|00927|binding|INFO|Claiming lport ce4eb1a6-2221-4519-98fa-44a39da77b71 for this chassis.
Oct 14 09:12:32 compute-0 ovn_controller[152662]: 2025-10-14T09:12:32Z|00928|binding|INFO|ce4eb1a6-2221-4519-98fa-44a39da77b71: Claiming fa:16:3e:7f:9c:bd 10.100.0.7
Oct 14 09:12:32 compute-0 NetworkManager[44885]: <info>  [1760433152.2682] device (tapce4eb1a6-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:12:32 compute-0 NetworkManager[44885]: <info>  [1760433152.2692] device (tapce4eb1a6-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.268 2 INFO nova.virt.libvirt.driver [-] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Instance destroyed successfully.
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.268 2 DEBUG nova.objects.instance [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'resources' on Instance uuid ab89bfba-67b2-4767-90f2-7ef5dab476c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.276 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:9c:bd 10.100.0.7'], port_security=['fa:16:3e:7f:9c:bd 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9e354e27-d674-43c3-890b-caf8731cb827', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c85ef4e1-bf02-447d-8de0-60f2d978738d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7211eb9-3be4-4007-bf83-d7812e6ec9fe, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ce4eb1a6-2221-4519-98fa-44a39da77b71) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.283 2 DEBUG nova.virt.libvirt.vif [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:12:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1135349362',display_name='tempest-tempest.common.compute-instance-1135349362-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1135349362-2',id=88,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-14T09:12:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-zo7br5c0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',i
mage_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCreateTestJSON-2115206001-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:12:29Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=ab89bfba-67b2-4767-90f2-7ef5dab476c0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.283 2 DEBUG nova.network.os_vif_util [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.284 2 DEBUG nova.network.os_vif_util [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:ea:3e,bridge_name='br-int',has_traffic_filtering=True,id=1037d287-c167-4691-9393-55be86ecbab2,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1037d287-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.284 2 DEBUG os_vif [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:ea:3e,bridge_name='br-int',has_traffic_filtering=True,id=1037d287-c167-4691-9393-55be86ecbab2,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1037d287-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.285 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1037d287-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:12:32 compute-0 systemd-machined[214636]: New machine qemu-111-instance-0000005a.
Oct 14 09:12:32 compute-0 systemd[1]: Started Virtual Machine qemu-111-instance-0000005a.
Oct 14 09:12:32 compute-0 ovn_controller[152662]: 2025-10-14T09:12:32Z|00929|binding|INFO|Setting lport ce4eb1a6-2221-4519-98fa-44a39da77b71 ovn-installed in OVS
Oct 14 09:12:32 compute-0 ovn_controller[152662]: 2025-10-14T09:12:32Z|00930|binding|INFO|Setting lport ce4eb1a6-2221-4519-98fa-44a39da77b71 up in Southbound
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.352 2 INFO os_vif [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:ea:3e,bridge_name='br-int',has_traffic_filtering=True,id=1037d287-c167-4691-9393-55be86ecbab2,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1037d287-c1')
Oct 14 09:12:32 compute-0 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[348398]: [NOTICE]   (348411) : haproxy version is 2.8.14-c23fe91
Oct 14 09:12:32 compute-0 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[348398]: [NOTICE]   (348411) : path to executable is /usr/sbin/haproxy
Oct 14 09:12:32 compute-0 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[348398]: [WARNING]  (348411) : Exiting Master process...
Oct 14 09:12:32 compute-0 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[348398]: [ALERT]    (348411) : Current worker (348420) exited with code 143 (Terminated)
Oct 14 09:12:32 compute-0 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[348398]: [WARNING]  (348411) : All workers exited. Exiting... (0)
Oct 14 09:12:32 compute-0 systemd[1]: libpod-7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4.scope: Deactivated successfully.
Oct 14 09:12:32 compute-0 podman[348997]: 2025-10-14 09:12:32.379258942 +0000 UTC m=+0.065756901 container died 7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 14 09:12:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4-userdata-shm.mount: Deactivated successfully.
Oct 14 09:12:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-31049def98d964650864fccfae84a5b472505ff9f0ff1c2e6df68f12f050e09d-merged.mount: Deactivated successfully.
Oct 14 09:12:32 compute-0 podman[348997]: 2025-10-14 09:12:32.701477857 +0000 UTC m=+0.387975806 container cleanup 7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 14 09:12:32 compute-0 systemd[1]: libpod-conmon-7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4.scope: Deactivated successfully.
Oct 14 09:12:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:12:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:12:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:12:32
Oct 14 09:12:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:12:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:12:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'images', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'vms', '.mgr']
Oct 14 09:12:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:12:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:12:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:12:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:12:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:12:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:12:32 compute-0 podman[349050]: 2025-10-14 09:12:32.902669072 +0000 UTC m=+0.161081648 container remove 7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.913 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fb2f5d7f-bafe-4272-9152-0a6c42a37286]: (4, ('Tue Oct 14 09:12:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 (7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4)\n7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4\nTue Oct 14 09:12:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 (7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4)\n7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.915 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d18cdc9d-db92-4d04-8129-b54d236c113c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.917 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0506bb08-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:32 compute-0 kernel: tap0506bb08-70: left promiscuous mode
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:32 compute-0 nova_compute[259627]: 2025-10-14 09:12:32.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.956 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e301bca4-60b7-4de8-99b2-99ade3aa01ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.981 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec16076-0d4b-4d4a-8dd2-957d9b9b6310]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.986 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[98834985-9c57-4cde-a1f5-c34abd3777e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.011 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[703cc39c-47bd-47cb-a6db-53b510a1eef4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692564, 'reachable_time': 25965, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349062, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.014 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.014 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[c28901ae-fb4a-4c10-b475-1390de9903a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.016 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ce4eb1a6-2221-4519-98fa-44a39da77b71 in datapath 7cb8e394-ebca-4b27-8174-62c6b6f3a7da unbound from our chassis
Oct 14 09:12:33 compute-0 systemd[1]: run-netns-ovnmeta\x2d0506bb08\x2d7957\x2d44ca\x2d9a0f\x2d014c548c7b40.mount: Deactivated successfully.
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.020 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7cb8e394-ebca-4b27-8174-62c6b6f3a7da
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.033 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cf2aae7a-5927-4625-b975-cb6c878115e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.034 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7cb8e394-e1 in ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.035 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7cb8e394-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.036 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec52e96-c2d7-4599-ad1b-60098cb1f73d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.037 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0987ec56-d33e-4c0f-b7b3-433c77058d42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.050 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[9ee802f9-b0b5-49f7-b4ef-fadd71496a84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.073 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3851e309-e7d9-4e6e-8a20-ab50a90924d1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.104 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ac79c5a1-4471-44d5-86f1-41589940ac12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 systemd-udevd[349064]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:12:33 compute-0 NetworkManager[44885]: <info>  [1760433153.1128] manager: (tap7cb8e394-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/379)
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.112 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3963d85a-6b33-4949-87ef-87280d5e24ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:12:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:12:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:12:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:12:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:12:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:12:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:12:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:12:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:12:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.167 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a5cf3462-2de3-4d7f-b89d-17cec09e4cb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.170 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6e667455-6fa3-4edc-9859-f4de653b83e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 NetworkManager[44885]: <info>  [1760433153.2061] device (tap7cb8e394-e0): carrier: link connected
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.217 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[37b6fde3-8afb-4a4c-bf79-00e5c83330bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.239 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[416443f3-657c-4ce6-8943-71835eb4f3ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7cb8e394-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:0c:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693196, 'reachable_time': 25964, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349091, 'error': None, 'target': 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.260 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ff5ae1-b12c-4d0c-8712-0db2932badaa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:c43'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693196, 'tstamp': 693196}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349092, 'error': None, 'target': 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.291 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[228b9829-62af-41db-864d-4c5c80f76aa3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7cb8e394-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:0c:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693196, 'reachable_time': 25964, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 349093, 'error': None, 'target': 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.346 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[de463239-2447-431c-bf59-e994da3e90de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.417 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[015b3301-6a9a-4858-9791-defe29bbbd5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.418 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7cb8e394-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.418 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.419 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7cb8e394-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:33 compute-0 NetworkManager[44885]: <info>  [1760433153.4213] manager: (tap7cb8e394-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Oct 14 09:12:33 compute-0 kernel: tap7cb8e394-e0: entered promiscuous mode
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.423 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7cb8e394-e0, col_values=(('external_ids', {'iface-id': 'abbcb164-8856-47e0-a7b9-984d66daedac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:33 compute-0 ovn_controller[152662]: 2025-10-14T09:12:33Z|00931|binding|INFO|Releasing lport abbcb164-8856-47e0-a7b9-984d66daedac from this chassis (sb_readonly=0)
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.451 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7cb8e394-ebca-4b27-8174-62c6b6f3a7da.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7cb8e394-ebca-4b27-8174-62c6b6f3a7da.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.452 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[321d65e5-7b2f-4989-863c-3046ad7c1705]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.453 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-7cb8e394-ebca-4b27-8174-62c6b6f3a7da
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/7cb8e394-ebca-4b27-8174-62c6b6f3a7da.pid.haproxy
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 7cb8e394-ebca-4b27-8174-62c6b6f3a7da
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.457 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'env', 'PROCESS_TAG=haproxy-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7cb8e394-ebca-4b27-8174-62c6b6f3a7da.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:12:33 compute-0 ceph-mon[74249]: pgmap v1742: 305 pgs: 305 active+clean; 295 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 9.2 MiB/s wr, 400 op/s
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.821 2 DEBUG oslo_concurrency.lockutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "4774788b-1dc2-40c6-87d0-db4e3f54a609" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.822 2 DEBUG oslo_concurrency.lockutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.823 2 DEBUG oslo_concurrency.lockutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.823 2 DEBUG oslo_concurrency.lockutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.824 2 DEBUG oslo_concurrency.lockutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.825 2 INFO nova.compute.manager [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Terminating instance
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.826 2 DEBUG nova.compute.manager [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:12:33 compute-0 kernel: tap26bcc700-59 (unregistering): left promiscuous mode
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.902 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Received event network-vif-unplugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.903 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.904 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.904 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.905 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] No waiting events found dispatching network-vif-unplugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.906 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Received event network-vif-unplugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.906 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Received event network-vif-plugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.907 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:33 compute-0 NetworkManager[44885]: <info>  [1760433153.9079] device (tap26bcc700-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.908 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.908 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.909 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] No waiting events found dispatching network-vif-plugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.910 2 WARNING nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Received unexpected event network-vif-plugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 for instance with vm_state active and task_state deleting.
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.910 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Received event network-vif-unplugged-1037d287-c167-4691-9393-55be86ecbab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.911 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.912 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.912 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:33 compute-0 ovn_controller[152662]: 2025-10-14T09:12:33Z|00932|binding|INFO|Releasing lport 26bcc700-59c6-4e79-904d-988cd11152c8 from this chassis (sb_readonly=0)
Oct 14 09:12:33 compute-0 ovn_controller[152662]: 2025-10-14T09:12:33Z|00933|binding|INFO|Setting lport 26bcc700-59c6-4e79-904d-988cd11152c8 down in Southbound
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.913 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] No waiting events found dispatching network-vif-unplugged-1037d287-c167-4691-9393-55be86ecbab2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.914 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Received event network-vif-unplugged-1037d287-c167-4691-9393-55be86ecbab2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:12:33 compute-0 ovn_controller[152662]: 2025-10-14T09:12:33Z|00934|binding|INFO|Removing iface tap26bcc700-59 ovn-installed in OVS
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.915 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Received event network-vif-plugged-1037d287-c167-4691-9393-55be86ecbab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 295 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 9.1 MiB/s wr, 323 op/s
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.916 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.917 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.918 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.918 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] No waiting events found dispatching network-vif-plugged-1037d287-c167-4691-9393-55be86ecbab2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.918 2 WARNING nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Received unexpected event network-vif-plugged-1037d287-c167-4691-9393-55be86ecbab2 for instance with vm_state active and task_state deleting.
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:33 compute-0 podman[349167]: 2025-10-14 09:12:33.921031172 +0000 UTC m=+0.104851963 container create d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 09:12:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.925 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:94:f3 10.100.0.13'], port_security=['fa:16:3e:46:94:f3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '4774788b-1dc2-40c6-87d0-db4e3f54a609', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '517aafb84156407c8672042097e3ef4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572acc55-453a-444a-ab8d-a15e14283f7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927296e1-b389-4596-b9be-8cf735b93ca2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=26bcc700-59c6-4e79-904d-988cd11152c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:33 compute-0 podman[349167]: 2025-10-14 09:12:33.851441008 +0000 UTC m=+0.035261829 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:33 compute-0 systemd[1]: Started libpod-conmon-d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958.scope.
Oct 14 09:12:33 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000059.scope: Deactivated successfully.
Oct 14 09:12:33 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000059.scope: Consumed 2.816s CPU time.
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.981 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:12:33 compute-0 nova_compute[259627]: 2025-10-14 09:12:33.981 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:12:33 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:12:33 compute-0 systemd-machined[214636]: Machine qemu-110-instance-00000059 terminated.
Oct 14 09:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29994210cca99f3be79ccbe5f31a62bef181ac26761ce349073eb7da91ec9974/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.015 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.016 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.016 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.016 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 14 09:12:34 compute-0 podman[349167]: 2025-10-14 09:12:34.018921953 +0000 UTC m=+0.202742784 container init d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:12:34 compute-0 podman[349167]: 2025-10-14 09:12:34.025928595 +0000 UTC m=+0.209749396 container start d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:12:34 compute-0 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[349186]: [NOTICE]   (349192) : New worker (349199) forked
Oct 14 09:12:34 compute-0 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[349186]: [NOTICE]   (349192) : Loading success.
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.067 2 INFO nova.virt.libvirt.driver [-] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Instance destroyed successfully.
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.068 2 DEBUG nova.objects.instance [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'resources' on Instance uuid 4774788b-1dc2-40c6-87d0-db4e3f54a609 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.080 2 DEBUG nova.virt.libvirt.vif [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1339714429',display_name='tempest-ServersNegativeTestJSON-server-1339714429',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1339714429',id=89,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:12:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-3mdh02k5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegativeTestJSON-1475695514-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:12:31Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=4774788b-1dc2-40c6-87d0-db4e3f54a609,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.080 2 DEBUG nova.network.os_vif_util [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.081 2 DEBUG nova.network.os_vif_util [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:94:f3,bridge_name='br-int',has_traffic_filtering=True,id=26bcc700-59c6-4e79-904d-988cd11152c8,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26bcc700-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.082 2 DEBUG os_vif [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:94:f3,bridge_name='br-int',has_traffic_filtering=True,id=26bcc700-59c6-4e79-904d-988cd11152c8,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26bcc700-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.084 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26bcc700-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.089 2 INFO os_vif [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:94:f3,bridge_name='br-int',has_traffic_filtering=True,id=26bcc700-59c6-4e79-904d-988cd11152c8,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26bcc700-59')
Oct 14 09:12:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.152 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 26bcc700-59c6-4e79-904d-988cd11152c8 in datapath a49b41b4-2559-4a22-a274-a6c7bbe75f2c unbound from our chassis
Oct 14 09:12:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.154 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a49b41b4-2559-4a22-a274-a6c7bbe75f2c
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.165 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.166 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.166 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.167 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.174 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2fb44c-fc27-479a-b43c-31784997ed49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.211 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0853ac76-005e-4a18-88c4-a99d4a833265]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.215 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9d5516e8-6464-43cd-b836-38fe6994233c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.249 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[813f4d67-2dd5-4128-a66e-2bec2903cae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.276 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[36cb50d9-d158-410d-8af4-8f290c1c04f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa49b41b4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:5b:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 259], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691215, 'reachable_time': 16714, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349236, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.284 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433154.28369, 9e354e27-d674-43c3-890b-caf8731cb827 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.285 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] VM Started (Lifecycle Event)
Oct 14 09:12:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.301 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[010723e9-756b-40c4-8455-084e33b436f0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa49b41b4-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691226, 'tstamp': 691226}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349237, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa49b41b4-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691229, 'tstamp': 691229}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349237, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.304 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49b41b4-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.310 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa49b41b4-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.310 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.311 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa49b41b4-20, col_values=(('external_ids', {'iface-id': '61fe5571-a8eb-446a-8c4c-1f6f6758b146'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:34 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.311 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.311 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.317 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433154.2838418, 9e354e27-d674-43c3-890b-caf8731cb827 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.317 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] VM Paused (Lifecycle Event)
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.334 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.337 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.363 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.485 2 INFO nova.virt.libvirt.driver [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Deleting instance files /var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e_del
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.485 2 INFO nova.virt.libvirt.driver [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Deletion of /var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e_del complete
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.507 2 INFO nova.virt.libvirt.driver [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Deleting instance files /var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0_del
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.508 2 INFO nova.virt.libvirt.driver [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Deletion of /var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0_del complete
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.546 2 INFO nova.compute.manager [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Took 2.69 seconds to destroy the instance on the hypervisor.
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.546 2 DEBUG oslo.service.loopingcall [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.547 2 DEBUG nova.compute.manager [-] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.547 2 DEBUG nova.network.neutron [-] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.558 2 INFO nova.compute.manager [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Took 2.55 seconds to destroy the instance on the hypervisor.
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.559 2 DEBUG oslo.service.loopingcall [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.559 2 DEBUG nova.compute.manager [-] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.559 2 DEBUG nova.network.neutron [-] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:12:34 compute-0 ceph-mon[74249]: pgmap v1743: 305 pgs: 305 active+clean; 295 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 9.1 MiB/s wr, 323 op/s
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.962 2 INFO nova.virt.libvirt.driver [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Deleting instance files /var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609_del
Oct 14 09:12:34 compute-0 nova_compute[259627]: 2025-10-14 09:12:34.962 2 INFO nova.virt.libvirt.driver [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Deletion of /var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609_del complete
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.013 2 INFO nova.compute.manager [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Took 1.19 seconds to destroy the instance on the hypervisor.
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.014 2 DEBUG oslo.service.loopingcall [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.014 2 DEBUG nova.compute.manager [-] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.015 2 DEBUG nova.network.neutron [-] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.050 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433140.0497344, 1141f79e-2e47-40f1-91b0-275a9fac765c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.051 2 INFO nova.compute.manager [-] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] VM Stopped (Lifecycle Event)
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.078 2 DEBUG nova.compute.manager [None req-8c4c46dc-7e81-4226-9358-ebac38a5613a - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.791 2 DEBUG nova.network.neutron [-] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.819 2 INFO nova.compute.manager [-] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Took 1.27 seconds to deallocate network for instance.
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.897 2 DEBUG nova.compute.manager [req-c622664c-053a-4324-9492-efcc16cfd84a req-dc44b5c8-a2a4-4bb8-8821-b76dd3846618 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Received event network-vif-deleted-b0cc5216-2023-4add-97a9-4bafe30fd8c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.908 2 DEBUG oslo_concurrency.lockutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.909 2 DEBUG oslo_concurrency.lockutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.911 2 DEBUG nova.network.neutron [-] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 167 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 9.2 MiB/s wr, 549 op/s
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.928 2 INFO nova.compute.manager [-] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Took 1.37 seconds to deallocate network for instance.
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.985 2 DEBUG oslo_concurrency.lockutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.995 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.996 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.996 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.996 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.997 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Processing event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.997 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.997 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.997 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.998 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.998 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] No waiting events found dispatching network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.998 2 WARNING nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received unexpected event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 for instance with vm_state building and task_state spawning.
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.998 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Received event network-vif-unplugged-26bcc700-59c6-4e79-904d-988cd11152c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.999 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.999 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:35 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.999 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:35.999 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] No waiting events found dispatching network-vif-unplugged-26bcc700-59c6-4e79-904d-988cd11152c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.000 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Received event network-vif-unplugged-26bcc700-59c6-4e79-904d-988cd11152c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.000 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Received event network-vif-plugged-26bcc700-59c6-4e79-904d-988cd11152c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.000 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.001 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.001 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.001 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] No waiting events found dispatching network-vif-plugged-26bcc700-59c6-4e79-904d-988cd11152c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.001 2 WARNING nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Received unexpected event network-vif-plugged-26bcc700-59c6-4e79-904d-988cd11152c8 for instance with vm_state active and task_state deleting.
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.002 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Received event network-vif-deleted-1037d287-c167-4691-9393-55be86ecbab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.002 2 DEBUG nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.008 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433156.0080884, 9e354e27-d674-43c3-890b-caf8731cb827 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.008 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] VM Resumed (Lifecycle Event)
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.010 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.015 2 INFO nova.virt.libvirt.driver [-] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Instance spawned successfully.
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.016 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.034 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.043 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.046 2 DEBUG oslo_concurrency.processutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.087 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.092 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.093 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.094 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.095 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.096 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.096 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.151 2 INFO nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Took 8.79 seconds to spawn the instance on the hypervisor.
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.151 2 DEBUG nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.224 2 INFO nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Took 10.09 seconds to build instance.
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.239 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.344 2 DEBUG nova.network.neutron [-] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.369 2 INFO nova.compute.manager [-] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Took 1.35 seconds to deallocate network for instance.
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.416 2 DEBUG oslo_concurrency.lockutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:12:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2005298739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.490 2 DEBUG oslo_concurrency.processutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.495 2 DEBUG nova.compute.provider_tree [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.511 2 DEBUG nova.scheduler.client.report [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.531 2 DEBUG oslo_concurrency.lockutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.534 2 DEBUG oslo_concurrency.lockutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.557 2 INFO nova.scheduler.client.report [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Deleted allocations for instance 16c1b8b8-cda9-45f9-994f-3f102eb85e1e
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.604 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating instance_info_cache with network_info: [{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.623 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.624 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.624 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.624 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.625 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.670 2 DEBUG oslo_concurrency.lockutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:36 compute-0 nova_compute[259627]: 2025-10-14 09:12:36.691 2 DEBUG oslo_concurrency.processutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:37 compute-0 ceph-mon[74249]: pgmap v1744: 305 pgs: 305 active+clean; 167 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 9.2 MiB/s wr, 549 op/s
Oct 14 09:12:37 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2005298739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:12:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1640992374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:37 compute-0 nova_compute[259627]: 2025-10-14 09:12:37.226 2 DEBUG oslo_concurrency.processutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:37 compute-0 nova_compute[259627]: 2025-10-14 09:12:37.231 2 DEBUG nova.compute.provider_tree [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:12:37 compute-0 nova_compute[259627]: 2025-10-14 09:12:37.250 2 DEBUG nova.scheduler.client.report [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:12:37 compute-0 nova_compute[259627]: 2025-10-14 09:12:37.279 2 DEBUG oslo_concurrency.lockutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:37 compute-0 nova_compute[259627]: 2025-10-14 09:12:37.282 2 DEBUG oslo_concurrency.lockutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:37 compute-0 nova_compute[259627]: 2025-10-14 09:12:37.332 2 INFO nova.scheduler.client.report [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Deleted allocations for instance ab89bfba-67b2-4767-90f2-7ef5dab476c0
Oct 14 09:12:37 compute-0 nova_compute[259627]: 2025-10-14 09:12:37.394 2 DEBUG oslo_concurrency.processutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:37 compute-0 nova_compute[259627]: 2025-10-14 09:12:37.445 2 DEBUG oslo_concurrency.lockutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:12:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:12:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1967470313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:37 compute-0 nova_compute[259627]: 2025-10-14 09:12:37.880 2 DEBUG oslo_concurrency.processutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:37 compute-0 nova_compute[259627]: 2025-10-14 09:12:37.885 2 DEBUG nova.compute.provider_tree [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:12:37 compute-0 nova_compute[259627]: 2025-10-14 09:12:37.900 2 DEBUG nova.scheduler.client.report [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:12:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 167 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 4.0 MiB/s wr, 427 op/s
Oct 14 09:12:37 compute-0 nova_compute[259627]: 2025-10-14 09:12:37.920 2 DEBUG oslo_concurrency.lockutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:37 compute-0 nova_compute[259627]: 2025-10-14 09:12:37.947 2 INFO nova.scheduler.client.report [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Deleted allocations for instance 4774788b-1dc2-40c6-87d0-db4e3f54a609
Oct 14 09:12:37 compute-0 nova_compute[259627]: 2025-10-14 09:12:37.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:37 compute-0 nova_compute[259627]: 2025-10-14 09:12:37.988 2 DEBUG nova.compute.manager [req-6c227c34-b233-447f-9a0c-fbb887903fbd req-7ca744c6-19c3-49d7-8aee-e6e83e72cbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Received event network-vif-deleted-26bcc700-59c6-4e79-904d-988cd11152c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:38 compute-0 nova_compute[259627]: 2025-10-14 09:12:38.007 2 DEBUG oslo_concurrency.lockutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:38 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1640992374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:38 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1967470313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:39 compute-0 nova_compute[259627]: 2025-10-14 09:12:39.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:39 compute-0 ceph-mon[74249]: pgmap v1745: 305 pgs: 305 active+clean; 167 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 4.0 MiB/s wr, 427 op/s
Oct 14 09:12:39 compute-0 ovn_controller[152662]: 2025-10-14T09:12:39Z|00935|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 09:12:39 compute-0 ovn_controller[152662]: 2025-10-14T09:12:39Z|00936|binding|INFO|Releasing lport abbcb164-8856-47e0-a7b9-984d66daedac from this chassis (sb_readonly=0)
Oct 14 09:12:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 167 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 4.0 MiB/s wr, 427 op/s
Oct 14 09:12:39 compute-0 nova_compute[259627]: 2025-10-14 09:12:39.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:39 compute-0 NetworkManager[44885]: <info>  [1760433159.9233] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Oct 14 09:12:39 compute-0 NetworkManager[44885]: <info>  [1760433159.9242] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Oct 14 09:12:39 compute-0 nova_compute[259627]: 2025-10-14 09:12:39.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:12:40 compute-0 ovn_controller[152662]: 2025-10-14T09:12:40Z|00937|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 09:12:40 compute-0 ovn_controller[152662]: 2025-10-14T09:12:40Z|00938|binding|INFO|Releasing lport abbcb164-8856-47e0-a7b9-984d66daedac from this chassis (sb_readonly=0)
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.511 2 DEBUG nova.compute.manager [req-37f9743a-215e-4eb6-8a0b-53aadf39db3e req-f18eaf23-2901-4adf-a11d-020df9eaaecb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-changed-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.512 2 DEBUG nova.compute.manager [req-37f9743a-215e-4eb6-8a0b-53aadf39db3e req-f18eaf23-2901-4adf-a11d-020df9eaaecb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Refreshing instance network info cache due to event network-changed-ce4eb1a6-2221-4519-98fa-44a39da77b71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.512 2 DEBUG oslo_concurrency.lockutils [req-37f9743a-215e-4eb6-8a0b-53aadf39db3e req-f18eaf23-2901-4adf-a11d-020df9eaaecb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.513 2 DEBUG oslo_concurrency.lockutils [req-37f9743a-215e-4eb6-8a0b-53aadf39db3e req-f18eaf23-2901-4adf-a11d-020df9eaaecb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.513 2 DEBUG nova.network.neutron [req-37f9743a-215e-4eb6-8a0b-53aadf39db3e req-f18eaf23-2901-4adf-a11d-020df9eaaecb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Refreshing network info cache for port ce4eb1a6-2221-4519-98fa-44a39da77b71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:12:40 compute-0 sudo[349306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:12:40 compute-0 sudo[349306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:12:40 compute-0 sudo[349306]: pam_unix(sudo:session): session closed for user root
Oct 14 09:12:40 compute-0 sudo[349331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:12:40 compute-0 sudo[349331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:12:40 compute-0 sudo[349331]: pam_unix(sudo:session): session closed for user root
Oct 14 09:12:40 compute-0 sudo[349356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:12:40 compute-0 sudo[349356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:12:40 compute-0 sudo[349356]: pam_unix(sudo:session): session closed for user root
Oct 14 09:12:40 compute-0 sudo[349381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:12:40 compute-0 sudo[349381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.832 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "4ad76124-48eb-467e-9a6f-951235efdb35" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.834 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.861 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.879 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "bf38daab-2994-41c4-a44f-91e466acf68e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.880 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.912 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.947 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.948 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.948 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433145.9477327, 70e3c250-cd38-4718-9a7f-0fbf7bf471fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.949 2 INFO nova.compute.manager [-] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] VM Stopped (Lifecycle Event)
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.966 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.966 2 INFO nova.compute.claims [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.977 2 DEBUG nova.compute.manager [None req-79f3e3f6-bb8d-455f-b9b3-172eba258e12 - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:40 compute-0 nova_compute[259627]: 2025-10-14 09:12:40.997 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:41 compute-0 nova_compute[259627]: 2025-10-14 09:12:41.135 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:41 compute-0 ceph-mon[74249]: pgmap v1746: 305 pgs: 305 active+clean; 167 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 4.0 MiB/s wr, 427 op/s
Oct 14 09:12:41 compute-0 sudo[349381]: pam_unix(sudo:session): session closed for user root
Oct 14 09:12:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:12:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:12:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:12:41 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:12:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:12:41 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:12:41 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 84c455ae-9c4b-449c-88bb-d83c202c8cc1 does not exist
Oct 14 09:12:41 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 0a13b36f-297f-48fa-8e9d-d5fd40f44bab does not exist
Oct 14 09:12:41 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 3d7d6c5c-e3ed-4899-a46f-c19195fa3f00 does not exist
Oct 14 09:12:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:12:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:12:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:12:41 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:12:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:12:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:12:41 compute-0 sudo[349457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:12:41 compute-0 sudo[349457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:12:41 compute-0 sudo[349457]: pam_unix(sudo:session): session closed for user root
Oct 14 09:12:41 compute-0 sudo[349482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:12:41 compute-0 sudo[349482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:12:41 compute-0 sudo[349482]: pam_unix(sudo:session): session closed for user root
Oct 14 09:12:41 compute-0 sudo[349507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:12:41 compute-0 sudo[349507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:12:41 compute-0 sudo[349507]: pam_unix(sudo:session): session closed for user root
Oct 14 09:12:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:12:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/790926568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:41 compute-0 nova_compute[259627]: 2025-10-14 09:12:41.688 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:41 compute-0 nova_compute[259627]: 2025-10-14 09:12:41.698 2 DEBUG nova.compute.provider_tree [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:12:41 compute-0 sudo[349532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:12:41 compute-0 sudo[349532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:12:41 compute-0 nova_compute[259627]: 2025-10-14 09:12:41.722 2 DEBUG nova.scheduler.client.report [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:12:41 compute-0 nova_compute[259627]: 2025-10-14 09:12:41.751 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:41 compute-0 nova_compute[259627]: 2025-10-14 09:12:41.752 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:12:41 compute-0 nova_compute[259627]: 2025-10-14 09:12:41.757 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:41 compute-0 nova_compute[259627]: 2025-10-14 09:12:41.771 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:12:41 compute-0 nova_compute[259627]: 2025-10-14 09:12:41.772 2 INFO nova.compute.claims [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:12:41 compute-0 nova_compute[259627]: 2025-10-14 09:12:41.839 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:12:41 compute-0 nova_compute[259627]: 2025-10-14 09:12:41.840 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:12:41 compute-0 nova_compute[259627]: 2025-10-14 09:12:41.859 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:12:41 compute-0 nova_compute[259627]: 2025-10-14 09:12:41.891 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:12:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 4.0 MiB/s wr, 491 op/s
Oct 14 09:12:41 compute-0 nova_compute[259627]: 2025-10-14 09:12:41.964 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.018 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.021 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.021 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Creating image(s)
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.051 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 4ad76124-48eb-467e-9a6f-951235efdb35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.083 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 4ad76124-48eb-467e-9a6f-951235efdb35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.119 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 4ad76124-48eb-467e-9a6f-951235efdb35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.123 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:42 compute-0 podman[349637]: 2025-10-14 09:12:42.162790305 +0000 UTC m=+0.064411117 container create 138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:12:42 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:12:42 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:12:42 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:12:42 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:12:42 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:12:42 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:12:42 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/790926568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:42 compute-0 systemd[1]: Started libpod-conmon-138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84.scope.
Oct 14 09:12:42 compute-0 podman[349637]: 2025-10-14 09:12:42.132657673 +0000 UTC m=+0.034278525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:12:42 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.229 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.234 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.235 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.235 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:42 compute-0 podman[349637]: 2025-10-14 09:12:42.248768002 +0000 UTC m=+0.150388854 container init 138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:12:42 compute-0 podman[349637]: 2025-10-14 09:12:42.256445021 +0000 UTC m=+0.158065823 container start 138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.257 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 4ad76124-48eb-467e-9a6f-951235efdb35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:42 compute-0 podman[349637]: 2025-10-14 09:12:42.260229935 +0000 UTC m=+0.161850787 container attach 138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.261 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4ad76124-48eb-467e-9a6f-951235efdb35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:42 compute-0 wonderful_jemison[349689]: 167 167
Oct 14 09:12:42 compute-0 systemd[1]: libpod-138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84.scope: Deactivated successfully.
Oct 14 09:12:42 compute-0 conmon[349689]: conmon 138caf6f8485a3bf99af <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84.scope/container/memory.events
Oct 14 09:12:42 compute-0 podman[349637]: 2025-10-14 09:12:42.263975057 +0000 UTC m=+0.165595859 container died 138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 09:12:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-76a8f15d405d8dfacfeabbb6217c37d1a9383af6c7ddb984c0f6518cd5fc7919-merged.mount: Deactivated successfully.
Oct 14 09:12:42 compute-0 podman[349637]: 2025-10-14 09:12:42.30188049 +0000 UTC m=+0.203501292 container remove 138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jemison, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:12:42 compute-0 systemd[1]: libpod-conmon-138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84.scope: Deactivated successfully.
Oct 14 09:12:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:12:42 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2606086695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.497 2 DEBUG nova.policy [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f50b95774c384c5a8414b197ed5d7b82', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5a057db932754d6eae91f0d2f359f1ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.504 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:42 compute-0 podman[349751]: 2025-10-14 09:12:42.507934945 +0000 UTC m=+0.052771111 container create 56db4ef906d513715ba626baae083910fc18ce8ecce5fe6e25d2b3810320cba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.518 2 DEBUG nova.compute.provider_tree [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.562 2 DEBUG nova.scheduler.client.report [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:12:42 compute-0 systemd[1]: Started libpod-conmon-56db4ef906d513715ba626baae083910fc18ce8ecce5fe6e25d2b3810320cba2.scope.
Oct 14 09:12:42 compute-0 podman[349751]: 2025-10-14 09:12:42.476748767 +0000 UTC m=+0.021584953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.586 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.587 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.590 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4ad76124-48eb-467e-9a6f-951235efdb35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:42 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b650fa13359511ca08adb33818554ff31dcee4a110e63a2f118798fc551917cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b650fa13359511ca08adb33818554ff31dcee4a110e63a2f118798fc551917cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b650fa13359511ca08adb33818554ff31dcee4a110e63a2f118798fc551917cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b650fa13359511ca08adb33818554ff31dcee4a110e63a2f118798fc551917cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b650fa13359511ca08adb33818554ff31dcee4a110e63a2f118798fc551917cb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:12:42 compute-0 podman[349751]: 2025-10-14 09:12:42.619040881 +0000 UTC m=+0.163877097 container init 56db4ef906d513715ba626baae083910fc18ce8ecce5fe6e25d2b3810320cba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 09:12:42 compute-0 podman[349751]: 2025-10-14 09:12:42.626511115 +0000 UTC m=+0.171347301 container start 56db4ef906d513715ba626baae083910fc18ce8ecce5fe6e25d2b3810320cba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:12:42 compute-0 podman[349751]: 2025-10-14 09:12:42.630884313 +0000 UTC m=+0.175720519 container attach 56db4ef906d513715ba626baae083910fc18ce8ecce5fe6e25d2b3810320cba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.643 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.644 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.674 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.681 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] resizing rbd image 4ad76124-48eb-467e-9a6f-951235efdb35_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.704 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.764 2 DEBUG nova.objects.instance [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'migration_context' on Instance uuid 4ad76124-48eb-467e-9a6f-951235efdb35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.786 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.786 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Ensure instance console log exists: /var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.787 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.787 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.787 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.804 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.805 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.806 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Creating image(s)
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.821 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image bf38daab-2994-41c4-a44f-91e466acf68e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.844 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image bf38daab-2994-41c4-a44f-91e466acf68e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.863 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image bf38daab-2994-41c4-a44f-91e466acf68e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.865 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.932 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.933 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.933 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.934 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.955 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image bf38daab-2994-41c4-a44f-91e466acf68e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.958 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 bf38daab-2994-41c4-a44f-91e466acf68e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:42 compute-0 nova_compute[259627]: 2025-10-14 09:12:42.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001106605497493 of space, bias 1.0, pg target 0.3319816492479 quantized to 32 (current 32)
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:12:43 compute-0 nova_compute[259627]: 2025-10-14 09:12:43.224 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 bf38daab-2994-41c4-a44f-91e466acf68e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:43 compute-0 nova_compute[259627]: 2025-10-14 09:12:43.316 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] resizing rbd image bf38daab-2994-41c4-a44f-91e466acf68e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:12:43 compute-0 nova_compute[259627]: 2025-10-14 09:12:43.363 2 DEBUG nova.network.neutron [req-37f9743a-215e-4eb6-8a0b-53aadf39db3e req-f18eaf23-2901-4adf-a11d-020df9eaaecb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Updated VIF entry in instance network info cache for port ce4eb1a6-2221-4519-98fa-44a39da77b71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:12:43 compute-0 nova_compute[259627]: 2025-10-14 09:12:43.364 2 DEBUG nova.network.neutron [req-37f9743a-215e-4eb6-8a0b-53aadf39db3e req-f18eaf23-2901-4adf-a11d-020df9eaaecb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Updating instance_info_cache with network_info: [{"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:43 compute-0 ceph-mon[74249]: pgmap v1747: 305 pgs: 305 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 4.0 MiB/s wr, 491 op/s
Oct 14 09:12:43 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2606086695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:43 compute-0 nova_compute[259627]: 2025-10-14 09:12:43.439 2 DEBUG nova.policy [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f50b95774c384c5a8414b197ed5d7b82', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5a057db932754d6eae91f0d2f359f1ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:12:43 compute-0 nova_compute[259627]: 2025-10-14 09:12:43.447 2 DEBUG oslo_concurrency.lockutils [req-37f9743a-215e-4eb6-8a0b-53aadf39db3e req-f18eaf23-2901-4adf-a11d-020df9eaaecb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:43 compute-0 nova_compute[259627]: 2025-10-14 09:12:43.455 2 DEBUG nova.objects.instance [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'migration_context' on Instance uuid bf38daab-2994-41c4-a44f-91e466acf68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:43 compute-0 nova_compute[259627]: 2025-10-14 09:12:43.474 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:12:43 compute-0 nova_compute[259627]: 2025-10-14 09:12:43.474 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Ensure instance console log exists: /var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:12:43 compute-0 nova_compute[259627]: 2025-10-14 09:12:43.476 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:43 compute-0 nova_compute[259627]: 2025-10-14 09:12:43.477 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:43 compute-0 nova_compute[259627]: 2025-10-14 09:12:43.478 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:43 compute-0 lucid_kare[349770]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:12:43 compute-0 lucid_kare[349770]: --> relative data size: 1.0
Oct 14 09:12:43 compute-0 lucid_kare[349770]: --> All data devices are unavailable
Oct 14 09:12:43 compute-0 systemd[1]: libpod-56db4ef906d513715ba626baae083910fc18ce8ecce5fe6e25d2b3810320cba2.scope: Deactivated successfully.
Oct 14 09:12:43 compute-0 podman[349751]: 2025-10-14 09:12:43.703651932 +0000 UTC m=+1.248488098 container died 56db4ef906d513715ba626baae083910fc18ce8ecce5fe6e25d2b3810320cba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 09:12:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b650fa13359511ca08adb33818554ff31dcee4a110e63a2f118798fc551917cb-merged.mount: Deactivated successfully.
Oct 14 09:12:43 compute-0 podman[349751]: 2025-10-14 09:12:43.758359779 +0000 UTC m=+1.303195945 container remove 56db4ef906d513715ba626baae083910fc18ce8ecce5fe6e25d2b3810320cba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 09:12:43 compute-0 systemd[1]: libpod-conmon-56db4ef906d513715ba626baae083910fc18ce8ecce5fe6e25d2b3810320cba2.scope: Deactivated successfully.
Oct 14 09:12:43 compute-0 sudo[349532]: pam_unix(sudo:session): session closed for user root
Oct 14 09:12:43 compute-0 sudo[350049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:12:43 compute-0 sudo[350049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:12:43 compute-0 sudo[350049]: pam_unix(sudo:session): session closed for user root
Oct 14 09:12:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 119 KiB/s wr, 290 op/s
Oct 14 09:12:43 compute-0 sudo[350074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:12:43 compute-0 sudo[350074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:12:43 compute-0 sudo[350074]: pam_unix(sudo:session): session closed for user root
Oct 14 09:12:44 compute-0 sudo[350099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:12:44 compute-0 sudo[350099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:12:44 compute-0 sudo[350099]: pam_unix(sudo:session): session closed for user root
Oct 14 09:12:44 compute-0 nova_compute[259627]: 2025-10-14 09:12:44.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:44 compute-0 sudo[350124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:12:44 compute-0 sudo[350124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:12:44 compute-0 nova_compute[259627]: 2025-10-14 09:12:44.598 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Successfully created port: 66a0c3b8-73ab-490e-a3d4-06827c574cb6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:12:44 compute-0 podman[350191]: 2025-10-14 09:12:44.663714816 +0000 UTC m=+0.061799143 container create 04aa0f4913e27e6bca232dd343a17d0e79c94183755f5cbe4c96e826834bab5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:12:44 compute-0 systemd[1]: Started libpod-conmon-04aa0f4913e27e6bca232dd343a17d0e79c94183755f5cbe4c96e826834bab5a.scope.
Oct 14 09:12:44 compute-0 podman[350191]: 2025-10-14 09:12:44.633133663 +0000 UTC m=+0.031218040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:12:44 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:12:44 compute-0 podman[350191]: 2025-10-14 09:12:44.752490502 +0000 UTC m=+0.150574829 container init 04aa0f4913e27e6bca232dd343a17d0e79c94183755f5cbe4c96e826834bab5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:12:44 compute-0 podman[350191]: 2025-10-14 09:12:44.759787422 +0000 UTC m=+0.157871749 container start 04aa0f4913e27e6bca232dd343a17d0e79c94183755f5cbe4c96e826834bab5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:12:44 compute-0 podman[350191]: 2025-10-14 09:12:44.763635417 +0000 UTC m=+0.161719714 container attach 04aa0f4913e27e6bca232dd343a17d0e79c94183755f5cbe4c96e826834bab5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 09:12:44 compute-0 hopeful_panini[350207]: 167 167
Oct 14 09:12:44 compute-0 systemd[1]: libpod-04aa0f4913e27e6bca232dd343a17d0e79c94183755f5cbe4c96e826834bab5a.scope: Deactivated successfully.
Oct 14 09:12:44 compute-0 podman[350191]: 2025-10-14 09:12:44.768700181 +0000 UTC m=+0.166784508 container died 04aa0f4913e27e6bca232dd343a17d0e79c94183755f5cbe4c96e826834bab5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 09:12:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebec7d2f0e69c7c43d133c5abe68f8e2c950f19bf7a9365e76cb803411609dc5-merged.mount: Deactivated successfully.
Oct 14 09:12:44 compute-0 podman[350191]: 2025-10-14 09:12:44.819262947 +0000 UTC m=+0.217347244 container remove 04aa0f4913e27e6bca232dd343a17d0e79c94183755f5cbe4c96e826834bab5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 09:12:44 compute-0 systemd[1]: libpod-conmon-04aa0f4913e27e6bca232dd343a17d0e79c94183755f5cbe4c96e826834bab5a.scope: Deactivated successfully.
Oct 14 09:12:45 compute-0 podman[350231]: 2025-10-14 09:12:45.069673503 +0000 UTC m=+0.054222006 container create 6c28cfec94b50b7820e4b36691d5a42eeb9834313ccddd73fd6e2e1c6912b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:12:45 compute-0 systemd[1]: Started libpod-conmon-6c28cfec94b50b7820e4b36691d5a42eeb9834313ccddd73fd6e2e1c6912b7f2.scope.
Oct 14 09:12:45 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f52b86f6300ba98eb7fb4e7de16f1e3c1a82480349f9ae4ebbe281594ab33d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f52b86f6300ba98eb7fb4e7de16f1e3c1a82480349f9ae4ebbe281594ab33d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f52b86f6300ba98eb7fb4e7de16f1e3c1a82480349f9ae4ebbe281594ab33d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f52b86f6300ba98eb7fb4e7de16f1e3c1a82480349f9ae4ebbe281594ab33d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:12:45 compute-0 podman[350231]: 2025-10-14 09:12:45.049484126 +0000 UTC m=+0.034032649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:12:45 compute-0 podman[350231]: 2025-10-14 09:12:45.163817242 +0000 UTC m=+0.148365795 container init 6c28cfec94b50b7820e4b36691d5a42eeb9834313ccddd73fd6e2e1c6912b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 09:12:45 compute-0 podman[350231]: 2025-10-14 09:12:45.175406377 +0000 UTC m=+0.159954920 container start 6c28cfec94b50b7820e4b36691d5a42eeb9834313ccddd73fd6e2e1c6912b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:12:45 compute-0 podman[350231]: 2025-10-14 09:12:45.179504508 +0000 UTC m=+0.164053041 container attach 6c28cfec94b50b7820e4b36691d5a42eeb9834313ccddd73fd6e2e1c6912b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:12:45 compute-0 ceph-mon[74249]: pgmap v1748: 305 pgs: 305 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 119 KiB/s wr, 290 op/s
Oct 14 09:12:45 compute-0 nova_compute[259627]: 2025-10-14 09:12:45.528 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Successfully created port: 59075c43-66a8-4a9c-a693-31f83575b355 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]: {
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:     "0": [
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:         {
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "devices": [
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "/dev/loop3"
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             ],
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "lv_name": "ceph_lv0",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "lv_size": "21470642176",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "name": "ceph_lv0",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "tags": {
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.cluster_name": "ceph",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.crush_device_class": "",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.encrypted": "0",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.osd_id": "0",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.type": "block",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.vdo": "0"
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             },
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "type": "block",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "vg_name": "ceph_vg0"
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:         }
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:     ],
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:     "1": [
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:         {
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "devices": [
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "/dev/loop4"
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             ],
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "lv_name": "ceph_lv1",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "lv_size": "21470642176",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "name": "ceph_lv1",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "tags": {
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.cluster_name": "ceph",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.crush_device_class": "",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.encrypted": "0",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.osd_id": "1",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.type": "block",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.vdo": "0"
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             },
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "type": "block",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "vg_name": "ceph_vg1"
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:         }
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:     ],
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:     "2": [
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:         {
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "devices": [
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "/dev/loop5"
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             ],
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "lv_name": "ceph_lv2",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "lv_size": "21470642176",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "name": "ceph_lv2",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "tags": {
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.cluster_name": "ceph",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.crush_device_class": "",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.encrypted": "0",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.osd_id": "2",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.type": "block",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:                 "ceph.vdo": "0"
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             },
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "type": "block",
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:             "vg_name": "ceph_vg2"
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:         }
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]:     ]
Oct 14 09:12:45 compute-0 pensive_goldstine[350248]: }
Oct 14 09:12:45 compute-0 systemd[1]: libpod-6c28cfec94b50b7820e4b36691d5a42eeb9834313ccddd73fd6e2e1c6912b7f2.scope: Deactivated successfully.
Oct 14 09:12:45 compute-0 podman[350231]: 2025-10-14 09:12:45.902948695 +0000 UTC m=+0.887497198 container died 6c28cfec94b50b7820e4b36691d5a42eeb9834313ccddd73fd6e2e1c6912b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldstine, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:12:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 260 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 3.7 MiB/s wr, 344 op/s
Oct 14 09:12:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f52b86f6300ba98eb7fb4e7de16f1e3c1a82480349f9ae4ebbe281594ab33d1-merged.mount: Deactivated successfully.
Oct 14 09:12:45 compute-0 podman[350231]: 2025-10-14 09:12:45.974283342 +0000 UTC m=+0.958831845 container remove 6c28cfec94b50b7820e4b36691d5a42eeb9834313ccddd73fd6e2e1c6912b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 09:12:45 compute-0 systemd[1]: libpod-conmon-6c28cfec94b50b7820e4b36691d5a42eeb9834313ccddd73fd6e2e1c6912b7f2.scope: Deactivated successfully.
Oct 14 09:12:46 compute-0 sudo[350124]: pam_unix(sudo:session): session closed for user root
Oct 14 09:12:46 compute-0 podman[350258]: 2025-10-14 09:12:46.036789271 +0000 UTC m=+0.097050351 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:12:46 compute-0 podman[350265]: 2025-10-14 09:12:46.053487222 +0000 UTC m=+0.106224577 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 09:12:46 compute-0 sudo[350300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:12:46 compute-0 sudo[350300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:12:46 compute-0 sudo[350300]: pam_unix(sudo:session): session closed for user root
Oct 14 09:12:46 compute-0 sudo[350328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:12:46 compute-0 sudo[350328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:12:46 compute-0 sudo[350328]: pam_unix(sudo:session): session closed for user root
Oct 14 09:12:46 compute-0 sudo[350353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:12:46 compute-0 sudo[350353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:12:46 compute-0 sudo[350353]: pam_unix(sudo:session): session closed for user root
Oct 14 09:12:46 compute-0 sudo[350378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:12:46 compute-0 sudo[350378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:12:46 compute-0 nova_compute[259627]: 2025-10-14 09:12:46.664 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Successfully updated port: 66a0c3b8-73ab-490e-a3d4-06827c574cb6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:12:46 compute-0 nova_compute[259627]: 2025-10-14 09:12:46.724 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "refresh_cache-4ad76124-48eb-467e-9a6f-951235efdb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:46 compute-0 nova_compute[259627]: 2025-10-14 09:12:46.724 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquired lock "refresh_cache-4ad76124-48eb-467e-9a6f-951235efdb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:46 compute-0 nova_compute[259627]: 2025-10-14 09:12:46.724 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:12:46 compute-0 podman[350442]: 2025-10-14 09:12:46.729453119 +0000 UTC m=+0.058059651 container create 28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 09:12:46 compute-0 systemd[1]: Started libpod-conmon-28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9.scope.
Oct 14 09:12:46 compute-0 podman[350442]: 2025-10-14 09:12:46.696920928 +0000 UTC m=+0.025527490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:12:46 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:12:46 compute-0 podman[350442]: 2025-10-14 09:12:46.815723823 +0000 UTC m=+0.144330375 container init 28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 14 09:12:46 compute-0 podman[350442]: 2025-10-14 09:12:46.82329025 +0000 UTC m=+0.151896772 container start 28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 09:12:46 compute-0 podman[350442]: 2025-10-14 09:12:46.826515269 +0000 UTC m=+0.155121811 container attach 28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:12:46 compute-0 tender_wescoff[350458]: 167 167
Oct 14 09:12:46 compute-0 systemd[1]: libpod-28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9.scope: Deactivated successfully.
Oct 14 09:12:46 compute-0 conmon[350458]: conmon 28750fe3c1480f286b28 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9.scope/container/memory.events
Oct 14 09:12:46 compute-0 podman[350442]: 2025-10-14 09:12:46.831093582 +0000 UTC m=+0.159700124 container died 28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wescoff, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:12:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-88b7d172d6a7f4633f91e8b1053ca53aa26841a3b3013715bf70c5c2da6e5e04-merged.mount: Deactivated successfully.
Oct 14 09:12:46 compute-0 podman[350442]: 2025-10-14 09:12:46.86876518 +0000 UTC m=+0.197371702 container remove 28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wescoff, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:12:46 compute-0 systemd[1]: libpod-conmon-28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9.scope: Deactivated successfully.
Oct 14 09:12:47 compute-0 podman[350482]: 2025-10-14 09:12:47.046447756 +0000 UTC m=+0.044648231 container create dab42b37ac013b5b84eb46d40f07160b97f955d75bd7796540bf0879829f5a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:12:47 compute-0 nova_compute[259627]: 2025-10-14 09:12:47.086 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433152.0860417, 16c1b8b8-cda9-45f9-994f-3f102eb85e1e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:47 compute-0 nova_compute[259627]: 2025-10-14 09:12:47.087 2 INFO nova.compute.manager [-] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] VM Stopped (Lifecycle Event)
Oct 14 09:12:47 compute-0 systemd[1]: Started libpod-conmon-dab42b37ac013b5b84eb46d40f07160b97f955d75bd7796540bf0879829f5a71.scope.
Oct 14 09:12:47 compute-0 nova_compute[259627]: 2025-10-14 09:12:47.121 2 DEBUG nova.compute.manager [None req-fa3a1d4a-35d2-41ff-9850-2dba7124a0ac - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:47 compute-0 podman[350482]: 2025-10-14 09:12:47.028461723 +0000 UTC m=+0.026662228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:12:47 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7148b960e77d113014da8121cd7a7e3394799cd627fb9535d1fc023ed754238/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7148b960e77d113014da8121cd7a7e3394799cd627fb9535d1fc023ed754238/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7148b960e77d113014da8121cd7a7e3394799cd627fb9535d1fc023ed754238/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7148b960e77d113014da8121cd7a7e3394799cd627fb9535d1fc023ed754238/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:12:47 compute-0 podman[350482]: 2025-10-14 09:12:47.16194133 +0000 UTC m=+0.160141825 container init dab42b37ac013b5b84eb46d40f07160b97f955d75bd7796540bf0879829f5a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:12:47 compute-0 podman[350482]: 2025-10-14 09:12:47.168794949 +0000 UTC m=+0.166995424 container start dab42b37ac013b5b84eb46d40f07160b97f955d75bd7796540bf0879829f5a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 09:12:47 compute-0 podman[350482]: 2025-10-14 09:12:47.171847274 +0000 UTC m=+0.170047849 container attach dab42b37ac013b5b84eb46d40f07160b97f955d75bd7796540bf0879829f5a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:12:47 compute-0 nova_compute[259627]: 2025-10-14 09:12:47.249 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433152.2486112, ab89bfba-67b2-4767-90f2-7ef5dab476c0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:47 compute-0 nova_compute[259627]: 2025-10-14 09:12:47.250 2 INFO nova.compute.manager [-] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] VM Stopped (Lifecycle Event)
Oct 14 09:12:47 compute-0 nova_compute[259627]: 2025-10-14 09:12:47.275 2 DEBUG nova.compute.manager [None req-8ecfc52c-3e2f-4dbd-aa92-38631fdd99cf - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:47 compute-0 nova_compute[259627]: 2025-10-14 09:12:47.284 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:12:47 compute-0 nova_compute[259627]: 2025-10-14 09:12:47.349 2 DEBUG nova.compute.manager [req-1d82724b-92de-4952-bc77-49a16597735c req-fe665076-85f7-4e9e-99b9-18c264d855db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Received event network-changed-66a0c3b8-73ab-490e-a3d4-06827c574cb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:47 compute-0 nova_compute[259627]: 2025-10-14 09:12:47.349 2 DEBUG nova.compute.manager [req-1d82724b-92de-4952-bc77-49a16597735c req-fe665076-85f7-4e9e-99b9-18c264d855db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Refreshing instance network info cache due to event network-changed-66a0c3b8-73ab-490e-a3d4-06827c574cb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:12:47 compute-0 nova_compute[259627]: 2025-10-14 09:12:47.349 2 DEBUG oslo_concurrency.lockutils [req-1d82724b-92de-4952-bc77-49a16597735c req-fe665076-85f7-4e9e-99b9-18c264d855db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4ad76124-48eb-467e-9a6f-951235efdb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:47 compute-0 ceph-mon[74249]: pgmap v1749: 305 pgs: 305 active+clean; 260 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 3.7 MiB/s wr, 344 op/s
Oct 14 09:12:47 compute-0 ovn_controller[152662]: 2025-10-14T09:12:47Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:9c:bd 10.100.0.7
Oct 14 09:12:47 compute-0 ovn_controller[152662]: 2025-10-14T09:12:47Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:9c:bd 10.100.0.7
Oct 14 09:12:47 compute-0 nova_compute[259627]: 2025-10-14 09:12:47.652 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Successfully updated port: 59075c43-66a8-4a9c-a693-31f83575b355 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:12:47 compute-0 nova_compute[259627]: 2025-10-14 09:12:47.670 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "refresh_cache-bf38daab-2994-41c4-a44f-91e466acf68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:47 compute-0 nova_compute[259627]: 2025-10-14 09:12:47.671 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquired lock "refresh_cache-bf38daab-2994-41c4-a44f-91e466acf68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:47 compute-0 nova_compute[259627]: 2025-10-14 09:12:47.671 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:12:47 compute-0 nova_compute[259627]: 2025-10-14 09:12:47.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:12:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 260 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 118 op/s
Oct 14 09:12:47 compute-0 nova_compute[259627]: 2025-10-14 09:12:47.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:48 compute-0 beautiful_cray[350499]: {
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:         "osd_id": 2,
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:         "type": "bluestore"
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:     },
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:         "osd_id": 1,
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:         "type": "bluestore"
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:     },
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:         "osd_id": 0,
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:         "type": "bluestore"
Oct 14 09:12:48 compute-0 beautiful_cray[350499]:     }
Oct 14 09:12:48 compute-0 beautiful_cray[350499]: }
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.112 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:12:48 compute-0 systemd[1]: libpod-dab42b37ac013b5b84eb46d40f07160b97f955d75bd7796540bf0879829f5a71.scope: Deactivated successfully.
Oct 14 09:12:48 compute-0 podman[350482]: 2025-10-14 09:12:48.136941602 +0000 UTC m=+1.135142077 container died dab42b37ac013b5b84eb46d40f07160b97f955d75bd7796540bf0879829f5a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 09:12:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7148b960e77d113014da8121cd7a7e3394799cd627fb9535d1fc023ed754238-merged.mount: Deactivated successfully.
Oct 14 09:12:48 compute-0 podman[350482]: 2025-10-14 09:12:48.189840185 +0000 UTC m=+1.188040670 container remove dab42b37ac013b5b84eb46d40f07160b97f955d75bd7796540bf0879829f5a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 09:12:48 compute-0 systemd[1]: libpod-conmon-dab42b37ac013b5b84eb46d40f07160b97f955d75bd7796540bf0879829f5a71.scope: Deactivated successfully.
Oct 14 09:12:48 compute-0 sudo[350378]: pam_unix(sudo:session): session closed for user root
Oct 14 09:12:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:12:48 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:12:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:12:48 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:12:48 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev b8dc6c5f-ba32-4311-938e-5a8dec5d43e0 does not exist
Oct 14 09:12:48 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 128872c2-1252-4289-b87c-855758ca5bf2 does not exist
Oct 14 09:12:48 compute-0 sudo[350544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:12:48 compute-0 sudo[350544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:12:48 compute-0 sudo[350544]: pam_unix(sudo:session): session closed for user root
Oct 14 09:12:48 compute-0 sudo[350569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:12:48 compute-0 sudo[350569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:12:48 compute-0 sudo[350569]: pam_unix(sudo:session): session closed for user root
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.490 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Updating instance_info_cache with network_info: [{"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.512 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Releasing lock "refresh_cache-4ad76124-48eb-467e-9a6f-951235efdb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.513 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Instance network_info: |[{"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.513 2 DEBUG oslo_concurrency.lockutils [req-1d82724b-92de-4952-bc77-49a16597735c req-fe665076-85f7-4e9e-99b9-18c264d855db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4ad76124-48eb-467e-9a6f-951235efdb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.513 2 DEBUG nova.network.neutron [req-1d82724b-92de-4952-bc77-49a16597735c req-fe665076-85f7-4e9e-99b9-18c264d855db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Refreshing network info cache for port 66a0c3b8-73ab-490e-a3d4-06827c574cb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.515 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Start _get_guest_xml network_info=[{"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.521 2 WARNING nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.527 2 DEBUG nova.virt.libvirt.host [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.527 2 DEBUG nova.virt.libvirt.host [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.536 2 DEBUG nova.virt.libvirt.host [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.536 2 DEBUG nova.virt.libvirt.host [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.537 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.537 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.538 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.538 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.538 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.538 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.539 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.539 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.539 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.539 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.539 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.540 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.543 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1486944178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.956 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:48 compute-0 nova_compute[259627]: 2025-10-14 09:12:48.993 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 4ad76124-48eb-467e-9a6f-951235efdb35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.000 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.064 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433154.0633433, 4774788b-1dc2-40c6-87d0-db4e3f54a609 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.065 2 INFO nova.compute.manager [-] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] VM Stopped (Lifecycle Event)
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.085 2 DEBUG nova.compute.manager [None req-230b8606-0aef-4961-939d-f8e3a97975a9 - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:49 compute-0 ceph-mon[74249]: pgmap v1750: 305 pgs: 305 active+clean; 260 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 118 op/s
Oct 14 09:12:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:12:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:12:49 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1486944178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.345 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Updating instance_info_cache with network_info: [{"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.372 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Releasing lock "refresh_cache-bf38daab-2994-41c4-a44f-91e466acf68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.373 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Instance network_info: |[{"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.378 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Start _get_guest_xml network_info=[{"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.383 2 WARNING nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.390 2 DEBUG nova.virt.libvirt.host [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.391 2 DEBUG nova.virt.libvirt.host [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.395 2 DEBUG nova.virt.libvirt.host [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.396 2 DEBUG nova.virt.libvirt.host [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.396 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.397 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.398 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.398 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.398 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.399 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.399 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.400 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.400 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.401 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.401 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.402 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.409 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2425337834' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.523 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.526 2 DEBUG nova.virt.libvirt.vif [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1291705969',display_name='tempest-MultipleCreateTestJSON-server-1291705969-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1291705969-1',id=91,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-tm8i0hhm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCreateTestJ
SON-2115206001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:41Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=4ad76124-48eb-467e-9a6f-951235efdb35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.527 2 DEBUG nova.network.os_vif_util [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.528 2 DEBUG nova.network.os_vif_util [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ff:73,bridge_name='br-int',has_traffic_filtering=True,id=66a0c3b8-73ab-490e-a3d4-06827c574cb6,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a0c3b8-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.531 2 DEBUG nova.objects.instance [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'pci_devices' on Instance uuid 4ad76124-48eb-467e-9a6f-951235efdb35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.549 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:12:49 compute-0 nova_compute[259627]:   <uuid>4ad76124-48eb-467e-9a6f-951235efdb35</uuid>
Oct 14 09:12:49 compute-0 nova_compute[259627]:   <name>instance-0000005b</name>
Oct 14 09:12:49 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:12:49 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:12:49 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <nova:name>tempest-MultipleCreateTestJSON-server-1291705969-1</nova:name>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:12:48</nova:creationTime>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:12:49 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:12:49 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:12:49 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:12:49 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:12:49 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:12:49 compute-0 nova_compute[259627]:         <nova:user uuid="f50b95774c384c5a8414b197ed5d7b82">tempest-MultipleCreateTestJSON-2115206001-project-member</nova:user>
Oct 14 09:12:49 compute-0 nova_compute[259627]:         <nova:project uuid="5a057db932754d6eae91f0d2f359f1ff">tempest-MultipleCreateTestJSON-2115206001</nova:project>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:12:49 compute-0 nova_compute[259627]:         <nova:port uuid="66a0c3b8-73ab-490e-a3d4-06827c574cb6">
Oct 14 09:12:49 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:12:49 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:12:49 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <system>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <entry name="serial">4ad76124-48eb-467e-9a6f-951235efdb35</entry>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <entry name="uuid">4ad76124-48eb-467e-9a6f-951235efdb35</entry>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     </system>
Oct 14 09:12:49 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:12:49 compute-0 nova_compute[259627]:   <os>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:   </os>
Oct 14 09:12:49 compute-0 nova_compute[259627]:   <features>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:   </features>
Oct 14 09:12:49 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:12:49 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:12:49 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/4ad76124-48eb-467e-9a6f-951235efdb35_disk">
Oct 14 09:12:49 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:49 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/4ad76124-48eb-467e-9a6f-951235efdb35_disk.config">
Oct 14 09:12:49 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:49 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:c2:ff:73"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <target dev="tap66a0c3b8-73"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35/console.log" append="off"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <video>
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     </video>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:12:49 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:12:49 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:12:49 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:12:49 compute-0 nova_compute[259627]: </domain>
Oct 14 09:12:49 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.550 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Preparing to wait for external event network-vif-plugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.551 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.552 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.552 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.554 2 DEBUG nova.virt.libvirt.vif [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1291705969',display_name='tempest-MultipleCreateTestJSON-server-1291705969-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1291705969-1',id=91,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-tm8i0hhm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCreateTestJSON-2115206001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:41Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=4ad76124-48eb-467e-9a6f-951235efdb35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.554 2 DEBUG nova.network.os_vif_util [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.555 2 DEBUG nova.network.os_vif_util [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ff:73,bridge_name='br-int',has_traffic_filtering=True,id=66a0c3b8-73ab-490e-a3d4-06827c574cb6,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a0c3b8-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.556 2 DEBUG os_vif [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ff:73,bridge_name='br-int',has_traffic_filtering=True,id=66a0c3b8-73ab-490e-a3d4-06827c574cb6,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a0c3b8-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.558 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.559 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.563 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66a0c3b8-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.563 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap66a0c3b8-73, col_values=(('external_ids', {'iface-id': '66a0c3b8-73ab-490e-a3d4-06827c574cb6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:ff:73', 'vm-uuid': '4ad76124-48eb-467e-9a6f-951235efdb35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:49 compute-0 NetworkManager[44885]: <info>  [1760433169.5666] manager: (tap66a0c3b8-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.578 2 INFO os_vif [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ff:73,bridge_name='br-int',has_traffic_filtering=True,id=66a0c3b8-73ab-490e-a3d4-06827c574cb6,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a0c3b8-73')
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.650 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.651 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.651 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No VIF found with MAC fa:16:3e:c2:ff:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.651 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Using config drive
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.676 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 4ad76124-48eb-467e-9a6f-951235efdb35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/69948782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 260 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 118 op/s
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.940 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.968 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image bf38daab-2994-41c4-a44f-91e466acf68e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:49 compute-0 nova_compute[259627]: 2025-10-14 09:12:49.973 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2425337834' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/69948782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.444 2 DEBUG nova.compute.manager [req-41b2e1be-aee9-416d-9739-18d3cece0d76 req-a977f58d-87d8-4673-9d40-83b0981015bf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Received event network-changed-59075c43-66a8-4a9c-a693-31f83575b355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.444 2 DEBUG nova.compute.manager [req-41b2e1be-aee9-416d-9739-18d3cece0d76 req-a977f58d-87d8-4673-9d40-83b0981015bf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Refreshing instance network info cache due to event network-changed-59075c43-66a8-4a9c-a693-31f83575b355. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.445 2 DEBUG oslo_concurrency.lockutils [req-41b2e1be-aee9-416d-9739-18d3cece0d76 req-a977f58d-87d8-4673-9d40-83b0981015bf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-bf38daab-2994-41c4-a44f-91e466acf68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.445 2 DEBUG oslo_concurrency.lockutils [req-41b2e1be-aee9-416d-9739-18d3cece0d76 req-a977f58d-87d8-4673-9d40-83b0981015bf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-bf38daab-2994-41c4-a44f-91e466acf68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.445 2 DEBUG nova.network.neutron [req-41b2e1be-aee9-416d-9739-18d3cece0d76 req-a977f58d-87d8-4673-9d40-83b0981015bf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Refreshing network info cache for port 59075c43-66a8-4a9c-a693-31f83575b355 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:12:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4080435401' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.487 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.488 2 DEBUG nova.virt.libvirt.vif [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1291705969',display_name='tempest-MultipleCreateTestJSON-server-1291705969-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1291705969-2',id=92,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-tm8i0hhm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCreateTestJSON-2115206001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:42Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=bf38daab-2994-41c4-a44f-91e466acf68e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.489 2 DEBUG nova.network.os_vif_util [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.490 2 DEBUG nova.network.os_vif_util [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:b1:fa,bridge_name='br-int',has_traffic_filtering=True,id=59075c43-66a8-4a9c-a693-31f83575b355,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59075c43-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.492 2 DEBUG nova.objects.instance [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'pci_devices' on Instance uuid bf38daab-2994-41c4-a44f-91e466acf68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.515 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Creating config drive at /var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35/disk.config
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.520 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpklf_2xrm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.572 2 DEBUG nova.network.neutron [req-1d82724b-92de-4952-bc77-49a16597735c req-fe665076-85f7-4e9e-99b9-18c264d855db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Updated VIF entry in instance network info cache for port 66a0c3b8-73ab-490e-a3d4-06827c574cb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.573 2 DEBUG nova.network.neutron [req-1d82724b-92de-4952-bc77-49a16597735c req-fe665076-85f7-4e9e-99b9-18c264d855db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Updating instance_info_cache with network_info: [{"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.577 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:12:50 compute-0 nova_compute[259627]:   <uuid>bf38daab-2994-41c4-a44f-91e466acf68e</uuid>
Oct 14 09:12:50 compute-0 nova_compute[259627]:   <name>instance-0000005c</name>
Oct 14 09:12:50 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:12:50 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:12:50 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <nova:name>tempest-MultipleCreateTestJSON-server-1291705969-2</nova:name>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:12:49</nova:creationTime>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:12:50 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:12:50 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:12:50 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:12:50 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:12:50 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:12:50 compute-0 nova_compute[259627]:         <nova:user uuid="f50b95774c384c5a8414b197ed5d7b82">tempest-MultipleCreateTestJSON-2115206001-project-member</nova:user>
Oct 14 09:12:50 compute-0 nova_compute[259627]:         <nova:project uuid="5a057db932754d6eae91f0d2f359f1ff">tempest-MultipleCreateTestJSON-2115206001</nova:project>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:12:50 compute-0 nova_compute[259627]:         <nova:port uuid="59075c43-66a8-4a9c-a693-31f83575b355">
Oct 14 09:12:50 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:12:50 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:12:50 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <system>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <entry name="serial">bf38daab-2994-41c4-a44f-91e466acf68e</entry>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <entry name="uuid">bf38daab-2994-41c4-a44f-91e466acf68e</entry>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     </system>
Oct 14 09:12:50 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:12:50 compute-0 nova_compute[259627]:   <os>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:   </os>
Oct 14 09:12:50 compute-0 nova_compute[259627]:   <features>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:   </features>
Oct 14 09:12:50 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:12:50 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:12:50 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/bf38daab-2994-41c4-a44f-91e466acf68e_disk">
Oct 14 09:12:50 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:50 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/bf38daab-2994-41c4-a44f-91e466acf68e_disk.config">
Oct 14 09:12:50 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:50 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:63:b1:fa"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <target dev="tap59075c43-66"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e/console.log" append="off"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <video>
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     </video>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:12:50 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:12:50 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:12:50 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:12:50 compute-0 nova_compute[259627]: </domain>
Oct 14 09:12:50 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.578 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Preparing to wait for external event network-vif-plugged-59075c43-66a8-4a9c-a693-31f83575b355 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.578 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.579 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.579 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.580 2 DEBUG nova.virt.libvirt.vif [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1291705969',display_name='tempest-MultipleCreateTestJSON-server-1291705969-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1291705969-2',id=92,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-tm8i0hhm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleC
reateTestJSON-2115206001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:42Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=bf38daab-2994-41c4-a44f-91e466acf68e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.580 2 DEBUG nova.network.os_vif_util [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.581 2 DEBUG nova.network.os_vif_util [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:b1:fa,bridge_name='br-int',has_traffic_filtering=True,id=59075c43-66a8-4a9c-a693-31f83575b355,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59075c43-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.582 2 DEBUG os_vif [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:b1:fa,bridge_name='br-int',has_traffic_filtering=True,id=59075c43-66a8-4a9c-a693-31f83575b355,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59075c43-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.584 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.584 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.589 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59075c43-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.589 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap59075c43-66, col_values=(('external_ids', {'iface-id': '59075c43-66a8-4a9c-a693-31f83575b355', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:b1:fa', 'vm-uuid': 'bf38daab-2994-41c4-a44f-91e466acf68e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:12:50 compute-0 NetworkManager[44885]: <info>  [1760433170.5927] manager: (tap59075c43-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.596 2 DEBUG oslo_concurrency.lockutils [req-1d82724b-92de-4952-bc77-49a16597735c req-fe665076-85f7-4e9e-99b9-18c264d855db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4ad76124-48eb-467e-9a6f-951235efdb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.599 2 INFO os_vif [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:b1:fa,bridge_name='br-int',has_traffic_filtering=True,id=59075c43-66a8-4a9c-a693-31f83575b355,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59075c43-66')
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.652 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.652 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.653 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No VIF found with MAC fa:16:3e:63:b1:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.653 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Using config drive
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.678 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image bf38daab-2994-41c4-a44f-91e466acf68e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.684 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpklf_2xrm" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.713 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 4ad76124-48eb-467e-9a6f-951235efdb35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.717 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35/disk.config 4ad76124-48eb-467e-9a6f-951235efdb35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.883 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35/disk.config 4ad76124-48eb-467e-9a6f-951235efdb35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.884 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Deleting local config drive /var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35/disk.config because it was imported into RBD.
Oct 14 09:12:50 compute-0 NetworkManager[44885]: <info>  [1760433170.9357] manager: (tap66a0c3b8-73): new Tun device (/org/freedesktop/NetworkManager/Devices/385)
Oct 14 09:12:50 compute-0 kernel: tap66a0c3b8-73: entered promiscuous mode
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:50 compute-0 ovn_controller[152662]: 2025-10-14T09:12:50Z|00939|binding|INFO|Claiming lport 66a0c3b8-73ab-490e-a3d4-06827c574cb6 for this chassis.
Oct 14 09:12:50 compute-0 ovn_controller[152662]: 2025-10-14T09:12:50Z|00940|binding|INFO|66a0c3b8-73ab-490e-a3d4-06827c574cb6: Claiming fa:16:3e:c2:ff:73 10.100.0.11
Oct 14 09:12:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:50.954 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:ff:73 10.100.0.11'], port_security=['fa:16:3e:c2:ff:73 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4ad76124-48eb-467e-9a6f-951235efdb35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0506bb08-7957-44ca-9a0f-014c548c7b40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a057db932754d6eae91f0d2f359f1ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b9d7c47-8a9b-4622-8756-36a8a6e40174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc552e0c-ff32-4755-9304-f9703ae8cc71, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=66a0c3b8-73ab-490e-a3d4-06827c574cb6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:50.955 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 66a0c3b8-73ab-490e-a3d4-06827c574cb6 in datapath 0506bb08-7957-44ca-9a0f-014c548c7b40 bound to our chassis
Oct 14 09:12:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:50.958 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0506bb08-7957-44ca-9a0f-014c548c7b40
Oct 14 09:12:50 compute-0 systemd-udevd[350812]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:12:50 compute-0 ovn_controller[152662]: 2025-10-14T09:12:50Z|00941|binding|INFO|Setting lport 66a0c3b8-73ab-490e-a3d4-06827c574cb6 ovn-installed in OVS
Oct 14 09:12:50 compute-0 ovn_controller[152662]: 2025-10-14T09:12:50Z|00942|binding|INFO|Setting lport 66a0c3b8-73ab-490e-a3d4-06827c574cb6 up in Southbound
Oct 14 09:12:50 compute-0 nova_compute[259627]: 2025-10-14 09:12:50.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:50.983 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5a08e688-c7e3-4a25-bd58-d7849dd4e37a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:50.984 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0506bb08-71 in ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:12:50 compute-0 systemd-machined[214636]: New machine qemu-112-instance-0000005b.
Oct 14 09:12:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:50.987 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0506bb08-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:12:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:50.987 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[643ace4e-88b1-4d7e-b4e0-4992db52b96e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:50.988 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[adb6947e-1ab1-4074-906e-3a8488519dc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:50 compute-0 NetworkManager[44885]: <info>  [1760433170.9990] device (tap66a0c3b8-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:12:51 compute-0 systemd[1]: Started Virtual Machine qemu-112-instance-0000005b.
Oct 14 09:12:51 compute-0 NetworkManager[44885]: <info>  [1760433171.0006] device (tap66a0c3b8-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.008 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f0ac13-0b45-4092-bb36-ed23d2206845]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.042 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c662e2-be3a-4739-b9c6-6b7d0cd2a682]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.083 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[41ba1ffb-6441-4f5a-b2e9-a672781fac84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:51 compute-0 systemd-udevd[350817]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.092 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[46e56416-49e5-4bec-8b63-fb58cca25e4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:51 compute-0 NetworkManager[44885]: <info>  [1760433171.0943] manager: (tap0506bb08-70): new Veth device (/org/freedesktop/NetworkManager/Devices/386)
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.143 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e0979797-4844-4d71-94d7-d12340c3469b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.148 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ac72b0-289d-4720-a25e-e267785fa588]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:51 compute-0 NetworkManager[44885]: <info>  [1760433171.1845] device (tap0506bb08-70): carrier: link connected
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.195 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7451cdde-a4f6-4b3f-87d1-b607a63d3b15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.221 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0b0510-0024-4181-8b3c-2e833a13c4e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0506bb08-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c3:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694994, 'reachable_time': 19321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350850, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.235 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[326f99a3-64d5-4d91-b6c4-617366b53627]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:c30c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694994, 'tstamp': 694994}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350851, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:51 compute-0 ceph-mon[74249]: pgmap v1751: 305 pgs: 305 active+clean; 260 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 118 op/s
Oct 14 09:12:51 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4080435401' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.253 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4fdf2807-03a6-474b-aca6-045db2f39ca8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0506bb08-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c3:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694994, 'reachable_time': 19321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350852, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.282 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[40f0e841-541c-4eb4-8d69-0e9964a6f7f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.343 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a00bf7-c684-4425-aec6-6cdef23784fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.344 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0506bb08-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.345 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.345 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0506bb08-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:51 compute-0 NetworkManager[44885]: <info>  [1760433171.3483] manager: (tap0506bb08-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Oct 14 09:12:51 compute-0 kernel: tap0506bb08-70: entered promiscuous mode
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.357 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0506bb08-70, col_values=(('external_ids', {'iface-id': '6cfe11a6-55c2-4d2e-880b-8832ad317040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:51 compute-0 ovn_controller[152662]: 2025-10-14T09:12:51Z|00943|binding|INFO|Releasing lport 6cfe11a6-55c2-4d2e-880b-8832ad317040 from this chassis (sb_readonly=0)
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.395 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0506bb08-7957-44ca-9a0f-014c548c7b40.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0506bb08-7957-44ca-9a0f-014c548c7b40.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.396 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8a165815-c99b-4eac-992e-83739e3e3b9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.397 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-0506bb08-7957-44ca-9a0f-014c548c7b40
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/0506bb08-7957-44ca-9a0f-014c548c7b40.pid.haproxy
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 0506bb08-7957-44ca-9a0f-014c548c7b40
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.398 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'env', 'PROCESS_TAG=haproxy-0506bb08-7957-44ca-9a0f-014c548c7b40', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0506bb08-7957-44ca-9a0f-014c548c7b40.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.531 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Creating config drive at /var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e/disk.config
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.540 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptlcv2m7e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.684 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptlcv2m7e" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.728 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image bf38daab-2994-41c4-a44f-91e466acf68e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.732 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e/disk.config bf38daab-2994-41c4-a44f-91e466acf68e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.789 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquiring lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.790 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:51 compute-0 podman[350945]: 2025-10-14 09:12:51.794924158 +0000 UTC m=+0.057864296 container create e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.806 2 DEBUG nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:12:51 compute-0 systemd[1]: Started libpod-conmon-e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e.scope.
Oct 14 09:12:51 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:12:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8eaa5e67a067367ecbaeb8ed40f166de47bd4cb062e63d1c476cf7e27a74873/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:12:51 compute-0 podman[350945]: 2025-10-14 09:12:51.859416866 +0000 UTC m=+0.122357024 container init e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:12:51 compute-0 podman[350945]: 2025-10-14 09:12:51.764285754 +0000 UTC m=+0.027225902 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:12:51 compute-0 podman[350945]: 2025-10-14 09:12:51.865737472 +0000 UTC m=+0.128677610 container start e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.883 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.884 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:51 compute-0 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[350965]: [NOTICE]   (350984) : New worker (350989) forked
Oct 14 09:12:51 compute-0 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[350965]: [NOTICE]   (350984) : Loading success.
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.892 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.892 2 INFO nova.compute.claims [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.922 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e/disk.config bf38daab-2994-41c4-a44f-91e466acf68e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.922 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Deleting local config drive /var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e/disk.config because it was imported into RBD.
Oct 14 09:12:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 292 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 181 op/s
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.948 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433171.9482193, 4ad76124-48eb-467e-9a6f-951235efdb35 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.949 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] VM Started (Lifecycle Event)
Oct 14 09:12:51 compute-0 kernel: tap59075c43-66: entered promiscuous mode
Oct 14 09:12:51 compute-0 NetworkManager[44885]: <info>  [1760433171.9690] manager: (tap59075c43-66): new Tun device (/org/freedesktop/NetworkManager/Devices/388)
Oct 14 09:12:51 compute-0 systemd-udevd[350844]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:51 compute-0 ovn_controller[152662]: 2025-10-14T09:12:51Z|00944|binding|INFO|Claiming lport 59075c43-66a8-4a9c-a693-31f83575b355 for this chassis.
Oct 14 09:12:51 compute-0 ovn_controller[152662]: 2025-10-14T09:12:51Z|00945|binding|INFO|59075c43-66a8-4a9c-a693-31f83575b355: Claiming fa:16:3e:63:b1:fa 10.100.0.3
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.975 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.977 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:b1:fa 10.100.0.3'], port_security=['fa:16:3e:63:b1:fa 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'bf38daab-2994-41c4-a44f-91e466acf68e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0506bb08-7957-44ca-9a0f-014c548c7b40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a057db932754d6eae91f0d2f359f1ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b9d7c47-8a9b-4622-8756-36a8a6e40174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc552e0c-ff32-4755-9304-f9703ae8cc71, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=59075c43-66a8-4a9c-a693-31f83575b355) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.979 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 59075c43-66a8-4a9c-a693-31f83575b355 in datapath 0506bb08-7957-44ca-9a0f-014c548c7b40 bound to our chassis
Oct 14 09:12:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.982 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0506bb08-7957-44ca-9a0f-014c548c7b40
Oct 14 09:12:51 compute-0 NetworkManager[44885]: <info>  [1760433171.9838] device (tap59075c43-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:12:51 compute-0 NetworkManager[44885]: <info>  [1760433171.9846] device (tap59075c43-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:12:51 compute-0 ovn_controller[152662]: 2025-10-14T09:12:51Z|00946|binding|INFO|Setting lport 59075c43-66a8-4a9c-a693-31f83575b355 ovn-installed in OVS
Oct 14 09:12:51 compute-0 ovn_controller[152662]: 2025-10-14T09:12:51Z|00947|binding|INFO|Setting lport 59075c43-66a8-4a9c-a693-31f83575b355 up in Southbound
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.991 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433171.9484267, 4ad76124-48eb-467e-9a6f-951235efdb35 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.992 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] VM Paused (Lifecycle Event)
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:51 compute-0 nova_compute[259627]: 2025-10-14 09:12:51.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.001 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d40ccd-10df-45ba-97e8-c80fc0471c5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.008 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:52 compute-0 systemd-machined[214636]: New machine qemu-113-instance-0000005c.
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.015 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:12:52 compute-0 systemd[1]: Started Virtual Machine qemu-113-instance-0000005c.
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.031 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:12:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.040 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d46d2b53-81f6-4066-89cb-22635603b80c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.043 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[17828b12-26d4-4311-b08e-da6874901395]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.053 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.069 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[442e27c9-ea9a-48ee-b8b3-35c1de8d3fb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.088 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[85f576b2-77ed-4261-afe8-98ec03327514]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0506bb08-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c3:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 306, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 306, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694994, 'reachable_time': 19321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351023, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.105 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa234be-608e-456b-a052-df0bb4f19f85]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0506bb08-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695007, 'tstamp': 695007}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351025, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0506bb08-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695010, 'tstamp': 695010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351025, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.107 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0506bb08-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.110 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0506bb08-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.110 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.110 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0506bb08-70, col_values=(('external_ids', {'iface-id': '6cfe11a6-55c2-4d2e-880b-8832ad317040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.110 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:12:52 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/761441399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.491 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.500 2 DEBUG nova.compute.provider_tree [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.535 2 DEBUG nova.scheduler.client.report [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.577 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.577 2 DEBUG nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.620 2 DEBUG nova.compute.manager [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Received event network-vif-plugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.620 2 DEBUG oslo_concurrency.lockutils [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.621 2 DEBUG oslo_concurrency.lockutils [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.621 2 DEBUG oslo_concurrency.lockutils [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.621 2 DEBUG nova.compute.manager [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Processing event network-vif-plugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.622 2 DEBUG nova.compute.manager [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Received event network-vif-plugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.622 2 DEBUG oslo_concurrency.lockutils [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.622 2 DEBUG oslo_concurrency.lockutils [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.622 2 DEBUG oslo_concurrency.lockutils [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.623 2 DEBUG nova.compute.manager [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] No waiting events found dispatching network-vif-plugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.623 2 WARNING nova.compute.manager [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Received unexpected event network-vif-plugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 for instance with vm_state building and task_state spawning.
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.624 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.628 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433172.6284063, 4ad76124-48eb-467e-9a6f-951235efdb35 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.628 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] VM Resumed (Lifecycle Event)
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.630 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.633 2 INFO nova.virt.libvirt.driver [-] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Instance spawned successfully.
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.634 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.639 2 DEBUG nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.639 2 DEBUG nova.network.neutron [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.646 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.650 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.667 2 INFO nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.673 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.674 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.674 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.675 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.675 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.675 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.683 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.696 2 DEBUG nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.748 2 INFO nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Took 10.73 seconds to spawn the instance on the hypervisor.
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.748 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.819 2 DEBUG nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.821 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.821 2 INFO nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Creating image(s)
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.842 2 DEBUG nova.storage.rbd_utils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] rbd image 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.864 2 DEBUG nova.storage.rbd_utils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] rbd image 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.885 2 DEBUG nova.storage.rbd_utils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] rbd image 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.888 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.923 2 DEBUG nova.network.neutron [req-41b2e1be-aee9-416d-9739-18d3cece0d76 req-a977f58d-87d8-4673-9d40-83b0981015bf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Updated VIF entry in instance network info cache for port 59075c43-66a8-4a9c-a693-31f83575b355. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.924 2 DEBUG nova.network.neutron [req-41b2e1be-aee9-416d-9739-18d3cece0d76 req-a977f58d-87d8-4673-9d40-83b0981015bf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Updating instance_info_cache with network_info: [{"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.927 2 DEBUG nova.policy [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '21d93d87742344a1b7662df0d97a69b2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd094b0f9acb49ca8b1f295403a44ec3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.936 2 INFO nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Took 12.01 seconds to build instance.
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.956 2 DEBUG oslo_concurrency.lockutils [req-41b2e1be-aee9-416d-9739-18d3cece0d76 req-a977f58d-87d8-4673-9d40-83b0981015bf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-bf38daab-2994-41c4-a44f-91e466acf68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.957 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.958 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.958 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.959 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.959 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.981 2 DEBUG nova.storage.rbd_utils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] rbd image 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:52 compute-0 nova_compute[259627]: 2025-10-14 09:12:52.984 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:53 compute-0 nova_compute[259627]: 2025-10-14 09:12:53.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:53 compute-0 nova_compute[259627]: 2025-10-14 09:12:53.055 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433173.0553296, bf38daab-2994-41c4-a44f-91e466acf68e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:53 compute-0 nova_compute[259627]: 2025-10-14 09:12:53.056 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] VM Started (Lifecycle Event)
Oct 14 09:12:53 compute-0 nova_compute[259627]: 2025-10-14 09:12:53.078 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:53 compute-0 nova_compute[259627]: 2025-10-14 09:12:53.083 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433173.055622, bf38daab-2994-41c4-a44f-91e466acf68e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:53 compute-0 nova_compute[259627]: 2025-10-14 09:12:53.084 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] VM Paused (Lifecycle Event)
Oct 14 09:12:53 compute-0 nova_compute[259627]: 2025-10-14 09:12:53.236 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:53 compute-0 nova_compute[259627]: 2025-10-14 09:12:53.241 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:12:53 compute-0 ceph-mon[74249]: pgmap v1752: 305 pgs: 305 active+clean; 292 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 181 op/s
Oct 14 09:12:53 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/761441399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:53 compute-0 nova_compute[259627]: 2025-10-14 09:12:53.255 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:53 compute-0 nova_compute[259627]: 2025-10-14 09:12:53.287 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:12:53 compute-0 nova_compute[259627]: 2025-10-14 09:12:53.330 2 DEBUG nova.storage.rbd_utils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] resizing rbd image 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:12:53 compute-0 nova_compute[259627]: 2025-10-14 09:12:53.452 2 DEBUG nova.objects.instance [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lazy-loading 'migration_context' on Instance uuid 69c5d250-71a4-47d5-a3ce-5b606ee9c692 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:53 compute-0 nova_compute[259627]: 2025-10-14 09:12:53.470 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:12:53 compute-0 nova_compute[259627]: 2025-10-14 09:12:53.471 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Ensure instance console log exists: /var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:12:53 compute-0 nova_compute[259627]: 2025-10-14 09:12:53.472 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:53 compute-0 nova_compute[259627]: 2025-10-14 09:12:53.472 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:53 compute-0 nova_compute[259627]: 2025-10-14 09:12:53.472 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:53 compute-0 nova_compute[259627]: 2025-10-14 09:12:53.783 2 DEBUG nova.network.neutron [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Successfully created port: 63189572-78bc-4d3a-8135-659f8c39ce7d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:12:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 292 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 5.7 MiB/s wr, 117 op/s
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.404 2 INFO nova.compute.manager [None req-bb03464d-8d0a-4c8d-9076-b051429c8030 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Get console output
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.408 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.644 2 DEBUG oslo_concurrency.lockutils [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.644 2 DEBUG oslo_concurrency.lockutils [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.644 2 INFO nova.compute.manager [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Rebooting instance
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.660 2 DEBUG oslo_concurrency.lockutils [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.660 2 DEBUG oslo_concurrency.lockutils [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquired lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.660 2 DEBUG nova.network.neutron [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.696 2 DEBUG nova.compute.manager [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Received event network-vif-plugged-59075c43-66a8-4a9c-a693-31f83575b355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.696 2 DEBUG oslo_concurrency.lockutils [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.697 2 DEBUG oslo_concurrency.lockutils [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.697 2 DEBUG oslo_concurrency.lockutils [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.697 2 DEBUG nova.compute.manager [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Processing event network-vif-plugged-59075c43-66a8-4a9c-a693-31f83575b355 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.697 2 DEBUG nova.compute.manager [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Received event network-vif-plugged-59075c43-66a8-4a9c-a693-31f83575b355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.697 2 DEBUG oslo_concurrency.lockutils [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.697 2 DEBUG oslo_concurrency.lockutils [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.698 2 DEBUG oslo_concurrency.lockutils [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.698 2 DEBUG nova.compute.manager [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] No waiting events found dispatching network-vif-plugged-59075c43-66a8-4a9c-a693-31f83575b355 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.698 2 WARNING nova.compute.manager [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Received unexpected event network-vif-plugged-59075c43-66a8-4a9c-a693-31f83575b355 for instance with vm_state building and task_state spawning.
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.698 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.702 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.704 2 INFO nova.virt.libvirt.driver [-] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Instance spawned successfully.
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.704 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.706 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433174.7061656, bf38daab-2994-41c4-a44f-91e466acf68e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.706 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] VM Resumed (Lifecycle Event)
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.723 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.727 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.727 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.727 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.728 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.728 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.728 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.731 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.760 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.790 2 INFO nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Took 11.99 seconds to spawn the instance on the hypervisor.
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.791 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.840 2 INFO nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Took 13.87 seconds to build instance.
Oct 14 09:12:54 compute-0 nova_compute[259627]: 2025-10-14 09:12:54.856 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:55 compute-0 ceph-mon[74249]: pgmap v1753: 305 pgs: 305 active+clean; 292 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 5.7 MiB/s wr, 117 op/s
Oct 14 09:12:55 compute-0 nova_compute[259627]: 2025-10-14 09:12:55.302 2 DEBUG nova.network.neutron [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Successfully updated port: 63189572-78bc-4d3a-8135-659f8c39ce7d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:12:55 compute-0 nova_compute[259627]: 2025-10-14 09:12:55.320 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquiring lock "refresh_cache-69c5d250-71a4-47d5-a3ce-5b606ee9c692" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:55 compute-0 nova_compute[259627]: 2025-10-14 09:12:55.320 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquired lock "refresh_cache-69c5d250-71a4-47d5-a3ce-5b606ee9c692" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:55 compute-0 nova_compute[259627]: 2025-10-14 09:12:55.321 2 DEBUG nova.network.neutron [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:12:55 compute-0 nova_compute[259627]: 2025-10-14 09:12:55.492 2 DEBUG nova.network.neutron [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:12:55 compute-0 nova_compute[259627]: 2025-10-14 09:12:55.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 339 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 7.5 MiB/s wr, 232 op/s
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.369 2 DEBUG nova.network.neutron [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Updating instance_info_cache with network_info: [{"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.390 2 DEBUG oslo_concurrency.lockutils [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Releasing lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.393 2 DEBUG nova.compute.manager [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.551 2 DEBUG nova.network.neutron [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Updating instance_info_cache with network_info: [{"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.567 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Releasing lock "refresh_cache-69c5d250-71a4-47d5-a3ce-5b606ee9c692" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.568 2 DEBUG nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Instance network_info: |[{"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.574 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Start _get_guest_xml network_info=[{"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.582 2 WARNING nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.588 2 DEBUG nova.virt.libvirt.host [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.589 2 DEBUG nova.virt.libvirt.host [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.594 2 DEBUG nova.virt.libvirt.host [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.595 2 DEBUG nova.virt.libvirt.host [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.596 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.597 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.598 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.598 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.599 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.599 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.600 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.600 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.601 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.601 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.602 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.602 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.608 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.889 2 DEBUG nova.compute.manager [req-6e00c1ea-6018-4f74-a001-3f8a27189fc8 req-5fdd63f1-2795-464b-9afd-59ce716946c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Received event network-changed-63189572-78bc-4d3a-8135-659f8c39ce7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.891 2 DEBUG nova.compute.manager [req-6e00c1ea-6018-4f74-a001-3f8a27189fc8 req-5fdd63f1-2795-464b-9afd-59ce716946c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Refreshing instance network info cache due to event network-changed-63189572-78bc-4d3a-8135-659f8c39ce7d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.891 2 DEBUG oslo_concurrency.lockutils [req-6e00c1ea-6018-4f74-a001-3f8a27189fc8 req-5fdd63f1-2795-464b-9afd-59ce716946c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-69c5d250-71a4-47d5-a3ce-5b606ee9c692" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.892 2 DEBUG oslo_concurrency.lockutils [req-6e00c1ea-6018-4f74-a001-3f8a27189fc8 req-5fdd63f1-2795-464b-9afd-59ce716946c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-69c5d250-71a4-47d5-a3ce-5b606ee9c692" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:12:56 compute-0 nova_compute[259627]: 2025-10-14 09:12:56.893 2 DEBUG nova.network.neutron [req-6e00c1ea-6018-4f74-a001-3f8a27189fc8 req-5fdd63f1-2795-464b-9afd-59ce716946c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Refreshing network info cache for port 63189572-78bc-4d3a-8135-659f8c39ce7d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.034 2 DEBUG oslo_concurrency.lockutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "4ad76124-48eb-467e-9a6f-951235efdb35" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.034 2 DEBUG oslo_concurrency.lockutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.035 2 DEBUG oslo_concurrency.lockutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.035 2 DEBUG oslo_concurrency.lockutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.035 2 DEBUG oslo_concurrency.lockutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.037 2 INFO nova.compute.manager [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Terminating instance
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.038 2 DEBUG nova.compute.manager [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:12:57 compute-0 kernel: tap66a0c3b8-73 (unregistering): left promiscuous mode
Oct 14 09:12:57 compute-0 NetworkManager[44885]: <info>  [1760433177.0784] device (tap66a0c3b8-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:12:57 compute-0 ovn_controller[152662]: 2025-10-14T09:12:57Z|00948|binding|INFO|Releasing lport 66a0c3b8-73ab-490e-a3d4-06827c574cb6 from this chassis (sb_readonly=0)
Oct 14 09:12:57 compute-0 ovn_controller[152662]: 2025-10-14T09:12:57Z|00949|binding|INFO|Setting lport 66a0c3b8-73ab-490e-a3d4-06827c574cb6 down in Southbound
Oct 14 09:12:57 compute-0 ovn_controller[152662]: 2025-10-14T09:12:57Z|00950|binding|INFO|Removing iface tap66a0c3b8-73 ovn-installed in OVS
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.093 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:ff:73 10.100.0.11'], port_security=['fa:16:3e:c2:ff:73 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4ad76124-48eb-467e-9a6f-951235efdb35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0506bb08-7957-44ca-9a0f-014c548c7b40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a057db932754d6eae91f0d2f359f1ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b9d7c47-8a9b-4622-8756-36a8a6e40174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc552e0c-ff32-4755-9304-f9703ae8cc71, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=66a0c3b8-73ab-490e-a3d4-06827c574cb6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.095 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 66a0c3b8-73ab-490e-a3d4-06827c574cb6 in datapath 0506bb08-7957-44ca-9a0f-014c548c7b40 unbound from our chassis
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.097 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0506bb08-7957-44ca-9a0f-014c548c7b40
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:57 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Oct 14 09:12:57 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005b.scope: Consumed 5.341s CPU time.
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.116 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c05eb111-6454-4f02-9cbd-e48fa5bed71b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:57 compute-0 systemd-machined[214636]: Machine qemu-112-instance-0000005b terminated.
Oct 14 09:12:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/641616359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.149 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.160 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1a7d63-bb40-47e2-88b4-bee97c96a389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.164 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[dd183ba5-9c72-49df-a707-502aa2e8c748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.187 2 DEBUG nova.storage.rbd_utils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] rbd image 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.192 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.201 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1c097b-33f0-4812-a6f3-262f7732b8a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.220 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0659b154-1a15-48c3-ba9a-642e6eba8916]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0506bb08-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c3:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694994, 'reachable_time': 19321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351308, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.266 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6648f5b-dafc-4537-a1ac-2dda59dc938f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0506bb08-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695007, 'tstamp': 695007}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351310, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0506bb08-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695010, 'tstamp': 695010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351310, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:57 compute-0 ceph-mon[74249]: pgmap v1754: 305 pgs: 305 active+clean; 339 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 7.5 MiB/s wr, 232 op/s
Oct 14 09:12:57 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/641616359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.269 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0506bb08-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.279 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0506bb08-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.280 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.280 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0506bb08-70, col_values=(('external_ids', {'iface-id': '6cfe11a6-55c2-4d2e-880b-8832ad317040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.281 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.287 2 INFO nova.virt.libvirt.driver [-] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Instance destroyed successfully.
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.289 2 DEBUG nova.objects.instance [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'resources' on Instance uuid 4ad76124-48eb-467e-9a6f-951235efdb35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.306 2 DEBUG nova.virt.libvirt.vif [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:12:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1291705969',display_name='tempest-MultipleCreateTestJSON-server-1291705969-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1291705969-1',id=91,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:12:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-tm8i0hhm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_m
in_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCreateTestJSON-2115206001-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:12:52Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=4ad76124-48eb-467e-9a6f-951235efdb35,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.306 2 DEBUG nova.network.os_vif_util [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.308 2 DEBUG nova.network.os_vif_util [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ff:73,bridge_name='br-int',has_traffic_filtering=True,id=66a0c3b8-73ab-490e-a3d4-06827c574cb6,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a0c3b8-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.309 2 DEBUG os_vif [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ff:73,bridge_name='br-int',has_traffic_filtering=True,id=66a0c3b8-73ab-490e-a3d4-06827c574cb6,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a0c3b8-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.312 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66a0c3b8-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.320 2 INFO os_vif [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ff:73,bridge_name='br-int',has_traffic_filtering=True,id=66a0c3b8-73ab-490e-a3d4-06827c574cb6,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a0c3b8-73')
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.369 2 DEBUG oslo_concurrency.lockutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "bf38daab-2994-41c4-a44f-91e466acf68e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.370 2 DEBUG oslo_concurrency.lockutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.371 2 DEBUG oslo_concurrency.lockutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.371 2 DEBUG oslo_concurrency.lockutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.372 2 DEBUG oslo_concurrency.lockutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.375 2 INFO nova.compute.manager [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Terminating instance
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.377 2 DEBUG nova.compute.manager [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:12:57 compute-0 kernel: tap59075c43-66 (unregistering): left promiscuous mode
Oct 14 09:12:57 compute-0 NetworkManager[44885]: <info>  [1760433177.4224] device (tap59075c43-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:12:57 compute-0 ovn_controller[152662]: 2025-10-14T09:12:57Z|00951|binding|INFO|Releasing lport 59075c43-66a8-4a9c-a693-31f83575b355 from this chassis (sb_readonly=0)
Oct 14 09:12:57 compute-0 ovn_controller[152662]: 2025-10-14T09:12:57Z|00952|binding|INFO|Setting lport 59075c43-66a8-4a9c-a693-31f83575b355 down in Southbound
Oct 14 09:12:57 compute-0 ovn_controller[152662]: 2025-10-14T09:12:57Z|00953|binding|INFO|Removing iface tap59075c43-66 ovn-installed in OVS
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.439 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:b1:fa 10.100.0.3'], port_security=['fa:16:3e:63:b1:fa 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'bf38daab-2994-41c4-a44f-91e466acf68e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0506bb08-7957-44ca-9a0f-014c548c7b40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a057db932754d6eae91f0d2f359f1ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b9d7c47-8a9b-4622-8756-36a8a6e40174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc552e0c-ff32-4755-9304-f9703ae8cc71, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=59075c43-66a8-4a9c-a693-31f83575b355) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.440 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 59075c43-66a8-4a9c-a693-31f83575b355 in datapath 0506bb08-7957-44ca-9a0f-014c548c7b40 unbound from our chassis
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.442 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0506bb08-7957-44ca-9a0f-014c548c7b40, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.442 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ba30d5c2-bbb4-4eab-bc9d-e38acbf7c527]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.443 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 namespace which is not needed anymore
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:57 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Oct 14 09:12:57 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005c.scope: Consumed 3.686s CPU time.
Oct 14 09:12:57 compute-0 systemd-machined[214636]: Machine qemu-113-instance-0000005c terminated.
Oct 14 09:12:57 compute-0 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[350965]: [NOTICE]   (350984) : haproxy version is 2.8.14-c23fe91
Oct 14 09:12:57 compute-0 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[350965]: [NOTICE]   (350984) : path to executable is /usr/sbin/haproxy
Oct 14 09:12:57 compute-0 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[350965]: [ALERT]    (350984) : Current worker (350989) exited with code 143 (Terminated)
Oct 14 09:12:57 compute-0 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[350965]: [WARNING]  (350984) : All workers exited. Exiting... (0)
Oct 14 09:12:57 compute-0 systemd[1]: libpod-e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e.scope: Deactivated successfully.
Oct 14 09:12:57 compute-0 conmon[350965]: conmon e64283190618996fe6a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e.scope/container/memory.events
Oct 14 09:12:57 compute-0 podman[351378]: 2025-10-14 09:12:57.607442365 +0000 UTC m=+0.053295994 container died e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.627 2 INFO nova.virt.libvirt.driver [-] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Instance destroyed successfully.
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.628 2 DEBUG nova.objects.instance [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'resources' on Instance uuid bf38daab-2994-41c4-a44f-91e466acf68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e-userdata-shm.mount: Deactivated successfully.
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.645 2 DEBUG nova.virt.libvirt.vif [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:12:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1291705969',display_name='tempest-MultipleCreateTestJSON-server-1291705969-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1291705969-2',id=92,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-14T09:12:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-tm8i0hhm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_m
in_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCreateTestJSON-2115206001-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:12:54Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=bf38daab-2994-41c4-a44f-91e466acf68e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.645 2 DEBUG nova.network.os_vif_util [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.646 2 DEBUG nova.network.os_vif_util [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:b1:fa,bridge_name='br-int',has_traffic_filtering=True,id=59075c43-66a8-4a9c-a693-31f83575b355,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59075c43-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8eaa5e67a067367ecbaeb8ed40f166de47bd4cb062e63d1c476cf7e27a74873-merged.mount: Deactivated successfully.
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.647 2 DEBUG os_vif [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:b1:fa,bridge_name='br-int',has_traffic_filtering=True,id=59075c43-66a8-4a9c-a693-31f83575b355,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59075c43-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.648 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59075c43-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.653 2 INFO os_vif [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:b1:fa,bridge_name='br-int',has_traffic_filtering=True,id=59075c43-66a8-4a9c-a693-31f83575b355,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59075c43-66')
Oct 14 09:12:57 compute-0 podman[351378]: 2025-10-14 09:12:57.658099722 +0000 UTC m=+0.103953361 container cleanup e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:12:57 compute-0 systemd[1]: libpod-conmon-e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e.scope: Deactivated successfully.
Oct 14 09:12:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:12:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4163474572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.718 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.720 2 DEBUG nova.virt.libvirt.vif [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1449697808',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1449697808',id=93,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd094b0f9acb49ca8b1f295403a44ec3',ramdisk_id='',reservation_id='r-0pjevjco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-648965588',owner_user_name='tempest-ServerTagsTestJSON-648965588-project-member'},tags=TagList,task_state='spawning',t
erminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:52Z,user_data=None,user_id='21d93d87742344a1b7662df0d97a69b2',uuid=69c5d250-71a4-47d5-a3ce-5b606ee9c692,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.721 2 DEBUG nova.network.os_vif_util [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Converting VIF {"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.721 2 DEBUG nova.network.os_vif_util [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ab:40,bridge_name='br-int',has_traffic_filtering=True,id=63189572-78bc-4d3a-8135-659f8c39ce7d,network=Network(84e9c235-fb90-472a-8cac-f7ae999c18dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63189572-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.723 2 DEBUG nova.objects.instance [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 69c5d250-71a4-47d5-a3ce-5b606ee9c692 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:12:57 compute-0 podman[351429]: 2025-10-14 09:12:57.725199425 +0000 UTC m=+0.044351534 container remove e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.729 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.774 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:12:57 compute-0 nova_compute[259627]:   <uuid>69c5d250-71a4-47d5-a3ce-5b606ee9c692</uuid>
Oct 14 09:12:57 compute-0 nova_compute[259627]:   <name>instance-0000005d</name>
Oct 14 09:12:57 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:12:57 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:12:57 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerTagsTestJSON-server-1449697808</nova:name>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:12:56</nova:creationTime>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:12:57 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:12:57 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:12:57 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:12:57 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:12:57 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:12:57 compute-0 nova_compute[259627]:         <nova:user uuid="21d93d87742344a1b7662df0d97a69b2">tempest-ServerTagsTestJSON-648965588-project-member</nova:user>
Oct 14 09:12:57 compute-0 nova_compute[259627]:         <nova:project uuid="fd094b0f9acb49ca8b1f295403a44ec3">tempest-ServerTagsTestJSON-648965588</nova:project>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:12:57 compute-0 nova_compute[259627]:         <nova:port uuid="63189572-78bc-4d3a-8135-659f8c39ce7d">
Oct 14 09:12:57 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:12:57 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:12:57 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <system>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <entry name="serial">69c5d250-71a4-47d5-a3ce-5b606ee9c692</entry>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <entry name="uuid">69c5d250-71a4-47d5-a3ce-5b606ee9c692</entry>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     </system>
Oct 14 09:12:57 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:12:57 compute-0 nova_compute[259627]:   <os>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:   </os>
Oct 14 09:12:57 compute-0 nova_compute[259627]:   <features>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:   </features>
Oct 14 09:12:57 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:12:57 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:12:57 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk">
Oct 14 09:12:57 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:57 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk.config">
Oct 14 09:12:57 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       </source>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:12:57 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:4a:ab:40"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <target dev="tap63189572-78"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692/console.log" append="off"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <video>
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     </video>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:12:57 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:12:57 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:12:57 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:12:57 compute-0 nova_compute[259627]: </domain>
Oct 14 09:12:57 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.774 2 DEBUG nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Preparing to wait for external event network-vif-plugged-63189572-78bc-4d3a-8135-659f8c39ce7d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.774 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquiring lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.775 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.775 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.775 2 DEBUG nova.virt.libvirt.vif [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1449697808',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1449697808',id=93,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd094b0f9acb49ca8b1f295403a44ec3',ramdisk_id='',reservation_id='r-0pjevjco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-648965588',owner_user_name='tempest-ServerTagsTestJSON-648965588-project-member'},tags=TagList,task_state='s
pawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:52Z,user_data=None,user_id='21d93d87742344a1b7662df0d97a69b2',uuid=69c5d250-71a4-47d5-a3ce-5b606ee9c692,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.776 2 DEBUG nova.network.os_vif_util [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Converting VIF {"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.776 2 DEBUG nova.network.os_vif_util [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ab:40,bridge_name='br-int',has_traffic_filtering=True,id=63189572-78bc-4d3a-8135-659f8c39ce7d,network=Network(84e9c235-fb90-472a-8cac-f7ae999c18dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63189572-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.776 2 DEBUG os_vif [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ab:40,bridge_name='br-int',has_traffic_filtering=True,id=63189572-78bc-4d3a-8135-659f8c39ce7d,network=Network(84e9c235-fb90-472a-8cac-f7ae999c18dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63189572-78') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.776 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dd52fe2b-23ed-45e7-8e48-03ea757d562f]: (4, ('Tue Oct 14 09:12:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 (e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e)\ne64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e\nTue Oct 14 09:12:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 (e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e)\ne64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.777 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.777 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.778 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c39bb614-a239-477a-86a6-50f8e79ec965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.779 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0506bb08-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.780 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63189572-78, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.780 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63189572-78, col_values=(('external_ids', {'iface-id': '63189572-78bc-4d3a-8135-659f8c39ce7d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:ab:40', 'vm-uuid': '69c5d250-71a4-47d5-a3ce-5b606ee9c692'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:57 compute-0 kernel: tap0506bb08-70: left promiscuous mode
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:12:57 compute-0 NetworkManager[44885]: <info>  [1760433177.7839] manager: (tap63189572-78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.789 2 INFO nova.virt.libvirt.driver [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Deleting instance files /var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35_del
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.790 2 INFO nova.virt.libvirt.driver [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Deletion of /var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35_del complete
Oct 14 09:12:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.808 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e30464bc-983e-4d42-85e9-6af240b5340a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.810 2 INFO os_vif [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ab:40,bridge_name='br-int',has_traffic_filtering=True,id=63189572-78bc-4d3a-8135-659f8c39ce7d,network=Network(84e9c235-fb90-472a-8cac-f7ae999c18dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63189572-78')
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.836 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6f406c42-a36c-4b68-ab79-b87adeebd58c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.838 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0aefb180-f7df-4954-aa87-1d0a159f4aeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.850 2 INFO nova.compute.manager [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.850 2 DEBUG oslo.service.loopingcall [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.851 2 DEBUG nova.compute.manager [-] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.851 2 DEBUG nova.network.neutron [-] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.854 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c009871a-d52b-4d8a-91c3-66618eb61a55]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694983, 'reachable_time': 29378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351453, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d0506bb08\x2d7957\x2d44ca\x2d9a0f\x2d014c548c7b40.mount: Deactivated successfully.
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.857 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.857 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[9a4a7902-87b7-43e0-8f51-b96e10a90f52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.862 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.865 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.866 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.866 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] No VIF found with MAC fa:16:3e:4a:ab:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.866 2 INFO nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Using config drive
Oct 14 09:12:57 compute-0 nova_compute[259627]: 2025-10-14 09:12:57.892 2 DEBUG nova.storage.rbd_utils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] rbd image 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 339 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.064 2 INFO nova.virt.libvirt.driver [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Deleting instance files /var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e_del
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.065 2 INFO nova.virt.libvirt.driver [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Deletion of /var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e_del complete
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.114 2 INFO nova.compute.manager [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.115 2 DEBUG oslo.service.loopingcall [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.115 2 DEBUG nova.compute.manager [-] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.115 2 DEBUG nova.network.neutron [-] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:12:58 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4163474572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.434 2 INFO nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Creating config drive at /var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692/disk.config
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.442 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4w8jo8gr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.605 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4w8jo8gr" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.641 2 DEBUG nova.storage.rbd_utils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] rbd image 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.647 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692/disk.config 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.701 2 DEBUG nova.network.neutron [req-6e00c1ea-6018-4f74-a001-3f8a27189fc8 req-5fdd63f1-2795-464b-9afd-59ce716946c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Updated VIF entry in instance network info cache for port 63189572-78bc-4d3a-8135-659f8c39ce7d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.702 2 DEBUG nova.network.neutron [req-6e00c1ea-6018-4f74-a001-3f8a27189fc8 req-5fdd63f1-2795-464b-9afd-59ce716946c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Updating instance_info_cache with network_info: [{"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.724 2 DEBUG oslo_concurrency.lockutils [req-6e00c1ea-6018-4f74-a001-3f8a27189fc8 req-5fdd63f1-2795-464b-9afd-59ce716946c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-69c5d250-71a4-47d5-a3ce-5b606ee9c692" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:12:58 compute-0 kernel: tapce4eb1a6-22 (unregistering): left promiscuous mode
Oct 14 09:12:58 compute-0 NetworkManager[44885]: <info>  [1760433178.8521] device (tapce4eb1a6-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:12:58 compute-0 virtqemud[259351]: End of file while reading data: Input/output error
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.871 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692/disk.config 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.873 2 INFO nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Deleting local config drive /var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692/disk.config because it was imported into RBD.
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:58 compute-0 ovn_controller[152662]: 2025-10-14T09:12:58Z|00954|binding|INFO|Releasing lport ce4eb1a6-2221-4519-98fa-44a39da77b71 from this chassis (sb_readonly=0)
Oct 14 09:12:58 compute-0 ovn_controller[152662]: 2025-10-14T09:12:58Z|00955|binding|INFO|Setting lport ce4eb1a6-2221-4519-98fa-44a39da77b71 down in Southbound
Oct 14 09:12:58 compute-0 ovn_controller[152662]: 2025-10-14T09:12:58Z|00956|binding|INFO|Removing iface tapce4eb1a6-22 ovn-installed in OVS
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:58.922 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:9c:bd 10.100.0.7'], port_security=['fa:16:3e:7f:9c:bd 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9e354e27-d674-43c3-890b-caf8731cb827', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c85ef4e1-bf02-447d-8de0-60f2d978738d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7211eb9-3be4-4007-bf83-d7812e6ec9fe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ce4eb1a6-2221-4519-98fa-44a39da77b71) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:58.923 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ce4eb1a6-2221-4519-98fa-44a39da77b71 in datapath 7cb8e394-ebca-4b27-8174-62c6b6f3a7da unbound from our chassis
Oct 14 09:12:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:58.925 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7cb8e394-ebca-4b27-8174-62c6b6f3a7da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:12:58 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Oct 14 09:12:58 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005a.scope: Consumed 13.308s CPU time.
Oct 14 09:12:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:58.928 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[29644bec-32c0-4d98-965a-c9f40e2b473b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:58.928 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da namespace which is not needed anymore
Oct 14 09:12:58 compute-0 systemd-machined[214636]: Machine qemu-111-instance-0000005a terminated.
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:58 compute-0 kernel: tap63189572-78: entered promiscuous mode
Oct 14 09:12:58 compute-0 NetworkManager[44885]: <info>  [1760433178.9450] manager: (tap63189572-78): new Tun device (/org/freedesktop/NetworkManager/Devices/390)
Oct 14 09:12:58 compute-0 systemd-udevd[351281]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:12:58 compute-0 ovn_controller[152662]: 2025-10-14T09:12:58Z|00957|binding|INFO|Claiming lport 63189572-78bc-4d3a-8135-659f8c39ce7d for this chassis.
Oct 14 09:12:58 compute-0 ovn_controller[152662]: 2025-10-14T09:12:58Z|00958|binding|INFO|63189572-78bc-4d3a-8135-659f8c39ce7d: Claiming fa:16:3e:4a:ab:40 10.100.0.11
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:58.958 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:ab:40 10.100.0.11'], port_security=['fa:16:3e:4a:ab:40 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '69c5d250-71a4-47d5-a3ce-5b606ee9c692', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84e9c235-fb90-472a-8cac-f7ae999c18dd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd094b0f9acb49ca8b1f295403a44ec3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '10487db5-e210-4216-ae01-596f4a7c7ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9fab9082-797f-44f0-9d82-af22d5b6d133, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=63189572-78bc-4d3a-8135-659f8c39ce7d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:58 compute-0 NetworkManager[44885]: <info>  [1760433178.9609] device (tap63189572-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:12:58 compute-0 NetworkManager[44885]: <info>  [1760433178.9618] device (tap63189572-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:12:58 compute-0 ovn_controller[152662]: 2025-10-14T09:12:58Z|00959|binding|INFO|Setting lport 63189572-78bc-4d3a-8135-659f8c39ce7d ovn-installed in OVS
Oct 14 09:12:58 compute-0 ovn_controller[152662]: 2025-10-14T09:12:58Z|00960|binding|INFO|Setting lport 63189572-78bc-4d3a-8135-659f8c39ce7d up in Southbound
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:58 compute-0 nova_compute[259627]: 2025-10-14 09:12:58.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:58 compute-0 systemd-machined[214636]: New machine qemu-114-instance-0000005d.
Oct 14 09:12:58 compute-0 podman[351520]: 2025-10-14 09:12:58.990533727 +0000 UTC m=+0.096422446 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:12:59 compute-0 systemd[1]: Started Virtual Machine qemu-114-instance-0000005d.
Oct 14 09:12:59 compute-0 podman[351516]: 2025-10-14 09:12:59.007911285 +0000 UTC m=+0.123705048 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Oct 14 09:12:59 compute-0 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[349186]: [NOTICE]   (349192) : haproxy version is 2.8.14-c23fe91
Oct 14 09:12:59 compute-0 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[349186]: [NOTICE]   (349192) : path to executable is /usr/sbin/haproxy
Oct 14 09:12:59 compute-0 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[349186]: [WARNING]  (349192) : Exiting Master process...
Oct 14 09:12:59 compute-0 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[349186]: [ALERT]    (349192) : Current worker (349199) exited with code 143 (Terminated)
Oct 14 09:12:59 compute-0 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[349186]: [WARNING]  (349192) : All workers exited. Exiting... (0)
Oct 14 09:12:59 compute-0 systemd[1]: libpod-d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958.scope: Deactivated successfully.
Oct 14 09:12:59 compute-0 NetworkManager[44885]: <info>  [1760433179.0898] manager: (tapce4eb1a6-22): new Tun device (/org/freedesktop/NetworkManager/Devices/391)
Oct 14 09:12:59 compute-0 podman[351594]: 2025-10-14 09:12:59.096173979 +0000 UTC m=+0.053817407 container died d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.096 2 DEBUG nova.network.neutron [-] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.117 2 DEBUG nova.network.neutron [-] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.135 2 INFO nova.compute.manager [-] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Took 1.28 seconds to deallocate network for instance.
Oct 14 09:12:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958-userdata-shm.mount: Deactivated successfully.
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.142 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Received event network-vif-unplugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-29994210cca99f3be79ccbe5f31a62bef181ac26761ce349073eb7da91ec9974-merged.mount: Deactivated successfully.
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.144 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.144 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.144 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.145 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] No waiting events found dispatching network-vif-unplugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.145 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Received event network-vif-unplugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.145 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Received event network-vif-plugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.145 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.146 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.146 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.146 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] No waiting events found dispatching network-vif-plugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.146 2 WARNING nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Received unexpected event network-vif-plugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 for instance with vm_state active and task_state deleting.
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.147 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Received event network-vif-unplugged-59075c43-66a8-4a9c-a693-31f83575b355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.147 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.147 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.147 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.148 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] No waiting events found dispatching network-vif-unplugged-59075c43-66a8-4a9c-a693-31f83575b355 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.148 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Received event network-vif-unplugged-59075c43-66a8-4a9c-a693-31f83575b355 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.148 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Received event network-vif-plugged-59075c43-66a8-4a9c-a693-31f83575b355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.149 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.149 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.149 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.149 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] No waiting events found dispatching network-vif-plugged-59075c43-66a8-4a9c-a693-31f83575b355 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.150 2 WARNING nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Received unexpected event network-vif-plugged-59075c43-66a8-4a9c-a693-31f83575b355 for instance with vm_state active and task_state deleting.
Oct 14 09:12:59 compute-0 podman[351594]: 2025-10-14 09:12:59.151143072 +0000 UTC m=+0.108786500 container cleanup d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true)
Oct 14 09:12:59 compute-0 systemd[1]: libpod-conmon-d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958.scope: Deactivated successfully.
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.159 2 INFO nova.compute.manager [-] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Took 1.04 seconds to deallocate network for instance.
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.196 2 DEBUG oslo_concurrency.lockutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.197 2 DEBUG oslo_concurrency.lockutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.214 2 DEBUG oslo_concurrency.lockutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.218 2 DEBUG nova.compute.manager [req-f7a9bd74-92da-456b-b9bd-e9d96917c3f4 req-f4c28cbe-9c94-4ee6-bd9c-f087ccf610a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-vif-unplugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.218 2 DEBUG oslo_concurrency.lockutils [req-f7a9bd74-92da-456b-b9bd-e9d96917c3f4 req-f4c28cbe-9c94-4ee6-bd9c-f087ccf610a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.219 2 DEBUG oslo_concurrency.lockutils [req-f7a9bd74-92da-456b-b9bd-e9d96917c3f4 req-f4c28cbe-9c94-4ee6-bd9c-f087ccf610a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.219 2 DEBUG oslo_concurrency.lockutils [req-f7a9bd74-92da-456b-b9bd-e9d96917c3f4 req-f4c28cbe-9c94-4ee6-bd9c-f087ccf610a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.219 2 DEBUG nova.compute.manager [req-f7a9bd74-92da-456b-b9bd-e9d96917c3f4 req-f4c28cbe-9c94-4ee6-bd9c-f087ccf610a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] No waiting events found dispatching network-vif-unplugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.220 2 WARNING nova.compute.manager [req-f7a9bd74-92da-456b-b9bd-e9d96917c3f4 req-f4c28cbe-9c94-4ee6-bd9c-f087ccf610a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received unexpected event network-vif-unplugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 for instance with vm_state active and task_state reboot_started.
Oct 14 09:12:59 compute-0 podman[351636]: 2025-10-14 09:12:59.22780672 +0000 UTC m=+0.051091149 container remove d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.233 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e00a4297-d395-406d-a427-a982acec65ee]: (4, ('Tue Oct 14 09:12:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da (d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958)\nd1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958\nTue Oct 14 09:12:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da (d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958)\nd1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.235 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e36e88-01ea-4532-813d-61a818fefe34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.235 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7cb8e394-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:59 compute-0 kernel: tap7cb8e394-e0: left promiscuous mode
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.259 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c88a942f-2b14-4b98-a678-08a003b66e97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 ceph-mon[74249]: pgmap v1755: 305 pgs: 305 active+clean; 339 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.284 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae19393-f472-4b84-8c74-e67773157ad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.285 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2be99da9-58ce-435f-9465-cd44b41464f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.310 2 DEBUG oslo_concurrency.processutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.325 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2658afa1-03df-4b21-9134-b7c6c8e34ec1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693185, 'reachable_time': 16629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351667, 'error': None, 'target': 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.328 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.328 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1ac215-cc8c-4e16-9eaf-9d153863a92f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.330 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 63189572-78bc-4d3a-8135-659f8c39ce7d in datapath 84e9c235-fb90-472a-8cac-f7ae999c18dd unbound from our chassis
Oct 14 09:12:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d7cb8e394\x2debca\x2d4b27\x2d8174\x2d62c6b6f3a7da.mount: Deactivated successfully.
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.333 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84e9c235-fb90-472a-8cac-f7ae999c18dd
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.348 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be7a8d6e-55bf-4422-bf20-6ba5c1fa3722]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.349 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap84e9c235-f1 in ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.351 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap84e9c235-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.351 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e73c46d6-6582-46da-8f15-6afe746aa910]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.353 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[78526ea0-734a-45fb-849a-9d969908f28c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.372 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[8db44d6d-2139-4417-9b5a-6adc3dc2d96d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.392 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ccfbd61-99e5-4554-9b0d-2f556c630991]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.420 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6f4bbc-ae91-49c0-9857-0a387b060547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 NetworkManager[44885]: <info>  [1760433179.4283] manager: (tap84e9c235-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/392)
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.428 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a79a8dc8-5ad8-4bd4-bfb5-c5b39774c977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.474 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[22f2eff7-0e34-4bd1-9ab0-e547f8255e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.478 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c14d1dbb-fb93-46da-96a0-dacfad9ca5a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 NetworkManager[44885]: <info>  [1760433179.5019] device (tap84e9c235-f0): carrier: link connected
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.510 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ae3653ff-6730-40fa-ba8e-f21dc06828fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.538 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5e129c79-a2b3-40d7-afda-1e237d404d12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84e9c235-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:ac:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695826, 'reachable_time': 43866, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351743, 'error': None, 'target': 'ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.555 2 INFO nova.virt.libvirt.driver [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Instance shutdown successfully.
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.566 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[db1eebf6-4343-4181-872f-fe8d155c9246]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:ac64'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695826, 'tstamp': 695826}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351744, 'error': None, 'target': 'ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.594 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e654d127-add3-4f54-b5f2-0160947807cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84e9c235-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:ac:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695826, 'reachable_time': 43866, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351746, 'error': None, 'target': 'ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 NetworkManager[44885]: <info>  [1760433179.6256] manager: (tapce4eb1a6-22): new Tun device (/org/freedesktop/NetworkManager/Devices/393)
Oct 14 09:12:59 compute-0 systemd-udevd[351718]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:12:59 compute-0 kernel: tapce4eb1a6-22: entered promiscuous mode
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:59 compute-0 ovn_controller[152662]: 2025-10-14T09:12:59Z|00961|binding|INFO|Claiming lport ce4eb1a6-2221-4519-98fa-44a39da77b71 for this chassis.
Oct 14 09:12:59 compute-0 ovn_controller[152662]: 2025-10-14T09:12:59Z|00962|binding|INFO|ce4eb1a6-2221-4519-98fa-44a39da77b71: Claiming fa:16:3e:7f:9c:bd 10.100.0.7
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.639 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:9c:bd 10.100.0.7'], port_security=['fa:16:3e:7f:9c:bd 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9e354e27-d674-43c3-890b-caf8731cb827', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c85ef4e1-bf02-447d-8de0-60f2d978738d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7211eb9-3be4-4007-bf83-d7812e6ec9fe, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ce4eb1a6-2221-4519-98fa-44a39da77b71) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:12:59 compute-0 NetworkManager[44885]: <info>  [1760433179.6418] device (tapce4eb1a6-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:12:59 compute-0 NetworkManager[44885]: <info>  [1760433179.6425] device (tapce4eb1a6-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.647 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd4da9b-0901-4495-b57d-7ba7f273e256]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 ovn_controller[152662]: 2025-10-14T09:12:59Z|00963|binding|INFO|Setting lport ce4eb1a6-2221-4519-98fa-44a39da77b71 ovn-installed in OVS
Oct 14 09:12:59 compute-0 ovn_controller[152662]: 2025-10-14T09:12:59Z|00964|binding|INFO|Setting lport ce4eb1a6-2221-4519-98fa-44a39da77b71 up in Southbound
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:59 compute-0 systemd-machined[214636]: New machine qemu-115-instance-0000005a.
Oct 14 09:12:59 compute-0 systemd[1]: Started Virtual Machine qemu-115-instance-0000005a.
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.703 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb93d9c-d1b6-426f-8353-e2aab03ffcf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.704 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84e9c235-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.705 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.705 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84e9c235-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:59 compute-0 NetworkManager[44885]: <info>  [1760433179.7088] manager: (tap84e9c235-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Oct 14 09:12:59 compute-0 kernel: tap84e9c235-f0: entered promiscuous mode
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.712 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84e9c235-f0, col_values=(('external_ids', {'iface-id': '89fbd290-97fc-455c-8b54-34d7628205e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:59 compute-0 ovn_controller[152662]: 2025-10-14T09:12:59Z|00965|binding|INFO|Releasing lport 89fbd290-97fc-455c-8b54-34d7628205e3 from this chassis (sb_readonly=0)
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.734 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84e9c235-fb90-472a-8cac-f7ae999c18dd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84e9c235-fb90-472a-8cac-f7ae999c18dd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.735 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d14054-f1f1-4de9-81cb-abfa1085089b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.737 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-84e9c235-fb90-472a-8cac-f7ae999c18dd
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/84e9c235-fb90-472a-8cac-f7ae999c18dd.pid.haproxy
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 84e9c235-fb90-472a-8cac-f7ae999c18dd
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:12:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.738 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd', 'env', 'PROCESS_TAG=haproxy-84e9c235-fb90-472a-8cac-f7ae999c18dd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/84e9c235-fb90-472a-8cac-f7ae999c18dd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:12:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:12:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1482927218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.780 2 DEBUG oslo_concurrency.processutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.785 2 DEBUG nova.compute.provider_tree [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.801 2 DEBUG nova.scheduler.client.report [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.821 2 DEBUG oslo_concurrency.lockutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.823 2 DEBUG oslo_concurrency.lockutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.860 2 INFO nova.scheduler.client.report [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Deleted allocations for instance 4ad76124-48eb-467e-9a6f-951235efdb35
Oct 14 09:12:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 339 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.932 2 DEBUG oslo_concurrency.lockutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:12:59 compute-0 nova_compute[259627]: 2025-10-14 09:12:59.957 2 DEBUG oslo_concurrency.processutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:00 compute-0 nova_compute[259627]: 2025-10-14 09:13:00.006 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433180.0061214, 69c5d250-71a4-47d5-a3ce-5b606ee9c692 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:00 compute-0 nova_compute[259627]: 2025-10-14 09:13:00.007 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] VM Started (Lifecycle Event)
Oct 14 09:13:00 compute-0 nova_compute[259627]: 2025-10-14 09:13:00.030 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:00 compute-0 nova_compute[259627]: 2025-10-14 09:13:00.037 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433180.0063717, 69c5d250-71a4-47d5-a3ce-5b606ee9c692 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:00 compute-0 nova_compute[259627]: 2025-10-14 09:13:00.037 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] VM Paused (Lifecycle Event)
Oct 14 09:13:00 compute-0 nova_compute[259627]: 2025-10-14 09:13:00.066 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:00 compute-0 nova_compute[259627]: 2025-10-14 09:13:00.070 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:13:00 compute-0 nova_compute[259627]: 2025-10-14 09:13:00.086 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:13:00 compute-0 podman[351798]: 2025-10-14 09:13:00.117549493 +0000 UTC m=+0.062095021 container create 14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:13:00 compute-0 systemd[1]: Started libpod-conmon-14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd.scope.
Oct 14 09:13:00 compute-0 podman[351798]: 2025-10-14 09:13:00.093035039 +0000 UTC m=+0.037580597 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:13:00 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:13:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09f0fb587d1d33dedf087780ab0b81814fff666e633a1b92da6547d217849fe2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:13:00 compute-0 podman[351798]: 2025-10-14 09:13:00.211863675 +0000 UTC m=+0.156409253 container init 14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:13:00 compute-0 podman[351798]: 2025-10-14 09:13:00.217273969 +0000 UTC m=+0.161819517 container start 14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 09:13:00 compute-0 neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd[351832]: [NOTICE]   (351836) : New worker (351838) forked
Oct 14 09:13:00 compute-0 neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd[351832]: [NOTICE]   (351836) : Loading success.
Oct 14 09:13:00 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1482927218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.295 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ce4eb1a6-2221-4519-98fa-44a39da77b71 in datapath 7cb8e394-ebca-4b27-8174-62c6b6f3a7da unbound from our chassis
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.298 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7cb8e394-ebca-4b27-8174-62c6b6f3a7da
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.309 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[52e016bd-8530-4ada-a9a9-b48eb7e7f85b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.310 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7cb8e394-e1 in ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.312 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7cb8e394-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.312 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2c9608da-f375-4c1a-869a-b0cc25b08632]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.313 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[36ae6675-b4c3-42af-8d8b-bdfc54edb838]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.324 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[57231915-169c-4709-95d7-6cce4d8c744a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.345 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[45764ea8-e268-4a5d-8630-07bff044f0d3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.372 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[050bcd29-6dba-4e6a-8872-c51e78bea2d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:00 compute-0 NetworkManager[44885]: <info>  [1760433180.3792] manager: (tap7cb8e394-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/395)
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.379 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[53923f25-3f4b-4b3f-a5cb-9b5bdb2a0798]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:00 compute-0 systemd-udevd[351854]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.430 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[71302f0e-6291-4c21-ab27-0ab0cdf8a6c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.434 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[999706e1-06f9-4b34-bd58-ffa00ab1cb07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:13:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2263679068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:00 compute-0 NetworkManager[44885]: <info>  [1760433180.4649] device (tap7cb8e394-e0): carrier: link connected
Oct 14 09:13:00 compute-0 nova_compute[259627]: 2025-10-14 09:13:00.474 2 DEBUG oslo_concurrency.processutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.474 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aa910f4b-0445-4cc5-ad67-2f3225d9b197]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:00 compute-0 nova_compute[259627]: 2025-10-14 09:13:00.480 2 DEBUG nova.compute.provider_tree [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.493 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c15350fb-c0fb-46a3-83fb-0f4fea0affc8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7cb8e394-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:0c:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695922, 'reachable_time': 30146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351875, 'error': None, 'target': 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:00 compute-0 nova_compute[259627]: 2025-10-14 09:13:00.495 2 DEBUG nova.scheduler.client.report [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.510 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[05e80341-0e3f-42be-8570-436f3f0b0402]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:c43'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695922, 'tstamp': 695922}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351876, 'error': None, 'target': 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:00 compute-0 nova_compute[259627]: 2025-10-14 09:13:00.513 2 DEBUG oslo_concurrency.lockutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.529 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[07bc643e-50f8-41ff-b725-c908420f9b69]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7cb8e394-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:0c:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695922, 'reachable_time': 30146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351877, 'error': None, 'target': 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:00 compute-0 nova_compute[259627]: 2025-10-14 09:13:00.537 2 INFO nova.scheduler.client.report [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Deleted allocations for instance bf38daab-2994-41c4-a44f-91e466acf68e
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.557 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d11565-71c3-48a9-9fc5-f50dd353903f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:00 compute-0 nova_compute[259627]: 2025-10-14 09:13:00.604 2 DEBUG oslo_concurrency.lockutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.612 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb4ea7c-2b64-41fe-9ea0-03ceb1a2cf37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7cb8e394-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.614 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.615 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7cb8e394-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:00 compute-0 NetworkManager[44885]: <info>  [1760433180.6239] manager: (tap7cb8e394-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Oct 14 09:13:00 compute-0 kernel: tap7cb8e394-e0: entered promiscuous mode
Oct 14 09:13:00 compute-0 nova_compute[259627]: 2025-10-14 09:13:00.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.630 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7cb8e394-e0, col_values=(('external_ids', {'iface-id': 'abbcb164-8856-47e0-a7b9-984d66daedac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:00 compute-0 ovn_controller[152662]: 2025-10-14T09:13:00Z|00966|binding|INFO|Releasing lport abbcb164-8856-47e0-a7b9-984d66daedac from this chassis (sb_readonly=0)
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.635 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7cb8e394-ebca-4b27-8174-62c6b6f3a7da.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7cb8e394-ebca-4b27-8174-62c6b6f3a7da.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.636 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[63c1bacb-9e25-4df1-8e52-75f3a2a0f427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.637 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-7cb8e394-ebca-4b27-8174-62c6b6f3a7da
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/7cb8e394-ebca-4b27-8174-62c6b6f3a7da.pid.haproxy
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 7cb8e394-ebca-4b27-8174-62c6b6f3a7da
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:13:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.640 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'env', 'PROCESS_TAG=haproxy-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7cb8e394-ebca-4b27-8174-62c6b6f3a7da.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:13:00 compute-0 nova_compute[259627]: 2025-10-14 09:13:00.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:01 compute-0 podman[351951]: 2025-10-14 09:13:01.050403166 +0000 UTC m=+0.068918379 container create 0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 09:13:01 compute-0 podman[351951]: 2025-10-14 09:13:01.011341054 +0000 UTC m=+0.029856287 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:13:01 compute-0 systemd[1]: Started libpod-conmon-0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd.scope.
Oct 14 09:13:01 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:13:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb65901d3e784ae2e04a1afc9a7cae7d402065e9d57d7aaf3b11e83a0ac7ab76/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:13:01 compute-0 podman[351951]: 2025-10-14 09:13:01.171104348 +0000 UTC m=+0.189619601 container init 0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:13:01 compute-0 podman[351951]: 2025-10-14 09:13:01.176533352 +0000 UTC m=+0.195048575 container start 0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:13:01 compute-0 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[351966]: [NOTICE]   (351970) : New worker (351972) forked
Oct 14 09:13:01 compute-0 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[351966]: [NOTICE]   (351970) : Loading success.
Oct 14 09:13:01 compute-0 ceph-mon[74249]: pgmap v1756: 305 pgs: 305 active+clean; 339 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Oct 14 09:13:01 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2263679068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.379 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 9e354e27-d674-43c3-890b-caf8731cb827 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.380 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433181.3791194, 9e354e27-d674-43c3-890b-caf8731cb827 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.380 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] VM Resumed (Lifecycle Event)
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.387 2 INFO nova.virt.libvirt.driver [-] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Instance running successfully.
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.388 2 INFO nova.virt.libvirt.driver [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Instance soft rebooted successfully.
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.388 2 DEBUG nova.compute.manager [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.422 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.425 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.453 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] During sync_power_state the instance has a pending task (reboot_started). Skip.
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.453 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433181.3820546, 9e354e27-d674-43c3-890b-caf8731cb827 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.454 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] VM Started (Lifecycle Event)
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.464 2 DEBUG oslo_concurrency.lockutils [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.479 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.482 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.554 2 DEBUG nova.compute.manager [req-37ebbd07-3660-4f25-9df8-76a5b652b073 req-83544a5f-e9b4-4249-8bd9-c8719440f1d5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Received event network-vif-deleted-66a0c3b8-73ab-490e-a3d4-06827c574cb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.554 2 DEBUG nova.compute.manager [req-37ebbd07-3660-4f25-9df8-76a5b652b073 req-83544a5f-e9b4-4249-8bd9-c8719440f1d5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Received event network-vif-deleted-59075c43-66a8-4a9c-a693-31f83575b355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.590 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.591 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.591 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.591 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.592 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] No waiting events found dispatching network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.592 2 WARNING nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received unexpected event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 for instance with vm_state active and task_state None.
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.593 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Received event network-vif-plugged-63189572-78bc-4d3a-8135-659f8c39ce7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.593 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.593 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.594 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.594 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Processing event network-vif-plugged-63189572-78bc-4d3a-8135-659f8c39ce7d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.594 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Received event network-vif-plugged-63189572-78bc-4d3a-8135-659f8c39ce7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.595 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.595 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.595 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.596 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] No waiting events found dispatching network-vif-plugged-63189572-78bc-4d3a-8135-659f8c39ce7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.596 2 WARNING nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Received unexpected event network-vif-plugged-63189572-78bc-4d3a-8135-659f8c39ce7d for instance with vm_state building and task_state spawning.
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.597 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.597 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.598 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.598 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.598 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] No waiting events found dispatching network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.598 2 WARNING nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received unexpected event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 for instance with vm_state active and task_state None.
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.599 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.599 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.599 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.600 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.600 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] No waiting events found dispatching network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.600 2 WARNING nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received unexpected event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 for instance with vm_state active and task_state None.
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.602 2 DEBUG nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.607 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.607 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433181.6068354, 69c5d250-71a4-47d5-a3ce-5b606ee9c692 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.608 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] VM Resumed (Lifecycle Event)
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.614 2 INFO nova.virt.libvirt.driver [-] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Instance spawned successfully.
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.615 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.629 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.643 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.644 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.645 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.646 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.646 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.647 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.652 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.690 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.716 2 INFO nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Took 8.90 seconds to spawn the instance on the hypervisor.
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.717 2 DEBUG nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.814 2 INFO nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Took 9.96 seconds to build instance.
Oct 14 09:13:01 compute-0 nova_compute[259627]: 2025-10-14 09:13:01.838 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 246 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.0 MiB/s wr, 304 op/s
Oct 14 09:13:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:13:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:13:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:13:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:13:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:13:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:13:02 compute-0 nova_compute[259627]: 2025-10-14 09:13:02.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.829320) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433182829407, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1575, "num_deletes": 250, "total_data_size": 2256590, "memory_usage": 2293880, "flush_reason": "Manual Compaction"}
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433182842174, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1340912, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35401, "largest_seqno": 36975, "table_properties": {"data_size": 1335477, "index_size": 2511, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15272, "raw_average_key_size": 21, "raw_value_size": 1323174, "raw_average_value_size": 1827, "num_data_blocks": 113, "num_entries": 724, "num_filter_entries": 724, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760433037, "oldest_key_time": 1760433037, "file_creation_time": 1760433182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 12923 microseconds, and 8632 cpu microseconds.
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.842252) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1340912 bytes OK
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.842279) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.844067) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.844088) EVENT_LOG_v1 {"time_micros": 1760433182844081, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.844111) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2249611, prev total WAL file size 2249611, number of live WAL files 2.
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.845163) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323530' seq:72057594037927935, type:22 .. '6D6772737461740031353031' seq:0, type:0; will stop at (end)
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(1309KB)], [77(9676KB)]
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433182845197, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 11249159, "oldest_snapshot_seqno": -1}
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6283 keys, 8824563 bytes, temperature: kUnknown
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433182891269, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 8824563, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8782661, "index_size": 25088, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 157979, "raw_average_key_size": 25, "raw_value_size": 8670062, "raw_average_value_size": 1379, "num_data_blocks": 1019, "num_entries": 6283, "num_filter_entries": 6283, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760433182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.891660) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 8824563 bytes
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.893578) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 243.7 rd, 191.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.4 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(15.0) write-amplify(6.6) OK, records in: 6731, records dropped: 448 output_compression: NoCompression
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.893609) EVENT_LOG_v1 {"time_micros": 1760433182893595, "job": 44, "event": "compaction_finished", "compaction_time_micros": 46165, "compaction_time_cpu_micros": 20741, "output_level": 6, "num_output_files": 1, "total_output_size": 8824563, "num_input_records": 6731, "num_output_records": 6283, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433182894260, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433182897713, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.845108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.897768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.897779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.897783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.897787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:13:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.897791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:13:03 compute-0 nova_compute[259627]: 2025-10-14 09:13:03.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:03 compute-0 ceph-mon[74249]: pgmap v1757: 305 pgs: 305 active+clean; 246 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.0 MiB/s wr, 304 op/s
Oct 14 09:13:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 246 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 241 op/s
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.192 2 DEBUG oslo_concurrency.lockutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquiring lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.193 2 DEBUG oslo_concurrency.lockutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.193 2 DEBUG oslo_concurrency.lockutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquiring lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.193 2 DEBUG oslo_concurrency.lockutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.193 2 DEBUG oslo_concurrency.lockutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.194 2 INFO nova.compute.manager [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Terminating instance
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.195 2 DEBUG nova.compute.manager [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:13:05 compute-0 kernel: tap63189572-78 (unregistering): left promiscuous mode
Oct 14 09:13:05 compute-0 NetworkManager[44885]: <info>  [1760433185.2612] device (tap63189572-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:13:05 compute-0 ovn_controller[152662]: 2025-10-14T09:13:05Z|00967|binding|INFO|Releasing lport 63189572-78bc-4d3a-8135-659f8c39ce7d from this chassis (sb_readonly=0)
Oct 14 09:13:05 compute-0 ovn_controller[152662]: 2025-10-14T09:13:05Z|00968|binding|INFO|Setting lport 63189572-78bc-4d3a-8135-659f8c39ce7d down in Southbound
Oct 14 09:13:05 compute-0 ovn_controller[152662]: 2025-10-14T09:13:05Z|00969|binding|INFO|Removing iface tap63189572-78 ovn-installed in OVS
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.283 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:ab:40 10.100.0.11'], port_security=['fa:16:3e:4a:ab:40 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '69c5d250-71a4-47d5-a3ce-5b606ee9c692', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84e9c235-fb90-472a-8cac-f7ae999c18dd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd094b0f9acb49ca8b1f295403a44ec3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10487db5-e210-4216-ae01-596f4a7c7ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9fab9082-797f-44f0-9d82-af22d5b6d133, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=63189572-78bc-4d3a-8135-659f8c39ce7d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:13:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.284 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 63189572-78bc-4d3a-8135-659f8c39ce7d in datapath 84e9c235-fb90-472a-8cac-f7ae999c18dd unbound from our chassis
Oct 14 09:13:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.286 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84e9c235-fb90-472a-8cac-f7ae999c18dd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:13:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.287 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0b13b2-8d03-4091-9975-acd35d6e4728]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.288 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd namespace which is not needed anymore
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:05 compute-0 ceph-mon[74249]: pgmap v1758: 305 pgs: 305 active+clean; 246 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 241 op/s
Oct 14 09:13:05 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct 14 09:13:05 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005d.scope: Consumed 4.507s CPU time.
Oct 14 09:13:05 compute-0 systemd-machined[214636]: Machine qemu-114-instance-0000005d terminated.
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.426 2 INFO nova.virt.libvirt.driver [-] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Instance destroyed successfully.
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.426 2 DEBUG nova.objects.instance [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lazy-loading 'resources' on Instance uuid 69c5d250-71a4-47d5-a3ce-5b606ee9c692 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:13:05 compute-0 neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd[351832]: [NOTICE]   (351836) : haproxy version is 2.8.14-c23fe91
Oct 14 09:13:05 compute-0 neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd[351832]: [NOTICE]   (351836) : path to executable is /usr/sbin/haproxy
Oct 14 09:13:05 compute-0 neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd[351832]: [WARNING]  (351836) : Exiting Master process...
Oct 14 09:13:05 compute-0 neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd[351832]: [WARNING]  (351836) : Exiting Master process...
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.440 2 DEBUG nova.virt.libvirt.vif [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:12:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1449697808',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1449697808',id=93,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:13:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd094b0f9acb49ca8b1f295403a44ec3',ramdisk_id='',reservation_id='r-0pjevjco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-
ServerTagsTestJSON-648965588',owner_user_name='tempest-ServerTagsTestJSON-648965588-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:13:01Z,user_data=None,user_id='21d93d87742344a1b7662df0d97a69b2',uuid=69c5d250-71a4-47d5-a3ce-5b606ee9c692,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:13:05 compute-0 neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd[351832]: [ALERT]    (351836) : Current worker (351838) exited with code 143 (Terminated)
Oct 14 09:13:05 compute-0 neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd[351832]: [WARNING]  (351836) : All workers exited. Exiting... (0)
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.441 2 DEBUG nova.network.os_vif_util [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Converting VIF {"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.441 2 DEBUG nova.network.os_vif_util [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ab:40,bridge_name='br-int',has_traffic_filtering=True,id=63189572-78bc-4d3a-8135-659f8c39ce7d,network=Network(84e9c235-fb90-472a-8cac-f7ae999c18dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63189572-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.442 2 DEBUG os_vif [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ab:40,bridge_name='br-int',has_traffic_filtering=True,id=63189572-78bc-4d3a-8135-659f8c39ce7d,network=Network(84e9c235-fb90-472a-8cac-f7ae999c18dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63189572-78') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:13:05 compute-0 systemd[1]: libpod-14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd.scope: Deactivated successfully.
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.445 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63189572-78, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:05 compute-0 podman[352004]: 2025-10-14 09:13:05.449952344 +0000 UTC m=+0.059021694 container died 14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.455 2 INFO os_vif [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ab:40,bridge_name='br-int',has_traffic_filtering=True,id=63189572-78bc-4d3a-8135-659f8c39ce7d,network=Network(84e9c235-fb90-472a-8cac-f7ae999c18dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63189572-78')
Oct 14 09:13:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd-userdata-shm.mount: Deactivated successfully.
Oct 14 09:13:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-09f0fb587d1d33dedf087780ab0b81814fff666e633a1b92da6547d217849fe2-merged.mount: Deactivated successfully.
Oct 14 09:13:05 compute-0 podman[352004]: 2025-10-14 09:13:05.492178034 +0000 UTC m=+0.101247384 container cleanup 14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:13:05 compute-0 systemd[1]: libpod-conmon-14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd.scope: Deactivated successfully.
Oct 14 09:13:05 compute-0 podman[352059]: 2025-10-14 09:13:05.564754852 +0000 UTC m=+0.045769389 container remove 14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 09:13:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.571 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c9aac613-835d-468d-8d35-292d9a1471c3]: (4, ('Tue Oct 14 09:13:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd (14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd)\n14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd\nTue Oct 14 09:13:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd (14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd)\n14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.573 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ffcdfde-3601-450e-8842-50591177b03c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.573 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84e9c235-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:05 compute-0 kernel: tap84e9c235-f0: left promiscuous mode
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.600 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae4597e-29fe-4952-a76f-f783694c6972]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:13:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4147017175' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:13:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.627 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d5bebf9f-d28a-47dc-94c1-f2b89527b2ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:13:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4147017175' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:13:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.628 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[74d3495d-da57-439c-a8c3-29020a472077]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.644 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1c145d-938b-45b1-806a-bfd54d6bd6b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695817, 'reachable_time': 22286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352074, 'error': None, 'target': 'ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d84e9c235\x2dfb90\x2d472a\x2d8cac\x2df7ae999c18dd.mount: Deactivated successfully.
Oct 14 09:13:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.646 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:13:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.646 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[7737ecbe-3bbf-47d4-9a2a-49e0d45fab6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.842 2 INFO nova.virt.libvirt.driver [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Deleting instance files /var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692_del
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.843 2 INFO nova.virt.libvirt.driver [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Deletion of /var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692_del complete
Oct 14 09:13:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.863 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.914 2 INFO nova.compute.manager [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.916 2 DEBUG oslo.service.loopingcall [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.916 2 DEBUG nova.compute.manager [-] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:13:05 compute-0 nova_compute[259627]: 2025-10-14 09:13:05.916 2 DEBUG nova.network.neutron [-] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:13:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 229 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 1.8 MiB/s wr, 382 op/s
Oct 14 09:13:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/4147017175' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:13:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/4147017175' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:13:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:07.029 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:07.029 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:07.030 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:07 compute-0 ceph-mon[74249]: pgmap v1759: 305 pgs: 305 active+clean; 229 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 1.8 MiB/s wr, 382 op/s
Oct 14 09:13:07 compute-0 ovn_controller[152662]: 2025-10-14T09:13:07Z|00970|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 09:13:07 compute-0 ovn_controller[152662]: 2025-10-14T09:13:07Z|00971|binding|INFO|Releasing lport abbcb164-8856-47e0-a7b9-984d66daedac from this chassis (sb_readonly=0)
Oct 14 09:13:07 compute-0 nova_compute[259627]: 2025-10-14 09:13:07.797 2 DEBUG nova.compute.manager [req-c4ce80b7-a335-478b-87f9-a5ae57380905 req-556dcb3d-d86f-4c1a-bf92-1090739dcf58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Received event network-vif-unplugged-63189572-78bc-4d3a-8135-659f8c39ce7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:07 compute-0 nova_compute[259627]: 2025-10-14 09:13:07.797 2 DEBUG oslo_concurrency.lockutils [req-c4ce80b7-a335-478b-87f9-a5ae57380905 req-556dcb3d-d86f-4c1a-bf92-1090739dcf58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:07 compute-0 nova_compute[259627]: 2025-10-14 09:13:07.798 2 DEBUG oslo_concurrency.lockutils [req-c4ce80b7-a335-478b-87f9-a5ae57380905 req-556dcb3d-d86f-4c1a-bf92-1090739dcf58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:07 compute-0 nova_compute[259627]: 2025-10-14 09:13:07.798 2 DEBUG oslo_concurrency.lockutils [req-c4ce80b7-a335-478b-87f9-a5ae57380905 req-556dcb3d-d86f-4c1a-bf92-1090739dcf58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:07 compute-0 nova_compute[259627]: 2025-10-14 09:13:07.798 2 DEBUG nova.compute.manager [req-c4ce80b7-a335-478b-87f9-a5ae57380905 req-556dcb3d-d86f-4c1a-bf92-1090739dcf58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] No waiting events found dispatching network-vif-unplugged-63189572-78bc-4d3a-8135-659f8c39ce7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:13:07 compute-0 nova_compute[259627]: 2025-10-14 09:13:07.798 2 DEBUG nova.compute.manager [req-c4ce80b7-a335-478b-87f9-a5ae57380905 req-556dcb3d-d86f-4c1a-bf92-1090739dcf58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Received event network-vif-unplugged-63189572-78bc-4d3a-8135-659f8c39ce7d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:13:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:13:07 compute-0 nova_compute[259627]: 2025-10-14 09:13:07.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 229 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 34 KiB/s wr, 267 op/s
Oct 14 09:13:07 compute-0 nova_compute[259627]: 2025-10-14 09:13:07.999 2 DEBUG nova.network.neutron [-] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:13:08 compute-0 nova_compute[259627]: 2025-10-14 09:13:08.018 2 INFO nova.compute.manager [-] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Took 2.10 seconds to deallocate network for instance.
Oct 14 09:13:08 compute-0 nova_compute[259627]: 2025-10-14 09:13:08.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:08 compute-0 nova_compute[259627]: 2025-10-14 09:13:08.058 2 DEBUG oslo_concurrency.lockutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:08 compute-0 nova_compute[259627]: 2025-10-14 09:13:08.059 2 DEBUG oslo_concurrency.lockutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:08 compute-0 nova_compute[259627]: 2025-10-14 09:13:08.081 2 DEBUG nova.compute.manager [req-f340ee66-6f8f-42a6-a511-669659ddfb5e req-63b39487-3446-4bbd-b58f-35a537d7c510 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Received event network-vif-deleted-63189572-78bc-4d3a-8135-659f8c39ce7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:08 compute-0 nova_compute[259627]: 2025-10-14 09:13:08.139 2 DEBUG oslo_concurrency.processutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:13:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1874453173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:08 compute-0 nova_compute[259627]: 2025-10-14 09:13:08.549 2 DEBUG oslo_concurrency.processutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:08 compute-0 nova_compute[259627]: 2025-10-14 09:13:08.555 2 DEBUG nova.compute.provider_tree [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:13:08 compute-0 nova_compute[259627]: 2025-10-14 09:13:08.574 2 DEBUG nova.scheduler.client.report [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:13:08 compute-0 nova_compute[259627]: 2025-10-14 09:13:08.601 2 DEBUG oslo_concurrency.lockutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:08 compute-0 nova_compute[259627]: 2025-10-14 09:13:08.634 2 INFO nova.scheduler.client.report [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Deleted allocations for instance 69c5d250-71a4-47d5-a3ce-5b606ee9c692
Oct 14 09:13:08 compute-0 nova_compute[259627]: 2025-10-14 09:13:08.733 2 DEBUG oslo_concurrency.lockutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:09 compute-0 ceph-mon[74249]: pgmap v1760: 305 pgs: 305 active+clean; 229 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 34 KiB/s wr, 267 op/s
Oct 14 09:13:09 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1874453173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:09 compute-0 nova_compute[259627]: 2025-10-14 09:13:09.914 2 DEBUG nova.compute.manager [req-e2fcf457-5c7d-4488-84aa-2cb08da1feb5 req-089b4028-bcb5-46c8-91d0-de6d8ca1bc6e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Received event network-vif-plugged-63189572-78bc-4d3a-8135-659f8c39ce7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:09 compute-0 nova_compute[259627]: 2025-10-14 09:13:09.915 2 DEBUG oslo_concurrency.lockutils [req-e2fcf457-5c7d-4488-84aa-2cb08da1feb5 req-089b4028-bcb5-46c8-91d0-de6d8ca1bc6e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:09 compute-0 nova_compute[259627]: 2025-10-14 09:13:09.915 2 DEBUG oslo_concurrency.lockutils [req-e2fcf457-5c7d-4488-84aa-2cb08da1feb5 req-089b4028-bcb5-46c8-91d0-de6d8ca1bc6e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:09 compute-0 nova_compute[259627]: 2025-10-14 09:13:09.915 2 DEBUG oslo_concurrency.lockutils [req-e2fcf457-5c7d-4488-84aa-2cb08da1feb5 req-089b4028-bcb5-46c8-91d0-de6d8ca1bc6e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:09 compute-0 nova_compute[259627]: 2025-10-14 09:13:09.915 2 DEBUG nova.compute.manager [req-e2fcf457-5c7d-4488-84aa-2cb08da1feb5 req-089b4028-bcb5-46c8-91d0-de6d8ca1bc6e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] No waiting events found dispatching network-vif-plugged-63189572-78bc-4d3a-8135-659f8c39ce7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:13:09 compute-0 nova_compute[259627]: 2025-10-14 09:13:09.915 2 WARNING nova.compute.manager [req-e2fcf457-5c7d-4488-84aa-2cb08da1feb5 req-089b4028-bcb5-46c8-91d0-de6d8ca1bc6e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Received unexpected event network-vif-plugged-63189572-78bc-4d3a-8135-659f8c39ce7d for instance with vm_state deleted and task_state None.
Oct 14 09:13:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 229 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 34 KiB/s wr, 267 op/s
Oct 14 09:13:10 compute-0 nova_compute[259627]: 2025-10-14 09:13:10.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:11 compute-0 ceph-mon[74249]: pgmap v1761: 305 pgs: 305 active+clean; 229 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 34 KiB/s wr, 267 op/s
Oct 14 09:13:11 compute-0 ovn_controller[152662]: 2025-10-14T09:13:11Z|00972|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 09:13:11 compute-0 ovn_controller[152662]: 2025-10-14T09:13:11Z|00973|binding|INFO|Releasing lport abbcb164-8856-47e0-a7b9-984d66daedac from this chassis (sb_readonly=0)
Oct 14 09:13:11 compute-0 nova_compute[259627]: 2025-10-14 09:13:11.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 200 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 36 KiB/s wr, 285 op/s
Oct 14 09:13:12 compute-0 nova_compute[259627]: 2025-10-14 09:13:12.282 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433177.2761786, 4ad76124-48eb-467e-9a6f-951235efdb35 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:12 compute-0 nova_compute[259627]: 2025-10-14 09:13:12.282 2 INFO nova.compute.manager [-] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] VM Stopped (Lifecycle Event)
Oct 14 09:13:12 compute-0 nova_compute[259627]: 2025-10-14 09:13:12.301 2 DEBUG nova.compute.manager [None req-1b6de2f0-c753-4cd6-8dbf-e5d9e0209af7 - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:12 compute-0 nova_compute[259627]: 2025-10-14 09:13:12.623 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433177.6219695, bf38daab-2994-41c4-a44f-91e466acf68e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:12 compute-0 nova_compute[259627]: 2025-10-14 09:13:12.624 2 INFO nova.compute.manager [-] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] VM Stopped (Lifecycle Event)
Oct 14 09:13:12 compute-0 nova_compute[259627]: 2025-10-14 09:13:12.639 2 DEBUG nova.compute.manager [None req-6ffc8255-f604-4e0b-8513-9cef3200cae7 - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:13:13 compute-0 nova_compute[259627]: 2025-10-14 09:13:13.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:13 compute-0 ceph-mon[74249]: pgmap v1762: 305 pgs: 305 active+clean; 200 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 36 KiB/s wr, 285 op/s
Oct 14 09:13:13 compute-0 ovn_controller[152662]: 2025-10-14T09:13:13Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:9c:bd 10.100.0.7
Oct 14 09:13:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 200 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.5 KiB/s wr, 159 op/s
Oct 14 09:13:15 compute-0 ceph-mon[74249]: pgmap v1763: 305 pgs: 305 active+clean; 200 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.5 KiB/s wr, 159 op/s
Oct 14 09:13:15 compute-0 nova_compute[259627]: 2025-10-14 09:13:15.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 200 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 15 KiB/s wr, 202 op/s
Oct 14 09:13:16 compute-0 nova_compute[259627]: 2025-10-14 09:13:16.317 2 INFO nova.compute.manager [None req-22cc4328-8bda-4035-abc4-5e559b02d24e 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Pausing
Oct 14 09:13:16 compute-0 nova_compute[259627]: 2025-10-14 09:13:16.319 2 DEBUG nova.objects.instance [None req-22cc4328-8bda-4035-abc4-5e559b02d24e 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'flavor' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:13:16 compute-0 nova_compute[259627]: 2025-10-14 09:13:16.352 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433196.3518085, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:16 compute-0 nova_compute[259627]: 2025-10-14 09:13:16.352 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Paused (Lifecycle Event)
Oct 14 09:13:16 compute-0 nova_compute[259627]: 2025-10-14 09:13:16.354 2 DEBUG nova.compute.manager [None req-22cc4328-8bda-4035-abc4-5e559b02d24e 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:16 compute-0 nova_compute[259627]: 2025-10-14 09:13:16.380 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:16 compute-0 nova_compute[259627]: 2025-10-14 09:13:16.387 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:13:16 compute-0 nova_compute[259627]: 2025-10-14 09:13:16.415 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 14 09:13:16 compute-0 podman[352098]: 2025-10-14 09:13:16.667260858 +0000 UTC m=+0.080453712 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 09:13:16 compute-0 podman[352099]: 2025-10-14 09:13:16.672247201 +0000 UTC m=+0.078680019 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct 14 09:13:16 compute-0 nova_compute[259627]: 2025-10-14 09:13:16.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:13:16 compute-0 nova_compute[259627]: 2025-10-14 09:13:16.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 09:13:17 compute-0 nova_compute[259627]: 2025-10-14 09:13:17.178 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:13:17 compute-0 nova_compute[259627]: 2025-10-14 09:13:17.288 2 INFO nova.compute.manager [None req-f137f94f-93e6-4596-bf78-5823e16a02bd 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Unpausing
Oct 14 09:13:17 compute-0 nova_compute[259627]: 2025-10-14 09:13:17.291 2 DEBUG nova.objects.instance [None req-f137f94f-93e6-4596-bf78-5823e16a02bd 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'flavor' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:13:17 compute-0 nova_compute[259627]: 2025-10-14 09:13:17.331 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433197.330903, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:17 compute-0 nova_compute[259627]: 2025-10-14 09:13:17.331 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Resumed (Lifecycle Event)
Oct 14 09:13:17 compute-0 virtqemud[259351]: argument unsupported: QEMU guest agent is not configured
Oct 14 09:13:17 compute-0 nova_compute[259627]: 2025-10-14 09:13:17.340 2 DEBUG nova.virt.libvirt.guest [None req-f137f94f-93e6-4596-bf78-5823e16a02bd 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 14 09:13:17 compute-0 nova_compute[259627]: 2025-10-14 09:13:17.341 2 DEBUG nova.compute.manager [None req-f137f94f-93e6-4596-bf78-5823e16a02bd 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:17 compute-0 nova_compute[259627]: 2025-10-14 09:13:17.372 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:17 compute-0 nova_compute[259627]: 2025-10-14 09:13:17.378 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:13:17 compute-0 nova_compute[259627]: 2025-10-14 09:13:17.418 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (unpausing). Skip.
Oct 14 09:13:17 compute-0 ceph-mon[74249]: pgmap v1764: 305 pgs: 305 active+clean; 200 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 15 KiB/s wr, 202 op/s
Oct 14 09:13:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:13:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 200 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 541 KiB/s rd, 13 KiB/s wr, 61 op/s
Oct 14 09:13:18 compute-0 nova_compute[259627]: 2025-10-14 09:13:18.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:19 compute-0 nova_compute[259627]: 2025-10-14 09:13:19.051 2 INFO nova.compute.manager [None req-24960b08-3c49-4313-8a93-c290d9923c81 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Get console output
Oct 14 09:13:19 compute-0 nova_compute[259627]: 2025-10-14 09:13:19.057 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:13:19 compute-0 ceph-mon[74249]: pgmap v1765: 305 pgs: 305 active+clean; 200 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 541 KiB/s rd, 13 KiB/s wr, 61 op/s
Oct 14 09:13:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 200 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 541 KiB/s rd, 13 KiB/s wr, 61 op/s
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.005 2 DEBUG oslo_concurrency.lockutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.007 2 DEBUG oslo_concurrency.lockutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.007 2 DEBUG oslo_concurrency.lockutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.008 2 DEBUG oslo_concurrency.lockutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.009 2 DEBUG oslo_concurrency.lockutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.011 2 INFO nova.compute.manager [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Terminating instance
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.012 2 DEBUG nova.compute.manager [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:13:20 compute-0 kernel: tapce4eb1a6-22 (unregistering): left promiscuous mode
Oct 14 09:13:20 compute-0 NetworkManager[44885]: <info>  [1760433200.0726] device (tapce4eb1a6-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:13:20 compute-0 ovn_controller[152662]: 2025-10-14T09:13:20Z|00974|binding|INFO|Releasing lport ce4eb1a6-2221-4519-98fa-44a39da77b71 from this chassis (sb_readonly=0)
Oct 14 09:13:20 compute-0 ovn_controller[152662]: 2025-10-14T09:13:20Z|00975|binding|INFO|Setting lport ce4eb1a6-2221-4519-98fa-44a39da77b71 down in Southbound
Oct 14 09:13:20 compute-0 ovn_controller[152662]: 2025-10-14T09:13:20Z|00976|binding|INFO|Removing iface tapce4eb1a6-22 ovn-installed in OVS
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.096 2 DEBUG nova.compute.manager [req-2cb9d5bd-1b31-44e6-8356-0f162903b167 req-ecbfc925-75d0-4731-8cd8-bbd29f99c9fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-changed-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.097 2 DEBUG nova.compute.manager [req-2cb9d5bd-1b31-44e6-8356-0f162903b167 req-ecbfc925-75d0-4731-8cd8-bbd29f99c9fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Refreshing instance network info cache due to event network-changed-ce4eb1a6-2221-4519-98fa-44a39da77b71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.097 2 DEBUG oslo_concurrency.lockutils [req-2cb9d5bd-1b31-44e6-8356-0f162903b167 req-ecbfc925-75d0-4731-8cd8-bbd29f99c9fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:13:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.098 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:9c:bd 10.100.0.7'], port_security=['fa:16:3e:7f:9c:bd 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9e354e27-d674-43c3-890b-caf8731cb827', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c85ef4e1-bf02-447d-8de0-60f2d978738d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7211eb9-3be4-4007-bf83-d7812e6ec9fe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ce4eb1a6-2221-4519-98fa-44a39da77b71) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:13:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.099 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ce4eb1a6-2221-4519-98fa-44a39da77b71 in datapath 7cb8e394-ebca-4b27-8174-62c6b6f3a7da unbound from our chassis
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.098 2 DEBUG oslo_concurrency.lockutils [req-2cb9d5bd-1b31-44e6-8356-0f162903b167 req-ecbfc925-75d0-4731-8cd8-bbd29f99c9fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:13:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.100 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7cb8e394-ebca-4b27-8174-62c6b6f3a7da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.099 2 DEBUG nova.network.neutron [req-2cb9d5bd-1b31-44e6-8356-0f162903b167 req-ecbfc925-75d0-4731-8cd8-bbd29f99c9fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Refreshing network info cache for port ce4eb1a6-2221-4519-98fa-44a39da77b71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:13:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.101 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8e525b56-fd62-4a2b-82ef-d25a0ea4970e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.101 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da namespace which is not needed anymore
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:20 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Oct 14 09:13:20 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005a.scope: Consumed 13.224s CPU time.
Oct 14 09:13:20 compute-0 systemd-machined[214636]: Machine qemu-115-instance-0000005a terminated.
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.264 2 INFO nova.virt.libvirt.driver [-] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Instance destroyed successfully.
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.266 2 DEBUG nova.objects.instance [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'resources' on Instance uuid 9e354e27-d674-43c3-890b-caf8731cb827 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:13:20 compute-0 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[351966]: [NOTICE]   (351970) : haproxy version is 2.8.14-c23fe91
Oct 14 09:13:20 compute-0 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[351966]: [NOTICE]   (351970) : path to executable is /usr/sbin/haproxy
Oct 14 09:13:20 compute-0 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[351966]: [WARNING]  (351970) : Exiting Master process...
Oct 14 09:13:20 compute-0 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[351966]: [ALERT]    (351970) : Current worker (351972) exited with code 143 (Terminated)
Oct 14 09:13:20 compute-0 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[351966]: [WARNING]  (351970) : All workers exited. Exiting... (0)
Oct 14 09:13:20 compute-0 systemd[1]: libpod-0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd.scope: Deactivated successfully.
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.292 2 DEBUG nova.virt.libvirt.vif [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:12:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-461250371',display_name='tempest-TestNetworkAdvancedServerOps-server-461250371',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-461250371',id=90,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCcVmsUIwo920oPyHLmJGrkrYCQ5UunB9Yv0/Buc9cCiSWB2ZOXdvOp0s2cEsPfEAttRSx6VdIWgt0joL5sdVyP2CI3WgYA2zF+RirB/x5531ApwlIJzNgUQx7hgxyfijg==',key_name='tempest-TestNetworkAdvancedServerOps-806517333',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:12:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-dt6vqls1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:13:01Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=9e354e27-d674-43c3-890b-caf8731cb827,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.293 2 DEBUG nova.network.os_vif_util [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.294 2 DEBUG nova.network.os_vif_util [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:9c:bd,bridge_name='br-int',has_traffic_filtering=True,id=ce4eb1a6-2221-4519-98fa-44a39da77b71,network=Network(7cb8e394-ebca-4b27-8174-62c6b6f3a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce4eb1a6-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.294 2 DEBUG os_vif [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:9c:bd,bridge_name='br-int',has_traffic_filtering=True,id=ce4eb1a6-2221-4519-98fa-44a39da77b71,network=Network(7cb8e394-ebca-4b27-8174-62c6b6f3a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce4eb1a6-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.296 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce4eb1a6-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:20 compute-0 podman[352163]: 2025-10-14 09:13:20.29786524 +0000 UTC m=+0.065434273 container died 0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.337 2 INFO os_vif [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:9c:bd,bridge_name='br-int',has_traffic_filtering=True,id=ce4eb1a6-2221-4519-98fa-44a39da77b71,network=Network(7cb8e394-ebca-4b27-8174-62c6b6f3a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce4eb1a6-22')
Oct 14 09:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd-userdata-shm.mount: Deactivated successfully.
Oct 14 09:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb65901d3e784ae2e04a1afc9a7cae7d402065e9d57d7aaf3b11e83a0ac7ab76-merged.mount: Deactivated successfully.
Oct 14 09:13:20 compute-0 podman[352163]: 2025-10-14 09:13:20.380779352 +0000 UTC m=+0.148348365 container cleanup 0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:13:20 compute-0 systemd[1]: libpod-conmon-0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd.scope: Deactivated successfully.
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.424 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433185.4242532, 69c5d250-71a4-47d5-a3ce-5b606ee9c692 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.427 2 INFO nova.compute.manager [-] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] VM Stopped (Lifecycle Event)
Oct 14 09:13:20 compute-0 podman[352220]: 2025-10-14 09:13:20.459713326 +0000 UTC m=+0.053415577 container remove 0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.464 2 DEBUG nova.compute.manager [None req-f94bd0d2-51ae-4d69-bccb-f2ea3df69216 - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.471 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ab0af50a-09c6-430e-ac22-eee9faa697dc]: (4, ('Tue Oct 14 09:13:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da (0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd)\n0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd\nTue Oct 14 09:13:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da (0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd)\n0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.473 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6c726f53-d548-4a56-b5e7-b96daff0f980]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.474 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7cb8e394-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:20 compute-0 kernel: tap7cb8e394-e0: left promiscuous mode
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.489 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cb73ade5-4a0d-46c8-9187-f45c0db3e055]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.518 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a09a6277-2cd2-48df-883c-2d12206d5db3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.519 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec57b8c-3155-4284-abfe-09edc0f493b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.543 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[77b2991f-bbb0-4542-a32a-65cf31d172fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695912, 'reachable_time': 38739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352236, 'error': None, 'target': 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d7cb8e394\x2debca\x2d4b27\x2d8174\x2d62c6b6f3a7da.mount: Deactivated successfully.
Oct 14 09:13:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.546 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:13:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.546 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[c539de68-bc75-4637-8598-9b0356b6d26e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.755 2 INFO nova.virt.libvirt.driver [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Deleting instance files /var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827_del
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.755 2 INFO nova.virt.libvirt.driver [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Deletion of /var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827_del complete
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.824 2 INFO nova.compute.manager [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.825 2 DEBUG oslo.service.loopingcall [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.825 2 DEBUG nova.compute.manager [-] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:13:20 compute-0 nova_compute[259627]: 2025-10-14 09:13:20.826 2 DEBUG nova.network.neutron [-] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:13:21 compute-0 ceph-mon[74249]: pgmap v1766: 305 pgs: 305 active+clean; 200 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 541 KiB/s rd, 13 KiB/s wr, 61 op/s
Oct 14 09:13:21 compute-0 nova_compute[259627]: 2025-10-14 09:13:21.589 2 DEBUG nova.network.neutron [req-2cb9d5bd-1b31-44e6-8356-0f162903b167 req-ecbfc925-75d0-4731-8cd8-bbd29f99c9fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Updated VIF entry in instance network info cache for port ce4eb1a6-2221-4519-98fa-44a39da77b71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:13:21 compute-0 nova_compute[259627]: 2025-10-14 09:13:21.590 2 DEBUG nova.network.neutron [req-2cb9d5bd-1b31-44e6-8356-0f162903b167 req-ecbfc925-75d0-4731-8cd8-bbd29f99c9fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Updating instance_info_cache with network_info: [{"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:13:21 compute-0 nova_compute[259627]: 2025-10-14 09:13:21.611 2 DEBUG oslo_concurrency.lockutils [req-2cb9d5bd-1b31-44e6-8356-0f162903b167 req-ecbfc925-75d0-4731-8cd8-bbd29f99c9fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:13:21 compute-0 nova_compute[259627]: 2025-10-14 09:13:21.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 163 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 32 KiB/s wr, 65 op/s
Oct 14 09:13:21 compute-0 nova_compute[259627]: 2025-10-14 09:13:21.959 2 DEBUG nova.network.neutron [-] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:13:21 compute-0 nova_compute[259627]: 2025-10-14 09:13:21.981 2 INFO nova.compute.manager [-] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Took 1.16 seconds to deallocate network for instance.
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.036 2 DEBUG oslo_concurrency.lockutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.036 2 DEBUG oslo_concurrency.lockutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.374 2 DEBUG nova.compute.manager [req-8309c113-4691-4288-a0c5-2925069e3df8 req-69ae693c-e197-4b9c-9db3-8ba575a567a9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-vif-deleted-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.398 2 DEBUG oslo_concurrency.processutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.512 2 DEBUG nova.compute.manager [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-vif-unplugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.512 2 DEBUG oslo_concurrency.lockutils [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.512 2 DEBUG oslo_concurrency.lockutils [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.513 2 DEBUG oslo_concurrency.lockutils [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.513 2 DEBUG nova.compute.manager [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] No waiting events found dispatching network-vif-unplugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.513 2 WARNING nova.compute.manager [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received unexpected event network-vif-unplugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 for instance with vm_state deleted and task_state None.
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.513 2 DEBUG nova.compute.manager [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.513 2 DEBUG oslo_concurrency.lockutils [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.513 2 DEBUG oslo_concurrency.lockutils [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.514 2 DEBUG oslo_concurrency.lockutils [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.514 2 DEBUG nova.compute.manager [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] No waiting events found dispatching network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.514 2 WARNING nova.compute.manager [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received unexpected event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 for instance with vm_state deleted and task_state None.
Oct 14 09:13:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:13:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:13:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2681891022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.895 2 DEBUG oslo_concurrency.processutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.901 2 DEBUG nova.compute.provider_tree [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:13:22 compute-0 nova_compute[259627]: 2025-10-14 09:13:22.937 2 DEBUG nova.scheduler.client.report [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:13:23 compute-0 nova_compute[259627]: 2025-10-14 09:13:23.008 2 DEBUG oslo_concurrency.lockutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:23 compute-0 nova_compute[259627]: 2025-10-14 09:13:23.050 2 INFO nova.scheduler.client.report [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Deleted allocations for instance 9e354e27-d674-43c3-890b-caf8731cb827
Oct 14 09:13:23 compute-0 nova_compute[259627]: 2025-10-14 09:13:23.124 2 DEBUG oslo_concurrency.lockutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:23 compute-0 nova_compute[259627]: 2025-10-14 09:13:23.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:23 compute-0 ceph-mon[74249]: pgmap v1767: 305 pgs: 305 active+clean; 163 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 32 KiB/s wr, 65 op/s
Oct 14 09:13:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2681891022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 163 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 30 KiB/s wr, 47 op/s
Oct 14 09:13:25 compute-0 nova_compute[259627]: 2025-10-14 09:13:25.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:25 compute-0 ceph-mon[74249]: pgmap v1768: 305 pgs: 305 active+clean; 163 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 30 KiB/s wr, 47 op/s
Oct 14 09:13:25 compute-0 nova_compute[259627]: 2025-10-14 09:13:25.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 121 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 549 KiB/s rd, 32 KiB/s wr, 73 op/s
Oct 14 09:13:26 compute-0 nova_compute[259627]: 2025-10-14 09:13:26.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:26 compute-0 ovn_controller[152662]: 2025-10-14T09:13:26Z|00977|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 09:13:26 compute-0 nova_compute[259627]: 2025-10-14 09:13:26.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:26 compute-0 ovn_controller[152662]: 2025-10-14T09:13:26Z|00978|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 09:13:26 compute-0 nova_compute[259627]: 2025-10-14 09:13:26.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:27 compute-0 nova_compute[259627]: 2025-10-14 09:13:27.013 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:13:27 compute-0 nova_compute[259627]: 2025-10-14 09:13:27.014 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:13:27 compute-0 nova_compute[259627]: 2025-10-14 09:13:27.015 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:13:27 compute-0 ceph-mon[74249]: pgmap v1769: 305 pgs: 305 active+clean; 121 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 549 KiB/s rd, 32 KiB/s wr, 73 op/s
Oct 14 09:13:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:13:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 121 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 20 KiB/s wr, 30 op/s
Oct 14 09:13:28 compute-0 nova_compute[259627]: 2025-10-14 09:13:28.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:29 compute-0 nova_compute[259627]: 2025-10-14 09:13:29.109 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquiring lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:29 compute-0 nova_compute[259627]: 2025-10-14 09:13:29.110 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:29 compute-0 nova_compute[259627]: 2025-10-14 09:13:29.127 2 DEBUG nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:13:29 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:13:29 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 29K writes, 116K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s
                                           Cumulative WAL: 29K writes, 10K syncs, 2.84 writes per sync, written: 0.11 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 47.24 MB, 0.08 MB/s
                                           Interval WAL: 11K writes, 4460 syncs, 2.54 writes per sync, written: 0.05 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:13:29 compute-0 nova_compute[259627]: 2025-10-14 09:13:29.213 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:29 compute-0 nova_compute[259627]: 2025-10-14 09:13:29.214 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:29 compute-0 nova_compute[259627]: 2025-10-14 09:13:29.223 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:13:29 compute-0 nova_compute[259627]: 2025-10-14 09:13:29.224 2 INFO nova.compute.claims [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:13:29 compute-0 nova_compute[259627]: 2025-10-14 09:13:29.406 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:29 compute-0 ceph-mon[74249]: pgmap v1770: 305 pgs: 305 active+clean; 121 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 20 KiB/s wr, 30 op/s
Oct 14 09:13:29 compute-0 podman[352282]: 2025-10-14 09:13:29.652916711 +0000 UTC m=+0.062319495 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:13:29 compute-0 podman[352268]: 2025-10-14 09:13:29.71012995 +0000 UTC m=+0.116511490 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:13:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:13:29 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1557985574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:29 compute-0 nova_compute[259627]: 2025-10-14 09:13:29.892 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:29 compute-0 nova_compute[259627]: 2025-10-14 09:13:29.900 2 DEBUG nova.compute.provider_tree [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:13:29 compute-0 nova_compute[259627]: 2025-10-14 09:13:29.917 2 DEBUG nova.scheduler.client.report [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:13:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 121 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 20 KiB/s wr, 30 op/s
Oct 14 09:13:29 compute-0 nova_compute[259627]: 2025-10-14 09:13:29.946 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:29 compute-0 nova_compute[259627]: 2025-10-14 09:13:29.947 2 DEBUG nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:13:29 compute-0 nova_compute[259627]: 2025-10-14 09:13:29.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:13:29 compute-0 nova_compute[259627]: 2025-10-14 09:13:29.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.015 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.016 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.017 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.018 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.067 2 DEBUG nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.067 2 DEBUG nova.network.neutron [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.087 2 INFO nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.105 2 DEBUG nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.185 2 DEBUG nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.186 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.186 2 INFO nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Creating image(s)
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.214 2 DEBUG nova.storage.rbd_utils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] rbd image fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.244 2 DEBUG nova.storage.rbd_utils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] rbd image fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.276 2 DEBUG nova.storage.rbd_utils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] rbd image fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.282 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.333 2 DEBUG nova.policy [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9c3ef1ada21b467b9c1717b790fabb93', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dffdc75b179b426c85be76e05489a77a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.388 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.389 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.389 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.390 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.419 2 DEBUG nova.storage.rbd_utils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] rbd image fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.423 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:13:30 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2626864788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.478 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:30 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1557985574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:30 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2626864788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.597 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.598 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.772 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.849 2 DEBUG nova.storage.rbd_utils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] resizing rbd image fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.956 2 DEBUG nova.objects.instance [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lazy-loading 'migration_context' on Instance uuid fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.974 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.974 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.975 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.976 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Ensure instance console log exists: /var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.976 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.976 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.977 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:30 compute-0 nova_compute[259627]: 2025-10-14 09:13:30.993 2 DEBUG nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.006 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.007 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3674MB free_disk=59.942779541015625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.081 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.123 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 2534f8b9-e832-4b78-ada4-e551429bdc75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.123 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.152 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance d46b6953-9413-4e6a-94f7-7b5ac9634c16 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.153 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.153 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.185 2 DEBUG nova.network.neutron [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Successfully created port: 33bf675d-c42f-486f-b483-87fa5091b0ef _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.224 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:31 compute-0 ceph-mon[74249]: pgmap v1771: 305 pgs: 305 active+clean; 121 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 20 KiB/s wr, 30 op/s
Oct 14 09:13:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:13:31 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/126330429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.680 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.689 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.724 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.786 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.786 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.787 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.788 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.798 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:13:31 compute-0 nova_compute[259627]: 2025-10-14 09:13:31.799 2 INFO nova.compute.claims [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:13:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 125 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 33 KiB/s wr, 35 op/s
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.056 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.171 2 DEBUG nova.network.neutron [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Successfully updated port: 33bf675d-c42f-486f-b483-87fa5091b0ef _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.186 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquiring lock "refresh_cache-fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.186 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquired lock "refresh_cache-fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.187 2 DEBUG nova.network.neutron [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.386 2 DEBUG nova.compute.manager [req-507c0465-7c1a-46ee-89f9-50f97b9f8edd req-2db62818-b172-472c-b7bf-eba8bb734b2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Received event network-changed-33bf675d-c42f-486f-b483-87fa5091b0ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.387 2 DEBUG nova.compute.manager [req-507c0465-7c1a-46ee-89f9-50f97b9f8edd req-2db62818-b172-472c-b7bf-eba8bb734b2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Refreshing instance network info cache due to event network-changed-33bf675d-c42f-486f-b483-87fa5091b0ef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.387 2 DEBUG oslo_concurrency.lockutils [req-507c0465-7c1a-46ee-89f9-50f97b9f8edd req-2db62818-b172-472c-b7bf-eba8bb734b2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:13:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:13:32 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1673281914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.498 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.505 2 DEBUG nova.compute.provider_tree [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:13:32 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/126330429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:32 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1673281914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.532 2 DEBUG nova.scheduler.client.report [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.558 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.559 2 DEBUG nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.621 2 DEBUG nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.622 2 DEBUG nova.network.neutron [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.643 2 INFO nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.663 2 DEBUG nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:13:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:13:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.757 2 DEBUG nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.758 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.759 2 INFO nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Creating image(s)
Oct 14 09:13:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:13:32
Oct 14 09:13:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:13:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:13:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'backups', 'vms', 'volumes']
Oct 14 09:13:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:13:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:13:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:13:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:13:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.801 2 DEBUG nova.storage.rbd_utils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.840 2 DEBUG nova.storage.rbd_utils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.884 2 DEBUG nova.storage.rbd_utils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:32 compute-0 nova_compute[259627]: 2025-10-14 09:13:32.891 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:33 compute-0 nova_compute[259627]: 2025-10-14 09:13:33.000 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:33 compute-0 nova_compute[259627]: 2025-10-14 09:13:33.001 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:33 compute-0 nova_compute[259627]: 2025-10-14 09:13:33.002 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:33 compute-0 nova_compute[259627]: 2025-10-14 09:13:33.002 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:33 compute-0 nova_compute[259627]: 2025-10-14 09:13:33.028 2 DEBUG nova.storage.rbd_utils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:33 compute-0 nova_compute[259627]: 2025-10-14 09:13:33.032 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:13:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:13:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:13:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:13:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:13:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:13:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:13:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:13:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:13:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:13:33 compute-0 nova_compute[259627]: 2025-10-14 09:13:33.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:33 compute-0 nova_compute[259627]: 2025-10-14 09:13:33.346 2 DEBUG nova.network.neutron [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:13:33 compute-0 nova_compute[259627]: 2025-10-14 09:13:33.382 2 DEBUG nova.policy [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a287ef08fc5c4f218bf06cd2c7ed021e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:13:33 compute-0 nova_compute[259627]: 2025-10-14 09:13:33.389 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:33 compute-0 nova_compute[259627]: 2025-10-14 09:13:33.474 2 DEBUG nova.storage.rbd_utils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] resizing rbd image d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:13:33 compute-0 ceph-mon[74249]: pgmap v1772: 305 pgs: 305 active+clean; 125 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 33 KiB/s wr, 35 op/s
Oct 14 09:13:33 compute-0 nova_compute[259627]: 2025-10-14 09:13:33.597 2 DEBUG nova.objects.instance [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'migration_context' on Instance uuid d46b6953-9413-4e6a-94f7-7b5ac9634c16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:13:33 compute-0 nova_compute[259627]: 2025-10-14 09:13:33.614 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:13:33 compute-0 nova_compute[259627]: 2025-10-14 09:13:33.615 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Ensure instance console log exists: /var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:13:33 compute-0 nova_compute[259627]: 2025-10-14 09:13:33.615 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:33 compute-0 nova_compute[259627]: 2025-10-14 09:13:33.616 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:33 compute-0 nova_compute[259627]: 2025-10-14 09:13:33.616 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 125 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 14 KiB/s wr, 31 op/s
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.243 2 DEBUG nova.network.neutron [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Updating instance_info_cache with network_info: [{"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.263 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Releasing lock "refresh_cache-fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.264 2 DEBUG nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Instance network_info: |[{"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.264 2 DEBUG oslo_concurrency.lockutils [req-507c0465-7c1a-46ee-89f9-50f97b9f8edd req-2db62818-b172-472c-b7bf-eba8bb734b2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.264 2 DEBUG nova.network.neutron [req-507c0465-7c1a-46ee-89f9-50f97b9f8edd req-2db62818-b172-472c-b7bf-eba8bb734b2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Refreshing network info cache for port 33bf675d-c42f-486f-b483-87fa5091b0ef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.267 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Start _get_guest_xml network_info=[{"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.271 2 WARNING nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.275 2 DEBUG nova.virt.libvirt.host [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.276 2 DEBUG nova.virt.libvirt.host [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.278 2 DEBUG nova.virt.libvirt.host [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.279 2 DEBUG nova.virt.libvirt.host [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.279 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.279 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.280 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.280 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.280 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.280 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.280 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.281 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.281 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.281 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.281 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.281 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.284 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:34 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:13:34 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 30K writes, 115K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.04 MB/s
                                           Cumulative WAL: 30K writes, 10K syncs, 2.82 writes per sync, written: 0.10 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 36.96 MB, 0.06 MB/s
                                           Interval WAL: 10K writes, 4455 syncs, 2.41 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:13:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:13:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/562269786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.752 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.787 2 DEBUG nova.storage.rbd_utils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] rbd image fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.794 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.828 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.829 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.829 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.869 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 14 09:13:34 compute-0 nova_compute[259627]: 2025-10-14 09:13:34.870 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.070 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.070 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.071 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.071 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.170 2 DEBUG nova.network.neutron [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Successfully created port: 05e92470-2658-4ea2-9c44-e91cd5226905 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:13:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:13:35 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3960128247' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.233 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.236 2 DEBUG nova.virt.libvirt.vif [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:13:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1606551920',display_name='tempest-ServerAddressesTestJSON-server-1606551920',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1606551920',id=94,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dffdc75b179b426c85be76e05489a77a',ramdisk_id='',reservation_id='r-io2ehiyc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1889343396',owner_user_name='tempest-ServerAddressesTest
JSON-1889343396-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:13:30Z,user_data=None,user_id='9c3ef1ada21b467b9c1717b790fabb93',uuid=fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.237 2 DEBUG nova.network.os_vif_util [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Converting VIF {"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.238 2 DEBUG nova.network.os_vif_util [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:b9:3e,bridge_name='br-int',has_traffic_filtering=True,id=33bf675d-c42f-486f-b483-87fa5091b0ef,network=Network(d74d8641-f56f-4d53-bd5c-d5364a316407),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33bf675d-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.241 2 DEBUG nova.objects.instance [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lazy-loading 'pci_devices' on Instance uuid fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.261 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:13:35 compute-0 nova_compute[259627]:   <uuid>fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf</uuid>
Oct 14 09:13:35 compute-0 nova_compute[259627]:   <name>instance-0000005e</name>
Oct 14 09:13:35 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:13:35 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:13:35 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerAddressesTestJSON-server-1606551920</nova:name>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:13:34</nova:creationTime>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:13:35 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:13:35 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:13:35 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:13:35 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:13:35 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:13:35 compute-0 nova_compute[259627]:         <nova:user uuid="9c3ef1ada21b467b9c1717b790fabb93">tempest-ServerAddressesTestJSON-1889343396-project-member</nova:user>
Oct 14 09:13:35 compute-0 nova_compute[259627]:         <nova:project uuid="dffdc75b179b426c85be76e05489a77a">tempest-ServerAddressesTestJSON-1889343396</nova:project>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:13:35 compute-0 nova_compute[259627]:         <nova:port uuid="33bf675d-c42f-486f-b483-87fa5091b0ef">
Oct 14 09:13:35 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:13:35 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:13:35 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <system>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <entry name="serial">fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf</entry>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <entry name="uuid">fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf</entry>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     </system>
Oct 14 09:13:35 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:13:35 compute-0 nova_compute[259627]:   <os>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:   </os>
Oct 14 09:13:35 compute-0 nova_compute[259627]:   <features>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:   </features>
Oct 14 09:13:35 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:13:35 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:13:35 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk">
Oct 14 09:13:35 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       </source>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:13:35 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk.config">
Oct 14 09:13:35 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       </source>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:13:35 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:9b:b9:3e"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <target dev="tap33bf675d-c4"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf/console.log" append="off"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <video>
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     </video>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:13:35 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:13:35 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:13:35 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:13:35 compute-0 nova_compute[259627]: </domain>
Oct 14 09:13:35 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.263 2 DEBUG nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Preparing to wait for external event network-vif-plugged-33bf675d-c42f-486f-b483-87fa5091b0ef prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.264 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquiring lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.264 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.264 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.265 2 DEBUG nova.virt.libvirt.vif [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:13:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1606551920',display_name='tempest-ServerAddressesTestJSON-server-1606551920',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1606551920',id=94,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dffdc75b179b426c85be76e05489a77a',ramdisk_id='',reservation_id='r-io2ehiyc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1889343396',owner_user_name='tempest-ServerAdd
ressesTestJSON-1889343396-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:13:30Z,user_data=None,user_id='9c3ef1ada21b467b9c1717b790fabb93',uuid=fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.265 2 DEBUG nova.network.os_vif_util [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Converting VIF {"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.266 2 DEBUG nova.network.os_vif_util [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:b9:3e,bridge_name='br-int',has_traffic_filtering=True,id=33bf675d-c42f-486f-b483-87fa5091b0ef,network=Network(d74d8641-f56f-4d53-bd5c-d5364a316407),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33bf675d-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.266 2 DEBUG os_vif [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:b9:3e,bridge_name='br-int',has_traffic_filtering=True,id=33bf675d-c42f-486f-b483-87fa5091b0ef,network=Network(d74d8641-f56f-4d53-bd5c-d5364a316407),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33bf675d-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.268 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433200.2614918, 9e354e27-d674-43c3-890b-caf8731cb827 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.268 2 INFO nova.compute.manager [-] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] VM Stopped (Lifecycle Event)
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33bf675d-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.274 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap33bf675d-c4, col_values=(('external_ids', {'iface-id': '33bf675d-c42f-486f-b483-87fa5091b0ef', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:b9:3e', 'vm-uuid': 'fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:35 compute-0 NetworkManager[44885]: <info>  [1760433215.2764] manager: (tap33bf675d-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.284 2 INFO os_vif [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:b9:3e,bridge_name='br-int',has_traffic_filtering=True,id=33bf675d-c42f-486f-b483-87fa5091b0ef,network=Network(d74d8641-f56f-4d53-bd5c-d5364a316407),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33bf675d-c4')
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.288 2 DEBUG nova.compute.manager [None req-002c8720-3d66-48d1-bb52-1d6331fdd9eb - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.336 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.336 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.336 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] No VIF found with MAC fa:16:3e:9b:b9:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.337 2 INFO nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Using config drive
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.357 2 DEBUG nova.storage.rbd_utils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] rbd image fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:35 compute-0 ceph-mon[74249]: pgmap v1773: 305 pgs: 305 active+clean; 125 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 14 KiB/s wr, 31 op/s
Oct 14 09:13:35 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/562269786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:13:35 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3960128247' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.802 2 INFO nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Creating config drive at /var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf/disk.config
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.811 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5xzszvre execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.5 MiB/s wr, 80 op/s
Oct 14 09:13:35 compute-0 nova_compute[259627]: 2025-10-14 09:13:35.987 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5xzszvre" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.025 2 DEBUG nova.storage.rbd_utils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] rbd image fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.029 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf/disk.config fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.224 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf/disk.config fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.225 2 INFO nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Deleting local config drive /var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf/disk.config because it was imported into RBD.
Oct 14 09:13:36 compute-0 NetworkManager[44885]: <info>  [1760433216.2857] manager: (tap33bf675d-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/398)
Oct 14 09:13:36 compute-0 kernel: tap33bf675d-c4: entered promiscuous mode
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:36 compute-0 ovn_controller[152662]: 2025-10-14T09:13:36Z|00979|binding|INFO|Claiming lport 33bf675d-c42f-486f-b483-87fa5091b0ef for this chassis.
Oct 14 09:13:36 compute-0 ovn_controller[152662]: 2025-10-14T09:13:36Z|00980|binding|INFO|33bf675d-c42f-486f-b483-87fa5091b0ef: Claiming fa:16:3e:9b:b9:3e 10.100.0.9
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.308 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:b9:3e 10.100.0.9'], port_security=['fa:16:3e:9b:b9:3e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d74d8641-f56f-4d53-bd5c-d5364a316407', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dffdc75b179b426c85be76e05489a77a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '403c796c-1a50-44ae-b551-913e7b6b57c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c445f64b-1565-4e25-83d1-f207b66a7e54, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=33bf675d-c42f-486f-b483-87fa5091b0ef) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.310 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 33bf675d-c42f-486f-b483-87fa5091b0ef in datapath d74d8641-f56f-4d53-bd5c-d5364a316407 bound to our chassis
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.312 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d74d8641-f56f-4d53-bd5c-d5364a316407
Oct 14 09:13:36 compute-0 systemd-machined[214636]: New machine qemu-116-instance-0000005e.
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.330 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d50e3095-a998-44f8-9d4a-5a4c958ccb0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.331 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd74d8641-f1 in ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.333 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd74d8641-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.333 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9c9d6025-9738-4e54-89c9-ff660618084d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.334 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8b0030-868a-460c-bbc2-fbc60a4d8c3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.346 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[03e735b1-67b4-440b-b73f-ece1cc61ac01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:36 compute-0 systemd[1]: Started Virtual Machine qemu-116-instance-0000005e.
Oct 14 09:13:36 compute-0 systemd-udevd[352866]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.373 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7afa1b-bf6b-46d5-bf34-1551e17604c7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:36 compute-0 ovn_controller[152662]: 2025-10-14T09:13:36Z|00981|binding|INFO|Setting lport 33bf675d-c42f-486f-b483-87fa5091b0ef ovn-installed in OVS
Oct 14 09:13:36 compute-0 ovn_controller[152662]: 2025-10-14T09:13:36Z|00982|binding|INFO|Setting lport 33bf675d-c42f-486f-b483-87fa5091b0ef up in Southbound
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:36 compute-0 NetworkManager[44885]: <info>  [1760433216.4002] device (tap33bf675d-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:13:36 compute-0 NetworkManager[44885]: <info>  [1760433216.4020] device (tap33bf675d-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.417 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6d254dba-2fd1-4829-9870-5837d2d8ead5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.425 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5756bc58-b93b-41f4-a8f6-435a989b5d40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:36 compute-0 systemd-udevd[352873]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:13:36 compute-0 NetworkManager[44885]: <info>  [1760433216.4272] manager: (tapd74d8641-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/399)
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.461 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[89af97ee-154d-45a3-9810-d8db59302e53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.465 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[78e193df-a5d6-4d7a-85e8-a24d8c9acf44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:36 compute-0 NetworkManager[44885]: <info>  [1760433216.4866] device (tapd74d8641-f0): carrier: link connected
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.493 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6b7b1276-e48c-43ca-ab7c-e767867f03d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.513 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c31bb3a8-880e-4d1a-bffc-6d31ef3ba80d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd74d8641-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:4d:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 284], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699524, 'reachable_time': 16135, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352896, 'error': None, 'target': 'ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.524 2 DEBUG nova.network.neutron [req-507c0465-7c1a-46ee-89f9-50f97b9f8edd req-2db62818-b172-472c-b7bf-eba8bb734b2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Updated VIF entry in instance network info cache for port 33bf675d-c42f-486f-b483-87fa5091b0ef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.527 2 DEBUG nova.network.neutron [req-507c0465-7c1a-46ee-89f9-50f97b9f8edd req-2db62818-b172-472c-b7bf-eba8bb734b2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Updating instance_info_cache with network_info: [{"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.545 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d48a83-12e9-4188-9ddc-1ea5a3aba4ee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:4d6a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699524, 'tstamp': 699524}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352897, 'error': None, 'target': 'ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.548 2 DEBUG oslo_concurrency.lockutils [req-507c0465-7c1a-46ee-89f9-50f97b9f8edd req-2db62818-b172-472c-b7bf-eba8bb734b2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.564 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9590d7cb-9f13-46d9-9454-5b8f31d5c94d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd74d8641-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:4d:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 284], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699524, 'reachable_time': 16135, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352898, 'error': None, 'target': 'ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.601 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[510680f7-e1bb-4ee2-b562-3b9a4fb95922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.656 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[faf87ce0-b884-4435-9627-2350ed64d2a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.657 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd74d8641-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.658 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.658 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd74d8641-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:36 compute-0 NetworkManager[44885]: <info>  [1760433216.6905] manager: (tapd74d8641-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Oct 14 09:13:36 compute-0 kernel: tapd74d8641-f0: entered promiscuous mode
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.693 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd74d8641-f0, col_values=(('external_ids', {'iface-id': '4af3feec-e627-47f7-a581-09cf28b78f23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:36 compute-0 ovn_controller[152662]: 2025-10-14T09:13:36Z|00983|binding|INFO|Releasing lport 4af3feec-e627-47f7-a581-09cf28b78f23 from this chassis (sb_readonly=0)
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.723 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d74d8641-f56f-4d53-bd5c-d5364a316407.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d74d8641-f56f-4d53-bd5c-d5364a316407.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.724 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ac3f3eab-fec5-4640-b822-d4b5ad927cd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.725 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-d74d8641-f56f-4d53-bd5c-d5364a316407
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/d74d8641-f56f-4d53-bd5c-d5364a316407.pid.haproxy
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID d74d8641-f56f-4d53-bd5c-d5364a316407
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:13:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.726 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407', 'env', 'PROCESS_TAG=haproxy-d74d8641-f56f-4d53-bd5c-d5364a316407', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d74d8641-f56f-4d53-bd5c-d5364a316407.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.790 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating instance_info_cache with network_info: [{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.808 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.809 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.809 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.810 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.810 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.989 2 DEBUG oslo_concurrency.lockutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.990 2 DEBUG oslo_concurrency.lockutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:36 compute-0 nova_compute[259627]: 2025-10-14 09:13:36.990 2 INFO nova.compute.manager [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Shelving
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.012 2 DEBUG nova.virt.libvirt.driver [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.072 2 DEBUG nova.compute.manager [req-bbef9a92-c891-4740-9000-fb0337ae0f02 req-2a738638-0e40-4584-a357-4c6a10ebfe5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Received event network-vif-plugged-33bf675d-c42f-486f-b483-87fa5091b0ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.073 2 DEBUG oslo_concurrency.lockutils [req-bbef9a92-c891-4740-9000-fb0337ae0f02 req-2a738638-0e40-4584-a357-4c6a10ebfe5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.073 2 DEBUG oslo_concurrency.lockutils [req-bbef9a92-c891-4740-9000-fb0337ae0f02 req-2a738638-0e40-4584-a357-4c6a10ebfe5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.073 2 DEBUG oslo_concurrency.lockutils [req-bbef9a92-c891-4740-9000-fb0337ae0f02 req-2a738638-0e40-4584-a357-4c6a10ebfe5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.073 2 DEBUG nova.compute.manager [req-bbef9a92-c891-4740-9000-fb0337ae0f02 req-2a738638-0e40-4584-a357-4c6a10ebfe5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Processing event network-vif-plugged-33bf675d-c42f-486f-b483-87fa5091b0ef _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:13:37 compute-0 podman[352930]: 2025-10-14 09:13:37.123358019 +0000 UTC m=+0.068843616 container create f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:13:37 compute-0 systemd[1]: Started libpod-conmon-f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46.scope.
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.177 2 DEBUG nova.network.neutron [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Successfully updated port: 05e92470-2658-4ea2-9c44-e91cd5226905 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:13:37 compute-0 podman[352930]: 2025-10-14 09:13:37.090799207 +0000 UTC m=+0.036284824 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.190 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "refresh_cache-d46b6953-9413-4e6a-94f7-7b5ac9634c16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.190 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquired lock "refresh_cache-d46b6953-9413-4e6a-94f7-7b5ac9634c16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.191 2 DEBUG nova.network.neutron [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:13:37 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d550779a9bd77bd7e2ed61b22d656826dfaf6fc1b4e1ee3150f2fd155f6368/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:13:37 compute-0 podman[352930]: 2025-10-14 09:13:37.227361311 +0000 UTC m=+0.172846928 container init f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 09:13:37 compute-0 podman[352930]: 2025-10-14 09:13:37.234560758 +0000 UTC m=+0.180046355 container start f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 09:13:37 compute-0 neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407[352946]: [NOTICE]   (352950) : New worker (352970) forked
Oct 14 09:13:37 compute-0 neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407[352946]: [NOTICE]   (352950) : Loading success.
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.386 2 DEBUG nova.network.neutron [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:13:37 compute-0 ceph-mon[74249]: pgmap v1774: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.5 MiB/s wr, 80 op/s
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.729 2 DEBUG nova.compute.manager [req-b4b026c3-d700-4231-9ad0-51a114e64e02 req-d92ef929-3ec3-4cc8-b7cd-37d568e14be0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Received event network-changed-05e92470-2658-4ea2-9c44-e91cd5226905 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.729 2 DEBUG nova.compute.manager [req-b4b026c3-d700-4231-9ad0-51a114e64e02 req-d92ef929-3ec3-4cc8-b7cd-37d568e14be0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Refreshing instance network info cache due to event network-changed-05e92470-2658-4ea2-9c44-e91cd5226905. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.730 2 DEBUG oslo_concurrency.lockutils [req-b4b026c3-d700-4231-9ad0-51a114e64e02 req-d92ef929-3ec3-4cc8-b7cd-37d568e14be0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-d46b6953-9413-4e6a-94f7-7b5ac9634c16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:13:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.862 2 DEBUG nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.865 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433217.8635738, fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.865 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] VM Started (Lifecycle Event)
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.869 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.873 2 INFO nova.virt.libvirt.driver [-] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Instance spawned successfully.
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.874 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.897 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.906 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.914 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.915 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.916 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.917 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.918 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.919 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.960 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.961 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433217.8637056, fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.961 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] VM Paused (Lifecycle Event)
Oct 14 09:13:37 compute-0 nova_compute[259627]: 2025-10-14 09:13:37.996 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:38 compute-0 nova_compute[259627]: 2025-10-14 09:13:38.003 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433217.8683548, fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:38 compute-0 nova_compute[259627]: 2025-10-14 09:13:38.003 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] VM Resumed (Lifecycle Event)
Oct 14 09:13:38 compute-0 nova_compute[259627]: 2025-10-14 09:13:38.008 2 INFO nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Took 7.82 seconds to spawn the instance on the hypervisor.
Oct 14 09:13:38 compute-0 nova_compute[259627]: 2025-10-14 09:13:38.009 2 DEBUG nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:38 compute-0 nova_compute[259627]: 2025-10-14 09:13:38.035 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:38 compute-0 nova_compute[259627]: 2025-10-14 09:13:38.039 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:13:38 compute-0 nova_compute[259627]: 2025-10-14 09:13:38.068 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:13:38 compute-0 nova_compute[259627]: 2025-10-14 09:13:38.085 2 INFO nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Took 8.90 seconds to build instance.
Oct 14 09:13:38 compute-0 nova_compute[259627]: 2025-10-14 09:13:38.127 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:38 compute-0 nova_compute[259627]: 2025-10-14 09:13:38.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:39 compute-0 kernel: tap4f827284-f3 (unregistering): left promiscuous mode
Oct 14 09:13:39 compute-0 NetworkManager[44885]: <info>  [1760433219.2313] device (tap4f827284-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:13:39 compute-0 ovn_controller[152662]: 2025-10-14T09:13:39Z|00984|binding|INFO|Releasing lport 4f827284-f357-43c5-bdde-c69731b52914 from this chassis (sb_readonly=0)
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:39 compute-0 ovn_controller[152662]: 2025-10-14T09:13:39Z|00985|binding|INFO|Setting lport 4f827284-f357-43c5-bdde-c69731b52914 down in Southbound
Oct 14 09:13:39 compute-0 ovn_controller[152662]: 2025-10-14T09:13:39Z|00986|binding|INFO|Removing iface tap4f827284-f3 ovn-installed in OVS
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.292 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:d7:f7 10.100.0.7'], port_security=['fa:16:3e:8b:d7:f7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2534f8b9-e832-4b78-ada4-e551429bdc75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '517aafb84156407c8672042097e3ef4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572acc55-453a-444a-ab8d-a15e14283f7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927296e1-b389-4596-b9be-8cf735b93ca2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4f827284-f357-43c5-bdde-c69731b52914) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:13:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.294 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4f827284-f357-43c5-bdde-c69731b52914 in datapath a49b41b4-2559-4a22-a274-a6c7bbe75f2c unbound from our chassis
Oct 14 09:13:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.297 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a49b41b4-2559-4a22-a274-a6c7bbe75f2c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:13:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.298 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[14ab801f-280f-410b-975f-6f7b57340d4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.300 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c namespace which is not needed anymore
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:39 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000056.scope: Deactivated successfully.
Oct 14 09:13:39 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000056.scope: Consumed 15.601s CPU time.
Oct 14 09:13:39 compute-0 systemd-machined[214636]: Machine qemu-107-instance-00000056 terminated.
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.390 2 DEBUG nova.network.neutron [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Updating instance_info_cache with network_info: [{"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.436 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Releasing lock "refresh_cache-d46b6953-9413-4e6a-94f7-7b5ac9634c16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.437 2 DEBUG nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Instance network_info: |[{"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.438 2 DEBUG oslo_concurrency.lockutils [req-b4b026c3-d700-4231-9ad0-51a114e64e02 req-d92ef929-3ec3-4cc8-b7cd-37d568e14be0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-d46b6953-9413-4e6a-94f7-7b5ac9634c16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.438 2 DEBUG nova.network.neutron [req-b4b026c3-d700-4231-9ad0-51a114e64e02 req-d92ef929-3ec3-4cc8-b7cd-37d568e14be0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Refreshing network info cache for port 05e92470-2658-4ea2-9c44-e91cd5226905 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.442 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Start _get_guest_xml network_info=[{"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.446 2 WARNING nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.452 2 DEBUG nova.virt.libvirt.host [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.453 2 DEBUG nova.virt.libvirt.host [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.456 2 DEBUG nova.virt.libvirt.host [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.457 2 DEBUG nova.virt.libvirt.host [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.457 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.458 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.458 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.459 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.459 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.460 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.460 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.460 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.461 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.461 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.461 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.462 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.465 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:39 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[347030]: [NOTICE]   (347034) : haproxy version is 2.8.14-c23fe91
Oct 14 09:13:39 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[347030]: [NOTICE]   (347034) : path to executable is /usr/sbin/haproxy
Oct 14 09:13:39 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[347030]: [WARNING]  (347034) : Exiting Master process...
Oct 14 09:13:39 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[347030]: [WARNING]  (347034) : Exiting Master process...
Oct 14 09:13:39 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[347030]: [ALERT]    (347034) : Current worker (347036) exited with code 143 (Terminated)
Oct 14 09:13:39 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[347030]: [WARNING]  (347034) : All workers exited. Exiting... (0)
Oct 14 09:13:39 compute-0 systemd[1]: libpod-993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864.scope: Deactivated successfully.
Oct 14 09:13:39 compute-0 podman[353025]: 2025-10-14 09:13:39.522886544 +0000 UTC m=+0.066785546 container died 993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 09:13:39 compute-0 ceph-mon[74249]: pgmap v1775: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 14 09:13:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a5b768a60b37b396b810a626d7725822e051f41211fb6861ae019660974357e-merged.mount: Deactivated successfully.
Oct 14 09:13:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864-userdata-shm.mount: Deactivated successfully.
Oct 14 09:13:39 compute-0 podman[353025]: 2025-10-14 09:13:39.59664178 +0000 UTC m=+0.140540782 container cleanup 993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:13:39 compute-0 systemd[1]: libpod-conmon-993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864.scope: Deactivated successfully.
Oct 14 09:13:39 compute-0 podman[353062]: 2025-10-14 09:13:39.671848282 +0000 UTC m=+0.049137911 container remove 993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:13:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.677 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[06ba8c69-b12a-49e1-ad05-f745af81a250]: (4, ('Tue Oct 14 09:13:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c (993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864)\n993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864\nTue Oct 14 09:13:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c (993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864)\n993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.679 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8013aff2-05de-4d62-bd12-0655d0a99608]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.681 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49b41b4-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:39 compute-0 kernel: tapa49b41b4-20: left promiscuous mode
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.721 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[209d4c43-76a4-45f6-8018-e3ab3d3fa6dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.748 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ca146342-17e2-4e8f-a71c-5e3a762277ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.749 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[83cfeb60-1cb6-4d4e-a80a-0068ea039b76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.752 2 DEBUG nova.compute.manager [req-5d4affe1-37a4-4f2d-8af6-ded5b7f69cf5 req-fbef0be6-4dd9-4a50-9ff4-33dc3e53e2d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Received event network-vif-plugged-33bf675d-c42f-486f-b483-87fa5091b0ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:39 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:13:39 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 25K writes, 98K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 25K writes, 8861 syncs, 2.85 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8289 writes, 32K keys, 8289 commit groups, 1.0 writes per commit group, ingest: 31.87 MB, 0.05 MB/s
                                           Interval WAL: 8289 writes, 3376 syncs, 2.46 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.760 2 DEBUG oslo_concurrency.lockutils [req-5d4affe1-37a4-4f2d-8af6-ded5b7f69cf5 req-fbef0be6-4dd9-4a50-9ff4-33dc3e53e2d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.766 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b152a8-adfd-477d-8868-ae0e654c9179]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691206, 'reachable_time': 22013, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353100, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.768 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.768 2 DEBUG oslo_concurrency.lockutils [req-5d4affe1-37a4-4f2d-8af6-ded5b7f69cf5 req-fbef0be6-4dd9-4a50-9ff4-33dc3e53e2d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.768 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f5306a-af5d-41d0-bb34-c40aa0ca1c14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:39 compute-0 systemd[1]: run-netns-ovnmeta\x2da49b41b4\x2d2559\x2d4a22\x2da274\x2da6c7bbe75f2c.mount: Deactivated successfully.
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.775 2 DEBUG oslo_concurrency.lockutils [req-5d4affe1-37a4-4f2d-8af6-ded5b7f69cf5 req-fbef0be6-4dd9-4a50-9ff4-33dc3e53e2d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.776 2 DEBUG nova.compute.manager [req-5d4affe1-37a4-4f2d-8af6-ded5b7f69cf5 req-fbef0be6-4dd9-4a50-9ff4-33dc3e53e2d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] No waiting events found dispatching network-vif-plugged-33bf675d-c42f-486f-b483-87fa5091b0ef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.776 2 WARNING nova.compute.manager [req-5d4affe1-37a4-4f2d-8af6-ded5b7f69cf5 req-fbef0be6-4dd9-4a50-9ff4-33dc3e53e2d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Received unexpected event network-vif-plugged-33bf675d-c42f-486f-b483-87fa5091b0ef for instance with vm_state active and task_state None.
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.904 2 DEBUG nova.compute.manager [req-69c4434a-df56-48df-822d-a12ea5448ca7 req-a8c3b9f0-046a-4616-8aeb-e8f760919c1e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-unplugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.905 2 DEBUG oslo_concurrency.lockutils [req-69c4434a-df56-48df-822d-a12ea5448ca7 req-a8c3b9f0-046a-4616-8aeb-e8f760919c1e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.906 2 DEBUG oslo_concurrency.lockutils [req-69c4434a-df56-48df-822d-a12ea5448ca7 req-a8c3b9f0-046a-4616-8aeb-e8f760919c1e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.906 2 DEBUG oslo_concurrency.lockutils [req-69c4434a-df56-48df-822d-a12ea5448ca7 req-a8c3b9f0-046a-4616-8aeb-e8f760919c1e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.906 2 DEBUG nova.compute.manager [req-69c4434a-df56-48df-822d-a12ea5448ca7 req-a8c3b9f0-046a-4616-8aeb-e8f760919c1e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-unplugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.907 2 WARNING nova.compute.manager [req-69c4434a-df56-48df-822d-a12ea5448ca7 req-a8c3b9f0-046a-4616-8aeb-e8f760919c1e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-unplugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state active and task_state shelving.
Oct 14 09:13:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 14 09:13:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:13:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715862422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.969 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.990 2 DEBUG nova.storage.rbd_utils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:39 compute-0 nova_compute[259627]: 2025-10-14 09:13:39.993 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.036 2 INFO nova.virt.libvirt.driver [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance shutdown successfully after 3 seconds.
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.047 2 INFO nova.virt.libvirt.driver [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance destroyed successfully.
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.048 2 DEBUG nova.objects.instance [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'numa_topology' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.306 2 DEBUG oslo_concurrency.lockutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquiring lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.307 2 DEBUG oslo_concurrency.lockutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.308 2 DEBUG oslo_concurrency.lockutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquiring lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.309 2 DEBUG oslo_concurrency.lockutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.309 2 DEBUG oslo_concurrency.lockutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.311 2 INFO nova.compute.manager [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Terminating instance
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.313 2 DEBUG nova.compute.manager [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:13:40 compute-0 kernel: tap33bf675d-c4 (unregistering): left promiscuous mode
Oct 14 09:13:40 compute-0 NetworkManager[44885]: <info>  [1760433220.3565] device (tap33bf675d-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:40 compute-0 ovn_controller[152662]: 2025-10-14T09:13:40Z|00987|binding|INFO|Releasing lport 33bf675d-c42f-486f-b483-87fa5091b0ef from this chassis (sb_readonly=0)
Oct 14 09:13:40 compute-0 ovn_controller[152662]: 2025-10-14T09:13:40Z|00988|binding|INFO|Setting lport 33bf675d-c42f-486f-b483-87fa5091b0ef down in Southbound
Oct 14 09:13:40 compute-0 ovn_controller[152662]: 2025-10-14T09:13:40Z|00989|binding|INFO|Removing iface tap33bf675d-c4 ovn-installed in OVS
Oct 14 09:13:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.425 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:b9:3e 10.100.0.9'], port_security=['fa:16:3e:9b:b9:3e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d74d8641-f56f-4d53-bd5c-d5364a316407', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dffdc75b179b426c85be76e05489a77a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '403c796c-1a50-44ae-b551-913e7b6b57c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c445f64b-1565-4e25-83d1-f207b66a7e54, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=33bf675d-c42f-486f-b483-87fa5091b0ef) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:13:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.427 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 33bf675d-c42f-486f-b483-87fa5091b0ef in datapath d74d8641-f56f-4d53-bd5c-d5364a316407 unbound from our chassis
Oct 14 09:13:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.430 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d74d8641-f56f-4d53-bd5c-d5364a316407, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:13:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.431 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5b8cd2e3-abd1-49dd-86b5-d398ad9ece54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.432 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407 namespace which is not needed anymore
Oct 14 09:13:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:13:40 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3699146280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.460 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.463 2 DEBUG nova.virt.libvirt.vif [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:13:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1224416488',display_name='tempest-₡-1224416488',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1224416488',id=95,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-c5eeedgr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:13:32Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=d46b6953-9413-4e6a-94f7-7b5ac9634c16,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.463 2 DEBUG nova.network.os_vif_util [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.465 2 DEBUG nova.network.os_vif_util [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:fb:45,bridge_name='br-int',has_traffic_filtering=True,id=05e92470-2658-4ea2-9c44-e91cd5226905,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e92470-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:13:40 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Oct 14 09:13:40 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005e.scope: Consumed 3.948s CPU time.
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.467 2 DEBUG nova.objects.instance [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'pci_devices' on Instance uuid d46b6953-9413-4e6a-94f7-7b5ac9634c16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.470 2 INFO nova.virt.libvirt.driver [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Beginning cold snapshot process
Oct 14 09:13:40 compute-0 systemd-machined[214636]: Machine qemu-116-instance-0000005e terminated.
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.506 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:13:40 compute-0 nova_compute[259627]:   <uuid>d46b6953-9413-4e6a-94f7-7b5ac9634c16</uuid>
Oct 14 09:13:40 compute-0 nova_compute[259627]:   <name>instance-0000005f</name>
Oct 14 09:13:40 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:13:40 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:13:40 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <nova:name>tempest-₡-1224416488</nova:name>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:13:39</nova:creationTime>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:13:40 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:13:40 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:13:40 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:13:40 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:13:40 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:13:40 compute-0 nova_compute[259627]:         <nova:user uuid="a287ef08fc5c4f218bf06cd2c7ed021e">tempest-ServersTestJSON-2060951674-project-member</nova:user>
Oct 14 09:13:40 compute-0 nova_compute[259627]:         <nova:project uuid="0a080fae2f3c4e39a6cca225203f5ec6">tempest-ServersTestJSON-2060951674</nova:project>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:13:40 compute-0 nova_compute[259627]:         <nova:port uuid="05e92470-2658-4ea2-9c44-e91cd5226905">
Oct 14 09:13:40 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:13:40 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:13:40 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <system>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <entry name="serial">d46b6953-9413-4e6a-94f7-7b5ac9634c16</entry>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <entry name="uuid">d46b6953-9413-4e6a-94f7-7b5ac9634c16</entry>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     </system>
Oct 14 09:13:40 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:13:40 compute-0 nova_compute[259627]:   <os>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:   </os>
Oct 14 09:13:40 compute-0 nova_compute[259627]:   <features>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:   </features>
Oct 14 09:13:40 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:13:40 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:13:40 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk">
Oct 14 09:13:40 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       </source>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:13:40 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk.config">
Oct 14 09:13:40 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       </source>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:13:40 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:7f:fb:45"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <target dev="tap05e92470-26"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16/console.log" append="off"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <video>
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     </video>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:13:40 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:13:40 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:13:40 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:13:40 compute-0 nova_compute[259627]: </domain>
Oct 14 09:13:40 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.507 2 DEBUG nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Preparing to wait for external event network-vif-plugged-05e92470-2658-4ea2-9c44-e91cd5226905 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.507 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.507 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.507 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.508 2 DEBUG nova.virt.libvirt.vif [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:13:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1224416488',display_name='tempest-₡-1224416488',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1224416488',id=95,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-c5eeedgr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted
_certs=None,updated_at=2025-10-14T09:13:32Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=d46b6953-9413-4e6a-94f7-7b5ac9634c16,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.508 2 DEBUG nova.network.os_vif_util [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.509 2 DEBUG nova.network.os_vif_util [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:fb:45,bridge_name='br-int',has_traffic_filtering=True,id=05e92470-2658-4ea2-9c44-e91cd5226905,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e92470-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.509 2 DEBUG os_vif [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:fb:45,bridge_name='br-int',has_traffic_filtering=True,id=05e92470-2658-4ea2-9c44-e91cd5226905,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e92470-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.511 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05e92470-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.514 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05e92470-26, col_values=(('external_ids', {'iface-id': '05e92470-2658-4ea2-9c44-e91cd5226905', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:fb:45', 'vm-uuid': 'd46b6953-9413-4e6a-94f7-7b5ac9634c16'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:40 compute-0 NetworkManager[44885]: <info>  [1760433220.5167] manager: (tap05e92470-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/401)
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.525 2 INFO os_vif [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:fb:45,bridge_name='br-int',has_traffic_filtering=True,id=05e92470-2658-4ea2-9c44-e91cd5226905,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e92470-26')
Oct 14 09:13:40 compute-0 NetworkManager[44885]: <info>  [1760433220.5338] manager: (tap33bf675d-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/402)
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.545 2 INFO nova.virt.libvirt.driver [-] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Instance destroyed successfully.
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.546 2 DEBUG nova.objects.instance [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lazy-loading 'resources' on Instance uuid fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:40 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1715862422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:13:40 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3699146280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.574 2 DEBUG nova.virt.libvirt.vif [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:13:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1606551920',display_name='tempest-ServerAddressesTestJSON-server-1606551920',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1606551920',id=94,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:13:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dffdc75b179b426c85be76e05489a77a',ramdisk_id='',reservation_id='r-io2ehiyc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1889343396',owner_user_name='tempest-ServerAddressesTestJSON-1889343396-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:13:38Z,user_data=None,user_id='9c3ef1ada21b467b9c1717b790fabb93',uuid=fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.574 2 DEBUG nova.network.os_vif_util [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Converting VIF {"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.575 2 DEBUG nova.network.os_vif_util [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:b9:3e,bridge_name='br-int',has_traffic_filtering=True,id=33bf675d-c42f-486f-b483-87fa5091b0ef,network=Network(d74d8641-f56f-4d53-bd5c-d5364a316407),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33bf675d-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.575 2 DEBUG os_vif [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:b9:3e,bridge_name='br-int',has_traffic_filtering=True,id=33bf675d-c42f-486f-b483-87fa5091b0ef,network=Network(d74d8641-f56f-4d53-bd5c-d5364a316407),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33bf675d-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.579 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33bf675d-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:40 compute-0 neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407[352946]: [NOTICE]   (352950) : haproxy version is 2.8.14-c23fe91
Oct 14 09:13:40 compute-0 neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407[352946]: [NOTICE]   (352950) : path to executable is /usr/sbin/haproxy
Oct 14 09:13:40 compute-0 neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407[352946]: [WARNING]  (352950) : Exiting Master process...
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:40 compute-0 neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407[352946]: [ALERT]    (352950) : Current worker (352970) exited with code 143 (Terminated)
Oct 14 09:13:40 compute-0 neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407[352946]: [WARNING]  (352950) : All workers exited. Exiting... (0)
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:13:40 compute-0 systemd[1]: libpod-f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46.scope: Deactivated successfully.
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.590 2 INFO os_vif [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:b9:3e,bridge_name='br-int',has_traffic_filtering=True,id=33bf675d-c42f-486f-b483-87fa5091b0ef,network=Network(d74d8641-f56f-4d53-bd5c-d5364a316407),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33bf675d-c4')
Oct 14 09:13:40 compute-0 podman[353171]: 2025-10-14 09:13:40.593145841 +0000 UTC m=+0.048041334 container died f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 09:13:40 compute-0 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 09:13:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46-userdata-shm.mount: Deactivated successfully.
Oct 14 09:13:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4d550779a9bd77bd7e2ed61b22d656826dfaf6fc1b4e1ee3150f2fd155f6368-merged.mount: Deactivated successfully.
Oct 14 09:13:40 compute-0 podman[353171]: 2025-10-14 09:13:40.652936723 +0000 UTC m=+0.107832236 container cleanup f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:13:40 compute-0 systemd[1]: libpod-conmon-f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46.scope: Deactivated successfully.
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.663 2 DEBUG nova.virt.libvirt.imagebackend [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.667 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.668 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.668 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No VIF found with MAC fa:16:3e:7f:fb:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.668 2 INFO nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Using config drive
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.689 2 DEBUG nova.storage.rbd_utils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:40 compute-0 podman[353259]: 2025-10-14 09:13:40.723980883 +0000 UTC m=+0.050431293 container remove f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 09:13:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.729 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8ffcf5d0-c313-4dad-98f8-9d515a816fa3]: (4, ('Tue Oct 14 09:13:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407 (f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46)\nf05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46\nTue Oct 14 09:13:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407 (f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46)\nf05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.731 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9e55f1b2-7c1b-49a0-ae6f-f8f4ac879062]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.732 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd74d8641-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:40 compute-0 kernel: tapd74d8641-f0: left promiscuous mode
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.759 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a3cf1ef2-6241-4357-9a35-7db204346758]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.786 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[247bacb9-a35e-4495-b334-c79bafd7506f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.788 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f39e7d7b-5598-45d9-88c0-5c020f292e24]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.803 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2646f598-1957-44fb-914b-9ffb25f39190]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699516, 'reachable_time': 16683, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353300, 'error': None, 'target': 'ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:40 compute-0 systemd[1]: run-netns-ovnmeta\x2dd74d8641\x2df56f\x2d4d53\x2dbd5c\x2dd5364a316407.mount: Deactivated successfully.
Oct 14 09:13:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.808 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:13:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.808 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[37b53176-9c5e-4772-b806-53c9981d51a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.943 2 DEBUG nova.storage.rbd_utils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] creating snapshot(b246f6613b8749a58e5b42598e1fdcf4) on rbd image(2534f8b9-e832-4b78-ada4-e551429bdc75_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:13:40 compute-0 nova_compute[259627]: 2025-10-14 09:13:40.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.025 2 INFO nova.virt.libvirt.driver [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Deleting instance files /var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_del
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.026 2 INFO nova.virt.libvirt.driver [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Deletion of /var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_del complete
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.093 2 INFO nova.compute.manager [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.093 2 DEBUG oslo.service.loopingcall [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.094 2 DEBUG nova.compute.manager [-] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.094 2 DEBUG nova.network.neutron [-] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.276 2 DEBUG nova.network.neutron [req-b4b026c3-d700-4231-9ad0-51a114e64e02 req-d92ef929-3ec3-4cc8-b7cd-37d568e14be0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Updated VIF entry in instance network info cache for port 05e92470-2658-4ea2-9c44-e91cd5226905. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.277 2 DEBUG nova.network.neutron [req-b4b026c3-d700-4231-9ad0-51a114e64e02 req-d92ef929-3ec3-4cc8-b7cd-37d568e14be0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Updating instance_info_cache with network_info: [{"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.291 2 INFO nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Creating config drive at /var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16/disk.config
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.301 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpywyd55ii execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.367 2 DEBUG oslo_concurrency.lockutils [req-b4b026c3-d700-4231-9ad0-51a114e64e02 req-d92ef929-3ec3-4cc8-b7cd-37d568e14be0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-d46b6953-9413-4e6a-94f7-7b5ac9634c16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.467 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpywyd55ii" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.489 2 DEBUG nova.storage.rbd_utils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.493 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16/disk.config d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Oct 14 09:13:41 compute-0 ceph-mon[74249]: pgmap v1776: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 14 09:13:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Oct 14 09:13:41 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.661 2 DEBUG nova.storage.rbd_utils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] cloning vms/2534f8b9-e832-4b78-ada4-e551429bdc75_disk@b246f6613b8749a58e5b42598e1fdcf4 to images/7b536765-adaa-4682-86b5-b3ff0be769bf clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.720 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16/disk.config d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.721 2 INFO nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Deleting local config drive /var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16/disk.config because it was imported into RBD.
Oct 14 09:13:41 compute-0 systemd-udevd[353056]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:13:41 compute-0 kernel: tap05e92470-26: entered promiscuous mode
Oct 14 09:13:41 compute-0 NetworkManager[44885]: <info>  [1760433221.7894] manager: (tap05e92470-26): new Tun device (/org/freedesktop/NetworkManager/Devices/403)
Oct 14 09:13:41 compute-0 ovn_controller[152662]: 2025-10-14T09:13:41Z|00990|binding|INFO|Claiming lport 05e92470-2658-4ea2-9c44-e91cd5226905 for this chassis.
Oct 14 09:13:41 compute-0 ovn_controller[152662]: 2025-10-14T09:13:41Z|00991|binding|INFO|05e92470-2658-4ea2-9c44-e91cd5226905: Claiming fa:16:3e:7f:fb:45 10.100.0.13
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:41 compute-0 NetworkManager[44885]: <info>  [1760433221.8140] device (tap05e92470-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:13:41 compute-0 NetworkManager[44885]: <info>  [1760433221.8155] device (tap05e92470-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:13:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.819 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:fb:45 10.100.0.13'], port_security=['fa:16:3e:7f:fb:45 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd46b6953-9413-4e6a-94f7-7b5ac9634c16', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=05e92470-2658-4ea2-9c44-e91cd5226905) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:13:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.821 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 05e92470-2658-4ea2-9c44-e91cd5226905 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e bound to our chassis
Oct 14 09:13:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.823 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e
Oct 14 09:13:41 compute-0 systemd-machined[214636]: New machine qemu-117-instance-0000005f.
Oct 14 09:13:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.845 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a32a488e-a37e-4d0e-92ea-1af611545dba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.848 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfbecee11-41 in ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:13:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.850 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfbecee11-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:13:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.851 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[742e9c03-a074-474b-96c7-b7ba656403c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:41 compute-0 systemd[1]: Started Virtual Machine qemu-117-instance-0000005f.
Oct 14 09:13:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.852 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8265d6b3-1529-4028-b274-6828c8b316c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.859 2 DEBUG nova.storage.rbd_utils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] flattening images/7b536765-adaa-4682-86b5-b3ff0be769bf flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:13:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.871 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb1c6ce-34e9-4010-b467-3972898c2aef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.892 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[49836e53-c2b3-4db8-9382-7abbf5513d79]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:41 compute-0 ovn_controller[152662]: 2025-10-14T09:13:41Z|00992|binding|INFO|Setting lport 05e92470-2658-4ea2-9c44-e91cd5226905 ovn-installed in OVS
Oct 14 09:13:41 compute-0 ovn_controller[152662]: 2025-10-14T09:13:41Z|00993|binding|INFO|Setting lport 05e92470-2658-4ea2-9c44-e91cd5226905 up in Southbound
Oct 14 09:13:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.931 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6b77a2d2-1524-49ce-a67f-1de9b43992df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:41 compute-0 NetworkManager[44885]: <info>  [1760433221.9424] manager: (tapfbecee11-40): new Veth device (/org/freedesktop/NetworkManager/Devices/404)
Oct 14 09:13:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.943 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[618bb619-f995-402c-8c2c-4681afe1fe82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 152 op/s
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:13:41 compute-0 nova_compute[259627]: 2025-10-14 09:13:41.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 14 09:13:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.997 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4b041b7a-3104-4726-bc93-a674608cf3f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.002 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa00bf5-85b3-4a17-98bf-bb1800a6a69a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.009 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 14 09:13:42 compute-0 NetworkManager[44885]: <info>  [1760433222.0328] device (tapfbecee11-40): carrier: link connected
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.040 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1952914e-6739-4b60-a846-3dc874b20857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.062 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8e0a76-f81b-4be4-889f-17acb66819cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353465, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.087 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7a18ae3b-1356-4ad0-bbc6-dddd1fb97bb5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe15:b3fd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700079, 'tstamp': 700079}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353466, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.109 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[03406c87-82a7-445b-bed7-542518e07f5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 353467, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.163 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5cfd85-ebcc-4a03-8599-b55019caae6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.243 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9f8307a0-c379-4ef9-8e72-b0206745cc23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.244 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.245 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.246 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:42 compute-0 NetworkManager[44885]: <info>  [1760433222.2482] manager: (tapfbecee11-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Oct 14 09:13:42 compute-0 kernel: tapfbecee11-40: entered promiscuous mode
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.253 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:42 compute-0 ovn_controller[152662]: 2025-10-14T09:13:42Z|00994|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.290 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fbecee11-4892-4e36-88d8-98879af7bb1e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fbecee11-4892-4e36-88d8-98879af7bb1e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.291 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[31049402-5145-4342-9ff2-ad02b262d79a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.291 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-fbecee11-4892-4e36-88d8-98879af7bb1e
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/fbecee11-4892-4e36-88d8-98879af7bb1e.pid.haproxy
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID fbecee11-4892-4e36-88d8-98879af7bb1e
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:13:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.292 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'env', 'PROCESS_TAG=haproxy-fbecee11-4892-4e36-88d8-98879af7bb1e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fbecee11-4892-4e36-88d8-98879af7bb1e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.298 2 DEBUG nova.storage.rbd_utils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] removing snapshot(b246f6613b8749a58e5b42598e1fdcf4) on rbd image(2534f8b9-e832-4b78-ada4-e551429bdc75_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.512 2 DEBUG nova.compute.manager [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Received event network-vif-unplugged-33bf675d-c42f-486f-b483-87fa5091b0ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.513 2 DEBUG oslo_concurrency.lockutils [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.513 2 DEBUG oslo_concurrency.lockutils [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.513 2 DEBUG oslo_concurrency.lockutils [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.514 2 DEBUG nova.compute.manager [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] No waiting events found dispatching network-vif-unplugged-33bf675d-c42f-486f-b483-87fa5091b0ef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.514 2 DEBUG nova.compute.manager [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Received event network-vif-unplugged-33bf675d-c42f-486f-b483-87fa5091b0ef for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.514 2 DEBUG nova.compute.manager [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Received event network-vif-plugged-33bf675d-c42f-486f-b483-87fa5091b0ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.514 2 DEBUG oslo_concurrency.lockutils [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.514 2 DEBUG oslo_concurrency.lockutils [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.515 2 DEBUG oslo_concurrency.lockutils [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.515 2 DEBUG nova.compute.manager [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] No waiting events found dispatching network-vif-plugged-33bf675d-c42f-486f-b483-87fa5091b0ef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.515 2 WARNING nova.compute.manager [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Received unexpected event network-vif-plugged-33bf675d-c42f-486f-b483-87fa5091b0ef for instance with vm_state active and task_state deleting.
Oct 14 09:13:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Oct 14 09:13:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Oct 14 09:13:42 compute-0 ceph-mon[74249]: osdmap e253: 3 total, 3 up, 3 in
Oct 14 09:13:42 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.649 2 DEBUG nova.storage.rbd_utils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] creating snapshot(snap) on rbd image(7b536765-adaa-4682-86b5-b3ff0be769bf) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:13:42 compute-0 podman[353565]: 2025-10-14 09:13:42.73865754 +0000 UTC m=+0.072981079 container create 5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 09:13:42 compute-0 podman[353565]: 2025-10-14 09:13:42.705743819 +0000 UTC m=+0.040067368 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:13:42 compute-0 systemd[1]: Started libpod-conmon-5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d.scope.
Oct 14 09:13:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:13:42 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b011f8a1370d78d7a628d9aaf1bebb9c89844fdc71c852fd6c3b4b8bc5c51cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.854 2 DEBUG nova.network.neutron [-] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:13:42 compute-0 podman[353565]: 2025-10-14 09:13:42.869922662 +0000 UTC m=+0.204246221 container init 5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.872 2 INFO nova.compute.manager [-] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Took 1.78 seconds to deallocate network for instance.
Oct 14 09:13:42 compute-0 podman[353565]: 2025-10-14 09:13:42.882610125 +0000 UTC m=+0.216933654 container start 5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.916 2 DEBUG nova.compute.manager [req-c6bda9e7-ba5b-450c-9961-26b073f07315 req-179cd4e0-7741-4baa-a15d-cb23be0be5b3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.917 2 DEBUG oslo_concurrency.lockutils [req-c6bda9e7-ba5b-450c-9961-26b073f07315 req-179cd4e0-7741-4baa-a15d-cb23be0be5b3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.917 2 DEBUG oslo_concurrency.lockutils [req-c6bda9e7-ba5b-450c-9961-26b073f07315 req-179cd4e0-7741-4baa-a15d-cb23be0be5b3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.918 2 DEBUG oslo_concurrency.lockutils [req-c6bda9e7-ba5b-450c-9961-26b073f07315 req-179cd4e0-7741-4baa-a15d-cb23be0be5b3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.918 2 DEBUG nova.compute.manager [req-c6bda9e7-ba5b-450c-9961-26b073f07315 req-179cd4e0-7741-4baa-a15d-cb23be0be5b3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.918 2 WARNING nova.compute.manager [req-c6bda9e7-ba5b-450c-9961-26b073f07315 req-179cd4e0-7741-4baa-a15d-cb23be0be5b3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state active and task_state shelving_image_uploading.
Oct 14 09:13:42 compute-0 neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e[353599]: [NOTICE]   (353603) : New worker (353605) forked
Oct 14 09:13:42 compute-0 neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e[353599]: [NOTICE]   (353603) : Loading success.
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.924 2 DEBUG oslo_concurrency.lockutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:42 compute-0 nova_compute[259627]: 2025-10-14 09:13:42.925 2 DEBUG oslo_concurrency.lockutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:43 compute-0 nova_compute[259627]: 2025-10-14 09:13:43.042 2 DEBUG oslo_concurrency.processutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:43 compute-0 nova_compute[259627]: 2025-10-14 09:13:43.110 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433223.0681987, d46b6953-9413-4e6a-94f7-7b5ac9634c16 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:43 compute-0 nova_compute[259627]: 2025-10-14 09:13:43.111 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] VM Started (Lifecycle Event)
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:13:43 compute-0 nova_compute[259627]: 2025-10-14 09:13:43.142 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014536835807774956 of space, bias 1.0, pg target 0.4361050742332487 quantized to 32 (current 32)
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:13:43 compute-0 nova_compute[259627]: 2025-10-14 09:13:43.160 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433223.0686975, d46b6953-9413-4e6a-94f7-7b5ac9634c16 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:43 compute-0 nova_compute[259627]: 2025-10-14 09:13:43.161 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] VM Paused (Lifecycle Event)
Oct 14 09:13:43 compute-0 nova_compute[259627]: 2025-10-14 09:13:43.187 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:43 compute-0 nova_compute[259627]: 2025-10-14 09:13:43.192 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:13:43 compute-0 nova_compute[259627]: 2025-10-14 09:13:43.217 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:13:43 compute-0 nova_compute[259627]: 2025-10-14 09:13:43.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:13:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/519279891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:43 compute-0 nova_compute[259627]: 2025-10-14 09:13:43.592 2 DEBUG oslo_concurrency.processutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Oct 14 09:13:43 compute-0 ceph-mon[74249]: pgmap v1778: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 152 op/s
Oct 14 09:13:43 compute-0 ceph-mon[74249]: osdmap e254: 3 total, 3 up, 3 in
Oct 14 09:13:43 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/519279891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:43 compute-0 nova_compute[259627]: 2025-10-14 09:13:43.602 2 DEBUG nova.compute.provider_tree [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:13:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Oct 14 09:13:43 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Oct 14 09:13:43 compute-0 nova_compute[259627]: 2025-10-14 09:13:43.625 2 DEBUG nova.scheduler.client.report [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:13:43 compute-0 nova_compute[259627]: 2025-10-14 09:13:43.651 2 DEBUG oslo_concurrency.lockutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:43 compute-0 nova_compute[259627]: 2025-10-14 09:13:43.679 2 INFO nova.scheduler.client.report [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Deleted allocations for instance fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf
Oct 14 09:13:43 compute-0 nova_compute[259627]: 2025-10-14 09:13:43.759 2 DEBUG oslo_concurrency.lockutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 42 KiB/s wr, 156 op/s
Oct 14 09:13:44 compute-0 ceph-mon[74249]: osdmap e255: 3 total, 3 up, 3 in
Oct 14 09:13:44 compute-0 nova_compute[259627]: 2025-10-14 09:13:44.760 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:44 compute-0 nova_compute[259627]: 2025-10-14 09:13:44.761 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:44 compute-0 nova_compute[259627]: 2025-10-14 09:13:44.788 2 DEBUG nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:13:44 compute-0 nova_compute[259627]: 2025-10-14 09:13:44.853 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:44 compute-0 nova_compute[259627]: 2025-10-14 09:13:44.854 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:44 compute-0 nova_compute[259627]: 2025-10-14 09:13:44.861 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:13:44 compute-0 nova_compute[259627]: 2025-10-14 09:13:44.862 2 INFO nova.compute.claims [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.016 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.081 2 INFO nova.virt.libvirt.driver [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Snapshot image upload complete
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.083 2 DEBUG nova.compute.manager [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.086 2 DEBUG nova.compute.manager [req-2e8d67c7-cb07-4746-966b-2b315b9b3eaa req-8ab4f584-4588-4861-9d9d-0270af90c3da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Received event network-vif-deleted-33bf675d-c42f-486f-b483-87fa5091b0ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.187 2 INFO nova.compute.manager [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Shelve offloading
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.200 2 INFO nova.virt.libvirt.driver [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance destroyed successfully.
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.201 2 DEBUG nova.compute.manager [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.205 2 DEBUG oslo_concurrency.lockutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.205 2 DEBUG oslo_concurrency.lockutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquired lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.206 2 DEBUG nova.network.neutron [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:13:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:13:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2528808103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.554 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.562 2 DEBUG nova.compute.provider_tree [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.576 2 DEBUG nova.scheduler.client.report [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.602 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.602 2 DEBUG nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:45 compute-0 ceph-mon[74249]: pgmap v1781: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 42 KiB/s wr, 156 op/s
Oct 14 09:13:45 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2528808103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.641 2 DEBUG nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.641 2 DEBUG nova.network.neutron [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.662 2 INFO nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.678 2 DEBUG nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.773 2 DEBUG nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.774 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.775 2 INFO nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Creating image(s)
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.795 2 DEBUG nova.storage.rbd_utils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.817 2 DEBUG nova.storage.rbd_utils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.839 2 DEBUG nova.storage.rbd_utils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.843 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.938 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.938 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.939 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.939 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 396 op/s
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.960 2 DEBUG nova.storage.rbd_utils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:45 compute-0 nova_compute[259627]: 2025-10-14 09:13:45.962 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 b595141f-123e-4250-bfec-888d866fd0c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:46 compute-0 nova_compute[259627]: 2025-10-14 09:13:46.167 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:13:46 compute-0 nova_compute[259627]: 2025-10-14 09:13:46.200 2 WARNING nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] While synchronizing instance power states, found 3 instances in the database and 2 instances on the hypervisor.
Oct 14 09:13:46 compute-0 nova_compute[259627]: 2025-10-14 09:13:46.201 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Triggering sync for uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 14 09:13:46 compute-0 nova_compute[259627]: 2025-10-14 09:13:46.201 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Triggering sync for uuid d46b6953-9413-4e6a-94f7-7b5ac9634c16 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 14 09:13:46 compute-0 nova_compute[259627]: 2025-10-14 09:13:46.201 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Triggering sync for uuid b595141f-123e-4250-bfec-888d866fd0c6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 14 09:13:46 compute-0 nova_compute[259627]: 2025-10-14 09:13:46.202 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:46 compute-0 nova_compute[259627]: 2025-10-14 09:13:46.202 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:46 compute-0 nova_compute[259627]: 2025-10-14 09:13:46.202 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:46 compute-0 nova_compute[259627]: 2025-10-14 09:13:46.213 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 b595141f-123e-4250-bfec-888d866fd0c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:46 compute-0 nova_compute[259627]: 2025-10-14 09:13:46.272 2 DEBUG nova.storage.rbd_utils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] resizing rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:13:46 compute-0 nova_compute[259627]: 2025-10-14 09:13:46.365 2 DEBUG nova.objects.instance [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'migration_context' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:13:46 compute-0 nova_compute[259627]: 2025-10-14 09:13:46.382 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:13:46 compute-0 nova_compute[259627]: 2025-10-14 09:13:46.382 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Ensure instance console log exists: /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:13:46 compute-0 nova_compute[259627]: 2025-10-14 09:13:46.383 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:46 compute-0 nova_compute[259627]: 2025-10-14 09:13:46.383 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:46 compute-0 nova_compute[259627]: 2025-10-14 09:13:46.383 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:46 compute-0 nova_compute[259627]: 2025-10-14 09:13:46.389 2 DEBUG nova.policy [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e992bcb79c4946a8985e3df25eb216ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d24993a343a425dbddac7e32be0c86b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.346 2 DEBUG nova.network.neutron [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating instance_info_cache with network_info: [{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.365 2 DEBUG oslo_concurrency.lockutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Releasing lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:13:47 compute-0 ceph-mon[74249]: pgmap v1782: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 396 op/s
Oct 14 09:13:47 compute-0 podman[353824]: 2025-10-14 09:13:47.690755243 +0000 UTC m=+0.087547418 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd)
Oct 14 09:13:47 compute-0 podman[353825]: 2025-10-14 09:13:47.691356538 +0000 UTC m=+0.087536878 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:13:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:13:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Oct 14 09:13:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Oct 14 09:13:47 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.891 2 DEBUG nova.compute.manager [req-4ce80f17-4f84-4466-8744-876de7db9d7d req-8ad290a1-aa46-4db7-8eea-0881f823ea33 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Received event network-vif-plugged-05e92470-2658-4ea2-9c44-e91cd5226905 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.891 2 DEBUG oslo_concurrency.lockutils [req-4ce80f17-4f84-4466-8744-876de7db9d7d req-8ad290a1-aa46-4db7-8eea-0881f823ea33 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.891 2 DEBUG oslo_concurrency.lockutils [req-4ce80f17-4f84-4466-8744-876de7db9d7d req-8ad290a1-aa46-4db7-8eea-0881f823ea33 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.892 2 DEBUG oslo_concurrency.lockutils [req-4ce80f17-4f84-4466-8744-876de7db9d7d req-8ad290a1-aa46-4db7-8eea-0881f823ea33 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.892 2 DEBUG nova.compute.manager [req-4ce80f17-4f84-4466-8744-876de7db9d7d req-8ad290a1-aa46-4db7-8eea-0881f823ea33 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Processing event network-vif-plugged-05e92470-2658-4ea2-9c44-e91cd5226905 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.893 2 DEBUG nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.907 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433227.9036868, d46b6953-9413-4e6a-94f7-7b5ac9634c16 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.907 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] VM Resumed (Lifecycle Event)
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.909 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.915 2 INFO nova.virt.libvirt.driver [-] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Instance spawned successfully.
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.916 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:13:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 239 op/s
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.959 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.968 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.973 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.974 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.975 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.975 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.976 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:47 compute-0 nova_compute[259627]: 2025-10-14 09:13:47.977 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.029 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.086 2 INFO nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Took 15.33 seconds to spawn the instance on the hypervisor.
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.087 2 DEBUG nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.172 2 INFO nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Took 17.11 seconds to build instance.
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.193 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.194 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 1.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.194 2 INFO nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.195 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.391 2 DEBUG nova.network.neutron [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Successfully created port: 7103ce4a-69e8-454b-aed3-251ecb109232 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:13:48 compute-0 sudo[353862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:13:48 compute-0 sudo[353862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:13:48 compute-0 sudo[353862]: pam_unix(sudo:session): session closed for user root
Oct 14 09:13:48 compute-0 sudo[353887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:13:48 compute-0 sudo[353887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:13:48 compute-0 sudo[353887]: pam_unix(sudo:session): session closed for user root
Oct 14 09:13:48 compute-0 ovn_controller[152662]: 2025-10-14T09:13:48Z|00995|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 09:13:48 compute-0 sudo[353912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:13:48 compute-0 sudo[353912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:13:48 compute-0 sudo[353912]: pam_unix(sudo:session): session closed for user root
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:48 compute-0 sudo[353937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:13:48 compute-0 sudo[353937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.794 2 INFO nova.virt.libvirt.driver [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance destroyed successfully.
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.794 2 DEBUG nova.objects.instance [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'resources' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.805 2 DEBUG nova.virt.libvirt.vif [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:12:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-17250352',display_name='tempest-ServersNegativeTestJSON-server-17250352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-17250352',id=86,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:12:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-rj00rja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegativeTestJSON-1475695514-project-member',shelved_at='2025-10-14T09:13:45.082732',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='7b536765-adaa-4682-86b5-b3ff0be769bf'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:13:40Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=2534f8b9-e832-4b78-ada4-e551429bdc75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.805 2 DEBUG nova.network.os_vif_util [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.805 2 DEBUG nova.network.os_vif_util [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.806 2 DEBUG os_vif [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.808 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f827284-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:13:48 compute-0 nova_compute[259627]: 2025-10-14 09:13:48.812 2 INFO os_vif [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3')
Oct 14 09:13:48 compute-0 ceph-mon[74249]: osdmap e256: 3 total, 3 up, 3 in
Oct 14 09:13:48 compute-0 ceph-mon[74249]: pgmap v1784: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 239 op/s
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.187 2 INFO nova.virt.libvirt.driver [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Deleting instance files /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75_del
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.188 2 INFO nova.virt.libvirt.driver [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Deletion of /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75_del complete
Oct 14 09:13:49 compute-0 sudo[353937]: pam_unix(sudo:session): session closed for user root
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.285 2 INFO nova.scheduler.client.report [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Deleted allocations for instance 2534f8b9-e832-4b78-ada4-e551429bdc75
Oct 14 09:13:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:13:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:13:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:13:49 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:13:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:13:49 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:13:49 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 94cdd114-f753-4819-94b2-3f2c875c0562 does not exist
Oct 14 09:13:49 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 831f4773-c2d3-4854-9d36-6c13bd915468 does not exist
Oct 14 09:13:49 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 5d2e485d-adab-4e41-ae34-780976b25a6f does not exist
Oct 14 09:13:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:13:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:13:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:13:49 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.322 2 DEBUG oslo_concurrency.lockutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.323 2 DEBUG oslo_concurrency.lockutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:13:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:13:49 compute-0 sudo[354012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:13:49 compute-0 sudo[354012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:13:49 compute-0 sudo[354012]: pam_unix(sudo:session): session closed for user root
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.410 2 DEBUG oslo_concurrency.processutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:49 compute-0 sudo[354037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:13:49 compute-0 sudo[354037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:13:49 compute-0 sudo[354037]: pam_unix(sudo:session): session closed for user root
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.531 2 DEBUG nova.network.neutron [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Successfully updated port: 7103ce4a-69e8-454b-aed3-251ecb109232 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:13:49 compute-0 sudo[354063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:13:49 compute-0 sudo[354063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:13:49 compute-0 sudo[354063]: pam_unix(sudo:session): session closed for user root
Oct 14 09:13:49 compute-0 sudo[354098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:13:49 compute-0 sudo[354098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.639 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.639 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquired lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.640 2 DEBUG nova.network.neutron [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.689 2 DEBUG nova.compute.manager [req-4a37d3ee-e0b7-4919-9f26-73d96d8a0c99 req-122a4080-8814-4438-b9aa-730643c129e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-changed-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.690 2 DEBUG nova.compute.manager [req-4a37d3ee-e0b7-4919-9f26-73d96d8a0c99 req-122a4080-8814-4438-b9aa-730643c129e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Refreshing instance network info cache due to event network-changed-7103ce4a-69e8-454b-aed3-251ecb109232. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.690 2 DEBUG oslo_concurrency.lockutils [req-4a37d3ee-e0b7-4919-9f26-73d96d8a0c99 req-122a4080-8814-4438-b9aa-730643c129e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:13:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:13:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/804750047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:13:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:13:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:13:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:13:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:13:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:13:49 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/804750047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.864 2 DEBUG oslo_concurrency.processutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.871 2 DEBUG nova.compute.provider_tree [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.901 2 DEBUG nova.scheduler.client.report [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.938 2 DEBUG oslo_concurrency.lockutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 6.4 MiB/s wr, 195 op/s
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.997 2 DEBUG oslo_concurrency.lockutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 13.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.998 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 3.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.998 2 INFO nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Oct 14 09:13:49 compute-0 nova_compute[259627]: 2025-10-14 09:13:49.998 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:50 compute-0 podman[354174]: 2025-10-14 09:13:50.000132812 +0000 UTC m=+0.048449746 container create 6c42b2a695037a4ae3c3953fe0d856a72a63d5a45195309cdfdf7a64bb3981dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:13:50 compute-0 systemd[1]: Started libpod-conmon-6c42b2a695037a4ae3c3953fe0d856a72a63d5a45195309cdfdf7a64bb3981dd.scope.
Oct 14 09:13:50 compute-0 podman[354174]: 2025-10-14 09:13:49.985643384 +0000 UTC m=+0.033960358 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:13:50 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:13:50 compute-0 podman[354174]: 2025-10-14 09:13:50.100641229 +0000 UTC m=+0.148958233 container init 6c42b2a695037a4ae3c3953fe0d856a72a63d5a45195309cdfdf7a64bb3981dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rubin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:13:50 compute-0 podman[354174]: 2025-10-14 09:13:50.112788478 +0000 UTC m=+0.161105442 container start 6c42b2a695037a4ae3c3953fe0d856a72a63d5a45195309cdfdf7a64bb3981dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rubin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:13:50 compute-0 podman[354174]: 2025-10-14 09:13:50.117274099 +0000 UTC m=+0.165591073 container attach 6c42b2a695037a4ae3c3953fe0d856a72a63d5a45195309cdfdf7a64bb3981dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rubin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:13:50 compute-0 busy_rubin[354190]: 167 167
Oct 14 09:13:50 compute-0 systemd[1]: libpod-6c42b2a695037a4ae3c3953fe0d856a72a63d5a45195309cdfdf7a64bb3981dd.scope: Deactivated successfully.
Oct 14 09:13:50 compute-0 podman[354174]: 2025-10-14 09:13:50.123972364 +0000 UTC m=+0.172289308 container died 6c42b2a695037a4ae3c3953fe0d856a72a63d5a45195309cdfdf7a64bb3981dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rubin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:13:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d614982d3b557190254fb4b853975d4d53d7c8a1024f32e54b4466346bf4c8f2-merged.mount: Deactivated successfully.
Oct 14 09:13:50 compute-0 podman[354174]: 2025-10-14 09:13:50.169214869 +0000 UTC m=+0.217531783 container remove 6c42b2a695037a4ae3c3953fe0d856a72a63d5a45195309cdfdf7a64bb3981dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rubin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:13:50 compute-0 systemd[1]: libpod-conmon-6c42b2a695037a4ae3c3953fe0d856a72a63d5a45195309cdfdf7a64bb3981dd.scope: Deactivated successfully.
Oct 14 09:13:50 compute-0 nova_compute[259627]: 2025-10-14 09:13:50.295 2 DEBUG nova.network.neutron [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:13:50 compute-0 podman[354213]: 2025-10-14 09:13:50.390317768 +0000 UTC m=+0.076645520 container create 8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:13:50 compute-0 systemd[1]: Started libpod-conmon-8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46.scope.
Oct 14 09:13:50 compute-0 podman[354213]: 2025-10-14 09:13:50.360820981 +0000 UTC m=+0.047148773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:13:50 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5713c95db712a65124b0a5ba48bbe31803c715d7b1e4ed83f535b712b502bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5713c95db712a65124b0a5ba48bbe31803c715d7b1e4ed83f535b712b502bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5713c95db712a65124b0a5ba48bbe31803c715d7b1e4ed83f535b712b502bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5713c95db712a65124b0a5ba48bbe31803c715d7b1e4ed83f535b712b502bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5713c95db712a65124b0a5ba48bbe31803c715d7b1e4ed83f535b712b502bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:13:50 compute-0 podman[354213]: 2025-10-14 09:13:50.494755662 +0000 UTC m=+0.181083404 container init 8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:13:50 compute-0 podman[354213]: 2025-10-14 09:13:50.505866946 +0000 UTC m=+0.192194668 container start 8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:13:50 compute-0 podman[354213]: 2025-10-14 09:13:50.50924776 +0000 UTC m=+0.195575482 container attach 8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 09:13:50 compute-0 nova_compute[259627]: 2025-10-14 09:13:50.705 2 DEBUG nova.compute.manager [req-b5783159-2eba-4c7a-95d9-ece8acdb52aa req-642f12b4-3496-478e-8763-2b2aefb55585 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Received event network-vif-plugged-05e92470-2658-4ea2-9c44-e91cd5226905 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:50 compute-0 nova_compute[259627]: 2025-10-14 09:13:50.705 2 DEBUG oslo_concurrency.lockutils [req-b5783159-2eba-4c7a-95d9-ece8acdb52aa req-642f12b4-3496-478e-8763-2b2aefb55585 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:50 compute-0 nova_compute[259627]: 2025-10-14 09:13:50.706 2 DEBUG oslo_concurrency.lockutils [req-b5783159-2eba-4c7a-95d9-ece8acdb52aa req-642f12b4-3496-478e-8763-2b2aefb55585 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:50 compute-0 nova_compute[259627]: 2025-10-14 09:13:50.706 2 DEBUG oslo_concurrency.lockutils [req-b5783159-2eba-4c7a-95d9-ece8acdb52aa req-642f12b4-3496-478e-8763-2b2aefb55585 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:50 compute-0 nova_compute[259627]: 2025-10-14 09:13:50.706 2 DEBUG nova.compute.manager [req-b5783159-2eba-4c7a-95d9-ece8acdb52aa req-642f12b4-3496-478e-8763-2b2aefb55585 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] No waiting events found dispatching network-vif-plugged-05e92470-2658-4ea2-9c44-e91cd5226905 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:13:50 compute-0 nova_compute[259627]: 2025-10-14 09:13:50.706 2 WARNING nova.compute.manager [req-b5783159-2eba-4c7a-95d9-ece8acdb52aa req-642f12b4-3496-478e-8763-2b2aefb55585 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Received unexpected event network-vif-plugged-05e92470-2658-4ea2-9c44-e91cd5226905 for instance with vm_state active and task_state None.
Oct 14 09:13:50 compute-0 ceph-mon[74249]: pgmap v1785: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 6.4 MiB/s wr, 195 op/s
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.514 2 DEBUG nova.network.neutron [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Updating instance_info_cache with network_info: [{"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.533 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Releasing lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.533 2 DEBUG nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance network_info: |[{"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.534 2 DEBUG oslo_concurrency.lockutils [req-4a37d3ee-e0b7-4919-9f26-73d96d8a0c99 req-122a4080-8814-4438-b9aa-730643c129e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.534 2 DEBUG nova.network.neutron [req-4a37d3ee-e0b7-4919-9f26-73d96d8a0c99 req-122a4080-8814-4438-b9aa-730643c129e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Refreshing network info cache for port 7103ce4a-69e8-454b-aed3-251ecb109232 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.537 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Start _get_guest_xml network_info=[{"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.543 2 WARNING nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.548 2 DEBUG nova.virt.libvirt.host [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.549 2 DEBUG nova.virt.libvirt.host [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.560 2 DEBUG nova.virt.libvirt.host [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.561 2 DEBUG nova.virt.libvirt.host [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.561 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.561 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.562 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.562 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.562 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.563 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.563 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.563 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.563 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.563 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.564 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.564 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:13:51 compute-0 nova_compute[259627]: 2025-10-14 09:13:51.568 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:51 compute-0 nice_heyrovsky[354229]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:13:51 compute-0 nice_heyrovsky[354229]: --> relative data size: 1.0
Oct 14 09:13:51 compute-0 nice_heyrovsky[354229]: --> All data devices are unavailable
Oct 14 09:13:51 compute-0 systemd[1]: libpod-8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46.scope: Deactivated successfully.
Oct 14 09:13:51 compute-0 podman[354213]: 2025-10-14 09:13:51.698680384 +0000 UTC m=+1.385008116 container died 8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct 14 09:13:51 compute-0 systemd[1]: libpod-8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46.scope: Consumed 1.114s CPU time.
Oct 14 09:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d5713c95db712a65124b0a5ba48bbe31803c715d7b1e4ed83f535b712b502bc-merged.mount: Deactivated successfully.
Oct 14 09:13:51 compute-0 podman[354213]: 2025-10-14 09:13:51.766727021 +0000 UTC m=+1.453054733 container remove 8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 09:13:51 compute-0 systemd[1]: libpod-conmon-8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46.scope: Deactivated successfully.
Oct 14 09:13:51 compute-0 sudo[354098]: pam_unix(sudo:session): session closed for user root
Oct 14 09:13:51 compute-0 sudo[354290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:13:51 compute-0 sudo[354290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:13:51 compute-0 sudo[354290]: pam_unix(sudo:session): session closed for user root
Oct 14 09:13:51 compute-0 sudo[354315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:13:51 compute-0 sudo[354315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:13:51 compute-0 sudo[354315]: pam_unix(sudo:session): session closed for user root
Oct 14 09:13:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 8.2 MiB/s wr, 342 op/s
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.008 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:13:52 compute-0 sudo[354340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:13:52 compute-0 sudo[354340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:13:52 compute-0 sudo[354340]: pam_unix(sudo:session): session closed for user root
Oct 14 09:13:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:13:52 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4272355481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.070 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:52 compute-0 sudo[354365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:13:52 compute-0 sudo[354365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.110 2 DEBUG nova.storage.rbd_utils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.113 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:13:52 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3152691683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:13:52 compute-0 podman[354466]: 2025-10-14 09:13:52.571926417 +0000 UTC m=+0.078314381 container create edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_boyd, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.583 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.585 2 DEBUG nova.virt.libvirt.vif [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1399788817',display_name='tempest-TestNetworkAdvancedServerOps-server-1399788817',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1399788817',id=96,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKu+EdNTbLcJJySiqS/IEqa/tEPsTFHNnnibtN2r3Vh53iyUeSuOyPo0wb3WDr0n5pX4AT4Pz90bVLkNIFNLc4XvdDhQRiV0WmHkT7tU8LMbcG0FpGnJS7D9bMgie0zW1g==',key_name='tempest-TestNetworkAdvancedServerOps-1640438953',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-qh9caax0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:13:45Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=b595141f-123e-4250-bfec-888d866fd0c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.585 2 DEBUG nova.network.os_vif_util [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.586 2 DEBUG nova.network.os_vif_util [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.588 2 DEBUG nova.objects.instance [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_devices' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.604 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:13:52 compute-0 nova_compute[259627]:   <uuid>b595141f-123e-4250-bfec-888d866fd0c6</uuid>
Oct 14 09:13:52 compute-0 nova_compute[259627]:   <name>instance-00000060</name>
Oct 14 09:13:52 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:13:52 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:13:52 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1399788817</nova:name>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:13:51</nova:creationTime>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:13:52 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:13:52 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:13:52 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:13:52 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:13:52 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:13:52 compute-0 nova_compute[259627]:         <nova:user uuid="e992bcb79c4946a8985e3df25eb216ca">tempest-TestNetworkAdvancedServerOps-94788416-project-member</nova:user>
Oct 14 09:13:52 compute-0 nova_compute[259627]:         <nova:project uuid="2d24993a343a425dbddac7e32be0c86b">tempest-TestNetworkAdvancedServerOps-94788416</nova:project>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:13:52 compute-0 nova_compute[259627]:         <nova:port uuid="7103ce4a-69e8-454b-aed3-251ecb109232">
Oct 14 09:13:52 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:13:52 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:13:52 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <system>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <entry name="serial">b595141f-123e-4250-bfec-888d866fd0c6</entry>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <entry name="uuid">b595141f-123e-4250-bfec-888d866fd0c6</entry>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     </system>
Oct 14 09:13:52 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:13:52 compute-0 nova_compute[259627]:   <os>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:   </os>
Oct 14 09:13:52 compute-0 nova_compute[259627]:   <features>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:   </features>
Oct 14 09:13:52 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:13:52 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:13:52 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/b595141f-123e-4250-bfec-888d866fd0c6_disk">
Oct 14 09:13:52 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       </source>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:13:52 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/b595141f-123e-4250-bfec-888d866fd0c6_disk.config">
Oct 14 09:13:52 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       </source>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:13:52 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:9d:3c:de"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <target dev="tap7103ce4a-69"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/console.log" append="off"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <video>
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     </video>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:13:52 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:13:52 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:13:52 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:13:52 compute-0 nova_compute[259627]: </domain>
Oct 14 09:13:52 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.604 2 DEBUG nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Preparing to wait for external event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.604 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.605 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.605 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.606 2 DEBUG nova.virt.libvirt.vif [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1399788817',display_name='tempest-TestNetworkAdvancedServerOps-server-1399788817',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1399788817',id=96,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKu+EdNTbLcJJySiqS/IEqa/tEPsTFHNnnibtN2r3Vh53iyUeSuOyPo0wb3WDr0n5pX4AT4Pz90bVLkNIFNLc4XvdDhQRiV0WmHkT7tU8LMbcG0FpGnJS7D9bMgie0zW1g==',key_name='tempest-TestNetworkAdvancedServerOps-1640438953',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-qh9caax0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:13:45Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=b595141f-123e-4250-bfec-888d866fd0c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.606 2 DEBUG nova.network.os_vif_util [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.607 2 DEBUG nova.network.os_vif_util [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.608 2 DEBUG os_vif [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.609 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.609 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.614 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7103ce4a-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.614 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7103ce4a-69, col_values=(('external_ids', {'iface-id': '7103ce4a-69e8-454b-aed3-251ecb109232', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:3c:de', 'vm-uuid': 'b595141f-123e-4250-bfec-888d866fd0c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:52 compute-0 NetworkManager[44885]: <info>  [1760433232.6177] manager: (tap7103ce4a-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:13:52 compute-0 systemd[1]: Started libpod-conmon-edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5.scope.
Oct 14 09:13:52 compute-0 podman[354466]: 2025-10-14 09:13:52.53349821 +0000 UTC m=+0.039886234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.626 2 INFO os_vif [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69')
Oct 14 09:13:52 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:13:52 compute-0 podman[354466]: 2025-10-14 09:13:52.677391626 +0000 UTC m=+0.183779560 container init edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_boyd, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct 14 09:13:52 compute-0 podman[354466]: 2025-10-14 09:13:52.692503669 +0000 UTC m=+0.198891633 container start edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:13:52 compute-0 systemd[1]: libpod-edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5.scope: Deactivated successfully.
Oct 14 09:13:52 compute-0 podman[354466]: 2025-10-14 09:13:52.698737572 +0000 UTC m=+0.205125546 container attach edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_boyd, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 14 09:13:52 compute-0 frosty_boyd[354485]: 167 167
Oct 14 09:13:52 compute-0 conmon[354485]: conmon edd676102a5ffe3b476e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5.scope/container/memory.events
Oct 14 09:13:52 compute-0 podman[354466]: 2025-10-14 09:13:52.700582268 +0000 UTC m=+0.206970202 container died edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.706 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.706 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.707 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No VIF found with MAC fa:16:3e:9d:3c:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.707 2 INFO nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Using config drive
Oct 14 09:13:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-5caa65ebfce6e665e0a96b8b39de8a1b68eb4608b624e1923b704c81e7171543-merged.mount: Deactivated successfully.
Oct 14 09:13:52 compute-0 nova_compute[259627]: 2025-10-14 09:13:52.742 2 DEBUG nova.storage.rbd_utils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:52 compute-0 podman[354466]: 2025-10-14 09:13:52.749260567 +0000 UTC m=+0.255648501 container remove edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_boyd, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:13:52 compute-0 systemd[1]: libpod-conmon-edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5.scope: Deactivated successfully.
Oct 14 09:13:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:13:52 compute-0 podman[354529]: 2025-10-14 09:13:52.97819793 +0000 UTC m=+0.073515943 container create f0a8c278e1136d2c330614cd9f4d8b4c46748861140a79a585cae55b9b5d209f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_taussig, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 09:13:53 compute-0 ceph-mon[74249]: pgmap v1786: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 8.2 MiB/s wr, 342 op/s
Oct 14 09:13:53 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4272355481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:13:53 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3152691683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:13:53 compute-0 systemd[1]: Started libpod-conmon-f0a8c278e1136d2c330614cd9f4d8b4c46748861140a79a585cae55b9b5d209f.scope.
Oct 14 09:13:53 compute-0 podman[354529]: 2025-10-14 09:13:52.950740043 +0000 UTC m=+0.046058146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:13:53 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7205f3e902fa343198bc88b6c822db548b58a50558c1505cf164a3f879360917/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7205f3e902fa343198bc88b6c822db548b58a50558c1505cf164a3f879360917/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7205f3e902fa343198bc88b6c822db548b58a50558c1505cf164a3f879360917/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7205f3e902fa343198bc88b6c822db548b58a50558c1505cf164a3f879360917/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:13:53 compute-0 podman[354529]: 2025-10-14 09:13:53.091347509 +0000 UTC m=+0.186665592 container init f0a8c278e1136d2c330614cd9f4d8b4c46748861140a79a585cae55b9b5d209f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_taussig, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:13:53 compute-0 podman[354529]: 2025-10-14 09:13:53.102497344 +0000 UTC m=+0.197815367 container start f0a8c278e1136d2c330614cd9f4d8b4c46748861140a79a585cae55b9b5d209f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:13:53 compute-0 podman[354529]: 2025-10-14 09:13:53.105902007 +0000 UTC m=+0.201220140 container attach f0a8c278e1136d2c330614cd9f4d8b4c46748861140a79a585cae55b9b5d209f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_taussig, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 09:13:53 compute-0 nova_compute[259627]: 2025-10-14 09:13:53.293 2 DEBUG nova.network.neutron [req-4a37d3ee-e0b7-4919-9f26-73d96d8a0c99 req-122a4080-8814-4438-b9aa-730643c129e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Updated VIF entry in instance network info cache for port 7103ce4a-69e8-454b-aed3-251ecb109232. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:13:53 compute-0 nova_compute[259627]: 2025-10-14 09:13:53.293 2 DEBUG nova.network.neutron [req-4a37d3ee-e0b7-4919-9f26-73d96d8a0c99 req-122a4080-8814-4438-b9aa-730643c129e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Updating instance_info_cache with network_info: [{"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:13:53 compute-0 nova_compute[259627]: 2025-10-14 09:13:53.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:53 compute-0 nova_compute[259627]: 2025-10-14 09:13:53.371 2 INFO nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Creating config drive at /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config
Oct 14 09:13:53 compute-0 nova_compute[259627]: 2025-10-14 09:13:53.380 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdv0bdg22 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:53 compute-0 nova_compute[259627]: 2025-10-14 09:13:53.438 2 DEBUG oslo_concurrency.lockutils [req-4a37d3ee-e0b7-4919-9f26-73d96d8a0c99 req-122a4080-8814-4438-b9aa-730643c129e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:13:53 compute-0 nova_compute[259627]: 2025-10-14 09:13:53.549 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdv0bdg22" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:53 compute-0 nova_compute[259627]: 2025-10-14 09:13:53.586 2 DEBUG nova.storage.rbd_utils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:53 compute-0 nova_compute[259627]: 2025-10-14 09:13:53.594 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config b595141f-123e-4250-bfec-888d866fd0c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:53 compute-0 nova_compute[259627]: 2025-10-14 09:13:53.818 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config b595141f-123e-4250-bfec-888d866fd0c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:53 compute-0 nova_compute[259627]: 2025-10-14 09:13:53.820 2 INFO nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Deleting local config drive /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config because it was imported into RBD.
Oct 14 09:13:53 compute-0 interesting_taussig[354546]: {
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:     "0": [
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:         {
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "devices": [
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "/dev/loop3"
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             ],
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "lv_name": "ceph_lv0",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "lv_size": "21470642176",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "name": "ceph_lv0",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "tags": {
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.cluster_name": "ceph",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.crush_device_class": "",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.encrypted": "0",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.osd_id": "0",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.type": "block",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.vdo": "0"
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             },
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "type": "block",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "vg_name": "ceph_vg0"
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:         }
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:     ],
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:     "1": [
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:         {
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "devices": [
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "/dev/loop4"
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             ],
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "lv_name": "ceph_lv1",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "lv_size": "21470642176",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "name": "ceph_lv1",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "tags": {
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.cluster_name": "ceph",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.crush_device_class": "",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.encrypted": "0",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.osd_id": "1",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.type": "block",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.vdo": "0"
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             },
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "type": "block",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "vg_name": "ceph_vg1"
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:         }
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:     ],
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:     "2": [
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:         {
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "devices": [
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "/dev/loop5"
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             ],
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "lv_name": "ceph_lv2",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "lv_size": "21470642176",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "name": "ceph_lv2",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "tags": {
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.cluster_name": "ceph",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.crush_device_class": "",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.encrypted": "0",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.osd_id": "2",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.type": "block",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:                 "ceph.vdo": "0"
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             },
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "type": "block",
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:             "vg_name": "ceph_vg2"
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:         }
Oct 14 09:13:53 compute-0 interesting_taussig[354546]:     ]
Oct 14 09:13:53 compute-0 interesting_taussig[354546]: }
Oct 14 09:13:53 compute-0 systemd[1]: libpod-f0a8c278e1136d2c330614cd9f4d8b4c46748861140a79a585cae55b9b5d209f.scope: Deactivated successfully.
Oct 14 09:13:53 compute-0 podman[354529]: 2025-10-14 09:13:53.890178787 +0000 UTC m=+0.985496830 container died f0a8c278e1136d2c330614cd9f4d8b4c46748861140a79a585cae55b9b5d209f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_taussig, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:13:53 compute-0 kernel: tap7103ce4a-69: entered promiscuous mode
Oct 14 09:13:53 compute-0 NetworkManager[44885]: <info>  [1760433233.9329] manager: (tap7103ce4a-69): new Tun device (/org/freedesktop/NetworkManager/Devices/407)
Oct 14 09:13:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7205f3e902fa343198bc88b6c822db548b58a50558c1505cf164a3f879360917-merged.mount: Deactivated successfully.
Oct 14 09:13:53 compute-0 ovn_controller[152662]: 2025-10-14T09:13:53Z|00996|binding|INFO|Claiming lport 7103ce4a-69e8-454b-aed3-251ecb109232 for this chassis.
Oct 14 09:13:53 compute-0 nova_compute[259627]: 2025-10-14 09:13:53.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:53 compute-0 ovn_controller[152662]: 2025-10-14T09:13:53Z|00997|binding|INFO|7103ce4a-69e8-454b-aed3-251ecb109232: Claiming fa:16:3e:9d:3c:de 10.100.0.5
Oct 14 09:13:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 285 op/s
Oct 14 09:13:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:53.954 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:3c:de 10.100.0.5'], port_security=['fa:16:3e:9d:3c:de 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b595141f-123e-4250-bfec-888d866fd0c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-563aa000-400f-4c19-ba83-9377cc50d29f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '97c0b0e2-9440-40e7-a61b-2eb79520e4e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e572d4-df42-4a93-ac49-d93e0906a5ee, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7103ce4a-69e8-454b-aed3-251ecb109232) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:13:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:53.956 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7103ce4a-69e8-454b-aed3-251ecb109232 in datapath 563aa000-400f-4c19-ba83-9377cc50d29f bound to our chassis
Oct 14 09:13:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:53.957 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 563aa000-400f-4c19-ba83-9377cc50d29f
Oct 14 09:13:53 compute-0 podman[354529]: 2025-10-14 09:13:53.97915488 +0000 UTC m=+1.074472893 container remove f0a8c278e1136d2c330614cd9f4d8b4c46748861140a79a585cae55b9b5d209f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 09:13:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:53.984 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dba3551c-26a2-4f66-ae23-2198a5f9bb81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:53.991 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap563aa000-41 in ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:13:53 compute-0 systemd-machined[214636]: New machine qemu-118-instance-00000060.
Oct 14 09:13:53 compute-0 systemd-udevd[354621]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:13:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:53.997 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap563aa000-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:13:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:53.998 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ac82974d-b1ba-4064-966e-cda024ead5a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:53.999 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[371c769b-ff61-4d76-b5a7-1eb7d08392ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:54 compute-0 systemd[1]: Started Virtual Machine qemu-118-instance-00000060.
Oct 14 09:13:54 compute-0 sudo[354365]: pam_unix(sudo:session): session closed for user root
Oct 14 09:13:54 compute-0 NetworkManager[44885]: <info>  [1760433234.0175] device (tap7103ce4a-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:13:54 compute-0 systemd[1]: libpod-conmon-f0a8c278e1136d2c330614cd9f4d8b4c46748861140a79a585cae55b9b5d209f.scope: Deactivated successfully.
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.017 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[146b37c7-26b6-4000-a97a-682b3a1724d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:54 compute-0 NetworkManager[44885]: <info>  [1760433234.0198] device (tap7103ce4a-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.043 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[296a9068-c081-44f5-adf1-d8a98216a3e5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:54 compute-0 nova_compute[259627]: 2025-10-14 09:13:54.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:54 compute-0 ovn_controller[152662]: 2025-10-14T09:13:54Z|00998|binding|INFO|Setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 ovn-installed in OVS
Oct 14 09:13:54 compute-0 ovn_controller[152662]: 2025-10-14T09:13:54Z|00999|binding|INFO|Setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 up in Southbound
Oct 14 09:13:54 compute-0 nova_compute[259627]: 2025-10-14 09:13:54.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.091 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[76093755-4701-4280-b618-f7de245f64c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:54 compute-0 sudo[354626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:13:54 compute-0 NetworkManager[44885]: <info>  [1760433234.1021] manager: (tap563aa000-40): new Veth device (/org/freedesktop/NetworkManager/Devices/408)
Oct 14 09:13:54 compute-0 systemd-udevd[354624]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.103 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ddf93c-75ab-4ae5-8404-a06568b9eed4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:54 compute-0 sudo[354626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:13:54 compute-0 sudo[354626]: pam_unix(sudo:session): session closed for user root
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.147 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[70a75dd5-9df6-46d9-aaa6-544e2e5bc08a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.152 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[adfd28fb-b278-4e40-a941-ae417178e2fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:54 compute-0 NetworkManager[44885]: <info>  [1760433234.1792] device (tap563aa000-40): carrier: link connected
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.186 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cdb11300-945f-44b7-8717-00193f41463f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:54 compute-0 sudo[354665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:13:54 compute-0 sudo[354665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:13:54 compute-0 sudo[354665]: pam_unix(sudo:session): session closed for user root
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.221 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3e86f2a3-8385-4999-b419-7e1c5518b78d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap563aa000-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:cd:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701294, 'reachable_time': 23044, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354701, 'error': None, 'target': 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.246 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6afae12-1730-4225-a854-d629787687ea]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:cd84'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701294, 'tstamp': 701294}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354716, 'error': None, 'target': 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.268 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7b5a22bb-5f3e-4d15-b781-d0ac816367b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap563aa000-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:cd:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701294, 'reachable_time': 23044, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354728, 'error': None, 'target': 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:54 compute-0 sudo[354704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:13:54 compute-0 sudo[354704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:13:54 compute-0 sudo[354704]: pam_unix(sudo:session): session closed for user root
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.301 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa20d8b-06ec-4096-b24d-1028106351b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:54 compute-0 sudo[354733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:13:54 compute-0 sudo[354733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.354 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b98516dc-8b0e-4f14-9e9a-21e3eb56e8b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.356 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap563aa000-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.356 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.357 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap563aa000-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:54 compute-0 nova_compute[259627]: 2025-10-14 09:13:54.395 2 DEBUG nova.compute.manager [req-d703d15d-b0f2-4621-b755-2d5e74d663e0 req-9156f79c-d3b2-48df-9ccf-90245602f9bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:54 compute-0 nova_compute[259627]: 2025-10-14 09:13:54.396 2 DEBUG oslo_concurrency.lockutils [req-d703d15d-b0f2-4621-b755-2d5e74d663e0 req-9156f79c-d3b2-48df-9ccf-90245602f9bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:54 compute-0 nova_compute[259627]: 2025-10-14 09:13:54.396 2 DEBUG oslo_concurrency.lockutils [req-d703d15d-b0f2-4621-b755-2d5e74d663e0 req-9156f79c-d3b2-48df-9ccf-90245602f9bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:54 compute-0 nova_compute[259627]: 2025-10-14 09:13:54.396 2 DEBUG oslo_concurrency.lockutils [req-d703d15d-b0f2-4621-b755-2d5e74d663e0 req-9156f79c-d3b2-48df-9ccf-90245602f9bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:54 compute-0 nova_compute[259627]: 2025-10-14 09:13:54.396 2 DEBUG nova.compute.manager [req-d703d15d-b0f2-4621-b755-2d5e74d663e0 req-9156f79c-d3b2-48df-9ccf-90245602f9bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Processing event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:13:54 compute-0 nova_compute[259627]: 2025-10-14 09:13:54.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:54 compute-0 kernel: tap563aa000-40: entered promiscuous mode
Oct 14 09:13:54 compute-0 nova_compute[259627]: 2025-10-14 09:13:54.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:54 compute-0 NetworkManager[44885]: <info>  [1760433234.4139] manager: (tap563aa000-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.414 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap563aa000-40, col_values=(('external_ids', {'iface-id': '4b7b52fe-6c74-46c3-ab83-c118ed2fe8eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:13:54 compute-0 nova_compute[259627]: 2025-10-14 09:13:54.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:54 compute-0 ovn_controller[152662]: 2025-10-14T09:13:54Z|01000|binding|INFO|Releasing lport 4b7b52fe-6c74-46c3-ab83-c118ed2fe8eb from this chassis (sb_readonly=0)
Oct 14 09:13:54 compute-0 nova_compute[259627]: 2025-10-14 09:13:54.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.418 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/563aa000-400f-4c19-ba83-9377cc50d29f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/563aa000-400f-4c19-ba83-9377cc50d29f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.419 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[59e3fa9c-3eff-413c-a994-f9577c94d631]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.419 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-563aa000-400f-4c19-ba83-9377cc50d29f
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/563aa000-400f-4c19-ba83-9377cc50d29f.pid.haproxy
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 563aa000-400f-4c19-ba83-9377cc50d29f
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:13:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.421 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'env', 'PROCESS_TAG=haproxy-563aa000-400f-4c19-ba83-9377cc50d29f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/563aa000-400f-4c19-ba83-9377cc50d29f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:13:54 compute-0 nova_compute[259627]: 2025-10-14 09:13:54.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:54 compute-0 nova_compute[259627]: 2025-10-14 09:13:54.520 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433219.5156043, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:54 compute-0 nova_compute[259627]: 2025-10-14 09:13:54.521 2 INFO nova.compute.manager [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Stopped (Lifecycle Event)
Oct 14 09:13:54 compute-0 nova_compute[259627]: 2025-10-14 09:13:54.551 2 DEBUG nova.compute.manager [None req-722941fe-e62d-49d7-aa37-4d0050afa25d - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:54 compute-0 podman[354853]: 2025-10-14 09:13:54.77969531 +0000 UTC m=+0.056464203 container create 023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 09:13:54 compute-0 systemd[1]: Started libpod-conmon-023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8.scope.
Oct 14 09:13:54 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:13:54 compute-0 podman[354853]: 2025-10-14 09:13:54.756998761 +0000 UTC m=+0.033767674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:13:54 compute-0 podman[354853]: 2025-10-14 09:13:54.857353644 +0000 UTC m=+0.134122547 container init 023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:13:54 compute-0 podman[354876]: 2025-10-14 09:13:54.861839445 +0000 UTC m=+0.070066688 container create a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 09:13:54 compute-0 podman[354853]: 2025-10-14 09:13:54.868874658 +0000 UTC m=+0.145643551 container start 023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:13:54 compute-0 heuristic_bohr[354890]: 167 167
Oct 14 09:13:54 compute-0 systemd[1]: libpod-023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8.scope: Deactivated successfully.
Oct 14 09:13:54 compute-0 conmon[354890]: conmon 023a090a3ec291cf7ec4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8.scope/container/memory.events
Oct 14 09:13:54 compute-0 podman[354853]: 2025-10-14 09:13:54.875004469 +0000 UTC m=+0.151773362 container attach 023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 09:13:54 compute-0 podman[354853]: 2025-10-14 09:13:54.875651465 +0000 UTC m=+0.152420358 container died 023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 14 09:13:54 compute-0 systemd[1]: Started libpod-conmon-a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454.scope.
Oct 14 09:13:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c6f5f2a0976fd8dfc863cc7c970840de7260395b7504c2b5d7239e961e1a322-merged.mount: Deactivated successfully.
Oct 14 09:13:54 compute-0 podman[354853]: 2025-10-14 09:13:54.912430471 +0000 UTC m=+0.189199364 container remove 023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 09:13:54 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:13:54 compute-0 systemd[1]: libpod-conmon-023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8.scope: Deactivated successfully.
Oct 14 09:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e223c53edf6cce1508ede20bb72625545cc9c2f087f47ad069c77aa13b5410e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:13:54 compute-0 podman[354876]: 2025-10-14 09:13:54.832567053 +0000 UTC m=+0.040794336 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:13:54 compute-0 podman[354876]: 2025-10-14 09:13:54.94319627 +0000 UTC m=+0.151423513 container init a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 14 09:13:54 compute-0 podman[354876]: 2025-10-14 09:13:54.951678109 +0000 UTC m=+0.159905352 container start a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:13:54 compute-0 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[354908]: [NOTICE]   (354918) : New worker (354920) forked
Oct 14 09:13:54 compute-0 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[354908]: [NOTICE]   (354918) : Loading success.
Oct 14 09:13:55 compute-0 ceph-mon[74249]: pgmap v1787: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 285 op/s
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.093 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "d5de3978-2377-4d8e-aeaf-c952912130a2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.093 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.111 2 DEBUG nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:13:55 compute-0 podman[354934]: 2025-10-14 09:13:55.141076487 +0000 UTC m=+0.058866172 container create a45d75ce9bb95d16eddd8b6dc4fbde8da369d30ac387f9ee7197cf70c810a14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 09:13:55 compute-0 systemd[1]: Started libpod-conmon-a45d75ce9bb95d16eddd8b6dc4fbde8da369d30ac387f9ee7197cf70c810a14f.scope.
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.199 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.201 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:55 compute-0 podman[354934]: 2025-10-14 09:13:55.119996387 +0000 UTC m=+0.037786072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.214 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.214 2 INFO nova.compute.claims [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:13:55 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:13:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800df221c41dd539573dc71d916bca4de2725dde4344d4ac0cc8b9e53c3d697e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:13:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800df221c41dd539573dc71d916bca4de2725dde4344d4ac0cc8b9e53c3d697e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:13:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800df221c41dd539573dc71d916bca4de2725dde4344d4ac0cc8b9e53c3d697e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:13:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800df221c41dd539573dc71d916bca4de2725dde4344d4ac0cc8b9e53c3d697e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:13:55 compute-0 podman[354934]: 2025-10-14 09:13:55.260785507 +0000 UTC m=+0.178575262 container init a45d75ce9bb95d16eddd8b6dc4fbde8da369d30ac387f9ee7197cf70c810a14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 09:13:55 compute-0 podman[354934]: 2025-10-14 09:13:55.275792877 +0000 UTC m=+0.193582552 container start a45d75ce9bb95d16eddd8b6dc4fbde8da369d30ac387f9ee7197cf70c810a14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:13:55 compute-0 podman[354934]: 2025-10-14 09:13:55.280160845 +0000 UTC m=+0.197950550 container attach a45d75ce9bb95d16eddd8b6dc4fbde8da369d30ac387f9ee7197cf70c810a14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.325 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433235.323485, b595141f-123e-4250-bfec-888d866fd0c6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.326 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] VM Started (Lifecycle Event)
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.328 2 DEBUG nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.332 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.337 2 INFO nova.virt.libvirt.driver [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance spawned successfully.
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.337 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.347 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.353 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.394 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.406 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.407 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.408 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.409 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.409 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.410 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.437 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.438 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433235.3235881, b595141f-123e-4250-bfec-888d866fd0c6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.438 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] VM Paused (Lifecycle Event)
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.460 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.467 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433235.3312664, b595141f-123e-4250-bfec-888d866fd0c6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.467 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] VM Resumed (Lifecycle Event)
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.471 2 INFO nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Took 9.70 seconds to spawn the instance on the hypervisor.
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.471 2 DEBUG nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.498 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.505 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.534 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.549 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433220.5435143, fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.553 2 INFO nova.compute.manager [-] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] VM Stopped (Lifecycle Event)
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.572 2 INFO nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Took 10.73 seconds to build instance.
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.575 2 DEBUG nova.compute.manager [None req-f71cc039-f54e-4167-9ad0-cc47d1dc5ee3 - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.575 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.576 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.576 2 INFO nova.compute.manager [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Unshelving
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.593 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.593 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "b595141f-123e-4250-bfec-888d866fd0c6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 9.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.594 2 INFO nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.594 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "b595141f-123e-4250-bfec-888d866fd0c6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.647 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:13:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1315042023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.810 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.816 2 DEBUG nova.compute.provider_tree [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.829 2 DEBUG nova.scheduler.client.report [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.854 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.855 2 DEBUG nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.859 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.864 2 DEBUG nova.objects.instance [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'pci_requests' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.887 2 DEBUG nova.objects.instance [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'numa_topology' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.901 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.902 2 INFO nova.compute.claims [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.908 2 DEBUG nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.909 2 DEBUG nova.network.neutron [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:13:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.960 2 INFO nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:13:55 compute-0 nova_compute[259627]: 2025-10-14 09:13:55.982 2 DEBUG nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:13:56 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1315042023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.066 2 DEBUG nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.068 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.069 2 INFO nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Creating image(s)
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.100 2 DEBUG nova.storage.rbd_utils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d5de3978-2377-4d8e-aeaf-c952912130a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.134 2 DEBUG nova.storage.rbd_utils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d5de3978-2377-4d8e-aeaf-c952912130a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.173 2 DEBUG nova.storage.rbd_utils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d5de3978-2377-4d8e-aeaf-c952912130a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.181 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]: {
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:         "osd_id": 2,
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:         "type": "bluestore"
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:     },
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:         "osd_id": 1,
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:         "type": "bluestore"
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:     },
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:         "osd_id": 0,
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:         "type": "bluestore"
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]:     }
Oct 14 09:13:56 compute-0 intelligent_maxwell[354950]: }
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.226 2 DEBUG nova.policy [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a287ef08fc5c4f218bf06cd2c7ed021e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.257 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:56 compute-0 systemd[1]: libpod-a45d75ce9bb95d16eddd8b6dc4fbde8da369d30ac387f9ee7197cf70c810a14f.scope: Deactivated successfully.
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.306 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.307 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.307 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.308 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:56 compute-0 podman[355063]: 2025-10-14 09:13:56.330783459 +0000 UTC m=+0.040315084 container died a45d75ce9bb95d16eddd8b6dc4fbde8da369d30ac387f9ee7197cf70c810a14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.343 2 DEBUG nova.storage.rbd_utils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d5de3978-2377-4d8e-aeaf-c952912130a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.349 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 d5de3978-2377-4d8e-aeaf-c952912130a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:13:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-800df221c41dd539573dc71d916bca4de2725dde4344d4ac0cc8b9e53c3d697e-merged.mount: Deactivated successfully.
Oct 14 09:13:56 compute-0 podman[355063]: 2025-10-14 09:13:56.396102759 +0000 UTC m=+0.105634354 container remove a45d75ce9bb95d16eddd8b6dc4fbde8da369d30ac387f9ee7197cf70c810a14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:13:56 compute-0 systemd[1]: libpod-conmon-a45d75ce9bb95d16eddd8b6dc4fbde8da369d30ac387f9ee7197cf70c810a14f.scope: Deactivated successfully.
Oct 14 09:13:56 compute-0 sudo[354733]: pam_unix(sudo:session): session closed for user root
Oct 14 09:13:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:13:56 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:13:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:13:56 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:13:56 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev e3f08df5-6227-4e70-8efb-37b5717ec684 does not exist
Oct 14 09:13:56 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 7e663705-c38a-4b7c-a65f-b1b0cf17a4de does not exist
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.483 2 DEBUG nova.compute.manager [req-832538aa-07f1-4f98-a752-7009493af03a req-24bab0cc-c3ce-4330-83d2-26e74638a199 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.484 2 DEBUG oslo_concurrency.lockutils [req-832538aa-07f1-4f98-a752-7009493af03a req-24bab0cc-c3ce-4330-83d2-26e74638a199 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.484 2 DEBUG oslo_concurrency.lockutils [req-832538aa-07f1-4f98-a752-7009493af03a req-24bab0cc-c3ce-4330-83d2-26e74638a199 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.484 2 DEBUG oslo_concurrency.lockutils [req-832538aa-07f1-4f98-a752-7009493af03a req-24bab0cc-c3ce-4330-83d2-26e74638a199 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.484 2 DEBUG nova.compute.manager [req-832538aa-07f1-4f98-a752-7009493af03a req-24bab0cc-c3ce-4330-83d2-26e74638a199 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.485 2 WARNING nova.compute.manager [req-832538aa-07f1-4f98-a752-7009493af03a req-24bab0cc-c3ce-4330-83d2-26e74638a199 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state active and task_state None.
Oct 14 09:13:56 compute-0 sudo[355130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:13:56 compute-0 sudo[355130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:13:56 compute-0 sudo[355130]: pam_unix(sudo:session): session closed for user root
Oct 14 09:13:56 compute-0 sudo[355158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:13:56 compute-0 sudo[355158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:13:56 compute-0 sudo[355158]: pam_unix(sudo:session): session closed for user root
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.701 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 d5de3978-2377-4d8e-aeaf-c952912130a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.762 2 DEBUG nova.storage.rbd_utils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] resizing rbd image d5de3978-2377-4d8e-aeaf-c952912130a2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:13:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:13:56 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/449513898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.794 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.799 2 DEBUG nova.compute.provider_tree [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.821 2 DEBUG nova.scheduler.client.report [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.855 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.861 2 DEBUG nova.objects.instance [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'migration_context' on Instance uuid d5de3978-2377-4d8e-aeaf-c952912130a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.875 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.875 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Ensure instance console log exists: /var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.876 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.876 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:13:56 compute-0 nova_compute[259627]: 2025-10-14 09:13:56.876 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:13:57 compute-0 nova_compute[259627]: 2025-10-14 09:13:57.070 2 INFO nova.network.neutron [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating port 4f827284-f357-43c5-bdde-c69731b52914 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 14 09:13:57 compute-0 nova_compute[259627]: 2025-10-14 09:13:57.118 2 DEBUG nova.network.neutron [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Successfully created port: dc177f85-b331-40ec-b30f-1f667878bcb8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:13:57 compute-0 ceph-mon[74249]: pgmap v1788: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Oct 14 09:13:57 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:13:57 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:13:57 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/449513898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:13:57 compute-0 nova_compute[259627]: 2025-10-14 09:13:57.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:13:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:57.899 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:13:57 compute-0 nova_compute[259627]: 2025-10-14 09:13:57.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:13:57.900 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:13:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Oct 14 09:13:58 compute-0 nova_compute[259627]: 2025-10-14 09:13:58.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:58 compute-0 NetworkManager[44885]: <info>  [1760433238.1702] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Oct 14 09:13:58 compute-0 NetworkManager[44885]: <info>  [1760433238.1734] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Oct 14 09:13:58 compute-0 nova_compute[259627]: 2025-10-14 09:13:58.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:58 compute-0 nova_compute[259627]: 2025-10-14 09:13:58.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:58 compute-0 ovn_controller[152662]: 2025-10-14T09:13:58Z|01001|binding|INFO|Releasing lport 4b7b52fe-6c74-46c3-ab83-c118ed2fe8eb from this chassis (sb_readonly=0)
Oct 14 09:13:58 compute-0 ovn_controller[152662]: 2025-10-14T09:13:58Z|01002|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 09:13:58 compute-0 nova_compute[259627]: 2025-10-14 09:13:58.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:13:58 compute-0 nova_compute[259627]: 2025-10-14 09:13:58.610 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:13:58 compute-0 nova_compute[259627]: 2025-10-14 09:13:58.610 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquired lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:13:58 compute-0 nova_compute[259627]: 2025-10-14 09:13:58.611 2 DEBUG nova.network.neutron [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:13:58 compute-0 nova_compute[259627]: 2025-10-14 09:13:58.832 2 DEBUG nova.network.neutron [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Successfully updated port: dc177f85-b331-40ec-b30f-1f667878bcb8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:13:58 compute-0 nova_compute[259627]: 2025-10-14 09:13:58.845 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "refresh_cache-d5de3978-2377-4d8e-aeaf-c952912130a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:13:58 compute-0 nova_compute[259627]: 2025-10-14 09:13:58.846 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquired lock "refresh_cache-d5de3978-2377-4d8e-aeaf-c952912130a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:13:58 compute-0 nova_compute[259627]: 2025-10-14 09:13:58.846 2 DEBUG nova.network.neutron [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:13:59 compute-0 nova_compute[259627]: 2025-10-14 09:13:59.050 2 DEBUG nova.compute.manager [req-2351d9a7-de4f-4dab-bc22-618f09ffec92 req-b375a674-bde9-4423-8cc3-01624187b476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-changed-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:59 compute-0 nova_compute[259627]: 2025-10-14 09:13:59.050 2 DEBUG nova.compute.manager [req-2351d9a7-de4f-4dab-bc22-618f09ffec92 req-b375a674-bde9-4423-8cc3-01624187b476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Refreshing instance network info cache due to event network-changed-7103ce4a-69e8-454b-aed3-251ecb109232. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:13:59 compute-0 nova_compute[259627]: 2025-10-14 09:13:59.050 2 DEBUG oslo_concurrency.lockutils [req-2351d9a7-de4f-4dab-bc22-618f09ffec92 req-b375a674-bde9-4423-8cc3-01624187b476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:13:59 compute-0 nova_compute[259627]: 2025-10-14 09:13:59.051 2 DEBUG oslo_concurrency.lockutils [req-2351d9a7-de4f-4dab-bc22-618f09ffec92 req-b375a674-bde9-4423-8cc3-01624187b476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:13:59 compute-0 nova_compute[259627]: 2025-10-14 09:13:59.051 2 DEBUG nova.network.neutron [req-2351d9a7-de4f-4dab-bc22-618f09ffec92 req-b375a674-bde9-4423-8cc3-01624187b476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Refreshing network info cache for port 7103ce4a-69e8-454b-aed3-251ecb109232 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:13:59 compute-0 nova_compute[259627]: 2025-10-14 09:13:59.052 2 DEBUG nova.network.neutron [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:13:59 compute-0 nova_compute[259627]: 2025-10-14 09:13:59.169 2 DEBUG nova.compute.manager [req-e4b5ed06-e3ee-4608-b7cc-4af3574bc9ff req-875a23c0-4188-445e-a206-eb240226ef44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-changed-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:13:59 compute-0 nova_compute[259627]: 2025-10-14 09:13:59.169 2 DEBUG nova.compute.manager [req-e4b5ed06-e3ee-4608-b7cc-4af3574bc9ff req-875a23c0-4188-445e-a206-eb240226ef44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Refreshing instance network info cache due to event network-changed-4f827284-f357-43c5-bdde-c69731b52914. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:13:59 compute-0 nova_compute[259627]: 2025-10-14 09:13:59.170 2 DEBUG oslo_concurrency.lockutils [req-e4b5ed06-e3ee-4608-b7cc-4af3574bc9ff req-875a23c0-4188-445e-a206-eb240226ef44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:13:59 compute-0 ceph-mon[74249]: pgmap v1789: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.482257) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433239482364, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 795, "num_deletes": 252, "total_data_size": 934370, "memory_usage": 949000, "flush_reason": "Manual Compaction"}
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433239491937, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 924395, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36976, "largest_seqno": 37770, "table_properties": {"data_size": 920344, "index_size": 1767, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9502, "raw_average_key_size": 19, "raw_value_size": 912024, "raw_average_value_size": 1912, "num_data_blocks": 78, "num_entries": 477, "num_filter_entries": 477, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760433183, "oldest_key_time": 1760433183, "file_creation_time": 1760433239, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 9888 microseconds, and 5676 cpu microseconds.
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.492151) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 924395 bytes OK
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.492252) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.494108) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.494134) EVENT_LOG_v1 {"time_micros": 1760433239494126, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.494155) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 930329, prev total WAL file size 930329, number of live WAL files 2.
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.495392) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(902KB)], [80(8617KB)]
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433239495444, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 9748958, "oldest_snapshot_seqno": -1}
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6241 keys, 8089915 bytes, temperature: kUnknown
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433239545266, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8089915, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8048949, "index_size": 24280, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 157876, "raw_average_key_size": 25, "raw_value_size": 7937754, "raw_average_value_size": 1271, "num_data_blocks": 978, "num_entries": 6241, "num_filter_entries": 6241, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760433239, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.545572) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8089915 bytes
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.547526) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.3 rd, 162.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.4 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(19.3) write-amplify(8.8) OK, records in: 6760, records dropped: 519 output_compression: NoCompression
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.547559) EVENT_LOG_v1 {"time_micros": 1760433239547545, "job": 46, "event": "compaction_finished", "compaction_time_micros": 49912, "compaction_time_cpu_micros": 31793, "output_level": 6, "num_output_files": 1, "total_output_size": 8089915, "num_input_records": 6760, "num_output_records": 6241, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433239548067, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433239551282, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.495334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.551333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.551340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.551343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.551346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:13:59 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.551349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:13:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 14 09:14:00 compute-0 ovn_controller[152662]: 2025-10-14T09:14:00Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:fb:45 10.100.0.13
Oct 14 09:14:00 compute-0 ovn_controller[152662]: 2025-10-14T09:14:00Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:fb:45 10.100.0.13
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.262 2 DEBUG nova.network.neutron [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Updating instance_info_cache with network_info: [{"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.280 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Releasing lock "refresh_cache-d5de3978-2377-4d8e-aeaf-c952912130a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.280 2 DEBUG nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Instance network_info: |[{"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.283 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Start _get_guest_xml network_info=[{"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.288 2 WARNING nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.294 2 DEBUG nova.virt.libvirt.host [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.295 2 DEBUG nova.virt.libvirt.host [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.298 2 DEBUG nova.virt.libvirt.host [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.299 2 DEBUG nova.virt.libvirt.host [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.299 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.299 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.300 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.300 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.300 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.301 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.301 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.301 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.301 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.302 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.302 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.302 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.305 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.515 2 DEBUG nova.network.neutron [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating instance_info_cache with network_info: [{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.544 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Releasing lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.547 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.548 2 INFO nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Creating image(s)
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.594 2 DEBUG nova.storage.rbd_utils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.612 2 DEBUG nova.objects.instance [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.613 2 DEBUG oslo_concurrency.lockutils [req-e4b5ed06-e3ee-4608-b7cc-4af3574bc9ff req-875a23c0-4188-445e-a206-eb240226ef44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.614 2 DEBUG nova.network.neutron [req-e4b5ed06-e3ee-4608-b7cc-4af3574bc9ff req-875a23c0-4188-445e-a206-eb240226ef44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Refreshing network info cache for port 4f827284-f357-43c5-bdde-c69731b52914 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.674 2 DEBUG nova.storage.rbd_utils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:00 compute-0 podman[355296]: 2025-10-14 09:14:00.695815342 +0000 UTC m=+0.093289000 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.699 2 DEBUG nova.storage.rbd_utils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:00 compute-0 podman[355280]: 2025-10-14 09:14:00.708169756 +0000 UTC m=+0.105646485 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.710 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "ad4745c71f608ccd993c23b78f9a7e19f70d6f59" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.711 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "ad4745c71f608ccd993c23b78f9a7e19f70d6f59" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:14:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2523065631' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.822 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.837 2 DEBUG nova.storage.rbd_utils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d5de3978-2377-4d8e-aeaf-c952912130a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.840 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.874 2 DEBUG nova.network.neutron [req-2351d9a7-de4f-4dab-bc22-618f09ffec92 req-b375a674-bde9-4423-8cc3-01624187b476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Updated VIF entry in instance network info cache for port 7103ce4a-69e8-454b-aed3-251ecb109232. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.875 2 DEBUG nova.network.neutron [req-2351d9a7-de4f-4dab-bc22-618f09ffec92 req-b375a674-bde9-4423-8cc3-01624187b476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Updating instance_info_cache with network_info: [{"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:00 compute-0 nova_compute[259627]: 2025-10-14 09:14:00.895 2 DEBUG oslo_concurrency.lockutils [req-2351d9a7-de4f-4dab-bc22-618f09ffec92 req-b375a674-bde9-4423-8cc3-01624187b476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.005 2 DEBUG nova.virt.libvirt.imagebackend [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Image locations are: [{'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/7b536765-adaa-4682-86b5-b3ff0be769bf/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/7b536765-adaa-4682-86b5-b3ff0be769bf/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.044 2 DEBUG nova.virt.libvirt.imagebackend [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Selected location: {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/7b536765-adaa-4682-86b5-b3ff0be769bf/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.045 2 DEBUG nova.storage.rbd_utils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] cloning images/7b536765-adaa-4682-86b5-b3ff0be769bf@snap to None/2534f8b9-e832-4b78-ada4-e551429bdc75_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.141 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "ad4745c71f608ccd993c23b78f9a7e19f70d6f59" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.235 2 DEBUG nova.objects.instance [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'migration_context' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:14:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1065249320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.281 2 DEBUG nova.compute.manager [req-5a0e9638-d60b-421b-993d-9ddd156cf1be req-bc9b45f3-2d2d-447a-8cde-4deaa7d88c2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Received event network-changed-dc177f85-b331-40ec-b30f-1f667878bcb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.282 2 DEBUG nova.compute.manager [req-5a0e9638-d60b-421b-993d-9ddd156cf1be req-bc9b45f3-2d2d-447a-8cde-4deaa7d88c2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Refreshing instance network info cache due to event network-changed-dc177f85-b331-40ec-b30f-1f667878bcb8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.282 2 DEBUG oslo_concurrency.lockutils [req-5a0e9638-d60b-421b-993d-9ddd156cf1be req-bc9b45f3-2d2d-447a-8cde-4deaa7d88c2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-d5de3978-2377-4d8e-aeaf-c952912130a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.282 2 DEBUG oslo_concurrency.lockutils [req-5a0e9638-d60b-421b-993d-9ddd156cf1be req-bc9b45f3-2d2d-447a-8cde-4deaa7d88c2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-d5de3978-2377-4d8e-aeaf-c952912130a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.282 2 DEBUG nova.network.neutron [req-5a0e9638-d60b-421b-993d-9ddd156cf1be req-bc9b45f3-2d2d-447a-8cde-4deaa7d88c2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Refreshing network info cache for port dc177f85-b331-40ec-b30f-1f667878bcb8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.284 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.285 2 DEBUG nova.virt.libvirt.vif [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:13:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1227005018',display_name='tempest-ServersTestJSON-server-1227005018',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1227005018',id=97,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-0z0rtrem',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:13:56Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=d5de3978-2377-4d8e-aeaf-c952912130a2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.285 2 DEBUG nova.network.os_vif_util [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.286 2 DEBUG nova.network.os_vif_util [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:28:0f,bridge_name='br-int',has_traffic_filtering=True,id=dc177f85-b331-40ec-b30f-1f667878bcb8,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc177f85-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.288 2 DEBUG nova.objects.instance [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'pci_devices' on Instance uuid d5de3978-2377-4d8e-aeaf-c952912130a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.292 2 DEBUG nova.storage.rbd_utils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] flattening vms/2534f8b9-e832-4b78-ada4-e551429bdc75_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.323 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:14:01 compute-0 nova_compute[259627]:   <uuid>d5de3978-2377-4d8e-aeaf-c952912130a2</uuid>
Oct 14 09:14:01 compute-0 nova_compute[259627]:   <name>instance-00000061</name>
Oct 14 09:14:01 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:14:01 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:14:01 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersTestJSON-server-1227005018</nova:name>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:14:00</nova:creationTime>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:14:01 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:14:01 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:14:01 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:14:01 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:14:01 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:14:01 compute-0 nova_compute[259627]:         <nova:user uuid="a287ef08fc5c4f218bf06cd2c7ed021e">tempest-ServersTestJSON-2060951674-project-member</nova:user>
Oct 14 09:14:01 compute-0 nova_compute[259627]:         <nova:project uuid="0a080fae2f3c4e39a6cca225203f5ec6">tempest-ServersTestJSON-2060951674</nova:project>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:14:01 compute-0 nova_compute[259627]:         <nova:port uuid="dc177f85-b331-40ec-b30f-1f667878bcb8">
Oct 14 09:14:01 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:14:01 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:14:01 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <system>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <entry name="serial">d5de3978-2377-4d8e-aeaf-c952912130a2</entry>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <entry name="uuid">d5de3978-2377-4d8e-aeaf-c952912130a2</entry>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     </system>
Oct 14 09:14:01 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:14:01 compute-0 nova_compute[259627]:   <os>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:   </os>
Oct 14 09:14:01 compute-0 nova_compute[259627]:   <features>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:   </features>
Oct 14 09:14:01 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:14:01 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:14:01 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/d5de3978-2377-4d8e-aeaf-c952912130a2_disk">
Oct 14 09:14:01 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       </source>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:14:01 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/d5de3978-2377-4d8e-aeaf-c952912130a2_disk.config">
Oct 14 09:14:01 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       </source>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:14:01 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:6a:28:0f"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <target dev="tapdc177f85-b3"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2/console.log" append="off"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <video>
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     </video>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:14:01 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:14:01 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:14:01 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:14:01 compute-0 nova_compute[259627]: </domain>
Oct 14 09:14:01 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.329 2 DEBUG nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Preparing to wait for external event network-vif-plugged-dc177f85-b331-40ec-b30f-1f667878bcb8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.330 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.330 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.330 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.331 2 DEBUG nova.virt.libvirt.vif [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:13:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1227005018',display_name='tempest-ServersTestJSON-server-1227005018',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1227005018',id=97,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-0z0rtrem',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:13:56Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=d5de3978-2377-4d8e-aeaf-c952912130a2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.333 2 DEBUG nova.network.os_vif_util [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.333 2 DEBUG nova.network.os_vif_util [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:28:0f,bridge_name='br-int',has_traffic_filtering=True,id=dc177f85-b331-40ec-b30f-1f667878bcb8,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc177f85-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.334 2 DEBUG os_vif [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:28:0f,bridge_name='br-int',has_traffic_filtering=True,id=dc177f85-b331-40ec-b30f-1f667878bcb8,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc177f85-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.339 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.340 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc177f85-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.347 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdc177f85-b3, col_values=(('external_ids', {'iface-id': 'dc177f85-b331-40ec-b30f-1f667878bcb8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:28:0f', 'vm-uuid': 'd5de3978-2377-4d8e-aeaf-c952912130a2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:01 compute-0 NetworkManager[44885]: <info>  [1760433241.3500] manager: (tapdc177f85-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.366 2 INFO os_vif [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:28:0f,bridge_name='br-int',has_traffic_filtering=True,id=dc177f85-b331-40ec-b30f-1f667878bcb8,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc177f85-b3')
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.452 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.455 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.455 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No VIF found with MAC fa:16:3e:6a:28:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.455 2 INFO nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Using config drive
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.476 2 DEBUG nova.storage.rbd_utils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d5de3978-2377-4d8e-aeaf-c952912130a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:01 compute-0 ceph-mon[74249]: pgmap v1790: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 14 09:14:01 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2523065631' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:01 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1065249320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.604 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Image rbd:vms/2534f8b9-e832-4b78-ada4-e551429bdc75_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.605 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.605 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Ensure instance console log exists: /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.606 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.606 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.607 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.610 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Start _get_guest_xml network_info=[{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-14T09:13:36Z,direct_url=<?>,disk_format='raw',id=7b536765-adaa-4682-86b5-b3ff0be769bf,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-17250352-shelved',owner='517aafb84156407c8672042097e3ef4f',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-14T09:13:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.614 2 WARNING nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.619 2 DEBUG nova.virt.libvirt.host [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.619 2 DEBUG nova.virt.libvirt.host [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.622 2 DEBUG nova.virt.libvirt.host [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.622 2 DEBUG nova.virt.libvirt.host [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.622 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.623 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-14T09:13:36Z,direct_url=<?>,disk_format='raw',id=7b536765-adaa-4682-86b5-b3ff0be769bf,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-17250352-shelved',owner='517aafb84156407c8672042097e3ef4f',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-14T09:13:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.623 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.623 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.624 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.624 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.624 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.624 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.624 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.625 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.625 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.625 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.625 2 DEBUG nova.objects.instance [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.642 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.866 2 INFO nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Creating config drive at /var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2/disk.config
Oct 14 09:14:01 compute-0 nova_compute[259627]: 2025-10-14 09:14:01.882 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyoe9xwa6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 289 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 276 op/s
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.049 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyoe9xwa6" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:14:02 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510585488' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.090 2 DEBUG nova.storage.rbd_utils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d5de3978-2377-4d8e-aeaf-c952912130a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.096 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2/disk.config d5de3978-2377-4d8e-aeaf-c952912130a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.164 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.199 2 DEBUG nova.storage.rbd_utils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.203 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.250 2 DEBUG nova.network.neutron [req-e4b5ed06-e3ee-4608-b7cc-4af3574bc9ff req-875a23c0-4188-445e-a206-eb240226ef44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updated VIF entry in instance network info cache for port 4f827284-f357-43c5-bdde-c69731b52914. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.251 2 DEBUG nova.network.neutron [req-e4b5ed06-e3ee-4608-b7cc-4af3574bc9ff req-875a23c0-4188-445e-a206-eb240226ef44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating instance_info_cache with network_info: [{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.271 2 DEBUG oslo_concurrency.lockutils [req-e4b5ed06-e3ee-4608-b7cc-4af3574bc9ff req-875a23c0-4188-445e-a206-eb240226ef44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.311 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2/disk.config d5de3978-2377-4d8e-aeaf-c952912130a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.312 2 INFO nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Deleting local config drive /var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2/disk.config because it was imported into RBD.
Oct 14 09:14:02 compute-0 kernel: tapdc177f85-b3: entered promiscuous mode
Oct 14 09:14:02 compute-0 NetworkManager[44885]: <info>  [1760433242.3699] manager: (tapdc177f85-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/413)
Oct 14 09:14:02 compute-0 ovn_controller[152662]: 2025-10-14T09:14:02Z|01003|binding|INFO|Claiming lport dc177f85-b331-40ec-b30f-1f667878bcb8 for this chassis.
Oct 14 09:14:02 compute-0 ovn_controller[152662]: 2025-10-14T09:14:02Z|01004|binding|INFO|dc177f85-b331-40ec-b30f-1f667878bcb8: Claiming fa:16:3e:6a:28:0f 10.100.0.14
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.378 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:28:0f 10.100.0.14'], port_security=['fa:16:3e:6a:28:0f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd5de3978-2377-4d8e-aeaf-c952912130a2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=dc177f85-b331-40ec-b30f-1f667878bcb8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.379 162547 INFO neutron.agent.ovn.metadata.agent [-] Port dc177f85-b331-40ec-b30f-1f667878bcb8 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e bound to our chassis
Oct 14 09:14:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.381 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e
Oct 14 09:14:02 compute-0 ovn_controller[152662]: 2025-10-14T09:14:02Z|01005|binding|INFO|Setting lport dc177f85-b331-40ec-b30f-1f667878bcb8 ovn-installed in OVS
Oct 14 09:14:02 compute-0 ovn_controller[152662]: 2025-10-14T09:14:02Z|01006|binding|INFO|Setting lport dc177f85-b331-40ec-b30f-1f667878bcb8 up in Southbound
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.406 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1d72c456-6ef7-43b6-9fcf-36b1e3e8bce2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:02 compute-0 systemd-machined[214636]: New machine qemu-119-instance-00000061.
Oct 14 09:14:02 compute-0 systemd[1]: Started Virtual Machine qemu-119-instance-00000061.
Oct 14 09:14:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.441 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba54c46-f2d7-4d59-831b-2820c259b9e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.445 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3b9e6c-535e-4344-9f54-21bb588c10e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:02 compute-0 systemd-udevd[355712]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:14:02 compute-0 NetworkManager[44885]: <info>  [1760433242.4608] device (tapdc177f85-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:14:02 compute-0 NetworkManager[44885]: <info>  [1760433242.4619] device (tapdc177f85-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:14:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.486 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f9f389-7aec-4209-a75d-6e33244aa7af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.515 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[588ec1f7-2e4e-4b2a-9126-76147a6dfbd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355722, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:02 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3510585488' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.538 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[31e5f951-670c-42b6-b566-9aba45c09d19]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355724, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355724, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.541 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.545 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.545 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.546 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.547 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.658 2 DEBUG nova.network.neutron [req-5a0e9638-d60b-421b-993d-9ddd156cf1be req-bc9b45f3-2d2d-447a-8cde-4deaa7d88c2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Updated VIF entry in instance network info cache for port dc177f85-b331-40ec-b30f-1f667878bcb8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.659 2 DEBUG nova.network.neutron [req-5a0e9638-d60b-421b-993d-9ddd156cf1be req-bc9b45f3-2d2d-447a-8cde-4deaa7d88c2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Updating instance_info_cache with network_info: [{"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:14:02 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023730173' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.677 2 DEBUG oslo_concurrency.lockutils [req-5a0e9638-d60b-421b-993d-9ddd156cf1be req-bc9b45f3-2d2d-447a-8cde-4deaa7d88c2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-d5de3978-2377-4d8e-aeaf-c952912130a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.696 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.697 2 DEBUG nova.virt.libvirt.vif [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:12:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-17250352',display_name='tempest-ServersNegativeTestJSON-server-17250352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-17250352',id=86,image_ref='7b536765-adaa-4682-86b5-b3ff0be769bf',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:12:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-rj00rja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegativeTestJSON-1475695514-project-member',shelved_at='2025-10-14T09:13:45.082732',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='7b536765-adaa-4682-86b5-b3ff0be769bf'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:13:55Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=2534f8b9-e832-4b78-ada4-e551429bdc75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.697 2 DEBUG nova.network.os_vif_util [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.698 2 DEBUG nova.network.os_vif_util [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.699 2 DEBUG nova.objects.instance [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'pci_devices' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.728 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:14:02 compute-0 nova_compute[259627]:   <uuid>2534f8b9-e832-4b78-ada4-e551429bdc75</uuid>
Oct 14 09:14:02 compute-0 nova_compute[259627]:   <name>instance-00000056</name>
Oct 14 09:14:02 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:14:02 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:14:02 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersNegativeTestJSON-server-17250352</nova:name>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:14:01</nova:creationTime>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:14:02 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:14:02 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:14:02 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:14:02 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:14:02 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:14:02 compute-0 nova_compute[259627]:         <nova:user uuid="92e59e145f6942b78d0ffbebc4d89e76">tempest-ServersNegativeTestJSON-1475695514-project-member</nova:user>
Oct 14 09:14:02 compute-0 nova_compute[259627]:         <nova:project uuid="517aafb84156407c8672042097e3ef4f">tempest-ServersNegativeTestJSON-1475695514</nova:project>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="7b536765-adaa-4682-86b5-b3ff0be769bf"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:14:02 compute-0 nova_compute[259627]:         <nova:port uuid="4f827284-f357-43c5-bdde-c69731b52914">
Oct 14 09:14:02 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:14:02 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:14:02 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <system>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <entry name="serial">2534f8b9-e832-4b78-ada4-e551429bdc75</entry>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <entry name="uuid">2534f8b9-e832-4b78-ada4-e551429bdc75</entry>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     </system>
Oct 14 09:14:02 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:14:02 compute-0 nova_compute[259627]:   <os>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:   </os>
Oct 14 09:14:02 compute-0 nova_compute[259627]:   <features>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:   </features>
Oct 14 09:14:02 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:14:02 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:14:02 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2534f8b9-e832-4b78-ada4-e551429bdc75_disk">
Oct 14 09:14:02 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       </source>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:14:02 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config">
Oct 14 09:14:02 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       </source>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:14:02 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:8b:d7:f7"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <target dev="tap4f827284-f3"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/console.log" append="off"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <video>
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     </video>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <input type="keyboard" bus="usb"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:14:02 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:14:02 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:14:02 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:14:02 compute-0 nova_compute[259627]: </domain>
Oct 14 09:14:02 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.733 2 DEBUG nova.compute.manager [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Preparing to wait for external event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.733 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.734 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.734 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:14:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.734 2 DEBUG nova.virt.libvirt.vif [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:12:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-17250352',display_name='tempest-ServersNegativeTestJSON-server-17250352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-17250352',id=86,image_ref='7b536765-adaa-4682-86b5-b3ff0be769bf',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:12:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-rj00rja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegativeTestJSON-1475695514-project-member',shelved_at='2025-10-14T09:13:45.082732',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='7b536765-adaa-4682-86b5-b3ff0be769bf'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:13:55Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=2534f8b9-e832-4b78-ada4-e551429bdc75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.735 2 DEBUG nova.network.os_vif_util [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.735 2 DEBUG nova.network.os_vif_util [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.735 2 DEBUG os_vif [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.738 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f827284-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.739 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f827284-f3, col_values=(('external_ids', {'iface-id': '4f827284-f357-43c5-bdde-c69731b52914', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:d7:f7', 'vm-uuid': '2534f8b9-e832-4b78-ada4-e551429bdc75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:02 compute-0 NetworkManager[44885]: <info>  [1760433242.7410] manager: (tap4f827284-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.746 2 INFO os_vif [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3')
Oct 14 09:14:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:14:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:14:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:14:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.813 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.813 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.813 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No VIF found with MAC fa:16:3e:8b:d7:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.814 2 INFO nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Using config drive
Oct 14 09:14:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.843 2 DEBUG nova.storage.rbd_utils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.866 2 DEBUG nova.objects.instance [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.902 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:02 compute-0 nova_compute[259627]: 2025-10-14 09:14:02.929 2 DEBUG nova.objects.instance [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'keypairs' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.305 2 INFO nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Creating config drive at /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.314 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptddp6k2q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.490 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptddp6k2q" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:03 compute-0 ceph-mon[74249]: pgmap v1791: 305 pgs: 305 active+clean; 289 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 276 op/s
Oct 14 09:14:03 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3023730173' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.532 2 DEBUG nova.storage.rbd_utils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.535 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.692 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433243.6916466, d5de3978-2377-4d8e-aeaf-c952912130a2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.693 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] VM Started (Lifecycle Event)
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.706 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.706 2 INFO nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Deleting local config drive /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config because it was imported into RBD.
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.730 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.735 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433243.6934407, d5de3978-2377-4d8e-aeaf-c952912130a2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.735 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] VM Paused (Lifecycle Event)
Oct 14 09:14:03 compute-0 kernel: tap4f827284-f3: entered promiscuous mode
Oct 14 09:14:03 compute-0 NetworkManager[44885]: <info>  [1760433243.7474] manager: (tap4f827284-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/415)
Oct 14 09:14:03 compute-0 ovn_controller[152662]: 2025-10-14T09:14:03Z|01007|binding|INFO|Claiming lport 4f827284-f357-43c5-bdde-c69731b52914 for this chassis.
Oct 14 09:14:03 compute-0 ovn_controller[152662]: 2025-10-14T09:14:03Z|01008|binding|INFO|4f827284-f357-43c5-bdde-c69731b52914: Claiming fa:16:3e:8b:d7:f7 10.100.0.7
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.757 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:d7:f7 10.100.0.7'], port_security=['fa:16:3e:8b:d7:f7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2534f8b9-e832-4b78-ada4-e551429bdc75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '517aafb84156407c8672042097e3ef4f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '572acc55-453a-444a-ab8d-a15e14283f7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927296e1-b389-4596-b9be-8cf735b93ca2, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4f827284-f357-43c5-bdde-c69731b52914) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.758 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4f827284-f357-43c5-bdde-c69731b52914 in datapath a49b41b4-2559-4a22-a274-a6c7bbe75f2c bound to our chassis
Oct 14 09:14:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.760 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a49b41b4-2559-4a22-a274-a6c7bbe75f2c
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.763 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:03 compute-0 NetworkManager[44885]: <info>  [1760433243.7687] device (tap4f827284-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:14:03 compute-0 NetworkManager[44885]: <info>  [1760433243.7706] device (tap4f827284-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.775 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e7879cc1-35b1-4209-a0d8-d7605d766547]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.776 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa49b41b4-21 in ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:14:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.779 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa49b41b4-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:14:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.779 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[48a1a8cc-ce1a-4e4e-8287-562b0ee459da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.780 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[40ad56f6-f4dd-4be3-807d-043112a29b54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:03 compute-0 ovn_controller[152662]: 2025-10-14T09:14:03Z|01009|binding|INFO|Setting lport 4f827284-f357-43c5-bdde-c69731b52914 ovn-installed in OVS
Oct 14 09:14:03 compute-0 ovn_controller[152662]: 2025-10-14T09:14:03Z|01010|binding|INFO|Setting lport 4f827284-f357-43c5-bdde-c69731b52914 up in Southbound
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.783 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.791 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[de114988-f56f-45b2-9835-4f8016d3366c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:03 compute-0 systemd-machined[214636]: New machine qemu-120-instance-00000056.
Oct 14 09:14:03 compute-0 nova_compute[259627]: 2025-10-14 09:14:03.802 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:14:03 compute-0 systemd[1]: Started Virtual Machine qemu-120-instance-00000056.
Oct 14 09:14:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.817 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[51c03d05-96ba-4a98-8f2a-1b1b499714f3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.861 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[38747671-4168-4359-8036-4997daee741c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.866 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4c824d5e-5abe-4c68-b0c8-714e6578af82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:03 compute-0 NetworkManager[44885]: <info>  [1760433243.8686] manager: (tapa49b41b4-20): new Veth device (/org/freedesktop/NetworkManager/Devices/416)
Oct 14 09:14:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.922 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c0743b-a168-4f33-981c-24b6188df4ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.926 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bf41a509-fc5d-40d1-abe5-94417e39e6db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:03 compute-0 NetworkManager[44885]: <info>  [1760433243.9561] device (tapa49b41b4-20): carrier: link connected
Oct 14 09:14:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 289 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Oct 14 09:14:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.959 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c7b86e-e25b-42fc-b868-4c6165ad56f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.990 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9c1b17bc-b3b9-492e-8c45-3f6aeef33d0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa49b41b4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:5b:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 293], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702271, 'reachable_time': 37208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355874, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.009 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[42abb07e-b393-4367-bfd6-b4dcab049c7b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:5b6b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702271, 'tstamp': 702271}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355875, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.030 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9594c57a-de51-4301-a37b-006cbd5bf938]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa49b41b4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:5b:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 293], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702271, 'reachable_time': 37208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355876, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.072 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d72c3885-15a5-4429-b9d8-ccbcf064f0db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.162 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[edfe9334-3e35-437d-aeb6-62e4f98e8aad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.164 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49b41b4-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.165 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.167 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa49b41b4-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:04 compute-0 nova_compute[259627]: 2025-10-14 09:14:04.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:04 compute-0 kernel: tapa49b41b4-20: entered promiscuous mode
Oct 14 09:14:04 compute-0 NetworkManager[44885]: <info>  [1760433244.1702] manager: (tapa49b41b4-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.176 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa49b41b4-20, col_values=(('external_ids', {'iface-id': '61fe5571-a8eb-446a-8c4c-1f6f6758b146'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:04 compute-0 nova_compute[259627]: 2025-10-14 09:14:04.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:04 compute-0 ovn_controller[152662]: 2025-10-14T09:14:04Z|01011|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 09:14:04 compute-0 nova_compute[259627]: 2025-10-14 09:14:04.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.181 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.182 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[73b16dd4-b5f9-485c-b43e-26cef6ac7526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.183 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-a49b41b4-2559-4a22-a274-a6c7bbe75f2c
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.pid.haproxy
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID a49b41b4-2559-4a22-a274-a6c7bbe75f2c
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:14:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.189 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'env', 'PROCESS_TAG=haproxy-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:14:04 compute-0 nova_compute[259627]: 2025-10-14 09:14:04.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:04 compute-0 nova_compute[259627]: 2025-10-14 09:14:04.666 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433244.665723, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:04 compute-0 nova_compute[259627]: 2025-10-14 09:14:04.666 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Started (Lifecycle Event)
Oct 14 09:14:04 compute-0 podman[355950]: 2025-10-14 09:14:04.677543427 +0000 UTC m=+0.064090310 container create ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:14:04 compute-0 nova_compute[259627]: 2025-10-14 09:14:04.685 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:04 compute-0 nova_compute[259627]: 2025-10-14 09:14:04.689 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433244.66791, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:04 compute-0 nova_compute[259627]: 2025-10-14 09:14:04.689 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Paused (Lifecycle Event)
Oct 14 09:14:04 compute-0 nova_compute[259627]: 2025-10-14 09:14:04.705 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:04 compute-0 nova_compute[259627]: 2025-10-14 09:14:04.708 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:14:04 compute-0 systemd[1]: Started libpod-conmon-ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62.scope.
Oct 14 09:14:04 compute-0 nova_compute[259627]: 2025-10-14 09:14:04.726 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:14:04 compute-0 podman[355950]: 2025-10-14 09:14:04.651481005 +0000 UTC m=+0.038027908 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:14:04 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6024105aa89e38c50917ad3e414821ec8f7b4c448c18cb2e7568bb5ccc24ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:14:04 compute-0 podman[355950]: 2025-10-14 09:14:04.775072801 +0000 UTC m=+0.161619684 container init ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:14:04 compute-0 podman[355950]: 2025-10-14 09:14:04.780373642 +0000 UTC m=+0.166920525 container start ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 14 09:14:04 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[355965]: [NOTICE]   (355969) : New worker (355971) forked
Oct 14 09:14:04 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[355965]: [NOTICE]   (355969) : Loading success.
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.206 2 DEBUG nova.compute.manager [req-ecb14e5c-fea7-4746-96dc-2a82a4487cfb req-82ad344e-1680-464d-9159-a1e5c8c8b540 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.206 2 DEBUG oslo_concurrency.lockutils [req-ecb14e5c-fea7-4746-96dc-2a82a4487cfb req-82ad344e-1680-464d-9159-a1e5c8c8b540 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.206 2 DEBUG oslo_concurrency.lockutils [req-ecb14e5c-fea7-4746-96dc-2a82a4487cfb req-82ad344e-1680-464d-9159-a1e5c8c8b540 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.207 2 DEBUG oslo_concurrency.lockutils [req-ecb14e5c-fea7-4746-96dc-2a82a4487cfb req-82ad344e-1680-464d-9159-a1e5c8c8b540 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.207 2 DEBUG nova.compute.manager [req-ecb14e5c-fea7-4746-96dc-2a82a4487cfb req-82ad344e-1680-464d-9159-a1e5c8c8b540 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Processing event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.207 2 DEBUG nova.compute.manager [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.212 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433245.2121453, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.212 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Resumed (Lifecycle Event)
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.213 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.217 2 INFO nova.virt.libvirt.driver [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance spawned successfully.
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.232 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.236 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.255 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.410 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquiring lock "84080e43-a9f4-4b6a-889f-d76167ff715a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.411 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.425 2 DEBUG nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.492 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.492 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.499 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.499 2 INFO nova.compute.claims [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:14:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Oct 14 09:14:05 compute-0 ceph-mon[74249]: pgmap v1792: 305 pgs: 305 active+clean; 289 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Oct 14 09:14:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Oct 14 09:14:05 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Oct 14 09:14:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:14:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3296468083' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:14:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:14:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3296468083' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.669 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:05 compute-0 nova_compute[259627]: 2025-10-14 09:14:05.900 2 DEBUG nova.compute.manager [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 372 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 9.4 MiB/s wr, 314 op/s
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.004 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 10.429s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:14:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/763446376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.220 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.228 2 DEBUG nova.compute.provider_tree [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.250 2 DEBUG nova.scheduler.client.report [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.277 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.278 2 DEBUG nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.322 2 DEBUG nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.323 2 DEBUG nova.network.neutron [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.354 2 INFO nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.376 2 DEBUG nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.427 2 DEBUG nova.compute.manager [req-cb458d2b-757e-42f2-919b-df62bc45631d req-1f8493cf-3da6-4493-865c-2c00c4e67ec7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Received event network-vif-plugged-dc177f85-b331-40ec-b30f-1f667878bcb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.428 2 DEBUG oslo_concurrency.lockutils [req-cb458d2b-757e-42f2-919b-df62bc45631d req-1f8493cf-3da6-4493-865c-2c00c4e67ec7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.428 2 DEBUG oslo_concurrency.lockutils [req-cb458d2b-757e-42f2-919b-df62bc45631d req-1f8493cf-3da6-4493-865c-2c00c4e67ec7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.429 2 DEBUG oslo_concurrency.lockutils [req-cb458d2b-757e-42f2-919b-df62bc45631d req-1f8493cf-3da6-4493-865c-2c00c4e67ec7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.429 2 DEBUG nova.compute.manager [req-cb458d2b-757e-42f2-919b-df62bc45631d req-1f8493cf-3da6-4493-865c-2c00c4e67ec7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Processing event network-vif-plugged-dc177f85-b331-40ec-b30f-1f667878bcb8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.430 2 DEBUG nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.443 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.444 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433246.4423149, d5de3978-2377-4d8e-aeaf-c952912130a2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.444 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] VM Resumed (Lifecycle Event)
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.450 2 INFO nova.virt.libvirt.driver [-] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Instance spawned successfully.
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.451 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.482 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.491 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.496 2 DEBUG nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.497 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.497 2 INFO nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Creating image(s)
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.524 2 DEBUG nova.storage.rbd_utils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] rbd image 84080e43-a9f4-4b6a-889f-d76167ff715a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:06 compute-0 ceph-mon[74249]: osdmap e257: 3 total, 3 up, 3 in
Oct 14 09:14:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3296468083' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:14:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3296468083' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:14:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/763446376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.563 2 DEBUG nova.storage.rbd_utils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] rbd image 84080e43-a9f4-4b6a-889f-d76167ff715a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.591 2 DEBUG nova.storage.rbd_utils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] rbd image 84080e43-a9f4-4b6a-889f-d76167ff715a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.596 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.647 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.648 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.649 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.649 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.650 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.650 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.656 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.678 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.679 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.680 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.681 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.706 2 DEBUG nova.storage.rbd_utils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] rbd image 84080e43-a9f4-4b6a-889f-d76167ff715a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.711 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 84080e43-a9f4-4b6a-889f-d76167ff715a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.755 2 INFO nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Took 10.69 seconds to spawn the instance on the hypervisor.
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.757 2 DEBUG nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.857 2 INFO nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Took 11.69 seconds to build instance.
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.880 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:06 compute-0 nova_compute[259627]: 2025-10-14 09:14:06.997 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 84080e43-a9f4-4b6a-889f-d76167ff715a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:07.030 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:07.031 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:07.031 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:07 compute-0 nova_compute[259627]: 2025-10-14 09:14:07.070 2 DEBUG nova.storage.rbd_utils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] resizing rbd image 84080e43-a9f4-4b6a-889f-d76167ff715a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:14:07 compute-0 nova_compute[259627]: 2025-10-14 09:14:07.190 2 DEBUG nova.objects.instance [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lazy-loading 'migration_context' on Instance uuid 84080e43-a9f4-4b6a-889f-d76167ff715a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:07 compute-0 nova_compute[259627]: 2025-10-14 09:14:07.209 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:14:07 compute-0 nova_compute[259627]: 2025-10-14 09:14:07.209 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Ensure instance console log exists: /var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:14:07 compute-0 nova_compute[259627]: 2025-10-14 09:14:07.210 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:07 compute-0 nova_compute[259627]: 2025-10-14 09:14:07.210 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:07 compute-0 nova_compute[259627]: 2025-10-14 09:14:07.210 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:07 compute-0 nova_compute[259627]: 2025-10-14 09:14:07.249 2 DEBUG nova.policy [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '559a8ea4f81141efa5e11da9b174482d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2de05cc126e24608be28a7d5dea18bf3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:14:07 compute-0 nova_compute[259627]: 2025-10-14 09:14:07.382 2 DEBUG nova.compute.manager [req-d1a78b19-e037-4df1-9cb9-04c00c9f4d7f req-fe87dffc-5dc5-4674-8b85-684fccaeecd3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:07 compute-0 nova_compute[259627]: 2025-10-14 09:14:07.382 2 DEBUG oslo_concurrency.lockutils [req-d1a78b19-e037-4df1-9cb9-04c00c9f4d7f req-fe87dffc-5dc5-4674-8b85-684fccaeecd3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:07 compute-0 nova_compute[259627]: 2025-10-14 09:14:07.383 2 DEBUG oslo_concurrency.lockutils [req-d1a78b19-e037-4df1-9cb9-04c00c9f4d7f req-fe87dffc-5dc5-4674-8b85-684fccaeecd3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:07 compute-0 nova_compute[259627]: 2025-10-14 09:14:07.383 2 DEBUG oslo_concurrency.lockutils [req-d1a78b19-e037-4df1-9cb9-04c00c9f4d7f req-fe87dffc-5dc5-4674-8b85-684fccaeecd3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:07 compute-0 nova_compute[259627]: 2025-10-14 09:14:07.383 2 DEBUG nova.compute.manager [req-d1a78b19-e037-4df1-9cb9-04c00c9f4d7f req-fe87dffc-5dc5-4674-8b85-684fccaeecd3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:07 compute-0 nova_compute[259627]: 2025-10-14 09:14:07.383 2 WARNING nova.compute.manager [req-d1a78b19-e037-4df1-9cb9-04c00c9f4d7f req-fe87dffc-5dc5-4674-8b85-684fccaeecd3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state active and task_state None.
Oct 14 09:14:07 compute-0 ceph-mon[74249]: pgmap v1794: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 372 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 9.4 MiB/s wr, 314 op/s
Oct 14 09:14:07 compute-0 nova_compute[259627]: 2025-10-14 09:14:07.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:14:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 372 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 9.4 MiB/s wr, 314 op/s
Oct 14 09:14:07 compute-0 nova_compute[259627]: 2025-10-14 09:14:07.965 2 DEBUG nova.network.neutron [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Successfully created port: 1e3d49fe-52bd-40cb-ae1a-86eb664df473 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:14:08 compute-0 nova_compute[259627]: 2025-10-14 09:14:08.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:08 compute-0 ovn_controller[152662]: 2025-10-14T09:14:08Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:3c:de 10.100.0.5
Oct 14 09:14:08 compute-0 ovn_controller[152662]: 2025-10-14T09:14:08Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:3c:de 10.100.0.5
Oct 14 09:14:09 compute-0 ceph-mon[74249]: pgmap v1795: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 372 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 9.4 MiB/s wr, 314 op/s
Oct 14 09:14:09 compute-0 nova_compute[259627]: 2025-10-14 09:14:09.769 2 DEBUG nova.network.neutron [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Successfully updated port: 1e3d49fe-52bd-40cb-ae1a-86eb664df473 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:14:09 compute-0 nova_compute[259627]: 2025-10-14 09:14:09.776 2 DEBUG nova.compute.manager [req-53b7f0c4-88a2-4d4b-b459-4856291cd060 req-8620bd54-900a-417a-98a0-1274568fa9ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Received event network-vif-plugged-dc177f85-b331-40ec-b30f-1f667878bcb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:09 compute-0 nova_compute[259627]: 2025-10-14 09:14:09.776 2 DEBUG oslo_concurrency.lockutils [req-53b7f0c4-88a2-4d4b-b459-4856291cd060 req-8620bd54-900a-417a-98a0-1274568fa9ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:09 compute-0 nova_compute[259627]: 2025-10-14 09:14:09.777 2 DEBUG oslo_concurrency.lockutils [req-53b7f0c4-88a2-4d4b-b459-4856291cd060 req-8620bd54-900a-417a-98a0-1274568fa9ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:09 compute-0 nova_compute[259627]: 2025-10-14 09:14:09.777 2 DEBUG oslo_concurrency.lockutils [req-53b7f0c4-88a2-4d4b-b459-4856291cd060 req-8620bd54-900a-417a-98a0-1274568fa9ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:09 compute-0 nova_compute[259627]: 2025-10-14 09:14:09.778 2 DEBUG nova.compute.manager [req-53b7f0c4-88a2-4d4b-b459-4856291cd060 req-8620bd54-900a-417a-98a0-1274568fa9ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] No waiting events found dispatching network-vif-plugged-dc177f85-b331-40ec-b30f-1f667878bcb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:09 compute-0 nova_compute[259627]: 2025-10-14 09:14:09.778 2 WARNING nova.compute.manager [req-53b7f0c4-88a2-4d4b-b459-4856291cd060 req-8620bd54-900a-417a-98a0-1274568fa9ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Received unexpected event network-vif-plugged-dc177f85-b331-40ec-b30f-1f667878bcb8 for instance with vm_state active and task_state None.
Oct 14 09:14:09 compute-0 nova_compute[259627]: 2025-10-14 09:14:09.787 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquiring lock "refresh_cache-84080e43-a9f4-4b6a-889f-d76167ff715a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:14:09 compute-0 nova_compute[259627]: 2025-10-14 09:14:09.787 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquired lock "refresh_cache-84080e43-a9f4-4b6a-889f-d76167ff715a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:14:09 compute-0 nova_compute[259627]: 2025-10-14 09:14:09.788 2 DEBUG nova.network.neutron [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:14:09 compute-0 nova_compute[259627]: 2025-10-14 09:14:09.919 2 DEBUG nova.network.neutron [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:14:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 372 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 9.4 MiB/s wr, 314 op/s
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.174 2 DEBUG nova.compute.manager [req-a00b9d15-59a8-43b8-8b81-2311516d2669 req-85d7d570-3f8f-4031-8c8d-85cc0ce6dc0c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Received event network-changed-1e3d49fe-52bd-40cb-ae1a-86eb664df473 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.174 2 DEBUG nova.compute.manager [req-a00b9d15-59a8-43b8-8b81-2311516d2669 req-85d7d570-3f8f-4031-8c8d-85cc0ce6dc0c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Refreshing instance network info cache due to event network-changed-1e3d49fe-52bd-40cb-ae1a-86eb664df473. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.175 2 DEBUG oslo_concurrency.lockutils [req-a00b9d15-59a8-43b8-8b81-2311516d2669 req-85d7d570-3f8f-4031-8c8d-85cc0ce6dc0c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-84080e43-a9f4-4b6a-889f-d76167ff715a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.701 2 DEBUG nova.network.neutron [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Updating instance_info_cache with network_info: [{"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.728 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Releasing lock "refresh_cache-84080e43-a9f4-4b6a-889f-d76167ff715a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.729 2 DEBUG nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Instance network_info: |[{"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.729 2 DEBUG oslo_concurrency.lockutils [req-a00b9d15-59a8-43b8-8b81-2311516d2669 req-85d7d570-3f8f-4031-8c8d-85cc0ce6dc0c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-84080e43-a9f4-4b6a-889f-d76167ff715a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.730 2 DEBUG nova.network.neutron [req-a00b9d15-59a8-43b8-8b81-2311516d2669 req-85d7d570-3f8f-4031-8c8d-85cc0ce6dc0c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Refreshing network info cache for port 1e3d49fe-52bd-40cb-ae1a-86eb664df473 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.735 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Start _get_guest_xml network_info=[{"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.742 2 WARNING nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.749 2 DEBUG nova.virt.libvirt.host [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.750 2 DEBUG nova.virt.libvirt.host [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.754 2 DEBUG nova.virt.libvirt.host [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.755 2 DEBUG nova.virt.libvirt.host [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.755 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.756 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.756 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.757 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.758 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.758 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.759 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.759 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.760 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.760 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.760 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.761 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.766 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.899 2 DEBUG oslo_concurrency.lockutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "d5de3978-2377-4d8e-aeaf-c952912130a2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.900 2 DEBUG oslo_concurrency.lockutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.900 2 DEBUG oslo_concurrency.lockutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.900 2 DEBUG oslo_concurrency.lockutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.901 2 DEBUG oslo_concurrency.lockutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.902 2 INFO nova.compute.manager [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Terminating instance
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.903 2 DEBUG nova.compute.manager [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:14:10 compute-0 kernel: tapdc177f85-b3 (unregistering): left promiscuous mode
Oct 14 09:14:10 compute-0 NetworkManager[44885]: <info>  [1760433250.9542] device (tapdc177f85-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:10 compute-0 ovn_controller[152662]: 2025-10-14T09:14:10Z|01012|binding|INFO|Releasing lport dc177f85-b331-40ec-b30f-1f667878bcb8 from this chassis (sb_readonly=0)
Oct 14 09:14:10 compute-0 ovn_controller[152662]: 2025-10-14T09:14:10Z|01013|binding|INFO|Setting lport dc177f85-b331-40ec-b30f-1f667878bcb8 down in Southbound
Oct 14 09:14:10 compute-0 ovn_controller[152662]: 2025-10-14T09:14:10Z|01014|binding|INFO|Removing iface tapdc177f85-b3 ovn-installed in OVS
Oct 14 09:14:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:10.978 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:28:0f 10.100.0.14'], port_security=['fa:16:3e:6a:28:0f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd5de3978-2377-4d8e-aeaf-c952912130a2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=dc177f85-b331-40ec-b30f-1f667878bcb8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:10.981 162547 INFO neutron.agent.ovn.metadata.agent [-] Port dc177f85-b331-40ec-b30f-1f667878bcb8 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e unbound from our chassis
Oct 14 09:14:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:10.982 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e
Oct 14 09:14:10 compute-0 nova_compute[259627]: 2025-10-14 09:14:10.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.003 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[40ec4e19-2a3a-44bd-afdf-2fea9fa1806e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:11 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000061.scope: Deactivated successfully.
Oct 14 09:14:11 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000061.scope: Consumed 5.588s CPU time.
Oct 14 09:14:11 compute-0 systemd-machined[214636]: Machine qemu-119-instance-00000061 terminated.
Oct 14 09:14:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.037 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ec475d74-8542-47ae-bce3-06e36ca28c91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.040 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[13714929-19cc-4c04-8203-3495f668aeb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.063 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cb55f6c0-c97b-401f-86c4-ba33ff21cdf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.088 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[51db7615-6552-4916-864c-9d677bf9d60b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356199, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.110 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e593c83c-389b-4287-ab43-511d4b4533f2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356200, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356200, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.115 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.123 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.124 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.124 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.125 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.149 2 INFO nova.virt.libvirt.driver [-] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Instance destroyed successfully.
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.150 2 DEBUG nova.objects.instance [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'resources' on Instance uuid d5de3978-2377-4d8e-aeaf-c952912130a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.163 2 DEBUG nova.virt.libvirt.vif [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:13:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1227005018',display_name='tempest-ServersTestJSON-server-1227005018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1227005018',id=97,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:14:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-0z0rtrem',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image
_min_ram='0',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:14:06Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=d5de3978-2377-4d8e-aeaf-c952912130a2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.163 2 DEBUG nova.network.os_vif_util [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.164 2 DEBUG nova.network.os_vif_util [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:28:0f,bridge_name='br-int',has_traffic_filtering=True,id=dc177f85-b331-40ec-b30f-1f667878bcb8,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc177f85-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.165 2 DEBUG os_vif [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:28:0f,bridge_name='br-int',has_traffic_filtering=True,id=dc177f85-b331-40ec-b30f-1f667878bcb8,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc177f85-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.166 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc177f85-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.173 2 INFO os_vif [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:28:0f,bridge_name='br-int',has_traffic_filtering=True,id=dc177f85-b331-40ec-b30f-1f667878bcb8,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc177f85-b3')
Oct 14 09:14:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:14:11 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2239772539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.252 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.272 2 DEBUG nova.storage.rbd_utils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] rbd image 84080e43-a9f4-4b6a-889f-d76167ff715a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.277 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.453 2 DEBUG nova.objects.instance [None req-f9282da8-119f-461e-93f7-f0c7d3345cc6 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'pci_devices' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.477 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433251.4770362, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.478 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Paused (Lifecycle Event)
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.503 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.515 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.542 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 14 09:14:11 compute-0 ceph-mon[74249]: pgmap v1796: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 372 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 9.4 MiB/s wr, 314 op/s
Oct 14 09:14:11 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2239772539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.579 2 INFO nova.virt.libvirt.driver [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Deleting instance files /var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2_del
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.580 2 INFO nova.virt.libvirt.driver [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Deletion of /var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2_del complete
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.625 2 INFO nova.compute.manager [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.626 2 DEBUG oslo.service.loopingcall [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.626 2 DEBUG nova.compute.manager [-] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.626 2 DEBUG nova.network.neutron [-] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:14:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:14:11 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3088650112' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.726 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.727 2 DEBUG nova.virt.libvirt.vif [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-250281827',display_name='tempest-ServerMetadataTestJSON-server-250281827',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-250281827',id=98,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2de05cc126e24608be28a7d5dea18bf3',ramdisk_id='',reservation_id='r-aqxl060k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-10451963',owner_user_name='tempest-ServerMetadataTestJSON-10451
963-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:06Z,user_data=None,user_id='559a8ea4f81141efa5e11da9b174482d',uuid=84080e43-a9f4-4b6a-889f-d76167ff715a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.728 2 DEBUG nova.network.os_vif_util [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Converting VIF {"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.729 2 DEBUG nova.network.os_vif_util [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:6b:bc,bridge_name='br-int',has_traffic_filtering=True,id=1e3d49fe-52bd-40cb-ae1a-86eb664df473,network=Network(8110a1ba-8e30-49ef-ba1c-c72228086a20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e3d49fe-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.730 2 DEBUG nova.objects.instance [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 84080e43-a9f4-4b6a-889f-d76167ff715a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.753 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:14:11 compute-0 nova_compute[259627]:   <uuid>84080e43-a9f4-4b6a-889f-d76167ff715a</uuid>
Oct 14 09:14:11 compute-0 nova_compute[259627]:   <name>instance-00000062</name>
Oct 14 09:14:11 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:14:11 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:14:11 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerMetadataTestJSON-server-250281827</nova:name>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:14:10</nova:creationTime>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:14:11 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:14:11 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:14:11 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:14:11 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:14:11 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:14:11 compute-0 nova_compute[259627]:         <nova:user uuid="559a8ea4f81141efa5e11da9b174482d">tempest-ServerMetadataTestJSON-10451963-project-member</nova:user>
Oct 14 09:14:11 compute-0 nova_compute[259627]:         <nova:project uuid="2de05cc126e24608be28a7d5dea18bf3">tempest-ServerMetadataTestJSON-10451963</nova:project>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:14:11 compute-0 nova_compute[259627]:         <nova:port uuid="1e3d49fe-52bd-40cb-ae1a-86eb664df473">
Oct 14 09:14:11 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:14:11 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:14:11 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <system>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <entry name="serial">84080e43-a9f4-4b6a-889f-d76167ff715a</entry>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <entry name="uuid">84080e43-a9f4-4b6a-889f-d76167ff715a</entry>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     </system>
Oct 14 09:14:11 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:14:11 compute-0 nova_compute[259627]:   <os>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:   </os>
Oct 14 09:14:11 compute-0 nova_compute[259627]:   <features>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:   </features>
Oct 14 09:14:11 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:14:11 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:14:11 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/84080e43-a9f4-4b6a-889f-d76167ff715a_disk">
Oct 14 09:14:11 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       </source>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:14:11 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/84080e43-a9f4-4b6a-889f-d76167ff715a_disk.config">
Oct 14 09:14:11 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       </source>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:14:11 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:2d:6b:bc"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <target dev="tap1e3d49fe-52"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a/console.log" append="off"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <video>
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     </video>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:14:11 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:14:11 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:14:11 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:14:11 compute-0 nova_compute[259627]: </domain>
Oct 14 09:14:11 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.759 2 DEBUG nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Preparing to wait for external event network-vif-plugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.759 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquiring lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.759 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.760 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.760 2 DEBUG nova.virt.libvirt.vif [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-250281827',display_name='tempest-ServerMetadataTestJSON-server-250281827',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-250281827',id=98,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2de05cc126e24608be28a7d5dea18bf3',ramdisk_id='',reservation_id='r-aqxl060k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-10451963',owner_user_name='tempest-ServerMetadataTest
JSON-10451963-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:06Z,user_data=None,user_id='559a8ea4f81141efa5e11da9b174482d',uuid=84080e43-a9f4-4b6a-889f-d76167ff715a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.760 2 DEBUG nova.network.os_vif_util [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Converting VIF {"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.761 2 DEBUG nova.network.os_vif_util [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:6b:bc,bridge_name='br-int',has_traffic_filtering=True,id=1e3d49fe-52bd-40cb-ae1a-86eb664df473,network=Network(8110a1ba-8e30-49ef-ba1c-c72228086a20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e3d49fe-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.761 2 DEBUG os_vif [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:6b:bc,bridge_name='br-int',has_traffic_filtering=True,id=1e3d49fe-52bd-40cb-ae1a-86eb664df473,network=Network(8110a1ba-8e30-49ef-ba1c-c72228086a20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e3d49fe-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.762 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.762 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.765 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e3d49fe-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.765 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1e3d49fe-52, col_values=(('external_ids', {'iface-id': '1e3d49fe-52bd-40cb-ae1a-86eb664df473', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:6b:bc', 'vm-uuid': '84080e43-a9f4-4b6a-889f-d76167ff715a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:11 compute-0 NetworkManager[44885]: <info>  [1760433251.7679] manager: (tap1e3d49fe-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:11 compute-0 kernel: tap4f827284-f3 (unregistering): left promiscuous mode
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.773 2 INFO os_vif [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:6b:bc,bridge_name='br-int',has_traffic_filtering=True,id=1e3d49fe-52bd-40cb-ae1a-86eb664df473,network=Network(8110a1ba-8e30-49ef-ba1c-c72228086a20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e3d49fe-52')
Oct 14 09:14:11 compute-0 NetworkManager[44885]: <info>  [1760433251.7792] device (tap4f827284-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:11 compute-0 ovn_controller[152662]: 2025-10-14T09:14:11Z|01015|binding|INFO|Releasing lport 4f827284-f357-43c5-bdde-c69731b52914 from this chassis (sb_readonly=0)
Oct 14 09:14:11 compute-0 ovn_controller[152662]: 2025-10-14T09:14:11Z|01016|binding|INFO|Setting lport 4f827284-f357-43c5-bdde-c69731b52914 down in Southbound
Oct 14 09:14:11 compute-0 ovn_controller[152662]: 2025-10-14T09:14:11Z|01017|binding|INFO|Removing iface tap4f827284-f3 ovn-installed in OVS
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.802 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:d7:f7 10.100.0.7'], port_security=['fa:16:3e:8b:d7:f7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2534f8b9-e832-4b78-ada4-e551429bdc75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '517aafb84156407c8672042097e3ef4f', 'neutron:revision_number': '9', 'neutron:security_group_ids': '572acc55-453a-444a-ab8d-a15e14283f7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927296e1-b389-4596-b9be-8cf735b93ca2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4f827284-f357-43c5-bdde-c69731b52914) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.803 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4f827284-f357-43c5-bdde-c69731b52914 in datapath a49b41b4-2559-4a22-a274-a6c7bbe75f2c unbound from our chassis
Oct 14 09:14:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.804 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a49b41b4-2559-4a22-a274-a6c7bbe75f2c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:14:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.806 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[35373295-bae5-4891-9f74-ca31da94dcfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.806 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c namespace which is not needed anymore
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.849 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:14:11 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000056.scope: Deactivated successfully.
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.850 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.851 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] No VIF found with MAC fa:16:3e:2d:6b:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.851 2 INFO nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Using config drive
Oct 14 09:14:11 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000056.scope: Consumed 7.169s CPU time.
Oct 14 09:14:11 compute-0 systemd-machined[214636]: Machine qemu-120-instance-00000056 terminated.
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.873 2 DEBUG nova.storage.rbd_utils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] rbd image 84080e43-a9f4-4b6a-889f-d76167ff715a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:11 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[355965]: [NOTICE]   (355969) : haproxy version is 2.8.14-c23fe91
Oct 14 09:14:11 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[355965]: [NOTICE]   (355969) : path to executable is /usr/sbin/haproxy
Oct 14 09:14:11 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[355965]: [WARNING]  (355969) : Exiting Master process...
Oct 14 09:14:11 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[355965]: [ALERT]    (355969) : Current worker (355971) exited with code 143 (Terminated)
Oct 14 09:14:11 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[355965]: [WARNING]  (355969) : All workers exited. Exiting... (0)
Oct 14 09:14:11 compute-0 systemd[1]: libpod-ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62.scope: Deactivated successfully.
Oct 14 09:14:11 compute-0 nova_compute[259627]: 2025-10-14 09:14:11.961 2 DEBUG nova.compute.manager [None req-f9282da8-119f-461e-93f7-f0c7d3345cc6 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:11 compute-0 podman[356317]: 2025-10-14 09:14:11.961956652 +0000 UTC m=+0.052037703 container died ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:14:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 372 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 9.4 MiB/s wr, 410 op/s
Oct 14 09:14:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b6024105aa89e38c50917ad3e414821ec8f7b4c448c18cb2e7568bb5ccc24ae-merged.mount: Deactivated successfully.
Oct 14 09:14:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62-userdata-shm.mount: Deactivated successfully.
Oct 14 09:14:11 compute-0 podman[356317]: 2025-10-14 09:14:11.998581385 +0000 UTC m=+0.088662446 container cleanup ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:14:12 compute-0 systemd[1]: libpod-conmon-ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62.scope: Deactivated successfully.
Oct 14 09:14:12 compute-0 podman[356360]: 2025-10-14 09:14:12.064348416 +0000 UTC m=+0.044366695 container remove ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:14:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:12.070 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6e94de-9e94-4fc2-8039-c93fa3932979]: (4, ('Tue Oct 14 09:14:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c (ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62)\ned73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62\nTue Oct 14 09:14:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c (ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62)\ned73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:12.071 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1824aae2-3898-458b-8c36-e879f42d837f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:12.072 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49b41b4-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:12 compute-0 kernel: tapa49b41b4-20: left promiscuous mode
Oct 14 09:14:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:12.101 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[df75f7af-e333-4cbb-ba68-644760774af3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:12.126 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f6482cc3-7574-446c-9fc4-8d753f9a02d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:12.127 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[270c65e4-4298-4afe-8dae-f83dc726d10d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:12.143 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eb37128e-9a22-4cbc-a6ef-fd6a232ae01d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702261, 'reachable_time': 39374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356381, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:12 compute-0 systemd[1]: run-netns-ovnmeta\x2da49b41b4\x2d2559\x2d4a22\x2da274\x2da6c7bbe75f2c.mount: Deactivated successfully.
Oct 14 09:14:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:12.148 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:14:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:12.148 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[32d2bf8f-f6b3-4e85-8054-8bd3fb859805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:12 compute-0 nova_compute[259627]: 2025-10-14 09:14:12.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:12 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3088650112' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:14:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Oct 14 09:14:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Oct 14 09:14:12 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.372 2 INFO nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Creating config drive at /var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a/disk.config
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.381 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_0uviuo5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.468 2 DEBUG nova.compute.manager [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Received event network-vif-unplugged-dc177f85-b331-40ec-b30f-1f667878bcb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.468 2 DEBUG oslo_concurrency.lockutils [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.469 2 DEBUG oslo_concurrency.lockutils [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.469 2 DEBUG oslo_concurrency.lockutils [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.470 2 DEBUG nova.compute.manager [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] No waiting events found dispatching network-vif-unplugged-dc177f85-b331-40ec-b30f-1f667878bcb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.470 2 DEBUG nova.compute.manager [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Received event network-vif-unplugged-dc177f85-b331-40ec-b30f-1f667878bcb8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.471 2 DEBUG nova.compute.manager [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Received event network-vif-plugged-dc177f85-b331-40ec-b30f-1f667878bcb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.471 2 DEBUG oslo_concurrency.lockutils [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.471 2 DEBUG oslo_concurrency.lockutils [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.472 2 DEBUG oslo_concurrency.lockutils [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.472 2 DEBUG nova.compute.manager [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] No waiting events found dispatching network-vif-plugged-dc177f85-b331-40ec-b30f-1f667878bcb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.472 2 WARNING nova.compute.manager [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Received unexpected event network-vif-plugged-dc177f85-b331-40ec-b30f-1f667878bcb8 for instance with vm_state active and task_state deleting.
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.551 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_0uviuo5" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:13 compute-0 ceph-mon[74249]: pgmap v1797: 305 pgs: 305 active+clean; 372 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 9.4 MiB/s wr, 410 op/s
Oct 14 09:14:13 compute-0 ceph-mon[74249]: osdmap e258: 3 total, 3 up, 3 in
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.615 2 DEBUG nova.storage.rbd_utils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] rbd image 84080e43-a9f4-4b6a-889f-d76167ff715a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.624 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a/disk.config 84080e43-a9f4-4b6a-889f-d76167ff715a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.698 2 DEBUG nova.compute.manager [req-1f931542-07a8-48c9-8331-16c9d290b164 req-33e59c50-3bd4-4998-a92f-d848baea6e2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-unplugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.700 2 DEBUG oslo_concurrency.lockutils [req-1f931542-07a8-48c9-8331-16c9d290b164 req-33e59c50-3bd4-4998-a92f-d848baea6e2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.700 2 DEBUG oslo_concurrency.lockutils [req-1f931542-07a8-48c9-8331-16c9d290b164 req-33e59c50-3bd4-4998-a92f-d848baea6e2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.701 2 DEBUG oslo_concurrency.lockutils [req-1f931542-07a8-48c9-8331-16c9d290b164 req-33e59c50-3bd4-4998-a92f-d848baea6e2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.701 2 DEBUG nova.compute.manager [req-1f931542-07a8-48c9-8331-16c9d290b164 req-33e59c50-3bd4-4998-a92f-d848baea6e2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-unplugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.701 2 WARNING nova.compute.manager [req-1f931542-07a8-48c9-8331-16c9d290b164 req-33e59c50-3bd4-4998-a92f-d848baea6e2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-unplugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state suspended and task_state None.
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.709 2 DEBUG nova.network.neutron [-] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.737 2 INFO nova.compute.manager [-] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Took 2.11 seconds to deallocate network for instance.
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.796 2 DEBUG oslo_concurrency.lockutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.797 2 DEBUG oslo_concurrency.lockutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.854 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a/disk.config 84080e43-a9f4-4b6a-889f-d76167ff715a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.855 2 INFO nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Deleting local config drive /var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a/disk.config because it was imported into RBD.
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.915 2 DEBUG nova.network.neutron [req-a00b9d15-59a8-43b8-8b81-2311516d2669 req-85d7d570-3f8f-4031-8c8d-85cc0ce6dc0c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Updated VIF entry in instance network info cache for port 1e3d49fe-52bd-40cb-ae1a-86eb664df473. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.917 2 DEBUG nova.network.neutron [req-a00b9d15-59a8-43b8-8b81-2311516d2669 req-85d7d570-3f8f-4031-8c8d-85cc0ce6dc0c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Updating instance_info_cache with network_info: [{"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.932 2 DEBUG oslo_concurrency.lockutils [req-a00b9d15-59a8-43b8-8b81-2311516d2669 req-85d7d570-3f8f-4031-8c8d-85cc0ce6dc0c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-84080e43-a9f4-4b6a-889f-d76167ff715a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:13 compute-0 kernel: tap1e3d49fe-52: entered promiscuous mode
Oct 14 09:14:13 compute-0 systemd-udevd[356190]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:14:13 compute-0 NetworkManager[44885]: <info>  [1760433253.9449] manager: (tap1e3d49fe-52): new Tun device (/org/freedesktop/NetworkManager/Devices/419)
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:13 compute-0 ovn_controller[152662]: 2025-10-14T09:14:13Z|01018|binding|INFO|Claiming lport 1e3d49fe-52bd-40cb-ae1a-86eb664df473 for this chassis.
Oct 14 09:14:13 compute-0 ovn_controller[152662]: 2025-10-14T09:14:13Z|01019|binding|INFO|1e3d49fe-52bd-40cb-ae1a-86eb664df473: Claiming fa:16:3e:2d:6b:bc 10.100.0.4
Oct 14 09:14:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:13.960 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:6b:bc 10.100.0.4'], port_security=['fa:16:3e:2d:6b:bc 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '84080e43-a9f4-4b6a-889f-d76167ff715a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8110a1ba-8e30-49ef-ba1c-c72228086a20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2de05cc126e24608be28a7d5dea18bf3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3f10ca33-82fc-4f36-9ded-7e23e5949e23', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3db415ba-b4a8-48a5-b3a3-5e4c9b11b067, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1e3d49fe-52bd-40cb-ae1a-86eb664df473) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:13.962 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1e3d49fe-52bd-40cb-ae1a-86eb664df473 in datapath 8110a1ba-8e30-49ef-ba1c-c72228086a20 bound to our chassis
Oct 14 09:14:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 372 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 6.8 MiB/s wr, 436 op/s
Oct 14 09:14:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:13.964 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8110a1ba-8e30-49ef-ba1c-c72228086a20
Oct 14 09:14:13 compute-0 NetworkManager[44885]: <info>  [1760433253.9681] device (tap1e3d49fe-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:14:13 compute-0 NetworkManager[44885]: <info>  [1760433253.9693] device (tap1e3d49fe-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:14:13 compute-0 nova_compute[259627]: 2025-10-14 09:14:13.977 2 DEBUG oslo_concurrency.processutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:13.985 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9633c3b4-044b-4303-8b78-cb7661e192be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:13.986 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8110a1ba-81 in ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:14:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:13.988 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8110a1ba-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:14:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:13.988 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d6128f9e-77ef-4c54-a00f-bf8c7afcbec6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:13.989 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2871d7-9749-4cfe-b133-d1d1dd876183]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:13 compute-0 ovn_controller[152662]: 2025-10-14T09:14:13Z|01020|binding|INFO|Setting lport 1e3d49fe-52bd-40cb-ae1a-86eb664df473 ovn-installed in OVS
Oct 14 09:14:13 compute-0 ovn_controller[152662]: 2025-10-14T09:14:13Z|01021|binding|INFO|Setting lport 1e3d49fe-52bd-40cb-ae1a-86eb664df473 up in Southbound
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.008 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed2ad5f-b3ab-402a-8d27-3fb863503f27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:14 compute-0 systemd-machined[214636]: New machine qemu-121-instance-00000062.
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.024 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aa32c0e1-b4f4-4949-b06e-924ccab85ff3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:14 compute-0 nova_compute[259627]: 2025-10-14 09:14:14.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:14 compute-0 systemd[1]: Started Virtual Machine qemu-121-instance-00000062.
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.069 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[884e66a6-c8ff-4747-9d8d-3e2c273f51ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.076 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0dba3c4-69b5-45ab-aa42-3151229d910c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:14 compute-0 NetworkManager[44885]: <info>  [1760433254.0780] manager: (tap8110a1ba-80): new Veth device (/org/freedesktop/NetworkManager/Devices/420)
Oct 14 09:14:14 compute-0 systemd-udevd[356447]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.125 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e8803951-6800-4017-951c-15ce51efaf7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.129 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bf49af67-9307-436a-841c-d71e6d74a1f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:14 compute-0 NetworkManager[44885]: <info>  [1760433254.1571] device (tap8110a1ba-80): carrier: link connected
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.165 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4157a261-6281-4e83-a306-363b7061e72a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.190 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[218cfbe6-d78e-4919-8c13-8d4bcf636d47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8110a1ba-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:5b:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 297], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703291, 'reachable_time': 38556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356495, 'error': None, 'target': 'ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.218 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[589eb77d-41c9-4b54-a050-c6b1d59f9184]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe85:5b8b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703291, 'tstamp': 703291}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356496, 'error': None, 'target': 'ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:14 compute-0 nova_compute[259627]: 2025-10-14 09:14:14.243 2 INFO nova.compute.manager [None req-3bcce2a9-c5d0-43ef-becb-35cfb9686988 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Get console output
Oct 14 09:14:14 compute-0 nova_compute[259627]: 2025-10-14 09:14:14.251 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.258 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c61e666f-55ad-447c-9cb3-323cc80b10ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8110a1ba-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:5b:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 297], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703291, 'reachable_time': 38556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356497, 'error': None, 'target': 'ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.304 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[de2f408d-530c-4764-bd9e-a9de44dc79c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.372 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c89bb7d8-1947-4007-9389-b1c2572e99d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.374 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8110a1ba-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.375 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.375 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8110a1ba-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:14 compute-0 nova_compute[259627]: 2025-10-14 09:14:14.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:14 compute-0 NetworkManager[44885]: <info>  [1760433254.3792] manager: (tap8110a1ba-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Oct 14 09:14:14 compute-0 kernel: tap8110a1ba-80: entered promiscuous mode
Oct 14 09:14:14 compute-0 nova_compute[259627]: 2025-10-14 09:14:14.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.388 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8110a1ba-80, col_values=(('external_ids', {'iface-id': '08780918-7f13-4df3-8cab-973d8a442035'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:14 compute-0 nova_compute[259627]: 2025-10-14 09:14:14.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:14 compute-0 ovn_controller[152662]: 2025-10-14T09:14:14Z|01022|binding|INFO|Releasing lport 08780918-7f13-4df3-8cab-973d8a442035 from this chassis (sb_readonly=0)
Oct 14 09:14:14 compute-0 nova_compute[259627]: 2025-10-14 09:14:14.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.461 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8110a1ba-8e30-49ef-ba1c-c72228086a20.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8110a1ba-8e30-49ef-ba1c-c72228086a20.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.462 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3c5867-51e9-4aa6-8988-e99a0f9ada70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.463 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-8110a1ba-8e30-49ef-ba1c-c72228086a20
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/8110a1ba-8e30-49ef-ba1c-c72228086a20.pid.haproxy
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 8110a1ba-8e30-49ef-ba1c-c72228086a20
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:14:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.464 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20', 'env', 'PROCESS_TAG=haproxy-8110a1ba-8e30-49ef-ba1c-c72228086a20', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8110a1ba-8e30-49ef-ba1c-c72228086a20.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:14:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:14:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4258488040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:14 compute-0 nova_compute[259627]: 2025-10-14 09:14:14.496 2 DEBUG oslo_concurrency.processutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:14 compute-0 nova_compute[259627]: 2025-10-14 09:14:14.502 2 DEBUG nova.compute.provider_tree [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:14:14 compute-0 nova_compute[259627]: 2025-10-14 09:14:14.522 2 DEBUG nova.scheduler.client.report [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:14:14 compute-0 nova_compute[259627]: 2025-10-14 09:14:14.554 2 DEBUG oslo_concurrency.lockutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:14 compute-0 nova_compute[259627]: 2025-10-14 09:14:14.591 2 INFO nova.scheduler.client.report [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Deleted allocations for instance d5de3978-2377-4d8e-aeaf-c952912130a2
Oct 14 09:14:14 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4258488040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:14 compute-0 nova_compute[259627]: 2025-10-14 09:14:14.678 2 DEBUG oslo_concurrency.lockutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:14 compute-0 podman[356577]: 2025-10-14 09:14:14.845286095 +0000 UTC m=+0.057931599 container create 9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:14:14 compute-0 systemd[1]: Started libpod-conmon-9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5.scope.
Oct 14 09:14:14 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c703b8ad82eb6af106c099e66a521ffd9ab04da878e5ff9418cd428162b5e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:14:14 compute-0 podman[356577]: 2025-10-14 09:14:14.815155413 +0000 UTC m=+0.027800957 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:14:14 compute-0 podman[356577]: 2025-10-14 09:14:14.916690915 +0000 UTC m=+0.129336439 container init 9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:14:14 compute-0 podman[356577]: 2025-10-14 09:14:14.927139333 +0000 UTC m=+0.139784837 container start 9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 09:14:14 compute-0 neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20[356594]: [NOTICE]   (356598) : New worker (356600) forked
Oct 14 09:14:14 compute-0 neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20[356594]: [NOTICE]   (356598) : Loading success.
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.315 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433255.3144155, 84080e43-a9f4-4b6a-889f-d76167ff715a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.315 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] VM Started (Lifecycle Event)
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.337 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.342 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433255.3146741, 84080e43-a9f4-4b6a-889f-d76167ff715a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.343 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] VM Paused (Lifecycle Event)
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.360 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.364 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.382 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:14:15 compute-0 ceph-mon[74249]: pgmap v1799: 305 pgs: 305 active+clean; 372 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 6.8 MiB/s wr, 436 op/s
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.632 2 INFO nova.compute.manager [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Resuming
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.633 2 DEBUG nova.objects.instance [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'flavor' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.670 2 DEBUG oslo_concurrency.lockutils [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.670 2 DEBUG oslo_concurrency.lockutils [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquired lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.671 2 DEBUG nova.network.neutron [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.689 2 DEBUG nova.compute.manager [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Received event network-vif-plugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.690 2 DEBUG oslo_concurrency.lockutils [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.691 2 DEBUG oslo_concurrency.lockutils [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.691 2 DEBUG oslo_concurrency.lockutils [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.691 2 DEBUG nova.compute.manager [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Processing event network-vif-plugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.692 2 DEBUG nova.compute.manager [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Received event network-vif-plugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.692 2 DEBUG oslo_concurrency.lockutils [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.693 2 DEBUG oslo_concurrency.lockutils [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.693 2 DEBUG oslo_concurrency.lockutils [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.694 2 DEBUG nova.compute.manager [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] No waiting events found dispatching network-vif-plugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.694 2 WARNING nova.compute.manager [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Received unexpected event network-vif-plugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 for instance with vm_state building and task_state spawning.
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.696 2 DEBUG nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.704 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433255.7037997, 84080e43-a9f4-4b6a-889f-d76167ff715a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.704 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] VM Resumed (Lifecycle Event)
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.708 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.713 2 INFO nova.virt.libvirt.driver [-] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Instance spawned successfully.
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.713 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.728 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.737 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.743 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.743 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.744 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.745 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.746 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.747 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.775 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.810 2 INFO nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Took 9.31 seconds to spawn the instance on the hypervisor.
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.811 2 DEBUG nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.830 2 DEBUG nova.compute.manager [req-952ef1f5-4b96-4f40-b7e6-6c72667533e7 req-469077bd-bfbd-4f04-b64a-4a21967930b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Received event network-vif-deleted-dc177f85-b331-40ec-b30f-1f667878bcb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.831 2 DEBUG nova.compute.manager [req-952ef1f5-4b96-4f40-b7e6-6c72667533e7 req-469077bd-bfbd-4f04-b64a-4a21967930b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.831 2 DEBUG oslo_concurrency.lockutils [req-952ef1f5-4b96-4f40-b7e6-6c72667533e7 req-469077bd-bfbd-4f04-b64a-4a21967930b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.832 2 DEBUG oslo_concurrency.lockutils [req-952ef1f5-4b96-4f40-b7e6-6c72667533e7 req-469077bd-bfbd-4f04-b64a-4a21967930b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.832 2 DEBUG oslo_concurrency.lockutils [req-952ef1f5-4b96-4f40-b7e6-6c72667533e7 req-469077bd-bfbd-4f04-b64a-4a21967930b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.833 2 DEBUG nova.compute.manager [req-952ef1f5-4b96-4f40-b7e6-6c72667533e7 req-469077bd-bfbd-4f04-b64a-4a21967930b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.833 2 WARNING nova.compute.manager [req-952ef1f5-4b96-4f40-b7e6-6c72667533e7 req-469077bd-bfbd-4f04-b64a-4a21967930b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state suspended and task_state resuming.
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.884 2 INFO nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Took 10.42 seconds to build instance.
Oct 14 09:14:15 compute-0 nova_compute[259627]: 2025-10-14 09:14:15.902 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 326 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.7 MiB/s wr, 318 op/s
Oct 14 09:14:16 compute-0 nova_compute[259627]: 2025-10-14 09:14:16.168 2 INFO nova.compute.manager [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Rebuilding instance
Oct 14 09:14:16 compute-0 nova_compute[259627]: 2025-10-14 09:14:16.442 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'trusted_certs' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:16 compute-0 nova_compute[259627]: 2025-10-14 09:14:16.461 2 DEBUG nova.compute.manager [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:16 compute-0 nova_compute[259627]: 2025-10-14 09:14:16.526 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_requests' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:16 compute-0 nova_compute[259627]: 2025-10-14 09:14:16.537 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_devices' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:16 compute-0 nova_compute[259627]: 2025-10-14 09:14:16.547 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'resources' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:16 compute-0 nova_compute[259627]: 2025-10-14 09:14:16.557 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'migration_context' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:16 compute-0 nova_compute[259627]: 2025-10-14 09:14:16.566 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:14:16 compute-0 nova_compute[259627]: 2025-10-14 09:14:16.570 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:14:16 compute-0 nova_compute[259627]: 2025-10-14 09:14:16.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.479 2 DEBUG nova.network.neutron [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating instance_info_cache with network_info: [{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.495 2 DEBUG oslo_concurrency.lockutils [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Releasing lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.502 2 DEBUG nova.virt.libvirt.vif [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:12:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-17250352',display_name='tempest-ServersNegativeTestJSON-server-17250352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-17250352',id=86,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:14:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-rj00rja0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegativeTestJSON-1475695514-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:14:12Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=2534f8b9-e832-4b78-ada4-e551429bdc75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.503 2 DEBUG nova.network.os_vif_util [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.505 2 DEBUG nova.network.os_vif_util [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.506 2 DEBUG os_vif [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.508 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.509 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f827284-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.514 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f827284-f3, col_values=(('external_ids', {'iface-id': '4f827284-f357-43c5-bdde-c69731b52914', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:d7:f7', 'vm-uuid': '2534f8b9-e832-4b78-ada4-e551429bdc75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.515 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.516 2 INFO os_vif [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3')
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.547 2 DEBUG nova.objects.instance [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'numa_topology' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:17 compute-0 ceph-mon[74249]: pgmap v1800: 305 pgs: 305 active+clean; 326 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.7 MiB/s wr, 318 op/s
Oct 14 09:14:17 compute-0 kernel: tap4f827284-f3: entered promiscuous mode
Oct 14 09:14:17 compute-0 NetworkManager[44885]: <info>  [1760433257.6211] manager: (tap4f827284-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/422)
Oct 14 09:14:17 compute-0 ovn_controller[152662]: 2025-10-14T09:14:17Z|01023|binding|INFO|Claiming lport 4f827284-f357-43c5-bdde-c69731b52914 for this chassis.
Oct 14 09:14:17 compute-0 ovn_controller[152662]: 2025-10-14T09:14:17Z|01024|binding|INFO|4f827284-f357-43c5-bdde-c69731b52914: Claiming fa:16:3e:8b:d7:f7 10.100.0.7
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.635 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:d7:f7 10.100.0.7'], port_security=['fa:16:3e:8b:d7:f7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2534f8b9-e832-4b78-ada4-e551429bdc75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '517aafb84156407c8672042097e3ef4f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '572acc55-453a-444a-ab8d-a15e14283f7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927296e1-b389-4596-b9be-8cf735b93ca2, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4f827284-f357-43c5-bdde-c69731b52914) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.637 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4f827284-f357-43c5-bdde-c69731b52914 in datapath a49b41b4-2559-4a22-a274-a6c7bbe75f2c bound to our chassis
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.639 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a49b41b4-2559-4a22-a274-a6c7bbe75f2c
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.652 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4671a0e6-c214-4da5-9d3d-9faa9d7574d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.652 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa49b41b4-21 in ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.655 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa49b41b4-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.655 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[81e39661-5507-4259-b282-3ab34431c9d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.655 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2422212f-a6e7-4475-9bec-d5aeca0db21a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:17 compute-0 ovn_controller[152662]: 2025-10-14T09:14:17Z|01025|binding|INFO|Setting lport 4f827284-f357-43c5-bdde-c69731b52914 ovn-installed in OVS
Oct 14 09:14:17 compute-0 ovn_controller[152662]: 2025-10-14T09:14:17Z|01026|binding|INFO|Setting lport 4f827284-f357-43c5-bdde-c69731b52914 up in Southbound
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.672 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5aee29-70ee-4606-a3ee-ebd285705877]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:17 compute-0 systemd-machined[214636]: New machine qemu-122-instance-00000056.
Oct 14 09:14:17 compute-0 systemd[1]: Started Virtual Machine qemu-122-instance-00000056.
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.688 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d781e5-9036-455a-91b3-827f593954d8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:17 compute-0 systemd-udevd[356626]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:14:17 compute-0 NetworkManager[44885]: <info>  [1760433257.7192] device (tap4f827284-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:14:17 compute-0 NetworkManager[44885]: <info>  [1760433257.7208] device (tap4f827284-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.726 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a99029db-b9f0-4ef9-a332-2207243e03f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:17 compute-0 NetworkManager[44885]: <info>  [1760433257.7364] manager: (tapa49b41b4-20): new Veth device (/org/freedesktop/NetworkManager/Devices/423)
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.734 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d395c3da-30ea-476c-b933-b891ce5909e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.775 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b7dfb07c-c2a8-4b9d-adc2-3a0335e79a46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.778 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[53608889-74f4-4209-b597-7481a9e0ef10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:17 compute-0 NetworkManager[44885]: <info>  [1760433257.8029] device (tapa49b41b4-20): carrier: link connected
Oct 14 09:14:17 compute-0 podman[356631]: 2025-10-14 09:14:17.805954405 +0000 UTC m=+0.075525263 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.813 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[eb25837f-35d9-4c7f-a44b-fdbd41c96f9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:17 compute-0 podman[356629]: 2025-10-14 09:14:17.821481848 +0000 UTC m=+0.096865889 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.839 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[74452377-bfc9-44f0-92c4-cd68c9180e7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa49b41b4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:5b:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703656, 'reachable_time': 25331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356693, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.856 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5895f24e-ed37-441f-b960-b4dad4e4ecdf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:5b6b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703656, 'tstamp': 703656}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356694, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.873 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[409a6b31-329c-4ca2-8524-ea0cbf985d39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa49b41b4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:5b:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703656, 'reachable_time': 25331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356695, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.912 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6fff46ed-a0ea-4040-af17-32732a2ef22b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 326 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.7 MiB/s wr, 318 op/s
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.984 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[32ff6e2a-a0b6-4bf9-b3cf-39d0c723942d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.986 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49b41b4-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.986 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.987 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa49b41b4-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:17 compute-0 NetworkManager[44885]: <info>  [1760433257.9900] manager: (tapa49b41b4-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:17 compute-0 kernel: tapa49b41b4-20: entered promiscuous mode
Oct 14 09:14:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.997 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa49b41b4-20, col_values=(('external_ids', {'iface-id': '61fe5571-a8eb-446a-8c4c-1f6f6758b146'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:17 compute-0 ovn_controller[152662]: 2025-10-14T09:14:17Z|01027|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 09:14:17 compute-0 nova_compute[259627]: 2025-10-14 09:14:17.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:18.033 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:18.034 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fb1a5533-4860-4e93-b8cd-2238e09c9d29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:18.035 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-a49b41b4-2559-4a22-a274-a6c7bbe75f2c
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.pid.haproxy
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID a49b41b4-2559-4a22-a274-a6c7bbe75f2c
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:18.037 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'env', 'PROCESS_TAG=haproxy-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:18 compute-0 podman[356769]: 2025-10-14 09:14:18.403839461 +0000 UTC m=+0.048878466 container create 003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:14:18 compute-0 systemd[1]: Started libpod-conmon-003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd.scope.
Oct 14 09:14:18 compute-0 podman[356769]: 2025-10-14 09:14:18.381645744 +0000 UTC m=+0.026684759 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:14:18 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6781cf622e873ac119e8ad8166fc07516eec1ccc73d027486934f81c24ca8b1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:14:18 compute-0 podman[356769]: 2025-10-14 09:14:18.507500576 +0000 UTC m=+0.152539591 container init 003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 14 09:14:18 compute-0 podman[356769]: 2025-10-14 09:14:18.514360975 +0000 UTC m=+0.159399970 container start 003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:14:18 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[356784]: [NOTICE]   (356788) : New worker (356790) forked
Oct 14 09:14:18 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[356784]: [NOTICE]   (356788) : Loading success.
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.555 2 DEBUG nova.compute.manager [req-9843a03e-883d-4b79-93bf-c015d6df61fb req-658780d6-0544-4a76-b023-80133ff9643f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.556 2 DEBUG oslo_concurrency.lockutils [req-9843a03e-883d-4b79-93bf-c015d6df61fb req-658780d6-0544-4a76-b023-80133ff9643f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.557 2 DEBUG oslo_concurrency.lockutils [req-9843a03e-883d-4b79-93bf-c015d6df61fb req-658780d6-0544-4a76-b023-80133ff9643f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.557 2 DEBUG oslo_concurrency.lockutils [req-9843a03e-883d-4b79-93bf-c015d6df61fb req-658780d6-0544-4a76-b023-80133ff9643f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.558 2 DEBUG nova.compute.manager [req-9843a03e-883d-4b79-93bf-c015d6df61fb req-658780d6-0544-4a76-b023-80133ff9643f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.558 2 WARNING nova.compute.manager [req-9843a03e-883d-4b79-93bf-c015d6df61fb req-658780d6-0544-4a76-b023-80133ff9643f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state suspended and task_state resuming.
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.642 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "17db7f38-479a-4d56-9424-7f5ab695ccea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.643 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.648 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 2534f8b9-e832-4b78-ada4-e551429bdc75 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.649 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433258.6485362, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.649 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Started (Lifecycle Event)
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.665 2 DEBUG nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.669 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.678 2 DEBUG nova.compute.manager [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.679 2 DEBUG nova.objects.instance [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'pci_devices' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.682 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.701 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.702 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433258.664729, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.702 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Resumed (Lifecycle Event)
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.705 2 INFO nova.virt.libvirt.driver [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance running successfully.
Oct 14 09:14:18 compute-0 virtqemud[259351]: argument unsupported: QEMU guest agent is not configured
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.708 2 DEBUG nova.virt.libvirt.guest [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.709 2 DEBUG nova.compute.manager [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.733 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.737 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.748 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.748 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.755 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.756 2 INFO nova.compute.claims [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.759 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 14 09:14:18 compute-0 kernel: tap7103ce4a-69 (unregistering): left promiscuous mode
Oct 14 09:14:18 compute-0 NetworkManager[44885]: <info>  [1760433258.8311] device (tap7103ce4a-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:14:18 compute-0 ovn_controller[152662]: 2025-10-14T09:14:18Z|01028|binding|INFO|Releasing lport 7103ce4a-69e8-454b-aed3-251ecb109232 from this chassis (sb_readonly=0)
Oct 14 09:14:18 compute-0 ovn_controller[152662]: 2025-10-14T09:14:18Z|01029|binding|INFO|Setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 down in Southbound
Oct 14 09:14:18 compute-0 ovn_controller[152662]: 2025-10-14T09:14:18Z|01030|binding|INFO|Removing iface tap7103ce4a-69 ovn-installed in OVS
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:18.849 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:3c:de 10.100.0.5'], port_security=['fa:16:3e:9d:3c:de 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b595141f-123e-4250-bfec-888d866fd0c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-563aa000-400f-4c19-ba83-9377cc50d29f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '97c0b0e2-9440-40e7-a61b-2eb79520e4e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e572d4-df42-4a93-ac49-d93e0906a5ee, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7103ce4a-69e8-454b-aed3-251ecb109232) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:18.850 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7103ce4a-69e8-454b-aed3-251ecb109232 in datapath 563aa000-400f-4c19-ba83-9377cc50d29f unbound from our chassis
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:18.852 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 563aa000-400f-4c19-ba83-9377cc50d29f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:18.853 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0521bca1-82f4-424a-83b2-0e43cf7c8308]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:18.854 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f namespace which is not needed anymore
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:18 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000060.scope: Deactivated successfully.
Oct 14 09:14:18 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000060.scope: Consumed 13.415s CPU time.
Oct 14 09:14:18 compute-0 systemd-machined[214636]: Machine qemu-118-instance-00000060 terminated.
Oct 14 09:14:18 compute-0 nova_compute[259627]: 2025-10-14 09:14:18.943 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:19 compute-0 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[354908]: [NOTICE]   (354918) : haproxy version is 2.8.14-c23fe91
Oct 14 09:14:19 compute-0 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[354908]: [NOTICE]   (354918) : path to executable is /usr/sbin/haproxy
Oct 14 09:14:19 compute-0 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[354908]: [WARNING]  (354918) : Exiting Master process...
Oct 14 09:14:19 compute-0 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[354908]: [WARNING]  (354918) : Exiting Master process...
Oct 14 09:14:19 compute-0 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[354908]: [ALERT]    (354918) : Current worker (354920) exited with code 143 (Terminated)
Oct 14 09:14:19 compute-0 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[354908]: [WARNING]  (354918) : All workers exited. Exiting... (0)
Oct 14 09:14:19 compute-0 systemd[1]: libpod-a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454.scope: Deactivated successfully.
Oct 14 09:14:19 compute-0 conmon[354908]: conmon a15e176523bc2c68be72 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454.scope/container/memory.events
Oct 14 09:14:19 compute-0 podman[356819]: 2025-10-14 09:14:19.030445164 +0000 UTC m=+0.054850072 container died a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:14:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e223c53edf6cce1508ede20bb72625545cc9c2f087f47ad069c77aa13b5410e4-merged.mount: Deactivated successfully.
Oct 14 09:14:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454-userdata-shm.mount: Deactivated successfully.
Oct 14 09:14:19 compute-0 podman[356819]: 2025-10-14 09:14:19.083931033 +0000 UTC m=+0.108335951 container cleanup a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 09:14:19 compute-0 systemd[1]: libpod-conmon-a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454.scope: Deactivated successfully.
Oct 14 09:14:19 compute-0 podman[356876]: 2025-10-14 09:14:19.170117407 +0000 UTC m=+0.051345917 container remove a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:14:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:19.176 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d191437f-9713-476c-90c1-a17cb8775e75]: (4, ('Tue Oct 14 09:14:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f (a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454)\na15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454\nTue Oct 14 09:14:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f (a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454)\na15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:19.178 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[57925bc1-d113-484c-90ea-9ab220d2c12c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:19.180 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap563aa000-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:19 compute-0 kernel: tap563aa000-40: left promiscuous mode
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:19.214 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb2be5b-dff6-4ffb-b794-c395f57d5b2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:19.233 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[80fedbda-a921-4bb1-abd7-d41fb4c61a88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:19.234 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[00c8f34e-38c4-44c3-8af5-079e58b50230]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:19.252 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1a79bbbc-29c5-43ef-b20b-0fd54a5ab4e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701284, 'reachable_time': 32111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356895, 'error': None, 'target': 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d563aa000\x2d400f\x2d4c19\x2dba83\x2d9377cc50d29f.mount: Deactivated successfully.
Oct 14 09:14:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:19.258 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:14:19 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:19.259 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[064e7ffd-1507-4fa7-9e31-e978c946bdb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:14:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1530003336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.370 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.376 2 DEBUG nova.compute.provider_tree [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.398 2 DEBUG nova.scheduler.client.report [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.421 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.422 2 DEBUG nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.474 2 DEBUG nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.475 2 DEBUG nova.network.neutron [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.496 2 INFO nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.519 2 DEBUG nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.600 2 INFO nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance shutdown successfully after 3 seconds.
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.606 2 INFO nova.virt.libvirt.driver [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance destroyed successfully.
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.610 2 INFO nova.virt.libvirt.driver [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance destroyed successfully.
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.611 2 DEBUG nova.virt.libvirt.vif [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1399788817',display_name='tempest-TestNetworkAdvancedServerOps-server-1399788817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1399788817',id=96,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKu+EdNTbLcJJySiqS/IEqa/tEPsTFHNnnibtN2r3Vh53iyUeSuOyPo0wb3WDr0n5pX4AT4Pz90bVLkNIFNLc4XvdDhQRiV0WmHkT7tU8LMbcG0FpGnJS7D9bMgie0zW1g==',key_name='tempest-TestNetworkAdvancedServerOps-1640438953',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:13:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-qh9caax0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:15Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=b595141f-123e-4250-bfec-888d866fd0c6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.612 2 DEBUG nova.network.os_vif_util [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.613 2 DEBUG nova.network.os_vif_util [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.613 2 DEBUG os_vif [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7103ce4a-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.625 2 INFO os_vif [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69')
Oct 14 09:14:19 compute-0 ceph-mon[74249]: pgmap v1801: 305 pgs: 305 active+clean; 326 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.7 MiB/s wr, 318 op/s
Oct 14 09:14:19 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1530003336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.644 2 DEBUG nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.646 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.647 2 INFO nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Creating image(s)
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.667 2 DEBUG nova.storage.rbd_utils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 17db7f38-479a-4d56-9424-7f5ab695ccea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.695 2 DEBUG nova.storage.rbd_utils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 17db7f38-479a-4d56-9424-7f5ab695ccea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.721 2 DEBUG nova.storage.rbd_utils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 17db7f38-479a-4d56-9424-7f5ab695ccea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.725 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.768 2 DEBUG nova.policy [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a287ef08fc5c4f218bf06cd2c7ed021e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.810 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.811 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.812 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.812 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.839 2 DEBUG nova.storage.rbd_utils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 17db7f38-479a-4d56-9424-7f5ab695ccea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:19 compute-0 nova_compute[259627]: 2025-10-14 09:14:19.843 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 17db7f38-479a-4d56-9424-7f5ab695ccea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 326 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.7 MiB/s wr, 318 op/s
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.139 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 17db7f38-479a-4d56-9424-7f5ab695ccea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.214 2 DEBUG nova.storage.rbd_utils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] resizing rbd image 17db7f38-479a-4d56-9424-7f5ab695ccea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.253 2 INFO nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Deleting instance files /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6_del
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.254 2 INFO nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Deletion of /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6_del complete
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.317 2 DEBUG nova.objects.instance [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'migration_context' on Instance uuid 17db7f38-479a-4d56-9424-7f5ab695ccea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.330 2 DEBUG nova.network.neutron [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Successfully created port: b162ed75-30c8-4d39-97d3-7baa4c970ef6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.333 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.333 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Ensure instance console log exists: /var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.334 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.334 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.335 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.409 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.410 2 INFO nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Creating image(s)
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.430 2 DEBUG nova.storage.rbd_utils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.453 2 DEBUG nova.storage.rbd_utils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.473 2 DEBUG nova.storage.rbd_utils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.475 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.579 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.581 2 DEBUG oslo_concurrency.lockutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.582 2 DEBUG oslo_concurrency.lockutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.582 2 DEBUG oslo_concurrency.lockutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.611 2 DEBUG nova.storage.rbd_utils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.616 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf b595141f-123e-4250-bfec-888d866fd0c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.669 2 DEBUG nova.compute.manager [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.670 2 DEBUG oslo_concurrency.lockutils [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.671 2 DEBUG oslo_concurrency.lockutils [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.671 2 DEBUG oslo_concurrency.lockutils [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.672 2 DEBUG nova.compute.manager [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.672 2 WARNING nova.compute.manager [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state active and task_state None.
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.673 2 DEBUG nova.compute.manager [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-unplugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.673 2 DEBUG oslo_concurrency.lockutils [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.674 2 DEBUG oslo_concurrency.lockutils [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.674 2 DEBUG oslo_concurrency.lockutils [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.675 2 DEBUG nova.compute.manager [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-unplugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.675 2 WARNING nova.compute.manager [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-unplugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state active and task_state rebuild_spawning.
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.675 2 DEBUG nova.compute.manager [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.676 2 DEBUG oslo_concurrency.lockutils [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.676 2 DEBUG oslo_concurrency.lockutils [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.677 2 DEBUG oslo_concurrency.lockutils [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.677 2 DEBUG nova.compute.manager [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.678 2 WARNING nova.compute.manager [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state active and task_state rebuild_spawning.
Oct 14 09:14:20 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 14 09:14:20 compute-0 nova_compute[259627]: 2025-10-14 09:14:20.897 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf b595141f-123e-4250-bfec-888d866fd0c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.000 2 DEBUG oslo_concurrency.lockutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquiring lock "84080e43-a9f4-4b6a-889f-d76167ff715a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.001 2 DEBUG oslo_concurrency.lockutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.001 2 DEBUG oslo_concurrency.lockutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquiring lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.002 2 DEBUG oslo_concurrency.lockutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.002 2 DEBUG oslo_concurrency.lockutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.005 2 INFO nova.compute.manager [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Terminating instance
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.007 2 DEBUG nova.compute.manager [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.013 2 DEBUG nova.storage.rbd_utils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] resizing rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:14:21 compute-0 kernel: tap1e3d49fe-52 (unregistering): left promiscuous mode
Oct 14 09:14:21 compute-0 NetworkManager[44885]: <info>  [1760433261.1079] device (tap1e3d49fe-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:14:21 compute-0 ovn_controller[152662]: 2025-10-14T09:14:21Z|01031|binding|INFO|Releasing lport 1e3d49fe-52bd-40cb-ae1a-86eb664df473 from this chassis (sb_readonly=0)
Oct 14 09:14:21 compute-0 ovn_controller[152662]: 2025-10-14T09:14:21Z|01032|binding|INFO|Setting lport 1e3d49fe-52bd-40cb-ae1a-86eb664df473 down in Southbound
Oct 14 09:14:21 compute-0 ovn_controller[152662]: 2025-10-14T09:14:21Z|01033|binding|INFO|Removing iface tap1e3d49fe-52 ovn-installed in OVS
Oct 14 09:14:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.130 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:6b:bc 10.100.0.4'], port_security=['fa:16:3e:2d:6b:bc 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '84080e43-a9f4-4b6a-889f-d76167ff715a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8110a1ba-8e30-49ef-ba1c-c72228086a20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2de05cc126e24608be28a7d5dea18bf3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3f10ca33-82fc-4f36-9ded-7e23e5949e23', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3db415ba-b4a8-48a5-b3a3-5e4c9b11b067, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1e3d49fe-52bd-40cb-ae1a-86eb664df473) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.131 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1e3d49fe-52bd-40cb-ae1a-86eb664df473 in datapath 8110a1ba-8e30-49ef-ba1c-c72228086a20 unbound from our chassis
Oct 14 09:14:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.133 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8110a1ba-8e30-49ef-ba1c-c72228086a20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:14:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.134 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[685fd784-f956-486b-a50f-b493ef3bb11c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.135 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20 namespace which is not needed anymore
Oct 14 09:14:21 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000062.scope: Deactivated successfully.
Oct 14 09:14:21 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000062.scope: Consumed 6.499s CPU time.
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:21 compute-0 systemd-machined[214636]: Machine qemu-121-instance-00000062 terminated.
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.186 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.187 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Ensure instance console log exists: /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.187 2 DEBUG oslo_concurrency.lockutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.188 2 DEBUG oslo_concurrency.lockutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.188 2 DEBUG oslo_concurrency.lockutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.191 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Start _get_guest_xml network_info=[{"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.196 2 WARNING nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.205 2 DEBUG nova.virt.libvirt.host [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.207 2 DEBUG nova.virt.libvirt.host [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.210 2 DEBUG nova.virt.libvirt.host [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.210 2 DEBUG nova.virt.libvirt.host [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.211 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.211 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.212 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.212 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.212 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.212 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.213 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.213 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.213 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.214 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.214 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.214 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.215 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'vcpu_model' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.236 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:21 compute-0 neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20[356594]: [NOTICE]   (356598) : haproxy version is 2.8.14-c23fe91
Oct 14 09:14:21 compute-0 neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20[356594]: [NOTICE]   (356598) : path to executable is /usr/sbin/haproxy
Oct 14 09:14:21 compute-0 neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20[356594]: [WARNING]  (356598) : Exiting Master process...
Oct 14 09:14:21 compute-0 neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20[356594]: [WARNING]  (356598) : Exiting Master process...
Oct 14 09:14:21 compute-0 neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20[356594]: [ALERT]    (356598) : Current worker (356600) exited with code 143 (Terminated)
Oct 14 09:14:21 compute-0 neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20[356594]: [WARNING]  (356598) : All workers exited. Exiting... (0)
Oct 14 09:14:21 compute-0 systemd[1]: libpod-9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5.scope: Deactivated successfully.
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:21 compute-0 podman[357271]: 2025-10-14 09:14:21.295445608 +0000 UTC m=+0.052348851 container died 9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.308 2 INFO nova.virt.libvirt.driver [-] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Instance destroyed successfully.
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.311 2 DEBUG nova.objects.instance [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lazy-loading 'resources' on Instance uuid 84080e43-a9f4-4b6a-889f-d76167ff715a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.331 2 DEBUG nova.virt.libvirt.vif [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:14:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-250281827',display_name='tempest-ServerMetadataTestJSON-server-250281827',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-250281827',id=98,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:14:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2de05cc126e24608be28a7d5dea18bf3',ramdisk_id='',reservation_id='r-aqxl060k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-10451963',owner_user_name='tempest-ServerMetadataTestJSON-10451963-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:14:20Z,user_data=None,user_id='559a8ea4f81141efa5e11da9b174482d',uuid=84080e43-a9f4-4b6a-889f-d76167ff715a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.332 2 DEBUG nova.network.os_vif_util [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Converting VIF {"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.334 2 DEBUG nova.network.os_vif_util [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:6b:bc,bridge_name='br-int',has_traffic_filtering=True,id=1e3d49fe-52bd-40cb-ae1a-86eb664df473,network=Network(8110a1ba-8e30-49ef-ba1c-c72228086a20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e3d49fe-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.335 2 DEBUG os_vif [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:6b:bc,bridge_name='br-int',has_traffic_filtering=True,id=1e3d49fe-52bd-40cb-ae1a-86eb664df473,network=Network(8110a1ba-8e30-49ef-ba1c-c72228086a20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e3d49fe-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:14:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5-userdata-shm.mount: Deactivated successfully.
Oct 14 09:14:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8c703b8ad82eb6af106c099e66a521ffd9ab04da878e5ff9418cd428162b5e4-merged.mount: Deactivated successfully.
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.345 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3d49fe-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.352 2 DEBUG nova.network.neutron [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Successfully updated port: b162ed75-30c8-4d39-97d3-7baa4c970ef6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:14:21 compute-0 podman[357271]: 2025-10-14 09:14:21.354308579 +0000 UTC m=+0.111211842 container cleanup 9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.355 2 INFO os_vif [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:6b:bc,bridge_name='br-int',has_traffic_filtering=True,id=1e3d49fe-52bd-40cb-ae1a-86eb664df473,network=Network(8110a1ba-8e30-49ef-ba1c-c72228086a20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e3d49fe-52')
Oct 14 09:14:21 compute-0 systemd[1]: libpod-conmon-9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5.scope: Deactivated successfully.
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.378 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "refresh_cache-17db7f38-479a-4d56-9424-7f5ab695ccea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.379 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquired lock "refresh_cache-17db7f38-479a-4d56-9424-7f5ab695ccea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.379 2 DEBUG nova.network.neutron [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:14:21 compute-0 podman[357314]: 2025-10-14 09:14:21.429112283 +0000 UTC m=+0.050689720 container remove 9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:14:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.441 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[50ddf6de-8228-4d9f-b85c-ffffba104511]: (4, ('Tue Oct 14 09:14:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20 (9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5)\n9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5\nTue Oct 14 09:14:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20 (9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5)\n9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.444 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f41144ac-d43d-487a-8c1f-cf2fe7fd4886]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.445 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8110a1ba-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:21 compute-0 kernel: tap8110a1ba-80: left promiscuous mode
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.453 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c984381f-290c-48a3-a38c-e25072c68fcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.476 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eb0777f3-c2be-4f17-8fed-ebfb789c2375]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.477 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e75c9e-4b71-4e27-8db6-02cf87fc0d28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.493 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[40dd8064-b3c8-4b42-952f-81812cc14da1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703282, 'reachable_time': 20477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357360, 'error': None, 'target': 'ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d8110a1ba\x2d8e30\x2d49ef\x2dba1c\x2dc72228086a20.mount: Deactivated successfully.
Oct 14 09:14:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.499 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:14:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.499 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[c70f7e83-0740-4b5c-86c2-843b40282071]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.596 2 DEBUG nova.network.neutron [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:14:21 compute-0 ceph-mon[74249]: pgmap v1802: 305 pgs: 305 active+clean; 326 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.7 MiB/s wr, 318 op/s
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.663 2 DEBUG nova.compute.manager [req-8b81e394-89ab-4e24-81fd-f4041779c873 req-638b9957-8996-4cb0-a0b9-6359d0f79315 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Received event network-changed-b162ed75-30c8-4d39-97d3-7baa4c970ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.663 2 DEBUG nova.compute.manager [req-8b81e394-89ab-4e24-81fd-f4041779c873 req-638b9957-8996-4cb0-a0b9-6359d0f79315 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Refreshing instance network info cache due to event network-changed-b162ed75-30c8-4d39-97d3-7baa4c970ef6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.664 2 DEBUG oslo_concurrency.lockutils [req-8b81e394-89ab-4e24-81fd-f4041779c873 req-638b9957-8996-4cb0-a0b9-6359d0f79315 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-17db7f38-479a-4d56-9424-7f5ab695ccea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:14:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:14:21 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1012694589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.704 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.737 2 DEBUG nova.storage.rbd_utils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.743 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.810 2 INFO nova.virt.libvirt.driver [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Deleting instance files /var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a_del
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.812 2 INFO nova.virt.libvirt.driver [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Deletion of /var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a_del complete
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.874 2 INFO nova.compute.manager [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Took 0.87 seconds to destroy the instance on the hypervisor.
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.875 2 DEBUG oslo.service.loopingcall [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.875 2 DEBUG nova.compute.manager [-] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:14:21 compute-0 nova_compute[259627]: 2025-10-14 09:14:21.875 2 DEBUG nova.network.neutron [-] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:14:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 280 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 170 op/s
Oct 14 09:14:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:14:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/623412631' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.186 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.187 2 DEBUG nova.virt.libvirt.vif [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1399788817',display_name='tempest-TestNetworkAdvancedServerOps-server-1399788817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1399788817',id=96,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKu+EdNTbLcJJySiqS/IEqa/tEPsTFHNnnibtN2r3Vh53iyUeSuOyPo0wb3WDr0n5pX4AT4Pz90bVLkNIFNLc4XvdDhQRiV0WmHkT7tU8LMbcG0FpGnJS7D9bMgie0zW1g==',key_name='tempest-TestNetworkAdvancedServerOps-1640438953',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:13:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-qh9caax0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:20Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=b595141f-123e-4250-bfec-888d866fd0c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.188 2 DEBUG nova.network.os_vif_util [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.189 2 DEBUG nova.network.os_vif_util [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.191 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:14:22 compute-0 nova_compute[259627]:   <uuid>b595141f-123e-4250-bfec-888d866fd0c6</uuid>
Oct 14 09:14:22 compute-0 nova_compute[259627]:   <name>instance-00000060</name>
Oct 14 09:14:22 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:14:22 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:14:22 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1399788817</nova:name>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:14:21</nova:creationTime>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:14:22 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:14:22 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:14:22 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:14:22 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:14:22 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:14:22 compute-0 nova_compute[259627]:         <nova:user uuid="e992bcb79c4946a8985e3df25eb216ca">tempest-TestNetworkAdvancedServerOps-94788416-project-member</nova:user>
Oct 14 09:14:22 compute-0 nova_compute[259627]:         <nova:project uuid="2d24993a343a425dbddac7e32be0c86b">tempest-TestNetworkAdvancedServerOps-94788416</nova:project>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="e2368e3e-f504-40e6-a9d3-67df18c845bb"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:14:22 compute-0 nova_compute[259627]:         <nova:port uuid="7103ce4a-69e8-454b-aed3-251ecb109232">
Oct 14 09:14:22 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:14:22 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:14:22 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <system>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <entry name="serial">b595141f-123e-4250-bfec-888d866fd0c6</entry>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <entry name="uuid">b595141f-123e-4250-bfec-888d866fd0c6</entry>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     </system>
Oct 14 09:14:22 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:14:22 compute-0 nova_compute[259627]:   <os>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:   </os>
Oct 14 09:14:22 compute-0 nova_compute[259627]:   <features>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:   </features>
Oct 14 09:14:22 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:14:22 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:14:22 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/b595141f-123e-4250-bfec-888d866fd0c6_disk">
Oct 14 09:14:22 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       </source>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:14:22 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/b595141f-123e-4250-bfec-888d866fd0c6_disk.config">
Oct 14 09:14:22 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       </source>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:14:22 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:9d:3c:de"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <target dev="tap7103ce4a-69"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/console.log" append="off"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <video>
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     </video>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:14:22 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:14:22 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:14:22 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:14:22 compute-0 nova_compute[259627]: </domain>
Oct 14 09:14:22 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.193 2 DEBUG nova.virt.libvirt.vif [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1399788817',display_name='tempest-TestNetworkAdvancedServerOps-server-1399788817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1399788817',id=96,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKu+EdNTbLcJJySiqS/IEqa/tEPsTFHNnnibtN2r3Vh53iyUeSuOyPo0wb3WDr0n5pX4AT4Pz90bVLkNIFNLc4XvdDhQRiV0WmHkT7tU8LMbcG0FpGnJS7D9bMgie0zW1g==',key_name='tempest-TestNetworkAdvancedServerOps-1640438953',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:13:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-qh9caax0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:20Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=b595141f-123e-4250-bfec-888d866fd0c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.193 2 DEBUG nova.network.os_vif_util [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.194 2 DEBUG nova.network.os_vif_util [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.194 2 DEBUG os_vif [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.196 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.196 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.198 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7103ce4a-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7103ce4a-69, col_values=(('external_ids', {'iface-id': '7103ce4a-69e8-454b-aed3-251ecb109232', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:3c:de', 'vm-uuid': 'b595141f-123e-4250-bfec-888d866fd0c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:22 compute-0 NetworkManager[44885]: <info>  [1760433262.2015] manager: (tap7103ce4a-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.205 2 INFO os_vif [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69')
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.265 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.265 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.265 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No VIF found with MAC fa:16:3e:9d:3c:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.266 2 INFO nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Using config drive
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.289 2 DEBUG nova.storage.rbd_utils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.313 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'ec2_ids' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.347 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'keypairs' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.539 2 DEBUG nova.network.neutron [-] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.575 2 INFO nova.compute.manager [-] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Took 0.70 seconds to deallocate network for instance.
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.621 2 DEBUG nova.network.neutron [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Updating instance_info_cache with network_info: [{"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.628 2 DEBUG oslo_concurrency.lockutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.628 2 DEBUG oslo_concurrency.lockutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1012694589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/623412631' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.641 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Releasing lock "refresh_cache-17db7f38-479a-4d56-9424-7f5ab695ccea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.642 2 DEBUG nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Instance network_info: |[{"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.642 2 DEBUG oslo_concurrency.lockutils [req-8b81e394-89ab-4e24-81fd-f4041779c873 req-638b9957-8996-4cb0-a0b9-6359d0f79315 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-17db7f38-479a-4d56-9424-7f5ab695ccea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.642 2 DEBUG nova.network.neutron [req-8b81e394-89ab-4e24-81fd-f4041779c873 req-638b9957-8996-4cb0-a0b9-6359d0f79315 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Refreshing network info cache for port b162ed75-30c8-4d39-97d3-7baa4c970ef6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.646 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Start _get_guest_xml network_info=[{"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.652 2 WARNING nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.657 2 DEBUG nova.virt.libvirt.host [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.658 2 DEBUG nova.virt.libvirt.host [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.665 2 DEBUG nova.virt.libvirt.host [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.666 2 DEBUG nova.virt.libvirt.host [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.666 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.667 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.667 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.668 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.668 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.668 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.669 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.669 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.669 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.669 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.670 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.670 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.673 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.755 2 INFO nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Creating config drive at /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.761 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpls5qk_u6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.883 2 DEBUG oslo_concurrency.processutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.933 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpls5qk_u6" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.974 2 DEBUG nova.storage.rbd_utils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:22 compute-0 nova_compute[259627]: 2025-10-14 09:14:22.979 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config b595141f-123e-4250-bfec-888d866fd0c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.028 2 DEBUG nova.compute.manager [req-a62a6389-6bfa-419b-ba27-2b53d77493eb req-5328019b-8357-4548-bccc-59b6666f6bb1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Received event network-vif-deleted-1e3d49fe-52bd-40cb-ae1a-86eb664df473 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.169 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config b595141f-123e-4250-bfec-888d866fd0c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.170 2 INFO nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Deleting local config drive /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config because it was imported into RBD.
Oct 14 09:14:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:14:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814762178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:23 compute-0 kernel: tap7103ce4a-69: entered promiscuous mode
Oct 14 09:14:23 compute-0 systemd-udevd[357250]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:14:23 compute-0 NetworkManager[44885]: <info>  [1760433263.2206] manager: (tap7103ce4a-69): new Tun device (/org/freedesktop/NetworkManager/Devices/426)
Oct 14 09:14:23 compute-0 NetworkManager[44885]: <info>  [1760433263.2361] device (tap7103ce4a-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:14:23 compute-0 NetworkManager[44885]: <info>  [1760433263.2368] device (tap7103ce4a-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.258 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:23 compute-0 ovn_controller[152662]: 2025-10-14T09:14:23Z|01034|binding|INFO|Claiming lport 7103ce4a-69e8-454b-aed3-251ecb109232 for this chassis.
Oct 14 09:14:23 compute-0 ovn_controller[152662]: 2025-10-14T09:14:23Z|01035|binding|INFO|7103ce4a-69e8-454b-aed3-251ecb109232: Claiming fa:16:3e:9d:3c:de 10.100.0.5
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.273 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:3c:de 10.100.0.5'], port_security=['fa:16:3e:9d:3c:de 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b595141f-123e-4250-bfec-888d866fd0c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-563aa000-400f-4c19-ba83-9377cc50d29f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '97c0b0e2-9440-40e7-a61b-2eb79520e4e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e572d4-df42-4a93-ac49-d93e0906a5ee, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7103ce4a-69e8-454b-aed3-251ecb109232) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.274 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7103ce4a-69e8-454b-aed3-251ecb109232 in datapath 563aa000-400f-4c19-ba83-9377cc50d29f bound to our chassis
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.275 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 563aa000-400f-4c19-ba83-9377cc50d29f
Oct 14 09:14:23 compute-0 ovn_controller[152662]: 2025-10-14T09:14:23Z|01036|binding|INFO|Setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 ovn-installed in OVS
Oct 14 09:14:23 compute-0 ovn_controller[152662]: 2025-10-14T09:14:23Z|01037|binding|INFO|Setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 up in Southbound
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.295 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc92582-e081-4622-8db0-5c53ebad6cb6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.295 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap563aa000-41 in ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:14:23 compute-0 systemd-machined[214636]: New machine qemu-123-instance-00000060.
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.298 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap563aa000-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.298 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[377ed8f3-27a1-4e4d-b21d-9cff3960915e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.301 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[12dc08e6-75f8-4d11-acff-58fdb002fe1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:23 compute-0 systemd[1]: Started Virtual Machine qemu-123-instance-00000060.
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.318 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f1aa3a-22a8-4b21-9b94-dee5ed53f534]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.330 2 DEBUG nova.storage.rbd_utils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 17db7f38-479a-4d56-9424-7f5ab695ccea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.336 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[13d0ca99-ac58-4b47-9dba-f151aa7306d1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.338 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:14:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4082960643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.379 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aff7b29b-9a02-46c1-9e3d-f8a5c797bc95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.385 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f73c79a4-e9ac-4e87-988c-31f08e4f2b02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:23 compute-0 NetworkManager[44885]: <info>  [1760433263.3873] manager: (tap563aa000-40): new Veth device (/org/freedesktop/NetworkManager/Devices/427)
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.407 2 DEBUG oslo_concurrency.processutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.424 2 DEBUG nova.compute.provider_tree [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.435 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e1549f98-7f96-41c1-8568-b592d5029280]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.439 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[72673bb3-9f0f-4887-ac5a-5d20867ec9b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.450 2 DEBUG nova.scheduler.client.report [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:14:23 compute-0 NetworkManager[44885]: <info>  [1760433263.4611] device (tap563aa000-40): carrier: link connected
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.469 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b7a01a-52ff-4b21-b69a-6263120c55eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.485 2 DEBUG oslo_concurrency.lockutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.487 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be24fe55-844c-43fc-997c-e679d980ee0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap563aa000-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:cd:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704222, 'reachable_time': 29378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357570, 'error': None, 'target': 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.503 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c32e5f8-ddf1-4f0e-b527-ccec24a63ef0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:cd84'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704222, 'tstamp': 704222}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357572, 'error': None, 'target': 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.511 2 INFO nova.scheduler.client.report [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Deleted allocations for instance 84080e43-a9f4-4b6a-889f-d76167ff715a
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.520 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a02f475e-143f-4dab-ad17-361f99a0e082]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap563aa000-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:cd:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704222, 'reachable_time': 29378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357582, 'error': None, 'target': 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.558 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6051ddaf-0896-4f8d-8f80-7730122ba119]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.580 2 DEBUG oslo_concurrency.lockutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.624 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6cd3c04-a23c-4b9c-9c27-389a31ee1d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.625 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap563aa000-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.625 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.626 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap563aa000-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:23 compute-0 NetworkManager[44885]: <info>  [1760433263.6283] manager: (tap563aa000-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Oct 14 09:14:23 compute-0 kernel: tap563aa000-40: entered promiscuous mode
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.633 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap563aa000-40, col_values=(('external_ids', {'iface-id': '4b7b52fe-6c74-46c3-ab83-c118ed2fe8eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:23 compute-0 ovn_controller[152662]: 2025-10-14T09:14:23Z|01038|binding|INFO|Releasing lport 4b7b52fe-6c74-46c3-ab83-c118ed2fe8eb from this chassis (sb_readonly=0)
Oct 14 09:14:23 compute-0 ceph-mon[74249]: pgmap v1803: 305 pgs: 305 active+clean; 280 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 170 op/s
Oct 14 09:14:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3814762178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4082960643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.668 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/563aa000-400f-4c19-ba83-9377cc50d29f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/563aa000-400f-4c19-ba83-9377cc50d29f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.670 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4a7e9aec-54b4-4dd1-a526-987738275bad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.671 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-563aa000-400f-4c19-ba83-9377cc50d29f
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/563aa000-400f-4c19-ba83-9377cc50d29f.pid.haproxy
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 563aa000-400f-4c19-ba83-9377cc50d29f
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:14:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.673 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'env', 'PROCESS_TAG=haproxy-563aa000-400f-4c19-ba83-9377cc50d29f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/563aa000-400f-4c19-ba83-9377cc50d29f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:14:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:14:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/82152448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.832 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.833 2 DEBUG nova.virt.libvirt.vif [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-982043822',display_name='tempest-ServersTestJSON-server-982043822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-982043822',id=99,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPk/Vx8N8VoO1zmSJqBglSY43YqSujWcA6ZiDD1toYBIpurFKRZBjCOKZgtOBXyQC4T0DxGqSPJNxdy1Ur96CBm27LV9yA38NPDP8uy5/wFRSRGqBndEo3EUYE7tpxK96w==',key_name='tempest-key-1288000849',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-a12i0l08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:19Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=17db7f38-479a-4d56-9424-7f5ab695ccea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.834 2 DEBUG nova.network.os_vif_util [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.835 2 DEBUG nova.network.os_vif_util [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:f9:07,bridge_name='br-int',has_traffic_filtering=True,id=b162ed75-30c8-4d39-97d3-7baa4c970ef6,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb162ed75-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.836 2 DEBUG nova.objects.instance [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 17db7f38-479a-4d56-9424-7f5ab695ccea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.850 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:14:23 compute-0 nova_compute[259627]:   <uuid>17db7f38-479a-4d56-9424-7f5ab695ccea</uuid>
Oct 14 09:14:23 compute-0 nova_compute[259627]:   <name>instance-00000063</name>
Oct 14 09:14:23 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:14:23 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:14:23 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersTestJSON-server-982043822</nova:name>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:14:22</nova:creationTime>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:14:23 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:14:23 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:14:23 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:14:23 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:14:23 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:14:23 compute-0 nova_compute[259627]:         <nova:user uuid="a287ef08fc5c4f218bf06cd2c7ed021e">tempest-ServersTestJSON-2060951674-project-member</nova:user>
Oct 14 09:14:23 compute-0 nova_compute[259627]:         <nova:project uuid="0a080fae2f3c4e39a6cca225203f5ec6">tempest-ServersTestJSON-2060951674</nova:project>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:14:23 compute-0 nova_compute[259627]:         <nova:port uuid="b162ed75-30c8-4d39-97d3-7baa4c970ef6">
Oct 14 09:14:23 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:14:23 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:14:23 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <system>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <entry name="serial">17db7f38-479a-4d56-9424-7f5ab695ccea</entry>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <entry name="uuid">17db7f38-479a-4d56-9424-7f5ab695ccea</entry>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     </system>
Oct 14 09:14:23 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:14:23 compute-0 nova_compute[259627]:   <os>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:   </os>
Oct 14 09:14:23 compute-0 nova_compute[259627]:   <features>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:   </features>
Oct 14 09:14:23 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:14:23 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:14:23 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/17db7f38-479a-4d56-9424-7f5ab695ccea_disk">
Oct 14 09:14:23 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       </source>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:14:23 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/17db7f38-479a-4d56-9424-7f5ab695ccea_disk.config">
Oct 14 09:14:23 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       </source>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:14:23 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:b3:f9:07"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <target dev="tapb162ed75-30"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea/console.log" append="off"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <video>
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     </video>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:14:23 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:14:23 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:14:23 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:14:23 compute-0 nova_compute[259627]: </domain>
Oct 14 09:14:23 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.850 2 DEBUG nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Preparing to wait for external event network-vif-plugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.851 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.851 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.851 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.852 2 DEBUG nova.virt.libvirt.vif [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-982043822',display_name='tempest-ServersTestJSON-server-982043822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-982043822',id=99,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPk/Vx8N8VoO1zmSJqBglSY43YqSujWcA6ZiDD1toYBIpurFKRZBjCOKZgtOBXyQC4T0DxGqSPJNxdy1Ur96CBm27LV9yA38NPDP8uy5/wFRSRGqBndEo3EUYE7tpxK96w==',key_name='tempest-key-1288000849',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-a12i0l08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:19Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=17db7f38-479a-4d56-9424-7f5ab695ccea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.852 2 DEBUG nova.network.os_vif_util [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.853 2 DEBUG nova.network.os_vif_util [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:f9:07,bridge_name='br-int',has_traffic_filtering=True,id=b162ed75-30c8-4d39-97d3-7baa4c970ef6,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb162ed75-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.854 2 DEBUG os_vif [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:f9:07,bridge_name='br-int',has_traffic_filtering=True,id=b162ed75-30c8-4d39-97d3-7baa4c970ef6,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb162ed75-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.855 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.855 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.857 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb162ed75-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.858 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb162ed75-30, col_values=(('external_ids', {'iface-id': 'b162ed75-30c8-4d39-97d3-7baa4c970ef6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:f9:07', 'vm-uuid': '17db7f38-479a-4d56-9424-7f5ab695ccea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:23 compute-0 NetworkManager[44885]: <info>  [1760433263.8604] manager: (tapb162ed75-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/429)
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.869 2 INFO os_vif [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:f9:07,bridge_name='br-int',has_traffic_filtering=True,id=b162ed75-30c8-4d39-97d3-7baa4c970ef6,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb162ed75-30')
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.923 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.923 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.924 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No VIF found with MAC fa:16:3e:b3:f9:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.924 2 INFO nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Using config drive
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.946 2 DEBUG nova.storage.rbd_utils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 17db7f38-479a-4d56-9424-7f5ab695ccea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.952 2 DEBUG nova.network.neutron [req-8b81e394-89ab-4e24-81fd-f4041779c873 req-638b9957-8996-4cb0-a0b9-6359d0f79315 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Updated VIF entry in instance network info cache for port b162ed75-30c8-4d39-97d3-7baa4c970ef6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.952 2 DEBUG nova.network.neutron [req-8b81e394-89ab-4e24-81fd-f4041779c873 req-638b9957-8996-4cb0-a0b9-6359d0f79315 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Updating instance_info_cache with network_info: [{"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 280 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 153 op/s
Oct 14 09:14:23 compute-0 nova_compute[259627]: 2025-10-14 09:14:23.978 2 DEBUG oslo_concurrency.lockutils [req-8b81e394-89ab-4e24-81fd-f4041779c873 req-638b9957-8996-4cb0-a0b9-6359d0f79315 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-17db7f38-479a-4d56-9424-7f5ab695ccea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.027 2 DEBUG nova.compute.manager [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Received event network-vif-unplugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.028 2 DEBUG oslo_concurrency.lockutils [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.028 2 DEBUG oslo_concurrency.lockutils [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.028 2 DEBUG oslo_concurrency.lockutils [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.029 2 DEBUG nova.compute.manager [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] No waiting events found dispatching network-vif-unplugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.029 2 WARNING nova.compute.manager [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Received unexpected event network-vif-unplugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 for instance with vm_state deleted and task_state None.
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.029 2 DEBUG nova.compute.manager [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Received event network-vif-plugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.029 2 DEBUG oslo_concurrency.lockutils [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.029 2 DEBUG oslo_concurrency.lockutils [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.030 2 DEBUG oslo_concurrency.lockutils [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.030 2 DEBUG nova.compute.manager [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] No waiting events found dispatching network-vif-plugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.030 2 WARNING nova.compute.manager [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Received unexpected event network-vif-plugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 for instance with vm_state deleted and task_state None.
Oct 14 09:14:24 compute-0 podman[357686]: 2025-10-14 09:14:24.114752153 +0000 UTC m=+0.055985461 container create 002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 09:14:24 compute-0 systemd[1]: Started libpod-conmon-002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2.scope.
Oct 14 09:14:24 compute-0 podman[357686]: 2025-10-14 09:14:24.08178266 +0000 UTC m=+0.023015988 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:14:24 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:14:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458b2af1b6a32b5e1572566cdbc152a3e460f3972a3a382274f2a47523b42e89/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:14:24 compute-0 podman[357686]: 2025-10-14 09:14:24.198454926 +0000 UTC m=+0.139688264 container init 002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:14:24 compute-0 podman[357686]: 2025-10-14 09:14:24.206527935 +0000 UTC m=+0.147761243 container start 002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 09:14:24 compute-0 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[357701]: [NOTICE]   (357705) : New worker (357707) forked
Oct 14 09:14:24 compute-0 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[357701]: [NOTICE]   (357705) : Loading success.
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.383 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for b595141f-123e-4250-bfec-888d866fd0c6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.385 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433264.3826427, b595141f-123e-4250-bfec-888d866fd0c6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.385 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] VM Resumed (Lifecycle Event)
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.393 2 DEBUG nova.compute.manager [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.393 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.398 2 INFO nova.virt.libvirt.driver [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance spawned successfully.
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.398 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.410 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.415 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.426 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.427 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.427 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.428 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.428 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.428 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.463 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.464 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433264.3840098, b595141f-123e-4250-bfec-888d866fd0c6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.464 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] VM Started (Lifecycle Event)
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.496 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.500 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.507 2 DEBUG nova.compute.manager [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.521 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.568 2 DEBUG oslo_concurrency.lockutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.569 2 DEBUG oslo_concurrency.lockutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.569 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:14:24 compute-0 nova_compute[259627]: 2025-10-14 09:14:24.647 2 DEBUG oslo_concurrency.lockutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:24 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/82152448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.056 2 INFO nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Creating config drive at /var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea/disk.config
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.072 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpww928r51 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.191 2 DEBUG nova.compute.manager [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.191 2 DEBUG oslo_concurrency.lockutils [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.192 2 DEBUG oslo_concurrency.lockutils [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.192 2 DEBUG oslo_concurrency.lockutils [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.192 2 DEBUG nova.compute.manager [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.192 2 WARNING nova.compute.manager [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state active and task_state None.
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.192 2 DEBUG nova.compute.manager [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.192 2 DEBUG oslo_concurrency.lockutils [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.193 2 DEBUG oslo_concurrency.lockutils [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.193 2 DEBUG oslo_concurrency.lockutils [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.194 2 DEBUG nova.compute.manager [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.194 2 WARNING nova.compute.manager [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state active and task_state None.
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.242 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpww928r51" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.276 2 DEBUG nova.storage.rbd_utils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 17db7f38-479a-4d56-9424-7f5ab695ccea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.283 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea/disk.config 17db7f38-479a-4d56-9424-7f5ab695ccea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.455 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea/disk.config 17db7f38-479a-4d56-9424-7f5ab695ccea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.460 2 INFO nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Deleting local config drive /var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea/disk.config because it was imported into RBD.
Oct 14 09:14:25 compute-0 kernel: tapb162ed75-30: entered promiscuous mode
Oct 14 09:14:25 compute-0 NetworkManager[44885]: <info>  [1760433265.5120] manager: (tapb162ed75-30): new Tun device (/org/freedesktop/NetworkManager/Devices/430)
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:25 compute-0 ovn_controller[152662]: 2025-10-14T09:14:25Z|01039|binding|INFO|Claiming lport b162ed75-30c8-4d39-97d3-7baa4c970ef6 for this chassis.
Oct 14 09:14:25 compute-0 ovn_controller[152662]: 2025-10-14T09:14:25Z|01040|binding|INFO|b162ed75-30c8-4d39-97d3-7baa4c970ef6: Claiming fa:16:3e:b3:f9:07 10.100.0.7
Oct 14 09:14:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.558 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:f9:07 10.100.0.7'], port_security=['fa:16:3e:b3:f9:07 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '17db7f38-479a-4d56-9424-7f5ab695ccea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b162ed75-30c8-4d39-97d3-7baa4c970ef6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.559 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b162ed75-30c8-4d39-97d3-7baa4c970ef6 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e bound to our chassis
Oct 14 09:14:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.561 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e
Oct 14 09:14:25 compute-0 ovn_controller[152662]: 2025-10-14T09:14:25Z|01041|binding|INFO|Setting lport b162ed75-30c8-4d39-97d3-7baa4c970ef6 up in Southbound
Oct 14 09:14:25 compute-0 ovn_controller[152662]: 2025-10-14T09:14:25Z|01042|binding|INFO|Setting lport b162ed75-30c8-4d39-97d3-7baa4c970ef6 ovn-installed in OVS
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.582 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e58c9d98-c6f3-4280-9b4e-06be0ec1dd12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:25 compute-0 systemd-machined[214636]: New machine qemu-124-instance-00000063.
Oct 14 09:14:25 compute-0 systemd-udevd[357773]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:14:25 compute-0 systemd[1]: Started Virtual Machine qemu-124-instance-00000063.
Oct 14 09:14:25 compute-0 NetworkManager[44885]: <info>  [1760433265.6159] device (tapb162ed75-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:14:25 compute-0 NetworkManager[44885]: <info>  [1760433265.6171] device (tapb162ed75-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:14:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.630 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[157ac806-49aa-43bc-b50d-a4b4ca02e9d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.634 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4d05c2-94ec-448c-9440-26cdb73f033a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.665 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b1eec7bc-c525-401e-8640-e47f16ef344c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:25 compute-0 ceph-mon[74249]: pgmap v1804: 305 pgs: 305 active+clean; 280 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 153 op/s
Oct 14 09:14:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.683 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[56fecd10-f9bc-45d0-b092-04de0bf5a0c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357781, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.711 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cfc67e87-bd54-4f9d-b8d9-69795eafe236]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357785, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357785, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.713 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:25 compute-0 nova_compute[259627]: 2025-10-14 09:14:25.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.718 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.718 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.719 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.719 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 293 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 258 op/s
Oct 14 09:14:25 compute-0 ovn_controller[152662]: 2025-10-14T09:14:25Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:d7:f7 10.100.0.7
Oct 14 09:14:26 compute-0 nova_compute[259627]: 2025-10-14 09:14:26.142 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433251.1402357, d5de3978-2377-4d8e-aeaf-c952912130a2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:26 compute-0 nova_compute[259627]: 2025-10-14 09:14:26.143 2 INFO nova.compute.manager [-] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] VM Stopped (Lifecycle Event)
Oct 14 09:14:26 compute-0 nova_compute[259627]: 2025-10-14 09:14:26.165 2 DEBUG nova.compute.manager [None req-3e8bb4fc-abeb-4a77-84e5-2d2c18f0ac90 - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:26 compute-0 nova_compute[259627]: 2025-10-14 09:14:26.518 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433266.5172956, 17db7f38-479a-4d56-9424-7f5ab695ccea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:26 compute-0 nova_compute[259627]: 2025-10-14 09:14:26.518 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] VM Started (Lifecycle Event)
Oct 14 09:14:26 compute-0 nova_compute[259627]: 2025-10-14 09:14:26.544 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:26 compute-0 nova_compute[259627]: 2025-10-14 09:14:26.550 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433266.5174892, 17db7f38-479a-4d56-9424-7f5ab695ccea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:26 compute-0 nova_compute[259627]: 2025-10-14 09:14:26.550 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] VM Paused (Lifecycle Event)
Oct 14 09:14:26 compute-0 nova_compute[259627]: 2025-10-14 09:14:26.572 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:26 compute-0 nova_compute[259627]: 2025-10-14 09:14:26.576 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:14:26 compute-0 nova_compute[259627]: 2025-10-14 09:14:26.599 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:14:27 compute-0 ceph-mon[74249]: pgmap v1805: 305 pgs: 305 active+clean; 293 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 258 op/s
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.723 2 DEBUG nova.compute.manager [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Received event network-vif-plugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.723 2 DEBUG oslo_concurrency.lockutils [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.724 2 DEBUG oslo_concurrency.lockutils [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.724 2 DEBUG oslo_concurrency.lockutils [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.725 2 DEBUG nova.compute.manager [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Processing event network-vif-plugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.725 2 DEBUG nova.compute.manager [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Received event network-vif-plugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.726 2 DEBUG oslo_concurrency.lockutils [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.726 2 DEBUG oslo_concurrency.lockutils [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.727 2 DEBUG oslo_concurrency.lockutils [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.727 2 DEBUG nova.compute.manager [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] No waiting events found dispatching network-vif-plugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.727 2 WARNING nova.compute.manager [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Received unexpected event network-vif-plugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 for instance with vm_state building and task_state spawning.
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.728 2 DEBUG nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.734 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433267.7342854, 17db7f38-479a-4d56-9424-7f5ab695ccea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.735 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] VM Resumed (Lifecycle Event)
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.739 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.744 2 INFO nova.virt.libvirt.driver [-] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Instance spawned successfully.
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.744 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.760 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.772 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.780 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.781 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.781 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.782 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.783 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.783 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.796 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.841 2 INFO nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Took 8.20 seconds to spawn the instance on the hypervisor.
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.842 2 DEBUG nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.906 2 INFO nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Took 9.18 seconds to build instance.
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.921 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 293 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 221 op/s
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:14:27 compute-0 nova_compute[259627]: 2025-10-14 09:14:27.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:14:28 compute-0 ovn_controller[152662]: 2025-10-14T09:14:28Z|01043|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 09:14:28 compute-0 ovn_controller[152662]: 2025-10-14T09:14:28Z|01044|binding|INFO|Releasing lport 4b7b52fe-6c74-46c3-ab83-c118ed2fe8eb from this chassis (sb_readonly=0)
Oct 14 09:14:28 compute-0 ovn_controller[152662]: 2025-10-14T09:14:28Z|01045|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 09:14:28 compute-0 nova_compute[259627]: 2025-10-14 09:14:28.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:28 compute-0 nova_compute[259627]: 2025-10-14 09:14:28.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:28 compute-0 nova_compute[259627]: 2025-10-14 09:14:28.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:28 compute-0 nova_compute[259627]: 2025-10-14 09:14:28.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:14:29 compute-0 ceph-mon[74249]: pgmap v1806: 305 pgs: 305 active+clean; 293 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 221 op/s
Oct 14 09:14:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 293 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 221 op/s
Oct 14 09:14:30 compute-0 nova_compute[259627]: 2025-10-14 09:14:30.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:14:30 compute-0 nova_compute[259627]: 2025-10-14 09:14:30.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.013 2 DEBUG oslo_concurrency.lockutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "17db7f38-479a-4d56-9424-7f5ab695ccea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.013 2 DEBUG oslo_concurrency.lockutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.013 2 DEBUG oslo_concurrency.lockutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.014 2 DEBUG oslo_concurrency.lockutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.014 2 DEBUG oslo_concurrency.lockutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.016 2 INFO nova.compute.manager [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Terminating instance
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.017 2 DEBUG nova.compute.manager [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.019 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.019 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.019 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.020 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.021 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:31 compute-0 kernel: tapb162ed75-30 (unregistering): left promiscuous mode
Oct 14 09:14:31 compute-0 NetworkManager[44885]: <info>  [1760433271.1221] device (tapb162ed75-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:14:31 compute-0 ovn_controller[152662]: 2025-10-14T09:14:31Z|01046|binding|INFO|Releasing lport b162ed75-30c8-4d39-97d3-7baa4c970ef6 from this chassis (sb_readonly=0)
Oct 14 09:14:31 compute-0 ovn_controller[152662]: 2025-10-14T09:14:31Z|01047|binding|INFO|Setting lport b162ed75-30c8-4d39-97d3-7baa4c970ef6 down in Southbound
Oct 14 09:14:31 compute-0 ovn_controller[152662]: 2025-10-14T09:14:31Z|01048|binding|INFO|Removing iface tapb162ed75-30 ovn-installed in OVS
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.182 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:f9:07 10.100.0.7'], port_security=['fa:16:3e:b3:f9:07 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '17db7f38-479a-4d56-9424-7f5ab695ccea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b162ed75-30c8-4d39-97d3-7baa4c970ef6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.183 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b162ed75-30c8-4d39-97d3-7baa4c970ef6 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e unbound from our chassis
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.184 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.205 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8016cc61-6958-4d00-884c-3210abf4b280]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.218 2 DEBUG oslo_concurrency.lockutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.219 2 DEBUG oslo_concurrency.lockutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.219 2 DEBUG oslo_concurrency.lockutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.219 2 DEBUG oslo_concurrency.lockutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.219 2 DEBUG oslo_concurrency.lockutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:31 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000063.scope: Deactivated successfully.
Oct 14 09:14:31 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000063.scope: Consumed 4.225s CPU time.
Oct 14 09:14:31 compute-0 systemd-machined[214636]: Machine qemu-124-instance-00000063 terminated.
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.225 2 INFO nova.compute.manager [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Terminating instance
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.227 2 DEBUG nova.compute.manager [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.246 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[10a2f240-9f1f-493f-83ac-95cfb3c90f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.251 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[85cad6d1-f09c-4ca1-923c-5eb42b067ef6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:31 compute-0 podman[357833]: 2025-10-14 09:14:31.286167883 +0000 UTC m=+0.107301536 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.285 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd4ea17-1fb7-4a5e-8257-f8636351a651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:31 compute-0 kernel: tap4f827284-f3 (unregistering): left promiscuous mode
Oct 14 09:14:31 compute-0 NetworkManager[44885]: <info>  [1760433271.3026] device (tap4f827284-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:14:31 compute-0 podman[357830]: 2025-10-14 09:14:31.314340117 +0000 UTC m=+0.172428201 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:31 compute-0 ovn_controller[152662]: 2025-10-14T09:14:31Z|01049|binding|INFO|Releasing lport 4f827284-f357-43c5-bdde-c69731b52914 from this chassis (sb_readonly=0)
Oct 14 09:14:31 compute-0 ovn_controller[152662]: 2025-10-14T09:14:31Z|01050|binding|INFO|Setting lport 4f827284-f357-43c5-bdde-c69731b52914 down in Southbound
Oct 14 09:14:31 compute-0 ovn_controller[152662]: 2025-10-14T09:14:31Z|01051|binding|INFO|Removing iface tap4f827284-f3 ovn-installed in OVS
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.324 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:d7:f7 10.100.0.7'], port_security=['fa:16:3e:8b:d7:f7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2534f8b9-e832-4b78-ada4-e551429bdc75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '517aafb84156407c8672042097e3ef4f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '572acc55-453a-444a-ab8d-a15e14283f7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927296e1-b389-4596-b9be-8cf735b93ca2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4f827284-f357-43c5-bdde-c69731b52914) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.332 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0a77fd7c-ef65-4f97-96e6-bd6b56ec0a9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357910, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.342 2 INFO nova.virt.libvirt.driver [-] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Instance destroyed successfully.
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.343 2 DEBUG nova.objects.instance [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'resources' on Instance uuid 17db7f38-479a-4d56-9424-7f5ab695ccea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.349 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c470026c-bfff-4324-8fe2-9228909a779a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357922, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357922, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.352 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.353 2 DEBUG nova.virt.libvirt.vif [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:14:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-982043822',display_name='tempest-ServersTestJSON-server-982043822',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-982043822',id=99,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPk/Vx8N8VoO1zmSJqBglSY43YqSujWcA6ZiDD1toYBIpurFKRZBjCOKZgtOBXyQC4T0DxGqSPJNxdy1Ur96CBm27LV9yA38NPDP8uy5/wFRSRGqBndEo3EUYE7tpxK96w==',key_name='tempest-key-1288000849',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:14:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-a12i0l08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:14:27Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=17db7f38-479a-4d56-9424-7f5ab695ccea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.354 2 DEBUG nova.network.os_vif_util [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.354 2 DEBUG nova.network.os_vif_util [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:f9:07,bridge_name='br-int',has_traffic_filtering=True,id=b162ed75-30c8-4d39-97d3-7baa4c970ef6,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb162ed75-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.355 2 DEBUG os_vif [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:f9:07,bridge_name='br-int',has_traffic_filtering=True,id=b162ed75-30c8-4d39-97d3-7baa4c970ef6,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb162ed75-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.357 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb162ed75-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.360 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.360 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.360 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.361 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.361 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4f827284-f357-43c5-bdde-c69731b52914 in datapath a49b41b4-2559-4a22-a274-a6c7bbe75f2c unbound from our chassis
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.362 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a49b41b4-2559-4a22-a274-a6c7bbe75f2c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.363 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb189b1-3682-4430-8923-9d851eeb4661]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.364 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c namespace which is not needed anymore
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.365 2 INFO os_vif [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:f9:07,bridge_name='br-int',has_traffic_filtering=True,id=b162ed75-30c8-4d39-97d3-7baa4c970ef6,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb162ed75-30')
Oct 14 09:14:31 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000056.scope: Deactivated successfully.
Oct 14 09:14:31 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000056.scope: Consumed 8.196s CPU time.
Oct 14 09:14:31 compute-0 systemd-machined[214636]: Machine qemu-122-instance-00000056 terminated.
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.462 2 INFO nova.virt.libvirt.driver [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance destroyed successfully.
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.462 2 DEBUG nova.objects.instance [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'resources' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.489 2 DEBUG nova.virt.libvirt.vif [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:12:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-17250352',display_name='tempest-ServersNegativeTestJSON-server-17250352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-17250352',id=86,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:14:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-rj00rja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegativeTestJSON-1475695514-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:14:18Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=2534f8b9-e832-4b78-ada4-e551429bdc75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.490 2 DEBUG nova.network.os_vif_util [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.490 2 DEBUG nova.network.os_vif_util [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.490 2 DEBUG os_vif [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.492 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f827284-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.498 2 INFO os_vif [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3')
Oct 14 09:14:31 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[356784]: [NOTICE]   (356788) : haproxy version is 2.8.14-c23fe91
Oct 14 09:14:31 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[356784]: [NOTICE]   (356788) : path to executable is /usr/sbin/haproxy
Oct 14 09:14:31 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[356784]: [WARNING]  (356788) : Exiting Master process...
Oct 14 09:14:31 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[356784]: [WARNING]  (356788) : Exiting Master process...
Oct 14 09:14:31 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[356784]: [ALERT]    (356788) : Current worker (356790) exited with code 143 (Terminated)
Oct 14 09:14:31 compute-0 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[356784]: [WARNING]  (356788) : All workers exited. Exiting... (0)
Oct 14 09:14:31 compute-0 systemd[1]: libpod-003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd.scope: Deactivated successfully.
Oct 14 09:14:31 compute-0 podman[357965]: 2025-10-14 09:14:31.528906285 +0000 UTC m=+0.062021949 container died 003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:14:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:14:31 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3039727621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6781cf622e873ac119e8ad8166fc07516eec1ccc73d027486934f81c24ca8b1-merged.mount: Deactivated successfully.
Oct 14 09:14:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd-userdata-shm.mount: Deactivated successfully.
Oct 14 09:14:31 compute-0 podman[357965]: 2025-10-14 09:14:31.579853321 +0000 UTC m=+0.112968985 container cleanup 003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.592 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:31 compute-0 systemd[1]: libpod-conmon-003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd.scope: Deactivated successfully.
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.640 2 DEBUG nova.compute.manager [req-8222f3f9-fe3c-4381-bbf0-1afe0e126b6b req-bd3ac972-4ecd-4376-9e78-540652bcab6c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Received event network-vif-unplugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.641 2 DEBUG oslo_concurrency.lockutils [req-8222f3f9-fe3c-4381-bbf0-1afe0e126b6b req-bd3ac972-4ecd-4376-9e78-540652bcab6c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.641 2 DEBUG oslo_concurrency.lockutils [req-8222f3f9-fe3c-4381-bbf0-1afe0e126b6b req-bd3ac972-4ecd-4376-9e78-540652bcab6c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.641 2 DEBUG oslo_concurrency.lockutils [req-8222f3f9-fe3c-4381-bbf0-1afe0e126b6b req-bd3ac972-4ecd-4376-9e78-540652bcab6c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.642 2 DEBUG nova.compute.manager [req-8222f3f9-fe3c-4381-bbf0-1afe0e126b6b req-bd3ac972-4ecd-4376-9e78-540652bcab6c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] No waiting events found dispatching network-vif-unplugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.642 2 DEBUG nova.compute.manager [req-8222f3f9-fe3c-4381-bbf0-1afe0e126b6b req-bd3ac972-4ecd-4376-9e78-540652bcab6c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Received event network-vif-unplugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:14:31 compute-0 podman[358022]: 2025-10-14 09:14:31.671340266 +0000 UTC m=+0.056704599 container remove 003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.676 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2b543e3f-5428-4180-85b9-33042213186d]: (4, ('Tue Oct 14 09:14:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c (003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd)\n003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd\nTue Oct 14 09:14:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c (003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd)\n003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.678 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4e6ee6a0-a966-4a50-a5d4-01279fa92898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.679 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49b41b4-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.679 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.679 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:14:31 compute-0 kernel: tapa49b41b4-20: left promiscuous mode
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.685 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.686 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.689 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.689 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:14:31 compute-0 ceph-mon[74249]: pgmap v1807: 305 pgs: 305 active+clean; 293 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 221 op/s
Oct 14 09:14:31 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3039727621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.701 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.701 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.709 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4f8e7817-2413-4f8d-85ff-f51285b21f53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.733 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[51f4329e-c185-4a77-915f-7b1b4d20735a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.734 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0cc4b86e-ab5c-4641-92ea-41ec1010c565]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.756 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[93fe3eb5-667b-463f-8467-13b3eadab3a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703647, 'reachable_time': 44657, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358038, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:31 compute-0 systemd[1]: run-netns-ovnmeta\x2da49b41b4\x2d2559\x2d4a22\x2da274\x2da6c7bbe75f2c.mount: Deactivated successfully.
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.761 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:14:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.762 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[5f2fb4a1-7005-4cbb-ab8c-c184bdefa464]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.825 2 INFO nova.virt.libvirt.driver [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Deleting instance files /var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea_del
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.826 2 INFO nova.virt.libvirt.driver [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Deletion of /var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea_del complete
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.899 2 INFO nova.compute.manager [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Took 0.88 seconds to destroy the instance on the hypervisor.
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.900 2 DEBUG oslo.service.loopingcall [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.900 2 DEBUG nova.compute.manager [-] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.900 2 DEBUG nova.network.neutron [-] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.922 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.923 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3362MB free_disk=59.85558319091797GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.923 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.923 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.962 2 INFO nova.virt.libvirt.driver [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Deleting instance files /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75_del
Oct 14 09:14:31 compute-0 nova_compute[259627]: 2025-10-14 09:14:31.963 2 INFO nova.virt.libvirt.driver [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Deletion of /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75_del complete
Oct 14 09:14:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 293 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 3.6 MiB/s wr, 371 op/s
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.018 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance d46b6953-9413-4e6a-94f7-7b5ac9634c16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance b595141f-123e-4250-bfec-888d866fd0c6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 2534f8b9-e832-4b78-ada4-e551429bdc75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 17db7f38-479a-4d56-9424-7f5ab695ccea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.025 2 INFO nova.compute.manager [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.026 2 DEBUG oslo.service.loopingcall [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.026 2 DEBUG nova.compute.manager [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.026 2 DEBUG nova.network.neutron [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.036 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.050 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.050 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.063 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.097 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.173 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:14:32 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/708180741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.646 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.655 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.697 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:14:32 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/708180741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.714 2 DEBUG nova.network.neutron [-] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.729 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.729 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:14:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.744 2 INFO nova.compute.manager [-] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Took 0.84 seconds to deallocate network for instance.
Oct 14 09:14:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:14:32
Oct 14 09:14:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:14:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:14:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', '.mgr', 'backups', 'default.rgw.log', 'volumes', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct 14 09:14:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:14:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:14:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:14:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:14:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.789 2 DEBUG oslo_concurrency.lockutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.789 2 DEBUG oslo_concurrency.lockutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.843 2 DEBUG nova.compute.manager [req-49791a0e-0167-4be8-bcff-a211fc6b984d req-6872d837-3ac5-4641-91cb-24e59714ef6c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Received event network-vif-deleted-b162ed75-30c8-4d39-97d3-7baa4c970ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:14:32 compute-0 nova_compute[259627]: 2025-10-14 09:14:32.886 2 DEBUG oslo_concurrency.processutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:14:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:14:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:14:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:14:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:14:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:14:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:14:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:14:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:14:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:14:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:14:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1437114887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.313 2 DEBUG oslo_concurrency.processutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.321 2 DEBUG nova.compute.provider_tree [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.345 2 DEBUG nova.scheduler.client.report [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.375 2 DEBUG oslo_concurrency.lockutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.382 2 DEBUG nova.network.neutron [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.406 2 INFO nova.compute.manager [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Took 1.38 seconds to deallocate network for instance.
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.413 2 INFO nova.scheduler.client.report [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Deleted allocations for instance 17db7f38-479a-4d56-9424-7f5ab695ccea
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.494 2 DEBUG oslo_concurrency.lockutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.494 2 DEBUG oslo_concurrency.lockutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.524 2 DEBUG oslo_concurrency.lockutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.564 2 DEBUG oslo_concurrency.processutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:33 compute-0 ceph-mon[74249]: pgmap v1808: 305 pgs: 305 active+clean; 293 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 3.6 MiB/s wr, 371 op/s
Oct 14 09:14:33 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1437114887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.758 2 DEBUG nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Received event network-vif-plugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.759 2 DEBUG oslo_concurrency.lockutils [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.760 2 DEBUG oslo_concurrency.lockutils [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.760 2 DEBUG oslo_concurrency.lockutils [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.761 2 DEBUG nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] No waiting events found dispatching network-vif-plugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.762 2 WARNING nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Received unexpected event network-vif-plugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 for instance with vm_state deleted and task_state None.
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.762 2 DEBUG nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-unplugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.763 2 DEBUG oslo_concurrency.lockutils [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.763 2 DEBUG oslo_concurrency.lockutils [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.763 2 DEBUG oslo_concurrency.lockutils [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.764 2 DEBUG nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-unplugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.764 2 WARNING nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-unplugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state deleted and task_state None.
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.765 2 DEBUG nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.765 2 DEBUG oslo_concurrency.lockutils [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.766 2 DEBUG oslo_concurrency.lockutils [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.766 2 DEBUG oslo_concurrency.lockutils [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.767 2 DEBUG nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.767 2 WARNING nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state deleted and task_state None.
Oct 14 09:14:33 compute-0 nova_compute[259627]: 2025-10-14 09:14:33.767 2 DEBUG nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-deleted-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 293 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.3 MiB/s wr, 266 op/s
Oct 14 09:14:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:14:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2406511685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:34 compute-0 nova_compute[259627]: 2025-10-14 09:14:34.040 2 DEBUG oslo_concurrency.processutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:34 compute-0 nova_compute[259627]: 2025-10-14 09:14:34.046 2 DEBUG nova.compute.provider_tree [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:14:34 compute-0 nova_compute[259627]: 2025-10-14 09:14:34.063 2 DEBUG nova.scheduler.client.report [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:14:34 compute-0 nova_compute[259627]: 2025-10-14 09:14:34.084 2 DEBUG oslo_concurrency.lockutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:34 compute-0 nova_compute[259627]: 2025-10-14 09:14:34.105 2 INFO nova.scheduler.client.report [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Deleted allocations for instance 2534f8b9-e832-4b78-ada4-e551429bdc75
Oct 14 09:14:34 compute-0 nova_compute[259627]: 2025-10-14 09:14:34.191 2 DEBUG oslo_concurrency.lockutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:34 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2406511685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:34 compute-0 nova_compute[259627]: 2025-10-14 09:14:34.728 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:14:34 compute-0 nova_compute[259627]: 2025-10-14 09:14:34.729 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:14:34 compute-0 nova_compute[259627]: 2025-10-14 09:14:34.729 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:14:35 compute-0 nova_compute[259627]: 2025-10-14 09:14:35.044 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-d46b6953-9413-4e6a-94f7-7b5ac9634c16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:14:35 compute-0 nova_compute[259627]: 2025-10-14 09:14:35.044 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-d46b6953-9413-4e6a-94f7-7b5ac9634c16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:14:35 compute-0 nova_compute[259627]: 2025-10-14 09:14:35.045 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:14:35 compute-0 nova_compute[259627]: 2025-10-14 09:14:35.045 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d46b6953-9413-4e6a-94f7-7b5ac9634c16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:35 compute-0 ceph-mon[74249]: pgmap v1809: 305 pgs: 305 active+clean; 293 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.3 MiB/s wr, 266 op/s
Oct 14 09:14:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 169 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.5 MiB/s wr, 331 op/s
Oct 14 09:14:36 compute-0 nova_compute[259627]: 2025-10-14 09:14:36.292 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433261.280935, 84080e43-a9f4-4b6a-889f-d76167ff715a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:36 compute-0 nova_compute[259627]: 2025-10-14 09:14:36.292 2 INFO nova.compute.manager [-] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] VM Stopped (Lifecycle Event)
Oct 14 09:14:36 compute-0 nova_compute[259627]: 2025-10-14 09:14:36.314 2 DEBUG nova.compute.manager [None req-04676838-9d5b-492d-a9d0-b802aed56d65 - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:36 compute-0 nova_compute[259627]: 2025-10-14 09:14:36.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:36 compute-0 nova_compute[259627]: 2025-10-14 09:14:36.639 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Updating instance_info_cache with network_info: [{"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:36 compute-0 nova_compute[259627]: 2025-10-14 09:14:36.660 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-d46b6953-9413-4e6a-94f7-7b5ac9634c16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:36 compute-0 nova_compute[259627]: 2025-10-14 09:14:36.660 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:14:36 compute-0 nova_compute[259627]: 2025-10-14 09:14:36.660 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:14:36 compute-0 nova_compute[259627]: 2025-10-14 09:14:36.660 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:14:36 compute-0 nova_compute[259627]: 2025-10-14 09:14:36.660 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:14:36 compute-0 ceph-mon[74249]: pgmap v1810: 305 pgs: 305 active+clean; 169 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.5 MiB/s wr, 331 op/s
Oct 14 09:14:36 compute-0 nova_compute[259627]: 2025-10-14 09:14:36.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:37 compute-0 nova_compute[259627]: 2025-10-14 09:14:37.339 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "da40c115-048e-4844-812e-7e65e25bfb3f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:37 compute-0 nova_compute[259627]: 2025-10-14 09:14:37.340 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:37 compute-0 nova_compute[259627]: 2025-10-14 09:14:37.360 2 DEBUG nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:14:37 compute-0 nova_compute[259627]: 2025-10-14 09:14:37.451 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:37 compute-0 nova_compute[259627]: 2025-10-14 09:14:37.452 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:37 compute-0 nova_compute[259627]: 2025-10-14 09:14:37.458 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:14:37 compute-0 nova_compute[259627]: 2025-10-14 09:14:37.458 2 INFO nova.compute.claims [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:14:37 compute-0 nova_compute[259627]: 2025-10-14 09:14:37.606 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.854065) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433277854093, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 622, "num_deletes": 256, "total_data_size": 602740, "memory_usage": 615912, "flush_reason": "Manual Compaction"}
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433277859849, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 596293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37771, "largest_seqno": 38392, "table_properties": {"data_size": 593031, "index_size": 1106, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7780, "raw_average_key_size": 18, "raw_value_size": 586342, "raw_average_value_size": 1426, "num_data_blocks": 50, "num_entries": 411, "num_filter_entries": 411, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760433240, "oldest_key_time": 1760433240, "file_creation_time": 1760433277, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 5807 microseconds, and 2191 cpu microseconds.
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.859873) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 596293 bytes OK
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.859886) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.861845) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.861857) EVENT_LOG_v1 {"time_micros": 1760433277861853, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.861870) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 599330, prev total WAL file size 599330, number of live WAL files 2.
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.862253) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323537' seq:72057594037927935, type:22 .. '6C6F676D0031353038' seq:0, type:0; will stop at (end)
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(582KB)], [83(7900KB)]
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433277862293, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 8686208, "oldest_snapshot_seqno": -1}
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6128 keys, 8556518 bytes, temperature: kUnknown
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433277897881, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8556518, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8515267, "index_size": 24821, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 156528, "raw_average_key_size": 25, "raw_value_size": 8405065, "raw_average_value_size": 1371, "num_data_blocks": 997, "num_entries": 6128, "num_filter_entries": 6128, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760433277, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.898148) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8556518 bytes
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.899868) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 243.5 rd, 239.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 7.7 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(28.9) write-amplify(14.3) OK, records in: 6652, records dropped: 524 output_compression: NoCompression
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.899883) EVENT_LOG_v1 {"time_micros": 1760433277899876, "job": 48, "event": "compaction_finished", "compaction_time_micros": 35668, "compaction_time_cpu_micros": 18436, "output_level": 6, "num_output_files": 1, "total_output_size": 8556518, "num_input_records": 6652, "num_output_records": 6128, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433277900101, "job": 48, "event": "table_file_deletion", "file_number": 85}
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433277901370, "job": 48, "event": "table_file_deletion", "file_number": 83}
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.862187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.901397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.901401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.901402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.901404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:14:37 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.901405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:14:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 169 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 290 KiB/s wr, 214 op/s
Oct 14 09:14:38 compute-0 ovn_controller[152662]: 2025-10-14T09:14:38Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:3c:de 10.100.0.5
Oct 14 09:14:38 compute-0 ovn_controller[152662]: 2025-10-14T09:14:38Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:3c:de 10.100.0.5
Oct 14 09:14:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:14:38 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2811645815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.042 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.050 2 DEBUG nova.compute.provider_tree [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.077 2 DEBUG nova.scheduler.client.report [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.101 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.102 2 DEBUG nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:14:38 compute-0 ovn_controller[152662]: 2025-10-14T09:14:38Z|01052|binding|INFO|Releasing lport 4b7b52fe-6c74-46c3-ab83-c118ed2fe8eb from this chassis (sb_readonly=0)
Oct 14 09:14:38 compute-0 ovn_controller[152662]: 2025-10-14T09:14:38Z|01053|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.169 2 DEBUG nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.170 2 DEBUG nova.network.neutron [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.201 2 INFO nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.269 2 DEBUG nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.363 2 DEBUG nova.policy [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a287ef08fc5c4f218bf06cd2c7ed021e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.375 2 DEBUG nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.378 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.379 2 INFO nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Creating image(s)
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.413 2 DEBUG nova.storage.rbd_utils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image da40c115-048e-4844-812e-7e65e25bfb3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.442 2 DEBUG nova.storage.rbd_utils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image da40c115-048e-4844-812e-7e65e25bfb3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.465 2 DEBUG nova.storage.rbd_utils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image da40c115-048e-4844-812e-7e65e25bfb3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.468 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.538 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.539 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.540 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.540 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.562 2 DEBUG nova.storage.rbd_utils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image da40c115-048e-4844-812e-7e65e25bfb3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.565 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 da40c115-048e-4844-812e-7e65e25bfb3f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:38 compute-0 ceph-mon[74249]: pgmap v1811: 305 pgs: 305 active+clean; 169 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 290 KiB/s wr, 214 op/s
Oct 14 09:14:38 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2811645815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.875 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 da40c115-048e-4844-812e-7e65e25bfb3f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:38 compute-0 nova_compute[259627]: 2025-10-14 09:14:38.965 2 DEBUG nova.storage.rbd_utils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] resizing rbd image da40c115-048e-4844-812e-7e65e25bfb3f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:14:39 compute-0 nova_compute[259627]: 2025-10-14 09:14:39.041 2 DEBUG nova.network.neutron [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Successfully created port: c2ac8abe-0e61-4769-9529-3b391568e6b9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:14:39 compute-0 nova_compute[259627]: 2025-10-14 09:14:39.094 2 DEBUG nova.objects.instance [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'migration_context' on Instance uuid da40c115-048e-4844-812e-7e65e25bfb3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:39 compute-0 nova_compute[259627]: 2025-10-14 09:14:39.115 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:14:39 compute-0 nova_compute[259627]: 2025-10-14 09:14:39.116 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Ensure instance console log exists: /var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:14:39 compute-0 nova_compute[259627]: 2025-10-14 09:14:39.117 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:39 compute-0 nova_compute[259627]: 2025-10-14 09:14:39.117 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:39 compute-0 nova_compute[259627]: 2025-10-14 09:14:39.118 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:39 compute-0 nova_compute[259627]: 2025-10-14 09:14:39.772 2 DEBUG nova.network.neutron [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Successfully updated port: c2ac8abe-0e61-4769-9529-3b391568e6b9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:14:39 compute-0 nova_compute[259627]: 2025-10-14 09:14:39.791 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "refresh_cache-da40c115-048e-4844-812e-7e65e25bfb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:14:39 compute-0 nova_compute[259627]: 2025-10-14 09:14:39.791 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquired lock "refresh_cache-da40c115-048e-4844-812e-7e65e25bfb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:14:39 compute-0 nova_compute[259627]: 2025-10-14 09:14:39.791 2 DEBUG nova.network.neutron [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:14:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 169 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 290 KiB/s wr, 214 op/s
Oct 14 09:14:39 compute-0 nova_compute[259627]: 2025-10-14 09:14:39.984 2 DEBUG nova.compute.manager [req-28071fc5-fd37-429f-9cbe-425fdc66903c req-0a055d11-28f4-4ba6-847b-c9ea1726e4d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Received event network-changed-c2ac8abe-0e61-4769-9529-3b391568e6b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:39 compute-0 nova_compute[259627]: 2025-10-14 09:14:39.984 2 DEBUG nova.compute.manager [req-28071fc5-fd37-429f-9cbe-425fdc66903c req-0a055d11-28f4-4ba6-847b-c9ea1726e4d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Refreshing instance network info cache due to event network-changed-c2ac8abe-0e61-4769-9529-3b391568e6b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:14:39 compute-0 nova_compute[259627]: 2025-10-14 09:14:39.985 2 DEBUG oslo_concurrency.lockutils [req-28071fc5-fd37-429f-9cbe-425fdc66903c req-0a055d11-28f4-4ba6-847b-c9ea1726e4d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-da40c115-048e-4844-812e-7e65e25bfb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:14:40 compute-0 nova_compute[259627]: 2025-10-14 09:14:40.312 2 DEBUG nova.network.neutron [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:14:41 compute-0 ceph-mon[74249]: pgmap v1812: 305 pgs: 305 active+clean; 169 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 290 KiB/s wr, 214 op/s
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.383 2 DEBUG nova.network.neutron [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Updating instance_info_cache with network_info: [{"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.418 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Releasing lock "refresh_cache-da40c115-048e-4844-812e-7e65e25bfb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.418 2 DEBUG nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Instance network_info: |[{"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.418 2 DEBUG oslo_concurrency.lockutils [req-28071fc5-fd37-429f-9cbe-425fdc66903c req-0a055d11-28f4-4ba6-847b-c9ea1726e4d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-da40c115-048e-4844-812e-7e65e25bfb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.418 2 DEBUG nova.network.neutron [req-28071fc5-fd37-429f-9cbe-425fdc66903c req-0a055d11-28f4-4ba6-847b-c9ea1726e4d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Refreshing network info cache for port c2ac8abe-0e61-4769-9529-3b391568e6b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.421 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Start _get_guest_xml network_info=[{"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.425 2 WARNING nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.435 2 DEBUG nova.virt.libvirt.host [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.435 2 DEBUG nova.virt.libvirt.host [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.442 2 DEBUG nova.virt.libvirt.host [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.442 2 DEBUG nova.virt.libvirt.host [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.443 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.443 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.443 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.443 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.444 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.444 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.444 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.444 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.444 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.445 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.445 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.445 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.448 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:14:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3063771092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.862 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.882 2 DEBUG nova.storage.rbd_utils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image da40c115-048e-4844-812e-7e65e25bfb3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:41 compute-0 nova_compute[259627]: 2025-10-14 09:14:41.886 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 246 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 343 op/s
Oct 14 09:14:42 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3063771092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:14:42 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1741655201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.315 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.316 2 DEBUG nova.virt.libvirt.vif [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1896126591',display_name='tempest-ServersTestJSON-server-1896126591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1896126591',id=100,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-95sxw262',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:38Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=da40c115-048e-4844-812e-7e65e25bfb3f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.317 2 DEBUG nova.network.os_vif_util [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.317 2 DEBUG nova.network.os_vif_util [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:70:22,bridge_name='br-int',has_traffic_filtering=True,id=c2ac8abe-0e61-4769-9529-3b391568e6b9,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2ac8abe-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.318 2 DEBUG nova.objects.instance [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'pci_devices' on Instance uuid da40c115-048e-4844-812e-7e65e25bfb3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.342 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:14:42 compute-0 nova_compute[259627]:   <uuid>da40c115-048e-4844-812e-7e65e25bfb3f</uuid>
Oct 14 09:14:42 compute-0 nova_compute[259627]:   <name>instance-00000064</name>
Oct 14 09:14:42 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:14:42 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:14:42 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersTestJSON-server-1896126591</nova:name>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:14:41</nova:creationTime>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:14:42 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:14:42 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:14:42 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:14:42 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:14:42 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:14:42 compute-0 nova_compute[259627]:         <nova:user uuid="a287ef08fc5c4f218bf06cd2c7ed021e">tempest-ServersTestJSON-2060951674-project-member</nova:user>
Oct 14 09:14:42 compute-0 nova_compute[259627]:         <nova:project uuid="0a080fae2f3c4e39a6cca225203f5ec6">tempest-ServersTestJSON-2060951674</nova:project>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:14:42 compute-0 nova_compute[259627]:         <nova:port uuid="c2ac8abe-0e61-4769-9529-3b391568e6b9">
Oct 14 09:14:42 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:14:42 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:14:42 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <system>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <entry name="serial">da40c115-048e-4844-812e-7e65e25bfb3f</entry>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <entry name="uuid">da40c115-048e-4844-812e-7e65e25bfb3f</entry>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     </system>
Oct 14 09:14:42 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:14:42 compute-0 nova_compute[259627]:   <os>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:   </os>
Oct 14 09:14:42 compute-0 nova_compute[259627]:   <features>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:   </features>
Oct 14 09:14:42 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:14:42 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:14:42 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/da40c115-048e-4844-812e-7e65e25bfb3f_disk">
Oct 14 09:14:42 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       </source>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:14:42 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/da40c115-048e-4844-812e-7e65e25bfb3f_disk.config">
Oct 14 09:14:42 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       </source>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:14:42 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:fa:70:22"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <target dev="tapc2ac8abe-0e"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f/console.log" append="off"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <video>
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     </video>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:14:42 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:14:42 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:14:42 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:14:42 compute-0 nova_compute[259627]: </domain>
Oct 14 09:14:42 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.342 2 DEBUG nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Preparing to wait for external event network-vif-plugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.342 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.343 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.343 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.343 2 DEBUG nova.virt.libvirt.vif [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1896126591',display_name='tempest-ServersTestJSON-server-1896126591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1896126591',id=100,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-95sxw262',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:38Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=da40c115-048e-4844-812e-7e65e25bfb3f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.344 2 DEBUG nova.network.os_vif_util [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.344 2 DEBUG nova.network.os_vif_util [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:70:22,bridge_name='br-int',has_traffic_filtering=True,id=c2ac8abe-0e61-4769-9529-3b391568e6b9,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2ac8abe-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.345 2 DEBUG os_vif [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:70:22,bridge_name='br-int',has_traffic_filtering=True,id=c2ac8abe-0e61-4769-9529-3b391568e6b9,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2ac8abe-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.345 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.348 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2ac8abe-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.348 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc2ac8abe-0e, col_values=(('external_ids', {'iface-id': 'c2ac8abe-0e61-4769-9529-3b391568e6b9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fa:70:22', 'vm-uuid': 'da40c115-048e-4844-812e-7e65e25bfb3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:42 compute-0 NetworkManager[44885]: <info>  [1760433282.3507] manager: (tapc2ac8abe-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.356 2 INFO os_vif [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:70:22,bridge_name='br-int',has_traffic_filtering=True,id=c2ac8abe-0e61-4769-9529-3b391568e6b9,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2ac8abe-0e')
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.408 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.408 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.409 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No VIF found with MAC fa:16:3e:fa:70:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.409 2 INFO nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Using config drive
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.429 2 DEBUG nova.storage.rbd_utils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image da40c115-048e-4844-812e-7e65e25bfb3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.888 2 INFO nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Creating config drive at /var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f/disk.config
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.896 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb2bkbecw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.942 2 DEBUG nova.network.neutron [req-28071fc5-fd37-429f-9cbe-425fdc66903c req-0a055d11-28f4-4ba6-847b-c9ea1726e4d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Updated VIF entry in instance network info cache for port c2ac8abe-0e61-4769-9529-3b391568e6b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.943 2 DEBUG nova.network.neutron [req-28071fc5-fd37-429f-9cbe-425fdc66903c req-0a055d11-28f4-4ba6-847b-c9ea1726e4d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Updating instance_info_cache with network_info: [{"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.959 2 DEBUG oslo_concurrency.lockutils [req-28071fc5-fd37-429f-9cbe-425fdc66903c req-0a055d11-28f4-4ba6-847b-c9ea1726e4d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-da40c115-048e-4844-812e-7e65e25bfb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:42 compute-0 nova_compute[259627]: 2025-10-14 09:14:42.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:14:43 compute-0 nova_compute[259627]: 2025-10-14 09:14:43.051 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb2bkbecw" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:43 compute-0 ceph-mon[74249]: pgmap v1813: 305 pgs: 305 active+clean; 246 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 343 op/s
Oct 14 09:14:43 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1741655201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:43 compute-0 nova_compute[259627]: 2025-10-14 09:14:43.087 2 DEBUG nova.storage.rbd_utils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image da40c115-048e-4844-812e-7e65e25bfb3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:43 compute-0 nova_compute[259627]: 2025-10-14 09:14:43.090 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f/disk.config da40c115-048e-4844-812e-7e65e25bfb3f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018610456550758612 of space, bias 1.0, pg target 0.5583136965227583 quantized to 32 (current 32)
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:14:43 compute-0 nova_compute[259627]: 2025-10-14 09:14:43.298 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f/disk.config da40c115-048e-4844-812e-7e65e25bfb3f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:43 compute-0 nova_compute[259627]: 2025-10-14 09:14:43.299 2 INFO nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Deleting local config drive /var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f/disk.config because it was imported into RBD.
Oct 14 09:14:43 compute-0 kernel: tapc2ac8abe-0e: entered promiscuous mode
Oct 14 09:14:43 compute-0 NetworkManager[44885]: <info>  [1760433283.3424] manager: (tapc2ac8abe-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/432)
Oct 14 09:14:43 compute-0 ovn_controller[152662]: 2025-10-14T09:14:43Z|01054|binding|INFO|Claiming lport c2ac8abe-0e61-4769-9529-3b391568e6b9 for this chassis.
Oct 14 09:14:43 compute-0 ovn_controller[152662]: 2025-10-14T09:14:43Z|01055|binding|INFO|c2ac8abe-0e61-4769-9529-3b391568e6b9: Claiming fa:16:3e:fa:70:22 10.100.0.3
Oct 14 09:14:43 compute-0 nova_compute[259627]: 2025-10-14 09:14:43.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.353 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:70:22 10.100.0.3'], port_security=['fa:16:3e:fa:70:22 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'da40c115-048e-4844-812e-7e65e25bfb3f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c2ac8abe-0e61-4769-9529-3b391568e6b9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.354 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c2ac8abe-0e61-4769-9529-3b391568e6b9 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e bound to our chassis
Oct 14 09:14:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.355 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e
Oct 14 09:14:43 compute-0 ovn_controller[152662]: 2025-10-14T09:14:43Z|01056|binding|INFO|Setting lport c2ac8abe-0e61-4769-9529-3b391568e6b9 ovn-installed in OVS
Oct 14 09:14:43 compute-0 ovn_controller[152662]: 2025-10-14T09:14:43Z|01057|binding|INFO|Setting lport c2ac8abe-0e61-4769-9529-3b391568e6b9 up in Southbound
Oct 14 09:14:43 compute-0 nova_compute[259627]: 2025-10-14 09:14:43.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.374 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a07d26-8721-4e57-b5cc-dfdefbae537d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:43 compute-0 nova_compute[259627]: 2025-10-14 09:14:43.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:43 compute-0 systemd-udevd[358429]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:14:43 compute-0 systemd-machined[214636]: New machine qemu-125-instance-00000064.
Oct 14 09:14:43 compute-0 NetworkManager[44885]: <info>  [1760433283.3938] device (tapc2ac8abe-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:14:43 compute-0 NetworkManager[44885]: <info>  [1760433283.3951] device (tapc2ac8abe-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:14:43 compute-0 systemd[1]: Started Virtual Machine qemu-125-instance-00000064.
Oct 14 09:14:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.420 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e84414-eba8-47bf-b53a-22e5fdc68745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.424 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2adbc5c8-14b1-4624-89fd-a042a249cca5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.450 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5b0d3674-0dcf-4847-83c8-8a4985c1802a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:43 compute-0 nova_compute[259627]: 2025-10-14 09:14:43.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.469 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[67346800-02d2-4020-9c53-e327a4256ea7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358442, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.485 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[69d58679-71df-4994-9a00-664e78874e50]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358443, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358443, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.487 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:43 compute-0 nova_compute[259627]: 2025-10-14 09:14:43.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:43 compute-0 nova_compute[259627]: 2025-10-14 09:14:43.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.490 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.491 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.491 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.492 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 246 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 409 KiB/s rd, 3.9 MiB/s wr, 193 op/s
Oct 14 09:14:44 compute-0 nova_compute[259627]: 2025-10-14 09:14:44.290 2 INFO nova.compute.manager [None req-b239f489-a952-4693-b750-ed4aad04c25b e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Get console output
Oct 14 09:14:44 compute-0 nova_compute[259627]: 2025-10-14 09:14:44.296 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:14:44 compute-0 nova_compute[259627]: 2025-10-14 09:14:44.383 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:44 compute-0 nova_compute[259627]: 2025-10-14 09:14:44.383 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:44 compute-0 nova_compute[259627]: 2025-10-14 09:14:44.403 2 DEBUG nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:14:44 compute-0 nova_compute[259627]: 2025-10-14 09:14:44.488 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:44 compute-0 nova_compute[259627]: 2025-10-14 09:14:44.489 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:44 compute-0 nova_compute[259627]: 2025-10-14 09:14:44.498 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:14:44 compute-0 nova_compute[259627]: 2025-10-14 09:14:44.499 2 INFO nova.compute.claims [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:14:44 compute-0 nova_compute[259627]: 2025-10-14 09:14:44.704 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:44 compute-0 nova_compute[259627]: 2025-10-14 09:14:44.754 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433284.7283974, da40c115-048e-4844-812e-7e65e25bfb3f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:44 compute-0 nova_compute[259627]: 2025-10-14 09:14:44.755 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] VM Started (Lifecycle Event)
Oct 14 09:14:44 compute-0 nova_compute[259627]: 2025-10-14 09:14:44.782 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:44 compute-0 nova_compute[259627]: 2025-10-14 09:14:44.788 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433284.7291455, da40c115-048e-4844-812e-7e65e25bfb3f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:44 compute-0 nova_compute[259627]: 2025-10-14 09:14:44.789 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] VM Paused (Lifecycle Event)
Oct 14 09:14:44 compute-0 nova_compute[259627]: 2025-10-14 09:14:44.812 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:44 compute-0 nova_compute[259627]: 2025-10-14 09:14:44.817 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:14:44 compute-0 nova_compute[259627]: 2025-10-14 09:14:44.837 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:14:45 compute-0 ceph-mon[74249]: pgmap v1814: 305 pgs: 305 active+clean; 246 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 409 KiB/s rd, 3.9 MiB/s wr, 193 op/s
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.088 2 DEBUG nova.compute.manager [req-f33d7db3-131d-4b99-8f25-f58536363c79 req-5cd5e039-f442-4358-a007-01031b8cd70b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-changed-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.088 2 DEBUG nova.compute.manager [req-f33d7db3-131d-4b99-8f25-f58536363c79 req-5cd5e039-f442-4358-a007-01031b8cd70b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Refreshing instance network info cache due to event network-changed-7103ce4a-69e8-454b-aed3-251ecb109232. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.089 2 DEBUG oslo_concurrency.lockutils [req-f33d7db3-131d-4b99-8f25-f58536363c79 req-5cd5e039-f442-4358-a007-01031b8cd70b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.089 2 DEBUG oslo_concurrency.lockutils [req-f33d7db3-131d-4b99-8f25-f58536363c79 req-5cd5e039-f442-4358-a007-01031b8cd70b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.089 2 DEBUG nova.network.neutron [req-f33d7db3-131d-4b99-8f25-f58536363c79 req-5cd5e039-f442-4358-a007-01031b8cd70b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Refreshing network info cache for port 7103ce4a-69e8-454b-aed3-251ecb109232 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:14:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:14:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3829793887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.167 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.177 2 DEBUG nova.compute.provider_tree [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.203 2 DEBUG nova.scheduler.client.report [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.210 2 DEBUG oslo_concurrency.lockutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.211 2 DEBUG oslo_concurrency.lockutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.211 2 DEBUG oslo_concurrency.lockutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.211 2 DEBUG oslo_concurrency.lockutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.212 2 DEBUG oslo_concurrency.lockutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.214 2 INFO nova.compute.manager [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Terminating instance
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.216 2 DEBUG nova.compute.manager [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.244 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.245 2 DEBUG nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:14:45 compute-0 kernel: tap7103ce4a-69 (unregistering): left promiscuous mode
Oct 14 09:14:45 compute-0 NetworkManager[44885]: <info>  [1760433285.2867] device (tap7103ce4a-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.287 2 DEBUG nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.287 2 DEBUG nova.network.neutron [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:14:45 compute-0 ovn_controller[152662]: 2025-10-14T09:14:45Z|01058|binding|INFO|Releasing lport 7103ce4a-69e8-454b-aed3-251ecb109232 from this chassis (sb_readonly=0)
Oct 14 09:14:45 compute-0 ovn_controller[152662]: 2025-10-14T09:14:45Z|01059|binding|INFO|Setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 down in Southbound
Oct 14 09:14:45 compute-0 ovn_controller[152662]: 2025-10-14T09:14:45Z|01060|binding|INFO|Removing iface tap7103ce4a-69 ovn-installed in OVS
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.307 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:3c:de 10.100.0.5'], port_security=['fa:16:3e:9d:3c:de 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b595141f-123e-4250-bfec-888d866fd0c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-563aa000-400f-4c19-ba83-9377cc50d29f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '97c0b0e2-9440-40e7-a61b-2eb79520e4e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e572d4-df42-4a93-ac49-d93e0906a5ee, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7103ce4a-69e8-454b-aed3-251ecb109232) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.308 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7103ce4a-69e8-454b-aed3-251ecb109232 in datapath 563aa000-400f-4c19-ba83-9377cc50d29f unbound from our chassis
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.310 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 563aa000-400f-4c19-ba83-9377cc50d29f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.309 2 INFO nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.310 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[24ac04b8-aa29-42ad-bf31-a365e02f35c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.311 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f namespace which is not needed anymore
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.333 2 DEBUG nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:14:45 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000060.scope: Deactivated successfully.
Oct 14 09:14:45 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000060.scope: Consumed 13.468s CPU time.
Oct 14 09:14:45 compute-0 systemd-machined[214636]: Machine qemu-123-instance-00000060 terminated.
Oct 14 09:14:45 compute-0 kernel: tap7103ce4a-69: entered promiscuous mode
Oct 14 09:14:45 compute-0 NetworkManager[44885]: <info>  [1760433285.4466] manager: (tap7103ce4a-69): new Tun device (/org/freedesktop/NetworkManager/Devices/433)
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:45 compute-0 kernel: tap7103ce4a-69 (unregistering): left promiscuous mode
Oct 14 09:14:45 compute-0 ovn_controller[152662]: 2025-10-14T09:14:45Z|01061|binding|INFO|Claiming lport 7103ce4a-69e8-454b-aed3-251ecb109232 for this chassis.
Oct 14 09:14:45 compute-0 ovn_controller[152662]: 2025-10-14T09:14:45Z|01062|binding|INFO|7103ce4a-69e8-454b-aed3-251ecb109232: Claiming fa:16:3e:9d:3c:de 10.100.0.5
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.456 2 DEBUG nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.458 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:3c:de 10.100.0.5'], port_security=['fa:16:3e:9d:3c:de 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b595141f-123e-4250-bfec-888d866fd0c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-563aa000-400f-4c19-ba83-9377cc50d29f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '97c0b0e2-9440-40e7-a61b-2eb79520e4e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e572d4-df42-4a93-ac49-d93e0906a5ee, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7103ce4a-69e8-454b-aed3-251ecb109232) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.458 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.458 2 INFO nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Creating image(s)
Oct 14 09:14:45 compute-0 ovn_controller[152662]: 2025-10-14T09:14:45Z|01063|binding|INFO|Setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 ovn-installed in OVS
Oct 14 09:14:45 compute-0 ovn_controller[152662]: 2025-10-14T09:14:45Z|01064|binding|INFO|Setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 up in Southbound
Oct 14 09:14:45 compute-0 ovn_controller[152662]: 2025-10-14T09:14:45Z|01065|binding|INFO|Releasing lport 7103ce4a-69e8-454b-aed3-251ecb109232 from this chassis (sb_readonly=1)
Oct 14 09:14:45 compute-0 ovn_controller[152662]: 2025-10-14T09:14:45Z|01066|if_status|INFO|Dropped 5 log messages in last 181 seconds (most recently, 175 seconds ago) due to excessive rate
Oct 14 09:14:45 compute-0 ovn_controller[152662]: 2025-10-14T09:14:45Z|01067|if_status|INFO|Not setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 down as sb is readonly
Oct 14 09:14:45 compute-0 ovn_controller[152662]: 2025-10-14T09:14:45Z|01068|binding|INFO|Removing iface tap7103ce4a-69 ovn-installed in OVS
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.498 2 DEBUG nova.storage.rbd_utils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:45 compute-0 ovn_controller[152662]: 2025-10-14T09:14:45Z|01069|binding|INFO|Releasing lport 7103ce4a-69e8-454b-aed3-251ecb109232 from this chassis (sb_readonly=0)
Oct 14 09:14:45 compute-0 ovn_controller[152662]: 2025-10-14T09:14:45Z|01070|binding|INFO|Setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 down in Southbound
Oct 14 09:14:45 compute-0 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[357701]: [NOTICE]   (357705) : haproxy version is 2.8.14-c23fe91
Oct 14 09:14:45 compute-0 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[357701]: [NOTICE]   (357705) : path to executable is /usr/sbin/haproxy
Oct 14 09:14:45 compute-0 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[357701]: [WARNING]  (357705) : Exiting Master process...
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.509 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:3c:de 10.100.0.5'], port_security=['fa:16:3e:9d:3c:de 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b595141f-123e-4250-bfec-888d866fd0c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-563aa000-400f-4c19-ba83-9377cc50d29f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '97c0b0e2-9440-40e7-a61b-2eb79520e4e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e572d4-df42-4a93-ac49-d93e0906a5ee, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7103ce4a-69e8-454b-aed3-251ecb109232) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:45 compute-0 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[357701]: [ALERT]    (357705) : Current worker (357707) exited with code 143 (Terminated)
Oct 14 09:14:45 compute-0 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[357701]: [WARNING]  (357705) : All workers exited. Exiting... (0)
Oct 14 09:14:45 compute-0 systemd[1]: libpod-002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2.scope: Deactivated successfully.
Oct 14 09:14:45 compute-0 podman[358529]: 2025-10-14 09:14:45.52182249 +0000 UTC m=+0.086527274 container died 002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.543 2 DEBUG nova.storage.rbd_utils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2-userdata-shm.mount: Deactivated successfully.
Oct 14 09:14:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-458b2af1b6a32b5e1572566cdbc152a3e460f3972a3a382274f2a47523b42e89-merged.mount: Deactivated successfully.
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.574 2 DEBUG nova.storage.rbd_utils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.579 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:45 compute-0 podman[358529]: 2025-10-14 09:14:45.580195439 +0000 UTC m=+0.144900223 container cleanup 002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:14:45 compute-0 systemd[1]: libpod-conmon-002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2.scope: Deactivated successfully.
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.631 2 INFO nova.virt.libvirt.driver [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance destroyed successfully.
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.632 2 DEBUG nova.objects.instance [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'resources' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:45 compute-0 podman[358616]: 2025-10-14 09:14:45.64841835 +0000 UTC m=+0.044876137 container remove 002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.649 2 DEBUG nova.virt.libvirt.vif [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1399788817',display_name='tempest-TestNetworkAdvancedServerOps-server-1399788817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1399788817',id=96,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKu+EdNTbLcJJySiqS/IEqa/tEPsTFHNnnibtN2r3Vh53iyUeSuOyPo0wb3WDr0n5pX4AT4Pz90bVLkNIFNLc4XvdDhQRiV0WmHkT7tU8LMbcG0FpGnJS7D9bMgie0zW1g==',key_name='tempest-TestNetworkAdvancedServerOps-1640438953',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:14:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-qh9caax0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:14:24Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=b595141f-123e-4250-bfec-888d866fd0c6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.650 2 DEBUG nova.network.os_vif_util [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.651 2 DEBUG nova.network.os_vif_util [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.651 2 DEBUG os_vif [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.654 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7103ce4a-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.660 2 INFO os_vif [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69')
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.660 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc636df-be5f-413c-a2a1-e43b1196e138]: (4, ('Tue Oct 14 09:14:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f (002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2)\n002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2\nTue Oct 14 09:14:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f (002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2)\n002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.662 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[838594cf-2e9d-474e-8138-45856123dd42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.663 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap563aa000-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:45 compute-0 kernel: tap563aa000-40: left promiscuous mode
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.686 2 DEBUG nova.policy [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '648aaa75d8974d439d8ebe331c3d6568', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4eae338e7d54d159033a20bc7460935', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.687 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c227651f-7d23-40ad-8047-a2efdb0d4e01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.691 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.692 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.693 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.693 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.716 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[462b6065-e959-449e-a8b7-311a97bd478c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.718 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[266d1ee0-831f-4c6b-a667-c6e69603b1f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.729 2 DEBUG nova.storage.rbd_utils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:45 compute-0 nova_compute[259627]: 2025-10-14 09:14:45.733 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e1aea504-3ecf-4273-a867-66afb39de726_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.736 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f74f320e-d0a6-4926-8707-79ce1e8223e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704213, 'reachable_time': 25608, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358665, 'error': None, 'target': 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d563aa000\x2d400f\x2d4c19\x2dba83\x2d9377cc50d29f.mount: Deactivated successfully.
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.741 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.741 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[b1cb660d-bae7-4c1a-bf56-6d1b1f7e0f35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.743 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7103ce4a-69e8-454b-aed3-251ecb109232 in datapath 563aa000-400f-4c19-ba83-9377cc50d29f unbound from our chassis
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.745 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 563aa000-400f-4c19-ba83-9377cc50d29f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.746 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b4206cc4-742b-46c7-9d10-e489a75164d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.747 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7103ce4a-69e8-454b-aed3-251ecb109232 in datapath 563aa000-400f-4c19-ba83-9377cc50d29f unbound from our chassis
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.748 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 563aa000-400f-4c19-ba83-9377cc50d29f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:14:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.748 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8454cf2d-6241-42f2-8bff-11e0224876f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 246 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 3.9 MiB/s wr, 215 op/s
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.071 2 DEBUG nova.compute.manager [req-6d5cd04f-fe23-43fd-b8b9-00c4123f992a req-424d326c-d466-4acf-b744-808d9a140f05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-unplugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.071 2 DEBUG oslo_concurrency.lockutils [req-6d5cd04f-fe23-43fd-b8b9-00c4123f992a req-424d326c-d466-4acf-b744-808d9a140f05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.072 2 DEBUG oslo_concurrency.lockutils [req-6d5cd04f-fe23-43fd-b8b9-00c4123f992a req-424d326c-d466-4acf-b744-808d9a140f05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.072 2 DEBUG oslo_concurrency.lockutils [req-6d5cd04f-fe23-43fd-b8b9-00c4123f992a req-424d326c-d466-4acf-b744-808d9a140f05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.073 2 DEBUG nova.compute.manager [req-6d5cd04f-fe23-43fd-b8b9-00c4123f992a req-424d326c-d466-4acf-b744-808d9a140f05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-unplugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.073 2 DEBUG nova.compute.manager [req-6d5cd04f-fe23-43fd-b8b9-00c4123f992a req-424d326c-d466-4acf-b744-808d9a140f05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-unplugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.075 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e1aea504-3ecf-4273-a867-66afb39de726_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:46 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3829793887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.140 2 DEBUG nova.storage.rbd_utils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] resizing rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.220 2 DEBUG nova.objects.instance [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'migration_context' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.239 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.239 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Ensure instance console log exists: /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.241 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.241 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.241 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.316 2 INFO nova.virt.libvirt.driver [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Deleting instance files /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6_del
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.317 2 INFO nova.virt.libvirt.driver [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Deletion of /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6_del complete
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.320 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433271.3119168, 17db7f38-479a-4d56-9424-7f5ab695ccea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.320 2 INFO nova.compute.manager [-] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] VM Stopped (Lifecycle Event)
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.364 2 DEBUG nova.compute.manager [None req-b821c27e-6f1c-485b-8ca6-75bdfd225c1a - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.381 2 INFO nova.compute.manager [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Took 1.16 seconds to destroy the instance on the hypervisor.
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.382 2 DEBUG oslo.service.loopingcall [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.382 2 DEBUG nova.compute.manager [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.382 2 DEBUG nova.network.neutron [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.459 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433271.4584494, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.460 2 INFO nova.compute.manager [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Stopped (Lifecycle Event)
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.484 2 DEBUG nova.compute.manager [None req-4c645e01-d5e3-4fa8-93c0-12866a61f252 - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:46 compute-0 nova_compute[259627]: 2025-10-14 09:14:46.783 2 DEBUG nova.network.neutron [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Successfully created port: 175a9914-0068-4aeb-b4b6-d501212d3374 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.041 2 DEBUG nova.network.neutron [req-f33d7db3-131d-4b99-8f25-f58536363c79 req-5cd5e039-f442-4358-a007-01031b8cd70b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Updated VIF entry in instance network info cache for port 7103ce4a-69e8-454b-aed3-251ecb109232. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.042 2 DEBUG nova.network.neutron [req-f33d7db3-131d-4b99-8f25-f58536363c79 req-5cd5e039-f442-4358-a007-01031b8cd70b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Updating instance_info_cache with network_info: [{"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.065 2 DEBUG oslo_concurrency.lockutils [req-f33d7db3-131d-4b99-8f25-f58536363c79 req-5cd5e039-f442-4358-a007-01031b8cd70b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.092 2 DEBUG nova.network.neutron [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:47 compute-0 ceph-mon[74249]: pgmap v1815: 305 pgs: 305 active+clean; 246 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 3.9 MiB/s wr, 215 op/s
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.114 2 INFO nova.compute.manager [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Took 0.73 seconds to deallocate network for instance.
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.159 2 DEBUG oslo_concurrency.lockutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.159 2 DEBUG oslo_concurrency.lockutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.276 2 DEBUG oslo_concurrency.processutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.526 2 DEBUG nova.compute.manager [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Received event network-vif-plugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.527 2 DEBUG oslo_concurrency.lockutils [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.528 2 DEBUG oslo_concurrency.lockutils [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.528 2 DEBUG oslo_concurrency.lockutils [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.528 2 DEBUG nova.compute.manager [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Processing event network-vif-plugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.529 2 DEBUG nova.compute.manager [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Received event network-vif-plugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.529 2 DEBUG oslo_concurrency.lockutils [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.530 2 DEBUG oslo_concurrency.lockutils [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.530 2 DEBUG oslo_concurrency.lockutils [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.531 2 DEBUG nova.compute.manager [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] No waiting events found dispatching network-vif-plugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.531 2 WARNING nova.compute.manager [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Received unexpected event network-vif-plugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 for instance with vm_state building and task_state spawning.
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.531 2 DEBUG nova.compute.manager [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-deleted-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.533 2 DEBUG nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.542 2 DEBUG nova.network.neutron [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Successfully updated port: 175a9914-0068-4aeb-b4b6-d501212d3374 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.546 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433287.5458107, da40c115-048e-4844-812e-7e65e25bfb3f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.546 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] VM Resumed (Lifecycle Event)
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.549 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.554 2 INFO nova.virt.libvirt.driver [-] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Instance spawned successfully.
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.554 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.560 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.561 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquired lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.561 2 DEBUG nova.network.neutron [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.583 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.593 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.598 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.599 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.599 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.600 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.600 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.601 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.636 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.673 2 INFO nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Took 9.30 seconds to spawn the instance on the hypervisor.
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.673 2 DEBUG nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.745 2 INFO nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Took 10.31 seconds to build instance.
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.764 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.765 2 DEBUG nova.network.neutron [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:14:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:14:47 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/71581542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.833 2 DEBUG oslo_concurrency.processutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.841 2 DEBUG nova.compute.provider_tree [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:14:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.862 2 DEBUG nova.scheduler.client.report [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.888 2 DEBUG oslo_concurrency.lockutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:47 compute-0 nova_compute[259627]: 2025-10-14 09:14:47.933 2 INFO nova.scheduler.client.report [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Deleted allocations for instance b595141f-123e-4250-bfec-888d866fd0c6
Oct 14 09:14:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 246 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 371 KiB/s rd, 3.7 MiB/s wr, 150 op/s
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.004 2 DEBUG oslo_concurrency.lockutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:48 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/71581542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.180 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.180 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.181 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.181 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.181 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.181 2 WARNING nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state deleted and task_state None.
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.182 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.182 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.182 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.182 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.183 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.183 2 WARNING nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state deleted and task_state None.
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.183 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.183 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.183 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.184 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.184 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.184 2 WARNING nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state deleted and task_state None.
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.184 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-unplugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.185 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.185 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.185 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.185 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-unplugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.185 2 WARNING nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-unplugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state deleted and task_state None.
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.186 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.186 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.186 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.186 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.186 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.187 2 WARNING nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state deleted and task_state None.
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.187 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.187 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing instance network info cache due to event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.187 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:48 compute-0 podman[358787]: 2025-10-14 09:14:48.633910382 +0000 UTC m=+0.052511296 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 14 09:14:48 compute-0 podman[358786]: 2025-10-14 09:14:48.637024358 +0000 UTC m=+0.055901218 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.866 2 DEBUG nova.network.neutron [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updating instance_info_cache with network_info: [{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.889 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Releasing lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.890 2 DEBUG nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance network_info: |[{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.890 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.891 2 DEBUG nova.network.neutron [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.896 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Start _get_guest_xml network_info=[{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.903 2 WARNING nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.908 2 DEBUG nova.virt.libvirt.host [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.908 2 DEBUG nova.virt.libvirt.host [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.912 2 DEBUG nova.virt.libvirt.host [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.913 2 DEBUG nova.virt.libvirt.host [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.914 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.915 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.916 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.916 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.917 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.917 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.918 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.918 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.918 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.919 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.919 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.920 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:14:48 compute-0 nova_compute[259627]: 2025-10-14 09:14:48.925 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:49 compute-0 ceph-mon[74249]: pgmap v1816: 305 pgs: 305 active+clean; 246 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 371 KiB/s rd, 3.7 MiB/s wr, 150 op/s
Oct 14 09:14:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:14:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2259066066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.356 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.374 2 DEBUG nova.storage.rbd_utils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.378 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:14:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/355761698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.790 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.794 2 DEBUG nova.virt.libvirt.vif [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1533575249',display_name='tempest-ServerRescueTestJSONUnderV235-server-1533575249',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1533575249',id=101,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4eae338e7d54d159033a20bc7460935',ramdisk_id='',reservation_id='r-tm6flkse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2037260230',owner_user_name='te
mpest-ServerRescueTestJSONUnderV235-2037260230-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:45Z,user_data=None,user_id='648aaa75d8974d439d8ebe331c3d6568',uuid=e1aea504-3ecf-4273-a867-66afb39de726,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.795 2 DEBUG nova.network.os_vif_util [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Converting VIF {"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.798 2 DEBUG nova.network.os_vif_util [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:d8:7e,bridge_name='br-int',has_traffic_filtering=True,id=175a9914-0068-4aeb-b4b6-d501212d3374,network=Network(8a53d982-8cf9-44ff-9b16-dda957aa7729),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap175a9914-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.801 2 DEBUG nova.objects.instance [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'pci_devices' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.822 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:14:49 compute-0 nova_compute[259627]:   <uuid>e1aea504-3ecf-4273-a867-66afb39de726</uuid>
Oct 14 09:14:49 compute-0 nova_compute[259627]:   <name>instance-00000065</name>
Oct 14 09:14:49 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:14:49 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:14:49 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1533575249</nova:name>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:14:48</nova:creationTime>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:14:49 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:14:49 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:14:49 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:14:49 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:14:49 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:14:49 compute-0 nova_compute[259627]:         <nova:user uuid="648aaa75d8974d439d8ebe331c3d6568">tempest-ServerRescueTestJSONUnderV235-2037260230-project-member</nova:user>
Oct 14 09:14:49 compute-0 nova_compute[259627]:         <nova:project uuid="c4eae338e7d54d159033a20bc7460935">tempest-ServerRescueTestJSONUnderV235-2037260230</nova:project>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:14:49 compute-0 nova_compute[259627]:         <nova:port uuid="175a9914-0068-4aeb-b4b6-d501212d3374">
Oct 14 09:14:49 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:14:49 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:14:49 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <system>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <entry name="serial">e1aea504-3ecf-4273-a867-66afb39de726</entry>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <entry name="uuid">e1aea504-3ecf-4273-a867-66afb39de726</entry>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     </system>
Oct 14 09:14:49 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:14:49 compute-0 nova_compute[259627]:   <os>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:   </os>
Oct 14 09:14:49 compute-0 nova_compute[259627]:   <features>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:   </features>
Oct 14 09:14:49 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:14:49 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:14:49 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e1aea504-3ecf-4273-a867-66afb39de726_disk">
Oct 14 09:14:49 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       </source>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:14:49 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e1aea504-3ecf-4273-a867-66afb39de726_disk.config">
Oct 14 09:14:49 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       </source>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:14:49 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:6a:d8:7e"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <target dev="tap175a9914-00"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/console.log" append="off"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <video>
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     </video>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:14:49 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:14:49 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:14:49 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:14:49 compute-0 nova_compute[259627]: </domain>
Oct 14 09:14:49 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.834 2 DEBUG nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Preparing to wait for external event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.834 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.835 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.835 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.836 2 DEBUG nova.virt.libvirt.vif [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1533575249',display_name='tempest-ServerRescueTestJSONUnderV235-server-1533575249',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1533575249',id=101,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4eae338e7d54d159033a20bc7460935',ramdisk_id='',reservation_id='r-tm6flkse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2037260230',owner_user_name='tempest-ServerRescueTestJSONUnderV235-2037260230-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:45Z,user_data=None,user_id='648aaa75d8974d439d8ebe331c3d6568',uuid=e1aea504-3ecf-4273-a867-66afb39de726,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.837 2 DEBUG nova.network.os_vif_util [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Converting VIF {"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.838 2 DEBUG nova.network.os_vif_util [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:d8:7e,bridge_name='br-int',has_traffic_filtering=True,id=175a9914-0068-4aeb-b4b6-d501212d3374,network=Network(8a53d982-8cf9-44ff-9b16-dda957aa7729),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap175a9914-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.838 2 DEBUG os_vif [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:d8:7e,bridge_name='br-int',has_traffic_filtering=True,id=175a9914-0068-4aeb-b4b6-d501212d3374,network=Network(8a53d982-8cf9-44ff-9b16-dda957aa7729),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap175a9914-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.842 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.843 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.848 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap175a9914-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.849 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap175a9914-00, col_values=(('external_ids', {'iface-id': '175a9914-0068-4aeb-b4b6-d501212d3374', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:d8:7e', 'vm-uuid': 'e1aea504-3ecf-4273-a867-66afb39de726'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:49 compute-0 NetworkManager[44885]: <info>  [1760433289.8525] manager: (tap175a9914-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.860 2 INFO os_vif [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:d8:7e,bridge_name='br-int',has_traffic_filtering=True,id=175a9914-0068-4aeb-b4b6-d501212d3374,network=Network(8a53d982-8cf9-44ff-9b16-dda957aa7729),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap175a9914-00')
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.957 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.959 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.960 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] No VIF found with MAC fa:16:3e:6a:d8:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.961 2 INFO nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Using config drive
Oct 14 09:14:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 246 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 371 KiB/s rd, 3.7 MiB/s wr, 150 op/s
Oct 14 09:14:49 compute-0 nova_compute[259627]: 2025-10-14 09:14:49.995 2 DEBUG nova.storage.rbd_utils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2259066066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/355761698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:50 compute-0 nova_compute[259627]: 2025-10-14 09:14:50.411 2 INFO nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Creating config drive at /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config
Oct 14 09:14:50 compute-0 nova_compute[259627]: 2025-10-14 09:14:50.416 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphpjrsr30 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:50 compute-0 nova_compute[259627]: 2025-10-14 09:14:50.567 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphpjrsr30" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:50 compute-0 nova_compute[259627]: 2025-10-14 09:14:50.605 2 DEBUG nova.storage.rbd_utils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:50 compute-0 nova_compute[259627]: 2025-10-14 09:14:50.612 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config e1aea504-3ecf-4273-a867-66afb39de726_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:50 compute-0 nova_compute[259627]: 2025-10-14 09:14:50.809 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config e1aea504-3ecf-4273-a867-66afb39de726_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:50 compute-0 nova_compute[259627]: 2025-10-14 09:14:50.810 2 INFO nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Deleting local config drive /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config because it was imported into RBD.
Oct 14 09:14:50 compute-0 nova_compute[259627]: 2025-10-14 09:14:50.848 2 DEBUG nova.network.neutron [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updated VIF entry in instance network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:14:50 compute-0 nova_compute[259627]: 2025-10-14 09:14:50.849 2 DEBUG nova.network.neutron [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updating instance_info_cache with network_info: [{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:50 compute-0 kernel: tap175a9914-00: entered promiscuous mode
Oct 14 09:14:50 compute-0 NetworkManager[44885]: <info>  [1760433290.8591] manager: (tap175a9914-00): new Tun device (/org/freedesktop/NetworkManager/Devices/435)
Oct 14 09:14:50 compute-0 ovn_controller[152662]: 2025-10-14T09:14:50Z|01071|binding|INFO|Claiming lport 175a9914-0068-4aeb-b4b6-d501212d3374 for this chassis.
Oct 14 09:14:50 compute-0 ovn_controller[152662]: 2025-10-14T09:14:50Z|01072|binding|INFO|175a9914-0068-4aeb-b4b6-d501212d3374: Claiming fa:16:3e:6a:d8:7e 10.100.0.7
Oct 14 09:14:50 compute-0 nova_compute[259627]: 2025-10-14 09:14:50.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:50.868 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:d8:7e 10.100.0.7'], port_security=['fa:16:3e:6a:d8:7e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e1aea504-3ecf-4273-a867-66afb39de726', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a53d982-8cf9-44ff-9b16-dda957aa7729', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4eae338e7d54d159033a20bc7460935', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a0a095f-2308-407d-90d9-6acaa688f93c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab8a6447-c4e7-4ebf-b042-eb7d83ec88dc, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=175a9914-0068-4aeb-b4b6-d501212d3374) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:50 compute-0 nova_compute[259627]: 2025-10-14 09:14:50.871 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:50.870 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 175a9914-0068-4aeb-b4b6-d501212d3374 in datapath 8a53d982-8cf9-44ff-9b16-dda957aa7729 bound to our chassis
Oct 14 09:14:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:50.871 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8a53d982-8cf9-44ff-9b16-dda957aa7729 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:14:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:50.872 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f90433d6-01d9-477a-8c97-8bc500d6e1de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:50 compute-0 ovn_controller[152662]: 2025-10-14T09:14:50Z|01073|binding|INFO|Setting lport 175a9914-0068-4aeb-b4b6-d501212d3374 ovn-installed in OVS
Oct 14 09:14:50 compute-0 ovn_controller[152662]: 2025-10-14T09:14:50Z|01074|binding|INFO|Setting lport 175a9914-0068-4aeb-b4b6-d501212d3374 up in Southbound
Oct 14 09:14:50 compute-0 nova_compute[259627]: 2025-10-14 09:14:50.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:50 compute-0 nova_compute[259627]: 2025-10-14 09:14:50.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:50 compute-0 systemd-machined[214636]: New machine qemu-126-instance-00000065.
Oct 14 09:14:50 compute-0 systemd[1]: Started Virtual Machine qemu-126-instance-00000065.
Oct 14 09:14:50 compute-0 systemd-udevd[358959]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:14:50 compute-0 NetworkManager[44885]: <info>  [1760433290.9660] device (tap175a9914-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:14:50 compute-0 NetworkManager[44885]: <info>  [1760433290.9682] device (tap175a9914-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:14:51 compute-0 ceph-mon[74249]: pgmap v1817: 305 pgs: 305 active+clean; 246 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 371 KiB/s rd, 3.7 MiB/s wr, 150 op/s
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.200 2 DEBUG nova.compute.manager [req-6df7c5d4-5206-451f-92b6-63eee4bee362 req-f349a841-d713-4889-a03c-23aa8dec99ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.202 2 DEBUG oslo_concurrency.lockutils [req-6df7c5d4-5206-451f-92b6-63eee4bee362 req-f349a841-d713-4889-a03c-23aa8dec99ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.202 2 DEBUG oslo_concurrency.lockutils [req-6df7c5d4-5206-451f-92b6-63eee4bee362 req-f349a841-d713-4889-a03c-23aa8dec99ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.202 2 DEBUG oslo_concurrency.lockutils [req-6df7c5d4-5206-451f-92b6-63eee4bee362 req-f349a841-d713-4889-a03c-23aa8dec99ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.203 2 DEBUG nova.compute.manager [req-6df7c5d4-5206-451f-92b6-63eee4bee362 req-f349a841-d713-4889-a03c-23aa8dec99ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Processing event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:14:51 compute-0 ovn_controller[152662]: 2025-10-14T09:14:51Z|01075|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:51 compute-0 ovn_controller[152662]: 2025-10-14T09:14:51Z|01076|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.711 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433291.710773, e1aea504-3ecf-4273-a867-66afb39de726 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.712 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] VM Started (Lifecycle Event)
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.715 2 DEBUG nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.719 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.723 2 INFO nova.virt.libvirt.driver [-] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance spawned successfully.
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.724 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.750 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.762 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.768 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.769 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.770 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.770 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.771 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.772 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.842 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.843 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433291.7110634, e1aea504-3ecf-4273-a867-66afb39de726 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.843 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] VM Paused (Lifecycle Event)
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.868 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.872 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433291.7183087, e1aea504-3ecf-4273-a867-66afb39de726 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.872 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] VM Resumed (Lifecycle Event)
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.878 2 INFO nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Took 6.42 seconds to spawn the instance on the hypervisor.
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.878 2 DEBUG nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.888 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.893 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.918 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.943 2 INFO nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Took 7.48 seconds to build instance.
Oct 14 09:14:51 compute-0 nova_compute[259627]: 2025-10-14 09:14:51.960 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 213 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.5 MiB/s wr, 270 op/s
Oct 14 09:14:52 compute-0 nova_compute[259627]: 2025-10-14 09:14:52.566 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "84968701-f6c5-4798-888e-fa0f3311adca" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:52 compute-0 nova_compute[259627]: 2025-10-14 09:14:52.567 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:52 compute-0 nova_compute[259627]: 2025-10-14 09:14:52.584 2 DEBUG nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:14:52 compute-0 nova_compute[259627]: 2025-10-14 09:14:52.612 2 INFO nova.compute.manager [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Rescuing
Oct 14 09:14:52 compute-0 nova_compute[259627]: 2025-10-14 09:14:52.613 2 DEBUG oslo_concurrency.lockutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:14:52 compute-0 nova_compute[259627]: 2025-10-14 09:14:52.613 2 DEBUG oslo_concurrency.lockutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquired lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:14:52 compute-0 nova_compute[259627]: 2025-10-14 09:14:52.613 2 DEBUG nova.network.neutron [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:14:52 compute-0 nova_compute[259627]: 2025-10-14 09:14:52.667 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:52 compute-0 nova_compute[259627]: 2025-10-14 09:14:52.668 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:52 compute-0 nova_compute[259627]: 2025-10-14 09:14:52.678 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:14:52 compute-0 nova_compute[259627]: 2025-10-14 09:14:52.679 2 INFO nova.compute.claims [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:14:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:14:52 compute-0 nova_compute[259627]: 2025-10-14 09:14:52.892 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:53 compute-0 ceph-mon[74249]: pgmap v1818: 305 pgs: 305 active+clean; 213 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.5 MiB/s wr, 270 op/s
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.341 2 DEBUG nova.compute.manager [req-2d98331c-cd86-43dd-88c8-7338012d6463 req-09412ab0-0537-4e23-9558-15aa9e81c10f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.342 2 DEBUG oslo_concurrency.lockutils [req-2d98331c-cd86-43dd-88c8-7338012d6463 req-09412ab0-0537-4e23-9558-15aa9e81c10f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.342 2 DEBUG oslo_concurrency.lockutils [req-2d98331c-cd86-43dd-88c8-7338012d6463 req-09412ab0-0537-4e23-9558-15aa9e81c10f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.343 2 DEBUG oslo_concurrency.lockutils [req-2d98331c-cd86-43dd-88c8-7338012d6463 req-09412ab0-0537-4e23-9558-15aa9e81c10f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.343 2 DEBUG nova.compute.manager [req-2d98331c-cd86-43dd-88c8-7338012d6463 req-09412ab0-0537-4e23-9558-15aa9e81c10f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] No waiting events found dispatching network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.343 2 WARNING nova.compute.manager [req-2d98331c-cd86-43dd-88c8-7338012d6463 req-09412ab0-0537-4e23-9558-15aa9e81c10f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received unexpected event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 for instance with vm_state active and task_state rescuing.
Oct 14 09:14:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:14:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/937491467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.375 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.382 2 DEBUG nova.compute.provider_tree [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.396 2 DEBUG nova.scheduler.client.report [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.428 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.429 2 DEBUG nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.492 2 DEBUG nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.493 2 DEBUG nova.network.neutron [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.516 2 INFO nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.536 2 DEBUG nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.621 2 DEBUG nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.624 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.625 2 INFO nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Creating image(s)
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.660 2 DEBUG nova.storage.rbd_utils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 84968701-f6c5-4798-888e-fa0f3311adca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.697 2 DEBUG nova.storage.rbd_utils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 84968701-f6c5-4798-888e-fa0f3311adca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.723 2 DEBUG nova.storage.rbd_utils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 84968701-f6c5-4798-888e-fa0f3311adca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.736 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.807 2 DEBUG nova.policy [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a287ef08fc5c4f218bf06cd2c7ed021e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.843 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.843 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.844 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.844 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.872 2 DEBUG nova.storage.rbd_utils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 84968701-f6c5-4798-888e-fa0f3311adca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:53 compute-0 nova_compute[259627]: 2025-10-14 09:14:53.876 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 84968701-f6c5-4798-888e-fa0f3311adca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 213 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Oct 14 09:14:54 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/937491467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:14:54 compute-0 nova_compute[259627]: 2025-10-14 09:14:54.155 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 84968701-f6c5-4798-888e-fa0f3311adca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:54 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 09:14:54 compute-0 nova_compute[259627]: 2025-10-14 09:14:54.214 2 DEBUG nova.storage.rbd_utils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] resizing rbd image 84968701-f6c5-4798-888e-fa0f3311adca_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:14:54 compute-0 nova_compute[259627]: 2025-10-14 09:14:54.308 2 DEBUG nova.objects.instance [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'migration_context' on Instance uuid 84968701-f6c5-4798-888e-fa0f3311adca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:54 compute-0 nova_compute[259627]: 2025-10-14 09:14:54.329 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:14:54 compute-0 nova_compute[259627]: 2025-10-14 09:14:54.330 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Ensure instance console log exists: /var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:14:54 compute-0 nova_compute[259627]: 2025-10-14 09:14:54.331 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:54 compute-0 nova_compute[259627]: 2025-10-14 09:14:54.331 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:54 compute-0 nova_compute[259627]: 2025-10-14 09:14:54.332 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:54 compute-0 nova_compute[259627]: 2025-10-14 09:14:54.369 2 DEBUG nova.network.neutron [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Successfully created port: a8c2f2be-f25a-4512-b8a2-17b2b695ce52 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:14:54 compute-0 nova_compute[259627]: 2025-10-14 09:14:54.583 2 DEBUG nova.network.neutron [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updating instance_info_cache with network_info: [{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:54 compute-0 nova_compute[259627]: 2025-10-14 09:14:54.606 2 DEBUG oslo_concurrency.lockutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Releasing lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:54 compute-0 nova_compute[259627]: 2025-10-14 09:14:54.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:54 compute-0 nova_compute[259627]: 2025-10-14 09:14:54.914 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:14:55 compute-0 ceph-mon[74249]: pgmap v1819: 305 pgs: 305 active+clean; 213 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Oct 14 09:14:55 compute-0 nova_compute[259627]: 2025-10-14 09:14:55.558 2 DEBUG nova.network.neutron [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Successfully updated port: a8c2f2be-f25a-4512-b8a2-17b2b695ce52 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:14:55 compute-0 nova_compute[259627]: 2025-10-14 09:14:55.572 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "refresh_cache-84968701-f6c5-4798-888e-fa0f3311adca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:14:55 compute-0 nova_compute[259627]: 2025-10-14 09:14:55.572 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquired lock "refresh_cache-84968701-f6c5-4798-888e-fa0f3311adca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:14:55 compute-0 nova_compute[259627]: 2025-10-14 09:14:55.572 2 DEBUG nova.network.neutron [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:14:55 compute-0 nova_compute[259627]: 2025-10-14 09:14:55.654 2 DEBUG nova.compute.manager [req-638cb684-56a2-433b-973c-b0a676c4d210 req-74da4ca9-e4c5-47b6-bf1a-127e93b8dd9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Received event network-changed-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:14:55 compute-0 nova_compute[259627]: 2025-10-14 09:14:55.654 2 DEBUG nova.compute.manager [req-638cb684-56a2-433b-973c-b0a676c4d210 req-74da4ca9-e4c5-47b6-bf1a-127e93b8dd9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Refreshing instance network info cache due to event network-changed-a8c2f2be-f25a-4512-b8a2-17b2b695ce52. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:14:55 compute-0 nova_compute[259627]: 2025-10-14 09:14:55.654 2 DEBUG oslo_concurrency.lockutils [req-638cb684-56a2-433b-973c-b0a676c4d210 req-74da4ca9-e4c5-47b6-bf1a-127e93b8dd9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-84968701-f6c5-4798-888e-fa0f3311adca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:14:55 compute-0 nova_compute[259627]: 2025-10-14 09:14:55.834 2 DEBUG nova.network.neutron [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:14:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 260 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 242 op/s
Oct 14 09:14:56 compute-0 sudo[359200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:14:56 compute-0 sudo[359200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:14:56 compute-0 sudo[359200]: pam_unix(sudo:session): session closed for user root
Oct 14 09:14:56 compute-0 sudo[359225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:14:56 compute-0 sudo[359225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:14:56 compute-0 sudo[359225]: pam_unix(sudo:session): session closed for user root
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.846 2 DEBUG nova.network.neutron [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Updating instance_info_cache with network_info: [{"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:56 compute-0 sudo[359250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:14:56 compute-0 sudo[359250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:14:56 compute-0 sudo[359250]: pam_unix(sudo:session): session closed for user root
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.869 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Releasing lock "refresh_cache-84968701-f6c5-4798-888e-fa0f3311adca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.869 2 DEBUG nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Instance network_info: |[{"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.869 2 DEBUG oslo_concurrency.lockutils [req-638cb684-56a2-433b-973c-b0a676c4d210 req-74da4ca9-e4c5-47b6-bf1a-127e93b8dd9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-84968701-f6c5-4798-888e-fa0f3311adca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.870 2 DEBUG nova.network.neutron [req-638cb684-56a2-433b-973c-b0a676c4d210 req-74da4ca9-e4c5-47b6-bf1a-127e93b8dd9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Refreshing network info cache for port a8c2f2be-f25a-4512-b8a2-17b2b695ce52 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.873 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Start _get_guest_xml network_info=[{"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.877 2 WARNING nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.882 2 DEBUG nova.virt.libvirt.host [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.883 2 DEBUG nova.virt.libvirt.host [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.892 2 DEBUG nova.virt.libvirt.host [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.893 2 DEBUG nova.virt.libvirt.host [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.894 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.894 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.895 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.895 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.895 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.895 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.896 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.896 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.896 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.897 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.897 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.897 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:14:56 compute-0 nova_compute[259627]: 2025-10-14 09:14:56.901 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:56 compute-0 sudo[359275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 14 09:14:56 compute-0 sudo[359275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:14:57 compute-0 ceph-mon[74249]: pgmap v1820: 305 pgs: 305 active+clean; 260 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 242 op/s
Oct 14 09:14:57 compute-0 sudo[359275]: pam_unix(sudo:session): session closed for user root
Oct 14 09:14:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:14:57 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:14:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:14:57 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:14:57 compute-0 sudo[359339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:14:57 compute-0 sudo[359339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:14:57 compute-0 sudo[359339]: pam_unix(sudo:session): session closed for user root
Oct 14 09:14:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:14:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/435600781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.336 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:57 compute-0 sudo[359364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:14:57 compute-0 sudo[359364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:14:57 compute-0 sudo[359364]: pam_unix(sudo:session): session closed for user root
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.371 2 DEBUG nova.storage.rbd_utils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 84968701-f6c5-4798-888e-fa0f3311adca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.379 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:57 compute-0 sudo[359398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:14:57 compute-0 sudo[359398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:14:57 compute-0 sudo[359398]: pam_unix(sudo:session): session closed for user root
Oct 14 09:14:57 compute-0 sudo[359435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:14:57 compute-0 sudo[359435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:14:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:14:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:14:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1490268113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.916 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.918 2 DEBUG nova.virt.libvirt.vif [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1896126591',display_name='tempest-ServersTestJSON-server-1896126591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1896126591',id=102,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-g3f94nzf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:53Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=84968701-f6c5-4798-888e-fa0f3311adca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.919 2 DEBUG nova.network.os_vif_util [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.922 2 DEBUG nova.network.os_vif_util [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:58:a4,bridge_name='br-int',has_traffic_filtering=True,id=a8c2f2be-f25a-4512-b8a2-17b2b695ce52,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2f2be-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.923 2 DEBUG nova.objects.instance [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 84968701-f6c5-4798-888e-fa0f3311adca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.953 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:14:57 compute-0 nova_compute[259627]:   <uuid>84968701-f6c5-4798-888e-fa0f3311adca</uuid>
Oct 14 09:14:57 compute-0 nova_compute[259627]:   <name>instance-00000066</name>
Oct 14 09:14:57 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:14:57 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:14:57 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersTestJSON-server-1896126591</nova:name>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:14:56</nova:creationTime>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:14:57 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:14:57 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:14:57 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:14:57 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:14:57 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:14:57 compute-0 nova_compute[259627]:         <nova:user uuid="a287ef08fc5c4f218bf06cd2c7ed021e">tempest-ServersTestJSON-2060951674-project-member</nova:user>
Oct 14 09:14:57 compute-0 nova_compute[259627]:         <nova:project uuid="0a080fae2f3c4e39a6cca225203f5ec6">tempest-ServersTestJSON-2060951674</nova:project>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:14:57 compute-0 nova_compute[259627]:         <nova:port uuid="a8c2f2be-f25a-4512-b8a2-17b2b695ce52">
Oct 14 09:14:57 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:14:57 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:14:57 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <system>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <entry name="serial">84968701-f6c5-4798-888e-fa0f3311adca</entry>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <entry name="uuid">84968701-f6c5-4798-888e-fa0f3311adca</entry>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     </system>
Oct 14 09:14:57 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:14:57 compute-0 nova_compute[259627]:   <os>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:   </os>
Oct 14 09:14:57 compute-0 nova_compute[259627]:   <features>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:   </features>
Oct 14 09:14:57 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:14:57 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:14:57 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/84968701-f6c5-4798-888e-fa0f3311adca_disk">
Oct 14 09:14:57 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       </source>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:14:57 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/84968701-f6c5-4798-888e-fa0f3311adca_disk.config">
Oct 14 09:14:57 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       </source>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:14:57 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:c1:58:a4"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <target dev="tapa8c2f2be-f2"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca/console.log" append="off"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <video>
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     </video>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:14:57 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:14:57 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:14:57 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:14:57 compute-0 nova_compute[259627]: </domain>
Oct 14 09:14:57 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.955 2 DEBUG nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Preparing to wait for external event network-vif-plugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.956 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "84968701-f6c5-4798-888e-fa0f3311adca-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.956 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.956 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.957 2 DEBUG nova.virt.libvirt.vif [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1896126591',display_name='tempest-ServersTestJSON-server-1896126591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1896126591',id=102,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-g3f94nzf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:53Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=84968701-f6c5-4798-888e-fa0f3311adca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.958 2 DEBUG nova.network.os_vif_util [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.960 2 DEBUG nova.network.os_vif_util [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:58:a4,bridge_name='br-int',has_traffic_filtering=True,id=a8c2f2be-f25a-4512-b8a2-17b2b695ce52,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2f2be-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.960 2 DEBUG os_vif [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:58:a4,bridge_name='br-int',has_traffic_filtering=True,id=a8c2f2be-f25a-4512-b8a2-17b2b695ce52,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2f2be-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.962 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.964 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.969 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8c2f2be-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.970 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa8c2f2be-f2, col_values=(('external_ids', {'iface-id': 'a8c2f2be-f25a-4512-b8a2-17b2b695ce52', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:58:a4', 'vm-uuid': '84968701-f6c5-4798-888e-fa0f3311adca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:57 compute-0 NetworkManager[44885]: <info>  [1760433297.9735] manager: (tapa8c2f2be-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/436)
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:57 compute-0 nova_compute[259627]: 2025-10-14 09:14:57.980 2 INFO os_vif [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:58:a4,bridge_name='br-int',has_traffic_filtering=True,id=a8c2f2be-f25a-4512-b8a2-17b2b695ce52,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2f2be-f2')
Oct 14 09:14:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 260 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 220 op/s
Oct 14 09:14:57 compute-0 sudo[359435]: pam_unix(sudo:session): session closed for user root
Oct 14 09:14:58 compute-0 nova_compute[259627]: 2025-10-14 09:14:58.047 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:14:58 compute-0 nova_compute[259627]: 2025-10-14 09:14:58.048 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:14:58 compute-0 nova_compute[259627]: 2025-10-14 09:14:58.049 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No VIF found with MAC fa:16:3e:c1:58:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:14:58 compute-0 nova_compute[259627]: 2025-10-14 09:14:58.049 2 INFO nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Using config drive
Oct 14 09:14:58 compute-0 nova_compute[259627]: 2025-10-14 09:14:58.070 2 DEBUG nova.storage.rbd_utils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 84968701-f6c5-4798-888e-fa0f3311adca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 14 09:14:58 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 09:14:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:14:58 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:14:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:14:58 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:14:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:14:58 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:14:58 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 960adb52-a5e4-4e97-9f70-70a815d5e2cf does not exist
Oct 14 09:14:58 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 82e9d0c5-fd39-4286-aac4-57f46ebe8b38 does not exist
Oct 14 09:14:58 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 8b89a373-26e9-4ede-be32-d0f750e68abf does not exist
Oct 14 09:14:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:14:58 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:14:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:14:58 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:14:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:14:58 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:14:58 compute-0 sudo[359534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:14:58 compute-0 sudo[359534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:14:58 compute-0 sudo[359534]: pam_unix(sudo:session): session closed for user root
Oct 14 09:14:58 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:14:58 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:14:58 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/435600781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:58 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1490268113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:14:58 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 09:14:58 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:14:58 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:14:58 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:14:58 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:14:58 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:14:58 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:14:58 compute-0 sudo[359559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:14:58 compute-0 sudo[359559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:14:58 compute-0 sudo[359559]: pam_unix(sudo:session): session closed for user root
Oct 14 09:14:58 compute-0 sudo[359584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:14:58 compute-0 sudo[359584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:14:58 compute-0 sudo[359584]: pam_unix(sudo:session): session closed for user root
Oct 14 09:14:58 compute-0 sudo[359609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:14:58 compute-0 sudo[359609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:14:58 compute-0 nova_compute[259627]: 2025-10-14 09:14:58.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:58 compute-0 nova_compute[259627]: 2025-10-14 09:14:58.581 2 INFO nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Creating config drive at /var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca/disk.config
Oct 14 09:14:58 compute-0 nova_compute[259627]: 2025-10-14 09:14:58.585 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdwxhudkf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:58 compute-0 podman[359668]: 2025-10-14 09:14:58.679256304 +0000 UTC m=+0.040309104 container create e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_black, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:14:58 compute-0 systemd[1]: Started libpod-conmon-e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4.scope.
Oct 14 09:14:58 compute-0 nova_compute[259627]: 2025-10-14 09:14:58.734 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdwxhudkf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:58 compute-0 podman[359668]: 2025-10-14 09:14:58.661715742 +0000 UTC m=+0.022768552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:14:58 compute-0 nova_compute[259627]: 2025-10-14 09:14:58.759 2 DEBUG nova.storage.rbd_utils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 84968701-f6c5-4798-888e-fa0f3311adca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:14:58 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:14:58 compute-0 nova_compute[259627]: 2025-10-14 09:14:58.767 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca/disk.config 84968701-f6c5-4798-888e-fa0f3311adca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:14:58 compute-0 podman[359668]: 2025-10-14 09:14:58.784491178 +0000 UTC m=+0.145544018 container init e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 14 09:14:58 compute-0 podman[359668]: 2025-10-14 09:14:58.791377678 +0000 UTC m=+0.152430478 container start e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_black, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:14:58 compute-0 podman[359668]: 2025-10-14 09:14:58.795722975 +0000 UTC m=+0.156775855 container attach e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_black, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:14:58 compute-0 systemd[1]: libpod-e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4.scope: Deactivated successfully.
Oct 14 09:14:58 compute-0 sweet_black[359686]: 167 167
Oct 14 09:14:58 compute-0 conmon[359686]: conmon e678264e9241dc5b190f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4.scope/container/memory.events
Oct 14 09:14:58 compute-0 podman[359668]: 2025-10-14 09:14:58.800249696 +0000 UTC m=+0.161302496 container died e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_black, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 09:14:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-55caa37a6b3c4650d272911b5ba43eecb749a9ddd560f043e0c37c4bb7b8ca0f-merged.mount: Deactivated successfully.
Oct 14 09:14:58 compute-0 podman[359668]: 2025-10-14 09:14:58.855210461 +0000 UTC m=+0.216263271 container remove e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 09:14:58 compute-0 systemd[1]: libpod-conmon-e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4.scope: Deactivated successfully.
Oct 14 09:14:58 compute-0 nova_compute[259627]: 2025-10-14 09:14:58.889 2 DEBUG nova.network.neutron [req-638cb684-56a2-433b-973c-b0a676c4d210 req-74da4ca9-e4c5-47b6-bf1a-127e93b8dd9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Updated VIF entry in instance network info cache for port a8c2f2be-f25a-4512-b8a2-17b2b695ce52. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:14:58 compute-0 nova_compute[259627]: 2025-10-14 09:14:58.891 2 DEBUG nova.network.neutron [req-638cb684-56a2-433b-973c-b0a676c4d210 req-74da4ca9-e4c5-47b6-bf1a-127e93b8dd9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Updating instance_info_cache with network_info: [{"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:14:58 compute-0 nova_compute[259627]: 2025-10-14 09:14:58.910 2 DEBUG oslo_concurrency.lockutils [req-638cb684-56a2-433b-973c-b0a676c4d210 req-74da4ca9-e4c5-47b6-bf1a-127e93b8dd9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-84968701-f6c5-4798-888e-fa0f3311adca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:14:58 compute-0 nova_compute[259627]: 2025-10-14 09:14:58.955 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca/disk.config 84968701-f6c5-4798-888e-fa0f3311adca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:14:58 compute-0 nova_compute[259627]: 2025-10-14 09:14:58.955 2 INFO nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Deleting local config drive /var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca/disk.config because it was imported into RBD.
Oct 14 09:14:59 compute-0 kernel: tapa8c2f2be-f2: entered promiscuous mode
Oct 14 09:14:59 compute-0 ovn_controller[152662]: 2025-10-14T09:14:59Z|01077|binding|INFO|Claiming lport a8c2f2be-f25a-4512-b8a2-17b2b695ce52 for this chassis.
Oct 14 09:14:59 compute-0 ovn_controller[152662]: 2025-10-14T09:14:59Z|01078|binding|INFO|a8c2f2be-f25a-4512-b8a2-17b2b695ce52: Claiming fa:16:3e:c1:58:a4 10.100.0.12
Oct 14 09:14:59 compute-0 nova_compute[259627]: 2025-10-14 09:14:59.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:59 compute-0 NetworkManager[44885]: <info>  [1760433299.0163] manager: (tapa8c2f2be-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/437)
Oct 14 09:14:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.020 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:58:a4 10.100.0.12'], port_security=['fa:16:3e:c1:58:a4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '84968701-f6c5-4798-888e-fa0f3311adca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=a8c2f2be-f25a-4512-b8a2-17b2b695ce52) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.022 162547 INFO neutron.agent.ovn.metadata.agent [-] Port a8c2f2be-f25a-4512-b8a2-17b2b695ce52 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e bound to our chassis
Oct 14 09:14:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.023 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e
Oct 14 09:14:59 compute-0 ovn_controller[152662]: 2025-10-14T09:14:59Z|01079|binding|INFO|Setting lport a8c2f2be-f25a-4512-b8a2-17b2b695ce52 ovn-installed in OVS
Oct 14 09:14:59 compute-0 ovn_controller[152662]: 2025-10-14T09:14:59Z|01080|binding|INFO|Setting lport a8c2f2be-f25a-4512-b8a2-17b2b695ce52 up in Southbound
Oct 14 09:14:59 compute-0 nova_compute[259627]: 2025-10-14 09:14:59.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.047 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[20b19d78-2fec-4c8c-950f-d3bc750799eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:59 compute-0 systemd-udevd[359772]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:14:59 compute-0 systemd-machined[214636]: New machine qemu-127-instance-00000066.
Oct 14 09:14:59 compute-0 podman[359751]: 2025-10-14 09:14:59.067520373 +0000 UTC m=+0.066546870 container create fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 09:14:59 compute-0 NetworkManager[44885]: <info>  [1760433299.0719] device (tapa8c2f2be-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:14:59 compute-0 systemd[1]: Started Virtual Machine qemu-127-instance-00000066.
Oct 14 09:14:59 compute-0 NetworkManager[44885]: <info>  [1760433299.0728] device (tapa8c2f2be-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:14:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.103 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[20185546-02d9-40c6-aba4-abfb119451a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.105 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[998f241d-84f5-49c1-98db-4cabaeecf619]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:59 compute-0 systemd[1]: Started libpod-conmon-fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905.scope.
Oct 14 09:14:59 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:14:59 compute-0 podman[359751]: 2025-10-14 09:14:59.045718866 +0000 UTC m=+0.044745373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa52c94f82841b87b8696a1b505c48ca7fdb071e77d319e0305a992492df1fc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa52c94f82841b87b8696a1b505c48ca7fdb071e77d319e0305a992492df1fc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa52c94f82841b87b8696a1b505c48ca7fdb071e77d319e0305a992492df1fc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa52c94f82841b87b8696a1b505c48ca7fdb071e77d319e0305a992492df1fc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa52c94f82841b87b8696a1b505c48ca7fdb071e77d319e0305a992492df1fc6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:14:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.140 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6d3334ba-f0ba-4cf8-a9b7-a53f8e48909f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.148 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:14:59 compute-0 nova_compute[259627]: 2025-10-14 09:14:59.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.163 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd4cdc5-148b-4075-ad41-eaddca02e666]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359792, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:59 compute-0 podman[359751]: 2025-10-14 09:14:59.169416364 +0000 UTC m=+0.168442851 container init fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:14:59 compute-0 podman[359751]: 2025-10-14 09:14:59.177819391 +0000 UTC m=+0.176845878 container start fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:14:59 compute-0 podman[359751]: 2025-10-14 09:14:59.181578154 +0000 UTC m=+0.180604641 container attach fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:14:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.186 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e80f2394-e9df-4a38-8fb4-8f4daac92883]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359795, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359795, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:14:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.188 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:59 compute-0 nova_compute[259627]: 2025-10-14 09:14:59.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:59 compute-0 nova_compute[259627]: 2025-10-14 09:14:59.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:14:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.191 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.192 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.192 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:14:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.193 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:14:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.194 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:14:59 compute-0 ceph-mon[74249]: pgmap v1821: 305 pgs: 305 active+clean; 260 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 220 op/s
Oct 14 09:14:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 260 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 220 op/s
Oct 14 09:14:59 compute-0 nova_compute[259627]: 2025-10-14 09:14:59.983 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433299.982139, 84968701-f6c5-4798-888e-fa0f3311adca => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:14:59 compute-0 nova_compute[259627]: 2025-10-14 09:14:59.984 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] VM Started (Lifecycle Event)
Oct 14 09:15:00 compute-0 nova_compute[259627]: 2025-10-14 09:15:00.004 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:00 compute-0 nova_compute[259627]: 2025-10-14 09:15:00.009 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433299.9823294, 84968701-f6c5-4798-888e-fa0f3311adca => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:00 compute-0 nova_compute[259627]: 2025-10-14 09:15:00.010 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] VM Paused (Lifecycle Event)
Oct 14 09:15:00 compute-0 nova_compute[259627]: 2025-10-14 09:15:00.030 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:00 compute-0 nova_compute[259627]: 2025-10-14 09:15:00.033 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:15:00 compute-0 nova_compute[259627]: 2025-10-14 09:15:00.052 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:15:00 compute-0 strange_swartz[359782]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:15:00 compute-0 strange_swartz[359782]: --> relative data size: 1.0
Oct 14 09:15:00 compute-0 strange_swartz[359782]: --> All data devices are unavailable
Oct 14 09:15:00 compute-0 systemd[1]: libpod-fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905.scope: Deactivated successfully.
Oct 14 09:15:00 compute-0 conmon[359782]: conmon fcd44b21de49d382c706 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905.scope/container/memory.events
Oct 14 09:15:00 compute-0 podman[359751]: 2025-10-14 09:15:00.199075262 +0000 UTC m=+1.198101769 container died fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 09:15:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa52c94f82841b87b8696a1b505c48ca7fdb071e77d319e0305a992492df1fc6-merged.mount: Deactivated successfully.
Oct 14 09:15:00 compute-0 podman[359751]: 2025-10-14 09:15:00.261866639 +0000 UTC m=+1.260893126 container remove fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 09:15:00 compute-0 systemd[1]: libpod-conmon-fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905.scope: Deactivated successfully.
Oct 14 09:15:00 compute-0 sudo[359609]: pam_unix(sudo:session): session closed for user root
Oct 14 09:15:00 compute-0 sudo[359874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:15:00 compute-0 sudo[359874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:15:00 compute-0 sudo[359874]: pam_unix(sudo:session): session closed for user root
Oct 14 09:15:00 compute-0 sudo[359899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:15:00 compute-0 sudo[359899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:15:00 compute-0 sudo[359899]: pam_unix(sudo:session): session closed for user root
Oct 14 09:15:00 compute-0 sudo[359924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:15:00 compute-0 sudo[359924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:15:00 compute-0 sudo[359924]: pam_unix(sudo:session): session closed for user root
Oct 14 09:15:00 compute-0 ovn_controller[152662]: 2025-10-14T09:15:00Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fa:70:22 10.100.0.3
Oct 14 09:15:00 compute-0 ovn_controller[152662]: 2025-10-14T09:15:00Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fa:70:22 10.100.0.3
Oct 14 09:15:00 compute-0 sudo[359949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:15:00 compute-0 sudo[359949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:15:00 compute-0 nova_compute[259627]: 2025-10-14 09:15:00.626 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433285.467822, b595141f-123e-4250-bfec-888d866fd0c6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:00 compute-0 nova_compute[259627]: 2025-10-14 09:15:00.626 2 INFO nova.compute.manager [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] VM Stopped (Lifecycle Event)
Oct 14 09:15:00 compute-0 nova_compute[259627]: 2025-10-14 09:15:00.659 2 DEBUG nova.compute.manager [None req-242c9f69-e5ed-48f4-a4ca-deb7de9d70e4 - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:00 compute-0 podman[360014]: 2025-10-14 09:15:00.919092668 +0000 UTC m=+0.042629932 container create 9a490f30482dbb84a59d90c856f1eefe567b07cf1bd80c6e434c0520d837ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lederberg, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 09:15:00 compute-0 systemd[1]: Started libpod-conmon-9a490f30482dbb84a59d90c856f1eefe567b07cf1bd80c6e434c0520d837ba91.scope.
Oct 14 09:15:00 compute-0 podman[360014]: 2025-10-14 09:15:00.900997242 +0000 UTC m=+0.024534496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:15:01 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:15:01 compute-0 podman[360014]: 2025-10-14 09:15:01.039970807 +0000 UTC m=+0.163508071 container init 9a490f30482dbb84a59d90c856f1eefe567b07cf1bd80c6e434c0520d837ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:15:01 compute-0 podman[360014]: 2025-10-14 09:15:01.052513046 +0000 UTC m=+0.176050280 container start 9a490f30482dbb84a59d90c856f1eefe567b07cf1bd80c6e434c0520d837ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 09:15:01 compute-0 podman[360014]: 2025-10-14 09:15:01.057166141 +0000 UTC m=+0.180703365 container attach 9a490f30482dbb84a59d90c856f1eefe567b07cf1bd80c6e434c0520d837ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:15:01 compute-0 upbeat_lederberg[360031]: 167 167
Oct 14 09:15:01 compute-0 systemd[1]: libpod-9a490f30482dbb84a59d90c856f1eefe567b07cf1bd80c6e434c0520d837ba91.scope: Deactivated successfully.
Oct 14 09:15:01 compute-0 podman[360014]: 2025-10-14 09:15:01.062780749 +0000 UTC m=+0.186317983 container died 9a490f30482dbb84a59d90c856f1eefe567b07cf1bd80c6e434c0520d837ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lederberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 09:15:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a6c36d2cf3685c0e671aeec3bf72bc51c5865720bdf11777bd893d5f3dc7b52-merged.mount: Deactivated successfully.
Oct 14 09:15:01 compute-0 podman[360014]: 2025-10-14 09:15:01.103888232 +0000 UTC m=+0.227425466 container remove 9a490f30482dbb84a59d90c856f1eefe567b07cf1bd80c6e434c0520d837ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 09:15:01 compute-0 systemd[1]: libpod-conmon-9a490f30482dbb84a59d90c856f1eefe567b07cf1bd80c6e434c0520d837ba91.scope: Deactivated successfully.
Oct 14 09:15:01 compute-0 ceph-mon[74249]: pgmap v1822: 305 pgs: 305 active+clean; 260 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 220 op/s
Oct 14 09:15:01 compute-0 podman[360055]: 2025-10-14 09:15:01.340913954 +0000 UTC m=+0.061454235 container create 54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:15:01 compute-0 podman[360055]: 2025-10-14 09:15:01.311364686 +0000 UTC m=+0.031905057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:15:01 compute-0 systemd[1]: Started libpod-conmon-54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835.scope.
Oct 14 09:15:01 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e2abfa681240a38a2098d90eebce6b45d75b2711dee286de7848c743afe9a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e2abfa681240a38a2098d90eebce6b45d75b2711dee286de7848c743afe9a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e2abfa681240a38a2098d90eebce6b45d75b2711dee286de7848c743afe9a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e2abfa681240a38a2098d90eebce6b45d75b2711dee286de7848c743afe9a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:15:01 compute-0 podman[360055]: 2025-10-14 09:15:01.463119416 +0000 UTC m=+0.183659727 container init 54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 09:15:01 compute-0 podman[360055]: 2025-10-14 09:15:01.46976169 +0000 UTC m=+0.190301981 container start 54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:15:01 compute-0 podman[360055]: 2025-10-14 09:15:01.475127112 +0000 UTC m=+0.195667423 container attach 54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 09:15:01 compute-0 podman[360071]: 2025-10-14 09:15:01.491988558 +0000 UTC m=+0.100461807 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 14 09:15:01 compute-0 podman[360069]: 2025-10-14 09:15:01.511374116 +0000 UTC m=+0.118764259 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:15:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 293 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 293 op/s
Oct 14 09:15:02 compute-0 happy_morse[360091]: {
Oct 14 09:15:02 compute-0 happy_morse[360091]:     "0": [
Oct 14 09:15:02 compute-0 happy_morse[360091]:         {
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "devices": [
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "/dev/loop3"
Oct 14 09:15:02 compute-0 happy_morse[360091]:             ],
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "lv_name": "ceph_lv0",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "lv_size": "21470642176",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "name": "ceph_lv0",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "tags": {
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.cluster_name": "ceph",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.crush_device_class": "",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.encrypted": "0",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.osd_id": "0",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.type": "block",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.vdo": "0"
Oct 14 09:15:02 compute-0 happy_morse[360091]:             },
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "type": "block",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "vg_name": "ceph_vg0"
Oct 14 09:15:02 compute-0 happy_morse[360091]:         }
Oct 14 09:15:02 compute-0 happy_morse[360091]:     ],
Oct 14 09:15:02 compute-0 happy_morse[360091]:     "1": [
Oct 14 09:15:02 compute-0 happy_morse[360091]:         {
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "devices": [
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "/dev/loop4"
Oct 14 09:15:02 compute-0 happy_morse[360091]:             ],
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "lv_name": "ceph_lv1",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "lv_size": "21470642176",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "name": "ceph_lv1",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "tags": {
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.cluster_name": "ceph",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.crush_device_class": "",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.encrypted": "0",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.osd_id": "1",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.type": "block",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.vdo": "0"
Oct 14 09:15:02 compute-0 happy_morse[360091]:             },
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "type": "block",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "vg_name": "ceph_vg1"
Oct 14 09:15:02 compute-0 happy_morse[360091]:         }
Oct 14 09:15:02 compute-0 happy_morse[360091]:     ],
Oct 14 09:15:02 compute-0 happy_morse[360091]:     "2": [
Oct 14 09:15:02 compute-0 happy_morse[360091]:         {
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "devices": [
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "/dev/loop5"
Oct 14 09:15:02 compute-0 happy_morse[360091]:             ],
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "lv_name": "ceph_lv2",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "lv_size": "21470642176",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "name": "ceph_lv2",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "tags": {
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.cluster_name": "ceph",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.crush_device_class": "",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.encrypted": "0",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.osd_id": "2",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.type": "block",
Oct 14 09:15:02 compute-0 happy_morse[360091]:                 "ceph.vdo": "0"
Oct 14 09:15:02 compute-0 happy_morse[360091]:             },
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "type": "block",
Oct 14 09:15:02 compute-0 happy_morse[360091]:             "vg_name": "ceph_vg2"
Oct 14 09:15:02 compute-0 happy_morse[360091]:         }
Oct 14 09:15:02 compute-0 happy_morse[360091]:     ]
Oct 14 09:15:02 compute-0 happy_morse[360091]: }
Oct 14 09:15:02 compute-0 systemd[1]: libpod-54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835.scope: Deactivated successfully.
Oct 14 09:15:02 compute-0 conmon[360091]: conmon 54c768a82e6a760a878e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835.scope/container/memory.events
Oct 14 09:15:02 compute-0 podman[360055]: 2025-10-14 09:15:02.293658866 +0000 UTC m=+1.014199197 container died 54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 09:15:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-12e2abfa681240a38a2098d90eebce6b45d75b2711dee286de7848c743afe9a2-merged.mount: Deactivated successfully.
Oct 14 09:15:02 compute-0 podman[360055]: 2025-10-14 09:15:02.363457917 +0000 UTC m=+1.083998218 container remove 54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 09:15:02 compute-0 systemd[1]: libpod-conmon-54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835.scope: Deactivated successfully.
Oct 14 09:15:02 compute-0 sudo[359949]: pam_unix(sudo:session): session closed for user root
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.412 2 DEBUG nova.compute.manager [req-c91730d8-f0bc-4582-8e60-f0a61225a939 req-867ee8b9-e5ab-478b-8be8-c7e0cf82106c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Received event network-vif-plugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.412 2 DEBUG oslo_concurrency.lockutils [req-c91730d8-f0bc-4582-8e60-f0a61225a939 req-867ee8b9-e5ab-478b-8be8-c7e0cf82106c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "84968701-f6c5-4798-888e-fa0f3311adca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.413 2 DEBUG oslo_concurrency.lockutils [req-c91730d8-f0bc-4582-8e60-f0a61225a939 req-867ee8b9-e5ab-478b-8be8-c7e0cf82106c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.413 2 DEBUG oslo_concurrency.lockutils [req-c91730d8-f0bc-4582-8e60-f0a61225a939 req-867ee8b9-e5ab-478b-8be8-c7e0cf82106c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.413 2 DEBUG nova.compute.manager [req-c91730d8-f0bc-4582-8e60-f0a61225a939 req-867ee8b9-e5ab-478b-8be8-c7e0cf82106c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Processing event network-vif-plugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.415 2 DEBUG nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.419 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433302.4194567, 84968701-f6c5-4798-888e-fa0f3311adca => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.420 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] VM Resumed (Lifecycle Event)
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.422 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.426 2 INFO nova.virt.libvirt.driver [-] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Instance spawned successfully.
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.426 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.445 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.451 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.456 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.457 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.457 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.458 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.459 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.459 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:02 compute-0 sudo[360131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:15:02 compute-0 sudo[360131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:15:02 compute-0 sudo[360131]: pam_unix(sudo:session): session closed for user root
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.488 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.537 2 INFO nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Took 8.92 seconds to spawn the instance on the hypervisor.
Oct 14 09:15:02 compute-0 sudo[360156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:15:02 compute-0 sudo[360156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:15:02 compute-0 sudo[360156]: pam_unix(sudo:session): session closed for user root
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.549 2 DEBUG nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:02 compute-0 sudo[360181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:15:02 compute-0 sudo[360181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.618 2 INFO nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Took 9.98 seconds to build instance.
Oct 14 09:15:02 compute-0 sudo[360181]: pam_unix(sudo:session): session closed for user root
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.634 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:02 compute-0 sudo[360206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:15:02 compute-0 sudo[360206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:15:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:15:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:15:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:15:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:15:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:15:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:15:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:15:02 compute-0 nova_compute[259627]: 2025-10-14 09:15:02.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:03 compute-0 podman[360269]: 2025-10-14 09:15:03.077204427 +0000 UTC m=+0.046305953 container create f98592d093d6cbaad184b5fd1cdadc3d42e35df99c0fc8fce23a5b73e5500a75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_heisenberg, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct 14 09:15:03 compute-0 systemd[1]: Started libpod-conmon-f98592d093d6cbaad184b5fd1cdadc3d42e35df99c0fc8fce23a5b73e5500a75.scope.
Oct 14 09:15:03 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:15:03 compute-0 podman[360269]: 2025-10-14 09:15:03.05664187 +0000 UTC m=+0.025743416 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:15:03 compute-0 podman[360269]: 2025-10-14 09:15:03.159131946 +0000 UTC m=+0.128233492 container init f98592d093d6cbaad184b5fd1cdadc3d42e35df99c0fc8fce23a5b73e5500a75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_heisenberg, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 09:15:03 compute-0 podman[360269]: 2025-10-14 09:15:03.169643475 +0000 UTC m=+0.138745001 container start f98592d093d6cbaad184b5fd1cdadc3d42e35df99c0fc8fce23a5b73e5500a75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:15:03 compute-0 busy_heisenberg[360285]: 167 167
Oct 14 09:15:03 compute-0 podman[360269]: 2025-10-14 09:15:03.173523981 +0000 UTC m=+0.142625507 container attach f98592d093d6cbaad184b5fd1cdadc3d42e35df99c0fc8fce23a5b73e5500a75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_heisenberg, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:15:03 compute-0 systemd[1]: libpod-f98592d093d6cbaad184b5fd1cdadc3d42e35df99c0fc8fce23a5b73e5500a75.scope: Deactivated successfully.
Oct 14 09:15:03 compute-0 podman[360290]: 2025-10-14 09:15:03.214155032 +0000 UTC m=+0.026477083 container died f98592d093d6cbaad184b5fd1cdadc3d42e35df99c0fc8fce23a5b73e5500a75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:15:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed6e651098525854a061599b8e46a21e4a0af53050637580d55340540be25849-merged.mount: Deactivated successfully.
Oct 14 09:15:03 compute-0 podman[360290]: 2025-10-14 09:15:03.247476884 +0000 UTC m=+0.059798895 container remove f98592d093d6cbaad184b5fd1cdadc3d42e35df99c0fc8fce23a5b73e5500a75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 09:15:03 compute-0 ceph-mon[74249]: pgmap v1823: 305 pgs: 305 active+clean; 293 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 293 op/s
Oct 14 09:15:03 compute-0 systemd[1]: libpod-conmon-f98592d093d6cbaad184b5fd1cdadc3d42e35df99c0fc8fce23a5b73e5500a75.scope: Deactivated successfully.
Oct 14 09:15:03 compute-0 podman[360312]: 2025-10-14 09:15:03.455412698 +0000 UTC m=+0.049252734 container create 1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:15:03 compute-0 systemd[1]: Started libpod-conmon-1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5.scope.
Oct 14 09:15:03 compute-0 nova_compute[259627]: 2025-10-14 09:15:03.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:03 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0e87475b1e026c77c5b3d7edbd152d79d0bcd9930ffd1ad37bebe5d51710c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0e87475b1e026c77c5b3d7edbd152d79d0bcd9930ffd1ad37bebe5d51710c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0e87475b1e026c77c5b3d7edbd152d79d0bcd9930ffd1ad37bebe5d51710c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0e87475b1e026c77c5b3d7edbd152d79d0bcd9930ffd1ad37bebe5d51710c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:15:03 compute-0 podman[360312]: 2025-10-14 09:15:03.532084628 +0000 UTC m=+0.125924664 container init 1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 09:15:03 compute-0 podman[360312]: 2025-10-14 09:15:03.431892389 +0000 UTC m=+0.025732445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:15:03 compute-0 podman[360312]: 2025-10-14 09:15:03.539276445 +0000 UTC m=+0.133116481 container start 1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 09:15:03 compute-0 podman[360312]: 2025-10-14 09:15:03.548728448 +0000 UTC m=+0.142568504 container attach 1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 14 09:15:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 293 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 173 op/s
Oct 14 09:15:04 compute-0 practical_morse[360329]: {
Oct 14 09:15:04 compute-0 practical_morse[360329]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:15:04 compute-0 practical_morse[360329]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:15:04 compute-0 practical_morse[360329]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:15:04 compute-0 practical_morse[360329]:         "osd_id": 2,
Oct 14 09:15:04 compute-0 practical_morse[360329]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:15:04 compute-0 practical_morse[360329]:         "type": "bluestore"
Oct 14 09:15:04 compute-0 practical_morse[360329]:     },
Oct 14 09:15:04 compute-0 practical_morse[360329]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:15:04 compute-0 practical_morse[360329]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:15:04 compute-0 practical_morse[360329]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:15:04 compute-0 practical_morse[360329]:         "osd_id": 1,
Oct 14 09:15:04 compute-0 practical_morse[360329]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:15:04 compute-0 practical_morse[360329]:         "type": "bluestore"
Oct 14 09:15:04 compute-0 practical_morse[360329]:     },
Oct 14 09:15:04 compute-0 practical_morse[360329]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:15:04 compute-0 practical_morse[360329]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:15:04 compute-0 practical_morse[360329]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:15:04 compute-0 practical_morse[360329]:         "osd_id": 0,
Oct 14 09:15:04 compute-0 practical_morse[360329]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:15:04 compute-0 practical_morse[360329]:         "type": "bluestore"
Oct 14 09:15:04 compute-0 practical_morse[360329]:     }
Oct 14 09:15:04 compute-0 practical_morse[360329]: }
Oct 14 09:15:04 compute-0 systemd[1]: libpod-1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5.scope: Deactivated successfully.
Oct 14 09:15:04 compute-0 podman[360312]: 2025-10-14 09:15:04.637207086 +0000 UTC m=+1.231047132 container died 1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:15:04 compute-0 systemd[1]: libpod-1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5.scope: Consumed 1.034s CPU time.
Oct 14 09:15:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f0e87475b1e026c77c5b3d7edbd152d79d0bcd9930ffd1ad37bebe5d51710c0-merged.mount: Deactivated successfully.
Oct 14 09:15:04 compute-0 podman[360312]: 2025-10-14 09:15:04.719249448 +0000 UTC m=+1.313089494 container remove 1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 09:15:04 compute-0 systemd[1]: libpod-conmon-1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5.scope: Deactivated successfully.
Oct 14 09:15:04 compute-0 sudo[360206]: pam_unix(sudo:session): session closed for user root
Oct 14 09:15:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:15:04 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:15:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:15:04 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:15:04 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev ca4426c9-0da5-42f9-99bc-db50c1b0b77e does not exist
Oct 14 09:15:04 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 8adcd819-fe73-4c76-8ef8-147c3e945179 does not exist
Oct 14 09:15:04 compute-0 sudo[360374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:15:04 compute-0 sudo[360374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:15:04 compute-0 sudo[360374]: pam_unix(sudo:session): session closed for user root
Oct 14 09:15:04 compute-0 sudo[360399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:15:04 compute-0 sudo[360399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:15:04 compute-0 sudo[360399]: pam_unix(sudo:session): session closed for user root
Oct 14 09:15:05 compute-0 nova_compute[259627]: 2025-10-14 09:15:05.016 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 09:15:05 compute-0 ceph-mon[74249]: pgmap v1824: 305 pgs: 305 active+clean; 293 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 173 op/s
Oct 14 09:15:05 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:15:05 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:15:05 compute-0 nova_compute[259627]: 2025-10-14 09:15:05.542 2 DEBUG nova.compute.manager [req-b3c6db00-f37c-432f-bb48-01bb278e471d req-48534a61-f5ab-481e-86a6-620e0d4674ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Received event network-vif-plugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:05 compute-0 nova_compute[259627]: 2025-10-14 09:15:05.543 2 DEBUG oslo_concurrency.lockutils [req-b3c6db00-f37c-432f-bb48-01bb278e471d req-48534a61-f5ab-481e-86a6-620e0d4674ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "84968701-f6c5-4798-888e-fa0f3311adca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:05 compute-0 nova_compute[259627]: 2025-10-14 09:15:05.543 2 DEBUG oslo_concurrency.lockutils [req-b3c6db00-f37c-432f-bb48-01bb278e471d req-48534a61-f5ab-481e-86a6-620e0d4674ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:05 compute-0 nova_compute[259627]: 2025-10-14 09:15:05.543 2 DEBUG oslo_concurrency.lockutils [req-b3c6db00-f37c-432f-bb48-01bb278e471d req-48534a61-f5ab-481e-86a6-620e0d4674ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:05 compute-0 nova_compute[259627]: 2025-10-14 09:15:05.544 2 DEBUG nova.compute.manager [req-b3c6db00-f37c-432f-bb48-01bb278e471d req-48534a61-f5ab-481e-86a6-620e0d4674ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] No waiting events found dispatching network-vif-plugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:05 compute-0 nova_compute[259627]: 2025-10-14 09:15:05.544 2 WARNING nova.compute.manager [req-b3c6db00-f37c-432f-bb48-01bb278e471d req-48534a61-f5ab-481e-86a6-620e0d4674ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Received unexpected event network-vif-plugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 for instance with vm_state active and task_state None.
Oct 14 09:15:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:15:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2024625731' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:15:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:15:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2024625731' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:15:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 326 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.1 MiB/s wr, 303 op/s
Oct 14 09:15:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.197 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2024625731' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:15:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2024625731' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.573 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.573 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.577 2 DEBUG oslo_concurrency.lockutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "84968701-f6c5-4798-888e-fa0f3311adca" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.578 2 DEBUG oslo_concurrency.lockutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.579 2 DEBUG oslo_concurrency.lockutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "84968701-f6c5-4798-888e-fa0f3311adca-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.579 2 DEBUG oslo_concurrency.lockutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.579 2 DEBUG oslo_concurrency.lockutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.580 2 INFO nova.compute.manager [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Terminating instance
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.581 2 DEBUG nova.compute.manager [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.587 2 DEBUG nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:15:06 compute-0 kernel: tapa8c2f2be-f2 (unregistering): left promiscuous mode
Oct 14 09:15:06 compute-0 NetworkManager[44885]: <info>  [1760433306.6406] device (tapa8c2f2be-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.650 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.651 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:06 compute-0 ovn_controller[152662]: 2025-10-14T09:15:06Z|01081|binding|INFO|Releasing lport a8c2f2be-f25a-4512-b8a2-17b2b695ce52 from this chassis (sb_readonly=0)
Oct 14 09:15:06 compute-0 ovn_controller[152662]: 2025-10-14T09:15:06Z|01082|binding|INFO|Setting lport a8c2f2be-f25a-4512-b8a2-17b2b695ce52 down in Southbound
Oct 14 09:15:06 compute-0 ovn_controller[152662]: 2025-10-14T09:15:06Z|01083|binding|INFO|Removing iface tapa8c2f2be-f2 ovn-installed in OVS
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.664 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.664 2 INFO nova.compute.claims [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:15:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.662 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:58:a4 10.100.0.12'], port_security=['fa:16:3e:c1:58:a4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '84968701-f6c5-4798-888e-fa0f3311adca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=a8c2f2be-f25a-4512-b8a2-17b2b695ce52) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:15:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.663 162547 INFO neutron.agent.ovn.metadata.agent [-] Port a8c2f2be-f25a-4512-b8a2-17b2b695ce52 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e unbound from our chassis
Oct 14 09:15:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.665 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e
Oct 14 09:15:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.696 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[653f80f6-21ff-456c-b079-58ae70498335]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:06 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000066.scope: Deactivated successfully.
Oct 14 09:15:06 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000066.scope: Consumed 4.973s CPU time.
Oct 14 09:15:06 compute-0 systemd-machined[214636]: Machine qemu-127-instance-00000066 terminated.
Oct 14 09:15:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.737 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d424f0-181f-4ab4-b514-c71862bb85c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.740 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1d807eaa-f7c7-4b88-b4bc-f4cccd57c4c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.772 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[240d9aee-76bc-493f-8c96-eba795ff227f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.795 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d74380b8-3da3-4b36-b92c-bafe066a94ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 17, 'rx_bytes': 958, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 17, 'rx_bytes': 958, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360436, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.817 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[140cc4e9-9740-4cc9-ad9c-b5209791ec95]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360438, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360438, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.819 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.826 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.827 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:15:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.827 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.827 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.832 2 INFO nova.virt.libvirt.driver [-] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Instance destroyed successfully.
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.833 2 DEBUG nova.objects.instance [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'resources' on Instance uuid 84968701-f6c5-4798-888e-fa0f3311adca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.847 2 DEBUG nova.virt.libvirt.vif [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:14:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1896126591',display_name='tempest-ServersTestJSON-server-1896126591',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1896126591',id=102,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:15:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-g3f94nzf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ra
m='0',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:15:02Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=84968701-f6c5-4798-888e-fa0f3311adca,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.848 2 DEBUG nova.network.os_vif_util [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.849 2 DEBUG nova.network.os_vif_util [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:58:a4,bridge_name='br-int',has_traffic_filtering=True,id=a8c2f2be-f25a-4512-b8a2-17b2b695ce52,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2f2be-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.850 2 DEBUG os_vif [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:58:a4,bridge_name='br-int',has_traffic_filtering=True,id=a8c2f2be-f25a-4512-b8a2-17b2b695ce52,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2f2be-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.856 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8c2f2be-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.863 2 INFO os_vif [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:58:a4,bridge_name='br-int',has_traffic_filtering=True,id=a8c2f2be-f25a-4512-b8a2-17b2b695ce52,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2f2be-f2')
Oct 14 09:15:06 compute-0 nova_compute[259627]: 2025-10-14 09:15:06.887 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:07.031 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:07.031 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:07.032 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:07 compute-0 ceph-mon[74249]: pgmap v1825: 305 pgs: 305 active+clean; 326 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.1 MiB/s wr, 303 op/s
Oct 14 09:15:07 compute-0 kernel: tap175a9914-00 (unregistering): left promiscuous mode
Oct 14 09:15:07 compute-0 NetworkManager[44885]: <info>  [1760433307.3252] device (tap175a9914-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:15:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:15:07 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/71834738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:07 compute-0 ovn_controller[152662]: 2025-10-14T09:15:07Z|01084|binding|INFO|Releasing lport 175a9914-0068-4aeb-b4b6-d501212d3374 from this chassis (sb_readonly=0)
Oct 14 09:15:07 compute-0 ovn_controller[152662]: 2025-10-14T09:15:07Z|01085|binding|INFO|Setting lport 175a9914-0068-4aeb-b4b6-d501212d3374 down in Southbound
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:07 compute-0 ovn_controller[152662]: 2025-10-14T09:15:07Z|01086|binding|INFO|Removing iface tap175a9914-00 ovn-installed in OVS
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.389 2 INFO nova.virt.libvirt.driver [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Deleting instance files /var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca_del
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.390 2 INFO nova.virt.libvirt.driver [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Deletion of /var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca_del complete
Oct 14 09:15:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:07.386 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:d8:7e 10.100.0.7'], port_security=['fa:16:3e:6a:d8:7e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e1aea504-3ecf-4273-a867-66afb39de726', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a53d982-8cf9-44ff-9b16-dda957aa7729', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4eae338e7d54d159033a20bc7460935', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a0a095f-2308-407d-90d9-6acaa688f93c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab8a6447-c4e7-4ebf-b042-eb7d83ec88dc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=175a9914-0068-4aeb-b4b6-d501212d3374) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:15:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:07.390 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 175a9914-0068-4aeb-b4b6-d501212d3374 in datapath 8a53d982-8cf9-44ff-9b16-dda957aa7729 unbound from our chassis
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:07.393 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8a53d982-8cf9-44ff-9b16-dda957aa7729 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:15:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:07.394 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6986f78b-0c96-4ab8-a4b4-5506d934f910]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.398 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.402 2 DEBUG nova.compute.provider_tree [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.425 2 DEBUG nova.scheduler.client.report [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:15:07 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000065.scope: Deactivated successfully.
Oct 14 09:15:07 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000065.scope: Consumed 12.682s CPU time.
Oct 14 09:15:07 compute-0 systemd-machined[214636]: Machine qemu-126-instance-00000065 terminated.
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.467 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.467 2 DEBUG nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.475 2 INFO nova.compute.manager [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Took 0.89 seconds to destroy the instance on the hypervisor.
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.475 2 DEBUG oslo.service.loopingcall [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.476 2 DEBUG nova.compute.manager [-] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.476 2 DEBUG nova.network.neutron [-] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.519 2 DEBUG nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.520 2 DEBUG nova.network.neutron [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.541 2 INFO nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.567 2 DEBUG nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.654 2 DEBUG nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.656 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.657 2 INFO nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Creating image(s)
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.690 2 DEBUG nova.storage.rbd_utils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 33969555-fe06-4613-b244-d03c9b4180ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.719 2 DEBUG nova.storage.rbd_utils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 33969555-fe06-4613-b244-d03c9b4180ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.745 2 DEBUG nova.storage.rbd_utils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 33969555-fe06-4613-b244-d03c9b4180ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.749 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.802 2 DEBUG nova.compute.manager [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Received event network-vif-unplugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.802 2 DEBUG oslo_concurrency.lockutils [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "84968701-f6c5-4798-888e-fa0f3311adca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.802 2 DEBUG oslo_concurrency.lockutils [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.803 2 DEBUG oslo_concurrency.lockutils [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.803 2 DEBUG nova.compute.manager [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] No waiting events found dispatching network-vif-unplugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.803 2 DEBUG nova.compute.manager [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Received event network-vif-unplugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.803 2 DEBUG nova.compute.manager [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Received event network-vif-plugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.803 2 DEBUG oslo_concurrency.lockutils [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "84968701-f6c5-4798-888e-fa0f3311adca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.804 2 DEBUG oslo_concurrency.lockutils [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.804 2 DEBUG oslo_concurrency.lockutils [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.804 2 DEBUG nova.compute.manager [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] No waiting events found dispatching network-vif-plugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.804 2 WARNING nova.compute.manager [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Received unexpected event network-vif-plugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 for instance with vm_state active and task_state deleting.
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.842 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.843 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.843 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.843 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.866 2 DEBUG nova.storage.rbd_utils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 33969555-fe06-4613-b244-d03c9b4180ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.869 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 33969555-fe06-4613-b244-d03c9b4180ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:07 compute-0 nova_compute[259627]: 2025-10-14 09:15:07.914 2 DEBUG nova.policy [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e992bcb79c4946a8985e3df25eb216ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d24993a343a425dbddac7e32be0c86b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:15:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 326 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 202 op/s
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.036 2 INFO nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance shutdown successfully after 13 seconds.
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.047 2 INFO nova.virt.libvirt.driver [-] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance destroyed successfully.
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.048 2 DEBUG nova.objects.instance [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'numa_topology' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.070 2 INFO nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Attempting rescue
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.073 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.085 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.086 2 INFO nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Creating image(s)
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.129 2 DEBUG nova.storage.rbd_utils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.134 2 DEBUG nova.objects.instance [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.172 2 DEBUG nova.storage.rbd_utils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.203 2 DEBUG nova.storage.rbd_utils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.209 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.253 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 33969555-fe06-4613-b244-d03c9b4180ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:08 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/71834738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.312 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.313 2 DEBUG oslo_concurrency.lockutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.314 2 DEBUG oslo_concurrency.lockutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.315 2 DEBUG oslo_concurrency.lockutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.334 2 DEBUG nova.storage.rbd_utils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.339 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e1aea504-3ecf-4273-a867-66afb39de726_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.382 2 DEBUG nova.storage.rbd_utils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] resizing rbd image 33969555-fe06-4613-b244-d03c9b4180ba_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.532 2 DEBUG nova.network.neutron [-] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.560 2 DEBUG nova.objects.instance [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'migration_context' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.566 2 INFO nova.compute.manager [-] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Took 1.09 seconds to deallocate network for instance.
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.573 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.573 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Ensure instance console log exists: /var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.574 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.574 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.574 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.604 2 DEBUG oslo_concurrency.lockutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.605 2 DEBUG oslo_concurrency.lockutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.653 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e1aea504-3ecf-4273-a867-66afb39de726_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.654 2 DEBUG nova.objects.instance [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'migration_context' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.667 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.668 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Start _get_guest_xml network_info=[{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "vif_mac": "fa:16:3e:6a:d8:7e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.668 2 DEBUG nova.objects.instance [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'resources' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.697 2 WARNING nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.704 2 DEBUG nova.virt.libvirt.host [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.705 2 DEBUG nova.virt.libvirt.host [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.708 2 DEBUG nova.virt.libvirt.host [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.708 2 DEBUG nova.virt.libvirt.host [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.709 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.709 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.710 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.710 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.710 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.711 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.711 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.711 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.712 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.712 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.712 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.713 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.713 2 DEBUG nova.objects.instance [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.747 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:08 compute-0 nova_compute[259627]: 2025-10-14 09:15:08.800 2 DEBUG oslo_concurrency.processutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:09 compute-0 nova_compute[259627]: 2025-10-14 09:15:09.106 2 DEBUG nova.network.neutron [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Successfully created port: 7e42bf44-c1f8-49df-bd5f-a26abe43a832 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:15:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:15:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1196220235' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:09 compute-0 nova_compute[259627]: 2025-10-14 09:15:09.232 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:09 compute-0 nova_compute[259627]: 2025-10-14 09:15:09.234 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:09 compute-0 ceph-mon[74249]: pgmap v1826: 305 pgs: 305 active+clean; 326 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 202 op/s
Oct 14 09:15:09 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1196220235' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:15:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3096615696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:09 compute-0 nova_compute[259627]: 2025-10-14 09:15:09.326 2 DEBUG oslo_concurrency.processutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:09 compute-0 nova_compute[259627]: 2025-10-14 09:15:09.334 2 DEBUG nova.compute.provider_tree [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:15:09 compute-0 nova_compute[259627]: 2025-10-14 09:15:09.356 2 DEBUG nova.scheduler.client.report [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:15:09 compute-0 nova_compute[259627]: 2025-10-14 09:15:09.378 2 DEBUG oslo_concurrency.lockutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:09 compute-0 nova_compute[259627]: 2025-10-14 09:15:09.412 2 INFO nova.scheduler.client.report [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Deleted allocations for instance 84968701-f6c5-4798-888e-fa0f3311adca
Oct 14 09:15:09 compute-0 nova_compute[259627]: 2025-10-14 09:15:09.494 2 DEBUG oslo_concurrency.lockutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:15:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2882457015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:09 compute-0 nova_compute[259627]: 2025-10-14 09:15:09.750 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:09 compute-0 nova_compute[259627]: 2025-10-14 09:15:09.751 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:09 compute-0 nova_compute[259627]: 2025-10-14 09:15:09.899 2 DEBUG nova.network.neutron [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Successfully updated port: 7e42bf44-c1f8-49df-bd5f-a26abe43a832 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:15:09 compute-0 nova_compute[259627]: 2025-10-14 09:15:09.915 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:15:09 compute-0 nova_compute[259627]: 2025-10-14 09:15:09.916 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquired lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:15:09 compute-0 nova_compute[259627]: 2025-10-14 09:15:09.916 2 DEBUG nova.network.neutron [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:15:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 326 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 202 op/s
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.017 2 DEBUG nova.compute.manager [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-unplugged-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.018 2 DEBUG oslo_concurrency.lockutils [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.019 2 DEBUG oslo_concurrency.lockutils [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.019 2 DEBUG oslo_concurrency.lockutils [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.019 2 DEBUG nova.compute.manager [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] No waiting events found dispatching network-vif-unplugged-175a9914-0068-4aeb-b4b6-d501212d3374 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.020 2 WARNING nova.compute.manager [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received unexpected event network-vif-unplugged-175a9914-0068-4aeb-b4b6-d501212d3374 for instance with vm_state active and task_state rescuing.
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.020 2 DEBUG nova.compute.manager [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.021 2 DEBUG oslo_concurrency.lockutils [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.021 2 DEBUG oslo_concurrency.lockutils [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.022 2 DEBUG oslo_concurrency.lockutils [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.022 2 DEBUG nova.compute.manager [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] No waiting events found dispatching network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.023 2 WARNING nova.compute.manager [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received unexpected event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 for instance with vm_state active and task_state rescuing.
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.023 2 DEBUG nova.compute.manager [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Received event network-vif-deleted-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.118 2 DEBUG nova.compute.manager [req-ac877969-e14f-410f-914b-e17918705262 req-cad6fe36-cc15-4060-ad85-367a9c6d90ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-changed-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.118 2 DEBUG nova.compute.manager [req-ac877969-e14f-410f-914b-e17918705262 req-cad6fe36-cc15-4060-ad85-367a9c6d90ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Refreshing instance network info cache due to event network-changed-7e42bf44-c1f8-49df-bd5f-a26abe43a832. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.118 2 DEBUG oslo_concurrency.lockutils [req-ac877969-e14f-410f-914b-e17918705262 req-cad6fe36-cc15-4060-ad85-367a9c6d90ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.175 2 DEBUG nova.network.neutron [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:15:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:15:10 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/502171530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.194 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.195 2 DEBUG nova.virt.libvirt.vif [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1533575249',display_name='tempest-ServerRescueTestJSONUnderV235-server-1533575249',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1533575249',id=101,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:14:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4eae338e7d54d159033a20bc7460935',ramdisk_id='',reservation_id='r-tm6flkse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2037260230',owner_user_name='tempest-ServerRescueTestJSONUnderV235-2037260230-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:51Z,user_data=None,user_id='648aaa75d8974d439d8ebe331c3d6568',uuid=e1aea504-3ecf-4273-a867-66afb39de726,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "vif_mac": "fa:16:3e:6a:d8:7e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.196 2 DEBUG nova.network.os_vif_util [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Converting VIF {"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "vif_mac": "fa:16:3e:6a:d8:7e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.196 2 DEBUG nova.network.os_vif_util [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:d8:7e,bridge_name='br-int',has_traffic_filtering=True,id=175a9914-0068-4aeb-b4b6-d501212d3374,network=Network(8a53d982-8cf9-44ff-9b16-dda957aa7729),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap175a9914-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.198 2 DEBUG nova.objects.instance [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'pci_devices' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.215 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:15:10 compute-0 nova_compute[259627]:   <uuid>e1aea504-3ecf-4273-a867-66afb39de726</uuid>
Oct 14 09:15:10 compute-0 nova_compute[259627]:   <name>instance-00000065</name>
Oct 14 09:15:10 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:15:10 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:15:10 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1533575249</nova:name>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:15:08</nova:creationTime>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:15:10 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:15:10 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:15:10 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:15:10 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:15:10 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:15:10 compute-0 nova_compute[259627]:         <nova:user uuid="648aaa75d8974d439d8ebe331c3d6568">tempest-ServerRescueTestJSONUnderV235-2037260230-project-member</nova:user>
Oct 14 09:15:10 compute-0 nova_compute[259627]:         <nova:project uuid="c4eae338e7d54d159033a20bc7460935">tempest-ServerRescueTestJSONUnderV235-2037260230</nova:project>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:15:10 compute-0 nova_compute[259627]:         <nova:port uuid="175a9914-0068-4aeb-b4b6-d501212d3374">
Oct 14 09:15:10 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:15:10 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:15:10 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <system>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <entry name="serial">e1aea504-3ecf-4273-a867-66afb39de726</entry>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <entry name="uuid">e1aea504-3ecf-4273-a867-66afb39de726</entry>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     </system>
Oct 14 09:15:10 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:15:10 compute-0 nova_compute[259627]:   <os>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:   </os>
Oct 14 09:15:10 compute-0 nova_compute[259627]:   <features>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:   </features>
Oct 14 09:15:10 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:15:10 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:15:10 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e1aea504-3ecf-4273-a867-66afb39de726_disk.rescue">
Oct 14 09:15:10 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       </source>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:15:10 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e1aea504-3ecf-4273-a867-66afb39de726_disk">
Oct 14 09:15:10 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       </source>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:15:10 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <target dev="vdb" bus="virtio"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e1aea504-3ecf-4273-a867-66afb39de726_disk.config.rescue">
Oct 14 09:15:10 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       </source>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:15:10 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:6a:d8:7e"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <target dev="tap175a9914-00"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/console.log" append="off"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <video>
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     </video>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:15:10 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:15:10 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:15:10 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:15:10 compute-0 nova_compute[259627]: </domain>
Oct 14 09:15:10 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.225 2 INFO nova.virt.libvirt.driver [-] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance destroyed successfully.
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.281 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.282 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.282 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.283 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] No VIF found with MAC fa:16:3e:6a:d8:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.284 2 INFO nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Using config drive
Oct 14 09:15:10 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3096615696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:10 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2882457015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:10 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/502171530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.315 2 DEBUG nova.storage.rbd_utils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.331 2 DEBUG nova.objects.instance [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.359 2 DEBUG nova.objects.instance [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'keypairs' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.440 2 DEBUG oslo_concurrency.lockutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "da40c115-048e-4844-812e-7e65e25bfb3f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.441 2 DEBUG oslo_concurrency.lockutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.441 2 DEBUG oslo_concurrency.lockutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.442 2 DEBUG oslo_concurrency.lockutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.442 2 DEBUG oslo_concurrency.lockutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.445 2 INFO nova.compute.manager [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Terminating instance
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.446 2 DEBUG nova.compute.manager [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:15:10 compute-0 kernel: tapc2ac8abe-0e (unregistering): left promiscuous mode
Oct 14 09:15:10 compute-0 NetworkManager[44885]: <info>  [1760433310.5134] device (tapc2ac8abe-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:10 compute-0 ovn_controller[152662]: 2025-10-14T09:15:10Z|01087|binding|INFO|Releasing lport c2ac8abe-0e61-4769-9529-3b391568e6b9 from this chassis (sb_readonly=0)
Oct 14 09:15:10 compute-0 ovn_controller[152662]: 2025-10-14T09:15:10Z|01088|binding|INFO|Setting lport c2ac8abe-0e61-4769-9529-3b391568e6b9 down in Southbound
Oct 14 09:15:10 compute-0 ovn_controller[152662]: 2025-10-14T09:15:10Z|01089|binding|INFO|Removing iface tapc2ac8abe-0e ovn-installed in OVS
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.533 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:70:22 10.100.0.3'], port_security=['fa:16:3e:fa:70:22 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'da40c115-048e-4844-812e-7e65e25bfb3f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c2ac8abe-0e61-4769-9529-3b391568e6b9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:15:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.535 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c2ac8abe-0e61-4769-9529-3b391568e6b9 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e unbound from our chassis
Oct 14 09:15:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.537 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.558 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9b77d0-a7dc-438a-91f8-6c0d9e638eb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:10 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000064.scope: Deactivated successfully.
Oct 14 09:15:10 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000064.scope: Consumed 12.963s CPU time.
Oct 14 09:15:10 compute-0 systemd-machined[214636]: Machine qemu-125-instance-00000064 terminated.
Oct 14 09:15:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.592 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf6199f-ab03-435d-ab16-30479f3dc9f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.598 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6cdc0afa-d111-4904-a150-ad6cd5f2e8b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.624 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[80183ec1-f7f1-4cdd-a5ed-611efc31905b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.644 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8bada914-ed13-4746-af92-f2e528ca3b07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 19, 'rx_bytes': 1000, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 19, 'rx_bytes': 1000, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360885, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.660 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2222d4f9-b14d-43a2-ac28-885f91de3ebf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360886, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360886, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.662 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.720 2 INFO nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Creating config drive at /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config.rescue
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.725 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgldtc1i2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.727 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.728 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:15:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.728 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.728 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.774 2 INFO nova.virt.libvirt.driver [-] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Instance destroyed successfully.
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.775 2 DEBUG nova.objects.instance [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'resources' on Instance uuid da40c115-048e-4844-812e-7e65e25bfb3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.788 2 DEBUG nova.virt.libvirt.vif [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:14:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1896126591',display_name='tempest-ServersTestJSON-server-1896126591',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1896126591',id=100,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:14:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-95sxw262',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:14:47Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=da40c115-048e-4844-812e-7e65e25bfb3f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.788 2 DEBUG nova.network.os_vif_util [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.789 2 DEBUG nova.network.os_vif_util [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:70:22,bridge_name='br-int',has_traffic_filtering=True,id=c2ac8abe-0e61-4769-9529-3b391568e6b9,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2ac8abe-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.789 2 DEBUG os_vif [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:70:22,bridge_name='br-int',has_traffic_filtering=True,id=c2ac8abe-0e61-4769-9529-3b391568e6b9,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2ac8abe-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.791 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2ac8abe-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.798 2 INFO os_vif [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:70:22,bridge_name='br-int',has_traffic_filtering=True,id=c2ac8abe-0e61-4769-9529-3b391568e6b9,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2ac8abe-0e')
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.878 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgldtc1i2" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.900 2 DEBUG nova.storage.rbd_utils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:10 compute-0 nova_compute[259627]: 2025-10-14 09:15:10.905 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config.rescue e1aea504-3ecf-4273-a867-66afb39de726_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.094 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config.rescue e1aea504-3ecf-4273-a867-66afb39de726_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.095 2 INFO nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Deleting local config drive /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config.rescue because it was imported into RBD.
Oct 14 09:15:11 compute-0 kernel: tap175a9914-00: entered promiscuous mode
Oct 14 09:15:11 compute-0 NetworkManager[44885]: <info>  [1760433311.1460] manager: (tap175a9914-00): new Tun device (/org/freedesktop/NetworkManager/Devices/438)
Oct 14 09:15:11 compute-0 systemd-udevd[360876]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:15:11 compute-0 ovn_controller[152662]: 2025-10-14T09:15:11Z|01090|binding|INFO|Claiming lport 175a9914-0068-4aeb-b4b6-d501212d3374 for this chassis.
Oct 14 09:15:11 compute-0 ovn_controller[152662]: 2025-10-14T09:15:11Z|01091|binding|INFO|175a9914-0068-4aeb-b4b6-d501212d3374: Claiming fa:16:3e:6a:d8:7e 10.100.0.7
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:11.154 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:d8:7e 10.100.0.7'], port_security=['fa:16:3e:6a:d8:7e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e1aea504-3ecf-4273-a867-66afb39de726', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a53d982-8cf9-44ff-9b16-dda957aa7729', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4eae338e7d54d159033a20bc7460935', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9a0a095f-2308-407d-90d9-6acaa688f93c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab8a6447-c4e7-4ebf-b042-eb7d83ec88dc, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=175a9914-0068-4aeb-b4b6-d501212d3374) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:15:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:11.155 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 175a9914-0068-4aeb-b4b6-d501212d3374 in datapath 8a53d982-8cf9-44ff-9b16-dda957aa7729 bound to our chassis
Oct 14 09:15:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:11.156 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8a53d982-8cf9-44ff-9b16-dda957aa7729 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:15:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:11.156 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[511e92de-060d-4e2b-8e29-223aa224c47e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:11 compute-0 NetworkManager[44885]: <info>  [1760433311.1619] device (tap175a9914-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:15:11 compute-0 NetworkManager[44885]: <info>  [1760433311.1627] device (tap175a9914-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:15:11 compute-0 ovn_controller[152662]: 2025-10-14T09:15:11Z|01092|binding|INFO|Setting lport 175a9914-0068-4aeb-b4b6-d501212d3374 up in Southbound
Oct 14 09:15:11 compute-0 ovn_controller[152662]: 2025-10-14T09:15:11Z|01093|binding|INFO|Setting lport 175a9914-0068-4aeb-b4b6-d501212d3374 ovn-installed in OVS
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:11 compute-0 systemd-machined[214636]: New machine qemu-128-instance-00000065.
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.184 2 INFO nova.virt.libvirt.driver [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Deleting instance files /var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f_del
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.185 2 INFO nova.virt.libvirt.driver [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Deletion of /var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f_del complete
Oct 14 09:15:11 compute-0 systemd[1]: Started Virtual Machine qemu-128-instance-00000065.
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.230 2 INFO nova.compute.manager [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.231 2 DEBUG oslo.service.loopingcall [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.232 2 DEBUG nova.compute.manager [-] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.232 2 DEBUG nova.network.neutron [-] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:15:11 compute-0 ceph-mon[74249]: pgmap v1827: 305 pgs: 305 active+clean; 326 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 202 op/s
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.690 2 DEBUG nova.network.neutron [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Updating instance_info_cache with network_info: [{"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.723 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Releasing lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.724 2 DEBUG nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance network_info: |[{"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.725 2 DEBUG oslo_concurrency.lockutils [req-ac877969-e14f-410f-914b-e17918705262 req-cad6fe36-cc15-4060-ad85-367a9c6d90ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.726 2 DEBUG nova.network.neutron [req-ac877969-e14f-410f-914b-e17918705262 req-cad6fe36-cc15-4060-ad85-367a9c6d90ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Refreshing network info cache for port 7e42bf44-c1f8-49df-bd5f-a26abe43a832 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.732 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Start _get_guest_xml network_info=[{"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.739 2 WARNING nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.746 2 DEBUG nova.virt.libvirt.host [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.747 2 DEBUG nova.virt.libvirt.host [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.755 2 DEBUG nova.virt.libvirt.host [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.756 2 DEBUG nova.virt.libvirt.host [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.757 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.758 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.759 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.759 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.760 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.761 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.761 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.762 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.763 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.763 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.764 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.764 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.770 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.922 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for e1aea504-3ecf-4273-a867-66afb39de726 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.923 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433311.9219284, e1aea504-3ecf-4273-a867-66afb39de726 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.924 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] VM Resumed (Lifecycle Event)
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.933 2 DEBUG nova.compute.manager [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.978 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.982 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:15:11 compute-0 nova_compute[259627]: 2025-10-14 09:15:11.984 2 DEBUG nova.network.neutron [-] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:15:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 372 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.8 MiB/s wr, 273 op/s
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.033 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.034 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433311.9241364, e1aea504-3ecf-4273-a867-66afb39de726 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.034 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] VM Started (Lifecycle Event)
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.035 2 INFO nova.compute.manager [-] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Took 0.80 seconds to deallocate network for instance.
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.074 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.077 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.116 2 DEBUG oslo_concurrency.lockutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.117 2 DEBUG oslo_concurrency.lockutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.150 2 DEBUG nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Received event network-vif-unplugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.150 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.151 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.151 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.151 2 DEBUG nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] No waiting events found dispatching network-vif-unplugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.151 2 WARNING nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Received unexpected event network-vif-unplugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 for instance with vm_state deleted and task_state None.
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.151 2 DEBUG nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Received event network-vif-plugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.152 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.152 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.152 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.152 2 DEBUG nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] No waiting events found dispatching network-vif-plugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.153 2 WARNING nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Received unexpected event network-vif-plugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 for instance with vm_state deleted and task_state None.
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.153 2 DEBUG nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.153 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.153 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.153 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.154 2 DEBUG nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] No waiting events found dispatching network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.154 2 WARNING nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received unexpected event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 for instance with vm_state rescued and task_state None.
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.154 2 DEBUG nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.154 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.154 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.154 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.155 2 DEBUG nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] No waiting events found dispatching network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.155 2 WARNING nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received unexpected event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 for instance with vm_state rescued and task_state None.
Oct 14 09:15:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:15:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2806722944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.222 2 DEBUG nova.compute.manager [req-9e638318-3838-4815-aef5-4ccea4db8040 req-b9b494d6-304c-4d9c-9167-198669497f57 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Received event network-vif-deleted-c2ac8abe-0e61-4769-9529-3b391568e6b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.226 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.258 2 DEBUG nova.storage.rbd_utils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 33969555-fe06-4613-b244-d03c9b4180ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.265 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:12 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2806722944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.318 2 DEBUG oslo_concurrency.processutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:15:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/478511998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.805 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:15:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3817994809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.808 2 DEBUG nova.virt.libvirt.vif [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-383304321',display_name='tempest-TestNetworkAdvancedServerOps-server-383304321',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-383304321',id=103,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNrPP24QakCw1fsyp4n1/agXWsNdRHbY8VHQeER4RKvAc+m7J2Hp3JOcegzdlhA2wYneNiK6O1lPBfYB/HncbHtqlMdS+KcigG2/AdZVDhSvn9p2YfNQ60fauvNg9v/ylQ==',key_name='tempest-TestNetworkAdvancedServerOps-1100962189',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-xg2zg09k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:15:07Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=33969555-fe06-4613-b244-d03c9b4180ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.809 2 DEBUG nova.network.os_vif_util [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.811 2 DEBUG nova.network.os_vif_util [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.814 2 DEBUG nova.objects.instance [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_devices' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.826 2 DEBUG oslo_concurrency.processutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.833 2 DEBUG nova.compute.provider_tree [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.840 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:15:12 compute-0 nova_compute[259627]:   <uuid>33969555-fe06-4613-b244-d03c9b4180ba</uuid>
Oct 14 09:15:12 compute-0 nova_compute[259627]:   <name>instance-00000067</name>
Oct 14 09:15:12 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:15:12 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:15:12 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-383304321</nova:name>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:15:11</nova:creationTime>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:15:12 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:15:12 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:15:12 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:15:12 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:15:12 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:15:12 compute-0 nova_compute[259627]:         <nova:user uuid="e992bcb79c4946a8985e3df25eb216ca">tempest-TestNetworkAdvancedServerOps-94788416-project-member</nova:user>
Oct 14 09:15:12 compute-0 nova_compute[259627]:         <nova:project uuid="2d24993a343a425dbddac7e32be0c86b">tempest-TestNetworkAdvancedServerOps-94788416</nova:project>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:15:12 compute-0 nova_compute[259627]:         <nova:port uuid="7e42bf44-c1f8-49df-bd5f-a26abe43a832">
Oct 14 09:15:12 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:15:12 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:15:12 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <system>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <entry name="serial">33969555-fe06-4613-b244-d03c9b4180ba</entry>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <entry name="uuid">33969555-fe06-4613-b244-d03c9b4180ba</entry>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     </system>
Oct 14 09:15:12 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:15:12 compute-0 nova_compute[259627]:   <os>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:   </os>
Oct 14 09:15:12 compute-0 nova_compute[259627]:   <features>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:   </features>
Oct 14 09:15:12 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:15:12 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:15:12 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/33969555-fe06-4613-b244-d03c9b4180ba_disk">
Oct 14 09:15:12 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       </source>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:15:12 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/33969555-fe06-4613-b244-d03c9b4180ba_disk.config">
Oct 14 09:15:12 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       </source>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:15:12 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:20:8d:8f"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <target dev="tap7e42bf44-c1"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba/console.log" append="off"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <video>
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     </video>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:15:12 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:15:12 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:15:12 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:15:12 compute-0 nova_compute[259627]: </domain>
Oct 14 09:15:12 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.843 2 DEBUG nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Preparing to wait for external event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.843 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.844 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.844 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.846 2 DEBUG nova.virt.libvirt.vif [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-383304321',display_name='tempest-TestNetworkAdvancedServerOps-server-383304321',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-383304321',id=103,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNrPP24QakCw1fsyp4n1/agXWsNdRHbY8VHQeER4RKvAc+m7J2Hp3JOcegzdlhA2wYneNiK6O1lPBfYB/HncbHtqlMdS+KcigG2/AdZVDhSvn9p2YfNQ60fauvNg9v/ylQ==',key_name='tempest-TestNetworkAdvancedServerOps-1100962189',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-xg2zg09k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:15:07Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=33969555-fe06-4613-b244-d03c9b4180ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.846 2 DEBUG nova.network.os_vif_util [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.848 2 DEBUG nova.network.os_vif_util [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.848 2 DEBUG os_vif [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.850 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.851 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.854 2 DEBUG nova.scheduler.client.report [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:15:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.864 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e42bf44-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.865 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7e42bf44-c1, col_values=(('external_ids', {'iface-id': '7e42bf44-c1f8-49df-bd5f-a26abe43a832', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:8d:8f', 'vm-uuid': '33969555-fe06-4613-b244-d03c9b4180ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.887 2 DEBUG oslo_concurrency.lockutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:12 compute-0 NetworkManager[44885]: <info>  [1760433312.8997] manager: (tap7e42bf44-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/439)
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.904 2 INFO os_vif [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1')
Oct 14 09:15:12 compute-0 nova_compute[259627]: 2025-10-14 09:15:12.907 2 INFO nova.scheduler.client.report [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Deleted allocations for instance da40c115-048e-4844-812e-7e65e25bfb3f
Oct 14 09:15:13 compute-0 nova_compute[259627]: 2025-10-14 09:15:13.006 2 DEBUG oslo_concurrency.lockutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:13 compute-0 nova_compute[259627]: 2025-10-14 09:15:13.018 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:15:13 compute-0 nova_compute[259627]: 2025-10-14 09:15:13.019 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:15:13 compute-0 nova_compute[259627]: 2025-10-14 09:15:13.019 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No VIF found with MAC fa:16:3e:20:8d:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:15:13 compute-0 nova_compute[259627]: 2025-10-14 09:15:13.021 2 INFO nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Using config drive
Oct 14 09:15:13 compute-0 nova_compute[259627]: 2025-10-14 09:15:13.053 2 DEBUG nova.storage.rbd_utils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 33969555-fe06-4613-b244-d03c9b4180ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:13 compute-0 ceph-mon[74249]: pgmap v1828: 305 pgs: 305 active+clean; 372 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.8 MiB/s wr, 273 op/s
Oct 14 09:15:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/478511998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3817994809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:13 compute-0 nova_compute[259627]: 2025-10-14 09:15:13.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:13 compute-0 nova_compute[259627]: 2025-10-14 09:15:13.621 2 INFO nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Creating config drive at /var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba/disk.config
Oct 14 09:15:13 compute-0 nova_compute[259627]: 2025-10-14 09:15:13.631 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9u32y4o3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:13 compute-0 nova_compute[259627]: 2025-10-14 09:15:13.693 2 DEBUG nova.network.neutron [req-ac877969-e14f-410f-914b-e17918705262 req-cad6fe36-cc15-4060-ad85-367a9c6d90ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Updated VIF entry in instance network info cache for port 7e42bf44-c1f8-49df-bd5f-a26abe43a832. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:15:13 compute-0 nova_compute[259627]: 2025-10-14 09:15:13.694 2 DEBUG nova.network.neutron [req-ac877969-e14f-410f-914b-e17918705262 req-cad6fe36-cc15-4060-ad85-367a9c6d90ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Updating instance_info_cache with network_info: [{"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:15:13 compute-0 nova_compute[259627]: 2025-10-14 09:15:13.715 2 DEBUG oslo_concurrency.lockutils [req-ac877969-e14f-410f-914b-e17918705262 req-cad6fe36-cc15-4060-ad85-367a9c6d90ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:15:13 compute-0 nova_compute[259627]: 2025-10-14 09:15:13.802 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9u32y4o3" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:13 compute-0 nova_compute[259627]: 2025-10-14 09:15:13.831 2 DEBUG nova.storage.rbd_utils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 33969555-fe06-4613-b244-d03c9b4180ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:13 compute-0 nova_compute[259627]: 2025-10-14 09:15:13.836 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba/disk.config 33969555-fe06-4613-b244-d03c9b4180ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 372 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 200 op/s
Oct 14 09:15:14 compute-0 nova_compute[259627]: 2025-10-14 09:15:14.177 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba/disk.config 33969555-fe06-4613-b244-d03c9b4180ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:14 compute-0 nova_compute[259627]: 2025-10-14 09:15:14.179 2 INFO nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Deleting local config drive /var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba/disk.config because it was imported into RBD.
Oct 14 09:15:14 compute-0 kernel: tap7e42bf44-c1: entered promiscuous mode
Oct 14 09:15:14 compute-0 NetworkManager[44885]: <info>  [1760433314.2266] manager: (tap7e42bf44-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/440)
Oct 14 09:15:14 compute-0 nova_compute[259627]: 2025-10-14 09:15:14.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:14 compute-0 ovn_controller[152662]: 2025-10-14T09:15:14Z|01094|binding|INFO|Claiming lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 for this chassis.
Oct 14 09:15:14 compute-0 ovn_controller[152662]: 2025-10-14T09:15:14Z|01095|binding|INFO|7e42bf44-c1f8-49df-bd5f-a26abe43a832: Claiming fa:16:3e:20:8d:8f 10.100.0.4
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.241 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:8d:8f 10.100.0.4'], port_security=['fa:16:3e:20:8d:8f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '33969555-fe06-4613-b244-d03c9b4180ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7fb2d92f-9be1-4133-9fba-da943dad4162', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10a0b33e-96f5-46d7-a240-9f59c55a6b07, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7e42bf44-c1f8-49df-bd5f-a26abe43a832) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.243 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7e42bf44-c1f8-49df-bd5f-a26abe43a832 in datapath 7f4225de-9f3f-48e2-bad7-a89cf4884a2e bound to our chassis
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.244 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7f4225de-9f3f-48e2-bad7-a89cf4884a2e
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.263 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7bfd2c3a-75b2-4467-91ce-4ed25fff6fdd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.264 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7f4225de-91 in ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.266 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7f4225de-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.266 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[771315dc-5c57-4ddb-a9b2-b96d3af83b47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.267 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a0eb3e-f4bc-4df4-8ba2-3131c155a284]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:14 compute-0 systemd-udevd[361193]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.278 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[01c6f339-9e4f-4e98-831a-81517f08ea90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:14 compute-0 systemd-machined[214636]: New machine qemu-129-instance-00000067.
Oct 14 09:15:14 compute-0 NetworkManager[44885]: <info>  [1760433314.2885] device (tap7e42bf44-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:15:14 compute-0 NetworkManager[44885]: <info>  [1760433314.2902] device (tap7e42bf44-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:15:14 compute-0 systemd[1]: Started Virtual Machine qemu-129-instance-00000067.
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.305 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4988116d-6c70-46f9-b526-5edfa07919e8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.340 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4b40831a-9c2d-4eb3-8dc3-cc86cf85310d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:14 compute-0 systemd-udevd[361198]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:15:14 compute-0 NetworkManager[44885]: <info>  [1760433314.3502] manager: (tap7f4225de-90): new Veth device (/org/freedesktop/NetworkManager/Devices/441)
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.349 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ed05bfd7-1b30-4459-8957-b20f375ac03c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:14 compute-0 nova_compute[259627]: 2025-10-14 09:15:14.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:14 compute-0 ovn_controller[152662]: 2025-10-14T09:15:14Z|01096|binding|INFO|Setting lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 ovn-installed in OVS
Oct 14 09:15:14 compute-0 ovn_controller[152662]: 2025-10-14T09:15:14Z|01097|binding|INFO|Setting lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 up in Southbound
Oct 14 09:15:14 compute-0 nova_compute[259627]: 2025-10-14 09:15:14.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.398 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4879801e-4488-42d0-a8b5-20611444e8a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.403 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[12ca2d3c-2d95-42d4-a728-2750ab2379b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:14 compute-0 NetworkManager[44885]: <info>  [1760433314.4261] device (tap7f4225de-90): carrier: link connected
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.433 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[421e0e27-f348-466f-97df-13d733a9d0f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.455 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d04b94d1-9386-4da7-80a2-f543e2d10a2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f4225de-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:e2:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709318, 'reachable_time': 28288, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361226, 'error': None, 'target': 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.481 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d87856-f55c-4a7d-856f-a91294273e6d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:e234'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 709318, 'tstamp': 709318}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361227, 'error': None, 'target': 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.501 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2bcef1ee-cb35-45e5-ac43-403409676e25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f4225de-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:e2:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709318, 'reachable_time': 28288, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361228, 'error': None, 'target': 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.547 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e46651-2c09-4ece-af19-a3567682369c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.612 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9f63ca8e-0216-495c-af13-ed39753696f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f4225de-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f4225de-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:14 compute-0 kernel: tap7f4225de-90: entered promiscuous mode
Oct 14 09:15:14 compute-0 NetworkManager[44885]: <info>  [1760433314.6165] manager: (tap7f4225de-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/442)
Oct 14 09:15:14 compute-0 nova_compute[259627]: 2025-10-14 09:15:14.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.618 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7f4225de-90, col_values=(('external_ids', {'iface-id': 'f5c700c1-27f2-4a9c-8db0-f69417ca2318'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:14 compute-0 ovn_controller[152662]: 2025-10-14T09:15:14Z|01098|binding|INFO|Releasing lport f5c700c1-27f2-4a9c-8db0-f69417ca2318 from this chassis (sb_readonly=0)
Oct 14 09:15:14 compute-0 nova_compute[259627]: 2025-10-14 09:15:14.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.654 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7f4225de-9f3f-48e2-bad7-a89cf4884a2e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7f4225de-9f3f-48e2-bad7-a89cf4884a2e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.655 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2a6f61ce-dac3-4b30-8929-0db73f748039]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.656 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-7f4225de-9f3f-48e2-bad7-a89cf4884a2e
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/7f4225de-9f3f-48e2-bad7-a89cf4884a2e.pid.haproxy
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 7f4225de-9f3f-48e2-bad7-a89cf4884a2e
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:15:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.658 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'env', 'PROCESS_TAG=haproxy-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7f4225de-9f3f-48e2-bad7-a89cf4884a2e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:15:15 compute-0 podman[361302]: 2025-10-14 09:15:15.065438963 +0000 UTC m=+0.082204297 container create 4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true)
Oct 14 09:15:15 compute-0 podman[361302]: 2025-10-14 09:15:15.014741354 +0000 UTC m=+0.031506768 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:15:15 compute-0 systemd[1]: Started libpod-conmon-4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1.scope.
Oct 14 09:15:15 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5839d52bde3e6e8745834d54c6e47047a57e1518d17430d80d50930db816dbf4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:15:15 compute-0 podman[361302]: 2025-10-14 09:15:15.170748879 +0000 UTC m=+0.187514243 container init 4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 09:15:15 compute-0 podman[361302]: 2025-10-14 09:15:15.176669235 +0000 UTC m=+0.193434559 container start 4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true)
Oct 14 09:15:15 compute-0 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[361317]: [NOTICE]   (361321) : New worker (361323) forked
Oct 14 09:15:15 compute-0 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[361317]: [NOTICE]   (361321) : Loading success.
Oct 14 09:15:15 compute-0 ceph-mon[74249]: pgmap v1829: 305 pgs: 305 active+clean; 372 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 200 op/s
Oct 14 09:15:15 compute-0 nova_compute[259627]: 2025-10-14 09:15:15.389 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433315.3885822, 33969555-fe06-4613-b244-d03c9b4180ba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:15 compute-0 nova_compute[259627]: 2025-10-14 09:15:15.389 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] VM Started (Lifecycle Event)
Oct 14 09:15:15 compute-0 nova_compute[259627]: 2025-10-14 09:15:15.409 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:15 compute-0 nova_compute[259627]: 2025-10-14 09:15:15.413 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433315.3892004, 33969555-fe06-4613-b244-d03c9b4180ba => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:15 compute-0 nova_compute[259627]: 2025-10-14 09:15:15.413 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] VM Paused (Lifecycle Event)
Oct 14 09:15:15 compute-0 nova_compute[259627]: 2025-10-14 09:15:15.434 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:15 compute-0 nova_compute[259627]: 2025-10-14 09:15:15.438 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:15:15 compute-0 nova_compute[259627]: 2025-10-14 09:15:15.464 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:15:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 293 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 314 op/s
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.270 2 DEBUG nova.compute.manager [req-e19839e2-c82f-4227-ba76-d6cb5a97e049 req-fe2fe181-d575-4637-9c76-00b53a99876a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.270 2 DEBUG oslo_concurrency.lockutils [req-e19839e2-c82f-4227-ba76-d6cb5a97e049 req-fe2fe181-d575-4637-9c76-00b53a99876a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.271 2 DEBUG oslo_concurrency.lockutils [req-e19839e2-c82f-4227-ba76-d6cb5a97e049 req-fe2fe181-d575-4637-9c76-00b53a99876a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.271 2 DEBUG oslo_concurrency.lockutils [req-e19839e2-c82f-4227-ba76-d6cb5a97e049 req-fe2fe181-d575-4637-9c76-00b53a99876a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.271 2 DEBUG nova.compute.manager [req-e19839e2-c82f-4227-ba76-d6cb5a97e049 req-fe2fe181-d575-4637-9c76-00b53a99876a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Processing event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.273 2 DEBUG nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.289 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433316.2861264, 33969555-fe06-4613-b244-d03c9b4180ba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.289 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] VM Resumed (Lifecycle Event)
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.293 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.298 2 INFO nova.virt.libvirt.driver [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance spawned successfully.
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.299 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.315 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.323 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.329 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.329 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.330 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.330 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.331 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.332 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.354 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.563 2 INFO nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Took 8.91 seconds to spawn the instance on the hypervisor.
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.564 2 DEBUG nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.644 2 INFO nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Took 10.02 seconds to build instance.
Oct 14 09:15:16 compute-0 nova_compute[259627]: 2025-10-14 09:15:16.661 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.067 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.067 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.083 2 DEBUG nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.152 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.153 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.163 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.163 2 INFO nova.compute.claims [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:15:17 compute-0 ceph-mon[74249]: pgmap v1830: 305 pgs: 305 active+clean; 293 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 314 op/s
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.366 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:15:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:15:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1503332990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.881 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.887 2 DEBUG nova.compute.provider_tree [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.906 2 DEBUG nova.scheduler.client.report [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.915 2 DEBUG nova.compute.manager [req-9804b606-5771-4fcc-94b0-77f2eb35b7de req-bb24bbe4-bbd0-4bdb-a264-680c094dbd0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.915 2 DEBUG nova.compute.manager [req-9804b606-5771-4fcc-94b0-77f2eb35b7de req-bb24bbe4-bbd0-4bdb-a264-680c094dbd0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing instance network info cache due to event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.915 2 DEBUG oslo_concurrency.lockutils [req-9804b606-5771-4fcc-94b0-77f2eb35b7de req-bb24bbe4-bbd0-4bdb-a264-680c094dbd0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.916 2 DEBUG oslo_concurrency.lockutils [req-9804b606-5771-4fcc-94b0-77f2eb35b7de req-bb24bbe4-bbd0-4bdb-a264-680c094dbd0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.916 2 DEBUG nova.network.neutron [req-9804b606-5771-4fcc-94b0-77f2eb35b7de req-bb24bbe4-bbd0-4bdb-a264-680c094dbd0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.930 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.930 2 DEBUG nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.977 2 DEBUG nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.978 2 DEBUG nova.network.neutron [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:15:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 293 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 184 op/s
Oct 14 09:15:17 compute-0 nova_compute[259627]: 2025-10-14 09:15:17.996 2 INFO nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.021 2 DEBUG nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.110 2 DEBUG nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.113 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.114 2 INFO nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Creating image(s)
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.159 2 DEBUG nova.storage.rbd_utils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.201 2 DEBUG nova.storage.rbd_utils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.241 2 DEBUG nova.storage.rbd_utils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.246 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.353 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.354 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.354 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.355 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:18 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1503332990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.380 2 DEBUG nova.storage.rbd_utils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.387 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.434 2 DEBUG nova.policy [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a287ef08fc5c4f218bf06cd2c7ed021e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.573 2 DEBUG nova.compute.manager [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.574 2 DEBUG oslo_concurrency.lockutils [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.575 2 DEBUG oslo_concurrency.lockutils [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.575 2 DEBUG oslo_concurrency.lockutils [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.575 2 DEBUG nova.compute.manager [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] No waiting events found dispatching network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.576 2 WARNING nova.compute.manager [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received unexpected event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 for instance with vm_state active and task_state None.
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.576 2 DEBUG nova.compute.manager [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.576 2 DEBUG nova.compute.manager [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing instance network info cache due to event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.577 2 DEBUG oslo_concurrency.lockutils [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.694 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.768 2 DEBUG nova.storage.rbd_utils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] resizing rbd image 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.861 2 DEBUG nova.objects.instance [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'migration_context' on Instance uuid 6b47438c-b2b5-48e7-a15c-ee7c5936da65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.873 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.874 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Ensure instance console log exists: /var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.874 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.875 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:18 compute-0 nova_compute[259627]: 2025-10-14 09:15:18.875 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:19 compute-0 nova_compute[259627]: 2025-10-14 09:15:19.207 2 DEBUG nova.network.neutron [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Successfully created port: 96f6d888-2b4a-4ca3-a48c-4628d4143f60 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:15:19 compute-0 ceph-mon[74249]: pgmap v1831: 305 pgs: 305 active+clean; 293 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 184 op/s
Oct 14 09:15:19 compute-0 ovn_controller[152662]: 2025-10-14T09:15:19Z|01099|binding|INFO|Releasing lport f5c700c1-27f2-4a9c-8db0-f69417ca2318 from this chassis (sb_readonly=0)
Oct 14 09:15:19 compute-0 NetworkManager[44885]: <info>  [1760433319.4659] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/443)
Oct 14 09:15:19 compute-0 ovn_controller[152662]: 2025-10-14T09:15:19Z|01100|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 09:15:19 compute-0 NetworkManager[44885]: <info>  [1760433319.4668] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/444)
Oct 14 09:15:19 compute-0 ovn_controller[152662]: 2025-10-14T09:15:19Z|01101|binding|INFO|Releasing lport f5c700c1-27f2-4a9c-8db0-f69417ca2318 from this chassis (sb_readonly=0)
Oct 14 09:15:19 compute-0 ovn_controller[152662]: 2025-10-14T09:15:19Z|01102|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 09:15:19 compute-0 nova_compute[259627]: 2025-10-14 09:15:19.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:19 compute-0 nova_compute[259627]: 2025-10-14 09:15:19.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:19 compute-0 podman[361521]: 2025-10-14 09:15:19.671917507 +0000 UTC m=+0.090544303 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 14 09:15:19 compute-0 podman[361522]: 2025-10-14 09:15:19.672562283 +0000 UTC m=+0.080913835 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:15:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 293 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 184 op/s
Oct 14 09:15:20 compute-0 nova_compute[259627]: 2025-10-14 09:15:20.157 2 DEBUG nova.compute.manager [req-8d023863-33b9-49d3-8bfe-cadc0b8d30b8 req-8288568c-ab09-4adf-9322-23164805cbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-changed-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:20 compute-0 nova_compute[259627]: 2025-10-14 09:15:20.158 2 DEBUG nova.compute.manager [req-8d023863-33b9-49d3-8bfe-cadc0b8d30b8 req-8288568c-ab09-4adf-9322-23164805cbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Refreshing instance network info cache due to event network-changed-7e42bf44-c1f8-49df-bd5f-a26abe43a832. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:15:20 compute-0 nova_compute[259627]: 2025-10-14 09:15:20.158 2 DEBUG oslo_concurrency.lockutils [req-8d023863-33b9-49d3-8bfe-cadc0b8d30b8 req-8288568c-ab09-4adf-9322-23164805cbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:15:20 compute-0 nova_compute[259627]: 2025-10-14 09:15:20.158 2 DEBUG oslo_concurrency.lockutils [req-8d023863-33b9-49d3-8bfe-cadc0b8d30b8 req-8288568c-ab09-4adf-9322-23164805cbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:15:20 compute-0 nova_compute[259627]: 2025-10-14 09:15:20.158 2 DEBUG nova.network.neutron [req-8d023863-33b9-49d3-8bfe-cadc0b8d30b8 req-8288568c-ab09-4adf-9322-23164805cbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Refreshing network info cache for port 7e42bf44-c1f8-49df-bd5f-a26abe43a832 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:15:20 compute-0 nova_compute[259627]: 2025-10-14 09:15:20.194 2 DEBUG nova.network.neutron [req-9804b606-5771-4fcc-94b0-77f2eb35b7de req-bb24bbe4-bbd0-4bdb-a264-680c094dbd0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updated VIF entry in instance network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:15:20 compute-0 nova_compute[259627]: 2025-10-14 09:15:20.195 2 DEBUG nova.network.neutron [req-9804b606-5771-4fcc-94b0-77f2eb35b7de req-bb24bbe4-bbd0-4bdb-a264-680c094dbd0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updating instance_info_cache with network_info: [{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:15:20 compute-0 nova_compute[259627]: 2025-10-14 09:15:20.221 2 DEBUG oslo_concurrency.lockutils [req-9804b606-5771-4fcc-94b0-77f2eb35b7de req-bb24bbe4-bbd0-4bdb-a264-680c094dbd0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:15:20 compute-0 nova_compute[259627]: 2025-10-14 09:15:20.222 2 DEBUG oslo_concurrency.lockutils [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:15:20 compute-0 nova_compute[259627]: 2025-10-14 09:15:20.222 2 DEBUG nova.network.neutron [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:15:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Oct 14 09:15:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Oct 14 09:15:20 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Oct 14 09:15:20 compute-0 nova_compute[259627]: 2025-10-14 09:15:20.582 2 DEBUG nova.network.neutron [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Successfully updated port: 96f6d888-2b4a-4ca3-a48c-4628d4143f60 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:15:20 compute-0 nova_compute[259627]: 2025-10-14 09:15:20.596 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "refresh_cache-6b47438c-b2b5-48e7-a15c-ee7c5936da65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:15:20 compute-0 nova_compute[259627]: 2025-10-14 09:15:20.597 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquired lock "refresh_cache-6b47438c-b2b5-48e7-a15c-ee7c5936da65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:15:20 compute-0 nova_compute[259627]: 2025-10-14 09:15:20.597 2 DEBUG nova.network.neutron [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:15:20 compute-0 nova_compute[259627]: 2025-10-14 09:15:20.869 2 DEBUG nova.network.neutron [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:15:21 compute-0 ceph-mon[74249]: pgmap v1832: 305 pgs: 305 active+clean; 293 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 184 op/s
Oct 14 09:15:21 compute-0 ceph-mon[74249]: osdmap e259: 3 total, 3 up, 3 in
Oct 14 09:15:21 compute-0 nova_compute[259627]: 2025-10-14 09:15:21.562 2 DEBUG nova.network.neutron [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updated VIF entry in instance network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:15:21 compute-0 nova_compute[259627]: 2025-10-14 09:15:21.563 2 DEBUG nova.network.neutron [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updating instance_info_cache with network_info: [{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:15:21 compute-0 nova_compute[259627]: 2025-10-14 09:15:21.583 2 DEBUG oslo_concurrency.lockutils [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:15:21 compute-0 nova_compute[259627]: 2025-10-14 09:15:21.830 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433306.8268523, 84968701-f6c5-4798-888e-fa0f3311adca => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:21 compute-0 nova_compute[259627]: 2025-10-14 09:15:21.830 2 INFO nova.compute.manager [-] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] VM Stopped (Lifecycle Event)
Oct 14 09:15:21 compute-0 nova_compute[259627]: 2025-10-14 09:15:21.865 2 DEBUG nova.compute.manager [None req-22412626-5dff-40ff-82df-a0ab087ba917 - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:21 compute-0 nova_compute[259627]: 2025-10-14 09:15:21.949 2 DEBUG nova.network.neutron [req-8d023863-33b9-49d3-8bfe-cadc0b8d30b8 req-8288568c-ab09-4adf-9322-23164805cbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Updated VIF entry in instance network info cache for port 7e42bf44-c1f8-49df-bd5f-a26abe43a832. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:15:21 compute-0 nova_compute[259627]: 2025-10-14 09:15:21.949 2 DEBUG nova.network.neutron [req-8d023863-33b9-49d3-8bfe-cadc0b8d30b8 req-8288568c-ab09-4adf-9322-23164805cbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Updating instance_info_cache with network_info: [{"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:15:21 compute-0 nova_compute[259627]: 2025-10-14 09:15:21.972 2 DEBUG oslo_concurrency.lockutils [req-8d023863-33b9-49d3-8bfe-cadc0b8d30b8 req-8288568c-ab09-4adf-9322-23164805cbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:15:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 339 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.2 MiB/s wr, 246 op/s
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.122 2 DEBUG nova.network.neutron [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Updating instance_info_cache with network_info: [{"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.150 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Releasing lock "refresh_cache-6b47438c-b2b5-48e7-a15c-ee7c5936da65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.150 2 DEBUG nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Instance network_info: |[{"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.152 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Start _get_guest_xml network_info=[{"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.157 2 WARNING nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.164 2 DEBUG nova.virt.libvirt.host [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.166 2 DEBUG nova.virt.libvirt.host [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.168 2 DEBUG nova.virt.libvirt.host [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.169 2 DEBUG nova.virt.libvirt.host [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.169 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.169 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.170 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.170 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.170 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.170 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.171 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.171 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.171 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.171 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.171 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.172 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.174 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.280 2 DEBUG nova.compute.manager [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Received event network-changed-96f6d888-2b4a-4ca3-a48c-4628d4143f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.281 2 DEBUG nova.compute.manager [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Refreshing instance network info cache due to event network-changed-96f6d888-2b4a-4ca3-a48c-4628d4143f60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.281 2 DEBUG oslo_concurrency.lockutils [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-6b47438c-b2b5-48e7-a15c-ee7c5936da65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.281 2 DEBUG oslo_concurrency.lockutils [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-6b47438c-b2b5-48e7-a15c-ee7c5936da65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.281 2 DEBUG nova.network.neutron [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Refreshing network info cache for port 96f6d888-2b4a-4ca3-a48c-4628d4143f60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:15:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:15:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2417846393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.674 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.705 2 DEBUG nova.storage.rbd_utils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.711 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:15:22 compute-0 nova_compute[259627]: 2025-10-14 09:15:22.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:15:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1205577679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.163 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.165 2 DEBUG nova.virt.libvirt.vif [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:15:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-265021143',display_name='tempest-ServersTestJSON-server-265021143',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-265021143',id=104,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-6w8v0jr5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:15:18Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=6b47438c-b2b5-48e7-a15c-ee7c5936da65,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.166 2 DEBUG nova.network.os_vif_util [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.167 2 DEBUG nova.network.os_vif_util [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:b2:50,bridge_name='br-int',has_traffic_filtering=True,id=96f6d888-2b4a-4ca3-a48c-4628d4143f60,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f6d888-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.168 2 DEBUG nova.objects.instance [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b47438c-b2b5-48e7-a15c-ee7c5936da65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.200 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:15:23 compute-0 nova_compute[259627]:   <uuid>6b47438c-b2b5-48e7-a15c-ee7c5936da65</uuid>
Oct 14 09:15:23 compute-0 nova_compute[259627]:   <name>instance-00000068</name>
Oct 14 09:15:23 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:15:23 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:15:23 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersTestJSON-server-265021143</nova:name>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:15:22</nova:creationTime>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:15:23 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:15:23 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:15:23 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:15:23 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:15:23 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:15:23 compute-0 nova_compute[259627]:         <nova:user uuid="a287ef08fc5c4f218bf06cd2c7ed021e">tempest-ServersTestJSON-2060951674-project-member</nova:user>
Oct 14 09:15:23 compute-0 nova_compute[259627]:         <nova:project uuid="0a080fae2f3c4e39a6cca225203f5ec6">tempest-ServersTestJSON-2060951674</nova:project>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:15:23 compute-0 nova_compute[259627]:         <nova:port uuid="96f6d888-2b4a-4ca3-a48c-4628d4143f60">
Oct 14 09:15:23 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:15:23 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:15:23 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <system>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <entry name="serial">6b47438c-b2b5-48e7-a15c-ee7c5936da65</entry>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <entry name="uuid">6b47438c-b2b5-48e7-a15c-ee7c5936da65</entry>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     </system>
Oct 14 09:15:23 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:15:23 compute-0 nova_compute[259627]:   <os>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:   </os>
Oct 14 09:15:23 compute-0 nova_compute[259627]:   <features>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:   </features>
Oct 14 09:15:23 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:15:23 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:15:23 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk">
Oct 14 09:15:23 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       </source>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:15:23 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk.config">
Oct 14 09:15:23 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       </source>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:15:23 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:59:b2:50"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <target dev="tap96f6d888-2b"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65/console.log" append="off"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <video>
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     </video>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:15:23 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:15:23 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:15:23 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:15:23 compute-0 nova_compute[259627]: </domain>
Oct 14 09:15:23 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.202 2 DEBUG nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Preparing to wait for external event network-vif-plugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.203 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.203 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.204 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.205 2 DEBUG nova.virt.libvirt.vif [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:15:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-265021143',display_name='tempest-ServersTestJSON-server-265021143',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-265021143',id=104,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-6w8v0jr5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:15:18Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=6b47438c-b2b5-48e7-a15c-ee7c5936da65,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.205 2 DEBUG nova.network.os_vif_util [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.206 2 DEBUG nova.network.os_vif_util [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:b2:50,bridge_name='br-int',has_traffic_filtering=True,id=96f6d888-2b4a-4ca3-a48c-4628d4143f60,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f6d888-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.206 2 DEBUG os_vif [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:b2:50,bridge_name='br-int',has_traffic_filtering=True,id=96f6d888-2b4a-4ca3-a48c-4628d4143f60,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f6d888-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.208 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.209 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.214 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96f6d888-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.215 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap96f6d888-2b, col_values=(('external_ids', {'iface-id': '96f6d888-2b4a-4ca3-a48c-4628d4143f60', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:59:b2:50', 'vm-uuid': '6b47438c-b2b5-48e7-a15c-ee7c5936da65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:23 compute-0 NetworkManager[44885]: <info>  [1760433323.2647] manager: (tap96f6d888-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/445)
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.274 2 INFO os_vif [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:b2:50,bridge_name='br-int',has_traffic_filtering=True,id=96f6d888-2b4a-4ca3-a48c-4628d4143f60,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f6d888-2b')
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.379 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.381 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.381 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No VIF found with MAC fa:16:3e:59:b2:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.382 2 INFO nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Using config drive
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.411 2 DEBUG nova.storage.rbd_utils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Oct 14 09:15:23 compute-0 ceph-mon[74249]: pgmap v1834: 305 pgs: 305 active+clean; 339 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.2 MiB/s wr, 246 op/s
Oct 14 09:15:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2417846393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1205577679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Oct 14 09:15:23 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Oct 14 09:15:23 compute-0 nova_compute[259627]: 2025-10-14 09:15:23.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 339 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 137 op/s
Oct 14 09:15:24 compute-0 nova_compute[259627]: 2025-10-14 09:15:24.357 2 INFO nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Creating config drive at /var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65/disk.config
Oct 14 09:15:24 compute-0 nova_compute[259627]: 2025-10-14 09:15:24.367 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvs15fzfm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:24 compute-0 ceph-mon[74249]: osdmap e260: 3 total, 3 up, 3 in
Oct 14 09:15:24 compute-0 nova_compute[259627]: 2025-10-14 09:15:24.540 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvs15fzfm" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:24 compute-0 nova_compute[259627]: 2025-10-14 09:15:24.571 2 DEBUG nova.storage.rbd_utils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:24 compute-0 nova_compute[259627]: 2025-10-14 09:15:24.575 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65/disk.config 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:24 compute-0 nova_compute[259627]: 2025-10-14 09:15:24.775 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65/disk.config 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:24 compute-0 nova_compute[259627]: 2025-10-14 09:15:24.777 2 INFO nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Deleting local config drive /var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65/disk.config because it was imported into RBD.
Oct 14 09:15:24 compute-0 NetworkManager[44885]: <info>  [1760433324.8369] manager: (tap96f6d888-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/446)
Oct 14 09:15:24 compute-0 kernel: tap96f6d888-2b: entered promiscuous mode
Oct 14 09:15:24 compute-0 ovn_controller[152662]: 2025-10-14T09:15:24Z|01103|binding|INFO|Claiming lport 96f6d888-2b4a-4ca3-a48c-4628d4143f60 for this chassis.
Oct 14 09:15:24 compute-0 ovn_controller[152662]: 2025-10-14T09:15:24Z|01104|binding|INFO|96f6d888-2b4a-4ca3-a48c-4628d4143f60: Claiming fa:16:3e:59:b2:50 10.100.0.4
Oct 14 09:15:24 compute-0 nova_compute[259627]: 2025-10-14 09:15:24.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:24.852 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:b2:50 10.100.0.4'], port_security=['fa:16:3e:59:b2:50 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '6b47438c-b2b5-48e7-a15c-ee7c5936da65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=96f6d888-2b4a-4ca3-a48c-4628d4143f60) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:15:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:24.854 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 96f6d888-2b4a-4ca3-a48c-4628d4143f60 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e bound to our chassis
Oct 14 09:15:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:24.871 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e
Oct 14 09:15:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:24.891 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1baf7dd6-7424-4371-afb2-8e5ef6b2a80b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:24 compute-0 ovn_controller[152662]: 2025-10-14T09:15:24Z|01105|binding|INFO|Setting lport 96f6d888-2b4a-4ca3-a48c-4628d4143f60 ovn-installed in OVS
Oct 14 09:15:24 compute-0 ovn_controller[152662]: 2025-10-14T09:15:24Z|01106|binding|INFO|Setting lport 96f6d888-2b4a-4ca3-a48c-4628d4143f60 up in Southbound
Oct 14 09:15:24 compute-0 nova_compute[259627]: 2025-10-14 09:15:24.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:24 compute-0 nova_compute[259627]: 2025-10-14 09:15:24.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:24 compute-0 systemd-udevd[361696]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:15:24 compute-0 systemd-machined[214636]: New machine qemu-130-instance-00000068.
Oct 14 09:15:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:24.929 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ea95af-c20b-440b-96d9-aaad2ad06c5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:24.931 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d99d36ac-a443-44c9-8125-cd7d1a82787c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:24 compute-0 NetworkManager[44885]: <info>  [1760433324.9363] device (tap96f6d888-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:15:24 compute-0 NetworkManager[44885]: <info>  [1760433324.9374] device (tap96f6d888-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:15:24 compute-0 systemd[1]: Started Virtual Machine qemu-130-instance-00000068.
Oct 14 09:15:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:24.974 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ab8135-878c-4803-9c99-2073b4ecaa26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:25.008 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5e39d2cb-71c1-47ef-a230-8d88f378711a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 21, 'rx_bytes': 1000, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 21, 'rx_bytes': 1000, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361703, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:25.032 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be274130-0d24-4251-aca4-b681dbcdd601]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361707, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361707, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:25.038 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:25.042 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:25.042 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:15:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:25.042 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:25.043 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.143 2 DEBUG nova.network.neutron [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Updated VIF entry in instance network info cache for port 96f6d888-2b4a-4ca3-a48c-4628d4143f60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.144 2 DEBUG nova.network.neutron [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Updating instance_info_cache with network_info: [{"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.162 2 DEBUG oslo_concurrency.lockutils [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-6b47438c-b2b5-48e7-a15c-ee7c5936da65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.162 2 DEBUG nova.compute.manager [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.162 2 DEBUG nova.compute.manager [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing instance network info cache due to event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.163 2 DEBUG oslo_concurrency.lockutils [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.163 2 DEBUG oslo_concurrency.lockutils [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.163 2 DEBUG nova.network.neutron [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.305 2 DEBUG nova.compute.manager [req-7069f613-a5b7-43ed-b367-ab61a227ba45 req-ee559b97-5320-4372-b2af-3a5832e88512 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Received event network-vif-plugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.306 2 DEBUG oslo_concurrency.lockutils [req-7069f613-a5b7-43ed-b367-ab61a227ba45 req-ee559b97-5320-4372-b2af-3a5832e88512 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.306 2 DEBUG oslo_concurrency.lockutils [req-7069f613-a5b7-43ed-b367-ab61a227ba45 req-ee559b97-5320-4372-b2af-3a5832e88512 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.307 2 DEBUG oslo_concurrency.lockutils [req-7069f613-a5b7-43ed-b367-ab61a227ba45 req-ee559b97-5320-4372-b2af-3a5832e88512 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.307 2 DEBUG nova.compute.manager [req-7069f613-a5b7-43ed-b367-ab61a227ba45 req-ee559b97-5320-4372-b2af-3a5832e88512 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Processing event network-vif-plugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:15:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Oct 14 09:15:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Oct 14 09:15:25 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Oct 14 09:15:25 compute-0 ceph-mon[74249]: pgmap v1836: 305 pgs: 305 active+clean; 339 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 137 op/s
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.772 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433310.744599, da40c115-048e-4844-812e-7e65e25bfb3f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.773 2 INFO nova.compute.manager [-] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] VM Stopped (Lifecycle Event)
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.795 2 DEBUG nova.compute.manager [None req-9b9d7534-54e7-465c-83b8-a30a18c4667e - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.969 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433325.9689043, 6b47438c-b2b5-48e7-a15c-ee7c5936da65 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.969 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] VM Started (Lifecycle Event)
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.971 2 DEBUG nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.975 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.978 2 INFO nova.virt.libvirt.driver [-] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Instance spawned successfully.
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.979 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:15:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 339 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.6 MiB/s wr, 406 op/s
Oct 14 09:15:25 compute-0 nova_compute[259627]: 2025-10-14 09:15:25.998 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.006 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.011 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.012 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.013 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.013 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.014 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.015 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.044 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.045 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433325.971192, 6b47438c-b2b5-48e7-a15c-ee7c5936da65 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.045 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] VM Paused (Lifecycle Event)
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.072 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.076 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433325.976358, 6b47438c-b2b5-48e7-a15c-ee7c5936da65 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.076 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] VM Resumed (Lifecycle Event)
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.083 2 INFO nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Took 7.97 seconds to spawn the instance on the hypervisor.
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.084 2 DEBUG nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.097 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.101 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.134 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.166 2 INFO nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Took 9.03 seconds to build instance.
Oct 14 09:15:26 compute-0 nova_compute[259627]: 2025-10-14 09:15:26.181 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Oct 14 09:15:26 compute-0 ceph-mon[74249]: osdmap e261: 3 total, 3 up, 3 in
Oct 14 09:15:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Oct 14 09:15:26 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Oct 14 09:15:27 compute-0 nova_compute[259627]: 2025-10-14 09:15:27.044 2 DEBUG nova.network.neutron [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updated VIF entry in instance network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:15:27 compute-0 nova_compute[259627]: 2025-10-14 09:15:27.045 2 DEBUG nova.network.neutron [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updating instance_info_cache with network_info: [{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:15:27 compute-0 nova_compute[259627]: 2025-10-14 09:15:27.072 2 DEBUG oslo_concurrency.lockutils [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:15:27 compute-0 nova_compute[259627]: 2025-10-14 09:15:27.397 2 DEBUG nova.compute.manager [req-d3ae4740-9668-45cf-891d-d918643785eb req-1375ef83-58ae-4307-9933-ce436f3904a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Received event network-vif-plugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:27 compute-0 nova_compute[259627]: 2025-10-14 09:15:27.397 2 DEBUG oslo_concurrency.lockutils [req-d3ae4740-9668-45cf-891d-d918643785eb req-1375ef83-58ae-4307-9933-ce436f3904a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:27 compute-0 nova_compute[259627]: 2025-10-14 09:15:27.398 2 DEBUG oslo_concurrency.lockutils [req-d3ae4740-9668-45cf-891d-d918643785eb req-1375ef83-58ae-4307-9933-ce436f3904a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:27 compute-0 nova_compute[259627]: 2025-10-14 09:15:27.398 2 DEBUG oslo_concurrency.lockutils [req-d3ae4740-9668-45cf-891d-d918643785eb req-1375ef83-58ae-4307-9933-ce436f3904a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:27 compute-0 nova_compute[259627]: 2025-10-14 09:15:27.399 2 DEBUG nova.compute.manager [req-d3ae4740-9668-45cf-891d-d918643785eb req-1375ef83-58ae-4307-9933-ce436f3904a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] No waiting events found dispatching network-vif-plugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:27 compute-0 nova_compute[259627]: 2025-10-14 09:15:27.399 2 WARNING nova.compute.manager [req-d3ae4740-9668-45cf-891d-d918643785eb req-1375ef83-58ae-4307-9933-ce436f3904a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Received unexpected event network-vif-plugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 for instance with vm_state active and task_state None.
Oct 14 09:15:27 compute-0 ceph-mon[74249]: pgmap v1838: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 339 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.6 MiB/s wr, 406 op/s
Oct 14 09:15:27 compute-0 ceph-mon[74249]: osdmap e262: 3 total, 3 up, 3 in
Oct 14 09:15:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:15:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 339 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 53 KiB/s wr, 222 op/s
Oct 14 09:15:28 compute-0 nova_compute[259627]: 2025-10-14 09:15:28.006 2 DEBUG nova.compute.manager [req-7d7f483f-49c4-409b-805e-bff2237e4674 req-0b1a4ce3-1a6d-4a51-a20c-df232919ea1b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:28 compute-0 nova_compute[259627]: 2025-10-14 09:15:28.006 2 DEBUG nova.compute.manager [req-7d7f483f-49c4-409b-805e-bff2237e4674 req-0b1a4ce3-1a6d-4a51-a20c-df232919ea1b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing instance network info cache due to event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:15:28 compute-0 nova_compute[259627]: 2025-10-14 09:15:28.007 2 DEBUG oslo_concurrency.lockutils [req-7d7f483f-49c4-409b-805e-bff2237e4674 req-0b1a4ce3-1a6d-4a51-a20c-df232919ea1b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:15:28 compute-0 nova_compute[259627]: 2025-10-14 09:15:28.008 2 DEBUG oslo_concurrency.lockutils [req-7d7f483f-49c4-409b-805e-bff2237e4674 req-0b1a4ce3-1a6d-4a51-a20c-df232919ea1b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:15:28 compute-0 nova_compute[259627]: 2025-10-14 09:15:28.008 2 DEBUG nova.network.neutron [req-7d7f483f-49c4-409b-805e-bff2237e4674 req-0b1a4ce3-1a6d-4a51-a20c-df232919ea1b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:15:28 compute-0 nova_compute[259627]: 2025-10-14 09:15:28.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:28 compute-0 nova_compute[259627]: 2025-10-14 09:15:28.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:29 compute-0 ceph-mon[74249]: pgmap v1840: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 339 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 53 KiB/s wr, 222 op/s
Oct 14 09:15:29 compute-0 ovn_controller[152662]: 2025-10-14T09:15:29Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:8d:8f 10.100.0.4
Oct 14 09:15:29 compute-0 ovn_controller[152662]: 2025-10-14T09:15:29Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:8d:8f 10.100.0.4
Oct 14 09:15:29 compute-0 nova_compute[259627]: 2025-10-14 09:15:29.961 2 DEBUG oslo_concurrency.lockutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:29 compute-0 nova_compute[259627]: 2025-10-14 09:15:29.961 2 DEBUG oslo_concurrency.lockutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:29 compute-0 nova_compute[259627]: 2025-10-14 09:15:29.963 2 DEBUG oslo_concurrency.lockutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:29 compute-0 nova_compute[259627]: 2025-10-14 09:15:29.964 2 DEBUG oslo_concurrency.lockutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:29 compute-0 nova_compute[259627]: 2025-10-14 09:15:29.965 2 DEBUG oslo_concurrency.lockutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:29 compute-0 nova_compute[259627]: 2025-10-14 09:15:29.966 2 INFO nova.compute.manager [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Terminating instance
Oct 14 09:15:29 compute-0 nova_compute[259627]: 2025-10-14 09:15:29.968 2 DEBUG nova.compute.manager [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:15:29 compute-0 nova_compute[259627]: 2025-10-14 09:15:29.972 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:15:29 compute-0 nova_compute[259627]: 2025-10-14 09:15:29.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:15:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 339 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 48 KiB/s wr, 204 op/s
Oct 14 09:15:30 compute-0 kernel: tap96f6d888-2b (unregistering): left promiscuous mode
Oct 14 09:15:30 compute-0 NetworkManager[44885]: <info>  [1760433330.0152] device (tap96f6d888-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:15:30 compute-0 ovn_controller[152662]: 2025-10-14T09:15:30Z|01107|binding|INFO|Releasing lport 96f6d888-2b4a-4ca3-a48c-4628d4143f60 from this chassis (sb_readonly=0)
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:30 compute-0 ovn_controller[152662]: 2025-10-14T09:15:30Z|01108|binding|INFO|Setting lport 96f6d888-2b4a-4ca3-a48c-4628d4143f60 down in Southbound
Oct 14 09:15:30 compute-0 ovn_controller[152662]: 2025-10-14T09:15:30Z|01109|binding|INFO|Removing iface tap96f6d888-2b ovn-installed in OVS
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.042 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:b2:50 10.100.0.4'], port_security=['fa:16:3e:59:b2:50 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '6b47438c-b2b5-48e7-a15c-ee7c5936da65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=96f6d888-2b4a-4ca3-a48c-4628d4143f60) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:15:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.043 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 96f6d888-2b4a-4ca3-a48c-4628d4143f60 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e unbound from our chassis
Oct 14 09:15:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.045 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.063 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[61812b6c-310d-43d4-b7b2-550fe4601382]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.092 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[15f252c8-9b40-4049-bc8e-b20c7af865ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.095 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2d7c0eb4-c530-4521-978d-834c1ca40f2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:30 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000068.scope: Deactivated successfully.
Oct 14 09:15:30 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000068.scope: Consumed 4.993s CPU time.
Oct 14 09:15:30 compute-0 systemd-machined[214636]: Machine qemu-130-instance-00000068 terminated.
Oct 14 09:15:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.125 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7f1d1ca0-ec30-4ff7-ba9d-18fc3d92ad4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.143 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0f691a26-071f-492f-96f9-9ac477a2efcd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 23, 'rx_bytes': 1000, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 23, 'rx_bytes': 1000, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361762, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.159 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c5ec72-20a7-4f55-bd93-54b4b7f18c52]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361763, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361763, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.160 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.169 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.169 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:15:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.169 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.170 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.201 2 INFO nova.virt.libvirt.driver [-] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Instance destroyed successfully.
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.201 2 DEBUG nova.objects.instance [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'resources' on Instance uuid 6b47438c-b2b5-48e7-a15c-ee7c5936da65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.215 2 DEBUG nova.virt.libvirt.vif [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:15:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-265021143',display_name='tempest-ServersTestJSON-server-265021143',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-265021143',id=104,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:15:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-6w8v0jr5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:15:28Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=6b47438c-b2b5-48e7-a15c-ee7c5936da65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.215 2 DEBUG nova.network.os_vif_util [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.216 2 DEBUG nova.network.os_vif_util [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:b2:50,bridge_name='br-int',has_traffic_filtering=True,id=96f6d888-2b4a-4ca3-a48c-4628d4143f60,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f6d888-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.216 2 DEBUG os_vif [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:b2:50,bridge_name='br-int',has_traffic_filtering=True,id=96f6d888-2b4a-4ca3-a48c-4628d4143f60,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f6d888-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96f6d888-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.222 2 INFO os_vif [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:b2:50,bridge_name='br-int',has_traffic_filtering=True,id=96f6d888-2b4a-4ca3-a48c-4628d4143f60,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f6d888-2b')
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.538 2 DEBUG nova.network.neutron [req-7d7f483f-49c4-409b-805e-bff2237e4674 req-0b1a4ce3-1a6d-4a51-a20c-df232919ea1b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updated VIF entry in instance network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.539 2 DEBUG nova.network.neutron [req-7d7f483f-49c4-409b-805e-bff2237e4674 req-0b1a4ce3-1a6d-4a51-a20c-df232919ea1b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updating instance_info_cache with network_info: [{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.565 2 DEBUG oslo_concurrency.lockutils [req-7d7f483f-49c4-409b-805e-bff2237e4674 req-0b1a4ce3-1a6d-4a51-a20c-df232919ea1b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.603 2 INFO nova.virt.libvirt.driver [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Deleting instance files /var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65_del
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.604 2 INFO nova.virt.libvirt.driver [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Deletion of /var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65_del complete
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.668 2 DEBUG nova.compute.manager [req-200252c5-bf43-4aa4-954b-914fb98a9d5c req-8b63af9e-55d3-4d22-885b-22ba411348ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Received event network-vif-unplugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.668 2 DEBUG oslo_concurrency.lockutils [req-200252c5-bf43-4aa4-954b-914fb98a9d5c req-8b63af9e-55d3-4d22-885b-22ba411348ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.670 2 DEBUG oslo_concurrency.lockutils [req-200252c5-bf43-4aa4-954b-914fb98a9d5c req-8b63af9e-55d3-4d22-885b-22ba411348ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.670 2 DEBUG oslo_concurrency.lockutils [req-200252c5-bf43-4aa4-954b-914fb98a9d5c req-8b63af9e-55d3-4d22-885b-22ba411348ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.670 2 DEBUG nova.compute.manager [req-200252c5-bf43-4aa4-954b-914fb98a9d5c req-8b63af9e-55d3-4d22-885b-22ba411348ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] No waiting events found dispatching network-vif-unplugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.671 2 DEBUG nova.compute.manager [req-200252c5-bf43-4aa4-954b-914fb98a9d5c req-8b63af9e-55d3-4d22-885b-22ba411348ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Received event network-vif-unplugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.676 2 INFO nova.compute.manager [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Took 0.71 seconds to destroy the instance on the hypervisor.
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.677 2 DEBUG oslo.service.loopingcall [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.677 2 DEBUG nova.compute.manager [-] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.677 2 DEBUG nova.network.neutron [-] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:15:30 compute-0 nova_compute[259627]: 2025-10-14 09:15:30.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:15:31 compute-0 nova_compute[259627]: 2025-10-14 09:15:31.480 2 DEBUG nova.network.neutron [-] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:15:31 compute-0 nova_compute[259627]: 2025-10-14 09:15:31.504 2 INFO nova.compute.manager [-] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Took 0.83 seconds to deallocate network for instance.
Oct 14 09:15:31 compute-0 ceph-mon[74249]: pgmap v1841: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 339 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 48 KiB/s wr, 204 op/s
Oct 14 09:15:31 compute-0 nova_compute[259627]: 2025-10-14 09:15:31.553 2 DEBUG nova.compute.manager [req-942d255b-f509-4360-b3da-1d696c72bc84 req-63d01bd1-2bd7-41e7-8324-702a079d0eac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Received event network-vif-deleted-96f6d888-2b4a-4ca3-a48c-4628d4143f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:31 compute-0 nova_compute[259627]: 2025-10-14 09:15:31.560 2 DEBUG oslo_concurrency.lockutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:31 compute-0 nova_compute[259627]: 2025-10-14 09:15:31.560 2 DEBUG oslo_concurrency.lockutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:31 compute-0 nova_compute[259627]: 2025-10-14 09:15:31.673 2 DEBUG oslo_concurrency.processutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:31 compute-0 podman[361796]: 2025-10-14 09:15:31.716509282 +0000 UTC m=+0.112504414 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:15:31 compute-0 podman[361795]: 2025-10-14 09:15:31.741444707 +0000 UTC m=+0.145606580 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:15:31 compute-0 nova_compute[259627]: 2025-10-14 09:15:31.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:15:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 359 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.2 MiB/s wr, 429 op/s
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.006 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:15:32 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/859700460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.174 2 DEBUG oslo_concurrency.processutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.182 2 DEBUG nova.compute.provider_tree [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.201 2 DEBUG nova.scheduler.client.report [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.235 2 DEBUG oslo_concurrency.lockutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.240 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.240 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.241 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.241 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.336 2 INFO nova.scheduler.client.report [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Deleted allocations for instance 6b47438c-b2b5-48e7-a15c-ee7c5936da65
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.432 2 DEBUG oslo_concurrency.lockutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Oct 14 09:15:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Oct 14 09:15:32 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Oct 14 09:15:32 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/859700460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:15:32 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159755200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.712 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:15:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.787 2 DEBUG nova.compute.manager [req-fd3dd266-83a1-4251-be77-f46b625d35b6 req-bffec553-61b2-41a3-8de0-a5c7bc20f092 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Received event network-vif-plugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.787 2 DEBUG oslo_concurrency.lockutils [req-fd3dd266-83a1-4251-be77-f46b625d35b6 req-bffec553-61b2-41a3-8de0-a5c7bc20f092 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.787 2 DEBUG oslo_concurrency.lockutils [req-fd3dd266-83a1-4251-be77-f46b625d35b6 req-bffec553-61b2-41a3-8de0-a5c7bc20f092 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.787 2 DEBUG oslo_concurrency.lockutils [req-fd3dd266-83a1-4251-be77-f46b625d35b6 req-bffec553-61b2-41a3-8de0-a5c7bc20f092 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.788 2 DEBUG nova.compute.manager [req-fd3dd266-83a1-4251-be77-f46b625d35b6 req-bffec553-61b2-41a3-8de0-a5c7bc20f092 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] No waiting events found dispatching network-vif-plugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.788 2 WARNING nova.compute.manager [req-fd3dd266-83a1-4251-be77-f46b625d35b6 req-bffec553-61b2-41a3-8de0-a5c7bc20f092 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Received unexpected event network-vif-plugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 for instance with vm_state deleted and task_state None.
Oct 14 09:15:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:15:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:15:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:15:32
Oct 14 09:15:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:15:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:15:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', 'vms', 'images', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'volumes', '.mgr']
Oct 14 09:15:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:15:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:15:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.804 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.805 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.811 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.811 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.817 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.817 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:15:32 compute-0 nova_compute[259627]: 2025-10-14 09:15:32.818 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:15:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:15:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Oct 14 09:15:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Oct 14 09:15:32 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.056 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.057 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3149MB free_disk=59.81802749633789GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.057 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.057 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.146 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance d46b6953-9413-4e6a-94f7-7b5ac9634c16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.146 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance e1aea504-3ecf-4273-a867-66afb39de726 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.146 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 33969555-fe06-4613-b244-d03c9b4180ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.146 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.147 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:15:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:15:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:15:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:15:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:15:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:15:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:15:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:15:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:15:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:15:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.228 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.503 2 DEBUG oslo_concurrency.lockutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.504 2 DEBUG oslo_concurrency.lockutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.504 2 DEBUG oslo_concurrency.lockutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.504 2 DEBUG oslo_concurrency.lockutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.505 2 DEBUG oslo_concurrency.lockutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.506 2 INFO nova.compute.manager [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Terminating instance
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.507 2 DEBUG nova.compute.manager [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:15:33 compute-0 ceph-mon[74249]: pgmap v1842: 305 pgs: 305 active+clean; 359 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.2 MiB/s wr, 429 op/s
Oct 14 09:15:33 compute-0 ceph-mon[74249]: osdmap e263: 3 total, 3 up, 3 in
Oct 14 09:15:33 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2159755200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:33 compute-0 ceph-mon[74249]: osdmap e264: 3 total, 3 up, 3 in
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:33 compute-0 kernel: tap175a9914-00 (unregistering): left promiscuous mode
Oct 14 09:15:33 compute-0 NetworkManager[44885]: <info>  [1760433333.5762] device (tap175a9914-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:15:33 compute-0 ovn_controller[152662]: 2025-10-14T09:15:33Z|01110|binding|INFO|Releasing lport 175a9914-0068-4aeb-b4b6-d501212d3374 from this chassis (sb_readonly=0)
Oct 14 09:15:33 compute-0 ovn_controller[152662]: 2025-10-14T09:15:33Z|01111|binding|INFO|Setting lport 175a9914-0068-4aeb-b4b6-d501212d3374 down in Southbound
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:33 compute-0 ovn_controller[152662]: 2025-10-14T09:15:33Z|01112|binding|INFO|Removing iface tap175a9914-00 ovn-installed in OVS
Oct 14 09:15:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:33.589 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:d8:7e 10.100.0.7'], port_security=['fa:16:3e:6a:d8:7e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e1aea504-3ecf-4273-a867-66afb39de726', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a53d982-8cf9-44ff-9b16-dda957aa7729', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4eae338e7d54d159033a20bc7460935', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9a0a095f-2308-407d-90d9-6acaa688f93c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab8a6447-c4e7-4ebf-b042-eb7d83ec88dc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=175a9914-0068-4aeb-b4b6-d501212d3374) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:15:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:33.590 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 175a9914-0068-4aeb-b4b6-d501212d3374 in datapath 8a53d982-8cf9-44ff-9b16-dda957aa7729 unbound from our chassis
Oct 14 09:15:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:33.591 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8a53d982-8cf9-44ff-9b16-dda957aa7729 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:15:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:33.592 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[369d570b-928a-4470-b3f1-d2e0e6f9bd30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:33 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000065.scope: Deactivated successfully.
Oct 14 09:15:33 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000065.scope: Consumed 12.722s CPU time.
Oct 14 09:15:33 compute-0 systemd-machined[214636]: Machine qemu-128-instance-00000065 terminated.
Oct 14 09:15:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:15:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1712277105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.713 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.724 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.742 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.748 2 INFO nova.virt.libvirt.driver [-] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance destroyed successfully.
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.748 2 DEBUG nova.objects.instance [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'resources' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.764 2 DEBUG nova.virt.libvirt.vif [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1533575249',display_name='tempest-ServerRescueTestJSONUnderV235-server-1533575249',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1533575249',id=101,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:15:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4eae338e7d54d159033a20bc7460935',ramdisk_id='',reservation_id='r-tm6flkse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2037260230',owner_user_name='tempest-ServerRescueTestJSONUnderV235-2037260230-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:15:12Z,user_data=None,user_id='648aaa75d8974d439d8ebe331c3d6568',uuid=e1aea504-3ecf-4273-a867-66afb39de726,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.764 2 DEBUG nova.network.os_vif_util [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Converting VIF {"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.765 2 DEBUG nova.network.os_vif_util [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:d8:7e,bridge_name='br-int',has_traffic_filtering=True,id=175a9914-0068-4aeb-b4b6-d501212d3374,network=Network(8a53d982-8cf9-44ff-9b16-dda957aa7729),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap175a9914-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.765 2 DEBUG os_vif [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:d8:7e,bridge_name='br-int',has_traffic_filtering=True,id=175a9914-0068-4aeb-b4b6-d501212d3374,network=Network(8a53d982-8cf9-44ff-9b16-dda957aa7729),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap175a9914-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.767 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap175a9914-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.770 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.770 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:33 compute-0 nova_compute[259627]: 2025-10-14 09:15:33.772 2 INFO os_vif [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:d8:7e,bridge_name='br-int',has_traffic_filtering=True,id=175a9914-0068-4aeb-b4b6-d501212d3374,network=Network(8a53d982-8cf9-44ff-9b16-dda957aa7729),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap175a9914-00')
Oct 14 09:15:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Oct 14 09:15:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Oct 14 09:15:33 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Oct 14 09:15:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 359 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.3 MiB/s wr, 349 op/s
Oct 14 09:15:34 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1712277105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:34 compute-0 ceph-mon[74249]: osdmap e265: 3 total, 3 up, 3 in
Oct 14 09:15:34 compute-0 nova_compute[259627]: 2025-10-14 09:15:34.657 2 INFO nova.virt.libvirt.driver [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Deleting instance files /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726_del
Oct 14 09:15:34 compute-0 nova_compute[259627]: 2025-10-14 09:15:34.658 2 INFO nova.virt.libvirt.driver [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Deletion of /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726_del complete
Oct 14 09:15:34 compute-0 nova_compute[259627]: 2025-10-14 09:15:34.727 2 INFO nova.compute.manager [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Took 1.22 seconds to destroy the instance on the hypervisor.
Oct 14 09:15:34 compute-0 nova_compute[259627]: 2025-10-14 09:15:34.728 2 DEBUG oslo.service.loopingcall [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:15:34 compute-0 nova_compute[259627]: 2025-10-14 09:15:34.729 2 DEBUG nova.compute.manager [-] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:15:34 compute-0 nova_compute[259627]: 2025-10-14 09:15:34.730 2 DEBUG nova.network.neutron [-] [instance: e1aea504-3ecf-4273-a867-66afb39de726] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:15:34 compute-0 nova_compute[259627]: 2025-10-14 09:15:34.970 2 DEBUG nova.compute.manager [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-unplugged-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:34 compute-0 nova_compute[259627]: 2025-10-14 09:15:34.971 2 DEBUG oslo_concurrency.lockutils [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:34 compute-0 nova_compute[259627]: 2025-10-14 09:15:34.971 2 DEBUG oslo_concurrency.lockutils [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:34 compute-0 nova_compute[259627]: 2025-10-14 09:15:34.972 2 DEBUG oslo_concurrency.lockutils [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:34 compute-0 nova_compute[259627]: 2025-10-14 09:15:34.973 2 DEBUG nova.compute.manager [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] No waiting events found dispatching network-vif-unplugged-175a9914-0068-4aeb-b4b6-d501212d3374 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:34 compute-0 nova_compute[259627]: 2025-10-14 09:15:34.973 2 DEBUG nova.compute.manager [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-unplugged-175a9914-0068-4aeb-b4b6-d501212d3374 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:15:34 compute-0 nova_compute[259627]: 2025-10-14 09:15:34.974 2 DEBUG nova.compute.manager [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:34 compute-0 nova_compute[259627]: 2025-10-14 09:15:34.975 2 DEBUG oslo_concurrency.lockutils [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:34 compute-0 nova_compute[259627]: 2025-10-14 09:15:34.975 2 DEBUG oslo_concurrency.lockutils [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:34 compute-0 nova_compute[259627]: 2025-10-14 09:15:34.976 2 DEBUG oslo_concurrency.lockutils [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:34 compute-0 nova_compute[259627]: 2025-10-14 09:15:34.977 2 DEBUG nova.compute.manager [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] No waiting events found dispatching network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:34 compute-0 nova_compute[259627]: 2025-10-14 09:15:34.977 2 WARNING nova.compute.manager [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received unexpected event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 for instance with vm_state rescued and task_state deleting.
Oct 14 09:15:35 compute-0 ceph-mon[74249]: pgmap v1846: 305 pgs: 305 active+clean; 359 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.3 MiB/s wr, 349 op/s
Oct 14 09:15:35 compute-0 nova_compute[259627]: 2025-10-14 09:15:35.696 2 DEBUG nova.network.neutron [-] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:15:35 compute-0 nova_compute[259627]: 2025-10-14 09:15:35.719 2 INFO nova.compute.manager [-] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Took 0.99 seconds to deallocate network for instance.
Oct 14 09:15:35 compute-0 nova_compute[259627]: 2025-10-14 09:15:35.771 2 DEBUG oslo_concurrency.lockutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:35 compute-0 nova_compute[259627]: 2025-10-14 09:15:35.772 2 DEBUG oslo_concurrency.lockutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:35 compute-0 nova_compute[259627]: 2025-10-14 09:15:35.811 2 DEBUG nova.compute.manager [req-85fd60c1-7642-4c36-844c-aa52f4c42d34 req-bae79897-53ea-43a2-9541-1286b9dc437a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-deleted-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:35 compute-0 nova_compute[259627]: 2025-10-14 09:15:35.845 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "c28cafe5-40e7-47f9-8793-6193487fccc3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:35 compute-0 nova_compute[259627]: 2025-10-14 09:15:35.846 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:35 compute-0 nova_compute[259627]: 2025-10-14 09:15:35.867 2 INFO nova.compute.manager [None req-0b0f90e8-c2b8-43ce-8854-d98ac3f27c1b e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Get console output
Oct 14 09:15:35 compute-0 nova_compute[259627]: 2025-10-14 09:15:35.877 2 DEBUG nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:15:35 compute-0 nova_compute[259627]: 2025-10-14 09:15:35.888 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:15:35 compute-0 nova_compute[259627]: 2025-10-14 09:15:35.913 2 DEBUG oslo_concurrency.processutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 200 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.3 MiB/s wr, 559 op/s
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.008 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.224 2 DEBUG oslo_concurrency.lockutils [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.225 2 DEBUG oslo_concurrency.lockutils [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.225 2 DEBUG nova.compute.manager [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.231 2 DEBUG nova.compute.manager [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.233 2 DEBUG nova.objects.instance [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'flavor' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.270 2 DEBUG nova.virt.libvirt.driver [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:15:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:15:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3098411361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.369 2 DEBUG oslo_concurrency.processutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.380 2 DEBUG nova.compute.provider_tree [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.395 2 DEBUG nova.scheduler.client.report [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.416 2 DEBUG oslo_concurrency.lockutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.421 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.431 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.432 2 INFO nova.compute.claims [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.458 2 INFO nova.scheduler.client.report [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Deleted allocations for instance e1aea504-3ecf-4273-a867-66afb39de726
Oct 14 09:15:36 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3098411361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.565 2 DEBUG oslo_concurrency.lockutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.627 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.771 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.772 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.800 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.800 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:15:36 compute-0 nova_compute[259627]: 2025-10-14 09:15:36.800 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:15:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:15:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2424274207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.071 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.080 2 DEBUG nova.compute.provider_tree [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.097 2 DEBUG nova.scheduler.client.report [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.122 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.124 2 DEBUG nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.194 2 DEBUG nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.195 2 DEBUG nova.network.neutron [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.221 2 INFO nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.240 2 DEBUG nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.354 2 DEBUG nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.356 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.356 2 INFO nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Creating image(s)
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.384 2 DEBUG nova.storage.rbd_utils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image c28cafe5-40e7-47f9-8793-6193487fccc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.409 2 DEBUG nova.storage.rbd_utils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image c28cafe5-40e7-47f9-8793-6193487fccc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.433 2 DEBUG nova.storage.rbd_utils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image c28cafe5-40e7-47f9-8793-6193487fccc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.436 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.526 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.527 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.528 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.528 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.553 2 DEBUG nova.storage.rbd_utils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image c28cafe5-40e7-47f9-8793-6193487fccc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.557 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c28cafe5-40e7-47f9-8793-6193487fccc3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:37 compute-0 ceph-mon[74249]: pgmap v1847: 305 pgs: 305 active+clean; 200 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.3 MiB/s wr, 559 op/s
Oct 14 09:15:37 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2424274207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.699 2 DEBUG nova.policy [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a287ef08fc5c4f218bf06cd2c7ed021e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.856 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c28cafe5-40e7-47f9-8793-6193487fccc3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:15:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Oct 14 09:15:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Oct 14 09:15:37 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.949 2 DEBUG nova.storage.rbd_utils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] resizing rbd image c28cafe5-40e7-47f9-8793-6193487fccc3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:15:37 compute-0 nova_compute[259627]: 2025-10-14 09:15:37.988 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:15:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 200 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 160 KiB/s rd, 38 KiB/s wr, 231 op/s
Oct 14 09:15:38 compute-0 nova_compute[259627]: 2025-10-14 09:15:38.071 2 DEBUG nova.objects.instance [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'migration_context' on Instance uuid c28cafe5-40e7-47f9-8793-6193487fccc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:38 compute-0 nova_compute[259627]: 2025-10-14 09:15:38.084 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:15:38 compute-0 nova_compute[259627]: 2025-10-14 09:15:38.085 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Ensure instance console log exists: /var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:15:38 compute-0 nova_compute[259627]: 2025-10-14 09:15:38.085 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:38 compute-0 nova_compute[259627]: 2025-10-14 09:15:38.085 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:38 compute-0 nova_compute[259627]: 2025-10-14 09:15:38.086 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:38 compute-0 nova_compute[259627]: 2025-10-14 09:15:38.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:38 compute-0 kernel: tap7e42bf44-c1 (unregistering): left promiscuous mode
Oct 14 09:15:38 compute-0 NetworkManager[44885]: <info>  [1760433338.5898] device (tap7e42bf44-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:15:38 compute-0 nova_compute[259627]: 2025-10-14 09:15:38.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:38 compute-0 ovn_controller[152662]: 2025-10-14T09:15:38Z|01113|binding|INFO|Releasing lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 from this chassis (sb_readonly=0)
Oct 14 09:15:38 compute-0 ovn_controller[152662]: 2025-10-14T09:15:38Z|01114|binding|INFO|Setting lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 down in Southbound
Oct 14 09:15:38 compute-0 ovn_controller[152662]: 2025-10-14T09:15:38Z|01115|binding|INFO|Removing iface tap7e42bf44-c1 ovn-installed in OVS
Oct 14 09:15:38 compute-0 nova_compute[259627]: 2025-10-14 09:15:38.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.610 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:8d:8f 10.100.0.4'], port_security=['fa:16:3e:20:8d:8f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '33969555-fe06-4613-b244-d03c9b4180ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7fb2d92f-9be1-4133-9fba-da943dad4162', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10a0b33e-96f5-46d7-a240-9f59c55a6b07, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7e42bf44-c1f8-49df-bd5f-a26abe43a832) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:15:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.611 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7e42bf44-c1f8-49df-bd5f-a26abe43a832 in datapath 7f4225de-9f3f-48e2-bad7-a89cf4884a2e unbound from our chassis
Oct 14 09:15:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.613 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7f4225de-9f3f-48e2-bad7-a89cf4884a2e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:15:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.613 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4dea3aa5-c2a8-48af-b481-c41fd6dbf14f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.614 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e namespace which is not needed anymore
Oct 14 09:15:38 compute-0 nova_compute[259627]: 2025-10-14 09:15:38.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:38 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000067.scope: Deactivated successfully.
Oct 14 09:15:38 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000067.scope: Consumed 13.667s CPU time.
Oct 14 09:15:38 compute-0 systemd-machined[214636]: Machine qemu-129-instance-00000067 terminated.
Oct 14 09:15:38 compute-0 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[361317]: [NOTICE]   (361321) : haproxy version is 2.8.14-c23fe91
Oct 14 09:15:38 compute-0 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[361317]: [NOTICE]   (361321) : path to executable is /usr/sbin/haproxy
Oct 14 09:15:38 compute-0 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[361317]: [WARNING]  (361321) : Exiting Master process...
Oct 14 09:15:38 compute-0 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[361317]: [ALERT]    (361321) : Current worker (361323) exited with code 143 (Terminated)
Oct 14 09:15:38 compute-0 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[361317]: [WARNING]  (361321) : All workers exited. Exiting... (0)
Oct 14 09:15:38 compute-0 systemd[1]: libpod-4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1.scope: Deactivated successfully.
Oct 14 09:15:38 compute-0 conmon[361317]: conmon 4c63af5c6a6f78608041 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1.scope/container/memory.events
Oct 14 09:15:38 compute-0 podman[362179]: 2025-10-14 09:15:38.765420443 +0000 UTC m=+0.051085680 container died 4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:15:38 compute-0 nova_compute[259627]: 2025-10-14 09:15:38.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1-userdata-shm.mount: Deactivated successfully.
Oct 14 09:15:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-5839d52bde3e6e8745834d54c6e47047a57e1518d17430d80d50930db816dbf4-merged.mount: Deactivated successfully.
Oct 14 09:15:38 compute-0 podman[362179]: 2025-10-14 09:15:38.818221955 +0000 UTC m=+0.103887222 container cleanup 4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:15:38 compute-0 systemd[1]: libpod-conmon-4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1.scope: Deactivated successfully.
Oct 14 09:15:38 compute-0 ceph-mon[74249]: osdmap e266: 3 total, 3 up, 3 in
Oct 14 09:15:38 compute-0 ceph-mon[74249]: pgmap v1849: 305 pgs: 305 active+clean; 200 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 160 KiB/s rd, 38 KiB/s wr, 231 op/s
Oct 14 09:15:38 compute-0 podman[362211]: 2025-10-14 09:15:38.89836354 +0000 UTC m=+0.050445825 container remove 4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:15:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.909 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d40d17e1-f028-4109-94f8-a231969d3036]: (4, ('Tue Oct 14 09:15:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e (4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1)\n4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1\nTue Oct 14 09:15:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e (4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1)\n4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.911 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[37708e8a-39ee-4b6b-b1c2-f87ad9f8f0d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.912 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f4225de-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:38 compute-0 nova_compute[259627]: 2025-10-14 09:15:38.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:38 compute-0 kernel: tap7f4225de-90: left promiscuous mode
Oct 14 09:15:38 compute-0 nova_compute[259627]: 2025-10-14 09:15:38.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.949 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e252a4e6-84a1-4d07-8e01-6010ed358121]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.983 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[28f6deb7-0a3e-442c-81c5-f40ca0f52814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.984 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c55df0-b962-45ce-9c41-752fc2516cb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:39.002 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[161d6710-551b-4b99-b0fa-d9ea3598d892]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709309, 'reachable_time': 21217, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362241, 'error': None, 'target': 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d7f4225de\x2d9f3f\x2d48e2\x2dbad7\x2da89cf4884a2e.mount: Deactivated successfully.
Oct 14 09:15:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:39.008 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:15:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:39.008 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[14bee3e0-53c1-430c-ab46-72e0a9ea5928]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:39 compute-0 nova_compute[259627]: 2025-10-14 09:15:39.294 2 INFO nova.virt.libvirt.driver [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance shutdown successfully after 3 seconds.
Oct 14 09:15:39 compute-0 nova_compute[259627]: 2025-10-14 09:15:39.302 2 INFO nova.virt.libvirt.driver [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance destroyed successfully.
Oct 14 09:15:39 compute-0 nova_compute[259627]: 2025-10-14 09:15:39.303 2 DEBUG nova.objects.instance [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'numa_topology' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:39 compute-0 nova_compute[259627]: 2025-10-14 09:15:39.325 2 DEBUG nova.compute.manager [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:39 compute-0 nova_compute[259627]: 2025-10-14 09:15:39.394 2 DEBUG oslo_concurrency.lockutils [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:39 compute-0 nova_compute[259627]: 2025-10-14 09:15:39.736 2 DEBUG nova.network.neutron [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Successfully created port: 114d4e63-ee15-4133-b8bc-9cd2b1861072 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:15:39 compute-0 nova_compute[259627]: 2025-10-14 09:15:39.851 2 DEBUG nova.compute.manager [req-0ec4ae54-15c0-4021-8221-e1afbdcc9e87 req-ad9ca2b7-a4bb-4bd3-a5c1-9dd5680bcb39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-vif-unplugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:39 compute-0 nova_compute[259627]: 2025-10-14 09:15:39.851 2 DEBUG oslo_concurrency.lockutils [req-0ec4ae54-15c0-4021-8221-e1afbdcc9e87 req-ad9ca2b7-a4bb-4bd3-a5c1-9dd5680bcb39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:39 compute-0 nova_compute[259627]: 2025-10-14 09:15:39.851 2 DEBUG oslo_concurrency.lockutils [req-0ec4ae54-15c0-4021-8221-e1afbdcc9e87 req-ad9ca2b7-a4bb-4bd3-a5c1-9dd5680bcb39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:39 compute-0 nova_compute[259627]: 2025-10-14 09:15:39.851 2 DEBUG oslo_concurrency.lockutils [req-0ec4ae54-15c0-4021-8221-e1afbdcc9e87 req-ad9ca2b7-a4bb-4bd3-a5c1-9dd5680bcb39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:39 compute-0 nova_compute[259627]: 2025-10-14 09:15:39.851 2 DEBUG nova.compute.manager [req-0ec4ae54-15c0-4021-8221-e1afbdcc9e87 req-ad9ca2b7-a4bb-4bd3-a5c1-9dd5680bcb39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] No waiting events found dispatching network-vif-unplugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:39 compute-0 nova_compute[259627]: 2025-10-14 09:15:39.852 2 WARNING nova.compute.manager [req-0ec4ae54-15c0-4021-8221-e1afbdcc9e87 req-ad9ca2b7-a4bb-4bd3-a5c1-9dd5680bcb39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received unexpected event network-vif-unplugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 for instance with vm_state stopped and task_state None.
Oct 14 09:15:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 200 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 29 KiB/s wr, 177 op/s
Oct 14 09:15:40 compute-0 ovn_controller[152662]: 2025-10-14T09:15:40Z|01116|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 09:15:40 compute-0 nova_compute[259627]: 2025-10-14 09:15:40.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:40 compute-0 ovn_controller[152662]: 2025-10-14T09:15:40Z|01117|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 09:15:40 compute-0 nova_compute[259627]: 2025-10-14 09:15:40.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:40 compute-0 nova_compute[259627]: 2025-10-14 09:15:40.995 2 DEBUG nova.network.neutron [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Successfully updated port: 114d4e63-ee15-4133-b8bc-9cd2b1861072 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:15:41 compute-0 nova_compute[259627]: 2025-10-14 09:15:41.007 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "refresh_cache-c28cafe5-40e7-47f9-8793-6193487fccc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:15:41 compute-0 nova_compute[259627]: 2025-10-14 09:15:41.007 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquired lock "refresh_cache-c28cafe5-40e7-47f9-8793-6193487fccc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:15:41 compute-0 nova_compute[259627]: 2025-10-14 09:15:41.007 2 DEBUG nova.network.neutron [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:15:41 compute-0 ceph-mon[74249]: pgmap v1850: 305 pgs: 305 active+clean; 200 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 29 KiB/s wr, 177 op/s
Oct 14 09:15:41 compute-0 nova_compute[259627]: 2025-10-14 09:15:41.291 2 DEBUG nova.compute.manager [req-49b03081-616a-4b4b-a84c-07bb0c0c2691 req-0a7447e6-917f-43d9-97fe-664a18747f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Received event network-changed-114d4e63-ee15-4133-b8bc-9cd2b1861072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:41 compute-0 nova_compute[259627]: 2025-10-14 09:15:41.291 2 DEBUG nova.compute.manager [req-49b03081-616a-4b4b-a84c-07bb0c0c2691 req-0a7447e6-917f-43d9-97fe-664a18747f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Refreshing instance network info cache due to event network-changed-114d4e63-ee15-4133-b8bc-9cd2b1861072. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:15:41 compute-0 nova_compute[259627]: 2025-10-14 09:15:41.292 2 DEBUG oslo_concurrency.lockutils [req-49b03081-616a-4b4b-a84c-07bb0c0c2691 req-0a7447e6-917f-43d9-97fe-664a18747f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c28cafe5-40e7-47f9-8793-6193487fccc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:15:41 compute-0 nova_compute[259627]: 2025-10-14 09:15:41.371 2 DEBUG nova.network.neutron [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:15:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 246 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 136 KiB/s rd, 2.7 MiB/s wr, 200 op/s
Oct 14 09:15:42 compute-0 nova_compute[259627]: 2025-10-14 09:15:42.013 2 DEBUG nova.compute.manager [req-b88cea60-58ff-4f37-bc42-f4981c326ef9 req-724d15a9-f25c-4400-9542-92aac972864f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:42 compute-0 nova_compute[259627]: 2025-10-14 09:15:42.014 2 DEBUG oslo_concurrency.lockutils [req-b88cea60-58ff-4f37-bc42-f4981c326ef9 req-724d15a9-f25c-4400-9542-92aac972864f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:42 compute-0 nova_compute[259627]: 2025-10-14 09:15:42.015 2 DEBUG oslo_concurrency.lockutils [req-b88cea60-58ff-4f37-bc42-f4981c326ef9 req-724d15a9-f25c-4400-9542-92aac972864f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:42 compute-0 nova_compute[259627]: 2025-10-14 09:15:42.015 2 DEBUG oslo_concurrency.lockutils [req-b88cea60-58ff-4f37-bc42-f4981c326ef9 req-724d15a9-f25c-4400-9542-92aac972864f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:42 compute-0 nova_compute[259627]: 2025-10-14 09:15:42.016 2 DEBUG nova.compute.manager [req-b88cea60-58ff-4f37-bc42-f4981c326ef9 req-724d15a9-f25c-4400-9542-92aac972864f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] No waiting events found dispatching network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:42 compute-0 nova_compute[259627]: 2025-10-14 09:15:42.016 2 WARNING nova.compute.manager [req-b88cea60-58ff-4f37-bc42-f4981c326ef9 req-724d15a9-f25c-4400-9542-92aac972864f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received unexpected event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 for instance with vm_state stopped and task_state None.
Oct 14 09:15:42 compute-0 nova_compute[259627]: 2025-10-14 09:15:42.519 2 INFO nova.compute.manager [None req-fe1aa59b-3a98-4bfb-a7a9-9a14018fd2d0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Get console output
Oct 14 09:15:42 compute-0 nova_compute[259627]: 2025-10-14 09:15:42.793 2 DEBUG nova.objects.instance [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'flavor' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:42 compute-0 nova_compute[259627]: 2025-10-14 09:15:42.816 2 DEBUG oslo_concurrency.lockutils [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:15:42 compute-0 nova_compute[259627]: 2025-10-14 09:15:42.816 2 DEBUG oslo_concurrency.lockutils [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquired lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:15:42 compute-0 nova_compute[259627]: 2025-10-14 09:15:42.816 2 DEBUG nova.network.neutron [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:15:42 compute-0 nova_compute[259627]: 2025-10-14 09:15:42.817 2 DEBUG nova.objects.instance [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'info_cache' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:15:43 compute-0 ceph-mon[74249]: pgmap v1851: 305 pgs: 305 active+clean; 246 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 136 KiB/s rd, 2.7 MiB/s wr, 200 op/s
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.110 2 DEBUG nova.network.neutron [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Updating instance_info_cache with network_info: [{"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.145 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Releasing lock "refresh_cache-c28cafe5-40e7-47f9-8793-6193487fccc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.146 2 DEBUG nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Instance network_info: |[{"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.146 2 DEBUG oslo_concurrency.lockutils [req-49b03081-616a-4b4b-a84c-07bb0c0c2691 req-0a7447e6-917f-43d9-97fe-664a18747f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c28cafe5-40e7-47f9-8793-6193487fccc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.147 2 DEBUG nova.network.neutron [req-49b03081-616a-4b4b-a84c-07bb0c0c2691 req-0a7447e6-917f-43d9-97fe-664a18747f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Refreshing network info cache for port 114d4e63-ee15-4133-b8bc-9cd2b1861072 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.152 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Start _get_guest_xml network_info=[{"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.159 2 WARNING nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.170 2 DEBUG nova.virt.libvirt.host [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.171 2 DEBUG nova.virt.libvirt.host [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.176 2 DEBUG nova.virt.libvirt.host [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.176 2 DEBUG nova.virt.libvirt.host [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.177 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.177 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.178 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.179 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.179 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.180 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.181 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.182 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.182 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.183 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.183 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.184 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018646067347138113 of space, bias 1.0, pg target 0.5593820204141434 quantized to 32 (current 32)
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.189 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:15:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:15:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2752679014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.694 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.719 2 DEBUG nova.storage.rbd_utils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image c28cafe5-40e7-47f9-8793-6193487fccc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.724 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:43 compute-0 nova_compute[259627]: 2025-10-14 09:15:43.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 246 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 2.2 MiB/s wr, 162 op/s
Oct 14 09:15:44 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2752679014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:15:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2816728327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.212 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.215 2 DEBUG nova.virt.libvirt.vif [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:15:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-271457736',display_name='tempest-ServersTestJSON-server-271457736',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-271457736',id=105,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-tuwn13dt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:15:37Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=c28cafe5-40e7-47f9-8793-6193487fccc3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.215 2 DEBUG nova.network.os_vif_util [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.217 2 DEBUG nova.network.os_vif_util [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:b1:69,bridge_name='br-int',has_traffic_filtering=True,id=114d4e63-ee15-4133-b8bc-9cd2b1861072,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap114d4e63-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.219 2 DEBUG nova.objects.instance [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'pci_devices' on Instance uuid c28cafe5-40e7-47f9-8793-6193487fccc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.238 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:15:44 compute-0 nova_compute[259627]:   <uuid>c28cafe5-40e7-47f9-8793-6193487fccc3</uuid>
Oct 14 09:15:44 compute-0 nova_compute[259627]:   <name>instance-00000069</name>
Oct 14 09:15:44 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:15:44 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:15:44 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <nova:name>tempest-ServersTestJSON-server-271457736</nova:name>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:15:43</nova:creationTime>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:15:44 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:15:44 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:15:44 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:15:44 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:15:44 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:15:44 compute-0 nova_compute[259627]:         <nova:user uuid="a287ef08fc5c4f218bf06cd2c7ed021e">tempest-ServersTestJSON-2060951674-project-member</nova:user>
Oct 14 09:15:44 compute-0 nova_compute[259627]:         <nova:project uuid="0a080fae2f3c4e39a6cca225203f5ec6">tempest-ServersTestJSON-2060951674</nova:project>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:15:44 compute-0 nova_compute[259627]:         <nova:port uuid="114d4e63-ee15-4133-b8bc-9cd2b1861072">
Oct 14 09:15:44 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:15:44 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:15:44 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <system>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <entry name="serial">c28cafe5-40e7-47f9-8793-6193487fccc3</entry>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <entry name="uuid">c28cafe5-40e7-47f9-8793-6193487fccc3</entry>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     </system>
Oct 14 09:15:44 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:15:44 compute-0 nova_compute[259627]:   <os>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:   </os>
Oct 14 09:15:44 compute-0 nova_compute[259627]:   <features>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:   </features>
Oct 14 09:15:44 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:15:44 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:15:44 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/c28cafe5-40e7-47f9-8793-6193487fccc3_disk">
Oct 14 09:15:44 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       </source>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:15:44 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/c28cafe5-40e7-47f9-8793-6193487fccc3_disk.config">
Oct 14 09:15:44 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       </source>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:15:44 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:d1:b1:69"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <target dev="tap114d4e63-ee"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3/console.log" append="off"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <video>
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     </video>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:15:44 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:15:44 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:15:44 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:15:44 compute-0 nova_compute[259627]: </domain>
Oct 14 09:15:44 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.240 2 DEBUG nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Preparing to wait for external event network-vif-plugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.241 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.241 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.242 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.243 2 DEBUG nova.virt.libvirt.vif [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:15:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-271457736',display_name='tempest-ServersTestJSON-server-271457736',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-271457736',id=105,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-tuwn13dt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:15:37Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=c28cafe5-40e7-47f9-8793-6193487fccc3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.243 2 DEBUG nova.network.os_vif_util [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.244 2 DEBUG nova.network.os_vif_util [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:b1:69,bridge_name='br-int',has_traffic_filtering=True,id=114d4e63-ee15-4133-b8bc-9cd2b1861072,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap114d4e63-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.245 2 DEBUG os_vif [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:b1:69,bridge_name='br-int',has_traffic_filtering=True,id=114d4e63-ee15-4133-b8bc-9cd2b1861072,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap114d4e63-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.247 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.247 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.251 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap114d4e63-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.252 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap114d4e63-ee, col_values=(('external_ids', {'iface-id': '114d4e63-ee15-4133-b8bc-9cd2b1861072', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:b1:69', 'vm-uuid': 'c28cafe5-40e7-47f9-8793-6193487fccc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:44 compute-0 NetworkManager[44885]: <info>  [1760433344.2555] manager: (tap114d4e63-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/447)
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.266 2 INFO os_vif [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:b1:69,bridge_name='br-int',has_traffic_filtering=True,id=114d4e63-ee15-4133-b8bc-9cd2b1861072,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap114d4e63-ee')
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.324 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.324 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.324 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No VIF found with MAC fa:16:3e:d1:b1:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.325 2 INFO nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Using config drive
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.350 2 DEBUG nova.storage.rbd_utils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image c28cafe5-40e7-47f9-8793-6193487fccc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.869 2 INFO nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Creating config drive at /var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3/disk.config
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.878 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfwe3g1i6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:44 compute-0 nova_compute[259627]: 2025-10-14 09:15:44.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.049 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfwe3g1i6" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:45 compute-0 ceph-mon[74249]: pgmap v1852: 305 pgs: 305 active+clean; 246 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 2.2 MiB/s wr, 162 op/s
Oct 14 09:15:45 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2816728327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.092 2 DEBUG nova.storage.rbd_utils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image c28cafe5-40e7-47f9-8793-6193487fccc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.097 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3/disk.config c28cafe5-40e7-47f9-8793-6193487fccc3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.153 2 DEBUG nova.network.neutron [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Updating instance_info_cache with network_info: [{"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.180 2 DEBUG oslo_concurrency.lockutils [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Releasing lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.200 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433330.198205, 6b47438c-b2b5-48e7-a15c-ee7c5936da65 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.200 2 INFO nova.compute.manager [-] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] VM Stopped (Lifecycle Event)
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.223 2 INFO nova.virt.libvirt.driver [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance destroyed successfully.
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.224 2 DEBUG nova.objects.instance [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'numa_topology' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.229 2 DEBUG nova.compute.manager [None req-f535e7d7-882f-4c27-900b-d8a429d33636 - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.246 2 DEBUG nova.objects.instance [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'resources' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.273 2 DEBUG nova.virt.libvirt.vif [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-383304321',display_name='tempest-TestNetworkAdvancedServerOps-server-383304321',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-383304321',id=103,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNrPP24QakCw1fsyp4n1/agXWsNdRHbY8VHQeER4RKvAc+m7J2Hp3JOcegzdlhA2wYneNiK6O1lPBfYB/HncbHtqlMdS+KcigG2/AdZVDhSvn9p2YfNQ60fauvNg9v/ylQ==',key_name='tempest-TestNetworkAdvancedServerOps-1100962189',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:15:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-xg2zg09k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:15:39Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=33969555-fe06-4613-b244-d03c9b4180ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.274 2 DEBUG nova.network.os_vif_util [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.275 2 DEBUG nova.network.os_vif_util [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.275 2 DEBUG os_vif [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.278 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e42bf44-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.284 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3/disk.config c28cafe5-40e7-47f9-8793-6193487fccc3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.284 2 INFO nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Deleting local config drive /var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3/disk.config because it was imported into RBD.
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.288 2 INFO os_vif [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1')
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.297 2 DEBUG nova.virt.libvirt.driver [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Start _get_guest_xml network_info=[{"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.306 2 WARNING nova.virt.libvirt.driver [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.315 2 DEBUG nova.virt.libvirt.host [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.316 2 DEBUG nova.virt.libvirt.host [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.323 2 DEBUG nova.virt.libvirt.host [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.324 2 DEBUG nova.virt.libvirt.host [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.324 2 DEBUG nova.virt.libvirt.driver [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.324 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.325 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.325 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.326 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.326 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.326 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.327 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.327 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.327 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.327 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.328 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.328 2 DEBUG nova.objects.instance [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:45 compute-0 kernel: tap114d4e63-ee: entered promiscuous mode
Oct 14 09:15:45 compute-0 NetworkManager[44885]: <info>  [1760433345.3335] manager: (tap114d4e63-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/448)
Oct 14 09:15:45 compute-0 ovn_controller[152662]: 2025-10-14T09:15:45Z|01118|binding|INFO|Claiming lport 114d4e63-ee15-4133-b8bc-9cd2b1861072 for this chassis.
Oct 14 09:15:45 compute-0 ovn_controller[152662]: 2025-10-14T09:15:45Z|01119|binding|INFO|114d4e63-ee15-4133-b8bc-9cd2b1861072: Claiming fa:16:3e:d1:b1:69 10.100.0.4
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.345 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:b1:69 10.100.0.4'], port_security=['fa:16:3e:d1:b1:69 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c28cafe5-40e7-47f9-8793-6193487fccc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=114d4e63-ee15-4133-b8bc-9cd2b1861072) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:15:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.346 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 114d4e63-ee15-4133-b8bc-9cd2b1861072 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e bound to our chassis
Oct 14 09:15:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.347 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.357 2 DEBUG oslo_concurrency.processutils [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:45 compute-0 ovn_controller[152662]: 2025-10-14T09:15:45Z|01120|binding|INFO|Setting lport 114d4e63-ee15-4133-b8bc-9cd2b1861072 ovn-installed in OVS
Oct 14 09:15:45 compute-0 ovn_controller[152662]: 2025-10-14T09:15:45Z|01121|binding|INFO|Setting lport 114d4e63-ee15-4133-b8bc-9cd2b1861072 up in Southbound
Oct 14 09:15:45 compute-0 systemd-udevd[362382]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:15:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.363 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[54acce03-57d8-41f7-859e-d94daed58017]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:45 compute-0 NetworkManager[44885]: <info>  [1760433345.3750] device (tap114d4e63-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:15:45 compute-0 NetworkManager[44885]: <info>  [1760433345.3756] device (tap114d4e63-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:15:45 compute-0 systemd-machined[214636]: New machine qemu-131-instance-00000069.
Oct 14 09:15:45 compute-0 systemd[1]: Started Virtual Machine qemu-131-instance-00000069.
Oct 14 09:15:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.398 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed328fb-f4ff-4c30-91cb-51b2fa380ec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.399 2 DEBUG nova.network.neutron [req-49b03081-616a-4b4b-a84c-07bb0c0c2691 req-0a7447e6-917f-43d9-97fe-664a18747f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Updated VIF entry in instance network info cache for port 114d4e63-ee15-4133-b8bc-9cd2b1861072. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.401 2 DEBUG nova.network.neutron [req-49b03081-616a-4b4b-a84c-07bb0c0c2691 req-0a7447e6-917f-43d9-97fe-664a18747f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Updating instance_info_cache with network_info: [{"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.404 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[091c83e7-5d53-4b17-9162-568614237a8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.429 2 DEBUG oslo_concurrency.lockutils [req-49b03081-616a-4b4b-a84c-07bb0c0c2691 req-0a7447e6-917f-43d9-97fe-664a18747f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c28cafe5-40e7-47f9-8793-6193487fccc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:15:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.433 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b1f666-13f9-4eba-bf02-8255da95c3de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.451 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4b3ad77b-ff69-4768-80cf-5d1e72e2c88e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 25, 'rx_bytes': 1000, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 25, 'rx_bytes': 1000, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362396, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.471 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[425ab8da-010e-444f-b077-a77b8dc463fc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362398, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362398, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.473 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.475 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.476 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.476 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.477 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.590 2 DEBUG nova.compute.manager [req-09496c98-325a-42c3-85cb-f8eb1f22c204 req-25722ad6-b4f2-427a-9576-61f926c15ef6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Received event network-vif-plugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.590 2 DEBUG oslo_concurrency.lockutils [req-09496c98-325a-42c3-85cb-f8eb1f22c204 req-25722ad6-b4f2-427a-9576-61f926c15ef6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.591 2 DEBUG oslo_concurrency.lockutils [req-09496c98-325a-42c3-85cb-f8eb1f22c204 req-25722ad6-b4f2-427a-9576-61f926c15ef6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.591 2 DEBUG oslo_concurrency.lockutils [req-09496c98-325a-42c3-85cb-f8eb1f22c204 req-25722ad6-b4f2-427a-9576-61f926c15ef6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.591 2 DEBUG nova.compute.manager [req-09496c98-325a-42c3-85cb-f8eb1f22c204 req-25722ad6-b4f2-427a-9576-61f926c15ef6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Processing event network-vif-plugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:15:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:15:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2525250669' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.785 2 DEBUG oslo_concurrency.processutils [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:45 compute-0 nova_compute[259627]: 2025-10-14 09:15:45.829 2 DEBUG oslo_concurrency.processutils [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 246 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.2 MiB/s wr, 40 op/s
Oct 14 09:15:46 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2525250669' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:15:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/57831561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.315 2 DEBUG oslo_concurrency.processutils [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.317 2 DEBUG nova.virt.libvirt.vif [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-383304321',display_name='tempest-TestNetworkAdvancedServerOps-server-383304321',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-383304321',id=103,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNrPP24QakCw1fsyp4n1/agXWsNdRHbY8VHQeER4RKvAc+m7J2Hp3JOcegzdlhA2wYneNiK6O1lPBfYB/HncbHtqlMdS+KcigG2/AdZVDhSvn9p2YfNQ60fauvNg9v/ylQ==',key_name='tempest-TestNetworkAdvancedServerOps-1100962189',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:15:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-xg2zg09k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:15:39Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=33969555-fe06-4613-b244-d03c9b4180ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.317 2 DEBUG nova.network.os_vif_util [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.318 2 DEBUG nova.network.os_vif_util [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.319 2 DEBUG nova.objects.instance [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_devices' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.335 2 DEBUG nova.virt.libvirt.driver [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:15:46 compute-0 nova_compute[259627]:   <uuid>33969555-fe06-4613-b244-d03c9b4180ba</uuid>
Oct 14 09:15:46 compute-0 nova_compute[259627]:   <name>instance-00000067</name>
Oct 14 09:15:46 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:15:46 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:15:46 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-383304321</nova:name>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:15:45</nova:creationTime>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:15:46 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:15:46 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:15:46 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:15:46 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:15:46 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:15:46 compute-0 nova_compute[259627]:         <nova:user uuid="e992bcb79c4946a8985e3df25eb216ca">tempest-TestNetworkAdvancedServerOps-94788416-project-member</nova:user>
Oct 14 09:15:46 compute-0 nova_compute[259627]:         <nova:project uuid="2d24993a343a425dbddac7e32be0c86b">tempest-TestNetworkAdvancedServerOps-94788416</nova:project>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:15:46 compute-0 nova_compute[259627]:         <nova:port uuid="7e42bf44-c1f8-49df-bd5f-a26abe43a832">
Oct 14 09:15:46 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:15:46 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:15:46 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <system>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <entry name="serial">33969555-fe06-4613-b244-d03c9b4180ba</entry>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <entry name="uuid">33969555-fe06-4613-b244-d03c9b4180ba</entry>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     </system>
Oct 14 09:15:46 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:15:46 compute-0 nova_compute[259627]:   <os>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:   </os>
Oct 14 09:15:46 compute-0 nova_compute[259627]:   <features>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:   </features>
Oct 14 09:15:46 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:15:46 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:15:46 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/33969555-fe06-4613-b244-d03c9b4180ba_disk">
Oct 14 09:15:46 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       </source>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:15:46 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/33969555-fe06-4613-b244-d03c9b4180ba_disk.config">
Oct 14 09:15:46 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       </source>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:15:46 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:20:8d:8f"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <target dev="tap7e42bf44-c1"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba/console.log" append="off"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <video>
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     </video>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <input type="keyboard" bus="usb"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:15:46 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:15:46 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:15:46 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:15:46 compute-0 nova_compute[259627]: </domain>
Oct 14 09:15:46 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.336 2 DEBUG nova.virt.libvirt.driver [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.337 2 DEBUG nova.virt.libvirt.driver [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.337 2 DEBUG nova.virt.libvirt.vif [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-383304321',display_name='tempest-TestNetworkAdvancedServerOps-server-383304321',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-383304321',id=103,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNrPP24QakCw1fsyp4n1/agXWsNdRHbY8VHQeER4RKvAc+m7J2Hp3JOcegzdlhA2wYneNiK6O1lPBfYB/HncbHtqlMdS+KcigG2/AdZVDhSvn9p2YfNQ60fauvNg9v/ylQ==',key_name='tempest-TestNetworkAdvancedServerOps-1100962189',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:15:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-xg2zg09k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:15:39Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=33969555-fe06-4613-b244-d03c9b4180ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.338 2 DEBUG nova.network.os_vif_util [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.338 2 DEBUG nova.network.os_vif_util [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.338 2 DEBUG os_vif [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.339 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.340 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.342 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e42bf44-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.342 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7e42bf44-c1, col_values=(('external_ids', {'iface-id': '7e42bf44-c1f8-49df-bd5f-a26abe43a832', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:8d:8f', 'vm-uuid': '33969555-fe06-4613-b244-d03c9b4180ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:46 compute-0 NetworkManager[44885]: <info>  [1760433346.3445] manager: (tap7e42bf44-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/449)
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.349 2 INFO os_vif [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1')
Oct 14 09:15:46 compute-0 kernel: tap7e42bf44-c1: entered promiscuous mode
Oct 14 09:15:46 compute-0 NetworkManager[44885]: <info>  [1760433346.4125] manager: (tap7e42bf44-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/450)
Oct 14 09:15:46 compute-0 ovn_controller[152662]: 2025-10-14T09:15:46Z|01122|binding|INFO|Claiming lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 for this chassis.
Oct 14 09:15:46 compute-0 ovn_controller[152662]: 2025-10-14T09:15:46Z|01123|binding|INFO|7e42bf44-c1f8-49df-bd5f-a26abe43a832: Claiming fa:16:3e:20:8d:8f 10.100.0.4
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:46 compute-0 systemd-udevd[362386]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:46 compute-0 NetworkManager[44885]: <info>  [1760433346.4229] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Oct 14 09:15:46 compute-0 NetworkManager[44885]: <info>  [1760433346.4238] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Oct 14 09:15:46 compute-0 NetworkManager[44885]: <info>  [1760433346.4269] device (tap7e42bf44-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:15:46 compute-0 NetworkManager[44885]: <info>  [1760433346.4284] device (tap7e42bf44-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.429 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:8d:8f 10.100.0.4'], port_security=['fa:16:3e:20:8d:8f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '33969555-fe06-4613-b244-d03c9b4180ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7fb2d92f-9be1-4133-9fba-da943dad4162', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10a0b33e-96f5-46d7-a240-9f59c55a6b07, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7e42bf44-c1f8-49df-bd5f-a26abe43a832) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.430 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7e42bf44-c1f8-49df-bd5f-a26abe43a832 in datapath 7f4225de-9f3f-48e2-bad7-a89cf4884a2e bound to our chassis
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.431 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7f4225de-9f3f-48e2-bad7-a89cf4884a2e
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.442 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[98a5fff2-ea28-4508-ab31-8dfecc02fc00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.443 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7f4225de-91 in ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.445 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7f4225de-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.445 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[06527965-6d28-4c82-9e36-663422feeb28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.446 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[68b191cd-e54d-4da5-835e-01c1c8b7ed80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:46 compute-0 systemd-machined[214636]: New machine qemu-132-instance-00000067.
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.457 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[a52980a1-a868-463d-8b03-4b3f9fd40735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:46 compute-0 systemd[1]: Started Virtual Machine qemu-132-instance-00000067.
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.481 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[97238166-650c-4435-8ed5-5d5355a84e0a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.512 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[655e02e9-5b40-45ba-91a9-e7904bd64eb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.519 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[14608336-d68a-423c-9938-c6fde0793a3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:46 compute-0 NetworkManager[44885]: <info>  [1760433346.5203] manager: (tap7f4225de-90): new Veth device (/org/freedesktop/NetworkManager/Devices/453)
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:46 compute-0 ovn_controller[152662]: 2025-10-14T09:15:46Z|01124|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:46 compute-0 ovn_controller[152662]: 2025-10-14T09:15:46Z|01125|binding|INFO|Setting lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 ovn-installed in OVS
Oct 14 09:15:46 compute-0 ovn_controller[152662]: 2025-10-14T09:15:46Z|01126|binding|INFO|Setting lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 up in Southbound
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.563 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6f00ac51-6b3a-4812-8006-fa009d32e702]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.566 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f5257e62-928b-43ec-9ec0-b41427863b6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:46 compute-0 NetworkManager[44885]: <info>  [1760433346.5885] device (tap7f4225de-90): carrier: link connected
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.594 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e782124b-a700-49ca-bcf3-def7a7ec96c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.616 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[20252c68-adf2-4fc6-b8cb-a5428b2e3e7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f4225de-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:e2:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712534, 'reachable_time': 28069, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362505, 'error': None, 'target': 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.632 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e1f864-8f73-43e9-96f3-f9839b8a176d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:e234'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712534, 'tstamp': 712534}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362524, 'error': None, 'target': 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.652 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec330b4-c41d-4648-bb6a-b194796e77cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f4225de-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:e2:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712534, 'reachable_time': 28069, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362531, 'error': None, 'target': 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.681 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[60e5b20c-ac3a-46e4-9ab1-b423f88d07c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.743 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3959f9c7-a4d5-4a9a-8468-d5babf302c76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.745 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f4225de-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.745 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.746 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f4225de-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:46 compute-0 NetworkManager[44885]: <info>  [1760433346.7485] manager: (tap7f4225de-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/454)
Oct 14 09:15:46 compute-0 kernel: tap7f4225de-90: entered promiscuous mode
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.751 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7f4225de-90, col_values=(('external_ids', {'iface-id': 'f5c700c1-27f2-4a9c-8db0-f69417ca2318'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:15:46 compute-0 ovn_controller[152662]: 2025-10-14T09:15:46Z|01127|binding|INFO|Releasing lport f5c700c1-27f2-4a9c-8db0-f69417ca2318 from this chassis (sb_readonly=0)
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:46 compute-0 nova_compute[259627]: 2025-10-14 09:15:46.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.770 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7f4225de-9f3f-48e2-bad7-a89cf4884a2e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7f4225de-9f3f-48e2-bad7-a89cf4884a2e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.771 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb70e6d-b7a4-42ef-8622-772fe12156dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.771 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-7f4225de-9f3f-48e2-bad7-a89cf4884a2e
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/7f4225de-9f3f-48e2-bad7-a89cf4884a2e.pid.haproxy
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 7f4225de-9f3f-48e2-bad7-a89cf4884a2e
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:15:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.772 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'env', 'PROCESS_TAG=haproxy-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7f4225de-9f3f-48e2-bad7-a89cf4884a2e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:15:47 compute-0 ceph-mon[74249]: pgmap v1853: 305 pgs: 305 active+clean; 246 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.2 MiB/s wr, 40 op/s
Oct 14 09:15:47 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/57831561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:47 compute-0 podman[362581]: 2025-10-14 09:15:47.182050262 +0000 UTC m=+0.059324693 container create a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.210 2 DEBUG nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.212 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433347.209737, c28cafe5-40e7-47f9-8793-6193487fccc3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.213 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] VM Started (Lifecycle Event)
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.220 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.226 2 INFO nova.virt.libvirt.driver [-] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Instance spawned successfully.
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.226 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.242 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:47 compute-0 podman[362581]: 2025-10-14 09:15:47.152634427 +0000 UTC m=+0.029908858 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.248 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:15:47 compute-0 systemd[1]: Started libpod-conmon-a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6.scope.
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.262 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.263 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.263 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.264 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.264 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.265 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.271 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.271 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433347.2101824, c28cafe5-40e7-47f9-8793-6193487fccc3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.272 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] VM Paused (Lifecycle Event)
Oct 14 09:15:47 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.300 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88b20c8a5752e1d1bec1c2d117ad2bfbfb3601e7200bb4fc5f4f5f26fbe83704/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.314 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433347.2193303, c28cafe5-40e7-47f9-8793-6193487fccc3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.314 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] VM Resumed (Lifecycle Event)
Oct 14 09:15:47 compute-0 podman[362581]: 2025-10-14 09:15:47.328389279 +0000 UTC m=+0.205663730 container init a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.330 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.335 2 INFO nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Took 9.98 seconds to spawn the instance on the hypervisor.
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.336 2 DEBUG nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.336 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:15:47 compute-0 podman[362581]: 2025-10-14 09:15:47.339889893 +0000 UTC m=+0.217164304 container start a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:15:47 compute-0 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[362597]: [NOTICE]   (362601) : New worker (362603) forked
Oct 14 09:15:47 compute-0 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[362597]: [NOTICE]   (362601) : Loading success.
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.367 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.396 2 INFO nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Took 11.42 seconds to build instance.
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.416 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.700 2 DEBUG nova.compute.manager [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Received event network-vif-plugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.702 2 DEBUG oslo_concurrency.lockutils [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.702 2 DEBUG oslo_concurrency.lockutils [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.703 2 DEBUG oslo_concurrency.lockutils [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.703 2 DEBUG nova.compute.manager [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] No waiting events found dispatching network-vif-plugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.704 2 WARNING nova.compute.manager [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Received unexpected event network-vif-plugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 for instance with vm_state active and task_state None.
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.705 2 DEBUG nova.compute.manager [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.705 2 DEBUG oslo_concurrency.lockutils [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.706 2 DEBUG oslo_concurrency.lockutils [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.706 2 DEBUG oslo_concurrency.lockutils [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.707 2 DEBUG nova.compute.manager [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] No waiting events found dispatching network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.707 2 WARNING nova.compute.manager [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received unexpected event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 for instance with vm_state stopped and task_state powering-on.
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.708 2 DEBUG nova.compute.manager [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.708 2 DEBUG oslo_concurrency.lockutils [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.709 2 DEBUG oslo_concurrency.lockutils [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.709 2 DEBUG oslo_concurrency.lockutils [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.710 2 DEBUG nova.compute.manager [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] No waiting events found dispatching network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:15:47 compute-0 nova_compute[259627]: 2025-10-14 09:15:47.710 2 WARNING nova.compute.manager [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received unexpected event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 for instance with vm_state stopped and task_state powering-on.
Oct 14 09:15:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:15:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 246 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.1 MiB/s wr, 40 op/s
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.022 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 33969555-fe06-4613-b244-d03c9b4180ba due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.023 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433348.0222712, 33969555-fe06-4613-b244-d03c9b4180ba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.023 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] VM Resumed (Lifecycle Event)
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.026 2 DEBUG nova.compute.manager [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.030 2 INFO nova.virt.libvirt.driver [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance rebooted successfully.
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.030 2 DEBUG nova.compute.manager [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.058 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.061 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.106 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] During sync_power_state the instance has a pending task (powering-on). Skip.
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.107 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433348.0231667, 33969555-fe06-4613-b244-d03c9b4180ba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.107 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] VM Started (Lifecycle Event)
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.150 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.153 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.476 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "0c7c28d8-ba3d-471a-bf37-8ff1870d27c8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.478 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "0c7c28d8-ba3d-471a-bf37-8ff1870d27c8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.494 2 DEBUG nova.compute.manager [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.579 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.580 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.587 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.588 2 INFO nova.compute.claims [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.741 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433333.7404816, e1aea504-3ecf-4273-a867-66afb39de726 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.741 2 INFO nova.compute.manager [-] [instance: e1aea504-3ecf-4273-a867-66afb39de726] VM Stopped (Lifecycle Event)
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.757 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:48 compute-0 nova_compute[259627]: 2025-10-14 09:15:48.805 2 DEBUG nova.compute.manager [None req-2548896b-b81c-4e44-9374-e34c866ec5e4 - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:49 compute-0 ceph-mon[74249]: pgmap v1854: 305 pgs: 305 active+clean; 246 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.1 MiB/s wr, 40 op/s
Oct 14 09:15:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:15:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2991585889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.213 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.218 2 DEBUG nova.compute.provider_tree [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.237 2 DEBUG nova.scheduler.client.report [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.264 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.264 2 DEBUG nova.compute.manager [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.331 2 DEBUG nova.compute.manager [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.357 2 INFO nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.378 2 DEBUG nova.compute.manager [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.484 2 DEBUG nova.compute.manager [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.486 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.487 2 INFO nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Creating image(s)
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.524 2 DEBUG nova.storage.rbd_utils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.564 2 DEBUG nova.storage.rbd_utils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.603 2 DEBUG nova.storage.rbd_utils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.609 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.714 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.715 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.716 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.717 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.741 2 DEBUG nova.storage.rbd_utils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:49 compute-0 nova_compute[259627]: 2025-10-14 09:15:49.745 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 246 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.062 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.064 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "548b81f4-df26-4f76-910f-5a14445c93c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.065 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "548b81f4-df26-4f76-910f-5a14445c93c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2991585889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.151 2 DEBUG nova.compute.manager [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.160 2 DEBUG nova.storage.rbd_utils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] resizing rbd image 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.221 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.222 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.261 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.262 2 INFO nova.compute.claims [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.269 2 DEBUG nova.objects.instance [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'migration_context' on Instance uuid 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.292 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.292 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Ensure instance console log exists: /var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.293 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.293 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.293 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.295 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.300 2 WARNING nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.304 2 DEBUG nova.virt.libvirt.host [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.305 2 DEBUG nova.virt.libvirt.host [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.308 2 DEBUG nova.virt.libvirt.host [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.308 2 DEBUG nova.virt.libvirt.host [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.309 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.309 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.310 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.310 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.310 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.310 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.311 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.311 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.311 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.312 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.312 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.312 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.315 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.386 2 DEBUG oslo_concurrency.lockutils [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "c28cafe5-40e7-47f9-8793-6193487fccc3" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.387 2 DEBUG oslo_concurrency.lockutils [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.387 2 DEBUG nova.compute.manager [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.391 2 DEBUG nova.compute.manager [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.392 2 DEBUG nova.objects.instance [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'flavor' on Instance uuid c28cafe5-40e7-47f9-8793-6193487fccc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.421 2 DEBUG nova.virt.libvirt.driver [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.539 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:50 compute-0 podman[362864]: 2025-10-14 09:15:50.667472156 +0000 UTC m=+0.082575317 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:15:50 compute-0 podman[362863]: 2025-10-14 09:15:50.68997825 +0000 UTC m=+0.098536869 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 14 09:15:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:15:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1383577701' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.766 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.807 2 DEBUG nova.storage.rbd_utils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:50 compute-0 nova_compute[259627]: 2025-10-14 09:15:50.816 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:15:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3091754112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.002 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.024 2 DEBUG nova.compute.provider_tree [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.048 2 DEBUG nova.scheduler.client.report [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.082 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.083 2 DEBUG nova.compute.manager [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:15:51 compute-0 ceph-mon[74249]: pgmap v1855: 305 pgs: 305 active+clean; 246 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 14 09:15:51 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1383577701' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:51 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3091754112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.161 2 DEBUG nova.compute.manager [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.219 2 INFO nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.259 2 DEBUG nova.compute.manager [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:15:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:15:51 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3824327778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.279 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.281 2 DEBUG nova.objects.instance [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'pci_devices' on Instance uuid 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.306 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:15:51 compute-0 nova_compute[259627]:   <uuid>0c7c28d8-ba3d-471a-bf37-8ff1870d27c8</uuid>
Oct 14 09:15:51 compute-0 nova_compute[259627]:   <name>instance-0000006a</name>
Oct 14 09:15:51 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:15:51 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:15:51 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerShowV247Test-server-1513228304</nova:name>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:15:50</nova:creationTime>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:15:51 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:15:51 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:15:51 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:15:51 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:15:51 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:15:51 compute-0 nova_compute[259627]:         <nova:user uuid="c5d4a1c172e947e0a129b9f397f961cf">tempest-ServerShowV247Test-1595240674-project-member</nova:user>
Oct 14 09:15:51 compute-0 nova_compute[259627]:         <nova:project uuid="222bb7bdbd34453db62947152ca9b44a">tempest-ServerShowV247Test-1595240674</nova:project>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:15:51 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:15:51 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <system>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <entry name="serial">0c7c28d8-ba3d-471a-bf37-8ff1870d27c8</entry>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <entry name="uuid">0c7c28d8-ba3d-471a-bf37-8ff1870d27c8</entry>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     </system>
Oct 14 09:15:51 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:15:51 compute-0 nova_compute[259627]:   <os>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:   </os>
Oct 14 09:15:51 compute-0 nova_compute[259627]:   <features>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:   </features>
Oct 14 09:15:51 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:15:51 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:15:51 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk">
Oct 14 09:15:51 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       </source>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:15:51 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk.config">
Oct 14 09:15:51 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       </source>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:15:51 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8/console.log" append="off"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <video>
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     </video>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:15:51 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:15:51 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:15:51 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:15:51 compute-0 nova_compute[259627]: </domain>
Oct 14 09:15:51 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.360 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.361 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.361 2 INFO nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Using config drive
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.408 2 DEBUG nova.storage.rbd_utils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.419 2 DEBUG nova.compute.manager [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.421 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.421 2 INFO nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Creating image(s)
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.450 2 DEBUG nova.storage.rbd_utils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.483 2 DEBUG nova.storage.rbd_utils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.515 2 DEBUG nova.storage.rbd_utils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.519 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.640 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.643 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.644 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.645 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.671 2 DEBUG nova.storage.rbd_utils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.677 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 548b81f4-df26-4f76-910f-5a14445c93c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.825 2 INFO nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Creating config drive at /var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8/disk.config
Oct 14 09:15:51 compute-0 nova_compute[259627]: 2025-10-14 09:15:51.839 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp371n0rop execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 293 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 193 op/s
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.024 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp371n0rop" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.049 2 DEBUG nova.storage.rbd_utils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.053 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8/disk.config 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.090 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 548b81f4-df26-4f76-910f-5a14445c93c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:52 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3824327778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.179 2 DEBUG nova.storage.rbd_utils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] resizing rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.220 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8/disk.config 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.220 2 INFO nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Deleting local config drive /var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8/disk.config because it was imported into RBD.
Oct 14 09:15:52 compute-0 systemd-machined[214636]: New machine qemu-133-instance-0000006a.
Oct 14 09:15:52 compute-0 systemd[1]: Started Virtual Machine qemu-133-instance-0000006a.
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.341 2 DEBUG nova.objects.instance [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'migration_context' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.372 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.372 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Ensure instance console log exists: /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.373 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.374 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.374 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.376 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.381 2 WARNING nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.386 2 DEBUG nova.virt.libvirt.host [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.387 2 DEBUG nova.virt.libvirt.host [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.390 2 DEBUG nova.virt.libvirt.host [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.390 2 DEBUG nova.virt.libvirt.host [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.390 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.390 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.391 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.391 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.391 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.391 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.391 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.391 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.392 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.392 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.392 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.392 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.394 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:15:52 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/897067073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.876 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.913 2 DEBUG nova.storage.rbd_utils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:52 compute-0 nova_compute[259627]: 2025-10-14 09:15:52.917 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:53 compute-0 ceph-mon[74249]: pgmap v1856: 305 pgs: 305 active+clean; 293 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 193 op/s
Oct 14 09:15:53 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/897067073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:15:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2391217156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:53 compute-0 nova_compute[259627]: 2025-10-14 09:15:53.364 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:53 compute-0 nova_compute[259627]: 2025-10-14 09:15:53.366 2 DEBUG nova.objects.instance [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'pci_devices' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:53 compute-0 nova_compute[259627]: 2025-10-14 09:15:53.383 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:15:53 compute-0 nova_compute[259627]:   <uuid>548b81f4-df26-4f76-910f-5a14445c93c5</uuid>
Oct 14 09:15:53 compute-0 nova_compute[259627]:   <name>instance-0000006b</name>
Oct 14 09:15:53 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:15:53 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:15:53 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerShowV247Test-server-1778827495</nova:name>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:15:52</nova:creationTime>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:15:53 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:15:53 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:15:53 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:15:53 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:15:53 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:15:53 compute-0 nova_compute[259627]:         <nova:user uuid="c5d4a1c172e947e0a129b9f397f961cf">tempest-ServerShowV247Test-1595240674-project-member</nova:user>
Oct 14 09:15:53 compute-0 nova_compute[259627]:         <nova:project uuid="222bb7bdbd34453db62947152ca9b44a">tempest-ServerShowV247Test-1595240674</nova:project>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:15:53 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:15:53 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <system>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <entry name="serial">548b81f4-df26-4f76-910f-5a14445c93c5</entry>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <entry name="uuid">548b81f4-df26-4f76-910f-5a14445c93c5</entry>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     </system>
Oct 14 09:15:53 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:15:53 compute-0 nova_compute[259627]:   <os>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:   </os>
Oct 14 09:15:53 compute-0 nova_compute[259627]:   <features>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:   </features>
Oct 14 09:15:53 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:15:53 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:15:53 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/548b81f4-df26-4f76-910f-5a14445c93c5_disk">
Oct 14 09:15:53 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       </source>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:15:53 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/548b81f4-df26-4f76-910f-5a14445c93c5_disk.config">
Oct 14 09:15:53 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       </source>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:15:53 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/console.log" append="off"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <video>
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     </video>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:15:53 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:15:53 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:15:53 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:15:53 compute-0 nova_compute[259627]: </domain>
Oct 14 09:15:53 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:15:53 compute-0 nova_compute[259627]: 2025-10-14 09:15:53.430 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:15:53 compute-0 nova_compute[259627]: 2025-10-14 09:15:53.430 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:15:53 compute-0 nova_compute[259627]: 2025-10-14 09:15:53.430 2 INFO nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Using config drive
Oct 14 09:15:53 compute-0 nova_compute[259627]: 2025-10-14 09:15:53.451 2 DEBUG nova.storage.rbd_utils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:53 compute-0 nova_compute[259627]: 2025-10-14 09:15:53.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:53 compute-0 nova_compute[259627]: 2025-10-14 09:15:53.603 2 INFO nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Creating config drive at /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config
Oct 14 09:15:53 compute-0 nova_compute[259627]: 2025-10-14 09:15:53.608 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpud5hobtm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:53 compute-0 nova_compute[259627]: 2025-10-14 09:15:53.746 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpud5hobtm" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:53 compute-0 nova_compute[259627]: 2025-10-14 09:15:53.782 2 DEBUG nova.storage.rbd_utils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:15:53 compute-0 nova_compute[259627]: 2025-10-14 09:15:53.785 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:15:53 compute-0 nova_compute[259627]: 2025-10-14 09:15:53.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:15:53 compute-0 nova_compute[259627]: 2025-10-14 09:15:53.986 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:15:53 compute-0 nova_compute[259627]: 2025-10-14 09:15:53.987 2 INFO nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Deleting local config drive /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config because it was imported into RBD.
Oct 14 09:15:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 293 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Oct 14 09:15:54 compute-0 systemd-machined[214636]: New machine qemu-134-instance-0000006b.
Oct 14 09:15:54 compute-0 systemd[1]: Started Virtual Machine qemu-134-instance-0000006b.
Oct 14 09:15:54 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2391217156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.633 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433354.6325357, 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.634 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] VM Resumed (Lifecycle Event)
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.636 2 DEBUG nova.compute.manager [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.636 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.639 2 INFO nova.virt.libvirt.driver [-] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Instance spawned successfully.
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.640 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.655 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.659 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.662 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.663 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.663 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.663 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.663 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.664 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.697 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.698 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433354.6356375, 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.698 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] VM Started (Lifecycle Event)
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.727 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.729 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.739 2 INFO nova.compute.manager [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Took 5.25 seconds to spawn the instance on the hypervisor.
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.739 2 DEBUG nova.compute.manager [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.750 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.787 2 INFO nova.compute.manager [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Took 6.25 seconds to build instance.
Oct 14 09:15:54 compute-0 nova_compute[259627]: 2025-10-14 09:15:54.803 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "0c7c28d8-ba3d-471a-bf37-8ff1870d27c8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:55 compute-0 ceph-mon[74249]: pgmap v1857: 305 pgs: 305 active+clean; 293 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.174 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433355.1738203, 548b81f4-df26-4f76-910f-5a14445c93c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.174 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] VM Resumed (Lifecycle Event)
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.175 2 DEBUG nova.compute.manager [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.176 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.178 2 INFO nova.virt.libvirt.driver [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance spawned successfully.
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.179 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.215 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.218 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.227 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.228 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.228 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.229 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.230 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.230 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.272 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.272 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433355.173933, 548b81f4-df26-4f76-910f-5a14445c93c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.272 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] VM Started (Lifecycle Event)
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.298 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.301 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.310 2 INFO nova.compute.manager [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Took 3.89 seconds to spawn the instance on the hypervisor.
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.311 2 DEBUG nova.compute.manager [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.321 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.371 2 INFO nova.compute.manager [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Took 5.17 seconds to build instance.
Oct 14 09:15:55 compute-0 nova_compute[259627]: 2025-10-14 09:15:55.409 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "548b81f4-df26-4f76-910f-5a14445c93c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:15:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 339 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 237 op/s
Oct 14 09:15:56 compute-0 nova_compute[259627]: 2025-10-14 09:15:56.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:57 compute-0 ceph-mon[74249]: pgmap v1858: 305 pgs: 305 active+clean; 339 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 237 op/s
Oct 14 09:15:57 compute-0 nova_compute[259627]: 2025-10-14 09:15:57.205 2 INFO nova.compute.manager [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Rebuilding instance
Oct 14 09:15:57 compute-0 nova_compute[259627]: 2025-10-14 09:15:57.471 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'trusted_certs' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:57 compute-0 nova_compute[259627]: 2025-10-14 09:15:57.487 2 DEBUG nova.compute.manager [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:15:57 compute-0 nova_compute[259627]: 2025-10-14 09:15:57.539 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'pci_requests' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:57 compute-0 nova_compute[259627]: 2025-10-14 09:15:57.552 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'pci_devices' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:57 compute-0 nova_compute[259627]: 2025-10-14 09:15:57.562 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'resources' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:57 compute-0 nova_compute[259627]: 2025-10-14 09:15:57.575 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'migration_context' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:15:57 compute-0 nova_compute[259627]: 2025-10-14 09:15:57.592 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:15:57 compute-0 nova_compute[259627]: 2025-10-14 09:15:57.598 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:15:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:15:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 339 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 233 op/s
Oct 14 09:15:58 compute-0 nova_compute[259627]: 2025-10-14 09:15:58.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:15:59 compute-0 ovn_controller[152662]: 2025-10-14T09:15:59Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d1:b1:69 10.100.0.4
Oct 14 09:15:59 compute-0 ovn_controller[152662]: 2025-10-14T09:15:59Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d1:b1:69 10.100.0.4
Oct 14 09:15:59 compute-0 ceph-mon[74249]: pgmap v1859: 305 pgs: 305 active+clean; 339 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 233 op/s
Oct 14 09:16:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 339 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 233 op/s
Oct 14 09:16:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:00.416 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:16:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:00.417 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:16:00 compute-0 nova_compute[259627]: 2025-10-14 09:16:00.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:00 compute-0 ovn_controller[152662]: 2025-10-14T09:16:00Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:8d:8f 10.100.0.4
Oct 14 09:16:00 compute-0 nova_compute[259627]: 2025-10-14 09:16:00.673 2 DEBUG nova.virt.libvirt.driver [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 09:16:01 compute-0 ceph-mon[74249]: pgmap v1860: 305 pgs: 305 active+clean; 339 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 233 op/s
Oct 14 09:16:01 compute-0 nova_compute[259627]: 2025-10-14 09:16:01.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 372 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 5.7 MiB/s wr, 449 op/s
Oct 14 09:16:02 compute-0 podman[363423]: 2025-10-14 09:16:02.684752639 +0000 UTC m=+0.082473954 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:16:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:16:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:16:02 compute-0 podman[363422]: 2025-10-14 09:16:02.783660027 +0000 UTC m=+0.173380165 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251009, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 09:16:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:16:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:16:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:16:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:16:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:16:02 compute-0 kernel: tap114d4e63-ee (unregistering): left promiscuous mode
Oct 14 09:16:02 compute-0 NetworkManager[44885]: <info>  [1760433362.9541] device (tap114d4e63-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:16:02 compute-0 nova_compute[259627]: 2025-10-14 09:16:02.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:02 compute-0 ovn_controller[152662]: 2025-10-14T09:16:02Z|01128|binding|INFO|Releasing lport 114d4e63-ee15-4133-b8bc-9cd2b1861072 from this chassis (sb_readonly=0)
Oct 14 09:16:02 compute-0 ovn_controller[152662]: 2025-10-14T09:16:02Z|01129|binding|INFO|Setting lport 114d4e63-ee15-4133-b8bc-9cd2b1861072 down in Southbound
Oct 14 09:16:02 compute-0 ovn_controller[152662]: 2025-10-14T09:16:02Z|01130|binding|INFO|Removing iface tap114d4e63-ee ovn-installed in OVS
Oct 14 09:16:02 compute-0 nova_compute[259627]: 2025-10-14 09:16:02.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:02.980 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:b1:69 10.100.0.4'], port_security=['fa:16:3e:d1:b1:69 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c28cafe5-40e7-47f9-8793-6193487fccc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=114d4e63-ee15-4133-b8bc-9cd2b1861072) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:16:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:02.982 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 114d4e63-ee15-4133-b8bc-9cd2b1861072 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e unbound from our chassis
Oct 14 09:16:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:02.984 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e
Oct 14 09:16:03 compute-0 nova_compute[259627]: 2025-10-14 09:16:03.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.004 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2364bea3-9f6b-4b9d-8c58-cae1a17de8ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:03 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000069.scope: Deactivated successfully.
Oct 14 09:16:03 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000069.scope: Consumed 13.216s CPU time.
Oct 14 09:16:03 compute-0 systemd-machined[214636]: Machine qemu-131-instance-00000069 terminated.
Oct 14 09:16:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.037 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fdaa386b-045b-46f5-b7c5-aa77661ba5ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.040 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e49e9118-5f4a-4182-8953-02830729bff2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.067 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc8eba9-653b-46a2-8897-893199a0f2b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.086 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[15b21982-7360-4376-bd70-ea8c7a7fb30d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 27, 'rx_bytes': 1000, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 27, 'rx_bytes': 1000, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363475, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.104 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a5dde9ff-2028-462a-942e-fdeb8130f290]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363476, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363476, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.106 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:16:03 compute-0 nova_compute[259627]: 2025-10-14 09:16:03.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:03 compute-0 nova_compute[259627]: 2025-10-14 09:16:03.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.111 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:16:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.112 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:16:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.112 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:16:03 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.112 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:16:03 compute-0 ceph-mon[74249]: pgmap v1861: 305 pgs: 305 active+clean; 372 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 5.7 MiB/s wr, 449 op/s
Oct 14 09:16:03 compute-0 nova_compute[259627]: 2025-10-14 09:16:03.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:03 compute-0 nova_compute[259627]: 2025-10-14 09:16:03.694 2 INFO nova.virt.libvirt.driver [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Instance shutdown successfully after 13 seconds.
Oct 14 09:16:03 compute-0 nova_compute[259627]: 2025-10-14 09:16:03.702 2 INFO nova.virt.libvirt.driver [-] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Instance destroyed successfully.
Oct 14 09:16:03 compute-0 nova_compute[259627]: 2025-10-14 09:16:03.704 2 DEBUG nova.objects.instance [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'numa_topology' on Instance uuid c28cafe5-40e7-47f9-8793-6193487fccc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:03 compute-0 nova_compute[259627]: 2025-10-14 09:16:03.724 2 DEBUG nova.compute.manager [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:03 compute-0 nova_compute[259627]: 2025-10-14 09:16:03.775 2 DEBUG oslo_concurrency.lockutils [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 372 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.9 MiB/s wr, 290 op/s
Oct 14 09:16:05 compute-0 nova_compute[259627]: 2025-10-14 09:16:05.059 2 DEBUG nova.compute.manager [req-27b00d34-c061-4420-bf2c-4eb38d74dc28 req-ce824c91-b037-457d-9aaa-c7f1732db9c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Received event network-vif-unplugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:16:05 compute-0 sudo[363489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:16:05 compute-0 nova_compute[259627]: 2025-10-14 09:16:05.061 2 DEBUG oslo_concurrency.lockutils [req-27b00d34-c061-4420-bf2c-4eb38d74dc28 req-ce824c91-b037-457d-9aaa-c7f1732db9c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:05 compute-0 nova_compute[259627]: 2025-10-14 09:16:05.062 2 DEBUG oslo_concurrency.lockutils [req-27b00d34-c061-4420-bf2c-4eb38d74dc28 req-ce824c91-b037-457d-9aaa-c7f1732db9c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:05 compute-0 nova_compute[259627]: 2025-10-14 09:16:05.063 2 DEBUG oslo_concurrency.lockutils [req-27b00d34-c061-4420-bf2c-4eb38d74dc28 req-ce824c91-b037-457d-9aaa-c7f1732db9c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:05 compute-0 nova_compute[259627]: 2025-10-14 09:16:05.063 2 DEBUG nova.compute.manager [req-27b00d34-c061-4420-bf2c-4eb38d74dc28 req-ce824c91-b037-457d-9aaa-c7f1732db9c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] No waiting events found dispatching network-vif-unplugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:16:05 compute-0 nova_compute[259627]: 2025-10-14 09:16:05.064 2 WARNING nova.compute.manager [req-27b00d34-c061-4420-bf2c-4eb38d74dc28 req-ce824c91-b037-457d-9aaa-c7f1732db9c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Received unexpected event network-vif-unplugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 for instance with vm_state stopped and task_state None.
Oct 14 09:16:05 compute-0 sudo[363489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:16:05 compute-0 sudo[363489]: pam_unix(sudo:session): session closed for user root
Oct 14 09:16:05 compute-0 sudo[363514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:16:05 compute-0 sudo[363514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:16:05 compute-0 sudo[363514]: pam_unix(sudo:session): session closed for user root
Oct 14 09:16:05 compute-0 ceph-mon[74249]: pgmap v1862: 305 pgs: 305 active+clean; 372 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.9 MiB/s wr, 290 op/s
Oct 14 09:16:05 compute-0 sudo[363539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:16:05 compute-0 sudo[363539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:16:05 compute-0 sudo[363539]: pam_unix(sudo:session): session closed for user root
Oct 14 09:16:05 compute-0 sudo[363564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:16:05 compute-0 sudo[363564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:16:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:16:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2509509054' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:16:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:16:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2509509054' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:16:05 compute-0 sudo[363564]: pam_unix(sudo:session): session closed for user root
Oct 14 09:16:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:16:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:16:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:16:05 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:16:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:16:06 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:16:06 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 35464b58-09f3-4df3-bb0e-00cd93f45fda does not exist
Oct 14 09:16:06 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev ab35fca6-05df-460b-b2db-c9d0589b620c does not exist
Oct 14 09:16:06 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 0784b6c0-3fae-45cd-b51f-062462d6c15a does not exist
Oct 14 09:16:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 374 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.0 MiB/s wr, 294 op/s
Oct 14 09:16:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:16:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:16:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:16:06 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:16:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:16:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:16:06 compute-0 sudo[363621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:16:06 compute-0 sudo[363621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:16:06 compute-0 sudo[363621]: pam_unix(sudo:session): session closed for user root
Oct 14 09:16:06 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 14 09:16:06 compute-0 sudo[363646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:16:06 compute-0 sudo[363646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:16:06 compute-0 sudo[363646]: pam_unix(sudo:session): session closed for user root
Oct 14 09:16:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2509509054' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:16:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2509509054' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:16:06 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:16:06 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:16:06 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:16:06 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:16:06 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:16:06 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:16:06 compute-0 sudo[363671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:16:06 compute-0 sudo[363671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:16:06 compute-0 sudo[363671]: pam_unix(sudo:session): session closed for user root
Oct 14 09:16:06 compute-0 sudo[363696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:16:06 compute-0 sudo[363696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:06.454 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.741 2 DEBUG oslo_concurrency.lockutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "c28cafe5-40e7-47f9-8793-6193487fccc3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.741 2 DEBUG oslo_concurrency.lockutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.741 2 DEBUG oslo_concurrency.lockutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.741 2 DEBUG oslo_concurrency.lockutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.742 2 DEBUG oslo_concurrency.lockutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.742 2 INFO nova.compute.manager [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Terminating instance
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.743 2 DEBUG nova.compute.manager [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.750 2 INFO nova.virt.libvirt.driver [-] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Instance destroyed successfully.
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.751 2 DEBUG nova.objects.instance [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'resources' on Instance uuid c28cafe5-40e7-47f9-8793-6193487fccc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.772 2 DEBUG nova.virt.libvirt.vif [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:15:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-271457736',display_name='tempest-Íñstáñcé-775666780',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-271457736',id=105,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:15:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-tuwn13dt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:16:05Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=c28cafe5-40e7-47f9-8793-6193487fccc3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.773 2 DEBUG nova.network.os_vif_util [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.774 2 DEBUG nova.network.os_vif_util [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:b1:69,bridge_name='br-int',has_traffic_filtering=True,id=114d4e63-ee15-4133-b8bc-9cd2b1861072,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap114d4e63-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.774 2 DEBUG os_vif [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:b1:69,bridge_name='br-int',has_traffic_filtering=True,id=114d4e63-ee15-4133-b8bc-9cd2b1861072,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap114d4e63-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.777 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap114d4e63-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:16:06 compute-0 nova_compute[259627]: 2025-10-14 09:16:06.785 2 INFO os_vif [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:b1:69,bridge_name='br-int',has_traffic_filtering=True,id=114d4e63-ee15-4133-b8bc-9cd2b1861072,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap114d4e63-ee')
Oct 14 09:16:06 compute-0 podman[363760]: 2025-10-14 09:16:06.849210308 +0000 UTC m=+0.057261262 container create 7e785e7910e4eafabc39a0fbac82558734712a9c56985c2dc86370d8a3d749b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_almeida, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 14 09:16:06 compute-0 systemd[1]: Started libpod-conmon-7e785e7910e4eafabc39a0fbac82558734712a9c56985c2dc86370d8a3d749b2.scope.
Oct 14 09:16:06 compute-0 podman[363760]: 2025-10-14 09:16:06.817595139 +0000 UTC m=+0.025646123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:16:06 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:16:06 compute-0 podman[363760]: 2025-10-14 09:16:06.998552129 +0000 UTC m=+0.206603163 container init 7e785e7910e4eafabc39a0fbac82558734712a9c56985c2dc86370d8a3d749b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_almeida, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:16:07 compute-0 podman[363760]: 2025-10-14 09:16:07.009284003 +0000 UTC m=+0.217334967 container start 7e785e7910e4eafabc39a0fbac82558734712a9c56985c2dc86370d8a3d749b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_almeida, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 09:16:07 compute-0 podman[363760]: 2025-10-14 09:16:07.02011345 +0000 UTC m=+0.228164484 container attach 7e785e7910e4eafabc39a0fbac82558734712a9c56985c2dc86370d8a3d749b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 14 09:16:07 compute-0 suspicious_almeida[363795]: 167 167
Oct 14 09:16:07 compute-0 systemd[1]: libpod-7e785e7910e4eafabc39a0fbac82558734712a9c56985c2dc86370d8a3d749b2.scope: Deactivated successfully.
Oct 14 09:16:07 compute-0 podman[363760]: 2025-10-14 09:16:07.023090953 +0000 UTC m=+0.231141897 container died 7e785e7910e4eafabc39a0fbac82558734712a9c56985c2dc86370d8a3d749b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_almeida, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:16:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:07.031 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:07.032 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:07.034 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e7aec122f5704b5bcccfcfd46ff2938f7ca56867b2e1a74ea2b70b8edd07709-merged.mount: Deactivated successfully.
Oct 14 09:16:07 compute-0 podman[363760]: 2025-10-14 09:16:07.078652802 +0000 UTC m=+0.286703746 container remove 7e785e7910e4eafabc39a0fbac82558734712a9c56985c2dc86370d8a3d749b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_almeida, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 09:16:07 compute-0 systemd[1]: libpod-conmon-7e785e7910e4eafabc39a0fbac82558734712a9c56985c2dc86370d8a3d749b2.scope: Deactivated successfully.
Oct 14 09:16:07 compute-0 nova_compute[259627]: 2025-10-14 09:16:07.185 2 DEBUG nova.compute.manager [req-26c6909a-f40f-43a2-9e80-4a04c47f1519 req-21830682-fc80-44a9-bbd4-d776f3bafbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Received event network-vif-plugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:16:07 compute-0 nova_compute[259627]: 2025-10-14 09:16:07.186 2 DEBUG oslo_concurrency.lockutils [req-26c6909a-f40f-43a2-9e80-4a04c47f1519 req-21830682-fc80-44a9-bbd4-d776f3bafbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:07 compute-0 nova_compute[259627]: 2025-10-14 09:16:07.186 2 DEBUG oslo_concurrency.lockutils [req-26c6909a-f40f-43a2-9e80-4a04c47f1519 req-21830682-fc80-44a9-bbd4-d776f3bafbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:07 compute-0 nova_compute[259627]: 2025-10-14 09:16:07.187 2 DEBUG oslo_concurrency.lockutils [req-26c6909a-f40f-43a2-9e80-4a04c47f1519 req-21830682-fc80-44a9-bbd4-d776f3bafbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:07 compute-0 nova_compute[259627]: 2025-10-14 09:16:07.187 2 DEBUG nova.compute.manager [req-26c6909a-f40f-43a2-9e80-4a04c47f1519 req-21830682-fc80-44a9-bbd4-d776f3bafbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] No waiting events found dispatching network-vif-plugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:16:07 compute-0 nova_compute[259627]: 2025-10-14 09:16:07.188 2 WARNING nova.compute.manager [req-26c6909a-f40f-43a2-9e80-4a04c47f1519 req-21830682-fc80-44a9-bbd4-d776f3bafbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Received unexpected event network-vif-plugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 for instance with vm_state stopped and task_state deleting.
Oct 14 09:16:07 compute-0 ceph-mon[74249]: pgmap v1863: 305 pgs: 305 active+clean; 374 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.0 MiB/s wr, 294 op/s
Oct 14 09:16:07 compute-0 podman[363819]: 2025-10-14 09:16:07.28753696 +0000 UTC m=+0.058052322 container create 09f061e3f235550d08b1da0c3595454c8901ae60b7b53a088c9fb1026e0eed81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goldstine, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:16:07 compute-0 systemd[1]: Started libpod-conmon-09f061e3f235550d08b1da0c3595454c8901ae60b7b53a088c9fb1026e0eed81.scope.
Oct 14 09:16:07 compute-0 podman[363819]: 2025-10-14 09:16:07.265009335 +0000 UTC m=+0.035524717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:16:07 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18e4485de7e2fc8f27de4ca2d9315b1b8f6b55307925d110a4d7d753f4729a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18e4485de7e2fc8f27de4ca2d9315b1b8f6b55307925d110a4d7d753f4729a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18e4485de7e2fc8f27de4ca2d9315b1b8f6b55307925d110a4d7d753f4729a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18e4485de7e2fc8f27de4ca2d9315b1b8f6b55307925d110a4d7d753f4729a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18e4485de7e2fc8f27de4ca2d9315b1b8f6b55307925d110a4d7d753f4729a0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:16:07 compute-0 podman[363819]: 2025-10-14 09:16:07.387658848 +0000 UTC m=+0.158174210 container init 09f061e3f235550d08b1da0c3595454c8901ae60b7b53a088c9fb1026e0eed81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goldstine, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:16:07 compute-0 podman[363819]: 2025-10-14 09:16:07.396491945 +0000 UTC m=+0.167007317 container start 09f061e3f235550d08b1da0c3595454c8901ae60b7b53a088c9fb1026e0eed81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:16:07 compute-0 podman[363819]: 2025-10-14 09:16:07.400572726 +0000 UTC m=+0.171088088 container attach 09f061e3f235550d08b1da0c3595454c8901ae60b7b53a088c9fb1026e0eed81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 09:16:07 compute-0 nova_compute[259627]: 2025-10-14 09:16:07.440 2 INFO nova.virt.libvirt.driver [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Deleting instance files /var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3_del
Oct 14 09:16:07 compute-0 nova_compute[259627]: 2025-10-14 09:16:07.441 2 INFO nova.virt.libvirt.driver [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Deletion of /var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3_del complete
Oct 14 09:16:07 compute-0 nova_compute[259627]: 2025-10-14 09:16:07.499 2 INFO nova.compute.manager [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Took 0.76 seconds to destroy the instance on the hypervisor.
Oct 14 09:16:07 compute-0 nova_compute[259627]: 2025-10-14 09:16:07.499 2 DEBUG oslo.service.loopingcall [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:16:07 compute-0 nova_compute[259627]: 2025-10-14 09:16:07.500 2 DEBUG nova.compute.manager [-] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:16:07 compute-0 nova_compute[259627]: 2025-10-14 09:16:07.500 2 DEBUG nova.network.neutron [-] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:16:07 compute-0 nova_compute[259627]: 2025-10-14 09:16:07.649 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 09:16:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:16:07 compute-0 nova_compute[259627]: 2025-10-14 09:16:07.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 374 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 220 op/s
Oct 14 09:16:08 compute-0 nova_compute[259627]: 2025-10-14 09:16:08.266 2 INFO nova.compute.manager [None req-03568d0e-f610-49cd-8eb8-f36caa06d433 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Get console output
Oct 14 09:16:08 compute-0 nova_compute[259627]: 2025-10-14 09:16:08.278 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:16:08 compute-0 nova_compute[259627]: 2025-10-14 09:16:08.413 2 DEBUG nova.network.neutron [-] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:16:08 compute-0 nova_compute[259627]: 2025-10-14 09:16:08.440 2 INFO nova.compute.manager [-] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Took 0.94 seconds to deallocate network for instance.
Oct 14 09:16:08 compute-0 zealous_goldstine[363835]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:16:08 compute-0 zealous_goldstine[363835]: --> relative data size: 1.0
Oct 14 09:16:08 compute-0 zealous_goldstine[363835]: --> All data devices are unavailable
Oct 14 09:16:08 compute-0 nova_compute[259627]: 2025-10-14 09:16:08.501 2 DEBUG oslo_concurrency.lockutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:08 compute-0 nova_compute[259627]: 2025-10-14 09:16:08.502 2 DEBUG oslo_concurrency.lockutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:08 compute-0 systemd[1]: libpod-09f061e3f235550d08b1da0c3595454c8901ae60b7b53a088c9fb1026e0eed81.scope: Deactivated successfully.
Oct 14 09:16:08 compute-0 podman[363819]: 2025-10-14 09:16:08.522747544 +0000 UTC m=+1.293262916 container died 09f061e3f235550d08b1da0c3595454c8901ae60b7b53a088c9fb1026e0eed81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goldstine, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:16:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d18e4485de7e2fc8f27de4ca2d9315b1b8f6b55307925d110a4d7d753f4729a0-merged.mount: Deactivated successfully.
Oct 14 09:16:08 compute-0 nova_compute[259627]: 2025-10-14 09:16:08.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:08 compute-0 podman[363819]: 2025-10-14 09:16:08.597433485 +0000 UTC m=+1.367948857 container remove 09f061e3f235550d08b1da0c3595454c8901ae60b7b53a088c9fb1026e0eed81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goldstine, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:16:08 compute-0 systemd[1]: libpod-conmon-09f061e3f235550d08b1da0c3595454c8901ae60b7b53a088c9fb1026e0eed81.scope: Deactivated successfully.
Oct 14 09:16:08 compute-0 nova_compute[259627]: 2025-10-14 09:16:08.619 2 DEBUG oslo_concurrency.processutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:08 compute-0 sudo[363696]: pam_unix(sudo:session): session closed for user root
Oct 14 09:16:08 compute-0 sudo[363876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:16:08 compute-0 sudo[363876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:16:08 compute-0 sudo[363876]: pam_unix(sudo:session): session closed for user root
Oct 14 09:16:08 compute-0 sudo[363902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:16:08 compute-0 sudo[363902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:16:08 compute-0 sudo[363902]: pam_unix(sudo:session): session closed for user root
Oct 14 09:16:08 compute-0 sudo[363927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:16:08 compute-0 sudo[363927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:16:08 compute-0 sudo[363927]: pam_unix(sudo:session): session closed for user root
Oct 14 09:16:08 compute-0 sudo[363971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:16:08 compute-0 sudo[363971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:16:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:16:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/640539548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.139 2 DEBUG oslo_concurrency.processutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.148 2 DEBUG nova.compute.provider_tree [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.186 2 DEBUG nova.scheduler.client.report [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.190 2 DEBUG oslo_concurrency.lockutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.190 2 DEBUG oslo_concurrency.lockutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.192 2 DEBUG oslo_concurrency.lockutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.193 2 DEBUG oslo_concurrency.lockutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.193 2 DEBUG oslo_concurrency.lockutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.194 2 INFO nova.compute.manager [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Terminating instance
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.195 2 DEBUG nova.compute.manager [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.212 2 DEBUG oslo_concurrency.lockutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:09 compute-0 ceph-mon[74249]: pgmap v1864: 305 pgs: 305 active+clean; 374 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 220 op/s
Oct 14 09:16:09 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/640539548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:09 compute-0 kernel: tap7e42bf44-c1 (unregistering): left promiscuous mode
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.245 2 INFO nova.scheduler.client.report [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Deleted allocations for instance c28cafe5-40e7-47f9-8793-6193487fccc3
Oct 14 09:16:09 compute-0 NetworkManager[44885]: <info>  [1760433369.2561] device (tap7e42bf44-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:09 compute-0 ovn_controller[152662]: 2025-10-14T09:16:09Z|01131|binding|INFO|Releasing lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 from this chassis (sb_readonly=0)
Oct 14 09:16:09 compute-0 ovn_controller[152662]: 2025-10-14T09:16:09Z|01132|binding|INFO|Setting lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 down in Southbound
Oct 14 09:16:09 compute-0 ovn_controller[152662]: 2025-10-14T09:16:09Z|01133|binding|INFO|Removing iface tap7e42bf44-c1 ovn-installed in OVS
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.276 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:8d:8f 10.100.0.4'], port_security=['fa:16:3e:20:8d:8f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '33969555-fe06-4613-b244-d03c9b4180ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7fb2d92f-9be1-4133-9fba-da943dad4162', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10a0b33e-96f5-46d7-a240-9f59c55a6b07, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7e42bf44-c1f8-49df-bd5f-a26abe43a832) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:16:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.277 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7e42bf44-c1f8-49df-bd5f-a26abe43a832 in datapath 7f4225de-9f3f-48e2-bad7-a89cf4884a2e unbound from our chassis
Oct 14 09:16:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.278 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7f4225de-9f3f-48e2-bad7-a89cf4884a2e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:16:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.280 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a14f8b-be06-465a-956d-3e61cd4bc267]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.281 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e namespace which is not needed anymore
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.285 2 DEBUG nova.compute.manager [req-be2719a2-8ae5-43ba-ae05-1ca8cd89a7bf req-4d46cc0e-9890-478e-bc0f-79f14eeaeb83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-changed-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.285 2 DEBUG nova.compute.manager [req-be2719a2-8ae5-43ba-ae05-1ca8cd89a7bf req-4d46cc0e-9890-478e-bc0f-79f14eeaeb83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Refreshing instance network info cache due to event network-changed-7e42bf44-c1f8-49df-bd5f-a26abe43a832. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.285 2 DEBUG oslo_concurrency.lockutils [req-be2719a2-8ae5-43ba-ae05-1ca8cd89a7bf req-4d46cc0e-9890-478e-bc0f-79f14eeaeb83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.285 2 DEBUG oslo_concurrency.lockutils [req-be2719a2-8ae5-43ba-ae05-1ca8cd89a7bf req-4d46cc0e-9890-478e-bc0f-79f14eeaeb83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.285 2 DEBUG nova.network.neutron [req-be2719a2-8ae5-43ba-ae05-1ca8cd89a7bf req-4d46cc0e-9890-478e-bc0f-79f14eeaeb83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Refreshing network info cache for port 7e42bf44-c1f8-49df-bd5f-a26abe43a832 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:09 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000067.scope: Deactivated successfully.
Oct 14 09:16:09 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000067.scope: Consumed 13.392s CPU time.
Oct 14 09:16:09 compute-0 systemd-machined[214636]: Machine qemu-132-instance-00000067 terminated.
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.348 2 DEBUG oslo_concurrency.lockutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:09 compute-0 podman[364044]: 2025-10-14 09:16:09.375866641 +0000 UTC m=+0.052370172 container create 99fa9f4b4b5b22257633cb0bcda20ea78c621e81caff8d6b567099727d907064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 14 09:16:09 compute-0 systemd[1]: Started libpod-conmon-99fa9f4b4b5b22257633cb0bcda20ea78c621e81caff8d6b567099727d907064.scope.
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.436 2 INFO nova.virt.libvirt.driver [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance destroyed successfully.
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.437 2 DEBUG nova.objects.instance [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'resources' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:09 compute-0 podman[364044]: 2025-10-14 09:16:09.35595426 +0000 UTC m=+0.032457811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.451 2 DEBUG nova.compute.manager [req-ada7b3b5-4683-45ed-a649-573f3ff49630 req-40a78d58-a54e-4d36-9fdb-77d0f9ea164b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Received event network-vif-deleted-114d4e63-ee15-4133-b8bc-9cd2b1861072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.453 2 DEBUG nova.virt.libvirt.vif [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-383304321',display_name='tempest-TestNetworkAdvancedServerOps-server-383304321',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-383304321',id=103,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNrPP24QakCw1fsyp4n1/agXWsNdRHbY8VHQeER4RKvAc+m7J2Hp3JOcegzdlhA2wYneNiK6O1lPBfYB/HncbHtqlMdS+KcigG2/AdZVDhSvn9p2YfNQ60fauvNg9v/ylQ==',key_name='tempest-TestNetworkAdvancedServerOps-1100962189',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:15:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-xg2zg09k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:15:48Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=33969555-fe06-4613-b244-d03c9b4180ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.453 2 DEBUG nova.network.os_vif_util [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.454 2 DEBUG nova.network.os_vif_util [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.454 2 DEBUG os_vif [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.456 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e42bf44-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:16:09 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:16:09 compute-0 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[362597]: [NOTICE]   (362601) : haproxy version is 2.8.14-c23fe91
Oct 14 09:16:09 compute-0 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[362597]: [NOTICE]   (362601) : path to executable is /usr/sbin/haproxy
Oct 14 09:16:09 compute-0 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[362597]: [WARNING]  (362601) : Exiting Master process...
Oct 14 09:16:09 compute-0 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[362597]: [WARNING]  (362601) : Exiting Master process...
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:16:09 compute-0 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[362597]: [ALERT]    (362601) : Current worker (362603) exited with code 143 (Terminated)
Oct 14 09:16:09 compute-0 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[362597]: [WARNING]  (362601) : All workers exited. Exiting... (0)
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.469 2 INFO os_vif [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1')
Oct 14 09:16:09 compute-0 systemd[1]: libpod-a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6.scope: Deactivated successfully.
Oct 14 09:16:09 compute-0 podman[364077]: 2025-10-14 09:16:09.478998993 +0000 UTC m=+0.055482989 container died a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 09:16:09 compute-0 podman[364044]: 2025-10-14 09:16:09.492629568 +0000 UTC m=+0.169133109 container init 99fa9f4b4b5b22257633cb0bcda20ea78c621e81caff8d6b567099727d907064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 09:16:09 compute-0 podman[364044]: 2025-10-14 09:16:09.503712882 +0000 UTC m=+0.180216403 container start 99fa9f4b4b5b22257633cb0bcda20ea78c621e81caff8d6b567099727d907064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 14 09:16:09 compute-0 podman[364044]: 2025-10-14 09:16:09.507650409 +0000 UTC m=+0.184153960 container attach 99fa9f4b4b5b22257633cb0bcda20ea78c621e81caff8d6b567099727d907064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 09:16:09 compute-0 agitated_lumiere[364089]: 167 167
Oct 14 09:16:09 compute-0 systemd[1]: libpod-99fa9f4b4b5b22257633cb0bcda20ea78c621e81caff8d6b567099727d907064.scope: Deactivated successfully.
Oct 14 09:16:09 compute-0 podman[364044]: 2025-10-14 09:16:09.509502154 +0000 UTC m=+0.186005695 container died 99fa9f4b4b5b22257633cb0bcda20ea78c621e81caff8d6b567099727d907064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 09:16:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6-userdata-shm.mount: Deactivated successfully.
Oct 14 09:16:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-88b20c8a5752e1d1bec1c2d117ad2bfbfb3601e7200bb4fc5f4f5f26fbe83704-merged.mount: Deactivated successfully.
Oct 14 09:16:09 compute-0 podman[364077]: 2025-10-14 09:16:09.531127377 +0000 UTC m=+0.107611363 container cleanup a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 09:16:09 compute-0 systemd[1]: libpod-conmon-a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6.scope: Deactivated successfully.
Oct 14 09:16:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f90829a5cc4deb50936b244f5ae36ea81f15dd759d405ef2104a58eca9f3cbfc-merged.mount: Deactivated successfully.
Oct 14 09:16:09 compute-0 podman[364044]: 2025-10-14 09:16:09.563804063 +0000 UTC m=+0.240307594 container remove 99fa9f4b4b5b22257633cb0bcda20ea78c621e81caff8d6b567099727d907064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 09:16:09 compute-0 systemd[1]: libpod-conmon-99fa9f4b4b5b22257633cb0bcda20ea78c621e81caff8d6b567099727d907064.scope: Deactivated successfully.
Oct 14 09:16:09 compute-0 podman[364142]: 2025-10-14 09:16:09.61561604 +0000 UTC m=+0.054152996 container remove a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:16:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.621 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[82f4ef9f-0a11-4510-8964-67d7c5f0c0f4]: (4, ('Tue Oct 14 09:16:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e (a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6)\na5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6\nTue Oct 14 09:16:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e (a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6)\na5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.626 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[657218a5-7b60-4394-96db-c9a78445c33b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.627 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f4225de-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:09 compute-0 kernel: tap7f4225de-90: left promiscuous mode
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.642 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0d24820b-8527-4ffe-a752-e48a21283262]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.658 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3d7349-ad40-4e8e-9600-58256dbcde72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.661 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7b016a66-17ff-49ed-9652-052cc6440339]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.682 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6f39f819-f8d6-42ef-a1ca-1f8c1b13514e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712526, 'reachable_time': 18536, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364162, 'error': None, 'target': 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d7f4225de\x2d9f3f\x2d48e2\x2dbad7\x2da89cf4884a2e.mount: Deactivated successfully.
Oct 14 09:16:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.686 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:16:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.686 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[c225fc9a-14a3-45a9-8159-9b59be96a3df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:09 compute-0 podman[364168]: 2025-10-14 09:16:09.753234982 +0000 UTC m=+0.040442038 container create 8b5869d2df9b6eec17d5aa1cd7dc7bee5877a5a6223220364ed31fd46d234f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_elgamal, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:16:09 compute-0 systemd[1]: Started libpod-conmon-8b5869d2df9b6eec17d5aa1cd7dc7bee5877a5a6223220364ed31fd46d234f97.scope.
Oct 14 09:16:09 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30efc8d03c4ef01360b89bb30ccbe8b8c0e8b96d0814f1b8bfe870e0b8a513ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30efc8d03c4ef01360b89bb30ccbe8b8c0e8b96d0814f1b8bfe870e0b8a513ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:16:09 compute-0 podman[364168]: 2025-10-14 09:16:09.738381215 +0000 UTC m=+0.025588291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30efc8d03c4ef01360b89bb30ccbe8b8c0e8b96d0814f1b8bfe870e0b8a513ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30efc8d03c4ef01360b89bb30ccbe8b8c0e8b96d0814f1b8bfe870e0b8a513ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:16:09 compute-0 podman[364168]: 2025-10-14 09:16:09.864313579 +0000 UTC m=+0.151520655 container init 8b5869d2df9b6eec17d5aa1cd7dc7bee5877a5a6223220364ed31fd46d234f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_elgamal, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:16:09 compute-0 podman[364168]: 2025-10-14 09:16:09.874541881 +0000 UTC m=+0.161748937 container start 8b5869d2df9b6eec17d5aa1cd7dc7bee5877a5a6223220364ed31fd46d234f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_elgamal, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 09:16:09 compute-0 podman[364168]: 2025-10-14 09:16:09.878593321 +0000 UTC m=+0.165800407 container attach 8b5869d2df9b6eec17d5aa1cd7dc7bee5877a5a6223220364ed31fd46d234f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.923 2 INFO nova.virt.libvirt.driver [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Deleting instance files /var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba_del
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.925 2 INFO nova.virt.libvirt.driver [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Deletion of /var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba_del complete
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.982 2 INFO nova.compute.manager [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.983 2 DEBUG oslo.service.loopingcall [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.983 2 DEBUG nova.compute.manager [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:16:09 compute-0 nova_compute[259627]: 2025-10-14 09:16:09.983 2 DEBUG nova.network.neutron [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:16:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 374 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 220 op/s
Oct 14 09:16:10 compute-0 nova_compute[259627]: 2025-10-14 09:16:10.593 2 DEBUG nova.network.neutron [req-be2719a2-8ae5-43ba-ae05-1ca8cd89a7bf req-4d46cc0e-9890-478e-bc0f-79f14eeaeb83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Updated VIF entry in instance network info cache for port 7e42bf44-c1f8-49df-bd5f-a26abe43a832. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:16:10 compute-0 nova_compute[259627]: 2025-10-14 09:16:10.594 2 DEBUG nova.network.neutron [req-be2719a2-8ae5-43ba-ae05-1ca8cd89a7bf req-4d46cc0e-9890-478e-bc0f-79f14eeaeb83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Updating instance_info_cache with network_info: [{"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]: {
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:     "0": [
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:         {
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "devices": [
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "/dev/loop3"
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             ],
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "lv_name": "ceph_lv0",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "lv_size": "21470642176",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "name": "ceph_lv0",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "tags": {
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.cluster_name": "ceph",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.crush_device_class": "",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.encrypted": "0",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.osd_id": "0",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.type": "block",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.vdo": "0"
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             },
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "type": "block",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "vg_name": "ceph_vg0"
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:         }
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:     ],
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:     "1": [
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:         {
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "devices": [
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "/dev/loop4"
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             ],
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "lv_name": "ceph_lv1",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "lv_size": "21470642176",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "name": "ceph_lv1",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "tags": {
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.cluster_name": "ceph",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.crush_device_class": "",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.encrypted": "0",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.osd_id": "1",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.type": "block",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.vdo": "0"
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             },
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "type": "block",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "vg_name": "ceph_vg1"
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:         }
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:     ],
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:     "2": [
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:         {
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "devices": [
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "/dev/loop5"
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             ],
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "lv_name": "ceph_lv2",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "lv_size": "21470642176",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "name": "ceph_lv2",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "tags": {
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.cluster_name": "ceph",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.crush_device_class": "",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.encrypted": "0",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.osd_id": "2",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.type": "block",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:                 "ceph.vdo": "0"
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             },
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "type": "block",
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:             "vg_name": "ceph_vg2"
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:         }
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]:     ]
Oct 14 09:16:10 compute-0 lucid_elgamal[364185]: }
Oct 14 09:16:10 compute-0 nova_compute[259627]: 2025-10-14 09:16:10.614 2 DEBUG oslo_concurrency.lockutils [req-be2719a2-8ae5-43ba-ae05-1ca8cd89a7bf req-4d46cc0e-9890-478e-bc0f-79f14eeaeb83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:16:10 compute-0 systemd[1]: libpod-8b5869d2df9b6eec17d5aa1cd7dc7bee5877a5a6223220364ed31fd46d234f97.scope: Deactivated successfully.
Oct 14 09:16:10 compute-0 podman[364168]: 2025-10-14 09:16:10.630844552 +0000 UTC m=+0.918051618 container died 8b5869d2df9b6eec17d5aa1cd7dc7bee5877a5a6223220364ed31fd46d234f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 09:16:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-30efc8d03c4ef01360b89bb30ccbe8b8c0e8b96d0814f1b8bfe870e0b8a513ab-merged.mount: Deactivated successfully.
Oct 14 09:16:10 compute-0 podman[364168]: 2025-10-14 09:16:10.701210845 +0000 UTC m=+0.988417891 container remove 8b5869d2df9b6eec17d5aa1cd7dc7bee5877a5a6223220364ed31fd46d234f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_elgamal, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:16:10 compute-0 systemd[1]: libpod-conmon-8b5869d2df9b6eec17d5aa1cd7dc7bee5877a5a6223220364ed31fd46d234f97.scope: Deactivated successfully.
Oct 14 09:16:10 compute-0 sudo[363971]: pam_unix(sudo:session): session closed for user root
Oct 14 09:16:10 compute-0 sudo[364205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:16:10 compute-0 sudo[364205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:16:10 compute-0 sudo[364205]: pam_unix(sudo:session): session closed for user root
Oct 14 09:16:10 compute-0 sudo[364230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:16:10 compute-0 sudo[364230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:16:10 compute-0 sudo[364230]: pam_unix(sudo:session): session closed for user root
Oct 14 09:16:10 compute-0 nova_compute[259627]: 2025-10-14 09:16:10.924 2 DEBUG nova.network.neutron [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:16:10 compute-0 sudo[364255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:16:10 compute-0 nova_compute[259627]: 2025-10-14 09:16:10.945 2 INFO nova.compute.manager [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Took 0.96 seconds to deallocate network for instance.
Oct 14 09:16:10 compute-0 sudo[364255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:16:10 compute-0 sudo[364255]: pam_unix(sudo:session): session closed for user root
Oct 14 09:16:10 compute-0 nova_compute[259627]: 2025-10-14 09:16:10.988 2 DEBUG oslo_concurrency.lockutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:10 compute-0 nova_compute[259627]: 2025-10-14 09:16:10.989 2 DEBUG oslo_concurrency.lockutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:11 compute-0 sudo[364280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:16:11 compute-0 sudo[364280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.119 2 DEBUG oslo_concurrency.processutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.155 2 DEBUG oslo_concurrency.lockutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.156 2 DEBUG oslo_concurrency.lockutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.156 2 DEBUG oslo_concurrency.lockutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.157 2 DEBUG oslo_concurrency.lockutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.157 2 DEBUG oslo_concurrency.lockutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.158 2 INFO nova.compute.manager [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Terminating instance
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.159 2 DEBUG nova.compute.manager [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:16:11 compute-0 ovn_controller[152662]: 2025-10-14T09:16:11Z|01134|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:11 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Oct 14 09:16:11 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006b.scope: Consumed 13.642s CPU time.
Oct 14 09:16:11 compute-0 kernel: tap05e92470-26 (unregistering): left promiscuous mode
Oct 14 09:16:11 compute-0 NetworkManager[44885]: <info>  [1760433371.2255] device (tap05e92470-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:16:11 compute-0 systemd-machined[214636]: Machine qemu-134-instance-0000006b terminated.
Oct 14 09:16:11 compute-0 ceph-mon[74249]: pgmap v1865: 305 pgs: 305 active+clean; 374 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 220 op/s
Oct 14 09:16:11 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Oct 14 09:16:11 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005f.scope: Consumed 18.777s CPU time.
Oct 14 09:16:11 compute-0 systemd-machined[214636]: Machine qemu-117-instance-0000005f terminated.
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:11 compute-0 ovn_controller[152662]: 2025-10-14T09:16:11Z|01135|binding|INFO|Releasing lport 05e92470-2658-4ea2-9c44-e91cd5226905 from this chassis (sb_readonly=0)
Oct 14 09:16:11 compute-0 ovn_controller[152662]: 2025-10-14T09:16:11Z|01136|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 09:16:11 compute-0 ovn_controller[152662]: 2025-10-14T09:16:11Z|01137|binding|INFO|Removing iface tap05e92470-26 ovn-installed in OVS
Oct 14 09:16:11 compute-0 ovn_controller[152662]: 2025-10-14T09:16:11Z|01138|binding|INFO|Setting lport 05e92470-2658-4ea2-9c44-e91cd5226905 down in Southbound
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.357 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:fb:45 10.100.0.13'], port_security=['fa:16:3e:7f:fb:45 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd46b6953-9413-4e6a-94f7-7b5ac9634c16', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=05e92470-2658-4ea2-9c44-e91cd5226905) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:16:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.358 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 05e92470-2658-4ea2-9c44-e91cd5226905 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e unbound from our chassis
Oct 14 09:16:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.359 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbecee11-4892-4e36-88d8-98879af7bb1e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:16:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.360 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ececd49a-be6f-4914-8d10-1d9e99e49146]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.360 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e namespace which is not needed anymore
Oct 14 09:16:11 compute-0 NetworkManager[44885]: <info>  [1760433371.3889] manager: (tap05e92470-26): new Tun device (/org/freedesktop/NetworkManager/Devices/455)
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.420 2 DEBUG nova.compute.manager [req-cc6fd093-4331-4d6f-b1b3-f26bf3df2216 req-c5eaa625-7606-427b-963f-eab976921eb5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-vif-deleted-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:16:11 compute-0 podman[364367]: 2025-10-14 09:16:11.421133989 +0000 UTC m=+0.052720051 container create 4c9436b5597d0b4ea3231b0748bd73f72bc509b54e57826334ca8aa9d81222b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_babbage, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.428 2 INFO nova.virt.libvirt.driver [-] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Instance destroyed successfully.
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.428 2 DEBUG nova.objects.instance [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'resources' on Instance uuid d46b6953-9413-4e6a-94f7-7b5ac9634c16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.439 2 DEBUG nova.virt.libvirt.vif [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:13:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-1224416488',display_name='tempest-₡-1224416488',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1224416488',id=95,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:13:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-c5eeedgr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:13:48Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=d46b6953-9413-4e6a-94f7-7b5ac9634c16,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.440 2 DEBUG nova.network.os_vif_util [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.441 2 DEBUG nova.network.os_vif_util [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:fb:45,bridge_name='br-int',has_traffic_filtering=True,id=05e92470-2658-4ea2-9c44-e91cd5226905,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e92470-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.441 2 DEBUG os_vif [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:fb:45,bridge_name='br-int',has_traffic_filtering=True,id=05e92470-2658-4ea2-9c44-e91cd5226905,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e92470-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.445 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05e92470-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.450 2 INFO os_vif [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:fb:45,bridge_name='br-int',has_traffic_filtering=True,id=05e92470-2658-4ea2-9c44-e91cd5226905,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e92470-26')
Oct 14 09:16:11 compute-0 systemd[1]: Started libpod-conmon-4c9436b5597d0b4ea3231b0748bd73f72bc509b54e57826334ca8aa9d81222b5.scope.
Oct 14 09:16:11 compute-0 podman[364367]: 2025-10-14 09:16:11.397711081 +0000 UTC m=+0.029297153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:16:11 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:16:11 compute-0 neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e[353599]: [NOTICE]   (353603) : haproxy version is 2.8.14-c23fe91
Oct 14 09:16:11 compute-0 neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e[353599]: [NOTICE]   (353603) : path to executable is /usr/sbin/haproxy
Oct 14 09:16:11 compute-0 neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e[353599]: [WARNING]  (353603) : Exiting Master process...
Oct 14 09:16:11 compute-0 neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e[353599]: [WARNING]  (353603) : Exiting Master process...
Oct 14 09:16:11 compute-0 podman[364367]: 2025-10-14 09:16:11.516762116 +0000 UTC m=+0.148348178 container init 4c9436b5597d0b4ea3231b0748bd73f72bc509b54e57826334ca8aa9d81222b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_babbage, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:16:11 compute-0 neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e[353599]: [ALERT]    (353603) : Current worker (353605) exited with code 143 (Terminated)
Oct 14 09:16:11 compute-0 neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e[353599]: [WARNING]  (353603) : All workers exited. Exiting... (0)
Oct 14 09:16:11 compute-0 systemd[1]: libpod-5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d.scope: Deactivated successfully.
Oct 14 09:16:11 compute-0 podman[364367]: 2025-10-14 09:16:11.524314442 +0000 UTC m=+0.155900484 container start 4c9436b5597d0b4ea3231b0748bd73f72bc509b54e57826334ca8aa9d81222b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_babbage, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Oct 14 09:16:11 compute-0 podman[364367]: 2025-10-14 09:16:11.527771757 +0000 UTC m=+0.159357799 container attach 4c9436b5597d0b4ea3231b0748bd73f72bc509b54e57826334ca8aa9d81222b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_babbage, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:16:11 compute-0 podman[364421]: 2025-10-14 09:16:11.527978232 +0000 UTC m=+0.048637580 container died 5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:16:11 compute-0 hardcore_babbage[364422]: 167 167
Oct 14 09:16:11 compute-0 podman[364367]: 2025-10-14 09:16:11.537756863 +0000 UTC m=+0.169342895 container died 4c9436b5597d0b4ea3231b0748bd73f72bc509b54e57826334ca8aa9d81222b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:16:11 compute-0 systemd[1]: libpod-4c9436b5597d0b4ea3231b0748bd73f72bc509b54e57826334ca8aa9d81222b5.scope: Deactivated successfully.
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.541 2 DEBUG nova.compute.manager [req-a63d84e4-2fc7-4b7e-848e-048c6a543d52 req-6e76a0d2-57b4-40e0-af0b-23aceb4a8071 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.542 2 DEBUG oslo_concurrency.lockutils [req-a63d84e4-2fc7-4b7e-848e-048c6a543d52 req-6e76a0d2-57b4-40e0-af0b-23aceb4a8071 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.542 2 DEBUG oslo_concurrency.lockutils [req-a63d84e4-2fc7-4b7e-848e-048c6a543d52 req-6e76a0d2-57b4-40e0-af0b-23aceb4a8071 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.542 2 DEBUG oslo_concurrency.lockutils [req-a63d84e4-2fc7-4b7e-848e-048c6a543d52 req-6e76a0d2-57b4-40e0-af0b-23aceb4a8071 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.542 2 DEBUG nova.compute.manager [req-a63d84e4-2fc7-4b7e-848e-048c6a543d52 req-6e76a0d2-57b4-40e0-af0b-23aceb4a8071 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] No waiting events found dispatching network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.543 2 WARNING nova.compute.manager [req-a63d84e4-2fc7-4b7e-848e-048c6a543d52 req-6e76a0d2-57b4-40e0-af0b-23aceb4a8071 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received unexpected event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 for instance with vm_state deleted and task_state None.
Oct 14 09:16:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d-userdata-shm.mount: Deactivated successfully.
Oct 14 09:16:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b011f8a1370d78d7a628d9aaf1bebb9c89844fdc71c852fd6c3b4b8bc5c51cd-merged.mount: Deactivated successfully.
Oct 14 09:16:11 compute-0 podman[364421]: 2025-10-14 09:16:11.565112617 +0000 UTC m=+0.085771955 container cleanup 5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:16:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:16:11 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1089509023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c513db856503f7dc9d7ba243220ddbc6737731744ecb4a1e7efb0c2d236e0de-merged.mount: Deactivated successfully.
Oct 14 09:16:11 compute-0 podman[364367]: 2025-10-14 09:16:11.591869227 +0000 UTC m=+0.223455279 container remove 4c9436b5597d0b4ea3231b0748bd73f72bc509b54e57826334ca8aa9d81222b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_babbage, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.594 2 DEBUG oslo_concurrency.processutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.603 2 DEBUG nova.compute.provider_tree [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:16:11 compute-0 systemd[1]: libpod-conmon-4c9436b5597d0b4ea3231b0748bd73f72bc509b54e57826334ca8aa9d81222b5.scope: Deactivated successfully.
Oct 14 09:16:11 compute-0 systemd[1]: libpod-conmon-5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d.scope: Deactivated successfully.
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.619 2 DEBUG nova.scheduler.client.report [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.643 2 DEBUG oslo_concurrency.lockutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:11 compute-0 podman[364473]: 2025-10-14 09:16:11.654413968 +0000 UTC m=+0.064111101 container remove 5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:16:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.660 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b47cd716-e3be-42c8-b05e-066b630fa218]: (4, ('Tue Oct 14 09:16:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e (5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d)\n5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d\nTue Oct 14 09:16:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e (5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d)\n5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.661 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a85ec095-3832-4293-abce-89058f78404b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.662 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.696 2 INFO nova.scheduler.client.report [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Deleted allocations for instance 33969555-fe06-4613-b244-d03c9b4180ba
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.705 2 INFO nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance shutdown successfully after 14 seconds.
Oct 14 09:16:11 compute-0 kernel: tapfbecee11-40: left promiscuous mode
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.715 2 INFO nova.virt.libvirt.driver [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance destroyed successfully.
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.725 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcf835d-7291-43b7-bd11-d884ea80aba1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.728 2 INFO nova.virt.libvirt.driver [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance destroyed successfully.
Oct 14 09:16:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.743 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[697ac0f8-3d74-4cd6-b9cd-3277873391b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.744 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5d4d8a4d-c5e1-4b76-a7f5-238a870551f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.763 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[37c2959a-ffa7-4462-ae02-1c331a5bbc0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700068, 'reachable_time': 41015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364523, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:11 compute-0 systemd[1]: run-netns-ovnmeta\x2dfbecee11\x2d4892\x2d4e36\x2d88d8\x2d98879af7bb1e.mount: Deactivated successfully.
Oct 14 09:16:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.766 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:16:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.766 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[58cf4675-4d31-4638-966e-490e02b8c127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:11 compute-0 podman[364497]: 2025-10-14 09:16:11.775328058 +0000 UTC m=+0.048598288 container create da1cc91af64e081590c63b4d278cb012722bc60083ee1607004a476ed2639fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.790 2 DEBUG oslo_concurrency.lockutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:11 compute-0 systemd[1]: Started libpod-conmon-da1cc91af64e081590c63b4d278cb012722bc60083ee1607004a476ed2639fdd.scope.
Oct 14 09:16:11 compute-0 podman[364497]: 2025-10-14 09:16:11.754894075 +0000 UTC m=+0.028164335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:16:11 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9aaa93075a585c8e53b24de7a5b023e669efcca5cb5edae4c7a8a51c61e00b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9aaa93075a585c8e53b24de7a5b023e669efcca5cb5edae4c7a8a51c61e00b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9aaa93075a585c8e53b24de7a5b023e669efcca5cb5edae4c7a8a51c61e00b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9aaa93075a585c8e53b24de7a5b023e669efcca5cb5edae4c7a8a51c61e00b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:16:11 compute-0 podman[364497]: 2025-10-14 09:16:11.874645736 +0000 UTC m=+0.147915996 container init da1cc91af64e081590c63b4d278cb012722bc60083ee1607004a476ed2639fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:16:11 compute-0 podman[364497]: 2025-10-14 09:16:11.881984287 +0000 UTC m=+0.155254517 container start da1cc91af64e081590c63b4d278cb012722bc60083ee1607004a476ed2639fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:16:11 compute-0 podman[364497]: 2025-10-14 09:16:11.887897573 +0000 UTC m=+0.161167833 container attach da1cc91af64e081590c63b4d278cb012722bc60083ee1607004a476ed2639fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.897 2 INFO nova.virt.libvirt.driver [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Deleting instance files /var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16_del
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.898 2 INFO nova.virt.libvirt.driver [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Deletion of /var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16_del complete
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.977 2 INFO nova.compute.manager [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.980 2 DEBUG oslo.service.loopingcall [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.981 2 DEBUG nova.compute.manager [-] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:16:11 compute-0 nova_compute[259627]: 2025-10-14 09:16:11.981 2 DEBUG nova.network.neutron [-] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:16:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 279 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 6.5 MiB/s wr, 403 op/s
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.153 2 INFO nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Deleting instance files /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5_del
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.155 2 INFO nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Deletion of /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5_del complete
Oct 14 09:16:12 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1089509023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.366 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.367 2 INFO nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Creating image(s)
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.401 2 DEBUG nova.storage.rbd_utils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.448 2 DEBUG nova.storage.rbd_utils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.478 2 DEBUG nova.storage.rbd_utils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.483 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.569 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.571 2 DEBUG oslo_concurrency.lockutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.572 2 DEBUG oslo_concurrency.lockutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.572 2 DEBUG oslo_concurrency.lockutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.601 2 DEBUG nova.storage.rbd_utils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.607 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf 548b81f4-df26-4f76-910f-5a14445c93c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.689 2 DEBUG nova.network.neutron [-] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.710 2 INFO nova.compute.manager [-] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Took 0.73 seconds to deallocate network for instance.
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.778 2 DEBUG oslo_concurrency.lockutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.779 2 DEBUG oslo_concurrency.lockutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:12 compute-0 nifty_booth[364533]: {
Oct 14 09:16:12 compute-0 nifty_booth[364533]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:16:12 compute-0 nifty_booth[364533]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:16:12 compute-0 nifty_booth[364533]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:16:12 compute-0 nifty_booth[364533]:         "osd_id": 2,
Oct 14 09:16:12 compute-0 nifty_booth[364533]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:16:12 compute-0 nifty_booth[364533]:         "type": "bluestore"
Oct 14 09:16:12 compute-0 nifty_booth[364533]:     },
Oct 14 09:16:12 compute-0 nifty_booth[364533]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:16:12 compute-0 nifty_booth[364533]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:16:12 compute-0 nifty_booth[364533]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:16:12 compute-0 nifty_booth[364533]:         "osd_id": 1,
Oct 14 09:16:12 compute-0 nifty_booth[364533]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:16:12 compute-0 nifty_booth[364533]:         "type": "bluestore"
Oct 14 09:16:12 compute-0 nifty_booth[364533]:     },
Oct 14 09:16:12 compute-0 nifty_booth[364533]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:16:12 compute-0 nifty_booth[364533]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:16:12 compute-0 nifty_booth[364533]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:16:12 compute-0 nifty_booth[364533]:         "osd_id": 0,
Oct 14 09:16:12 compute-0 nifty_booth[364533]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:16:12 compute-0 nifty_booth[364533]:         "type": "bluestore"
Oct 14 09:16:12 compute-0 nifty_booth[364533]:     }
Oct 14 09:16:12 compute-0 nifty_booth[364533]: }
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.858 2 DEBUG oslo_concurrency.processutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:16:12 compute-0 systemd[1]: libpod-da1cc91af64e081590c63b4d278cb012722bc60083ee1607004a476ed2639fdd.scope: Deactivated successfully.
Oct 14 09:16:12 compute-0 podman[364497]: 2025-10-14 09:16:12.873089235 +0000 UTC m=+1.146359495 container died da1cc91af64e081590c63b4d278cb012722bc60083ee1607004a476ed2639fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:16:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9aaa93075a585c8e53b24de7a5b023e669efcca5cb5edae4c7a8a51c61e00b6-merged.mount: Deactivated successfully.
Oct 14 09:16:12 compute-0 nova_compute[259627]: 2025-10-14 09:16:12.935 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf 548b81f4-df26-4f76-910f-5a14445c93c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:12 compute-0 podman[364497]: 2025-10-14 09:16:12.944780422 +0000 UTC m=+1.218050672 container remove da1cc91af64e081590c63b4d278cb012722bc60083ee1607004a476ed2639fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct 14 09:16:12 compute-0 systemd[1]: libpod-conmon-da1cc91af64e081590c63b4d278cb012722bc60083ee1607004a476ed2639fdd.scope: Deactivated successfully.
Oct 14 09:16:12 compute-0 sudo[364280]: pam_unix(sudo:session): session closed for user root
Oct 14 09:16:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:16:12 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:16:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:16:12 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:16:12 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev c54b0415-85e0-49bb-bc66-d16bdd43b601 does not exist
Oct 14 09:16:12 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 48e33322-1993-4c5e-8ecd-b8c0cf0bd9f4 does not exist
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.033 2 DEBUG nova.storage.rbd_utils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] resizing rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:16:13 compute-0 sudo[364706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:16:13 compute-0 sudo[364706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:16:13 compute-0 sudo[364706]: pam_unix(sudo:session): session closed for user root
Oct 14 09:16:13 compute-0 sudo[364772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:16:13 compute-0 sudo[364772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:16:13 compute-0 sudo[364772]: pam_unix(sudo:session): session closed for user root
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.136 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.136 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Ensure instance console log exists: /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.137 2 DEBUG oslo_concurrency.lockutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.137 2 DEBUG oslo_concurrency.lockutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.137 2 DEBUG oslo_concurrency.lockutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.138 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.142 2 WARNING nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.146 2 DEBUG nova.virt.libvirt.host [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.147 2 DEBUG nova.virt.libvirt.host [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.149 2 DEBUG nova.virt.libvirt.host [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.149 2 DEBUG nova.virt.libvirt.host [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.150 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.150 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.150 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.150 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.151 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.151 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.151 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.151 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.151 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.151 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.151 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.152 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.152 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.179 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:13 compute-0 ceph-mon[74249]: pgmap v1866: 305 pgs: 305 active+clean; 279 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 6.5 MiB/s wr, 403 op/s
Oct 14 09:16:13 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:16:13 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:16:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:16:13 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1412028711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.315 2 DEBUG oslo_concurrency.processutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.322 2 DEBUG nova.compute.provider_tree [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.338 2 DEBUG nova.scheduler.client.report [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.357 2 DEBUG oslo_concurrency.lockutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.382 2 INFO nova.scheduler.client.report [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Deleted allocations for instance d46b6953-9413-4e6a-94f7-7b5ac9634c16
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.447 2 DEBUG oslo_concurrency.lockutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.521 2 DEBUG nova.compute.manager [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Received event network-vif-unplugged-05e92470-2658-4ea2-9c44-e91cd5226905 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.521 2 DEBUG oslo_concurrency.lockutils [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.521 2 DEBUG oslo_concurrency.lockutils [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.521 2 DEBUG oslo_concurrency.lockutils [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.521 2 DEBUG nova.compute.manager [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] No waiting events found dispatching network-vif-unplugged-05e92470-2658-4ea2-9c44-e91cd5226905 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.521 2 WARNING nova.compute.manager [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Received unexpected event network-vif-unplugged-05e92470-2658-4ea2-9c44-e91cd5226905 for instance with vm_state deleted and task_state None.
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.522 2 DEBUG nova.compute.manager [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Received event network-vif-plugged-05e92470-2658-4ea2-9c44-e91cd5226905 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.522 2 DEBUG oslo_concurrency.lockutils [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.522 2 DEBUG oslo_concurrency.lockutils [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.522 2 DEBUG oslo_concurrency.lockutils [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.522 2 DEBUG nova.compute.manager [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] No waiting events found dispatching network-vif-plugged-05e92470-2658-4ea2-9c44-e91cd5226905 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.522 2 WARNING nova.compute.manager [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Received unexpected event network-vif-plugged-05e92470-2658-4ea2-9c44-e91cd5226905 for instance with vm_state deleted and task_state None.
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.523 2 DEBUG nova.compute.manager [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Received event network-vif-deleted-05e92470-2658-4ea2-9c44-e91cd5226905 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:16:13 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3626267375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.602 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.639 2 DEBUG nova.storage.rbd_utils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:13 compute-0 nova_compute[259627]: 2025-10-14 09:16:13.645 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 279 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 685 KiB/s rd, 4.3 MiB/s wr, 187 op/s
Oct 14 09:16:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:16:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1398352582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:16:14 compute-0 nova_compute[259627]: 2025-10-14 09:16:14.077 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:14 compute-0 nova_compute[259627]: 2025-10-14 09:16:14.080 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:16:14 compute-0 nova_compute[259627]:   <uuid>548b81f4-df26-4f76-910f-5a14445c93c5</uuid>
Oct 14 09:16:14 compute-0 nova_compute[259627]:   <name>instance-0000006b</name>
Oct 14 09:16:14 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:16:14 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:16:14 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerShowV247Test-server-1778827495</nova:name>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:16:13</nova:creationTime>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:16:14 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:16:14 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:16:14 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:16:14 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:16:14 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:16:14 compute-0 nova_compute[259627]:         <nova:user uuid="c5d4a1c172e947e0a129b9f397f961cf">tempest-ServerShowV247Test-1595240674-project-member</nova:user>
Oct 14 09:16:14 compute-0 nova_compute[259627]:         <nova:project uuid="222bb7bdbd34453db62947152ca9b44a">tempest-ServerShowV247Test-1595240674</nova:project>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="e2368e3e-f504-40e6-a9d3-67df18c845bb"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:16:14 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:16:14 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <system>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <entry name="serial">548b81f4-df26-4f76-910f-5a14445c93c5</entry>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <entry name="uuid">548b81f4-df26-4f76-910f-5a14445c93c5</entry>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     </system>
Oct 14 09:16:14 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:16:14 compute-0 nova_compute[259627]:   <os>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:   </os>
Oct 14 09:16:14 compute-0 nova_compute[259627]:   <features>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:   </features>
Oct 14 09:16:14 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:16:14 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:16:14 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/548b81f4-df26-4f76-910f-5a14445c93c5_disk">
Oct 14 09:16:14 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       </source>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:16:14 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/548b81f4-df26-4f76-910f-5a14445c93c5_disk.config">
Oct 14 09:16:14 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       </source>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:16:14 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/console.log" append="off"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <video>
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     </video>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:16:14 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:16:14 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:16:14 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:16:14 compute-0 nova_compute[259627]: </domain>
Oct 14 09:16:14 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:16:14 compute-0 nova_compute[259627]: 2025-10-14 09:16:14.155 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:16:14 compute-0 nova_compute[259627]: 2025-10-14 09:16:14.156 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:16:14 compute-0 nova_compute[259627]: 2025-10-14 09:16:14.156 2 INFO nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Using config drive
Oct 14 09:16:14 compute-0 nova_compute[259627]: 2025-10-14 09:16:14.180 2 DEBUG nova.storage.rbd_utils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:14 compute-0 nova_compute[259627]: 2025-10-14 09:16:14.200 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'ec2_ids' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:14 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1412028711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:14 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3626267375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:16:14 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1398352582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:16:14 compute-0 nova_compute[259627]: 2025-10-14 09:16:14.256 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'keypairs' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:15 compute-0 nova_compute[259627]: 2025-10-14 09:16:15.131 2 INFO nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Creating config drive at /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config
Oct 14 09:16:15 compute-0 nova_compute[259627]: 2025-10-14 09:16:15.139 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp45ysjlbj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:15 compute-0 ceph-mon[74249]: pgmap v1867: 305 pgs: 305 active+clean; 279 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 685 KiB/s rd, 4.3 MiB/s wr, 187 op/s
Oct 14 09:16:15 compute-0 nova_compute[259627]: 2025-10-14 09:16:15.312 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp45ysjlbj" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:15 compute-0 nova_compute[259627]: 2025-10-14 09:16:15.362 2 DEBUG nova.storage.rbd_utils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:15 compute-0 nova_compute[259627]: 2025-10-14 09:16:15.367 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:15 compute-0 nova_compute[259627]: 2025-10-14 09:16:15.548 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:15 compute-0 nova_compute[259627]: 2025-10-14 09:16:15.550 2 INFO nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Deleting local config drive /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config because it was imported into RBD.
Oct 14 09:16:15 compute-0 systemd-machined[214636]: New machine qemu-135-instance-0000006b.
Oct 14 09:16:15 compute-0 systemd[1]: Started Virtual Machine qemu-135-instance-0000006b.
Oct 14 09:16:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 167 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 741 KiB/s rd, 6.1 MiB/s wr, 272 op/s
Oct 14 09:16:16 compute-0 nova_compute[259627]: 2025-10-14 09:16:16.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.097 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 548b81f4-df26-4f76-910f-5a14445c93c5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.098 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433377.0968168, 548b81f4-df26-4f76-910f-5a14445c93c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.098 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] VM Resumed (Lifecycle Event)
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.103 2 DEBUG nova.compute.manager [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.103 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.108 2 INFO nova.virt.libvirt.driver [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance spawned successfully.
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.109 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.137 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.141 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.153 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.154 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.154 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.154 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.155 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.155 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.197 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.198 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433377.1015692, 548b81f4-df26-4f76-910f-5a14445c93c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.198 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] VM Started (Lifecycle Event)
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.255 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.259 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.264 2 DEBUG nova.compute.manager [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:17 compute-0 ceph-mon[74249]: pgmap v1868: 305 pgs: 305 active+clean; 167 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 741 KiB/s rd, 6.1 MiB/s wr, 272 op/s
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.293 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.326 2 DEBUG oslo_concurrency.lockutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.327 2 DEBUG oslo_concurrency.lockutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.327 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:16:17 compute-0 nova_compute[259627]: 2025-10-14 09:16:17.384 2 DEBUG oslo_concurrency.lockutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:16:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 167 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 738 KiB/s rd, 6.1 MiB/s wr, 268 op/s
Oct 14 09:16:18 compute-0 nova_compute[259627]: 2025-10-14 09:16:18.193 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433363.1928039, c28cafe5-40e7-47f9-8793-6193487fccc3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:16:18 compute-0 nova_compute[259627]: 2025-10-14 09:16:18.194 2 INFO nova.compute.manager [-] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] VM Stopped (Lifecycle Event)
Oct 14 09:16:18 compute-0 nova_compute[259627]: 2025-10-14 09:16:18.221 2 DEBUG nova.compute.manager [None req-768767c4-efe7-4764-bccc-73bcb29b1be9 - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:18 compute-0 nova_compute[259627]: 2025-10-14 09:16:18.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:19 compute-0 nova_compute[259627]: 2025-10-14 09:16:19.123 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "548b81f4-df26-4f76-910f-5a14445c93c5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:19 compute-0 nova_compute[259627]: 2025-10-14 09:16:19.125 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "548b81f4-df26-4f76-910f-5a14445c93c5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:19 compute-0 nova_compute[259627]: 2025-10-14 09:16:19.125 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "548b81f4-df26-4f76-910f-5a14445c93c5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:19 compute-0 nova_compute[259627]: 2025-10-14 09:16:19.126 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "548b81f4-df26-4f76-910f-5a14445c93c5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:19 compute-0 nova_compute[259627]: 2025-10-14 09:16:19.127 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "548b81f4-df26-4f76-910f-5a14445c93c5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:19 compute-0 nova_compute[259627]: 2025-10-14 09:16:19.128 2 INFO nova.compute.manager [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Terminating instance
Oct 14 09:16:19 compute-0 nova_compute[259627]: 2025-10-14 09:16:19.130 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "refresh_cache-548b81f4-df26-4f76-910f-5a14445c93c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:16:19 compute-0 nova_compute[259627]: 2025-10-14 09:16:19.131 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquired lock "refresh_cache-548b81f4-df26-4f76-910f-5a14445c93c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:16:19 compute-0 nova_compute[259627]: 2025-10-14 09:16:19.132 2 DEBUG nova.network.neutron [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:16:19 compute-0 ceph-mon[74249]: pgmap v1869: 305 pgs: 305 active+clean; 167 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 738 KiB/s rd, 6.1 MiB/s wr, 268 op/s
Oct 14 09:16:19 compute-0 nova_compute[259627]: 2025-10-14 09:16:19.623 2 DEBUG nova.network.neutron [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:16:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 167 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 738 KiB/s rd, 6.1 MiB/s wr, 268 op/s
Oct 14 09:16:20 compute-0 nova_compute[259627]: 2025-10-14 09:16:20.230 2 DEBUG nova.network.neutron [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:16:20 compute-0 nova_compute[259627]: 2025-10-14 09:16:20.263 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Releasing lock "refresh_cache-548b81f4-df26-4f76-910f-5a14445c93c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:16:20 compute-0 nova_compute[259627]: 2025-10-14 09:16:20.265 2 DEBUG nova.compute.manager [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:16:20 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Oct 14 09:16:20 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006b.scope: Consumed 4.653s CPU time.
Oct 14 09:16:20 compute-0 systemd-machined[214636]: Machine qemu-135-instance-0000006b terminated.
Oct 14 09:16:20 compute-0 nova_compute[259627]: 2025-10-14 09:16:20.489 2 INFO nova.virt.libvirt.driver [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance destroyed successfully.
Oct 14 09:16:20 compute-0 nova_compute[259627]: 2025-10-14 09:16:20.490 2 DEBUG nova.objects.instance [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'resources' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:20 compute-0 nova_compute[259627]: 2025-10-14 09:16:20.951 2 INFO nova.virt.libvirt.driver [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Deleting instance files /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5_del
Oct 14 09:16:20 compute-0 nova_compute[259627]: 2025-10-14 09:16:20.953 2 INFO nova.virt.libvirt.driver [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Deletion of /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5_del complete
Oct 14 09:16:21 compute-0 nova_compute[259627]: 2025-10-14 09:16:21.067 2 INFO nova.compute.manager [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 14 09:16:21 compute-0 nova_compute[259627]: 2025-10-14 09:16:21.067 2 DEBUG oslo.service.loopingcall [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:16:21 compute-0 nova_compute[259627]: 2025-10-14 09:16:21.068 2 DEBUG nova.compute.manager [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:16:21 compute-0 nova_compute[259627]: 2025-10-14 09:16:21.068 2 DEBUG nova.network.neutron [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:16:21 compute-0 ceph-mon[74249]: pgmap v1870: 305 pgs: 305 active+clean; 167 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 738 KiB/s rd, 6.1 MiB/s wr, 268 op/s
Oct 14 09:16:21 compute-0 nova_compute[259627]: 2025-10-14 09:16:21.319 2 DEBUG nova.network.neutron [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:16:21 compute-0 nova_compute[259627]: 2025-10-14 09:16:21.333 2 DEBUG nova.network.neutron [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:16:21 compute-0 nova_compute[259627]: 2025-10-14 09:16:21.349 2 INFO nova.compute.manager [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Took 0.28 seconds to deallocate network for instance.
Oct 14 09:16:21 compute-0 nova_compute[259627]: 2025-10-14 09:16:21.397 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:21 compute-0 nova_compute[259627]: 2025-10-14 09:16:21.397 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:21 compute-0 nova_compute[259627]: 2025-10-14 09:16:21.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:21 compute-0 nova_compute[259627]: 2025-10-14 09:16:21.504 2 DEBUG oslo_concurrency.processutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:21 compute-0 podman[365018]: 2025-10-14 09:16:21.711681915 +0000 UTC m=+0.109437138 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:16:21 compute-0 podman[365019]: 2025-10-14 09:16:21.720475602 +0000 UTC m=+0.116317048 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 09:16:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:16:21 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1760005497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:21 compute-0 nova_compute[259627]: 2025-10-14 09:16:21.964 2 DEBUG oslo_concurrency.processutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:21 compute-0 nova_compute[259627]: 2025-10-14 09:16:21.971 2 DEBUG nova.compute.provider_tree [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:16:21 compute-0 nova_compute[259627]: 2025-10-14 09:16:21.992 2 DEBUG nova.scheduler.client.report [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:16:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 153 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 343 op/s
Oct 14 09:16:22 compute-0 nova_compute[259627]: 2025-10-14 09:16:22.019 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:22 compute-0 nova_compute[259627]: 2025-10-14 09:16:22.052 2 INFO nova.scheduler.client.report [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Deleted allocations for instance 548b81f4-df26-4f76-910f-5a14445c93c5
Oct 14 09:16:22 compute-0 nova_compute[259627]: 2025-10-14 09:16:22.197 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "548b81f4-df26-4f76-910f-5a14445c93c5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1760005497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:16:22 compute-0 nova_compute[259627]: 2025-10-14 09:16:22.915 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "0c7c28d8-ba3d-471a-bf37-8ff1870d27c8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:22 compute-0 nova_compute[259627]: 2025-10-14 09:16:22.916 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "0c7c28d8-ba3d-471a-bf37-8ff1870d27c8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:22 compute-0 nova_compute[259627]: 2025-10-14 09:16:22.916 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "0c7c28d8-ba3d-471a-bf37-8ff1870d27c8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:22 compute-0 nova_compute[259627]: 2025-10-14 09:16:22.917 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "0c7c28d8-ba3d-471a-bf37-8ff1870d27c8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:22 compute-0 nova_compute[259627]: 2025-10-14 09:16:22.917 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "0c7c28d8-ba3d-471a-bf37-8ff1870d27c8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:22 compute-0 nova_compute[259627]: 2025-10-14 09:16:22.919 2 INFO nova.compute.manager [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Terminating instance
Oct 14 09:16:22 compute-0 nova_compute[259627]: 2025-10-14 09:16:22.922 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "refresh_cache-0c7c28d8-ba3d-471a-bf37-8ff1870d27c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:16:22 compute-0 nova_compute[259627]: 2025-10-14 09:16:22.922 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquired lock "refresh_cache-0c7c28d8-ba3d-471a-bf37-8ff1870d27c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:16:22 compute-0 nova_compute[259627]: 2025-10-14 09:16:22.923 2 DEBUG nova.network.neutron [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:16:23 compute-0 nova_compute[259627]: 2025-10-14 09:16:23.176 2 DEBUG nova.network.neutron [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:16:23 compute-0 ceph-mon[74249]: pgmap v1871: 305 pgs: 305 active+clean; 153 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 343 op/s
Oct 14 09:16:23 compute-0 nova_compute[259627]: 2025-10-14 09:16:23.553 2 DEBUG nova.network.neutron [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:16:23 compute-0 nova_compute[259627]: 2025-10-14 09:16:23.575 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Releasing lock "refresh_cache-0c7c28d8-ba3d-471a-bf37-8ff1870d27c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:16:23 compute-0 nova_compute[259627]: 2025-10-14 09:16:23.577 2 DEBUG nova.compute.manager [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:16:23 compute-0 nova_compute[259627]: 2025-10-14 09:16:23.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:23 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Oct 14 09:16:23 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006a.scope: Consumed 15.199s CPU time.
Oct 14 09:16:23 compute-0 systemd-machined[214636]: Machine qemu-133-instance-0000006a terminated.
Oct 14 09:16:23 compute-0 nova_compute[259627]: 2025-10-14 09:16:23.837 2 INFO nova.virt.libvirt.driver [-] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Instance destroyed successfully.
Oct 14 09:16:23 compute-0 nova_compute[259627]: 2025-10-14 09:16:23.838 2 DEBUG nova.objects.instance [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'resources' on Instance uuid 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 153 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.343 2 INFO nova.virt.libvirt.driver [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Deleting instance files /var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_del
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.344 2 INFO nova.virt.libvirt.driver [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Deletion of /var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_del complete
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.405 2 INFO nova.compute.manager [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.405 2 DEBUG oslo.service.loopingcall [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.406 2 DEBUG nova.compute.manager [-] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.406 2 DEBUG nova.network.neutron [-] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.435 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433369.4336374, 33969555-fe06-4613-b244-d03c9b4180ba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.436 2 INFO nova.compute.manager [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] VM Stopped (Lifecycle Event)
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.461 2 DEBUG nova.compute.manager [None req-7be534b2-c728-4c1c-b22a-cbb44a483298 - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.552 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "7eb647bc-75e4-4d38-aaa4-67570c4713f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.553 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "7eb647bc-75e4-4d38-aaa4-67570c4713f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.584 2 DEBUG nova.compute.manager [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.645 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.645 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.656 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.656 2 INFO nova.compute.claims [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.814 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.868 2 DEBUG nova.network.neutron [-] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.886 2 DEBUG nova.network.neutron [-] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.905 2 INFO nova.compute.manager [-] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Took 0.50 seconds to deallocate network for instance.
Oct 14 09:16:24 compute-0 nova_compute[259627]: 2025-10-14 09:16:24.986 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:25 compute-0 ceph-mon[74249]: pgmap v1872: 305 pgs: 305 active+clean; 153 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Oct 14 09:16:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:16:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1462707404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.353 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.359 2 DEBUG nova.compute.provider_tree [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.377 2 DEBUG nova.scheduler.client.report [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.405 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.406 2 DEBUG nova.compute.manager [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.410 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.464 2 DEBUG nova.compute.manager [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.490 2 INFO nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.496 2 DEBUG oslo_concurrency.processutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.578 2 DEBUG nova.compute.manager [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.696 2 DEBUG nova.compute.manager [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.698 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.698 2 INFO nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Creating image(s)
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.735 2 DEBUG nova.storage.rbd_utils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.775 2 DEBUG nova.storage.rbd_utils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.809 2 DEBUG nova.storage.rbd_utils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.814 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.924 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.926 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.926 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.927 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.959 2 DEBUG nova.storage.rbd_utils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:25 compute-0 nova_compute[259627]: 2025-10-14 09:16:25.965 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:16:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2187996767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.006 2 DEBUG oslo_concurrency.processutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.014 2 DEBUG nova.compute.provider_tree [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:16:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 41 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.039 2 DEBUG nova.scheduler.client.report [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.064 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.104 2 INFO nova.scheduler.client.report [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Deleted allocations for instance 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.187 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "0c7c28d8-ba3d-471a-bf37-8ff1870d27c8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.258 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:26 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1462707404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:26 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2187996767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.337 2 DEBUG nova.storage.rbd_utils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] resizing rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.450 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433371.425544, d46b6953-9413-4e6a-94f7-7b5ac9634c16 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.450 2 INFO nova.compute.manager [-] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] VM Stopped (Lifecycle Event)
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.460 2 DEBUG nova.objects.instance [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'migration_context' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.481 2 DEBUG nova.compute.manager [None req-1952e744-7506-4ce4-a09c-e3f313583521 - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.483 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.484 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Ensure instance console log exists: /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.484 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.485 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.485 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.488 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.495 2 WARNING nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.501 2 DEBUG nova.virt.libvirt.host [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.502 2 DEBUG nova.virt.libvirt.host [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.505 2 DEBUG nova.virt.libvirt.host [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.506 2 DEBUG nova.virt.libvirt.host [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.506 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.507 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.508 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.508 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.508 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.509 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.509 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.509 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.510 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.510 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.510 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.511 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.515 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:16:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1096043527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.952 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.983 2 DEBUG nova.storage.rbd_utils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:26 compute-0 nova_compute[259627]: 2025-10-14 09:16:26.986 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:27 compute-0 ceph-mon[74249]: pgmap v1873: 305 pgs: 305 active+clean; 41 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Oct 14 09:16:27 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1096043527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:16:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:16:27 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4007108185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:16:27 compute-0 nova_compute[259627]: 2025-10-14 09:16:27.408 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:27 compute-0 nova_compute[259627]: 2025-10-14 09:16:27.412 2 DEBUG nova.objects.instance [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'pci_devices' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:27 compute-0 nova_compute[259627]: 2025-10-14 09:16:27.430 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:16:27 compute-0 nova_compute[259627]:   <uuid>7eb647bc-75e4-4d38-aaa4-67570c4713f9</uuid>
Oct 14 09:16:27 compute-0 nova_compute[259627]:   <name>instance-0000006c</name>
Oct 14 09:16:27 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:16:27 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:16:27 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerShowV257Test-server-1454600294</nova:name>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:16:26</nova:creationTime>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:16:27 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:16:27 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:16:27 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:16:27 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:16:27 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:16:27 compute-0 nova_compute[259627]:         <nova:user uuid="c832ba0d968c4ca4a20d386152e5b5bb">tempest-ServerShowV257Test-1068269928-project-member</nova:user>
Oct 14 09:16:27 compute-0 nova_compute[259627]:         <nova:project uuid="4ba227fc561b4e9cb9d86e1727015a7d">tempest-ServerShowV257Test-1068269928</nova:project>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:16:27 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:16:27 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <system>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <entry name="serial">7eb647bc-75e4-4d38-aaa4-67570c4713f9</entry>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <entry name="uuid">7eb647bc-75e4-4d38-aaa4-67570c4713f9</entry>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     </system>
Oct 14 09:16:27 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:16:27 compute-0 nova_compute[259627]:   <os>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:   </os>
Oct 14 09:16:27 compute-0 nova_compute[259627]:   <features>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:   </features>
Oct 14 09:16:27 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:16:27 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:16:27 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk">
Oct 14 09:16:27 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       </source>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:16:27 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config">
Oct 14 09:16:27 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       </source>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:16:27 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/console.log" append="off"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <video>
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     </video>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:16:27 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:16:27 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:16:27 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:16:27 compute-0 nova_compute[259627]: </domain>
Oct 14 09:16:27 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:16:27 compute-0 nova_compute[259627]: 2025-10-14 09:16:27.495 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:16:27 compute-0 nova_compute[259627]: 2025-10-14 09:16:27.496 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:16:27 compute-0 nova_compute[259627]: 2025-10-14 09:16:27.497 2 INFO nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Using config drive
Oct 14 09:16:27 compute-0 nova_compute[259627]: 2025-10-14 09:16:27.534 2 DEBUG nova.storage.rbd_utils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:16:28 compute-0 nova_compute[259627]: 2025-10-14 09:16:28.009 2 INFO nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Creating config drive at /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config
Oct 14 09:16:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 41 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 126 op/s
Oct 14 09:16:28 compute-0 nova_compute[259627]: 2025-10-14 09:16:28.018 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoqm8tngm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:28 compute-0 nova_compute[259627]: 2025-10-14 09:16:28.190 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoqm8tngm" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:28 compute-0 nova_compute[259627]: 2025-10-14 09:16:28.222 2 DEBUG nova.storage.rbd_utils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:28 compute-0 nova_compute[259627]: 2025-10-14 09:16:28.227 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:28 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4007108185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:16:28 compute-0 nova_compute[259627]: 2025-10-14 09:16:28.407 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:28 compute-0 nova_compute[259627]: 2025-10-14 09:16:28.408 2 INFO nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Deleting local config drive /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config because it was imported into RBD.
Oct 14 09:16:28 compute-0 systemd-machined[214636]: New machine qemu-136-instance-0000006c.
Oct 14 09:16:28 compute-0 systemd[1]: Started Virtual Machine qemu-136-instance-0000006c.
Oct 14 09:16:28 compute-0 nova_compute[259627]: 2025-10-14 09:16:28.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.254 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433389.2535648, 7eb647bc-75e4-4d38-aaa4-67570c4713f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.254 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] VM Resumed (Lifecycle Event)
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.257 2 DEBUG nova.compute.manager [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.257 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.261 2 INFO nova.virt.libvirt.driver [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance spawned successfully.
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.261 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.286 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.294 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.299 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.300 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.300 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.301 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.301 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.302 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.328 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.329 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433389.2546666, 7eb647bc-75e4-4d38-aaa4-67570c4713f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.329 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] VM Started (Lifecycle Event)
Oct 14 09:16:29 compute-0 ceph-mon[74249]: pgmap v1874: 305 pgs: 305 active+clean; 41 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 126 op/s
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.358 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.362 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.391 2 INFO nova.compute.manager [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Took 3.69 seconds to spawn the instance on the hypervisor.
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.392 2 DEBUG nova.compute.manager [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.393 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.458 2 INFO nova.compute.manager [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Took 4.83 seconds to build instance.
Oct 14 09:16:29 compute-0 nova_compute[259627]: 2025-10-14 09:16:29.475 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "7eb647bc-75e4-4d38-aaa4-67570c4713f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 41 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 126 op/s
Oct 14 09:16:30 compute-0 nova_compute[259627]: 2025-10-14 09:16:30.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:16:31 compute-0 ceph-mon[74249]: pgmap v1875: 305 pgs: 305 active+clean; 41 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 126 op/s
Oct 14 09:16:31 compute-0 nova_compute[259627]: 2025-10-14 09:16:31.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:31 compute-0 nova_compute[259627]: 2025-10-14 09:16:31.527 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:31 compute-0 nova_compute[259627]: 2025-10-14 09:16:31.528 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:31 compute-0 nova_compute[259627]: 2025-10-14 09:16:31.548 2 DEBUG nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:16:31 compute-0 nova_compute[259627]: 2025-10-14 09:16:31.631 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:31 compute-0 nova_compute[259627]: 2025-10-14 09:16:31.632 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:31 compute-0 nova_compute[259627]: 2025-10-14 09:16:31.640 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:16:31 compute-0 nova_compute[259627]: 2025-10-14 09:16:31.641 2 INFO nova.compute.claims [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:16:31 compute-0 nova_compute[259627]: 2025-10-14 09:16:31.795 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:31 compute-0 nova_compute[259627]: 2025-10-14 09:16:31.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:16:31 compute-0 nova_compute[259627]: 2025-10-14 09:16:31.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:16:31 compute-0 nova_compute[259627]: 2025-10-14 09:16:31.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:16:31 compute-0 nova_compute[259627]: 2025-10-14 09:16:31.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.005 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 88 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.051 2 INFO nova.compute.manager [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Rebuilding instance
Oct 14 09:16:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:16:32 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4242382949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.359 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.360 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.367 2 DEBUG nova.compute.provider_tree [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:16:32 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4242382949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.383 2 DEBUG nova.compute.manager [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.384 2 DEBUG nova.scheduler.client.report [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.426 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.428 2 DEBUG nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.432 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.432 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.433 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.433 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.493 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'pci_requests' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.512 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'pci_devices' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.537 2 DEBUG nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.538 2 DEBUG nova.network.neutron [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.544 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'resources' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.561 2 INFO nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.565 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'migration_context' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.603 2 DEBUG nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.609 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.616 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.702 2 DEBUG nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.713 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.714 2 INFO nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Creating image(s)
Oct 14 09:16:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:16:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.742 2 DEBUG nova.storage.rbd_utils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 8e0fe601-33b8-44f0-8452-d821825b9176_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.770 2 DEBUG nova.storage.rbd_utils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 8e0fe601-33b8-44f0-8452-d821825b9176_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:16:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:16:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:16:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:16:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:16:32
Oct 14 09:16:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:16:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:16:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.data', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.control', 'default.rgw.meta']
Oct 14 09:16:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.799 2 DEBUG nova.storage.rbd_utils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 8e0fe601-33b8-44f0-8452-d821825b9176_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.803 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.849 2 DEBUG nova.policy [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e992bcb79c4946a8985e3df25eb216ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d24993a343a425dbddac7e32be0c86b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:16:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.890 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.891 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.891 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.892 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.923 2 DEBUG nova.storage.rbd_utils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 8e0fe601-33b8-44f0-8452-d821825b9176_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.927 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 8e0fe601-33b8-44f0-8452-d821825b9176_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:16:32 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2442803607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:32 compute-0 nova_compute[259627]: 2025-10-14 09:16:32.980 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.078 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.078 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:16:33 compute-0 podman[365609]: 2025-10-14 09:16:33.081210892 +0000 UTC m=+0.057057848 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 14 09:16:33 compute-0 podman[365607]: 2025-10-14 09:16:33.159316917 +0000 UTC m=+0.125909525 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 14 09:16:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:16:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:16:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:16:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:16:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:16:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:16:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:16:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:16:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:16:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.252 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 8e0fe601-33b8-44f0-8452-d821825b9176_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.303 2 DEBUG nova.storage.rbd_utils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] resizing rbd image 8e0fe601-33b8-44f0-8452-d821825b9176_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.358 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.360 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3581MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.360 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.360 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:33 compute-0 ceph-mon[74249]: pgmap v1876: 305 pgs: 305 active+clean; 88 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Oct 14 09:16:33 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2442803607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.392 2 DEBUG nova.objects.instance [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'migration_context' on Instance uuid 8e0fe601-33b8-44f0-8452-d821825b9176 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.418 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.419 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Ensure instance console log exists: /var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.419 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.420 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.420 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.459 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 7eb647bc-75e4-4d38-aaa4-67570c4713f9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.459 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 8e0fe601-33b8-44f0-8452-d821825b9176 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.459 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.460 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.520 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:33 compute-0 nova_compute[259627]: 2025-10-14 09:16:33.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:16:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571790934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:34 compute-0 nova_compute[259627]: 2025-10-14 09:16:34.009 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 88 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Oct 14 09:16:34 compute-0 nova_compute[259627]: 2025-10-14 09:16:34.018 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:16:34 compute-0 nova_compute[259627]: 2025-10-14 09:16:34.038 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:16:34 compute-0 nova_compute[259627]: 2025-10-14 09:16:34.075 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:16:34 compute-0 nova_compute[259627]: 2025-10-14 09:16:34.076 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:34 compute-0 nova_compute[259627]: 2025-10-14 09:16:34.195 2 DEBUG nova.network.neutron [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Successfully created port: ae1bda9b-5957-43de-bcf7-97164f008565 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:16:34 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3571790934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:35 compute-0 nova_compute[259627]: 2025-10-14 09:16:35.141 2 DEBUG nova.network.neutron [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Successfully updated port: ae1bda9b-5957-43de-bcf7-97164f008565 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:16:35 compute-0 nova_compute[259627]: 2025-10-14 09:16:35.163 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:16:35 compute-0 nova_compute[259627]: 2025-10-14 09:16:35.164 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquired lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:16:35 compute-0 nova_compute[259627]: 2025-10-14 09:16:35.164 2 DEBUG nova.network.neutron [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:16:35 compute-0 nova_compute[259627]: 2025-10-14 09:16:35.380 2 DEBUG nova.compute.manager [req-eaf05351-b5f3-4cd8-aa88-a61a55bfdc4f req-56fe0240-fcd1-43e2-818f-ba672dbd712e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-changed-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:16:35 compute-0 nova_compute[259627]: 2025-10-14 09:16:35.381 2 DEBUG nova.compute.manager [req-eaf05351-b5f3-4cd8-aa88-a61a55bfdc4f req-56fe0240-fcd1-43e2-818f-ba672dbd712e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Refreshing instance network info cache due to event network-changed-ae1bda9b-5957-43de-bcf7-97164f008565. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:16:35 compute-0 nova_compute[259627]: 2025-10-14 09:16:35.382 2 DEBUG oslo_concurrency.lockutils [req-eaf05351-b5f3-4cd8-aa88-a61a55bfdc4f req-56fe0240-fcd1-43e2-818f-ba672dbd712e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:16:35 compute-0 ceph-mon[74249]: pgmap v1877: 305 pgs: 305 active+clean; 88 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Oct 14 09:16:35 compute-0 nova_compute[259627]: 2025-10-14 09:16:35.478 2 DEBUG nova.network.neutron [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:16:35 compute-0 nova_compute[259627]: 2025-10-14 09:16:35.487 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433380.486112, 548b81f4-df26-4f76-910f-5a14445c93c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:16:35 compute-0 nova_compute[259627]: 2025-10-14 09:16:35.487 2 INFO nova.compute.manager [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] VM Stopped (Lifecycle Event)
Oct 14 09:16:35 compute-0 nova_compute[259627]: 2025-10-14 09:16:35.516 2 DEBUG nova.compute.manager [None req-eff9a11e-da52-4c24-9266-3537089ebd37 - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 134 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.597 2 DEBUG nova.network.neutron [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Updating instance_info_cache with network_info: [{"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.617 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Releasing lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.618 2 DEBUG nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Instance network_info: |[{"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.619 2 DEBUG oslo_concurrency.lockutils [req-eaf05351-b5f3-4cd8-aa88-a61a55bfdc4f req-56fe0240-fcd1-43e2-818f-ba672dbd712e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.620 2 DEBUG nova.network.neutron [req-eaf05351-b5f3-4cd8-aa88-a61a55bfdc4f req-56fe0240-fcd1-43e2-818f-ba672dbd712e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Refreshing network info cache for port ae1bda9b-5957-43de-bcf7-97164f008565 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.625 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Start _get_guest_xml network_info=[{"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.633 2 WARNING nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.647 2 DEBUG nova.virt.libvirt.host [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.648 2 DEBUG nova.virt.libvirt.host [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.655 2 DEBUG nova.virt.libvirt.host [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.656 2 DEBUG nova.virt.libvirt.host [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.657 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.658 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.659 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.660 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.661 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.661 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.662 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.663 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.664 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.664 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.665 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.666 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:16:36 compute-0 nova_compute[259627]: 2025-10-14 09:16:36.672 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:16:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2718026794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.153 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.187 2 DEBUG nova.storage.rbd_utils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 8e0fe601-33b8-44f0-8452-d821825b9176_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.193 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:37 compute-0 ceph-mon[74249]: pgmap v1878: 305 pgs: 305 active+clean; 134 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Oct 14 09:16:37 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2718026794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:16:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:16:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714991685' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.638 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.642 2 DEBUG nova.virt.libvirt.vif [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1974746659',display_name='tempest-TestNetworkAdvancedServerOps-server-1974746659',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1974746659',id=109,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLo//MvhWo3fWQGHjr+urxX4+VcySarwttwotGnxIUKeYHMPJD2ibaE4RsIINhRhtCN4FfH4X/uMkC3+sHENw8r8re1VC10CtKYPlARtw6F00oKxhFKvSmM/1BEvmDnTA==',key_name='tempest-TestNetworkAdvancedServerOps-261373730',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-ttn68b3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:16:32Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=8e0fe601-33b8-44f0-8452-d821825b9176,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.643 2 DEBUG nova.network.os_vif_util [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.645 2 DEBUG nova.network.os_vif_util [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.647 2 DEBUG nova.objects.instance [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e0fe601-33b8-44f0-8452-d821825b9176 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.678 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:16:37 compute-0 nova_compute[259627]:   <uuid>8e0fe601-33b8-44f0-8452-d821825b9176</uuid>
Oct 14 09:16:37 compute-0 nova_compute[259627]:   <name>instance-0000006d</name>
Oct 14 09:16:37 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:16:37 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:16:37 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1974746659</nova:name>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:16:36</nova:creationTime>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:16:37 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:16:37 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:16:37 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:16:37 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:16:37 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:16:37 compute-0 nova_compute[259627]:         <nova:user uuid="e992bcb79c4946a8985e3df25eb216ca">tempest-TestNetworkAdvancedServerOps-94788416-project-member</nova:user>
Oct 14 09:16:37 compute-0 nova_compute[259627]:         <nova:project uuid="2d24993a343a425dbddac7e32be0c86b">tempest-TestNetworkAdvancedServerOps-94788416</nova:project>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:16:37 compute-0 nova_compute[259627]:         <nova:port uuid="ae1bda9b-5957-43de-bcf7-97164f008565">
Oct 14 09:16:37 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:16:37 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:16:37 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <system>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <entry name="serial">8e0fe601-33b8-44f0-8452-d821825b9176</entry>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <entry name="uuid">8e0fe601-33b8-44f0-8452-d821825b9176</entry>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     </system>
Oct 14 09:16:37 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:16:37 compute-0 nova_compute[259627]:   <os>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:   </os>
Oct 14 09:16:37 compute-0 nova_compute[259627]:   <features>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:   </features>
Oct 14 09:16:37 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:16:37 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:16:37 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/8e0fe601-33b8-44f0-8452-d821825b9176_disk">
Oct 14 09:16:37 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       </source>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:16:37 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/8e0fe601-33b8-44f0-8452-d821825b9176_disk.config">
Oct 14 09:16:37 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       </source>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:16:37 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:b7:a7:bb"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <target dev="tapae1bda9b-59"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176/console.log" append="off"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <video>
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     </video>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:16:37 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:16:37 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:16:37 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:16:37 compute-0 nova_compute[259627]: </domain>
Oct 14 09:16:37 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.693 2 DEBUG nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Preparing to wait for external event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.694 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.694 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.695 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.696 2 DEBUG nova.virt.libvirt.vif [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1974746659',display_name='tempest-TestNetworkAdvancedServerOps-server-1974746659',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1974746659',id=109,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLo//MvhWo3fWQGHjr+urxX4+VcySarwttwotGnxIUKeYHMPJD2ibaE4RsIINhRhtCN4FfH4X/uMkC3+sHENw8r8re1VC10CtKYPlARtw6F00oKxhFKvSmM/1BEvmDnTA==',key_name='tempest-TestNetworkAdvancedServerOps-261373730',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-ttn68b3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:16:32Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=8e0fe601-33b8-44f0-8452-d821825b9176,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.696 2 DEBUG nova.network.os_vif_util [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.697 2 DEBUG nova.network.os_vif_util [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.704 2 DEBUG os_vif [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.706 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.711 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapae1bda9b-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.712 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapae1bda9b-59, col_values=(('external_ids', {'iface-id': 'ae1bda9b-5957-43de-bcf7-97164f008565', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:a7:bb', 'vm-uuid': '8e0fe601-33b8-44f0-8452-d821825b9176'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:16:37 compute-0 NetworkManager[44885]: <info>  [1760433397.7155] manager: (tapae1bda9b-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/456)
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.722 2 INFO os_vif [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59')
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.782 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.782 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.782 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No VIF found with MAC fa:16:3e:b7:a7:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.783 2 INFO nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Using config drive
Oct 14 09:16:37 compute-0 nova_compute[259627]: 2025-10-14 09:16:37.804 2 DEBUG nova.storage.rbd_utils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 8e0fe601-33b8-44f0-8452-d821825b9176_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:16:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 134 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.367 2 INFO nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Creating config drive at /var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176/disk.config
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.376 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp89c02boe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:38 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3714991685' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.441 2 DEBUG nova.network.neutron [req-eaf05351-b5f3-4cd8-aa88-a61a55bfdc4f req-56fe0240-fcd1-43e2-818f-ba672dbd712e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Updated VIF entry in instance network info cache for port ae1bda9b-5957-43de-bcf7-97164f008565. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.442 2 DEBUG nova.network.neutron [req-eaf05351-b5f3-4cd8-aa88-a61a55bfdc4f req-56fe0240-fcd1-43e2-818f-ba672dbd712e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Updating instance_info_cache with network_info: [{"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.461 2 DEBUG oslo_concurrency.lockutils [req-eaf05351-b5f3-4cd8-aa88-a61a55bfdc4f req-56fe0240-fcd1-43e2-818f-ba672dbd712e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.546 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp89c02boe" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.586 2 DEBUG nova.storage.rbd_utils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 8e0fe601-33b8-44f0-8452-d821825b9176_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.590 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176/disk.config 8e0fe601-33b8-44f0-8452-d821825b9176_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.787 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176/disk.config 8e0fe601-33b8-44f0-8452-d821825b9176_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.788 2 INFO nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Deleting local config drive /var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176/disk.config because it was imported into RBD.
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.835 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433383.8332632, 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.835 2 INFO nova.compute.manager [-] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] VM Stopped (Lifecycle Event)
Oct 14 09:16:38 compute-0 kernel: tapae1bda9b-59: entered promiscuous mode
Oct 14 09:16:38 compute-0 NetworkManager[44885]: <info>  [1760433398.8372] manager: (tapae1bda9b-59): new Tun device (/org/freedesktop/NetworkManager/Devices/457)
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:38 compute-0 ovn_controller[152662]: 2025-10-14T09:16:38Z|01139|binding|INFO|Claiming lport ae1bda9b-5957-43de-bcf7-97164f008565 for this chassis.
Oct 14 09:16:38 compute-0 ovn_controller[152662]: 2025-10-14T09:16:38Z|01140|binding|INFO|ae1bda9b-5957-43de-bcf7-97164f008565: Claiming fa:16:3e:b7:a7:bb 10.100.0.9
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.857 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a7:bb 10.100.0.9'], port_security=['fa:16:3e:b7:a7:bb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8e0fe601-33b8-44f0-8452-d821825b9176', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f08b4d4-f298-4d7f-881f-65fc5ca17fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7171c907-a00d-4336-9a3c-18b14ff390bf, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ae1bda9b-5957-43de-bcf7-97164f008565) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:16:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.858 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ae1bda9b-5957-43de-bcf7-97164f008565 in datapath bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d bound to our chassis
Oct 14 09:16:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.859 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.870 2 DEBUG nova.compute.manager [None req-148bb98d-af63-4be7-aacb-a8f5cb02b15a - - - - - -] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.873 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb1ad6b-9c45-40ae-aefe-b66e67821a7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.873 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbd7e2e81-91 in ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:16:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.876 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbd7e2e81-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:16:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.877 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4301f6df-dba5-4c81-ad2c-c6155c45036a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:38 compute-0 systemd-udevd[365902]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:16:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.878 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8d12afcb-34ba-48f5-b198-ef31033c940a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:38 compute-0 systemd-machined[214636]: New machine qemu-137-instance-0000006d.
Oct 14 09:16:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.895 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[b8990696-d985-4a54-bb74-6e35ce04beb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:38 compute-0 NetworkManager[44885]: <info>  [1760433398.9012] device (tapae1bda9b-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:16:38 compute-0 NetworkManager[44885]: <info>  [1760433398.9023] device (tapae1bda9b-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:16:38 compute-0 systemd[1]: Started Virtual Machine qemu-137-instance-0000006d.
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.925 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4d70c3be-bc93-4203-ad50-1d39c946cc44]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:38 compute-0 ovn_controller[152662]: 2025-10-14T09:16:38Z|01141|binding|INFO|Setting lport ae1bda9b-5957-43de-bcf7-97164f008565 ovn-installed in OVS
Oct 14 09:16:38 compute-0 ovn_controller[152662]: 2025-10-14T09:16:38Z|01142|binding|INFO|Setting lport ae1bda9b-5957-43de-bcf7-97164f008565 up in Southbound
Oct 14 09:16:38 compute-0 nova_compute[259627]: 2025-10-14 09:16:38.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.956 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d8f6a8-fb14-407b-9b87-490b9407fcc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:38 compute-0 NetworkManager[44885]: <info>  [1760433398.9705] manager: (tapbd7e2e81-90): new Veth device (/org/freedesktop/NetworkManager/Devices/458)
Oct 14 09:16:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.969 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e593613c-fe91-4d68-9590-0f56351477e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.026 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4f02ae8c-b319-452d-9728-714fce872174]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.030 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4ab5ec-0aaf-423c-a935-7c611cbf742e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:39 compute-0 NetworkManager[44885]: <info>  [1760433399.0575] device (tapbd7e2e81-90): carrier: link connected
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.061 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d8071ba8-84e0-4fa1-ab67-c6d0039d0e60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.077 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.077 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.077 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.093 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[313e8bdd-b1ff-40ed-aa77-3b25736eb082]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd7e2e81-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:8e:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 328], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717781, 'reachable_time': 42678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365934, 'error': None, 'target': 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.096 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.097 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-7eb647bc-75e4-4d38-aaa4-67570c4713f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.097 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-7eb647bc-75e4-4d38-aaa4-67570c4713f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.097 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.097 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.113 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3feb05-0a6b-446b-ac80-e825d168784d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:8ede'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 717781, 'tstamp': 717781}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365935, 'error': None, 'target': 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.143 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d44b2170-29eb-45aa-8c89-a9c8f3695ba4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd7e2e81-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:8e:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 328], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717781, 'reachable_time': 42678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 365936, 'error': None, 'target': 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.182 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[480afbf6-beb8-406c-9892-3831d04f7c16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.279 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7b2d1faa-f256-4fbb-8cfe-e6686087b6ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.282 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd7e2e81-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.283 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.283 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd7e2e81-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:39 compute-0 kernel: tapbd7e2e81-90: entered promiscuous mode
Oct 14 09:16:39 compute-0 NetworkManager[44885]: <info>  [1760433399.3301] manager: (tapbd7e2e81-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/459)
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.333 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbd7e2e81-90, col_values=(('external_ids', {'iface-id': '9945b67f-e925-4ba1-a5f4-5c7846f9de7a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:39 compute-0 ovn_controller[152662]: 2025-10-14T09:16:39Z|01143|binding|INFO|Releasing lport 9945b67f-e925-4ba1-a5f4-5c7846f9de7a from this chassis (sb_readonly=0)
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.364 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.365 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[97ff1e06-34bc-452d-81f5-dd293f208d16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.366 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d.pid.haproxy
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:16:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.368 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'env', 'PROCESS_TAG=haproxy-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:16:39 compute-0 ceph-mon[74249]: pgmap v1879: 305 pgs: 305 active+clean; 134 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.522 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.578 2 DEBUG nova.compute.manager [req-de22e413-59a7-4da7-9e21-3c890327baaf req-a45d6ac6-0347-4e3b-a4fd-8e36b985ac9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.579 2 DEBUG oslo_concurrency.lockutils [req-de22e413-59a7-4da7-9e21-3c890327baaf req-a45d6ac6-0347-4e3b-a4fd-8e36b985ac9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.579 2 DEBUG oslo_concurrency.lockutils [req-de22e413-59a7-4da7-9e21-3c890327baaf req-a45d6ac6-0347-4e3b-a4fd-8e36b985ac9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.579 2 DEBUG oslo_concurrency.lockutils [req-de22e413-59a7-4da7-9e21-3c890327baaf req-a45d6ac6-0347-4e3b-a4fd-8e36b985ac9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.579 2 DEBUG nova.compute.manager [req-de22e413-59a7-4da7-9e21-3c890327baaf req-a45d6ac6-0347-4e3b-a4fd-8e36b985ac9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Processing event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.834 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433399.8342152, 8e0fe601-33b8-44f0-8452-d821825b9176 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.834 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] VM Started (Lifecycle Event)
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.836 2 DEBUG nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.839 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.842 2 INFO nova.virt.libvirt.driver [-] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Instance spawned successfully.
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.842 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.858 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.860 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.861 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.861 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.861 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.862 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.862 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.866 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:16:39 compute-0 podman[366010]: 2025-10-14 09:16:39.891336247 +0000 UTC m=+0.053020248 container create b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.897 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.897 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433399.8343365, 8e0fe601-33b8-44f0-8452-d821825b9176 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.897 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] VM Paused (Lifecycle Event)
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.909 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.921 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.924 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-7eb647bc-75e4-4d38-aaa4-67570c4713f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.925 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.925 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.925 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.926 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433399.8390906, 8e0fe601-33b8-44f0-8452-d821825b9176 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.926 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] VM Resumed (Lifecycle Event)
Oct 14 09:16:39 compute-0 systemd[1]: Started libpod-conmon-b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a.scope.
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.939 2 INFO nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Took 7.24 seconds to spawn the instance on the hypervisor.
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.940 2 DEBUG nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.951 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.954 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:16:39 compute-0 podman[366010]: 2025-10-14 09:16:39.864625498 +0000 UTC m=+0.026309529 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:16:39 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:16:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e4b02960d8aa9c8b5d05fa99183f05f82538622c06a41271790d657e5edb93b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:16:39 compute-0 nova_compute[259627]: 2025-10-14 09:16:39.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:16:39 compute-0 podman[366010]: 2025-10-14 09:16:39.9835528 +0000 UTC m=+0.145236821 container init b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:16:40 compute-0 nova_compute[259627]: 2025-10-14 09:16:40.000 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:16:40 compute-0 podman[366010]: 2025-10-14 09:16:40.004884435 +0000 UTC m=+0.166568436 container start b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 09:16:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 134 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Oct 14 09:16:40 compute-0 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366025]: [NOTICE]   (366029) : New worker (366031) forked
Oct 14 09:16:40 compute-0 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366025]: [NOTICE]   (366029) : Loading success.
Oct 14 09:16:40 compute-0 nova_compute[259627]: 2025-10-14 09:16:40.036 2 INFO nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Took 8.44 seconds to build instance.
Oct 14 09:16:40 compute-0 nova_compute[259627]: 2025-10-14 09:16:40.051 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:40 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 14 09:16:41 compute-0 ceph-mon[74249]: pgmap v1880: 305 pgs: 305 active+clean; 134 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Oct 14 09:16:41 compute-0 nova_compute[259627]: 2025-10-14 09:16:41.819 2 DEBUG nova.compute.manager [req-14a78fe3-b7d3-49ba-bef3-2b2636b4e662 req-d77b2ac2-e6d0-472b-b1f3-3b05e6c1edd4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:16:41 compute-0 nova_compute[259627]: 2025-10-14 09:16:41.819 2 DEBUG oslo_concurrency.lockutils [req-14a78fe3-b7d3-49ba-bef3-2b2636b4e662 req-d77b2ac2-e6d0-472b-b1f3-3b05e6c1edd4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:41 compute-0 nova_compute[259627]: 2025-10-14 09:16:41.820 2 DEBUG oslo_concurrency.lockutils [req-14a78fe3-b7d3-49ba-bef3-2b2636b4e662 req-d77b2ac2-e6d0-472b-b1f3-3b05e6c1edd4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:41 compute-0 nova_compute[259627]: 2025-10-14 09:16:41.820 2 DEBUG oslo_concurrency.lockutils [req-14a78fe3-b7d3-49ba-bef3-2b2636b4e662 req-d77b2ac2-e6d0-472b-b1f3-3b05e6c1edd4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:41 compute-0 nova_compute[259627]: 2025-10-14 09:16:41.820 2 DEBUG nova.compute.manager [req-14a78fe3-b7d3-49ba-bef3-2b2636b4e662 req-d77b2ac2-e6d0-472b-b1f3-3b05e6c1edd4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] No waiting events found dispatching network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:16:41 compute-0 nova_compute[259627]: 2025-10-14 09:16:41.820 2 WARNING nova.compute.manager [req-14a78fe3-b7d3-49ba-bef3-2b2636b4e662 req-d77b2ac2-e6d0-472b-b1f3-3b05e6c1edd4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received unexpected event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 for instance with vm_state active and task_state None.
Oct 14 09:16:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 155 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.4 MiB/s wr, 174 op/s
Oct 14 09:16:42 compute-0 nova_compute[259627]: 2025-10-14 09:16:42.681 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 09:16:42 compute-0 nova_compute[259627]: 2025-10-14 09:16:42.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001064762811747086 of space, bias 1.0, pg target 0.31942884352412576 quantized to 32 (current 32)
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:16:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:16:43 compute-0 ceph-mon[74249]: pgmap v1881: 305 pgs: 305 active+clean; 155 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.4 MiB/s wr, 174 op/s
Oct 14 09:16:43 compute-0 nova_compute[259627]: 2025-10-14 09:16:43.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 155 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.6 MiB/s wr, 98 op/s
Oct 14 09:16:44 compute-0 ovn_controller[152662]: 2025-10-14T09:16:44Z|01144|binding|INFO|Releasing lport 9945b67f-e925-4ba1-a5f4-5c7846f9de7a from this chassis (sb_readonly=0)
Oct 14 09:16:44 compute-0 nova_compute[259627]: 2025-10-14 09:16:44.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:44 compute-0 NetworkManager[44885]: <info>  [1760433404.1107] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/460)
Oct 14 09:16:44 compute-0 NetworkManager[44885]: <info>  [1760433404.1119] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Oct 14 09:16:44 compute-0 ovn_controller[152662]: 2025-10-14T09:16:44Z|01145|binding|INFO|Releasing lport 9945b67f-e925-4ba1-a5f4-5c7846f9de7a from this chassis (sb_readonly=0)
Oct 14 09:16:44 compute-0 nova_compute[259627]: 2025-10-14 09:16:44.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:44 compute-0 nova_compute[259627]: 2025-10-14 09:16:44.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:44 compute-0 nova_compute[259627]: 2025-10-14 09:16:44.460 2 DEBUG nova.compute.manager [req-cc019e47-f578-4b8b-82a3-59fde7bd2033 req-1f1ceb90-64e7-44a5-989a-0cf14a977cff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-changed-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:16:44 compute-0 nova_compute[259627]: 2025-10-14 09:16:44.461 2 DEBUG nova.compute.manager [req-cc019e47-f578-4b8b-82a3-59fde7bd2033 req-1f1ceb90-64e7-44a5-989a-0cf14a977cff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Refreshing instance network info cache due to event network-changed-ae1bda9b-5957-43de-bcf7-97164f008565. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:16:44 compute-0 nova_compute[259627]: 2025-10-14 09:16:44.462 2 DEBUG oslo_concurrency.lockutils [req-cc019e47-f578-4b8b-82a3-59fde7bd2033 req-1f1ceb90-64e7-44a5-989a-0cf14a977cff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:16:44 compute-0 nova_compute[259627]: 2025-10-14 09:16:44.462 2 DEBUG oslo_concurrency.lockutils [req-cc019e47-f578-4b8b-82a3-59fde7bd2033 req-1f1ceb90-64e7-44a5-989a-0cf14a977cff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:16:44 compute-0 nova_compute[259627]: 2025-10-14 09:16:44.463 2 DEBUG nova.network.neutron [req-cc019e47-f578-4b8b-82a3-59fde7bd2033 req-1f1ceb90-64e7-44a5-989a-0cf14a977cff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Refreshing network info cache for port ae1bda9b-5957-43de-bcf7-97164f008565 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:16:44 compute-0 nova_compute[259627]: 2025-10-14 09:16:44.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:16:45 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Oct 14 09:16:45 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006c.scope: Consumed 13.203s CPU time.
Oct 14 09:16:45 compute-0 systemd-machined[214636]: Machine qemu-136-instance-0000006c terminated.
Oct 14 09:16:45 compute-0 ceph-mon[74249]: pgmap v1882: 305 pgs: 305 active+clean; 155 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.6 MiB/s wr, 98 op/s
Oct 14 09:16:45 compute-0 nova_compute[259627]: 2025-10-14 09:16:45.696 2 INFO nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance shutdown successfully after 13 seconds.
Oct 14 09:16:45 compute-0 nova_compute[259627]: 2025-10-14 09:16:45.703 2 INFO nova.virt.libvirt.driver [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance destroyed successfully.
Oct 14 09:16:45 compute-0 nova_compute[259627]: 2025-10-14 09:16:45.709 2 INFO nova.virt.libvirt.driver [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance destroyed successfully.
Oct 14 09:16:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 167 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.144 2 INFO nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Deleting instance files /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9_del
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.145 2 INFO nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Deletion of /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9_del complete
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.310 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.311 2 INFO nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Creating image(s)
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.341 2 DEBUG nova.storage.rbd_utils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.365 2 DEBUG nova.storage.rbd_utils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.390 2 DEBUG nova.storage.rbd_utils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.393 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.477 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.478 2 DEBUG oslo_concurrency.lockutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.479 2 DEBUG oslo_concurrency.lockutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.479 2 DEBUG oslo_concurrency.lockutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.502 2 DEBUG nova.storage.rbd_utils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.506 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.766 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.839 2 DEBUG nova.storage.rbd_utils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] resizing rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.942 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.943 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Ensure instance console log exists: /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.944 2 DEBUG oslo_concurrency.lockutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.944 2 DEBUG oslo_concurrency.lockutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.944 2 DEBUG oslo_concurrency.lockutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.946 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.951 2 WARNING nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.958 2 DEBUG nova.virt.libvirt.host [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.958 2 DEBUG nova.virt.libvirt.host [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.962 2 DEBUG nova.virt.libvirt.host [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.963 2 DEBUG nova.virt.libvirt.host [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.963 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.964 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.964 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.964 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.965 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.965 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.965 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.966 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.966 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.966 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.966 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.967 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.967 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.969 2 DEBUG nova.network.neutron [req-cc019e47-f578-4b8b-82a3-59fde7bd2033 req-1f1ceb90-64e7-44a5-989a-0cf14a977cff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Updated VIF entry in instance network info cache for port ae1bda9b-5957-43de-bcf7-97164f008565. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:16:46 compute-0 nova_compute[259627]: 2025-10-14 09:16:46.970 2 DEBUG nova.network.neutron [req-cc019e47-f578-4b8b-82a3-59fde7bd2033 req-1f1ceb90-64e7-44a5-989a-0cf14a977cff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Updating instance_info_cache with network_info: [{"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:16:47 compute-0 nova_compute[259627]: 2025-10-14 09:16:47.004 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:47 compute-0 nova_compute[259627]: 2025-10-14 09:16:47.046 2 DEBUG oslo_concurrency.lockutils [req-cc019e47-f578-4b8b-82a3-59fde7bd2033 req-1f1ceb90-64e7-44a5-989a-0cf14a977cff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:16:47 compute-0 ceph-mon[74249]: pgmap v1883: 305 pgs: 305 active+clean; 167 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Oct 14 09:16:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:16:47 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3184964300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:16:47 compute-0 nova_compute[259627]: 2025-10-14 09:16:47.476 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:47 compute-0 nova_compute[259627]: 2025-10-14 09:16:47.500 2 DEBUG nova.storage.rbd_utils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:47 compute-0 nova_compute[259627]: 2025-10-14 09:16:47.505 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:47 compute-0 nova_compute[259627]: 2025-10-14 09:16:47.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:16:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:16:47 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3475672375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:16:48 compute-0 nova_compute[259627]: 2025-10-14 09:16:48.004 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:48 compute-0 nova_compute[259627]: 2025-10-14 09:16:48.008 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:16:48 compute-0 nova_compute[259627]:   <uuid>7eb647bc-75e4-4d38-aaa4-67570c4713f9</uuid>
Oct 14 09:16:48 compute-0 nova_compute[259627]:   <name>instance-0000006c</name>
Oct 14 09:16:48 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:16:48 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:16:48 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <nova:name>tempest-ServerShowV257Test-server-1454600294</nova:name>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:16:46</nova:creationTime>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:16:48 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:16:48 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:16:48 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:16:48 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:16:48 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:16:48 compute-0 nova_compute[259627]:         <nova:user uuid="c832ba0d968c4ca4a20d386152e5b5bb">tempest-ServerShowV257Test-1068269928-project-member</nova:user>
Oct 14 09:16:48 compute-0 nova_compute[259627]:         <nova:project uuid="4ba227fc561b4e9cb9d86e1727015a7d">tempest-ServerShowV257Test-1068269928</nova:project>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="e2368e3e-f504-40e6-a9d3-67df18c845bb"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:16:48 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:16:48 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <system>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <entry name="serial">7eb647bc-75e4-4d38-aaa4-67570c4713f9</entry>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <entry name="uuid">7eb647bc-75e4-4d38-aaa4-67570c4713f9</entry>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     </system>
Oct 14 09:16:48 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:16:48 compute-0 nova_compute[259627]:   <os>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:   </os>
Oct 14 09:16:48 compute-0 nova_compute[259627]:   <features>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:   </features>
Oct 14 09:16:48 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:16:48 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:16:48 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk">
Oct 14 09:16:48 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       </source>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:16:48 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config">
Oct 14 09:16:48 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       </source>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:16:48 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/console.log" append="off"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <video>
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     </video>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:16:48 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:16:48 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:16:48 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:16:48 compute-0 nova_compute[259627]: </domain>
Oct 14 09:16:48 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:16:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 167 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Oct 14 09:16:48 compute-0 nova_compute[259627]: 2025-10-14 09:16:48.077 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:16:48 compute-0 nova_compute[259627]: 2025-10-14 09:16:48.079 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:16:48 compute-0 nova_compute[259627]: 2025-10-14 09:16:48.080 2 INFO nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Using config drive
Oct 14 09:16:48 compute-0 nova_compute[259627]: 2025-10-14 09:16:48.117 2 DEBUG nova.storage.rbd_utils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:48 compute-0 nova_compute[259627]: 2025-10-14 09:16:48.157 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:48 compute-0 nova_compute[259627]: 2025-10-14 09:16:48.202 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'keypairs' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:48 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3184964300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:16:48 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3475672375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:16:48 compute-0 nova_compute[259627]: 2025-10-14 09:16:48.520 2 INFO nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Creating config drive at /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config
Oct 14 09:16:48 compute-0 nova_compute[259627]: 2025-10-14 09:16:48.534 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj44_5_s1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:48 compute-0 nova_compute[259627]: 2025-10-14 09:16:48.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:48 compute-0 nova_compute[259627]: 2025-10-14 09:16:48.708 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj44_5_s1" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:48 compute-0 nova_compute[259627]: 2025-10-14 09:16:48.743 2 DEBUG nova.storage.rbd_utils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:16:48 compute-0 nova_compute[259627]: 2025-10-14 09:16:48.750 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:48 compute-0 nova_compute[259627]: 2025-10-14 09:16:48.934 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:48 compute-0 nova_compute[259627]: 2025-10-14 09:16:48.936 2 INFO nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Deleting local config drive /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config because it was imported into RBD.
Oct 14 09:16:49 compute-0 systemd-machined[214636]: New machine qemu-138-instance-0000006c.
Oct 14 09:16:49 compute-0 systemd[1]: Started Virtual Machine qemu-138-instance-0000006c.
Oct 14 09:16:49 compute-0 ceph-mon[74249]: pgmap v1884: 305 pgs: 305 active+clean; 167 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Oct 14 09:16:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 167 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.291 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 7eb647bc-75e4-4d38-aaa4-67570c4713f9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.293 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433410.2914808, 7eb647bc-75e4-4d38-aaa4-67570c4713f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.293 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] VM Resumed (Lifecycle Event)
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.297 2 DEBUG nova.compute.manager [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.297 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.302 2 INFO nova.virt.libvirt.driver [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance spawned successfully.
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.303 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.327 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.334 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.339 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.339 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.340 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.340 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.341 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.341 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.392 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.393 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433410.2931767, 7eb647bc-75e4-4d38-aaa4-67570c4713f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.394 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] VM Started (Lifecycle Event)
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.425 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.430 2 DEBUG nova.compute.manager [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.431 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.470 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.517 2 DEBUG oslo_concurrency.lockutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.518 2 DEBUG oslo_concurrency.lockutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.519 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 09:16:50 compute-0 nova_compute[259627]: 2025-10-14 09:16:50.602 2 DEBUG oslo_concurrency.lockutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:51 compute-0 ceph-mon[74249]: pgmap v1885: 305 pgs: 305 active+clean; 167 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Oct 14 09:16:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 138 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 210 op/s
Oct 14 09:16:52 compute-0 nova_compute[259627]: 2025-10-14 09:16:52.073 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "7eb647bc-75e4-4d38-aaa4-67570c4713f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:52 compute-0 nova_compute[259627]: 2025-10-14 09:16:52.074 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "7eb647bc-75e4-4d38-aaa4-67570c4713f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:52 compute-0 nova_compute[259627]: 2025-10-14 09:16:52.075 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "7eb647bc-75e4-4d38-aaa4-67570c4713f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:52 compute-0 nova_compute[259627]: 2025-10-14 09:16:52.075 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "7eb647bc-75e4-4d38-aaa4-67570c4713f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:52 compute-0 nova_compute[259627]: 2025-10-14 09:16:52.076 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "7eb647bc-75e4-4d38-aaa4-67570c4713f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:52 compute-0 nova_compute[259627]: 2025-10-14 09:16:52.078 2 INFO nova.compute.manager [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Terminating instance
Oct 14 09:16:52 compute-0 nova_compute[259627]: 2025-10-14 09:16:52.080 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "refresh_cache-7eb647bc-75e4-4d38-aaa4-67570c4713f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:16:52 compute-0 nova_compute[259627]: 2025-10-14 09:16:52.080 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquired lock "refresh_cache-7eb647bc-75e4-4d38-aaa4-67570c4713f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:16:52 compute-0 nova_compute[259627]: 2025-10-14 09:16:52.081 2 DEBUG nova.network.neutron [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:16:52 compute-0 nova_compute[259627]: 2025-10-14 09:16:52.366 2 DEBUG nova.network.neutron [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:16:52 compute-0 ovn_controller[152662]: 2025-10-14T09:16:52Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:a7:bb 10.100.0.9
Oct 14 09:16:52 compute-0 ovn_controller[152662]: 2025-10-14T09:16:52Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:a7:bb 10.100.0.9
Oct 14 09:16:52 compute-0 podman[366407]: 2025-10-14 09:16:52.680409352 +0000 UTC m=+0.084507094 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 09:16:52 compute-0 podman[366408]: 2025-10-14 09:16:52.684474352 +0000 UTC m=+0.088537453 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:16:52 compute-0 nova_compute[259627]: 2025-10-14 09:16:52.767 2 DEBUG nova.network.neutron [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:16:52 compute-0 nova_compute[259627]: 2025-10-14 09:16:52.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:52 compute-0 nova_compute[259627]: 2025-10-14 09:16:52.780 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Releasing lock "refresh_cache-7eb647bc-75e4-4d38-aaa4-67570c4713f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:16:52 compute-0 nova_compute[259627]: 2025-10-14 09:16:52.780 2 DEBUG nova.compute.manager [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:16:52 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Oct 14 09:16:52 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006c.scope: Consumed 3.689s CPU time.
Oct 14 09:16:52 compute-0 systemd-machined[214636]: Machine qemu-138-instance-0000006c terminated.
Oct 14 09:16:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:16:53 compute-0 nova_compute[259627]: 2025-10-14 09:16:53.014 2 INFO nova.virt.libvirt.driver [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance destroyed successfully.
Oct 14 09:16:53 compute-0 nova_compute[259627]: 2025-10-14 09:16:53.015 2 DEBUG nova.objects.instance [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'resources' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:53 compute-0 ceph-mon[74249]: pgmap v1886: 305 pgs: 305 active+clean; 138 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 210 op/s
Oct 14 09:16:53 compute-0 nova_compute[259627]: 2025-10-14 09:16:53.491 2 INFO nova.virt.libvirt.driver [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Deleting instance files /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9_del
Oct 14 09:16:53 compute-0 nova_compute[259627]: 2025-10-14 09:16:53.492 2 INFO nova.virt.libvirt.driver [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Deletion of /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9_del complete
Oct 14 09:16:53 compute-0 nova_compute[259627]: 2025-10-14 09:16:53.542 2 INFO nova.compute.manager [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Took 0.76 seconds to destroy the instance on the hypervisor.
Oct 14 09:16:53 compute-0 nova_compute[259627]: 2025-10-14 09:16:53.543 2 DEBUG oslo.service.loopingcall [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:16:53 compute-0 nova_compute[259627]: 2025-10-14 09:16:53.543 2 DEBUG nova.compute.manager [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:16:53 compute-0 nova_compute[259627]: 2025-10-14 09:16:53.543 2 DEBUG nova.network.neutron [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:16:53 compute-0 nova_compute[259627]: 2025-10-14 09:16:53.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:53 compute-0 nova_compute[259627]: 2025-10-14 09:16:53.965 2 DEBUG nova.network.neutron [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:16:53 compute-0 nova_compute[259627]: 2025-10-14 09:16:53.980 2 DEBUG nova.network.neutron [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:16:53 compute-0 nova_compute[259627]: 2025-10-14 09:16:53.996 2 INFO nova.compute.manager [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Took 0.45 seconds to deallocate network for instance.
Oct 14 09:16:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 138 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 164 op/s
Oct 14 09:16:54 compute-0 nova_compute[259627]: 2025-10-14 09:16:54.050 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:16:54 compute-0 nova_compute[259627]: 2025-10-14 09:16:54.051 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:16:54 compute-0 nova_compute[259627]: 2025-10-14 09:16:54.126 2 DEBUG oslo_concurrency.processutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:16:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:16:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/495703736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:54 compute-0 nova_compute[259627]: 2025-10-14 09:16:54.615 2 DEBUG oslo_concurrency.processutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:16:54 compute-0 nova_compute[259627]: 2025-10-14 09:16:54.621 2 DEBUG nova.compute.provider_tree [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:16:54 compute-0 nova_compute[259627]: 2025-10-14 09:16:54.647 2 DEBUG nova.scheduler.client.report [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:16:54 compute-0 nova_compute[259627]: 2025-10-14 09:16:54.679 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:54 compute-0 nova_compute[259627]: 2025-10-14 09:16:54.719 2 INFO nova.scheduler.client.report [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Deleted allocations for instance 7eb647bc-75e4-4d38-aaa4-67570c4713f9
Oct 14 09:16:54 compute-0 nova_compute[259627]: 2025-10-14 09:16:54.812 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "7eb647bc-75e4-4d38-aaa4-67570c4713f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:16:55 compute-0 ceph-mon[74249]: pgmap v1887: 305 pgs: 305 active+clean; 138 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 164 op/s
Oct 14 09:16:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/495703736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:16:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.3 MiB/s wr, 310 op/s
Oct 14 09:16:56 compute-0 nova_compute[259627]: 2025-10-14 09:16:56.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:57 compute-0 ceph-mon[74249]: pgmap v1888: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.3 MiB/s wr, 310 op/s
Oct 14 09:16:57 compute-0 nova_compute[259627]: 2025-10-14 09:16:57.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:16:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 216 op/s
Oct 14 09:16:58 compute-0 nova_compute[259627]: 2025-10-14 09:16:58.519 2 INFO nova.compute.manager [None req-d2f3b051-6622-4e7b-878e-04209fdc4ea0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Get console output
Oct 14 09:16:58 compute-0 nova_compute[259627]: 2025-10-14 09:16:58.533 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:16:58 compute-0 nova_compute[259627]: 2025-10-14 09:16:58.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:58 compute-0 nova_compute[259627]: 2025-10-14 09:16:58.937 2 DEBUG nova.objects.instance [None req-1e0814ea-774c-492f-8c98-e7642f5bc78f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e0fe601-33b8-44f0-8452-d821825b9176 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:16:58 compute-0 nova_compute[259627]: 2025-10-14 09:16:58.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:58 compute-0 nova_compute[259627]: 2025-10-14 09:16:58.966 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433418.966416, 8e0fe601-33b8-44f0-8452-d821825b9176 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:16:58 compute-0 nova_compute[259627]: 2025-10-14 09:16:58.967 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] VM Paused (Lifecycle Event)
Oct 14 09:16:58 compute-0 nova_compute[259627]: 2025-10-14 09:16:58.992 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:58 compute-0 nova_compute[259627]: 2025-10-14 09:16:58.998 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:16:59 compute-0 nova_compute[259627]: 2025-10-14 09:16:59.020 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 14 09:16:59 compute-0 ceph-mon[74249]: pgmap v1889: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 216 op/s
Oct 14 09:16:59 compute-0 kernel: tapae1bda9b-59 (unregistering): left promiscuous mode
Oct 14 09:16:59 compute-0 NetworkManager[44885]: <info>  [1760433419.6534] device (tapae1bda9b-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:16:59 compute-0 nova_compute[259627]: 2025-10-14 09:16:59.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:59 compute-0 ovn_controller[152662]: 2025-10-14T09:16:59Z|01146|binding|INFO|Releasing lport ae1bda9b-5957-43de-bcf7-97164f008565 from this chassis (sb_readonly=0)
Oct 14 09:16:59 compute-0 ovn_controller[152662]: 2025-10-14T09:16:59Z|01147|binding|INFO|Setting lport ae1bda9b-5957-43de-bcf7-97164f008565 down in Southbound
Oct 14 09:16:59 compute-0 ovn_controller[152662]: 2025-10-14T09:16:59Z|01148|binding|INFO|Removing iface tapae1bda9b-59 ovn-installed in OVS
Oct 14 09:16:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:59.676 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a7:bb 10.100.0.9'], port_security=['fa:16:3e:b7:a7:bb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8e0fe601-33b8-44f0-8452-d821825b9176', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f08b4d4-f298-4d7f-881f-65fc5ca17fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7171c907-a00d-4336-9a3c-18b14ff390bf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ae1bda9b-5957-43de-bcf7-97164f008565) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:16:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:59.681 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ae1bda9b-5957-43de-bcf7-97164f008565 in datapath bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d unbound from our chassis
Oct 14 09:16:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:59.684 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:16:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:59.686 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[876435db-c1f3-4526-a89c-05c8cd12d1ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:16:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:16:59.688 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d namespace which is not needed anymore
Oct 14 09:16:59 compute-0 nova_compute[259627]: 2025-10-14 09:16:59.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:16:59 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Oct 14 09:16:59 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006d.scope: Consumed 13.158s CPU time.
Oct 14 09:16:59 compute-0 systemd-machined[214636]: Machine qemu-137-instance-0000006d terminated.
Oct 14 09:16:59 compute-0 nova_compute[259627]: 2025-10-14 09:16:59.819 2 DEBUG nova.compute.manager [None req-1e0814ea-774c-492f-8c98-e7642f5bc78f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:16:59 compute-0 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366025]: [NOTICE]   (366029) : haproxy version is 2.8.14-c23fe91
Oct 14 09:16:59 compute-0 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366025]: [NOTICE]   (366029) : path to executable is /usr/sbin/haproxy
Oct 14 09:16:59 compute-0 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366025]: [WARNING]  (366029) : Exiting Master process...
Oct 14 09:16:59 compute-0 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366025]: [WARNING]  (366029) : Exiting Master process...
Oct 14 09:16:59 compute-0 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366025]: [ALERT]    (366029) : Current worker (366031) exited with code 143 (Terminated)
Oct 14 09:16:59 compute-0 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366025]: [WARNING]  (366029) : All workers exited. Exiting... (0)
Oct 14 09:16:59 compute-0 systemd[1]: libpod-b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a.scope: Deactivated successfully.
Oct 14 09:16:59 compute-0 podman[366521]: 2025-10-14 09:16:59.887966192 +0000 UTC m=+0.059836125 container died b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 09:16:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a-userdata-shm.mount: Deactivated successfully.
Oct 14 09:16:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e4b02960d8aa9c8b5d05fa99183f05f82538622c06a41271790d657e5edb93b-merged.mount: Deactivated successfully.
Oct 14 09:16:59 compute-0 podman[366521]: 2025-10-14 09:16:59.946736121 +0000 UTC m=+0.118606084 container cleanup b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 09:16:59 compute-0 systemd[1]: libpod-conmon-b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a.scope: Deactivated successfully.
Oct 14 09:17:00 compute-0 podman[366559]: 2025-10-14 09:17:00.020916119 +0000 UTC m=+0.047063851 container remove b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:17:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 216 op/s
Oct 14 09:17:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:00.028 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[10357666-6cb3-43e2-8bcc-408d3ded7743]: (4, ('Tue Oct 14 09:16:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d (b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a)\nb270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a\nTue Oct 14 09:16:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d (b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a)\nb270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:00.030 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2be7e4b0-b45a-4bf6-98e8-a208354b3d24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:00.032 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd7e2e81-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:17:00 compute-0 nova_compute[259627]: 2025-10-14 09:17:00.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:00 compute-0 kernel: tapbd7e2e81-90: left promiscuous mode
Oct 14 09:17:00 compute-0 nova_compute[259627]: 2025-10-14 09:17:00.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:00.078 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dd449731-3a89-48be-80bc-44ba66f1d8de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:00.103 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[03eaec65-863c-4ffb-b179-d38d6e037390]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:00.105 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[599a99ce-edb4-4673-a015-46196fac4f47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:00 compute-0 nova_compute[259627]: 2025-10-14 09:17:00.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:00.124 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac9afd2-0f20-46b6-ac6f-4e9cf0bc73e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717771, 'reachable_time': 23324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366577, 'error': None, 'target': 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:00.127 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:17:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:00.127 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[15f670c6-53c5-4c93-a7e4-270a76895636]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:00 compute-0 systemd[1]: run-netns-ovnmeta\x2dbd7e2e81\x2d9355\x2d4e48\x2dbcd1\x2d3cdb592b9c9d.mount: Deactivated successfully.
Oct 14 09:17:00 compute-0 nova_compute[259627]: 2025-10-14 09:17:00.255 2 DEBUG nova.compute.manager [req-5ed6f09e-5016-49bc-a9c9-518fca2a7b92 req-62d54962-a57c-4b35-a914-ec31d7ba5aff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-vif-unplugged-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:17:00 compute-0 nova_compute[259627]: 2025-10-14 09:17:00.255 2 DEBUG oslo_concurrency.lockutils [req-5ed6f09e-5016-49bc-a9c9-518fca2a7b92 req-62d54962-a57c-4b35-a914-ec31d7ba5aff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:17:00 compute-0 nova_compute[259627]: 2025-10-14 09:17:00.256 2 DEBUG oslo_concurrency.lockutils [req-5ed6f09e-5016-49bc-a9c9-518fca2a7b92 req-62d54962-a57c-4b35-a914-ec31d7ba5aff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:17:00 compute-0 nova_compute[259627]: 2025-10-14 09:17:00.256 2 DEBUG oslo_concurrency.lockutils [req-5ed6f09e-5016-49bc-a9c9-518fca2a7b92 req-62d54962-a57c-4b35-a914-ec31d7ba5aff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:17:00 compute-0 nova_compute[259627]: 2025-10-14 09:17:00.256 2 DEBUG nova.compute.manager [req-5ed6f09e-5016-49bc-a9c9-518fca2a7b92 req-62d54962-a57c-4b35-a914-ec31d7ba5aff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] No waiting events found dispatching network-vif-unplugged-ae1bda9b-5957-43de-bcf7-97164f008565 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:17:00 compute-0 nova_compute[259627]: 2025-10-14 09:17:00.257 2 WARNING nova.compute.manager [req-5ed6f09e-5016-49bc-a9c9-518fca2a7b92 req-62d54962-a57c-4b35-a914-ec31d7ba5aff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received unexpected event network-vif-unplugged-ae1bda9b-5957-43de-bcf7-97164f008565 for instance with vm_state suspended and task_state None.
Oct 14 09:17:00 compute-0 nova_compute[259627]: 2025-10-14 09:17:00.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:01.176 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:17:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:01.177 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:17:01 compute-0 nova_compute[259627]: 2025-10-14 09:17:01.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:01 compute-0 ceph-mon[74249]: pgmap v1890: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 216 op/s
Oct 14 09:17:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 216 op/s
Oct 14 09:17:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:17:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:17:02 compute-0 nova_compute[259627]: 2025-10-14 09:17:02.748 2 DEBUG nova.compute.manager [req-49deeaec-472b-4349-b7db-ecb45ea4735a req-17d7345a-cc2e-48bc-b083-d1cd7f82ac63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:17:02 compute-0 nova_compute[259627]: 2025-10-14 09:17:02.749 2 DEBUG oslo_concurrency.lockutils [req-49deeaec-472b-4349-b7db-ecb45ea4735a req-17d7345a-cc2e-48bc-b083-d1cd7f82ac63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:17:02 compute-0 nova_compute[259627]: 2025-10-14 09:17:02.749 2 DEBUG oslo_concurrency.lockutils [req-49deeaec-472b-4349-b7db-ecb45ea4735a req-17d7345a-cc2e-48bc-b083-d1cd7f82ac63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:17:02 compute-0 nova_compute[259627]: 2025-10-14 09:17:02.750 2 DEBUG oslo_concurrency.lockutils [req-49deeaec-472b-4349-b7db-ecb45ea4735a req-17d7345a-cc2e-48bc-b083-d1cd7f82ac63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:17:02 compute-0 nova_compute[259627]: 2025-10-14 09:17:02.750 2 DEBUG nova.compute.manager [req-49deeaec-472b-4349-b7db-ecb45ea4735a req-17d7345a-cc2e-48bc-b083-d1cd7f82ac63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] No waiting events found dispatching network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:17:02 compute-0 nova_compute[259627]: 2025-10-14 09:17:02.750 2 WARNING nova.compute.manager [req-49deeaec-472b-4349-b7db-ecb45ea4735a req-17d7345a-cc2e-48bc-b083-d1cd7f82ac63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received unexpected event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 for instance with vm_state suspended and task_state None.
Oct 14 09:17:02 compute-0 nova_compute[259627]: 2025-10-14 09:17:02.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:17:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:17:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:17:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:17:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:17:03 compute-0 ceph-mon[74249]: pgmap v1891: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 216 op/s
Oct 14 09:17:03 compute-0 nova_compute[259627]: 2025-10-14 09:17:03.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:03 compute-0 podman[366580]: 2025-10-14 09:17:03.707359187 +0000 UTC m=+0.103935683 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, 
config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:17:03 compute-0 podman[366579]: 2025-10-14 09:17:03.756151809 +0000 UTC m=+0.156437206 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:17:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Oct 14 09:17:04 compute-0 nova_compute[259627]: 2025-10-14 09:17:04.115 2 INFO nova.compute.manager [None req-f24b0e6c-49d2-4efa-ad31-87963eebc7bd e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Get console output
Oct 14 09:17:04 compute-0 nova_compute[259627]: 2025-10-14 09:17:04.424 2 INFO nova.compute.manager [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Resuming
Oct 14 09:17:04 compute-0 nova_compute[259627]: 2025-10-14 09:17:04.425 2 DEBUG nova.objects.instance [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'flavor' on Instance uuid 8e0fe601-33b8-44f0-8452-d821825b9176 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:17:04 compute-0 nova_compute[259627]: 2025-10-14 09:17:04.470 2 DEBUG oslo_concurrency.lockutils [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:17:04 compute-0 nova_compute[259627]: 2025-10-14 09:17:04.471 2 DEBUG oslo_concurrency.lockutils [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquired lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:17:04 compute-0 nova_compute[259627]: 2025-10-14 09:17:04.472 2 DEBUG nova.network.neutron [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:17:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:05.179 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:17:05 compute-0 ceph-mon[74249]: pgmap v1892: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Oct 14 09:17:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.567 2 DEBUG nova.network.neutron [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Updating instance_info_cache with network_info: [{"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.595 2 DEBUG oslo_concurrency.lockutils [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Releasing lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.603 2 DEBUG nova.virt.libvirt.vif [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1974746659',display_name='tempest-TestNetworkAdvancedServerOps-server-1974746659',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1974746659',id=109,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLo//MvhWo3fWQGHjr+urxX4+VcySarwttwotGnxIUKeYHMPJD2ibaE4RsIINhRhtCN4FfH4X/uMkC3+sHENw8r8re1VC10CtKYPlARtw6F00oKxhFKvSmM/1BEvmDnTA==',key_name='tempest-TestNetworkAdvancedServerOps-261373730',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:16:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-ttn68b3m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:16:59Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=8e0fe601-33b8-44f0-8452-d821825b9176,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.604 2 DEBUG nova.network.os_vif_util [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.605 2 DEBUG nova.network.os_vif_util [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.606 2 DEBUG os_vif [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.608 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.608 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.614 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapae1bda9b-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.614 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapae1bda9b-59, col_values=(('external_ids', {'iface-id': 'ae1bda9b-5957-43de-bcf7-97164f008565', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:a7:bb', 'vm-uuid': '8e0fe601-33b8-44f0-8452-d821825b9176'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.615 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.616 2 INFO os_vif [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59')
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.652 2 DEBUG nova.objects.instance [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e0fe601-33b8-44f0-8452-d821825b9176 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:17:06 compute-0 kernel: tapae1bda9b-59: entered promiscuous mode
Oct 14 09:17:06 compute-0 NetworkManager[44885]: <info>  [1760433426.7463] manager: (tapae1bda9b-59): new Tun device (/org/freedesktop/NetworkManager/Devices/462)
Oct 14 09:17:06 compute-0 ovn_controller[152662]: 2025-10-14T09:17:06Z|01149|binding|INFO|Claiming lport ae1bda9b-5957-43de-bcf7-97164f008565 for this chassis.
Oct 14 09:17:06 compute-0 ovn_controller[152662]: 2025-10-14T09:17:06Z|01150|binding|INFO|ae1bda9b-5957-43de-bcf7-97164f008565: Claiming fa:16:3e:b7:a7:bb 10.100.0.9
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:06 compute-0 NetworkManager[44885]: <info>  [1760433426.7655] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/463)
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:06 compute-0 NetworkManager[44885]: <info>  [1760433426.7669] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/464)
Oct 14 09:17:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.769 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a7:bb 10.100.0.9'], port_security=['fa:16:3e:b7:a7:bb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8e0fe601-33b8-44f0-8452-d821825b9176', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '6f08b4d4-f298-4d7f-881f-65fc5ca17fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7171c907-a00d-4336-9a3c-18b14ff390bf, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ae1bda9b-5957-43de-bcf7-97164f008565) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:17:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.771 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ae1bda9b-5957-43de-bcf7-97164f008565 in datapath bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d bound to our chassis
Oct 14 09:17:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.772 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d
Oct 14 09:17:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.789 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[605cc3d1-52ab-453d-a42c-ec2a92cd4dbb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.790 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbd7e2e81-91 in ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:17:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.795 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbd7e2e81-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:17:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.795 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dc197c12-312a-491b-8011-bf734a3cd7d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.797 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1e86f69c-2c4e-46cf-93cb-e4f31e63475f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:06 compute-0 systemd-udevd[366635]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:17:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.811 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4fd08a-c6e9-4756-910b-85454107f66d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:06 compute-0 systemd-machined[214636]: New machine qemu-139-instance-0000006d.
Oct 14 09:17:06 compute-0 NetworkManager[44885]: <info>  [1760433426.8242] device (tapae1bda9b-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:17:06 compute-0 NetworkManager[44885]: <info>  [1760433426.8272] device (tapae1bda9b-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:17:06 compute-0 systemd[1]: Started Virtual Machine qemu-139-instance-0000006d.
Oct 14 09:17:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.844 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[91331368-b36d-4e4d-a2b4-9090202ab31e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.892 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a3435688-acdc-44ec-adf2-99978d600b24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:06 compute-0 systemd-udevd[366638]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:17:06 compute-0 ovn_controller[152662]: 2025-10-14T09:17:06Z|01151|binding|INFO|Setting lport ae1bda9b-5957-43de-bcf7-97164f008565 ovn-installed in OVS
Oct 14 09:17:06 compute-0 ovn_controller[152662]: 2025-10-14T09:17:06Z|01152|binding|INFO|Setting lport ae1bda9b-5957-43de-bcf7-97164f008565 up in Southbound
Oct 14 09:17:06 compute-0 nova_compute[259627]: 2025-10-14 09:17:06.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:06 compute-0 NetworkManager[44885]: <info>  [1760433426.9062] manager: (tapbd7e2e81-90): new Veth device (/org/freedesktop/NetworkManager/Devices/465)
Oct 14 09:17:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.902 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ebbe17ea-f208-4f06-ac84-0b14287330ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.932 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3a0a563b-7a30-46c9-88d3-93ff09149c8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.936 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[14101090-af6b-457e-bf29-e313a83dc026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:06 compute-0 NetworkManager[44885]: <info>  [1760433426.9551] device (tapbd7e2e81-90): carrier: link connected
Oct 14 09:17:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.960 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[955d662f-7691-4d97-a69f-0b4b43bed870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.975 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6b9e4671-dbb0-42bf-9ca4-2775e15b185a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd7e2e81-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:8e:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 331], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720571, 'reachable_time': 37118, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366666, 'error': None, 'target': 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.989 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a57b6e47-a55d-470c-adf2-a8b07185d783]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:8ede'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 720571, 'tstamp': 720571}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366667, 'error': None, 'target': 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.004 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[194336f1-4fa6-4281-8032-a419dacd18e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd7e2e81-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:8e:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 331], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720571, 'reachable_time': 37118, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366668, 'error': None, 'target': 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.032 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.032 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.032 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.033 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a278e057-834d-47df-a997-b87041b13bec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.114 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0c144d-941c-4807-a33b-bb722b5c845b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.117 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd7e2e81-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.117 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.117 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd7e2e81-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:17:07 compute-0 NetworkManager[44885]: <info>  [1760433427.1387] manager: (tapbd7e2e81-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/466)
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:07 compute-0 kernel: tapbd7e2e81-90: entered promiscuous mode
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.143 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbd7e2e81-90, col_values=(('external_ids', {'iface-id': '9945b67f-e925-4ba1-a5f4-5c7846f9de7a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:17:07 compute-0 ovn_controller[152662]: 2025-10-14T09:17:07Z|01153|binding|INFO|Releasing lport 9945b67f-e925-4ba1-a5f4-5c7846f9de7a from this chassis (sb_readonly=0)
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.180 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.182 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e57bf702-6ea4-4afe-bef4-265ce807130f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.183 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d.pid.haproxy
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:17:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.186 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'env', 'PROCESS_TAG=haproxy-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:17:07 compute-0 podman[366742]: 2025-10-14 09:17:07.613913378 +0000 UTC m=+0.062660595 container create 5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 14 09:17:07 compute-0 ceph-mon[74249]: pgmap v1893: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Oct 14 09:17:07 compute-0 systemd[1]: Started libpod-conmon-5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5.scope.
Oct 14 09:17:07 compute-0 podman[366742]: 2025-10-14 09:17:07.579697805 +0000 UTC m=+0.028445112 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:17:07 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:17:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae44bdbb224e73e78d9bd19aa707268d3d53d52647578ee723bd9e3305ab7185/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:17:07 compute-0 podman[366742]: 2025-10-14 09:17:07.70610198 +0000 UTC m=+0.154849217 container init 5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct 14 09:17:07 compute-0 podman[366742]: 2025-10-14 09:17:07.711410211 +0000 UTC m=+0.160157428 container start 5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 14 09:17:07 compute-0 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366757]: [NOTICE]   (366761) : New worker (366763) forked
Oct 14 09:17:07 compute-0 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366757]: [NOTICE]   (366761) : Loading success.
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.893 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 8e0fe601-33b8-44f0-8452-d821825b9176 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.893 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433427.8927, 8e0fe601-33b8-44f0-8452-d821825b9176 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.894 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] VM Started (Lifecycle Event)
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.922 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.949 2 DEBUG nova.compute.manager [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.950 2 DEBUG nova.objects.instance [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e0fe601-33b8-44f0-8452-d821825b9176 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.954 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.970 2 INFO nova.virt.libvirt.driver [-] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Instance running successfully.
Oct 14 09:17:07 compute-0 virtqemud[259351]: argument unsupported: QEMU guest agent is not configured
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.971 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.972 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433427.8967888, 8e0fe601-33b8-44f0-8452-d821825b9176 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.972 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] VM Resumed (Lifecycle Event)
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.974 2 DEBUG nova.virt.libvirt.guest [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.974 2 DEBUG nova.compute.manager [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.993 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:17:07 compute-0 nova_compute[259627]: 2025-10-14 09:17:07.996 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:17:08 compute-0 nova_compute[259627]: 2025-10-14 09:17:08.012 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433413.0108325, 7eb647bc-75e4-4d38-aaa4-67570c4713f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:17:08 compute-0 nova_compute[259627]: 2025-10-14 09:17:08.012 2 INFO nova.compute.manager [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] VM Stopped (Lifecycle Event)
Oct 14 09:17:08 compute-0 nova_compute[259627]: 2025-10-14 09:17:08.020 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 14 09:17:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 09:17:08 compute-0 nova_compute[259627]: 2025-10-14 09:17:08.038 2 DEBUG nova.compute.manager [None req-658878cf-62d7-450b-bba7-c80930442b0d - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:17:08 compute-0 nova_compute[259627]: 2025-10-14 09:17:08.471 2 DEBUG nova.compute.manager [req-1a5d9c37-edf9-4d67-9d10-128bf44a204e req-3cafc744-a9a2-407f-835b-077de53a6f32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:17:08 compute-0 nova_compute[259627]: 2025-10-14 09:17:08.471 2 DEBUG oslo_concurrency.lockutils [req-1a5d9c37-edf9-4d67-9d10-128bf44a204e req-3cafc744-a9a2-407f-835b-077de53a6f32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:17:08 compute-0 nova_compute[259627]: 2025-10-14 09:17:08.472 2 DEBUG oslo_concurrency.lockutils [req-1a5d9c37-edf9-4d67-9d10-128bf44a204e req-3cafc744-a9a2-407f-835b-077de53a6f32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:17:08 compute-0 nova_compute[259627]: 2025-10-14 09:17:08.472 2 DEBUG oslo_concurrency.lockutils [req-1a5d9c37-edf9-4d67-9d10-128bf44a204e req-3cafc744-a9a2-407f-835b-077de53a6f32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:17:08 compute-0 nova_compute[259627]: 2025-10-14 09:17:08.472 2 DEBUG nova.compute.manager [req-1a5d9c37-edf9-4d67-9d10-128bf44a204e req-3cafc744-a9a2-407f-835b-077de53a6f32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] No waiting events found dispatching network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:17:08 compute-0 nova_compute[259627]: 2025-10-14 09:17:08.473 2 WARNING nova.compute.manager [req-1a5d9c37-edf9-4d67-9d10-128bf44a204e req-3cafc744-a9a2-407f-835b-077de53a6f32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received unexpected event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 for instance with vm_state active and task_state None.
Oct 14 09:17:08 compute-0 nova_compute[259627]: 2025-10-14 09:17:08.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:09 compute-0 nova_compute[259627]: 2025-10-14 09:17:09.352 2 INFO nova.compute.manager [None req-563710b9-02bf-4070-a2a2-92e427f1d468 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Get console output
Oct 14 09:17:09 compute-0 nova_compute[259627]: 2025-10-14 09:17:09.360 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:17:09 compute-0 ceph-mon[74249]: pgmap v1894: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 09:17:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 09:17:10 compute-0 nova_compute[259627]: 2025-10-14 09:17:10.806 2 DEBUG nova.compute.manager [req-185e1918-1345-4b02-95f4-2c2a85d698c4 req-d05c3629-d3d1-4b85-ae4e-3fe83778632f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:17:10 compute-0 nova_compute[259627]: 2025-10-14 09:17:10.807 2 DEBUG oslo_concurrency.lockutils [req-185e1918-1345-4b02-95f4-2c2a85d698c4 req-d05c3629-d3d1-4b85-ae4e-3fe83778632f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:17:10 compute-0 nova_compute[259627]: 2025-10-14 09:17:10.808 2 DEBUG oslo_concurrency.lockutils [req-185e1918-1345-4b02-95f4-2c2a85d698c4 req-d05c3629-d3d1-4b85-ae4e-3fe83778632f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:17:10 compute-0 nova_compute[259627]: 2025-10-14 09:17:10.808 2 DEBUG oslo_concurrency.lockutils [req-185e1918-1345-4b02-95f4-2c2a85d698c4 req-d05c3629-d3d1-4b85-ae4e-3fe83778632f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:17:10 compute-0 nova_compute[259627]: 2025-10-14 09:17:10.809 2 DEBUG nova.compute.manager [req-185e1918-1345-4b02-95f4-2c2a85d698c4 req-d05c3629-d3d1-4b85-ae4e-3fe83778632f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] No waiting events found dispatching network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:17:10 compute-0 nova_compute[259627]: 2025-10-14 09:17:10.809 2 WARNING nova.compute.manager [req-185e1918-1345-4b02-95f4-2c2a85d698c4 req-d05c3629-d3d1-4b85-ae4e-3fe83778632f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received unexpected event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 for instance with vm_state active and task_state None.
Oct 14 09:17:10 compute-0 nova_compute[259627]: 2025-10-14 09:17:10.867 2 DEBUG oslo_concurrency.lockutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:17:10 compute-0 nova_compute[259627]: 2025-10-14 09:17:10.867 2 DEBUG oslo_concurrency.lockutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:17:10 compute-0 nova_compute[259627]: 2025-10-14 09:17:10.868 2 DEBUG oslo_concurrency.lockutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:17:10 compute-0 nova_compute[259627]: 2025-10-14 09:17:10.868 2 DEBUG oslo_concurrency.lockutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:17:10 compute-0 nova_compute[259627]: 2025-10-14 09:17:10.869 2 DEBUG oslo_concurrency.lockutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:17:10 compute-0 nova_compute[259627]: 2025-10-14 09:17:10.871 2 INFO nova.compute.manager [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Terminating instance
Oct 14 09:17:10 compute-0 nova_compute[259627]: 2025-10-14 09:17:10.873 2 DEBUG nova.compute.manager [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:17:10 compute-0 kernel: tapae1bda9b-59 (unregistering): left promiscuous mode
Oct 14 09:17:10 compute-0 NetworkManager[44885]: <info>  [1760433430.9229] device (tapae1bda9b-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:17:10 compute-0 nova_compute[259627]: 2025-10-14 09:17:10.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:10 compute-0 ovn_controller[152662]: 2025-10-14T09:17:10Z|01154|binding|INFO|Releasing lport ae1bda9b-5957-43de-bcf7-97164f008565 from this chassis (sb_readonly=0)
Oct 14 09:17:10 compute-0 ovn_controller[152662]: 2025-10-14T09:17:10Z|01155|binding|INFO|Setting lport ae1bda9b-5957-43de-bcf7-97164f008565 down in Southbound
Oct 14 09:17:10 compute-0 ovn_controller[152662]: 2025-10-14T09:17:10Z|01156|binding|INFO|Removing iface tapae1bda9b-59 ovn-installed in OVS
Oct 14 09:17:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:10.940 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a7:bb 10.100.0.9'], port_security=['fa:16:3e:b7:a7:bb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8e0fe601-33b8-44f0-8452-d821825b9176', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6f08b4d4-f298-4d7f-881f-65fc5ca17fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7171c907-a00d-4336-9a3c-18b14ff390bf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ae1bda9b-5957-43de-bcf7-97164f008565) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:17:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:10.941 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ae1bda9b-5957-43de-bcf7-97164f008565 in datapath bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d unbound from our chassis
Oct 14 09:17:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:10.942 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:17:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:10.943 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[069d8ff0-52bf-4066-bbe5-e9b2bf12c207]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:10.943 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d namespace which is not needed anymore
Oct 14 09:17:10 compute-0 nova_compute[259627]: 2025-10-14 09:17:10.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:11 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Oct 14 09:17:11 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006d.scope: Consumed 1.131s CPU time.
Oct 14 09:17:11 compute-0 systemd-machined[214636]: Machine qemu-139-instance-0000006d terminated.
Oct 14 09:17:11 compute-0 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366757]: [NOTICE]   (366761) : haproxy version is 2.8.14-c23fe91
Oct 14 09:17:11 compute-0 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366757]: [NOTICE]   (366761) : path to executable is /usr/sbin/haproxy
Oct 14 09:17:11 compute-0 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366757]: [WARNING]  (366761) : Exiting Master process...
Oct 14 09:17:11 compute-0 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366757]: [ALERT]    (366761) : Current worker (366763) exited with code 143 (Terminated)
Oct 14 09:17:11 compute-0 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366757]: [WARNING]  (366761) : All workers exited. Exiting... (0)
Oct 14 09:17:11 compute-0 systemd[1]: libpod-5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5.scope: Deactivated successfully.
Oct 14 09:17:11 compute-0 podman[366796]: 2025-10-14 09:17:11.092391159 +0000 UTC m=+0.048652520 container died 5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 09:17:11 compute-0 kernel: tapae1bda9b-59: entered promiscuous mode
Oct 14 09:17:11 compute-0 kernel: tapae1bda9b-59 (unregistering): left promiscuous mode
Oct 14 09:17:11 compute-0 NetworkManager[44885]: <info>  [1760433431.1027] manager: (tapae1bda9b-59): new Tun device (/org/freedesktop/NetworkManager/Devices/467)
Oct 14 09:17:11 compute-0 ovn_controller[152662]: 2025-10-14T09:17:11Z|01157|binding|INFO|Claiming lport ae1bda9b-5957-43de-bcf7-97164f008565 for this chassis.
Oct 14 09:17:11 compute-0 ovn_controller[152662]: 2025-10-14T09:17:11Z|01158|binding|INFO|ae1bda9b-5957-43de-bcf7-97164f008565: Claiming fa:16:3e:b7:a7:bb 10.100.0.9
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.116 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a7:bb 10.100.0.9'], port_security=['fa:16:3e:b7:a7:bb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8e0fe601-33b8-44f0-8452-d821825b9176', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6f08b4d4-f298-4d7f-881f-65fc5ca17fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7171c907-a00d-4336-9a3c-18b14ff390bf, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ae1bda9b-5957-43de-bcf7-97164f008565) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.127 2 INFO nova.virt.libvirt.driver [-] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Instance destroyed successfully.
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.128 2 DEBUG nova.objects.instance [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'resources' on Instance uuid 8e0fe601-33b8-44f0-8452-d821825b9176 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:17:11 compute-0 ovn_controller[152662]: 2025-10-14T09:17:11Z|01159|binding|INFO|Setting lport ae1bda9b-5957-43de-bcf7-97164f008565 ovn-installed in OVS
Oct 14 09:17:11 compute-0 ovn_controller[152662]: 2025-10-14T09:17:11Z|01160|binding|INFO|Setting lport ae1bda9b-5957-43de-bcf7-97164f008565 up in Southbound
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:11 compute-0 ovn_controller[152662]: 2025-10-14T09:17:11Z|01161|binding|INFO|Releasing lport ae1bda9b-5957-43de-bcf7-97164f008565 from this chassis (sb_readonly=1)
Oct 14 09:17:11 compute-0 ovn_controller[152662]: 2025-10-14T09:17:11Z|01162|if_status|INFO|Dropped 1 log messages in last 145 seconds (most recently, 145 seconds ago) due to excessive rate
Oct 14 09:17:11 compute-0 ovn_controller[152662]: 2025-10-14T09:17:11Z|01163|if_status|INFO|Not setting lport ae1bda9b-5957-43de-bcf7-97164f008565 down as sb is readonly
Oct 14 09:17:11 compute-0 ovn_controller[152662]: 2025-10-14T09:17:11Z|01164|binding|INFO|Removing iface tapae1bda9b-59 ovn-installed in OVS
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:11 compute-0 ovn_controller[152662]: 2025-10-14T09:17:11Z|01165|binding|INFO|Releasing lport ae1bda9b-5957-43de-bcf7-97164f008565 from this chassis (sb_readonly=0)
Oct 14 09:17:11 compute-0 ovn_controller[152662]: 2025-10-14T09:17:11Z|01166|binding|INFO|Setting lport ae1bda9b-5957-43de-bcf7-97164f008565 down in Southbound
Oct 14 09:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5-userdata-shm.mount: Deactivated successfully.
Oct 14 09:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae44bdbb224e73e78d9bd19aa707268d3d53d52647578ee723bd9e3305ab7185-merged.mount: Deactivated successfully.
Oct 14 09:17:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.142 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a7:bb 10.100.0.9'], port_security=['fa:16:3e:b7:a7:bb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8e0fe601-33b8-44f0-8452-d821825b9176', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6f08b4d4-f298-4d7f-881f-65fc5ca17fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7171c907-a00d-4336-9a3c-18b14ff390bf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ae1bda9b-5957-43de-bcf7-97164f008565) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.144 2 DEBUG nova.virt.libvirt.vif [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1974746659',display_name='tempest-TestNetworkAdvancedServerOps-server-1974746659',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1974746659',id=109,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLo//MvhWo3fWQGHjr+urxX4+VcySarwttwotGnxIUKeYHMPJD2ibaE4RsIINhRhtCN4FfH4X/uMkC3+sHENw8r8re1VC10CtKYPlARtw6F00oKxhFKvSmM/1BEvmDnTA==',key_name='tempest-TestNetworkAdvancedServerOps-261373730',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:16:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-ttn68b3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:17:08Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=8e0fe601-33b8-44f0-8452-d821825b9176,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.144 2 DEBUG nova.network.os_vif_util [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.145 2 DEBUG nova.network.os_vif_util [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.145 2 DEBUG os_vif [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.147 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapae1bda9b-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:11 compute-0 podman[366796]: 2025-10-14 09:17:11.150550942 +0000 UTC m=+0.106812283 container cleanup 5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.151 2 INFO os_vif [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59')
Oct 14 09:17:11 compute-0 systemd[1]: libpod-conmon-5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5.scope: Deactivated successfully.
Oct 14 09:17:11 compute-0 podman[366834]: 2025-10-14 09:17:11.218131418 +0000 UTC m=+0.045210855 container remove 5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 14 09:17:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.225 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8649b1d9-ca4f-4289-a64d-9405daf48fc2]: (4, ('Tue Oct 14 09:17:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d (5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5)\n5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5\nTue Oct 14 09:17:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d (5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5)\n5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.227 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d48ef5-127e-48a0-879c-09e987e71c4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.228 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd7e2e81-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:11 compute-0 kernel: tapbd7e2e81-90: left promiscuous mode
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.233 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ae884c7c-c95d-4bbb-a297-b66ea46a54c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.260 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[01a6a100-a9ba-4f73-8254-a8e372979a06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.261 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[34b04b29-bc60-4961-9a34-1ee82f8f07ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.281 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1d5e9a-3667-4161-93c5-81f4cb239915]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720564, 'reachable_time': 40133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366860, 'error': None, 'target': 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.284 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:17:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.284 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[ac501521-29ee-4a6b-8f7c-fa560b99d2ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.285 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ae1bda9b-5957-43de-bcf7-97164f008565 in datapath bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d unbound from our chassis
Oct 14 09:17:11 compute-0 systemd[1]: run-netns-ovnmeta\x2dbd7e2e81\x2d9355\x2d4e48\x2dbcd1\x2d3cdb592b9c9d.mount: Deactivated successfully.
Oct 14 09:17:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.286 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:17:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.287 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ce9ca503-2b02-4253-86a7-5178bbf8f8cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.287 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ae1bda9b-5957-43de-bcf7-97164f008565 in datapath bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d unbound from our chassis
Oct 14 09:17:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.288 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:17:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.288 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[24d36271-3218-4d24-9d8e-146ff397e000]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.526 2 INFO nova.virt.libvirt.driver [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Deleting instance files /var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176_del
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.527 2 INFO nova.virt.libvirt.driver [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Deletion of /var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176_del complete
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.587 2 INFO nova.compute.manager [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Took 0.71 seconds to destroy the instance on the hypervisor.
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.588 2 DEBUG oslo.service.loopingcall [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.589 2 DEBUG nova.compute.manager [-] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:17:11 compute-0 nova_compute[259627]: 2025-10-14 09:17:11.589 2 DEBUG nova.network.neutron [-] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:17:11 compute-0 ceph-mon[74249]: pgmap v1895: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 09:17:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 5 op/s
Oct 14 09:17:12 compute-0 nova_compute[259627]: 2025-10-14 09:17:12.412 2 DEBUG nova.network.neutron [-] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:17:12 compute-0 nova_compute[259627]: 2025-10-14 09:17:12.447 2 INFO nova.compute.manager [-] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Took 0.86 seconds to deallocate network for instance.
Oct 14 09:17:12 compute-0 nova_compute[259627]: 2025-10-14 09:17:12.514 2 DEBUG oslo_concurrency.lockutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:17:12 compute-0 nova_compute[259627]: 2025-10-14 09:17:12.515 2 DEBUG oslo_concurrency.lockutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:17:12 compute-0 nova_compute[259627]: 2025-10-14 09:17:12.545 2 DEBUG nova.compute.manager [req-b5358482-892b-4666-9e46-b3833f96372f req-a278d2ab-e129-4f59-96f7-72f9537f1fd9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-vif-deleted-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:17:12 compute-0 nova_compute[259627]: 2025-10-14 09:17:12.574 2 DEBUG oslo_concurrency.processutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:17:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:17:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:17:13 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3590132240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:17:13 compute-0 nova_compute[259627]: 2025-10-14 09:17:13.011 2 DEBUG nova.compute.manager [req-cbe2551f-dc8f-4794-b959-86fda7920044 req-8a3afdb4-508e-490f-b45c-a92560c2d6a2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-changed-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:17:13 compute-0 nova_compute[259627]: 2025-10-14 09:17:13.013 2 DEBUG nova.compute.manager [req-cbe2551f-dc8f-4794-b959-86fda7920044 req-8a3afdb4-508e-490f-b45c-a92560c2d6a2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Refreshing instance network info cache due to event network-changed-ae1bda9b-5957-43de-bcf7-97164f008565. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:17:13 compute-0 nova_compute[259627]: 2025-10-14 09:17:13.014 2 DEBUG oslo_concurrency.lockutils [req-cbe2551f-dc8f-4794-b959-86fda7920044 req-8a3afdb4-508e-490f-b45c-a92560c2d6a2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:17:13 compute-0 nova_compute[259627]: 2025-10-14 09:17:13.014 2 DEBUG oslo_concurrency.lockutils [req-cbe2551f-dc8f-4794-b959-86fda7920044 req-8a3afdb4-508e-490f-b45c-a92560c2d6a2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:17:13 compute-0 nova_compute[259627]: 2025-10-14 09:17:13.015 2 DEBUG nova.network.neutron [req-cbe2551f-dc8f-4794-b959-86fda7920044 req-8a3afdb4-508e-490f-b45c-a92560c2d6a2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Refreshing network info cache for port ae1bda9b-5957-43de-bcf7-97164f008565 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:17:13 compute-0 nova_compute[259627]: 2025-10-14 09:17:13.020 2 DEBUG oslo_concurrency.processutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:17:13 compute-0 nova_compute[259627]: 2025-10-14 09:17:13.028 2 DEBUG nova.compute.provider_tree [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:17:13 compute-0 nova_compute[259627]: 2025-10-14 09:17:13.044 2 DEBUG nova.scheduler.client.report [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:17:13 compute-0 nova_compute[259627]: 2025-10-14 09:17:13.068 2 DEBUG oslo_concurrency.lockutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:17:13 compute-0 nova_compute[259627]: 2025-10-14 09:17:13.097 2 INFO nova.scheduler.client.report [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Deleted allocations for instance 8e0fe601-33b8-44f0-8452-d821825b9176
Oct 14 09:17:13 compute-0 nova_compute[259627]: 2025-10-14 09:17:13.179 2 DEBUG oslo_concurrency.lockutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:17:13 compute-0 sudo[366884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:17:13 compute-0 sudo[366884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:13 compute-0 sudo[366884]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:13 compute-0 sudo[366909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:17:13 compute-0 sudo[366909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:13 compute-0 sudo[366909]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:13 compute-0 nova_compute[259627]: 2025-10-14 09:17:13.381 2 DEBUG nova.network.neutron [req-cbe2551f-dc8f-4794-b959-86fda7920044 req-8a3afdb4-508e-490f-b45c-a92560c2d6a2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:17:13 compute-0 sudo[366934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:17:13 compute-0 sudo[366934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:13 compute-0 sudo[366934]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:13 compute-0 sudo[366959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 14 09:17:13 compute-0 sudo[366959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:13 compute-0 ceph-mon[74249]: pgmap v1896: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 5 op/s
Oct 14 09:17:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3590132240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.660610) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433433660658, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1762, "num_deletes": 254, "total_data_size": 2548741, "memory_usage": 2599216, "flush_reason": "Manual Compaction"}
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433433675697, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2498041, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38393, "largest_seqno": 40154, "table_properties": {"data_size": 2490139, "index_size": 4652, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17414, "raw_average_key_size": 20, "raw_value_size": 2473911, "raw_average_value_size": 2903, "num_data_blocks": 206, "num_entries": 852, "num_filter_entries": 852, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760433278, "oldest_key_time": 1760433278, "file_creation_time": 1760433433, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 15137 microseconds, and 9422 cpu microseconds.
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.675755) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2498041 bytes OK
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.675779) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.677979) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.678008) EVENT_LOG_v1 {"time_micros": 1760433433677998, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.678062) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2541097, prev total WAL file size 2541097, number of live WAL files 2.
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.679521) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2439KB)], [86(8355KB)]
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433433679580, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11054559, "oldest_snapshot_seqno": -1}
Oct 14 09:17:13 compute-0 nova_compute[259627]: 2025-10-14 09:17:13.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6458 keys, 9388871 bytes, temperature: kUnknown
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433433734938, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9388871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9344897, "index_size": 26712, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 164219, "raw_average_key_size": 25, "raw_value_size": 9228441, "raw_average_value_size": 1428, "num_data_blocks": 1072, "num_entries": 6458, "num_filter_entries": 6458, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760433433, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.736072) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9388871 bytes
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.737703) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.5 rd, 166.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 8.2 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 6980, records dropped: 522 output_compression: NoCompression
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.737723) EVENT_LOG_v1 {"time_micros": 1760433433737714, "job": 50, "event": "compaction_finished", "compaction_time_micros": 56249, "compaction_time_cpu_micros": 37063, "output_level": 6, "num_output_files": 1, "total_output_size": 9388871, "num_input_records": 6980, "num_output_records": 6458, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433433738335, "job": 50, "event": "table_file_deletion", "file_number": 88}
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433433739643, "job": 50, "event": "table_file_deletion", "file_number": 86}
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.679420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.739729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.739736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.739738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.739740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:17:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.739742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:17:13 compute-0 nova_compute[259627]: 2025-10-14 09:17:13.802 2 DEBUG nova.network.neutron [req-cbe2551f-dc8f-4794-b959-86fda7920044 req-8a3afdb4-508e-490f-b45c-a92560c2d6a2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:17:13 compute-0 nova_compute[259627]: 2025-10-14 09:17:13.823 2 DEBUG oslo_concurrency.lockutils [req-cbe2551f-dc8f-4794-b959-86fda7920044 req-8a3afdb4-508e-490f-b45c-a92560c2d6a2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:17:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Oct 14 09:17:14 compute-0 podman[367057]: 2025-10-14 09:17:14.150521471 +0000 UTC m=+0.088094642 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:17:14 compute-0 podman[367057]: 2025-10-14 09:17:14.284655027 +0000 UTC m=+0.222228178 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 09:17:14 compute-0 sudo[366959]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:17:14 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:17:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:17:14 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:17:15 compute-0 sudo[367212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:17:15 compute-0 sudo[367212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:15 compute-0 sudo[367212]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:15 compute-0 sudo[367237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:17:15 compute-0 sudo[367237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:15 compute-0 sudo[367237]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:15 compute-0 sudo[367262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:17:15 compute-0 sudo[367262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:15 compute-0 sudo[367262]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:15 compute-0 sudo[367287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:17:15 compute-0 sudo[367287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:15 compute-0 sudo[367287]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:15 compute-0 ceph-mon[74249]: pgmap v1897: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Oct 14 09:17:15 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:17:15 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:17:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:17:15 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:17:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:17:15 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:17:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:17:15 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:17:15 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 592e3910-49a1-4c5b-8e0c-8343b4277b7c does not exist
Oct 14 09:17:15 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev d1f7e463-9e65-4140-a16c-6bff4819af3c does not exist
Oct 14 09:17:15 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 6158858e-dfeb-4c25-86b8-540db55ae2f4 does not exist
Oct 14 09:17:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:17:15 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:17:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:17:15 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:17:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:17:15 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:17:15 compute-0 sudo[367344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:17:15 compute-0 sudo[367344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:15 compute-0 sudo[367344]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:15 compute-0 sudo[367369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:17:15 compute-0 sudo[367369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:15 compute-0 sudo[367369]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:15 compute-0 sudo[367394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:17:15 compute-0 sudo[367394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:15 compute-0 sudo[367394]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 14 09:17:16 compute-0 sudo[367419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:17:16 compute-0 sudo[367419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:16 compute-0 nova_compute[259627]: 2025-10-14 09:17:16.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:16 compute-0 nova_compute[259627]: 2025-10-14 09:17:16.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:16 compute-0 podman[367485]: 2025-10-14 09:17:16.50094726 +0000 UTC m=+0.055480618 container create 4dda0388112638f58ecb743b9cdde8167efc093413cd3f2ffadaaa4b4fc33c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_clarke, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 09:17:16 compute-0 nova_compute[259627]: 2025-10-14 09:17:16.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:16 compute-0 systemd[1]: Started libpod-conmon-4dda0388112638f58ecb743b9cdde8167efc093413cd3f2ffadaaa4b4fc33c0e.scope.
Oct 14 09:17:16 compute-0 podman[367485]: 2025-10-14 09:17:16.481131462 +0000 UTC m=+0.035664850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:17:16 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:17:16 compute-0 podman[367485]: 2025-10-14 09:17:16.646317223 +0000 UTC m=+0.200850591 container init 4dda0388112638f58ecb743b9cdde8167efc093413cd3f2ffadaaa4b4fc33c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_clarke, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 09:17:16 compute-0 podman[367485]: 2025-10-14 09:17:16.658074593 +0000 UTC m=+0.212607941 container start 4dda0388112638f58ecb743b9cdde8167efc093413cd3f2ffadaaa4b4fc33c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_clarke, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 09:17:16 compute-0 podman[367485]: 2025-10-14 09:17:16.661486067 +0000 UTC m=+0.216019425 container attach 4dda0388112638f58ecb743b9cdde8167efc093413cd3f2ffadaaa4b4fc33c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_clarke, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:17:16 compute-0 crazy_clarke[367502]: 167 167
Oct 14 09:17:16 compute-0 systemd[1]: libpod-4dda0388112638f58ecb743b9cdde8167efc093413cd3f2ffadaaa4b4fc33c0e.scope: Deactivated successfully.
Oct 14 09:17:16 compute-0 podman[367485]: 2025-10-14 09:17:16.666638564 +0000 UTC m=+0.221171912 container died 4dda0388112638f58ecb743b9cdde8167efc093413cd3f2ffadaaa4b4fc33c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_clarke, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:17:16 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:17:16 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:17:16 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:17:16 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:17:16 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:17:16 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:17:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ca724b0dea3e14b57a3dfe829864c83ead37f4c46acfdf72b0b517b7e3d8a93-merged.mount: Deactivated successfully.
Oct 14 09:17:16 compute-0 podman[367485]: 2025-10-14 09:17:16.706065106 +0000 UTC m=+0.260598454 container remove 4dda0388112638f58ecb743b9cdde8167efc093413cd3f2ffadaaa4b4fc33c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 09:17:16 compute-0 systemd[1]: libpod-conmon-4dda0388112638f58ecb743b9cdde8167efc093413cd3f2ffadaaa4b4fc33c0e.scope: Deactivated successfully.
Oct 14 09:17:16 compute-0 podman[367525]: 2025-10-14 09:17:16.86774337 +0000 UTC m=+0.042690813 container create 07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:17:16 compute-0 systemd[1]: Started libpod-conmon-07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c.scope.
Oct 14 09:17:16 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:17:16 compute-0 podman[367525]: 2025-10-14 09:17:16.84907624 +0000 UTC m=+0.024023703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0634224dc10418d6a2879bc0186cbf74d329c4cf79ce250e2a3539125c5327/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0634224dc10418d6a2879bc0186cbf74d329c4cf79ce250e2a3539125c5327/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0634224dc10418d6a2879bc0186cbf74d329c4cf79ce250e2a3539125c5327/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0634224dc10418d6a2879bc0186cbf74d329c4cf79ce250e2a3539125c5327/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0634224dc10418d6a2879bc0186cbf74d329c4cf79ce250e2a3539125c5327/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:17:16 compute-0 podman[367525]: 2025-10-14 09:17:16.960393044 +0000 UTC m=+0.135340507 container init 07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pascal, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:17:16 compute-0 podman[367525]: 2025-10-14 09:17:16.974339888 +0000 UTC m=+0.149287331 container start 07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pascal, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:17:16 compute-0 podman[367525]: 2025-10-14 09:17:16.977703051 +0000 UTC m=+0.152650514 container attach 07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pascal, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 09:17:17 compute-0 ceph-mon[74249]: pgmap v1898: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 14 09:17:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:17:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 14 09:17:18 compute-0 great_pascal[367542]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:17:18 compute-0 great_pascal[367542]: --> relative data size: 1.0
Oct 14 09:17:18 compute-0 great_pascal[367542]: --> All data devices are unavailable
Oct 14 09:17:18 compute-0 systemd[1]: libpod-07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c.scope: Deactivated successfully.
Oct 14 09:17:18 compute-0 systemd[1]: libpod-07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c.scope: Consumed 1.112s CPU time.
Oct 14 09:17:18 compute-0 podman[367525]: 2025-10-14 09:17:18.121743967 +0000 UTC m=+1.296691460 container died 07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pascal, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:17:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad0634224dc10418d6a2879bc0186cbf74d329c4cf79ce250e2a3539125c5327-merged.mount: Deactivated successfully.
Oct 14 09:17:18 compute-0 podman[367525]: 2025-10-14 09:17:18.205306877 +0000 UTC m=+1.380254360 container remove 07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Oct 14 09:17:18 compute-0 systemd[1]: libpod-conmon-07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c.scope: Deactivated successfully.
Oct 14 09:17:18 compute-0 sudo[367419]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:18 compute-0 sudo[367584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:17:18 compute-0 sudo[367584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:18 compute-0 sudo[367584]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:18 compute-0 sudo[367609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:17:18 compute-0 sudo[367609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:18 compute-0 sudo[367609]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:18 compute-0 sudo[367634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:17:18 compute-0 sudo[367634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:18 compute-0 sudo[367634]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:18 compute-0 sudo[367659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:17:18 compute-0 sudo[367659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:18 compute-0 nova_compute[259627]: 2025-10-14 09:17:18.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:18 compute-0 podman[367725]: 2025-10-14 09:17:18.954968133 +0000 UTC m=+0.067599817 container create d2f3cc1b4b728bda042b60151a6baece370056119ec16c89f8b981152bfb6069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:17:19 compute-0 systemd[1]: Started libpod-conmon-d2f3cc1b4b728bda042b60151a6baece370056119ec16c89f8b981152bfb6069.scope.
Oct 14 09:17:19 compute-0 podman[367725]: 2025-10-14 09:17:18.925458315 +0000 UTC m=+0.038090009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:17:19 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:17:19 compute-0 podman[367725]: 2025-10-14 09:17:19.055160512 +0000 UTC m=+0.167792296 container init d2f3cc1b4b728bda042b60151a6baece370056119ec16c89f8b981152bfb6069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 09:17:19 compute-0 podman[367725]: 2025-10-14 09:17:19.067326462 +0000 UTC m=+0.179958136 container start d2f3cc1b4b728bda042b60151a6baece370056119ec16c89f8b981152bfb6069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 09:17:19 compute-0 podman[367725]: 2025-10-14 09:17:19.07172836 +0000 UTC m=+0.184360095 container attach d2f3cc1b4b728bda042b60151a6baece370056119ec16c89f8b981152bfb6069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_swartz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 09:17:19 compute-0 loving_swartz[367742]: 167 167
Oct 14 09:17:19 compute-0 podman[367725]: 2025-10-14 09:17:19.075159805 +0000 UTC m=+0.187791489 container died d2f3cc1b4b728bda042b60151a6baece370056119ec16c89f8b981152bfb6069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:17:19 compute-0 systemd[1]: libpod-d2f3cc1b4b728bda042b60151a6baece370056119ec16c89f8b981152bfb6069.scope: Deactivated successfully.
Oct 14 09:17:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3989aa692df6b47e94af2cf37197c9f7ef72acabbf9b0c4d3bd3eb8e77cc162d-merged.mount: Deactivated successfully.
Oct 14 09:17:19 compute-0 podman[367725]: 2025-10-14 09:17:19.128584912 +0000 UTC m=+0.241216596 container remove d2f3cc1b4b728bda042b60151a6baece370056119ec16c89f8b981152bfb6069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 09:17:19 compute-0 systemd[1]: libpod-conmon-d2f3cc1b4b728bda042b60151a6baece370056119ec16c89f8b981152bfb6069.scope: Deactivated successfully.
Oct 14 09:17:19 compute-0 podman[367767]: 2025-10-14 09:17:19.388348814 +0000 UTC m=+0.071858482 container create d0d61a219293ab5c5839b91c0bdd3c033386d119943fd1ea6e3ed5dc037c2b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cori, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 14 09:17:19 compute-0 systemd[1]: Started libpod-conmon-d0d61a219293ab5c5839b91c0bdd3c033386d119943fd1ea6e3ed5dc037c2b2f.scope.
Oct 14 09:17:19 compute-0 podman[367767]: 2025-10-14 09:17:19.361570944 +0000 UTC m=+0.045080652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:17:19 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43fe3aa26ce9607aa57b864467d9ffdb35481fe32f3064a0cf45b15929cd66a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43fe3aa26ce9607aa57b864467d9ffdb35481fe32f3064a0cf45b15929cd66a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43fe3aa26ce9607aa57b864467d9ffdb35481fe32f3064a0cf45b15929cd66a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43fe3aa26ce9607aa57b864467d9ffdb35481fe32f3064a0cf45b15929cd66a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:17:19 compute-0 podman[367767]: 2025-10-14 09:17:19.494527221 +0000 UTC m=+0.178036939 container init d0d61a219293ab5c5839b91c0bdd3c033386d119943fd1ea6e3ed5dc037c2b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cori, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:17:19 compute-0 podman[367767]: 2025-10-14 09:17:19.515942149 +0000 UTC m=+0.199451817 container start d0d61a219293ab5c5839b91c0bdd3c033386d119943fd1ea6e3ed5dc037c2b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:17:19 compute-0 podman[367767]: 2025-10-14 09:17:19.520350898 +0000 UTC m=+0.203860606 container attach d0d61a219293ab5c5839b91c0bdd3c033386d119943fd1ea6e3ed5dc037c2b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:17:19 compute-0 ceph-mon[74249]: pgmap v1899: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 14 09:17:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 14 09:17:20 compute-0 objective_cori[367784]: {
Oct 14 09:17:20 compute-0 objective_cori[367784]:     "0": [
Oct 14 09:17:20 compute-0 objective_cori[367784]:         {
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "devices": [
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "/dev/loop3"
Oct 14 09:17:20 compute-0 objective_cori[367784]:             ],
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "lv_name": "ceph_lv0",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "lv_size": "21470642176",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "name": "ceph_lv0",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "tags": {
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.cluster_name": "ceph",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.crush_device_class": "",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.encrypted": "0",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.osd_id": "0",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.type": "block",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.vdo": "0"
Oct 14 09:17:20 compute-0 objective_cori[367784]:             },
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "type": "block",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "vg_name": "ceph_vg0"
Oct 14 09:17:20 compute-0 objective_cori[367784]:         }
Oct 14 09:17:20 compute-0 objective_cori[367784]:     ],
Oct 14 09:17:20 compute-0 objective_cori[367784]:     "1": [
Oct 14 09:17:20 compute-0 objective_cori[367784]:         {
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "devices": [
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "/dev/loop4"
Oct 14 09:17:20 compute-0 objective_cori[367784]:             ],
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "lv_name": "ceph_lv1",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "lv_size": "21470642176",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "name": "ceph_lv1",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "tags": {
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.cluster_name": "ceph",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.crush_device_class": "",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.encrypted": "0",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.osd_id": "1",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.type": "block",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.vdo": "0"
Oct 14 09:17:20 compute-0 objective_cori[367784]:             },
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "type": "block",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "vg_name": "ceph_vg1"
Oct 14 09:17:20 compute-0 objective_cori[367784]:         }
Oct 14 09:17:20 compute-0 objective_cori[367784]:     ],
Oct 14 09:17:20 compute-0 objective_cori[367784]:     "2": [
Oct 14 09:17:20 compute-0 objective_cori[367784]:         {
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "devices": [
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "/dev/loop5"
Oct 14 09:17:20 compute-0 objective_cori[367784]:             ],
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "lv_name": "ceph_lv2",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "lv_size": "21470642176",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "name": "ceph_lv2",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "tags": {
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.cluster_name": "ceph",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.crush_device_class": "",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.encrypted": "0",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.osd_id": "2",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.type": "block",
Oct 14 09:17:20 compute-0 objective_cori[367784]:                 "ceph.vdo": "0"
Oct 14 09:17:20 compute-0 objective_cori[367784]:             },
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "type": "block",
Oct 14 09:17:20 compute-0 objective_cori[367784]:             "vg_name": "ceph_vg2"
Oct 14 09:17:20 compute-0 objective_cori[367784]:         }
Oct 14 09:17:20 compute-0 objective_cori[367784]:     ]
Oct 14 09:17:20 compute-0 objective_cori[367784]: }
Oct 14 09:17:20 compute-0 podman[367767]: 2025-10-14 09:17:20.322142959 +0000 UTC m=+1.005652627 container died d0d61a219293ab5c5839b91c0bdd3c033386d119943fd1ea6e3ed5dc037c2b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 09:17:20 compute-0 systemd[1]: libpod-d0d61a219293ab5c5839b91c0bdd3c033386d119943fd1ea6e3ed5dc037c2b2f.scope: Deactivated successfully.
Oct 14 09:17:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-43fe3aa26ce9607aa57b864467d9ffdb35481fe32f3064a0cf45b15929cd66a5-merged.mount: Deactivated successfully.
Oct 14 09:17:20 compute-0 podman[367767]: 2025-10-14 09:17:20.379333399 +0000 UTC m=+1.062843037 container remove d0d61a219293ab5c5839b91c0bdd3c033386d119943fd1ea6e3ed5dc037c2b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 09:17:20 compute-0 systemd[1]: libpod-conmon-d0d61a219293ab5c5839b91c0bdd3c033386d119943fd1ea6e3ed5dc037c2b2f.scope: Deactivated successfully.
Oct 14 09:17:20 compute-0 sudo[367659]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:20 compute-0 sudo[367805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:17:20 compute-0 sudo[367805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:20 compute-0 sudo[367805]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:20 compute-0 sudo[367830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:17:20 compute-0 sudo[367830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:20 compute-0 sudo[367830]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:20 compute-0 sudo[367855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:17:20 compute-0 sudo[367855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:20 compute-0 sudo[367855]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:20 compute-0 sudo[367880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:17:20 compute-0 sudo[367880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:21 compute-0 nova_compute[259627]: 2025-10-14 09:17:21.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:21 compute-0 podman[367946]: 2025-10-14 09:17:21.257658436 +0000 UTC m=+0.068601661 container create 52aa76c18b626d1340a14a6ce910a217adf340695c8379a55bd0e2388334bb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_germain, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 09:17:21 compute-0 systemd[1]: Started libpod-conmon-52aa76c18b626d1340a14a6ce910a217adf340695c8379a55bd0e2388334bb66.scope.
Oct 14 09:17:21 compute-0 podman[367946]: 2025-10-14 09:17:21.229553054 +0000 UTC m=+0.040496349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:17:21 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:17:21 compute-0 podman[367946]: 2025-10-14 09:17:21.369568395 +0000 UTC m=+0.180511680 container init 52aa76c18b626d1340a14a6ce910a217adf340695c8379a55bd0e2388334bb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:17:21 compute-0 podman[367946]: 2025-10-14 09:17:21.378346111 +0000 UTC m=+0.189289336 container start 52aa76c18b626d1340a14a6ce910a217adf340695c8379a55bd0e2388334bb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:17:21 compute-0 podman[367946]: 2025-10-14 09:17:21.382563735 +0000 UTC m=+0.193507010 container attach 52aa76c18b626d1340a14a6ce910a217adf340695c8379a55bd0e2388334bb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_germain, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:17:21 compute-0 exciting_germain[367963]: 167 167
Oct 14 09:17:21 compute-0 systemd[1]: libpod-52aa76c18b626d1340a14a6ce910a217adf340695c8379a55bd0e2388334bb66.scope: Deactivated successfully.
Oct 14 09:17:21 compute-0 podman[367946]: 2025-10-14 09:17:21.385143809 +0000 UTC m=+0.196087064 container died 52aa76c18b626d1340a14a6ce910a217adf340695c8379a55bd0e2388334bb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_germain, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:17:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b430b634a78da92063c905473125bde27515c177d46e6dd9061fe44f25d8452a-merged.mount: Deactivated successfully.
Oct 14 09:17:21 compute-0 podman[367946]: 2025-10-14 09:17:21.441760014 +0000 UTC m=+0.252703249 container remove 52aa76c18b626d1340a14a6ce910a217adf340695c8379a55bd0e2388334bb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct 14 09:17:21 compute-0 systemd[1]: libpod-conmon-52aa76c18b626d1340a14a6ce910a217adf340695c8379a55bd0e2388334bb66.scope: Deactivated successfully.
Oct 14 09:17:21 compute-0 podman[367989]: 2025-10-14 09:17:21.671522607 +0000 UTC m=+0.068594942 container create 876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 09:17:21 compute-0 systemd[1]: Started libpod-conmon-876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82.scope.
Oct 14 09:17:21 compute-0 podman[367989]: 2025-10-14 09:17:21.644425409 +0000 UTC m=+0.041497794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:17:21 compute-0 ceph-mon[74249]: pgmap v1900: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 14 09:17:21 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5429cd68315d1600534f673076140a445e2e900d910022ef4f865f1aab88e1f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5429cd68315d1600534f673076140a445e2e900d910022ef4f865f1aab88e1f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5429cd68315d1600534f673076140a445e2e900d910022ef4f865f1aab88e1f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5429cd68315d1600534f673076140a445e2e900d910022ef4f865f1aab88e1f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:17:21 compute-0 podman[367989]: 2025-10-14 09:17:21.785141117 +0000 UTC m=+0.182213512 container init 876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 09:17:21 compute-0 podman[367989]: 2025-10-14 09:17:21.796809225 +0000 UTC m=+0.193881550 container start 876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:17:21 compute-0 podman[367989]: 2025-10-14 09:17:21.800977177 +0000 UTC m=+0.198049572 container attach 876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nash, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:17:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 14 09:17:22 compute-0 wonderful_nash[368006]: {
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:         "osd_id": 2,
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:         "type": "bluestore"
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:     },
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:         "osd_id": 1,
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:         "type": "bluestore"
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:     },
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:         "osd_id": 0,
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:         "type": "bluestore"
Oct 14 09:17:22 compute-0 wonderful_nash[368006]:     }
Oct 14 09:17:22 compute-0 wonderful_nash[368006]: }
Oct 14 09:17:22 compute-0 systemd[1]: libpod-876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82.scope: Deactivated successfully.
Oct 14 09:17:22 compute-0 systemd[1]: libpod-876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82.scope: Consumed 1.052s CPU time.
Oct 14 09:17:22 compute-0 podman[367989]: 2025-10-14 09:17:22.841608565 +0000 UTC m=+1.238680900 container died 876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nash, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:17:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-5429cd68315d1600534f673076140a445e2e900d910022ef4f865f1aab88e1f3-merged.mount: Deactivated successfully.
Oct 14 09:17:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:17:22 compute-0 podman[367989]: 2025-10-14 09:17:22.899880011 +0000 UTC m=+1.296952316 container remove 876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nash, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 09:17:22 compute-0 systemd[1]: libpod-conmon-876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82.scope: Deactivated successfully.
Oct 14 09:17:22 compute-0 sudo[367880]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:17:22 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:17:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:17:22 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:17:22 compute-0 podman[368039]: 2025-10-14 09:17:22.988154876 +0000 UTC m=+0.112547944 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 14 09:17:22 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 17e6efa1-b253-42e8-ab80-a0a87e197e43 does not exist
Oct 14 09:17:22 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev e96bf0d9-6511-4f62-b6ae-022e5a7ee6f9 does not exist
Oct 14 09:17:23 compute-0 podman[368042]: 2025-10-14 09:17:23.011729607 +0000 UTC m=+0.136355901 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 09:17:23 compute-0 sudo[368088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:17:23 compute-0 sudo[368088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:23 compute-0 sudo[368088]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:23 compute-0 sudo[368113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:17:23 compute-0 sudo[368113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:17:23 compute-0 sudo[368113]: pam_unix(sudo:session): session closed for user root
Oct 14 09:17:23 compute-0 ceph-mon[74249]: pgmap v1901: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 14 09:17:23 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:17:23 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:17:23 compute-0 nova_compute[259627]: 2025-10-14 09:17:23.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:17:25 compute-0 ceph-mon[74249]: pgmap v1902: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:17:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:17:26 compute-0 nova_compute[259627]: 2025-10-14 09:17:26.127 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433431.1264865, 8e0fe601-33b8-44f0-8452-d821825b9176 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:17:26 compute-0 nova_compute[259627]: 2025-10-14 09:17:26.128 2 INFO nova.compute.manager [-] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] VM Stopped (Lifecycle Event)
Oct 14 09:17:26 compute-0 nova_compute[259627]: 2025-10-14 09:17:26.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:26 compute-0 nova_compute[259627]: 2025-10-14 09:17:26.215 2 DEBUG nova.compute.manager [None req-a6ef2daf-8567-4726-9d53-bc3a20788986 - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:17:27 compute-0 ceph-mon[74249]: pgmap v1903: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:17:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:17:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:17:28 compute-0 nova_compute[259627]: 2025-10-14 09:17:28.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:28 compute-0 ceph-mon[74249]: pgmap v1904: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:17:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:17:30 compute-0 nova_compute[259627]: 2025-10-14 09:17:30.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:17:31 compute-0 ceph-mon[74249]: pgmap v1905: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:17:31 compute-0 nova_compute[259627]: 2025-10-14 09:17:31.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:31 compute-0 nova_compute[259627]: 2025-10-14 09:17:31.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:17:32 compute-0 nova_compute[259627]: 2025-10-14 09:17:32.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:17:32 compute-0 nova_compute[259627]: 2025-10-14 09:17:32.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:17:32 compute-0 nova_compute[259627]: 2025-10-14 09:17:32.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:17:32 compute-0 nova_compute[259627]: 2025-10-14 09:17:32.008 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:17:32 compute-0 nova_compute[259627]: 2025-10-14 09:17:32.009 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:17:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:17:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:17:32 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3264772301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:17:32 compute-0 nova_compute[259627]: 2025-10-14 09:17:32.446 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:17:32 compute-0 nova_compute[259627]: 2025-10-14 09:17:32.663 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:17:32 compute-0 nova_compute[259627]: 2025-10-14 09:17:32.664 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3715MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:17:32 compute-0 nova_compute[259627]: 2025-10-14 09:17:32.664 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:17:32 compute-0 nova_compute[259627]: 2025-10-14 09:17:32.665 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:17:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:17:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:17:32 compute-0 nova_compute[259627]: 2025-10-14 09:17:32.767 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:17:32 compute-0 nova_compute[259627]: 2025-10-14 09:17:32.767 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:17:32 compute-0 nova_compute[259627]: 2025-10-14 09:17:32.787 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:17:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:17:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:17:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:17:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:17:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:17:32
Oct 14 09:17:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:17:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:17:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.mgr', 'images', '.rgw.root']
Oct 14 09:17:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:17:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:17:33 compute-0 ceph-mon[74249]: pgmap v1906: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:17:33 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3264772301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:17:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:17:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:17:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:17:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:17:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:17:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:17:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:17:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:17:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:17:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:17:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:17:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/578243817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:17:33 compute-0 nova_compute[259627]: 2025-10-14 09:17:33.221 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:17:33 compute-0 nova_compute[259627]: 2025-10-14 09:17:33.230 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:17:33 compute-0 nova_compute[259627]: 2025-10-14 09:17:33.245 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:17:33 compute-0 nova_compute[259627]: 2025-10-14 09:17:33.281 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:17:33 compute-0 nova_compute[259627]: 2025-10-14 09:17:33.282 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:17:33 compute-0 nova_compute[259627]: 2025-10-14 09:17:33.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:17:34 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/578243817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:17:34 compute-0 nova_compute[259627]: 2025-10-14 09:17:34.279 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:17:34 compute-0 nova_compute[259627]: 2025-10-14 09:17:34.280 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:17:34 compute-0 nova_compute[259627]: 2025-10-14 09:17:34.280 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:17:34 compute-0 podman[368184]: 2025-10-14 09:17:34.689244486 +0000 UTC m=+0.088385360 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 14 09:17:34 compute-0 podman[368183]: 2025-10-14 09:17:34.736969892 +0000 UTC m=+0.141864758 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 14 09:17:35 compute-0 ceph-mon[74249]: pgmap v1907: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:17:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:17:36 compute-0 nova_compute[259627]: 2025-10-14 09:17:36.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:37 compute-0 ceph-mon[74249]: pgmap v1908: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:17:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:17:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:17:38 compute-0 nova_compute[259627]: 2025-10-14 09:17:38.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:38 compute-0 nova_compute[259627]: 2025-10-14 09:17:38.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:17:38 compute-0 nova_compute[259627]: 2025-10-14 09:17:38.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:17:38 compute-0 nova_compute[259627]: 2025-10-14 09:17:38.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:17:39 compute-0 nova_compute[259627]: 2025-10-14 09:17:38.999 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:17:39 compute-0 nova_compute[259627]: 2025-10-14 09:17:38.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:17:39 compute-0 nova_compute[259627]: 2025-10-14 09:17:39.000 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:17:39 compute-0 ceph-mon[74249]: pgmap v1909: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:17:39 compute-0 nova_compute[259627]: 2025-10-14 09:17:39.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:17:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:17:41 compute-0 nova_compute[259627]: 2025-10-14 09:17:41.051 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "22c6f034-c238-499b-8b07-1c0f5879297e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:17:41 compute-0 nova_compute[259627]: 2025-10-14 09:17:41.051 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:17:41 compute-0 nova_compute[259627]: 2025-10-14 09:17:41.067 2 DEBUG nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:17:41 compute-0 nova_compute[259627]: 2025-10-14 09:17:41.153 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:17:41 compute-0 nova_compute[259627]: 2025-10-14 09:17:41.154 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:17:41 compute-0 ceph-mon[74249]: pgmap v1910: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:17:41 compute-0 nova_compute[259627]: 2025-10-14 09:17:41.164 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:17:41 compute-0 nova_compute[259627]: 2025-10-14 09:17:41.164 2 INFO nova.compute.claims [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:17:41 compute-0 nova_compute[259627]: 2025-10-14 09:17:41.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:41 compute-0 nova_compute[259627]: 2025-10-14 09:17:41.276 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:17:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:17:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1628149342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:17:41 compute-0 nova_compute[259627]: 2025-10-14 09:17:41.795 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:17:41 compute-0 nova_compute[259627]: 2025-10-14 09:17:41.805 2 DEBUG nova.compute.provider_tree [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:17:41 compute-0 nova_compute[259627]: 2025-10-14 09:17:41.829 2 DEBUG nova.scheduler.client.report [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:17:41 compute-0 nova_compute[259627]: 2025-10-14 09:17:41.874 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:17:41 compute-0 nova_compute[259627]: 2025-10-14 09:17:41.876 2 DEBUG nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:17:41 compute-0 nova_compute[259627]: 2025-10-14 09:17:41.955 2 DEBUG nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:17:41 compute-0 nova_compute[259627]: 2025-10-14 09:17:41.956 2 DEBUG nova.network.neutron [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:17:41 compute-0 nova_compute[259627]: 2025-10-14 09:17:41.984 2 INFO nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.004 2 DEBUG nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:17:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.118 2 DEBUG nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.120 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.121 2 INFO nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Creating image(s)
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.156 2 DEBUG nova.storage.rbd_utils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 22c6f034-c238-499b-8b07-1c0f5879297e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:17:42 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1628149342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.196 2 DEBUG nova.storage.rbd_utils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 22c6f034-c238-499b-8b07-1c0f5879297e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.218 2 DEBUG nova.storage.rbd_utils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 22c6f034-c238-499b-8b07-1c0f5879297e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.223 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.274 2 DEBUG nova.policy [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.326 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.327 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.327 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.328 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.349 2 DEBUG nova.storage.rbd_utils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 22c6f034-c238-499b-8b07-1c0f5879297e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.353 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 22c6f034-c238-499b-8b07-1c0f5879297e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.628 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 22c6f034-c238-499b-8b07-1c0f5879297e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.723 2 DEBUG nova.storage.rbd_utils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image 22c6f034-c238-499b-8b07-1c0f5879297e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.836 2 DEBUG nova.objects.instance [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid 22c6f034-c238-499b-8b07-1c0f5879297e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.851 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.852 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Ensure instance console log exists: /var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.852 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.853 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:17:42 compute-0 nova_compute[259627]: 2025-10-14 09:17:42.853 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:17:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:17:43 compute-0 nova_compute[259627]: 2025-10-14 09:17:43.167 2 DEBUG nova.network.neutron [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Successfully created port: 9053aa9d-2747-4b65-9480-c8d5d0c126fc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:17:43 compute-0 ceph-mon[74249]: pgmap v1911: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:17:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:17:43 compute-0 nova_compute[259627]: 2025-10-14 09:17:43.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:17:44 compute-0 nova_compute[259627]: 2025-10-14 09:17:44.672 2 DEBUG nova.network.neutron [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Successfully updated port: 9053aa9d-2747-4b65-9480-c8d5d0c126fc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:17:44 compute-0 nova_compute[259627]: 2025-10-14 09:17:44.686 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:17:44 compute-0 nova_compute[259627]: 2025-10-14 09:17:44.687 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:17:44 compute-0 nova_compute[259627]: 2025-10-14 09:17:44.687 2 DEBUG nova.network.neutron [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:17:44 compute-0 nova_compute[259627]: 2025-10-14 09:17:44.873 2 DEBUG nova.compute.manager [req-f6d6992b-63f2-4833-8dc6-57c9dfd3ef5d req-6e170eed-6d21-4fa8-a8af-1e7744fe29bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received event network-changed-9053aa9d-2747-4b65-9480-c8d5d0c126fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:17:44 compute-0 nova_compute[259627]: 2025-10-14 09:17:44.874 2 DEBUG nova.compute.manager [req-f6d6992b-63f2-4833-8dc6-57c9dfd3ef5d req-6e170eed-6d21-4fa8-a8af-1e7744fe29bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Refreshing instance network info cache due to event network-changed-9053aa9d-2747-4b65-9480-c8d5d0c126fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:17:44 compute-0 nova_compute[259627]: 2025-10-14 09:17:44.875 2 DEBUG oslo_concurrency.lockutils [req-f6d6992b-63f2-4833-8dc6-57c9dfd3ef5d req-6e170eed-6d21-4fa8-a8af-1e7744fe29bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:17:45 compute-0 nova_compute[259627]: 2025-10-14 09:17:45.114 2 DEBUG nova.network.neutron [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:17:45 compute-0 ceph-mon[74249]: pgmap v1912: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:17:45 compute-0 nova_compute[259627]: 2025-10-14 09:17:45.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:17:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.712 2 DEBUG nova.network.neutron [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updating instance_info_cache with network_info: [{"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.730 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.731 2 DEBUG nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Instance network_info: |[{"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.732 2 DEBUG oslo_concurrency.lockutils [req-f6d6992b-63f2-4833-8dc6-57c9dfd3ef5d req-6e170eed-6d21-4fa8-a8af-1e7744fe29bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.732 2 DEBUG nova.network.neutron [req-f6d6992b-63f2-4833-8dc6-57c9dfd3ef5d req-6e170eed-6d21-4fa8-a8af-1e7744fe29bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Refreshing network info cache for port 9053aa9d-2747-4b65-9480-c8d5d0c126fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.736 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Start _get_guest_xml network_info=[{"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.742 2 WARNING nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.747 2 DEBUG nova.virt.libvirt.host [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.748 2 DEBUG nova.virt.libvirt.host [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.759 2 DEBUG nova.virt.libvirt.host [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.759 2 DEBUG nova.virt.libvirt.host [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.760 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.761 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.761 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.762 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.762 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.763 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.763 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.764 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.764 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.765 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.765 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.765 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:17:46 compute-0 nova_compute[259627]: 2025-10-14 09:17:46.770 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:17:47 compute-0 ceph-mon[74249]: pgmap v1913: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:17:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:17:47 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4021480470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.232 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.260 2 DEBUG nova.storage.rbd_utils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 22c6f034-c238-499b-8b07-1c0f5879297e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.264 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:17:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:17:47 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/783090329' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.717 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.718 2 DEBUG nova.virt.libvirt.vif [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-620217933',display_name='tempest-TestNetworkBasicOps-server-620217933',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-620217933',id=110,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFmMenfL1sUYkbBrkES3junvmmfxbYlPSkqdfU7GziKOgK8Al6Uo4AuziFU4SUs4TqWyCWrEHu5DJJsxFuoF8JeKv+GpLFEK8bJLhTgJlxj4kUg8oBgVMIWKSTqwaqC6zA==',key_name='tempest-TestNetworkBasicOps-2091857695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ol26b3n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:17:42Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=22c6f034-c238-499b-8b07-1c0f5879297e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.719 2 DEBUG nova.network.os_vif_util [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.719 2 DEBUG nova.network.os_vif_util [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:68:6d,bridge_name='br-int',has_traffic_filtering=True,id=9053aa9d-2747-4b65-9480-c8d5d0c126fc,network=Network(6ac21096-7818-4990-8868-baedcdcf8f83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9053aa9d-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.720 2 DEBUG nova.objects.instance [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid 22c6f034-c238-499b-8b07-1c0f5879297e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.737 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:17:47 compute-0 nova_compute[259627]:   <uuid>22c6f034-c238-499b-8b07-1c0f5879297e</uuid>
Oct 14 09:17:47 compute-0 nova_compute[259627]:   <name>instance-0000006e</name>
Oct 14 09:17:47 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:17:47 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:17:47 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkBasicOps-server-620217933</nova:name>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:17:46</nova:creationTime>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:17:47 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:17:47 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:17:47 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:17:47 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:17:47 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:17:47 compute-0 nova_compute[259627]:         <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:17:47 compute-0 nova_compute[259627]:         <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:17:47 compute-0 nova_compute[259627]:         <nova:port uuid="9053aa9d-2747-4b65-9480-c8d5d0c126fc">
Oct 14 09:17:47 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:17:47 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:17:47 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <system>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <entry name="serial">22c6f034-c238-499b-8b07-1c0f5879297e</entry>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <entry name="uuid">22c6f034-c238-499b-8b07-1c0f5879297e</entry>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     </system>
Oct 14 09:17:47 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:17:47 compute-0 nova_compute[259627]:   <os>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:   </os>
Oct 14 09:17:47 compute-0 nova_compute[259627]:   <features>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:   </features>
Oct 14 09:17:47 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:17:47 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:17:47 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/22c6f034-c238-499b-8b07-1c0f5879297e_disk">
Oct 14 09:17:47 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       </source>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:17:47 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/22c6f034-c238-499b-8b07-1c0f5879297e_disk.config">
Oct 14 09:17:47 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       </source>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:17:47 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:e4:68:6d"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <target dev="tap9053aa9d-27"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e/console.log" append="off"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <video>
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     </video>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:17:47 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:17:47 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:17:47 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:17:47 compute-0 nova_compute[259627]: </domain>
Oct 14 09:17:47 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.738 2 DEBUG nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Preparing to wait for external event network-vif-plugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.738 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.738 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.739 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.739 2 DEBUG nova.virt.libvirt.vif [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-620217933',display_name='tempest-TestNetworkBasicOps-server-620217933',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-620217933',id=110,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFmMenfL1sUYkbBrkES3junvmmfxbYlPSkqdfU7GziKOgK8Al6Uo4AuziFU4SUs4TqWyCWrEHu5DJJsxFuoF8JeKv+GpLFEK8bJLhTgJlxj4kUg8oBgVMIWKSTqwaqC6zA==',key_name='tempest-TestNetworkBasicOps-2091857695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ol26b3n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:17:42Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=22c6f034-c238-499b-8b07-1c0f5879297e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.740 2 DEBUG nova.network.os_vif_util [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.740 2 DEBUG nova.network.os_vif_util [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:68:6d,bridge_name='br-int',has_traffic_filtering=True,id=9053aa9d-2747-4b65-9480-c8d5d0c126fc,network=Network(6ac21096-7818-4990-8868-baedcdcf8f83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9053aa9d-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.741 2 DEBUG os_vif [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:68:6d,bridge_name='br-int',has_traffic_filtering=True,id=9053aa9d-2747-4b65-9480-c8d5d0c126fc,network=Network(6ac21096-7818-4990-8868-baedcdcf8f83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9053aa9d-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.742 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.742 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.744 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9053aa9d-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.745 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9053aa9d-27, col_values=(('external_ids', {'iface-id': '9053aa9d-2747-4b65-9480-c8d5d0c126fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:68:6d', 'vm-uuid': '22c6f034-c238-499b-8b07-1c0f5879297e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:47 compute-0 NetworkManager[44885]: <info>  [1760433467.7483] manager: (tap9053aa9d-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.756 2 INFO os_vif [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:68:6d,bridge_name='br-int',has_traffic_filtering=True,id=9053aa9d-2747-4b65-9480-c8d5d0c126fc,network=Network(6ac21096-7818-4990-8868-baedcdcf8f83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9053aa9d-27')
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.804 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.805 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.805 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:e4:68:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.805 2 INFO nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Using config drive
Oct 14 09:17:47 compute-0 nova_compute[259627]: 2025-10-14 09:17:47.827 2 DEBUG nova.storage.rbd_utils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 22c6f034-c238-499b-8b07-1c0f5879297e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:17:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:17:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:17:48 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4021480470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:17:48 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/783090329' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:17:48 compute-0 nova_compute[259627]: 2025-10-14 09:17:48.445 2 INFO nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Creating config drive at /var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e/disk.config
Oct 14 09:17:48 compute-0 nova_compute[259627]: 2025-10-14 09:17:48.453 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx4e71hs_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:17:48 compute-0 nova_compute[259627]: 2025-10-14 09:17:48.518 2 DEBUG nova.network.neutron [req-f6d6992b-63f2-4833-8dc6-57c9dfd3ef5d req-6e170eed-6d21-4fa8-a8af-1e7744fe29bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updated VIF entry in instance network info cache for port 9053aa9d-2747-4b65-9480-c8d5d0c126fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:17:48 compute-0 nova_compute[259627]: 2025-10-14 09:17:48.520 2 DEBUG nova.network.neutron [req-f6d6992b-63f2-4833-8dc6-57c9dfd3ef5d req-6e170eed-6d21-4fa8-a8af-1e7744fe29bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updating instance_info_cache with network_info: [{"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:17:48 compute-0 nova_compute[259627]: 2025-10-14 09:17:48.539 2 DEBUG oslo_concurrency.lockutils [req-f6d6992b-63f2-4833-8dc6-57c9dfd3ef5d req-6e170eed-6d21-4fa8-a8af-1e7744fe29bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:17:48 compute-0 nova_compute[259627]: 2025-10-14 09:17:48.630 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx4e71hs_" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:17:48 compute-0 nova_compute[259627]: 2025-10-14 09:17:48.664 2 DEBUG nova.storage.rbd_utils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 22c6f034-c238-499b-8b07-1c0f5879297e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:17:48 compute-0 nova_compute[259627]: 2025-10-14 09:17:48.668 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e/disk.config 22c6f034-c238-499b-8b07-1c0f5879297e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:17:48 compute-0 nova_compute[259627]: 2025-10-14 09:17:48.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:48 compute-0 nova_compute[259627]: 2025-10-14 09:17:48.883 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e/disk.config 22c6f034-c238-499b-8b07-1c0f5879297e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:17:48 compute-0 nova_compute[259627]: 2025-10-14 09:17:48.884 2 INFO nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Deleting local config drive /var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e/disk.config because it was imported into RBD.
Oct 14 09:17:48 compute-0 kernel: tap9053aa9d-27: entered promiscuous mode
Oct 14 09:17:48 compute-0 ovn_controller[152662]: 2025-10-14T09:17:48Z|01167|binding|INFO|Claiming lport 9053aa9d-2747-4b65-9480-c8d5d0c126fc for this chassis.
Oct 14 09:17:48 compute-0 ovn_controller[152662]: 2025-10-14T09:17:48Z|01168|binding|INFO|9053aa9d-2747-4b65-9480-c8d5d0c126fc: Claiming fa:16:3e:e4:68:6d 10.100.0.12
Oct 14 09:17:48 compute-0 nova_compute[259627]: 2025-10-14 09:17:48.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:48 compute-0 NetworkManager[44885]: <info>  [1760433468.9997] manager: (tap9053aa9d-27): new Tun device (/org/freedesktop/NetworkManager/Devices/469)
Oct 14 09:17:49 compute-0 nova_compute[259627]: 2025-10-14 09:17:49.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.023 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:68:6d 10.100.0.12'], port_security=['fa:16:3e:e4:68:6d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '22c6f034-c238-499b-8b07-1c0f5879297e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ac21096-7818-4990-8868-baedcdcf8f83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f54993ea-ba9b-4930-8e05-f68e5e27d0b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8cda2976-3075-437c-b2bd-7362e360206b, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=9053aa9d-2747-4b65-9480-c8d5d0c126fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.026 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 9053aa9d-2747-4b65-9480-c8d5d0c126fc in datapath 6ac21096-7818-4990-8868-baedcdcf8f83 bound to our chassis
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.028 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ac21096-7818-4990-8868-baedcdcf8f83
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.047 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c54d5f1-3de7-4aab-b4d9-111755d5b0cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.048 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6ac21096-71 in ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.052 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6ac21096-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.052 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a90082-02cc-46ec-887e-42daa6a69f6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.054 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[29b97d4f-9ab2-4d0f-8a44-84d6d3336f98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:49 compute-0 systemd-machined[214636]: New machine qemu-140-instance-0000006e.
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.074 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[f45636f4-6a8b-45ac-854b-d5c69b71f505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:49 compute-0 systemd[1]: Started Virtual Machine qemu-140-instance-0000006e.
Oct 14 09:17:49 compute-0 nova_compute[259627]: 2025-10-14 09:17:49.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:49 compute-0 ovn_controller[152662]: 2025-10-14T09:17:49Z|01169|binding|INFO|Setting lport 9053aa9d-2747-4b65-9480-c8d5d0c126fc ovn-installed in OVS
Oct 14 09:17:49 compute-0 ovn_controller[152662]: 2025-10-14T09:17:49Z|01170|binding|INFO|Setting lport 9053aa9d-2747-4b65-9480-c8d5d0c126fc up in Southbound
Oct 14 09:17:49 compute-0 nova_compute[259627]: 2025-10-14 09:17:49.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:49 compute-0 systemd-udevd[368554]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.104 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c13ef9-2935-4ce7-88fb-f3a935cff89a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:49 compute-0 NetworkManager[44885]: <info>  [1760433469.1351] device (tap9053aa9d-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:17:49 compute-0 NetworkManager[44885]: <info>  [1760433469.1372] device (tap9053aa9d-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.156 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[36230e10-b75f-4dde-b549-c609c301a896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.165 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7813270a-5662-46e0-8c6e-9a8f567b81e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:49 compute-0 NetworkManager[44885]: <info>  [1760433469.1667] manager: (tap6ac21096-70): new Veth device (/org/freedesktop/NetworkManager/Devices/470)
Oct 14 09:17:49 compute-0 ceph-mon[74249]: pgmap v1914: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.219 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3dee8b89-d856-4cd3-a2f5-0ec1f1cd6c13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.225 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e9b639-16f8-44b9-8590-453887d25b8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:49 compute-0 NetworkManager[44885]: <info>  [1760433469.2660] device (tap6ac21096-70): carrier: link connected
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.273 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a9592ac3-43bf-43c3-9281-2fde5c853419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.291 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a4408c64-63c7-48fa-b07d-0070d85af943]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ac21096-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:c8:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 334], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724802, 'reachable_time': 41515, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368584, 'error': None, 'target': 'ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.312 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3db162-3624-47bd-aed6-82637939117c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe79:c8fb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724802, 'tstamp': 724802}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368585, 'error': None, 'target': 'ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.331 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3f68c23b-c92d-4f45-a99b-0ecfd2832d01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ac21096-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:c8:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 334], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724802, 'reachable_time': 41515, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368586, 'error': None, 'target': 'ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.379 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2846fc17-ebcc-4e32-8748-7d862192ad5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.457 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[04c6c183-6a5f-474b-b517-ad213094e569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.459 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ac21096-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.459 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.460 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ac21096-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:17:49 compute-0 nova_compute[259627]: 2025-10-14 09:17:49.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:49 compute-0 NetworkManager[44885]: <info>  [1760433469.5097] manager: (tap6ac21096-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/471)
Oct 14 09:17:49 compute-0 kernel: tap6ac21096-70: entered promiscuous mode
Oct 14 09:17:49 compute-0 nova_compute[259627]: 2025-10-14 09:17:49.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.515 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ac21096-70, col_values=(('external_ids', {'iface-id': '437783e2-5dd6-4d07-bfc8-e4c0a4808267'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:17:49 compute-0 nova_compute[259627]: 2025-10-14 09:17:49.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:49 compute-0 ovn_controller[152662]: 2025-10-14T09:17:49Z|01171|binding|INFO|Releasing lport 437783e2-5dd6-4d07-bfc8-e4c0a4808267 from this chassis (sb_readonly=0)
Oct 14 09:17:49 compute-0 nova_compute[259627]: 2025-10-14 09:17:49.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.519 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6ac21096-7818-4990-8868-baedcdcf8f83.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6ac21096-7818-4990-8868-baedcdcf8f83.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.520 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[23b3f1f8-d43b-4718-8e16-20cc6892e760]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.522 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-6ac21096-7818-4990-8868-baedcdcf8f83
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/6ac21096-7818-4990-8868-baedcdcf8f83.pid.haproxy
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 6ac21096-7818-4990-8868-baedcdcf8f83
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:17:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.523 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83', 'env', 'PROCESS_TAG=haproxy-6ac21096-7818-4990-8868-baedcdcf8f83', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6ac21096-7818-4990-8868-baedcdcf8f83.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:17:49 compute-0 nova_compute[259627]: 2025-10-14 09:17:49.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:49 compute-0 nova_compute[259627]: 2025-10-14 09:17:49.714 2 DEBUG nova.compute.manager [req-5a18e091-b5d8-4800-89af-b7e5f28c44b8 req-a364d7be-52c8-462d-ada2-2eaf7b8b2979 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received event network-vif-plugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:17:49 compute-0 nova_compute[259627]: 2025-10-14 09:17:49.722 2 DEBUG oslo_concurrency.lockutils [req-5a18e091-b5d8-4800-89af-b7e5f28c44b8 req-a364d7be-52c8-462d-ada2-2eaf7b8b2979 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:17:49 compute-0 nova_compute[259627]: 2025-10-14 09:17:49.723 2 DEBUG oslo_concurrency.lockutils [req-5a18e091-b5d8-4800-89af-b7e5f28c44b8 req-a364d7be-52c8-462d-ada2-2eaf7b8b2979 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:17:49 compute-0 nova_compute[259627]: 2025-10-14 09:17:49.723 2 DEBUG oslo_concurrency.lockutils [req-5a18e091-b5d8-4800-89af-b7e5f28c44b8 req-a364d7be-52c8-462d-ada2-2eaf7b8b2979 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:17:49 compute-0 nova_compute[259627]: 2025-10-14 09:17:49.723 2 DEBUG nova.compute.manager [req-5a18e091-b5d8-4800-89af-b7e5f28c44b8 req-a364d7be-52c8-462d-ada2-2eaf7b8b2979 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Processing event network-vif-plugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:17:49 compute-0 podman[368661]: 2025-10-14 09:17:49.976708716 +0000 UTC m=+0.088721257 container create 6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:17:50 compute-0 podman[368661]: 2025-10-14 09:17:49.933391259 +0000 UTC m=+0.045403800 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:17:50 compute-0 systemd[1]: Started libpod-conmon-6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd.scope.
Oct 14 09:17:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 14 09:17:50 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:17:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1f2cda8d9a484751f468c0a5f14364c4cc3b93a764e75f80052eecd6130f987/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:17:50 compute-0 podman[368661]: 2025-10-14 09:17:50.079239483 +0000 UTC m=+0.191252034 container init 6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:17:50 compute-0 podman[368661]: 2025-10-14 09:17:50.091478905 +0000 UTC m=+0.203491446 container start 6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 09:17:50 compute-0 neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83[368676]: [NOTICE]   (368680) : New worker (368682) forked
Oct 14 09:17:50 compute-0 neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83[368676]: [NOTICE]   (368680) : Loading success.
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.240 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433470.2389555, 22c6f034-c238-499b-8b07-1c0f5879297e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.241 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] VM Started (Lifecycle Event)
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.244 2 DEBUG nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.247 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.251 2 INFO nova.virt.libvirt.driver [-] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Instance spawned successfully.
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.251 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.271 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.277 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.281 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.282 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.283 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.283 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.284 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.284 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.311 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.311 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433470.2391984, 22c6f034-c238-499b-8b07-1c0f5879297e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.312 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] VM Paused (Lifecycle Event)
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.347 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.350 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433470.2468646, 22c6f034-c238-499b-8b07-1c0f5879297e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.351 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] VM Resumed (Lifecycle Event)
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.362 2 INFO nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Took 8.24 seconds to spawn the instance on the hypervisor.
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.362 2 DEBUG nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.369 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.372 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.401 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.428 2 INFO nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Took 9.31 seconds to build instance.
Oct 14 09:17:50 compute-0 nova_compute[259627]: 2025-10-14 09:17:50.444 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:17:51 compute-0 ceph-mon[74249]: pgmap v1915: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 14 09:17:51 compute-0 nova_compute[259627]: 2025-10-14 09:17:51.914 2 DEBUG nova.compute.manager [req-a0eb4b14-6558-4e5a-936b-31c3b26c17c8 req-b12c4326-dc1b-4ac7-8d0f-82f17d4abd38 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received event network-vif-plugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:17:51 compute-0 nova_compute[259627]: 2025-10-14 09:17:51.915 2 DEBUG oslo_concurrency.lockutils [req-a0eb4b14-6558-4e5a-936b-31c3b26c17c8 req-b12c4326-dc1b-4ac7-8d0f-82f17d4abd38 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:17:51 compute-0 nova_compute[259627]: 2025-10-14 09:17:51.915 2 DEBUG oslo_concurrency.lockutils [req-a0eb4b14-6558-4e5a-936b-31c3b26c17c8 req-b12c4326-dc1b-4ac7-8d0f-82f17d4abd38 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:17:51 compute-0 nova_compute[259627]: 2025-10-14 09:17:51.916 2 DEBUG oslo_concurrency.lockutils [req-a0eb4b14-6558-4e5a-936b-31c3b26c17c8 req-b12c4326-dc1b-4ac7-8d0f-82f17d4abd38 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:17:51 compute-0 nova_compute[259627]: 2025-10-14 09:17:51.916 2 DEBUG nova.compute.manager [req-a0eb4b14-6558-4e5a-936b-31c3b26c17c8 req-b12c4326-dc1b-4ac7-8d0f-82f17d4abd38 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] No waiting events found dispatching network-vif-plugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:17:51 compute-0 nova_compute[259627]: 2025-10-14 09:17:51.916 2 WARNING nova.compute.manager [req-a0eb4b14-6558-4e5a-936b-31c3b26c17c8 req-b12c4326-dc1b-4ac7-8d0f-82f17d4abd38 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received unexpected event network-vif-plugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc for instance with vm_state active and task_state None.
Oct 14 09:17:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 14 09:17:52 compute-0 nova_compute[259627]: 2025-10-14 09:17:52.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:17:53 compute-0 ceph-mon[74249]: pgmap v1916: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 14 09:17:53 compute-0 podman[368691]: 2025-10-14 09:17:53.658688164 +0000 UTC m=+0.074724163 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:17:53 compute-0 podman[368692]: 2025-10-14 09:17:53.721728037 +0000 UTC m=+0.122891829 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 09:17:53 compute-0 nova_compute[259627]: 2025-10-14 09:17:53.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 14 09:17:54 compute-0 nova_compute[259627]: 2025-10-14 09:17:54.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:54 compute-0 NetworkManager[44885]: <info>  [1760433474.1756] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Oct 14 09:17:54 compute-0 NetworkManager[44885]: <info>  [1760433474.1763] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/473)
Oct 14 09:17:54 compute-0 ovn_controller[152662]: 2025-10-14T09:17:54Z|01172|binding|INFO|Releasing lport 437783e2-5dd6-4d07-bfc8-e4c0a4808267 from this chassis (sb_readonly=0)
Oct 14 09:17:54 compute-0 ovn_controller[152662]: 2025-10-14T09:17:54Z|01173|binding|INFO|Releasing lport 437783e2-5dd6-4d07-bfc8-e4c0a4808267 from this chassis (sb_readonly=0)
Oct 14 09:17:54 compute-0 nova_compute[259627]: 2025-10-14 09:17:54.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:54 compute-0 nova_compute[259627]: 2025-10-14 09:17:54.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:54 compute-0 nova_compute[259627]: 2025-10-14 09:17:54.680 2 DEBUG nova.compute.manager [req-fff71192-acbe-443d-8baf-6d1d9f0b47f2 req-a21e2d6f-4328-4915-9ced-0b3baff890ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received event network-changed-9053aa9d-2747-4b65-9480-c8d5d0c126fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:17:54 compute-0 nova_compute[259627]: 2025-10-14 09:17:54.681 2 DEBUG nova.compute.manager [req-fff71192-acbe-443d-8baf-6d1d9f0b47f2 req-a21e2d6f-4328-4915-9ced-0b3baff890ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Refreshing instance network info cache due to event network-changed-9053aa9d-2747-4b65-9480-c8d5d0c126fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:17:54 compute-0 nova_compute[259627]: 2025-10-14 09:17:54.682 2 DEBUG oslo_concurrency.lockutils [req-fff71192-acbe-443d-8baf-6d1d9f0b47f2 req-a21e2d6f-4328-4915-9ced-0b3baff890ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:17:54 compute-0 nova_compute[259627]: 2025-10-14 09:17:54.682 2 DEBUG oslo_concurrency.lockutils [req-fff71192-acbe-443d-8baf-6d1d9f0b47f2 req-a21e2d6f-4328-4915-9ced-0b3baff890ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:17:54 compute-0 nova_compute[259627]: 2025-10-14 09:17:54.683 2 DEBUG nova.network.neutron [req-fff71192-acbe-443d-8baf-6d1d9f0b47f2 req-a21e2d6f-4328-4915-9ced-0b3baff890ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Refreshing network info cache for port 9053aa9d-2747-4b65-9480-c8d5d0c126fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:17:55 compute-0 ceph-mon[74249]: pgmap v1917: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 14 09:17:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:17:56 compute-0 nova_compute[259627]: 2025-10-14 09:17:56.726 2 DEBUG nova.network.neutron [req-fff71192-acbe-443d-8baf-6d1d9f0b47f2 req-a21e2d6f-4328-4915-9ced-0b3baff890ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updated VIF entry in instance network info cache for port 9053aa9d-2747-4b65-9480-c8d5d0c126fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:17:56 compute-0 nova_compute[259627]: 2025-10-14 09:17:56.727 2 DEBUG nova.network.neutron [req-fff71192-acbe-443d-8baf-6d1d9f0b47f2 req-a21e2d6f-4328-4915-9ced-0b3baff890ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updating instance_info_cache with network_info: [{"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:17:56 compute-0 nova_compute[259627]: 2025-10-14 09:17:56.759 2 DEBUG oslo_concurrency.lockutils [req-fff71192-acbe-443d-8baf-6d1d9f0b47f2 req-a21e2d6f-4328-4915-9ced-0b3baff890ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:17:57 compute-0 ceph-mon[74249]: pgmap v1918: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:17:57 compute-0 nova_compute[259627]: 2025-10-14 09:17:57.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:17:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:17:58 compute-0 nova_compute[259627]: 2025-10-14 09:17:58.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:17:58 compute-0 nova_compute[259627]: 2025-10-14 09:17:58.972 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:17:59 compute-0 ceph-mon[74249]: pgmap v1919: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:18:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:18:01 compute-0 ceph-mon[74249]: pgmap v1920: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:18:01 compute-0 ovn_controller[152662]: 2025-10-14T09:18:01Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:68:6d 10.100.0.12
Oct 14 09:18:01 compute-0 ovn_controller[152662]: 2025-10-14T09:18:01Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:68:6d 10.100.0.12
Oct 14 09:18:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 72 op/s
Oct 14 09:18:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:02.037 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:18:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:02.038 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:18:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:02.041 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:18:02 compute-0 nova_compute[259627]: 2025-10-14 09:18:02.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:18:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:18:02 compute-0 nova_compute[259627]: 2025-10-14 09:18:02.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:18:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:18:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:18:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:18:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:18:03 compute-0 ceph-mon[74249]: pgmap v1921: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 72 op/s
Oct 14 09:18:03 compute-0 nova_compute[259627]: 2025-10-14 09:18:03.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 14 09:18:05 compute-0 ceph-mon[74249]: pgmap v1922: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 14 09:18:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:18:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1859659764' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:18:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:18:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1859659764' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:18:05 compute-0 podman[368731]: 2025-10-14 09:18:05.68475457 +0000 UTC m=+0.082234737 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 14 09:18:05 compute-0 podman[368730]: 2025-10-14 09:18:05.705617495 +0000 UTC m=+0.114451042 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 09:18:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 14 09:18:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1859659764' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:18:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1859659764' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:18:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:07.033 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:07.033 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:07.034 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:07 compute-0 ceph-mon[74249]: pgmap v1923: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 14 09:18:07 compute-0 nova_compute[259627]: 2025-10-14 09:18:07.594 2 INFO nova.compute.manager [None req-4fd78700-5112-4910-8122-da67fa1bbd10 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Get console output
Oct 14 09:18:07 compute-0 nova_compute[259627]: 2025-10-14 09:18:07.600 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:18:07 compute-0 nova_compute[259627]: 2025-10-14 09:18:07.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:18:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:18:08 compute-0 nova_compute[259627]: 2025-10-14 09:18:08.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:09 compute-0 ceph-mon[74249]: pgmap v1924: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:18:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 386 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:18:11 compute-0 ceph-mon[74249]: pgmap v1925: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 386 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:18:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 09:18:12 compute-0 nova_compute[259627]: 2025-10-14 09:18:12.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:18:13 compute-0 ceph-mon[74249]: pgmap v1926: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 09:18:13 compute-0 nova_compute[259627]: 2025-10-14 09:18:13.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:18:15 compute-0 ceph-mon[74249]: pgmap v1927: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:18:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 09:18:17 compute-0 ceph-mon[74249]: pgmap v1928: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 09:18:17 compute-0 nova_compute[259627]: 2025-10-14 09:18:17.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:18:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 14 09:18:18 compute-0 nova_compute[259627]: 2025-10-14 09:18:18.750 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "78ff2018-e6dc-4337-8a74-90e5a3963a12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:18 compute-0 nova_compute[259627]: 2025-10-14 09:18:18.750 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:18 compute-0 nova_compute[259627]: 2025-10-14 09:18:18.773 2 DEBUG nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:18:18 compute-0 nova_compute[259627]: 2025-10-14 09:18:18.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:18 compute-0 nova_compute[259627]: 2025-10-14 09:18:18.877 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:18 compute-0 nova_compute[259627]: 2025-10-14 09:18:18.878 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:18 compute-0 nova_compute[259627]: 2025-10-14 09:18:18.887 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:18:18 compute-0 nova_compute[259627]: 2025-10-14 09:18:18.888 2 INFO nova.compute.claims [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.016 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:18:19 compute-0 ceph-mon[74249]: pgmap v1929: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 14 09:18:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:18:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2554790416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.513 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.522 2 DEBUG nova.compute.provider_tree [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.544 2 DEBUG nova.scheduler.client.report [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.571 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.572 2 DEBUG nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.626 2 DEBUG nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.626 2 DEBUG nova.network.neutron [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.649 2 INFO nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.667 2 DEBUG nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.756 2 DEBUG nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.758 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.758 2 INFO nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Creating image(s)
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.788 2 DEBUG nova.storage.rbd_utils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.825 2 DEBUG nova.storage.rbd_utils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.860 2 DEBUG nova.storage.rbd_utils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.865 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.971 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.971 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.972 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:19 compute-0 nova_compute[259627]: 2025-10-14 09:18:19.972 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:20 compute-0 nova_compute[259627]: 2025-10-14 09:18:20.000 2 DEBUG nova.storage.rbd_utils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:18:20 compute-0 nova_compute[259627]: 2025-10-14 09:18:20.004 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:18:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 14 09:18:20 compute-0 nova_compute[259627]: 2025-10-14 09:18:20.110 2 DEBUG nova.policy [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:18:20 compute-0 nova_compute[259627]: 2025-10-14 09:18:20.338 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:18:20 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2554790416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:18:20 compute-0 nova_compute[259627]: 2025-10-14 09:18:20.426 2 DEBUG nova.storage.rbd_utils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:18:20 compute-0 nova_compute[259627]: 2025-10-14 09:18:20.525 2 DEBUG nova.objects.instance [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid 78ff2018-e6dc-4337-8a74-90e5a3963a12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:18:20 compute-0 nova_compute[259627]: 2025-10-14 09:18:20.541 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:18:20 compute-0 nova_compute[259627]: 2025-10-14 09:18:20.541 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Ensure instance console log exists: /var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:18:20 compute-0 nova_compute[259627]: 2025-10-14 09:18:20.542 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:20 compute-0 nova_compute[259627]: 2025-10-14 09:18:20.542 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:20 compute-0 nova_compute[259627]: 2025-10-14 09:18:20.542 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:20 compute-0 nova_compute[259627]: 2025-10-14 09:18:20.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:18:20 compute-0 nova_compute[259627]: 2025-10-14 09:18:20.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 09:18:21 compute-0 ceph-mon[74249]: pgmap v1930: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 14 09:18:21 compute-0 nova_compute[259627]: 2025-10-14 09:18:21.483 2 DEBUG nova.network.neutron [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Successfully created port: da41936a-3d81-49ed-9021-22d56f07b75b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:18:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 142 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 747 KiB/s wr, 3 op/s
Oct 14 09:18:22 compute-0 nova_compute[259627]: 2025-10-14 09:18:22.424 2 DEBUG nova.network.neutron [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Successfully updated port: da41936a-3d81-49ed-9021-22d56f07b75b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:18:22 compute-0 nova_compute[259627]: 2025-10-14 09:18:22.444 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-78ff2018-e6dc-4337-8a74-90e5a3963a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:18:22 compute-0 nova_compute[259627]: 2025-10-14 09:18:22.444 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-78ff2018-e6dc-4337-8a74-90e5a3963a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:18:22 compute-0 nova_compute[259627]: 2025-10-14 09:18:22.445 2 DEBUG nova.network.neutron [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:18:22 compute-0 nova_compute[259627]: 2025-10-14 09:18:22.650 2 DEBUG nova.compute.manager [req-61e0a4f4-7b58-46e8-b5d6-b8273450092e req-47a48a26-1648-4480-8dd3-5f192494cd94 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Received event network-changed-da41936a-3d81-49ed-9021-22d56f07b75b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:18:22 compute-0 nova_compute[259627]: 2025-10-14 09:18:22.650 2 DEBUG nova.compute.manager [req-61e0a4f4-7b58-46e8-b5d6-b8273450092e req-47a48a26-1648-4480-8dd3-5f192494cd94 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Refreshing instance network info cache due to event network-changed-da41936a-3d81-49ed-9021-22d56f07b75b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:18:22 compute-0 nova_compute[259627]: 2025-10-14 09:18:22.651 2 DEBUG oslo_concurrency.lockutils [req-61e0a4f4-7b58-46e8-b5d6-b8273450092e req-47a48a26-1648-4480-8dd3-5f192494cd94 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-78ff2018-e6dc-4337-8a74-90e5a3963a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:18:22 compute-0 nova_compute[259627]: 2025-10-14 09:18:22.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:18:22 compute-0 nova_compute[259627]: 2025-10-14 09:18:22.901 2 DEBUG nova.network.neutron [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:18:23 compute-0 sudo[368965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:18:23 compute-0 sudo[368965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:18:23 compute-0 sudo[368965]: pam_unix(sudo:session): session closed for user root
Oct 14 09:18:23 compute-0 sudo[368990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:18:23 compute-0 sudo[368990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:18:23 compute-0 sudo[368990]: pam_unix(sudo:session): session closed for user root
Oct 14 09:18:23 compute-0 sudo[369015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:18:23 compute-0 sudo[369015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:18:23 compute-0 sudo[369015]: pam_unix(sudo:session): session closed for user root
Oct 14 09:18:23 compute-0 sudo[369040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:18:23 compute-0 sudo[369040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:18:23 compute-0 ceph-mon[74249]: pgmap v1931: 305 pgs: 305 active+clean; 142 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 747 KiB/s wr, 3 op/s
Oct 14 09:18:23 compute-0 nova_compute[259627]: 2025-10-14 09:18:23.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:24 compute-0 sudo[369040]: pam_unix(sudo:session): session closed for user root
Oct 14 09:18:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 142 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 596 B/s rd, 735 KiB/s wr, 2 op/s
Oct 14 09:18:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:18:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:18:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:18:24 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:18:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:18:24 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:18:24 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 3b8bc598-dff9-4cab-9ae9-a5a146209f30 does not exist
Oct 14 09:18:24 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev cdbdb531-5ee3-409e-bb73-d08609da5c5b does not exist
Oct 14 09:18:24 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev b9dd1b44-bda7-4576-b952-e78b34446bc3 does not exist
Oct 14 09:18:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:18:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:18:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:18:24 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:18:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:18:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:18:24 compute-0 sudo[369097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:18:24 compute-0 sudo[369097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:18:24 compute-0 sudo[369097]: pam_unix(sudo:session): session closed for user root
Oct 14 09:18:24 compute-0 sudo[369134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:18:24 compute-0 sudo[369134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:18:24 compute-0 sudo[369134]: pam_unix(sudo:session): session closed for user root
Oct 14 09:18:24 compute-0 podman[369121]: 2025-10-14 09:18:24.316211468 +0000 UTC m=+0.087901508 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:18:24 compute-0 podman[369122]: 2025-10-14 09:18:24.33943574 +0000 UTC m=+0.111510089 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid)
Oct 14 09:18:24 compute-0 sudo[369187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:18:24 compute-0 sudo[369187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:18:24 compute-0 sudo[369187]: pam_unix(sudo:session): session closed for user root
Oct 14 09:18:24 compute-0 sudo[369212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:18:24 compute-0 sudo[369212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:18:24 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:18:24 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:18:24 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:18:24 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:18:24 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:18:24 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.542 2 DEBUG nova.network.neutron [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Updating instance_info_cache with network_info: [{"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.561 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-78ff2018-e6dc-4337-8a74-90e5a3963a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.562 2 DEBUG nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Instance network_info: |[{"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.563 2 DEBUG oslo_concurrency.lockutils [req-61e0a4f4-7b58-46e8-b5d6-b8273450092e req-47a48a26-1648-4480-8dd3-5f192494cd94 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-78ff2018-e6dc-4337-8a74-90e5a3963a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.563 2 DEBUG nova.network.neutron [req-61e0a4f4-7b58-46e8-b5d6-b8273450092e req-47a48a26-1648-4480-8dd3-5f192494cd94 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Refreshing network info cache for port da41936a-3d81-49ed-9021-22d56f07b75b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.569 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Start _get_guest_xml network_info=[{"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.578 2 WARNING nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.586 2 DEBUG nova.virt.libvirt.host [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.586 2 DEBUG nova.virt.libvirt.host [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.598 2 DEBUG nova.virt.libvirt.host [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.598 2 DEBUG nova.virt.libvirt.host [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.599 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.599 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.600 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.600 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.601 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.601 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.602 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.602 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.602 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.603 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.603 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.604 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:18:24 compute-0 nova_compute[259627]: 2025-10-14 09:18:24.607 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:18:24 compute-0 podman[369279]: 2025-10-14 09:18:24.786450188 +0000 UTC m=+0.045998165 container create f20a561100924fa3c167a9a276183d8a200564a8f00fbfa85c85ea76b8fb8de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mendeleev, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct 14 09:18:24 compute-0 systemd[1]: Started libpod-conmon-f20a561100924fa3c167a9a276183d8a200564a8f00fbfa85c85ea76b8fb8de8.scope.
Oct 14 09:18:24 compute-0 podman[369279]: 2025-10-14 09:18:24.763339028 +0000 UTC m=+0.022887015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:18:24 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:18:24 compute-0 podman[369279]: 2025-10-14 09:18:24.893887526 +0000 UTC m=+0.153435593 container init f20a561100924fa3c167a9a276183d8a200564a8f00fbfa85c85ea76b8fb8de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mendeleev, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 14 09:18:24 compute-0 podman[369279]: 2025-10-14 09:18:24.902826476 +0000 UTC m=+0.162374493 container start f20a561100924fa3c167a9a276183d8a200564a8f00fbfa85c85ea76b8fb8de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 09:18:24 compute-0 podman[369279]: 2025-10-14 09:18:24.906191309 +0000 UTC m=+0.165739296 container attach f20a561100924fa3c167a9a276183d8a200564a8f00fbfa85c85ea76b8fb8de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 09:18:24 compute-0 jolly_mendeleev[369312]: 167 167
Oct 14 09:18:24 compute-0 systemd[1]: libpod-f20a561100924fa3c167a9a276183d8a200564a8f00fbfa85c85ea76b8fb8de8.scope: Deactivated successfully.
Oct 14 09:18:24 compute-0 podman[369279]: 2025-10-14 09:18:24.912655378 +0000 UTC m=+0.172203395 container died f20a561100924fa3c167a9a276183d8a200564a8f00fbfa85c85ea76b8fb8de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mendeleev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:18:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-8866373d7df29d742d77df53aaafccc026991a07ad6c1af16b39fa2a36510118-merged.mount: Deactivated successfully.
Oct 14 09:18:24 compute-0 podman[369279]: 2025-10-14 09:18:24.957917734 +0000 UTC m=+0.217465711 container remove f20a561100924fa3c167a9a276183d8a200564a8f00fbfa85c85ea76b8fb8de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mendeleev, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 09:18:24 compute-0 systemd[1]: libpod-conmon-f20a561100924fa3c167a9a276183d8a200564a8f00fbfa85c85ea76b8fb8de8.scope: Deactivated successfully.
Oct 14 09:18:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:18:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1580672828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.099 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.135 2 DEBUG nova.storage.rbd_utils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:18:25 compute-0 podman[369337]: 2025-10-14 09:18:25.140696279 +0000 UTC m=+0.053179552 container create e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.141 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:18:25 compute-0 systemd[1]: Started libpod-conmon-e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0.scope.
Oct 14 09:18:25 compute-0 podman[369337]: 2025-10-14 09:18:25.11683027 +0000 UTC m=+0.029313583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:18:25 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:18:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c49fdc4fd82329d03cb20eb80ffbd77df7bb65f804d3a8a4eb83d57c0c5ed2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:18:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c49fdc4fd82329d03cb20eb80ffbd77df7bb65f804d3a8a4eb83d57c0c5ed2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:18:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c49fdc4fd82329d03cb20eb80ffbd77df7bb65f804d3a8a4eb83d57c0c5ed2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:18:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c49fdc4fd82329d03cb20eb80ffbd77df7bb65f804d3a8a4eb83d57c0c5ed2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:18:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c49fdc4fd82329d03cb20eb80ffbd77df7bb65f804d3a8a4eb83d57c0c5ed2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:18:25 compute-0 podman[369337]: 2025-10-14 09:18:25.252393832 +0000 UTC m=+0.164877215 container init e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 09:18:25 compute-0 podman[369337]: 2025-10-14 09:18:25.273407999 +0000 UTC m=+0.185891292 container start e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 09:18:25 compute-0 podman[369337]: 2025-10-14 09:18:25.278535086 +0000 UTC m=+0.191018449 container attach e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 09:18:25 compute-0 sshd-session[368775]: Invalid user a from 188.150.249.96 port 52352
Oct 14 09:18:25 compute-0 ceph-mon[74249]: pgmap v1932: 305 pgs: 305 active+clean; 142 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 596 B/s rd, 735 KiB/s wr, 2 op/s
Oct 14 09:18:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1580672828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:18:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:18:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2381779128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.605 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.607 2 DEBUG nova.virt.libvirt.vif [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:18:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-656534791',display_name='tempest-TestNetworkBasicOps-server-656534791',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-656534791',id=111,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqWOfZtqt05D/ybUIudjZC2rvrYHVQund30LctGyNDPNNhzNoX0uP/AuNuya2hh573jsTHFAB6yaR0srnVECa+MfsVPVgRLFSopv6Xf9JyA1XlxHXKy8htlY2LEMZWHEA==',key_name='tempest-TestNetworkBasicOps-500375878',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-qnjtb5gx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:18:19Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=78ff2018-e6dc-4337-8a74-90e5a3963a12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.607 2 DEBUG nova.network.os_vif_util [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.608 2 DEBUG nova.network.os_vif_util [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:0a:a6,bridge_name='br-int',has_traffic_filtering=True,id=da41936a-3d81-49ed-9021-22d56f07b75b,network=Network(f71257d1-6873-4759-898a-ce32e06e2fe5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41936a-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.609 2 DEBUG nova.objects.instance [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid 78ff2018-e6dc-4337-8a74-90e5a3963a12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.634 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:18:25 compute-0 nova_compute[259627]:   <uuid>78ff2018-e6dc-4337-8a74-90e5a3963a12</uuid>
Oct 14 09:18:25 compute-0 nova_compute[259627]:   <name>instance-0000006f</name>
Oct 14 09:18:25 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:18:25 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:18:25 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkBasicOps-server-656534791</nova:name>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:18:24</nova:creationTime>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:18:25 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:18:25 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:18:25 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:18:25 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:18:25 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:18:25 compute-0 nova_compute[259627]:         <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:18:25 compute-0 nova_compute[259627]:         <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:18:25 compute-0 nova_compute[259627]:         <nova:port uuid="da41936a-3d81-49ed-9021-22d56f07b75b">
Oct 14 09:18:25 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:18:25 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:18:25 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <system>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <entry name="serial">78ff2018-e6dc-4337-8a74-90e5a3963a12</entry>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <entry name="uuid">78ff2018-e6dc-4337-8a74-90e5a3963a12</entry>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     </system>
Oct 14 09:18:25 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:18:25 compute-0 nova_compute[259627]:   <os>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:   </os>
Oct 14 09:18:25 compute-0 nova_compute[259627]:   <features>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:   </features>
Oct 14 09:18:25 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:18:25 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:18:25 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/78ff2018-e6dc-4337-8a74-90e5a3963a12_disk">
Oct 14 09:18:25 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       </source>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:18:25 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/78ff2018-e6dc-4337-8a74-90e5a3963a12_disk.config">
Oct 14 09:18:25 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       </source>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:18:25 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:44:0a:a6"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <target dev="tapda41936a-3d"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12/console.log" append="off"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <video>
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     </video>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:18:25 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:18:25 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:18:25 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:18:25 compute-0 nova_compute[259627]: </domain>
Oct 14 09:18:25 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.635 2 DEBUG nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Preparing to wait for external event network-vif-plugged-da41936a-3d81-49ed-9021-22d56f07b75b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.635 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.635 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.636 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.636 2 DEBUG nova.virt.libvirt.vif [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:18:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-656534791',display_name='tempest-TestNetworkBasicOps-server-656534791',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-656534791',id=111,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqWOfZtqt05D/ybUIudjZC2rvrYHVQund30LctGyNDPNNhzNoX0uP/AuNuya2hh573jsTHFAB6yaR0srnVECa+MfsVPVgRLFSopv6Xf9JyA1XlxHXKy8htlY2LEMZWHEA==',key_name='tempest-TestNetworkBasicOps-500375878',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-qnjtb5gx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:18:19Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=78ff2018-e6dc-4337-8a74-90e5a3963a12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.637 2 DEBUG nova.network.os_vif_util [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.637 2 DEBUG nova.network.os_vif_util [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:0a:a6,bridge_name='br-int',has_traffic_filtering=True,id=da41936a-3d81-49ed-9021-22d56f07b75b,network=Network(f71257d1-6873-4759-898a-ce32e06e2fe5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41936a-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.638 2 DEBUG os_vif [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:0a:a6,bridge_name='br-int',has_traffic_filtering=True,id=da41936a-3d81-49ed-9021-22d56f07b75b,network=Network(f71257d1-6873-4759-898a-ce32e06e2fe5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41936a-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.642 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.648 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda41936a-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.649 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda41936a-3d, col_values=(('external_ids', {'iface-id': 'da41936a-3d81-49ed-9021-22d56f07b75b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:0a:a6', 'vm-uuid': '78ff2018-e6dc-4337-8a74-90e5a3963a12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:18:25 compute-0 NetworkManager[44885]: <info>  [1760433505.6528] manager: (tapda41936a-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/474)
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.663 2 INFO os_vif [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:0a:a6,bridge_name='br-int',has_traffic_filtering=True,id=da41936a-3d81-49ed-9021-22d56f07b75b,network=Network(f71257d1-6873-4759-898a-ce32e06e2fe5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41936a-3d')
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.725 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.725 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.725 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:44:0a:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.726 2 INFO nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Using config drive
Oct 14 09:18:25 compute-0 nova_compute[259627]: 2025-10-14 09:18:25.746 2 DEBUG nova.storage.rbd_utils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:18:25 compute-0 sshd-session[368775]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:18:25 compute-0 sshd-session[368775]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96
Oct 14 09:18:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:18:26 compute-0 nova_compute[259627]: 2025-10-14 09:18:26.095 2 DEBUG nova.network.neutron [req-61e0a4f4-7b58-46e8-b5d6-b8273450092e req-47a48a26-1648-4480-8dd3-5f192494cd94 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Updated VIF entry in instance network info cache for port da41936a-3d81-49ed-9021-22d56f07b75b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:18:26 compute-0 nova_compute[259627]: 2025-10-14 09:18:26.096 2 DEBUG nova.network.neutron [req-61e0a4f4-7b58-46e8-b5d6-b8273450092e req-47a48a26-1648-4480-8dd3-5f192494cd94 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Updating instance_info_cache with network_info: [{"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:18:26 compute-0 nova_compute[259627]: 2025-10-14 09:18:26.120 2 DEBUG oslo_concurrency.lockutils [req-61e0a4f4-7b58-46e8-b5d6-b8273450092e req-47a48a26-1648-4480-8dd3-5f192494cd94 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-78ff2018-e6dc-4337-8a74-90e5a3963a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:18:26 compute-0 nova_compute[259627]: 2025-10-14 09:18:26.236 2 INFO nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Creating config drive at /var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12/disk.config
Oct 14 09:18:26 compute-0 nova_compute[259627]: 2025-10-14 09:18:26.245 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp412iiwh8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:18:26 compute-0 nova_compute[259627]: 2025-10-14 09:18:26.411 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp412iiwh8" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:18:26 compute-0 jolly_keldysh[369375]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:18:26 compute-0 jolly_keldysh[369375]: --> relative data size: 1.0
Oct 14 09:18:26 compute-0 jolly_keldysh[369375]: --> All data devices are unavailable
Oct 14 09:18:26 compute-0 nova_compute[259627]: 2025-10-14 09:18:26.444 2 DEBUG nova.storage.rbd_utils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:18:26 compute-0 nova_compute[259627]: 2025-10-14 09:18:26.450 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12/disk.config 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:18:26 compute-0 systemd[1]: libpod-e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0.scope: Deactivated successfully.
Oct 14 09:18:26 compute-0 systemd[1]: libpod-e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0.scope: Consumed 1.112s CPU time.
Oct 14 09:18:26 compute-0 podman[369337]: 2025-10-14 09:18:26.455897463 +0000 UTC m=+1.368380756 container died e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:18:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-31c49fdc4fd82329d03cb20eb80ffbd77df7bb65f804d3a8a4eb83d57c0c5ed2-merged.mount: Deactivated successfully.
Oct 14 09:18:26 compute-0 podman[369337]: 2025-10-14 09:18:26.517214414 +0000 UTC m=+1.429697697 container remove e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 09:18:26 compute-0 systemd[1]: libpod-conmon-e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0.scope: Deactivated successfully.
Oct 14 09:18:26 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2381779128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:18:26 compute-0 sudo[369212]: pam_unix(sudo:session): session closed for user root
Oct 14 09:18:26 compute-0 sudo[369496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:18:26 compute-0 sudo[369496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:18:26 compute-0 sudo[369496]: pam_unix(sudo:session): session closed for user root
Oct 14 09:18:26 compute-0 nova_compute[259627]: 2025-10-14 09:18:26.636 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12/disk.config 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:18:26 compute-0 nova_compute[259627]: 2025-10-14 09:18:26.637 2 INFO nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Deleting local config drive /var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12/disk.config because it was imported into RBD.
Oct 14 09:18:26 compute-0 NetworkManager[44885]: <info>  [1760433506.6843] manager: (tapda41936a-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/475)
Oct 14 09:18:26 compute-0 kernel: tapda41936a-3d: entered promiscuous mode
Oct 14 09:18:26 compute-0 nova_compute[259627]: 2025-10-14 09:18:26.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:26 compute-0 ovn_controller[152662]: 2025-10-14T09:18:26Z|01174|binding|INFO|Claiming lport da41936a-3d81-49ed-9021-22d56f07b75b for this chassis.
Oct 14 09:18:26 compute-0 ovn_controller[152662]: 2025-10-14T09:18:26Z|01175|binding|INFO|da41936a-3d81-49ed-9021-22d56f07b75b: Claiming fa:16:3e:44:0a:a6 10.100.0.23
Oct 14 09:18:26 compute-0 nova_compute[259627]: 2025-10-14 09:18:26.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:26 compute-0 sudo[369524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:18:26 compute-0 sudo[369524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.703 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:0a:a6 10.100.0.23'], port_security=['fa:16:3e:44:0a:a6 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': '78ff2018-e6dc-4337-8a74-90e5a3963a12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f71257d1-6873-4759-898a-ce32e06e2fe5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e9d71fdc-7fb6-4c43-8934-1eec5337be2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cdc378b-1288-436b-9821-34981ec9f7fe, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=da41936a-3d81-49ed-9021-22d56f07b75b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.704 162547 INFO neutron.agent.ovn.metadata.agent [-] Port da41936a-3d81-49ed-9021-22d56f07b75b in datapath f71257d1-6873-4759-898a-ce32e06e2fe5 bound to our chassis
Oct 14 09:18:26 compute-0 sudo[369524]: pam_unix(sudo:session): session closed for user root
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.705 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f71257d1-6873-4759-898a-ce32e06e2fe5
Oct 14 09:18:26 compute-0 systemd-udevd[369561]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.717 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[15c95710-9154-44dd-953e-ec19ff0968ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.719 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf71257d1-61 in ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.721 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf71257d1-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.722 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[57ec8a8d-a322-4834-b68b-7f2cbb903eec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:26 compute-0 systemd-machined[214636]: New machine qemu-141-instance-0000006f.
Oct 14 09:18:26 compute-0 NetworkManager[44885]: <info>  [1760433506.7302] device (tapda41936a-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:18:26 compute-0 NetworkManager[44885]: <info>  [1760433506.7316] device (tapda41936a-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.726 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[78ef373a-bde3-4923-a43a-5fceb09ca2b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:26 compute-0 nova_compute[259627]: 2025-10-14 09:18:26.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:26 compute-0 systemd[1]: Started Virtual Machine qemu-141-instance-0000006f.
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.751 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[7999880c-9387-4786-b702-03d71e80fb68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:26 compute-0 ovn_controller[152662]: 2025-10-14T09:18:26Z|01176|binding|INFO|Setting lport da41936a-3d81-49ed-9021-22d56f07b75b ovn-installed in OVS
Oct 14 09:18:26 compute-0 ovn_controller[152662]: 2025-10-14T09:18:26Z|01177|binding|INFO|Setting lport da41936a-3d81-49ed-9021-22d56f07b75b up in Southbound
Oct 14 09:18:26 compute-0 nova_compute[259627]: 2025-10-14 09:18:26.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.767 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e21b4a3b-6956-4170-80f9-de59d9136135]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:26 compute-0 sudo[369564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:18:26 compute-0 sudo[369564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:18:26 compute-0 sudo[369564]: pam_unix(sudo:session): session closed for user root
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.797 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0083f93e-4d9d-4c0c-932e-9e271925b5b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:26 compute-0 NetworkManager[44885]: <info>  [1760433506.8043] manager: (tapf71257d1-60): new Veth device (/org/freedesktop/NetworkManager/Devices/476)
Oct 14 09:18:26 compute-0 systemd-udevd[369572]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.803 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[af80acdf-b9c9-44e3-b6f0-dee65997d7a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.839 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8844502f-943a-4145-bc3b-f5d9393e45bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:26 compute-0 sudo[369598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:18:26 compute-0 sudo[369598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.842 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a53df3-0a8e-4a8b-972e-9e8a73e1a4e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:26 compute-0 NetworkManager[44885]: <info>  [1760433506.8619] device (tapf71257d1-60): carrier: link connected
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.865 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8f6a6352-23ca-45ab-95e3-64fc59ea18ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.880 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[accdf4ad-dbb7-4ba9-9075-17ca0d548636]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf71257d1-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fb:52:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728562, 'reachable_time': 37622, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369645, 'error': None, 'target': 'ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.893 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[32ea3e58-178f-4c18-afb1-af6c77489831]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefb:5253'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 728562, 'tstamp': 728562}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369646, 'error': None, 'target': 'ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.907 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[03069423-17f3-4a4b-bc96-1857b1cc6977]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf71257d1-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fb:52:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728562, 'reachable_time': 37622, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 369647, 'error': None, 'target': 'ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.934 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[19003c18-31a3-42ba-bbb4-aceb25d0366f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.984 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ff74600b-cf52-4d75-9d38-ff42ff0e93d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.985 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf71257d1-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.986 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.986 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf71257d1-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:18:26 compute-0 nova_compute[259627]: 2025-10-14 09:18:26.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:26 compute-0 kernel: tapf71257d1-60: entered promiscuous mode
Oct 14 09:18:26 compute-0 NetworkManager[44885]: <info>  [1760433506.9902] manager: (tapf71257d1-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/477)
Oct 14 09:18:26 compute-0 nova_compute[259627]: 2025-10-14 09:18:26.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.999 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf71257d1-60, col_values=(('external_ids', {'iface-id': '1080cece-d27b-4391-82d1-6b7871bf79ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:18:27 compute-0 nova_compute[259627]: 2025-10-14 09:18:27.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:27 compute-0 ovn_controller[152662]: 2025-10-14T09:18:27Z|01178|binding|INFO|Releasing lport 1080cece-d27b-4391-82d1-6b7871bf79ba from this chassis (sb_readonly=0)
Oct 14 09:18:27 compute-0 nova_compute[259627]: 2025-10-14 09:18:27.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:27.015 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f71257d1-6873-4759-898a-ce32e06e2fe5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f71257d1-6873-4759-898a-ce32e06e2fe5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:27.015 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[012980ac-d769-430d-ac00-7d3be2352960]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:27.016 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-f71257d1-6873-4759-898a-ce32e06e2fe5
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/f71257d1-6873-4759-898a-ce32e06e2fe5.pid.haproxy
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID f71257d1-6873-4759-898a-ce32e06e2fe5
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:18:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:27.017 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5', 'env', 'PROCESS_TAG=haproxy-f71257d1-6873-4759-898a-ce32e06e2fe5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f71257d1-6873-4759-898a-ce32e06e2fe5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:18:27 compute-0 nova_compute[259627]: 2025-10-14 09:18:27.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:27 compute-0 podman[369696]: 2025-10-14 09:18:27.194556387 +0000 UTC m=+0.045958534 container create e9c66f79778c69d4ac5902bfcfb7d36ddca4966cdda8d7543665ba6c5cd064dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct 14 09:18:27 compute-0 systemd[1]: Started libpod-conmon-e9c66f79778c69d4ac5902bfcfb7d36ddca4966cdda8d7543665ba6c5cd064dc.scope.
Oct 14 09:18:27 compute-0 podman[369696]: 2025-10-14 09:18:27.170439353 +0000 UTC m=+0.021841480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:18:27 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:18:27 compute-0 podman[369696]: 2025-10-14 09:18:27.325249108 +0000 UTC m=+0.176651235 container init e9c66f79778c69d4ac5902bfcfb7d36ddca4966cdda8d7543665ba6c5cd064dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 09:18:27 compute-0 podman[369696]: 2025-10-14 09:18:27.33464295 +0000 UTC m=+0.186045067 container start e9c66f79778c69d4ac5902bfcfb7d36ddca4966cdda8d7543665ba6c5cd064dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:18:27 compute-0 podman[369696]: 2025-10-14 09:18:27.339147751 +0000 UTC m=+0.190549858 container attach e9c66f79778c69d4ac5902bfcfb7d36ddca4966cdda8d7543665ba6c5cd064dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 09:18:27 compute-0 sad_bell[369717]: 167 167
Oct 14 09:18:27 compute-0 systemd[1]: libpod-e9c66f79778c69d4ac5902bfcfb7d36ddca4966cdda8d7543665ba6c5cd064dc.scope: Deactivated successfully.
Oct 14 09:18:27 compute-0 podman[369696]: 2025-10-14 09:18:27.342682138 +0000 UTC m=+0.194084245 container died e9c66f79778c69d4ac5902bfcfb7d36ddca4966cdda8d7543665ba6c5cd064dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 09:18:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-624d0a7e7e71146fbb07872a14b4aa53f659cadd4ebe59eb3c5afa7ada743ff5-merged.mount: Deactivated successfully.
Oct 14 09:18:27 compute-0 podman[369696]: 2025-10-14 09:18:27.382492739 +0000 UTC m=+0.233894856 container remove e9c66f79778c69d4ac5902bfcfb7d36ddca4966cdda8d7543665ba6c5cd064dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 14 09:18:27 compute-0 systemd[1]: libpod-conmon-e9c66f79778c69d4ac5902bfcfb7d36ddca4966cdda8d7543665ba6c5cd064dc.scope: Deactivated successfully.
Oct 14 09:18:27 compute-0 podman[369738]: 2025-10-14 09:18:27.430474642 +0000 UTC m=+0.082854513 container create c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 09:18:27 compute-0 systemd[1]: Started libpod-conmon-c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3.scope.
Oct 14 09:18:27 compute-0 podman[369738]: 2025-10-14 09:18:27.3975464 +0000 UTC m=+0.049926241 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:18:27 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e7bed502d13946ec34b297bcd718011311b8b7aa0eaf61e9799a54475ca22e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:18:27 compute-0 podman[369738]: 2025-10-14 09:18:27.539188961 +0000 UTC m=+0.191568802 container init c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 09:18:27 compute-0 podman[369738]: 2025-10-14 09:18:27.547125137 +0000 UTC m=+0.199504988 container start c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:18:27 compute-0 ceph-mon[74249]: pgmap v1933: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:18:27 compute-0 neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5[369805]: [NOTICE]   (369829) : New worker (369831) forked
Oct 14 09:18:27 compute-0 neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5[369805]: [NOTICE]   (369829) : Loading success.
Oct 14 09:18:27 compute-0 podman[369815]: 2025-10-14 09:18:27.621593452 +0000 UTC m=+0.089536847 container create a578c4e78890beba36b988abe4a63bfbd07b7b82775d7607de3dcc852b5ab391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 09:18:27 compute-0 systemd[1]: Started libpod-conmon-a578c4e78890beba36b988abe4a63bfbd07b7b82775d7607de3dcc852b5ab391.scope.
Oct 14 09:18:27 compute-0 podman[369815]: 2025-10-14 09:18:27.590742712 +0000 UTC m=+0.058686137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:18:27 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4366e99a2a7abed1302e598ebedf521655b838462b43aab8e2024311e0b8de8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4366e99a2a7abed1302e598ebedf521655b838462b43aab8e2024311e0b8de8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4366e99a2a7abed1302e598ebedf521655b838462b43aab8e2024311e0b8de8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4366e99a2a7abed1302e598ebedf521655b838462b43aab8e2024311e0b8de8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:18:27 compute-0 podman[369815]: 2025-10-14 09:18:27.713854986 +0000 UTC m=+0.181798401 container init a578c4e78890beba36b988abe4a63bfbd07b7b82775d7607de3dcc852b5ab391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:18:27 compute-0 podman[369815]: 2025-10-14 09:18:27.727110893 +0000 UTC m=+0.195054288 container start a578c4e78890beba36b988abe4a63bfbd07b7b82775d7607de3dcc852b5ab391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:18:27 compute-0 podman[369815]: 2025-10-14 09:18:27.731496271 +0000 UTC m=+0.199439696 container attach a578c4e78890beba36b988abe4a63bfbd07b7b82775d7607de3dcc852b5ab391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 09:18:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:18:27 compute-0 sshd-session[368775]: Failed password for invalid user a from 188.150.249.96 port 52352 ssh2
Oct 14 09:18:28 compute-0 nova_compute[259627]: 2025-10-14 09:18:28.020 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433508.018999, 78ff2018-e6dc-4337-8a74-90e5a3963a12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:18:28 compute-0 nova_compute[259627]: 2025-10-14 09:18:28.021 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] VM Started (Lifecycle Event)
Oct 14 09:18:28 compute-0 nova_compute[259627]: 2025-10-14 09:18:28.041 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:18:28 compute-0 nova_compute[259627]: 2025-10-14 09:18:28.047 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433508.023958, 78ff2018-e6dc-4337-8a74-90e5a3963a12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:18:28 compute-0 nova_compute[259627]: 2025-10-14 09:18:28.048 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] VM Paused (Lifecycle Event)
Oct 14 09:18:28 compute-0 nova_compute[259627]: 2025-10-14 09:18:28.065 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:18:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:18:28 compute-0 nova_compute[259627]: 2025-10-14 09:18:28.069 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:18:28 compute-0 nova_compute[259627]: 2025-10-14 09:18:28.086 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]: {
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:     "0": [
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:         {
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "devices": [
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "/dev/loop3"
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             ],
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "lv_name": "ceph_lv0",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "lv_size": "21470642176",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "name": "ceph_lv0",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "tags": {
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.cluster_name": "ceph",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.crush_device_class": "",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.encrypted": "0",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.osd_id": "0",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.type": "block",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.vdo": "0"
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             },
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "type": "block",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "vg_name": "ceph_vg0"
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:         }
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:     ],
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:     "1": [
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:         {
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "devices": [
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "/dev/loop4"
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             ],
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "lv_name": "ceph_lv1",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "lv_size": "21470642176",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "name": "ceph_lv1",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "tags": {
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.cluster_name": "ceph",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.crush_device_class": "",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.encrypted": "0",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.osd_id": "1",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.type": "block",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.vdo": "0"
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             },
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "type": "block",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "vg_name": "ceph_vg1"
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:         }
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:     ],
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:     "2": [
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:         {
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "devices": [
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "/dev/loop5"
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             ],
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "lv_name": "ceph_lv2",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "lv_size": "21470642176",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "name": "ceph_lv2",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "tags": {
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.cluster_name": "ceph",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.crush_device_class": "",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.encrypted": "0",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.osd_id": "2",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.type": "block",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:                 "ceph.vdo": "0"
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             },
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "type": "block",
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:             "vg_name": "ceph_vg2"
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:         }
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]:     ]
Oct 14 09:18:28 compute-0 optimistic_lovelace[369844]: }
Oct 14 09:18:28 compute-0 systemd[1]: libpod-a578c4e78890beba36b988abe4a63bfbd07b7b82775d7607de3dcc852b5ab391.scope: Deactivated successfully.
Oct 14 09:18:28 compute-0 podman[369815]: 2025-10-14 09:18:28.51775096 +0000 UTC m=+0.985694355 container died a578c4e78890beba36b988abe4a63bfbd07b7b82775d7607de3dcc852b5ab391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 14 09:18:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4366e99a2a7abed1302e598ebedf521655b838462b43aab8e2024311e0b8de8-merged.mount: Deactivated successfully.
Oct 14 09:18:28 compute-0 podman[369815]: 2025-10-14 09:18:28.592803809 +0000 UTC m=+1.060747204 container remove a578c4e78890beba36b988abe4a63bfbd07b7b82775d7607de3dcc852b5ab391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:18:28 compute-0 systemd[1]: libpod-conmon-a578c4e78890beba36b988abe4a63bfbd07b7b82775d7607de3dcc852b5ab391.scope: Deactivated successfully.
Oct 14 09:18:28 compute-0 sudo[369598]: pam_unix(sudo:session): session closed for user root
Oct 14 09:18:28 compute-0 sudo[369865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:18:28 compute-0 sudo[369865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:18:28 compute-0 sudo[369865]: pam_unix(sudo:session): session closed for user root
Oct 14 09:18:28 compute-0 sudo[369890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:18:28 compute-0 sudo[369890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:18:28 compute-0 sudo[369890]: pam_unix(sudo:session): session closed for user root
Oct 14 09:18:28 compute-0 nova_compute[259627]: 2025-10-14 09:18:28.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:28 compute-0 sudo[369915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:18:28 compute-0 sudo[369915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:18:28 compute-0 sudo[369915]: pam_unix(sudo:session): session closed for user root
Oct 14 09:18:28 compute-0 sudo[369940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:18:28 compute-0 sudo[369940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:18:29 compute-0 podman[370006]: 2025-10-14 09:18:29.427079001 +0000 UTC m=+0.055985311 container create 430bfc6d90e135bd62e0517639422a6033b74016975f676296a76c68962a9adc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 09:18:29 compute-0 systemd[1]: Started libpod-conmon-430bfc6d90e135bd62e0517639422a6033b74016975f676296a76c68962a9adc.scope.
Oct 14 09:18:29 compute-0 podman[370006]: 2025-10-14 09:18:29.39945288 +0000 UTC m=+0.028359240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:18:29 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:18:29 compute-0 podman[370006]: 2025-10-14 09:18:29.53292509 +0000 UTC m=+0.161831450 container init 430bfc6d90e135bd62e0517639422a6033b74016975f676296a76c68962a9adc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:18:29 compute-0 podman[370006]: 2025-10-14 09:18:29.543037919 +0000 UTC m=+0.171944229 container start 430bfc6d90e135bd62e0517639422a6033b74016975f676296a76c68962a9adc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 09:18:29 compute-0 adoring_volhard[370022]: 167 167
Oct 14 09:18:29 compute-0 podman[370006]: 2025-10-14 09:18:29.550328489 +0000 UTC m=+0.179234789 container attach 430bfc6d90e135bd62e0517639422a6033b74016975f676296a76c68962a9adc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:18:29 compute-0 systemd[1]: libpod-430bfc6d90e135bd62e0517639422a6033b74016975f676296a76c68962a9adc.scope: Deactivated successfully.
Oct 14 09:18:29 compute-0 podman[370006]: 2025-10-14 09:18:29.552254417 +0000 UTC m=+0.181160727 container died 430bfc6d90e135bd62e0517639422a6033b74016975f676296a76c68962a9adc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:18:29 compute-0 ceph-mon[74249]: pgmap v1934: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:18:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c909d7239c03b4d9236547c9b46b017413575e0951edf84cbf8e29ffc1ea5184-merged.mount: Deactivated successfully.
Oct 14 09:18:29 compute-0 podman[370006]: 2025-10-14 09:18:29.606455762 +0000 UTC m=+0.235362072 container remove 430bfc6d90e135bd62e0517639422a6033b74016975f676296a76c68962a9adc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:18:29 compute-0 systemd[1]: libpod-conmon-430bfc6d90e135bd62e0517639422a6033b74016975f676296a76c68962a9adc.scope: Deactivated successfully.
Oct 14 09:18:29 compute-0 sshd-session[368775]: Connection closed by invalid user a 188.150.249.96 port 52352 [preauth]
Oct 14 09:18:29 compute-0 podman[370045]: 2025-10-14 09:18:29.882472225 +0000 UTC m=+0.076683581 container create f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatterjee, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:18:29 compute-0 systemd[1]: Started libpod-conmon-f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525.scope.
Oct 14 09:18:29 compute-0 podman[370045]: 2025-10-14 09:18:29.853415759 +0000 UTC m=+0.047627165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:18:29 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a08b6a27e200de621892590710258d70ddd4c224cb584f9883d565503245e5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a08b6a27e200de621892590710258d70ddd4c224cb584f9883d565503245e5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a08b6a27e200de621892590710258d70ddd4c224cb584f9883d565503245e5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a08b6a27e200de621892590710258d70ddd4c224cb584f9883d565503245e5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:18:29 compute-0 podman[370045]: 2025-10-14 09:18:29.993347468 +0000 UTC m=+0.187558864 container init f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatterjee, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 09:18:30 compute-0 podman[370045]: 2025-10-14 09:18:30.010739157 +0000 UTC m=+0.204950503 container start f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 09:18:30 compute-0 podman[370045]: 2025-10-14 09:18:30.015044633 +0000 UTC m=+0.209255979 container attach f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatterjee, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 09:18:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 09:18:30 compute-0 nova_compute[259627]: 2025-10-14 09:18:30.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]: {
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:         "osd_id": 2,
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:         "type": "bluestore"
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:     },
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:         "osd_id": 1,
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:         "type": "bluestore"
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:     },
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:         "osd_id": 0,
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:         "type": "bluestore"
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]:     }
Oct 14 09:18:31 compute-0 dazzling_chatterjee[370061]: }
Oct 14 09:18:31 compute-0 systemd[1]: libpod-f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525.scope: Deactivated successfully.
Oct 14 09:18:31 compute-0 systemd[1]: libpod-f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525.scope: Consumed 1.209s CPU time.
Oct 14 09:18:31 compute-0 conmon[370061]: conmon f77a0a2a318c4fed8108 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525.scope/container/memory.events
Oct 14 09:18:31 compute-0 podman[370045]: 2025-10-14 09:18:31.215317985 +0000 UTC m=+1.409529301 container died f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 09:18:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a08b6a27e200de621892590710258d70ddd4c224cb584f9883d565503245e5e-merged.mount: Deactivated successfully.
Oct 14 09:18:31 compute-0 podman[370045]: 2025-10-14 09:18:31.273983511 +0000 UTC m=+1.468194837 container remove f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:18:31 compute-0 systemd[1]: libpod-conmon-f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525.scope: Deactivated successfully.
Oct 14 09:18:31 compute-0 sudo[369940]: pam_unix(sudo:session): session closed for user root
Oct 14 09:18:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:18:31 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:18:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:18:31 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:18:31 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev efe05eb0-081c-472e-910a-a7b0de0f21f5 does not exist
Oct 14 09:18:31 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev ab9b5431-b01c-49fb-a6fb-337082cf67b4 does not exist
Oct 14 09:18:31 compute-0 sudo[370110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:18:31 compute-0 sudo[370110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:18:31 compute-0 sudo[370110]: pam_unix(sudo:session): session closed for user root
Oct 14 09:18:31 compute-0 sudo[370135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:18:31 compute-0 sudo[370135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:18:31 compute-0 sudo[370135]: pam_unix(sudo:session): session closed for user root
Oct 14 09:18:31 compute-0 ceph-mon[74249]: pgmap v1935: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 09:18:31 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:18:31 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.801 2 DEBUG nova.compute.manager [req-d5dc2f57-8c3d-4693-bdd9-30912ff7f939 req-763206bb-cc4d-4af9-8d31-fe92d1dc5f5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Received event network-vif-plugged-da41936a-3d81-49ed-9021-22d56f07b75b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.804 2 DEBUG oslo_concurrency.lockutils [req-d5dc2f57-8c3d-4693-bdd9-30912ff7f939 req-763206bb-cc4d-4af9-8d31-fe92d1dc5f5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.804 2 DEBUG oslo_concurrency.lockutils [req-d5dc2f57-8c3d-4693-bdd9-30912ff7f939 req-763206bb-cc4d-4af9-8d31-fe92d1dc5f5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.805 2 DEBUG oslo_concurrency.lockutils [req-d5dc2f57-8c3d-4693-bdd9-30912ff7f939 req-763206bb-cc4d-4af9-8d31-fe92d1dc5f5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.805 2 DEBUG nova.compute.manager [req-d5dc2f57-8c3d-4693-bdd9-30912ff7f939 req-763206bb-cc4d-4af9-8d31-fe92d1dc5f5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Processing event network-vif-plugged-da41936a-3d81-49ed-9021-22d56f07b75b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.806 2 DEBUG nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.813 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433511.8133683, 78ff2018-e6dc-4337-8a74-90e5a3963a12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.814 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] VM Resumed (Lifecycle Event)
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.818 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.823 2 INFO nova.virt.libvirt.driver [-] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Instance spawned successfully.
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.824 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.846 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.855 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.862 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.863 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.863 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.864 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.865 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.865 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.877 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.929 2 INFO nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Took 12.17 seconds to spawn the instance on the hypervisor.
Oct 14 09:18:31 compute-0 nova_compute[259627]: 2025-10-14 09:18:31.930 2 DEBUG nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:18:32 compute-0 nova_compute[259627]: 2025-10-14 09:18:32.040 2 INFO nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Took 13.20 seconds to build instance.
Oct 14 09:18:32 compute-0 nova_compute[259627]: 2025-10-14 09:18:32.065 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 14 09:18:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Oct 14 09:18:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Oct 14 09:18:32 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Oct 14 09:18:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:18:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:18:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:18:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:18:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:18:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:18:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:18:32
Oct 14 09:18:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:18:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:18:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['images', 'volumes', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', '.mgr']
Oct 14 09:18:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:18:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:18:32 compute-0 nova_compute[259627]: 2025-10-14 09:18:32.997 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:18:32 compute-0 nova_compute[259627]: 2025-10-14 09:18:32.998 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:18:33 compute-0 nova_compute[259627]: 2025-10-14 09:18:33.018 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:33 compute-0 nova_compute[259627]: 2025-10-14 09:18:33.019 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:33 compute-0 nova_compute[259627]: 2025-10-14 09:18:33.019 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:33 compute-0 nova_compute[259627]: 2025-10-14 09:18:33.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:18:33 compute-0 nova_compute[259627]: 2025-10-14 09:18:33.020 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:18:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:18:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:18:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:18:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:18:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:18:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:18:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:18:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:18:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:18:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:18:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:18:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/664284408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:18:33 compute-0 nova_compute[259627]: 2025-10-14 09:18:33.493 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:18:33 compute-0 nova_compute[259627]: 2025-10-14 09:18:33.594 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:18:33 compute-0 nova_compute[259627]: 2025-10-14 09:18:33.595 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:18:33 compute-0 nova_compute[259627]: 2025-10-14 09:18:33.602 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:18:33 compute-0 nova_compute[259627]: 2025-10-14 09:18:33.602 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:18:33 compute-0 ceph-mon[74249]: pgmap v1936: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 14 09:18:33 compute-0 ceph-mon[74249]: osdmap e267: 3 total, 3 up, 3 in
Oct 14 09:18:33 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/664284408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:18:33 compute-0 nova_compute[259627]: 2025-10-14 09:18:33.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:33 compute-0 nova_compute[259627]: 2025-10-14 09:18:33.866 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:18:33 compute-0 nova_compute[259627]: 2025-10-14 09:18:33.867 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3443MB free_disk=59.92180633544922GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:18:33 compute-0 nova_compute[259627]: 2025-10-14 09:18:33.868 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:33 compute-0 nova_compute[259627]: 2025-10-14 09:18:33.868 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.3 MiB/s wr, 41 op/s
Oct 14 09:18:34 compute-0 nova_compute[259627]: 2025-10-14 09:18:34.092 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 22c6f034-c238-499b-8b07-1c0f5879297e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:18:34 compute-0 nova_compute[259627]: 2025-10-14 09:18:34.093 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 78ff2018-e6dc-4337-8a74-90e5a3963a12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:18:34 compute-0 nova_compute[259627]: 2025-10-14 09:18:34.093 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:18:34 compute-0 nova_compute[259627]: 2025-10-14 09:18:34.094 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:18:34 compute-0 nova_compute[259627]: 2025-10-14 09:18:34.276 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:18:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:18:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1059653271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:18:34 compute-0 nova_compute[259627]: 2025-10-14 09:18:34.752 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:18:34 compute-0 nova_compute[259627]: 2025-10-14 09:18:34.758 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:18:34 compute-0 nova_compute[259627]: 2025-10-14 09:18:34.904 2 DEBUG nova.compute.manager [req-1bc84417-5a25-45a1-9032-08438a874adc req-c9edadb3-e21a-4346-8014-3f39ac2176e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Received event network-vif-plugged-da41936a-3d81-49ed-9021-22d56f07b75b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:18:34 compute-0 nova_compute[259627]: 2025-10-14 09:18:34.905 2 DEBUG oslo_concurrency.lockutils [req-1bc84417-5a25-45a1-9032-08438a874adc req-c9edadb3-e21a-4346-8014-3f39ac2176e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:34 compute-0 nova_compute[259627]: 2025-10-14 09:18:34.906 2 DEBUG oslo_concurrency.lockutils [req-1bc84417-5a25-45a1-9032-08438a874adc req-c9edadb3-e21a-4346-8014-3f39ac2176e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:34 compute-0 nova_compute[259627]: 2025-10-14 09:18:34.907 2 DEBUG oslo_concurrency.lockutils [req-1bc84417-5a25-45a1-9032-08438a874adc req-c9edadb3-e21a-4346-8014-3f39ac2176e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:34 compute-0 nova_compute[259627]: 2025-10-14 09:18:34.907 2 DEBUG nova.compute.manager [req-1bc84417-5a25-45a1-9032-08438a874adc req-c9edadb3-e21a-4346-8014-3f39ac2176e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] No waiting events found dispatching network-vif-plugged-da41936a-3d81-49ed-9021-22d56f07b75b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:18:34 compute-0 nova_compute[259627]: 2025-10-14 09:18:34.908 2 WARNING nova.compute.manager [req-1bc84417-5a25-45a1-9032-08438a874adc req-c9edadb3-e21a-4346-8014-3f39ac2176e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Received unexpected event network-vif-plugged-da41936a-3d81-49ed-9021-22d56f07b75b for instance with vm_state active and task_state None.
Oct 14 09:18:35 compute-0 nova_compute[259627]: 2025-10-14 09:18:35.021 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:18:35 compute-0 nova_compute[259627]: 2025-10-14 09:18:35.048 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:18:35 compute-0 nova_compute[259627]: 2025-10-14 09:18:35.049 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:35 compute-0 nova_compute[259627]: 2025-10-14 09:18:35.617 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Acquiring lock "333c933a-d8e8-42b0-ab77-72546d8ab982" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:35 compute-0 nova_compute[259627]: 2025-10-14 09:18:35.618 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "333c933a-d8e8-42b0-ab77-72546d8ab982" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:35 compute-0 ceph-mon[74249]: pgmap v1938: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.3 MiB/s wr, 41 op/s
Oct 14 09:18:35 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1059653271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:18:35 compute-0 nova_compute[259627]: 2025-10-14 09:18:35.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:35 compute-0 nova_compute[259627]: 2025-10-14 09:18:35.665 2 DEBUG nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:18:35 compute-0 nova_compute[259627]: 2025-10-14 09:18:35.766 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:35 compute-0 nova_compute[259627]: 2025-10-14 09:18:35.767 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:35 compute-0 nova_compute[259627]: 2025-10-14 09:18:35.775 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:18:35 compute-0 nova_compute[259627]: 2025-10-14 09:18:35.776 2 INFO nova.compute.claims [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:18:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 104 op/s
Oct 14 09:18:36 compute-0 podman[370207]: 2025-10-14 09:18:36.683923697 +0000 UTC m=+0.092820549 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 14 09:18:36 compute-0 podman[370206]: 2025-10-14 09:18:36.699158432 +0000 UTC m=+0.111085739 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:18:36 compute-0 nova_compute[259627]: 2025-10-14 09:18:36.765 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.025 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.027 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.028 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:18:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:37.177 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2 2001:db8::f816:3eff:fe43:13a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9122ea86-ed84-4854-b541-8d32cf1d8ec5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09fe180c-b874-4a50-95c9-e16dfe99d5f0) old=Port_Binding(mac=['fa:16:3e:43:13:a8 2001:db8::f816:3eff:fe43:13a8'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:18:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:37.179 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09fe180c-b874-4a50-95c9-e16dfe99d5f0 in datapath 1ef50a42-0521-4feb-8502-83f9d20484ce updated
Oct 14 09:18:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:37.181 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef50a42-0521-4feb-8502-83f9d20484ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:18:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:37.182 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[84ab5b71-0af2-4f7d-bb98-74afc1a24e2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:18:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/587226045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.204 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.211 2 DEBUG nova.compute.provider_tree [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.229 2 DEBUG nova.scheduler.client.report [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.256 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.489s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.257 2 DEBUG nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.324 2 DEBUG nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.325 2 DEBUG nova.network.neutron [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.484 2 INFO nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.647 2 DEBUG nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:18:37 compute-0 ceph-mon[74249]: pgmap v1939: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 104 op/s
Oct 14 09:18:37 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/587226045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.787 2 DEBUG nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.791 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.792 2 INFO nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Creating image(s)
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.829 2 DEBUG nova.storage.rbd_utils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] rbd image 333c933a-d8e8-42b0-ab77-72546d8ab982_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.869 2 DEBUG nova.storage.rbd_utils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] rbd image 333c933a-d8e8-42b0-ab77-72546d8ab982_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:18:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.894 2 DEBUG nova.storage.rbd_utils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] rbd image 333c933a-d8e8-42b0-ab77-72546d8ab982_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.899 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Acquiring lock "499f01f883833265e17c7f0a92fa640265d12fd1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.900 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "499f01f883833265e17c7f0a92fa640265d12fd1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.903 2 DEBUG nova.network.neutron [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 14 09:18:37 compute-0 nova_compute[259627]: 2025-10-14 09:18:37.903 2 DEBUG nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:18:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 104 op/s
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.297 2 DEBUG nova.virt.libvirt.imagebackend [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Image locations are: [{'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/727bbed1-74a5-4c59-8492-67ee2fd49862/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/727bbed1-74a5-4c59-8492-67ee2fd49862/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.367 2 DEBUG nova.virt.libvirt.imagebackend [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Selected location: {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/727bbed1-74a5-4c59-8492-67ee2fd49862/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.369 2 DEBUG nova.storage.rbd_utils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] cloning images/727bbed1-74a5-4c59-8492-67ee2fd49862@snap to None/333c933a-d8e8-42b0-ab77-72546d8ab982_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:18:38 compute-0 sshd-session[370066]: Invalid user nil from 188.150.249.96 port 55286
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.513 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "499f01f883833265e17c7f0a92fa640265d12fd1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.700 2 DEBUG nova.storage.rbd_utils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] resizing rbd image 333c933a-d8e8-42b0-ab77-72546d8ab982_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.809 2 DEBUG nova.objects.instance [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lazy-loading 'migration_context' on Instance uuid 333c933a-d8e8-42b0-ab77-72546d8ab982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.832 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.833 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Ensure instance console log exists: /var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.834 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.834 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.835 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.838 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='ef3c7bff170b5c0c8fa1a2079ceceb0f',container_format='bare',created_at=2025-10-14T09:18:32Z,direct_url=<?>,disk_format='raw',id=727bbed1-74a5-4c59-8492-67ee2fd49862,min_disk=0,min_ram=0,name='tempest-image-dependency-test-705407316',owner='8af92d0b8536433fbf169b1e300f923f',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-10-14T09:18:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '727bbed1-74a5-4c59-8492-67ee2fd49862'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.846 2 WARNING nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.855 2 DEBUG nova.virt.libvirt.host [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.856 2 DEBUG nova.virt.libvirt.host [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.863 2 DEBUG nova.virt.libvirt.host [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.864 2 DEBUG nova.virt.libvirt.host [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.866 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.867 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='ef3c7bff170b5c0c8fa1a2079ceceb0f',container_format='bare',created_at=2025-10-14T09:18:32Z,direct_url=<?>,disk_format='raw',id=727bbed1-74a5-4c59-8492-67ee2fd49862,min_disk=0,min_ram=0,name='tempest-image-dependency-test-705407316',owner='8af92d0b8536433fbf169b1e300f923f',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-10-14T09:18:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.868 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.869 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.869 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.870 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.871 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.872 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.873 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.874 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.874 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.875 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.882 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:38 compute-0 sshd-session[370066]: Failed none for invalid user nil from 188.150.249.96 port 55286 ssh2
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:18:38 compute-0 nova_compute[259627]: 2025-10-14 09:18:38.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:18:39 compute-0 nova_compute[259627]: 2025-10-14 09:18:39.009 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 14 09:18:39 compute-0 nova_compute[259627]: 2025-10-14 09:18:39.259 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:18:39 compute-0 nova_compute[259627]: 2025-10-14 09:18:39.259 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:18:39 compute-0 nova_compute[259627]: 2025-10-14 09:18:39.260 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:18:39 compute-0 nova_compute[259627]: 2025-10-14 09:18:39.260 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 22c6f034-c238-499b-8b07-1c0f5879297e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:18:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:18:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2320813269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:18:39 compute-0 nova_compute[259627]: 2025-10-14 09:18:39.349 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:18:39 compute-0 nova_compute[259627]: 2025-10-14 09:18:39.379 2 DEBUG nova.storage.rbd_utils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] rbd image 333c933a-d8e8-42b0-ab77-72546d8ab982_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:18:39 compute-0 nova_compute[259627]: 2025-10-14 09:18:39.385 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:18:39 compute-0 sshd-session[370066]: Connection closed by invalid user nil 188.150.249.96 port 55286 [preauth]
Oct 14 09:18:39 compute-0 ceph-mon[74249]: pgmap v1940: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 104 op/s
Oct 14 09:18:39 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2320813269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:18:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:18:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/193110767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:18:39 compute-0 nova_compute[259627]: 2025-10-14 09:18:39.819 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:18:39 compute-0 nova_compute[259627]: 2025-10-14 09:18:39.822 2 DEBUG nova.objects.instance [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lazy-loading 'pci_devices' on Instance uuid 333c933a-d8e8-42b0-ab77-72546d8ab982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:18:39 compute-0 nova_compute[259627]: 2025-10-14 09:18:39.839 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:18:39 compute-0 nova_compute[259627]:   <uuid>333c933a-d8e8-42b0-ab77-72546d8ab982</uuid>
Oct 14 09:18:39 compute-0 nova_compute[259627]:   <name>instance-00000070</name>
Oct 14 09:18:39 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:18:39 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:18:39 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <nova:name>instance-depend-image</nova:name>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:18:38</nova:creationTime>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:18:39 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:18:39 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:18:39 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:18:39 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:18:39 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:18:39 compute-0 nova_compute[259627]:         <nova:user uuid="bf09463654864456a851828eb4e88fb2">tempest-ImageDependencyTests-1811756685-project-member</nova:user>
Oct 14 09:18:39 compute-0 nova_compute[259627]:         <nova:project uuid="8af92d0b8536433fbf169b1e300f923f">tempest-ImageDependencyTests-1811756685</nova:project>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="727bbed1-74a5-4c59-8492-67ee2fd49862"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:18:39 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:18:39 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <system>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <entry name="serial">333c933a-d8e8-42b0-ab77-72546d8ab982</entry>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <entry name="uuid">333c933a-d8e8-42b0-ab77-72546d8ab982</entry>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     </system>
Oct 14 09:18:39 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:18:39 compute-0 nova_compute[259627]:   <os>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:   </os>
Oct 14 09:18:39 compute-0 nova_compute[259627]:   <features>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:   </features>
Oct 14 09:18:39 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:18:39 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:18:39 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/333c933a-d8e8-42b0-ab77-72546d8ab982_disk">
Oct 14 09:18:39 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       </source>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:18:39 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/333c933a-d8e8-42b0-ab77-72546d8ab982_disk.config">
Oct 14 09:18:39 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       </source>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:18:39 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982/console.log" append="off"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <video>
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     </video>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:18:39 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:18:39 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:18:39 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:18:39 compute-0 nova_compute[259627]: </domain>
Oct 14 09:18:39 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:18:39 compute-0 nova_compute[259627]: 2025-10-14 09:18:39.901 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:18:39 compute-0 nova_compute[259627]: 2025-10-14 09:18:39.902 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:18:39 compute-0 nova_compute[259627]: 2025-10-14 09:18:39.903 2 INFO nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Using config drive
Oct 14 09:18:39 compute-0 nova_compute[259627]: 2025-10-14 09:18:39.937 2 DEBUG nova.storage.rbd_utils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] rbd image 333c933a-d8e8-42b0-ab77-72546d8ab982_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:18:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 93 op/s
Oct 14 09:18:40 compute-0 nova_compute[259627]: 2025-10-14 09:18:40.238 2 INFO nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Creating config drive at /var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982/disk.config
Oct 14 09:18:40 compute-0 nova_compute[259627]: 2025-10-14 09:18:40.246 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_0amxw25 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:18:40 compute-0 nova_compute[259627]: 2025-10-14 09:18:40.408 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_0amxw25" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:18:40 compute-0 nova_compute[259627]: 2025-10-14 09:18:40.451 2 DEBUG nova.storage.rbd_utils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] rbd image 333c933a-d8e8-42b0-ab77-72546d8ab982_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:18:40 compute-0 nova_compute[259627]: 2025-10-14 09:18:40.464 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982/disk.config 333c933a-d8e8-42b0-ab77-72546d8ab982_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:18:40 compute-0 nova_compute[259627]: 2025-10-14 09:18:40.617 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982/disk.config 333c933a-d8e8-42b0-ab77-72546d8ab982_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:18:40 compute-0 nova_compute[259627]: 2025-10-14 09:18:40.618 2 INFO nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Deleting local config drive /var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982/disk.config because it was imported into RBD.
Oct 14 09:18:40 compute-0 nova_compute[259627]: 2025-10-14 09:18:40.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:40 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/193110767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:18:40 compute-0 systemd-machined[214636]: New machine qemu-142-instance-00000070.
Oct 14 09:18:40 compute-0 systemd[1]: Started Virtual Machine qemu-142-instance-00000070.
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:40.999 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updating instance_info_cache with network_info: [{"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.015 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.015 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.016 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.016 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.017 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:18:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:41.123 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:13:a8 2001:db8::f816:3eff:fe43:13a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9122ea86-ed84-4854-b541-8d32cf1d8ec5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09fe180c-b874-4a50-95c9-e16dfe99d5f0) old=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2 2001:db8::f816:3eff:fe43:13a8'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:18:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:41.124 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09fe180c-b874-4a50-95c9-e16dfe99d5f0 in datapath 1ef50a42-0521-4feb-8502-83f9d20484ce updated
Oct 14 09:18:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:41.126 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef50a42-0521-4feb-8502-83f9d20484ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:18:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:41.127 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1773c96f-97f2-419c-bbb8-308d6df8514d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.625 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433521.6247602, 333c933a-d8e8-42b0-ab77-72546d8ab982 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.626 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] VM Resumed (Lifecycle Event)
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.629 2 DEBUG nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.629 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.633 2 INFO nova.virt.libvirt.driver [-] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Instance spawned successfully.
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.633 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.648 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.653 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.659 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.660 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.660 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.661 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.661 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.662 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:18:41 compute-0 ceph-mon[74249]: pgmap v1941: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 93 op/s
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.702 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.703 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433521.6251338, 333c933a-d8e8-42b0-ab77-72546d8ab982 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.703 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] VM Started (Lifecycle Event)
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.736 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.740 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.752 2 INFO nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Took 3.96 seconds to spawn the instance on the hypervisor.
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.752 2 DEBUG nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.765 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.819 2 INFO nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Took 6.08 seconds to build instance.
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.838 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "333c933a-d8e8-42b0-ab77-72546d8ab982" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:41 compute-0 nova_compute[259627]: 2025-10-14 09:18:41.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:18:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 7.3 KiB/s wr, 133 op/s
Oct 14 09:18:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:42.143 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2 2001:db8::f816:3eff:fe43:13a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9122ea86-ed84-4854-b541-8d32cf1d8ec5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09fe180c-b874-4a50-95c9-e16dfe99d5f0) old=Port_Binding(mac=['fa:16:3e:43:13:a8 2001:db8::f816:3eff:fe43:13a8'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:18:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:42.146 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09fe180c-b874-4a50-95c9-e16dfe99d5f0 in datapath 1ef50a42-0521-4feb-8502-83f9d20484ce updated
Oct 14 09:18:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:42.148 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef50a42-0521-4feb-8502-83f9d20484ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:18:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:42.149 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[09149002-dd75-45f4-9a0c-afee462a20de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001108513218727616 of space, bias 1.0, pg target 0.3325539656182848 quantized to 32 (current 32)
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663670272514163 of space, bias 1.0, pg target 0.19991010817542487 quantized to 32 (current 32)
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:18:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:18:43 compute-0 nova_compute[259627]: 2025-10-14 09:18:43.273 2 DEBUG nova.compute.manager [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:18:43 compute-0 nova_compute[259627]: 2025-10-14 09:18:43.323 2 INFO nova.compute.manager [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] instance snapshotting
Oct 14 09:18:43 compute-0 ovn_controller[152662]: 2025-10-14T09:18:43Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:44:0a:a6 10.100.0.23
Oct 14 09:18:43 compute-0 ovn_controller[152662]: 2025-10-14T09:18:43Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:44:0a:a6 10.100.0.23
Oct 14 09:18:43 compute-0 ceph-mon[74249]: pgmap v1942: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 7.3 KiB/s wr, 133 op/s
Oct 14 09:18:43 compute-0 nova_compute[259627]: 2025-10-14 09:18:43.843 2 INFO nova.virt.libvirt.driver [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Beginning live snapshot process
Oct 14 09:18:43 compute-0 nova_compute[259627]: 2025-10-14 09:18:43.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:43 compute-0 nova_compute[259627]: 2025-10-14 09:18:43.995 2 DEBUG nova.storage.rbd_utils [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] creating snapshot(b16323738b854d8aa86180d83eca67ff) on rbd image(333c933a-d8e8-42b0-ab77-72546d8ab982_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:18:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.4 KiB/s wr, 116 op/s
Oct 14 09:18:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Oct 14 09:18:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Oct 14 09:18:44 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Oct 14 09:18:44 compute-0 nova_compute[259627]: 2025-10-14 09:18:44.782 2 DEBUG nova.storage.rbd_utils [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] cloning vms/333c933a-d8e8-42b0-ab77-72546d8ab982_disk@b16323738b854d8aa86180d83eca67ff to images/4ed7be1c-eb92-4aed-821e-c93c8b3e0961 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:18:44 compute-0 nova_compute[259627]: 2025-10-14 09:18:44.922 2 DEBUG nova.storage.rbd_utils [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] flattening images/4ed7be1c-eb92-4aed-821e-c93c8b3e0961 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:18:45 compute-0 nova_compute[259627]: 2025-10-14 09:18:45.088 2 DEBUG nova.storage.rbd_utils [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] removing snapshot(b16323738b854d8aa86180d83eca67ff) on rbd image(333c933a-d8e8-42b0-ab77-72546d8ab982_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:18:45 compute-0 nova_compute[259627]: 2025-10-14 09:18:45.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Oct 14 09:18:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Oct 14 09:18:45 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Oct 14 09:18:45 compute-0 ceph-mon[74249]: pgmap v1943: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.4 KiB/s wr, 116 op/s
Oct 14 09:18:45 compute-0 ceph-mon[74249]: osdmap e268: 3 total, 3 up, 3 in
Oct 14 09:18:45 compute-0 nova_compute[259627]: 2025-10-14 09:18:45.753 2 DEBUG nova.storage.rbd_utils [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] creating snapshot(snap) on rbd image(4ed7be1c-eb92-4aed-821e-c93c8b3e0961) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:18:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 200 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 582 KiB/s rd, 3.2 MiB/s wr, 234 op/s
Oct 14 09:18:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:46.539 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9122ea86-ed84-4854-b541-8d32cf1d8ec5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09fe180c-b874-4a50-95c9-e16dfe99d5f0) old=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2 2001:db8::f816:3eff:fe43:13a8'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:18:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:46.542 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09fe180c-b874-4a50-95c9-e16dfe99d5f0 in datapath 1ef50a42-0521-4feb-8502-83f9d20484ce updated
Oct 14 09:18:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:46.544 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef50a42-0521-4feb-8502-83f9d20484ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:18:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:46.545 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c37f9c5d-250f-4975-8fe1-a48ceb4a58ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Oct 14 09:18:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Oct 14 09:18:46 compute-0 ceph-mon[74249]: osdmap e269: 3 total, 3 up, 3 in
Oct 14 09:18:46 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Oct 14 09:18:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:47.403 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2 2001:db8::f816:3eff:fe43:13a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9122ea86-ed84-4854-b541-8d32cf1d8ec5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09fe180c-b874-4a50-95c9-e16dfe99d5f0) old=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:18:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:47.405 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09fe180c-b874-4a50-95c9-e16dfe99d5f0 in datapath 1ef50a42-0521-4feb-8502-83f9d20484ce updated
Oct 14 09:18:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:47.406 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef50a42-0521-4feb-8502-83f9d20484ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:18:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:47.407 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[648a4e4c-a346-4f59-b723-d62b95fa87e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:47 compute-0 ceph-mon[74249]: pgmap v1946: 305 pgs: 305 active+clean; 200 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 582 KiB/s rd, 3.2 MiB/s wr, 234 op/s
Oct 14 09:18:47 compute-0 ceph-mon[74249]: osdmap e270: 3 total, 3 up, 3 in
Oct 14 09:18:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:18:47 compute-0 nova_compute[259627]: 2025-10-14 09:18:47.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:18:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 200 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 721 KiB/s rd, 4.3 MiB/s wr, 243 op/s
Oct 14 09:18:48 compute-0 nova_compute[259627]: 2025-10-14 09:18:48.560 2 INFO nova.virt.libvirt.driver [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Snapshot image upload complete
Oct 14 09:18:48 compute-0 nova_compute[259627]: 2025-10-14 09:18:48.560 2 INFO nova.compute.manager [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Took 5.23 seconds to snapshot the instance on the hypervisor.
Oct 14 09:18:48 compute-0 nova_compute[259627]: 2025-10-14 09:18:48.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:49 compute-0 ceph-mon[74249]: pgmap v1948: 305 pgs: 305 active+clean; 200 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 721 KiB/s rd, 4.3 MiB/s wr, 243 op/s
Oct 14 09:18:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 200 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 723 KiB/s rd, 4.3 MiB/s wr, 249 op/s
Oct 14 09:18:50 compute-0 sshd-session[370533]: Invalid user admin from 188.150.249.96 port 57382
Oct 14 09:18:50 compute-0 nova_compute[259627]: 2025-10-14 09:18:50.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Oct 14 09:18:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Oct 14 09:18:50 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Oct 14 09:18:51 compute-0 sshd-session[370533]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:18:51 compute-0 sshd-session[370533]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.425 2 DEBUG oslo_concurrency.lockutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "78ff2018-e6dc-4337-8a74-90e5a3963a12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.426 2 DEBUG oslo_concurrency.lockutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.426 2 DEBUG oslo_concurrency.lockutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.426 2 DEBUG oslo_concurrency.lockutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.427 2 DEBUG oslo_concurrency.lockutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.428 2 INFO nova.compute.manager [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Terminating instance
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.429 2 DEBUG nova.compute.manager [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:18:51 compute-0 kernel: tapda41936a-3d (unregistering): left promiscuous mode
Oct 14 09:18:51 compute-0 NetworkManager[44885]: <info>  [1760433531.4932] device (tapda41936a-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:51 compute-0 ovn_controller[152662]: 2025-10-14T09:18:51Z|01179|binding|INFO|Releasing lport da41936a-3d81-49ed-9021-22d56f07b75b from this chassis (sb_readonly=0)
Oct 14 09:18:51 compute-0 ovn_controller[152662]: 2025-10-14T09:18:51Z|01180|binding|INFO|Setting lport da41936a-3d81-49ed-9021-22d56f07b75b down in Southbound
Oct 14 09:18:51 compute-0 ovn_controller[152662]: 2025-10-14T09:18:51Z|01181|binding|INFO|Removing iface tapda41936a-3d ovn-installed in OVS
Oct 14 09:18:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.523 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:0a:a6 10.100.0.23'], port_security=['fa:16:3e:44:0a:a6 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': '78ff2018-e6dc-4337-8a74-90e5a3963a12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f71257d1-6873-4759-898a-ce32e06e2fe5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e9d71fdc-7fb6-4c43-8934-1eec5337be2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cdc378b-1288-436b-9821-34981ec9f7fe, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=da41936a-3d81-49ed-9021-22d56f07b75b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:18:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.525 162547 INFO neutron.agent.ovn.metadata.agent [-] Port da41936a-3d81-49ed-9021-22d56f07b75b in datapath f71257d1-6873-4759-898a-ce32e06e2fe5 unbound from our chassis
Oct 14 09:18:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.527 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f71257d1-6873-4759-898a-ce32e06e2fe5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:18:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.528 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fbb9632e-92ce-4dc6-8c1d-19e4c73ce1d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.532 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5 namespace which is not needed anymore
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.537 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Acquiring lock "333c933a-d8e8-42b0-ab77-72546d8ab982" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.537 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "333c933a-d8e8-42b0-ab77-72546d8ab982" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.538 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Acquiring lock "333c933a-d8e8-42b0-ab77-72546d8ab982-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.539 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "333c933a-d8e8-42b0-ab77-72546d8ab982-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.540 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "333c933a-d8e8-42b0-ab77-72546d8ab982-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.541 2 INFO nova.compute.manager [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Terminating instance
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.542 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Acquiring lock "refresh_cache-333c933a-d8e8-42b0-ab77-72546d8ab982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.543 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Acquired lock "refresh_cache-333c933a-d8e8-42b0-ab77-72546d8ab982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.543 2 DEBUG nova.network.neutron [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:51 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Oct 14 09:18:51 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d0000006f.scope: Consumed 13.058s CPU time.
Oct 14 09:18:51 compute-0 systemd-machined[214636]: Machine qemu-141-instance-0000006f terminated.
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.666 2 INFO nova.virt.libvirt.driver [-] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Instance destroyed successfully.
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.667 2 DEBUG nova.objects.instance [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid 78ff2018-e6dc-4337-8a74-90e5a3963a12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.687 2 DEBUG nova.virt.libvirt.vif [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:18:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-656534791',display_name='tempest-TestNetworkBasicOps-server-656534791',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-656534791',id=111,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqWOfZtqt05D/ybUIudjZC2rvrYHVQund30LctGyNDPNNhzNoX0uP/AuNuya2hh573jsTHFAB6yaR0srnVECa+MfsVPVgRLFSopv6Xf9JyA1XlxHXKy8htlY2LEMZWHEA==',key_name='tempest-TestNetworkBasicOps-500375878',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:18:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-qnjtb5gx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:18:31Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=78ff2018-e6dc-4337-8a74-90e5a3963a12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.687 2 DEBUG nova.network.os_vif_util [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.688 2 DEBUG nova.network.os_vif_util [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:0a:a6,bridge_name='br-int',has_traffic_filtering=True,id=da41936a-3d81-49ed-9021-22d56f07b75b,network=Network(f71257d1-6873-4759-898a-ce32e06e2fe5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41936a-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.688 2 DEBUG os_vif [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:0a:a6,bridge_name='br-int',has_traffic_filtering=True,id=da41936a-3d81-49ed-9021-22d56f07b75b,network=Network(f71257d1-6873-4759-898a-ce32e06e2fe5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41936a-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.690 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda41936a-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.695 2 INFO os_vif [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:0a:a6,bridge_name='br-int',has_traffic_filtering=True,id=da41936a-3d81-49ed-9021-22d56f07b75b,network=Network(f71257d1-6873-4759-898a-ce32e06e2fe5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41936a-3d')
Oct 14 09:18:51 compute-0 neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5[369805]: [NOTICE]   (369829) : haproxy version is 2.8.14-c23fe91
Oct 14 09:18:51 compute-0 neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5[369805]: [NOTICE]   (369829) : path to executable is /usr/sbin/haproxy
Oct 14 09:18:51 compute-0 neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5[369805]: [WARNING]  (369829) : Exiting Master process...
Oct 14 09:18:51 compute-0 neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5[369805]: [WARNING]  (369829) : Exiting Master process...
Oct 14 09:18:51 compute-0 neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5[369805]: [ALERT]    (369829) : Current worker (369831) exited with code 143 (Terminated)
Oct 14 09:18:51 compute-0 neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5[369805]: [WARNING]  (369829) : All workers exited. Exiting... (0)
Oct 14 09:18:51 compute-0 systemd[1]: libpod-c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3.scope: Deactivated successfully.
Oct 14 09:18:51 compute-0 podman[370816]: 2025-10-14 09:18:51.712735881 +0000 UTC m=+0.051771866 container died c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:18:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3-userdata-shm.mount: Deactivated successfully.
Oct 14 09:18:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-06e7bed502d13946ec34b297bcd718011311b8b7aa0eaf61e9799a54475ca22e-merged.mount: Deactivated successfully.
Oct 14 09:18:51 compute-0 ceph-mon[74249]: pgmap v1949: 305 pgs: 305 active+clean; 200 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 723 KiB/s rd, 4.3 MiB/s wr, 249 op/s
Oct 14 09:18:51 compute-0 ceph-mon[74249]: osdmap e271: 3 total, 3 up, 3 in
Oct 14 09:18:51 compute-0 podman[370816]: 2025-10-14 09:18:51.760278933 +0000 UTC m=+0.099314918 container cleanup c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.765 2 DEBUG nova.network.neutron [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:18:51 compute-0 systemd[1]: libpod-conmon-c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3.scope: Deactivated successfully.
Oct 14 09:18:51 compute-0 podman[370874]: 2025-10-14 09:18:51.825207503 +0000 UTC m=+0.041627647 container remove c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:18:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.831 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2da8c025-5fda-4f60-9de9-ee2f1976d91a]: (4, ('Tue Oct 14 09:18:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5 (c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3)\nc4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3\nTue Oct 14 09:18:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5 (c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3)\nc4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.833 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9e94e544-68fa-407b-a469-4f3c20d98fd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.833 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf71257d1-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:18:51 compute-0 kernel: tapf71257d1-60: left promiscuous mode
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:51 compute-0 nova_compute[259627]: 2025-10-14 09:18:51.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.853 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca52414-b9f3-405d-8e85-0031327217dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.872 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[88ec172a-d62b-49cc-b368-42fcd28a8a25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.873 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[80c891c5-9ddc-4c52-8c31-be4874821a3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.886 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9122ea86-ed84-4854-b541-8d32cf1d8ec5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09fe180c-b874-4a50-95c9-e16dfe99d5f0) old=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2 2001:db8::f816:3eff:fe43:13a8'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:18:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.889 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bbc52b1c-3db9-4dd6-97a4-6d768c8af77c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728555, 'reachable_time': 35951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370889, 'error': None, 'target': 'ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:51 compute-0 systemd[1]: run-netns-ovnmeta\x2df71257d1\x2d6873\x2d4759\x2d898a\x2dce32e06e2fe5.mount: Deactivated successfully.
Oct 14 09:18:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.893 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:18:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.893 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[3c120476-dd8b-4c87-8f1d-5ee01a38a52a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.894 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09fe180c-b874-4a50-95c9-e16dfe99d5f0 in datapath 1ef50a42-0521-4feb-8502-83f9d20484ce updated
Oct 14 09:18:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.895 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef50a42-0521-4feb-8502-83f9d20484ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:18:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.896 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eda305c5-09b3-496b-b2e6-4fb2a69b2336]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 200 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 193 KiB/s rd, 639 KiB/s wr, 140 op/s
Oct 14 09:18:52 compute-0 nova_compute[259627]: 2025-10-14 09:18:52.080 2 INFO nova.virt.libvirt.driver [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Deleting instance files /var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12_del
Oct 14 09:18:52 compute-0 nova_compute[259627]: 2025-10-14 09:18:52.081 2 INFO nova.virt.libvirt.driver [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Deletion of /var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12_del complete
Oct 14 09:18:52 compute-0 nova_compute[259627]: 2025-10-14 09:18:52.085 2 DEBUG nova.compute.manager [req-577832d0-ec4b-4b68-af42-1c6c7f430f82 req-7f219086-d4e7-436c-9bba-1efe0f71563a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Received event network-vif-unplugged-da41936a-3d81-49ed-9021-22d56f07b75b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:18:52 compute-0 nova_compute[259627]: 2025-10-14 09:18:52.085 2 DEBUG oslo_concurrency.lockutils [req-577832d0-ec4b-4b68-af42-1c6c7f430f82 req-7f219086-d4e7-436c-9bba-1efe0f71563a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:52 compute-0 nova_compute[259627]: 2025-10-14 09:18:52.086 2 DEBUG oslo_concurrency.lockutils [req-577832d0-ec4b-4b68-af42-1c6c7f430f82 req-7f219086-d4e7-436c-9bba-1efe0f71563a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:52 compute-0 nova_compute[259627]: 2025-10-14 09:18:52.086 2 DEBUG oslo_concurrency.lockutils [req-577832d0-ec4b-4b68-af42-1c6c7f430f82 req-7f219086-d4e7-436c-9bba-1efe0f71563a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:52 compute-0 nova_compute[259627]: 2025-10-14 09:18:52.086 2 DEBUG nova.compute.manager [req-577832d0-ec4b-4b68-af42-1c6c7f430f82 req-7f219086-d4e7-436c-9bba-1efe0f71563a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] No waiting events found dispatching network-vif-unplugged-da41936a-3d81-49ed-9021-22d56f07b75b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:18:52 compute-0 nova_compute[259627]: 2025-10-14 09:18:52.086 2 DEBUG nova.compute.manager [req-577832d0-ec4b-4b68-af42-1c6c7f430f82 req-7f219086-d4e7-436c-9bba-1efe0f71563a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Received event network-vif-unplugged-da41936a-3d81-49ed-9021-22d56f07b75b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:18:52 compute-0 nova_compute[259627]: 2025-10-14 09:18:52.147 2 INFO nova.compute.manager [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 14 09:18:52 compute-0 nova_compute[259627]: 2025-10-14 09:18:52.148 2 DEBUG oslo.service.loopingcall [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:18:52 compute-0 nova_compute[259627]: 2025-10-14 09:18:52.148 2 DEBUG nova.compute.manager [-] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:18:52 compute-0 nova_compute[259627]: 2025-10-14 09:18:52.149 2 DEBUG nova.network.neutron [-] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:18:52 compute-0 nova_compute[259627]: 2025-10-14 09:18:52.441 2 DEBUG nova.network.neutron [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:18:52 compute-0 nova_compute[259627]: 2025-10-14 09:18:52.468 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Releasing lock "refresh_cache-333c933a-d8e8-42b0-ab77-72546d8ab982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:18:52 compute-0 nova_compute[259627]: 2025-10-14 09:18:52.469 2 DEBUG nova.compute.manager [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:18:52 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000070.scope: Deactivated successfully.
Oct 14 09:18:52 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000070.scope: Consumed 1.365s CPU time.
Oct 14 09:18:52 compute-0 systemd-machined[214636]: Machine qemu-142-instance-00000070 terminated.
Oct 14 09:18:52 compute-0 nova_compute[259627]: 2025-10-14 09:18:52.699 2 INFO nova.virt.libvirt.driver [-] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Instance destroyed successfully.
Oct 14 09:18:52 compute-0 nova_compute[259627]: 2025-10-14 09:18:52.703 2 DEBUG nova.objects.instance [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lazy-loading 'resources' on Instance uuid 333c933a-d8e8-42b0-ab77-72546d8ab982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:18:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:18:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Oct 14 09:18:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Oct 14 09:18:52 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Oct 14 09:18:53 compute-0 nova_compute[259627]: 2025-10-14 09:18:53.138 2 DEBUG nova.network.neutron [-] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:18:53 compute-0 nova_compute[259627]: 2025-10-14 09:18:53.170 2 INFO nova.compute.manager [-] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Took 1.02 seconds to deallocate network for instance.
Oct 14 09:18:53 compute-0 nova_compute[259627]: 2025-10-14 09:18:53.225 2 DEBUG oslo_concurrency.lockutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:53 compute-0 nova_compute[259627]: 2025-10-14 09:18:53.226 2 DEBUG oslo_concurrency.lockutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:53 compute-0 nova_compute[259627]: 2025-10-14 09:18:53.266 2 DEBUG nova.compute.manager [req-2268a940-ac40-4480-8d92-9558e99b299c req-1076ee9e-5820-40d8-8bbe-adbfe590a762 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Received event network-vif-deleted-da41936a-3d81-49ed-9021-22d56f07b75b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:18:53 compute-0 sshd-session[370533]: Failed password for invalid user admin from 188.150.249.96 port 57382 ssh2
Oct 14 09:18:53 compute-0 nova_compute[259627]: 2025-10-14 09:18:53.349 2 DEBUG oslo_concurrency.processutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:18:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:53.418 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2 2001:db8::f816:3eff:fe43:13a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9122ea86-ed84-4854-b541-8d32cf1d8ec5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09fe180c-b874-4a50-95c9-e16dfe99d5f0) old=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:18:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:53.420 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09fe180c-b874-4a50-95c9-e16dfe99d5f0 in datapath 1ef50a42-0521-4feb-8502-83f9d20484ce updated
Oct 14 09:18:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:53.422 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef50a42-0521-4feb-8502-83f9d20484ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:18:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:53.422 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[923c9886-64e8-4299-949e-3d378d2f4011]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:53 compute-0 ceph-mon[74249]: pgmap v1951: 305 pgs: 305 active+clean; 200 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 193 KiB/s rd, 639 KiB/s wr, 140 op/s
Oct 14 09:18:53 compute-0 ceph-mon[74249]: osdmap e272: 3 total, 3 up, 3 in
Oct 14 09:18:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:18:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/735234365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:18:53 compute-0 nova_compute[259627]: 2025-10-14 09:18:53.808 2 DEBUG oslo_concurrency.processutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:18:53 compute-0 nova_compute[259627]: 2025-10-14 09:18:53.817 2 DEBUG nova.compute.provider_tree [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:18:53 compute-0 nova_compute[259627]: 2025-10-14 09:18:53.834 2 DEBUG nova.scheduler.client.report [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:18:53 compute-0 nova_compute[259627]: 2025-10-14 09:18:53.855 2 DEBUG oslo_concurrency.lockutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:53 compute-0 nova_compute[259627]: 2025-10-14 09:18:53.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:53 compute-0 nova_compute[259627]: 2025-10-14 09:18:53.884 2 INFO nova.scheduler.client.report [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance 78ff2018-e6dc-4337-8a74-90e5a3963a12
Oct 14 09:18:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Oct 14 09:18:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Oct 14 09:18:53 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Oct 14 09:18:53 compute-0 nova_compute[259627]: 2025-10-14 09:18:53.957 2 DEBUG oslo_concurrency.lockutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 200 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 30 KiB/s wr, 89 op/s
Oct 14 09:18:54 compute-0 nova_compute[259627]: 2025-10-14 09:18:54.168 2 DEBUG nova.compute.manager [req-c562ec38-a6c3-42be-8860-5a6c4fb79d74 req-1f552cd4-9475-4afb-b0ff-2f3de10e2384 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Received event network-vif-plugged-da41936a-3d81-49ed-9021-22d56f07b75b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:18:54 compute-0 nova_compute[259627]: 2025-10-14 09:18:54.169 2 DEBUG oslo_concurrency.lockutils [req-c562ec38-a6c3-42be-8860-5a6c4fb79d74 req-1f552cd4-9475-4afb-b0ff-2f3de10e2384 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:54 compute-0 nova_compute[259627]: 2025-10-14 09:18:54.170 2 DEBUG oslo_concurrency.lockutils [req-c562ec38-a6c3-42be-8860-5a6c4fb79d74 req-1f552cd4-9475-4afb-b0ff-2f3de10e2384 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:54 compute-0 nova_compute[259627]: 2025-10-14 09:18:54.170 2 DEBUG oslo_concurrency.lockutils [req-c562ec38-a6c3-42be-8860-5a6c4fb79d74 req-1f552cd4-9475-4afb-b0ff-2f3de10e2384 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:54 compute-0 nova_compute[259627]: 2025-10-14 09:18:54.170 2 DEBUG nova.compute.manager [req-c562ec38-a6c3-42be-8860-5a6c4fb79d74 req-1f552cd4-9475-4afb-b0ff-2f3de10e2384 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] No waiting events found dispatching network-vif-plugged-da41936a-3d81-49ed-9021-22d56f07b75b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:18:54 compute-0 nova_compute[259627]: 2025-10-14 09:18:54.171 2 WARNING nova.compute.manager [req-c562ec38-a6c3-42be-8860-5a6c4fb79d74 req-1f552cd4-9475-4afb-b0ff-2f3de10e2384 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Received unexpected event network-vif-plugged-da41936a-3d81-49ed-9021-22d56f07b75b for instance with vm_state deleted and task_state None.
Oct 14 09:18:54 compute-0 nova_compute[259627]: 2025-10-14 09:18:54.312 2 INFO nova.virt.libvirt.driver [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Deleting instance files /var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982_del
Oct 14 09:18:54 compute-0 nova_compute[259627]: 2025-10-14 09:18:54.313 2 INFO nova.virt.libvirt.driver [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Deletion of /var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982_del complete
Oct 14 09:18:54 compute-0 nova_compute[259627]: 2025-10-14 09:18:54.382 2 INFO nova.compute.manager [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Took 1.91 seconds to destroy the instance on the hypervisor.
Oct 14 09:18:54 compute-0 nova_compute[259627]: 2025-10-14 09:18:54.383 2 DEBUG oslo.service.loopingcall [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:18:54 compute-0 nova_compute[259627]: 2025-10-14 09:18:54.384 2 DEBUG nova.compute.manager [-] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:18:54 compute-0 nova_compute[259627]: 2025-10-14 09:18:54.384 2 DEBUG nova.network.neutron [-] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:18:54 compute-0 nova_compute[259627]: 2025-10-14 09:18:54.544 2 DEBUG nova.network.neutron [-] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:18:54 compute-0 nova_compute[259627]: 2025-10-14 09:18:54.558 2 DEBUG nova.network.neutron [-] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:18:54 compute-0 nova_compute[259627]: 2025-10-14 09:18:54.593 2 INFO nova.compute.manager [-] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Took 0.21 seconds to deallocate network for instance.
Oct 14 09:18:54 compute-0 nova_compute[259627]: 2025-10-14 09:18:54.640 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:54 compute-0 nova_compute[259627]: 2025-10-14 09:18:54.641 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:54 compute-0 podman[370936]: 2025-10-14 09:18:54.690556435 +0000 UTC m=+0.090739428 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:18:54 compute-0 podman[370935]: 2025-10-14 09:18:54.70337579 +0000 UTC m=+0.108070914 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Oct 14 09:18:54 compute-0 nova_compute[259627]: 2025-10-14 09:18:54.737 2 DEBUG oslo_concurrency.processutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:18:54 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/735234365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:18:54 compute-0 ceph-mon[74249]: osdmap e273: 3 total, 3 up, 3 in
Oct 14 09:18:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:18:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3536847294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:18:55 compute-0 nova_compute[259627]: 2025-10-14 09:18:55.235 2 DEBUG oslo_concurrency.processutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:18:55 compute-0 nova_compute[259627]: 2025-10-14 09:18:55.241 2 DEBUG nova.compute.provider_tree [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:18:55 compute-0 nova_compute[259627]: 2025-10-14 09:18:55.255 2 DEBUG nova.scheduler.client.report [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:18:55 compute-0 nova_compute[259627]: 2025-10-14 09:18:55.307 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:55 compute-0 nova_compute[259627]: 2025-10-14 09:18:55.335 2 INFO nova.scheduler.client.report [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Deleted allocations for instance 333c933a-d8e8-42b0-ab77-72546d8ab982
Oct 14 09:18:55 compute-0 nova_compute[259627]: 2025-10-14 09:18:55.404 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "333c933a-d8e8-42b0-ab77-72546d8ab982" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:55 compute-0 sshd-session[370533]: Connection closed by invalid user admin 188.150.249.96 port 57382 [preauth]
Oct 14 09:18:55 compute-0 ceph-mon[74249]: pgmap v1954: 305 pgs: 305 active+clean; 200 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 30 KiB/s wr, 89 op/s
Oct 14 09:18:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3536847294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:18:55 compute-0 nova_compute[259627]: 2025-10-14 09:18:55.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:18:55 compute-0 nova_compute[259627]: 2025-10-14 09:18:55.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 14 09:18:56 compute-0 nova_compute[259627]: 2025-10-14 09:18:55.999 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 14 09:18:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 167 KiB/s rd, 35 KiB/s wr, 226 op/s
Oct 14 09:18:56 compute-0 ovn_controller[152662]: 2025-10-14T09:18:56Z|01182|binding|INFO|Releasing lport 437783e2-5dd6-4d07-bfc8-e4c0a4808267 from this chassis (sb_readonly=0)
Oct 14 09:18:56 compute-0 nova_compute[259627]: 2025-10-14 09:18:56.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:56 compute-0 nova_compute[259627]: 2025-10-14 09:18:56.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.276 2 DEBUG nova.compute.manager [req-36ec7e97-fa16-4582-80ba-a5c71b88a359 req-8aba9046-3106-4e0e-bbbf-bc91af6a679d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received event network-changed-9053aa9d-2747-4b65-9480-c8d5d0c126fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.276 2 DEBUG nova.compute.manager [req-36ec7e97-fa16-4582-80ba-a5c71b88a359 req-8aba9046-3106-4e0e-bbbf-bc91af6a679d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Refreshing instance network info cache due to event network-changed-9053aa9d-2747-4b65-9480-c8d5d0c126fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.277 2 DEBUG oslo_concurrency.lockutils [req-36ec7e97-fa16-4582-80ba-a5c71b88a359 req-8aba9046-3106-4e0e-bbbf-bc91af6a679d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.277 2 DEBUG oslo_concurrency.lockutils [req-36ec7e97-fa16-4582-80ba-a5c71b88a359 req-8aba9046-3106-4e0e-bbbf-bc91af6a679d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.277 2 DEBUG nova.network.neutron [req-36ec7e97-fa16-4582-80ba-a5c71b88a359 req-8aba9046-3106-4e0e-bbbf-bc91af6a679d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Refreshing network info cache for port 9053aa9d-2747-4b65-9480-c8d5d0c126fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.386 2 DEBUG oslo_concurrency.lockutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "22c6f034-c238-499b-8b07-1c0f5879297e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.388 2 DEBUG oslo_concurrency.lockutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.388 2 DEBUG oslo_concurrency.lockutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.389 2 DEBUG oslo_concurrency.lockutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.389 2 DEBUG oslo_concurrency.lockutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.391 2 INFO nova.compute.manager [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Terminating instance
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.394 2 DEBUG nova.compute.manager [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:18:57 compute-0 kernel: tap9053aa9d-27 (unregistering): left promiscuous mode
Oct 14 09:18:57 compute-0 NetworkManager[44885]: <info>  [1760433537.4581] device (tap9053aa9d-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:18:57 compute-0 ovn_controller[152662]: 2025-10-14T09:18:57Z|01183|binding|INFO|Releasing lport 9053aa9d-2747-4b65-9480-c8d5d0c126fc from this chassis (sb_readonly=0)
Oct 14 09:18:57 compute-0 ovn_controller[152662]: 2025-10-14T09:18:57Z|01184|binding|INFO|Setting lport 9053aa9d-2747-4b65-9480-c8d5d0c126fc down in Southbound
Oct 14 09:18:57 compute-0 ovn_controller[152662]: 2025-10-14T09:18:57Z|01185|binding|INFO|Removing iface tap9053aa9d-27 ovn-installed in OVS
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.482 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:68:6d 10.100.0.12'], port_security=['fa:16:3e:e4:68:6d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '22c6f034-c238-499b-8b07-1c0f5879297e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ac21096-7818-4990-8868-baedcdcf8f83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f54993ea-ba9b-4930-8e05-f68e5e27d0b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8cda2976-3075-437c-b2bd-7362e360206b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=9053aa9d-2747-4b65-9480-c8d5d0c126fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:18:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.483 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 9053aa9d-2747-4b65-9480-c8d5d0c126fc in datapath 6ac21096-7818-4990-8868-baedcdcf8f83 unbound from our chassis
Oct 14 09:18:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.484 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ac21096-7818-4990-8868-baedcdcf8f83, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:18:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.487 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2063a0e1-0346-423d-9275-252fb46cc342]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.488 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83 namespace which is not needed anymore
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:57 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Oct 14 09:18:57 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006e.scope: Consumed 16.315s CPU time.
Oct 14 09:18:57 compute-0 systemd-machined[214636]: Machine qemu-140-instance-0000006e terminated.
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.644 2 INFO nova.virt.libvirt.driver [-] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Instance destroyed successfully.
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.645 2 DEBUG nova.objects.instance [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid 22c6f034-c238-499b-8b07-1c0f5879297e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.663 2 DEBUG nova.virt.libvirt.vif [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-620217933',display_name='tempest-TestNetworkBasicOps-server-620217933',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-620217933',id=110,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFmMenfL1sUYkbBrkES3junvmmfxbYlPSkqdfU7GziKOgK8Al6Uo4AuziFU4SUs4TqWyCWrEHu5DJJsxFuoF8JeKv+GpLFEK8bJLhTgJlxj4kUg8oBgVMIWKSTqwaqC6zA==',key_name='tempest-TestNetworkBasicOps-2091857695',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:17:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ol26b3n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:17:50Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=22c6f034-c238-499b-8b07-1c0f5879297e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.664 2 DEBUG nova.network.os_vif_util [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.665 2 DEBUG nova.network.os_vif_util [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:68:6d,bridge_name='br-int',has_traffic_filtering=True,id=9053aa9d-2747-4b65-9480-c8d5d0c126fc,network=Network(6ac21096-7818-4990-8868-baedcdcf8f83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9053aa9d-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.665 2 DEBUG os_vif [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:68:6d,bridge_name='br-int',has_traffic_filtering=True,id=9053aa9d-2747-4b65-9480-c8d5d0c126fc,network=Network(6ac21096-7818-4990-8868-baedcdcf8f83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9053aa9d-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.669 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9053aa9d-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.676 2 INFO os_vif [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:68:6d,bridge_name='br-int',has_traffic_filtering=True,id=9053aa9d-2747-4b65-9480-c8d5d0c126fc,network=Network(6ac21096-7818-4990-8868-baedcdcf8f83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9053aa9d-27')
Oct 14 09:18:57 compute-0 neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83[368676]: [NOTICE]   (368680) : haproxy version is 2.8.14-c23fe91
Oct 14 09:18:57 compute-0 neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83[368676]: [NOTICE]   (368680) : path to executable is /usr/sbin/haproxy
Oct 14 09:18:57 compute-0 neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83[368676]: [WARNING]  (368680) : Exiting Master process...
Oct 14 09:18:57 compute-0 neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83[368676]: [WARNING]  (368680) : Exiting Master process...
Oct 14 09:18:57 compute-0 neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83[368676]: [ALERT]    (368680) : Current worker (368682) exited with code 143 (Terminated)
Oct 14 09:18:57 compute-0 neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83[368676]: [WARNING]  (368680) : All workers exited. Exiting... (0)
Oct 14 09:18:57 compute-0 systemd[1]: libpod-6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd.scope: Deactivated successfully.
Oct 14 09:18:57 compute-0 podman[371028]: 2025-10-14 09:18:57.722139522 +0000 UTC m=+0.069265858 container died 6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:18:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd-userdata-shm.mount: Deactivated successfully.
Oct 14 09:18:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1f2cda8d9a484751f468c0a5f14364c4cc3b93a764e75f80052eecd6130f987-merged.mount: Deactivated successfully.
Oct 14 09:18:57 compute-0 podman[371028]: 2025-10-14 09:18:57.76831728 +0000 UTC m=+0.115443616 container cleanup 6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 09:18:57 compute-0 ceph-mon[74249]: pgmap v1955: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 167 KiB/s rd, 35 KiB/s wr, 226 op/s
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.785 2 DEBUG nova.compute.manager [req-e290127c-4a2b-4718-8859-93ff7eeb1f3e req-3d71ee5d-dcd4-47f9-aebc-c35eb3d66913 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received event network-vif-unplugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.785 2 DEBUG oslo_concurrency.lockutils [req-e290127c-4a2b-4718-8859-93ff7eeb1f3e req-3d71ee5d-dcd4-47f9-aebc-c35eb3d66913 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.786 2 DEBUG oslo_concurrency.lockutils [req-e290127c-4a2b-4718-8859-93ff7eeb1f3e req-3d71ee5d-dcd4-47f9-aebc-c35eb3d66913 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.786 2 DEBUG oslo_concurrency.lockutils [req-e290127c-4a2b-4718-8859-93ff7eeb1f3e req-3d71ee5d-dcd4-47f9-aebc-c35eb3d66913 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.786 2 DEBUG nova.compute.manager [req-e290127c-4a2b-4718-8859-93ff7eeb1f3e req-3d71ee5d-dcd4-47f9-aebc-c35eb3d66913 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] No waiting events found dispatching network-vif-unplugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.787 2 DEBUG nova.compute.manager [req-e290127c-4a2b-4718-8859-93ff7eeb1f3e req-3d71ee5d-dcd4-47f9-aebc-c35eb3d66913 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received event network-vif-unplugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:18:57 compute-0 systemd[1]: libpod-conmon-6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd.scope: Deactivated successfully.
Oct 14 09:18:57 compute-0 podman[371079]: 2025-10-14 09:18:57.861847216 +0000 UTC m=+0.056051823 container remove 6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:18:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.873 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[33c033b2-fa29-4b51-bc6c-3025872f26a5]: (4, ('Tue Oct 14 09:18:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83 (6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd)\n6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd\nTue Oct 14 09:18:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83 (6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd)\n6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.876 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[68d4483d-4762-4194-9343-5308b3c1fdde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.877 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ac21096-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:57 compute-0 kernel: tap6ac21096-70: left promiscuous mode
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.890 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[da855d8f-e66b-4da8-bbd7-441fbe8147ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:18:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Oct 14 09:18:57 compute-0 nova_compute[259627]: 2025-10-14 09:18:57.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Oct 14 09:18:57 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Oct 14 09:18:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.917 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[12a56123-e1b7-47c4-90e0-007ea72cb551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.918 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9e74769b-03f5-4ac3-9d7d-f6673de6bd9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.938 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[98047527-5270-445c-a482-1b67eeba8a18]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724791, 'reachable_time': 36676, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371096, 'error': None, 'target': 'ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d6ac21096\x2d7818\x2d4990\x2d8868\x2dbaedcdcf8f83.mount: Deactivated successfully.
Oct 14 09:18:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.940 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:18:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.940 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4b4855-aa52-48a1-9032-e933b7f3e8b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 8.0 KiB/s wr, 142 op/s
Oct 14 09:18:58 compute-0 nova_compute[259627]: 2025-10-14 09:18:58.175 2 INFO nova.virt.libvirt.driver [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Deleting instance files /var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e_del
Oct 14 09:18:58 compute-0 nova_compute[259627]: 2025-10-14 09:18:58.176 2 INFO nova.virt.libvirt.driver [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Deletion of /var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e_del complete
Oct 14 09:18:58 compute-0 nova_compute[259627]: 2025-10-14 09:18:58.226 2 INFO nova.compute.manager [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 14 09:18:58 compute-0 nova_compute[259627]: 2025-10-14 09:18:58.227 2 DEBUG oslo.service.loopingcall [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:18:58 compute-0 nova_compute[259627]: 2025-10-14 09:18:58.228 2 DEBUG nova.compute.manager [-] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:18:58 compute-0 nova_compute[259627]: 2025-10-14 09:18:58.228 2 DEBUG nova.network.neutron [-] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:18:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:58.636 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:13:a8 2001:db8::f816:3eff:fe43:13a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9122ea86-ed84-4854-b541-8d32cf1d8ec5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09fe180c-b874-4a50-95c9-e16dfe99d5f0) old=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2 2001:db8::f816:3eff:fe43:13a8'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:18:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:58.637 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09fe180c-b874-4a50-95c9-e16dfe99d5f0 in datapath 1ef50a42-0521-4feb-8502-83f9d20484ce updated
Oct 14 09:18:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:58.639 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef50a42-0521-4feb-8502-83f9d20484ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:18:58 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:18:58.640 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e68169-40e3-48bf-bd17-b00e52371e99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:18:58 compute-0 nova_compute[259627]: 2025-10-14 09:18:58.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:18:58 compute-0 ceph-mon[74249]: osdmap e274: 3 total, 3 up, 3 in
Oct 14 09:18:58 compute-0 ceph-mon[74249]: pgmap v1957: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 8.0 KiB/s wr, 142 op/s
Oct 14 09:18:59 compute-0 nova_compute[259627]: 2025-10-14 09:18:59.383 2 DEBUG nova.network.neutron [req-36ec7e97-fa16-4582-80ba-a5c71b88a359 req-8aba9046-3106-4e0e-bbbf-bc91af6a679d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updated VIF entry in instance network info cache for port 9053aa9d-2747-4b65-9480-c8d5d0c126fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:18:59 compute-0 nova_compute[259627]: 2025-10-14 09:18:59.384 2 DEBUG nova.network.neutron [req-36ec7e97-fa16-4582-80ba-a5c71b88a359 req-8aba9046-3106-4e0e-bbbf-bc91af6a679d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updating instance_info_cache with network_info: [{"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:18:59 compute-0 nova_compute[259627]: 2025-10-14 09:18:59.405 2 DEBUG nova.network.neutron [-] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:18:59 compute-0 nova_compute[259627]: 2025-10-14 09:18:59.408 2 DEBUG oslo_concurrency.lockutils [req-36ec7e97-fa16-4582-80ba-a5c71b88a359 req-8aba9046-3106-4e0e-bbbf-bc91af6a679d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:18:59 compute-0 nova_compute[259627]: 2025-10-14 09:18:59.423 2 INFO nova.compute.manager [-] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Took 1.20 seconds to deallocate network for instance.
Oct 14 09:18:59 compute-0 nova_compute[259627]: 2025-10-14 09:18:59.484 2 DEBUG oslo_concurrency.lockutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:18:59 compute-0 nova_compute[259627]: 2025-10-14 09:18:59.484 2 DEBUG oslo_concurrency.lockutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:18:59 compute-0 nova_compute[259627]: 2025-10-14 09:18:59.540 2 DEBUG oslo_concurrency.processutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:19:00 compute-0 nova_compute[259627]: 2025-10-14 09:19:00.010 2 DEBUG nova.compute.manager [req-583e6b8f-617e-4e3a-8b1d-d2d9d9263cc0 req-d1207df6-d717-4783-9b2d-106b7f31f402 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received event network-vif-plugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:19:00 compute-0 nova_compute[259627]: 2025-10-14 09:19:00.011 2 DEBUG oslo_concurrency.lockutils [req-583e6b8f-617e-4e3a-8b1d-d2d9d9263cc0 req-d1207df6-d717-4783-9b2d-106b7f31f402 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:19:00 compute-0 nova_compute[259627]: 2025-10-14 09:19:00.012 2 DEBUG oslo_concurrency.lockutils [req-583e6b8f-617e-4e3a-8b1d-d2d9d9263cc0 req-d1207df6-d717-4783-9b2d-106b7f31f402 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:19:00 compute-0 nova_compute[259627]: 2025-10-14 09:19:00.012 2 DEBUG oslo_concurrency.lockutils [req-583e6b8f-617e-4e3a-8b1d-d2d9d9263cc0 req-d1207df6-d717-4783-9b2d-106b7f31f402 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:19:00 compute-0 nova_compute[259627]: 2025-10-14 09:19:00.013 2 DEBUG nova.compute.manager [req-583e6b8f-617e-4e3a-8b1d-d2d9d9263cc0 req-d1207df6-d717-4783-9b2d-106b7f31f402 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] No waiting events found dispatching network-vif-plugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:19:00 compute-0 nova_compute[259627]: 2025-10-14 09:19:00.013 2 WARNING nova.compute.manager [req-583e6b8f-617e-4e3a-8b1d-d2d9d9263cc0 req-d1207df6-d717-4783-9b2d-106b7f31f402 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received unexpected event network-vif-plugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc for instance with vm_state deleted and task_state None.
Oct 14 09:19:00 compute-0 nova_compute[259627]: 2025-10-14 09:19:00.014 2 DEBUG nova.compute.manager [req-583e6b8f-617e-4e3a-8b1d-d2d9d9263cc0 req-d1207df6-d717-4783-9b2d-106b7f31f402 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received event network-vif-deleted-9053aa9d-2747-4b65-9480-c8d5d0c126fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:19:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:19:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/204320955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:19:00 compute-0 nova_compute[259627]: 2025-10-14 09:19:00.079 2 DEBUG oslo_concurrency.processutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:19:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 96 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 7.2 KiB/s wr, 127 op/s
Oct 14 09:19:00 compute-0 nova_compute[259627]: 2025-10-14 09:19:00.089 2 DEBUG nova.compute.provider_tree [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:19:00 compute-0 nova_compute[259627]: 2025-10-14 09:19:00.104 2 DEBUG nova.scheduler.client.report [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:19:00 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/204320955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:19:00 compute-0 nova_compute[259627]: 2025-10-14 09:19:00.125 2 DEBUG oslo_concurrency.lockutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:19:00 compute-0 nova_compute[259627]: 2025-10-14 09:19:00.161 2 INFO nova.scheduler.client.report [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance 22c6f034-c238-499b-8b07-1c0f5879297e
Oct 14 09:19:00 compute-0 nova_compute[259627]: 2025-10-14 09:19:00.225 2 DEBUG oslo_concurrency.lockutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:19:01 compute-0 ceph-mon[74249]: pgmap v1958: 305 pgs: 305 active+clean; 96 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 7.2 KiB/s wr, 127 op/s
Oct 14 09:19:01 compute-0 anacron[27473]: Job `cron.monthly' started
Oct 14 09:19:01 compute-0 anacron[27473]: Job `cron.monthly' terminated
Oct 14 09:19:01 compute-0 anacron[27473]: Normal exit (3 jobs run)
Oct 14 09:19:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 7.6 KiB/s wr, 145 op/s
Oct 14 09:19:02 compute-0 nova_compute[259627]: 2025-10-14 09:19:02.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:19:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:19:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:19:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:19:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:19:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:19:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:19:03 compute-0 ceph-mon[74249]: pgmap v1959: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 7.6 KiB/s wr, 145 op/s
Oct 14 09:19:03 compute-0 nova_compute[259627]: 2025-10-14 09:19:03.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 6.2 KiB/s wr, 118 op/s
Oct 14 09:19:04 compute-0 nova_compute[259627]: 2025-10-14 09:19:04.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:04 compute-0 nova_compute[259627]: 2025-10-14 09:19:04.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:04.929 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:19:04 compute-0 nova_compute[259627]: 2025-10-14 09:19:04.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:04.930 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:19:05 compute-0 ceph-mon[74249]: pgmap v1960: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 6.2 KiB/s wr, 118 op/s
Oct 14 09:19:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:19:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2359575977' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:19:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:19:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2359575977' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:19:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Oct 14 09:19:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2359575977' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:19:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2359575977' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:19:06 compute-0 unix_chkpwd[371123]: password check failed for user (root)
Oct 14 09:19:06 compute-0 sshd-session[370996]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96  user=root
Oct 14 09:19:06 compute-0 nova_compute[259627]: 2025-10-14 09:19:06.664 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433531.6633725, 78ff2018-e6dc-4337-8a74-90e5a3963a12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:19:06 compute-0 nova_compute[259627]: 2025-10-14 09:19:06.666 2 INFO nova.compute.manager [-] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] VM Stopped (Lifecycle Event)
Oct 14 09:19:06 compute-0 nova_compute[259627]: 2025-10-14 09:19:06.719 2 DEBUG nova.compute.manager [None req-0987e0f5-984f-4fc5-a00b-5dbb0e211e4b - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:19:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:07.034 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:19:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:07.034 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:19:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:07.034 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:19:07 compute-0 ceph-mon[74249]: pgmap v1961: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Oct 14 09:19:07 compute-0 nova_compute[259627]: 2025-10-14 09:19:07.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:07 compute-0 podman[371125]: 2025-10-14 09:19:07.686764335 +0000 UTC m=+0.085807645 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 14 09:19:07 compute-0 nova_compute[259627]: 2025-10-14 09:19:07.696 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433532.6944122, 333c933a-d8e8-42b0-ab77-72546d8ab982 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:19:07 compute-0 nova_compute[259627]: 2025-10-14 09:19:07.696 2 INFO nova.compute.manager [-] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] VM Stopped (Lifecycle Event)
Oct 14 09:19:07 compute-0 podman[371124]: 2025-10-14 09:19:07.721706646 +0000 UTC m=+0.126111009 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:19:07 compute-0 nova_compute[259627]: 2025-10-14 09:19:07.734 2 DEBUG nova.compute.manager [None req-4d1458ff-3f35-4130-9dd7-69f6239af731 - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:19:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:19:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Oct 14 09:19:08 compute-0 sshd-session[370996]: Failed password for root from 188.150.249.96 port 60804 ssh2
Oct 14 09:19:08 compute-0 nova_compute[259627]: 2025-10-14 09:19:08.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:08.933 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:19:09 compute-0 ceph-mon[74249]: pgmap v1962: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Oct 14 09:19:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:19:10 compute-0 sshd-session[370996]: Connection closed by authenticating user root 188.150.249.96 port 60804 [preauth]
Oct 14 09:19:11 compute-0 ceph-mon[74249]: pgmap v1963: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:19:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Oct 14 09:19:12 compute-0 nova_compute[259627]: 2025-10-14 09:19:12.642 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433537.641699, 22c6f034-c238-499b-8b07-1c0f5879297e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:19:12 compute-0 nova_compute[259627]: 2025-10-14 09:19:12.643 2 INFO nova.compute.manager [-] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] VM Stopped (Lifecycle Event)
Oct 14 09:19:12 compute-0 nova_compute[259627]: 2025-10-14 09:19:12.671 2 DEBUG nova.compute.manager [None req-d01b572a-a589-4a1f-9397-6d4943a9b6b1 - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:19:12 compute-0 nova_compute[259627]: 2025-10-14 09:19:12.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:19:13 compute-0 ceph-mon[74249]: pgmap v1964: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Oct 14 09:19:13 compute-0 nova_compute[259627]: 2025-10-14 09:19:13.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:19:15 compute-0 ceph-mon[74249]: pgmap v1965: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:19:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:19:17 compute-0 ceph-mon[74249]: pgmap v1966: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:19:17 compute-0 nova_compute[259627]: 2025-10-14 09:19:17.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:19:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:19:18 compute-0 nova_compute[259627]: 2025-10-14 09:19:18.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:19 compute-0 ceph-mon[74249]: pgmap v1967: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:19:19 compute-0 sshd-session[371169]: Invalid user orangepi from 188.150.249.96 port 35972
Oct 14 09:19:19 compute-0 nova_compute[259627]: 2025-10-14 09:19:19.918 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:19:19 compute-0 nova_compute[259627]: 2025-10-14 09:19:19.919 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:19:19 compute-0 sshd-session[371169]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:19:19 compute-0 sshd-session[371169]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96
Oct 14 09:19:19 compute-0 nova_compute[259627]: 2025-10-14 09:19:19.943 2 DEBUG nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:19:20 compute-0 nova_compute[259627]: 2025-10-14 09:19:20.050 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:19:20 compute-0 nova_compute[259627]: 2025-10-14 09:19:20.051 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:19:20 compute-0 nova_compute[259627]: 2025-10-14 09:19:20.062 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:19:20 compute-0 nova_compute[259627]: 2025-10-14 09:19:20.062 2 INFO nova.compute.claims [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:19:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:19:20 compute-0 nova_compute[259627]: 2025-10-14 09:19:20.207 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:19:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:19:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2266580968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:19:20 compute-0 nova_compute[259627]: 2025-10-14 09:19:20.688 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:19:20 compute-0 nova_compute[259627]: 2025-10-14 09:19:20.700 2 DEBUG nova.compute.provider_tree [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:19:20 compute-0 nova_compute[259627]: 2025-10-14 09:19:20.717 2 DEBUG nova.scheduler.client.report [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:19:20 compute-0 nova_compute[259627]: 2025-10-14 09:19:20.748 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:19:20 compute-0 nova_compute[259627]: 2025-10-14 09:19:20.749 2 DEBUG nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:19:20 compute-0 nova_compute[259627]: 2025-10-14 09:19:20.802 2 DEBUG nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:19:20 compute-0 nova_compute[259627]: 2025-10-14 09:19:20.803 2 DEBUG nova.network.neutron [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:19:20 compute-0 nova_compute[259627]: 2025-10-14 09:19:20.829 2 INFO nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:19:20 compute-0 nova_compute[259627]: 2025-10-14 09:19:20.861 2 DEBUG nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:19:20 compute-0 nova_compute[259627]: 2025-10-14 09:19:20.960 2 DEBUG nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:19:20 compute-0 nova_compute[259627]: 2025-10-14 09:19:20.961 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:19:20 compute-0 nova_compute[259627]: 2025-10-14 09:19:20.962 2 INFO nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Creating image(s)
Oct 14 09:19:20 compute-0 nova_compute[259627]: 2025-10-14 09:19:20.992 2 DEBUG nova.storage.rbd_utils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:19:21 compute-0 nova_compute[259627]: 2025-10-14 09:19:21.030 2 DEBUG nova.storage.rbd_utils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:19:21 compute-0 nova_compute[259627]: 2025-10-14 09:19:21.067 2 DEBUG nova.storage.rbd_utils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:19:21 compute-0 nova_compute[259627]: 2025-10-14 09:19:21.073 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:19:21 compute-0 nova_compute[259627]: 2025-10-14 09:19:21.183 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:19:21 compute-0 nova_compute[259627]: 2025-10-14 09:19:21.185 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:19:21 compute-0 nova_compute[259627]: 2025-10-14 09:19:21.186 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:19:21 compute-0 nova_compute[259627]: 2025-10-14 09:19:21.186 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:19:21 compute-0 nova_compute[259627]: 2025-10-14 09:19:21.222 2 DEBUG nova.storage.rbd_utils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:19:21 compute-0 nova_compute[259627]: 2025-10-14 09:19:21.227 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:19:21 compute-0 ceph-mon[74249]: pgmap v1968: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:19:21 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2266580968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:19:21 compute-0 nova_compute[259627]: 2025-10-14 09:19:21.365 2 DEBUG nova.policy [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:19:21 compute-0 nova_compute[259627]: 2025-10-14 09:19:21.527 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:19:21 compute-0 nova_compute[259627]: 2025-10-14 09:19:21.576 2 DEBUG nova.storage.rbd_utils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:19:21 compute-0 nova_compute[259627]: 2025-10-14 09:19:21.665 2 DEBUG nova.objects.instance [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid e83e1125-7b9a-4ca1-9040-40dbc3e0237b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:19:21 compute-0 nova_compute[259627]: 2025-10-14 09:19:21.679 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:19:21 compute-0 nova_compute[259627]: 2025-10-14 09:19:21.679 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Ensure instance console log exists: /var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:19:21 compute-0 nova_compute[259627]: 2025-10-14 09:19:21.679 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:19:21 compute-0 nova_compute[259627]: 2025-10-14 09:19:21.680 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:19:21 compute-0 nova_compute[259627]: 2025-10-14 09:19:21.680 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:19:22 compute-0 sshd-session[371169]: Failed password for invalid user orangepi from 188.150.249.96 port 35972 ssh2
Oct 14 09:19:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:19:22 compute-0 nova_compute[259627]: 2025-10-14 09:19:22.552 2 DEBUG nova.network.neutron [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Successfully created port: 654f4bc5-29db-40e4-bc4e-bdd325e98e7a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:19:22 compute-0 nova_compute[259627]: 2025-10-14 09:19:22.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:22 compute-0 sshd-session[371169]: Connection closed by invalid user orangepi 188.150.249.96 port 35972 [preauth]
Oct 14 09:19:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:19:23 compute-0 ceph-mon[74249]: pgmap v1969: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:19:23 compute-0 nova_compute[259627]: 2025-10-14 09:19:23.728 2 DEBUG nova.network.neutron [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Successfully updated port: 654f4bc5-29db-40e4-bc4e-bdd325e98e7a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:19:23 compute-0 nova_compute[259627]: 2025-10-14 09:19:23.745 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:19:23 compute-0 nova_compute[259627]: 2025-10-14 09:19:23.745 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:19:23 compute-0 nova_compute[259627]: 2025-10-14 09:19:23.746 2 DEBUG nova.network.neutron [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:19:23 compute-0 nova_compute[259627]: 2025-10-14 09:19:23.869 2 DEBUG nova.compute.manager [req-e5571a34-7a7c-4e37-8fbd-0bbd3a8ab1b4 req-3c616914-8cb0-4814-948b-c05cf85bdce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-changed-654f4bc5-29db-40e4-bc4e-bdd325e98e7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:19:23 compute-0 nova_compute[259627]: 2025-10-14 09:19:23.870 2 DEBUG nova.compute.manager [req-e5571a34-7a7c-4e37-8fbd-0bbd3a8ab1b4 req-3c616914-8cb0-4814-948b-c05cf85bdce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Refreshing instance network info cache due to event network-changed-654f4bc5-29db-40e4-bc4e-bdd325e98e7a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:19:23 compute-0 nova_compute[259627]: 2025-10-14 09:19:23.870 2 DEBUG oslo_concurrency.lockutils [req-e5571a34-7a7c-4e37-8fbd-0bbd3a8ab1b4 req-3c616914-8cb0-4814-948b-c05cf85bdce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:19:23 compute-0 nova_compute[259627]: 2025-10-14 09:19:23.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:19:24 compute-0 nova_compute[259627]: 2025-10-14 09:19:24.508 2 DEBUG nova.network.neutron [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:19:25 compute-0 ceph-mon[74249]: pgmap v1970: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:19:25 compute-0 podman[371362]: 2025-10-14 09:19:25.680930596 +0000 UTC m=+0.082587317 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 09:19:25 compute-0 podman[371363]: 2025-10-14 09:19:25.692813888 +0000 UTC m=+0.091221449 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.707 2 DEBUG nova.network.neutron [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updating instance_info_cache with network_info: [{"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.731 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.732 2 DEBUG nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Instance network_info: |[{"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.732 2 DEBUG oslo_concurrency.lockutils [req-e5571a34-7a7c-4e37-8fbd-0bbd3a8ab1b4 req-3c616914-8cb0-4814-948b-c05cf85bdce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.732 2 DEBUG nova.network.neutron [req-e5571a34-7a7c-4e37-8fbd-0bbd3a8ab1b4 req-3c616914-8cb0-4814-948b-c05cf85bdce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Refreshing network info cache for port 654f4bc5-29db-40e4-bc4e-bdd325e98e7a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.734 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Start _get_guest_xml network_info=[{"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.739 2 WARNING nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.743 2 DEBUG nova.virt.libvirt.host [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.744 2 DEBUG nova.virt.libvirt.host [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.746 2 DEBUG nova.virt.libvirt.host [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.747 2 DEBUG nova.virt.libvirt.host [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.747 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.747 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.748 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.748 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.748 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.748 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.748 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.749 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.749 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.749 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.749 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.749 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:19:25 compute-0 nova_compute[259627]: 2025-10-14 09:19:25.752 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:19:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:19:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:19:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2240021058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.268 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:19:26 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2240021058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.303 2 DEBUG nova.storage.rbd_utils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.308 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:19:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:19:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3183136731' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.835 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.836 2 DEBUG nova.virt.libvirt.vif [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:19:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-424606386',display_name='tempest-TestNetworkBasicOps-server-424606386',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-424606386',id=113,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ09dxd7Bak+6//QP8udxIh6uhpPchcDdl5+GxU18yI74s0WuknXeiKCI2lJf73VzQCZgNgqmgyCCsXfAOE13LYP90WL3qgd3ZF20sa84Uh+OXGCZ9Hmict7wvQuSqAzQg==',key_name='tempest-TestNetworkBasicOps-918756716',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-watt440w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:19:20Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=e83e1125-7b9a-4ca1-9040-40dbc3e0237b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.837 2 DEBUG nova.network.os_vif_util [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.838 2 DEBUG nova.network.os_vif_util [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:57:80,bridge_name='br-int',has_traffic_filtering=True,id=654f4bc5-29db-40e4-bc4e-bdd325e98e7a,network=Network(414bade6-3739-49f9-bce9-e93105157bbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654f4bc5-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.839 2 DEBUG nova.objects.instance [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid e83e1125-7b9a-4ca1-9040-40dbc3e0237b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.859 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:19:26 compute-0 nova_compute[259627]:   <uuid>e83e1125-7b9a-4ca1-9040-40dbc3e0237b</uuid>
Oct 14 09:19:26 compute-0 nova_compute[259627]:   <name>instance-00000071</name>
Oct 14 09:19:26 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:19:26 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:19:26 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkBasicOps-server-424606386</nova:name>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:19:25</nova:creationTime>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:19:26 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:19:26 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:19:26 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:19:26 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:19:26 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:19:26 compute-0 nova_compute[259627]:         <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:19:26 compute-0 nova_compute[259627]:         <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:19:26 compute-0 nova_compute[259627]:         <nova:port uuid="654f4bc5-29db-40e4-bc4e-bdd325e98e7a">
Oct 14 09:19:26 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:19:26 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:19:26 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <system>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <entry name="serial">e83e1125-7b9a-4ca1-9040-40dbc3e0237b</entry>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <entry name="uuid">e83e1125-7b9a-4ca1-9040-40dbc3e0237b</entry>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     </system>
Oct 14 09:19:26 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:19:26 compute-0 nova_compute[259627]:   <os>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:   </os>
Oct 14 09:19:26 compute-0 nova_compute[259627]:   <features>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:   </features>
Oct 14 09:19:26 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:19:26 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:19:26 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk">
Oct 14 09:19:26 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       </source>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:19:26 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk.config">
Oct 14 09:19:26 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       </source>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:19:26 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:cf:57:80"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <target dev="tap654f4bc5-29"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/console.log" append="off"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <video>
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     </video>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:19:26 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:19:26 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:19:26 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:19:26 compute-0 nova_compute[259627]: </domain>
Oct 14 09:19:26 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.861 2 DEBUG nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Preparing to wait for external event network-vif-plugged-654f4bc5-29db-40e4-bc4e-bdd325e98e7a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.861 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.862 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.863 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.864 2 DEBUG nova.virt.libvirt.vif [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:19:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-424606386',display_name='tempest-TestNetworkBasicOps-server-424606386',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-424606386',id=113,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ09dxd7Bak+6//QP8udxIh6uhpPchcDdl5+GxU18yI74s0WuknXeiKCI2lJf73VzQCZgNgqmgyCCsXfAOE13LYP90WL3qgd3ZF20sa84Uh+OXGCZ9Hmict7wvQuSqAzQg==',key_name='tempest-TestNetworkBasicOps-918756716',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-watt440w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:19:20Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=e83e1125-7b9a-4ca1-9040-40dbc3e0237b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.865 2 DEBUG nova.network.os_vif_util [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.866 2 DEBUG nova.network.os_vif_util [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:57:80,bridge_name='br-int',has_traffic_filtering=True,id=654f4bc5-29db-40e4-bc4e-bdd325e98e7a,network=Network(414bade6-3739-49f9-bce9-e93105157bbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654f4bc5-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.866 2 DEBUG os_vif [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:57:80,bridge_name='br-int',has_traffic_filtering=True,id=654f4bc5-29db-40e4-bc4e-bdd325e98e7a,network=Network(414bade6-3739-49f9-bce9-e93105157bbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654f4bc5-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.869 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.869 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.875 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap654f4bc5-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.876 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap654f4bc5-29, col_values=(('external_ids', {'iface-id': '654f4bc5-29db-40e4-bc4e-bdd325e98e7a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:57:80', 'vm-uuid': 'e83e1125-7b9a-4ca1-9040-40dbc3e0237b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:26 compute-0 NetworkManager[44885]: <info>  [1760433566.8802] manager: (tap654f4bc5-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/478)
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.891 2 INFO os_vif [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:57:80,bridge_name='br-int',has_traffic_filtering=True,id=654f4bc5-29db-40e4-bc4e-bdd325e98e7a,network=Network(414bade6-3739-49f9-bce9-e93105157bbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654f4bc5-29')
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.948 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.949 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.949 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:cf:57:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.950 2 INFO nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Using config drive
Oct 14 09:19:26 compute-0 nova_compute[259627]: 2025-10-14 09:19:26.974 2 DEBUG nova.storage.rbd_utils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:19:27 compute-0 ceph-mon[74249]: pgmap v1971: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:19:27 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3183136731' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:19:27 compute-0 nova_compute[259627]: 2025-10-14 09:19:27.505 2 DEBUG nova.network.neutron [req-e5571a34-7a7c-4e37-8fbd-0bbd3a8ab1b4 req-3c616914-8cb0-4814-948b-c05cf85bdce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updated VIF entry in instance network info cache for port 654f4bc5-29db-40e4-bc4e-bdd325e98e7a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:19:27 compute-0 nova_compute[259627]: 2025-10-14 09:19:27.506 2 DEBUG nova.network.neutron [req-e5571a34-7a7c-4e37-8fbd-0bbd3a8ab1b4 req-3c616914-8cb0-4814-948b-c05cf85bdce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updating instance_info_cache with network_info: [{"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:19:27 compute-0 nova_compute[259627]: 2025-10-14 09:19:27.528 2 DEBUG oslo_concurrency.lockutils [req-e5571a34-7a7c-4e37-8fbd-0bbd3a8ab1b4 req-3c616914-8cb0-4814-948b-c05cf85bdce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:19:27 compute-0 nova_compute[259627]: 2025-10-14 09:19:27.617 2 INFO nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Creating config drive at /var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/disk.config
Oct 14 09:19:27 compute-0 nova_compute[259627]: 2025-10-14 09:19:27.623 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprdgl34lc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:19:27 compute-0 nova_compute[259627]: 2025-10-14 09:19:27.771 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprdgl34lc" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:19:27 compute-0 nova_compute[259627]: 2025-10-14 09:19:27.812 2 DEBUG nova.storage.rbd_utils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:19:27 compute-0 nova_compute[259627]: 2025-10-14 09:19:27.816 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/disk.config e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:19:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:19:28 compute-0 nova_compute[259627]: 2025-10-14 09:19:28.025 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/disk.config e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:19:28 compute-0 nova_compute[259627]: 2025-10-14 09:19:28.026 2 INFO nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Deleting local config drive /var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/disk.config because it was imported into RBD.
Oct 14 09:19:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:19:28 compute-0 kernel: tap654f4bc5-29: entered promiscuous mode
Oct 14 09:19:28 compute-0 NetworkManager[44885]: <info>  [1760433568.1064] manager: (tap654f4bc5-29): new Tun device (/org/freedesktop/NetworkManager/Devices/479)
Oct 14 09:19:28 compute-0 nova_compute[259627]: 2025-10-14 09:19:28.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:28 compute-0 ovn_controller[152662]: 2025-10-14T09:19:28Z|01186|binding|INFO|Claiming lport 654f4bc5-29db-40e4-bc4e-bdd325e98e7a for this chassis.
Oct 14 09:19:28 compute-0 ovn_controller[152662]: 2025-10-14T09:19:28Z|01187|binding|INFO|654f4bc5-29db-40e4-bc4e-bdd325e98e7a: Claiming fa:16:3e:cf:57:80 10.100.0.11
Oct 14 09:19:28 compute-0 nova_compute[259627]: 2025-10-14 09:19:28.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:28 compute-0 nova_compute[259627]: 2025-10-14 09:19:28.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.132 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:57:80 10.100.0.11'], port_security=['fa:16:3e:cf:57:80 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e83e1125-7b9a-4ca1-9040-40dbc3e0237b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-414bade6-3739-49f9-bce9-e93105157bbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': '885d3329-1a6a-4e62-8633-0f83200cd25a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=553b14d5-fce8-4530-b374-6c59ead23d8e, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=654f4bc5-29db-40e4-bc4e-bdd325e98e7a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.133 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 654f4bc5-29db-40e4-bc4e-bdd325e98e7a in datapath 414bade6-3739-49f9-bce9-e93105157bbe bound to our chassis
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.134 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 414bade6-3739-49f9-bce9-e93105157bbe
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.147 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a07615-1a1a-4927-ad36-0b6e381c1257]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.147 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap414bade6-31 in ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.149 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap414bade6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.149 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e17fd673-efaa-43a3-8c97-cd47b2e4fe0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.150 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0acc3fc-64e2-4329-a8aa-47d3dbd6edc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:19:28 compute-0 systemd-udevd[371538]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.161 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[fe008e6a-62e1-435a-a1c6-05ad1d97c816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:19:28 compute-0 systemd-machined[214636]: New machine qemu-143-instance-00000071.
Oct 14 09:19:28 compute-0 NetworkManager[44885]: <info>  [1760433568.1788] device (tap654f4bc5-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:19:28 compute-0 NetworkManager[44885]: <info>  [1760433568.1802] device (tap654f4bc5-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:19:28 compute-0 systemd[1]: Started Virtual Machine qemu-143-instance-00000071.
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.186 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0a53117e-f4fc-4f77-bb2e-74bcdb77da33]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:19:28 compute-0 nova_compute[259627]: 2025-10-14 09:19:28.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:28 compute-0 ovn_controller[152662]: 2025-10-14T09:19:28Z|01188|binding|INFO|Setting lport 654f4bc5-29db-40e4-bc4e-bdd325e98e7a ovn-installed in OVS
Oct 14 09:19:28 compute-0 ovn_controller[152662]: 2025-10-14T09:19:28Z|01189|binding|INFO|Setting lport 654f4bc5-29db-40e4-bc4e-bdd325e98e7a up in Southbound
Oct 14 09:19:28 compute-0 nova_compute[259627]: 2025-10-14 09:19:28.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.225 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[986fbab3-0363-43fb-8520-e61fe8bb2e9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.230 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[205217cd-0a10-4acf-9fc4-e5d636f404d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:19:28 compute-0 NetworkManager[44885]: <info>  [1760433568.2317] manager: (tap414bade6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/480)
Oct 14 09:19:28 compute-0 systemd-udevd[371541]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.276 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e955f50d-2c3a-40ca-8d31-fa824f223a7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.282 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2488c25b-7e41-4b67-ae4a-21314f8b97a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:19:28 compute-0 NetworkManager[44885]: <info>  [1760433568.3118] device (tap414bade6-30): carrier: link connected
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.320 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ef729f4d-c3d5-4101-95c1-11d864003f7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.341 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[01e9963c-4438-45c4-96ae-be39de7dac37]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap414bade6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:05:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734707, 'reachable_time': 22252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371569, 'error': None, 'target': 'ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.360 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1f27a192-92b8-4cdb-a846-4455a59d05dd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe59:598'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734707, 'tstamp': 734707}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371570, 'error': None, 'target': 'ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.387 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aa7e28c3-fe47-4280-ac33-cf11935ecf57]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap414bade6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:05:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734707, 'reachable_time': 22252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371571, 'error': None, 'target': 'ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.436 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5e751b3d-2ef8-4894-8b2b-c2f50c81b2d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.488 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:13:a8 2001:db8:0:1:f816:3eff:fe43:13a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9122ea86-ed84-4854-b541-8d32cf1d8ec5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09fe180c-b874-4a50-95c9-e16dfe99d5f0) old=Port_Binding(mac=['fa:16:3e:43:13:a8 2001:db8::f816:3eff:fe43:13a8'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.536 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0f9c9592-3069-476a-9c59-447e24af1b46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.539 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap414bade6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.539 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.540 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap414bade6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:19:28 compute-0 nova_compute[259627]: 2025-10-14 09:19:28.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:28 compute-0 NetworkManager[44885]: <info>  [1760433568.5437] manager: (tap414bade6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Oct 14 09:19:28 compute-0 kernel: tap414bade6-30: entered promiscuous mode
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.548 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap414bade6-30, col_values=(('external_ids', {'iface-id': '19347ace-27ce-4391-bacc-c18e3400875e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:19:28 compute-0 nova_compute[259627]: 2025-10-14 09:19:28.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:28 compute-0 ovn_controller[152662]: 2025-10-14T09:19:28Z|01190|binding|INFO|Releasing lport 19347ace-27ce-4391-bacc-c18e3400875e from this chassis (sb_readonly=0)
Oct 14 09:19:28 compute-0 nova_compute[259627]: 2025-10-14 09:19:28.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.580 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/414bade6-3739-49f9-bce9-e93105157bbe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/414bade6-3739-49f9-bce9-e93105157bbe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.581 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aa06418b-ab6f-42f8-87f4-8941be593a9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.582 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-414bade6-3739-49f9-bce9-e93105157bbe
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/414bade6-3739-49f9-bce9-e93105157bbe.pid.haproxy
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 414bade6-3739-49f9-bce9-e93105157bbe
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:19:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.583 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe', 'env', 'PROCESS_TAG=haproxy-414bade6-3739-49f9-bce9-e93105157bbe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/414bade6-3739-49f9-bce9-e93105157bbe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:19:28 compute-0 nova_compute[259627]: 2025-10-14 09:19:28.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:29 compute-0 podman[371645]: 2025-10-14 09:19:29.020250037 +0000 UTC m=+0.064446269 container create 3175e13606cf725dfae6bf341102bb1826c5dcac0719b588eec0fdcf66ed7b96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:19:29 compute-0 systemd[1]: Started libpod-conmon-3175e13606cf725dfae6bf341102bb1826c5dcac0719b588eec0fdcf66ed7b96.scope.
Oct 14 09:19:29 compute-0 podman[371645]: 2025-10-14 09:19:28.984483546 +0000 UTC m=+0.028679768 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:19:29 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:19:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ae02de95adf7e48c08e1cd0b516979c12355cab1b044706270723c27c91c4aa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:19:29 compute-0 podman[371645]: 2025-10-14 09:19:29.103597452 +0000 UTC m=+0.147793694 container init 3175e13606cf725dfae6bf341102bb1826c5dcac0719b588eec0fdcf66ed7b96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:19:29 compute-0 podman[371645]: 2025-10-14 09:19:29.110407359 +0000 UTC m=+0.154603571 container start 3175e13606cf725dfae6bf341102bb1826c5dcac0719b588eec0fdcf66ed7b96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:19:29 compute-0 neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe[371660]: [NOTICE]   (371664) : New worker (371666) forked
Oct 14 09:19:29 compute-0 neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe[371660]: [NOTICE]   (371664) : Loading success.
Oct 14 09:19:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:29.172 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09fe180c-b874-4a50-95c9-e16dfe99d5f0 in datapath 1ef50a42-0521-4feb-8502-83f9d20484ce updated
Oct 14 09:19:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:29.173 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef50a42-0521-4feb-8502-83f9d20484ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:19:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:19:29.174 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[88b0678c-aa0f-403d-aab9-67e253460672]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:19:29 compute-0 ceph-mon[74249]: pgmap v1972: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:19:29 compute-0 nova_compute[259627]: 2025-10-14 09:19:29.420 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433569.4197912, e83e1125-7b9a-4ca1-9040-40dbc3e0237b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:19:29 compute-0 nova_compute[259627]: 2025-10-14 09:19:29.422 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] VM Started (Lifecycle Event)
Oct 14 09:19:29 compute-0 nova_compute[259627]: 2025-10-14 09:19:29.457 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:19:29 compute-0 nova_compute[259627]: 2025-10-14 09:19:29.463 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433569.4199967, e83e1125-7b9a-4ca1-9040-40dbc3e0237b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:19:29 compute-0 nova_compute[259627]: 2025-10-14 09:19:29.463 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] VM Paused (Lifecycle Event)
Oct 14 09:19:29 compute-0 nova_compute[259627]: 2025-10-14 09:19:29.498 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:19:29 compute-0 nova_compute[259627]: 2025-10-14 09:19:29.502 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:19:29 compute-0 nova_compute[259627]: 2025-10-14 09:19:29.533 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:19:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.028 2 DEBUG nova.compute.manager [req-cc72bfa0-bc47-4318-9ecc-f70e24ed8df8 req-a3dc718a-9450-4ae4-a228-d64096713bc2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-vif-plugged-654f4bc5-29db-40e4-bc4e-bdd325e98e7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.029 2 DEBUG oslo_concurrency.lockutils [req-cc72bfa0-bc47-4318-9ecc-f70e24ed8df8 req-a3dc718a-9450-4ae4-a228-d64096713bc2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.029 2 DEBUG oslo_concurrency.lockutils [req-cc72bfa0-bc47-4318-9ecc-f70e24ed8df8 req-a3dc718a-9450-4ae4-a228-d64096713bc2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.030 2 DEBUG oslo_concurrency.lockutils [req-cc72bfa0-bc47-4318-9ecc-f70e24ed8df8 req-a3dc718a-9450-4ae4-a228-d64096713bc2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.030 2 DEBUG nova.compute.manager [req-cc72bfa0-bc47-4318-9ecc-f70e24ed8df8 req-a3dc718a-9450-4ae4-a228-d64096713bc2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Processing event network-vif-plugged-654f4bc5-29db-40e4-bc4e-bdd325e98e7a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.031 2 DEBUG nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.036 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433571.0363295, e83e1125-7b9a-4ca1-9040-40dbc3e0237b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.037 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] VM Resumed (Lifecycle Event)
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.041 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.046 2 INFO nova.virt.libvirt.driver [-] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Instance spawned successfully.
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.047 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.063 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.069 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.082 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.083 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.084 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.085 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.085 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.086 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.124 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.179 2 INFO nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Took 10.22 seconds to spawn the instance on the hypervisor.
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.180 2 DEBUG nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.262 2 INFO nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Took 11.24 seconds to build instance.
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.281 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:19:31 compute-0 ceph-mon[74249]: pgmap v1973: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 09:19:31 compute-0 sudo[371675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:19:31 compute-0 sudo[371675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:19:31 compute-0 sudo[371675]: pam_unix(sudo:session): session closed for user root
Oct 14 09:19:31 compute-0 sudo[371700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:19:31 compute-0 sudo[371700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:19:31 compute-0 sudo[371700]: pam_unix(sudo:session): session closed for user root
Oct 14 09:19:31 compute-0 sudo[371725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:19:31 compute-0 sudo[371725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:19:31 compute-0 sudo[371725]: pam_unix(sudo:session): session closed for user root
Oct 14 09:19:31 compute-0 sudo[371750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:19:31 compute-0 sudo[371750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:19:31 compute-0 nova_compute[259627]: 2025-10-14 09:19:31.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 09:19:32 compute-0 sudo[371750]: pam_unix(sudo:session): session closed for user root
Oct 14 09:19:32 compute-0 sshd-session[371360]: Invalid user support from 188.150.249.96 port 38542
Oct 14 09:19:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:19:32 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:19:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:19:32 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:19:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:19:32 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:19:32 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 759961b7-f397-4428-9d7a-0c600d867455 does not exist
Oct 14 09:19:32 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 8a7faf60-85e3-4b06-bb9a-c806743a4138 does not exist
Oct 14 09:19:32 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev bf61d6d9-c2ad-4459-93a1-4639411e629e does not exist
Oct 14 09:19:32 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:19:32 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:19:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:19:32 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:19:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:19:32 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:19:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:19:32 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:19:32 compute-0 sudo[371806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:19:32 compute-0 sudo[371806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:19:32 compute-0 sudo[371806]: pam_unix(sudo:session): session closed for user root
Oct 14 09:19:32 compute-0 sudo[371831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:19:32 compute-0 sudo[371831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:19:32 compute-0 sudo[371831]: pam_unix(sudo:session): session closed for user root
Oct 14 09:19:32 compute-0 sudo[371856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:19:32 compute-0 sudo[371856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:19:32 compute-0 sudo[371856]: pam_unix(sudo:session): session closed for user root
Oct 14 09:19:32 compute-0 sudo[371881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:19:32 compute-0 sudo[371881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:19:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:19:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:19:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:19:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:19:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:19:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:19:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:19:32
Oct 14 09:19:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:19:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:19:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct 14 09:19:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:19:32 compute-0 sshd-session[371360]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:19:32 compute-0 sshd-session[371360]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96
Oct 14 09:19:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.000 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.001 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.043 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.043 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.043 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.043 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.044 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:19:33 compute-0 podman[371946]: 2025-10-14 09:19:33.115910601 +0000 UTC m=+0.038525981 container create 38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.128 2 DEBUG nova.compute.manager [req-ef55a6ed-f1a4-46ff-a6de-16a4ea5efa77 req-815baedc-1dbe-4a78-b428-8617c0ab08cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-vif-plugged-654f4bc5-29db-40e4-bc4e-bdd325e98e7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.128 2 DEBUG oslo_concurrency.lockutils [req-ef55a6ed-f1a4-46ff-a6de-16a4ea5efa77 req-815baedc-1dbe-4a78-b428-8617c0ab08cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.128 2 DEBUG oslo_concurrency.lockutils [req-ef55a6ed-f1a4-46ff-a6de-16a4ea5efa77 req-815baedc-1dbe-4a78-b428-8617c0ab08cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.129 2 DEBUG oslo_concurrency.lockutils [req-ef55a6ed-f1a4-46ff-a6de-16a4ea5efa77 req-815baedc-1dbe-4a78-b428-8617c0ab08cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.129 2 DEBUG nova.compute.manager [req-ef55a6ed-f1a4-46ff-a6de-16a4ea5efa77 req-815baedc-1dbe-4a78-b428-8617c0ab08cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] No waiting events found dispatching network-vif-plugged-654f4bc5-29db-40e4-bc4e-bdd325e98e7a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.129 2 WARNING nova.compute.manager [req-ef55a6ed-f1a4-46ff-a6de-16a4ea5efa77 req-815baedc-1dbe-4a78-b428-8617c0ab08cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received unexpected event network-vif-plugged-654f4bc5-29db-40e4-bc4e-bdd325e98e7a for instance with vm_state active and task_state None.
Oct 14 09:19:33 compute-0 systemd[1]: Started libpod-conmon-38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746.scope.
Oct 14 09:19:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:19:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:19:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:19:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:19:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:19:33 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:19:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:19:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:19:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:19:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:19:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:19:33 compute-0 podman[371946]: 2025-10-14 09:19:33.099509367 +0000 UTC m=+0.022124797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:19:33 compute-0 podman[371946]: 2025-10-14 09:19:33.200689811 +0000 UTC m=+0.123305211 container init 38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 09:19:33 compute-0 podman[371946]: 2025-10-14 09:19:33.206944395 +0000 UTC m=+0.129559775 container start 38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 09:19:33 compute-0 podman[371946]: 2025-10-14 09:19:33.210035151 +0000 UTC m=+0.132650541 container attach 38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:19:33 compute-0 funny_northcutt[371962]: 167 167
Oct 14 09:19:33 compute-0 systemd[1]: libpod-38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746.scope: Deactivated successfully.
Oct 14 09:19:33 compute-0 conmon[371962]: conmon 38f5be4f79f3c45e634c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746.scope/container/memory.events
Oct 14 09:19:33 compute-0 podman[371946]: 2025-10-14 09:19:33.218052358 +0000 UTC m=+0.140667738 container died 38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 09:19:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-518e75aee414bf0158c1147bb795b1c9ea24eef06365d5a4f842252129ead23a-merged.mount: Deactivated successfully.
Oct 14 09:19:33 compute-0 podman[371946]: 2025-10-14 09:19:33.258354132 +0000 UTC m=+0.180969522 container remove 38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:19:33 compute-0 systemd[1]: libpod-conmon-38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746.scope: Deactivated successfully.
Oct 14 09:19:33 compute-0 ceph-mon[74249]: pgmap v1974: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 09:19:33 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:19:33 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:19:33 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:19:33 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:19:33 compute-0 podman[372005]: 2025-10-14 09:19:33.463090348 +0000 UTC m=+0.052531466 container create a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brattain, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:19:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:19:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/646332625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:19:33 compute-0 systemd[1]: Started libpod-conmon-a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3.scope.
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.509 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:19:33 compute-0 podman[372005]: 2025-10-14 09:19:33.443905985 +0000 UTC m=+0.033347083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:19:33 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:19:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3b5b8955cea1640a15e8ecca601124af6f78b9c1fb39e802f6153dec09c63f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:19:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3b5b8955cea1640a15e8ecca601124af6f78b9c1fb39e802f6153dec09c63f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:19:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3b5b8955cea1640a15e8ecca601124af6f78b9c1fb39e802f6153dec09c63f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:19:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3b5b8955cea1640a15e8ecca601124af6f78b9c1fb39e802f6153dec09c63f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:19:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3b5b8955cea1640a15e8ecca601124af6f78b9c1fb39e802f6153dec09c63f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:19:33 compute-0 podman[372005]: 2025-10-14 09:19:33.603342945 +0000 UTC m=+0.192784053 container init a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brattain, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:19:33 compute-0 podman[372005]: 2025-10-14 09:19:33.610759107 +0000 UTC m=+0.200200185 container start a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 09:19:33 compute-0 podman[372005]: 2025-10-14 09:19:33.615056753 +0000 UTC m=+0.204497871 container attach a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.639 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.640 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.802 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.805 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3609MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.805 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.805 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.921 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance e83e1125-7b9a-4ca1-9040-40dbc3e0237b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.922 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.922 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.946 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.971 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 14 09:19:33 compute-0 nova_compute[259627]: 2025-10-14 09:19:33.971 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 09:19:34 compute-0 nova_compute[259627]: 2025-10-14 09:19:34.006 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 14 09:19:34 compute-0 nova_compute[259627]: 2025-10-14 09:19:34.039 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 14 09:19:34 compute-0 nova_compute[259627]: 2025-10-14 09:19:34.085 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:19:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 09:19:34 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/646332625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:19:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:19:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1817955444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:19:34 compute-0 nova_compute[259627]: 2025-10-14 09:19:34.579 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:19:34 compute-0 nova_compute[259627]: 2025-10-14 09:19:34.596 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:19:34 compute-0 nova_compute[259627]: 2025-10-14 09:19:34.627 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:19:34 compute-0 nova_compute[259627]: 2025-10-14 09:19:34.650 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:19:34 compute-0 nova_compute[259627]: 2025-10-14 09:19:34.653 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:19:34 compute-0 inspiring_brattain[372021]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:19:34 compute-0 inspiring_brattain[372021]: --> relative data size: 1.0
Oct 14 09:19:34 compute-0 inspiring_brattain[372021]: --> All data devices are unavailable
Oct 14 09:19:34 compute-0 systemd[1]: libpod-a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3.scope: Deactivated successfully.
Oct 14 09:19:34 compute-0 podman[372005]: 2025-10-14 09:19:34.791553219 +0000 UTC m=+1.380994297 container died a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brattain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:19:34 compute-0 systemd[1]: libpod-a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3.scope: Consumed 1.105s CPU time.
Oct 14 09:19:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca3b5b8955cea1640a15e8ecca601124af6f78b9c1fb39e802f6153dec09c63f-merged.mount: Deactivated successfully.
Oct 14 09:19:34 compute-0 podman[372005]: 2025-10-14 09:19:34.841790107 +0000 UTC m=+1.431231185 container remove a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 09:19:34 compute-0 systemd[1]: libpod-conmon-a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3.scope: Deactivated successfully.
Oct 14 09:19:34 compute-0 sudo[371881]: pam_unix(sudo:session): session closed for user root
Oct 14 09:19:34 compute-0 sudo[372085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:19:34 compute-0 sudo[372085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:19:34 compute-0 sudo[372085]: pam_unix(sudo:session): session closed for user root
Oct 14 09:19:35 compute-0 sudo[372110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:19:35 compute-0 sudo[372110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:19:35 compute-0 sudo[372110]: pam_unix(sudo:session): session closed for user root
Oct 14 09:19:35 compute-0 sudo[372135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:19:35 compute-0 sudo[372135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:19:35 compute-0 sudo[372135]: pam_unix(sudo:session): session closed for user root
Oct 14 09:19:35 compute-0 sudo[372160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:19:35 compute-0 sudo[372160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:19:35 compute-0 ceph-mon[74249]: pgmap v1975: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 09:19:35 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1817955444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:19:35 compute-0 sshd-session[371360]: Failed password for invalid user support from 188.150.249.96 port 38542 ssh2
Oct 14 09:19:35 compute-0 podman[372227]: 2025-10-14 09:19:35.653985475 +0000 UTC m=+0.075521772 container create baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 09:19:35 compute-0 systemd[1]: Started libpod-conmon-baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f.scope.
Oct 14 09:19:35 compute-0 podman[372227]: 2025-10-14 09:19:35.628254611 +0000 UTC m=+0.049790958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:19:35 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:19:35 compute-0 podman[372227]: 2025-10-14 09:19:35.742534428 +0000 UTC m=+0.164070775 container init baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 09:19:35 compute-0 podman[372227]: 2025-10-14 09:19:35.754872532 +0000 UTC m=+0.176408859 container start baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:19:35 compute-0 podman[372227]: 2025-10-14 09:19:35.759519916 +0000 UTC m=+0.181056303 container attach baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:19:35 compute-0 bold_goldberg[372243]: 167 167
Oct 14 09:19:35 compute-0 systemd[1]: libpod-baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f.scope: Deactivated successfully.
Oct 14 09:19:35 compute-0 conmon[372243]: conmon baa1aa20673f892f0bec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f.scope/container/memory.events
Oct 14 09:19:35 compute-0 podman[372227]: 2025-10-14 09:19:35.763986696 +0000 UTC m=+0.185523003 container died baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 09:19:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ed200ab9f975706f1c58c0ea2818d60e39d7d2ff23eca0fa5a2c4dcf93517cb-merged.mount: Deactivated successfully.
Oct 14 09:19:35 compute-0 podman[372227]: 2025-10-14 09:19:35.814369138 +0000 UTC m=+0.235905435 container remove baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:19:35 compute-0 systemd[1]: libpod-conmon-baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f.scope: Deactivated successfully.
Oct 14 09:19:36 compute-0 podman[372267]: 2025-10-14 09:19:36.020628622 +0000 UTC m=+0.063762533 container create a0f3155c04729375fcbbf103e2483ac0758fd4c11066b1812c94abe2e5b8ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:19:36 compute-0 NetworkManager[44885]: <info>  [1760433576.0604] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/482)
Oct 14 09:19:36 compute-0 NetworkManager[44885]: <info>  [1760433576.0618] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/483)
Oct 14 09:19:36 compute-0 nova_compute[259627]: 2025-10-14 09:19:36.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:36 compute-0 podman[372267]: 2025-10-14 09:19:35.996726613 +0000 UTC m=+0.039860564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:19:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 14 09:19:36 compute-0 systemd[1]: Started libpod-conmon-a0f3155c04729375fcbbf103e2483ac0758fd4c11066b1812c94abe2e5b8ac79.scope.
Oct 14 09:19:36 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4973ae9aaecd9d49baae324fc43ec6a3f208dc5d98198f0f5e755eaece51c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4973ae9aaecd9d49baae324fc43ec6a3f208dc5d98198f0f5e755eaece51c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4973ae9aaecd9d49baae324fc43ec6a3f208dc5d98198f0f5e755eaece51c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4973ae9aaecd9d49baae324fc43ec6a3f208dc5d98198f0f5e755eaece51c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:19:36 compute-0 podman[372267]: 2025-10-14 09:19:36.141504111 +0000 UTC m=+0.184638032 container init a0f3155c04729375fcbbf103e2483ac0758fd4c11066b1812c94abe2e5b8ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:19:36 compute-0 podman[372267]: 2025-10-14 09:19:36.147883278 +0000 UTC m=+0.191017179 container start a0f3155c04729375fcbbf103e2483ac0758fd4c11066b1812c94abe2e5b8ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:19:36 compute-0 podman[372267]: 2025-10-14 09:19:36.151333143 +0000 UTC m=+0.194467074 container attach a0f3155c04729375fcbbf103e2483ac0758fd4c11066b1812c94abe2e5b8ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:19:36 compute-0 nova_compute[259627]: 2025-10-14 09:19:36.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:36 compute-0 ovn_controller[152662]: 2025-10-14T09:19:36Z|01191|binding|INFO|Releasing lport 19347ace-27ce-4391-bacc-c18e3400875e from this chassis (sb_readonly=0)
Oct 14 09:19:36 compute-0 nova_compute[259627]: 2025-10-14 09:19:36.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:36 compute-0 nova_compute[259627]: 2025-10-14 09:19:36.278 2 DEBUG nova.compute.manager [req-5873f898-da59-46bd-b1c0-62307ef7766f req-e1c2c2c4-35ba-4e7e-8afa-2d23236fe26d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-changed-654f4bc5-29db-40e4-bc4e-bdd325e98e7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:19:36 compute-0 nova_compute[259627]: 2025-10-14 09:19:36.279 2 DEBUG nova.compute.manager [req-5873f898-da59-46bd-b1c0-62307ef7766f req-e1c2c2c4-35ba-4e7e-8afa-2d23236fe26d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Refreshing instance network info cache due to event network-changed-654f4bc5-29db-40e4-bc4e-bdd325e98e7a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:19:36 compute-0 nova_compute[259627]: 2025-10-14 09:19:36.279 2 DEBUG oslo_concurrency.lockutils [req-5873f898-da59-46bd-b1c0-62307ef7766f req-e1c2c2c4-35ba-4e7e-8afa-2d23236fe26d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:19:36 compute-0 nova_compute[259627]: 2025-10-14 09:19:36.279 2 DEBUG oslo_concurrency.lockutils [req-5873f898-da59-46bd-b1c0-62307ef7766f req-e1c2c2c4-35ba-4e7e-8afa-2d23236fe26d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:19:36 compute-0 nova_compute[259627]: 2025-10-14 09:19:36.279 2 DEBUG nova.network.neutron [req-5873f898-da59-46bd-b1c0-62307ef7766f req-e1c2c2c4-35ba-4e7e-8afa-2d23236fe26d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Refreshing network info cache for port 654f4bc5-29db-40e4-bc4e-bdd325e98e7a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:19:36 compute-0 nova_compute[259627]: 2025-10-14 09:19:36.626 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:19:36 compute-0 nova_compute[259627]: 2025-10-14 09:19:36.626 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:19:36 compute-0 nova_compute[259627]: 2025-10-14 09:19:36.626 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:19:36 compute-0 sad_morse[372283]: {
Oct 14 09:19:36 compute-0 sad_morse[372283]:     "0": [
Oct 14 09:19:36 compute-0 sad_morse[372283]:         {
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "devices": [
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "/dev/loop3"
Oct 14 09:19:36 compute-0 sad_morse[372283]:             ],
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "lv_name": "ceph_lv0",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "lv_size": "21470642176",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "name": "ceph_lv0",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "tags": {
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.cluster_name": "ceph",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.crush_device_class": "",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.encrypted": "0",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.osd_id": "0",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.type": "block",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.vdo": "0"
Oct 14 09:19:36 compute-0 sad_morse[372283]:             },
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "type": "block",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "vg_name": "ceph_vg0"
Oct 14 09:19:36 compute-0 sad_morse[372283]:         }
Oct 14 09:19:36 compute-0 sad_morse[372283]:     ],
Oct 14 09:19:36 compute-0 sad_morse[372283]:     "1": [
Oct 14 09:19:36 compute-0 sad_morse[372283]:         {
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "devices": [
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "/dev/loop4"
Oct 14 09:19:36 compute-0 sad_morse[372283]:             ],
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "lv_name": "ceph_lv1",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "lv_size": "21470642176",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "name": "ceph_lv1",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "tags": {
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.cluster_name": "ceph",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.crush_device_class": "",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.encrypted": "0",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.osd_id": "1",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.type": "block",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.vdo": "0"
Oct 14 09:19:36 compute-0 sad_morse[372283]:             },
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "type": "block",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "vg_name": "ceph_vg1"
Oct 14 09:19:36 compute-0 sad_morse[372283]:         }
Oct 14 09:19:36 compute-0 sad_morse[372283]:     ],
Oct 14 09:19:36 compute-0 sad_morse[372283]:     "2": [
Oct 14 09:19:36 compute-0 sad_morse[372283]:         {
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "devices": [
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "/dev/loop5"
Oct 14 09:19:36 compute-0 sad_morse[372283]:             ],
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "lv_name": "ceph_lv2",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "lv_size": "21470642176",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "name": "ceph_lv2",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "tags": {
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.cluster_name": "ceph",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.crush_device_class": "",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.encrypted": "0",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.osd_id": "2",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.type": "block",
Oct 14 09:19:36 compute-0 sad_morse[372283]:                 "ceph.vdo": "0"
Oct 14 09:19:36 compute-0 sad_morse[372283]:             },
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "type": "block",
Oct 14 09:19:36 compute-0 sad_morse[372283]:             "vg_name": "ceph_vg2"
Oct 14 09:19:36 compute-0 sad_morse[372283]:         }
Oct 14 09:19:36 compute-0 sad_morse[372283]:     ]
Oct 14 09:19:36 compute-0 sad_morse[372283]: }
Oct 14 09:19:36 compute-0 systemd[1]: libpod-a0f3155c04729375fcbbf103e2483ac0758fd4c11066b1812c94abe2e5b8ac79.scope: Deactivated successfully.
Oct 14 09:19:36 compute-0 podman[372267]: 2025-10-14 09:19:36.875067881 +0000 UTC m=+0.918201772 container died a0f3155c04729375fcbbf103e2483ac0758fd4c11066b1812c94abe2e5b8ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 09:19:36 compute-0 nova_compute[259627]: 2025-10-14 09:19:36.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e4973ae9aaecd9d49baae324fc43ec6a3f208dc5d98198f0f5e755eaece51c6-merged.mount: Deactivated successfully.
Oct 14 09:19:36 compute-0 podman[372267]: 2025-10-14 09:19:36.953909494 +0000 UTC m=+0.997043415 container remove a0f3155c04729375fcbbf103e2483ac0758fd4c11066b1812c94abe2e5b8ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 09:19:36 compute-0 systemd[1]: libpod-conmon-a0f3155c04729375fcbbf103e2483ac0758fd4c11066b1812c94abe2e5b8ac79.scope: Deactivated successfully.
Oct 14 09:19:36 compute-0 sudo[372160]: pam_unix(sudo:session): session closed for user root
Oct 14 09:19:37 compute-0 sudo[372306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:19:37 compute-0 sudo[372306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:19:37 compute-0 sudo[372306]: pam_unix(sudo:session): session closed for user root
Oct 14 09:19:37 compute-0 sudo[372331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:19:37 compute-0 sudo[372331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:19:37 compute-0 sudo[372331]: pam_unix(sudo:session): session closed for user root
Oct 14 09:19:37 compute-0 sudo[372356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:19:37 compute-0 sudo[372356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:19:37 compute-0 sudo[372356]: pam_unix(sudo:session): session closed for user root
Oct 14 09:19:37 compute-0 sudo[372381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:19:37 compute-0 sudo[372381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:19:37 compute-0 sshd-session[371360]: Connection closed by invalid user support 188.150.249.96 port 38542 [preauth]
Oct 14 09:19:37 compute-0 ceph-mon[74249]: pgmap v1976: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 14 09:19:37 compute-0 podman[372448]: 2025-10-14 09:19:37.567022805 +0000 UTC m=+0.035269490 container create 30ce28566957a28376647b34adfe2d01c1c3b8452b0a08a19027411ac374ccfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:19:37 compute-0 systemd[1]: Started libpod-conmon-30ce28566957a28376647b34adfe2d01c1c3b8452b0a08a19027411ac374ccfc.scope.
Oct 14 09:19:37 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:19:37 compute-0 podman[372448]: 2025-10-14 09:19:37.552498277 +0000 UTC m=+0.020744982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:19:37 compute-0 podman[372448]: 2025-10-14 09:19:37.658000708 +0000 UTC m=+0.126247403 container init 30ce28566957a28376647b34adfe2d01c1c3b8452b0a08a19027411ac374ccfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 09:19:37 compute-0 podman[372448]: 2025-10-14 09:19:37.669503121 +0000 UTC m=+0.137749806 container start 30ce28566957a28376647b34adfe2d01c1c3b8452b0a08a19027411ac374ccfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 09:19:37 compute-0 podman[372448]: 2025-10-14 09:19:37.67352468 +0000 UTC m=+0.141771475 container attach 30ce28566957a28376647b34adfe2d01c1c3b8452b0a08a19027411ac374ccfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:19:37 compute-0 optimistic_swirles[372465]: 167 167
Oct 14 09:19:37 compute-0 systemd[1]: libpod-30ce28566957a28376647b34adfe2d01c1c3b8452b0a08a19027411ac374ccfc.scope: Deactivated successfully.
Oct 14 09:19:37 compute-0 podman[372448]: 2025-10-14 09:19:37.679108828 +0000 UTC m=+0.147355533 container died 30ce28566957a28376647b34adfe2d01c1c3b8452b0a08a19027411ac374ccfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 09:19:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5ba36a051ec9516d939916880faa296e929e048fce59ae73d097e76e7a3b76e-merged.mount: Deactivated successfully.
Oct 14 09:19:37 compute-0 podman[372448]: 2025-10-14 09:19:37.726555227 +0000 UTC m=+0.194801912 container remove 30ce28566957a28376647b34adfe2d01c1c3b8452b0a08a19027411ac374ccfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:19:37 compute-0 systemd[1]: libpod-conmon-30ce28566957a28376647b34adfe2d01c1c3b8452b0a08a19027411ac374ccfc.scope: Deactivated successfully.
Oct 14 09:19:37 compute-0 podman[372478]: 2025-10-14 09:19:37.815876029 +0000 UTC m=+0.074436226 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009)
Oct 14 09:19:37 compute-0 podman[372491]: 2025-10-14 09:19:37.893512792 +0000 UTC m=+0.112527404 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:19:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:19:37 compute-0 podman[372529]: 2025-10-14 09:19:37.925224764 +0000 UTC m=+0.051812568 container create e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 09:19:37 compute-0 systemd[1]: Started libpod-conmon-e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa.scope.
Oct 14 09:19:38 compute-0 podman[372529]: 2025-10-14 09:19:37.907134698 +0000 UTC m=+0.033722542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:19:38 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf07f99567c387ecdd8dfb96404e2924de973a2f9866d58585a6d66e16b172f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf07f99567c387ecdd8dfb96404e2924de973a2f9866d58585a6d66e16b172f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf07f99567c387ecdd8dfb96404e2924de973a2f9866d58585a6d66e16b172f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf07f99567c387ecdd8dfb96404e2924de973a2f9866d58585a6d66e16b172f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:19:38 compute-0 podman[372529]: 2025-10-14 09:19:38.024510541 +0000 UTC m=+0.151098345 container init e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:19:38 compute-0 podman[372529]: 2025-10-14 09:19:38.037413679 +0000 UTC m=+0.164001513 container start e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_maxwell, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 09:19:38 compute-0 podman[372529]: 2025-10-14 09:19:38.041206482 +0000 UTC m=+0.167794306 container attach e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_maxwell, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:19:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:19:38 compute-0 nova_compute[259627]: 2025-10-14 09:19:38.395 2 DEBUG nova.network.neutron [req-5873f898-da59-46bd-b1c0-62307ef7766f req-e1c2c2c4-35ba-4e7e-8afa-2d23236fe26d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updated VIF entry in instance network info cache for port 654f4bc5-29db-40e4-bc4e-bdd325e98e7a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:19:38 compute-0 nova_compute[259627]: 2025-10-14 09:19:38.395 2 DEBUG nova.network.neutron [req-5873f898-da59-46bd-b1c0-62307ef7766f req-e1c2c2c4-35ba-4e7e-8afa-2d23236fe26d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updating instance_info_cache with network_info: [{"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:19:38 compute-0 nova_compute[259627]: 2025-10-14 09:19:38.578 2 DEBUG oslo_concurrency.lockutils [req-5873f898-da59-46bd-b1c0-62307ef7766f req-e1c2c2c4-35ba-4e7e-8afa-2d23236fe26d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:19:38 compute-0 nova_compute[259627]: 2025-10-14 09:19:38.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:38 compute-0 nova_compute[259627]: 2025-10-14 09:19:38.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:19:38 compute-0 nova_compute[259627]: 2025-10-14 09:19:38.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:19:39 compute-0 strange_maxwell[372547]: {
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:         "osd_id": 2,
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:         "type": "bluestore"
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:     },
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:         "osd_id": 1,
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:         "type": "bluestore"
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:     },
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:         "osd_id": 0,
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:         "type": "bluestore"
Oct 14 09:19:39 compute-0 strange_maxwell[372547]:     }
Oct 14 09:19:39 compute-0 strange_maxwell[372547]: }
Oct 14 09:19:39 compute-0 systemd[1]: libpod-e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa.scope: Deactivated successfully.
Oct 14 09:19:39 compute-0 systemd[1]: libpod-e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa.scope: Consumed 1.009s CPU time.
Oct 14 09:19:39 compute-0 podman[372529]: 2025-10-14 09:19:39.048394235 +0000 UTC m=+1.174982069 container died e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_maxwell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:19:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cf07f99567c387ecdd8dfb96404e2924de973a2f9866d58585a6d66e16b172f-merged.mount: Deactivated successfully.
Oct 14 09:19:39 compute-0 podman[372529]: 2025-10-14 09:19:39.106063096 +0000 UTC m=+1.232650880 container remove e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:19:39 compute-0 systemd[1]: libpod-conmon-e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa.scope: Deactivated successfully.
Oct 14 09:19:39 compute-0 sudo[372381]: pam_unix(sudo:session): session closed for user root
Oct 14 09:19:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:19:39 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:19:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:19:39 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:19:39 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 7a0bd132-1bae-4651-b7c8-058693ae50a2 does not exist
Oct 14 09:19:39 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 41dacb89-5a06-4c01-a644-87bf83ec71fe does not exist
Oct 14 09:19:39 compute-0 sudo[372595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:19:39 compute-0 sudo[372595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:19:39 compute-0 sudo[372595]: pam_unix(sudo:session): session closed for user root
Oct 14 09:19:39 compute-0 sudo[372620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:19:39 compute-0 sudo[372620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:19:39 compute-0 sudo[372620]: pam_unix(sudo:session): session closed for user root
Oct 14 09:19:39 compute-0 ceph-mon[74249]: pgmap v1977: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:19:39 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:19:39 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:19:39 compute-0 nova_compute[259627]: 2025-10-14 09:19:39.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:19:39 compute-0 nova_compute[259627]: 2025-10-14 09:19:39.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:19:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:19:40 compute-0 nova_compute[259627]: 2025-10-14 09:19:40.179 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:19:40 compute-0 ovn_controller[152662]: 2025-10-14T09:19:40Z|01192|binding|INFO|Releasing lport 19347ace-27ce-4391-bacc-c18e3400875e from this chassis (sb_readonly=0)
Oct 14 09:19:40 compute-0 nova_compute[259627]: 2025-10-14 09:19:40.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:41 compute-0 ceph-mon[74249]: pgmap v1978: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:19:41 compute-0 nova_compute[259627]: 2025-10-14 09:19:41.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 64 op/s
Oct 14 09:19:42 compute-0 ovn_controller[152662]: 2025-10-14T09:19:42Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:57:80 10.100.0.11
Oct 14 09:19:42 compute-0 ovn_controller[152662]: 2025-10-14T09:19:42Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:57:80 10.100.0.11
Oct 14 09:19:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:42.918950) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433582919049, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1616, "num_deletes": 253, "total_data_size": 2467508, "memory_usage": 2514056, "flush_reason": "Manual Compaction"}
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433582939660, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 2409933, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40155, "largest_seqno": 41770, "table_properties": {"data_size": 2402445, "index_size": 4431, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14804, "raw_average_key_size": 18, "raw_value_size": 2387343, "raw_average_value_size": 3048, "num_data_blocks": 197, "num_entries": 783, "num_filter_entries": 783, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760433434, "oldest_key_time": 1760433434, "file_creation_time": 1760433582, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 20789 microseconds, and 11264 cpu microseconds.
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:42.939735) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 2409933 bytes OK
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:42.939764) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:42.944044) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:42.944103) EVENT_LOG_v1 {"time_micros": 1760433582944089, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:42.944135) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 2460431, prev total WAL file size 2460431, number of live WAL files 2.
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:42.945397) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(2353KB)], [89(9168KB)]
Oct 14 09:19:42 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433582945553, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11798804, "oldest_snapshot_seqno": -1}
Oct 14 09:19:42 compute-0 nova_compute[259627]: 2025-10-14 09:19:42.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:19:43 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6720 keys, 11077497 bytes, temperature: kUnknown
Oct 14 09:19:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433583023314, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 11077497, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11029602, "index_size": 29991, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 171508, "raw_average_key_size": 25, "raw_value_size": 10906266, "raw_average_value_size": 1622, "num_data_blocks": 1194, "num_entries": 6720, "num_filter_entries": 6720, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760433582, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:19:43 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:19:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:43.023529) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 11077497 bytes
Oct 14 09:19:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:43.025005) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.6 rd, 142.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(9.5) write-amplify(4.6) OK, records in: 7241, records dropped: 521 output_compression: NoCompression
Oct 14 09:19:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:43.025040) EVENT_LOG_v1 {"time_micros": 1760433583025031, "job": 52, "event": "compaction_finished", "compaction_time_micros": 77808, "compaction_time_cpu_micros": 50538, "output_level": 6, "num_output_files": 1, "total_output_size": 11077497, "num_input_records": 7241, "num_output_records": 6720, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:19:43 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:19:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433583025569, "job": 52, "event": "table_file_deletion", "file_number": 91}
Oct 14 09:19:43 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:19:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433583028985, "job": 52, "event": "table_file_deletion", "file_number": 89}
Oct 14 09:19:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:42.945221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:19:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:43.029125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:19:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:43.029134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:19:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:43.029137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:19:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:43.029140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:19:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:43.029143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:19:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:19:43 compute-0 ceph-mon[74249]: pgmap v1979: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 64 op/s
Oct 14 09:19:43 compute-0 nova_compute[259627]: 2025-10-14 09:19:43.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 14 09:19:44 compute-0 ovn_controller[152662]: 2025-10-14T09:19:44Z|01193|binding|INFO|Releasing lport 19347ace-27ce-4391-bacc-c18e3400875e from this chassis (sb_readonly=0)
Oct 14 09:19:44 compute-0 nova_compute[259627]: 2025-10-14 09:19:44.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:45 compute-0 ceph-mon[74249]: pgmap v1980: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 14 09:19:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 14 09:19:46 compute-0 sshd-session[372470]: Invalid user ubnt from 188.150.249.96 port 41780
Oct 14 09:19:46 compute-0 nova_compute[259627]: 2025-10-14 09:19:46.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:47 compute-0 sshd-session[372470]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:19:47 compute-0 sshd-session[372470]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96
Oct 14 09:19:47 compute-0 ceph-mon[74249]: pgmap v1981: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 14 09:19:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:19:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:19:48 compute-0 sshd-session[372470]: Failed password for invalid user ubnt from 188.150.249.96 port 41780 ssh2
Oct 14 09:19:48 compute-0 nova_compute[259627]: 2025-10-14 09:19:48.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:48 compute-0 nova_compute[259627]: 2025-10-14 09:19:48.992 2 INFO nova.compute.manager [None req-95373ae1-a25b-4741-8a59-d6b25cd93b68 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Get console output
Oct 14 09:19:49 compute-0 nova_compute[259627]: 2025-10-14 09:19:49.000 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:19:49 compute-0 ceph-mon[74249]: pgmap v1982: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:19:49 compute-0 nova_compute[259627]: 2025-10-14 09:19:49.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:19:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:19:50 compute-0 sshd-session[372470]: Connection closed by invalid user ubnt 188.150.249.96 port 41780 [preauth]
Oct 14 09:19:51 compute-0 ceph-mon[74249]: pgmap v1983: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:19:51 compute-0 nova_compute[259627]: 2025-10-14 09:19:51.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:19:52 compute-0 nova_compute[259627]: 2025-10-14 09:19:52.707 2 DEBUG oslo_concurrency.lockutils [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "interface-e83e1125-7b9a-4ca1-9040-40dbc3e0237b-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:19:52 compute-0 nova_compute[259627]: 2025-10-14 09:19:52.708 2 DEBUG oslo_concurrency.lockutils [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "interface-e83e1125-7b9a-4ca1-9040-40dbc3e0237b-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:19:52 compute-0 nova_compute[259627]: 2025-10-14 09:19:52.708 2 DEBUG nova.objects.instance [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'flavor' on Instance uuid e83e1125-7b9a-4ca1-9040-40dbc3e0237b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:19:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:19:53 compute-0 ceph-mon[74249]: pgmap v1984: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:19:53 compute-0 nova_compute[259627]: 2025-10-14 09:19:53.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:53 compute-0 nova_compute[259627]: 2025-10-14 09:19:53.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:54 compute-0 nova_compute[259627]: 2025-10-14 09:19:54.058 2 DEBUG nova.objects.instance [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_requests' on Instance uuid e83e1125-7b9a-4ca1-9040-40dbc3e0237b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:19:54 compute-0 nova_compute[259627]: 2025-10-14 09:19:54.078 2 DEBUG nova.network.neutron [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:19:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:19:54 compute-0 nova_compute[259627]: 2025-10-14 09:19:54.540 2 DEBUG nova.policy [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:19:55 compute-0 ceph-mon[74249]: pgmap v1985: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:19:55 compute-0 nova_compute[259627]: 2025-10-14 09:19:55.486 2 DEBUG nova.network.neutron [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Successfully created port: 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:19:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:19:56 compute-0 nova_compute[259627]: 2025-10-14 09:19:56.489 2 DEBUG nova.network.neutron [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Successfully updated port: 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:19:56 compute-0 nova_compute[259627]: 2025-10-14 09:19:56.509 2 DEBUG oslo_concurrency.lockutils [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:19:56 compute-0 nova_compute[259627]: 2025-10-14 09:19:56.510 2 DEBUG oslo_concurrency.lockutils [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:19:56 compute-0 nova_compute[259627]: 2025-10-14 09:19:56.510 2 DEBUG nova.network.neutron [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:19:56 compute-0 nova_compute[259627]: 2025-10-14 09:19:56.674 2 DEBUG nova.compute.manager [req-eb0d4925-266c-4b08-8b84-04278396b500 req-3b6255c2-1905-40dd-b55c-806a2fe5021c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-changed-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:19:56 compute-0 nova_compute[259627]: 2025-10-14 09:19:56.674 2 DEBUG nova.compute.manager [req-eb0d4925-266c-4b08-8b84-04278396b500 req-3b6255c2-1905-40dd-b55c-806a2fe5021c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Refreshing instance network info cache due to event network-changed-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:19:56 compute-0 nova_compute[259627]: 2025-10-14 09:19:56.675 2 DEBUG oslo_concurrency.lockutils [req-eb0d4925-266c-4b08-8b84-04278396b500 req-3b6255c2-1905-40dd-b55c-806a2fe5021c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:19:56 compute-0 podman[372650]: 2025-10-14 09:19:56.707148691 +0000 UTC m=+0.115907088 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:19:56 compute-0 podman[372651]: 2025-10-14 09:19:56.711559939 +0000 UTC m=+0.111872478 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:19:56 compute-0 nova_compute[259627]: 2025-10-14 09:19:56.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:57 compute-0 ceph-mon[74249]: pgmap v1986: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:19:57 compute-0 nova_compute[259627]: 2025-10-14 09:19:57.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:19:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Oct 14 09:19:58 compute-0 nova_compute[259627]: 2025-10-14 09:19:58.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:59 compute-0 ceph-mon[74249]: pgmap v1987: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.906 2 DEBUG nova.network.neutron [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updating instance_info_cache with network_info: [{"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.929 2 DEBUG oslo_concurrency.lockutils [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.931 2 DEBUG oslo_concurrency.lockutils [req-eb0d4925-266c-4b08-8b84-04278396b500 req-3b6255c2-1905-40dd-b55c-806a2fe5021c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.931 2 DEBUG nova.network.neutron [req-eb0d4925-266c-4b08-8b84-04278396b500 req-3b6255c2-1905-40dd-b55c-806a2fe5021c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Refreshing network info cache for port 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.936 2 DEBUG nova.virt.libvirt.vif [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:19:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-424606386',display_name='tempest-TestNetworkBasicOps-server-424606386',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-424606386',id=113,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ09dxd7Bak+6//QP8udxIh6uhpPchcDdl5+GxU18yI74s0WuknXeiKCI2lJf73VzQCZgNgqmgyCCsXfAOE13LYP90WL3qgd3ZF20sa84Uh+OXGCZ9Hmict7wvQuSqAzQg==',key_name='tempest-TestNetworkBasicOps-918756716',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:19:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-watt440w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:19:31Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=e83e1125-7b9a-4ca1-9040-40dbc3e0237b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.937 2 DEBUG nova.network.os_vif_util [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.938 2 DEBUG nova.network.os_vif_util [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:28:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef,network=Network(9df519b2-744d-4554-ab73-bff1872a6efb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b04a9d3-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.939 2 DEBUG os_vif [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:28:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef,network=Network(9df519b2-744d-4554-ab73-bff1872a6efb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b04a9d3-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.941 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.942 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.947 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b04a9d3-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.948 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8b04a9d3-64, col_values=(('external_ids', {'iface-id': '8b04a9d3-64de-44c8-bcff-b2443ae3e9ef', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:28:d8', 'vm-uuid': 'e83e1125-7b9a-4ca1-9040-40dbc3e0237b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:19:59 compute-0 NetworkManager[44885]: <info>  [1760433599.9524] manager: (tap8b04a9d3-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/484)
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.964 2 INFO os_vif [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:28:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef,network=Network(9df519b2-744d-4554-ab73-bff1872a6efb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b04a9d3-64')
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.965 2 DEBUG nova.virt.libvirt.vif [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:19:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-424606386',display_name='tempest-TestNetworkBasicOps-server-424606386',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-424606386',id=113,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ09dxd7Bak+6//QP8udxIh6uhpPchcDdl5+GxU18yI74s0WuknXeiKCI2lJf73VzQCZgNgqmgyCCsXfAOE13LYP90WL3qgd3ZF20sa84Uh+OXGCZ9Hmict7wvQuSqAzQg==',key_name='tempest-TestNetworkBasicOps-918756716',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:19:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-watt440w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:19:31Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=e83e1125-7b9a-4ca1-9040-40dbc3e0237b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.966 2 DEBUG nova.network.os_vif_util [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.968 2 DEBUG nova.network.os_vif_util [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:28:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef,network=Network(9df519b2-744d-4554-ab73-bff1872a6efb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b04a9d3-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:19:59 compute-0 nova_compute[259627]: 2025-10-14 09:19:59.972 2 DEBUG nova.virt.libvirt.guest [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] attach device xml: <interface type="ethernet">
Oct 14 09:19:59 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:eb:28:d8"/>
Oct 14 09:19:59 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:19:59 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:19:59 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:19:59 compute-0 nova_compute[259627]:   <target dev="tap8b04a9d3-64"/>
Oct 14 09:19:59 compute-0 nova_compute[259627]: </interface>
Oct 14 09:19:59 compute-0 nova_compute[259627]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 14 09:19:59 compute-0 NetworkManager[44885]: <info>  [1760433599.9919] manager: (tap8b04a9d3-64): new Tun device (/org/freedesktop/NetworkManager/Devices/485)
Oct 14 09:19:59 compute-0 kernel: tap8b04a9d3-64: entered promiscuous mode
Oct 14 09:20:00 compute-0 ovn_controller[152662]: 2025-10-14T09:20:00Z|01194|binding|INFO|Claiming lport 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef for this chassis.
Oct 14 09:20:00 compute-0 ovn_controller[152662]: 2025-10-14T09:20:00Z|01195|binding|INFO|8b04a9d3-64de-44c8-bcff-b2443ae3e9ef: Claiming fa:16:3e:eb:28:d8 10.100.0.28
Oct 14 09:20:00 compute-0 nova_compute[259627]: 2025-10-14 09:20:00.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.013 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:28:d8 10.100.0.28'], port_security=['fa:16:3e:eb:28:d8 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'e83e1125-7b9a-4ca1-9040-40dbc3e0237b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9df519b2-744d-4554-ab73-bff1872a6efb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': '72094e2b-bb7f-4f40-9ee7-497daf8e97ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a98a09c3-f853-4e04-9105-38927284cdec, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.015 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef in datapath 9df519b2-744d-4554-ab73-bff1872a6efb bound to our chassis
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.017 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9df519b2-744d-4554-ab73-bff1872a6efb
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.037 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ef631c68-8674-4613-97c0-43678071ce38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.038 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9df519b2-71 in ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.041 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9df519b2-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.041 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[641c23a0-bf9d-4263-8526-5c3b13ceef98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.044 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[90b34977-3d3b-4f9f-b484-ebeb18343a8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:00 compute-0 systemd-udevd[372699]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.061 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[a11e5fdf-5875-4d72-88d6-78ab3fd0dcc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:00 compute-0 NetworkManager[44885]: <info>  [1760433600.0753] device (tap8b04a9d3-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:20:00 compute-0 NetworkManager[44885]: <info>  [1760433600.0770] device (tap8b04a9d3-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.086 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2beff4fe-029e-47b9-a896-e873a2cee0bf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:00 compute-0 nova_compute[259627]: 2025-10-14 09:20:00.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:00 compute-0 nova_compute[259627]: 2025-10-14 09:20:00.101 2 DEBUG nova.virt.libvirt.driver [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:20:00 compute-0 nova_compute[259627]: 2025-10-14 09:20:00.102 2 DEBUG nova.virt.libvirt.driver [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:20:00 compute-0 nova_compute[259627]: 2025-10-14 09:20:00.102 2 DEBUG nova.virt.libvirt.driver [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:cf:57:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:20:00 compute-0 nova_compute[259627]: 2025-10-14 09:20:00.103 2 DEBUG nova.virt.libvirt.driver [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:eb:28:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:20:00 compute-0 ovn_controller[152662]: 2025-10-14T09:20:00Z|01196|binding|INFO|Setting lport 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef ovn-installed in OVS
Oct 14 09:20:00 compute-0 ovn_controller[152662]: 2025-10-14T09:20:00Z|01197|binding|INFO|Setting lport 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef up in Southbound
Oct 14 09:20:00 compute-0 nova_compute[259627]: 2025-10-14 09:20:00.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.137 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8762e1-d717-45f1-ac25-b6149a759b60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:00 compute-0 nova_compute[259627]: 2025-10-14 09:20:00.142 2 DEBUG nova.virt.libvirt.guest [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:20:00 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:20:00 compute-0 nova_compute[259627]:   <nova:name>tempest-TestNetworkBasicOps-server-424606386</nova:name>
Oct 14 09:20:00 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:20:00</nova:creationTime>
Oct 14 09:20:00 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:20:00 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:20:00 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:20:00 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:20:00 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:20:00 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:20:00 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:20:00 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:20:00 compute-0 nova_compute[259627]:     <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:20:00 compute-0 nova_compute[259627]:     <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:20:00 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:20:00 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:20:00 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:20:00 compute-0 nova_compute[259627]:     <nova:port uuid="654f4bc5-29db-40e4-bc4e-bdd325e98e7a">
Oct 14 09:20:00 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 09:20:00 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:20:00 compute-0 nova_compute[259627]:     <nova:port uuid="8b04a9d3-64de-44c8-bcff-b2443ae3e9ef">
Oct 14 09:20:00 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Oct 14 09:20:00 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:20:00 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:20:00 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:20:00 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 09:20:00 compute-0 systemd-udevd[372702]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:20:00 compute-0 NetworkManager[44885]: <info>  [1760433600.1453] manager: (tap9df519b2-70): new Veth device (/org/freedesktop/NetworkManager/Devices/486)
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.146 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[53c8d628-5353-49db-a4ee-8a58d3913050]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:00 compute-0 nova_compute[259627]: 2025-10-14 09:20:00.182 2 DEBUG oslo_concurrency.lockutils [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "interface-e83e1125-7b9a-4ca1-9040-40dbc3e0237b-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.197 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aff9d1f0-2a8a-43f3-899f-3123ae3d6d30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.201 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c60688aa-2962-40bb-ba27-8e138be71c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:00 compute-0 NetworkManager[44885]: <info>  [1760433600.2286] device (tap9df519b2-70): carrier: link connected
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.236 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3d990b59-3dc7-472e-8658-2e512ac185e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.258 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[244371a6-dc6b-4b94-b97e-d497bf3069ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9df519b2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:5f:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737898, 'reachable_time': 38821, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372724, 'error': None, 'target': 'ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.278 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3d62a062-c678-4099-bf79-41e5774f8915]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feba:5fd0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737898, 'tstamp': 737898}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372725, 'error': None, 'target': 'ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.299 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b38e581a-a1e2-4e4c-9801-4f1cf908e1ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9df519b2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:5f:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737898, 'reachable_time': 38821, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 372726, 'error': None, 'target': 'ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.341 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[365a7af7-c78c-49a5-a62a-a71d9610277b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.426 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f4093359-dc90-4692-aad5-da857db59d8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.428 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9df519b2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.429 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.429 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9df519b2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:00 compute-0 nova_compute[259627]: 2025-10-14 09:20:00.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:00 compute-0 kernel: tap9df519b2-70: entered promiscuous mode
Oct 14 09:20:00 compute-0 NetworkManager[44885]: <info>  [1760433600.4335] manager: (tap9df519b2-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/487)
Oct 14 09:20:00 compute-0 nova_compute[259627]: 2025-10-14 09:20:00.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.438 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9df519b2-70, col_values=(('external_ids', {'iface-id': 'bfea9a1c-62f2-4502-94c0-26cc3839339f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:00 compute-0 nova_compute[259627]: 2025-10-14 09:20:00.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:00 compute-0 ovn_controller[152662]: 2025-10-14T09:20:00Z|01198|binding|INFO|Releasing lport bfea9a1c-62f2-4502-94c0-26cc3839339f from this chassis (sb_readonly=0)
Oct 14 09:20:00 compute-0 nova_compute[259627]: 2025-10-14 09:20:00.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.444 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9df519b2-744d-4554-ab73-bff1872a6efb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9df519b2-744d-4554-ab73-bff1872a6efb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.445 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[70140fe3-c3e2-4395-912b-51847b59c8a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.446 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-9df519b2-744d-4554-ab73-bff1872a6efb
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/9df519b2-744d-4554-ab73-bff1872a6efb.pid.haproxy
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 9df519b2-744d-4554-ab73-bff1872a6efb
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:20:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.447 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb', 'env', 'PROCESS_TAG=haproxy-9df519b2-744d-4554-ab73-bff1872a6efb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9df519b2-744d-4554-ab73-bff1872a6efb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:20:00 compute-0 nova_compute[259627]: 2025-10-14 09:20:00.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:00 compute-0 podman[372756]: 2025-10-14 09:20:00.863427959 +0000 UTC m=+0.066772217 container create 1a460edd2e21d655dc8f118796281cfe6eb8622fca34e89abbc4afd51f1b6808 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 09:20:00 compute-0 podman[372756]: 2025-10-14 09:20:00.823764351 +0000 UTC m=+0.027108669 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:20:00 compute-0 systemd[1]: Started libpod-conmon-1a460edd2e21d655dc8f118796281cfe6eb8622fca34e89abbc4afd51f1b6808.scope.
Oct 14 09:20:00 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ace653ee5c7b9417ef3ee5867719a97fbc60cb7a1e7d9da2e335f4a61a3aa9c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:20:00 compute-0 sshd-session[372647]: Invalid user user from 188.150.249.96 port 44804
Oct 14 09:20:00 compute-0 podman[372756]: 2025-10-14 09:20:00.981107019 +0000 UTC m=+0.184451317 container init 1a460edd2e21d655dc8f118796281cfe6eb8622fca34e89abbc4afd51f1b6808 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:20:00 compute-0 podman[372756]: 2025-10-14 09:20:00.991793472 +0000 UTC m=+0.195137720 container start 1a460edd2e21d655dc8f118796281cfe6eb8622fca34e89abbc4afd51f1b6808 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 09:20:01 compute-0 neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb[372771]: [NOTICE]   (372775) : New worker (372777) forked
Oct 14 09:20:01 compute-0 neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb[372771]: [NOTICE]   (372775) : Loading success.
Oct 14 09:20:01 compute-0 ceph-mon[74249]: pgmap v1988: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Oct 14 09:20:01 compute-0 sshd-session[372647]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:20:01 compute-0 sshd-session[372647]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96
Oct 14 09:20:01 compute-0 nova_compute[259627]: 2025-10-14 09:20:01.794 2 DEBUG nova.compute.manager [req-40251ef3-6a4b-4348-b27d-59faafea0de6 req-e9d2edd6-65d0-412e-8a4e-cc08c33013ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-vif-plugged-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:01 compute-0 nova_compute[259627]: 2025-10-14 09:20:01.794 2 DEBUG oslo_concurrency.lockutils [req-40251ef3-6a4b-4348-b27d-59faafea0de6 req-e9d2edd6-65d0-412e-8a4e-cc08c33013ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:01 compute-0 nova_compute[259627]: 2025-10-14 09:20:01.795 2 DEBUG oslo_concurrency.lockutils [req-40251ef3-6a4b-4348-b27d-59faafea0de6 req-e9d2edd6-65d0-412e-8a4e-cc08c33013ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:01 compute-0 nova_compute[259627]: 2025-10-14 09:20:01.795 2 DEBUG oslo_concurrency.lockutils [req-40251ef3-6a4b-4348-b27d-59faafea0de6 req-e9d2edd6-65d0-412e-8a4e-cc08c33013ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:01 compute-0 nova_compute[259627]: 2025-10-14 09:20:01.796 2 DEBUG nova.compute.manager [req-40251ef3-6a4b-4348-b27d-59faafea0de6 req-e9d2edd6-65d0-412e-8a4e-cc08c33013ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] No waiting events found dispatching network-vif-plugged-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:20:01 compute-0 nova_compute[259627]: 2025-10-14 09:20:01.796 2 WARNING nova.compute.manager [req-40251ef3-6a4b-4348-b27d-59faafea0de6 req-e9d2edd6-65d0-412e-8a4e-cc08c33013ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received unexpected event network-vif-plugged-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef for instance with vm_state active and task_state None.
Oct 14 09:20:01 compute-0 ovn_controller[152662]: 2025-10-14T09:20:01Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:eb:28:d8 10.100.0.28
Oct 14 09:20:01 compute-0 ovn_controller[152662]: 2025-10-14T09:20:01Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:28:d8 10.100.0.28
Oct 14 09:20:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.154 2 DEBUG oslo_concurrency.lockutils [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "interface-e83e1125-7b9a-4ca1-9040-40dbc3e0237b-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.154 2 DEBUG oslo_concurrency.lockutils [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "interface-e83e1125-7b9a-4ca1-9040-40dbc3e0237b-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.173 2 DEBUG nova.objects.instance [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'flavor' on Instance uuid e83e1125-7b9a-4ca1-9040-40dbc3e0237b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.201 2 DEBUG nova.virt.libvirt.vif [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:19:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-424606386',display_name='tempest-TestNetworkBasicOps-server-424606386',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-424606386',id=113,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ09dxd7Bak+6//QP8udxIh6uhpPchcDdl5+GxU18yI74s0WuknXeiKCI2lJf73VzQCZgNgqmgyCCsXfAOE13LYP90WL3qgd3ZF20sa84Uh+OXGCZ9Hmict7wvQuSqAzQg==',key_name='tempest-TestNetworkBasicOps-918756716',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:19:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-watt440w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:19:31Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=e83e1125-7b9a-4ca1-9040-40dbc3e0237b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.202 2 DEBUG nova.network.os_vif_util [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.204 2 DEBUG nova.network.os_vif_util [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:28:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef,network=Network(9df519b2-744d-4554-ab73-bff1872a6efb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b04a9d3-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.207 2 DEBUG nova.network.neutron [req-eb0d4925-266c-4b08-8b84-04278396b500 req-3b6255c2-1905-40dd-b55c-806a2fe5021c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updated VIF entry in instance network info cache for port 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.208 2 DEBUG nova.network.neutron [req-eb0d4925-266c-4b08-8b84-04278396b500 req-3b6255c2-1905-40dd-b55c-806a2fe5021c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updating instance_info_cache with network_info: [{"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.211 2 DEBUG nova.virt.libvirt.guest [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:eb:28:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8b04a9d3-64"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.214 2 DEBUG nova.virt.libvirt.guest [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:eb:28:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8b04a9d3-64"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.216 2 DEBUG nova.virt.libvirt.driver [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Attempting to detach device tap8b04a9d3-64 from instance e83e1125-7b9a-4ca1-9040-40dbc3e0237b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.216 2 DEBUG nova.virt.libvirt.guest [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] detach device xml: <interface type="ethernet">
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:eb:28:d8"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <target dev="tap8b04a9d3-64"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]: </interface>
Oct 14 09:20:02 compute-0 nova_compute[259627]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.222 2 DEBUG nova.virt.libvirt.guest [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:eb:28:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8b04a9d3-64"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.224 2 DEBUG nova.virt.libvirt.guest [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:eb:28:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8b04a9d3-64"/></interface>not found in domain: <domain type='kvm' id='143'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <name>instance-00000071</name>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <uuid>e83e1125-7b9a-4ca1-9040-40dbc3e0237b</uuid>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:name>tempest-TestNetworkBasicOps-server-424606386</nova:name>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:20:00</nova:creationTime>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:port uuid="654f4bc5-29db-40e4-bc4e-bdd325e98e7a">
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:port uuid="8b04a9d3-64de-44c8-bcff-b2443ae3e9ef">
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:20:02 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <system>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <entry name='serial'>e83e1125-7b9a-4ca1-9040-40dbc3e0237b</entry>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <entry name='uuid'>e83e1125-7b9a-4ca1-9040-40dbc3e0237b</entry>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </system>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <os>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </os>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <features>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </features>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk' index='2'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       </source>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk.config' index='1'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       </source>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:cf:57:80'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target dev='tap654f4bc5-29'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:eb:28:d8'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target dev='tap8b04a9d3-64'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='net1'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/console.log' append='off'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       </target>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/0'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/console.log' append='off'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </console>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </input>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </input>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </input>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <video>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </video>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c356,c432</label>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c356,c432</imagelabel>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:20:02 compute-0 nova_compute[259627]: </domain>
Oct 14 09:20:02 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.225 2 INFO nova.virt.libvirt.driver [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully detached device tap8b04a9d3-64 from instance e83e1125-7b9a-4ca1-9040-40dbc3e0237b from the persistent domain config.
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.225 2 DEBUG nova.virt.libvirt.driver [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] (1/8): Attempting to detach device tap8b04a9d3-64 with device alias net1 from instance e83e1125-7b9a-4ca1-9040-40dbc3e0237b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.225 2 DEBUG nova.virt.libvirt.guest [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] detach device xml: <interface type="ethernet">
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:eb:28:d8"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <target dev="tap8b04a9d3-64"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]: </interface>
Oct 14 09:20:02 compute-0 nova_compute[259627]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.230 2 DEBUG oslo_concurrency.lockutils [req-eb0d4925-266c-4b08-8b84-04278396b500 req-3b6255c2-1905-40dd-b55c-806a2fe5021c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:20:02 compute-0 kernel: tap8b04a9d3-64 (unregistering): left promiscuous mode
Oct 14 09:20:02 compute-0 NetworkManager[44885]: <info>  [1760433602.3444] device (tap8b04a9d3-64): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:20:02 compute-0 ovn_controller[152662]: 2025-10-14T09:20:02Z|01199|binding|INFO|Releasing lport 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef from this chassis (sb_readonly=0)
Oct 14 09:20:02 compute-0 ovn_controller[152662]: 2025-10-14T09:20:02Z|01200|binding|INFO|Setting lport 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef down in Southbound
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.360 2 DEBUG nova.virt.libvirt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Received event <DeviceRemovedEvent: 1760433602.3594189, e83e1125-7b9a-4ca1-9040-40dbc3e0237b => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:02 compute-0 ovn_controller[152662]: 2025-10-14T09:20:02Z|01201|binding|INFO|Removing iface tap8b04a9d3-64 ovn-installed in OVS
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.364 2 DEBUG nova.virt.libvirt.driver [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Start waiting for the detach event from libvirt for device tap8b04a9d3-64 with device alias net1 for instance e83e1125-7b9a-4ca1-9040-40dbc3e0237b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.365 2 DEBUG nova.virt.libvirt.guest [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:eb:28:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8b04a9d3-64"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.372 2 DEBUG nova.virt.libvirt.guest [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:eb:28:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8b04a9d3-64"/></interface>not found in domain: <domain type='kvm' id='143'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <name>instance-00000071</name>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <uuid>e83e1125-7b9a-4ca1-9040-40dbc3e0237b</uuid>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:name>tempest-TestNetworkBasicOps-server-424606386</nova:name>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:20:00</nova:creationTime>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:port uuid="654f4bc5-29db-40e4-bc4e-bdd325e98e7a">
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:port uuid="8b04a9d3-64de-44c8-bcff-b2443ae3e9ef">
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:20:02 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <system>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <entry name='serial'>e83e1125-7b9a-4ca1-9040-40dbc3e0237b</entry>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <entry name='uuid'>e83e1125-7b9a-4ca1-9040-40dbc3e0237b</entry>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </system>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <os>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </os>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <features>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </features>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk' index='2'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       </source>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk.config' index='1'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       </source>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:cf:57:80'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target dev='tap654f4bc5-29'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/console.log' append='off'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       </target>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/0'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/console.log' append='off'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </console>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </input>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </input>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </input>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <video>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </video>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c356,c432</label>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c356,c432</imagelabel>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:20:02 compute-0 nova_compute[259627]: </domain>
Oct 14 09:20:02 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.372 2 INFO nova.virt.libvirt.driver [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully detached device tap8b04a9d3-64 from instance e83e1125-7b9a-4ca1-9040-40dbc3e0237b from the live domain config.
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.374 2 DEBUG nova.virt.libvirt.vif [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:19:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-424606386',display_name='tempest-TestNetworkBasicOps-server-424606386',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-424606386',id=113,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ09dxd7Bak+6//QP8udxIh6uhpPchcDdl5+GxU18yI74s0WuknXeiKCI2lJf73VzQCZgNgqmgyCCsXfAOE13LYP90WL3qgd3ZF20sa84Uh+OXGCZ9Hmict7wvQuSqAzQg==',key_name='tempest-TestNetworkBasicOps-918756716',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:19:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-watt440w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:19:31Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=e83e1125-7b9a-4ca1-9040-40dbc3e0237b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.375 2 DEBUG nova.network.os_vif_util [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.376 2 DEBUG nova.network.os_vif_util [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:28:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef,network=Network(9df519b2-744d-4554-ab73-bff1872a6efb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b04a9d3-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.377 2 DEBUG os_vif [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:28:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef,network=Network(9df519b2-744d-4554-ab73-bff1872a6efb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b04a9d3-64') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:20:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:02.373 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:28:d8 10.100.0.28'], port_security=['fa:16:3e:eb:28:d8 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'e83e1125-7b9a-4ca1-9040-40dbc3e0237b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9df519b2-744d-4554-ab73-bff1872a6efb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': '72094e2b-bb7f-4f40-9ee7-497daf8e97ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a98a09c3-f853-4e04-9105-38927284cdec, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:20:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:02.376 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef in datapath 9df519b2-744d-4554-ab73-bff1872a6efb unbound from our chassis
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:02.379 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9df519b2-744d-4554-ab73-bff1872a6efb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.380 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b04a9d3-64, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:02.381 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fef11805-17b2-4bcd-83e7-6e3b6377b4ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:02.382 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb namespace which is not needed anymore
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.403 2 INFO os_vif [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:28:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef,network=Network(9df519b2-744d-4554-ab73-bff1872a6efb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b04a9d3-64')
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.404 2 DEBUG nova.virt.libvirt.guest [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:name>tempest-TestNetworkBasicOps-server-424606386</nova:name>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:20:02</nova:creationTime>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:20:02 compute-0 rsyslogd[1002]: imjournal from <np0005486808:nova_compute>: begin to drop messages due to rate-limiting
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     <nova:port uuid="654f4bc5-29db-40e4-bc4e-bdd325e98e7a">
Oct 14 09:20:02 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 09:20:02 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:20:02 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:20:02 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:20:02 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 09:20:02 compute-0 neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb[372771]: [NOTICE]   (372775) : haproxy version is 2.8.14-c23fe91
Oct 14 09:20:02 compute-0 neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb[372771]: [NOTICE]   (372775) : path to executable is /usr/sbin/haproxy
Oct 14 09:20:02 compute-0 neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb[372771]: [WARNING]  (372775) : Exiting Master process...
Oct 14 09:20:02 compute-0 neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb[372771]: [ALERT]    (372775) : Current worker (372777) exited with code 143 (Terminated)
Oct 14 09:20:02 compute-0 neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb[372771]: [WARNING]  (372775) : All workers exited. Exiting... (0)
Oct 14 09:20:02 compute-0 systemd[1]: libpod-1a460edd2e21d655dc8f118796281cfe6eb8622fca34e89abbc4afd51f1b6808.scope: Deactivated successfully.
Oct 14 09:20:02 compute-0 podman[372807]: 2025-10-14 09:20:02.544543572 +0000 UTC m=+0.054128885 container died 1a460edd2e21d655dc8f118796281cfe6eb8622fca34e89abbc4afd51f1b6808 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 09:20:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1a460edd2e21d655dc8f118796281cfe6eb8622fca34e89abbc4afd51f1b6808-userdata-shm.mount: Deactivated successfully.
Oct 14 09:20:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ace653ee5c7b9417ef3ee5867719a97fbc60cb7a1e7d9da2e335f4a61a3aa9c-merged.mount: Deactivated successfully.
Oct 14 09:20:02 compute-0 podman[372807]: 2025-10-14 09:20:02.60530039 +0000 UTC m=+0.114885703 container cleanup 1a460edd2e21d655dc8f118796281cfe6eb8622fca34e89abbc4afd51f1b6808 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:20:02 compute-0 systemd[1]: libpod-conmon-1a460edd2e21d655dc8f118796281cfe6eb8622fca34e89abbc4afd51f1b6808.scope: Deactivated successfully.
Oct 14 09:20:02 compute-0 podman[372836]: 2025-10-14 09:20:02.709695933 +0000 UTC m=+0.081744416 container remove 1a460edd2e21d655dc8f118796281cfe6eb8622fca34e89abbc4afd51f1b6808 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:20:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:02.714 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[42e9ac86-f531-4dd4-acfe-419d9cc3a1dd]: (4, ('Tue Oct 14 09:20:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb (1a460edd2e21d655dc8f118796281cfe6eb8622fca34e89abbc4afd51f1b6808)\n1a460edd2e21d655dc8f118796281cfe6eb8622fca34e89abbc4afd51f1b6808\nTue Oct 14 09:20:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb (1a460edd2e21d655dc8f118796281cfe6eb8622fca34e89abbc4afd51f1b6808)\n1a460edd2e21d655dc8f118796281cfe6eb8622fca34e89abbc4afd51f1b6808\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:02.715 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0edc7c-0b27-49eb-b1a1-914b66d8b2da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:02.716 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9df519b2-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:02 compute-0 kernel: tap9df519b2-70: left promiscuous mode
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:20:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:20:02 compute-0 nova_compute[259627]: 2025-10-14 09:20:02.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:02.747 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa9c093-0a27-473b-91e8-9674b76aa3db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:02.786 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e48fcc69-06b8-4736-bb7a-8df3b9db9272]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:02.787 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1521c0ef-38db-4d9b-b861-1a39e8584033]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:20:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:20:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:20:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:20:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:02.803 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0b6db624-3f10-43e2-b8e8-36217a8210b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737889, 'reachable_time': 19521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372852, 'error': None, 'target': 'ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d9df519b2\x2d744d\x2d4554\x2dab73\x2dbff1872a6efb.mount: Deactivated successfully.
Oct 14 09:20:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:02.808 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:20:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:02.808 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[edde839b-19fd-45bf-ac02-cbe7bb073948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:03 compute-0 ceph-mon[74249]: pgmap v1989: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.621 2 DEBUG oslo_concurrency.lockutils [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.621 2 DEBUG oslo_concurrency.lockutils [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.622 2 DEBUG nova.network.neutron [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.768 2 DEBUG nova.compute.manager [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-vif-deleted-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.769 2 INFO nova.compute.manager [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Neutron deleted interface 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef; detaching it from the instance and deleting it from the info cache
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.769 2 DEBUG nova.network.neutron [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updating instance_info_cache with network_info: [{"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.799 2 DEBUG nova.objects.instance [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lazy-loading 'system_metadata' on Instance uuid e83e1125-7b9a-4ca1-9040-40dbc3e0237b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:20:03 compute-0 sshd-session[372647]: Failed password for invalid user user from 188.150.249.96 port 44804 ssh2
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.835 2 DEBUG nova.objects.instance [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lazy-loading 'flavor' on Instance uuid e83e1125-7b9a-4ca1-9040-40dbc3e0237b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.868 2 DEBUG nova.virt.libvirt.vif [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:19:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-424606386',display_name='tempest-TestNetworkBasicOps-server-424606386',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-424606386',id=113,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ09dxd7Bak+6//QP8udxIh6uhpPchcDdl5+GxU18yI74s0WuknXeiKCI2lJf73VzQCZgNgqmgyCCsXfAOE13LYP90WL3qgd3ZF20sa84Uh+OXGCZ9Hmict7wvQuSqAzQg==',key_name='tempest-TestNetworkBasicOps-918756716',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:19:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-watt440w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:19:31Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=e83e1125-7b9a-4ca1-9040-40dbc3e0237b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.868 2 DEBUG nova.network.os_vif_util [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converting VIF {"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.869 2 DEBUG nova.network.os_vif_util [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:28:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef,network=Network(9df519b2-744d-4554-ab73-bff1872a6efb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b04a9d3-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.874 2 DEBUG nova.virt.libvirt.guest [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:eb:28:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8b04a9d3-64"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.879 2 DEBUG nova.virt.libvirt.guest [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:eb:28:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8b04a9d3-64"/></interface>not found in domain: <domain type='kvm' id='143'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <name>instance-00000071</name>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <uuid>e83e1125-7b9a-4ca1-9040-40dbc3e0237b</uuid>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:name>tempest-TestNetworkBasicOps-server-424606386</nova:name>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:20:02</nova:creationTime>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:port uuid="654f4bc5-29db-40e4-bc4e-bdd325e98e7a">
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:20:03 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <system>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <entry name='serial'>e83e1125-7b9a-4ca1-9040-40dbc3e0237b</entry>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <entry name='uuid'>e83e1125-7b9a-4ca1-9040-40dbc3e0237b</entry>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </system>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <os>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </os>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <features>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </features>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk' index='2'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       </source>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk.config' index='1'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       </source>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:cf:57:80'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target dev='tap654f4bc5-29'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/console.log' append='off'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       </target>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/0'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/console.log' append='off'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </console>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </input>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </input>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </input>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <video>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </video>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c356,c432</label>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c356,c432</imagelabel>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:20:03 compute-0 nova_compute[259627]: </domain>
Oct 14 09:20:03 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.880 2 DEBUG nova.virt.libvirt.guest [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:eb:28:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8b04a9d3-64"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.885 2 DEBUG nova.virt.libvirt.guest [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:eb:28:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8b04a9d3-64"/></interface>not found in domain: <domain type='kvm' id='143'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <name>instance-00000071</name>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <uuid>e83e1125-7b9a-4ca1-9040-40dbc3e0237b</uuid>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:name>tempest-TestNetworkBasicOps-server-424606386</nova:name>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:20:02</nova:creationTime>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:port uuid="654f4bc5-29db-40e4-bc4e-bdd325e98e7a">
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:20:03 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <system>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <entry name='serial'>e83e1125-7b9a-4ca1-9040-40dbc3e0237b</entry>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <entry name='uuid'>e83e1125-7b9a-4ca1-9040-40dbc3e0237b</entry>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </system>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <os>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </os>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <features>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </features>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk' index='2'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       </source>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk.config' index='1'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       </source>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:cf:57:80'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target dev='tap654f4bc5-29'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/console.log' append='off'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       </target>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/0'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/console.log' append='off'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </console>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </input>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </input>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </input>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <video>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </video>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c356,c432</label>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c356,c432</imagelabel>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:20:03 compute-0 nova_compute[259627]: </domain>
Oct 14 09:20:03 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.886 2 WARNING nova.virt.libvirt.driver [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Detaching interface fa:16:3e:eb:28:d8 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap8b04a9d3-64' not found.
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.887 2 DEBUG nova.virt.libvirt.vif [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:19:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-424606386',display_name='tempest-TestNetworkBasicOps-server-424606386',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-424606386',id=113,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ09dxd7Bak+6//QP8udxIh6uhpPchcDdl5+GxU18yI74s0WuknXeiKCI2lJf73VzQCZgNgqmgyCCsXfAOE13LYP90WL3qgd3ZF20sa84Uh+OXGCZ9Hmict7wvQuSqAzQg==',key_name='tempest-TestNetworkBasicOps-918756716',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:19:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-watt440w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:19:31Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=e83e1125-7b9a-4ca1-9040-40dbc3e0237b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.888 2 DEBUG nova.network.os_vif_util [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converting VIF {"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.889 2 DEBUG nova.network.os_vif_util [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:28:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef,network=Network(9df519b2-744d-4554-ab73-bff1872a6efb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b04a9d3-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.890 2 DEBUG os_vif [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:28:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef,network=Network(9df519b2-744d-4554-ab73-bff1872a6efb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b04a9d3-64') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.893 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b04a9d3-64, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.893 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.896 2 INFO os_vif [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:28:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef,network=Network(9df519b2-744d-4554-ab73-bff1872a6efb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b04a9d3-64')
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.897 2 DEBUG nova.virt.libvirt.guest [req-149120a5-116e-46eb-a106-2e543d73de43 req-34dd1aa2-d83a-4d00-9558-3ff239027b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:name>tempest-TestNetworkBasicOps-server-424606386</nova:name>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:20:03</nova:creationTime>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     <nova:port uuid="654f4bc5-29db-40e4-bc4e-bdd325e98e7a">
Oct 14 09:20:03 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 09:20:03 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:20:03 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:20:03 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:20:03 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:20:03 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.999 2 DEBUG nova.compute.manager [req-220dfef8-af31-457a-b3bd-d6459b93a6b5 req-20aba1f6-7d03-4c28-8bf4-c4362bd1cc09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-vif-plugged-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:04 compute-0 nova_compute[259627]: 2025-10-14 09:20:03.999 2 DEBUG oslo_concurrency.lockutils [req-220dfef8-af31-457a-b3bd-d6459b93a6b5 req-20aba1f6-7d03-4c28-8bf4-c4362bd1cc09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:04 compute-0 nova_compute[259627]: 2025-10-14 09:20:04.000 2 DEBUG oslo_concurrency.lockutils [req-220dfef8-af31-457a-b3bd-d6459b93a6b5 req-20aba1f6-7d03-4c28-8bf4-c4362bd1cc09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:04 compute-0 nova_compute[259627]: 2025-10-14 09:20:04.000 2 DEBUG oslo_concurrency.lockutils [req-220dfef8-af31-457a-b3bd-d6459b93a6b5 req-20aba1f6-7d03-4c28-8bf4-c4362bd1cc09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:04 compute-0 nova_compute[259627]: 2025-10-14 09:20:04.000 2 DEBUG nova.compute.manager [req-220dfef8-af31-457a-b3bd-d6459b93a6b5 req-20aba1f6-7d03-4c28-8bf4-c4362bd1cc09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] No waiting events found dispatching network-vif-plugged-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:20:04 compute-0 nova_compute[259627]: 2025-10-14 09:20:04.000 2 WARNING nova.compute.manager [req-220dfef8-af31-457a-b3bd-d6459b93a6b5 req-20aba1f6-7d03-4c28-8bf4-c4362bd1cc09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received unexpected event network-vif-plugged-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef for instance with vm_state active and task_state None.
Oct 14 09:20:04 compute-0 nova_compute[259627]: 2025-10-14 09:20:04.000 2 DEBUG nova.compute.manager [req-220dfef8-af31-457a-b3bd-d6459b93a6b5 req-20aba1f6-7d03-4c28-8bf4-c4362bd1cc09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-vif-unplugged-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:04 compute-0 nova_compute[259627]: 2025-10-14 09:20:04.001 2 DEBUG oslo_concurrency.lockutils [req-220dfef8-af31-457a-b3bd-d6459b93a6b5 req-20aba1f6-7d03-4c28-8bf4-c4362bd1cc09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:04 compute-0 nova_compute[259627]: 2025-10-14 09:20:04.001 2 DEBUG oslo_concurrency.lockutils [req-220dfef8-af31-457a-b3bd-d6459b93a6b5 req-20aba1f6-7d03-4c28-8bf4-c4362bd1cc09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:04 compute-0 nova_compute[259627]: 2025-10-14 09:20:04.001 2 DEBUG oslo_concurrency.lockutils [req-220dfef8-af31-457a-b3bd-d6459b93a6b5 req-20aba1f6-7d03-4c28-8bf4-c4362bd1cc09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:04 compute-0 nova_compute[259627]: 2025-10-14 09:20:04.001 2 DEBUG nova.compute.manager [req-220dfef8-af31-457a-b3bd-d6459b93a6b5 req-20aba1f6-7d03-4c28-8bf4-c4362bd1cc09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] No waiting events found dispatching network-vif-unplugged-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:20:04 compute-0 nova_compute[259627]: 2025-10-14 09:20:04.002 2 WARNING nova.compute.manager [req-220dfef8-af31-457a-b3bd-d6459b93a6b5 req-20aba1f6-7d03-4c28-8bf4-c4362bd1cc09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received unexpected event network-vif-unplugged-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef for instance with vm_state active and task_state None.
Oct 14 09:20:04 compute-0 nova_compute[259627]: 2025-10-14 09:20:04.002 2 DEBUG nova.compute.manager [req-220dfef8-af31-457a-b3bd-d6459b93a6b5 req-20aba1f6-7d03-4c28-8bf4-c4362bd1cc09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-vif-plugged-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:04 compute-0 nova_compute[259627]: 2025-10-14 09:20:04.002 2 DEBUG oslo_concurrency.lockutils [req-220dfef8-af31-457a-b3bd-d6459b93a6b5 req-20aba1f6-7d03-4c28-8bf4-c4362bd1cc09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:04 compute-0 nova_compute[259627]: 2025-10-14 09:20:04.002 2 DEBUG oslo_concurrency.lockutils [req-220dfef8-af31-457a-b3bd-d6459b93a6b5 req-20aba1f6-7d03-4c28-8bf4-c4362bd1cc09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:04 compute-0 nova_compute[259627]: 2025-10-14 09:20:04.003 2 DEBUG oslo_concurrency.lockutils [req-220dfef8-af31-457a-b3bd-d6459b93a6b5 req-20aba1f6-7d03-4c28-8bf4-c4362bd1cc09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:04 compute-0 nova_compute[259627]: 2025-10-14 09:20:04.003 2 DEBUG nova.compute.manager [req-220dfef8-af31-457a-b3bd-d6459b93a6b5 req-20aba1f6-7d03-4c28-8bf4-c4362bd1cc09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] No waiting events found dispatching network-vif-plugged-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:20:04 compute-0 nova_compute[259627]: 2025-10-14 09:20:04.003 2 WARNING nova.compute.manager [req-220dfef8-af31-457a-b3bd-d6459b93a6b5 req-20aba1f6-7d03-4c28-8bf4-c4362bd1cc09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received unexpected event network-vif-plugged-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef for instance with vm_state active and task_state None.
Oct 14 09:20:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 14 09:20:04 compute-0 ovn_controller[152662]: 2025-10-14T09:20:04Z|01202|binding|INFO|Releasing lport 19347ace-27ce-4391-bacc-c18e3400875e from this chassis (sb_readonly=0)
Oct 14 09:20:04 compute-0 nova_compute[259627]: 2025-10-14 09:20:04.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:05.039 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:20:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:05.041 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:20:05 compute-0 ceph-mon[74249]: pgmap v1990: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.489 2 INFO nova.network.neutron [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Port 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.489 2 DEBUG nova.network.neutron [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updating instance_info_cache with network_info: [{"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.507 2 DEBUG oslo_concurrency.lockutils [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.527 2 DEBUG oslo_concurrency.lockutils [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "interface-e83e1125-7b9a-4ca1-9040-40dbc3e0237b-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:20:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2721622348' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:20:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:20:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2721622348' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.704 2 DEBUG oslo_concurrency.lockutils [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.705 2 DEBUG oslo_concurrency.lockutils [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.705 2 DEBUG oslo_concurrency.lockutils [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.705 2 DEBUG oslo_concurrency.lockutils [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.706 2 DEBUG oslo_concurrency.lockutils [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.708 2 INFO nova.compute.manager [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Terminating instance
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.709 2 DEBUG nova.compute.manager [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:20:05 compute-0 kernel: tap654f4bc5-29 (unregistering): left promiscuous mode
Oct 14 09:20:05 compute-0 NetworkManager[44885]: <info>  [1760433605.7769] device (tap654f4bc5-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:20:05 compute-0 ovn_controller[152662]: 2025-10-14T09:20:05Z|01203|binding|INFO|Releasing lport 654f4bc5-29db-40e4-bc4e-bdd325e98e7a from this chassis (sb_readonly=0)
Oct 14 09:20:05 compute-0 ovn_controller[152662]: 2025-10-14T09:20:05Z|01204|binding|INFO|Setting lport 654f4bc5-29db-40e4-bc4e-bdd325e98e7a down in Southbound
Oct 14 09:20:05 compute-0 ovn_controller[152662]: 2025-10-14T09:20:05Z|01205|binding|INFO|Removing iface tap654f4bc5-29 ovn-installed in OVS
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:05.790 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:57:80 10.100.0.11'], port_security=['fa:16:3e:cf:57:80 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e83e1125-7b9a-4ca1-9040-40dbc3e0237b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-414bade6-3739-49f9-bce9-e93105157bbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': '885d3329-1a6a-4e62-8633-0f83200cd25a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=553b14d5-fce8-4530-b374-6c59ead23d8e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=654f4bc5-29db-40e4-bc4e-bdd325e98e7a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:20:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:05.791 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 654f4bc5-29db-40e4-bc4e-bdd325e98e7a in datapath 414bade6-3739-49f9-bce9-e93105157bbe unbound from our chassis
Oct 14 09:20:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:05.792 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 414bade6-3739-49f9-bce9-e93105157bbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:20:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:05.793 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d76071-5ce7-4dc2-aec6-aeff8c56cae5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:05.794 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe namespace which is not needed anymore
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:05 compute-0 sshd-session[372647]: Connection closed by invalid user user 188.150.249.96 port 44804 [preauth]
Oct 14 09:20:05 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000071.scope: Deactivated successfully.
Oct 14 09:20:05 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000071.scope: Consumed 14.211s CPU time.
Oct 14 09:20:05 compute-0 systemd-machined[214636]: Machine qemu-143-instance-00000071 terminated.
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.858 2 DEBUG nova.compute.manager [req-556fe52f-557e-47e5-9e09-7858f33b8c4a req-6cd23e99-e3b0-4277-815c-fa739736ad5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-changed-654f4bc5-29db-40e4-bc4e-bdd325e98e7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.859 2 DEBUG nova.compute.manager [req-556fe52f-557e-47e5-9e09-7858f33b8c4a req-6cd23e99-e3b0-4277-815c-fa739736ad5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Refreshing instance network info cache due to event network-changed-654f4bc5-29db-40e4-bc4e-bdd325e98e7a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.859 2 DEBUG oslo_concurrency.lockutils [req-556fe52f-557e-47e5-9e09-7858f33b8c4a req-6cd23e99-e3b0-4277-815c-fa739736ad5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.859 2 DEBUG oslo_concurrency.lockutils [req-556fe52f-557e-47e5-9e09-7858f33b8c4a req-6cd23e99-e3b0-4277-815c-fa739736ad5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.860 2 DEBUG nova.network.neutron [req-556fe52f-557e-47e5-9e09-7858f33b8c4a req-6cd23e99-e3b0-4277-815c-fa739736ad5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Refreshing network info cache for port 654f4bc5-29db-40e4-bc4e-bdd325e98e7a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.944 2 INFO nova.virt.libvirt.driver [-] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Instance destroyed successfully.
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.945 2 DEBUG nova.objects.instance [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid e83e1125-7b9a-4ca1-9040-40dbc3e0237b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.961 2 DEBUG nova.virt.libvirt.vif [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:19:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-424606386',display_name='tempest-TestNetworkBasicOps-server-424606386',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-424606386',id=113,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ09dxd7Bak+6//QP8udxIh6uhpPchcDdl5+GxU18yI74s0WuknXeiKCI2lJf73VzQCZgNgqmgyCCsXfAOE13LYP90WL3qgd3ZF20sa84Uh+OXGCZ9Hmict7wvQuSqAzQg==',key_name='tempest-TestNetworkBasicOps-918756716',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:19:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-watt440w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:19:31Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=e83e1125-7b9a-4ca1-9040-40dbc3e0237b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.962 2 DEBUG nova.network.os_vif_util [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.962 2 DEBUG nova.network.os_vif_util [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:57:80,bridge_name='br-int',has_traffic_filtering=True,id=654f4bc5-29db-40e4-bc4e-bdd325e98e7a,network=Network(414bade6-3739-49f9-bce9-e93105157bbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654f4bc5-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.963 2 DEBUG os_vif [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:57:80,bridge_name='br-int',has_traffic_filtering=True,id=654f4bc5-29db-40e4-bc4e-bdd325e98e7a,network=Network(414bade6-3739-49f9-bce9-e93105157bbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654f4bc5-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.964 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap654f4bc5-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:05 compute-0 neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe[371660]: [NOTICE]   (371664) : haproxy version is 2.8.14-c23fe91
Oct 14 09:20:05 compute-0 neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe[371660]: [NOTICE]   (371664) : path to executable is /usr/sbin/haproxy
Oct 14 09:20:05 compute-0 neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe[371660]: [WARNING]  (371664) : Exiting Master process...
Oct 14 09:20:05 compute-0 neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe[371660]: [WARNING]  (371664) : Exiting Master process...
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:05 compute-0 neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe[371660]: [ALERT]    (371664) : Current worker (371666) exited with code 143 (Terminated)
Oct 14 09:20:05 compute-0 neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe[371660]: [WARNING]  (371664) : All workers exited. Exiting... (0)
Oct 14 09:20:05 compute-0 nova_compute[259627]: 2025-10-14 09:20:05.971 2 INFO os_vif [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:57:80,bridge_name='br-int',has_traffic_filtering=True,id=654f4bc5-29db-40e4-bc4e-bdd325e98e7a,network=Network(414bade6-3739-49f9-bce9-e93105157bbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654f4bc5-29')
Oct 14 09:20:05 compute-0 systemd[1]: libpod-3175e13606cf725dfae6bf341102bb1826c5dcac0719b588eec0fdcf66ed7b96.scope: Deactivated successfully.
Oct 14 09:20:05 compute-0 podman[372880]: 2025-10-14 09:20:05.981632385 +0000 UTC m=+0.059447556 container died 3175e13606cf725dfae6bf341102bb1826c5dcac0719b588eec0fdcf66ed7b96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 09:20:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3175e13606cf725dfae6bf341102bb1826c5dcac0719b588eec0fdcf66ed7b96-userdata-shm.mount: Deactivated successfully.
Oct 14 09:20:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ae02de95adf7e48c08e1cd0b516979c12355cab1b044706270723c27c91c4aa-merged.mount: Deactivated successfully.
Oct 14 09:20:06 compute-0 podman[372880]: 2025-10-14 09:20:06.034753114 +0000 UTC m=+0.112568295 container cleanup 3175e13606cf725dfae6bf341102bb1826c5dcac0719b588eec0fdcf66ed7b96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:20:06 compute-0 systemd[1]: libpod-conmon-3175e13606cf725dfae6bf341102bb1826c5dcac0719b588eec0fdcf66ed7b96.scope: Deactivated successfully.
Oct 14 09:20:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 7.3 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Oct 14 09:20:06 compute-0 podman[372937]: 2025-10-14 09:20:06.127335386 +0000 UTC m=+0.057977430 container remove 3175e13606cf725dfae6bf341102bb1826c5dcac0719b588eec0fdcf66ed7b96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 09:20:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:06.135 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[754d06c2-0c8f-4d4a-8b51-90c9278f7d17]: (4, ('Tue Oct 14 09:20:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe (3175e13606cf725dfae6bf341102bb1826c5dcac0719b588eec0fdcf66ed7b96)\n3175e13606cf725dfae6bf341102bb1826c5dcac0719b588eec0fdcf66ed7b96\nTue Oct 14 09:20:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe (3175e13606cf725dfae6bf341102bb1826c5dcac0719b588eec0fdcf66ed7b96)\n3175e13606cf725dfae6bf341102bb1826c5dcac0719b588eec0fdcf66ed7b96\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:06.137 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd9503d-7ef7-4ae2-b566-6ff5548f0691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:06.139 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap414bade6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:06 compute-0 kernel: tap414bade6-30: left promiscuous mode
Oct 14 09:20:06 compute-0 nova_compute[259627]: 2025-10-14 09:20:06.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:06 compute-0 nova_compute[259627]: 2025-10-14 09:20:06.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:06.202 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[08107c8a-c45d-4225-bd15-008e094001a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:06.228 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ef5244f2-35bf-48ae-9392-d198a6ce6734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:06.231 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[534bacff-5b09-4c86-94ad-efa66e6ac05f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:06.248 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd2c65f-617f-4f08-94a1-75f8666b00d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734698, 'reachable_time': 21216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372954, 'error': None, 'target': 'ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d414bade6\x2d3739\x2d49f9\x2dbce9\x2de93105157bbe.mount: Deactivated successfully.
Oct 14 09:20:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:06.253 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:20:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:06.253 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[7d45ffe0-7a77-4e20-96b3-3ac4a4cdb952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:06 compute-0 nova_compute[259627]: 2025-10-14 09:20:06.386 2 INFO nova.virt.libvirt.driver [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Deleting instance files /var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b_del
Oct 14 09:20:06 compute-0 nova_compute[259627]: 2025-10-14 09:20:06.387 2 INFO nova.virt.libvirt.driver [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Deletion of /var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b_del complete
Oct 14 09:20:06 compute-0 nova_compute[259627]: 2025-10-14 09:20:06.479 2 INFO nova.compute.manager [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 14 09:20:06 compute-0 nova_compute[259627]: 2025-10-14 09:20:06.480 2 DEBUG oslo.service.loopingcall [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:20:06 compute-0 nova_compute[259627]: 2025-10-14 09:20:06.480 2 DEBUG nova.compute.manager [-] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:20:06 compute-0 nova_compute[259627]: 2025-10-14 09:20:06.480 2 DEBUG nova.network.neutron [-] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:20:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2721622348' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:20:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2721622348' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:20:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:07.035 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:07.036 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:07.036 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:07 compute-0 nova_compute[259627]: 2025-10-14 09:20:07.358 2 DEBUG nova.network.neutron [req-556fe52f-557e-47e5-9e09-7858f33b8c4a req-6cd23e99-e3b0-4277-815c-fa739736ad5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updated VIF entry in instance network info cache for port 654f4bc5-29db-40e4-bc4e-bdd325e98e7a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:20:07 compute-0 nova_compute[259627]: 2025-10-14 09:20:07.359 2 DEBUG nova.network.neutron [req-556fe52f-557e-47e5-9e09-7858f33b8c4a req-6cd23e99-e3b0-4277-815c-fa739736ad5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updating instance_info_cache with network_info: [{"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:20:07 compute-0 nova_compute[259627]: 2025-10-14 09:20:07.389 2 DEBUG oslo_concurrency.lockutils [req-556fe52f-557e-47e5-9e09-7858f33b8c4a req-6cd23e99-e3b0-4277-815c-fa739736ad5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:20:07 compute-0 ceph-mon[74249]: pgmap v1991: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 7.3 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Oct 14 09:20:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:20:07 compute-0 nova_compute[259627]: 2025-10-14 09:20:07.977 2 DEBUG nova.compute.manager [req-c2fd027c-1260-4dde-8408-3a64e720569a req-00681017-e418-489b-9dc9-69e313e2355f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-vif-unplugged-654f4bc5-29db-40e4-bc4e-bdd325e98e7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:07 compute-0 nova_compute[259627]: 2025-10-14 09:20:07.978 2 DEBUG oslo_concurrency.lockutils [req-c2fd027c-1260-4dde-8408-3a64e720569a req-00681017-e418-489b-9dc9-69e313e2355f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:07 compute-0 nova_compute[259627]: 2025-10-14 09:20:07.982 2 DEBUG oslo_concurrency.lockutils [req-c2fd027c-1260-4dde-8408-3a64e720569a req-00681017-e418-489b-9dc9-69e313e2355f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:07 compute-0 nova_compute[259627]: 2025-10-14 09:20:07.982 2 DEBUG oslo_concurrency.lockutils [req-c2fd027c-1260-4dde-8408-3a64e720569a req-00681017-e418-489b-9dc9-69e313e2355f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:07 compute-0 nova_compute[259627]: 2025-10-14 09:20:07.983 2 DEBUG nova.compute.manager [req-c2fd027c-1260-4dde-8408-3a64e720569a req-00681017-e418-489b-9dc9-69e313e2355f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] No waiting events found dispatching network-vif-unplugged-654f4bc5-29db-40e4-bc4e-bdd325e98e7a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:20:07 compute-0 nova_compute[259627]: 2025-10-14 09:20:07.983 2 DEBUG nova.compute.manager [req-c2fd027c-1260-4dde-8408-3a64e720569a req-00681017-e418-489b-9dc9-69e313e2355f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-vif-unplugged-654f4bc5-29db-40e4-bc4e-bdd325e98e7a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:20:07 compute-0 nova_compute[259627]: 2025-10-14 09:20:07.984 2 DEBUG nova.compute.manager [req-c2fd027c-1260-4dde-8408-3a64e720569a req-00681017-e418-489b-9dc9-69e313e2355f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-vif-plugged-654f4bc5-29db-40e4-bc4e-bdd325e98e7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:07 compute-0 nova_compute[259627]: 2025-10-14 09:20:07.984 2 DEBUG oslo_concurrency.lockutils [req-c2fd027c-1260-4dde-8408-3a64e720569a req-00681017-e418-489b-9dc9-69e313e2355f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:07 compute-0 nova_compute[259627]: 2025-10-14 09:20:07.985 2 DEBUG oslo_concurrency.lockutils [req-c2fd027c-1260-4dde-8408-3a64e720569a req-00681017-e418-489b-9dc9-69e313e2355f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:07 compute-0 nova_compute[259627]: 2025-10-14 09:20:07.986 2 DEBUG oslo_concurrency.lockutils [req-c2fd027c-1260-4dde-8408-3a64e720569a req-00681017-e418-489b-9dc9-69e313e2355f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:07 compute-0 nova_compute[259627]: 2025-10-14 09:20:07.987 2 DEBUG nova.compute.manager [req-c2fd027c-1260-4dde-8408-3a64e720569a req-00681017-e418-489b-9dc9-69e313e2355f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] No waiting events found dispatching network-vif-plugged-654f4bc5-29db-40e4-bc4e-bdd325e98e7a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:20:07 compute-0 nova_compute[259627]: 2025-10-14 09:20:07.988 2 WARNING nova.compute.manager [req-c2fd027c-1260-4dde-8408-3a64e720569a req-00681017-e418-489b-9dc9-69e313e2355f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received unexpected event network-vif-plugged-654f4bc5-29db-40e4-bc4e-bdd325e98e7a for instance with vm_state active and task_state deleting.
Oct 14 09:20:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 7.3 KiB/s rd, 1023 B/s wr, 0 op/s
Oct 14 09:20:08 compute-0 nova_compute[259627]: 2025-10-14 09:20:08.582 2 DEBUG nova.network.neutron [-] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:20:08 compute-0 nova_compute[259627]: 2025-10-14 09:20:08.608 2 INFO nova.compute.manager [-] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Took 2.13 seconds to deallocate network for instance.
Oct 14 09:20:08 compute-0 nova_compute[259627]: 2025-10-14 09:20:08.662 2 DEBUG oslo_concurrency.lockutils [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:08 compute-0 nova_compute[259627]: 2025-10-14 09:20:08.663 2 DEBUG oslo_concurrency.lockutils [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:08 compute-0 nova_compute[259627]: 2025-10-14 09:20:08.672 2 DEBUG nova.compute.manager [req-4cf1b2a2-126d-41bc-b95d-84d30e426b14 req-9c894320-d88a-46b7-854a-87bb9c7d7c5a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-vif-deleted-654f4bc5-29db-40e4-bc4e-bdd325e98e7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:08 compute-0 podman[372957]: 2025-10-14 09:20:08.690430076 +0000 UTC m=+0.094179002 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 09:20:08 compute-0 nova_compute[259627]: 2025-10-14 09:20:08.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:08 compute-0 podman[372956]: 2025-10-14 09:20:08.743079394 +0000 UTC m=+0.152020528 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:20:08 compute-0 nova_compute[259627]: 2025-10-14 09:20:08.785 2 DEBUG oslo_concurrency.processutils [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:20:08 compute-0 nova_compute[259627]: 2025-10-14 09:20:08.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:20:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2255189160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:20:09 compute-0 nova_compute[259627]: 2025-10-14 09:20:09.227 2 DEBUG oslo_concurrency.processutils [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:20:09 compute-0 nova_compute[259627]: 2025-10-14 09:20:09.236 2 DEBUG nova.compute.provider_tree [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:20:09 compute-0 nova_compute[259627]: 2025-10-14 09:20:09.253 2 DEBUG nova.scheduler.client.report [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:20:09 compute-0 nova_compute[259627]: 2025-10-14 09:20:09.287 2 DEBUG oslo_concurrency.lockutils [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:09 compute-0 nova_compute[259627]: 2025-10-14 09:20:09.335 2 INFO nova.scheduler.client.report [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance e83e1125-7b9a-4ca1-9040-40dbc3e0237b
Oct 14 09:20:09 compute-0 nova_compute[259627]: 2025-10-14 09:20:09.418 2 DEBUG oslo_concurrency.lockutils [None req-71845cb7-8f12-43ae-8e3f-ab6d9e904121 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:09 compute-0 ceph-mon[74249]: pgmap v1992: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 7.3 KiB/s rd, 1023 B/s wr, 0 op/s
Oct 14 09:20:09 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2255189160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:20:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:10.043 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 86 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.3 KiB/s wr, 21 op/s
Oct 14 09:20:11 compute-0 nova_compute[259627]: 2025-10-14 09:20:11.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:11 compute-0 ceph-mon[74249]: pgmap v1993: 305 pgs: 305 active+clean; 86 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.3 KiB/s wr, 21 op/s
Oct 14 09:20:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 41 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct 14 09:20:12 compute-0 nova_compute[259627]: 2025-10-14 09:20:12.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:12 compute-0 nova_compute[259627]: 2025-10-14 09:20:12.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:20:13 compute-0 ceph-mon[74249]: pgmap v1994: 305 pgs: 305 active+clean; 41 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct 14 09:20:13 compute-0 nova_compute[259627]: 2025-10-14 09:20:13.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 41 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct 14 09:20:14 compute-0 sshd-session[372943]: Connection closed by authenticating user root 188.150.249.96 port 47920 [preauth]
Oct 14 09:20:15 compute-0 ceph-mon[74249]: pgmap v1995: 305 pgs: 305 active+clean; 41 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct 14 09:20:16 compute-0 nova_compute[259627]: 2025-10-14 09:20:16.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:16 compute-0 nova_compute[259627]: 2025-10-14 09:20:16.095 2 DEBUG oslo_concurrency.lockutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquiring lock "fd5669ba-0261-423e-8586-66c91ff570a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:16 compute-0 nova_compute[259627]: 2025-10-14 09:20:16.095 2 DEBUG oslo_concurrency.lockutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 41 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct 14 09:20:16 compute-0 nova_compute[259627]: 2025-10-14 09:20:16.130 2 DEBUG nova.compute.manager [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:20:16 compute-0 nova_compute[259627]: 2025-10-14 09:20:16.327 2 DEBUG oslo_concurrency.lockutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:16 compute-0 nova_compute[259627]: 2025-10-14 09:20:16.327 2 DEBUG oslo_concurrency.lockutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:16 compute-0 nova_compute[259627]: 2025-10-14 09:20:16.356 2 DEBUG nova.virt.hardware [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:20:16 compute-0 nova_compute[259627]: 2025-10-14 09:20:16.357 2 INFO nova.compute.claims [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:20:16 compute-0 nova_compute[259627]: 2025-10-14 09:20:16.534 2 DEBUG oslo_concurrency.processutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:20:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:20:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/513575439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:20:16 compute-0 nova_compute[259627]: 2025-10-14 09:20:16.981 2 DEBUG oslo_concurrency.processutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:20:16 compute-0 nova_compute[259627]: 2025-10-14 09:20:16.988 2 DEBUG nova.compute.provider_tree [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.009 2 DEBUG nova.scheduler.client.report [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.045 2 DEBUG oslo_concurrency.lockutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.046 2 DEBUG nova.compute.manager [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.098 2 DEBUG nova.compute.manager [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.099 2 DEBUG nova.network.neutron [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.119 2 INFO nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.140 2 DEBUG nova.compute.manager [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.249 2 DEBUG nova.compute.manager [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.250 2 DEBUG nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.251 2 INFO nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Creating image(s)
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.272 2 DEBUG nova.storage.rbd_utils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] rbd image fd5669ba-0261-423e-8586-66c91ff570a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.294 2 DEBUG nova.storage.rbd_utils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] rbd image fd5669ba-0261-423e-8586-66c91ff570a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.314 2 DEBUG nova.storage.rbd_utils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] rbd image fd5669ba-0261-423e-8586-66c91ff570a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.317 2 DEBUG oslo_concurrency.processutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.390 2 DEBUG oslo_concurrency.processutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.391 2 DEBUG oslo_concurrency.lockutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.392 2 DEBUG oslo_concurrency.lockutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.392 2 DEBUG oslo_concurrency.lockutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.416 2 DEBUG nova.storage.rbd_utils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] rbd image fd5669ba-0261-423e-8586-66c91ff570a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.420 2 DEBUG oslo_concurrency.processutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 fd5669ba-0261-423e-8586-66c91ff570a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:20:17 compute-0 ceph-mon[74249]: pgmap v1996: 305 pgs: 305 active+clean; 41 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct 14 09:20:17 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/513575439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.716 2 DEBUG oslo_concurrency.processutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 fd5669ba-0261-423e-8586-66c91ff570a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.764 2 DEBUG nova.policy [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d77c101777148edbee39ba308af8e60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b859f880079e4e6db96cdef422402fa1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.810 2 DEBUG nova.storage.rbd_utils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] resizing rbd image fd5669ba-0261-423e-8586-66c91ff570a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.895 2 DEBUG nova.objects.instance [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lazy-loading 'migration_context' on Instance uuid fd5669ba-0261-423e-8586-66c91ff570a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.915 2 DEBUG nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.915 2 DEBUG nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Ensure instance console log exists: /var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.916 2 DEBUG oslo_concurrency.lockutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.916 2 DEBUG oslo_concurrency.lockutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:17 compute-0 nova_compute[259627]: 2025-10-14 09:20:17.916 2 DEBUG oslo_concurrency.lockutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:20:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 41 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:20:19 compute-0 nova_compute[259627]: 2025-10-14 09:20:19.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:19 compute-0 nova_compute[259627]: 2025-10-14 09:20:19.072 2 DEBUG nova.network.neutron [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Successfully created port: 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:20:19 compute-0 ceph-mon[74249]: pgmap v1997: 305 pgs: 305 active+clean; 41 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:20:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 57 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 865 KiB/s wr, 41 op/s
Oct 14 09:20:20 compute-0 nova_compute[259627]: 2025-10-14 09:20:20.819 2 DEBUG nova.network.neutron [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Successfully updated port: 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:20:20 compute-0 nova_compute[259627]: 2025-10-14 09:20:20.844 2 DEBUG oslo_concurrency.lockutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquiring lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:20:20 compute-0 nova_compute[259627]: 2025-10-14 09:20:20.845 2 DEBUG oslo_concurrency.lockutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquired lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:20:20 compute-0 nova_compute[259627]: 2025-10-14 09:20:20.845 2 DEBUG nova.network.neutron [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:20:20 compute-0 nova_compute[259627]: 2025-10-14 09:20:20.941 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433605.9403274, e83e1125-7b9a-4ca1-9040-40dbc3e0237b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:20:20 compute-0 nova_compute[259627]: 2025-10-14 09:20:20.941 2 INFO nova.compute.manager [-] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] VM Stopped (Lifecycle Event)
Oct 14 09:20:20 compute-0 nova_compute[259627]: 2025-10-14 09:20:20.961 2 DEBUG nova.compute.manager [req-f2d77887-34c6-414a-b1ae-15e931d69088 req-e95081ba-b468-41e5-b154-c7338e320cef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received event network-changed-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:20 compute-0 nova_compute[259627]: 2025-10-14 09:20:20.962 2 DEBUG nova.compute.manager [req-f2d77887-34c6-414a-b1ae-15e931d69088 req-e95081ba-b468-41e5-b154-c7338e320cef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Refreshing instance network info cache due to event network-changed-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:20:20 compute-0 nova_compute[259627]: 2025-10-14 09:20:20.963 2 DEBUG oslo_concurrency.lockutils [req-f2d77887-34c6-414a-b1ae-15e931d69088 req-e95081ba-b468-41e5-b154-c7338e320cef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:20:20 compute-0 nova_compute[259627]: 2025-10-14 09:20:20.965 2 DEBUG nova.compute.manager [None req-b77ddff1-f1b2-4f3f-bd42-07f5f53ae98e - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:20:21 compute-0 nova_compute[259627]: 2025-10-14 09:20:21.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:21 compute-0 nova_compute[259627]: 2025-10-14 09:20:21.141 2 DEBUG nova.network.neutron [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:20:21 compute-0 ceph-mon[74249]: pgmap v1998: 305 pgs: 305 active+clean; 57 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 865 KiB/s wr, 41 op/s
Oct 14 09:20:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 88 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.212 2 DEBUG nova.network.neutron [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Updating instance_info_cache with network_info: [{"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.239 2 DEBUG oslo_concurrency.lockutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Releasing lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.240 2 DEBUG nova.compute.manager [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Instance network_info: |[{"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.240 2 DEBUG oslo_concurrency.lockutils [req-f2d77887-34c6-414a-b1ae-15e931d69088 req-e95081ba-b468-41e5-b154-c7338e320cef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.241 2 DEBUG nova.network.neutron [req-f2d77887-34c6-414a-b1ae-15e931d69088 req-e95081ba-b468-41e5-b154-c7338e320cef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Refreshing network info cache for port 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.244 2 DEBUG nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Start _get_guest_xml network_info=[{"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.250 2 WARNING nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.255 2 DEBUG nova.virt.libvirt.host [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.256 2 DEBUG nova.virt.libvirt.host [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.263 2 DEBUG nova.virt.libvirt.host [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.263 2 DEBUG nova.virt.libvirt.host [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.264 2 DEBUG nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.264 2 DEBUG nova.virt.hardware [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.265 2 DEBUG nova.virt.hardware [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.265 2 DEBUG nova.virt.hardware [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.266 2 DEBUG nova.virt.hardware [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.266 2 DEBUG nova.virt.hardware [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.266 2 DEBUG nova.virt.hardware [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.267 2 DEBUG nova.virt.hardware [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.267 2 DEBUG nova.virt.hardware [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.267 2 DEBUG nova.virt.hardware [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.268 2 DEBUG nova.virt.hardware [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.268 2 DEBUG nova.virt.hardware [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.271 2 DEBUG oslo_concurrency.processutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:20:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:20:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/725406559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.788 2 DEBUG oslo_concurrency.processutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.813 2 DEBUG nova.storage.rbd_utils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] rbd image fd5669ba-0261-423e-8586-66c91ff570a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:20:22 compute-0 nova_compute[259627]: 2025-10-14 09:20:22.817 2 DEBUG oslo_concurrency.processutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:20:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:20:22 compute-0 unix_chkpwd[373272]: password check failed for user (root)
Oct 14 09:20:22 compute-0 sshd-session[373022]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96  user=root
Oct 14 09:20:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:20:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/994973256' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.243 2 DEBUG oslo_concurrency.processutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.244 2 DEBUG nova.virt.libvirt.vif [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:20:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1080027921',display_name='tempest-TestShelveInstance-server-1080027921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1080027921',id=114,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDaFeG/xJmvGbKmYgn4dJf37Cqex3YsQYrFJ72iAZg+c2DsrdPgi+tOr4SRSonbIRwf/h+BLYvaqfFBXVZQ0pwUCpayjdgXBvqZr1W73e5QpjlvvzQksOoSh5mhqEQBaTw==',key_name='tempest-TestShelveInstance-1777944733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b859f880079e4e6db96cdef422402fa1',ramdisk_id='',reservation_id='r-371aolc0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1721835966',owner_user_name='tempest-TestShelveInstance-1721835966-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:20:17Z,user_data=None,user_id='8d77c101777148edbee39ba308af8e60',uuid=fd5669ba-0261-423e-8586-66c91ff570a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.245 2 DEBUG nova.network.os_vif_util [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Converting VIF {"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.245 2 DEBUG nova.network.os_vif_util [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f,network=Network(6c11d5e6-13c9-49c7-982d-8d1198ac7dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a93d82c-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.246 2 DEBUG nova.objects.instance [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lazy-loading 'pci_devices' on Instance uuid fd5669ba-0261-423e-8586-66c91ff570a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.262 2 DEBUG nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:20:23 compute-0 nova_compute[259627]:   <uuid>fd5669ba-0261-423e-8586-66c91ff570a4</uuid>
Oct 14 09:20:23 compute-0 nova_compute[259627]:   <name>instance-00000072</name>
Oct 14 09:20:23 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:20:23 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:20:23 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <nova:name>tempest-TestShelveInstance-server-1080027921</nova:name>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:20:22</nova:creationTime>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:20:23 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:20:23 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:20:23 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:20:23 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:20:23 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:20:23 compute-0 nova_compute[259627]:         <nova:user uuid="8d77c101777148edbee39ba308af8e60">tempest-TestShelveInstance-1721835966-project-member</nova:user>
Oct 14 09:20:23 compute-0 nova_compute[259627]:         <nova:project uuid="b859f880079e4e6db96cdef422402fa1">tempest-TestShelveInstance-1721835966</nova:project>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:20:23 compute-0 nova_compute[259627]:         <nova:port uuid="8a93d82c-2ad9-4fb9-8867-f4d2cdac487f">
Oct 14 09:20:23 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:20:23 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:20:23 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <system>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <entry name="serial">fd5669ba-0261-423e-8586-66c91ff570a4</entry>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <entry name="uuid">fd5669ba-0261-423e-8586-66c91ff570a4</entry>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     </system>
Oct 14 09:20:23 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:20:23 compute-0 nova_compute[259627]:   <os>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:   </os>
Oct 14 09:20:23 compute-0 nova_compute[259627]:   <features>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:   </features>
Oct 14 09:20:23 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:20:23 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:20:23 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/fd5669ba-0261-423e-8586-66c91ff570a4_disk">
Oct 14 09:20:23 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       </source>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:20:23 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/fd5669ba-0261-423e-8586-66c91ff570a4_disk.config">
Oct 14 09:20:23 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       </source>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:20:23 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:5a:cb:1b"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <target dev="tap8a93d82c-2a"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4/console.log" append="off"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <video>
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     </video>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:20:23 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:20:23 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:20:23 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:20:23 compute-0 nova_compute[259627]: </domain>
Oct 14 09:20:23 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.263 2 DEBUG nova.compute.manager [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Preparing to wait for external event network-vif-plugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.263 2 DEBUG oslo_concurrency.lockutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquiring lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.264 2 DEBUG oslo_concurrency.lockutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.264 2 DEBUG oslo_concurrency.lockutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.264 2 DEBUG nova.virt.libvirt.vif [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:20:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1080027921',display_name='tempest-TestShelveInstance-server-1080027921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1080027921',id=114,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDaFeG/xJmvGbKmYgn4dJf37Cqex3YsQYrFJ72iAZg+c2DsrdPgi+tOr4SRSonbIRwf/h+BLYvaqfFBXVZQ0pwUCpayjdgXBvqZr1W73e5QpjlvvzQksOoSh5mhqEQBaTw==',key_name='tempest-TestShelveInstance-1777944733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b859f880079e4e6db96cdef422402fa1',ramdisk_id='',reservation_id='r-371aolc0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1721835966',owner_user_name='tempest-TestShelveInstance-1721835966-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:20:17Z,user_data=None,user_id='8d77c101777148edbee39ba308af8e60',uuid=fd5669ba-0261-423e-8586-66c91ff570a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.265 2 DEBUG nova.network.os_vif_util [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Converting VIF {"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.265 2 DEBUG nova.network.os_vif_util [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f,network=Network(6c11d5e6-13c9-49c7-982d-8d1198ac7dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a93d82c-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.266 2 DEBUG os_vif [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f,network=Network(6c11d5e6-13c9-49c7-982d-8d1198ac7dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a93d82c-2a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.266 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.270 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a93d82c-2a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.271 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8a93d82c-2a, col_values=(('external_ids', {'iface-id': '8a93d82c-2ad9-4fb9-8867-f4d2cdac487f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:cb:1b', 'vm-uuid': 'fd5669ba-0261-423e-8586-66c91ff570a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:20:23 compute-0 NetworkManager[44885]: <info>  [1760433623.3212] manager: (tap8a93d82c-2a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/488)
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.327 2 INFO os_vif [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f,network=Network(6c11d5e6-13c9-49c7-982d-8d1198ac7dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a93d82c-2a')
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.425 2 DEBUG nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.426 2 DEBUG nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.426 2 DEBUG nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] No VIF found with MAC fa:16:3e:5a:cb:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.427 2 INFO nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Using config drive
Oct 14 09:20:23 compute-0 nova_compute[259627]: 2025-10-14 09:20:23.455 2 DEBUG nova.storage.rbd_utils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] rbd image fd5669ba-0261-423e-8586-66c91ff570a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:20:23 compute-0 ceph-mon[74249]: pgmap v1999: 305 pgs: 305 active+clean; 88 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 14 09:20:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/725406559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:20:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/994973256' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:20:24 compute-0 nova_compute[259627]: 2025-10-14 09:20:24.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:24 compute-0 nova_compute[259627]: 2025-10-14 09:20:24.077 2 INFO nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Creating config drive at /var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4/disk.config
Oct 14 09:20:24 compute-0 nova_compute[259627]: 2025-10-14 09:20:24.087 2 DEBUG oslo_concurrency.processutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb6w_kizl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:20:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 88 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:20:24 compute-0 nova_compute[259627]: 2025-10-14 09:20:24.233 2 DEBUG oslo_concurrency.processutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb6w_kizl" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:20:24 compute-0 nova_compute[259627]: 2025-10-14 09:20:24.256 2 DEBUG nova.storage.rbd_utils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] rbd image fd5669ba-0261-423e-8586-66c91ff570a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:20:24 compute-0 nova_compute[259627]: 2025-10-14 09:20:24.261 2 DEBUG oslo_concurrency.processutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4/disk.config fd5669ba-0261-423e-8586-66c91ff570a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:20:24 compute-0 nova_compute[259627]: 2025-10-14 09:20:24.324 2 DEBUG nova.network.neutron [req-f2d77887-34c6-414a-b1ae-15e931d69088 req-e95081ba-b468-41e5-b154-c7338e320cef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Updated VIF entry in instance network info cache for port 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:20:24 compute-0 nova_compute[259627]: 2025-10-14 09:20:24.328 2 DEBUG nova.network.neutron [req-f2d77887-34c6-414a-b1ae-15e931d69088 req-e95081ba-b468-41e5-b154-c7338e320cef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Updating instance_info_cache with network_info: [{"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:20:24 compute-0 nova_compute[259627]: 2025-10-14 09:20:24.346 2 DEBUG oslo_concurrency.lockutils [req-f2d77887-34c6-414a-b1ae-15e931d69088 req-e95081ba-b468-41e5-b154-c7338e320cef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:20:24 compute-0 nova_compute[259627]: 2025-10-14 09:20:24.468 2 DEBUG oslo_concurrency.processutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4/disk.config fd5669ba-0261-423e-8586-66c91ff570a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:20:24 compute-0 nova_compute[259627]: 2025-10-14 09:20:24.469 2 INFO nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Deleting local config drive /var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4/disk.config because it was imported into RBD.
Oct 14 09:20:24 compute-0 kernel: tap8a93d82c-2a: entered promiscuous mode
Oct 14 09:20:24 compute-0 NetworkManager[44885]: <info>  [1760433624.5546] manager: (tap8a93d82c-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/489)
Oct 14 09:20:24 compute-0 nova_compute[259627]: 2025-10-14 09:20:24.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:24 compute-0 ovn_controller[152662]: 2025-10-14T09:20:24Z|01206|binding|INFO|Claiming lport 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f for this chassis.
Oct 14 09:20:24 compute-0 ovn_controller[152662]: 2025-10-14T09:20:24Z|01207|binding|INFO|8a93d82c-2ad9-4fb9-8867-f4d2cdac487f: Claiming fa:16:3e:5a:cb:1b 10.100.0.8
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.579 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:cb:1b 10.100.0.8'], port_security=['fa:16:3e:5a:cb:1b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fd5669ba-0261-423e-8586-66c91ff570a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c11d5e6-13c9-49c7-982d-8d1198ac7dea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b859f880079e4e6db96cdef422402fa1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb7ff285-1f24-4ca1-a6b2-fb0966d27f7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1272bcfa-9334-4a66-b4ee-2da7a182025f, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.581 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f in datapath 6c11d5e6-13c9-49c7-982d-8d1198ac7dea bound to our chassis
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.583 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c11d5e6-13c9-49c7-982d-8d1198ac7dea
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.603 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[85f2106d-9623-4d7d-bdbc-f0e65a15a887]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.604 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c11d5e6-11 in ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.606 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c11d5e6-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.607 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[12379c5d-bf62-4d99-bc0d-6364e5e6d125]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:24 compute-0 systemd-machined[214636]: New machine qemu-144-instance-00000072.
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.608 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ad687b2f-65d0-454a-8b23-243fcadaf266]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.626 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[966a8946-996a-4607-94ee-5a654d1a130c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:24 compute-0 systemd[1]: Started Virtual Machine qemu-144-instance-00000072.
Oct 14 09:20:24 compute-0 systemd-udevd[373352]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.661 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b216899e-c8b2-4f8b-83ff-d176430e54c2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:24 compute-0 nova_compute[259627]: 2025-10-14 09:20:24.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:24 compute-0 NetworkManager[44885]: <info>  [1760433624.6698] device (tap8a93d82c-2a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:20:24 compute-0 ovn_controller[152662]: 2025-10-14T09:20:24Z|01208|binding|INFO|Setting lport 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f ovn-installed in OVS
Oct 14 09:20:24 compute-0 ovn_controller[152662]: 2025-10-14T09:20:24Z|01209|binding|INFO|Setting lport 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f up in Southbound
Oct 14 09:20:24 compute-0 NetworkManager[44885]: <info>  [1760433624.6710] device (tap8a93d82c-2a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:20:24 compute-0 nova_compute[259627]: 2025-10-14 09:20:24.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.705 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e5500112-38cc-4e37-9fdb-5964ef631d54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.711 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ad2d7d-3894-4fac-b593-ec7fe4f635c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:24 compute-0 NetworkManager[44885]: <info>  [1760433624.7133] manager: (tap6c11d5e6-10): new Veth device (/org/freedesktop/NetworkManager/Devices/490)
Oct 14 09:20:24 compute-0 systemd-udevd[373355]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:20:24 compute-0 sshd-session[373022]: Failed password for root from 188.150.249.96 port 49774 ssh2
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.762 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8a05bd7c-b5b5-4770-800e-4bcdbd938dda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.766 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4be06f08-0b6a-48c6-ac92-f734852fdb00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:24 compute-0 NetworkManager[44885]: <info>  [1760433624.7955] device (tap6c11d5e6-10): carrier: link connected
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.804 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[766e4d7e-5bee-41c5-b616-f8c238001374]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.827 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab6015f-f672-4569-abdb-eb2fb55f77b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c11d5e6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:99:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 345], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 740355, 'reachable_time': 22466, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373382, 'error': None, 'target': 'ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.850 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8699af5f-0e37-4ff1-9f01-ba45345aa53f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:9900'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 740355, 'tstamp': 740355}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373383, 'error': None, 'target': 'ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.883 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b518f124-37fd-4284-a048-63b052170f63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c11d5e6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:99:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 345], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 740355, 'reachable_time': 22466, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373384, 'error': None, 'target': 'ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.919 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5746a9-e535-4b0c-aa24-14cf5af8e026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.988 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[01422a8c-453f-450a-b16b-884976e5283e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.989 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c11d5e6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.990 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.990 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c11d5e6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:24 compute-0 NetworkManager[44885]: <info>  [1760433624.9923] manager: (tap6c11d5e6-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/491)
Oct 14 09:20:24 compute-0 kernel: tap6c11d5e6-10: entered promiscuous mode
Oct 14 09:20:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:24.995 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c11d5e6-10, col_values=(('external_ids', {'iface-id': 'b7b18d0e-c023-45b7-896e-bf4c2ce15855'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:24 compute-0 nova_compute[259627]: 2025-10-14 09:20:24.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:24 compute-0 ovn_controller[152662]: 2025-10-14T09:20:24Z|01210|binding|INFO|Releasing lport b7b18d0e-c023-45b7-896e-bf4c2ce15855 from this chassis (sb_readonly=0)
Oct 14 09:20:25 compute-0 nova_compute[259627]: 2025-10-14 09:20:25.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:25.023 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c11d5e6-13c9-49c7-982d-8d1198ac7dea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c11d5e6-13c9-49c7-982d-8d1198ac7dea.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:25.024 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f51a8736-65e9-4908-b2eb-4d7aa4e73f21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:25.025 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-6c11d5e6-13c9-49c7-982d-8d1198ac7dea
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/6c11d5e6-13c9-49c7-982d-8d1198ac7dea.pid.haproxy
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 6c11d5e6-13c9-49c7-982d-8d1198ac7dea
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:20:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:25.026 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea', 'env', 'PROCESS_TAG=haproxy-6c11d5e6-13c9-49c7-982d-8d1198ac7dea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c11d5e6-13c9-49c7-982d-8d1198ac7dea.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:20:25 compute-0 podman[373458]: 2025-10-14 09:20:25.488583331 +0000 UTC m=+0.074083527 container create d1ff4086a427df3d94405a6bb6aa0ee5d96755ccc2b46229d62181d289a6cc0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:20:25 compute-0 nova_compute[259627]: 2025-10-14 09:20:25.495 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433625.49461, fd5669ba-0261-423e-8586-66c91ff570a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:20:25 compute-0 nova_compute[259627]: 2025-10-14 09:20:25.496 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] VM Started (Lifecycle Event)
Oct 14 09:20:25 compute-0 nova_compute[259627]: 2025-10-14 09:20:25.520 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:20:25 compute-0 nova_compute[259627]: 2025-10-14 09:20:25.525 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433625.4949844, fd5669ba-0261-423e-8586-66c91ff570a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:20:25 compute-0 nova_compute[259627]: 2025-10-14 09:20:25.525 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] VM Paused (Lifecycle Event)
Oct 14 09:20:25 compute-0 podman[373458]: 2025-10-14 09:20:25.443669634 +0000 UTC m=+0.029169890 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:20:25 compute-0 systemd[1]: Started libpod-conmon-d1ff4086a427df3d94405a6bb6aa0ee5d96755ccc2b46229d62181d289a6cc0a.scope.
Oct 14 09:20:25 compute-0 nova_compute[259627]: 2025-10-14 09:20:25.540 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:20:25 compute-0 nova_compute[259627]: 2025-10-14 09:20:25.544 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:20:25 compute-0 nova_compute[259627]: 2025-10-14 09:20:25.561 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:20:25 compute-0 ceph-mon[74249]: pgmap v2000: 305 pgs: 305 active+clean; 88 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:20:25 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:20:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b1dc91f394747fc08b06f6f17ab00bb2c39967dc47162c30c76e21bf21f5448/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:20:25 compute-0 podman[373458]: 2025-10-14 09:20:25.598526881 +0000 UTC m=+0.184027147 container init d1ff4086a427df3d94405a6bb6aa0ee5d96755ccc2b46229d62181d289a6cc0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:20:25 compute-0 podman[373458]: 2025-10-14 09:20:25.607437761 +0000 UTC m=+0.192937957 container start d1ff4086a427df3d94405a6bb6aa0ee5d96755ccc2b46229d62181d289a6cc0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:20:25 compute-0 neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea[373473]: [NOTICE]   (373477) : New worker (373479) forked
Oct 14 09:20:25 compute-0 neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea[373473]: [NOTICE]   (373477) : Loading success.
Oct 14 09:20:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 88 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 09:20:26 compute-0 sshd-session[373022]: Connection closed by authenticating user root 188.150.249.96 port 49774 [preauth]
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.915 2 DEBUG nova.compute.manager [req-469139ab-2465-4b35-b86c-992d30c31e9a req-2bd3fe1c-e582-4bc6-a418-f84b7ff3ecb9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received event network-vif-plugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.916 2 DEBUG oslo_concurrency.lockutils [req-469139ab-2465-4b35-b86c-992d30c31e9a req-2bd3fe1c-e582-4bc6-a418-f84b7ff3ecb9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.916 2 DEBUG oslo_concurrency.lockutils [req-469139ab-2465-4b35-b86c-992d30c31e9a req-2bd3fe1c-e582-4bc6-a418-f84b7ff3ecb9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.917 2 DEBUG oslo_concurrency.lockutils [req-469139ab-2465-4b35-b86c-992d30c31e9a req-2bd3fe1c-e582-4bc6-a418-f84b7ff3ecb9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.917 2 DEBUG nova.compute.manager [req-469139ab-2465-4b35-b86c-992d30c31e9a req-2bd3fe1c-e582-4bc6-a418-f84b7ff3ecb9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Processing event network-vif-plugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.918 2 DEBUG nova.compute.manager [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.922 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433626.9222286, fd5669ba-0261-423e-8586-66c91ff570a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.922 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] VM Resumed (Lifecycle Event)
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.926 2 DEBUG nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.930 2 INFO nova.virt.libvirt.driver [-] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Instance spawned successfully.
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.931 2 DEBUG nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.954 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.962 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.969 2 DEBUG nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.970 2 DEBUG nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.970 2 DEBUG nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.971 2 DEBUG nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.972 2 DEBUG nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:20:26 compute-0 nova_compute[259627]: 2025-10-14 09:20:26.973 2 DEBUG nova.virt.libvirt.driver [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:20:27 compute-0 nova_compute[259627]: 2025-10-14 09:20:27.010 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:20:27 compute-0 nova_compute[259627]: 2025-10-14 09:20:27.052 2 INFO nova.compute.manager [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Took 9.80 seconds to spawn the instance on the hypervisor.
Oct 14 09:20:27 compute-0 nova_compute[259627]: 2025-10-14 09:20:27.053 2 DEBUG nova.compute.manager [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:20:27 compute-0 nova_compute[259627]: 2025-10-14 09:20:27.132 2 INFO nova.compute.manager [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Took 10.87 seconds to build instance.
Oct 14 09:20:27 compute-0 nova_compute[259627]: 2025-10-14 09:20:27.153 2 DEBUG oslo_concurrency.lockutils [None req-472af2d3-8430-4d89-8127-5dcc2bf3a610 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:27 compute-0 ceph-mon[74249]: pgmap v2001: 305 pgs: 305 active+clean; 88 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 09:20:27 compute-0 podman[373489]: 2025-10-14 09:20:27.662517771 +0000 UTC m=+0.067821203 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 09:20:27 compute-0 podman[373490]: 2025-10-14 09:20:27.672692062 +0000 UTC m=+0.073580705 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 14 09:20:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:20:27 compute-0 nova_compute[259627]: 2025-10-14 09:20:27.950 2 DEBUG oslo_concurrency.lockutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "6a505551-bc3f-4254-966f-ca344358f8ac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:27 compute-0 nova_compute[259627]: 2025-10-14 09:20:27.951 2 DEBUG oslo_concurrency.lockutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "6a505551-bc3f-4254-966f-ca344358f8ac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:27 compute-0 nova_compute[259627]: 2025-10-14 09:20:27.980 2 DEBUG nova.compute.manager [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:20:28 compute-0 nova_compute[259627]: 2025-10-14 09:20:28.087 2 DEBUG oslo_concurrency.lockutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:28 compute-0 nova_compute[259627]: 2025-10-14 09:20:28.088 2 DEBUG oslo_concurrency.lockutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:28 compute-0 nova_compute[259627]: 2025-10-14 09:20:28.095 2 DEBUG nova.virt.hardware [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:20:28 compute-0 nova_compute[259627]: 2025-10-14 09:20:28.096 2 INFO nova.compute.claims [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:20:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 88 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 09:20:28 compute-0 nova_compute[259627]: 2025-10-14 09:20:28.295 2 DEBUG oslo_concurrency.processutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:20:28 compute-0 nova_compute[259627]: 2025-10-14 09:20:28.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:20:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2335589077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:20:28 compute-0 nova_compute[259627]: 2025-10-14 09:20:28.837 2 DEBUG oslo_concurrency.processutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:20:28 compute-0 nova_compute[259627]: 2025-10-14 09:20:28.843 2 DEBUG nova.compute.provider_tree [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:20:28 compute-0 nova_compute[259627]: 2025-10-14 09:20:28.856 2 DEBUG nova.scheduler.client.report [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:20:28 compute-0 nova_compute[259627]: 2025-10-14 09:20:28.874 2 DEBUG oslo_concurrency.lockutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:28 compute-0 nova_compute[259627]: 2025-10-14 09:20:28.875 2 DEBUG nova.compute.manager [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:20:28 compute-0 nova_compute[259627]: 2025-10-14 09:20:28.927 2 DEBUG nova.compute.manager [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:20:28 compute-0 nova_compute[259627]: 2025-10-14 09:20:28.928 2 DEBUG nova.network.neutron [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:20:28 compute-0 nova_compute[259627]: 2025-10-14 09:20:28.947 2 INFO nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:20:28 compute-0 nova_compute[259627]: 2025-10-14 09:20:28.968 2 DEBUG nova.compute.manager [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.011 2 DEBUG nova.compute.manager [req-36e90299-56f5-4e20-b1d6-60e7ff45ceec req-771269fa-2ec5-4f27-9bea-33a7f9e3168f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received event network-vif-plugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.011 2 DEBUG oslo_concurrency.lockutils [req-36e90299-56f5-4e20-b1d6-60e7ff45ceec req-771269fa-2ec5-4f27-9bea-33a7f9e3168f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.012 2 DEBUG oslo_concurrency.lockutils [req-36e90299-56f5-4e20-b1d6-60e7ff45ceec req-771269fa-2ec5-4f27-9bea-33a7f9e3168f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.012 2 DEBUG oslo_concurrency.lockutils [req-36e90299-56f5-4e20-b1d6-60e7ff45ceec req-771269fa-2ec5-4f27-9bea-33a7f9e3168f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.012 2 DEBUG nova.compute.manager [req-36e90299-56f5-4e20-b1d6-60e7ff45ceec req-771269fa-2ec5-4f27-9bea-33a7f9e3168f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] No waiting events found dispatching network-vif-plugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.013 2 WARNING nova.compute.manager [req-36e90299-56f5-4e20-b1d6-60e7ff45ceec req-771269fa-2ec5-4f27-9bea-33a7f9e3168f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received unexpected event network-vif-plugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f for instance with vm_state active and task_state None.
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.063 2 DEBUG nova.compute.manager [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.064 2 DEBUG nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.065 2 INFO nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Creating image(s)
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.099 2 DEBUG nova.storage.rbd_utils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 6a505551-bc3f-4254-966f-ca344358f8ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.133 2 DEBUG nova.storage.rbd_utils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 6a505551-bc3f-4254-966f-ca344358f8ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.166 2 DEBUG nova.storage.rbd_utils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 6a505551-bc3f-4254-966f-ca344358f8ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.172 2 DEBUG oslo_concurrency.processutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.278 2 DEBUG oslo_concurrency.processutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.281 2 DEBUG oslo_concurrency.lockutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.282 2 DEBUG oslo_concurrency.lockutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.283 2 DEBUG oslo_concurrency.lockutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.314 2 DEBUG nova.storage.rbd_utils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 6a505551-bc3f-4254-966f-ca344358f8ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.318 2 DEBUG oslo_concurrency.processutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 6a505551-bc3f-4254-966f-ca344358f8ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.568 2 DEBUG nova.policy [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:20:29 compute-0 ceph-mon[74249]: pgmap v2002: 305 pgs: 305 active+clean; 88 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 09:20:29 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2335589077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.623 2 DEBUG oslo_concurrency.processutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 6a505551-bc3f-4254-966f-ca344358f8ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.703 2 DEBUG nova.storage.rbd_utils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image 6a505551-bc3f-4254-966f-ca344358f8ac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.836 2 DEBUG nova.objects.instance [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid 6a505551-bc3f-4254-966f-ca344358f8ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.850 2 DEBUG nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.851 2 DEBUG nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Ensure instance console log exists: /var/lib/nova/instances/6a505551-bc3f-4254-966f-ca344358f8ac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.852 2 DEBUG oslo_concurrency.lockutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.852 2 DEBUG oslo_concurrency.lockutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:29 compute-0 nova_compute[259627]: 2025-10-14 09:20:29.853 2 DEBUG oslo_concurrency.lockutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 103 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 839 KiB/s rd, 2.6 MiB/s wr, 63 op/s
Oct 14 09:20:30 compute-0 nova_compute[259627]: 2025-10-14 09:20:30.438 2 DEBUG nova.network.neutron [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Successfully created port: 8210d83b-b3db-4515-b65b-c49829132abf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:20:31 compute-0 ceph-mon[74249]: pgmap v2003: 305 pgs: 305 active+clean; 103 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 839 KiB/s rd, 2.6 MiB/s wr, 63 op/s
Oct 14 09:20:31 compute-0 nova_compute[259627]: 2025-10-14 09:20:31.663 2 DEBUG nova.network.neutron [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Successfully updated port: 8210d83b-b3db-4515-b65b-c49829132abf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:20:31 compute-0 nova_compute[259627]: 2025-10-14 09:20:31.679 2 DEBUG oslo_concurrency.lockutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-6a505551-bc3f-4254-966f-ca344358f8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:20:31 compute-0 nova_compute[259627]: 2025-10-14 09:20:31.679 2 DEBUG oslo_concurrency.lockutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-6a505551-bc3f-4254-966f-ca344358f8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:20:31 compute-0 nova_compute[259627]: 2025-10-14 09:20:31.679 2 DEBUG nova.network.neutron [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:20:31 compute-0 nova_compute[259627]: 2025-10-14 09:20:31.855 2 DEBUG nova.compute.manager [req-d23dcd89-b9e9-4ed3-a57f-d0b1771786bd req-2bc887ca-3407-409c-9300-f8b10227c1a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Received event network-changed-8210d83b-b3db-4515-b65b-c49829132abf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:31 compute-0 nova_compute[259627]: 2025-10-14 09:20:31.855 2 DEBUG nova.compute.manager [req-d23dcd89-b9e9-4ed3-a57f-d0b1771786bd req-2bc887ca-3407-409c-9300-f8b10227c1a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Refreshing instance network info cache due to event network-changed-8210d83b-b3db-4515-b65b-c49829132abf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:20:31 compute-0 nova_compute[259627]: 2025-10-14 09:20:31.856 2 DEBUG oslo_concurrency.lockutils [req-d23dcd89-b9e9-4ed3-a57f-d0b1771786bd req-2bc887ca-3407-409c-9300-f8b10227c1a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-6a505551-bc3f-4254-966f-ca344358f8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:20:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 134 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 113 op/s
Oct 14 09:20:32 compute-0 nova_compute[259627]: 2025-10-14 09:20:32.293 2 DEBUG nova.network.neutron [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:20:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:20:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:20:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:20:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:20:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:20:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:20:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:20:32
Oct 14 09:20:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:20:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:20:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'vms', 'volumes', '.rgw.root', 'default.rgw.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data']
Oct 14 09:20:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:20:32 compute-0 NetworkManager[44885]: <info>  [1760433632.8367] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/492)
Oct 14 09:20:32 compute-0 NetworkManager[44885]: <info>  [1760433632.8391] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/493)
Oct 14 09:20:32 compute-0 nova_compute[259627]: 2025-10-14 09:20:32.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:20:32 compute-0 nova_compute[259627]: 2025-10-14 09:20:32.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:20:32 compute-0 nova_compute[259627]: 2025-10-14 09:20:32.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:32 compute-0 ovn_controller[152662]: 2025-10-14T09:20:32Z|01211|binding|INFO|Releasing lport b7b18d0e-c023-45b7-896e-bf4c2ce15855 from this chassis (sb_readonly=0)
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.005 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.005 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.005 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.006 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:20:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:20:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:20:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:20:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:20:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:20:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:20:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:20:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:20:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:20:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/96645864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.468 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.550 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.552 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:20:33 compute-0 ceph-mon[74249]: pgmap v2004: 305 pgs: 305 active+clean; 134 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 113 op/s
Oct 14 09:20:33 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/96645864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.623 2 DEBUG nova.network.neutron [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Updating instance_info_cache with network_info: [{"id": "8210d83b-b3db-4515-b65b-c49829132abf", "address": "fa:16:3e:b3:90:f9", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8210d83b-b3", "ovs_interfaceid": "8210d83b-b3db-4515-b65b-c49829132abf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.646 2 DEBUG oslo_concurrency.lockutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-6a505551-bc3f-4254-966f-ca344358f8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.647 2 DEBUG nova.compute.manager [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Instance network_info: |[{"id": "8210d83b-b3db-4515-b65b-c49829132abf", "address": "fa:16:3e:b3:90:f9", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8210d83b-b3", "ovs_interfaceid": "8210d83b-b3db-4515-b65b-c49829132abf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.647 2 DEBUG oslo_concurrency.lockutils [req-d23dcd89-b9e9-4ed3-a57f-d0b1771786bd req-2bc887ca-3407-409c-9300-f8b10227c1a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-6a505551-bc3f-4254-966f-ca344358f8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.647 2 DEBUG nova.network.neutron [req-d23dcd89-b9e9-4ed3-a57f-d0b1771786bd req-2bc887ca-3407-409c-9300-f8b10227c1a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Refreshing network info cache for port 8210d83b-b3db-4515-b65b-c49829132abf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.649 2 DEBUG nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Start _get_guest_xml network_info=[{"id": "8210d83b-b3db-4515-b65b-c49829132abf", "address": "fa:16:3e:b3:90:f9", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8210d83b-b3", "ovs_interfaceid": "8210d83b-b3db-4515-b65b-c49829132abf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.655 2 WARNING nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.658 2 DEBUG nova.virt.libvirt.host [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.659 2 DEBUG nova.virt.libvirt.host [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.665 2 DEBUG nova.virt.libvirt.host [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.666 2 DEBUG nova.virt.libvirt.host [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.666 2 DEBUG nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.667 2 DEBUG nova.virt.hardware [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.667 2 DEBUG nova.virt.hardware [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.667 2 DEBUG nova.virt.hardware [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.667 2 DEBUG nova.virt.hardware [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.668 2 DEBUG nova.virt.hardware [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.668 2 DEBUG nova.virt.hardware [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.668 2 DEBUG nova.virt.hardware [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.668 2 DEBUG nova.virt.hardware [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.669 2 DEBUG nova.virt.hardware [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.669 2 DEBUG nova.virt.hardware [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.669 2 DEBUG nova.virt.hardware [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.671 2 DEBUG oslo_concurrency.processutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.852 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.853 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3646MB free_disk=59.94662857055664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.853 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.854 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.953 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance fd5669ba-0261-423e-8586-66c91ff570a4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.953 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 6a505551-bc3f-4254-966f-ca344358f8ac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.954 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.954 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.960 2 DEBUG nova.compute.manager [req-1050b3d3-98dc-4219-933c-efbc5bdcb4b3 req-5b18d676-b740-4f09-b289-a2c384aa15d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received event network-changed-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.961 2 DEBUG nova.compute.manager [req-1050b3d3-98dc-4219-933c-efbc5bdcb4b3 req-5b18d676-b740-4f09-b289-a2c384aa15d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Refreshing instance network info cache due to event network-changed-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.961 2 DEBUG oslo_concurrency.lockutils [req-1050b3d3-98dc-4219-933c-efbc5bdcb4b3 req-5b18d676-b740-4f09-b289-a2c384aa15d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.961 2 DEBUG oslo_concurrency.lockutils [req-1050b3d3-98dc-4219-933c-efbc5bdcb4b3 req-5b18d676-b740-4f09-b289-a2c384aa15d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:20:33 compute-0 nova_compute[259627]: 2025-10-14 09:20:33.962 2 DEBUG nova.network.neutron [req-1050b3d3-98dc-4219-933c-efbc5bdcb4b3 req-5b18d676-b740-4f09-b289-a2c384aa15d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Refreshing network info cache for port 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.007 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 134 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:20:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:20:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2500044264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.163 2 DEBUG oslo_concurrency.processutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.186 2 DEBUG nova.storage.rbd_utils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 6a505551-bc3f-4254-966f-ca344358f8ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.190 2 DEBUG oslo_concurrency.processutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:20:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:20:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2032462373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.455 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.464 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.485 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.523 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.524 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:20:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/349966936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:20:34 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2500044264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:20:34 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2032462373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:20:34 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/349966936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.626 2 DEBUG oslo_concurrency.processutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.628 2 DEBUG nova.virt.libvirt.vif [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:20:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-629499808',display_name='tempest-TestNetworkBasicOps-server-629499808',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-629499808',id=115,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP5Gh4rdsdSynoxSEF7tq/kTUtYCuC2zsLAToH//zOjD330tt1Y4Nmd3BgWIZVekiJHZCljvM1fo2eUZ6bfCX6pV1ICXWzyF2zMOpuUQORM1Q5pU9nSa2DumnhiOt/fsgA==',key_name='tempest-TestNetworkBasicOps-2079468078',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-0mgzd3ft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:20:28Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=6a505551-bc3f-4254-966f-ca344358f8ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8210d83b-b3db-4515-b65b-c49829132abf", "address": "fa:16:3e:b3:90:f9", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8210d83b-b3", "ovs_interfaceid": "8210d83b-b3db-4515-b65b-c49829132abf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.629 2 DEBUG nova.network.os_vif_util [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8210d83b-b3db-4515-b65b-c49829132abf", "address": "fa:16:3e:b3:90:f9", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8210d83b-b3", "ovs_interfaceid": "8210d83b-b3db-4515-b65b-c49829132abf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.631 2 DEBUG nova.network.os_vif_util [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:90:f9,bridge_name='br-int',has_traffic_filtering=True,id=8210d83b-b3db-4515-b65b-c49829132abf,network=Network(61c1acdc-e817-4d26-8900-47d35332175a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8210d83b-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.633 2 DEBUG nova.objects.instance [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a505551-bc3f-4254-966f-ca344358f8ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.655 2 DEBUG nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:20:34 compute-0 nova_compute[259627]:   <uuid>6a505551-bc3f-4254-966f-ca344358f8ac</uuid>
Oct 14 09:20:34 compute-0 nova_compute[259627]:   <name>instance-00000073</name>
Oct 14 09:20:34 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:20:34 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:20:34 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkBasicOps-server-629499808</nova:name>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:20:33</nova:creationTime>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:20:34 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:20:34 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:20:34 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:20:34 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:20:34 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:20:34 compute-0 nova_compute[259627]:         <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:20:34 compute-0 nova_compute[259627]:         <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:20:34 compute-0 nova_compute[259627]:         <nova:port uuid="8210d83b-b3db-4515-b65b-c49829132abf">
Oct 14 09:20:34 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:20:34 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:20:34 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <system>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <entry name="serial">6a505551-bc3f-4254-966f-ca344358f8ac</entry>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <entry name="uuid">6a505551-bc3f-4254-966f-ca344358f8ac</entry>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     </system>
Oct 14 09:20:34 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:20:34 compute-0 nova_compute[259627]:   <os>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:   </os>
Oct 14 09:20:34 compute-0 nova_compute[259627]:   <features>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:   </features>
Oct 14 09:20:34 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:20:34 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:20:34 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/6a505551-bc3f-4254-966f-ca344358f8ac_disk">
Oct 14 09:20:34 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       </source>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:20:34 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/6a505551-bc3f-4254-966f-ca344358f8ac_disk.config">
Oct 14 09:20:34 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       </source>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:20:34 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:b3:90:f9"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <target dev="tap8210d83b-b3"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/6a505551-bc3f-4254-966f-ca344358f8ac/console.log" append="off"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <video>
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     </video>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:20:34 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:20:34 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:20:34 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:20:34 compute-0 nova_compute[259627]: </domain>
Oct 14 09:20:34 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.666 2 DEBUG nova.compute.manager [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Preparing to wait for external event network-vif-plugged-8210d83b-b3db-4515-b65b-c49829132abf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.667 2 DEBUG oslo_concurrency.lockutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "6a505551-bc3f-4254-966f-ca344358f8ac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.667 2 DEBUG oslo_concurrency.lockutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "6a505551-bc3f-4254-966f-ca344358f8ac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.668 2 DEBUG oslo_concurrency.lockutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "6a505551-bc3f-4254-966f-ca344358f8ac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.669 2 DEBUG nova.virt.libvirt.vif [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:20:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-629499808',display_name='tempest-TestNetworkBasicOps-server-629499808',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-629499808',id=115,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP5Gh4rdsdSynoxSEF7tq/kTUtYCuC2zsLAToH//zOjD330tt1Y4Nmd3BgWIZVekiJHZCljvM1fo2eUZ6bfCX6pV1ICXWzyF2zMOpuUQORM1Q5pU9nSa2DumnhiOt/fsgA==',key_name='tempest-TestNetworkBasicOps-2079468078',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-0mgzd3ft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:20:28Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=6a505551-bc3f-4254-966f-ca344358f8ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8210d83b-b3db-4515-b65b-c49829132abf", "address": "fa:16:3e:b3:90:f9", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8210d83b-b3", "ovs_interfaceid": "8210d83b-b3db-4515-b65b-c49829132abf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.669 2 DEBUG nova.network.os_vif_util [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8210d83b-b3db-4515-b65b-c49829132abf", "address": "fa:16:3e:b3:90:f9", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8210d83b-b3", "ovs_interfaceid": "8210d83b-b3db-4515-b65b-c49829132abf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.670 2 DEBUG nova.network.os_vif_util [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:90:f9,bridge_name='br-int',has_traffic_filtering=True,id=8210d83b-b3db-4515-b65b-c49829132abf,network=Network(61c1acdc-e817-4d26-8900-47d35332175a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8210d83b-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.672 2 DEBUG os_vif [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:90:f9,bridge_name='br-int',has_traffic_filtering=True,id=8210d83b-b3db-4515-b65b-c49829132abf,network=Network(61c1acdc-e817-4d26-8900-47d35332175a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8210d83b-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.674 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.675 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.683 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8210d83b-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8210d83b-b3, col_values=(('external_ids', {'iface-id': '8210d83b-b3db-4515-b65b-c49829132abf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:90:f9', 'vm-uuid': '6a505551-bc3f-4254-966f-ca344358f8ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:34 compute-0 NetworkManager[44885]: <info>  [1760433634.6873] manager: (tap8210d83b-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/494)
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.693 2 INFO os_vif [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:90:f9,bridge_name='br-int',has_traffic_filtering=True,id=8210d83b-b3db-4515-b65b-c49829132abf,network=Network(61c1acdc-e817-4d26-8900-47d35332175a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8210d83b-b3')
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.761 2 DEBUG nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.762 2 DEBUG nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.762 2 DEBUG nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:b3:90:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.763 2 INFO nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Using config drive
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.791 2 DEBUG nova.storage.rbd_utils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 6a505551-bc3f-4254-966f-ca344358f8ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.926 2 DEBUG nova.network.neutron [req-d23dcd89-b9e9-4ed3-a57f-d0b1771786bd req-2bc887ca-3407-409c-9300-f8b10227c1a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Updated VIF entry in instance network info cache for port 8210d83b-b3db-4515-b65b-c49829132abf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.927 2 DEBUG nova.network.neutron [req-d23dcd89-b9e9-4ed3-a57f-d0b1771786bd req-2bc887ca-3407-409c-9300-f8b10227c1a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Updating instance_info_cache with network_info: [{"id": "8210d83b-b3db-4515-b65b-c49829132abf", "address": "fa:16:3e:b3:90:f9", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8210d83b-b3", "ovs_interfaceid": "8210d83b-b3db-4515-b65b-c49829132abf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:20:34 compute-0 nova_compute[259627]: 2025-10-14 09:20:34.958 2 DEBUG oslo_concurrency.lockutils [req-d23dcd89-b9e9-4ed3-a57f-d0b1771786bd req-2bc887ca-3407-409c-9300-f8b10227c1a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-6a505551-bc3f-4254-966f-ca344358f8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:20:35 compute-0 nova_compute[259627]: 2025-10-14 09:20:35.204 2 INFO nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Creating config drive at /var/lib/nova/instances/6a505551-bc3f-4254-966f-ca344358f8ac/disk.config
Oct 14 09:20:35 compute-0 nova_compute[259627]: 2025-10-14 09:20:35.210 2 DEBUG oslo_concurrency.processutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a505551-bc3f-4254-966f-ca344358f8ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg2emxr8n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:20:35 compute-0 nova_compute[259627]: 2025-10-14 09:20:35.368 2 DEBUG nova.network.neutron [req-1050b3d3-98dc-4219-933c-efbc5bdcb4b3 req-5b18d676-b740-4f09-b289-a2c384aa15d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Updated VIF entry in instance network info cache for port 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:20:35 compute-0 nova_compute[259627]: 2025-10-14 09:20:35.372 2 DEBUG nova.network.neutron [req-1050b3d3-98dc-4219-933c-efbc5bdcb4b3 req-5b18d676-b740-4f09-b289-a2c384aa15d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Updating instance_info_cache with network_info: [{"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:20:35 compute-0 nova_compute[259627]: 2025-10-14 09:20:35.376 2 DEBUG oslo_concurrency.processutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a505551-bc3f-4254-966f-ca344358f8ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg2emxr8n" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:20:35 compute-0 nova_compute[259627]: 2025-10-14 09:20:35.412 2 DEBUG nova.storage.rbd_utils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 6a505551-bc3f-4254-966f-ca344358f8ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:20:35 compute-0 nova_compute[259627]: 2025-10-14 09:20:35.416 2 DEBUG oslo_concurrency.processutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a505551-bc3f-4254-966f-ca344358f8ac/disk.config 6a505551-bc3f-4254-966f-ca344358f8ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:20:35 compute-0 nova_compute[259627]: 2025-10-14 09:20:35.459 2 DEBUG oslo_concurrency.lockutils [req-1050b3d3-98dc-4219-933c-efbc5bdcb4b3 req-5b18d676-b740-4f09-b289-a2c384aa15d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:20:35 compute-0 nova_compute[259627]: 2025-10-14 09:20:35.571 2 DEBUG oslo_concurrency.processutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a505551-bc3f-4254-966f-ca344358f8ac/disk.config 6a505551-bc3f-4254-966f-ca344358f8ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:20:35 compute-0 nova_compute[259627]: 2025-10-14 09:20:35.573 2 INFO nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Deleting local config drive /var/lib/nova/instances/6a505551-bc3f-4254-966f-ca344358f8ac/disk.config because it was imported into RBD.
Oct 14 09:20:35 compute-0 kernel: tap8210d83b-b3: entered promiscuous mode
Oct 14 09:20:35 compute-0 ceph-mon[74249]: pgmap v2005: 305 pgs: 305 active+clean; 134 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:20:35 compute-0 ovn_controller[152662]: 2025-10-14T09:20:35Z|01212|binding|INFO|Claiming lport 8210d83b-b3db-4515-b65b-c49829132abf for this chassis.
Oct 14 09:20:35 compute-0 ovn_controller[152662]: 2025-10-14T09:20:35Z|01213|binding|INFO|8210d83b-b3db-4515-b65b-c49829132abf: Claiming fa:16:3e:b3:90:f9 10.100.0.12
Oct 14 09:20:35 compute-0 NetworkManager[44885]: <info>  [1760433635.6345] manager: (tap8210d83b-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/495)
Oct 14 09:20:35 compute-0 nova_compute[259627]: 2025-10-14 09:20:35.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.644 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:90:f9 10.100.0.12'], port_security=['fa:16:3e:b3:90:f9 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6a505551-bc3f-4254-966f-ca344358f8ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61c1acdc-e817-4d26-8900-47d35332175a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e8c6014f-d547-4ba8-9654-07ba6669ba65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfec6e64-6ad9-4ee1-953d-480113ac60ef, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8210d83b-b3db-4515-b65b-c49829132abf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.646 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8210d83b-b3db-4515-b65b-c49829132abf in datapath 61c1acdc-e817-4d26-8900-47d35332175a bound to our chassis
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.647 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61c1acdc-e817-4d26-8900-47d35332175a
Oct 14 09:20:35 compute-0 systemd-udevd[373894]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.660 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d70d10f6-8308-47e8-89ba-e4ba308bda99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.662 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap61c1acdc-e1 in ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.663 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap61c1acdc-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.663 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[66f4aac3-b49f-4f38-90c2-a35f67eeadc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.665 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[00962854-fd19-4b0e-8ab6-fb8022508d52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:35 compute-0 ovn_controller[152662]: 2025-10-14T09:20:35Z|01214|binding|INFO|Setting lport 8210d83b-b3db-4515-b65b-c49829132abf ovn-installed in OVS
Oct 14 09:20:35 compute-0 ovn_controller[152662]: 2025-10-14T09:20:35Z|01215|binding|INFO|Setting lport 8210d83b-b3db-4515-b65b-c49829132abf up in Southbound
Oct 14 09:20:35 compute-0 nova_compute[259627]: 2025-10-14 09:20:35.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:35 compute-0 nova_compute[259627]: 2025-10-14 09:20:35.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:35 compute-0 NetworkManager[44885]: <info>  [1760433635.6797] device (tap8210d83b-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:20:35 compute-0 NetworkManager[44885]: <info>  [1760433635.6813] device (tap8210d83b-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.682 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb3d68c-2017-404b-a763-e27c08978cc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:35 compute-0 systemd-machined[214636]: New machine qemu-145-instance-00000073.
Oct 14 09:20:35 compute-0 systemd[1]: Started Virtual Machine qemu-145-instance-00000073.
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.710 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3d0c11-92d7-4e5b-bf5e-f4b93d9bc87a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.750 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b586f3-63b0-400f-816a-dc3aa64db012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:35 compute-0 systemd-udevd[373900]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.758 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[07300d39-5c38-4381-91cb-0ad2741e0138]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:35 compute-0 NetworkManager[44885]: <info>  [1760433635.7603] manager: (tap61c1acdc-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/496)
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.810 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b7df7a40-1d40-4704-96c6-1865c64ddfa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.814 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[eaf098aa-6a39-4446-89bc-ab088e173102]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:35 compute-0 NetworkManager[44885]: <info>  [1760433635.8374] device (tap61c1acdc-e0): carrier: link connected
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.845 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[70b73daf-2c7a-4bf7-8bd2-6bb9cdf4be8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.865 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4ff41c-15e6-430f-bfad-7bd16b839760]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61c1acdc-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:79:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 347], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741459, 'reachable_time': 28535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373929, 'error': None, 'target': 'ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.882 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[14303616-9f21-4b18-bfc7-d249785e2a43]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:79bd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 741459, 'tstamp': 741459}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373930, 'error': None, 'target': 'ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.903 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c95ea6b-42fa-4047-83d0-8bb48588f6b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61c1acdc-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:79:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 347], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741459, 'reachable_time': 28535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373931, 'error': None, 'target': 'ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.929 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[051cea1c-2f87-4f9d-91bd-a91ccdd72519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.988 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[064966cd-b4b4-4c18-8889-9da5acc02347]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.989 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61c1acdc-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.990 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.990 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61c1acdc-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:35 compute-0 kernel: tap61c1acdc-e0: entered promiscuous mode
Oct 14 09:20:35 compute-0 NetworkManager[44885]: <info>  [1760433635.9924] manager: (tap61c1acdc-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/497)
Oct 14 09:20:35 compute-0 nova_compute[259627]: 2025-10-14 09:20:35.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:35.995 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61c1acdc-e0, col_values=(('external_ids', {'iface-id': '6f7bf4a5-58bf-4ec8-a189-0a9624df5601'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:35 compute-0 ovn_controller[152662]: 2025-10-14T09:20:35Z|01216|binding|INFO|Releasing lport 6f7bf4a5-58bf-4ec8-a189-0a9624df5601 from this chassis (sb_readonly=0)
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:36.017 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/61c1acdc-e817-4d26-8900-47d35332175a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/61c1acdc-e817-4d26-8900-47d35332175a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:36.019 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4485ffdc-09d4-4c21-bbfb-bbcb64f13778]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:36.020 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-61c1acdc-e817-4d26-8900-47d35332175a
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/61c1acdc-e817-4d26-8900-47d35332175a.pid.haproxy
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 61c1acdc-e817-4d26-8900-47d35332175a
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:20:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:36.020 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a', 'env', 'PROCESS_TAG=haproxy-61c1acdc-e817-4d26-8900-47d35332175a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/61c1acdc-e817-4d26-8900-47d35332175a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.066 2 DEBUG nova.compute.manager [req-0cbca1a2-be6b-4386-96ea-fd7c6b5c76d4 req-7993126f-db32-4242-b2fc-0bfa64ee11d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Received event network-vif-plugged-8210d83b-b3db-4515-b65b-c49829132abf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.066 2 DEBUG oslo_concurrency.lockutils [req-0cbca1a2-be6b-4386-96ea-fd7c6b5c76d4 req-7993126f-db32-4242-b2fc-0bfa64ee11d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6a505551-bc3f-4254-966f-ca344358f8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.067 2 DEBUG oslo_concurrency.lockutils [req-0cbca1a2-be6b-4386-96ea-fd7c6b5c76d4 req-7993126f-db32-4242-b2fc-0bfa64ee11d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6a505551-bc3f-4254-966f-ca344358f8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.067 2 DEBUG oslo_concurrency.lockutils [req-0cbca1a2-be6b-4386-96ea-fd7c6b5c76d4 req-7993126f-db32-4242-b2fc-0bfa64ee11d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6a505551-bc3f-4254-966f-ca344358f8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.067 2 DEBUG nova.compute.manager [req-0cbca1a2-be6b-4386-96ea-fd7c6b5c76d4 req-7993126f-db32-4242-b2fc-0bfa64ee11d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Processing event network-vif-plugged-8210d83b-b3db-4515-b65b-c49829132abf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:20:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 134 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 14 09:20:36 compute-0 podman[374005]: 2025-10-14 09:20:36.416242536 +0000 UTC m=+0.063847964 container create b08b0b04957369face77d2093f9bebc474d9ba54dbed95cfccf9a0230a177d1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:20:36 compute-0 systemd[1]: Started libpod-conmon-b08b0b04957369face77d2093f9bebc474d9ba54dbed95cfccf9a0230a177d1f.scope.
Oct 14 09:20:36 compute-0 podman[374005]: 2025-10-14 09:20:36.38107824 +0000 UTC m=+0.028683688 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:20:36 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:20:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c345a7d17434a672ce527f345c9de737a5f5d6b44645de6f5bf9601a24428d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:20:36 compute-0 podman[374005]: 2025-10-14 09:20:36.498684238 +0000 UTC m=+0.146289686 container init b08b0b04957369face77d2093f9bebc474d9ba54dbed95cfccf9a0230a177d1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:20:36 compute-0 podman[374005]: 2025-10-14 09:20:36.508309255 +0000 UTC m=+0.155914683 container start b08b0b04957369face77d2093f9bebc474d9ba54dbed95cfccf9a0230a177d1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.525 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.525 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.526 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:20:36 compute-0 neutron-haproxy-ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a[374021]: [NOTICE]   (374025) : New worker (374027) forked
Oct 14 09:20:36 compute-0 neutron-haproxy-ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a[374021]: [NOTICE]   (374025) : Loading success.
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.540 2 DEBUG nova.compute.manager [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.541 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433636.541615, 6a505551-bc3f-4254-966f-ca344358f8ac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.542 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] VM Started (Lifecycle Event)
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.547 2 DEBUG nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.550 2 INFO nova.virt.libvirt.driver [-] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Instance spawned successfully.
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.550 2 DEBUG nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.581 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.584 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.593 2 DEBUG nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.593 2 DEBUG nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.593 2 DEBUG nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.594 2 DEBUG nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.594 2 DEBUG nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.594 2 DEBUG nova.virt.libvirt.driver [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.603 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.603 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433636.5425496, 6a505551-bc3f-4254-966f-ca344358f8ac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.603 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] VM Paused (Lifecycle Event)
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.626 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.628 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433636.5456612, 6a505551-bc3f-4254-966f-ca344358f8ac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.629 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] VM Resumed (Lifecycle Event)
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.651 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.655 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.664 2 INFO nova.compute.manager [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Took 7.60 seconds to spawn the instance on the hypervisor.
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.665 2 DEBUG nova.compute.manager [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.675 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.731 2 INFO nova.compute.manager [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Took 8.68 seconds to build instance.
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.747 2 DEBUG oslo_concurrency.lockutils [None req-1dd4263c-892d-42cb-ae30-0f940010ef2c 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "6a505551-bc3f-4254-966f-ca344358f8ac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:36 compute-0 sshd-session[373488]: Invalid user admin from 188.150.249.96 port 52486
Oct 14 09:20:36 compute-0 nova_compute[259627]: 2025-10-14 09:20:36.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:20:37 compute-0 sshd-session[373488]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:20:37 compute-0 sshd-session[373488]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96
Oct 14 09:20:37 compute-0 ceph-mon[74249]: pgmap v2006: 305 pgs: 305 active+clean; 134 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 14 09:20:37 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 14 09:20:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:20:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 134 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 14 09:20:38 compute-0 nova_compute[259627]: 2025-10-14 09:20:38.171 2 DEBUG nova.compute.manager [req-41c9c687-2e0e-4a8b-adab-539c22de963e req-d1686fb6-3fdc-4f88-aaaf-36e8e459e7c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Received event network-vif-plugged-8210d83b-b3db-4515-b65b-c49829132abf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:38 compute-0 nova_compute[259627]: 2025-10-14 09:20:38.172 2 DEBUG oslo_concurrency.lockutils [req-41c9c687-2e0e-4a8b-adab-539c22de963e req-d1686fb6-3fdc-4f88-aaaf-36e8e459e7c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6a505551-bc3f-4254-966f-ca344358f8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:38 compute-0 nova_compute[259627]: 2025-10-14 09:20:38.172 2 DEBUG oslo_concurrency.lockutils [req-41c9c687-2e0e-4a8b-adab-539c22de963e req-d1686fb6-3fdc-4f88-aaaf-36e8e459e7c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6a505551-bc3f-4254-966f-ca344358f8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:38 compute-0 nova_compute[259627]: 2025-10-14 09:20:38.173 2 DEBUG oslo_concurrency.lockutils [req-41c9c687-2e0e-4a8b-adab-539c22de963e req-d1686fb6-3fdc-4f88-aaaf-36e8e459e7c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6a505551-bc3f-4254-966f-ca344358f8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:38 compute-0 nova_compute[259627]: 2025-10-14 09:20:38.173 2 DEBUG nova.compute.manager [req-41c9c687-2e0e-4a8b-adab-539c22de963e req-d1686fb6-3fdc-4f88-aaaf-36e8e459e7c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] No waiting events found dispatching network-vif-plugged-8210d83b-b3db-4515-b65b-c49829132abf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:20:38 compute-0 nova_compute[259627]: 2025-10-14 09:20:38.174 2 WARNING nova.compute.manager [req-41c9c687-2e0e-4a8b-adab-539c22de963e req-d1686fb6-3fdc-4f88-aaaf-36e8e459e7c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Received unexpected event network-vif-plugged-8210d83b-b3db-4515-b65b-c49829132abf for instance with vm_state active and task_state None.
Oct 14 09:20:38 compute-0 ovn_controller[152662]: 2025-10-14T09:20:38Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5a:cb:1b 10.100.0.8
Oct 14 09:20:38 compute-0 ovn_controller[152662]: 2025-10-14T09:20:38Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5a:cb:1b 10.100.0.8
Oct 14 09:20:38 compute-0 nova_compute[259627]: 2025-10-14 09:20:38.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:20:38 compute-0 nova_compute[259627]: 2025-10-14 09:20:38.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:20:39 compute-0 nova_compute[259627]: 2025-10-14 09:20:39.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:39 compute-0 sudo[374036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:20:39 compute-0 sudo[374036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:20:39 compute-0 sudo[374036]: pam_unix(sudo:session): session closed for user root
Oct 14 09:20:39 compute-0 sshd-session[373488]: Failed password for invalid user admin from 188.150.249.96 port 52486 ssh2
Oct 14 09:20:39 compute-0 sudo[374072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:20:39 compute-0 sudo[374072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:20:39 compute-0 sudo[374072]: pam_unix(sudo:session): session closed for user root
Oct 14 09:20:39 compute-0 podman[374061]: 2025-10-14 09:20:39.586912902 +0000 UTC m=+0.094965942 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:20:39 compute-0 sudo[374125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:20:39 compute-0 sudo[374125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:20:39 compute-0 sudo[374125]: pam_unix(sudo:session): session closed for user root
Oct 14 09:20:39 compute-0 podman[374060]: 2025-10-14 09:20:39.640761299 +0000 UTC m=+0.163600723 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 09:20:39 compute-0 ceph-mon[74249]: pgmap v2007: 305 pgs: 305 active+clean; 134 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 14 09:20:39 compute-0 nova_compute[259627]: 2025-10-14 09:20:39.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:39 compute-0 sudo[374153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:20:39 compute-0 sudo[374153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:20:39 compute-0 nova_compute[259627]: 2025-10-14 09:20:39.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:20:39 compute-0 nova_compute[259627]: 2025-10-14 09:20:39.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:20:39 compute-0 nova_compute[259627]: 2025-10-14 09:20:39.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:20:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 147 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.5 MiB/s wr, 154 op/s
Oct 14 09:20:40 compute-0 sudo[374153]: pam_unix(sudo:session): session closed for user root
Oct 14 09:20:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:20:40 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:20:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:20:40 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:20:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:20:40 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:20:40 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 8be9df0c-9118-4e29-b85d-9e6996d6472a does not exist
Oct 14 09:20:40 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 5099644e-1256-4de2-b134-f9366d5d1d75 does not exist
Oct 14 09:20:40 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 11da2a06-60c1-4720-86f5-3918a51e269c does not exist
Oct 14 09:20:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:20:40 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:20:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:20:40 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:20:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:20:40 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:20:40 compute-0 sudo[374209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:20:40 compute-0 sudo[374209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:20:40 compute-0 sudo[374209]: pam_unix(sudo:session): session closed for user root
Oct 14 09:20:40 compute-0 sudo[374234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:20:40 compute-0 sudo[374234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:20:40 compute-0 sudo[374234]: pam_unix(sudo:session): session closed for user root
Oct 14 09:20:40 compute-0 nova_compute[259627]: 2025-10-14 09:20:40.510 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:20:40 compute-0 nova_compute[259627]: 2025-10-14 09:20:40.511 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:20:40 compute-0 nova_compute[259627]: 2025-10-14 09:20:40.511 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:20:40 compute-0 nova_compute[259627]: 2025-10-14 09:20:40.511 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fd5669ba-0261-423e-8586-66c91ff570a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:20:40 compute-0 sudo[374259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:20:40 compute-0 sudo[374259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:20:40 compute-0 sudo[374259]: pam_unix(sudo:session): session closed for user root
Oct 14 09:20:40 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:20:40 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:20:40 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:20:40 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:20:40 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:20:40 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:20:40 compute-0 sudo[374284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:20:40 compute-0 sudo[374284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:20:40 compute-0 nova_compute[259627]: 2025-10-14 09:20:40.783 2 DEBUG nova.compute.manager [req-0e5658db-e245-4fce-89fe-d8bdd0f1e7f9 req-98f87f63-81d7-4c29-98d9-99d98c56ac7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Received event network-changed-8210d83b-b3db-4515-b65b-c49829132abf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:40 compute-0 nova_compute[259627]: 2025-10-14 09:20:40.783 2 DEBUG nova.compute.manager [req-0e5658db-e245-4fce-89fe-d8bdd0f1e7f9 req-98f87f63-81d7-4c29-98d9-99d98c56ac7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Refreshing instance network info cache due to event network-changed-8210d83b-b3db-4515-b65b-c49829132abf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:20:40 compute-0 nova_compute[259627]: 2025-10-14 09:20:40.784 2 DEBUG oslo_concurrency.lockutils [req-0e5658db-e245-4fce-89fe-d8bdd0f1e7f9 req-98f87f63-81d7-4c29-98d9-99d98c56ac7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-6a505551-bc3f-4254-966f-ca344358f8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:20:40 compute-0 nova_compute[259627]: 2025-10-14 09:20:40.784 2 DEBUG oslo_concurrency.lockutils [req-0e5658db-e245-4fce-89fe-d8bdd0f1e7f9 req-98f87f63-81d7-4c29-98d9-99d98c56ac7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-6a505551-bc3f-4254-966f-ca344358f8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:20:40 compute-0 nova_compute[259627]: 2025-10-14 09:20:40.785 2 DEBUG nova.network.neutron [req-0e5658db-e245-4fce-89fe-d8bdd0f1e7f9 req-98f87f63-81d7-4c29-98d9-99d98c56ac7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Refreshing network info cache for port 8210d83b-b3db-4515-b65b-c49829132abf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:20:41 compute-0 podman[374349]: 2025-10-14 09:20:41.066060958 +0000 UTC m=+0.068704044 container create 974ddd03c44f23f1468d3120d567b96ed6706e185f46ef317f3fad79978dc2a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_cray, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 09:20:41 compute-0 systemd[1]: Started libpod-conmon-974ddd03c44f23f1468d3120d567b96ed6706e185f46ef317f3fad79978dc2a1.scope.
Oct 14 09:20:41 compute-0 podman[374349]: 2025-10-14 09:20:41.039942534 +0000 UTC m=+0.042585690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:20:41 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:20:41 compute-0 podman[374349]: 2025-10-14 09:20:41.166875863 +0000 UTC m=+0.169518979 container init 974ddd03c44f23f1468d3120d567b96ed6706e185f46ef317f3fad79978dc2a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:20:41 compute-0 podman[374349]: 2025-10-14 09:20:41.176252164 +0000 UTC m=+0.178895230 container start 974ddd03c44f23f1468d3120d567b96ed6706e185f46ef317f3fad79978dc2a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_cray, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:20:41 compute-0 podman[374349]: 2025-10-14 09:20:41.179823082 +0000 UTC m=+0.182466248 container attach 974ddd03c44f23f1468d3120d567b96ed6706e185f46ef317f3fad79978dc2a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_cray, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:20:41 compute-0 gallant_cray[374365]: 167 167
Oct 14 09:20:41 compute-0 systemd[1]: libpod-974ddd03c44f23f1468d3120d567b96ed6706e185f46ef317f3fad79978dc2a1.scope: Deactivated successfully.
Oct 14 09:20:41 compute-0 podman[374349]: 2025-10-14 09:20:41.189417878 +0000 UTC m=+0.192060934 container died 974ddd03c44f23f1468d3120d567b96ed6706e185f46ef317f3fad79978dc2a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_cray, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:20:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-db5c4d5f15e0fb9ec132333636053b83047fc3e7783f037842b2abf4ddfd319f-merged.mount: Deactivated successfully.
Oct 14 09:20:41 compute-0 podman[374349]: 2025-10-14 09:20:41.240276642 +0000 UTC m=+0.242919748 container remove 974ddd03c44f23f1468d3120d567b96ed6706e185f46ef317f3fad79978dc2a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_cray, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 09:20:41 compute-0 systemd[1]: libpod-conmon-974ddd03c44f23f1468d3120d567b96ed6706e185f46ef317f3fad79978dc2a1.scope: Deactivated successfully.
Oct 14 09:20:41 compute-0 podman[374390]: 2025-10-14 09:20:41.434261573 +0000 UTC m=+0.047039450 container create ea67e73d20392f5aff93eaf967f097bf9f2f6e40545681b71c12f57dc48e4344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 09:20:41 compute-0 systemd[1]: Started libpod-conmon-ea67e73d20392f5aff93eaf967f097bf9f2f6e40545681b71c12f57dc48e4344.scope.
Oct 14 09:20:41 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a87ce5a15fa09fb3afe704e645fc71e0cfb86251b973672d002776dd1d0de5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a87ce5a15fa09fb3afe704e645fc71e0cfb86251b973672d002776dd1d0de5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a87ce5a15fa09fb3afe704e645fc71e0cfb86251b973672d002776dd1d0de5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a87ce5a15fa09fb3afe704e645fc71e0cfb86251b973672d002776dd1d0de5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a87ce5a15fa09fb3afe704e645fc71e0cfb86251b973672d002776dd1d0de5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:20:41 compute-0 sshd-session[373488]: Connection closed by invalid user admin 188.150.249.96 port 52486 [preauth]
Oct 14 09:20:41 compute-0 podman[374390]: 2025-10-14 09:20:41.413654555 +0000 UTC m=+0.026432442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:20:41 compute-0 podman[374390]: 2025-10-14 09:20:41.537916358 +0000 UTC m=+0.150694265 container init ea67e73d20392f5aff93eaf967f097bf9f2f6e40545681b71c12f57dc48e4344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swanson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 09:20:41 compute-0 podman[374390]: 2025-10-14 09:20:41.54976894 +0000 UTC m=+0.162546847 container start ea67e73d20392f5aff93eaf967f097bf9f2f6e40545681b71c12f57dc48e4344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swanson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:20:41 compute-0 podman[374390]: 2025-10-14 09:20:41.553870271 +0000 UTC m=+0.166648188 container attach ea67e73d20392f5aff93eaf967f097bf9f2f6e40545681b71c12f57dc48e4344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:20:41 compute-0 ceph-mon[74249]: pgmap v2008: 305 pgs: 305 active+clean; 147 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.5 MiB/s wr, 154 op/s
Oct 14 09:20:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 167 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.1 MiB/s wr, 201 op/s
Oct 14 09:20:42 compute-0 nova_compute[259627]: 2025-10-14 09:20:42.320 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Updating instance_info_cache with network_info: [{"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:20:42 compute-0 nova_compute[259627]: 2025-10-14 09:20:42.348 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:20:42 compute-0 nova_compute[259627]: 2025-10-14 09:20:42.349 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:20:42 compute-0 nova_compute[259627]: 2025-10-14 09:20:42.454 2 DEBUG nova.network.neutron [req-0e5658db-e245-4fce-89fe-d8bdd0f1e7f9 req-98f87f63-81d7-4c29-98d9-99d98c56ac7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Updated VIF entry in instance network info cache for port 8210d83b-b3db-4515-b65b-c49829132abf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:20:42 compute-0 nova_compute[259627]: 2025-10-14 09:20:42.456 2 DEBUG nova.network.neutron [req-0e5658db-e245-4fce-89fe-d8bdd0f1e7f9 req-98f87f63-81d7-4c29-98d9-99d98c56ac7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Updating instance_info_cache with network_info: [{"id": "8210d83b-b3db-4515-b65b-c49829132abf", "address": "fa:16:3e:b3:90:f9", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8210d83b-b3", "ovs_interfaceid": "8210d83b-b3db-4515-b65b-c49829132abf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:20:42 compute-0 nova_compute[259627]: 2025-10-14 09:20:42.489 2 DEBUG oslo_concurrency.lockutils [req-0e5658db-e245-4fce-89fe-d8bdd0f1e7f9 req-98f87f63-81d7-4c29-98d9-99d98c56ac7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-6a505551-bc3f-4254-966f-ca344358f8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:20:42 compute-0 festive_swanson[374406]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:20:42 compute-0 festive_swanson[374406]: --> relative data size: 1.0
Oct 14 09:20:42 compute-0 festive_swanson[374406]: --> All data devices are unavailable
Oct 14 09:20:42 compute-0 systemd[1]: libpod-ea67e73d20392f5aff93eaf967f097bf9f2f6e40545681b71c12f57dc48e4344.scope: Deactivated successfully.
Oct 14 09:20:42 compute-0 podman[374390]: 2025-10-14 09:20:42.675051943 +0000 UTC m=+1.287829820 container died ea67e73d20392f5aff93eaf967f097bf9f2f6e40545681b71c12f57dc48e4344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swanson, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:20:42 compute-0 systemd[1]: libpod-ea67e73d20392f5aff93eaf967f097bf9f2f6e40545681b71c12f57dc48e4344.scope: Consumed 1.063s CPU time.
Oct 14 09:20:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3a87ce5a15fa09fb3afe704e645fc71e0cfb86251b973672d002776dd1d0de5-merged.mount: Deactivated successfully.
Oct 14 09:20:42 compute-0 podman[374390]: 2025-10-14 09:20:42.744433673 +0000 UTC m=+1.357211580 container remove ea67e73d20392f5aff93eaf967f097bf9f2f6e40545681b71c12f57dc48e4344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 09:20:42 compute-0 sudo[374284]: pam_unix(sudo:session): session closed for user root
Oct 14 09:20:42 compute-0 systemd[1]: libpod-conmon-ea67e73d20392f5aff93eaf967f097bf9f2f6e40545681b71c12f57dc48e4344.scope: Deactivated successfully.
Oct 14 09:20:42 compute-0 sudo[374449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:20:42 compute-0 sudo[374449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:20:42 compute-0 sudo[374449]: pam_unix(sudo:session): session closed for user root
Oct 14 09:20:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:20:42 compute-0 sudo[374474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:20:42 compute-0 sudo[374474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:20:42 compute-0 sudo[374474]: pam_unix(sudo:session): session closed for user root
Oct 14 09:20:42 compute-0 nova_compute[259627]: 2025-10-14 09:20:42.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:20:43 compute-0 sudo[374499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:20:43 compute-0 sudo[374499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:20:43 compute-0 sudo[374499]: pam_unix(sudo:session): session closed for user root
Oct 14 09:20:43 compute-0 sudo[374524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:20:43 compute-0 sudo[374524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011050157297974865 of space, bias 1.0, pg target 0.33150471893924593 quantized to 32 (current 32)
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:20:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:20:43 compute-0 podman[374588]: 2025-10-14 09:20:43.548559782 +0000 UTC m=+0.045211895 container create 1055f3e54d542a3138dcbdca8f5a235188d1f5af9603f5ea1181c3a77040dd52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:20:43 compute-0 systemd[1]: Started libpod-conmon-1055f3e54d542a3138dcbdca8f5a235188d1f5af9603f5ea1181c3a77040dd52.scope.
Oct 14 09:20:43 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:20:43 compute-0 podman[374588]: 2025-10-14 09:20:43.533286606 +0000 UTC m=+0.029938739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:20:43 compute-0 podman[374588]: 2025-10-14 09:20:43.640656902 +0000 UTC m=+0.137309075 container init 1055f3e54d542a3138dcbdca8f5a235188d1f5af9603f5ea1181c3a77040dd52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:20:43 compute-0 podman[374588]: 2025-10-14 09:20:43.652973566 +0000 UTC m=+0.149625689 container start 1055f3e54d542a3138dcbdca8f5a235188d1f5af9603f5ea1181c3a77040dd52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:20:43 compute-0 podman[374588]: 2025-10-14 09:20:43.656154474 +0000 UTC m=+0.152806677 container attach 1055f3e54d542a3138dcbdca8f5a235188d1f5af9603f5ea1181c3a77040dd52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:20:43 compute-0 tender_easley[374605]: 167 167
Oct 14 09:20:43 compute-0 systemd[1]: libpod-1055f3e54d542a3138dcbdca8f5a235188d1f5af9603f5ea1181c3a77040dd52.scope: Deactivated successfully.
Oct 14 09:20:43 compute-0 podman[374588]: 2025-10-14 09:20:43.658684357 +0000 UTC m=+0.155336480 container died 1055f3e54d542a3138dcbdca8f5a235188d1f5af9603f5ea1181c3a77040dd52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 14 09:20:43 compute-0 ceph-mon[74249]: pgmap v2009: 305 pgs: 305 active+clean; 167 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.1 MiB/s wr, 201 op/s
Oct 14 09:20:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4567b02942d6194ac6070e014f65a191df2fee240fb2fab7af93f3d48d3c3f5-merged.mount: Deactivated successfully.
Oct 14 09:20:43 compute-0 podman[374588]: 2025-10-14 09:20:43.700411585 +0000 UTC m=+0.197063698 container remove 1055f3e54d542a3138dcbdca8f5a235188d1f5af9603f5ea1181c3a77040dd52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 09:20:43 compute-0 systemd[1]: libpod-conmon-1055f3e54d542a3138dcbdca8f5a235188d1f5af9603f5ea1181c3a77040dd52.scope: Deactivated successfully.
Oct 14 09:20:43 compute-0 podman[374629]: 2025-10-14 09:20:43.927367079 +0000 UTC m=+0.066919331 container create b16ac8d47b11b283c75fdd7c7344be71a60c6705e8d321dd5059ef2c117a7e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poincare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 09:20:43 compute-0 systemd[1]: Started libpod-conmon-b16ac8d47b11b283c75fdd7c7344be71a60c6705e8d321dd5059ef2c117a7e0c.scope.
Oct 14 09:20:43 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:20:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27761eb6c582f6b679c272a298ee329514107b88bdfeabf5b19b2223c5bcff01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:20:43 compute-0 podman[374629]: 2025-10-14 09:20:43.906477384 +0000 UTC m=+0.046029656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:20:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27761eb6c582f6b679c272a298ee329514107b88bdfeabf5b19b2223c5bcff01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:20:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27761eb6c582f6b679c272a298ee329514107b88bdfeabf5b19b2223c5bcff01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:20:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27761eb6c582f6b679c272a298ee329514107b88bdfeabf5b19b2223c5bcff01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:20:44 compute-0 podman[374629]: 2025-10-14 09:20:44.024235376 +0000 UTC m=+0.163787638 container init b16ac8d47b11b283c75fdd7c7344be71a60c6705e8d321dd5059ef2c117a7e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:20:44 compute-0 podman[374629]: 2025-10-14 09:20:44.035029692 +0000 UTC m=+0.174581944 container start b16ac8d47b11b283c75fdd7c7344be71a60c6705e8d321dd5059ef2c117a7e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poincare, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:20:44 compute-0 podman[374629]: 2025-10-14 09:20:44.038911448 +0000 UTC m=+0.178463800 container attach b16ac8d47b11b283c75fdd7c7344be71a60c6705e8d321dd5059ef2c117a7e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 09:20:44 compute-0 nova_compute[259627]: 2025-10-14 09:20:44.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 167 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Oct 14 09:20:44 compute-0 nova_compute[259627]: 2025-10-14 09:20:44.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]: {
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:     "0": [
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:         {
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "devices": [
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "/dev/loop3"
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             ],
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "lv_name": "ceph_lv0",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "lv_size": "21470642176",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "name": "ceph_lv0",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "tags": {
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.cluster_name": "ceph",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.crush_device_class": "",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.encrypted": "0",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.osd_id": "0",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.type": "block",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.vdo": "0"
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             },
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "type": "block",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "vg_name": "ceph_vg0"
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:         }
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:     ],
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:     "1": [
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:         {
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "devices": [
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "/dev/loop4"
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             ],
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "lv_name": "ceph_lv1",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "lv_size": "21470642176",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "name": "ceph_lv1",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "tags": {
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.cluster_name": "ceph",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.crush_device_class": "",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.encrypted": "0",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.osd_id": "1",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.type": "block",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.vdo": "0"
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             },
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "type": "block",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "vg_name": "ceph_vg1"
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:         }
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:     ],
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:     "2": [
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:         {
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "devices": [
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "/dev/loop5"
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             ],
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "lv_name": "ceph_lv2",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "lv_size": "21470642176",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "name": "ceph_lv2",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "tags": {
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.cluster_name": "ceph",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.crush_device_class": "",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.encrypted": "0",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.osd_id": "2",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.type": "block",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:                 "ceph.vdo": "0"
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             },
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "type": "block",
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:             "vg_name": "ceph_vg2"
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:         }
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]:     ]
Oct 14 09:20:44 compute-0 vibrant_poincare[374645]: }
Oct 14 09:20:44 compute-0 systemd[1]: libpod-b16ac8d47b11b283c75fdd7c7344be71a60c6705e8d321dd5059ef2c117a7e0c.scope: Deactivated successfully.
Oct 14 09:20:44 compute-0 podman[374629]: 2025-10-14 09:20:44.870422692 +0000 UTC m=+1.009974944 container died b16ac8d47b11b283c75fdd7c7344be71a60c6705e8d321dd5059ef2c117a7e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 09:20:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-27761eb6c582f6b679c272a298ee329514107b88bdfeabf5b19b2223c5bcff01-merged.mount: Deactivated successfully.
Oct 14 09:20:44 compute-0 podman[374629]: 2025-10-14 09:20:44.95228296 +0000 UTC m=+1.091835222 container remove b16ac8d47b11b283c75fdd7c7344be71a60c6705e8d321dd5059ef2c117a7e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poincare, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:20:44 compute-0 systemd[1]: libpod-conmon-b16ac8d47b11b283c75fdd7c7344be71a60c6705e8d321dd5059ef2c117a7e0c.scope: Deactivated successfully.
Oct 14 09:20:45 compute-0 sudo[374524]: pam_unix(sudo:session): session closed for user root
Oct 14 09:20:45 compute-0 sudo[374666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:20:45 compute-0 sudo[374666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:20:45 compute-0 sudo[374666]: pam_unix(sudo:session): session closed for user root
Oct 14 09:20:45 compute-0 sudo[374691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:20:45 compute-0 sudo[374691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:20:45 compute-0 sudo[374691]: pam_unix(sudo:session): session closed for user root
Oct 14 09:20:45 compute-0 sudo[374716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:20:45 compute-0 sudo[374716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:20:45 compute-0 sudo[374716]: pam_unix(sudo:session): session closed for user root
Oct 14 09:20:45 compute-0 sudo[374741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:20:45 compute-0 sudo[374741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:20:45 compute-0 ceph-mon[74249]: pgmap v2010: 305 pgs: 305 active+clean; 167 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:20:45.683802) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433645683842, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 739, "num_deletes": 251, "total_data_size": 898347, "memory_usage": 911880, "flush_reason": "Manual Compaction"}
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433645690354, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 889566, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41771, "largest_seqno": 42509, "table_properties": {"data_size": 885750, "index_size": 1596, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8617, "raw_average_key_size": 19, "raw_value_size": 878146, "raw_average_value_size": 1977, "num_data_blocks": 72, "num_entries": 444, "num_filter_entries": 444, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760433583, "oldest_key_time": 1760433583, "file_creation_time": 1760433645, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 6580 microseconds, and 3500 cpu microseconds.
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:20:45.690382) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 889566 bytes OK
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:20:45.690397) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:20:45.692540) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:20:45.692552) EVENT_LOG_v1 {"time_micros": 1760433645692548, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:20:45.692565) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 894555, prev total WAL file size 894555, number of live WAL files 2.
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:20:45.692996) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(868KB)], [92(10MB)]
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433645693047, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 11967063, "oldest_snapshot_seqno": -1}
Oct 14 09:20:45 compute-0 podman[374806]: 2025-10-14 09:20:45.693140529 +0000 UTC m=+0.052575216 container create 5d982766181de9b6b76be2dc0fb4326f83db7db19fa67a985e758ac7c6cdc670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6653 keys, 10361889 bytes, temperature: kUnknown
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433645738961, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 10361889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10315083, "index_size": 29094, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 170830, "raw_average_key_size": 25, "raw_value_size": 10193497, "raw_average_value_size": 1532, "num_data_blocks": 1149, "num_entries": 6653, "num_filter_entries": 6653, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760433645, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:20:45.739339) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 10361889 bytes
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:20:45.740735) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 259.9 rd, 225.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.6 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(25.1) write-amplify(11.6) OK, records in: 7164, records dropped: 511 output_compression: NoCompression
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:20:45.740772) EVENT_LOG_v1 {"time_micros": 1760433645740755, "job": 54, "event": "compaction_finished", "compaction_time_micros": 46051, "compaction_time_cpu_micros": 22659, "output_level": 6, "num_output_files": 1, "total_output_size": 10361889, "num_input_records": 7164, "num_output_records": 6653, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433645741310, "job": 54, "event": "table_file_deletion", "file_number": 94}
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433645745185, "job": 54, "event": "table_file_deletion", "file_number": 92}
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:20:45.692920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:20:45.745251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:20:45.745256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:20:45.745258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:20:45.745260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:20:45 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:20:45.745262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:20:45 compute-0 systemd[1]: Started libpod-conmon-5d982766181de9b6b76be2dc0fb4326f83db7db19fa67a985e758ac7c6cdc670.scope.
Oct 14 09:20:45 compute-0 podman[374806]: 2025-10-14 09:20:45.669797254 +0000 UTC m=+0.029231961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:20:45 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:20:45 compute-0 podman[374806]: 2025-10-14 09:20:45.804270278 +0000 UTC m=+0.163704975 container init 5d982766181de9b6b76be2dc0fb4326f83db7db19fa67a985e758ac7c6cdc670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 09:20:45 compute-0 podman[374806]: 2025-10-14 09:20:45.812837809 +0000 UTC m=+0.172272476 container start 5d982766181de9b6b76be2dc0fb4326f83db7db19fa67a985e758ac7c6cdc670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 09:20:45 compute-0 podman[374806]: 2025-10-14 09:20:45.816614123 +0000 UTC m=+0.176048820 container attach 5d982766181de9b6b76be2dc0fb4326f83db7db19fa67a985e758ac7c6cdc670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 14 09:20:45 compute-0 pensive_antonelli[374823]: 167 167
Oct 14 09:20:45 compute-0 systemd[1]: libpod-5d982766181de9b6b76be2dc0fb4326f83db7db19fa67a985e758ac7c6cdc670.scope: Deactivated successfully.
Oct 14 09:20:45 compute-0 podman[374828]: 2025-10-14 09:20:45.865141169 +0000 UTC m=+0.028400341 container died 5d982766181de9b6b76be2dc0fb4326f83db7db19fa67a985e758ac7c6cdc670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:20:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-21284f0051162ed84d02f2146a510fa0452b7ce1f025422795e26a99e37c88d9-merged.mount: Deactivated successfully.
Oct 14 09:20:45 compute-0 podman[374828]: 2025-10-14 09:20:45.915829098 +0000 UTC m=+0.079088300 container remove 5d982766181de9b6b76be2dc0fb4326f83db7db19fa67a985e758ac7c6cdc670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:20:45 compute-0 systemd[1]: libpod-conmon-5d982766181de9b6b76be2dc0fb4326f83db7db19fa67a985e758ac7c6cdc670.scope: Deactivated successfully.
Oct 14 09:20:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 167 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Oct 14 09:20:46 compute-0 podman[374851]: 2025-10-14 09:20:46.221111832 +0000 UTC m=+0.083644442 container create af623fa191c7f69963655d6dc8edb1e220828b6cae7b65cbe8024220fa445c1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 09:20:46 compute-0 systemd[1]: Started libpod-conmon-af623fa191c7f69963655d6dc8edb1e220828b6cae7b65cbe8024220fa445c1d.scope.
Oct 14 09:20:46 compute-0 podman[374851]: 2025-10-14 09:20:46.189002911 +0000 UTC m=+0.051535551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:20:46 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d04bb9c6b039fefbd3a88fe2d2eb8d4f49d509b72a995b1e9a83def95270877/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d04bb9c6b039fefbd3a88fe2d2eb8d4f49d509b72a995b1e9a83def95270877/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d04bb9c6b039fefbd3a88fe2d2eb8d4f49d509b72a995b1e9a83def95270877/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d04bb9c6b039fefbd3a88fe2d2eb8d4f49d509b72a995b1e9a83def95270877/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:20:46 compute-0 podman[374851]: 2025-10-14 09:20:46.347144927 +0000 UTC m=+0.209677567 container init af623fa191c7f69963655d6dc8edb1e220828b6cae7b65cbe8024220fa445c1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:20:46 compute-0 podman[374851]: 2025-10-14 09:20:46.35740098 +0000 UTC m=+0.219933570 container start af623fa191c7f69963655d6dc8edb1e220828b6cae7b65cbe8024220fa445c1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 14 09:20:46 compute-0 podman[374851]: 2025-10-14 09:20:46.361323587 +0000 UTC m=+0.223856187 container attach af623fa191c7f69963655d6dc8edb1e220828b6cae7b65cbe8024220fa445c1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 09:20:47 compute-0 cool_swirles[374867]: {
Oct 14 09:20:47 compute-0 cool_swirles[374867]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:20:47 compute-0 cool_swirles[374867]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:20:47 compute-0 cool_swirles[374867]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:20:47 compute-0 cool_swirles[374867]:         "osd_id": 2,
Oct 14 09:20:47 compute-0 cool_swirles[374867]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:20:47 compute-0 cool_swirles[374867]:         "type": "bluestore"
Oct 14 09:20:47 compute-0 cool_swirles[374867]:     },
Oct 14 09:20:47 compute-0 cool_swirles[374867]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:20:47 compute-0 cool_swirles[374867]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:20:47 compute-0 cool_swirles[374867]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:20:47 compute-0 cool_swirles[374867]:         "osd_id": 1,
Oct 14 09:20:47 compute-0 cool_swirles[374867]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:20:47 compute-0 cool_swirles[374867]:         "type": "bluestore"
Oct 14 09:20:47 compute-0 cool_swirles[374867]:     },
Oct 14 09:20:47 compute-0 cool_swirles[374867]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:20:47 compute-0 cool_swirles[374867]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:20:47 compute-0 cool_swirles[374867]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:20:47 compute-0 cool_swirles[374867]:         "osd_id": 0,
Oct 14 09:20:47 compute-0 cool_swirles[374867]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:20:47 compute-0 cool_swirles[374867]:         "type": "bluestore"
Oct 14 09:20:47 compute-0 cool_swirles[374867]:     }
Oct 14 09:20:47 compute-0 cool_swirles[374867]: }
Oct 14 09:20:47 compute-0 systemd[1]: libpod-af623fa191c7f69963655d6dc8edb1e220828b6cae7b65cbe8024220fa445c1d.scope: Deactivated successfully.
Oct 14 09:20:47 compute-0 podman[374851]: 2025-10-14 09:20:47.481547527 +0000 UTC m=+1.344080127 container died af623fa191c7f69963655d6dc8edb1e220828b6cae7b65cbe8024220fa445c1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:20:47 compute-0 systemd[1]: libpod-af623fa191c7f69963655d6dc8edb1e220828b6cae7b65cbe8024220fa445c1d.scope: Consumed 1.078s CPU time.
Oct 14 09:20:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d04bb9c6b039fefbd3a88fe2d2eb8d4f49d509b72a995b1e9a83def95270877-merged.mount: Deactivated successfully.
Oct 14 09:20:47 compute-0 podman[374851]: 2025-10-14 09:20:47.552562127 +0000 UTC m=+1.415094727 container remove af623fa191c7f69963655d6dc8edb1e220828b6cae7b65cbe8024220fa445c1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 09:20:47 compute-0 systemd[1]: libpod-conmon-af623fa191c7f69963655d6dc8edb1e220828b6cae7b65cbe8024220fa445c1d.scope: Deactivated successfully.
Oct 14 09:20:47 compute-0 sudo[374741]: pam_unix(sudo:session): session closed for user root
Oct 14 09:20:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:20:47 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:20:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:20:47 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:20:47 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev af16f000-2528-44df-8fa4-8d744df8cefa does not exist
Oct 14 09:20:47 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev aa0dd566-8056-4e23-8677-d259396b2903 does not exist
Oct 14 09:20:47 compute-0 sudo[374915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:20:47 compute-0 sudo[374915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:20:47 compute-0 sudo[374915]: pam_unix(sudo:session): session closed for user root
Oct 14 09:20:47 compute-0 ceph-mon[74249]: pgmap v2011: 305 pgs: 305 active+clean; 167 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Oct 14 09:20:47 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:20:47 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:20:47 compute-0 sudo[374940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:20:47 compute-0 sudo[374940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:20:47 compute-0 sudo[374940]: pam_unix(sudo:session): session closed for user root
Oct 14 09:20:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:20:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 167 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Oct 14 09:20:48 compute-0 ovn_controller[152662]: 2025-10-14T09:20:48Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:90:f9 10.100.0.12
Oct 14 09:20:48 compute-0 ovn_controller[152662]: 2025-10-14T09:20:48Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:90:f9 10.100.0.12
Oct 14 09:20:49 compute-0 nova_compute[259627]: 2025-10-14 09:20:49.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:49 compute-0 nova_compute[259627]: 2025-10-14 09:20:49.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:49 compute-0 ceph-mon[74249]: pgmap v2012: 305 pgs: 305 active+clean; 167 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Oct 14 09:20:49 compute-0 nova_compute[259627]: 2025-10-14 09:20:49.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:20:50 compute-0 nova_compute[259627]: 2025-10-14 09:20:50.020 2 DEBUG oslo_concurrency.lockutils [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquiring lock "fd5669ba-0261-423e-8586-66c91ff570a4" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:50 compute-0 nova_compute[259627]: 2025-10-14 09:20:50.021 2 DEBUG oslo_concurrency.lockutils [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:50 compute-0 nova_compute[259627]: 2025-10-14 09:20:50.022 2 INFO nova.compute.manager [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Shelving
Oct 14 09:20:50 compute-0 nova_compute[259627]: 2025-10-14 09:20:50.048 2 DEBUG nova.virt.libvirt.driver [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 09:20:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 175 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 141 op/s
Oct 14 09:20:50 compute-0 sshd-session[374411]: Invalid user admin from 188.150.249.96 port 55806
Oct 14 09:20:50 compute-0 sshd-session[374411]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:20:50 compute-0 sshd-session[374411]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96
Oct 14 09:20:51 compute-0 ceph-mon[74249]: pgmap v2013: 305 pgs: 305 active+clean; 175 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 141 op/s
Oct 14 09:20:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 200 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 142 op/s
Oct 14 09:20:52 compute-0 kernel: tap8a93d82c-2a (unregistering): left promiscuous mode
Oct 14 09:20:52 compute-0 NetworkManager[44885]: <info>  [1760433652.3712] device (tap8a93d82c-2a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:20:52 compute-0 ovn_controller[152662]: 2025-10-14T09:20:52Z|01217|binding|INFO|Releasing lport 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f from this chassis (sb_readonly=0)
Oct 14 09:20:52 compute-0 ovn_controller[152662]: 2025-10-14T09:20:52Z|01218|binding|INFO|Setting lport 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f down in Southbound
Oct 14 09:20:52 compute-0 nova_compute[259627]: 2025-10-14 09:20:52.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:52 compute-0 ovn_controller[152662]: 2025-10-14T09:20:52Z|01219|binding|INFO|Removing iface tap8a93d82c-2a ovn-installed in OVS
Oct 14 09:20:52 compute-0 nova_compute[259627]: 2025-10-14 09:20:52.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:52.398 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:cb:1b 10.100.0.8'], port_security=['fa:16:3e:5a:cb:1b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fd5669ba-0261-423e-8586-66c91ff570a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c11d5e6-13c9-49c7-982d-8d1198ac7dea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b859f880079e4e6db96cdef422402fa1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb7ff285-1f24-4ca1-a6b2-fb0966d27f7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1272bcfa-9334-4a66-b4ee-2da7a182025f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:20:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:52.401 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f in datapath 6c11d5e6-13c9-49c7-982d-8d1198ac7dea unbound from our chassis
Oct 14 09:20:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:52.402 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c11d5e6-13c9-49c7-982d-8d1198ac7dea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:20:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:52.406 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4a15cde4-d6c3-4c96-aabb-e9ea0028a2bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:52.407 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea namespace which is not needed anymore
Oct 14 09:20:52 compute-0 nova_compute[259627]: 2025-10-14 09:20:52.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:52 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000072.scope: Deactivated successfully.
Oct 14 09:20:52 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000072.scope: Consumed 14.277s CPU time.
Oct 14 09:20:52 compute-0 systemd-machined[214636]: Machine qemu-144-instance-00000072 terminated.
Oct 14 09:20:52 compute-0 neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea[373473]: [NOTICE]   (373477) : haproxy version is 2.8.14-c23fe91
Oct 14 09:20:52 compute-0 neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea[373473]: [NOTICE]   (373477) : path to executable is /usr/sbin/haproxy
Oct 14 09:20:52 compute-0 neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea[373473]: [WARNING]  (373477) : Exiting Master process...
Oct 14 09:20:52 compute-0 neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea[373473]: [ALERT]    (373477) : Current worker (373479) exited with code 143 (Terminated)
Oct 14 09:20:52 compute-0 neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea[373473]: [WARNING]  (373477) : All workers exited. Exiting... (0)
Oct 14 09:20:52 compute-0 systemd[1]: libpod-d1ff4086a427df3d94405a6bb6aa0ee5d96755ccc2b46229d62181d289a6cc0a.scope: Deactivated successfully.
Oct 14 09:20:52 compute-0 podman[374990]: 2025-10-14 09:20:52.565506188 +0000 UTC m=+0.050135157 container died d1ff4086a427df3d94405a6bb6aa0ee5d96755ccc2b46229d62181d289a6cc0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 09:20:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d1ff4086a427df3d94405a6bb6aa0ee5d96755ccc2b46229d62181d289a6cc0a-userdata-shm.mount: Deactivated successfully.
Oct 14 09:20:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b1dc91f394747fc08b06f6f17ab00bb2c39967dc47162c30c76e21bf21f5448-merged.mount: Deactivated successfully.
Oct 14 09:20:52 compute-0 podman[374990]: 2025-10-14 09:20:52.613954112 +0000 UTC m=+0.098583091 container cleanup d1ff4086a427df3d94405a6bb6aa0ee5d96755ccc2b46229d62181d289a6cc0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:20:52 compute-0 systemd[1]: libpod-conmon-d1ff4086a427df3d94405a6bb6aa0ee5d96755ccc2b46229d62181d289a6cc0a.scope: Deactivated successfully.
Oct 14 09:20:52 compute-0 nova_compute[259627]: 2025-10-14 09:20:52.689 2 DEBUG nova.compute.manager [req-a89a4659-ca8d-4fbf-a738-971ab8bd5ba5 req-5e48ce3f-d502-42c6-9c44-ac6cf2f9c906 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received event network-vif-unplugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:52 compute-0 nova_compute[259627]: 2025-10-14 09:20:52.691 2 DEBUG oslo_concurrency.lockutils [req-a89a4659-ca8d-4fbf-a738-971ab8bd5ba5 req-5e48ce3f-d502-42c6-9c44-ac6cf2f9c906 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:52 compute-0 nova_compute[259627]: 2025-10-14 09:20:52.691 2 DEBUG oslo_concurrency.lockutils [req-a89a4659-ca8d-4fbf-a738-971ab8bd5ba5 req-5e48ce3f-d502-42c6-9c44-ac6cf2f9c906 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:52 compute-0 nova_compute[259627]: 2025-10-14 09:20:52.691 2 DEBUG oslo_concurrency.lockutils [req-a89a4659-ca8d-4fbf-a738-971ab8bd5ba5 req-5e48ce3f-d502-42c6-9c44-ac6cf2f9c906 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:52 compute-0 nova_compute[259627]: 2025-10-14 09:20:52.692 2 DEBUG nova.compute.manager [req-a89a4659-ca8d-4fbf-a738-971ab8bd5ba5 req-5e48ce3f-d502-42c6-9c44-ac6cf2f9c906 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] No waiting events found dispatching network-vif-unplugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:20:52 compute-0 nova_compute[259627]: 2025-10-14 09:20:52.692 2 WARNING nova.compute.manager [req-a89a4659-ca8d-4fbf-a738-971ab8bd5ba5 req-5e48ce3f-d502-42c6-9c44-ac6cf2f9c906 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received unexpected event network-vif-unplugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f for instance with vm_state active and task_state shelving.
Oct 14 09:20:52 compute-0 podman[375027]: 2025-10-14 09:20:52.717885213 +0000 UTC m=+0.065023473 container remove d1ff4086a427df3d94405a6bb6aa0ee5d96755ccc2b46229d62181d289a6cc0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 09:20:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:52.726 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[03fac344-fbad-45a5-b4b1-e66d67a0df87]: (4, ('Tue Oct 14 09:20:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea (d1ff4086a427df3d94405a6bb6aa0ee5d96755ccc2b46229d62181d289a6cc0a)\nd1ff4086a427df3d94405a6bb6aa0ee5d96755ccc2b46229d62181d289a6cc0a\nTue Oct 14 09:20:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea (d1ff4086a427df3d94405a6bb6aa0ee5d96755ccc2b46229d62181d289a6cc0a)\nd1ff4086a427df3d94405a6bb6aa0ee5d96755ccc2b46229d62181d289a6cc0a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:52.729 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[78b829bd-8d91-44ff-b49a-964a6ec3253c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:52.730 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c11d5e6-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:20:52 compute-0 nova_compute[259627]: 2025-10-14 09:20:52.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:52 compute-0 kernel: tap6c11d5e6-10: left promiscuous mode
Oct 14 09:20:52 compute-0 nova_compute[259627]: 2025-10-14 09:20:52.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:52.758 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1cf4d592-b87f-4404-932d-55894f176f63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:52.785 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[deb92f60-ce58-4dfd-8b67-9c2fe94215c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:52.787 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ca20c6cd-544e-4137-bfab-e2c8ed3450c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:52.807 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fe256b6e-c20f-4b8d-a722-f265b29ead04]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 740345, 'reachable_time': 17644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375049, 'error': None, 'target': 'ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d6c11d5e6\x2d13c9\x2d49c7\x2d982d\x2d8d1198ac7dea.mount: Deactivated successfully.
Oct 14 09:20:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:52.810 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:20:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:20:52.810 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[7e95838d-50e4-4d7c-91a8-c60a0779028f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:20:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:20:53 compute-0 nova_compute[259627]: 2025-10-14 09:20:53.069 2 INFO nova.virt.libvirt.driver [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Instance shutdown successfully after 3 seconds.
Oct 14 09:20:53 compute-0 nova_compute[259627]: 2025-10-14 09:20:53.079 2 INFO nova.virt.libvirt.driver [-] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Instance destroyed successfully.
Oct 14 09:20:53 compute-0 nova_compute[259627]: 2025-10-14 09:20:53.080 2 DEBUG nova.objects.instance [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lazy-loading 'numa_topology' on Instance uuid fd5669ba-0261-423e-8586-66c91ff570a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:20:53 compute-0 nova_compute[259627]: 2025-10-14 09:20:53.397 2 INFO nova.virt.libvirt.driver [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Beginning cold snapshot process
Oct 14 09:20:53 compute-0 sshd-session[374411]: Failed password for invalid user admin from 188.150.249.96 port 55806 ssh2
Oct 14 09:20:53 compute-0 nova_compute[259627]: 2025-10-14 09:20:53.543 2 INFO nova.compute.manager [None req-0f34db18-5b9a-4420-a5aa-6f11a7cb2dbe 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Get console output
Oct 14 09:20:53 compute-0 nova_compute[259627]: 2025-10-14 09:20:53.550 2 DEBUG nova.virt.libvirt.imagebackend [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 09:20:53 compute-0 nova_compute[259627]: 2025-10-14 09:20:53.555 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:20:53 compute-0 ceph-mon[74249]: pgmap v2014: 305 pgs: 305 active+clean; 200 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 142 op/s
Oct 14 09:20:53 compute-0 nova_compute[259627]: 2025-10-14 09:20:53.834 2 DEBUG nova.storage.rbd_utils [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] creating snapshot(5911a1aae36d40a880f0f062d39df779) on rbd image(fd5669ba-0261-423e-8586-66c91ff570a4_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:20:54 compute-0 nova_compute[259627]: 2025-10-14 09:20:54.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 200 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Oct 14 09:20:54 compute-0 nova_compute[259627]: 2025-10-14 09:20:54.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Oct 14 09:20:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Oct 14 09:20:54 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Oct 14 09:20:54 compute-0 nova_compute[259627]: 2025-10-14 09:20:54.796 2 DEBUG nova.compute.manager [req-18196159-5b9f-4709-9269-19d67a3b3555 req-75af09e7-e0ad-4dab-9ac1-9fada8671312 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received event network-vif-plugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:54 compute-0 nova_compute[259627]: 2025-10-14 09:20:54.797 2 DEBUG oslo_concurrency.lockutils [req-18196159-5b9f-4709-9269-19d67a3b3555 req-75af09e7-e0ad-4dab-9ac1-9fada8671312 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:20:54 compute-0 nova_compute[259627]: 2025-10-14 09:20:54.798 2 DEBUG oslo_concurrency.lockutils [req-18196159-5b9f-4709-9269-19d67a3b3555 req-75af09e7-e0ad-4dab-9ac1-9fada8671312 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:20:54 compute-0 nova_compute[259627]: 2025-10-14 09:20:54.799 2 DEBUG oslo_concurrency.lockutils [req-18196159-5b9f-4709-9269-19d67a3b3555 req-75af09e7-e0ad-4dab-9ac1-9fada8671312 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:20:54 compute-0 nova_compute[259627]: 2025-10-14 09:20:54.799 2 DEBUG nova.compute.manager [req-18196159-5b9f-4709-9269-19d67a3b3555 req-75af09e7-e0ad-4dab-9ac1-9fada8671312 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] No waiting events found dispatching network-vif-plugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:20:54 compute-0 nova_compute[259627]: 2025-10-14 09:20:54.800 2 WARNING nova.compute.manager [req-18196159-5b9f-4709-9269-19d67a3b3555 req-75af09e7-e0ad-4dab-9ac1-9fada8671312 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received unexpected event network-vif-plugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f for instance with vm_state active and task_state shelving_image_uploading.
Oct 14 09:20:54 compute-0 nova_compute[259627]: 2025-10-14 09:20:54.805 2 DEBUG nova.storage.rbd_utils [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] cloning vms/fd5669ba-0261-423e-8586-66c91ff570a4_disk@5911a1aae36d40a880f0f062d39df779 to images/79ee0dcf-4cc6-4560-a2c4-7c33c17b7486 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:20:54 compute-0 nova_compute[259627]: 2025-10-14 09:20:54.927 2 DEBUG nova.storage.rbd_utils [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] flattening images/79ee0dcf-4cc6-4560-a2c4-7c33c17b7486 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:20:55 compute-0 nova_compute[259627]: 2025-10-14 09:20:55.176 2 DEBUG nova.compute.manager [req-6293c3e5-2396-49ce-8df0-f7cd21f85c08 req-f1811938-fc07-4646-abf8-1d35c2c876c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Received event network-changed-8210d83b-b3db-4515-b65b-c49829132abf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:20:55 compute-0 nova_compute[259627]: 2025-10-14 09:20:55.177 2 DEBUG nova.compute.manager [req-6293c3e5-2396-49ce-8df0-f7cd21f85c08 req-f1811938-fc07-4646-abf8-1d35c2c876c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Refreshing instance network info cache due to event network-changed-8210d83b-b3db-4515-b65b-c49829132abf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:20:55 compute-0 nova_compute[259627]: 2025-10-14 09:20:55.177 2 DEBUG oslo_concurrency.lockutils [req-6293c3e5-2396-49ce-8df0-f7cd21f85c08 req-f1811938-fc07-4646-abf8-1d35c2c876c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-6a505551-bc3f-4254-966f-ca344358f8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:20:55 compute-0 nova_compute[259627]: 2025-10-14 09:20:55.177 2 DEBUG oslo_concurrency.lockutils [req-6293c3e5-2396-49ce-8df0-f7cd21f85c08 req-f1811938-fc07-4646-abf8-1d35c2c876c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-6a505551-bc3f-4254-966f-ca344358f8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:20:55 compute-0 nova_compute[259627]: 2025-10-14 09:20:55.177 2 DEBUG nova.network.neutron [req-6293c3e5-2396-49ce-8df0-f7cd21f85c08 req-f1811938-fc07-4646-abf8-1d35c2c876c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Refreshing network info cache for port 8210d83b-b3db-4515-b65b-c49829132abf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:20:55 compute-0 nova_compute[259627]: 2025-10-14 09:20:55.304 2 DEBUG nova.storage.rbd_utils [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] removing snapshot(5911a1aae36d40a880f0f062d39df779) on rbd image(fd5669ba-0261-423e-8586-66c91ff570a4_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:20:55 compute-0 sshd-session[374411]: Connection closed by invalid user admin 188.150.249.96 port 55806 [preauth]
Oct 14 09:20:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Oct 14 09:20:55 compute-0 ceph-mon[74249]: pgmap v2015: 305 pgs: 305 active+clean; 200 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Oct 14 09:20:55 compute-0 ceph-mon[74249]: osdmap e275: 3 total, 3 up, 3 in
Oct 14 09:20:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Oct 14 09:20:55 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Oct 14 09:20:55 compute-0 nova_compute[259627]: 2025-10-14 09:20:55.783 2 DEBUG nova.storage.rbd_utils [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] creating snapshot(snap) on rbd image(79ee0dcf-4cc6-4560-a2c4-7c33c17b7486) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:20:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 266 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 8.1 MiB/s wr, 191 op/s
Oct 14 09:20:56 compute-0 nova_compute[259627]: 2025-10-14 09:20:56.688 2 DEBUG nova.network.neutron [req-6293c3e5-2396-49ce-8df0-f7cd21f85c08 req-f1811938-fc07-4646-abf8-1d35c2c876c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Updated VIF entry in instance network info cache for port 8210d83b-b3db-4515-b65b-c49829132abf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:20:56 compute-0 nova_compute[259627]: 2025-10-14 09:20:56.688 2 DEBUG nova.network.neutron [req-6293c3e5-2396-49ce-8df0-f7cd21f85c08 req-f1811938-fc07-4646-abf8-1d35c2c876c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Updating instance_info_cache with network_info: [{"id": "8210d83b-b3db-4515-b65b-c49829132abf", "address": "fa:16:3e:b3:90:f9", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8210d83b-b3", "ovs_interfaceid": "8210d83b-b3db-4515-b65b-c49829132abf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:20:56 compute-0 nova_compute[259627]: 2025-10-14 09:20:56.719 2 DEBUG oslo_concurrency.lockutils [req-6293c3e5-2396-49ce-8df0-f7cd21f85c08 req-f1811938-fc07-4646-abf8-1d35c2c876c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-6a505551-bc3f-4254-966f-ca344358f8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:20:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Oct 14 09:20:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Oct 14 09:20:56 compute-0 ceph-mon[74249]: osdmap e276: 3 total, 3 up, 3 in
Oct 14 09:20:56 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Oct 14 09:20:57 compute-0 ceph-mon[74249]: pgmap v2018: 305 pgs: 305 active+clean; 266 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 8.1 MiB/s wr, 191 op/s
Oct 14 09:20:57 compute-0 ceph-mon[74249]: osdmap e277: 3 total, 3 up, 3 in
Oct 14 09:20:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:20:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 266 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 6.4 MiB/s wr, 118 op/s
Oct 14 09:20:58 compute-0 nova_compute[259627]: 2025-10-14 09:20:58.418 2 INFO nova.virt.libvirt.driver [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Snapshot image upload complete
Oct 14 09:20:58 compute-0 nova_compute[259627]: 2025-10-14 09:20:58.419 2 DEBUG nova.compute.manager [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:20:58 compute-0 nova_compute[259627]: 2025-10-14 09:20:58.482 2 INFO nova.compute.manager [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Shelve offloading
Oct 14 09:20:58 compute-0 nova_compute[259627]: 2025-10-14 09:20:58.491 2 INFO nova.virt.libvirt.driver [-] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Instance destroyed successfully.
Oct 14 09:20:58 compute-0 nova_compute[259627]: 2025-10-14 09:20:58.492 2 DEBUG nova.compute.manager [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:20:58 compute-0 nova_compute[259627]: 2025-10-14 09:20:58.495 2 DEBUG oslo_concurrency.lockutils [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquiring lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:20:58 compute-0 nova_compute[259627]: 2025-10-14 09:20:58.495 2 DEBUG oslo_concurrency.lockutils [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquired lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:20:58 compute-0 nova_compute[259627]: 2025-10-14 09:20:58.495 2 DEBUG nova.network.neutron [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:20:58 compute-0 podman[375194]: 2025-10-14 09:20:58.733552507 +0000 UTC m=+0.131270257 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:20:58 compute-0 podman[375195]: 2025-10-14 09:20:58.735476324 +0000 UTC m=+0.129801250 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:20:59 compute-0 nova_compute[259627]: 2025-10-14 09:20:59.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:59 compute-0 nova_compute[259627]: 2025-10-14 09:20:59.689 2 DEBUG nova.network.neutron [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Updating instance_info_cache with network_info: [{"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:20:59 compute-0 nova_compute[259627]: 2025-10-14 09:20:59.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:20:59 compute-0 nova_compute[259627]: 2025-10-14 09:20:59.714 2 DEBUG oslo_concurrency.lockutils [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Releasing lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:20:59 compute-0 ceph-mon[74249]: pgmap v2020: 305 pgs: 305 active+clean; 266 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 6.4 MiB/s wr, 118 op/s
Oct 14 09:21:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 279 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 140 op/s
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.035 2 INFO nova.virt.libvirt.driver [-] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Instance destroyed successfully.
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.035 2 DEBUG nova.objects.instance [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lazy-loading 'resources' on Instance uuid fd5669ba-0261-423e-8586-66c91ff570a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.049 2 DEBUG nova.virt.libvirt.vif [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:20:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1080027921',display_name='tempest-TestShelveInstance-server-1080027921',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1080027921',id=114,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDaFeG/xJmvGbKmYgn4dJf37Cqex3YsQYrFJ72iAZg+c2DsrdPgi+tOr4SRSonbIRwf/h+BLYvaqfFBXVZQ0pwUCpayjdgXBvqZr1W73e5QpjlvvzQksOoSh5mhqEQBaTw==',key_name='tempest-TestShelveInstance-1777944733',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:20:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b859f880079e4e6db96cdef422402fa1',ramdisk_id='',reservation_id='r-371aolc0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1721835966',owner_user_name='tempest-TestShelveInstance-1721835966-project-member',shelved_at='2025-10-14T09:20:58.419336',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='79ee0dcf-4cc6-4560-a2c4-7c33c17b7486'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:20:53Z,user_data=None,user_id='8d77c101777148edbee39ba308af8e60',uuid=fd5669ba-0261-423e-8586-66c91ff570a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.049 2 DEBUG nova.network.os_vif_util [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Converting VIF {"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.050 2 DEBUG nova.network.os_vif_util [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f,network=Network(6c11d5e6-13c9-49c7-982d-8d1198ac7dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a93d82c-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.051 2 DEBUG os_vif [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f,network=Network(6c11d5e6-13c9-49c7-982d-8d1198ac7dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a93d82c-2a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.053 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a93d82c-2a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.060 2 INFO os_vif [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f,network=Network(6c11d5e6-13c9-49c7-982d-8d1198ac7dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a93d82c-2a')
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.118 2 DEBUG nova.compute.manager [req-b0d5b5fd-53db-4148-a847-5387d2fcfb97 req-ce01b5f6-3690-4921-83a7-3d56916d1b5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received event network-changed-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.119 2 DEBUG nova.compute.manager [req-b0d5b5fd-53db-4148-a847-5387d2fcfb97 req-ce01b5f6-3690-4921-83a7-3d56916d1b5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Refreshing instance network info cache due to event network-changed-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.119 2 DEBUG oslo_concurrency.lockutils [req-b0d5b5fd-53db-4148-a847-5387d2fcfb97 req-ce01b5f6-3690-4921-83a7-3d56916d1b5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.119 2 DEBUG oslo_concurrency.lockutils [req-b0d5b5fd-53db-4148-a847-5387d2fcfb97 req-ce01b5f6-3690-4921-83a7-3d56916d1b5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.119 2 DEBUG nova.network.neutron [req-b0d5b5fd-53db-4148-a847-5387d2fcfb97 req-ce01b5f6-3690-4921-83a7-3d56916d1b5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Refreshing network info cache for port 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.132 2 DEBUG oslo_concurrency.lockutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "51e84050-59be-4f23-b78f-18bc2d3e83fc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.132 2 DEBUG oslo_concurrency.lockutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "51e84050-59be-4f23-b78f-18bc2d3e83fc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.150 2 DEBUG nova.compute.manager [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.230 2 DEBUG oslo_concurrency.lockutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.231 2 DEBUG oslo_concurrency.lockutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.238 2 DEBUG nova.virt.hardware [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.238 2 INFO nova.compute.claims [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.386 2 DEBUG oslo_concurrency.processutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.563 2 INFO nova.virt.libvirt.driver [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Deleting instance files /var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4_del
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.565 2 INFO nova.virt.libvirt.driver [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Deletion of /var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4_del complete
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.665 2 INFO nova.scheduler.client.report [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Deleted allocations for instance fd5669ba-0261-423e-8586-66c91ff570a4
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.708 2 DEBUG oslo_concurrency.lockutils [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:01 compute-0 ceph-mon[74249]: pgmap v2021: 305 pgs: 305 active+clean; 279 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 140 op/s
Oct 14 09:21:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:21:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/738320340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.891 2 DEBUG oslo_concurrency.processutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.899 2 DEBUG nova.compute.provider_tree [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.915 2 DEBUG nova.scheduler.client.report [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.953 2 DEBUG oslo_concurrency.lockutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.955 2 DEBUG nova.compute.manager [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:21:01 compute-0 nova_compute[259627]: 2025-10-14 09:21:01.964 2 DEBUG oslo_concurrency.lockutils [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.014 2 DEBUG nova.compute.manager [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.015 2 DEBUG nova.network.neutron [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.038 2 INFO nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.043 2 DEBUG oslo_concurrency.processutils [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.121 2 DEBUG nova.compute.manager [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:21:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 279 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 6.3 MiB/s wr, 142 op/s
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.209 2 DEBUG nova.compute.manager [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.212 2 DEBUG nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.212 2 INFO nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Creating image(s)
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.256 2 DEBUG nova.storage.rbd_utils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 51e84050-59be-4f23-b78f-18bc2d3e83fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.292 2 DEBUG nova.storage.rbd_utils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 51e84050-59be-4f23-b78f-18bc2d3e83fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.328 2 DEBUG nova.storage.rbd_utils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 51e84050-59be-4f23-b78f-18bc2d3e83fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.333 2 DEBUG oslo_concurrency.processutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.392 2 DEBUG nova.policy [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.441 2 DEBUG oslo_concurrency.processutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.442 2 DEBUG oslo_concurrency.lockutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.443 2 DEBUG oslo_concurrency.lockutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.443 2 DEBUG oslo_concurrency.lockutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.469 2 DEBUG nova.storage.rbd_utils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 51e84050-59be-4f23-b78f-18bc2d3e83fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.473 2 DEBUG oslo_concurrency.processutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 51e84050-59be-4f23-b78f-18bc2d3e83fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:21:02 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1834438951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.557 2 DEBUG oslo_concurrency.processutils [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.564 2 DEBUG nova.compute.provider_tree [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.584 2 DEBUG nova.scheduler.client.report [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.613 2 DEBUG oslo_concurrency.lockutils [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.644 2 DEBUG nova.network.neutron [req-b0d5b5fd-53db-4148-a847-5387d2fcfb97 req-ce01b5f6-3690-4921-83a7-3d56916d1b5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Updated VIF entry in instance network info cache for port 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.646 2 DEBUG nova.network.neutron [req-b0d5b5fd-53db-4148-a847-5387d2fcfb97 req-ce01b5f6-3690-4921-83a7-3d56916d1b5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Updating instance_info_cache with network_info: [{"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": null, "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.687 2 DEBUG oslo_concurrency.lockutils [None req-44d9f0e4-e453-474c-8e70-c6e9edae2b7a 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 12.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.694 2 DEBUG oslo_concurrency.lockutils [req-b0d5b5fd-53db-4148-a847-5387d2fcfb97 req-ce01b5f6-3690-4921-83a7-3d56916d1b5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:21:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:21:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.757 2 DEBUG oslo_concurrency.processutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 51e84050-59be-4f23-b78f-18bc2d3e83fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:02 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/738320340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:21:02 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1834438951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:21:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:21:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:21:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:21:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.825 2 DEBUG nova.storage.rbd_utils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image 51e84050-59be-4f23-b78f-18bc2d3e83fc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.925 2 DEBUG nova.objects.instance [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid 51e84050-59be-4f23-b78f-18bc2d3e83fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:21:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:21:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Oct 14 09:21:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Oct 14 09:21:02 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.944 2 DEBUG nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.944 2 DEBUG nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Ensure instance console log exists: /var/lib/nova/instances/51e84050-59be-4f23-b78f-18bc2d3e83fc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.945 2 DEBUG oslo_concurrency.lockutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.945 2 DEBUG oslo_concurrency.lockutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:02 compute-0 nova_compute[259627]: 2025-10-14 09:21:02.945 2 DEBUG oslo_concurrency.lockutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:03 compute-0 nova_compute[259627]: 2025-10-14 09:21:03.521 2 DEBUG nova.network.neutron [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Successfully created port: 7957d7c0-7f92-4e55-b573-985301bbdbbe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:21:03 compute-0 sshd-session[375174]: Invalid user pi from 188.150.249.96 port 58834
Oct 14 09:21:03 compute-0 ceph-mon[74249]: pgmap v2022: 305 pgs: 305 active+clean; 279 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 6.3 MiB/s wr, 142 op/s
Oct 14 09:21:03 compute-0 ceph-mon[74249]: osdmap e278: 3 total, 3 up, 3 in
Oct 14 09:21:04 compute-0 nova_compute[259627]: 2025-10-14 09:21:04.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 279 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 828 KiB/s rd, 1.0 MiB/s wr, 42 op/s
Oct 14 09:21:04 compute-0 sshd-session[375174]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:21:04 compute-0 sshd-session[375174]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96
Oct 14 09:21:04 compute-0 nova_compute[259627]: 2025-10-14 09:21:04.274 2 DEBUG nova.network.neutron [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Successfully updated port: 7957d7c0-7f92-4e55-b573-985301bbdbbe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:21:04 compute-0 nova_compute[259627]: 2025-10-14 09:21:04.295 2 DEBUG oslo_concurrency.lockutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-51e84050-59be-4f23-b78f-18bc2d3e83fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:21:04 compute-0 nova_compute[259627]: 2025-10-14 09:21:04.296 2 DEBUG oslo_concurrency.lockutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-51e84050-59be-4f23-b78f-18bc2d3e83fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:21:04 compute-0 nova_compute[259627]: 2025-10-14 09:21:04.296 2 DEBUG nova.network.neutron [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:21:04 compute-0 nova_compute[259627]: 2025-10-14 09:21:04.366 2 DEBUG nova.compute.manager [req-af808eeb-ce2b-4f31-8bf1-be9638448f72 req-fb52ef38-007f-4638-8092-de2e5e9e6c8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Received event network-changed-7957d7c0-7f92-4e55-b573-985301bbdbbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:21:04 compute-0 nova_compute[259627]: 2025-10-14 09:21:04.367 2 DEBUG nova.compute.manager [req-af808eeb-ce2b-4f31-8bf1-be9638448f72 req-fb52ef38-007f-4638-8092-de2e5e9e6c8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Refreshing instance network info cache due to event network-changed-7957d7c0-7f92-4e55-b573-985301bbdbbe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:21:04 compute-0 nova_compute[259627]: 2025-10-14 09:21:04.368 2 DEBUG oslo_concurrency.lockutils [req-af808eeb-ce2b-4f31-8bf1-be9638448f72 req-fb52ef38-007f-4638-8092-de2e5e9e6c8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-51e84050-59be-4f23-b78f-18bc2d3e83fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:21:04 compute-0 nova_compute[259627]: 2025-10-14 09:21:04.509 2 DEBUG nova.network.neutron [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.304 2 DEBUG oslo_concurrency.lockutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquiring lock "fd5669ba-0261-423e-8586-66c91ff570a4" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.306 2 DEBUG oslo_concurrency.lockutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.306 2 INFO nova.compute.manager [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Unshelving
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.350 2 DEBUG nova.network.neutron [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Updating instance_info_cache with network_info: [{"id": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "address": "fa:16:3e:8c:7c:44", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7957d7c0-7f", "ovs_interfaceid": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.376 2 DEBUG oslo_concurrency.lockutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-51e84050-59be-4f23-b78f-18bc2d3e83fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.376 2 DEBUG nova.compute.manager [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Instance network_info: |[{"id": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "address": "fa:16:3e:8c:7c:44", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7957d7c0-7f", "ovs_interfaceid": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.377 2 DEBUG oslo_concurrency.lockutils [req-af808eeb-ce2b-4f31-8bf1-be9638448f72 req-fb52ef38-007f-4638-8092-de2e5e9e6c8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-51e84050-59be-4f23-b78f-18bc2d3e83fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.378 2 DEBUG nova.network.neutron [req-af808eeb-ce2b-4f31-8bf1-be9638448f72 req-fb52ef38-007f-4638-8092-de2e5e9e6c8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Refreshing network info cache for port 7957d7c0-7f92-4e55-b573-985301bbdbbe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.383 2 DEBUG nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Start _get_guest_xml network_info=[{"id": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "address": "fa:16:3e:8c:7c:44", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7957d7c0-7f", "ovs_interfaceid": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.390 2 DEBUG oslo_concurrency.lockutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.390 2 DEBUG oslo_concurrency.lockutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.392 2 WARNING nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.399 2 DEBUG nova.objects.instance [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lazy-loading 'pci_requests' on Instance uuid fd5669ba-0261-423e-8586-66c91ff570a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.407 2 DEBUG nova.virt.libvirt.host [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.408 2 DEBUG nova.virt.libvirt.host [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.414 2 DEBUG nova.objects.instance [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lazy-loading 'numa_topology' on Instance uuid fd5669ba-0261-423e-8586-66c91ff570a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.416 2 DEBUG nova.virt.libvirt.host [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.417 2 DEBUG nova.virt.libvirt.host [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.418 2 DEBUG nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.418 2 DEBUG nova.virt.hardware [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.419 2 DEBUG nova.virt.hardware [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.419 2 DEBUG nova.virt.hardware [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.420 2 DEBUG nova.virt.hardware [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.420 2 DEBUG nova.virt.hardware [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.420 2 DEBUG nova.virt.hardware [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.421 2 DEBUG nova.virt.hardware [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.421 2 DEBUG nova.virt.hardware [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.421 2 DEBUG nova.virt.hardware [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.422 2 DEBUG nova.virt.hardware [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.422 2 DEBUG nova.virt.hardware [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.427 2 DEBUG oslo_concurrency.processutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.485 2 DEBUG nova.virt.hardware [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.487 2 INFO nova.compute.claims [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:05.525 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:21:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:05.527 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:21:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:21:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2189975385' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:21:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:21:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2189975385' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.626 2 DEBUG oslo_concurrency.processutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:05 compute-0 sshd-session[375174]: Failed password for invalid user pi from 188.150.249.96 port 58834 ssh2
Oct 14 09:21:05 compute-0 ceph-mon[74249]: pgmap v2024: 305 pgs: 305 active+clean; 279 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 828 KiB/s rd, 1.0 MiB/s wr, 42 op/s
Oct 14 09:21:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2189975385' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:21:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2189975385' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:21:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:21:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/17351845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.923 2 DEBUG oslo_concurrency.processutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.960 2 DEBUG nova.storage.rbd_utils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 51e84050-59be-4f23-b78f-18bc2d3e83fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:21:05 compute-0 nova_compute[259627]: 2025-10-14 09:21:05.966 2 DEBUG oslo_concurrency.processutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:21:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2972382550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.110 2 DEBUG oslo_concurrency.processutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.116 2 DEBUG nova.compute.provider_tree [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.141 2 DEBUG nova.scheduler.client.report [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:21:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 246 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 752 KiB/s rd, 3.2 MiB/s wr, 106 op/s
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.171 2 DEBUG oslo_concurrency.lockutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.343 2 INFO nova.network.neutron [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Updating port 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 14 09:21:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:21:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3028456991' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.390 2 DEBUG oslo_concurrency.processutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.392 2 DEBUG nova.virt.libvirt.vif [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:21:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-892839943',display_name='tempest-TestNetworkBasicOps-server-892839943',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-892839943',id=116,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPU9xVRNAF650NaFJ2rdO+hE0GQMvrdRo09qcAPpFjehwF2GQim0973u5oTNRL9wk6YXV2ZXz62pkj/1H8t5tw9BZg024xDTo5MHsD4N7mhy1HoPBr22QSmOlaoBMfvLww==',key_name='tempest-TestNetworkBasicOps-803603811',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-xw3rgozp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:21:02Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=51e84050-59be-4f23-b78f-18bc2d3e83fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "address": "fa:16:3e:8c:7c:44", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7957d7c0-7f", "ovs_interfaceid": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.392 2 DEBUG nova.network.os_vif_util [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "address": "fa:16:3e:8c:7c:44", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7957d7c0-7f", "ovs_interfaceid": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.393 2 DEBUG nova.network.os_vif_util [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:7c:44,bridge_name='br-int',has_traffic_filtering=True,id=7957d7c0-7f92-4e55-b573-985301bbdbbe,network=Network(61c1acdc-e817-4d26-8900-47d35332175a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7957d7c0-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.394 2 DEBUG nova.objects.instance [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid 51e84050-59be-4f23-b78f-18bc2d3e83fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.412 2 DEBUG nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:21:06 compute-0 nova_compute[259627]:   <uuid>51e84050-59be-4f23-b78f-18bc2d3e83fc</uuid>
Oct 14 09:21:06 compute-0 nova_compute[259627]:   <name>instance-00000074</name>
Oct 14 09:21:06 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:21:06 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:21:06 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkBasicOps-server-892839943</nova:name>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:21:05</nova:creationTime>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:21:06 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:21:06 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:21:06 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:21:06 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:21:06 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:21:06 compute-0 nova_compute[259627]:         <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:21:06 compute-0 nova_compute[259627]:         <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:21:06 compute-0 nova_compute[259627]:         <nova:port uuid="7957d7c0-7f92-4e55-b573-985301bbdbbe">
Oct 14 09:21:06 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:21:06 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:21:06 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <system>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <entry name="serial">51e84050-59be-4f23-b78f-18bc2d3e83fc</entry>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <entry name="uuid">51e84050-59be-4f23-b78f-18bc2d3e83fc</entry>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     </system>
Oct 14 09:21:06 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:21:06 compute-0 nova_compute[259627]:   <os>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:   </os>
Oct 14 09:21:06 compute-0 nova_compute[259627]:   <features>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:   </features>
Oct 14 09:21:06 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:21:06 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:21:06 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/51e84050-59be-4f23-b78f-18bc2d3e83fc_disk">
Oct 14 09:21:06 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       </source>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:21:06 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/51e84050-59be-4f23-b78f-18bc2d3e83fc_disk.config">
Oct 14 09:21:06 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       </source>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:21:06 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:8c:7c:44"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <target dev="tap7957d7c0-7f"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/51e84050-59be-4f23-b78f-18bc2d3e83fc/console.log" append="off"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <video>
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     </video>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:21:06 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:21:06 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:21:06 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:21:06 compute-0 nova_compute[259627]: </domain>
Oct 14 09:21:06 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.413 2 DEBUG nova.compute.manager [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Preparing to wait for external event network-vif-plugged-7957d7c0-7f92-4e55-b573-985301bbdbbe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.414 2 DEBUG oslo_concurrency.lockutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "51e84050-59be-4f23-b78f-18bc2d3e83fc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.414 2 DEBUG oslo_concurrency.lockutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "51e84050-59be-4f23-b78f-18bc2d3e83fc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.415 2 DEBUG oslo_concurrency.lockutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "51e84050-59be-4f23-b78f-18bc2d3e83fc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.416 2 DEBUG nova.virt.libvirt.vif [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:21:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-892839943',display_name='tempest-TestNetworkBasicOps-server-892839943',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-892839943',id=116,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPU9xVRNAF650NaFJ2rdO+hE0GQMvrdRo09qcAPpFjehwF2GQim0973u5oTNRL9wk6YXV2ZXz62pkj/1H8t5tw9BZg024xDTo5MHsD4N7mhy1HoPBr22QSmOlaoBMfvLww==',key_name='tempest-TestNetworkBasicOps-803603811',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-xw3rgozp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:21:02Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=51e84050-59be-4f23-b78f-18bc2d3e83fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "address": "fa:16:3e:8c:7c:44", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7957d7c0-7f", "ovs_interfaceid": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.416 2 DEBUG nova.network.os_vif_util [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "address": "fa:16:3e:8c:7c:44", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7957d7c0-7f", "ovs_interfaceid": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.417 2 DEBUG nova.network.os_vif_util [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:7c:44,bridge_name='br-int',has_traffic_filtering=True,id=7957d7c0-7f92-4e55-b573-985301bbdbbe,network=Network(61c1acdc-e817-4d26-8900-47d35332175a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7957d7c0-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.418 2 DEBUG os_vif [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:7c:44,bridge_name='br-int',has_traffic_filtering=True,id=7957d7c0-7f92-4e55-b573-985301bbdbbe,network=Network(61c1acdc-e817-4d26-8900-47d35332175a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7957d7c0-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.419 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.420 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.423 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7957d7c0-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.424 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7957d7c0-7f, col_values=(('external_ids', {'iface-id': '7957d7c0-7f92-4e55-b573-985301bbdbbe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:7c:44', 'vm-uuid': '51e84050-59be-4f23-b78f-18bc2d3e83fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:06 compute-0 NetworkManager[44885]: <info>  [1760433666.4269] manager: (tap7957d7c0-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.433 2 INFO os_vif [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:7c:44,bridge_name='br-int',has_traffic_filtering=True,id=7957d7c0-7f92-4e55-b573-985301bbdbbe,network=Network(61c1acdc-e817-4d26-8900-47d35332175a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7957d7c0-7f')
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.506 2 DEBUG nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.507 2 DEBUG nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.507 2 DEBUG nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:8c:7c:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.507 2 INFO nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Using config drive
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.525 2 DEBUG nova.storage.rbd_utils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 51e84050-59be-4f23-b78f-18bc2d3e83fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:21:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/17351845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:21:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2972382550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:21:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3028456991' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.861 2 DEBUG nova.network.neutron [req-af808eeb-ce2b-4f31-8bf1-be9638448f72 req-fb52ef38-007f-4638-8092-de2e5e9e6c8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Updated VIF entry in instance network info cache for port 7957d7c0-7f92-4e55-b573-985301bbdbbe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.861 2 DEBUG nova.network.neutron [req-af808eeb-ce2b-4f31-8bf1-be9638448f72 req-fb52ef38-007f-4638-8092-de2e5e9e6c8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Updating instance_info_cache with network_info: [{"id": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "address": "fa:16:3e:8c:7c:44", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7957d7c0-7f", "ovs_interfaceid": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.885 2 DEBUG oslo_concurrency.lockutils [req-af808eeb-ce2b-4f31-8bf1-be9638448f72 req-fb52ef38-007f-4638-8092-de2e5e9e6c8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-51e84050-59be-4f23-b78f-18bc2d3e83fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.971 2 INFO nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Creating config drive at /var/lib/nova/instances/51e84050-59be-4f23-b78f-18bc2d3e83fc/disk.config
Oct 14 09:21:06 compute-0 nova_compute[259627]: 2025-10-14 09:21:06.975 2 DEBUG oslo_concurrency.processutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51e84050-59be-4f23-b78f-18bc2d3e83fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpic3fgihk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:07.037 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:07.038 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:07.039 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:07 compute-0 nova_compute[259627]: 2025-10-14 09:21:07.117 2 DEBUG oslo_concurrency.lockutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquiring lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:21:07 compute-0 nova_compute[259627]: 2025-10-14 09:21:07.118 2 DEBUG oslo_concurrency.lockutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquired lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:21:07 compute-0 nova_compute[259627]: 2025-10-14 09:21:07.119 2 DEBUG nova.network.neutron [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:21:07 compute-0 nova_compute[259627]: 2025-10-14 09:21:07.121 2 DEBUG oslo_concurrency.processutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51e84050-59be-4f23-b78f-18bc2d3e83fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpic3fgihk" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:07 compute-0 nova_compute[259627]: 2025-10-14 09:21:07.152 2 DEBUG nova.storage.rbd_utils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 51e84050-59be-4f23-b78f-18bc2d3e83fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:21:07 compute-0 nova_compute[259627]: 2025-10-14 09:21:07.160 2 DEBUG oslo_concurrency.processutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51e84050-59be-4f23-b78f-18bc2d3e83fc/disk.config 51e84050-59be-4f23-b78f-18bc2d3e83fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:07 compute-0 nova_compute[259627]: 2025-10-14 09:21:07.240 2 DEBUG nova.compute.manager [req-c9c96913-2e66-49fd-a1dc-e4478034a350 req-df1264a7-71a1-4f50-bcc8-35458235ef24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received event network-changed-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:21:07 compute-0 nova_compute[259627]: 2025-10-14 09:21:07.241 2 DEBUG nova.compute.manager [req-c9c96913-2e66-49fd-a1dc-e4478034a350 req-df1264a7-71a1-4f50-bcc8-35458235ef24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Refreshing instance network info cache due to event network-changed-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:21:07 compute-0 nova_compute[259627]: 2025-10-14 09:21:07.241 2 DEBUG oslo_concurrency.lockutils [req-c9c96913-2e66-49fd-a1dc-e4478034a350 req-df1264a7-71a1-4f50-bcc8-35458235ef24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:21:07 compute-0 sshd-session[375174]: Connection closed by invalid user pi 188.150.249.96 port 58834 [preauth]
Oct 14 09:21:07 compute-0 nova_compute[259627]: 2025-10-14 09:21:07.341 2 DEBUG oslo_concurrency.processutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51e84050-59be-4f23-b78f-18bc2d3e83fc/disk.config 51e84050-59be-4f23-b78f-18bc2d3e83fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:07 compute-0 nova_compute[259627]: 2025-10-14 09:21:07.342 2 INFO nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Deleting local config drive /var/lib/nova/instances/51e84050-59be-4f23-b78f-18bc2d3e83fc/disk.config because it was imported into RBD.
Oct 14 09:21:07 compute-0 kernel: tap7957d7c0-7f: entered promiscuous mode
Oct 14 09:21:07 compute-0 NetworkManager[44885]: <info>  [1760433667.4197] manager: (tap7957d7c0-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/499)
Oct 14 09:21:07 compute-0 ovn_controller[152662]: 2025-10-14T09:21:07Z|01220|binding|INFO|Claiming lport 7957d7c0-7f92-4e55-b573-985301bbdbbe for this chassis.
Oct 14 09:21:07 compute-0 ovn_controller[152662]: 2025-10-14T09:21:07Z|01221|binding|INFO|7957d7c0-7f92-4e55-b573-985301bbdbbe: Claiming fa:16:3e:8c:7c:44 10.100.0.3
Oct 14 09:21:07 compute-0 nova_compute[259627]: 2025-10-14 09:21:07.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:07.427 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:7c:44 10.100.0.3'], port_security=['fa:16:3e:8c:7c:44 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '51e84050-59be-4f23-b78f-18bc2d3e83fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61c1acdc-e817-4d26-8900-47d35332175a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd92719f8-d4b1-482a-bb83-986dc81811df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfec6e64-6ad9-4ee1-953d-480113ac60ef, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7957d7c0-7f92-4e55-b573-985301bbdbbe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:21:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:07.428 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7957d7c0-7f92-4e55-b573-985301bbdbbe in datapath 61c1acdc-e817-4d26-8900-47d35332175a bound to our chassis
Oct 14 09:21:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:07.431 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61c1acdc-e817-4d26-8900-47d35332175a
Oct 14 09:21:07 compute-0 ovn_controller[152662]: 2025-10-14T09:21:07Z|01222|binding|INFO|Setting lport 7957d7c0-7f92-4e55-b573-985301bbdbbe ovn-installed in OVS
Oct 14 09:21:07 compute-0 ovn_controller[152662]: 2025-10-14T09:21:07Z|01223|binding|INFO|Setting lport 7957d7c0-7f92-4e55-b573-985301bbdbbe up in Southbound
Oct 14 09:21:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:07.456 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2e27d2-e230-4320-93d1-4564b1822cca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:07 compute-0 nova_compute[259627]: 2025-10-14 09:21:07.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:07 compute-0 systemd-machined[214636]: New machine qemu-146-instance-00000074.
Oct 14 09:21:07 compute-0 systemd[1]: Started Virtual Machine qemu-146-instance-00000074.
Oct 14 09:21:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:07.502 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[df78c4ce-d23d-44af-8040-ea0f9c34ebfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:07 compute-0 systemd-udevd[375621]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:21:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:07.506 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9380b233-897e-4acd-9bb5-ad1278cc2bc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:07 compute-0 NetworkManager[44885]: <info>  [1760433667.5176] device (tap7957d7c0-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:21:07 compute-0 NetworkManager[44885]: <info>  [1760433667.5191] device (tap7957d7c0-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:21:07 compute-0 auditd[703]: Audit daemon rotating log files
Oct 14 09:21:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:07.539 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8a65c9b9-5f67-48c6-ba07-10b94a80087f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:07.566 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ae7fba6c-dbc2-41b7-be66-8f3b2c962826]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61c1acdc-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:79:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 347], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741459, 'reachable_time': 28535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375629, 'error': None, 'target': 'ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:07.582 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[94b0df70-a53a-4ac0-bcb3-83c659898963]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap61c1acdc-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 741471, 'tstamp': 741471}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375633, 'error': None, 'target': 'ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap61c1acdc-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 741474, 'tstamp': 741474}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375633, 'error': None, 'target': 'ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:07.584 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61c1acdc-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:07 compute-0 nova_compute[259627]: 2025-10-14 09:21:07.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:07 compute-0 nova_compute[259627]: 2025-10-14 09:21:07.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:07.588 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61c1acdc-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:07.589 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:21:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:07.589 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61c1acdc-e0, col_values=(('external_ids', {'iface-id': '6f7bf4a5-58bf-4ec8-a189-0a9624df5601'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:07.590 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:21:07 compute-0 nova_compute[259627]: 2025-10-14 09:21:07.626 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433652.6255503, fd5669ba-0261-423e-8586-66c91ff570a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:21:07 compute-0 nova_compute[259627]: 2025-10-14 09:21:07.627 2 INFO nova.compute.manager [-] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] VM Stopped (Lifecycle Event)
Oct 14 09:21:07 compute-0 nova_compute[259627]: 2025-10-14 09:21:07.647 2 DEBUG nova.compute.manager [None req-0f65440b-7cf7-4997-8e39-e8afcd6d147e - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:21:07 compute-0 ceph-mon[74249]: pgmap v2025: 305 pgs: 305 active+clean; 246 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 752 KiB/s rd, 3.2 MiB/s wr, 106 op/s
Oct 14 09:21:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:21:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 246 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 705 KiB/s rd, 3.0 MiB/s wr, 99 op/s
Oct 14 09:21:08 compute-0 nova_compute[259627]: 2025-10-14 09:21:08.356 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433668.3556013, 51e84050-59be-4f23-b78f-18bc2d3e83fc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:21:08 compute-0 nova_compute[259627]: 2025-10-14 09:21:08.356 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] VM Started (Lifecycle Event)
Oct 14 09:21:08 compute-0 nova_compute[259627]: 2025-10-14 09:21:08.386 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:21:08 compute-0 nova_compute[259627]: 2025-10-14 09:21:08.398 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433668.3557541, 51e84050-59be-4f23-b78f-18bc2d3e83fc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:21:08 compute-0 nova_compute[259627]: 2025-10-14 09:21:08.399 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] VM Paused (Lifecycle Event)
Oct 14 09:21:08 compute-0 nova_compute[259627]: 2025-10-14 09:21:08.415 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:21:08 compute-0 nova_compute[259627]: 2025-10-14 09:21:08.419 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:21:08 compute-0 nova_compute[259627]: 2025-10-14 09:21:08.436 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:21:08 compute-0 ceph-mon[74249]: pgmap v2026: 305 pgs: 305 active+clean; 246 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 705 KiB/s rd, 3.0 MiB/s wr, 99 op/s
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.126 2 DEBUG nova.network.neutron [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Updating instance_info_cache with network_info: [{"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.148 2 DEBUG oslo_concurrency.lockutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Releasing lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.151 2 DEBUG nova.virt.libvirt.driver [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.151 2 INFO nova.virt.libvirt.driver [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Creating image(s)
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.190 2 DEBUG nova.storage.rbd_utils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] rbd image fd5669ba-0261-423e-8586-66c91ff570a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.196 2 DEBUG nova.objects.instance [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid fd5669ba-0261-423e-8586-66c91ff570a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.199 2 DEBUG oslo_concurrency.lockutils [req-c9c96913-2e66-49fd-a1dc-e4478034a350 req-df1264a7-71a1-4f50-bcc8-35458235ef24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.200 2 DEBUG nova.network.neutron [req-c9c96913-2e66-49fd-a1dc-e4478034a350 req-df1264a7-71a1-4f50-bcc8-35458235ef24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Refreshing network info cache for port 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.258 2 DEBUG nova.storage.rbd_utils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] rbd image fd5669ba-0261-423e-8586-66c91ff570a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.293 2 DEBUG nova.storage.rbd_utils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] rbd image fd5669ba-0261-423e-8586-66c91ff570a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.299 2 DEBUG oslo_concurrency.lockutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquiring lock "02eb9d3366c21f9e864414441cf281a5c4f0a500" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.300 2 DEBUG oslo_concurrency.lockutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "02eb9d3366c21f9e864414441cf281a5c4f0a500" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.380 2 DEBUG nova.compute.manager [req-c7ff071b-e818-42dd-b833-9d37897a235f req-f9a83f1a-a497-4a95-9b31-d2a299587bde 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Received event network-vif-plugged-7957d7c0-7f92-4e55-b573-985301bbdbbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.380 2 DEBUG oslo_concurrency.lockutils [req-c7ff071b-e818-42dd-b833-9d37897a235f req-f9a83f1a-a497-4a95-9b31-d2a299587bde 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "51e84050-59be-4f23-b78f-18bc2d3e83fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.380 2 DEBUG oslo_concurrency.lockutils [req-c7ff071b-e818-42dd-b833-9d37897a235f req-f9a83f1a-a497-4a95-9b31-d2a299587bde 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "51e84050-59be-4f23-b78f-18bc2d3e83fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.381 2 DEBUG oslo_concurrency.lockutils [req-c7ff071b-e818-42dd-b833-9d37897a235f req-f9a83f1a-a497-4a95-9b31-d2a299587bde 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "51e84050-59be-4f23-b78f-18bc2d3e83fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.381 2 DEBUG nova.compute.manager [req-c7ff071b-e818-42dd-b833-9d37897a235f req-f9a83f1a-a497-4a95-9b31-d2a299587bde 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Processing event network-vif-plugged-7957d7c0-7f92-4e55-b573-985301bbdbbe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.381 2 DEBUG nova.compute.manager [req-c7ff071b-e818-42dd-b833-9d37897a235f req-f9a83f1a-a497-4a95-9b31-d2a299587bde 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Received event network-vif-plugged-7957d7c0-7f92-4e55-b573-985301bbdbbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.382 2 DEBUG oslo_concurrency.lockutils [req-c7ff071b-e818-42dd-b833-9d37897a235f req-f9a83f1a-a497-4a95-9b31-d2a299587bde 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "51e84050-59be-4f23-b78f-18bc2d3e83fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.382 2 DEBUG oslo_concurrency.lockutils [req-c7ff071b-e818-42dd-b833-9d37897a235f req-f9a83f1a-a497-4a95-9b31-d2a299587bde 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "51e84050-59be-4f23-b78f-18bc2d3e83fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.382 2 DEBUG oslo_concurrency.lockutils [req-c7ff071b-e818-42dd-b833-9d37897a235f req-f9a83f1a-a497-4a95-9b31-d2a299587bde 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "51e84050-59be-4f23-b78f-18bc2d3e83fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.382 2 DEBUG nova.compute.manager [req-c7ff071b-e818-42dd-b833-9d37897a235f req-f9a83f1a-a497-4a95-9b31-d2a299587bde 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] No waiting events found dispatching network-vif-plugged-7957d7c0-7f92-4e55-b573-985301bbdbbe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.383 2 WARNING nova.compute.manager [req-c7ff071b-e818-42dd-b833-9d37897a235f req-f9a83f1a-a497-4a95-9b31-d2a299587bde 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Received unexpected event network-vif-plugged-7957d7c0-7f92-4e55-b573-985301bbdbbe for instance with vm_state building and task_state spawning.
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.383 2 DEBUG nova.compute.manager [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.388 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433669.3879511, 51e84050-59be-4f23-b78f-18bc2d3e83fc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.388 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] VM Resumed (Lifecycle Event)
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.391 2 DEBUG nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.398 2 INFO nova.virt.libvirt.driver [-] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Instance spawned successfully.
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.398 2 DEBUG nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.422 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.430 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.435 2 DEBUG nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.436 2 DEBUG nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.437 2 DEBUG nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.438 2 DEBUG nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.439 2 DEBUG nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.439 2 DEBUG nova.virt.libvirt.driver [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.474 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.522 2 INFO nova.compute.manager [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Took 7.31 seconds to spawn the instance on the hypervisor.
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.523 2 DEBUG nova.compute.manager [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.603 2 INFO nova.compute.manager [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Took 8.40 seconds to build instance.
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.621 2 DEBUG oslo_concurrency.lockutils [None req-0519d753-15d7-4552-afa6-c79498091525 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "51e84050-59be-4f23-b78f-18bc2d3e83fc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.489s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.684 2 DEBUG nova.virt.libvirt.imagebackend [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Image locations are: [{'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/79ee0dcf-4cc6-4560-a2c4-7c33c17b7486/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/79ee0dcf-4cc6-4560-a2c4-7c33c17b7486/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.771 2 DEBUG nova.virt.libvirt.imagebackend [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Selected location: {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/79ee0dcf-4cc6-4560-a2c4-7c33c17b7486/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.772 2 DEBUG nova.storage.rbd_utils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] cloning images/79ee0dcf-4cc6-4560-a2c4-7c33c17b7486@snap to None/fd5669ba-0261-423e-8586-66c91ff570a4_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:21:09 compute-0 nova_compute[259627]: 2025-10-14 09:21:09.874 2 DEBUG oslo_concurrency.lockutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "02eb9d3366c21f9e864414441cf281a5c4f0a500" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.031 2 DEBUG nova.objects.instance [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lazy-loading 'migration_context' on Instance uuid fd5669ba-0261-423e-8586-66c91ff570a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.086 2 DEBUG nova.storage.rbd_utils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] flattening vms/fd5669ba-0261-423e-8586-66c91ff570a4_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:21:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 246 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 2.1 MiB/s wr, 112 op/s
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.408 2 DEBUG nova.virt.libvirt.driver [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Image rbd:vms/fd5669ba-0261-423e-8586-66c91ff570a4_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.409 2 DEBUG nova.virt.libvirt.driver [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.410 2 DEBUG nova.virt.libvirt.driver [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Ensure instance console log exists: /var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.410 2 DEBUG oslo_concurrency.lockutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.411 2 DEBUG oslo_concurrency.lockutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.411 2 DEBUG oslo_concurrency.lockutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.414 2 DEBUG nova.virt.libvirt.driver [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Start _get_guest_xml network_info=[{"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-14T09:20:49Z,direct_url=<?>,disk_format='raw',id=79ee0dcf-4cc6-4560-a2c4-7c33c17b7486,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1080027921-shelved',owner='b859f880079e4e6db96cdef422402fa1',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-14T09:20:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.418 2 WARNING nova.virt.libvirt.driver [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.423 2 DEBUG nova.virt.libvirt.host [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.424 2 DEBUG nova.virt.libvirt.host [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.428 2 DEBUG nova.virt.libvirt.host [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.429 2 DEBUG nova.virt.libvirt.host [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.429 2 DEBUG nova.virt.libvirt.driver [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.430 2 DEBUG nova.virt.hardware [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-14T09:20:49Z,direct_url=<?>,disk_format='raw',id=79ee0dcf-4cc6-4560-a2c4-7c33c17b7486,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1080027921-shelved',owner='b859f880079e4e6db96cdef422402fa1',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-14T09:20:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.430 2 DEBUG nova.virt.hardware [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.431 2 DEBUG nova.virt.hardware [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.431 2 DEBUG nova.virt.hardware [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.431 2 DEBUG nova.virt.hardware [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.432 2 DEBUG nova.virt.hardware [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.433 2 DEBUG nova.virt.hardware [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.433 2 DEBUG nova.virt.hardware [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.434 2 DEBUG nova.virt.hardware [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.434 2 DEBUG nova.virt.hardware [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.435 2 DEBUG nova.virt.hardware [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.436 2 DEBUG nova.objects.instance [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid fd5669ba-0261-423e-8586-66c91ff570a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.455 2 DEBUG oslo_concurrency.processutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:10 compute-0 podman[375893]: 2025-10-14 09:21:10.645728771 +0000 UTC m=+0.052327271 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 14 09:21:10 compute-0 podman[375892]: 2025-10-14 09:21:10.703915755 +0000 UTC m=+0.111459948 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:21:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:21:10 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/176999880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.895 2 DEBUG nova.network.neutron [req-c9c96913-2e66-49fd-a1dc-e4478034a350 req-df1264a7-71a1-4f50-bcc8-35458235ef24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Updated VIF entry in instance network info cache for port 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.895 2 DEBUG nova.network.neutron [req-c9c96913-2e66-49fd-a1dc-e4478034a350 req-df1264a7-71a1-4f50-bcc8-35458235ef24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Updating instance_info_cache with network_info: [{"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.904 2 DEBUG oslo_concurrency.processutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.922 2 DEBUG nova.storage.rbd_utils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] rbd image fd5669ba-0261-423e-8586-66c91ff570a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.925 2 DEBUG oslo_concurrency.processutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:10 compute-0 nova_compute[259627]: 2025-10-14 09:21:10.968 2 DEBUG oslo_concurrency.lockutils [req-c9c96913-2e66-49fd-a1dc-e4478034a350 req-df1264a7-71a1-4f50-bcc8-35458235ef24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:21:11 compute-0 ceph-mon[74249]: pgmap v2027: 305 pgs: 305 active+clean; 246 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 2.1 MiB/s wr, 112 op/s
Oct 14 09:21:11 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/176999880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:21:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:21:11 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3610381924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.383 2 DEBUG oslo_concurrency.processutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.386 2 DEBUG nova.virt.libvirt.vif [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:20:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1080027921',display_name='tempest-TestShelveInstance-server-1080027921',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1080027921',id=114,image_ref='79ee0dcf-4cc6-4560-a2c4-7c33c17b7486',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1777944733',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:20:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b859f880079e4e6db96cdef422402fa1',ramdisk_id='',reservation_id='r-371aolc0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1721835966',owner_user_name='tempest-TestShelveInstance-1721835966-project-member',shelved_at='2025-10-14T09:20:58.419336',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='79ee0dcf-4cc6-4560-a2c4-7c33c17b7486'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:21:05Z,user_data=None,user_id='8d77c101777148edbee39ba308af8e60',uuid=fd5669ba-0261-423e-8586-66c91ff570a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.387 2 DEBUG nova.network.os_vif_util [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Converting VIF {"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.389 2 DEBUG nova.network.os_vif_util [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f,network=Network(6c11d5e6-13c9-49c7-982d-8d1198ac7dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a93d82c-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.391 2 DEBUG nova.objects.instance [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lazy-loading 'pci_devices' on Instance uuid fd5669ba-0261-423e-8586-66c91ff570a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.412 2 DEBUG nova.virt.libvirt.driver [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:21:11 compute-0 nova_compute[259627]:   <uuid>fd5669ba-0261-423e-8586-66c91ff570a4</uuid>
Oct 14 09:21:11 compute-0 nova_compute[259627]:   <name>instance-00000072</name>
Oct 14 09:21:11 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:21:11 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:21:11 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <nova:name>tempest-TestShelveInstance-server-1080027921</nova:name>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:21:10</nova:creationTime>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:21:11 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:21:11 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:21:11 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:21:11 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:21:11 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:21:11 compute-0 nova_compute[259627]:         <nova:user uuid="8d77c101777148edbee39ba308af8e60">tempest-TestShelveInstance-1721835966-project-member</nova:user>
Oct 14 09:21:11 compute-0 nova_compute[259627]:         <nova:project uuid="b859f880079e4e6db96cdef422402fa1">tempest-TestShelveInstance-1721835966</nova:project>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="79ee0dcf-4cc6-4560-a2c4-7c33c17b7486"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:21:11 compute-0 nova_compute[259627]:         <nova:port uuid="8a93d82c-2ad9-4fb9-8867-f4d2cdac487f">
Oct 14 09:21:11 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:21:11 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:21:11 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <system>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <entry name="serial">fd5669ba-0261-423e-8586-66c91ff570a4</entry>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <entry name="uuid">fd5669ba-0261-423e-8586-66c91ff570a4</entry>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     </system>
Oct 14 09:21:11 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:21:11 compute-0 nova_compute[259627]:   <os>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:   </os>
Oct 14 09:21:11 compute-0 nova_compute[259627]:   <features>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:   </features>
Oct 14 09:21:11 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:21:11 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:21:11 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/fd5669ba-0261-423e-8586-66c91ff570a4_disk">
Oct 14 09:21:11 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       </source>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:21:11 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/fd5669ba-0261-423e-8586-66c91ff570a4_disk.config">
Oct 14 09:21:11 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       </source>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:21:11 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:5a:cb:1b"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <target dev="tap8a93d82c-2a"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4/console.log" append="off"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <video>
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     </video>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <input type="keyboard" bus="usb"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:21:11 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:21:11 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:21:11 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:21:11 compute-0 nova_compute[259627]: </domain>
Oct 14 09:21:11 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.413 2 DEBUG nova.compute.manager [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Preparing to wait for external event network-vif-plugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.414 2 DEBUG oslo_concurrency.lockutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquiring lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.414 2 DEBUG oslo_concurrency.lockutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.415 2 DEBUG oslo_concurrency.lockutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.417 2 DEBUG nova.virt.libvirt.vif [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:20:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1080027921',display_name='tempest-TestShelveInstance-server-1080027921',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1080027921',id=114,image_ref='79ee0dcf-4cc6-4560-a2c4-7c33c17b7486',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1777944733',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:20:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b859f880079e4e6db96cdef422402fa1',ramdisk_id='',reservation_id='r-371aolc0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1721835966',owner_user_name='tempest-TestShelveInstance-1721835966-project-member',shelved_at='2025-10-14T09:20:58.419336',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='79ee0dcf-4cc6-4560-a2c4-7c33c17b7486'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:21:05Z,user_data=None,user_id='8d77c101777148edbee39ba308af8e60',uuid=fd5669ba-0261-423e-8586-66c91ff570a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.417 2 DEBUG nova.network.os_vif_util [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Converting VIF {"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.421 2 DEBUG nova.network.os_vif_util [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f,network=Network(6c11d5e6-13c9-49c7-982d-8d1198ac7dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a93d82c-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.422 2 DEBUG os_vif [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f,network=Network(6c11d5e6-13c9-49c7-982d-8d1198ac7dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a93d82c-2a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.424 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.431 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a93d82c-2a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.432 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8a93d82c-2a, col_values=(('external_ids', {'iface-id': '8a93d82c-2ad9-4fb9-8867-f4d2cdac487f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:cb:1b', 'vm-uuid': 'fd5669ba-0261-423e-8586-66c91ff570a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:11 compute-0 NetworkManager[44885]: <info>  [1760433671.4346] manager: (tap8a93d82c-2a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/500)
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.470 2 INFO os_vif [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f,network=Network(6c11d5e6-13c9-49c7-982d-8d1198ac7dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a93d82c-2a')
Oct 14 09:21:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:11.529 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.531 2 DEBUG nova.virt.libvirt.driver [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.531 2 DEBUG nova.virt.libvirt.driver [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.532 2 DEBUG nova.virt.libvirt.driver [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] No VIF found with MAC fa:16:3e:5a:cb:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.532 2 INFO nova.virt.libvirt.driver [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Using config drive
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.550 2 DEBUG nova.storage.rbd_utils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] rbd image fd5669ba-0261-423e-8586-66c91ff570a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.580 2 DEBUG nova.objects.instance [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid fd5669ba-0261-423e-8586-66c91ff570a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.648 2 DEBUG nova.objects.instance [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lazy-loading 'keypairs' on Instance uuid fd5669ba-0261-423e-8586-66c91ff570a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.863 2 DEBUG nova.compute.manager [req-23bfcc80-a3d4-41f6-9597-05363c09d33e req-24537572-4b7e-408b-9459-80ce38461911 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Received event network-changed-7957d7c0-7f92-4e55-b573-985301bbdbbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.864 2 DEBUG nova.compute.manager [req-23bfcc80-a3d4-41f6-9597-05363c09d33e req-24537572-4b7e-408b-9459-80ce38461911 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Refreshing instance network info cache due to event network-changed-7957d7c0-7f92-4e55-b573-985301bbdbbe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.865 2 DEBUG oslo_concurrency.lockutils [req-23bfcc80-a3d4-41f6-9597-05363c09d33e req-24537572-4b7e-408b-9459-80ce38461911 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-51e84050-59be-4f23-b78f-18bc2d3e83fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.865 2 DEBUG oslo_concurrency.lockutils [req-23bfcc80-a3d4-41f6-9597-05363c09d33e req-24537572-4b7e-408b-9459-80ce38461911 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-51e84050-59be-4f23-b78f-18bc2d3e83fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:21:11 compute-0 nova_compute[259627]: 2025-10-14 09:21:11.866 2 DEBUG nova.network.neutron [req-23bfcc80-a3d4-41f6-9597-05363c09d33e req-24537572-4b7e-408b-9459-80ce38461911 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Refreshing network info cache for port 7957d7c0-7f92-4e55-b573-985301bbdbbe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:21:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 288 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.6 MiB/s wr, 180 op/s
Oct 14 09:21:12 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3610381924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:21:12 compute-0 nova_compute[259627]: 2025-10-14 09:21:12.609 2 INFO nova.virt.libvirt.driver [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Creating config drive at /var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4/disk.config
Oct 14 09:21:12 compute-0 nova_compute[259627]: 2025-10-14 09:21:12.621 2 DEBUG oslo_concurrency.processutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdng9jgg1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:12 compute-0 nova_compute[259627]: 2025-10-14 09:21:12.802 2 DEBUG oslo_concurrency.processutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdng9jgg1" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:12 compute-0 nova_compute[259627]: 2025-10-14 09:21:12.834 2 DEBUG nova.storage.rbd_utils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] rbd image fd5669ba-0261-423e-8586-66c91ff570a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:21:12 compute-0 nova_compute[259627]: 2025-10-14 09:21:12.838 2 DEBUG oslo_concurrency.processutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4/disk.config fd5669ba-0261-423e-8586-66c91ff570a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:12.942148) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433672942183, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 545, "num_deletes": 257, "total_data_size": 524869, "memory_usage": 536600, "flush_reason": "Manual Compaction"}
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433672947397, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 519955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42510, "largest_seqno": 43054, "table_properties": {"data_size": 516893, "index_size": 1034, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7195, "raw_average_key_size": 18, "raw_value_size": 510608, "raw_average_value_size": 1340, "num_data_blocks": 45, "num_entries": 381, "num_filter_entries": 381, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760433646, "oldest_key_time": 1760433646, "file_creation_time": 1760433672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 5314 microseconds, and 2397 cpu microseconds.
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:12.947461) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 519955 bytes OK
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:12.947481) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:12.949441) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:12.949457) EVENT_LOG_v1 {"time_micros": 1760433672949452, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:12.949476) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 521733, prev total WAL file size 521733, number of live WAL files 2.
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:12.950009) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353037' seq:72057594037927935, type:22 .. '6C6F676D0031373539' seq:0, type:0; will stop at (end)
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(507KB)], [95(10119KB)]
Oct 14 09:21:12 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433672950044, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 10881844, "oldest_snapshot_seqno": -1}
Oct 14 09:21:13 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6505 keys, 10755184 bytes, temperature: kUnknown
Oct 14 09:21:13 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433673003649, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 10755184, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10708212, "index_size": 29582, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 168730, "raw_average_key_size": 25, "raw_value_size": 10588143, "raw_average_value_size": 1627, "num_data_blocks": 1167, "num_entries": 6505, "num_filter_entries": 6505, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760433672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:21:13 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:21:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:13.003926) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10755184 bytes
Oct 14 09:21:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:13.006975) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.6 rd, 200.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.9 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(41.6) write-amplify(20.7) OK, records in: 7034, records dropped: 529 output_compression: NoCompression
Oct 14 09:21:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:13.006996) EVENT_LOG_v1 {"time_micros": 1760433673006987, "job": 56, "event": "compaction_finished", "compaction_time_micros": 53714, "compaction_time_cpu_micros": 24028, "output_level": 6, "num_output_files": 1, "total_output_size": 10755184, "num_input_records": 7034, "num_output_records": 6505, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:21:13 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:21:13 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433673007221, "job": 56, "event": "table_file_deletion", "file_number": 97}
Oct 14 09:21:13 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:21:13 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433673009358, "job": 56, "event": "table_file_deletion", "file_number": 95}
Oct 14 09:21:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:12.949924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:21:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:13.009423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:21:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:13.009428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:21:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:13.009430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:21:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:13.009432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:21:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:13.009434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:21:13 compute-0 nova_compute[259627]: 2025-10-14 09:21:13.068 2 DEBUG oslo_concurrency.processutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4/disk.config fd5669ba-0261-423e-8586-66c91ff570a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:13 compute-0 nova_compute[259627]: 2025-10-14 09:21:13.070 2 INFO nova.virt.libvirt.driver [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Deleting local config drive /var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4/disk.config because it was imported into RBD.
Oct 14 09:21:13 compute-0 kernel: tap8a93d82c-2a: entered promiscuous mode
Oct 14 09:21:13 compute-0 NetworkManager[44885]: <info>  [1760433673.1372] manager: (tap8a93d82c-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/501)
Oct 14 09:21:13 compute-0 ovn_controller[152662]: 2025-10-14T09:21:13Z|01224|binding|INFO|Claiming lport 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f for this chassis.
Oct 14 09:21:13 compute-0 nova_compute[259627]: 2025-10-14 09:21:13.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:13 compute-0 ovn_controller[152662]: 2025-10-14T09:21:13Z|01225|binding|INFO|8a93d82c-2ad9-4fb9-8867-f4d2cdac487f: Claiming fa:16:3e:5a:cb:1b 10.100.0.8
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.159 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:cb:1b 10.100.0.8'], port_security=['fa:16:3e:5a:cb:1b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fd5669ba-0261-423e-8586-66c91ff570a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c11d5e6-13c9-49c7-982d-8d1198ac7dea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b859f880079e4e6db96cdef422402fa1', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'fb7ff285-1f24-4ca1-a6b2-fb0966d27f7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1272bcfa-9334-4a66-b4ee-2da7a182025f, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.162 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f in datapath 6c11d5e6-13c9-49c7-982d-8d1198ac7dea bound to our chassis
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.166 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c11d5e6-13c9-49c7-982d-8d1198ac7dea
Oct 14 09:21:13 compute-0 ovn_controller[152662]: 2025-10-14T09:21:13Z|01226|binding|INFO|Setting lport 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f ovn-installed in OVS
Oct 14 09:21:13 compute-0 ovn_controller[152662]: 2025-10-14T09:21:13Z|01227|binding|INFO|Setting lport 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f up in Southbound
Oct 14 09:21:13 compute-0 nova_compute[259627]: 2025-10-14 09:21:13.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.186 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[76db6af2-a4ce-4cc6-98e4-bc239075f95a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.187 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c11d5e6-11 in ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:21:13 compute-0 nova_compute[259627]: 2025-10-14 09:21:13.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:13 compute-0 systemd-machined[214636]: New machine qemu-147-instance-00000072.
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.190 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c11d5e6-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.190 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ff589a-5c9d-4d69-b282-30bda0d4fb9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.194 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8109af-7cc1-45b8-bb21-dfa4c6d00427]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:13 compute-0 systemd-udevd[376072]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:21:13 compute-0 systemd[1]: Started Virtual Machine qemu-147-instance-00000072.
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.207 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[73a7e7f8-ebf3-4e50-98a6-bd264f5d8c30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:13 compute-0 NetworkManager[44885]: <info>  [1760433673.2148] device (tap8a93d82c-2a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:21:13 compute-0 NetworkManager[44885]: <info>  [1760433673.2159] device (tap8a93d82c-2a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:21:13 compute-0 ceph-mon[74249]: pgmap v2028: 305 pgs: 305 active+clean; 288 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.6 MiB/s wr, 180 op/s
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.229 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e44583b5-54dd-47f6-a5b8-de38928ce517]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.281 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[08eb20ab-1612-4a04-a26d-078cfb63302c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:13 compute-0 NetworkManager[44885]: <info>  [1760433673.2916] manager: (tap6c11d5e6-10): new Veth device (/org/freedesktop/NetworkManager/Devices/502)
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.290 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[918f9a36-e2b3-452b-9d02-8b5217944fe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:13 compute-0 systemd-udevd[376076]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.332 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[87df84e8-e2e6-43b1-a4e1-3dfcc92ab743]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.335 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[21d3fae5-56c2-4d8d-a40f-ccda24d169c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:13 compute-0 NetworkManager[44885]: <info>  [1760433673.3629] device (tap6c11d5e6-10): carrier: link connected
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.374 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[31bc688e-7e3d-4f93-9130-13af15eb27a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.397 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4df5af8f-3307-495f-a98d-156ce9b2c9c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c11d5e6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:99:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 351], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745212, 'reachable_time': 21237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376104, 'error': None, 'target': 'ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.414 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d419ccad-6214-4393-8b87-5bee38e83026]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:9900'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 745212, 'tstamp': 745212}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376105, 'error': None, 'target': 'ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.435 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[863c1218-1071-4f3b-b8c7-61cdbab18a55]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c11d5e6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:99:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 351], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745212, 'reachable_time': 21237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 376106, 'error': None, 'target': 'ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.468 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[46237378-fe36-49e9-9041-e35222a05343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.557 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3b09493d-4b6c-4697-b211-d9aad2df68e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.559 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c11d5e6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.559 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.560 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c11d5e6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:13 compute-0 kernel: tap6c11d5e6-10: entered promiscuous mode
Oct 14 09:21:13 compute-0 NetworkManager[44885]: <info>  [1760433673.6148] manager: (tap6c11d5e6-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/503)
Oct 14 09:21:13 compute-0 nova_compute[259627]: 2025-10-14 09:21:13.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.619 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c11d5e6-10, col_values=(('external_ids', {'iface-id': 'b7b18d0e-c023-45b7-896e-bf4c2ce15855'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:13 compute-0 nova_compute[259627]: 2025-10-14 09:21:13.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:13 compute-0 ovn_controller[152662]: 2025-10-14T09:21:13Z|01228|binding|INFO|Releasing lport b7b18d0e-c023-45b7-896e-bf4c2ce15855 from this chassis (sb_readonly=0)
Oct 14 09:21:13 compute-0 nova_compute[259627]: 2025-10-14 09:21:13.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.624 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c11d5e6-13c9-49c7-982d-8d1198ac7dea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c11d5e6-13c9-49c7-982d-8d1198ac7dea.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.625 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[020bbadb-dd50-4326-b869-7d1ee8f189ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.626 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-6c11d5e6-13c9-49c7-982d-8d1198ac7dea
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/6c11d5e6-13c9-49c7-982d-8d1198ac7dea.pid.haproxy
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 6c11d5e6-13c9-49c7-982d-8d1198ac7dea
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:21:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:13.627 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea', 'env', 'PROCESS_TAG=haproxy-6c11d5e6-13c9-49c7-982d-8d1198ac7dea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c11d5e6-13c9-49c7-982d-8d1198ac7dea.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:21:13 compute-0 nova_compute[259627]: 2025-10-14 09:21:13.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:13 compute-0 nova_compute[259627]: 2025-10-14 09:21:13.938 2 DEBUG nova.compute.manager [req-546ff1e2-9f0b-4ff2-b566-b00e3b6fe537 req-29d7e527-5669-41a9-b1ae-cf6609870939 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received event network-vif-plugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:21:13 compute-0 nova_compute[259627]: 2025-10-14 09:21:13.938 2 DEBUG oslo_concurrency.lockutils [req-546ff1e2-9f0b-4ff2-b566-b00e3b6fe537 req-29d7e527-5669-41a9-b1ae-cf6609870939 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:13 compute-0 nova_compute[259627]: 2025-10-14 09:21:13.939 2 DEBUG oslo_concurrency.lockutils [req-546ff1e2-9f0b-4ff2-b566-b00e3b6fe537 req-29d7e527-5669-41a9-b1ae-cf6609870939 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:13 compute-0 nova_compute[259627]: 2025-10-14 09:21:13.939 2 DEBUG oslo_concurrency.lockutils [req-546ff1e2-9f0b-4ff2-b566-b00e3b6fe537 req-29d7e527-5669-41a9-b1ae-cf6609870939 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:13 compute-0 nova_compute[259627]: 2025-10-14 09:21:13.939 2 DEBUG nova.compute.manager [req-546ff1e2-9f0b-4ff2-b566-b00e3b6fe537 req-29d7e527-5669-41a9-b1ae-cf6609870939 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Processing event network-vif-plugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:21:13 compute-0 podman[376180]: 2025-10-14 09:21:13.98289812 +0000 UTC m=+0.055411047 container create 6eae5d1e450da82be3e407e1bf2642d01dc34790ea07068c7626d15e67a0357f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 09:21:14 compute-0 systemd[1]: Started libpod-conmon-6eae5d1e450da82be3e407e1bf2642d01dc34790ea07068c7626d15e67a0357f.scope.
Oct 14 09:21:14 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:21:14 compute-0 podman[376180]: 2025-10-14 09:21:13.953726801 +0000 UTC m=+0.026239768 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c95ee8a6c8706c9f469e44175197a952ad2dcad1309fc5908abec0234060d8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:21:14 compute-0 podman[376180]: 2025-10-14 09:21:14.067088815 +0000 UTC m=+0.139601762 container init 6eae5d1e450da82be3e407e1bf2642d01dc34790ea07068c7626d15e67a0357f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 09:21:14 compute-0 podman[376180]: 2025-10-14 09:21:14.074023046 +0000 UTC m=+0.146535973 container start 6eae5d1e450da82be3e407e1bf2642d01dc34790ea07068c7626d15e67a0357f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 09:21:14 compute-0 neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea[376196]: [NOTICE]   (376200) : New worker (376202) forked
Oct 14 09:21:14 compute-0 neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea[376196]: [NOTICE]   (376200) : Loading success.
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 288 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.1 MiB/s wr, 161 op/s
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.230 2 DEBUG nova.compute.manager [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.230 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433674.229521, fd5669ba-0261-423e-8586-66c91ff570a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.231 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] VM Started (Lifecycle Event)
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.235 2 DEBUG nova.virt.libvirt.driver [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.239 2 INFO nova.virt.libvirt.driver [-] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Instance spawned successfully.
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.254 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.258 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.287 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.288 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433674.2304468, fd5669ba-0261-423e-8586-66c91ff570a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.288 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] VM Paused (Lifecycle Event)
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.305 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.309 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433674.233662, fd5669ba-0261-423e-8586-66c91ff570a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.309 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] VM Resumed (Lifecycle Event)
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.326 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.331 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.360 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.663 2 DEBUG nova.network.neutron [req-23bfcc80-a3d4-41f6-9597-05363c09d33e req-24537572-4b7e-408b-9459-80ce38461911 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Updated VIF entry in instance network info cache for port 7957d7c0-7f92-4e55-b573-985301bbdbbe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.664 2 DEBUG nova.network.neutron [req-23bfcc80-a3d4-41f6-9597-05363c09d33e req-24537572-4b7e-408b-9459-80ce38461911 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Updating instance_info_cache with network_info: [{"id": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "address": "fa:16:3e:8c:7c:44", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7957d7c0-7f", "ovs_interfaceid": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:21:14 compute-0 nova_compute[259627]: 2025-10-14 09:21:14.699 2 DEBUG oslo_concurrency.lockutils [req-23bfcc80-a3d4-41f6-9597-05363c09d33e req-24537572-4b7e-408b-9459-80ce38461911 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-51e84050-59be-4f23-b78f-18bc2d3e83fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:21:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Oct 14 09:21:15 compute-0 ceph-mon[74249]: pgmap v2029: 305 pgs: 305 active+clean; 288 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.1 MiB/s wr, 161 op/s
Oct 14 09:21:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Oct 14 09:21:15 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Oct 14 09:21:15 compute-0 nova_compute[259627]: 2025-10-14 09:21:15.658 2 DEBUG nova.compute.manager [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:21:15 compute-0 nova_compute[259627]: 2025-10-14 09:21:15.714 2 DEBUG oslo_concurrency.lockutils [None req-ce9cf128-fea0-45fb-a334-f428d9296732 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 10.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:16 compute-0 nova_compute[259627]: 2025-10-14 09:21:16.031 2 DEBUG nova.compute.manager [req-5d47a706-d143-45e7-b2ea-0511a6bb60f2 req-7531d1d4-3d37-4dab-8447-b7d5b9e56a97 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received event network-vif-plugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:21:16 compute-0 nova_compute[259627]: 2025-10-14 09:21:16.032 2 DEBUG oslo_concurrency.lockutils [req-5d47a706-d143-45e7-b2ea-0511a6bb60f2 req-7531d1d4-3d37-4dab-8447-b7d5b9e56a97 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:16 compute-0 nova_compute[259627]: 2025-10-14 09:21:16.032 2 DEBUG oslo_concurrency.lockutils [req-5d47a706-d143-45e7-b2ea-0511a6bb60f2 req-7531d1d4-3d37-4dab-8447-b7d5b9e56a97 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:16 compute-0 nova_compute[259627]: 2025-10-14 09:21:16.032 2 DEBUG oslo_concurrency.lockutils [req-5d47a706-d143-45e7-b2ea-0511a6bb60f2 req-7531d1d4-3d37-4dab-8447-b7d5b9e56a97 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:16 compute-0 nova_compute[259627]: 2025-10-14 09:21:16.033 2 DEBUG nova.compute.manager [req-5d47a706-d143-45e7-b2ea-0511a6bb60f2 req-7531d1d4-3d37-4dab-8447-b7d5b9e56a97 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] No waiting events found dispatching network-vif-plugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:21:16 compute-0 nova_compute[259627]: 2025-10-14 09:21:16.033 2 WARNING nova.compute.manager [req-5d47a706-d143-45e7-b2ea-0511a6bb60f2 req-7531d1d4-3d37-4dab-8447-b7d5b9e56a97 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received unexpected event network-vif-plugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f for instance with vm_state active and task_state None.
Oct 14 09:21:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 277 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 4.7 MiB/s wr, 263 op/s
Oct 14 09:21:16 compute-0 ceph-mon[74249]: osdmap e279: 3 total, 3 up, 3 in
Oct 14 09:21:16 compute-0 nova_compute[259627]: 2025-10-14 09:21:16.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:17 compute-0 ceph-mon[74249]: pgmap v2031: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 277 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 4.7 MiB/s wr, 263 op/s
Oct 14 09:21:17 compute-0 sshd-session[375631]: Invalid user debian from 188.150.249.96 port 33160
Oct 14 09:21:17 compute-0 sshd-session[375631]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:21:17 compute-0 sshd-session[375631]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96
Oct 14 09:21:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:21:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 277 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 4.7 MiB/s wr, 263 op/s
Oct 14 09:21:19 compute-0 nova_compute[259627]: 2025-10-14 09:21:19.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:19 compute-0 ceph-mon[74249]: pgmap v2032: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 277 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 4.7 MiB/s wr, 263 op/s
Oct 14 09:21:19 compute-0 sshd-session[375631]: Failed password for invalid user debian from 188.150.249.96 port 33160 ssh2
Oct 14 09:21:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 264 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 4.9 MiB/s wr, 253 op/s
Oct 14 09:21:20 compute-0 sshd-session[375631]: Connection closed by invalid user debian 188.150.249.96 port 33160 [preauth]
Oct 14 09:21:21 compute-0 ceph-mon[74249]: pgmap v2033: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 264 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 4.9 MiB/s wr, 253 op/s
Oct 14 09:21:21 compute-0 nova_compute[259627]: 2025-10-14 09:21:21.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:21 compute-0 ovn_controller[152662]: 2025-10-14T09:21:21Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:7c:44 10.100.0.3
Oct 14 09:21:21 compute-0 ovn_controller[152662]: 2025-10-14T09:21:21Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:7c:44 10.100.0.3
Oct 14 09:21:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 256 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.7 MiB/s wr, 216 op/s
Oct 14 09:21:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:21:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Oct 14 09:21:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Oct 14 09:21:22 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:22.953701) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433682953730, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 356, "num_deletes": 251, "total_data_size": 185403, "memory_usage": 191728, "flush_reason": "Manual Compaction"}
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433682956541, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 183176, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43055, "largest_seqno": 43410, "table_properties": {"data_size": 180961, "index_size": 379, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 6135, "raw_average_key_size": 20, "raw_value_size": 176468, "raw_average_value_size": 590, "num_data_blocks": 17, "num_entries": 299, "num_filter_entries": 299, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760433673, "oldest_key_time": 1760433673, "file_creation_time": 1760433682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 2864 microseconds, and 1035 cpu microseconds.
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:22.956564) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 183176 bytes OK
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:22.956582) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:22.960387) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:22.960400) EVENT_LOG_v1 {"time_micros": 1760433682960396, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:22.960412) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 183020, prev total WAL file size 183020, number of live WAL files 2.
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:22.960821) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353030' seq:72057594037927935, type:22 .. '6D6772737461740031373531' seq:0, type:0; will stop at (end)
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(178KB)], [98(10MB)]
Oct 14 09:21:22 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433682960888, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 10938360, "oldest_snapshot_seqno": -1}
Oct 14 09:21:23 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6291 keys, 7649389 bytes, temperature: kUnknown
Oct 14 09:21:23 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433683003433, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 7649389, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7608627, "index_size": 23948, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 164478, "raw_average_key_size": 26, "raw_value_size": 7496937, "raw_average_value_size": 1191, "num_data_blocks": 932, "num_entries": 6291, "num_filter_entries": 6291, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760433682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:21:23 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:21:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:23.003681) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 7649389 bytes
Oct 14 09:21:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:23.004616) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 256.6 rd, 179.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.3 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(101.5) write-amplify(41.8) OK, records in: 6804, records dropped: 513 output_compression: NoCompression
Oct 14 09:21:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:23.004632) EVENT_LOG_v1 {"time_micros": 1760433683004625, "job": 58, "event": "compaction_finished", "compaction_time_micros": 42625, "compaction_time_cpu_micros": 21855, "output_level": 6, "num_output_files": 1, "total_output_size": 7649389, "num_input_records": 6804, "num_output_records": 6291, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:21:23 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:21:23 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433683004774, "job": 58, "event": "table_file_deletion", "file_number": 100}
Oct 14 09:21:23 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:21:23 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433683006497, "job": 58, "event": "table_file_deletion", "file_number": 98}
Oct 14 09:21:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:22.960742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:21:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:23.006558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:21:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:23.006563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:21:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:23.006565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:21:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:23.006567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:21:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:21:23.006569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:21:23 compute-0 ceph-mon[74249]: pgmap v2034: 305 pgs: 305 active+clean; 256 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.7 MiB/s wr, 216 op/s
Oct 14 09:21:23 compute-0 ceph-mon[74249]: osdmap e280: 3 total, 3 up, 3 in
Oct 14 09:21:24 compute-0 nova_compute[259627]: 2025-10-14 09:21:24.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 256 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 194 op/s
Oct 14 09:21:25 compute-0 ceph-mon[74249]: pgmap v2036: 305 pgs: 305 active+clean; 256 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 194 op/s
Oct 14 09:21:26 compute-0 ovn_controller[152662]: 2025-10-14T09:21:26Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5a:cb:1b 10.100.0.8
Oct 14 09:21:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 279 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 138 op/s
Oct 14 09:21:26 compute-0 nova_compute[259627]: 2025-10-14 09:21:26.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:27 compute-0 ceph-mon[74249]: pgmap v2037: 305 pgs: 305 active+clean; 279 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 138 op/s
Oct 14 09:21:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:21:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 279 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 138 op/s
Oct 14 09:21:28 compute-0 nova_compute[259627]: 2025-10-14 09:21:28.802 2 INFO nova.compute.manager [None req-b8f2d123-b581-4986-b564-716be5cc66eb 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Get console output
Oct 14 09:21:28 compute-0 nova_compute[259627]: 2025-10-14 09:21:28.811 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.191 2 DEBUG oslo_concurrency.lockutils [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "51e84050-59be-4f23-b78f-18bc2d3e83fc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.192 2 DEBUG oslo_concurrency.lockutils [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "51e84050-59be-4f23-b78f-18bc2d3e83fc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.193 2 DEBUG oslo_concurrency.lockutils [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "51e84050-59be-4f23-b78f-18bc2d3e83fc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.194 2 DEBUG oslo_concurrency.lockutils [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "51e84050-59be-4f23-b78f-18bc2d3e83fc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.194 2 DEBUG oslo_concurrency.lockutils [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "51e84050-59be-4f23-b78f-18bc2d3e83fc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.196 2 INFO nova.compute.manager [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Terminating instance
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.198 2 DEBUG nova.compute.manager [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:21:29 compute-0 kernel: tap7957d7c0-7f (unregistering): left promiscuous mode
Oct 14 09:21:29 compute-0 NetworkManager[44885]: <info>  [1760433689.2518] device (tap7957d7c0-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:29 compute-0 ovn_controller[152662]: 2025-10-14T09:21:29Z|01229|binding|INFO|Releasing lport 7957d7c0-7f92-4e55-b573-985301bbdbbe from this chassis (sb_readonly=0)
Oct 14 09:21:29 compute-0 ovn_controller[152662]: 2025-10-14T09:21:29Z|01230|binding|INFO|Setting lport 7957d7c0-7f92-4e55-b573-985301bbdbbe down in Southbound
Oct 14 09:21:29 compute-0 ovn_controller[152662]: 2025-10-14T09:21:29Z|01231|binding|INFO|Removing iface tap7957d7c0-7f ovn-installed in OVS
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:29.287 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:7c:44 10.100.0.3'], port_security=['fa:16:3e:8c:7c:44 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '51e84050-59be-4f23-b78f-18bc2d3e83fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61c1acdc-e817-4d26-8900-47d35332175a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd92719f8-d4b1-482a-bb83-986dc81811df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfec6e64-6ad9-4ee1-953d-480113ac60ef, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7957d7c0-7f92-4e55-b573-985301bbdbbe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:21:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:29.289 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7957d7c0-7f92-4e55-b573-985301bbdbbe in datapath 61c1acdc-e817-4d26-8900-47d35332175a unbound from our chassis
Oct 14 09:21:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:29.291 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61c1acdc-e817-4d26-8900-47d35332175a
Oct 14 09:21:29 compute-0 ceph-mon[74249]: pgmap v2038: 305 pgs: 305 active+clean; 279 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 138 op/s
Oct 14 09:21:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:29.312 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6a13d2a2-0a2e-4861-91f5-adbcad970969]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:29 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000074.scope: Deactivated successfully.
Oct 14 09:21:29 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000074.scope: Consumed 12.595s CPU time.
Oct 14 09:21:29 compute-0 systemd-machined[214636]: Machine qemu-146-instance-00000074 terminated.
Oct 14 09:21:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:29.354 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a02a13-a72d-4d83-bc28-cee48b734c15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:29.357 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f2fe87a9-fa19-4c29-8e43-67c44c3bbd06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:29 compute-0 podman[376215]: 2025-10-14 09:21:29.393210817 +0000 UTC m=+0.107505291 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 14 09:21:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:29.397 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[dca045eb-8e47-427d-b9a7-89d8cbd0cc94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:29.419 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad3305e-f33f-47a3-a8ba-5d87d96bed3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61c1acdc-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:79:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 347], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741459, 'reachable_time': 28535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376261, 'error': None, 'target': 'ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:29 compute-0 podman[376214]: 2025-10-14 09:21:29.427719267 +0000 UTC m=+0.129162364 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, tcib_managed=true)
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.442 2 INFO nova.virt.libvirt.driver [-] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Instance destroyed successfully.
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.443 2 DEBUG nova.objects.instance [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid 51e84050-59be-4f23-b78f-18bc2d3e83fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:21:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:29.442 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b8cec7-3131-40c8-9238-5d91d7449442]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap61c1acdc-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 741471, 'tstamp': 741471}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376266, 'error': None, 'target': 'ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap61c1acdc-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 741474, 'tstamp': 741474}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376266, 'error': None, 'target': 'ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:29.444 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61c1acdc-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:29.450 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61c1acdc-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:29.450 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:21:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:29.450 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61c1acdc-e0, col_values=(('external_ids', {'iface-id': '6f7bf4a5-58bf-4ec8-a189-0a9624df5601'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:29.451 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.456 2 DEBUG nova.virt.libvirt.vif [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:21:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-892839943',display_name='tempest-TestNetworkBasicOps-server-892839943',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-892839943',id=116,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPU9xVRNAF650NaFJ2rdO+hE0GQMvrdRo09qcAPpFjehwF2GQim0973u5oTNRL9wk6YXV2ZXz62pkj/1H8t5tw9BZg024xDTo5MHsD4N7mhy1HoPBr22QSmOlaoBMfvLww==',key_name='tempest-TestNetworkBasicOps-803603811',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:21:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-xw3rgozp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:21:09Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=51e84050-59be-4f23-b78f-18bc2d3e83fc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "address": "fa:16:3e:8c:7c:44", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7957d7c0-7f", "ovs_interfaceid": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.457 2 DEBUG nova.network.os_vif_util [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "address": "fa:16:3e:8c:7c:44", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7957d7c0-7f", "ovs_interfaceid": "7957d7c0-7f92-4e55-b573-985301bbdbbe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.457 2 DEBUG nova.network.os_vif_util [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8c:7c:44,bridge_name='br-int',has_traffic_filtering=True,id=7957d7c0-7f92-4e55-b573-985301bbdbbe,network=Network(61c1acdc-e817-4d26-8900-47d35332175a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7957d7c0-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.458 2 DEBUG os_vif [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:7c:44,bridge_name='br-int',has_traffic_filtering=True,id=7957d7c0-7f92-4e55-b573-985301bbdbbe,network=Network(61c1acdc-e817-4d26-8900-47d35332175a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7957d7c0-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.460 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7957d7c0-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.465 2 INFO os_vif [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:7c:44,bridge_name='br-int',has_traffic_filtering=True,id=7957d7c0-7f92-4e55-b573-985301bbdbbe,network=Network(61c1acdc-e817-4d26-8900-47d35332175a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7957d7c0-7f')
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.699 2 DEBUG nova.compute.manager [req-dc0a7846-77dc-41bb-89ed-f66a78009fbd req-87836bc7-3051-4e4e-beb9-8b08686d4d50 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Received event network-vif-unplugged-7957d7c0-7f92-4e55-b573-985301bbdbbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.700 2 DEBUG oslo_concurrency.lockutils [req-dc0a7846-77dc-41bb-89ed-f66a78009fbd req-87836bc7-3051-4e4e-beb9-8b08686d4d50 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "51e84050-59be-4f23-b78f-18bc2d3e83fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.700 2 DEBUG oslo_concurrency.lockutils [req-dc0a7846-77dc-41bb-89ed-f66a78009fbd req-87836bc7-3051-4e4e-beb9-8b08686d4d50 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "51e84050-59be-4f23-b78f-18bc2d3e83fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.701 2 DEBUG oslo_concurrency.lockutils [req-dc0a7846-77dc-41bb-89ed-f66a78009fbd req-87836bc7-3051-4e4e-beb9-8b08686d4d50 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "51e84050-59be-4f23-b78f-18bc2d3e83fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.701 2 DEBUG nova.compute.manager [req-dc0a7846-77dc-41bb-89ed-f66a78009fbd req-87836bc7-3051-4e4e-beb9-8b08686d4d50 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] No waiting events found dispatching network-vif-unplugged-7957d7c0-7f92-4e55-b573-985301bbdbbe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.702 2 DEBUG nova.compute.manager [req-dc0a7846-77dc-41bb-89ed-f66a78009fbd req-87836bc7-3051-4e4e-beb9-8b08686d4d50 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Received event network-vif-unplugged-7957d7c0-7f92-4e55-b573-985301bbdbbe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.842 2 INFO nova.virt.libvirt.driver [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Deleting instance files /var/lib/nova/instances/51e84050-59be-4f23-b78f-18bc2d3e83fc_del
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.843 2 INFO nova.virt.libvirt.driver [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Deletion of /var/lib/nova/instances/51e84050-59be-4f23-b78f-18bc2d3e83fc_del complete
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.900 2 INFO nova.compute.manager [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.901 2 DEBUG oslo.service.loopingcall [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.902 2 DEBUG nova.compute.manager [-] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:21:29 compute-0 nova_compute[259627]: 2025-10-14 09:21:29.902 2 DEBUG nova.network.neutron [-] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:21:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 246 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.3 MiB/s wr, 162 op/s
Oct 14 09:21:30 compute-0 sshd-session[376212]: Invalid user pi from 188.150.249.96 port 36124
Oct 14 09:21:31 compute-0 nova_compute[259627]: 2025-10-14 09:21:31.204 2 DEBUG nova.network.neutron [-] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:21:31 compute-0 nova_compute[259627]: 2025-10-14 09:21:31.230 2 INFO nova.compute.manager [-] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Took 1.33 seconds to deallocate network for instance.
Oct 14 09:21:31 compute-0 nova_compute[259627]: 2025-10-14 09:21:31.288 2 DEBUG oslo_concurrency.lockutils [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:31 compute-0 nova_compute[259627]: 2025-10-14 09:21:31.289 2 DEBUG oslo_concurrency.lockutils [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:31 compute-0 ceph-mon[74249]: pgmap v2039: 305 pgs: 305 active+clean; 246 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.3 MiB/s wr, 162 op/s
Oct 14 09:21:31 compute-0 nova_compute[259627]: 2025-10-14 09:21:31.322 2 DEBUG nova.compute.manager [req-e6712670-ae38-462d-a335-ad38d33e7956 req-72653fc9-ae75-4f44-8bfa-2d64328878ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Received event network-vif-deleted-7957d7c0-7f92-4e55-b573-985301bbdbbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:21:31 compute-0 nova_compute[259627]: 2025-10-14 09:21:31.383 2 DEBUG oslo_concurrency.processutils [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:31 compute-0 sshd-session[376212]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:21:31 compute-0 sshd-session[376212]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96
Oct 14 09:21:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:21:31 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3241688225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:21:31 compute-0 nova_compute[259627]: 2025-10-14 09:21:31.809 2 DEBUG nova.compute.manager [req-d773d881-ea84-4887-9f01-464d6e7d100e req-a7bb2b1e-5a5b-4b61-94e8-7f5b4d6f316d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Received event network-vif-plugged-7957d7c0-7f92-4e55-b573-985301bbdbbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:21:31 compute-0 nova_compute[259627]: 2025-10-14 09:21:31.810 2 DEBUG oslo_concurrency.lockutils [req-d773d881-ea84-4887-9f01-464d6e7d100e req-a7bb2b1e-5a5b-4b61-94e8-7f5b4d6f316d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "51e84050-59be-4f23-b78f-18bc2d3e83fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:31 compute-0 nova_compute[259627]: 2025-10-14 09:21:31.811 2 DEBUG oslo_concurrency.lockutils [req-d773d881-ea84-4887-9f01-464d6e7d100e req-a7bb2b1e-5a5b-4b61-94e8-7f5b4d6f316d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "51e84050-59be-4f23-b78f-18bc2d3e83fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:31 compute-0 nova_compute[259627]: 2025-10-14 09:21:31.812 2 DEBUG oslo_concurrency.lockutils [req-d773d881-ea84-4887-9f01-464d6e7d100e req-a7bb2b1e-5a5b-4b61-94e8-7f5b4d6f316d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "51e84050-59be-4f23-b78f-18bc2d3e83fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:31 compute-0 nova_compute[259627]: 2025-10-14 09:21:31.812 2 DEBUG nova.compute.manager [req-d773d881-ea84-4887-9f01-464d6e7d100e req-a7bb2b1e-5a5b-4b61-94e8-7f5b4d6f316d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] No waiting events found dispatching network-vif-plugged-7957d7c0-7f92-4e55-b573-985301bbdbbe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:21:31 compute-0 nova_compute[259627]: 2025-10-14 09:21:31.813 2 WARNING nova.compute.manager [req-d773d881-ea84-4887-9f01-464d6e7d100e req-a7bb2b1e-5a5b-4b61-94e8-7f5b4d6f316d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Received unexpected event network-vif-plugged-7957d7c0-7f92-4e55-b573-985301bbdbbe for instance with vm_state deleted and task_state None.
Oct 14 09:21:31 compute-0 nova_compute[259627]: 2025-10-14 09:21:31.816 2 DEBUG oslo_concurrency.processutils [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:31 compute-0 nova_compute[259627]: 2025-10-14 09:21:31.825 2 DEBUG nova.compute.provider_tree [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:21:31 compute-0 nova_compute[259627]: 2025-10-14 09:21:31.844 2 DEBUG nova.scheduler.client.report [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:21:31 compute-0 nova_compute[259627]: 2025-10-14 09:21:31.882 2 DEBUG oslo_concurrency.lockutils [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:31 compute-0 nova_compute[259627]: 2025-10-14 09:21:31.911 2 INFO nova.scheduler.client.report [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance 51e84050-59be-4f23-b78f-18bc2d3e83fc
Oct 14 09:21:32 compute-0 nova_compute[259627]: 2025-10-14 09:21:32.026 2 DEBUG oslo_concurrency.lockutils [None req-bbec763b-133f-4f80-bfa4-32bdb37430b6 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "51e84050-59be-4f23-b78f-18bc2d3e83fc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 200 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 846 KiB/s rd, 1.0 MiB/s wr, 129 op/s
Oct 14 09:21:32 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3241688225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:21:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:21:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:21:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:21:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:21:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:21:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:21:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:21:32
Oct 14 09:21:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:21:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:21:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['volumes', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log']
Oct 14 09:21:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:21:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:21:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:21:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:21:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:21:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:21:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:21:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:21:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:21:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:21:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:21:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.211 2 DEBUG oslo_concurrency.lockutils [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquiring lock "fd5669ba-0261-423e-8586-66c91ff570a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.212 2 DEBUG oslo_concurrency.lockutils [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.212 2 DEBUG oslo_concurrency.lockutils [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquiring lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.212 2 DEBUG oslo_concurrency.lockutils [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.213 2 DEBUG oslo_concurrency.lockutils [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.214 2 INFO nova.compute.manager [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Terminating instance
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.215 2 DEBUG nova.compute.manager [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:21:33 compute-0 kernel: tap8a93d82c-2a (unregistering): left promiscuous mode
Oct 14 09:21:33 compute-0 NetworkManager[44885]: <info>  [1760433693.2706] device (tap8a93d82c-2a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:33 compute-0 ovn_controller[152662]: 2025-10-14T09:21:33Z|01232|binding|INFO|Releasing lport 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f from this chassis (sb_readonly=0)
Oct 14 09:21:33 compute-0 ovn_controller[152662]: 2025-10-14T09:21:33Z|01233|binding|INFO|Setting lport 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f down in Southbound
Oct 14 09:21:33 compute-0 ovn_controller[152662]: 2025-10-14T09:21:33Z|01234|binding|INFO|Removing iface tap8a93d82c-2a ovn-installed in OVS
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:33.300 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:cb:1b 10.100.0.8'], port_security=['fa:16:3e:5a:cb:1b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fd5669ba-0261-423e-8586-66c91ff570a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c11d5e6-13c9-49c7-982d-8d1198ac7dea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b859f880079e4e6db96cdef422402fa1', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'fb7ff285-1f24-4ca1-a6b2-fb0966d27f7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1272bcfa-9334-4a66-b4ee-2da7a182025f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:21:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:33.301 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f in datapath 6c11d5e6-13c9-49c7-982d-8d1198ac7dea unbound from our chassis
Oct 14 09:21:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:33.302 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c11d5e6-13c9-49c7-982d-8d1198ac7dea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:21:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:33.304 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d5f094-60a4-4a54-a0fd-d552d19b3309]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:33.305 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea namespace which is not needed anymore
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:33 compute-0 ceph-mon[74249]: pgmap v2040: 305 pgs: 305 active+clean; 200 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 846 KiB/s rd, 1.0 MiB/s wr, 129 op/s
Oct 14 09:21:33 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000072.scope: Deactivated successfully.
Oct 14 09:21:33 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000072.scope: Consumed 13.518s CPU time.
Oct 14 09:21:33 compute-0 systemd-machined[214636]: Machine qemu-147-instance-00000072 terminated.
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.447 2 INFO nova.virt.libvirt.driver [-] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Instance destroyed successfully.
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.448 2 DEBUG nova.objects.instance [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lazy-loading 'resources' on Instance uuid fd5669ba-0261-423e-8586-66c91ff570a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:21:33 compute-0 neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea[376196]: [NOTICE]   (376200) : haproxy version is 2.8.14-c23fe91
Oct 14 09:21:33 compute-0 neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea[376196]: [NOTICE]   (376200) : path to executable is /usr/sbin/haproxy
Oct 14 09:21:33 compute-0 neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea[376196]: [WARNING]  (376200) : Exiting Master process...
Oct 14 09:21:33 compute-0 neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea[376196]: [WARNING]  (376200) : Exiting Master process...
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.463 2 DEBUG nova.virt.libvirt.vif [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:20:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1080027921',display_name='tempest-TestShelveInstance-server-1080027921',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1080027921',id=114,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDaFeG/xJmvGbKmYgn4dJf37Cqex3YsQYrFJ72iAZg+c2DsrdPgi+tOr4SRSonbIRwf/h+BLYvaqfFBXVZQ0pwUCpayjdgXBvqZr1W73e5QpjlvvzQksOoSh5mhqEQBaTw==',key_name='tempest-TestShelveInstance-1777944733',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:21:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b859f880079e4e6db96cdef422402fa1',ramdisk_id='',reservation_id='r-371aolc0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1721835966',owner_user_name='tempest-TestShelveInstance-1721835966-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:21:15Z,user_data=None,user_id='8d77c101777148edbee39ba308af8e60',uuid=fd5669ba-0261-423e-8586-66c91ff570a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.463 2 DEBUG nova.network.os_vif_util [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Converting VIF {"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:21:33 compute-0 neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea[376196]: [ALERT]    (376200) : Current worker (376202) exited with code 143 (Terminated)
Oct 14 09:21:33 compute-0 neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea[376196]: [WARNING]  (376200) : All workers exited. Exiting... (0)
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.465 2 DEBUG nova.network.os_vif_util [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f,network=Network(6c11d5e6-13c9-49c7-982d-8d1198ac7dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a93d82c-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.465 2 DEBUG os_vif [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f,network=Network(6c11d5e6-13c9-49c7-982d-8d1198ac7dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a93d82c-2a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:21:33 compute-0 systemd[1]: libpod-6eae5d1e450da82be3e407e1bf2642d01dc34790ea07068c7626d15e67a0357f.scope: Deactivated successfully.
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.468 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a93d82c-2a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:33 compute-0 conmon[376196]: conmon 6eae5d1e450da82be3e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6eae5d1e450da82be3e407e1bf2642d01dc34790ea07068c7626d15e67a0357f.scope/container/memory.events
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:33 compute-0 podman[376340]: 2025-10-14 09:21:33.475709266 +0000 UTC m=+0.058223476 container died 6eae5d1e450da82be3e407e1bf2642d01dc34790ea07068c7626d15e67a0357f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.477 2 INFO os_vif [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f,network=Network(6c11d5e6-13c9-49c7-982d-8d1198ac7dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a93d82c-2a')
Oct 14 09:21:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4c95ee8a6c8706c9f469e44175197a952ad2dcad1309fc5908abec0234060d8-merged.mount: Deactivated successfully.
Oct 14 09:21:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6eae5d1e450da82be3e407e1bf2642d01dc34790ea07068c7626d15e67a0357f-userdata-shm.mount: Deactivated successfully.
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.515 2 DEBUG nova.compute.manager [req-2edbd33d-6e3a-4d45-8b73-a4a19c00b31f req-6cb55ced-dbd7-4306-be9b-978d110696fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received event network-changed-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.516 2 DEBUG nova.compute.manager [req-2edbd33d-6e3a-4d45-8b73-a4a19c00b31f req-6cb55ced-dbd7-4306-be9b-978d110696fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Refreshing instance network info cache due to event network-changed-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.516 2 DEBUG oslo_concurrency.lockutils [req-2edbd33d-6e3a-4d45-8b73-a4a19c00b31f req-6cb55ced-dbd7-4306-be9b-978d110696fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.517 2 DEBUG oslo_concurrency.lockutils [req-2edbd33d-6e3a-4d45-8b73-a4a19c00b31f req-6cb55ced-dbd7-4306-be9b-978d110696fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.517 2 DEBUG nova.network.neutron [req-2edbd33d-6e3a-4d45-8b73-a4a19c00b31f req-6cb55ced-dbd7-4306-be9b-978d110696fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Refreshing network info cache for port 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:21:33 compute-0 podman[376340]: 2025-10-14 09:21:33.528237011 +0000 UTC m=+0.110751251 container cleanup 6eae5d1e450da82be3e407e1bf2642d01dc34790ea07068c7626d15e67a0357f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 09:21:33 compute-0 systemd[1]: libpod-conmon-6eae5d1e450da82be3e407e1bf2642d01dc34790ea07068c7626d15e67a0357f.scope: Deactivated successfully.
Oct 14 09:21:33 compute-0 podman[376395]: 2025-10-14 09:21:33.592355581 +0000 UTC m=+0.045418411 container remove 6eae5d1e450da82be3e407e1bf2642d01dc34790ea07068c7626d15e67a0357f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 09:21:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:33.600 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7acb2792-b7b2-4c3e-a03f-7a37e802eced]: (4, ('Tue Oct 14 09:21:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea (6eae5d1e450da82be3e407e1bf2642d01dc34790ea07068c7626d15e67a0357f)\n6eae5d1e450da82be3e407e1bf2642d01dc34790ea07068c7626d15e67a0357f\nTue Oct 14 09:21:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea (6eae5d1e450da82be3e407e1bf2642d01dc34790ea07068c7626d15e67a0357f)\n6eae5d1e450da82be3e407e1bf2642d01dc34790ea07068c7626d15e67a0357f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:33.602 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eb64030a-3de5-4772-bf48-10c860d78933]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:33.603 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c11d5e6-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:33 compute-0 kernel: tap6c11d5e6-10: left promiscuous mode
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:33.666 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[016eec91-5bfa-47a6-bac4-ad2275b3c8ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:33.690 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5a5fc430-02df-446b-a2a3-00f2552d246f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:33.692 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2454e754-74cb-403e-9aa9-1997cde594c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:33.715 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f03f7653-8505-4ad4-9fd8-03d764ee7d6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745202, 'reachable_time': 24276, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376412, 'error': None, 'target': 'ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:33.717 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c11d5e6-13c9-49c7-982d-8d1198ac7dea deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:21:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:33.718 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[14266f27-24c9-4595-9aa2-8badeae15d16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:33 compute-0 systemd[1]: run-netns-ovnmeta\x2d6c11d5e6\x2d13c9\x2d49c7\x2d982d\x2d8d1198ac7dea.mount: Deactivated successfully.
Oct 14 09:21:33 compute-0 sshd-session[376212]: Failed password for invalid user pi from 188.150.249.96 port 36124 ssh2
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.943 2 INFO nova.virt.libvirt.driver [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Deleting instance files /var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4_del
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.944 2 INFO nova.virt.libvirt.driver [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Deletion of /var/lib/nova/instances/fd5669ba-0261-423e-8586-66c91ff570a4_del complete
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.995 2 INFO nova.compute.manager [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.996 2 DEBUG oslo.service.loopingcall [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.996 2 DEBUG nova.compute.manager [-] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:21:33 compute-0 nova_compute[259627]: 2025-10-14 09:21:33.997 2 DEBUG nova.network.neutron [-] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:21:34 compute-0 nova_compute[259627]: 2025-10-14 09:21:34.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 200 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 755 KiB/s rd, 955 KiB/s wr, 116 op/s
Oct 14 09:21:34 compute-0 sshd-session[376212]: Connection closed by invalid user pi 188.150.249.96 port 36124 [preauth]
Oct 14 09:21:34 compute-0 nova_compute[259627]: 2025-10-14 09:21:34.905 2 DEBUG oslo_concurrency.lockutils [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "6a505551-bc3f-4254-966f-ca344358f8ac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:34 compute-0 nova_compute[259627]: 2025-10-14 09:21:34.906 2 DEBUG oslo_concurrency.lockutils [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "6a505551-bc3f-4254-966f-ca344358f8ac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:34 compute-0 nova_compute[259627]: 2025-10-14 09:21:34.906 2 DEBUG oslo_concurrency.lockutils [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "6a505551-bc3f-4254-966f-ca344358f8ac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:34 compute-0 nova_compute[259627]: 2025-10-14 09:21:34.907 2 DEBUG oslo_concurrency.lockutils [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "6a505551-bc3f-4254-966f-ca344358f8ac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:34 compute-0 nova_compute[259627]: 2025-10-14 09:21:34.907 2 DEBUG oslo_concurrency.lockutils [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "6a505551-bc3f-4254-966f-ca344358f8ac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:34 compute-0 nova_compute[259627]: 2025-10-14 09:21:34.908 2 INFO nova.compute.manager [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Terminating instance
Oct 14 09:21:34 compute-0 nova_compute[259627]: 2025-10-14 09:21:34.909 2 DEBUG nova.compute.manager [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:21:34 compute-0 kernel: tap8210d83b-b3 (unregistering): left promiscuous mode
Oct 14 09:21:34 compute-0 NetworkManager[44885]: <info>  [1760433694.9761] device (tap8210d83b-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:21:34 compute-0 nova_compute[259627]: 2025-10-14 09:21:34.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:21:34 compute-0 ovn_controller[152662]: 2025-10-14T09:21:34Z|01235|binding|INFO|Releasing lport 8210d83b-b3db-4515-b65b-c49829132abf from this chassis (sb_readonly=0)
Oct 14 09:21:34 compute-0 ovn_controller[152662]: 2025-10-14T09:21:34Z|01236|binding|INFO|Setting lport 8210d83b-b3db-4515-b65b-c49829132abf down in Southbound
Oct 14 09:21:34 compute-0 ovn_controller[152662]: 2025-10-14T09:21:34Z|01237|binding|INFO|Removing iface tap8210d83b-b3 ovn-installed in OVS
Oct 14 09:21:34 compute-0 nova_compute[259627]: 2025-10-14 09:21:34.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:34 compute-0 nova_compute[259627]: 2025-10-14 09:21:34.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:35.001 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:90:f9 10.100.0.12'], port_security=['fa:16:3e:b3:90:f9 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6a505551-bc3f-4254-966f-ca344358f8ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61c1acdc-e817-4d26-8900-47d35332175a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e8c6014f-d547-4ba8-9654-07ba6669ba65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfec6e64-6ad9-4ee1-953d-480113ac60ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8210d83b-b3db-4515-b65b-c49829132abf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:21:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:35.002 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8210d83b-b3db-4515-b65b-c49829132abf in datapath 61c1acdc-e817-4d26-8900-47d35332175a unbound from our chassis
Oct 14 09:21:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:35.003 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 61c1acdc-e817-4d26-8900-47d35332175a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:21:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:35.007 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a61a12df-bf82-47c6-9d88-191b5781459f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:35.007 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a namespace which is not needed anymore
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.018 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.018 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:35 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000073.scope: Deactivated successfully.
Oct 14 09:21:35 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000073.scope: Consumed 15.077s CPU time.
Oct 14 09:21:35 compute-0 systemd-machined[214636]: Machine qemu-145-instance-00000073 terminated.
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:35 compute-0 NetworkManager[44885]: <info>  [1760433695.1262] manager: (tap8210d83b-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/504)
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.142 2 INFO nova.virt.libvirt.driver [-] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Instance destroyed successfully.
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.142 2 DEBUG nova.objects.instance [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid 6a505551-bc3f-4254-966f-ca344358f8ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.163 2 DEBUG nova.virt.libvirt.vif [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:20:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-629499808',display_name='tempest-TestNetworkBasicOps-server-629499808',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-629499808',id=115,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP5Gh4rdsdSynoxSEF7tq/kTUtYCuC2zsLAToH//zOjD330tt1Y4Nmd3BgWIZVekiJHZCljvM1fo2eUZ6bfCX6pV1ICXWzyF2zMOpuUQORM1Q5pU9nSa2DumnhiOt/fsgA==',key_name='tempest-TestNetworkBasicOps-2079468078',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:20:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-0mgzd3ft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:20:36Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=6a505551-bc3f-4254-966f-ca344358f8ac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8210d83b-b3db-4515-b65b-c49829132abf", "address": "fa:16:3e:b3:90:f9", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8210d83b-b3", "ovs_interfaceid": "8210d83b-b3db-4515-b65b-c49829132abf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.164 2 DEBUG nova.network.os_vif_util [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8210d83b-b3db-4515-b65b-c49829132abf", "address": "fa:16:3e:b3:90:f9", "network": {"id": "61c1acdc-e817-4d26-8900-47d35332175a", "bridge": "br-int", "label": "tempest-network-smoke--1945321534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8210d83b-b3", "ovs_interfaceid": "8210d83b-b3db-4515-b65b-c49829132abf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.165 2 DEBUG nova.network.os_vif_util [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:90:f9,bridge_name='br-int',has_traffic_filtering=True,id=8210d83b-b3db-4515-b65b-c49829132abf,network=Network(61c1acdc-e817-4d26-8900-47d35332175a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8210d83b-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.165 2 DEBUG os_vif [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:90:f9,bridge_name='br-int',has_traffic_filtering=True,id=8210d83b-b3db-4515-b65b-c49829132abf,network=Network(61c1acdc-e817-4d26-8900-47d35332175a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8210d83b-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.167 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8210d83b-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.173 2 INFO os_vif [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:90:f9,bridge_name='br-int',has_traffic_filtering=True,id=8210d83b-b3db-4515-b65b-c49829132abf,network=Network(61c1acdc-e817-4d26-8900-47d35332175a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8210d83b-b3')
Oct 14 09:21:35 compute-0 neutron-haproxy-ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a[374021]: [NOTICE]   (374025) : haproxy version is 2.8.14-c23fe91
Oct 14 09:21:35 compute-0 neutron-haproxy-ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a[374021]: [NOTICE]   (374025) : path to executable is /usr/sbin/haproxy
Oct 14 09:21:35 compute-0 neutron-haproxy-ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a[374021]: [WARNING]  (374025) : Exiting Master process...
Oct 14 09:21:35 compute-0 neutron-haproxy-ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a[374021]: [WARNING]  (374025) : Exiting Master process...
Oct 14 09:21:35 compute-0 neutron-haproxy-ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a[374021]: [ALERT]    (374025) : Current worker (374027) exited with code 143 (Terminated)
Oct 14 09:21:35 compute-0 neutron-haproxy-ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a[374021]: [WARNING]  (374025) : All workers exited. Exiting... (0)
Oct 14 09:21:35 compute-0 systemd[1]: libpod-b08b0b04957369face77d2093f9bebc474d9ba54dbed95cfccf9a0230a177d1f.scope: Deactivated successfully.
Oct 14 09:21:35 compute-0 podman[376439]: 2025-10-14 09:21:35.191527246 +0000 UTC m=+0.060820120 container died b08b0b04957369face77d2093f9bebc474d9ba54dbed95cfccf9a0230a177d1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:21:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b08b0b04957369face77d2093f9bebc474d9ba54dbed95cfccf9a0230a177d1f-userdata-shm.mount: Deactivated successfully.
Oct 14 09:21:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-09c345a7d17434a672ce527f345c9de737a5f5d6b44645de6f5bf9601a24428d-merged.mount: Deactivated successfully.
Oct 14 09:21:35 compute-0 podman[376439]: 2025-10-14 09:21:35.235427158 +0000 UTC m=+0.104720032 container cleanup b08b0b04957369face77d2093f9bebc474d9ba54dbed95cfccf9a0230a177d1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 09:21:35 compute-0 systemd[1]: libpod-conmon-b08b0b04957369face77d2093f9bebc474d9ba54dbed95cfccf9a0230a177d1f.scope: Deactivated successfully.
Oct 14 09:21:35 compute-0 podman[376512]: 2025-10-14 09:21:35.311571505 +0000 UTC m=+0.051422079 container remove b08b0b04957369face77d2093f9bebc474d9ba54dbed95cfccf9a0230a177d1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:21:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:35.318 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[17994772-f656-4254-898e-afcd0f474ecc]: (4, ('Tue Oct 14 09:21:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a (b08b0b04957369face77d2093f9bebc474d9ba54dbed95cfccf9a0230a177d1f)\nb08b0b04957369face77d2093f9bebc474d9ba54dbed95cfccf9a0230a177d1f\nTue Oct 14 09:21:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a (b08b0b04957369face77d2093f9bebc474d9ba54dbed95cfccf9a0230a177d1f)\nb08b0b04957369face77d2093f9bebc474d9ba54dbed95cfccf9a0230a177d1f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:35.320 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d7bc85ca-08d7-479d-bdb2-f1132bd83403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:35.321 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61c1acdc-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:21:35 compute-0 kernel: tap61c1acdc-e0: left promiscuous mode
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:35 compute-0 ceph-mon[74249]: pgmap v2041: 305 pgs: 305 active+clean; 200 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 755 KiB/s rd, 955 KiB/s wr, 116 op/s
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:35.355 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[74dd6419-91d3-44dd-9c7f-32dd37bfa4c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:35.374 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[70cd7138-935c-4936-994f-6a07cad49159]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:35.375 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b5138f9a-f679-4b0a-aa92-00aa25f2d1bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:35.391 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[730688be-5f36-492d-b10e-6b01fbf180bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741450, 'reachable_time': 31083, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376527, 'error': None, 'target': 'ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:35 compute-0 systemd[1]: run-netns-ovnmeta\x2d61c1acdc\x2de817\x2d4d26\x2d8900\x2d47d35332175a.mount: Deactivated successfully.
Oct 14 09:21:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:35.396 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-61c1acdc-e817-4d26-8900-47d35332175a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:21:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:21:35.396 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[d8426290-624a-4be2-b81e-d481fd13e1a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:21:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:21:35 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508787831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.516 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.593 2 DEBUG nova.compute.manager [req-8a635df6-e4d9-4443-a7ff-f08a6d228544 req-1dda9ac6-27d1-4a32-a1d8-a61d6c28a029 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received event network-vif-unplugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.594 2 DEBUG oslo_concurrency.lockutils [req-8a635df6-e4d9-4443-a7ff-f08a6d228544 req-1dda9ac6-27d1-4a32-a1d8-a61d6c28a029 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.594 2 DEBUG oslo_concurrency.lockutils [req-8a635df6-e4d9-4443-a7ff-f08a6d228544 req-1dda9ac6-27d1-4a32-a1d8-a61d6c28a029 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.595 2 DEBUG oslo_concurrency.lockutils [req-8a635df6-e4d9-4443-a7ff-f08a6d228544 req-1dda9ac6-27d1-4a32-a1d8-a61d6c28a029 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.595 2 DEBUG nova.compute.manager [req-8a635df6-e4d9-4443-a7ff-f08a6d228544 req-1dda9ac6-27d1-4a32-a1d8-a61d6c28a029 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] No waiting events found dispatching network-vif-unplugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.596 2 DEBUG nova.compute.manager [req-8a635df6-e4d9-4443-a7ff-f08a6d228544 req-1dda9ac6-27d1-4a32-a1d8-a61d6c28a029 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received event network-vif-unplugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.596 2 DEBUG nova.compute.manager [req-8a635df6-e4d9-4443-a7ff-f08a6d228544 req-1dda9ac6-27d1-4a32-a1d8-a61d6c28a029 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received event network-vif-plugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.596 2 DEBUG oslo_concurrency.lockutils [req-8a635df6-e4d9-4443-a7ff-f08a6d228544 req-1dda9ac6-27d1-4a32-a1d8-a61d6c28a029 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.597 2 DEBUG oslo_concurrency.lockutils [req-8a635df6-e4d9-4443-a7ff-f08a6d228544 req-1dda9ac6-27d1-4a32-a1d8-a61d6c28a029 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.597 2 DEBUG oslo_concurrency.lockutils [req-8a635df6-e4d9-4443-a7ff-f08a6d228544 req-1dda9ac6-27d1-4a32-a1d8-a61d6c28a029 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.598 2 DEBUG nova.compute.manager [req-8a635df6-e4d9-4443-a7ff-f08a6d228544 req-1dda9ac6-27d1-4a32-a1d8-a61d6c28a029 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] No waiting events found dispatching network-vif-plugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.598 2 WARNING nova.compute.manager [req-8a635df6-e4d9-4443-a7ff-f08a6d228544 req-1dda9ac6-27d1-4a32-a1d8-a61d6c28a029 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received unexpected event network-vif-plugged-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f for instance with vm_state active and task_state deleting.
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.598 2 DEBUG nova.compute.manager [req-8a635df6-e4d9-4443-a7ff-f08a6d228544 req-1dda9ac6-27d1-4a32-a1d8-a61d6c28a029 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Received event network-vif-unplugged-8210d83b-b3db-4515-b65b-c49829132abf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.599 2 DEBUG oslo_concurrency.lockutils [req-8a635df6-e4d9-4443-a7ff-f08a6d228544 req-1dda9ac6-27d1-4a32-a1d8-a61d6c28a029 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6a505551-bc3f-4254-966f-ca344358f8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.599 2 DEBUG oslo_concurrency.lockutils [req-8a635df6-e4d9-4443-a7ff-f08a6d228544 req-1dda9ac6-27d1-4a32-a1d8-a61d6c28a029 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6a505551-bc3f-4254-966f-ca344358f8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.600 2 DEBUG oslo_concurrency.lockutils [req-8a635df6-e4d9-4443-a7ff-f08a6d228544 req-1dda9ac6-27d1-4a32-a1d8-a61d6c28a029 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6a505551-bc3f-4254-966f-ca344358f8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.600 2 DEBUG nova.compute.manager [req-8a635df6-e4d9-4443-a7ff-f08a6d228544 req-1dda9ac6-27d1-4a32-a1d8-a61d6c28a029 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] No waiting events found dispatching network-vif-unplugged-8210d83b-b3db-4515-b65b-c49829132abf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.600 2 DEBUG nova.compute.manager [req-8a635df6-e4d9-4443-a7ff-f08a6d228544 req-1dda9ac6-27d1-4a32-a1d8-a61d6c28a029 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Received event network-vif-unplugged-8210d83b-b3db-4515-b65b-c49829132abf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.603 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.603 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.650 2 INFO nova.virt.libvirt.driver [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Deleting instance files /var/lib/nova/instances/6a505551-bc3f-4254-966f-ca344358f8ac_del
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.651 2 INFO nova.virt.libvirt.driver [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Deletion of /var/lib/nova/instances/6a505551-bc3f-4254-966f-ca344358f8ac_del complete
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.711 2 INFO nova.compute.manager [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.712 2 DEBUG oslo.service.loopingcall [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.713 2 DEBUG nova.compute.manager [-] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.713 2 DEBUG nova.network.neutron [-] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.797 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.798 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3760MB free_disk=59.897056579589844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.798 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.798 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.936 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 6a505551-bc3f-4254-966f-ca344358f8ac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.937 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance fd5669ba-0261-423e-8586-66c91ff570a4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.937 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:21:35 compute-0 nova_compute[259627]: 2025-10-14 09:21:35.938 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:21:36 compute-0 nova_compute[259627]: 2025-10-14 09:21:36.004 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:36 compute-0 nova_compute[259627]: 2025-10-14 09:21:36.107 2 DEBUG nova.network.neutron [-] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:21:36 compute-0 nova_compute[259627]: 2025-10-14 09:21:36.126 2 INFO nova.compute.manager [-] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Took 2.13 seconds to deallocate network for instance.
Oct 14 09:21:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 99 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 941 KiB/s rd, 903 KiB/s wr, 150 op/s
Oct 14 09:21:36 compute-0 nova_compute[259627]: 2025-10-14 09:21:36.182 2 DEBUG oslo_concurrency.lockutils [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:36 compute-0 nova_compute[259627]: 2025-10-14 09:21:36.194 2 DEBUG nova.network.neutron [req-2edbd33d-6e3a-4d45-8b73-a4a19c00b31f req-6cb55ced-dbd7-4306-be9b-978d110696fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Updated VIF entry in instance network info cache for port 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:21:36 compute-0 nova_compute[259627]: 2025-10-14 09:21:36.195 2 DEBUG nova.network.neutron [req-2edbd33d-6e3a-4d45-8b73-a4a19c00b31f req-6cb55ced-dbd7-4306-be9b-978d110696fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Updating instance_info_cache with network_info: [{"id": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "address": "fa:16:3e:5a:cb:1b", "network": {"id": "6c11d5e6-13c9-49c7-982d-8d1198ac7dea", "bridge": "br-int", "label": "tempest-TestShelveInstance-253772927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b859f880079e4e6db96cdef422402fa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a93d82c-2a", "ovs_interfaceid": "8a93d82c-2ad9-4fb9-8867-f4d2cdac487f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:21:36 compute-0 nova_compute[259627]: 2025-10-14 09:21:36.213 2 DEBUG oslo_concurrency.lockutils [req-2edbd33d-6e3a-4d45-8b73-a4a19c00b31f req-6cb55ced-dbd7-4306-be9b-978d110696fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-fd5669ba-0261-423e-8586-66c91ff570a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:21:36 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2508787831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:21:36 compute-0 nova_compute[259627]: 2025-10-14 09:21:36.376 2 DEBUG nova.network.neutron [-] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:21:36 compute-0 nova_compute[259627]: 2025-10-14 09:21:36.398 2 INFO nova.compute.manager [-] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Took 0.69 seconds to deallocate network for instance.
Oct 14 09:21:36 compute-0 nova_compute[259627]: 2025-10-14 09:21:36.440 2 DEBUG nova.compute.manager [req-4a3188ad-1de1-4b5d-a549-c8f339b88d53 req-8c16582c-aa3d-475b-9d2e-7d0daebb10be 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Received event network-vif-deleted-8210d83b-b3db-4515-b65b-c49829132abf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:21:36 compute-0 nova_compute[259627]: 2025-10-14 09:21:36.471 2 DEBUG oslo_concurrency.lockutils [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:21:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/540026063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:21:36 compute-0 nova_compute[259627]: 2025-10-14 09:21:36.488 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:36 compute-0 nova_compute[259627]: 2025-10-14 09:21:36.494 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:21:36 compute-0 nova_compute[259627]: 2025-10-14 09:21:36.508 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:21:36 compute-0 nova_compute[259627]: 2025-10-14 09:21:36.528 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:21:36 compute-0 nova_compute[259627]: 2025-10-14 09:21:36.528 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:36 compute-0 nova_compute[259627]: 2025-10-14 09:21:36.528 2 DEBUG oslo_concurrency.lockutils [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.347s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:36 compute-0 nova_compute[259627]: 2025-10-14 09:21:36.588 2 DEBUG oslo_concurrency.processutils [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:21:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2469960979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.007 2 DEBUG oslo_concurrency.processutils [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.018 2 DEBUG nova.compute.provider_tree [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.042 2 DEBUG nova.scheduler.client.report [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.083 2 DEBUG oslo_concurrency.lockutils [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.087 2 DEBUG oslo_concurrency.lockutils [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.112 2 INFO nova.scheduler.client.report [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Deleted allocations for instance fd5669ba-0261-423e-8586-66c91ff570a4
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.155 2 DEBUG oslo_concurrency.processutils [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.205 2 DEBUG oslo_concurrency.lockutils [None req-f84cd7e0-93b7-442d-9e5a-85ce0f9989e9 8d77c101777148edbee39ba308af8e60 b859f880079e4e6db96cdef422402fa1 - - default default] Lock "fd5669ba-0261-423e-8586-66c91ff570a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:37 compute-0 ceph-mon[74249]: pgmap v2042: 305 pgs: 305 active+clean; 99 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 941 KiB/s rd, 903 KiB/s wr, 150 op/s
Oct 14 09:21:37 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/540026063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:21:37 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2469960979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.530 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.531 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:21:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:21:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1827795983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.562 2 DEBUG oslo_concurrency.processutils [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.571 2 DEBUG nova.compute.provider_tree [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.593 2 DEBUG nova.scheduler.client.report [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.623 2 DEBUG oslo_concurrency.lockutils [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.651 2 INFO nova.scheduler.client.report [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance 6a505551-bc3f-4254-966f-ca344358f8ac
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.724 2 DEBUG nova.compute.manager [req-9576a0bc-6e4d-4813-900a-3033d0019808 req-b3cfc91d-0edc-4505-935b-3ce12287dabf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Received event network-vif-deleted-8a93d82c-2ad9-4fb9-8867-f4d2cdac487f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.724 2 INFO nova.compute.manager [req-9576a0bc-6e4d-4813-900a-3033d0019808 req-b3cfc91d-0edc-4505-935b-3ce12287dabf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Neutron deleted interface 8a93d82c-2ad9-4fb9-8867-f4d2cdac487f; detaching it from the instance and deleting it from the info cache
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.725 2 DEBUG nova.network.neutron [req-9576a0bc-6e4d-4813-900a-3033d0019808 req-b3cfc91d-0edc-4505-935b-3ce12287dabf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.729 2 DEBUG nova.compute.manager [req-9576a0bc-6e4d-4813-900a-3033d0019808 req-b3cfc91d-0edc-4505-935b-3ce12287dabf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Detach interface failed, port_id=8a93d82c-2ad9-4fb9-8867-f4d2cdac487f, reason: Instance fd5669ba-0261-423e-8586-66c91ff570a4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.730 2 DEBUG nova.compute.manager [req-9576a0bc-6e4d-4813-900a-3033d0019808 req-b3cfc91d-0edc-4505-935b-3ce12287dabf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Received event network-vif-plugged-8210d83b-b3db-4515-b65b-c49829132abf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.730 2 DEBUG oslo_concurrency.lockutils [req-9576a0bc-6e4d-4813-900a-3033d0019808 req-b3cfc91d-0edc-4505-935b-3ce12287dabf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6a505551-bc3f-4254-966f-ca344358f8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.730 2 DEBUG oslo_concurrency.lockutils [req-9576a0bc-6e4d-4813-900a-3033d0019808 req-b3cfc91d-0edc-4505-935b-3ce12287dabf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6a505551-bc3f-4254-966f-ca344358f8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.731 2 DEBUG oslo_concurrency.lockutils [req-9576a0bc-6e4d-4813-900a-3033d0019808 req-b3cfc91d-0edc-4505-935b-3ce12287dabf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6a505551-bc3f-4254-966f-ca344358f8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.731 2 DEBUG nova.compute.manager [req-9576a0bc-6e4d-4813-900a-3033d0019808 req-b3cfc91d-0edc-4505-935b-3ce12287dabf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] No waiting events found dispatching network-vif-plugged-8210d83b-b3db-4515-b65b-c49829132abf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.731 2 WARNING nova.compute.manager [req-9576a0bc-6e4d-4813-900a-3033d0019808 req-b3cfc91d-0edc-4505-935b-3ce12287dabf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Received unexpected event network-vif-plugged-8210d83b-b3db-4515-b65b-c49829132abf for instance with vm_state deleted and task_state None.
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.743 2 DEBUG oslo_concurrency.lockutils [None req-b2e55d94-d81e-4cd3-a22a-816494e84508 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "6a505551-bc3f-4254-966f-ca344358f8ac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:21:37 compute-0 nova_compute[259627]: 2025-10-14 09:21:37.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:21:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 99 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 495 KiB/s rd, 24 KiB/s wr, 92 op/s
Oct 14 09:21:38 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1827795983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:21:39 compute-0 nova_compute[259627]: 2025-10-14 09:21:39.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:39 compute-0 ceph-mon[74249]: pgmap v2043: 305 pgs: 305 active+clean; 99 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 495 KiB/s rd, 24 KiB/s wr, 92 op/s
Oct 14 09:21:39 compute-0 nova_compute[259627]: 2025-10-14 09:21:39.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:21:39 compute-0 nova_compute[259627]: 2025-10-14 09:21:39.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:21:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 76 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 504 KiB/s rd, 24 KiB/s wr, 104 op/s
Oct 14 09:21:40 compute-0 nova_compute[259627]: 2025-10-14 09:21:40.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:41 compute-0 ceph-mon[74249]: pgmap v2044: 305 pgs: 305 active+clean; 76 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 504 KiB/s rd, 24 KiB/s wr, 104 op/s
Oct 14 09:21:41 compute-0 podman[376600]: 2025-10-14 09:21:41.704787474 +0000 UTC m=+0.107393448 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 14 09:21:41 compute-0 podman[376599]: 2025-10-14 09:21:41.713667193 +0000 UTC m=+0.131938913 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 14 09:21:41 compute-0 nova_compute[259627]: 2025-10-14 09:21:41.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:21:41 compute-0 nova_compute[259627]: 2025-10-14 09:21:41.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:21:42 compute-0 nova_compute[259627]: 2025-10-14 09:21:41.999 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:21:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 14 KiB/s wr, 78 op/s
Oct 14 09:21:42 compute-0 nova_compute[259627]: 2025-10-14 09:21:42.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:21:43 compute-0 sshd-session[376415]: Invalid user localadmin from 188.150.249.96 port 38942
Oct 14 09:21:43 compute-0 nova_compute[259627]: 2025-10-14 09:21:43.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:21:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:21:43 compute-0 ceph-mon[74249]: pgmap v2045: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 14 KiB/s wr, 78 op/s
Oct 14 09:21:44 compute-0 nova_compute[259627]: 2025-10-14 09:21:44.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 13 KiB/s wr, 61 op/s
Oct 14 09:21:44 compute-0 sshd-session[376415]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:21:44 compute-0 sshd-session[376415]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96
Oct 14 09:21:44 compute-0 nova_compute[259627]: 2025-10-14 09:21:44.442 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433689.4409733, 51e84050-59be-4f23-b78f-18bc2d3e83fc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:21:44 compute-0 nova_compute[259627]: 2025-10-14 09:21:44.442 2 INFO nova.compute.manager [-] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] VM Stopped (Lifecycle Event)
Oct 14 09:21:44 compute-0 nova_compute[259627]: 2025-10-14 09:21:44.476 2 DEBUG nova.compute.manager [None req-49492449-1f30-4072-b43a-327838b18d98 - - - - - -] [instance: 51e84050-59be-4f23-b78f-18bc2d3e83fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:21:44 compute-0 nova_compute[259627]: 2025-10-14 09:21:44.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:21:45 compute-0 nova_compute[259627]: 2025-10-14 09:21:45.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:45 compute-0 ceph-mon[74249]: pgmap v2046: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 13 KiB/s wr, 61 op/s
Oct 14 09:21:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 13 KiB/s wr, 61 op/s
Oct 14 09:21:46 compute-0 rsyslogd[1002]: imjournal: 4274 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 14 09:21:46 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:21:46 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 9457 writes, 43K keys, 9457 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 9457 writes, 9457 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1520 writes, 7571 keys, 1520 commit groups, 1.0 writes per commit group, ingest: 9.20 MB, 0.02 MB/s
                                           Interval WAL: 1520 writes, 1520 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     97.0      0.53              0.19        29    0.018       0      0       0.0       0.0
                                             L6      1/0    7.30 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.3    188.2    157.0      1.41              0.71        28    0.050    156K    15K       0.0       0.0
                                            Sum      1/0    7.30 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.3    136.9    140.6      1.94              0.90        57    0.034    156K    15K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.0    168.9    164.5      0.49              0.27        16    0.030     55K   4087       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    188.2    157.0      1.41              0.71        28    0.050    156K    15K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     98.3      0.52              0.19        28    0.019       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.050, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.27 GB write, 0.08 MB/s write, 0.26 GB read, 0.07 MB/s read, 1.9 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5646f3b2b1f0#2 capacity: 304.00 MB usage: 29.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000245 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1931,28.27 MB,9.29864%) FilterBlock(58,438.55 KB,0.140878%) IndexBlock(58,773.72 KB,0.248548%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 14 09:21:47 compute-0 sshd-session[376415]: Failed password for invalid user localadmin from 188.150.249.96 port 38942 ssh2
Oct 14 09:21:47 compute-0 ceph-mon[74249]: pgmap v2047: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 13 KiB/s wr, 61 op/s
Oct 14 09:21:47 compute-0 sudo[376646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:21:47 compute-0 sudo[376646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:21:47 compute-0 sudo[376646]: pam_unix(sudo:session): session closed for user root
Oct 14 09:21:47 compute-0 sshd-session[376415]: Connection closed by invalid user localadmin 188.150.249.96 port 38942 [preauth]
Oct 14 09:21:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:21:47 compute-0 sudo[376671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:21:47 compute-0 sudo[376671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:21:47 compute-0 sudo[376671]: pam_unix(sudo:session): session closed for user root
Oct 14 09:21:48 compute-0 sudo[376696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:21:48 compute-0 sudo[376696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:21:48 compute-0 sudo[376696]: pam_unix(sudo:session): session closed for user root
Oct 14 09:21:48 compute-0 sudo[376722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:21:48 compute-0 sudo[376722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:21:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 19 op/s
Oct 14 09:21:48 compute-0 nova_compute[259627]: 2025-10-14 09:21:48.446 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433693.444861, fd5669ba-0261-423e-8586-66c91ff570a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:21:48 compute-0 nova_compute[259627]: 2025-10-14 09:21:48.447 2 INFO nova.compute.manager [-] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] VM Stopped (Lifecycle Event)
Oct 14 09:21:48 compute-0 nova_compute[259627]: 2025-10-14 09:21:48.478 2 DEBUG nova.compute.manager [None req-d219257a-8994-4aa3-b68f-e0a7021014ed - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:21:48 compute-0 sudo[376722]: pam_unix(sudo:session): session closed for user root
Oct 14 09:21:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:21:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:21:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:21:48 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:21:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:21:48 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:21:48 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 8a3ad5f0-0e44-46a9-8158-3a5ad46a99df does not exist
Oct 14 09:21:48 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev f9091d2b-7962-47ee-962d-19d7b38c6ca9 does not exist
Oct 14 09:21:48 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 105f1c60-3cbb-40e9-ad92-ae7cb059b7e3 does not exist
Oct 14 09:21:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:21:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:21:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:21:48 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:21:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:21:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:21:48 compute-0 sudo[376781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:21:48 compute-0 sudo[376781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:21:48 compute-0 sudo[376781]: pam_unix(sudo:session): session closed for user root
Oct 14 09:21:49 compute-0 sudo[376806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:21:49 compute-0 sudo[376806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:21:49 compute-0 sudo[376806]: pam_unix(sudo:session): session closed for user root
Oct 14 09:21:49 compute-0 sudo[376831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:21:49 compute-0 sudo[376831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:21:49 compute-0 sudo[376831]: pam_unix(sudo:session): session closed for user root
Oct 14 09:21:49 compute-0 nova_compute[259627]: 2025-10-14 09:21:49.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:49 compute-0 sudo[376856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:21:49 compute-0 sudo[376856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:21:49 compute-0 ceph-mon[74249]: pgmap v2048: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 19 op/s
Oct 14 09:21:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:21:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:21:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:21:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:21:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:21:49 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:21:49 compute-0 podman[376922]: 2025-10-14 09:21:49.643858533 +0000 UTC m=+0.070345725 container create be5e8312f51451e4836625a0a0b96c2ee0ffc4f957b5fa6511fd3203fdf0c82f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclean, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 09:21:49 compute-0 systemd[1]: Started libpod-conmon-be5e8312f51451e4836625a0a0b96c2ee0ffc4f957b5fa6511fd3203fdf0c82f.scope.
Oct 14 09:21:49 compute-0 podman[376922]: 2025-10-14 09:21:49.613438143 +0000 UTC m=+0.039925345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:21:49 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:21:49 compute-0 podman[376922]: 2025-10-14 09:21:49.752349477 +0000 UTC m=+0.178836669 container init be5e8312f51451e4836625a0a0b96c2ee0ffc4f957b5fa6511fd3203fdf0c82f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclean, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 09:21:49 compute-0 podman[376922]: 2025-10-14 09:21:49.765929472 +0000 UTC m=+0.192416654 container start be5e8312f51451e4836625a0a0b96c2ee0ffc4f957b5fa6511fd3203fdf0c82f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 09:21:49 compute-0 podman[376922]: 2025-10-14 09:21:49.770384441 +0000 UTC m=+0.196871683 container attach be5e8312f51451e4836625a0a0b96c2ee0ffc4f957b5fa6511fd3203fdf0c82f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 09:21:49 compute-0 epic_mclean[376938]: 167 167
Oct 14 09:21:49 compute-0 systemd[1]: libpod-be5e8312f51451e4836625a0a0b96c2ee0ffc4f957b5fa6511fd3203fdf0c82f.scope: Deactivated successfully.
Oct 14 09:21:49 compute-0 podman[376922]: 2025-10-14 09:21:49.776371999 +0000 UTC m=+0.202859191 container died be5e8312f51451e4836625a0a0b96c2ee0ffc4f957b5fa6511fd3203fdf0c82f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:21:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-63313f046fa4cc23b8d405fcab9e77d718b771e2cf86835fe2f9b5486289ee5d-merged.mount: Deactivated successfully.
Oct 14 09:21:49 compute-0 podman[376922]: 2025-10-14 09:21:49.835734252 +0000 UTC m=+0.262221434 container remove be5e8312f51451e4836625a0a0b96c2ee0ffc4f957b5fa6511fd3203fdf0c82f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclean, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 09:21:49 compute-0 systemd[1]: libpod-conmon-be5e8312f51451e4836625a0a0b96c2ee0ffc4f957b5fa6511fd3203fdf0c82f.scope: Deactivated successfully.
Oct 14 09:21:50 compute-0 podman[376963]: 2025-10-14 09:21:50.066752676 +0000 UTC m=+0.056633027 container create e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:21:50 compute-0 systemd[1]: Started libpod-conmon-e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d.scope.
Oct 14 09:21:50 compute-0 podman[376963]: 2025-10-14 09:21:50.04787629 +0000 UTC m=+0.037756681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:21:50 compute-0 nova_compute[259627]: 2025-10-14 09:21:50.140 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433695.139951, 6a505551-bc3f-4254-966f-ca344358f8ac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:21:50 compute-0 nova_compute[259627]: 2025-10-14 09:21:50.142 2 INFO nova.compute.manager [-] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] VM Stopped (Lifecycle Event)
Oct 14 09:21:50 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8350a76783983bf4565acef5cf4ffa7c0d8733c63cf551a2ebbbb92dd63aa36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8350a76783983bf4565acef5cf4ffa7c0d8733c63cf551a2ebbbb92dd63aa36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8350a76783983bf4565acef5cf4ffa7c0d8733c63cf551a2ebbbb92dd63aa36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8350a76783983bf4565acef5cf4ffa7c0d8733c63cf551a2ebbbb92dd63aa36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8350a76783983bf4565acef5cf4ffa7c0d8733c63cf551a2ebbbb92dd63aa36/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:21:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 19 op/s
Oct 14 09:21:50 compute-0 podman[376963]: 2025-10-14 09:21:50.174884511 +0000 UTC m=+0.164764862 container init e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_murdock, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 09:21:50 compute-0 nova_compute[259627]: 2025-10-14 09:21:50.226 2 DEBUG nova.compute.manager [None req-45f589d6-5b61-4495-bac6-3619818765c8 - - - - - -] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:21:50 compute-0 nova_compute[259627]: 2025-10-14 09:21:50.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:50 compute-0 podman[376963]: 2025-10-14 09:21:50.227981869 +0000 UTC m=+0.217862230 container start e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_murdock, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:21:50 compute-0 podman[376963]: 2025-10-14 09:21:50.231742482 +0000 UTC m=+0.221622853 container attach e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 09:21:50 compute-0 nova_compute[259627]: 2025-10-14 09:21:50.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:21:51 compute-0 mystifying_murdock[376980]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:21:51 compute-0 mystifying_murdock[376980]: --> relative data size: 1.0
Oct 14 09:21:51 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 09:21:51 compute-0 mystifying_murdock[376980]: --> All data devices are unavailable
Oct 14 09:21:51 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 09:21:51 compute-0 systemd[1]: libpod-e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d.scope: Deactivated successfully.
Oct 14 09:21:51 compute-0 podman[376963]: 2025-10-14 09:21:51.331061056 +0000 UTC m=+1.320941467 container died e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:21:51 compute-0 systemd[1]: libpod-e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d.scope: Consumed 1.069s CPU time.
Oct 14 09:21:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8350a76783983bf4565acef5cf4ffa7c0d8733c63cf551a2ebbbb92dd63aa36-merged.mount: Deactivated successfully.
Oct 14 09:21:51 compute-0 podman[376963]: 2025-10-14 09:21:51.422989861 +0000 UTC m=+1.412870242 container remove e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_murdock, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:21:51 compute-0 ceph-mon[74249]: pgmap v2049: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 19 op/s
Oct 14 09:21:51 compute-0 systemd[1]: libpod-conmon-e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d.scope: Deactivated successfully.
Oct 14 09:21:51 compute-0 sudo[376856]: pam_unix(sudo:session): session closed for user root
Oct 14 09:21:51 compute-0 sudo[377023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:21:51 compute-0 sudo[377023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:21:51 compute-0 sudo[377023]: pam_unix(sudo:session): session closed for user root
Oct 14 09:21:51 compute-0 sudo[377048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:21:51 compute-0 sudo[377048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:21:51 compute-0 sudo[377048]: pam_unix(sudo:session): session closed for user root
Oct 14 09:21:51 compute-0 sudo[377073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:21:51 compute-0 sudo[377073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:21:51 compute-0 sudo[377073]: pam_unix(sudo:session): session closed for user root
Oct 14 09:21:51 compute-0 sudo[377098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:21:51 compute-0 sudo[377098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:21:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 767 B/s wr, 7 op/s
Oct 14 09:21:52 compute-0 podman[377162]: 2025-10-14 09:21:52.300176081 +0000 UTC m=+0.069569686 container create 71e234e318450113edb4f7dd36acb729b98f5e6dce84b8432603d71d2d2dea26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dhawan, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:21:52 compute-0 systemd[1]: Started libpod-conmon-71e234e318450113edb4f7dd36acb729b98f5e6dce84b8432603d71d2d2dea26.scope.
Oct 14 09:21:52 compute-0 podman[377162]: 2025-10-14 09:21:52.271831932 +0000 UTC m=+0.041225547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:21:52 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:21:52 compute-0 podman[377162]: 2025-10-14 09:21:52.401542749 +0000 UTC m=+0.170936374 container init 71e234e318450113edb4f7dd36acb729b98f5e6dce84b8432603d71d2d2dea26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 09:21:52 compute-0 podman[377162]: 2025-10-14 09:21:52.415596806 +0000 UTC m=+0.184990421 container start 71e234e318450113edb4f7dd36acb729b98f5e6dce84b8432603d71d2d2dea26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:21:52 compute-0 podman[377162]: 2025-10-14 09:21:52.419739498 +0000 UTC m=+0.189133113 container attach 71e234e318450113edb4f7dd36acb729b98f5e6dce84b8432603d71d2d2dea26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dhawan, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:21:52 compute-0 romantic_dhawan[377179]: 167 167
Oct 14 09:21:52 compute-0 systemd[1]: libpod-71e234e318450113edb4f7dd36acb729b98f5e6dce84b8432603d71d2d2dea26.scope: Deactivated successfully.
Oct 14 09:21:52 compute-0 podman[377162]: 2025-10-14 09:21:52.424355102 +0000 UTC m=+0.193748717 container died 71e234e318450113edb4f7dd36acb729b98f5e6dce84b8432603d71d2d2dea26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dhawan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 09:21:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-3241a7f67c2513bd7baf8b28bfb5bdef625c4e76c838cb9d33f4cb140690547c-merged.mount: Deactivated successfully.
Oct 14 09:21:52 compute-0 podman[377162]: 2025-10-14 09:21:52.477601444 +0000 UTC m=+0.246995029 container remove 71e234e318450113edb4f7dd36acb729b98f5e6dce84b8432603d71d2d2dea26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dhawan, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:21:52 compute-0 systemd[1]: libpod-conmon-71e234e318450113edb4f7dd36acb729b98f5e6dce84b8432603d71d2d2dea26.scope: Deactivated successfully.
Oct 14 09:21:52 compute-0 podman[377203]: 2025-10-14 09:21:52.653333095 +0000 UTC m=+0.057426916 container create b6b34fcb120756b6ee6b03664a6a0991e2648536ef50c349da491a304b9cb8e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lederberg, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:21:52 compute-0 systemd[1]: Started libpod-conmon-b6b34fcb120756b6ee6b03664a6a0991e2648536ef50c349da491a304b9cb8e3.scope.
Oct 14 09:21:52 compute-0 podman[377203]: 2025-10-14 09:21:52.627330814 +0000 UTC m=+0.031424705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:21:52 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17298dd6484fc724c322f93fea0f9114704d79e8d698b3f1ae6e103e789b53b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17298dd6484fc724c322f93fea0f9114704d79e8d698b3f1ae6e103e789b53b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17298dd6484fc724c322f93fea0f9114704d79e8d698b3f1ae6e103e789b53b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17298dd6484fc724c322f93fea0f9114704d79e8d698b3f1ae6e103e789b53b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:21:52 compute-0 podman[377203]: 2025-10-14 09:21:52.740331979 +0000 UTC m=+0.144425820 container init b6b34fcb120756b6ee6b03664a6a0991e2648536ef50c349da491a304b9cb8e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lederberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:21:52 compute-0 podman[377203]: 2025-10-14 09:21:52.748946182 +0000 UTC m=+0.153040003 container start b6b34fcb120756b6ee6b03664a6a0991e2648536ef50c349da491a304b9cb8e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:21:52 compute-0 podman[377203]: 2025-10-14 09:21:52.752783366 +0000 UTC m=+0.156877177 container attach b6b34fcb120756b6ee6b03664a6a0991e2648536ef50c349da491a304b9cb8e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lederberg, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:21:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:21:53 compute-0 ceph-mon[74249]: pgmap v2050: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 767 B/s wr, 7 op/s
Oct 14 09:21:53 compute-0 tender_lederberg[377219]: {
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:     "0": [
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:         {
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "devices": [
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "/dev/loop3"
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             ],
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "lv_name": "ceph_lv0",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "lv_size": "21470642176",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "name": "ceph_lv0",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "tags": {
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.cluster_name": "ceph",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.crush_device_class": "",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.encrypted": "0",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.osd_id": "0",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.type": "block",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.vdo": "0"
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             },
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "type": "block",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "vg_name": "ceph_vg0"
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:         }
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:     ],
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:     "1": [
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:         {
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "devices": [
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "/dev/loop4"
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             ],
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "lv_name": "ceph_lv1",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "lv_size": "21470642176",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "name": "ceph_lv1",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "tags": {
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.cluster_name": "ceph",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.crush_device_class": "",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.encrypted": "0",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.osd_id": "1",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.type": "block",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.vdo": "0"
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             },
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "type": "block",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "vg_name": "ceph_vg1"
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:         }
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:     ],
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:     "2": [
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:         {
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "devices": [
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "/dev/loop5"
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             ],
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "lv_name": "ceph_lv2",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "lv_size": "21470642176",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "name": "ceph_lv2",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "tags": {
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.cluster_name": "ceph",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.crush_device_class": "",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.encrypted": "0",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.osd_id": "2",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.type": "block",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:                 "ceph.vdo": "0"
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             },
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "type": "block",
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:             "vg_name": "ceph_vg2"
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:         }
Oct 14 09:21:53 compute-0 tender_lederberg[377219]:     ]
Oct 14 09:21:53 compute-0 tender_lederberg[377219]: }
Oct 14 09:21:53 compute-0 systemd[1]: libpod-b6b34fcb120756b6ee6b03664a6a0991e2648536ef50c349da491a304b9cb8e3.scope: Deactivated successfully.
Oct 14 09:21:53 compute-0 podman[377203]: 2025-10-14 09:21:53.494317923 +0000 UTC m=+0.898411754 container died b6b34fcb120756b6ee6b03664a6a0991e2648536ef50c349da491a304b9cb8e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:21:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-17298dd6484fc724c322f93fea0f9114704d79e8d698b3f1ae6e103e789b53b1-merged.mount: Deactivated successfully.
Oct 14 09:21:53 compute-0 podman[377203]: 2025-10-14 09:21:53.566857 +0000 UTC m=+0.970950851 container remove b6b34fcb120756b6ee6b03664a6a0991e2648536ef50c349da491a304b9cb8e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 09:21:53 compute-0 systemd[1]: libpod-conmon-b6b34fcb120756b6ee6b03664a6a0991e2648536ef50c349da491a304b9cb8e3.scope: Deactivated successfully.
Oct 14 09:21:53 compute-0 sudo[377098]: pam_unix(sudo:session): session closed for user root
Oct 14 09:21:53 compute-0 sudo[377240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:21:53 compute-0 sudo[377240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:21:53 compute-0 sudo[377240]: pam_unix(sudo:session): session closed for user root
Oct 14 09:21:53 compute-0 sudo[377265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:21:53 compute-0 sudo[377265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:21:53 compute-0 sudo[377265]: pam_unix(sudo:session): session closed for user root
Oct 14 09:21:53 compute-0 sudo[377290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:21:53 compute-0 sudo[377290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:21:53 compute-0 sudo[377290]: pam_unix(sudo:session): session closed for user root
Oct 14 09:21:53 compute-0 sudo[377315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:21:53 compute-0 sudo[377315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:21:54 compute-0 nova_compute[259627]: 2025-10-14 09:21:54.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:21:54 compute-0 podman[377381]: 2025-10-14 09:21:54.368091837 +0000 UTC m=+0.053180481 container create 38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 09:21:54 compute-0 systemd[1]: Started libpod-conmon-38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc.scope.
Oct 14 09:21:54 compute-0 podman[377381]: 2025-10-14 09:21:54.346659749 +0000 UTC m=+0.031748423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:21:54 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:21:54 compute-0 podman[377381]: 2025-10-14 09:21:54.456829924 +0000 UTC m=+0.141918628 container init 38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gauss, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:21:54 compute-0 podman[377381]: 2025-10-14 09:21:54.464867692 +0000 UTC m=+0.149956366 container start 38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 09:21:54 compute-0 keen_gauss[377397]: 167 167
Oct 14 09:21:54 compute-0 podman[377381]: 2025-10-14 09:21:54.469372553 +0000 UTC m=+0.154461217 container attach 38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 09:21:54 compute-0 systemd[1]: libpod-38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc.scope: Deactivated successfully.
Oct 14 09:21:54 compute-0 conmon[377397]: conmon 38d540e198a710cce11b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc.scope/container/memory.events
Oct 14 09:21:54 compute-0 podman[377381]: 2025-10-14 09:21:54.471427534 +0000 UTC m=+0.156516178 container died 38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:21:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c636ac04bc16ef9dccbdb091087a7f37b227139acea4409a6b827c14cd237ad6-merged.mount: Deactivated successfully.
Oct 14 09:21:54 compute-0 podman[377381]: 2025-10-14 09:21:54.523696112 +0000 UTC m=+0.208784776 container remove 38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:21:54 compute-0 systemd[1]: libpod-conmon-38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc.scope: Deactivated successfully.
Oct 14 09:21:54 compute-0 podman[377422]: 2025-10-14 09:21:54.754729457 +0000 UTC m=+0.057225922 container create efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_zhukovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:21:54 compute-0 systemd[1]: Started libpod-conmon-efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d.scope.
Oct 14 09:21:54 compute-0 podman[377422]: 2025-10-14 09:21:54.727568057 +0000 UTC m=+0.030064572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:21:54 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85e9246d6b284098734ab4a89dc23e8b46e912246b7feef55993df40d5624cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85e9246d6b284098734ab4a89dc23e8b46e912246b7feef55993df40d5624cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85e9246d6b284098734ab4a89dc23e8b46e912246b7feef55993df40d5624cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85e9246d6b284098734ab4a89dc23e8b46e912246b7feef55993df40d5624cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:21:54 compute-0 podman[377422]: 2025-10-14 09:21:54.869476805 +0000 UTC m=+0.171973270 container init efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:21:54 compute-0 podman[377422]: 2025-10-14 09:21:54.877105573 +0000 UTC m=+0.179602008 container start efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_zhukovsky, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 09:21:54 compute-0 podman[377422]: 2025-10-14 09:21:54.881144972 +0000 UTC m=+0.183641407 container attach efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 14 09:21:55 compute-0 nova_compute[259627]: 2025-10-14 09:21:55.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:55 compute-0 ceph-mon[74249]: pgmap v2051: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]: {
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:         "osd_id": 2,
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:         "type": "bluestore"
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:     },
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:         "osd_id": 1,
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:         "type": "bluestore"
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:     },
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:         "osd_id": 0,
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:         "type": "bluestore"
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]:     }
Oct 14 09:21:55 compute-0 festive_zhukovsky[377439]: }
Oct 14 09:21:55 compute-0 systemd[1]: libpod-efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d.scope: Deactivated successfully.
Oct 14 09:21:55 compute-0 systemd[1]: libpod-efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d.scope: Consumed 1.003s CPU time.
Oct 14 09:21:55 compute-0 podman[377422]: 2025-10-14 09:21:55.875342286 +0000 UTC m=+1.177838721 container died efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:21:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b85e9246d6b284098734ab4a89dc23e8b46e912246b7feef55993df40d5624cb-merged.mount: Deactivated successfully.
Oct 14 09:21:55 compute-0 podman[377422]: 2025-10-14 09:21:55.928399474 +0000 UTC m=+1.230895899 container remove efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:21:55 compute-0 systemd[1]: libpod-conmon-efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d.scope: Deactivated successfully.
Oct 14 09:21:55 compute-0 sudo[377315]: pam_unix(sudo:session): session closed for user root
Oct 14 09:21:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:21:55 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:21:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:21:55 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:21:55 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev d3823705-c579-45fa-8cb9-2b2409dbed64 does not exist
Oct 14 09:21:55 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev ea13efe0-036c-42ba-98b4-3f93ecf1e63b does not exist
Oct 14 09:21:56 compute-0 sudo[377486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:21:56 compute-0 sudo[377486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:21:56 compute-0 sudo[377486]: pam_unix(sudo:session): session closed for user root
Oct 14 09:21:56 compute-0 sudo[377511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:21:56 compute-0 sudo[377511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:21:56 compute-0 sudo[377511]: pam_unix(sudo:session): session closed for user root
Oct 14 09:21:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:21:56 compute-0 unix_chkpwd[377536]: password check failed for user (root)
Oct 14 09:21:56 compute-0 sshd-session[376703]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96  user=root
Oct 14 09:21:56 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:21:56 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:21:56 compute-0 ceph-mon[74249]: pgmap v2052: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:21:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:21:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:21:58 compute-0 nova_compute[259627]: 2025-10-14 09:21:58.364 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:58 compute-0 nova_compute[259627]: 2025-10-14 09:21:58.364 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:58 compute-0 nova_compute[259627]: 2025-10-14 09:21:58.378 2 DEBUG nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:21:58 compute-0 nova_compute[259627]: 2025-10-14 09:21:58.526 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:58 compute-0 nova_compute[259627]: 2025-10-14 09:21:58.527 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:58 compute-0 nova_compute[259627]: 2025-10-14 09:21:58.535 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:21:58 compute-0 nova_compute[259627]: 2025-10-14 09:21:58.536 2 INFO nova.compute.claims [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:21:58 compute-0 nova_compute[259627]: 2025-10-14 09:21:58.651 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:59 compute-0 sshd-session[376703]: Failed password for root from 188.150.249.96 port 41772 ssh2
Oct 14 09:21:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:21:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1854884346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.121 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.131 2 DEBUG nova.compute.provider_tree [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.155 2 DEBUG nova.scheduler.client.report [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.180 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.181 2 DEBUG nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:21:59 compute-0 ceph-mon[74249]: pgmap v2053: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:21:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1854884346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.238 2 DEBUG nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.238 2 DEBUG nova.network.neutron [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.264 2 INFO nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.289 2 DEBUG nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.397 2 DEBUG nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.399 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.400 2 INFO nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Creating image(s)
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.437 2 DEBUG nova.storage.rbd_utils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 50c83173-31e3-4f7a-8836-26e52affd0f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.475 2 DEBUG nova.storage.rbd_utils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 50c83173-31e3-4f7a-8836-26e52affd0f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.512 2 DEBUG nova.storage.rbd_utils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 50c83173-31e3-4f7a-8836-26e52affd0f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.517 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.570 2 DEBUG nova.policy [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.631 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.632 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.633 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.634 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.675 2 DEBUG nova.storage.rbd_utils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 50c83173-31e3-4f7a-8836-26e52affd0f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:21:59 compute-0 podman[377614]: 2025-10-14 09:21:59.67806973 +0000 UTC m=+0.081512920 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.685 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 50c83173-31e3-4f7a-8836-26e52affd0f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:21:59 compute-0 podman[377615]: 2025-10-14 09:21:59.691032209 +0000 UTC m=+0.089629010 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 14 09:21:59 compute-0 nova_compute[259627]: 2025-10-14 09:21:59.983 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 50c83173-31e3-4f7a-8836-26e52affd0f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:00 compute-0 nova_compute[259627]: 2025-10-14 09:22:00.062 2 DEBUG nova.storage.rbd_utils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image 50c83173-31e3-4f7a-8836-26e52affd0f2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:22:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 69 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 1.1 MiB/s wr, 12 op/s
Oct 14 09:22:00 compute-0 nova_compute[259627]: 2025-10-14 09:22:00.188 2 DEBUG nova.objects.instance [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:00 compute-0 nova_compute[259627]: 2025-10-14 09:22:00.206 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:22:00 compute-0 nova_compute[259627]: 2025-10-14 09:22:00.207 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Ensure instance console log exists: /var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:22:00 compute-0 nova_compute[259627]: 2025-10-14 09:22:00.207 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:00 compute-0 nova_compute[259627]: 2025-10-14 09:22:00.208 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:00 compute-0 nova_compute[259627]: 2025-10-14 09:22:00.208 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:00 compute-0 nova_compute[259627]: 2025-10-14 09:22:00.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:00 compute-0 nova_compute[259627]: 2025-10-14 09:22:00.394 2 DEBUG nova.network.neutron [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Successfully created port: 81977d79-f754-42ba-8b3c-c4eb2f9651d2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:22:00 compute-0 sshd-session[376703]: Connection closed by authenticating user root 188.150.249.96 port 41772 [preauth]
Oct 14 09:22:01 compute-0 ceph-mon[74249]: pgmap v2054: 305 pgs: 305 active+clean; 69 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 1.1 MiB/s wr, 12 op/s
Oct 14 09:22:01 compute-0 nova_compute[259627]: 2025-10-14 09:22:01.950 2 DEBUG nova.network.neutron [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Successfully updated port: 81977d79-f754-42ba-8b3c-c4eb2f9651d2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:22:01 compute-0 nova_compute[259627]: 2025-10-14 09:22:01.976 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:22:01 compute-0 nova_compute[259627]: 2025-10-14 09:22:01.976 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:22:01 compute-0 nova_compute[259627]: 2025-10-14 09:22:01.976 2 DEBUG nova.network.neutron [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:22:02 compute-0 nova_compute[259627]: 2025-10-14 09:22:02.156 2 DEBUG nova.compute.manager [req-867bbe83-6936-459c-91de-816f103eaf58 req-9abd66d3-e251-474e-9ba0-283a3eed48cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-changed-81977d79-f754-42ba-8b3c-c4eb2f9651d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:02 compute-0 nova_compute[259627]: 2025-10-14 09:22:02.157 2 DEBUG nova.compute.manager [req-867bbe83-6936-459c-91de-816f103eaf58 req-9abd66d3-e251-474e-9ba0-283a3eed48cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing instance network info cache due to event network-changed-81977d79-f754-42ba-8b3c-c4eb2f9651d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:22:02 compute-0 nova_compute[259627]: 2025-10-14 09:22:02.158 2 DEBUG oslo_concurrency.lockutils [req-867bbe83-6936-459c-91de-816f103eaf58 req-9abd66d3-e251-474e-9ba0-283a3eed48cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:22:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 14 09:22:02 compute-0 nova_compute[259627]: 2025-10-14 09:22:02.494 2 DEBUG nova.network.neutron [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:22:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:22:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:22:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:22:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:22:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:22:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:22:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:22:03 compute-0 ceph-mon[74249]: pgmap v2055: 305 pgs: 305 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 14 09:22:03 compute-0 nova_compute[259627]: 2025-10-14 09:22:03.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:22:04 compute-0 nova_compute[259627]: 2025-10-14 09:22:04.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:05 compute-0 ceph-mon[74249]: pgmap v2056: 305 pgs: 305 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.332 2 DEBUG nova.network.neutron [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.366 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.367 2 DEBUG nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Instance network_info: |[{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.368 2 DEBUG oslo_concurrency.lockutils [req-867bbe83-6936-459c-91de-816f103eaf58 req-9abd66d3-e251-474e-9ba0-283a3eed48cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.369 2 DEBUG nova.network.neutron [req-867bbe83-6936-459c-91de-816f103eaf58 req-9abd66d3-e251-474e-9ba0-283a3eed48cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing network info cache for port 81977d79-f754-42ba-8b3c-c4eb2f9651d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.373 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Start _get_guest_xml network_info=[{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.380 2 WARNING nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.387 2 DEBUG nova.virt.libvirt.host [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.388 2 DEBUG nova.virt.libvirt.host [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.400 2 DEBUG nova.virt.libvirt.host [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.401 2 DEBUG nova.virt.libvirt.host [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.401 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.402 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.403 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.403 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.404 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.404 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.404 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.405 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.405 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.406 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.406 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.407 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.411 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:22:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4206756417' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:22:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:22:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4206756417' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:22:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:05.843 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:22:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:05.844 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:22:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/235126163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.872 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.895 2 DEBUG nova.storage.rbd_utils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:05 compute-0 nova_compute[259627]: 2025-10-14 09:22:05.899 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:22:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/4206756417' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:22:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/4206756417' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:22:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/235126163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:22:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:22:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1606223629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.335 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.337 2 DEBUG nova.virt.libvirt.vif [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:21:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-805281293',display_name='tempest-TestNetworkBasicOps-server-805281293',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-805281293',id=117,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLemNDLm8oqFAwx6pW1SyO2V0WIcpPak25N5nC9IiUcuBV+L+dio1UtnHPJvvKiOY7HNqaGgptTx3l7dHlvUaMKtJM7efD9rBJOYA3Gu+LK7uF+l/NOZF2PoS0o9uk9l4A==',key_name='tempest-TestNetworkBasicOps-1122823922',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ntom2zcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:21:59Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=50c83173-31e3-4f7a-8836-26e52affd0f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.338 2 DEBUG nova.network.os_vif_util [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.338 2 DEBUG nova.network.os_vif_util [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:f2:3f,bridge_name='br-int',has_traffic_filtering=True,id=81977d79-f754-42ba-8b3c-c4eb2f9651d2,network=Network(99e78054-f9f4-417c-a942-d4f9dd534ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81977d79-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.340 2 DEBUG nova.objects.instance [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.356 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:22:06 compute-0 nova_compute[259627]:   <uuid>50c83173-31e3-4f7a-8836-26e52affd0f2</uuid>
Oct 14 09:22:06 compute-0 nova_compute[259627]:   <name>instance-00000075</name>
Oct 14 09:22:06 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:22:06 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:22:06 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkBasicOps-server-805281293</nova:name>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:22:05</nova:creationTime>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:22:06 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:22:06 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:22:06 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:22:06 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:22:06 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:22:06 compute-0 nova_compute[259627]:         <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:22:06 compute-0 nova_compute[259627]:         <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:22:06 compute-0 nova_compute[259627]:         <nova:port uuid="81977d79-f754-42ba-8b3c-c4eb2f9651d2">
Oct 14 09:22:06 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:22:06 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:22:06 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <system>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <entry name="serial">50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <entry name="uuid">50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     </system>
Oct 14 09:22:06 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:22:06 compute-0 nova_compute[259627]:   <os>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:   </os>
Oct 14 09:22:06 compute-0 nova_compute[259627]:   <features>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:   </features>
Oct 14 09:22:06 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:22:06 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:22:06 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk">
Oct 14 09:22:06 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       </source>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:22:06 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config">
Oct 14 09:22:06 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       </source>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:22:06 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:32:f2:3f"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <target dev="tap81977d79-f7"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log" append="off"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <video>
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     </video>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:22:06 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:22:06 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:22:06 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:22:06 compute-0 nova_compute[259627]: </domain>
Oct 14 09:22:06 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.358 2 DEBUG nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Preparing to wait for external event network-vif-plugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.359 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.359 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.360 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.361 2 DEBUG nova.virt.libvirt.vif [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:21:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-805281293',display_name='tempest-TestNetworkBasicOps-server-805281293',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-805281293',id=117,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLemNDLm8oqFAwx6pW1SyO2V0WIcpPak25N5nC9IiUcuBV+L+dio1UtnHPJvvKiOY7HNqaGgptTx3l7dHlvUaMKtJM7efD9rBJOYA3Gu+LK7uF+l/NOZF2PoS0o9uk9l4A==',key_name='tempest-TestNetworkBasicOps-1122823922',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ntom2zcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:21:59Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=50c83173-31e3-4f7a-8836-26e52affd0f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.361 2 DEBUG nova.network.os_vif_util [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.362 2 DEBUG nova.network.os_vif_util [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:f2:3f,bridge_name='br-int',has_traffic_filtering=True,id=81977d79-f754-42ba-8b3c-c4eb2f9651d2,network=Network(99e78054-f9f4-417c-a942-d4f9dd534ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81977d79-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.362 2 DEBUG os_vif [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:f2:3f,bridge_name='br-int',has_traffic_filtering=True,id=81977d79-f754-42ba-8b3c-c4eb2f9651d2,network=Network(99e78054-f9f4-417c-a942-d4f9dd534ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81977d79-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.364 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.364 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.368 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81977d79-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.368 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap81977d79-f7, col_values=(('external_ids', {'iface-id': '81977d79-f754-42ba-8b3c-c4eb2f9651d2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:f2:3f', 'vm-uuid': '50c83173-31e3-4f7a-8836-26e52affd0f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:06 compute-0 NetworkManager[44885]: <info>  [1760433726.4094] manager: (tap81977d79-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/505)
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.417 2 INFO os_vif [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:f2:3f,bridge_name='br-int',has_traffic_filtering=True,id=81977d79-f754-42ba-8b3c-c4eb2f9651d2,network=Network(99e78054-f9f4-417c-a942-d4f9dd534ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81977d79-f7')
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.470 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.470 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.470 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:32:f2:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.471 2 INFO nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Using config drive
Oct 14 09:22:06 compute-0 nova_compute[259627]: 2025-10-14 09:22:06.502 2 DEBUG nova.storage.rbd_utils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.037 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.038 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.038 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:07 compute-0 ceph-mon[74249]: pgmap v2057: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:22:07 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1606223629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:22:07 compute-0 nova_compute[259627]: 2025-10-14 09:22:07.296 2 DEBUG nova.network.neutron [req-867bbe83-6936-459c-91de-816f103eaf58 req-9abd66d3-e251-474e-9ba0-283a3eed48cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updated VIF entry in instance network info cache for port 81977d79-f754-42ba-8b3c-c4eb2f9651d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:22:07 compute-0 nova_compute[259627]: 2025-10-14 09:22:07.297 2 DEBUG nova.network.neutron [req-867bbe83-6936-459c-91de-816f103eaf58 req-9abd66d3-e251-474e-9ba0-283a3eed48cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:22:07 compute-0 nova_compute[259627]: 2025-10-14 09:22:07.307 2 INFO nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Creating config drive at /var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/disk.config
Oct 14 09:22:07 compute-0 nova_compute[259627]: 2025-10-14 09:22:07.315 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg0_cipb2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:07 compute-0 nova_compute[259627]: 2025-10-14 09:22:07.361 2 DEBUG oslo_concurrency.lockutils [req-867bbe83-6936-459c-91de-816f103eaf58 req-9abd66d3-e251-474e-9ba0-283a3eed48cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:22:07 compute-0 nova_compute[259627]: 2025-10-14 09:22:07.471 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg0_cipb2" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:07 compute-0 nova_compute[259627]: 2025-10-14 09:22:07.511 2 DEBUG nova.storage.rbd_utils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:07 compute-0 nova_compute[259627]: 2025-10-14 09:22:07.516 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/disk.config 50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:07 compute-0 nova_compute[259627]: 2025-10-14 09:22:07.697 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/disk.config 50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:07 compute-0 nova_compute[259627]: 2025-10-14 09:22:07.700 2 INFO nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Deleting local config drive /var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/disk.config because it was imported into RBD.
Oct 14 09:22:07 compute-0 kernel: tap81977d79-f7: entered promiscuous mode
Oct 14 09:22:07 compute-0 NetworkManager[44885]: <info>  [1760433727.7694] manager: (tap81977d79-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/506)
Oct 14 09:22:07 compute-0 systemd-udevd[377897]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:22:07 compute-0 nova_compute[259627]: 2025-10-14 09:22:07.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:07 compute-0 ovn_controller[152662]: 2025-10-14T09:22:07Z|01238|binding|INFO|Claiming lport 81977d79-f754-42ba-8b3c-c4eb2f9651d2 for this chassis.
Oct 14 09:22:07 compute-0 ovn_controller[152662]: 2025-10-14T09:22:07Z|01239|binding|INFO|81977d79-f754-42ba-8b3c-c4eb2f9651d2: Claiming fa:16:3e:32:f2:3f 10.100.0.10
Oct 14 09:22:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.820 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:f2:3f 10.100.0.10'], port_security=['fa:16:3e:32:f2:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '50c83173-31e3-4f7a-8836-26e52affd0f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99e78054-f9f4-417c-a942-d4f9dd534ef7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1981aa60-63c9-49df-94e5-0874b5ab31e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=547c8605-a609-4b00-82f5-2d938c7ab8e7, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=81977d79-f754-42ba-8b3c-c4eb2f9651d2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:22:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.822 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 81977d79-f754-42ba-8b3c-c4eb2f9651d2 in datapath 99e78054-f9f4-417c-a942-d4f9dd534ef7 bound to our chassis
Oct 14 09:22:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.823 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99e78054-f9f4-417c-a942-d4f9dd534ef7
Oct 14 09:22:07 compute-0 NetworkManager[44885]: <info>  [1760433727.8258] device (tap81977d79-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:22:07 compute-0 NetworkManager[44885]: <info>  [1760433727.8282] device (tap81977d79-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:22:07 compute-0 systemd-machined[214636]: New machine qemu-148-instance-00000075.
Oct 14 09:22:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.840 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5690eba1-49fe-4ff4-89d9-96a34560e7a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.841 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap99e78054-f1 in ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:22:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.844 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap99e78054-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:22:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.844 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e8d7a2-bace-46b1-91a6-3fd8fe653277]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.845 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1db3e2-7226-42ed-b024-ccfadfea424c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.863 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[f202e370-eda2-4f0a-8eb4-ea7b5ca9bd34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:07 compute-0 systemd[1]: Started Virtual Machine qemu-148-instance-00000075.
Oct 14 09:22:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.892 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a58da886-63ba-4b1e-be16-3a54d29086bb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:07 compute-0 nova_compute[259627]: 2025-10-14 09:22:07.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:07 compute-0 ovn_controller[152662]: 2025-10-14T09:22:07Z|01240|binding|INFO|Setting lport 81977d79-f754-42ba-8b3c-c4eb2f9651d2 ovn-installed in OVS
Oct 14 09:22:07 compute-0 ovn_controller[152662]: 2025-10-14T09:22:07Z|01241|binding|INFO|Setting lport 81977d79-f754-42ba-8b3c-c4eb2f9651d2 up in Southbound
Oct 14 09:22:07 compute-0 nova_compute[259627]: 2025-10-14 09:22:07.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.936 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[18a0e0ca-cda3-4cb7-a469-8631b38d5259]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.944 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6d147e-185f-42a2-abcb-974d0a7829d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:07 compute-0 NetworkManager[44885]: <info>  [1760433727.9461] manager: (tap99e78054-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/507)
Oct 14 09:22:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:22:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.992 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[40613ea7-4454-44c4-98a9-04816932f5f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.997 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[85fd5427-28bd-4881-a829-e03903df0572]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:08 compute-0 sshd-session[377763]: Invalid user ubuntu from 188.150.249.96 port 44416
Oct 14 09:22:08 compute-0 NetworkManager[44885]: <info>  [1760433728.0253] device (tap99e78054-f0): carrier: link connected
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.033 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ff119bcc-e186-4fb1-a263-7330cc385a32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.058 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6195f786-24cf-431b-9c9e-893627b557e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99e78054-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:e4:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 356], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750678, 'reachable_time': 42953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377933, 'error': None, 'target': 'ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.079 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[866acf7e-3bdd-43b9-869e-dcfdc7fc07c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:e4be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750678, 'tstamp': 750678}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377934, 'error': None, 'target': 'ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.100 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dba65036-f907-432e-be5b-2501ae34605e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99e78054-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:e4:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 356], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750678, 'reachable_time': 42953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377935, 'error': None, 'target': 'ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.145 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9be413-3e4d-41da-b9cc-00eeb471eb2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.227 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[54ee4751-a059-408b-b370-f2a36ac57e63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.228 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99e78054-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.229 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.229 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99e78054-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:08 compute-0 nova_compute[259627]: 2025-10-14 09:22:08.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:08 compute-0 NetworkManager[44885]: <info>  [1760433728.2319] manager: (tap99e78054-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/508)
Oct 14 09:22:08 compute-0 kernel: tap99e78054-f0: entered promiscuous mode
Oct 14 09:22:08 compute-0 nova_compute[259627]: 2025-10-14 09:22:08.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.238 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99e78054-f0, col_values=(('external_ids', {'iface-id': '16a7cbd0-b25e-4461-8725-c92979b01f53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:08 compute-0 nova_compute[259627]: 2025-10-14 09:22:08.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:08 compute-0 ovn_controller[152662]: 2025-10-14T09:22:08Z|01242|binding|INFO|Releasing lport 16a7cbd0-b25e-4461-8725-c92979b01f53 from this chassis (sb_readonly=0)
Oct 14 09:22:08 compute-0 nova_compute[259627]: 2025-10-14 09:22:08.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.264 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/99e78054-f9f4-417c-a942-d4f9dd534ef7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/99e78054-f9f4-417c-a942-d4f9dd534ef7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.265 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f36040-488e-47e2-b6c7-5e4b8574e796]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.266 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-99e78054-f9f4-417c-a942-d4f9dd534ef7
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/99e78054-f9f4-417c-a942-d4f9dd534ef7.pid.haproxy
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 99e78054-f9f4-417c-a942-d4f9dd534ef7
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:22:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.268 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7', 'env', 'PROCESS_TAG=haproxy-99e78054-f9f4-417c-a942-d4f9dd534ef7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/99e78054-f9f4-417c-a942-d4f9dd534ef7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:22:08 compute-0 sshd-session[377763]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:22:08 compute-0 sshd-session[377763]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96
Oct 14 09:22:08 compute-0 podman[378008]: 2025-10-14 09:22:08.688325408 +0000 UTC m=+0.040782946 container create 7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 09:22:08 compute-0 systemd[1]: Started libpod-conmon-7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa.scope.
Oct 14 09:22:08 compute-0 podman[378008]: 2025-10-14 09:22:08.666131671 +0000 UTC m=+0.018589239 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:22:08 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:22:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46afafd803e5df435ff3608e3076ec93ae7e5d3420b992909b0759f480792e61/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:22:08 compute-0 podman[378008]: 2025-10-14 09:22:08.79471081 +0000 UTC m=+0.147168388 container init 7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 14 09:22:08 compute-0 podman[378008]: 2025-10-14 09:22:08.801812235 +0000 UTC m=+0.154269794 container start 7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 09:22:08 compute-0 neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7[378024]: [NOTICE]   (378028) : New worker (378030) forked
Oct 14 09:22:08 compute-0 neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7[378024]: [NOTICE]   (378028) : Loading success.
Oct 14 09:22:08 compute-0 nova_compute[259627]: 2025-10-14 09:22:08.893 2 DEBUG nova.compute.manager [req-d63805d2-11d2-487e-9252-a831d3f18566 req-56a3d114-24ef-4589-bb42-9a5aab2aaf68 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-plugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:08 compute-0 nova_compute[259627]: 2025-10-14 09:22:08.893 2 DEBUG oslo_concurrency.lockutils [req-d63805d2-11d2-487e-9252-a831d3f18566 req-56a3d114-24ef-4589-bb42-9a5aab2aaf68 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:08 compute-0 nova_compute[259627]: 2025-10-14 09:22:08.894 2 DEBUG oslo_concurrency.lockutils [req-d63805d2-11d2-487e-9252-a831d3f18566 req-56a3d114-24ef-4589-bb42-9a5aab2aaf68 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:08 compute-0 nova_compute[259627]: 2025-10-14 09:22:08.894 2 DEBUG oslo_concurrency.lockutils [req-d63805d2-11d2-487e-9252-a831d3f18566 req-56a3d114-24ef-4589-bb42-9a5aab2aaf68 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:08 compute-0 nova_compute[259627]: 2025-10-14 09:22:08.895 2 DEBUG nova.compute.manager [req-d63805d2-11d2-487e-9252-a831d3f18566 req-56a3d114-24ef-4589-bb42-9a5aab2aaf68 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Processing event network-vif-plugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:22:08 compute-0 nova_compute[259627]: 2025-10-14 09:22:08.963 2 DEBUG nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:22:08 compute-0 nova_compute[259627]: 2025-10-14 09:22:08.964 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433728.9628048, 50c83173-31e3-4f7a-8836-26e52affd0f2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:08 compute-0 nova_compute[259627]: 2025-10-14 09:22:08.964 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] VM Started (Lifecycle Event)
Oct 14 09:22:08 compute-0 nova_compute[259627]: 2025-10-14 09:22:08.969 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:22:08 compute-0 nova_compute[259627]: 2025-10-14 09:22:08.972 2 INFO nova.virt.libvirt.driver [-] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Instance spawned successfully.
Oct 14 09:22:08 compute-0 nova_compute[259627]: 2025-10-14 09:22:08.973 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.009 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.014 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.066 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.067 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433728.9638407, 50c83173-31e3-4f7a-8836-26e52affd0f2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.067 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] VM Paused (Lifecycle Event)
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.074 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.074 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.075 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.076 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.076 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.077 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.083 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.087 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433728.9693036, 50c83173-31e3-4f7a-8836-26e52affd0f2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.087 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] VM Resumed (Lifecycle Event)
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.103 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.107 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.127 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.137 2 INFO nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Took 9.74 seconds to spawn the instance on the hypervisor.
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.137 2 DEBUG nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.198 2 INFO nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Took 10.70 seconds to build instance.
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.222 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:09 compute-0 ceph-mon[74249]: pgmap v2058: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.540 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.541 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.559 2 DEBUG nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.646 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.647 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.657 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.658 2 INFO nova.compute.claims [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:22:09 compute-0 nova_compute[259627]: 2025-10-14 09:22:09.823 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:09.846 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 978 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Oct 14 09:22:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:22:10 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1529970925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.315 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.323 2 DEBUG nova.compute.provider_tree [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.351 2 DEBUG nova.scheduler.client.report [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.378 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.379 2 DEBUG nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.431 2 DEBUG nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.431 2 DEBUG nova.network.neutron [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.456 2 INFO nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.477 2 DEBUG nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.566 2 DEBUG nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.567 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.567 2 INFO nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Creating image(s)
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.589 2 DEBUG nova.storage.rbd_utils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] rbd image 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.610 2 DEBUG nova.storage.rbd_utils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] rbd image 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.630 2 DEBUG nova.storage.rbd_utils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] rbd image 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.633 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.688 2 DEBUG nova.policy [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7629c3d96333470aa7d7ed5cabfc7e2c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9a71d13aebeb4969b1877a33505f3dc4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.700 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.701 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.702 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.702 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.732 2 DEBUG nova.storage.rbd_utils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] rbd image 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:10 compute-0 nova_compute[259627]: 2025-10-14 09:22:10.737 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:10 compute-0 sshd-session[377763]: Failed password for invalid user ubuntu from 188.150.249.96 port 44416 ssh2
Oct 14 09:22:11 compute-0 nova_compute[259627]: 2025-10-14 09:22:11.008 2 DEBUG nova.compute.manager [req-c12e7a1c-4e9e-4b4e-9eef-76380de5e5ef req-b0eff78d-28bb-45ed-a270-a20ddaaf0a25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-plugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:11 compute-0 nova_compute[259627]: 2025-10-14 09:22:11.009 2 DEBUG oslo_concurrency.lockutils [req-c12e7a1c-4e9e-4b4e-9eef-76380de5e5ef req-b0eff78d-28bb-45ed-a270-a20ddaaf0a25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:11 compute-0 nova_compute[259627]: 2025-10-14 09:22:11.009 2 DEBUG oslo_concurrency.lockutils [req-c12e7a1c-4e9e-4b4e-9eef-76380de5e5ef req-b0eff78d-28bb-45ed-a270-a20ddaaf0a25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:11 compute-0 nova_compute[259627]: 2025-10-14 09:22:11.009 2 DEBUG oslo_concurrency.lockutils [req-c12e7a1c-4e9e-4b4e-9eef-76380de5e5ef req-b0eff78d-28bb-45ed-a270-a20ddaaf0a25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:11 compute-0 nova_compute[259627]: 2025-10-14 09:22:11.009 2 DEBUG nova.compute.manager [req-c12e7a1c-4e9e-4b4e-9eef-76380de5e5ef req-b0eff78d-28bb-45ed-a270-a20ddaaf0a25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] No waiting events found dispatching network-vif-plugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:22:11 compute-0 nova_compute[259627]: 2025-10-14 09:22:11.009 2 WARNING nova.compute.manager [req-c12e7a1c-4e9e-4b4e-9eef-76380de5e5ef req-b0eff78d-28bb-45ed-a270-a20ddaaf0a25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received unexpected event network-vif-plugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 for instance with vm_state active and task_state None.
Oct 14 09:22:11 compute-0 nova_compute[259627]: 2025-10-14 09:22:11.043 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:11 compute-0 nova_compute[259627]: 2025-10-14 09:22:11.099 2 DEBUG nova.storage.rbd_utils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] resizing rbd image 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:22:11 compute-0 nova_compute[259627]: 2025-10-14 09:22:11.197 2 DEBUG nova.objects.instance [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'migration_context' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:11 compute-0 nova_compute[259627]: 2025-10-14 09:22:11.215 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:22:11 compute-0 nova_compute[259627]: 2025-10-14 09:22:11.215 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Ensure instance console log exists: /var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:22:11 compute-0 nova_compute[259627]: 2025-10-14 09:22:11.216 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:11 compute-0 nova_compute[259627]: 2025-10-14 09:22:11.216 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:11 compute-0 nova_compute[259627]: 2025-10-14 09:22:11.217 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:11 compute-0 ceph-mon[74249]: pgmap v2059: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 978 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Oct 14 09:22:11 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1529970925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:22:11 compute-0 nova_compute[259627]: 2025-10-14 09:22:11.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:11 compute-0 nova_compute[259627]: 2025-10-14 09:22:11.968 2 DEBUG nova.network.neutron [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Successfully created port: c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:22:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 738 KiB/s wr, 80 op/s
Oct 14 09:22:12 compute-0 podman[378228]: 2025-10-14 09:22:12.694337002 +0000 UTC m=+0.099021891 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:22:12 compute-0 NetworkManager[44885]: <info>  [1760433732.7308] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/509)
Oct 14 09:22:12 compute-0 NetworkManager[44885]: <info>  [1760433732.7323] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/510)
Oct 14 09:22:12 compute-0 nova_compute[259627]: 2025-10-14 09:22:12.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:12 compute-0 podman[378227]: 2025-10-14 09:22:12.750666581 +0000 UTC m=+0.148992363 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 09:22:12 compute-0 ovn_controller[152662]: 2025-10-14T09:22:12Z|01243|binding|INFO|Releasing lport 16a7cbd0-b25e-4461-8725-c92979b01f53 from this chassis (sb_readonly=0)
Oct 14 09:22:12 compute-0 nova_compute[259627]: 2025-10-14 09:22:12.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:12 compute-0 nova_compute[259627]: 2025-10-14 09:22:12.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:22:13 compute-0 nova_compute[259627]: 2025-10-14 09:22:13.109 2 DEBUG nova.compute.manager [req-ca58edd4-7c23-49a8-8d87-133a5dfe5ed8 req-c1db2ff9-bb50-4e87-938e-227a25f7a34b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-changed-81977d79-f754-42ba-8b3c-c4eb2f9651d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:13 compute-0 nova_compute[259627]: 2025-10-14 09:22:13.109 2 DEBUG nova.compute.manager [req-ca58edd4-7c23-49a8-8d87-133a5dfe5ed8 req-c1db2ff9-bb50-4e87-938e-227a25f7a34b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing instance network info cache due to event network-changed-81977d79-f754-42ba-8b3c-c4eb2f9651d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:22:13 compute-0 nova_compute[259627]: 2025-10-14 09:22:13.110 2 DEBUG oslo_concurrency.lockutils [req-ca58edd4-7c23-49a8-8d87-133a5dfe5ed8 req-c1db2ff9-bb50-4e87-938e-227a25f7a34b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:22:13 compute-0 nova_compute[259627]: 2025-10-14 09:22:13.110 2 DEBUG oslo_concurrency.lockutils [req-ca58edd4-7c23-49a8-8d87-133a5dfe5ed8 req-c1db2ff9-bb50-4e87-938e-227a25f7a34b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:22:13 compute-0 nova_compute[259627]: 2025-10-14 09:22:13.111 2 DEBUG nova.network.neutron [req-ca58edd4-7c23-49a8-8d87-133a5dfe5ed8 req-c1db2ff9-bb50-4e87-938e-227a25f7a34b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing network info cache for port 81977d79-f754-42ba-8b3c-c4eb2f9651d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:22:13 compute-0 ceph-mon[74249]: pgmap v2060: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 738 KiB/s wr, 80 op/s
Oct 14 09:22:13 compute-0 sshd-session[377763]: Connection closed by invalid user ubuntu 188.150.249.96 port 44416 [preauth]
Oct 14 09:22:13 compute-0 nova_compute[259627]: 2025-10-14 09:22:13.702 2 DEBUG nova.network.neutron [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Successfully updated port: c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:22:13 compute-0 nova_compute[259627]: 2025-10-14 09:22:13.743 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:22:13 compute-0 nova_compute[259627]: 2025-10-14 09:22:13.744 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquired lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:22:13 compute-0 nova_compute[259627]: 2025-10-14 09:22:13.744 2 DEBUG nova.network.neutron [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:22:13 compute-0 nova_compute[259627]: 2025-10-14 09:22:13.849 2 DEBUG nova.compute.manager [req-34f24064-90b8-41a8-b1b0-d074335622a5 req-b0dfe08d-b5c0-4a2a-be26-a8bee21a7363 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-changed-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:13 compute-0 nova_compute[259627]: 2025-10-14 09:22:13.850 2 DEBUG nova.compute.manager [req-34f24064-90b8-41a8-b1b0-d074335622a5 req-b0dfe08d-b5c0-4a2a-be26-a8bee21a7363 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Refreshing instance network info cache due to event network-changed-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:22:13 compute-0 nova_compute[259627]: 2025-10-14 09:22:13.850 2 DEBUG oslo_concurrency.lockutils [req-34f24064-90b8-41a8-b1b0-d074335622a5 req-b0dfe08d-b5c0-4a2a-be26-a8bee21a7363 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:22:14 compute-0 nova_compute[259627]: 2025-10-14 09:22:14.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 67 op/s
Oct 14 09:22:14 compute-0 nova_compute[259627]: 2025-10-14 09:22:14.603 2 DEBUG nova.network.neutron [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:22:14 compute-0 nova_compute[259627]: 2025-10-14 09:22:14.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:15 compute-0 ceph-mon[74249]: pgmap v2061: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 67 op/s
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.819 2 DEBUG nova.network.neutron [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Updating instance_info_cache with network_info: [{"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.823 2 DEBUG nova.network.neutron [req-ca58edd4-7c23-49a8-8d87-133a5dfe5ed8 req-c1db2ff9-bb50-4e87-938e-227a25f7a34b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updated VIF entry in instance network info cache for port 81977d79-f754-42ba-8b3c-c4eb2f9651d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.824 2 DEBUG nova.network.neutron [req-ca58edd4-7c23-49a8-8d87-133a5dfe5ed8 req-c1db2ff9-bb50-4e87-938e-227a25f7a34b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.850 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Releasing lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.850 2 DEBUG nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Instance network_info: |[{"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.851 2 DEBUG oslo_concurrency.lockutils [req-34f24064-90b8-41a8-b1b0-d074335622a5 req-b0dfe08d-b5c0-4a2a-be26-a8bee21a7363 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.851 2 DEBUG nova.network.neutron [req-34f24064-90b8-41a8-b1b0-d074335622a5 req-b0dfe08d-b5c0-4a2a-be26-a8bee21a7363 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Refreshing network info cache for port c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.857 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Start _get_guest_xml network_info=[{"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.859 2 DEBUG oslo_concurrency.lockutils [req-ca58edd4-7c23-49a8-8d87-133a5dfe5ed8 req-c1db2ff9-bb50-4e87-938e-227a25f7a34b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.864 2 WARNING nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.877 2 DEBUG nova.virt.libvirt.host [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.878 2 DEBUG nova.virt.libvirt.host [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.883 2 DEBUG nova.virt.libvirt.host [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.884 2 DEBUG nova.virt.libvirt.host [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.885 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.885 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.886 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.886 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.887 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.887 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.888 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.888 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.889 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.889 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.890 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.891 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:22:15 compute-0 nova_compute[259627]: 2025-10-14 09:22:15.896 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 134 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 14 09:22:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:22:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2665617067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.402 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.426 2 DEBUG nova.storage.rbd_utils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] rbd image 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.432 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:22:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2120005278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.865 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.867 2 DEBUG nova.virt.libvirt.vif [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:22:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1481398530',display_name='tempest-TestServerAdvancedOps-server-1481398530',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1481398530',id=118,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9a71d13aebeb4969b1877a33505f3dc4',ramdisk_id='',reservation_id='r-okkb6j7x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-20904479',owner_user_name='tempest-TestServerAdvancedOps-209044
79-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:22:10Z,user_data=None,user_id='7629c3d96333470aa7d7ed5cabfc7e2c',uuid=725ed629-f7d5-4a69-be5e-4cae3eef2e2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.867 2 DEBUG nova.network.os_vif_util [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converting VIF {"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.868 2 DEBUG nova.network.os_vif_util [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.870 2 DEBUG nova.objects.instance [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.885 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:22:16 compute-0 nova_compute[259627]:   <uuid>725ed629-f7d5-4a69-be5e-4cae3eef2e2e</uuid>
Oct 14 09:22:16 compute-0 nova_compute[259627]:   <name>instance-00000076</name>
Oct 14 09:22:16 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:22:16 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:22:16 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <nova:name>tempest-TestServerAdvancedOps-server-1481398530</nova:name>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:22:15</nova:creationTime>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:22:16 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:22:16 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:22:16 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:22:16 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:22:16 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:22:16 compute-0 nova_compute[259627]:         <nova:user uuid="7629c3d96333470aa7d7ed5cabfc7e2c">tempest-TestServerAdvancedOps-20904479-project-member</nova:user>
Oct 14 09:22:16 compute-0 nova_compute[259627]:         <nova:project uuid="9a71d13aebeb4969b1877a33505f3dc4">tempest-TestServerAdvancedOps-20904479</nova:project>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:22:16 compute-0 nova_compute[259627]:         <nova:port uuid="c47025b4-9051-4cc4-9fb7-70cd59d6c5c5">
Oct 14 09:22:16 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:22:16 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:22:16 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <system>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <entry name="serial">725ed629-f7d5-4a69-be5e-4cae3eef2e2e</entry>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <entry name="uuid">725ed629-f7d5-4a69-be5e-4cae3eef2e2e</entry>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     </system>
Oct 14 09:22:16 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:22:16 compute-0 nova_compute[259627]:   <os>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:   </os>
Oct 14 09:22:16 compute-0 nova_compute[259627]:   <features>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:   </features>
Oct 14 09:22:16 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:22:16 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:22:16 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk">
Oct 14 09:22:16 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       </source>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:22:16 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk.config">
Oct 14 09:22:16 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       </source>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:22:16 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:39:7a:fe"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <target dev="tapc47025b4-90"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e/console.log" append="off"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <video>
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     </video>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:22:16 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:22:16 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:22:16 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:22:16 compute-0 nova_compute[259627]: </domain>
Oct 14 09:22:16 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.887 2 DEBUG nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Preparing to wait for external event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.887 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.888 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.888 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.889 2 DEBUG nova.virt.libvirt.vif [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:22:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1481398530',display_name='tempest-TestServerAdvancedOps-server-1481398530',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1481398530',id=118,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9a71d13aebeb4969b1877a33505f3dc4',ramdisk_id='',reservation_id='r-okkb6j7x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-20904479',owner_user_name='tempest-TestServerAdvanced
Ops-20904479-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:22:10Z,user_data=None,user_id='7629c3d96333470aa7d7ed5cabfc7e2c',uuid=725ed629-f7d5-4a69-be5e-4cae3eef2e2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.889 2 DEBUG nova.network.os_vif_util [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converting VIF {"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.890 2 DEBUG nova.network.os_vif_util [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.890 2 DEBUG os_vif [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.891 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.892 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.895 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc47025b4-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.896 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc47025b4-90, col_values=(('external_ids', {'iface-id': 'c47025b4-9051-4cc4-9fb7-70cd59d6c5c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:7a:fe', 'vm-uuid': '725ed629-f7d5-4a69-be5e-4cae3eef2e2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:16 compute-0 NetworkManager[44885]: <info>  [1760433736.8985] manager: (tapc47025b4-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/511)
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.905 2 INFO os_vif [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90')
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.959 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.960 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.960 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] No VIF found with MAC fa:16:3e:39:7a:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.960 2 INFO nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Using config drive
Oct 14 09:22:16 compute-0 nova_compute[259627]: 2025-10-14 09:22:16.979 2 DEBUG nova.storage.rbd_utils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] rbd image 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:17 compute-0 ceph-mon[74249]: pgmap v2062: 305 pgs: 305 active+clean; 134 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 14 09:22:17 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2665617067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:22:17 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2120005278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:22:17 compute-0 nova_compute[259627]: 2025-10-14 09:22:17.408 2 DEBUG nova.network.neutron [req-34f24064-90b8-41a8-b1b0-d074335622a5 req-b0dfe08d-b5c0-4a2a-be26-a8bee21a7363 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Updated VIF entry in instance network info cache for port c47025b4-9051-4cc4-9fb7-70cd59d6c5c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:22:17 compute-0 nova_compute[259627]: 2025-10-14 09:22:17.409 2 DEBUG nova.network.neutron [req-34f24064-90b8-41a8-b1b0-d074335622a5 req-b0dfe08d-b5c0-4a2a-be26-a8bee21a7363 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Updating instance_info_cache with network_info: [{"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:22:17 compute-0 nova_compute[259627]: 2025-10-14 09:22:17.427 2 DEBUG oslo_concurrency.lockutils [req-34f24064-90b8-41a8-b1b0-d074335622a5 req-b0dfe08d-b5c0-4a2a-be26-a8bee21a7363 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:22:17 compute-0 nova_compute[259627]: 2025-10-14 09:22:17.471 2 INFO nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Creating config drive at /var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e/disk.config
Oct 14 09:22:17 compute-0 nova_compute[259627]: 2025-10-14 09:22:17.476 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_kwluz70 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:17 compute-0 nova_compute[259627]: 2025-10-14 09:22:17.641 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_kwluz70" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:17 compute-0 nova_compute[259627]: 2025-10-14 09:22:17.683 2 DEBUG nova.storage.rbd_utils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] rbd image 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:17 compute-0 nova_compute[259627]: 2025-10-14 09:22:17.687 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e/disk.config 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:17 compute-0 nova_compute[259627]: 2025-10-14 09:22:17.897 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e/disk.config 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:17 compute-0 nova_compute[259627]: 2025-10-14 09:22:17.899 2 INFO nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Deleting local config drive /var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e/disk.config because it was imported into RBD.
Oct 14 09:22:17 compute-0 kernel: tapc47025b4-90: entered promiscuous mode
Oct 14 09:22:17 compute-0 NetworkManager[44885]: <info>  [1760433737.9502] manager: (tapc47025b4-90): new Tun device (/org/freedesktop/NetworkManager/Devices/512)
Oct 14 09:22:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:22:17 compute-0 nova_compute[259627]: 2025-10-14 09:22:17.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:17 compute-0 ovn_controller[152662]: 2025-10-14T09:22:17Z|01244|binding|INFO|Claiming lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for this chassis.
Oct 14 09:22:17 compute-0 ovn_controller[152662]: 2025-10-14T09:22:17Z|01245|binding|INFO|c47025b4-9051-4cc4-9fb7-70cd59d6c5c5: Claiming fa:16:3e:39:7a:fe 10.100.0.13
Oct 14 09:22:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:17.970 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:7a:fe 10.100.0.13'], port_security=['fa:16:3e:39:7a:fe 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '725ed629-f7d5-4a69-be5e-4cae3eef2e2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15c99679-b8ca-4f31-bf52-be40d7b4f023', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a71d13aebeb4969b1877a33505f3dc4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eaea1bee-fc2e-4983-8a2c-80f8f52b9e8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27060d93-efe1-4dd7-9738-9f62e5f1629d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:22:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:17.971 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 in datapath 15c99679-b8ca-4f31-bf52-be40d7b4f023 bound to our chassis
Oct 14 09:22:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:17.972 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 15c99679-b8ca-4f31-bf52-be40d7b4f023 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:22:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:17.973 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[002bf8fb-8b0a-48bf-99f4-d51bdaeb31ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:17 compute-0 systemd-udevd[378407]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:22:18 compute-0 NetworkManager[44885]: <info>  [1760433738.0006] device (tapc47025b4-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:22:18 compute-0 NetworkManager[44885]: <info>  [1760433738.0016] device (tapc47025b4-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:22:18 compute-0 systemd-machined[214636]: New machine qemu-149-instance-00000076.
Oct 14 09:22:18 compute-0 ovn_controller[152662]: 2025-10-14T09:22:18Z|01246|binding|INFO|Setting lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 ovn-installed in OVS
Oct 14 09:22:18 compute-0 ovn_controller[152662]: 2025-10-14T09:22:18Z|01247|binding|INFO|Setting lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 up in Southbound
Oct 14 09:22:18 compute-0 nova_compute[259627]: 2025-10-14 09:22:17.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:18 compute-0 nova_compute[259627]: 2025-10-14 09:22:18.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:18 compute-0 systemd[1]: Started Virtual Machine qemu-149-instance-00000076.
Oct 14 09:22:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 134 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:22:18 compute-0 nova_compute[259627]: 2025-10-14 09:22:18.842 2 DEBUG nova.compute.manager [req-5f1980b0-9ad1-404f-b16a-f825530ff255 req-f709dd0b-73b6-48a7-abb6-7d4aa5162aba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:18 compute-0 nova_compute[259627]: 2025-10-14 09:22:18.842 2 DEBUG oslo_concurrency.lockutils [req-5f1980b0-9ad1-404f-b16a-f825530ff255 req-f709dd0b-73b6-48a7-abb6-7d4aa5162aba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:18 compute-0 nova_compute[259627]: 2025-10-14 09:22:18.842 2 DEBUG oslo_concurrency.lockutils [req-5f1980b0-9ad1-404f-b16a-f825530ff255 req-f709dd0b-73b6-48a7-abb6-7d4aa5162aba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:18 compute-0 nova_compute[259627]: 2025-10-14 09:22:18.842 2 DEBUG oslo_concurrency.lockutils [req-5f1980b0-9ad1-404f-b16a-f825530ff255 req-f709dd0b-73b6-48a7-abb6-7d4aa5162aba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:18 compute-0 nova_compute[259627]: 2025-10-14 09:22:18.843 2 DEBUG nova.compute.manager [req-5f1980b0-9ad1-404f-b16a-f825530ff255 req-f709dd0b-73b6-48a7-abb6-7d4aa5162aba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Processing event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.053 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433739.0525393, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.053 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Started (Lifecycle Event)
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.056 2 DEBUG nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.060 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.064 2 INFO nova.virt.libvirt.driver [-] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Instance spawned successfully.
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.064 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.087 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.092 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.096 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.097 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.097 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.097 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.098 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.098 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.133 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.134 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433739.0560546, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.134 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Paused (Lifecycle Event)
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.168 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.172 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433739.0601325, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.172 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Resumed (Lifecycle Event)
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.178 2 INFO nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Took 8.61 seconds to spawn the instance on the hypervisor.
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.179 2 DEBUG nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.189 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.192 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.215 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.230 2 INFO nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Took 9.62 seconds to build instance.
Oct 14 09:22:19 compute-0 nova_compute[259627]: 2025-10-14 09:22:19.247 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:19 compute-0 ceph-mon[74249]: pgmap v2063: 305 pgs: 305 active+clean; 134 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:22:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 146 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 140 op/s
Oct 14 09:22:20 compute-0 nova_compute[259627]: 2025-10-14 09:22:20.959 2 DEBUG nova.compute.manager [req-0374ea22-21f9-4813-b823-e221ded226e0 req-2101ff57-4b4a-4332-aea5-1226ff6eeef4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:20 compute-0 nova_compute[259627]: 2025-10-14 09:22:20.960 2 DEBUG oslo_concurrency.lockutils [req-0374ea22-21f9-4813-b823-e221ded226e0 req-2101ff57-4b4a-4332-aea5-1226ff6eeef4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:20 compute-0 nova_compute[259627]: 2025-10-14 09:22:20.961 2 DEBUG oslo_concurrency.lockutils [req-0374ea22-21f9-4813-b823-e221ded226e0 req-2101ff57-4b4a-4332-aea5-1226ff6eeef4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:20 compute-0 nova_compute[259627]: 2025-10-14 09:22:20.961 2 DEBUG oslo_concurrency.lockutils [req-0374ea22-21f9-4813-b823-e221ded226e0 req-2101ff57-4b4a-4332-aea5-1226ff6eeef4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:20 compute-0 nova_compute[259627]: 2025-10-14 09:22:20.961 2 DEBUG nova.compute.manager [req-0374ea22-21f9-4813-b823-e221ded226e0 req-2101ff57-4b4a-4332-aea5-1226ff6eeef4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:22:20 compute-0 nova_compute[259627]: 2025-10-14 09:22:20.962 2 WARNING nova.compute.manager [req-0374ea22-21f9-4813-b823-e221ded226e0 req-2101ff57-4b4a-4332-aea5-1226ff6eeef4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state active and task_state None.
Oct 14 09:22:21 compute-0 ovn_controller[152662]: 2025-10-14T09:22:21Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:32:f2:3f 10.100.0.10
Oct 14 09:22:21 compute-0 ovn_controller[152662]: 2025-10-14T09:22:21Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:f2:3f 10.100.0.10
Oct 14 09:22:21 compute-0 ceph-mon[74249]: pgmap v2064: 305 pgs: 305 active+clean; 146 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 140 op/s
Oct 14 09:22:21 compute-0 nova_compute[259627]: 2025-10-14 09:22:21.869 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "6810b29b-088f-441b-8a6a-02eaafada0c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:21 compute-0 nova_compute[259627]: 2025-10-14 09:22:21.870 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:21 compute-0 nova_compute[259627]: 2025-10-14 09:22:21.890 2 DEBUG nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:22:21 compute-0 nova_compute[259627]: 2025-10-14 09:22:21.897 2 DEBUG nova.objects.instance [None req-664a07a7-66ad-437a-a9fd-81f3cdee4e83 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:21 compute-0 nova_compute[259627]: 2025-10-14 09:22:21.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:21 compute-0 nova_compute[259627]: 2025-10-14 09:22:21.924 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433741.923434, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:21 compute-0 nova_compute[259627]: 2025-10-14 09:22:21.924 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Paused (Lifecycle Event)
Oct 14 09:22:21 compute-0 nova_compute[259627]: 2025-10-14 09:22:21.954 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:21 compute-0 nova_compute[259627]: 2025-10-14 09:22:21.965 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:22:21 compute-0 nova_compute[259627]: 2025-10-14 09:22:21.983 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 14 09:22:21 compute-0 nova_compute[259627]: 2025-10-14 09:22:21.987 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:21 compute-0 nova_compute[259627]: 2025-10-14 09:22:21.988 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:21 compute-0 nova_compute[259627]: 2025-10-14 09:22:21.999 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.000 2 INFO nova.compute.claims [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:22:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 156 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.8 MiB/s wr, 149 op/s
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.181 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:22 compute-0 sshd-session[378270]: Invalid user pi from 188.150.249.96 port 47166
Oct 14 09:22:22 compute-0 kernel: tapc47025b4-90 (unregistering): left promiscuous mode
Oct 14 09:22:22 compute-0 NetworkManager[44885]: <info>  [1760433742.2978] device (tapc47025b4-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:22:22 compute-0 ovn_controller[152662]: 2025-10-14T09:22:22Z|01248|binding|INFO|Releasing lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 from this chassis (sb_readonly=0)
Oct 14 09:22:22 compute-0 ovn_controller[152662]: 2025-10-14T09:22:22Z|01249|binding|INFO|Setting lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 down in Southbound
Oct 14 09:22:22 compute-0 ovn_controller[152662]: 2025-10-14T09:22:22Z|01250|binding|INFO|Removing iface tapc47025b4-90 ovn-installed in OVS
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:22.354 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:7a:fe 10.100.0.13'], port_security=['fa:16:3e:39:7a:fe 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '725ed629-f7d5-4a69-be5e-4cae3eef2e2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15c99679-b8ca-4f31-bf52-be40d7b4f023', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a71d13aebeb4969b1877a33505f3dc4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eaea1bee-fc2e-4983-8a2c-80f8f52b9e8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27060d93-efe1-4dd7-9738-9f62e5f1629d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:22:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:22.355 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 in datapath 15c99679-b8ca-4f31-bf52-be40d7b4f023 unbound from our chassis
Oct 14 09:22:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:22.357 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 15c99679-b8ca-4f31-bf52-be40d7b4f023 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:22:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:22.357 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[870e9281-a935-4a86-8ef8-3399b886042a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:22 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct 14 09:22:22 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000076.scope: Consumed 4.041s CPU time.
Oct 14 09:22:22 compute-0 systemd-machined[214636]: Machine qemu-149-instance-00000076 terminated.
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.478 2 DEBUG nova.compute.manager [None req-664a07a7-66ad-437a-a9fd-81f3cdee4e83 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:22:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1830324951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.683 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.691 2 DEBUG nova.compute.provider_tree [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.711 2 DEBUG nova.scheduler.client.report [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.733 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.734 2 DEBUG nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.779 2 DEBUG nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.780 2 DEBUG nova.network.neutron [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.797 2 INFO nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.813 2 DEBUG nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:22:22 compute-0 sshd-session[378270]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:22:22 compute-0 sshd-session[378270]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.913 2 DEBUG nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.915 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.915 2 INFO nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Creating image(s)
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.941 2 DEBUG nova.storage.rbd_utils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 6810b29b-088f-441b-8a6a-02eaafada0c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:22:22 compute-0 nova_compute[259627]: 2025-10-14 09:22:22.970 2 DEBUG nova.storage.rbd_utils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 6810b29b-088f-441b-8a6a-02eaafada0c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.000 2 DEBUG nova.storage.rbd_utils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 6810b29b-088f-441b-8a6a-02eaafada0c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.006 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.060 2 DEBUG nova.policy [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f232ab535af04111bf570569aa293116', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4112adc84657452aa0e117ac5999054a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.064 2 DEBUG nova.compute.manager [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-unplugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.064 2 DEBUG oslo_concurrency.lockutils [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.065 2 DEBUG oslo_concurrency.lockutils [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.067 2 DEBUG oslo_concurrency.lockutils [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.067 2 DEBUG nova.compute.manager [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-unplugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.067 2 WARNING nova.compute.manager [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-unplugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state suspended and task_state None.
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.067 2 DEBUG nova.compute.manager [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.068 2 DEBUG oslo_concurrency.lockutils [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.068 2 DEBUG oslo_concurrency.lockutils [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.068 2 DEBUG oslo_concurrency.lockutils [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.069 2 DEBUG nova.compute.manager [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.069 2 WARNING nova.compute.manager [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state suspended and task_state None.
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.109 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.110 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.110 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.110 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.135 2 DEBUG nova.storage.rbd_utils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 6810b29b-088f-441b-8a6a-02eaafada0c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.139 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 6810b29b-088f-441b-8a6a-02eaafada0c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:23 compute-0 ceph-mon[74249]: pgmap v2065: 305 pgs: 305 active+clean; 156 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.8 MiB/s wr, 149 op/s
Oct 14 09:22:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1830324951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.465 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 6810b29b-088f-441b-8a6a-02eaafada0c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.547 2 DEBUG nova.storage.rbd_utils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] resizing rbd image 6810b29b-088f-441b-8a6a-02eaafada0c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.655 2 DEBUG nova.objects.instance [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lazy-loading 'migration_context' on Instance uuid 6810b29b-088f-441b-8a6a-02eaafada0c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.674 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.674 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Ensure instance console log exists: /var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.675 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.676 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:23 compute-0 nova_compute[259627]: 2025-10-14 09:22:23.676 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:24 compute-0 nova_compute[259627]: 2025-10-14 09:22:24.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 156 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.8 MiB/s wr, 122 op/s
Oct 14 09:22:24 compute-0 nova_compute[259627]: 2025-10-14 09:22:24.200 2 DEBUG nova.network.neutron [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Successfully created port: 6d5e10b7-5c07-4389-8916-e7c277cb2c88 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:22:24 compute-0 nova_compute[259627]: 2025-10-14 09:22:24.651 2 INFO nova.compute.manager [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Resuming
Oct 14 09:22:24 compute-0 nova_compute[259627]: 2025-10-14 09:22:24.652 2 DEBUG nova.objects.instance [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'flavor' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:24 compute-0 nova_compute[259627]: 2025-10-14 09:22:24.684 2 DEBUG oslo_concurrency.lockutils [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:22:24 compute-0 nova_compute[259627]: 2025-10-14 09:22:24.684 2 DEBUG oslo_concurrency.lockutils [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquired lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:22:24 compute-0 nova_compute[259627]: 2025-10-14 09:22:24.685 2 DEBUG nova.network.neutron [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:22:24 compute-0 nova_compute[259627]: 2025-10-14 09:22:24.802 2 DEBUG nova.network.neutron [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Successfully updated port: 6d5e10b7-5c07-4389-8916-e7c277cb2c88 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:22:24 compute-0 nova_compute[259627]: 2025-10-14 09:22:24.823 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:22:24 compute-0 nova_compute[259627]: 2025-10-14 09:22:24.824 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquired lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:22:24 compute-0 nova_compute[259627]: 2025-10-14 09:22:24.824 2 DEBUG nova.network.neutron [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:22:24 compute-0 nova_compute[259627]: 2025-10-14 09:22:24.913 2 DEBUG nova.compute.manager [req-d74d26cf-e749-488c-a1d8-d4fb5806a39a req-7a612672-f0e0-478e-a83c-d2268b8875a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received event network-changed-6d5e10b7-5c07-4389-8916-e7c277cb2c88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:24 compute-0 nova_compute[259627]: 2025-10-14 09:22:24.913 2 DEBUG nova.compute.manager [req-d74d26cf-e749-488c-a1d8-d4fb5806a39a req-7a612672-f0e0-478e-a83c-d2268b8875a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Refreshing instance network info cache due to event network-changed-6d5e10b7-5c07-4389-8916-e7c277cb2c88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:22:24 compute-0 nova_compute[259627]: 2025-10-14 09:22:24.914 2 DEBUG oslo_concurrency.lockutils [req-d74d26cf-e749-488c-a1d8-d4fb5806a39a req-7a612672-f0e0-478e-a83c-d2268b8875a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:22:25 compute-0 nova_compute[259627]: 2025-10-14 09:22:25.006 2 DEBUG nova.network.neutron [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:22:25 compute-0 sshd-session[378270]: Failed password for invalid user pi from 188.150.249.96 port 47166 ssh2
Oct 14 09:22:25 compute-0 ceph-mon[74249]: pgmap v2066: 305 pgs: 305 active+clean; 156 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.8 MiB/s wr, 122 op/s
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.104 2 DEBUG nova.network.neutron [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updating instance_info_cache with network_info: [{"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.130 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Releasing lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.130 2 DEBUG nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Instance network_info: |[{"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.131 2 DEBUG oslo_concurrency.lockutils [req-d74d26cf-e749-488c-a1d8-d4fb5806a39a req-7a612672-f0e0-478e-a83c-d2268b8875a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.132 2 DEBUG nova.network.neutron [req-d74d26cf-e749-488c-a1d8-d4fb5806a39a req-7a612672-f0e0-478e-a83c-d2268b8875a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Refreshing network info cache for port 6d5e10b7-5c07-4389-8916-e7c277cb2c88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.137 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Start _get_guest_xml network_info=[{"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.144 2 WARNING nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.149 2 DEBUG nova.virt.libvirt.host [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.150 2 DEBUG nova.virt.libvirt.host [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.157 2 DEBUG nova.virt.libvirt.host [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.158 2 DEBUG nova.virt.libvirt.host [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.158 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.158 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.159 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.159 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.160 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.160 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.160 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.160 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.161 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.161 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.161 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.161 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.165 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 213 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 196 op/s
Oct 14 09:22:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:22:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3711984747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:22:26 compute-0 sshd-session[378270]: Connection closed by invalid user pi 188.150.249.96 port 47166 [preauth]
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.625 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.661 2 DEBUG nova.storage.rbd_utils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 6810b29b-088f-441b-8a6a-02eaafada0c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.665 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.711 2 DEBUG nova.network.neutron [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Updating instance_info_cache with network_info: [{"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.730 2 DEBUG oslo_concurrency.lockutils [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Releasing lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.739 2 DEBUG nova.virt.libvirt.vif [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:22:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1481398530',display_name='tempest-TestServerAdvancedOps-server-1481398530',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1481398530',id=118,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='9a71d13aebeb4969b1877a33505f3dc4',ramdisk_id='',reservation_id='r-okkb6j7x',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-20904479',owner_user_name='tempest-TestServerAdvancedOps-20904479-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:22Z,user_data=None,user_id='7629c3d96333470aa7d7ed5cabfc7e2c',uuid=725ed629-f7d5-4a69-be5e-4cae3eef2e2e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.739 2 DEBUG nova.network.os_vif_util [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converting VIF {"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.741 2 DEBUG nova.network.os_vif_util [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.741 2 DEBUG os_vif [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.743 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.743 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.747 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc47025b4-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.747 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc47025b4-90, col_values=(('external_ids', {'iface-id': 'c47025b4-9051-4cc4-9fb7-70cd59d6c5c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:7a:fe', 'vm-uuid': '725ed629-f7d5-4a69-be5e-4cae3eef2e2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.748 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.749 2 INFO os_vif [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90')
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.780 2 DEBUG nova.objects.instance [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'numa_topology' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:26 compute-0 kernel: tapc47025b4-90: entered promiscuous mode
Oct 14 09:22:26 compute-0 NetworkManager[44885]: <info>  [1760433746.8696] manager: (tapc47025b4-90): new Tun device (/org/freedesktop/NetworkManager/Devices/513)
Oct 14 09:22:26 compute-0 ovn_controller[152662]: 2025-10-14T09:22:26Z|01251|binding|INFO|Claiming lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for this chassis.
Oct 14 09:22:26 compute-0 ovn_controller[152662]: 2025-10-14T09:22:26Z|01252|binding|INFO|c47025b4-9051-4cc4-9fb7-70cd59d6c5c5: Claiming fa:16:3e:39:7a:fe 10.100.0.13
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:26.882 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:7a:fe 10.100.0.13'], port_security=['fa:16:3e:39:7a:fe 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '725ed629-f7d5-4a69-be5e-4cae3eef2e2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15c99679-b8ca-4f31-bf52-be40d7b4f023', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a71d13aebeb4969b1877a33505f3dc4', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'eaea1bee-fc2e-4983-8a2c-80f8f52b9e8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27060d93-efe1-4dd7-9738-9f62e5f1629d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:22:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:26.884 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 in datapath 15c99679-b8ca-4f31-bf52-be40d7b4f023 bound to our chassis
Oct 14 09:22:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:26.885 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 15c99679-b8ca-4f31-bf52-be40d7b4f023 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:22:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:26.888 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bb71dd9f-5dad-4094-aae5-b39881f80186]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:26 compute-0 ovn_controller[152662]: 2025-10-14T09:22:26Z|01253|binding|INFO|Setting lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 ovn-installed in OVS
Oct 14 09:22:26 compute-0 ovn_controller[152662]: 2025-10-14T09:22:26Z|01254|binding|INFO|Setting lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 up in Southbound
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:26 compute-0 nova_compute[259627]: 2025-10-14 09:22:26.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:26 compute-0 systemd-machined[214636]: New machine qemu-150-instance-00000076.
Oct 14 09:22:26 compute-0 systemd[1]: Started Virtual Machine qemu-150-instance-00000076.
Oct 14 09:22:26 compute-0 systemd-udevd[378743]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:22:26 compute-0 NetworkManager[44885]: <info>  [1760433746.9647] device (tapc47025b4-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:22:26 compute-0 NetworkManager[44885]: <info>  [1760433746.9667] device (tapc47025b4-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.072 2 DEBUG nova.compute.manager [req-6808029d-df3f-4d0f-925b-8eef989dd50c req-fd71bba4-8ecf-4349-aa31-7cb4eb67a0aa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.073 2 DEBUG oslo_concurrency.lockutils [req-6808029d-df3f-4d0f-925b-8eef989dd50c req-fd71bba4-8ecf-4349-aa31-7cb4eb67a0aa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.073 2 DEBUG oslo_concurrency.lockutils [req-6808029d-df3f-4d0f-925b-8eef989dd50c req-fd71bba4-8ecf-4349-aa31-7cb4eb67a0aa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.074 2 DEBUG oslo_concurrency.lockutils [req-6808029d-df3f-4d0f-925b-8eef989dd50c req-fd71bba4-8ecf-4349-aa31-7cb4eb67a0aa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.074 2 DEBUG nova.compute.manager [req-6808029d-df3f-4d0f-925b-8eef989dd50c req-fd71bba4-8ecf-4349-aa31-7cb4eb67a0aa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.075 2 WARNING nova.compute.manager [req-6808029d-df3f-4d0f-925b-8eef989dd50c req-fd71bba4-8ecf-4349-aa31-7cb4eb67a0aa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state suspended and task_state resuming.
Oct 14 09:22:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:22:27 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2028329480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.146 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.147 2 DEBUG nova.virt.libvirt.vif [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:22:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1951271072',display_name='tempest-TestSnapshotPattern-server-1951271072',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1951271072',id=119,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/LFMmap4OB0txXt7JPmRqMoCntTC6tPJ8UMtOaZfJpNHOk397YhzNpC3bp7HY+1PlJdNNwm/PwDw4vAnX0LcroehUvKxClr2ruduakP/QJ599/XDk5NHuriBpOA/llJg==',key_name='tempest-TestSnapshotPattern-1883373955',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4112adc84657452aa0e117ac5999054a',ramdisk_id='',reservation_id='r-51g4schc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-70687399',owner_user_name='tempest-TestSnapshotPattern-70687399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:22:22Z,user_data=None,user_id='f232ab535af04111bf570569aa293116',uuid=6810b29b-088f-441b-8a6a-02eaafada0c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.148 2 DEBUG nova.network.os_vif_util [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converting VIF {"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.148 2 DEBUG nova.network.os_vif_util [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:22:e5,bridge_name='br-int',has_traffic_filtering=True,id=6d5e10b7-5c07-4389-8916-e7c277cb2c88,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d5e10b7-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.149 2 DEBUG nova.objects.instance [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lazy-loading 'pci_devices' on Instance uuid 6810b29b-088f-441b-8a6a-02eaafada0c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.164 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:22:27 compute-0 nova_compute[259627]:   <uuid>6810b29b-088f-441b-8a6a-02eaafada0c5</uuid>
Oct 14 09:22:27 compute-0 nova_compute[259627]:   <name>instance-00000077</name>
Oct 14 09:22:27 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:22:27 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:22:27 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <nova:name>tempest-TestSnapshotPattern-server-1951271072</nova:name>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:22:26</nova:creationTime>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:22:27 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:22:27 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:22:27 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:22:27 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:22:27 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:22:27 compute-0 nova_compute[259627]:         <nova:user uuid="f232ab535af04111bf570569aa293116">tempest-TestSnapshotPattern-70687399-project-member</nova:user>
Oct 14 09:22:27 compute-0 nova_compute[259627]:         <nova:project uuid="4112adc84657452aa0e117ac5999054a">tempest-TestSnapshotPattern-70687399</nova:project>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:22:27 compute-0 nova_compute[259627]:         <nova:port uuid="6d5e10b7-5c07-4389-8916-e7c277cb2c88">
Oct 14 09:22:27 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:22:27 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:22:27 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <system>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <entry name="serial">6810b29b-088f-441b-8a6a-02eaafada0c5</entry>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <entry name="uuid">6810b29b-088f-441b-8a6a-02eaafada0c5</entry>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     </system>
Oct 14 09:22:27 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:22:27 compute-0 nova_compute[259627]:   <os>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:   </os>
Oct 14 09:22:27 compute-0 nova_compute[259627]:   <features>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:   </features>
Oct 14 09:22:27 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:22:27 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:22:27 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/6810b29b-088f-441b-8a6a-02eaafada0c5_disk">
Oct 14 09:22:27 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       </source>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:22:27 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/6810b29b-088f-441b-8a6a-02eaafada0c5_disk.config">
Oct 14 09:22:27 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       </source>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:22:27 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:7c:22:e5"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <target dev="tap6d5e10b7-5c"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5/console.log" append="off"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <video>
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     </video>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:22:27 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:22:27 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:22:27 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:22:27 compute-0 nova_compute[259627]: </domain>
Oct 14 09:22:27 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.170 2 DEBUG nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Preparing to wait for external event network-vif-plugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.170 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.170 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.171 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.171 2 DEBUG nova.virt.libvirt.vif [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:22:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1951271072',display_name='tempest-TestSnapshotPattern-server-1951271072',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1951271072',id=119,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/LFMmap4OB0txXt7JPmRqMoCntTC6tPJ8UMtOaZfJpNHOk397YhzNpC3bp7HY+1PlJdNNwm/PwDw4vAnX0LcroehUvKxClr2ruduakP/QJ599/XDk5NHuriBpOA/llJg==',key_name='tempest-TestSnapshotPattern-1883373955',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4112adc84657452aa0e117ac5999054a',ramdisk_id='',reservation_id='r-51g4schc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-70687399',owner_user_name='tempest-TestSnapshotPattern-70687399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:22:22Z,user_data=None,user_id='f232ab535af04111bf570569aa293116',uuid=6810b29b-088f-441b-8a6a-02eaafada0c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.172 2 DEBUG nova.network.os_vif_util [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converting VIF {"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.172 2 DEBUG nova.network.os_vif_util [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:22:e5,bridge_name='br-int',has_traffic_filtering=True,id=6d5e10b7-5c07-4389-8916-e7c277cb2c88,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d5e10b7-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.173 2 DEBUG os_vif [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:22:e5,bridge_name='br-int',has_traffic_filtering=True,id=6d5e10b7-5c07-4389-8916-e7c277cb2c88,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d5e10b7-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.173 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.174 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.176 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d5e10b7-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.177 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6d5e10b7-5c, col_values=(('external_ids', {'iface-id': '6d5e10b7-5c07-4389-8916-e7c277cb2c88', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:22:e5', 'vm-uuid': '6810b29b-088f-441b-8a6a-02eaafada0c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:27 compute-0 NetworkManager[44885]: <info>  [1760433747.1794] manager: (tap6d5e10b7-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/514)
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.185 2 INFO os_vif [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:22:e5,bridge_name='br-int',has_traffic_filtering=True,id=6d5e10b7-5c07-4389-8916-e7c277cb2c88,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d5e10b7-5c')
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.226 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.227 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.227 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] No VIF found with MAC fa:16:3e:7c:22:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.227 2 INFO nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Using config drive
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.247 2 DEBUG nova.storage.rbd_utils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 6810b29b-088f-441b-8a6a-02eaafada0c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:27 compute-0 ceph-mon[74249]: pgmap v2067: 305 pgs: 305 active+clean; 213 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 196 op/s
Oct 14 09:22:27 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3711984747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:22:27 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2028329480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.618 2 DEBUG nova.network.neutron [req-d74d26cf-e749-488c-a1d8-d4fb5806a39a req-7a612672-f0e0-478e-a83c-d2268b8875a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updated VIF entry in instance network info cache for port 6d5e10b7-5c07-4389-8916-e7c277cb2c88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.619 2 DEBUG nova.network.neutron [req-d74d26cf-e749-488c-a1d8-d4fb5806a39a req-7a612672-f0e0-478e-a83c-d2268b8875a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updating instance_info_cache with network_info: [{"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.642 2 DEBUG oslo_concurrency.lockutils [req-d74d26cf-e749-488c-a1d8-d4fb5806a39a req-7a612672-f0e0-478e-a83c-d2268b8875a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.758 2 INFO nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Creating config drive at /var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5/disk.config
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.766 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk82t929i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.828 2 INFO nova.compute.manager [None req-5fdcc3e5-4265-4647-a757-02b7bc666d62 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Get console output
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.837 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.938 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk82t929i" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.975 2 DEBUG nova.storage.rbd_utils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 6810b29b-088f-441b-8a6a-02eaafada0c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:27 compute-0 nova_compute[259627]: 2025-10-14 09:22:27.980 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5/disk.config 6810b29b-088f-441b-8a6a-02eaafada0c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.145 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 725ed629-f7d5-4a69-be5e-4cae3eef2e2e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.147 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433748.1448653, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.147 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Started (Lifecycle Event)
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.164 2 DEBUG nova.compute.manager [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.165 2 DEBUG nova.objects.instance [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.168 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.173 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:22:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 213 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.179 2 INFO nova.virt.libvirt.driver [-] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Instance running successfully.
Oct 14 09:22:28 compute-0 virtqemud[259351]: argument unsupported: QEMU guest agent is not configured
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.184 2 DEBUG nova.virt.libvirt.guest [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.185 2 DEBUG nova.compute.manager [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.190 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.191 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433748.1503003, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.191 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Resumed (Lifecycle Event)
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.192 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5/disk.config 6810b29b-088f-441b-8a6a-02eaafada0c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.193 2 INFO nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Deleting local config drive /var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5/disk.config because it was imported into RBD.
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.221 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.224 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:22:28 compute-0 kernel: tap6d5e10b7-5c: entered promiscuous mode
Oct 14 09:22:28 compute-0 systemd-udevd[378745]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:22:28 compute-0 NetworkManager[44885]: <info>  [1760433748.2507] manager: (tap6d5e10b7-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/515)
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.259 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:28 compute-0 ovn_controller[152662]: 2025-10-14T09:22:28Z|01255|binding|INFO|Claiming lport 6d5e10b7-5c07-4389-8916-e7c277cb2c88 for this chassis.
Oct 14 09:22:28 compute-0 ovn_controller[152662]: 2025-10-14T09:22:28Z|01256|binding|INFO|6d5e10b7-5c07-4389-8916-e7c277cb2c88: Claiming fa:16:3e:7c:22:e5 10.100.0.12
Oct 14 09:22:28 compute-0 NetworkManager[44885]: <info>  [1760433748.2702] device (tap6d5e10b7-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:22:28 compute-0 NetworkManager[44885]: <info>  [1760433748.2716] device (tap6d5e10b7-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.273 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:22:e5 10.100.0.12'], port_security=['fa:16:3e:7c:22:e5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6810b29b-088f-441b-8a6a-02eaafada0c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4112adc84657452aa0e117ac5999054a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b7a53172-9b5e-49ee-bb03-aeca0d4a8fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=add5cdec-6440-4df9-aea8-21659d7bab06, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=6d5e10b7-5c07-4389-8916-e7c277cb2c88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.274 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 6d5e10b7-5c07-4389-8916-e7c277cb2c88 in datapath 4fc37d66-193b-4ab7-80e3-58e26dc76e47 bound to our chassis
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.276 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fc37d66-193b-4ab7-80e3-58e26dc76e47
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.287 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1d502e11-6a82-4c04-80eb-686fa8035106]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.288 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4fc37d66-11 in ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.289 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4fc37d66-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.289 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[936ee26b-10e0-4d62-a6d2-3543b29fa1b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.291 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b06f47-9c9b-4dd0-b4a7-68790d2b4024]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:28 compute-0 ovn_controller[152662]: 2025-10-14T09:22:28Z|01257|binding|INFO|Setting lport 6d5e10b7-5c07-4389-8916-e7c277cb2c88 ovn-installed in OVS
Oct 14 09:22:28 compute-0 ovn_controller[152662]: 2025-10-14T09:22:28Z|01258|binding|INFO|Setting lport 6d5e10b7-5c07-4389-8916-e7c277cb2c88 up in Southbound
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:28 compute-0 systemd-machined[214636]: New machine qemu-151-instance-00000077.
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:28 compute-0 systemd[1]: Started Virtual Machine qemu-151-instance-00000077.
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.309 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[da1dc5d3-0b62-4e67-a3e1-5b9fe22ee88a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.324 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[98b1d6a9-a549-4f28-97d8-579f9d9de3de]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.364 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b370313a-1013-42fd-95ce-a4c6a1bc1897]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:28 compute-0 NetworkManager[44885]: <info>  [1760433748.3703] manager: (tap4fc37d66-10): new Veth device (/org/freedesktop/NetworkManager/Devices/516)
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.371 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf30a98-4c70-480b-95fc-185673066865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.414 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a74dc1-6086-4327-8006-e044ef2facb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.417 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa9e2a2-4330-4e36-a30a-79d743f67021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:28 compute-0 NetworkManager[44885]: <info>  [1760433748.4423] device (tap4fc37d66-10): carrier: link connected
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.454 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c58cfa19-b05b-4163-9780-7e3c21aaeea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.471 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[acf42113-100e-481b-9e6c-05da349a22b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fc37d66-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:1e:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752720, 'reachable_time': 16651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378901, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.489 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3aca4b12-e207-49f8-95e9-b3671046ea9b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe00:1e16'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752720, 'tstamp': 752720}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378902, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.509 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[674d4976-020c-4ebe-8b4e-d983b944b958]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fc37d66-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:1e:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752720, 'reachable_time': 16651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378903, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.544 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7ac8c3-ff2b-4310-b18b-162dcc27bca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.618 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6be66286-f4d5-404e-a769-ed9267a34036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.620 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fc37d66-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.620 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.621 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fc37d66-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:28 compute-0 kernel: tap4fc37d66-10: entered promiscuous mode
Oct 14 09:22:28 compute-0 NetworkManager[44885]: <info>  [1760433748.6236] manager: (tap4fc37d66-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/517)
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.628 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fc37d66-10, col_values=(('external_ids', {'iface-id': '04719e6c-d55b-4ad7-a45c-52e6e59101ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:28 compute-0 ovn_controller[152662]: 2025-10-14T09:22:28Z|01259|binding|INFO|Releasing lport 04719e6c-d55b-4ad7-a45c-52e6e59101ab from this chassis (sb_readonly=0)
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.634 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4fc37d66-193b-4ab7-80e3-58e26dc76e47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4fc37d66-193b-4ab7-80e3-58e26dc76e47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.636 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[85e35188-f373-45e5-9fa6-c02fc5beaadd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.637 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-4fc37d66-193b-4ab7-80e3-58e26dc76e47
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/4fc37d66-193b-4ab7-80e3-58e26dc76e47.pid.haproxy
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 4fc37d66-193b-4ab7-80e3-58e26dc76e47
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:22:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.640 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'env', 'PROCESS_TAG=haproxy-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4fc37d66-193b-4ab7-80e3-58e26dc76e47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:22:28 compute-0 nova_compute[259627]: 2025-10-14 09:22:28.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:29 compute-0 podman[378978]: 2025-10-14 09:22:29.02334929 +0000 UTC m=+0.050979851 container create a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:22:29 compute-0 systemd[1]: Started libpod-conmon-a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e.scope.
Oct 14 09:22:29 compute-0 podman[378978]: 2025-10-14 09:22:28.99544381 +0000 UTC m=+0.023074411 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:22:29 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de954708f8f3fdfb0e31660e7186517bbce587741a2e68b98ff8eba35c2ae10b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:22:29 compute-0 podman[378978]: 2025-10-14 09:22:29.124896139 +0000 UTC m=+0.152526800 container init a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:22:29 compute-0 podman[378978]: 2025-10-14 09:22:29.141578291 +0000 UTC m=+0.169208862 container start a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.164 2 DEBUG nova.compute.manager [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.164 2 DEBUG oslo_concurrency.lockutils [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.164 2 DEBUG oslo_concurrency.lockutils [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.165 2 DEBUG oslo_concurrency.lockutils [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.165 2 DEBUG nova.compute.manager [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.165 2 WARNING nova.compute.manager [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state active and task_state None.
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.165 2 DEBUG nova.compute.manager [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received event network-vif-plugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.165 2 DEBUG oslo_concurrency.lockutils [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.165 2 DEBUG oslo_concurrency.lockutils [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.166 2 DEBUG oslo_concurrency.lockutils [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.166 2 DEBUG nova.compute.manager [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Processing event network-vif-plugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.166 2 DEBUG nova.compute.manager [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received event network-vif-plugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.166 2 DEBUG oslo_concurrency.lockutils [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.166 2 DEBUG oslo_concurrency.lockutils [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.166 2 DEBUG oslo_concurrency.lockutils [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.166 2 DEBUG nova.compute.manager [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] No waiting events found dispatching network-vif-plugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.167 2 WARNING nova.compute.manager [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received unexpected event network-vif-plugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 for instance with vm_state building and task_state spawning.
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:29 compute-0 neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47[378994]: [NOTICE]   (378998) : New worker (379000) forked
Oct 14 09:22:29 compute-0 neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47[378994]: [NOTICE]   (378998) : Loading success.
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.319 2 DEBUG nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.319 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433749.3186138, 6810b29b-088f-441b-8a6a-02eaafada0c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.320 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] VM Started (Lifecycle Event)
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.324 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.328 2 INFO nova.virt.libvirt.driver [-] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Instance spawned successfully.
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.328 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.341 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.350 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.353 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.353 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.354 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.354 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.354 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.355 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:29 compute-0 ceph-mon[74249]: pgmap v2068: 305 pgs: 305 active+clean; 213 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.383 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.383 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433749.319528, 6810b29b-088f-441b-8a6a-02eaafada0c5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.384 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] VM Paused (Lifecycle Event)
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.422 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.427 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433749.3241954, 6810b29b-088f-441b-8a6a-02eaafada0c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.427 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] VM Resumed (Lifecycle Event)
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.448 2 INFO nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Took 6.53 seconds to spawn the instance on the hypervisor.
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.448 2 DEBUG nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.455 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.458 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.499 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.524 2 INFO nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Took 7.58 seconds to build instance.
Oct 14 09:22:29 compute-0 nova_compute[259627]: 2025-10-14 09:22:29.543 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 213 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 171 op/s
Oct 14 09:22:30 compute-0 nova_compute[259627]: 2025-10-14 09:22:30.649 2 DEBUG nova.objects.instance [None req-0c1638d0-0dff-4b85-b950-232854edefaa 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:30 compute-0 podman[379010]: 2025-10-14 09:22:30.677041541 +0000 UTC m=+0.077984508 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:22:30 compute-0 nova_compute[259627]: 2025-10-14 09:22:30.679 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433750.6793401, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:30 compute-0 nova_compute[259627]: 2025-10-14 09:22:30.679 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Paused (Lifecycle Event)
Oct 14 09:22:30 compute-0 nova_compute[259627]: 2025-10-14 09:22:30.707 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:30 compute-0 nova_compute[259627]: 2025-10-14 09:22:30.711 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:22:30 compute-0 podman[379009]: 2025-10-14 09:22:30.728598095 +0000 UTC m=+0.126157408 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:22:30 compute-0 nova_compute[259627]: 2025-10-14 09:22:30.749 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 14 09:22:31 compute-0 kernel: tapc47025b4-90 (unregistering): left promiscuous mode
Oct 14 09:22:31 compute-0 NetworkManager[44885]: <info>  [1760433751.0477] device (tapc47025b4-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:22:31 compute-0 nova_compute[259627]: 2025-10-14 09:22:31.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:31 compute-0 ovn_controller[152662]: 2025-10-14T09:22:31Z|01260|binding|INFO|Releasing lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 from this chassis (sb_readonly=0)
Oct 14 09:22:31 compute-0 ovn_controller[152662]: 2025-10-14T09:22:31Z|01261|binding|INFO|Setting lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 down in Southbound
Oct 14 09:22:31 compute-0 ovn_controller[152662]: 2025-10-14T09:22:31Z|01262|binding|INFO|Removing iface tapc47025b4-90 ovn-installed in OVS
Oct 14 09:22:31 compute-0 nova_compute[259627]: 2025-10-14 09:22:31.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:31.074 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:7a:fe 10.100.0.13'], port_security=['fa:16:3e:39:7a:fe 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '725ed629-f7d5-4a69-be5e-4cae3eef2e2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15c99679-b8ca-4f31-bf52-be40d7b4f023', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a71d13aebeb4969b1877a33505f3dc4', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'eaea1bee-fc2e-4983-8a2c-80f8f52b9e8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27060d93-efe1-4dd7-9738-9f62e5f1629d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:22:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:31.077 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 in datapath 15c99679-b8ca-4f31-bf52-be40d7b4f023 unbound from our chassis
Oct 14 09:22:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:31.078 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 15c99679-b8ca-4f31-bf52-be40d7b4f023 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:22:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:31.079 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e579a2e0-4c68-4494-9f74-1569b113eb16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:31 compute-0 nova_compute[259627]: 2025-10-14 09:22:31.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:31 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct 14 09:22:31 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000076.scope: Consumed 3.606s CPU time.
Oct 14 09:22:31 compute-0 systemd-machined[214636]: Machine qemu-150-instance-00000076 terminated.
Oct 14 09:22:31 compute-0 nova_compute[259627]: 2025-10-14 09:22:31.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:31 compute-0 nova_compute[259627]: 2025-10-14 09:22:31.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:31 compute-0 nova_compute[259627]: 2025-10-14 09:22:31.225 2 DEBUG nova.compute.manager [None req-0c1638d0-0dff-4b85-b950-232854edefaa 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:31 compute-0 nova_compute[259627]: 2025-10-14 09:22:31.355 2 DEBUG nova.compute.manager [req-87080679-9e03-4cb5-963f-fb2ea7b25edd req-7da96a39-7710-4de5-b914-0286c00caa88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-unplugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:31 compute-0 nova_compute[259627]: 2025-10-14 09:22:31.357 2 DEBUG oslo_concurrency.lockutils [req-87080679-9e03-4cb5-963f-fb2ea7b25edd req-7da96a39-7710-4de5-b914-0286c00caa88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:31 compute-0 nova_compute[259627]: 2025-10-14 09:22:31.357 2 DEBUG oslo_concurrency.lockutils [req-87080679-9e03-4cb5-963f-fb2ea7b25edd req-7da96a39-7710-4de5-b914-0286c00caa88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:31 compute-0 nova_compute[259627]: 2025-10-14 09:22:31.358 2 DEBUG oslo_concurrency.lockutils [req-87080679-9e03-4cb5-963f-fb2ea7b25edd req-7da96a39-7710-4de5-b914-0286c00caa88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:31 compute-0 nova_compute[259627]: 2025-10-14 09:22:31.358 2 DEBUG nova.compute.manager [req-87080679-9e03-4cb5-963f-fb2ea7b25edd req-7da96a39-7710-4de5-b914-0286c00caa88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-unplugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:22:31 compute-0 nova_compute[259627]: 2025-10-14 09:22:31.358 2 WARNING nova.compute.manager [req-87080679-9e03-4cb5-963f-fb2ea7b25edd req-7da96a39-7710-4de5-b914-0286c00caa88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-unplugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state suspended and task_state None.
Oct 14 09:22:31 compute-0 ceph-mon[74249]: pgmap v2069: 305 pgs: 305 active+clean; 213 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 171 op/s
Oct 14 09:22:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 214 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 173 op/s
Oct 14 09:22:32 compute-0 nova_compute[259627]: 2025-10-14 09:22:32.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:32 compute-0 nova_compute[259627]: 2025-10-14 09:22:32.391 2 DEBUG oslo_concurrency.lockutils [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "interface-50c83173-31e3-4f7a-8836-26e52affd0f2-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:32 compute-0 nova_compute[259627]: 2025-10-14 09:22:32.392 2 DEBUG oslo_concurrency.lockutils [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "interface-50c83173-31e3-4f7a-8836-26e52affd0f2-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:32 compute-0 nova_compute[259627]: 2025-10-14 09:22:32.392 2 DEBUG nova.objects.instance [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'flavor' on Instance uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:22:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:22:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:22:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:22:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:22:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:22:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:22:32
Oct 14 09:22:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:22:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:22:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', '.rgw.root']
Oct 14 09:22:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:22:32 compute-0 nova_compute[259627]: 2025-10-14 09:22:32.944 2 DEBUG nova.objects.instance [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_requests' on Instance uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:22:32 compute-0 nova_compute[259627]: 2025-10-14 09:22:32.959 2 DEBUG nova.network.neutron [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:22:33 compute-0 nova_compute[259627]: 2025-10-14 09:22:33.085 2 INFO nova.compute.manager [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Resuming
Oct 14 09:22:33 compute-0 nova_compute[259627]: 2025-10-14 09:22:33.086 2 DEBUG nova.objects.instance [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'flavor' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:33 compute-0 nova_compute[259627]: 2025-10-14 09:22:33.117 2 DEBUG oslo_concurrency.lockutils [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:22:33 compute-0 nova_compute[259627]: 2025-10-14 09:22:33.117 2 DEBUG oslo_concurrency.lockutils [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquired lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:22:33 compute-0 nova_compute[259627]: 2025-10-14 09:22:33.117 2 DEBUG nova.network.neutron [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:22:33 compute-0 nova_compute[259627]: 2025-10-14 09:22:33.166 2 DEBUG nova.policy [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:22:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:22:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:22:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:22:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:22:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:22:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:22:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:22:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:22:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:22:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:22:33 compute-0 ceph-mon[74249]: pgmap v2070: 305 pgs: 305 active+clean; 214 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 173 op/s
Oct 14 09:22:33 compute-0 ceph-mgr[74543]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3625056923
Oct 14 09:22:34 compute-0 nova_compute[259627]: 2025-10-14 09:22:34.108 2 DEBUG nova.compute.manager [req-b2051e75-52ef-4d3c-a257-ef2af26e0c60 req-0eceeda7-48d8-4676-bead-07a5051a5740 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:34 compute-0 nova_compute[259627]: 2025-10-14 09:22:34.108 2 DEBUG oslo_concurrency.lockutils [req-b2051e75-52ef-4d3c-a257-ef2af26e0c60 req-0eceeda7-48d8-4676-bead-07a5051a5740 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:34 compute-0 nova_compute[259627]: 2025-10-14 09:22:34.109 2 DEBUG oslo_concurrency.lockutils [req-b2051e75-52ef-4d3c-a257-ef2af26e0c60 req-0eceeda7-48d8-4676-bead-07a5051a5740 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:34 compute-0 nova_compute[259627]: 2025-10-14 09:22:34.109 2 DEBUG oslo_concurrency.lockutils [req-b2051e75-52ef-4d3c-a257-ef2af26e0c60 req-0eceeda7-48d8-4676-bead-07a5051a5740 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:34 compute-0 nova_compute[259627]: 2025-10-14 09:22:34.109 2 DEBUG nova.compute.manager [req-b2051e75-52ef-4d3c-a257-ef2af26e0c60 req-0eceeda7-48d8-4676-bead-07a5051a5740 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:22:34 compute-0 nova_compute[259627]: 2025-10-14 09:22:34.110 2 WARNING nova.compute.manager [req-b2051e75-52ef-4d3c-a257-ef2af26e0c60 req-0eceeda7-48d8-4676-bead-07a5051a5740 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state suspended and task_state resuming.
Oct 14 09:22:34 compute-0 nova_compute[259627]: 2025-10-14 09:22:34.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 214 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.9 MiB/s wr, 125 op/s
Oct 14 09:22:34 compute-0 nova_compute[259627]: 2025-10-14 09:22:34.276 2 DEBUG nova.network.neutron [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Successfully created port: ff2b9b74-a6fc-4774-89d2-9c010f121d65 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.028 2 DEBUG nova.network.neutron [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Updating instance_info_cache with network_info: [{"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.044 2 DEBUG oslo_concurrency.lockutils [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Releasing lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.052 2 DEBUG nova.virt.libvirt.vif [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:22:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1481398530',display_name='tempest-TestServerAdvancedOps-server-1481398530',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1481398530',id=118,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='9a71d13aebeb4969b1877a33505f3dc4',ramdisk_id='',reservation_id='r-okkb6j7x',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-20904479',owner_user_name='tempest-TestServerAdvancedOps-20904479-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:31Z,user_data=None,user_id='7629c3d96333470aa7d7ed5cabfc7e2c',uuid=725ed629-f7d5-4a69-be5e-4cae3eef2e2e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.053 2 DEBUG nova.network.os_vif_util [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converting VIF {"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.054 2 DEBUG nova.network.os_vif_util [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.055 2 DEBUG os_vif [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.057 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.058 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.064 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc47025b4-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.065 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc47025b4-90, col_values=(('external_ids', {'iface-id': 'c47025b4-9051-4cc4-9fb7-70cd59d6c5c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:7a:fe', 'vm-uuid': '725ed629-f7d5-4a69-be5e-4cae3eef2e2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.066 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.067 2 INFO os_vif [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90')
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.087 2 DEBUG nova.objects.instance [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'numa_topology' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:35 compute-0 kernel: tapc47025b4-90: entered promiscuous mode
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:35 compute-0 NetworkManager[44885]: <info>  [1760433755.1717] manager: (tapc47025b4-90): new Tun device (/org/freedesktop/NetworkManager/Devices/518)
Oct 14 09:22:35 compute-0 ovn_controller[152662]: 2025-10-14T09:22:35Z|01263|binding|INFO|Claiming lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for this chassis.
Oct 14 09:22:35 compute-0 ovn_controller[152662]: 2025-10-14T09:22:35Z|01264|binding|INFO|c47025b4-9051-4cc4-9fb7-70cd59d6c5c5: Claiming fa:16:3e:39:7a:fe 10.100.0.13
Oct 14 09:22:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:35.179 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:7a:fe 10.100.0.13'], port_security=['fa:16:3e:39:7a:fe 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '725ed629-f7d5-4a69-be5e-4cae3eef2e2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15c99679-b8ca-4f31-bf52-be40d7b4f023', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a71d13aebeb4969b1877a33505f3dc4', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'eaea1bee-fc2e-4983-8a2c-80f8f52b9e8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27060d93-efe1-4dd7-9738-9f62e5f1629d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:22:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:35.181 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 in datapath 15c99679-b8ca-4f31-bf52-be40d7b4f023 bound to our chassis
Oct 14 09:22:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:35.182 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 15c99679-b8ca-4f31-bf52-be40d7b4f023 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:22:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:35.182 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[57685871-7e3f-42cb-8662-a66f58956c3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:35 compute-0 systemd-udevd[379080]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:35 compute-0 ovn_controller[152662]: 2025-10-14T09:22:35Z|01265|binding|INFO|Setting lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 ovn-installed in OVS
Oct 14 09:22:35 compute-0 ovn_controller[152662]: 2025-10-14T09:22:35Z|01266|binding|INFO|Setting lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 up in Southbound
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:35 compute-0 NetworkManager[44885]: <info>  [1760433755.2072] device (tapc47025b4-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:22:35 compute-0 NetworkManager[44885]: <info>  [1760433755.2080] device (tapc47025b4-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:22:35 compute-0 systemd-machined[214636]: New machine qemu-152-instance-00000076.
Oct 14 09:22:35 compute-0 systemd[1]: Started Virtual Machine qemu-152-instance-00000076.
Oct 14 09:22:35 compute-0 ceph-mon[74249]: pgmap v2071: 305 pgs: 305 active+clean; 214 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.9 MiB/s wr, 125 op/s
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.860 2 DEBUG nova.network.neutron [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Successfully updated port: ff2b9b74-a6fc-4774-89d2-9c010f121d65 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.901 2 DEBUG oslo_concurrency.lockutils [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.902 2 DEBUG oslo_concurrency.lockutils [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.902 2 DEBUG nova.network.neutron [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.962 2 DEBUG nova.compute.manager [req-4208c145-34ab-4200-a4bd-58cc3c10af95 req-692c48f1-b7a3-4ff7-a30e-7c6ba01d4333 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-changed-ff2b9b74-a6fc-4774-89d2-9c010f121d65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.963 2 DEBUG nova.compute.manager [req-4208c145-34ab-4200-a4bd-58cc3c10af95 req-692c48f1-b7a3-4ff7-a30e-7c6ba01d4333 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing instance network info cache due to event network-changed-ff2b9b74-a6fc-4774-89d2-9c010f121d65. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:22:35 compute-0 nova_compute[259627]: 2025-10-14 09:22:35.964 2 DEBUG oslo_concurrency.lockutils [req-4208c145-34ab-4200-a4bd-58cc3c10af95 req-692c48f1-b7a3-4ff7-a30e-7c6ba01d4333 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.196 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 725ed629-f7d5-4a69-be5e-4cae3eef2e2e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.197 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433756.1961772, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.198 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Started (Lifecycle Event)
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.205 2 DEBUG nova.compute.manager [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received event network-changed-6d5e10b7-5c07-4389-8916-e7c277cb2c88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.205 2 DEBUG nova.compute.manager [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Refreshing instance network info cache due to event network-changed-6d5e10b7-5c07-4389-8916-e7c277cb2c88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.206 2 DEBUG oslo_concurrency.lockutils [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.206 2 DEBUG oslo_concurrency.lockutils [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.206 2 DEBUG nova.network.neutron [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Refreshing network info cache for port 6d5e10b7-5c07-4389-8916-e7c277cb2c88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:22:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 214 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 155 op/s
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.220 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.229 2 DEBUG nova.compute.manager [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.229 2 DEBUG nova.objects.instance [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.232 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.254 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.255 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433756.205775, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.255 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Resumed (Lifecycle Event)
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.257 2 INFO nova.virt.libvirt.driver [-] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Instance running successfully.
Oct 14 09:22:36 compute-0 virtqemud[259351]: argument unsupported: QEMU guest agent is not configured
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.260 2 DEBUG nova.virt.libvirt.guest [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.261 2 DEBUG nova.compute.manager [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.272 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.276 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.298 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 14 09:22:36 compute-0 sshd-session[378751]: Invalid user test from 188.150.249.96 port 49924
Oct 14 09:22:36 compute-0 sshd-session[378751]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:22:36 compute-0 sshd-session[378751]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:22:36 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:36.999 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.000 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.000 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.000 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.001 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:37 compute-0 ceph-mon[74249]: pgmap v2072: 305 pgs: 305 active+clean; 214 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 155 op/s
Oct 14 09:22:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:22:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1271480247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.473 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.524 2 DEBUG oslo_concurrency.lockutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.525 2 DEBUG oslo_concurrency.lockutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.525 2 DEBUG oslo_concurrency.lockutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.526 2 DEBUG oslo_concurrency.lockutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.526 2 DEBUG oslo_concurrency.lockutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.528 2 INFO nova.compute.manager [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Terminating instance
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.529 2 DEBUG nova.compute.manager [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:22:37 compute-0 kernel: tapc47025b4-90 (unregistering): left promiscuous mode
Oct 14 09:22:37 compute-0 NetworkManager[44885]: <info>  [1760433757.5616] device (tapc47025b4-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.571 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.571 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:37 compute-0 ovn_controller[152662]: 2025-10-14T09:22:37Z|01267|binding|INFO|Releasing lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 from this chassis (sb_readonly=0)
Oct 14 09:22:37 compute-0 ovn_controller[152662]: 2025-10-14T09:22:37Z|01268|binding|INFO|Setting lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 down in Southbound
Oct 14 09:22:37 compute-0 ovn_controller[152662]: 2025-10-14T09:22:37Z|01269|binding|INFO|Removing iface tapc47025b4-90 ovn-installed in OVS
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:37.588 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:7a:fe 10.100.0.13'], port_security=['fa:16:3e:39:7a:fe 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '725ed629-f7d5-4a69-be5e-4cae3eef2e2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15c99679-b8ca-4f31-bf52-be40d7b4f023', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a71d13aebeb4969b1877a33505f3dc4', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'eaea1bee-fc2e-4983-8a2c-80f8f52b9e8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27060d93-efe1-4dd7-9738-9f62e5f1629d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:22:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:37.589 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 in datapath 15c99679-b8ca-4f31-bf52-be40d7b4f023 unbound from our chassis
Oct 14 09:22:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:37.590 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 15c99679-b8ca-4f31-bf52-be40d7b4f023 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 09:22:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:37.591 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[826f5606-1a0c-4e59-bcf3-344e0b0a0b67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:37 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct 14 09:22:37 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000076.scope: Consumed 2.140s CPU time.
Oct 14 09:22:37 compute-0 systemd-machined[214636]: Machine qemu-152-instance-00000076 terminated.
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.737 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.738 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.742 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.742 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.775 2 INFO nova.virt.libvirt.driver [-] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Instance destroyed successfully.
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.776 2 DEBUG nova.objects.instance [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'resources' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.786 2 DEBUG nova.network.neutron [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updated VIF entry in instance network info cache for port 6d5e10b7-5c07-4389-8916-e7c277cb2c88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.787 2 DEBUG nova.network.neutron [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updating instance_info_cache with network_info: [{"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.790 2 DEBUG nova.virt.libvirt.vif [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:22:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1481398530',display_name='tempest-TestServerAdvancedOps-server-1481398530',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1481398530',id=118,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9a71d13aebeb4969b1877a33505f3dc4',ramdisk_id='',reservation_id='r-okkb6j7x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-20904479',owner_user_name='tempest-TestServerAdvancedOps-20904479-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:36Z,user_data=None,user_id='7629c3d96333470aa7d7ed5cabfc7e2c',uuid=725ed629-f7d5-4a69-be5e-4cae3eef2e2e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.790 2 DEBUG nova.network.os_vif_util [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converting VIF {"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.791 2 DEBUG nova.network.os_vif_util [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.791 2 DEBUG os_vif [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.795 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc47025b4-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.802 2 INFO os_vif [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90')
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.823 2 DEBUG oslo_concurrency.lockutils [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.824 2 DEBUG nova.compute.manager [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.824 2 DEBUG oslo_concurrency.lockutils [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.825 2 DEBUG oslo_concurrency.lockutils [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.825 2 DEBUG oslo_concurrency.lockutils [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.826 2 DEBUG nova.compute.manager [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.826 2 WARNING nova.compute.manager [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state suspended and task_state resuming.
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.826 2 DEBUG nova.compute.manager [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.827 2 DEBUG oslo_concurrency.lockutils [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.827 2 DEBUG oslo_concurrency.lockutils [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.827 2 DEBUG oslo_concurrency.lockutils [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.828 2 DEBUG nova.compute.manager [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:22:37 compute-0 nova_compute[259627]: 2025-10-14 09:22:37.828 2 WARNING nova.compute.manager [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state suspended and task_state resuming.
Oct 14 09:22:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.049 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.050 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3279MB free_disk=59.90092849731445GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.050 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.050 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.151 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 50c83173-31e3-4f7a-8836-26e52affd0f2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.152 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 725ed629-f7d5-4a69-be5e-4cae3eef2e2e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.152 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 6810b29b-088f-441b-8a6a-02eaafada0c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.152 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.152 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.200 2 INFO nova.virt.libvirt.driver [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Deleting instance files /var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e_del
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.201 2 INFO nova.virt.libvirt.driver [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Deletion of /var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e_del complete
Oct 14 09:22:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 214 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 81 op/s
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.225 2 DEBUG nova.network.neutron [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.229 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.267 2 DEBUG oslo_concurrency.lockutils [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.269 2 DEBUG oslo_concurrency.lockutils [req-4208c145-34ab-4200-a4bd-58cc3c10af95 req-692c48f1-b7a3-4ff7-a30e-7c6ba01d4333 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.270 2 DEBUG nova.network.neutron [req-4208c145-34ab-4200-a4bd-58cc3c10af95 req-692c48f1-b7a3-4ff7-a30e-7c6ba01d4333 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing network info cache for port ff2b9b74-a6fc-4774-89d2-9c010f121d65 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.273 2 DEBUG nova.virt.libvirt.vif [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:21:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-805281293',display_name='tempest-TestNetworkBasicOps-server-805281293',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-805281293',id=117,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLemNDLm8oqFAwx6pW1SyO2V0WIcpPak25N5nC9IiUcuBV+L+dio1UtnHPJvvKiOY7HNqaGgptTx3l7dHlvUaMKtJM7efD9rBJOYA3Gu+LK7uF+l/NOZF2PoS0o9uk9l4A==',key_name='tempest-TestNetworkBasicOps-1122823922',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ntom2zcq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:09Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=50c83173-31e3-4f7a-8836-26e52affd0f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.273 2 DEBUG nova.network.os_vif_util [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.274 2 DEBUG nova.network.os_vif_util [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.274 2 DEBUG os_vif [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.277 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.278 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.281 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff2b9b74-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.282 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapff2b9b74-a6, col_values=(('external_ids', {'iface-id': 'ff2b9b74-a6fc-4774-89d2-9c010f121d65', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:98:a5', 'vm-uuid': '50c83173-31e3-4f7a-8836-26e52affd0f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.283 2 INFO nova.compute.manager [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.283 2 DEBUG oslo.service.loopingcall [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:38 compute-0 NetworkManager[44885]: <info>  [1760433758.2845] manager: (tapff2b9b74-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/519)
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.284 2 DEBUG nova.compute.manager [-] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.285 2 DEBUG nova.network.neutron [-] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.289 2 INFO os_vif [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6')
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.290 2 DEBUG nova.virt.libvirt.vif [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:21:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-805281293',display_name='tempest-TestNetworkBasicOps-server-805281293',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-805281293',id=117,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLemNDLm8oqFAwx6pW1SyO2V0WIcpPak25N5nC9IiUcuBV+L+dio1UtnHPJvvKiOY7HNqaGgptTx3l7dHlvUaMKtJM7efD9rBJOYA3Gu+LK7uF+l/NOZF2PoS0o9uk9l4A==',key_name='tempest-TestNetworkBasicOps-1122823922',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ntom2zcq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:09Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=50c83173-31e3-4f7a-8836-26e52affd0f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.290 2 DEBUG nova.network.os_vif_util [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.291 2 DEBUG nova.network.os_vif_util [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.296 2 DEBUG nova.virt.libvirt.guest [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] attach device xml: <interface type="ethernet">
Oct 14 09:22:38 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:17:98:a5"/>
Oct 14 09:22:38 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:22:38 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:22:38 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:22:38 compute-0 nova_compute[259627]:   <target dev="tapff2b9b74-a6"/>
Oct 14 09:22:38 compute-0 nova_compute[259627]: </interface>
Oct 14 09:22:38 compute-0 nova_compute[259627]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 14 09:22:38 compute-0 kernel: tapff2b9b74-a6: entered promiscuous mode
Oct 14 09:22:38 compute-0 NetworkManager[44885]: <info>  [1760433758.3104] manager: (tapff2b9b74-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/520)
Oct 14 09:22:38 compute-0 ovn_controller[152662]: 2025-10-14T09:22:38Z|01270|binding|INFO|Claiming lport ff2b9b74-a6fc-4774-89d2-9c010f121d65 for this chassis.
Oct 14 09:22:38 compute-0 ovn_controller[152662]: 2025-10-14T09:22:38Z|01271|binding|INFO|ff2b9b74-a6fc-4774-89d2-9c010f121d65: Claiming fa:16:3e:17:98:a5 10.100.0.18
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:38 compute-0 sshd-session[378751]: Failed password for invalid user test from 188.150.249.96 port 49924 ssh2
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.324 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:98:a5 10.100.0.18'], port_security=['fa:16:3e:17:98:a5 10.100.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': '50c83173-31e3-4f7a-8836-26e52affd0f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': '72094e2b-bb7f-4f40-9ee7-497daf8e97ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab61dd9b-dbf7-46d4-89df-319a0a1fc6a6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ff2b9b74-a6fc-4774-89d2-9c010f121d65) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.325 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ff2b9b74-a6fc-4774-89d2-9c010f121d65 in datapath 39c21153-4a3d-40fd-91df-ae7d5dae4d8c bound to our chassis
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.327 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39c21153-4a3d-40fd-91df-ae7d5dae4d8c
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.335 2 DEBUG nova.compute.manager [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-unplugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.335 2 DEBUG oslo_concurrency.lockutils [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.336 2 DEBUG oslo_concurrency.lockutils [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.337 2 DEBUG oslo_concurrency.lockutils [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.337 2 DEBUG nova.compute.manager [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-unplugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.337 2 DEBUG nova.compute.manager [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-unplugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.337 2 DEBUG nova.compute.manager [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.337 2 DEBUG oslo_concurrency.lockutils [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.337 2 DEBUG oslo_concurrency.lockutils [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.338 2 DEBUG oslo_concurrency.lockutils [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.338 2 DEBUG nova.compute.manager [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.338 2 WARNING nova.compute.manager [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state active and task_state deleting.
Oct 14 09:22:38 compute-0 systemd-udevd[379191]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.344 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bc968678-2f35-4616-bf80-3e324cd55e40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.345 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap39c21153-41 in ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.347 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap39c21153-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.347 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[30c3fa5d-af83-497a-88d1-e104ecf67fab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.350 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[44d49f0c-b7ab-4931-9794-6efdaed5bb64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:38 compute-0 NetworkManager[44885]: <info>  [1760433758.3575] device (tapff2b9b74-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:22:38 compute-0 NetworkManager[44885]: <info>  [1760433758.3613] device (tapff2b9b74-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.377 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[94348f75-ba8e-4033-b4f1-e3bbce8bdc0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.393 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[86053419-b36a-47cb-bb84-059609102392]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:38 compute-0 ovn_controller[152662]: 2025-10-14T09:22:38Z|01272|binding|INFO|Setting lport ff2b9b74-a6fc-4774-89d2-9c010f121d65 ovn-installed in OVS
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:38 compute-0 ovn_controller[152662]: 2025-10-14T09:22:38Z|01273|binding|INFO|Setting lport ff2b9b74-a6fc-4774-89d2-9c010f121d65 up in Southbound
Oct 14 09:22:38 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1271480247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.404 2 DEBUG nova.virt.libvirt.driver [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.405 2 DEBUG nova.virt.libvirt.driver [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.405 2 DEBUG nova.virt.libvirt.driver [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:32:f2:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.405 2 DEBUG nova.virt.libvirt.driver [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:17:98:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.431 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1c7744c7-ea8e-4281-866a-5dc2f8143c97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.435 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f7fcba2d-3665-4a3c-94cd-24b3d19d8543]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:38 compute-0 NetworkManager[44885]: <info>  [1760433758.4363] manager: (tap39c21153-40): new Veth device (/org/freedesktop/NetworkManager/Devices/521)
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.448 2 DEBUG nova.virt.libvirt.guest [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:22:38 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:22:38 compute-0 nova_compute[259627]:   <nova:name>tempest-TestNetworkBasicOps-server-805281293</nova:name>
Oct 14 09:22:38 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:22:38</nova:creationTime>
Oct 14 09:22:38 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:22:38 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:22:38 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:22:38 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:22:38 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:22:38 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:22:38 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:22:38 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:22:38 compute-0 nova_compute[259627]:     <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:22:38 compute-0 nova_compute[259627]:     <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:22:38 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:22:38 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:22:38 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:22:38 compute-0 nova_compute[259627]:     <nova:port uuid="81977d79-f754-42ba-8b3c-c4eb2f9651d2">
Oct 14 09:22:38 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:22:38 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:22:38 compute-0 nova_compute[259627]:     <nova:port uuid="ff2b9b74-a6fc-4774-89d2-9c010f121d65">
Oct 14 09:22:38 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Oct 14 09:22:38 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:22:38 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:22:38 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:22:38 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.470 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[567f6530-5157-4a70-abe4-f32d6b043d21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.476 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c420d173-d172-4281-a669-db4a877c61f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.485 2 DEBUG oslo_concurrency.lockutils [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "interface-50c83173-31e3-4f7a-8836-26e52affd0f2-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:38 compute-0 NetworkManager[44885]: <info>  [1760433758.4971] device (tap39c21153-40): carrier: link connected
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.502 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[eec99a91-3c0b-4662-b0d0-ea4412565cb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.533 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7f97cf2b-dbca-463d-bd55-3fa11aadc5db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39c21153-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:78:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 753725, 'reachable_time': 17078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379236, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.555 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[87f8b90b-8f6a-4ffa-85d7-e6d9222cbd1a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feff:7815'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 753725, 'tstamp': 753725}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379237, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.574 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9dea4c8a-cd17-4a01-8660-9ac25e1575e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39c21153-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:78:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 753725, 'reachable_time': 17078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379238, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.605 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[29fffa17-3d6e-4dcc-867e-db6e35172d54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.680 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d559cafb-7084-4523-adce-0a985d904600]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.689 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39c21153-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.690 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.690 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39c21153-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:38 compute-0 NetworkManager[44885]: <info>  [1760433758.6936] manager: (tap39c21153-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/522)
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:38 compute-0 kernel: tap39c21153-40: entered promiscuous mode
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.698 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39c21153-40, col_values=(('external_ids', {'iface-id': '7bf9894c-4dab-4178-94d9-e45a9e10602a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:38 compute-0 ovn_controller[152662]: 2025-10-14T09:22:38Z|01274|binding|INFO|Releasing lport 7bf9894c-4dab-4178-94d9-e45a9e10602a from this chassis (sb_readonly=0)
Oct 14 09:22:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:22:38 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/573623770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.717 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/39c21153-4a3d-40fd-91df-ae7d5dae4d8c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/39c21153-4a3d-40fd-91df-ae7d5dae4d8c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.718 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[106762d1-d9cc-466b-bfb1-a509ee132152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.719 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-39c21153-4a3d-40fd-91df-ae7d5dae4d8c
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/39c21153-4a3d-40fd-91df-ae7d5dae4d8c.pid.haproxy
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 39c21153-4a3d-40fd-91df-ae7d5dae4d8c
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:22:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.719 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'env', 'PROCESS_TAG=haproxy-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/39c21153-4a3d-40fd-91df-ae7d5dae4d8c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.724 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.728 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.746 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.764 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:22:38 compute-0 nova_compute[259627]: 2025-10-14 09:22:38.765 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:39 compute-0 podman[379272]: 2025-10-14 09:22:39.086976126 +0000 UTC m=+0.055276567 container create 812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:22:39 compute-0 systemd[1]: Started libpod-conmon-812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9.scope.
Oct 14 09:22:39 compute-0 nova_compute[259627]: 2025-10-14 09:22:39.141 2 DEBUG nova.network.neutron [-] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:22:39 compute-0 podman[379272]: 2025-10-14 09:22:39.056080302 +0000 UTC m=+0.024380803 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:22:39 compute-0 nova_compute[259627]: 2025-10-14 09:22:39.156 2 INFO nova.compute.manager [-] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Took 0.87 seconds to deallocate network for instance.
Oct 14 09:22:39 compute-0 nova_compute[259627]: 2025-10-14 09:22:39.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:39 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36c287ab2c96f243d6a33b107299142f79d3dafdd9e98b859faafdb8e51bef5d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:22:39 compute-0 podman[379272]: 2025-10-14 09:22:39.196622095 +0000 UTC m=+0.164922596 container init 812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:22:39 compute-0 nova_compute[259627]: 2025-10-14 09:22:39.198 2 DEBUG oslo_concurrency.lockutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:39 compute-0 nova_compute[259627]: 2025-10-14 09:22:39.198 2 DEBUG oslo_concurrency.lockutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:39 compute-0 podman[379272]: 2025-10-14 09:22:39.209506614 +0000 UTC m=+0.177807065 container start 812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:22:39 compute-0 neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c[379287]: [NOTICE]   (379291) : New worker (379293) forked
Oct 14 09:22:39 compute-0 neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c[379287]: [NOTICE]   (379291) : Loading success.
Oct 14 09:22:39 compute-0 nova_compute[259627]: 2025-10-14 09:22:39.243 2 DEBUG nova.compute.manager [req-3f39a907-fe25-41b6-8391-c1195aae0f2c req-1e182db1-0b7a-438b-ad54-575c0dab4eae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-deleted-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:39 compute-0 nova_compute[259627]: 2025-10-14 09:22:39.281 2 DEBUG oslo_concurrency.processutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:39 compute-0 sshd-session[378751]: Connection closed by invalid user test 188.150.249.96 port 49924 [preauth]
Oct 14 09:22:39 compute-0 ceph-mon[74249]: pgmap v2073: 305 pgs: 305 active+clean; 214 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 81 op/s
Oct 14 09:22:39 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/573623770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:22:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:22:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/437203333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:22:39 compute-0 nova_compute[259627]: 2025-10-14 09:22:39.763 2 DEBUG oslo_concurrency.processutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:39 compute-0 nova_compute[259627]: 2025-10-14 09:22:39.765 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:22:39 compute-0 nova_compute[259627]: 2025-10-14 09:22:39.767 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:22:39 compute-0 nova_compute[259627]: 2025-10-14 09:22:39.774 2 DEBUG nova.compute.provider_tree [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:22:39 compute-0 nova_compute[259627]: 2025-10-14 09:22:39.800 2 DEBUG nova.scheduler.client.report [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:22:39 compute-0 nova_compute[259627]: 2025-10-14 09:22:39.834 2 DEBUG oslo_concurrency.lockutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:39 compute-0 nova_compute[259627]: 2025-10-14 09:22:39.865 2 INFO nova.scheduler.client.report [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Deleted allocations for instance 725ed629-f7d5-4a69-be5e-4cae3eef2e2e
Oct 14 09:22:39 compute-0 nova_compute[259627]: 2025-10-14 09:22:39.936 2 DEBUG oslo_concurrency.lockutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:40 compute-0 nova_compute[259627]: 2025-10-14 09:22:40.182 2 DEBUG nova.network.neutron [req-4208c145-34ab-4200-a4bd-58cc3c10af95 req-692c48f1-b7a3-4ff7-a30e-7c6ba01d4333 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updated VIF entry in instance network info cache for port ff2b9b74-a6fc-4774-89d2-9c010f121d65. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:22:40 compute-0 nova_compute[259627]: 2025-10-14 09:22:40.182 2 DEBUG nova.network.neutron [req-4208c145-34ab-4200-a4bd-58cc3c10af95 req-692c48f1-b7a3-4ff7-a30e-7c6ba01d4333 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:22:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 188 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 330 KiB/s wr, 93 op/s
Oct 14 09:22:40 compute-0 nova_compute[259627]: 2025-10-14 09:22:40.217 2 DEBUG oslo_concurrency.lockutils [req-4208c145-34ab-4200-a4bd-58cc3c10af95 req-692c48f1-b7a3-4ff7-a30e-7c6ba01d4333 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:22:40 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/437203333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:22:40 compute-0 nova_compute[259627]: 2025-10-14 09:22:40.458 2 DEBUG nova.compute.manager [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-plugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:40 compute-0 nova_compute[259627]: 2025-10-14 09:22:40.459 2 DEBUG oslo_concurrency.lockutils [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:40 compute-0 nova_compute[259627]: 2025-10-14 09:22:40.459 2 DEBUG oslo_concurrency.lockutils [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:40 compute-0 nova_compute[259627]: 2025-10-14 09:22:40.459 2 DEBUG oslo_concurrency.lockutils [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:40 compute-0 nova_compute[259627]: 2025-10-14 09:22:40.459 2 DEBUG nova.compute.manager [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] No waiting events found dispatching network-vif-plugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:22:40 compute-0 nova_compute[259627]: 2025-10-14 09:22:40.459 2 WARNING nova.compute.manager [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received unexpected event network-vif-plugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 for instance with vm_state active and task_state None.
Oct 14 09:22:40 compute-0 nova_compute[259627]: 2025-10-14 09:22:40.460 2 DEBUG nova.compute.manager [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-plugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:40 compute-0 nova_compute[259627]: 2025-10-14 09:22:40.460 2 DEBUG oslo_concurrency.lockutils [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:40 compute-0 nova_compute[259627]: 2025-10-14 09:22:40.460 2 DEBUG oslo_concurrency.lockutils [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:40 compute-0 nova_compute[259627]: 2025-10-14 09:22:40.461 2 DEBUG oslo_concurrency.lockutils [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:40 compute-0 nova_compute[259627]: 2025-10-14 09:22:40.461 2 DEBUG nova.compute.manager [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] No waiting events found dispatching network-vif-plugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:22:40 compute-0 nova_compute[259627]: 2025-10-14 09:22:40.461 2 WARNING nova.compute.manager [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received unexpected event network-vif-plugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 for instance with vm_state active and task_state None.
Oct 14 09:22:40 compute-0 ovn_controller[152662]: 2025-10-14T09:22:40Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7c:22:e5 10.100.0.12
Oct 14 09:22:40 compute-0 ovn_controller[152662]: 2025-10-14T09:22:40Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7c:22:e5 10.100.0.12
Oct 14 09:22:40 compute-0 nova_compute[259627]: 2025-10-14 09:22:40.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:22:40 compute-0 nova_compute[259627]: 2025-10-14 09:22:40.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:22:41 compute-0 ovn_controller[152662]: 2025-10-14T09:22:41Z|01275|binding|INFO|Releasing lport 16a7cbd0-b25e-4461-8725-c92979b01f53 from this chassis (sb_readonly=0)
Oct 14 09:22:41 compute-0 ovn_controller[152662]: 2025-10-14T09:22:41Z|01276|binding|INFO|Releasing lport 7bf9894c-4dab-4178-94d9-e45a9e10602a from this chassis (sb_readonly=0)
Oct 14 09:22:41 compute-0 ovn_controller[152662]: 2025-10-14T09:22:41Z|01277|binding|INFO|Releasing lport 04719e6c-d55b-4ad7-a45c-52e6e59101ab from this chassis (sb_readonly=0)
Oct 14 09:22:41 compute-0 nova_compute[259627]: 2025-10-14 09:22:41.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:41 compute-0 ovn_controller[152662]: 2025-10-14T09:22:41Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:98:a5 10.100.0.18
Oct 14 09:22:41 compute-0 ovn_controller[152662]: 2025-10-14T09:22:41Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:98:a5 10.100.0.18
Oct 14 09:22:41 compute-0 ceph-mon[74249]: pgmap v2074: 305 pgs: 305 active+clean; 188 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 330 KiB/s wr, 93 op/s
Oct 14 09:22:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 196 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 155 op/s
Oct 14 09:22:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:22:42 compute-0 nova_compute[259627]: 2025-10-14 09:22:42.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:22:42 compute-0 nova_compute[259627]: 2025-10-14 09:22:42.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:22:42 compute-0 nova_compute[259627]: 2025-10-14 09:22:42.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:22:43 compute-0 nova_compute[259627]: 2025-10-14 09:22:43.212 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:22:43 compute-0 nova_compute[259627]: 2025-10-14 09:22:43.212 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:22:43 compute-0 nova_compute[259627]: 2025-10-14 09:22:43.213 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:22:43 compute-0 nova_compute[259627]: 2025-10-14 09:22:43.214 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015069725939311064 of space, bias 1.0, pg target 0.45209177817933194 quantized to 32 (current 32)
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:22:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:22:43 compute-0 nova_compute[259627]: 2025-10-14 09:22:43.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:43 compute-0 ceph-mon[74249]: pgmap v2075: 305 pgs: 305 active+clean; 196 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 155 op/s
Oct 14 09:22:43 compute-0 podman[379327]: 2025-10-14 09:22:43.670779951 +0000 UTC m=+0.076209295 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Oct 14 09:22:43 compute-0 podman[379326]: 2025-10-14 09:22:43.68774356 +0000 UTC m=+0.101692414 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 09:22:44 compute-0 nova_compute[259627]: 2025-10-14 09:22:44.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 196 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 114 op/s
Oct 14 09:22:45 compute-0 ceph-mon[74249]: pgmap v2076: 305 pgs: 305 active+clean; 196 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 114 op/s
Oct 14 09:22:45 compute-0 nova_compute[259627]: 2025-10-14 09:22:45.654 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:45 compute-0 nova_compute[259627]: 2025-10-14 09:22:45.655 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:45 compute-0 nova_compute[259627]: 2025-10-14 09:22:45.671 2 DEBUG nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:22:45 compute-0 nova_compute[259627]: 2025-10-14 09:22:45.746 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:45 compute-0 nova_compute[259627]: 2025-10-14 09:22:45.747 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:45 compute-0 nova_compute[259627]: 2025-10-14 09:22:45.752 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:22:45 compute-0 nova_compute[259627]: 2025-10-14 09:22:45.753 2 INFO nova.compute.claims [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:22:45 compute-0 nova_compute[259627]: 2025-10-14 09:22:45.906 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.069 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.092 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.093 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:22:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 200 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct 14 09:22:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:22:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/148067217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.370 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.378 2 DEBUG nova.compute.provider_tree [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.394 2 DEBUG nova.scheduler.client.report [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.425 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.425 2 DEBUG nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:22:46 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/148067217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.518 2 DEBUG nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.519 2 DEBUG nova.network.neutron [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.539 2 INFO nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.568 2 DEBUG nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.674 2 DEBUG nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.675 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.676 2 INFO nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Creating image(s)
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.712 2 DEBUG nova.storage.rbd_utils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.743 2 DEBUG nova.storage.rbd_utils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.762 2 DEBUG nova.storage.rbd_utils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.765 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.852 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.853 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.854 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.855 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.878 2 DEBUG nova.storage.rbd_utils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.881 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:46 compute-0 nova_compute[259627]: 2025-10-14 09:22:46.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:22:47 compute-0 nova_compute[259627]: 2025-10-14 09:22:47.139 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:47 compute-0 nova_compute[259627]: 2025-10-14 09:22:47.222 2 DEBUG nova.storage.rbd_utils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:22:47 compute-0 nova_compute[259627]: 2025-10-14 09:22:47.341 2 DEBUG nova.policy [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:22:47 compute-0 nova_compute[259627]: 2025-10-14 09:22:47.349 2 DEBUG nova.objects.instance [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid ef3d76bf-9763-4405-8e48-c2c4405a2a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:47 compute-0 nova_compute[259627]: 2025-10-14 09:22:47.375 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:22:47 compute-0 nova_compute[259627]: 2025-10-14 09:22:47.376 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Ensure instance console log exists: /var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:22:47 compute-0 nova_compute[259627]: 2025-10-14 09:22:47.377 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:47 compute-0 nova_compute[259627]: 2025-10-14 09:22:47.377 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:47 compute-0 nova_compute[259627]: 2025-10-14 09:22:47.377 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:47 compute-0 ceph-mon[74249]: pgmap v2077: 305 pgs: 305 active+clean; 200 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct 14 09:22:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:22:48 compute-0 nova_compute[259627]: 2025-10-14 09:22:48.104 2 DEBUG nova.network.neutron [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Successfully created port: 0c7d56b8-701e-431d-8f3f-4682c684a719 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:22:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 200 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 14 09:22:48 compute-0 nova_compute[259627]: 2025-10-14 09:22:48.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:48 compute-0 unix_chkpwd[379557]: password check failed for user (root)
Oct 14 09:22:48 compute-0 sshd-session[379322]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96  user=root
Oct 14 09:22:49 compute-0 nova_compute[259627]: 2025-10-14 09:22:49.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:49 compute-0 ceph-mon[74249]: pgmap v2078: 305 pgs: 305 active+clean; 200 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 14 09:22:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 219 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.9 MiB/s wr, 104 op/s
Oct 14 09:22:50 compute-0 nova_compute[259627]: 2025-10-14 09:22:50.237 2 DEBUG nova.network.neutron [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Successfully updated port: 0c7d56b8-701e-431d-8f3f-4682c684a719 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:22:50 compute-0 nova_compute[259627]: 2025-10-14 09:22:50.256 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-ef3d76bf-9763-4405-8e48-c2c4405a2a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:22:50 compute-0 nova_compute[259627]: 2025-10-14 09:22:50.257 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-ef3d76bf-9763-4405-8e48-c2c4405a2a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:22:50 compute-0 nova_compute[259627]: 2025-10-14 09:22:50.257 2 DEBUG nova.network.neutron [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:22:50 compute-0 nova_compute[259627]: 2025-10-14 09:22:50.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:50 compute-0 sshd-session[379322]: Failed password for root from 188.150.249.96 port 52622 ssh2
Oct 14 09:22:50 compute-0 nova_compute[259627]: 2025-10-14 09:22:50.468 2 DEBUG nova.compute.manager [req-a3b38dd8-5aab-484b-a869-4b462b39f20e req-477ae2e9-7156-4363-93f2-fea177b599e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Received event network-changed-0c7d56b8-701e-431d-8f3f-4682c684a719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:50 compute-0 nova_compute[259627]: 2025-10-14 09:22:50.469 2 DEBUG nova.compute.manager [req-a3b38dd8-5aab-484b-a869-4b462b39f20e req-477ae2e9-7156-4363-93f2-fea177b599e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Refreshing instance network info cache due to event network-changed-0c7d56b8-701e-431d-8f3f-4682c684a719. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:22:50 compute-0 nova_compute[259627]: 2025-10-14 09:22:50.470 2 DEBUG oslo_concurrency.lockutils [req-a3b38dd8-5aab-484b-a869-4b462b39f20e req-477ae2e9-7156-4363-93f2-fea177b599e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-ef3d76bf-9763-4405-8e48-c2c4405a2a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:22:50 compute-0 nova_compute[259627]: 2025-10-14 09:22:50.671 2 DEBUG nova.network.neutron [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:22:50 compute-0 nova_compute[259627]: 2025-10-14 09:22:50.838 2 DEBUG nova.compute.manager [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:50 compute-0 nova_compute[259627]: 2025-10-14 09:22:50.896 2 INFO nova.compute.manager [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] instance snapshotting
Oct 14 09:22:50 compute-0 sshd-session[379322]: Connection closed by authenticating user root 188.150.249.96 port 52622 [preauth]
Oct 14 09:22:50 compute-0 nova_compute[259627]: 2025-10-14 09:22:50.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:22:51 compute-0 nova_compute[259627]: 2025-10-14 09:22:51.207 2 INFO nova.virt.libvirt.driver [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Beginning live snapshot process
Oct 14 09:22:51 compute-0 nova_compute[259627]: 2025-10-14 09:22:51.409 2 DEBUG nova.virt.libvirt.imagebackend [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 09:22:51 compute-0 ceph-mon[74249]: pgmap v2079: 305 pgs: 305 active+clean; 219 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.9 MiB/s wr, 104 op/s
Oct 14 09:22:51 compute-0 nova_compute[259627]: 2025-10-14 09:22:51.627 2 DEBUG nova.storage.rbd_utils [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] creating snapshot(c62c4c34216e4b708b452a02674b9e40) on rbd image(6810b29b-088f-441b-8a6a-02eaafada0c5_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:22:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 246 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 339 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.224 2 DEBUG nova.network.neutron [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Updating instance_info_cache with network_info: [{"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.243 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-ef3d76bf-9763-4405-8e48-c2c4405a2a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.243 2 DEBUG nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Instance network_info: |[{"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.244 2 DEBUG oslo_concurrency.lockutils [req-a3b38dd8-5aab-484b-a869-4b462b39f20e req-477ae2e9-7156-4363-93f2-fea177b599e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-ef3d76bf-9763-4405-8e48-c2c4405a2a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.244 2 DEBUG nova.network.neutron [req-a3b38dd8-5aab-484b-a869-4b462b39f20e req-477ae2e9-7156-4363-93f2-fea177b599e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Refreshing network info cache for port 0c7d56b8-701e-431d-8f3f-4682c684a719 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.247 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Start _get_guest_xml network_info=[{"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.253 2 WARNING nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.260 2 DEBUG nova.virt.libvirt.host [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.261 2 DEBUG nova.virt.libvirt.host [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.273 2 DEBUG nova.virt.libvirt.host [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.274 2 DEBUG nova.virt.libvirt.host [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.274 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.275 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.275 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.275 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.275 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.275 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.275 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.276 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.276 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.276 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.276 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.276 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.279 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Oct 14 09:22:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Oct 14 09:22:52 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.559 2 DEBUG nova.storage.rbd_utils [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] cloning vms/6810b29b-088f-441b-8a6a-02eaafada0c5_disk@c62c4c34216e4b708b452a02674b9e40 to images/d594f64a-1811-45da-92c9-566107aad012 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.665 2 DEBUG nova.storage.rbd_utils [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] flattening images/d594f64a-1811-45da-92c9-566107aad012 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.772 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433757.7711184, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.772 2 INFO nova.compute.manager [-] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Stopped (Lifecycle Event)
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.793 2 DEBUG nova.compute.manager [None req-524fdd00-5dd1-48fd-8e7b-860c3fa36a20 - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:22:52 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3229893015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.841 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.879 2 DEBUG nova.storage.rbd_utils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:52 compute-0 nova_compute[259627]: 2025-10-14 09:22:52.890 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.049 2 DEBUG nova.storage.rbd_utils [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] removing snapshot(c62c4c34216e4b708b452a02674b9e40) on rbd image(6810b29b-088f-441b-8a6a-02eaafada0c5_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:22:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/10162848' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.372 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.374 2 DEBUG nova.virt.libvirt.vif [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:22:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-74163694',display_name='tempest-TestNetworkBasicOps-server-74163694',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-74163694',id=120,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDJjpHsBb1FmstcXMm13RiW9DIcCDzUbHC1W47DgC4rLa2+YaGMfll4QodMfzMI26CQxBr8mMI8Apo+Vm4ZUA+2D0BmlkJiSjNtRVZZ4pPW+p+wcLG9yH2ONX/d7llYQVA==',key_name='tempest-TestNetworkBasicOps-266793307',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-scl22852',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:22:46Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=ef3d76bf-9763-4405-8e48-c2c4405a2a3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.374 2 DEBUG nova.network.os_vif_util [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.376 2 DEBUG nova.network.os_vif_util [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:88:3d,bridge_name='br-int',has_traffic_filtering=True,id=0c7d56b8-701e-431d-8f3f-4682c684a719,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c7d56b8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.377 2 DEBUG nova.objects.instance [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid ef3d76bf-9763-4405-8e48-c2c4405a2a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.395 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:22:53 compute-0 nova_compute[259627]:   <uuid>ef3d76bf-9763-4405-8e48-c2c4405a2a3b</uuid>
Oct 14 09:22:53 compute-0 nova_compute[259627]:   <name>instance-00000078</name>
Oct 14 09:22:53 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:22:53 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:22:53 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkBasicOps-server-74163694</nova:name>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:22:52</nova:creationTime>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:22:53 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:22:53 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:22:53 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:22:53 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:22:53 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:22:53 compute-0 nova_compute[259627]:         <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:22:53 compute-0 nova_compute[259627]:         <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:22:53 compute-0 nova_compute[259627]:         <nova:port uuid="0c7d56b8-701e-431d-8f3f-4682c684a719">
Oct 14 09:22:53 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.22" ipVersion="4"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:22:53 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:22:53 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <system>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <entry name="serial">ef3d76bf-9763-4405-8e48-c2c4405a2a3b</entry>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <entry name="uuid">ef3d76bf-9763-4405-8e48-c2c4405a2a3b</entry>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     </system>
Oct 14 09:22:53 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:22:53 compute-0 nova_compute[259627]:   <os>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:   </os>
Oct 14 09:22:53 compute-0 nova_compute[259627]:   <features>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:   </features>
Oct 14 09:22:53 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:22:53 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:22:53 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk">
Oct 14 09:22:53 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       </source>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:22:53 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk.config">
Oct 14 09:22:53 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       </source>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:22:53 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:b4:88:3d"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <target dev="tap0c7d56b8-70"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b/console.log" append="off"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <video>
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     </video>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:22:53 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:22:53 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:22:53 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:22:53 compute-0 nova_compute[259627]: </domain>
Oct 14 09:22:53 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.396 2 DEBUG nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Preparing to wait for external event network-vif-plugged-0c7d56b8-701e-431d-8f3f-4682c684a719 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.397 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.397 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.398 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.399 2 DEBUG nova.virt.libvirt.vif [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:22:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-74163694',display_name='tempest-TestNetworkBasicOps-server-74163694',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-74163694',id=120,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDJjpHsBb1FmstcXMm13RiW9DIcCDzUbHC1W47DgC4rLa2+YaGMfll4QodMfzMI26CQxBr8mMI8Apo+Vm4ZUA+2D0BmlkJiSjNtRVZZ4pPW+p+wcLG9yH2ONX/d7llYQVA==',key_name='tempest-TestNetworkBasicOps-266793307',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-scl22852',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:22:46Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=ef3d76bf-9763-4405-8e48-c2c4405a2a3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.400 2 DEBUG nova.network.os_vif_util [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.402 2 DEBUG nova.network.os_vif_util [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:88:3d,bridge_name='br-int',has_traffic_filtering=True,id=0c7d56b8-701e-431d-8f3f-4682c684a719,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c7d56b8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.402 2 DEBUG os_vif [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:88:3d,bridge_name='br-int',has_traffic_filtering=True,id=0c7d56b8-701e-431d-8f3f-4682c684a719,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c7d56b8-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.404 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.405 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.409 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c7d56b8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.410 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0c7d56b8-70, col_values=(('external_ids', {'iface-id': '0c7d56b8-701e-431d-8f3f-4682c684a719', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:88:3d', 'vm-uuid': 'ef3d76bf-9763-4405-8e48-c2c4405a2a3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:53 compute-0 NetworkManager[44885]: <info>  [1760433773.4147] manager: (tap0c7d56b8-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/523)
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.421 2 INFO os_vif [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:88:3d,bridge_name='br-int',has_traffic_filtering=True,id=0c7d56b8-701e-431d-8f3f-4682c684a719,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c7d56b8-70')
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.498 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.498 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.499 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:b4:88:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.499 2 INFO nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Using config drive
Oct 14 09:22:53 compute-0 ceph-mon[74249]: pgmap v2080: 305 pgs: 305 active+clean; 246 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 339 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Oct 14 09:22:53 compute-0 ceph-mon[74249]: osdmap e281: 3 total, 3 up, 3 in
Oct 14 09:22:53 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3229893015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:22:53 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/10162848' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:22:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Oct 14 09:22:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Oct 14 09:22:53 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.533 2 DEBUG nova.storage.rbd_utils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.555 2 DEBUG nova.storage.rbd_utils [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] creating snapshot(snap) on rbd image(d594f64a-1811-45da-92c9-566107aad012) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.913 2 DEBUG nova.network.neutron [req-a3b38dd8-5aab-484b-a869-4b462b39f20e req-477ae2e9-7156-4363-93f2-fea177b599e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Updated VIF entry in instance network info cache for port 0c7d56b8-701e-431d-8f3f-4682c684a719. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.913 2 DEBUG nova.network.neutron [req-a3b38dd8-5aab-484b-a869-4b462b39f20e req-477ae2e9-7156-4363-93f2-fea177b599e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Updating instance_info_cache with network_info: [{"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:22:53 compute-0 nova_compute[259627]: 2025-10-14 09:22:53.930 2 DEBUG oslo_concurrency.lockutils [req-a3b38dd8-5aab-484b-a869-4b462b39f20e req-477ae2e9-7156-4363-93f2-fea177b599e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-ef3d76bf-9763-4405-8e48-c2c4405a2a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:22:54 compute-0 nova_compute[259627]: 2025-10-14 09:22:54.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 246 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.7 MiB/s wr, 45 op/s
Oct 14 09:22:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Oct 14 09:22:54 compute-0 ceph-mon[74249]: osdmap e282: 3 total, 3 up, 3 in
Oct 14 09:22:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Oct 14 09:22:54 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Oct 14 09:22:54 compute-0 nova_compute[259627]: 2025-10-14 09:22:54.670 2 INFO nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Creating config drive at /var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b/disk.config
Oct 14 09:22:54 compute-0 nova_compute[259627]: 2025-10-14 09:22:54.679 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy6m6hpj7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:54 compute-0 nova_compute[259627]: 2025-10-14 09:22:54.830 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy6m6hpj7" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:54 compute-0 nova_compute[259627]: 2025-10-14 09:22:54.862 2 DEBUG nova.storage.rbd_utils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:22:54 compute-0 nova_compute[259627]: 2025-10-14 09:22:54.866 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b/disk.config ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:55 compute-0 nova_compute[259627]: 2025-10-14 09:22:55.041 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b/disk.config ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:55 compute-0 nova_compute[259627]: 2025-10-14 09:22:55.042 2 INFO nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Deleting local config drive /var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b/disk.config because it was imported into RBD.
Oct 14 09:22:55 compute-0 NetworkManager[44885]: <info>  [1760433775.1253] manager: (tap0c7d56b8-70): new Tun device (/org/freedesktop/NetworkManager/Devices/524)
Oct 14 09:22:55 compute-0 kernel: tap0c7d56b8-70: entered promiscuous mode
Oct 14 09:22:55 compute-0 nova_compute[259627]: 2025-10-14 09:22:55.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:55 compute-0 ovn_controller[152662]: 2025-10-14T09:22:55Z|01278|binding|INFO|Claiming lport 0c7d56b8-701e-431d-8f3f-4682c684a719 for this chassis.
Oct 14 09:22:55 compute-0 ovn_controller[152662]: 2025-10-14T09:22:55Z|01279|binding|INFO|0c7d56b8-701e-431d-8f3f-4682c684a719: Claiming fa:16:3e:b4:88:3d 10.100.0.22
Oct 14 09:22:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.147 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:88:3d 10.100.0.22'], port_security=['fa:16:3e:b4:88:3d 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': 'ef3d76bf-9763-4405-8e48-c2c4405a2a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e990e92a-384a-47c4-be5e-d58c231a3275', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab61dd9b-dbf7-46d4-89df-319a0a1fc6a6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0c7d56b8-701e-431d-8f3f-4682c684a719) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:22:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.149 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0c7d56b8-701e-431d-8f3f-4682c684a719 in datapath 39c21153-4a3d-40fd-91df-ae7d5dae4d8c bound to our chassis
Oct 14 09:22:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.152 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39c21153-4a3d-40fd-91df-ae7d5dae4d8c
Oct 14 09:22:55 compute-0 ovn_controller[152662]: 2025-10-14T09:22:55Z|01280|binding|INFO|Setting lport 0c7d56b8-701e-431d-8f3f-4682c684a719 ovn-installed in OVS
Oct 14 09:22:55 compute-0 ovn_controller[152662]: 2025-10-14T09:22:55Z|01281|binding|INFO|Setting lport 0c7d56b8-701e-431d-8f3f-4682c684a719 up in Southbound
Oct 14 09:22:55 compute-0 nova_compute[259627]: 2025-10-14 09:22:55.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:55 compute-0 nova_compute[259627]: 2025-10-14 09:22:55.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.178 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[629e9bef-1e42-4959-ad04-5c6d402b7038]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:55 compute-0 systemd-udevd[379838]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:22:55 compute-0 systemd-machined[214636]: New machine qemu-153-instance-00000078.
Oct 14 09:22:55 compute-0 NetworkManager[44885]: <info>  [1760433775.2090] device (tap0c7d56b8-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:22:55 compute-0 NetworkManager[44885]: <info>  [1760433775.2098] device (tap0c7d56b8-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:22:55 compute-0 systemd[1]: Started Virtual Machine qemu-153-instance-00000078.
Oct 14 09:22:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.223 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[251c6ca1-996e-4740-944d-9e71eb8cde54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.228 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0893192f-5f08-4f58-8616-ac8f456af18f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.256 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a33e44ec-9c52-4232-9e18-58bf8c2ca7ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.277 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[99c0e95e-79f5-4357-802e-9af2d29b71e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39c21153-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:78:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 753725, 'reachable_time': 17078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379846, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.296 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[72bcfc4e-d7b0-480d-9cf1-65b57bd05290]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap39c21153-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 753740, 'tstamp': 753740}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379851, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap39c21153-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 753743, 'tstamp': 753743}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379851, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:22:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.298 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39c21153-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:55 compute-0 nova_compute[259627]: 2025-10-14 09:22:55.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:55 compute-0 nova_compute[259627]: 2025-10-14 09:22:55.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.363 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39c21153-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.364 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:22:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.364 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39c21153-40, col_values=(('external_ids', {'iface-id': '7bf9894c-4dab-4178-94d9-e45a9e10602a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:22:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.365 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:22:55 compute-0 nova_compute[259627]: 2025-10-14 09:22:55.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:55 compute-0 ceph-mon[74249]: pgmap v2083: 305 pgs: 305 active+clean; 246 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.7 MiB/s wr, 45 op/s
Oct 14 09:22:55 compute-0 ceph-mon[74249]: osdmap e283: 3 total, 3 up, 3 in
Oct 14 09:22:56 compute-0 nova_compute[259627]: 2025-10-14 09:22:56.041 2 INFO nova.virt.libvirt.driver [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Snapshot image upload complete
Oct 14 09:22:56 compute-0 nova_compute[259627]: 2025-10-14 09:22:56.042 2 INFO nova.compute.manager [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Took 5.14 seconds to snapshot the instance on the hypervisor.
Oct 14 09:22:56 compute-0 nova_compute[259627]: 2025-10-14 09:22:56.153 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433776.152919, ef3d76bf-9763-4405-8e48-c2c4405a2a3b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:56 compute-0 nova_compute[259627]: 2025-10-14 09:22:56.154 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] VM Started (Lifecycle Event)
Oct 14 09:22:56 compute-0 nova_compute[259627]: 2025-10-14 09:22:56.173 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:56 compute-0 nova_compute[259627]: 2025-10-14 09:22:56.178 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433776.1562665, ef3d76bf-9763-4405-8e48-c2c4405a2a3b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:56 compute-0 nova_compute[259627]: 2025-10-14 09:22:56.178 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] VM Paused (Lifecycle Event)
Oct 14 09:22:56 compute-0 sudo[379895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:22:56 compute-0 nova_compute[259627]: 2025-10-14 09:22:56.193 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:56 compute-0 sudo[379895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:22:56 compute-0 nova_compute[259627]: 2025-10-14 09:22:56.199 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:22:56 compute-0 sudo[379895]: pam_unix(sudo:session): session closed for user root
Oct 14 09:22:56 compute-0 nova_compute[259627]: 2025-10-14 09:22:56.211 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:22:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 326 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 9.8 MiB/s wr, 220 op/s
Oct 14 09:22:56 compute-0 sudo[379920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:22:56 compute-0 sudo[379920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:22:56 compute-0 sudo[379920]: pam_unix(sudo:session): session closed for user root
Oct 14 09:22:56 compute-0 sudo[379945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:22:56 compute-0 sudo[379945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:22:56 compute-0 sudo[379945]: pam_unix(sudo:session): session closed for user root
Oct 14 09:22:56 compute-0 sudo[379970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:22:56 compute-0 sudo[379970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:22:57 compute-0 sudo[379970]: pam_unix(sudo:session): session closed for user root
Oct 14 09:22:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:22:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:22:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:22:57 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:22:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:22:57 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:22:57 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 82c8e937-95e1-4d0c-ade5-1b6762f6ae32 does not exist
Oct 14 09:22:57 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 9343065b-6755-4cc9-9c67-708125294f21 does not exist
Oct 14 09:22:57 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev ecc601aa-6f2f-4000-9d4d-01566355ca74 does not exist
Oct 14 09:22:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:22:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:22:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:22:57 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:22:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:22:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:22:57 compute-0 sudo[380027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:22:57 compute-0 sudo[380027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:22:57 compute-0 sudo[380027]: pam_unix(sudo:session): session closed for user root
Oct 14 09:22:57 compute-0 sudo[380052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:22:57 compute-0 sudo[380052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:22:57 compute-0 sudo[380052]: pam_unix(sudo:session): session closed for user root
Oct 14 09:22:57 compute-0 sudo[380077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:22:57 compute-0 sudo[380077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:22:57 compute-0 sudo[380077]: pam_unix(sudo:session): session closed for user root
Oct 14 09:22:57 compute-0 sudo[380102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:22:57 compute-0 sudo[380102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:22:57 compute-0 ceph-mon[74249]: pgmap v2085: 305 pgs: 305 active+clean; 326 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 9.8 MiB/s wr, 220 op/s
Oct 14 09:22:57 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:22:57 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:22:57 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:22:57 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:22:57 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:22:57 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:22:57 compute-0 podman[380169]: 2025-10-14 09:22:57.914803612 +0000 UTC m=+0.047977637 container create 15dc01e6fcb8dbeb0e7eda45f2daf95263561c505de24ccf7f34c2468d342c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 09:22:57 compute-0 systemd[1]: Started libpod-conmon-15dc01e6fcb8dbeb0e7eda45f2daf95263561c505de24ccf7f34c2468d342c08.scope.
Oct 14 09:22:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:22:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Oct 14 09:22:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Oct 14 09:22:57 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Oct 14 09:22:57 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:22:57 compute-0 podman[380169]: 2025-10-14 09:22:57.895642358 +0000 UTC m=+0.028816413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:22:58 compute-0 podman[380169]: 2025-10-14 09:22:58.003545095 +0000 UTC m=+0.136719210 container init 15dc01e6fcb8dbeb0e7eda45f2daf95263561c505de24ccf7f34c2468d342c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hopper, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:22:58 compute-0 podman[380169]: 2025-10-14 09:22:58.010689751 +0000 UTC m=+0.143863776 container start 15dc01e6fcb8dbeb0e7eda45f2daf95263561c505de24ccf7f34c2468d342c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hopper, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 09:22:58 compute-0 podman[380169]: 2025-10-14 09:22:58.013712286 +0000 UTC m=+0.146886331 container attach 15dc01e6fcb8dbeb0e7eda45f2daf95263561c505de24ccf7f34c2468d342c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 09:22:58 compute-0 dazzling_hopper[380185]: 167 167
Oct 14 09:22:58 compute-0 systemd[1]: libpod-15dc01e6fcb8dbeb0e7eda45f2daf95263561c505de24ccf7f34c2468d342c08.scope: Deactivated successfully.
Oct 14 09:22:58 compute-0 podman[380169]: 2025-10-14 09:22:58.019234892 +0000 UTC m=+0.152408927 container died 15dc01e6fcb8dbeb0e7eda45f2daf95263561c505de24ccf7f34c2468d342c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 09:22:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-10087adeb801d8f20640d4d9a94690c8144be0c8f497e81c5d37abdb5900d91a-merged.mount: Deactivated successfully.
Oct 14 09:22:58 compute-0 podman[380169]: 2025-10-14 09:22:58.074665592 +0000 UTC m=+0.207839657 container remove 15dc01e6fcb8dbeb0e7eda45f2daf95263561c505de24ccf7f34c2468d342c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hopper, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:22:58 compute-0 systemd[1]: libpod-conmon-15dc01e6fcb8dbeb0e7eda45f2daf95263561c505de24ccf7f34c2468d342c08.scope: Deactivated successfully.
Oct 14 09:22:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 326 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 8.2 MiB/s wr, 194 op/s
Oct 14 09:22:58 compute-0 podman[380207]: 2025-10-14 09:22:58.333711873 +0000 UTC m=+0.061064900 container create 28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 09:22:58 compute-0 systemd[1]: Started libpod-conmon-28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252.scope.
Oct 14 09:22:58 compute-0 podman[380207]: 2025-10-14 09:22:58.317267687 +0000 UTC m=+0.044620724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:22:58 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d665a4ebab82d306a5da2b013f76906758735e874cc24ad1ac79daee3fce0913/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d665a4ebab82d306a5da2b013f76906758735e874cc24ad1ac79daee3fce0913/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d665a4ebab82d306a5da2b013f76906758735e874cc24ad1ac79daee3fce0913/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d665a4ebab82d306a5da2b013f76906758735e874cc24ad1ac79daee3fce0913/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d665a4ebab82d306a5da2b013f76906758735e874cc24ad1ac79daee3fce0913/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:22:58 compute-0 podman[380207]: 2025-10-14 09:22:58.454259532 +0000 UTC m=+0.181612589 container init 28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 09:22:58 compute-0 podman[380207]: 2025-10-14 09:22:58.467678593 +0000 UTC m=+0.195031610 container start 28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 09:22:58 compute-0 podman[380207]: 2025-10-14 09:22:58.471538429 +0000 UTC m=+0.198891486 container attach 28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.680 2 DEBUG nova.compute.manager [req-3acb27f1-b973-44fd-87bd-376e1a0a264a req-408baddd-19f9-473c-8549-4a0f760b9f71 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Received event network-vif-plugged-0c7d56b8-701e-431d-8f3f-4682c684a719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.680 2 DEBUG oslo_concurrency.lockutils [req-3acb27f1-b973-44fd-87bd-376e1a0a264a req-408baddd-19f9-473c-8549-4a0f760b9f71 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.680 2 DEBUG oslo_concurrency.lockutils [req-3acb27f1-b973-44fd-87bd-376e1a0a264a req-408baddd-19f9-473c-8549-4a0f760b9f71 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.681 2 DEBUG oslo_concurrency.lockutils [req-3acb27f1-b973-44fd-87bd-376e1a0a264a req-408baddd-19f9-473c-8549-4a0f760b9f71 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.681 2 DEBUG nova.compute.manager [req-3acb27f1-b973-44fd-87bd-376e1a0a264a req-408baddd-19f9-473c-8549-4a0f760b9f71 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Processing event network-vif-plugged-0c7d56b8-701e-431d-8f3f-4682c684a719 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.682 2 DEBUG nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.688 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433778.6874824, ef3d76bf-9763-4405-8e48-c2c4405a2a3b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.688 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] VM Resumed (Lifecycle Event)
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.690 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.693 2 INFO nova.virt.libvirt.driver [-] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Instance spawned successfully.
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.694 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.708 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.716 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.720 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.721 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.721 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.722 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.722 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.723 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.744 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.777 2 INFO nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Took 12.10 seconds to spawn the instance on the hypervisor.
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.778 2 DEBUG nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.850 2 INFO nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Took 13.13 seconds to build instance.
Oct 14 09:22:58 compute-0 nova_compute[259627]: 2025-10-14 09:22:58.872 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:58 compute-0 ceph-mon[74249]: osdmap e284: 3 total, 3 up, 3 in
Oct 14 09:22:58 compute-0 ceph-mon[74249]: pgmap v2087: 305 pgs: 305 active+clean; 326 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 8.2 MiB/s wr, 194 op/s
Oct 14 09:22:59 compute-0 nova_compute[259627]: 2025-10-14 09:22:59.041 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:59 compute-0 nova_compute[259627]: 2025-10-14 09:22:59.041 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:59 compute-0 nova_compute[259627]: 2025-10-14 09:22:59.060 2 DEBUG nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:22:59 compute-0 nova_compute[259627]: 2025-10-14 09:22:59.163 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:22:59 compute-0 nova_compute[259627]: 2025-10-14 09:22:59.164 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:22:59 compute-0 nova_compute[259627]: 2025-10-14 09:22:59.179 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:22:59 compute-0 nova_compute[259627]: 2025-10-14 09:22:59.179 2 INFO nova.compute.claims [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:22:59 compute-0 nova_compute[259627]: 2025-10-14 09:22:59.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:22:59 compute-0 nova_compute[259627]: 2025-10-14 09:22:59.374 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:22:59 compute-0 zen_mirzakhani[380224]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:22:59 compute-0 zen_mirzakhani[380224]: --> relative data size: 1.0
Oct 14 09:22:59 compute-0 zen_mirzakhani[380224]: --> All data devices are unavailable
Oct 14 09:22:59 compute-0 systemd[1]: libpod-28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252.scope: Deactivated successfully.
Oct 14 09:22:59 compute-0 systemd[1]: libpod-28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252.scope: Consumed 1.046s CPU time.
Oct 14 09:22:59 compute-0 podman[380207]: 2025-10-14 09:22:59.6566003 +0000 UTC m=+1.383953337 container died 28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d665a4ebab82d306a5da2b013f76906758735e874cc24ad1ac79daee3fce0913-merged.mount: Deactivated successfully.
Oct 14 09:22:59 compute-0 podman[380207]: 2025-10-14 09:22:59.725217336 +0000 UTC m=+1.452570363 container remove 28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:22:59 compute-0 systemd[1]: libpod-conmon-28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252.scope: Deactivated successfully.
Oct 14 09:22:59 compute-0 sudo[380102]: pam_unix(sudo:session): session closed for user root
Oct 14 09:22:59 compute-0 sudo[380284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:22:59 compute-0 sudo[380284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:22:59 compute-0 sudo[380284]: pam_unix(sudo:session): session closed for user root
Oct 14 09:22:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:22:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1269823853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:22:59 compute-0 sudo[380309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:22:59 compute-0 sudo[380309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:22:59 compute-0 sudo[380309]: pam_unix(sudo:session): session closed for user root
Oct 14 09:22:59 compute-0 nova_compute[259627]: 2025-10-14 09:22:59.907 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:22:59 compute-0 nova_compute[259627]: 2025-10-14 09:22:59.915 2 DEBUG nova.compute.provider_tree [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:22:59 compute-0 nova_compute[259627]: 2025-10-14 09:22:59.934 2 DEBUG nova.scheduler.client.report [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:22:59 compute-0 sudo[380336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:22:59 compute-0 sudo[380336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:22:59 compute-0 sudo[380336]: pam_unix(sudo:session): session closed for user root
Oct 14 09:22:59 compute-0 nova_compute[259627]: 2025-10-14 09:22:59.963 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:22:59 compute-0 nova_compute[259627]: 2025-10-14 09:22:59.965 2 DEBUG nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:22:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1269823853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:23:00 compute-0 sudo[380361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:23:00 compute-0 sudo[380361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.037 2 DEBUG nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.037 2 DEBUG nova.network.neutron [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.211 2 INFO nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:23:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 326 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 7.0 MiB/s wr, 220 op/s
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.228 2 DEBUG nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:23:00 compute-0 unix_chkpwd[380417]: password check failed for user (root)
Oct 14 09:23:00 compute-0 sshd-session[379591]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96  user=root
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.333 2 DEBUG nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.334 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.334 2 INFO nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Creating image(s)
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.358 2 DEBUG nova.storage.rbd_utils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:23:00 compute-0 podman[380425]: 2025-10-14 09:23:00.406975981 +0000 UTC m=+0.084030017 container create 231795364bab558a7d19e24b59542f0bb2e8e8e23529247f02c60b82909557d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.423 2 DEBUG nova.storage.rbd_utils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:23:00 compute-0 systemd[1]: Started libpod-conmon-231795364bab558a7d19e24b59542f0bb2e8e8e23529247f02c60b82909557d6.scope.
Oct 14 09:23:00 compute-0 podman[380425]: 2025-10-14 09:23:00.356906924 +0000 UTC m=+0.033960980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.453 2 DEBUG nova.storage.rbd_utils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.456 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "00eb9ed082be65aa60123c7b340067da781aa0fa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.457 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "00eb9ed082be65aa60123c7b340067da781aa0fa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:00 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:23:00 compute-0 podman[380425]: 2025-10-14 09:23:00.491741186 +0000 UTC m=+0.168795222 container init 231795364bab558a7d19e24b59542f0bb2e8e8e23529247f02c60b82909557d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_pike, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:23:00 compute-0 podman[380425]: 2025-10-14 09:23:00.499791945 +0000 UTC m=+0.176845971 container start 231795364bab558a7d19e24b59542f0bb2e8e8e23529247f02c60b82909557d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_pike, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 09:23:00 compute-0 podman[380425]: 2025-10-14 09:23:00.503405054 +0000 UTC m=+0.180459070 container attach 231795364bab558a7d19e24b59542f0bb2e8e8e23529247f02c60b82909557d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:23:00 compute-0 vigilant_pike[380493]: 167 167
Oct 14 09:23:00 compute-0 systemd[1]: libpod-231795364bab558a7d19e24b59542f0bb2e8e8e23529247f02c60b82909557d6.scope: Deactivated successfully.
Oct 14 09:23:00 compute-0 podman[380425]: 2025-10-14 09:23:00.508172832 +0000 UTC m=+0.185226878 container died 231795364bab558a7d19e24b59542f0bb2e8e8e23529247f02c60b82909557d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_pike, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:23:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-66111c2a9c9d578137af4a5fd1d87e2bea74640e5cd0166ffd724415571b3e0d-merged.mount: Deactivated successfully.
Oct 14 09:23:00 compute-0 podman[380425]: 2025-10-14 09:23:00.559302696 +0000 UTC m=+0.236356742 container remove 231795364bab558a7d19e24b59542f0bb2e8e8e23529247f02c60b82909557d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 09:23:00 compute-0 systemd[1]: libpod-conmon-231795364bab558a7d19e24b59542f0bb2e8e8e23529247f02c60b82909557d6.scope: Deactivated successfully.
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.639 2 DEBUG nova.policy [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f232ab535af04111bf570569aa293116', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4112adc84657452aa0e117ac5999054a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.782 2 DEBUG nova.virt.libvirt.imagebackend [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Image locations are: [{'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/d594f64a-1811-45da-92c9-566107aad012/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/d594f64a-1811-45da-92c9-566107aad012/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.838 2 DEBUG nova.compute.manager [req-ed8b0e64-b789-4db0-8d72-d31a7495351b req-59a1e423-006e-435d-97e7-c42c94abda16 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Received event network-vif-plugged-0c7d56b8-701e-431d-8f3f-4682c684a719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.839 2 DEBUG oslo_concurrency.lockutils [req-ed8b0e64-b789-4db0-8d72-d31a7495351b req-59a1e423-006e-435d-97e7-c42c94abda16 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.839 2 DEBUG oslo_concurrency.lockutils [req-ed8b0e64-b789-4db0-8d72-d31a7495351b req-59a1e423-006e-435d-97e7-c42c94abda16 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.839 2 DEBUG oslo_concurrency.lockutils [req-ed8b0e64-b789-4db0-8d72-d31a7495351b req-59a1e423-006e-435d-97e7-c42c94abda16 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.839 2 DEBUG nova.compute.manager [req-ed8b0e64-b789-4db0-8d72-d31a7495351b req-59a1e423-006e-435d-97e7-c42c94abda16 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] No waiting events found dispatching network-vif-plugged-0c7d56b8-701e-431d-8f3f-4682c684a719 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.839 2 WARNING nova.compute.manager [req-ed8b0e64-b789-4db0-8d72-d31a7495351b req-59a1e423-006e-435d-97e7-c42c94abda16 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Received unexpected event network-vif-plugged-0c7d56b8-701e-431d-8f3f-4682c684a719 for instance with vm_state active and task_state None.
Oct 14 09:23:00 compute-0 podman[380517]: 2025-10-14 09:23:00.842099273 +0000 UTC m=+0.050570741 container create 87817f05ce2a3eb09fc787a0e6ef26a6e0613358b46752310e798dd9e8893a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.853 2 DEBUG nova.virt.libvirt.imagebackend [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Selected location: {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/d594f64a-1811-45da-92c9-566107aad012/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 14 09:23:00 compute-0 nova_compute[259627]: 2025-10-14 09:23:00.854 2 DEBUG nova.storage.rbd_utils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] cloning images/d594f64a-1811-45da-92c9-566107aad012@snap to None/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:23:00 compute-0 systemd[1]: Started libpod-conmon-87817f05ce2a3eb09fc787a0e6ef26a6e0613358b46752310e798dd9e8893a7b.scope.
Oct 14 09:23:00 compute-0 podman[380517]: 2025-10-14 09:23:00.819262139 +0000 UTC m=+0.027716406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:23:00 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e2409cc6b1dd0ffbb963127b107c5f048d115ecf74f08e4106a3e5f5d79c2b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e2409cc6b1dd0ffbb963127b107c5f048d115ecf74f08e4106a3e5f5d79c2b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e2409cc6b1dd0ffbb963127b107c5f048d115ecf74f08e4106a3e5f5d79c2b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e2409cc6b1dd0ffbb963127b107c5f048d115ecf74f08e4106a3e5f5d79c2b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:23:00 compute-0 podman[380517]: 2025-10-14 09:23:00.955308211 +0000 UTC m=+0.163762458 container init 87817f05ce2a3eb09fc787a0e6ef26a6e0613358b46752310e798dd9e8893a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_maxwell, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:23:00 compute-0 podman[380574]: 2025-10-14 09:23:00.955822333 +0000 UTC m=+0.080750326 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:23:00 compute-0 podman[380563]: 2025-10-14 09:23:00.959461453 +0000 UTC m=+0.083729060 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:23:00 compute-0 podman[380517]: 2025-10-14 09:23:00.964431236 +0000 UTC m=+0.172885483 container start 87817f05ce2a3eb09fc787a0e6ef26a6e0613358b46752310e798dd9e8893a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_maxwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:23:00 compute-0 podman[380517]: 2025-10-14 09:23:00.977232833 +0000 UTC m=+0.185687080 container attach 87817f05ce2a3eb09fc787a0e6ef26a6e0613358b46752310e798dd9e8893a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_maxwell, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 14 09:23:01 compute-0 ceph-mon[74249]: pgmap v2088: 305 pgs: 305 active+clean; 326 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 7.0 MiB/s wr, 220 op/s
Oct 14 09:23:01 compute-0 nova_compute[259627]: 2025-10-14 09:23:01.156 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "00eb9ed082be65aa60123c7b340067da781aa0fa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:01 compute-0 nova_compute[259627]: 2025-10-14 09:23:01.266 2 DEBUG nova.objects.instance [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lazy-loading 'migration_context' on Instance uuid 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:23:01 compute-0 nova_compute[259627]: 2025-10-14 09:23:01.282 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:23:01 compute-0 nova_compute[259627]: 2025-10-14 09:23:01.283 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Ensure instance console log exists: /var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:23:01 compute-0 nova_compute[259627]: 2025-10-14 09:23:01.283 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:01 compute-0 nova_compute[259627]: 2025-10-14 09:23:01.283 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:01 compute-0 nova_compute[259627]: 2025-10-14 09:23:01.283 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:01 compute-0 sshd-session[379591]: Failed password for root from 188.150.249.96 port 55096 ssh2
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]: {
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:     "0": [
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:         {
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "devices": [
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "/dev/loop3"
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             ],
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "lv_name": "ceph_lv0",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "lv_size": "21470642176",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "name": "ceph_lv0",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "tags": {
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.cluster_name": "ceph",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.crush_device_class": "",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.encrypted": "0",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.osd_id": "0",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.type": "block",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.vdo": "0"
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             },
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "type": "block",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "vg_name": "ceph_vg0"
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:         }
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:     ],
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:     "1": [
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:         {
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "devices": [
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "/dev/loop4"
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             ],
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "lv_name": "ceph_lv1",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "lv_size": "21470642176",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "name": "ceph_lv1",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "tags": {
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.cluster_name": "ceph",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.crush_device_class": "",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.encrypted": "0",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.osd_id": "1",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.type": "block",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.vdo": "0"
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             },
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "type": "block",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "vg_name": "ceph_vg1"
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:         }
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:     ],
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:     "2": [
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:         {
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "devices": [
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "/dev/loop5"
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             ],
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "lv_name": "ceph_lv2",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "lv_size": "21470642176",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "name": "ceph_lv2",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "tags": {
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.cluster_name": "ceph",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.crush_device_class": "",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.encrypted": "0",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.osd_id": "2",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.type": "block",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:                 "ceph.vdo": "0"
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             },
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "type": "block",
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:             "vg_name": "ceph_vg2"
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:         }
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]:     ]
Oct 14 09:23:01 compute-0 naughty_maxwell[380615]: }
Oct 14 09:23:01 compute-0 systemd[1]: libpod-87817f05ce2a3eb09fc787a0e6ef26a6e0613358b46752310e798dd9e8893a7b.scope: Deactivated successfully.
Oct 14 09:23:01 compute-0 podman[380701]: 2025-10-14 09:23:01.844918713 +0000 UTC m=+0.024749743 container died 87817f05ce2a3eb09fc787a0e6ef26a6e0613358b46752310e798dd9e8893a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 09:23:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e2409cc6b1dd0ffbb963127b107c5f048d115ecf74f08e4106a3e5f5d79c2b5-merged.mount: Deactivated successfully.
Oct 14 09:23:01 compute-0 podman[380701]: 2025-10-14 09:23:01.981925548 +0000 UTC m=+0.161756558 container remove 87817f05ce2a3eb09fc787a0e6ef26a6e0613358b46752310e798dd9e8893a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 09:23:01 compute-0 systemd[1]: libpod-conmon-87817f05ce2a3eb09fc787a0e6ef26a6e0613358b46752310e798dd9e8893a7b.scope: Deactivated successfully.
Oct 14 09:23:02 compute-0 nova_compute[259627]: 2025-10-14 09:23:02.022 2 DEBUG nova.network.neutron [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Successfully created port: 169fcf13-d616-47ef-8558-362361f16f03 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:23:02 compute-0 sudo[380361]: pam_unix(sudo:session): session closed for user root
Oct 14 09:23:02 compute-0 sudo[380716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:23:02 compute-0 sudo[380716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:23:02 compute-0 sudo[380716]: pam_unix(sudo:session): session closed for user root
Oct 14 09:23:02 compute-0 sudo[380741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:23:02 compute-0 sudo[380741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:23:02 compute-0 sudo[380741]: pam_unix(sudo:session): session closed for user root
Oct 14 09:23:02 compute-0 sudo[380766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:23:02 compute-0 sudo[380766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:23:02 compute-0 sudo[380766]: pam_unix(sudo:session): session closed for user root
Oct 14 09:23:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 326 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 5.9 MiB/s wr, 237 op/s
Oct 14 09:23:02 compute-0 sudo[380791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:23:02 compute-0 sudo[380791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:23:02 compute-0 sshd-session[379591]: Connection closed by authenticating user root 188.150.249.96 port 55096 [preauth]
Oct 14 09:23:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:23:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:23:02 compute-0 podman[380854]: 2025-10-14 09:23:02.751007221 +0000 UTC m=+0.051646867 container create 0d31a599a01f4b9bed7a4ad254889d9e98b4fb91200126ad60b392312a3cf7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_liskov, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:23:02 compute-0 systemd[1]: Started libpod-conmon-0d31a599a01f4b9bed7a4ad254889d9e98b4fb91200126ad60b392312a3cf7b1.scope.
Oct 14 09:23:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:23:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:23:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:23:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:23:02 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:23:02 compute-0 podman[380854]: 2025-10-14 09:23:02.827208944 +0000 UTC m=+0.127848600 container init 0d31a599a01f4b9bed7a4ad254889d9e98b4fb91200126ad60b392312a3cf7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_liskov, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 09:23:02 compute-0 podman[380854]: 2025-10-14 09:23:02.733385015 +0000 UTC m=+0.034024681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:23:02 compute-0 podman[380854]: 2025-10-14 09:23:02.836139454 +0000 UTC m=+0.136779140 container start 0d31a599a01f4b9bed7a4ad254889d9e98b4fb91200126ad60b392312a3cf7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_liskov, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:23:02 compute-0 podman[380854]: 2025-10-14 09:23:02.840564984 +0000 UTC m=+0.141204650 container attach 0d31a599a01f4b9bed7a4ad254889d9e98b4fb91200126ad60b392312a3cf7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:23:02 compute-0 funny_liskov[380870]: 167 167
Oct 14 09:23:02 compute-0 systemd[1]: libpod-0d31a599a01f4b9bed7a4ad254889d9e98b4fb91200126ad60b392312a3cf7b1.scope: Deactivated successfully.
Oct 14 09:23:02 compute-0 podman[380854]: 2025-10-14 09:23:02.843710982 +0000 UTC m=+0.144350668 container died 0d31a599a01f4b9bed7a4ad254889d9e98b4fb91200126ad60b392312a3cf7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_liskov, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:23:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bc75e951dd04e8fed14665f7ea05663315f5b7ef1dca6b8d5a340a2e2a3b8d5-merged.mount: Deactivated successfully.
Oct 14 09:23:02 compute-0 podman[380854]: 2025-10-14 09:23:02.890779185 +0000 UTC m=+0.191418871 container remove 0d31a599a01f4b9bed7a4ad254889d9e98b4fb91200126ad60b392312a3cf7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 09:23:02 compute-0 systemd[1]: libpod-conmon-0d31a599a01f4b9bed7a4ad254889d9e98b4fb91200126ad60b392312a3cf7b1.scope: Deactivated successfully.
Oct 14 09:23:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:23:03 compute-0 podman[380895]: 2025-10-14 09:23:03.138514986 +0000 UTC m=+0.061706876 container create c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:23:03 compute-0 systemd[1]: Started libpod-conmon-c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508.scope.
Oct 14 09:23:03 compute-0 podman[380895]: 2025-10-14 09:23:03.113019556 +0000 UTC m=+0.036211506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:23:03 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:23:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05da768ac4b6ef8c99dabbe9710587f286d72e05815d80aa40bbd1f9903402a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:23:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05da768ac4b6ef8c99dabbe9710587f286d72e05815d80aa40bbd1f9903402a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:23:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05da768ac4b6ef8c99dabbe9710587f286d72e05815d80aa40bbd1f9903402a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:23:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05da768ac4b6ef8c99dabbe9710587f286d72e05815d80aa40bbd1f9903402a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:23:03 compute-0 podman[380895]: 2025-10-14 09:23:03.244235388 +0000 UTC m=+0.167427268 container init c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:23:03 compute-0 podman[380895]: 2025-10-14 09:23:03.25848565 +0000 UTC m=+0.181677510 container start c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:23:03 compute-0 podman[380895]: 2025-10-14 09:23:03.261987197 +0000 UTC m=+0.185179097 container attach c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_volhard, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:23:03 compute-0 ceph-mon[74249]: pgmap v2089: 305 pgs: 305 active+clean; 326 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 5.9 MiB/s wr, 237 op/s
Oct 14 09:23:03 compute-0 nova_compute[259627]: 2025-10-14 09:23:03.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:03 compute-0 nova_compute[259627]: 2025-10-14 09:23:03.647 2 DEBUG nova.network.neutron [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Successfully updated port: 169fcf13-d616-47ef-8558-362361f16f03 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:23:03 compute-0 nova_compute[259627]: 2025-10-14 09:23:03.663 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:23:03 compute-0 nova_compute[259627]: 2025-10-14 09:23:03.663 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquired lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:23:03 compute-0 nova_compute[259627]: 2025-10-14 09:23:03.663 2 DEBUG nova.network.neutron [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:23:03 compute-0 nova_compute[259627]: 2025-10-14 09:23:03.757 2 DEBUG nova.compute.manager [req-767f6e60-b3e0-4208-b05f-f59e6e5d8d23 req-863328d4-01f0-40e1-97bf-c1cd52e03a2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received event network-changed-169fcf13-d616-47ef-8558-362361f16f03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:23:03 compute-0 nova_compute[259627]: 2025-10-14 09:23:03.758 2 DEBUG nova.compute.manager [req-767f6e60-b3e0-4208-b05f-f59e6e5d8d23 req-863328d4-01f0-40e1-97bf-c1cd52e03a2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Refreshing instance network info cache due to event network-changed-169fcf13-d616-47ef-8558-362361f16f03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:23:03 compute-0 nova_compute[259627]: 2025-10-14 09:23:03.758 2 DEBUG oslo_concurrency.lockutils [req-767f6e60-b3e0-4208-b05f-f59e6e5d8d23 req-863328d4-01f0-40e1-97bf-c1cd52e03a2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:23:03 compute-0 nova_compute[259627]: 2025-10-14 09:23:03.936 2 DEBUG nova.network.neutron [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:23:03 compute-0 nova_compute[259627]: 2025-10-14 09:23:03.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:23:03 compute-0 nova_compute[259627]: 2025-10-14 09:23:03.977 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:03 compute-0 nova_compute[259627]: 2025-10-14 09:23:03.978 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:03 compute-0 nova_compute[259627]: 2025-10-14 09:23:03.978 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:03 compute-0 nova_compute[259627]: 2025-10-14 09:23:03.978 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:03 compute-0 nova_compute[259627]: 2025-10-14 09:23:03.979 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:03 compute-0 nova_compute[259627]: 2025-10-14 09:23:03.979 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.012 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.032 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.032 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Image id d594f64a-1811-45da-92c9-566107aad012 yields fingerprint 00eb9ed082be65aa60123c7b340067da781aa0fa _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.033 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.033 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Image id a4789543-f429-47d7-9f79-80a9d90a59f9 yields fingerprint 342c3cf69558783c61e2fc446ea836becb687963 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.034 2 INFO nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] image a4789543-f429-47d7-9f79-80a9d90a59f9 at (/var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963): checking
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.034 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] image a4789543-f429-47d7-9f79-80a9d90a59f9 at (/var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.036 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] 50c83173-31e3-4f7a-8836-26e52affd0f2 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.036 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] 6810b29b-088f-441b-8a6a-02eaafada0c5 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.037 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] ef3d76bf-9763-4405-8e48-c2c4405a2a3b is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.037 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.037 2 WARNING nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.037 2 INFO nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Active base files: /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.037 2 INFO nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Removable base files: /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.038 2 INFO nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.038 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.038 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.038 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.039 2 INFO nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]: {
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:         "osd_id": 2,
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:         "type": "bluestore"
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:     },
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:         "osd_id": 1,
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:         "type": "bluestore"
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:     },
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:         "osd_id": 0,
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:         "type": "bluestore"
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]:     }
Oct 14 09:23:04 compute-0 unruffled_volhard[380912]: }
Oct 14 09:23:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 326 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 4.8 MiB/s wr, 195 op/s
Oct 14 09:23:04 compute-0 systemd[1]: libpod-c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508.scope: Deactivated successfully.
Oct 14 09:23:04 compute-0 conmon[380912]: conmon c4328c265c8ea1a8a320 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508.scope/container/memory.events
Oct 14 09:23:04 compute-0 podman[380895]: 2025-10-14 09:23:04.259836094 +0000 UTC m=+1.183027984 container died c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 09:23:04 compute-0 nova_compute[259627]: 2025-10-14 09:23:04.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b05da768ac4b6ef8c99dabbe9710587f286d72e05815d80aa40bbd1f9903402a-merged.mount: Deactivated successfully.
Oct 14 09:23:04 compute-0 podman[380895]: 2025-10-14 09:23:04.318530004 +0000 UTC m=+1.241721904 container remove c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:23:04 compute-0 systemd[1]: libpod-conmon-c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508.scope: Deactivated successfully.
Oct 14 09:23:04 compute-0 sudo[380791]: pam_unix(sudo:session): session closed for user root
Oct 14 09:23:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:23:04 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:23:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:23:04 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:23:04 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 73447796-0991-4fb4-bd4d-94225c535c27 does not exist
Oct 14 09:23:04 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev e0068f8c-a12d-49f3-8c54-b4236d25bd45 does not exist
Oct 14 09:23:04 compute-0 sudo[380956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:23:04 compute-0 sudo[380956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:23:04 compute-0 sudo[380956]: pam_unix(sudo:session): session closed for user root
Oct 14 09:23:04 compute-0 sudo[380981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:23:04 compute-0 sudo[380981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:23:04 compute-0 sudo[380981]: pam_unix(sudo:session): session closed for user root
Oct 14 09:23:05 compute-0 ceph-mon[74249]: pgmap v2090: 305 pgs: 305 active+clean; 326 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 4.8 MiB/s wr, 195 op/s
Oct 14 09:23:05 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:23:05 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:23:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:23:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4254305706' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:23:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:23:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4254305706' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.698 2 DEBUG nova.network.neutron [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Updating instance_info_cache with network_info: [{"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.726 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Releasing lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.726 2 DEBUG nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Instance network_info: |[{"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.726 2 DEBUG oslo_concurrency.lockutils [req-767f6e60-b3e0-4208-b05f-f59e6e5d8d23 req-863328d4-01f0-40e1-97bf-c1cd52e03a2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.726 2 DEBUG nova.network.neutron [req-767f6e60-b3e0-4208-b05f-f59e6e5d8d23 req-863328d4-01f0-40e1-97bf-c1cd52e03a2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Refreshing network info cache for port 169fcf13-d616-47ef-8558-362361f16f03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.729 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Start _get_guest_xml network_info=[{"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-14T09:22:50Z,direct_url=<?>,disk_format='raw',id=d594f64a-1811-45da-92c9-566107aad012,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-397603188',owner='4112adc84657452aa0e117ac5999054a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-14T09:22:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'd594f64a-1811-45da-92c9-566107aad012'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.733 2 WARNING nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.740 2 DEBUG nova.virt.libvirt.host [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.740 2 DEBUG nova.virt.libvirt.host [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.744 2 DEBUG nova.virt.libvirt.host [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.745 2 DEBUG nova.virt.libvirt.host [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.745 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.745 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-14T09:22:50Z,direct_url=<?>,disk_format='raw',id=d594f64a-1811-45da-92c9-566107aad012,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-397603188',owner='4112adc84657452aa0e117ac5999054a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-14T09:22:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.745 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.746 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.746 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.746 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.746 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.746 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.746 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.747 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.747 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.747 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:23:05 compute-0 nova_compute[259627]: 2025-10-14 09:23:05.749 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:23:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:23:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3777986676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 326 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 KiB/s wr, 116 op/s
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.235 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.276 2 DEBUG nova.storage.rbd_utils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.283 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:23:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/4254305706' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:23:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/4254305706' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:23:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3777986676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:23:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:06.587 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:06.589 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:23:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:23:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3969605750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.823 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.825 2 DEBUG nova.virt.libvirt.vif [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1820866852',display_name='tempest-TestSnapshotPattern-server-1820866852',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1820866852',id=121,image_ref='d594f64a-1811-45da-92c9-566107aad012',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/LFMmap4OB0txXt7JPmRqMoCntTC6tPJ8UMtOaZfJpNHOk397YhzNpC3bp7HY+1PlJdNNwm/PwDw4vAnX0LcroehUvKxClr2ruduakP/QJ599/XDk5NHuriBpOA/llJg==',key_name='tempest-TestSnapshotPattern-1883373955',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4112adc84657452aa0e117ac5999054a',ramdisk_id='',reservation_id='r-z8e26amr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='6810b29b-088f-441b-8a6a-02eaafada0c5',image_min_disk='1',image_min_ram='0',image_owner_id='4112adc84657452aa0e117ac5999054a',image_owner_project_name='tempest-TestSnapshotPattern-70687399',image_owner_user_name='tempest-TestSnapshotPattern-70687399-project-member',image_user_id='f232ab535af04111bf570569aa293116',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-70687399',owner_user_name='tempest-TestSnapshotPattern-70687399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:23:00Z,user_data=None,user_id='f232ab535af04111bf570569aa293116',uuid=8f5e63fb-23c2-4
f15-acca-bc5fbeb0729b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.825 2 DEBUG nova.network.os_vif_util [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converting VIF {"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.826 2 DEBUG nova.network.os_vif_util [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=169fcf13-d616-47ef-8558-362361f16f03,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap169fcf13-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.828 2 DEBUG nova.objects.instance [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lazy-loading 'pci_devices' on Instance uuid 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.855 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:23:06 compute-0 nova_compute[259627]:   <uuid>8f5e63fb-23c2-4f15-acca-bc5fbeb0729b</uuid>
Oct 14 09:23:06 compute-0 nova_compute[259627]:   <name>instance-00000079</name>
Oct 14 09:23:06 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:23:06 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:23:06 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <nova:name>tempest-TestSnapshotPattern-server-1820866852</nova:name>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:23:05</nova:creationTime>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:23:06 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:23:06 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:23:06 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:23:06 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:23:06 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:23:06 compute-0 nova_compute[259627]:         <nova:user uuid="f232ab535af04111bf570569aa293116">tempest-TestSnapshotPattern-70687399-project-member</nova:user>
Oct 14 09:23:06 compute-0 nova_compute[259627]:         <nova:project uuid="4112adc84657452aa0e117ac5999054a">tempest-TestSnapshotPattern-70687399</nova:project>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="d594f64a-1811-45da-92c9-566107aad012"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:23:06 compute-0 nova_compute[259627]:         <nova:port uuid="169fcf13-d616-47ef-8558-362361f16f03">
Oct 14 09:23:06 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:23:06 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:23:06 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <system>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <entry name="serial">8f5e63fb-23c2-4f15-acca-bc5fbeb0729b</entry>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <entry name="uuid">8f5e63fb-23c2-4f15-acca-bc5fbeb0729b</entry>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     </system>
Oct 14 09:23:06 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:23:06 compute-0 nova_compute[259627]:   <os>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:   </os>
Oct 14 09:23:06 compute-0 nova_compute[259627]:   <features>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:   </features>
Oct 14 09:23:06 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:23:06 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:23:06 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk">
Oct 14 09:23:06 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       </source>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:23:06 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk.config">
Oct 14 09:23:06 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       </source>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:23:06 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:46:4e:45"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <target dev="tap169fcf13-d6"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b/console.log" append="off"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <video>
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     </video>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <input type="keyboard" bus="usb"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:23:06 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:23:06 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:23:06 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:23:06 compute-0 nova_compute[259627]: </domain>
Oct 14 09:23:06 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.857 2 DEBUG nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Preparing to wait for external event network-vif-plugged-169fcf13-d616-47ef-8558-362361f16f03 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.857 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.858 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.858 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.859 2 DEBUG nova.virt.libvirt.vif [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1820866852',display_name='tempest-TestSnapshotPattern-server-1820866852',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1820866852',id=121,image_ref='d594f64a-1811-45da-92c9-566107aad012',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/LFMmap4OB0txXt7JPmRqMoCntTC6tPJ8UMtOaZfJpNHOk397YhzNpC3bp7HY+1PlJdNNwm/PwDw4vAnX0LcroehUvKxClr2ruduakP/QJ599/XDk5NHuriBpOA/llJg==',key_name='tempest-TestSnapshotPattern-1883373955',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4112adc84657452aa0e117ac5999054a',ramdisk_id='',reservation_id='r-z8e26amr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='6810b29b-088f-441b-8a6a-02eaafada0c5',image_min_disk='1',image_min_ram='0',image_owner_id='4112adc84657452aa0e117ac5999054a',image_owner_project_name='tempest-TestSnapshotPattern-70687399',image_owner_user_name='tempest-TestSnapshotPattern-70687399-project-member',image_user_id='f232ab535af04111bf570569aa293116',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-70687399',owner_user_name='tempest-TestSnapshotPattern-70687399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:23:00Z,user_data=None,user_id='f232ab535af04111bf570569aa293116',uuid=8f5e6
3fb-23c2-4f15-acca-bc5fbeb0729b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.859 2 DEBUG nova.network.os_vif_util [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converting VIF {"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.859 2 DEBUG nova.network.os_vif_util [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=169fcf13-d616-47ef-8558-362361f16f03,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap169fcf13-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.860 2 DEBUG os_vif [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=169fcf13-d616-47ef-8558-362361f16f03,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap169fcf13-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.861 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.861 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.863 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap169fcf13-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.864 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap169fcf13-d6, col_values=(('external_ids', {'iface-id': '169fcf13-d616-47ef-8558-362361f16f03', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:4e:45', 'vm-uuid': '8f5e63fb-23c2-4f15-acca-bc5fbeb0729b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:06 compute-0 NetworkManager[44885]: <info>  [1760433786.8660] manager: (tap169fcf13-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/525)
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.873 2 INFO os_vif [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=169fcf13-d616-47ef-8558-362361f16f03,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap169fcf13-d6')
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.944 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.944 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.945 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] No VIF found with MAC fa:16:3e:46:4e:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.945 2 INFO nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Using config drive
Oct 14 09:23:06 compute-0 nova_compute[259627]: 2025-10-14 09:23:06.964 2 DEBUG nova.storage.rbd_utils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:23:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:07.038 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:07.038 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:07.039 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:07 compute-0 ceph-mon[74249]: pgmap v2091: 305 pgs: 305 active+clean; 326 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 KiB/s wr, 116 op/s
Oct 14 09:23:07 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3969605750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:23:07 compute-0 nova_compute[259627]: 2025-10-14 09:23:07.500 2 INFO nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Creating config drive at /var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b/disk.config
Oct 14 09:23:07 compute-0 nova_compute[259627]: 2025-10-14 09:23:07.504 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2aae2s_0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:23:07 compute-0 nova_compute[259627]: 2025-10-14 09:23:07.663 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2aae2s_0" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:23:07 compute-0 nova_compute[259627]: 2025-10-14 09:23:07.694 2 DEBUG nova.storage.rbd_utils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:23:07 compute-0 nova_compute[259627]: 2025-10-14 09:23:07.700 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b/disk.config 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:23:07 compute-0 nova_compute[259627]: 2025-10-14 09:23:07.739 2 DEBUG nova.network.neutron [req-767f6e60-b3e0-4208-b05f-f59e6e5d8d23 req-863328d4-01f0-40e1-97bf-c1cd52e03a2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Updated VIF entry in instance network info cache for port 169fcf13-d616-47ef-8558-362361f16f03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:23:07 compute-0 nova_compute[259627]: 2025-10-14 09:23:07.740 2 DEBUG nova.network.neutron [req-767f6e60-b3e0-4208-b05f-f59e6e5d8d23 req-863328d4-01f0-40e1-97bf-c1cd52e03a2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Updating instance_info_cache with network_info: [{"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:23:07 compute-0 nova_compute[259627]: 2025-10-14 09:23:07.764 2 DEBUG oslo_concurrency.lockutils [req-767f6e60-b3e0-4208-b05f-f59e6e5d8d23 req-863328d4-01f0-40e1-97bf-c1cd52e03a2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:23:07 compute-0 nova_compute[259627]: 2025-10-14 09:23:07.848 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b/disk.config 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:23:07 compute-0 nova_compute[259627]: 2025-10-14 09:23:07.849 2 INFO nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Deleting local config drive /var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b/disk.config because it was imported into RBD.
Oct 14 09:23:07 compute-0 kernel: tap169fcf13-d6: entered promiscuous mode
Oct 14 09:23:07 compute-0 NetworkManager[44885]: <info>  [1760433787.9041] manager: (tap169fcf13-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/526)
Oct 14 09:23:07 compute-0 ovn_controller[152662]: 2025-10-14T09:23:07Z|01282|binding|INFO|Claiming lport 169fcf13-d616-47ef-8558-362361f16f03 for this chassis.
Oct 14 09:23:07 compute-0 ovn_controller[152662]: 2025-10-14T09:23:07Z|01283|binding|INFO|169fcf13-d616-47ef-8558-362361f16f03: Claiming fa:16:3e:46:4e:45 10.100.0.4
Oct 14 09:23:07 compute-0 nova_compute[259627]: 2025-10-14 09:23:07.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:07.914 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:4e:45 10.100.0.4'], port_security=['fa:16:3e:46:4e:45 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '8f5e63fb-23c2-4f15-acca-bc5fbeb0729b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4112adc84657452aa0e117ac5999054a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b7a53172-9b5e-49ee-bb03-aeca0d4a8fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=add5cdec-6440-4df9-aea8-21659d7bab06, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=169fcf13-d616-47ef-8558-362361f16f03) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:23:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:07.915 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 169fcf13-d616-47ef-8558-362361f16f03 in datapath 4fc37d66-193b-4ab7-80e3-58e26dc76e47 bound to our chassis
Oct 14 09:23:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:07.916 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fc37d66-193b-4ab7-80e3-58e26dc76e47
Oct 14 09:23:07 compute-0 ovn_controller[152662]: 2025-10-14T09:23:07Z|01284|binding|INFO|Setting lport 169fcf13-d616-47ef-8558-362361f16f03 ovn-installed in OVS
Oct 14 09:23:07 compute-0 ovn_controller[152662]: 2025-10-14T09:23:07Z|01285|binding|INFO|Setting lport 169fcf13-d616-47ef-8558-362361f16f03 up in Southbound
Oct 14 09:23:07 compute-0 nova_compute[259627]: 2025-10-14 09:23:07.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:07.935 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[711bc90d-676c-4917-8037-2c01301baafc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:07 compute-0 nova_compute[259627]: 2025-10-14 09:23:07.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:07 compute-0 systemd-udevd[381143]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:23:07 compute-0 systemd-machined[214636]: New machine qemu-154-instance-00000079.
Oct 14 09:23:07 compute-0 NetworkManager[44885]: <info>  [1760433787.9565] device (tap169fcf13-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:23:07 compute-0 NetworkManager[44885]: <info>  [1760433787.9574] device (tap169fcf13-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:23:07 compute-0 systemd[1]: Started Virtual Machine qemu-154-instance-00000079.
Oct 14 09:23:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:23:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:07.967 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ed442531-b1b6-4fd5-b19e-9d9611abcac7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:07.970 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a8926cf4-fae2-463b-810f-e050a1637aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:08.012 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[54056637-90af-4b4f-a64b-2df38d32788c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:08.032 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7f12a082-5dce-4d38-a507-240929d06595]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fc37d66-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:1e:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752720, 'reachable_time': 16651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381154, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:08.052 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f46f6522-b263-4559-a27a-886035d74dbb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4fc37d66-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752733, 'tstamp': 752733}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381157, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4fc37d66-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752737, 'tstamp': 752737}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381157, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:08.054 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fc37d66-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:08.057 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fc37d66-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:08.058 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:23:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:08.058 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fc37d66-10, col_values=(('external_ids', {'iface-id': '04719e6c-d55b-4ad7-a45c-52e6e59101ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:08.059 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:23:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 326 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.8 KiB/s wr, 113 op/s
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.730 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433788.7293375, 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.732 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] VM Started (Lifecycle Event)
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.760 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.763 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433788.7307906, 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.763 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] VM Paused (Lifecycle Event)
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.788 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.791 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.814 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.930 2 DEBUG nova.compute.manager [req-7f818578-8107-4a38-8261-284b1f30e882 req-7679cecc-667d-41bf-9e90-c7bc5c90b944 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received event network-vif-plugged-169fcf13-d616-47ef-8558-362361f16f03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.930 2 DEBUG oslo_concurrency.lockutils [req-7f818578-8107-4a38-8261-284b1f30e882 req-7679cecc-667d-41bf-9e90-c7bc5c90b944 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.931 2 DEBUG oslo_concurrency.lockutils [req-7f818578-8107-4a38-8261-284b1f30e882 req-7679cecc-667d-41bf-9e90-c7bc5c90b944 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.931 2 DEBUG oslo_concurrency.lockutils [req-7f818578-8107-4a38-8261-284b1f30e882 req-7679cecc-667d-41bf-9e90-c7bc5c90b944 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.931 2 DEBUG nova.compute.manager [req-7f818578-8107-4a38-8261-284b1f30e882 req-7679cecc-667d-41bf-9e90-c7bc5c90b944 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Processing event network-vif-plugged-169fcf13-d616-47ef-8558-362361f16f03 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.932 2 DEBUG nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.936 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433788.935517, 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.936 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] VM Resumed (Lifecycle Event)
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.938 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.943 2 INFO nova.virt.libvirt.driver [-] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Instance spawned successfully.
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.944 2 INFO nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Took 8.61 seconds to spawn the instance on the hypervisor.
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.944 2 DEBUG nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.984 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:23:08 compute-0 nova_compute[259627]: 2025-10-14 09:23:08.989 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:23:09 compute-0 nova_compute[259627]: 2025-10-14 09:23:09.027 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:23:09 compute-0 nova_compute[259627]: 2025-10-14 09:23:09.034 2 INFO nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Took 9.91 seconds to build instance.
Oct 14 09:23:09 compute-0 nova_compute[259627]: 2025-10-14 09:23:09.059 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:09 compute-0 nova_compute[259627]: 2025-10-14 09:23:09.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:09 compute-0 ceph-mon[74249]: pgmap v2092: 305 pgs: 305 active+clean; 326 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.8 KiB/s wr, 113 op/s
Oct 14 09:23:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:09.593 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 326 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.4 KiB/s wr, 113 op/s
Oct 14 09:23:11 compute-0 nova_compute[259627]: 2025-10-14 09:23:11.049 2 DEBUG nova.compute.manager [req-ba65a57e-54ae-4183-98dc-c9ca4d408c42 req-2214bf05-c643-4d68-b589-8b53ae7903da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received event network-vif-plugged-169fcf13-d616-47ef-8558-362361f16f03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:23:11 compute-0 nova_compute[259627]: 2025-10-14 09:23:11.050 2 DEBUG oslo_concurrency.lockutils [req-ba65a57e-54ae-4183-98dc-c9ca4d408c42 req-2214bf05-c643-4d68-b589-8b53ae7903da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:11 compute-0 nova_compute[259627]: 2025-10-14 09:23:11.050 2 DEBUG oslo_concurrency.lockutils [req-ba65a57e-54ae-4183-98dc-c9ca4d408c42 req-2214bf05-c643-4d68-b589-8b53ae7903da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:11 compute-0 nova_compute[259627]: 2025-10-14 09:23:11.050 2 DEBUG oslo_concurrency.lockutils [req-ba65a57e-54ae-4183-98dc-c9ca4d408c42 req-2214bf05-c643-4d68-b589-8b53ae7903da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:11 compute-0 nova_compute[259627]: 2025-10-14 09:23:11.051 2 DEBUG nova.compute.manager [req-ba65a57e-54ae-4183-98dc-c9ca4d408c42 req-2214bf05-c643-4d68-b589-8b53ae7903da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] No waiting events found dispatching network-vif-plugged-169fcf13-d616-47ef-8558-362361f16f03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:23:11 compute-0 nova_compute[259627]: 2025-10-14 09:23:11.051 2 WARNING nova.compute.manager [req-ba65a57e-54ae-4183-98dc-c9ca4d408c42 req-2214bf05-c643-4d68-b589-8b53ae7903da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received unexpected event network-vif-plugged-169fcf13-d616-47ef-8558-362361f16f03 for instance with vm_state active and task_state None.
Oct 14 09:23:11 compute-0 ceph-mon[74249]: pgmap v2093: 305 pgs: 305 active+clean; 326 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.4 KiB/s wr, 113 op/s
Oct 14 09:23:11 compute-0 nova_compute[259627]: 2025-10-14 09:23:11.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:12 compute-0 ovn_controller[152662]: 2025-10-14T09:23:12Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b4:88:3d 10.100.0.22
Oct 14 09:23:12 compute-0 ovn_controller[152662]: 2025-10-14T09:23:12Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b4:88:3d 10.100.0.22
Oct 14 09:23:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 335 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 740 KiB/s wr, 129 op/s
Oct 14 09:23:12 compute-0 nova_compute[259627]: 2025-10-14 09:23:12.405 2 DEBUG nova.compute.manager [req-57599e62-e81f-4e8d-8d59-a6e2b04d5692 req-193c100a-90b9-479d-bf49-09b9e454e5e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received event network-changed-169fcf13-d616-47ef-8558-362361f16f03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:23:12 compute-0 nova_compute[259627]: 2025-10-14 09:23:12.406 2 DEBUG nova.compute.manager [req-57599e62-e81f-4e8d-8d59-a6e2b04d5692 req-193c100a-90b9-479d-bf49-09b9e454e5e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Refreshing instance network info cache due to event network-changed-169fcf13-d616-47ef-8558-362361f16f03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:23:12 compute-0 nova_compute[259627]: 2025-10-14 09:23:12.407 2 DEBUG oslo_concurrency.lockutils [req-57599e62-e81f-4e8d-8d59-a6e2b04d5692 req-193c100a-90b9-479d-bf49-09b9e454e5e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:23:12 compute-0 nova_compute[259627]: 2025-10-14 09:23:12.407 2 DEBUG oslo_concurrency.lockutils [req-57599e62-e81f-4e8d-8d59-a6e2b04d5692 req-193c100a-90b9-479d-bf49-09b9e454e5e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:23:12 compute-0 nova_compute[259627]: 2025-10-14 09:23:12.408 2 DEBUG nova.network.neutron [req-57599e62-e81f-4e8d-8d59-a6e2b04d5692 req-193c100a-90b9-479d-bf49-09b9e454e5e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Refreshing network info cache for port 169fcf13-d616-47ef-8558-362361f16f03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:23:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:23:13 compute-0 ceph-mon[74249]: pgmap v2094: 305 pgs: 305 active+clean; 335 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 740 KiB/s wr, 129 op/s
Oct 14 09:23:13 compute-0 nova_compute[259627]: 2025-10-14 09:23:13.713 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquiring lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:13 compute-0 nova_compute[259627]: 2025-10-14 09:23:13.713 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:13 compute-0 nova_compute[259627]: 2025-10-14 09:23:13.737 2 DEBUG nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:23:13 compute-0 nova_compute[259627]: 2025-10-14 09:23:13.808 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:13 compute-0 nova_compute[259627]: 2025-10-14 09:23:13.809 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:13 compute-0 nova_compute[259627]: 2025-10-14 09:23:13.817 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:23:13 compute-0 nova_compute[259627]: 2025-10-14 09:23:13.818 2 INFO nova.compute.claims [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:23:13 compute-0 nova_compute[259627]: 2025-10-14 09:23:13.964 2 DEBUG nova.network.neutron [req-57599e62-e81f-4e8d-8d59-a6e2b04d5692 req-193c100a-90b9-479d-bf49-09b9e454e5e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Updated VIF entry in instance network info cache for port 169fcf13-d616-47ef-8558-362361f16f03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:23:13 compute-0 nova_compute[259627]: 2025-10-14 09:23:13.965 2 DEBUG nova.network.neutron [req-57599e62-e81f-4e8d-8d59-a6e2b04d5692 req-193c100a-90b9-479d-bf49-09b9e454e5e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Updating instance_info_cache with network_info: [{"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:23:13 compute-0 nova_compute[259627]: 2025-10-14 09:23:13.988 2 DEBUG oslo_concurrency.lockutils [req-57599e62-e81f-4e8d-8d59-a6e2b04d5692 req-193c100a-90b9-479d-bf49-09b9e454e5e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:23:13 compute-0 nova_compute[259627]: 2025-10-14 09:23:13.997 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:23:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 335 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 739 KiB/s wr, 94 op/s
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:23:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3512835621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.461 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.467 2 DEBUG nova.compute.provider_tree [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.488 2 DEBUG nova.scheduler.client.report [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.514 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.515 2 DEBUG nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.587 2 DEBUG nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.589 2 DEBUG nova.network.neutron [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.609 2 INFO nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.633 2 DEBUG nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:23:14 compute-0 podman[381223]: 2025-10-14 09:23:14.669948814 +0000 UTC m=+0.068690279 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:23:14 compute-0 podman[381222]: 2025-10-14 09:23:14.751214903 +0000 UTC m=+0.158588940 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.778 2 DEBUG nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.780 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.780 2 INFO nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Creating image(s)
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.802 2 DEBUG nova.storage.rbd_utils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] rbd image 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.824 2 DEBUG nova.storage.rbd_utils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] rbd image 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.846 2 DEBUG nova.storage.rbd_utils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] rbd image 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.852 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.894 2 DEBUG nova.policy [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c3638538fa6347dc95b6e30735bf0e83', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9afc0bee75634a4cb284babbfba8d601', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.933 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.934 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.935 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.935 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.958 2 DEBUG nova.storage.rbd_utils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] rbd image 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:23:14 compute-0 nova_compute[259627]: 2025-10-14 09:23:14.962 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:23:15 compute-0 nova_compute[259627]: 2025-10-14 09:23:15.227 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:23:15 compute-0 nova_compute[259627]: 2025-10-14 09:23:15.289 2 DEBUG nova.storage.rbd_utils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] resizing rbd image 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:23:15 compute-0 nova_compute[259627]: 2025-10-14 09:23:15.401 2 DEBUG nova.objects.instance [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lazy-loading 'migration_context' on Instance uuid 060db45d-e2f9-4bf6-bcc0-c72e479bfae1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:23:15 compute-0 ceph-mon[74249]: pgmap v2095: 305 pgs: 305 active+clean; 335 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 739 KiB/s wr, 94 op/s
Oct 14 09:23:15 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3512835621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:23:15 compute-0 nova_compute[259627]: 2025-10-14 09:23:15.418 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:23:15 compute-0 nova_compute[259627]: 2025-10-14 09:23:15.419 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Ensure instance console log exists: /var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:23:15 compute-0 nova_compute[259627]: 2025-10-14 09:23:15.419 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:15 compute-0 nova_compute[259627]: 2025-10-14 09:23:15.419 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:15 compute-0 nova_compute[259627]: 2025-10-14 09:23:15.420 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:16 compute-0 nova_compute[259627]: 2025-10-14 09:23:16.156 2 DEBUG nova.network.neutron [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Successfully created port: aff3a259-5908-4491-83a2-9aa0430d46e0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:23:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 389 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.2 MiB/s wr, 189 op/s
Oct 14 09:23:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:16.709 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:ef:a5 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-9f430dff-3f6d-47bc-b9b6-a9119c33360a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f430dff-3f6d-47bc-b9b6-a9119c33360a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f46887bb96d483788cde4e37af9f799', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d988bdc4-1ce3-4ffd-8a2d-6d82ccf0df6c, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=285c7ed0-64cf-4192-b665-658fbdfac746) old=Port_Binding(mac=['fa:16:3e:6c:ef:a5 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-9f430dff-3f6d-47bc-b9b6-a9119c33360a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f430dff-3f6d-47bc-b9b6-a9119c33360a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f46887bb96d483788cde4e37af9f799', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:23:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:16.712 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 285c7ed0-64cf-4192-b665-658fbdfac746 in datapath 9f430dff-3f6d-47bc-b9b6-a9119c33360a updated
Oct 14 09:23:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:16.716 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f430dff-3f6d-47bc-b9b6-a9119c33360a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:23:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:16.717 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc7a8f3-b316-4cab-a1dd-60bc4571c5af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:16 compute-0 nova_compute[259627]: 2025-10-14 09:23:16.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:17 compute-0 nova_compute[259627]: 2025-10-14 09:23:17.418 2 DEBUG nova.network.neutron [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Successfully updated port: aff3a259-5908-4491-83a2-9aa0430d46e0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:23:17 compute-0 ceph-mon[74249]: pgmap v2096: 305 pgs: 305 active+clean; 389 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.2 MiB/s wr, 189 op/s
Oct 14 09:23:17 compute-0 nova_compute[259627]: 2025-10-14 09:23:17.436 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquiring lock "refresh_cache-060db45d-e2f9-4bf6-bcc0-c72e479bfae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:23:17 compute-0 nova_compute[259627]: 2025-10-14 09:23:17.437 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquired lock "refresh_cache-060db45d-e2f9-4bf6-bcc0-c72e479bfae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:23:17 compute-0 nova_compute[259627]: 2025-10-14 09:23:17.437 2 DEBUG nova.network.neutron [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:23:17 compute-0 nova_compute[259627]: 2025-10-14 09:23:17.560 2 DEBUG nova.compute.manager [req-724cc8e1-7edd-43e6-8ca2-7f2e691c8e30 req-b5a4f2b8-dafe-42ee-84c9-ec4f4c2ef9e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received event network-changed-aff3a259-5908-4491-83a2-9aa0430d46e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:23:17 compute-0 nova_compute[259627]: 2025-10-14 09:23:17.561 2 DEBUG nova.compute.manager [req-724cc8e1-7edd-43e6-8ca2-7f2e691c8e30 req-b5a4f2b8-dafe-42ee-84c9-ec4f4c2ef9e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Refreshing instance network info cache due to event network-changed-aff3a259-5908-4491-83a2-9aa0430d46e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:23:17 compute-0 nova_compute[259627]: 2025-10-14 09:23:17.562 2 DEBUG oslo_concurrency.lockutils [req-724cc8e1-7edd-43e6-8ca2-7f2e691c8e30 req-b5a4f2b8-dafe-42ee-84c9-ec4f4c2ef9e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-060db45d-e2f9-4bf6-bcc0-c72e479bfae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:23:17 compute-0 nova_compute[259627]: 2025-10-14 09:23:17.615 2 DEBUG nova.network.neutron [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:23:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:23:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 389 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 158 op/s
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.758 2 DEBUG nova.network.neutron [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Updating instance_info_cache with network_info: [{"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.777 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Releasing lock "refresh_cache-060db45d-e2f9-4bf6-bcc0-c72e479bfae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.777 2 DEBUG nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Instance network_info: |[{"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.779 2 DEBUG oslo_concurrency.lockutils [req-724cc8e1-7edd-43e6-8ca2-7f2e691c8e30 req-b5a4f2b8-dafe-42ee-84c9-ec4f4c2ef9e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-060db45d-e2f9-4bf6-bcc0-c72e479bfae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.779 2 DEBUG nova.network.neutron [req-724cc8e1-7edd-43e6-8ca2-7f2e691c8e30 req-b5a4f2b8-dafe-42ee-84c9-ec4f4c2ef9e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Refreshing network info cache for port aff3a259-5908-4491-83a2-9aa0430d46e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.786 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Start _get_guest_xml network_info=[{"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.795 2 WARNING nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.807 2 DEBUG nova.virt.libvirt.host [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.808 2 DEBUG nova.virt.libvirt.host [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.814 2 DEBUG nova.virt.libvirt.host [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.814 2 DEBUG nova.virt.libvirt.host [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.815 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.816 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.817 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.817 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.818 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.818 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.819 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.819 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.820 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.820 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.821 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.821 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:23:18 compute-0 nova_compute[259627]: 2025-10-14 09:23:18.827 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:23:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/664240466' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.392 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.415 2 DEBUG nova.storage.rbd_utils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] rbd image 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.419 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:23:19 compute-0 ceph-mon[74249]: pgmap v2097: 305 pgs: 305 active+clean; 389 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 158 op/s
Oct 14 09:23:19 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/664240466' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.757 2 DEBUG nova.compute.manager [req-d08dae91-cb76-4d03-a6c6-c82f137fdef7 req-aac826a4-272e-4316-843d-e919b0ac7459 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-changed-ff2b9b74-a6fc-4774-89d2-9c010f121d65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.757 2 DEBUG nova.compute.manager [req-d08dae91-cb76-4d03-a6c6-c82f137fdef7 req-aac826a4-272e-4316-843d-e919b0ac7459 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing instance network info cache due to event network-changed-ff2b9b74-a6fc-4774-89d2-9c010f121d65. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.758 2 DEBUG oslo_concurrency.lockutils [req-d08dae91-cb76-4d03-a6c6-c82f137fdef7 req-aac826a4-272e-4316-843d-e919b0ac7459 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.758 2 DEBUG oslo_concurrency.lockutils [req-d08dae91-cb76-4d03-a6c6-c82f137fdef7 req-aac826a4-272e-4316-843d-e919b0ac7459 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.759 2 DEBUG nova.network.neutron [req-d08dae91-cb76-4d03-a6c6-c82f137fdef7 req-aac826a4-272e-4316-843d-e919b0ac7459 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing network info cache for port ff2b9b74-a6fc-4774-89d2-9c010f121d65 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:23:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:23:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4128251632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.884 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.885 2 DEBUG nova.virt.libvirt.vif [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-769140514',display_name='tempest-TestServerBasicOps-server-769140514',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-769140514',id=122,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEF69xZ4MwycAnhRnKcfi5zKJsp1M1x2Cnuyy56sKit+Vi9Xj59LaAw8ViNKigCRTmbkHi0zC7jNFwbXf3v+6JnEKBlDXNW4AxSHy39++Bl7O4v2nk74xukLS+FgnBMAYQ==',key_name='tempest-TestServerBasicOps-2077025303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9afc0bee75634a4cb284babbfba8d601',ramdisk_id='',reservation_id='r-lxozf8j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1877578104',owner_user_name='tempest-TestServerBasicOps-1877578104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:23:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c3638538fa6347dc95b6e30735bf0e83',uuid=060db45d-e2f9-4bf6-bcc0-c72e479bfae1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.886 2 DEBUG nova.network.os_vif_util [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Converting VIF {"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.887 2 DEBUG nova.network.os_vif_util [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:31:2d,bridge_name='br-int',has_traffic_filtering=True,id=aff3a259-5908-4491-83a2-9aa0430d46e0,network=Network(9ed30564-d15d-43a5-8374-94d3c4f31dec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaff3a259-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.888 2 DEBUG nova.objects.instance [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lazy-loading 'pci_devices' on Instance uuid 060db45d-e2f9-4bf6-bcc0-c72e479bfae1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.905 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:23:19 compute-0 nova_compute[259627]:   <uuid>060db45d-e2f9-4bf6-bcc0-c72e479bfae1</uuid>
Oct 14 09:23:19 compute-0 nova_compute[259627]:   <name>instance-0000007a</name>
Oct 14 09:23:19 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:23:19 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:23:19 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <nova:name>tempest-TestServerBasicOps-server-769140514</nova:name>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:23:18</nova:creationTime>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:23:19 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:23:19 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:23:19 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:23:19 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:23:19 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:23:19 compute-0 nova_compute[259627]:         <nova:user uuid="c3638538fa6347dc95b6e30735bf0e83">tempest-TestServerBasicOps-1877578104-project-member</nova:user>
Oct 14 09:23:19 compute-0 nova_compute[259627]:         <nova:project uuid="9afc0bee75634a4cb284babbfba8d601">tempest-TestServerBasicOps-1877578104</nova:project>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:23:19 compute-0 nova_compute[259627]:         <nova:port uuid="aff3a259-5908-4491-83a2-9aa0430d46e0">
Oct 14 09:23:19 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:23:19 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:23:19 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <system>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <entry name="serial">060db45d-e2f9-4bf6-bcc0-c72e479bfae1</entry>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <entry name="uuid">060db45d-e2f9-4bf6-bcc0-c72e479bfae1</entry>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     </system>
Oct 14 09:23:19 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:23:19 compute-0 nova_compute[259627]:   <os>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:   </os>
Oct 14 09:23:19 compute-0 nova_compute[259627]:   <features>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:   </features>
Oct 14 09:23:19 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:23:19 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:23:19 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk">
Oct 14 09:23:19 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       </source>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:23:19 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk.config">
Oct 14 09:23:19 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       </source>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:23:19 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:56:31:2d"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <target dev="tapaff3a259-59"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1/console.log" append="off"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <video>
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     </video>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:23:19 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:23:19 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:23:19 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:23:19 compute-0 nova_compute[259627]: </domain>
Oct 14 09:23:19 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.907 2 DEBUG nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Preparing to wait for external event network-vif-plugged-aff3a259-5908-4491-83a2-9aa0430d46e0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.907 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquiring lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.907 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.908 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.908 2 DEBUG nova.virt.libvirt.vif [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-769140514',display_name='tempest-TestServerBasicOps-server-769140514',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-769140514',id=122,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEF69xZ4MwycAnhRnKcfi5zKJsp1M1x2Cnuyy56sKit+Vi9Xj59LaAw8ViNKigCRTmbkHi0zC7jNFwbXf3v+6JnEKBlDXNW4AxSHy39++Bl7O4v2nk74xukLS+FgnBMAYQ==',key_name='tempest-TestServerBasicOps-2077025303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9afc0bee75634a4cb284babbfba8d601',ramdisk_id='',reservation_id='r-lxozf8j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1877578104',owner_user_name='tempest-TestServerBasicOps-1877578104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:23:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c3638538fa6347dc95b6e30735bf0e83',uuid=060db45d-e2f9-4bf6-bcc0-c72e479bfae1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.909 2 DEBUG nova.network.os_vif_util [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Converting VIF {"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.909 2 DEBUG nova.network.os_vif_util [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:31:2d,bridge_name='br-int',has_traffic_filtering=True,id=aff3a259-5908-4491-83a2-9aa0430d46e0,network=Network(9ed30564-d15d-43a5-8374-94d3c4f31dec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaff3a259-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.910 2 DEBUG os_vif [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:31:2d,bridge_name='br-int',has_traffic_filtering=True,id=aff3a259-5908-4491-83a2-9aa0430d46e0,network=Network(9ed30564-d15d-43a5-8374-94d3c4f31dec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaff3a259-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.911 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.912 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.915 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaff3a259-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.915 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaff3a259-59, col_values=(('external_ids', {'iface-id': 'aff3a259-5908-4491-83a2-9aa0430d46e0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:31:2d', 'vm-uuid': '060db45d-e2f9-4bf6-bcc0-c72e479bfae1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:23:19 compute-0 NetworkManager[44885]: <info>  [1760433799.9216] manager: (tapaff3a259-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/527)
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:19 compute-0 nova_compute[259627]: 2025-10-14 09:23:19.925 2 INFO os_vif [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:31:2d,bridge_name='br-int',has_traffic_filtering=True,id=aff3a259-5908-4491-83a2-9aa0430d46e0,network=Network(9ed30564-d15d-43a5-8374-94d3c4f31dec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaff3a259-59')
Oct 14 09:23:20 compute-0 nova_compute[259627]: 2025-10-14 09:23:20.015 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:23:20 compute-0 nova_compute[259627]: 2025-10-14 09:23:20.015 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:23:20 compute-0 nova_compute[259627]: 2025-10-14 09:23:20.016 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] No VIF found with MAC fa:16:3e:56:31:2d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:23:20 compute-0 nova_compute[259627]: 2025-10-14 09:23:20.016 2 INFO nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Using config drive
Oct 14 09:23:20 compute-0 nova_compute[259627]: 2025-10-14 09:23:20.040 2 DEBUG nova.storage.rbd_utils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] rbd image 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:23:20 compute-0 nova_compute[259627]: 2025-10-14 09:23:20.075 2 DEBUG nova.network.neutron [req-724cc8e1-7edd-43e6-8ca2-7f2e691c8e30 req-b5a4f2b8-dafe-42ee-84c9-ec4f4c2ef9e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Updated VIF entry in instance network info cache for port aff3a259-5908-4491-83a2-9aa0430d46e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:23:20 compute-0 nova_compute[259627]: 2025-10-14 09:23:20.075 2 DEBUG nova.network.neutron [req-724cc8e1-7edd-43e6-8ca2-7f2e691c8e30 req-b5a4f2b8-dafe-42ee-84c9-ec4f4c2ef9e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Updating instance_info_cache with network_info: [{"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:23:20 compute-0 nova_compute[259627]: 2025-10-14 09:23:20.096 2 DEBUG oslo_concurrency.lockutils [req-724cc8e1-7edd-43e6-8ca2-7f2e691c8e30 req-b5a4f2b8-dafe-42ee-84c9-ec4f4c2ef9e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-060db45d-e2f9-4bf6-bcc0-c72e479bfae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:23:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 405 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Oct 14 09:23:20 compute-0 sshd-session[380846]: Invalid user admin from 188.150.249.96 port 57384
Oct 14 09:23:20 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4128251632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:23:20 compute-0 nova_compute[259627]: 2025-10-14 09:23:20.566 2 INFO nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Creating config drive at /var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1/disk.config
Oct 14 09:23:20 compute-0 nova_compute[259627]: 2025-10-14 09:23:20.572 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8jh12e4u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:23:20 compute-0 nova_compute[259627]: 2025-10-14 09:23:20.719 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8jh12e4u" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:23:20 compute-0 nova_compute[259627]: 2025-10-14 09:23:20.744 2 DEBUG nova.storage.rbd_utils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] rbd image 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:23:20 compute-0 nova_compute[259627]: 2025-10-14 09:23:20.747 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1/disk.config 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:23:20 compute-0 sshd-session[380846]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:23:20 compute-0 sshd-session[380846]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96
Oct 14 09:23:21 compute-0 nova_compute[259627]: 2025-10-14 09:23:21.265 2 DEBUG nova.network.neutron [req-d08dae91-cb76-4d03-a6c6-c82f137fdef7 req-aac826a4-272e-4316-843d-e919b0ac7459 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updated VIF entry in instance network info cache for port ff2b9b74-a6fc-4774-89d2-9c010f121d65. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:23:21 compute-0 nova_compute[259627]: 2025-10-14 09:23:21.267 2 DEBUG nova.network.neutron [req-d08dae91-cb76-4d03-a6c6-c82f137fdef7 req-aac826a4-272e-4316-843d-e919b0ac7459 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:23:21 compute-0 nova_compute[259627]: 2025-10-14 09:23:21.297 2 DEBUG oslo_concurrency.lockutils [req-d08dae91-cb76-4d03-a6c6-c82f137fdef7 req-aac826a4-272e-4316-843d-e919b0ac7459 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:23:21 compute-0 nova_compute[259627]: 2025-10-14 09:23:21.377 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1/disk.config 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:23:21 compute-0 nova_compute[259627]: 2025-10-14 09:23:21.378 2 INFO nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Deleting local config drive /var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1/disk.config because it was imported into RBD.
Oct 14 09:23:21 compute-0 kernel: tapaff3a259-59: entered promiscuous mode
Oct 14 09:23:21 compute-0 NetworkManager[44885]: <info>  [1760433801.4389] manager: (tapaff3a259-59): new Tun device (/org/freedesktop/NetworkManager/Devices/528)
Oct 14 09:23:21 compute-0 nova_compute[259627]: 2025-10-14 09:23:21.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:21 compute-0 ovn_controller[152662]: 2025-10-14T09:23:21Z|01286|binding|INFO|Claiming lport aff3a259-5908-4491-83a2-9aa0430d46e0 for this chassis.
Oct 14 09:23:21 compute-0 ovn_controller[152662]: 2025-10-14T09:23:21Z|01287|binding|INFO|aff3a259-5908-4491-83a2-9aa0430d46e0: Claiming fa:16:3e:56:31:2d 10.100.0.14
Oct 14 09:23:21 compute-0 ovn_controller[152662]: 2025-10-14T09:23:21Z|01288|binding|INFO|Setting lport aff3a259-5908-4491-83a2-9aa0430d46e0 ovn-installed in OVS
Oct 14 09:23:21 compute-0 ovn_controller[152662]: 2025-10-14T09:23:21Z|01289|binding|INFO|Setting lport aff3a259-5908-4491-83a2-9aa0430d46e0 up in Southbound
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.461 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:31:2d 10.100.0.14'], port_security=['fa:16:3e:56:31:2d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '060db45d-e2f9-4bf6-bcc0-c72e479bfae1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ed30564-d15d-43a5-8374-94d3c4f31dec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9afc0bee75634a4cb284babbfba8d601', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2fc47b78-9d93-43fe-9a5c-3ace8d9a4e46 df77c32e-f13f-4a03-9ec7-b5592ef3d091', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b027bd6-06b6-47af-868f-e9e43d4de27f, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=aff3a259-5908-4491-83a2-9aa0430d46e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.462 162547 INFO neutron.agent.ovn.metadata.agent [-] Port aff3a259-5908-4491-83a2-9aa0430d46e0 in datapath 9ed30564-d15d-43a5-8374-94d3c4f31dec bound to our chassis
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.464 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ed30564-d15d-43a5-8374-94d3c4f31dec
Oct 14 09:23:21 compute-0 nova_compute[259627]: 2025-10-14 09:23:21.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:21 compute-0 systemd-udevd[381572]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.536 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[194359a6-923f-4864-8b69-01f0f57315eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.537 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9ed30564-d1 in ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:23:21 compute-0 systemd-machined[214636]: New machine qemu-155-instance-0000007a.
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.540 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9ed30564-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.541 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8f44f8b9-9b1b-4e11-afaf-6ef43b4dfccd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.542 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c76417d-8290-41f5-a877-e928f6f3b1ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:21 compute-0 ceph-mon[74249]: pgmap v2098: 305 pgs: 305 active+clean; 405 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Oct 14 09:23:21 compute-0 NetworkManager[44885]: <info>  [1760433801.5508] device (tapaff3a259-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:23:21 compute-0 systemd[1]: Started Virtual Machine qemu-155-instance-0000007a.
Oct 14 09:23:21 compute-0 NetworkManager[44885]: <info>  [1760433801.5519] device (tapaff3a259-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.560 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6ec1eb-0649-4960-a983-627320e5ba65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.588 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[efc5d4dd-8717-4ea8-9f5f-a458a85fc5e5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.626 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aca55e03-af00-4121-b22f-755b3472910e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:21 compute-0 systemd-udevd[381577]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:23:21 compute-0 NetworkManager[44885]: <info>  [1760433801.6366] manager: (tap9ed30564-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/529)
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.634 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[efdfb0bf-603b-4a89-a50c-932287c953b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.676 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[622e7227-b2ce-47d0-8d2b-eac557caf0e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.679 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ffab4722-4837-4bd2-84e9-2e0b5791fd37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:21 compute-0 NetworkManager[44885]: <info>  [1760433801.7105] device (tap9ed30564-d0): carrier: link connected
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.717 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[95063acf-cc0b-4c2c-add9-5862f26c2d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.756 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[661cfa59-7dbd-4e2b-9eab-6187ae579ffd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ed30564-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:96:eb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758047, 'reachable_time': 34673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381606, 'error': None, 'target': 'ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.776 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[473894f3-0e91-481c-a7c8-0d96f917c5ef]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:96eb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758047, 'tstamp': 758047}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381607, 'error': None, 'target': 'ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.802 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2e685a46-0a4b-4b0b-88ed-1b0668effc78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ed30564-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:96:eb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758047, 'reachable_time': 34673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 381608, 'error': None, 'target': 'ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.845 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[88e7ef51-1da6-4c55-839d-5297625b93a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:21 compute-0 nova_compute[259627]: 2025-10-14 09:23:21.859 2 DEBUG nova.compute.manager [req-7d37a4c6-dbbf-4d83-93b4-ab8cdb3e93e4 req-380d2531-9162-4ef2-8be6-6a7b9e987bf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received event network-vif-plugged-aff3a259-5908-4491-83a2-9aa0430d46e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:23:21 compute-0 nova_compute[259627]: 2025-10-14 09:23:21.860 2 DEBUG oslo_concurrency.lockutils [req-7d37a4c6-dbbf-4d83-93b4-ab8cdb3e93e4 req-380d2531-9162-4ef2-8be6-6a7b9e987bf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:21 compute-0 nova_compute[259627]: 2025-10-14 09:23:21.861 2 DEBUG oslo_concurrency.lockutils [req-7d37a4c6-dbbf-4d83-93b4-ab8cdb3e93e4 req-380d2531-9162-4ef2-8be6-6a7b9e987bf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:21 compute-0 nova_compute[259627]: 2025-10-14 09:23:21.861 2 DEBUG oslo_concurrency.lockutils [req-7d37a4c6-dbbf-4d83-93b4-ab8cdb3e93e4 req-380d2531-9162-4ef2-8be6-6a7b9e987bf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:21 compute-0 nova_compute[259627]: 2025-10-14 09:23:21.861 2 DEBUG nova.compute.manager [req-7d37a4c6-dbbf-4d83-93b4-ab8cdb3e93e4 req-380d2531-9162-4ef2-8be6-6a7b9e987bf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Processing event network-vif-plugged-aff3a259-5908-4491-83a2-9aa0430d46e0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.928 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0e04dfcd-146b-465c-b392-aea58aa3629b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.930 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ed30564-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.930 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.931 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ed30564-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:21 compute-0 NetworkManager[44885]: <info>  [1760433801.9341] manager: (tap9ed30564-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/530)
Oct 14 09:23:21 compute-0 kernel: tap9ed30564-d0: entered promiscuous mode
Oct 14 09:23:21 compute-0 nova_compute[259627]: 2025-10-14 09:23:21.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.943 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ed30564-d0, col_values=(('external_ids', {'iface-id': 'aa7df6c9-2599-4ac0-b318-2cea87a1558f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:21 compute-0 ovn_controller[152662]: 2025-10-14T09:23:21Z|01290|binding|INFO|Releasing lport aa7df6c9-2599-4ac0-b318-2cea87a1558f from this chassis (sb_readonly=0)
Oct 14 09:23:21 compute-0 nova_compute[259627]: 2025-10-14 09:23:21.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:21 compute-0 nova_compute[259627]: 2025-10-14 09:23:21.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.969 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9ed30564-d15d-43a5-8374-94d3c4f31dec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9ed30564-d15d-43a5-8374-94d3c4f31dec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.970 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d249fc-fd90-4bba-adbf-f7270e7f574c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.971 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-9ed30564-d15d-43a5-8374-94d3c4f31dec
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/9ed30564-d15d-43a5-8374-94d3c4f31dec.pid.haproxy
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 9ed30564-d15d-43a5-8374-94d3c4f31dec
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:23:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.974 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec', 'env', 'PROCESS_TAG=haproxy-9ed30564-d15d-43a5-8374-94d3c4f31dec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9ed30564-d15d-43a5-8374-94d3c4f31dec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:23:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 405 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 152 op/s
Oct 14 09:23:22 compute-0 podman[381681]: 2025-10-14 09:23:22.408256904 +0000 UTC m=+0.050775956 container create 09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 09:23:22 compute-0 systemd[1]: Started libpod-conmon-09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb.scope.
Oct 14 09:23:22 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:23:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d50de940caad5298fe95e1dc9e7763caa796f2db746a3b136ae2e99fd99b9d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:23:22 compute-0 podman[381681]: 2025-10-14 09:23:22.380822216 +0000 UTC m=+0.023341288 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:23:22 compute-0 podman[381681]: 2025-10-14 09:23:22.483934764 +0000 UTC m=+0.126453836 container init 09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:23:22 compute-0 podman[381681]: 2025-10-14 09:23:22.489991593 +0000 UTC m=+0.132510645 container start 09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 09:23:22 compute-0 neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec[381697]: [NOTICE]   (381701) : New worker (381703) forked
Oct 14 09:23:22 compute-0 neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec[381697]: [NOTICE]   (381701) : Loading success.
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.689 2 DEBUG nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.690 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433802.6889188, 060db45d-e2f9-4bf6-bcc0-c72e479bfae1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.690 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] VM Started (Lifecycle Event)
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.694 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.697 2 INFO nova.virt.libvirt.driver [-] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Instance spawned successfully.
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.698 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.730 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.736 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.739 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.740 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.740 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.741 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.741 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.742 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.779 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.779 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433802.6904864, 060db45d-e2f9-4bf6-bcc0-c72e479bfae1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.779 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] VM Paused (Lifecycle Event)
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.807 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.809 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433802.694312, 060db45d-e2f9-4bf6-bcc0-c72e479bfae1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.810 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] VM Resumed (Lifecycle Event)
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.820 2 INFO nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Took 8.04 seconds to spawn the instance on the hypervisor.
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.821 2 DEBUG nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.847 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.849 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.872 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.884 2 INFO nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Took 9.10 seconds to build instance.
Oct 14 09:23:22 compute-0 nova_compute[259627]: 2025-10-14 09:23:22.899 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:23:23 compute-0 sshd-session[380846]: Failed password for invalid user admin from 188.150.249.96 port 57384 ssh2
Oct 14 09:23:23 compute-0 ceph-mon[74249]: pgmap v2099: 305 pgs: 305 active+clean; 405 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 152 op/s
Oct 14 09:23:24 compute-0 nova_compute[259627]: 2025-10-14 09:23:24.002 2 DEBUG nova.compute.manager [req-84724c37-1d99-4f15-b879-dd6feb54b4ff req-3231104e-f7c0-492a-b54a-550a6d1788be 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received event network-vif-plugged-aff3a259-5908-4491-83a2-9aa0430d46e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:23:24 compute-0 nova_compute[259627]: 2025-10-14 09:23:24.003 2 DEBUG oslo_concurrency.lockutils [req-84724c37-1d99-4f15-b879-dd6feb54b4ff req-3231104e-f7c0-492a-b54a-550a6d1788be 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:24 compute-0 nova_compute[259627]: 2025-10-14 09:23:24.005 2 DEBUG oslo_concurrency.lockutils [req-84724c37-1d99-4f15-b879-dd6feb54b4ff req-3231104e-f7c0-492a-b54a-550a6d1788be 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:24 compute-0 nova_compute[259627]: 2025-10-14 09:23:24.005 2 DEBUG oslo_concurrency.lockutils [req-84724c37-1d99-4f15-b879-dd6feb54b4ff req-3231104e-f7c0-492a-b54a-550a6d1788be 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:24 compute-0 nova_compute[259627]: 2025-10-14 09:23:24.006 2 DEBUG nova.compute.manager [req-84724c37-1d99-4f15-b879-dd6feb54b4ff req-3231104e-f7c0-492a-b54a-550a6d1788be 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] No waiting events found dispatching network-vif-plugged-aff3a259-5908-4491-83a2-9aa0430d46e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:23:24 compute-0 nova_compute[259627]: 2025-10-14 09:23:24.006 2 WARNING nova.compute.manager [req-84724c37-1d99-4f15-b879-dd6feb54b4ff req-3231104e-f7c0-492a-b54a-550a6d1788be 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received unexpected event network-vif-plugged-aff3a259-5908-4491-83a2-9aa0430d46e0 for instance with vm_state active and task_state None.
Oct 14 09:23:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 405 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 964 KiB/s rd, 3.2 MiB/s wr, 105 op/s
Oct 14 09:23:24 compute-0 ovn_controller[152662]: 2025-10-14T09:23:24Z|00137|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.4
Oct 14 09:23:24 compute-0 ovn_controller[152662]: 2025-10-14T09:23:24Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:46:4e:45 10.100.0.4
Oct 14 09:23:24 compute-0 nova_compute[259627]: 2025-10-14 09:23:24.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:24 compute-0 nova_compute[259627]: 2025-10-14 09:23:24.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:25 compute-0 sshd-session[380846]: Connection closed by invalid user admin 188.150.249.96 port 57384 [preauth]
Oct 14 09:23:25 compute-0 ceph-mon[74249]: pgmap v2100: 305 pgs: 305 active+clean; 405 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 964 KiB/s rd, 3.2 MiB/s wr, 105 op/s
Oct 14 09:23:25 compute-0 nova_compute[259627]: 2025-10-14 09:23:25.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:23:25 compute-0 nova_compute[259627]: 2025-10-14 09:23:25.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 09:23:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 419 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 231 op/s
Oct 14 09:23:26 compute-0 nova_compute[259627]: 2025-10-14 09:23:26.383 2 DEBUG nova.compute.manager [req-7659ded8-f435-49a9-9612-a202b317f26e req-0f04b7db-582e-4f8b-bbc1-2ebbef7952c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received event network-changed-aff3a259-5908-4491-83a2-9aa0430d46e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:23:26 compute-0 nova_compute[259627]: 2025-10-14 09:23:26.384 2 DEBUG nova.compute.manager [req-7659ded8-f435-49a9-9612-a202b317f26e req-0f04b7db-582e-4f8b-bbc1-2ebbef7952c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Refreshing instance network info cache due to event network-changed-aff3a259-5908-4491-83a2-9aa0430d46e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:23:26 compute-0 nova_compute[259627]: 2025-10-14 09:23:26.384 2 DEBUG oslo_concurrency.lockutils [req-7659ded8-f435-49a9-9612-a202b317f26e req-0f04b7db-582e-4f8b-bbc1-2ebbef7952c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-060db45d-e2f9-4bf6-bcc0-c72e479bfae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:23:26 compute-0 nova_compute[259627]: 2025-10-14 09:23:26.385 2 DEBUG oslo_concurrency.lockutils [req-7659ded8-f435-49a9-9612-a202b317f26e req-0f04b7db-582e-4f8b-bbc1-2ebbef7952c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-060db45d-e2f9-4bf6-bcc0-c72e479bfae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:23:26 compute-0 nova_compute[259627]: 2025-10-14 09:23:26.385 2 DEBUG nova.network.neutron [req-7659ded8-f435-49a9-9612-a202b317f26e req-0f04b7db-582e-4f8b-bbc1-2ebbef7952c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Refreshing network info cache for port aff3a259-5908-4491-83a2-9aa0430d46e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:23:27 compute-0 ceph-mon[74249]: pgmap v2101: 305 pgs: 305 active+clean; 419 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 231 op/s
Oct 14 09:23:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:23:28 compute-0 nova_compute[259627]: 2025-10-14 09:23:28.098 2 DEBUG nova.network.neutron [req-7659ded8-f435-49a9-9612-a202b317f26e req-0f04b7db-582e-4f8b-bbc1-2ebbef7952c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Updated VIF entry in instance network info cache for port aff3a259-5908-4491-83a2-9aa0430d46e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:23:28 compute-0 nova_compute[259627]: 2025-10-14 09:23:28.098 2 DEBUG nova.network.neutron [req-7659ded8-f435-49a9-9612-a202b317f26e req-0f04b7db-582e-4f8b-bbc1-2ebbef7952c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Updating instance_info_cache with network_info: [{"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:23:28 compute-0 nova_compute[259627]: 2025-10-14 09:23:28.189 2 DEBUG oslo_concurrency.lockutils [req-7659ded8-f435-49a9-9612-a202b317f26e req-0f04b7db-582e-4f8b-bbc1-2ebbef7952c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-060db45d-e2f9-4bf6-bcc0-c72e479bfae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:23:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 419 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.2 MiB/s wr, 136 op/s
Oct 14 09:23:29 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:23:29 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 36K writes, 145K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.04 MB/s
                                           Cumulative WAL: 36K writes, 13K syncs, 2.77 writes per sync, written: 0.14 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7447 writes, 29K keys, 7447 commit groups, 1.0 writes per commit group, ingest: 33.01 MB, 0.06 MB/s
                                           Interval WAL: 7447 writes, 2948 syncs, 2.53 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:23:29 compute-0 ovn_controller[152662]: 2025-10-14T09:23:29Z|00139|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.4
Oct 14 09:23:29 compute-0 ovn_controller[152662]: 2025-10-14T09:23:29Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:46:4e:45 10.100.0.4
Oct 14 09:23:29 compute-0 ovn_controller[152662]: 2025-10-14T09:23:29Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:46:4e:45 10.100.0.4
Oct 14 09:23:29 compute-0 ovn_controller[152662]: 2025-10-14T09:23:29Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:46:4e:45 10.100.0.4
Oct 14 09:23:29 compute-0 nova_compute[259627]: 2025-10-14 09:23:29.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:29 compute-0 ceph-mon[74249]: pgmap v2102: 305 pgs: 305 active+clean; 419 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.2 MiB/s wr, 136 op/s
Oct 14 09:23:29 compute-0 nova_compute[259627]: 2025-10-14 09:23:29.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 419 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.2 MiB/s wr, 137 op/s
Oct 14 09:23:31 compute-0 ceph-mon[74249]: pgmap v2103: 305 pgs: 305 active+clean; 419 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.2 MiB/s wr, 137 op/s
Oct 14 09:23:31 compute-0 podman[381715]: 2025-10-14 09:23:31.68396977 +0000 UTC m=+0.081652508 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 14 09:23:31 compute-0 podman[381714]: 2025-10-14 09:23:31.709000689 +0000 UTC m=+0.101769076 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible, container_name=multipathd)
Oct 14 09:23:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 423 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 575 KiB/s wr, 129 op/s
Oct 14 09:23:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:23:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:23:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:23:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:23:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:23:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:23:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:23:32
Oct 14 09:23:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:23:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:23:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.control', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'backups']
Oct 14 09:23:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:23:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:23:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:23:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:23:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:23:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:23:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:23:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:23:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:23:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:23:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:23:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:23:33 compute-0 ceph-mon[74249]: pgmap v2104: 305 pgs: 305 active+clean; 423 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 575 KiB/s wr, 129 op/s
Oct 14 09:23:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 423 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 573 KiB/s wr, 129 op/s
Oct 14 09:23:34 compute-0 nova_compute[259627]: 2025-10-14 09:23:34.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:34 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:23:34 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 37K writes, 145K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.04 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.76 writes per sync, written: 0.14 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7870 writes, 29K keys, 7870 commit groups, 1.0 writes per commit group, ingest: 34.14 MB, 0.06 MB/s
                                           Interval WAL: 7870 writes, 3099 syncs, 2.54 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:23:34 compute-0 nova_compute[259627]: 2025-10-14 09:23:34.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:35 compute-0 ceph-mon[74249]: pgmap v2105: 305 pgs: 305 active+clean; 423 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 573 KiB/s wr, 129 op/s
Oct 14 09:23:35 compute-0 ovn_controller[152662]: 2025-10-14T09:23:35Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:56:31:2d 10.100.0.14
Oct 14 09:23:35 compute-0 ovn_controller[152662]: 2025-10-14T09:23:35Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:56:31:2d 10.100.0.14
Oct 14 09:23:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 451 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.6 MiB/s wr, 180 op/s
Oct 14 09:23:36 compute-0 nova_compute[259627]: 2025-10-14 09:23:36.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:23:36 compute-0 nova_compute[259627]: 2025-10-14 09:23:36.994 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.039 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.040 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.041 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.041 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.042 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:23:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:23:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1951794241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.510 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.611 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.611 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:23:37 compute-0 ceph-mon[74249]: pgmap v2106: 305 pgs: 305 active+clean; 451 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.6 MiB/s wr, 180 op/s
Oct 14 09:23:37 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1951794241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.618 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.618 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.626 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.626 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.632 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.632 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.638 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.638 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:23:37 compute-0 sshd-session[381712]: Invalid user guest from 188.150.249.96 port 33856
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.892 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.893 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2754MB free_disk=59.8003044128418GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.894 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:37 compute-0 nova_compute[259627]: 2025-10-14 09:23:37.894 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:23:38 compute-0 nova_compute[259627]: 2025-10-14 09:23:38.063 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 50c83173-31e3-4f7a-8836-26e52affd0f2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:23:38 compute-0 nova_compute[259627]: 2025-10-14 09:23:38.064 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 6810b29b-088f-441b-8a6a-02eaafada0c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:23:38 compute-0 nova_compute[259627]: 2025-10-14 09:23:38.064 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance ef3d76bf-9763-4405-8e48-c2c4405a2a3b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:23:38 compute-0 nova_compute[259627]: 2025-10-14 09:23:38.064 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:23:38 compute-0 nova_compute[259627]: 2025-10-14 09:23:38.064 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 060db45d-e2f9-4bf6-bcc0-c72e479bfae1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:23:38 compute-0 nova_compute[259627]: 2025-10-14 09:23:38.064 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:23:38 compute-0 nova_compute[259627]: 2025-10-14 09:23:38.064 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:23:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 451 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Oct 14 09:23:38 compute-0 nova_compute[259627]: 2025-10-14 09:23:38.314 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:23:38 compute-0 sshd-session[381712]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:23:38 compute-0 sshd-session[381712]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=188.150.249.96
Oct 14 09:23:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:23:38 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099643206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:23:38 compute-0 nova_compute[259627]: 2025-10-14 09:23:38.740 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:23:38 compute-0 nova_compute[259627]: 2025-10-14 09:23:38.749 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:23:38 compute-0 nova_compute[259627]: 2025-10-14 09:23:38.772 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:23:38 compute-0 nova_compute[259627]: 2025-10-14 09:23:38.813 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:23:38 compute-0 nova_compute[259627]: 2025-10-14 09:23:38.814 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:39.213 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:e2:5d 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-17206a37-8263-4403-aaa6-3b6fe9255608', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17206a37-8263-4403-aaa6-3b6fe9255608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f46887bb96d483788cde4e37af9f799', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf824143-865f-4777-be8c-e192803f85fe, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=36d9c2d2-c941-4fd1-a63a-9e6fee3a0d20) old=Port_Binding(mac=['fa:16:3e:63:e2:5d 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-17206a37-8263-4403-aaa6-3b6fe9255608', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17206a37-8263-4403-aaa6-3b6fe9255608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f46887bb96d483788cde4e37af9f799', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:23:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:39.215 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 36d9c2d2-c941-4fd1-a63a-9e6fee3a0d20 in datapath 17206a37-8263-4403-aaa6-3b6fe9255608 updated
Oct 14 09:23:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:39.219 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 17206a37-8263-4403-aaa6-3b6fe9255608, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:23:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:39.220 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9d4803fb-2e6c-4203-8399-61738b686425]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:39 compute-0 nova_compute[259627]: 2025-10-14 09:23:39.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:39 compute-0 ceph-mon[74249]: pgmap v2107: 305 pgs: 305 active+clean; 451 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Oct 14 09:23:39 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1099643206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:23:39 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:23:39 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 31K writes, 121K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.03 MB/s
                                           Cumulative WAL: 31K writes, 11K syncs, 2.77 writes per sync, written: 0.11 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5834 writes, 23K keys, 5834 commit groups, 1.0 writes per commit group, ingest: 24.66 MB, 0.04 MB/s
                                           Interval WAL: 5834 writes, 2337 syncs, 2.50 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:23:39 compute-0 nova_compute[259627]: 2025-10-14 09:23:39.798 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:23:39 compute-0 nova_compute[259627]: 2025-10-14 09:23:39.798 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:23:39 compute-0 nova_compute[259627]: 2025-10-14 09:23:39.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:40 compute-0 sshd-session[381712]: Failed password for invalid user guest from 188.150.249.96 port 33856 ssh2
Oct 14 09:23:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 453 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 2.2 MiB/s wr, 59 op/s
Oct 14 09:23:40 compute-0 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 09:23:40 compute-0 nova_compute[259627]: 2025-10-14 09:23:40.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:23:41 compute-0 sshd-session[381712]: Connection closed by invalid user guest 188.150.249.96 port 33856 [preauth]
Oct 14 09:23:41 compute-0 ceph-mon[74249]: pgmap v2108: 305 pgs: 305 active+clean; 453 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 2.2 MiB/s wr, 59 op/s
Oct 14 09:23:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 455 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 426 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Oct 14 09:23:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:23:42 compute-0 nova_compute[259627]: 2025-10-14 09:23:42.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:23:42 compute-0 nova_compute[259627]: 2025-10-14 09:23:42.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031436066411082895 of space, bias 1.0, pg target 0.9430819923324868 quantized to 32 (current 32)
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001424050310933125 of space, bias 1.0, pg target 0.4272150932799375 quantized to 32 (current 32)
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:23:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:23:43 compute-0 ceph-mon[74249]: pgmap v2109: 305 pgs: 305 active+clean; 455 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 426 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Oct 14 09:23:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:44.004 162744 DEBUG eventlet.wsgi.server [-] (162744) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 14 09:23:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:44.007 162744 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Oct 14 09:23:44 compute-0 ovn_metadata_agent[162542]: Accept: */*
Oct 14 09:23:44 compute-0 ovn_metadata_agent[162542]: Connection: close
Oct 14 09:23:44 compute-0 ovn_metadata_agent[162542]: Content-Type: text/plain
Oct 14 09:23:44 compute-0 ovn_metadata_agent[162542]: Host: 169.254.169.254
Oct 14 09:23:44 compute-0 ovn_metadata_agent[162542]: User-Agent: curl/7.84.0
Oct 14 09:23:44 compute-0 ovn_metadata_agent[162542]: X-Forwarded-For: 10.100.0.14
Oct 14 09:23:44 compute-0 ovn_metadata_agent[162542]: X-Ovn-Network-Id: 9ed30564-d15d-43a5-8374-94d3c4f31dec __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 14 09:23:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 455 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 14 09:23:44 compute-0 nova_compute[259627]: 2025-10-14 09:23:44.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:44 compute-0 nova_compute[259627]: 2025-10-14 09:23:44.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:44 compute-0 nova_compute[259627]: 2025-10-14 09:23:44.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:23:44 compute-0 nova_compute[259627]: 2025-10-14 09:23:44.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:23:45 compute-0 nova_compute[259627]: 2025-10-14 09:23:45.157 2 DEBUG nova.compute.manager [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:23:45 compute-0 nova_compute[259627]: 2025-10-14 09:23:45.183 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:23:45 compute-0 nova_compute[259627]: 2025-10-14 09:23:45.183 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:23:45 compute-0 nova_compute[259627]: 2025-10-14 09:23:45.183 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:23:45 compute-0 nova_compute[259627]: 2025-10-14 09:23:45.221 2 INFO nova.compute.manager [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] instance snapshotting
Oct 14 09:23:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:45.274 162744 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 14 09:23:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:45.275 162744 INFO eventlet.wsgi.server [-] 10.100.0.14,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.2680297
Oct 14 09:23:45 compute-0 haproxy-metadata-proxy-9ed30564-d15d-43a5-8374-94d3c4f31dec[381703]: 10.100.0.14:60340 [14/Oct/2025:09:23:44.001] listener listener/metadata 0/0/0/1273/1273 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Oct 14 09:23:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:45.411 162744 DEBUG eventlet.wsgi.server [-] (162744) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 14 09:23:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:45.413 162744 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Oct 14 09:23:45 compute-0 ovn_metadata_agent[162542]: Accept: */*
Oct 14 09:23:45 compute-0 ovn_metadata_agent[162542]: Connection: close
Oct 14 09:23:45 compute-0 ovn_metadata_agent[162542]: Content-Length: 100
Oct 14 09:23:45 compute-0 ovn_metadata_agent[162542]: Content-Type: application/x-www-form-urlencoded
Oct 14 09:23:45 compute-0 ovn_metadata_agent[162542]: Host: 169.254.169.254
Oct 14 09:23:45 compute-0 ovn_metadata_agent[162542]: User-Agent: curl/7.84.0
Oct 14 09:23:45 compute-0 ovn_metadata_agent[162542]: X-Forwarded-For: 10.100.0.14
Oct 14 09:23:45 compute-0 ovn_metadata_agent[162542]: X-Ovn-Network-Id: 9ed30564-d15d-43a5-8374-94d3c4f31dec
Oct 14 09:23:45 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:23:45 compute-0 ovn_metadata_agent[162542]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 14 09:23:45 compute-0 nova_compute[259627]: 2025-10-14 09:23:45.476 2 INFO nova.virt.libvirt.driver [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Beginning live snapshot process
Oct 14 09:23:45 compute-0 nova_compute[259627]: 2025-10-14 09:23:45.646 2 DEBUG nova.storage.rbd_utils [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] creating snapshot(9133787a58494c31ada8906404467086) on rbd image(8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:23:45 compute-0 ceph-mon[74249]: pgmap v2110: 305 pgs: 305 active+clean; 455 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 14 09:23:45 compute-0 haproxy-metadata-proxy-9ed30564-d15d-43a5-8374-94d3c4f31dec[381703]: 10.100.0.14:60356 [14/Oct/2025:09:23:45.409] listener listener/metadata 0/0/0/273/273 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Oct 14 09:23:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:45.682 162744 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 14 09:23:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:45.682 162744 INFO eventlet.wsgi.server [-] 10.100.0.14,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2693579
Oct 14 09:23:45 compute-0 podman[381815]: 2025-10-14 09:23:45.749709957 +0000 UTC m=+0.138968165 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:23:45 compute-0 podman[381799]: 2025-10-14 09:23:45.766868461 +0000 UTC m=+0.165353517 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 14 09:23:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 456 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 2.2 MiB/s wr, 74 op/s
Oct 14 09:23:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Oct 14 09:23:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Oct 14 09:23:46 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Oct 14 09:23:46 compute-0 nova_compute[259627]: 2025-10-14 09:23:46.717 2 DEBUG nova.storage.rbd_utils [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] cloning vms/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk@9133787a58494c31ada8906404467086 to images/68eb8740-bd8e-4270-a8b8-e37a3ed71f7a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 09:23:46 compute-0 nova_compute[259627]: 2025-10-14 09:23:46.824 2 DEBUG nova.storage.rbd_utils [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] flattening images/68eb8740-bd8e-4270-a8b8-e37a3ed71f7a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 09:23:46 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 14 09:23:47 compute-0 nova_compute[259627]: 2025-10-14 09:23:47.365 2 DEBUG nova.storage.rbd_utils [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] removing snapshot(9133787a58494c31ada8906404467086) on rbd image(8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 09:23:47 compute-0 nova_compute[259627]: 2025-10-14 09:23:47.564 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updating instance_info_cache with network_info: [{"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:23:47 compute-0 nova_compute[259627]: 2025-10-14 09:23:47.581 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:23:47 compute-0 nova_compute[259627]: 2025-10-14 09:23:47.582 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:23:47 compute-0 ceph-mon[74249]: pgmap v2111: 305 pgs: 305 active+clean; 456 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 2.2 MiB/s wr, 74 op/s
Oct 14 09:23:47 compute-0 ceph-mon[74249]: osdmap e285: 3 total, 3 up, 3 in
Oct 14 09:23:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Oct 14 09:23:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Oct 14 09:23:47 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Oct 14 09:23:47 compute-0 nova_compute[259627]: 2025-10-14 09:23:47.717 2 DEBUG nova.storage.rbd_utils [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] creating snapshot(snap) on rbd image(68eb8740-bd8e-4270-a8b8-e37a3ed71f7a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 09:23:47 compute-0 nova_compute[259627]: 2025-10-14 09:23:47.802 2 DEBUG oslo_concurrency.lockutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquiring lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:47 compute-0 nova_compute[259627]: 2025-10-14 09:23:47.803 2 DEBUG oslo_concurrency.lockutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:47 compute-0 nova_compute[259627]: 2025-10-14 09:23:47.804 2 DEBUG oslo_concurrency.lockutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquiring lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:47 compute-0 nova_compute[259627]: 2025-10-14 09:23:47.804 2 DEBUG oslo_concurrency.lockutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:47 compute-0 nova_compute[259627]: 2025-10-14 09:23:47.804 2 DEBUG oslo_concurrency.lockutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:47 compute-0 nova_compute[259627]: 2025-10-14 09:23:47.806 2 INFO nova.compute.manager [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Terminating instance
Oct 14 09:23:47 compute-0 nova_compute[259627]: 2025-10-14 09:23:47.808 2 DEBUG nova.compute.manager [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:23:47 compute-0 kernel: tapaff3a259-59 (unregistering): left promiscuous mode
Oct 14 09:23:47 compute-0 NetworkManager[44885]: <info>  [1760433827.8721] device (tapaff3a259-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:23:47 compute-0 nova_compute[259627]: 2025-10-14 09:23:47.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:47 compute-0 ovn_controller[152662]: 2025-10-14T09:23:47Z|01291|binding|INFO|Releasing lport aff3a259-5908-4491-83a2-9aa0430d46e0 from this chassis (sb_readonly=0)
Oct 14 09:23:47 compute-0 ovn_controller[152662]: 2025-10-14T09:23:47Z|01292|binding|INFO|Setting lport aff3a259-5908-4491-83a2-9aa0430d46e0 down in Southbound
Oct 14 09:23:47 compute-0 ovn_controller[152662]: 2025-10-14T09:23:47Z|01293|binding|INFO|Removing iface tapaff3a259-59 ovn-installed in OVS
Oct 14 09:23:47 compute-0 nova_compute[259627]: 2025-10-14 09:23:47.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:47.938 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:31:2d 10.100.0.14'], port_security=['fa:16:3e:56:31:2d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '060db45d-e2f9-4bf6-bcc0-c72e479bfae1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ed30564-d15d-43a5-8374-94d3c4f31dec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9afc0bee75634a4cb284babbfba8d601', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2fc47b78-9d93-43fe-9a5c-3ace8d9a4e46 df77c32e-f13f-4a03-9ec7-b5592ef3d091', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b027bd6-06b6-47af-868f-e9e43d4de27f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=aff3a259-5908-4491-83a2-9aa0430d46e0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:23:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:47.940 162547 INFO neutron.agent.ovn.metadata.agent [-] Port aff3a259-5908-4491-83a2-9aa0430d46e0 in datapath 9ed30564-d15d-43a5-8374-94d3c4f31dec unbound from our chassis
Oct 14 09:23:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:47.943 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ed30564-d15d-43a5-8374-94d3c4f31dec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:23:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:47.948 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1c090c8a-928d-4a4e-b8d2-03f6a6101394]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:47.950 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec namespace which is not needed anymore
Oct 14 09:23:47 compute-0 nova_compute[259627]: 2025-10-14 09:23:47.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:23:47 compute-0 nova_compute[259627]: 2025-10-14 09:23:47.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:23:47 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Oct 14 09:23:47 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007a.scope: Consumed 14.718s CPU time.
Oct 14 09:23:48 compute-0 systemd-machined[214636]: Machine qemu-155-instance-0000007a terminated.
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.045 2 INFO nova.virt.libvirt.driver [-] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Instance destroyed successfully.
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.045 2 DEBUG nova.objects.instance [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lazy-loading 'resources' on Instance uuid 060db45d-e2f9-4bf6-bcc0-c72e479bfae1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.061 2 DEBUG nova.virt.libvirt.vif [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-769140514',display_name='tempest-TestServerBasicOps-server-769140514',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-769140514',id=122,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEF69xZ4MwycAnhRnKcfi5zKJsp1M1x2Cnuyy56sKit+Vi9Xj59LaAw8ViNKigCRTmbkHi0zC7jNFwbXf3v+6JnEKBlDXNW4AxSHy39++Bl7O4v2nk74xukLS+FgnBMAYQ==',key_name='tempest-TestServerBasicOps-2077025303',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:23:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9afc0bee75634a4cb284babbfba8d601',ramdisk_id='',reservation_id='r-lxozf8j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1877578104',owner_user_name='tempest-TestServerBasicOps-1877578104-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:23:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c3638538fa6347dc95b6e30735bf0e83',uuid=060db45d-e2f9-4bf6-bcc0-c72e479bfae1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.062 2 DEBUG nova.network.os_vif_util [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Converting VIF {"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.063 2 DEBUG nova.network.os_vif_util [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:31:2d,bridge_name='br-int',has_traffic_filtering=True,id=aff3a259-5908-4491-83a2-9aa0430d46e0,network=Network(9ed30564-d15d-43a5-8374-94d3c4f31dec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaff3a259-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.064 2 DEBUG os_vif [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:31:2d,bridge_name='br-int',has_traffic_filtering=True,id=aff3a259-5908-4491-83a2-9aa0430d46e0,network=Network(9ed30564-d15d-43a5-8374-94d3c4f31dec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaff3a259-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.066 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaff3a259-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.074 2 INFO os_vif [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:31:2d,bridge_name='br-int',has_traffic_filtering=True,id=aff3a259-5908-4491-83a2-9aa0430d46e0,network=Network(9ed30564-d15d-43a5-8374-94d3c4f31dec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaff3a259-59')
Oct 14 09:23:48 compute-0 neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec[381697]: [NOTICE]   (381701) : haproxy version is 2.8.14-c23fe91
Oct 14 09:23:48 compute-0 neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec[381697]: [NOTICE]   (381701) : path to executable is /usr/sbin/haproxy
Oct 14 09:23:48 compute-0 neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec[381697]: [WARNING]  (381701) : Exiting Master process...
Oct 14 09:23:48 compute-0 neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec[381697]: [WARNING]  (381701) : Exiting Master process...
Oct 14 09:23:48 compute-0 neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec[381697]: [ALERT]    (381701) : Current worker (381703) exited with code 143 (Terminated)
Oct 14 09:23:48 compute-0 neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec[381697]: [WARNING]  (381701) : All workers exited. Exiting... (0)
Oct 14 09:23:48 compute-0 systemd[1]: libpod-09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb.scope: Deactivated successfully.
Oct 14 09:23:48 compute-0 podman[382017]: 2025-10-14 09:23:48.129821349 +0000 UTC m=+0.054510268 container died 09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:23:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb-userdata-shm.mount: Deactivated successfully.
Oct 14 09:23:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-44d50de940caad5298fe95e1dc9e7763caa796f2db746a3b136ae2e99fd99b9d-merged.mount: Deactivated successfully.
Oct 14 09:23:48 compute-0 podman[382017]: 2025-10-14 09:23:48.182189633 +0000 UTC m=+0.106878522 container cleanup 09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:23:48 compute-0 systemd[1]: libpod-conmon-09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb.scope: Deactivated successfully.
Oct 14 09:23:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 456 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 75 KiB/s wr, 27 op/s
Oct 14 09:23:48 compute-0 podman[382065]: 2025-10-14 09:23:48.261194055 +0000 UTC m=+0.052084898 container remove 09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:23:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:48.267 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2367df-ef21-4eb3-a1bf-6b5990de479a]: (4, ('Tue Oct 14 09:23:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec (09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb)\n09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb\nTue Oct 14 09:23:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec (09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb)\n09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:48.268 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2ebcf9-8f33-40d1-8122-d5c503d65e44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:48.269 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ed30564-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:48 compute-0 kernel: tap9ed30564-d0: left promiscuous mode
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:48.301 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc27c73-548b-4f4c-af01-8be6667365d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:48.325 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8713bd8d-8145-4842-95b9-e59f7299802d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:48.326 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dffb64f8-447e-465e-8b21-0c3304c4d8c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:48.344 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f0ca9ee2-8243-4914-abd1-df82a9c37d53]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758038, 'reachable_time': 25287, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382081, 'error': None, 'target': 'ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:48.349 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:23:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:48.349 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[042ceeb5-52bd-4d7d-bf5f-1183acc2c1f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d9ed30564\x2dd15d\x2d43a5\x2d8374\x2d94d3c4f31dec.mount: Deactivated successfully.
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.478 2 INFO nova.virt.libvirt.driver [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Deleting instance files /var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1_del
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.480 2 INFO nova.virt.libvirt.driver [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Deletion of /var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1_del complete
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.542 2 INFO nova.compute.manager [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.542 2 DEBUG oslo.service.loopingcall [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.543 2 DEBUG nova.compute.manager [-] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.543 2 DEBUG nova.network.neutron [-] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:23:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Oct 14 09:23:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Oct 14 09:23:48 compute-0 ceph-mon[74249]: osdmap e286: 3 total, 3 up, 3 in
Oct 14 09:23:48 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.835 2 DEBUG nova.compute.manager [req-3de66736-2380-4aeb-80ac-9016aaf84839 req-af2d6f4c-047b-4a0a-b8ef-4b9f0cdac2fb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received event network-vif-unplugged-aff3a259-5908-4491-83a2-9aa0430d46e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.836 2 DEBUG oslo_concurrency.lockutils [req-3de66736-2380-4aeb-80ac-9016aaf84839 req-af2d6f4c-047b-4a0a-b8ef-4b9f0cdac2fb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.836 2 DEBUG oslo_concurrency.lockutils [req-3de66736-2380-4aeb-80ac-9016aaf84839 req-af2d6f4c-047b-4a0a-b8ef-4b9f0cdac2fb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.837 2 DEBUG oslo_concurrency.lockutils [req-3de66736-2380-4aeb-80ac-9016aaf84839 req-af2d6f4c-047b-4a0a-b8ef-4b9f0cdac2fb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.837 2 DEBUG nova.compute.manager [req-3de66736-2380-4aeb-80ac-9016aaf84839 req-af2d6f4c-047b-4a0a-b8ef-4b9f0cdac2fb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] No waiting events found dispatching network-vif-unplugged-aff3a259-5908-4491-83a2-9aa0430d46e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:23:48 compute-0 nova_compute[259627]: 2025-10-14 09:23:48.837 2 DEBUG nova.compute.manager [req-3de66736-2380-4aeb-80ac-9016aaf84839 req-af2d6f4c-047b-4a0a-b8ef-4b9f0cdac2fb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received event network-vif-unplugged-aff3a259-5908-4491-83a2-9aa0430d46e0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:23:49 compute-0 nova_compute[259627]: 2025-10-14 09:23:49.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:49 compute-0 ceph-mon[74249]: pgmap v2114: 305 pgs: 305 active+clean; 456 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 75 KiB/s wr, 27 op/s
Oct 14 09:23:49 compute-0 ceph-mon[74249]: osdmap e287: 3 total, 3 up, 3 in
Oct 14 09:23:49 compute-0 nova_compute[259627]: 2025-10-14 09:23:49.904 2 DEBUG nova.network.neutron [-] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:23:49 compute-0 nova_compute[259627]: 2025-10-14 09:23:49.932 2 INFO nova.compute.manager [-] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Took 1.39 seconds to deallocate network for instance.
Oct 14 09:23:49 compute-0 nova_compute[259627]: 2025-10-14 09:23:49.984 2 DEBUG oslo_concurrency.lockutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:49 compute-0 nova_compute[259627]: 2025-10-14 09:23:49.985 2 DEBUG oslo_concurrency.lockutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:50 compute-0 nova_compute[259627]: 2025-10-14 09:23:50.140 2 DEBUG oslo_concurrency.processutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:23:50 compute-0 nova_compute[259627]: 2025-10-14 09:23:50.201 2 INFO nova.virt.libvirt.driver [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Snapshot image upload complete
Oct 14 09:23:50 compute-0 nova_compute[259627]: 2025-10-14 09:23:50.202 2 INFO nova.compute.manager [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Took 4.98 seconds to snapshot the instance on the hypervisor.
Oct 14 09:23:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 447 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 106 op/s
Oct 14 09:23:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:23:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3846297174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:23:50 compute-0 nova_compute[259627]: 2025-10-14 09:23:50.621 2 DEBUG oslo_concurrency.processutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:23:50 compute-0 nova_compute[259627]: 2025-10-14 09:23:50.627 2 DEBUG nova.compute.provider_tree [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:23:50 compute-0 nova_compute[259627]: 2025-10-14 09:23:50.653 2 DEBUG nova.scheduler.client.report [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:23:50 compute-0 nova_compute[259627]: 2025-10-14 09:23:50.680 2 DEBUG oslo_concurrency.lockutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3846297174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:23:50 compute-0 nova_compute[259627]: 2025-10-14 09:23:50.718 2 INFO nova.scheduler.client.report [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Deleted allocations for instance 060db45d-e2f9-4bf6-bcc0-c72e479bfae1
Oct 14 09:23:50 compute-0 nova_compute[259627]: 2025-10-14 09:23:50.813 2 DEBUG oslo_concurrency.lockutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:50.840 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:e2:5d 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-17206a37-8263-4403-aaa6-3b6fe9255608', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17206a37-8263-4403-aaa6-3b6fe9255608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f46887bb96d483788cde4e37af9f799', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf824143-865f-4777-be8c-e192803f85fe, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=36d9c2d2-c941-4fd1-a63a-9e6fee3a0d20) old=Port_Binding(mac=['fa:16:3e:63:e2:5d 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-17206a37-8263-4403-aaa6-3b6fe9255608', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17206a37-8263-4403-aaa6-3b6fe9255608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f46887bb96d483788cde4e37af9f799', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:23:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:50.841 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 36d9c2d2-c941-4fd1-a63a-9e6fee3a0d20 in datapath 17206a37-8263-4403-aaa6-3b6fe9255608 updated
Oct 14 09:23:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:50.843 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 17206a37-8263-4403-aaa6-3b6fe9255608, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:23:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:50.844 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8806db02-2098-4536-8570-2ee907bca734]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:50 compute-0 nova_compute[259627]: 2025-10-14 09:23:50.949 2 DEBUG nova.compute.manager [req-73c2de8d-3453-4c28-800c-14f350f93bc1 req-058dca5c-bb8a-4dad-9e19-70649b89f709 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received event network-vif-plugged-aff3a259-5908-4491-83a2-9aa0430d46e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:23:50 compute-0 nova_compute[259627]: 2025-10-14 09:23:50.950 2 DEBUG oslo_concurrency.lockutils [req-73c2de8d-3453-4c28-800c-14f350f93bc1 req-058dca5c-bb8a-4dad-9e19-70649b89f709 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:50 compute-0 nova_compute[259627]: 2025-10-14 09:23:50.951 2 DEBUG oslo_concurrency.lockutils [req-73c2de8d-3453-4c28-800c-14f350f93bc1 req-058dca5c-bb8a-4dad-9e19-70649b89f709 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:50 compute-0 nova_compute[259627]: 2025-10-14 09:23:50.951 2 DEBUG oslo_concurrency.lockutils [req-73c2de8d-3453-4c28-800c-14f350f93bc1 req-058dca5c-bb8a-4dad-9e19-70649b89f709 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:50 compute-0 nova_compute[259627]: 2025-10-14 09:23:50.951 2 DEBUG nova.compute.manager [req-73c2de8d-3453-4c28-800c-14f350f93bc1 req-058dca5c-bb8a-4dad-9e19-70649b89f709 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] No waiting events found dispatching network-vif-plugged-aff3a259-5908-4491-83a2-9aa0430d46e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:23:50 compute-0 nova_compute[259627]: 2025-10-14 09:23:50.952 2 WARNING nova.compute.manager [req-73c2de8d-3453-4c28-800c-14f350f93bc1 req-058dca5c-bb8a-4dad-9e19-70649b89f709 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received unexpected event network-vif-plugged-aff3a259-5908-4491-83a2-9aa0430d46e0 for instance with vm_state deleted and task_state None.
Oct 14 09:23:50 compute-0 nova_compute[259627]: 2025-10-14 09:23:50.952 2 DEBUG nova.compute.manager [req-73c2de8d-3453-4c28-800c-14f350f93bc1 req-058dca5c-bb8a-4dad-9e19-70649b89f709 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received event network-vif-deleted-aff3a259-5908-4491-83a2-9aa0430d46e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:23:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Oct 14 09:23:51 compute-0 ceph-mon[74249]: pgmap v2116: 305 pgs: 305 active+clean; 447 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 106 op/s
Oct 14 09:23:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Oct 14 09:23:51 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Oct 14 09:23:51 compute-0 nova_compute[259627]: 2025-10-14 09:23:51.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:23:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 478 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 16 MiB/s wr, 271 op/s
Oct 14 09:23:52 compute-0 ceph-mon[74249]: osdmap e288: 3 total, 3 up, 3 in
Oct 14 09:23:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:23:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Oct 14 09:23:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Oct 14 09:23:52 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.053 2 DEBUG nova.compute.manager [req-2564feb9-e8cc-4819-b47e-250be2be7cc9 req-2e8592df-85ac-444b-896c-a65405ecd002 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received event network-changed-169fcf13-d616-47ef-8558-362361f16f03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.053 2 DEBUG nova.compute.manager [req-2564feb9-e8cc-4819-b47e-250be2be7cc9 req-2e8592df-85ac-444b-896c-a65405ecd002 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Refreshing instance network info cache due to event network-changed-169fcf13-d616-47ef-8558-362361f16f03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.054 2 DEBUG oslo_concurrency.lockutils [req-2564feb9-e8cc-4819-b47e-250be2be7cc9 req-2e8592df-85ac-444b-896c-a65405ecd002 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.054 2 DEBUG oslo_concurrency.lockutils [req-2564feb9-e8cc-4819-b47e-250be2be7cc9 req-2e8592df-85ac-444b-896c-a65405ecd002 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.055 2 DEBUG nova.network.neutron [req-2564feb9-e8cc-4819-b47e-250be2be7cc9 req-2e8592df-85ac-444b-896c-a65405ecd002 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Refreshing network info cache for port 169fcf13-d616-47ef-8558-362361f16f03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.117 2 DEBUG oslo_concurrency.lockutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.118 2 DEBUG oslo_concurrency.lockutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.118 2 DEBUG oslo_concurrency.lockutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.119 2 DEBUG oslo_concurrency.lockutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.119 2 DEBUG oslo_concurrency.lockutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.121 2 INFO nova.compute.manager [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Terminating instance
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.122 2 DEBUG nova.compute.manager [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:53 compute-0 kernel: tap169fcf13-d6 (unregistering): left promiscuous mode
Oct 14 09:23:53 compute-0 NetworkManager[44885]: <info>  [1760433833.1780] device (tap169fcf13-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:53 compute-0 ovn_controller[152662]: 2025-10-14T09:23:53Z|01294|binding|INFO|Releasing lport 169fcf13-d616-47ef-8558-362361f16f03 from this chassis (sb_readonly=0)
Oct 14 09:23:53 compute-0 ovn_controller[152662]: 2025-10-14T09:23:53Z|01295|binding|INFO|Setting lport 169fcf13-d616-47ef-8558-362361f16f03 down in Southbound
Oct 14 09:23:53 compute-0 ovn_controller[152662]: 2025-10-14T09:23:53Z|01296|binding|INFO|Removing iface tap169fcf13-d6 ovn-installed in OVS
Oct 14 09:23:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.205 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:4e:45 10.100.0.4'], port_security=['fa:16:3e:46:4e:45 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '8f5e63fb-23c2-4f15-acca-bc5fbeb0729b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4112adc84657452aa0e117ac5999054a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b7a53172-9b5e-49ee-bb03-aeca0d4a8fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=add5cdec-6440-4df9-aea8-21659d7bab06, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=169fcf13-d616-47ef-8558-362361f16f03) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:23:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.207 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 169fcf13-d616-47ef-8558-362361f16f03 in datapath 4fc37d66-193b-4ab7-80e3-58e26dc76e47 unbound from our chassis
Oct 14 09:23:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.211 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fc37d66-193b-4ab7-80e3-58e26dc76e47
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.236 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb40a17-42bd-4e58-ae0b-abfb07c2f6c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:53 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d00000079.scope: Deactivated successfully.
Oct 14 09:23:53 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d00000079.scope: Consumed 16.763s CPU time.
Oct 14 09:23:53 compute-0 systemd-machined[214636]: Machine qemu-154-instance-00000079 terminated.
Oct 14 09:23:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.274 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7361a15d-bd8a-4bb0-b01e-7b43d7536dba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.277 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1ada0a64-6dfd-465a-9c35-5ad6027b7ec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.303 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d7fe32a5-b2e3-4d48-a450-58480da21e08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.321 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bebd7151-d253-44cb-a513-62a1d13fbc9b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fc37d66-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:1e:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752720, 'reachable_time': 16651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382115, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.338 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[377c66f8-775f-49a5-831e-d57d59f7d82f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4fc37d66-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752733, 'tstamp': 752733}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382116, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4fc37d66-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752737, 'tstamp': 752737}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382116, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.340 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fc37d66-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.350 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fc37d66-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.351 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:23:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.351 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fc37d66-10, col_values=(('external_ids', {'iface-id': '04719e6c-d55b-4ad7-a45c-52e6e59101ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.351 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.353 2 INFO nova.virt.libvirt.driver [-] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Instance destroyed successfully.
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.353 2 DEBUG nova.objects.instance [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lazy-loading 'resources' on Instance uuid 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.378 2 DEBUG nova.virt.libvirt.vif [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1820866852',display_name='tempest-TestSnapshotPattern-server-1820866852',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1820866852',id=121,image_ref='d594f64a-1811-45da-92c9-566107aad012',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/LFMmap4OB0txXt7JPmRqMoCntTC6tPJ8UMtOaZfJpNHOk397YhzNpC3bp7HY+1PlJdNNwm/PwDw4vAnX0LcroehUvKxClr2ruduakP/QJ599/XDk5NHuriBpOA/llJg==',key_name='tempest-TestSnapshotPattern-1883373955',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:23:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4112adc84657452aa0e117ac5999054a',ramdisk_id='',reservation_id='r-z8e26amr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='6810b29b-088f-441b-8a6a-02eaafada0c5',image_min_disk='1',image_min_ram='0',image_owner_id='4112adc84657452aa0e117ac5999054a',image_owner_project_name='tempest-TestSnapshotPattern-70687399',image_owner_user_name='tempest-TestSnapshotPattern-70687399-project-member',image_user_id='f232ab535af04111bf570569aa293116',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-70687399',owner_user_name='tempest-TestSnapshotPattern-70687399-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:23:50Z,user_data=None,user_id='f232ab535af04111bf570569aa293116',uuid=8f5e63fb-23c2-4f15-acca-bc5fbeb0729b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.378 2 DEBUG nova.network.os_vif_util [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converting VIF {"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.379 2 DEBUG nova.network.os_vif_util [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=169fcf13-d616-47ef-8558-362361f16f03,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap169fcf13-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.379 2 DEBUG os_vif [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=169fcf13-d616-47ef-8558-362361f16f03,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap169fcf13-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.381 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap169fcf13-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.385 2 INFO os_vif [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=169fcf13-d616-47ef-8558-362361f16f03,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap169fcf13-d6')
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.549 2 DEBUG nova.compute.manager [req-74080d83-54ad-440b-8009-21964404e068 req-9cb61210-4e7d-42b0-aeb9-7e65d015911d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received event network-vif-unplugged-169fcf13-d616-47ef-8558-362361f16f03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.550 2 DEBUG oslo_concurrency.lockutils [req-74080d83-54ad-440b-8009-21964404e068 req-9cb61210-4e7d-42b0-aeb9-7e65d015911d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.550 2 DEBUG oslo_concurrency.lockutils [req-74080d83-54ad-440b-8009-21964404e068 req-9cb61210-4e7d-42b0-aeb9-7e65d015911d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.551 2 DEBUG oslo_concurrency.lockutils [req-74080d83-54ad-440b-8009-21964404e068 req-9cb61210-4e7d-42b0-aeb9-7e65d015911d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.551 2 DEBUG nova.compute.manager [req-74080d83-54ad-440b-8009-21964404e068 req-9cb61210-4e7d-42b0-aeb9-7e65d015911d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] No waiting events found dispatching network-vif-unplugged-169fcf13-d616-47ef-8558-362361f16f03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.552 2 DEBUG nova.compute.manager [req-74080d83-54ad-440b-8009-21964404e068 req-9cb61210-4e7d-42b0-aeb9-7e65d015911d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received event network-vif-unplugged-169fcf13-d616-47ef-8558-362361f16f03 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:23:53 compute-0 ceph-mon[74249]: pgmap v2118: 305 pgs: 305 active+clean; 478 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 16 MiB/s wr, 271 op/s
Oct 14 09:23:53 compute-0 ceph-mon[74249]: osdmap e289: 3 total, 3 up, 3 in
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.765 2 INFO nova.virt.libvirt.driver [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Deleting instance files /var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_del
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.767 2 INFO nova.virt.libvirt.driver [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Deletion of /var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_del complete
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.821 2 INFO nova.compute.manager [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.822 2 DEBUG oslo.service.loopingcall [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.823 2 DEBUG nova.compute.manager [-] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.823 2 DEBUG nova.network.neutron [-] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:23:53 compute-0 nova_compute[259627]: 2025-10-14 09:23:53.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:23:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 478 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 15 MiB/s wr, 252 op/s
Oct 14 09:23:54 compute-0 nova_compute[259627]: 2025-10-14 09:23:54.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:55 compute-0 nova_compute[259627]: 2025-10-14 09:23:55.480 2 DEBUG nova.network.neutron [req-2564feb9-e8cc-4819-b47e-250be2be7cc9 req-2e8592df-85ac-444b-896c-a65405ecd002 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Updated VIF entry in instance network info cache for port 169fcf13-d616-47ef-8558-362361f16f03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:23:55 compute-0 nova_compute[259627]: 2025-10-14 09:23:55.481 2 DEBUG nova.network.neutron [req-2564feb9-e8cc-4819-b47e-250be2be7cc9 req-2e8592df-85ac-444b-896c-a65405ecd002 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Updating instance_info_cache with network_info: [{"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:23:55 compute-0 nova_compute[259627]: 2025-10-14 09:23:55.506 2 DEBUG oslo_concurrency.lockutils [req-2564feb9-e8cc-4819-b47e-250be2be7cc9 req-2e8592df-85ac-444b-896c-a65405ecd002 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:23:55 compute-0 nova_compute[259627]: 2025-10-14 09:23:55.516 2 DEBUG nova.network.neutron [-] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:23:55 compute-0 nova_compute[259627]: 2025-10-14 09:23:55.536 2 INFO nova.compute.manager [-] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Took 1.71 seconds to deallocate network for instance.
Oct 14 09:23:55 compute-0 nova_compute[259627]: 2025-10-14 09:23:55.595 2 DEBUG oslo_concurrency.lockutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:55 compute-0 nova_compute[259627]: 2025-10-14 09:23:55.595 2 DEBUG oslo_concurrency.lockutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:55 compute-0 nova_compute[259627]: 2025-10-14 09:23:55.619 2 DEBUG nova.compute.manager [req-c7e402a0-c8e6-439f-887a-0c6eb5939b89 req-380a5c73-1e38-4a29-920f-2e2c44419681 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received event network-vif-deleted-169fcf13-d616-47ef-8558-362361f16f03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:23:55 compute-0 nova_compute[259627]: 2025-10-14 09:23:55.652 2 DEBUG nova.compute.manager [req-fde2c328-7df5-4409-9929-106f80044f52 req-4512236a-ea4c-447e-94f9-fc8fcbf94034 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received event network-vif-plugged-169fcf13-d616-47ef-8558-362361f16f03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:23:55 compute-0 nova_compute[259627]: 2025-10-14 09:23:55.653 2 DEBUG oslo_concurrency.lockutils [req-fde2c328-7df5-4409-9929-106f80044f52 req-4512236a-ea4c-447e-94f9-fc8fcbf94034 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:55 compute-0 nova_compute[259627]: 2025-10-14 09:23:55.653 2 DEBUG oslo_concurrency.lockutils [req-fde2c328-7df5-4409-9929-106f80044f52 req-4512236a-ea4c-447e-94f9-fc8fcbf94034 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:55 compute-0 nova_compute[259627]: 2025-10-14 09:23:55.653 2 DEBUG oslo_concurrency.lockutils [req-fde2c328-7df5-4409-9929-106f80044f52 req-4512236a-ea4c-447e-94f9-fc8fcbf94034 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:55 compute-0 nova_compute[259627]: 2025-10-14 09:23:55.653 2 DEBUG nova.compute.manager [req-fde2c328-7df5-4409-9929-106f80044f52 req-4512236a-ea4c-447e-94f9-fc8fcbf94034 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] No waiting events found dispatching network-vif-plugged-169fcf13-d616-47ef-8558-362361f16f03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:23:55 compute-0 nova_compute[259627]: 2025-10-14 09:23:55.654 2 WARNING nova.compute.manager [req-fde2c328-7df5-4409-9929-106f80044f52 req-4512236a-ea4c-447e-94f9-fc8fcbf94034 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received unexpected event network-vif-plugged-169fcf13-d616-47ef-8558-362361f16f03 for instance with vm_state deleted and task_state None.
Oct 14 09:23:55 compute-0 nova_compute[259627]: 2025-10-14 09:23:55.731 2 DEBUG oslo_concurrency.processutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:23:55 compute-0 ceph-mon[74249]: pgmap v2120: 305 pgs: 305 active+clean; 478 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 15 MiB/s wr, 252 op/s
Oct 14 09:23:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:23:56 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/943956470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:23:56 compute-0 nova_compute[259627]: 2025-10-14 09:23:56.210 2 DEBUG oslo_concurrency.processutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:23:56 compute-0 nova_compute[259627]: 2025-10-14 09:23:56.219 2 DEBUG nova.compute.provider_tree [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:23:56 compute-0 nova_compute[259627]: 2025-10-14 09:23:56.235 2 DEBUG nova.scheduler.client.report [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:23:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 358 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 12 MiB/s wr, 284 op/s
Oct 14 09:23:56 compute-0 nova_compute[259627]: 2025-10-14 09:23:56.259 2 DEBUG oslo_concurrency.lockutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:56 compute-0 nova_compute[259627]: 2025-10-14 09:23:56.288 2 INFO nova.scheduler.client.report [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Deleted allocations for instance 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b
Oct 14 09:23:56 compute-0 nova_compute[259627]: 2025-10-14 09:23:56.386 2 DEBUG oslo_concurrency.lockutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:56 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/943956470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:23:57 compute-0 ovn_controller[152662]: 2025-10-14T09:23:57Z|01297|binding|INFO|Releasing lport 16a7cbd0-b25e-4461-8725-c92979b01f53 from this chassis (sb_readonly=0)
Oct 14 09:23:57 compute-0 ovn_controller[152662]: 2025-10-14T09:23:57Z|01298|binding|INFO|Releasing lport 7bf9894c-4dab-4178-94d9-e45a9e10602a from this chassis (sb_readonly=0)
Oct 14 09:23:57 compute-0 ovn_controller[152662]: 2025-10-14T09:23:57Z|01299|binding|INFO|Releasing lport 04719e6c-d55b-4ad7-a45c-52e6e59101ab from this chassis (sb_readonly=0)
Oct 14 09:23:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Oct 14 09:23:57 compute-0 ceph-mon[74249]: pgmap v2121: 305 pgs: 305 active+clean; 358 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 12 MiB/s wr, 284 op/s
Oct 14 09:23:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Oct 14 09:23:57 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Oct 14 09:23:57 compute-0 nova_compute[259627]: 2025-10-14 09:23:57.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:23:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Oct 14 09:23:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Oct 14 09:23:57 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Oct 14 09:23:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 358 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 440 KiB/s rd, 384 KiB/s wr, 105 op/s
Oct 14 09:23:58 compute-0 nova_compute[259627]: 2025-10-14 09:23:58.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:58 compute-0 ceph-mon[74249]: osdmap e290: 3 total, 3 up, 3 in
Oct 14 09:23:58 compute-0 ceph-mon[74249]: osdmap e291: 3 total, 3 up, 3 in
Oct 14 09:23:58 compute-0 nova_compute[259627]: 2025-10-14 09:23:58.990 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:23:58 compute-0 nova_compute[259627]: 2025-10-14 09:23:58.990 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.018 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.280 2 DEBUG nova.compute.manager [req-7ba800f0-03c3-4a00-92d2-e0c5adee5462 req-5c3869ba-c9b4-4018-9488-2ad8950f66da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received event network-changed-6d5e10b7-5c07-4389-8916-e7c277cb2c88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.281 2 DEBUG nova.compute.manager [req-7ba800f0-03c3-4a00-92d2-e0c5adee5462 req-5c3869ba-c9b4-4018-9488-2ad8950f66da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Refreshing instance network info cache due to event network-changed-6d5e10b7-5c07-4389-8916-e7c277cb2c88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.281 2 DEBUG oslo_concurrency.lockutils [req-7ba800f0-03c3-4a00-92d2-e0c5adee5462 req-5c3869ba-c9b4-4018-9488-2ad8950f66da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.281 2 DEBUG oslo_concurrency.lockutils [req-7ba800f0-03c3-4a00-92d2-e0c5adee5462 req-5c3869ba-c9b4-4018-9488-2ad8950f66da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.281 2 DEBUG nova.network.neutron [req-7ba800f0-03c3-4a00-92d2-e0c5adee5462 req-5c3869ba-c9b4-4018-9488-2ad8950f66da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Refreshing network info cache for port 6d5e10b7-5c07-4389-8916-e7c277cb2c88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.356 2 DEBUG oslo_concurrency.lockutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "6810b29b-088f-441b-8a6a-02eaafada0c5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.356 2 DEBUG oslo_concurrency.lockutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.357 2 DEBUG oslo_concurrency.lockutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.358 2 DEBUG oslo_concurrency.lockutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.358 2 DEBUG oslo_concurrency.lockutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.360 2 INFO nova.compute.manager [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Terminating instance
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.362 2 DEBUG nova.compute.manager [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:23:59 compute-0 kernel: tap6d5e10b7-5c (unregistering): left promiscuous mode
Oct 14 09:23:59 compute-0 NetworkManager[44885]: <info>  [1760433839.4171] device (tap6d5e10b7-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:59 compute-0 ovn_controller[152662]: 2025-10-14T09:23:59Z|01300|binding|INFO|Releasing lport 6d5e10b7-5c07-4389-8916-e7c277cb2c88 from this chassis (sb_readonly=0)
Oct 14 09:23:59 compute-0 ovn_controller[152662]: 2025-10-14T09:23:59Z|01301|binding|INFO|Setting lport 6d5e10b7-5c07-4389-8916-e7c277cb2c88 down in Southbound
Oct 14 09:23:59 compute-0 ovn_controller[152662]: 2025-10-14T09:23:59Z|01302|binding|INFO|Removing iface tap6d5e10b7-5c ovn-installed in OVS
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.442 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:22:e5 10.100.0.12'], port_security=['fa:16:3e:7c:22:e5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6810b29b-088f-441b-8a6a-02eaafada0c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4112adc84657452aa0e117ac5999054a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b7a53172-9b5e-49ee-bb03-aeca0d4a8fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=add5cdec-6440-4df9-aea8-21659d7bab06, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=6d5e10b7-5c07-4389-8916-e7c277cb2c88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:23:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.443 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 6d5e10b7-5c07-4389-8916-e7c277cb2c88 in datapath 4fc37d66-193b-4ab7-80e3-58e26dc76e47 unbound from our chassis
Oct 14 09:23:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.447 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4fc37d66-193b-4ab7-80e3-58e26dc76e47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.448 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a87801fa-0314-4e00-acd6-d32c9d85475a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.451 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47 namespace which is not needed anymore
Oct 14 09:23:59 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000077.scope: Deactivated successfully.
Oct 14 09:23:59 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000077.scope: Consumed 15.782s CPU time.
Oct 14 09:23:59 compute-0 systemd-machined[214636]: Machine qemu-151-instance-00000077 terminated.
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:59 compute-0 neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47[378994]: [NOTICE]   (378998) : haproxy version is 2.8.14-c23fe91
Oct 14 09:23:59 compute-0 neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47[378994]: [NOTICE]   (378998) : path to executable is /usr/sbin/haproxy
Oct 14 09:23:59 compute-0 neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47[378994]: [WARNING]  (378998) : Exiting Master process...
Oct 14 09:23:59 compute-0 neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47[378994]: [WARNING]  (378998) : Exiting Master process...
Oct 14 09:23:59 compute-0 neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47[378994]: [ALERT]    (378998) : Current worker (379000) exited with code 143 (Terminated)
Oct 14 09:23:59 compute-0 neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47[378994]: [WARNING]  (378998) : All workers exited. Exiting... (0)
Oct 14 09:23:59 compute-0 systemd[1]: libpod-a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e.scope: Deactivated successfully.
Oct 14 09:23:59 compute-0 podman[382193]: 2025-10-14 09:23:59.606201608 +0000 UTC m=+0.053694708 container died a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.605 2 INFO nova.virt.libvirt.driver [-] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Instance destroyed successfully.
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.606 2 DEBUG nova.objects.instance [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lazy-loading 'resources' on Instance uuid 6810b29b-088f-441b-8a6a-02eaafada0c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.637 2 DEBUG nova.virt.libvirt.vif [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:22:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1951271072',display_name='tempest-TestSnapshotPattern-server-1951271072',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1951271072',id=119,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/LFMmap4OB0txXt7JPmRqMoCntTC6tPJ8UMtOaZfJpNHOk397YhzNpC3bp7HY+1PlJdNNwm/PwDw4vAnX0LcroehUvKxClr2ruduakP/QJ599/XDk5NHuriBpOA/llJg==',key_name='tempest-TestSnapshotPattern-1883373955',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4112adc84657452aa0e117ac5999054a',ramdisk_id='',reservation_id='r-51g4schc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-70687399',owner_user_name='tempest-TestSnapshotPattern-70687399-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:56Z,user_data=None,user_id='f232ab535af04111bf570569aa293116',uuid=6810b29b-088f-441b-8a6a-02eaafada0c5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.637 2 DEBUG nova.network.os_vif_util [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converting VIF {"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.639 2 DEBUG nova.network.os_vif_util [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7c:22:e5,bridge_name='br-int',has_traffic_filtering=True,id=6d5e10b7-5c07-4389-8916-e7c277cb2c88,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d5e10b7-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.639 2 DEBUG os_vif [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:22:e5,bridge_name='br-int',has_traffic_filtering=True,id=6d5e10b7-5c07-4389-8916-e7c277cb2c88,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d5e10b7-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.645 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d5e10b7-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e-userdata-shm.mount: Deactivated successfully.
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-de954708f8f3fdfb0e31660e7186517bbce587741a2e68b98ff8eba35c2ae10b-merged.mount: Deactivated successfully.
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.658 2 INFO os_vif [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:22:e5,bridge_name='br-int',has_traffic_filtering=True,id=6d5e10b7-5c07-4389-8916-e7c277cb2c88,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d5e10b7-5c')
Oct 14 09:23:59 compute-0 podman[382193]: 2025-10-14 09:23:59.670789674 +0000 UTC m=+0.118282784 container cleanup a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 09:23:59 compute-0 systemd[1]: libpod-conmon-a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e.scope: Deactivated successfully.
Oct 14 09:23:59 compute-0 podman[382248]: 2025-10-14 09:23:59.755872486 +0000 UTC m=+0.052947589 container remove a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:23:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.765 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe2199c-87ec-4aef-82f4-215c60fe4c1a]: (4, ('Tue Oct 14 09:23:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47 (a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e)\na80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e\nTue Oct 14 09:23:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47 (a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e)\na80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.767 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c19e78cc-194b-43f2-9064-a8cbc5efa4f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.768 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fc37d66-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:59 compute-0 kernel: tap4fc37d66-10: left promiscuous mode
Oct 14 09:23:59 compute-0 ceph-mon[74249]: pgmap v2124: 305 pgs: 305 active+clean; 358 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 440 KiB/s rd, 384 KiB/s wr, 105 op/s
Oct 14 09:23:59 compute-0 nova_compute[259627]: 2025-10-14 09:23:59.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:23:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.798 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3d184de9-801a-4e06-b3cb-c80b548e2668]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.830 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[799d3fd6-be19-4fd5-9c6b-a89ed0f1588b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.832 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f16545dd-361f-4c45-846b-92dc5914436d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.848 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd4ca85-1ea9-4fa7-9ccd-bb27bbeaa603]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752711, 'reachable_time': 43077, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382266, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:23:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d4fc37d66\x2d193b\x2d4ab7\x2d80e3\x2d58e26dc76e47.mount: Deactivated successfully.
Oct 14 09:23:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.852 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:23:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.852 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[8da79455-58c9-45c4-bfa0-9df95a99914d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:00 compute-0 nova_compute[259627]: 2025-10-14 09:24:00.047 2 INFO nova.virt.libvirt.driver [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Deleting instance files /var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5_del
Oct 14 09:24:00 compute-0 nova_compute[259627]: 2025-10-14 09:24:00.047 2 INFO nova.virt.libvirt.driver [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Deletion of /var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5_del complete
Oct 14 09:24:00 compute-0 nova_compute[259627]: 2025-10-14 09:24:00.123 2 INFO nova.compute.manager [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Took 0.76 seconds to destroy the instance on the hypervisor.
Oct 14 09:24:00 compute-0 nova_compute[259627]: 2025-10-14 09:24:00.124 2 DEBUG oslo.service.loopingcall [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:24:00 compute-0 nova_compute[259627]: 2025-10-14 09:24:00.124 2 DEBUG nova.compute.manager [-] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:24:00 compute-0 nova_compute[259627]: 2025-10-14 09:24:00.124 2 DEBUG nova.network.neutron [-] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:24:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 328 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 317 KiB/s wr, 96 op/s
Oct 14 09:24:01 compute-0 nova_compute[259627]: 2025-10-14 09:24:01.124 2 DEBUG nova.network.neutron [req-7ba800f0-03c3-4a00-92d2-e0c5adee5462 req-5c3869ba-c9b4-4018-9488-2ad8950f66da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updated VIF entry in instance network info cache for port 6d5e10b7-5c07-4389-8916-e7c277cb2c88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:24:01 compute-0 nova_compute[259627]: 2025-10-14 09:24:01.125 2 DEBUG nova.network.neutron [req-7ba800f0-03c3-4a00-92d2-e0c5adee5462 req-5c3869ba-c9b4-4018-9488-2ad8950f66da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updating instance_info_cache with network_info: [{"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:24:01 compute-0 nova_compute[259627]: 2025-10-14 09:24:01.152 2 DEBUG oslo_concurrency.lockutils [req-7ba800f0-03c3-4a00-92d2-e0c5adee5462 req-5c3869ba-c9b4-4018-9488-2ad8950f66da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:24:01 compute-0 nova_compute[259627]: 2025-10-14 09:24:01.399 2 DEBUG nova.compute.manager [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received event network-vif-unplugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:24:01 compute-0 nova_compute[259627]: 2025-10-14 09:24:01.400 2 DEBUG oslo_concurrency.lockutils [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:01 compute-0 nova_compute[259627]: 2025-10-14 09:24:01.400 2 DEBUG oslo_concurrency.lockutils [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:01 compute-0 nova_compute[259627]: 2025-10-14 09:24:01.400 2 DEBUG oslo_concurrency.lockutils [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:01 compute-0 nova_compute[259627]: 2025-10-14 09:24:01.401 2 DEBUG nova.compute.manager [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] No waiting events found dispatching network-vif-unplugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:24:01 compute-0 nova_compute[259627]: 2025-10-14 09:24:01.401 2 DEBUG nova.compute.manager [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received event network-vif-unplugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:24:01 compute-0 nova_compute[259627]: 2025-10-14 09:24:01.401 2 DEBUG nova.compute.manager [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received event network-vif-plugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:24:01 compute-0 nova_compute[259627]: 2025-10-14 09:24:01.402 2 DEBUG oslo_concurrency.lockutils [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:01 compute-0 nova_compute[259627]: 2025-10-14 09:24:01.402 2 DEBUG oslo_concurrency.lockutils [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:01 compute-0 nova_compute[259627]: 2025-10-14 09:24:01.403 2 DEBUG oslo_concurrency.lockutils [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:01 compute-0 nova_compute[259627]: 2025-10-14 09:24:01.403 2 DEBUG nova.compute.manager [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] No waiting events found dispatching network-vif-plugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:24:01 compute-0 nova_compute[259627]: 2025-10-14 09:24:01.403 2 WARNING nova.compute.manager [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received unexpected event network-vif-plugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 for instance with vm_state active and task_state deleting.
Oct 14 09:24:01 compute-0 ceph-mon[74249]: pgmap v2125: 305 pgs: 305 active+clean; 328 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 317 KiB/s wr, 96 op/s
Oct 14 09:24:02 compute-0 nova_compute[259627]: 2025-10-14 09:24:02.020 2 DEBUG nova.network.neutron [-] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:24:02 compute-0 nova_compute[259627]: 2025-10-14 09:24:02.043 2 INFO nova.compute.manager [-] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Took 1.92 seconds to deallocate network for instance.
Oct 14 09:24:02 compute-0 nova_compute[259627]: 2025-10-14 09:24:02.093 2 DEBUG oslo_concurrency.lockutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:02 compute-0 nova_compute[259627]: 2025-10-14 09:24:02.093 2 DEBUG oslo_concurrency.lockutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:02 compute-0 nova_compute[259627]: 2025-10-14 09:24:02.201 2 DEBUG oslo_concurrency.processutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:24:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 200 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 374 KiB/s rd, 293 KiB/s wr, 147 op/s
Oct 14 09:24:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:24:02 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/680459775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:24:02 compute-0 nova_compute[259627]: 2025-10-14 09:24:02.638 2 DEBUG oslo_concurrency.processutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:24:02 compute-0 nova_compute[259627]: 2025-10-14 09:24:02.647 2 DEBUG nova.compute.provider_tree [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:24:02 compute-0 nova_compute[259627]: 2025-10-14 09:24:02.668 2 DEBUG nova.scheduler.client.report [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:24:02 compute-0 podman[382289]: 2025-10-14 09:24:02.685199449 +0000 UTC m=+0.081503025 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:24:02 compute-0 podman[382288]: 2025-10-14 09:24:02.690928381 +0000 UTC m=+0.088982980 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd)
Oct 14 09:24:02 compute-0 nova_compute[259627]: 2025-10-14 09:24:02.698 2 DEBUG oslo_concurrency.lockutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:02 compute-0 nova_compute[259627]: 2025-10-14 09:24:02.724 2 INFO nova.scheduler.client.report [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Deleted allocations for instance 6810b29b-088f-441b-8a6a-02eaafada0c5
Oct 14 09:24:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:24:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:24:02 compute-0 nova_compute[259627]: 2025-10-14 09:24:02.790 2 DEBUG oslo_concurrency.lockutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:02 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/680459775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:24:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:24:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:24:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:24:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:24:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:24:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Oct 14 09:24:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Oct 14 09:24:02 compute-0 ovn_controller[152662]: 2025-10-14T09:24:02Z|01303|binding|INFO|Releasing lport 16a7cbd0-b25e-4461-8725-c92979b01f53 from this chassis (sb_readonly=0)
Oct 14 09:24:02 compute-0 ovn_controller[152662]: 2025-10-14T09:24:02Z|01304|binding|INFO|Releasing lport 7bf9894c-4dab-4178-94d9-e45a9e10602a from this chassis (sb_readonly=0)
Oct 14 09:24:02 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Oct 14 09:24:03 compute-0 nova_compute[259627]: 2025-10-14 09:24:03.044 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433828.0427566, 060db45d-e2f9-4bf6-bcc0-c72e479bfae1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:24:03 compute-0 nova_compute[259627]: 2025-10-14 09:24:03.045 2 INFO nova.compute.manager [-] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] VM Stopped (Lifecycle Event)
Oct 14 09:24:03 compute-0 nova_compute[259627]: 2025-10-14 09:24:03.076 2 DEBUG nova.compute.manager [None req-334d64ca-7666-45f7-9947-cf8f4d8eea1a - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:24:03 compute-0 nova_compute[259627]: 2025-10-14 09:24:03.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:03 compute-0 nova_compute[259627]: 2025-10-14 09:24:03.524 2 DEBUG nova.compute.manager [req-738525c8-f843-4cac-82f0-91e43ef875b4 req-467af9bc-95d9-48a7-bf8e-80fcec652b4e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received event network-vif-deleted-6d5e10b7-5c07-4389-8916-e7c277cb2c88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:24:03 compute-0 ceph-mon[74249]: pgmap v2126: 305 pgs: 305 active+clean; 200 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 374 KiB/s rd, 293 KiB/s wr, 147 op/s
Oct 14 09:24:03 compute-0 ceph-mon[74249]: osdmap e292: 3 total, 3 up, 3 in
Oct 14 09:24:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 200 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 6.2 KiB/s wr, 83 op/s
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.398 2 DEBUG oslo_concurrency.lockutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.399 2 DEBUG oslo_concurrency.lockutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.400 2 DEBUG oslo_concurrency.lockutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.401 2 DEBUG oslo_concurrency.lockutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.401 2 DEBUG oslo_concurrency.lockutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.403 2 INFO nova.compute.manager [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Terminating instance
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.405 2 DEBUG nova.compute.manager [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:24:04 compute-0 kernel: tap0c7d56b8-70 (unregistering): left promiscuous mode
Oct 14 09:24:04 compute-0 NetworkManager[44885]: <info>  [1760433844.4718] device (tap0c7d56b8-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:04 compute-0 ovn_controller[152662]: 2025-10-14T09:24:04Z|01305|binding|INFO|Releasing lport 0c7d56b8-701e-431d-8f3f-4682c684a719 from this chassis (sb_readonly=0)
Oct 14 09:24:04 compute-0 ovn_controller[152662]: 2025-10-14T09:24:04Z|01306|binding|INFO|Setting lport 0c7d56b8-701e-431d-8f3f-4682c684a719 down in Southbound
Oct 14 09:24:04 compute-0 ovn_controller[152662]: 2025-10-14T09:24:04Z|01307|binding|INFO|Removing iface tap0c7d56b8-70 ovn-installed in OVS
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.544 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:88:3d 10.100.0.22'], port_security=['fa:16:3e:b4:88:3d 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': 'ef3d76bf-9763-4405-8e48-c2c4405a2a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e990e92a-384a-47c4-be5e-d58c231a3275', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab61dd9b-dbf7-46d4-89df-319a0a1fc6a6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0c7d56b8-701e-431d-8f3f-4682c684a719) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:24:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.546 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0c7d56b8-701e-431d-8f3f-4682c684a719 in datapath 39c21153-4a3d-40fd-91df-ae7d5dae4d8c unbound from our chassis
Oct 14 09:24:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.547 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39c21153-4a3d-40fd-91df-ae7d5dae4d8c
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:04 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000078.scope: Deactivated successfully.
Oct 14 09:24:04 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000078.scope: Consumed 16.147s CPU time.
Oct 14 09:24:04 compute-0 systemd-machined[214636]: Machine qemu-153-instance-00000078 terminated.
Oct 14 09:24:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.570 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cc66324d-e5e1-4ff9-98cc-d730c9911131]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:04 compute-0 sudo[382335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:24:04 compute-0 sudo[382335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.614 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[78dfd2f1-ace1-48aa-aa0e-63cf72c5d242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.623 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0213e232-c9f3-48a7-9b56-400e06bc918a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:04 compute-0 sudo[382335]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.657 2 INFO nova.virt.libvirt.driver [-] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Instance destroyed successfully.
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.658 2 DEBUG nova.objects.instance [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid ef3d76bf-9763-4405-8e48-c2c4405a2a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.672 2 DEBUG nova.virt.libvirt.vif [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:22:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-74163694',display_name='tempest-TestNetworkBasicOps-server-74163694',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-74163694',id=120,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDJjpHsBb1FmstcXMm13RiW9DIcCDzUbHC1W47DgC4rLa2+YaGMfll4QodMfzMI26CQxBr8mMI8Apo+Vm4ZUA+2D0BmlkJiSjNtRVZZ4pPW+p+wcLG9yH2ONX/d7llYQVA==',key_name='tempest-TestNetworkBasicOps-266793307',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-scl22852',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:58Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=ef3d76bf-9763-4405-8e48-c2c4405a2a3b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.672 2 DEBUG nova.network.os_vif_util [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.674 2 DEBUG nova.network.os_vif_util [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:88:3d,bridge_name='br-int',has_traffic_filtering=True,id=0c7d56b8-701e-431d-8f3f-4682c684a719,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c7d56b8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.676 2 DEBUG os_vif [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:88:3d,bridge_name='br-int',has_traffic_filtering=True,id=0c7d56b8-701e-431d-8f3f-4682c684a719,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c7d56b8-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.679 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c7d56b8-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.678 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8d159fad-e669-42d1-9330-017dbd44ebd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.687 2 INFO os_vif [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:88:3d,bridge_name='br-int',has_traffic_filtering=True,id=0c7d56b8-701e-431d-8f3f-4682c684a719,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c7d56b8-70')
Oct 14 09:24:04 compute-0 sudo[382366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:24:04 compute-0 sudo[382366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:04 compute-0 sudo[382366]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.716 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[30efaecb-f5e8-4fc1-a921-7ee0651e43a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39c21153-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:78:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 7, 'rx_bytes': 1132, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 7, 'rx_bytes': 1132, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 753725, 'reachable_time': 17078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 9, 'inoctets': 796, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 9, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 796, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 9, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382398, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.739 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4b0f0a00-23b9-4c01-a1a4-43986e7d6ebf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap39c21153-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 753740, 'tstamp': 753740}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382417, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap39c21153-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 753743, 'tstamp': 753743}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382417, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.741 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39c21153-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:04 compute-0 nova_compute[259627]: 2025-10-14 09:24:04.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.744 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39c21153-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:24:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.745 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:24:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.745 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39c21153-40, col_values=(('external_ids', {'iface-id': '7bf9894c-4dab-4178-94d9-e45a9e10602a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:24:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.746 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:24:04 compute-0 sudo[382416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:24:04 compute-0 sudo[382416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:04 compute-0 sudo[382416]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:04 compute-0 sudo[382445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:24:04 compute-0 sudo[382445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:05 compute-0 nova_compute[259627]: 2025-10-14 09:24:05.121 2 INFO nova.virt.libvirt.driver [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Deleting instance files /var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b_del
Oct 14 09:24:05 compute-0 nova_compute[259627]: 2025-10-14 09:24:05.121 2 INFO nova.virt.libvirt.driver [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Deletion of /var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b_del complete
Oct 14 09:24:05 compute-0 nova_compute[259627]: 2025-10-14 09:24:05.178 2 INFO nova.compute.manager [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 14 09:24:05 compute-0 nova_compute[259627]: 2025-10-14 09:24:05.179 2 DEBUG oslo.service.loopingcall [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:24:05 compute-0 nova_compute[259627]: 2025-10-14 09:24:05.179 2 DEBUG nova.compute.manager [-] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:24:05 compute-0 nova_compute[259627]: 2025-10-14 09:24:05.179 2 DEBUG nova.network.neutron [-] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:24:05 compute-0 sudo[382445]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:24:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1075546762' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:24:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:24:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1075546762' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:24:05 compute-0 sudo[382501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:24:05 compute-0 nova_compute[259627]: 2025-10-14 09:24:05.637 2 DEBUG nova.compute.manager [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Received event network-vif-unplugged-0c7d56b8-701e-431d-8f3f-4682c684a719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:24:05 compute-0 nova_compute[259627]: 2025-10-14 09:24:05.637 2 DEBUG oslo_concurrency.lockutils [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:05 compute-0 nova_compute[259627]: 2025-10-14 09:24:05.638 2 DEBUG oslo_concurrency.lockutils [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:05 compute-0 nova_compute[259627]: 2025-10-14 09:24:05.638 2 DEBUG oslo_concurrency.lockutils [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:05 compute-0 sudo[382501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:05 compute-0 nova_compute[259627]: 2025-10-14 09:24:05.639 2 DEBUG nova.compute.manager [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] No waiting events found dispatching network-vif-unplugged-0c7d56b8-701e-431d-8f3f-4682c684a719 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:24:05 compute-0 nova_compute[259627]: 2025-10-14 09:24:05.639 2 DEBUG nova.compute.manager [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Received event network-vif-unplugged-0c7d56b8-701e-431d-8f3f-4682c684a719 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:24:05 compute-0 nova_compute[259627]: 2025-10-14 09:24:05.640 2 DEBUG nova.compute.manager [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Received event network-vif-plugged-0c7d56b8-701e-431d-8f3f-4682c684a719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:24:05 compute-0 nova_compute[259627]: 2025-10-14 09:24:05.640 2 DEBUG oslo_concurrency.lockutils [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:05 compute-0 nova_compute[259627]: 2025-10-14 09:24:05.640 2 DEBUG oslo_concurrency.lockutils [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:05 compute-0 nova_compute[259627]: 2025-10-14 09:24:05.641 2 DEBUG oslo_concurrency.lockutils [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:05 compute-0 nova_compute[259627]: 2025-10-14 09:24:05.641 2 DEBUG nova.compute.manager [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] No waiting events found dispatching network-vif-plugged-0c7d56b8-701e-431d-8f3f-4682c684a719 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:24:05 compute-0 nova_compute[259627]: 2025-10-14 09:24:05.641 2 WARNING nova.compute.manager [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Received unexpected event network-vif-plugged-0c7d56b8-701e-431d-8f3f-4682c684a719 for instance with vm_state active and task_state deleting.
Oct 14 09:24:05 compute-0 sudo[382501]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:05 compute-0 sudo[382526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:24:05 compute-0 sudo[382526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:05 compute-0 sudo[382526]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:05 compute-0 sudo[382551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:24:05 compute-0 sudo[382551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:05 compute-0 sudo[382551]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:05 compute-0 ceph-mon[74249]: pgmap v2128: 305 pgs: 305 active+clean; 200 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 6.2 KiB/s wr, 83 op/s
Oct 14 09:24:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1075546762' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:24:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1075546762' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:24:05 compute-0 sudo[382576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 14 09:24:05 compute-0 sudo[382576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:06 compute-0 sudo[382576]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:24:06 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:24:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:24:06 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:24:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:24:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:24:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:24:06 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:24:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:24:06 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:24:06 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 27356cb3-385b-4584-bbc6-fa49f3337939 does not exist
Oct 14 09:24:06 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 06a1773c-59c6-4de7-8972-2a93c2fa4693 does not exist
Oct 14 09:24:06 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 09c46760-8c72-4775-ac13-49b86cdd97e0 does not exist
Oct 14 09:24:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:24:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.146626) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433846146700, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1760, "num_deletes": 255, "total_data_size": 2664952, "memory_usage": 2711392, "flush_reason": "Manual Compaction"}
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Oct 14 09:24:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:24:06 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:24:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:24:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433846164214, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 2614024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43411, "largest_seqno": 45170, "table_properties": {"data_size": 2605905, "index_size": 4933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17202, "raw_average_key_size": 20, "raw_value_size": 2589497, "raw_average_value_size": 3082, "num_data_blocks": 218, "num_entries": 840, "num_filter_entries": 840, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760433683, "oldest_key_time": 1760433683, "file_creation_time": 1760433846, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 17622 microseconds, and 10741 cpu microseconds.
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.164258) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 2614024 bytes OK
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.164276) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.165563) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.165575) EVENT_LOG_v1 {"time_micros": 1760433846165571, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.165590) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2657321, prev total WAL file size 2657321, number of live WAL files 2.
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.166371) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(2552KB)], [101(7470KB)]
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433846166456, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10263413, "oldest_snapshot_seqno": -1}
Oct 14 09:24:06 compute-0 sudo[382621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:24:06 compute-0 sudo[382621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:06 compute-0 sudo[382621]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6607 keys, 8666808 bytes, temperature: kUnknown
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433846224880, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8666808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8622667, "index_size": 26493, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 171863, "raw_average_key_size": 26, "raw_value_size": 8504268, "raw_average_value_size": 1287, "num_data_blocks": 1035, "num_entries": 6607, "num_filter_entries": 6607, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760433846, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.225099) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8666808 bytes
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.226453) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.2 rd, 148.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 7.3 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 7131, records dropped: 524 output_compression: NoCompression
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.226468) EVENT_LOG_v1 {"time_micros": 1760433846226460, "job": 60, "event": "compaction_finished", "compaction_time_micros": 58255, "compaction_time_cpu_micros": 39344, "output_level": 6, "num_output_files": 1, "total_output_size": 8666808, "num_input_records": 7131, "num_output_records": 6607, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433846226961, "job": 60, "event": "table_file_deletion", "file_number": 103}
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433846228310, "job": 60, "event": "table_file_deletion", "file_number": 101}
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.166284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.228366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.228371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.228373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.228374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:24:06 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.228375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:24:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 121 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 9.4 KiB/s wr, 105 op/s
Oct 14 09:24:06 compute-0 sudo[382646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:24:06 compute-0 sudo[382646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:06 compute-0 sudo[382646]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:06 compute-0 sudo[382671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:24:06 compute-0 sudo[382671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:06 compute-0 sudo[382671]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:06 compute-0 sudo[382696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:24:06 compute-0 sudo[382696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:06 compute-0 nova_compute[259627]: 2025-10-14 09:24:06.653 2 DEBUG nova.network.neutron [-] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:24:06 compute-0 nova_compute[259627]: 2025-10-14 09:24:06.677 2 INFO nova.compute.manager [-] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Took 1.50 seconds to deallocate network for instance.
Oct 14 09:24:06 compute-0 nova_compute[259627]: 2025-10-14 09:24:06.727 2 DEBUG oslo_concurrency.lockutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:06 compute-0 nova_compute[259627]: 2025-10-14 09:24:06.728 2 DEBUG oslo_concurrency.lockutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:06 compute-0 podman[382760]: 2025-10-14 09:24:06.743669581 +0000 UTC m=+0.065335846 container create 707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 09:24:06 compute-0 systemd[1]: Started libpod-conmon-707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3.scope.
Oct 14 09:24:06 compute-0 nova_compute[259627]: 2025-10-14 09:24:06.797 2 DEBUG oslo_concurrency.processutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:24:06 compute-0 ovn_controller[152662]: 2025-10-14T09:24:06Z|01308|binding|INFO|Releasing lport 16a7cbd0-b25e-4461-8725-c92979b01f53 from this chassis (sb_readonly=0)
Oct 14 09:24:06 compute-0 ovn_controller[152662]: 2025-10-14T09:24:06Z|01309|binding|INFO|Releasing lport 7bf9894c-4dab-4178-94d9-e45a9e10602a from this chassis (sb_readonly=0)
Oct 14 09:24:06 compute-0 podman[382760]: 2025-10-14 09:24:06.719120444 +0000 UTC m=+0.040786809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:24:06 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:24:06 compute-0 podman[382760]: 2025-10-14 09:24:06.846393879 +0000 UTC m=+0.168060194 container init 707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:24:06 compute-0 podman[382760]: 2025-10-14 09:24:06.857029072 +0000 UTC m=+0.178695337 container start 707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 09:24:06 compute-0 podman[382760]: 2025-10-14 09:24:06.860883997 +0000 UTC m=+0.182550272 container attach 707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 09:24:06 compute-0 cool_nobel[382776]: 167 167
Oct 14 09:24:06 compute-0 systemd[1]: libpod-707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3.scope: Deactivated successfully.
Oct 14 09:24:06 compute-0 conmon[382776]: conmon 707154b64902b6df87c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3.scope/container/memory.events
Oct 14 09:24:06 compute-0 podman[382760]: 2025-10-14 09:24:06.868087625 +0000 UTC m=+0.189753950 container died 707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:24:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-a498481b9cf6fe03809d4ee27f54b847e7c66237884886ed97a33cd75205bfc4-merged.mount: Deactivated successfully.
Oct 14 09:24:06 compute-0 nova_compute[259627]: 2025-10-14 09:24:06.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:06 compute-0 podman[382760]: 2025-10-14 09:24:06.950488282 +0000 UTC m=+0.272154547 container remove 707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:24:06 compute-0 systemd[1]: libpod-conmon-707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3.scope: Deactivated successfully.
Oct 14 09:24:07 compute-0 nova_compute[259627]: 2025-10-14 09:24:07.000 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:24:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:07.039 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:07.039 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:07.040 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:07 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:24:07 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:24:07 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:24:07 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:24:07 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:24:07 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:24:07 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:24:07 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:24:07 compute-0 ceph-mon[74249]: pgmap v2129: 305 pgs: 305 active+clean; 121 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 9.4 KiB/s wr, 105 op/s
Oct 14 09:24:07 compute-0 podman[382821]: 2025-10-14 09:24:07.155900187 +0000 UTC m=+0.043314671 container create 9f813c9c449763cbff96bb30f765da29df4572e9852cbecb22a8beeb90423678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_joliot, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:24:07 compute-0 systemd[1]: Started libpod-conmon-9f813c9c449763cbff96bb30f765da29df4572e9852cbecb22a8beeb90423678.scope.
Oct 14 09:24:07 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:24:07 compute-0 podman[382821]: 2025-10-14 09:24:07.13618981 +0000 UTC m=+0.023604314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e2e3dffdcf2e4e10be91a85b2df6f91495ee8b1a6dc5bafdb32318db141676/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e2e3dffdcf2e4e10be91a85b2df6f91495ee8b1a6dc5bafdb32318db141676/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e2e3dffdcf2e4e10be91a85b2df6f91495ee8b1a6dc5bafdb32318db141676/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e2e3dffdcf2e4e10be91a85b2df6f91495ee8b1a6dc5bafdb32318db141676/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e2e3dffdcf2e4e10be91a85b2df6f91495ee8b1a6dc5bafdb32318db141676/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:24:07 compute-0 podman[382821]: 2025-10-14 09:24:07.259978499 +0000 UTC m=+0.147393043 container init 9f813c9c449763cbff96bb30f765da29df4572e9852cbecb22a8beeb90423678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_joliot, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 09:24:07 compute-0 podman[382821]: 2025-10-14 09:24:07.267421333 +0000 UTC m=+0.154835817 container start 9f813c9c449763cbff96bb30f765da29df4572e9852cbecb22a8beeb90423678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 09:24:07 compute-0 podman[382821]: 2025-10-14 09:24:07.272243842 +0000 UTC m=+0.159658316 container attach 9f813c9c449763cbff96bb30f765da29df4572e9852cbecb22a8beeb90423678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_joliot, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 09:24:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:24:07 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4181953327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:24:07 compute-0 nova_compute[259627]: 2025-10-14 09:24:07.304 2 DEBUG oslo_concurrency.processutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:24:07 compute-0 nova_compute[259627]: 2025-10-14 09:24:07.311 2 DEBUG nova.compute.provider_tree [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:24:07 compute-0 nova_compute[259627]: 2025-10-14 09:24:07.330 2 DEBUG nova.scheduler.client.report [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:24:07 compute-0 nova_compute[259627]: 2025-10-14 09:24:07.365 2 DEBUG oslo_concurrency.lockutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:07 compute-0 nova_compute[259627]: 2025-10-14 09:24:07.396 2 INFO nova.scheduler.client.report [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance ef3d76bf-9763-4405-8e48-c2c4405a2a3b
Oct 14 09:24:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:07.457 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:24:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:07.459 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:24:07 compute-0 nova_compute[259627]: 2025-10-14 09:24:07.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:07 compute-0 nova_compute[259627]: 2025-10-14 09:24:07.561 2 DEBUG oslo_concurrency.lockutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:07 compute-0 nova_compute[259627]: 2025-10-14 09:24:07.762 2 DEBUG nova.compute.manager [req-46cb717e-d82c-4046-b5d7-de50572b77e4 req-8729897c-bfb1-4961-8753-b658bc434990 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Received event network-vif-deleted-0c7d56b8-701e-431d-8f3f-4682c684a719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:24:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.148 2 DEBUG oslo_concurrency.lockutils [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "interface-50c83173-31e3-4f7a-8836-26e52affd0f2-ff2b9b74-a6fc-4774-89d2-9c010f121d65" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.150 2 DEBUG oslo_concurrency.lockutils [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "interface-50c83173-31e3-4f7a-8836-26e52affd0f2-ff2b9b74-a6fc-4774-89d2-9c010f121d65" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:08 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4181953327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.171 2 DEBUG nova.objects.instance [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'flavor' on Instance uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.193 2 DEBUG nova.virt.libvirt.vif [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:21:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-805281293',display_name='tempest-TestNetworkBasicOps-server-805281293',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-805281293',id=117,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLemNDLm8oqFAwx6pW1SyO2V0WIcpPak25N5nC9IiUcuBV+L+dio1UtnHPJvvKiOY7HNqaGgptTx3l7dHlvUaMKtJM7efD9rBJOYA3Gu+LK7uF+l/NOZF2PoS0o9uk9l4A==',key_name='tempest-TestNetworkBasicOps-1122823922',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ntom2zcq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:09Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=50c83173-31e3-4f7a-8836-26e52affd0f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.193 2 DEBUG nova.network.os_vif_util [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.194 2 DEBUG nova.network.os_vif_util [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.198 2 DEBUG nova.virt.libvirt.guest [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.201 2 DEBUG nova.virt.libvirt.guest [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.202 2 DEBUG nova.virt.libvirt.driver [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Attempting to detach device tapff2b9b74-a6 from instance 50c83173-31e3-4f7a-8836-26e52affd0f2 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.203 2 DEBUG nova.virt.libvirt.guest [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] detach device xml: <interface type="ethernet">
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:17:98:a5"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <target dev="tapff2b9b74-a6"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]: </interface>
Oct 14 09:24:08 compute-0 nova_compute[259627]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.208 2 DEBUG nova.virt.libvirt.guest [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.212 2 DEBUG nova.virt.libvirt.guest [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface>not found in domain: <domain type='kvm' id='148'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <name>instance-00000075</name>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <uuid>50c83173-31e3-4f7a-8836-26e52affd0f2</uuid>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:name>tempest-TestNetworkBasicOps-server-805281293</nova:name>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:22:38</nova:creationTime>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:port uuid="81977d79-f754-42ba-8b3c-c4eb2f9651d2">
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:port uuid="ff2b9b74-a6fc-4774-89d2-9c010f121d65">
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:24:08 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <system>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <entry name='serial'>50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <entry name='uuid'>50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </system>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <os>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </os>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <features>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </features>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk' index='2'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       </source>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config' index='1'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       </source>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:32:f2:3f'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target dev='tap81977d79-f7'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:17:98:a5'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target dev='tapff2b9b74-a6'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='net1'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log' append='off'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       </target>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/0'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log' append='off'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </console>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </input>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </input>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </input>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <video>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </video>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c470,c678</label>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c470,c678</imagelabel>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:24:08 compute-0 nova_compute[259627]: </domain>
Oct 14 09:24:08 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.213 2 INFO nova.virt.libvirt.driver [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully detached device tapff2b9b74-a6 from instance 50c83173-31e3-4f7a-8836-26e52affd0f2 from the persistent domain config.
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.213 2 DEBUG nova.virt.libvirt.driver [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] (1/8): Attempting to detach device tapff2b9b74-a6 with device alias net1 from instance 50c83173-31e3-4f7a-8836-26e52affd0f2 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.214 2 DEBUG nova.virt.libvirt.guest [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] detach device xml: <interface type="ethernet">
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <mac address="fa:16:3e:17:98:a5"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <model type="virtio"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <mtu size="1442"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <target dev="tapff2b9b74-a6"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]: </interface>
Oct 14 09:24:08 compute-0 nova_compute[259627]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 14 09:24:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 121 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 7.8 KiB/s wr, 86 op/s
Oct 14 09:24:08 compute-0 jolly_joliot[382838]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:24:08 compute-0 jolly_joliot[382838]: --> relative data size: 1.0
Oct 14 09:24:08 compute-0 jolly_joliot[382838]: --> All data devices are unavailable
Oct 14 09:24:08 compute-0 kernel: tapff2b9b74-a6 (unregistering): left promiscuous mode
Oct 14 09:24:08 compute-0 NetworkManager[44885]: <info>  [1760433848.3224] device (tapff2b9b74-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:24:08 compute-0 systemd[1]: libpod-9f813c9c449763cbff96bb30f765da29df4572e9852cbecb22a8beeb90423678.scope: Deactivated successfully.
Oct 14 09:24:08 compute-0 podman[382821]: 2025-10-14 09:24:08.327686901 +0000 UTC m=+1.215101375 container died 9f813c9c449763cbff96bb30f765da29df4572e9852cbecb22a8beeb90423678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_joliot, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.340 2 DEBUG nova.virt.libvirt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Received event <DeviceRemovedEvent: 1760433848.340114, 50c83173-31e3-4f7a-8836-26e52affd0f2 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 14 09:24:08 compute-0 ovn_controller[152662]: 2025-10-14T09:24:08Z|01310|binding|INFO|Releasing lport ff2b9b74-a6fc-4774-89d2-9c010f121d65 from this chassis (sb_readonly=0)
Oct 14 09:24:08 compute-0 ovn_controller[152662]: 2025-10-14T09:24:08Z|01311|binding|INFO|Setting lport ff2b9b74-a6fc-4774-89d2-9c010f121d65 down in Southbound
Oct 14 09:24:08 compute-0 ovn_controller[152662]: 2025-10-14T09:24:08Z|01312|binding|INFO|Removing iface tapff2b9b74-a6 ovn-installed in OVS
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.343 2 DEBUG nova.virt.libvirt.driver [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Start waiting for the detach event from libvirt for device tapff2b9b74-a6 with device alias net1 for instance 50c83173-31e3-4f7a-8836-26e52affd0f2 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.344 2 DEBUG nova.virt.libvirt.guest [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:24:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.348 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:98:a5 10.100.0.18', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': '50c83173-31e3-4f7a-8836-26e52affd0f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab61dd9b-dbf7-46d4-89df-319a0a1fc6a6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ff2b9b74-a6fc-4774-89d2-9c010f121d65) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.349 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433833.3482473, 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.349 2 INFO nova.compute.manager [-] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] VM Stopped (Lifecycle Event)
Oct 14 09:24:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.350 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ff2b9b74-a6fc-4774-89d2-9c010f121d65 in datapath 39c21153-4a3d-40fd-91df-ae7d5dae4d8c unbound from our chassis
Oct 14 09:24:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.352 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 39c21153-4a3d-40fd-91df-ae7d5dae4d8c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:24:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.354 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3468914f-bbfe-4ae0-b02b-50c1823dc14c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.355 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c namespace which is not needed anymore
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.356 2 DEBUG nova.virt.libvirt.guest [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface>not found in domain: <domain type='kvm' id='148'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <name>instance-00000075</name>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <uuid>50c83173-31e3-4f7a-8836-26e52affd0f2</uuid>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:name>tempest-TestNetworkBasicOps-server-805281293</nova:name>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:22:38</nova:creationTime>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:port uuid="81977d79-f754-42ba-8b3c-c4eb2f9651d2">
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:port uuid="ff2b9b74-a6fc-4774-89d2-9c010f121d65">
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:24:08 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <system>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <entry name='serial'>50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <entry name='uuid'>50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </system>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <os>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </os>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <features>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </features>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk' index='2'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       </source>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config' index='1'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       </source>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:32:f2:3f'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target dev='tap81977d79-f7'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log' append='off'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       </target>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/0'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log' append='off'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </console>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </input>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </input>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </input>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <video>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </video>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c470,c678</label>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c470,c678</imagelabel>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:24:08 compute-0 nova_compute[259627]: </domain>
Oct 14 09:24:08 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.356 2 INFO nova.virt.libvirt.driver [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully detached device tapff2b9b74-a6 from instance 50c83173-31e3-4f7a-8836-26e52affd0f2 from the live domain config.
Oct 14 09:24:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-53e2e3dffdcf2e4e10be91a85b2df6f91495ee8b1a6dc5bafdb32318db141676-merged.mount: Deactivated successfully.
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.361 2 DEBUG nova.virt.libvirt.vif [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:21:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-805281293',display_name='tempest-TestNetworkBasicOps-server-805281293',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-805281293',id=117,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLemNDLm8oqFAwx6pW1SyO2V0WIcpPak25N5nC9IiUcuBV+L+dio1UtnHPJvvKiOY7HNqaGgptTx3l7dHlvUaMKtJM7efD9rBJOYA3Gu+LK7uF+l/NOZF2PoS0o9uk9l4A==',key_name='tempest-TestNetworkBasicOps-1122823922',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ntom2zcq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:09Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=50c83173-31e3-4f7a-8836-26e52affd0f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.362 2 DEBUG nova.network.os_vif_util [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.363 2 DEBUG nova.network.os_vif_util [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.368 2 DEBUG os_vif [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.371 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff2b9b74-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.374 2 DEBUG nova.compute.manager [None req-3cbc76b5-7b46-4181-83fd-de04683ca66c - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.385 2 INFO os_vif [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6')
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.386 2 DEBUG nova.virt.libvirt.guest [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:name>tempest-TestNetworkBasicOps-server-805281293</nova:name>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:24:08</nova:creationTime>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     <nova:port uuid="81977d79-f754-42ba-8b3c-c4eb2f9651d2">
Oct 14 09:24:08 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:24:08 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:24:08 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:24:08 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:24:08 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 09:24:08 compute-0 podman[382821]: 2025-10-14 09:24:08.398266285 +0000 UTC m=+1.285680759 container remove 9f813c9c449763cbff96bb30f765da29df4572e9852cbecb22a8beeb90423678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 09:24:08 compute-0 systemd[1]: libpod-conmon-9f813c9c449763cbff96bb30f765da29df4572e9852cbecb22a8beeb90423678.scope: Deactivated successfully.
Oct 14 09:24:08 compute-0 sudo[382696]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:08 compute-0 sudo[382899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:24:08 compute-0 sudo[382899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:08 compute-0 sudo[382899]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:08 compute-0 neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c[379287]: [NOTICE]   (379291) : haproxy version is 2.8.14-c23fe91
Oct 14 09:24:08 compute-0 neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c[379287]: [NOTICE]   (379291) : path to executable is /usr/sbin/haproxy
Oct 14 09:24:08 compute-0 neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c[379287]: [WARNING]  (379291) : Exiting Master process...
Oct 14 09:24:08 compute-0 neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c[379287]: [WARNING]  (379291) : Exiting Master process...
Oct 14 09:24:08 compute-0 neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c[379287]: [ALERT]    (379291) : Current worker (379293) exited with code 143 (Terminated)
Oct 14 09:24:08 compute-0 neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c[379287]: [WARNING]  (379291) : All workers exited. Exiting... (0)
Oct 14 09:24:08 compute-0 systemd[1]: libpod-812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9.scope: Deactivated successfully.
Oct 14 09:24:08 compute-0 podman[382925]: 2025-10-14 09:24:08.555638204 +0000 UTC m=+0.049897334 container died 812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:24:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9-userdata-shm.mount: Deactivated successfully.
Oct 14 09:24:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-36c287ab2c96f243d6a33b107299142f79d3dafdd9e98b859faafdb8e51bef5d-merged.mount: Deactivated successfully.
Oct 14 09:24:08 compute-0 podman[382925]: 2025-10-14 09:24:08.601131318 +0000 UTC m=+0.095390438 container cleanup 812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:24:08 compute-0 sudo[382938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:24:08 compute-0 sudo[382938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:08 compute-0 sudo[382938]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:08 compute-0 systemd[1]: libpod-conmon-812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9.scope: Deactivated successfully.
Oct 14 09:24:08 compute-0 podman[382978]: 2025-10-14 09:24:08.664840932 +0000 UTC m=+0.041737712 container remove 812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 09:24:08 compute-0 sudo[382979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:24:08 compute-0 sudo[382979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:08 compute-0 sudo[382979]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.671 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c20974d6-c205-45ff-9f71-d9eea27f8e99]: (4, ('Tue Oct 14 09:24:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c (812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9)\n812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9\nTue Oct 14 09:24:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c (812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9)\n812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.672 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[933bf4aa-16cb-4c5b-bd3f-b654834140c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.673 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39c21153-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:24:08 compute-0 kernel: tap39c21153-40: left promiscuous mode
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:08 compute-0 nova_compute[259627]: 2025-10-14 09:24:08.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.702 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d39bf5a4-b4fe-49d0-8888-3b1d3c7bec24]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.726 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4539ac4b-edc0-47dc-8a5b-8df63650206e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.727 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f2d433-ab77-4628-9b99-da0778d7431b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:08 compute-0 sudo[383015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:24:08 compute-0 sudo[383015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.743 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5eccdc32-e475-4a80-977b-86457cbc693b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 753718, 'reachable_time': 33156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383040, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.746 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:24:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.746 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[484fa5fb-3603-4b36-ae34-9942eb770173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d39c21153\x2d4a3d\x2d40fd\x2d91df\x2dae7d5dae4d8c.mount: Deactivated successfully.
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.134 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.162 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Triggering sync for uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.162 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:09 compute-0 ceph-mon[74249]: pgmap v2130: 305 pgs: 305 active+clean; 121 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 7.8 KiB/s wr, 86 op/s
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.163 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.196 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:09 compute-0 podman[383084]: 2025-10-14 09:24:09.199138545 +0000 UTC m=+0.079929416 container create 46b60fb3ee968f5265252c25309e7a104df2908bda08e5ec88cc163d18aa6e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:24:09 compute-0 systemd[1]: Started libpod-conmon-46b60fb3ee968f5265252c25309e7a104df2908bda08e5ec88cc163d18aa6e37.scope.
Oct 14 09:24:09 compute-0 podman[383084]: 2025-10-14 09:24:09.16820215 +0000 UTC m=+0.048993121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:24:09 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:24:09 compute-0 podman[383084]: 2025-10-14 09:24:09.306805945 +0000 UTC m=+0.187596836 container init 46b60fb3ee968f5265252c25309e7a104df2908bda08e5ec88cc163d18aa6e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:24:09 compute-0 podman[383084]: 2025-10-14 09:24:09.315161222 +0000 UTC m=+0.195952103 container start 46b60fb3ee968f5265252c25309e7a104df2908bda08e5ec88cc163d18aa6e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct 14 09:24:09 compute-0 podman[383084]: 2025-10-14 09:24:09.318995906 +0000 UTC m=+0.199786787 container attach 46b60fb3ee968f5265252c25309e7a104df2908bda08e5ec88cc163d18aa6e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:24:09 compute-0 flamboyant_goldberg[383100]: 167 167
Oct 14 09:24:09 compute-0 systemd[1]: libpod-46b60fb3ee968f5265252c25309e7a104df2908bda08e5ec88cc163d18aa6e37.scope: Deactivated successfully.
Oct 14 09:24:09 compute-0 podman[383084]: 2025-10-14 09:24:09.323847736 +0000 UTC m=+0.204638647 container died 46b60fb3ee968f5265252c25309e7a104df2908bda08e5ec88cc163d18aa6e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 09:24:09 compute-0 podman[383084]: 2025-10-14 09:24:09.365636909 +0000 UTC m=+0.246427820 container remove 46b60fb3ee968f5265252c25309e7a104df2908bda08e5ec88cc163d18aa6e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 09:24:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6379b5f4e870b5f94699ca12df069e8ab3c4394ab1de72acfe8f66a7cc2e7ab-merged.mount: Deactivated successfully.
Oct 14 09:24:09 compute-0 systemd[1]: libpod-conmon-46b60fb3ee968f5265252c25309e7a104df2908bda08e5ec88cc163d18aa6e37.scope: Deactivated successfully.
Oct 14 09:24:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:09.460 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.462 2 DEBUG oslo_concurrency.lockutils [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.463 2 DEBUG oslo_concurrency.lockutils [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.463 2 DEBUG nova.network.neutron [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:09 compute-0 podman[383124]: 2025-10-14 09:24:09.631655972 +0000 UTC m=+0.061195453 container create a7c6e6fc15fddf842bc256502aa85868c820e76a148f3ce1d62bbe4261314d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wu, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:24:09 compute-0 systemd[1]: Started libpod-conmon-a7c6e6fc15fddf842bc256502aa85868c820e76a148f3ce1d62bbe4261314d5c.scope.
Oct 14 09:24:09 compute-0 podman[383124]: 2025-10-14 09:24:09.613574555 +0000 UTC m=+0.043114036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:24:09 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf1e3424c38ad456ad941553216df932737b6dbcf59456e24c41aeefe55aec6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf1e3424c38ad456ad941553216df932737b6dbcf59456e24c41aeefe55aec6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf1e3424c38ad456ad941553216df932737b6dbcf59456e24c41aeefe55aec6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf1e3424c38ad456ad941553216df932737b6dbcf59456e24c41aeefe55aec6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:24:09 compute-0 podman[383124]: 2025-10-14 09:24:09.737865296 +0000 UTC m=+0.167404877 container init a7c6e6fc15fddf842bc256502aa85868c820e76a148f3ce1d62bbe4261314d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 09:24:09 compute-0 podman[383124]: 2025-10-14 09:24:09.746203883 +0000 UTC m=+0.175743364 container start a7c6e6fc15fddf842bc256502aa85868c820e76a148f3ce1d62bbe4261314d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:24:09 compute-0 podman[383124]: 2025-10-14 09:24:09.750681863 +0000 UTC m=+0.180221384 container attach a7c6e6fc15fddf842bc256502aa85868c820e76a148f3ce1d62bbe4261314d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.873 2 DEBUG nova.compute.manager [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-unplugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.874 2 DEBUG oslo_concurrency.lockutils [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.875 2 DEBUG oslo_concurrency.lockutils [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.875 2 DEBUG oslo_concurrency.lockutils [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.875 2 DEBUG nova.compute.manager [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] No waiting events found dispatching network-vif-unplugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.875 2 WARNING nova.compute.manager [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received unexpected event network-vif-unplugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 for instance with vm_state active and task_state None.
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.876 2 DEBUG nova.compute.manager [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-plugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.876 2 DEBUG oslo_concurrency.lockutils [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.876 2 DEBUG oslo_concurrency.lockutils [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.877 2 DEBUG oslo_concurrency.lockutils [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.877 2 DEBUG nova.compute.manager [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] No waiting events found dispatching network-vif-plugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.877 2 WARNING nova.compute.manager [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received unexpected event network-vif-plugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 for instance with vm_state active and task_state None.
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.877 2 DEBUG nova.compute.manager [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-deleted-ff2b9b74-a6fc-4774-89d2-9c010f121d65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.878 2 INFO nova.compute.manager [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Neutron deleted interface ff2b9b74-a6fc-4774-89d2-9c010f121d65; detaching it from the instance and deleting it from the info cache
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.878 2 DEBUG nova.network.neutron [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.902 2 DEBUG nova.objects.instance [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lazy-loading 'system_metadata' on Instance uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.926 2 DEBUG nova.objects.instance [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lazy-loading 'flavor' on Instance uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.942 2 DEBUG nova.virt.libvirt.vif [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:21:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-805281293',display_name='tempest-TestNetworkBasicOps-server-805281293',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-805281293',id=117,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLemNDLm8oqFAwx6pW1SyO2V0WIcpPak25N5nC9IiUcuBV+L+dio1UtnHPJvvKiOY7HNqaGgptTx3l7dHlvUaMKtJM7efD9rBJOYA3Gu+LK7uF+l/NOZF2PoS0o9uk9l4A==',key_name='tempest-TestNetworkBasicOps-1122823922',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ntom2zcq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:09Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=50c83173-31e3-4f7a-8836-26e52affd0f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.943 2 DEBUG nova.network.os_vif_util [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converting VIF {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.943 2 DEBUG nova.network.os_vif_util [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.952 2 DEBUG nova.virt.libvirt.guest [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.956 2 DEBUG nova.virt.libvirt.guest [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface>not found in domain: <domain type='kvm' id='148'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <name>instance-00000075</name>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <uuid>50c83173-31e3-4f7a-8836-26e52affd0f2</uuid>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:name>tempest-TestNetworkBasicOps-server-805281293</nova:name>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:24:08</nova:creationTime>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:port uuid="81977d79-f754-42ba-8b3c-c4eb2f9651d2">
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:24:09 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <system>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <entry name='serial'>50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <entry name='uuid'>50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </system>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <os>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </os>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <features>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </features>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk' index='2'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       </source>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config' index='1'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       </source>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:32:f2:3f'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target dev='tap81977d79-f7'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log' append='off'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       </target>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/0'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log' append='off'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </console>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </input>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </input>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </input>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <video>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </video>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c470,c678</label>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c470,c678</imagelabel>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:24:09 compute-0 nova_compute[259627]: </domain>
Oct 14 09:24:09 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.957 2 DEBUG nova.virt.libvirt.guest [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.961 2 DEBUG nova.virt.libvirt.guest [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface>not found in domain: <domain type='kvm' id='148'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <name>instance-00000075</name>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <uuid>50c83173-31e3-4f7a-8836-26e52affd0f2</uuid>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:name>tempest-TestNetworkBasicOps-server-805281293</nova:name>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:24:08</nova:creationTime>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:port uuid="81977d79-f754-42ba-8b3c-c4eb2f9651d2">
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:24:09 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <memory unit='KiB'>131072</memory>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <vcpu placement='static'>1</vcpu>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <resource>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <partition>/machine</partition>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </resource>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <sysinfo type='smbios'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <system>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <entry name='manufacturer'>RDO</entry>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <entry name='product'>OpenStack Compute</entry>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <entry name='serial'>50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <entry name='uuid'>50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <entry name='family'>Virtual Machine</entry>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </system>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <os>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <boot dev='hd'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <smbios mode='sysinfo'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </os>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <features>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <vmcoreinfo state='on'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </features>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <cpu mode='custom' match='exact' check='full'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <vendor>AMD</vendor>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='x2apic'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc-deadline'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='hypervisor'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='tsc_adjust'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='spec-ctrl'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='stibp'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='arch-capabilities'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='ssbd'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='cmp_legacy'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='overflow-recov'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='succor'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='ibrs'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='amd-ssbd'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='virt-ssbd'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='lbrv'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='tsc-scale'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='vmcb-clean'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='flushbyasid'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='pause-filter'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='pfthreshold'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='rdctl-no'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='mds-no'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='pschange-mc-no'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='gds-no'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='rfds-no'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='xsaves'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='svm'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='require' name='topoext'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='npt'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <feature policy='disable' name='nrip-save'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <clock offset='utc'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <timer name='pit' tickpolicy='delay'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <timer name='hpet' present='no'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <on_poweroff>destroy</on_poweroff>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <on_reboot>restart</on_reboot>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <on_crash>destroy</on_crash>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <disk type='network' device='disk'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk' index='2'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       </source>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target dev='vda' bus='virtio'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='virtio-disk0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <disk type='network' device='cdrom'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <driver name='qemu' type='raw' cache='none'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <auth username='openstack'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:         <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <source protocol='rbd' name='vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config' index='1'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:         <host name='192.168.122.100' port='6789'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       </source>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target dev='sda' bus='sata'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <readonly/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='sata0-0-0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='0' model='pcie-root'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pcie.0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='1' port='0x10'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.1'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='2' port='0x11'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.2'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='3' port='0x12'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.3'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='4' port='0x13'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.4'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='5' port='0x14'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.5'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='6' port='0x15'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.6'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='7' port='0x16'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.7'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='8' port='0x17'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.8'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='9' port='0x18'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.9'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='10' port='0x19'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.10'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='11' port='0x1a'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.11'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='12' port='0x1b'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.12'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='13' port='0x1c'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.13'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='14' port='0x1d'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.14'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='15' port='0x1e'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.15'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='16' port='0x1f'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.16'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='17' port='0x20'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.17'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='18' port='0x21'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.18'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='19' port='0x22'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.19'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='20' port='0x23'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.20'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='21' port='0x24'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.21'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='22' port='0x25'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.22'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='23' port='0x26'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.23'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='24' port='0x27'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.24'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-root-port'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target chassis='25' port='0x28'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.25'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model name='pcie-pci-bridge'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='pci.26'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='usb'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <controller type='sata' index='0'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='ide'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </controller>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <interface type='ethernet'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <mac address='fa:16:3e:32:f2:3f'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target dev='tap81977d79-f7'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model type='virtio'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <driver name='vhost' rx_queue_size='512'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <mtu size='1442'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='net0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <serial type='pty'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log' append='off'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target type='isa-serial' port='0'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:         <model name='isa-serial'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       </target>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <console type='pty' tty='/dev/pts/0'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <source path='/dev/pts/0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <log file='/var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log' append='off'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <target type='serial' port='0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='serial0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </console>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <input type='tablet' bus='usb'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='input0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='usb' bus='0' port='1'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </input>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <input type='mouse' bus='ps2'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='input1'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </input>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <input type='keyboard' bus='ps2'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='input2'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </input>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <listen type='address' address='::0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </graphics>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <audio id='1' type='none'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <video>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <model type='virtio' heads='1' primary='yes'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='video0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </video>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <watchdog model='itco' action='reset'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='watchdog0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </watchdog>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <memballoon model='virtio'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <stats period='10'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='balloon0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <rng model='virtio'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <backend model='random'>/dev/urandom</backend>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <alias name='rng0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <label>system_u:system_r:svirt_t:s0:c470,c678</label>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c470,c678</imagelabel>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <label>+107:+107</label>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <imagelabel>+107:+107</imagelabel>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </seclabel>
Oct 14 09:24:09 compute-0 nova_compute[259627]: </domain>
Oct 14 09:24:09 compute-0 nova_compute[259627]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.961 2 WARNING nova.virt.libvirt.driver [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Detaching interface fa:16:3e:17:98:a5 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapff2b9b74-a6' not found.
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.962 2 DEBUG nova.virt.libvirt.vif [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:21:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-805281293',display_name='tempest-TestNetworkBasicOps-server-805281293',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-805281293',id=117,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLemNDLm8oqFAwx6pW1SyO2V0WIcpPak25N5nC9IiUcuBV+L+dio1UtnHPJvvKiOY7HNqaGgptTx3l7dHlvUaMKtJM7efD9rBJOYA3Gu+LK7uF+l/NOZF2PoS0o9uk9l4A==',key_name='tempest-TestNetworkBasicOps-1122823922',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ntom2zcq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:09Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=50c83173-31e3-4f7a-8836-26e52affd0f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.962 2 DEBUG nova.network.os_vif_util [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converting VIF {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.962 2 DEBUG nova.network.os_vif_util [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.962 2 DEBUG os_vif [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.965 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff2b9b74-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.965 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.966 2 INFO os_vif [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6')
Oct 14 09:24:09 compute-0 nova_compute[259627]: 2025-10-14 09:24:09.967 2 DEBUG nova.virt.libvirt.guest [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:name>tempest-TestNetworkBasicOps-server-805281293</nova:name>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:creationTime>2025-10-14 09:24:09</nova:creationTime>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:flavor name="m1.nano">
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:memory>128</nova:memory>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:disk>1</nova:disk>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:swap>0</nova:swap>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:vcpus>1</nova:vcpus>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </nova:flavor>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:owner>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </nova:owner>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   <nova:ports>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     <nova:port uuid="81977d79-f754-42ba-8b3c-c4eb2f9651d2">
Oct 14 09:24:09 compute-0 nova_compute[259627]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:24:09 compute-0 nova_compute[259627]:     </nova:port>
Oct 14 09:24:09 compute-0 nova_compute[259627]:   </nova:ports>
Oct 14 09:24:09 compute-0 nova_compute[259627]: </nova:instance>
Oct 14 09:24:09 compute-0 nova_compute[259627]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 09:24:10 compute-0 ovn_controller[152662]: 2025-10-14T09:24:10Z|01313|binding|INFO|Releasing lport 16a7cbd0-b25e-4461-8725-c92979b01f53 from this chassis (sb_readonly=0)
Oct 14 09:24:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 121 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 8.7 KiB/s wr, 88 op/s
Oct 14 09:24:10 compute-0 nova_compute[259627]: 2025-10-14 09:24:10.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:10 compute-0 keen_wu[383140]: {
Oct 14 09:24:10 compute-0 keen_wu[383140]:     "0": [
Oct 14 09:24:10 compute-0 keen_wu[383140]:         {
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "devices": [
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "/dev/loop3"
Oct 14 09:24:10 compute-0 keen_wu[383140]:             ],
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "lv_name": "ceph_lv0",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "lv_size": "21470642176",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "name": "ceph_lv0",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "tags": {
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.cluster_name": "ceph",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.crush_device_class": "",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.encrypted": "0",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.osd_id": "0",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.type": "block",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.vdo": "0"
Oct 14 09:24:10 compute-0 keen_wu[383140]:             },
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "type": "block",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "vg_name": "ceph_vg0"
Oct 14 09:24:10 compute-0 keen_wu[383140]:         }
Oct 14 09:24:10 compute-0 keen_wu[383140]:     ],
Oct 14 09:24:10 compute-0 keen_wu[383140]:     "1": [
Oct 14 09:24:10 compute-0 keen_wu[383140]:         {
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "devices": [
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "/dev/loop4"
Oct 14 09:24:10 compute-0 keen_wu[383140]:             ],
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "lv_name": "ceph_lv1",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "lv_size": "21470642176",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "name": "ceph_lv1",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "tags": {
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.cluster_name": "ceph",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.crush_device_class": "",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.encrypted": "0",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.osd_id": "1",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.type": "block",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.vdo": "0"
Oct 14 09:24:10 compute-0 keen_wu[383140]:             },
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "type": "block",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "vg_name": "ceph_vg1"
Oct 14 09:24:10 compute-0 keen_wu[383140]:         }
Oct 14 09:24:10 compute-0 keen_wu[383140]:     ],
Oct 14 09:24:10 compute-0 keen_wu[383140]:     "2": [
Oct 14 09:24:10 compute-0 keen_wu[383140]:         {
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "devices": [
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "/dev/loop5"
Oct 14 09:24:10 compute-0 keen_wu[383140]:             ],
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "lv_name": "ceph_lv2",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "lv_size": "21470642176",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "name": "ceph_lv2",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "tags": {
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.cluster_name": "ceph",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.crush_device_class": "",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.encrypted": "0",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.osd_id": "2",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.type": "block",
Oct 14 09:24:10 compute-0 keen_wu[383140]:                 "ceph.vdo": "0"
Oct 14 09:24:10 compute-0 keen_wu[383140]:             },
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "type": "block",
Oct 14 09:24:10 compute-0 keen_wu[383140]:             "vg_name": "ceph_vg2"
Oct 14 09:24:10 compute-0 keen_wu[383140]:         }
Oct 14 09:24:10 compute-0 keen_wu[383140]:     ]
Oct 14 09:24:10 compute-0 keen_wu[383140]: }
Oct 14 09:24:10 compute-0 ovn_controller[152662]: 2025-10-14T09:24:10Z|01314|binding|INFO|Releasing lport 16a7cbd0-b25e-4461-8725-c92979b01f53 from this chassis (sb_readonly=0)
Oct 14 09:24:10 compute-0 systemd[1]: libpod-a7c6e6fc15fddf842bc256502aa85868c820e76a148f3ce1d62bbe4261314d5c.scope: Deactivated successfully.
Oct 14 09:24:10 compute-0 podman[383124]: 2025-10-14 09:24:10.5239949 +0000 UTC m=+0.953534381 container died a7c6e6fc15fddf842bc256502aa85868c820e76a148f3ce1d62bbe4261314d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:24:10 compute-0 nova_compute[259627]: 2025-10-14 09:24:10.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcf1e3424c38ad456ad941553216df932737b6dbcf59456e24c41aeefe55aec6-merged.mount: Deactivated successfully.
Oct 14 09:24:10 compute-0 podman[383124]: 2025-10-14 09:24:10.578862446 +0000 UTC m=+1.008401927 container remove a7c6e6fc15fddf842bc256502aa85868c820e76a148f3ce1d62bbe4261314d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 09:24:10 compute-0 systemd[1]: libpod-conmon-a7c6e6fc15fddf842bc256502aa85868c820e76a148f3ce1d62bbe4261314d5c.scope: Deactivated successfully.
Oct 14 09:24:10 compute-0 sudo[383015]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:10 compute-0 sudo[383163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:24:10 compute-0 sudo[383163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:10 compute-0 sudo[383163]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:10 compute-0 nova_compute[259627]: 2025-10-14 09:24:10.770 2 DEBUG nova.compute.manager [req-60772ba5-7603-424b-acce-225fa9720628 req-ab24f010-b429-4306-ab5d-90f75a03b945 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-changed-81977d79-f754-42ba-8b3c-c4eb2f9651d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:24:10 compute-0 nova_compute[259627]: 2025-10-14 09:24:10.771 2 DEBUG nova.compute.manager [req-60772ba5-7603-424b-acce-225fa9720628 req-ab24f010-b429-4306-ab5d-90f75a03b945 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing instance network info cache due to event network-changed-81977d79-f754-42ba-8b3c-c4eb2f9651d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:24:10 compute-0 nova_compute[259627]: 2025-10-14 09:24:10.771 2 DEBUG oslo_concurrency.lockutils [req-60772ba5-7603-424b-acce-225fa9720628 req-ab24f010-b429-4306-ab5d-90f75a03b945 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:24:10 compute-0 sudo[383188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:24:10 compute-0 sudo[383188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:10 compute-0 sudo[383188]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:10 compute-0 nova_compute[259627]: 2025-10-14 09:24:10.825 2 DEBUG oslo_concurrency.lockutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:10 compute-0 nova_compute[259627]: 2025-10-14 09:24:10.825 2 DEBUG oslo_concurrency.lockutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:10 compute-0 nova_compute[259627]: 2025-10-14 09:24:10.825 2 DEBUG oslo_concurrency.lockutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:10 compute-0 nova_compute[259627]: 2025-10-14 09:24:10.826 2 DEBUG oslo_concurrency.lockutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:10 compute-0 nova_compute[259627]: 2025-10-14 09:24:10.826 2 DEBUG oslo_concurrency.lockutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:10 compute-0 nova_compute[259627]: 2025-10-14 09:24:10.827 2 INFO nova.compute.manager [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Terminating instance
Oct 14 09:24:10 compute-0 nova_compute[259627]: 2025-10-14 09:24:10.827 2 DEBUG nova.compute.manager [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:24:10 compute-0 kernel: tap81977d79-f7 (unregistering): left promiscuous mode
Oct 14 09:24:10 compute-0 NetworkManager[44885]: <info>  [1760433850.8777] device (tap81977d79-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:24:10 compute-0 nova_compute[259627]: 2025-10-14 09:24:10.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:10 compute-0 ovn_controller[152662]: 2025-10-14T09:24:10Z|01315|binding|INFO|Releasing lport 81977d79-f754-42ba-8b3c-c4eb2f9651d2 from this chassis (sb_readonly=0)
Oct 14 09:24:10 compute-0 ovn_controller[152662]: 2025-10-14T09:24:10Z|01316|binding|INFO|Setting lport 81977d79-f754-42ba-8b3c-c4eb2f9651d2 down in Southbound
Oct 14 09:24:10 compute-0 ovn_controller[152662]: 2025-10-14T09:24:10Z|01317|binding|INFO|Removing iface tap81977d79-f7 ovn-installed in OVS
Oct 14 09:24:10 compute-0 nova_compute[259627]: 2025-10-14 09:24:10.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:10 compute-0 sudo[383213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:24:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:10.903 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:f2:3f 10.100.0.10'], port_security=['fa:16:3e:32:f2:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '50c83173-31e3-4f7a-8836-26e52affd0f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99e78054-f9f4-417c-a942-d4f9dd534ef7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1981aa60-63c9-49df-94e5-0874b5ab31e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=547c8605-a609-4b00-82f5-2d938c7ab8e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=81977d79-f754-42ba-8b3c-c4eb2f9651d2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:24:10 compute-0 sudo[383213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:10.905 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 81977d79-f754-42ba-8b3c-c4eb2f9651d2 in datapath 99e78054-f9f4-417c-a942-d4f9dd534ef7 unbound from our chassis
Oct 14 09:24:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:10.908 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99e78054-f9f4-417c-a942-d4f9dd534ef7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:24:10 compute-0 sudo[383213]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:10.909 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a49dd713-1531-4879-93ff-52471731bd3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:10.910 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7 namespace which is not needed anymore
Oct 14 09:24:10 compute-0 nova_compute[259627]: 2025-10-14 09:24:10.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:10 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000075.scope: Deactivated successfully.
Oct 14 09:24:10 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000075.scope: Consumed 19.395s CPU time.
Oct 14 09:24:10 compute-0 systemd-machined[214636]: Machine qemu-148-instance-00000075 terminated.
Oct 14 09:24:11 compute-0 sudo[383243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:24:11 compute-0 sudo[383243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:11 compute-0 NetworkManager[44885]: <info>  [1760433851.0529] manager: (tap81977d79-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/531)
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.069 2 INFO nova.virt.libvirt.driver [-] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Instance destroyed successfully.
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.070 2 DEBUG nova.objects.instance [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.090 2 DEBUG nova.virt.libvirt.vif [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:21:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-805281293',display_name='tempest-TestNetworkBasicOps-server-805281293',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-805281293',id=117,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLemNDLm8oqFAwx6pW1SyO2V0WIcpPak25N5nC9IiUcuBV+L+dio1UtnHPJvvKiOY7HNqaGgptTx3l7dHlvUaMKtJM7efD9rBJOYA3Gu+LK7uF+l/NOZF2PoS0o9uk9l4A==',key_name='tempest-TestNetworkBasicOps-1122823922',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ntom2zcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:09Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=50c83173-31e3-4f7a-8836-26e52affd0f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.091 2 DEBUG nova.network.os_vif_util [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.091 2 DEBUG nova.network.os_vif_util [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:32:f2:3f,bridge_name='br-int',has_traffic_filtering=True,id=81977d79-f754-42ba-8b3c-c4eb2f9651d2,network=Network(99e78054-f9f4-417c-a942-d4f9dd534ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81977d79-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.092 2 DEBUG os_vif [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:f2:3f,bridge_name='br-int',has_traffic_filtering=True,id=81977d79-f754-42ba-8b3c-c4eb2f9651d2,network=Network(99e78054-f9f4-417c-a942-d4f9dd534ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81977d79-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.094 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81977d79-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:11 compute-0 neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7[378024]: [NOTICE]   (378028) : haproxy version is 2.8.14-c23fe91
Oct 14 09:24:11 compute-0 neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7[378024]: [NOTICE]   (378028) : path to executable is /usr/sbin/haproxy
Oct 14 09:24:11 compute-0 neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7[378024]: [WARNING]  (378028) : Exiting Master process...
Oct 14 09:24:11 compute-0 neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7[378024]: [WARNING]  (378028) : Exiting Master process...
Oct 14 09:24:11 compute-0 neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7[378024]: [ALERT]    (378028) : Current worker (378030) exited with code 143 (Terminated)
Oct 14 09:24:11 compute-0 neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7[378024]: [WARNING]  (378028) : All workers exited. Exiting... (0)
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.101 2 INFO os_vif [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:f2:3f,bridge_name='br-int',has_traffic_filtering=True,id=81977d79-f754-42ba-8b3c-c4eb2f9651d2,network=Network(99e78054-f9f4-417c-a942-d4f9dd534ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81977d79-f7')
Oct 14 09:24:11 compute-0 systemd[1]: libpod-7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa.scope: Deactivated successfully.
Oct 14 09:24:11 compute-0 podman[383285]: 2025-10-14 09:24:11.110126914 +0000 UTC m=+0.055751809 container died 7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 09:24:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa-userdata-shm.mount: Deactivated successfully.
Oct 14 09:24:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-46afafd803e5df435ff3608e3076ec93ae7e5d3420b992909b0759f480792e61-merged.mount: Deactivated successfully.
Oct 14 09:24:11 compute-0 podman[383285]: 2025-10-14 09:24:11.151539737 +0000 UTC m=+0.097164632 container cleanup 7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:24:11 compute-0 systemd[1]: libpod-conmon-7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa.scope: Deactivated successfully.
Oct 14 09:24:11 compute-0 podman[383344]: 2025-10-14 09:24:11.226173421 +0000 UTC m=+0.049961346 container remove 7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 09:24:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:11.231 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1e76e48b-0aab-439a-a79a-e933c5357538]: (4, ('Tue Oct 14 09:24:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7 (7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa)\n7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa\nTue Oct 14 09:24:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7 (7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa)\n7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:11.233 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[98007bca-172c-4197-87c3-9e41e5e64450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:11.234 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99e78054-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:11 compute-0 kernel: tap99e78054-f0: left promiscuous mode
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:11.239 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[94f9d075-b20f-4ae6-be69-1099061e47b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:11.279 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e5190f94-c574-4ed0-8aa1-5961af7cccc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:11.280 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7fcc48c4-d226-4fde-9744-3adba414ca08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:11.298 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a8118069-5f3b-474e-9054-b2112126d8ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750668, 'reachable_time': 38249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383382, 'error': None, 'target': 'ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:11.301 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:24:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:11.301 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[0e36258e-6143-4e6f-bd97-0d9c014eb3f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d99e78054\x2df9f4\x2d417c\x2da942\x2dd4f9dd534ef7.mount: Deactivated successfully.
Oct 14 09:24:11 compute-0 ceph-mon[74249]: pgmap v2131: 305 pgs: 305 active+clean; 121 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 8.7 KiB/s wr, 88 op/s
Oct 14 09:24:11 compute-0 podman[383395]: 2025-10-14 09:24:11.427349842 +0000 UTC m=+0.056891677 container create d23f487d8ebc3f4c75c4bd554b744a4240896d435511777ad5a8ea6e0708ae51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poincare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 09:24:11 compute-0 systemd[1]: Started libpod-conmon-d23f487d8ebc3f4c75c4bd554b744a4240896d435511777ad5a8ea6e0708ae51.scope.
Oct 14 09:24:11 compute-0 podman[383395]: 2025-10-14 09:24:11.39730305 +0000 UTC m=+0.026844955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:24:11 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.517 2 INFO nova.virt.libvirt.driver [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Deleting instance files /var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2_del
Oct 14 09:24:11 compute-0 podman[383395]: 2025-10-14 09:24:11.520659688 +0000 UTC m=+0.150201543 container init d23f487d8ebc3f4c75c4bd554b744a4240896d435511777ad5a8ea6e0708ae51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.519 2 INFO nova.virt.libvirt.driver [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Deletion of /var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2_del complete
Oct 14 09:24:11 compute-0 podman[383395]: 2025-10-14 09:24:11.527951998 +0000 UTC m=+0.157493833 container start d23f487d8ebc3f4c75c4bd554b744a4240896d435511777ad5a8ea6e0708ae51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poincare, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 09:24:11 compute-0 podman[383395]: 2025-10-14 09:24:11.531042684 +0000 UTC m=+0.160584569 container attach d23f487d8ebc3f4c75c4bd554b744a4240896d435511777ad5a8ea6e0708ae51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poincare, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 09:24:11 compute-0 quirky_poincare[383412]: 167 167
Oct 14 09:24:11 compute-0 systemd[1]: libpod-d23f487d8ebc3f4c75c4bd554b744a4240896d435511777ad5a8ea6e0708ae51.scope: Deactivated successfully.
Oct 14 09:24:11 compute-0 podman[383395]: 2025-10-14 09:24:11.533966316 +0000 UTC m=+0.163508171 container died d23f487d8ebc3f4c75c4bd554b744a4240896d435511777ad5a8ea6e0708ae51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 14 09:24:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b54ef0006ee395095bd4c340785ee3a3c517ef2decb631b94aa65e07b050448-merged.mount: Deactivated successfully.
Oct 14 09:24:11 compute-0 podman[383395]: 2025-10-14 09:24:11.573467113 +0000 UTC m=+0.203008988 container remove d23f487d8ebc3f4c75c4bd554b744a4240896d435511777ad5a8ea6e0708ae51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poincare, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.574 2 INFO nova.compute.manager [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.575 2 DEBUG oslo.service.loopingcall [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.576 2 DEBUG nova.compute.manager [-] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.576 2 DEBUG nova.network.neutron [-] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:24:11 compute-0 systemd[1]: libpod-conmon-d23f487d8ebc3f4c75c4bd554b744a4240896d435511777ad5a8ea6e0708ae51.scope: Deactivated successfully.
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.592 2 INFO nova.network.neutron [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Port ff2b9b74-a6fc-4774-89d2-9c010f121d65 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.593 2 DEBUG nova.network.neutron [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.608 2 DEBUG oslo_concurrency.lockutils [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.610 2 DEBUG oslo_concurrency.lockutils [req-60772ba5-7603-424b-acce-225fa9720628 req-ab24f010-b429-4306-ab5d-90f75a03b945 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.610 2 DEBUG nova.network.neutron [req-60772ba5-7603-424b-acce-225fa9720628 req-ab24f010-b429-4306-ab5d-90f75a03b945 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing network info cache for port 81977d79-f754-42ba-8b3c-c4eb2f9651d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.632 2 DEBUG oslo_concurrency.lockutils [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "interface-50c83173-31e3-4f7a-8836-26e52affd0f2-ff2b9b74-a6fc-4774-89d2-9c010f121d65" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:11 compute-0 podman[383436]: 2025-10-14 09:24:11.767679811 +0000 UTC m=+0.054932478 container create b0d772973fb82a6f86689b22c2e00796cbe29c8baee9b9d583ae03518c5a6bbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gates, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 09:24:11 compute-0 systemd[1]: Started libpod-conmon-b0d772973fb82a6f86689b22c2e00796cbe29c8baee9b9d583ae03518c5a6bbf.scope.
Oct 14 09:24:11 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211a0c18b29b6cdf53ecbe486d8048d500e825ed02502c99a9acccf8e8390e1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:24:11 compute-0 podman[383436]: 2025-10-14 09:24:11.741948436 +0000 UTC m=+0.029201183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211a0c18b29b6cdf53ecbe486d8048d500e825ed02502c99a9acccf8e8390e1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211a0c18b29b6cdf53ecbe486d8048d500e825ed02502c99a9acccf8e8390e1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211a0c18b29b6cdf53ecbe486d8048d500e825ed02502c99a9acccf8e8390e1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:24:11 compute-0 podman[383436]: 2025-10-14 09:24:11.854420005 +0000 UTC m=+0.141672712 container init b0d772973fb82a6f86689b22c2e00796cbe29c8baee9b9d583ae03518c5a6bbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 09:24:11 compute-0 podman[383436]: 2025-10-14 09:24:11.865112159 +0000 UTC m=+0.152364856 container start b0d772973fb82a6f86689b22c2e00796cbe29c8baee9b9d583ae03518c5a6bbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 09:24:11 compute-0 podman[383436]: 2025-10-14 09:24:11.869223621 +0000 UTC m=+0.156476318 container attach b0d772973fb82a6f86689b22c2e00796cbe29c8baee9b9d583ae03518c5a6bbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gates, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.982 2 DEBUG nova.compute.manager [req-cfed1345-164b-4777-8117-2e239ad5abbc req-1cb83ef4-cb45-4ccd-b0db-32a106d84b74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-unplugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.982 2 DEBUG oslo_concurrency.lockutils [req-cfed1345-164b-4777-8117-2e239ad5abbc req-1cb83ef4-cb45-4ccd-b0db-32a106d84b74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.983 2 DEBUG oslo_concurrency.lockutils [req-cfed1345-164b-4777-8117-2e239ad5abbc req-1cb83ef4-cb45-4ccd-b0db-32a106d84b74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.983 2 DEBUG oslo_concurrency.lockutils [req-cfed1345-164b-4777-8117-2e239ad5abbc req-1cb83ef4-cb45-4ccd-b0db-32a106d84b74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.983 2 DEBUG nova.compute.manager [req-cfed1345-164b-4777-8117-2e239ad5abbc req-1cb83ef4-cb45-4ccd-b0db-32a106d84b74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] No waiting events found dispatching network-vif-unplugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:24:11 compute-0 nova_compute[259627]: 2025-10-14 09:24:11.984 2 DEBUG nova.compute.manager [req-cfed1345-164b-4777-8117-2e239ad5abbc req-1cb83ef4-cb45-4ccd-b0db-32a106d84b74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-unplugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:24:12 compute-0 nova_compute[259627]: 2025-10-14 09:24:12.045 2 DEBUG nova.network.neutron [-] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:24:12 compute-0 nova_compute[259627]: 2025-10-14 09:24:12.060 2 INFO nova.compute.manager [-] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Took 0.48 seconds to deallocate network for instance.
Oct 14 09:24:12 compute-0 nova_compute[259627]: 2025-10-14 09:24:12.107 2 DEBUG oslo_concurrency.lockutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:12 compute-0 nova_compute[259627]: 2025-10-14 09:24:12.112 2 DEBUG oslo_concurrency.lockutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:12 compute-0 nova_compute[259627]: 2025-10-14 09:24:12.147 2 DEBUG oslo_concurrency.processutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:24:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 121 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 5.1 KiB/s wr, 40 op/s
Oct 14 09:24:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:24:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3355233882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:24:12 compute-0 nova_compute[259627]: 2025-10-14 09:24:12.607 2 DEBUG oslo_concurrency.processutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:24:12 compute-0 nova_compute[259627]: 2025-10-14 09:24:12.617 2 DEBUG nova.compute.provider_tree [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:24:12 compute-0 nova_compute[259627]: 2025-10-14 09:24:12.635 2 DEBUG nova.scheduler.client.report [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:24:12 compute-0 nova_compute[259627]: 2025-10-14 09:24:12.659 2 DEBUG oslo_concurrency.lockutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:12 compute-0 nova_compute[259627]: 2025-10-14 09:24:12.688 2 INFO nova.scheduler.client.report [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance 50c83173-31e3-4f7a-8836-26e52affd0f2
Oct 14 09:24:12 compute-0 nova_compute[259627]: 2025-10-14 09:24:12.764 2 DEBUG oslo_concurrency.lockutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:12 compute-0 focused_gates[383452]: {
Oct 14 09:24:12 compute-0 focused_gates[383452]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:24:12 compute-0 focused_gates[383452]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:24:12 compute-0 focused_gates[383452]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:24:12 compute-0 focused_gates[383452]:         "osd_id": 2,
Oct 14 09:24:12 compute-0 focused_gates[383452]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:24:12 compute-0 focused_gates[383452]:         "type": "bluestore"
Oct 14 09:24:12 compute-0 focused_gates[383452]:     },
Oct 14 09:24:12 compute-0 focused_gates[383452]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:24:12 compute-0 focused_gates[383452]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:24:12 compute-0 focused_gates[383452]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:24:12 compute-0 focused_gates[383452]:         "osd_id": 1,
Oct 14 09:24:12 compute-0 focused_gates[383452]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:24:12 compute-0 focused_gates[383452]:         "type": "bluestore"
Oct 14 09:24:12 compute-0 focused_gates[383452]:     },
Oct 14 09:24:12 compute-0 focused_gates[383452]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:24:12 compute-0 focused_gates[383452]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:24:12 compute-0 focused_gates[383452]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:24:12 compute-0 focused_gates[383452]:         "osd_id": 0,
Oct 14 09:24:12 compute-0 focused_gates[383452]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:24:12 compute-0 focused_gates[383452]:         "type": "bluestore"
Oct 14 09:24:12 compute-0 focused_gates[383452]:     }
Oct 14 09:24:12 compute-0 focused_gates[383452]: }
Oct 14 09:24:12 compute-0 systemd[1]: libpod-b0d772973fb82a6f86689b22c2e00796cbe29c8baee9b9d583ae03518c5a6bbf.scope: Deactivated successfully.
Oct 14 09:24:12 compute-0 podman[383436]: 2025-10-14 09:24:12.813562995 +0000 UTC m=+1.100815692 container died b0d772973fb82a6f86689b22c2e00796cbe29c8baee9b9d583ae03518c5a6bbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gates, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:24:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-211a0c18b29b6cdf53ecbe486d8048d500e825ed02502c99a9acccf8e8390e1c-merged.mount: Deactivated successfully.
Oct 14 09:24:12 compute-0 podman[383436]: 2025-10-14 09:24:12.876028538 +0000 UTC m=+1.163281185 container remove b0d772973fb82a6f86689b22c2e00796cbe29c8baee9b9d583ae03518c5a6bbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gates, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:24:12 compute-0 systemd[1]: libpod-conmon-b0d772973fb82a6f86689b22c2e00796cbe29c8baee9b9d583ae03518c5a6bbf.scope: Deactivated successfully.
Oct 14 09:24:12 compute-0 sudo[383243]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:24:12 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:24:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:24:12 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:24:12 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev fb79dafb-f443-4ecc-b7c8-98f61a431fa3 does not exist
Oct 14 09:24:12 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev f430ab29-df09-46a4-a048-be2274958b87 does not exist
Oct 14 09:24:12 compute-0 nova_compute[259627]: 2025-10-14 09:24:12.929 2 DEBUG nova.compute.manager [req-d3481104-175e-4aaf-a13f-739de2fb79f1 req-05852587-aaa7-4d04-9bc8-e73ce9052564 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-deleted-81977d79-f754-42ba-8b3c-c4eb2f9651d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:24:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:24:12 compute-0 sudo[383521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:24:12 compute-0 sudo[383521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:13 compute-0 sudo[383521]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:13 compute-0 sudo[383546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:24:13 compute-0 sudo[383546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:24:13 compute-0 sudo[383546]: pam_unix(sudo:session): session closed for user root
Oct 14 09:24:13 compute-0 ceph-mon[74249]: pgmap v2132: 305 pgs: 305 active+clean; 121 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 5.1 KiB/s wr, 40 op/s
Oct 14 09:24:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3355233882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:24:13 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:24:13 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:24:13 compute-0 nova_compute[259627]: 2025-10-14 09:24:13.588 2 DEBUG nova.network.neutron [req-60772ba5-7603-424b-acce-225fa9720628 req-ab24f010-b429-4306-ab5d-90f75a03b945 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updated VIF entry in instance network info cache for port 81977d79-f754-42ba-8b3c-c4eb2f9651d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:24:13 compute-0 nova_compute[259627]: 2025-10-14 09:24:13.588 2 DEBUG nova.network.neutron [req-60772ba5-7603-424b-acce-225fa9720628 req-ab24f010-b429-4306-ab5d-90f75a03b945 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:24:13 compute-0 nova_compute[259627]: 2025-10-14 09:24:13.610 2 DEBUG oslo_concurrency.lockutils [req-60772ba5-7603-424b-acce-225fa9720628 req-ab24f010-b429-4306-ab5d-90f75a03b945 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:24:14 compute-0 nova_compute[259627]: 2025-10-14 09:24:14.193 2 DEBUG nova.compute.manager [req-2ec5e0a6-afdb-4b98-b656-d92f062d8046 req-26af9426-6ff0-49ec-a8b4-fbb9c3670288 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-plugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:24:14 compute-0 nova_compute[259627]: 2025-10-14 09:24:14.194 2 DEBUG oslo_concurrency.lockutils [req-2ec5e0a6-afdb-4b98-b656-d92f062d8046 req-26af9426-6ff0-49ec-a8b4-fbb9c3670288 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:14 compute-0 nova_compute[259627]: 2025-10-14 09:24:14.194 2 DEBUG oslo_concurrency.lockutils [req-2ec5e0a6-afdb-4b98-b656-d92f062d8046 req-26af9426-6ff0-49ec-a8b4-fbb9c3670288 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:14 compute-0 nova_compute[259627]: 2025-10-14 09:24:14.195 2 DEBUG oslo_concurrency.lockutils [req-2ec5e0a6-afdb-4b98-b656-d92f062d8046 req-26af9426-6ff0-49ec-a8b4-fbb9c3670288 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:14 compute-0 nova_compute[259627]: 2025-10-14 09:24:14.195 2 DEBUG nova.compute.manager [req-2ec5e0a6-afdb-4b98-b656-d92f062d8046 req-26af9426-6ff0-49ec-a8b4-fbb9c3670288 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] No waiting events found dispatching network-vif-plugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:24:14 compute-0 nova_compute[259627]: 2025-10-14 09:24:14.195 2 WARNING nova.compute.manager [req-2ec5e0a6-afdb-4b98-b656-d92f062d8046 req-26af9426-6ff0-49ec-a8b4-fbb9c3670288 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received unexpected event network-vif-plugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 for instance with vm_state deleted and task_state None.
Oct 14 09:24:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 121 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 4.5 KiB/s wr, 36 op/s
Oct 14 09:24:14 compute-0 nova_compute[259627]: 2025-10-14 09:24:14.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:14 compute-0 nova_compute[259627]: 2025-10-14 09:24:14.603 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433839.6015317, 6810b29b-088f-441b-8a6a-02eaafada0c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:24:14 compute-0 nova_compute[259627]: 2025-10-14 09:24:14.603 2 INFO nova.compute.manager [-] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] VM Stopped (Lifecycle Event)
Oct 14 09:24:14 compute-0 nova_compute[259627]: 2025-10-14 09:24:14.635 2 DEBUG nova.compute.manager [None req-1aab3de4-f9f9-4e2c-a3a4-d7d94c20f12a - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:24:15 compute-0 ceph-mon[74249]: pgmap v2133: 305 pgs: 305 active+clean; 121 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 4.5 KiB/s wr, 36 op/s
Oct 14 09:24:16 compute-0 nova_compute[259627]: 2025-10-14 09:24:16.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 5.4 KiB/s wr, 61 op/s
Oct 14 09:24:16 compute-0 podman[383572]: 2025-10-14 09:24:16.671144583 +0000 UTC m=+0.078015158 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 09:24:16 compute-0 podman[383571]: 2025-10-14 09:24:16.709927452 +0000 UTC m=+0.112251935 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 14 09:24:17 compute-0 ceph-mon[74249]: pgmap v2134: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 5.4 KiB/s wr, 61 op/s
Oct 14 09:24:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:24:18 compute-0 nova_compute[259627]: 2025-10-14 09:24:18.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:18 compute-0 nova_compute[259627]: 2025-10-14 09:24:18.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.2 KiB/s wr, 34 op/s
Oct 14 09:24:19 compute-0 ceph-mon[74249]: pgmap v2135: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.2 KiB/s wr, 34 op/s
Oct 14 09:24:19 compute-0 nova_compute[259627]: 2025-10-14 09:24:19.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:19 compute-0 nova_compute[259627]: 2025-10-14 09:24:19.650 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433844.649698, ef3d76bf-9763-4405-8e48-c2c4405a2a3b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:24:19 compute-0 nova_compute[259627]: 2025-10-14 09:24:19.651 2 INFO nova.compute.manager [-] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] VM Stopped (Lifecycle Event)
Oct 14 09:24:19 compute-0 nova_compute[259627]: 2025-10-14 09:24:19.681 2 DEBUG nova.compute.manager [None req-d65167ca-1b97-4fa1-8939-c957c864a01e - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:24:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.2 KiB/s wr, 34 op/s
Oct 14 09:24:21 compute-0 nova_compute[259627]: 2025-10-14 09:24:21.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:21 compute-0 ceph-mon[74249]: pgmap v2136: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.2 KiB/s wr, 34 op/s
Oct 14 09:24:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:24:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:24:23 compute-0 ceph-mon[74249]: pgmap v2137: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:24:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:24:24 compute-0 nova_compute[259627]: 2025-10-14 09:24:24.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:25 compute-0 ceph-mon[74249]: pgmap v2138: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:24:26 compute-0 nova_compute[259627]: 2025-10-14 09:24:26.068 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433851.0668418, 50c83173-31e3-4f7a-8836-26e52affd0f2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:24:26 compute-0 nova_compute[259627]: 2025-10-14 09:24:26.068 2 INFO nova.compute.manager [-] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] VM Stopped (Lifecycle Event)
Oct 14 09:24:26 compute-0 nova_compute[259627]: 2025-10-14 09:24:26.100 2 DEBUG nova.compute.manager [None req-c0b6e475-3ded-4579-b472-051e29d78cae - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:24:26 compute-0 nova_compute[259627]: 2025-10-14 09:24:26.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:24:27 compute-0 ceph-mon[74249]: pgmap v2139: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:24:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:24:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:24:29 compute-0 ceph-mon[74249]: pgmap v2140: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:24:29 compute-0 nova_compute[259627]: 2025-10-14 09:24:29.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:24:31 compute-0 nova_compute[259627]: 2025-10-14 09:24:31.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:31 compute-0 ceph-mon[74249]: pgmap v2141: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:24:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:24:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:24:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:24:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:24:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:24:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:24:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:24:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:24:32
Oct 14 09:24:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:24:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:24:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', '.mgr', 'vms', 'images', 'default.rgw.meta']
Oct 14 09:24:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:24:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:24:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:24:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:24:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:24:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:24:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:24:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:24:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:24:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:24:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:24:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:24:33 compute-0 ceph-mon[74249]: pgmap v2142: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:24:33 compute-0 podman[383615]: 2025-10-14 09:24:33.670692221 +0000 UTC m=+0.086999971 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:24:33 compute-0 podman[383616]: 2025-10-14 09:24:33.692885099 +0000 UTC m=+0.098773651 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:24:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:24:34 compute-0 nova_compute[259627]: 2025-10-14 09:24:34.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:35 compute-0 ceph-mon[74249]: pgmap v2143: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:24:36 compute-0 nova_compute[259627]: 2025-10-14 09:24:36.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:24:36 compute-0 nova_compute[259627]: 2025-10-14 09:24:36.778 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:36 compute-0 nova_compute[259627]: 2025-10-14 09:24:36.779 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:36 compute-0 nova_compute[259627]: 2025-10-14 09:24:36.795 2 DEBUG nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:24:36 compute-0 nova_compute[259627]: 2025-10-14 09:24:36.881 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:36 compute-0 nova_compute[259627]: 2025-10-14 09:24:36.882 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:36 compute-0 nova_compute[259627]: 2025-10-14 09:24:36.890 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:24:36 compute-0 nova_compute[259627]: 2025-10-14 09:24:36.890 2 INFO nova.compute.claims [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:24:36 compute-0 nova_compute[259627]: 2025-10-14 09:24:36.999 2 DEBUG nova.scheduler.client.report [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.005 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.027 2 DEBUG nova.scheduler.client.report [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.028 2 DEBUG nova.compute.provider_tree [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.050 2 DEBUG nova.scheduler.client.report [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.069 2 DEBUG nova.scheduler.client.report [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.120 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:24:37 compute-0 ceph-mon[74249]: pgmap v2144: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:24:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:24:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4256022138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.586 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.595 2 DEBUG nova.compute.provider_tree [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.612 2 DEBUG nova.scheduler.client.report [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.634 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.636 2 DEBUG nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.691 2 DEBUG nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.691 2 DEBUG nova.network.neutron [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.709 2 INFO nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.726 2 DEBUG nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.834 2 DEBUG nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.837 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.837 2 INFO nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Creating image(s)
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.871 2 DEBUG nova.storage.rbd_utils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.902 2 DEBUG nova.storage.rbd_utils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.933 2 DEBUG nova.storage.rbd_utils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.938 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:24:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:24:37 compute-0 nova_compute[259627]: 2025-10-14 09:24:37.991 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.012 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.012 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.013 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.013 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.013 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.068 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.071 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.072 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.073 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.110 2 DEBUG nova.storage.rbd_utils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.115 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:24:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.469 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:24:38 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4256022138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:24:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:24:38 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2541032966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.567 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.574 2 DEBUG nova.storage.rbd_utils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.694 2 DEBUG nova.objects.instance [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid c4b3476e-7b32-4a60-ad45-41cb6716adaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.709 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.710 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Ensure instance console log exists: /var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.710 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.711 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.711 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.878 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.881 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3657MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.881 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.881 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.970 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance c4b3476e-7b32-4a60-ad45-41cb6716adaf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.971 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:24:38 compute-0 nova_compute[259627]: 2025-10-14 09:24:38.971 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:24:39 compute-0 nova_compute[259627]: 2025-10-14 09:24:39.004 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:24:39 compute-0 ceph-mon[74249]: pgmap v2145: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:24:39 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2541032966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:24:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:24:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/790958384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:24:39 compute-0 nova_compute[259627]: 2025-10-14 09:24:39.529 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:24:39 compute-0 nova_compute[259627]: 2025-10-14 09:24:39.536 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:24:39 compute-0 nova_compute[259627]: 2025-10-14 09:24:39.556 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:24:39 compute-0 nova_compute[259627]: 2025-10-14 09:24:39.579 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:24:39 compute-0 nova_compute[259627]: 2025-10-14 09:24:39.580 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:39 compute-0 nova_compute[259627]: 2025-10-14 09:24:39.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:39 compute-0 nova_compute[259627]: 2025-10-14 09:24:39.721 2 DEBUG nova.policy [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:24:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 51 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 7.8 KiB/s rd, 333 KiB/s wr, 14 op/s
Oct 14 09:24:40 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/790958384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:24:40 compute-0 nova_compute[259627]: 2025-10-14 09:24:40.567 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:24:40 compute-0 nova_compute[259627]: 2025-10-14 09:24:40.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:24:41 compute-0 nova_compute[259627]: 2025-10-14 09:24:41.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:41 compute-0 ceph-mon[74249]: pgmap v2146: 305 pgs: 305 active+clean; 51 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 7.8 KiB/s rd, 333 KiB/s wr, 14 op/s
Oct 14 09:24:41 compute-0 nova_compute[259627]: 2025-10-14 09:24:41.972 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:24:42 compute-0 nova_compute[259627]: 2025-10-14 09:24:42.127 2 DEBUG nova.network.neutron [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Successfully updated port: 8cdca031-de5b-4956-a27b-c6c6320c9764 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:24:42 compute-0 nova_compute[259627]: 2025-10-14 09:24:42.150 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-c4b3476e-7b32-4a60-ad45-41cb6716adaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:24:42 compute-0 nova_compute[259627]: 2025-10-14 09:24:42.150 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-c4b3476e-7b32-4a60-ad45-41cb6716adaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:24:42 compute-0 nova_compute[259627]: 2025-10-14 09:24:42.150 2 DEBUG nova.network.neutron [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:24:42 compute-0 nova_compute[259627]: 2025-10-14 09:24:42.235 2 DEBUG nova.compute.manager [req-55631e5e-e17e-4af2-9752-60cc9442ac1d req-088eb718-5df1-4df3-88d0-2196689095fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Received event network-changed-8cdca031-de5b-4956-a27b-c6c6320c9764 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:24:42 compute-0 nova_compute[259627]: 2025-10-14 09:24:42.236 2 DEBUG nova.compute.manager [req-55631e5e-e17e-4af2-9752-60cc9442ac1d req-088eb718-5df1-4df3-88d0-2196689095fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Refreshing instance network info cache due to event network-changed-8cdca031-de5b-4956-a27b-c6c6320c9764. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:24:42 compute-0 nova_compute[259627]: 2025-10-14 09:24:42.236 2 DEBUG oslo_concurrency.lockutils [req-55631e5e-e17e-4af2-9752-60cc9442ac1d req-088eb718-5df1-4df3-88d0-2196689095fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c4b3476e-7b32-4a60-ad45-41cb6716adaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:24:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct 14 09:24:42 compute-0 nova_compute[259627]: 2025-10-14 09:24:42.356 2 DEBUG nova.network.neutron [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:24:42 compute-0 nova_compute[259627]: 2025-10-14 09:24:42.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:24:42 compute-0 nova_compute[259627]: 2025-10-14 09:24:42.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:24:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.072 2 DEBUG nova.network.neutron [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Updating instance_info_cache with network_info: [{"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.093 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-c4b3476e-7b32-4a60-ad45-41cb6716adaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.093 2 DEBUG nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Instance network_info: |[{"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.094 2 DEBUG oslo_concurrency.lockutils [req-55631e5e-e17e-4af2-9752-60cc9442ac1d req-088eb718-5df1-4df3-88d0-2196689095fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c4b3476e-7b32-4a60-ad45-41cb6716adaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.094 2 DEBUG nova.network.neutron [req-55631e5e-e17e-4af2-9752-60cc9442ac1d req-088eb718-5df1-4df3-88d0-2196689095fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Refreshing network info cache for port 8cdca031-de5b-4956-a27b-c6c6320c9764 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.098 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Start _get_guest_xml network_info=[{"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.102 2 WARNING nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.107 2 DEBUG nova.virt.libvirt.host [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.107 2 DEBUG nova.virt.libvirt.host [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.116 2 DEBUG nova.virt.libvirt.host [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.117 2 DEBUG nova.virt.libvirt.host [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.117 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.118 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.118 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.119 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.119 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.119 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.119 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.120 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.120 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.120 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.121 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.121 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.125 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:24:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:24:43 compute-0 ceph-mon[74249]: pgmap v2147: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct 14 09:24:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:24:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/251679546' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.608 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.629 2 DEBUG nova.storage.rbd_utils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:24:43 compute-0 nova_compute[259627]: 2025-10-14 09:24:43.633 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:24:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:24:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2896303892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.072 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.075 2 DEBUG nova.virt.libvirt.vif [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:24:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-922814987',display_name='tempest-TestNetworkBasicOps-server-922814987',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-922814987',id=123,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIDfkSnFvDSr+GSt1dM38bvIyYGUhpm7xroCMStk06IR9Vyf1uV/kgX14ev9LDBZUF7QO+LVF5DG5NihTr1U28RQ9HNU8vPNSalnywo2YhCd69n5Jhi3ssJBlCIEOLbD3Q==',key_name='tempest-TestNetworkBasicOps-1537576407',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-s3vt9dhf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:24:37Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=c4b3476e-7b32-4a60-ad45-41cb6716adaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.075 2 DEBUG nova.network.os_vif_util [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.077 2 DEBUG nova.network.os_vif_util [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.079 2 DEBUG nova.objects.instance [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid c4b3476e-7b32-4a60-ad45-41cb6716adaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.104 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:24:44 compute-0 nova_compute[259627]:   <uuid>c4b3476e-7b32-4a60-ad45-41cb6716adaf</uuid>
Oct 14 09:24:44 compute-0 nova_compute[259627]:   <name>instance-0000007b</name>
Oct 14 09:24:44 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:24:44 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:24:44 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkBasicOps-server-922814987</nova:name>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:24:43</nova:creationTime>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:24:44 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:24:44 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:24:44 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:24:44 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:24:44 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:24:44 compute-0 nova_compute[259627]:         <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:24:44 compute-0 nova_compute[259627]:         <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:24:44 compute-0 nova_compute[259627]:         <nova:port uuid="8cdca031-de5b-4956-a27b-c6c6320c9764">
Oct 14 09:24:44 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:24:44 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:24:44 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <system>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <entry name="serial">c4b3476e-7b32-4a60-ad45-41cb6716adaf</entry>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <entry name="uuid">c4b3476e-7b32-4a60-ad45-41cb6716adaf</entry>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     </system>
Oct 14 09:24:44 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:24:44 compute-0 nova_compute[259627]:   <os>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:   </os>
Oct 14 09:24:44 compute-0 nova_compute[259627]:   <features>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:   </features>
Oct 14 09:24:44 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:24:44 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:24:44 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk">
Oct 14 09:24:44 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       </source>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:24:44 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk.config">
Oct 14 09:24:44 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       </source>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:24:44 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:ed:fc:28"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <target dev="tap8cdca031-de"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf/console.log" append="off"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <video>
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     </video>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:24:44 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:24:44 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:24:44 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:24:44 compute-0 nova_compute[259627]: </domain>
Oct 14 09:24:44 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.106 2 DEBUG nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Preparing to wait for external event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.106 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.107 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.107 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.108 2 DEBUG nova.virt.libvirt.vif [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:24:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-922814987',display_name='tempest-TestNetworkBasicOps-server-922814987',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-922814987',id=123,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIDfkSnFvDSr+GSt1dM38bvIyYGUhpm7xroCMStk06IR9Vyf1uV/kgX14ev9LDBZUF7QO+LVF5DG5NihTr1U28RQ9HNU8vPNSalnywo2YhCd69n5Jhi3ssJBlCIEOLbD3Q==',key_name='tempest-TestNetworkBasicOps-1537576407',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-s3vt9dhf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:24:37Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=c4b3476e-7b32-4a60-ad45-41cb6716adaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.109 2 DEBUG nova.network.os_vif_util [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.110 2 DEBUG nova.network.os_vif_util [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.111 2 DEBUG os_vif [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.112 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.113 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.117 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8cdca031-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.117 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8cdca031-de, col_values=(('external_ids', {'iface-id': '8cdca031-de5b-4956-a27b-c6c6320c9764', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:fc:28', 'vm-uuid': 'c4b3476e-7b32-4a60-ad45-41cb6716adaf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:44 compute-0 NetworkManager[44885]: <info>  [1760433884.1226] manager: (tap8cdca031-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/532)
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.130 2 INFO os_vif [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de')
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.193 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.193 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.194 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:ed:fc:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.194 2 INFO nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Using config drive
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.231 2 DEBUG nova.storage.rbd_utils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:24:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct 14 09:24:44 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/251679546' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:24:44 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2896303892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.838 2 INFO nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Creating config drive at /var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf/disk.config
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.848 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3_49qynh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.997 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 14 09:24:44 compute-0 nova_compute[259627]: 2025-10-14 09:24:44.998 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.023 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3_49qynh" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.055 2 DEBUG nova.storage.rbd_utils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.059 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf/disk.config c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.282 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf/disk.config c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.283 2 INFO nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Deleting local config drive /var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf/disk.config because it was imported into RBD.
Oct 14 09:24:45 compute-0 kernel: tap8cdca031-de: entered promiscuous mode
Oct 14 09:24:45 compute-0 ovn_controller[152662]: 2025-10-14T09:24:45Z|01318|binding|INFO|Claiming lport 8cdca031-de5b-4956-a27b-c6c6320c9764 for this chassis.
Oct 14 09:24:45 compute-0 ovn_controller[152662]: 2025-10-14T09:24:45Z|01319|binding|INFO|8cdca031-de5b-4956-a27b-c6c6320c9764: Claiming fa:16:3e:ed:fc:28 10.100.0.14
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:45 compute-0 NetworkManager[44885]: <info>  [1760433885.3706] manager: (tap8cdca031-de): new Tun device (/org/freedesktop/NetworkManager/Devices/533)
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.384 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:fc:28 10.100.0.14'], port_security=['fa:16:3e:ed:fc:28 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1769258053', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c4b3476e-7b32-4a60-ad45-41cb6716adaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1769258053', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': '72094e2b-bb7f-4f40-9ee7-497daf8e97ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=596a534e-c0c1-49ed-bcdd-00855a90d08e, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8cdca031-de5b-4956-a27b-c6c6320c9764) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.385 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8cdca031-de5b-4956-a27b-c6c6320c9764 in datapath ee3efe32-94e6-45cb-ae71-b379f4a2309a bound to our chassis
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.386 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee3efe32-94e6-45cb-ae71-b379f4a2309a
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.400 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0ff928-354b-4ca7-ae56-2d03b8717cd0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.401 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee3efe32-91 in ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.403 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee3efe32-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.403 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[98524d8f-2a24-4e52-9e70-7455177c831f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.405 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d8798c73-af39-42fb-a96f-c663ac8280ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:45 compute-0 systemd-machined[214636]: New machine qemu-156-instance-0000007b.
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.422 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8b677b-4869-42ca-b939-aaa1563b65a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:45 compute-0 systemd[1]: Started Virtual Machine qemu-156-instance-0000007b.
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.454 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[859a14a1-cb52-4f55-b7cc-c5071840d881]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:45 compute-0 systemd-udevd[384029]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:24:45 compute-0 ovn_controller[152662]: 2025-10-14T09:24:45Z|01320|binding|INFO|Setting lport 8cdca031-de5b-4956-a27b-c6c6320c9764 ovn-installed in OVS
Oct 14 09:24:45 compute-0 ovn_controller[152662]: 2025-10-14T09:24:45Z|01321|binding|INFO|Setting lport 8cdca031-de5b-4956-a27b-c6c6320c9764 up in Southbound
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:45 compute-0 NetworkManager[44885]: <info>  [1760433885.4878] device (tap8cdca031-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:24:45 compute-0 NetworkManager[44885]: <info>  [1760433885.4894] device (tap8cdca031-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.495 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8d23ed-ea68-4ff4-b7ca-a03fc9eb876e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.502 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa58725-fbf1-4ad9-8678-a26cdb09070b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:45 compute-0 NetworkManager[44885]: <info>  [1760433885.5042] manager: (tapee3efe32-90): new Veth device (/org/freedesktop/NetworkManager/Devices/534)
Oct 14 09:24:45 compute-0 ceph-mon[74249]: pgmap v2148: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.549 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6a180720-36b9-4ced-841a-fa29b5cdb48d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.554 2 DEBUG nova.network.neutron [req-55631e5e-e17e-4af2-9752-60cc9442ac1d req-088eb718-5df1-4df3-88d0-2196689095fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Updated VIF entry in instance network info cache for port 8cdca031-de5b-4956-a27b-c6c6320c9764. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.555 2 DEBUG nova.network.neutron [req-55631e5e-e17e-4af2-9752-60cc9442ac1d req-088eb718-5df1-4df3-88d0-2196689095fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Updating instance_info_cache with network_info: [{"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.560 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f249cbd3-b754-4837-9b9c-7df4bccbb7a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.578 2 DEBUG oslo_concurrency.lockutils [req-55631e5e-e17e-4af2-9752-60cc9442ac1d req-088eb718-5df1-4df3-88d0-2196689095fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c4b3476e-7b32-4a60-ad45-41cb6716adaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:24:45 compute-0 NetworkManager[44885]: <info>  [1760433885.5916] device (tapee3efe32-90): carrier: link connected
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.600 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e3b5ffa0-20b0-4449-b44d-57d7e79cef90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.617 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c32d276d-3e36-4dfd-8c99-15aaee4bbec5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee3efe32-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:fa:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 377], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766435, 'reachable_time': 28860, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384058, 'error': None, 'target': 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.639 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d9716ba0-c8d3-4a73-b584-3b3ce0e1e7a0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:fab9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766435, 'tstamp': 766435}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384059, 'error': None, 'target': 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.655 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9a56e693-7d85-4004-8446-04ae45cacb7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee3efe32-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:fa:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 377], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766435, 'reachable_time': 28860, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384060, 'error': None, 'target': 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.694 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[658d052a-f2e2-4c2a-bdae-3ac5f68fc8d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.763 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[21595f0a-9397-40ab-ae8c-738b4d8c2f25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.765 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee3efe32-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.765 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.766 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee3efe32-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.767 2 DEBUG nova.compute.manager [req-e3d374a8-ad59-4a9f-9dfd-4ee009862e02 req-25bb980b-a15e-4ad6-9e26-937d2bca9b04 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Received event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.768 2 DEBUG oslo_concurrency.lockutils [req-e3d374a8-ad59-4a9f-9dfd-4ee009862e02 req-25bb980b-a15e-4ad6-9e26-937d2bca9b04 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.769 2 DEBUG oslo_concurrency.lockutils [req-e3d374a8-ad59-4a9f-9dfd-4ee009862e02 req-25bb980b-a15e-4ad6-9e26-937d2bca9b04 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.769 2 DEBUG oslo_concurrency.lockutils [req-e3d374a8-ad59-4a9f-9dfd-4ee009862e02 req-25bb980b-a15e-4ad6-9e26-937d2bca9b04 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.770 2 DEBUG nova.compute.manager [req-e3d374a8-ad59-4a9f-9dfd-4ee009862e02 req-25bb980b-a15e-4ad6-9e26-937d2bca9b04 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Processing event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:45 compute-0 kernel: tapee3efe32-90: entered promiscuous mode
Oct 14 09:24:45 compute-0 NetworkManager[44885]: <info>  [1760433885.8027] manager: (tapee3efe32-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/535)
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.808 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee3efe32-90, col_values=(('external_ids', {'iface-id': '77c455ce-a111-4a2d-9630-1d923bb22b5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:45 compute-0 ovn_controller[152662]: 2025-10-14T09:24:45Z|01322|binding|INFO|Releasing lport 77c455ce-a111-4a2d-9630-1d923bb22b5c from this chassis (sb_readonly=0)
Oct 14 09:24:45 compute-0 nova_compute[259627]: 2025-10-14 09:24:45.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.840 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee3efe32-94e6-45cb-ae71-b379f4a2309a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee3efe32-94e6-45cb-ae71-b379f4a2309a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.841 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa9f65f-6d3f-4ae3-a6f3-097b5be63f42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.841 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-ee3efe32-94e6-45cb-ae71-b379f4a2309a
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/ee3efe32-94e6-45cb-ae71-b379f4a2309a.pid.haproxy
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID ee3efe32-94e6-45cb-ae71-b379f4a2309a
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:24:45 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.843 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'env', 'PROCESS_TAG=haproxy-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee3efe32-94e6-45cb-ae71-b379f4a2309a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:24:46 compute-0 podman[384092]: 2025-10-14 09:24:46.262461559 +0000 UTC m=+0.083973686 container create 8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 09:24:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 14 09:24:46 compute-0 systemd[1]: Started libpod-conmon-8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706.scope.
Oct 14 09:24:46 compute-0 podman[384092]: 2025-10-14 09:24:46.22241693 +0000 UTC m=+0.043929067 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:24:46 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:24:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43da196363ab20472740d45986e51af165d5a9498f073d634db3931ca381187d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:24:46 compute-0 podman[384092]: 2025-10-14 09:24:46.344406344 +0000 UTC m=+0.165918491 container init 8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:24:46 compute-0 podman[384092]: 2025-10-14 09:24:46.355257542 +0000 UTC m=+0.176769659 container start 8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 09:24:46 compute-0 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384107]: [NOTICE]   (384111) : New worker (384113) forked
Oct 14 09:24:46 compute-0 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384107]: [NOTICE]   (384111) : Loading success.
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.365 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433887.364822, c4b3476e-7b32-4a60-ad45-41cb6716adaf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.366 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] VM Started (Lifecycle Event)
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.368 2 DEBUG nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.372 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.377 2 INFO nova.virt.libvirt.driver [-] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Instance spawned successfully.
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.378 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.397 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.407 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.414 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.415 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.417 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.417 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.418 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.419 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.435 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.436 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433887.3653703, c4b3476e-7b32-4a60-ad45-41cb6716adaf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.436 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] VM Paused (Lifecycle Event)
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.467 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.473 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433887.3714688, c4b3476e-7b32-4a60-ad45-41cb6716adaf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.473 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] VM Resumed (Lifecycle Event)
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.499 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.504 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.510 2 INFO nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Took 9.67 seconds to spawn the instance on the hypervisor.
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.511 2 DEBUG nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.523 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:24:47 compute-0 ceph-mon[74249]: pgmap v2149: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.580 2 INFO nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Took 10.74 seconds to build instance.
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.599 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:47 compute-0 podman[384165]: 2025-10-14 09:24:47.669991769 +0000 UTC m=+0.071087948 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible)
Oct 14 09:24:47 compute-0 podman[384164]: 2025-10-14 09:24:47.700132263 +0000 UTC m=+0.101977020 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.881 2 DEBUG nova.compute.manager [req-ee294a03-ceaf-4f17-bc39-1173d5856c5d req-71f643cf-dece-46c7-ae0e-f3662fa29a34 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Received event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.882 2 DEBUG oslo_concurrency.lockutils [req-ee294a03-ceaf-4f17-bc39-1173d5856c5d req-71f643cf-dece-46c7-ae0e-f3662fa29a34 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.883 2 DEBUG oslo_concurrency.lockutils [req-ee294a03-ceaf-4f17-bc39-1173d5856c5d req-71f643cf-dece-46c7-ae0e-f3662fa29a34 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.883 2 DEBUG oslo_concurrency.lockutils [req-ee294a03-ceaf-4f17-bc39-1173d5856c5d req-71f643cf-dece-46c7-ae0e-f3662fa29a34 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.884 2 DEBUG nova.compute.manager [req-ee294a03-ceaf-4f17-bc39-1173d5856c5d req-71f643cf-dece-46c7-ae0e-f3662fa29a34 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] No waiting events found dispatching network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:24:47 compute-0 nova_compute[259627]: 2025-10-14 09:24:47.885 2 WARNING nova.compute.manager [req-ee294a03-ceaf-4f17-bc39-1173d5856c5d req-71f643cf-dece-46c7-ae0e-f3662fa29a34 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Received unexpected event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 for instance with vm_state active and task_state None.
Oct 14 09:24:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:24:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 14 09:24:49 compute-0 nova_compute[259627]: 2025-10-14 09:24:49.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:49 compute-0 ceph-mon[74249]: pgmap v2150: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 14 09:24:49 compute-0 nova_compute[259627]: 2025-10-14 09:24:49.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:49 compute-0 nova_compute[259627]: 2025-10-14 09:24:49.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:24:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 1.8 MiB/s wr, 122 op/s
Oct 14 09:24:51 compute-0 ovn_controller[152662]: 2025-10-14T09:24:51Z|01323|binding|INFO|Releasing lport 77c455ce-a111-4a2d-9630-1d923bb22b5c from this chassis (sb_readonly=0)
Oct 14 09:24:51 compute-0 NetworkManager[44885]: <info>  [1760433891.1147] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/536)
Oct 14 09:24:51 compute-0 nova_compute[259627]: 2025-10-14 09:24:51.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:51 compute-0 NetworkManager[44885]: <info>  [1760433891.1171] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/537)
Oct 14 09:24:51 compute-0 ovn_controller[152662]: 2025-10-14T09:24:51Z|01324|binding|INFO|Releasing lport 77c455ce-a111-4a2d-9630-1d923bb22b5c from this chassis (sb_readonly=0)
Oct 14 09:24:51 compute-0 nova_compute[259627]: 2025-10-14 09:24:51.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:51 compute-0 nova_compute[259627]: 2025-10-14 09:24:51.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:51 compute-0 ceph-mon[74249]: pgmap v2151: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 1.8 MiB/s wr, 122 op/s
Oct 14 09:24:51 compute-0 nova_compute[259627]: 2025-10-14 09:24:51.835 2 DEBUG nova.compute.manager [req-f3423a93-ffcd-457d-9e10-de4abe1c20c5 req-c08e6aeb-8d59-4c30-bf78-7457dd0eb2ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Received event network-changed-8cdca031-de5b-4956-a27b-c6c6320c9764 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:24:51 compute-0 nova_compute[259627]: 2025-10-14 09:24:51.836 2 DEBUG nova.compute.manager [req-f3423a93-ffcd-457d-9e10-de4abe1c20c5 req-c08e6aeb-8d59-4c30-bf78-7457dd0eb2ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Refreshing instance network info cache due to event network-changed-8cdca031-de5b-4956-a27b-c6c6320c9764. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:24:51 compute-0 nova_compute[259627]: 2025-10-14 09:24:51.837 2 DEBUG oslo_concurrency.lockutils [req-f3423a93-ffcd-457d-9e10-de4abe1c20c5 req-c08e6aeb-8d59-4c30-bf78-7457dd0eb2ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c4b3476e-7b32-4a60-ad45-41cb6716adaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:24:51 compute-0 nova_compute[259627]: 2025-10-14 09:24:51.837 2 DEBUG oslo_concurrency.lockutils [req-f3423a93-ffcd-457d-9e10-de4abe1c20c5 req-c08e6aeb-8d59-4c30-bf78-7457dd0eb2ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c4b3476e-7b32-4a60-ad45-41cb6716adaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:24:51 compute-0 nova_compute[259627]: 2025-10-14 09:24:51.838 2 DEBUG nova.network.neutron [req-f3423a93-ffcd-457d-9e10-de4abe1c20c5 req-c08e6aeb-8d59-4c30-bf78-7457dd0eb2ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Refreshing network info cache for port 8cdca031-de5b-4956-a27b-c6c6320c9764 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:24:51 compute-0 nova_compute[259627]: 2025-10-14 09:24:51.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.032 2 DEBUG oslo_concurrency.lockutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.033 2 DEBUG oslo_concurrency.lockutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.034 2 DEBUG oslo_concurrency.lockutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.035 2 DEBUG oslo_concurrency.lockutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.035 2 DEBUG oslo_concurrency.lockutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.037 2 INFO nova.compute.manager [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Terminating instance
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.039 2 DEBUG nova.compute.manager [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:24:52 compute-0 kernel: tap8cdca031-de (unregistering): left promiscuous mode
Oct 14 09:24:52 compute-0 NetworkManager[44885]: <info>  [1760433892.0892] device (tap8cdca031-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:24:52 compute-0 ovn_controller[152662]: 2025-10-14T09:24:52Z|01325|binding|INFO|Releasing lport 8cdca031-de5b-4956-a27b-c6c6320c9764 from this chassis (sb_readonly=0)
Oct 14 09:24:52 compute-0 ovn_controller[152662]: 2025-10-14T09:24:52Z|01326|binding|INFO|Setting lport 8cdca031-de5b-4956-a27b-c6c6320c9764 down in Southbound
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:52 compute-0 ovn_controller[152662]: 2025-10-14T09:24:52Z|01327|binding|INFO|Removing iface tap8cdca031-de ovn-installed in OVS
Oct 14 09:24:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.115 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:fc:28 10.100.0.14'], port_security=['fa:16:3e:ed:fc:28 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1769258053', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c4b3476e-7b32-4a60-ad45-41cb6716adaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1769258053', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': '72094e2b-bb7f-4f40-9ee7-497daf8e97ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=596a534e-c0c1-49ed-bcdd-00855a90d08e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8cdca031-de5b-4956-a27b-c6c6320c9764) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:24:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.117 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8cdca031-de5b-4956-a27b-c6c6320c9764 in datapath ee3efe32-94e6-45cb-ae71-b379f4a2309a unbound from our chassis
Oct 14 09:24:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.119 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee3efe32-94e6-45cb-ae71-b379f4a2309a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:24:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.121 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cc424803-2bb6-441e-8a41-4e0295526b09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.122 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a namespace which is not needed anymore
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:52 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct 14 09:24:52 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007b.scope: Consumed 6.626s CPU time.
Oct 14 09:24:52 compute-0 systemd-machined[214636]: Machine qemu-156-instance-0000007b terminated.
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 159 op/s
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.277 2 INFO nova.virt.libvirt.driver [-] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Instance destroyed successfully.
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.278 2 DEBUG nova.objects.instance [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid c4b3476e-7b32-4a60-ad45-41cb6716adaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:24:52 compute-0 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384107]: [NOTICE]   (384111) : haproxy version is 2.8.14-c23fe91
Oct 14 09:24:52 compute-0 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384107]: [NOTICE]   (384111) : path to executable is /usr/sbin/haproxy
Oct 14 09:24:52 compute-0 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384107]: [WARNING]  (384111) : Exiting Master process...
Oct 14 09:24:52 compute-0 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384107]: [WARNING]  (384111) : Exiting Master process...
Oct 14 09:24:52 compute-0 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384107]: [ALERT]    (384111) : Current worker (384113) exited with code 143 (Terminated)
Oct 14 09:24:52 compute-0 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384107]: [WARNING]  (384111) : All workers exited. Exiting... (0)
Oct 14 09:24:52 compute-0 systemd[1]: libpod-8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706.scope: Deactivated successfully.
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.291 2 DEBUG nova.virt.libvirt.vif [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:24:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-922814987',display_name='tempest-TestNetworkBasicOps-server-922814987',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-922814987',id=123,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIDfkSnFvDSr+GSt1dM38bvIyYGUhpm7xroCMStk06IR9Vyf1uV/kgX14ev9LDBZUF7QO+LVF5DG5NihTr1U28RQ9HNU8vPNSalnywo2YhCd69n5Jhi3ssJBlCIEOLbD3Q==',key_name='tempest-TestNetworkBasicOps-1537576407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:24:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-s3vt9dhf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:24:47Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=c4b3476e-7b32-4a60-ad45-41cb6716adaf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.291 2 DEBUG nova.network.os_vif_util [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.292 2 DEBUG nova.network.os_vif_util [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.292 2 DEBUG os_vif [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:24:52 compute-0 podman[384231]: 2025-10-14 09:24:52.294916398 +0000 UTC m=+0.057865431 container died 8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.294 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8cdca031-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.305 2 INFO os_vif [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de')
Oct 14 09:24:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706-userdata-shm.mount: Deactivated successfully.
Oct 14 09:24:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-43da196363ab20472740d45986e51af165d5a9498f073d634db3931ca381187d-merged.mount: Deactivated successfully.
Oct 14 09:24:52 compute-0 podman[384231]: 2025-10-14 09:24:52.34760319 +0000 UTC m=+0.110552203 container cleanup 8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 09:24:52 compute-0 systemd[1]: libpod-conmon-8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706.scope: Deactivated successfully.
Oct 14 09:24:52 compute-0 podman[384288]: 2025-10-14 09:24:52.432845076 +0000 UTC m=+0.051209576 container remove 8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 09:24:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.444 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[68dc95f8-94a9-4f54-b539-28b829a187df]: (4, ('Tue Oct 14 09:24:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a (8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706)\n8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706\nTue Oct 14 09:24:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a (8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706)\n8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.447 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[368e0674-2a00-4318-81c5-3b7fada5ce11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.448 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee3efe32-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:24:52 compute-0 kernel: tapee3efe32-90: left promiscuous mode
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.456 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b1bf54b7-f0c6-46a6-9b02-9aa72e6f87e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.491 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6656baeb-75ce-4cd8-83de-3de930449c46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.492 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bdebaf2f-f3af-4b81-970d-0c797b9f2107]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.510 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bca30114-49e6-4abe-a0b1-64b4d0b5d86a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766425, 'reachable_time': 26523, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384303, 'error': None, 'target': 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.512 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:24:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.512 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[37f5632e-d95b-4129-87fa-a92f71da605c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:24:52 compute-0 systemd[1]: run-netns-ovnmeta\x2dee3efe32\x2d94e6\x2d45cb\x2dae71\x2db379f4a2309a.mount: Deactivated successfully.
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.714 2 INFO nova.virt.libvirt.driver [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Deleting instance files /var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf_del
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.717 2 INFO nova.virt.libvirt.driver [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Deletion of /var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf_del complete
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.761 2 INFO nova.compute.manager [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.761 2 DEBUG oslo.service.loopingcall [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.762 2 DEBUG nova.compute.manager [-] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.762 2 DEBUG nova.network.neutron [-] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.828 2 DEBUG nova.network.neutron [req-f3423a93-ffcd-457d-9e10-de4abe1c20c5 req-c08e6aeb-8d59-4c30-bf78-7457dd0eb2ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Updated VIF entry in instance network info cache for port 8cdca031-de5b-4956-a27b-c6c6320c9764. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.828 2 DEBUG nova.network.neutron [req-f3423a93-ffcd-457d-9e10-de4abe1c20c5 req-c08e6aeb-8d59-4c30-bf78-7457dd0eb2ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Updating instance_info_cache with network_info: [{"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:24:52 compute-0 nova_compute[259627]: 2025-10-14 09:24:52.845 2 DEBUG oslo_concurrency.lockutils [req-f3423a93-ffcd-457d-9e10-de4abe1c20c5 req-c08e6aeb-8d59-4c30-bf78-7457dd0eb2ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c4b3476e-7b32-4a60-ad45-41cb6716adaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:24:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:24:53 compute-0 ceph-mon[74249]: pgmap v2152: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 159 op/s
Oct 14 09:24:54 compute-0 nova_compute[259627]: 2025-10-14 09:24:54.196 2 DEBUG nova.network.neutron [-] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:24:54 compute-0 nova_compute[259627]: 2025-10-14 09:24:54.217 2 INFO nova.compute.manager [-] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Took 1.45 seconds to deallocate network for instance.
Oct 14 09:24:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 101 op/s
Oct 14 09:24:54 compute-0 nova_compute[259627]: 2025-10-14 09:24:54.273 2 DEBUG oslo_concurrency.lockutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:24:54 compute-0 nova_compute[259627]: 2025-10-14 09:24:54.274 2 DEBUG oslo_concurrency.lockutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:24:54 compute-0 nova_compute[259627]: 2025-10-14 09:24:54.330 2 DEBUG oslo_concurrency.processutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:24:54 compute-0 nova_compute[259627]: 2025-10-14 09:24:54.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:24:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/114859750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:24:54 compute-0 nova_compute[259627]: 2025-10-14 09:24:54.837 2 DEBUG oslo_concurrency.processutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:24:54 compute-0 nova_compute[259627]: 2025-10-14 09:24:54.847 2 DEBUG nova.compute.provider_tree [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:24:54 compute-0 nova_compute[259627]: 2025-10-14 09:24:54.873 2 DEBUG nova.scheduler.client.report [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:24:54 compute-0 nova_compute[259627]: 2025-10-14 09:24:54.908 2 DEBUG oslo_concurrency.lockutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:54 compute-0 nova_compute[259627]: 2025-10-14 09:24:54.960 2 INFO nova.scheduler.client.report [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance c4b3476e-7b32-4a60-ad45-41cb6716adaf
Oct 14 09:24:55 compute-0 nova_compute[259627]: 2025-10-14 09:24:55.039 2 DEBUG oslo_concurrency.lockutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:24:55 compute-0 ceph-mon[74249]: pgmap v2153: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 101 op/s
Oct 14 09:24:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/114859750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:24:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 128 op/s
Oct 14 09:24:57 compute-0 nova_compute[259627]: 2025-10-14 09:24:57.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:24:57 compute-0 ceph-mon[74249]: pgmap v2154: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 128 op/s
Oct 14 09:24:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:24:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 97 op/s
Oct 14 09:24:59 compute-0 ceph-mon[74249]: pgmap v2155: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 97 op/s
Oct 14 09:24:59 compute-0 nova_compute[259627]: 2025-10-14 09:24:59.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 97 op/s
Oct 14 09:25:01 compute-0 ceph-mon[74249]: pgmap v2156: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 97 op/s
Oct 14 09:25:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.2 KiB/s wr, 77 op/s
Oct 14 09:25:02 compute-0 nova_compute[259627]: 2025-10-14 09:25:02.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:25:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:25:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:25:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:25:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:25:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:25:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:25:03 compute-0 nova_compute[259627]: 2025-10-14 09:25:03.182 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "c0981e31-738f-44e8-be4c-b64961716660" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:03 compute-0 nova_compute[259627]: 2025-10-14 09:25:03.183 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:03 compute-0 nova_compute[259627]: 2025-10-14 09:25:03.197 2 DEBUG nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:25:03 compute-0 nova_compute[259627]: 2025-10-14 09:25:03.288 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:03 compute-0 nova_compute[259627]: 2025-10-14 09:25:03.288 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:03 compute-0 nova_compute[259627]: 2025-10-14 09:25:03.300 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:25:03 compute-0 nova_compute[259627]: 2025-10-14 09:25:03.301 2 INFO nova.compute.claims [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:25:03 compute-0 nova_compute[259627]: 2025-10-14 09:25:03.432 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:25:03 compute-0 ceph-mon[74249]: pgmap v2157: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.2 KiB/s wr, 77 op/s
Oct 14 09:25:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:25:03 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1027551998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:25:03 compute-0 nova_compute[259627]: 2025-10-14 09:25:03.928 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:25:03 compute-0 nova_compute[259627]: 2025-10-14 09:25:03.936 2 DEBUG nova.compute.provider_tree [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:25:03 compute-0 nova_compute[259627]: 2025-10-14 09:25:03.954 2 DEBUG nova.scheduler.client.report [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:25:03 compute-0 nova_compute[259627]: 2025-10-14 09:25:03.987 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:03 compute-0 nova_compute[259627]: 2025-10-14 09:25:03.988 2 DEBUG nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.098 2 DEBUG nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.098 2 DEBUG nova.network.neutron [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.123 2 INFO nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.146 2 DEBUG nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.229 2 DEBUG nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.230 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.231 2 INFO nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Creating image(s)
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.260 2 DEBUG nova.storage.rbd_utils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c0981e31-738f-44e8-be4c-b64961716660_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:25:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.287 2 DEBUG nova.storage.rbd_utils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c0981e31-738f-44e8-be4c-b64961716660_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.320 2 DEBUG nova.storage.rbd_utils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c0981e31-738f-44e8-be4c-b64961716660_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.325 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.432 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.433 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.434 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.435 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.471 2 DEBUG nova.storage.rbd_utils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c0981e31-738f-44e8-be4c-b64961716660_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.477 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c0981e31-738f-44e8-be4c-b64961716660_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:25:04 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1027551998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:25:04 compute-0 podman[384427]: 2025-10-14 09:25:04.702967993 +0000 UTC m=+0.105355814 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct 14 09:25:04 compute-0 podman[384434]: 2025-10-14 09:25:04.705835904 +0000 UTC m=+0.110731177 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.819 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c0981e31-738f-44e8-be4c-b64961716660_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.881 2 DEBUG nova.storage.rbd_utils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image c0981e31-738f-44e8-be4c-b64961716660_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.975 2 DEBUG nova.policy [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.982 2 DEBUG nova.objects.instance [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid c0981e31-738f-44e8-be4c-b64961716660 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.997 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.997 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Ensure instance console log exists: /var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.998 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.998 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:04 compute-0 nova_compute[259627]: 2025-10-14 09:25:04.999 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:25:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1794537339' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:25:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:25:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1794537339' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:25:05 compute-0 ceph-mon[74249]: pgmap v2158: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 14 09:25:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1794537339' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:25:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1794537339' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:25:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 88 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 14 09:25:06 compute-0 nova_compute[259627]: 2025-10-14 09:25:06.766 2 DEBUG nova.network.neutron [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Successfully updated port: 8cdca031-de5b-4956-a27b-c6c6320c9764 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:25:06 compute-0 nova_compute[259627]: 2025-10-14 09:25:06.788 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-c0981e31-738f-44e8-be4c-b64961716660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:25:06 compute-0 nova_compute[259627]: 2025-10-14 09:25:06.789 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-c0981e31-738f-44e8-be4c-b64961716660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:25:06 compute-0 nova_compute[259627]: 2025-10-14 09:25:06.789 2 DEBUG nova.network.neutron [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:25:06 compute-0 nova_compute[259627]: 2025-10-14 09:25:06.885 2 DEBUG nova.compute.manager [req-9e2b2dcd-220f-4720-b799-2b3690c2cd08 req-234f6586-ae5a-422c-9a3f-f24050690e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Received event network-changed-8cdca031-de5b-4956-a27b-c6c6320c9764 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:25:06 compute-0 nova_compute[259627]: 2025-10-14 09:25:06.885 2 DEBUG nova.compute.manager [req-9e2b2dcd-220f-4720-b799-2b3690c2cd08 req-234f6586-ae5a-422c-9a3f-f24050690e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Refreshing instance network info cache due to event network-changed-8cdca031-de5b-4956-a27b-c6c6320c9764. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:25:06 compute-0 nova_compute[259627]: 2025-10-14 09:25:06.885 2 DEBUG oslo_concurrency.lockutils [req-9e2b2dcd-220f-4720-b799-2b3690c2cd08 req-234f6586-ae5a-422c-9a3f-f24050690e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c0981e31-738f-44e8-be4c-b64961716660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:25:06 compute-0 nova_compute[259627]: 2025-10-14 09:25:06.981 2 DEBUG nova.network.neutron [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:25:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:07.040 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:07.041 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:07.041 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:07 compute-0 nova_compute[259627]: 2025-10-14 09:25:07.277 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433892.2757423, c4b3476e-7b32-4a60-ad45-41cb6716adaf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:25:07 compute-0 nova_compute[259627]: 2025-10-14 09:25:07.277 2 INFO nova.compute.manager [-] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] VM Stopped (Lifecycle Event)
Oct 14 09:25:07 compute-0 nova_compute[259627]: 2025-10-14 09:25:07.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:07 compute-0 nova_compute[259627]: 2025-10-14 09:25:07.308 2 DEBUG nova.compute.manager [None req-34b6b0a4-892f-453d-a301-94ff33d99efb - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:25:07 compute-0 ceph-mon[74249]: pgmap v2159: 305 pgs: 305 active+clean; 88 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 14 09:25:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:25:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 88 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.761 2 DEBUG nova.network.neutron [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Updating instance_info_cache with network_info: [{"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.784 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-c0981e31-738f-44e8-be4c-b64961716660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.785 2 DEBUG nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Instance network_info: |[{"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.786 2 DEBUG oslo_concurrency.lockutils [req-9e2b2dcd-220f-4720-b799-2b3690c2cd08 req-234f6586-ae5a-422c-9a3f-f24050690e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c0981e31-738f-44e8-be4c-b64961716660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.787 2 DEBUG nova.network.neutron [req-9e2b2dcd-220f-4720-b799-2b3690c2cd08 req-234f6586-ae5a-422c-9a3f-f24050690e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Refreshing network info cache for port 8cdca031-de5b-4956-a27b-c6c6320c9764 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.795 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Start _get_guest_xml network_info=[{"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.803 2 WARNING nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.808 2 DEBUG nova.virt.libvirt.host [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.809 2 DEBUG nova.virt.libvirt.host [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.814 2 DEBUG nova.virt.libvirt.host [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.815 2 DEBUG nova.virt.libvirt.host [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.816 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.816 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.817 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.817 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.818 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.818 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.819 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.819 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.820 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.820 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.820 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.821 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:25:08 compute-0 nova_compute[259627]: 2025-10-14 09:25:08.826 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:25:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:25:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1171874576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.272 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.310 2 DEBUG nova.storage.rbd_utils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c0981e31-738f-44e8-be4c-b64961716660_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.315 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:25:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:09.526 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:25:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:09.528 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:09 compute-0 ceph-mon[74249]: pgmap v2160: 305 pgs: 305 active+clean; 88 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:25:09 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1171874576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:25:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2731177902' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.819 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.821 2 DEBUG nova.virt.libvirt.vif [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:25:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-546978415',display_name='tempest-TestNetworkBasicOps-server-546978415',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-546978415',id=124,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNKPHsxRqxnHLpMWsVQOQ2YVhzbM1QIaVRvazbKfcBx066Wk2bLss4UyHnFnwGk2N+hL3dCcdm0s3ho7BXaEBpPBlInClKepgsjMFj/5tj/fAwTM9jsdqXQDPYNKI8XGpQ==',key_name='tempest-TestNetworkBasicOps-1675596793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ycr49veq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:25:04Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=c0981e31-738f-44e8-be4c-b64961716660,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.822 2 DEBUG nova.network.os_vif_util [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.824 2 DEBUG nova.network.os_vif_util [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.826 2 DEBUG nova.objects.instance [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid c0981e31-738f-44e8-be4c-b64961716660 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.844 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:25:09 compute-0 nova_compute[259627]:   <uuid>c0981e31-738f-44e8-be4c-b64961716660</uuid>
Oct 14 09:25:09 compute-0 nova_compute[259627]:   <name>instance-0000007c</name>
Oct 14 09:25:09 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:25:09 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:25:09 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkBasicOps-server-546978415</nova:name>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:25:08</nova:creationTime>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:25:09 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:25:09 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:25:09 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:25:09 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:25:09 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:25:09 compute-0 nova_compute[259627]:         <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:25:09 compute-0 nova_compute[259627]:         <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:25:09 compute-0 nova_compute[259627]:         <nova:port uuid="8cdca031-de5b-4956-a27b-c6c6320c9764">
Oct 14 09:25:09 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:25:09 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:25:09 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <system>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <entry name="serial">c0981e31-738f-44e8-be4c-b64961716660</entry>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <entry name="uuid">c0981e31-738f-44e8-be4c-b64961716660</entry>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     </system>
Oct 14 09:25:09 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:25:09 compute-0 nova_compute[259627]:   <os>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:   </os>
Oct 14 09:25:09 compute-0 nova_compute[259627]:   <features>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:   </features>
Oct 14 09:25:09 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:25:09 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:25:09 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/c0981e31-738f-44e8-be4c-b64961716660_disk">
Oct 14 09:25:09 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       </source>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:25:09 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/c0981e31-738f-44e8-be4c-b64961716660_disk.config">
Oct 14 09:25:09 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       </source>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:25:09 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:ed:fc:28"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <target dev="tap8cdca031-de"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660/console.log" append="off"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <video>
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     </video>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:25:09 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:25:09 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:25:09 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:25:09 compute-0 nova_compute[259627]: </domain>
Oct 14 09:25:09 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.846 2 DEBUG nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Preparing to wait for external event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.847 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "c0981e31-738f-44e8-be4c-b64961716660-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.847 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.848 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.849 2 DEBUG nova.virt.libvirt.vif [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:25:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-546978415',display_name='tempest-TestNetworkBasicOps-server-546978415',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-546978415',id=124,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNKPHsxRqxnHLpMWsVQOQ2YVhzbM1QIaVRvazbKfcBx066Wk2bLss4UyHnFnwGk2N+hL3dCcdm0s3ho7BXaEBpPBlInClKepgsjMFj/5tj/fAwTM9jsdqXQDPYNKI8XGpQ==',key_name='tempest-TestNetworkBasicOps-1675596793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ycr49veq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:25:04Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=c0981e31-738f-44e8-be4c-b64961716660,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.850 2 DEBUG nova.network.os_vif_util [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.851 2 DEBUG nova.network.os_vif_util [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.851 2 DEBUG os_vif [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.853 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.854 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.859 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8cdca031-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.860 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8cdca031-de, col_values=(('external_ids', {'iface-id': '8cdca031-de5b-4956-a27b-c6c6320c9764', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:fc:28', 'vm-uuid': 'c0981e31-738f-44e8-be4c-b64961716660'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:09 compute-0 NetworkManager[44885]: <info>  [1760433909.8628] manager: (tap8cdca031-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/538)
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.871 2 INFO os_vif [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de')
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.943 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.944 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.944 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:ed:fc:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.945 2 INFO nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Using config drive
Oct 14 09:25:09 compute-0 nova_compute[259627]: 2025-10-14 09:25:09.979 2 DEBUG nova.storage.rbd_utils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c0981e31-738f-44e8-be4c-b64961716660_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:25:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:25:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:10.530 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:25:10 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2731177902' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:25:10 compute-0 nova_compute[259627]: 2025-10-14 09:25:10.786 2 DEBUG nova.network.neutron [req-9e2b2dcd-220f-4720-b799-2b3690c2cd08 req-234f6586-ae5a-422c-9a3f-f24050690e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Updated VIF entry in instance network info cache for port 8cdca031-de5b-4956-a27b-c6c6320c9764. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:25:10 compute-0 nova_compute[259627]: 2025-10-14 09:25:10.787 2 DEBUG nova.network.neutron [req-9e2b2dcd-220f-4720-b799-2b3690c2cd08 req-234f6586-ae5a-422c-9a3f-f24050690e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Updating instance_info_cache with network_info: [{"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:25:10 compute-0 nova_compute[259627]: 2025-10-14 09:25:10.803 2 DEBUG oslo_concurrency.lockutils [req-9e2b2dcd-220f-4720-b799-2b3690c2cd08 req-234f6586-ae5a-422c-9a3f-f24050690e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c0981e31-738f-44e8-be4c-b64961716660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:25:10 compute-0 nova_compute[259627]: 2025-10-14 09:25:10.972 2 INFO nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Creating config drive at /var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660/disk.config
Oct 14 09:25:10 compute-0 nova_compute[259627]: 2025-10-14 09:25:10.980 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpraoob577 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:25:11 compute-0 nova_compute[259627]: 2025-10-14 09:25:11.126 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpraoob577" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:25:11 compute-0 nova_compute[259627]: 2025-10-14 09:25:11.176 2 DEBUG nova.storage.rbd_utils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c0981e31-738f-44e8-be4c-b64961716660_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:25:11 compute-0 nova_compute[259627]: 2025-10-14 09:25:11.181 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660/disk.config c0981e31-738f-44e8-be4c-b64961716660_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:25:11 compute-0 nova_compute[259627]: 2025-10-14 09:25:11.393 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660/disk.config c0981e31-738f-44e8-be4c-b64961716660_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:25:11 compute-0 nova_compute[259627]: 2025-10-14 09:25:11.396 2 INFO nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Deleting local config drive /var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660/disk.config because it was imported into RBD.
Oct 14 09:25:11 compute-0 kernel: tap8cdca031-de: entered promiscuous mode
Oct 14 09:25:11 compute-0 NetworkManager[44885]: <info>  [1760433911.4765] manager: (tap8cdca031-de): new Tun device (/org/freedesktop/NetworkManager/Devices/539)
Oct 14 09:25:11 compute-0 ovn_controller[152662]: 2025-10-14T09:25:11Z|01328|binding|INFO|Claiming lport 8cdca031-de5b-4956-a27b-c6c6320c9764 for this chassis.
Oct 14 09:25:11 compute-0 ovn_controller[152662]: 2025-10-14T09:25:11Z|01329|binding|INFO|8cdca031-de5b-4956-a27b-c6c6320c9764: Claiming fa:16:3e:ed:fc:28 10.100.0.14
Oct 14 09:25:11 compute-0 nova_compute[259627]: 2025-10-14 09:25:11.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.491 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:fc:28 10.100.0.14'], port_security=['fa:16:3e:ed:fc:28 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1769258053', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c0981e31-738f-44e8-be4c-b64961716660', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1769258053', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '6', 'neutron:security_group_ids': '72094e2b-bb7f-4f40-9ee7-497daf8e97ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=596a534e-c0c1-49ed-bcdd-00855a90d08e, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8cdca031-de5b-4956-a27b-c6c6320c9764) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.494 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8cdca031-de5b-4956-a27b-c6c6320c9764 in datapath ee3efe32-94e6-45cb-ae71-b379f4a2309a bound to our chassis
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.496 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee3efe32-94e6-45cb-ae71-b379f4a2309a
Oct 14 09:25:11 compute-0 ovn_controller[152662]: 2025-10-14T09:25:11Z|01330|binding|INFO|Setting lport 8cdca031-de5b-4956-a27b-c6c6320c9764 ovn-installed in OVS
Oct 14 09:25:11 compute-0 ovn_controller[152662]: 2025-10-14T09:25:11Z|01331|binding|INFO|Setting lport 8cdca031-de5b-4956-a27b-c6c6320c9764 up in Southbound
Oct 14 09:25:11 compute-0 nova_compute[259627]: 2025-10-14 09:25:11.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:11 compute-0 nova_compute[259627]: 2025-10-14 09:25:11.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.512 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[11837e53-63e2-4c77-a4c6-95eb1c60e317]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.513 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee3efe32-91 in ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.515 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee3efe32-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.515 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fb983b8b-6bb9-458b-b189-d2657f58532e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.516 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d7f3aa7e-17c2-4b6c-9a27-f461c43c50eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:11 compute-0 systemd-machined[214636]: New machine qemu-157-instance-0000007c.
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.530 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[22e87b2c-6ee6-4a7d-9d74-e5c4dada7c6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:11 compute-0 systemd[1]: Started Virtual Machine qemu-157-instance-0000007c.
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.546 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ee7c6c11-4252-468a-9e3c-4c3886b5aa56]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:11 compute-0 systemd-udevd[384692]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:25:11 compute-0 NetworkManager[44885]: <info>  [1760433911.5829] device (tap8cdca031-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.584 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e12a9d-1b55-47e3-9748-85e46bf18cb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:11 compute-0 NetworkManager[44885]: <info>  [1760433911.5878] device (tap8cdca031-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:25:11 compute-0 systemd-udevd[384696]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.592 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[648dff7a-e0b6-4b58-a502-3b5e70114538]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:11 compute-0 NetworkManager[44885]: <info>  [1760433911.5946] manager: (tapee3efe32-90): new Veth device (/org/freedesktop/NetworkManager/Devices/540)
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.644 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0b1bd880-500c-4762-bc6d-ee52ca560467]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.649 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b4fc44f8-2c09-481b-b48e-076e892a9457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:11 compute-0 NetworkManager[44885]: <info>  [1760433911.6754] device (tapee3efe32-90): carrier: link connected
Oct 14 09:25:11 compute-0 ceph-mon[74249]: pgmap v2161: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.681 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3bbbacad-cb69-440b-af39-212dbfc28bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.702 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c800d9a-a8e2-44ce-9261-d489865aa268]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee3efe32-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:fa:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 380], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 769043, 'reachable_time': 43826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384721, 'error': None, 'target': 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.722 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[54371e6d-4cd5-459a-84a6-660f0739287f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:fab9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 769043, 'tstamp': 769043}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384722, 'error': None, 'target': 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.742 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9582821c-def1-4e8c-b581-f735675ac67b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee3efe32-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:fa:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 380], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 769043, 'reachable_time': 43826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384723, 'error': None, 'target': 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.784 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0ac306-32b7-4f11-905e-0b1142736989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:11 compute-0 nova_compute[259627]: 2025-10-14 09:25:11.854 2 DEBUG nova.compute.manager [req-a09dee75-919c-4e98-abd6-04b027f2338f req-0b8786f9-e519-4afa-963e-d1fbe6c9a2c2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Received event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:25:11 compute-0 nova_compute[259627]: 2025-10-14 09:25:11.855 2 DEBUG oslo_concurrency.lockutils [req-a09dee75-919c-4e98-abd6-04b027f2338f req-0b8786f9-e519-4afa-963e-d1fbe6c9a2c2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c0981e31-738f-44e8-be4c-b64961716660-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:11 compute-0 nova_compute[259627]: 2025-10-14 09:25:11.856 2 DEBUG oslo_concurrency.lockutils [req-a09dee75-919c-4e98-abd6-04b027f2338f req-0b8786f9-e519-4afa-963e-d1fbe6c9a2c2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:11 compute-0 nova_compute[259627]: 2025-10-14 09:25:11.856 2 DEBUG oslo_concurrency.lockutils [req-a09dee75-919c-4e98-abd6-04b027f2338f req-0b8786f9-e519-4afa-963e-d1fbe6c9a2c2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:11 compute-0 nova_compute[259627]: 2025-10-14 09:25:11.856 2 DEBUG nova.compute.manager [req-a09dee75-919c-4e98-abd6-04b027f2338f req-0b8786f9-e519-4afa-963e-d1fbe6c9a2c2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Processing event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.870 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[015901de-7228-4f41-b88e-a3e348b28acb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.872 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee3efe32-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.872 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.873 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee3efe32-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:25:11 compute-0 nova_compute[259627]: 2025-10-14 09:25:11.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:11 compute-0 NetworkManager[44885]: <info>  [1760433911.8763] manager: (tapee3efe32-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/541)
Oct 14 09:25:11 compute-0 kernel: tapee3efe32-90: entered promiscuous mode
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.880 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee3efe32-90, col_values=(('external_ids', {'iface-id': '77c455ce-a111-4a2d-9630-1d923bb22b5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:25:11 compute-0 ovn_controller[152662]: 2025-10-14T09:25:11Z|01332|binding|INFO|Releasing lport 77c455ce-a111-4a2d-9630-1d923bb22b5c from this chassis (sb_readonly=0)
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.884 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee3efe32-94e6-45cb-ae71-b379f4a2309a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee3efe32-94e6-45cb-ae71-b379f4a2309a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.885 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[30e94be4-061c-417a-9ae4-aeb75a7d955d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.886 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-ee3efe32-94e6-45cb-ae71-b379f4a2309a
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/ee3efe32-94e6-45cb-ae71-b379f4a2309a.pid.haproxy
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID ee3efe32-94e6-45cb-ae71-b379f4a2309a
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:25:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.888 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'env', 'PROCESS_TAG=haproxy-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee3efe32-94e6-45cb-ae71-b379f4a2309a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:25:11 compute-0 nova_compute[259627]: 2025-10-14 09:25:11.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:25:12 compute-0 podman[384797]: 2025-10-14 09:25:12.344604323 +0000 UTC m=+0.058808624 container create 9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS)
Oct 14 09:25:12 compute-0 systemd[1]: Started libpod-conmon-9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6.scope.
Oct 14 09:25:12 compute-0 podman[384797]: 2025-10-14 09:25:12.314576781 +0000 UTC m=+0.028781072 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:25:12 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3cb1e7089941c132d39655646185e2c3a1080aba046532a423fb17ff44148c4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:12 compute-0 podman[384797]: 2025-10-14 09:25:12.445821604 +0000 UTC m=+0.160025965 container init 9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:25:12 compute-0 podman[384797]: 2025-10-14 09:25:12.452983721 +0000 UTC m=+0.167188042 container start 9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:25:12 compute-0 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384812]: [NOTICE]   (384816) : New worker (384818) forked
Oct 14 09:25:12 compute-0 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384812]: [NOTICE]   (384816) : Loading success.
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.841 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433912.8405082, c0981e31-738f-44e8-be4c-b64961716660 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.842 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] VM Started (Lifecycle Event)
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.844 2 DEBUG nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.849 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.853 2 INFO nova.virt.libvirt.driver [-] [instance: c0981e31-738f-44e8-be4c-b64961716660] Instance spawned successfully.
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.854 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.883 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.890 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.891 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.892 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.893 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.893 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.894 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.901 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.948 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.948 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433912.840722, c0981e31-738f-44e8-be4c-b64961716660 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.949 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] VM Paused (Lifecycle Event)
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.982 2 INFO nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Took 8.75 seconds to spawn the instance on the hypervisor.
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.983 2 DEBUG nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.985 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:25:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.996 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433912.8476634, c0981e31-738f-44e8-be4c-b64961716660 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:25:12 compute-0 nova_compute[259627]: 2025-10-14 09:25:12.996 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] VM Resumed (Lifecycle Event)
Oct 14 09:25:13 compute-0 nova_compute[259627]: 2025-10-14 09:25:13.023 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:25:13 compute-0 nova_compute[259627]: 2025-10-14 09:25:13.027 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:25:13 compute-0 nova_compute[259627]: 2025-10-14 09:25:13.064 2 INFO nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Took 9.82 seconds to build instance.
Oct 14 09:25:13 compute-0 nova_compute[259627]: 2025-10-14 09:25:13.080 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:13 compute-0 sudo[384827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:25:13 compute-0 sudo[384827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:13 compute-0 sudo[384827]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:13 compute-0 sudo[384852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:25:13 compute-0 sudo[384852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:13 compute-0 sudo[384852]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:13 compute-0 sudo[384877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:25:13 compute-0 sudo[384877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:13 compute-0 sudo[384877]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:13 compute-0 sudo[384902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 14 09:25:13 compute-0 sudo[384902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:13 compute-0 ceph-mon[74249]: pgmap v2162: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:25:13 compute-0 sudo[384902]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:25:13 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:25:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:25:13 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:25:13 compute-0 sudo[384948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:25:13 compute-0 sudo[384948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:13 compute-0 sudo[384948]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:13 compute-0 sudo[384973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:25:13 compute-0 sudo[384973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:13 compute-0 sudo[384973]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:13 compute-0 sudo[384998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:25:13 compute-0 nova_compute[259627]: 2025-10-14 09:25:13.926 2 DEBUG nova.compute.manager [req-b13a34a8-2026-49c1-b6de-4393853763c3 req-35750815-4706-45c9-b580-62e4eb0ddafd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Received event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:25:13 compute-0 nova_compute[259627]: 2025-10-14 09:25:13.926 2 DEBUG oslo_concurrency.lockutils [req-b13a34a8-2026-49c1-b6de-4393853763c3 req-35750815-4706-45c9-b580-62e4eb0ddafd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c0981e31-738f-44e8-be4c-b64961716660-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:13 compute-0 sudo[384998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:13 compute-0 nova_compute[259627]: 2025-10-14 09:25:13.926 2 DEBUG oslo_concurrency.lockutils [req-b13a34a8-2026-49c1-b6de-4393853763c3 req-35750815-4706-45c9-b580-62e4eb0ddafd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:13 compute-0 nova_compute[259627]: 2025-10-14 09:25:13.926 2 DEBUG oslo_concurrency.lockutils [req-b13a34a8-2026-49c1-b6de-4393853763c3 req-35750815-4706-45c9-b580-62e4eb0ddafd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:13 compute-0 nova_compute[259627]: 2025-10-14 09:25:13.927 2 DEBUG nova.compute.manager [req-b13a34a8-2026-49c1-b6de-4393853763c3 req-35750815-4706-45c9-b580-62e4eb0ddafd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] No waiting events found dispatching network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:25:13 compute-0 nova_compute[259627]: 2025-10-14 09:25:13.927 2 WARNING nova.compute.manager [req-b13a34a8-2026-49c1-b6de-4393853763c3 req-35750815-4706-45c9-b580-62e4eb0ddafd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Received unexpected event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 for instance with vm_state active and task_state None.
Oct 14 09:25:13 compute-0 sudo[384998]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:13 compute-0 sudo[385023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:25:13 compute-0 sudo[385023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:25:14 compute-0 sudo[385023]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:14 compute-0 sudo[385079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:25:14 compute-0 sudo[385079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:14 compute-0 sudo[385079]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:14 compute-0 sudo[385104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:25:14 compute-0 sudo[385104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:14 compute-0 sudo[385104]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:14 compute-0 sudo[385129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:25:14 compute-0 sudo[385129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:14 compute-0 sudo[385129]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:14 compute-0 nova_compute[259627]: 2025-10-14 09:25:14.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:14 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:25:14 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:25:14 compute-0 sudo[385154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- inventory --format=json-pretty --filter-for-batch
Oct 14 09:25:14 compute-0 sudo[385154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:14 compute-0 nova_compute[259627]: 2025-10-14 09:25:14.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:15 compute-0 podman[385216]: 2025-10-14 09:25:15.253787177 +0000 UTC m=+0.061844819 container create 7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.260 2 DEBUG oslo_concurrency.lockutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "c0981e31-738f-44e8-be4c-b64961716660" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.260 2 DEBUG oslo_concurrency.lockutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.261 2 DEBUG oslo_concurrency.lockutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "c0981e31-738f-44e8-be4c-b64961716660-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.261 2 DEBUG oslo_concurrency.lockutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.261 2 DEBUG oslo_concurrency.lockutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.262 2 INFO nova.compute.manager [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Terminating instance
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.263 2 DEBUG nova.compute.manager [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:25:15 compute-0 podman[385216]: 2025-10-14 09:25:15.226956354 +0000 UTC m=+0.035013976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:25:15 compute-0 systemd[1]: Started libpod-conmon-7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6.scope.
Oct 14 09:25:15 compute-0 kernel: tap8cdca031-de (unregistering): left promiscuous mode
Oct 14 09:25:15 compute-0 NetworkManager[44885]: <info>  [1760433915.3410] device (tap8cdca031-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:15 compute-0 ovn_controller[152662]: 2025-10-14T09:25:15Z|01333|binding|INFO|Releasing lport 8cdca031-de5b-4956-a27b-c6c6320c9764 from this chassis (sb_readonly=0)
Oct 14 09:25:15 compute-0 ovn_controller[152662]: 2025-10-14T09:25:15Z|01334|binding|INFO|Setting lport 8cdca031-de5b-4956-a27b-c6c6320c9764 down in Southbound
Oct 14 09:25:15 compute-0 ovn_controller[152662]: 2025-10-14T09:25:15Z|01335|binding|INFO|Removing iface tap8cdca031-de ovn-installed in OVS
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.391 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:fc:28 10.100.0.14'], port_security=['fa:16:3e:ed:fc:28 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1769258053', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c0981e31-738f-44e8-be4c-b64961716660', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1769258053', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '8', 'neutron:security_group_ids': '72094e2b-bb7f-4f40-9ee7-497daf8e97ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.240', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=596a534e-c0c1-49ed-bcdd-00855a90d08e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8cdca031-de5b-4956-a27b-c6c6320c9764) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:25:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.392 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8cdca031-de5b-4956-a27b-c6c6320c9764 in datapath ee3efe32-94e6-45cb-ae71-b379f4a2309a unbound from our chassis
Oct 14 09:25:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.393 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee3efe32-94e6-45cb-ae71-b379f4a2309a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:25:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.394 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f5375285-91c1-487e-b794-f81a6c9987c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.395 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a namespace which is not needed anymore
Oct 14 09:25:15 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:15 compute-0 podman[385216]: 2025-10-14 09:25:15.42183244 +0000 UTC m=+0.229890112 container init 7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hodgkin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:25:15 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Oct 14 09:25:15 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007c.scope: Consumed 3.752s CPU time.
Oct 14 09:25:15 compute-0 podman[385216]: 2025-10-14 09:25:15.433605691 +0000 UTC m=+0.241663333 container start 7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:25:15 compute-0 systemd-machined[214636]: Machine qemu-157-instance-0000007c terminated.
Oct 14 09:25:15 compute-0 podman[385216]: 2025-10-14 09:25:15.437407335 +0000 UTC m=+0.245465017 container attach 7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:25:15 compute-0 systemd[1]: libpod-7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6.scope: Deactivated successfully.
Oct 14 09:25:15 compute-0 affectionate_hodgkin[385232]: 167 167
Oct 14 09:25:15 compute-0 conmon[385232]: conmon 7cdf12f0918426b56a1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6.scope/container/memory.events
Oct 14 09:25:15 compute-0 podman[385216]: 2025-10-14 09:25:15.442742156 +0000 UTC m=+0.250799798 container died 7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hodgkin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 09:25:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac10fa9e5f153fef45ed4d9a5811ca423f863c7412ea62c8082d590aeb18e6cf-merged.mount: Deactivated successfully.
Oct 14 09:25:15 compute-0 podman[385216]: 2025-10-14 09:25:15.509344502 +0000 UTC m=+0.317402104 container remove 7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hodgkin, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.526 2 INFO nova.virt.libvirt.driver [-] [instance: c0981e31-738f-44e8-be4c-b64961716660] Instance destroyed successfully.
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.529 2 DEBUG nova.objects.instance [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid c0981e31-738f-44e8-be4c-b64961716660 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:25:15 compute-0 systemd[1]: libpod-conmon-7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6.scope: Deactivated successfully.
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.544 2 DEBUG nova.virt.libvirt.vif [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:25:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-546978415',display_name='tempest-TestNetworkBasicOps-server-546978415',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-546978415',id=124,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNKPHsxRqxnHLpMWsVQOQ2YVhzbM1QIaVRvazbKfcBx066Wk2bLss4UyHnFnwGk2N+hL3dCcdm0s3ho7BXaEBpPBlInClKepgsjMFj/5tj/fAwTM9jsdqXQDPYNKI8XGpQ==',key_name='tempest-TestNetworkBasicOps-1675596793',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:25:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ycr49veq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:25:13Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=c0981e31-738f-44e8-be4c-b64961716660,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.545 2 DEBUG nova.network.os_vif_util [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.546 2 DEBUG nova.network.os_vif_util [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.546 2 DEBUG os_vif [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.549 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8cdca031-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.556 2 INFO os_vif [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de')
Oct 14 09:25:15 compute-0 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384812]: [NOTICE]   (384816) : haproxy version is 2.8.14-c23fe91
Oct 14 09:25:15 compute-0 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384812]: [NOTICE]   (384816) : path to executable is /usr/sbin/haproxy
Oct 14 09:25:15 compute-0 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384812]: [WARNING]  (384816) : Exiting Master process...
Oct 14 09:25:15 compute-0 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384812]: [WARNING]  (384816) : Exiting Master process...
Oct 14 09:25:15 compute-0 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384812]: [ALERT]    (384816) : Current worker (384818) exited with code 143 (Terminated)
Oct 14 09:25:15 compute-0 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384812]: [WARNING]  (384816) : All workers exited. Exiting... (0)
Oct 14 09:25:15 compute-0 systemd[1]: libpod-9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6.scope: Deactivated successfully.
Oct 14 09:25:15 compute-0 conmon[384812]: conmon 9927ec3ccf3768341212 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6.scope/container/memory.events
Oct 14 09:25:15 compute-0 podman[385282]: 2025-10-14 09:25:15.62215082 +0000 UTC m=+0.059666396 container died 9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:25:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6-userdata-shm.mount: Deactivated successfully.
Oct 14 09:25:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3cb1e7089941c132d39655646185e2c3a1080aba046532a423fb17ff44148c4-merged.mount: Deactivated successfully.
Oct 14 09:25:15 compute-0 podman[385282]: 2025-10-14 09:25:15.661332158 +0000 UTC m=+0.098847724 container cleanup 9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 09:25:15 compute-0 systemd[1]: libpod-conmon-9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6.scope: Deactivated successfully.
Oct 14 09:25:15 compute-0 podman[385321]: 2025-10-14 09:25:15.68568552 +0000 UTC m=+0.043055995 container create 8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:25:15 compute-0 systemd[1]: Started libpod-conmon-8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377.scope.
Oct 14 09:25:15 compute-0 podman[385348]: 2025-10-14 09:25:15.735701265 +0000 UTC m=+0.051787040 container remove 9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:25:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.740 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d202c7bb-2f45-4755-bd4c-c8ddee81809a]: (4, ('Tue Oct 14 09:25:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a (9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6)\n9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6\nTue Oct 14 09:25:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a (9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6)\n9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:15 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:25:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.743 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0617239c-fe7e-455d-b1e7-b634a3346243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.744 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee3efe32-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:25:15 compute-0 kernel: tapee3efe32-90: left promiscuous mode
Oct 14 09:25:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/040a1eafa18ad43db61baedc35ab34d3bce36379a237d07a6c1be1830f6a558f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/040a1eafa18ad43db61baedc35ab34d3bce36379a237d07a6c1be1830f6a558f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/040a1eafa18ad43db61baedc35ab34d3bce36379a237d07a6c1be1830f6a558f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/040a1eafa18ad43db61baedc35ab34d3bce36379a237d07a6c1be1830f6a558f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.763 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2c627340-d9be-44fe-aec4-64d10b7eee39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:15 compute-0 podman[385321]: 2025-10-14 09:25:15.669650803 +0000 UTC m=+0.027021288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:25:15 compute-0 ceph-mon[74249]: pgmap v2163: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:25:15 compute-0 podman[385321]: 2025-10-14 09:25:15.772858214 +0000 UTC m=+0.130228699 container init 8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:25:15 compute-0 podman[385321]: 2025-10-14 09:25:15.782612785 +0000 UTC m=+0.139983250 container start 8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 09:25:15 compute-0 podman[385321]: 2025-10-14 09:25:15.786211894 +0000 UTC m=+0.143582359 container attach 8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:25:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.794 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[af5ba01d-4ec3-4497-aa04-3c3230b7ef7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.796 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9965318b-3a97-408d-b2f7-d37a82d75382]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.814 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[733461b9-0f04-4220-a8ba-dbfbddc3c727]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 769033, 'reachable_time': 37028, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385369, 'error': None, 'target': 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.816 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:25:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.817 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[29401b3e-4831-4cb2-b87b-7deeabdcb686]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.943 2 INFO nova.virt.libvirt.driver [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Deleting instance files /var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660_del
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.944 2 INFO nova.virt.libvirt.driver [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Deletion of /var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660_del complete
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.998 2 INFO nova.compute.manager [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.998 2 DEBUG oslo.service.loopingcall [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.998 2 DEBUG nova.compute.manager [-] [instance: c0981e31-738f-44e8-be4c-b64961716660] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:25:15 compute-0 nova_compute[259627]: 2025-10-14 09:25:15.998 2 DEBUG nova.network.neutron [-] [instance: c0981e31-738f-44e8-be4c-b64961716660] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:25:16 compute-0 nova_compute[259627]: 2025-10-14 09:25:16.060 2 DEBUG nova.compute.manager [req-a2b22f13-aec4-4451-a393-2c0b6ab55e58 req-dcc44fd7-a850-40d8-b3b8-4a975d92864d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Received event network-vif-unplugged-8cdca031-de5b-4956-a27b-c6c6320c9764 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:25:16 compute-0 nova_compute[259627]: 2025-10-14 09:25:16.061 2 DEBUG oslo_concurrency.lockutils [req-a2b22f13-aec4-4451-a393-2c0b6ab55e58 req-dcc44fd7-a850-40d8-b3b8-4a975d92864d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c0981e31-738f-44e8-be4c-b64961716660-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:16 compute-0 nova_compute[259627]: 2025-10-14 09:25:16.061 2 DEBUG oslo_concurrency.lockutils [req-a2b22f13-aec4-4451-a393-2c0b6ab55e58 req-dcc44fd7-a850-40d8-b3b8-4a975d92864d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:16 compute-0 nova_compute[259627]: 2025-10-14 09:25:16.061 2 DEBUG oslo_concurrency.lockutils [req-a2b22f13-aec4-4451-a393-2c0b6ab55e58 req-dcc44fd7-a850-40d8-b3b8-4a975d92864d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:16 compute-0 nova_compute[259627]: 2025-10-14 09:25:16.061 2 DEBUG nova.compute.manager [req-a2b22f13-aec4-4451-a393-2c0b6ab55e58 req-dcc44fd7-a850-40d8-b3b8-4a975d92864d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] No waiting events found dispatching network-vif-unplugged-8cdca031-de5b-4956-a27b-c6c6320c9764 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:25:16 compute-0 nova_compute[259627]: 2025-10-14 09:25:16.061 2 DEBUG nova.compute.manager [req-a2b22f13-aec4-4451-a393-2c0b6ab55e58 req-dcc44fd7-a850-40d8-b3b8-4a975d92864d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Received event network-vif-unplugged-8cdca031-de5b-4956-a27b-c6c6320c9764 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:25:16 compute-0 systemd[1]: run-netns-ovnmeta\x2dee3efe32\x2d94e6\x2d45cb\x2dae71\x2db379f4a2309a.mount: Deactivated successfully.
Oct 14 09:25:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:25:17 compute-0 reverent_golick[385361]: [
Oct 14 09:25:17 compute-0 reverent_golick[385361]:     {
Oct 14 09:25:17 compute-0 reverent_golick[385361]:         "available": false,
Oct 14 09:25:17 compute-0 reverent_golick[385361]:         "ceph_device": false,
Oct 14 09:25:17 compute-0 reverent_golick[385361]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:         "lsm_data": {},
Oct 14 09:25:17 compute-0 reverent_golick[385361]:         "lvs": [],
Oct 14 09:25:17 compute-0 reverent_golick[385361]:         "path": "/dev/sr0",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:         "rejected_reasons": [
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "Has a FileSystem",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "Insufficient space (<5GB)"
Oct 14 09:25:17 compute-0 reverent_golick[385361]:         ],
Oct 14 09:25:17 compute-0 reverent_golick[385361]:         "sys_api": {
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "actuators": null,
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "device_nodes": "sr0",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "devname": "sr0",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "human_readable_size": "482.00 KB",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "id_bus": "ata",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "model": "QEMU DVD-ROM",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "nr_requests": "2",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "parent": "/dev/sr0",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "partitions": {},
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "path": "/dev/sr0",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "removable": "1",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "rev": "2.5+",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "ro": "0",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "rotational": "0",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "sas_address": "",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "sas_device_handle": "",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "scheduler_mode": "mq-deadline",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "sectors": 0,
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "sectorsize": "2048",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "size": 493568.0,
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "support_discard": "2048",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "type": "disk",
Oct 14 09:25:17 compute-0 reverent_golick[385361]:             "vendor": "QEMU"
Oct 14 09:25:17 compute-0 reverent_golick[385361]:         }
Oct 14 09:25:17 compute-0 reverent_golick[385361]:     }
Oct 14 09:25:17 compute-0 reverent_golick[385361]: ]
Oct 14 09:25:17 compute-0 systemd[1]: libpod-8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377.scope: Deactivated successfully.
Oct 14 09:25:17 compute-0 systemd[1]: libpod-8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377.scope: Consumed 1.491s CPU time.
Oct 14 09:25:17 compute-0 podman[385321]: 2025-10-14 09:25:17.261922328 +0000 UTC m=+1.619292833 container died 8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 09:25:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-040a1eafa18ad43db61baedc35ab34d3bce36379a237d07a6c1be1830f6a558f-merged.mount: Deactivated successfully.
Oct 14 09:25:17 compute-0 podman[385321]: 2025-10-14 09:25:17.342341385 +0000 UTC m=+1.699711850 container remove 8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:25:17 compute-0 systemd[1]: libpod-conmon-8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377.scope: Deactivated successfully.
Oct 14 09:25:17 compute-0 sudo[385154]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:25:17 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:25:17 compute-0 nova_compute[259627]: 2025-10-14 09:25:17.386 2 DEBUG nova.network.neutron [-] [instance: c0981e31-738f-44e8-be4c-b64961716660] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:25:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:25:17 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:25:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 14 09:25:17 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 09:25:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:25:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:25:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:25:17 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:25:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:25:17 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:25:17 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev e0155b0a-dfe8-4997-a18d-d5f21717cdb7 does not exist
Oct 14 09:25:17 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 7bd4f0ed-ac98-4f80-8cfb-62521ee88fd9 does not exist
Oct 14 09:25:17 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev a4418754-4f0a-4ce1-b407-b244220031f3 does not exist
Oct 14 09:25:17 compute-0 nova_compute[259627]: 2025-10-14 09:25:17.408 2 INFO nova.compute.manager [-] [instance: c0981e31-738f-44e8-be4c-b64961716660] Took 1.41 seconds to deallocate network for instance.
Oct 14 09:25:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:25:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:25:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:25:17 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:25:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:25:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:25:17 compute-0 nova_compute[259627]: 2025-10-14 09:25:17.452 2 DEBUG oslo_concurrency.lockutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:17 compute-0 nova_compute[259627]: 2025-10-14 09:25:17.453 2 DEBUG oslo_concurrency.lockutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:17 compute-0 sudo[387247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:25:17 compute-0 sudo[387247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:17 compute-0 sudo[387247]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:17 compute-0 nova_compute[259627]: 2025-10-14 09:25:17.507 2 DEBUG oslo_concurrency.processutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:25:17 compute-0 sudo[387272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:25:17 compute-0 sudo[387272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:17 compute-0 sudo[387272]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:17 compute-0 sudo[387298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:25:17 compute-0 sudo[387298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:17 compute-0 sudo[387298]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:17 compute-0 sudo[387340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:25:17 compute-0 sudo[387340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:17 compute-0 ceph-mon[74249]: pgmap v2164: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:25:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:25:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:25:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 09:25:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:25:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:25:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:25:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:25:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:25:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:25:17 compute-0 podman[387367]: 2025-10-14 09:25:17.837943351 +0000 UTC m=+0.083280858 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:25:17 compute-0 podman[387366]: 2025-10-14 09:25:17.852820269 +0000 UTC m=+0.109597729 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:25:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:25:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2493938824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:25:17 compute-0 nova_compute[259627]: 2025-10-14 09:25:17.959 2 DEBUG oslo_concurrency.processutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:25:17 compute-0 nova_compute[259627]: 2025-10-14 09:25:17.965 2 DEBUG nova.compute.provider_tree [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:25:17 compute-0 nova_compute[259627]: 2025-10-14 09:25:17.985 2 DEBUG nova.scheduler.client.report [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:25:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:25:18 compute-0 nova_compute[259627]: 2025-10-14 09:25:18.022 2 DEBUG oslo_concurrency.lockutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:18 compute-0 nova_compute[259627]: 2025-10-14 09:25:18.061 2 INFO nova.scheduler.client.report [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance c0981e31-738f-44e8-be4c-b64961716660
Oct 14 09:25:18 compute-0 nova_compute[259627]: 2025-10-14 09:25:18.131 2 DEBUG oslo_concurrency.lockutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:18 compute-0 nova_compute[259627]: 2025-10-14 09:25:18.163 2 DEBUG nova.compute.manager [req-0de48470-5b42-41ee-b255-4e2552946b33 req-2109c979-002c-42d5-bc52-5e429dee4d28 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Received event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:25:18 compute-0 nova_compute[259627]: 2025-10-14 09:25:18.163 2 DEBUG oslo_concurrency.lockutils [req-0de48470-5b42-41ee-b255-4e2552946b33 req-2109c979-002c-42d5-bc52-5e429dee4d28 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c0981e31-738f-44e8-be4c-b64961716660-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:18 compute-0 nova_compute[259627]: 2025-10-14 09:25:18.163 2 DEBUG oslo_concurrency.lockutils [req-0de48470-5b42-41ee-b255-4e2552946b33 req-2109c979-002c-42d5-bc52-5e429dee4d28 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:18 compute-0 nova_compute[259627]: 2025-10-14 09:25:18.164 2 DEBUG oslo_concurrency.lockutils [req-0de48470-5b42-41ee-b255-4e2552946b33 req-2109c979-002c-42d5-bc52-5e429dee4d28 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:18 compute-0 nova_compute[259627]: 2025-10-14 09:25:18.164 2 DEBUG nova.compute.manager [req-0de48470-5b42-41ee-b255-4e2552946b33 req-2109c979-002c-42d5-bc52-5e429dee4d28 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] No waiting events found dispatching network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:25:18 compute-0 nova_compute[259627]: 2025-10-14 09:25:18.164 2 WARNING nova.compute.manager [req-0de48470-5b42-41ee-b255-4e2552946b33 req-2109c979-002c-42d5-bc52-5e429dee4d28 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Received unexpected event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 for instance with vm_state deleted and task_state None.
Oct 14 09:25:18 compute-0 podman[387453]: 2025-10-14 09:25:18.188898033 +0000 UTC m=+0.067288773 container create 7a46ce94696fd771965f572fce07f8717480cc37833609464d048b6f68eb2029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:25:18 compute-0 systemd[1]: Started libpod-conmon-7a46ce94696fd771965f572fce07f8717480cc37833609464d048b6f68eb2029.scope.
Oct 14 09:25:18 compute-0 podman[387453]: 2025-10-14 09:25:18.166069489 +0000 UTC m=+0.044460209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:25:18 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:25:18 compute-0 podman[387453]: 2025-10-14 09:25:18.283322775 +0000 UTC m=+0.161713465 container init 7a46ce94696fd771965f572fce07f8717480cc37833609464d048b6f68eb2029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gagarin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Oct 14 09:25:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:25:18 compute-0 podman[387453]: 2025-10-14 09:25:18.296592583 +0000 UTC m=+0.174983273 container start 7a46ce94696fd771965f572fce07f8717480cc37833609464d048b6f68eb2029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gagarin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 09:25:18 compute-0 podman[387453]: 2025-10-14 09:25:18.301797822 +0000 UTC m=+0.180188512 container attach 7a46ce94696fd771965f572fce07f8717480cc37833609464d048b6f68eb2029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:25:18 compute-0 serene_gagarin[387469]: 167 167
Oct 14 09:25:18 compute-0 systemd[1]: libpod-7a46ce94696fd771965f572fce07f8717480cc37833609464d048b6f68eb2029.scope: Deactivated successfully.
Oct 14 09:25:18 compute-0 podman[387453]: 2025-10-14 09:25:18.308924388 +0000 UTC m=+0.187315118 container died 7a46ce94696fd771965f572fce07f8717480cc37833609464d048b6f68eb2029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gagarin, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:25:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cb9ec100d2c38d205368e11003954aaaa7805906556738adb79a7236ca28b06-merged.mount: Deactivated successfully.
Oct 14 09:25:18 compute-0 podman[387453]: 2025-10-14 09:25:18.367661399 +0000 UTC m=+0.246052089 container remove 7a46ce94696fd771965f572fce07f8717480cc37833609464d048b6f68eb2029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gagarin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:25:18 compute-0 systemd[1]: libpod-conmon-7a46ce94696fd771965f572fce07f8717480cc37833609464d048b6f68eb2029.scope: Deactivated successfully.
Oct 14 09:25:18 compute-0 podman[387491]: 2025-10-14 09:25:18.619896152 +0000 UTC m=+0.079252119 container create 6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:25:18 compute-0 systemd[1]: Started libpod-conmon-6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099.scope.
Oct 14 09:25:18 compute-0 podman[387491]: 2025-10-14 09:25:18.588291001 +0000 UTC m=+0.047647048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:25:18 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a51aa430a33976c283090717c3dd5e4127eecc58c3d376b780631e7fa277ee9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a51aa430a33976c283090717c3dd5e4127eecc58c3d376b780631e7fa277ee9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a51aa430a33976c283090717c3dd5e4127eecc58c3d376b780631e7fa277ee9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a51aa430a33976c283090717c3dd5e4127eecc58c3d376b780631e7fa277ee9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a51aa430a33976c283090717c3dd5e4127eecc58c3d376b780631e7fa277ee9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:18 compute-0 podman[387491]: 2025-10-14 09:25:18.736191446 +0000 UTC m=+0.195547453 container init 6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 09:25:18 compute-0 podman[387491]: 2025-10-14 09:25:18.749057334 +0000 UTC m=+0.208413321 container start 6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elion, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:25:18 compute-0 podman[387491]: 2025-10-14 09:25:18.753206136 +0000 UTC m=+0.212562213 container attach 6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elion, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:25:18 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2493938824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:25:19 compute-0 nova_compute[259627]: 2025-10-14 09:25:19.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:19 compute-0 ceph-mon[74249]: pgmap v2165: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:25:19 compute-0 distracted_elion[387508]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:25:19 compute-0 distracted_elion[387508]: --> relative data size: 1.0
Oct 14 09:25:19 compute-0 distracted_elion[387508]: --> All data devices are unavailable
Oct 14 09:25:20 compute-0 systemd[1]: libpod-6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099.scope: Deactivated successfully.
Oct 14 09:25:20 compute-0 systemd[1]: libpod-6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099.scope: Consumed 1.205s CPU time.
Oct 14 09:25:20 compute-0 podman[387491]: 2025-10-14 09:25:20.010441572 +0000 UTC m=+1.469797589 container died 6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elion, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:25:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a51aa430a33976c283090717c3dd5e4127eecc58c3d376b780631e7fa277ee9-merged.mount: Deactivated successfully.
Oct 14 09:25:20 compute-0 podman[387491]: 2025-10-14 09:25:20.093570026 +0000 UTC m=+1.552926023 container remove 6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:25:20 compute-0 systemd[1]: libpod-conmon-6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099.scope: Deactivated successfully.
Oct 14 09:25:20 compute-0 sudo[387340]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:20 compute-0 sudo[387550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:25:20 compute-0 sudo[387550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:20 compute-0 sudo[387550]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 75 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 85 op/s
Oct 14 09:25:20 compute-0 sudo[387575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:25:20 compute-0 sudo[387575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:20 compute-0 sudo[387575]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:20 compute-0 sudo[387600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:25:20 compute-0 sudo[387600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:20 compute-0 sudo[387600]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:20 compute-0 sudo[387625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:25:20 compute-0 sudo[387625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:20 compute-0 nova_compute[259627]: 2025-10-14 09:25:20.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:20 compute-0 podman[387692]: 2025-10-14 09:25:20.936448723 +0000 UTC m=+0.061318676 container create a258e2bce1d49ce0e1b2c16a9d4546ab1376c3a8ef7372a2026314338e1e9308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 09:25:20 compute-0 systemd[1]: Started libpod-conmon-a258e2bce1d49ce0e1b2c16a9d4546ab1376c3a8ef7372a2026314338e1e9308.scope.
Oct 14 09:25:21 compute-0 podman[387692]: 2025-10-14 09:25:20.912238655 +0000 UTC m=+0.037108658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:25:21 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:25:21 compute-0 podman[387692]: 2025-10-14 09:25:21.050187454 +0000 UTC m=+0.175057467 container init a258e2bce1d49ce0e1b2c16a9d4546ab1376c3a8ef7372a2026314338e1e9308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:25:21 compute-0 podman[387692]: 2025-10-14 09:25:21.064386055 +0000 UTC m=+0.189255988 container start a258e2bce1d49ce0e1b2c16a9d4546ab1376c3a8ef7372a2026314338e1e9308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 09:25:21 compute-0 podman[387692]: 2025-10-14 09:25:21.069349577 +0000 UTC m=+0.194219550 container attach a258e2bce1d49ce0e1b2c16a9d4546ab1376c3a8ef7372a2026314338e1e9308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_grothendieck, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:25:21 compute-0 inspiring_grothendieck[387708]: 167 167
Oct 14 09:25:21 compute-0 systemd[1]: libpod-a258e2bce1d49ce0e1b2c16a9d4546ab1376c3a8ef7372a2026314338e1e9308.scope: Deactivated successfully.
Oct 14 09:25:21 compute-0 podman[387692]: 2025-10-14 09:25:21.07349159 +0000 UTC m=+0.198361553 container died a258e2bce1d49ce0e1b2c16a9d4546ab1376c3a8ef7372a2026314338e1e9308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 09:25:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-32591531d28ccaca99d6a47b17a49a47a6517a0dfa7c270f4d8bbba3d675ba21-merged.mount: Deactivated successfully.
Oct 14 09:25:21 compute-0 podman[387692]: 2025-10-14 09:25:21.122938311 +0000 UTC m=+0.247808244 container remove a258e2bce1d49ce0e1b2c16a9d4546ab1376c3a8ef7372a2026314338e1e9308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 09:25:21 compute-0 systemd[1]: libpod-conmon-a258e2bce1d49ce0e1b2c16a9d4546ab1376c3a8ef7372a2026314338e1e9308.scope: Deactivated successfully.
Oct 14 09:25:21 compute-0 podman[387731]: 2025-10-14 09:25:21.327871675 +0000 UTC m=+0.069330954 container create 4f28499885b87a6bd1e5dea82112a77c2b4683ac01eed0689f9b4527083bcaab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_babbage, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 09:25:21 compute-0 systemd[1]: Started libpod-conmon-4f28499885b87a6bd1e5dea82112a77c2b4683ac01eed0689f9b4527083bcaab.scope.
Oct 14 09:25:21 compute-0 podman[387731]: 2025-10-14 09:25:21.29810843 +0000 UTC m=+0.039567769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:25:21 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:25:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f71c7394a99c17c5ae3c08bcb91b57534cf0db1dd0866b7a9415c5d41316534/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f71c7394a99c17c5ae3c08bcb91b57534cf0db1dd0866b7a9415c5d41316534/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f71c7394a99c17c5ae3c08bcb91b57534cf0db1dd0866b7a9415c5d41316534/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f71c7394a99c17c5ae3c08bcb91b57534cf0db1dd0866b7a9415c5d41316534/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:21 compute-0 podman[387731]: 2025-10-14 09:25:21.427890557 +0000 UTC m=+0.169349896 container init 4f28499885b87a6bd1e5dea82112a77c2b4683ac01eed0689f9b4527083bcaab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_babbage, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:25:21 compute-0 podman[387731]: 2025-10-14 09:25:21.443563864 +0000 UTC m=+0.185023123 container start 4f28499885b87a6bd1e5dea82112a77c2b4683ac01eed0689f9b4527083bcaab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_babbage, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:25:21 compute-0 podman[387731]: 2025-10-14 09:25:21.447369648 +0000 UTC m=+0.188828977 container attach 4f28499885b87a6bd1e5dea82112a77c2b4683ac01eed0689f9b4527083bcaab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 09:25:21 compute-0 ceph-mon[74249]: pgmap v2166: 305 pgs: 305 active+clean; 75 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 85 op/s
Oct 14 09:25:22 compute-0 agitated_babbage[387747]: {
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:     "0": [
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:         {
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "devices": [
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "/dev/loop3"
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             ],
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "lv_name": "ceph_lv0",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "lv_size": "21470642176",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "name": "ceph_lv0",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "tags": {
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.cluster_name": "ceph",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.crush_device_class": "",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.encrypted": "0",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.osd_id": "0",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.type": "block",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.vdo": "0"
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             },
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "type": "block",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "vg_name": "ceph_vg0"
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:         }
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:     ],
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:     "1": [
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:         {
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "devices": [
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "/dev/loop4"
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             ],
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "lv_name": "ceph_lv1",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "lv_size": "21470642176",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "name": "ceph_lv1",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "tags": {
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.cluster_name": "ceph",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.crush_device_class": "",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.encrypted": "0",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.osd_id": "1",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.type": "block",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.vdo": "0"
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             },
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "type": "block",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "vg_name": "ceph_vg1"
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:         }
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:     ],
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:     "2": [
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:         {
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "devices": [
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "/dev/loop5"
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             ],
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "lv_name": "ceph_lv2",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "lv_size": "21470642176",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "name": "ceph_lv2",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "tags": {
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.cluster_name": "ceph",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.crush_device_class": "",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.encrypted": "0",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.osd_id": "2",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.type": "block",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:                 "ceph.vdo": "0"
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             },
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "type": "block",
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:             "vg_name": "ceph_vg2"
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:         }
Oct 14 09:25:22 compute-0 agitated_babbage[387747]:     ]
Oct 14 09:25:22 compute-0 agitated_babbage[387747]: }
Oct 14 09:25:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 14 09:25:22 compute-0 systemd[1]: libpod-4f28499885b87a6bd1e5dea82112a77c2b4683ac01eed0689f9b4527083bcaab.scope: Deactivated successfully.
Oct 14 09:25:22 compute-0 podman[387731]: 2025-10-14 09:25:22.297858192 +0000 UTC m=+1.039317501 container died 4f28499885b87a6bd1e5dea82112a77c2b4683ac01eed0689f9b4527083bcaab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_babbage, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:25:22 compute-0 sshd-session[387757]: banner exchange: Connection from 93.123.109.214 port 53922: invalid format
Oct 14 09:25:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f71c7394a99c17c5ae3c08bcb91b57534cf0db1dd0866b7a9415c5d41316534-merged.mount: Deactivated successfully.
Oct 14 09:25:22 compute-0 podman[387731]: 2025-10-14 09:25:22.384131074 +0000 UTC m=+1.125590353 container remove 4f28499885b87a6bd1e5dea82112a77c2b4683ac01eed0689f9b4527083bcaab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_babbage, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:25:22 compute-0 systemd[1]: libpod-conmon-4f28499885b87a6bd1e5dea82112a77c2b4683ac01eed0689f9b4527083bcaab.scope: Deactivated successfully.
Oct 14 09:25:22 compute-0 sudo[387625]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:22 compute-0 sshd-session[387782]: banner exchange: Connection from 93.123.109.214 port 53934: invalid format
Oct 14 09:25:22 compute-0 sudo[387769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:25:22 compute-0 sudo[387769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:22 compute-0 sudo[387769]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:22 compute-0 sudo[387795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:25:22 compute-0 sudo[387795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:22 compute-0 sudo[387795]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:22 compute-0 sudo[387820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:25:22 compute-0 sudo[387820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:22 compute-0 sudo[387820]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:22 compute-0 sudo[387845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:25:22 compute-0 sudo[387845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:25:23 compute-0 podman[387910]: 2025-10-14 09:25:23.118533241 +0000 UTC m=+0.045627078 container create b611bd2c76642c4a66782fecb9eec5780999f15b08f628e388c151688d6c2b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:25:23 compute-0 systemd[1]: Started libpod-conmon-b611bd2c76642c4a66782fecb9eec5780999f15b08f628e388c151688d6c2b73.scope.
Oct 14 09:25:23 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:25:23 compute-0 podman[387910]: 2025-10-14 09:25:23.094982589 +0000 UTC m=+0.022076516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:25:23 compute-0 podman[387910]: 2025-10-14 09:25:23.207158801 +0000 UTC m=+0.134252628 container init b611bd2c76642c4a66782fecb9eec5780999f15b08f628e388c151688d6c2b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_burnell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:25:23 compute-0 podman[387910]: 2025-10-14 09:25:23.217724982 +0000 UTC m=+0.144818809 container start b611bd2c76642c4a66782fecb9eec5780999f15b08f628e388c151688d6c2b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 09:25:23 compute-0 podman[387910]: 2025-10-14 09:25:23.220720666 +0000 UTC m=+0.147814493 container attach b611bd2c76642c4a66782fecb9eec5780999f15b08f628e388c151688d6c2b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:25:23 compute-0 admiring_burnell[387926]: 167 167
Oct 14 09:25:23 compute-0 systemd[1]: libpod-b611bd2c76642c4a66782fecb9eec5780999f15b08f628e388c151688d6c2b73.scope: Deactivated successfully.
Oct 14 09:25:23 compute-0 podman[387910]: 2025-10-14 09:25:23.226739425 +0000 UTC m=+0.153833272 container died b611bd2c76642c4a66782fecb9eec5780999f15b08f628e388c151688d6c2b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_burnell, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:25:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b603d1ce05ef40b91803e3789e86fafe6ca64455daf6e7490a3175e749c49bd0-merged.mount: Deactivated successfully.
Oct 14 09:25:23 compute-0 podman[387910]: 2025-10-14 09:25:23.276245138 +0000 UTC m=+0.203338985 container remove b611bd2c76642c4a66782fecb9eec5780999f15b08f628e388c151688d6c2b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_burnell, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:25:23 compute-0 systemd[1]: libpod-conmon-b611bd2c76642c4a66782fecb9eec5780999f15b08f628e388c151688d6c2b73.scope: Deactivated successfully.
Oct 14 09:25:23 compute-0 podman[387951]: 2025-10-14 09:25:23.47987071 +0000 UTC m=+0.058096537 container create 3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 09:25:23 compute-0 systemd[1]: Started libpod-conmon-3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c.scope.
Oct 14 09:25:23 compute-0 podman[387951]: 2025-10-14 09:25:23.459757563 +0000 UTC m=+0.037983430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:25:23 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29677713dcd91e68fadc3b2c12c19892f4244aec1453afaba4c5aadd20e20986/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29677713dcd91e68fadc3b2c12c19892f4244aec1453afaba4c5aadd20e20986/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29677713dcd91e68fadc3b2c12c19892f4244aec1453afaba4c5aadd20e20986/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29677713dcd91e68fadc3b2c12c19892f4244aec1453afaba4c5aadd20e20986/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:23 compute-0 podman[387951]: 2025-10-14 09:25:23.577586974 +0000 UTC m=+0.155812831 container init 3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:25:23 compute-0 podman[387951]: 2025-10-14 09:25:23.589937019 +0000 UTC m=+0.168162876 container start 3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 14 09:25:23 compute-0 podman[387951]: 2025-10-14 09:25:23.593934148 +0000 UTC m=+0.172160015 container attach 3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 09:25:23 compute-0 ceph-mon[74249]: pgmap v2167: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 14 09:25:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]: {
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:         "osd_id": 2,
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:         "type": "bluestore"
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:     },
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:         "osd_id": 1,
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:         "type": "bluestore"
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:     },
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:         "osd_id": 0,
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:         "type": "bluestore"
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]:     }
Oct 14 09:25:24 compute-0 compassionate_fermi[387967]: }
Oct 14 09:25:24 compute-0 systemd[1]: libpod-3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c.scope: Deactivated successfully.
Oct 14 09:25:24 compute-0 systemd[1]: libpod-3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c.scope: Consumed 1.051s CPU time.
Oct 14 09:25:24 compute-0 conmon[387967]: conmon 3e900b48c665756bd555 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c.scope/container/memory.events
Oct 14 09:25:24 compute-0 podman[387951]: 2025-10-14 09:25:24.637371501 +0000 UTC m=+1.215597388 container died 3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:25:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-29677713dcd91e68fadc3b2c12c19892f4244aec1453afaba4c5aadd20e20986-merged.mount: Deactivated successfully.
Oct 14 09:25:24 compute-0 podman[387951]: 2025-10-14 09:25:24.704352376 +0000 UTC m=+1.282578203 container remove 3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:25:24 compute-0 systemd[1]: libpod-conmon-3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c.scope: Deactivated successfully.
Oct 14 09:25:24 compute-0 sudo[387845]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:24 compute-0 nova_compute[259627]: 2025-10-14 09:25:24.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:25:24 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:25:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:25:24 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:25:24 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 8cd24a57-e538-4a40-94f9-4d8503074b58 does not exist
Oct 14 09:25:24 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev a22ccd02-a15a-49e9-a0fd-639c5f7b51c6 does not exist
Oct 14 09:25:24 compute-0 sudo[388012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:25:24 compute-0 sudo[388012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:24 compute-0 sudo[388012]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:24 compute-0 sudo[388037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:25:24 compute-0 sudo[388037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:25:24 compute-0 sudo[388037]: pam_unix(sudo:session): session closed for user root
Oct 14 09:25:25 compute-0 nova_compute[259627]: 2025-10-14 09:25:25.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:25 compute-0 ceph-mon[74249]: pgmap v2168: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 14 09:25:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:25:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:25:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 14 09:25:27 compute-0 ceph-mon[74249]: pgmap v2169: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 14 09:25:27 compute-0 nova_compute[259627]: 2025-10-14 09:25:27.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:27 compute-0 nova_compute[259627]: 2025-10-14 09:25:27.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:25:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 14 09:25:29 compute-0 nova_compute[259627]: 2025-10-14 09:25:29.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:29 compute-0 ceph-mon[74249]: pgmap v2170: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 14 09:25:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 14 09:25:30 compute-0 nova_compute[259627]: 2025-10-14 09:25:30.521 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433915.5197697, c0981e31-738f-44e8-be4c-b64961716660 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:25:30 compute-0 nova_compute[259627]: 2025-10-14 09:25:30.521 2 INFO nova.compute.manager [-] [instance: c0981e31-738f-44e8-be4c-b64961716660] VM Stopped (Lifecycle Event)
Oct 14 09:25:30 compute-0 nova_compute[259627]: 2025-10-14 09:25:30.546 2 DEBUG nova.compute.manager [None req-701c9003-6589-43a4-902f-1feb2a5d1606 - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:25:30 compute-0 nova_compute[259627]: 2025-10-14 09:25:30.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:31 compute-0 ceph-mon[74249]: pgmap v2171: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 14 09:25:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 852 B/s wr, 14 op/s
Oct 14 09:25:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:25:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:25:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:25:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:25:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:25:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:25:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:25:32
Oct 14 09:25:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:25:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:25:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'backups', 'vms']
Oct 14 09:25:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:25:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:25:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:25:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:25:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:25:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:25:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:25:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:25:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:25:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:25:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:25:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:25:33 compute-0 ceph-mon[74249]: pgmap v2172: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 852 B/s wr, 14 op/s
Oct 14 09:25:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:25:34 compute-0 nova_compute[259627]: 2025-10-14 09:25:34.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:35 compute-0 nova_compute[259627]: 2025-10-14 09:25:35.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:35 compute-0 podman[388064]: 2025-10-14 09:25:35.671858357 +0000 UTC m=+0.073528428 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 14 09:25:35 compute-0 podman[388063]: 2025-10-14 09:25:35.672839321 +0000 UTC m=+0.073882686 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 09:25:35 compute-0 ceph-mon[74249]: pgmap v2173: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:25:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:25:37 compute-0 ceph-mon[74249]: pgmap v2174: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:25:37 compute-0 nova_compute[259627]: 2025-10-14 09:25:37.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:25:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:25:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:25:39 compute-0 nova_compute[259627]: 2025-10-14 09:25:39.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:39 compute-0 ceph-mon[74249]: pgmap v2175: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:25:39 compute-0 nova_compute[259627]: 2025-10-14 09:25:39.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:25:39 compute-0 nova_compute[259627]: 2025-10-14 09:25:39.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:25:40 compute-0 nova_compute[259627]: 2025-10-14 09:25:40.016 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:40 compute-0 nova_compute[259627]: 2025-10-14 09:25:40.016 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:40 compute-0 nova_compute[259627]: 2025-10-14 09:25:40.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:40 compute-0 nova_compute[259627]: 2025-10-14 09:25:40.017 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:25:40 compute-0 nova_compute[259627]: 2025-10-14 09:25:40.018 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:25:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:25:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:25:40 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1712989019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:25:40 compute-0 nova_compute[259627]: 2025-10-14 09:25:40.472 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:25:40 compute-0 nova_compute[259627]: 2025-10-14 09:25:40.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:40 compute-0 nova_compute[259627]: 2025-10-14 09:25:40.654 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:25:40 compute-0 nova_compute[259627]: 2025-10-14 09:25:40.656 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3671MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:25:40 compute-0 nova_compute[259627]: 2025-10-14 09:25:40.656 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:40 compute-0 nova_compute[259627]: 2025-10-14 09:25:40.656 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:40 compute-0 nova_compute[259627]: 2025-10-14 09:25:40.740 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:25:40 compute-0 nova_compute[259627]: 2025-10-14 09:25:40.741 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:25:40 compute-0 nova_compute[259627]: 2025-10-14 09:25:40.766 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:25:40 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1712989019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:25:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:25:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3998892646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:25:41 compute-0 nova_compute[259627]: 2025-10-14 09:25:41.172 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:25:41 compute-0 nova_compute[259627]: 2025-10-14 09:25:41.178 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:25:41 compute-0 nova_compute[259627]: 2025-10-14 09:25:41.194 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:25:41 compute-0 nova_compute[259627]: 2025-10-14 09:25:41.213 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:25:41 compute-0 nova_compute[259627]: 2025-10-14 09:25:41.214 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:41 compute-0 ceph-mon[74249]: pgmap v2176: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:25:41 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3998892646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:25:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:25:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:25:43 compute-0 nova_compute[259627]: 2025-10-14 09:25:43.209 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:25:43 compute-0 nova_compute[259627]: 2025-10-14 09:25:43.209 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:25:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:25:43 compute-0 ceph-mon[74249]: pgmap v2177: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:25:43 compute-0 nova_compute[259627]: 2025-10-14 09:25:43.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:25:43 compute-0 nova_compute[259627]: 2025-10-14 09:25:43.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:25:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:25:44 compute-0 nova_compute[259627]: 2025-10-14 09:25:44.621 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "f41def60-a7be-4154-86bc-ef63a639ee94" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:44 compute-0 nova_compute[259627]: 2025-10-14 09:25:44.621 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:44 compute-0 nova_compute[259627]: 2025-10-14 09:25:44.646 2 DEBUG nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:25:44 compute-0 nova_compute[259627]: 2025-10-14 09:25:44.744 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:44 compute-0 nova_compute[259627]: 2025-10-14 09:25:44.745 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:44 compute-0 nova_compute[259627]: 2025-10-14 09:25:44.753 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:25:44 compute-0 nova_compute[259627]: 2025-10-14 09:25:44.754 2 INFO nova.compute.claims [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:25:44 compute-0 nova_compute[259627]: 2025-10-14 09:25:44.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:44 compute-0 nova_compute[259627]: 2025-10-14 09:25:44.905 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:25:44 compute-0 nova_compute[259627]: 2025-10-14 09:25:44.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:25:44 compute-0 nova_compute[259627]: 2025-10-14 09:25:44.980 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:25:44 compute-0 nova_compute[259627]: 2025-10-14 09:25:44.980 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.000 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.000 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:25:45 compute-0 ceph-mon[74249]: pgmap v2178: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:25:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:25:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3325488986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.331 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.337 2 DEBUG nova.compute.provider_tree [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.357 2 DEBUG nova.scheduler.client.report [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.381 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.382 2 DEBUG nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.431 2 DEBUG nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.431 2 DEBUG nova.network.neutron [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.447 2 INFO nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.474 2 DEBUG nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.587 2 DEBUG nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.589 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.589 2 INFO nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Creating image(s)
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.613 2 DEBUG nova.storage.rbd_utils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image f41def60-a7be-4154-86bc-ef63a639ee94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.637 2 DEBUG nova.storage.rbd_utils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image f41def60-a7be-4154-86bc-ef63a639ee94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.661 2 DEBUG nova.storage.rbd_utils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image f41def60-a7be-4154-86bc-ef63a639ee94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.665 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.718 2 DEBUG nova.policy [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.769 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.771 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.771 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.772 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.796 2 DEBUG nova.storage.rbd_utils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image f41def60-a7be-4154-86bc-ef63a639ee94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:25:45 compute-0 nova_compute[259627]: 2025-10-14 09:25:45.801 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f41def60-a7be-4154-86bc-ef63a639ee94_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:25:46 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3325488986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:25:46 compute-0 nova_compute[259627]: 2025-10-14 09:25:46.133 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f41def60-a7be-4154-86bc-ef63a639ee94_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:25:46 compute-0 nova_compute[259627]: 2025-10-14 09:25:46.179 2 DEBUG nova.storage.rbd_utils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image f41def60-a7be-4154-86bc-ef63a639ee94_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:25:46 compute-0 nova_compute[259627]: 2025-10-14 09:25:46.254 2 DEBUG nova.objects.instance [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid f41def60-a7be-4154-86bc-ef63a639ee94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:25:46 compute-0 nova_compute[259627]: 2025-10-14 09:25:46.275 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:25:46 compute-0 nova_compute[259627]: 2025-10-14 09:25:46.275 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Ensure instance console log exists: /var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:25:46 compute-0 nova_compute[259627]: 2025-10-14 09:25:46.276 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:46 compute-0 nova_compute[259627]: 2025-10-14 09:25:46.276 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:46 compute-0 nova_compute[259627]: 2025-10-14 09:25:46.276 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:25:46 compute-0 nova_compute[259627]: 2025-10-14 09:25:46.415 2 DEBUG nova.network.neutron [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Successfully created port: 52d803a0-5139-4197-a575-2530583dda13 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:25:47 compute-0 ceph-mon[74249]: pgmap v2179: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:25:47 compute-0 nova_compute[259627]: 2025-10-14 09:25:47.384 2 DEBUG nova.network.neutron [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Successfully updated port: 52d803a0-5139-4197-a575-2530583dda13 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:25:47 compute-0 nova_compute[259627]: 2025-10-14 09:25:47.398 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:25:47 compute-0 nova_compute[259627]: 2025-10-14 09:25:47.398 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:25:47 compute-0 nova_compute[259627]: 2025-10-14 09:25:47.398 2 DEBUG nova.network.neutron [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:25:47 compute-0 nova_compute[259627]: 2025-10-14 09:25:47.491 2 DEBUG nova.compute.manager [req-be37a6ba-61ac-49ce-ba4f-6b7717a6015a req-20d321b7-0560-4318-91f3-2f0653c84559 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received event network-changed-52d803a0-5139-4197-a575-2530583dda13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:25:47 compute-0 nova_compute[259627]: 2025-10-14 09:25:47.492 2 DEBUG nova.compute.manager [req-be37a6ba-61ac-49ce-ba4f-6b7717a6015a req-20d321b7-0560-4318-91f3-2f0653c84559 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Refreshing instance network info cache due to event network-changed-52d803a0-5139-4197-a575-2530583dda13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:25:47 compute-0 nova_compute[259627]: 2025-10-14 09:25:47.492 2 DEBUG oslo_concurrency.lockutils [req-be37a6ba-61ac-49ce-ba4f-6b7717a6015a req-20d321b7-0560-4318-91f3-2f0653c84559 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:25:47 compute-0 nova_compute[259627]: 2025-10-14 09:25:47.620 2 DEBUG nova.network.neutron [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:25:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:25:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.408 2 DEBUG nova.network.neutron [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Updating instance_info_cache with network_info: [{"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.442 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.443 2 DEBUG nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Instance network_info: |[{"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.443 2 DEBUG oslo_concurrency.lockutils [req-be37a6ba-61ac-49ce-ba4f-6b7717a6015a req-20d321b7-0560-4318-91f3-2f0653c84559 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.443 2 DEBUG nova.network.neutron [req-be37a6ba-61ac-49ce-ba4f-6b7717a6015a req-20d321b7-0560-4318-91f3-2f0653c84559 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Refreshing network info cache for port 52d803a0-5139-4197-a575-2530583dda13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.449 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Start _get_guest_xml network_info=[{"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.454 2 WARNING nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.461 2 DEBUG nova.virt.libvirt.host [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.461 2 DEBUG nova.virt.libvirt.host [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.471 2 DEBUG nova.virt.libvirt.host [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.472 2 DEBUG nova.virt.libvirt.host [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.472 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.472 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.473 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.473 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.473 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.474 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.474 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.474 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.474 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.475 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.475 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.475 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.479 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:25:48 compute-0 podman[388338]: 2025-10-14 09:25:48.686580023 +0000 UTC m=+0.092682511 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 09:25:48 compute-0 podman[388337]: 2025-10-14 09:25:48.731585505 +0000 UTC m=+0.146248454 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:25:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:25:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4213029771' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.945 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.972 2 DEBUG nova.storage.rbd_utils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image f41def60-a7be-4154-86bc-ef63a639ee94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:25:48 compute-0 nova_compute[259627]: 2025-10-14 09:25:48.977 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:25:49 compute-0 ceph-mon[74249]: pgmap v2180: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:25:49 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4213029771' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:25:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:25:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/498982399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.444 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.446 2 DEBUG nova.virt.libvirt.vif [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:25:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2135752872',display_name='tempest-TestNetworkBasicOps-server-2135752872',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2135752872',id=125,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPnKWI1fcgm2J2USW/ocYjEbdNv3UFcXFKMO5si6IlBqcZWb9CUblu//WPv2zFMmUex5tiH7jg81h5bD0kUs5doUIUp4qv9iNUQKKi7Q5u8sjjVGsY4n55Yf/sl7IQnjgw==',key_name='tempest-TestNetworkBasicOps-1922317585',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-h0pwqz7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:25:45Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=f41def60-a7be-4154-86bc-ef63a639ee94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.447 2 DEBUG nova.network.os_vif_util [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.448 2 DEBUG nova.network.os_vif_util [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:56:7c,bridge_name='br-int',has_traffic_filtering=True,id=52d803a0-5139-4197-a575-2530583dda13,network=Network(0e4b4c3f-9218-4fba-8f93-74ac472b0db0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52d803a0-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.450 2 DEBUG nova.objects.instance [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid f41def60-a7be-4154-86bc-ef63a639ee94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.473 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:25:49 compute-0 nova_compute[259627]:   <uuid>f41def60-a7be-4154-86bc-ef63a639ee94</uuid>
Oct 14 09:25:49 compute-0 nova_compute[259627]:   <name>instance-0000007d</name>
Oct 14 09:25:49 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:25:49 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:25:49 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkBasicOps-server-2135752872</nova:name>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:25:48</nova:creationTime>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:25:49 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:25:49 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:25:49 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:25:49 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:25:49 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:25:49 compute-0 nova_compute[259627]:         <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:25:49 compute-0 nova_compute[259627]:         <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:25:49 compute-0 nova_compute[259627]:         <nova:port uuid="52d803a0-5139-4197-a575-2530583dda13">
Oct 14 09:25:49 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:25:49 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:25:49 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <system>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <entry name="serial">f41def60-a7be-4154-86bc-ef63a639ee94</entry>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <entry name="uuid">f41def60-a7be-4154-86bc-ef63a639ee94</entry>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     </system>
Oct 14 09:25:49 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:25:49 compute-0 nova_compute[259627]:   <os>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:   </os>
Oct 14 09:25:49 compute-0 nova_compute[259627]:   <features>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:   </features>
Oct 14 09:25:49 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:25:49 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:25:49 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/f41def60-a7be-4154-86bc-ef63a639ee94_disk">
Oct 14 09:25:49 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       </source>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:25:49 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/f41def60-a7be-4154-86bc-ef63a639ee94_disk.config">
Oct 14 09:25:49 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       </source>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:25:49 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:6a:56:7c"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <target dev="tap52d803a0-51"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94/console.log" append="off"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <video>
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     </video>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:25:49 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:25:49 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:25:49 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:25:49 compute-0 nova_compute[259627]: </domain>
Oct 14 09:25:49 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.475 2 DEBUG nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Preparing to wait for external event network-vif-plugged-52d803a0-5139-4197-a575-2530583dda13 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.477 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.478 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.478 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.479 2 DEBUG nova.virt.libvirt.vif [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:25:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2135752872',display_name='tempest-TestNetworkBasicOps-server-2135752872',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2135752872',id=125,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPnKWI1fcgm2J2USW/ocYjEbdNv3UFcXFKMO5si6IlBqcZWb9CUblu//WPv2zFMmUex5tiH7jg81h5bD0kUs5doUIUp4qv9iNUQKKi7Q5u8sjjVGsY4n55Yf/sl7IQnjgw==',key_name='tempest-TestNetworkBasicOps-1922317585',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-h0pwqz7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:25:45Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=f41def60-a7be-4154-86bc-ef63a639ee94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.480 2 DEBUG nova.network.os_vif_util [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.481 2 DEBUG nova.network.os_vif_util [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:56:7c,bridge_name='br-int',has_traffic_filtering=True,id=52d803a0-5139-4197-a575-2530583dda13,network=Network(0e4b4c3f-9218-4fba-8f93-74ac472b0db0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52d803a0-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.482 2 DEBUG os_vif [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:56:7c,bridge_name='br-int',has_traffic_filtering=True,id=52d803a0-5139-4197-a575-2530583dda13,network=Network(0e4b4c3f-9218-4fba-8f93-74ac472b0db0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52d803a0-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.484 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.484 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.490 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52d803a0-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.491 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52d803a0-51, col_values=(('external_ids', {'iface-id': '52d803a0-5139-4197-a575-2530583dda13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:56:7c', 'vm-uuid': 'f41def60-a7be-4154-86bc-ef63a639ee94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:49 compute-0 NetworkManager[44885]: <info>  [1760433949.4955] manager: (tap52d803a0-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.505 2 INFO os_vif [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:56:7c,bridge_name='br-int',has_traffic_filtering=True,id=52d803a0-5139-4197-a575-2530583dda13,network=Network(0e4b4c3f-9218-4fba-8f93-74ac472b0db0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52d803a0-51')
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.561 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.562 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.562 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:6a:56:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.562 2 INFO nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Using config drive
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.587 2 DEBUG nova.storage.rbd_utils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image f41def60-a7be-4154-86bc-ef63a639ee94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:49 compute-0 nova_compute[259627]: 2025-10-14 09:25:49.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.081 2 INFO nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Creating config drive at /var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94/disk.config
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.087 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmv4vdjgv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.260 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmv4vdjgv" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.297 2 DEBUG nova.storage.rbd_utils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image f41def60-a7be-4154-86bc-ef63a639ee94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:25:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 52 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 6.6 KiB/s rd, 360 KiB/s wr, 11 op/s
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.301 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94/disk.config f41def60-a7be-4154-86bc-ef63a639ee94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.356 2 DEBUG nova.network.neutron [req-be37a6ba-61ac-49ce-ba4f-6b7717a6015a req-20d321b7-0560-4318-91f3-2f0653c84559 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Updated VIF entry in instance network info cache for port 52d803a0-5139-4197-a575-2530583dda13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.358 2 DEBUG nova.network.neutron [req-be37a6ba-61ac-49ce-ba4f-6b7717a6015a req-20d321b7-0560-4318-91f3-2f0653c84559 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Updating instance_info_cache with network_info: [{"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:25:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/498982399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.378 2 DEBUG oslo_concurrency.lockutils [req-be37a6ba-61ac-49ce-ba4f-6b7717a6015a req-20d321b7-0560-4318-91f3-2f0653c84559 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.508 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94/disk.config f41def60-a7be-4154-86bc-ef63a639ee94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.510 2 INFO nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Deleting local config drive /var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94/disk.config because it was imported into RBD.
Oct 14 09:25:50 compute-0 kernel: tap52d803a0-51: entered promiscuous mode
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:50 compute-0 NetworkManager[44885]: <info>  [1760433950.5813] manager: (tap52d803a0-51): new Tun device (/org/freedesktop/NetworkManager/Devices/543)
Oct 14 09:25:50 compute-0 ovn_controller[152662]: 2025-10-14T09:25:50Z|01336|binding|INFO|Claiming lport 52d803a0-5139-4197-a575-2530583dda13 for this chassis.
Oct 14 09:25:50 compute-0 ovn_controller[152662]: 2025-10-14T09:25:50Z|01337|binding|INFO|52d803a0-5139-4197-a575-2530583dda13: Claiming fa:16:3e:6a:56:7c 10.100.0.9
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.597 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:56:7c 10.100.0.9'], port_security=['fa:16:3e:6a:56:7c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f41def60-a7be-4154-86bc-ef63a639ee94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e4b4c3f-9218-4fba-8f93-74ac472b0db0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': '68240394-e812-45d0-9e91-6623d4ac03bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c79ac29-f3b4-494c-ada5-2b93955c4fe1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=52d803a0-5139-4197-a575-2530583dda13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.599 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 52d803a0-5139-4197-a575-2530583dda13 in datapath 0e4b4c3f-9218-4fba-8f93-74ac472b0db0 bound to our chassis
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.600 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0e4b4c3f-9218-4fba-8f93-74ac472b0db0
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.617 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2b689237-fcba-402b-aa9f-cecdb73134fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.618 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0e4b4c3f-91 in ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.620 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0e4b4c3f-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.620 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c0ee83-141f-4938-b054-11a68305d56d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.622 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[790dfb20-a574-4f67-811a-d6b7e14e7301]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:50 compute-0 systemd-machined[214636]: New machine qemu-158-instance-0000007d.
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.638 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[da29f8af-76b5-49a1-9f3a-fcfc383ce0fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:50 compute-0 systemd[1]: Started Virtual Machine qemu-158-instance-0000007d.
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.672 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1e09cbd3-0e28-494e-aab9-6e18fd78cb9a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:50 compute-0 systemd-udevd[388518]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:50 compute-0 ovn_controller[152662]: 2025-10-14T09:25:50Z|01338|binding|INFO|Setting lport 52d803a0-5139-4197-a575-2530583dda13 ovn-installed in OVS
Oct 14 09:25:50 compute-0 ovn_controller[152662]: 2025-10-14T09:25:50Z|01339|binding|INFO|Setting lport 52d803a0-5139-4197-a575-2530583dda13 up in Southbound
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:50 compute-0 NetworkManager[44885]: <info>  [1760433950.6993] device (tap52d803a0-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:25:50 compute-0 NetworkManager[44885]: <info>  [1760433950.7017] device (tap52d803a0-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.720 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[39ac6064-7053-41d4-8a63-c663f9c14e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:50 compute-0 systemd-udevd[388521]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.729 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dc832ce7-7d3d-4dbf-95b0-21c0ccab1813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:50 compute-0 NetworkManager[44885]: <info>  [1760433950.7307] manager: (tap0e4b4c3f-90): new Veth device (/org/freedesktop/NetworkManager/Devices/544)
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.772 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[15bfab49-5f37-4c1a-858e-a52d467b9336]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.776 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[76b4a48d-116a-44be-ba37-7cb2e9ef5b00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:50 compute-0 NetworkManager[44885]: <info>  [1760433950.8066] device (tap0e4b4c3f-90): carrier: link connected
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.811 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[45e7ea4d-c472-4ff8-b277-f97d2266e0f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.829 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8e5ba426-de1c-47bf-962c-fb1cdab11bcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0e4b4c3f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:36:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772956, 'reachable_time': 39747, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388548, 'error': None, 'target': 'ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.846 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7851ea3c-9c94-4644-8955-a030d84cc46a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:36ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772956, 'tstamp': 772956}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388549, 'error': None, 'target': 'ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.863 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7f059c03-bc38-4e21-9b4e-a74d2eb54f8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0e4b4c3f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:36:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772956, 'reachable_time': 39747, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 388550, 'error': None, 'target': 'ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.887 2 DEBUG nova.compute.manager [req-dd282dc2-52f4-4354-9c23-62ea8352ed98 req-0ef8f7e7-c0fc-4783-b8d3-7bd18dec8ae8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received event network-vif-plugged-52d803a0-5139-4197-a575-2530583dda13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.888 2 DEBUG oslo_concurrency.lockutils [req-dd282dc2-52f4-4354-9c23-62ea8352ed98 req-0ef8f7e7-c0fc-4783-b8d3-7bd18dec8ae8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.888 2 DEBUG oslo_concurrency.lockutils [req-dd282dc2-52f4-4354-9c23-62ea8352ed98 req-0ef8f7e7-c0fc-4783-b8d3-7bd18dec8ae8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.888 2 DEBUG oslo_concurrency.lockutils [req-dd282dc2-52f4-4354-9c23-62ea8352ed98 req-0ef8f7e7-c0fc-4783-b8d3-7bd18dec8ae8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:50 compute-0 nova_compute[259627]: 2025-10-14 09:25:50.888 2 DEBUG nova.compute.manager [req-dd282dc2-52f4-4354-9c23-62ea8352ed98 req-0ef8f7e7-c0fc-4783-b8d3-7bd18dec8ae8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Processing event network-vif-plugged-52d803a0-5139-4197-a575-2530583dda13 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.909 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a675b0a5-c5af-4e98-bb28-442104543c68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.995 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e500a0-8270-45b4-8e0f-28ef06ae6268]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.999 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e4b4c3f-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.999 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:51.001 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e4b4c3f-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:51 compute-0 kernel: tap0e4b4c3f-90: entered promiscuous mode
Oct 14 09:25:51 compute-0 NetworkManager[44885]: <info>  [1760433951.0049] manager: (tap0e4b4c3f-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/545)
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:51.013 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0e4b4c3f-90, col_values=(('external_ids', {'iface-id': 'fbc0ee84-b007-472f-ae27-854b1ba7e94b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:51 compute-0 ovn_controller[152662]: 2025-10-14T09:25:51Z|01340|binding|INFO|Releasing lport fbc0ee84-b007-472f-ae27-854b1ba7e94b from this chassis (sb_readonly=0)
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:51.049 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0e4b4c3f-9218-4fba-8f93-74ac472b0db0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0e4b4c3f-9218-4fba-8f93-74ac472b0db0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:51.050 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cb49469d-44d8-4679-b3c9-8272456dc07c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:51.051 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-0e4b4c3f-9218-4fba-8f93-74ac472b0db0
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/0e4b4c3f-9218-4fba-8f93-74ac472b0db0.pid.haproxy
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 0e4b4c3f-9218-4fba-8f93-74ac472b0db0
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:25:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:25:51.053 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0', 'env', 'PROCESS_TAG=haproxy-0e4b4c3f-9218-4fba-8f93-74ac472b0db0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0e4b4c3f-9218-4fba-8f93-74ac472b0db0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:25:51 compute-0 ceph-mon[74249]: pgmap v2181: 305 pgs: 305 active+clean; 52 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 6.6 KiB/s rd, 360 KiB/s wr, 11 op/s
Oct 14 09:25:51 compute-0 podman[388625]: 2025-10-14 09:25:51.551736879 +0000 UTC m=+0.070291498 container create 0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 14 09:25:51 compute-0 podman[388625]: 2025-10-14 09:25:51.516390336 +0000 UTC m=+0.034945025 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:25:51 compute-0 systemd[1]: Started libpod-conmon-0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd.scope.
Oct 14 09:25:51 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e4e9f0f032f127958bf9c519d14d596a1ab0d965830f5dc4cf0b7403580501a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:25:51 compute-0 podman[388625]: 2025-10-14 09:25:51.679318472 +0000 UTC m=+0.197873151 container init 0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:25:51 compute-0 podman[388625]: 2025-10-14 09:25:51.689453782 +0000 UTC m=+0.208008371 container start 0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 14 09:25:51 compute-0 neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0[388640]: [NOTICE]   (388644) : New worker (388646) forked
Oct 14 09:25:51 compute-0 neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0[388640]: [NOTICE]   (388644) : Loading success.
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.717 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433951.7172818, f41def60-a7be-4154-86bc-ef63a639ee94 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.718 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] VM Started (Lifecycle Event)
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.721 2 DEBUG nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.725 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.729 2 INFO nova.virt.libvirt.driver [-] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Instance spawned successfully.
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.730 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.749 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.757 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.763 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.764 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.764 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.765 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.766 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.767 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.777 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.778 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433951.717514, f41def60-a7be-4154-86bc-ef63a639ee94 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.778 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] VM Paused (Lifecycle Event)
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.804 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.809 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433951.7244825, f41def60-a7be-4154-86bc-ef63a639ee94 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.809 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] VM Resumed (Lifecycle Event)
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.963 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.969 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.976 2 INFO nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Took 6.39 seconds to spawn the instance on the hypervisor.
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.976 2 DEBUG nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:25:51 compute-0 nova_compute[259627]: 2025-10-14 09:25:51.991 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:25:52 compute-0 nova_compute[259627]: 2025-10-14 09:25:52.046 2 INFO nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Took 7.33 seconds to build instance.
Oct 14 09:25:52 compute-0 nova_compute[259627]: 2025-10-14 09:25:52.061 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:25:52 compute-0 nova_compute[259627]: 2025-10-14 09:25:52.974 2 DEBUG nova.compute.manager [req-b28c4caa-d4c3-4b69-a567-0b8da0f21a46 req-97b1389f-b473-4865-9876-ff08b1304a0a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received event network-vif-plugged-52d803a0-5139-4197-a575-2530583dda13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:25:52 compute-0 nova_compute[259627]: 2025-10-14 09:25:52.974 2 DEBUG oslo_concurrency.lockutils [req-b28c4caa-d4c3-4b69-a567-0b8da0f21a46 req-97b1389f-b473-4865-9876-ff08b1304a0a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:25:52 compute-0 nova_compute[259627]: 2025-10-14 09:25:52.975 2 DEBUG oslo_concurrency.lockutils [req-b28c4caa-d4c3-4b69-a567-0b8da0f21a46 req-97b1389f-b473-4865-9876-ff08b1304a0a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:25:52 compute-0 nova_compute[259627]: 2025-10-14 09:25:52.975 2 DEBUG oslo_concurrency.lockutils [req-b28c4caa-d4c3-4b69-a567-0b8da0f21a46 req-97b1389f-b473-4865-9876-ff08b1304a0a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:25:52 compute-0 nova_compute[259627]: 2025-10-14 09:25:52.975 2 DEBUG nova.compute.manager [req-b28c4caa-d4c3-4b69-a567-0b8da0f21a46 req-97b1389f-b473-4865-9876-ff08b1304a0a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] No waiting events found dispatching network-vif-plugged-52d803a0-5139-4197-a575-2530583dda13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:25:52 compute-0 nova_compute[259627]: 2025-10-14 09:25:52.975 2 WARNING nova.compute.manager [req-b28c4caa-d4c3-4b69-a567-0b8da0f21a46 req-97b1389f-b473-4865-9876-ff08b1304a0a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received unexpected event network-vif-plugged-52d803a0-5139-4197-a575-2530583dda13 for instance with vm_state active and task_state None.
Oct 14 09:25:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:25:53 compute-0 ceph-mon[74249]: pgmap v2182: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:25:53 compute-0 nova_compute[259627]: 2025-10-14 09:25:53.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:25:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:25:54 compute-0 nova_compute[259627]: 2025-10-14 09:25:54.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:54 compute-0 nova_compute[259627]: 2025-10-14 09:25:54.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:55 compute-0 NetworkManager[44885]: <info>  [1760433955.0954] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/546)
Oct 14 09:25:55 compute-0 NetworkManager[44885]: <info>  [1760433955.0965] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/547)
Oct 14 09:25:55 compute-0 nova_compute[259627]: 2025-10-14 09:25:55.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:55 compute-0 ovn_controller[152662]: 2025-10-14T09:25:55Z|01341|binding|INFO|Releasing lport fbc0ee84-b007-472f-ae27-854b1ba7e94b from this chassis (sb_readonly=0)
Oct 14 09:25:55 compute-0 nova_compute[259627]: 2025-10-14 09:25:55.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:55 compute-0 nova_compute[259627]: 2025-10-14 09:25:55.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:55 compute-0 ceph-mon[74249]: pgmap v2183: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:25:55 compute-0 nova_compute[259627]: 2025-10-14 09:25:55.551 2 DEBUG nova.compute.manager [req-5a1cfab9-ce22-4392-b072-bf42b1a2bc30 req-9461c073-5f7d-4e2e-8ddf-b489c913ed46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received event network-changed-52d803a0-5139-4197-a575-2530583dda13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:25:55 compute-0 nova_compute[259627]: 2025-10-14 09:25:55.551 2 DEBUG nova.compute.manager [req-5a1cfab9-ce22-4392-b072-bf42b1a2bc30 req-9461c073-5f7d-4e2e-8ddf-b489c913ed46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Refreshing instance network info cache due to event network-changed-52d803a0-5139-4197-a575-2530583dda13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:25:55 compute-0 nova_compute[259627]: 2025-10-14 09:25:55.552 2 DEBUG oslo_concurrency.lockutils [req-5a1cfab9-ce22-4392-b072-bf42b1a2bc30 req-9461c073-5f7d-4e2e-8ddf-b489c913ed46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:25:55 compute-0 nova_compute[259627]: 2025-10-14 09:25:55.552 2 DEBUG oslo_concurrency.lockutils [req-5a1cfab9-ce22-4392-b072-bf42b1a2bc30 req-9461c073-5f7d-4e2e-8ddf-b489c913ed46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:25:55 compute-0 nova_compute[259627]: 2025-10-14 09:25:55.552 2 DEBUG nova.network.neutron [req-5a1cfab9-ce22-4392-b072-bf42b1a2bc30 req-9461c073-5f7d-4e2e-8ddf-b489c913ed46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Refreshing network info cache for port 52d803a0-5139-4197-a575-2530583dda13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:25:56 compute-0 nova_compute[259627]: 2025-10-14 09:25:56.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:25:56 compute-0 nova_compute[259627]: 2025-10-14 09:25:56.957 2 DEBUG nova.network.neutron [req-5a1cfab9-ce22-4392-b072-bf42b1a2bc30 req-9461c073-5f7d-4e2e-8ddf-b489c913ed46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Updated VIF entry in instance network info cache for port 52d803a0-5139-4197-a575-2530583dda13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:25:56 compute-0 nova_compute[259627]: 2025-10-14 09:25:56.958 2 DEBUG nova.network.neutron [req-5a1cfab9-ce22-4392-b072-bf42b1a2bc30 req-9461c073-5f7d-4e2e-8ddf-b489c913ed46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Updating instance_info_cache with network_info: [{"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:25:56 compute-0 nova_compute[259627]: 2025-10-14 09:25:56.987 2 DEBUG oslo_concurrency.lockutils [req-5a1cfab9-ce22-4392-b072-bf42b1a2bc30 req-9461c073-5f7d-4e2e-8ddf-b489c913ed46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:25:57 compute-0 ceph-mon[74249]: pgmap v2184: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:25:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:25:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:25:59 compute-0 ceph-mon[74249]: pgmap v2185: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:25:59 compute-0 nova_compute[259627]: 2025-10-14 09:25:59.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:25:59 compute-0 nova_compute[259627]: 2025-10-14 09:25:59.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:26:01 compute-0 ceph-mon[74249]: pgmap v2186: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:26:01 compute-0 nova_compute[259627]: 2025-10-14 09:26:01.995 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4310595f-2280-438c-97ca-f2de57527501" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:01 compute-0 nova_compute[259627]: 2025-10-14 09:26:01.996 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:02 compute-0 nova_compute[259627]: 2025-10-14 09:26:02.019 2 DEBUG nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:26:02 compute-0 nova_compute[259627]: 2025-10-14 09:26:02.122 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:02 compute-0 nova_compute[259627]: 2025-10-14 09:26:02.123 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:02 compute-0 nova_compute[259627]: 2025-10-14 09:26:02.134 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:26:02 compute-0 nova_compute[259627]: 2025-10-14 09:26:02.134 2 INFO nova.compute.claims [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:26:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 89 op/s
Oct 14 09:26:02 compute-0 nova_compute[259627]: 2025-10-14 09:26:02.322 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:26:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:26:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:26:02 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2416403570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:26:02 compute-0 nova_compute[259627]: 2025-10-14 09:26:02.791 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:02 compute-0 nova_compute[259627]: 2025-10-14 09:26:02.795 2 DEBUG nova.compute.provider_tree [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:26:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:26:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:26:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:26:02 compute-0 nova_compute[259627]: 2025-10-14 09:26:02.815 2 DEBUG nova.scheduler.client.report [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:26:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:26:02 compute-0 nova_compute[259627]: 2025-10-14 09:26:02.838 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:02 compute-0 nova_compute[259627]: 2025-10-14 09:26:02.839 2 DEBUG nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:26:02 compute-0 nova_compute[259627]: 2025-10-14 09:26:02.892 2 DEBUG nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:26:02 compute-0 nova_compute[259627]: 2025-10-14 09:26:02.892 2 DEBUG nova.network.neutron [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:26:02 compute-0 nova_compute[259627]: 2025-10-14 09:26:02.917 2 INFO nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:26:02 compute-0 nova_compute[259627]: 2025-10-14 09:26:02.952 2 DEBUG nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:26:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.053 2 DEBUG nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.055 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.055 2 INFO nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Creating image(s)
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.084 2 DEBUG nova.storage.rbd_utils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4310595f-2280-438c-97ca-f2de57527501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.114 2 DEBUG nova.storage.rbd_utils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4310595f-2280-438c-97ca-f2de57527501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.139 2 DEBUG nova.storage.rbd_utils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4310595f-2280-438c-97ca-f2de57527501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.142 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.234 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.235 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.236 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.236 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.260 2 DEBUG nova.storage.rbd_utils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4310595f-2280-438c-97ca-f2de57527501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.264 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4310595f-2280-438c-97ca-f2de57527501_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:03 compute-0 ovn_controller[152662]: 2025-10-14T09:26:03Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6a:56:7c 10.100.0.9
Oct 14 09:26:03 compute-0 ovn_controller[152662]: 2025-10-14T09:26:03Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:56:7c 10.100.0.9
Oct 14 09:26:03 compute-0 ceph-mon[74249]: pgmap v2187: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 89 op/s
Oct 14 09:26:03 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2416403570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.517 2 DEBUG nova.policy [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20f3546ab30e42b5b641f67780316750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f754bf649a2404fa8dee732f5aab36e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.535 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4310595f-2280-438c-97ca-f2de57527501_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.608 2 DEBUG nova.storage.rbd_utils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] resizing rbd image 4310595f-2280-438c-97ca-f2de57527501_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.719 2 DEBUG nova.objects.instance [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'migration_context' on Instance uuid 4310595f-2280-438c-97ca-f2de57527501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.733 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.734 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Ensure instance console log exists: /var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.735 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.735 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:03 compute-0 nova_compute[259627]: 2025-10-14 09:26:03.736 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:26:04 compute-0 nova_compute[259627]: 2025-10-14 09:26:04.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:04 compute-0 nova_compute[259627]: 2025-10-14 09:26:04.827 2 DEBUG nova.network.neutron [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Successfully created port: ea25832f-13d3-41ec-874c-e622d24c912e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:26:04 compute-0 nova_compute[259627]: 2025-10-14 09:26:04.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:05 compute-0 ceph-mon[74249]: pgmap v2188: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:26:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:26:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2415980028' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:26:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:26:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2415980028' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:26:05 compute-0 nova_compute[259627]: 2025-10-14 09:26:05.920 2 DEBUG nova.network.neutron [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Successfully updated port: ea25832f-13d3-41ec-874c-e622d24c912e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:26:05 compute-0 nova_compute[259627]: 2025-10-14 09:26:05.942 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:26:05 compute-0 nova_compute[259627]: 2025-10-14 09:26:05.942 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquired lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:26:05 compute-0 nova_compute[259627]: 2025-10-14 09:26:05.943 2 DEBUG nova.network.neutron [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:26:06 compute-0 nova_compute[259627]: 2025-10-14 09:26:06.042 2 DEBUG nova.compute.manager [req-c652b128-6d11-4e7f-b086-c93c83489e23 req-6ccdb29b-6271-4a5a-b81e-5f389f8486ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received event network-changed-ea25832f-13d3-41ec-874c-e622d24c912e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:26:06 compute-0 nova_compute[259627]: 2025-10-14 09:26:06.042 2 DEBUG nova.compute.manager [req-c652b128-6d11-4e7f-b086-c93c83489e23 req-6ccdb29b-6271-4a5a-b81e-5f389f8486ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Refreshing instance network info cache due to event network-changed-ea25832f-13d3-41ec-874c-e622d24c912e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:26:06 compute-0 nova_compute[259627]: 2025-10-14 09:26:06.043 2 DEBUG oslo_concurrency.lockutils [req-c652b128-6d11-4e7f-b086-c93c83489e23 req-6ccdb29b-6271-4a5a-b81e-5f389f8486ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:26:06 compute-0 nova_compute[259627]: 2025-10-14 09:26:06.113 2 DEBUG nova.network.neutron [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:26:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct 14 09:26:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2415980028' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:26:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2415980028' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:26:06 compute-0 podman[388846]: 2025-10-14 09:26:06.673823777 +0000 UTC m=+0.073966618 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:26:06 compute-0 podman[388845]: 2025-10-14 09:26:06.697955064 +0000 UTC m=+0.099883959 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 09:26:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:07.040 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:07.041 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:07.042 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:07 compute-0 ceph-mon[74249]: pgmap v2189: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.660 2 DEBUG nova.network.neutron [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updating instance_info_cache with network_info: [{"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.752 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Releasing lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.753 2 DEBUG nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Instance network_info: |[{"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.754 2 DEBUG oslo_concurrency.lockutils [req-c652b128-6d11-4e7f-b086-c93c83489e23 req-6ccdb29b-6271-4a5a-b81e-5f389f8486ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.755 2 DEBUG nova.network.neutron [req-c652b128-6d11-4e7f-b086-c93c83489e23 req-6ccdb29b-6271-4a5a-b81e-5f389f8486ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Refreshing network info cache for port ea25832f-13d3-41ec-874c-e622d24c912e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.759 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Start _get_guest_xml network_info=[{"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.766 2 WARNING nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.774 2 DEBUG nova.virt.libvirt.host [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.775 2 DEBUG nova.virt.libvirt.host [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.779 2 DEBUG nova.virt.libvirt.host [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.780 2 DEBUG nova.virt.libvirt.host [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.781 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.781 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.782 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.783 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.783 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.784 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.784 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.785 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.786 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.786 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.787 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.787 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:26:07 compute-0 nova_compute[259627]: 2025-10-14 09:26:07.793 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:26:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:26:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/857147547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.249 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.274 2 DEBUG nova.storage.rbd_utils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4310595f-2280-438c-97ca-f2de57527501_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.278 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 14 09:26:08 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/857147547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:26:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:26:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1382903529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.751 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.754 2 DEBUG nova.virt.libvirt.vif [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-136745986',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-136745986',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=126,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHc9phIuU5FXYInHFmneK7ofu0Hronr3GOHgS3ZKrK8UZEcxqPRrwvV2ktBWbk2vf9CswqByMiWPlH6Y1ffYCmRhb+LdZFzcPKCiYu31yXKGqBJ2r6m/arw2a5HgrQ1Icw==',key_name='tempest-TestSecurityGroupsBasicOps-69160241',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-pph0582m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:26:02Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=4310595f-2280-438c-97ca-f2de57527501,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.755 2 DEBUG nova.network.os_vif_util [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.757 2 DEBUG nova.network.os_vif_util [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:45:3f,bridge_name='br-int',has_traffic_filtering=True,id=ea25832f-13d3-41ec-874c-e622d24c912e,network=Network(69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea25832f-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.759 2 DEBUG nova.objects.instance [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'pci_devices' on Instance uuid 4310595f-2280-438c-97ca-f2de57527501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.778 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:26:08 compute-0 nova_compute[259627]:   <uuid>4310595f-2280-438c-97ca-f2de57527501</uuid>
Oct 14 09:26:08 compute-0 nova_compute[259627]:   <name>instance-0000007e</name>
Oct 14 09:26:08 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:26:08 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:26:08 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-136745986</nova:name>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:26:07</nova:creationTime>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:26:08 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:26:08 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:26:08 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:26:08 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:26:08 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:26:08 compute-0 nova_compute[259627]:         <nova:user uuid="20f3546ab30e42b5b641f67780316750">tempest-TestSecurityGroupsBasicOps-1327646173-project-member</nova:user>
Oct 14 09:26:08 compute-0 nova_compute[259627]:         <nova:project uuid="5f754bf649a2404fa8dee732f5aab36e">tempest-TestSecurityGroupsBasicOps-1327646173</nova:project>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:26:08 compute-0 nova_compute[259627]:         <nova:port uuid="ea25832f-13d3-41ec-874c-e622d24c912e">
Oct 14 09:26:08 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:26:08 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:26:08 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <system>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <entry name="serial">4310595f-2280-438c-97ca-f2de57527501</entry>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <entry name="uuid">4310595f-2280-438c-97ca-f2de57527501</entry>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     </system>
Oct 14 09:26:08 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:26:08 compute-0 nova_compute[259627]:   <os>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:   </os>
Oct 14 09:26:08 compute-0 nova_compute[259627]:   <features>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:   </features>
Oct 14 09:26:08 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:26:08 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:26:08 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/4310595f-2280-438c-97ca-f2de57527501_disk">
Oct 14 09:26:08 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       </source>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:26:08 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/4310595f-2280-438c-97ca-f2de57527501_disk.config">
Oct 14 09:26:08 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       </source>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:26:08 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:fe:45:3f"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <target dev="tapea25832f-13"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501/console.log" append="off"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <video>
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     </video>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:26:08 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:26:08 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:26:08 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:26:08 compute-0 nova_compute[259627]: </domain>
Oct 14 09:26:08 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.779 2 DEBUG nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Preparing to wait for external event network-vif-plugged-ea25832f-13d3-41ec-874c-e622d24c912e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.780 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4310595f-2280-438c-97ca-f2de57527501-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.780 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.780 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.782 2 DEBUG nova.virt.libvirt.vif [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-136745986',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-136745986',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=126,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHc9phIuU5FXYInHFmneK7ofu0Hronr3GOHgS3ZKrK8UZEcxqPRrwvV2ktBWbk2vf9CswqByMiWPlH6Y1ffYCmRhb+LdZFzcPKCiYu31yXKGqBJ2r6m/arw2a5HgrQ1Icw==',key_name='tempest-TestSecurityGroupsBasicOps-69160241',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-pph0582m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:26:02Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=4310595f-2280-438c-97ca-f2de57527501,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.782 2 DEBUG nova.network.os_vif_util [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.783 2 DEBUG nova.network.os_vif_util [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:45:3f,bridge_name='br-int',has_traffic_filtering=True,id=ea25832f-13d3-41ec-874c-e622d24c912e,network=Network(69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea25832f-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.784 2 DEBUG os_vif [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:45:3f,bridge_name='br-int',has_traffic_filtering=True,id=ea25832f-13d3-41ec-874c-e622d24c912e,network=Network(69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea25832f-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.786 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.787 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.793 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea25832f-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.794 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea25832f-13, col_values=(('external_ids', {'iface-id': 'ea25832f-13d3-41ec-874c-e622d24c912e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:45:3f', 'vm-uuid': '4310595f-2280-438c-97ca-f2de57527501'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:08 compute-0 NetworkManager[44885]: <info>  [1760433968.7978] manager: (tapea25832f-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/548)
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.806 2 INFO os_vif [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:45:3f,bridge_name='br-int',has_traffic_filtering=True,id=ea25832f-13d3-41ec-874c-e622d24c912e,network=Network(69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea25832f-13')
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.875 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.876 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.876 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No VIF found with MAC fa:16:3e:fe:45:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.877 2 INFO nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Using config drive
Oct 14 09:26:08 compute-0 nova_compute[259627]: 2025-10-14 09:26:08.907 2 DEBUG nova.storage.rbd_utils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4310595f-2280-438c-97ca-f2de57527501_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:09 compute-0 nova_compute[259627]: 2025-10-14 09:26:09.224 2 INFO nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Creating config drive at /var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501/disk.config
Oct 14 09:26:09 compute-0 nova_compute[259627]: 2025-10-14 09:26:09.234 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7xoix8t2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:09 compute-0 nova_compute[259627]: 2025-10-14 09:26:09.299 2 DEBUG nova.network.neutron [req-c652b128-6d11-4e7f-b086-c93c83489e23 req-6ccdb29b-6271-4a5a-b81e-5f389f8486ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updated VIF entry in instance network info cache for port ea25832f-13d3-41ec-874c-e622d24c912e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:26:09 compute-0 nova_compute[259627]: 2025-10-14 09:26:09.300 2 DEBUG nova.network.neutron [req-c652b128-6d11-4e7f-b086-c93c83489e23 req-6ccdb29b-6271-4a5a-b81e-5f389f8486ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updating instance_info_cache with network_info: [{"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:26:09 compute-0 nova_compute[259627]: 2025-10-14 09:26:09.314 2 DEBUG oslo_concurrency.lockutils [req-c652b128-6d11-4e7f-b086-c93c83489e23 req-6ccdb29b-6271-4a5a-b81e-5f389f8486ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:26:09 compute-0 nova_compute[259627]: 2025-10-14 09:26:09.389 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7xoix8t2" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:09 compute-0 nova_compute[259627]: 2025-10-14 09:26:09.429 2 DEBUG nova.storage.rbd_utils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4310595f-2280-438c-97ca-f2de57527501_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:09 compute-0 nova_compute[259627]: 2025-10-14 09:26:09.434 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501/disk.config 4310595f-2280-438c-97ca-f2de57527501_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:09 compute-0 ceph-mon[74249]: pgmap v2190: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 14 09:26:09 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1382903529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:26:09 compute-0 nova_compute[259627]: 2025-10-14 09:26:09.639 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501/disk.config 4310595f-2280-438c-97ca-f2de57527501_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:09 compute-0 nova_compute[259627]: 2025-10-14 09:26:09.640 2 INFO nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Deleting local config drive /var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501/disk.config because it was imported into RBD.
Oct 14 09:26:09 compute-0 kernel: tapea25832f-13: entered promiscuous mode
Oct 14 09:26:09 compute-0 NetworkManager[44885]: <info>  [1760433969.7073] manager: (tapea25832f-13): new Tun device (/org/freedesktop/NetworkManager/Devices/549)
Oct 14 09:26:09 compute-0 nova_compute[259627]: 2025-10-14 09:26:09.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:09 compute-0 ovn_controller[152662]: 2025-10-14T09:26:09Z|01342|binding|INFO|Claiming lport ea25832f-13d3-41ec-874c-e622d24c912e for this chassis.
Oct 14 09:26:09 compute-0 ovn_controller[152662]: 2025-10-14T09:26:09Z|01343|binding|INFO|ea25832f-13d3-41ec-874c-e622d24c912e: Claiming fa:16:3e:fe:45:3f 10.100.0.12
Oct 14 09:26:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.720 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:45:3f 10.100.0.12'], port_security=['fa:16:3e:fe:45:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4310595f-2280-438c-97ca-f2de57527501', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2ea5a077-a2c7-41d4-9c82-971893cbca2e 5ded162a-2a98-4fc1-94d1-b742c1816f61', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57f23724-2b34-445a-b3d0-46a0f0ee87c3, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ea25832f-13d3-41ec-874c-e622d24c912e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:26:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.721 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ea25832f-13d3-41ec-874c-e622d24c912e in datapath 69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c bound to our chassis
Oct 14 09:26:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.723 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c
Oct 14 09:26:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.734 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[794f794c-1d79-4f59-95de-6dafd46f877d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.735 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap69d5e5f4-01 in ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:26:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.736 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap69d5e5f4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:26:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.736 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7d36a989-5da6-4817-ac9e-c8fbdef463bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.738 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e9922194-5371-43c7-bc75-d793df664450]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:09 compute-0 systemd-udevd[389021]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:26:09 compute-0 ovn_controller[152662]: 2025-10-14T09:26:09Z|01344|binding|INFO|Setting lport ea25832f-13d3-41ec-874c-e622d24c912e ovn-installed in OVS
Oct 14 09:26:09 compute-0 ovn_controller[152662]: 2025-10-14T09:26:09Z|01345|binding|INFO|Setting lport ea25832f-13d3-41ec-874c-e622d24c912e up in Southbound
Oct 14 09:26:09 compute-0 systemd-machined[214636]: New machine qemu-159-instance-0000007e.
Oct 14 09:26:09 compute-0 NetworkManager[44885]: <info>  [1760433969.7535] device (tapea25832f-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:26:09 compute-0 NetworkManager[44885]: <info>  [1760433969.7548] device (tapea25832f-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:26:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.771 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb6b723-d2c1-4fec-8f45-c267aff10820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:09 compute-0 nova_compute[259627]: 2025-10-14 09:26:09.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:09 compute-0 systemd[1]: Started Virtual Machine qemu-159-instance-0000007e.
Oct 14 09:26:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.817 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2274fa-20e8-411e-bae8-554cb694a199]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.846 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[76289ff7-2bea-45d7-8b72-7f3770494d89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.850 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[97efc030-1ce9-4336-bb54-fce1f86f7025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:09 compute-0 NetworkManager[44885]: <info>  [1760433969.8527] manager: (tap69d5e5f4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/550)
Oct 14 09:26:09 compute-0 systemd-udevd[389026]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:26:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.889 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d6374c-9b13-4605-b38c-4735bccb3f3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.892 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ba1908-8791-4adf-991e-b64e675184be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:09 compute-0 nova_compute[259627]: 2025-10-14 09:26:09.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:09 compute-0 NetworkManager[44885]: <info>  [1760433969.9148] device (tap69d5e5f4-00): carrier: link connected
Oct 14 09:26:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.921 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3dba3ad3-9d2a-4556-8821-36c94e52e14a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.943 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe00eed-f7cf-449f-bc4a-ba8cc479ee66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69d5e5f4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:7d:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 385], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774867, 'reachable_time': 33876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389055, 'error': None, 'target': 'ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.960 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ad9f5172-810b-4fc9-af0f-6b16bb7e8053]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:7da1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 774867, 'tstamp': 774867}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389056, 'error': None, 'target': 'ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.979 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[beff8ccf-b361-4a4f-8919-d53d19de4842]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69d5e5f4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:7d:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 385], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774867, 'reachable_time': 33876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 389057, 'error': None, 'target': 'ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.009 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[06129994-bfa0-440f-b047-75cd6fd3308c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.083 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[002fa53c-eb68-4d9c-a486-2865451ae061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.085 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69d5e5f4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.086 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.087 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69d5e5f4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:10 compute-0 NetworkManager[44885]: <info>  [1760433970.0902] manager: (tap69d5e5f4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/551)
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:10 compute-0 kernel: tap69d5e5f4-00: entered promiscuous mode
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.094 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69d5e5f4-00, col_values=(('external_ids', {'iface-id': 'a300b10f-f6fd-47ab-bc03-160d747e5ac0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:10 compute-0 ovn_controller[152662]: 2025-10-14T09:26:10Z|01346|binding|INFO|Releasing lport a300b10f-f6fd-47ab-bc03-160d747e5ac0 from this chassis (sb_readonly=0)
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.124 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.125 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[659a4138-4ab1-4e81-aa29-22cb7863ecae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.126 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c.pid.haproxy
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:26:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.127 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c', 'env', 'PROCESS_TAG=haproxy-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:26:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.319 2 INFO nova.compute.manager [None req-1b472b76-f7e5-4d11-9bbc-9cfd0232a5e3 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Get console output
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.328 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:26:10 compute-0 podman[389131]: 2025-10-14 09:26:10.520137217 +0000 UTC m=+0.064978916 container create a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:26:10 compute-0 systemd[1]: Started libpod-conmon-a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1.scope.
Oct 14 09:26:10 compute-0 podman[389131]: 2025-10-14 09:26:10.481112543 +0000 UTC m=+0.025954262 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:26:10 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a75670ad0d206a928ee627a44dd9949ee1ddfbb2472618a373916a2273c134/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:26:10 compute-0 podman[389131]: 2025-10-14 09:26:10.610675055 +0000 UTC m=+0.155516784 container init a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:26:10 compute-0 podman[389131]: 2025-10-14 09:26:10.615809201 +0000 UTC m=+0.160650900 container start a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:26:10 compute-0 neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c[389146]: [NOTICE]   (389150) : New worker (389152) forked
Oct 14 09:26:10 compute-0 neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c[389146]: [NOTICE]   (389150) : Loading success.
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.680 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433970.6797283, 4310595f-2280-438c-97ca-f2de57527501 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.680 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] VM Started (Lifecycle Event)
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.715 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.718 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433970.6799247, 4310595f-2280-438c-97ca-f2de57527501 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.718 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] VM Paused (Lifecycle Event)
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.740 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.742 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.771 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.927 2 DEBUG nova.compute.manager [req-de4601fb-7992-4f4b-917b-603479fe6c5e req-3ab91eb3-8f4c-4d80-acb4-e3d7aeecf27e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received event network-vif-plugged-ea25832f-13d3-41ec-874c-e622d24c912e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.928 2 DEBUG oslo_concurrency.lockutils [req-de4601fb-7992-4f4b-917b-603479fe6c5e req-3ab91eb3-8f4c-4d80-acb4-e3d7aeecf27e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4310595f-2280-438c-97ca-f2de57527501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.928 2 DEBUG oslo_concurrency.lockutils [req-de4601fb-7992-4f4b-917b-603479fe6c5e req-3ab91eb3-8f4c-4d80-acb4-e3d7aeecf27e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.928 2 DEBUG oslo_concurrency.lockutils [req-de4601fb-7992-4f4b-917b-603479fe6c5e req-3ab91eb3-8f4c-4d80-acb4-e3d7aeecf27e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.928 2 DEBUG nova.compute.manager [req-de4601fb-7992-4f4b-917b-603479fe6c5e req-3ab91eb3-8f4c-4d80-acb4-e3d7aeecf27e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Processing event network-vif-plugged-ea25832f-13d3-41ec-874c-e622d24c912e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.929 2 DEBUG nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.933 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433970.933312, 4310595f-2280-438c-97ca-f2de57527501 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.934 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] VM Resumed (Lifecycle Event)
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.935 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.938 2 INFO nova.virt.libvirt.driver [-] [instance: 4310595f-2280-438c-97ca-f2de57527501] Instance spawned successfully.
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.938 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.967 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.972 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.975 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.975 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.976 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.976 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.976 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:26:10 compute-0 nova_compute[259627]: 2025-10-14 09:26:10.976 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:26:11 compute-0 nova_compute[259627]: 2025-10-14 09:26:11.008 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:26:11 compute-0 nova_compute[259627]: 2025-10-14 09:26:11.066 2 INFO nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Took 8.01 seconds to spawn the instance on the hypervisor.
Oct 14 09:26:11 compute-0 nova_compute[259627]: 2025-10-14 09:26:11.067 2 DEBUG nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:26:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:11.162 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:26:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:11.163 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:26:11 compute-0 nova_compute[259627]: 2025-10-14 09:26:11.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:11 compute-0 nova_compute[259627]: 2025-10-14 09:26:11.176 2 INFO nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Took 9.09 seconds to build instance.
Oct 14 09:26:11 compute-0 ovn_controller[152662]: 2025-10-14T09:26:11Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:56:7c 10.100.0.9
Oct 14 09:26:11 compute-0 nova_compute[259627]: 2025-10-14 09:26:11.245 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:11 compute-0 ceph-mon[74249]: pgmap v2191: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Oct 14 09:26:11 compute-0 nova_compute[259627]: 2025-10-14 09:26:11.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:26:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct 14 09:26:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:26:13 compute-0 ovn_controller[152662]: 2025-10-14T09:26:13Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:56:7c 10.100.0.9
Oct 14 09:26:13 compute-0 nova_compute[259627]: 2025-10-14 09:26:13.199 2 DEBUG nova.compute.manager [req-75585999-cd82-4e2c-91c3-00bc217a318a req-8f9172f4-5ef5-46f1-9b87-893cb2d9b08f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received event network-vif-plugged-ea25832f-13d3-41ec-874c-e622d24c912e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:26:13 compute-0 nova_compute[259627]: 2025-10-14 09:26:13.199 2 DEBUG oslo_concurrency.lockutils [req-75585999-cd82-4e2c-91c3-00bc217a318a req-8f9172f4-5ef5-46f1-9b87-893cb2d9b08f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4310595f-2280-438c-97ca-f2de57527501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:13 compute-0 nova_compute[259627]: 2025-10-14 09:26:13.199 2 DEBUG oslo_concurrency.lockutils [req-75585999-cd82-4e2c-91c3-00bc217a318a req-8f9172f4-5ef5-46f1-9b87-893cb2d9b08f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:13 compute-0 nova_compute[259627]: 2025-10-14 09:26:13.199 2 DEBUG oslo_concurrency.lockutils [req-75585999-cd82-4e2c-91c3-00bc217a318a req-8f9172f4-5ef5-46f1-9b87-893cb2d9b08f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:13 compute-0 nova_compute[259627]: 2025-10-14 09:26:13.200 2 DEBUG nova.compute.manager [req-75585999-cd82-4e2c-91c3-00bc217a318a req-8f9172f4-5ef5-46f1-9b87-893cb2d9b08f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] No waiting events found dispatching network-vif-plugged-ea25832f-13d3-41ec-874c-e622d24c912e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:26:13 compute-0 nova_compute[259627]: 2025-10-14 09:26:13.200 2 WARNING nova.compute.manager [req-75585999-cd82-4e2c-91c3-00bc217a318a req-8f9172f4-5ef5-46f1-9b87-893cb2d9b08f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received unexpected event network-vif-plugged-ea25832f-13d3-41ec-874c-e622d24c912e for instance with vm_state active and task_state None.
Oct 14 09:26:13 compute-0 ceph-mon[74249]: pgmap v2192: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct 14 09:26:13 compute-0 nova_compute[259627]: 2025-10-14 09:26:13.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct 14 09:26:14 compute-0 nova_compute[259627]: 2025-10-14 09:26:14.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:14 compute-0 ovn_controller[152662]: 2025-10-14T09:26:14Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:56:7c 10.100.0.9
Oct 14 09:26:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:15.165 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:15 compute-0 ceph-mon[74249]: pgmap v2193: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct 14 09:26:15 compute-0 nova_compute[259627]: 2025-10-14 09:26:15.912 2 DEBUG nova.compute.manager [req-38c1443d-b2d1-4b3b-824c-59f92c1b9290 req-b5b8faed-4612-4b79-9b83-687c8c9d825b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received event network-changed-52d803a0-5139-4197-a575-2530583dda13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:26:15 compute-0 nova_compute[259627]: 2025-10-14 09:26:15.913 2 DEBUG nova.compute.manager [req-38c1443d-b2d1-4b3b-824c-59f92c1b9290 req-b5b8faed-4612-4b79-9b83-687c8c9d825b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Refreshing instance network info cache due to event network-changed-52d803a0-5139-4197-a575-2530583dda13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:26:15 compute-0 nova_compute[259627]: 2025-10-14 09:26:15.913 2 DEBUG oslo_concurrency.lockutils [req-38c1443d-b2d1-4b3b-824c-59f92c1b9290 req-b5b8faed-4612-4b79-9b83-687c8c9d825b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:26:15 compute-0 nova_compute[259627]: 2025-10-14 09:26:15.914 2 DEBUG oslo_concurrency.lockutils [req-38c1443d-b2d1-4b3b-824c-59f92c1b9290 req-b5b8faed-4612-4b79-9b83-687c8c9d825b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:26:15 compute-0 nova_compute[259627]: 2025-10-14 09:26:15.914 2 DEBUG nova.network.neutron [req-38c1443d-b2d1-4b3b-824c-59f92c1b9290 req-b5b8faed-4612-4b79-9b83-687c8c9d825b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Refreshing network info cache for port 52d803a0-5139-4197-a575-2530583dda13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.022 2 DEBUG oslo_concurrency.lockutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "f41def60-a7be-4154-86bc-ef63a639ee94" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.023 2 DEBUG oslo_concurrency.lockutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.024 2 DEBUG oslo_concurrency.lockutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.024 2 DEBUG oslo_concurrency.lockutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.024 2 DEBUG oslo_concurrency.lockutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.026 2 INFO nova.compute.manager [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Terminating instance
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.027 2 DEBUG nova.compute.manager [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:26:16 compute-0 kernel: tap52d803a0-51 (unregistering): left promiscuous mode
Oct 14 09:26:16 compute-0 NetworkManager[44885]: <info>  [1760433976.0880] device (tap52d803a0-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:26:16 compute-0 ovn_controller[152662]: 2025-10-14T09:26:16Z|01347|binding|INFO|Releasing lport 52d803a0-5139-4197-a575-2530583dda13 from this chassis (sb_readonly=0)
Oct 14 09:26:16 compute-0 ovn_controller[152662]: 2025-10-14T09:26:16Z|01348|binding|INFO|Setting lport 52d803a0-5139-4197-a575-2530583dda13 down in Southbound
Oct 14 09:26:16 compute-0 ovn_controller[152662]: 2025-10-14T09:26:16Z|01349|binding|INFO|Removing iface tap52d803a0-51 ovn-installed in OVS
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.169 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:56:7c 10.100.0.9'], port_security=['fa:16:3e:6a:56:7c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f41def60-a7be-4154-86bc-ef63a639ee94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e4b4c3f-9218-4fba-8f93-74ac472b0db0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': '68240394-e812-45d0-9e91-6623d4ac03bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c79ac29-f3b4-494c-ada5-2b93955c4fe1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=52d803a0-5139-4197-a575-2530583dda13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:26:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.171 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 52d803a0-5139-4197-a575-2530583dda13 in datapath 0e4b4c3f-9218-4fba-8f93-74ac472b0db0 unbound from our chassis
Oct 14 09:26:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.174 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e4b4c3f-9218-4fba-8f93-74ac472b0db0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:26:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.175 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[79d64cbf-f8af-4860-b362-f7250bb82306]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.177 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0 namespace which is not needed anymore
Oct 14 09:26:16 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Oct 14 09:26:16 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007d.scope: Consumed 13.063s CPU time.
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:16 compute-0 systemd-machined[214636]: Machine qemu-158-instance-0000007d terminated.
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.282 2 INFO nova.virt.libvirt.driver [-] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Instance destroyed successfully.
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.284 2 DEBUG nova.objects.instance [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid f41def60-a7be-4154-86bc-ef63a639ee94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.307 2 DEBUG nova.virt.libvirt.vif [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:25:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2135752872',display_name='tempest-TestNetworkBasicOps-server-2135752872',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2135752872',id=125,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPnKWI1fcgm2J2USW/ocYjEbdNv3UFcXFKMO5si6IlBqcZWb9CUblu//WPv2zFMmUex5tiH7jg81h5bD0kUs5doUIUp4qv9iNUQKKi7Q5u8sjjVGsY4n55Yf/sl7IQnjgw==',key_name='tempest-TestNetworkBasicOps-1922317585',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:25:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-h0pwqz7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:25:52Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=f41def60-a7be-4154-86bc-ef63a639ee94,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.308 2 DEBUG nova.network.os_vif_util [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.309 2 DEBUG nova.network.os_vif_util [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:56:7c,bridge_name='br-int',has_traffic_filtering=True,id=52d803a0-5139-4197-a575-2530583dda13,network=Network(0e4b4c3f-9218-4fba-8f93-74ac472b0db0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52d803a0-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.310 2 DEBUG os_vif [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:56:7c,bridge_name='br-int',has_traffic_filtering=True,id=52d803a0-5139-4197-a575-2530583dda13,network=Network(0e4b4c3f-9218-4fba-8f93-74ac472b0db0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52d803a0-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.314 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52d803a0-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.320 2 INFO os_vif [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:56:7c,bridge_name='br-int',has_traffic_filtering=True,id=52d803a0-5139-4197-a575-2530583dda13,network=Network(0e4b4c3f-9218-4fba-8f93-74ac472b0db0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52d803a0-51')
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.339 2 DEBUG nova.compute.manager [req-fc0ecba7-49f4-47c9-87e9-b2c9fbd4c1bc req-3af47033-b1d2-4402-a308-143e5e30c018 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received event network-vif-unplugged-52d803a0-5139-4197-a575-2530583dda13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.341 2 DEBUG oslo_concurrency.lockutils [req-fc0ecba7-49f4-47c9-87e9-b2c9fbd4c1bc req-3af47033-b1d2-4402-a308-143e5e30c018 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.341 2 DEBUG oslo_concurrency.lockutils [req-fc0ecba7-49f4-47c9-87e9-b2c9fbd4c1bc req-3af47033-b1d2-4402-a308-143e5e30c018 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.341 2 DEBUG oslo_concurrency.lockutils [req-fc0ecba7-49f4-47c9-87e9-b2c9fbd4c1bc req-3af47033-b1d2-4402-a308-143e5e30c018 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.342 2 DEBUG nova.compute.manager [req-fc0ecba7-49f4-47c9-87e9-b2c9fbd4c1bc req-3af47033-b1d2-4402-a308-143e5e30c018 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] No waiting events found dispatching network-vif-unplugged-52d803a0-5139-4197-a575-2530583dda13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.342 2 DEBUG nova.compute.manager [req-fc0ecba7-49f4-47c9-87e9-b2c9fbd4c1bc req-3af47033-b1d2-4402-a308-143e5e30c018 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received event network-vif-unplugged-52d803a0-5139-4197-a575-2530583dda13 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:26:16 compute-0 neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0[388640]: [NOTICE]   (388644) : haproxy version is 2.8.14-c23fe91
Oct 14 09:26:16 compute-0 neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0[388640]: [NOTICE]   (388644) : path to executable is /usr/sbin/haproxy
Oct 14 09:26:16 compute-0 neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0[388640]: [WARNING]  (388644) : Exiting Master process...
Oct 14 09:26:16 compute-0 neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0[388640]: [ALERT]    (388644) : Current worker (388646) exited with code 143 (Terminated)
Oct 14 09:26:16 compute-0 neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0[388640]: [WARNING]  (388644) : All workers exited. Exiting... (0)
Oct 14 09:26:16 compute-0 systemd[1]: libpod-0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd.scope: Deactivated successfully.
Oct 14 09:26:16 compute-0 podman[389195]: 2025-10-14 09:26:16.407442359 +0000 UTC m=+0.076169203 container died 0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd-userdata-shm.mount: Deactivated successfully.
Oct 14 09:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e4e9f0f032f127958bf9c519d14d596a1ab0d965830f5dc4cf0b7403580501a-merged.mount: Deactivated successfully.
Oct 14 09:26:16 compute-0 podman[389195]: 2025-10-14 09:26:16.475630024 +0000 UTC m=+0.144356798 container cleanup 0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 09:26:16 compute-0 systemd[1]: libpod-conmon-0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd.scope: Deactivated successfully.
Oct 14 09:26:16 compute-0 podman[389245]: 2025-10-14 09:26:16.574151678 +0000 UTC m=+0.065061178 container remove 0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Oct 14 09:26:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.587 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[82b4c77d-780e-40db-8ac8-68796206c5cd]: (4, ('Tue Oct 14 09:26:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0 (0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd)\n0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd\nTue Oct 14 09:26:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0 (0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd)\n0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.589 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f685f836-ca49-42ac-a33a-9df792fff3fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.591 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e4b4c3f-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:16 compute-0 kernel: tap0e4b4c3f-90: left promiscuous mode
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.602 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[31b85c8f-5e27-4a53-8be4-1c1cc8403e9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.629 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c8fe0cfc-e359-4457-bab5-f73c34433f3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.632 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[18fad62d-c41b-4be5-bf3f-9a219c64a8ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.651 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[700e5e20-16ba-41d5-8656-eb039b93ef3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772947, 'reachable_time': 35338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389261, 'error': None, 'target': 'ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d0e4b4c3f\x2d9218\x2d4fba\x2d8f93\x2d74ac472b0db0.mount: Deactivated successfully.
Oct 14 09:26:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.658 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:26:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.658 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[54ad42e8-51e7-4c06-8541-a444976952c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.972 2 INFO nova.virt.libvirt.driver [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Deleting instance files /var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94_del
Oct 14 09:26:16 compute-0 nova_compute[259627]: 2025-10-14 09:26:16.973 2 INFO nova.virt.libvirt.driver [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Deletion of /var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94_del complete
Oct 14 09:26:17 compute-0 nova_compute[259627]: 2025-10-14 09:26:17.035 2 INFO nova.compute.manager [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Took 1.01 seconds to destroy the instance on the hypervisor.
Oct 14 09:26:17 compute-0 nova_compute[259627]: 2025-10-14 09:26:17.035 2 DEBUG oslo.service.loopingcall [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:26:17 compute-0 nova_compute[259627]: 2025-10-14 09:26:17.036 2 DEBUG nova.compute.manager [-] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:26:17 compute-0 nova_compute[259627]: 2025-10-14 09:26:17.036 2 DEBUG nova.network.neutron [-] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:26:17 compute-0 ceph-mon[74249]: pgmap v2194: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 14 09:26:17 compute-0 nova_compute[259627]: 2025-10-14 09:26:17.520 2 DEBUG nova.network.neutron [-] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:26:17 compute-0 nova_compute[259627]: 2025-10-14 09:26:17.537 2 INFO nova.compute.manager [-] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Took 0.50 seconds to deallocate network for instance.
Oct 14 09:26:17 compute-0 nova_compute[259627]: 2025-10-14 09:26:17.593 2 DEBUG oslo_concurrency.lockutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:17 compute-0 nova_compute[259627]: 2025-10-14 09:26:17.594 2 DEBUG oslo_concurrency.lockutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:17 compute-0 nova_compute[259627]: 2025-10-14 09:26:17.656 2 DEBUG nova.network.neutron [req-38c1443d-b2d1-4b3b-824c-59f92c1b9290 req-b5b8faed-4612-4b79-9b83-687c8c9d825b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Updated VIF entry in instance network info cache for port 52d803a0-5139-4197-a575-2530583dda13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:26:17 compute-0 nova_compute[259627]: 2025-10-14 09:26:17.657 2 DEBUG nova.network.neutron [req-38c1443d-b2d1-4b3b-824c-59f92c1b9290 req-b5b8faed-4612-4b79-9b83-687c8c9d825b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Updating instance_info_cache with network_info: [{"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:26:17 compute-0 nova_compute[259627]: 2025-10-14 09:26:17.668 2 DEBUG oslo_concurrency.processutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:17 compute-0 nova_compute[259627]: 2025-10-14 09:26:17.713 2 DEBUG oslo_concurrency.lockutils [req-38c1443d-b2d1-4b3b-824c-59f92c1b9290 req-b5b8faed-4612-4b79-9b83-687c8c9d825b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:26:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:26:18 compute-0 nova_compute[259627]: 2025-10-14 09:26:18.045 2 DEBUG nova.compute.manager [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received event network-changed-ea25832f-13d3-41ec-874c-e622d24c912e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:26:18 compute-0 nova_compute[259627]: 2025-10-14 09:26:18.046 2 DEBUG nova.compute.manager [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Refreshing instance network info cache due to event network-changed-ea25832f-13d3-41ec-874c-e622d24c912e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:26:18 compute-0 nova_compute[259627]: 2025-10-14 09:26:18.046 2 DEBUG oslo_concurrency.lockutils [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:26:18 compute-0 nova_compute[259627]: 2025-10-14 09:26:18.050 2 DEBUG oslo_concurrency.lockutils [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:26:18 compute-0 nova_compute[259627]: 2025-10-14 09:26:18.051 2 DEBUG nova.network.neutron [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Refreshing network info cache for port ea25832f-13d3-41ec-874c-e622d24c912e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:26:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:26:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2662381266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:26:18 compute-0 nova_compute[259627]: 2025-10-14 09:26:18.103 2 DEBUG oslo_concurrency.processutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:18 compute-0 nova_compute[259627]: 2025-10-14 09:26:18.114 2 DEBUG nova.compute.provider_tree [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:26:18 compute-0 nova_compute[259627]: 2025-10-14 09:26:18.138 2 DEBUG nova.scheduler.client.report [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:26:18 compute-0 nova_compute[259627]: 2025-10-14 09:26:18.167 2 DEBUG oslo_concurrency.lockutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:18 compute-0 nova_compute[259627]: 2025-10-14 09:26:18.193 2 INFO nova.scheduler.client.report [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance f41def60-a7be-4154-86bc-ef63a639ee94
Oct 14 09:26:18 compute-0 nova_compute[259627]: 2025-10-14 09:26:18.256 2 DEBUG oslo_concurrency.lockutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Oct 14 09:26:18 compute-0 nova_compute[259627]: 2025-10-14 09:26:18.420 2 DEBUG nova.compute.manager [req-72a7b175-dc4a-4d3e-b3dc-1928e017cf23 req-a3885386-23ac-4595-8ebc-35c40a070342 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received event network-vif-plugged-52d803a0-5139-4197-a575-2530583dda13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:26:18 compute-0 nova_compute[259627]: 2025-10-14 09:26:18.421 2 DEBUG oslo_concurrency.lockutils [req-72a7b175-dc4a-4d3e-b3dc-1928e017cf23 req-a3885386-23ac-4595-8ebc-35c40a070342 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:18 compute-0 nova_compute[259627]: 2025-10-14 09:26:18.421 2 DEBUG oslo_concurrency.lockutils [req-72a7b175-dc4a-4d3e-b3dc-1928e017cf23 req-a3885386-23ac-4595-8ebc-35c40a070342 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:18 compute-0 nova_compute[259627]: 2025-10-14 09:26:18.421 2 DEBUG oslo_concurrency.lockutils [req-72a7b175-dc4a-4d3e-b3dc-1928e017cf23 req-a3885386-23ac-4595-8ebc-35c40a070342 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:18 compute-0 nova_compute[259627]: 2025-10-14 09:26:18.421 2 DEBUG nova.compute.manager [req-72a7b175-dc4a-4d3e-b3dc-1928e017cf23 req-a3885386-23ac-4595-8ebc-35c40a070342 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] No waiting events found dispatching network-vif-plugged-52d803a0-5139-4197-a575-2530583dda13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:26:18 compute-0 nova_compute[259627]: 2025-10-14 09:26:18.421 2 WARNING nova.compute.manager [req-72a7b175-dc4a-4d3e-b3dc-1928e017cf23 req-a3885386-23ac-4595-8ebc-35c40a070342 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received unexpected event network-vif-plugged-52d803a0-5139-4197-a575-2530583dda13 for instance with vm_state deleted and task_state None.
Oct 14 09:26:18 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2662381266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:26:19 compute-0 nova_compute[259627]: 2025-10-14 09:26:19.224 2 DEBUG nova.network.neutron [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updated VIF entry in instance network info cache for port ea25832f-13d3-41ec-874c-e622d24c912e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:26:19 compute-0 nova_compute[259627]: 2025-10-14 09:26:19.224 2 DEBUG nova.network.neutron [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updating instance_info_cache with network_info: [{"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:26:19 compute-0 nova_compute[259627]: 2025-10-14 09:26:19.249 2 DEBUG oslo_concurrency.lockutils [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:26:19 compute-0 nova_compute[259627]: 2025-10-14 09:26:19.249 2 DEBUG nova.compute.manager [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received event network-vif-deleted-52d803a0-5139-4197-a575-2530583dda13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:26:19 compute-0 nova_compute[259627]: 2025-10-14 09:26:19.250 2 INFO nova.compute.manager [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Neutron deleted interface 52d803a0-5139-4197-a575-2530583dda13; detaching it from the instance and deleting it from the info cache
Oct 14 09:26:19 compute-0 nova_compute[259627]: 2025-10-14 09:26:19.250 2 DEBUG nova.network.neutron [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:26:19 compute-0 nova_compute[259627]: 2025-10-14 09:26:19.274 2 DEBUG nova.compute.manager [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Detach interface failed, port_id=52d803a0-5139-4197-a575-2530583dda13, reason: Instance f41def60-a7be-4154-86bc-ef63a639ee94 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 09:26:19 compute-0 ceph-mon[74249]: pgmap v2195: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Oct 14 09:26:19 compute-0 podman[389285]: 2025-10-14 09:26:19.668465217 +0000 UTC m=+0.070912884 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 09:26:19 compute-0 podman[389284]: 2025-10-14 09:26:19.735640867 +0000 UTC m=+0.134476484 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:26:19 compute-0 nova_compute[259627]: 2025-10-14 09:26:19.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 145 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 80 op/s
Oct 14 09:26:21 compute-0 nova_compute[259627]: 2025-10-14 09:26:21.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:21 compute-0 ceph-mon[74249]: pgmap v2196: 305 pgs: 305 active+clean; 145 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 80 op/s
Oct 14 09:26:22 compute-0 ovn_controller[152662]: 2025-10-14T09:26:22Z|01350|binding|INFO|Releasing lport a300b10f-f6fd-47ab-bc03-160d747e5ac0 from this chassis (sb_readonly=0)
Oct 14 09:26:22 compute-0 nova_compute[259627]: 2025-10-14 09:26:22.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 88 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 7.6 KiB/s wr, 98 op/s
Oct 14 09:26:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:26:23 compute-0 ovn_controller[152662]: 2025-10-14T09:26:23Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:45:3f 10.100.0.12
Oct 14 09:26:23 compute-0 ovn_controller[152662]: 2025-10-14T09:26:23Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:45:3f 10.100.0.12
Oct 14 09:26:23 compute-0 ceph-mon[74249]: pgmap v2197: 305 pgs: 305 active+clean; 88 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 7.6 KiB/s wr, 98 op/s
Oct 14 09:26:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 88 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.5 KiB/s wr, 96 op/s
Oct 14 09:26:24 compute-0 nova_compute[259627]: 2025-10-14 09:26:24.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:25 compute-0 sudo[389329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:26:25 compute-0 sudo[389329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:26:25 compute-0 sudo[389329]: pam_unix(sudo:session): session closed for user root
Oct 14 09:26:25 compute-0 sudo[389354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:26:25 compute-0 sudo[389354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:26:25 compute-0 sudo[389354]: pam_unix(sudo:session): session closed for user root
Oct 14 09:26:25 compute-0 sudo[389379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:26:25 compute-0 sudo[389379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:26:25 compute-0 sudo[389379]: pam_unix(sudo:session): session closed for user root
Oct 14 09:26:25 compute-0 sudo[389404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:26:25 compute-0 sudo[389404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:26:25 compute-0 ceph-mon[74249]: pgmap v2198: 305 pgs: 305 active+clean; 88 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.5 KiB/s wr, 96 op/s
Oct 14 09:26:25 compute-0 sudo[389404]: pam_unix(sudo:session): session closed for user root
Oct 14 09:26:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:26:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:26:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:26:25 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:26:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:26:25 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:26:25 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 5706d6d7-ffaf-4478-b036-7da8e9e245e7 does not exist
Oct 14 09:26:25 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev c4b8f133-1fc2-4fdd-bbd9-c2a37bc5fd7b does not exist
Oct 14 09:26:25 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev f7b1d075-aa45-4329-ae5b-85e17246b8a1 does not exist
Oct 14 09:26:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:26:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:26:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:26:25 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:26:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:26:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:26:26 compute-0 sudo[389460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:26:26 compute-0 sudo[389460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:26:26 compute-0 sudo[389460]: pam_unix(sudo:session): session closed for user root
Oct 14 09:26:26 compute-0 sudo[389485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:26:26 compute-0 sudo[389485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:26:26 compute-0 sudo[389485]: pam_unix(sudo:session): session closed for user root
Oct 14 09:26:26 compute-0 sudo[389510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:26:26 compute-0 sudo[389510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:26:26 compute-0 sudo[389510]: pam_unix(sudo:session): session closed for user root
Oct 14 09:26:26 compute-0 sudo[389535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:26:26 compute-0 sudo[389535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:26:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 161 op/s
Oct 14 09:26:26 compute-0 nova_compute[259627]: 2025-10-14 09:26:26.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:26 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:26:26 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:26:26 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:26:26 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:26:26 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:26:26 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:26:26 compute-0 podman[389600]: 2025-10-14 09:26:26.730877805 +0000 UTC m=+0.059458870 container create 96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 09:26:26 compute-0 systemd[1]: Started libpod-conmon-96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3.scope.
Oct 14 09:26:26 compute-0 podman[389600]: 2025-10-14 09:26:26.697420018 +0000 UTC m=+0.026001143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:26:26 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:26:26 compute-0 podman[389600]: 2025-10-14 09:26:26.842735119 +0000 UTC m=+0.171316244 container init 96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_black, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 09:26:26 compute-0 podman[389600]: 2025-10-14 09:26:26.852932281 +0000 UTC m=+0.181513346 container start 96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_black, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:26:26 compute-0 podman[389600]: 2025-10-14 09:26:26.857087254 +0000 UTC m=+0.185668319 container attach 96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_black, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:26:26 compute-0 determined_black[389617]: 167 167
Oct 14 09:26:26 compute-0 systemd[1]: libpod-96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3.scope: Deactivated successfully.
Oct 14 09:26:26 compute-0 conmon[389617]: conmon 96f1daee704f99a9b0c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3.scope/container/memory.events
Oct 14 09:26:26 compute-0 podman[389600]: 2025-10-14 09:26:26.866229609 +0000 UTC m=+0.194810664 container died 96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:26:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-21e75694929f4930f869bb1d59ac7d938764fe23e3a226f496bc4fb5e2b709d6-merged.mount: Deactivated successfully.
Oct 14 09:26:26 compute-0 podman[389600]: 2025-10-14 09:26:26.916091092 +0000 UTC m=+0.244672137 container remove 96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_black, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:26:26 compute-0 systemd[1]: libpod-conmon-96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3.scope: Deactivated successfully.
Oct 14 09:26:27 compute-0 podman[389640]: 2025-10-14 09:26:27.125136317 +0000 UTC m=+0.060778493 container create d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 09:26:27 compute-0 systemd[1]: Started libpod-conmon-d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a.scope.
Oct 14 09:26:27 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:26:27 compute-0 podman[389640]: 2025-10-14 09:26:27.10340758 +0000 UTC m=+0.039049786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e80123e806aab5f3c2caf6f59d150a0e3288de95ea98e000aa58ea7a91090b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e80123e806aab5f3c2caf6f59d150a0e3288de95ea98e000aa58ea7a91090b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e80123e806aab5f3c2caf6f59d150a0e3288de95ea98e000aa58ea7a91090b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e80123e806aab5f3c2caf6f59d150a0e3288de95ea98e000aa58ea7a91090b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e80123e806aab5f3c2caf6f59d150a0e3288de95ea98e000aa58ea7a91090b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:26:27 compute-0 podman[389640]: 2025-10-14 09:26:27.216258499 +0000 UTC m=+0.151900715 container init d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:26:27 compute-0 podman[389640]: 2025-10-14 09:26:27.225029515 +0000 UTC m=+0.160671691 container start d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 09:26:27 compute-0 podman[389640]: 2025-10-14 09:26:27.228511481 +0000 UTC m=+0.164153677 container attach d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 09:26:27 compute-0 ceph-mon[74249]: pgmap v2199: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 161 op/s
Oct 14 09:26:27 compute-0 nova_compute[259627]: 2025-10-14 09:26:27.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:26:28 compute-0 cool_maxwell[389657]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:26:28 compute-0 cool_maxwell[389657]: --> relative data size: 1.0
Oct 14 09:26:28 compute-0 cool_maxwell[389657]: --> All data devices are unavailable
Oct 14 09:26:28 compute-0 systemd[1]: libpod-d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a.scope: Deactivated successfully.
Oct 14 09:26:28 compute-0 podman[389640]: 2025-10-14 09:26:28.299597927 +0000 UTC m=+1.235240143 container died d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 09:26:28 compute-0 systemd[1]: libpod-d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a.scope: Consumed 1.015s CPU time.
Oct 14 09:26:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 14 09:26:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-63e80123e806aab5f3c2caf6f59d150a0e3288de95ea98e000aa58ea7a91090b-merged.mount: Deactivated successfully.
Oct 14 09:26:28 compute-0 podman[389640]: 2025-10-14 09:26:28.400498441 +0000 UTC m=+1.336140607 container remove d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 14 09:26:28 compute-0 systemd[1]: libpod-conmon-d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a.scope: Deactivated successfully.
Oct 14 09:26:28 compute-0 sudo[389535]: pam_unix(sudo:session): session closed for user root
Oct 14 09:26:28 compute-0 sudo[389700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:26:28 compute-0 sudo[389700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:26:28 compute-0 sudo[389700]: pam_unix(sudo:session): session closed for user root
Oct 14 09:26:28 compute-0 sudo[389725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:26:28 compute-0 sudo[389725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:26:28 compute-0 sudo[389725]: pam_unix(sudo:session): session closed for user root
Oct 14 09:26:28 compute-0 sudo[389750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:26:28 compute-0 sudo[389750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:26:28 compute-0 sudo[389750]: pam_unix(sudo:session): session closed for user root
Oct 14 09:26:28 compute-0 sudo[389775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:26:28 compute-0 sudo[389775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:26:29 compute-0 podman[389842]: 2025-10-14 09:26:29.238885057 +0000 UTC m=+0.057959623 container create 03f9776c30438b3112b3bbaaebfd53e6de83b6c9b7c51abec12bea7097b53448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:26:29 compute-0 systemd[1]: Started libpod-conmon-03f9776c30438b3112b3bbaaebfd53e6de83b6c9b7c51abec12bea7097b53448.scope.
Oct 14 09:26:29 compute-0 podman[389842]: 2025-10-14 09:26:29.205146253 +0000 UTC m=+0.024220859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:26:29 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:26:29 compute-0 podman[389842]: 2025-10-14 09:26:29.316898405 +0000 UTC m=+0.135972981 container init 03f9776c30438b3112b3bbaaebfd53e6de83b6c9b7c51abec12bea7097b53448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 09:26:29 compute-0 podman[389842]: 2025-10-14 09:26:29.324174004 +0000 UTC m=+0.143248530 container start 03f9776c30438b3112b3bbaaebfd53e6de83b6c9b7c51abec12bea7097b53448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 09:26:29 compute-0 nostalgic_black[389858]: 167 167
Oct 14 09:26:29 compute-0 systemd[1]: libpod-03f9776c30438b3112b3bbaaebfd53e6de83b6c9b7c51abec12bea7097b53448.scope: Deactivated successfully.
Oct 14 09:26:29 compute-0 podman[389842]: 2025-10-14 09:26:29.32964751 +0000 UTC m=+0.148722076 container attach 03f9776c30438b3112b3bbaaebfd53e6de83b6c9b7c51abec12bea7097b53448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 14 09:26:29 compute-0 podman[389842]: 2025-10-14 09:26:29.330414369 +0000 UTC m=+0.149488935 container died 03f9776c30438b3112b3bbaaebfd53e6de83b6c9b7c51abec12bea7097b53448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 09:26:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f320ef68bc6fe12d8308661a06df34a12a238288c7f9c4ae7e7239db58ad4b80-merged.mount: Deactivated successfully.
Oct 14 09:26:29 compute-0 podman[389842]: 2025-10-14 09:26:29.377692167 +0000 UTC m=+0.196766723 container remove 03f9776c30438b3112b3bbaaebfd53e6de83b6c9b7c51abec12bea7097b53448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 09:26:29 compute-0 systemd[1]: libpod-conmon-03f9776c30438b3112b3bbaaebfd53e6de83b6c9b7c51abec12bea7097b53448.scope: Deactivated successfully.
Oct 14 09:26:29 compute-0 ceph-mon[74249]: pgmap v2200: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 14 09:26:29 compute-0 podman[389881]: 2025-10-14 09:26:29.593455768 +0000 UTC m=+0.067007967 container create e5cf4ba068e1179d6204d5e7fc506ec5ddf399f1517d7dc90d06b347da07a2a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_easley, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 09:26:29 compute-0 systemd[1]: Started libpod-conmon-e5cf4ba068e1179d6204d5e7fc506ec5ddf399f1517d7dc90d06b347da07a2a4.scope.
Oct 14 09:26:29 compute-0 podman[389881]: 2025-10-14 09:26:29.567722992 +0000 UTC m=+0.041275251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:26:29 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7b28ebbc24ec730957ec7af34211775df17a5012de754cebd2e153d28895f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7b28ebbc24ec730957ec7af34211775df17a5012de754cebd2e153d28895f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7b28ebbc24ec730957ec7af34211775df17a5012de754cebd2e153d28895f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7b28ebbc24ec730957ec7af34211775df17a5012de754cebd2e153d28895f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:26:29 compute-0 podman[389881]: 2025-10-14 09:26:29.699113769 +0000 UTC m=+0.172665998 container init e5cf4ba068e1179d6204d5e7fc506ec5ddf399f1517d7dc90d06b347da07a2a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 09:26:29 compute-0 podman[389881]: 2025-10-14 09:26:29.708101961 +0000 UTC m=+0.181654130 container start e5cf4ba068e1179d6204d5e7fc506ec5ddf399f1517d7dc90d06b347da07a2a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_easley, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:26:29 compute-0 podman[389881]: 2025-10-14 09:26:29.712303675 +0000 UTC m=+0.185855924 container attach e5cf4ba068e1179d6204d5e7fc506ec5ddf399f1517d7dc90d06b347da07a2a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_easley, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 09:26:29 compute-0 nova_compute[259627]: 2025-10-14 09:26:29.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 14 09:26:30 compute-0 keen_easley[389897]: {
Oct 14 09:26:30 compute-0 keen_easley[389897]:     "0": [
Oct 14 09:26:30 compute-0 keen_easley[389897]:         {
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "devices": [
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "/dev/loop3"
Oct 14 09:26:30 compute-0 keen_easley[389897]:             ],
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "lv_name": "ceph_lv0",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "lv_size": "21470642176",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "name": "ceph_lv0",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "tags": {
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.cluster_name": "ceph",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.crush_device_class": "",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.encrypted": "0",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.osd_id": "0",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.type": "block",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.vdo": "0"
Oct 14 09:26:30 compute-0 keen_easley[389897]:             },
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "type": "block",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "vg_name": "ceph_vg0"
Oct 14 09:26:30 compute-0 keen_easley[389897]:         }
Oct 14 09:26:30 compute-0 keen_easley[389897]:     ],
Oct 14 09:26:30 compute-0 keen_easley[389897]:     "1": [
Oct 14 09:26:30 compute-0 keen_easley[389897]:         {
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "devices": [
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "/dev/loop4"
Oct 14 09:26:30 compute-0 keen_easley[389897]:             ],
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "lv_name": "ceph_lv1",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "lv_size": "21470642176",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "name": "ceph_lv1",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "tags": {
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.cluster_name": "ceph",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.crush_device_class": "",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.encrypted": "0",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.osd_id": "1",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.type": "block",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.vdo": "0"
Oct 14 09:26:30 compute-0 keen_easley[389897]:             },
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "type": "block",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "vg_name": "ceph_vg1"
Oct 14 09:26:30 compute-0 keen_easley[389897]:         }
Oct 14 09:26:30 compute-0 keen_easley[389897]:     ],
Oct 14 09:26:30 compute-0 keen_easley[389897]:     "2": [
Oct 14 09:26:30 compute-0 keen_easley[389897]:         {
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "devices": [
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "/dev/loop5"
Oct 14 09:26:30 compute-0 keen_easley[389897]:             ],
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "lv_name": "ceph_lv2",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "lv_size": "21470642176",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "name": "ceph_lv2",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "tags": {
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.cluster_name": "ceph",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.crush_device_class": "",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.encrypted": "0",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.osd_id": "2",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.type": "block",
Oct 14 09:26:30 compute-0 keen_easley[389897]:                 "ceph.vdo": "0"
Oct 14 09:26:30 compute-0 keen_easley[389897]:             },
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "type": "block",
Oct 14 09:26:30 compute-0 keen_easley[389897]:             "vg_name": "ceph_vg2"
Oct 14 09:26:30 compute-0 keen_easley[389897]:         }
Oct 14 09:26:30 compute-0 keen_easley[389897]:     ]
Oct 14 09:26:30 compute-0 keen_easley[389897]: }
Oct 14 09:26:30 compute-0 systemd[1]: libpod-e5cf4ba068e1179d6204d5e7fc506ec5ddf399f1517d7dc90d06b347da07a2a4.scope: Deactivated successfully.
Oct 14 09:26:30 compute-0 podman[389881]: 2025-10-14 09:26:30.440717513 +0000 UTC m=+0.914269712 container died e5cf4ba068e1179d6204d5e7fc506ec5ddf399f1517d7dc90d06b347da07a2a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_easley, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 09:26:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a7b28ebbc24ec730957ec7af34211775df17a5012de754cebd2e153d28895f6-merged.mount: Deactivated successfully.
Oct 14 09:26:30 compute-0 podman[389881]: 2025-10-14 09:26:30.505082903 +0000 UTC m=+0.978635062 container remove e5cf4ba068e1179d6204d5e7fc506ec5ddf399f1517d7dc90d06b347da07a2a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_easley, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:26:30 compute-0 systemd[1]: libpod-conmon-e5cf4ba068e1179d6204d5e7fc506ec5ddf399f1517d7dc90d06b347da07a2a4.scope: Deactivated successfully.
Oct 14 09:26:30 compute-0 sudo[389775]: pam_unix(sudo:session): session closed for user root
Oct 14 09:26:30 compute-0 sudo[389919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:26:30 compute-0 sudo[389919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:26:30 compute-0 sudo[389919]: pam_unix(sudo:session): session closed for user root
Oct 14 09:26:30 compute-0 sudo[389944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:26:30 compute-0 sudo[389944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:26:30 compute-0 sudo[389944]: pam_unix(sudo:session): session closed for user root
Oct 14 09:26:30 compute-0 sudo[389969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:26:30 compute-0 sudo[389969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:26:30 compute-0 sudo[389969]: pam_unix(sudo:session): session closed for user root
Oct 14 09:26:30 compute-0 sudo[389994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:26:30 compute-0 sudo[389994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:26:31 compute-0 podman[390057]: 2025-10-14 09:26:31.224138061 +0000 UTC m=+0.045496205 container create 903e901ee0180871cb39fc6c8acb8ac14781aeead9364fbf538bbd713668ec54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 09:26:31 compute-0 systemd[1]: Started libpod-conmon-903e901ee0180871cb39fc6c8acb8ac14781aeead9364fbf538bbd713668ec54.scope.
Oct 14 09:26:31 compute-0 nova_compute[259627]: 2025-10-14 09:26:31.278 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433976.2759538, f41def60-a7be-4154-86bc-ef63a639ee94 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:26:31 compute-0 nova_compute[259627]: 2025-10-14 09:26:31.280 2 INFO nova.compute.manager [-] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] VM Stopped (Lifecycle Event)
Oct 14 09:26:31 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:26:31 compute-0 podman[390057]: 2025-10-14 09:26:31.298777245 +0000 UTC m=+0.120135429 container init 903e901ee0180871cb39fc6c8acb8ac14781aeead9364fbf538bbd713668ec54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:26:31 compute-0 podman[390057]: 2025-10-14 09:26:31.2046777 +0000 UTC m=+0.026035894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:26:31 compute-0 nova_compute[259627]: 2025-10-14 09:26:31.300 2 DEBUG nova.compute.manager [None req-43748baa-03a5-427a-8826-3f0607419475 - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:26:31 compute-0 podman[390057]: 2025-10-14 09:26:31.310425253 +0000 UTC m=+0.131783407 container start 903e901ee0180871cb39fc6c8acb8ac14781aeead9364fbf538bbd713668ec54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:26:31 compute-0 podman[390057]: 2025-10-14 09:26:31.314223197 +0000 UTC m=+0.135581361 container attach 903e901ee0180871cb39fc6c8acb8ac14781aeead9364fbf538bbd713668ec54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:26:31 compute-0 eager_antonelli[390073]: 167 167
Oct 14 09:26:31 compute-0 systemd[1]: libpod-903e901ee0180871cb39fc6c8acb8ac14781aeead9364fbf538bbd713668ec54.scope: Deactivated successfully.
Oct 14 09:26:31 compute-0 podman[390057]: 2025-10-14 09:26:31.315941619 +0000 UTC m=+0.137299763 container died 903e901ee0180871cb39fc6c8acb8ac14781aeead9364fbf538bbd713668ec54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_antonelli, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 09:26:31 compute-0 nova_compute[259627]: 2025-10-14 09:26:31.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-dac19be301b6f6428f48fa64978b2e75bb65447bd7a9b5aa6b1f0a740c81d543-merged.mount: Deactivated successfully.
Oct 14 09:26:31 compute-0 podman[390057]: 2025-10-14 09:26:31.358635034 +0000 UTC m=+0.179993188 container remove 903e901ee0180871cb39fc6c8acb8ac14781aeead9364fbf538bbd713668ec54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_antonelli, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 09:26:31 compute-0 systemd[1]: libpod-conmon-903e901ee0180871cb39fc6c8acb8ac14781aeead9364fbf538bbd713668ec54.scope: Deactivated successfully.
Oct 14 09:26:31 compute-0 ceph-mon[74249]: pgmap v2201: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 14 09:26:31 compute-0 podman[390096]: 2025-10-14 09:26:31.601707901 +0000 UTC m=+0.075836875 container create ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 09:26:31 compute-0 systemd[1]: Started libpod-conmon-ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8.scope.
Oct 14 09:26:31 compute-0 podman[390096]: 2025-10-14 09:26:31.572446378 +0000 UTC m=+0.046575402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:26:31 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785d3e5cb7f499f15bf5e8ca373705f709d72123c74eed4d60af5b36eaab2487/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785d3e5cb7f499f15bf5e8ca373705f709d72123c74eed4d60af5b36eaab2487/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785d3e5cb7f499f15bf5e8ca373705f709d72123c74eed4d60af5b36eaab2487/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785d3e5cb7f499f15bf5e8ca373705f709d72123c74eed4d60af5b36eaab2487/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:26:31 compute-0 podman[390096]: 2025-10-14 09:26:31.707983197 +0000 UTC m=+0.182112151 container init ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:26:31 compute-0 podman[390096]: 2025-10-14 09:26:31.720528227 +0000 UTC m=+0.194657211 container start ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 09:26:31 compute-0 podman[390096]: 2025-10-14 09:26:31.72470505 +0000 UTC m=+0.198833994 container attach ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 14 09:26:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Oct 14 09:26:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:26:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:26:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:26:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:26:32 compute-0 modest_mclaren[390112]: {
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:         "osd_id": 2,
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:         "type": "bluestore"
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:     },
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:         "osd_id": 1,
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:         "type": "bluestore"
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:     },
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:         "osd_id": 0,
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:         "type": "bluestore"
Oct 14 09:26:32 compute-0 modest_mclaren[390112]:     }
Oct 14 09:26:32 compute-0 modest_mclaren[390112]: }
Oct 14 09:26:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:26:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:26:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:26:32
Oct 14 09:26:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:26:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:26:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.meta', 'backups', '.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.data']
Oct 14 09:26:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:26:32 compute-0 systemd[1]: libpod-ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8.scope: Deactivated successfully.
Oct 14 09:26:32 compute-0 systemd[1]: libpod-ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8.scope: Consumed 1.120s CPU time.
Oct 14 09:26:32 compute-0 conmon[390112]: conmon ac3d3340070fb8322e6b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8.scope/container/memory.events
Oct 14 09:26:32 compute-0 podman[390096]: 2025-10-14 09:26:32.839271301 +0000 UTC m=+1.313400235 container died ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 09:26:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-785d3e5cb7f499f15bf5e8ca373705f709d72123c74eed4d60af5b36eaab2487-merged.mount: Deactivated successfully.
Oct 14 09:26:32 compute-0 podman[390096]: 2025-10-14 09:26:32.918438668 +0000 UTC m=+1.392567622 container remove ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:26:32 compute-0 systemd[1]: libpod-conmon-ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8.scope: Deactivated successfully.
Oct 14 09:26:32 compute-0 sudo[389994]: pam_unix(sudo:session): session closed for user root
Oct 14 09:26:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:26:33 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:26:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:26:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:26:33 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:26:33 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 96edb838-5106-49b9-a085-3621c6e7fe11 does not exist
Oct 14 09:26:33 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 4d7357fd-a2ad-4e04-9a62-cb5a9dcf9ec9 does not exist
Oct 14 09:26:33 compute-0 sudo[390156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:26:33 compute-0 sudo[390156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:26:33 compute-0 sudo[390156]: pam_unix(sudo:session): session closed for user root
Oct 14 09:26:33 compute-0 sudo[390181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:26:33 compute-0 sudo[390181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:26:33 compute-0 sudo[390181]: pam_unix(sudo:session): session closed for user root
Oct 14 09:26:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:26:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:26:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:26:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:26:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:26:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:26:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:26:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:26:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:26:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:26:33 compute-0 ceph-mon[74249]: pgmap v2202: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Oct 14 09:26:33 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:26:33 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:26:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 09:26:34 compute-0 nova_compute[259627]: 2025-10-14 09:26:34.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:34 compute-0 nova_compute[259627]: 2025-10-14 09:26:34.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:35 compute-0 ceph-mon[74249]: pgmap v2203: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 09:26:36 compute-0 nova_compute[259627]: 2025-10-14 09:26:36.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 09:26:37 compute-0 nova_compute[259627]: 2025-10-14 09:26:37.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:37 compute-0 ceph-mon[74249]: pgmap v2204: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 09:26:37 compute-0 podman[390206]: 2025-10-14 09:26:37.714429693 +0000 UTC m=+0.107949258 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 14 09:26:37 compute-0 podman[390207]: 2025-10-14 09:26:37.735538515 +0000 UTC m=+0.126322792 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 09:26:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:26:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 09:26:38 compute-0 nova_compute[259627]: 2025-10-14 09:26:38.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:26:39 compute-0 ceph-mon[74249]: pgmap v2205: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 09:26:39 compute-0 nova_compute[259627]: 2025-10-14 09:26:39.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:39 compute-0 nova_compute[259627]: 2025-10-14 09:26:39.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:26:39 compute-0 nova_compute[259627]: 2025-10-14 09:26:39.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:26:40 compute-0 nova_compute[259627]: 2025-10-14 09:26:40.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:40 compute-0 nova_compute[259627]: 2025-10-14 09:26:40.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:40 compute-0 nova_compute[259627]: 2025-10-14 09:26:40.005 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:40 compute-0 nova_compute[259627]: 2025-10-14 09:26:40.005 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:26:40 compute-0 nova_compute[259627]: 2025-10-14 09:26:40.005 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Oct 14 09:26:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:26:40 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4044762309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:26:40 compute-0 nova_compute[259627]: 2025-10-14 09:26:40.470 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:40 compute-0 nova_compute[259627]: 2025-10-14 09:26:40.555 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:26:40 compute-0 nova_compute[259627]: 2025-10-14 09:26:40.555 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:26:40 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4044762309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:26:40 compute-0 nova_compute[259627]: 2025-10-14 09:26:40.714 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:26:40 compute-0 nova_compute[259627]: 2025-10-14 09:26:40.716 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3464MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:26:40 compute-0 nova_compute[259627]: 2025-10-14 09:26:40.716 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:40 compute-0 nova_compute[259627]: 2025-10-14 09:26:40.716 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:40 compute-0 nova_compute[259627]: 2025-10-14 09:26:40.837 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 4310595f-2280-438c-97ca-f2de57527501 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:26:40 compute-0 nova_compute[259627]: 2025-10-14 09:26:40.838 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:26:40 compute-0 nova_compute[259627]: 2025-10-14 09:26:40.838 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:26:40 compute-0 nova_compute[259627]: 2025-10-14 09:26:40.887 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:41 compute-0 nova_compute[259627]: 2025-10-14 09:26:41.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:26:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1840802512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:26:41 compute-0 nova_compute[259627]: 2025-10-14 09:26:41.359 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:41 compute-0 nova_compute[259627]: 2025-10-14 09:26:41.366 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:26:41 compute-0 nova_compute[259627]: 2025-10-14 09:26:41.419 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:26:41 compute-0 nova_compute[259627]: 2025-10-14 09:26:41.445 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:26:41 compute-0 nova_compute[259627]: 2025-10-14 09:26:41.446 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:41 compute-0 ceph-mon[74249]: pgmap v2206: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Oct 14 09:26:41 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1840802512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:26:42 compute-0 nova_compute[259627]: 2025-10-14 09:26:42.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 14 09:26:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007593366420850427 of space, bias 1.0, pg target 0.22780099262551282 quantized to 32 (current 32)
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:26:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:26:43 compute-0 nova_compute[259627]: 2025-10-14 09:26:43.446 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:26:43 compute-0 nova_compute[259627]: 2025-10-14 09:26:43.447 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:26:43 compute-0 ceph-mon[74249]: pgmap v2207: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.048 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.049 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.069 2 DEBUG nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.141 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.142 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.149 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.150 2 INFO nova.compute.claims [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.285 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Oct 14 09:26:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:26:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1661505640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.864 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.872 2 DEBUG nova.compute.provider_tree [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.894 2 DEBUG nova.scheduler.client.report [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.923 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.924 2 DEBUG nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.977 2 DEBUG nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.978 2 DEBUG nova.network.neutron [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.982 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.983 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:26:44 compute-0 nova_compute[259627]: 2025-10-14 09:26:44.999 2 INFO nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.017 2 DEBUG nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.115 2 DEBUG nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.118 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.119 2 INFO nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Creating image(s)
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.156 2 DEBUG nova.storage.rbd_utils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.197 2 DEBUG nova.storage.rbd_utils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.235 2 DEBUG nova.storage.rbd_utils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.239 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.308 2 DEBUG nova.policy [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.358 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.359 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.359 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.360 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.393 2 DEBUG nova.storage.rbd_utils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.399 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:45 compute-0 ceph-mon[74249]: pgmap v2208: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Oct 14 09:26:45 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1661505640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.693 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.742 2 DEBUG nova.storage.rbd_utils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.835 2 DEBUG nova.objects.instance [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid 2595dec0-9170-4e8f-a6bc-9179d30519a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.861 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.861 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Ensure instance console log exists: /var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.862 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.862 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.863 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.981 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.982 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:26:45 compute-0 nova_compute[259627]: 2025-10-14 09:26:45.982 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:26:46 compute-0 nova_compute[259627]: 2025-10-14 09:26:46.012 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 14 09:26:46 compute-0 nova_compute[259627]: 2025-10-14 09:26:46.213 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:26:46 compute-0 nova_compute[259627]: 2025-10-14 09:26:46.213 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:26:46 compute-0 nova_compute[259627]: 2025-10-14 09:26:46.214 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:26:46 compute-0 nova_compute[259627]: 2025-10-14 09:26:46.214 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4310595f-2280-438c-97ca-f2de57527501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:26:46 compute-0 nova_compute[259627]: 2025-10-14 09:26:46.268 2 DEBUG nova.network.neutron [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Successfully created port: 9ecc8f01-430d-4714-8d7e-e60d7edaa73c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:26:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Oct 14 09:26:46 compute-0 nova_compute[259627]: 2025-10-14 09:26:46.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:46 compute-0 nova_compute[259627]: 2025-10-14 09:26:46.754 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquiring lock "e16af982-3cd8-4600-99c4-aeec45986dda" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:46 compute-0 nova_compute[259627]: 2025-10-14 09:26:46.755 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:46 compute-0 nova_compute[259627]: 2025-10-14 09:26:46.776 2 DEBUG nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:26:46 compute-0 nova_compute[259627]: 2025-10-14 09:26:46.928 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:46 compute-0 nova_compute[259627]: 2025-10-14 09:26:46.928 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:46 compute-0 nova_compute[259627]: 2025-10-14 09:26:46.933 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:26:46 compute-0 nova_compute[259627]: 2025-10-14 09:26:46.934 2 INFO nova.compute.claims [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.099 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.393 2 DEBUG nova.network.neutron [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Successfully updated port: 9ecc8f01-430d-4714-8d7e-e60d7edaa73c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.411 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.411 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.412 2 DEBUG nova.network.neutron [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.499 2 DEBUG nova.compute.manager [req-d34cd1ae-0d29-4443-b8e6-1f8bcff80c79 req-cd1476ea-f3e5-4d49-a6a1-8afdcbae599c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.500 2 DEBUG nova.compute.manager [req-d34cd1ae-0d29-4443-b8e6-1f8bcff80c79 req-cd1476ea-f3e5-4d49-a6a1-8afdcbae599c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing instance network info cache due to event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.500 2 DEBUG oslo_concurrency.lockutils [req-d34cd1ae-0d29-4443-b8e6-1f8bcff80c79 req-cd1476ea-f3e5-4d49-a6a1-8afdcbae599c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:26:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:26:47 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1764751085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.585 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.594 2 DEBUG nova.compute.provider_tree [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.664 2 DEBUG nova.network.neutron [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:26:47 compute-0 ceph-mon[74249]: pgmap v2209: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Oct 14 09:26:47 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1764751085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.752 2 DEBUG nova.scheduler.client.report [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.786 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.788 2 DEBUG nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.838 2 DEBUG nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.839 2 DEBUG nova.network.neutron [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.856 2 INFO nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.873 2 DEBUG nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.969 2 DEBUG nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.972 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:26:47 compute-0 nova_compute[259627]: 2025-10-14 09:26:47.973 2 INFO nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Creating image(s)
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.008 2 DEBUG nova.storage.rbd_utils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] rbd image e16af982-3cd8-4600-99c4-aeec45986dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.033 2 DEBUG nova.storage.rbd_utils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] rbd image e16af982-3cd8-4600-99c4-aeec45986dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.056 2 DEBUG nova.storage.rbd_utils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] rbd image e16af982-3cd8-4600-99c4-aeec45986dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.060 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.163 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.164 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.165 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.165 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.192 2 DEBUG nova.storage.rbd_utils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] rbd image e16af982-3cd8-4600-99c4-aeec45986dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.197 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e16af982-3cd8-4600-99c4-aeec45986dda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.506 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e16af982-3cd8-4600-99c4-aeec45986dda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.592 2 DEBUG nova.storage.rbd_utils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] resizing rbd image e16af982-3cd8-4600-99c4-aeec45986dda_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.686 2 DEBUG nova.objects.instance [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lazy-loading 'migration_context' on Instance uuid e16af982-3cd8-4600-99c4-aeec45986dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.708 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.709 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Ensure instance console log exists: /var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.710 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.710 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.710 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.823 2 DEBUG nova.policy [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a250f9c11f864fb49faf97cbb4399ece', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '261a0f5c61f04b77863377f034e70f01', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.840 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updating instance_info_cache with network_info: [{"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.857 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:26:48 compute-0 nova_compute[259627]: 2025-10-14 09:26:48.857 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:26:49 compute-0 ceph-mon[74249]: pgmap v2210: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 14 09:26:49 compute-0 nova_compute[259627]: 2025-10-14 09:26:49.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 177 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.3 MiB/s wr, 26 op/s
Oct 14 09:26:50 compute-0 podman[390667]: 2025-10-14 09:26:50.684829394 +0000 UTC m=+0.078955912 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 14 09:26:50 compute-0 podman[390666]: 2025-10-14 09:26:50.747182594 +0000 UTC m=+0.144607964 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 14 09:26:50 compute-0 nova_compute[259627]: 2025-10-14 09:26:50.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:26:51 compute-0 nova_compute[259627]: 2025-10-14 09:26:51.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:51 compute-0 ceph-mon[74249]: pgmap v2211: 305 pgs: 305 active+clean; 177 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.3 MiB/s wr, 26 op/s
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.204 2 DEBUG nova.network.neutron [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updating instance_info_cache with network_info: [{"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.231 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.231 2 DEBUG nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Instance network_info: |[{"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.232 2 DEBUG oslo_concurrency.lockutils [req-d34cd1ae-0d29-4443-b8e6-1f8bcff80c79 req-cd1476ea-f3e5-4d49-a6a1-8afdcbae599c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.233 2 DEBUG nova.network.neutron [req-d34cd1ae-0d29-4443-b8e6-1f8bcff80c79 req-cd1476ea-f3e5-4d49-a6a1-8afdcbae599c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.239 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Start _get_guest_xml network_info=[{"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.246 2 WARNING nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.260 2 DEBUG nova.virt.libvirt.host [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.261 2 DEBUG nova.virt.libvirt.host [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.269 2 DEBUG nova.virt.libvirt.host [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.270 2 DEBUG nova.virt.libvirt.host [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.271 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.271 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.272 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.273 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.273 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.274 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.274 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.275 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.276 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.276 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.277 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.277 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.283 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.418 2 DEBUG nova.network.neutron [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Successfully created port: 563493a8-f727-4a25-97de-548a04398264 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:26:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:26:52 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2694355942' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.844 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.869 2 DEBUG nova.storage.rbd_utils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:52 compute-0 nova_compute[259627]: 2025-10-14 09:26:52.874 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:26:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:26:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3348820056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.362 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.364 2 DEBUG nova.virt.libvirt.vif [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:26:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-358564178',display_name='tempest-TestNetworkBasicOps-server-358564178',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-358564178',id=127,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHcVlyHKVFHxb0hriNyI1hppvpwNJ/aTRlLE7dBDeajB0uM5sP+bnasOk+ko2DL77CLK3QbWVr/+3RKN6o4h1D1BJ0FS9znP9UgUkNA33oyzkv3sPnYQc7bgh/xOganUMg==',key_name='tempest-TestNetworkBasicOps-1632567993',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-nbahr6ql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:26:45Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=2595dec0-9170-4e8f-a6bc-9179d30519a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.364 2 DEBUG nova.network.os_vif_util [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.365 2 DEBUG nova.network.os_vif_util [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:8f:64,bridge_name='br-int',has_traffic_filtering=True,id=9ecc8f01-430d-4714-8d7e-e60d7edaa73c,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc8f01-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.366 2 DEBUG nova.objects.instance [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2595dec0-9170-4e8f-a6bc-9179d30519a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.384 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:26:53 compute-0 nova_compute[259627]:   <uuid>2595dec0-9170-4e8f-a6bc-9179d30519a9</uuid>
Oct 14 09:26:53 compute-0 nova_compute[259627]:   <name>instance-0000007f</name>
Oct 14 09:26:53 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:26:53 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:26:53 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkBasicOps-server-358564178</nova:name>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:26:52</nova:creationTime>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:26:53 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:26:53 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:26:53 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:26:53 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:26:53 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:26:53 compute-0 nova_compute[259627]:         <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:26:53 compute-0 nova_compute[259627]:         <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:26:53 compute-0 nova_compute[259627]:         <nova:port uuid="9ecc8f01-430d-4714-8d7e-e60d7edaa73c">
Oct 14 09:26:53 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:26:53 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:26:53 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <system>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <entry name="serial">2595dec0-9170-4e8f-a6bc-9179d30519a9</entry>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <entry name="uuid">2595dec0-9170-4e8f-a6bc-9179d30519a9</entry>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     </system>
Oct 14 09:26:53 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:26:53 compute-0 nova_compute[259627]:   <os>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:   </os>
Oct 14 09:26:53 compute-0 nova_compute[259627]:   <features>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:   </features>
Oct 14 09:26:53 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:26:53 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:26:53 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2595dec0-9170-4e8f-a6bc-9179d30519a9_disk">
Oct 14 09:26:53 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       </source>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:26:53 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2595dec0-9170-4e8f-a6bc-9179d30519a9_disk.config">
Oct 14 09:26:53 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       </source>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:26:53 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:50:8f:64"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <target dev="tap9ecc8f01-43"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9/console.log" append="off"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <video>
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     </video>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:26:53 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:26:53 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:26:53 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:26:53 compute-0 nova_compute[259627]: </domain>
Oct 14 09:26:53 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.385 2 DEBUG nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Preparing to wait for external event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.385 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.386 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.386 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.386 2 DEBUG nova.virt.libvirt.vif [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:26:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-358564178',display_name='tempest-TestNetworkBasicOps-server-358564178',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-358564178',id=127,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHcVlyHKVFHxb0hriNyI1hppvpwNJ/aTRlLE7dBDeajB0uM5sP+bnasOk+ko2DL77CLK3QbWVr/+3RKN6o4h1D1BJ0FS9znP9UgUkNA33oyzkv3sPnYQc7bgh/xOganUMg==',key_name='tempest-TestNetworkBasicOps-1632567993',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-nbahr6ql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:26:45Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=2595dec0-9170-4e8f-a6bc-9179d30519a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.387 2 DEBUG nova.network.os_vif_util [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.387 2 DEBUG nova.network.os_vif_util [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:8f:64,bridge_name='br-int',has_traffic_filtering=True,id=9ecc8f01-430d-4714-8d7e-e60d7edaa73c,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc8f01-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.388 2 DEBUG os_vif [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:8f:64,bridge_name='br-int',has_traffic_filtering=True,id=9ecc8f01-430d-4714-8d7e-e60d7edaa73c,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc8f01-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.389 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.389 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.392 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ecc8f01-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9ecc8f01-43, col_values=(('external_ids', {'iface-id': '9ecc8f01-430d-4714-8d7e-e60d7edaa73c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:8f:64', 'vm-uuid': '2595dec0-9170-4e8f-a6bc-9179d30519a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:53 compute-0 NetworkManager[44885]: <info>  [1760434013.4432] manager: (tap9ecc8f01-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/552)
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.453 2 INFO os_vif [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:8f:64,bridge_name='br-int',has_traffic_filtering=True,id=9ecc8f01-430d-4714-8d7e-e60d7edaa73c,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc8f01-43')
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.505 2 DEBUG nova.network.neutron [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Successfully updated port: 563493a8-f727-4a25-97de-548a04398264 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.508 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.509 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.509 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:50:8f:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.509 2 INFO nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Using config drive
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.531 2 DEBUG nova.storage.rbd_utils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.541 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquiring lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.541 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquired lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.542 2 DEBUG nova.network.neutron [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:26:53 compute-0 ceph-mon[74249]: pgmap v2212: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 14 09:26:53 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2694355942' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:26:53 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3348820056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.744 2 DEBUG nova.network.neutron [req-d34cd1ae-0d29-4443-b8e6-1f8bcff80c79 req-cd1476ea-f3e5-4d49-a6a1-8afdcbae599c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updated VIF entry in instance network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.745 2 DEBUG nova.network.neutron [req-d34cd1ae-0d29-4443-b8e6-1f8bcff80c79 req-cd1476ea-f3e5-4d49-a6a1-8afdcbae599c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updating instance_info_cache with network_info: [{"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.758 2 DEBUG nova.network.neutron [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.762 2 DEBUG oslo_concurrency.lockutils [req-d34cd1ae-0d29-4443-b8e6-1f8bcff80c79 req-cd1476ea-f3e5-4d49-a6a1-8afdcbae599c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.860 2 INFO nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Creating config drive at /var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9/disk.config
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.866 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplkwk9554 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.934 2 DEBUG nova.compute.manager [req-d689539d-b74f-4bf9-bd5a-c63b00bfb149 req-8267b5b1-b282-45ae-bd52-c9a1f10587ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received event network-changed-563493a8-f727-4a25-97de-548a04398264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.935 2 DEBUG nova.compute.manager [req-d689539d-b74f-4bf9-bd5a-c63b00bfb149 req-8267b5b1-b282-45ae-bd52-c9a1f10587ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Refreshing instance network info cache due to event network-changed-563493a8-f727-4a25-97de-548a04398264. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.936 2 DEBUG oslo_concurrency.lockutils [req-d689539d-b74f-4bf9-bd5a-c63b00bfb149 req-8267b5b1-b282-45ae-bd52-c9a1f10587ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:26:53 compute-0 nova_compute[259627]: 2025-10-14 09:26:53.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:26:54 compute-0 nova_compute[259627]: 2025-10-14 09:26:54.034 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplkwk9554" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:54 compute-0 nova_compute[259627]: 2025-10-14 09:26:54.075 2 DEBUG nova.storage.rbd_utils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:54 compute-0 nova_compute[259627]: 2025-10-14 09:26:54.081 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9/disk.config 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:54 compute-0 nova_compute[259627]: 2025-10-14 09:26:54.285 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9/disk.config 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:54 compute-0 nova_compute[259627]: 2025-10-14 09:26:54.287 2 INFO nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Deleting local config drive /var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9/disk.config because it was imported into RBD.
Oct 14 09:26:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 14 09:26:54 compute-0 kernel: tap9ecc8f01-43: entered promiscuous mode
Oct 14 09:26:54 compute-0 NetworkManager[44885]: <info>  [1760434014.3656] manager: (tap9ecc8f01-43): new Tun device (/org/freedesktop/NetworkManager/Devices/553)
Oct 14 09:26:54 compute-0 ovn_controller[152662]: 2025-10-14T09:26:54Z|01351|binding|INFO|Claiming lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c for this chassis.
Oct 14 09:26:54 compute-0 ovn_controller[152662]: 2025-10-14T09:26:54Z|01352|binding|INFO|9ecc8f01-430d-4714-8d7e-e60d7edaa73c: Claiming fa:16:3e:50:8f:64 10.100.0.3
Oct 14 09:26:54 compute-0 nova_compute[259627]: 2025-10-14 09:26:54.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.378 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:8f:64 10.100.0.3'], port_security=['fa:16:3e:50:8f:64 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2595dec0-9170-4e8f-a6bc-9179d30519a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44344b65-f325-470a-bd36-6f52ed03d317', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8e192907-665a-4f92-bc1f-6ecbfbe8292b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5dceacdb-1a50-419e-9ee9-f149e7094b34, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=9ecc8f01-430d-4714-8d7e-e60d7edaa73c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.380 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c in datapath 44344b65-f325-470a-bd36-6f52ed03d317 bound to our chassis
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.382 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 44344b65-f325-470a-bd36-6f52ed03d317
Oct 14 09:26:54 compute-0 ovn_controller[152662]: 2025-10-14T09:26:54Z|01353|binding|INFO|Setting lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c ovn-installed in OVS
Oct 14 09:26:54 compute-0 ovn_controller[152662]: 2025-10-14T09:26:54Z|01354|binding|INFO|Setting lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c up in Southbound
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.399 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cf925185-b958-40aa-a237-a69cdd70526d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.400 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap44344b65-f1 in ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.402 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap44344b65-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.402 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2b2233d6-af71-4ca3-9aff-2819c2d36314]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:54 compute-0 nova_compute[259627]: 2025-10-14 09:26:54.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.406 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[633b5405-ce9a-4de5-a209-dd99b3b630bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:54 compute-0 nova_compute[259627]: 2025-10-14 09:26:54.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:54 compute-0 systemd-machined[214636]: New machine qemu-160-instance-0000007f.
Oct 14 09:26:54 compute-0 systemd-udevd[390845]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.429 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[71069dc2-eb44-4bdd-a992-968ebd23a7c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:54 compute-0 systemd[1]: Started Virtual Machine qemu-160-instance-0000007f.
Oct 14 09:26:54 compute-0 NetworkManager[44885]: <info>  [1760434014.4427] device (tap9ecc8f01-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:26:54 compute-0 NetworkManager[44885]: <info>  [1760434014.4445] device (tap9ecc8f01-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.460 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8eca1783-e28b-46e6-8745-79d00c14446c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.499 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2b9e9b71-bb36-4006-bfc7-1342b4a81239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.507 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[88074195-1718-4bae-ab33-bec8998498bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:54 compute-0 NetworkManager[44885]: <info>  [1760434014.5090] manager: (tap44344b65-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/554)
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.557 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c58a12-7702-46a2-ab64-859590dd4fb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.560 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[00a1b721-995e-4b6d-9fa0-fb61e233b571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:54 compute-0 NetworkManager[44885]: <info>  [1760434014.5971] device (tap44344b65-f0): carrier: link connected
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.605 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[854935db-6a96-43c0-a8ad-d5c48fd6f949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.630 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a5ad0b-c412-4f8a-9466-7db0af579d8b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44344b65-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:40:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779335, 'reachable_time': 36403, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390876, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.651 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6e67e97a-a53e-422e-8fd4-85e470f7a6a6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:4026'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 779335, 'tstamp': 779335}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390877, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.676 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5b2ce37f-5970-4eee-b1aa-534943ba823a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44344b65-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:40:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779335, 'reachable_time': 36403, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 390878, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.714 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0bacf69e-48e5-4a4b-a601-4f44e64fc80e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.806 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a46c82-4f60-4a45-9620-77cde25f87c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.808 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44344b65-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.809 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.810 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44344b65-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:54 compute-0 kernel: tap44344b65-f0: entered promiscuous mode
Oct 14 09:26:54 compute-0 nova_compute[259627]: 2025-10-14 09:26:54.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:54 compute-0 NetworkManager[44885]: <info>  [1760434014.8135] manager: (tap44344b65-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/555)
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.817 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap44344b65-f0, col_values=(('external_ids', {'iface-id': 'dabdfa0b-267a-4754-a026-601ab2593a32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:54 compute-0 nova_compute[259627]: 2025-10-14 09:26:54.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:54 compute-0 ovn_controller[152662]: 2025-10-14T09:26:54Z|01355|binding|INFO|Releasing lport dabdfa0b-267a-4754-a026-601ab2593a32 from this chassis (sb_readonly=0)
Oct 14 09:26:54 compute-0 nova_compute[259627]: 2025-10-14 09:26:54.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.851 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/44344b65-f325-470a-bd36-6f52ed03d317.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/44344b65-f325-470a-bd36-6f52ed03d317.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.852 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[efc51777-122d-43ca-a09e-2c3d8df2cc6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.853 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-44344b65-f325-470a-bd36-6f52ed03d317
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/44344b65-f325-470a-bd36-6f52ed03d317.pid.haproxy
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 44344b65-f325-470a-bd36-6f52ed03d317
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:26:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.854 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'env', 'PROCESS_TAG=haproxy-44344b65-f325-470a-bd36-6f52ed03d317', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/44344b65-f325-470a-bd36-6f52ed03d317.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:26:54 compute-0 nova_compute[259627]: 2025-10-14 09:26:54.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.302 2 DEBUG nova.network.neutron [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Updating instance_info_cache with network_info: [{"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:26:55 compute-0 podman[390952]: 2025-10-14 09:26:55.325666665 +0000 UTC m=+0.081213948 container create 5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.335 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Releasing lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.336 2 DEBUG nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Instance network_info: |[{"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.337 2 DEBUG oslo_concurrency.lockutils [req-d689539d-b74f-4bf9-bd5a-c63b00bfb149 req-8267b5b1-b282-45ae-bd52-c9a1f10587ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.338 2 DEBUG nova.network.neutron [req-d689539d-b74f-4bf9-bd5a-c63b00bfb149 req-8267b5b1-b282-45ae-bd52-c9a1f10587ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Refreshing network info cache for port 563493a8-f727-4a25-97de-548a04398264 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.341 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Start _get_guest_xml network_info=[{"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.346 2 WARNING nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.351 2 DEBUG nova.virt.libvirt.host [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.352 2 DEBUG nova.virt.libvirt.host [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.361 2 DEBUG nova.virt.libvirt.host [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.362 2 DEBUG nova.virt.libvirt.host [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.363 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.363 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.364 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.364 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.365 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.365 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.365 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.366 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.366 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.367 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.367 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.367 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.371 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:55 compute-0 podman[390952]: 2025-10-14 09:26:55.278859198 +0000 UTC m=+0.034406571 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:26:55 compute-0 systemd[1]: Started libpod-conmon-5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03.scope.
Oct 14 09:26:55 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d2b1df47e436b2c8cb55d0e32cd763db11840ac3a9a7beaf43710acbfe9e885/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:26:55 compute-0 podman[390952]: 2025-10-14 09:26:55.428482975 +0000 UTC m=+0.184030288 container init 5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 14 09:26:55 compute-0 podman[390952]: 2025-10-14 09:26:55.441728412 +0000 UTC m=+0.197275695 container start 5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:26:55 compute-0 neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317[390967]: [NOTICE]   (390972) : New worker (390974) forked
Oct 14 09:26:55 compute-0 neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317[390967]: [NOTICE]   (390972) : Loading success.
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.555 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434015.555475, 2595dec0-9170-4e8f-a6bc-9179d30519a9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.557 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] VM Started (Lifecycle Event)
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.581 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.585 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434015.5557122, 2595dec0-9170-4e8f-a6bc-9179d30519a9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.585 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] VM Paused (Lifecycle Event)
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.600 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.603 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.621 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:26:55 compute-0 ceph-mon[74249]: pgmap v2213: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 14 09:26:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:26:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1543092115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.822 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.848 2 DEBUG nova.storage.rbd_utils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] rbd image e16af982-3cd8-4600-99c4-aeec45986dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:55 compute-0 nova_compute[259627]: 2025-10-14 09:26:55.854 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.060 2 DEBUG nova.compute.manager [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.062 2 DEBUG oslo_concurrency.lockutils [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.064 2 DEBUG oslo_concurrency.lockutils [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.064 2 DEBUG oslo_concurrency.lockutils [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.065 2 DEBUG nova.compute.manager [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Processing event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.065 2 DEBUG nova.compute.manager [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.065 2 DEBUG oslo_concurrency.lockutils [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.066 2 DEBUG oslo_concurrency.lockutils [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.066 2 DEBUG oslo_concurrency.lockutils [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.067 2 DEBUG nova.compute.manager [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.067 2 WARNING nova.compute.manager [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state building and task_state spawning.
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.068 2 DEBUG nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.073 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434016.0731921, 2595dec0-9170-4e8f-a6bc-9179d30519a9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.074 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] VM Resumed (Lifecycle Event)
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.076 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.081 2 INFO nova.virt.libvirt.driver [-] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Instance spawned successfully.
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.082 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.112 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.122 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.129 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.130 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.131 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.131 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.132 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.133 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.190 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.254 2 INFO nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Took 11.14 seconds to spawn the instance on the hypervisor.
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.255 2 DEBUG nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:26:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:26:56 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2707962533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.278 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.280 2 DEBUG nova.virt.libvirt.vif [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:26:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-211623791-access_point-467670451',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-211623791-access_point-467670451',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-211623791-acc',id=128,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KjYB8TOdxJDBX1/D4wnlTYCVpAUwpAx9R+OpzTEM4NvI2eIFjA8TxpNcbv510O1pJN+wHpAyy5P1FA9H3PUquH/ijntIYtfRD9NCYpHRGTApBy/zfs0NfRBu1//Wkcw==',key_name='tempest-TestSecurityGroupsBasicOps-887135051',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='261a0f5c61f04b77863377f034e70f01',ramdisk_id='',reservation_id='r-95hbfz6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-211623791',owner_user_name='tempest-TestSecurityGroupsBasicOps-211623791-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:26:47Z,user_data=None,user_id='a250f9c11f864fb49faf97cbb4399ece',uuid=e16af982-3cd8-4600-99c4-aeec45986dda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.280 2 DEBUG nova.network.os_vif_util [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Converting VIF {"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.282 2 DEBUG nova.network.os_vif_util [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:ee:f5,bridge_name='br-int',has_traffic_filtering=True,id=563493a8-f727-4a25-97de-548a04398264,network=Network(d00461c7-a787-45ae-8db1-11ba8f94e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap563493a8-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.284 2 DEBUG nova.objects.instance [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lazy-loading 'pci_devices' on Instance uuid e16af982-3cd8-4600-99c4-aeec45986dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.310 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:26:56 compute-0 nova_compute[259627]:   <uuid>e16af982-3cd8-4600-99c4-aeec45986dda</uuid>
Oct 14 09:26:56 compute-0 nova_compute[259627]:   <name>instance-00000080</name>
Oct 14 09:26:56 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:26:56 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:26:56 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-211623791-access_point-467670451</nova:name>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:26:55</nova:creationTime>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:26:56 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:26:56 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:26:56 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:26:56 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:26:56 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:26:56 compute-0 nova_compute[259627]:         <nova:user uuid="a250f9c11f864fb49faf97cbb4399ece">tempest-TestSecurityGroupsBasicOps-211623791-project-member</nova:user>
Oct 14 09:26:56 compute-0 nova_compute[259627]:         <nova:project uuid="261a0f5c61f04b77863377f034e70f01">tempest-TestSecurityGroupsBasicOps-211623791</nova:project>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:26:56 compute-0 nova_compute[259627]:         <nova:port uuid="563493a8-f727-4a25-97de-548a04398264">
Oct 14 09:26:56 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:26:56 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:26:56 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <system>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <entry name="serial">e16af982-3cd8-4600-99c4-aeec45986dda</entry>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <entry name="uuid">e16af982-3cd8-4600-99c4-aeec45986dda</entry>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     </system>
Oct 14 09:26:56 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:26:56 compute-0 nova_compute[259627]:   <os>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:   </os>
Oct 14 09:26:56 compute-0 nova_compute[259627]:   <features>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:   </features>
Oct 14 09:26:56 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:26:56 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:26:56 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e16af982-3cd8-4600-99c4-aeec45986dda_disk">
Oct 14 09:26:56 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       </source>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:26:56 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e16af982-3cd8-4600-99c4-aeec45986dda_disk.config">
Oct 14 09:26:56 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       </source>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:26:56 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:8d:ee:f5"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <target dev="tap563493a8-f7"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda/console.log" append="off"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <video>
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     </video>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:26:56 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:26:56 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:26:56 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:26:56 compute-0 nova_compute[259627]: </domain>
Oct 14 09:26:56 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.313 2 DEBUG nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Preparing to wait for external event network-vif-plugged-563493a8-f727-4a25-97de-548a04398264 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.313 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquiring lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.314 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.314 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.316 2 DEBUG nova.virt.libvirt.vif [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:26:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-211623791-access_point-467670451',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-211623791-access_point-467670451',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-211623791-acc',id=128,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KjYB8TOdxJDBX1/D4wnlTYCVpAUwpAx9R+OpzTEM4NvI2eIFjA8TxpNcbv510O1pJN+wHpAyy5P1FA9H3PUquH/ijntIYtfRD9NCYpHRGTApBy/zfs0NfRBu1//Wkcw==',key_name='tempest-TestSecurityGroupsBasicOps-887135051',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='261a0f5c61f04b77863377f034e70f01',ramdisk_id='',reservation_id='r-95hbfz6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-211623791',owner_user_name='tempest-TestSecurityGroupsBasicOps-211623791-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:26:47Z,user_data=None,user_id='a250f9c11f864fb49faf97cbb4399ece',uuid=e16af982-3cd8-4600-99c4-aeec45986dda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.317 2 DEBUG nova.network.os_vif_util [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Converting VIF {"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.318 2 DEBUG nova.network.os_vif_util [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:ee:f5,bridge_name='br-int',has_traffic_filtering=True,id=563493a8-f727-4a25-97de-548a04398264,network=Network(d00461c7-a787-45ae-8db1-11ba8f94e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap563493a8-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.319 2 DEBUG os_vif [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:ee:f5,bridge_name='br-int',has_traffic_filtering=True,id=563493a8-f727-4a25-97de-548a04398264,network=Network(d00461c7-a787-45ae-8db1-11ba8f94e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap563493a8-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.321 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.322 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:26:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap563493a8-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.336 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap563493a8-f7, col_values=(('external_ids', {'iface-id': '563493a8-f727-4a25-97de-548a04398264', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:ee:f5', 'vm-uuid': 'e16af982-3cd8-4600-99c4-aeec45986dda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:56 compute-0 NetworkManager[44885]: <info>  [1760434016.3397] manager: (tap563493a8-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/556)
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.340 2 INFO nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Took 12.22 seconds to build instance.
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.346 2 INFO os_vif [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:ee:f5,bridge_name='br-int',has_traffic_filtering=True,id=563493a8-f727-4a25-97de-548a04398264,network=Network(d00461c7-a787-45ae-8db1-11ba8f94e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap563493a8-f7')
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.356 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.409 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.410 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.410 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] No VIF found with MAC fa:16:3e:8d:ee:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.411 2 INFO nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Using config drive
Oct 14 09:26:56 compute-0 nova_compute[259627]: 2025-10-14 09:26:56.436 2 DEBUG nova.storage.rbd_utils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] rbd image e16af982-3cd8-4600-99c4-aeec45986dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:56 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1543092115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:26:56 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2707962533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:26:57 compute-0 nova_compute[259627]: 2025-10-14 09:26:57.028 2 INFO nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Creating config drive at /var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda/disk.config
Oct 14 09:26:57 compute-0 nova_compute[259627]: 2025-10-14 09:26:57.037 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyfpu45sa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:57 compute-0 nova_compute[259627]: 2025-10-14 09:26:57.201 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyfpu45sa" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:57 compute-0 nova_compute[259627]: 2025-10-14 09:26:57.248 2 DEBUG nova.storage.rbd_utils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] rbd image e16af982-3cd8-4600-99c4-aeec45986dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:26:57 compute-0 nova_compute[259627]: 2025-10-14 09:26:57.255 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda/disk.config e16af982-3cd8-4600-99c4-aeec45986dda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:26:57 compute-0 nova_compute[259627]: 2025-10-14 09:26:57.340 2 DEBUG nova.network.neutron [req-d689539d-b74f-4bf9-bd5a-c63b00bfb149 req-8267b5b1-b282-45ae-bd52-c9a1f10587ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Updated VIF entry in instance network info cache for port 563493a8-f727-4a25-97de-548a04398264. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:26:57 compute-0 nova_compute[259627]: 2025-10-14 09:26:57.341 2 DEBUG nova.network.neutron [req-d689539d-b74f-4bf9-bd5a-c63b00bfb149 req-8267b5b1-b282-45ae-bd52-c9a1f10587ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Updating instance_info_cache with network_info: [{"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:26:57 compute-0 nova_compute[259627]: 2025-10-14 09:26:57.358 2 DEBUG oslo_concurrency.lockutils [req-d689539d-b74f-4bf9-bd5a-c63b00bfb149 req-8267b5b1-b282-45ae-bd52-c9a1f10587ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:26:57 compute-0 nova_compute[259627]: 2025-10-14 09:26:57.450 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda/disk.config e16af982-3cd8-4600-99c4-aeec45986dda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:26:57 compute-0 nova_compute[259627]: 2025-10-14 09:26:57.450 2 INFO nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Deleting local config drive /var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda/disk.config because it was imported into RBD.
Oct 14 09:26:57 compute-0 kernel: tap563493a8-f7: entered promiscuous mode
Oct 14 09:26:57 compute-0 NetworkManager[44885]: <info>  [1760434017.4971] manager: (tap563493a8-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/557)
Oct 14 09:26:57 compute-0 nova_compute[259627]: 2025-10-14 09:26:57.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:57 compute-0 ovn_controller[152662]: 2025-10-14T09:26:57Z|01356|binding|INFO|Claiming lport 563493a8-f727-4a25-97de-548a04398264 for this chassis.
Oct 14 09:26:57 compute-0 ovn_controller[152662]: 2025-10-14T09:26:57Z|01357|binding|INFO|563493a8-f727-4a25-97de-548a04398264: Claiming fa:16:3e:8d:ee:f5 10.100.0.8
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.508 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:ee:f5 10.100.0.8'], port_security=['fa:16:3e:8d:ee:f5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e16af982-3cd8-4600-99c4-aeec45986dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d00461c7-a787-45ae-8db1-11ba8f94e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '261a0f5c61f04b77863377f034e70f01', 'neutron:revision_number': '2', 'neutron:security_group_ids': '662e95f2-6ec1-4f95-993e-421550aa8e7c 6cefb658-a903-42cb-a2fb-84294f9eff2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=789ab81e-5f79-4555-8f72-bf440f2a44f6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=563493a8-f727-4a25-97de-548a04398264) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.510 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 563493a8-f727-4a25-97de-548a04398264 in datapath d00461c7-a787-45ae-8db1-11ba8f94e301 bound to our chassis
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.511 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d00461c7-a787-45ae-8db1-11ba8f94e301
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.523 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[85b0aef0-2bc7-4130-a2ff-92bceb7df645]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.523 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd00461c7-a1 in ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.527 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd00461c7-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.527 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[46b578cf-5a43-4be4-ba2c-08526ca8072a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.528 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2085ce47-fc73-42b9-9f20-e6b6f52db8d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:57 compute-0 systemd-machined[214636]: New machine qemu-161-instance-00000080.
Oct 14 09:26:57 compute-0 ovn_controller[152662]: 2025-10-14T09:26:57Z|01358|binding|INFO|Setting lport 563493a8-f727-4a25-97de-548a04398264 ovn-installed in OVS
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.541 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[5585c2c1-5a00-46a6-9d8b-4ab8c040f006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:57 compute-0 ovn_controller[152662]: 2025-10-14T09:26:57Z|01359|binding|INFO|Setting lport 563493a8-f727-4a25-97de-548a04398264 up in Southbound
Oct 14 09:26:57 compute-0 nova_compute[259627]: 2025-10-14 09:26:57.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:57 compute-0 nova_compute[259627]: 2025-10-14 09:26:57.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:57 compute-0 systemd[1]: Started Virtual Machine qemu-161-instance-00000080.
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.554 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b96b433d-117b-4cd0-9d6f-f4bd77b35d59]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:57 compute-0 systemd-udevd[391122]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:26:57 compute-0 NetworkManager[44885]: <info>  [1760434017.5828] device (tap563493a8-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:26:57 compute-0 NetworkManager[44885]: <info>  [1760434017.5837] device (tap563493a8-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.596 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec78492-9916-4ce3-bbc4-54c1175dc5b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.601 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[187bf499-5796-46be-8b06-c1954185f231]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:57 compute-0 NetworkManager[44885]: <info>  [1760434017.6029] manager: (tapd00461c7-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/558)
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.636 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[79ac8d7f-cb93-4cdb-a281-02211f5b11b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.640 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[53e22cf1-6b22-4c25-b59a-a4e15fc0dd7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:57 compute-0 NetworkManager[44885]: <info>  [1760434017.6585] device (tapd00461c7-a0): carrier: link connected
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.662 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aa637a32-d32e-4970-a7bb-d6777f3eedd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.677 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1b050dad-9e69-4a3b-adbb-f67c728f5d9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd00461c7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:a4:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779642, 'reachable_time': 22653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391151, 'error': None, 'target': 'ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.696 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[011e1a87-3a7b-4eda-be96-f9ed47f3d064]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea2:a47e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 779642, 'tstamp': 779642}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391152, 'error': None, 'target': 'ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.711 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6742087a-badd-4eef-b7c4-046a4ab36f53]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd00461c7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:a4:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779642, 'reachable_time': 22653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391153, 'error': None, 'target': 'ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:57 compute-0 ceph-mon[74249]: pgmap v2214: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.750 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[89e1e5f5-4a71-4ba6-8f55-3a870add7178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.795 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d21996-ed8a-4e55-8b13-e0a21405bda0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.796 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd00461c7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.797 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.797 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd00461c7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:57 compute-0 kernel: tapd00461c7-a0: entered promiscuous mode
Oct 14 09:26:57 compute-0 nova_compute[259627]: 2025-10-14 09:26:57.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:57 compute-0 NetworkManager[44885]: <info>  [1760434017.7999] manager: (tapd00461c7-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/559)
Oct 14 09:26:57 compute-0 nova_compute[259627]: 2025-10-14 09:26:57.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.803 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd00461c7-a0, col_values=(('external_ids', {'iface-id': '0b564910-dd70-44c6-926f-33b940d515bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:26:57 compute-0 nova_compute[259627]: 2025-10-14 09:26:57.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:57 compute-0 ovn_controller[152662]: 2025-10-14T09:26:57Z|01360|binding|INFO|Releasing lport 0b564910-dd70-44c6-926f-33b940d515bb from this chassis (sb_readonly=0)
Oct 14 09:26:57 compute-0 nova_compute[259627]: 2025-10-14 09:26:57.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:57 compute-0 nova_compute[259627]: 2025-10-14 09:26:57.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.822 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d00461c7-a787-45ae-8db1-11ba8f94e301.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d00461c7-a787-45ae-8db1-11ba8f94e301.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.823 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[104f84ec-6e43-438b-b386-f408fc190c63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.824 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-d00461c7-a787-45ae-8db1-11ba8f94e301
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/d00461c7-a787-45ae-8db1-11ba8f94e301.pid.haproxy
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID d00461c7-a787-45ae-8db1-11ba8f94e301
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:26:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.826 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301', 'env', 'PROCESS_TAG=haproxy-d00461c7-a787-45ae-8db1-11ba8f94e301', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d00461c7-a787-45ae-8db1-11ba8f94e301.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:26:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.149 2 DEBUG nova.compute.manager [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received event network-vif-plugged-563493a8-f727-4a25-97de-548a04398264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.149 2 DEBUG oslo_concurrency.lockutils [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.150 2 DEBUG oslo_concurrency.lockutils [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.150 2 DEBUG oslo_concurrency.lockutils [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.150 2 DEBUG nova.compute.manager [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Processing event network-vif-plugged-563493a8-f727-4a25-97de-548a04398264 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.150 2 DEBUG nova.compute.manager [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received event network-vif-plugged-563493a8-f727-4a25-97de-548a04398264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.150 2 DEBUG oslo_concurrency.lockutils [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.151 2 DEBUG oslo_concurrency.lockutils [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.151 2 DEBUG oslo_concurrency.lockutils [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.151 2 DEBUG nova.compute.manager [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] No waiting events found dispatching network-vif-plugged-563493a8-f727-4a25-97de-548a04398264 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.151 2 WARNING nova.compute.manager [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received unexpected event network-vif-plugged-563493a8-f727-4a25-97de-548a04398264 for instance with vm_state building and task_state spawning.
Oct 14 09:26:58 compute-0 podman[391192]: 2025-10-14 09:26:58.242527869 +0000 UTC m=+0.053978615 container create 05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:26:58 compute-0 systemd[1]: Started libpod-conmon-05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18.scope.
Oct 14 09:26:58 compute-0 podman[391192]: 2025-10-14 09:26:58.219548391 +0000 UTC m=+0.030999147 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:26:58 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:26:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Oct 14 09:26:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1209d6b30b19e606c66aff50aa191b9774b9d3e66e60de7f9a75b20180f215f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:26:58 compute-0 podman[391192]: 2025-10-14 09:26:58.347442122 +0000 UTC m=+0.158892888 container init 05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 09:26:58 compute-0 podman[391192]: 2025-10-14 09:26:58.355195873 +0000 UTC m=+0.166646609 container start 05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:26:58 compute-0 neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301[391242]: [NOTICE]   (391246) : New worker (391248) forked
Oct 14 09:26:58 compute-0 neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301[391242]: [NOTICE]   (391246) : Loading success.
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.755 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434018.7549312, e16af982-3cd8-4600-99c4-aeec45986dda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.756 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] VM Started (Lifecycle Event)
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.762 2 DEBUG nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.766 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.773 2 INFO nova.virt.libvirt.driver [-] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Instance spawned successfully.
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.774 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.812 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.819 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.828 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.829 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.829 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.830 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.831 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.832 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.840 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.841 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434018.7550426, e16af982-3cd8-4600-99c4-aeec45986dda => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.841 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] VM Paused (Lifecycle Event)
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.873 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.877 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434018.766276, e16af982-3cd8-4600-99c4-aeec45986dda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.877 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] VM Resumed (Lifecycle Event)
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.902 2 DEBUG nova.compute.manager [req-5918d541-2227-48d9-8924-cf03e1616a43 req-a6ac49db-08f6-40cf-9d31-5001040108ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.902 2 DEBUG nova.compute.manager [req-5918d541-2227-48d9-8924-cf03e1616a43 req-a6ac49db-08f6-40cf-9d31-5001040108ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing instance network info cache due to event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.903 2 DEBUG oslo_concurrency.lockutils [req-5918d541-2227-48d9-8924-cf03e1616a43 req-a6ac49db-08f6-40cf-9d31-5001040108ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.903 2 DEBUG oslo_concurrency.lockutils [req-5918d541-2227-48d9-8924-cf03e1616a43 req-a6ac49db-08f6-40cf-9d31-5001040108ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.904 2 DEBUG nova.network.neutron [req-5918d541-2227-48d9-8924-cf03e1616a43 req-a6ac49db-08f6-40cf-9d31-5001040108ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.908 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.912 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.918 2 INFO nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Took 10.95 seconds to spawn the instance on the hypervisor.
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.919 2 DEBUG nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.935 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:26:58 compute-0 nova_compute[259627]: 2025-10-14 09:26:58.987 2 INFO nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Took 12.16 seconds to build instance.
Oct 14 09:26:59 compute-0 nova_compute[259627]: 2025-10-14 09:26:59.021 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:26:59 compute-0 ceph-mon[74249]: pgmap v2215: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Oct 14 09:26:59 compute-0 nova_compute[259627]: 2025-10-14 09:26:59.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:00 compute-0 nova_compute[259627]: 2025-10-14 09:27:00.223 2 DEBUG nova.network.neutron [req-5918d541-2227-48d9-8924-cf03e1616a43 req-a6ac49db-08f6-40cf-9d31-5001040108ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updated VIF entry in instance network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:27:00 compute-0 nova_compute[259627]: 2025-10-14 09:27:00.224 2 DEBUG nova.network.neutron [req-5918d541-2227-48d9-8924-cf03e1616a43 req-a6ac49db-08f6-40cf-9d31-5001040108ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updating instance_info_cache with network_info: [{"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:27:00 compute-0 nova_compute[259627]: 2025-10-14 09:27:00.249 2 DEBUG oslo_concurrency.lockutils [req-5918d541-2227-48d9-8924-cf03e1616a43 req-a6ac49db-08f6-40cf-9d31-5001040108ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:27:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 152 op/s
Oct 14 09:27:01 compute-0 nova_compute[259627]: 2025-10-14 09:27:01.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:01 compute-0 ceph-mon[74249]: pgmap v2216: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 152 op/s
Oct 14 09:27:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 MiB/s wr, 175 op/s
Oct 14 09:27:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:27:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:27:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:27:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:27:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:27:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:27:02 compute-0 nova_compute[259627]: 2025-10-14 09:27:02.953 2 DEBUG nova.compute.manager [req-21d90bc4-1dc0-40d0-ae12-340daf09bdd0 req-e6b01d42-efb3-4169-a372-68f2b8799389 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received event network-changed-563493a8-f727-4a25-97de-548a04398264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:02 compute-0 nova_compute[259627]: 2025-10-14 09:27:02.955 2 DEBUG nova.compute.manager [req-21d90bc4-1dc0-40d0-ae12-340daf09bdd0 req-e6b01d42-efb3-4169-a372-68f2b8799389 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Refreshing instance network info cache due to event network-changed-563493a8-f727-4a25-97de-548a04398264. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:27:02 compute-0 nova_compute[259627]: 2025-10-14 09:27:02.955 2 DEBUG oslo_concurrency.lockutils [req-21d90bc4-1dc0-40d0-ae12-340daf09bdd0 req-e6b01d42-efb3-4169-a372-68f2b8799389 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:27:02 compute-0 nova_compute[259627]: 2025-10-14 09:27:02.955 2 DEBUG oslo_concurrency.lockutils [req-21d90bc4-1dc0-40d0-ae12-340daf09bdd0 req-e6b01d42-efb3-4169-a372-68f2b8799389 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:27:02 compute-0 nova_compute[259627]: 2025-10-14 09:27:02.955 2 DEBUG nova.network.neutron [req-21d90bc4-1dc0-40d0-ae12-340daf09bdd0 req-e6b01d42-efb3-4169-a372-68f2b8799389 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Refreshing network info cache for port 563493a8-f727-4a25-97de-548a04398264 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:27:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:27:03 compute-0 ceph-mon[74249]: pgmap v2217: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 MiB/s wr, 175 op/s
Oct 14 09:27:04 compute-0 nova_compute[259627]: 2025-10-14 09:27:04.072 2 DEBUG nova.network.neutron [req-21d90bc4-1dc0-40d0-ae12-340daf09bdd0 req-e6b01d42-efb3-4169-a372-68f2b8799389 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Updated VIF entry in instance network info cache for port 563493a8-f727-4a25-97de-548a04398264. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:27:04 compute-0 nova_compute[259627]: 2025-10-14 09:27:04.073 2 DEBUG nova.network.neutron [req-21d90bc4-1dc0-40d0-ae12-340daf09bdd0 req-e6b01d42-efb3-4169-a372-68f2b8799389 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Updating instance_info_cache with network_info: [{"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:27:04 compute-0 nova_compute[259627]: 2025-10-14 09:27:04.097 2 DEBUG oslo_concurrency.lockutils [req-21d90bc4-1dc0-40d0-ae12-340daf09bdd0 req-e6b01d42-efb3-4169-a372-68f2b8799389 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:27:04 compute-0 nova_compute[259627]: 2025-10-14 09:27:04.330 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "1b7f03a3-6b73-478f-bf13-cf062714faef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:04 compute-0 nova_compute[259627]: 2025-10-14 09:27:04.331 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 14 09:27:04 compute-0 nova_compute[259627]: 2025-10-14 09:27:04.349 2 DEBUG nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:27:04 compute-0 nova_compute[259627]: 2025-10-14 09:27:04.426 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:04 compute-0 nova_compute[259627]: 2025-10-14 09:27:04.427 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:04 compute-0 nova_compute[259627]: 2025-10-14 09:27:04.436 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:27:04 compute-0 nova_compute[259627]: 2025-10-14 09:27:04.437 2 INFO nova.compute.claims [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:27:04 compute-0 nova_compute[259627]: 2025-10-14 09:27:04.631 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:04 compute-0 nova_compute[259627]: 2025-10-14 09:27:04.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:27:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2399357510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.138 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.148 2 DEBUG nova.compute.provider_tree [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.170 2 DEBUG nova.scheduler.client.report [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.201 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.202 2 DEBUG nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.257 2 DEBUG nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.258 2 DEBUG nova.network.neutron [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.276 2 INFO nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.316 2 DEBUG nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.417 2 DEBUG nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.420 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.421 2 INFO nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Creating image(s)
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.463 2 DEBUG nova.storage.rbd_utils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 1b7f03a3-6b73-478f-bf13-cf062714faef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.501 2 DEBUG nova.storage.rbd_utils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 1b7f03a3-6b73-478f-bf13-cf062714faef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.529 2 DEBUG nova.storage.rbd_utils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 1b7f03a3-6b73-478f-bf13-cf062714faef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.534 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:27:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3213667799' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:27:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:27:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3213667799' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.641 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.642 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.643 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.643 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.670 2 DEBUG nova.storage.rbd_utils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 1b7f03a3-6b73-478f-bf13-cf062714faef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.673 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1b7f03a3-6b73-478f-bf13-cf062714faef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:05 compute-0 ceph-mon[74249]: pgmap v2218: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 14 09:27:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2399357510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:27:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3213667799' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:27:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3213667799' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.848 2 DEBUG nova.policy [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:27:05 compute-0 nova_compute[259627]: 2025-10-14 09:27:05.962 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1b7f03a3-6b73-478f-bf13-cf062714faef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:06 compute-0 nova_compute[259627]: 2025-10-14 09:27:06.033 2 DEBUG nova.storage.rbd_utils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image 1b7f03a3-6b73-478f-bf13-cf062714faef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:27:06 compute-0 nova_compute[259627]: 2025-10-14 09:27:06.116 2 DEBUG nova.objects.instance [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid 1b7f03a3-6b73-478f-bf13-cf062714faef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:27:06 compute-0 nova_compute[259627]: 2025-10-14 09:27:06.156 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:27:06 compute-0 nova_compute[259627]: 2025-10-14 09:27:06.157 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Ensure instance console log exists: /var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:27:06 compute-0 nova_compute[259627]: 2025-10-14 09:27:06.157 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:06 compute-0 nova_compute[259627]: 2025-10-14 09:27:06.157 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:06 compute-0 nova_compute[259627]: 2025-10-14 09:27:06.158 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 14 09:27:06 compute-0 nova_compute[259627]: 2025-10-14 09:27:06.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:07.041 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:07.042 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:07.043 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:07 compute-0 nova_compute[259627]: 2025-10-14 09:27:07.248 2 DEBUG nova.network.neutron [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Successfully created port: c84419ee-1585-485f-ae91-116f2123dadf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:27:07 compute-0 ceph-mon[74249]: pgmap v2219: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 14 09:27:07 compute-0 nova_compute[259627]: 2025-10-14 09:27:07.937 2 DEBUG nova.network.neutron [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Successfully updated port: c84419ee-1585-485f-ae91-116f2123dadf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:27:07 compute-0 nova_compute[259627]: 2025-10-14 09:27:07.957 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:27:07 compute-0 nova_compute[259627]: 2025-10-14 09:27:07.958 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:27:07 compute-0 nova_compute[259627]: 2025-10-14 09:27:07.958 2 DEBUG nova.network.neutron [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:27:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:27:08 compute-0 nova_compute[259627]: 2025-10-14 09:27:08.022 2 DEBUG nova.compute.manager [req-89981c97-5eb0-46d5-981b-e3d5eaf9fe4a req-830f4c49-3017-44d3-9ed4-657d47ada30b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received event network-changed-c84419ee-1585-485f-ae91-116f2123dadf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:08 compute-0 nova_compute[259627]: 2025-10-14 09:27:08.025 2 DEBUG nova.compute.manager [req-89981c97-5eb0-46d5-981b-e3d5eaf9fe4a req-830f4c49-3017-44d3-9ed4-657d47ada30b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Refreshing instance network info cache due to event network-changed-c84419ee-1585-485f-ae91-116f2123dadf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:27:08 compute-0 nova_compute[259627]: 2025-10-14 09:27:08.025 2 DEBUG oslo_concurrency.lockutils [req-89981c97-5eb0-46d5-981b-e3d5eaf9fe4a req-830f4c49-3017-44d3-9ed4-657d47ada30b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:27:08 compute-0 nova_compute[259627]: 2025-10-14 09:27:08.118 2 DEBUG nova.network.neutron [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:27:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 137 op/s
Oct 14 09:27:08 compute-0 podman[391446]: 2025-10-14 09:27:08.720084113 +0000 UTC m=+0.105774664 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 14 09:27:08 compute-0 podman[391445]: 2025-10-14 09:27:08.72074401 +0000 UTC m=+0.112835950 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 09:27:09 compute-0 ovn_controller[152662]: 2025-10-14T09:27:09Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:50:8f:64 10.100.0.3
Oct 14 09:27:09 compute-0 ovn_controller[152662]: 2025-10-14T09:27:09Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:50:8f:64 10.100.0.3
Oct 14 09:27:09 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 14 09:27:09 compute-0 ceph-mon[74249]: pgmap v2220: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 137 op/s
Oct 14 09:27:09 compute-0 nova_compute[259627]: 2025-10-14 09:27:09.959 2 DEBUG nova.network.neutron [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Updating instance_info_cache with network_info: [{"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:27:09 compute-0 nova_compute[259627]: 2025-10-14 09:27:09.985 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:27:09 compute-0 nova_compute[259627]: 2025-10-14 09:27:09.986 2 DEBUG nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Instance network_info: |[{"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:27:09 compute-0 nova_compute[259627]: 2025-10-14 09:27:09.986 2 DEBUG oslo_concurrency.lockutils [req-89981c97-5eb0-46d5-981b-e3d5eaf9fe4a req-830f4c49-3017-44d3-9ed4-657d47ada30b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:27:09 compute-0 nova_compute[259627]: 2025-10-14 09:27:09.986 2 DEBUG nova.network.neutron [req-89981c97-5eb0-46d5-981b-e3d5eaf9fe4a req-830f4c49-3017-44d3-9ed4-657d47ada30b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Refreshing network info cache for port c84419ee-1585-485f-ae91-116f2123dadf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:27:09 compute-0 nova_compute[259627]: 2025-10-14 09:27:09.992 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Start _get_guest_xml network_info=[{"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:27:09 compute-0 nova_compute[259627]: 2025-10-14 09:27:09.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:09 compute-0 nova_compute[259627]: 2025-10-14 09:27:09.999 2 WARNING nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.005 2 DEBUG nova.virt.libvirt.host [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.006 2 DEBUG nova.virt.libvirt.host [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.009 2 DEBUG nova.virt.libvirt.host [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.010 2 DEBUG nova.virt.libvirt.host [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.010 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.011 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.011 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.012 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.012 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.012 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.013 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.013 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.014 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.014 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.014 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.015 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.018 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 256 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.8 MiB/s wr, 195 op/s
Oct 14 09:27:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:27:10 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/606654335' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.493 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.514 2 DEBUG nova.storage.rbd_utils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 1b7f03a3-6b73-478f-bf13-cf062714faef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.518 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:10 compute-0 ovn_controller[152662]: 2025-10-14T09:27:10Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8d:ee:f5 10.100.0.8
Oct 14 09:27:10 compute-0 ovn_controller[152662]: 2025-10-14T09:27:10Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8d:ee:f5 10.100.0.8
Oct 14 09:27:10 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/606654335' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:27:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:27:10 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3431566766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.956 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.959 2 DEBUG nova.virt.libvirt.vif [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:27:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2128241833',display_name='tempest-TestNetworkBasicOps-server-2128241833',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2128241833',id=129,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOLhDC9VoGp8WYPVj/2CKqBQNdjKW4OK8XHbingpjAJEpUTBDLqjgHyIpa9e7zDDaRmI0fbF5BkwR2QzH+89ULkia3qer+DRxqv2g2mzU6TZpkV7v9oBLNlvSUqf9rYag==',key_name='tempest-TestNetworkBasicOps-1059153159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-3kh0vol0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:27:05Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=1b7f03a3-6b73-478f-bf13-cf062714faef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.960 2 DEBUG nova.network.os_vif_util [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.962 2 DEBUG nova.network.os_vif_util [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:6d:b2,bridge_name='br-int',has_traffic_filtering=True,id=c84419ee-1585-485f-ae91-116f2123dadf,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc84419ee-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.964 2 DEBUG nova.objects.instance [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1b7f03a3-6b73-478f-bf13-cf062714faef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:27:10 compute-0 nova_compute[259627]: 2025-10-14 09:27:10.990 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:27:10 compute-0 nova_compute[259627]:   <uuid>1b7f03a3-6b73-478f-bf13-cf062714faef</uuid>
Oct 14 09:27:10 compute-0 nova_compute[259627]:   <name>instance-00000081</name>
Oct 14 09:27:10 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:27:10 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:27:10 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkBasicOps-server-2128241833</nova:name>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:27:10</nova:creationTime>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:27:10 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:27:10 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:27:10 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:27:10 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:27:10 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:27:10 compute-0 nova_compute[259627]:         <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:27:10 compute-0 nova_compute[259627]:         <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:27:10 compute-0 nova_compute[259627]:         <nova:port uuid="c84419ee-1585-485f-ae91-116f2123dadf">
Oct 14 09:27:10 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:27:10 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:27:10 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <system>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <entry name="serial">1b7f03a3-6b73-478f-bf13-cf062714faef</entry>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <entry name="uuid">1b7f03a3-6b73-478f-bf13-cf062714faef</entry>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     </system>
Oct 14 09:27:10 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:27:10 compute-0 nova_compute[259627]:   <os>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:   </os>
Oct 14 09:27:10 compute-0 nova_compute[259627]:   <features>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:   </features>
Oct 14 09:27:10 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:27:10 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:27:10 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1b7f03a3-6b73-478f-bf13-cf062714faef_disk">
Oct 14 09:27:10 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       </source>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:27:10 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1b7f03a3-6b73-478f-bf13-cf062714faef_disk.config">
Oct 14 09:27:10 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       </source>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:27:10 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:44:6d:b2"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <target dev="tapc84419ee-15"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef/console.log" append="off"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <video>
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     </video>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:27:10 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:27:10 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:27:10 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:27:10 compute-0 nova_compute[259627]: </domain>
Oct 14 09:27:10 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.002 2 DEBUG nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Preparing to wait for external event network-vif-plugged-c84419ee-1585-485f-ae91-116f2123dadf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.002 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.003 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.003 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.004 2 DEBUG nova.virt.libvirt.vif [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:27:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2128241833',display_name='tempest-TestNetworkBasicOps-server-2128241833',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2128241833',id=129,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOLhDC9VoGp8WYPVj/2CKqBQNdjKW4OK8XHbingpjAJEpUTBDLqjgHyIpa9e7zDDaRmI0fbF5BkwR2QzH+89ULkia3qer+DRxqv2g2mzU6TZpkV7v9oBLNlvSUqf9rYag==',key_name='tempest-TestNetworkBasicOps-1059153159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-3kh0vol0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:27:05Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=1b7f03a3-6b73-478f-bf13-cf062714faef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.005 2 DEBUG nova.network.os_vif_util [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.007 2 DEBUG nova.network.os_vif_util [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:6d:b2,bridge_name='br-int',has_traffic_filtering=True,id=c84419ee-1585-485f-ae91-116f2123dadf,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc84419ee-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.008 2 DEBUG os_vif [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:6d:b2,bridge_name='br-int',has_traffic_filtering=True,id=c84419ee-1585-485f-ae91-116f2123dadf,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc84419ee-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.010 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.011 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.017 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc84419ee-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.018 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc84419ee-15, col_values=(('external_ids', {'iface-id': 'c84419ee-1585-485f-ae91-116f2123dadf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:6d:b2', 'vm-uuid': '1b7f03a3-6b73-478f-bf13-cf062714faef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:11 compute-0 NetworkManager[44885]: <info>  [1760434031.0626] manager: (tapc84419ee-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/560)
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.076 2 INFO os_vif [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:6d:b2,bridge_name='br-int',has_traffic_filtering=True,id=c84419ee-1585-485f-ae91-116f2123dadf,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc84419ee-15')
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.155 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.156 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.156 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:44:6d:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.157 2 INFO nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Using config drive
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.192 2 DEBUG nova.storage.rbd_utils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 1b7f03a3-6b73-478f-bf13-cf062714faef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.367 2 DEBUG nova.network.neutron [req-89981c97-5eb0-46d5-981b-e3d5eaf9fe4a req-830f4c49-3017-44d3-9ed4-657d47ada30b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Updated VIF entry in instance network info cache for port c84419ee-1585-485f-ae91-116f2123dadf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.368 2 DEBUG nova.network.neutron [req-89981c97-5eb0-46d5-981b-e3d5eaf9fe4a req-830f4c49-3017-44d3-9ed4-657d47ada30b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Updating instance_info_cache with network_info: [{"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.383 2 DEBUG oslo_concurrency.lockutils [req-89981c97-5eb0-46d5-981b-e3d5eaf9fe4a req-830f4c49-3017-44d3-9ed4-657d47ada30b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:27:11 compute-0 ceph-mon[74249]: pgmap v2221: 305 pgs: 305 active+clean; 256 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.8 MiB/s wr, 195 op/s
Oct 14 09:27:11 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3431566766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.804 2 INFO nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Creating config drive at /var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef/disk.config
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.809 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwnqi2s1s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:11 compute-0 nova_compute[259627]: 2025-10-14 09:27:11.965 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwnqi2s1s" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:12 compute-0 nova_compute[259627]: 2025-10-14 09:27:12.000 2 DEBUG nova.storage.rbd_utils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 1b7f03a3-6b73-478f-bf13-cf062714faef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:27:12 compute-0 nova_compute[259627]: 2025-10-14 09:27:12.003 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef/disk.config 1b7f03a3-6b73-478f-bf13-cf062714faef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:12 compute-0 nova_compute[259627]: 2025-10-14 09:27:12.180 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef/disk.config 1b7f03a3-6b73-478f-bf13-cf062714faef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:12 compute-0 nova_compute[259627]: 2025-10-14 09:27:12.182 2 INFO nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Deleting local config drive /var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef/disk.config because it was imported into RBD.
Oct 14 09:27:12 compute-0 kernel: tapc84419ee-15: entered promiscuous mode
Oct 14 09:27:12 compute-0 NetworkManager[44885]: <info>  [1760434032.2421] manager: (tapc84419ee-15): new Tun device (/org/freedesktop/NetworkManager/Devices/561)
Oct 14 09:27:12 compute-0 systemd-udevd[391619]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:27:12 compute-0 nova_compute[259627]: 2025-10-14 09:27:12.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:12 compute-0 ovn_controller[152662]: 2025-10-14T09:27:12Z|01361|binding|INFO|Claiming lport c84419ee-1585-485f-ae91-116f2123dadf for this chassis.
Oct 14 09:27:12 compute-0 ovn_controller[152662]: 2025-10-14T09:27:12Z|01362|binding|INFO|c84419ee-1585-485f-ae91-116f2123dadf: Claiming fa:16:3e:44:6d:b2 10.100.0.9
Oct 14 09:27:12 compute-0 NetworkManager[44885]: <info>  [1760434032.2986] device (tapc84419ee-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:27:12 compute-0 NetworkManager[44885]: <info>  [1760434032.2996] device (tapc84419ee-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:27:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.301 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:6d:b2 10.100.0.9'], port_security=['fa:16:3e:44:6d:b2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1b7f03a3-6b73-478f-bf13-cf062714faef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44344b65-f325-470a-bd36-6f52ed03d317', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': '471f6dc6-ea8e-4b1e-a678-6d128cef3d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5dceacdb-1a50-419e-9ee9-f149e7094b34, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c84419ee-1585-485f-ae91-116f2123dadf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:27:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.302 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c84419ee-1585-485f-ae91-116f2123dadf in datapath 44344b65-f325-470a-bd36-6f52ed03d317 bound to our chassis
Oct 14 09:27:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.304 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 44344b65-f325-470a-bd36-6f52ed03d317
Oct 14 09:27:12 compute-0 ovn_controller[152662]: 2025-10-14T09:27:12Z|01363|binding|INFO|Setting lport c84419ee-1585-485f-ae91-116f2123dadf ovn-installed in OVS
Oct 14 09:27:12 compute-0 ovn_controller[152662]: 2025-10-14T09:27:12Z|01364|binding|INFO|Setting lport c84419ee-1585-485f-ae91-116f2123dadf up in Southbound
Oct 14 09:27:12 compute-0 nova_compute[259627]: 2025-10-14 09:27:12.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:12 compute-0 systemd-machined[214636]: New machine qemu-162-instance-00000081.
Oct 14 09:27:12 compute-0 nova_compute[259627]: 2025-10-14 09:27:12.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.333 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9994f344-bbc6-43f4-8c3e-7a20df7eb95b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:12 compute-0 systemd[1]: Started Virtual Machine qemu-162-instance-00000081.
Oct 14 09:27:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 318 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.9 MiB/s wr, 187 op/s
Oct 14 09:27:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.369 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[320d6180-fb81-4e56-85b6-dce962f20bec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.372 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8d981dc3-25e3-45b1-82da-0bb051708266]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.415 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[50414cf6-eb6f-4903-9e19-931c2cb252bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.440 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[da9206ab-0cb3-4861-8d5a-adc444d23dc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44344b65-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:40:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779335, 'reachable_time': 36403, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391636, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.462 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[996a6870-1b7e-42ee-9a7f-e4d53af4d83c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap44344b65-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 779351, 'tstamp': 779351}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391637, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap44344b65-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 779355, 'tstamp': 779355}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391637, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.464 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44344b65-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.468 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44344b65-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.468 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:27:12 compute-0 nova_compute[259627]: 2025-10-14 09:27:12.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.468 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap44344b65-f0, col_values=(('external_ids', {'iface-id': 'dabdfa0b-267a-4754-a026-601ab2593a32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:12 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.469 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:27:12 compute-0 nova_compute[259627]: 2025-10-14 09:27:12.530 2 DEBUG nova.compute.manager [req-c70e499f-f254-40ce-b11d-fa9c03e0f632 req-14f792fd-dd93-41b7-b555-bb4033f76b26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received event network-vif-plugged-c84419ee-1585-485f-ae91-116f2123dadf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:12 compute-0 nova_compute[259627]: 2025-10-14 09:27:12.530 2 DEBUG oslo_concurrency.lockutils [req-c70e499f-f254-40ce-b11d-fa9c03e0f632 req-14f792fd-dd93-41b7-b555-bb4033f76b26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:12 compute-0 nova_compute[259627]: 2025-10-14 09:27:12.531 2 DEBUG oslo_concurrency.lockutils [req-c70e499f-f254-40ce-b11d-fa9c03e0f632 req-14f792fd-dd93-41b7-b555-bb4033f76b26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:12 compute-0 nova_compute[259627]: 2025-10-14 09:27:12.531 2 DEBUG oslo_concurrency.lockutils [req-c70e499f-f254-40ce-b11d-fa9c03e0f632 req-14f792fd-dd93-41b7-b555-bb4033f76b26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:12 compute-0 nova_compute[259627]: 2025-10-14 09:27:12.531 2 DEBUG nova.compute.manager [req-c70e499f-f254-40ce-b11d-fa9c03e0f632 req-14f792fd-dd93-41b7-b555-bb4033f76b26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Processing event network-vif-plugged-c84419ee-1585-485f-ae91-116f2123dadf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:27:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.522 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434033.5219579, 1b7f03a3-6b73-478f-bf13-cf062714faef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.523 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] VM Started (Lifecycle Event)
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.527 2 DEBUG nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.531 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.536 2 INFO nova.virt.libvirt.driver [-] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Instance spawned successfully.
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.536 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.546 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.552 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.568 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.568 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.569 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.570 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.570 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.571 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.585 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.586 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434033.5221717, 1b7f03a3-6b73-478f-bf13-cf062714faef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.586 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] VM Paused (Lifecycle Event)
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.615 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.620 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434033.531478, 1b7f03a3-6b73-478f-bf13-cf062714faef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.620 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] VM Resumed (Lifecycle Event)
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.641 2 INFO nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Took 8.22 seconds to spawn the instance on the hypervisor.
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.642 2 DEBUG nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.643 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.654 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.678 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.706 2 INFO nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Took 9.31 seconds to build instance.
Oct 14 09:27:13 compute-0 nova_compute[259627]: 2025-10-14 09:27:13.721 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:13 compute-0 ceph-mon[74249]: pgmap v2222: 305 pgs: 305 active+clean; 318 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.9 MiB/s wr, 187 op/s
Oct 14 09:27:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 318 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 5.9 MiB/s wr, 138 op/s
Oct 14 09:27:14 compute-0 nova_compute[259627]: 2025-10-14 09:27:14.657 2 DEBUG nova.compute.manager [req-ac06b5ad-50f8-4ed2-84c3-10835dc11219 req-4cd5a284-e4c5-4d87-87d3-9cf49da8c6a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received event network-vif-plugged-c84419ee-1585-485f-ae91-116f2123dadf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:14 compute-0 nova_compute[259627]: 2025-10-14 09:27:14.658 2 DEBUG oslo_concurrency.lockutils [req-ac06b5ad-50f8-4ed2-84c3-10835dc11219 req-4cd5a284-e4c5-4d87-87d3-9cf49da8c6a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:14 compute-0 nova_compute[259627]: 2025-10-14 09:27:14.658 2 DEBUG oslo_concurrency.lockutils [req-ac06b5ad-50f8-4ed2-84c3-10835dc11219 req-4cd5a284-e4c5-4d87-87d3-9cf49da8c6a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:14 compute-0 nova_compute[259627]: 2025-10-14 09:27:14.658 2 DEBUG oslo_concurrency.lockutils [req-ac06b5ad-50f8-4ed2-84c3-10835dc11219 req-4cd5a284-e4c5-4d87-87d3-9cf49da8c6a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:14 compute-0 nova_compute[259627]: 2025-10-14 09:27:14.658 2 DEBUG nova.compute.manager [req-ac06b5ad-50f8-4ed2-84c3-10835dc11219 req-4cd5a284-e4c5-4d87-87d3-9cf49da8c6a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] No waiting events found dispatching network-vif-plugged-c84419ee-1585-485f-ae91-116f2123dadf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:27:14 compute-0 nova_compute[259627]: 2025-10-14 09:27:14.659 2 WARNING nova.compute.manager [req-ac06b5ad-50f8-4ed2-84c3-10835dc11219 req-4cd5a284-e4c5-4d87-87d3-9cf49da8c6a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received unexpected event network-vif-plugged-c84419ee-1585-485f-ae91-116f2123dadf for instance with vm_state active and task_state None.
Oct 14 09:27:14 compute-0 nova_compute[259627]: 2025-10-14 09:27:14.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:15 compute-0 ceph-mon[74249]: pgmap v2223: 305 pgs: 305 active+clean; 318 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 5.9 MiB/s wr, 138 op/s
Oct 14 09:27:16 compute-0 nova_compute[259627]: 2025-10-14 09:27:16.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 229 op/s
Oct 14 09:27:17 compute-0 ceph-mon[74249]: pgmap v2224: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 229 op/s
Oct 14 09:27:17 compute-0 nova_compute[259627]: 2025-10-14 09:27:17.866 2 DEBUG nova.compute.manager [req-c895f2e9-ef5c-4cd4-82b3-1d6f1abd87ce req-55eb0779-c0f4-4bf1-ba24-84abcd1e7d63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received event network-changed-c84419ee-1585-485f-ae91-116f2123dadf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:17 compute-0 nova_compute[259627]: 2025-10-14 09:27:17.866 2 DEBUG nova.compute.manager [req-c895f2e9-ef5c-4cd4-82b3-1d6f1abd87ce req-55eb0779-c0f4-4bf1-ba24-84abcd1e7d63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Refreshing instance network info cache due to event network-changed-c84419ee-1585-485f-ae91-116f2123dadf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:27:17 compute-0 nova_compute[259627]: 2025-10-14 09:27:17.867 2 DEBUG oslo_concurrency.lockutils [req-c895f2e9-ef5c-4cd4-82b3-1d6f1abd87ce req-55eb0779-c0f4-4bf1-ba24-84abcd1e7d63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:27:17 compute-0 nova_compute[259627]: 2025-10-14 09:27:17.867 2 DEBUG oslo_concurrency.lockutils [req-c895f2e9-ef5c-4cd4-82b3-1d6f1abd87ce req-55eb0779-c0f4-4bf1-ba24-84abcd1e7d63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:27:17 compute-0 nova_compute[259627]: 2025-10-14 09:27:17.867 2 DEBUG nova.network.neutron [req-c895f2e9-ef5c-4cd4-82b3-1d6f1abd87ce req-55eb0779-c0f4-4bf1-ba24-84abcd1e7d63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Refreshing network info cache for port c84419ee-1585-485f-ae91-116f2123dadf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:27:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:27:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 229 op/s
Oct 14 09:27:19 compute-0 nova_compute[259627]: 2025-10-14 09:27:19.714 2 DEBUG nova.network.neutron [req-c895f2e9-ef5c-4cd4-82b3-1d6f1abd87ce req-55eb0779-c0f4-4bf1-ba24-84abcd1e7d63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Updated VIF entry in instance network info cache for port c84419ee-1585-485f-ae91-116f2123dadf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:27:19 compute-0 nova_compute[259627]: 2025-10-14 09:27:19.715 2 DEBUG nova.network.neutron [req-c895f2e9-ef5c-4cd4-82b3-1d6f1abd87ce req-55eb0779-c0f4-4bf1-ba24-84abcd1e7d63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Updating instance_info_cache with network_info: [{"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:27:19 compute-0 nova_compute[259627]: 2025-10-14 09:27:19.742 2 DEBUG oslo_concurrency.lockutils [req-c895f2e9-ef5c-4cd4-82b3-1d6f1abd87ce req-55eb0779-c0f4-4bf1-ba24-84abcd1e7d63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:27:19 compute-0 ceph-mon[74249]: pgmap v2225: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 229 op/s
Oct 14 09:27:19 compute-0 nova_compute[259627]: 2025-10-14 09:27:19.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 230 op/s
Oct 14 09:27:21 compute-0 nova_compute[259627]: 2025-10-14 09:27:21.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:21 compute-0 podman[391681]: 2025-10-14 09:27:21.652230641 +0000 UTC m=+0.064011193 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 14 09:27:21 compute-0 podman[391680]: 2025-10-14 09:27:21.68698908 +0000 UTC m=+0.098804113 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 14 09:27:21 compute-0 ceph-mon[74249]: pgmap v2226: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 230 op/s
Oct 14 09:27:21 compute-0 nova_compute[259627]: 2025-10-14 09:27:21.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:21.954 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:27:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:21.956 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:27:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 172 op/s
Oct 14 09:27:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:27:23 compute-0 ceph-mon[74249]: pgmap v2227: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 172 op/s
Oct 14 09:27:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 133 KiB/s wr, 92 op/s
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.399 2 DEBUG nova.compute.manager [req-b4f57d2b-8b87-4ecd-a060-cb3d91c818b4 req-38812c8c-10d5-4a8b-93c1-32b9363a1c37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received event network-changed-563493a8-f727-4a25-97de-548a04398264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.400 2 DEBUG nova.compute.manager [req-b4f57d2b-8b87-4ecd-a060-cb3d91c818b4 req-38812c8c-10d5-4a8b-93c1-32b9363a1c37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Refreshing instance network info cache due to event network-changed-563493a8-f727-4a25-97de-548a04398264. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.400 2 DEBUG oslo_concurrency.lockutils [req-b4f57d2b-8b87-4ecd-a060-cb3d91c818b4 req-38812c8c-10d5-4a8b-93c1-32b9363a1c37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.400 2 DEBUG oslo_concurrency.lockutils [req-b4f57d2b-8b87-4ecd-a060-cb3d91c818b4 req-38812c8c-10d5-4a8b-93c1-32b9363a1c37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.401 2 DEBUG nova.network.neutron [req-b4f57d2b-8b87-4ecd-a060-cb3d91c818b4 req-38812c8c-10d5-4a8b-93c1-32b9363a1c37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Refreshing network info cache for port 563493a8-f727-4a25-97de-548a04398264 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.436 2 DEBUG oslo_concurrency.lockutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquiring lock "e16af982-3cd8-4600-99c4-aeec45986dda" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.437 2 DEBUG oslo_concurrency.lockutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.437 2 DEBUG oslo_concurrency.lockutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquiring lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.437 2 DEBUG oslo_concurrency.lockutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.438 2 DEBUG oslo_concurrency.lockutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.440 2 INFO nova.compute.manager [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Terminating instance
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.441 2 DEBUG nova.compute.manager [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:27:24 compute-0 kernel: tap563493a8-f7 (unregistering): left promiscuous mode
Oct 14 09:27:24 compute-0 NetworkManager[44885]: <info>  [1760434044.5240] device (tap563493a8-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:24 compute-0 ovn_controller[152662]: 2025-10-14T09:27:24Z|01365|binding|INFO|Releasing lport 563493a8-f727-4a25-97de-548a04398264 from this chassis (sb_readonly=0)
Oct 14 09:27:24 compute-0 ovn_controller[152662]: 2025-10-14T09:27:24Z|01366|binding|INFO|Setting lport 563493a8-f727-4a25-97de-548a04398264 down in Southbound
Oct 14 09:27:24 compute-0 ovn_controller[152662]: 2025-10-14T09:27:24Z|01367|binding|INFO|Removing iface tap563493a8-f7 ovn-installed in OVS
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.553 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:ee:f5 10.100.0.8'], port_security=['fa:16:3e:8d:ee:f5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e16af982-3cd8-4600-99c4-aeec45986dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d00461c7-a787-45ae-8db1-11ba8f94e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '261a0f5c61f04b77863377f034e70f01', 'neutron:revision_number': '4', 'neutron:security_group_ids': '662e95f2-6ec1-4f95-993e-421550aa8e7c 6cefb658-a903-42cb-a2fb-84294f9eff2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=789ab81e-5f79-4555-8f72-bf440f2a44f6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=563493a8-f727-4a25-97de-548a04398264) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.554 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 563493a8-f727-4a25-97de-548a04398264 in datapath d00461c7-a787-45ae-8db1-11ba8f94e301 unbound from our chassis
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.556 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d00461c7-a787-45ae-8db1-11ba8f94e301, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.558 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5315c4d1-f528-4c82-86de-d9ead3b38eae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.559 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301 namespace which is not needed anymore
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:24 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000080.scope: Deactivated successfully.
Oct 14 09:27:24 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000080.scope: Consumed 13.395s CPU time.
Oct 14 09:27:24 compute-0 systemd-machined[214636]: Machine qemu-161-instance-00000080 terminated.
Oct 14 09:27:24 compute-0 kernel: tap563493a8-f7: entered promiscuous mode
Oct 14 09:27:24 compute-0 NetworkManager[44885]: <info>  [1760434044.6584] manager: (tap563493a8-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/562)
Oct 14 09:27:24 compute-0 kernel: tap563493a8-f7 (unregistering): left promiscuous mode
Oct 14 09:27:24 compute-0 ovn_controller[152662]: 2025-10-14T09:27:24Z|01368|binding|INFO|Claiming lport 563493a8-f727-4a25-97de-548a04398264 for this chassis.
Oct 14 09:27:24 compute-0 ovn_controller[152662]: 2025-10-14T09:27:24Z|01369|binding|INFO|563493a8-f727-4a25-97de-548a04398264: Claiming fa:16:3e:8d:ee:f5 10.100.0.8
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.665 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:ee:f5 10.100.0.8'], port_security=['fa:16:3e:8d:ee:f5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e16af982-3cd8-4600-99c4-aeec45986dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d00461c7-a787-45ae-8db1-11ba8f94e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '261a0f5c61f04b77863377f034e70f01', 'neutron:revision_number': '4', 'neutron:security_group_ids': '662e95f2-6ec1-4f95-993e-421550aa8e7c 6cefb658-a903-42cb-a2fb-84294f9eff2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=789ab81e-5f79-4555-8f72-bf440f2a44f6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=563493a8-f727-4a25-97de-548a04398264) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.679 2 INFO nova.virt.libvirt.driver [-] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Instance destroyed successfully.
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.679 2 DEBUG nova.objects.instance [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lazy-loading 'resources' on Instance uuid e16af982-3cd8-4600-99c4-aeec45986dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:24 compute-0 ovn_controller[152662]: 2025-10-14T09:27:24Z|01370|binding|INFO|Releasing lport 563493a8-f727-4a25-97de-548a04398264 from this chassis (sb_readonly=0)
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.691 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:ee:f5 10.100.0.8'], port_security=['fa:16:3e:8d:ee:f5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e16af982-3cd8-4600-99c4-aeec45986dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d00461c7-a787-45ae-8db1-11ba8f94e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '261a0f5c61f04b77863377f034e70f01', 'neutron:revision_number': '4', 'neutron:security_group_ids': '662e95f2-6ec1-4f95-993e-421550aa8e7c 6cefb658-a903-42cb-a2fb-84294f9eff2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=789ab81e-5f79-4555-8f72-bf440f2a44f6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=563493a8-f727-4a25-97de-548a04398264) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.697 2 DEBUG nova.virt.libvirt.vif [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:26:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-211623791-access_point-467670451',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-211623791-access_point-467670451',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-211623791-acc',id=128,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KjYB8TOdxJDBX1/D4wnlTYCVpAUwpAx9R+OpzTEM4NvI2eIFjA8TxpNcbv510O1pJN+wHpAyy5P1FA9H3PUquH/ijntIYtfRD9NCYpHRGTApBy/zfs0NfRBu1//Wkcw==',key_name='tempest-TestSecurityGroupsBasicOps-887135051',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:26:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='261a0f5c61f04b77863377f034e70f01',ramdisk_id='',reservation_id='r-95hbfz6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-211623791',owner_user_name='tempest-TestSecurityGroupsBasicOps-211623791-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:26:58Z,user_data=None,user_id='a250f9c11f864fb49faf97cbb4399ece',uuid=e16af982-3cd8-4600-99c4-aeec45986dda,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.698 2 DEBUG nova.network.os_vif_util [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Converting VIF {"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.698 2 DEBUG nova.network.os_vif_util [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:ee:f5,bridge_name='br-int',has_traffic_filtering=True,id=563493a8-f727-4a25-97de-548a04398264,network=Network(d00461c7-a787-45ae-8db1-11ba8f94e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap563493a8-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.699 2 DEBUG os_vif [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:ee:f5,bridge_name='br-int',has_traffic_filtering=True,id=563493a8-f727-4a25-97de-548a04398264,network=Network(d00461c7-a787-45ae-8db1-11ba8f94e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap563493a8-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.701 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap563493a8-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.708 2 INFO os_vif [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:ee:f5,bridge_name='br-int',has_traffic_filtering=True,id=563493a8-f727-4a25-97de-548a04398264,network=Network(d00461c7-a787-45ae-8db1-11ba8f94e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap563493a8-f7')
Oct 14 09:27:24 compute-0 neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301[391242]: [NOTICE]   (391246) : haproxy version is 2.8.14-c23fe91
Oct 14 09:27:24 compute-0 neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301[391242]: [NOTICE]   (391246) : path to executable is /usr/sbin/haproxy
Oct 14 09:27:24 compute-0 neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301[391242]: [WARNING]  (391246) : Exiting Master process...
Oct 14 09:27:24 compute-0 neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301[391242]: [WARNING]  (391246) : Exiting Master process...
Oct 14 09:27:24 compute-0 neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301[391242]: [ALERT]    (391246) : Current worker (391248) exited with code 143 (Terminated)
Oct 14 09:27:24 compute-0 neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301[391242]: [WARNING]  (391246) : All workers exited. Exiting... (0)
Oct 14 09:27:24 compute-0 systemd[1]: libpod-05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18.scope: Deactivated successfully.
Oct 14 09:27:24 compute-0 podman[391749]: 2025-10-14 09:27:24.750160358 +0000 UTC m=+0.063214683 container died 05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:27:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18-userdata-shm.mount: Deactivated successfully.
Oct 14 09:27:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1209d6b30b19e606c66aff50aa191b9774b9d3e66e60de7f9a75b20180f215f-merged.mount: Deactivated successfully.
Oct 14 09:27:24 compute-0 podman[391749]: 2025-10-14 09:27:24.802100662 +0000 UTC m=+0.115154987 container cleanup 05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 14 09:27:24 compute-0 systemd[1]: libpod-conmon-05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18.scope: Deactivated successfully.
Oct 14 09:27:24 compute-0 podman[391797]: 2025-10-14 09:27:24.885769729 +0000 UTC m=+0.062848694 container remove 05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.894 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1361b30a-e04c-45c3-a369-0cb5bc0273df]: (4, ('Tue Oct 14 09:27:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301 (05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18)\n05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18\nTue Oct 14 09:27:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301 (05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18)\n05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.896 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[199b151d-644a-4617-9d7e-80e8d970d505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.897 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd00461c7-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:24 compute-0 kernel: tapd00461c7-a0: left promiscuous mode
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:24 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.920 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[244aa472-2e7b-43f6-baee-409c3ec686ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.944 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3641f7cb-b1e6-476a-89a3-78aea62cf2d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.946 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[92c98f13-2568-4fb0-88c7-7999fdf063a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.957 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.975 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[48267b23-20c4-40ed-bba9-19131fd70cdf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779635, 'reachable_time': 32329, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391813, 'error': None, 'target': 'ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:24 compute-0 systemd[1]: run-netns-ovnmeta\x2dd00461c7\x2da787\x2d45ae\x2d8db1\x2d11ba8f94e301.mount: Deactivated successfully.
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.979 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.979 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[564821e3-9427-4f05-9ba3-bf7d6cd43329]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.981 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 563493a8-f727-4a25-97de-548a04398264 in datapath d00461c7-a787-45ae-8db1-11ba8f94e301 unbound from our chassis
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.983 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d00461c7-a787-45ae-8db1-11ba8f94e301, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.989 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fb04dbb8-8256-4fac-a1ee-4b1e868ac7cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.990 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 563493a8-f727-4a25-97de-548a04398264 in datapath d00461c7-a787-45ae-8db1-11ba8f94e301 unbound from our chassis
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.991 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d00461c7-a787-45ae-8db1-11ba8f94e301, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:27:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.992 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1f42399a-a34d-45a6-806d-4912fe11ab7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:25 compute-0 nova_compute[259627]: 2025-10-14 09:27:24.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:25 compute-0 nova_compute[259627]: 2025-10-14 09:27:25.079 2 INFO nova.virt.libvirt.driver [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Deleting instance files /var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda_del
Oct 14 09:27:25 compute-0 nova_compute[259627]: 2025-10-14 09:27:25.080 2 INFO nova.virt.libvirt.driver [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Deletion of /var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda_del complete
Oct 14 09:27:25 compute-0 nova_compute[259627]: 2025-10-14 09:27:25.136 2 INFO nova.compute.manager [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Took 0.69 seconds to destroy the instance on the hypervisor.
Oct 14 09:27:25 compute-0 nova_compute[259627]: 2025-10-14 09:27:25.136 2 DEBUG oslo.service.loopingcall [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:27:25 compute-0 nova_compute[259627]: 2025-10-14 09:27:25.137 2 DEBUG nova.compute.manager [-] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:27:25 compute-0 nova_compute[259627]: 2025-10-14 09:27:25.137 2 DEBUG nova.network.neutron [-] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:27:25 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct 14 09:27:25 compute-0 ovn_controller[152662]: 2025-10-14T09:27:25Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:44:6d:b2 10.100.0.9
Oct 14 09:27:25 compute-0 ovn_controller[152662]: 2025-10-14T09:27:25Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:44:6d:b2 10.100.0.9
Oct 14 09:27:25 compute-0 ceph-mon[74249]: pgmap v2228: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 133 KiB/s wr, 92 op/s
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.004 2 DEBUG nova.network.neutron [-] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.022 2 INFO nova.compute.manager [-] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Took 0.89 seconds to deallocate network for instance.
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.065 2 DEBUG oslo_concurrency.lockutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.066 2 DEBUG oslo_concurrency.lockutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.166 2 DEBUG oslo_concurrency.processutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.207 2 DEBUG nova.network.neutron [req-b4f57d2b-8b87-4ecd-a060-cb3d91c818b4 req-38812c8c-10d5-4a8b-93c1-32b9363a1c37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Updated VIF entry in instance network info cache for port 563493a8-f727-4a25-97de-548a04398264. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.208 2 DEBUG nova.network.neutron [req-b4f57d2b-8b87-4ecd-a060-cb3d91c818b4 req-38812c8c-10d5-4a8b-93c1-32b9363a1c37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Updating instance_info_cache with network_info: [{"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.232 2 DEBUG oslo_concurrency.lockutils [req-b4f57d2b-8b87-4ecd-a060-cb3d91c818b4 req-38812c8c-10d5-4a8b-93c1-32b9363a1c37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:27:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 268 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.523 2 DEBUG nova.compute.manager [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received event network-vif-unplugged-563493a8-f727-4a25-97de-548a04398264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.523 2 DEBUG oslo_concurrency.lockutils [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.524 2 DEBUG oslo_concurrency.lockutils [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.524 2 DEBUG oslo_concurrency.lockutils [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.524 2 DEBUG nova.compute.manager [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] No waiting events found dispatching network-vif-unplugged-563493a8-f727-4a25-97de-548a04398264 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.524 2 WARNING nova.compute.manager [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received unexpected event network-vif-unplugged-563493a8-f727-4a25-97de-548a04398264 for instance with vm_state deleted and task_state None.
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.524 2 DEBUG nova.compute.manager [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received event network-vif-plugged-563493a8-f727-4a25-97de-548a04398264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.524 2 DEBUG oslo_concurrency.lockutils [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.525 2 DEBUG oslo_concurrency.lockutils [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.525 2 DEBUG oslo_concurrency.lockutils [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.525 2 DEBUG nova.compute.manager [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] No waiting events found dispatching network-vif-plugged-563493a8-f727-4a25-97de-548a04398264 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.525 2 WARNING nova.compute.manager [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received unexpected event network-vif-plugged-563493a8-f727-4a25-97de-548a04398264 for instance with vm_state deleted and task_state None.
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.525 2 DEBUG nova.compute.manager [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received event network-vif-deleted-563493a8-f727-4a25-97de-548a04398264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.525 2 INFO nova.compute.manager [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Neutron deleted interface 563493a8-f727-4a25-97de-548a04398264; detaching it from the instance and deleting it from the info cache
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.525 2 DEBUG nova.network.neutron [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.542 2 DEBUG nova.compute.manager [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Detach interface failed, port_id=563493a8-f727-4a25-97de-548a04398264, reason: Instance e16af982-3cd8-4600-99c4-aeec45986dda could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 09:27:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:27:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2897051182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.605 2 DEBUG oslo_concurrency.processutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.613 2 DEBUG nova.compute.provider_tree [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.629 2 DEBUG nova.scheduler.client.report [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.652 2 DEBUG oslo_concurrency.lockutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.687 2 INFO nova.scheduler.client.report [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Deleted allocations for instance e16af982-3cd8-4600-99c4-aeec45986dda
Oct 14 09:27:26 compute-0 nova_compute[259627]: 2025-10-14 09:27:26.749 2 DEBUG oslo_concurrency.lockutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:26 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2897051182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:27:27 compute-0 ceph-mon[74249]: pgmap v2229: 305 pgs: 305 active+clean; 268 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Oct 14 09:27:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:27:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 268 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 14 09:27:29 compute-0 nova_compute[259627]: 2025-10-14 09:27:29.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:29 compute-0 ceph-mon[74249]: pgmap v2230: 305 pgs: 305 active+clean; 268 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 14 09:27:30 compute-0 nova_compute[259627]: 2025-10-14 09:27:30.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:30 compute-0 ovn_controller[152662]: 2025-10-14T09:27:30Z|01371|binding|INFO|Releasing lport dabdfa0b-267a-4754-a026-601ab2593a32 from this chassis (sb_readonly=0)
Oct 14 09:27:30 compute-0 ovn_controller[152662]: 2025-10-14T09:27:30Z|01372|binding|INFO|Releasing lport a300b10f-f6fd-47ab-bc03-160d747e5ac0 from this chassis (sb_readonly=0)
Oct 14 09:27:30 compute-0 nova_compute[259627]: 2025-10-14 09:27:30.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 275 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 2.2 MiB/s wr, 88 op/s
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.303 2 DEBUG nova.compute.manager [req-102500e4-8ffa-47dd-90cb-63a8dd36a94b req-ac89bac5-7506-47ce-8a46-5becabf16946 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received event network-changed-ea25832f-13d3-41ec-874c-e622d24c912e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.304 2 DEBUG nova.compute.manager [req-102500e4-8ffa-47dd-90cb-63a8dd36a94b req-ac89bac5-7506-47ce-8a46-5becabf16946 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Refreshing instance network info cache due to event network-changed-ea25832f-13d3-41ec-874c-e622d24c912e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.304 2 DEBUG oslo_concurrency.lockutils [req-102500e4-8ffa-47dd-90cb-63a8dd36a94b req-ac89bac5-7506-47ce-8a46-5becabf16946 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.304 2 DEBUG oslo_concurrency.lockutils [req-102500e4-8ffa-47dd-90cb-63a8dd36a94b req-ac89bac5-7506-47ce-8a46-5becabf16946 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.304 2 DEBUG nova.network.neutron [req-102500e4-8ffa-47dd-90cb-63a8dd36a94b req-ac89bac5-7506-47ce-8a46-5becabf16946 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Refreshing network info cache for port ea25832f-13d3-41ec-874c-e622d24c912e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.357 2 DEBUG oslo_concurrency.lockutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4310595f-2280-438c-97ca-f2de57527501" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.357 2 DEBUG oslo_concurrency.lockutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.358 2 DEBUG oslo_concurrency.lockutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4310595f-2280-438c-97ca-f2de57527501-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.358 2 DEBUG oslo_concurrency.lockutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.359 2 DEBUG oslo_concurrency.lockutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.360 2 INFO nova.compute.manager [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Terminating instance
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.362 2 DEBUG nova.compute.manager [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:27:31 compute-0 kernel: tapea25832f-13 (unregistering): left promiscuous mode
Oct 14 09:27:31 compute-0 NetworkManager[44885]: <info>  [1760434051.4319] device (tapea25832f-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:27:31 compute-0 ovn_controller[152662]: 2025-10-14T09:27:31Z|01373|binding|INFO|Releasing lport ea25832f-13d3-41ec-874c-e622d24c912e from this chassis (sb_readonly=0)
Oct 14 09:27:31 compute-0 ovn_controller[152662]: 2025-10-14T09:27:31Z|01374|binding|INFO|Setting lport ea25832f-13d3-41ec-874c-e622d24c912e down in Southbound
Oct 14 09:27:31 compute-0 ovn_controller[152662]: 2025-10-14T09:27:31Z|01375|binding|INFO|Removing iface tapea25832f-13 ovn-installed in OVS
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.451 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:45:3f 10.100.0.12'], port_security=['fa:16:3e:fe:45:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4310595f-2280-438c-97ca-f2de57527501', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2ea5a077-a2c7-41d4-9c82-971893cbca2e 5ded162a-2a98-4fc1-94d1-b742c1816f61', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57f23724-2b34-445a-b3d0-46a0f0ee87c3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ea25832f-13d3-41ec-874c-e622d24c912e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:27:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.453 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ea25832f-13d3-41ec-874c-e622d24c912e in datapath 69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c unbound from our chassis
Oct 14 09:27:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.455 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:27:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.457 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7073eb21-77e5-4aec-806f-18dfa98ad0b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.458 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c namespace which is not needed anymore
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:31 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Oct 14 09:27:31 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007e.scope: Consumed 15.827s CPU time.
Oct 14 09:27:31 compute-0 systemd-machined[214636]: Machine qemu-159-instance-0000007e terminated.
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.614 2 INFO nova.virt.libvirt.driver [-] [instance: 4310595f-2280-438c-97ca-f2de57527501] Instance destroyed successfully.
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.616 2 DEBUG nova.objects.instance [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'resources' on Instance uuid 4310595f-2280-438c-97ca-f2de57527501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.629 2 DEBUG nova.virt.libvirt.vif [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-136745986',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-136745986',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=126,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHc9phIuU5FXYInHFmneK7ofu0Hronr3GOHgS3ZKrK8UZEcxqPRrwvV2ktBWbk2vf9CswqByMiWPlH6Y1ffYCmRhb+LdZFzcPKCiYu31yXKGqBJ2r6m/arw2a5HgrQ1Icw==',key_name='tempest-TestSecurityGroupsBasicOps-69160241',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:26:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-pph0582m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:26:11Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=4310595f-2280-438c-97ca-f2de57527501,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:27:31 compute-0 neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c[389146]: [NOTICE]   (389150) : haproxy version is 2.8.14-c23fe91
Oct 14 09:27:31 compute-0 neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c[389146]: [NOTICE]   (389150) : path to executable is /usr/sbin/haproxy
Oct 14 09:27:31 compute-0 neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c[389146]: [WARNING]  (389150) : Exiting Master process...
Oct 14 09:27:31 compute-0 neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c[389146]: [WARNING]  (389150) : Exiting Master process...
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.630 2 DEBUG nova.network.os_vif_util [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.631 2 DEBUG nova.network.os_vif_util [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fe:45:3f,bridge_name='br-int',has_traffic_filtering=True,id=ea25832f-13d3-41ec-874c-e622d24c912e,network=Network(69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea25832f-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.632 2 DEBUG os_vif [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:45:3f,bridge_name='br-int',has_traffic_filtering=True,id=ea25832f-13d3-41ec-874c-e622d24c912e,network=Network(69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea25832f-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:27:31 compute-0 neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c[389146]: [ALERT]    (389150) : Current worker (389152) exited with code 143 (Terminated)
Oct 14 09:27:31 compute-0 neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c[389146]: [WARNING]  (389150) : All workers exited. Exiting... (0)
Oct 14 09:27:31 compute-0 systemd[1]: libpod-a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1.scope: Deactivated successfully.
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.639 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea25832f-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:31 compute-0 podman[391860]: 2025-10-14 09:27:31.641309825 +0000 UTC m=+0.051701499 container died a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.650 2 INFO os_vif [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:45:3f,bridge_name='br-int',has_traffic_filtering=True,id=ea25832f-13d3-41ec-874c-e622d24c912e,network=Network(69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea25832f-13')
Oct 14 09:27:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1-userdata-shm.mount: Deactivated successfully.
Oct 14 09:27:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1a75670ad0d206a928ee627a44dd9949ee1ddfbb2472618a373916a2273c134-merged.mount: Deactivated successfully.
Oct 14 09:27:31 compute-0 podman[391860]: 2025-10-14 09:27:31.683110988 +0000 UTC m=+0.093502662 container cleanup a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.683 2 DEBUG nova.compute.manager [req-b12cc239-a0fc-49ee-be67-ea9898f798c5 req-d9d6779a-d869-44f3-8a8f-fc4b709dae35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received event network-vif-unplugged-ea25832f-13d3-41ec-874c-e622d24c912e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.683 2 DEBUG oslo_concurrency.lockutils [req-b12cc239-a0fc-49ee-be67-ea9898f798c5 req-d9d6779a-d869-44f3-8a8f-fc4b709dae35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4310595f-2280-438c-97ca-f2de57527501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.684 2 DEBUG oslo_concurrency.lockutils [req-b12cc239-a0fc-49ee-be67-ea9898f798c5 req-d9d6779a-d869-44f3-8a8f-fc4b709dae35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.684 2 DEBUG oslo_concurrency.lockutils [req-b12cc239-a0fc-49ee-be67-ea9898f798c5 req-d9d6779a-d869-44f3-8a8f-fc4b709dae35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.684 2 DEBUG nova.compute.manager [req-b12cc239-a0fc-49ee-be67-ea9898f798c5 req-d9d6779a-d869-44f3-8a8f-fc4b709dae35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] No waiting events found dispatching network-vif-unplugged-ea25832f-13d3-41ec-874c-e622d24c912e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.685 2 DEBUG nova.compute.manager [req-b12cc239-a0fc-49ee-be67-ea9898f798c5 req-d9d6779a-d869-44f3-8a8f-fc4b709dae35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received event network-vif-unplugged-ea25832f-13d3-41ec-874c-e622d24c912e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:27:31 compute-0 systemd[1]: libpod-conmon-a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1.scope: Deactivated successfully.
Oct 14 09:27:31 compute-0 podman[391913]: 2025-10-14 09:27:31.759913946 +0000 UTC m=+0.049165306 container remove a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:27:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.765 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1deb2ae4-3da8-4dbe-b8b9-4936af5a9cd4]: (4, ('Tue Oct 14 09:27:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c (a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1)\na44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1\nTue Oct 14 09:27:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c (a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1)\na44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.766 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[35467f2e-bf67-4604-9735-01bc6fcc7ded]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.767 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69d5e5f4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:31 compute-0 kernel: tap69d5e5f4-00: left promiscuous mode
Oct 14 09:27:31 compute-0 nova_compute[259627]: 2025-10-14 09:27:31.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.833 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fac34b18-5fbc-4d50-8925-96ee80ca5a54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.859 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd98993-148d-4e15-9440-4530bf5206f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.860 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5672b267-2544-4248-bc9b-db59cdb543ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.877 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef2db38-5f57-40f3-9fa9-45254a26b559]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774860, 'reachable_time': 38782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391933, 'error': None, 'target': 'ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.879 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:27:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.879 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f0689c-4e21-4f84-8643-a5a27a369901]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d69d5e5f4\x2d0ba9\x2d4874\x2d9f8a\x2dcd8b2484f91c.mount: Deactivated successfully.
Oct 14 09:27:31 compute-0 ceph-mon[74249]: pgmap v2231: 305 pgs: 305 active+clean; 275 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 2.2 MiB/s wr, 88 op/s
Oct 14 09:27:32 compute-0 nova_compute[259627]: 2025-10-14 09:27:32.127 2 INFO nova.virt.libvirt.driver [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Deleting instance files /var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501_del
Oct 14 09:27:32 compute-0 nova_compute[259627]: 2025-10-14 09:27:32.129 2 INFO nova.virt.libvirt.driver [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Deletion of /var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501_del complete
Oct 14 09:27:32 compute-0 nova_compute[259627]: 2025-10-14 09:27:32.212 2 INFO nova.compute.manager [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Took 0.85 seconds to destroy the instance on the hypervisor.
Oct 14 09:27:32 compute-0 nova_compute[259627]: 2025-10-14 09:27:32.214 2 DEBUG oslo.service.loopingcall [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:27:32 compute-0 nova_compute[259627]: 2025-10-14 09:27:32.214 2 DEBUG nova.compute.manager [-] [instance: 4310595f-2280-438c-97ca-f2de57527501] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:27:32 compute-0 nova_compute[259627]: 2025-10-14 09:27:32.215 2 DEBUG nova.network.neutron [-] [instance: 4310595f-2280-438c-97ca-f2de57527501] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:27:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Oct 14 09:27:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:27:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:27:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:27:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:27:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:27:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:27:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:27:32
Oct 14 09:27:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:27:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:27:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'backups']
Oct 14 09:27:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:27:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:27:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:27:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:27:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:27:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:27:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:27:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:27:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:27:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:27:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:27:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.255 2 DEBUG nova.network.neutron [-] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:27:33 compute-0 sudo[391935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:27:33 compute-0 sudo[391935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.286 2 INFO nova.compute.manager [-] [instance: 4310595f-2280-438c-97ca-f2de57527501] Took 1.07 seconds to deallocate network for instance.
Oct 14 09:27:33 compute-0 sudo[391935]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.295 2 DEBUG nova.network.neutron [req-102500e4-8ffa-47dd-90cb-63a8dd36a94b req-ac89bac5-7506-47ce-8a46-5becabf16946 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updated VIF entry in instance network info cache for port ea25832f-13d3-41ec-874c-e622d24c912e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.295 2 DEBUG nova.network.neutron [req-102500e4-8ffa-47dd-90cb-63a8dd36a94b req-ac89bac5-7506-47ce-8a46-5becabf16946 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updating instance_info_cache with network_info: [{"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.323 2 DEBUG oslo_concurrency.lockutils [req-102500e4-8ffa-47dd-90cb-63a8dd36a94b req-ac89bac5-7506-47ce-8a46-5becabf16946 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.334 2 DEBUG oslo_concurrency.lockutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.335 2 DEBUG oslo_concurrency.lockutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:33 compute-0 sudo[391960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:27:33 compute-0 sudo[391960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:33 compute-0 sudo[391960]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.404 2 INFO nova.compute.manager [None req-cf961240-1598-44ce-b6f5-a0243f2a7e4f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Get console output
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.410 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.428 2 DEBUG oslo_concurrency.processutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:33 compute-0 sudo[391985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:27:33 compute-0 sudo[391985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:33 compute-0 sudo[391985]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:33 compute-0 sudo[392011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 14 09:27:33 compute-0 sudo[392011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.762 2 DEBUG nova.compute.manager [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received event network-vif-plugged-ea25832f-13d3-41ec-874c-e622d24c912e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.763 2 DEBUG oslo_concurrency.lockutils [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4310595f-2280-438c-97ca-f2de57527501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.763 2 DEBUG oslo_concurrency.lockutils [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.764 2 DEBUG oslo_concurrency.lockutils [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.764 2 DEBUG nova.compute.manager [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] No waiting events found dispatching network-vif-plugged-ea25832f-13d3-41ec-874c-e622d24c912e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.765 2 WARNING nova.compute.manager [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received unexpected event network-vif-plugged-ea25832f-13d3-41ec-874c-e622d24c912e for instance with vm_state deleted and task_state None.
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.765 2 DEBUG nova.compute.manager [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received event network-vif-deleted-ea25832f-13d3-41ec-874c-e622d24c912e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.766 2 INFO nova.compute.manager [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Neutron deleted interface ea25832f-13d3-41ec-874c-e622d24c912e; detaching it from the instance and deleting it from the info cache
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.766 2 DEBUG nova.network.neutron [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.795 2 DEBUG nova.compute.manager [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Detach interface failed, port_id=ea25832f-13d3-41ec-874c-e622d24c912e, reason: Instance 4310595f-2280-438c-97ca-f2de57527501 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 09:27:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:27:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2604392729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.891 2 DEBUG oslo_concurrency.processutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:33 compute-0 ceph-mon[74249]: pgmap v2232: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Oct 14 09:27:33 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2604392729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.904 2 DEBUG nova.compute.provider_tree [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.925 2 DEBUG nova.scheduler.client.report [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.946 2 DEBUG oslo_concurrency.lockutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:33 compute-0 nova_compute[259627]: 2025-10-14 09:27:33.985 2 INFO nova.scheduler.client.report [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Deleted allocations for instance 4310595f-2280-438c-97ca-f2de57527501
Oct 14 09:27:34 compute-0 nova_compute[259627]: 2025-10-14 09:27:34.101 2 DEBUG oslo_concurrency.lockutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:34 compute-0 podman[392129]: 2025-10-14 09:27:34.132289446 +0000 UTC m=+0.070446851 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Oct 14 09:27:34 compute-0 podman[392129]: 2025-10-14 09:27:34.227500789 +0000 UTC m=+0.165658164 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 09:27:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 14 09:27:35 compute-0 nova_compute[259627]: 2025-10-14 09:27:35.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:35 compute-0 sudo[392011]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:27:35 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:27:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:27:35 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:27:35 compute-0 sudo[392288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:27:35 compute-0 sudo[392288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:35 compute-0 sudo[392288]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:35 compute-0 nova_compute[259627]: 2025-10-14 09:27:35.306 2 INFO nova.compute.manager [None req-a5f7e701-0f04-41de-bdbc-e1ac188343a9 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Get console output
Oct 14 09:27:35 compute-0 nova_compute[259627]: 2025-10-14 09:27:35.317 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:27:35 compute-0 sudo[392313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:27:35 compute-0 sudo[392313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:35 compute-0 sudo[392313]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:35 compute-0 sudo[392338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:27:35 compute-0 sudo[392338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:35 compute-0 sudo[392338]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:35 compute-0 sudo[392363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:27:35 compute-0 sudo[392363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:35 compute-0 nova_compute[259627]: 2025-10-14 09:27:35.854 2 DEBUG nova.compute.manager [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:35 compute-0 nova_compute[259627]: 2025-10-14 09:27:35.855 2 DEBUG nova.compute.manager [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing instance network info cache due to event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:27:35 compute-0 nova_compute[259627]: 2025-10-14 09:27:35.855 2 DEBUG oslo_concurrency.lockutils [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:27:35 compute-0 nova_compute[259627]: 2025-10-14 09:27:35.856 2 DEBUG oslo_concurrency.lockutils [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:27:35 compute-0 nova_compute[259627]: 2025-10-14 09:27:35.856 2 DEBUG nova.network.neutron [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:27:35 compute-0 ceph-mon[74249]: pgmap v2233: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 14 09:27:35 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:27:35 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:27:36 compute-0 sudo[392363]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:27:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:27:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:27:36 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:27:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:27:36 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:27:36 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev d886b599-0801-4e24-8ed8-08fe500c5f36 does not exist
Oct 14 09:27:36 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev f5e2cd36-cde0-4824-8d39-51004207e5fd does not exist
Oct 14 09:27:36 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev f9583b4d-d82a-47a8-95b8-af3872ae75d3 does not exist
Oct 14 09:27:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:27:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:27:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:27:36 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:27:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:27:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:27:36 compute-0 sudo[392419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:27:36 compute-0 sudo[392419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:36 compute-0 sudo[392419]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:36 compute-0 sudo[392444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:27:36 compute-0 sudo[392444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:36 compute-0 sudo[392444]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 200 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 377 KiB/s rd, 2.2 MiB/s wr, 122 op/s
Oct 14 09:27:36 compute-0 sudo[392469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:27:36 compute-0 sudo[392469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:36 compute-0 sudo[392469]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:36 compute-0 sudo[392494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:27:36 compute-0 sudo[392494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:36 compute-0 nova_compute[259627]: 2025-10-14 09:27:36.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:27:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:27:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:27:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:27:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:27:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:27:36 compute-0 podman[392559]: 2025-10-14 09:27:36.967624956 +0000 UTC m=+0.045733011 container create 347bb7bdcd81c3d2b019d997b79f5d0b7a3758a1139f713041445d6b5e787043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bhabha, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:27:37 compute-0 systemd[1]: Started libpod-conmon-347bb7bdcd81c3d2b019d997b79f5d0b7a3758a1139f713041445d6b5e787043.scope.
Oct 14 09:27:37 compute-0 podman[392559]: 2025-10-14 09:27:36.946555785 +0000 UTC m=+0.024663880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:27:37 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:27:37 compute-0 nova_compute[259627]: 2025-10-14 09:27:37.052 2 DEBUG nova.network.neutron [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updated VIF entry in instance network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:27:37 compute-0 nova_compute[259627]: 2025-10-14 09:27:37.053 2 DEBUG nova.network.neutron [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updating instance_info_cache with network_info: [{"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:27:37 compute-0 podman[392559]: 2025-10-14 09:27:37.068125369 +0000 UTC m=+0.146233444 container init 347bb7bdcd81c3d2b019d997b79f5d0b7a3758a1139f713041445d6b5e787043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bhabha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:27:37 compute-0 podman[392559]: 2025-10-14 09:27:37.075828779 +0000 UTC m=+0.153936844 container start 347bb7bdcd81c3d2b019d997b79f5d0b7a3758a1139f713041445d6b5e787043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bhabha, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 09:27:37 compute-0 nova_compute[259627]: 2025-10-14 09:27:37.080 2 DEBUG oslo_concurrency.lockutils [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:27:37 compute-0 nova_compute[259627]: 2025-10-14 09:27:37.080 2 DEBUG nova.compute.manager [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-unplugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:37 compute-0 nova_compute[259627]: 2025-10-14 09:27:37.081 2 DEBUG oslo_concurrency.lockutils [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:37 compute-0 nova_compute[259627]: 2025-10-14 09:27:37.081 2 DEBUG oslo_concurrency.lockutils [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:37 compute-0 nova_compute[259627]: 2025-10-14 09:27:37.081 2 DEBUG oslo_concurrency.lockutils [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:37 compute-0 nova_compute[259627]: 2025-10-14 09:27:37.082 2 DEBUG nova.compute.manager [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-unplugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:27:37 compute-0 nova_compute[259627]: 2025-10-14 09:27:37.082 2 WARNING nova.compute.manager [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-unplugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state active and task_state None.
Oct 14 09:27:37 compute-0 podman[392559]: 2025-10-14 09:27:37.082667248 +0000 UTC m=+0.160775303 container attach 347bb7bdcd81c3d2b019d997b79f5d0b7a3758a1139f713041445d6b5e787043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 09:27:37 compute-0 nova_compute[259627]: 2025-10-14 09:27:37.082 2 DEBUG nova.compute.manager [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:37 compute-0 nova_compute[259627]: 2025-10-14 09:27:37.083 2 DEBUG oslo_concurrency.lockutils [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:37 compute-0 determined_bhabha[392575]: 167 167
Oct 14 09:27:37 compute-0 nova_compute[259627]: 2025-10-14 09:27:37.083 2 DEBUG oslo_concurrency.lockutils [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:37 compute-0 nova_compute[259627]: 2025-10-14 09:27:37.083 2 DEBUG oslo_concurrency.lockutils [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:37 compute-0 nova_compute[259627]: 2025-10-14 09:27:37.083 2 DEBUG nova.compute.manager [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:27:37 compute-0 nova_compute[259627]: 2025-10-14 09:27:37.084 2 WARNING nova.compute.manager [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state active and task_state None.
Oct 14 09:27:37 compute-0 systemd[1]: libpod-347bb7bdcd81c3d2b019d997b79f5d0b7a3758a1139f713041445d6b5e787043.scope: Deactivated successfully.
Oct 14 09:27:37 compute-0 podman[392559]: 2025-10-14 09:27:37.08639742 +0000 UTC m=+0.164505475 container died 347bb7bdcd81c3d2b019d997b79f5d0b7a3758a1139f713041445d6b5e787043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bhabha, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 09:27:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b27d190ab211c3e88b9c157e0d0b7ac2d7db4853520a7c1b20007c92825bd45-merged.mount: Deactivated successfully.
Oct 14 09:27:37 compute-0 podman[392559]: 2025-10-14 09:27:37.126192214 +0000 UTC m=+0.204300279 container remove 347bb7bdcd81c3d2b019d997b79f5d0b7a3758a1139f713041445d6b5e787043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bhabha, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 09:27:37 compute-0 systemd[1]: libpod-conmon-347bb7bdcd81c3d2b019d997b79f5d0b7a3758a1139f713041445d6b5e787043.scope: Deactivated successfully.
Oct 14 09:27:37 compute-0 podman[392599]: 2025-10-14 09:27:37.323058498 +0000 UTC m=+0.053268537 container create 1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 14 09:27:37 compute-0 nova_compute[259627]: 2025-10-14 09:27:37.336 2 INFO nova.compute.manager [None req-2dd7f0e6-65da-4b7f-9955-570aa8348b34 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Get console output
Oct 14 09:27:37 compute-0 nova_compute[259627]: 2025-10-14 09:27:37.345 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:27:37 compute-0 systemd[1]: Started libpod-conmon-1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776.scope.
Oct 14 09:27:37 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb7812d25e6504f31de002e154e3fd3734725dc7b0eefc441f101f83090a489/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb7812d25e6504f31de002e154e3fd3734725dc7b0eefc441f101f83090a489/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb7812d25e6504f31de002e154e3fd3734725dc7b0eefc441f101f83090a489/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb7812d25e6504f31de002e154e3fd3734725dc7b0eefc441f101f83090a489/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb7812d25e6504f31de002e154e3fd3734725dc7b0eefc441f101f83090a489/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:27:37 compute-0 podman[392599]: 2025-10-14 09:27:37.30574048 +0000 UTC m=+0.035950549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:27:37 compute-0 podman[392599]: 2025-10-14 09:27:37.402949342 +0000 UTC m=+0.133159411 container init 1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 09:27:37 compute-0 podman[392599]: 2025-10-14 09:27:37.41258427 +0000 UTC m=+0.142794359 container start 1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 09:27:37 compute-0 podman[392599]: 2025-10-14 09:27:37.416429805 +0000 UTC m=+0.146639864 container attach 1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct 14 09:27:37 compute-0 ceph-mon[74249]: pgmap v2234: 305 pgs: 305 active+clean; 200 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 377 KiB/s rd, 2.2 MiB/s wr, 122 op/s
Oct 14 09:27:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:27:38 compute-0 nova_compute[259627]: 2025-10-14 09:27:38.158 2 DEBUG nova.compute.manager [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:38 compute-0 nova_compute[259627]: 2025-10-14 09:27:38.159 2 DEBUG nova.compute.manager [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing instance network info cache due to event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:27:38 compute-0 nova_compute[259627]: 2025-10-14 09:27:38.160 2 DEBUG oslo_concurrency.lockutils [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:27:38 compute-0 nova_compute[259627]: 2025-10-14 09:27:38.160 2 DEBUG oslo_concurrency.lockutils [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:27:38 compute-0 nova_compute[259627]: 2025-10-14 09:27:38.160 2 DEBUG nova.network.neutron [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:27:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 200 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 111 KiB/s wr, 51 op/s
Oct 14 09:27:38 compute-0 magical_rhodes[392616]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:27:38 compute-0 magical_rhodes[392616]: --> relative data size: 1.0
Oct 14 09:27:38 compute-0 magical_rhodes[392616]: --> All data devices are unavailable
Oct 14 09:27:38 compute-0 systemd[1]: libpod-1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776.scope: Deactivated successfully.
Oct 14 09:27:38 compute-0 podman[392599]: 2025-10-14 09:27:38.540073949 +0000 UTC m=+1.270284018 container died 1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:27:38 compute-0 systemd[1]: libpod-1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776.scope: Consumed 1.070s CPU time.
Oct 14 09:27:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bb7812d25e6504f31de002e154e3fd3734725dc7b0eefc441f101f83090a489-merged.mount: Deactivated successfully.
Oct 14 09:27:38 compute-0 podman[392599]: 2025-10-14 09:27:38.602140283 +0000 UTC m=+1.332350372 container remove 1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 09:27:38 compute-0 systemd[1]: libpod-conmon-1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776.scope: Deactivated successfully.
Oct 14 09:27:38 compute-0 sudo[392494]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:38 compute-0 ovn_controller[152662]: 2025-10-14T09:27:38Z|01376|binding|INFO|Releasing lport dabdfa0b-267a-4754-a026-601ab2593a32 from this chassis (sb_readonly=0)
Oct 14 09:27:38 compute-0 sudo[392658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:27:38 compute-0 sudo[392658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:38 compute-0 sudo[392658]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:38 compute-0 nova_compute[259627]: 2025-10-14 09:27:38.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:38 compute-0 sudo[392695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:27:38 compute-0 sudo[392695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:38 compute-0 sudo[392695]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:38 compute-0 nova_compute[259627]: 2025-10-14 09:27:38.882 2 DEBUG oslo_concurrency.lockutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "1b7f03a3-6b73-478f-bf13-cf062714faef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:38 compute-0 nova_compute[259627]: 2025-10-14 09:27:38.883 2 DEBUG oslo_concurrency.lockutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:38 compute-0 nova_compute[259627]: 2025-10-14 09:27:38.883 2 DEBUG oslo_concurrency.lockutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:38 compute-0 nova_compute[259627]: 2025-10-14 09:27:38.884 2 DEBUG oslo_concurrency.lockutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:38 compute-0 nova_compute[259627]: 2025-10-14 09:27:38.884 2 DEBUG oslo_concurrency.lockutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:38 compute-0 nova_compute[259627]: 2025-10-14 09:27:38.886 2 INFO nova.compute.manager [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Terminating instance
Oct 14 09:27:38 compute-0 nova_compute[259627]: 2025-10-14 09:27:38.888 2 DEBUG nova.compute.manager [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:27:38 compute-0 podman[392683]: 2025-10-14 09:27:38.894838346 +0000 UTC m=+0.101945201 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:27:38 compute-0 podman[392682]: 2025-10-14 09:27:38.895325008 +0000 UTC m=+0.102737090 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:27:38 compute-0 kernel: tapc84419ee-15 (unregistering): left promiscuous mode
Oct 14 09:27:38 compute-0 sudo[392746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:27:38 compute-0 NetworkManager[44885]: <info>  [1760434058.9527] device (tapc84419ee-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:27:38 compute-0 sudo[392746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:38 compute-0 sudo[392746]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:38 compute-0 ovn_controller[152662]: 2025-10-14T09:27:38Z|01377|binding|INFO|Releasing lport c84419ee-1585-485f-ae91-116f2123dadf from this chassis (sb_readonly=0)
Oct 14 09:27:38 compute-0 ovn_controller[152662]: 2025-10-14T09:27:38Z|01378|binding|INFO|Setting lport c84419ee-1585-485f-ae91-116f2123dadf down in Southbound
Oct 14 09:27:38 compute-0 ovn_controller[152662]: 2025-10-14T09:27:38Z|01379|binding|INFO|Removing iface tapc84419ee-15 ovn-installed in OVS
Oct 14 09:27:38 compute-0 nova_compute[259627]: 2025-10-14 09:27:38.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:38 compute-0 nova_compute[259627]: 2025-10-14 09:27:38.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:38.973 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:6d:b2 10.100.0.9'], port_security=['fa:16:3e:44:6d:b2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1b7f03a3-6b73-478f-bf13-cf062714faef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44344b65-f325-470a-bd36-6f52ed03d317', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': '471f6dc6-ea8e-4b1e-a678-6d128cef3d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5dceacdb-1a50-419e-9ee9-f149e7094b34, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c84419ee-1585-485f-ae91-116f2123dadf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:27:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:38.974 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c84419ee-1585-485f-ae91-116f2123dadf in datapath 44344b65-f325-470a-bd36-6f52ed03d317 unbound from our chassis
Oct 14 09:27:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:38.975 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 44344b65-f325-470a-bd36-6f52ed03d317
Oct 14 09:27:38 compute-0 nova_compute[259627]: 2025-10-14 09:27:38.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:27:38 compute-0 ceph-mon[74249]: pgmap v2235: 305 pgs: 305 active+clean; 200 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 111 KiB/s wr, 51 op/s
Oct 14 09:27:38 compute-0 nova_compute[259627]: 2025-10-14 09:27:38.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:38.995 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[933b21b0-1d98-4516-8abd-b5890fb8227d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:39 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000081.scope: Deactivated successfully.
Oct 14 09:27:39 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000081.scope: Consumed 13.312s CPU time.
Oct 14 09:27:39 compute-0 systemd-machined[214636]: Machine qemu-162-instance-00000081 terminated.
Oct 14 09:27:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.028 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[af7031e4-d026-4113-96db-d17119adb4fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.033 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[66165316-3137-449f-ae8e-ee5f6a6942bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:39 compute-0 sudo[392773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:27:39 compute-0 sudo[392773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.078 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4699eb97-945d-44c5-99fb-080f3536073f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.108 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ea63d5cf-e7b0-4573-ab6f-ff578a1c4f90]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44344b65-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:40:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779335, 'reachable_time': 36403, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392808, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.126 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d3b41153-630b-4d79-934e-c1b6f8bd28ac]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap44344b65-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 779351, 'tstamp': 779351}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392811, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap44344b65-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 779355, 'tstamp': 779355}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392811, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.128 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44344b65-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.160 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44344b65-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.161 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:27:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.161 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap44344b65-f0, col_values=(('external_ids', {'iface-id': 'dabdfa0b-267a-4754-a026-601ab2593a32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.161 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.165 2 INFO nova.virt.libvirt.driver [-] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Instance destroyed successfully.
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.166 2 DEBUG nova.objects.instance [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid 1b7f03a3-6b73-478f-bf13-cf062714faef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.183 2 DEBUG nova.virt.libvirt.vif [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:27:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2128241833',display_name='tempest-TestNetworkBasicOps-server-2128241833',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2128241833',id=129,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOLhDC9VoGp8WYPVj/2CKqBQNdjKW4OK8XHbingpjAJEpUTBDLqjgHyIpa9e7zDDaRmI0fbF5BkwR2QzH+89ULkia3qer+DRxqv2g2mzU6TZpkV7v9oBLNlvSUqf9rYag==',key_name='tempest-TestNetworkBasicOps-1059153159',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:27:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-3kh0vol0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:27:13Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=1b7f03a3-6b73-478f-bf13-cf062714faef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.184 2 DEBUG nova.network.os_vif_util [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.185 2 DEBUG nova.network.os_vif_util [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:44:6d:b2,bridge_name='br-int',has_traffic_filtering=True,id=c84419ee-1585-485f-ae91-116f2123dadf,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc84419ee-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.185 2 DEBUG os_vif [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:6d:b2,bridge_name='br-int',has_traffic_filtering=True,id=c84419ee-1585-485f-ae91-116f2123dadf,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc84419ee-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.187 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc84419ee-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.196 2 INFO os_vif [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:6d:b2,bridge_name='br-int',has_traffic_filtering=True,id=c84419ee-1585-485f-ae91-116f2123dadf,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc84419ee-15')
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.268 2 DEBUG nova.compute.manager [req-a7be8b37-3707-4dc5-80df-8b804ba0639d req-533d435e-27d5-4c44-990a-2cba1c23420d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received event network-vif-unplugged-c84419ee-1585-485f-ae91-116f2123dadf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.270 2 DEBUG oslo_concurrency.lockutils [req-a7be8b37-3707-4dc5-80df-8b804ba0639d req-533d435e-27d5-4c44-990a-2cba1c23420d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.270 2 DEBUG oslo_concurrency.lockutils [req-a7be8b37-3707-4dc5-80df-8b804ba0639d req-533d435e-27d5-4c44-990a-2cba1c23420d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.271 2 DEBUG oslo_concurrency.lockutils [req-a7be8b37-3707-4dc5-80df-8b804ba0639d req-533d435e-27d5-4c44-990a-2cba1c23420d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.271 2 DEBUG nova.compute.manager [req-a7be8b37-3707-4dc5-80df-8b804ba0639d req-533d435e-27d5-4c44-990a-2cba1c23420d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] No waiting events found dispatching network-vif-unplugged-c84419ee-1585-485f-ae91-116f2123dadf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.272 2 DEBUG nova.compute.manager [req-a7be8b37-3707-4dc5-80df-8b804ba0639d req-533d435e-27d5-4c44-990a-2cba1c23420d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received event network-vif-unplugged-c84419ee-1585-485f-ae91-116f2123dadf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:27:39 compute-0 podman[392879]: 2025-10-14 09:27:39.455667063 +0000 UTC m=+0.038910172 container create 73144eb9988ad90abac0df15b1dbdd219c959fe92b4361fd21e7d8362ed49fc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shtern, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:27:39 compute-0 systemd[1]: Started libpod-conmon-73144eb9988ad90abac0df15b1dbdd219c959fe92b4361fd21e7d8362ed49fc2.scope.
Oct 14 09:27:39 compute-0 podman[392879]: 2025-10-14 09:27:39.437125705 +0000 UTC m=+0.020368834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:27:39 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:27:39 compute-0 podman[392879]: 2025-10-14 09:27:39.575287049 +0000 UTC m=+0.158530248 container init 73144eb9988ad90abac0df15b1dbdd219c959fe92b4361fd21e7d8362ed49fc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shtern, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.575 2 INFO nova.virt.libvirt.driver [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Deleting instance files /var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef_del
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.576 2 INFO nova.virt.libvirt.driver [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Deletion of /var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef_del complete
Oct 14 09:27:39 compute-0 podman[392879]: 2025-10-14 09:27:39.581938763 +0000 UTC m=+0.165181872 container start 73144eb9988ad90abac0df15b1dbdd219c959fe92b4361fd21e7d8362ed49fc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shtern, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 09:27:39 compute-0 podman[392879]: 2025-10-14 09:27:39.585154993 +0000 UTC m=+0.168398152 container attach 73144eb9988ad90abac0df15b1dbdd219c959fe92b4361fd21e7d8362ed49fc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shtern, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 09:27:39 compute-0 ecstatic_shtern[392896]: 167 167
Oct 14 09:27:39 compute-0 systemd[1]: libpod-73144eb9988ad90abac0df15b1dbdd219c959fe92b4361fd21e7d8362ed49fc2.scope: Deactivated successfully.
Oct 14 09:27:39 compute-0 podman[392879]: 2025-10-14 09:27:39.589789618 +0000 UTC m=+0.173032767 container died 73144eb9988ad90abac0df15b1dbdd219c959fe92b4361fd21e7d8362ed49fc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shtern, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.620 2 INFO nova.compute.manager [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 14 09:27:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-fba113757ec8e24d2a1b24f02ba1aafe218f6586fad5fa9ef9d586ca3cd32a59-merged.mount: Deactivated successfully.
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.621 2 DEBUG oslo.service.loopingcall [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.622 2 DEBUG nova.compute.manager [-] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.622 2 DEBUG nova.network.neutron [-] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.630 2 DEBUG nova.network.neutron [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updated VIF entry in instance network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.631 2 DEBUG nova.network.neutron [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updating instance_info_cache with network_info: [{"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:27:39 compute-0 podman[392879]: 2025-10-14 09:27:39.635975659 +0000 UTC m=+0.219218768 container remove 73144eb9988ad90abac0df15b1dbdd219c959fe92b4361fd21e7d8362ed49fc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shtern, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.651 2 DEBUG oslo_concurrency.lockutils [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.652 2 DEBUG nova.compute.manager [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.652 2 DEBUG oslo_concurrency.lockutils [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:39 compute-0 systemd[1]: libpod-conmon-73144eb9988ad90abac0df15b1dbdd219c959fe92b4361fd21e7d8362ed49fc2.scope: Deactivated successfully.
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.653 2 DEBUG oslo_concurrency.lockutils [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.654 2 DEBUG oslo_concurrency.lockutils [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.654 2 DEBUG nova.compute.manager [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.654 2 WARNING nova.compute.manager [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state active and task_state None.
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.655 2 DEBUG nova.compute.manager [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.655 2 DEBUG oslo_concurrency.lockutils [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.656 2 DEBUG oslo_concurrency.lockutils [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.656 2 DEBUG oslo_concurrency.lockutils [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.656 2 DEBUG nova.compute.manager [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.657 2 WARNING nova.compute.manager [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state active and task_state None.
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.678 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434044.6773894, e16af982-3cd8-4600-99c4-aeec45986dda => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.678 2 INFO nova.compute.manager [-] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] VM Stopped (Lifecycle Event)
Oct 14 09:27:39 compute-0 nova_compute[259627]: 2025-10-14 09:27:39.698 2 DEBUG nova.compute.manager [None req-e3d240a9-4f3d-4bb0-8947-f47bfb8cabe2 - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:27:39 compute-0 podman[392922]: 2025-10-14 09:27:39.894033565 +0000 UTC m=+0.070498043 container create 6ec5547495ec59683d7e8f020258d642a7a2395527d9a987c780f5f943fa969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kowalevski, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 09:27:39 compute-0 systemd[1]: Started libpod-conmon-6ec5547495ec59683d7e8f020258d642a7a2395527d9a987c780f5f943fa969e.scope.
Oct 14 09:27:39 compute-0 podman[392922]: 2025-10-14 09:27:39.8622681 +0000 UTC m=+0.038732648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:27:39 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e652c71bd08c806139465e8a7b7964056568a8f965511161a23932c2b75049/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e652c71bd08c806139465e8a7b7964056568a8f965511161a23932c2b75049/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e652c71bd08c806139465e8a7b7964056568a8f965511161a23932c2b75049/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e652c71bd08c806139465e8a7b7964056568a8f965511161a23932c2b75049/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:27:40 compute-0 nova_compute[259627]: 2025-10-14 09:27:40.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:40 compute-0 podman[392922]: 2025-10-14 09:27:40.010347029 +0000 UTC m=+0.186811547 container init 6ec5547495ec59683d7e8f020258d642a7a2395527d9a987c780f5f943fa969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:27:40 compute-0 podman[392922]: 2025-10-14 09:27:40.023525115 +0000 UTC m=+0.199989623 container start 6ec5547495ec59683d7e8f020258d642a7a2395527d9a987c780f5f943fa969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 09:27:40 compute-0 podman[392922]: 2025-10-14 09:27:40.031196315 +0000 UTC m=+0.207660823 container attach 6ec5547495ec59683d7e8f020258d642a7a2395527d9a987c780f5f943fa969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:27:40 compute-0 nova_compute[259627]: 2025-10-14 09:27:40.276 2 DEBUG nova.network.neutron [-] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:27:40 compute-0 nova_compute[259627]: 2025-10-14 09:27:40.285 2 DEBUG nova.compute.manager [req-6e431e43-f67c-4f25-bebf-be316f537ca9 req-f3b2f5f0-2349-45ac-92f0-4ebe4348d668 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received event network-changed-c84419ee-1585-485f-ae91-116f2123dadf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:40 compute-0 nova_compute[259627]: 2025-10-14 09:27:40.285 2 DEBUG nova.compute.manager [req-6e431e43-f67c-4f25-bebf-be316f537ca9 req-f3b2f5f0-2349-45ac-92f0-4ebe4348d668 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Refreshing instance network info cache due to event network-changed-c84419ee-1585-485f-ae91-116f2123dadf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:27:40 compute-0 nova_compute[259627]: 2025-10-14 09:27:40.286 2 DEBUG oslo_concurrency.lockutils [req-6e431e43-f67c-4f25-bebf-be316f537ca9 req-f3b2f5f0-2349-45ac-92f0-4ebe4348d668 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:27:40 compute-0 nova_compute[259627]: 2025-10-14 09:27:40.286 2 DEBUG oslo_concurrency.lockutils [req-6e431e43-f67c-4f25-bebf-be316f537ca9 req-f3b2f5f0-2349-45ac-92f0-4ebe4348d668 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:27:40 compute-0 nova_compute[259627]: 2025-10-14 09:27:40.286 2 DEBUG nova.network.neutron [req-6e431e43-f67c-4f25-bebf-be316f537ca9 req-f3b2f5f0-2349-45ac-92f0-4ebe4348d668 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Refreshing network info cache for port c84419ee-1585-485f-ae91-116f2123dadf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:27:40 compute-0 nova_compute[259627]: 2025-10-14 09:27:40.302 2 INFO nova.compute.manager [-] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Took 0.68 seconds to deallocate network for instance.
Oct 14 09:27:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 153 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 128 KiB/s rd, 114 KiB/s wr, 65 op/s
Oct 14 09:27:40 compute-0 nova_compute[259627]: 2025-10-14 09:27:40.356 2 DEBUG oslo_concurrency.lockutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:40 compute-0 nova_compute[259627]: 2025-10-14 09:27:40.357 2 DEBUG oslo_concurrency.lockutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.419916) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434060419976, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2069, "num_deletes": 251, "total_data_size": 3397341, "memory_usage": 3446584, "flush_reason": "Manual Compaction"}
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434060442702, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3308248, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45171, "largest_seqno": 47239, "table_properties": {"data_size": 3298863, "index_size": 5879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19236, "raw_average_key_size": 20, "raw_value_size": 3280173, "raw_average_value_size": 3445, "num_data_blocks": 261, "num_entries": 952, "num_filter_entries": 952, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760433847, "oldest_key_time": 1760433847, "file_creation_time": 1760434060, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 22860 microseconds, and 15089 cpu microseconds.
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:27:40 compute-0 nova_compute[259627]: 2025-10-14 09:27:40.445 2 DEBUG oslo_concurrency.processutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.442772) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3308248 bytes OK
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.442800) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.449425) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.449469) EVENT_LOG_v1 {"time_micros": 1760434060449459, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.449493) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3388631, prev total WAL file size 3388631, number of live WAL files 2.
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.450659) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3230KB)], [104(8463KB)]
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434060450690, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 11975056, "oldest_snapshot_seqno": -1}
Oct 14 09:27:40 compute-0 nova_compute[259627]: 2025-10-14 09:27:40.489 2 DEBUG nova.network.neutron [req-6e431e43-f67c-4f25-bebf-be316f537ca9 req-f3b2f5f0-2349-45ac-92f0-4ebe4348d668 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7045 keys, 10271158 bytes, temperature: kUnknown
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434060500040, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10271158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10222651, "index_size": 29771, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17669, "raw_key_size": 181672, "raw_average_key_size": 25, "raw_value_size": 10095187, "raw_average_value_size": 1432, "num_data_blocks": 1172, "num_entries": 7045, "num_filter_entries": 7045, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434060, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.500264) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10271158 bytes
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.501639) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 242.3 rd, 207.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.3 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7559, records dropped: 514 output_compression: NoCompression
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.501663) EVENT_LOG_v1 {"time_micros": 1760434060501649, "job": 62, "event": "compaction_finished", "compaction_time_micros": 49414, "compaction_time_cpu_micros": 24755, "output_level": 6, "num_output_files": 1, "total_output_size": 10271158, "num_input_records": 7559, "num_output_records": 7045, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434060502378, "job": 62, "event": "table_file_deletion", "file_number": 106}
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434060504004, "job": 62, "event": "table_file_deletion", "file_number": 104}
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.450589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.504110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.504114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.504116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.504118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:27:40 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.504120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]: {
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:     "0": [
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:         {
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "devices": [
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "/dev/loop3"
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             ],
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "lv_name": "ceph_lv0",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "lv_size": "21470642176",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "name": "ceph_lv0",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "tags": {
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.cluster_name": "ceph",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.crush_device_class": "",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.encrypted": "0",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.osd_id": "0",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.type": "block",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.vdo": "0"
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             },
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "type": "block",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "vg_name": "ceph_vg0"
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:         }
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:     ],
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:     "1": [
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:         {
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "devices": [
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "/dev/loop4"
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             ],
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "lv_name": "ceph_lv1",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "lv_size": "21470642176",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "name": "ceph_lv1",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "tags": {
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.cluster_name": "ceph",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.crush_device_class": "",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.encrypted": "0",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.osd_id": "1",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.type": "block",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.vdo": "0"
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             },
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "type": "block",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "vg_name": "ceph_vg1"
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:         }
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:     ],
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:     "2": [
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:         {
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "devices": [
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "/dev/loop5"
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             ],
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "lv_name": "ceph_lv2",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "lv_size": "21470642176",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "name": "ceph_lv2",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "tags": {
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.cluster_name": "ceph",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.crush_device_class": "",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.encrypted": "0",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.osd_id": "2",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.type": "block",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:                 "ceph.vdo": "0"
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             },
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "type": "block",
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:             "vg_name": "ceph_vg2"
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:         }
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]:     ]
Oct 14 09:27:40 compute-0 compassionate_kowalevski[392938]: }
Oct 14 09:27:40 compute-0 systemd[1]: libpod-6ec5547495ec59683d7e8f020258d642a7a2395527d9a987c780f5f943fa969e.scope: Deactivated successfully.
Oct 14 09:27:40 compute-0 podman[392922]: 2025-10-14 09:27:40.811135587 +0000 UTC m=+0.987600055 container died 6ec5547495ec59683d7e8f020258d642a7a2395527d9a987c780f5f943fa969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kowalevski, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 09:27:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-88e652c71bd08c806139465e8a7b7964056568a8f965511161a23932c2b75049-merged.mount: Deactivated successfully.
Oct 14 09:27:40 compute-0 podman[392922]: 2025-10-14 09:27:40.86388023 +0000 UTC m=+1.040344688 container remove 6ec5547495ec59683d7e8f020258d642a7a2395527d9a987c780f5f943fa969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 09:27:40 compute-0 systemd[1]: libpod-conmon-6ec5547495ec59683d7e8f020258d642a7a2395527d9a987c780f5f943fa969e.scope: Deactivated successfully.
Oct 14 09:27:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:27:40 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1960186832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:27:40 compute-0 sudo[392773]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:40 compute-0 nova_compute[259627]: 2025-10-14 09:27:40.947 2 DEBUG oslo_concurrency.processutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:40 compute-0 nova_compute[259627]: 2025-10-14 09:27:40.954 2 DEBUG nova.compute.provider_tree [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:27:40 compute-0 nova_compute[259627]: 2025-10-14 09:27:40.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:27:40 compute-0 nova_compute[259627]: 2025-10-14 09:27:40.978 2 DEBUG nova.scheduler.client.report [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:27:40 compute-0 sudo[392982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:27:41 compute-0 sudo[392982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:41 compute-0 sudo[392982]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.019 2 DEBUG oslo_concurrency.lockutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.021 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.021 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.021 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.021 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:41 compute-0 sudo[393007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.070 2 DEBUG nova.network.neutron [req-6e431e43-f67c-4f25-bebf-be316f537ca9 req-f3b2f5f0-2349-45ac-92f0-4ebe4348d668 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:27:41 compute-0 sudo[393007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:41 compute-0 sudo[393007]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.095 2 DEBUG oslo_concurrency.lockutils [req-6e431e43-f67c-4f25-bebf-be316f537ca9 req-f3b2f5f0-2349-45ac-92f0-4ebe4348d668 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.109 2 INFO nova.scheduler.client.report [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance 1b7f03a3-6b73-478f-bf13-cf062714faef
Oct 14 09:27:41 compute-0 sudo[393033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:27:41 compute-0 sudo[393033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:41 compute-0 sudo[393033]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.181 2 DEBUG oslo_concurrency.lockutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:41 compute-0 sudo[393077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:27:41 compute-0 sudo[393077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.385 2 DEBUG nova.compute.manager [req-d15b6de8-7ce3-49c2-9d4b-c1072fd4b5c5 req-458691f6-15aa-454f-9709-d5e36d3741e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received event network-vif-plugged-c84419ee-1585-485f-ae91-116f2123dadf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.387 2 DEBUG oslo_concurrency.lockutils [req-d15b6de8-7ce3-49c2-9d4b-c1072fd4b5c5 req-458691f6-15aa-454f-9709-d5e36d3741e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.387 2 DEBUG oslo_concurrency.lockutils [req-d15b6de8-7ce3-49c2-9d4b-c1072fd4b5c5 req-458691f6-15aa-454f-9709-d5e36d3741e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.388 2 DEBUG oslo_concurrency.lockutils [req-d15b6de8-7ce3-49c2-9d4b-c1072fd4b5c5 req-458691f6-15aa-454f-9709-d5e36d3741e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.388 2 DEBUG nova.compute.manager [req-d15b6de8-7ce3-49c2-9d4b-c1072fd4b5c5 req-458691f6-15aa-454f-9709-d5e36d3741e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] No waiting events found dispatching network-vif-plugged-c84419ee-1585-485f-ae91-116f2123dadf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.389 2 WARNING nova.compute.manager [req-d15b6de8-7ce3-49c2-9d4b-c1072fd4b5c5 req-458691f6-15aa-454f-9709-d5e36d3741e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received unexpected event network-vif-plugged-c84419ee-1585-485f-ae91-116f2123dadf for instance with vm_state deleted and task_state None.
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.390 2 DEBUG nova.compute.manager [req-d15b6de8-7ce3-49c2-9d4b-c1072fd4b5c5 req-458691f6-15aa-454f-9709-d5e36d3741e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received event network-vif-deleted-c84419ee-1585-485f-ae91-116f2123dadf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:41 compute-0 ceph-mon[74249]: pgmap v2236: 305 pgs: 305 active+clean; 153 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 128 KiB/s rd, 114 KiB/s wr, 65 op/s
Oct 14 09:27:41 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1960186832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:27:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:27:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4248073203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.488 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.581 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.583 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:27:41 compute-0 podman[393144]: 2025-10-14 09:27:41.597468576 +0000 UTC m=+0.052669313 container create aba068cbaa4aa018842f03b45b6736a9a41678749219e2ad56a3a7eb5a8f2a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mendel, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:27:41 compute-0 systemd[1]: Started libpod-conmon-aba068cbaa4aa018842f03b45b6736a9a41678749219e2ad56a3a7eb5a8f2a66.scope.
Oct 14 09:27:41 compute-0 podman[393144]: 2025-10-14 09:27:41.569166416 +0000 UTC m=+0.024367233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:27:41 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:27:41 compute-0 podman[393144]: 2025-10-14 09:27:41.689990022 +0000 UTC m=+0.145190759 container init aba068cbaa4aa018842f03b45b6736a9a41678749219e2ad56a3a7eb5a8f2a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mendel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:27:41 compute-0 podman[393144]: 2025-10-14 09:27:41.696976884 +0000 UTC m=+0.152177621 container start aba068cbaa4aa018842f03b45b6736a9a41678749219e2ad56a3a7eb5a8f2a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mendel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 09:27:41 compute-0 podman[393144]: 2025-10-14 09:27:41.700178664 +0000 UTC m=+0.155379401 container attach aba068cbaa4aa018842f03b45b6736a9a41678749219e2ad56a3a7eb5a8f2a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mendel, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 09:27:41 compute-0 affectionate_mendel[393160]: 167 167
Oct 14 09:27:41 compute-0 systemd[1]: libpod-aba068cbaa4aa018842f03b45b6736a9a41678749219e2ad56a3a7eb5a8f2a66.scope: Deactivated successfully.
Oct 14 09:27:41 compute-0 podman[393144]: 2025-10-14 09:27:41.704784427 +0000 UTC m=+0.159985174 container died aba068cbaa4aa018842f03b45b6736a9a41678749219e2ad56a3a7eb5a8f2a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 09:27:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b883645ed3837434afe8c6401a9c340c11bf4b1bc8e68343dc89c12d82ef9703-merged.mount: Deactivated successfully.
Oct 14 09:27:41 compute-0 podman[393144]: 2025-10-14 09:27:41.752140688 +0000 UTC m=+0.207341435 container remove aba068cbaa4aa018842f03b45b6736a9a41678749219e2ad56a3a7eb5a8f2a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mendel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 09:27:41 compute-0 systemd[1]: libpod-conmon-aba068cbaa4aa018842f03b45b6736a9a41678749219e2ad56a3a7eb5a8f2a66.scope: Deactivated successfully.
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.801 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.802 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3402MB free_disk=59.92548370361328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.803 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.803 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.894 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 2595dec0-9170-4e8f-a6bc-9179d30519a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.895 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.895 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:27:41 compute-0 nova_compute[259627]: 2025-10-14 09:27:41.958 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:41 compute-0 podman[393183]: 2025-10-14 09:27:41.960563258 +0000 UTC m=+0.058945148 container create ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 09:27:42 compute-0 systemd[1]: Started libpod-conmon-ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559.scope.
Oct 14 09:27:42 compute-0 podman[393183]: 2025-10-14 09:27:41.93071226 +0000 UTC m=+0.029094220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:27:42 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:27:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c80e666ad34df73414557d76c856e55dda9f17cbea23ef027c3049e084e208/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:27:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c80e666ad34df73414557d76c856e55dda9f17cbea23ef027c3049e084e208/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:27:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c80e666ad34df73414557d76c856e55dda9f17cbea23ef027c3049e084e208/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:27:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c80e666ad34df73414557d76c856e55dda9f17cbea23ef027c3049e084e208/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:27:42 compute-0 podman[393183]: 2025-10-14 09:27:42.074228326 +0000 UTC m=+0.172610266 container init ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:27:42 compute-0 podman[393183]: 2025-10-14 09:27:42.090137029 +0000 UTC m=+0.188518939 container start ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:27:42 compute-0 podman[393183]: 2025-10-14 09:27:42.094279752 +0000 UTC m=+0.192661632 container attach ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:27:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 121 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 29 KiB/s wr, 64 op/s
Oct 14 09:27:42 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4248073203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:27:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:27:42 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1042198546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:27:42 compute-0 nova_compute[259627]: 2025-10-14 09:27:42.456 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:42 compute-0 nova_compute[259627]: 2025-10-14 09:27:42.467 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:27:42 compute-0 nova_compute[259627]: 2025-10-14 09:27:42.488 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:27:42 compute-0 nova_compute[259627]: 2025-10-14 09:27:42.511 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:27:42 compute-0 nova_compute[259627]: 2025-10-14 09:27:42.512 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]: {
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:         "osd_id": 2,
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:         "type": "bluestore"
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:     },
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:         "osd_id": 1,
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:         "type": "bluestore"
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:     },
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:         "osd_id": 0,
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:         "type": "bluestore"
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]:     }
Oct 14 09:27:43 compute-0 hardcore_zhukovsky[393200]: }
Oct 14 09:27:43 compute-0 systemd[1]: libpod-ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559.scope: Deactivated successfully.
Oct 14 09:27:43 compute-0 systemd[1]: libpod-ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559.scope: Consumed 1.076s CPU time.
Oct 14 09:27:43 compute-0 podman[393183]: 2025-10-14 09:27:43.165089951 +0000 UTC m=+1.263471911 container died ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_zhukovsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.201 2 DEBUG oslo_concurrency.lockutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3c80e666ad34df73414557d76c856e55dda9f17cbea23ef027c3049e084e208-merged.mount: Deactivated successfully.
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.202 2 DEBUG oslo_concurrency.lockutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.203 2 DEBUG oslo_concurrency.lockutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.205 2 DEBUG oslo_concurrency.lockutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.205 2 DEBUG oslo_concurrency.lockutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.208 2 INFO nova.compute.manager [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Terminating instance
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.210 2 DEBUG nova.compute.manager [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:27:43 compute-0 podman[393183]: 2025-10-14 09:27:43.256168022 +0000 UTC m=+1.354549932 container remove ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 09:27:43 compute-0 systemd[1]: libpod-conmon-ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559.scope: Deactivated successfully.
Oct 14 09:27:43 compute-0 kernel: tap9ecc8f01-43 (unregistering): left promiscuous mode
Oct 14 09:27:43 compute-0 NetworkManager[44885]: <info>  [1760434063.2881] device (tap9ecc8f01-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:27:43 compute-0 ovn_controller[152662]: 2025-10-14T09:27:43Z|01380|binding|INFO|Releasing lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c from this chassis (sb_readonly=0)
Oct 14 09:27:43 compute-0 ovn_controller[152662]: 2025-10-14T09:27:43Z|01381|binding|INFO|Setting lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c down in Southbound
Oct 14 09:27:43 compute-0 ovn_controller[152662]: 2025-10-14T09:27:43Z|01382|binding|INFO|Removing iface tap9ecc8f01-43 ovn-installed in OVS
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.311 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:8f:64 10.100.0.3'], port_security=['fa:16:3e:50:8f:64 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2595dec0-9170-4e8f-a6bc-9179d30519a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44344b65-f325-470a-bd36-6f52ed03d317', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8e192907-665a-4f92-bc1f-6ecbfbe8292b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5dceacdb-1a50-419e-9ee9-f149e7094b34, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=9ecc8f01-430d-4714-8d7e-e60d7edaa73c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.312 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c in datapath 44344b65-f325-470a-bd36-6f52ed03d317 unbound from our chassis
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.313 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44344b65-f325-470a-bd36-6f52ed03d317, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.316 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5e1d7206-0e96-4207-920d-04893e9c9da6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.316 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317 namespace which is not needed anymore
Oct 14 09:27:43 compute-0 sudo[393077]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:27:43 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:27:43 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:27:43 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Oct 14 09:27:43 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d0000007f.scope: Consumed 14.215s CPU time.
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 7e22ddb7-6ab6-443f-b8c2-563acfe6bf28 does not exist
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 761063e5-1bff-4b92-880f-755ac040b640 does not exist
Oct 14 09:27:43 compute-0 systemd-machined[214636]: Machine qemu-160-instance-0000007f terminated.
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007600997305788891 of space, bias 1.0, pg target 0.22802991917366675 quantized to 32 (current 32)
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:27:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:27:43 compute-0 kernel: tap9ecc8f01-43: entered promiscuous mode
Oct 14 09:27:43 compute-0 NetworkManager[44885]: <info>  [1760434063.4350] manager: (tap9ecc8f01-43): new Tun device (/org/freedesktop/NetworkManager/Devices/563)
Oct 14 09:27:43 compute-0 kernel: tap9ecc8f01-43 (unregistering): left promiscuous mode
Oct 14 09:27:43 compute-0 ceph-mon[74249]: pgmap v2237: 305 pgs: 305 active+clean; 121 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 29 KiB/s wr, 64 op/s
Oct 14 09:27:43 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1042198546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:27:43 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:27:43 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:43 compute-0 ovn_controller[152662]: 2025-10-14T09:27:43Z|01383|binding|INFO|Claiming lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c for this chassis.
Oct 14 09:27:43 compute-0 ovn_controller[152662]: 2025-10-14T09:27:43Z|01384|binding|INFO|9ecc8f01-430d-4714-8d7e-e60d7edaa73c: Claiming fa:16:3e:50:8f:64 10.100.0.3
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.455 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:8f:64 10.100.0.3'], port_security=['fa:16:3e:50:8f:64 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2595dec0-9170-4e8f-a6bc-9179d30519a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44344b65-f325-470a-bd36-6f52ed03d317', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8e192907-665a-4f92-bc1f-6ecbfbe8292b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5dceacdb-1a50-419e-9ee9-f149e7094b34, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=9ecc8f01-430d-4714-8d7e-e60d7edaa73c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:27:43 compute-0 sudo[393276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:27:43 compute-0 sudo[393276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:43 compute-0 sudo[393276]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.475 2 INFO nova.virt.libvirt.driver [-] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Instance destroyed successfully.
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.477 2 DEBUG nova.objects.instance [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid 2595dec0-9170-4e8f-a6bc-9179d30519a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:27:43 compute-0 ovn_controller[152662]: 2025-10-14T09:27:43Z|01385|binding|INFO|Setting lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c ovn-installed in OVS
Oct 14 09:27:43 compute-0 ovn_controller[152662]: 2025-10-14T09:27:43Z|01386|binding|INFO|Setting lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c up in Southbound
Oct 14 09:27:43 compute-0 ovn_controller[152662]: 2025-10-14T09:27:43Z|01387|binding|INFO|Releasing lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c from this chassis (sb_readonly=1)
Oct 14 09:27:43 compute-0 ovn_controller[152662]: 2025-10-14T09:27:43Z|01388|if_status|INFO|Dropped 2 log messages in last 633 seconds (most recently, 633 seconds ago) due to excessive rate
Oct 14 09:27:43 compute-0 ovn_controller[152662]: 2025-10-14T09:27:43Z|01389|if_status|INFO|Not setting lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c down as sb is readonly
Oct 14 09:27:43 compute-0 ovn_controller[152662]: 2025-10-14T09:27:43Z|01390|binding|INFO|Removing iface tap9ecc8f01-43 ovn-installed in OVS
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:43 compute-0 ovn_controller[152662]: 2025-10-14T09:27:43Z|01391|binding|INFO|Releasing lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c from this chassis (sb_readonly=0)
Oct 14 09:27:43 compute-0 ovn_controller[152662]: 2025-10-14T09:27:43Z|01392|binding|INFO|Setting lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c down in Southbound
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.508 2 DEBUG nova.virt.libvirt.vif [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:26:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-358564178',display_name='tempest-TestNetworkBasicOps-server-358564178',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-358564178',id=127,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHcVlyHKVFHxb0hriNyI1hppvpwNJ/aTRlLE7dBDeajB0uM5sP+bnasOk+ko2DL77CLK3QbWVr/+3RKN6o4h1D1BJ0FS9znP9UgUkNA33oyzkv3sPnYQc7bgh/xOganUMg==',key_name='tempest-TestNetworkBasicOps-1632567993',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:26:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-nbahr6ql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:26:56Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=2595dec0-9170-4e8f-a6bc-9179d30519a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.508 2 DEBUG nova.network.os_vif_util [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.509 2 DEBUG nova.network.os_vif_util [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:50:8f:64,bridge_name='br-int',has_traffic_filtering=True,id=9ecc8f01-430d-4714-8d7e-e60d7edaa73c,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc8f01-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.509 2 DEBUG os_vif [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:8f:64,bridge_name='br-int',has_traffic_filtering=True,id=9ecc8f01-430d-4714-8d7e-e60d7edaa73c,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc8f01-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.511 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ecc8f01-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.513 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.517 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.520 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:8f:64 10.100.0.3'], port_security=['fa:16:3e:50:8f:64 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2595dec0-9170-4e8f-a6bc-9179d30519a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44344b65-f325-470a-bd36-6f52ed03d317', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8e192907-665a-4f92-bc1f-6ecbfbe8292b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5dceacdb-1a50-419e-9ee9-f149e7094b34, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=9ecc8f01-430d-4714-8d7e-e60d7edaa73c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.520 2 DEBUG nova.compute.manager [req-247849e6-5bd4-453d-814a-89befe5ca099 req-f28b0ea9-29b9-4b87-b077-38c587d705bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.521 2 DEBUG nova.compute.manager [req-247849e6-5bd4-453d-814a-89befe5ca099 req-f28b0ea9-29b9-4b87-b077-38c587d705bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing instance network info cache due to event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.521 2 DEBUG oslo_concurrency.lockutils [req-247849e6-5bd4-453d-814a-89befe5ca099 req-f28b0ea9-29b9-4b87-b077-38c587d705bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.521 2 DEBUG oslo_concurrency.lockutils [req-247849e6-5bd4-453d-814a-89befe5ca099 req-f28b0ea9-29b9-4b87-b077-38c587d705bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.521 2 DEBUG nova.network.neutron [req-247849e6-5bd4-453d-814a-89befe5ca099 req-f28b0ea9-29b9-4b87-b077-38c587d705bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.525 2 INFO os_vif [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:8f:64,bridge_name='br-int',has_traffic_filtering=True,id=9ecc8f01-430d-4714-8d7e-e60d7edaa73c,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc8f01-43')
Oct 14 09:27:43 compute-0 neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317[390967]: [NOTICE]   (390972) : haproxy version is 2.8.14-c23fe91
Oct 14 09:27:43 compute-0 neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317[390967]: [NOTICE]   (390972) : path to executable is /usr/sbin/haproxy
Oct 14 09:27:43 compute-0 neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317[390967]: [WARNING]  (390972) : Exiting Master process...
Oct 14 09:27:43 compute-0 neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317[390967]: [WARNING]  (390972) : Exiting Master process...
Oct 14 09:27:43 compute-0 neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317[390967]: [ALERT]    (390972) : Current worker (390974) exited with code 143 (Terminated)
Oct 14 09:27:43 compute-0 neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317[390967]: [WARNING]  (390972) : All workers exited. Exiting... (0)
Oct 14 09:27:43 compute-0 systemd[1]: libpod-5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03.scope: Deactivated successfully.
Oct 14 09:27:43 compute-0 podman[393315]: 2025-10-14 09:27:43.537525184 +0000 UTC m=+0.066886604 container died 5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:27:43 compute-0 sudo[393328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:27:43 compute-0 sudo[393328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:27:43 compute-0 sudo[393328]: pam_unix(sudo:session): session closed for user root
Oct 14 09:27:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03-userdata-shm.mount: Deactivated successfully.
Oct 14 09:27:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d2b1df47e436b2c8cb55d0e32cd763db11840ac3a9a7beaf43710acbfe9e885-merged.mount: Deactivated successfully.
Oct 14 09:27:43 compute-0 podman[393315]: 2025-10-14 09:27:43.58836055 +0000 UTC m=+0.117721960 container cleanup 5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:27:43 compute-0 systemd[1]: libpod-conmon-5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03.scope: Deactivated successfully.
Oct 14 09:27:43 compute-0 podman[393395]: 2025-10-14 09:27:43.66566894 +0000 UTC m=+0.049989076 container remove 5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.672 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bdc5a587-4037-4cd5-8de7-eb3cfc015bf1]: (4, ('Tue Oct 14 09:27:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317 (5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03)\n5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03\nTue Oct 14 09:27:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317 (5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03)\n5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.674 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1eeadc33-9ba6-49ae-a8f2-e50e0a7cfd39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.675 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44344b65-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:43 compute-0 kernel: tap44344b65-f0: left promiscuous mode
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.688 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7116b1e8-1ed2-4a1c-9cf7-5bc8e53dc851]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.720 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f980644e-5f70-4b49-94e5-a75e613c7f26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.722 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[99b46979-6a1f-4182-ba71-8d38a73edd4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.743 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5b3478-403b-4445-a64d-00591b3b8fe2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779325, 'reachable_time': 35808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393410, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d44344b65\x2df325\x2d470a\x2dbd36\x2d6f52ed03d317.mount: Deactivated successfully.
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.746 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.746 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[964efcdb-7d53-404d-945b-148377056a92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.748 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c in datapath 44344b65-f325-470a-bd36-6f52ed03d317 unbound from our chassis
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.749 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44344b65-f325-470a-bd36-6f52ed03d317, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.750 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[290fef6f-8790-4199-b2ff-8f8d27cb1b26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.750 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c in datapath 44344b65-f325-470a-bd36-6f52ed03d317 unbound from our chassis
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.751 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44344b65-f325-470a-bd36-6f52ed03d317, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:27:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.752 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1735f23e-a2bd-41df-b5a1-8f05ed7dc0a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.871 2 DEBUG nova.compute.manager [req-007d2205-d60a-4970-b7f6-58c17b83e786 req-d53029a7-350d-459b-aa4d-4f81fd98f051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-unplugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.871 2 DEBUG oslo_concurrency.lockutils [req-007d2205-d60a-4970-b7f6-58c17b83e786 req-d53029a7-350d-459b-aa4d-4f81fd98f051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.872 2 DEBUG oslo_concurrency.lockutils [req-007d2205-d60a-4970-b7f6-58c17b83e786 req-d53029a7-350d-459b-aa4d-4f81fd98f051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.873 2 DEBUG oslo_concurrency.lockutils [req-007d2205-d60a-4970-b7f6-58c17b83e786 req-d53029a7-350d-459b-aa4d-4f81fd98f051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.873 2 DEBUG nova.compute.manager [req-007d2205-d60a-4970-b7f6-58c17b83e786 req-d53029a7-350d-459b-aa4d-4f81fd98f051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-unplugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.874 2 DEBUG nova.compute.manager [req-007d2205-d60a-4970-b7f6-58c17b83e786 req-d53029a7-350d-459b-aa4d-4f81fd98f051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-unplugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.976 2 INFO nova.virt.libvirt.driver [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Deleting instance files /var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9_del
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.977 2 INFO nova.virt.libvirt.driver [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Deletion of /var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9_del complete
Oct 14 09:27:43 compute-0 nova_compute[259627]: 2025-10-14 09:27:43.981 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:27:44 compute-0 nova_compute[259627]: 2025-10-14 09:27:44.057 2 INFO nova.compute.manager [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Took 0.85 seconds to destroy the instance on the hypervisor.
Oct 14 09:27:44 compute-0 nova_compute[259627]: 2025-10-14 09:27:44.058 2 DEBUG oslo.service.loopingcall [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:27:44 compute-0 nova_compute[259627]: 2025-10-14 09:27:44.059 2 DEBUG nova.compute.manager [-] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:27:44 compute-0 nova_compute[259627]: 2025-10-14 09:27:44.059 2 DEBUG nova.network.neutron [-] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:27:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 121 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 21 KiB/s wr, 57 op/s
Oct 14 09:27:44 compute-0 nova_compute[259627]: 2025-10-14 09:27:44.934 2 DEBUG nova.network.neutron [req-247849e6-5bd4-453d-814a-89befe5ca099 req-f28b0ea9-29b9-4b87-b077-38c587d705bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updated VIF entry in instance network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:27:44 compute-0 nova_compute[259627]: 2025-10-14 09:27:44.935 2 DEBUG nova.network.neutron [req-247849e6-5bd4-453d-814a-89befe5ca099 req-f28b0ea9-29b9-4b87-b077-38c587d705bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updating instance_info_cache with network_info: [{"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:27:44 compute-0 nova_compute[259627]: 2025-10-14 09:27:44.964 2 DEBUG oslo_concurrency.lockutils [req-247849e6-5bd4-453d-814a-89befe5ca099 req-f28b0ea9-29b9-4b87-b077-38c587d705bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.259 2 DEBUG nova.network.neutron [-] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.278 2 INFO nova.compute.manager [-] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Took 1.22 seconds to deallocate network for instance.
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.346 2 DEBUG oslo_concurrency.lockutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.347 2 DEBUG oslo_concurrency.lockutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.398 2 DEBUG oslo_concurrency.processutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:45 compute-0 ceph-mon[74249]: pgmap v2238: 305 pgs: 305 active+clean; 121 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 21 KiB/s wr, 57 op/s
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:27:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1780509116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.935 2 DEBUG oslo_concurrency.processutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.944 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.944 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.945 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.945 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.946 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.946 2 WARNING nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state deleted and task_state None.
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.947 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.947 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.948 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.948 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.948 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.949 2 WARNING nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state deleted and task_state None.
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.949 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.950 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.950 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.950 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.951 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.951 2 WARNING nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state deleted and task_state None.
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.952 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-unplugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.952 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.953 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.953 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.953 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-unplugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.953 2 WARNING nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-unplugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state deleted and task_state None.
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.953 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-deleted-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.953 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.954 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.954 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.954 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.954 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.954 2 WARNING nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state deleted and task_state None.
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.957 2 DEBUG nova.compute.provider_tree [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:27:45 compute-0 nova_compute[259627]: 2025-10-14 09:27:45.977 2 DEBUG nova.scheduler.client.report [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:27:46 compute-0 nova_compute[259627]: 2025-10-14 09:27:46.002 2 DEBUG oslo_concurrency.lockutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:46 compute-0 nova_compute[259627]: 2025-10-14 09:27:46.026 2 INFO nova.scheduler.client.report [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance 2595dec0-9170-4e8f-a6bc-9179d30519a9
Oct 14 09:27:46 compute-0 nova_compute[259627]: 2025-10-14 09:27:46.102 2 DEBUG oslo_concurrency.lockutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 41 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 22 KiB/s wr, 85 op/s
Oct 14 09:27:46 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1780509116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:27:46 compute-0 nova_compute[259627]: 2025-10-14 09:27:46.609 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434051.6084642, 4310595f-2280-438c-97ca-f2de57527501 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:27:46 compute-0 nova_compute[259627]: 2025-10-14 09:27:46.610 2 INFO nova.compute.manager [-] [instance: 4310595f-2280-438c-97ca-f2de57527501] VM Stopped (Lifecycle Event)
Oct 14 09:27:46 compute-0 nova_compute[259627]: 2025-10-14 09:27:46.634 2 DEBUG nova.compute.manager [None req-fdf9de99-a5eb-4653-a433-bef533947476 - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:27:46 compute-0 nova_compute[259627]: 2025-10-14 09:27:46.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:27:46 compute-0 nova_compute[259627]: 2025-10-14 09:27:46.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:27:46 compute-0 nova_compute[259627]: 2025-10-14 09:27:46.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:27:46 compute-0 nova_compute[259627]: 2025-10-14 09:27:46.999 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:27:46 compute-0 nova_compute[259627]: 2025-10-14 09:27:46.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:27:47 compute-0 nova_compute[259627]: 2025-10-14 09:27:46.999 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:27:47 compute-0 ceph-mon[74249]: pgmap v2239: 305 pgs: 305 active+clean; 41 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 22 KiB/s wr, 85 op/s
Oct 14 09:27:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:27:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 41 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Oct 14 09:27:48 compute-0 nova_compute[259627]: 2025-10-14 09:27:48.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:49 compute-0 nova_compute[259627]: 2025-10-14 09:27:49.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:49 compute-0 nova_compute[259627]: 2025-10-14 09:27:49.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:49 compute-0 nova_compute[259627]: 2025-10-14 09:27:49.153 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "7a110a3c-a2ca-4314-a190-28a4505cc26c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:49 compute-0 nova_compute[259627]: 2025-10-14 09:27:49.154 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:49 compute-0 nova_compute[259627]: 2025-10-14 09:27:49.171 2 DEBUG nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:27:49 compute-0 nova_compute[259627]: 2025-10-14 09:27:49.273 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:49 compute-0 nova_compute[259627]: 2025-10-14 09:27:49.274 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:49 compute-0 nova_compute[259627]: 2025-10-14 09:27:49.283 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:27:49 compute-0 nova_compute[259627]: 2025-10-14 09:27:49.284 2 INFO nova.compute.claims [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:27:49 compute-0 nova_compute[259627]: 2025-10-14 09:27:49.399 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:49 compute-0 ceph-mon[74249]: pgmap v2240: 305 pgs: 305 active+clean; 41 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Oct 14 09:27:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:27:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4046509908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:27:49 compute-0 nova_compute[259627]: 2025-10-14 09:27:49.919 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:49 compute-0 nova_compute[259627]: 2025-10-14 09:27:49.929 2 DEBUG nova.compute.provider_tree [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:27:49 compute-0 nova_compute[259627]: 2025-10-14 09:27:49.945 2 DEBUG nova.scheduler.client.report [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:27:49 compute-0 nova_compute[259627]: 2025-10-14 09:27:49.967 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:49 compute-0 nova_compute[259627]: 2025-10-14 09:27:49.968 2 DEBUG nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.015 2 DEBUG nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.016 2 DEBUG nova.network.neutron [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.034 2 INFO nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.052 2 DEBUG nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.138 2 DEBUG nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.139 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.140 2 INFO nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Creating image(s)
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.177 2 DEBUG nova.storage.rbd_utils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.214 2 DEBUG nova.storage.rbd_utils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.248 2 DEBUG nova.storage.rbd_utils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.253 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.338 2 DEBUG nova.policy [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20f3546ab30e42b5b641f67780316750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f754bf649a2404fa8dee732f5aab36e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:27:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 41 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.361 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.362 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.363 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.363 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.396 2 DEBUG nova.storage.rbd_utils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.400 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4046509908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.691 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.749 2 DEBUG nova.storage.rbd_utils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] resizing rbd image 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.837 2 DEBUG nova.objects.instance [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'migration_context' on Instance uuid 7a110a3c-a2ca-4314-a190-28a4505cc26c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.859 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.860 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Ensure instance console log exists: /var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.861 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.861 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.862 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:50 compute-0 nova_compute[259627]: 2025-10-14 09:27:50.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:27:51 compute-0 ceph-mon[74249]: pgmap v2241: 305 pgs: 305 active+clean; 41 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Oct 14 09:27:51 compute-0 nova_compute[259627]: 2025-10-14 09:27:51.638 2 DEBUG nova.network.neutron [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Successfully created port: 3fc32773-5083-4341-9838-5282b7963f56 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:27:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 41 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 6.3 KiB/s wr, 42 op/s
Oct 14 09:27:52 compute-0 podman[393624]: 2025-10-14 09:27:52.673493633 +0000 UTC m=+0.077182328 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 09:27:52 compute-0 podman[393623]: 2025-10-14 09:27:52.707513524 +0000 UTC m=+0.109376704 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009)
Oct 14 09:27:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:27:53 compute-0 ceph-mon[74249]: pgmap v2242: 305 pgs: 305 active+clean; 41 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 6.3 KiB/s wr, 42 op/s
Oct 14 09:27:53 compute-0 nova_compute[259627]: 2025-10-14 09:27:53.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:54 compute-0 nova_compute[259627]: 2025-10-14 09:27:54.162 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434059.1603062, 1b7f03a3-6b73-478f-bf13-cf062714faef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:27:54 compute-0 nova_compute[259627]: 2025-10-14 09:27:54.163 2 INFO nova.compute.manager [-] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] VM Stopped (Lifecycle Event)
Oct 14 09:27:54 compute-0 nova_compute[259627]: 2025-10-14 09:27:54.192 2 DEBUG nova.compute.manager [None req-da76d5bc-a370-4b1f-94bf-2799d894a075 - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:27:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 41 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:27:54 compute-0 nova_compute[259627]: 2025-10-14 09:27:54.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:27:55 compute-0 nova_compute[259627]: 2025-10-14 09:27:55.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:55 compute-0 ceph-mon[74249]: pgmap v2243: 305 pgs: 305 active+clean; 41 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:27:55 compute-0 nova_compute[259627]: 2025-10-14 09:27:55.602 2 DEBUG nova.network.neutron [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Successfully updated port: 3fc32773-5083-4341-9838-5282b7963f56 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:27:55 compute-0 nova_compute[259627]: 2025-10-14 09:27:55.616 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:27:55 compute-0 nova_compute[259627]: 2025-10-14 09:27:55.616 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquired lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:27:55 compute-0 nova_compute[259627]: 2025-10-14 09:27:55.616 2 DEBUG nova.network.neutron [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:27:55 compute-0 nova_compute[259627]: 2025-10-14 09:27:55.709 2 DEBUG nova.compute.manager [req-703d802a-d5c3-4e59-ad6c-53a38e327ebd req-06e472e5-13c6-42ab-a82c-55cf9de752e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Received event network-changed-3fc32773-5083-4341-9838-5282b7963f56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:27:55 compute-0 nova_compute[259627]: 2025-10-14 09:27:55.709 2 DEBUG nova.compute.manager [req-703d802a-d5c3-4e59-ad6c-53a38e327ebd req-06e472e5-13c6-42ab-a82c-55cf9de752e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Refreshing instance network info cache due to event network-changed-3fc32773-5083-4341-9838-5282b7963f56. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:27:55 compute-0 nova_compute[259627]: 2025-10-14 09:27:55.710 2 DEBUG oslo_concurrency.lockutils [req-703d802a-d5c3-4e59-ad6c-53a38e327ebd req-06e472e5-13c6-42ab-a82c-55cf9de752e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:27:55 compute-0 nova_compute[259627]: 2025-10-14 09:27:55.804 2 DEBUG nova.network.neutron [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:27:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.851 2 DEBUG nova.network.neutron [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updating instance_info_cache with network_info: [{"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.888 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Releasing lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.888 2 DEBUG nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Instance network_info: |[{"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.889 2 DEBUG oslo_concurrency.lockutils [req-703d802a-d5c3-4e59-ad6c-53a38e327ebd req-06e472e5-13c6-42ab-a82c-55cf9de752e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.889 2 DEBUG nova.network.neutron [req-703d802a-d5c3-4e59-ad6c-53a38e327ebd req-06e472e5-13c6-42ab-a82c-55cf9de752e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Refreshing network info cache for port 3fc32773-5083-4341-9838-5282b7963f56 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.894 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Start _get_guest_xml network_info=[{"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.900 2 WARNING nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.906 2 DEBUG nova.virt.libvirt.host [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.907 2 DEBUG nova.virt.libvirt.host [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.915 2 DEBUG nova.virt.libvirt.host [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.916 2 DEBUG nova.virt.libvirt.host [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.917 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.917 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.918 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.919 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.919 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.920 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.920 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.921 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.921 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.922 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.922 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.923 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:27:56 compute-0 nova_compute[259627]: 2025-10-14 09:27:56.928 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:27:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2803485620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.432 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.460 2 DEBUG nova.storage.rbd_utils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.465 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:57 compute-0 ceph-mon[74249]: pgmap v2244: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 14 09:27:57 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2803485620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:27:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:27:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3713904424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.931 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.933 2 DEBUG nova.virt.libvirt.vif [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:27:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1879326167',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1879326167',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=130,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFHCa/sppIAnQAQkdZ/1Ef+DTNecrrS5rhEQtzPrxlFpoBPEHdQG0Rr7ARx/izPzPbjo5rPoXWm8ksRfiQ+ieFxBsihNkHMZrgGUaPkR43e+YFaxNpM2eAk15miiuIGX3Q==',key_name='tempest-TestSecurityGroupsBasicOps-1939533280',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-tkp1b0io',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:27:50Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=7a110a3c-a2ca-4314-a190-28a4505cc26c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.934 2 DEBUG nova.network.os_vif_util [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.935 2 DEBUG nova.network.os_vif_util [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5d:72,bridge_name='br-int',has_traffic_filtering=True,id=3fc32773-5083-4341-9838-5282b7963f56,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fc32773-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.936 2 DEBUG nova.objects.instance [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7a110a3c-a2ca-4314-a190-28a4505cc26c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.955 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:27:57 compute-0 nova_compute[259627]:   <uuid>7a110a3c-a2ca-4314-a190-28a4505cc26c</uuid>
Oct 14 09:27:57 compute-0 nova_compute[259627]:   <name>instance-00000082</name>
Oct 14 09:27:57 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:27:57 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:27:57 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1879326167</nova:name>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:27:56</nova:creationTime>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:27:57 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:27:57 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:27:57 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:27:57 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:27:57 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:27:57 compute-0 nova_compute[259627]:         <nova:user uuid="20f3546ab30e42b5b641f67780316750">tempest-TestSecurityGroupsBasicOps-1327646173-project-member</nova:user>
Oct 14 09:27:57 compute-0 nova_compute[259627]:         <nova:project uuid="5f754bf649a2404fa8dee732f5aab36e">tempest-TestSecurityGroupsBasicOps-1327646173</nova:project>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:27:57 compute-0 nova_compute[259627]:         <nova:port uuid="3fc32773-5083-4341-9838-5282b7963f56">
Oct 14 09:27:57 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:27:57 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:27:57 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <system>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <entry name="serial">7a110a3c-a2ca-4314-a190-28a4505cc26c</entry>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <entry name="uuid">7a110a3c-a2ca-4314-a190-28a4505cc26c</entry>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     </system>
Oct 14 09:27:57 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:27:57 compute-0 nova_compute[259627]:   <os>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:   </os>
Oct 14 09:27:57 compute-0 nova_compute[259627]:   <features>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:   </features>
Oct 14 09:27:57 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:27:57 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:27:57 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/7a110a3c-a2ca-4314-a190-28a4505cc26c_disk">
Oct 14 09:27:57 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       </source>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:27:57 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/7a110a3c-a2ca-4314-a190-28a4505cc26c_disk.config">
Oct 14 09:27:57 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       </source>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:27:57 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:b3:5d:72"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <target dev="tap3fc32773-50"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c/console.log" append="off"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <video>
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     </video>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:27:57 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:27:57 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:27:57 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:27:57 compute-0 nova_compute[259627]: </domain>
Oct 14 09:27:57 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.957 2 DEBUG nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Preparing to wait for external event network-vif-plugged-3fc32773-5083-4341-9838-5282b7963f56 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.958 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.958 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.959 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.960 2 DEBUG nova.virt.libvirt.vif [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:27:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1879326167',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1879326167',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=130,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFHCa/sppIAnQAQkdZ/1Ef+DTNecrrS5rhEQtzPrxlFpoBPEHdQG0Rr7ARx/izPzPbjo5rPoXWm8ksRfiQ+ieFxBsihNkHMZrgGUaPkR43e+YFaxNpM2eAk15miiuIGX3Q==',key_name='tempest-TestSecurityGroupsBasicOps-1939533280',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-tkp1b0io',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:27:50Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=7a110a3c-a2ca-4314-a190-28a4505cc26c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.961 2 DEBUG nova.network.os_vif_util [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.962 2 DEBUG nova.network.os_vif_util [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5d:72,bridge_name='br-int',has_traffic_filtering=True,id=3fc32773-5083-4341-9838-5282b7963f56,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fc32773-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.962 2 DEBUG os_vif [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5d:72,bridge_name='br-int',has_traffic_filtering=True,id=3fc32773-5083-4341-9838-5282b7963f56,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fc32773-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.964 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.965 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.970 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3fc32773-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:57 compute-0 nova_compute[259627]: 2025-10-14 09:27:57.971 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3fc32773-50, col_values=(('external_ids', {'iface-id': '3fc32773-5083-4341-9838-5282b7963f56', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:5d:72', 'vm-uuid': '7a110a3c-a2ca-4314-a190-28a4505cc26c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:27:58 compute-0 nova_compute[259627]: 2025-10-14 09:27:58.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:58 compute-0 NetworkManager[44885]: <info>  [1760434078.0169] manager: (tap3fc32773-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/564)
Oct 14 09:27:58 compute-0 nova_compute[259627]: 2025-10-14 09:27:58.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:27:58 compute-0 nova_compute[259627]: 2025-10-14 09:27:58.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:27:58 compute-0 nova_compute[259627]: 2025-10-14 09:27:58.025 2 INFO os_vif [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5d:72,bridge_name='br-int',has_traffic_filtering=True,id=3fc32773-5083-4341-9838-5282b7963f56,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fc32773-50')
Oct 14 09:27:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:27:58 compute-0 nova_compute[259627]: 2025-10-14 09:27:58.188 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:27:58 compute-0 nova_compute[259627]: 2025-10-14 09:27:58.189 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:27:58 compute-0 nova_compute[259627]: 2025-10-14 09:27:58.189 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No VIF found with MAC fa:16:3e:b3:5d:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:27:58 compute-0 nova_compute[259627]: 2025-10-14 09:27:58.191 2 INFO nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Using config drive
Oct 14 09:27:58 compute-0 nova_compute[259627]: 2025-10-14 09:27:58.222 2 DEBUG nova.storage.rbd_utils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:27:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:27:58 compute-0 nova_compute[259627]: 2025-10-14 09:27:58.472 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434063.4699454, 2595dec0-9170-4e8f-a6bc-9179d30519a9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:27:58 compute-0 nova_compute[259627]: 2025-10-14 09:27:58.472 2 INFO nova.compute.manager [-] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] VM Stopped (Lifecycle Event)
Oct 14 09:27:58 compute-0 nova_compute[259627]: 2025-10-14 09:27:58.495 2 DEBUG nova.compute.manager [None req-a608c315-6725-4177-bc31-79de8ce5e48b - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:27:58 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3713904424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:27:59 compute-0 ceph-mon[74249]: pgmap v2245: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:27:59 compute-0 nova_compute[259627]: 2025-10-14 09:27:59.726 2 INFO nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Creating config drive at /var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c/disk.config
Oct 14 09:27:59 compute-0 nova_compute[259627]: 2025-10-14 09:27:59.734 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbwbaw7l6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:27:59 compute-0 nova_compute[259627]: 2025-10-14 09:27:59.903 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbwbaw7l6" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:27:59 compute-0 nova_compute[259627]: 2025-10-14 09:27:59.945 2 DEBUG nova.storage.rbd_utils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:27:59 compute-0 nova_compute[259627]: 2025-10-14 09:27:59.951 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c/disk.config 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:28:00 compute-0 nova_compute[259627]: 2025-10-14 09:28:00.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:00 compute-0 nova_compute[259627]: 2025-10-14 09:28:00.177 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c/disk.config 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:28:00 compute-0 nova_compute[259627]: 2025-10-14 09:28:00.179 2 INFO nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Deleting local config drive /var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c/disk.config because it was imported into RBD.
Oct 14 09:28:00 compute-0 kernel: tap3fc32773-50: entered promiscuous mode
Oct 14 09:28:00 compute-0 NetworkManager[44885]: <info>  [1760434080.2499] manager: (tap3fc32773-50): new Tun device (/org/freedesktop/NetworkManager/Devices/565)
Oct 14 09:28:00 compute-0 ovn_controller[152662]: 2025-10-14T09:28:00Z|01393|binding|INFO|Claiming lport 3fc32773-5083-4341-9838-5282b7963f56 for this chassis.
Oct 14 09:28:00 compute-0 nova_compute[259627]: 2025-10-14 09:28:00.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:00 compute-0 ovn_controller[152662]: 2025-10-14T09:28:00Z|01394|binding|INFO|3fc32773-5083-4341-9838-5282b7963f56: Claiming fa:16:3e:b3:5d:72 10.100.0.12
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.265 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:5d:72 10.100.0.12'], port_security=['fa:16:3e:b3:5d:72 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '7a110a3c-a2ca-4314-a190-28a4505cc26c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f09a704d-6063-4e40-b690-c967cd364b32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24af2eac-35ae-4c02-b261-8fe378764631 e76d2fee-d8c5-45a1-ac1f-55a35976452c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b90ca07-80d1-49c1-a91f-225f989dd9c6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3fc32773-5083-4341-9838-5282b7963f56) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.266 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3fc32773-5083-4341-9838-5282b7963f56 in datapath f09a704d-6063-4e40-b690-c967cd364b32 bound to our chassis
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.267 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f09a704d-6063-4e40-b690-c967cd364b32
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.285 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9b88e589-d7d3-4b76-ab1b-3247b426fff8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.286 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf09a704d-61 in ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.291 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf09a704d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.291 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[065fa1c6-b062-4576-b323-83e9be0333a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.292 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb0baec-6a77-45fc-b511-7f99b13813c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.311 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[d48437d5-df4a-4b45-8dab-d1677ad65163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:00 compute-0 systemd-udevd[393807]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:28:00 compute-0 systemd-machined[214636]: New machine qemu-163-instance-00000082.
Oct 14 09:28:00 compute-0 NetworkManager[44885]: <info>  [1760434080.3379] device (tap3fc32773-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:28:00 compute-0 NetworkManager[44885]: <info>  [1760434080.3389] device (tap3fc32773-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.344 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d8138b-a0b3-4411-ac63-a78a0821ff69]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:00 compute-0 systemd[1]: Started Virtual Machine qemu-163-instance-00000082.
Oct 14 09:28:00 compute-0 nova_compute[259627]: 2025-10-14 09:28:00.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:00 compute-0 ovn_controller[152662]: 2025-10-14T09:28:00Z|01395|binding|INFO|Setting lport 3fc32773-5083-4341-9838-5282b7963f56 ovn-installed in OVS
Oct 14 09:28:00 compute-0 ovn_controller[152662]: 2025-10-14T09:28:00Z|01396|binding|INFO|Setting lport 3fc32773-5083-4341-9838-5282b7963f56 up in Southbound
Oct 14 09:28:00 compute-0 nova_compute[259627]: 2025-10-14 09:28:00.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.380 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea7269d-61ba-4529-8d03-8f54f94a4ac2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:00 compute-0 NetworkManager[44885]: <info>  [1760434080.3872] manager: (tapf09a704d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/566)
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.386 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ee463a75-2fa7-4a94-b715-78958daa5af6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.427 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[466ada42-c8f4-4eb9-a145-b9734b8fe9c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.431 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[454e5323-383f-411d-9caf-5515472b1daf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:00 compute-0 NetworkManager[44885]: <info>  [1760434080.4630] device (tapf09a704d-60): carrier: link connected
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.471 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ae41c2c4-812c-43b5-8ab8-9e381b0347fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.503 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b08aa3ba-8ed1-42ea-a9d4-804a070991f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf09a704d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:f0:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785922, 'reachable_time': 22812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393838, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.529 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[020d32c4-fc09-4246-9248-bcbdd19ed59d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:f0e3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785922, 'tstamp': 785922}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393839, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.561 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5f35d808-a63a-4256-93fe-e2cc8fe4490b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf09a704d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:f0:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785922, 'reachable_time': 22812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 393840, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.610 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[377be5ec-b0cd-4e11-ab1a-f156c1fd316f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.695 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1052fa53-14ad-4435-8857-2dce6e4cbb46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.697 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf09a704d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.697 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.697 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf09a704d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:28:00 compute-0 NetworkManager[44885]: <info>  [1760434080.7018] manager: (tapf09a704d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/567)
Oct 14 09:28:00 compute-0 nova_compute[259627]: 2025-10-14 09:28:00.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:00 compute-0 kernel: tapf09a704d-60: entered promiscuous mode
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.707 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf09a704d-60, col_values=(('external_ids', {'iface-id': 'e4065da2-8191-4cbc-a6ed-0505ac5ea1c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:28:00 compute-0 ovn_controller[152662]: 2025-10-14T09:28:00Z|01397|binding|INFO|Releasing lport e4065da2-8191-4cbc-a6ed-0505ac5ea1c6 from this chassis (sb_readonly=0)
Oct 14 09:28:00 compute-0 nova_compute[259627]: 2025-10-14 09:28:00.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:00 compute-0 nova_compute[259627]: 2025-10-14 09:28:00.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.744 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f09a704d-6063-4e40-b690-c967cd364b32.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f09a704d-6063-4e40-b690-c967cd364b32.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.745 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c59de69a-0bec-428e-bfa4-8aa04d5003ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.746 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-f09a704d-6063-4e40-b690-c967cd364b32
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/f09a704d-6063-4e40-b690-c967cd364b32.pid.haproxy
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID f09a704d-6063-4e40-b690-c967cd364b32
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:28:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.747 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'env', 'PROCESS_TAG=haproxy-f09a704d-6063-4e40-b690-c967cd364b32', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f09a704d-6063-4e40-b690-c967cd364b32.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:28:00 compute-0 nova_compute[259627]: 2025-10-14 09:28:00.860 2 DEBUG nova.network.neutron [req-703d802a-d5c3-4e59-ad6c-53a38e327ebd req-06e472e5-13c6-42ab-a82c-55cf9de752e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updated VIF entry in instance network info cache for port 3fc32773-5083-4341-9838-5282b7963f56. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:28:00 compute-0 nova_compute[259627]: 2025-10-14 09:28:00.861 2 DEBUG nova.network.neutron [req-703d802a-d5c3-4e59-ad6c-53a38e327ebd req-06e472e5-13c6-42ab-a82c-55cf9de752e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updating instance_info_cache with network_info: [{"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:28:00 compute-0 nova_compute[259627]: 2025-10-14 09:28:00.880 2 DEBUG oslo_concurrency.lockutils [req-703d802a-d5c3-4e59-ad6c-53a38e327ebd req-06e472e5-13c6-42ab-a82c-55cf9de752e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:28:01 compute-0 podman[393914]: 2025-10-14 09:28:01.189946312 +0000 UTC m=+0.069233002 container create a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:28:01 compute-0 systemd[1]: Started libpod-conmon-a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8.scope.
Oct 14 09:28:01 compute-0 podman[393914]: 2025-10-14 09:28:01.148551329 +0000 UTC m=+0.027838069 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:28:01 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98a05a81ea8f0e7b8861af2cf251020d339678d913c2f8ee38478cf7f1b84037/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:28:01 compute-0 podman[393914]: 2025-10-14 09:28:01.283003781 +0000 UTC m=+0.162290521 container init a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:28:01 compute-0 podman[393914]: 2025-10-14 09:28:01.29387034 +0000 UTC m=+0.173157050 container start a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 09:28:01 compute-0 neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32[393930]: [NOTICE]   (393934) : New worker (393936) forked
Oct 14 09:28:01 compute-0 neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32[393930]: [NOTICE]   (393934) : Loading success.
Oct 14 09:28:01 compute-0 nova_compute[259627]: 2025-10-14 09:28:01.479 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434081.478794, 7a110a3c-a2ca-4314-a190-28a4505cc26c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:28:01 compute-0 nova_compute[259627]: 2025-10-14 09:28:01.479 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] VM Started (Lifecycle Event)
Oct 14 09:28:01 compute-0 nova_compute[259627]: 2025-10-14 09:28:01.505 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:28:01 compute-0 nova_compute[259627]: 2025-10-14 09:28:01.509 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434081.4789352, 7a110a3c-a2ca-4314-a190-28a4505cc26c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:28:01 compute-0 nova_compute[259627]: 2025-10-14 09:28:01.509 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] VM Paused (Lifecycle Event)
Oct 14 09:28:01 compute-0 nova_compute[259627]: 2025-10-14 09:28:01.533 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:28:01 compute-0 nova_compute[259627]: 2025-10-14 09:28:01.536 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:28:01 compute-0 ceph-mon[74249]: pgmap v2246: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:28:01 compute-0 nova_compute[259627]: 2025-10-14 09:28:01.555 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:28:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 09:28:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:28:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:28:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:28:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:28:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:28:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:28:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:28:03 compute-0 nova_compute[259627]: 2025-10-14 09:28:03.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:03 compute-0 ceph-mon[74249]: pgmap v2247: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 09:28:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 09:28:05 compute-0 nova_compute[259627]: 2025-10-14 09:28:05.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:05 compute-0 ceph-mon[74249]: pgmap v2248: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 09:28:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:28:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2444135089' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:28:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:28:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2444135089' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:28:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 09:28:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2444135089' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:28:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2444135089' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:28:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:07.043 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:07.044 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:07.045 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:07 compute-0 ceph-mon[74249]: pgmap v2249: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 09:28:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.036931) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434088037002, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 507, "num_deletes": 256, "total_data_size": 438035, "memory_usage": 447720, "flush_reason": "Manual Compaction"}
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434088043280, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 433906, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47240, "largest_seqno": 47746, "table_properties": {"data_size": 431072, "index_size": 806, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6682, "raw_average_key_size": 18, "raw_value_size": 425372, "raw_average_value_size": 1175, "num_data_blocks": 35, "num_entries": 362, "num_filter_entries": 362, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434061, "oldest_key_time": 1760434061, "file_creation_time": 1760434088, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 6400 microseconds, and 3141 cpu microseconds.
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.043340) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 433906 bytes OK
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.043365) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.045486) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.045511) EVENT_LOG_v1 {"time_micros": 1760434088045503, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.045534) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 435070, prev total WAL file size 435070, number of live WAL files 2.
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.046161) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373538' seq:72057594037927935, type:22 .. '6C6F676D0032303130' seq:0, type:0; will stop at (end)
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(423KB)], [107(10030KB)]
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434088046213, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 10705064, "oldest_snapshot_seqno": -1}
Oct 14 09:28:08 compute-0 nova_compute[259627]: 2025-10-14 09:28:08.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6884 keys, 10570467 bytes, temperature: kUnknown
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434088129228, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 10570467, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10522261, "index_size": 29918, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 179261, "raw_average_key_size": 26, "raw_value_size": 10396795, "raw_average_value_size": 1510, "num_data_blocks": 1175, "num_entries": 6884, "num_filter_entries": 6884, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434088, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.129517) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10570467 bytes
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.130899) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.8 rd, 127.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(49.0) write-amplify(24.4) OK, records in: 7407, records dropped: 523 output_compression: NoCompression
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.130920) EVENT_LOG_v1 {"time_micros": 1760434088130910, "job": 64, "event": "compaction_finished", "compaction_time_micros": 83101, "compaction_time_cpu_micros": 53027, "output_level": 6, "num_output_files": 1, "total_output_size": 10570467, "num_input_records": 7407, "num_output_records": 6884, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434088131199, "job": 64, "event": "table_file_deletion", "file_number": 109}
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434088133651, "job": 64, "event": "table_file_deletion", "file_number": 107}
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.046070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.133821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.133832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.133836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.133840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:28:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.133844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:28:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 14 09:28:09 compute-0 ceph-mon[74249]: pgmap v2250: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 14 09:28:09 compute-0 podman[393945]: 2025-10-14 09:28:09.713680008 +0000 UTC m=+0.119230607 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd)
Oct 14 09:28:09 compute-0 podman[393946]: 2025-10-14 09:28:09.716430306 +0000 UTC m=+0.118569161 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 09:28:10 compute-0 nova_compute[259627]: 2025-10-14 09:28:10.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 14 09:28:10 compute-0 nova_compute[259627]: 2025-10-14 09:28:10.994 2 DEBUG nova.compute.manager [req-a405cd60-99bb-498b-b3b9-dca989979056 req-d58e9aa1-644b-42e4-99de-d12a7234e8ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Received event network-vif-plugged-3fc32773-5083-4341-9838-5282b7963f56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:28:10 compute-0 nova_compute[259627]: 2025-10-14 09:28:10.994 2 DEBUG oslo_concurrency.lockutils [req-a405cd60-99bb-498b-b3b9-dca989979056 req-d58e9aa1-644b-42e4-99de-d12a7234e8ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:10 compute-0 nova_compute[259627]: 2025-10-14 09:28:10.995 2 DEBUG oslo_concurrency.lockutils [req-a405cd60-99bb-498b-b3b9-dca989979056 req-d58e9aa1-644b-42e4-99de-d12a7234e8ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:10 compute-0 nova_compute[259627]: 2025-10-14 09:28:10.995 2 DEBUG oslo_concurrency.lockutils [req-a405cd60-99bb-498b-b3b9-dca989979056 req-d58e9aa1-644b-42e4-99de-d12a7234e8ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:10 compute-0 nova_compute[259627]: 2025-10-14 09:28:10.995 2 DEBUG nova.compute.manager [req-a405cd60-99bb-498b-b3b9-dca989979056 req-d58e9aa1-644b-42e4-99de-d12a7234e8ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Processing event network-vif-plugged-3fc32773-5083-4341-9838-5282b7963f56 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:28:10 compute-0 nova_compute[259627]: 2025-10-14 09:28:10.996 2 DEBUG nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Instance event wait completed in 9 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:10.999 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434090.9992287, 7a110a3c-a2ca-4314-a190-28a4505cc26c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:10.999 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] VM Resumed (Lifecycle Event)
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:11.001 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:11.005 2 INFO nova.virt.libvirt.driver [-] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Instance spawned successfully.
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:11.006 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:11.017 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:11.025 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:11.029 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:11.030 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:11.030 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:11.031 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:11.032 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:11.032 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:11.052 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:11.109 2 INFO nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Took 20.97 seconds to spawn the instance on the hypervisor.
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:11.109 2 DEBUG nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:11.174 2 INFO nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Took 21.93 seconds to build instance.
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:11.189 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:11 compute-0 ceph-mon[74249]: pgmap v2251: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 14 09:28:11 compute-0 nova_compute[259627]: 2025-10-14 09:28:11.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:28:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 14 09:28:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:28:13 compute-0 nova_compute[259627]: 2025-10-14 09:28:13.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:13 compute-0 ceph-mon[74249]: pgmap v2252: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 14 09:28:13 compute-0 nova_compute[259627]: 2025-10-14 09:28:13.801 2 DEBUG nova.compute.manager [req-eacc9880-7708-4e6b-a187-d146ea103231 req-25cc89ad-e5e1-4491-bad9-92d8b7dedbd2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Received event network-vif-plugged-3fc32773-5083-4341-9838-5282b7963f56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:28:13 compute-0 nova_compute[259627]: 2025-10-14 09:28:13.802 2 DEBUG oslo_concurrency.lockutils [req-eacc9880-7708-4e6b-a187-d146ea103231 req-25cc89ad-e5e1-4491-bad9-92d8b7dedbd2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:13 compute-0 nova_compute[259627]: 2025-10-14 09:28:13.803 2 DEBUG oslo_concurrency.lockutils [req-eacc9880-7708-4e6b-a187-d146ea103231 req-25cc89ad-e5e1-4491-bad9-92d8b7dedbd2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:13 compute-0 nova_compute[259627]: 2025-10-14 09:28:13.803 2 DEBUG oslo_concurrency.lockutils [req-eacc9880-7708-4e6b-a187-d146ea103231 req-25cc89ad-e5e1-4491-bad9-92d8b7dedbd2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:13 compute-0 nova_compute[259627]: 2025-10-14 09:28:13.803 2 DEBUG nova.compute.manager [req-eacc9880-7708-4e6b-a187-d146ea103231 req-25cc89ad-e5e1-4491-bad9-92d8b7dedbd2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] No waiting events found dispatching network-vif-plugged-3fc32773-5083-4341-9838-5282b7963f56 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:28:13 compute-0 nova_compute[259627]: 2025-10-14 09:28:13.804 2 WARNING nova.compute.manager [req-eacc9880-7708-4e6b-a187-d146ea103231 req-25cc89ad-e5e1-4491-bad9-92d8b7dedbd2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Received unexpected event network-vif-plugged-3fc32773-5083-4341-9838-5282b7963f56 for instance with vm_state active and task_state None.
Oct 14 09:28:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 341 B/s wr, 6 op/s
Oct 14 09:28:15 compute-0 nova_compute[259627]: 2025-10-14 09:28:15.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:15 compute-0 nova_compute[259627]: 2025-10-14 09:28:15.247 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "9314cb71-9b9f-4379-90ba-61445b09c003" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:15 compute-0 nova_compute[259627]: 2025-10-14 09:28:15.249 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:15 compute-0 nova_compute[259627]: 2025-10-14 09:28:15.268 2 DEBUG nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:28:15 compute-0 nova_compute[259627]: 2025-10-14 09:28:15.389 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:15 compute-0 nova_compute[259627]: 2025-10-14 09:28:15.390 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:15 compute-0 nova_compute[259627]: 2025-10-14 09:28:15.400 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:28:15 compute-0 nova_compute[259627]: 2025-10-14 09:28:15.401 2 INFO nova.compute.claims [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:28:15 compute-0 ceph-mon[74249]: pgmap v2253: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 341 B/s wr, 6 op/s
Oct 14 09:28:15 compute-0 nova_compute[259627]: 2025-10-14 09:28:15.513 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:28:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:28:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1863749456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.020 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.029 2 DEBUG nova.compute.provider_tree [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.056 2 DEBUG nova.scheduler.client.report [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.084 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.085 2 DEBUG nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.160 2 DEBUG nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.161 2 DEBUG nova.network.neutron [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.181 2 INFO nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.199 2 DEBUG nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.293 2 DEBUG nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.295 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.295 2 INFO nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Creating image(s)
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.325 2 DEBUG nova.storage.rbd_utils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 9314cb71-9b9f-4379-90ba-61445b09c003_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.356 2 DEBUG nova.storage.rbd_utils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 9314cb71-9b9f-4379-90ba-61445b09c003_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:28:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 70 op/s
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.387 2 DEBUG nova.storage.rbd_utils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 9314cb71-9b9f-4379-90ba-61445b09c003_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.392 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:28:16 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1863749456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.475 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.476 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.477 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.478 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.508 2 DEBUG nova.storage.rbd_utils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 9314cb71-9b9f-4379-90ba-61445b09c003_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.512 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9314cb71-9b9f-4379-90ba-61445b09c003_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.807 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9314cb71-9b9f-4379-90ba-61445b09c003_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.898 2 DEBUG nova.policy [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:28:16 compute-0 nova_compute[259627]: 2025-10-14 09:28:16.908 2 DEBUG nova.storage.rbd_utils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image 9314cb71-9b9f-4379-90ba-61445b09c003_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:28:17 compute-0 nova_compute[259627]: 2025-10-14 09:28:17.030 2 DEBUG nova.objects.instance [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid 9314cb71-9b9f-4379-90ba-61445b09c003 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:28:17 compute-0 nova_compute[259627]: 2025-10-14 09:28:17.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:17 compute-0 NetworkManager[44885]: <info>  [1760434097.0462] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/568)
Oct 14 09:28:17 compute-0 NetworkManager[44885]: <info>  [1760434097.0482] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/569)
Oct 14 09:28:17 compute-0 nova_compute[259627]: 2025-10-14 09:28:17.055 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:28:17 compute-0 nova_compute[259627]: 2025-10-14 09:28:17.056 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Ensure instance console log exists: /var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:28:17 compute-0 nova_compute[259627]: 2025-10-14 09:28:17.056 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:17 compute-0 nova_compute[259627]: 2025-10-14 09:28:17.057 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:17 compute-0 nova_compute[259627]: 2025-10-14 09:28:17.057 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:17 compute-0 nova_compute[259627]: 2025-10-14 09:28:17.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:17 compute-0 nova_compute[259627]: 2025-10-14 09:28:17.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:17 compute-0 ovn_controller[152662]: 2025-10-14T09:28:17Z|01398|binding|INFO|Releasing lport e4065da2-8191-4cbc-a6ed-0505ac5ea1c6 from this chassis (sb_readonly=0)
Oct 14 09:28:17 compute-0 ceph-mon[74249]: pgmap v2254: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 70 op/s
Oct 14 09:28:17 compute-0 nova_compute[259627]: 2025-10-14 09:28:17.592 2 DEBUG nova.compute.manager [req-29bf727f-f556-41a4-bf82-7f51727ed301 req-c8003fb6-2276-4fc4-b8d7-fceba314cccc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Received event network-changed-3fc32773-5083-4341-9838-5282b7963f56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:28:17 compute-0 nova_compute[259627]: 2025-10-14 09:28:17.593 2 DEBUG nova.compute.manager [req-29bf727f-f556-41a4-bf82-7f51727ed301 req-c8003fb6-2276-4fc4-b8d7-fceba314cccc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Refreshing instance network info cache due to event network-changed-3fc32773-5083-4341-9838-5282b7963f56. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:28:17 compute-0 nova_compute[259627]: 2025-10-14 09:28:17.594 2 DEBUG oslo_concurrency.lockutils [req-29bf727f-f556-41a4-bf82-7f51727ed301 req-c8003fb6-2276-4fc4-b8d7-fceba314cccc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:28:17 compute-0 nova_compute[259627]: 2025-10-14 09:28:17.594 2 DEBUG oslo_concurrency.lockutils [req-29bf727f-f556-41a4-bf82-7f51727ed301 req-c8003fb6-2276-4fc4-b8d7-fceba314cccc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:28:17 compute-0 nova_compute[259627]: 2025-10-14 09:28:17.595 2 DEBUG nova.network.neutron [req-29bf727f-f556-41a4-bf82-7f51727ed301 req-c8003fb6-2276-4fc4-b8d7-fceba314cccc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Refreshing network info cache for port 3fc32773-5083-4341-9838-5282b7963f56 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:28:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:28:18 compute-0 nova_compute[259627]: 2025-10-14 09:28:18.095 2 DEBUG nova.network.neutron [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Successfully created port: 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:28:18 compute-0 nova_compute[259627]: 2025-10-14 09:28:18.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 14 09:28:19 compute-0 ceph-mon[74249]: pgmap v2255: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 14 09:28:19 compute-0 nova_compute[259627]: 2025-10-14 09:28:19.745 2 DEBUG nova.network.neutron [req-29bf727f-f556-41a4-bf82-7f51727ed301 req-c8003fb6-2276-4fc4-b8d7-fceba314cccc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updated VIF entry in instance network info cache for port 3fc32773-5083-4341-9838-5282b7963f56. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:28:19 compute-0 nova_compute[259627]: 2025-10-14 09:28:19.746 2 DEBUG nova.network.neutron [req-29bf727f-f556-41a4-bf82-7f51727ed301 req-c8003fb6-2276-4fc4-b8d7-fceba314cccc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updating instance_info_cache with network_info: [{"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:28:19 compute-0 nova_compute[259627]: 2025-10-14 09:28:19.772 2 DEBUG oslo_concurrency.lockutils [req-29bf727f-f556-41a4-bf82-7f51727ed301 req-c8003fb6-2276-4fc4-b8d7-fceba314cccc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:28:19 compute-0 nova_compute[259627]: 2025-10-14 09:28:19.825 2 DEBUG nova.network.neutron [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Successfully updated port: 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:28:19 compute-0 nova_compute[259627]: 2025-10-14 09:28:19.840 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:28:19 compute-0 nova_compute[259627]: 2025-10-14 09:28:19.841 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:28:19 compute-0 nova_compute[259627]: 2025-10-14 09:28:19.841 2 DEBUG nova.network.neutron [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:28:19 compute-0 nova_compute[259627]: 2025-10-14 09:28:19.925 2 DEBUG nova.compute.manager [req-689edcda-d2d1-4344-a989-b3d7fbf9b655 req-cb8ca5eb-c3a1-4bff-95fd-ab74f7916386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received event network-changed-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:28:19 compute-0 nova_compute[259627]: 2025-10-14 09:28:19.926 2 DEBUG nova.compute.manager [req-689edcda-d2d1-4344-a989-b3d7fbf9b655 req-cb8ca5eb-c3a1-4bff-95fd-ab74f7916386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Refreshing instance network info cache due to event network-changed-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:28:19 compute-0 nova_compute[259627]: 2025-10-14 09:28:19.926 2 DEBUG oslo_concurrency.lockutils [req-689edcda-d2d1-4344-a989-b3d7fbf9b655 req-cb8ca5eb-c3a1-4bff-95fd-ab74f7916386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:28:19 compute-0 nova_compute[259627]: 2025-10-14 09:28:19.986 2 DEBUG nova.network.neutron [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:28:20 compute-0 nova_compute[259627]: 2025-10-14 09:28:20.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 104 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 421 KiB/s wr, 64 op/s
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.269 2 DEBUG nova.network.neutron [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Updating instance_info_cache with network_info: [{"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.307 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.308 2 DEBUG nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Instance network_info: |[{"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.309 2 DEBUG oslo_concurrency.lockutils [req-689edcda-d2d1-4344-a989-b3d7fbf9b655 req-cb8ca5eb-c3a1-4bff-95fd-ab74f7916386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.309 2 DEBUG nova.network.neutron [req-689edcda-d2d1-4344-a989-b3d7fbf9b655 req-cb8ca5eb-c3a1-4bff-95fd-ab74f7916386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Refreshing network info cache for port 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.314 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Start _get_guest_xml network_info=[{"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.320 2 WARNING nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.327 2 DEBUG nova.virt.libvirt.host [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.328 2 DEBUG nova.virt.libvirt.host [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.338 2 DEBUG nova.virt.libvirt.host [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.339 2 DEBUG nova.virt.libvirt.host [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.340 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.341 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.342 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.342 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.343 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.343 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.344 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.345 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.345 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.346 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.346 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.347 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.352 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:28:21 compute-0 ceph-mon[74249]: pgmap v2256: 305 pgs: 305 active+clean; 104 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 421 KiB/s wr, 64 op/s
Oct 14 09:28:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:28:21 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2609265715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.897 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.924 2 DEBUG nova.storage.rbd_utils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 9314cb71-9b9f-4379-90ba-61445b09c003_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:28:21 compute-0 nova_compute[259627]: 2025-10-14 09:28:21.929 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:28:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:28:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/969744071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:28:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 134 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.388 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.390 2 DEBUG nova.virt.libvirt.vif [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:28:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-575903537',display_name='tempest-TestNetworkBasicOps-server-575903537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-575903537',id=131,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZNK+Wbj9xAsgpWQ+tziM8jRjajF62ZTB5kTzrK/xn8p7FnYdtDdeLVfLEHTxjGjbKXIgJlw92Cf9a6ZSykZsO+5buce9Y3MqTSkOEHFe9bsgLvMZ76ONZmB8tSQVQONA==',key_name='tempest-TestNetworkBasicOps-1689754690',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-eaasug7b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:28:16Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=9314cb71-9b9f-4379-90ba-61445b09c003,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.391 2 DEBUG nova.network.os_vif_util [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.392 2 DEBUG nova.network.os_vif_util [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa,network=Network(0ad5af7c-fbad-46b1-979b-db7d2639a7c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a63e80f-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.393 2 DEBUG nova.objects.instance [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9314cb71-9b9f-4379-90ba-61445b09c003 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.415 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:28:22 compute-0 nova_compute[259627]:   <uuid>9314cb71-9b9f-4379-90ba-61445b09c003</uuid>
Oct 14 09:28:22 compute-0 nova_compute[259627]:   <name>instance-00000083</name>
Oct 14 09:28:22 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:28:22 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:28:22 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <nova:name>tempest-TestNetworkBasicOps-server-575903537</nova:name>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:28:21</nova:creationTime>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:28:22 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:28:22 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:28:22 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:28:22 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:28:22 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:28:22 compute-0 nova_compute[259627]:         <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 09:28:22 compute-0 nova_compute[259627]:         <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:28:22 compute-0 nova_compute[259627]:         <nova:port uuid="5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa">
Oct 14 09:28:22 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:28:22 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:28:22 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <system>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <entry name="serial">9314cb71-9b9f-4379-90ba-61445b09c003</entry>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <entry name="uuid">9314cb71-9b9f-4379-90ba-61445b09c003</entry>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     </system>
Oct 14 09:28:22 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:28:22 compute-0 nova_compute[259627]:   <os>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:   </os>
Oct 14 09:28:22 compute-0 nova_compute[259627]:   <features>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:   </features>
Oct 14 09:28:22 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:28:22 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:28:22 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/9314cb71-9b9f-4379-90ba-61445b09c003_disk">
Oct 14 09:28:22 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       </source>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:28:22 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/9314cb71-9b9f-4379-90ba-61445b09c003_disk.config">
Oct 14 09:28:22 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       </source>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:28:22 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:a9:0f:a6"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <target dev="tap5a63e80f-6b"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003/console.log" append="off"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <video>
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     </video>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:28:22 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:28:22 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:28:22 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:28:22 compute-0 nova_compute[259627]: </domain>
Oct 14 09:28:22 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.417 2 DEBUG nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Preparing to wait for external event network-vif-plugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.417 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.418 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.418 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.419 2 DEBUG nova.virt.libvirt.vif [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:28:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-575903537',display_name='tempest-TestNetworkBasicOps-server-575903537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-575903537',id=131,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZNK+Wbj9xAsgpWQ+tziM8jRjajF62ZTB5kTzrK/xn8p7FnYdtDdeLVfLEHTxjGjbKXIgJlw92Cf9a6ZSykZsO+5buce9Y3MqTSkOEHFe9bsgLvMZ76ONZmB8tSQVQONA==',key_name='tempest-TestNetworkBasicOps-1689754690',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-eaasug7b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:28:16Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=9314cb71-9b9f-4379-90ba-61445b09c003,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.419 2 DEBUG nova.network.os_vif_util [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.420 2 DEBUG nova.network.os_vif_util [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa,network=Network(0ad5af7c-fbad-46b1-979b-db7d2639a7c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a63e80f-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.420 2 DEBUG os_vif [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa,network=Network(0ad5af7c-fbad-46b1-979b-db7d2639a7c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a63e80f-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.421 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.422 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a63e80f-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.426 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a63e80f-6b, col_values=(('external_ids', {'iface-id': '5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:0f:a6', 'vm-uuid': '9314cb71-9b9f-4379-90ba-61445b09c003'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:22 compute-0 NetworkManager[44885]: <info>  [1760434102.4286] manager: (tap5a63e80f-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/570)
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.436 2 INFO os_vif [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa,network=Network(0ad5af7c-fbad-46b1-979b-db7d2639a7c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a63e80f-6b')
Oct 14 09:28:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2609265715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:28:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/969744071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.504 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.505 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.505 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:a9:0f:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.506 2 INFO nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Using config drive
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.524 2 DEBUG nova.storage.rbd_utils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 9314cb71-9b9f-4379-90ba-61445b09c003_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.986 2 INFO nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Creating config drive at /var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003/disk.config
Oct 14 09:28:22 compute-0 nova_compute[259627]: 2025-10-14 09:28:22.993 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5tipgpue execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:28:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.149 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5tipgpue" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.180 2 DEBUG nova.storage.rbd_utils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 9314cb71-9b9f-4379-90ba-61445b09c003_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.184 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003/disk.config 9314cb71-9b9f-4379-90ba-61445b09c003_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.239 2 DEBUG nova.network.neutron [req-689edcda-d2d1-4344-a989-b3d7fbf9b655 req-cb8ca5eb-c3a1-4bff-95fd-ab74f7916386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Updated VIF entry in instance network info cache for port 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.240 2 DEBUG nova.network.neutron [req-689edcda-d2d1-4344-a989-b3d7fbf9b655 req-cb8ca5eb-c3a1-4bff-95fd-ab74f7916386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Updating instance_info_cache with network_info: [{"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.268 2 DEBUG oslo_concurrency.lockutils [req-689edcda-d2d1-4344-a989-b3d7fbf9b655 req-cb8ca5eb-c3a1-4bff-95fd-ab74f7916386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.371 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003/disk.config 9314cb71-9b9f-4379-90ba-61445b09c003_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.372 2 INFO nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Deleting local config drive /var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003/disk.config because it was imported into RBD.
Oct 14 09:28:23 compute-0 kernel: tap5a63e80f-6b: entered promiscuous mode
Oct 14 09:28:23 compute-0 NetworkManager[44885]: <info>  [1760434103.4278] manager: (tap5a63e80f-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/571)
Oct 14 09:28:23 compute-0 ovn_controller[152662]: 2025-10-14T09:28:23Z|01399|binding|INFO|Claiming lport 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa for this chassis.
Oct 14 09:28:23 compute-0 ovn_controller[152662]: 2025-10-14T09:28:23Z|01400|binding|INFO|5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa: Claiming fa:16:3e:a9:0f:a6 10.100.0.12
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.442 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:0f:a6 10.100.0.12'], port_security=['fa:16:3e:a9:0f:a6 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9314cb71-9b9f-4379-90ba-61445b09c003', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ad5af7c-fbad-46b1-979b-db7d2639a7c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee202a2d-dda0-40f4-92dc-f1908630878a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42808c73-a7f9-4337-928b-894f78f53e75, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.443 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa in datapath 0ad5af7c-fbad-46b1-979b-db7d2639a7c3 bound to our chassis
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.444 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ad5af7c-fbad-46b1-979b-db7d2639a7c3
Oct 14 09:28:23 compute-0 ovn_controller[152662]: 2025-10-14T09:28:23Z|01401|binding|INFO|Setting lport 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa ovn-installed in OVS
Oct 14 09:28:23 compute-0 ovn_controller[152662]: 2025-10-14T09:28:23Z|01402|binding|INFO|Setting lport 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa up in Southbound
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.464 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aef3a2e5-ecf4-4112-ad58-3a7805aa7ea9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.468 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0ad5af7c-f1 in ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.470 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0ad5af7c-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.471 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e44e86-5ec0-4564-a21c-41a5eb712539]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.472 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2165a011-dc42-49f9-87da-dbf86580a4a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:23 compute-0 systemd-machined[214636]: New machine qemu-164-instance-00000083.
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.487 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1e6d26-3381-405d-829f-9fbcad5e3a41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:23 compute-0 systemd[1]: Started Virtual Machine qemu-164-instance-00000083.
Oct 14 09:28:23 compute-0 ceph-mon[74249]: pgmap v2257: 305 pgs: 305 active+clean; 134 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Oct 14 09:28:23 compute-0 systemd-udevd[394335]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.517 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8b79a5ad-84e7-4746-91ad-bc1a6680c9a2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:23 compute-0 NetworkManager[44885]: <info>  [1760434103.5399] device (tap5a63e80f-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:28:23 compute-0 NetworkManager[44885]: <info>  [1760434103.5416] device (tap5a63e80f-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:28:23 compute-0 podman[394307]: 2025-10-14 09:28:23.551178135 +0000 UTC m=+0.078439509 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base 
Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.562 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e14a343f-d418-47cb-b83b-496d79576866]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.569 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[52f732fb-e621-43e5-b184-669129d0cced]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:23 compute-0 NetworkManager[44885]: <info>  [1760434103.5711] manager: (tap0ad5af7c-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/572)
Oct 14 09:28:23 compute-0 systemd-udevd[394354]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:28:23 compute-0 podman[394306]: 2025-10-14 09:28:23.626089336 +0000 UTC m=+0.163229024 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.627 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2c05430a-5314-4180-a6a3-67ee56fa9fbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.630 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a4f517f1-d4cd-4ba7-a2ce-16b6c8a8c241]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:23 compute-0 NetworkManager[44885]: <info>  [1760434103.6584] device (tap0ad5af7c-f0): carrier: link connected
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.658 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[431703c3-1767-4da7-b193-e2ca27c8dfb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.676 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6c9887-9d61-4b75-823c-f51bd67f3933]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ad5af7c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:03:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 399], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788241, 'reachable_time': 18563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394388, 'error': None, 'target': 'ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.688 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d800a6-bddb-413c-a21b-9a03ac16dcf9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe43:3be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 788241, 'tstamp': 788241}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394389, 'error': None, 'target': 'ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.690 2 DEBUG nova.compute.manager [req-e498f7c3-5ac6-49b6-b0c2-834ea98722b1 req-1a879a05-89b6-4d55-994f-780c2489d81f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received event network-vif-plugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.691 2 DEBUG oslo_concurrency.lockutils [req-e498f7c3-5ac6-49b6-b0c2-834ea98722b1 req-1a879a05-89b6-4d55-994f-780c2489d81f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.691 2 DEBUG oslo_concurrency.lockutils [req-e498f7c3-5ac6-49b6-b0c2-834ea98722b1 req-1a879a05-89b6-4d55-994f-780c2489d81f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.691 2 DEBUG oslo_concurrency.lockutils [req-e498f7c3-5ac6-49b6-b0c2-834ea98722b1 req-1a879a05-89b6-4d55-994f-780c2489d81f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.691 2 DEBUG nova.compute.manager [req-e498f7c3-5ac6-49b6-b0c2-834ea98722b1 req-1a879a05-89b6-4d55-994f-780c2489d81f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Processing event network-vif-plugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.706 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[58f0d4a4-ee8b-4fb1-93c3-f4295be31389]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ad5af7c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:03:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 399], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788241, 'reachable_time': 18563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394390, 'error': None, 'target': 'ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.736 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6cda7639-76f7-41bf-bf03-eaf55c21a2b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.793 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4bce08cd-61d8-4248-bd1f-5bb8324f2077]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.797 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ad5af7c-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.797 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.798 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ad5af7c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:23 compute-0 NetworkManager[44885]: <info>  [1760434103.8003] manager: (tap0ad5af7c-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/573)
Oct 14 09:28:23 compute-0 kernel: tap0ad5af7c-f0: entered promiscuous mode
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.804 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ad5af7c-f0, col_values=(('external_ids', {'iface-id': 'b64d84a0-5356-4297-8228-85522d087442'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:23 compute-0 ovn_controller[152662]: 2025-10-14T09:28:23Z|01403|binding|INFO|Releasing lport b64d84a0-5356-4297-8228-85522d087442 from this chassis (sb_readonly=0)
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:23 compute-0 nova_compute[259627]: 2025-10-14 09:28:23.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.818 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0ad5af7c-fbad-46b1-979b-db7d2639a7c3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0ad5af7c-fbad-46b1-979b-db7d2639a7c3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.819 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c5a5ef4-9dac-410a-b9ec-e03eba29e18b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.819 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-0ad5af7c-fbad-46b1-979b-db7d2639a7c3
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/0ad5af7c-fbad-46b1-979b-db7d2639a7c3.pid.haproxy
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 0ad5af7c-fbad-46b1-979b-db7d2639a7c3
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:28:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.821 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3', 'env', 'PROCESS_TAG=haproxy-0ad5af7c-fbad-46b1-979b-db7d2639a7c3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0ad5af7c-fbad-46b1-979b-db7d2639a7c3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:28:24 compute-0 ovn_controller[152662]: 2025-10-14T09:28:24Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:5d:72 10.100.0.12
Oct 14 09:28:24 compute-0 ovn_controller[152662]: 2025-10-14T09:28:24Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:5d:72 10.100.0.12
Oct 14 09:28:24 compute-0 podman[394464]: 2025-10-14 09:28:24.244846605 +0000 UTC m=+0.080971912 container create de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 14 09:28:24 compute-0 systemd[1]: Started libpod-conmon-de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c.scope.
Oct 14 09:28:24 compute-0 podman[394464]: 2025-10-14 09:28:24.203886173 +0000 UTC m=+0.040011910 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:28:24 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f023d5bb6e112cd231a0a14f750e65530a9849b6d39894261763a8809936d7ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:28:24 compute-0 podman[394464]: 2025-10-14 09:28:24.343845351 +0000 UTC m=+0.179970678 container init de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:28:24 compute-0 podman[394464]: 2025-10-14 09:28:24.350498256 +0000 UTC m=+0.186623563 container start de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:28:24 compute-0 neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3[394479]: [NOTICE]   (394483) : New worker (394485) forked
Oct 14 09:28:24 compute-0 neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3[394479]: [NOTICE]   (394483) : Loading success.
Oct 14 09:28:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 134 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.555 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434104.554593, 9314cb71-9b9f-4379-90ba-61445b09c003 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.556 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] VM Started (Lifecycle Event)
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.558 2 DEBUG nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.563 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.569 2 INFO nova.virt.libvirt.driver [-] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Instance spawned successfully.
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.569 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.595 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.603 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.609 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.610 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.611 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.612 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.612 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.613 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.654 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.655 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434104.5548694, 9314cb71-9b9f-4379-90ba-61445b09c003 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.655 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] VM Paused (Lifecycle Event)
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.698 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.704 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434104.5616372, 9314cb71-9b9f-4379-90ba-61445b09c003 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.704 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] VM Resumed (Lifecycle Event)
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.711 2 INFO nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Took 8.42 seconds to spawn the instance on the hypervisor.
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.712 2 DEBUG nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.730 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.734 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.775 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.804 2 INFO nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Took 9.45 seconds to build instance.
Oct 14 09:28:24 compute-0 nova_compute[259627]: 2025-10-14 09:28:24.822 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:25 compute-0 nova_compute[259627]: 2025-10-14 09:28:25.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:25 compute-0 ceph-mon[74249]: pgmap v2258: 305 pgs: 305 active+clean; 134 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Oct 14 09:28:25 compute-0 nova_compute[259627]: 2025-10-14 09:28:25.810 2 DEBUG nova.compute.manager [req-955de7ec-b67c-470b-9a34-e20d9d72d42c req-dbe39656-eab3-4122-b93c-d278ae1b2602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received event network-vif-plugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:28:25 compute-0 nova_compute[259627]: 2025-10-14 09:28:25.810 2 DEBUG oslo_concurrency.lockutils [req-955de7ec-b67c-470b-9a34-e20d9d72d42c req-dbe39656-eab3-4122-b93c-d278ae1b2602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:25 compute-0 nova_compute[259627]: 2025-10-14 09:28:25.811 2 DEBUG oslo_concurrency.lockutils [req-955de7ec-b67c-470b-9a34-e20d9d72d42c req-dbe39656-eab3-4122-b93c-d278ae1b2602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:25 compute-0 nova_compute[259627]: 2025-10-14 09:28:25.811 2 DEBUG oslo_concurrency.lockutils [req-955de7ec-b67c-470b-9a34-e20d9d72d42c req-dbe39656-eab3-4122-b93c-d278ae1b2602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:25 compute-0 nova_compute[259627]: 2025-10-14 09:28:25.811 2 DEBUG nova.compute.manager [req-955de7ec-b67c-470b-9a34-e20d9d72d42c req-dbe39656-eab3-4122-b93c-d278ae1b2602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] No waiting events found dispatching network-vif-plugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:28:25 compute-0 nova_compute[259627]: 2025-10-14 09:28:25.811 2 WARNING nova.compute.manager [req-955de7ec-b67c-470b-9a34-e20d9d72d42c req-dbe39656-eab3-4122-b93c-d278ae1b2602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received unexpected event network-vif-plugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa for instance with vm_state active and task_state None.
Oct 14 09:28:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 167 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 188 op/s
Oct 14 09:28:27 compute-0 nova_compute[259627]: 2025-10-14 09:28:27.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:27.373 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:28:27 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:27.374 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:28:27 compute-0 nova_compute[259627]: 2025-10-14 09:28:27.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:27 compute-0 ceph-mon[74249]: pgmap v2259: 305 pgs: 305 active+clean; 167 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 188 op/s
Oct 14 09:28:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:28:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 167 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 124 op/s
Oct 14 09:28:28 compute-0 nova_compute[259627]: 2025-10-14 09:28:28.548 2 DEBUG nova.compute.manager [req-eb39e2f5-9a50-4899-95c7-c361215c8d45 req-88418871-8872-4d4f-ba86-bc553c394476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received event network-changed-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:28:28 compute-0 nova_compute[259627]: 2025-10-14 09:28:28.549 2 DEBUG nova.compute.manager [req-eb39e2f5-9a50-4899-95c7-c361215c8d45 req-88418871-8872-4d4f-ba86-bc553c394476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Refreshing instance network info cache due to event network-changed-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:28:28 compute-0 nova_compute[259627]: 2025-10-14 09:28:28.550 2 DEBUG oslo_concurrency.lockutils [req-eb39e2f5-9a50-4899-95c7-c361215c8d45 req-88418871-8872-4d4f-ba86-bc553c394476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:28:28 compute-0 nova_compute[259627]: 2025-10-14 09:28:28.550 2 DEBUG oslo_concurrency.lockutils [req-eb39e2f5-9a50-4899-95c7-c361215c8d45 req-88418871-8872-4d4f-ba86-bc553c394476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:28:28 compute-0 nova_compute[259627]: 2025-10-14 09:28:28.551 2 DEBUG nova.network.neutron [req-eb39e2f5-9a50-4899-95c7-c361215c8d45 req-88418871-8872-4d4f-ba86-bc553c394476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Refreshing network info cache for port 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:28:29 compute-0 ceph-mon[74249]: pgmap v2260: 305 pgs: 305 active+clean; 167 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 124 op/s
Oct 14 09:28:29 compute-0 nova_compute[259627]: 2025-10-14 09:28:29.879 2 DEBUG nova.network.neutron [req-eb39e2f5-9a50-4899-95c7-c361215c8d45 req-88418871-8872-4d4f-ba86-bc553c394476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Updated VIF entry in instance network info cache for port 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:28:29 compute-0 nova_compute[259627]: 2025-10-14 09:28:29.881 2 DEBUG nova.network.neutron [req-eb39e2f5-9a50-4899-95c7-c361215c8d45 req-88418871-8872-4d4f-ba86-bc553c394476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Updating instance_info_cache with network_info: [{"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:28:29 compute-0 nova_compute[259627]: 2025-10-14 09:28:29.905 2 DEBUG oslo_concurrency.lockutils [req-eb39e2f5-9a50-4899-95c7-c361215c8d45 req-88418871-8872-4d4f-ba86-bc553c394476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:28:30 compute-0 nova_compute[259627]: 2025-10-14 09:28:30.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 167 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 142 op/s
Oct 14 09:28:31 compute-0 ceph-mon[74249]: pgmap v2261: 305 pgs: 305 active+clean; 167 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 142 op/s
Oct 14 09:28:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:32.376 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:28:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 167 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 160 op/s
Oct 14 09:28:32 compute-0 nova_compute[259627]: 2025-10-14 09:28:32.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:28:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:28:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:28:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:28:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:28:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:28:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:28:32
Oct 14 09:28:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:28:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:28:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.control', 'vms', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'images']
Oct 14 09:28:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:28:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:28:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:28:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:28:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:28:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:28:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:28:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:28:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:28:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:28:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:28:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:28:33 compute-0 ceph-mon[74249]: pgmap v2262: 305 pgs: 305 active+clean; 167 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 160 op/s
Oct 14 09:28:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 167 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Oct 14 09:28:35 compute-0 nova_compute[259627]: 2025-10-14 09:28:35.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:35 compute-0 ceph-mon[74249]: pgmap v2263: 305 pgs: 305 active+clean; 167 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Oct 14 09:28:36 compute-0 ovn_controller[152662]: 2025-10-14T09:28:36Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a9:0f:a6 10.100.0.12
Oct 14 09:28:36 compute-0 ovn_controller[152662]: 2025-10-14T09:28:36Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:0f:a6 10.100.0.12
Oct 14 09:28:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 178 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.5 MiB/s wr, 164 op/s
Oct 14 09:28:36 compute-0 nova_compute[259627]: 2025-10-14 09:28:36.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:28:36 compute-0 nova_compute[259627]: 2025-10-14 09:28:36.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 09:28:37 compute-0 nova_compute[259627]: 2025-10-14 09:28:37.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:37 compute-0 ceph-mon[74249]: pgmap v2264: 305 pgs: 305 active+clean; 178 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.5 MiB/s wr, 164 op/s
Oct 14 09:28:37 compute-0 nova_compute[259627]: 2025-10-14 09:28:37.678 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:37 compute-0 nova_compute[259627]: 2025-10-14 09:28:37.679 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:37 compute-0 nova_compute[259627]: 2025-10-14 09:28:37.697 2 DEBUG nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:28:37 compute-0 nova_compute[259627]: 2025-10-14 09:28:37.790 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:37 compute-0 nova_compute[259627]: 2025-10-14 09:28:37.791 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:37 compute-0 nova_compute[259627]: 2025-10-14 09:28:37.802 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:28:37 compute-0 nova_compute[259627]: 2025-10-14 09:28:37.802 2 INFO nova.compute.claims [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:28:37 compute-0 nova_compute[259627]: 2025-10-14 09:28:37.977 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:28:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:28:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 178 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 66 op/s
Oct 14 09:28:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:28:38 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1782866377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.463 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.471 2 DEBUG nova.compute.provider_tree [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.486 2 DEBUG nova.scheduler.client.report [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.504 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.504 2 DEBUG nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.544 2 DEBUG nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.545 2 DEBUG nova.network.neutron [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:28:38 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1782866377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.564 2 INFO nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.584 2 DEBUG nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.679 2 DEBUG nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.680 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.680 2 INFO nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Creating image(s)
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.707 2 DEBUG nova.storage.rbd_utils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.735 2 DEBUG nova.storage.rbd_utils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.759 2 DEBUG nova.storage.rbd_utils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.763 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.856 2 DEBUG nova.policy [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20f3546ab30e42b5b641f67780316750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f754bf649a2404fa8dee732f5aab36e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.859 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.860 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.860 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.860 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.881 2 DEBUG nova.storage.rbd_utils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:28:38 compute-0 nova_compute[259627]: 2025-10-14 09:28:38.884 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:28:39 compute-0 nova_compute[259627]: 2025-10-14 09:28:39.153 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:28:39 compute-0 nova_compute[259627]: 2025-10-14 09:28:39.220 2 DEBUG nova.storage.rbd_utils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] resizing rbd image fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:28:39 compute-0 nova_compute[259627]: 2025-10-14 09:28:39.323 2 DEBUG nova.objects.instance [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'migration_context' on Instance uuid fb3db81b-4d6f-4736-9d4b-b1900fad6488 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:28:39 compute-0 nova_compute[259627]: 2025-10-14 09:28:39.343 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:28:39 compute-0 nova_compute[259627]: 2025-10-14 09:28:39.343 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Ensure instance console log exists: /var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:28:39 compute-0 nova_compute[259627]: 2025-10-14 09:28:39.344 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:39 compute-0 nova_compute[259627]: 2025-10-14 09:28:39.344 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:39 compute-0 nova_compute[259627]: 2025-10-14 09:28:39.344 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:39 compute-0 ceph-mon[74249]: pgmap v2265: 305 pgs: 305 active+clean; 178 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 66 op/s
Oct 14 09:28:39 compute-0 nova_compute[259627]: 2025-10-14 09:28:39.785 2 DEBUG nova.network.neutron [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Successfully created port: 3550cf12-50e7-4809-9e33-8057ba120200 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:28:40 compute-0 nova_compute[259627]: 2025-10-14 09:28:39.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:28:40 compute-0 nova_compute[259627]: 2025-10-14 09:28:40.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 218 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.0 MiB/s wr, 97 op/s
Oct 14 09:28:40 compute-0 podman[394682]: 2025-10-14 09:28:40.699157124 +0000 UTC m=+0.101556160 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:28:40 compute-0 podman[394683]: 2025-10-14 09:28:40.72446963 +0000 UTC m=+0.123790550 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 09:28:40 compute-0 nova_compute[259627]: 2025-10-14 09:28:40.880 2 DEBUG nova.network.neutron [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Successfully updated port: 3550cf12-50e7-4809-9e33-8057ba120200 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:28:40 compute-0 nova_compute[259627]: 2025-10-14 09:28:40.910 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "refresh_cache-fb3db81b-4d6f-4736-9d4b-b1900fad6488" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:28:40 compute-0 nova_compute[259627]: 2025-10-14 09:28:40.910 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquired lock "refresh_cache-fb3db81b-4d6f-4736-9d4b-b1900fad6488" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:28:40 compute-0 nova_compute[259627]: 2025-10-14 09:28:40.910 2 DEBUG nova.network.neutron [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:28:41 compute-0 nova_compute[259627]: 2025-10-14 09:28:41.032 2 DEBUG nova.compute.manager [req-ceaf4437-72c8-4ff0-8e35-61ebd51f9f22 req-15f3d7ec-1a62-4f07-8bac-874d891c436c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Received event network-changed-3550cf12-50e7-4809-9e33-8057ba120200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:28:41 compute-0 nova_compute[259627]: 2025-10-14 09:28:41.033 2 DEBUG nova.compute.manager [req-ceaf4437-72c8-4ff0-8e35-61ebd51f9f22 req-15f3d7ec-1a62-4f07-8bac-874d891c436c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Refreshing instance network info cache due to event network-changed-3550cf12-50e7-4809-9e33-8057ba120200. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:28:41 compute-0 nova_compute[259627]: 2025-10-14 09:28:41.033 2 DEBUG oslo_concurrency.lockutils [req-ceaf4437-72c8-4ff0-8e35-61ebd51f9f22 req-15f3d7ec-1a62-4f07-8bac-874d891c436c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-fb3db81b-4d6f-4736-9d4b-b1900fad6488" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:28:41 compute-0 nova_compute[259627]: 2025-10-14 09:28:41.074 2 DEBUG nova.network.neutron [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:28:41 compute-0 nova_compute[259627]: 2025-10-14 09:28:41.233 2 INFO nova.compute.manager [None req-205ddce6-9c36-481e-a7be-7598671b08c9 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Get console output
Oct 14 09:28:41 compute-0 nova_compute[259627]: 2025-10-14 09:28:41.241 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:28:41 compute-0 ceph-mon[74249]: pgmap v2266: 305 pgs: 305 active+clean; 218 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.0 MiB/s wr, 97 op/s
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.111 2 DEBUG nova.network.neutron [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Updating instance_info_cache with network_info: [{"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.137 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Releasing lock "refresh_cache-fb3db81b-4d6f-4736-9d4b-b1900fad6488" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.137 2 DEBUG nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Instance network_info: |[{"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.138 2 DEBUG oslo_concurrency.lockutils [req-ceaf4437-72c8-4ff0-8e35-61ebd51f9f22 req-15f3d7ec-1a62-4f07-8bac-874d891c436c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-fb3db81b-4d6f-4736-9d4b-b1900fad6488" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.138 2 DEBUG nova.network.neutron [req-ceaf4437-72c8-4ff0-8e35-61ebd51f9f22 req-15f3d7ec-1a62-4f07-8bac-874d891c436c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Refreshing network info cache for port 3550cf12-50e7-4809-9e33-8057ba120200 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.144 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Start _get_guest_xml network_info=[{"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.150 2 WARNING nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.156 2 DEBUG nova.virt.libvirt.host [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.157 2 DEBUG nova.virt.libvirt.host [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.161 2 DEBUG nova.virt.libvirt.host [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.162 2 DEBUG nova.virt.libvirt.host [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.163 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.163 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.164 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.165 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.165 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.165 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.166 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.166 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.166 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.167 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.167 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.167 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.172 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:28:42 compute-0 ovn_controller[152662]: 2025-10-14T09:28:42Z|01404|binding|INFO|Releasing lport b64d84a0-5356-4297-8228-85522d087442 from this chassis (sb_readonly=0)
Oct 14 09:28:42 compute-0 ovn_controller[152662]: 2025-10-14T09:28:42Z|01405|binding|INFO|Releasing lport e4065da2-8191-4cbc-a6ed-0505ac5ea1c6 from this chassis (sb_readonly=0)
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 246 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 937 KiB/s rd, 3.9 MiB/s wr, 109 op/s
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:28:42 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1855327202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.688 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.725 2 DEBUG nova.storage.rbd_utils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.731 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:28:42 compute-0 nova_compute[259627]: 2025-10-14 09:28:42.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.009 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.009 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.009 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.010 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.010 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:28:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:28:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:28:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2038311572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.162 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.164 2 DEBUG nova.virt.libvirt.vif [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:28:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-0-1042294944',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-0-1042294944',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ge',id=132,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFHCa/sppIAnQAQkdZ/1Ef+DTNecrrS5rhEQtzPrxlFpoBPEHdQG0Rr7ARx/izPzPbjo5rPoXWm8ksRfiQ+ieFxBsihNkHMZrgGUaPkR43e+YFaxNpM2eAk15miiuIGX3Q==',key_name='tempest-TestSecurityGroupsBasicOps-1939533280',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-jr53qs99',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:28:38Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=fb3db81b-4d6f-4736-9d4b-b1900fad6488,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.164 2 DEBUG nova.network.os_vif_util [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.165 2 DEBUG nova.network.os_vif_util [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:be:04,bridge_name='br-int',has_traffic_filtering=True,id=3550cf12-50e7-4809-9e33-8057ba120200,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3550cf12-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.166 2 DEBUG nova.objects.instance [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'pci_devices' on Instance uuid fb3db81b-4d6f-4736-9d4b-b1900fad6488 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.190 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:28:43 compute-0 nova_compute[259627]:   <uuid>fb3db81b-4d6f-4736-9d4b-b1900fad6488</uuid>
Oct 14 09:28:43 compute-0 nova_compute[259627]:   <name>instance-00000084</name>
Oct 14 09:28:43 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:28:43 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:28:43 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-0-1042294944</nova:name>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:28:42</nova:creationTime>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:28:43 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:28:43 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:28:43 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:28:43 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:28:43 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:28:43 compute-0 nova_compute[259627]:         <nova:user uuid="20f3546ab30e42b5b641f67780316750">tempest-TestSecurityGroupsBasicOps-1327646173-project-member</nova:user>
Oct 14 09:28:43 compute-0 nova_compute[259627]:         <nova:project uuid="5f754bf649a2404fa8dee732f5aab36e">tempest-TestSecurityGroupsBasicOps-1327646173</nova:project>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:28:43 compute-0 nova_compute[259627]:         <nova:port uuid="3550cf12-50e7-4809-9e33-8057ba120200">
Oct 14 09:28:43 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:28:43 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:28:43 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <system>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <entry name="serial">fb3db81b-4d6f-4736-9d4b-b1900fad6488</entry>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <entry name="uuid">fb3db81b-4d6f-4736-9d4b-b1900fad6488</entry>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     </system>
Oct 14 09:28:43 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:28:43 compute-0 nova_compute[259627]:   <os>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:   </os>
Oct 14 09:28:43 compute-0 nova_compute[259627]:   <features>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:   </features>
Oct 14 09:28:43 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:28:43 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:28:43 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk">
Oct 14 09:28:43 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       </source>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:28:43 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk.config">
Oct 14 09:28:43 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       </source>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:28:43 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:64:be:04"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <target dev="tap3550cf12-50"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488/console.log" append="off"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <video>
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     </video>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:28:43 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:28:43 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:28:43 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:28:43 compute-0 nova_compute[259627]: </domain>
Oct 14 09:28:43 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.192 2 DEBUG nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Preparing to wait for external event network-vif-plugged-3550cf12-50e7-4809-9e33-8057ba120200 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.192 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.193 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.193 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.194 2 DEBUG nova.virt.libvirt.vif [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:28:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-0-1042294944',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-0-1042294944',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ge',id=132,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFHCa/sppIAnQAQkdZ/1Ef+DTNecrrS5rhEQtzPrxlFpoBPEHdQG0Rr7ARx/izPzPbjo5rPoXWm8ksRfiQ+ieFxBsihNkHMZrgGUaPkR43e+YFaxNpM2eAk15miiuIGX3Q==',key_name='tempest-TestSecurityGroupsBasicOps-1939533280',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-jr53qs99',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:28:38Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=fb3db81b-4d6f-4736-9d4b-b1900fad6488,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.194 2 DEBUG nova.network.os_vif_util [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.195 2 DEBUG nova.network.os_vif_util [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:be:04,bridge_name='br-int',has_traffic_filtering=True,id=3550cf12-50e7-4809-9e33-8057ba120200,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3550cf12-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.196 2 DEBUG os_vif [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:be:04,bridge_name='br-int',has_traffic_filtering=True,id=3550cf12-50e7-4809-9e33-8057ba120200,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3550cf12-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.197 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.197 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.202 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3550cf12-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.203 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3550cf12-50, col_values=(('external_ids', {'iface-id': '3550cf12-50e7-4809-9e33-8057ba120200', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:be:04', 'vm-uuid': 'fb3db81b-4d6f-4736-9d4b-b1900fad6488'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:28:43 compute-0 NetworkManager[44885]: <info>  [1760434123.2444] manager: (tap3550cf12-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/574)
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.253 2 INFO os_vif [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:be:04,bridge_name='br-int',has_traffic_filtering=True,id=3550cf12-50e7-4809-9e33-8057ba120200,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3550cf12-50')
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.324 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.325 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.325 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No VIF found with MAC fa:16:3e:64:be:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.326 2 INFO nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Using config drive
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.349 2 DEBUG nova.storage.rbd_utils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018621902878166309 of space, bias 1.0, pg target 0.5586570863449892 quantized to 32 (current 32)
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:28:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:28:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:28:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4193183784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.483 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.503 2 INFO nova.compute.manager [None req-46a2b89b-47bc-4ae1-bdd1-d444b0552f4f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Get console output
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.508 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.569 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.569 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.577 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.577 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.582 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.582 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:28:43 compute-0 ceph-mon[74249]: pgmap v2267: 305 pgs: 305 active+clean; 246 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 937 KiB/s rd, 3.9 MiB/s wr, 109 op/s
Oct 14 09:28:43 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1855327202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:28:43 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2038311572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:28:43 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4193183784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:28:43 compute-0 sudo[394825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:28:43 compute-0 sudo[394825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:28:43 compute-0 sudo[394825]: pam_unix(sudo:session): session closed for user root
Oct 14 09:28:43 compute-0 sudo[394850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:28:43 compute-0 sudo[394850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:28:43 compute-0 sudo[394850]: pam_unix(sudo:session): session closed for user root
Oct 14 09:28:43 compute-0 sudo[394875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:28:43 compute-0 sudo[394875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:28:43 compute-0 sudo[394875]: pam_unix(sudo:session): session closed for user root
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.835 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.836 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3254MB free_disk=59.87657165527344GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.836 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.837 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.859 2 DEBUG nova.network.neutron [req-ceaf4437-72c8-4ff0-8e35-61ebd51f9f22 req-15f3d7ec-1a62-4f07-8bac-874d891c436c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Updated VIF entry in instance network info cache for port 3550cf12-50e7-4809-9e33-8057ba120200. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.860 2 DEBUG nova.network.neutron [req-ceaf4437-72c8-4ff0-8e35-61ebd51f9f22 req-15f3d7ec-1a62-4f07-8bac-874d891c436c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Updating instance_info_cache with network_info: [{"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:28:43 compute-0 sudo[394900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:28:43 compute-0 sudo[394900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.886 2 DEBUG oslo_concurrency.lockutils [req-ceaf4437-72c8-4ff0-8e35-61ebd51f9f22 req-15f3d7ec-1a62-4f07-8bac-874d891c436c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-fb3db81b-4d6f-4736-9d4b-b1900fad6488" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.966 2 INFO nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Creating config drive at /var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488/disk.config
Oct 14 09:28:43 compute-0 nova_compute[259627]: 2025-10-14 09:28:43.973 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzqwzpkb5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.034 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 7a110a3c-a2ca-4314-a190-28a4505cc26c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.034 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 9314cb71-9b9f-4379-90ba-61445b09c003 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.035 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance fb3db81b-4d6f-4736-9d4b-b1900fad6488 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.035 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.035 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.128 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzqwzpkb5" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.162 2 DEBUG nova.storage.rbd_utils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.166 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488/disk.config fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.272 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.381 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488/disk.config fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.383 2 INFO nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Deleting local config drive /var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488/disk.config because it was imported into RBD.
Oct 14 09:28:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 246 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:44 compute-0 NetworkManager[44885]: <info>  [1760434124.4969] manager: (tap3550cf12-50): new Tun device (/org/freedesktop/NetworkManager/Devices/575)
Oct 14 09:28:44 compute-0 kernel: tap3550cf12-50: entered promiscuous mode
Oct 14 09:28:44 compute-0 ovn_controller[152662]: 2025-10-14T09:28:44Z|01406|binding|INFO|Claiming lport 3550cf12-50e7-4809-9e33-8057ba120200 for this chassis.
Oct 14 09:28:44 compute-0 ovn_controller[152662]: 2025-10-14T09:28:44Z|01407|binding|INFO|3550cf12-50e7-4809-9e33-8057ba120200: Claiming fa:16:3e:64:be:04 10.100.0.7
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.511 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:be:04 10.100.0.7'], port_security=['fa:16:3e:64:be:04 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'fb3db81b-4d6f-4736-9d4b-b1900fad6488', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f09a704d-6063-4e40-b690-c967cd364b32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e76d2fee-d8c5-45a1-ac1f-55a35976452c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b90ca07-80d1-49c1-a91f-225f989dd9c6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3550cf12-50e7-4809-9e33-8057ba120200) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:28:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.512 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3550cf12-50e7-4809-9e33-8057ba120200 in datapath f09a704d-6063-4e40-b690-c967cd364b32 bound to our chassis
Oct 14 09:28:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.514 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f09a704d-6063-4e40-b690-c967cd364b32
Oct 14 09:28:44 compute-0 sudo[394900]: pam_unix(sudo:session): session closed for user root
Oct 14 09:28:44 compute-0 ovn_controller[152662]: 2025-10-14T09:28:44Z|01408|binding|INFO|Setting lport 3550cf12-50e7-4809-9e33-8057ba120200 ovn-installed in OVS
Oct 14 09:28:44 compute-0 ovn_controller[152662]: 2025-10-14T09:28:44Z|01409|binding|INFO|Setting lport 3550cf12-50e7-4809-9e33-8057ba120200 up in Southbound
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.537 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1922984b-4c86-4a10-9454-5cc4faa7c374]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:44 compute-0 systemd-udevd[395030]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:28:44 compute-0 systemd-machined[214636]: New machine qemu-165-instance-00000084.
Oct 14 09:28:44 compute-0 NetworkManager[44885]: <info>  [1760434124.5620] device (tap3550cf12-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:28:44 compute-0 NetworkManager[44885]: <info>  [1760434124.5633] device (tap3550cf12-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:28:44 compute-0 systemd[1]: Started Virtual Machine qemu-165-instance-00000084.
Oct 14 09:28:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.572 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[757d30c7-7da3-4645-a4d0-bb4ea97ea1f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.576 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[94a1461c-2c43-4453-81f2-8089ea6a3d94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:28:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:28:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:28:44 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:28:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.606 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[27a3c312-2699-4650-8364-adcf73d278d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:28:44 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:28:44 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 2b2e3883-e30c-4c78-9465-7a100b7c52c0 does not exist
Oct 14 09:28:44 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 9aa4f7a2-9c83-4a37-8685-205309344383 does not exist
Oct 14 09:28:44 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 8d42f651-2478-4f47-8b0f-7216def51906 does not exist
Oct 14 09:28:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:28:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:28:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:28:44 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:28:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:28:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:28:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.639 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b57ffaf9-cdcc-4944-8930-4eca8f4f9abf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf09a704d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:f0:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785922, 'reachable_time': 22812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395039, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.662 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f12d4450-ff40-4060-ae11-099bb3c22460]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf09a704d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785940, 'tstamp': 785940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395043, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf09a704d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785944, 'tstamp': 785944}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395043, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.665 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf09a704d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.669 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf09a704d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:28:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.669 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:28:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.670 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf09a704d-60, col_values=(('external_ids', {'iface-id': 'e4065da2-8191-4cbc-a6ed-0505ac5ea1c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:28:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.671 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.724 2 INFO nova.compute.manager [None req-4079e86d-4772-4253-a7bb-e2158f196723 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Get console output
Oct 14 09:28:44 compute-0 sudo[395045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:28:44 compute-0 sudo[395045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:28:44 compute-0 sudo[395045]: pam_unix(sudo:session): session closed for user root
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.742 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 09:28:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:28:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1149852731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.793 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.801 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:28:44 compute-0 sudo[395070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:28:44 compute-0 sudo[395070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.815 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:28:44 compute-0 sudo[395070]: pam_unix(sudo:session): session closed for user root
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.837 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:28:44 compute-0 nova_compute[259627]: 2025-10-14 09:28:44.838 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:44 compute-0 sudo[395097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:28:44 compute-0 sudo[395097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:28:44 compute-0 sudo[395097]: pam_unix(sudo:session): session closed for user root
Oct 14 09:28:44 compute-0 sudo[395122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:28:44 compute-0 sudo[395122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:28:45 compute-0 nova_compute[259627]: 2025-10-14 09:28:45.036 2 DEBUG nova.compute.manager [req-3368ca70-58eb-4720-b17d-9d6ff79c5f89 req-1fa1f65c-c69e-435e-8334-ab963be8e77e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Received event network-vif-plugged-3550cf12-50e7-4809-9e33-8057ba120200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:28:45 compute-0 nova_compute[259627]: 2025-10-14 09:28:45.036 2 DEBUG oslo_concurrency.lockutils [req-3368ca70-58eb-4720-b17d-9d6ff79c5f89 req-1fa1f65c-c69e-435e-8334-ab963be8e77e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:45 compute-0 nova_compute[259627]: 2025-10-14 09:28:45.037 2 DEBUG oslo_concurrency.lockutils [req-3368ca70-58eb-4720-b17d-9d6ff79c5f89 req-1fa1f65c-c69e-435e-8334-ab963be8e77e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:45 compute-0 nova_compute[259627]: 2025-10-14 09:28:45.037 2 DEBUG oslo_concurrency.lockutils [req-3368ca70-58eb-4720-b17d-9d6ff79c5f89 req-1fa1f65c-c69e-435e-8334-ab963be8e77e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:45 compute-0 nova_compute[259627]: 2025-10-14 09:28:45.037 2 DEBUG nova.compute.manager [req-3368ca70-58eb-4720-b17d-9d6ff79c5f89 req-1fa1f65c-c69e-435e-8334-ab963be8e77e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Processing event network-vif-plugged-3550cf12-50e7-4809-9e33-8057ba120200 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:28:45 compute-0 nova_compute[259627]: 2025-10-14 09:28:45.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:45 compute-0 podman[395186]: 2025-10-14 09:28:45.537106378 +0000 UTC m=+0.072833591 container create ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_visvesvaraya, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:28:45 compute-0 systemd[1]: Started libpod-conmon-ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5.scope.
Oct 14 09:28:45 compute-0 ceph-mon[74249]: pgmap v2268: 305 pgs: 305 active+clean; 246 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 14 09:28:45 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:28:45 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:28:45 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:28:45 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:28:45 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:28:45 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:28:45 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1149852731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:28:45 compute-0 podman[395186]: 2025-10-14 09:28:45.506482221 +0000 UTC m=+0.042209454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:28:45 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:28:45 compute-0 podman[395186]: 2025-10-14 09:28:45.649527286 +0000 UTC m=+0.185254579 container init ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_visvesvaraya, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 09:28:45 compute-0 podman[395186]: 2025-10-14 09:28:45.662038495 +0000 UTC m=+0.197765728 container start ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 09:28:45 compute-0 podman[395186]: 2025-10-14 09:28:45.666269569 +0000 UTC m=+0.201996792 container attach ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:28:45 compute-0 systemd[1]: libpod-ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5.scope: Deactivated successfully.
Oct 14 09:28:45 compute-0 exciting_visvesvaraya[395202]: 167 167
Oct 14 09:28:45 compute-0 conmon[395202]: conmon ce88c42c3ce94b15e913 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5.scope/container/memory.events
Oct 14 09:28:45 compute-0 podman[395186]: 2025-10-14 09:28:45.674346299 +0000 UTC m=+0.210073502 container died ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct 14 09:28:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-386c022e0d39d0307b71d4c188a70e173ba6ea69d27dc7e0d8a815240d96c3b8-merged.mount: Deactivated successfully.
Oct 14 09:28:45 compute-0 podman[395186]: 2025-10-14 09:28:45.725912743 +0000 UTC m=+0.261639946 container remove ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_visvesvaraya, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:28:45 compute-0 systemd[1]: libpod-conmon-ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5.scope: Deactivated successfully.
Oct 14 09:28:45 compute-0 nova_compute[259627]: 2025-10-14 09:28:45.832 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:28:45 compute-0 nova_compute[259627]: 2025-10-14 09:28:45.834 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:28:45 compute-0 podman[395244]: 2025-10-14 09:28:45.920654434 +0000 UTC m=+0.034956285 container create ece047ab51f8f5709c18e2e6b736464c99cde7f6e7d172fa346187bbc9f484d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lalande, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:28:45 compute-0 systemd[1]: Started libpod-conmon-ece047ab51f8f5709c18e2e6b736464c99cde7f6e7d172fa346187bbc9f484d3.scope.
Oct 14 09:28:46 compute-0 podman[395244]: 2025-10-14 09:28:45.907189091 +0000 UTC m=+0.021490962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:28:46 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:28:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32733c7b40182ce3455f5cced0eafb6260546af5ac2a361372827fbd6b09f413/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:28:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32733c7b40182ce3455f5cced0eafb6260546af5ac2a361372827fbd6b09f413/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:28:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32733c7b40182ce3455f5cced0eafb6260546af5ac2a361372827fbd6b09f413/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:28:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32733c7b40182ce3455f5cced0eafb6260546af5ac2a361372827fbd6b09f413/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:28:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32733c7b40182ce3455f5cced0eafb6260546af5ac2a361372827fbd6b09f413/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:28:46 compute-0 podman[395244]: 2025-10-14 09:28:46.04228091 +0000 UTC m=+0.156582781 container init ece047ab51f8f5709c18e2e6b736464c99cde7f6e7d172fa346187bbc9f484d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 09:28:46 compute-0 podman[395244]: 2025-10-14 09:28:46.048629176 +0000 UTC m=+0.162931067 container start ece047ab51f8f5709c18e2e6b736464c99cde7f6e7d172fa346187bbc9f484d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 09:28:46 compute-0 podman[395244]: 2025-10-14 09:28:46.05240506 +0000 UTC m=+0.166706901 container attach ece047ab51f8f5709c18e2e6b736464c99cde7f6e7d172fa346187bbc9f484d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lalande, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.087 2 DEBUG nova.compute.manager [req-5501dbab-95aa-4889-a1c2-c44edeaf3d2c req-b831670b-bff0-438f-9701-3bbbee812369 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received event network-changed-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.088 2 DEBUG nova.compute.manager [req-5501dbab-95aa-4889-a1c2-c44edeaf3d2c req-b831670b-bff0-438f-9701-3bbbee812369 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Refreshing instance network info cache due to event network-changed-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.088 2 DEBUG oslo_concurrency.lockutils [req-5501dbab-95aa-4889-a1c2-c44edeaf3d2c req-b831670b-bff0-438f-9701-3bbbee812369 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.088 2 DEBUG oslo_concurrency.lockutils [req-5501dbab-95aa-4889-a1c2-c44edeaf3d2c req-b831670b-bff0-438f-9701-3bbbee812369 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.088 2 DEBUG nova.network.neutron [req-5501dbab-95aa-4889-a1c2-c44edeaf3d2c req-b831670b-bff0-438f-9701-3bbbee812369 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Refreshing network info cache for port 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.183 2 DEBUG oslo_concurrency.lockutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "9314cb71-9b9f-4379-90ba-61445b09c003" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.184 2 DEBUG oslo_concurrency.lockutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.184 2 DEBUG oslo_concurrency.lockutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.185 2 DEBUG oslo_concurrency.lockutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.185 2 DEBUG oslo_concurrency.lockutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.186 2 INFO nova.compute.manager [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Terminating instance
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.187 2 DEBUG nova.compute.manager [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:28:46 compute-0 kernel: tap5a63e80f-6b (unregistering): left promiscuous mode
Oct 14 09:28:46 compute-0 NetworkManager[44885]: <info>  [1760434126.2577] device (tap5a63e80f-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:46 compute-0 ovn_controller[152662]: 2025-10-14T09:28:46Z|01410|binding|INFO|Releasing lport 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa from this chassis (sb_readonly=0)
Oct 14 09:28:46 compute-0 ovn_controller[152662]: 2025-10-14T09:28:46Z|01411|binding|INFO|Setting lport 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa down in Southbound
Oct 14 09:28:46 compute-0 ovn_controller[152662]: 2025-10-14T09:28:46Z|01412|binding|INFO|Removing iface tap5a63e80f-6b ovn-installed in OVS
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.282 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:0f:a6 10.100.0.12'], port_security=['fa:16:3e:a9:0f:a6 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9314cb71-9b9f-4379-90ba-61445b09c003', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ad5af7c-fbad-46b1-979b-db7d2639a7c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee202a2d-dda0-40f4-92dc-f1908630878a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42808c73-a7f9-4337-928b-894f78f53e75, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:28:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.284 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa in datapath 0ad5af7c-fbad-46b1-979b-db7d2639a7c3 unbound from our chassis
Oct 14 09:28:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.287 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0ad5af7c-fbad-46b1-979b-db7d2639a7c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:28:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.289 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[078159ee-9aba-4770-891b-2c67904b2cdd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.290 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3 namespace which is not needed anymore
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:46 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000083.scope: Deactivated successfully.
Oct 14 09:28:46 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000083.scope: Consumed 13.228s CPU time.
Oct 14 09:28:46 compute-0 systemd-machined[214636]: Machine qemu-164-instance-00000083 terminated.
Oct 14 09:28:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 246 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 96 op/s
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.427 2 INFO nova.virt.libvirt.driver [-] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Instance destroyed successfully.
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.428 2 DEBUG nova.objects.instance [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid 9314cb71-9b9f-4379-90ba-61445b09c003 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:28:46 compute-0 neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3[394479]: [NOTICE]   (394483) : haproxy version is 2.8.14-c23fe91
Oct 14 09:28:46 compute-0 neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3[394479]: [NOTICE]   (394483) : path to executable is /usr/sbin/haproxy
Oct 14 09:28:46 compute-0 neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3[394479]: [WARNING]  (394483) : Exiting Master process...
Oct 14 09:28:46 compute-0 neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3[394479]: [ALERT]    (394483) : Current worker (394485) exited with code 143 (Terminated)
Oct 14 09:28:46 compute-0 neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3[394479]: [WARNING]  (394483) : All workers exited. Exiting... (0)
Oct 14 09:28:46 compute-0 systemd[1]: libpod-de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c.scope: Deactivated successfully.
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.475 2 DEBUG nova.virt.libvirt.vif [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:28:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-575903537',display_name='tempest-TestNetworkBasicOps-server-575903537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-575903537',id=131,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZNK+Wbj9xAsgpWQ+tziM8jRjajF62ZTB5kTzrK/xn8p7FnYdtDdeLVfLEHTxjGjbKXIgJlw92Cf9a6ZSykZsO+5buce9Y3MqTSkOEHFe9bsgLvMZ76ONZmB8tSQVQONA==',key_name='tempest-TestNetworkBasicOps-1689754690',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:28:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-eaasug7b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:28:24Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=9314cb71-9b9f-4379-90ba-61445b09c003,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.476 2 DEBUG nova.network.os_vif_util [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.477 2 DEBUG nova.network.os_vif_util [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa,network=Network(0ad5af7c-fbad-46b1-979b-db7d2639a7c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a63e80f-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.478 2 DEBUG os_vif [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa,network=Network(0ad5af7c-fbad-46b1-979b-db7d2639a7c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a63e80f-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:28:46 compute-0 podman[395310]: 2025-10-14 09:28:46.481428221 +0000 UTC m=+0.071176650 container died de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.482 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a63e80f-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.490 2 INFO os_vif [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa,network=Network(0ad5af7c-fbad-46b1-979b-db7d2639a7c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a63e80f-6b')
Oct 14 09:28:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c-userdata-shm.mount: Deactivated successfully.
Oct 14 09:28:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f023d5bb6e112cd231a0a14f750e65530a9849b6d39894261763a8809936d7ae-merged.mount: Deactivated successfully.
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.526 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434126.5262241, fb3db81b-4d6f-4736-9d4b-b1900fad6488 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.527 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] VM Started (Lifecycle Event)
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.529 2 DEBUG nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:28:46 compute-0 podman[395310]: 2025-10-14 09:28:46.541427393 +0000 UTC m=+0.131175802 container cleanup de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:28:46 compute-0 systemd[1]: libpod-conmon-de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c.scope: Deactivated successfully.
Oct 14 09:28:46 compute-0 podman[395366]: 2025-10-14 09:28:46.630828262 +0000 UTC m=+0.058167558 container remove de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:28:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.637 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd56a2f-e248-4950-833d-3ce7f1561ed0]: (4, ('Tue Oct 14 09:28:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3 (de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c)\nde148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c\nTue Oct 14 09:28:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3 (de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c)\nde148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.643 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[975de803-0474-423f-acf2-96cc759c1200]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.644 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ad5af7c-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:28:46 compute-0 kernel: tap0ad5af7c-f0: left promiscuous mode
Oct 14 09:28:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.670 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[52b369af-d0dc-4819-8443-f381bd43be2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.687 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.687 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:28:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.687 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[61b659a5-72a5-4f26-bed7-e4177177a997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.688 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ae6e7a35-fce1-416e-a81e-3471f3d91acc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.691 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.694 2 INFO nova.virt.libvirt.driver [-] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Instance spawned successfully.
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.694 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:28:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.708 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2c9f1d11-bee8-4000-96df-d785b67ba38e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788231, 'reachable_time': 19998, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395380, 'error': None, 'target': 'ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d0ad5af7c\x2dfbad\x2d46b1\x2d979b\x2ddb7d2639a7c3.mount: Deactivated successfully.
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.714 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.714 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434126.5264158, fb3db81b-4d6f-4736-9d4b-b1900fad6488 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.714 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] VM Paused (Lifecycle Event)
Oct 14 09:28:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.715 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:28:46 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.716 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[8a403ea4-825e-4077-a6ea-c0211f3d8cfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.719 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.720 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.720 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.721 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.721 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.721 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.733 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.739 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434126.672971, fb3db81b-4d6f-4736-9d4b-b1900fad6488 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.739 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] VM Resumed (Lifecycle Event)
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.777 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.782 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.801 2 INFO nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Took 8.12 seconds to spawn the instance on the hypervisor.
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.801 2 DEBUG nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.836 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.873 2 INFO nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Took 9.12 seconds to build instance.
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.883 2 INFO nova.virt.libvirt.driver [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Deleting instance files /var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003_del
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.883 2 INFO nova.virt.libvirt.driver [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Deletion of /var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003_del complete
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.890 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.931 2 INFO nova.compute.manager [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.932 2 DEBUG oslo.service.loopingcall [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.932 2 DEBUG nova.compute.manager [-] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.932 2 DEBUG nova.network.neutron [-] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:28:46 compute-0 nova_compute[259627]: 2025-10-14 09:28:46.992 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.151 2 DEBUG nova.compute.manager [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Received event network-vif-plugged-3550cf12-50e7-4809-9e33-8057ba120200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.152 2 DEBUG oslo_concurrency.lockutils [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.152 2 DEBUG oslo_concurrency.lockutils [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.152 2 DEBUG oslo_concurrency.lockutils [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.153 2 DEBUG nova.compute.manager [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] No waiting events found dispatching network-vif-plugged-3550cf12-50e7-4809-9e33-8057ba120200 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.153 2 WARNING nova.compute.manager [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Received unexpected event network-vif-plugged-3550cf12-50e7-4809-9e33-8057ba120200 for instance with vm_state active and task_state None.
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.153 2 DEBUG nova.compute.manager [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received event network-vif-unplugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.153 2 DEBUG oslo_concurrency.lockutils [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.153 2 DEBUG oslo_concurrency.lockutils [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.154 2 DEBUG oslo_concurrency.lockutils [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.154 2 DEBUG nova.compute.manager [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] No waiting events found dispatching network-vif-unplugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.154 2 DEBUG nova.compute.manager [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received event network-vif-unplugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.154 2 DEBUG nova.compute.manager [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received event network-vif-plugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.154 2 DEBUG oslo_concurrency.lockutils [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.155 2 DEBUG oslo_concurrency.lockutils [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.155 2 DEBUG oslo_concurrency.lockutils [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.155 2 DEBUG nova.compute.manager [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] No waiting events found dispatching network-vif-plugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.155 2 WARNING nova.compute.manager [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received unexpected event network-vif-plugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa for instance with vm_state active and task_state deleting.
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.157 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.157 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.157 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.157 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7a110a3c-a2ca-4314-a190-28a4505cc26c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:28:47 compute-0 peaceful_lalande[395284]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:28:47 compute-0 peaceful_lalande[395284]: --> relative data size: 1.0
Oct 14 09:28:47 compute-0 peaceful_lalande[395284]: --> All data devices are unavailable
Oct 14 09:28:47 compute-0 systemd[1]: libpod-ece047ab51f8f5709c18e2e6b736464c99cde7f6e7d172fa346187bbc9f484d3.scope: Deactivated successfully.
Oct 14 09:28:47 compute-0 podman[395244]: 2025-10-14 09:28:47.209816999 +0000 UTC m=+1.324118860 container died ece047ab51f8f5709c18e2e6b736464c99cde7f6e7d172fa346187bbc9f484d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lalande, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 09:28:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-32733c7b40182ce3455f5cced0eafb6260546af5ac2a361372827fbd6b09f413-merged.mount: Deactivated successfully.
Oct 14 09:28:47 compute-0 podman[395244]: 2025-10-14 09:28:47.265834703 +0000 UTC m=+1.380136554 container remove ece047ab51f8f5709c18e2e6b736464c99cde7f6e7d172fa346187bbc9f484d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lalande, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:28:47 compute-0 systemd[1]: libpod-conmon-ece047ab51f8f5709c18e2e6b736464c99cde7f6e7d172fa346187bbc9f484d3.scope: Deactivated successfully.
Oct 14 09:28:47 compute-0 sudo[395122]: pam_unix(sudo:session): session closed for user root
Oct 14 09:28:47 compute-0 sudo[395418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.393 2 DEBUG nova.network.neutron [req-5501dbab-95aa-4889-a1c2-c44edeaf3d2c req-b831670b-bff0-438f-9701-3bbbee812369 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Updated VIF entry in instance network info cache for port 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.394 2 DEBUG nova.network.neutron [req-5501dbab-95aa-4889-a1c2-c44edeaf3d2c req-b831670b-bff0-438f-9701-3bbbee812369 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Updating instance_info_cache with network_info: [{"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:28:47 compute-0 sudo[395418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:28:47 compute-0 sudo[395418]: pam_unix(sudo:session): session closed for user root
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.419 2 DEBUG oslo_concurrency.lockutils [req-5501dbab-95aa-4889-a1c2-c44edeaf3d2c req-b831670b-bff0-438f-9701-3bbbee812369 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:28:47 compute-0 sudo[395443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:28:47 compute-0 sudo[395443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:28:47 compute-0 sudo[395443]: pam_unix(sudo:session): session closed for user root
Oct 14 09:28:47 compute-0 sudo[395468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:28:47 compute-0 sudo[395468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:28:47 compute-0 sudo[395468]: pam_unix(sudo:session): session closed for user root
Oct 14 09:28:47 compute-0 sudo[395493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.581 2 DEBUG nova.network.neutron [-] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:28:47 compute-0 sudo[395493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.601 2 INFO nova.compute.manager [-] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Took 0.67 seconds to deallocate network for instance.
Oct 14 09:28:47 compute-0 ceph-mon[74249]: pgmap v2269: 305 pgs: 305 active+clean; 246 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 96 op/s
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.671 2 DEBUG oslo_concurrency.lockutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.672 2 DEBUG oslo_concurrency.lockutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:28:47 compute-0 nova_compute[259627]: 2025-10-14 09:28:47.781 2 DEBUG oslo_concurrency.processutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:28:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:28:48 compute-0 podman[395573]: 2025-10-14 09:28:48.042249658 +0000 UTC m=+0.068431762 container create ffeea182c969d631a3e8b872676fa879de0d45c6c9366c19d2fed9257ee86c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:28:48 compute-0 systemd[1]: Started libpod-conmon-ffeea182c969d631a3e8b872676fa879de0d45c6c9366c19d2fed9257ee86c82.scope.
Oct 14 09:28:48 compute-0 podman[395573]: 2025-10-14 09:28:48.023628299 +0000 UTC m=+0.049810423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:28:48 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:28:48 compute-0 podman[395573]: 2025-10-14 09:28:48.151392655 +0000 UTC m=+0.177574839 container init ffeea182c969d631a3e8b872676fa879de0d45c6c9366c19d2fed9257ee86c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_visvesvaraya, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:28:48 compute-0 podman[395573]: 2025-10-14 09:28:48.165484263 +0000 UTC m=+0.191666377 container start ffeea182c969d631a3e8b872676fa879de0d45c6c9366c19d2fed9257ee86c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_visvesvaraya, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:28:48 compute-0 podman[395573]: 2025-10-14 09:28:48.169990095 +0000 UTC m=+0.196172329 container attach ffeea182c969d631a3e8b872676fa879de0d45c6c9366c19d2fed9257ee86c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:28:48 compute-0 brave_visvesvaraya[395595]: 167 167
Oct 14 09:28:48 compute-0 systemd[1]: libpod-ffeea182c969d631a3e8b872676fa879de0d45c6c9366c19d2fed9257ee86c82.scope: Deactivated successfully.
Oct 14 09:28:48 compute-0 podman[395600]: 2025-10-14 09:28:48.242891746 +0000 UTC m=+0.046329566 container died ffeea182c969d631a3e8b872676fa879de0d45c6c9366c19d2fed9257ee86c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:28:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-38fb6f5d08bc56e428c0361843182f654c5429059ff0dd1c1cfe164013565ab0-merged.mount: Deactivated successfully.
Oct 14 09:28:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:28:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4000951975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:28:48 compute-0 podman[395600]: 2025-10-14 09:28:48.283327625 +0000 UTC m=+0.086765395 container remove ffeea182c969d631a3e8b872676fa879de0d45c6c9366c19d2fed9257ee86c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:28:48 compute-0 systemd[1]: libpod-conmon-ffeea182c969d631a3e8b872676fa879de0d45c6c9366c19d2fed9257ee86c82.scope: Deactivated successfully.
Oct 14 09:28:48 compute-0 nova_compute[259627]: 2025-10-14 09:28:48.299 2 DEBUG oslo_concurrency.processutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:28:48 compute-0 nova_compute[259627]: 2025-10-14 09:28:48.309 2 DEBUG nova.compute.provider_tree [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:28:48 compute-0 nova_compute[259627]: 2025-10-14 09:28:48.335 2 DEBUG nova.scheduler.client.report [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:28:48 compute-0 nova_compute[259627]: 2025-10-14 09:28:48.352 2 DEBUG oslo_concurrency.lockutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:48 compute-0 nova_compute[259627]: 2025-10-14 09:28:48.377 2 INFO nova.scheduler.client.report [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance 9314cb71-9b9f-4379-90ba-61445b09c003
Oct 14 09:28:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 246 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 2.5 MiB/s wr, 66 op/s
Oct 14 09:28:48 compute-0 nova_compute[259627]: 2025-10-14 09:28:48.452 2 DEBUG oslo_concurrency.lockutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:28:48 compute-0 podman[395623]: 2025-10-14 09:28:48.528441422 +0000 UTC m=+0.067997761 container create 36e6f5616bd12d127c5450884da24b6e3094abc04d7ca8febb5fd31effc940f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_galileo, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 09:28:48 compute-0 systemd[1]: Started libpod-conmon-36e6f5616bd12d127c5450884da24b6e3094abc04d7ca8febb5fd31effc940f4.scope.
Oct 14 09:28:48 compute-0 podman[395623]: 2025-10-14 09:28:48.499462766 +0000 UTC m=+0.039019155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:28:48 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4000951975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:28:48 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4719913f2a9ae86b6ebec698b858525c09c34c4f11dabcfb7c4ad81ec27e7354/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4719913f2a9ae86b6ebec698b858525c09c34c4f11dabcfb7c4ad81ec27e7354/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4719913f2a9ae86b6ebec698b858525c09c34c4f11dabcfb7c4ad81ec27e7354/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4719913f2a9ae86b6ebec698b858525c09c34c4f11dabcfb7c4ad81ec27e7354/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:28:48 compute-0 podman[395623]: 2025-10-14 09:28:48.64210912 +0000 UTC m=+0.181665459 container init 36e6f5616bd12d127c5450884da24b6e3094abc04d7ca8febb5fd31effc940f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_galileo, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 09:28:48 compute-0 podman[395623]: 2025-10-14 09:28:48.654488606 +0000 UTC m=+0.194044935 container start 36e6f5616bd12d127c5450884da24b6e3094abc04d7ca8febb5fd31effc940f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_galileo, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:28:48 compute-0 podman[395623]: 2025-10-14 09:28:48.659146841 +0000 UTC m=+0.198703230 container attach 36e6f5616bd12d127c5450884da24b6e3094abc04d7ca8febb5fd31effc940f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:28:49 compute-0 nova_compute[259627]: 2025-10-14 09:28:49.265 2 DEBUG nova.compute.manager [req-35790b4c-97bc-481f-971e-3535542d9667 req-bca5c904-f152-451d-a1b4-201a452e942d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received event network-vif-deleted-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]: {
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:     "0": [
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:         {
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "devices": [
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "/dev/loop3"
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             ],
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "lv_name": "ceph_lv0",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "lv_size": "21470642176",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "name": "ceph_lv0",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "tags": {
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.cluster_name": "ceph",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.crush_device_class": "",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.encrypted": "0",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.osd_id": "0",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.type": "block",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.vdo": "0"
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             },
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "type": "block",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "vg_name": "ceph_vg0"
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:         }
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:     ],
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:     "1": [
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:         {
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "devices": [
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "/dev/loop4"
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             ],
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "lv_name": "ceph_lv1",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "lv_size": "21470642176",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "name": "ceph_lv1",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "tags": {
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.cluster_name": "ceph",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.crush_device_class": "",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.encrypted": "0",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.osd_id": "1",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.type": "block",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.vdo": "0"
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             },
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "type": "block",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "vg_name": "ceph_vg1"
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:         }
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:     ],
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:     "2": [
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:         {
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "devices": [
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "/dev/loop5"
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             ],
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "lv_name": "ceph_lv2",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "lv_size": "21470642176",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "name": "ceph_lv2",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "tags": {
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.cluster_name": "ceph",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.crush_device_class": "",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.encrypted": "0",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.osd_id": "2",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.type": "block",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:                 "ceph.vdo": "0"
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             },
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "type": "block",
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:             "vg_name": "ceph_vg2"
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:         }
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]:     ]
Oct 14 09:28:49 compute-0 vigilant_galileo[395640]: }
Oct 14 09:28:49 compute-0 systemd[1]: libpod-36e6f5616bd12d127c5450884da24b6e3094abc04d7ca8febb5fd31effc940f4.scope: Deactivated successfully.
Oct 14 09:28:49 compute-0 podman[395623]: 2025-10-14 09:28:49.52357009 +0000 UTC m=+1.063126389 container died 36e6f5616bd12d127c5450884da24b6e3094abc04d7ca8febb5fd31effc940f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_galileo, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:28:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4719913f2a9ae86b6ebec698b858525c09c34c4f11dabcfb7c4ad81ec27e7354-merged.mount: Deactivated successfully.
Oct 14 09:28:49 compute-0 podman[395623]: 2025-10-14 09:28:49.598742208 +0000 UTC m=+1.138298517 container remove 36e6f5616bd12d127c5450884da24b6e3094abc04d7ca8febb5fd31effc940f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_galileo, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:28:49 compute-0 ceph-mon[74249]: pgmap v2270: 305 pgs: 305 active+clean; 246 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 2.5 MiB/s wr, 66 op/s
Oct 14 09:28:49 compute-0 systemd[1]: libpod-conmon-36e6f5616bd12d127c5450884da24b6e3094abc04d7ca8febb5fd31effc940f4.scope: Deactivated successfully.
Oct 14 09:28:49 compute-0 sudo[395493]: pam_unix(sudo:session): session closed for user root
Oct 14 09:28:49 compute-0 sudo[395663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:28:49 compute-0 sudo[395663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:28:49 compute-0 sudo[395663]: pam_unix(sudo:session): session closed for user root
Oct 14 09:28:49 compute-0 sudo[395688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:28:49 compute-0 sudo[395688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:28:49 compute-0 sudo[395688]: pam_unix(sudo:session): session closed for user root
Oct 14 09:28:49 compute-0 sudo[395713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:28:49 compute-0 sudo[395713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:28:49 compute-0 sudo[395713]: pam_unix(sudo:session): session closed for user root
Oct 14 09:28:49 compute-0 nova_compute[259627]: 2025-10-14 09:28:49.942 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updating instance_info_cache with network_info: [{"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:28:49 compute-0 nova_compute[259627]: 2025-10-14 09:28:49.960 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:28:49 compute-0 nova_compute[259627]: 2025-10-14 09:28:49.961 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:28:49 compute-0 nova_compute[259627]: 2025-10-14 09:28:49.962 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:28:49 compute-0 nova_compute[259627]: 2025-10-14 09:28:49.962 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:28:49 compute-0 sudo[395738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:28:49 compute-0 sudo[395738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:28:50 compute-0 nova_compute[259627]: 2025-10-14 09:28:50.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 209 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.5 MiB/s wr, 105 op/s
Oct 14 09:28:50 compute-0 podman[395805]: 2025-10-14 09:28:50.434828707 +0000 UTC m=+0.049463463 container create e3858e6d897119a94bfa5fd4cd9a817a585d2cf514073185f0fab968325b8a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 09:28:50 compute-0 systemd[1]: Started libpod-conmon-e3858e6d897119a94bfa5fd4cd9a817a585d2cf514073185f0fab968325b8a48.scope.
Oct 14 09:28:50 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:28:50 compute-0 podman[395805]: 2025-10-14 09:28:50.41348049 +0000 UTC m=+0.028115336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:28:50 compute-0 podman[395805]: 2025-10-14 09:28:50.52034059 +0000 UTC m=+0.134975386 container init e3858e6d897119a94bfa5fd4cd9a817a585d2cf514073185f0fab968325b8a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:28:50 compute-0 podman[395805]: 2025-10-14 09:28:50.529148938 +0000 UTC m=+0.143783694 container start e3858e6d897119a94bfa5fd4cd9a817a585d2cf514073185f0fab968325b8a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:28:50 compute-0 podman[395805]: 2025-10-14 09:28:50.53329487 +0000 UTC m=+0.147929706 container attach e3858e6d897119a94bfa5fd4cd9a817a585d2cf514073185f0fab968325b8a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 09:28:50 compute-0 zen_morse[395821]: 167 167
Oct 14 09:28:50 compute-0 systemd[1]: libpod-e3858e6d897119a94bfa5fd4cd9a817a585d2cf514073185f0fab968325b8a48.scope: Deactivated successfully.
Oct 14 09:28:50 compute-0 podman[395805]: 2025-10-14 09:28:50.535551766 +0000 UTC m=+0.150186532 container died e3858e6d897119a94bfa5fd4cd9a817a585d2cf514073185f0fab968325b8a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:28:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8f7a85bdae22fac6eee88101392a013edfc1ae108344148c37642d17c44b8ce-merged.mount: Deactivated successfully.
Oct 14 09:28:50 compute-0 podman[395805]: 2025-10-14 09:28:50.585858439 +0000 UTC m=+0.200493225 container remove e3858e6d897119a94bfa5fd4cd9a817a585d2cf514073185f0fab968325b8a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:28:50 compute-0 systemd[1]: libpod-conmon-e3858e6d897119a94bfa5fd4cd9a817a585d2cf514073185f0fab968325b8a48.scope: Deactivated successfully.
Oct 14 09:28:50 compute-0 podman[395845]: 2025-10-14 09:28:50.805056275 +0000 UTC m=+0.049420992 container create cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_northcutt, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 09:28:50 compute-0 systemd[1]: Started libpod-conmon-cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e.scope.
Oct 14 09:28:50 compute-0 podman[395845]: 2025-10-14 09:28:50.787793899 +0000 UTC m=+0.032158626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:28:50 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d0055e1f5c218e39e6aa9b14d83197e02a002106da5b52161c9ed157b0255d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d0055e1f5c218e39e6aa9b14d83197e02a002106da5b52161c9ed157b0255d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d0055e1f5c218e39e6aa9b14d83197e02a002106da5b52161c9ed157b0255d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d0055e1f5c218e39e6aa9b14d83197e02a002106da5b52161c9ed157b0255d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:28:50 compute-0 podman[395845]: 2025-10-14 09:28:50.919183635 +0000 UTC m=+0.163548382 container init cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_northcutt, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:28:50 compute-0 podman[395845]: 2025-10-14 09:28:50.928251829 +0000 UTC m=+0.172616566 container start cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 09:28:50 compute-0 podman[395845]: 2025-10-14 09:28:50.934042752 +0000 UTC m=+0.178407509 container attach cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_northcutt, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 09:28:50 compute-0 nova_compute[259627]: 2025-10-14 09:28:50.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:28:51 compute-0 nova_compute[259627]: 2025-10-14 09:28:51.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:51 compute-0 ceph-mon[74249]: pgmap v2271: 305 pgs: 305 active+clean; 209 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.5 MiB/s wr, 105 op/s
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]: {
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:         "osd_id": 2,
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:         "type": "bluestore"
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:     },
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:         "osd_id": 1,
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:         "type": "bluestore"
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:     },
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:         "osd_id": 0,
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:         "type": "bluestore"
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]:     }
Oct 14 09:28:52 compute-0 dreamy_northcutt[395861]: }
Oct 14 09:28:52 compute-0 systemd[1]: libpod-cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e.scope: Deactivated successfully.
Oct 14 09:28:52 compute-0 systemd[1]: libpod-cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e.scope: Consumed 1.170s CPU time.
Oct 14 09:28:52 compute-0 podman[395894]: 2025-10-14 09:28:52.150204743 +0000 UTC m=+0.032832712 container died cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:28:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-20d0055e1f5c218e39e6aa9b14d83197e02a002106da5b52161c9ed157b0255d-merged.mount: Deactivated successfully.
Oct 14 09:28:52 compute-0 podman[395894]: 2025-10-14 09:28:52.223564156 +0000 UTC m=+0.106192105 container remove cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_northcutt, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 09:28:52 compute-0 systemd[1]: libpod-conmon-cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e.scope: Deactivated successfully.
Oct 14 09:28:52 compute-0 sudo[395738]: pam_unix(sudo:session): session closed for user root
Oct 14 09:28:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:28:52 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:28:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:28:52 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:28:52 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 327edd82-5740-4f51-9ae8-1a86ddce70f6 does not exist
Oct 14 09:28:52 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 4717444d-2d99-476f-8bf4-26530edc97c1 does not exist
Oct 14 09:28:52 compute-0 sudo[395909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:28:52 compute-0 sudo[395909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:28:52 compute-0 sudo[395909]: pam_unix(sudo:session): session closed for user root
Oct 14 09:28:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 167 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 964 KiB/s wr, 132 op/s
Oct 14 09:28:52 compute-0 sudo[395934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:28:52 compute-0 sudo[395934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:28:52 compute-0 sudo[395934]: pam_unix(sudo:session): session closed for user root
Oct 14 09:28:52 compute-0 ovn_controller[152662]: 2025-10-14T09:28:52Z|01413|binding|INFO|Releasing lport e4065da2-8191-4cbc-a6ed-0505ac5ea1c6 from this chassis (sb_readonly=0)
Oct 14 09:28:52 compute-0 nova_compute[259627]: 2025-10-14 09:28:52.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:28:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:28:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:28:53 compute-0 ceph-mon[74249]: pgmap v2272: 305 pgs: 305 active+clean; 167 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 964 KiB/s wr, 132 op/s
Oct 14 09:28:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 167 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 102 op/s
Oct 14 09:28:54 compute-0 podman[395961]: 2025-10-14 09:28:54.700322065 +0000 UTC m=+0.107380454 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:28:54 compute-0 podman[395960]: 2025-10-14 09:28:54.739420471 +0000 UTC m=+0.144365438 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:28:54 compute-0 nova_compute[259627]: 2025-10-14 09:28:54.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:28:54 compute-0 nova_compute[259627]: 2025-10-14 09:28:54.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:28:55 compute-0 nova_compute[259627]: 2025-10-14 09:28:55.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:55 compute-0 ceph-mon[74249]: pgmap v2273: 305 pgs: 305 active+clean; 167 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 102 op/s
Oct 14 09:28:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 103 op/s
Oct 14 09:28:56 compute-0 nova_compute[259627]: 2025-10-14 09:28:56.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:28:57 compute-0 ceph-mon[74249]: pgmap v2274: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 103 op/s
Oct 14 09:28:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:28:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 KiB/s wr, 97 op/s
Oct 14 09:28:58 compute-0 ovn_controller[152662]: 2025-10-14T09:28:58Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:64:be:04 10.100.0.7
Oct 14 09:28:58 compute-0 ovn_controller[152662]: 2025-10-14T09:28:58Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:be:04 10.100.0.7
Oct 14 09:28:59 compute-0 ceph-mon[74249]: pgmap v2275: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 KiB/s wr, 97 op/s
Oct 14 09:29:00 compute-0 nova_compute[259627]: 2025-10-14 09:29:00.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 183 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 116 op/s
Oct 14 09:29:01 compute-0 nova_compute[259627]: 2025-10-14 09:29:01.424 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434126.4195702, 9314cb71-9b9f-4379-90ba-61445b09c003 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:29:01 compute-0 nova_compute[259627]: 2025-10-14 09:29:01.425 2 INFO nova.compute.manager [-] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] VM Stopped (Lifecycle Event)
Oct 14 09:29:01 compute-0 nova_compute[259627]: 2025-10-14 09:29:01.452 2 DEBUG nova.compute.manager [None req-1352b517-72e4-4122-b8e3-f4ae766c41e0 - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:29:01 compute-0 ceph-mon[74249]: pgmap v2276: 305 pgs: 305 active+clean; 183 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 116 op/s
Oct 14 09:29:01 compute-0 nova_compute[259627]: 2025-10-14 09:29:01.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct 14 09:29:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:29:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:29:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:29:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:29:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:29:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:29:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:29:03 compute-0 ceph-mon[74249]: pgmap v2277: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct 14 09:29:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.237 2 DEBUG oslo_concurrency.lockutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.238 2 DEBUG oslo_concurrency.lockutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.239 2 DEBUG oslo_concurrency.lockutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.239 2 DEBUG oslo_concurrency.lockutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.240 2 DEBUG oslo_concurrency.lockutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.242 2 INFO nova.compute.manager [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Terminating instance
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.243 2 DEBUG nova.compute.manager [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:29:05 compute-0 kernel: tap3550cf12-50 (unregistering): left promiscuous mode
Oct 14 09:29:05 compute-0 NetworkManager[44885]: <info>  [1760434145.3068] device (tap3550cf12-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:29:05 compute-0 ovn_controller[152662]: 2025-10-14T09:29:05Z|01414|binding|INFO|Releasing lport 3550cf12-50e7-4809-9e33-8057ba120200 from this chassis (sb_readonly=0)
Oct 14 09:29:05 compute-0 ovn_controller[152662]: 2025-10-14T09:29:05Z|01415|binding|INFO|Setting lport 3550cf12-50e7-4809-9e33-8057ba120200 down in Southbound
Oct 14 09:29:05 compute-0 ovn_controller[152662]: 2025-10-14T09:29:05Z|01416|binding|INFO|Removing iface tap3550cf12-50 ovn-installed in OVS
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.337 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:be:04 10.100.0.7'], port_security=['fa:16:3e:64:be:04 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'fb3db81b-4d6f-4736-9d4b-b1900fad6488', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f09a704d-6063-4e40-b690-c967cd364b32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e76d2fee-d8c5-45a1-ac1f-55a35976452c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b90ca07-80d1-49c1-a91f-225f989dd9c6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3550cf12-50e7-4809-9e33-8057ba120200) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:29:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.339 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3550cf12-50e7-4809-9e33-8057ba120200 in datapath f09a704d-6063-4e40-b690-c967cd364b32 unbound from our chassis
Oct 14 09:29:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.341 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f09a704d-6063-4e40-b690-c967cd364b32
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.363 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c7733bdc-bb62-4ad2-9e33-6f4db693bf72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:05 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000084.scope: Deactivated successfully.
Oct 14 09:29:05 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000084.scope: Consumed 13.422s CPU time.
Oct 14 09:29:05 compute-0 systemd-machined[214636]: Machine qemu-165-instance-00000084 terminated.
Oct 14 09:29:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.401 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[77b84394-3235-4f58-8a81-28237fc0631a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.407 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1383e082-cb1e-4df1-a598-e8478a590986]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.451 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5a652061-b8ee-440b-b3b8-2dc52f7940eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.481 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ba40fb-085f-4f4b-90c6-e9305cc6a089]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf09a704d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:f0:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785922, 'reachable_time': 22812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396019, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:05 compute-0 ceph-mon[74249]: pgmap v2278: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.494 2 INFO nova.virt.libvirt.driver [-] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Instance destroyed successfully.
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.495 2 DEBUG nova.objects.instance [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'resources' on Instance uuid fb3db81b-4d6f-4736-9d4b-b1900fad6488 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.514 2 DEBUG nova.virt.libvirt.vif [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:28:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-0-1042294944',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-0-1042294944',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ge',id=132,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFHCa/sppIAnQAQkdZ/1Ef+DTNecrrS5rhEQtzPrxlFpoBPEHdQG0Rr7ARx/izPzPbjo5rPoXWm8ksRfiQ+ieFxBsihNkHMZrgGUaPkR43e+YFaxNpM2eAk15miiuIGX3Q==',key_name='tempest-TestSecurityGroupsBasicOps-1939533280',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:28:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-jr53qs99',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:28:46Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=fb3db81b-4d6f-4736-9d4b-b1900fad6488,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.515 2 DEBUG nova.network.os_vif_util [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:29:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.515 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[33658422-8cdf-4148-be4f-2b69efbf511b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf09a704d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785940, 'tstamp': 785940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396025, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf09a704d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785944, 'tstamp': 785944}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396025, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.517 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf09a704d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.517 2 DEBUG nova.network.os_vif_util [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:be:04,bridge_name='br-int',has_traffic_filtering=True,id=3550cf12-50e7-4809-9e33-8057ba120200,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3550cf12-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.518 2 DEBUG os_vif [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:be:04,bridge_name='br-int',has_traffic_filtering=True,id=3550cf12-50e7-4809-9e33-8057ba120200,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3550cf12-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.523 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3550cf12-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.529 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf09a704d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.529 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:29:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.530 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf09a704d-60, col_values=(('external_ids', {'iface-id': 'e4065da2-8191-4cbc-a6ed-0505ac5ea1c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:05 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.530 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.534 2 INFO os_vif [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:be:04,bridge_name='br-int',has_traffic_filtering=True,id=3550cf12-50e7-4809-9e33-8057ba120200,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3550cf12-50')
Oct 14 09:29:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:29:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2201009748' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:29:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:29:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2201009748' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.957 2 INFO nova.virt.libvirt.driver [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Deleting instance files /var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488_del
Oct 14 09:29:05 compute-0 nova_compute[259627]: 2025-10-14 09:29:05.957 2 INFO nova.virt.libvirt.driver [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Deletion of /var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488_del complete
Oct 14 09:29:06 compute-0 nova_compute[259627]: 2025-10-14 09:29:06.029 2 INFO nova.compute.manager [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 14 09:29:06 compute-0 nova_compute[259627]: 2025-10-14 09:29:06.030 2 DEBUG oslo.service.loopingcall [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:29:06 compute-0 nova_compute[259627]: 2025-10-14 09:29:06.032 2 DEBUG nova.compute.manager [-] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:29:06 compute-0 nova_compute[259627]: 2025-10-14 09:29:06.032 2 DEBUG nova.network.neutron [-] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:29:06 compute-0 nova_compute[259627]: 2025-10-14 09:29:06.256 2 DEBUG nova.compute.manager [req-0b585b7d-fa4d-4741-89d1-acd3d317838d req-78ccef2c-d6ed-4fb0-a4a2-29c19324fa9e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Received event network-vif-unplugged-3550cf12-50e7-4809-9e33-8057ba120200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:29:06 compute-0 nova_compute[259627]: 2025-10-14 09:29:06.256 2 DEBUG oslo_concurrency.lockutils [req-0b585b7d-fa4d-4741-89d1-acd3d317838d req-78ccef2c-d6ed-4fb0-a4a2-29c19324fa9e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:06 compute-0 nova_compute[259627]: 2025-10-14 09:29:06.257 2 DEBUG oslo_concurrency.lockutils [req-0b585b7d-fa4d-4741-89d1-acd3d317838d req-78ccef2c-d6ed-4fb0-a4a2-29c19324fa9e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:06 compute-0 nova_compute[259627]: 2025-10-14 09:29:06.257 2 DEBUG oslo_concurrency.lockutils [req-0b585b7d-fa4d-4741-89d1-acd3d317838d req-78ccef2c-d6ed-4fb0-a4a2-29c19324fa9e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:06 compute-0 nova_compute[259627]: 2025-10-14 09:29:06.257 2 DEBUG nova.compute.manager [req-0b585b7d-fa4d-4741-89d1-acd3d317838d req-78ccef2c-d6ed-4fb0-a4a2-29c19324fa9e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] No waiting events found dispatching network-vif-unplugged-3550cf12-50e7-4809-9e33-8057ba120200 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:29:06 compute-0 nova_compute[259627]: 2025-10-14 09:29:06.258 2 DEBUG nova.compute.manager [req-0b585b7d-fa4d-4741-89d1-acd3d317838d req-78ccef2c-d6ed-4fb0-a4a2-29c19324fa9e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Received event network-vif-unplugged-3550cf12-50e7-4809-9e33-8057ba120200 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:29:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:29:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2201009748' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:29:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2201009748' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:29:06 compute-0 nova_compute[259627]: 2025-10-14 09:29:06.857 2 DEBUG nova.network.neutron [-] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:29:06 compute-0 nova_compute[259627]: 2025-10-14 09:29:06.877 2 INFO nova.compute.manager [-] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Took 0.84 seconds to deallocate network for instance.
Oct 14 09:29:06 compute-0 nova_compute[259627]: 2025-10-14 09:29:06.922 2 DEBUG oslo_concurrency.lockutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:06 compute-0 nova_compute[259627]: 2025-10-14 09:29:06.923 2 DEBUG oslo_concurrency.lockutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:06 compute-0 nova_compute[259627]: 2025-10-14 09:29:06.941 2 DEBUG nova.compute.manager [req-00898b96-40fc-4c62-b482-8aa889b2c237 req-d3e89ba1-1be4-4a01-a66c-6a63638fd66d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Received event network-vif-deleted-3550cf12-50e7-4809-9e33-8057ba120200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:29:07 compute-0 nova_compute[259627]: 2025-10-14 09:29:07.002 2 DEBUG oslo_concurrency.processutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:07.044 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:07.045 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:07.046 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:29:07 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2530213059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:29:07 compute-0 ceph-mon[74249]: pgmap v2279: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:29:07 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2530213059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:29:07 compute-0 nova_compute[259627]: 2025-10-14 09:29:07.524 2 DEBUG oslo_concurrency.processutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:07 compute-0 nova_compute[259627]: 2025-10-14 09:29:07.534 2 DEBUG nova.compute.provider_tree [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:29:07 compute-0 nova_compute[259627]: 2025-10-14 09:29:07.559 2 DEBUG nova.scheduler.client.report [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:29:07 compute-0 nova_compute[259627]: 2025-10-14 09:29:07.590 2 DEBUG oslo_concurrency.lockutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:07 compute-0 nova_compute[259627]: 2025-10-14 09:29:07.634 2 INFO nova.scheduler.client.report [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Deleted allocations for instance fb3db81b-4d6f-4736-9d4b-b1900fad6488
Oct 14 09:29:07 compute-0 nova_compute[259627]: 2025-10-14 09:29:07.739 2 DEBUG oslo_concurrency.lockutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:29:08 compute-0 nova_compute[259627]: 2025-10-14 09:29:08.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:08 compute-0 nova_compute[259627]: 2025-10-14 09:29:08.362 2 DEBUG nova.compute.manager [req-c2ea7950-917f-4b33-b2fd-4d163771c8be req-433ee06e-40cc-4291-8fd0-5a40a2b16354 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Received event network-vif-plugged-3550cf12-50e7-4809-9e33-8057ba120200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:29:08 compute-0 nova_compute[259627]: 2025-10-14 09:29:08.363 2 DEBUG oslo_concurrency.lockutils [req-c2ea7950-917f-4b33-b2fd-4d163771c8be req-433ee06e-40cc-4291-8fd0-5a40a2b16354 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:08 compute-0 nova_compute[259627]: 2025-10-14 09:29:08.363 2 DEBUG oslo_concurrency.lockutils [req-c2ea7950-917f-4b33-b2fd-4d163771c8be req-433ee06e-40cc-4291-8fd0-5a40a2b16354 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:08 compute-0 nova_compute[259627]: 2025-10-14 09:29:08.364 2 DEBUG oslo_concurrency.lockutils [req-c2ea7950-917f-4b33-b2fd-4d163771c8be req-433ee06e-40cc-4291-8fd0-5a40a2b16354 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:08 compute-0 nova_compute[259627]: 2025-10-14 09:29:08.365 2 DEBUG nova.compute.manager [req-c2ea7950-917f-4b33-b2fd-4d163771c8be req-433ee06e-40cc-4291-8fd0-5a40a2b16354 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] No waiting events found dispatching network-vif-plugged-3550cf12-50e7-4809-9e33-8057ba120200 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:29:08 compute-0 nova_compute[259627]: 2025-10-14 09:29:08.365 2 WARNING nova.compute.manager [req-c2ea7950-917f-4b33-b2fd-4d163771c8be req-433ee06e-40cc-4291-8fd0-5a40a2b16354 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Received unexpected event network-vif-plugged-3550cf12-50e7-4809-9e33-8057ba120200 for instance with vm_state deleted and task_state None.
Oct 14 09:29:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:29:09 compute-0 ceph-mon[74249]: pgmap v2280: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 164 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.467 2 DEBUG nova.compute.manager [req-0bec4604-b961-490a-b372-437de8ebebe0 req-56379879-6a62-4395-b3e4-db197ebe3777 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Received event network-changed-3fc32773-5083-4341-9838-5282b7963f56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.467 2 DEBUG nova.compute.manager [req-0bec4604-b961-490a-b372-437de8ebebe0 req-56379879-6a62-4395-b3e4-db197ebe3777 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Refreshing instance network info cache due to event network-changed-3fc32773-5083-4341-9838-5282b7963f56. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.468 2 DEBUG oslo_concurrency.lockutils [req-0bec4604-b961-490a-b372-437de8ebebe0 req-56379879-6a62-4395-b3e4-db197ebe3777 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.468 2 DEBUG oslo_concurrency.lockutils [req-0bec4604-b961-490a-b372-437de8ebebe0 req-56379879-6a62-4395-b3e4-db197ebe3777 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.468 2 DEBUG nova.network.neutron [req-0bec4604-b961-490a-b372-437de8ebebe0 req-56379879-6a62-4395-b3e4-db197ebe3777 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Refreshing network info cache for port 3fc32773-5083-4341-9838-5282b7963f56 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.537 2 DEBUG oslo_concurrency.lockutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "7a110a3c-a2ca-4314-a190-28a4505cc26c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.537 2 DEBUG oslo_concurrency.lockutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.538 2 DEBUG oslo_concurrency.lockutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.538 2 DEBUG oslo_concurrency.lockutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.538 2 DEBUG oslo_concurrency.lockutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.540 2 INFO nova.compute.manager [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Terminating instance
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.541 2 DEBUG nova.compute.manager [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:10 compute-0 kernel: tap3fc32773-50 (unregistering): left promiscuous mode
Oct 14 09:29:10 compute-0 NetworkManager[44885]: <info>  [1760434150.6090] device (tap3fc32773-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:29:10 compute-0 ovn_controller[152662]: 2025-10-14T09:29:10Z|01417|binding|INFO|Releasing lport 3fc32773-5083-4341-9838-5282b7963f56 from this chassis (sb_readonly=0)
Oct 14 09:29:10 compute-0 ovn_controller[152662]: 2025-10-14T09:29:10Z|01418|binding|INFO|Setting lport 3fc32773-5083-4341-9838-5282b7963f56 down in Southbound
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:10 compute-0 ovn_controller[152662]: 2025-10-14T09:29:10Z|01419|binding|INFO|Removing iface tap3fc32773-50 ovn-installed in OVS
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:10.627 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:5d:72 10.100.0.12'], port_security=['fa:16:3e:b3:5d:72 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '7a110a3c-a2ca-4314-a190-28a4505cc26c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f09a704d-6063-4e40-b690-c967cd364b32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24af2eac-35ae-4c02-b261-8fe378764631 e76d2fee-d8c5-45a1-ac1f-55a35976452c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b90ca07-80d1-49c1-a91f-225f989dd9c6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3fc32773-5083-4341-9838-5282b7963f56) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:29:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:10.628 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3fc32773-5083-4341-9838-5282b7963f56 in datapath f09a704d-6063-4e40-b690-c967cd364b32 unbound from our chassis
Oct 14 09:29:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:10.630 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f09a704d-6063-4e40-b690-c967cd364b32, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:29:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:10.631 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0063334-cf47-4f65-8404-ed43e66c3388]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:10.632 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32 namespace which is not needed anymore
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:10 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000082.scope: Deactivated successfully.
Oct 14 09:29:10 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000082.scope: Consumed 15.198s CPU time.
Oct 14 09:29:10 compute-0 systemd-machined[214636]: Machine qemu-163-instance-00000082 terminated.
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.799 2 INFO nova.virt.libvirt.driver [-] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Instance destroyed successfully.
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.800 2 DEBUG nova.objects.instance [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'resources' on Instance uuid 7a110a3c-a2ca-4314-a190-28a4505cc26c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:29:10 compute-0 neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32[393930]: [NOTICE]   (393934) : haproxy version is 2.8.14-c23fe91
Oct 14 09:29:10 compute-0 neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32[393930]: [NOTICE]   (393934) : path to executable is /usr/sbin/haproxy
Oct 14 09:29:10 compute-0 neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32[393930]: [WARNING]  (393934) : Exiting Master process...
Oct 14 09:29:10 compute-0 neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32[393930]: [WARNING]  (393934) : Exiting Master process...
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.828 2 DEBUG nova.virt.libvirt.vif [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:27:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1879326167',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1879326167',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=130,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFHCa/sppIAnQAQkdZ/1Ef+DTNecrrS5rhEQtzPrxlFpoBPEHdQG0Rr7ARx/izPzPbjo5rPoXWm8ksRfiQ+ieFxBsihNkHMZrgGUaPkR43e+YFaxNpM2eAk15miiuIGX3Q==',key_name='tempest-TestSecurityGroupsBasicOps-1939533280',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:28:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-tkp1b0io',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:28:11Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=7a110a3c-a2ca-4314-a190-28a4505cc26c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.829 2 DEBUG nova.network.os_vif_util [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.830 2 DEBUG nova.network.os_vif_util [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:5d:72,bridge_name='br-int',has_traffic_filtering=True,id=3fc32773-5083-4341-9838-5282b7963f56,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fc32773-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.830 2 DEBUG os_vif [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:5d:72,bridge_name='br-int',has_traffic_filtering=True,id=3fc32773-5083-4341-9838-5282b7963f56,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fc32773-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.833 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3fc32773-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:29:10 compute-0 neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32[393930]: [ALERT]    (393934) : Current worker (393936) exited with code 143 (Terminated)
Oct 14 09:29:10 compute-0 neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32[393930]: [WARNING]  (393934) : All workers exited. Exiting... (0)
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.839 2 INFO os_vif [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:5d:72,bridge_name='br-int',has_traffic_filtering=True,id=3fc32773-5083-4341-9838-5282b7963f56,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fc32773-50')
Oct 14 09:29:10 compute-0 systemd[1]: libpod-a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8.scope: Deactivated successfully.
Oct 14 09:29:10 compute-0 podman[396097]: 2025-10-14 09:29:10.85146507 +0000 UTC m=+0.082157571 container died a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:29:10 compute-0 podman[396095]: 2025-10-14 09:29:10.872227423 +0000 UTC m=+0.104761869 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:29:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8-userdata-shm.mount: Deactivated successfully.
Oct 14 09:29:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-98a05a81ea8f0e7b8861af2cf251020d339678d913c2f8ee38478cf7f1b84037-merged.mount: Deactivated successfully.
Oct 14 09:29:10 compute-0 podman[396092]: 2025-10-14 09:29:10.896552974 +0000 UTC m=+0.139264782 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct 14 09:29:10 compute-0 podman[396097]: 2025-10-14 09:29:10.912236931 +0000 UTC m=+0.142929442 container cleanup a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 14 09:29:10 compute-0 systemd[1]: libpod-conmon-a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8.scope: Deactivated successfully.
Oct 14 09:29:10 compute-0 podman[396188]: 2025-10-14 09:29:10.981576984 +0000 UTC m=+0.046053919 container remove a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 09:29:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:10.988 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[47e32a42-c5f5-4d9c-8c11-fac3e02a0af6]: (4, ('Tue Oct 14 09:29:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32 (a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8)\na90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8\nTue Oct 14 09:29:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32 (a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8)\na90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:10.990 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b67a4f15-8756-4f21-aec1-57238e905f1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:10.991 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf09a704d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:11 compute-0 kernel: tapf09a704d-60: left promiscuous mode
Oct 14 09:29:10 compute-0 nova_compute[259627]: 2025-10-14 09:29:10.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:10.998 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[24477a2b-9a19-47ed-8288-16216d3b887e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:11 compute-0 nova_compute[259627]: 2025-10-14 09:29:11.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:11.025 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a7bbaed3-3ff5-4732-a859-448cb6e392db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:11.026 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[069e60ff-4efb-42a6-8c70-8f8cd283c790]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:11.047 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[47c63d8a-e96f-4841-8484-de27f8b146c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785913, 'reachable_time': 21456, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396203, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:11.049 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:29:11 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:11.049 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[481dbc38-c833-4935-9822-9f420fdb1b5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:11 compute-0 systemd[1]: run-netns-ovnmeta\x2df09a704d\x2d6063\x2d4e40\x2db690\x2dc967cd364b32.mount: Deactivated successfully.
Oct 14 09:29:11 compute-0 nova_compute[259627]: 2025-10-14 09:29:11.287 2 INFO nova.virt.libvirt.driver [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Deleting instance files /var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c_del
Oct 14 09:29:11 compute-0 nova_compute[259627]: 2025-10-14 09:29:11.289 2 INFO nova.virt.libvirt.driver [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Deletion of /var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c_del complete
Oct 14 09:29:11 compute-0 nova_compute[259627]: 2025-10-14 09:29:11.373 2 INFO nova.compute.manager [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 14 09:29:11 compute-0 nova_compute[259627]: 2025-10-14 09:29:11.374 2 DEBUG oslo.service.loopingcall [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:29:11 compute-0 nova_compute[259627]: 2025-10-14 09:29:11.375 2 DEBUG nova.compute.manager [-] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:29:11 compute-0 nova_compute[259627]: 2025-10-14 09:29:11.376 2 DEBUG nova.network.neutron [-] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:29:11 compute-0 nova_compute[259627]: 2025-10-14 09:29:11.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:11 compute-0 ceph-mon[74249]: pgmap v2281: 305 pgs: 305 active+clean; 164 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Oct 14 09:29:11 compute-0 nova_compute[259627]: 2025-10-14 09:29:11.991 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:29:11 compute-0 nova_compute[259627]: 2025-10-14 09:29:11.992 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 14 09:29:12 compute-0 nova_compute[259627]: 2025-10-14 09:29:12.006 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 14 09:29:12 compute-0 nova_compute[259627]: 2025-10-14 09:29:12.072 2 DEBUG nova.network.neutron [-] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:29:12 compute-0 nova_compute[259627]: 2025-10-14 09:29:12.096 2 INFO nova.compute.manager [-] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Took 0.72 seconds to deallocate network for instance.
Oct 14 09:29:12 compute-0 nova_compute[259627]: 2025-10-14 09:29:12.136 2 DEBUG oslo_concurrency.lockutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:12 compute-0 nova_compute[259627]: 2025-10-14 09:29:12.137 2 DEBUG oslo_concurrency.lockutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:12 compute-0 nova_compute[259627]: 2025-10-14 09:29:12.165 2 DEBUG nova.compute.manager [req-4f0f0582-9461-42e5-a1bd-65f80d3c4731 req-53f22dbf-579b-4ed7-9546-d15cb008246c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Received event network-vif-deleted-3fc32773-5083-4341-9838-5282b7963f56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:29:12 compute-0 nova_compute[259627]: 2025-10-14 09:29:12.175 2 DEBUG oslo_concurrency.processutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:12 compute-0 nova_compute[259627]: 2025-10-14 09:29:12.321 2 DEBUG nova.network.neutron [req-0bec4604-b961-490a-b372-437de8ebebe0 req-56379879-6a62-4395-b3e4-db197ebe3777 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updated VIF entry in instance network info cache for port 3fc32773-5083-4341-9838-5282b7963f56. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:29:12 compute-0 nova_compute[259627]: 2025-10-14 09:29:12.323 2 DEBUG nova.network.neutron [req-0bec4604-b961-490a-b372-437de8ebebe0 req-56379879-6a62-4395-b3e4-db197ebe3777 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updating instance_info_cache with network_info: [{"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:29:12 compute-0 nova_compute[259627]: 2025-10-14 09:29:12.355 2 DEBUG oslo_concurrency.lockutils [req-0bec4604-b961-490a-b372-437de8ebebe0 req-56379879-6a62-4395-b3e4-db197ebe3777 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:29:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 246 KiB/s rd, 1.0 MiB/s wr, 73 op/s
Oct 14 09:29:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:29:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2256220778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:29:12 compute-0 nova_compute[259627]: 2025-10-14 09:29:12.618 2 DEBUG oslo_concurrency.processutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:12 compute-0 nova_compute[259627]: 2025-10-14 09:29:12.624 2 DEBUG nova.compute.provider_tree [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:29:12 compute-0 nova_compute[259627]: 2025-10-14 09:29:12.640 2 DEBUG nova.scheduler.client.report [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:29:12 compute-0 nova_compute[259627]: 2025-10-14 09:29:12.663 2 DEBUG oslo_concurrency.lockutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:12 compute-0 nova_compute[259627]: 2025-10-14 09:29:12.690 2 INFO nova.scheduler.client.report [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Deleted allocations for instance 7a110a3c-a2ca-4314-a190-28a4505cc26c
Oct 14 09:29:12 compute-0 nova_compute[259627]: 2025-10-14 09:29:12.752 2 DEBUG oslo_concurrency.lockutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:13.008 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:43:1c 10.100.0.2 2001:db8::f816:3eff:fe4c:431c'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe4c:431c/64', 'neutron:device_id': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=426ad543-f963-4015-b706-28bdf047c321, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a306a4f-b3d3-4c63-8490-e1049c247650) old=Port_Binding(mac=['fa:16:3e:4c:43:1c 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:29:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:13.010 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2a306a4f-b3d3-4c63-8490-e1049c247650 in datapath 3cbcf7e5-ac17-454b-893d-3fda266aa395 updated
Oct 14 09:29:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:13.011 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3cbcf7e5-ac17-454b-893d-3fda266aa395, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:29:13 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:13.012 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3014cadb-a777-4f97-af69-24b204ded998]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:29:13 compute-0 ceph-mon[74249]: pgmap v2282: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 246 KiB/s rd, 1.0 MiB/s wr, 73 op/s
Oct 14 09:29:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2256220778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:29:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 14 09:29:15 compute-0 nova_compute[259627]: 2025-10-14 09:29:15.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:15 compute-0 ceph-mon[74249]: pgmap v2283: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 14 09:29:15 compute-0 nova_compute[259627]: 2025-10-14 09:29:15.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Oct 14 09:29:16 compute-0 nova_compute[259627]: 2025-10-14 09:29:16.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:16 compute-0 nova_compute[259627]: 2025-10-14 09:29:16.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:17 compute-0 ceph-mon[74249]: pgmap v2284: 305 pgs: 305 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Oct 14 09:29:17 compute-0 nova_compute[259627]: 2025-10-14 09:29:17.753 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:17 compute-0 nova_compute[259627]: 2025-10-14 09:29:17.754 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:17 compute-0 nova_compute[259627]: 2025-10-14 09:29:17.771 2 DEBUG nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:29:17 compute-0 nova_compute[259627]: 2025-10-14 09:29:17.840 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:17 compute-0 nova_compute[259627]: 2025-10-14 09:29:17.840 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:17 compute-0 nova_compute[259627]: 2025-10-14 09:29:17.851 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:29:17 compute-0 nova_compute[259627]: 2025-10-14 09:29:17.852 2 INFO nova.compute.claims [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:29:17 compute-0 nova_compute[259627]: 2025-10-14 09:29:17.959 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:29:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:29:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/177090088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:29:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.412 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.422 2 DEBUG nova.compute.provider_tree [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.447 2 DEBUG nova.scheduler.client.report [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.485 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.486 2 DEBUG nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.546 2 DEBUG nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.547 2 DEBUG nova.network.neutron [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:29:18 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/177090088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.569 2 INFO nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.591 2 DEBUG nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.694 2 DEBUG nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.696 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.696 2 INFO nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Creating image(s)
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.715 2 DEBUG nova.storage.rbd_utils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.736 2 DEBUG nova.storage.rbd_utils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.755 2 DEBUG nova.storage.rbd_utils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.758 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.839 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.840 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.840 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.841 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.860 2 DEBUG nova.storage.rbd_utils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.864 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:18 compute-0 nova_compute[259627]: 2025-10-14 09:29:18.901 2 DEBUG nova.policy [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:29:19 compute-0 nova_compute[259627]: 2025-10-14 09:29:19.145 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:19 compute-0 nova_compute[259627]: 2025-10-14 09:29:19.233 2 DEBUG nova.storage.rbd_utils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:29:19 compute-0 nova_compute[259627]: 2025-10-14 09:29:19.324 2 DEBUG nova.objects.instance [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:29:19 compute-0 nova_compute[259627]: 2025-10-14 09:29:19.343 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:29:19 compute-0 nova_compute[259627]: 2025-10-14 09:29:19.343 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Ensure instance console log exists: /var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:29:19 compute-0 nova_compute[259627]: 2025-10-14 09:29:19.344 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:19 compute-0 nova_compute[259627]: 2025-10-14 09:29:19.344 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:19 compute-0 nova_compute[259627]: 2025-10-14 09:29:19.344 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:19 compute-0 ceph-mon[74249]: pgmap v2285: 305 pgs: 305 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 14 09:29:19 compute-0 nova_compute[259627]: 2025-10-14 09:29:19.640 2 DEBUG nova.network.neutron [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Successfully created port: 312d35c6-7aa5-4056-b4ed-679cf0e1a12a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:29:20 compute-0 nova_compute[259627]: 2025-10-14 09:29:20.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 45 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 47 KiB/s wr, 55 op/s
Oct 14 09:29:20 compute-0 nova_compute[259627]: 2025-10-14 09:29:20.446 2 DEBUG nova.network.neutron [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Successfully updated port: 312d35c6-7aa5-4056-b4ed-679cf0e1a12a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:29:20 compute-0 nova_compute[259627]: 2025-10-14 09:29:20.468 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:29:20 compute-0 nova_compute[259627]: 2025-10-14 09:29:20.469 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:29:20 compute-0 nova_compute[259627]: 2025-10-14 09:29:20.469 2 DEBUG nova.network.neutron [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:29:20 compute-0 nova_compute[259627]: 2025-10-14 09:29:20.491 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434145.4899757, fb3db81b-4d6f-4736-9d4b-b1900fad6488 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:29:20 compute-0 nova_compute[259627]: 2025-10-14 09:29:20.492 2 INFO nova.compute.manager [-] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] VM Stopped (Lifecycle Event)
Oct 14 09:29:20 compute-0 nova_compute[259627]: 2025-10-14 09:29:20.520 2 DEBUG nova.compute.manager [None req-e27d8aab-7e7a-4f2d-91e0-09bbf36e269a - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:29:20 compute-0 nova_compute[259627]: 2025-10-14 09:29:20.576 2 DEBUG nova.compute.manager [req-a01e5cb9-9f36-4c08-b9b8-44e0ee54e982 req-4a73e4d3-02aa-4b06-96b8-1b32af5038cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-changed-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:29:20 compute-0 nova_compute[259627]: 2025-10-14 09:29:20.577 2 DEBUG nova.compute.manager [req-a01e5cb9-9f36-4c08-b9b8-44e0ee54e982 req-4a73e4d3-02aa-4b06-96b8-1b32af5038cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Refreshing instance network info cache due to event network-changed-312d35c6-7aa5-4056-b4ed-679cf0e1a12a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:29:20 compute-0 nova_compute[259627]: 2025-10-14 09:29:20.577 2 DEBUG oslo_concurrency.lockutils [req-a01e5cb9-9f36-4c08-b9b8-44e0ee54e982 req-4a73e4d3-02aa-4b06-96b8-1b32af5038cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:29:20 compute-0 nova_compute[259627]: 2025-10-14 09:29:20.671 2 DEBUG nova.network.neutron [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:29:20 compute-0 nova_compute[259627]: 2025-10-14 09:29:20.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:21 compute-0 ceph-mon[74249]: pgmap v2286: 305 pgs: 305 active+clean; 45 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 47 KiB/s wr, 55 op/s
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.628 2 DEBUG nova.network.neutron [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Updating instance_info_cache with network_info: [{"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.654 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.655 2 DEBUG nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Instance network_info: |[{"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.655 2 DEBUG oslo_concurrency.lockutils [req-a01e5cb9-9f36-4c08-b9b8-44e0ee54e982 req-4a73e4d3-02aa-4b06-96b8-1b32af5038cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.655 2 DEBUG nova.network.neutron [req-a01e5cb9-9f36-4c08-b9b8-44e0ee54e982 req-4a73e4d3-02aa-4b06-96b8-1b32af5038cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Refreshing network info cache for port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.659 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Start _get_guest_xml network_info=[{"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.665 2 WARNING nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:29:21 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 09:29:21 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.676 2 DEBUG nova.virt.libvirt.host [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.677 2 DEBUG nova.virt.libvirt.host [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.682 2 DEBUG nova.virt.libvirt.host [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.682 2 DEBUG nova.virt.libvirt.host [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.683 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.684 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.684 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.685 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.685 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.686 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.686 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.687 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.687 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.688 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.688 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.689 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:29:21 compute-0 nova_compute[259627]: 2025-10-14 09:29:21.693 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:29:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1723144166' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.187 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.221 2 DEBUG nova.storage.rbd_utils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.226 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 88 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Oct 14 09:29:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1723144166' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:29:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:29:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/48654064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.765 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.768 2 DEBUG nova.virt.libvirt.vif [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:29:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-766373181',display_name='tempest-TestGettingAddress-server-766373181',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-766373181',id=133,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3NS95m6mTTkHzJJUJYRW56WG9z1F4s978ToWhMf7sPvH/Oc2BrnynBRS1TQVQWP6VVqaEePs7ictqhZIZyqMTpbO+UGDq3FfVttcWsPDDYdiFqglsH1qUqUZlXlWfhiQ==',key_name='tempest-TestGettingAddress-313667449',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-kdy1goa2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:29:18Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=9bab7e53-30b3-4cd0-ad07-3cc9b5c05492,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.769 2 DEBUG nova.network.os_vif_util [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.770 2 DEBUG nova.network.os_vif_util [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:44:e9,bridge_name='br-int',has_traffic_filtering=True,id=312d35c6-7aa5-4056-b4ed-679cf0e1a12a,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d35c6-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.772 2 DEBUG nova.objects.instance [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.786 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:29:22 compute-0 nova_compute[259627]:   <uuid>9bab7e53-30b3-4cd0-ad07-3cc9b5c05492</uuid>
Oct 14 09:29:22 compute-0 nova_compute[259627]:   <name>instance-00000085</name>
Oct 14 09:29:22 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:29:22 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:29:22 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <nova:name>tempest-TestGettingAddress-server-766373181</nova:name>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:29:21</nova:creationTime>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:29:22 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:29:22 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:29:22 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:29:22 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:29:22 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:29:22 compute-0 nova_compute[259627]:         <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 09:29:22 compute-0 nova_compute[259627]:         <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:29:22 compute-0 nova_compute[259627]:         <nova:port uuid="312d35c6-7aa5-4056-b4ed-679cf0e1a12a">
Oct 14 09:29:22 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe40:44e9" ipVersion="6"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:29:22 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:29:22 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <system>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <entry name="serial">9bab7e53-30b3-4cd0-ad07-3cc9b5c05492</entry>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <entry name="uuid">9bab7e53-30b3-4cd0-ad07-3cc9b5c05492</entry>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     </system>
Oct 14 09:29:22 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:29:22 compute-0 nova_compute[259627]:   <os>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:   </os>
Oct 14 09:29:22 compute-0 nova_compute[259627]:   <features>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:   </features>
Oct 14 09:29:22 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:29:22 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:29:22 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk">
Oct 14 09:29:22 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       </source>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:29:22 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk.config">
Oct 14 09:29:22 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       </source>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:29:22 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:40:44:e9"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <target dev="tap312d35c6-7a"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492/console.log" append="off"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <video>
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     </video>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:29:22 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:29:22 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:29:22 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:29:22 compute-0 nova_compute[259627]: </domain>
Oct 14 09:29:22 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.787 2 DEBUG nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Preparing to wait for external event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.788 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.788 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.788 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.789 2 DEBUG nova.virt.libvirt.vif [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:29:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-766373181',display_name='tempest-TestGettingAddress-server-766373181',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-766373181',id=133,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3NS95m6mTTkHzJJUJYRW56WG9z1F4s978ToWhMf7sPvH/Oc2BrnynBRS1TQVQWP6VVqaEePs7ictqhZIZyqMTpbO+UGDq3FfVttcWsPDDYdiFqglsH1qUqUZlXlWfhiQ==',key_name='tempest-TestGettingAddress-313667449',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-kdy1goa2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:29:18Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=9bab7e53-30b3-4cd0-ad07-3cc9b5c05492,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.789 2 DEBUG nova.network.os_vif_util [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.790 2 DEBUG nova.network.os_vif_util [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:44:e9,bridge_name='br-int',has_traffic_filtering=True,id=312d35c6-7aa5-4056-b4ed-679cf0e1a12a,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d35c6-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.790 2 DEBUG os_vif [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:44:e9,bridge_name='br-int',has_traffic_filtering=True,id=312d35c6-7aa5-4056-b4ed-679cf0e1a12a,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d35c6-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.791 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.791 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.795 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap312d35c6-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.795 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap312d35c6-7a, col_values=(('external_ids', {'iface-id': '312d35c6-7aa5-4056-b4ed-679cf0e1a12a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:44:e9', 'vm-uuid': '9bab7e53-30b3-4cd0-ad07-3cc9b5c05492'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:22 compute-0 NetworkManager[44885]: <info>  [1760434162.8325] manager: (tap312d35c6-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/576)
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.838 2 INFO os_vif [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:44:e9,bridge_name='br-int',has_traffic_filtering=True,id=312d35c6-7aa5-4056-b4ed-679cf0e1a12a,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d35c6-7a')
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.920 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.920 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.920 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:40:44:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.921 2 INFO nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Using config drive
Oct 14 09:29:22 compute-0 nova_compute[259627]: 2025-10-14 09:29:22.947 2 DEBUG nova.storage.rbd_utils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:29:23 compute-0 nova_compute[259627]: 2025-10-14 09:29:23.271 2 INFO nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Creating config drive at /var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492/disk.config
Oct 14 09:29:23 compute-0 nova_compute[259627]: 2025-10-14 09:29:23.280 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfazezws5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:23 compute-0 nova_compute[259627]: 2025-10-14 09:29:23.448 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfazezws5" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:23 compute-0 nova_compute[259627]: 2025-10-14 09:29:23.488 2 DEBUG nova.storage.rbd_utils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:23 compute-0 nova_compute[259627]: 2025-10-14 09:29:23.493 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492/disk.config 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:23 compute-0 nova_compute[259627]: 2025-10-14 09:29:23.572 2 DEBUG nova.network.neutron [req-a01e5cb9-9f36-4c08-b9b8-44e0ee54e982 req-4a73e4d3-02aa-4b06-96b8-1b32af5038cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Updated VIF entry in instance network info cache for port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:29:23 compute-0 nova_compute[259627]: 2025-10-14 09:29:23.573 2 DEBUG nova.network.neutron [req-a01e5cb9-9f36-4c08-b9b8-44e0ee54e982 req-4a73e4d3-02aa-4b06-96b8-1b32af5038cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Updating instance_info_cache with network_info: [{"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:29:23 compute-0 ceph-mon[74249]: pgmap v2287: 305 pgs: 305 active+clean; 88 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Oct 14 09:29:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/48654064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:29:23 compute-0 nova_compute[259627]: 2025-10-14 09:29:23.596 2 DEBUG oslo_concurrency.lockutils [req-a01e5cb9-9f36-4c08-b9b8-44e0ee54e982 req-4a73e4d3-02aa-4b06-96b8-1b32af5038cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:29:23 compute-0 nova_compute[259627]: 2025-10-14 09:29:23.696 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492/disk.config 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:23 compute-0 nova_compute[259627]: 2025-10-14 09:29:23.697 2 INFO nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Deleting local config drive /var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492/disk.config because it was imported into RBD.
Oct 14 09:29:23 compute-0 kernel: tap312d35c6-7a: entered promiscuous mode
Oct 14 09:29:23 compute-0 NetworkManager[44885]: <info>  [1760434163.7728] manager: (tap312d35c6-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/577)
Oct 14 09:29:23 compute-0 nova_compute[259627]: 2025-10-14 09:29:23.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:23 compute-0 ovn_controller[152662]: 2025-10-14T09:29:23Z|01420|binding|INFO|Claiming lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a for this chassis.
Oct 14 09:29:23 compute-0 ovn_controller[152662]: 2025-10-14T09:29:23Z|01421|binding|INFO|312d35c6-7aa5-4056-b4ed-679cf0e1a12a: Claiming fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9
Oct 14 09:29:23 compute-0 nova_compute[259627]: 2025-10-14 09:29:23.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:23 compute-0 nova_compute[259627]: 2025-10-14 09:29:23.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:23 compute-0 nova_compute[259627]: 2025-10-14 09:29:23.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:23 compute-0 systemd-udevd[396550]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:29:23 compute-0 NetworkManager[44885]: <info>  [1760434163.8135] device (tap312d35c6-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:29:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.811 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9'], port_security=['fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8::f816:3eff:fe40:44e9/64', 'neutron:device_id': '9bab7e53-30b3-4cd0-ad07-3cc9b5c05492', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '598825b2-c17a-4454-af22-d97e9570639b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=426ad543-f963-4015-b706-28bdf047c321, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=312d35c6-7aa5-4056-b4ed-679cf0e1a12a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:29:23 compute-0 NetworkManager[44885]: <info>  [1760434163.8161] device (tap312d35c6-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:29:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.814 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a in datapath 3cbcf7e5-ac17-454b-893d-3fda266aa395 bound to our chassis
Oct 14 09:29:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.817 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3cbcf7e5-ac17-454b-893d-3fda266aa395
Oct 14 09:29:23 compute-0 systemd-machined[214636]: New machine qemu-166-instance-00000085.
Oct 14 09:29:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.832 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a508083a-8f8e-40ed-87d2-dec4836b25a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.833 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3cbcf7e5-a1 in ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:29:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.835 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3cbcf7e5-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:29:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.835 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f814d7f8-e67c-41ea-a591-f8988d3e1d34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.836 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d59ff287-a8bd-49f5-9fe3-f25f4d3a143d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:23 compute-0 systemd[1]: Started Virtual Machine qemu-166-instance-00000085.
Oct 14 09:29:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.848 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[a48e10a7-4c41-4f95-a21a-911b49281161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.877 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[abc5121e-09b9-4434-aac6-b76e644cf24b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:23 compute-0 ovn_controller[152662]: 2025-10-14T09:29:23Z|01422|binding|INFO|Setting lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a ovn-installed in OVS
Oct 14 09:29:23 compute-0 ovn_controller[152662]: 2025-10-14T09:29:23Z|01423|binding|INFO|Setting lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a up in Southbound
Oct 14 09:29:23 compute-0 nova_compute[259627]: 2025-10-14 09:29:23.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.936 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[502bae7a-a3ad-481e-acc5-99a9f5edca96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:23 compute-0 NetworkManager[44885]: <info>  [1760434163.9480] manager: (tap3cbcf7e5-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/578)
Oct 14 09:29:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.947 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9bccbe0f-d8a3-41ff-945d-2956d7c6d4dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.987 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[935ff4ec-b98d-4ade-9d40-8f372fe15841]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.991 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ce013b93-1e1c-4999-81ff-d44eb417f436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:24 compute-0 NetworkManager[44885]: <info>  [1760434164.0101] device (tap3cbcf7e5-a0): carrier: link connected
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.015 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8b104349-f716-4837-b055-eef56fe146bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.032 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ce99378e-c4a4-4521-ac2f-66925ef1d2ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3cbcf7e5-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:43:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794277, 'reachable_time': 28589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396585, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.047 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1978cf1a-3b5a-4ca5-9b2d-5e1191836c87]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:431c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 794277, 'tstamp': 794277}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396586, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.064 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc10ed3-92d3-4f8f-8437-62f506552b6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3cbcf7e5-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:43:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794277, 'reachable_time': 28589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 396587, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.097 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a617f439-edcc-43f2-a322-da9dc8a392a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.169 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bd8aad28-3810-4f3e-a470-b2ab56615180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.171 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cbcf7e5-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.171 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.172 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cbcf7e5-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:24 compute-0 NetworkManager[44885]: <info>  [1760434164.1744] manager: (tap3cbcf7e5-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/579)
Oct 14 09:29:24 compute-0 kernel: tap3cbcf7e5-a0: entered promiscuous mode
Oct 14 09:29:24 compute-0 nova_compute[259627]: 2025-10-14 09:29:24.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.178 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3cbcf7e5-a0, col_values=(('external_ids', {'iface-id': '2a306a4f-b3d3-4c63-8490-e1049c247650'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:24 compute-0 nova_compute[259627]: 2025-10-14 09:29:24.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:24 compute-0 ovn_controller[152662]: 2025-10-14T09:29:24Z|01424|binding|INFO|Releasing lport 2a306a4f-b3d3-4c63-8490-e1049c247650 from this chassis (sb_readonly=0)
Oct 14 09:29:24 compute-0 nova_compute[259627]: 2025-10-14 09:29:24.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.180 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3cbcf7e5-ac17-454b-893d-3fda266aa395.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3cbcf7e5-ac17-454b-893d-3fda266aa395.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.181 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2435e46e-6744-48d7-96de-a6abc87e0e8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.182 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-3cbcf7e5-ac17-454b-893d-3fda266aa395
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/3cbcf7e5-ac17-454b-893d-3fda266aa395.pid.haproxy
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 3cbcf7e5-ac17-454b-893d-3fda266aa395
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:29:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.183 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'env', 'PROCESS_TAG=haproxy-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3cbcf7e5-ac17-454b-893d-3fda266aa395.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:29:24 compute-0 nova_compute[259627]: 2025-10-14 09:29:24.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 88 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 14 09:29:24 compute-0 podman[396659]: 2025-10-14 09:29:24.617449087 +0000 UTC m=+0.086379756 container create c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 09:29:24 compute-0 podman[396659]: 2025-10-14 09:29:24.571578283 +0000 UTC m=+0.040509002 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:29:24 compute-0 systemd[1]: Started libpod-conmon-c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d.scope.
Oct 14 09:29:24 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:29:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba749212af37e81d7c4992e139c5f78aa6b1ef87bc093a5fe8a54df954b3324e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:29:24 compute-0 podman[396659]: 2025-10-14 09:29:24.736430347 +0000 UTC m=+0.205361026 container init c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:29:24 compute-0 podman[396659]: 2025-10-14 09:29:24.741570804 +0000 UTC m=+0.210501453 container start c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:29:24 compute-0 neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395[396675]: [NOTICE]   (396685) : New worker (396691) forked
Oct 14 09:29:24 compute-0 neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395[396675]: [NOTICE]   (396685) : Loading success.
Oct 14 09:29:24 compute-0 nova_compute[259627]: 2025-10-14 09:29:24.786 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434164.7859428, 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:29:24 compute-0 nova_compute[259627]: 2025-10-14 09:29:24.786 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] VM Started (Lifecycle Event)
Oct 14 09:29:24 compute-0 podman[396678]: 2025-10-14 09:29:24.796822009 +0000 UTC m=+0.059941552 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Oct 14 09:29:24 compute-0 nova_compute[259627]: 2025-10-14 09:29:24.810 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:29:24 compute-0 nova_compute[259627]: 2025-10-14 09:29:24.814 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434164.78601, 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:29:24 compute-0 nova_compute[259627]: 2025-10-14 09:29:24.814 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] VM Paused (Lifecycle Event)
Oct 14 09:29:24 compute-0 nova_compute[259627]: 2025-10-14 09:29:24.841 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:29:24 compute-0 nova_compute[259627]: 2025-10-14 09:29:24.845 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:29:24 compute-0 nova_compute[259627]: 2025-10-14 09:29:24.872 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:29:24 compute-0 podman[396710]: 2025-10-14 09:29:24.899834415 +0000 UTC m=+0.072685378 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller)
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.005 2 DEBUG nova.compute.manager [req-13299bc0-8c7b-4f71-8b5b-468fae691a2c req-b1b02a8f-2780-4806-9a17-63ce1c515c5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.006 2 DEBUG oslo_concurrency.lockutils [req-13299bc0-8c7b-4f71-8b5b-468fae691a2c req-b1b02a8f-2780-4806-9a17-63ce1c515c5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.006 2 DEBUG oslo_concurrency.lockutils [req-13299bc0-8c7b-4f71-8b5b-468fae691a2c req-b1b02a8f-2780-4806-9a17-63ce1c515c5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.007 2 DEBUG oslo_concurrency.lockutils [req-13299bc0-8c7b-4f71-8b5b-468fae691a2c req-b1b02a8f-2780-4806-9a17-63ce1c515c5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.007 2 DEBUG nova.compute.manager [req-13299bc0-8c7b-4f71-8b5b-468fae691a2c req-b1b02a8f-2780-4806-9a17-63ce1c515c5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Processing event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.007 2 DEBUG nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.011 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434165.011257, 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.011 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] VM Resumed (Lifecycle Event)
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.013 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.018 2 INFO nova.virt.libvirt.driver [-] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Instance spawned successfully.
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.019 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.037 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.042 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.046 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.046 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.047 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.047 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.048 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.048 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.069 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.094 2 INFO nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Took 6.40 seconds to spawn the instance on the hypervisor.
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.095 2 DEBUG nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.152 2 INFO nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Took 7.34 seconds to build instance.
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.167 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:25 compute-0 ceph-mon[74249]: pgmap v2288: 305 pgs: 305 active+clean; 88 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.796 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434150.795242, 7a110a3c-a2ca-4314-a190-28a4505cc26c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.798 2 INFO nova.compute.manager [-] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] VM Stopped (Lifecycle Event)
Oct 14 09:29:25 compute-0 nova_compute[259627]: 2025-10-14 09:29:25.825 2 DEBUG nova.compute.manager [None req-cdf0494d-3ed2-46c6-b8cb-4f100e072a39 - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:29:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 14 09:29:27 compute-0 nova_compute[259627]: 2025-10-14 09:29:27.545 2 DEBUG nova.compute.manager [req-f81fb56d-e79c-47ed-9a6c-4dd65e9bfef9 req-7e8f029b-6e1f-4d7b-b455-90a4daefa5b2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:29:27 compute-0 nova_compute[259627]: 2025-10-14 09:29:27.546 2 DEBUG oslo_concurrency.lockutils [req-f81fb56d-e79c-47ed-9a6c-4dd65e9bfef9 req-7e8f029b-6e1f-4d7b-b455-90a4daefa5b2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:27 compute-0 nova_compute[259627]: 2025-10-14 09:29:27.547 2 DEBUG oslo_concurrency.lockutils [req-f81fb56d-e79c-47ed-9a6c-4dd65e9bfef9 req-7e8f029b-6e1f-4d7b-b455-90a4daefa5b2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:27 compute-0 nova_compute[259627]: 2025-10-14 09:29:27.547 2 DEBUG oslo_concurrency.lockutils [req-f81fb56d-e79c-47ed-9a6c-4dd65e9bfef9 req-7e8f029b-6e1f-4d7b-b455-90a4daefa5b2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:27 compute-0 nova_compute[259627]: 2025-10-14 09:29:27.548 2 DEBUG nova.compute.manager [req-f81fb56d-e79c-47ed-9a6c-4dd65e9bfef9 req-7e8f029b-6e1f-4d7b-b455-90a4daefa5b2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] No waiting events found dispatching network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:29:27 compute-0 nova_compute[259627]: 2025-10-14 09:29:27.548 2 WARNING nova.compute.manager [req-f81fb56d-e79c-47ed-9a6c-4dd65e9bfef9 req-7e8f029b-6e1f-4d7b-b455-90a4daefa5b2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received unexpected event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a for instance with vm_state active and task_state None.
Oct 14 09:29:27 compute-0 ceph-mon[74249]: pgmap v2289: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 14 09:29:27 compute-0 nova_compute[259627]: 2025-10-14 09:29:27.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:29:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 14 09:29:29 compute-0 ceph-mon[74249]: pgmap v2290: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 14 09:29:30 compute-0 nova_compute[259627]: 2025-10-14 09:29:30.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:30 compute-0 nova_compute[259627]: 2025-10-14 09:29:30.386 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:30 compute-0 nova_compute[259627]: 2025-10-14 09:29:30.387 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 14 09:29:30 compute-0 nova_compute[259627]: 2025-10-14 09:29:30.476 2 DEBUG nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:29:30 compute-0 nova_compute[259627]: 2025-10-14 09:29:30.566 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:30 compute-0 nova_compute[259627]: 2025-10-14 09:29:30.567 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:30 compute-0 nova_compute[259627]: 2025-10-14 09:29:30.575 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:29:30 compute-0 nova_compute[259627]: 2025-10-14 09:29:30.576 2 INFO nova.compute.claims [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:29:30 compute-0 nova_compute[259627]: 2025-10-14 09:29:30.721 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:31 compute-0 NetworkManager[44885]: <info>  [1760434171.0948] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/580)
Oct 14 09:29:31 compute-0 NetworkManager[44885]: <info>  [1760434171.0968] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/581)
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:31 compute-0 ovn_controller[152662]: 2025-10-14T09:29:31Z|01425|binding|INFO|Releasing lport 2a306a4f-b3d3-4c63-8490-e1049c247650 from this chassis (sb_readonly=0)
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:29:31 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/409564366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.270 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.278 2 DEBUG nova.compute.provider_tree [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.300 2 DEBUG nova.scheduler.client.report [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.327 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.328 2 DEBUG nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:29:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:31.377 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:31.379 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.393 2 DEBUG nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.394 2 DEBUG nova.network.neutron [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.423 2 INFO nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.432 2 DEBUG nova.compute.manager [req-b15f8a9c-4d93-4062-a288-cf882a17ee18 req-4f4a70d1-ad23-4f20-840a-601f6fdb144d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-changed-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.432 2 DEBUG nova.compute.manager [req-b15f8a9c-4d93-4062-a288-cf882a17ee18 req-4f4a70d1-ad23-4f20-840a-601f6fdb144d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Refreshing instance network info cache due to event network-changed-312d35c6-7aa5-4056-b4ed-679cf0e1a12a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.433 2 DEBUG oslo_concurrency.lockutils [req-b15f8a9c-4d93-4062-a288-cf882a17ee18 req-4f4a70d1-ad23-4f20-840a-601f6fdb144d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.433 2 DEBUG oslo_concurrency.lockutils [req-b15f8a9c-4d93-4062-a288-cf882a17ee18 req-4f4a70d1-ad23-4f20-840a-601f6fdb144d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.434 2 DEBUG nova.network.neutron [req-b15f8a9c-4d93-4062-a288-cf882a17ee18 req-4f4a70d1-ad23-4f20-840a-601f6fdb144d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Refreshing network info cache for port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.449 2 DEBUG nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.555 2 DEBUG nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.558 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.558 2 INFO nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Creating image(s)
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.593 2 DEBUG nova.storage.rbd_utils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:31 compute-0 ceph-mon[74249]: pgmap v2291: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 14 09:29:31 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/409564366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.632 2 DEBUG nova.storage.rbd_utils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.662 2 DEBUG nova.storage.rbd_utils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.665 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.728 2 DEBUG nova.policy [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20f3546ab30e42b5b641f67780316750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f754bf649a2404fa8dee732f5aab36e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.770 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.771 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.772 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.772 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.797 2 DEBUG nova.storage.rbd_utils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:31 compute-0 nova_compute[259627]: 2025-10-14 09:29:31.801 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:32 compute-0 nova_compute[259627]: 2025-10-14 09:29:32.116 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:32 compute-0 nova_compute[259627]: 2025-10-14 09:29:32.174 2 DEBUG nova.storage.rbd_utils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] resizing rbd image f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:29:32 compute-0 nova_compute[259627]: 2025-10-14 09:29:32.276 2 DEBUG nova.objects.instance [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'migration_context' on Instance uuid f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:29:32 compute-0 nova_compute[259627]: 2025-10-14 09:29:32.299 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:29:32 compute-0 nova_compute[259627]: 2025-10-14 09:29:32.299 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Ensure instance console log exists: /var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:29:32 compute-0 nova_compute[259627]: 2025-10-14 09:29:32.300 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:32 compute-0 nova_compute[259627]: 2025-10-14 09:29:32.301 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:32 compute-0 nova_compute[259627]: 2025-10-14 09:29:32.301 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 100 op/s
Oct 14 09:29:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:29:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:29:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:29:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:29:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:29:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:29:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:29:32
Oct 14 09:29:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:29:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:29:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'images', 'volumes', '.rgw.root', 'default.rgw.log']
Oct 14 09:29:32 compute-0 nova_compute[259627]: 2025-10-14 09:29:32.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:29:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:29:33 compute-0 nova_compute[259627]: 2025-10-14 09:29:33.244 2 DEBUG nova.network.neutron [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Successfully created port: 3eaf2ed5-bd76-4749-b8f0-58985c91a040 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:29:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:29:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:29:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:29:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:29:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:29:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:29:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:29:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:29:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:29:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:29:33 compute-0 ceph-mon[74249]: pgmap v2292: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 100 op/s
Oct 14 09:29:33 compute-0 nova_compute[259627]: 2025-10-14 09:29:33.802 2 DEBUG nova.network.neutron [req-b15f8a9c-4d93-4062-a288-cf882a17ee18 req-4f4a70d1-ad23-4f20-840a-601f6fdb144d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Updated VIF entry in instance network info cache for port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:29:33 compute-0 nova_compute[259627]: 2025-10-14 09:29:33.803 2 DEBUG nova.network.neutron [req-b15f8a9c-4d93-4062-a288-cf882a17ee18 req-4f4a70d1-ad23-4f20-840a-601f6fdb144d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Updating instance_info_cache with network_info: [{"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:29:33 compute-0 nova_compute[259627]: 2025-10-14 09:29:33.835 2 DEBUG oslo_concurrency.lockutils [req-b15f8a9c-4d93-4062-a288-cf882a17ee18 req-4f4a70d1-ad23-4f20-840a-601f6fdb144d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:29:34 compute-0 nova_compute[259627]: 2025-10-14 09:29:34.107 2 DEBUG nova.network.neutron [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Successfully updated port: 3eaf2ed5-bd76-4749-b8f0-58985c91a040 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:29:34 compute-0 nova_compute[259627]: 2025-10-14 09:29:34.122 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:29:34 compute-0 nova_compute[259627]: 2025-10-14 09:29:34.122 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquired lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:29:34 compute-0 nova_compute[259627]: 2025-10-14 09:29:34.123 2 DEBUG nova.network.neutron [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:29:34 compute-0 nova_compute[259627]: 2025-10-14 09:29:34.239 2 DEBUG nova.compute.manager [req-dd7c0ff9-85f9-4a40-a121-fbb39531800e req-6a9ffcfb-ce3e-45f5-a1e6-70e4c0c56c1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received event network-changed-3eaf2ed5-bd76-4749-b8f0-58985c91a040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:29:34 compute-0 nova_compute[259627]: 2025-10-14 09:29:34.240 2 DEBUG nova.compute.manager [req-dd7c0ff9-85f9-4a40-a121-fbb39531800e req-6a9ffcfb-ce3e-45f5-a1e6-70e4c0c56c1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Refreshing instance network info cache due to event network-changed-3eaf2ed5-bd76-4749-b8f0-58985c91a040. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:29:34 compute-0 nova_compute[259627]: 2025-10-14 09:29:34.241 2 DEBUG oslo_concurrency.lockutils [req-dd7c0ff9-85f9-4a40-a121-fbb39531800e req-6a9ffcfb-ce3e-45f5-a1e6-70e4c0c56c1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:29:34 compute-0 nova_compute[259627]: 2025-10-14 09:29:34.339 2 DEBUG nova.network.neutron [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:29:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:35.382 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:35 compute-0 ceph-mon[74249]: pgmap v2293: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.827 2 DEBUG nova.network.neutron [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Updating instance_info_cache with network_info: [{"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.853 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Releasing lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.854 2 DEBUG nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Instance network_info: |[{"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.854 2 DEBUG oslo_concurrency.lockutils [req-dd7c0ff9-85f9-4a40-a121-fbb39531800e req-6a9ffcfb-ce3e-45f5-a1e6-70e4c0c56c1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.855 2 DEBUG nova.network.neutron [req-dd7c0ff9-85f9-4a40-a121-fbb39531800e req-6a9ffcfb-ce3e-45f5-a1e6-70e4c0c56c1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Refreshing network info cache for port 3eaf2ed5-bd76-4749-b8f0-58985c91a040 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.859 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Start _get_guest_xml network_info=[{"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.865 2 WARNING nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.870 2 DEBUG nova.virt.libvirt.host [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.871 2 DEBUG nova.virt.libvirt.host [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.876 2 DEBUG nova.virt.libvirt.host [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.877 2 DEBUG nova.virt.libvirt.host [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.877 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.878 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.879 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.879 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.880 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.880 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.881 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.881 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.882 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.882 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.882 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.883 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:29:35 compute-0 nova_compute[259627]: 2025-10-14 09:29:35.888 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:35 compute-0 ovn_controller[152662]: 2025-10-14T09:29:35Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:40:44:e9 10.100.0.8
Oct 14 09:29:35 compute-0 ovn_controller[152662]: 2025-10-14T09:29:35Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:40:44:e9 10.100.0.8
Oct 14 09:29:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:29:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2265088364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.368 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.405 2 DEBUG nova.storage.rbd_utils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 148 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 121 op/s
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.412 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:36 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2265088364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:29:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:29:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4130448813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.905 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.907 2 DEBUG nova.virt.libvirt.vif [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:29:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-325033662',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-325033662',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=134,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHiiMn1IPbRe1fqI+0HCf//9qD9/3fgeK9VnineHi5Mo2MAqU+EDOJGm1zH6VOW3MfRMImj3kzww8OxL4WA50EnUF8UJVTfKwxadLnhug9+sBPKdPWQa79dlH3frsHdZeQ==',key_name='tempest-TestSecurityGroupsBasicOps-1209305033',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-2bof4d8p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:29:31Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.908 2 DEBUG nova.network.os_vif_util [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.909 2 DEBUG nova.network.os_vif_util [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:c0:28,bridge_name='br-int',has_traffic_filtering=True,id=3eaf2ed5-bd76-4749-b8f0-58985c91a040,network=Network(7a804945-64f0-4b07-8e8b-2ad2beb7451e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaf2ed5-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.910 2 DEBUG nova.objects.instance [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'pci_devices' on Instance uuid f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.927 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:29:36 compute-0 nova_compute[259627]:   <uuid>f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e</uuid>
Oct 14 09:29:36 compute-0 nova_compute[259627]:   <name>instance-00000086</name>
Oct 14 09:29:36 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:29:36 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:29:36 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-325033662</nova:name>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:29:35</nova:creationTime>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:29:36 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:29:36 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:29:36 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:29:36 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:29:36 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:29:36 compute-0 nova_compute[259627]:         <nova:user uuid="20f3546ab30e42b5b641f67780316750">tempest-TestSecurityGroupsBasicOps-1327646173-project-member</nova:user>
Oct 14 09:29:36 compute-0 nova_compute[259627]:         <nova:project uuid="5f754bf649a2404fa8dee732f5aab36e">tempest-TestSecurityGroupsBasicOps-1327646173</nova:project>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:29:36 compute-0 nova_compute[259627]:         <nova:port uuid="3eaf2ed5-bd76-4749-b8f0-58985c91a040">
Oct 14 09:29:36 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:29:36 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:29:36 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <system>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <entry name="serial">f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e</entry>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <entry name="uuid">f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e</entry>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     </system>
Oct 14 09:29:36 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:29:36 compute-0 nova_compute[259627]:   <os>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:   </os>
Oct 14 09:29:36 compute-0 nova_compute[259627]:   <features>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:   </features>
Oct 14 09:29:36 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:29:36 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:29:36 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk">
Oct 14 09:29:36 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       </source>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:29:36 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk.config">
Oct 14 09:29:36 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       </source>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:29:36 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:8c:c0:28"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <target dev="tap3eaf2ed5-bd"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e/console.log" append="off"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <video>
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     </video>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:29:36 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:29:36 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:29:36 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:29:36 compute-0 nova_compute[259627]: </domain>
Oct 14 09:29:36 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.929 2 DEBUG nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Preparing to wait for external event network-vif-plugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.930 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.930 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.931 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.932 2 DEBUG nova.virt.libvirt.vif [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:29:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-325033662',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-325033662',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=134,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHiiMn1IPbRe1fqI+0HCf//9qD9/3fgeK9VnineHi5Mo2MAqU+EDOJGm1zH6VOW3MfRMImj3kzww8OxL4WA50EnUF8UJVTfKwxadLnhug9+sBPKdPWQa79dlH3frsHdZeQ==',key_name='tempest-TestSecurityGroupsBasicOps-1209305033',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-2bof4d8p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:29:31Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.932 2 DEBUG nova.network.os_vif_util [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.933 2 DEBUG nova.network.os_vif_util [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:c0:28,bridge_name='br-int',has_traffic_filtering=True,id=3eaf2ed5-bd76-4749-b8f0-58985c91a040,network=Network(7a804945-64f0-4b07-8e8b-2ad2beb7451e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaf2ed5-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.934 2 DEBUG os_vif [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:c0:28,bridge_name='br-int',has_traffic_filtering=True,id=3eaf2ed5-bd76-4749-b8f0-58985c91a040,network=Network(7a804945-64f0-4b07-8e8b-2ad2beb7451e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaf2ed5-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3eaf2ed5-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.941 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3eaf2ed5-bd, col_values=(('external_ids', {'iface-id': '3eaf2ed5-bd76-4749-b8f0-58985c91a040', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:c0:28', 'vm-uuid': 'f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:36 compute-0 NetworkManager[44885]: <info>  [1760434176.9747] manager: (tap3eaf2ed5-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/582)
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:36 compute-0 nova_compute[259627]: 2025-10-14 09:29:36.984 2 INFO os_vif [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:c0:28,bridge_name='br-int',has_traffic_filtering=True,id=3eaf2ed5-bd76-4749-b8f0-58985c91a040,network=Network(7a804945-64f0-4b07-8e8b-2ad2beb7451e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaf2ed5-bd')
Oct 14 09:29:37 compute-0 nova_compute[259627]: 2025-10-14 09:29:37.038 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:29:37 compute-0 nova_compute[259627]: 2025-10-14 09:29:37.039 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:29:37 compute-0 nova_compute[259627]: 2025-10-14 09:29:37.039 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No VIF found with MAC fa:16:3e:8c:c0:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:29:37 compute-0 nova_compute[259627]: 2025-10-14 09:29:37.040 2 INFO nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Using config drive
Oct 14 09:29:37 compute-0 nova_compute[259627]: 2025-10-14 09:29:37.061 2 DEBUG nova.storage.rbd_utils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:37 compute-0 nova_compute[259627]: 2025-10-14 09:29:37.304 2 DEBUG nova.network.neutron [req-dd7c0ff9-85f9-4a40-a121-fbb39531800e req-6a9ffcfb-ce3e-45f5-a1e6-70e4c0c56c1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Updated VIF entry in instance network info cache for port 3eaf2ed5-bd76-4749-b8f0-58985c91a040. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:29:37 compute-0 nova_compute[259627]: 2025-10-14 09:29:37.305 2 DEBUG nova.network.neutron [req-dd7c0ff9-85f9-4a40-a121-fbb39531800e req-6a9ffcfb-ce3e-45f5-a1e6-70e4c0c56c1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Updating instance_info_cache with network_info: [{"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:29:37 compute-0 nova_compute[259627]: 2025-10-14 09:29:37.331 2 DEBUG oslo_concurrency.lockutils [req-dd7c0ff9-85f9-4a40-a121-fbb39531800e req-6a9ffcfb-ce3e-45f5-a1e6-70e4c0c56c1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:29:37 compute-0 nova_compute[259627]: 2025-10-14 09:29:37.439 2 INFO nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Creating config drive at /var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e/disk.config
Oct 14 09:29:37 compute-0 nova_compute[259627]: 2025-10-14 09:29:37.450 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo8hydacz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:37 compute-0 nova_compute[259627]: 2025-10-14 09:29:37.601 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo8hydacz" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:37 compute-0 nova_compute[259627]: 2025-10-14 09:29:37.647 2 DEBUG nova.storage.rbd_utils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:37 compute-0 nova_compute[259627]: 2025-10-14 09:29:37.653 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e/disk.config f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:37 compute-0 ceph-mon[74249]: pgmap v2294: 305 pgs: 305 active+clean; 148 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 121 op/s
Oct 14 09:29:37 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4130448813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:29:37 compute-0 nova_compute[259627]: 2025-10-14 09:29:37.861 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e/disk.config f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:37 compute-0 nova_compute[259627]: 2025-10-14 09:29:37.863 2 INFO nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Deleting local config drive /var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e/disk.config because it was imported into RBD.
Oct 14 09:29:37 compute-0 kernel: tap3eaf2ed5-bd: entered promiscuous mode
Oct 14 09:29:37 compute-0 NetworkManager[44885]: <info>  [1760434177.9171] manager: (tap3eaf2ed5-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/583)
Oct 14 09:29:37 compute-0 ovn_controller[152662]: 2025-10-14T09:29:37Z|01426|binding|INFO|Claiming lport 3eaf2ed5-bd76-4749-b8f0-58985c91a040 for this chassis.
Oct 14 09:29:37 compute-0 ovn_controller[152662]: 2025-10-14T09:29:37Z|01427|binding|INFO|3eaf2ed5-bd76-4749-b8f0-58985c91a040: Claiming fa:16:3e:8c:c0:28 10.100.0.14
Oct 14 09:29:37 compute-0 nova_compute[259627]: 2025-10-14 09:29:37.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:37 compute-0 ovn_controller[152662]: 2025-10-14T09:29:37Z|01428|binding|INFO|Setting lport 3eaf2ed5-bd76-4749-b8f0-58985c91a040 ovn-installed in OVS
Oct 14 09:29:37 compute-0 nova_compute[259627]: 2025-10-14 09:29:37.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:37 compute-0 nova_compute[259627]: 2025-10-14 09:29:37.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:37 compute-0 systemd-machined[214636]: New machine qemu-167-instance-00000086.
Oct 14 09:29:37 compute-0 systemd-udevd[397060]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:29:37 compute-0 NetworkManager[44885]: <info>  [1760434177.9835] device (tap3eaf2ed5-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:29:37 compute-0 NetworkManager[44885]: <info>  [1760434177.9845] device (tap3eaf2ed5-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:29:37 compute-0 systemd[1]: Started Virtual Machine qemu-167-instance-00000086.
Oct 14 09:29:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.161 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:c0:28 10.100.0.14'], port_security=['fa:16:3e:8c:c0:28 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a804945-64f0-4b07-8e8b-2ad2beb7451e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9fc4eb48-6270-40e6-85b5-254b29fe464f fbf03768-a7c8-4472-a870-318ef2b37cd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd3c523a-c2e8-4daa-a5c1-e2e6a8d953b6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3eaf2ed5-bd76-4749-b8f0-58985c91a040) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:29:38 compute-0 ovn_controller[152662]: 2025-10-14T09:29:38Z|01429|binding|INFO|Setting lport 3eaf2ed5-bd76-4749-b8f0-58985c91a040 up in Southbound
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.163 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3eaf2ed5-bd76-4749-b8f0-58985c91a040 in datapath 7a804945-64f0-4b07-8e8b-2ad2beb7451e bound to our chassis
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.164 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a804945-64f0-4b07-8e8b-2ad2beb7451e
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.186 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb0470f-2f8b-457a-b71d-ca2eb1bd58f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.187 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7a804945-61 in ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.188 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7a804945-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.189 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a8cf5b0a-77ff-4e8b-a61e-8d87a68f62a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.189 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[65b43c3a-e33b-4c41-b032-c92128aef563]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.212 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[658c1207-bf6f-46aa-acf0-8121730e49fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.244 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6199e169-4a54-4655-9468-7ca56b250e2d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.278 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[13a85189-b9c7-47f7-add8-f32e9f8d2509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:38 compute-0 systemd-udevd[397062]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:29:38 compute-0 NetworkManager[44885]: <info>  [1760434178.2877] manager: (tap7a804945-60): new Veth device (/org/freedesktop/NetworkManager/Devices/584)
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.295 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc014fc-62d0-4271-9f2c-1d42a6760b05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.349 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9fd12b-d540-4e57-8b44-64ebfc77976b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.353 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1a3d89d8-f4a0-4eee-8310-9567ee7b5949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:38 compute-0 NetworkManager[44885]: <info>  [1760434178.3901] device (tap7a804945-60): carrier: link connected
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.393 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[123fbbda-aa8e-4d44-a223-c7b5b8549a01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 148 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 111 op/s
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.413 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d10adb-4bc5-441d-8b7c-a615999f4342]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a804945-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:f3:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795714, 'reachable_time': 37260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397093, 'error': None, 'target': 'ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.439 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd80551-fe81-4dc6-9a34-d0321d9a631f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea5:f304'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795714, 'tstamp': 795714}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397094, 'error': None, 'target': 'ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.461 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ef01c3-8cb4-4882-b419-603fa876a33b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a804945-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:f3:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795714, 'reachable_time': 37260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 397102, 'error': None, 'target': 'ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.499 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2c6fae0f-a846-431d-9bc6-075298bd0f87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:38 compute-0 nova_compute[259627]: 2025-10-14 09:29:38.538 2 DEBUG nova.compute.manager [req-29be966f-695d-4ff2-af48-02553d99cc5e req-9486a61f-d436-4c23-800f-e3e06053149f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received event network-vif-plugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:29:38 compute-0 nova_compute[259627]: 2025-10-14 09:29:38.539 2 DEBUG oslo_concurrency.lockutils [req-29be966f-695d-4ff2-af48-02553d99cc5e req-9486a61f-d436-4c23-800f-e3e06053149f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:38 compute-0 nova_compute[259627]: 2025-10-14 09:29:38.540 2 DEBUG oslo_concurrency.lockutils [req-29be966f-695d-4ff2-af48-02553d99cc5e req-9486a61f-d436-4c23-800f-e3e06053149f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:38 compute-0 nova_compute[259627]: 2025-10-14 09:29:38.540 2 DEBUG oslo_concurrency.lockutils [req-29be966f-695d-4ff2-af48-02553d99cc5e req-9486a61f-d436-4c23-800f-e3e06053149f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:38 compute-0 nova_compute[259627]: 2025-10-14 09:29:38.540 2 DEBUG nova.compute.manager [req-29be966f-695d-4ff2-af48-02553d99cc5e req-9486a61f-d436-4c23-800f-e3e06053149f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Processing event network-vif-plugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.568 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[78f6b162-0bfc-4320-b61d-2a2e27801683]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.569 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a804945-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.570 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.570 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a804945-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:38 compute-0 nova_compute[259627]: 2025-10-14 09:29:38.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:38 compute-0 NetworkManager[44885]: <info>  [1760434178.5727] manager: (tap7a804945-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/585)
Oct 14 09:29:38 compute-0 kernel: tap7a804945-60: entered promiscuous mode
Oct 14 09:29:38 compute-0 nova_compute[259627]: 2025-10-14 09:29:38.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.576 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a804945-60, col_values=(('external_ids', {'iface-id': '8593aa40-4696-48eb-b1b8-2c53671eee76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:38 compute-0 nova_compute[259627]: 2025-10-14 09:29:38.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:38 compute-0 ovn_controller[152662]: 2025-10-14T09:29:38Z|01430|binding|INFO|Releasing lport 8593aa40-4696-48eb-b1b8-2c53671eee76 from this chassis (sb_readonly=0)
Oct 14 09:29:38 compute-0 nova_compute[259627]: 2025-10-14 09:29:38.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.579 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7a804945-64f0-4b07-8e8b-2ad2beb7451e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7a804945-64f0-4b07-8e8b-2ad2beb7451e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:29:38 compute-0 nova_compute[259627]: 2025-10-14 09:29:38.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.591 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c9c536-3057-44ca-aec4-f229a0517f95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.592 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-7a804945-64f0-4b07-8e8b-2ad2beb7451e
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/7a804945-64f0-4b07-8e8b-2ad2beb7451e.pid.haproxy
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 7a804945-64f0-4b07-8e8b-2ad2beb7451e
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:29:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.593 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e', 'env', 'PROCESS_TAG=haproxy-7a804945-64f0-4b07-8e8b-2ad2beb7451e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7a804945-64f0-4b07-8e8b-2ad2beb7451e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:29:39 compute-0 podman[397169]: 2025-10-14 09:29:39.07971009 +0000 UTC m=+0.069290073 container create 31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:29:39 compute-0 podman[397169]: 2025-10-14 09:29:39.051000431 +0000 UTC m=+0.040580424 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:29:39 compute-0 systemd[1]: Started libpod-conmon-31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c.scope.
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.152 2 DEBUG nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.154 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434179.151438, f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.155 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] VM Started (Lifecycle Event)
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.158 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.164 2 INFO nova.virt.libvirt.driver [-] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Instance spawned successfully.
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.165 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.180 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:29:39 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.185 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cdbb75e8a79d144f3413c716738114ae1bf49c3311a5a61a4a5f20e5f113958/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.200 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.200 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.201 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.201 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.201 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.202 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.205 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.206 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434179.153142, f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.207 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] VM Paused (Lifecycle Event)
Oct 14 09:29:39 compute-0 podman[397169]: 2025-10-14 09:29:39.219290439 +0000 UTC m=+0.208870482 container init 31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 09:29:39 compute-0 podman[397169]: 2025-10-14 09:29:39.230316192 +0000 UTC m=+0.219896205 container start 31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.240 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.246 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434179.1619325, f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.246 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] VM Resumed (Lifecycle Event)
Oct 14 09:29:39 compute-0 neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e[397184]: [NOTICE]   (397188) : New worker (397190) forked
Oct 14 09:29:39 compute-0 neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e[397184]: [NOTICE]   (397188) : Loading success.
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.263 2 INFO nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Took 7.71 seconds to spawn the instance on the hypervisor.
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.264 2 DEBUG nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.278 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.282 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.327 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.345 2 INFO nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Took 8.81 seconds to build instance.
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.364 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.977s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:39 compute-0 ceph-mon[74249]: pgmap v2295: 305 pgs: 305 active+clean; 148 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 111 op/s
Oct 14 09:29:39 compute-0 nova_compute[259627]: 2025-10-14 09:29:39.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:29:40 compute-0 nova_compute[259627]: 2025-10-14 09:29:40.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 160 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.7 MiB/s wr, 119 op/s
Oct 14 09:29:40 compute-0 nova_compute[259627]: 2025-10-14 09:29:40.651 2 DEBUG nova.compute.manager [req-c4eb8f15-3911-4ca8-aab9-b79cde9a4a76 req-d6300a39-f4dd-4303-a2cf-1ecff6b4e24c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received event network-vif-plugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:29:40 compute-0 nova_compute[259627]: 2025-10-14 09:29:40.651 2 DEBUG oslo_concurrency.lockutils [req-c4eb8f15-3911-4ca8-aab9-b79cde9a4a76 req-d6300a39-f4dd-4303-a2cf-1ecff6b4e24c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:40 compute-0 nova_compute[259627]: 2025-10-14 09:29:40.651 2 DEBUG oslo_concurrency.lockutils [req-c4eb8f15-3911-4ca8-aab9-b79cde9a4a76 req-d6300a39-f4dd-4303-a2cf-1ecff6b4e24c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:40 compute-0 nova_compute[259627]: 2025-10-14 09:29:40.652 2 DEBUG oslo_concurrency.lockutils [req-c4eb8f15-3911-4ca8-aab9-b79cde9a4a76 req-d6300a39-f4dd-4303-a2cf-1ecff6b4e24c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:40 compute-0 nova_compute[259627]: 2025-10-14 09:29:40.652 2 DEBUG nova.compute.manager [req-c4eb8f15-3911-4ca8-aab9-b79cde9a4a76 req-d6300a39-f4dd-4303-a2cf-1ecff6b4e24c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] No waiting events found dispatching network-vif-plugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:29:40 compute-0 nova_compute[259627]: 2025-10-14 09:29:40.652 2 WARNING nova.compute.manager [req-c4eb8f15-3911-4ca8-aab9-b79cde9a4a76 req-d6300a39-f4dd-4303-a2cf-1ecff6b4e24c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received unexpected event network-vif-plugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 for instance with vm_state active and task_state None.
Oct 14 09:29:41 compute-0 podman[397199]: 2025-10-14 09:29:41.652665776 +0000 UTC m=+0.065934590 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:29:41 compute-0 ceph-mon[74249]: pgmap v2296: 305 pgs: 305 active+clean; 160 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.7 MiB/s wr, 119 op/s
Oct 14 09:29:41 compute-0 podman[397200]: 2025-10-14 09:29:41.673898731 +0000 UTC m=+0.087117594 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid)
Oct 14 09:29:41 compute-0 nova_compute[259627]: 2025-10-14 09:29:41.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 212 op/s
Oct 14 09:29:42 compute-0 nova_compute[259627]: 2025-10-14 09:29:42.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:29:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.054552) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434183054576, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1018, "num_deletes": 250, "total_data_size": 1430857, "memory_usage": 1458072, "flush_reason": "Manual Compaction"}
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434183061230, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 864902, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47747, "largest_seqno": 48764, "table_properties": {"data_size": 860999, "index_size": 1555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10530, "raw_average_key_size": 20, "raw_value_size": 852537, "raw_average_value_size": 1678, "num_data_blocks": 70, "num_entries": 508, "num_filter_entries": 508, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434089, "oldest_key_time": 1760434089, "file_creation_time": 1760434183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 6741 microseconds, and 3216 cpu microseconds.
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.061282) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 864902 bytes OK
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.061352) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.063447) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.063468) EVENT_LOG_v1 {"time_micros": 1760434183063461, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.063487) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1426049, prev total WAL file size 1426049, number of live WAL files 2.
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.064559) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373530' seq:72057594037927935, type:22 .. '6D6772737461740032303031' seq:0, type:0; will stop at (end)
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(844KB)], [110(10MB)]
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434183064668, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11435369, "oldest_snapshot_seqno": -1}
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6922 keys, 8683031 bytes, temperature: kUnknown
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434183130437, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 8683031, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8638130, "index_size": 26485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 180238, "raw_average_key_size": 26, "raw_value_size": 8515567, "raw_average_value_size": 1230, "num_data_blocks": 1035, "num_entries": 6922, "num_filter_entries": 6922, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.130720) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 8683031 bytes
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.132475) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.7 rd, 131.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.1 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(23.3) write-amplify(10.0) OK, records in: 7392, records dropped: 470 output_compression: NoCompression
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.132495) EVENT_LOG_v1 {"time_micros": 1760434183132487, "job": 66, "event": "compaction_finished", "compaction_time_micros": 65842, "compaction_time_cpu_micros": 43856, "output_level": 6, "num_output_files": 1, "total_output_size": 8683031, "num_input_records": 7392, "num_output_records": 6922, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434183132782, "job": 66, "event": "table_file_deletion", "file_number": 112}
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434183134992, "job": 66, "event": "table_file_deletion", "file_number": 110}
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.064299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.135058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.135062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.135064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.135066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:29:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.135067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011050157297974865 of space, bias 1.0, pg target 0.33150471893924593 quantized to 32 (current 32)
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:29:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:29:43 compute-0 ceph-mon[74249]: pgmap v2297: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 212 op/s
Oct 14 09:29:43 compute-0 nova_compute[259627]: 2025-10-14 09:29:43.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:29:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Oct 14 09:29:44 compute-0 nova_compute[259627]: 2025-10-14 09:29:44.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:29:44 compute-0 nova_compute[259627]: 2025-10-14 09:29:44.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.009 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.009 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.010 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:29:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1708871909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.547 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.610 2 DEBUG nova.compute.manager [req-b1f3e820-bbc0-4d56-8663-d17510f23468 req-3d8d135d-2484-4d97-8ba7-b37b56be122b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received event network-changed-3eaf2ed5-bd76-4749-b8f0-58985c91a040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.611 2 DEBUG nova.compute.manager [req-b1f3e820-bbc0-4d56-8663-d17510f23468 req-3d8d135d-2484-4d97-8ba7-b37b56be122b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Refreshing instance network info cache due to event network-changed-3eaf2ed5-bd76-4749-b8f0-58985c91a040. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.612 2 DEBUG oslo_concurrency.lockutils [req-b1f3e820-bbc0-4d56-8663-d17510f23468 req-3d8d135d-2484-4d97-8ba7-b37b56be122b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.612 2 DEBUG oslo_concurrency.lockutils [req-b1f3e820-bbc0-4d56-8663-d17510f23468 req-3d8d135d-2484-4d97-8ba7-b37b56be122b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.613 2 DEBUG nova.network.neutron [req-b1f3e820-bbc0-4d56-8663-d17510f23468 req-3d8d135d-2484-4d97-8ba7-b37b56be122b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Refreshing network info cache for port 3eaf2ed5-bd76-4749-b8f0-58985c91a040 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.652 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.653 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.661 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.662 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:29:45 compute-0 ceph-mon[74249]: pgmap v2298: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Oct 14 09:29:45 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1708871909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.909 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.911 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3311MB free_disk=59.921993255615234GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.911 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:45 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.912 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:46 compute-0 nova_compute[259627]: 2025-10-14 09:29:45.999 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:29:46 compute-0 nova_compute[259627]: 2025-10-14 09:29:46.000 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:29:46 compute-0 nova_compute[259627]: 2025-10-14 09:29:46.000 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:29:46 compute-0 nova_compute[259627]: 2025-10-14 09:29:46.000 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:29:46 compute-0 nova_compute[259627]: 2025-10-14 09:29:46.019 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 14 09:29:46 compute-0 nova_compute[259627]: 2025-10-14 09:29:46.034 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 14 09:29:46 compute-0 nova_compute[259627]: 2025-10-14 09:29:46.035 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 09:29:46 compute-0 nova_compute[259627]: 2025-10-14 09:29:46.047 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 14 09:29:46 compute-0 nova_compute[259627]: 2025-10-14 09:29:46.068 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 14 09:29:46 compute-0 nova_compute[259627]: 2025-10-14 09:29:46.188 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 14 09:29:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:29:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2023518290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:29:46 compute-0 nova_compute[259627]: 2025-10-14 09:29:46.702 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:46 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2023518290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:29:46 compute-0 nova_compute[259627]: 2025-10-14 09:29:46.712 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:29:46 compute-0 nova_compute[259627]: 2025-10-14 09:29:46.735 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:29:46 compute-0 nova_compute[259627]: 2025-10-14 09:29:46.763 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:29:46 compute-0 nova_compute[259627]: 2025-10-14 09:29:46.764 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:47 compute-0 nova_compute[259627]: 2025-10-14 09:29:47.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:47 compute-0 nova_compute[259627]: 2025-10-14 09:29:47.391 2 DEBUG nova.network.neutron [req-b1f3e820-bbc0-4d56-8663-d17510f23468 req-3d8d135d-2484-4d97-8ba7-b37b56be122b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Updated VIF entry in instance network info cache for port 3eaf2ed5-bd76-4749-b8f0-58985c91a040. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:29:47 compute-0 nova_compute[259627]: 2025-10-14 09:29:47.392 2 DEBUG nova.network.neutron [req-b1f3e820-bbc0-4d56-8663-d17510f23468 req-3d8d135d-2484-4d97-8ba7-b37b56be122b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Updating instance_info_cache with network_info: [{"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:29:47 compute-0 nova_compute[259627]: 2025-10-14 09:29:47.425 2 DEBUG oslo_concurrency.lockutils [req-b1f3e820-bbc0-4d56-8663-d17510f23468 req-3d8d135d-2484-4d97-8ba7-b37b56be122b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:29:47 compute-0 ceph-mon[74249]: pgmap v2299: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 14 09:29:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:29:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 625 KiB/s wr, 117 op/s
Oct 14 09:29:48 compute-0 nova_compute[259627]: 2025-10-14 09:29:48.766 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:29:48 compute-0 nova_compute[259627]: 2025-10-14 09:29:48.766 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:29:48 compute-0 nova_compute[259627]: 2025-10-14 09:29:48.795 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:29:48 compute-0 nova_compute[259627]: 2025-10-14 09:29:48.795 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:29:48 compute-0 nova_compute[259627]: 2025-10-14 09:29:48.795 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:29:49 compute-0 ceph-mon[74249]: pgmap v2300: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 625 KiB/s wr, 117 op/s
Oct 14 09:29:49 compute-0 nova_compute[259627]: 2025-10-14 09:29:49.807 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:49 compute-0 nova_compute[259627]: 2025-10-14 09:29:49.808 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:49 compute-0 nova_compute[259627]: 2025-10-14 09:29:49.826 2 DEBUG nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:29:49 compute-0 nova_compute[259627]: 2025-10-14 09:29:49.901 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:49 compute-0 nova_compute[259627]: 2025-10-14 09:29:49.901 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:49 compute-0 nova_compute[259627]: 2025-10-14 09:29:49.910 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:29:49 compute-0 nova_compute[259627]: 2025-10-14 09:29:49.910 2 INFO nova.compute.claims [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.052 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:50 compute-0 ovn_controller[152662]: 2025-10-14T09:29:50Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:c0:28 10.100.0.14
Oct 14 09:29:50 compute-0 ovn_controller[152662]: 2025-10-14T09:29:50Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:c0:28 10.100.0.14
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 176 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 123 op/s
Oct 14 09:29:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:29:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/922411037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.579 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.587 2 DEBUG nova.compute.provider_tree [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.610 2 DEBUG nova.scheduler.client.report [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.638 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.639 2 DEBUG nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.704 2 DEBUG nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.705 2 DEBUG nova.network.neutron [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.727 2 INFO nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:29:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/922411037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.748 2 DEBUG nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.825 2 DEBUG nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.827 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.827 2 INFO nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Creating image(s)
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.856 2 DEBUG nova.storage.rbd_utils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.886 2 DEBUG nova.storage.rbd_utils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.914 2 DEBUG nova.storage.rbd_utils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.919 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:50 compute-0 nova_compute[259627]: 2025-10-14 09:29:50.976 2 DEBUG nova.policy [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:29:51 compute-0 nova_compute[259627]: 2025-10-14 09:29:51.023 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:51 compute-0 nova_compute[259627]: 2025-10-14 09:29:51.025 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:51 compute-0 nova_compute[259627]: 2025-10-14 09:29:51.026 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:51 compute-0 nova_compute[259627]: 2025-10-14 09:29:51.026 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:51 compute-0 nova_compute[259627]: 2025-10-14 09:29:51.058 2 DEBUG nova.storage.rbd_utils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:51 compute-0 nova_compute[259627]: 2025-10-14 09:29:51.063 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:51 compute-0 nova_compute[259627]: 2025-10-14 09:29:51.419 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:51 compute-0 nova_compute[259627]: 2025-10-14 09:29:51.496 2 DEBUG nova.storage.rbd_utils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:29:51 compute-0 nova_compute[259627]: 2025-10-14 09:29:51.605 2 DEBUG nova.objects.instance [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid edf4b59a-fe1b-48ae-92a3-7bec88fc7491 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:29:51 compute-0 nova_compute[259627]: 2025-10-14 09:29:51.624 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:29:51 compute-0 nova_compute[259627]: 2025-10-14 09:29:51.624 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Ensure instance console log exists: /var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:29:51 compute-0 nova_compute[259627]: 2025-10-14 09:29:51.625 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:51 compute-0 nova_compute[259627]: 2025-10-14 09:29:51.625 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:51 compute-0 nova_compute[259627]: 2025-10-14 09:29:51.626 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:51 compute-0 ceph-mon[74249]: pgmap v2301: 305 pgs: 305 active+clean; 176 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 123 op/s
Oct 14 09:29:51 compute-0 nova_compute[259627]: 2025-10-14 09:29:51.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:29:51 compute-0 nova_compute[259627]: 2025-10-14 09:29:51.992 2 DEBUG nova.network.neutron [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Successfully created port: f7181e0a-4f5b-4d28-8302-c28ab393a348 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:29:52 compute-0 nova_compute[259627]: 2025-10-14 09:29:52.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 192 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.3 MiB/s wr, 158 op/s
Oct 14 09:29:52 compute-0 sudo[397472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:29:52 compute-0 sudo[397472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:29:52 compute-0 sudo[397472]: pam_unix(sudo:session): session closed for user root
Oct 14 09:29:52 compute-0 sudo[397497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:29:52 compute-0 sudo[397497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:29:52 compute-0 sudo[397497]: pam_unix(sudo:session): session closed for user root
Oct 14 09:29:52 compute-0 sudo[397522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:29:52 compute-0 sudo[397522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:29:52 compute-0 sudo[397522]: pam_unix(sudo:session): session closed for user root
Oct 14 09:29:52 compute-0 sudo[397547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:29:52 compute-0 sudo[397547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:29:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:29:53 compute-0 nova_compute[259627]: 2025-10-14 09:29:53.201 2 DEBUG nova.network.neutron [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Successfully updated port: f7181e0a-4f5b-4d28-8302-c28ab393a348 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:29:53 compute-0 nova_compute[259627]: 2025-10-14 09:29:53.223 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:29:53 compute-0 nova_compute[259627]: 2025-10-14 09:29:53.223 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:29:53 compute-0 nova_compute[259627]: 2025-10-14 09:29:53.224 2 DEBUG nova.network.neutron [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:29:53 compute-0 nova_compute[259627]: 2025-10-14 09:29:53.412 2 DEBUG nova.compute.manager [req-97b74cde-2d65-4b7c-9883-e822c71c7d94 req-e946fa3b-01ec-4ef8-9064-ed0c0e1dad39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Received event network-changed-f7181e0a-4f5b-4d28-8302-c28ab393a348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:29:53 compute-0 nova_compute[259627]: 2025-10-14 09:29:53.412 2 DEBUG nova.compute.manager [req-97b74cde-2d65-4b7c-9883-e822c71c7d94 req-e946fa3b-01ec-4ef8-9064-ed0c0e1dad39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Refreshing instance network info cache due to event network-changed-f7181e0a-4f5b-4d28-8302-c28ab393a348. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:29:53 compute-0 nova_compute[259627]: 2025-10-14 09:29:53.413 2 DEBUG oslo_concurrency.lockutils [req-97b74cde-2d65-4b7c-9883-e822c71c7d94 req-e946fa3b-01ec-4ef8-9064-ed0c0e1dad39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:29:53 compute-0 sudo[397547]: pam_unix(sudo:session): session closed for user root
Oct 14 09:29:53 compute-0 nova_compute[259627]: 2025-10-14 09:29:53.505 2 DEBUG nova.network.neutron [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:29:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:29:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:29:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:29:53 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:29:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:29:53 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:29:53 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev a16765dc-d020-418b-bc47-d031b859894d does not exist
Oct 14 09:29:53 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 6c6d1e84-3a64-4fd6-a52f-8bacec2aec01 does not exist
Oct 14 09:29:53 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 746212b4-6ef5-485c-b715-e8ce50f2df4a does not exist
Oct 14 09:29:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:29:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:29:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:29:53 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:29:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:29:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:29:53 compute-0 sudo[397603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:29:53 compute-0 sudo[397603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:29:53 compute-0 sudo[397603]: pam_unix(sudo:session): session closed for user root
Oct 14 09:29:53 compute-0 sudo[397628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:29:53 compute-0 sudo[397628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:29:53 compute-0 sudo[397628]: pam_unix(sudo:session): session closed for user root
Oct 14 09:29:53 compute-0 ceph-mon[74249]: pgmap v2302: 305 pgs: 305 active+clean; 192 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.3 MiB/s wr, 158 op/s
Oct 14 09:29:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:29:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:29:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:29:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:29:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:29:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:29:53 compute-0 sudo[397653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:29:53 compute-0 sudo[397653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:29:53 compute-0 sudo[397653]: pam_unix(sudo:session): session closed for user root
Oct 14 09:29:53 compute-0 sudo[397678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:29:53 compute-0 sudo[397678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:29:54 compute-0 podman[397743]: 2025-10-14 09:29:54.307142159 +0000 UTC m=+0.070995435 container create 41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 09:29:54 compute-0 systemd[1]: Started libpod-conmon-41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818.scope.
Oct 14 09:29:54 compute-0 podman[397743]: 2025-10-14 09:29:54.271434087 +0000 UTC m=+0.035287353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:29:54 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:29:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 192 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 739 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:29:54 compute-0 podman[397743]: 2025-10-14 09:29:54.417842104 +0000 UTC m=+0.181695380 container init 41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_black, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 09:29:54 compute-0 podman[397743]: 2025-10-14 09:29:54.426812206 +0000 UTC m=+0.190665452 container start 41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_black, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:29:54 compute-0 podman[397743]: 2025-10-14 09:29:54.430413925 +0000 UTC m=+0.194267171 container attach 41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_black, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Oct 14 09:29:54 compute-0 distracted_black[397759]: 167 167
Oct 14 09:29:54 compute-0 systemd[1]: libpod-41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818.scope: Deactivated successfully.
Oct 14 09:29:54 compute-0 conmon[397759]: conmon 41aad58c88be03037c79 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818.scope/container/memory.events
Oct 14 09:29:54 compute-0 podman[397743]: 2025-10-14 09:29:54.43709323 +0000 UTC m=+0.200946476 container died 41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_black, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:29:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7a86ed7a5ceee2fd2a67325f977d5a84cdf1f322976a53434581eefe65dfa82-merged.mount: Deactivated successfully.
Oct 14 09:29:54 compute-0 podman[397743]: 2025-10-14 09:29:54.486101301 +0000 UTC m=+0.249954577 container remove 41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:29:54 compute-0 systemd[1]: libpod-conmon-41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818.scope: Deactivated successfully.
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.580 2 DEBUG nova.network.neutron [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Updating instance_info_cache with network_info: [{"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.599 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.600 2 DEBUG nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Instance network_info: |[{"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.602 2 DEBUG oslo_concurrency.lockutils [req-97b74cde-2d65-4b7c-9883-e822c71c7d94 req-e946fa3b-01ec-4ef8-9064-ed0c0e1dad39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.603 2 DEBUG nova.network.neutron [req-97b74cde-2d65-4b7c-9883-e822c71c7d94 req-e946fa3b-01ec-4ef8-9064-ed0c0e1dad39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Refreshing network info cache for port f7181e0a-4f5b-4d28-8302-c28ab393a348 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.609 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Start _get_guest_xml network_info=[{"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.617 2 WARNING nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.631 2 DEBUG nova.virt.libvirt.host [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.633 2 DEBUG nova.virt.libvirt.host [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.638 2 DEBUG nova.virt.libvirt.host [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.639 2 DEBUG nova.virt.libvirt.host [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.640 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.641 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.643 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.643 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.644 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.645 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.646 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.646 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.647 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.648 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.648 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.649 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.654 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:54 compute-0 podman[397784]: 2025-10-14 09:29:54.73453437 +0000 UTC m=+0.064176787 container create 5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_villani, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 09:29:54 compute-0 systemd[1]: Started libpod-conmon-5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972.scope.
Oct 14 09:29:54 compute-0 podman[397784]: 2025-10-14 09:29:54.709169523 +0000 UTC m=+0.038811940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:29:54 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c74b007edb988d3d22a61963f685ad1a4c43e8371ac49bd380213f79eb05f8c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c74b007edb988d3d22a61963f685ad1a4c43e8371ac49bd380213f79eb05f8c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c74b007edb988d3d22a61963f685ad1a4c43e8371ac49bd380213f79eb05f8c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c74b007edb988d3d22a61963f685ad1a4c43e8371ac49bd380213f79eb05f8c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c74b007edb988d3d22a61963f685ad1a4c43e8371ac49bd380213f79eb05f8c7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:29:54 compute-0 podman[397784]: 2025-10-14 09:29:54.846143868 +0000 UTC m=+0.175786295 container init 5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 09:29:54 compute-0 podman[397784]: 2025-10-14 09:29:54.862288427 +0000 UTC m=+0.191930804 container start 5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_villani, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:29:54 compute-0 podman[397784]: 2025-10-14 09:29:54.865546587 +0000 UTC m=+0.195188974 container attach 5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:29:54 compute-0 nova_compute[259627]: 2025-10-14 09:29:54.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:29:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:29:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3886267692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.150 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.184 2 DEBUG nova.storage.rbd_utils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.190 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:55 compute-0 podman[397874]: 2025-10-14 09:29:55.665966386 +0000 UTC m=+0.068684179 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 09:29:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:29:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3365070571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.705 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.707 2 DEBUG nova.virt.libvirt.vif [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:29:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1997112513',display_name='tempest-TestGettingAddress-server-1997112513',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1997112513',id=135,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3NS95m6mTTkHzJJUJYRW56WG9z1F4s978ToWhMf7sPvH/Oc2BrnynBRS1TQVQWP6VVqaEePs7ictqhZIZyqMTpbO+UGDq3FfVttcWsPDDYdiFqglsH1qUqUZlXlWfhiQ==',key_name='tempest-TestGettingAddress-313667449',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-viso8tmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:29:50Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=edf4b59a-fe1b-48ae-92a3-7bec88fc7491,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.708 2 DEBUG nova.network.os_vif_util [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.709 2 DEBUG nova.network.os_vif_util [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:3d:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7181e0a-4f5b-4d28-8302-c28ab393a348,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7181e0a-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.711 2 DEBUG nova.objects.instance [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid edf4b59a-fe1b-48ae-92a3-7bec88fc7491 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.728 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:29:55 compute-0 nova_compute[259627]:   <uuid>edf4b59a-fe1b-48ae-92a3-7bec88fc7491</uuid>
Oct 14 09:29:55 compute-0 nova_compute[259627]:   <name>instance-00000087</name>
Oct 14 09:29:55 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:29:55 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:29:55 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <nova:name>tempest-TestGettingAddress-server-1997112513</nova:name>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:29:54</nova:creationTime>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:29:55 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:29:55 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:29:55 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:29:55 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:29:55 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:29:55 compute-0 nova_compute[259627]:         <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 09:29:55 compute-0 nova_compute[259627]:         <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:29:55 compute-0 nova_compute[259627]:         <nova:port uuid="f7181e0a-4f5b-4d28-8302-c28ab393a348">
Oct 14 09:29:55 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe90:3d7f" ipVersion="6"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:29:55 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:29:55 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <system>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <entry name="serial">edf4b59a-fe1b-48ae-92a3-7bec88fc7491</entry>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <entry name="uuid">edf4b59a-fe1b-48ae-92a3-7bec88fc7491</entry>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     </system>
Oct 14 09:29:55 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:29:55 compute-0 nova_compute[259627]:   <os>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:   </os>
Oct 14 09:29:55 compute-0 nova_compute[259627]:   <features>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:   </features>
Oct 14 09:29:55 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:29:55 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:29:55 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk">
Oct 14 09:29:55 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       </source>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:29:55 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk.config">
Oct 14 09:29:55 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       </source>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:29:55 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:90:3d:7f"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <target dev="tapf7181e0a-4f"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491/console.log" append="off"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <video>
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     </video>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:29:55 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:29:55 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:29:55 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:29:55 compute-0 nova_compute[259627]: </domain>
Oct 14 09:29:55 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.729 2 DEBUG nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Preparing to wait for external event network-vif-plugged-f7181e0a-4f5b-4d28-8302-c28ab393a348 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.729 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.730 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.730 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.731 2 DEBUG nova.virt.libvirt.vif [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:29:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1997112513',display_name='tempest-TestGettingAddress-server-1997112513',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1997112513',id=135,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3NS95m6mTTkHzJJUJYRW56WG9z1F4s978ToWhMf7sPvH/Oc2BrnynBRS1TQVQWP6VVqaEePs7ictqhZIZyqMTpbO+UGDq3FfVttcWsPDDYdiFqglsH1qUqUZlXlWfhiQ==',key_name='tempest-TestGettingAddress-313667449',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-viso8tmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:29:50Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=edf4b59a-fe1b-48ae-92a3-7bec88fc7491,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.731 2 DEBUG nova.network.os_vif_util [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.732 2 DEBUG nova.network.os_vif_util [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:3d:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7181e0a-4f5b-4d28-8302-c28ab393a348,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7181e0a-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.732 2 DEBUG os_vif [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:3d:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7181e0a-4f5b-4d28-8302-c28ab393a348,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7181e0a-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.733 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.733 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:29:55 compute-0 podman[397871]: 2025-10-14 09:29:55.733788421 +0000 UTC m=+0.139675982 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf7181e0a-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf7181e0a-4f, col_values=(('external_ids', {'iface-id': 'f7181e0a-4f5b-4d28-8302-c28ab393a348', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:3d:7f', 'vm-uuid': 'edf4b59a-fe1b-48ae-92a3-7bec88fc7491'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:55 compute-0 NetworkManager[44885]: <info>  [1760434195.7402] manager: (tapf7181e0a-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/586)
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.749 2 INFO os_vif [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:3d:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7181e0a-4f5b-4d28-8302-c28ab393a348,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7181e0a-4f')
Oct 14 09:29:55 compute-0 ceph-mon[74249]: pgmap v2303: 305 pgs: 305 active+clean; 192 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 739 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:29:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3886267692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:29:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3365070571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.816 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.816 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.817 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:90:3d:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.817 2 INFO nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Using config drive
Oct 14 09:29:55 compute-0 nova_compute[259627]: 2025-10-14 09:29:55.840 2 DEBUG nova.storage.rbd_utils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:55 compute-0 admiring_villani[397802]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:29:55 compute-0 admiring_villani[397802]: --> relative data size: 1.0
Oct 14 09:29:55 compute-0 admiring_villani[397802]: --> All data devices are unavailable
Oct 14 09:29:55 compute-0 systemd[1]: libpod-5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972.scope: Deactivated successfully.
Oct 14 09:29:55 compute-0 systemd[1]: libpod-5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972.scope: Consumed 1.008s CPU time.
Oct 14 09:29:56 compute-0 podman[397955]: 2025-10-14 09:29:56.011026662 +0000 UTC m=+0.046411988 container died 5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 09:29:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c74b007edb988d3d22a61963f685ad1a4c43e8371ac49bd380213f79eb05f8c7-merged.mount: Deactivated successfully.
Oct 14 09:29:56 compute-0 podman[397955]: 2025-10-14 09:29:56.093933161 +0000 UTC m=+0.129318387 container remove 5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_villani, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:29:56 compute-0 systemd[1]: libpod-conmon-5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972.scope: Deactivated successfully.
Oct 14 09:29:56 compute-0 sudo[397678]: pam_unix(sudo:session): session closed for user root
Oct 14 09:29:56 compute-0 nova_compute[259627]: 2025-10-14 09:29:56.216 2 DEBUG nova.network.neutron [req-97b74cde-2d65-4b7c-9883-e822c71c7d94 req-e946fa3b-01ec-4ef8-9064-ed0c0e1dad39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Updated VIF entry in instance network info cache for port f7181e0a-4f5b-4d28-8302-c28ab393a348. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:29:56 compute-0 nova_compute[259627]: 2025-10-14 09:29:56.218 2 DEBUG nova.network.neutron [req-97b74cde-2d65-4b7c-9883-e822c71c7d94 req-e946fa3b-01ec-4ef8-9064-ed0c0e1dad39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Updating instance_info_cache with network_info: [{"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:29:56 compute-0 nova_compute[259627]: 2025-10-14 09:29:56.241 2 DEBUG oslo_concurrency.lockutils [req-97b74cde-2d65-4b7c-9883-e822c71c7d94 req-e946fa3b-01ec-4ef8-9064-ed0c0e1dad39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:29:56 compute-0 nova_compute[259627]: 2025-10-14 09:29:56.249 2 INFO nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Creating config drive at /var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491/disk.config
Oct 14 09:29:56 compute-0 sudo[397970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:29:56 compute-0 sudo[397970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:29:56 compute-0 nova_compute[259627]: 2025-10-14 09:29:56.259 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo4aa38oa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:56 compute-0 sudo[397970]: pam_unix(sudo:session): session closed for user root
Oct 14 09:29:56 compute-0 sudo[397995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:29:56 compute-0 sudo[397995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:29:56 compute-0 sudo[397995]: pam_unix(sudo:session): session closed for user root
Oct 14 09:29:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 802 KiB/s rd, 3.9 MiB/s wr, 105 op/s
Oct 14 09:29:56 compute-0 nova_compute[259627]: 2025-10-14 09:29:56.429 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo4aa38oa" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:56 compute-0 sudo[398023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:29:56 compute-0 sudo[398023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:29:56 compute-0 sudo[398023]: pam_unix(sudo:session): session closed for user root
Oct 14 09:29:56 compute-0 nova_compute[259627]: 2025-10-14 09:29:56.474 2 DEBUG nova.storage.rbd_utils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:29:56 compute-0 nova_compute[259627]: 2025-10-14 09:29:56.484 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491/disk.config edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:29:56 compute-0 sudo[398064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:29:56 compute-0 sudo[398064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:29:56 compute-0 nova_compute[259627]: 2025-10-14 09:29:56.691 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491/disk.config edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:29:56 compute-0 nova_compute[259627]: 2025-10-14 09:29:56.692 2 INFO nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Deleting local config drive /var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491/disk.config because it was imported into RBD.
Oct 14 09:29:56 compute-0 kernel: tapf7181e0a-4f: entered promiscuous mode
Oct 14 09:29:56 compute-0 NetworkManager[44885]: <info>  [1760434196.7730] manager: (tapf7181e0a-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/587)
Oct 14 09:29:56 compute-0 nova_compute[259627]: 2025-10-14 09:29:56.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:56 compute-0 ovn_controller[152662]: 2025-10-14T09:29:56Z|01431|binding|INFO|Claiming lport f7181e0a-4f5b-4d28-8302-c28ab393a348 for this chassis.
Oct 14 09:29:56 compute-0 ovn_controller[152662]: 2025-10-14T09:29:56Z|01432|binding|INFO|f7181e0a-4f5b-4d28-8302-c28ab393a348: Claiming fa:16:3e:90:3d:7f 10.100.0.11 2001:db8::f816:3eff:fe90:3d7f
Oct 14 09:29:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.786 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:3d:7f 10.100.0.11 2001:db8::f816:3eff:fe90:3d7f'], port_security=['fa:16:3e:90:3d:7f 10.100.0.11 2001:db8::f816:3eff:fe90:3d7f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fe90:3d7f/64', 'neutron:device_id': 'edf4b59a-fe1b-48ae-92a3-7bec88fc7491', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '598825b2-c17a-4454-af22-d97e9570639b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=426ad543-f963-4015-b706-28bdf047c321, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f7181e0a-4f5b-4d28-8302-c28ab393a348) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:29:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.789 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f7181e0a-4f5b-4d28-8302-c28ab393a348 in datapath 3cbcf7e5-ac17-454b-893d-3fda266aa395 bound to our chassis
Oct 14 09:29:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.793 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3cbcf7e5-ac17-454b-893d-3fda266aa395
Oct 14 09:29:56 compute-0 ovn_controller[152662]: 2025-10-14T09:29:56Z|01433|binding|INFO|Setting lport f7181e0a-4f5b-4d28-8302-c28ab393a348 ovn-installed in OVS
Oct 14 09:29:56 compute-0 ovn_controller[152662]: 2025-10-14T09:29:56Z|01434|binding|INFO|Setting lport f7181e0a-4f5b-4d28-8302-c28ab393a348 up in Southbound
Oct 14 09:29:56 compute-0 nova_compute[259627]: 2025-10-14 09:29:56.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:56 compute-0 systemd-udevd[398144]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:29:56 compute-0 nova_compute[259627]: 2025-10-14 09:29:56.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.824 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d5eeba-d48c-4f99-b803-674a880a54ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:56 compute-0 NetworkManager[44885]: <info>  [1760434196.8446] device (tapf7181e0a-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:29:56 compute-0 NetworkManager[44885]: <info>  [1760434196.8466] device (tapf7181e0a-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:29:56 compute-0 systemd-machined[214636]: New machine qemu-168-instance-00000087.
Oct 14 09:29:56 compute-0 systemd[1]: Started Virtual Machine qemu-168-instance-00000087.
Oct 14 09:29:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.870 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9491a1-2bba-4635-bb6a-58ad763d7310]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.876 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[903ae0c3-1e5e-45bd-ae33-9f285087ddfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.915 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[60054373-d9aa-4b16-8764-4b18b1d53771]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.945 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e4fa25-7168-4796-8149-5d54aba8185e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3cbcf7e5-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:43:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794277, 'reachable_time': 28589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 20, 'inoctets': 1592, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 20, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1592, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 20, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398169, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.971 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0981af-7ff4-4b3b-beb3-38a94961973c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3cbcf7e5-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 794288, 'tstamp': 794288}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398178, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3cbcf7e5-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 794292, 'tstamp': 794292}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398178, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:29:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.972 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cbcf7e5-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:29:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:57.013 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cbcf7e5-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:57.013 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:29:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:57.014 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3cbcf7e5-a0, col_values=(('external_ids', {'iface-id': '2a306a4f-b3d3-4c63-8490-e1049c247650'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:29:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:29:57.014 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:29:57 compute-0 podman[398179]: 2025-10-14 09:29:57.076547601 +0000 UTC m=+0.046077570 container create 37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 09:29:57 compute-0 systemd[1]: Started libpod-conmon-37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b.scope.
Oct 14 09:29:57 compute-0 podman[398179]: 2025-10-14 09:29:57.055102441 +0000 UTC m=+0.024632400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:29:57 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:29:57 compute-0 podman[398179]: 2025-10-14 09:29:57.184836057 +0000 UTC m=+0.154366016 container init 37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 09:29:57 compute-0 podman[398179]: 2025-10-14 09:29:57.197069239 +0000 UTC m=+0.166599188 container start 37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:29:57 compute-0 podman[398179]: 2025-10-14 09:29:57.20074203 +0000 UTC m=+0.170271999 container attach 37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 14 09:29:57 compute-0 objective_chandrasekhar[398196]: 167 167
Oct 14 09:29:57 compute-0 systemd[1]: libpod-37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b.scope: Deactivated successfully.
Oct 14 09:29:57 compute-0 conmon[398196]: conmon 37a8ed99351be3a276a1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b.scope/container/memory.events
Oct 14 09:29:57 compute-0 podman[398179]: 2025-10-14 09:29:57.207790634 +0000 UTC m=+0.177321053 container died 37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:29:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-50fa703c85e1f47f2f1ef77a5018910296cc1e6b308d504d97ed3263486322c5-merged.mount: Deactivated successfully.
Oct 14 09:29:57 compute-0 podman[398179]: 2025-10-14 09:29:57.25621574 +0000 UTC m=+0.225745689 container remove 37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:29:57 compute-0 systemd[1]: libpod-conmon-37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b.scope: Deactivated successfully.
Oct 14 09:29:57 compute-0 podman[398260]: 2025-10-14 09:29:57.45939643 +0000 UTC m=+0.043420143 container create f4a13d4c079dc5c4cefc46fb4142ea85eb27b73a95797bd7c050dcf25e8418a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 09:29:57 compute-0 systemd[1]: Started libpod-conmon-f4a13d4c079dc5c4cefc46fb4142ea85eb27b73a95797bd7c050dcf25e8418a5.scope.
Oct 14 09:29:57 compute-0 podman[398260]: 2025-10-14 09:29:57.439670664 +0000 UTC m=+0.023694397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:29:57 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d08c1b5f2a3ac5422a1623d3f26457ad423915f7220442db95bb8035700c2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d08c1b5f2a3ac5422a1623d3f26457ad423915f7220442db95bb8035700c2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d08c1b5f2a3ac5422a1623d3f26457ad423915f7220442db95bb8035700c2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d08c1b5f2a3ac5422a1623d3f26457ad423915f7220442db95bb8035700c2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:29:57 compute-0 podman[398260]: 2025-10-14 09:29:57.560128439 +0000 UTC m=+0.144152162 container init f4a13d4c079dc5c4cefc46fb4142ea85eb27b73a95797bd7c050dcf25e8418a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brattain, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:29:57 compute-0 podman[398260]: 2025-10-14 09:29:57.566869976 +0000 UTC m=+0.150893689 container start f4a13d4c079dc5c4cefc46fb4142ea85eb27b73a95797bd7c050dcf25e8418a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brattain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:29:57 compute-0 podman[398260]: 2025-10-14 09:29:57.570514706 +0000 UTC m=+0.154538429 container attach f4a13d4c079dc5c4cefc46fb4142ea85eb27b73a95797bd7c050dcf25e8418a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brattain, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:29:57 compute-0 ceph-mon[74249]: pgmap v2304: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 802 KiB/s rd, 3.9 MiB/s wr, 105 op/s
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.860 2 DEBUG nova.compute.manager [req-9288f348-346a-4af8-900d-0bfbd50b3e7f req-19b03f52-c55f-42bb-8ecb-e88d9e370be3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Received event network-vif-plugged-f7181e0a-4f5b-4d28-8302-c28ab393a348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.861 2 DEBUG oslo_concurrency.lockutils [req-9288f348-346a-4af8-900d-0bfbd50b3e7f req-19b03f52-c55f-42bb-8ecb-e88d9e370be3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.862 2 DEBUG oslo_concurrency.lockutils [req-9288f348-346a-4af8-900d-0bfbd50b3e7f req-19b03f52-c55f-42bb-8ecb-e88d9e370be3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.862 2 DEBUG oslo_concurrency.lockutils [req-9288f348-346a-4af8-900d-0bfbd50b3e7f req-19b03f52-c55f-42bb-8ecb-e88d9e370be3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.863 2 DEBUG nova.compute.manager [req-9288f348-346a-4af8-900d-0bfbd50b3e7f req-19b03f52-c55f-42bb-8ecb-e88d9e370be3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Processing event network-vif-plugged-f7181e0a-4f5b-4d28-8302-c28ab393a348 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.893 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434197.892682, edf4b59a-fe1b-48ae-92a3-7bec88fc7491 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.894 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] VM Started (Lifecycle Event)
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.898 2 DEBUG nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.903 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.908 2 INFO nova.virt.libvirt.driver [-] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Instance spawned successfully.
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.908 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.916 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.921 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.938 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.938 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.939 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.940 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.941 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.942 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.948 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.949 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434197.8967695, edf4b59a-fe1b-48ae-92a3-7bec88fc7491 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.949 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] VM Paused (Lifecycle Event)
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.983 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.987 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434197.90257, edf4b59a-fe1b-48ae-92a3-7bec88fc7491 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:29:57 compute-0 nova_compute[259627]: 2025-10-14 09:29:57.988 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] VM Resumed (Lifecycle Event)
Oct 14 09:29:58 compute-0 nova_compute[259627]: 2025-10-14 09:29:58.009 2 INFO nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Took 7.18 seconds to spawn the instance on the hypervisor.
Oct 14 09:29:58 compute-0 nova_compute[259627]: 2025-10-14 09:29:58.009 2 DEBUG nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:29:58 compute-0 nova_compute[259627]: 2025-10-14 09:29:58.011 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:29:58 compute-0 nova_compute[259627]: 2025-10-14 09:29:58.018 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:29:58 compute-0 nova_compute[259627]: 2025-10-14 09:29:58.048 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:29:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:29:58 compute-0 nova_compute[259627]: 2025-10-14 09:29:58.073 2 INFO nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Took 8.20 seconds to build instance.
Oct 14 09:29:58 compute-0 nova_compute[259627]: 2025-10-14 09:29:58.090 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]: {
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:     "0": [
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:         {
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "devices": [
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "/dev/loop3"
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             ],
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "lv_name": "ceph_lv0",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "lv_size": "21470642176",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "name": "ceph_lv0",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "tags": {
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.cluster_name": "ceph",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.crush_device_class": "",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.encrypted": "0",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.osd_id": "0",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.type": "block",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.vdo": "0"
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             },
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "type": "block",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "vg_name": "ceph_vg0"
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:         }
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:     ],
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:     "1": [
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:         {
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "devices": [
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "/dev/loop4"
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             ],
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "lv_name": "ceph_lv1",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "lv_size": "21470642176",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "name": "ceph_lv1",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "tags": {
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.cluster_name": "ceph",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.crush_device_class": "",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.encrypted": "0",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.osd_id": "1",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.type": "block",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.vdo": "0"
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             },
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "type": "block",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "vg_name": "ceph_vg1"
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:         }
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:     ],
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:     "2": [
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:         {
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "devices": [
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "/dev/loop5"
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             ],
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "lv_name": "ceph_lv2",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "lv_size": "21470642176",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "name": "ceph_lv2",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "tags": {
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.cluster_name": "ceph",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.crush_device_class": "",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.encrypted": "0",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.osd_id": "2",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.type": "block",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:                 "ceph.vdo": "0"
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             },
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "type": "block",
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:             "vg_name": "ceph_vg2"
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:         }
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]:     ]
Oct 14 09:29:58 compute-0 beautiful_brattain[398277]: }
Oct 14 09:29:58 compute-0 systemd[1]: libpod-f4a13d4c079dc5c4cefc46fb4142ea85eb27b73a95797bd7c050dcf25e8418a5.scope: Deactivated successfully.
Oct 14 09:29:58 compute-0 podman[398260]: 2025-10-14 09:29:58.387601346 +0000 UTC m=+0.971625079 container died f4a13d4c079dc5c4cefc46fb4142ea85eb27b73a95797bd7c050dcf25e8418a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:29:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 14 09:29:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-78d08c1b5f2a3ac5422a1623d3f26457ad423915f7220442db95bb8035700c2a-merged.mount: Deactivated successfully.
Oct 14 09:29:58 compute-0 podman[398260]: 2025-10-14 09:29:58.44807117 +0000 UTC m=+1.032094873 container remove f4a13d4c079dc5c4cefc46fb4142ea85eb27b73a95797bd7c050dcf25e8418a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brattain, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 09:29:58 compute-0 systemd[1]: libpod-conmon-f4a13d4c079dc5c4cefc46fb4142ea85eb27b73a95797bd7c050dcf25e8418a5.scope: Deactivated successfully.
Oct 14 09:29:58 compute-0 sudo[398064]: pam_unix(sudo:session): session closed for user root
Oct 14 09:29:58 compute-0 sudo[398297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:29:58 compute-0 sudo[398297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:29:58 compute-0 sudo[398297]: pam_unix(sudo:session): session closed for user root
Oct 14 09:29:58 compute-0 sudo[398322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:29:58 compute-0 sudo[398322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:29:58 compute-0 sudo[398322]: pam_unix(sudo:session): session closed for user root
Oct 14 09:29:58 compute-0 sudo[398347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:29:58 compute-0 sudo[398347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:29:58 compute-0 sudo[398347]: pam_unix(sudo:session): session closed for user root
Oct 14 09:29:58 compute-0 sudo[398372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:29:58 compute-0 sudo[398372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:29:59 compute-0 podman[398440]: 2025-10-14 09:29:59.064394359 +0000 UTC m=+0.040591644 container create a778afb204376a7b4ebbd48022fab5e74c7e8330f27cdc84363824fe28520202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 09:29:59 compute-0 systemd[1]: Started libpod-conmon-a778afb204376a7b4ebbd48022fab5e74c7e8330f27cdc84363824fe28520202.scope.
Oct 14 09:29:59 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:29:59 compute-0 podman[398440]: 2025-10-14 09:29:59.139043574 +0000 UTC m=+0.115240889 container init a778afb204376a7b4ebbd48022fab5e74c7e8330f27cdc84363824fe28520202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_bose, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 09:29:59 compute-0 podman[398440]: 2025-10-14 09:29:59.045668786 +0000 UTC m=+0.021866131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:29:59 compute-0 podman[398440]: 2025-10-14 09:29:59.148079437 +0000 UTC m=+0.124276732 container start a778afb204376a7b4ebbd48022fab5e74c7e8330f27cdc84363824fe28520202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 14 09:29:59 compute-0 adoring_bose[398456]: 167 167
Oct 14 09:29:59 compute-0 systemd[1]: libpod-a778afb204376a7b4ebbd48022fab5e74c7e8330f27cdc84363824fe28520202.scope: Deactivated successfully.
Oct 14 09:29:59 compute-0 podman[398440]: 2025-10-14 09:29:59.195098399 +0000 UTC m=+0.171295684 container attach a778afb204376a7b4ebbd48022fab5e74c7e8330f27cdc84363824fe28520202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:29:59 compute-0 podman[398440]: 2025-10-14 09:29:59.195885268 +0000 UTC m=+0.172082573 container died a778afb204376a7b4ebbd48022fab5e74c7e8330f27cdc84363824fe28520202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_bose, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct 14 09:29:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a026b168da758625ff37bf5b2f7c40799777bbabbbeee7c8fc983328b214ee5-merged.mount: Deactivated successfully.
Oct 14 09:29:59 compute-0 podman[398440]: 2025-10-14 09:29:59.246636602 +0000 UTC m=+0.222833927 container remove a778afb204376a7b4ebbd48022fab5e74c7e8330f27cdc84363824fe28520202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_bose, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:29:59 compute-0 systemd[1]: libpod-conmon-a778afb204376a7b4ebbd48022fab5e74c7e8330f27cdc84363824fe28520202.scope: Deactivated successfully.
Oct 14 09:29:59 compute-0 podman[398478]: 2025-10-14 09:29:59.438972815 +0000 UTC m=+0.038648986 container create fda6c88571e62e5883f7fdc24d0049b71cbe100a88d6e37b411f33b0e180ef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 09:29:59 compute-0 systemd[1]: Started libpod-conmon-fda6c88571e62e5883f7fdc24d0049b71cbe100a88d6e37b411f33b0e180ef10.scope.
Oct 14 09:29:59 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:29:59 compute-0 podman[398478]: 2025-10-14 09:29:59.420832536 +0000 UTC m=+0.020508697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa50bf96cab8b53243fb848f1663858d8f2026b985dbcd060fa1c72bd3d3634/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa50bf96cab8b53243fb848f1663858d8f2026b985dbcd060fa1c72bd3d3634/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa50bf96cab8b53243fb848f1663858d8f2026b985dbcd060fa1c72bd3d3634/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa50bf96cab8b53243fb848f1663858d8f2026b985dbcd060fa1c72bd3d3634/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:29:59 compute-0 podman[398478]: 2025-10-14 09:29:59.540633457 +0000 UTC m=+0.140309688 container init fda6c88571e62e5883f7fdc24d0049b71cbe100a88d6e37b411f33b0e180ef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldstine, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 09:29:59 compute-0 podman[398478]: 2025-10-14 09:29:59.553561866 +0000 UTC m=+0.153238037 container start fda6c88571e62e5883f7fdc24d0049b71cbe100a88d6e37b411f33b0e180ef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldstine, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:29:59 compute-0 podman[398478]: 2025-10-14 09:29:59.557698468 +0000 UTC m=+0.157374639 container attach fda6c88571e62e5883f7fdc24d0049b71cbe100a88d6e37b411f33b0e180ef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 09:29:59 compute-0 ceph-mon[74249]: pgmap v2305: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 14 09:29:59 compute-0 nova_compute[259627]: 2025-10-14 09:29:59.992 2 DEBUG nova.compute.manager [req-73f7d834-28aa-4d83-b6a0-40bbf6e75d47 req-8657a24c-cd52-4ccc-a306-349dd6f7eb30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Received event network-vif-plugged-f7181e0a-4f5b-4d28-8302-c28ab393a348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:29:59 compute-0 nova_compute[259627]: 2025-10-14 09:29:59.993 2 DEBUG oslo_concurrency.lockutils [req-73f7d834-28aa-4d83-b6a0-40bbf6e75d47 req-8657a24c-cd52-4ccc-a306-349dd6f7eb30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:29:59 compute-0 nova_compute[259627]: 2025-10-14 09:29:59.994 2 DEBUG oslo_concurrency.lockutils [req-73f7d834-28aa-4d83-b6a0-40bbf6e75d47 req-8657a24c-cd52-4ccc-a306-349dd6f7eb30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:29:59 compute-0 nova_compute[259627]: 2025-10-14 09:29:59.994 2 DEBUG oslo_concurrency.lockutils [req-73f7d834-28aa-4d83-b6a0-40bbf6e75d47 req-8657a24c-cd52-4ccc-a306-349dd6f7eb30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:29:59 compute-0 nova_compute[259627]: 2025-10-14 09:29:59.994 2 DEBUG nova.compute.manager [req-73f7d834-28aa-4d83-b6a0-40bbf6e75d47 req-8657a24c-cd52-4ccc-a306-349dd6f7eb30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] No waiting events found dispatching network-vif-plugged-f7181e0a-4f5b-4d28-8302-c28ab393a348 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:29:59 compute-0 nova_compute[259627]: 2025-10-14 09:29:59.995 2 WARNING nova.compute.manager [req-73f7d834-28aa-4d83-b6a0-40bbf6e75d47 req-8657a24c-cd52-4ccc-a306-349dd6f7eb30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Received unexpected event network-vif-plugged-f7181e0a-4f5b-4d28-8302-c28ab393a348 for instance with vm_state active and task_state None.
Oct 14 09:30:00 compute-0 nova_compute[259627]: 2025-10-14 09:30:00.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 789 KiB/s rd, 3.9 MiB/s wr, 105 op/s
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]: {
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:         "osd_id": 2,
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:         "type": "bluestore"
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:     },
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:         "osd_id": 1,
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:         "type": "bluestore"
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:     },
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:         "osd_id": 0,
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:         "type": "bluestore"
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]:     }
Oct 14 09:30:00 compute-0 amazing_goldstine[398495]: }
Oct 14 09:30:00 compute-0 systemd[1]: libpod-fda6c88571e62e5883f7fdc24d0049b71cbe100a88d6e37b411f33b0e180ef10.scope: Deactivated successfully.
Oct 14 09:30:00 compute-0 podman[398478]: 2025-10-14 09:30:00.537483859 +0000 UTC m=+1.137160000 container died fda6c88571e62e5883f7fdc24d0049b71cbe100a88d6e37b411f33b0e180ef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldstine, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 14 09:30:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-9aa50bf96cab8b53243fb848f1663858d8f2026b985dbcd060fa1c72bd3d3634-merged.mount: Deactivated successfully.
Oct 14 09:30:00 compute-0 podman[398478]: 2025-10-14 09:30:00.588720195 +0000 UTC m=+1.188396336 container remove fda6c88571e62e5883f7fdc24d0049b71cbe100a88d6e37b411f33b0e180ef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:30:00 compute-0 systemd[1]: libpod-conmon-fda6c88571e62e5883f7fdc24d0049b71cbe100a88d6e37b411f33b0e180ef10.scope: Deactivated successfully.
Oct 14 09:30:00 compute-0 sudo[398372]: pam_unix(sudo:session): session closed for user root
Oct 14 09:30:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:30:00 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:30:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:30:00 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:30:00 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 3c5f605b-7aec-49c8-9a61-610334c8b099 does not exist
Oct 14 09:30:00 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 5b70eb0b-8aa3-4a7f-8c1b-5c3fb3238cea does not exist
Oct 14 09:30:00 compute-0 sudo[398539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:30:00 compute-0 sudo[398539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:30:00 compute-0 sudo[398539]: pam_unix(sudo:session): session closed for user root
Oct 14 09:30:00 compute-0 nova_compute[259627]: 2025-10-14 09:30:00.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:00 compute-0 sudo[398564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:30:00 compute-0 sudo[398564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:30:00 compute-0 sudo[398564]: pam_unix(sudo:session): session closed for user root
Oct 14 09:30:01 compute-0 ceph-mon[74249]: pgmap v2306: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 789 KiB/s rd, 3.9 MiB/s wr, 105 op/s
Oct 14 09:30:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:30:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:30:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 246 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 159 op/s
Oct 14 09:30:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:30:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:30:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:30:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:30:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:30:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:30:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:30:03 compute-0 ceph-mon[74249]: pgmap v2307: 305 pgs: 305 active+clean; 246 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 159 op/s
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.101 2 DEBUG nova.compute.manager [req-a8a50836-7a36-4795-94ac-1bef3336d608 req-1b1238e4-83d8-4eac-8c8b-3663eabf1ba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Received event network-changed-f7181e0a-4f5b-4d28-8302-c28ab393a348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.102 2 DEBUG nova.compute.manager [req-a8a50836-7a36-4795-94ac-1bef3336d608 req-1b1238e4-83d8-4eac-8c8b-3663eabf1ba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Refreshing instance network info cache due to event network-changed-f7181e0a-4f5b-4d28-8302-c28ab393a348. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.102 2 DEBUG oslo_concurrency.lockutils [req-a8a50836-7a36-4795-94ac-1bef3336d608 req-1b1238e4-83d8-4eac-8c8b-3663eabf1ba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.103 2 DEBUG oslo_concurrency.lockutils [req-a8a50836-7a36-4795-94ac-1bef3336d608 req-1b1238e4-83d8-4eac-8c8b-3663eabf1ba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.103 2 DEBUG nova.network.neutron [req-a8a50836-7a36-4795-94ac-1bef3336d608 req-1b1238e4-83d8-4eac-8c8b-3663eabf1ba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Refreshing network info cache for port f7181e0a-4f5b-4d28-8302-c28ab393a348 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.106 2 DEBUG nova.compute.manager [req-0a9fc5ea-8372-48d2-a226-950dea4e2684 req-875c551b-7c8f-43e4-943f-76b45d2262a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received event network-changed-3eaf2ed5-bd76-4749-b8f0-58985c91a040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.107 2 DEBUG nova.compute.manager [req-0a9fc5ea-8372-48d2-a226-950dea4e2684 req-875c551b-7c8f-43e4-943f-76b45d2262a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Refreshing instance network info cache due to event network-changed-3eaf2ed5-bd76-4749-b8f0-58985c91a040. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.107 2 DEBUG oslo_concurrency.lockutils [req-0a9fc5ea-8372-48d2-a226-950dea4e2684 req-875c551b-7c8f-43e4-943f-76b45d2262a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.108 2 DEBUG oslo_concurrency.lockutils [req-0a9fc5ea-8372-48d2-a226-950dea4e2684 req-875c551b-7c8f-43e4-943f-76b45d2262a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.108 2 DEBUG nova.network.neutron [req-0a9fc5ea-8372-48d2-a226-950dea4e2684 req-875c551b-7c8f-43e4-943f-76b45d2262a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Refreshing network info cache for port 3eaf2ed5-bd76-4749-b8f0-58985c91a040 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.243 2 DEBUG oslo_concurrency.lockutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.244 2 DEBUG oslo_concurrency.lockutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.244 2 DEBUG oslo_concurrency.lockutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.244 2 DEBUG oslo_concurrency.lockutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.244 2 DEBUG oslo_concurrency.lockutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.245 2 INFO nova.compute.manager [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Terminating instance
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.246 2 DEBUG nova.compute.manager [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:30:04 compute-0 kernel: tap3eaf2ed5-bd (unregistering): left promiscuous mode
Oct 14 09:30:04 compute-0 NetworkManager[44885]: <info>  [1760434204.3073] device (tap3eaf2ed5-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:30:04 compute-0 ovn_controller[152662]: 2025-10-14T09:30:04Z|01435|binding|INFO|Releasing lport 3eaf2ed5-bd76-4749-b8f0-58985c91a040 from this chassis (sb_readonly=0)
Oct 14 09:30:04 compute-0 ovn_controller[152662]: 2025-10-14T09:30:04Z|01436|binding|INFO|Setting lport 3eaf2ed5-bd76-4749-b8f0-58985c91a040 down in Southbound
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:04 compute-0 ovn_controller[152662]: 2025-10-14T09:30:04Z|01437|binding|INFO|Removing iface tap3eaf2ed5-bd ovn-installed in OVS
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.333 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:c0:28 10.100.0.14'], port_security=['fa:16:3e:8c:c0:28 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a804945-64f0-4b07-8e8b-2ad2beb7451e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9fc4eb48-6270-40e6-85b5-254b29fe464f fbf03768-a7c8-4472-a870-318ef2b37cd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd3c523a-c2e8-4daa-a5c1-e2e6a8d953b6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3eaf2ed5-bd76-4749-b8f0-58985c91a040) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:30:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.334 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3eaf2ed5-bd76-4749-b8f0-58985c91a040 in datapath 7a804945-64f0-4b07-8e8b-2ad2beb7451e unbound from our chassis
Oct 14 09:30:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.335 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7a804945-64f0-4b07-8e8b-2ad2beb7451e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:30:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.336 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[62ce8280-3ee8-4280-a093-837d350e7d4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.337 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e namespace which is not needed anymore
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:04 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000086.scope: Deactivated successfully.
Oct 14 09:30:04 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000086.scope: Consumed 13.037s CPU time.
Oct 14 09:30:04 compute-0 systemd-machined[214636]: Machine qemu-167-instance-00000086 terminated.
Oct 14 09:30:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 246 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 115 op/s
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.491 2 INFO nova.virt.libvirt.driver [-] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Instance destroyed successfully.
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.492 2 DEBUG nova.objects.instance [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'resources' on Instance uuid f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:30:04 compute-0 neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e[397184]: [NOTICE]   (397188) : haproxy version is 2.8.14-c23fe91
Oct 14 09:30:04 compute-0 neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e[397184]: [NOTICE]   (397188) : path to executable is /usr/sbin/haproxy
Oct 14 09:30:04 compute-0 neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e[397184]: [WARNING]  (397188) : Exiting Master process...
Oct 14 09:30:04 compute-0 neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e[397184]: [ALERT]    (397188) : Current worker (397190) exited with code 143 (Terminated)
Oct 14 09:30:04 compute-0 neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e[397184]: [WARNING]  (397188) : All workers exited. Exiting... (0)
Oct 14 09:30:04 compute-0 systemd[1]: libpod-31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c.scope: Deactivated successfully.
Oct 14 09:30:04 compute-0 conmon[397184]: conmon 31b0bf40db3070dd9ef7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c.scope/container/memory.events
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.510 2 DEBUG nova.virt.libvirt.vif [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:29:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-325033662',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-325033662',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=134,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHiiMn1IPbRe1fqI+0HCf//9qD9/3fgeK9VnineHi5Mo2MAqU+EDOJGm1zH6VOW3MfRMImj3kzww8OxL4WA50EnUF8UJVTfKwxadLnhug9+sBPKdPWQa79dlH3frsHdZeQ==',key_name='tempest-TestSecurityGroupsBasicOps-1209305033',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:29:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-2bof4d8p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:29:39Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.510 2 DEBUG nova.network.os_vif_util [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:30:04 compute-0 podman[398613]: 2025-10-14 09:30:04.514530639 +0000 UTC m=+0.077396133 container died 31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.514 2 DEBUG nova.network.os_vif_util [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8c:c0:28,bridge_name='br-int',has_traffic_filtering=True,id=3eaf2ed5-bd76-4749-b8f0-58985c91a040,network=Network(7a804945-64f0-4b07-8e8b-2ad2beb7451e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaf2ed5-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.515 2 DEBUG os_vif [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:c0:28,bridge_name='br-int',has_traffic_filtering=True,id=3eaf2ed5-bd76-4749-b8f0-58985c91a040,network=Network(7a804945-64f0-4b07-8e8b-2ad2beb7451e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaf2ed5-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.519 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3eaf2ed5-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.530 2 INFO os_vif [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:c0:28,bridge_name='br-int',has_traffic_filtering=True,id=3eaf2ed5-bd76-4749-b8f0-58985c91a040,network=Network(7a804945-64f0-4b07-8e8b-2ad2beb7451e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaf2ed5-bd')
Oct 14 09:30:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c-userdata-shm.mount: Deactivated successfully.
Oct 14 09:30:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cdbb75e8a79d144f3413c716738114ae1bf49c3311a5a61a4a5f20e5f113958-merged.mount: Deactivated successfully.
Oct 14 09:30:04 compute-0 podman[398613]: 2025-10-14 09:30:04.558165087 +0000 UTC m=+0.121030601 container cleanup 31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 09:30:04 compute-0 systemd[1]: libpod-conmon-31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c.scope: Deactivated successfully.
Oct 14 09:30:04 compute-0 podman[398665]: 2025-10-14 09:30:04.65300484 +0000 UTC m=+0.062766141 container remove 31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:30:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.668 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fed169c0-934c-48c4-b3e8-035a098702bb]: (4, ('Tue Oct 14 09:30:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e (31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c)\n31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c\nTue Oct 14 09:30:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e (31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c)\n31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.670 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[611778be-cd25-4341-92f9-d1bcccc19b1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.672 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a804945-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:04 compute-0 kernel: tap7a804945-60: left promiscuous mode
Oct 14 09:30:04 compute-0 nova_compute[259627]: 2025-10-14 09:30:04.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.697 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f5264363-608f-418d-9e20-09d7066b064b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.720 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aeab9965-fdc3-4684-97ee-006b6c448eab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.721 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f688e92a-3242-4c1b-ae4e-3386f364548e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.747 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3bf96b21-0f09-43c3-9a99-af6f70eb3d24]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795703, 'reachable_time': 31120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398684, 'error': None, 'target': 'ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d7a804945\x2d64f0\x2d4b07\x2d8e8b\x2d2ad2beb7451e.mount: Deactivated successfully.
Oct 14 09:30:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.750 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:30:04 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.750 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f7c71d-2903-416f-9be4-622700bfd46e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:05 compute-0 nova_compute[259627]: 2025-10-14 09:30:05.034 2 INFO nova.virt.libvirt.driver [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Deleting instance files /var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_del
Oct 14 09:30:05 compute-0 nova_compute[259627]: 2025-10-14 09:30:05.035 2 INFO nova.virt.libvirt.driver [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Deletion of /var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_del complete
Oct 14 09:30:05 compute-0 nova_compute[259627]: 2025-10-14 09:30:05.116 2 INFO nova.compute.manager [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Took 0.87 seconds to destroy the instance on the hypervisor.
Oct 14 09:30:05 compute-0 nova_compute[259627]: 2025-10-14 09:30:05.117 2 DEBUG oslo.service.loopingcall [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:30:05 compute-0 nova_compute[259627]: 2025-10-14 09:30:05.117 2 DEBUG nova.compute.manager [-] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:30:05 compute-0 nova_compute[259627]: 2025-10-14 09:30:05.118 2 DEBUG nova.network.neutron [-] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:30:05 compute-0 nova_compute[259627]: 2025-10-14 09:30:05.262 2 DEBUG nova.network.neutron [req-0a9fc5ea-8372-48d2-a226-950dea4e2684 req-875c551b-7c8f-43e4-943f-76b45d2262a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Updated VIF entry in instance network info cache for port 3eaf2ed5-bd76-4749-b8f0-58985c91a040. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:30:05 compute-0 nova_compute[259627]: 2025-10-14 09:30:05.262 2 DEBUG nova.network.neutron [req-0a9fc5ea-8372-48d2-a226-950dea4e2684 req-875c551b-7c8f-43e4-943f-76b45d2262a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Updating instance_info_cache with network_info: [{"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:30:05 compute-0 nova_compute[259627]: 2025-10-14 09:30:05.290 2 DEBUG oslo_concurrency.lockutils [req-0a9fc5ea-8372-48d2-a226-950dea4e2684 req-875c551b-7c8f-43e4-943f-76b45d2262a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:30:05 compute-0 nova_compute[259627]: 2025-10-14 09:30:05.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:30:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3948722892' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:30:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:30:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3948722892' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:30:05 compute-0 ceph-mon[74249]: pgmap v2308: 305 pgs: 305 active+clean; 246 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 115 op/s
Oct 14 09:30:05 compute-0 nova_compute[259627]: 2025-10-14 09:30:05.712 2 DEBUG nova.network.neutron [-] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:30:05 compute-0 nova_compute[259627]: 2025-10-14 09:30:05.731 2 INFO nova.compute.manager [-] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Took 0.61 seconds to deallocate network for instance.
Oct 14 09:30:05 compute-0 nova_compute[259627]: 2025-10-14 09:30:05.767 2 DEBUG oslo_concurrency.lockutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:05 compute-0 nova_compute[259627]: 2025-10-14 09:30:05.768 2 DEBUG oslo_concurrency.lockutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:05 compute-0 nova_compute[259627]: 2025-10-14 09:30:05.845 2 DEBUG oslo_concurrency.processutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:30:05 compute-0 nova_compute[259627]: 2025-10-14 09:30:05.949 2 DEBUG nova.network.neutron [req-a8a50836-7a36-4795-94ac-1bef3336d608 req-1b1238e4-83d8-4eac-8c8b-3663eabf1ba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Updated VIF entry in instance network info cache for port f7181e0a-4f5b-4d28-8302-c28ab393a348. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:30:05 compute-0 nova_compute[259627]: 2025-10-14 09:30:05.951 2 DEBUG nova.network.neutron [req-a8a50836-7a36-4795-94ac-1bef3336d608 req-1b1238e4-83d8-4eac-8c8b-3663eabf1ba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Updating instance_info_cache with network_info: [{"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:30:05 compute-0 nova_compute[259627]: 2025-10-14 09:30:05.977 2 DEBUG oslo_concurrency.lockutils [req-a8a50836-7a36-4795-94ac-1bef3336d608 req-1b1238e4-83d8-4eac-8c8b-3663eabf1ba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.221 2 DEBUG nova.compute.manager [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received event network-vif-unplugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.222 2 DEBUG oslo_concurrency.lockutils [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.222 2 DEBUG oslo_concurrency.lockutils [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.222 2 DEBUG oslo_concurrency.lockutils [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.223 2 DEBUG nova.compute.manager [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] No waiting events found dispatching network-vif-unplugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.223 2 WARNING nova.compute.manager [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received unexpected event network-vif-unplugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 for instance with vm_state deleted and task_state None.
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.224 2 DEBUG nova.compute.manager [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received event network-vif-plugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.224 2 DEBUG oslo_concurrency.lockutils [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.224 2 DEBUG oslo_concurrency.lockutils [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.225 2 DEBUG oslo_concurrency.lockutils [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.225 2 DEBUG nova.compute.manager [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] No waiting events found dispatching network-vif-plugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.225 2 WARNING nova.compute.manager [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received unexpected event network-vif-plugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 for instance with vm_state deleted and task_state None.
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.225 2 DEBUG nova.compute.manager [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received event network-vif-deleted-3eaf2ed5-bd76-4749-b8f0-58985c91a040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:30:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/244300800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.301 2 DEBUG oslo_concurrency.processutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.309 2 DEBUG nova.compute.provider_tree [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.346 2 DEBUG nova.scheduler.client.report [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.387 2 DEBUG oslo_concurrency.lockutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 167 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 143 op/s
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.492 2 INFO nova.scheduler.client.report [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Deleted allocations for instance f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e
Oct 14 09:30:06 compute-0 nova_compute[259627]: 2025-10-14 09:30:06.566 2 DEBUG oslo_concurrency.lockutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3948722892' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:30:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3948722892' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:30:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/244300800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:30:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:07.045 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:07.046 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:07.046 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:07 compute-0 ceph-mon[74249]: pgmap v2309: 305 pgs: 305 active+clean; 167 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 143 op/s
Oct 14 09:30:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:30:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 167 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 102 op/s
Oct 14 09:30:09 compute-0 nova_compute[259627]: 2025-10-14 09:30:09.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:09 compute-0 ceph-mon[74249]: pgmap v2310: 305 pgs: 305 active+clean; 167 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 102 op/s
Oct 14 09:30:10 compute-0 nova_compute[259627]: 2025-10-14 09:30:10.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 184 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 113 op/s
Oct 14 09:30:10 compute-0 ovn_controller[152662]: 2025-10-14T09:30:10Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:90:3d:7f 10.100.0.11
Oct 14 09:30:10 compute-0 ovn_controller[152662]: 2025-10-14T09:30:10Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:90:3d:7f 10.100.0.11
Oct 14 09:30:11 compute-0 ceph-mon[74249]: pgmap v2311: 305 pgs: 305 active+clean; 184 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 113 op/s
Oct 14 09:30:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 151 op/s
Oct 14 09:30:12 compute-0 podman[398709]: 2025-10-14 09:30:12.722679696 +0000 UTC m=+0.121816871 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 14 09:30:12 compute-0 podman[398708]: 2025-10-14 09:30:12.728342256 +0000 UTC m=+0.130113016 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 09:30:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:30:13 compute-0 ovn_controller[152662]: 2025-10-14T09:30:13Z|01438|binding|INFO|Releasing lport 2a306a4f-b3d3-4c63-8490-e1049c247650 from this chassis (sb_readonly=0)
Oct 14 09:30:13 compute-0 ceph-mon[74249]: pgmap v2312: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 151 op/s
Oct 14 09:30:13 compute-0 nova_compute[259627]: 2025-10-14 09:30:13.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:13 compute-0 nova_compute[259627]: 2025-10-14 09:30:13.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:30:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 14 09:30:14 compute-0 nova_compute[259627]: 2025-10-14 09:30:14.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:15 compute-0 nova_compute[259627]: 2025-10-14 09:30:15.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:15 compute-0 ceph-mon[74249]: pgmap v2313: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 14 09:30:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 14 09:30:17 compute-0 nova_compute[259627]: 2025-10-14 09:30:17.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:17 compute-0 ceph-mon[74249]: pgmap v2314: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 14 09:30:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:30:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:30:19 compute-0 nova_compute[259627]: 2025-10-14 09:30:19.489 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434204.4882314, f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:30:19 compute-0 nova_compute[259627]: 2025-10-14 09:30:19.489 2 INFO nova.compute.manager [-] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] VM Stopped (Lifecycle Event)
Oct 14 09:30:19 compute-0 nova_compute[259627]: 2025-10-14 09:30:19.518 2 DEBUG nova.compute.manager [None req-372005b1-3c92-4d11-ba35-9b3a1ba2530c - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:30:19 compute-0 nova_compute[259627]: 2025-10-14 09:30:19.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:19 compute-0 ceph-mon[74249]: pgmap v2315: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:30:20 compute-0 nova_compute[259627]: 2025-10-14 09:30:20.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.315 2 DEBUG nova.compute.manager [req-fba9a4f9-f3eb-4025-9096-4539191e4d05 req-5c2a29e1-41f5-4239-87e5-830217054859 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Received event network-changed-f7181e0a-4f5b-4d28-8302-c28ab393a348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.315 2 DEBUG nova.compute.manager [req-fba9a4f9-f3eb-4025-9096-4539191e4d05 req-5c2a29e1-41f5-4239-87e5-830217054859 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Refreshing instance network info cache due to event network-changed-f7181e0a-4f5b-4d28-8302-c28ab393a348. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.316 2 DEBUG oslo_concurrency.lockutils [req-fba9a4f9-f3eb-4025-9096-4539191e4d05 req-5c2a29e1-41f5-4239-87e5-830217054859 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.316 2 DEBUG oslo_concurrency.lockutils [req-fba9a4f9-f3eb-4025-9096-4539191e4d05 req-5c2a29e1-41f5-4239-87e5-830217054859 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.317 2 DEBUG nova.network.neutron [req-fba9a4f9-f3eb-4025-9096-4539191e4d05 req-5c2a29e1-41f5-4239-87e5-830217054859 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Refreshing network info cache for port f7181e0a-4f5b-4d28-8302-c28ab393a348 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.367 2 DEBUG oslo_concurrency.lockutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.367 2 DEBUG oslo_concurrency.lockutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.368 2 DEBUG oslo_concurrency.lockutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.368 2 DEBUG oslo_concurrency.lockutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.369 2 DEBUG oslo_concurrency.lockutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.371 2 INFO nova.compute.manager [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Terminating instance
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.373 2 DEBUG nova.compute.manager [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:21 compute-0 kernel: tapf7181e0a-4f (unregistering): left promiscuous mode
Oct 14 09:30:21 compute-0 NetworkManager[44885]: <info>  [1760434221.4592] device (tapf7181e0a-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:21 compute-0 ovn_controller[152662]: 2025-10-14T09:30:21Z|01439|binding|INFO|Releasing lport f7181e0a-4f5b-4d28-8302-c28ab393a348 from this chassis (sb_readonly=0)
Oct 14 09:30:21 compute-0 ovn_controller[152662]: 2025-10-14T09:30:21Z|01440|binding|INFO|Setting lport f7181e0a-4f5b-4d28-8302-c28ab393a348 down in Southbound
Oct 14 09:30:21 compute-0 ovn_controller[152662]: 2025-10-14T09:30:21Z|01441|binding|INFO|Removing iface tapf7181e0a-4f ovn-installed in OVS
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.482 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:3d:7f 10.100.0.11 2001:db8::f816:3eff:fe90:3d7f'], port_security=['fa:16:3e:90:3d:7f 10.100.0.11 2001:db8::f816:3eff:fe90:3d7f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fe90:3d7f/64', 'neutron:device_id': 'edf4b59a-fe1b-48ae-92a3-7bec88fc7491', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '598825b2-c17a-4454-af22-d97e9570639b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=426ad543-f963-4015-b706-28bdf047c321, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f7181e0a-4f5b-4d28-8302-c28ab393a348) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:30:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.485 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f7181e0a-4f5b-4d28-8302-c28ab393a348 in datapath 3cbcf7e5-ac17-454b-893d-3fda266aa395 unbound from our chassis
Oct 14 09:30:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.487 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3cbcf7e5-ac17-454b-893d-3fda266aa395
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.514 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d577d5ee-cb55-4b57-80f7-d86838d083c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:21 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000087.scope: Deactivated successfully.
Oct 14 09:30:21 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000087.scope: Consumed 13.241s CPU time.
Oct 14 09:30:21 compute-0 systemd-machined[214636]: Machine qemu-168-instance-00000087 terminated.
Oct 14 09:30:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.562 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[38b032a7-d2be-4250-b737-836ab07533be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.566 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f01e51c4-c188-40ab-99ff-998dfc3c1fbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.622 2 INFO nova.virt.libvirt.driver [-] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Instance destroyed successfully.
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.623 2 DEBUG nova.objects.instance [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid edf4b59a-fe1b-48ae-92a3-7bec88fc7491 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:30:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.626 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c044161a-fe4c-4588-967f-d4847cef4766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.642 2 DEBUG nova.virt.libvirt.vif [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:29:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1997112513',display_name='tempest-TestGettingAddress-server-1997112513',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1997112513',id=135,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3NS95m6mTTkHzJJUJYRW56WG9z1F4s978ToWhMf7sPvH/Oc2BrnynBRS1TQVQWP6VVqaEePs7ictqhZIZyqMTpbO+UGDq3FfVttcWsPDDYdiFqglsH1qUqUZlXlWfhiQ==',key_name='tempest-TestGettingAddress-313667449',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:29:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-viso8tmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:29:58Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=edf4b59a-fe1b-48ae-92a3-7bec88fc7491,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.643 2 DEBUG nova.network.os_vif_util [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.644 2 DEBUG nova.network.os_vif_util [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:3d:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7181e0a-4f5b-4d28-8302-c28ab393a348,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7181e0a-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.645 2 DEBUG os_vif [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:3d:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7181e0a-4f5b-4d28-8302-c28ab393a348,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7181e0a-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf7181e0a-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:30:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.660 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e72025-b0c3-4371-a0e8-09b78e79e92f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3cbcf7e5-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:43:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794277, 'reachable_time': 28589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 32, 'inoctets': 2464, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 32, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2464, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 32, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398770, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.714 2 INFO os_vif [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:3d:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7181e0a-4f5b-4d28-8302-c28ab393a348,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7181e0a-4f')
Oct 14 09:30:21 compute-0 ceph-mon[74249]: pgmap v2316: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:30:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.724 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fd01d993-e84c-441a-8942-49768b006d66]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3cbcf7e5-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 794288, 'tstamp': 794288}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398772, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3cbcf7e5-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 794292, 'tstamp': 794292}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398772, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.726 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cbcf7e5-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:30:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.730 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cbcf7e5-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:30:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.730 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:30:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.731 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3cbcf7e5-a0, col_values=(('external_ids', {'iface-id': '2a306a4f-b3d3-4c63-8490-e1049c247650'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:30:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.731 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:30:21 compute-0 nova_compute[259627]: 2025-10-14 09:30:21.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:22 compute-0 nova_compute[259627]: 2025-10-14 09:30:22.115 2 INFO nova.virt.libvirt.driver [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Deleting instance files /var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491_del
Oct 14 09:30:22 compute-0 nova_compute[259627]: 2025-10-14 09:30:22.118 2 INFO nova.virt.libvirt.driver [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Deletion of /var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491_del complete
Oct 14 09:30:22 compute-0 nova_compute[259627]: 2025-10-14 09:30:22.187 2 INFO nova.compute.manager [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 14 09:30:22 compute-0 nova_compute[259627]: 2025-10-14 09:30:22.188 2 DEBUG oslo.service.loopingcall [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:30:22 compute-0 nova_compute[259627]: 2025-10-14 09:30:22.188 2 DEBUG nova.compute.manager [-] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:30:22 compute-0 nova_compute[259627]: 2025-10-14 09:30:22.189 2 DEBUG nova.network.neutron [-] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:30:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 267 KiB/s rd, 1012 KiB/s wr, 53 op/s
Oct 14 09:30:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:30:23 compute-0 nova_compute[259627]: 2025-10-14 09:30:23.161 2 DEBUG nova.network.neutron [-] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:30:23 compute-0 nova_compute[259627]: 2025-10-14 09:30:23.187 2 INFO nova.compute.manager [-] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Took 1.00 seconds to deallocate network for instance.
Oct 14 09:30:23 compute-0 nova_compute[259627]: 2025-10-14 09:30:23.240 2 DEBUG oslo_concurrency.lockutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:23 compute-0 nova_compute[259627]: 2025-10-14 09:30:23.241 2 DEBUG oslo_concurrency.lockutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:23 compute-0 nova_compute[259627]: 2025-10-14 09:30:23.346 2 DEBUG oslo_concurrency.processutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:30:23 compute-0 nova_compute[259627]: 2025-10-14 09:30:23.518 2 DEBUG nova.compute.manager [req-7f27f3c9-3d45-4bfa-869f-376a91dcf49b req-801c7f03-99dc-439a-ae63-a0ce8c25d45d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Received event network-vif-deleted-f7181e0a-4f5b-4d28-8302-c28ab393a348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:23 compute-0 nova_compute[259627]: 2025-10-14 09:30:23.533 2 DEBUG nova.network.neutron [req-fba9a4f9-f3eb-4025-9096-4539191e4d05 req-5c2a29e1-41f5-4239-87e5-830217054859 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Updated VIF entry in instance network info cache for port f7181e0a-4f5b-4d28-8302-c28ab393a348. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:30:23 compute-0 nova_compute[259627]: 2025-10-14 09:30:23.534 2 DEBUG nova.network.neutron [req-fba9a4f9-f3eb-4025-9096-4539191e4d05 req-5c2a29e1-41f5-4239-87e5-830217054859 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Updating instance_info_cache with network_info: [{"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:30:23 compute-0 nova_compute[259627]: 2025-10-14 09:30:23.553 2 DEBUG oslo_concurrency.lockutils [req-fba9a4f9-f3eb-4025-9096-4539191e4d05 req-5c2a29e1-41f5-4239-87e5-830217054859 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:30:23 compute-0 ceph-mon[74249]: pgmap v2317: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 267 KiB/s rd, 1012 KiB/s wr, 53 op/s
Oct 14 09:30:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:30:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2827997056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:30:23 compute-0 nova_compute[259627]: 2025-10-14 09:30:23.815 2 DEBUG oslo_concurrency.processutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:30:23 compute-0 nova_compute[259627]: 2025-10-14 09:30:23.822 2 DEBUG nova.compute.provider_tree [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:30:23 compute-0 nova_compute[259627]: 2025-10-14 09:30:23.839 2 DEBUG nova.scheduler.client.report [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:30:23 compute-0 nova_compute[259627]: 2025-10-14 09:30:23.861 2 DEBUG oslo_concurrency.lockutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:23 compute-0 nova_compute[259627]: 2025-10-14 09:30:23.889 2 INFO nova.scheduler.client.report [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance edf4b59a-fe1b-48ae-92a3-7bec88fc7491
Oct 14 09:30:23 compute-0 nova_compute[259627]: 2025-10-14 09:30:23.965 2 DEBUG oslo_concurrency.lockutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 14 09:30:24 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2827997056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.450 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.451 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.486 2 DEBUG nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.541 2 DEBUG oslo_concurrency.lockutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.542 2 DEBUG oslo_concurrency.lockutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.543 2 DEBUG oslo_concurrency.lockutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.543 2 DEBUG oslo_concurrency.lockutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.544 2 DEBUG oslo_concurrency.lockutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.546 2 INFO nova.compute.manager [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Terminating instance
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.548 2 DEBUG nova.compute.manager [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:30:25 compute-0 kernel: tap312d35c6-7a (unregistering): left promiscuous mode
Oct 14 09:30:25 compute-0 NetworkManager[44885]: <info>  [1760434225.6147] device (tap312d35c6-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.623 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.623 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:25 compute-0 ovn_controller[152662]: 2025-10-14T09:30:25Z|01442|binding|INFO|Releasing lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a from this chassis (sb_readonly=0)
Oct 14 09:30:25 compute-0 ovn_controller[152662]: 2025-10-14T09:30:25Z|01443|binding|INFO|Setting lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a down in Southbound
Oct 14 09:30:25 compute-0 ovn_controller[152662]: 2025-10-14T09:30:25Z|01444|binding|INFO|Removing iface tap312d35c6-7a ovn-installed in OVS
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.635 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9'], port_security=['fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8::f816:3eff:fe40:44e9/64', 'neutron:device_id': '9bab7e53-30b3-4cd0-ad07-3cc9b5c05492', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '598825b2-c17a-4454-af22-d97e9570639b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=426ad543-f963-4015-b706-28bdf047c321, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=312d35c6-7aa5-4056-b4ed-679cf0e1a12a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:30:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.636 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a in datapath 3cbcf7e5-ac17-454b-893d-3fda266aa395 unbound from our chassis
Oct 14 09:30:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.637 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3cbcf7e5-ac17-454b-893d-3fda266aa395, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:30:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.638 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[52110287-8618-4d68-9eb0-56698faae691]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.639 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395 namespace which is not needed anymore
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.652 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.652 2 INFO nova.compute.claims [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.685 2 DEBUG nova.compute.manager [req-e1e06327-cdf3-4a61-bf4e-510103b0e7dd req-9caab46b-e75c-43e0-a4d7-818f357a9427 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-changed-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.686 2 DEBUG nova.compute.manager [req-e1e06327-cdf3-4a61-bf4e-510103b0e7dd req-9caab46b-e75c-43e0-a4d7-818f357a9427 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Refreshing instance network info cache due to event network-changed-312d35c6-7aa5-4056-b4ed-679cf0e1a12a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.687 2 DEBUG oslo_concurrency.lockutils [req-e1e06327-cdf3-4a61-bf4e-510103b0e7dd req-9caab46b-e75c-43e0-a4d7-818f357a9427 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.687 2 DEBUG oslo_concurrency.lockutils [req-e1e06327-cdf3-4a61-bf4e-510103b0e7dd req-9caab46b-e75c-43e0-a4d7-818f357a9427 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.688 2 DEBUG nova.network.neutron [req-e1e06327-cdf3-4a61-bf4e-510103b0e7dd req-9caab46b-e75c-43e0-a4d7-818f357a9427 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Refreshing network info cache for port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:30:25 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000085.scope: Deactivated successfully.
Oct 14 09:30:25 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000085.scope: Consumed 14.820s CPU time.
Oct 14 09:30:25 compute-0 systemd-machined[214636]: Machine qemu-166-instance-00000085 terminated.
Oct 14 09:30:25 compute-0 ceph-mon[74249]: pgmap v2318: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 14 09:30:25 compute-0 podman[398824]: 2025-10-14 09:30:25.762514395 +0000 UTC m=+0.067622362 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:30:25 compute-0 kernel: tap312d35c6-7a: entered promiscuous mode
Oct 14 09:30:25 compute-0 NetworkManager[44885]: <info>  [1760434225.7698] manager: (tap312d35c6-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/588)
Oct 14 09:30:25 compute-0 ovn_controller[152662]: 2025-10-14T09:30:25Z|01445|binding|INFO|Claiming lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a for this chassis.
Oct 14 09:30:25 compute-0 ovn_controller[152662]: 2025-10-14T09:30:25Z|01446|binding|INFO|312d35c6-7aa5-4056-b4ed-679cf0e1a12a: Claiming fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9
Oct 14 09:30:25 compute-0 systemd-udevd[398821]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:25 compute-0 kernel: tap312d35c6-7a (unregistering): left promiscuous mode
Oct 14 09:30:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.784 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9'], port_security=['fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8::f816:3eff:fe40:44e9/64', 'neutron:device_id': '9bab7e53-30b3-4cd0-ad07-3cc9b5c05492', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '598825b2-c17a-4454-af22-d97e9570639b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=426ad543-f963-4015-b706-28bdf047c321, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=312d35c6-7aa5-4056-b4ed-679cf0e1a12a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:30:25 compute-0 ovn_controller[152662]: 2025-10-14T09:30:25Z|01447|binding|INFO|Setting lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a ovn-installed in OVS
Oct 14 09:30:25 compute-0 ovn_controller[152662]: 2025-10-14T09:30:25Z|01448|binding|INFO|Setting lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a up in Southbound
Oct 14 09:30:25 compute-0 ovn_controller[152662]: 2025-10-14T09:30:25Z|01449|binding|INFO|Releasing lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a from this chassis (sb_readonly=1)
Oct 14 09:30:25 compute-0 ovn_controller[152662]: 2025-10-14T09:30:25Z|01450|binding|INFO|Removing iface tap312d35c6-7a ovn-installed in OVS
Oct 14 09:30:25 compute-0 ovn_controller[152662]: 2025-10-14T09:30:25Z|01451|if_status|INFO|Dropped 2 log messages in last 162 seconds (most recently, 162 seconds ago) due to excessive rate
Oct 14 09:30:25 compute-0 ovn_controller[152662]: 2025-10-14T09:30:25Z|01452|if_status|INFO|Not setting lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a down as sb is readonly
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:25 compute-0 ovn_controller[152662]: 2025-10-14T09:30:25Z|01453|binding|INFO|Releasing lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a from this chassis (sb_readonly=0)
Oct 14 09:30:25 compute-0 ovn_controller[152662]: 2025-10-14T09:30:25Z|01454|binding|INFO|Setting lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a down in Southbound
Oct 14 09:30:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.808 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9'], port_security=['fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8::f816:3eff:fe40:44e9/64', 'neutron:device_id': '9bab7e53-30b3-4cd0-ad07-3cc9b5c05492', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '598825b2-c17a-4454-af22-d97e9570639b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=426ad543-f963-4015-b706-28bdf047c321, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=312d35c6-7aa5-4056-b4ed-679cf0e1a12a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.810 2 INFO nova.virt.libvirt.driver [-] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Instance destroyed successfully.
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.811 2 DEBUG nova.objects.instance [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:25 compute-0 neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395[396675]: [NOTICE]   (396685) : haproxy version is 2.8.14-c23fe91
Oct 14 09:30:25 compute-0 neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395[396675]: [NOTICE]   (396685) : path to executable is /usr/sbin/haproxy
Oct 14 09:30:25 compute-0 neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395[396675]: [WARNING]  (396685) : Exiting Master process...
Oct 14 09:30:25 compute-0 neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395[396675]: [WARNING]  (396685) : Exiting Master process...
Oct 14 09:30:25 compute-0 neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395[396675]: [ALERT]    (396685) : Current worker (396691) exited with code 143 (Terminated)
Oct 14 09:30:25 compute-0 neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395[396675]: [WARNING]  (396685) : All workers exited. Exiting... (0)
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.821 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:30:25 compute-0 systemd[1]: libpod-c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d.scope: Deactivated successfully.
Oct 14 09:30:25 compute-0 podman[398859]: 2025-10-14 09:30:25.834220657 +0000 UTC m=+0.055490842 container died c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.858 2 DEBUG nova.virt.libvirt.vif [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:29:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-766373181',display_name='tempest-TestGettingAddress-server-766373181',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-766373181',id=133,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3NS95m6mTTkHzJJUJYRW56WG9z1F4s978ToWhMf7sPvH/Oc2BrnynBRS1TQVQWP6VVqaEePs7ictqhZIZyqMTpbO+UGDq3FfVttcWsPDDYdiFqglsH1qUqUZlXlWfhiQ==',key_name='tempest-TestGettingAddress-313667449',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:29:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-kdy1goa2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:29:25Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=9bab7e53-30b3-4cd0-ad07-3cc9b5c05492,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.860 2 DEBUG nova.network.os_vif_util [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.861 2 DEBUG nova.network.os_vif_util [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:44:e9,bridge_name='br-int',has_traffic_filtering=True,id=312d35c6-7aa5-4056-b4ed-679cf0e1a12a,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d35c6-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.861 2 DEBUG os_vif [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:44:e9,bridge_name='br-int',has_traffic_filtering=True,id=312d35c6-7aa5-4056-b4ed-679cf0e1a12a,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d35c6-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.864 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap312d35c6-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d-userdata-shm.mount: Deactivated successfully.
Oct 14 09:30:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba749212af37e81d7c4992e139c5f78aa6b1ef87bc093a5fe8a54df954b3324e-merged.mount: Deactivated successfully.
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:25 compute-0 podman[398859]: 2025-10-14 09:30:25.880593473 +0000 UTC m=+0.101863628 container cleanup c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.881 2 INFO os_vif [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:44:e9,bridge_name='br-int',has_traffic_filtering=True,id=312d35c6-7aa5-4056-b4ed-679cf0e1a12a,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d35c6-7a')
Oct 14 09:30:25 compute-0 systemd[1]: libpod-conmon-c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d.scope: Deactivated successfully.
Oct 14 09:30:25 compute-0 podman[398860]: 2025-10-14 09:30:25.929898791 +0000 UTC m=+0.132947756 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 09:30:25 compute-0 podman[398916]: 2025-10-14 09:30:25.948601983 +0000 UTC m=+0.046730595 container remove c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:30:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.956 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[61ca7753-941c-4d37-927d-a3ce332ae3e1]: (4, ('Tue Oct 14 09:30:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395 (c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d)\nc291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d\nTue Oct 14 09:30:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395 (c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d)\nc291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.961 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2a030cb9-98b6-4622-b006-2dfedd01aed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.961 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cbcf7e5-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:25 compute-0 nova_compute[259627]: 2025-10-14 09:30:25.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:25 compute-0 kernel: tap3cbcf7e5-a0: left promiscuous mode
Oct 14 09:30:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.984 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[40397486-df2e-4ee3-874f-c154f2858f2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.015 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[71ade00c-1180-4df8-aeaa-21d47061d975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.017 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[70c67373-26e3-466a-a34c-df055a8ec49a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.033 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[34f925f8-d714-46b5-873c-1ed3398c3631]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794269, 'reachable_time': 38248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398969, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.035 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:30:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.035 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[670a9662-aa4b-444c-b041-756be4c15c0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d3cbcf7e5\x2dac17\x2d454b\x2d893d\x2d3fda266aa395.mount: Deactivated successfully.
Oct 14 09:30:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.036 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a in datapath 3cbcf7e5-ac17-454b-893d-3fda266aa395 unbound from our chassis
Oct 14 09:30:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.038 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3cbcf7e5-ac17-454b-893d-3fda266aa395, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:30:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.039 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b2938bd7-c185-4452-85ab-b022626487bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.040 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a in datapath 3cbcf7e5-ac17-454b-893d-3fda266aa395 unbound from our chassis
Oct 14 09:30:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.041 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3cbcf7e5-ac17-454b-893d-3fda266aa395, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:30:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.042 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb2f3fc-dd5a-4eef-b217-2effbf381304]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.149 2 DEBUG nova.compute.manager [req-b2de543b-cc7a-46bd-a35e-505dd00f90d2 req-6136fb3b-ea53-4a4c-856a-2a40c44df227 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-unplugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.151 2 DEBUG oslo_concurrency.lockutils [req-b2de543b-cc7a-46bd-a35e-505dd00f90d2 req-6136fb3b-ea53-4a4c-856a-2a40c44df227 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.152 2 DEBUG oslo_concurrency.lockutils [req-b2de543b-cc7a-46bd-a35e-505dd00f90d2 req-6136fb3b-ea53-4a4c-856a-2a40c44df227 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.153 2 DEBUG oslo_concurrency.lockutils [req-b2de543b-cc7a-46bd-a35e-505dd00f90d2 req-6136fb3b-ea53-4a4c-856a-2a40c44df227 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.153 2 DEBUG nova.compute.manager [req-b2de543b-cc7a-46bd-a35e-505dd00f90d2 req-6136fb3b-ea53-4a4c-856a-2a40c44df227 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] No waiting events found dispatching network-vif-unplugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.154 2 DEBUG nova.compute.manager [req-b2de543b-cc7a-46bd-a35e-505dd00f90d2 req-6136fb3b-ea53-4a4c-856a-2a40c44df227 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-unplugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:30:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:30:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2686686302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.277 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.285 2 DEBUG nova.compute.provider_tree [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.303 2 DEBUG nova.scheduler.client.report [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.316 2 INFO nova.virt.libvirt.driver [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Deleting instance files /var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_del
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.318 2 INFO nova.virt.libvirt.driver [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Deletion of /var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_del complete
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.345 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.345 2 DEBUG nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.379 2 INFO nova.compute.manager [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.380 2 DEBUG oslo.service.loopingcall [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.380 2 DEBUG nova.compute.manager [-] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.380 2 DEBUG nova.network.neutron [-] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.398 2 DEBUG nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.398 2 DEBUG nova.network.neutron [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:30:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 121 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 24 KiB/s wr, 30 op/s
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.426 2 INFO nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.448 2 DEBUG nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.527 2 DEBUG nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.529 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.530 2 INFO nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Creating image(s)
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.560 2 DEBUG nova.storage.rbd_utils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.587 2 DEBUG nova.storage.rbd_utils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.614 2 DEBUG nova.storage.rbd_utils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.618 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.678 2 DEBUG nova.policy [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20f3546ab30e42b5b641f67780316750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f754bf649a2404fa8dee732f5aab36e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.726 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.727 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.728 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.728 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:26 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2686686302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.762 2 DEBUG nova.storage.rbd_utils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:30:26 compute-0 nova_compute[259627]: 2025-10-14 09:30:26.767 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.076 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.164 2 DEBUG nova.network.neutron [-] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.172 2 DEBUG nova.storage.rbd_utils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] resizing rbd image 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.216 2 INFO nova.compute.manager [-] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Took 0.84 seconds to deallocate network for instance.
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.293 2 DEBUG oslo_concurrency.lockutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.293 2 DEBUG oslo_concurrency.lockutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.300 2 DEBUG nova.objects.instance [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'migration_context' on Instance uuid 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.322 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.323 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Ensure instance console log exists: /var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.323 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.324 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.324 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.385 2 DEBUG oslo_concurrency.processutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.492 2 DEBUG nova.network.neutron [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Successfully created port: 2f6bb222-680e-469f-83d5-517735604bb0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.640 2 DEBUG nova.network.neutron [req-e1e06327-cdf3-4a61-bf4e-510103b0e7dd req-9caab46b-e75c-43e0-a4d7-818f357a9427 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Updated VIF entry in instance network info cache for port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.641 2 DEBUG nova.network.neutron [req-e1e06327-cdf3-4a61-bf4e-510103b0e7dd req-9caab46b-e75c-43e0-a4d7-818f357a9427 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Updating instance_info_cache with network_info: [{"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.660 2 DEBUG oslo_concurrency.lockutils [req-e1e06327-cdf3-4a61-bf4e-510103b0e7dd req-9caab46b-e75c-43e0-a4d7-818f357a9427 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:30:27 compute-0 ceph-mon[74249]: pgmap v2319: 305 pgs: 305 active+clean; 121 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 24 KiB/s wr, 30 op/s
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.817 2 DEBUG nova.compute.manager [req-3b4dd3a2-9665-478b-8e52-4b37e5d5f87d req-f610e572-1305-479d-9353-1b76cb6e7a45 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-deleted-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.818 2 INFO nova.compute.manager [req-3b4dd3a2-9665-478b-8e52-4b37e5d5f87d req-f610e572-1305-479d-9353-1b76cb6e7a45 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Neutron deleted interface 312d35c6-7aa5-4056-b4ed-679cf0e1a12a; detaching it from the instance and deleting it from the info cache
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.818 2 DEBUG nova.network.neutron [req-3b4dd3a2-9665-478b-8e52-4b37e5d5f87d req-f610e572-1305-479d-9353-1b76cb6e7a45 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.842 2 DEBUG nova.compute.manager [req-3b4dd3a2-9665-478b-8e52-4b37e5d5f87d req-f610e572-1305-479d-9353-1b76cb6e7a45 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Detach interface failed, port_id=312d35c6-7aa5-4056-b4ed-679cf0e1a12a, reason: Instance 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 09:30:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:30:27 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3382039413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.887 2 DEBUG oslo_concurrency.processutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.894 2 DEBUG nova.compute.provider_tree [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.909 2 DEBUG nova.scheduler.client.report [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.932 2 DEBUG oslo_concurrency.lockutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:27 compute-0 nova_compute[259627]: 2025-10-14 09:30:27.973 2 INFO nova.scheduler.client.report [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.045 2 DEBUG oslo_concurrency.lockutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.299 2 DEBUG nova.network.neutron [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Successfully updated port: 2f6bb222-680e-469f-83d5-517735604bb0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.305 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.306 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.306 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.307 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.307 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] No waiting events found dispatching network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.307 2 WARNING nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received unexpected event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a for instance with vm_state deleted and task_state None.
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.307 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.308 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.308 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.308 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.308 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] No waiting events found dispatching network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.309 2 WARNING nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received unexpected event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a for instance with vm_state deleted and task_state None.
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.309 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.309 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.309 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.310 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.310 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] No waiting events found dispatching network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.310 2 WARNING nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received unexpected event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a for instance with vm_state deleted and task_state None.
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.310 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-unplugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.311 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.311 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.311 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.311 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] No waiting events found dispatching network-vif-unplugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.312 2 WARNING nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received unexpected event network-vif-unplugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a for instance with vm_state deleted and task_state None.
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.312 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.312 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.312 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.313 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.313 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] No waiting events found dispatching network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.313 2 WARNING nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received unexpected event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a for instance with vm_state deleted and task_state None.
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.322 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.322 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquired lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.322 2 DEBUG nova.network.neutron [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:30:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 121 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 24 KiB/s wr, 30 op/s
Oct 14 09:30:28 compute-0 nova_compute[259627]: 2025-10-14 09:30:28.465 2 DEBUG nova.network.neutron [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:30:28 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3382039413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.459 2 DEBUG nova.network.neutron [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updating instance_info_cache with network_info: [{"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.482 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Releasing lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.482 2 DEBUG nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Instance network_info: |[{"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.488 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Start _get_guest_xml network_info=[{"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.494 2 WARNING nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.500 2 DEBUG nova.virt.libvirt.host [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.500 2 DEBUG nova.virt.libvirt.host [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.503 2 DEBUG nova.virt.libvirt.host [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.503 2 DEBUG nova.virt.libvirt.host [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.504 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.504 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.504 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.504 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.505 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.505 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.505 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.505 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.505 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.506 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.506 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.507 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.509 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:30:29 compute-0 ceph-mon[74249]: pgmap v2320: 305 pgs: 305 active+clean; 121 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 24 KiB/s wr, 30 op/s
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.959 2 DEBUG nova.compute.manager [req-094fe7fa-e637-42cc-ab5d-89243a325c52 req-28cb9092-f73b-4f23-85f8-95985ac8336c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received event network-changed-2f6bb222-680e-469f-83d5-517735604bb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.959 2 DEBUG nova.compute.manager [req-094fe7fa-e637-42cc-ab5d-89243a325c52 req-28cb9092-f73b-4f23-85f8-95985ac8336c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Refreshing instance network info cache due to event network-changed-2f6bb222-680e-469f-83d5-517735604bb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.960 2 DEBUG oslo_concurrency.lockutils [req-094fe7fa-e637-42cc-ab5d-89243a325c52 req-28cb9092-f73b-4f23-85f8-95985ac8336c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.960 2 DEBUG oslo_concurrency.lockutils [req-094fe7fa-e637-42cc-ab5d-89243a325c52 req-28cb9092-f73b-4f23-85f8-95985ac8336c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.961 2 DEBUG nova.network.neutron [req-094fe7fa-e637-42cc-ab5d-89243a325c52 req-28cb9092-f73b-4f23-85f8-95985ac8336c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Refreshing network info cache for port 2f6bb222-680e-469f-83d5-517735604bb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:30:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:30:29 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1587058374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:30:29 compute-0 nova_compute[259627]: 2025-10-14 09:30:29.987 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.010 2 DEBUG nova.storage.rbd_utils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.015 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 124 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 746 KiB/s wr, 37 op/s
Oct 14 09:30:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:30:30 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1728374326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.462 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.464 2 DEBUG nova.virt.libvirt.vif [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-541480337',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-541480337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=136,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEydbvfC6BL8JKAwPsMYw33jsGt7x8pmldi0JdcLroJxQM9Cv9y2WyT4/kdGyYCi1f6St8cX/paY9O5VNQjPEflw/+0a0KCs3SETvjwSZyH4RtpOVgJ2yOdUlm6DCl0mg==',key_name='tempest-TestSecurityGroupsBasicOps-55432425',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-jzso6cvr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:30:26Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.464 2 DEBUG nova.network.os_vif_util [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.465 2 DEBUG nova.network.os_vif_util [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:bb:97,bridge_name='br-int',has_traffic_filtering=True,id=2f6bb222-680e-469f-83d5-517735604bb0,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6bb222-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.466 2 DEBUG nova.objects.instance [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'pci_devices' on Instance uuid 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.515 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:30:30 compute-0 nova_compute[259627]:   <uuid>4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4</uuid>
Oct 14 09:30:30 compute-0 nova_compute[259627]:   <name>instance-00000088</name>
Oct 14 09:30:30 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:30:30 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:30:30 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-541480337</nova:name>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:30:29</nova:creationTime>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:30:30 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:30:30 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:30:30 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:30:30 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:30:30 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:30:30 compute-0 nova_compute[259627]:         <nova:user uuid="20f3546ab30e42b5b641f67780316750">tempest-TestSecurityGroupsBasicOps-1327646173-project-member</nova:user>
Oct 14 09:30:30 compute-0 nova_compute[259627]:         <nova:project uuid="5f754bf649a2404fa8dee732f5aab36e">tempest-TestSecurityGroupsBasicOps-1327646173</nova:project>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:30:30 compute-0 nova_compute[259627]:         <nova:port uuid="2f6bb222-680e-469f-83d5-517735604bb0">
Oct 14 09:30:30 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:30:30 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:30:30 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <system>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <entry name="serial">4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4</entry>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <entry name="uuid">4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4</entry>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     </system>
Oct 14 09:30:30 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:30:30 compute-0 nova_compute[259627]:   <os>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:   </os>
Oct 14 09:30:30 compute-0 nova_compute[259627]:   <features>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:   </features>
Oct 14 09:30:30 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:30:30 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:30:30 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk">
Oct 14 09:30:30 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       </source>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:30:30 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk.config">
Oct 14 09:30:30 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       </source>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:30:30 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:57:bb:97"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <target dev="tap2f6bb222-68"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4/console.log" append="off"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <video>
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     </video>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:30:30 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:30:30 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:30:30 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:30:30 compute-0 nova_compute[259627]: </domain>
Oct 14 09:30:30 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.517 2 DEBUG nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Preparing to wait for external event network-vif-plugged-2f6bb222-680e-469f-83d5-517735604bb0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.517 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.517 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.517 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.518 2 DEBUG nova.virt.libvirt.vif [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-541480337',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-541480337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=136,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEydbvfC6BL8JKAwPsMYw33jsGt7x8pmldi0JdcLroJxQM9Cv9y2WyT4/kdGyYCi1f6St8cX/paY9O5VNQjPEflw/+0a0KCs3SETvjwSZyH4RtpOVgJ2yOdUlm6DCl0mg==',key_name='tempest-TestSecurityGroupsBasicOps-55432425',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-jzso6cvr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:30:26Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.518 2 DEBUG nova.network.os_vif_util [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.519 2 DEBUG nova.network.os_vif_util [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:bb:97,bridge_name='br-int',has_traffic_filtering=True,id=2f6bb222-680e-469f-83d5-517735604bb0,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6bb222-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.519 2 DEBUG os_vif [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:bb:97,bridge_name='br-int',has_traffic_filtering=True,id=2f6bb222-680e-469f-83d5-517735604bb0,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6bb222-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.524 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f6bb222-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.524 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2f6bb222-68, col_values=(('external_ids', {'iface-id': '2f6bb222-680e-469f-83d5-517735604bb0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:bb:97', 'vm-uuid': '4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:30:30 compute-0 NetworkManager[44885]: <info>  [1760434230.5287] manager: (tap2f6bb222-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/589)
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.532 2 INFO os_vif [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:bb:97,bridge_name='br-int',has_traffic_filtering=True,id=2f6bb222-680e-469f-83d5-517735604bb0,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6bb222-68')
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.600 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.600 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.601 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No VIF found with MAC fa:16:3e:57:bb:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.602 2 INFO nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Using config drive
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.634 2 DEBUG nova.storage.rbd_utils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:30:30 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1587058374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:30:30 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1728374326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.985 2 INFO nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Creating config drive at /var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4/disk.config
Oct 14 09:30:30 compute-0 nova_compute[259627]: 2025-10-14 09:30:30.990 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu3ihbsce execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:30:31 compute-0 nova_compute[259627]: 2025-10-14 09:30:31.160 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu3ihbsce" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:30:31 compute-0 nova_compute[259627]: 2025-10-14 09:30:31.182 2 DEBUG nova.storage.rbd_utils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:30:31 compute-0 nova_compute[259627]: 2025-10-14 09:30:31.185 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4/disk.config 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:30:31 compute-0 nova_compute[259627]: 2025-10-14 09:30:31.372 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4/disk.config 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:30:31 compute-0 nova_compute[259627]: 2025-10-14 09:30:31.373 2 INFO nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Deleting local config drive /var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4/disk.config because it was imported into RBD.
Oct 14 09:30:31 compute-0 kernel: tap2f6bb222-68: entered promiscuous mode
Oct 14 09:30:31 compute-0 NetworkManager[44885]: <info>  [1760434231.4395] manager: (tap2f6bb222-68): new Tun device (/org/freedesktop/NetworkManager/Devices/590)
Oct 14 09:30:31 compute-0 ovn_controller[152662]: 2025-10-14T09:30:31Z|01455|binding|INFO|Claiming lport 2f6bb222-680e-469f-83d5-517735604bb0 for this chassis.
Oct 14 09:30:31 compute-0 ovn_controller[152662]: 2025-10-14T09:30:31Z|01456|binding|INFO|2f6bb222-680e-469f-83d5-517735604bb0: Claiming fa:16:3e:57:bb:97 10.100.0.9
Oct 14 09:30:31 compute-0 nova_compute[259627]: 2025-10-14 09:30:31.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:31 compute-0 systemd-udevd[399293]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.468 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:bb:97 10.100.0.9'], port_security=['fa:16:3e:57:bb:97 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2e0f5391-ee56-46d7-8aa9-9ba00efbe0e1 93b5966a-7949-42d1-a83d-7ff7c7667c63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a89aaec7-5335-48cb-8b51-b7dcd7e1d5f5, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2f6bb222-680e-469f-83d5-517735604bb0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.469 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2f6bb222-680e-469f-83d5-517735604bb0 in datapath 6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a bound to our chassis
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.470 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a
Oct 14 09:30:31 compute-0 ovn_controller[152662]: 2025-10-14T09:30:31Z|01457|binding|INFO|Setting lport 2f6bb222-680e-469f-83d5-517735604bb0 ovn-installed in OVS
Oct 14 09:30:31 compute-0 ovn_controller[152662]: 2025-10-14T09:30:31Z|01458|binding|INFO|Setting lport 2f6bb222-680e-469f-83d5-517735604bb0 up in Southbound
Oct 14 09:30:31 compute-0 nova_compute[259627]: 2025-10-14 09:30:31.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:31 compute-0 nova_compute[259627]: 2025-10-14 09:30:31.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:31 compute-0 NetworkManager[44885]: <info>  [1760434231.4863] device (tap2f6bb222-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:30:31 compute-0 NetworkManager[44885]: <info>  [1760434231.4882] device (tap2f6bb222-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.488 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be55eec0-e8df-4d9a-8fa4-178906779226]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.490 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6e05ecd2-81 in ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.492 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6e05ecd2-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.492 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[98d9ccf7-1f1c-49c0-a636-2e9b1bb42b72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.493 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca031c2-ef59-4e87-800e-7476a4ba6f88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:31 compute-0 systemd-machined[214636]: New machine qemu-169-instance-00000088.
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.510 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[65c18e46-78f4-4fbd-b1c6-d88437ac355d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:31 compute-0 systemd[1]: Started Virtual Machine qemu-169-instance-00000088.
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.543 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[807a4759-cbcb-4d4e-bf0c-ce0f188b6388]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.574 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[27c07f54-69cd-4bb9-ab61-0ffab471f23a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:31 compute-0 NetworkManager[44885]: <info>  [1760434231.5809] manager: (tap6e05ecd2-80): new Veth device (/org/freedesktop/NetworkManager/Devices/591)
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.581 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd8a2ea-8d30-4c63-aba5-e90e1c2c65d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.619 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7a83ba1a-8be2-4cb0-9b27-3fb14388621e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.622 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5ebd2fc7-7ad4-47f4-862b-444b59804e26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:31 compute-0 NetworkManager[44885]: <info>  [1760434231.6502] device (tap6e05ecd2-80): carrier: link connected
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.661 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[07a196f6-9afc-484c-b273-1917a81ce3f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.680 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[76c8fdd6-d7cc-469b-84ce-9398a30d6853]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e05ecd2-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:c2:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801041, 'reachable_time': 33084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399329, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.700 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f98c2d1b-0ab0-422f-aac8-0d406e8530e8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecd:c23a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 801041, 'tstamp': 801041}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399330, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.717 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[906594ef-9c45-4c5f-a223-3a98294a65cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e05ecd2-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:c2:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801041, 'reachable_time': 33084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 399331, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:31 compute-0 nova_compute[259627]: 2025-10-14 09:30:31.723 2 DEBUG nova.network.neutron [req-094fe7fa-e637-42cc-ab5d-89243a325c52 req-28cb9092-f73b-4f23-85f8-95985ac8336c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updated VIF entry in instance network info cache for port 2f6bb222-680e-469f-83d5-517735604bb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:30:31 compute-0 nova_compute[259627]: 2025-10-14 09:30:31.724 2 DEBUG nova.network.neutron [req-094fe7fa-e637-42cc-ab5d-89243a325c52 req-28cb9092-f73b-4f23-85f8-95985ac8336c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updating instance_info_cache with network_info: [{"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:30:31 compute-0 nova_compute[259627]: 2025-10-14 09:30:31.737 2 DEBUG oslo_concurrency.lockutils [req-094fe7fa-e637-42cc-ab5d-89243a325c52 req-28cb9092-f73b-4f23-85f8-95985ac8336c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.752 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e9ae58-26dd-4389-94d2-82a7ce911c23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:31 compute-0 ceph-mon[74249]: pgmap v2321: 305 pgs: 305 active+clean; 124 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 746 KiB/s wr, 37 op/s
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.810 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[13e6a980-a85b-4ab9-80c5-9ce277eb9d6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.811 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e05ecd2-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.812 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.812 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e05ecd2-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:30:31 compute-0 nova_compute[259627]: 2025-10-14 09:30:31.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:31 compute-0 NetworkManager[44885]: <info>  [1760434231.8142] manager: (tap6e05ecd2-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/592)
Oct 14 09:30:31 compute-0 kernel: tap6e05ecd2-80: entered promiscuous mode
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.817 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e05ecd2-80, col_values=(('external_ids', {'iface-id': '4697e43c-b02d-4f27-aea8-a54cad6fa2da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:30:31 compute-0 nova_compute[259627]: 2025-10-14 09:30:31.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:31 compute-0 ovn_controller[152662]: 2025-10-14T09:30:31Z|01459|binding|INFO|Releasing lport 4697e43c-b02d-4f27-aea8-a54cad6fa2da from this chassis (sb_readonly=0)
Oct 14 09:30:31 compute-0 nova_compute[259627]: 2025-10-14 09:30:31.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.833 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.834 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca32ad8-5da9-431d-bdf2-2bda31e966d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.836 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a.pid.haproxy
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.837 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'env', 'PROCESS_TAG=haproxy-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:30:31 compute-0 nova_compute[259627]: 2025-10-14 09:30:31.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:31 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.857 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:30:31 compute-0 ovn_controller[152662]: 2025-10-14T09:30:31Z|01460|binding|INFO|Releasing lport 4697e43c-b02d-4f27-aea8-a54cad6fa2da from this chassis (sb_readonly=0)
Oct 14 09:30:31 compute-0 nova_compute[259627]: 2025-10-14 09:30:31.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:32 compute-0 ovn_controller[152662]: 2025-10-14T09:30:32Z|01461|binding|INFO|Releasing lport 4697e43c-b02d-4f27-aea8-a54cad6fa2da from this chassis (sb_readonly=0)
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.081 2 DEBUG nova.compute.manager [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received event network-vif-plugged-2f6bb222-680e-469f-83d5-517735604bb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.081 2 DEBUG oslo_concurrency.lockutils [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.082 2 DEBUG oslo_concurrency.lockutils [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.082 2 DEBUG oslo_concurrency.lockutils [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.083 2 DEBUG nova.compute.manager [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Processing event network-vif-plugged-2f6bb222-680e-469f-83d5-517735604bb0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.083 2 DEBUG nova.compute.manager [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received event network-vif-plugged-2f6bb222-680e-469f-83d5-517735604bb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.083 2 DEBUG oslo_concurrency.lockutils [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.084 2 DEBUG oslo_concurrency.lockutils [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.084 2 DEBUG oslo_concurrency.lockutils [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.084 2 DEBUG nova.compute.manager [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] No waiting events found dispatching network-vif-plugged-2f6bb222-680e-469f-83d5-517735604bb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.085 2 WARNING nova.compute.manager [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received unexpected event network-vif-plugged-2f6bb222-680e-469f-83d5-517735604bb0 for instance with vm_state building and task_state spawning.
Oct 14 09:30:32 compute-0 podman[399403]: 2025-10-14 09:30:32.340996935 +0000 UTC m=+0.089358869 container create ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 09:30:32 compute-0 podman[399403]: 2025-10-14 09:30:32.295573853 +0000 UTC m=+0.043935837 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:30:32 compute-0 systemd[1]: Started libpod-conmon-ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4.scope.
Oct 14 09:30:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct 14 09:30:32 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:30:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2a4ae5bac5fb7737a47f6362c904610fbce4171335446f4e8312252dc95d3e9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:30:32 compute-0 podman[399403]: 2025-10-14 09:30:32.461965424 +0000 UTC m=+0.210327358 container init ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:30:32 compute-0 podman[399403]: 2025-10-14 09:30:32.472608637 +0000 UTC m=+0.220970541 container start ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:30:32 compute-0 neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a[399418]: [NOTICE]   (399422) : New worker (399424) forked
Oct 14 09:30:32 compute-0 neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a[399418]: [NOTICE]   (399422) : Loading success.
Oct 14 09:30:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:32.535 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.572 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434232.5717463, 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.573 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] VM Started (Lifecycle Event)
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.576 2 DEBUG nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.584 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.588 2 INFO nova.virt.libvirt.driver [-] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Instance spawned successfully.
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.589 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.655 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.656 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.657 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.658 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.659 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.660 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.666 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.670 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.712 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.713 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434232.5720613, 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.714 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] VM Paused (Lifecycle Event)
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.743 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:30:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:30:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.749 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434232.5836525, 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.750 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] VM Resumed (Lifecycle Event)
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.756 2 INFO nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Took 6.23 seconds to spawn the instance on the hypervisor.
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.756 2 DEBUG nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.771 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.775 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:30:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:30:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.804 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:30:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:30:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.827 2 INFO nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Took 7.26 seconds to build instance.
Oct 14 09:30:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:30:32
Oct 14 09:30:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:30:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:30:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'volumes', 'default.rgw.control', 'images', '.rgw.root', 'vms']
Oct 14 09:30:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:30:32 compute-0 nova_compute[259627]: 2025-10-14 09:30:32.866 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:30:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:30:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:30:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:30:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:30:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:30:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:30:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:30:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:30:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:30:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:30:33 compute-0 ceph-mon[74249]: pgmap v2322: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct 14 09:30:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Oct 14 09:30:35 compute-0 nova_compute[259627]: 2025-10-14 09:30:35.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:35 compute-0 nova_compute[259627]: 2025-10-14 09:30:35.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:35 compute-0 ceph-mon[74249]: pgmap v2323: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Oct 14 09:30:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 88 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 158 op/s
Oct 14 09:30:36 compute-0 nova_compute[259627]: 2025-10-14 09:30:36.621 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434221.6195803, edf4b59a-fe1b-48ae-92a3-7bec88fc7491 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:30:36 compute-0 nova_compute[259627]: 2025-10-14 09:30:36.621 2 INFO nova.compute.manager [-] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] VM Stopped (Lifecycle Event)
Oct 14 09:30:36 compute-0 nova_compute[259627]: 2025-10-14 09:30:36.647 2 DEBUG nova.compute.manager [None req-04138097-0440-40de-bdf7-e1ca56659660 - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:30:37 compute-0 ovn_controller[152662]: 2025-10-14T09:30:37Z|01462|binding|INFO|Releasing lport 4697e43c-b02d-4f27-aea8-a54cad6fa2da from this chassis (sb_readonly=0)
Oct 14 09:30:37 compute-0 nova_compute[259627]: 2025-10-14 09:30:37.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:37 compute-0 NetworkManager[44885]: <info>  [1760434237.4034] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/593)
Oct 14 09:30:37 compute-0 NetworkManager[44885]: <info>  [1760434237.4062] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/594)
Oct 14 09:30:37 compute-0 ovn_controller[152662]: 2025-10-14T09:30:37Z|01463|binding|INFO|Releasing lport 4697e43c-b02d-4f27-aea8-a54cad6fa2da from this chassis (sb_readonly=0)
Oct 14 09:30:37 compute-0 nova_compute[259627]: 2025-10-14 09:30:37.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:37 compute-0 nova_compute[259627]: 2025-10-14 09:30:37.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:37 compute-0 ceph-mon[74249]: pgmap v2324: 305 pgs: 305 active+clean; 88 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 158 op/s
Oct 14 09:30:37 compute-0 nova_compute[259627]: 2025-10-14 09:30:37.992 2 DEBUG nova.compute.manager [req-c5197957-c1a5-4221-ac9d-93b4e4843a1d req-110312ce-e355-451f-b022-163acd31e2d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received event network-changed-2f6bb222-680e-469f-83d5-517735604bb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:30:37 compute-0 nova_compute[259627]: 2025-10-14 09:30:37.993 2 DEBUG nova.compute.manager [req-c5197957-c1a5-4221-ac9d-93b4e4843a1d req-110312ce-e355-451f-b022-163acd31e2d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Refreshing instance network info cache due to event network-changed-2f6bb222-680e-469f-83d5-517735604bb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:30:37 compute-0 nova_compute[259627]: 2025-10-14 09:30:37.994 2 DEBUG oslo_concurrency.lockutils [req-c5197957-c1a5-4221-ac9d-93b4e4843a1d req-110312ce-e355-451f-b022-163acd31e2d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:30:37 compute-0 nova_compute[259627]: 2025-10-14 09:30:37.994 2 DEBUG oslo_concurrency.lockutils [req-c5197957-c1a5-4221-ac9d-93b4e4843a1d req-110312ce-e355-451f-b022-163acd31e2d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:30:37 compute-0 nova_compute[259627]: 2025-10-14 09:30:37.994 2 DEBUG nova.network.neutron [req-c5197957-c1a5-4221-ac9d-93b4e4843a1d req-110312ce-e355-451f-b022-163acd31e2d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Refreshing network info cache for port 2f6bb222-680e-469f-83d5-517735604bb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:30:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:30:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 88 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 14 09:30:38 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:30:38.537 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:30:39 compute-0 ceph-mon[74249]: pgmap v2325: 305 pgs: 305 active+clean; 88 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 14 09:30:40 compute-0 nova_compute[259627]: 2025-10-14 09:30:40.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 88 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 14 09:30:40 compute-0 nova_compute[259627]: 2025-10-14 09:30:40.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:40 compute-0 nova_compute[259627]: 2025-10-14 09:30:40.806 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434225.805421, 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:30:40 compute-0 nova_compute[259627]: 2025-10-14 09:30:40.808 2 INFO nova.compute.manager [-] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] VM Stopped (Lifecycle Event)
Oct 14 09:30:40 compute-0 nova_compute[259627]: 2025-10-14 09:30:40.848 2 DEBUG nova.compute.manager [None req-85a827a1-fbd8-46f8-8624-21847f342c34 - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:30:40 compute-0 nova_compute[259627]: 2025-10-14 09:30:40.920 2 DEBUG nova.network.neutron [req-c5197957-c1a5-4221-ac9d-93b4e4843a1d req-110312ce-e355-451f-b022-163acd31e2d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updated VIF entry in instance network info cache for port 2f6bb222-680e-469f-83d5-517735604bb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:30:40 compute-0 nova_compute[259627]: 2025-10-14 09:30:40.921 2 DEBUG nova.network.neutron [req-c5197957-c1a5-4221-ac9d-93b4e4843a1d req-110312ce-e355-451f-b022-163acd31e2d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updating instance_info_cache with network_info: [{"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:30:40 compute-0 nova_compute[259627]: 2025-10-14 09:30:40.949 2 DEBUG oslo_concurrency.lockutils [req-c5197957-c1a5-4221-ac9d-93b4e4843a1d req-110312ce-e355-451f-b022-163acd31e2d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:30:41 compute-0 ceph-mon[74249]: pgmap v2326: 305 pgs: 305 active+clean; 88 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 14 09:30:41 compute-0 nova_compute[259627]: 2025-10-14 09:30:41.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:30:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 88 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 122 op/s
Oct 14 09:30:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:30:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:30:43 compute-0 podman[399434]: 2025-10-14 09:30:43.699644564 +0000 UTC m=+0.094123967 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd)
Oct 14 09:30:43 compute-0 podman[399435]: 2025-10-14 09:30:43.725495033 +0000 UTC m=+0.118515470 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:30:43 compute-0 ceph-mon[74249]: pgmap v2327: 305 pgs: 305 active+clean; 88 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 122 op/s
Oct 14 09:30:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 88 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:30:44 compute-0 ovn_controller[152662]: 2025-10-14T09:30:44Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:bb:97 10.100.0.9
Oct 14 09:30:44 compute-0 ovn_controller[152662]: 2025-10-14T09:30:44Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:bb:97 10.100.0.9
Oct 14 09:30:44 compute-0 nova_compute[259627]: 2025-10-14 09:30:44.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:30:44 compute-0 nova_compute[259627]: 2025-10-14 09:30:44.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:30:44 compute-0 nova_compute[259627]: 2025-10-14 09:30:44.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:30:45 compute-0 nova_compute[259627]: 2025-10-14 09:30:45.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:45 compute-0 nova_compute[259627]: 2025-10-14 09:30:45.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:45 compute-0 nova_compute[259627]: 2025-10-14 09:30:45.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:45 compute-0 nova_compute[259627]: 2025-10-14 09:30:45.009 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:30:45 compute-0 nova_compute[259627]: 2025-10-14 09:30:45.009 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:30:45 compute-0 nova_compute[259627]: 2025-10-14 09:30:45.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:30:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3710949761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:30:45 compute-0 nova_compute[259627]: 2025-10-14 09:30:45.521 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:30:45 compute-0 nova_compute[259627]: 2025-10-14 09:30:45.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:45 compute-0 nova_compute[259627]: 2025-10-14 09:30:45.654 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:30:45 compute-0 nova_compute[259627]: 2025-10-14 09:30:45.655 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:30:45 compute-0 nova_compute[259627]: 2025-10-14 09:30:45.842 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:30:45 compute-0 nova_compute[259627]: 2025-10-14 09:30:45.843 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3492MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:30:45 compute-0 nova_compute[259627]: 2025-10-14 09:30:45.843 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:45 compute-0 nova_compute[259627]: 2025-10-14 09:30:45.844 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:45 compute-0 ceph-mon[74249]: pgmap v2328: 305 pgs: 305 active+clean; 88 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:30:45 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3710949761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:30:45 compute-0 nova_compute[259627]: 2025-10-14 09:30:45.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:46 compute-0 nova_compute[259627]: 2025-10-14 09:30:46.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:30:46 compute-0 nova_compute[259627]: 2025-10-14 09:30:46.020 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:30:46 compute-0 nova_compute[259627]: 2025-10-14 09:30:46.021 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:30:46 compute-0 nova_compute[259627]: 2025-10-14 09:30:46.075 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:30:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Oct 14 09:30:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:30:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/693183136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:30:46 compute-0 nova_compute[259627]: 2025-10-14 09:30:46.560 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:30:46 compute-0 nova_compute[259627]: 2025-10-14 09:30:46.569 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:30:46 compute-0 nova_compute[259627]: 2025-10-14 09:30:46.593 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:30:46 compute-0 nova_compute[259627]: 2025-10-14 09:30:46.625 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:30:46 compute-0 nova_compute[259627]: 2025-10-14 09:30:46.626 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:46 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/693183136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:30:47 compute-0 nova_compute[259627]: 2025-10-14 09:30:47.628 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:30:47 compute-0 ceph-mon[74249]: pgmap v2329: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Oct 14 09:30:47 compute-0 nova_compute[259627]: 2025-10-14 09:30:47.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:30:47 compute-0 nova_compute[259627]: 2025-10-14 09:30:47.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:30:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:30:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:30:48 compute-0 nova_compute[259627]: 2025-10-14 09:30:48.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:30:48 compute-0 nova_compute[259627]: 2025-10-14 09:30:48.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:30:48 compute-0 nova_compute[259627]: 2025-10-14 09:30:48.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:30:49 compute-0 nova_compute[259627]: 2025-10-14 09:30:49.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:49 compute-0 nova_compute[259627]: 2025-10-14 09:30:49.672 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:30:49 compute-0 nova_compute[259627]: 2025-10-14 09:30:49.673 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:30:49 compute-0 nova_compute[259627]: 2025-10-14 09:30:49.673 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:30:49 compute-0 nova_compute[259627]: 2025-10-14 09:30:49.674 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:30:49 compute-0 ceph-mon[74249]: pgmap v2330: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:30:50 compute-0 nova_compute[259627]: 2025-10-14 09:30:50.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:30:50 compute-0 nova_compute[259627]: 2025-10-14 09:30:50.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:51 compute-0 ceph-mon[74249]: pgmap v2331: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:30:52 compute-0 nova_compute[259627]: 2025-10-14 09:30:52.163 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updating instance_info_cache with network_info: [{"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:30:52 compute-0 nova_compute[259627]: 2025-10-14 09:30:52.180 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:30:52 compute-0 nova_compute[259627]: 2025-10-14 09:30:52.181 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:30:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:30:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:30:53 compute-0 nova_compute[259627]: 2025-10-14 09:30:53.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:53 compute-0 ceph-mon[74249]: pgmap v2332: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:30:53 compute-0 nova_compute[259627]: 2025-10-14 09:30:53.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:30:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:30:54 compute-0 nova_compute[259627]: 2025-10-14 09:30:54.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:30:55 compute-0 nova_compute[259627]: 2025-10-14 09:30:55.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:55 compute-0 nova_compute[259627]: 2025-10-14 09:30:55.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:30:55 compute-0 ceph-mon[74249]: pgmap v2333: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:30:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:30:56 compute-0 podman[399520]: 2025-10-14 09:30:56.699965344 +0000 UTC m=+0.093399539 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:30:56 compute-0 podman[399519]: 2025-10-14 09:30:56.755343282 +0000 UTC m=+0.153863443 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:30:57 compute-0 nova_compute[259627]: 2025-10-14 09:30:57.330 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:57 compute-0 nova_compute[259627]: 2025-10-14 09:30:57.330 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:57 compute-0 nova_compute[259627]: 2025-10-14 09:30:57.349 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:30:57 compute-0 nova_compute[259627]: 2025-10-14 09:30:57.431 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:57 compute-0 nova_compute[259627]: 2025-10-14 09:30:57.431 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:57 compute-0 nova_compute[259627]: 2025-10-14 09:30:57.442 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:30:57 compute-0 nova_compute[259627]: 2025-10-14 09:30:57.442 2 INFO nova.compute.claims [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:30:57 compute-0 nova_compute[259627]: 2025-10-14 09:30:57.509 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4ff65022-c3e1-4ee6-b866-7892555ef52f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:57 compute-0 nova_compute[259627]: 2025-10-14 09:30:57.510 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:57 compute-0 nova_compute[259627]: 2025-10-14 09:30:57.536 2 DEBUG nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:30:57 compute-0 nova_compute[259627]: 2025-10-14 09:30:57.613 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:30:57 compute-0 nova_compute[259627]: 2025-10-14 09:30:57.675 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:57 compute-0 ceph-mon[74249]: pgmap v2334: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:30:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:30:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:30:58 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/385454893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.085 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.093 2 DEBUG nova.compute.provider_tree [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.165 2 DEBUG nova.scheduler.client.report [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.188 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.189 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.193 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.202 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.203 2 INFO nova.compute.claims [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.339 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.340 2 DEBUG nova.network.neutron [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.361 2 INFO nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.391 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.408 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:30:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.605 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.607 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.608 2 INFO nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Creating image(s)
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.634 2 DEBUG nova.storage.rbd_utils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image b30a994a-5fb7-4344-9944-98d3d75d3b04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.658 2 DEBUG nova.storage.rbd_utils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image b30a994a-5fb7-4344-9944-98d3d75d3b04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.678 2 DEBUG nova.storage.rbd_utils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image b30a994a-5fb7-4344-9944-98d3d75d3b04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.681 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.767 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.768 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.769 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.769 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.787 2 DEBUG nova.storage.rbd_utils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image b30a994a-5fb7-4344-9944-98d3d75d3b04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.790 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 b30a994a-5fb7-4344-9944-98d3d75d3b04_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:30:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:30:58 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2771659984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.870 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.876 2 DEBUG nova.compute.provider_tree [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.893 2 DEBUG nova.policy [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:30:58 compute-0 nova_compute[259627]: 2025-10-14 09:30:58.930 2 DEBUG nova.scheduler.client.report [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:30:58 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/385454893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:30:58 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2771659984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.045 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 b30a994a-5fb7-4344-9944-98d3d75d3b04_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.090 2 DEBUG nova.storage.rbd_utils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image b30a994a-5fb7-4344-9944-98d3d75d3b04_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.114 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.114 2 DEBUG nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.166 2 DEBUG nova.objects.instance [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid b30a994a-5fb7-4344-9944-98d3d75d3b04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.192 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.192 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Ensure instance console log exists: /var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.193 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.193 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.193 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.203 2 DEBUG nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.203 2 DEBUG nova.network.neutron [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.243 2 INFO nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.270 2 DEBUG nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.426 2 DEBUG nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.427 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.428 2 INFO nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Creating image(s)
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.461 2 DEBUG nova.storage.rbd_utils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.502 2 DEBUG nova.storage.rbd_utils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.539 2 DEBUG nova.storage.rbd_utils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.544 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.649 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.651 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.652 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.652 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.688 2 DEBUG nova.storage.rbd_utils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.694 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.743 2 DEBUG nova.policy [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20f3546ab30e42b5b641f67780316750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f754bf649a2404fa8dee732f5aab36e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:30:59 compute-0 ceph-mon[74249]: pgmap v2335: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 09:30:59 compute-0 nova_compute[259627]: 2025-10-14 09:30:59.993 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:00 compute-0 nova_compute[259627]: 2025-10-14 09:31:00.043 2 DEBUG nova.storage.rbd_utils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] resizing rbd image 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:31:00 compute-0 nova_compute[259627]: 2025-10-14 09:31:00.127 2 DEBUG nova.objects.instance [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'migration_context' on Instance uuid 4ff65022-c3e1-4ee6-b866-7892555ef52f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:31:00 compute-0 nova_compute[259627]: 2025-10-14 09:31:00.145 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:31:00 compute-0 nova_compute[259627]: 2025-10-14 09:31:00.145 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Ensure instance console log exists: /var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:31:00 compute-0 nova_compute[259627]: 2025-10-14 09:31:00.146 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:00 compute-0 nova_compute[259627]: 2025-10-14 09:31:00.146 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:00 compute-0 nova_compute[259627]: 2025-10-14 09:31:00.146 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:00 compute-0 nova_compute[259627]: 2025-10-14 09:31:00.266 2 DEBUG nova.network.neutron [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Successfully created port: f2639397-8fb2-4541-a298-fd68219e1e47 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:31:00 compute-0 nova_compute[259627]: 2025-10-14 09:31:00.376 2 DEBUG nova.network.neutron [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Successfully created port: 20f15fae-2789-43f5-8ca3-2a412dba5625 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:31:00 compute-0 nova_compute[259627]: 2025-10-14 09:31:00.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 149 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 893 KiB/s wr, 8 op/s
Oct 14 09:31:00 compute-0 nova_compute[259627]: 2025-10-14 09:31:00.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:00 compute-0 nova_compute[259627]: 2025-10-14 09:31:00.741 2 DEBUG nova.network.neutron [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Successfully created port: c22b3ec2-f5a1-4c97-8648-a463e9e12545 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:31:00 compute-0 sudo[399942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:31:00 compute-0 sudo[399942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:31:00 compute-0 sudo[399942]: pam_unix(sudo:session): session closed for user root
Oct 14 09:31:00 compute-0 sudo[399967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:31:00 compute-0 sudo[399967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:31:00 compute-0 sudo[399967]: pam_unix(sudo:session): session closed for user root
Oct 14 09:31:01 compute-0 sudo[399992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:31:01 compute-0 sudo[399992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:31:01 compute-0 sudo[399992]: pam_unix(sudo:session): session closed for user root
Oct 14 09:31:01 compute-0 sudo[400017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:31:01 compute-0 sudo[400017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:31:01 compute-0 nova_compute[259627]: 2025-10-14 09:31:01.097 2 DEBUG nova.network.neutron [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Successfully updated port: 20f15fae-2789-43f5-8ca3-2a412dba5625 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:31:01 compute-0 nova_compute[259627]: 2025-10-14 09:31:01.116 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:31:01 compute-0 nova_compute[259627]: 2025-10-14 09:31:01.117 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquired lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:31:01 compute-0 nova_compute[259627]: 2025-10-14 09:31:01.117 2 DEBUG nova.network.neutron [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:31:01 compute-0 nova_compute[259627]: 2025-10-14 09:31:01.194 2 DEBUG nova.compute.manager [req-3bcd5722-1300-4aef-9cc5-74c01341235e req-e63a307b-839d-496f-81af-789375f19033 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received event network-changed-20f15fae-2789-43f5-8ca3-2a412dba5625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:01 compute-0 nova_compute[259627]: 2025-10-14 09:31:01.194 2 DEBUG nova.compute.manager [req-3bcd5722-1300-4aef-9cc5-74c01341235e req-e63a307b-839d-496f-81af-789375f19033 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Refreshing instance network info cache due to event network-changed-20f15fae-2789-43f5-8ca3-2a412dba5625. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:31:01 compute-0 nova_compute[259627]: 2025-10-14 09:31:01.195 2 DEBUG oslo_concurrency.lockutils [req-3bcd5722-1300-4aef-9cc5-74c01341235e req-e63a307b-839d-496f-81af-789375f19033 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:31:01 compute-0 sudo[400017]: pam_unix(sudo:session): session closed for user root
Oct 14 09:31:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:31:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:31:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:31:01 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:31:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:31:01 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:31:01 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev fb69312d-d2b5-4528-8d5d-8b9ff46eed9f does not exist
Oct 14 09:31:01 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 746fbf9d-33e8-41ed-947d-4c61e8f1df80 does not exist
Oct 14 09:31:01 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 6f41fa35-5568-49eb-9d51-77847a85f6a7 does not exist
Oct 14 09:31:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:31:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:31:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:31:01 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:31:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:31:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:31:01 compute-0 sudo[400073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:31:01 compute-0 sudo[400073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:31:01 compute-0 sudo[400073]: pam_unix(sudo:session): session closed for user root
Oct 14 09:31:01 compute-0 nova_compute[259627]: 2025-10-14 09:31:01.883 2 DEBUG nova.network.neutron [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:31:01 compute-0 sudo[400098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:31:01 compute-0 sudo[400098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:31:01 compute-0 sudo[400098]: pam_unix(sudo:session): session closed for user root
Oct 14 09:31:01 compute-0 ceph-mon[74249]: pgmap v2336: 305 pgs: 305 active+clean; 149 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 893 KiB/s wr, 8 op/s
Oct 14 09:31:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:31:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:31:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:31:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:31:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:31:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:31:02 compute-0 sudo[400123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:31:02 compute-0 sudo[400123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:31:02 compute-0 sudo[400123]: pam_unix(sudo:session): session closed for user root
Oct 14 09:31:02 compute-0 sudo[400148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:31:02 compute-0 sudo[400148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:31:02 compute-0 nova_compute[259627]: 2025-10-14 09:31:02.140 2 DEBUG nova.network.neutron [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Successfully updated port: f2639397-8fb2-4541-a298-fd68219e1e47 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:31:02 compute-0 nova_compute[259627]: 2025-10-14 09:31:02.225 2 DEBUG nova.compute.manager [req-2656b88e-360f-410a-8d12-7d14a0e60cd7 req-4ef5b7dd-e374-474f-b149-3bc936fb85cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-changed-f2639397-8fb2-4541-a298-fd68219e1e47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:02 compute-0 nova_compute[259627]: 2025-10-14 09:31:02.226 2 DEBUG nova.compute.manager [req-2656b88e-360f-410a-8d12-7d14a0e60cd7 req-4ef5b7dd-e374-474f-b149-3bc936fb85cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Refreshing instance network info cache due to event network-changed-f2639397-8fb2-4541-a298-fd68219e1e47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:31:02 compute-0 nova_compute[259627]: 2025-10-14 09:31:02.226 2 DEBUG oslo_concurrency.lockutils [req-2656b88e-360f-410a-8d12-7d14a0e60cd7 req-4ef5b7dd-e374-474f-b149-3bc936fb85cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:31:02 compute-0 nova_compute[259627]: 2025-10-14 09:31:02.226 2 DEBUG oslo_concurrency.lockutils [req-2656b88e-360f-410a-8d12-7d14a0e60cd7 req-4ef5b7dd-e374-474f-b149-3bc936fb85cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:31:02 compute-0 nova_compute[259627]: 2025-10-14 09:31:02.226 2 DEBUG nova.network.neutron [req-2656b88e-360f-410a-8d12-7d14a0e60cd7 req-4ef5b7dd-e374-474f-b149-3bc936fb85cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Refreshing network info cache for port f2639397-8fb2-4541-a298-fd68219e1e47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:31:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 213 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.6 MiB/s wr, 48 op/s
Oct 14 09:31:02 compute-0 podman[400215]: 2025-10-14 09:31:02.515266169 +0000 UTC m=+0.070811739 container create e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_keller, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 09:31:02 compute-0 systemd[1]: Started libpod-conmon-e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd.scope.
Oct 14 09:31:02 compute-0 podman[400215]: 2025-10-14 09:31:02.483724295 +0000 UTC m=+0.039269915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:31:02 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:31:02 compute-0 podman[400215]: 2025-10-14 09:31:02.634951226 +0000 UTC m=+0.190496806 container init e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 09:31:02 compute-0 podman[400215]: 2025-10-14 09:31:02.649458582 +0000 UTC m=+0.205004142 container start e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_keller, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 09:31:02 compute-0 podman[400215]: 2025-10-14 09:31:02.653567003 +0000 UTC m=+0.209112563 container attach e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_keller, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 09:31:02 compute-0 agitated_keller[400232]: 167 167
Oct 14 09:31:02 compute-0 systemd[1]: libpod-e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd.scope: Deactivated successfully.
Oct 14 09:31:02 compute-0 conmon[400232]: conmon e923e8739478cb2d50e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd.scope/container/memory.events
Oct 14 09:31:02 compute-0 podman[400237]: 2025-10-14 09:31:02.730938142 +0000 UTC m=+0.045378145 container died e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_keller, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:31:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:31:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:31:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e73c11aa86aaea91b43b0ef76e3a5973c171310b7fc73cd4e8427d689997bb4-merged.mount: Deactivated successfully.
Oct 14 09:31:02 compute-0 podman[400237]: 2025-10-14 09:31:02.791817975 +0000 UTC m=+0.106257938 container remove e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_keller, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:31:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:31:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:31:02 compute-0 systemd[1]: libpod-conmon-e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd.scope: Deactivated successfully.
Oct 14 09:31:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:31:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:31:02 compute-0 nova_compute[259627]: 2025-10-14 09:31:02.848 2 DEBUG nova.network.neutron [req-2656b88e-360f-410a-8d12-7d14a0e60cd7 req-4ef5b7dd-e374-474f-b149-3bc936fb85cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:31:02 compute-0 podman[400259]: 2025-10-14 09:31:02.990596564 +0000 UTC m=+0.057263467 container create a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wing, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:31:03 compute-0 ceph-mon[74249]: pgmap v2337: 305 pgs: 305 active+clean; 213 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.6 MiB/s wr, 48 op/s
Oct 14 09:31:03 compute-0 systemd[1]: Started libpod-conmon-a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13.scope.
Oct 14 09:31:03 compute-0 podman[400259]: 2025-10-14 09:31:02.962281599 +0000 UTC m=+0.028948542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:31:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:31:03 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:31:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200887aa44f4e3c057fc8b3d046787d87f9c5c5cd5709084db61c452b4032090/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:31:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200887aa44f4e3c057fc8b3d046787d87f9c5c5cd5709084db61c452b4032090/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:31:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200887aa44f4e3c057fc8b3d046787d87f9c5c5cd5709084db61c452b4032090/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:31:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200887aa44f4e3c057fc8b3d046787d87f9c5c5cd5709084db61c452b4032090/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:31:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200887aa44f4e3c057fc8b3d046787d87f9c5c5cd5709084db61c452b4032090/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:31:03 compute-0 podman[400259]: 2025-10-14 09:31:03.11434035 +0000 UTC m=+0.181007223 container init a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:31:03 compute-0 podman[400259]: 2025-10-14 09:31:03.128056327 +0000 UTC m=+0.194723230 container start a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 09:31:03 compute-0 podman[400259]: 2025-10-14 09:31:03.131925222 +0000 UTC m=+0.198592105 container attach a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.495 2 DEBUG nova.network.neutron [req-2656b88e-360f-410a-8d12-7d14a0e60cd7 req-4ef5b7dd-e374-474f-b149-3bc936fb85cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.523 2 DEBUG oslo_concurrency.lockutils [req-2656b88e-360f-410a-8d12-7d14a0e60cd7 req-4ef5b7dd-e374-474f-b149-3bc936fb85cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.549 2 DEBUG nova.network.neutron [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Updating instance_info_cache with network_info: [{"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.571 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Releasing lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.571 2 DEBUG nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Instance network_info: |[{"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.572 2 DEBUG oslo_concurrency.lockutils [req-3bcd5722-1300-4aef-9cc5-74c01341235e req-e63a307b-839d-496f-81af-789375f19033 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.572 2 DEBUG nova.network.neutron [req-3bcd5722-1300-4aef-9cc5-74c01341235e req-e63a307b-839d-496f-81af-789375f19033 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Refreshing network info cache for port 20f15fae-2789-43f5-8ca3-2a412dba5625 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.577 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Start _get_guest_xml network_info=[{"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.585 2 WARNING nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.598 2 DEBUG nova.virt.libvirt.host [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.599 2 DEBUG nova.virt.libvirt.host [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.606 2 DEBUG nova.virt.libvirt.host [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.607 2 DEBUG nova.virt.libvirt.host [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.607 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.608 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.609 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.609 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.610 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.610 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.611 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.611 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.611 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.612 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.612 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.613 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.617 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.743 2 DEBUG nova.network.neutron [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Successfully updated port: c22b3ec2-f5a1-4c97-8648-a463e9e12545 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.763 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.763 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.764 2 DEBUG nova.network.neutron [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:31:03 compute-0 nova_compute[259627]: 2025-10-14 09:31:03.940 2 DEBUG nova.network.neutron [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:31:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:31:04 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3477009379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.103 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.130 2 DEBUG nova.storage.rbd_utils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.135 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:04 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3477009379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:31:04 compute-0 tender_wing[400276]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:31:04 compute-0 tender_wing[400276]: --> relative data size: 1.0
Oct 14 09:31:04 compute-0 tender_wing[400276]: --> All data devices are unavailable
Oct 14 09:31:04 compute-0 systemd[1]: libpod-a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13.scope: Deactivated successfully.
Oct 14 09:31:04 compute-0 systemd[1]: libpod-a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13.scope: Consumed 1.086s CPU time.
Oct 14 09:31:04 compute-0 podman[400259]: 2025-10-14 09:31:04.276002488 +0000 UTC m=+1.342669391 container died a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wing, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:31:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-200887aa44f4e3c057fc8b3d046787d87f9c5c5cd5709084db61c452b4032090-merged.mount: Deactivated successfully.
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.331 2 DEBUG nova.compute.manager [req-c13e2a66-d859-40f3-9f89-f453f9a07763 req-c1e16193-5d7a-4df3-9a79-2579d09dca62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-changed-c22b3ec2-f5a1-4c97-8648-a463e9e12545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.333 2 DEBUG nova.compute.manager [req-c13e2a66-d859-40f3-9f89-f453f9a07763 req-c1e16193-5d7a-4df3-9a79-2579d09dca62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Refreshing instance network info cache due to event network-changed-c22b3ec2-f5a1-4c97-8648-a463e9e12545. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.334 2 DEBUG oslo_concurrency.lockutils [req-c13e2a66-d859-40f3-9f89-f453f9a07763 req-c1e16193-5d7a-4df3-9a79-2579d09dca62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:31:04 compute-0 podman[400259]: 2025-10-14 09:31:04.354030392 +0000 UTC m=+1.420697265 container remove a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:31:04 compute-0 systemd[1]: libpod-conmon-a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13.scope: Deactivated successfully.
Oct 14 09:31:04 compute-0 sudo[400148]: pam_unix(sudo:session): session closed for user root
Oct 14 09:31:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 213 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.6 MiB/s wr, 48 op/s
Oct 14 09:31:04 compute-0 sudo[400379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:31:04 compute-0 sudo[400379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:31:04 compute-0 sudo[400379]: pam_unix(sudo:session): session closed for user root
Oct 14 09:31:04 compute-0 sudo[400404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:31:04 compute-0 sudo[400404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:31:04 compute-0 sudo[400404]: pam_unix(sudo:session): session closed for user root
Oct 14 09:31:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:31:04 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2751276044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.614 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.615 2 DEBUG nova.virt.libvirt.vif [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-876162503',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-876162503',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ge',id=138,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEydbvfC6BL8JKAwPsMYw33jsGt7x8pmldi0JdcLroJxQM9Cv9y2WyT4/kdGyYCi1f6St8cX/paY9O5VNQjPEflw/+0a0KCs3SETvjwSZyH4RtpOVgJ2yOdUlm6DCl0mg==',key_name='tempest-TestSecurityGroupsBasicOps-55432425',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-69wk0x4o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:30:59Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=4ff65022-c3e1-4ee6-b866-7892555ef52f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.616 2 DEBUG nova.network.os_vif_util [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.617 2 DEBUG nova.network.os_vif_util [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:13:28,bridge_name='br-int',has_traffic_filtering=True,id=20f15fae-2789-43f5-8ca3-2a412dba5625,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f15fae-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.618 2 DEBUG nova.objects.instance [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'pci_devices' on Instance uuid 4ff65022-c3e1-4ee6-b866-7892555ef52f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.637 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:31:04 compute-0 nova_compute[259627]:   <uuid>4ff65022-c3e1-4ee6-b866-7892555ef52f</uuid>
Oct 14 09:31:04 compute-0 nova_compute[259627]:   <name>instance-0000008a</name>
Oct 14 09:31:04 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:31:04 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:31:04 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-876162503</nova:name>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:31:03</nova:creationTime>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:31:04 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:31:04 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:31:04 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:31:04 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:31:04 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:31:04 compute-0 nova_compute[259627]:         <nova:user uuid="20f3546ab30e42b5b641f67780316750">tempest-TestSecurityGroupsBasicOps-1327646173-project-member</nova:user>
Oct 14 09:31:04 compute-0 nova_compute[259627]:         <nova:project uuid="5f754bf649a2404fa8dee732f5aab36e">tempest-TestSecurityGroupsBasicOps-1327646173</nova:project>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:31:04 compute-0 nova_compute[259627]:         <nova:port uuid="20f15fae-2789-43f5-8ca3-2a412dba5625">
Oct 14 09:31:04 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:31:04 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:31:04 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <system>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <entry name="serial">4ff65022-c3e1-4ee6-b866-7892555ef52f</entry>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <entry name="uuid">4ff65022-c3e1-4ee6-b866-7892555ef52f</entry>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     </system>
Oct 14 09:31:04 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:31:04 compute-0 nova_compute[259627]:   <os>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:   </os>
Oct 14 09:31:04 compute-0 nova_compute[259627]:   <features>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:   </features>
Oct 14 09:31:04 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:31:04 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:31:04 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/4ff65022-c3e1-4ee6-b866-7892555ef52f_disk">
Oct 14 09:31:04 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       </source>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:31:04 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/4ff65022-c3e1-4ee6-b866-7892555ef52f_disk.config">
Oct 14 09:31:04 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       </source>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:31:04 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:59:13:28"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <target dev="tap20f15fae-27"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f/console.log" append="off"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <video>
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     </video>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:31:04 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:31:04 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:31:04 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:31:04 compute-0 nova_compute[259627]: </domain>
Oct 14 09:31:04 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.638 2 DEBUG nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Preparing to wait for external event network-vif-plugged-20f15fae-2789-43f5-8ca3-2a412dba5625 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.638 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.638 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.639 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.639 2 DEBUG nova.virt.libvirt.vif [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-876162503',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-876162503',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ge',id=138,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEydbvfC6BL8JKAwPsMYw33jsGt7x8pmldi0JdcLroJxQM9Cv9y2WyT4/kdGyYCi1f6St8cX/paY9O5VNQjPEflw/+0a0KCs3SETvjwSZyH4RtpOVgJ2yOdUlm6DCl0mg==',key_name='tempest-TestSecurityGroupsBasicOps-55432425',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-69wk0x4o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:30:59Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=4ff65022-c3e1-4ee6-b866-7892555ef52f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.640 2 DEBUG nova.network.os_vif_util [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.640 2 DEBUG nova.network.os_vif_util [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:13:28,bridge_name='br-int',has_traffic_filtering=True,id=20f15fae-2789-43f5-8ca3-2a412dba5625,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f15fae-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.641 2 DEBUG os_vif [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:13:28,bridge_name='br-int',has_traffic_filtering=True,id=20f15fae-2789-43f5-8ca3-2a412dba5625,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f15fae-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.642 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.642 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20f15fae-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap20f15fae-27, col_values=(('external_ids', {'iface-id': '20f15fae-2789-43f5-8ca3-2a412dba5625', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:59:13:28', 'vm-uuid': '4ff65022-c3e1-4ee6-b866-7892555ef52f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:04 compute-0 NetworkManager[44885]: <info>  [1760434264.6517] manager: (tap20f15fae-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/595)
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.657 2 INFO os_vif [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:13:28,bridge_name='br-int',has_traffic_filtering=True,id=20f15fae-2789-43f5-8ca3-2a412dba5625,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f15fae-27')
Oct 14 09:31:04 compute-0 sudo[400429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:31:04 compute-0 sudo[400429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:31:04 compute-0 sudo[400429]: pam_unix(sudo:session): session closed for user root
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.723 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.724 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.724 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No VIF found with MAC fa:16:3e:59:13:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.724 2 INFO nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Using config drive
Oct 14 09:31:04 compute-0 sudo[400458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:31:04 compute-0 nova_compute[259627]: 2025-10-14 09:31:04.759 2 DEBUG nova.storage.rbd_utils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:31:04 compute-0 sudo[400458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:31:05 compute-0 ceph-mon[74249]: pgmap v2338: 305 pgs: 305 active+clean; 213 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.6 MiB/s wr, 48 op/s
Oct 14 09:31:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2751276044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:31:05 compute-0 podman[400540]: 2025-10-14 09:31:05.173625685 +0000 UTC m=+0.058034035 container create 0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:31:05 compute-0 systemd[1]: Started libpod-conmon-0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372.scope.
Oct 14 09:31:05 compute-0 podman[400540]: 2025-10-14 09:31:05.149571385 +0000 UTC m=+0.033979705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:31:05 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:31:05 compute-0 podman[400540]: 2025-10-14 09:31:05.28383083 +0000 UTC m=+0.168239240 container init 0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:31:05 compute-0 podman[400540]: 2025-10-14 09:31:05.297124016 +0000 UTC m=+0.181532366 container start 0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 09:31:05 compute-0 podman[400540]: 2025-10-14 09:31:05.301543114 +0000 UTC m=+0.185951464 container attach 0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:31:05 compute-0 boring_raman[400557]: 167 167
Oct 14 09:31:05 compute-0 systemd[1]: libpod-0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372.scope: Deactivated successfully.
Oct 14 09:31:05 compute-0 conmon[400557]: conmon 0a454cd99e4ecd55682f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372.scope/container/memory.events
Oct 14 09:31:05 compute-0 podman[400540]: 2025-10-14 09:31:05.306164948 +0000 UTC m=+0.190573298 container died 0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 09:31:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-fac05094bd029e5f2e0ace3c4be0c25d16e02217ddc01df89ab426382e3f6da2-merged.mount: Deactivated successfully.
Oct 14 09:31:05 compute-0 podman[400540]: 2025-10-14 09:31:05.366848957 +0000 UTC m=+0.251257267 container remove 0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 09:31:05 compute-0 systemd[1]: libpod-conmon-0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372.scope: Deactivated successfully.
Oct 14 09:31:05 compute-0 nova_compute[259627]: 2025-10-14 09:31:05.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:05 compute-0 podman[400583]: 2025-10-14 09:31:05.601440813 +0000 UTC m=+0.053690599 container create 43e6035e760803e9d64f79cf7869c20169367529265f520481e8b5b82f18abc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:31:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:31:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4120634801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:31:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:31:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4120634801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:31:05 compute-0 systemd[1]: Started libpod-conmon-43e6035e760803e9d64f79cf7869c20169367529265f520481e8b5b82f18abc7.scope.
Oct 14 09:31:05 compute-0 podman[400583]: 2025-10-14 09:31:05.581936304 +0000 UTC m=+0.034186120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:31:05 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bd641945c3846127286ebac3a2cf8830bea2d74d43151e4bed411827651092/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bd641945c3846127286ebac3a2cf8830bea2d74d43151e4bed411827651092/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bd641945c3846127286ebac3a2cf8830bea2d74d43151e4bed411827651092/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bd641945c3846127286ebac3a2cf8830bea2d74d43151e4bed411827651092/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:31:05 compute-0 podman[400583]: 2025-10-14 09:31:05.71786882 +0000 UTC m=+0.170118636 container init 43e6035e760803e9d64f79cf7869c20169367529265f520481e8b5b82f18abc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:31:05 compute-0 podman[400583]: 2025-10-14 09:31:05.728956832 +0000 UTC m=+0.181206638 container start 43e6035e760803e9d64f79cf7869c20169367529265f520481e8b5b82f18abc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rhodes, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:31:05 compute-0 podman[400583]: 2025-10-14 09:31:05.733066083 +0000 UTC m=+0.185315959 container attach 43e6035e760803e9d64f79cf7869c20169367529265f520481e8b5b82f18abc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rhodes, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:31:05 compute-0 nova_compute[259627]: 2025-10-14 09:31:05.882 2 INFO nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Creating config drive at /var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f/disk.config
Oct 14 09:31:05 compute-0 nova_compute[259627]: 2025-10-14 09:31:05.892 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphnklfotg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:05 compute-0 nova_compute[259627]: 2025-10-14 09:31:05.960 2 DEBUG nova.network.neutron [req-3bcd5722-1300-4aef-9cc5-74c01341235e req-e63a307b-839d-496f-81af-789375f19033 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Updated VIF entry in instance network info cache for port 20f15fae-2789-43f5-8ca3-2a412dba5625. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:31:05 compute-0 nova_compute[259627]: 2025-10-14 09:31:05.962 2 DEBUG nova.network.neutron [req-3bcd5722-1300-4aef-9cc5-74c01341235e req-e63a307b-839d-496f-81af-789375f19033 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Updating instance_info_cache with network_info: [{"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:31:05 compute-0 nova_compute[259627]: 2025-10-14 09:31:05.989 2 DEBUG oslo_concurrency.lockutils [req-3bcd5722-1300-4aef-9cc5-74c01341235e req-e63a307b-839d-496f-81af-789375f19033 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:31:06 compute-0 nova_compute[259627]: 2025-10-14 09:31:06.066 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphnklfotg" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:06 compute-0 nova_compute[259627]: 2025-10-14 09:31:06.111 2 DEBUG nova.storage.rbd_utils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:31:06 compute-0 nova_compute[259627]: 2025-10-14 09:31:06.118 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f/disk.config 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/4120634801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:31:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/4120634801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:31:06 compute-0 nova_compute[259627]: 2025-10-14 09:31:06.330 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f/disk.config 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:06 compute-0 nova_compute[259627]: 2025-10-14 09:31:06.332 2 INFO nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Deleting local config drive /var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f/disk.config because it was imported into RBD.
Oct 14 09:31:06 compute-0 kernel: tap20f15fae-27: entered promiscuous mode
Oct 14 09:31:06 compute-0 nova_compute[259627]: 2025-10-14 09:31:06.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:06 compute-0 ovn_controller[152662]: 2025-10-14T09:31:06Z|01464|binding|INFO|Claiming lport 20f15fae-2789-43f5-8ca3-2a412dba5625 for this chassis.
Oct 14 09:31:06 compute-0 ovn_controller[152662]: 2025-10-14T09:31:06Z|01465|binding|INFO|20f15fae-2789-43f5-8ca3-2a412dba5625: Claiming fa:16:3e:59:13:28 10.100.0.10
Oct 14 09:31:06 compute-0 NetworkManager[44885]: <info>  [1760434266.4208] manager: (tap20f15fae-27): new Tun device (/org/freedesktop/NetworkManager/Devices/596)
Oct 14 09:31:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.428 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:13:28 10.100.0.10'], port_security=['fa:16:3e:59:13:28 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4ff65022-c3e1-4ee6-b866-7892555ef52f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2e0f5391-ee56-46d7-8aa9-9ba00efbe0e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a89aaec7-5335-48cb-8b51-b7dcd7e1d5f5, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=20f15fae-2789-43f5-8ca3-2a412dba5625) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:31:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.429 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 20f15fae-2789-43f5-8ca3-2a412dba5625 in datapath 6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a bound to our chassis
Oct 14 09:31:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.431 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a
Oct 14 09:31:06 compute-0 ovn_controller[152662]: 2025-10-14T09:31:06Z|01466|binding|INFO|Setting lport 20f15fae-2789-43f5-8ca3-2a412dba5625 up in Southbound
Oct 14 09:31:06 compute-0 ovn_controller[152662]: 2025-10-14T09:31:06Z|01467|binding|INFO|Setting lport 20f15fae-2789-43f5-8ca3-2a412dba5625 ovn-installed in OVS
Oct 14 09:31:06 compute-0 nova_compute[259627]: 2025-10-14 09:31:06.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 213 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.6 MiB/s wr, 54 op/s
Oct 14 09:31:06 compute-0 nova_compute[259627]: 2025-10-14 09:31:06.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.449 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[462a7724-a6be-4b3f-b5a2-ad0b25c76546]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:06 compute-0 systemd-udevd[400660]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:31:06 compute-0 NetworkManager[44885]: <info>  [1760434266.4656] device (tap20f15fae-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:31:06 compute-0 NetworkManager[44885]: <info>  [1760434266.4663] device (tap20f15fae-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:31:06 compute-0 systemd-machined[214636]: New machine qemu-170-instance-0000008a.
Oct 14 09:31:06 compute-0 systemd[1]: Started Virtual Machine qemu-170-instance-0000008a.
Oct 14 09:31:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.492 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5f600944-56e9-4c23-b412-52a98dd6e2aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.495 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[84b0a67a-cb59-45fd-aed8-92083d90b591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]: {
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:     "0": [
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:         {
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "devices": [
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "/dev/loop3"
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             ],
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "lv_name": "ceph_lv0",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "lv_size": "21470642176",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "name": "ceph_lv0",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "tags": {
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.cluster_name": "ceph",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.crush_device_class": "",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.encrypted": "0",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.osd_id": "0",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.type": "block",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.vdo": "0"
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             },
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "type": "block",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "vg_name": "ceph_vg0"
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:         }
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:     ],
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:     "1": [
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:         {
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "devices": [
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "/dev/loop4"
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             ],
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "lv_name": "ceph_lv1",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "lv_size": "21470642176",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "name": "ceph_lv1",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "tags": {
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.cluster_name": "ceph",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.crush_device_class": "",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.encrypted": "0",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.osd_id": "1",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.type": "block",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.vdo": "0"
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             },
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "type": "block",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "vg_name": "ceph_vg1"
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:         }
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:     ],
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:     "2": [
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:         {
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "devices": [
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "/dev/loop5"
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             ],
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "lv_name": "ceph_lv2",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "lv_size": "21470642176",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "name": "ceph_lv2",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "tags": {
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.cluster_name": "ceph",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.crush_device_class": "",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.encrypted": "0",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.osd_id": "2",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.type": "block",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:                 "ceph.vdo": "0"
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             },
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "type": "block",
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:             "vg_name": "ceph_vg2"
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:         }
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]:     ]
Oct 14 09:31:06 compute-0 trusting_rhodes[400600]: }
Oct 14 09:31:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.523 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2e1f5623-ca9f-4b5e-aa6c-30caea7136b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:06 compute-0 systemd[1]: libpod-43e6035e760803e9d64f79cf7869c20169367529265f520481e8b5b82f18abc7.scope: Deactivated successfully.
Oct 14 09:31:06 compute-0 podman[400583]: 2025-10-14 09:31:06.534458039 +0000 UTC m=+0.986707835 container died 43e6035e760803e9d64f79cf7869c20169367529265f520481e8b5b82f18abc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rhodes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 09:31:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.542 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[647b90d6-5758-45de-8de7-a74b1acc8094]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e05ecd2-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:c2:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801041, 'reachable_time': 33084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400673, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-31bd641945c3846127286ebac3a2cf8830bea2d74d43151e4bed411827651092-merged.mount: Deactivated successfully.
Oct 14 09:31:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.565 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf024cf-1936-409c-b451-eba4175fe26a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e05ecd2-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 801053, 'tstamp': 801053}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400683, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e05ecd2-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 801056, 'tstamp': 801056}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400683, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.566 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e05ecd2-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:06 compute-0 nova_compute[259627]: 2025-10-14 09:31:06.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.570 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e05ecd2-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.570 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:31:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.571 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e05ecd2-80, col_values=(('external_ids', {'iface-id': '4697e43c-b02d-4f27-aea8-a54cad6fa2da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.571 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:31:06 compute-0 podman[400583]: 2025-10-14 09:31:06.588578337 +0000 UTC m=+1.040828113 container remove 43e6035e760803e9d64f79cf7869c20169367529265f520481e8b5b82f18abc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 09:31:06 compute-0 systemd[1]: libpod-conmon-43e6035e760803e9d64f79cf7869c20169367529265f520481e8b5b82f18abc7.scope: Deactivated successfully.
Oct 14 09:31:06 compute-0 sudo[400458]: pam_unix(sudo:session): session closed for user root
Oct 14 09:31:06 compute-0 sudo[400690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:31:06 compute-0 sudo[400690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:31:06 compute-0 sudo[400690]: pam_unix(sudo:session): session closed for user root
Oct 14 09:31:06 compute-0 sudo[400715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:31:06 compute-0 sudo[400715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:31:06 compute-0 sudo[400715]: pam_unix(sudo:session): session closed for user root
Oct 14 09:31:06 compute-0 sudo[400740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:31:06 compute-0 sudo[400740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:31:06 compute-0 sudo[400740]: pam_unix(sudo:session): session closed for user root
Oct 14 09:31:06 compute-0 sudo[400765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:31:06 compute-0 sudo[400765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:31:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:07.047 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:07.047 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:07.048 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:07 compute-0 ceph-mon[74249]: pgmap v2339: 305 pgs: 305 active+clean; 213 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.6 MiB/s wr, 54 op/s
Oct 14 09:31:07 compute-0 podman[400871]: 2025-10-14 09:31:07.253378102 +0000 UTC m=+0.063850538 container create be7c03cd729dfad4bc331d6d9c93b9182a04daaac7b5a839b94629424bb61758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:31:07 compute-0 systemd[1]: Started libpod-conmon-be7c03cd729dfad4bc331d6d9c93b9182a04daaac7b5a839b94629424bb61758.scope.
Oct 14 09:31:07 compute-0 podman[400871]: 2025-10-14 09:31:07.2264126 +0000 UTC m=+0.036885096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:31:07 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:31:07 compute-0 podman[400871]: 2025-10-14 09:31:07.338627824 +0000 UTC m=+0.149100330 container init be7c03cd729dfad4bc331d6d9c93b9182a04daaac7b5a839b94629424bb61758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 09:31:07 compute-0 podman[400871]: 2025-10-14 09:31:07.352569796 +0000 UTC m=+0.163042252 container start be7c03cd729dfad4bc331d6d9c93b9182a04daaac7b5a839b94629424bb61758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_grothendieck, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:31:07 compute-0 podman[400871]: 2025-10-14 09:31:07.35680535 +0000 UTC m=+0.167277876 container attach be7c03cd729dfad4bc331d6d9c93b9182a04daaac7b5a839b94629424bb61758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 09:31:07 compute-0 optimistic_grothendieck[400887]: 167 167
Oct 14 09:31:07 compute-0 systemd[1]: libpod-be7c03cd729dfad4bc331d6d9c93b9182a04daaac7b5a839b94629424bb61758.scope: Deactivated successfully.
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.413 2 DEBUG nova.network.neutron [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updating instance_info_cache with network_info: [{"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:31:07 compute-0 podman[400892]: 2025-10-14 09:31:07.428668413 +0000 UTC m=+0.044840811 container died be7c03cd729dfad4bc331d6d9c93b9182a04daaac7b5a839b94629424bb61758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.437 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.438 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Instance network_info: |[{"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": 
null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.439 2 DEBUG oslo_concurrency.lockutils [req-c13e2a66-d859-40f3-9f89-f453f9a07763 req-c1e16193-5d7a-4df3-9a79-2579d09dca62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.440 2 DEBUG nova.network.neutron [req-c13e2a66-d859-40f3-9f89-f453f9a07763 req-c1e16193-5d7a-4df3-9a79-2579d09dca62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Refreshing network info cache for port c22b3ec2-f5a1-4c97-8648-a463e9e12545 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.447 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Start _get_guest_xml network_info=[{"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.454 2 WARNING nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:31:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2e2815a89b83457a3087176cc1d5ba8533ddeb1506f0a6ac1a70785f78eb1e4-merged.mount: Deactivated successfully.
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.466 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434267.4642997, 4ff65022-c3e1-4ee6-b866-7892555ef52f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.466 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] VM Started (Lifecycle Event)
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.474 2 DEBUG nova.virt.libvirt.host [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:31:07 compute-0 podman[400892]: 2025-10-14 09:31:07.47456862 +0000 UTC m=+0.090740938 container remove be7c03cd729dfad4bc331d6d9c93b9182a04daaac7b5a839b94629424bb61758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.476 2 DEBUG nova.virt.libvirt.host [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.480 2 DEBUG nova.virt.libvirt.host [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.480 2 DEBUG nova.virt.libvirt.host [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.481 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.481 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.481 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.482 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.482 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.482 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:31:07 compute-0 systemd[1]: libpod-conmon-be7c03cd729dfad4bc331d6d9c93b9182a04daaac7b5a839b94629424bb61758.scope: Deactivated successfully.
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.482 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.484 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.484 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.485 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.485 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.485 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.489 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.534 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.538 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434267.466683, 4ff65022-c3e1-4ee6-b866-7892555ef52f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.538 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] VM Paused (Lifecycle Event)
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.617 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.624 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.652 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:31:07 compute-0 podman[400934]: 2025-10-14 09:31:07.771790964 +0000 UTC m=+0.085695134 container create f04e1d883f6341f4be6a13cd99518fc6ff48e6ef758030c88977424717e78b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 09:31:07 compute-0 podman[400934]: 2025-10-14 09:31:07.744001612 +0000 UTC m=+0.057905772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:31:07 compute-0 systemd[1]: Started libpod-conmon-f04e1d883f6341f4be6a13cd99518fc6ff48e6ef758030c88977424717e78b2f.scope.
Oct 14 09:31:07 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734ff42ea0cf2cf98385b44a8ef91d5c7a49a895419dcee04578bcfa4fd0b106/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734ff42ea0cf2cf98385b44a8ef91d5c7a49a895419dcee04578bcfa4fd0b106/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734ff42ea0cf2cf98385b44a8ef91d5c7a49a895419dcee04578bcfa4fd0b106/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734ff42ea0cf2cf98385b44a8ef91d5c7a49a895419dcee04578bcfa4fd0b106/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.879 2 DEBUG nova.compute.manager [req-737e3a1b-388b-4730-b890-2dc339083446 req-300a7c15-41fc-46a8-87fc-bb5e7ee068d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received event network-vif-plugged-20f15fae-2789-43f5-8ca3-2a412dba5625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.880 2 DEBUG oslo_concurrency.lockutils [req-737e3a1b-388b-4730-b890-2dc339083446 req-300a7c15-41fc-46a8-87fc-bb5e7ee068d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.881 2 DEBUG oslo_concurrency.lockutils [req-737e3a1b-388b-4730-b890-2dc339083446 req-300a7c15-41fc-46a8-87fc-bb5e7ee068d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.881 2 DEBUG oslo_concurrency.lockutils [req-737e3a1b-388b-4730-b890-2dc339083446 req-300a7c15-41fc-46a8-87fc-bb5e7ee068d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.882 2 DEBUG nova.compute.manager [req-737e3a1b-388b-4730-b890-2dc339083446 req-300a7c15-41fc-46a8-87fc-bb5e7ee068d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Processing event network-vif-plugged-20f15fae-2789-43f5-8ca3-2a412dba5625 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.883 2 DEBUG nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:31:07 compute-0 podman[400934]: 2025-10-14 09:31:07.884020838 +0000 UTC m=+0.197925058 container init f04e1d883f6341f4be6a13cd99518fc6ff48e6ef758030c88977424717e78b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.889 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434267.8887336, 4ff65022-c3e1-4ee6-b866-7892555ef52f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.889 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] VM Resumed (Lifecycle Event)
Oct 14 09:31:07 compute-0 podman[400934]: 2025-10-14 09:31:07.893670385 +0000 UTC m=+0.207574515 container start f04e1d883f6341f4be6a13cd99518fc6ff48e6ef758030c88977424717e78b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.895 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:31:07 compute-0 podman[400934]: 2025-10-14 09:31:07.897581861 +0000 UTC m=+0.211486031 container attach f04e1d883f6341f4be6a13cd99518fc6ff48e6ef758030c88977424717e78b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.900 2 INFO nova.virt.libvirt.driver [-] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Instance spawned successfully.
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.902 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.910 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.930 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.939 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.940 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.940 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.941 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.942 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.942 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:31:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:31:07 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2239780367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.966 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:31:07 compute-0 nova_compute[259627]: 2025-10-14 09:31:07.982 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.008 2 DEBUG nova.storage.rbd_utils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image b30a994a-5fb7-4344-9944-98d3d75d3b04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.013 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.065 2 INFO nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Took 8.64 seconds to spawn the instance on the hypervisor.
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.065 2 DEBUG nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.124 2 INFO nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Took 10.52 seconds to build instance.
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.141 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:08 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2239780367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:31:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 213 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 14 09:31:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:31:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3826269457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.469 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.470 2 DEBUG nova.virt.libvirt.vif [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1494982542',display_name='tempest-TestGettingAddress-server-1494982542',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1494982542',id=137,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-euyh2n7t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:30:58Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=b30a994a-5fb7-4344-9944-98d3d75d3b04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.471 2 DEBUG nova.network.os_vif_util [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.472 2 DEBUG nova.network.os_vif_util [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:1e:d0,bridge_name='br-int',has_traffic_filtering=True,id=f2639397-8fb2-4541-a298-fd68219e1e47,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2639397-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.473 2 DEBUG nova.virt.libvirt.vif [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1494982542',display_name='tempest-TestGettingAddress-server-1494982542',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1494982542',id=137,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-euyh2n7t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:30:58Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=b30a994a-5fb7-4344-9944-98d3d75d3b04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.473 2 DEBUG nova.network.os_vif_util [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.474 2 DEBUG nova.network.os_vif_util [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:84:75,bridge_name='br-int',has_traffic_filtering=True,id=c22b3ec2-f5a1-4c97-8648-a463e9e12545,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc22b3ec2-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.475 2 DEBUG nova.objects.instance [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid b30a994a-5fb7-4344-9944-98d3d75d3b04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.490 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:31:08 compute-0 nova_compute[259627]:   <uuid>b30a994a-5fb7-4344-9944-98d3d75d3b04</uuid>
Oct 14 09:31:08 compute-0 nova_compute[259627]:   <name>instance-00000089</name>
Oct 14 09:31:08 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:31:08 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:31:08 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <nova:name>tempest-TestGettingAddress-server-1494982542</nova:name>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:31:07</nova:creationTime>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:31:08 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:31:08 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:31:08 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:31:08 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:31:08 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:31:08 compute-0 nova_compute[259627]:         <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 09:31:08 compute-0 nova_compute[259627]:         <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:31:08 compute-0 nova_compute[259627]:         <nova:port uuid="f2639397-8fb2-4541-a298-fd68219e1e47">
Oct 14 09:31:08 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:31:08 compute-0 nova_compute[259627]:         <nova:port uuid="c22b3ec2-f5a1-4c97-8648-a463e9e12545">
Oct 14 09:31:08 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fecf:8475" ipVersion="6"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:31:08 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:31:08 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <system>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <entry name="serial">b30a994a-5fb7-4344-9944-98d3d75d3b04</entry>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <entry name="uuid">b30a994a-5fb7-4344-9944-98d3d75d3b04</entry>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     </system>
Oct 14 09:31:08 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:31:08 compute-0 nova_compute[259627]:   <os>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:   </os>
Oct 14 09:31:08 compute-0 nova_compute[259627]:   <features>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:   </features>
Oct 14 09:31:08 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:31:08 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:31:08 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/b30a994a-5fb7-4344-9944-98d3d75d3b04_disk">
Oct 14 09:31:08 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       </source>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:31:08 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/b30a994a-5fb7-4344-9944-98d3d75d3b04_disk.config">
Oct 14 09:31:08 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       </source>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:31:08 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:a4:1e:d0"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <target dev="tapf2639397-8f"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:cf:84:75"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <target dev="tapc22b3ec2-f5"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04/console.log" append="off"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <video>
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     </video>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:31:08 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:31:08 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:31:08 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:31:08 compute-0 nova_compute[259627]: </domain>
Oct 14 09:31:08 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.492 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Preparing to wait for external event network-vif-plugged-f2639397-8fb2-4541-a298-fd68219e1e47 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.492 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.493 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.493 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.493 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Preparing to wait for external event network-vif-plugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.494 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.494 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.494 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.495 2 DEBUG nova.virt.libvirt.vif [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1494982542',display_name='tempest-TestGettingAddress-server-1494982542',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1494982542',id=137,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-euyh2n7t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:30:58Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=b30a994a-5fb7-4344-9944-98d3d75d3b04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.495 2 DEBUG nova.network.os_vif_util [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.496 2 DEBUG nova.network.os_vif_util [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:1e:d0,bridge_name='br-int',has_traffic_filtering=True,id=f2639397-8fb2-4541-a298-fd68219e1e47,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2639397-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.496 2 DEBUG os_vif [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:1e:d0,bridge_name='br-int',has_traffic_filtering=True,id=f2639397-8fb2-4541-a298-fd68219e1e47,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2639397-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.498 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.498 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2639397-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.503 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf2639397-8f, col_values=(('external_ids', {'iface-id': 'f2639397-8fb2-4541-a298-fd68219e1e47', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:1e:d0', 'vm-uuid': 'b30a994a-5fb7-4344-9944-98d3d75d3b04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:08 compute-0 NetworkManager[44885]: <info>  [1760434268.5055] manager: (tapf2639397-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/597)
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.515 2 INFO os_vif [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:1e:d0,bridge_name='br-int',has_traffic_filtering=True,id=f2639397-8fb2-4541-a298-fd68219e1e47,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2639397-8f')
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.516 2 DEBUG nova.virt.libvirt.vif [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1494982542',display_name='tempest-TestGettingAddress-server-1494982542',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1494982542',id=137,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-euyh2n7t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:30:58Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=b30a994a-5fb7-4344-9944-98d3d75d3b04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.516 2 DEBUG nova.network.os_vif_util [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.517 2 DEBUG nova.network.os_vif_util [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:84:75,bridge_name='br-int',has_traffic_filtering=True,id=c22b3ec2-f5a1-4c97-8648-a463e9e12545,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc22b3ec2-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.518 2 DEBUG os_vif [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:84:75,bridge_name='br-int',has_traffic_filtering=True,id=c22b3ec2-f5a1-4c97-8648-a463e9e12545,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc22b3ec2-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.518 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.519 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.522 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc22b3ec2-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.523 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc22b3ec2-f5, col_values=(('external_ids', {'iface-id': 'c22b3ec2-f5a1-4c97-8648-a463e9e12545', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:84:75', 'vm-uuid': 'b30a994a-5fb7-4344-9944-98d3d75d3b04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:08 compute-0 NetworkManager[44885]: <info>  [1760434268.5254] manager: (tapc22b3ec2-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/598)
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.533 2 INFO os_vif [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:84:75,bridge_name='br-int',has_traffic_filtering=True,id=c22b3ec2-f5a1-4c97-8648-a463e9e12545,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc22b3ec2-f5')
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.602 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.603 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.603 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:a4:1e:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.603 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:cf:84:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.604 2 INFO nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Using config drive
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.635 2 DEBUG nova.storage.rbd_utils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image b30a994a-5fb7-4344-9944-98d3d75d3b04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:31:08 compute-0 naughty_gould[400951]: {
Oct 14 09:31:08 compute-0 naughty_gould[400951]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:31:08 compute-0 naughty_gould[400951]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:31:08 compute-0 naughty_gould[400951]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:31:08 compute-0 naughty_gould[400951]:         "osd_id": 2,
Oct 14 09:31:08 compute-0 naughty_gould[400951]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:31:08 compute-0 naughty_gould[400951]:         "type": "bluestore"
Oct 14 09:31:08 compute-0 naughty_gould[400951]:     },
Oct 14 09:31:08 compute-0 naughty_gould[400951]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:31:08 compute-0 naughty_gould[400951]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:31:08 compute-0 naughty_gould[400951]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:31:08 compute-0 naughty_gould[400951]:         "osd_id": 1,
Oct 14 09:31:08 compute-0 naughty_gould[400951]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:31:08 compute-0 naughty_gould[400951]:         "type": "bluestore"
Oct 14 09:31:08 compute-0 naughty_gould[400951]:     },
Oct 14 09:31:08 compute-0 naughty_gould[400951]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:31:08 compute-0 naughty_gould[400951]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:31:08 compute-0 naughty_gould[400951]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:31:08 compute-0 naughty_gould[400951]:         "osd_id": 0,
Oct 14 09:31:08 compute-0 naughty_gould[400951]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:31:08 compute-0 naughty_gould[400951]:         "type": "bluestore"
Oct 14 09:31:08 compute-0 naughty_gould[400951]:     }
Oct 14 09:31:08 compute-0 naughty_gould[400951]: }
Oct 14 09:31:08 compute-0 systemd[1]: libpod-f04e1d883f6341f4be6a13cd99518fc6ff48e6ef758030c88977424717e78b2f.scope: Deactivated successfully.
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.873 2 DEBUG nova.network.neutron [req-c13e2a66-d859-40f3-9f89-f453f9a07763 req-c1e16193-5d7a-4df3-9a79-2579d09dca62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updated VIF entry in instance network info cache for port c22b3ec2-f5a1-4c97-8648-a463e9e12545. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.874 2 DEBUG nova.network.neutron [req-c13e2a66-d859-40f3-9f89-f453f9a07763 req-c1e16193-5d7a-4df3-9a79-2579d09dca62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updating instance_info_cache with network_info: [{"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:31:08 compute-0 podman[401049]: 2025-10-14 09:31:08.883932936 +0000 UTC m=+0.029460884 container died f04e1d883f6341f4be6a13cd99518fc6ff48e6ef758030c88977424717e78b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.892 2 DEBUG oslo_concurrency.lockutils [req-c13e2a66-d859-40f3-9f89-f453f9a07763 req-c1e16193-5d7a-4df3-9a79-2579d09dca62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:31:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-734ff42ea0cf2cf98385b44a8ef91d5c7a49a895419dcee04578bcfa4fd0b106-merged.mount: Deactivated successfully.
Oct 14 09:31:08 compute-0 podman[401049]: 2025-10-14 09:31:08.960753781 +0000 UTC m=+0.106281729 container remove f04e1d883f6341f4be6a13cd99518fc6ff48e6ef758030c88977424717e78b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:31:08 compute-0 systemd[1]: libpod-conmon-f04e1d883f6341f4be6a13cd99518fc6ff48e6ef758030c88977424717e78b2f.scope: Deactivated successfully.
Oct 14 09:31:08 compute-0 nova_compute[259627]: 2025-10-14 09:31:08.997 2 INFO nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Creating config drive at /var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04/disk.config
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.004 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0xyf95q_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:09 compute-0 sudo[400765]: pam_unix(sudo:session): session closed for user root
Oct 14 09:31:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:31:09 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:31:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:31:09 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:31:09 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 661d9b53-51ab-4eae-b45d-139a965febd8 does not exist
Oct 14 09:31:09 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev a1c03be6-192b-4ca1-979d-11d710b6f3c3 does not exist
Oct 14 09:31:09 compute-0 sudo[401067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:31:09 compute-0 sudo[401067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:31:09 compute-0 sudo[401067]: pam_unix(sudo:session): session closed for user root
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.154 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0xyf95q_" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:09 compute-0 ceph-mon[74249]: pgmap v2340: 305 pgs: 305 active+clean; 213 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 14 09:31:09 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3826269457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:31:09 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:31:09 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:31:09 compute-0 sudo[401092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.189 2 DEBUG nova.storage.rbd_utils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image b30a994a-5fb7-4344-9944-98d3d75d3b04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:31:09 compute-0 sudo[401092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.193 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04/disk.config b30a994a-5fb7-4344-9944-98d3d75d3b04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:09 compute-0 sudo[401092]: pam_unix(sudo:session): session closed for user root
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.377 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04/disk.config b30a994a-5fb7-4344-9944-98d3d75d3b04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.378 2 INFO nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Deleting local config drive /var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04/disk.config because it was imported into RBD.
Oct 14 09:31:09 compute-0 kernel: tapf2639397-8f: entered promiscuous mode
Oct 14 09:31:09 compute-0 NetworkManager[44885]: <info>  [1760434269.4290] manager: (tapf2639397-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/599)
Oct 14 09:31:09 compute-0 ovn_controller[152662]: 2025-10-14T09:31:09Z|01468|binding|INFO|Claiming lport f2639397-8fb2-4541-a298-fd68219e1e47 for this chassis.
Oct 14 09:31:09 compute-0 ovn_controller[152662]: 2025-10-14T09:31:09Z|01469|binding|INFO|f2639397-8fb2-4541-a298-fd68219e1e47: Claiming fa:16:3e:a4:1e:d0 10.100.0.12
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:09 compute-0 NetworkManager[44885]: <info>  [1760434269.4426] manager: (tapc22b3ec2-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/600)
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.446 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:1e:d0 10.100.0.12'], port_security=['fa:16:3e:a4:1e:d0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b30a994a-5fb7-4344-9944-98d3d75d3b04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c995d506-6f6f-4226-92d2-2bf5b9a23d7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f55c896-9e6e-44ae-a4b7-c1c60b86826e, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f2639397-8fb2-4541-a298-fd68219e1e47) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.447 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f2639397-8fb2-4541-a298-fd68219e1e47 in datapath 20dd724c-9d71-4931-8e8b-4dd3fbbacc17 bound to our chassis
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.448 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20dd724c-9d71-4931-8e8b-4dd3fbbacc17
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.459 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f205dda6-ed14-4b07-93be-9f17829d6aee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.460 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap20dd724c-91 in ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.461 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap20dd724c-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.461 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ac939309-e8cf-4a35-ba7c-349a5f0e4008]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.462 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d93e94ac-eddb-48e9-8d54-e5a2aacfcbb8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:09 compute-0 kernel: tapc22b3ec2-f5: entered promiscuous mode
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:09 compute-0 ovn_controller[152662]: 2025-10-14T09:31:09Z|01470|binding|INFO|Setting lport f2639397-8fb2-4541-a298-fd68219e1e47 ovn-installed in OVS
Oct 14 09:31:09 compute-0 ovn_controller[152662]: 2025-10-14T09:31:09Z|01471|binding|INFO|Setting lport f2639397-8fb2-4541-a298-fd68219e1e47 up in Southbound
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.482 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[acd45070-14b6-4907-bfc1-2ece6bc89aa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:09 compute-0 ovn_controller[152662]: 2025-10-14T09:31:09Z|01472|if_status|INFO|Not updating pb chassis for c22b3ec2-f5a1-4c97-8648-a463e9e12545 now as sb is readonly
Oct 14 09:31:09 compute-0 ovn_controller[152662]: 2025-10-14T09:31:09Z|01473|binding|INFO|Claiming lport c22b3ec2-f5a1-4c97-8648-a463e9e12545 for this chassis.
Oct 14 09:31:09 compute-0 ovn_controller[152662]: 2025-10-14T09:31:09Z|01474|binding|INFO|c22b3ec2-f5a1-4c97-8648-a463e9e12545: Claiming fa:16:3e:cf:84:75 2001:db8::f816:3eff:fecf:8475
Oct 14 09:31:09 compute-0 systemd-machined[214636]: New machine qemu-171-instance-00000089.
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.493 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:84:75 2001:db8::f816:3eff:fecf:8475'], port_security=['fa:16:3e:cf:84:75 2001:db8::f816:3eff:fecf:8475'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fecf:8475/64', 'neutron:device_id': 'b30a994a-5fb7-4344-9944-98d3d75d3b04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1dc63515-48cf-4886-956d-024d1d9cb848', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c995d506-6f6f-4226-92d2-2bf5b9a23d7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=29b23756-77cb-4493-bd13-0170877e81b9, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c22b3ec2-f5a1-4c97-8648-a463e9e12545) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:31:09 compute-0 systemd[1]: Started Virtual Machine qemu-171-instance-00000089.
Oct 14 09:31:09 compute-0 ovn_controller[152662]: 2025-10-14T09:31:09Z|01475|binding|INFO|Setting lport c22b3ec2-f5a1-4c97-8648-a463e9e12545 ovn-installed in OVS
Oct 14 09:31:09 compute-0 ovn_controller[152662]: 2025-10-14T09:31:09Z|01476|binding|INFO|Setting lport c22b3ec2-f5a1-4c97-8648-a463e9e12545 up in Southbound
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.508 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c55d9127-9bc2-4712-a156-e26d3d31c96e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:09 compute-0 systemd-udevd[401175]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:31:09 compute-0 systemd-udevd[401176]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.544 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ea3ee912-0d40-4ec2-ba8b-1abf61efd42b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:09 compute-0 NetworkManager[44885]: <info>  [1760434269.5480] device (tapf2639397-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:31:09 compute-0 NetworkManager[44885]: <info>  [1760434269.5488] device (tapc22b3ec2-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:31:09 compute-0 NetworkManager[44885]: <info>  [1760434269.5496] device (tapf2639397-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:31:09 compute-0 NetworkManager[44885]: <info>  [1760434269.5500] device (tapc22b3ec2-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:31:09 compute-0 NetworkManager[44885]: <info>  [1760434269.5508] manager: (tap20dd724c-90): new Veth device (/org/freedesktop/NetworkManager/Devices/601)
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.550 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0833f736-8a71-4721-923c-4a9da1020c85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.591 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a84ef928-a20a-4a74-a66d-9b83f07cec69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.594 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3a251a86-4e47-42d3-9a7e-6b52ffa6f2f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:09 compute-0 NetworkManager[44885]: <info>  [1760434269.6134] device (tap20dd724c-90): carrier: link connected
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.618 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8370e306-ef4d-413b-9cb3-4e19de94b2f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.637 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[082e0782-eef9-409d-a09c-7e7f4bb90758]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20dd724c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:07:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804837, 'reachable_time': 24485, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401204, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.660 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[125a53c7-45dd-497c-9a88-45282426247a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe53:7a8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804837, 'tstamp': 804837}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401205, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.683 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8e795851-a8b8-4b18-bd25-f8f71fa093d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20dd724c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:07:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804837, 'reachable_time': 24485, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 401206, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.727 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e31a3a2f-90b5-4217-96cb-e689edfac604]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.805 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd31d99-fe3b-4f99-a1a7-e51831791438]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.808 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20dd724c-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.808 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.809 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20dd724c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:09 compute-0 NetworkManager[44885]: <info>  [1760434269.8127] manager: (tap20dd724c-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/602)
Oct 14 09:31:09 compute-0 kernel: tap20dd724c-90: entered promiscuous mode
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.817 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20dd724c-90, col_values=(('external_ids', {'iface-id': '1308be16-f790-4063-acf0-2c8f6fdde665'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:09 compute-0 ovn_controller[152662]: 2025-10-14T09:31:09Z|01477|binding|INFO|Releasing lport 1308be16-f790-4063-acf0-2c8f6fdde665 from this chassis (sb_readonly=0)
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.852 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/20dd724c-9d71-4931-8e8b-4dd3fbbacc17.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/20dd724c-9d71-4931-8e8b-4dd3fbbacc17.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.853 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2a075f4a-12c3-45c6-bae1-d4e950a5a919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.854 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-20dd724c-9d71-4931-8e8b-4dd3fbbacc17
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/20dd724c-9d71-4931-8e8b-4dd3fbbacc17.pid.haproxy
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 20dd724c-9d71-4931-8e8b-4dd3fbbacc17
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:31:09 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.855 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'env', 'PROCESS_TAG=haproxy-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/20dd724c-9d71-4931-8e8b-4dd3fbbacc17.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.880 2 DEBUG nova.compute.manager [req-f40ce521-365a-45e3-90c9-991255452a8d req-3ac7c6aa-fde5-41da-be65-bb422dfa0f4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-plugged-f2639397-8fb2-4541-a298-fd68219e1e47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.881 2 DEBUG oslo_concurrency.lockutils [req-f40ce521-365a-45e3-90c9-991255452a8d req-3ac7c6aa-fde5-41da-be65-bb422dfa0f4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.881 2 DEBUG oslo_concurrency.lockutils [req-f40ce521-365a-45e3-90c9-991255452a8d req-3ac7c6aa-fde5-41da-be65-bb422dfa0f4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.881 2 DEBUG oslo_concurrency.lockutils [req-f40ce521-365a-45e3-90c9-991255452a8d req-3ac7c6aa-fde5-41da-be65-bb422dfa0f4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.882 2 DEBUG nova.compute.manager [req-f40ce521-365a-45e3-90c9-991255452a8d req-3ac7c6aa-fde5-41da-be65-bb422dfa0f4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Processing event network-vif-plugged-f2639397-8fb2-4541-a298-fd68219e1e47 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.974 2 DEBUG nova.compute.manager [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received event network-vif-plugged-20f15fae-2789-43f5-8ca3-2a412dba5625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.975 2 DEBUG oslo_concurrency.lockutils [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.975 2 DEBUG oslo_concurrency.lockutils [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.976 2 DEBUG oslo_concurrency.lockutils [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.976 2 DEBUG nova.compute.manager [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] No waiting events found dispatching network-vif-plugged-20f15fae-2789-43f5-8ca3-2a412dba5625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.976 2 WARNING nova.compute.manager [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received unexpected event network-vif-plugged-20f15fae-2789-43f5-8ca3-2a412dba5625 for instance with vm_state active and task_state None.
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.977 2 DEBUG nova.compute.manager [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-plugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.977 2 DEBUG oslo_concurrency.lockutils [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.978 2 DEBUG oslo_concurrency.lockutils [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.978 2 DEBUG oslo_concurrency.lockutils [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.978 2 DEBUG nova.compute.manager [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Processing event network-vif-plugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.979 2 DEBUG nova.compute.manager [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-plugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.979 2 DEBUG oslo_concurrency.lockutils [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.980 2 DEBUG oslo_concurrency.lockutils [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.980 2 DEBUG oslo_concurrency.lockutils [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.980 2 DEBUG nova.compute.manager [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] No waiting events found dispatching network-vif-plugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:31:09 compute-0 nova_compute[259627]: 2025-10-14 09:31:09.981 2 WARNING nova.compute.manager [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received unexpected event network-vif-plugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 for instance with vm_state building and task_state spawning.
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.322 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.323 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434270.3223557, b30a994a-5fb7-4344-9944-98d3d75d3b04 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.323 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] VM Started (Lifecycle Event)
Oct 14 09:31:10 compute-0 podman[401281]: 2025-10-14 09:31:10.322346744 +0000 UTC m=+0.061017719 container create 2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.336 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.339 2 INFO nova.virt.libvirt.driver [-] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Instance spawned successfully.
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.339 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.345 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.348 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.359 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.359 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.360 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.360 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.361 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.361 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.366 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.366 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434270.3256197, b30a994a-5fb7-4344-9944-98d3d75d3b04 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.366 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] VM Paused (Lifecycle Event)
Oct 14 09:31:10 compute-0 podman[401281]: 2025-10-14 09:31:10.286878253 +0000 UTC m=+0.025549238 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:31:10 compute-0 systemd[1]: Started libpod-conmon-2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406.scope.
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.397 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.403 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434270.3356943, b30a994a-5fb7-4344-9944-98d3d75d3b04 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.403 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] VM Resumed (Lifecycle Event)
Oct 14 09:31:10 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b3a221f83799bd67fc58856adc1cff01fbd3fa4b8be45bf1363f70f768eb22a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.427 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.436 2 INFO nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Took 11.83 seconds to spawn the instance on the hypervisor.
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.436 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.438 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:31:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 793 KiB/s rd, 3.6 MiB/s wr, 89 op/s
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.457 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:10 compute-0 podman[401281]: 2025-10-14 09:31:10.463743084 +0000 UTC m=+0.202414069 container init 2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 09:31:10 compute-0 podman[401281]: 2025-10-14 09:31:10.47175686 +0000 UTC m=+0.210427835 container start 2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 09:31:10 compute-0 neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17[401296]: [NOTICE]   (401300) : New worker (401302) forked
Oct 14 09:31:10 compute-0 neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17[401296]: [NOTICE]   (401300) : Loading success.
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.535 2 INFO nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Took 13.14 seconds to build instance.
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.548 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c22b3ec2-f5a1-4c97-8648-a463e9e12545 in datapath 1dc63515-48cf-4886-956d-024d1d9cb848 unbound from our chassis
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.550 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1dc63515-48cf-4886-956d-024d1d9cb848
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.552 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.570 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[22171076-d596-4309-9bf8-8c6749a9161d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.571 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1dc63515-41 in ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.574 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1dc63515-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.574 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[58f65509-00a7-4b6c-82c5-8c801cc1d3ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.581 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[799df613-5cdb-43f7-99cf-a11768659f09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.594 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[c9be0c75-d292-4360-883d-e10e31d781d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.615 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[244de2da-7b44-4715-aeed-2511dc515a12]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.655 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[05be8ed3-baf1-44f3-b111-85cd57611cbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.666 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[545d3984-4005-4065-8369-9f724b50a346]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:10 compute-0 NetworkManager[44885]: <info>  [1760434270.6676] manager: (tap1dc63515-40): new Veth device (/org/freedesktop/NetworkManager/Devices/603)
Oct 14 09:31:10 compute-0 systemd-udevd[401187]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.703 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[95568699-e527-4d63-8831-b1cc3233db3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.706 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[da48852e-e5c1-4175-8ff6-1f0125deb873]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:10 compute-0 NetworkManager[44885]: <info>  [1760434270.7308] device (tap1dc63515-40): carrier: link connected
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.738 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb50297-016b-464f-a554-8ce4e0ca4cad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.762 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f45696d6-8f32-484b-a883-b6c1ed2a4dd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1dc63515-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:e0:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804949, 'reachable_time': 40867, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401321, 'error': None, 'target': 'ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.787 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4e11c013-ef67-4512-bef3-32c38f2b0087]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea6:e0fb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804949, 'tstamp': 804949}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401322, 'error': None, 'target': 'ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.813 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6fe2130-e9cb-4f6c-ad9b-c49cc023acd7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1dc63515-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:e0:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804949, 'reachable_time': 40867, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 401323, 'error': None, 'target': 'ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.853 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[db9b7f0c-9fbc-49ae-a441-fc3618128393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.901 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9e9cdb8f-1363-48da-a302-2528c12e5673]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.902 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1dc63515-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.902 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.903 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1dc63515-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:10 compute-0 kernel: tap1dc63515-40: entered promiscuous mode
Oct 14 09:31:10 compute-0 NetworkManager[44885]: <info>  [1760434270.9049] manager: (tap1dc63515-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/604)
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.909 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1dc63515-40, col_values=(('external_ids', {'iface-id': '9754cafe-7819-456f-943e-9907e7f07233'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:10 compute-0 ovn_controller[152662]: 2025-10-14T09:31:10Z|01478|binding|INFO|Releasing lport 9754cafe-7819-456f-943e-9907e7f07233 from this chassis (sb_readonly=0)
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:10 compute-0 nova_compute[259627]: 2025-10-14 09:31:10.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.928 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1dc63515-48cf-4886-956d-024d1d9cb848.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1dc63515-48cf-4886-956d-024d1d9cb848.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.928 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b1ec84-4808-4655-9de6-895323dc9423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.929 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-1dc63515-48cf-4886-956d-024d1d9cb848
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/1dc63515-48cf-4886-956d-024d1d9cb848.pid.haproxy
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 1dc63515-48cf-4886-956d-024d1d9cb848
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:31:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.930 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848', 'env', 'PROCESS_TAG=haproxy-1dc63515-48cf-4886-956d-024d1d9cb848', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1dc63515-48cf-4886-956d-024d1d9cb848.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:31:11 compute-0 podman[401354]: 2025-10-14 09:31:11.32953463 +0000 UTC m=+0.070867550 container create f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 09:31:11 compute-0 systemd[1]: Started libpod-conmon-f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a.scope.
Oct 14 09:31:11 compute-0 podman[401354]: 2025-10-14 09:31:11.295151156 +0000 UTC m=+0.036484166 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:31:11 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5feeed86f65fad647b0e405deecf168013385c50e6fadd5dfe122e086c585633/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:31:11 compute-0 podman[401354]: 2025-10-14 09:31:11.425867294 +0000 UTC m=+0.167200214 container init f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS)
Oct 14 09:31:11 compute-0 podman[401354]: 2025-10-14 09:31:11.43301444 +0000 UTC m=+0.174347360 container start f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:31:11 compute-0 neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848[401369]: [NOTICE]   (401373) : New worker (401375) forked
Oct 14 09:31:11 compute-0 neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848[401369]: [NOTICE]   (401373) : Loading success.
Oct 14 09:31:11 compute-0 ceph-mon[74249]: pgmap v2341: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 793 KiB/s rd, 3.6 MiB/s wr, 89 op/s
Oct 14 09:31:12 compute-0 nova_compute[259627]: 2025-10-14 09:31:12.145 2 DEBUG nova.compute.manager [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-plugged-f2639397-8fb2-4541-a298-fd68219e1e47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:12 compute-0 nova_compute[259627]: 2025-10-14 09:31:12.145 2 DEBUG oslo_concurrency.lockutils [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:12 compute-0 nova_compute[259627]: 2025-10-14 09:31:12.146 2 DEBUG oslo_concurrency.lockutils [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:12 compute-0 nova_compute[259627]: 2025-10-14 09:31:12.146 2 DEBUG oslo_concurrency.lockutils [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:12 compute-0 nova_compute[259627]: 2025-10-14 09:31:12.146 2 DEBUG nova.compute.manager [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] No waiting events found dispatching network-vif-plugged-f2639397-8fb2-4541-a298-fd68219e1e47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:31:12 compute-0 nova_compute[259627]: 2025-10-14 09:31:12.147 2 WARNING nova.compute.manager [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received unexpected event network-vif-plugged-f2639397-8fb2-4541-a298-fd68219e1e47 for instance with vm_state active and task_state None.
Oct 14 09:31:12 compute-0 nova_compute[259627]: 2025-10-14 09:31:12.147 2 DEBUG nova.compute.manager [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received event network-changed-20f15fae-2789-43f5-8ca3-2a412dba5625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:12 compute-0 nova_compute[259627]: 2025-10-14 09:31:12.147 2 DEBUG nova.compute.manager [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Refreshing instance network info cache due to event network-changed-20f15fae-2789-43f5-8ca3-2a412dba5625. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:31:12 compute-0 nova_compute[259627]: 2025-10-14 09:31:12.148 2 DEBUG oslo_concurrency.lockutils [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:31:12 compute-0 nova_compute[259627]: 2025-10-14 09:31:12.148 2 DEBUG oslo_concurrency.lockutils [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:31:12 compute-0 nova_compute[259627]: 2025-10-14 09:31:12.148 2 DEBUG nova.network.neutron [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Refreshing network info cache for port 20f15fae-2789-43f5-8ca3-2a412dba5625 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:31:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 129 op/s
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.523862) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434272523910, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1041, "num_deletes": 251, "total_data_size": 1407883, "memory_usage": 1428064, "flush_reason": "Manual Compaction"}
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434272534945, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1382355, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48765, "largest_seqno": 49805, "table_properties": {"data_size": 1377320, "index_size": 2495, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11204, "raw_average_key_size": 19, "raw_value_size": 1367173, "raw_average_value_size": 2419, "num_data_blocks": 111, "num_entries": 565, "num_filter_entries": 565, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434183, "oldest_key_time": 1760434183, "file_creation_time": 1760434272, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 11419 microseconds, and 6911 cpu microseconds.
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.535278) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1382355 bytes OK
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.535452) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.537365) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.537387) EVENT_LOG_v1 {"time_micros": 1760434272537380, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.537407) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1402961, prev total WAL file size 1402961, number of live WAL files 2.
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.539290) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1349KB)], [113(8479KB)]
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434272539330, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 10065386, "oldest_snapshot_seqno": -1}
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6973 keys, 8354813 bytes, temperature: kUnknown
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434272584776, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 8354813, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8310082, "index_size": 26187, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17477, "raw_key_size": 182015, "raw_average_key_size": 26, "raw_value_size": 8187163, "raw_average_value_size": 1174, "num_data_blocks": 1016, "num_entries": 6973, "num_filter_entries": 6973, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434272, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.585147) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 8354813 bytes
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.586487) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 220.6 rd, 183.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.3 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(13.3) write-amplify(6.0) OK, records in: 7487, records dropped: 514 output_compression: NoCompression
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.586503) EVENT_LOG_v1 {"time_micros": 1760434272586495, "job": 68, "event": "compaction_finished", "compaction_time_micros": 45633, "compaction_time_cpu_micros": 19262, "output_level": 6, "num_output_files": 1, "total_output_size": 8354813, "num_input_records": 7487, "num_output_records": 6973, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434272587158, "job": 68, "event": "table_file_deletion", "file_number": 115}
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434272588707, "job": 68, "event": "table_file_deletion", "file_number": 113}
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.539218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.588855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.588861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.588863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.588864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:31:12 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.588865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:31:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:31:13 compute-0 ceph-mon[74249]: pgmap v2342: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 129 op/s
Oct 14 09:31:13 compute-0 nova_compute[259627]: 2025-10-14 09:31:13.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:14 compute-0 nova_compute[259627]: 2025-10-14 09:31:14.063 2 DEBUG nova.network.neutron [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Updated VIF entry in instance network info cache for port 20f15fae-2789-43f5-8ca3-2a412dba5625. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:31:14 compute-0 nova_compute[259627]: 2025-10-14 09:31:14.063 2 DEBUG nova.network.neutron [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Updating instance_info_cache with network_info: [{"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:31:14 compute-0 nova_compute[259627]: 2025-10-14 09:31:14.098 2 DEBUG oslo_concurrency.lockutils [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:31:14 compute-0 nova_compute[259627]: 2025-10-14 09:31:14.215 2 DEBUG nova.compute.manager [req-94759158-b423-444d-91ed-ef085fda581f req-28fda04d-5a3f-4e01-b4dd-29e5e0d043e5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received event network-changed-20f15fae-2789-43f5-8ca3-2a412dba5625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:14 compute-0 nova_compute[259627]: 2025-10-14 09:31:14.215 2 DEBUG nova.compute.manager [req-94759158-b423-444d-91ed-ef085fda581f req-28fda04d-5a3f-4e01-b4dd-29e5e0d043e5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Refreshing instance network info cache due to event network-changed-20f15fae-2789-43f5-8ca3-2a412dba5625. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:31:14 compute-0 nova_compute[259627]: 2025-10-14 09:31:14.219 2 DEBUG oslo_concurrency.lockutils [req-94759158-b423-444d-91ed-ef085fda581f req-28fda04d-5a3f-4e01-b4dd-29e5e0d043e5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:31:14 compute-0 nova_compute[259627]: 2025-10-14 09:31:14.219 2 DEBUG oslo_concurrency.lockutils [req-94759158-b423-444d-91ed-ef085fda581f req-28fda04d-5a3f-4e01-b4dd-29e5e0d043e5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:31:14 compute-0 nova_compute[259627]: 2025-10-14 09:31:14.219 2 DEBUG nova.network.neutron [req-94759158-b423-444d-91ed-ef085fda581f req-28fda04d-5a3f-4e01-b4dd-29e5e0d043e5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Refreshing network info cache for port 20f15fae-2789-43f5-8ca3-2a412dba5625 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:31:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 90 op/s
Oct 14 09:31:14 compute-0 podman[401384]: 2025-10-14 09:31:14.645841212 +0000 UTC m=+0.065096119 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:31:14 compute-0 podman[401385]: 2025-10-14 09:31:14.655241413 +0000 UTC m=+0.060291731 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:31:15 compute-0 nova_compute[259627]: 2025-10-14 09:31:15.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:15 compute-0 ceph-mon[74249]: pgmap v2343: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 90 op/s
Oct 14 09:31:16 compute-0 nova_compute[259627]: 2025-10-14 09:31:16.027 2 DEBUG nova.network.neutron [req-94759158-b423-444d-91ed-ef085fda581f req-28fda04d-5a3f-4e01-b4dd-29e5e0d043e5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Updated VIF entry in instance network info cache for port 20f15fae-2789-43f5-8ca3-2a412dba5625. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:31:16 compute-0 nova_compute[259627]: 2025-10-14 09:31:16.028 2 DEBUG nova.network.neutron [req-94759158-b423-444d-91ed-ef085fda581f req-28fda04d-5a3f-4e01-b4dd-29e5e0d043e5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Updating instance_info_cache with network_info: [{"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:31:16 compute-0 nova_compute[259627]: 2025-10-14 09:31:16.048 2 DEBUG oslo_concurrency.lockutils [req-94759158-b423-444d-91ed-ef085fda581f req-28fda04d-5a3f-4e01-b4dd-29e5e0d043e5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:31:16 compute-0 nova_compute[259627]: 2025-10-14 09:31:16.267 2 DEBUG nova.compute.manager [req-3d641531-d239-44f3-a283-430f12381f77 req-cac5ab25-9b86-4b1a-b0a9-b549ff3d27da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-changed-f2639397-8fb2-4541-a298-fd68219e1e47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:16 compute-0 nova_compute[259627]: 2025-10-14 09:31:16.267 2 DEBUG nova.compute.manager [req-3d641531-d239-44f3-a283-430f12381f77 req-cac5ab25-9b86-4b1a-b0a9-b549ff3d27da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Refreshing instance network info cache due to event network-changed-f2639397-8fb2-4541-a298-fd68219e1e47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:31:16 compute-0 nova_compute[259627]: 2025-10-14 09:31:16.267 2 DEBUG oslo_concurrency.lockutils [req-3d641531-d239-44f3-a283-430f12381f77 req-cac5ab25-9b86-4b1a-b0a9-b549ff3d27da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:31:16 compute-0 nova_compute[259627]: 2025-10-14 09:31:16.268 2 DEBUG oslo_concurrency.lockutils [req-3d641531-d239-44f3-a283-430f12381f77 req-cac5ab25-9b86-4b1a-b0a9-b549ff3d27da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:31:16 compute-0 nova_compute[259627]: 2025-10-14 09:31:16.268 2 DEBUG nova.network.neutron [req-3d641531-d239-44f3-a283-430f12381f77 req-cac5ab25-9b86-4b1a-b0a9-b549ff3d27da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Refreshing network info cache for port f2639397-8fb2-4541-a298-fd68219e1e47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:31:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 155 op/s
Oct 14 09:31:17 compute-0 ceph-mon[74249]: pgmap v2344: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 155 op/s
Oct 14 09:31:18 compute-0 nova_compute[259627]: 2025-10-14 09:31:18.000 2 DEBUG nova.network.neutron [req-3d641531-d239-44f3-a283-430f12381f77 req-cac5ab25-9b86-4b1a-b0a9-b549ff3d27da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updated VIF entry in instance network info cache for port f2639397-8fb2-4541-a298-fd68219e1e47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:31:18 compute-0 nova_compute[259627]: 2025-10-14 09:31:18.000 2 DEBUG nova.network.neutron [req-3d641531-d239-44f3-a283-430f12381f77 req-cac5ab25-9b86-4b1a-b0a9-b549ff3d27da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updating instance_info_cache with network_info: [{"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:31:18 compute-0 nova_compute[259627]: 2025-10-14 09:31:18.021 2 DEBUG oslo_concurrency.lockutils [req-3d641531-d239-44f3-a283-430f12381f77 req-cac5ab25-9b86-4b1a-b0a9-b549ff3d27da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:31:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:31:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 148 op/s
Oct 14 09:31:18 compute-0 nova_compute[259627]: 2025-10-14 09:31:18.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:19 compute-0 ceph-mon[74249]: pgmap v2345: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 148 op/s
Oct 14 09:31:20 compute-0 ovn_controller[152662]: 2025-10-14T09:31:20Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:59:13:28 10.100.0.10
Oct 14 09:31:20 compute-0 ovn_controller[152662]: 2025-10-14T09:31:20Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:59:13:28 10.100.0.10
Oct 14 09:31:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 218 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 598 KiB/s wr, 164 op/s
Oct 14 09:31:20 compute-0 nova_compute[259627]: 2025-10-14 09:31:20.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:21 compute-0 ceph-mon[74249]: pgmap v2346: 305 pgs: 305 active+clean; 218 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 598 KiB/s wr, 164 op/s
Oct 14 09:31:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 239 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.0 MiB/s wr, 160 op/s
Oct 14 09:31:22 compute-0 ovn_controller[152662]: 2025-10-14T09:31:22Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a4:1e:d0 10.100.0.12
Oct 14 09:31:22 compute-0 ovn_controller[152662]: 2025-10-14T09:31:22Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a4:1e:d0 10.100.0.12
Oct 14 09:31:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:31:23 compute-0 ceph-mon[74249]: pgmap v2347: 305 pgs: 305 active+clean; 239 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.0 MiB/s wr, 160 op/s
Oct 14 09:31:23 compute-0 nova_compute[259627]: 2025-10-14 09:31:23.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 239 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 111 op/s
Oct 14 09:31:25 compute-0 nova_compute[259627]: 2025-10-14 09:31:25.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:25 compute-0 ceph-mon[74249]: pgmap v2348: 305 pgs: 305 active+clean; 239 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 111 op/s
Oct 14 09:31:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 190 op/s
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.640 2 DEBUG oslo_concurrency.lockutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4ff65022-c3e1-4ee6-b866-7892555ef52f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.641 2 DEBUG oslo_concurrency.lockutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.641 2 DEBUG oslo_concurrency.lockutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.642 2 DEBUG oslo_concurrency.lockutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.643 2 DEBUG oslo_concurrency.lockutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.645 2 INFO nova.compute.manager [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Terminating instance
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.647 2 DEBUG nova.compute.manager [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:31:26 compute-0 kernel: tap20f15fae-27 (unregistering): left promiscuous mode
Oct 14 09:31:26 compute-0 NetworkManager[44885]: <info>  [1760434286.7061] device (tap20f15fae-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:26 compute-0 ovn_controller[152662]: 2025-10-14T09:31:26Z|01479|binding|INFO|Releasing lport 20f15fae-2789-43f5-8ca3-2a412dba5625 from this chassis (sb_readonly=0)
Oct 14 09:31:26 compute-0 ovn_controller[152662]: 2025-10-14T09:31:26Z|01480|binding|INFO|Setting lport 20f15fae-2789-43f5-8ca3-2a412dba5625 down in Southbound
Oct 14 09:31:26 compute-0 ovn_controller[152662]: 2025-10-14T09:31:26Z|01481|binding|INFO|Removing iface tap20f15fae-27 ovn-installed in OVS
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.738 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:13:28 10.100.0.10', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4ff65022-c3e1-4ee6-b866-7892555ef52f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a89aaec7-5335-48cb-8b51-b7dcd7e1d5f5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=20f15fae-2789-43f5-8ca3-2a412dba5625) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:31:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.741 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 20f15fae-2789-43f5-8ca3-2a412dba5625 in datapath 6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a unbound from our chassis
Oct 14 09:31:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.745 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:26 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Oct 14 09:31:26 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008a.scope: Consumed 13.727s CPU time.
Oct 14 09:31:26 compute-0 systemd-machined[214636]: Machine qemu-170-instance-0000008a terminated.
Oct 14 09:31:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.764 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f9c2ac24-c59d-445e-a9f5-db025f818744]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.801 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ef6b078a-238e-45cd-8080-7d3c384668d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.804 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6bbc5c-4064-4338-adfa-545606295bb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.835 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1c194fdc-f4c2-41bc-86b2-9adc288e7f60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:26 compute-0 podman[401426]: 2025-10-14 09:31:26.847675032 +0000 UTC m=+0.091113537 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:31:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.851 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d80cd22e-5939-4cd1-9677-e44b929a7bb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e05ecd2-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:c2:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1670, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1670, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801041, 'reachable_time': 33084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 15, 'inoctets': 1208, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 15, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1208, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 15, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401469, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.872 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0047c4f6-80ad-494d-b3b8-715d99ca187c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e05ecd2-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 801053, 'tstamp': 801053}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401473, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e05ecd2-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 801056, 'tstamp': 801056}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401473, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.873 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e05ecd2-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.881 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e05ecd2-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.881 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:31:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.881 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e05ecd2-80, col_values=(('external_ids', {'iface-id': '4697e43c-b02d-4f27-aea8-a54cad6fa2da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.881 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.886 2 INFO nova.virt.libvirt.driver [-] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Instance destroyed successfully.
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.886 2 DEBUG nova.objects.instance [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'resources' on Instance uuid 4ff65022-c3e1-4ee6-b866-7892555ef52f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.902 2 DEBUG nova.virt.libvirt.vif [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-876162503',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-876162503',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ge',id=138,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEydbvfC6BL8JKAwPsMYw33jsGt7x8pmldi0JdcLroJxQM9Cv9y2WyT4/kdGyYCi1f6St8cX/paY9O5VNQjPEflw/+0a0KCs3SETvjwSZyH4RtpOVgJ2yOdUlm6DCl0mg==',key_name='tempest-TestSecurityGroupsBasicOps-55432425',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:31:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-69wk0x4o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:31:08Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=4ff65022-c3e1-4ee6-b866-7892555ef52f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.903 2 DEBUG nova.network.os_vif_util [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.903 2 DEBUG nova.network.os_vif_util [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:59:13:28,bridge_name='br-int',has_traffic_filtering=True,id=20f15fae-2789-43f5-8ca3-2a412dba5625,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f15fae-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.904 2 DEBUG os_vif [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:59:13:28,bridge_name='br-int',has_traffic_filtering=True,id=20f15fae-2789-43f5-8ca3-2a412dba5625,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f15fae-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.905 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20f15fae-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:26 compute-0 nova_compute[259627]: 2025-10-14 09:31:26.910 2 INFO os_vif [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:59:13:28,bridge_name='br-int',has_traffic_filtering=True,id=20f15fae-2789-43f5-8ca3-2a412dba5625,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f15fae-27')
Oct 14 09:31:26 compute-0 podman[401438]: 2025-10-14 09:31:26.912941214 +0000 UTC m=+0.116092770 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:31:27 compute-0 nova_compute[259627]: 2025-10-14 09:31:27.022 2 DEBUG nova.compute.manager [req-3081ddd7-04a4-4cdb-b06f-0090d35650d8 req-eed429d9-f3c8-4afa-8a46-e88f0a82e948 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received event network-vif-unplugged-20f15fae-2789-43f5-8ca3-2a412dba5625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:27 compute-0 nova_compute[259627]: 2025-10-14 09:31:27.023 2 DEBUG oslo_concurrency.lockutils [req-3081ddd7-04a4-4cdb-b06f-0090d35650d8 req-eed429d9-f3c8-4afa-8a46-e88f0a82e948 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:27 compute-0 nova_compute[259627]: 2025-10-14 09:31:27.023 2 DEBUG oslo_concurrency.lockutils [req-3081ddd7-04a4-4cdb-b06f-0090d35650d8 req-eed429d9-f3c8-4afa-8a46-e88f0a82e948 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:27 compute-0 nova_compute[259627]: 2025-10-14 09:31:27.023 2 DEBUG oslo_concurrency.lockutils [req-3081ddd7-04a4-4cdb-b06f-0090d35650d8 req-eed429d9-f3c8-4afa-8a46-e88f0a82e948 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:27 compute-0 nova_compute[259627]: 2025-10-14 09:31:27.024 2 DEBUG nova.compute.manager [req-3081ddd7-04a4-4cdb-b06f-0090d35650d8 req-eed429d9-f3c8-4afa-8a46-e88f0a82e948 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] No waiting events found dispatching network-vif-unplugged-20f15fae-2789-43f5-8ca3-2a412dba5625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:31:27 compute-0 nova_compute[259627]: 2025-10-14 09:31:27.024 2 DEBUG nova.compute.manager [req-3081ddd7-04a4-4cdb-b06f-0090d35650d8 req-eed429d9-f3c8-4afa-8a46-e88f0a82e948 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received event network-vif-unplugged-20f15fae-2789-43f5-8ca3-2a412dba5625 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:31:27 compute-0 nova_compute[259627]: 2025-10-14 09:31:27.273 2 INFO nova.virt.libvirt.driver [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Deleting instance files /var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f_del
Oct 14 09:31:27 compute-0 nova_compute[259627]: 2025-10-14 09:31:27.274 2 INFO nova.virt.libvirt.driver [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Deletion of /var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f_del complete
Oct 14 09:31:27 compute-0 nova_compute[259627]: 2025-10-14 09:31:27.369 2 INFO nova.compute.manager [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 14 09:31:27 compute-0 nova_compute[259627]: 2025-10-14 09:31:27.369 2 DEBUG oslo.service.loopingcall [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:31:27 compute-0 nova_compute[259627]: 2025-10-14 09:31:27.369 2 DEBUG nova.compute.manager [-] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:31:27 compute-0 nova_compute[259627]: 2025-10-14 09:31:27.369 2 DEBUG nova.network.neutron [-] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:31:27 compute-0 ceph-mon[74249]: pgmap v2349: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 190 op/s
Oct 14 09:31:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:31:28 compute-0 nova_compute[259627]: 2025-10-14 09:31:28.218 2 DEBUG nova.network.neutron [-] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:31:28 compute-0 nova_compute[259627]: 2025-10-14 09:31:28.238 2 INFO nova.compute.manager [-] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Took 0.87 seconds to deallocate network for instance.
Oct 14 09:31:28 compute-0 nova_compute[259627]: 2025-10-14 09:31:28.310 2 DEBUG oslo_concurrency.lockutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:28 compute-0 nova_compute[259627]: 2025-10-14 09:31:28.311 2 DEBUG oslo_concurrency.lockutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:28 compute-0 nova_compute[259627]: 2025-10-14 09:31:28.400 2 DEBUG oslo_concurrency.processutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Oct 14 09:31:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:31:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1431981852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:31:28 compute-0 nova_compute[259627]: 2025-10-14 09:31:28.862 2 DEBUG oslo_concurrency.processutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:28 compute-0 nova_compute[259627]: 2025-10-14 09:31:28.870 2 DEBUG nova.compute.provider_tree [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:31:28 compute-0 nova_compute[259627]: 2025-10-14 09:31:28.890 2 DEBUG nova.scheduler.client.report [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:31:28 compute-0 nova_compute[259627]: 2025-10-14 09:31:28.925 2 DEBUG oslo_concurrency.lockutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:28 compute-0 nova_compute[259627]: 2025-10-14 09:31:28.970 2 INFO nova.scheduler.client.report [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Deleted allocations for instance 4ff65022-c3e1-4ee6-b866-7892555ef52f
Oct 14 09:31:29 compute-0 nova_compute[259627]: 2025-10-14 09:31:29.078 2 DEBUG oslo_concurrency.lockutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:29 compute-0 nova_compute[259627]: 2025-10-14 09:31:29.134 2 DEBUG nova.compute.manager [req-2d33660f-1316-4def-8066-366f3584cd4f req-715af3c3-5836-4154-8990-5fa3e62df7c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received event network-vif-plugged-20f15fae-2789-43f5-8ca3-2a412dba5625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:29 compute-0 nova_compute[259627]: 2025-10-14 09:31:29.135 2 DEBUG oslo_concurrency.lockutils [req-2d33660f-1316-4def-8066-366f3584cd4f req-715af3c3-5836-4154-8990-5fa3e62df7c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:29 compute-0 nova_compute[259627]: 2025-10-14 09:31:29.136 2 DEBUG oslo_concurrency.lockutils [req-2d33660f-1316-4def-8066-366f3584cd4f req-715af3c3-5836-4154-8990-5fa3e62df7c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:29 compute-0 nova_compute[259627]: 2025-10-14 09:31:29.137 2 DEBUG oslo_concurrency.lockutils [req-2d33660f-1316-4def-8066-366f3584cd4f req-715af3c3-5836-4154-8990-5fa3e62df7c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:29 compute-0 nova_compute[259627]: 2025-10-14 09:31:29.137 2 DEBUG nova.compute.manager [req-2d33660f-1316-4def-8066-366f3584cd4f req-715af3c3-5836-4154-8990-5fa3e62df7c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] No waiting events found dispatching network-vif-plugged-20f15fae-2789-43f5-8ca3-2a412dba5625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:31:29 compute-0 nova_compute[259627]: 2025-10-14 09:31:29.138 2 WARNING nova.compute.manager [req-2d33660f-1316-4def-8066-366f3584cd4f req-715af3c3-5836-4154-8990-5fa3e62df7c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received unexpected event network-vif-plugged-20f15fae-2789-43f5-8ca3-2a412dba5625 for instance with vm_state deleted and task_state None.
Oct 14 09:31:29 compute-0 nova_compute[259627]: 2025-10-14 09:31:29.139 2 DEBUG nova.compute.manager [req-2d33660f-1316-4def-8066-366f3584cd4f req-715af3c3-5836-4154-8990-5fa3e62df7c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received event network-vif-deleted-20f15fae-2789-43f5-8ca3-2a412dba5625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:29 compute-0 ceph-mon[74249]: pgmap v2350: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Oct 14 09:31:29 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1431981852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.233 2 DEBUG nova.compute.manager [req-04cda5f4-151b-4be3-bc25-8b3d0fecd96f req-3be8ecc8-c6e6-4e6f-953c-cb49f810c3f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received event network-changed-2f6bb222-680e-469f-83d5-517735604bb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.233 2 DEBUG nova.compute.manager [req-04cda5f4-151b-4be3-bc25-8b3d0fecd96f req-3be8ecc8-c6e6-4e6f-953c-cb49f810c3f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Refreshing instance network info cache due to event network-changed-2f6bb222-680e-469f-83d5-517735604bb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.234 2 DEBUG oslo_concurrency.lockutils [req-04cda5f4-151b-4be3-bc25-8b3d0fecd96f req-3be8ecc8-c6e6-4e6f-953c-cb49f810c3f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.234 2 DEBUG oslo_concurrency.lockutils [req-04cda5f4-151b-4be3-bc25-8b3d0fecd96f req-3be8ecc8-c6e6-4e6f-953c-cb49f810c3f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.234 2 DEBUG nova.network.neutron [req-04cda5f4-151b-4be3-bc25-8b3d0fecd96f req-3be8ecc8-c6e6-4e6f-953c-cb49f810c3f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Refreshing network info cache for port 2f6bb222-680e-469f-83d5-517735604bb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.314 2 DEBUG oslo_concurrency.lockutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.314 2 DEBUG oslo_concurrency.lockutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.315 2 DEBUG oslo_concurrency.lockutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.315 2 DEBUG oslo_concurrency.lockutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.316 2 DEBUG oslo_concurrency.lockutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.317 2 INFO nova.compute.manager [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Terminating instance
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.319 2 DEBUG nova.compute.manager [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:31:30 compute-0 kernel: tap2f6bb222-68 (unregistering): left promiscuous mode
Oct 14 09:31:30 compute-0 NetworkManager[44885]: <info>  [1760434290.3879] device (tap2f6bb222-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:31:30 compute-0 ovn_controller[152662]: 2025-10-14T09:31:30Z|01482|binding|INFO|Releasing lport 2f6bb222-680e-469f-83d5-517735604bb0 from this chassis (sb_readonly=0)
Oct 14 09:31:30 compute-0 ovn_controller[152662]: 2025-10-14T09:31:30Z|01483|binding|INFO|Setting lport 2f6bb222-680e-469f-83d5-517735604bb0 down in Southbound
Oct 14 09:31:30 compute-0 ovn_controller[152662]: 2025-10-14T09:31:30Z|01484|binding|INFO|Removing iface tap2f6bb222-68 ovn-installed in OVS
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 255 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 659 KiB/s rd, 4.3 MiB/s wr, 139 op/s
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:30 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000088.scope: Deactivated successfully.
Oct 14 09:31:30 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000088.scope: Consumed 15.002s CPU time.
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:30 compute-0 systemd-machined[214636]: Machine qemu-169-instance-00000088 terminated.
Oct 14 09:31:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.474 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:bb:97 10.100.0.9'], port_security=['fa:16:3e:57:bb:97 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2e0f5391-ee56-46d7-8aa9-9ba00efbe0e1 93b5966a-7949-42d1-a83d-7ff7c7667c63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a89aaec7-5335-48cb-8b51-b7dcd7e1d5f5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2f6bb222-680e-469f-83d5-517735604bb0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:31:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.477 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2f6bb222-680e-469f-83d5-517735604bb0 in datapath 6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a unbound from our chassis
Oct 14 09:31:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.479 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:31:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.481 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8d3d56-5022-47d9-ab07-93ea9ef4a7a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.481 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a namespace which is not needed anymore
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.567 2 INFO nova.virt.libvirt.driver [-] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Instance destroyed successfully.
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.568 2 DEBUG nova.objects.instance [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'resources' on Instance uuid 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.588 2 DEBUG nova.virt.libvirt.vif [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-541480337',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-541480337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=136,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEydbvfC6BL8JKAwPsMYw33jsGt7x8pmldi0JdcLroJxQM9Cv9y2WyT4/kdGyYCi1f6St8cX/paY9O5VNQjPEflw/+0a0KCs3SETvjwSZyH4RtpOVgJ2yOdUlm6DCl0mg==',key_name='tempest-TestSecurityGroupsBasicOps-55432425',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:30:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-jzso6cvr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:30:32Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.589 2 DEBUG nova.network.os_vif_util [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.590 2 DEBUG nova.network.os_vif_util [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:bb:97,bridge_name='br-int',has_traffic_filtering=True,id=2f6bb222-680e-469f-83d5-517735604bb0,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6bb222-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.594 2 DEBUG os_vif [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:bb:97,bridge_name='br-int',has_traffic_filtering=True,id=2f6bb222-680e-469f-83d5-517735604bb0,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6bb222-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f6bb222-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.607 2 INFO os_vif [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:bb:97,bridge_name='br-int',has_traffic_filtering=True,id=2f6bb222-680e-469f-83d5-517735604bb0,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6bb222-68')
Oct 14 09:31:30 compute-0 neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a[399418]: [NOTICE]   (399422) : haproxy version is 2.8.14-c23fe91
Oct 14 09:31:30 compute-0 neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a[399418]: [NOTICE]   (399422) : path to executable is /usr/sbin/haproxy
Oct 14 09:31:30 compute-0 neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a[399418]: [WARNING]  (399422) : Exiting Master process...
Oct 14 09:31:30 compute-0 neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a[399418]: [ALERT]    (399422) : Current worker (399424) exited with code 143 (Terminated)
Oct 14 09:31:30 compute-0 neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a[399418]: [WARNING]  (399422) : All workers exited. Exiting... (0)
Oct 14 09:31:30 compute-0 systemd[1]: libpod-ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4.scope: Deactivated successfully.
Oct 14 09:31:30 compute-0 podman[401565]: 2025-10-14 09:31:30.69973129 +0000 UTC m=+0.072802928 container died ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 09:31:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2a4ae5bac5fb7737a47f6362c904610fbce4171335446f4e8312252dc95d3e9-merged.mount: Deactivated successfully.
Oct 14 09:31:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4-userdata-shm.mount: Deactivated successfully.
Oct 14 09:31:30 compute-0 podman[401565]: 2025-10-14 09:31:30.766222141 +0000 UTC m=+0.139293489 container cleanup ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:31:30 compute-0 systemd[1]: libpod-conmon-ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4.scope: Deactivated successfully.
Oct 14 09:31:30 compute-0 podman[401610]: 2025-10-14 09:31:30.838471904 +0000 UTC m=+0.047807684 container remove ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 09:31:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.844 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fae4238d-bad7-4f3b-a6e3-d140c50493c4]: (4, ('Tue Oct 14 09:31:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a (ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4)\nddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4\nTue Oct 14 09:31:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a (ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4)\nddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.846 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7c88f41c-860d-4190-ac7c-687efc585e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.846 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e05ecd2-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:30 compute-0 kernel: tap6e05ecd2-80: left promiscuous mode
Oct 14 09:31:30 compute-0 nova_compute[259627]: 2025-10-14 09:31:30.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.869 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2459610b-7a9d-494b-932f-98ca8a3af4b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.897 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3634177d-77ee-47b6-9239-7e2a65bc1267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.898 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[35c9d179-fedb-458b-95ea-deb9027c715d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.916 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f78bb363-28ca-4a05-b895-8fc1af4aa0bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801032, 'reachable_time': 41238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401630, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.918 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:31:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.918 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3a0577-4259-4339-b81e-60213f8ae0bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:30 compute-0 systemd[1]: run-netns-ovnmeta\x2d6e05ecd2\x2d8dfd\x2d4d78\x2d8fed\x2d31885a3cdf0a.mount: Deactivated successfully.
Oct 14 09:31:31 compute-0 nova_compute[259627]: 2025-10-14 09:31:31.020 2 INFO nova.virt.libvirt.driver [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Deleting instance files /var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_del
Oct 14 09:31:31 compute-0 nova_compute[259627]: 2025-10-14 09:31:31.021 2 INFO nova.virt.libvirt.driver [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Deletion of /var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_del complete
Oct 14 09:31:31 compute-0 nova_compute[259627]: 2025-10-14 09:31:31.070 2 INFO nova.compute.manager [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 14 09:31:31 compute-0 nova_compute[259627]: 2025-10-14 09:31:31.071 2 DEBUG oslo.service.loopingcall [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:31:31 compute-0 nova_compute[259627]: 2025-10-14 09:31:31.071 2 DEBUG nova.compute.manager [-] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:31:31 compute-0 nova_compute[259627]: 2025-10-14 09:31:31.071 2 DEBUG nova.network.neutron [-] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:31:31 compute-0 nova_compute[259627]: 2025-10-14 09:31:31.242 2 DEBUG nova.compute.manager [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received event network-vif-unplugged-2f6bb222-680e-469f-83d5-517735604bb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:31 compute-0 nova_compute[259627]: 2025-10-14 09:31:31.242 2 DEBUG oslo_concurrency.lockutils [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:31 compute-0 nova_compute[259627]: 2025-10-14 09:31:31.243 2 DEBUG oslo_concurrency.lockutils [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:31 compute-0 nova_compute[259627]: 2025-10-14 09:31:31.243 2 DEBUG oslo_concurrency.lockutils [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:31 compute-0 nova_compute[259627]: 2025-10-14 09:31:31.243 2 DEBUG nova.compute.manager [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] No waiting events found dispatching network-vif-unplugged-2f6bb222-680e-469f-83d5-517735604bb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:31:31 compute-0 nova_compute[259627]: 2025-10-14 09:31:31.243 2 DEBUG nova.compute.manager [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received event network-vif-unplugged-2f6bb222-680e-469f-83d5-517735604bb0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:31:31 compute-0 nova_compute[259627]: 2025-10-14 09:31:31.243 2 DEBUG nova.compute.manager [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received event network-vif-plugged-2f6bb222-680e-469f-83d5-517735604bb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:31 compute-0 nova_compute[259627]: 2025-10-14 09:31:31.244 2 DEBUG oslo_concurrency.lockutils [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:31 compute-0 nova_compute[259627]: 2025-10-14 09:31:31.244 2 DEBUG oslo_concurrency.lockutils [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:31 compute-0 nova_compute[259627]: 2025-10-14 09:31:31.244 2 DEBUG oslo_concurrency.lockutils [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:31 compute-0 nova_compute[259627]: 2025-10-14 09:31:31.244 2 DEBUG nova.compute.manager [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] No waiting events found dispatching network-vif-plugged-2f6bb222-680e-469f-83d5-517735604bb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:31:31 compute-0 nova_compute[259627]: 2025-10-14 09:31:31.244 2 WARNING nova.compute.manager [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received unexpected event network-vif-plugged-2f6bb222-680e-469f-83d5-517735604bb0 for instance with vm_state active and task_state deleting.
Oct 14 09:31:31 compute-0 ceph-mon[74249]: pgmap v2351: 305 pgs: 305 active+clean; 255 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 659 KiB/s rd, 4.3 MiB/s wr, 139 op/s
Oct 14 09:31:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 200 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 610 KiB/s rd, 3.7 MiB/s wr, 138 op/s
Oct 14 09:31:32 compute-0 nova_compute[259627]: 2025-10-14 09:31:32.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:32.508 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:31:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:32.509 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:31:32 compute-0 nova_compute[259627]: 2025-10-14 09:31:32.525 2 DEBUG nova.network.neutron [-] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:31:32 compute-0 nova_compute[259627]: 2025-10-14 09:31:32.558 2 INFO nova.compute.manager [-] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Took 1.49 seconds to deallocate network for instance.
Oct 14 09:31:32 compute-0 nova_compute[259627]: 2025-10-14 09:31:32.611 2 DEBUG oslo_concurrency.lockutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:32 compute-0 nova_compute[259627]: 2025-10-14 09:31:32.612 2 DEBUG oslo_concurrency.lockutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:32 compute-0 nova_compute[259627]: 2025-10-14 09:31:32.638 2 DEBUG nova.network.neutron [req-04cda5f4-151b-4be3-bc25-8b3d0fecd96f req-3be8ecc8-c6e6-4e6f-953c-cb49f810c3f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updated VIF entry in instance network info cache for port 2f6bb222-680e-469f-83d5-517735604bb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:31:32 compute-0 nova_compute[259627]: 2025-10-14 09:31:32.638 2 DEBUG nova.network.neutron [req-04cda5f4-151b-4be3-bc25-8b3d0fecd96f req-3be8ecc8-c6e6-4e6f-953c-cb49f810c3f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updating instance_info_cache with network_info: [{"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:31:32 compute-0 nova_compute[259627]: 2025-10-14 09:31:32.658 2 DEBUG oslo_concurrency.lockutils [req-04cda5f4-151b-4be3-bc25-8b3d0fecd96f req-3be8ecc8-c6e6-4e6f-953c-cb49f810c3f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:31:32 compute-0 nova_compute[259627]: 2025-10-14 09:31:32.698 2 DEBUG oslo_concurrency.processutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:31:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:31:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:31:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:31:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:31:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:31:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:31:32
Oct 14 09:31:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:31:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:31:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'backups', 'vms', 'volumes', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Oct 14 09:31:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:31:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:31:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:31:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/799793029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:31:33 compute-0 nova_compute[259627]: 2025-10-14 09:31:33.147 2 DEBUG oslo_concurrency.processutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:33 compute-0 nova_compute[259627]: 2025-10-14 09:31:33.158 2 DEBUG nova.compute.provider_tree [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:31:33 compute-0 nova_compute[259627]: 2025-10-14 09:31:33.178 2 DEBUG nova.scheduler.client.report [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:31:33 compute-0 nova_compute[259627]: 2025-10-14 09:31:33.209 2 DEBUG oslo_concurrency.lockutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:33 compute-0 nova_compute[259627]: 2025-10-14 09:31:33.240 2 INFO nova.scheduler.client.report [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Deleted allocations for instance 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4
Oct 14 09:31:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:31:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:31:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:31:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:31:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:31:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:31:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:31:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:31:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:31:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:31:33 compute-0 nova_compute[259627]: 2025-10-14 09:31:33.370 2 DEBUG oslo_concurrency.lockutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:33 compute-0 nova_compute[259627]: 2025-10-14 09:31:33.377 2 DEBUG nova.compute.manager [req-e5663e77-7867-4032-a6f7-b5d3d754eede req-713d39ad-2d1f-4f1d-852c-85bd115d6437 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received event network-vif-deleted-2f6bb222-680e-469f-83d5-517735604bb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:33 compute-0 nova_compute[259627]: 2025-10-14 09:31:33.378 2 INFO nova.compute.manager [req-e5663e77-7867-4032-a6f7-b5d3d754eede req-713d39ad-2d1f-4f1d-852c-85bd115d6437 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Neutron deleted interface 2f6bb222-680e-469f-83d5-517735604bb0; detaching it from the instance and deleting it from the info cache
Oct 14 09:31:33 compute-0 nova_compute[259627]: 2025-10-14 09:31:33.379 2 DEBUG nova.network.neutron [req-e5663e77-7867-4032-a6f7-b5d3d754eede req-713d39ad-2d1f-4f1d-852c-85bd115d6437 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:31:33 compute-0 nova_compute[259627]: 2025-10-14 09:31:33.398 2 DEBUG nova.compute.manager [req-e5663e77-7867-4032-a6f7-b5d3d754eede req-713d39ad-2d1f-4f1d-852c-85bd115d6437 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Detach interface failed, port_id=2f6bb222-680e-469f-83d5-517735604bb0, reason: Instance 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 09:31:33 compute-0 ceph-mon[74249]: pgmap v2352: 305 pgs: 305 active+clean; 200 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 610 KiB/s rd, 3.7 MiB/s wr, 138 op/s
Oct 14 09:31:33 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/799793029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:31:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 200 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 2.2 MiB/s wr, 107 op/s
Oct 14 09:31:35 compute-0 nova_compute[259627]: 2025-10-14 09:31:35.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:35 compute-0 nova_compute[259627]: 2025-10-14 09:31:35.505 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:35 compute-0 nova_compute[259627]: 2025-10-14 09:31:35.505 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:35 compute-0 nova_compute[259627]: 2025-10-14 09:31:35.527 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:31:35 compute-0 nova_compute[259627]: 2025-10-14 09:31:35.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:35 compute-0 ceph-mon[74249]: pgmap v2353: 305 pgs: 305 active+clean; 200 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 2.2 MiB/s wr, 107 op/s
Oct 14 09:31:35 compute-0 nova_compute[259627]: 2025-10-14 09:31:35.655 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:35 compute-0 nova_compute[259627]: 2025-10-14 09:31:35.656 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:35 compute-0 nova_compute[259627]: 2025-10-14 09:31:35.675 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:31:35 compute-0 nova_compute[259627]: 2025-10-14 09:31:35.676 2 INFO nova.compute.claims [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:31:35 compute-0 nova_compute[259627]: 2025-10-14 09:31:35.881 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:31:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1000465943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.335 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.342 2 DEBUG nova.compute.provider_tree [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.359 2 DEBUG nova.scheduler.client.report [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.382 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.383 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.434 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.435 2 DEBUG nova.network.neutron [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:31:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 121 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 412 KiB/s rd, 2.2 MiB/s wr, 135 op/s
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.467 2 INFO nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.492 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.597 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.601 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.602 2 INFO nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Creating image(s)
Oct 14 09:31:36 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1000465943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.628 2 DEBUG nova.storage.rbd_utils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c9315047-de1c-423a-adfa-118d77df3c94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:31:36 compute-0 ovn_controller[152662]: 2025-10-14T09:31:36Z|01485|binding|INFO|Releasing lport 1308be16-f790-4063-acf0-2c8f6fdde665 from this chassis (sb_readonly=0)
Oct 14 09:31:36 compute-0 ovn_controller[152662]: 2025-10-14T09:31:36Z|01486|binding|INFO|Releasing lport 9754cafe-7819-456f-943e-9907e7f07233 from this chassis (sb_readonly=0)
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.657 2 DEBUG nova.storage.rbd_utils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c9315047-de1c-423a-adfa-118d77df3c94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.684 2 DEBUG nova.storage.rbd_utils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c9315047-de1c-423a-adfa-118d77df3c94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.688 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.734 2 DEBUG nova.policy [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.766 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.767 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.767 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.768 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.792 2 DEBUG nova.storage.rbd_utils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c9315047-de1c-423a-adfa-118d77df3c94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:31:36 compute-0 nova_compute[259627]: 2025-10-14 09:31:36.796 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c9315047-de1c-423a-adfa-118d77df3c94_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:37 compute-0 nova_compute[259627]: 2025-10-14 09:31:37.095 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c9315047-de1c-423a-adfa-118d77df3c94_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:37 compute-0 nova_compute[259627]: 2025-10-14 09:31:37.185 2 DEBUG nova.storage.rbd_utils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image c9315047-de1c-423a-adfa-118d77df3c94_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:31:37 compute-0 nova_compute[259627]: 2025-10-14 09:31:37.290 2 DEBUG nova.objects.instance [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid c9315047-de1c-423a-adfa-118d77df3c94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:31:37 compute-0 nova_compute[259627]: 2025-10-14 09:31:37.317 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:31:37 compute-0 nova_compute[259627]: 2025-10-14 09:31:37.318 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Ensure instance console log exists: /var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:31:37 compute-0 nova_compute[259627]: 2025-10-14 09:31:37.318 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:37 compute-0 nova_compute[259627]: 2025-10-14 09:31:37.319 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:37 compute-0 nova_compute[259627]: 2025-10-14 09:31:37.319 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:37 compute-0 nova_compute[259627]: 2025-10-14 09:31:37.588 2 DEBUG nova.network.neutron [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Successfully created port: 63edf6de-b6e6-4be7-870e-062e8186ec37 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:31:37 compute-0 ceph-mon[74249]: pgmap v2354: 305 pgs: 305 active+clean; 121 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 412 KiB/s rd, 2.2 MiB/s wr, 135 op/s
Oct 14 09:31:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:31:38 compute-0 nova_compute[259627]: 2025-10-14 09:31:38.355 2 DEBUG nova.network.neutron [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Successfully created port: 787335a5-4b97-43d1-ba56-12091ebdecdb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:31:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 121 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Oct 14 09:31:39 compute-0 nova_compute[259627]: 2025-10-14 09:31:39.409 2 DEBUG nova.network.neutron [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Successfully updated port: 63edf6de-b6e6-4be7-870e-062e8186ec37 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:31:39 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:39.511 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:39 compute-0 nova_compute[259627]: 2025-10-14 09:31:39.572 2 DEBUG nova.compute.manager [req-7d3412ba-6193-42af-a2cf-2a76e7af25ce req-7c12588c-68ba-436d-8189-b0d2d8ae03f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-changed-63edf6de-b6e6-4be7-870e-062e8186ec37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:39 compute-0 nova_compute[259627]: 2025-10-14 09:31:39.573 2 DEBUG nova.compute.manager [req-7d3412ba-6193-42af-a2cf-2a76e7af25ce req-7c12588c-68ba-436d-8189-b0d2d8ae03f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Refreshing instance network info cache due to event network-changed-63edf6de-b6e6-4be7-870e-062e8186ec37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:31:39 compute-0 nova_compute[259627]: 2025-10-14 09:31:39.574 2 DEBUG oslo_concurrency.lockutils [req-7d3412ba-6193-42af-a2cf-2a76e7af25ce req-7c12588c-68ba-436d-8189-b0d2d8ae03f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:31:39 compute-0 nova_compute[259627]: 2025-10-14 09:31:39.575 2 DEBUG oslo_concurrency.lockutils [req-7d3412ba-6193-42af-a2cf-2a76e7af25ce req-7c12588c-68ba-436d-8189-b0d2d8ae03f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:31:39 compute-0 nova_compute[259627]: 2025-10-14 09:31:39.575 2 DEBUG nova.network.neutron [req-7d3412ba-6193-42af-a2cf-2a76e7af25ce req-7c12588c-68ba-436d-8189-b0d2d8ae03f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Refreshing network info cache for port 63edf6de-b6e6-4be7-870e-062e8186ec37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:31:39 compute-0 ceph-mon[74249]: pgmap v2355: 305 pgs: 305 active+clean; 121 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Oct 14 09:31:39 compute-0 nova_compute[259627]: 2025-10-14 09:31:39.783 2 DEBUG nova.network.neutron [req-7d3412ba-6193-42af-a2cf-2a76e7af25ce req-7c12588c-68ba-436d-8189-b0d2d8ae03f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:31:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 134 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 572 KiB/s wr, 68 op/s
Oct 14 09:31:40 compute-0 nova_compute[259627]: 2025-10-14 09:31:40.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:40 compute-0 nova_compute[259627]: 2025-10-14 09:31:40.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:41 compute-0 ceph-mon[74249]: pgmap v2356: 305 pgs: 305 active+clean; 134 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 572 KiB/s wr, 68 op/s
Oct 14 09:31:41 compute-0 nova_compute[259627]: 2025-10-14 09:31:41.845 2 DEBUG nova.network.neutron [req-7d3412ba-6193-42af-a2cf-2a76e7af25ce req-7c12588c-68ba-436d-8189-b0d2d8ae03f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:31:41 compute-0 nova_compute[259627]: 2025-10-14 09:31:41.875 2 DEBUG oslo_concurrency.lockutils [req-7d3412ba-6193-42af-a2cf-2a76e7af25ce req-7c12588c-68ba-436d-8189-b0d2d8ae03f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:31:41 compute-0 nova_compute[259627]: 2025-10-14 09:31:41.885 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434286.8846073, 4ff65022-c3e1-4ee6-b866-7892555ef52f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:31:41 compute-0 nova_compute[259627]: 2025-10-14 09:31:41.885 2 INFO nova.compute.manager [-] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] VM Stopped (Lifecycle Event)
Oct 14 09:31:41 compute-0 nova_compute[259627]: 2025-10-14 09:31:41.931 2 DEBUG nova.compute.manager [None req-facf29a5-1ed0-4a0e-a922-d9052f6e75a2 - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:31:41 compute-0 nova_compute[259627]: 2025-10-14 09:31:41.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:31:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Oct 14 09:31:42 compute-0 nova_compute[259627]: 2025-10-14 09:31:42.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:42 compute-0 nova_compute[259627]: 2025-10-14 09:31:42.585 2 DEBUG nova.network.neutron [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Successfully updated port: 787335a5-4b97-43d1-ba56-12091ebdecdb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:31:42 compute-0 nova_compute[259627]: 2025-10-14 09:31:42.602 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:31:42 compute-0 nova_compute[259627]: 2025-10-14 09:31:42.602 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:31:42 compute-0 nova_compute[259627]: 2025-10-14 09:31:42.603 2 DEBUG nova.network.neutron [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:31:42 compute-0 nova_compute[259627]: 2025-10-14 09:31:42.705 2 DEBUG nova.compute.manager [req-d54eab3c-374b-43d7-a604-a250974e2e61 req-f7645abd-53ce-47c6-a23a-e0b006b93202 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-changed-787335a5-4b97-43d1-ba56-12091ebdecdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:42 compute-0 nova_compute[259627]: 2025-10-14 09:31:42.705 2 DEBUG nova.compute.manager [req-d54eab3c-374b-43d7-a604-a250974e2e61 req-f7645abd-53ce-47c6-a23a-e0b006b93202 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Refreshing instance network info cache due to event network-changed-787335a5-4b97-43d1-ba56-12091ebdecdb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:31:42 compute-0 nova_compute[259627]: 2025-10-14 09:31:42.706 2 DEBUG oslo_concurrency.lockutils [req-d54eab3c-374b-43d7-a604-a250974e2e61 req-f7645abd-53ce-47c6-a23a-e0b006b93202 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:31:42 compute-0 nova_compute[259627]: 2025-10-14 09:31:42.797 2 DEBUG nova.network.neutron [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:31:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011053336833365891 of space, bias 1.0, pg target 0.33160010500097675 quantized to 32 (current 32)
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:31:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:31:43 compute-0 ceph-mon[74249]: pgmap v2357: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Oct 14 09:31:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 14 09:31:44 compute-0 nova_compute[259627]: 2025-10-14 09:31:44.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:31:44 compute-0 nova_compute[259627]: 2025-10-14 09:31:44.997 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:44 compute-0 nova_compute[259627]: 2025-10-14 09:31:44.998 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:44 compute-0 nova_compute[259627]: 2025-10-14 09:31:44.998 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:44 compute-0 nova_compute[259627]: 2025-10-14 09:31:44.998 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:31:44 compute-0 nova_compute[259627]: 2025-10-14 09:31:44.999 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:31:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2036949519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.501 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.565 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434290.5645478, 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.566 2 INFO nova.compute.manager [-] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] VM Stopped (Lifecycle Event)
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.590 2 DEBUG nova.compute.manager [None req-444bcb3f-9f01-4ea4-aeae-8da2ede8bcca - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:45 compute-0 ceph-mon[74249]: pgmap v2358: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 14 09:31:45 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2036949519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.667 2 DEBUG nova.network.neutron [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updating instance_info_cache with network_info: [{"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.681 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.682 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.689 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.690 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Instance network_info: |[{"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.690 2 DEBUG oslo_concurrency.lockutils [req-d54eab3c-374b-43d7-a604-a250974e2e61 req-f7645abd-53ce-47c6-a23a-e0b006b93202 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.691 2 DEBUG nova.network.neutron [req-d54eab3c-374b-43d7-a604-a250974e2e61 req-f7645abd-53ce-47c6-a23a-e0b006b93202 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Refreshing network info cache for port 787335a5-4b97-43d1-ba56-12091ebdecdb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:31:45 compute-0 podman[401865]: 2025-10-14 09:31:45.697080383 +0000 UTC m=+0.099529984 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd)
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.698 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Start _get_guest_xml network_info=[{"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.705 2 WARNING nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.716 2 DEBUG nova.virt.libvirt.host [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.717 2 DEBUG nova.virt.libvirt.host [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:31:45 compute-0 podman[401866]: 2025-10-14 09:31:45.72874713 +0000 UTC m=+0.129687664 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.732 2 DEBUG nova.virt.libvirt.host [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.733 2 DEBUG nova.virt.libvirt.host [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.734 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.734 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.735 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.735 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.735 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.736 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.736 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.736 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.736 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.737 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.737 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.737 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.741 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.939 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.940 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3395MB free_disk=59.921974182128906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.941 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:45 compute-0 nova_compute[259627]: 2025-10-14 09:31:45.941 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance b30a994a-5fb7-4344-9944-98d3d75d3b04 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance c9315047-de1c-423a-adfa-118d77df3c94 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.020 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.020 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.072 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:31:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2121337902' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.166 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.203 2 DEBUG nova.storage.rbd_utils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c9315047-de1c-423a-adfa-118d77df3c94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.209 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 14 09:31:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:31:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1579285411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.591 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.596 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.608 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.631 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:31:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.631 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4144975596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.648 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.650 2 DEBUG nova.virt.libvirt.vif [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:31:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-71039575',display_name='tempest-TestGettingAddress-server-71039575',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-71039575',id=139,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-s79hwes5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:31:36Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c9315047-de1c-423a-adfa-118d77df3c94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.651 2 DEBUG nova.network.os_vif_util [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.652 2 DEBUG nova.network.os_vif_util [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:cb:ea,bridge_name='br-int',has_traffic_filtering=True,id=63edf6de-b6e6-4be7-870e-062e8186ec37,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63edf6de-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.654 2 DEBUG nova.virt.libvirt.vif [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:31:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-71039575',display_name='tempest-TestGettingAddress-server-71039575',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-71039575',id=139,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-s79hwes5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:31:36Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c9315047-de1c-423a-adfa-118d77df3c94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.654 2 DEBUG nova.network.os_vif_util [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.655 2 DEBUG nova.network.os_vif_util [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:a9:10,bridge_name='br-int',has_traffic_filtering=True,id=787335a5-4b97-43d1-ba56-12091ebdecdb,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap787335a5-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.657 2 DEBUG nova.objects.instance [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid c9315047-de1c-423a-adfa-118d77df3c94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.674 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:31:46 compute-0 nova_compute[259627]:   <uuid>c9315047-de1c-423a-adfa-118d77df3c94</uuid>
Oct 14 09:31:46 compute-0 nova_compute[259627]:   <name>instance-0000008b</name>
Oct 14 09:31:46 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:31:46 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:31:46 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <nova:name>tempest-TestGettingAddress-server-71039575</nova:name>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:31:45</nova:creationTime>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:31:46 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:31:46 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:31:46 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:31:46 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:31:46 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:31:46 compute-0 nova_compute[259627]:         <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 09:31:46 compute-0 nova_compute[259627]:         <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:31:46 compute-0 nova_compute[259627]:         <nova:port uuid="63edf6de-b6e6-4be7-870e-062e8186ec37">
Oct 14 09:31:46 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:31:46 compute-0 nova_compute[259627]:         <nova:port uuid="787335a5-4b97-43d1-ba56-12091ebdecdb">
Oct 14 09:31:46 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fedf:a910" ipVersion="6"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:31:46 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:31:46 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <system>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <entry name="serial">c9315047-de1c-423a-adfa-118d77df3c94</entry>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <entry name="uuid">c9315047-de1c-423a-adfa-118d77df3c94</entry>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     </system>
Oct 14 09:31:46 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:31:46 compute-0 nova_compute[259627]:   <os>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:   </os>
Oct 14 09:31:46 compute-0 nova_compute[259627]:   <features>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:   </features>
Oct 14 09:31:46 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:31:46 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:31:46 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/c9315047-de1c-423a-adfa-118d77df3c94_disk">
Oct 14 09:31:46 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       </source>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:31:46 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/c9315047-de1c-423a-adfa-118d77df3c94_disk.config">
Oct 14 09:31:46 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       </source>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:31:46 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:20:cb:ea"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <target dev="tap63edf6de-b6"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:df:a9:10"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <target dev="tap787335a5-4b"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94/console.log" append="off"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <video>
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     </video>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:31:46 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:31:46 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:31:46 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:31:46 compute-0 nova_compute[259627]: </domain>
Oct 14 09:31:46 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.675 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Preparing to wait for external event network-vif-plugged-63edf6de-b6e6-4be7-870e-062e8186ec37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.676 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:46 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2121337902' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:31:46 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1579285411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:31:46 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4144975596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.676 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.677 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.677 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Preparing to wait for external event network-vif-plugged-787335a5-4b97-43d1-ba56-12091ebdecdb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.677 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.678 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.678 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.679 2 DEBUG nova.virt.libvirt.vif [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:31:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-71039575',display_name='tempest-TestGettingAddress-server-71039575',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-71039575',id=139,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-s79hwes5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:31:36Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c9315047-de1c-423a-adfa-118d77df3c94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.680 2 DEBUG nova.network.os_vif_util [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.681 2 DEBUG nova.network.os_vif_util [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:cb:ea,bridge_name='br-int',has_traffic_filtering=True,id=63edf6de-b6e6-4be7-870e-062e8186ec37,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63edf6de-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.681 2 DEBUG os_vif [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:cb:ea,bridge_name='br-int',has_traffic_filtering=True,id=63edf6de-b6e6-4be7-870e-062e8186ec37,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63edf6de-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.683 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.688 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63edf6de-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.689 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63edf6de-b6, col_values=(('external_ids', {'iface-id': '63edf6de-b6e6-4be7-870e-062e8186ec37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:cb:ea', 'vm-uuid': 'c9315047-de1c-423a-adfa-118d77df3c94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:46 compute-0 NetworkManager[44885]: <info>  [1760434306.6918] manager: (tap63edf6de-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/605)
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.700 2 INFO os_vif [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:cb:ea,bridge_name='br-int',has_traffic_filtering=True,id=63edf6de-b6e6-4be7-870e-062e8186ec37,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63edf6de-b6')
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.702 2 DEBUG nova.virt.libvirt.vif [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:31:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-71039575',display_name='tempest-TestGettingAddress-server-71039575',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-71039575',id=139,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-s79hwes5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:31:36Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c9315047-de1c-423a-adfa-118d77df3c94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.702 2 DEBUG nova.network.os_vif_util [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.703 2 DEBUG nova.network.os_vif_util [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:a9:10,bridge_name='br-int',has_traffic_filtering=True,id=787335a5-4b97-43d1-ba56-12091ebdecdb,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap787335a5-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.704 2 DEBUG os_vif [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:a9:10,bridge_name='br-int',has_traffic_filtering=True,id=787335a5-4b97-43d1-ba56-12091ebdecdb,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap787335a5-4b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap787335a5-4b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap787335a5-4b, col_values=(('external_ids', {'iface-id': '787335a5-4b97-43d1-ba56-12091ebdecdb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:a9:10', 'vm-uuid': 'c9315047-de1c-423a-adfa-118d77df3c94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:46 compute-0 NetworkManager[44885]: <info>  [1760434306.7127] manager: (tap787335a5-4b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/606)
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.721 2 INFO os_vif [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:a9:10,bridge_name='br-int',has_traffic_filtering=True,id=787335a5-4b97-43d1-ba56-12091ebdecdb,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap787335a5-4b')
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.765 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.766 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.766 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:20:cb:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.766 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:df:a9:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.766 2 INFO nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Using config drive
Oct 14 09:31:46 compute-0 nova_compute[259627]: 2025-10-14 09:31:46.784 2 DEBUG nova.storage.rbd_utils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c9315047-de1c-423a-adfa-118d77df3c94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:31:46 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:31:46 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 49K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1434 writes, 6460 keys, 1434 commit groups, 1.0 writes per commit group, ingest: 9.01 MB, 0.02 MB/s
                                           Interval WAL: 1434 writes, 1434 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    100.2      0.59              0.23        34    0.017       0      0       0.0       0.0
                                             L6      1/0    7.97 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.5    185.3    155.2      1.71              0.89        33    0.052    193K    18K       0.0       0.0
                                            Sum      1/0    7.97 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.5    137.6    141.0      2.31              1.12        67    0.034    193K    18K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.4    141.4    143.2      0.37              0.22        10    0.037     36K   2545       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    185.3    155.2      1.71              0.89        33    0.052    193K    18K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    101.3      0.59              0.23        33    0.018       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.058, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.32 GB write, 0.08 MB/s write, 0.31 GB read, 0.08 MB/s read, 2.3 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5646f3b2b1f0#2 capacity: 304.00 MB usage: 36.17 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000351 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2368,34.73 MB,11.4228%) FilterBlock(68,547.48 KB,0.175873%) IndexBlock(68,932.62 KB,0.299594%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 14 09:31:47 compute-0 nova_compute[259627]: 2025-10-14 09:31:47.187 2 DEBUG nova.network.neutron [req-d54eab3c-374b-43d7-a604-a250974e2e61 req-f7645abd-53ce-47c6-a23a-e0b006b93202 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updated VIF entry in instance network info cache for port 787335a5-4b97-43d1-ba56-12091ebdecdb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:31:47 compute-0 nova_compute[259627]: 2025-10-14 09:31:47.188 2 DEBUG nova.network.neutron [req-d54eab3c-374b-43d7-a604-a250974e2e61 req-f7645abd-53ce-47c6-a23a-e0b006b93202 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updating instance_info_cache with network_info: [{"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:31:47 compute-0 nova_compute[259627]: 2025-10-14 09:31:47.209 2 DEBUG oslo_concurrency.lockutils [req-d54eab3c-374b-43d7-a604-a250974e2e61 req-f7645abd-53ce-47c6-a23a-e0b006b93202 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:31:47 compute-0 nova_compute[259627]: 2025-10-14 09:31:47.221 2 INFO nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Creating config drive at /var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94/disk.config
Oct 14 09:31:47 compute-0 nova_compute[259627]: 2025-10-14 09:31:47.231 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe8vhsv43 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:47 compute-0 nova_compute[259627]: 2025-10-14 09:31:47.401 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe8vhsv43" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:47 compute-0 nova_compute[259627]: 2025-10-14 09:31:47.444 2 DEBUG nova.storage.rbd_utils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c9315047-de1c-423a-adfa-118d77df3c94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:31:47 compute-0 nova_compute[259627]: 2025-10-14 09:31:47.451 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94/disk.config c9315047-de1c-423a-adfa-118d77df3c94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:47 compute-0 nova_compute[259627]: 2025-10-14 09:31:47.628 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:31:47 compute-0 nova_compute[259627]: 2025-10-14 09:31:47.630 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:31:47 compute-0 nova_compute[259627]: 2025-10-14 09:31:47.630 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:31:47 compute-0 nova_compute[259627]: 2025-10-14 09:31:47.673 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94/disk.config c9315047-de1c-423a-adfa-118d77df3c94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:47 compute-0 nova_compute[259627]: 2025-10-14 09:31:47.673 2 INFO nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Deleting local config drive /var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94/disk.config because it was imported into RBD.
Oct 14 09:31:47 compute-0 ceph-mon[74249]: pgmap v2359: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 14 09:31:47 compute-0 NetworkManager[44885]: <info>  [1760434307.7507] manager: (tap63edf6de-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/607)
Oct 14 09:31:47 compute-0 kernel: tap63edf6de-b6: entered promiscuous mode
Oct 14 09:31:47 compute-0 ovn_controller[152662]: 2025-10-14T09:31:47Z|01487|binding|INFO|Claiming lport 63edf6de-b6e6-4be7-870e-062e8186ec37 for this chassis.
Oct 14 09:31:47 compute-0 ovn_controller[152662]: 2025-10-14T09:31:47Z|01488|binding|INFO|63edf6de-b6e6-4be7-870e-062e8186ec37: Claiming fa:16:3e:20:cb:ea 10.100.0.7
Oct 14 09:31:47 compute-0 nova_compute[259627]: 2025-10-14 09:31:47.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.807 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:cb:ea 10.100.0.7'], port_security=['fa:16:3e:20:cb:ea 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c9315047-de1c-423a-adfa-118d77df3c94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c995d506-6f6f-4226-92d2-2bf5b9a23d7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f55c896-9e6e-44ae-a4b7-c1c60b86826e, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=63edf6de-b6e6-4be7-870e-062e8186ec37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:31:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.808 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 63edf6de-b6e6-4be7-870e-062e8186ec37 in datapath 20dd724c-9d71-4931-8e8b-4dd3fbbacc17 bound to our chassis
Oct 14 09:31:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.810 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20dd724c-9d71-4931-8e8b-4dd3fbbacc17
Oct 14 09:31:47 compute-0 NetworkManager[44885]: <info>  [1760434307.8120] manager: (tap787335a5-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/608)
Oct 14 09:31:47 compute-0 kernel: tap787335a5-4b: entered promiscuous mode
Oct 14 09:31:47 compute-0 nova_compute[259627]: 2025-10-14 09:31:47.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:47 compute-0 ovn_controller[152662]: 2025-10-14T09:31:47Z|01489|binding|INFO|Setting lport 63edf6de-b6e6-4be7-870e-062e8186ec37 ovn-installed in OVS
Oct 14 09:31:47 compute-0 ovn_controller[152662]: 2025-10-14T09:31:47Z|01490|binding|INFO|Setting lport 63edf6de-b6e6-4be7-870e-062e8186ec37 up in Southbound
Oct 14 09:31:47 compute-0 ovn_controller[152662]: 2025-10-14T09:31:47Z|01491|if_status|INFO|Dropped 1 log messages in last 38 seconds (most recently, 38 seconds ago) due to excessive rate
Oct 14 09:31:47 compute-0 ovn_controller[152662]: 2025-10-14T09:31:47Z|01492|if_status|INFO|Not updating pb chassis for 787335a5-4b97-43d1-ba56-12091ebdecdb now as sb is readonly
Oct 14 09:31:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.827 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[697b11c5-8734-4f41-9e0a-d7eec01155bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:47 compute-0 ovn_controller[152662]: 2025-10-14T09:31:47Z|01493|binding|INFO|Claiming lport 787335a5-4b97-43d1-ba56-12091ebdecdb for this chassis.
Oct 14 09:31:47 compute-0 ovn_controller[152662]: 2025-10-14T09:31:47Z|01494|binding|INFO|787335a5-4b97-43d1-ba56-12091ebdecdb: Claiming fa:16:3e:df:a9:10 2001:db8::f816:3eff:fedf:a910
Oct 14 09:31:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.840 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:a9:10 2001:db8::f816:3eff:fedf:a910'], port_security=['fa:16:3e:df:a9:10 2001:db8::f816:3eff:fedf:a910'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fedf:a910/64', 'neutron:device_id': 'c9315047-de1c-423a-adfa-118d77df3c94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1dc63515-48cf-4886-956d-024d1d9cb848', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c995d506-6f6f-4226-92d2-2bf5b9a23d7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=29b23756-77cb-4493-bd13-0170877e81b9, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=787335a5-4b97-43d1-ba56-12091ebdecdb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:31:47 compute-0 systemd-udevd[402068]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:31:47 compute-0 systemd-udevd[402069]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:31:47 compute-0 ovn_controller[152662]: 2025-10-14T09:31:47Z|01495|binding|INFO|Setting lport 787335a5-4b97-43d1-ba56-12091ebdecdb ovn-installed in OVS
Oct 14 09:31:47 compute-0 ovn_controller[152662]: 2025-10-14T09:31:47Z|01496|binding|INFO|Setting lport 787335a5-4b97-43d1-ba56-12091ebdecdb up in Southbound
Oct 14 09:31:47 compute-0 nova_compute[259627]: 2025-10-14 09:31:47.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:47 compute-0 NetworkManager[44885]: <info>  [1760434307.8623] device (tap63edf6de-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:31:47 compute-0 NetworkManager[44885]: <info>  [1760434307.8685] device (tap63edf6de-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:31:47 compute-0 systemd-machined[214636]: New machine qemu-172-instance-0000008b.
Oct 14 09:31:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.871 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[81f956b5-0173-45ef-ab99-3335ef100328]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.875 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ec476d9d-d464-4a72-9526-b49a0c47f358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:47 compute-0 NetworkManager[44885]: <info>  [1760434307.8774] device (tap787335a5-4b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:31:47 compute-0 NetworkManager[44885]: <info>  [1760434307.8785] device (tap787335a5-4b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:31:47 compute-0 systemd[1]: Started Virtual Machine qemu-172-instance-0000008b.
Oct 14 09:31:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.909 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5b737054-44bd-4480-ae28-88d3e28ef5e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.932 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8b1125d8-9ec3-436e-8123-5585d16723a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20dd724c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:07:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804837, 'reachable_time': 24485, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402078, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.952 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d44174f5-9fb8-4285-960a-25b54f3c35cc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap20dd724c-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804851, 'tstamp': 804851}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402082, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap20dd724c-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804855, 'tstamp': 804855}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402082, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.954 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20dd724c-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.956 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20dd724c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:47 compute-0 nova_compute[259627]: 2025-10-14 09:31:47.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.956 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:31:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.957 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20dd724c-90, col_values=(('external_ids', {'iface-id': '1308be16-f790-4063-acf0-2c8f6fdde665'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.957 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:31:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.958 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 787335a5-4b97-43d1-ba56-12091ebdecdb in datapath 1dc63515-48cf-4886-956d-024d1d9cb848 unbound from our chassis
Oct 14 09:31:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.960 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1dc63515-48cf-4886-956d-024d1d9cb848
Oct 14 09:31:47 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.977 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[65570420-9355-460c-82c4-3c1f264c1252]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.016 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc6a4c7-7a39-404f-b915-0db2f60025c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.021 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8b23fa29-4beb-4cf4-b7ad-7aaf26f4c40a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:31:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.072 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2684fcc2-53d4-4634-ba48-7802f2204918]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.100 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0e81b1a0-b459-4ae5-8ecf-d7b8f620a25f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1dc63515-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:e0:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1872, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1872, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804949, 'reachable_time': 40867, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 20, 'inoctets': 1592, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 20, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1592, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 20, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402091, 'error': None, 'target': 'ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.126 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0199fc79-1b93-4f1f-82cb-87152ec4ea8d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1dc63515-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804965, 'tstamp': 804965}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402092, 'error': None, 'target': 'ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:31:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.128 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1dc63515-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.132 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1dc63515-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.133 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:31:48 compute-0 nova_compute[259627]: 2025-10-14 09:31:48.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.133 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1dc63515-40, col_values=(('external_ids', {'iface-id': '9754cafe-7819-456f-943e-9907e7f07233'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.134 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:31:48 compute-0 nova_compute[259627]: 2025-10-14 09:31:48.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:31:48 compute-0 nova_compute[259627]: 2025-10-14 09:31:48.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:31:48 compute-0 nova_compute[259627]: 2025-10-14 09:31:48.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:31:49 compute-0 nova_compute[259627]: 2025-10-14 09:31:49.205 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434309.2044435, c9315047-de1c-423a-adfa-118d77df3c94 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:31:49 compute-0 nova_compute[259627]: 2025-10-14 09:31:49.205 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] VM Started (Lifecycle Event)
Oct 14 09:31:49 compute-0 nova_compute[259627]: 2025-10-14 09:31:49.234 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:31:49 compute-0 nova_compute[259627]: 2025-10-14 09:31:49.240 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434309.204789, c9315047-de1c-423a-adfa-118d77df3c94 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:31:49 compute-0 nova_compute[259627]: 2025-10-14 09:31:49.241 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] VM Paused (Lifecycle Event)
Oct 14 09:31:49 compute-0 nova_compute[259627]: 2025-10-14 09:31:49.266 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:31:49 compute-0 nova_compute[259627]: 2025-10-14 09:31:49.270 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:31:49 compute-0 nova_compute[259627]: 2025-10-14 09:31:49.289 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:31:49 compute-0 ceph-mon[74249]: pgmap v2360: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:31:49 compute-0 nova_compute[259627]: 2025-10-14 09:31:49.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:31:49 compute-0 nova_compute[259627]: 2025-10-14 09:31:49.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:31:49 compute-0 nova_compute[259627]: 2025-10-14 09:31:49.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:31:50 compute-0 nova_compute[259627]: 2025-10-14 09:31:50.088 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 14 09:31:50 compute-0 nova_compute[259627]: 2025-10-14 09:31:50.234 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:31:50 compute-0 nova_compute[259627]: 2025-10-14 09:31:50.235 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:31:50 compute-0 nova_compute[259627]: 2025-10-14 09:31:50.235 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:31:50 compute-0 nova_compute[259627]: 2025-10-14 09:31:50.235 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b30a994a-5fb7-4344-9944-98d3d75d3b04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:31:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 14 09:31:50 compute-0 nova_compute[259627]: 2025-10-14 09:31:50.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:51 compute-0 ceph-mon[74249]: pgmap v2361: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 14 09:31:51 compute-0 nova_compute[259627]: 2025-10-14 09:31:51.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Oct 14 09:31:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:31:53 compute-0 ceph-mon[74249]: pgmap v2362: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Oct 14 09:31:53 compute-0 nova_compute[259627]: 2025-10-14 09:31:53.765 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:53 compute-0 nova_compute[259627]: 2025-10-14 09:31:53.767 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:53 compute-0 nova_compute[259627]: 2025-10-14 09:31:53.786 2 DEBUG nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:31:53 compute-0 nova_compute[259627]: 2025-10-14 09:31:53.795 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updating instance_info_cache with network_info: [{"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:31:53 compute-0 nova_compute[259627]: 2025-10-14 09:31:53.821 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:31:53 compute-0 nova_compute[259627]: 2025-10-14 09:31:53.822 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:31:53 compute-0 nova_compute[259627]: 2025-10-14 09:31:53.849 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:53 compute-0 nova_compute[259627]: 2025-10-14 09:31:53.849 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:53 compute-0 nova_compute[259627]: 2025-10-14 09:31:53.855 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:31:53 compute-0 nova_compute[259627]: 2025-10-14 09:31:53.855 2 INFO nova.compute.claims [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:31:53 compute-0 nova_compute[259627]: 2025-10-14 09:31:53.986 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.147 2 DEBUG nova.compute.manager [req-77bc4e8a-0300-415f-aa9c-c53dbb6754d8 req-0999d274-9fe7-4872-b16f-984057d97c59 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-plugged-63edf6de-b6e6-4be7-870e-062e8186ec37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.147 2 DEBUG oslo_concurrency.lockutils [req-77bc4e8a-0300-415f-aa9c-c53dbb6754d8 req-0999d274-9fe7-4872-b16f-984057d97c59 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.148 2 DEBUG oslo_concurrency.lockutils [req-77bc4e8a-0300-415f-aa9c-c53dbb6754d8 req-0999d274-9fe7-4872-b16f-984057d97c59 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.148 2 DEBUG oslo_concurrency.lockutils [req-77bc4e8a-0300-415f-aa9c-c53dbb6754d8 req-0999d274-9fe7-4872-b16f-984057d97c59 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.148 2 DEBUG nova.compute.manager [req-77bc4e8a-0300-415f-aa9c-c53dbb6754d8 req-0999d274-9fe7-4872-b16f-984057d97c59 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Processing event network-vif-plugged-63edf6de-b6e6-4be7-870e-062e8186ec37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:31:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:31:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3173069999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.417 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.426 2 DEBUG nova.compute.provider_tree [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.449 2 DEBUG nova.scheduler.client.report [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:31:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.489 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.491 2 DEBUG nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.552 2 DEBUG nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.553 2 DEBUG nova.network.neutron [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.573 2 INFO nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.594 2 DEBUG nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.688 2 DEBUG nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.691 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.692 2 INFO nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Creating image(s)
Oct 14 09:31:54 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3173069999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.734 2 DEBUG nova.storage.rbd_utils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.763 2 DEBUG nova.storage.rbd_utils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.788 2 DEBUG nova.storage.rbd_utils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.792 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.898 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.899 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.899 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.899 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.922 2 DEBUG nova.storage.rbd_utils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.925 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:54 compute-0 nova_compute[259627]: 2025-10-14 09:31:54.989 2 DEBUG nova.policy [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20f3546ab30e42b5b641f67780316750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f754bf649a2404fa8dee732f5aab36e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:31:55 compute-0 nova_compute[259627]: 2025-10-14 09:31:55.211 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:55 compute-0 nova_compute[259627]: 2025-10-14 09:31:55.276 2 DEBUG nova.storage.rbd_utils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] resizing rbd image 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:31:55 compute-0 nova_compute[259627]: 2025-10-14 09:31:55.388 2 DEBUG nova.objects.instance [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'migration_context' on Instance uuid 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:31:55 compute-0 nova_compute[259627]: 2025-10-14 09:31:55.449 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:31:55 compute-0 nova_compute[259627]: 2025-10-14 09:31:55.449 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Ensure instance console log exists: /var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:31:55 compute-0 nova_compute[259627]: 2025-10-14 09:31:55.450 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:55 compute-0 nova_compute[259627]: 2025-10-14 09:31:55.450 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:55 compute-0 nova_compute[259627]: 2025-10-14 09:31:55.451 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:55 compute-0 nova_compute[259627]: 2025-10-14 09:31:55.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:55 compute-0 ceph-mon[74249]: pgmap v2363: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 14 09:31:55 compute-0 nova_compute[259627]: 2025-10-14 09:31:55.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.374 2 DEBUG nova.network.neutron [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Successfully created port: 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:31:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 189 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 MiB/s wr, 24 op/s
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.584 2 DEBUG nova.compute.manager [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-plugged-63edf6de-b6e6-4be7-870e-062e8186ec37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.585 2 DEBUG oslo_concurrency.lockutils [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.585 2 DEBUG oslo_concurrency.lockutils [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.586 2 DEBUG oslo_concurrency.lockutils [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.586 2 DEBUG nova.compute.manager [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] No event matching network-vif-plugged-63edf6de-b6e6-4be7-870e-062e8186ec37 in dict_keys([('network-vif-plugged', '787335a5-4b97-43d1-ba56-12091ebdecdb')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.587 2 WARNING nova.compute.manager [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received unexpected event network-vif-plugged-63edf6de-b6e6-4be7-870e-062e8186ec37 for instance with vm_state building and task_state spawning.
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.587 2 DEBUG nova.compute.manager [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-plugged-787335a5-4b97-43d1-ba56-12091ebdecdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.588 2 DEBUG oslo_concurrency.lockutils [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.589 2 DEBUG oslo_concurrency.lockutils [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.589 2 DEBUG oslo_concurrency.lockutils [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.590 2 DEBUG nova.compute.manager [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Processing event network-vif-plugged-787335a5-4b97-43d1-ba56-12091ebdecdb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.590 2 DEBUG nova.compute.manager [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-plugged-787335a5-4b97-43d1-ba56-12091ebdecdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.590 2 DEBUG oslo_concurrency.lockutils [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.591 2 DEBUG oslo_concurrency.lockutils [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.591 2 DEBUG oslo_concurrency.lockutils [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.592 2 DEBUG nova.compute.manager [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] No waiting events found dispatching network-vif-plugged-787335a5-4b97-43d1-ba56-12091ebdecdb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.592 2 WARNING nova.compute.manager [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received unexpected event network-vif-plugged-787335a5-4b97-43d1-ba56-12091ebdecdb for instance with vm_state building and task_state spawning.
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.593 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Instance event wait completed in 7 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.598 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434316.5974936, c9315047-de1c-423a-adfa-118d77df3c94 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.598 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] VM Resumed (Lifecycle Event)
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.601 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.607 2 INFO nova.virt.libvirt.driver [-] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Instance spawned successfully.
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.607 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.635 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.640 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.641 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.641 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.642 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.642 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.643 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.649 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.686 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.717 2 INFO nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Took 20.12 seconds to spawn the instance on the hypervisor.
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.717 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.803 2 INFO nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Took 21.20 seconds to build instance.
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.820 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:56 compute-0 nova_compute[259627]: 2025-10-14 09:31:56.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:31:57 compute-0 nova_compute[259627]: 2025-10-14 09:31:57.421 2 DEBUG nova.network.neutron [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Successfully updated port: 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:31:57 compute-0 nova_compute[259627]: 2025-10-14 09:31:57.443 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:31:57 compute-0 nova_compute[259627]: 2025-10-14 09:31:57.443 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquired lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:31:57 compute-0 nova_compute[259627]: 2025-10-14 09:31:57.443 2 DEBUG nova.network.neutron [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:31:57 compute-0 nova_compute[259627]: 2025-10-14 09:31:57.523 2 DEBUG nova.compute.manager [req-5824fcf9-f36e-423e-9539-02560540ac32 req-2a4244f1-3907-4f72-aaf3-917a25b3befb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Received event network-changed-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:31:57 compute-0 nova_compute[259627]: 2025-10-14 09:31:57.523 2 DEBUG nova.compute.manager [req-5824fcf9-f36e-423e-9539-02560540ac32 req-2a4244f1-3907-4f72-aaf3-917a25b3befb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Refreshing instance network info cache due to event network-changed-02b93e6c-e8fd-4ab1-bd57-84775ae34da2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:31:57 compute-0 nova_compute[259627]: 2025-10-14 09:31:57.524 2 DEBUG oslo_concurrency.lockutils [req-5824fcf9-f36e-423e-9539-02560540ac32 req-2a4244f1-3907-4f72-aaf3-917a25b3befb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:31:57 compute-0 podman[402326]: 2025-10-14 09:31:57.642673497 +0000 UTC m=+0.053575046 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:31:57 compute-0 podman[402325]: 2025-10-14 09:31:57.678461135 +0000 UTC m=+0.088532713 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 14 09:31:57 compute-0 ceph-mon[74249]: pgmap v2364: 305 pgs: 305 active+clean; 189 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 MiB/s wr, 24 op/s
Oct 14 09:31:57 compute-0 nova_compute[259627]: 2025-10-14 09:31:57.780 2 DEBUG nova.network.neutron [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:31:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:31:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 189 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 MiB/s wr, 24 op/s
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.563 2 DEBUG nova.network.neutron [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updating instance_info_cache with network_info: [{"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.585 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Releasing lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.585 2 DEBUG nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Instance network_info: |[{"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.586 2 DEBUG oslo_concurrency.lockutils [req-5824fcf9-f36e-423e-9539-02560540ac32 req-2a4244f1-3907-4f72-aaf3-917a25b3befb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.586 2 DEBUG nova.network.neutron [req-5824fcf9-f36e-423e-9539-02560540ac32 req-2a4244f1-3907-4f72-aaf3-917a25b3befb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Refreshing network info cache for port 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.589 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Start _get_guest_xml network_info=[{"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.594 2 WARNING nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.602 2 DEBUG nova.virt.libvirt.host [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.602 2 DEBUG nova.virt.libvirt.host [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.605 2 DEBUG nova.virt.libvirt.host [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.606 2 DEBUG nova.virt.libvirt.host [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.606 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.606 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.607 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.607 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.608 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.608 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.608 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.609 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.609 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.609 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.610 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.610 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:31:58 compute-0 nova_compute[259627]: 2025-10-14 09:31:58.612 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:31:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3302093010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.060 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.093 2 DEBUG nova.storage.rbd_utils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.098 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:31:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:31:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3448617375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.580 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.582 2 DEBUG nova.virt.libvirt.vif [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:31:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1433018777',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1433018777',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=140,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFZgUpzEuWQ2yRKeeGP5cwEpEG8ix10coU0eNExva0wfqy7K02GmuQxBgal8BWbrr6/lfxNLSNkfNKLH91c4JJNbWrJCy5K0gCADFvn2wBXfeHIu8FO18gqCQFIgsuofuA==',key_name='tempest-TestSecurityGroupsBasicOps-135749728',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-cth66w0f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:31:54Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.583 2 DEBUG nova.network.os_vif_util [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.584 2 DEBUG nova.network.os_vif_util [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:f3:32,bridge_name='br-int',has_traffic_filtering=True,id=02b93e6c-e8fd-4ab1-bd57-84775ae34da2,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02b93e6c-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.586 2 DEBUG nova.objects.instance [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.618 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:31:59 compute-0 nova_compute[259627]:   <uuid>8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0</uuid>
Oct 14 09:31:59 compute-0 nova_compute[259627]:   <name>instance-0000008c</name>
Oct 14 09:31:59 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:31:59 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:31:59 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1433018777</nova:name>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:31:58</nova:creationTime>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:31:59 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:31:59 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:31:59 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:31:59 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:31:59 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:31:59 compute-0 nova_compute[259627]:         <nova:user uuid="20f3546ab30e42b5b641f67780316750">tempest-TestSecurityGroupsBasicOps-1327646173-project-member</nova:user>
Oct 14 09:31:59 compute-0 nova_compute[259627]:         <nova:project uuid="5f754bf649a2404fa8dee732f5aab36e">tempest-TestSecurityGroupsBasicOps-1327646173</nova:project>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:31:59 compute-0 nova_compute[259627]:         <nova:port uuid="02b93e6c-e8fd-4ab1-bd57-84775ae34da2">
Oct 14 09:31:59 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:31:59 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:31:59 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <system>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <entry name="serial">8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0</entry>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <entry name="uuid">8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0</entry>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     </system>
Oct 14 09:31:59 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:31:59 compute-0 nova_compute[259627]:   <os>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:   </os>
Oct 14 09:31:59 compute-0 nova_compute[259627]:   <features>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:   </features>
Oct 14 09:31:59 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:31:59 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:31:59 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk">
Oct 14 09:31:59 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       </source>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:31:59 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk.config">
Oct 14 09:31:59 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       </source>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:31:59 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:66:f3:32"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <target dev="tap02b93e6c-e8"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0/console.log" append="off"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <video>
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     </video>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:31:59 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:31:59 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:31:59 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:31:59 compute-0 nova_compute[259627]: </domain>
Oct 14 09:31:59 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.630 2 DEBUG nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Preparing to wait for external event network-vif-plugged-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.630 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.631 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.631 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.632 2 DEBUG nova.virt.libvirt.vif [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:31:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1433018777',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1433018777',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=140,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFZgUpzEuWQ2yRKeeGP5cwEpEG8ix10coU0eNExva0wfqy7K02GmuQxBgal8BWbrr6/lfxNLSNkfNKLH91c4JJNbWrJCy5K0gCADFvn2wBXfeHIu8FO18gqCQFIgsuofuA==',key_name='tempest-TestSecurityGroupsBasicOps-135749728',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-cth66w0f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:31:54Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.632 2 DEBUG nova.network.os_vif_util [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.633 2 DEBUG nova.network.os_vif_util [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:f3:32,bridge_name='br-int',has_traffic_filtering=True,id=02b93e6c-e8fd-4ab1-bd57-84775ae34da2,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02b93e6c-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.633 2 DEBUG os_vif [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:f3:32,bridge_name='br-int',has_traffic_filtering=True,id=02b93e6c-e8fd-4ab1-bd57-84775ae34da2,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02b93e6c-e8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.634 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.634 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.638 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02b93e6c-e8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.638 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap02b93e6c-e8, col_values=(('external_ids', {'iface-id': '02b93e6c-e8fd-4ab1-bd57-84775ae34da2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:f3:32', 'vm-uuid': '8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:59 compute-0 NetworkManager[44885]: <info>  [1760434319.6410] manager: (tap02b93e6c-e8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.650 2 INFO os_vif [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:f3:32,bridge_name='br-int',has_traffic_filtering=True,id=02b93e6c-e8fd-4ab1-bd57-84775ae34da2,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02b93e6c-e8')
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.721 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.721 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.722 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No VIF found with MAC fa:16:3e:66:f3:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.722 2 INFO nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Using config drive
Oct 14 09:31:59 compute-0 ceph-mon[74249]: pgmap v2365: 305 pgs: 305 active+clean; 189 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 MiB/s wr, 24 op/s
Oct 14 09:31:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3302093010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:31:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3448617375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:31:59 compute-0 nova_compute[259627]: 2025-10-14 09:31:59.745 2 DEBUG nova.storage.rbd_utils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.129 2 INFO nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Creating config drive at /var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0/disk.config
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.135 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq9numrr4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.175 2 DEBUG nova.compute.manager [req-1d30ba9a-cd98-4444-89e7-097a12e54314 req-4283c644-86fb-4c76-aa6d-a8cc2ebdc1ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-changed-63edf6de-b6e6-4be7-870e-062e8186ec37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.175 2 DEBUG nova.compute.manager [req-1d30ba9a-cd98-4444-89e7-097a12e54314 req-4283c644-86fb-4c76-aa6d-a8cc2ebdc1ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Refreshing instance network info cache due to event network-changed-63edf6de-b6e6-4be7-870e-062e8186ec37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.176 2 DEBUG oslo_concurrency.lockutils [req-1d30ba9a-cd98-4444-89e7-097a12e54314 req-4283c644-86fb-4c76-aa6d-a8cc2ebdc1ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.176 2 DEBUG oslo_concurrency.lockutils [req-1d30ba9a-cd98-4444-89e7-097a12e54314 req-4283c644-86fb-4c76-aa6d-a8cc2ebdc1ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.176 2 DEBUG nova.network.neutron [req-1d30ba9a-cd98-4444-89e7-097a12e54314 req-4283c644-86fb-4c76-aa6d-a8cc2ebdc1ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Refreshing network info cache for port 63edf6de-b6e6-4be7-870e-062e8186ec37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.196 2 DEBUG nova.network.neutron [req-5824fcf9-f36e-423e-9539-02560540ac32 req-2a4244f1-3907-4f72-aaf3-917a25b3befb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updated VIF entry in instance network info cache for port 02b93e6c-e8fd-4ab1-bd57-84775ae34da2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.197 2 DEBUG nova.network.neutron [req-5824fcf9-f36e-423e-9539-02560540ac32 req-2a4244f1-3907-4f72-aaf3-917a25b3befb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updating instance_info_cache with network_info: [{"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.210 2 DEBUG oslo_concurrency.lockutils [req-5824fcf9-f36e-423e-9539-02560540ac32 req-2a4244f1-3907-4f72-aaf3-917a25b3befb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.278 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq9numrr4" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.309 2 DEBUG nova.storage.rbd_utils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.312 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0/disk.config 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.465 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0/disk.config 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.466 2 INFO nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Deleting local config drive /var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0/disk.config because it was imported into RBD.
Oct 14 09:32:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 213 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 684 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Oct 14 09:32:00 compute-0 kernel: tap02b93e6c-e8: entered promiscuous mode
Oct 14 09:32:00 compute-0 NetworkManager[44885]: <info>  [1760434320.5139] manager: (tap02b93e6c-e8): new Tun device (/org/freedesktop/NetworkManager/Devices/610)
Oct 14 09:32:00 compute-0 ovn_controller[152662]: 2025-10-14T09:32:00Z|01497|binding|INFO|Claiming lport 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 for this chassis.
Oct 14 09:32:00 compute-0 ovn_controller[152662]: 2025-10-14T09:32:00Z|01498|binding|INFO|02b93e6c-e8fd-4ab1-bd57-84775ae34da2: Claiming fa:16:3e:66:f3:32 10.100.0.5
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.524 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:f3:32 10.100.0.5'], port_security=['fa:16:3e:66:f3:32 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '62ae45e1-2406-4d0e-83db-65cdec8c0b6f 6ff9262f-239c-4772-9987-411eb120736a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02951490-3f50-4bba-9cc5-21d9c0a6e4ba, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=02b93e6c-e8fd-4ab1-bd57-84775ae34da2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.525 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 in datapath 026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540 bound to our chassis
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.526 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:00 compute-0 ovn_controller[152662]: 2025-10-14T09:32:00Z|01499|binding|INFO|Setting lport 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 ovn-installed in OVS
Oct 14 09:32:00 compute-0 ovn_controller[152662]: 2025-10-14T09:32:00Z|01500|binding|INFO|Setting lport 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 up in Southbound
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.541 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9191f8d3-0bcf-4cb5-8847-65ad3d4b6ce9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.542 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap026a2ce2-41 in ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.543 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap026a2ce2-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.543 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8c5f55-0211-46b3-b1b5-a1e57f1475d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.545 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7dfcbc73-d5a1-4260-bcf2-8bffb1d78b58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:00 compute-0 systemd-udevd[402511]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:32:00 compute-0 systemd-machined[214636]: New machine qemu-173-instance-0000008c.
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.560 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[492aa675-f8b3-4d65-9abf-6c0d962b631a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:00 compute-0 systemd[1]: Started Virtual Machine qemu-173-instance-0000008c.
Oct 14 09:32:00 compute-0 NetworkManager[44885]: <info>  [1760434320.5762] device (tap02b93e6c-e8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:32:00 compute-0 NetworkManager[44885]: <info>  [1760434320.5772] device (tap02b93e6c-e8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.584 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2e1f16-bc53-432a-b37e-ff4358591442]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.619 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[265f3fb5-9060-408c-8d95-e39631f78c6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.626 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[afa0ed73-48b6-4c38-9ac6-005841831090]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:00 compute-0 NetworkManager[44885]: <info>  [1760434320.6303] manager: (tap026a2ce2-40): new Veth device (/org/freedesktop/NetworkManager/Devices/611)
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.663 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[817e8662-4a11-476a-a1fd-e30760f38338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.666 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7f237706-4e25-424b-905d-4b7297d18c27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:00 compute-0 NetworkManager[44885]: <info>  [1760434320.6906] device (tap026a2ce2-40): carrier: link connected
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.698 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f52e35-6232-49e8-99b2-bf76dbf931f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.721 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0163886-7cfe-443d-bd48-9ebadce67bd8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap026a2ce2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:95:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809945, 'reachable_time': 23632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402542, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.744 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2a1a5c-ec40-4322-a888-769d62a09fb1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:9556'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 809945, 'tstamp': 809945}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402543, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.759 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bc198fec-64b5-4969-af9b-c9df1eefecdf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap026a2ce2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:95:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809945, 'reachable_time': 23632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 402544, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.786 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b087abeb-55c9-46f7-b84b-85df26dcc0e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.848 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7f91ec-8e3c-4855-a629-d12f25c503ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.849 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap026a2ce2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.850 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.850 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap026a2ce2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:00 compute-0 kernel: tap026a2ce2-40: entered promiscuous mode
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:00 compute-0 NetworkManager[44885]: <info>  [1760434320.8528] manager: (tap026a2ce2-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/612)
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.856 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap026a2ce2-40, col_values=(('external_ids', {'iface-id': '7f009d61-2857-4109-a89b-a83c53d44768'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:00 compute-0 ovn_controller[152662]: 2025-10-14T09:32:00Z|01501|binding|INFO|Releasing lport 7f009d61-2857-4109-a89b-a83c53d44768 from this chassis (sb_readonly=0)
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.875 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.877 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d8260bce-fa2b-4b14-a030-fcb92053fa06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.878 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540.pid.haproxy
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:32:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.878 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'env', 'PROCESS_TAG=haproxy-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.966 2 DEBUG nova.compute.manager [req-d1950e8f-ca09-4643-a045-f0208a379acb req-6c26c4a6-3c83-4698-817a-42f64e509894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Received event network-vif-plugged-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.966 2 DEBUG oslo_concurrency.lockutils [req-d1950e8f-ca09-4643-a045-f0208a379acb req-6c26c4a6-3c83-4698-817a-42f64e509894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.967 2 DEBUG oslo_concurrency.lockutils [req-d1950e8f-ca09-4643-a045-f0208a379acb req-6c26c4a6-3c83-4698-817a-42f64e509894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.967 2 DEBUG oslo_concurrency.lockutils [req-d1950e8f-ca09-4643-a045-f0208a379acb req-6c26c4a6-3c83-4698-817a-42f64e509894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:00 compute-0 nova_compute[259627]: 2025-10-14 09:32:00.967 2 DEBUG nova.compute.manager [req-d1950e8f-ca09-4643-a045-f0208a379acb req-6c26c4a6-3c83-4698-817a-42f64e509894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Processing event network-vif-plugged-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:32:01 compute-0 podman[402618]: 2025-10-14 09:32:01.238108768 +0000 UTC m=+0.027620058 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:32:01 compute-0 podman[402618]: 2025-10-14 09:32:01.3551151 +0000 UTC m=+0.144626360 container create 1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 14 09:32:01 compute-0 systemd[1]: Started libpod-conmon-1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c.scope.
Oct 14 09:32:01 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfade0dc0f4b5a18f9357cc84a838a56c45c2e6cf4ac2c1e6656f3fc9a57ba4b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.467 2 DEBUG nova.network.neutron [req-1d30ba9a-cd98-4444-89e7-097a12e54314 req-4283c644-86fb-4c76-aa6d-a8cc2ebdc1ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updated VIF entry in instance network info cache for port 63edf6de-b6e6-4be7-870e-062e8186ec37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.468 2 DEBUG nova.network.neutron [req-1d30ba9a-cd98-4444-89e7-097a12e54314 req-4283c644-86fb-4c76-aa6d-a8cc2ebdc1ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updating instance_info_cache with network_info: [{"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.497 2 DEBUG oslo_concurrency.lockutils [req-1d30ba9a-cd98-4444-89e7-097a12e54314 req-4283c644-86fb-4c76-aa6d-a8cc2ebdc1ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.523 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434321.5226917, 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.524 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] VM Started (Lifecycle Event)
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.527 2 DEBUG nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.531 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.536 2 INFO nova.virt.libvirt.driver [-] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Instance spawned successfully.
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.536 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.545 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.548 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.554 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.555 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.557 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:32:01 compute-0 podman[402618]: 2025-10-14 09:32:01.557471076 +0000 UTC m=+0.346982376 container init 1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.557 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.558 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.558 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:32:01 compute-0 podman[402618]: 2025-10-14 09:32:01.563502224 +0000 UTC m=+0.353013494 container start 1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.567 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.568 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434321.5228019, 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.568 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] VM Paused (Lifecycle Event)
Oct 14 09:32:01 compute-0 neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540[402631]: [NOTICE]   (402635) : New worker (402637) forked
Oct 14 09:32:01 compute-0 neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540[402631]: [NOTICE]   (402635) : Loading success.
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.590 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.594 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434321.529974, 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.594 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] VM Resumed (Lifecycle Event)
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.616 2 INFO nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Took 6.93 seconds to spawn the instance on the hypervisor.
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.616 2 DEBUG nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.617 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.625 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.654 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.679 2 INFO nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Took 7.85 seconds to build instance.
Oct 14 09:32:01 compute-0 nova_compute[259627]: 2025-10-14 09:32:01.693 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:01 compute-0 ceph-mon[74249]: pgmap v2366: 305 pgs: 305 active+clean; 213 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 684 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Oct 14 09:32:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 213 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Oct 14 09:32:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:32:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:32:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:32:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:32:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:32:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:32:03 compute-0 nova_compute[259627]: 2025-10-14 09:32:03.067 2 DEBUG nova.compute.manager [req-40090379-f5d9-465e-b3c6-79fd215a6353 req-c7d7c8dd-d999-4229-a79f-378092f8ec24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Received event network-vif-plugged-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:03 compute-0 nova_compute[259627]: 2025-10-14 09:32:03.068 2 DEBUG oslo_concurrency.lockutils [req-40090379-f5d9-465e-b3c6-79fd215a6353 req-c7d7c8dd-d999-4229-a79f-378092f8ec24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:03 compute-0 nova_compute[259627]: 2025-10-14 09:32:03.068 2 DEBUG oslo_concurrency.lockutils [req-40090379-f5d9-465e-b3c6-79fd215a6353 req-c7d7c8dd-d999-4229-a79f-378092f8ec24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:03 compute-0 nova_compute[259627]: 2025-10-14 09:32:03.068 2 DEBUG oslo_concurrency.lockutils [req-40090379-f5d9-465e-b3c6-79fd215a6353 req-c7d7c8dd-d999-4229-a79f-378092f8ec24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:03 compute-0 nova_compute[259627]: 2025-10-14 09:32:03.069 2 DEBUG nova.compute.manager [req-40090379-f5d9-465e-b3c6-79fd215a6353 req-c7d7c8dd-d999-4229-a79f-378092f8ec24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] No waiting events found dispatching network-vif-plugged-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:32:03 compute-0 nova_compute[259627]: 2025-10-14 09:32:03.069 2 WARNING nova.compute.manager [req-40090379-f5d9-465e-b3c6-79fd215a6353 req-c7d7c8dd-d999-4229-a79f-378092f8ec24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Received unexpected event network-vif-plugged-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 for instance with vm_state active and task_state None.
Oct 14 09:32:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:32:03 compute-0 ceph-mon[74249]: pgmap v2367: 305 pgs: 305 active+clean; 213 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Oct 14 09:32:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 213 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 14 09:32:04 compute-0 nova_compute[259627]: 2025-10-14 09:32:04.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:05 compute-0 nova_compute[259627]: 2025-10-14 09:32:05.598 2 DEBUG nova.compute.manager [req-a70e7622-dac0-409f-9228-5bf5c39b8015 req-16928165-acbd-4375-96a5-acbd82ae0d09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Received event network-changed-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:05 compute-0 nova_compute[259627]: 2025-10-14 09:32:05.598 2 DEBUG nova.compute.manager [req-a70e7622-dac0-409f-9228-5bf5c39b8015 req-16928165-acbd-4375-96a5-acbd82ae0d09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Refreshing instance network info cache due to event network-changed-02b93e6c-e8fd-4ab1-bd57-84775ae34da2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:32:05 compute-0 nova_compute[259627]: 2025-10-14 09:32:05.599 2 DEBUG oslo_concurrency.lockutils [req-a70e7622-dac0-409f-9228-5bf5c39b8015 req-16928165-acbd-4375-96a5-acbd82ae0d09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:32:05 compute-0 nova_compute[259627]: 2025-10-14 09:32:05.599 2 DEBUG oslo_concurrency.lockutils [req-a70e7622-dac0-409f-9228-5bf5c39b8015 req-16928165-acbd-4375-96a5-acbd82ae0d09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:32:05 compute-0 nova_compute[259627]: 2025-10-14 09:32:05.600 2 DEBUG nova.network.neutron [req-a70e7622-dac0-409f-9228-5bf5c39b8015 req-16928165-acbd-4375-96a5-acbd82ae0d09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Refreshing network info cache for port 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:32:05 compute-0 nova_compute[259627]: 2025-10-14 09:32:05.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:32:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3013312311' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:32:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:32:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3013312311' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:32:05 compute-0 ceph-mon[74249]: pgmap v2368: 305 pgs: 305 active+clean; 213 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 14 09:32:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3013312311' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:32:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3013312311' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:32:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Oct 14 09:32:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:07.048 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:07.048 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:07.049 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:07 compute-0 nova_compute[259627]: 2025-10-14 09:32:07.095 2 DEBUG nova.network.neutron [req-a70e7622-dac0-409f-9228-5bf5c39b8015 req-16928165-acbd-4375-96a5-acbd82ae0d09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updated VIF entry in instance network info cache for port 02b93e6c-e8fd-4ab1-bd57-84775ae34da2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:32:07 compute-0 nova_compute[259627]: 2025-10-14 09:32:07.096 2 DEBUG nova.network.neutron [req-a70e7622-dac0-409f-9228-5bf5c39b8015 req-16928165-acbd-4375-96a5-acbd82ae0d09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updating instance_info_cache with network_info: [{"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:32:07 compute-0 nova_compute[259627]: 2025-10-14 09:32:07.115 2 DEBUG oslo_concurrency.lockutils [req-a70e7622-dac0-409f-9228-5bf5c39b8015 req-16928165-acbd-4375-96a5-acbd82ae0d09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:32:07 compute-0 ovn_controller[152662]: 2025-10-14T09:32:07Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:cb:ea 10.100.0.7
Oct 14 09:32:07 compute-0 ovn_controller[152662]: 2025-10-14T09:32:07Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:cb:ea 10.100.0.7
Oct 14 09:32:07 compute-0 ceph-mon[74249]: pgmap v2369: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Oct 14 09:32:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:32:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 762 KiB/s wr, 150 op/s
Oct 14 09:32:09 compute-0 sudo[402646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:32:09 compute-0 sudo[402646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:32:09 compute-0 sudo[402646]: pam_unix(sudo:session): session closed for user root
Oct 14 09:32:09 compute-0 sudo[402671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:32:09 compute-0 sudo[402671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:32:09 compute-0 sudo[402671]: pam_unix(sudo:session): session closed for user root
Oct 14 09:32:09 compute-0 sudo[402696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:32:09 compute-0 sudo[402696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:32:09 compute-0 sudo[402696]: pam_unix(sudo:session): session closed for user root
Oct 14 09:32:09 compute-0 sudo[402721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:32:09 compute-0 sudo[402721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:32:09 compute-0 nova_compute[259627]: 2025-10-14 09:32:09.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:10 compute-0 ceph-mon[74249]: pgmap v2370: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 762 KiB/s wr, 150 op/s
Oct 14 09:32:10 compute-0 sudo[402721]: pam_unix(sudo:session): session closed for user root
Oct 14 09:32:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:32:10 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:32:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:32:10 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:32:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:32:10 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:32:10 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 8d00cfb5-c2f9-4c6b-82fd-4604b588c2e6 does not exist
Oct 14 09:32:10 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 0d30dccf-aca4-47eb-a56f-f18c35d77cdf does not exist
Oct 14 09:32:10 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev adb44b83-7c9f-4cdb-849e-3e262510d73a does not exist
Oct 14 09:32:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:32:10 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:32:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:32:10 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:32:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:32:10 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:32:10 compute-0 sudo[402775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:32:10 compute-0 sudo[402775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:32:10 compute-0 sudo[402775]: pam_unix(sudo:session): session closed for user root
Oct 14 09:32:10 compute-0 sudo[402800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:32:10 compute-0 sudo[402800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:32:10 compute-0 sudo[402800]: pam_unix(sudo:session): session closed for user root
Oct 14 09:32:10 compute-0 sudo[402825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:32:10 compute-0 sudo[402825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:32:10 compute-0 sudo[402825]: pam_unix(sudo:session): session closed for user root
Oct 14 09:32:10 compute-0 sudo[402850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:32:10 compute-0 sudo[402850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:32:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 225 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 172 op/s
Oct 14 09:32:10 compute-0 nova_compute[259627]: 2025-10-14 09:32:10.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:10 compute-0 podman[402917]: 2025-10-14 09:32:10.829905609 +0000 UTC m=+0.044214256 container create d727975578b25c525e7d7487f84652edd10fb507d534fb60c908d68eaa395b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:32:10 compute-0 systemd[1]: Started libpod-conmon-d727975578b25c525e7d7487f84652edd10fb507d534fb60c908d68eaa395b1c.scope.
Oct 14 09:32:10 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:32:10 compute-0 podman[402917]: 2025-10-14 09:32:10.80834544 +0000 UTC m=+0.022654107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:32:10 compute-0 podman[402917]: 2025-10-14 09:32:10.910288652 +0000 UTC m=+0.124597329 container init d727975578b25c525e7d7487f84652edd10fb507d534fb60c908d68eaa395b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hypatia, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 09:32:10 compute-0 podman[402917]: 2025-10-14 09:32:10.916895054 +0000 UTC m=+0.131203701 container start d727975578b25c525e7d7487f84652edd10fb507d534fb60c908d68eaa395b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hypatia, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 09:32:10 compute-0 podman[402917]: 2025-10-14 09:32:10.920041621 +0000 UTC m=+0.134350298 container attach d727975578b25c525e7d7487f84652edd10fb507d534fb60c908d68eaa395b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:32:10 compute-0 eager_hypatia[402933]: 167 167
Oct 14 09:32:10 compute-0 systemd[1]: libpod-d727975578b25c525e7d7487f84652edd10fb507d534fb60c908d68eaa395b1c.scope: Deactivated successfully.
Oct 14 09:32:10 compute-0 podman[402917]: 2025-10-14 09:32:10.923329662 +0000 UTC m=+0.137638299 container died d727975578b25c525e7d7487f84652edd10fb507d534fb60c908d68eaa395b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hypatia, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 14 09:32:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c9c59c85d428861b8d953ac095be1036292a79a548681e5417d1e76059b26b0-merged.mount: Deactivated successfully.
Oct 14 09:32:10 compute-0 podman[402917]: 2025-10-14 09:32:10.965967418 +0000 UTC m=+0.180276065 container remove d727975578b25c525e7d7487f84652edd10fb507d534fb60c908d68eaa395b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:32:10 compute-0 systemd[1]: libpod-conmon-d727975578b25c525e7d7487f84652edd10fb507d534fb60c908d68eaa395b1c.scope: Deactivated successfully.
Oct 14 09:32:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:32:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:32:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:32:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:32:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:32:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:32:11 compute-0 ceph-mon[74249]: pgmap v2371: 305 pgs: 305 active+clean; 225 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 172 op/s
Oct 14 09:32:11 compute-0 podman[402957]: 2025-10-14 09:32:11.162261095 +0000 UTC m=+0.041610452 container create 9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 09:32:11 compute-0 systemd[1]: Started libpod-conmon-9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3.scope.
Oct 14 09:32:11 compute-0 podman[402957]: 2025-10-14 09:32:11.142255634 +0000 UTC m=+0.021605021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:32:11 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f031ee2e7e19844c8f19b9cc57f6c89a0f91ebb7a65b5a6722b71d98a6ff2be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f031ee2e7e19844c8f19b9cc57f6c89a0f91ebb7a65b5a6722b71d98a6ff2be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f031ee2e7e19844c8f19b9cc57f6c89a0f91ebb7a65b5a6722b71d98a6ff2be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f031ee2e7e19844c8f19b9cc57f6c89a0f91ebb7a65b5a6722b71d98a6ff2be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f031ee2e7e19844c8f19b9cc57f6c89a0f91ebb7a65b5a6722b71d98a6ff2be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:32:11 compute-0 podman[402957]: 2025-10-14 09:32:11.272817078 +0000 UTC m=+0.152166485 container init 9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mirzakhani, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 09:32:11 compute-0 podman[402957]: 2025-10-14 09:32:11.284649519 +0000 UTC m=+0.163998876 container start 9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:32:11 compute-0 podman[402957]: 2025-10-14 09:32:11.287712804 +0000 UTC m=+0.167062161 container attach 9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 09:32:12 compute-0 eager_mirzakhani[402973]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:32:12 compute-0 eager_mirzakhani[402973]: --> relative data size: 1.0
Oct 14 09:32:12 compute-0 eager_mirzakhani[402973]: --> All data devices are unavailable
Oct 14 09:32:12 compute-0 systemd[1]: libpod-9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3.scope: Deactivated successfully.
Oct 14 09:32:12 compute-0 systemd[1]: libpod-9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3.scope: Consumed 1.046s CPU time.
Oct 14 09:32:12 compute-0 podman[402957]: 2025-10-14 09:32:12.41630374 +0000 UTC m=+1.295653127 container died 9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:32:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f031ee2e7e19844c8f19b9cc57f6c89a0f91ebb7a65b5a6722b71d98a6ff2be-merged.mount: Deactivated successfully.
Oct 14 09:32:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 246 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.1 MiB/s wr, 180 op/s
Oct 14 09:32:12 compute-0 podman[402957]: 2025-10-14 09:32:12.495057192 +0000 UTC m=+1.374406579 container remove 9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mirzakhani, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:32:12 compute-0 systemd[1]: libpod-conmon-9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3.scope: Deactivated successfully.
Oct 14 09:32:12 compute-0 sudo[402850]: pam_unix(sudo:session): session closed for user root
Oct 14 09:32:12 compute-0 sudo[403016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:32:12 compute-0 sudo[403016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:32:12 compute-0 sudo[403016]: pam_unix(sudo:session): session closed for user root
Oct 14 09:32:12 compute-0 sudo[403041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:32:12 compute-0 sudo[403041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:32:12 compute-0 sudo[403041]: pam_unix(sudo:session): session closed for user root
Oct 14 09:32:12 compute-0 sudo[403066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:32:12 compute-0 sudo[403066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:32:12 compute-0 sudo[403066]: pam_unix(sudo:session): session closed for user root
Oct 14 09:32:12 compute-0 sudo[403091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:32:12 compute-0 sudo[403091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:32:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:32:13 compute-0 podman[403156]: 2025-10-14 09:32:13.398755179 +0000 UTC m=+0.057369839 container create b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 09:32:13 compute-0 systemd[1]: Started libpod-conmon-b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3.scope.
Oct 14 09:32:13 compute-0 podman[403156]: 2025-10-14 09:32:13.380815059 +0000 UTC m=+0.039429699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:32:13 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:32:13 compute-0 podman[403156]: 2025-10-14 09:32:13.503372446 +0000 UTC m=+0.161987156 container init b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:32:13 compute-0 podman[403156]: 2025-10-14 09:32:13.511364192 +0000 UTC m=+0.169978822 container start b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:32:13 compute-0 podman[403156]: 2025-10-14 09:32:13.516364254 +0000 UTC m=+0.174978954 container attach b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 09:32:13 compute-0 stupefied_morse[403172]: 167 167
Oct 14 09:32:13 compute-0 systemd[1]: libpod-b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3.scope: Deactivated successfully.
Oct 14 09:32:13 compute-0 conmon[403172]: conmon b1f35623487b56b73234 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3.scope/container/memory.events
Oct 14 09:32:13 compute-0 ceph-mon[74249]: pgmap v2372: 305 pgs: 305 active+clean; 246 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.1 MiB/s wr, 180 op/s
Oct 14 09:32:13 compute-0 podman[403177]: 2025-10-14 09:32:13.564475095 +0000 UTC m=+0.030791947 container died b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 09:32:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e19c70f4d3ce9a933121a0d0bdde3f224053ab1865a2c24d69fe9764613d348-merged.mount: Deactivated successfully.
Oct 14 09:32:13 compute-0 podman[403177]: 2025-10-14 09:32:13.633424877 +0000 UTC m=+0.099741739 container remove b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:32:13 compute-0 systemd[1]: libpod-conmon-b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3.scope: Deactivated successfully.
Oct 14 09:32:13 compute-0 ovn_controller[152662]: 2025-10-14T09:32:13Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:f3:32 10.100.0.5
Oct 14 09:32:13 compute-0 ovn_controller[152662]: 2025-10-14T09:32:13Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:f3:32 10.100.0.5
Oct 14 09:32:13 compute-0 podman[403199]: 2025-10-14 09:32:13.880683285 +0000 UTC m=+0.061099061 container create 0fa62734ed952236fea311af73c9687671e0578654f34b5d955899719ec35193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 09:32:13 compute-0 systemd[1]: Started libpod-conmon-0fa62734ed952236fea311af73c9687671e0578654f34b5d955899719ec35193.scope.
Oct 14 09:32:13 compute-0 podman[403199]: 2025-10-14 09:32:13.850997156 +0000 UTC m=+0.031412992 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:32:13 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32fb41351e8c9322bd06d4188a235f4a94a8c56398a9cca785e6c10b608fd153/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32fb41351e8c9322bd06d4188a235f4a94a8c56398a9cca785e6c10b608fd153/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32fb41351e8c9322bd06d4188a235f4a94a8c56398a9cca785e6c10b608fd153/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32fb41351e8c9322bd06d4188a235f4a94a8c56398a9cca785e6c10b608fd153/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:32:13 compute-0 podman[403199]: 2025-10-14 09:32:13.977503111 +0000 UTC m=+0.157918857 container init 0fa62734ed952236fea311af73c9687671e0578654f34b5d955899719ec35193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:32:13 compute-0 podman[403199]: 2025-10-14 09:32:13.990637353 +0000 UTC m=+0.171053099 container start 0fa62734ed952236fea311af73c9687671e0578654f34b5d955899719ec35193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_beaver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 09:32:13 compute-0 podman[403199]: 2025-10-14 09:32:13.99500476 +0000 UTC m=+0.175420526 container attach 0fa62734ed952236fea311af73c9687671e0578654f34b5d955899719ec35193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_beaver, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 09:32:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 246 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 14 09:32:14 compute-0 nova_compute[259627]: 2025-10-14 09:32:14.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:14 compute-0 focused_beaver[403216]: {
Oct 14 09:32:14 compute-0 focused_beaver[403216]:     "0": [
Oct 14 09:32:14 compute-0 focused_beaver[403216]:         {
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "devices": [
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "/dev/loop3"
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             ],
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "lv_name": "ceph_lv0",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "lv_size": "21470642176",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "name": "ceph_lv0",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "tags": {
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.cluster_name": "ceph",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.crush_device_class": "",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.encrypted": "0",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.osd_id": "0",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.type": "block",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.vdo": "0"
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             },
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "type": "block",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "vg_name": "ceph_vg0"
Oct 14 09:32:14 compute-0 focused_beaver[403216]:         }
Oct 14 09:32:14 compute-0 focused_beaver[403216]:     ],
Oct 14 09:32:14 compute-0 focused_beaver[403216]:     "1": [
Oct 14 09:32:14 compute-0 focused_beaver[403216]:         {
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "devices": [
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "/dev/loop4"
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             ],
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "lv_name": "ceph_lv1",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "lv_size": "21470642176",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "name": "ceph_lv1",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "tags": {
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.cluster_name": "ceph",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.crush_device_class": "",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.encrypted": "0",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.osd_id": "1",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.type": "block",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.vdo": "0"
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             },
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "type": "block",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "vg_name": "ceph_vg1"
Oct 14 09:32:14 compute-0 focused_beaver[403216]:         }
Oct 14 09:32:14 compute-0 focused_beaver[403216]:     ],
Oct 14 09:32:14 compute-0 focused_beaver[403216]:     "2": [
Oct 14 09:32:14 compute-0 focused_beaver[403216]:         {
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "devices": [
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "/dev/loop5"
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             ],
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "lv_name": "ceph_lv2",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "lv_size": "21470642176",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "name": "ceph_lv2",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "tags": {
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.cluster_name": "ceph",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.crush_device_class": "",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.encrypted": "0",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.osd_id": "2",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.type": "block",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:                 "ceph.vdo": "0"
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             },
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "type": "block",
Oct 14 09:32:14 compute-0 focused_beaver[403216]:             "vg_name": "ceph_vg2"
Oct 14 09:32:14 compute-0 focused_beaver[403216]:         }
Oct 14 09:32:14 compute-0 focused_beaver[403216]:     ]
Oct 14 09:32:14 compute-0 focused_beaver[403216]: }
Oct 14 09:32:14 compute-0 systemd[1]: libpod-0fa62734ed952236fea311af73c9687671e0578654f34b5d955899719ec35193.scope: Deactivated successfully.
Oct 14 09:32:14 compute-0 podman[403199]: 2025-10-14 09:32:14.825808148 +0000 UTC m=+1.006223894 container died 0fa62734ed952236fea311af73c9687671e0578654f34b5d955899719ec35193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_beaver, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 09:32:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-32fb41351e8c9322bd06d4188a235f4a94a8c56398a9cca785e6c10b608fd153-merged.mount: Deactivated successfully.
Oct 14 09:32:14 compute-0 podman[403199]: 2025-10-14 09:32:14.921519257 +0000 UTC m=+1.101935043 container remove 0fa62734ed952236fea311af73c9687671e0578654f34b5d955899719ec35193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:32:14 compute-0 systemd[1]: libpod-conmon-0fa62734ed952236fea311af73c9687671e0578654f34b5d955899719ec35193.scope: Deactivated successfully.
Oct 14 09:32:14 compute-0 sudo[403091]: pam_unix(sudo:session): session closed for user root
Oct 14 09:32:15 compute-0 sudo[403239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:32:15 compute-0 sudo[403239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:32:15 compute-0 sudo[403239]: pam_unix(sudo:session): session closed for user root
Oct 14 09:32:15 compute-0 sudo[403264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:32:15 compute-0 sudo[403264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:32:15 compute-0 sudo[403264]: pam_unix(sudo:session): session closed for user root
Oct 14 09:32:15 compute-0 sudo[403289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:32:15 compute-0 sudo[403289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:32:15 compute-0 sudo[403289]: pam_unix(sudo:session): session closed for user root
Oct 14 09:32:15 compute-0 sudo[403314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:32:15 compute-0 sudo[403314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:32:15 compute-0 ceph-mon[74249]: pgmap v2373: 305 pgs: 305 active+clean; 246 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 14 09:32:15 compute-0 podman[403380]: 2025-10-14 09:32:15.645620777 +0000 UTC m=+0.050159402 container create fc2ed0c56527772f26e43319e2aadbbd4bd4e78fb6b16917525bac00f00718ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hopper, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:32:15 compute-0 systemd[1]: Started libpod-conmon-fc2ed0c56527772f26e43319e2aadbbd4bd4e78fb6b16917525bac00f00718ac.scope.
Oct 14 09:32:15 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:32:15 compute-0 podman[403380]: 2025-10-14 09:32:15.625198356 +0000 UTC m=+0.029736971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:32:15 compute-0 nova_compute[259627]: 2025-10-14 09:32:15.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:15 compute-0 podman[403380]: 2025-10-14 09:32:15.787795416 +0000 UTC m=+0.192334031 container init fc2ed0c56527772f26e43319e2aadbbd4bd4e78fb6b16917525bac00f00718ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:32:15 compute-0 podman[403380]: 2025-10-14 09:32:15.797787651 +0000 UTC m=+0.202326256 container start fc2ed0c56527772f26e43319e2aadbbd4bd4e78fb6b16917525bac00f00718ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hopper, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:32:15 compute-0 podman[403380]: 2025-10-14 09:32:15.801536543 +0000 UTC m=+0.206075168 container attach fc2ed0c56527772f26e43319e2aadbbd4bd4e78fb6b16917525bac00f00718ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hopper, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:32:15 compute-0 epic_hopper[403396]: 167 167
Oct 14 09:32:15 compute-0 podman[403380]: 2025-10-14 09:32:15.807436268 +0000 UTC m=+0.211974873 container died fc2ed0c56527772f26e43319e2aadbbd4bd4e78fb6b16917525bac00f00718ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hopper, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:32:15 compute-0 systemd[1]: libpod-fc2ed0c56527772f26e43319e2aadbbd4bd4e78fb6b16917525bac00f00718ac.scope: Deactivated successfully.
Oct 14 09:32:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-910d394eea831c5a41c3f307d5712c12b56cab6e10854c75cfc0246836beebe6-merged.mount: Deactivated successfully.
Oct 14 09:32:15 compute-0 podman[403380]: 2025-10-14 09:32:15.853249382 +0000 UTC m=+0.257787987 container remove fc2ed0c56527772f26e43319e2aadbbd4bd4e78fb6b16917525bac00f00718ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hopper, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 14 09:32:15 compute-0 systemd[1]: libpod-conmon-fc2ed0c56527772f26e43319e2aadbbd4bd4e78fb6b16917525bac00f00718ac.scope: Deactivated successfully.
Oct 14 09:32:15 compute-0 podman[403399]: 2025-10-14 09:32:15.873621792 +0000 UTC m=+0.085418977 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 14 09:32:15 compute-0 podman[403400]: 2025-10-14 09:32:15.884495869 +0000 UTC m=+0.090150934 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:32:16 compute-0 podman[403457]: 2025-10-14 09:32:16.072939493 +0000 UTC m=+0.056436036 container create 1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 09:32:16 compute-0 systemd[1]: Started libpod-conmon-1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb.scope.
Oct 14 09:32:16 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ab96687edc86e5a634307c4cb6477253ee7aba6c036856e8c430573c542e33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:32:16 compute-0 podman[403457]: 2025-10-14 09:32:16.050700257 +0000 UTC m=+0.034196770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ab96687edc86e5a634307c4cb6477253ee7aba6c036856e8c430573c542e33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ab96687edc86e5a634307c4cb6477253ee7aba6c036856e8c430573c542e33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ab96687edc86e5a634307c4cb6477253ee7aba6c036856e8c430573c542e33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:32:16 compute-0 podman[403457]: 2025-10-14 09:32:16.156517624 +0000 UTC m=+0.140014147 container init 1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_colden, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 09:32:16 compute-0 podman[403457]: 2025-10-14 09:32:16.167828632 +0000 UTC m=+0.151325165 container start 1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 09:32:16 compute-0 podman[403457]: 2025-10-14 09:32:16.172815854 +0000 UTC m=+0.156312387 container attach 1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_colden, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:32:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 202 op/s
Oct 14 09:32:17 compute-0 determined_colden[403473]: {
Oct 14 09:32:17 compute-0 determined_colden[403473]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:32:17 compute-0 determined_colden[403473]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:32:17 compute-0 determined_colden[403473]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:32:17 compute-0 determined_colden[403473]:         "osd_id": 2,
Oct 14 09:32:17 compute-0 determined_colden[403473]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:32:17 compute-0 determined_colden[403473]:         "type": "bluestore"
Oct 14 09:32:17 compute-0 determined_colden[403473]:     },
Oct 14 09:32:17 compute-0 determined_colden[403473]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:32:17 compute-0 determined_colden[403473]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:32:17 compute-0 determined_colden[403473]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:32:17 compute-0 determined_colden[403473]:         "osd_id": 1,
Oct 14 09:32:17 compute-0 determined_colden[403473]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:32:17 compute-0 determined_colden[403473]:         "type": "bluestore"
Oct 14 09:32:17 compute-0 determined_colden[403473]:     },
Oct 14 09:32:17 compute-0 determined_colden[403473]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:32:17 compute-0 determined_colden[403473]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:32:17 compute-0 determined_colden[403473]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:32:17 compute-0 determined_colden[403473]:         "osd_id": 0,
Oct 14 09:32:17 compute-0 determined_colden[403473]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:32:17 compute-0 determined_colden[403473]:         "type": "bluestore"
Oct 14 09:32:17 compute-0 determined_colden[403473]:     }
Oct 14 09:32:17 compute-0 determined_colden[403473]: }
Oct 14 09:32:17 compute-0 systemd[1]: libpod-1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb.scope: Deactivated successfully.
Oct 14 09:32:17 compute-0 podman[403457]: 2025-10-14 09:32:17.207124075 +0000 UTC m=+1.190620618 container died 1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_colden, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:32:17 compute-0 systemd[1]: libpod-1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb.scope: Consumed 1.046s CPU time.
Oct 14 09:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-55ab96687edc86e5a634307c4cb6477253ee7aba6c036856e8c430573c542e33-merged.mount: Deactivated successfully.
Oct 14 09:32:17 compute-0 podman[403457]: 2025-10-14 09:32:17.291107796 +0000 UTC m=+1.274604299 container remove 1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 09:32:17 compute-0 systemd[1]: libpod-conmon-1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb.scope: Deactivated successfully.
Oct 14 09:32:17 compute-0 sudo[403314]: pam_unix(sudo:session): session closed for user root
Oct 14 09:32:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:32:17 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:32:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:32:17 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:32:17 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 5fadefd6-041d-4d92-8e17-e10ba5c9a362 does not exist
Oct 14 09:32:17 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 9683c330-9cac-49de-bff4-07c0be46c575 does not exist
Oct 14 09:32:17 compute-0 sudo[403521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:32:17 compute-0 sudo[403521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:32:17 compute-0 sudo[403521]: pam_unix(sudo:session): session closed for user root
Oct 14 09:32:17 compute-0 ceph-mon[74249]: pgmap v2374: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 202 op/s
Oct 14 09:32:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:32:17 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:32:17 compute-0 sudo[403546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:32:17 compute-0 sudo[403546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:32:17 compute-0 sudo[403546]: pam_unix(sudo:session): session closed for user root
Oct 14 09:32:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:32:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 14 09:32:18 compute-0 nova_compute[259627]: 2025-10-14 09:32:18.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:32:19 compute-0 ceph-mon[74249]: pgmap v2375: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 14 09:32:19 compute-0 nova_compute[259627]: 2025-10-14 09:32:19.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.008 2 DEBUG nova.compute.manager [req-b26dbbaf-5d2a-42c6-adbc-e0bce35ce901 req-d34f25ae-27fe-4407-a6e3-1295b85a2c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-changed-63edf6de-b6e6-4be7-870e-062e8186ec37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.008 2 DEBUG nova.compute.manager [req-b26dbbaf-5d2a-42c6-adbc-e0bce35ce901 req-d34f25ae-27fe-4407-a6e3-1295b85a2c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Refreshing instance network info cache due to event network-changed-63edf6de-b6e6-4be7-870e-062e8186ec37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.009 2 DEBUG oslo_concurrency.lockutils [req-b26dbbaf-5d2a-42c6-adbc-e0bce35ce901 req-d34f25ae-27fe-4407-a6e3-1295b85a2c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.010 2 DEBUG oslo_concurrency.lockutils [req-b26dbbaf-5d2a-42c6-adbc-e0bce35ce901 req-d34f25ae-27fe-4407-a6e3-1295b85a2c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.010 2 DEBUG nova.network.neutron [req-b26dbbaf-5d2a-42c6-adbc-e0bce35ce901 req-d34f25ae-27fe-4407-a6e3-1295b85a2c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Refreshing network info cache for port 63edf6de-b6e6-4be7-870e-062e8186ec37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.013 2 DEBUG oslo_concurrency.lockutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.013 2 DEBUG oslo_concurrency.lockutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.014 2 DEBUG oslo_concurrency.lockutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.014 2 DEBUG oslo_concurrency.lockutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.014 2 DEBUG oslo_concurrency.lockutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.016 2 INFO nova.compute.manager [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Terminating instance
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.017 2 DEBUG nova.compute.manager [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:32:20 compute-0 kernel: tap63edf6de-b6 (unregistering): left promiscuous mode
Oct 14 09:32:20 compute-0 NetworkManager[44885]: <info>  [1760434340.0795] device (tap63edf6de-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:20 compute-0 ovn_controller[152662]: 2025-10-14T09:32:20Z|01502|binding|INFO|Releasing lport 63edf6de-b6e6-4be7-870e-062e8186ec37 from this chassis (sb_readonly=0)
Oct 14 09:32:20 compute-0 ovn_controller[152662]: 2025-10-14T09:32:20Z|01503|binding|INFO|Setting lport 63edf6de-b6e6-4be7-870e-062e8186ec37 down in Southbound
Oct 14 09:32:20 compute-0 ovn_controller[152662]: 2025-10-14T09:32:20Z|01504|binding|INFO|Removing iface tap63edf6de-b6 ovn-installed in OVS
Oct 14 09:32:20 compute-0 kernel: tap787335a5-4b (unregistering): left promiscuous mode
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:20 compute-0 NetworkManager[44885]: <info>  [1760434340.1091] device (tap787335a5-4b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.113 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:cb:ea 10.100.0.7'], port_security=['fa:16:3e:20:cb:ea 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c9315047-de1c-423a-adfa-118d77df3c94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c995d506-6f6f-4226-92d2-2bf5b9a23d7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f55c896-9e6e-44ae-a4b7-c1c60b86826e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=63edf6de-b6e6-4be7-870e-062e8186ec37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.115 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 63edf6de-b6e6-4be7-870e-062e8186ec37 in datapath 20dd724c-9d71-4931-8e8b-4dd3fbbacc17 unbound from our chassis
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.117 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20dd724c-9d71-4931-8e8b-4dd3fbbacc17
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.141 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[42735aaf-c777-4bf8-8e2a-1b0a362029e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:20 compute-0 ovn_controller[152662]: 2025-10-14T09:32:20Z|01505|binding|INFO|Releasing lport 787335a5-4b97-43d1-ba56-12091ebdecdb from this chassis (sb_readonly=0)
Oct 14 09:32:20 compute-0 ovn_controller[152662]: 2025-10-14T09:32:20Z|01506|binding|INFO|Setting lport 787335a5-4b97-43d1-ba56-12091ebdecdb down in Southbound
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:20 compute-0 ovn_controller[152662]: 2025-10-14T09:32:20Z|01507|binding|INFO|Removing iface tap787335a5-4b ovn-installed in OVS
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.167 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:a9:10 2001:db8::f816:3eff:fedf:a910'], port_security=['fa:16:3e:df:a9:10 2001:db8::f816:3eff:fedf:a910'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fedf:a910/64', 'neutron:device_id': 'c9315047-de1c-423a-adfa-118d77df3c94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1dc63515-48cf-4886-956d-024d1d9cb848', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c995d506-6f6f-4226-92d2-2bf5b9a23d7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=29b23756-77cb-4493-bd13-0170877e81b9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=787335a5-4b97-43d1-ba56-12091ebdecdb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.190 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c6beb8cf-cc36-42a2-9cca-a6f57ec2b687]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:20 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Oct 14 09:32:20 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008b.scope: Consumed 13.799s CPU time.
Oct 14 09:32:20 compute-0 systemd-machined[214636]: Machine qemu-172-instance-0000008b terminated.
Oct 14 09:32:20 compute-0 NetworkManager[44885]: <info>  [1760434340.2587] manager: (tap787335a5-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/613)
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.287 2 INFO nova.virt.libvirt.driver [-] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Instance destroyed successfully.
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.288 2 DEBUG nova.objects.instance [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid c9315047-de1c-423a-adfa-118d77df3c94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.310 2 DEBUG nova.virt.libvirt.vif [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:31:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-71039575',display_name='tempest-TestGettingAddress-server-71039575',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-71039575',id=139,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:31:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-s79hwes5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:31:56Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c9315047-de1c-423a-adfa-118d77df3c94,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.311 2 DEBUG nova.network.os_vif_util [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.311 2 DEBUG nova.network.os_vif_util [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:20:cb:ea,bridge_name='br-int',has_traffic_filtering=True,id=63edf6de-b6e6-4be7-870e-062e8186ec37,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63edf6de-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.312 2 DEBUG os_vif [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:cb:ea,bridge_name='br-int',has_traffic_filtering=True,id=63edf6de-b6e6-4be7-870e-062e8186ec37,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63edf6de-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.313 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63edf6de-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.317 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a400c5d3-daa1-45a1-8b97-752533829f07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.328 2 INFO os_vif [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:cb:ea,bridge_name='br-int',has_traffic_filtering=True,id=63edf6de-b6e6-4be7-870e-062e8186ec37,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63edf6de-b6')
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.329 2 DEBUG nova.virt.libvirt.vif [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:31:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-71039575',display_name='tempest-TestGettingAddress-server-71039575',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-71039575',id=139,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:31:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-s79hwes5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:31:56Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c9315047-de1c-423a-adfa-118d77df3c94,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.329 2 DEBUG nova.network.os_vif_util [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.330 2 DEBUG nova.network.os_vif_util [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:a9:10,bridge_name='br-int',has_traffic_filtering=True,id=787335a5-4b97-43d1-ba56-12091ebdecdb,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap787335a5-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.330 2 DEBUG os_vif [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:a9:10,bridge_name='br-int',has_traffic_filtering=True,id=787335a5-4b97-43d1-ba56-12091ebdecdb,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap787335a5-4b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.331 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap787335a5-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.340 2 INFO os_vif [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:a9:10,bridge_name='br-int',has_traffic_filtering=True,id=787335a5-4b97-43d1-ba56-12091ebdecdb,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap787335a5-4b')
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.363 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[66535d53-de4a-4202-bad9-2ac7d2c5ed1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.391 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1af3fe-c494-4d32-bd6a-8ab436f3ae65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20dd724c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:07:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804837, 'reachable_time': 24448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403626, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.410 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3788272b-fead-48b6-88e0-6ebdbc5341dc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap20dd724c-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804851, 'tstamp': 804851}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403627, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap20dd724c-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804855, 'tstamp': 804855}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403627, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.413 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20dd724c-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.417 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20dd724c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.418 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.418 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20dd724c-90, col_values=(('external_ids', {'iface-id': '1308be16-f790-4063-acf0-2c8f6fdde665'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.419 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.421 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 787335a5-4b97-43d1-ba56-12091ebdecdb in datapath 1dc63515-48cf-4886-956d-024d1d9cb848 unbound from our chassis
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.423 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1dc63515-48cf-4886-956d-024d1d9cb848
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.440 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6917ca09-814c-4570-a129-12de03f83e89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.471 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4df23189-9553-4202-afa2-37a3d5f00564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.474 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3c188d28-7ae8-4c61-9f37-6f286d88978a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.504 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[359568b0-a451-457c-98ac-62a1056dc014]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.519 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0b27a43f-7296-4c96-83a7-a94ad478d39f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1dc63515-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:e0:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 6, 'rx_bytes': 2912, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 6, 'rx_bytes': 2912, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804949, 'reachable_time': 43899, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 32, 'inoctets': 2464, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 32, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2464, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 32, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403633, 'error': None, 'target': 'ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.536 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[025e6e93-15dc-4efa-9908-1deb547da8b4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1dc63515-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804965, 'tstamp': 804965}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403634, 'error': None, 'target': 'ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.538 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1dc63515-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.542 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1dc63515-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.543 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.543 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1dc63515-40, col_values=(('external_ids', {'iface-id': '9754cafe-7819-456f-943e-9907e7f07233'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.544 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.729 2 INFO nova.virt.libvirt.driver [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Deleting instance files /var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94_del
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.730 2 INFO nova.virt.libvirt.driver [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Deletion of /var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94_del complete
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.809 2 INFO nova.compute.manager [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.809 2 DEBUG oslo.service.loopingcall [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.810 2 DEBUG nova.compute.manager [-] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:32:20 compute-0 nova_compute[259627]: 2025-10-14 09:32:20.810 2 DEBUG nova.network.neutron [-] [instance: c9315047-de1c-423a-adfa-118d77df3c94] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:32:21 compute-0 ceph-mon[74249]: pgmap v2376: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.099 2 DEBUG nova.compute.manager [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-unplugged-63edf6de-b6e6-4be7-870e-062e8186ec37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.099 2 DEBUG oslo_concurrency.lockutils [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.099 2 DEBUG oslo_concurrency.lockutils [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.100 2 DEBUG oslo_concurrency.lockutils [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.100 2 DEBUG nova.compute.manager [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] No waiting events found dispatching network-vif-unplugged-63edf6de-b6e6-4be7-870e-062e8186ec37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.100 2 DEBUG nova.compute.manager [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-unplugged-63edf6de-b6e6-4be7-870e-062e8186ec37 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.101 2 DEBUG nova.compute.manager [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-plugged-63edf6de-b6e6-4be7-870e-062e8186ec37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.101 2 DEBUG oslo_concurrency.lockutils [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.102 2 DEBUG oslo_concurrency.lockutils [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.102 2 DEBUG oslo_concurrency.lockutils [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.103 2 DEBUG nova.compute.manager [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] No waiting events found dispatching network-vif-plugged-63edf6de-b6e6-4be7-870e-062e8186ec37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.103 2 WARNING nova.compute.manager [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received unexpected event network-vif-plugged-63edf6de-b6e6-4be7-870e-062e8186ec37 for instance with vm_state active and task_state deleting.
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.104 2 DEBUG nova.compute.manager [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-unplugged-787335a5-4b97-43d1-ba56-12091ebdecdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.104 2 DEBUG oslo_concurrency.lockutils [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.105 2 DEBUG oslo_concurrency.lockutils [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.105 2 DEBUG oslo_concurrency.lockutils [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.106 2 DEBUG nova.compute.manager [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] No waiting events found dispatching network-vif-unplugged-787335a5-4b97-43d1-ba56-12091ebdecdb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.106 2 DEBUG nova.compute.manager [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-unplugged-787335a5-4b97-43d1-ba56-12091ebdecdb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:32:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 257 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 582 KiB/s rd, 3.5 MiB/s wr, 113 op/s
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.541 2 DEBUG nova.compute.manager [req-798e3814-1e40-4820-a8bd-d905556750bd req-56c53f70-944f-408b-af5b-aa111d1fd30e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-deleted-787335a5-4b97-43d1-ba56-12091ebdecdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.542 2 INFO nova.compute.manager [req-798e3814-1e40-4820-a8bd-d905556750bd req-56c53f70-944f-408b-af5b-aa111d1fd30e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Neutron deleted interface 787335a5-4b97-43d1-ba56-12091ebdecdb; detaching it from the instance and deleting it from the info cache
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.542 2 DEBUG nova.network.neutron [req-798e3814-1e40-4820-a8bd-d905556750bd req-56c53f70-944f-408b-af5b-aa111d1fd30e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updating instance_info_cache with network_info: [{"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.566 2 DEBUG nova.compute.manager [req-798e3814-1e40-4820-a8bd-d905556750bd req-56c53f70-944f-408b-af5b-aa111d1fd30e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Detach interface failed, port_id=787335a5-4b97-43d1-ba56-12091ebdecdb, reason: Instance c9315047-de1c-423a-adfa-118d77df3c94 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.821 2 DEBUG nova.network.neutron [req-b26dbbaf-5d2a-42c6-adbc-e0bce35ce901 req-d34f25ae-27fe-4407-a6e3-1295b85a2c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updated VIF entry in instance network info cache for port 63edf6de-b6e6-4be7-870e-062e8186ec37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.822 2 DEBUG nova.network.neutron [req-b26dbbaf-5d2a-42c6-adbc-e0bce35ce901 req-d34f25ae-27fe-4407-a6e3-1295b85a2c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updating instance_info_cache with network_info: [{"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.838 2 DEBUG oslo_concurrency.lockutils [req-b26dbbaf-5d2a-42c6-adbc-e0bce35ce901 req-d34f25ae-27fe-4407-a6e3-1295b85a2c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.905 2 DEBUG nova.network.neutron [-] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.923 2 INFO nova.compute.manager [-] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Took 2.11 seconds to deallocate network for instance.
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.962 2 DEBUG oslo_concurrency.lockutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:22 compute-0 nova_compute[259627]: 2025-10-14 09:32:22.963 2 DEBUG oslo_concurrency.lockutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:23 compute-0 nova_compute[259627]: 2025-10-14 09:32:23.052 2 DEBUG oslo_concurrency.processutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:32:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:32:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:32:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2734996394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:32:23 compute-0 nova_compute[259627]: 2025-10-14 09:32:23.479 2 DEBUG oslo_concurrency.processutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:32:23 compute-0 nova_compute[259627]: 2025-10-14 09:32:23.487 2 DEBUG nova.compute.provider_tree [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:32:23 compute-0 nova_compute[259627]: 2025-10-14 09:32:23.505 2 DEBUG nova.scheduler.client.report [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:32:23 compute-0 nova_compute[259627]: 2025-10-14 09:32:23.528 2 DEBUG oslo_concurrency.lockutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:23 compute-0 nova_compute[259627]: 2025-10-14 09:32:23.549 2 INFO nova.scheduler.client.report [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance c9315047-de1c-423a-adfa-118d77df3c94
Oct 14 09:32:23 compute-0 ceph-mon[74249]: pgmap v2377: 305 pgs: 305 active+clean; 257 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 582 KiB/s rd, 3.5 MiB/s wr, 113 op/s
Oct 14 09:32:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2734996394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:32:23 compute-0 nova_compute[259627]: 2025-10-14 09:32:23.621 2 DEBUG oslo_concurrency.lockutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:24 compute-0 nova_compute[259627]: 2025-10-14 09:32:24.231 2 DEBUG nova.compute.manager [req-8943a648-7f76-4092-a197-b7423ac061d3 req-0c8a0c55-0742-47f8-99c0-0df339abe3cb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-plugged-787335a5-4b97-43d1-ba56-12091ebdecdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:24 compute-0 nova_compute[259627]: 2025-10-14 09:32:24.232 2 DEBUG oslo_concurrency.lockutils [req-8943a648-7f76-4092-a197-b7423ac061d3 req-0c8a0c55-0742-47f8-99c0-0df339abe3cb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:24 compute-0 nova_compute[259627]: 2025-10-14 09:32:24.232 2 DEBUG oslo_concurrency.lockutils [req-8943a648-7f76-4092-a197-b7423ac061d3 req-0c8a0c55-0742-47f8-99c0-0df339abe3cb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:24 compute-0 nova_compute[259627]: 2025-10-14 09:32:24.233 2 DEBUG oslo_concurrency.lockutils [req-8943a648-7f76-4092-a197-b7423ac061d3 req-0c8a0c55-0742-47f8-99c0-0df339abe3cb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:24 compute-0 nova_compute[259627]: 2025-10-14 09:32:24.233 2 DEBUG nova.compute.manager [req-8943a648-7f76-4092-a197-b7423ac061d3 req-0c8a0c55-0742-47f8-99c0-0df339abe3cb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] No waiting events found dispatching network-vif-plugged-787335a5-4b97-43d1-ba56-12091ebdecdb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:32:24 compute-0 nova_compute[259627]: 2025-10-14 09:32:24.233 2 WARNING nova.compute.manager [req-8943a648-7f76-4092-a197-b7423ac061d3 req-0c8a0c55-0742-47f8-99c0-0df339abe3cb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received unexpected event network-vif-plugged-787335a5-4b97-43d1-ba56-12091ebdecdb for instance with vm_state deleted and task_state None.
Oct 14 09:32:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 257 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Oct 14 09:32:24 compute-0 nova_compute[259627]: 2025-10-14 09:32:24.706 2 DEBUG nova.compute.manager [req-d0959d75-d15e-411c-82d5-ad6b1bd1a259 req-d0e868e3-b44f-43f6-b960-a2e610b3dd3f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-deleted-63edf6de-b6e6-4be7-870e-062e8186ec37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.502 2 DEBUG oslo_concurrency.lockutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.503 2 DEBUG oslo_concurrency.lockutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.503 2 DEBUG oslo_concurrency.lockutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.504 2 DEBUG oslo_concurrency.lockutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.504 2 DEBUG oslo_concurrency.lockutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.506 2 INFO nova.compute.manager [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Terminating instance
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.507 2 DEBUG nova.compute.manager [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:32:25 compute-0 kernel: tapf2639397-8f (unregistering): left promiscuous mode
Oct 14 09:32:25 compute-0 NetworkManager[44885]: <info>  [1760434345.5681] device (tapf2639397-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:32:25 compute-0 ovn_controller[152662]: 2025-10-14T09:32:25Z|01508|binding|INFO|Releasing lport f2639397-8fb2-4541-a298-fd68219e1e47 from this chassis (sb_readonly=0)
Oct 14 09:32:25 compute-0 ovn_controller[152662]: 2025-10-14T09:32:25Z|01509|binding|INFO|Setting lport f2639397-8fb2-4541-a298-fd68219e1e47 down in Southbound
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:25 compute-0 ovn_controller[152662]: 2025-10-14T09:32:25Z|01510|binding|INFO|Removing iface tapf2639397-8f ovn-installed in OVS
Oct 14 09:32:25 compute-0 ceph-mon[74249]: pgmap v2378: 305 pgs: 305 active+clean; 257 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.591 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:1e:d0 10.100.0.12'], port_security=['fa:16:3e:a4:1e:d0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b30a994a-5fb7-4344-9944-98d3d75d3b04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c995d506-6f6f-4226-92d2-2bf5b9a23d7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f55c896-9e6e-44ae-a4b7-c1c60b86826e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f2639397-8fb2-4541-a298-fd68219e1e47) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:32:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.593 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f2639397-8fb2-4541-a298-fd68219e1e47 in datapath 20dd724c-9d71-4931-8e8b-4dd3fbbacc17 unbound from our chassis
Oct 14 09:32:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.594 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 20dd724c-9d71-4931-8e8b-4dd3fbbacc17, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:32:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.595 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[46599594-f11e-497e-a47b-17c61cb6cff0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.595 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17 namespace which is not needed anymore
Oct 14 09:32:25 compute-0 kernel: tapc22b3ec2-f5 (unregistering): left promiscuous mode
Oct 14 09:32:25 compute-0 NetworkManager[44885]: <info>  [1760434345.6146] device (tapc22b3ec2-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:25 compute-0 ovn_controller[152662]: 2025-10-14T09:32:25Z|01511|binding|INFO|Releasing lport c22b3ec2-f5a1-4c97-8648-a463e9e12545 from this chassis (sb_readonly=0)
Oct 14 09:32:25 compute-0 ovn_controller[152662]: 2025-10-14T09:32:25Z|01512|binding|INFO|Setting lport c22b3ec2-f5a1-4c97-8648-a463e9e12545 down in Southbound
Oct 14 09:32:25 compute-0 ovn_controller[152662]: 2025-10-14T09:32:25Z|01513|binding|INFO|Removing iface tapc22b3ec2-f5 ovn-installed in OVS
Oct 14 09:32:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.634 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:84:75 2001:db8::f816:3eff:fecf:8475'], port_security=['fa:16:3e:cf:84:75 2001:db8::f816:3eff:fecf:8475'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fecf:8475/64', 'neutron:device_id': 'b30a994a-5fb7-4344-9944-98d3d75d3b04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1dc63515-48cf-4886-956d-024d1d9cb848', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c995d506-6f6f-4226-92d2-2bf5b9a23d7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=29b23756-77cb-4493-bd13-0170877e81b9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c22b3ec2-f5a1-4c97-8648-a463e9e12545) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:25 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000089.scope: Deactivated successfully.
Oct 14 09:32:25 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000089.scope: Consumed 16.378s CPU time.
Oct 14 09:32:25 compute-0 systemd-machined[214636]: Machine qemu-171-instance-00000089 terminated.
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:25 compute-0 NetworkManager[44885]: <info>  [1760434345.7464] manager: (tapc22b3ec2-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/614)
Oct 14 09:32:25 compute-0 neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17[401296]: [NOTICE]   (401300) : haproxy version is 2.8.14-c23fe91
Oct 14 09:32:25 compute-0 neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17[401296]: [NOTICE]   (401300) : path to executable is /usr/sbin/haproxy
Oct 14 09:32:25 compute-0 neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17[401296]: [WARNING]  (401300) : Exiting Master process...
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:25 compute-0 neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17[401296]: [ALERT]    (401300) : Current worker (401302) exited with code 143 (Terminated)
Oct 14 09:32:25 compute-0 neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17[401296]: [WARNING]  (401300) : All workers exited. Exiting... (0)
Oct 14 09:32:25 compute-0 systemd[1]: libpod-2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406.scope: Deactivated successfully.
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.759 2 INFO nova.virt.libvirt.driver [-] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Instance destroyed successfully.
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.759 2 DEBUG nova.objects.instance [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid b30a994a-5fb7-4344-9944-98d3d75d3b04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:32:25 compute-0 podman[403684]: 2025-10-14 09:32:25.761066228 +0000 UTC m=+0.050436559 container died 2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.777 2 DEBUG nova.virt.libvirt.vif [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1494982542',display_name='tempest-TestGettingAddress-server-1494982542',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1494982542',id=137,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:31:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-euyh2n7t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:31:10Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=b30a994a-5fb7-4344-9944-98d3d75d3b04,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.777 2 DEBUG nova.network.os_vif_util [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.778 2 DEBUG nova.network.os_vif_util [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:1e:d0,bridge_name='br-int',has_traffic_filtering=True,id=f2639397-8fb2-4541-a298-fd68219e1e47,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2639397-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.778 2 DEBUG os_vif [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:1e:d0,bridge_name='br-int',has_traffic_filtering=True,id=f2639397-8fb2-4541-a298-fd68219e1e47,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2639397-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.781 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2639397-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.787 2 INFO os_vif [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:1e:d0,bridge_name='br-int',has_traffic_filtering=True,id=f2639397-8fb2-4541-a298-fd68219e1e47,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2639397-8f')
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.788 2 DEBUG nova.virt.libvirt.vif [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1494982542',display_name='tempest-TestGettingAddress-server-1494982542',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1494982542',id=137,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:31:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-euyh2n7t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:31:10Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=b30a994a-5fb7-4344-9944-98d3d75d3b04,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.788 2 DEBUG nova.network.os_vif_util [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:32:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406-userdata-shm.mount: Deactivated successfully.
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.793 2 DEBUG nova.network.os_vif_util [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:84:75,bridge_name='br-int',has_traffic_filtering=True,id=c22b3ec2-f5a1-4c97-8648-a463e9e12545,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc22b3ec2-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.793 2 DEBUG os_vif [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:84:75,bridge_name='br-int',has_traffic_filtering=True,id=c22b3ec2-f5a1-4c97-8648-a463e9e12545,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc22b3ec2-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:32:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b3a221f83799bd67fc58856adc1cff01fbd3fa4b8be45bf1363f70f768eb22a-merged.mount: Deactivated successfully.
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.796 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc22b3ec2-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.802 2 INFO os_vif [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:84:75,bridge_name='br-int',has_traffic_filtering=True,id=c22b3ec2-f5a1-4c97-8648-a463e9e12545,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc22b3ec2-f5')
Oct 14 09:32:25 compute-0 podman[403684]: 2025-10-14 09:32:25.808649696 +0000 UTC m=+0.098020037 container cleanup 2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:32:25 compute-0 systemd[1]: libpod-conmon-2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406.scope: Deactivated successfully.
Oct 14 09:32:25 compute-0 podman[403744]: 2025-10-14 09:32:25.878171022 +0000 UTC m=+0.045441306 container remove 2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:32:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.890 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf50c0a-c63c-4984-9866-4b72357d54db]: (4, ('Tue Oct 14 09:32:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17 (2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406)\n2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406\nTue Oct 14 09:32:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17 (2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406)\n2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.892 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b7407743-d0d1-4639-8ae3-4cb1c909681e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.893 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20dd724c-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:25 compute-0 kernel: tap20dd724c-90: left promiscuous mode
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.956 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fbb10bc0-812b-4af9-8781-1b4f6285e5de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.959 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.959 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.983 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc26801-228b-44f7-a255-e56eccb4e851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:25 compute-0 nova_compute[259627]: 2025-10-14 09:32:25.984 2 DEBUG nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:32:25 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.985 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ef5fc97a-a327-40b1-be23-124b34a4b2eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.000 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[427ef789-4674-4e2b-8064-306f67468da7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804830, 'reachable_time': 24547, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403763, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d20dd724c\x2d9d71\x2d4931\x2d8e8b\x2d4dd3fbbacc17.mount: Deactivated successfully.
Oct 14 09:32:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.005 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:32:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.005 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[edf48eed-3c14-4327-910a-e2d542a1aa36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.006 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c22b3ec2-f5a1-4c97-8648-a463e9e12545 in datapath 1dc63515-48cf-4886-956d-024d1d9cb848 unbound from our chassis
Oct 14 09:32:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.007 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1dc63515-48cf-4886-956d-024d1d9cb848, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:32:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.008 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9a372af0-407c-471c-8d80-3aca5fd34122]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.009 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848 namespace which is not needed anymore
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.065 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.065 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.075 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.076 2 INFO nova.compute.claims [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:32:26 compute-0 neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848[401369]: [NOTICE]   (401373) : haproxy version is 2.8.14-c23fe91
Oct 14 09:32:26 compute-0 neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848[401369]: [NOTICE]   (401373) : path to executable is /usr/sbin/haproxy
Oct 14 09:32:26 compute-0 neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848[401369]: [WARNING]  (401373) : Exiting Master process...
Oct 14 09:32:26 compute-0 neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848[401369]: [WARNING]  (401373) : Exiting Master process...
Oct 14 09:32:26 compute-0 neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848[401369]: [ALERT]    (401373) : Current worker (401375) exited with code 143 (Terminated)
Oct 14 09:32:26 compute-0 neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848[401369]: [WARNING]  (401373) : All workers exited. Exiting... (0)
Oct 14 09:32:26 compute-0 systemd[1]: libpod-f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a.scope: Deactivated successfully.
Oct 14 09:32:26 compute-0 podman[403779]: 2025-10-14 09:32:26.176156995 +0000 UTC m=+0.049741882 container died f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 14 09:32:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a-userdata-shm.mount: Deactivated successfully.
Oct 14 09:32:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-5feeed86f65fad647b0e405deecf168013385c50e6fadd5dfe122e086c585633-merged.mount: Deactivated successfully.
Oct 14 09:32:26 compute-0 podman[403779]: 2025-10-14 09:32:26.215951871 +0000 UTC m=+0.089536758 container cleanup f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:32:26 compute-0 systemd[1]: libpod-conmon-f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a.scope: Deactivated successfully.
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.237 2 INFO nova.virt.libvirt.driver [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Deleting instance files /var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04_del
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.238 2 INFO nova.virt.libvirt.driver [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Deletion of /var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04_del complete
Oct 14 09:32:26 compute-0 podman[403811]: 2025-10-14 09:32:26.26929783 +0000 UTC m=+0.032640592 container remove f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 09:32:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.274 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b1995c3e-8176-48f1-bafa-21eb9f54f312]: (4, ('Tue Oct 14 09:32:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848 (f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a)\nf639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a\nTue Oct 14 09:32:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848 (f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a)\nf639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.274 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:32:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.275 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf195b5-00e2-463e-b83e-61f479d9974e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.276 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1dc63515-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:26 compute-0 kernel: tap1dc63515-40: left promiscuous mode
Oct 14 09:32:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.295 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[13867542-f6bf-4764-96d7-651b26a46bc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.318 2 INFO nova.compute.manager [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.319 2 DEBUG oslo.service.loopingcall [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.319 2 DEBUG nova.compute.manager [-] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.319 2 DEBUG nova.network.neutron [-] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:32:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.331 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[26100b24-1818-4b47-b527-c7a99c3cb13c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.332 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[06ed826b-c1e4-489d-96cc-596579667804]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.350 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1c61165f-3dc9-4f17-84b3-0f5bff1ec094]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804940, 'reachable_time': 29845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403827, 'error': None, 'target': 'ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.352 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:32:26 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.352 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[eb7ac643-a2e5-4f2c-b493-1e75fc3e50f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.386 2 DEBUG nova.compute.manager [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-changed-f2639397-8fb2-4541-a298-fd68219e1e47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.386 2 DEBUG nova.compute.manager [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Refreshing instance network info cache due to event network-changed-f2639397-8fb2-4541-a298-fd68219e1e47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.386 2 DEBUG oslo_concurrency.lockutils [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.387 2 DEBUG oslo_concurrency.lockutils [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.387 2 DEBUG nova.network.neutron [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Refreshing network info cache for port f2639397-8fb2-4541-a298-fd68219e1e47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:32:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 200 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Oct 14 09:32:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:32:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3119166413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.718 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.726 2 DEBUG nova.compute.provider_tree [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.754 2 DEBUG nova.scheduler.client.report [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.788 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.790 2 DEBUG nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:32:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d1dc63515\x2d48cf\x2d4886\x2d956d\x2d024d1d9cb848.mount: Deactivated successfully.
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.830 2 DEBUG nova.compute.manager [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-unplugged-f2639397-8fb2-4541-a298-fd68219e1e47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.831 2 DEBUG oslo_concurrency.lockutils [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.831 2 DEBUG oslo_concurrency.lockutils [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.831 2 DEBUG oslo_concurrency.lockutils [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.831 2 DEBUG nova.compute.manager [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] No waiting events found dispatching network-vif-unplugged-f2639397-8fb2-4541-a298-fd68219e1e47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.832 2 DEBUG nova.compute.manager [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-unplugged-f2639397-8fb2-4541-a298-fd68219e1e47 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.832 2 DEBUG nova.compute.manager [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-plugged-f2639397-8fb2-4541-a298-fd68219e1e47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.832 2 DEBUG oslo_concurrency.lockutils [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.832 2 DEBUG oslo_concurrency.lockutils [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.833 2 DEBUG oslo_concurrency.lockutils [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.833 2 DEBUG nova.compute.manager [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] No waiting events found dispatching network-vif-plugged-f2639397-8fb2-4541-a298-fd68219e1e47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.833 2 WARNING nova.compute.manager [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received unexpected event network-vif-plugged-f2639397-8fb2-4541-a298-fd68219e1e47 for instance with vm_state active and task_state deleting.
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.857 2 DEBUG nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.857 2 DEBUG nova.network.neutron [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.881 2 INFO nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:32:26 compute-0 nova_compute[259627]: 2025-10-14 09:32:26.905 2 DEBUG nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.034 2 DEBUG nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.036 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.036 2 INFO nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Creating image(s)
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.073 2 DEBUG nova.storage.rbd_utils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.100 2 DEBUG nova.storage.rbd_utils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.131 2 DEBUG nova.storage.rbd_utils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.137 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.202 2 DEBUG nova.policy [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20f3546ab30e42b5b641f67780316750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f754bf649a2404fa8dee732f5aab36e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.250 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.251 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.251 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.252 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.280 2 DEBUG nova.storage.rbd_utils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.283 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.548 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:32:27 compute-0 ceph-mon[74249]: pgmap v2379: 305 pgs: 305 active+clean; 200 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Oct 14 09:32:27 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3119166413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.624 2 DEBUG nova.storage.rbd_utils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] resizing rbd image d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.727 2 DEBUG nova.objects.instance [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'migration_context' on Instance uuid d1e24a24-daf7-4696-9dd4-ab29df7c8131 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.740 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.741 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Ensure instance console log exists: /var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.741 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.741 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:27 compute-0 nova_compute[259627]: 2025-10-14 09:32:27.742 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.004 2 DEBUG nova.network.neutron [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Successfully created port: e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:32:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.084 2 DEBUG nova.network.neutron [-] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.099 2 INFO nova.compute.manager [-] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Took 1.78 seconds to deallocate network for instance.
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.136 2 DEBUG nova.network.neutron [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updated VIF entry in instance network info cache for port f2639397-8fb2-4541-a298-fd68219e1e47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.136 2 DEBUG nova.network.neutron [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updating instance_info_cache with network_info: [{"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.145 2 DEBUG oslo_concurrency.lockutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.146 2 DEBUG oslo_concurrency.lockutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.155 2 DEBUG oslo_concurrency.lockutils [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.156 2 DEBUG nova.compute.manager [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-unplugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.156 2 DEBUG oslo_concurrency.lockutils [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.156 2 DEBUG oslo_concurrency.lockutils [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.156 2 DEBUG oslo_concurrency.lockutils [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.157 2 DEBUG nova.compute.manager [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] No waiting events found dispatching network-vif-unplugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.157 2 DEBUG nova.compute.manager [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-unplugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.157 2 DEBUG nova.compute.manager [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-plugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.157 2 DEBUG oslo_concurrency.lockutils [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.158 2 DEBUG oslo_concurrency.lockutils [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.158 2 DEBUG oslo_concurrency.lockutils [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.158 2 DEBUG nova.compute.manager [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] No waiting events found dispatching network-vif-plugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.158 2 WARNING nova.compute.manager [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received unexpected event network-vif-plugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 for instance with vm_state active and task_state deleting.
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.220 2 DEBUG oslo_concurrency.processutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:32:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 200 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 20 KiB/s wr, 30 op/s
Oct 14 09:32:28 compute-0 podman[404036]: 2025-10-14 09:32:28.669368627 +0000 UTC m=+0.076918789 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:32:28 compute-0 podman[404035]: 2025-10-14 09:32:28.685270347 +0000 UTC m=+0.094653974 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 14 09:32:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:32:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228061182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.708 2 DEBUG oslo_concurrency.processutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.718 2 DEBUG nova.compute.provider_tree [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.750 2 DEBUG nova.scheduler.client.report [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.776 2 DEBUG oslo_concurrency.lockutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.811 2 INFO nova.scheduler.client.report [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance b30a994a-5fb7-4344-9944-98d3d75d3b04
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.877 2 DEBUG oslo_concurrency.lockutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.931 2 DEBUG nova.compute.manager [req-d0742608-97c6-4c2c-920d-665378097f60 req-bee631df-2cd1-4682-810c-16f0eeaec90d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-deleted-c22b3ec2-f5a1-4c97-8648-a463e9e12545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.932 2 INFO nova.compute.manager [req-d0742608-97c6-4c2c-920d-665378097f60 req-bee631df-2cd1-4682-810c-16f0eeaec90d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Neutron deleted interface c22b3ec2-f5a1-4c97-8648-a463e9e12545; detaching it from the instance and deleting it from the info cache
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.932 2 DEBUG nova.network.neutron [req-d0742608-97c6-4c2c-920d-665378097f60 req-bee631df-2cd1-4682-810c-16f0eeaec90d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.936 2 DEBUG nova.compute.manager [req-d0742608-97c6-4c2c-920d-665378097f60 req-bee631df-2cd1-4682-810c-16f0eeaec90d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Detach interface failed, port_id=c22b3ec2-f5a1-4c97-8648-a463e9e12545, reason: Instance b30a994a-5fb7-4344-9944-98d3d75d3b04 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.937 2 DEBUG nova.compute.manager [req-d0742608-97c6-4c2c-920d-665378097f60 req-bee631df-2cd1-4682-810c-16f0eeaec90d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-deleted-f2639397-8fb2-4541-a298-fd68219e1e47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.937 2 INFO nova.compute.manager [req-d0742608-97c6-4c2c-920d-665378097f60 req-bee631df-2cd1-4682-810c-16f0eeaec90d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Neutron deleted interface f2639397-8fb2-4541-a298-fd68219e1e47; detaching it from the instance and deleting it from the info cache
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.938 2 DEBUG nova.network.neutron [req-d0742608-97c6-4c2c-920d-665378097f60 req-bee631df-2cd1-4682-810c-16f0eeaec90d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.940 2 DEBUG nova.compute.manager [req-d0742608-97c6-4c2c-920d-665378097f60 req-bee631df-2cd1-4682-810c-16f0eeaec90d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Detach interface failed, port_id=f2639397-8fb2-4541-a298-fd68219e1e47, reason: Instance b30a994a-5fb7-4344-9944-98d3d75d3b04 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.965 2 DEBUG nova.network.neutron [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Successfully updated port: e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.984 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "refresh_cache-d1e24a24-daf7-4696-9dd4-ab29df7c8131" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.985 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquired lock "refresh_cache-d1e24a24-daf7-4696-9dd4-ab29df7c8131" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:32:28 compute-0 nova_compute[259627]: 2025-10-14 09:32:28.985 2 DEBUG nova.network.neutron [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:32:29 compute-0 nova_compute[259627]: 2025-10-14 09:32:29.049 2 DEBUG nova.compute.manager [req-da918d9a-771d-4192-8f3c-9d466e57f872 req-acad742c-6a93-4932-9313-2ecc770a6894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Received event network-changed-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:29 compute-0 nova_compute[259627]: 2025-10-14 09:32:29.050 2 DEBUG nova.compute.manager [req-da918d9a-771d-4192-8f3c-9d466e57f872 req-acad742c-6a93-4932-9313-2ecc770a6894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Refreshing instance network info cache due to event network-changed-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:32:29 compute-0 nova_compute[259627]: 2025-10-14 09:32:29.050 2 DEBUG oslo_concurrency.lockutils [req-da918d9a-771d-4192-8f3c-9d466e57f872 req-acad742c-6a93-4932-9313-2ecc770a6894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-d1e24a24-daf7-4696-9dd4-ab29df7c8131" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:32:29 compute-0 nova_compute[259627]: 2025-10-14 09:32:29.148 2 DEBUG nova.network.neutron [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:32:29 compute-0 ceph-mon[74249]: pgmap v2380: 305 pgs: 305 active+clean; 200 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 20 KiB/s wr, 30 op/s
Oct 14 09:32:29 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4228061182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:32:29 compute-0 nova_compute[259627]: 2025-10-14 09:32:29.983 2 DEBUG nova.network.neutron [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Updating instance_info_cache with network_info: [{"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.009 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Releasing lock "refresh_cache-d1e24a24-daf7-4696-9dd4-ab29df7c8131" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.009 2 DEBUG nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Instance network_info: |[{"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.010 2 DEBUG oslo_concurrency.lockutils [req-da918d9a-771d-4192-8f3c-9d466e57f872 req-acad742c-6a93-4932-9313-2ecc770a6894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-d1e24a24-daf7-4696-9dd4-ab29df7c8131" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.010 2 DEBUG nova.network.neutron [req-da918d9a-771d-4192-8f3c-9d466e57f872 req-acad742c-6a93-4932-9313-2ecc770a6894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Refreshing network info cache for port e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.015 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Start _get_guest_xml network_info=[{"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.022 2 WARNING nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.034 2 DEBUG nova.virt.libvirt.host [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.035 2 DEBUG nova.virt.libvirt.host [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.049 2 DEBUG nova.virt.libvirt.host [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.050 2 DEBUG nova.virt.libvirt.host [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.050 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.050 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.051 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.051 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.051 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.052 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.052 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.052 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.053 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.053 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.053 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.054 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.057 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:32:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 189 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 637 KiB/s wr, 52 op/s
Oct 14 09:32:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:32:30 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3528779693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.504 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.538 2 DEBUG nova.storage.rbd_utils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.542 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:32:30 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3528779693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:30 compute-0 nova_compute[259627]: 2025-10-14 09:32:30.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:32:30 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/734407679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.010 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.012 2 DEBUG nova.virt.libvirt.vif [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:32:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-1315509893',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-1315509893',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ge',id=141,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFZgUpzEuWQ2yRKeeGP5cwEpEG8ix10coU0eNExva0wfqy7K02GmuQxBgal8BWbrr6/lfxNLSNkfNKLH91c4JJNbWrJCy5K0gCADFvn2wBXfeHIu8FO18gqCQFIgsuofuA==',key_name='tempest-TestSecurityGroupsBasicOps-135749728',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-dw48ywqx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:32:26Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=d1e24a24-daf7-4696-9dd4-ab29df7c8131,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.013 2 DEBUG nova.network.os_vif_util [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.015 2 DEBUG nova.network.os_vif_util [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:dd:42,bridge_name='br-int',has_traffic_filtering=True,id=e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c607d0-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.018 2 DEBUG nova.objects.instance [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'pci_devices' on Instance uuid d1e24a24-daf7-4696-9dd4-ab29df7c8131 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.035 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:32:31 compute-0 nova_compute[259627]:   <uuid>d1e24a24-daf7-4696-9dd4-ab29df7c8131</uuid>
Oct 14 09:32:31 compute-0 nova_compute[259627]:   <name>instance-0000008d</name>
Oct 14 09:32:31 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:32:31 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:32:31 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-1315509893</nova:name>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:32:30</nova:creationTime>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:32:31 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:32:31 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:32:31 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:32:31 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:32:31 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:32:31 compute-0 nova_compute[259627]:         <nova:user uuid="20f3546ab30e42b5b641f67780316750">tempest-TestSecurityGroupsBasicOps-1327646173-project-member</nova:user>
Oct 14 09:32:31 compute-0 nova_compute[259627]:         <nova:project uuid="5f754bf649a2404fa8dee732f5aab36e">tempest-TestSecurityGroupsBasicOps-1327646173</nova:project>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:32:31 compute-0 nova_compute[259627]:         <nova:port uuid="e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f">
Oct 14 09:32:31 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:32:31 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:32:31 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <system>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <entry name="serial">d1e24a24-daf7-4696-9dd4-ab29df7c8131</entry>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <entry name="uuid">d1e24a24-daf7-4696-9dd4-ab29df7c8131</entry>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     </system>
Oct 14 09:32:31 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:32:31 compute-0 nova_compute[259627]:   <os>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:   </os>
Oct 14 09:32:31 compute-0 nova_compute[259627]:   <features>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:   </features>
Oct 14 09:32:31 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:32:31 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:32:31 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk">
Oct 14 09:32:31 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       </source>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:32:31 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk.config">
Oct 14 09:32:31 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       </source>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:32:31 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:24:dd:42"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <target dev="tape5c607d0-9f"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131/console.log" append="off"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <video>
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     </video>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:32:31 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:32:31 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:32:31 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:32:31 compute-0 nova_compute[259627]: </domain>
Oct 14 09:32:31 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.038 2 DEBUG nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Preparing to wait for external event network-vif-plugged-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.039 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.040 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.040 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.041 2 DEBUG nova.virt.libvirt.vif [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:32:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-1315509893',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-1315509893',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ge',id=141,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFZgUpzEuWQ2yRKeeGP5cwEpEG8ix10coU0eNExva0wfqy7K02GmuQxBgal8BWbrr6/lfxNLSNkfNKLH91c4JJNbWrJCy5K0gCADFvn2wBXfeHIu8FO18gqCQFIgsuofuA==',key_name='tempest-TestSecurityGroupsBasicOps-135749728',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-dw48ywqx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:32:26Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=d1e24a24-daf7-4696-9dd4-ab29df7c8131,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.042 2 DEBUG nova.network.os_vif_util [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.043 2 DEBUG nova.network.os_vif_util [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:dd:42,bridge_name='br-int',has_traffic_filtering=True,id=e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c607d0-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.044 2 DEBUG os_vif [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:dd:42,bridge_name='br-int',has_traffic_filtering=True,id=e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c607d0-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.052 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5c607d0-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.052 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape5c607d0-9f, col_values=(('external_ids', {'iface-id': 'e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:dd:42', 'vm-uuid': 'd1e24a24-daf7-4696-9dd4-ab29df7c8131'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:31 compute-0 NetworkManager[44885]: <info>  [1760434351.0560] manager: (tape5c607d0-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/615)
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.065 2 INFO os_vif [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:dd:42,bridge_name='br-int',has_traffic_filtering=True,id=e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c607d0-9f')
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.153 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.155 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.155 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No VIF found with MAC fa:16:3e:24:dd:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.156 2 INFO nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Using config drive
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.192 2 DEBUG nova.storage.rbd_utils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.404 2 DEBUG nova.network.neutron [req-da918d9a-771d-4192-8f3c-9d466e57f872 req-acad742c-6a93-4932-9313-2ecc770a6894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Updated VIF entry in instance network info cache for port e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.405 2 DEBUG nova.network.neutron [req-da918d9a-771d-4192-8f3c-9d466e57f872 req-acad742c-6a93-4932-9313-2ecc770a6894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Updating instance_info_cache with network_info: [{"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.422 2 DEBUG oslo_concurrency.lockutils [req-da918d9a-771d-4192-8f3c-9d466e57f872 req-acad742c-6a93-4932-9313-2ecc770a6894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-d1e24a24-daf7-4696-9dd4-ab29df7c8131" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.614 2 INFO nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Creating config drive at /var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131/disk.config
Oct 14 09:32:31 compute-0 ceph-mon[74249]: pgmap v2381: 305 pgs: 305 active+clean; 189 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 637 KiB/s wr, 52 op/s
Oct 14 09:32:31 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/734407679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.623 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyxbkn43p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.764 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyxbkn43p" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.791 2 DEBUG nova.storage.rbd_utils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:32:31 compute-0 nova_compute[259627]: 2025-10-14 09:32:31.796 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131/disk.config d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:32:32 compute-0 nova_compute[259627]: 2025-10-14 09:32:32.007 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131/disk.config d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:32:32 compute-0 nova_compute[259627]: 2025-10-14 09:32:32.009 2 INFO nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Deleting local config drive /var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131/disk.config because it was imported into RBD.
Oct 14 09:32:32 compute-0 kernel: tape5c607d0-9f: entered promiscuous mode
Oct 14 09:32:32 compute-0 NetworkManager[44885]: <info>  [1760434352.0706] manager: (tape5c607d0-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/616)
Oct 14 09:32:32 compute-0 ovn_controller[152662]: 2025-10-14T09:32:32Z|01514|binding|INFO|Claiming lport e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f for this chassis.
Oct 14 09:32:32 compute-0 ovn_controller[152662]: 2025-10-14T09:32:32Z|01515|binding|INFO|e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f: Claiming fa:16:3e:24:dd:42 10.100.0.7
Oct 14 09:32:32 compute-0 nova_compute[259627]: 2025-10-14 09:32:32.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.083 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:dd:42 10.100.0.7'], port_security=['fa:16:3e:24:dd:42 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd1e24a24-daf7-4696-9dd4-ab29df7c8131', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6ff9262f-239c-4772-9987-411eb120736a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02951490-3f50-4bba-9cc5-21d9c0a6e4ba, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:32:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.085 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f in datapath 026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540 bound to our chassis
Oct 14 09:32:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.088 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540
Oct 14 09:32:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.111 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[afd4725e-91f1-4cb1-9db2-bcce65e2b903]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:32 compute-0 systemd-machined[214636]: New machine qemu-174-instance-0000008d.
Oct 14 09:32:32 compute-0 ovn_controller[152662]: 2025-10-14T09:32:32Z|01516|binding|INFO|Setting lport e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f up in Southbound
Oct 14 09:32:32 compute-0 ovn_controller[152662]: 2025-10-14T09:32:32Z|01517|binding|INFO|Setting lport e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f ovn-installed in OVS
Oct 14 09:32:32 compute-0 nova_compute[259627]: 2025-10-14 09:32:32.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:32 compute-0 nova_compute[259627]: 2025-10-14 09:32:32.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:32 compute-0 systemd[1]: Started Virtual Machine qemu-174-instance-0000008d.
Oct 14 09:32:32 compute-0 systemd-udevd[404221]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:32:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.150 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f6908f05-d672-4745-b6e7-b08428e8391e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.155 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d54ca9-bbee-4620-bdfe-1e491081de70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:32 compute-0 NetworkManager[44885]: <info>  [1760434352.1664] device (tape5c607d0-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:32:32 compute-0 NetworkManager[44885]: <info>  [1760434352.1678] device (tape5c607d0-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:32:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.196 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[592febd9-880c-44c2-a7ce-2ee64024c3f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.220 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2179d316-0ae0-4837-bef4-273aa35625fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap026a2ce2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:95:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809945, 'reachable_time': 23632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404232, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.236 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0bbde556-7679-488d-a03d-00da418326dd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap026a2ce2-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 809957, 'tstamp': 809957}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404233, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap026a2ce2-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 809959, 'tstamp': 809959}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404233, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.238 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap026a2ce2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:32 compute-0 nova_compute[259627]: 2025-10-14 09:32:32.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:32 compute-0 nova_compute[259627]: 2025-10-14 09:32:32.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.243 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap026a2ce2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.243 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:32:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.244 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap026a2ce2-40, col_values=(('external_ids', {'iface-id': '7f009d61-2857-4109-a89b-a83c53d44768'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.245 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:32:32 compute-0 nova_compute[259627]: 2025-10-14 09:32:32.327 2 DEBUG nova.compute.manager [req-407f7ca0-77df-414b-8d64-20d849a378d6 req-7241a571-915f-4780-86da-f18253fed88e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Received event network-vif-plugged-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:32 compute-0 nova_compute[259627]: 2025-10-14 09:32:32.328 2 DEBUG oslo_concurrency.lockutils [req-407f7ca0-77df-414b-8d64-20d849a378d6 req-7241a571-915f-4780-86da-f18253fed88e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:32 compute-0 nova_compute[259627]: 2025-10-14 09:32:32.329 2 DEBUG oslo_concurrency.lockutils [req-407f7ca0-77df-414b-8d64-20d849a378d6 req-7241a571-915f-4780-86da-f18253fed88e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:32 compute-0 nova_compute[259627]: 2025-10-14 09:32:32.329 2 DEBUG oslo_concurrency.lockutils [req-407f7ca0-77df-414b-8d64-20d849a378d6 req-7241a571-915f-4780-86da-f18253fed88e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:32 compute-0 nova_compute[259627]: 2025-10-14 09:32:32.329 2 DEBUG nova.compute.manager [req-407f7ca0-77df-414b-8d64-20d849a378d6 req-7241a571-915f-4780-86da-f18253fed88e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Processing event network-vif-plugged-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:32:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Oct 14 09:32:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.596 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:32:32 compute-0 nova_compute[259627]: 2025-10-14 09:32:32.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:32 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.598 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:32:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:32:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:32:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:32:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:32:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:32:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:32:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:32:32
Oct 14 09:32:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:32:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:32:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'volumes', '.mgr']
Oct 14 09:32:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:32:33 compute-0 ovn_controller[152662]: 2025-10-14T09:32:33Z|01518|binding|INFO|Releasing lport 7f009d61-2857-4109-a89b-a83c53d44768 from this chassis (sb_readonly=0)
Oct 14 09:32:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:32:33 compute-0 nova_compute[259627]: 2025-10-14 09:32:33.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:32:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:32:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:32:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:32:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:32:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:32:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:32:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:32:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:32:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:32:33 compute-0 ceph-mon[74249]: pgmap v2382: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.040 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434354.0392473, d1e24a24-daf7-4696-9dd4-ab29df7c8131 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.040 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] VM Started (Lifecycle Event)
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.044 2 DEBUG nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.049 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.054 2 INFO nova.virt.libvirt.driver [-] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Instance spawned successfully.
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.055 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.066 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.073 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.084 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.085 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.086 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.087 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.087 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.088 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.094 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.095 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434354.0394695, d1e24a24-daf7-4696-9dd4-ab29df7c8131 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.095 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] VM Paused (Lifecycle Event)
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.124 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.128 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434354.0487576, d1e24a24-daf7-4696-9dd4-ab29df7c8131 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.129 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] VM Resumed (Lifecycle Event)
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.149 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.157 2 INFO nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Took 7.12 seconds to spawn the instance on the hypervisor.
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.157 2 DEBUG nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.159 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.188 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.224 2 INFO nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Took 8.18 seconds to build instance.
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.241 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.393 2 DEBUG nova.compute.manager [req-a6d52a19-fa5d-4694-8fcb-ed63823d89cb req-4ec8544d-7a80-4176-b9a2-ee53433e9e05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Received event network-vif-plugged-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.394 2 DEBUG oslo_concurrency.lockutils [req-a6d52a19-fa5d-4694-8fcb-ed63823d89cb req-4ec8544d-7a80-4176-b9a2-ee53433e9e05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.394 2 DEBUG oslo_concurrency.lockutils [req-a6d52a19-fa5d-4694-8fcb-ed63823d89cb req-4ec8544d-7a80-4176-b9a2-ee53433e9e05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.395 2 DEBUG oslo_concurrency.lockutils [req-a6d52a19-fa5d-4694-8fcb-ed63823d89cb req-4ec8544d-7a80-4176-b9a2-ee53433e9e05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.395 2 DEBUG nova.compute.manager [req-a6d52a19-fa5d-4694-8fcb-ed63823d89cb req-4ec8544d-7a80-4176-b9a2-ee53433e9e05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] No waiting events found dispatching network-vif-plugged-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:32:34 compute-0 nova_compute[259627]: 2025-10-14 09:32:34.395 2 WARNING nova.compute.manager [req-a6d52a19-fa5d-4694-8fcb-ed63823d89cb req-4ec8544d-7a80-4176-b9a2-ee53433e9e05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Received unexpected event network-vif-plugged-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f for instance with vm_state active and task_state None.
Oct 14 09:32:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 79 op/s
Oct 14 09:32:35 compute-0 nova_compute[259627]: 2025-10-14 09:32:35.285 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434340.2849784, c9315047-de1c-423a-adfa-118d77df3c94 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:32:35 compute-0 nova_compute[259627]: 2025-10-14 09:32:35.285 2 INFO nova.compute.manager [-] [instance: c9315047-de1c-423a-adfa-118d77df3c94] VM Stopped (Lifecycle Event)
Oct 14 09:32:35 compute-0 nova_compute[259627]: 2025-10-14 09:32:35.303 2 DEBUG nova.compute.manager [None req-38661a3f-1dbd-48cd-adbb-3605b7f2af22 - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:32:35 compute-0 ceph-mon[74249]: pgmap v2383: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 79 op/s
Oct 14 09:32:35 compute-0 nova_compute[259627]: 2025-10-14 09:32:35.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:36 compute-0 nova_compute[259627]: 2025-10-14 09:32:36.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 136 op/s
Oct 14 09:32:37 compute-0 ceph-mon[74249]: pgmap v2384: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 136 op/s
Oct 14 09:32:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:32:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Oct 14 09:32:39 compute-0 ceph-mon[74249]: pgmap v2385: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Oct 14 09:32:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 14 09:32:40 compute-0 nova_compute[259627]: 2025-10-14 09:32:40.757 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434345.7567332, b30a994a-5fb7-4344-9944-98d3d75d3b04 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:32:40 compute-0 nova_compute[259627]: 2025-10-14 09:32:40.758 2 INFO nova.compute.manager [-] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] VM Stopped (Lifecycle Event)
Oct 14 09:32:40 compute-0 nova_compute[259627]: 2025-10-14 09:32:40.789 2 DEBUG nova.compute.manager [None req-10d29829-d4b6-42d3-901a-8949f01790f5 - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:32:40 compute-0 nova_compute[259627]: 2025-10-14 09:32:40.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:41 compute-0 nova_compute[259627]: 2025-10-14 09:32:41.039 2 DEBUG nova.compute.manager [req-192c5fc0-a0f6-47e9-8a93-a30c54c0729f req-ef02d29c-e75e-48d4-9c3a-8775c755e224 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Received event network-changed-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:41 compute-0 nova_compute[259627]: 2025-10-14 09:32:41.039 2 DEBUG nova.compute.manager [req-192c5fc0-a0f6-47e9-8a93-a30c54c0729f req-ef02d29c-e75e-48d4-9c3a-8775c755e224 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Refreshing instance network info cache due to event network-changed-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:32:41 compute-0 nova_compute[259627]: 2025-10-14 09:32:41.040 2 DEBUG oslo_concurrency.lockutils [req-192c5fc0-a0f6-47e9-8a93-a30c54c0729f req-ef02d29c-e75e-48d4-9c3a-8775c755e224 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-d1e24a24-daf7-4696-9dd4-ab29df7c8131" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:32:41 compute-0 nova_compute[259627]: 2025-10-14 09:32:41.040 2 DEBUG oslo_concurrency.lockutils [req-192c5fc0-a0f6-47e9-8a93-a30c54c0729f req-ef02d29c-e75e-48d4-9c3a-8775c755e224 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-d1e24a24-daf7-4696-9dd4-ab29df7c8131" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:32:41 compute-0 nova_compute[259627]: 2025-10-14 09:32:41.040 2 DEBUG nova.network.neutron [req-192c5fc0-a0f6-47e9-8a93-a30c54c0729f req-ef02d29c-e75e-48d4-9c3a-8775c755e224 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Refreshing network info cache for port e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:32:41 compute-0 nova_compute[259627]: 2025-10-14 09:32:41.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:41 compute-0 ceph-mon[74249]: pgmap v2386: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 14 09:32:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 106 op/s
Oct 14 09:32:42 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:42.600 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:42 compute-0 nova_compute[259627]: 2025-10-14 09:32:42.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:32:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011079409023572312 of space, bias 1.0, pg target 0.33238227070716936 quantized to 32 (current 32)
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:32:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:32:43 compute-0 ceph-mon[74249]: pgmap v2387: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 106 op/s
Oct 14 09:32:43 compute-0 nova_compute[259627]: 2025-10-14 09:32:43.775 2 DEBUG nova.network.neutron [req-192c5fc0-a0f6-47e9-8a93-a30c54c0729f req-ef02d29c-e75e-48d4-9c3a-8775c755e224 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Updated VIF entry in instance network info cache for port e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:32:43 compute-0 nova_compute[259627]: 2025-10-14 09:32:43.776 2 DEBUG nova.network.neutron [req-192c5fc0-a0f6-47e9-8a93-a30c54c0729f req-ef02d29c-e75e-48d4-9c3a-8775c755e224 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Updating instance_info_cache with network_info: [{"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:32:43 compute-0 nova_compute[259627]: 2025-10-14 09:32:43.800 2 DEBUG oslo_concurrency.lockutils [req-192c5fc0-a0f6-47e9-8a93-a30c54c0729f req-ef02d29c-e75e-48d4-9c3a-8775c755e224 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-d1e24a24-daf7-4696-9dd4-ab29df7c8131" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:32:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 14 09:32:45 compute-0 ceph-mon[74249]: pgmap v2388: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 14 09:32:45 compute-0 ovn_controller[152662]: 2025-10-14T09:32:45Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:dd:42 10.100.0.7
Oct 14 09:32:45 compute-0 ovn_controller[152662]: 2025-10-14T09:32:45Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:dd:42 10.100.0.7
Oct 14 09:32:45 compute-0 nova_compute[259627]: 2025-10-14 09:32:45.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:46 compute-0 nova_compute[259627]: 2025-10-14 09:32:46.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 195 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Oct 14 09:32:46 compute-0 podman[404277]: 2025-10-14 09:32:46.686307579 +0000 UTC m=+0.090807439 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS)
Oct 14 09:32:46 compute-0 podman[404278]: 2025-10-14 09:32:46.700100807 +0000 UTC m=+0.093838363 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:32:46 compute-0 nova_compute[259627]: 2025-10-14 09:32:46.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:32:46 compute-0 nova_compute[259627]: 2025-10-14 09:32:46.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:32:46 compute-0 nova_compute[259627]: 2025-10-14 09:32:46.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:32:47 compute-0 nova_compute[259627]: 2025-10-14 09:32:47.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:47 compute-0 nova_compute[259627]: 2025-10-14 09:32:47.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:47 compute-0 nova_compute[259627]: 2025-10-14 09:32:47.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:47 compute-0 nova_compute[259627]: 2025-10-14 09:32:47.008 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:32:47 compute-0 nova_compute[259627]: 2025-10-14 09:32:47.008 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:32:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:32:47 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2698932554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:32:47 compute-0 nova_compute[259627]: 2025-10-14 09:32:47.453 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:32:47 compute-0 nova_compute[259627]: 2025-10-14 09:32:47.568 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:32:47 compute-0 nova_compute[259627]: 2025-10-14 09:32:47.568 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:32:47 compute-0 nova_compute[259627]: 2025-10-14 09:32:47.576 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:32:47 compute-0 nova_compute[259627]: 2025-10-14 09:32:47.577 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:32:47 compute-0 ceph-mon[74249]: pgmap v2389: 305 pgs: 305 active+clean; 195 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Oct 14 09:32:47 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2698932554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:32:47 compute-0 nova_compute[259627]: 2025-10-14 09:32:47.879 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:32:47 compute-0 nova_compute[259627]: 2025-10-14 09:32:47.881 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3194MB free_disk=59.89793395996094GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:32:47 compute-0 nova_compute[259627]: 2025-10-14 09:32:47.883 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:47 compute-0 nova_compute[259627]: 2025-10-14 09:32:47.884 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:47 compute-0 nova_compute[259627]: 2025-10-14 09:32:47.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:48 compute-0 nova_compute[259627]: 2025-10-14 09:32:48.022 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:32:48 compute-0 nova_compute[259627]: 2025-10-14 09:32:48.023 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance d1e24a24-daf7-4696-9dd4-ab29df7c8131 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:32:48 compute-0 nova_compute[259627]: 2025-10-14 09:32:48.023 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:32:48 compute-0 nova_compute[259627]: 2025-10-14 09:32:48.023 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:32:48 compute-0 nova_compute[259627]: 2025-10-14 09:32:48.079 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:32:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:32:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 195 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 814 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 14 09:32:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:32:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/493114329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:32:48 compute-0 nova_compute[259627]: 2025-10-14 09:32:48.544 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:32:48 compute-0 nova_compute[259627]: 2025-10-14 09:32:48.553 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:32:48 compute-0 nova_compute[259627]: 2025-10-14 09:32:48.574 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:32:48 compute-0 nova_compute[259627]: 2025-10-14 09:32:48.601 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:32:48 compute-0 nova_compute[259627]: 2025-10-14 09:32:48.601 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:48 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/493114329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:32:49 compute-0 nova_compute[259627]: 2025-10-14 09:32:49.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:49 compute-0 nova_compute[259627]: 2025-10-14 09:32:49.601 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:32:49 compute-0 ceph-mon[74249]: pgmap v2390: 305 pgs: 305 active+clean; 195 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 814 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 14 09:32:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 200 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 834 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Oct 14 09:32:50 compute-0 nova_compute[259627]: 2025-10-14 09:32:50.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:50 compute-0 nova_compute[259627]: 2025-10-14 09:32:50.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:32:50 compute-0 nova_compute[259627]: 2025-10-14 09:32:50.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:51 compute-0 ceph-mon[74249]: pgmap v2391: 305 pgs: 305 active+clean; 200 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 834 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.726 2 DEBUG oslo_concurrency.lockutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.726 2 DEBUG oslo_concurrency.lockutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.727 2 DEBUG oslo_concurrency.lockutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.727 2 DEBUG oslo_concurrency.lockutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.727 2 DEBUG oslo_concurrency.lockutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.729 2 INFO nova.compute.manager [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Terminating instance
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.730 2 DEBUG nova.compute.manager [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:32:51 compute-0 kernel: tape5c607d0-9f (unregistering): left promiscuous mode
Oct 14 09:32:51 compute-0 NetworkManager[44885]: <info>  [1760434371.7856] device (tape5c607d0-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:32:51 compute-0 ovn_controller[152662]: 2025-10-14T09:32:51Z|01519|binding|INFO|Releasing lport e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f from this chassis (sb_readonly=0)
Oct 14 09:32:51 compute-0 ovn_controller[152662]: 2025-10-14T09:32:51Z|01520|binding|INFO|Setting lport e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f down in Southbound
Oct 14 09:32:51 compute-0 ovn_controller[152662]: 2025-10-14T09:32:51Z|01521|binding|INFO|Removing iface tape5c607d0-9f ovn-installed in OVS
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.855 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:dd:42 10.100.0.7'], port_security=['fa:16:3e:24:dd:42 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd1e24a24-daf7-4696-9dd4-ab29df7c8131', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2f6b00d2-a7f2-4b95-9366-b1ecefd0105d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02951490-3f50-4bba-9cc5-21d9c0a6e4ba, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:32:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.856 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f in datapath 026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540 unbound from our chassis
Oct 14 09:32:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.858 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.875 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce84b87-7f41-4b59-bdc2-84c5ab6e4b6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.912 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb3d753-8b0d-4eb4-be9e-9d677f7b2f80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.915 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[07a19afb-6660-4fec-b385-0118a77bc9dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:51 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Oct 14 09:32:51 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008d.scope: Consumed 13.202s CPU time.
Oct 14 09:32:51 compute-0 systemd-machined[214636]: Machine qemu-174-instance-0000008d terminated.
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.954 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ef46aa-3f8f-45be-a592-d6dcd448dded]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.969 2 INFO nova.virt.libvirt.driver [-] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Instance destroyed successfully.
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.969 2 DEBUG nova.objects.instance [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'resources' on Instance uuid d1e24a24-daf7-4696-9dd4-ab29df7c8131 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:32:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.974 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b17075c3-68d0-4cf2-92de-65258bf5f671]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap026a2ce2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:95:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809945, 'reachable_time': 23632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404380, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.985 2 DEBUG nova.virt.libvirt.vif [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:32:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-1315509893',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-1315509893',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ge',id=141,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFZgUpzEuWQ2yRKeeGP5cwEpEG8ix10coU0eNExva0wfqy7K02GmuQxBgal8BWbrr6/lfxNLSNkfNKLH91c4JJNbWrJCy5K0gCADFvn2wBXfeHIu8FO18gqCQFIgsuofuA==',key_name='tempest-TestSecurityGroupsBasicOps-135749728',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:32:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-dw48ywqx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:32:34Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=d1e24a24-daf7-4696-9dd4-ab29df7c8131,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.986 2 DEBUG nova.network.os_vif_util [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.986 2 DEBUG nova.network.os_vif_util [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:dd:42,bridge_name='br-int',has_traffic_filtering=True,id=e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c607d0-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.987 2 DEBUG os_vif [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:dd:42,bridge_name='br-int',has_traffic_filtering=True,id=e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c607d0-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.989 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5c607d0-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.988 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3cbdfdcd-df68-481d-a7e6-bfb9904af13a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap026a2ce2-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 809957, 'tstamp': 809957}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404386, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap026a2ce2-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 809959, 'tstamp': 809959}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404386, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.989 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap026a2ce2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.993 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap026a2ce2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.994 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:32:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.994 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap026a2ce2-40, col_values=(('external_ids', {'iface-id': '7f009d61-2857-4109-a89b-a83c53d44768'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.994 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:32:51 compute-0 nova_compute[259627]: 2025-10-14 09:32:51.996 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 14 09:32:52 compute-0 nova_compute[259627]: 2025-10-14 09:32:52.002 2 INFO os_vif [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:dd:42,bridge_name='br-int',has_traffic_filtering=True,id=e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c607d0-9f')
Oct 14 09:32:52 compute-0 nova_compute[259627]: 2025-10-14 09:32:52.246 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:32:52 compute-0 nova_compute[259627]: 2025-10-14 09:32:52.247 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:32:52 compute-0 nova_compute[259627]: 2025-10-14 09:32:52.247 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:32:52 compute-0 nova_compute[259627]: 2025-10-14 09:32:52.247 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:32:52 compute-0 nova_compute[259627]: 2025-10-14 09:32:52.381 2 INFO nova.virt.libvirt.driver [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Deleting instance files /var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131_del
Oct 14 09:32:52 compute-0 nova_compute[259627]: 2025-10-14 09:32:52.382 2 INFO nova.virt.libvirt.driver [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Deletion of /var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131_del complete
Oct 14 09:32:52 compute-0 nova_compute[259627]: 2025-10-14 09:32:52.436 2 INFO nova.compute.manager [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Took 0.71 seconds to destroy the instance on the hypervisor.
Oct 14 09:32:52 compute-0 nova_compute[259627]: 2025-10-14 09:32:52.437 2 DEBUG oslo.service.loopingcall [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:32:52 compute-0 nova_compute[259627]: 2025-10-14 09:32:52.437 2 DEBUG nova.compute.manager [-] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:32:52 compute-0 nova_compute[259627]: 2025-10-14 09:32:52.438 2 DEBUG nova.network.neutron [-] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:32:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 200 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:32:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:32:53 compute-0 nova_compute[259627]: 2025-10-14 09:32:53.426 2 DEBUG nova.network.neutron [-] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:32:53 compute-0 nova_compute[259627]: 2025-10-14 09:32:53.455 2 INFO nova.compute.manager [-] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Took 1.02 seconds to deallocate network for instance.
Oct 14 09:32:53 compute-0 nova_compute[259627]: 2025-10-14 09:32:53.507 2 DEBUG oslo_concurrency.lockutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:53 compute-0 nova_compute[259627]: 2025-10-14 09:32:53.508 2 DEBUG oslo_concurrency.lockutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:53 compute-0 nova_compute[259627]: 2025-10-14 09:32:53.546 2 DEBUG nova.compute.manager [req-ed3af7d9-409f-4fbe-8ad3-b0e87dd34c61 req-bbb3134f-96e9-49d0-a127-6b026cfe78a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Received event network-vif-deleted-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:53 compute-0 nova_compute[259627]: 2025-10-14 09:32:53.576 2 DEBUG oslo_concurrency.processutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:32:53 compute-0 ceph-mon[74249]: pgmap v2392: 305 pgs: 305 active+clean; 200 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:32:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:32:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/110739931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:32:54 compute-0 nova_compute[259627]: 2025-10-14 09:32:54.061 2 DEBUG oslo_concurrency.processutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:32:54 compute-0 nova_compute[259627]: 2025-10-14 09:32:54.069 2 DEBUG nova.compute.provider_tree [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:32:54 compute-0 nova_compute[259627]: 2025-10-14 09:32:54.093 2 DEBUG nova.scheduler.client.report [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:32:54 compute-0 nova_compute[259627]: 2025-10-14 09:32:54.125 2 DEBUG oslo_concurrency.lockutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:54 compute-0 nova_compute[259627]: 2025-10-14 09:32:54.162 2 INFO nova.scheduler.client.report [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Deleted allocations for instance d1e24a24-daf7-4696-9dd4-ab29df7c8131
Oct 14 09:32:54 compute-0 nova_compute[259627]: 2025-10-14 09:32:54.243 2 DEBUG oslo_concurrency.lockutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 200 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:32:54 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/110739931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:32:55 compute-0 nova_compute[259627]: 2025-10-14 09:32:55.103 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updating instance_info_cache with network_info: [{"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:32:55 compute-0 nova_compute[259627]: 2025-10-14 09:32:55.134 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:32:55 compute-0 nova_compute[259627]: 2025-10-14 09:32:55.134 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:32:55 compute-0 nova_compute[259627]: 2025-10-14 09:32:55.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:55 compute-0 ceph-mon[74249]: pgmap v2393: 305 pgs: 305 active+clean; 200 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:32:55 compute-0 nova_compute[259627]: 2025-10-14 09:32:55.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:55 compute-0 nova_compute[259627]: 2025-10-14 09:32:55.937 2 DEBUG nova.compute.manager [req-1829da97-5122-4ee0-b552-deebf71ba450 req-032b8c54-d0c7-4280-8f97-7a94ff46fd88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Received event network-changed-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:55 compute-0 nova_compute[259627]: 2025-10-14 09:32:55.938 2 DEBUG nova.compute.manager [req-1829da97-5122-4ee0-b552-deebf71ba450 req-032b8c54-d0c7-4280-8f97-7a94ff46fd88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Refreshing instance network info cache due to event network-changed-02b93e6c-e8fd-4ab1-bd57-84775ae34da2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:32:55 compute-0 nova_compute[259627]: 2025-10-14 09:32:55.938 2 DEBUG oslo_concurrency.lockutils [req-1829da97-5122-4ee0-b552-deebf71ba450 req-032b8c54-d0c7-4280-8f97-7a94ff46fd88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:32:55 compute-0 nova_compute[259627]: 2025-10-14 09:32:55.939 2 DEBUG oslo_concurrency.lockutils [req-1829da97-5122-4ee0-b552-deebf71ba450 req-032b8c54-d0c7-4280-8f97-7a94ff46fd88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:32:55 compute-0 nova_compute[259627]: 2025-10-14 09:32:55.939 2 DEBUG nova.network.neutron [req-1829da97-5122-4ee0-b552-deebf71ba450 req-032b8c54-d0c7-4280-8f97-7a94ff46fd88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Refreshing network info cache for port 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:32:55 compute-0 nova_compute[259627]: 2025-10-14 09:32:55.942 2 DEBUG oslo_concurrency.lockutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:55 compute-0 nova_compute[259627]: 2025-10-14 09:32:55.942 2 DEBUG oslo_concurrency.lockutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:55 compute-0 nova_compute[259627]: 2025-10-14 09:32:55.943 2 DEBUG oslo_concurrency.lockutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:55 compute-0 nova_compute[259627]: 2025-10-14 09:32:55.943 2 DEBUG oslo_concurrency.lockutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:55 compute-0 nova_compute[259627]: 2025-10-14 09:32:55.944 2 DEBUG oslo_concurrency.lockutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:55 compute-0 nova_compute[259627]: 2025-10-14 09:32:55.945 2 INFO nova.compute.manager [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Terminating instance
Oct 14 09:32:55 compute-0 nova_compute[259627]: 2025-10-14 09:32:55.947 2 DEBUG nova.compute.manager [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:32:56 compute-0 kernel: tap02b93e6c-e8 (unregistering): left promiscuous mode
Oct 14 09:32:56 compute-0 NetworkManager[44885]: <info>  [1760434376.0212] device (tap02b93e6c-e8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:32:56 compute-0 ovn_controller[152662]: 2025-10-14T09:32:56Z|01522|binding|INFO|Releasing lport 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 from this chassis (sb_readonly=0)
Oct 14 09:32:56 compute-0 ovn_controller[152662]: 2025-10-14T09:32:56Z|01523|binding|INFO|Setting lport 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 down in Southbound
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:56 compute-0 ovn_controller[152662]: 2025-10-14T09:32:56Z|01524|binding|INFO|Removing iface tap02b93e6c-e8 ovn-installed in OVS
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.050 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:f3:32 10.100.0.5'], port_security=['fa:16:3e:66:f3:32 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '62ae45e1-2406-4d0e-83db-65cdec8c0b6f 6ff9262f-239c-4772-9987-411eb120736a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02951490-3f50-4bba-9cc5-21d9c0a6e4ba, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=02b93e6c-e8fd-4ab1-bd57-84775ae34da2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:32:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.051 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 in datapath 026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540 unbound from our chassis
Oct 14 09:32:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.054 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:32:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.055 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9aac48c0-7d05-4faf-833d-42a5f5c302ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.056 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540 namespace which is not needed anymore
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:56 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Oct 14 09:32:56 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008c.scope: Consumed 14.488s CPU time.
Oct 14 09:32:56 compute-0 systemd-machined[214636]: Machine qemu-173-instance-0000008c terminated.
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.188 2 INFO nova.virt.libvirt.driver [-] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Instance destroyed successfully.
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.189 2 DEBUG nova.objects.instance [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'resources' on Instance uuid 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.204 2 DEBUG nova.virt.libvirt.vif [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:31:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1433018777',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1433018777',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=140,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFZgUpzEuWQ2yRKeeGP5cwEpEG8ix10coU0eNExva0wfqy7K02GmuQxBgal8BWbrr6/lfxNLSNkfNKLH91c4JJNbWrJCy5K0gCADFvn2wBXfeHIu8FO18gqCQFIgsuofuA==',key_name='tempest-TestSecurityGroupsBasicOps-135749728',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:32:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-cth66w0f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:32:01Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.205 2 DEBUG nova.network.os_vif_util [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.205 2 DEBUG nova.network.os_vif_util [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:f3:32,bridge_name='br-int',has_traffic_filtering=True,id=02b93e6c-e8fd-4ab1-bd57-84775ae34da2,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02b93e6c-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.206 2 DEBUG os_vif [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:f3:32,bridge_name='br-int',has_traffic_filtering=True,id=02b93e6c-e8fd-4ab1-bd57-84775ae34da2,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02b93e6c-e8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.208 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02b93e6c-e8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.212 2 INFO os_vif [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:f3:32,bridge_name='br-int',has_traffic_filtering=True,id=02b93e6c-e8fd-4ab1-bd57-84775ae34da2,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02b93e6c-e8')
Oct 14 09:32:56 compute-0 neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540[402631]: [NOTICE]   (402635) : haproxy version is 2.8.14-c23fe91
Oct 14 09:32:56 compute-0 neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540[402631]: [NOTICE]   (402635) : path to executable is /usr/sbin/haproxy
Oct 14 09:32:56 compute-0 neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540[402631]: [WARNING]  (402635) : Exiting Master process...
Oct 14 09:32:56 compute-0 neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540[402631]: [ALERT]    (402635) : Current worker (402637) exited with code 143 (Terminated)
Oct 14 09:32:56 compute-0 neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540[402631]: [WARNING]  (402635) : All workers exited. Exiting... (0)
Oct 14 09:32:56 compute-0 systemd[1]: libpod-1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c.scope: Deactivated successfully.
Oct 14 09:32:56 compute-0 podman[404456]: 2025-10-14 09:32:56.239578374 +0000 UTC m=+0.064008971 container died 1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:32:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c-userdata-shm.mount: Deactivated successfully.
Oct 14 09:32:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfade0dc0f4b5a18f9357cc84a838a56c45c2e6cf4ac2c1e6656f3fc9a57ba4b-merged.mount: Deactivated successfully.
Oct 14 09:32:56 compute-0 podman[404456]: 2025-10-14 09:32:56.288400822 +0000 UTC m=+0.112831419 container cleanup 1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 14 09:32:56 compute-0 systemd[1]: libpod-conmon-1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c.scope: Deactivated successfully.
Oct 14 09:32:56 compute-0 podman[404513]: 2025-10-14 09:32:56.365464814 +0000 UTC m=+0.046860241 container remove 1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:32:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.371 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f705e2-c0ab-4df1-8be0-7f3c93d675b2]: (4, ('Tue Oct 14 09:32:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540 (1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c)\n1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c\nTue Oct 14 09:32:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540 (1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c)\n1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.373 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab97edc-556d-4051-8b19-aaae2dc551f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.374 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap026a2ce2-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:56 compute-0 kernel: tap026a2ce2-40: left promiscuous mode
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:32:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.396 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d21a3d76-895e-421c-9436-d26a85c1cc72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.425 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4f902801-9556-4424-8db1-919bfee81da6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.427 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c2af1663-3998-4639-ad4d-cece94e45763]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.445 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[53639631-9245-45da-b917-ef5117987161]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809937, 'reachable_time': 32312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404528, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d026a2ce2\x2d4bce\x2d4cf4\x2da66d\x2dfd1bd5dbd540.mount: Deactivated successfully.
Oct 14 09:32:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.450 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:32:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.450 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[d89cd47b-aad6-43cf-81ac-77d06feee932]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 121 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.620 2 INFO nova.virt.libvirt.driver [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Deleting instance files /var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_del
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.621 2 INFO nova.virt.libvirt.driver [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Deletion of /var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_del complete
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.682 2 INFO nova.compute.manager [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.682 2 DEBUG oslo.service.loopingcall [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.683 2 DEBUG nova.compute.manager [-] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.683 2 DEBUG nova.network.neutron [-] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:32:56 compute-0 nova_compute[259627]: 2025-10-14 09:32:56.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:32:57 compute-0 nova_compute[259627]: 2025-10-14 09:32:57.412 2 DEBUG nova.network.neutron [-] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:32:57 compute-0 nova_compute[259627]: 2025-10-14 09:32:57.434 2 INFO nova.compute.manager [-] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Took 0.75 seconds to deallocate network for instance.
Oct 14 09:32:57 compute-0 nova_compute[259627]: 2025-10-14 09:32:57.488 2 DEBUG nova.compute.manager [req-1d019f0c-8cdd-47bf-9343-564ccec2cb37 req-dddd2e06-aba0-4e1a-876e-e2d452318112 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Received event network-vif-deleted-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:32:57 compute-0 nova_compute[259627]: 2025-10-14 09:32:57.506 2 DEBUG oslo_concurrency.lockutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:32:57 compute-0 nova_compute[259627]: 2025-10-14 09:32:57.506 2 DEBUG oslo_concurrency.lockutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:32:57 compute-0 nova_compute[259627]: 2025-10-14 09:32:57.547 2 DEBUG oslo_concurrency.processutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:32:57 compute-0 ceph-mon[74249]: pgmap v2394: 305 pgs: 305 active+clean; 121 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 14 09:32:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:57.866 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:84:37 2001:db8:0:1:f816:3eff:fea4:8437 2001:db8::f816:3eff:fea4:8437'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fea4:8437/64 2001:db8::f816:3eff:fea4:8437/64', 'neutron:device_id': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6de0556a-f319-4356-b742-06d48d854bbd, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a176eb2a-6fbd-4b8e-90b2-85a86523eb62) old=Port_Binding(mac=['fa:16:3e:a4:84:37 2001:db8::f816:3eff:fea4:8437'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea4:8437/64', 'neutron:device_id': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:32:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:57.868 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a176eb2a-6fbd-4b8e-90b2-85a86523eb62 in datapath a3ca3a81-ba03-43af-8eb7-2462170c9d43 updated
Oct 14 09:32:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:57.869 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3ca3a81-ba03-43af-8eb7-2462170c9d43, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:32:57 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:32:57.870 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd0f392-bf39-4977-af67-9507f90baaef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:32:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:32:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2496263740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:32:57 compute-0 nova_compute[259627]: 2025-10-14 09:32:57.987 2 DEBUG oslo_concurrency.processutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:32:57 compute-0 nova_compute[259627]: 2025-10-14 09:32:57.994 2 DEBUG nova.compute.provider_tree [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:32:58 compute-0 nova_compute[259627]: 2025-10-14 09:32:57.999 2 DEBUG nova.network.neutron [req-1829da97-5122-4ee0-b552-deebf71ba450 req-032b8c54-d0c7-4280-8f97-7a94ff46fd88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updated VIF entry in instance network info cache for port 02b93e6c-e8fd-4ab1-bd57-84775ae34da2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:32:58 compute-0 nova_compute[259627]: 2025-10-14 09:32:58.000 2 DEBUG nova.network.neutron [req-1829da97-5122-4ee0-b552-deebf71ba450 req-032b8c54-d0c7-4280-8f97-7a94ff46fd88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updating instance_info_cache with network_info: [{"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:32:58 compute-0 nova_compute[259627]: 2025-10-14 09:32:58.014 2 DEBUG nova.scheduler.client.report [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:32:58 compute-0 nova_compute[259627]: 2025-10-14 09:32:58.019 2 DEBUG oslo_concurrency.lockutils [req-1829da97-5122-4ee0-b552-deebf71ba450 req-032b8c54-d0c7-4280-8f97-7a94ff46fd88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:32:58 compute-0 nova_compute[259627]: 2025-10-14 09:32:58.033 2 DEBUG oslo_concurrency.lockutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:58 compute-0 nova_compute[259627]: 2025-10-14 09:32:58.056 2 INFO nova.scheduler.client.report [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Deleted allocations for instance 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0
Oct 14 09:32:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:32:58 compute-0 nova_compute[259627]: 2025-10-14 09:32:58.123 2 DEBUG oslo_concurrency.lockutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:32:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 121 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 73 KiB/s wr, 37 op/s
Oct 14 09:32:58 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2496263740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:32:59 compute-0 podman[404553]: 2025-10-14 09:32:59.684944543 +0000 UTC m=+0.081850300 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 09:32:59 compute-0 podman[404552]: 2025-10-14 09:32:59.699294625 +0000 UTC m=+0.110828481 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:32:59 compute-0 ceph-mon[74249]: pgmap v2395: 305 pgs: 305 active+clean; 121 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 73 KiB/s wr, 37 op/s
Oct 14 09:33:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 79 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 74 KiB/s wr, 53 op/s
Oct 14 09:33:00 compute-0 nova_compute[259627]: 2025-10-14 09:33:00.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:01 compute-0 nova_compute[259627]: 2025-10-14 09:33:01.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:01 compute-0 nova_compute[259627]: 2025-10-14 09:33:01.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:01 compute-0 nova_compute[259627]: 2025-10-14 09:33:01.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:01 compute-0 ceph-mon[74249]: pgmap v2396: 305 pgs: 305 active+clean; 79 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 74 KiB/s wr, 53 op/s
Oct 14 09:33:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 41 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 56 op/s
Oct 14 09:33:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:33:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:33:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:33:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:33:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:33:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:33:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:33:03 compute-0 nova_compute[259627]: 2025-10-14 09:33:03.674 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:03 compute-0 nova_compute[259627]: 2025-10-14 09:33:03.675 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:03 compute-0 nova_compute[259627]: 2025-10-14 09:33:03.692 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:33:03 compute-0 nova_compute[259627]: 2025-10-14 09:33:03.776 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:03 compute-0 nova_compute[259627]: 2025-10-14 09:33:03.777 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:03 compute-0 ceph-mon[74249]: pgmap v2397: 305 pgs: 305 active+clean; 41 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 56 op/s
Oct 14 09:33:03 compute-0 nova_compute[259627]: 2025-10-14 09:33:03.786 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:33:03 compute-0 nova_compute[259627]: 2025-10-14 09:33:03.787 2 INFO nova.compute.claims [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:33:03 compute-0 nova_compute[259627]: 2025-10-14 09:33:03.932 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:33:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:33:04 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3100251005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:33:04 compute-0 nova_compute[259627]: 2025-10-14 09:33:04.430 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:33:04 compute-0 nova_compute[259627]: 2025-10-14 09:33:04.440 2 DEBUG nova.compute.provider_tree [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:33:04 compute-0 nova_compute[259627]: 2025-10-14 09:33:04.457 2 DEBUG nova.scheduler.client.report [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:33:04 compute-0 nova_compute[259627]: 2025-10-14 09:33:04.487 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:04 compute-0 nova_compute[259627]: 2025-10-14 09:33:04.489 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:33:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 41 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 56 op/s
Oct 14 09:33:04 compute-0 nova_compute[259627]: 2025-10-14 09:33:04.551 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:33:04 compute-0 nova_compute[259627]: 2025-10-14 09:33:04.552 2 DEBUG nova.network.neutron [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:33:04 compute-0 nova_compute[259627]: 2025-10-14 09:33:04.575 2 INFO nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:33:04 compute-0 nova_compute[259627]: 2025-10-14 09:33:04.599 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:33:04 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3100251005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:33:04 compute-0 nova_compute[259627]: 2025-10-14 09:33:04.812 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:33:04 compute-0 nova_compute[259627]: 2025-10-14 09:33:04.814 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:33:04 compute-0 nova_compute[259627]: 2025-10-14 09:33:04.815 2 INFO nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Creating image(s)
Oct 14 09:33:04 compute-0 nova_compute[259627]: 2025-10-14 09:33:04.849 2 DEBUG nova.storage.rbd_utils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image aa3e17be-c995-4cab-b209-1eadaaff1634_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:33:04 compute-0 nova_compute[259627]: 2025-10-14 09:33:04.884 2 DEBUG nova.storage.rbd_utils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image aa3e17be-c995-4cab-b209-1eadaaff1634_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:33:04 compute-0 nova_compute[259627]: 2025-10-14 09:33:04.920 2 DEBUG nova.storage.rbd_utils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image aa3e17be-c995-4cab-b209-1eadaaff1634_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:33:04 compute-0 nova_compute[259627]: 2025-10-14 09:33:04.925 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:33:04 compute-0 nova_compute[259627]: 2025-10-14 09:33:04.984 2 DEBUG nova.policy [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:33:05 compute-0 nova_compute[259627]: 2025-10-14 09:33:05.036 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:33:05 compute-0 nova_compute[259627]: 2025-10-14 09:33:05.037 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:05 compute-0 nova_compute[259627]: 2025-10-14 09:33:05.038 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:05 compute-0 nova_compute[259627]: 2025-10-14 09:33:05.039 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:05 compute-0 nova_compute[259627]: 2025-10-14 09:33:05.074 2 DEBUG nova.storage.rbd_utils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image aa3e17be-c995-4cab-b209-1eadaaff1634_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:33:05 compute-0 nova_compute[259627]: 2025-10-14 09:33:05.078 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 aa3e17be-c995-4cab-b209-1eadaaff1634_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:33:05 compute-0 nova_compute[259627]: 2025-10-14 09:33:05.398 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 aa3e17be-c995-4cab-b209-1eadaaff1634_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:33:05 compute-0 nova_compute[259627]: 2025-10-14 09:33:05.485 2 DEBUG nova.storage.rbd_utils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image aa3e17be-c995-4cab-b209-1eadaaff1634_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:33:05 compute-0 nova_compute[259627]: 2025-10-14 09:33:05.580 2 DEBUG nova.objects.instance [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid aa3e17be-c995-4cab-b209-1eadaaff1634 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:33:05 compute-0 nova_compute[259627]: 2025-10-14 09:33:05.633 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:33:05 compute-0 nova_compute[259627]: 2025-10-14 09:33:05.633 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Ensure instance console log exists: /var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:33:05 compute-0 nova_compute[259627]: 2025-10-14 09:33:05.633 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:05 compute-0 nova_compute[259627]: 2025-10-14 09:33:05.634 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:05 compute-0 nova_compute[259627]: 2025-10-14 09:33:05.634 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:33:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4057076322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:33:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:33:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4057076322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:33:05 compute-0 ceph-mon[74249]: pgmap v2398: 305 pgs: 305 active+clean; 41 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 56 op/s
Oct 14 09:33:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/4057076322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:33:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/4057076322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:33:05 compute-0 nova_compute[259627]: 2025-10-14 09:33:05.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:05 compute-0 nova_compute[259627]: 2025-10-14 09:33:05.934 2 DEBUG nova.network.neutron [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Successfully created port: 51563204-46d2-4b26-bfa3-a2dc0f43701a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:33:06 compute-0 nova_compute[259627]: 2025-10-14 09:33:06.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 67 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 839 KiB/s wr, 76 op/s
Oct 14 09:33:06 compute-0 nova_compute[259627]: 2025-10-14 09:33:06.967 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434371.9656842, d1e24a24-daf7-4696-9dd4-ab29df7c8131 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:33:06 compute-0 nova_compute[259627]: 2025-10-14 09:33:06.968 2 INFO nova.compute.manager [-] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] VM Stopped (Lifecycle Event)
Oct 14 09:33:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:07.049 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:07.049 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:07.050 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:07 compute-0 nova_compute[259627]: 2025-10-14 09:33:07.059 2 DEBUG nova.compute.manager [None req-774d386b-dfcd-460d-acf5-9b31abbaff53 - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:33:07 compute-0 nova_compute[259627]: 2025-10-14 09:33:07.141 2 DEBUG nova.network.neutron [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Successfully created port: e4183a77-e102-4885-9a7d-ef0431abf27c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:33:07 compute-0 ceph-mon[74249]: pgmap v2399: 305 pgs: 305 active+clean; 67 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 839 KiB/s wr, 76 op/s
Oct 14 09:33:07 compute-0 nova_compute[259627]: 2025-10-14 09:33:07.937 2 DEBUG nova.network.neutron [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Successfully updated port: 51563204-46d2-4b26-bfa3-a2dc0f43701a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:33:08 compute-0 nova_compute[259627]: 2025-10-14 09:33:08.038 2 DEBUG nova.compute.manager [req-52665427-e8ff-4bcc-8bda-4a821ae8835e req-53080a83-6b41-4554-9178-4bd7c1c18d44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-changed-51563204-46d2-4b26-bfa3-a2dc0f43701a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:33:08 compute-0 nova_compute[259627]: 2025-10-14 09:33:08.039 2 DEBUG nova.compute.manager [req-52665427-e8ff-4bcc-8bda-4a821ae8835e req-53080a83-6b41-4554-9178-4bd7c1c18d44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Refreshing instance network info cache due to event network-changed-51563204-46d2-4b26-bfa3-a2dc0f43701a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:33:08 compute-0 nova_compute[259627]: 2025-10-14 09:33:08.040 2 DEBUG oslo_concurrency.lockutils [req-52665427-e8ff-4bcc-8bda-4a821ae8835e req-53080a83-6b41-4554-9178-4bd7c1c18d44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:33:08 compute-0 nova_compute[259627]: 2025-10-14 09:33:08.040 2 DEBUG oslo_concurrency.lockutils [req-52665427-e8ff-4bcc-8bda-4a821ae8835e req-53080a83-6b41-4554-9178-4bd7c1c18d44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:33:08 compute-0 nova_compute[259627]: 2025-10-14 09:33:08.041 2 DEBUG nova.network.neutron [req-52665427-e8ff-4bcc-8bda-4a821ae8835e req-53080a83-6b41-4554-9178-4bd7c1c18d44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Refreshing network info cache for port 51563204-46d2-4b26-bfa3-a2dc0f43701a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:33:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:33:08 compute-0 nova_compute[259627]: 2025-10-14 09:33:08.280 2 DEBUG nova.network.neutron [req-52665427-e8ff-4bcc-8bda-4a821ae8835e req-53080a83-6b41-4554-9178-4bd7c1c18d44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:33:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 67 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 825 KiB/s wr, 48 op/s
Oct 14 09:33:08 compute-0 nova_compute[259627]: 2025-10-14 09:33:08.979 2 DEBUG nova.network.neutron [req-52665427-e8ff-4bcc-8bda-4a821ae8835e req-53080a83-6b41-4554-9178-4bd7c1c18d44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:33:08 compute-0 nova_compute[259627]: 2025-10-14 09:33:08.995 2 DEBUG oslo_concurrency.lockutils [req-52665427-e8ff-4bcc-8bda-4a821ae8835e req-53080a83-6b41-4554-9178-4bd7c1c18d44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:33:09 compute-0 nova_compute[259627]: 2025-10-14 09:33:09.144 2 DEBUG nova.network.neutron [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Successfully updated port: e4183a77-e102-4885-9a7d-ef0431abf27c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:33:09 compute-0 nova_compute[259627]: 2025-10-14 09:33:09.161 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:33:09 compute-0 nova_compute[259627]: 2025-10-14 09:33:09.162 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:33:09 compute-0 nova_compute[259627]: 2025-10-14 09:33:09.162 2 DEBUG nova.network.neutron [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:33:09 compute-0 nova_compute[259627]: 2025-10-14 09:33:09.373 2 DEBUG nova.network.neutron [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:33:09 compute-0 ceph-mon[74249]: pgmap v2400: 305 pgs: 305 active+clean; 67 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 825 KiB/s wr, 48 op/s
Oct 14 09:33:10 compute-0 nova_compute[259627]: 2025-10-14 09:33:10.117 2 DEBUG nova.compute.manager [req-e81a59d0-c2e9-42b0-981e-faab3f1562d9 req-422c018d-c306-47d3-b3f6-20ffcf48c77b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-changed-e4183a77-e102-4885-9a7d-ef0431abf27c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:33:10 compute-0 nova_compute[259627]: 2025-10-14 09:33:10.118 2 DEBUG nova.compute.manager [req-e81a59d0-c2e9-42b0-981e-faab3f1562d9 req-422c018d-c306-47d3-b3f6-20ffcf48c77b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Refreshing instance network info cache due to event network-changed-e4183a77-e102-4885-9a7d-ef0431abf27c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:33:10 compute-0 nova_compute[259627]: 2025-10-14 09:33:10.118 2 DEBUG oslo_concurrency.lockutils [req-e81a59d0-c2e9-42b0-981e-faab3f1562d9 req-422c018d-c306-47d3-b3f6-20ffcf48c77b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:33:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 88 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 14 09:33:10 compute-0 nova_compute[259627]: 2025-10-14 09:33:10.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.187 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434376.1859162, 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.187 2 INFO nova.compute.manager [-] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] VM Stopped (Lifecycle Event)
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.207 2 DEBUG nova.compute.manager [None req-23daf500-5b3b-4d81-be5b-78b2d84c1ed8 - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.511 2 DEBUG nova.network.neutron [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updating instance_info_cache with network_info: [{"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.530 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.530 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Instance network_info: |[{"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.531 2 DEBUG oslo_concurrency.lockutils [req-e81a59d0-c2e9-42b0-981e-faab3f1562d9 req-422c018d-c306-47d3-b3f6-20ffcf48c77b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.531 2 DEBUG nova.network.neutron [req-e81a59d0-c2e9-42b0-981e-faab3f1562d9 req-422c018d-c306-47d3-b3f6-20ffcf48c77b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Refreshing network info cache for port e4183a77-e102-4885-9a7d-ef0431abf27c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.535 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Start _get_guest_xml network_info=[{"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.541 2 WARNING nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.546 2 DEBUG nova.virt.libvirt.host [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.547 2 DEBUG nova.virt.libvirt.host [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.556 2 DEBUG nova.virt.libvirt.host [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.556 2 DEBUG nova.virt.libvirt.host [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.557 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.557 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.558 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.558 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.558 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.558 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.559 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.559 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.559 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.560 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.560 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.560 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:33:11 compute-0 nova_compute[259627]: 2025-10-14 09:33:11.563 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:33:11 compute-0 ceph-mon[74249]: pgmap v2401: 305 pgs: 305 active+clean; 88 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 14 09:33:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:33:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2205886182' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.047 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.069 2 DEBUG nova.storage.rbd_utils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image aa3e17be-c995-4cab-b209-1eadaaff1634_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.073 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:33:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:33:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2974658472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:33:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.497 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.500 2 DEBUG nova.virt.libvirt.vif [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:33:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1334569923',display_name='tempest-TestGettingAddress-server-1334569923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1334569923',id=142,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-z7alpslz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:33:04Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=aa3e17be-c995-4cab-b209-1eadaaff1634,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.500 2 DEBUG nova.network.os_vif_util [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.502 2 DEBUG nova.network.os_vif_util [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:aa:2c,bridge_name='br-int',has_traffic_filtering=True,id=51563204-46d2-4b26-bfa3-a2dc0f43701a,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51563204-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.503 2 DEBUG nova.virt.libvirt.vif [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:33:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1334569923',display_name='tempest-TestGettingAddress-server-1334569923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1334569923',id=142,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-z7alpslz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:33:04Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=aa3e17be-c995-4cab-b209-1eadaaff1634,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.504 2 DEBUG nova.network.os_vif_util [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.505 2 DEBUG nova.network.os_vif_util [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:fd:da,bridge_name='br-int',has_traffic_filtering=True,id=e4183a77-e102-4885-9a7d-ef0431abf27c,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4183a77-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.507 2 DEBUG nova.objects.instance [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid aa3e17be-c995-4cab-b209-1eadaaff1634 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.539 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:33:12 compute-0 nova_compute[259627]:   <uuid>aa3e17be-c995-4cab-b209-1eadaaff1634</uuid>
Oct 14 09:33:12 compute-0 nova_compute[259627]:   <name>instance-0000008e</name>
Oct 14 09:33:12 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:33:12 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:33:12 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <nova:name>tempest-TestGettingAddress-server-1334569923</nova:name>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:33:11</nova:creationTime>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:33:12 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:33:12 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:33:12 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:33:12 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:33:12 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:33:12 compute-0 nova_compute[259627]:         <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 09:33:12 compute-0 nova_compute[259627]:         <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:33:12 compute-0 nova_compute[259627]:         <nova:port uuid="51563204-46d2-4b26-bfa3-a2dc0f43701a">
Oct 14 09:33:12 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:33:12 compute-0 nova_compute[259627]:         <nova:port uuid="e4183a77-e102-4885-9a7d-ef0431abf27c">
Oct 14 09:33:12 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe8f:fdda" ipVersion="6"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe8f:fdda" ipVersion="6"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:33:12 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:33:12 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <system>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <entry name="serial">aa3e17be-c995-4cab-b209-1eadaaff1634</entry>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <entry name="uuid">aa3e17be-c995-4cab-b209-1eadaaff1634</entry>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     </system>
Oct 14 09:33:12 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:33:12 compute-0 nova_compute[259627]:   <os>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:   </os>
Oct 14 09:33:12 compute-0 nova_compute[259627]:   <features>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:   </features>
Oct 14 09:33:12 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:33:12 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:33:12 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/aa3e17be-c995-4cab-b209-1eadaaff1634_disk">
Oct 14 09:33:12 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       </source>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:33:12 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/aa3e17be-c995-4cab-b209-1eadaaff1634_disk.config">
Oct 14 09:33:12 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       </source>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:33:12 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:8e:aa:2c"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <target dev="tap51563204-46"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:8f:fd:da"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <target dev="tape4183a77-e1"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634/console.log" append="off"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <video>
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     </video>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:33:12 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:33:12 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:33:12 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:33:12 compute-0 nova_compute[259627]: </domain>
Oct 14 09:33:12 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.541 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Preparing to wait for external event network-vif-plugged-51563204-46d2-4b26-bfa3-a2dc0f43701a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.542 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.543 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.543 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.544 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Preparing to wait for external event network-vif-plugged-e4183a77-e102-4885-9a7d-ef0431abf27c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.544 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.545 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.545 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.547 2 DEBUG nova.virt.libvirt.vif [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:33:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1334569923',display_name='tempest-TestGettingAddress-server-1334569923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1334569923',id=142,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-z7alpslz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:33:04Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=aa3e17be-c995-4cab-b209-1eadaaff1634,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.547 2 DEBUG nova.network.os_vif_util [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.548 2 DEBUG nova.network.os_vif_util [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:aa:2c,bridge_name='br-int',has_traffic_filtering=True,id=51563204-46d2-4b26-bfa3-a2dc0f43701a,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51563204-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.549 2 DEBUG os_vif [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:aa:2c,bridge_name='br-int',has_traffic_filtering=True,id=51563204-46d2-4b26-bfa3-a2dc0f43701a,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51563204-46') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.551 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.552 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.558 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51563204-46, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.559 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap51563204-46, col_values=(('external_ids', {'iface-id': '51563204-46d2-4b26-bfa3-a2dc0f43701a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:aa:2c', 'vm-uuid': 'aa3e17be-c995-4cab-b209-1eadaaff1634'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:12 compute-0 NetworkManager[44885]: <info>  [1760434392.5670] manager: (tap51563204-46): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/617)
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.573 2 INFO os_vif [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:aa:2c,bridge_name='br-int',has_traffic_filtering=True,id=51563204-46d2-4b26-bfa3-a2dc0f43701a,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51563204-46')
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.574 2 DEBUG nova.virt.libvirt.vif [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:33:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1334569923',display_name='tempest-TestGettingAddress-server-1334569923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1334569923',id=142,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-z7alpslz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:33:04Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=aa3e17be-c995-4cab-b209-1eadaaff1634,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.575 2 DEBUG nova.network.os_vif_util [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.576 2 DEBUG nova.network.os_vif_util [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:fd:da,bridge_name='br-int',has_traffic_filtering=True,id=e4183a77-e102-4885-9a7d-ef0431abf27c,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4183a77-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.576 2 DEBUG os_vif [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:fd:da,bridge_name='br-int',has_traffic_filtering=True,id=e4183a77-e102-4885-9a7d-ef0431abf27c,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4183a77-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.579 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4183a77-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.580 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape4183a77-e1, col_values=(('external_ids', {'iface-id': 'e4183a77-e102-4885-9a7d-ef0431abf27c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:fd:da', 'vm-uuid': 'aa3e17be-c995-4cab-b209-1eadaaff1634'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:12 compute-0 NetworkManager[44885]: <info>  [1760434392.5827] manager: (tape4183a77-e1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/618)
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.590 2 INFO os_vif [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:fd:da,bridge_name='br-int',has_traffic_filtering=True,id=e4183a77-e102-4885-9a7d-ef0431abf27c,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4183a77-e1')
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.678 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.679 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.680 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:8e:aa:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.680 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:8f:fd:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.681 2 INFO nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Using config drive
Oct 14 09:33:12 compute-0 nova_compute[259627]: 2025-10-14 09:33:12.717 2 DEBUG nova.storage.rbd_utils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image aa3e17be-c995-4cab-b209-1eadaaff1634_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:33:12 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2205886182' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:33:12 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2974658472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:33:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:33:13 compute-0 nova_compute[259627]: 2025-10-14 09:33:13.758 2 INFO nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Creating config drive at /var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634/disk.config
Oct 14 09:33:13 compute-0 nova_compute[259627]: 2025-10-14 09:33:13.767 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzhggxbpc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:33:13 compute-0 ceph-mon[74249]: pgmap v2402: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 14 09:33:13 compute-0 nova_compute[259627]: 2025-10-14 09:33:13.923 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzhggxbpc" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:33:13 compute-0 nova_compute[259627]: 2025-10-14 09:33:13.960 2 DEBUG nova.storage.rbd_utils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image aa3e17be-c995-4cab-b209-1eadaaff1634_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:33:13 compute-0 nova_compute[259627]: 2025-10-14 09:33:13.965 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634/disk.config aa3e17be-c995-4cab-b209-1eadaaff1634_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:33:14 compute-0 nova_compute[259627]: 2025-10-14 09:33:14.018 2 DEBUG nova.network.neutron [req-e81a59d0-c2e9-42b0-981e-faab3f1562d9 req-422c018d-c306-47d3-b3f6-20ffcf48c77b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updated VIF entry in instance network info cache for port e4183a77-e102-4885-9a7d-ef0431abf27c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:33:14 compute-0 nova_compute[259627]: 2025-10-14 09:33:14.019 2 DEBUG nova.network.neutron [req-e81a59d0-c2e9-42b0-981e-faab3f1562d9 req-422c018d-c306-47d3-b3f6-20ffcf48c77b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updating instance_info_cache with network_info: [{"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:33:14 compute-0 nova_compute[259627]: 2025-10-14 09:33:14.052 2 DEBUG oslo_concurrency.lockutils [req-e81a59d0-c2e9-42b0-981e-faab3f1562d9 req-422c018d-c306-47d3-b3f6-20ffcf48c77b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:33:14 compute-0 nova_compute[259627]: 2025-10-14 09:33:14.164 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634/disk.config aa3e17be-c995-4cab-b209-1eadaaff1634_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:33:14 compute-0 nova_compute[259627]: 2025-10-14 09:33:14.165 2 INFO nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Deleting local config drive /var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634/disk.config because it was imported into RBD.
Oct 14 09:33:14 compute-0 kernel: tap51563204-46: entered promiscuous mode
Oct 14 09:33:14 compute-0 NetworkManager[44885]: <info>  [1760434394.2435] manager: (tap51563204-46): new Tun device (/org/freedesktop/NetworkManager/Devices/619)
Oct 14 09:33:14 compute-0 ovn_controller[152662]: 2025-10-14T09:33:14Z|01525|binding|INFO|Claiming lport 51563204-46d2-4b26-bfa3-a2dc0f43701a for this chassis.
Oct 14 09:33:14 compute-0 ovn_controller[152662]: 2025-10-14T09:33:14Z|01526|binding|INFO|51563204-46d2-4b26-bfa3-a2dc0f43701a: Claiming fa:16:3e:8e:aa:2c 10.100.0.13
Oct 14 09:33:14 compute-0 nova_compute[259627]: 2025-10-14 09:33:14.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:14 compute-0 nova_compute[259627]: 2025-10-14 09:33:14.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.268 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:aa:2c 10.100.0.13'], port_security=['fa:16:3e:8e:aa:2c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'aa3e17be-c995-4cab-b209-1eadaaff1634', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0a1ad0e-ac41-4d92-be07-56e1d94d37c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02fc600f-219c-48ab-94ed-7d3694dfd14e, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=51563204-46d2-4b26-bfa3-a2dc0f43701a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.270 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 51563204-46d2-4b26-bfa3-a2dc0f43701a in datapath 0ec9546c-0acc-437f-9f6e-7db1743faf53 bound to our chassis
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.271 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ec9546c-0acc-437f-9f6e-7db1743faf53
Oct 14 09:33:14 compute-0 NetworkManager[44885]: <info>  [1760434394.2759] manager: (tape4183a77-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/620)
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.296 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c3281ced-97e2-4224-a0c9-0bed1d6180bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.298 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0ec9546c-01 in ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.301 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0ec9546c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.301 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ad4567-aa75-4c41-9582-3adf48c64c08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.303 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1ee6e1a0-493c-48a2-be49-6bf32b60f05f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:14 compute-0 systemd-udevd[404929]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:33:14 compute-0 systemd-udevd[404931]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.322 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[13cfe5b0-efe0-4327-acff-a88feaec7ebb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:14 compute-0 systemd-machined[214636]: New machine qemu-175-instance-0000008e.
Oct 14 09:33:14 compute-0 NetworkManager[44885]: <info>  [1760434394.3369] device (tap51563204-46): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:33:14 compute-0 NetworkManager[44885]: <info>  [1760434394.3382] device (tap51563204-46): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:33:14 compute-0 systemd[1]: Started Virtual Machine qemu-175-instance-0000008e.
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.352 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8aefb6-c042-42e8-a8a4-aef8cb3a29e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:14 compute-0 kernel: tape4183a77-e1: entered promiscuous mode
Oct 14 09:33:14 compute-0 nova_compute[259627]: 2025-10-14 09:33:14.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:14 compute-0 ovn_controller[152662]: 2025-10-14T09:33:14Z|01527|binding|INFO|Claiming lport e4183a77-e102-4885-9a7d-ef0431abf27c for this chassis.
Oct 14 09:33:14 compute-0 ovn_controller[152662]: 2025-10-14T09:33:14Z|01528|binding|INFO|e4183a77-e102-4885-9a7d-ef0431abf27c: Claiming fa:16:3e:8f:fd:da 2001:db8:0:1:f816:3eff:fe8f:fdda 2001:db8::f816:3eff:fe8f:fdda
Oct 14 09:33:14 compute-0 NetworkManager[44885]: <info>  [1760434394.3888] device (tape4183a77-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:33:14 compute-0 NetworkManager[44885]: <info>  [1760434394.3922] device (tape4183a77-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.392 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:fd:da 2001:db8:0:1:f816:3eff:fe8f:fdda 2001:db8::f816:3eff:fe8f:fdda'], port_security=['fa:16:3e:8f:fd:da 2001:db8:0:1:f816:3eff:fe8f:fdda 2001:db8::f816:3eff:fe8f:fdda'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe8f:fdda/64 2001:db8::f816:3eff:fe8f:fdda/64', 'neutron:device_id': 'aa3e17be-c995-4cab-b209-1eadaaff1634', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0a1ad0e-ac41-4d92-be07-56e1d94d37c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6de0556a-f319-4356-b742-06d48d854bbd, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e4183a77-e102-4885-9a7d-ef0431abf27c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:33:14 compute-0 ovn_controller[152662]: 2025-10-14T09:33:14Z|01529|binding|INFO|Setting lport 51563204-46d2-4b26-bfa3-a2dc0f43701a ovn-installed in OVS
Oct 14 09:33:14 compute-0 ovn_controller[152662]: 2025-10-14T09:33:14Z|01530|binding|INFO|Setting lport 51563204-46d2-4b26-bfa3-a2dc0f43701a up in Southbound
Oct 14 09:33:14 compute-0 nova_compute[259627]: 2025-10-14 09:33:14.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.400 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4d846236-18d0-4155-9059-64f74124be35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:14 compute-0 ovn_controller[152662]: 2025-10-14T09:33:14Z|01531|binding|INFO|Setting lport e4183a77-e102-4885-9a7d-ef0431abf27c ovn-installed in OVS
Oct 14 09:33:14 compute-0 ovn_controller[152662]: 2025-10-14T09:33:14Z|01532|binding|INFO|Setting lport e4183a77-e102-4885-9a7d-ef0431abf27c up in Southbound
Oct 14 09:33:14 compute-0 nova_compute[259627]: 2025-10-14 09:33:14.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.414 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ba056462-6974-425d-9430-8a9ffcb3550d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:14 compute-0 NetworkManager[44885]: <info>  [1760434394.4165] manager: (tap0ec9546c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/621)
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.457 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca5750b-1d35-4abd-b6f1-ada015c32058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.461 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d7452f3b-3444-466d-9b11-2ba291a417cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:14 compute-0 NetworkManager[44885]: <info>  [1760434394.4897] device (tap0ec9546c-00): carrier: link connected
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.496 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f9295e48-c23f-4356-83f7-852ec8b56168]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.518 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ee604729-2be2-4791-b3e8-59a3455ed275]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ec9546c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:a2:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817325, 'reachable_time': 33619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404966, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.534 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fb92a894-2cfe-46a6-be73-b32b207306e8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:a286'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817325, 'tstamp': 817325}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404967, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.562 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa649f1-f3d4-47d6-8ca1-a0b3131b4a33]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ec9546c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:a2:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817325, 'reachable_time': 33619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 404968, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.594 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[70e0e996-1c89-43a1-b670-addd74883e1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.662 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d31eed7e-2fd0-4d37-b1ec-7955047c9a4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.663 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ec9546c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.663 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.664 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ec9546c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:14 compute-0 kernel: tap0ec9546c-00: entered promiscuous mode
Oct 14 09:33:14 compute-0 NetworkManager[44885]: <info>  [1760434394.6676] manager: (tap0ec9546c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/622)
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.680 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ec9546c-00, col_values=(('external_ids', {'iface-id': 'b37ec0e2-6cb5-44b1-98a2-2c39cdd204b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:14 compute-0 nova_compute[259627]: 2025-10-14 09:33:14.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:14 compute-0 ovn_controller[152662]: 2025-10-14T09:33:14Z|01533|binding|INFO|Releasing lport b37ec0e2-6cb5-44b1-98a2-2c39cdd204b3 from this chassis (sb_readonly=0)
Oct 14 09:33:14 compute-0 nova_compute[259627]: 2025-10-14 09:33:14.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.700 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0ec9546c-0acc-437f-9f6e-7db1743faf53.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0ec9546c-0acc-437f-9f6e-7db1743faf53.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.701 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d316ac7a-bf47-479c-a77f-ef4a253f578e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.702 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-0ec9546c-0acc-437f-9f6e-7db1743faf53
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/0ec9546c-0acc-437f-9f6e-7db1743faf53.pid.haproxy
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 0ec9546c-0acc-437f-9f6e-7db1743faf53
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:33:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.703 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'env', 'PROCESS_TAG=haproxy-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0ec9546c-0acc-437f-9f6e-7db1743faf53.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:33:15 compute-0 podman[405043]: 2025-10-14 09:33:15.099123116 +0000 UTC m=+0.049346122 container create e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:33:15 compute-0 systemd[1]: Started libpod-conmon-e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16.scope.
Oct 14 09:33:15 compute-0 podman[405043]: 2025-10-14 09:33:15.072471732 +0000 UTC m=+0.022694758 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:33:15 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:33:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0619add7e0ccdeb5eee5bc37d898eed13da4d773245bbb9df7aa5e8493239769/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:33:15 compute-0 podman[405043]: 2025-10-14 09:33:15.200303619 +0000 UTC m=+0.150526685 container init e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.209 2 DEBUG nova.compute.manager [req-80388964-c18e-409d-98f6-8ae50e813e64 req-d896f212-bf26-4c4d-badf-6d332819f937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-plugged-51563204-46d2-4b26-bfa3-a2dc0f43701a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.210 2 DEBUG oslo_concurrency.lockutils [req-80388964-c18e-409d-98f6-8ae50e813e64 req-d896f212-bf26-4c4d-badf-6d332819f937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.211 2 DEBUG oslo_concurrency.lockutils [req-80388964-c18e-409d-98f6-8ae50e813e64 req-d896f212-bf26-4c4d-badf-6d332819f937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.211 2 DEBUG oslo_concurrency.lockutils [req-80388964-c18e-409d-98f6-8ae50e813e64 req-d896f212-bf26-4c4d-badf-6d332819f937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.212 2 DEBUG nova.compute.manager [req-80388964-c18e-409d-98f6-8ae50e813e64 req-d896f212-bf26-4c4d-badf-6d332819f937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Processing event network-vif-plugged-51563204-46d2-4b26-bfa3-a2dc0f43701a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:33:15 compute-0 podman[405043]: 2025-10-14 09:33:15.214098827 +0000 UTC m=+0.164321863 container start e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 09:33:15 compute-0 neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53[405056]: [NOTICE]   (405060) : New worker (405062) forked
Oct 14 09:33:15 compute-0 neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53[405056]: [NOTICE]   (405060) : Loading success.
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.296 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e4183a77-e102-4885-9a7d-ef0431abf27c in datapath a3ca3a81-ba03-43af-8eb7-2462170c9d43 unbound from our chassis
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.298 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3ca3a81-ba03-43af-8eb7-2462170c9d43
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.310 2 DEBUG nova.compute.manager [req-4e6b63fb-c662-42c3-a2ba-6553f4f967c0 req-7f63245b-608e-48e9-b0ef-500ab8b3f601 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-plugged-e4183a77-e102-4885-9a7d-ef0431abf27c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.310 2 DEBUG oslo_concurrency.lockutils [req-4e6b63fb-c662-42c3-a2ba-6553f4f967c0 req-7f63245b-608e-48e9-b0ef-500ab8b3f601 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.311 2 DEBUG oslo_concurrency.lockutils [req-4e6b63fb-c662-42c3-a2ba-6553f4f967c0 req-7f63245b-608e-48e9-b0ef-500ab8b3f601 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.311 2 DEBUG oslo_concurrency.lockutils [req-4e6b63fb-c662-42c3-a2ba-6553f4f967c0 req-7f63245b-608e-48e9-b0ef-500ab8b3f601 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.311 2 DEBUG nova.compute.manager [req-4e6b63fb-c662-42c3-a2ba-6553f4f967c0 req-7f63245b-608e-48e9-b0ef-500ab8b3f601 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Processing event network-vif-plugged-e4183a77-e102-4885-9a7d-ef0431abf27c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.316 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[87dfb4fc-e471-47d9-8345-79386b57224e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.318 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa3ca3a81-b1 in ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.320 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa3ca3a81-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.320 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[38fac36a-6784-4b62-8a0f-e339f53fac41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.321 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[16609918-6f76-4ad3-8fe5-e14932600da8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.336 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[19a900bb-84f2-4055-8103-aca1f2be1117]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.359 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c4801916-18de-4714-b3b9-c275968d89ca]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.361 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434395.3611186, aa3e17be-c995-4cab-b209-1eadaaff1634 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.362 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] VM Started (Lifecycle Event)
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.364 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.367 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.370 2 INFO nova.virt.libvirt.driver [-] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Instance spawned successfully.
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.370 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.391 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.396 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.398 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[31ad3511-04da-4b93-b126-7f2ab074ca3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.399 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.400 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.400 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.400 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.401 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.401 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.404 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[74b254fd-4028-44ad-943e-ce101c99a8ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:15 compute-0 NetworkManager[44885]: <info>  [1760434395.4063] manager: (tapa3ca3a81-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/623)
Oct 14 09:33:15 compute-0 systemd-udevd[404959]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.428 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.428 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434395.3613482, aa3e17be-c995-4cab-b209-1eadaaff1634 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.428 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] VM Paused (Lifecycle Event)
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.446 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0dad88f4-f326-42fd-982e-634ce19cd509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.450 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1a88f73a-bcd8-4485-a439-482f02ed38c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.464 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.467 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434395.3668435, aa3e17be-c995-4cab-b209-1eadaaff1634 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.467 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] VM Resumed (Lifecycle Event)
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.470 2 INFO nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Took 10.66 seconds to spawn the instance on the hypervisor.
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.471 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:33:15 compute-0 NetworkManager[44885]: <info>  [1760434395.4848] device (tapa3ca3a81-b0): carrier: link connected
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.491 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e378dc39-c8bc-4cf2-9465-b24634cbc79a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.495 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.499 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.517 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.519 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[65c31cba-cd6d-4cd5-91bd-7361a3cf1203]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3ca3a81-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:84:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817424, 'reachable_time': 30099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405081, 'error': None, 'target': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.528 2 INFO nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Took 11.79 seconds to build instance.
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.540 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.545 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ad022b15-c39b-466e-9a37-32f434a8b679]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:8437'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817424, 'tstamp': 817424}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405082, 'error': None, 'target': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.573 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[84c14942-4ee3-4c66-89d4-05de55e61299]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3ca3a81-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:84:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817424, 'reachable_time': 30099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 405083, 'error': None, 'target': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.616 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f7c86167-f31b-440e-939e-77504b498832]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.660 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae2f6d8-e1e9-4017-b78d-de125c179f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.662 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ca3a81-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.662 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.663 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3ca3a81-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:15 compute-0 NetworkManager[44885]: <info>  [1760434395.6655] manager: (tapa3ca3a81-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/624)
Oct 14 09:33:15 compute-0 kernel: tapa3ca3a81-b0: entered promiscuous mode
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.668 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3ca3a81-b0, col_values=(('external_ids', {'iface-id': 'a176eb2a-6fbd-4b8e-90b2-85a86523eb62'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:15 compute-0 ovn_controller[152662]: 2025-10-14T09:33:15Z|01534|binding|INFO|Releasing lport a176eb2a-6fbd-4b8e-90b2-85a86523eb62 from this chassis (sb_readonly=0)
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.683 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a3ca3a81-ba03-43af-8eb7-2462170c9d43.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a3ca3a81-ba03-43af-8eb7-2462170c9d43.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.684 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5180d169-cecb-40b3-8a1f-84a464ebdc1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.685 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-a3ca3a81-ba03-43af-8eb7-2462170c9d43
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/a3ca3a81-ba03-43af-8eb7-2462170c9d43.pid.haproxy
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID a3ca3a81-ba03-43af-8eb7-2462170c9d43
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:33:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.686 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'env', 'PROCESS_TAG=haproxy-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a3ca3a81-ba03-43af-8eb7-2462170c9d43.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:33:15 compute-0 ceph-mon[74249]: pgmap v2403: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:33:15 compute-0 nova_compute[259627]: 2025-10-14 09:33:15.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:16 compute-0 podman[405113]: 2025-10-14 09:33:16.101903114 +0000 UTC m=+0.096724635 container create 552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 09:33:16 compute-0 systemd[1]: Started libpod-conmon-552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63.scope.
Oct 14 09:33:16 compute-0 podman[405113]: 2025-10-14 09:33:16.061361849 +0000 UTC m=+0.056183430 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:33:16 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0a4b15685401cfe097a8856dcb11738118fab0652949510d103f701d6144a22/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:33:16 compute-0 podman[405113]: 2025-10-14 09:33:16.208194362 +0000 UTC m=+0.203015943 container init 552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 09:33:16 compute-0 podman[405113]: 2025-10-14 09:33:16.217108581 +0000 UTC m=+0.211930122 container start 552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 09:33:16 compute-0 neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43[405128]: [NOTICE]   (405132) : New worker (405134) forked
Oct 14 09:33:16 compute-0 neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43[405128]: [NOTICE]   (405132) : Loading success.
Oct 14 09:33:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 09:33:17 compute-0 nova_compute[259627]: 2025-10-14 09:33:17.357 2 DEBUG nova.compute.manager [req-617a3001-ecac-4991-8852-a55a31c896c8 req-8fe8bee7-ebf5-4a8f-a16d-10583ed2e02b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-plugged-51563204-46d2-4b26-bfa3-a2dc0f43701a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:33:17 compute-0 nova_compute[259627]: 2025-10-14 09:33:17.358 2 DEBUG oslo_concurrency.lockutils [req-617a3001-ecac-4991-8852-a55a31c896c8 req-8fe8bee7-ebf5-4a8f-a16d-10583ed2e02b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:17 compute-0 nova_compute[259627]: 2025-10-14 09:33:17.359 2 DEBUG oslo_concurrency.lockutils [req-617a3001-ecac-4991-8852-a55a31c896c8 req-8fe8bee7-ebf5-4a8f-a16d-10583ed2e02b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:17 compute-0 nova_compute[259627]: 2025-10-14 09:33:17.359 2 DEBUG oslo_concurrency.lockutils [req-617a3001-ecac-4991-8852-a55a31c896c8 req-8fe8bee7-ebf5-4a8f-a16d-10583ed2e02b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:17 compute-0 nova_compute[259627]: 2025-10-14 09:33:17.359 2 DEBUG nova.compute.manager [req-617a3001-ecac-4991-8852-a55a31c896c8 req-8fe8bee7-ebf5-4a8f-a16d-10583ed2e02b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] No waiting events found dispatching network-vif-plugged-51563204-46d2-4b26-bfa3-a2dc0f43701a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:33:17 compute-0 nova_compute[259627]: 2025-10-14 09:33:17.359 2 WARNING nova.compute.manager [req-617a3001-ecac-4991-8852-a55a31c896c8 req-8fe8bee7-ebf5-4a8f-a16d-10583ed2e02b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received unexpected event network-vif-plugged-51563204-46d2-4b26-bfa3-a2dc0f43701a for instance with vm_state active and task_state None.
Oct 14 09:33:17 compute-0 nova_compute[259627]: 2025-10-14 09:33:17.441 2 DEBUG nova.compute.manager [req-82efd6e3-a7c3-43b1-9c44-9cc47fd8a895 req-c1a890c4-0030-4778-891a-b045aff34579 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-plugged-e4183a77-e102-4885-9a7d-ef0431abf27c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:33:17 compute-0 nova_compute[259627]: 2025-10-14 09:33:17.442 2 DEBUG oslo_concurrency.lockutils [req-82efd6e3-a7c3-43b1-9c44-9cc47fd8a895 req-c1a890c4-0030-4778-891a-b045aff34579 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:17 compute-0 nova_compute[259627]: 2025-10-14 09:33:17.442 2 DEBUG oslo_concurrency.lockutils [req-82efd6e3-a7c3-43b1-9c44-9cc47fd8a895 req-c1a890c4-0030-4778-891a-b045aff34579 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:17 compute-0 nova_compute[259627]: 2025-10-14 09:33:17.443 2 DEBUG oslo_concurrency.lockutils [req-82efd6e3-a7c3-43b1-9c44-9cc47fd8a895 req-c1a890c4-0030-4778-891a-b045aff34579 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:17 compute-0 nova_compute[259627]: 2025-10-14 09:33:17.443 2 DEBUG nova.compute.manager [req-82efd6e3-a7c3-43b1-9c44-9cc47fd8a895 req-c1a890c4-0030-4778-891a-b045aff34579 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] No waiting events found dispatching network-vif-plugged-e4183a77-e102-4885-9a7d-ef0431abf27c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:33:17 compute-0 nova_compute[259627]: 2025-10-14 09:33:17.444 2 WARNING nova.compute.manager [req-82efd6e3-a7c3-43b1-9c44-9cc47fd8a895 req-c1a890c4-0030-4778-891a-b045aff34579 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received unexpected event network-vif-plugged-e4183a77-e102-4885-9a7d-ef0431abf27c for instance with vm_state active and task_state None.
Oct 14 09:33:17 compute-0 nova_compute[259627]: 2025-10-14 09:33:17.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:17 compute-0 sudo[405144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:33:17 compute-0 podman[405143]: 2025-10-14 09:33:17.661934006 +0000 UTC m=+0.062613127 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:33:17 compute-0 sudo[405144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:33:17 compute-0 sudo[405144]: pam_unix(sudo:session): session closed for user root
Oct 14 09:33:17 compute-0 sudo[405201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:33:17 compute-0 podman[405145]: 2025-10-14 09:33:17.724129613 +0000 UTC m=+0.120785965 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:33:17 compute-0 sudo[405201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:33:17 compute-0 sudo[405201]: pam_unix(sudo:session): session closed for user root
Oct 14 09:33:17 compute-0 sudo[405229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:33:17 compute-0 sudo[405229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:33:17 compute-0 sudo[405229]: pam_unix(sudo:session): session closed for user root
Oct 14 09:33:17 compute-0 sudo[405254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:33:17 compute-0 sudo[405254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:33:17 compute-0 ceph-mon[74249]: pgmap v2404: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 09:33:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:33:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1003 KiB/s wr, 16 op/s
Oct 14 09:33:18 compute-0 sudo[405254]: pam_unix(sudo:session): session closed for user root
Oct 14 09:33:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:33:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:33:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:33:18 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:33:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:33:18 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:33:18 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 4caeb134-db52-4422-b60c-1b46481f680f does not exist
Oct 14 09:33:18 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 72b40ef1-5355-485e-917a-2fd6c94589ad does not exist
Oct 14 09:33:18 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev a24966ce-8908-4fb7-bea0-edafcd9cbb58 does not exist
Oct 14 09:33:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:33:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:33:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:33:18 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:33:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:33:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:33:18 compute-0 sudo[405311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:33:18 compute-0 sudo[405311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:33:18 compute-0 sudo[405311]: pam_unix(sudo:session): session closed for user root
Oct 14 09:33:18 compute-0 sudo[405336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:33:18 compute-0 sudo[405336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:33:18 compute-0 sudo[405336]: pam_unix(sudo:session): session closed for user root
Oct 14 09:33:18 compute-0 sudo[405361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:33:18 compute-0 sudo[405361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:33:18 compute-0 sudo[405361]: pam_unix(sudo:session): session closed for user root
Oct 14 09:33:18 compute-0 sudo[405386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:33:18 compute-0 sudo[405386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:33:18 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:33:18 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:33:18 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:33:18 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:33:18 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:33:18 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:33:19 compute-0 ovn_controller[152662]: 2025-10-14T09:33:19Z|01535|binding|INFO|Releasing lport a176eb2a-6fbd-4b8e-90b2-85a86523eb62 from this chassis (sb_readonly=0)
Oct 14 09:33:19 compute-0 NetworkManager[44885]: <info>  [1760434399.1945] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/625)
Oct 14 09:33:19 compute-0 ovn_controller[152662]: 2025-10-14T09:33:19Z|01536|binding|INFO|Releasing lport b37ec0e2-6cb5-44b1-98a2-2c39cdd204b3 from this chassis (sb_readonly=0)
Oct 14 09:33:19 compute-0 nova_compute[259627]: 2025-10-14 09:33:19.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:19 compute-0 NetworkManager[44885]: <info>  [1760434399.1963] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/626)
Oct 14 09:33:19 compute-0 ovn_controller[152662]: 2025-10-14T09:33:19Z|01537|binding|INFO|Releasing lport a176eb2a-6fbd-4b8e-90b2-85a86523eb62 from this chassis (sb_readonly=0)
Oct 14 09:33:19 compute-0 ovn_controller[152662]: 2025-10-14T09:33:19Z|01538|binding|INFO|Releasing lport b37ec0e2-6cb5-44b1-98a2-2c39cdd204b3 from this chassis (sb_readonly=0)
Oct 14 09:33:19 compute-0 nova_compute[259627]: 2025-10-14 09:33:19.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:19 compute-0 nova_compute[259627]: 2025-10-14 09:33:19.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:19 compute-0 podman[405449]: 2025-10-14 09:33:19.386895615 +0000 UTC m=+0.114930881 container create 5d79f226dbce0e3a1ed2dd840424d9e5ddee6458faddddfa8691f48e6b802c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kirch, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 09:33:19 compute-0 podman[405449]: 2025-10-14 09:33:19.318925127 +0000 UTC m=+0.046960403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:33:19 compute-0 systemd[1]: Started libpod-conmon-5d79f226dbce0e3a1ed2dd840424d9e5ddee6458faddddfa8691f48e6b802c84.scope.
Oct 14 09:33:19 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:33:19 compute-0 nova_compute[259627]: 2025-10-14 09:33:19.494 2 DEBUG nova.compute.manager [req-e636d05b-4cfe-4f36-b7cf-b814a9407bb5 req-73f43c63-53de-480a-953d-ca9155a101cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-changed-51563204-46d2-4b26-bfa3-a2dc0f43701a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:33:19 compute-0 nova_compute[259627]: 2025-10-14 09:33:19.496 2 DEBUG nova.compute.manager [req-e636d05b-4cfe-4f36-b7cf-b814a9407bb5 req-73f43c63-53de-480a-953d-ca9155a101cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Refreshing instance network info cache due to event network-changed-51563204-46d2-4b26-bfa3-a2dc0f43701a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:33:19 compute-0 nova_compute[259627]: 2025-10-14 09:33:19.497 2 DEBUG oslo_concurrency.lockutils [req-e636d05b-4cfe-4f36-b7cf-b814a9407bb5 req-73f43c63-53de-480a-953d-ca9155a101cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:33:19 compute-0 nova_compute[259627]: 2025-10-14 09:33:19.497 2 DEBUG oslo_concurrency.lockutils [req-e636d05b-4cfe-4f36-b7cf-b814a9407bb5 req-73f43c63-53de-480a-953d-ca9155a101cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:33:19 compute-0 nova_compute[259627]: 2025-10-14 09:33:19.497 2 DEBUG nova.network.neutron [req-e636d05b-4cfe-4f36-b7cf-b814a9407bb5 req-73f43c63-53de-480a-953d-ca9155a101cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Refreshing network info cache for port 51563204-46d2-4b26-bfa3-a2dc0f43701a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:33:19 compute-0 podman[405449]: 2025-10-14 09:33:19.507890764 +0000 UTC m=+0.235926090 container init 5d79f226dbce0e3a1ed2dd840424d9e5ddee6458faddddfa8691f48e6b802c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kirch, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:33:19 compute-0 podman[405449]: 2025-10-14 09:33:19.519311555 +0000 UTC m=+0.247346811 container start 5d79f226dbce0e3a1ed2dd840424d9e5ddee6458faddddfa8691f48e6b802c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kirch, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:33:19 compute-0 relaxed_kirch[405466]: 167 167
Oct 14 09:33:19 compute-0 systemd[1]: libpod-5d79f226dbce0e3a1ed2dd840424d9e5ddee6458faddddfa8691f48e6b802c84.scope: Deactivated successfully.
Oct 14 09:33:19 compute-0 podman[405449]: 2025-10-14 09:33:19.54803541 +0000 UTC m=+0.276070686 container attach 5d79f226dbce0e3a1ed2dd840424d9e5ddee6458faddddfa8691f48e6b802c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kirch, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 14 09:33:19 compute-0 podman[405449]: 2025-10-14 09:33:19.548992183 +0000 UTC m=+0.277027499 container died 5d79f226dbce0e3a1ed2dd840424d9e5ddee6458faddddfa8691f48e6b802c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:33:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-41649d140a39bfd7f067665e66b63eebad855bf6172c110a52f5a11b9b8818cf-merged.mount: Deactivated successfully.
Oct 14 09:33:19 compute-0 podman[405449]: 2025-10-14 09:33:19.694474333 +0000 UTC m=+0.422509599 container remove 5d79f226dbce0e3a1ed2dd840424d9e5ddee6458faddddfa8691f48e6b802c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kirch, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:33:19 compute-0 systemd[1]: libpod-conmon-5d79f226dbce0e3a1ed2dd840424d9e5ddee6458faddddfa8691f48e6b802c84.scope: Deactivated successfully.
Oct 14 09:33:19 compute-0 ceph-mon[74249]: pgmap v2405: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1003 KiB/s wr, 16 op/s
Oct 14 09:33:19 compute-0 podman[405493]: 2025-10-14 09:33:19.972946447 +0000 UTC m=+0.080069536 container create 92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:33:20 compute-0 podman[405493]: 2025-10-14 09:33:19.941605738 +0000 UTC m=+0.048728887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:33:20 compute-0 systemd[1]: Started libpod-conmon-92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357.scope.
Oct 14 09:33:20 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e954e74529623b608ac188e70e113996c1f67f069fc87f6d3689cd5b6c9a22b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e954e74529623b608ac188e70e113996c1f67f069fc87f6d3689cd5b6c9a22b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e954e74529623b608ac188e70e113996c1f67f069fc87f6d3689cd5b6c9a22b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e954e74529623b608ac188e70e113996c1f67f069fc87f6d3689cd5b6c9a22b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e954e74529623b608ac188e70e113996c1f67f069fc87f6d3689cd5b6c9a22b3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:33:20 compute-0 podman[405493]: 2025-10-14 09:33:20.169533561 +0000 UTC m=+0.276656660 container init 92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct 14 09:33:20 compute-0 podman[405493]: 2025-10-14 09:33:20.184994201 +0000 UTC m=+0.292117290 container start 92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:33:20 compute-0 podman[405493]: 2025-10-14 09:33:20.193692154 +0000 UTC m=+0.300815253 container attach 92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leavitt, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 09:33:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1003 KiB/s wr, 61 op/s
Oct 14 09:33:20 compute-0 nova_compute[259627]: 2025-10-14 09:33:20.620 2 DEBUG nova.network.neutron [req-e636d05b-4cfe-4f36-b7cf-b814a9407bb5 req-73f43c63-53de-480a-953d-ca9155a101cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updated VIF entry in instance network info cache for port 51563204-46d2-4b26-bfa3-a2dc0f43701a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:33:20 compute-0 nova_compute[259627]: 2025-10-14 09:33:20.622 2 DEBUG nova.network.neutron [req-e636d05b-4cfe-4f36-b7cf-b814a9407bb5 req-73f43c63-53de-480a-953d-ca9155a101cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updating instance_info_cache with network_info: [{"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:33:20 compute-0 nova_compute[259627]: 2025-10-14 09:33:20.646 2 DEBUG oslo_concurrency.lockutils [req-e636d05b-4cfe-4f36-b7cf-b814a9407bb5 req-73f43c63-53de-480a-953d-ca9155a101cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:33:20 compute-0 nova_compute[259627]: 2025-10-14 09:33:20.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:21 compute-0 eager_leavitt[405510]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:33:21 compute-0 eager_leavitt[405510]: --> relative data size: 1.0
Oct 14 09:33:21 compute-0 eager_leavitt[405510]: --> All data devices are unavailable
Oct 14 09:33:21 compute-0 systemd[1]: libpod-92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357.scope: Deactivated successfully.
Oct 14 09:33:21 compute-0 systemd[1]: libpod-92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357.scope: Consumed 1.187s CPU time.
Oct 14 09:33:21 compute-0 podman[405493]: 2025-10-14 09:33:21.422383626 +0000 UTC m=+1.529506715 container died 92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leavitt, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 09:33:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e954e74529623b608ac188e70e113996c1f67f069fc87f6d3689cd5b6c9a22b3-merged.mount: Deactivated successfully.
Oct 14 09:33:21 compute-0 podman[405493]: 2025-10-14 09:33:21.499725223 +0000 UTC m=+1.606848292 container remove 92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leavitt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 09:33:21 compute-0 systemd[1]: libpod-conmon-92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357.scope: Deactivated successfully.
Oct 14 09:33:21 compute-0 sudo[405386]: pam_unix(sudo:session): session closed for user root
Oct 14 09:33:21 compute-0 sudo[405549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:33:21 compute-0 sudo[405549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:33:21 compute-0 sudo[405549]: pam_unix(sudo:session): session closed for user root
Oct 14 09:33:21 compute-0 sudo[405574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:33:21 compute-0 sudo[405574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:33:21 compute-0 sudo[405574]: pam_unix(sudo:session): session closed for user root
Oct 14 09:33:21 compute-0 sudo[405599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:33:21 compute-0 sudo[405599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:33:21 compute-0 sudo[405599]: pam_unix(sudo:session): session closed for user root
Oct 14 09:33:21 compute-0 sudo[405624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:33:21 compute-0 sudo[405624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:33:21 compute-0 ceph-mon[74249]: pgmap v2406: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1003 KiB/s wr, 61 op/s
Oct 14 09:33:22 compute-0 podman[405690]: 2025-10-14 09:33:22.36920547 +0000 UTC m=+0.062000982 container create 8e63e07ed5a675b961d1e847931bfede0551265759135a89054ae5f5037fd7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 09:33:22 compute-0 systemd[1]: Started libpod-conmon-8e63e07ed5a675b961d1e847931bfede0551265759135a89054ae5f5037fd7a7.scope.
Oct 14 09:33:22 compute-0 podman[405690]: 2025-10-14 09:33:22.337916303 +0000 UTC m=+0.030711875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:33:22 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:33:22 compute-0 podman[405690]: 2025-10-14 09:33:22.457430425 +0000 UTC m=+0.150225937 container init 8e63e07ed5a675b961d1e847931bfede0551265759135a89054ae5f5037fd7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 09:33:22 compute-0 podman[405690]: 2025-10-14 09:33:22.468563059 +0000 UTC m=+0.161358541 container start 8e63e07ed5a675b961d1e847931bfede0551265759135a89054ae5f5037fd7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:33:22 compute-0 podman[405690]: 2025-10-14 09:33:22.473318675 +0000 UTC m=+0.166114257 container attach 8e63e07ed5a675b961d1e847931bfede0551265759135a89054ae5f5037fd7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 09:33:22 compute-0 elastic_feistel[405706]: 167 167
Oct 14 09:33:22 compute-0 systemd[1]: libpod-8e63e07ed5a675b961d1e847931bfede0551265759135a89054ae5f5037fd7a7.scope: Deactivated successfully.
Oct 14 09:33:22 compute-0 podman[405690]: 2025-10-14 09:33:22.476502354 +0000 UTC m=+0.169297866 container died 8e63e07ed5a675b961d1e847931bfede0551265759135a89054ae5f5037fd7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 09:33:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 14 09:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-6840e6789d229a28ba1d5288ddd8d57e9aed1c781deab2c586db8ef52c9ce0a3-merged.mount: Deactivated successfully.
Oct 14 09:33:22 compute-0 podman[405690]: 2025-10-14 09:33:22.529646848 +0000 UTC m=+0.222442340 container remove 8e63e07ed5a675b961d1e847931bfede0551265759135a89054ae5f5037fd7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 09:33:22 compute-0 systemd[1]: libpod-conmon-8e63e07ed5a675b961d1e847931bfede0551265759135a89054ae5f5037fd7a7.scope: Deactivated successfully.
Oct 14 09:33:22 compute-0 nova_compute[259627]: 2025-10-14 09:33:22.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:22 compute-0 podman[405730]: 2025-10-14 09:33:22.734442173 +0000 UTC m=+0.071994537 container create 38dfe904146d9eabf4c44da5d9f14aac0eb71d38524ce7874bdb846303a1ecef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 14 09:33:22 compute-0 systemd[1]: Started libpod-conmon-38dfe904146d9eabf4c44da5d9f14aac0eb71d38524ce7874bdb846303a1ecef.scope.
Oct 14 09:33:22 compute-0 podman[405730]: 2025-10-14 09:33:22.704370896 +0000 UTC m=+0.041923270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:33:22 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c756110a0b755972a05257486585687b8ba4edccf1221b145cbbeee6a10021/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c756110a0b755972a05257486585687b8ba4edccf1221b145cbbeee6a10021/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c756110a0b755972a05257486585687b8ba4edccf1221b145cbbeee6a10021/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c756110a0b755972a05257486585687b8ba4edccf1221b145cbbeee6a10021/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:33:22 compute-0 podman[405730]: 2025-10-14 09:33:22.856852597 +0000 UTC m=+0.194404991 container init 38dfe904146d9eabf4c44da5d9f14aac0eb71d38524ce7874bdb846303a1ecef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:33:22 compute-0 podman[405730]: 2025-10-14 09:33:22.869123379 +0000 UTC m=+0.206675743 container start 38dfe904146d9eabf4c44da5d9f14aac0eb71d38524ce7874bdb846303a1ecef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:33:22 compute-0 podman[405730]: 2025-10-14 09:33:22.872206634 +0000 UTC m=+0.209759008 container attach 38dfe904146d9eabf4c44da5d9f14aac0eb71d38524ce7874bdb846303a1ecef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:33:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]: {
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:     "0": [
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:         {
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "devices": [
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "/dev/loop3"
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             ],
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "lv_name": "ceph_lv0",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "lv_size": "21470642176",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "name": "ceph_lv0",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "tags": {
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.cluster_name": "ceph",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.crush_device_class": "",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.encrypted": "0",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.osd_id": "0",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.type": "block",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.vdo": "0"
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             },
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "type": "block",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "vg_name": "ceph_vg0"
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:         }
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:     ],
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:     "1": [
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:         {
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "devices": [
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "/dev/loop4"
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             ],
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "lv_name": "ceph_lv1",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "lv_size": "21470642176",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "name": "ceph_lv1",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "tags": {
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.cluster_name": "ceph",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.crush_device_class": "",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.encrypted": "0",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.osd_id": "1",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.type": "block",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.vdo": "0"
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             },
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "type": "block",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "vg_name": "ceph_vg1"
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:         }
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:     ],
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:     "2": [
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:         {
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "devices": [
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "/dev/loop5"
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             ],
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "lv_name": "ceph_lv2",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "lv_size": "21470642176",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "name": "ceph_lv2",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "tags": {
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.cluster_name": "ceph",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.crush_device_class": "",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.encrypted": "0",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.osd_id": "2",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.type": "block",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:                 "ceph.vdo": "0"
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             },
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "type": "block",
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:             "vg_name": "ceph_vg2"
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:         }
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]:     ]
Oct 14 09:33:23 compute-0 suspicious_feistel[405746]: }
Oct 14 09:33:23 compute-0 systemd[1]: libpod-38dfe904146d9eabf4c44da5d9f14aac0eb71d38524ce7874bdb846303a1ecef.scope: Deactivated successfully.
Oct 14 09:33:23 compute-0 podman[405755]: 2025-10-14 09:33:23.732854075 +0000 UTC m=+0.047230950 container died 38dfe904146d9eabf4c44da5d9f14aac0eb71d38524ce7874bdb846303a1ecef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:33:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5c756110a0b755972a05257486585687b8ba4edccf1221b145cbbeee6a10021-merged.mount: Deactivated successfully.
Oct 14 09:33:23 compute-0 podman[405755]: 2025-10-14 09:33:23.792816236 +0000 UTC m=+0.107193071 container remove 38dfe904146d9eabf4c44da5d9f14aac0eb71d38524ce7874bdb846303a1ecef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:33:23 compute-0 systemd[1]: libpod-conmon-38dfe904146d9eabf4c44da5d9f14aac0eb71d38524ce7874bdb846303a1ecef.scope: Deactivated successfully.
Oct 14 09:33:23 compute-0 sudo[405624]: pam_unix(sudo:session): session closed for user root
Oct 14 09:33:23 compute-0 sudo[405769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:33:23 compute-0 sudo[405769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:33:23 compute-0 sudo[405769]: pam_unix(sudo:session): session closed for user root
Oct 14 09:33:23 compute-0 ceph-mon[74249]: pgmap v2407: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 14 09:33:23 compute-0 sudo[405794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:33:23 compute-0 sudo[405794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:33:23 compute-0 sudo[405794]: pam_unix(sudo:session): session closed for user root
Oct 14 09:33:24 compute-0 sudo[405819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:33:24 compute-0 sudo[405819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:33:24 compute-0 sudo[405819]: pam_unix(sudo:session): session closed for user root
Oct 14 09:33:24 compute-0 sudo[405844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:33:24 compute-0 sudo[405844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:33:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:33:24 compute-0 podman[405911]: 2025-10-14 09:33:24.577451251 +0000 UTC m=+0.049368992 container create ea22c19f676369784f84f37e9def840be80f63f404339561db956367605cc98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hermann, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 09:33:24 compute-0 systemd[1]: Started libpod-conmon-ea22c19f676369784f84f37e9def840be80f63f404339561db956367605cc98d.scope.
Oct 14 09:33:24 compute-0 podman[405911]: 2025-10-14 09:33:24.549154687 +0000 UTC m=+0.021072458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:33:24 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:33:24 compute-0 podman[405911]: 2025-10-14 09:33:24.678594633 +0000 UTC m=+0.150512454 container init ea22c19f676369784f84f37e9def840be80f63f404339561db956367605cc98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hermann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 09:33:24 compute-0 podman[405911]: 2025-10-14 09:33:24.689756477 +0000 UTC m=+0.161674238 container start ea22c19f676369784f84f37e9def840be80f63f404339561db956367605cc98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hermann, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 09:33:24 compute-0 podman[405911]: 2025-10-14 09:33:24.694118014 +0000 UTC m=+0.166035765 container attach ea22c19f676369784f84f37e9def840be80f63f404339561db956367605cc98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 14 09:33:24 compute-0 inspiring_hermann[405927]: 167 167
Oct 14 09:33:24 compute-0 systemd[1]: libpod-ea22c19f676369784f84f37e9def840be80f63f404339561db956367605cc98d.scope: Deactivated successfully.
Oct 14 09:33:24 compute-0 podman[405911]: 2025-10-14 09:33:24.696217666 +0000 UTC m=+0.168135397 container died ea22c19f676369784f84f37e9def840be80f63f404339561db956367605cc98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 09:33:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-356f62557e60a213d744394ade3ea97c99228890944dac8e54d36464abe526b6-merged.mount: Deactivated successfully.
Oct 14 09:33:24 compute-0 podman[405911]: 2025-10-14 09:33:24.74405995 +0000 UTC m=+0.215977711 container remove ea22c19f676369784f84f37e9def840be80f63f404339561db956367605cc98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hermann, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 09:33:24 compute-0 systemd[1]: libpod-conmon-ea22c19f676369784f84f37e9def840be80f63f404339561db956367605cc98d.scope: Deactivated successfully.
Oct 14 09:33:24 compute-0 podman[405949]: 2025-10-14 09:33:24.931857638 +0000 UTC m=+0.045942838 container create 20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:33:24 compute-0 systemd[1]: Started libpod-conmon-20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe.scope.
Oct 14 09:33:25 compute-0 podman[405949]: 2025-10-14 09:33:24.909266844 +0000 UTC m=+0.023352064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:33:25 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f119410268c6148e20f0edc33baff31f3b7dd3df04f0814907d89fd8441a86b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f119410268c6148e20f0edc33baff31f3b7dd3df04f0814907d89fd8441a86b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f119410268c6148e20f0edc33baff31f3b7dd3df04f0814907d89fd8441a86b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f119410268c6148e20f0edc33baff31f3b7dd3df04f0814907d89fd8441a86b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:33:25 compute-0 podman[405949]: 2025-10-14 09:33:25.039678083 +0000 UTC m=+0.153763373 container init 20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:33:25 compute-0 podman[405949]: 2025-10-14 09:33:25.048186842 +0000 UTC m=+0.162272022 container start 20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 09:33:25 compute-0 podman[405949]: 2025-10-14 09:33:25.052349924 +0000 UTC m=+0.166435134 container attach 20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:33:25 compute-0 nova_compute[259627]: 2025-10-14 09:33:25.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:25 compute-0 ceph-mon[74249]: pgmap v2408: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:33:26 compute-0 brave_fermi[405968]: {
Oct 14 09:33:26 compute-0 brave_fermi[405968]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:33:26 compute-0 brave_fermi[405968]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:33:26 compute-0 brave_fermi[405968]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:33:26 compute-0 brave_fermi[405968]:         "osd_id": 2,
Oct 14 09:33:26 compute-0 brave_fermi[405968]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:33:26 compute-0 brave_fermi[405968]:         "type": "bluestore"
Oct 14 09:33:26 compute-0 brave_fermi[405968]:     },
Oct 14 09:33:26 compute-0 brave_fermi[405968]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:33:26 compute-0 brave_fermi[405968]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:33:26 compute-0 brave_fermi[405968]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:33:26 compute-0 brave_fermi[405968]:         "osd_id": 1,
Oct 14 09:33:26 compute-0 brave_fermi[405968]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:33:26 compute-0 brave_fermi[405968]:         "type": "bluestore"
Oct 14 09:33:26 compute-0 brave_fermi[405968]:     },
Oct 14 09:33:26 compute-0 brave_fermi[405968]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:33:26 compute-0 brave_fermi[405968]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:33:26 compute-0 brave_fermi[405968]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:33:26 compute-0 brave_fermi[405968]:         "osd_id": 0,
Oct 14 09:33:26 compute-0 brave_fermi[405968]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:33:26 compute-0 brave_fermi[405968]:         "type": "bluestore"
Oct 14 09:33:26 compute-0 brave_fermi[405968]:     }
Oct 14 09:33:26 compute-0 brave_fermi[405968]: }
Oct 14 09:33:26 compute-0 systemd[1]: libpod-20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe.scope: Deactivated successfully.
Oct 14 09:33:26 compute-0 systemd[1]: libpod-20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe.scope: Consumed 1.039s CPU time.
Oct 14 09:33:26 compute-0 podman[405949]: 2025-10-14 09:33:26.099864051 +0000 UTC m=+1.213949271 container died 20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 09:33:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f119410268c6148e20f0edc33baff31f3b7dd3df04f0814907d89fd8441a86b-merged.mount: Deactivated successfully.
Oct 14 09:33:26 compute-0 podman[405949]: 2025-10-14 09:33:26.176234155 +0000 UTC m=+1.290319335 container remove 20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:33:26 compute-0 systemd[1]: libpod-conmon-20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe.scope: Deactivated successfully.
Oct 14 09:33:26 compute-0 sudo[405844]: pam_unix(sudo:session): session closed for user root
Oct 14 09:33:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:33:26 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:33:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:33:26 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:33:26 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 8b09d45e-7094-4638-b656-928b2ee5e756 does not exist
Oct 14 09:33:26 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev ea55e277-664b-4431-8d95-55260296d7bb does not exist
Oct 14 09:33:26 compute-0 sudo[406014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:33:26 compute-0 sudo[406014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:33:26 compute-0 sudo[406014]: pam_unix(sudo:session): session closed for user root
Oct 14 09:33:26 compute-0 sudo[406039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:33:26 compute-0 sudo[406039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:33:26 compute-0 sudo[406039]: pam_unix(sudo:session): session closed for user root
Oct 14 09:33:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 14 09:33:27 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct 14 09:33:27 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:33:27 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:33:27 compute-0 ceph-mon[74249]: pgmap v2409: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 14 09:33:27 compute-0 nova_compute[259627]: 2025-10-14 09:33:27.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:27 compute-0 ovn_controller[152662]: 2025-10-14T09:33:27Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8e:aa:2c 10.100.0.13
Oct 14 09:33:27 compute-0 ovn_controller[152662]: 2025-10-14T09:33:27Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:aa:2c 10.100.0.13
Oct 14 09:33:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:33:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 14 09:33:29 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:33:29 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 42K writes, 170K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 42K writes, 15K syncs, 2.74 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5941 writes, 25K keys, 5941 commit groups, 1.0 writes per commit group, ingest: 27.85 MB, 0.05 MB/s
                                           Interval WAL: 5941 writes, 2282 syncs, 2.60 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:33:29 compute-0 ceph-mon[74249]: pgmap v2410: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 14 09:33:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 120 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Oct 14 09:33:30 compute-0 podman[406065]: 2025-10-14 09:33:30.661225426 +0000 UTC m=+0.071725832 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 09:33:30 compute-0 podman[406064]: 2025-10-14 09:33:30.700088089 +0000 UTC m=+0.113922066 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:33:30 compute-0 nova_compute[259627]: 2025-10-14 09:33:30.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:31 compute-0 ceph-mon[74249]: pgmap v2411: 305 pgs: 305 active+clean; 120 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Oct 14 09:33:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 932 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Oct 14 09:33:32 compute-0 nova_compute[259627]: 2025-10-14 09:33:32.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:33:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:33:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:33:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:33:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:33:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:33:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:33:32
Oct 14 09:33:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:33:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:33:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.log', '.mgr', '.rgw.root', 'vms', 'volumes', 'default.rgw.control']
Oct 14 09:33:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:33:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:33:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:33:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:33:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:33:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:33:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:33:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:33:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:33:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:33:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:33:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:33:33 compute-0 ceph-mon[74249]: pgmap v2412: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 932 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Oct 14 09:33:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:33:34 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:33:34 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 42K writes, 164K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s
                                           Cumulative WAL: 42K writes, 15K syncs, 2.73 writes per sync, written: 0.16 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4857 writes, 19K keys, 4857 commit groups, 1.0 writes per commit group, ingest: 20.94 MB, 0.03 MB/s
                                           Interval WAL: 4857 writes, 1947 syncs, 2.49 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:33:35 compute-0 ceph-mon[74249]: pgmap v2413: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:33:35 compute-0 nova_compute[259627]: 2025-10-14 09:33:35.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:33:37 compute-0 nova_compute[259627]: 2025-10-14 09:33:37.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:37 compute-0 ceph-mon[74249]: pgmap v2414: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:33:37 compute-0 nova_compute[259627]: 2025-10-14 09:33:37.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:33:37 compute-0 nova_compute[259627]: 2025-10-14 09:33:37.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 09:33:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:33:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:33:39 compute-0 ceph-mon[74249]: pgmap v2415: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:33:39 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:33:39 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 35K writes, 139K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.74 writes per sync, written: 0.13 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4462 writes, 17K keys, 4462 commit groups, 1.0 writes per commit group, ingest: 20.06 MB, 0.03 MB/s
                                           Interval WAL: 4462 writes, 1787 syncs, 2.50 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:33:40 compute-0 nova_compute[259627]: 2025-10-14 09:33:40.473 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:40 compute-0 nova_compute[259627]: 2025-10-14 09:33:40.473 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:40 compute-0 nova_compute[259627]: 2025-10-14 09:33:40.491 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:33:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:33:40 compute-0 nova_compute[259627]: 2025-10-14 09:33:40.560 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:40 compute-0 nova_compute[259627]: 2025-10-14 09:33:40.560 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:40 compute-0 nova_compute[259627]: 2025-10-14 09:33:40.568 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:33:40 compute-0 nova_compute[259627]: 2025-10-14 09:33:40.568 2 INFO nova.compute.claims [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:33:40 compute-0 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 09:33:40 compute-0 nova_compute[259627]: 2025-10-14 09:33:40.680 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:33:40 compute-0 nova_compute[259627]: 2025-10-14 09:33:40.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:33:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/316624555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.131 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.139 2 DEBUG nova.compute.provider_tree [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.155 2 DEBUG nova.scheduler.client.report [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.184 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.185 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.243 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.243 2 DEBUG nova.network.neutron [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.267 2 INFO nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.299 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.441 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.443 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.443 2 INFO nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Creating image(s)
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.478 2 DEBUG nova.storage.rbd_utils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.514 2 DEBUG nova.storage.rbd_utils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.550 2 DEBUG nova.storage.rbd_utils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.556 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:33:41 compute-0 ceph-mon[74249]: pgmap v2416: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:33:41 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/316624555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.660 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.661 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.663 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.663 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.697 2 DEBUG nova.storage.rbd_utils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:33:41 compute-0 nova_compute[259627]: 2025-10-14 09:33:41.702 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:33:42 compute-0 nova_compute[259627]: 2025-10-14 09:33:42.017 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:33:42 compute-0 nova_compute[259627]: 2025-10-14 09:33:42.053 2 DEBUG nova.policy [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:33:42 compute-0 nova_compute[259627]: 2025-10-14 09:33:42.095 2 DEBUG nova.storage.rbd_utils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:33:42 compute-0 nova_compute[259627]: 2025-10-14 09:33:42.209 2 DEBUG nova.objects.instance [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid 1a22f837-6095-4ccc-8e71-79e69b15bc5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:33:42 compute-0 nova_compute[259627]: 2025-10-14 09:33:42.235 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:33:42 compute-0 nova_compute[259627]: 2025-10-14 09:33:42.236 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Ensure instance console log exists: /var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:33:42 compute-0 nova_compute[259627]: 2025-10-14 09:33:42.237 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:42 compute-0 nova_compute[259627]: 2025-10-14 09:33:42.237 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:42 compute-0 nova_compute[259627]: 2025-10-14 09:33:42.238 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 77 KiB/s wr, 10 op/s
Oct 14 09:33:42 compute-0 nova_compute[259627]: 2025-10-14 09:33:42.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007595910049163248 of space, bias 1.0, pg target 0.22787730147489746 quantized to 32 (current 32)
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:33:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:33:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:43.528 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:33:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:43.529 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:33:43 compute-0 nova_compute[259627]: 2025-10-14 09:33:43.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:43 compute-0 nova_compute[259627]: 2025-10-14 09:33:43.630 2 DEBUG nova.network.neutron [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Successfully created port: b8b13ccf-81a6-410e-a209-ce58758d66f4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:33:43 compute-0 ceph-mon[74249]: pgmap v2417: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 77 KiB/s wr, 10 op/s
Oct 14 09:33:43 compute-0 nova_compute[259627]: 2025-10-14 09:33:43.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:33:44 compute-0 nova_compute[259627]: 2025-10-14 09:33:44.201 2 DEBUG nova.network.neutron [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Successfully created port: bf1fcf69-c1da-4a76-8005-54c5457a915a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:33:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct 14 09:33:44 compute-0 nova_compute[259627]: 2025-10-14 09:33:44.757 2 DEBUG nova.network.neutron [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Successfully updated port: b8b13ccf-81a6-410e-a209-ce58758d66f4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:33:44 compute-0 nova_compute[259627]: 2025-10-14 09:33:44.860 2 DEBUG nova.compute.manager [req-12363ac1-7176-4687-a7f7-ab4a507b96a7 req-1e9b3b97-b5d1-42ed-9196-6ac3590a2af1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-changed-b8b13ccf-81a6-410e-a209-ce58758d66f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:33:44 compute-0 nova_compute[259627]: 2025-10-14 09:33:44.861 2 DEBUG nova.compute.manager [req-12363ac1-7176-4687-a7f7-ab4a507b96a7 req-1e9b3b97-b5d1-42ed-9196-6ac3590a2af1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Refreshing instance network info cache due to event network-changed-b8b13ccf-81a6-410e-a209-ce58758d66f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:33:44 compute-0 nova_compute[259627]: 2025-10-14 09:33:44.862 2 DEBUG oslo_concurrency.lockutils [req-12363ac1-7176-4687-a7f7-ab4a507b96a7 req-1e9b3b97-b5d1-42ed-9196-6ac3590a2af1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:33:44 compute-0 nova_compute[259627]: 2025-10-14 09:33:44.862 2 DEBUG oslo_concurrency.lockutils [req-12363ac1-7176-4687-a7f7-ab4a507b96a7 req-1e9b3b97-b5d1-42ed-9196-6ac3590a2af1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:33:44 compute-0 nova_compute[259627]: 2025-10-14 09:33:44.862 2 DEBUG nova.network.neutron [req-12363ac1-7176-4687-a7f7-ab4a507b96a7 req-1e9b3b97-b5d1-42ed-9196-6ac3590a2af1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Refreshing network info cache for port b8b13ccf-81a6-410e-a209-ce58758d66f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:33:45 compute-0 nova_compute[259627]: 2025-10-14 09:33:45.038 2 DEBUG nova.network.neutron [req-12363ac1-7176-4687-a7f7-ab4a507b96a7 req-1e9b3b97-b5d1-42ed-9196-6ac3590a2af1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:33:45 compute-0 nova_compute[259627]: 2025-10-14 09:33:45.298 2 DEBUG nova.network.neutron [req-12363ac1-7176-4687-a7f7-ab4a507b96a7 req-1e9b3b97-b5d1-42ed-9196-6ac3590a2af1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:33:45 compute-0 nova_compute[259627]: 2025-10-14 09:33:45.336 2 DEBUG nova.network.neutron [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Successfully updated port: bf1fcf69-c1da-4a76-8005-54c5457a915a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:33:45 compute-0 nova_compute[259627]: 2025-10-14 09:33:45.337 2 DEBUG oslo_concurrency.lockutils [req-12363ac1-7176-4687-a7f7-ab4a507b96a7 req-1e9b3b97-b5d1-42ed-9196-6ac3590a2af1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:33:45 compute-0 nova_compute[259627]: 2025-10-14 09:33:45.356 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:33:45 compute-0 nova_compute[259627]: 2025-10-14 09:33:45.357 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:33:45 compute-0 nova_compute[259627]: 2025-10-14 09:33:45.357 2 DEBUG nova.network.neutron [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:33:45 compute-0 nova_compute[259627]: 2025-10-14 09:33:45.480 2 DEBUG nova.network.neutron [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:33:45 compute-0 ceph-mon[74249]: pgmap v2418: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct 14 09:33:46 compute-0 nova_compute[259627]: 2025-10-14 09:33:46.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 14 09:33:46 compute-0 nova_compute[259627]: 2025-10-14 09:33:46.964 2 DEBUG nova.compute.manager [req-fa044821-a20f-4444-9b96-f68f43b01e45 req-c735924c-98c0-42bf-984f-604ff0203a9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-changed-bf1fcf69-c1da-4a76-8005-54c5457a915a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:33:46 compute-0 nova_compute[259627]: 2025-10-14 09:33:46.965 2 DEBUG nova.compute.manager [req-fa044821-a20f-4444-9b96-f68f43b01e45 req-c735924c-98c0-42bf-984f-604ff0203a9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Refreshing instance network info cache due to event network-changed-bf1fcf69-c1da-4a76-8005-54c5457a915a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:33:46 compute-0 nova_compute[259627]: 2025-10-14 09:33:46.965 2 DEBUG oslo_concurrency.lockutils [req-fa044821-a20f-4444-9b96-f68f43b01e45 req-c735924c-98c0-42bf-984f-604ff0203a9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:33:47 compute-0 nova_compute[259627]: 2025-10-14 09:33:47.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:47 compute-0 ceph-mon[74249]: pgmap v2419: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 14 09:33:47 compute-0 nova_compute[259627]: 2025-10-14 09:33:47.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.005 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.005 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.006 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.006 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.007 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:33:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.204 2 DEBUG nova.network.neutron [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updating instance_info_cache with network_info: [{"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.232 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.233 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Instance network_info: |[{"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.234 2 DEBUG oslo_concurrency.lockutils [req-fa044821-a20f-4444-9b96-f68f43b01e45 req-c735924c-98c0-42bf-984f-604ff0203a9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.235 2 DEBUG nova.network.neutron [req-fa044821-a20f-4444-9b96-f68f43b01e45 req-c735924c-98c0-42bf-984f-604ff0203a9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Refreshing network info cache for port bf1fcf69-c1da-4a76-8005-54c5457a915a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.242 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Start _get_guest_xml network_info=[{"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.249 2 WARNING nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.258 2 DEBUG nova.virt.libvirt.host [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.259 2 DEBUG nova.virt.libvirt.host [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.271 2 DEBUG nova.virt.libvirt.host [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.272 2 DEBUG nova.virt.libvirt.host [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.273 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.273 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.274 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.274 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.275 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.275 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.276 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.276 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.277 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.277 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.278 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.278 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.285 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:33:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:33:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3152093417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.493 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:33:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:33:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:48.531 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.585 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.586 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:33:48 compute-0 podman[406338]: 2025-10-14 09:33:48.644753328 +0000 UTC m=+0.078903187 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid)
Oct 14 09:33:48 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3152093417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:33:48 compute-0 podman[406337]: 2025-10-14 09:33:48.665119448 +0000 UTC m=+0.103330997 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 09:33:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:33:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3086076806' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.804 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.833 2 DEBUG nova.storage.rbd_utils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.837 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.903 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.904 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3400MB free_disk=59.92195129394531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.905 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:48 compute-0 nova_compute[259627]: 2025-10-14 09:33:48.905 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.099 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance aa3e17be-c995-4cab-b209-1eadaaff1634 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.099 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 1a22f837-6095-4ccc-8e71-79e69b15bc5b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.100 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.100 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:33:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:33:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/244745326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.275 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.277 2 DEBUG nova.virt.libvirt.vif [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:33:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-521253734',display_name='tempest-TestGettingAddress-server-521253734',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-521253734',id=143,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-93iw5j7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:33:41Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=1a22f837-6095-4ccc-8e71-79e69b15bc5b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.278 2 DEBUG nova.network.os_vif_util [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.279 2 DEBUG nova.network.os_vif_util [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:91:ad,bridge_name='br-int',has_traffic_filtering=True,id=b8b13ccf-81a6-410e-a209-ce58758d66f4,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b13ccf-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.281 2 DEBUG nova.virt.libvirt.vif [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:33:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-521253734',display_name='tempest-TestGettingAddress-server-521253734',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-521253734',id=143,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-93iw5j7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:33:41Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=1a22f837-6095-4ccc-8e71-79e69b15bc5b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.281 2 DEBUG nova.network.os_vif_util [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.282 2 DEBUG nova.network.os_vif_util [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:ef:18,bridge_name='br-int',has_traffic_filtering=True,id=bf1fcf69-c1da-4a76-8005-54c5457a915a,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1fcf69-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.284 2 DEBUG nova.objects.instance [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 1a22f837-6095-4ccc-8e71-79e69b15bc5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.303 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:33:49 compute-0 nova_compute[259627]:   <uuid>1a22f837-6095-4ccc-8e71-79e69b15bc5b</uuid>
Oct 14 09:33:49 compute-0 nova_compute[259627]:   <name>instance-0000008f</name>
Oct 14 09:33:49 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:33:49 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:33:49 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <nova:name>tempest-TestGettingAddress-server-521253734</nova:name>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:33:48</nova:creationTime>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:33:49 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:33:49 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:33:49 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:33:49 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:33:49 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:33:49 compute-0 nova_compute[259627]:         <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 09:33:49 compute-0 nova_compute[259627]:         <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:33:49 compute-0 nova_compute[259627]:         <nova:port uuid="b8b13ccf-81a6-410e-a209-ce58758d66f4">
Oct 14 09:33:49 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:33:49 compute-0 nova_compute[259627]:         <nova:port uuid="bf1fcf69-c1da-4a76-8005-54c5457a915a">
Oct 14 09:33:49 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe61:ef18" ipVersion="6"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe61:ef18" ipVersion="6"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:33:49 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:33:49 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <system>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <entry name="serial">1a22f837-6095-4ccc-8e71-79e69b15bc5b</entry>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <entry name="uuid">1a22f837-6095-4ccc-8e71-79e69b15bc5b</entry>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     </system>
Oct 14 09:33:49 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:33:49 compute-0 nova_compute[259627]:   <os>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:   </os>
Oct 14 09:33:49 compute-0 nova_compute[259627]:   <features>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:   </features>
Oct 14 09:33:49 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:33:49 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:33:49 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk">
Oct 14 09:33:49 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       </source>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:33:49 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk.config">
Oct 14 09:33:49 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       </source>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:33:49 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:c0:91:ad"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <target dev="tapb8b13ccf-81"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:61:ef:18"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <target dev="tapbf1fcf69-c1"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b/console.log" append="off"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <video>
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     </video>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:33:49 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:33:49 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:33:49 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:33:49 compute-0 nova_compute[259627]: </domain>
Oct 14 09:33:49 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.304 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Preparing to wait for external event network-vif-plugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.305 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.305 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.305 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.306 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Preparing to wait for external event network-vif-plugged-bf1fcf69-c1da-4a76-8005-54c5457a915a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.306 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.307 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.307 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.308 2 DEBUG nova.virt.libvirt.vif [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:33:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-521253734',display_name='tempest-TestGettingAddress-server-521253734',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-521253734',id=143,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-93iw5j7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:33:41Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=1a22f837-6095-4ccc-8e71-79e69b15bc5b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.308 2 DEBUG nova.network.os_vif_util [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.309 2 DEBUG nova.network.os_vif_util [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:91:ad,bridge_name='br-int',has_traffic_filtering=True,id=b8b13ccf-81a6-410e-a209-ce58758d66f4,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b13ccf-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.310 2 DEBUG os_vif [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:91:ad,bridge_name='br-int',has_traffic_filtering=True,id=b8b13ccf-81a6-410e-a209-ce58758d66f4,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b13ccf-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.311 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.312 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.314 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.357 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8b13ccf-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.359 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb8b13ccf-81, col_values=(('external_ids', {'iface-id': 'b8b13ccf-81a6-410e-a209-ce58758d66f4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:91:ad', 'vm-uuid': '1a22f837-6095-4ccc-8e71-79e69b15bc5b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:49 compute-0 NetworkManager[44885]: <info>  [1760434429.3638] manager: (tapb8b13ccf-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/627)
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.372 2 INFO os_vif [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:91:ad,bridge_name='br-int',has_traffic_filtering=True,id=b8b13ccf-81a6-410e-a209-ce58758d66f4,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b13ccf-81')
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.374 2 DEBUG nova.virt.libvirt.vif [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:33:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-521253734',display_name='tempest-TestGettingAddress-server-521253734',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-521253734',id=143,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-93iw5j7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:33:41Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=1a22f837-6095-4ccc-8e71-79e69b15bc5b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.375 2 DEBUG nova.network.os_vif_util [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.377 2 DEBUG nova.network.os_vif_util [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:ef:18,bridge_name='br-int',has_traffic_filtering=True,id=bf1fcf69-c1da-4a76-8005-54c5457a915a,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1fcf69-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.378 2 DEBUG os_vif [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:ef:18,bridge_name='br-int',has_traffic_filtering=True,id=bf1fcf69-c1da-4a76-8005-54c5457a915a,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1fcf69-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.379 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.380 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.383 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf1fcf69-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf1fcf69-c1, col_values=(('external_ids', {'iface-id': 'bf1fcf69-c1da-4a76-8005-54c5457a915a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:ef:18', 'vm-uuid': '1a22f837-6095-4ccc-8e71-79e69b15bc5b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:49 compute-0 NetworkManager[44885]: <info>  [1760434429.3868] manager: (tapbf1fcf69-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/628)
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.397 2 INFO os_vif [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:ef:18,bridge_name='br-int',has_traffic_filtering=True,id=bf1fcf69-c1da-4a76-8005-54c5457a915a,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1fcf69-c1')
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.473 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.475 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.476 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:c0:91:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.476 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:61:ef:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.478 2 INFO nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Using config drive
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.514 2 DEBUG nova.storage.rbd_utils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:33:49 compute-0 ceph-mon[74249]: pgmap v2420: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:33:49 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3086076806' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:33:49 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/244745326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:33:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:33:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/436293421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.754 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.761 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.781 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.821 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.822 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:49 compute-0 nova_compute[259627]: 2025-10-14 09:33:49.992 2 INFO nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Creating config drive at /var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b/disk.config
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.003 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvo5c6258 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.175 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvo5c6258" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.219 2 DEBUG nova.storage.rbd_utils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.225 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b/disk.config 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.340 2 DEBUG nova.network.neutron [req-fa044821-a20f-4444-9b96-f68f43b01e45 req-c735924c-98c0-42bf-984f-604ff0203a9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updated VIF entry in instance network info cache for port bf1fcf69-c1da-4a76-8005-54c5457a915a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.341 2 DEBUG nova.network.neutron [req-fa044821-a20f-4444-9b96-f68f43b01e45 req-c735924c-98c0-42bf-984f-604ff0203a9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updating instance_info_cache with network_info: [{"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.370 2 DEBUG oslo_concurrency.lockutils [req-fa044821-a20f-4444-9b96-f68f43b01e45 req-c735924c-98c0-42bf-984f-604ff0203a9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.445 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b/disk.config 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.446 2 INFO nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Deleting local config drive /var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b/disk.config because it was imported into RBD.
Oct 14 09:33:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 14 09:33:50 compute-0 NetworkManager[44885]: <info>  [1760434430.5305] manager: (tapb8b13ccf-81): new Tun device (/org/freedesktop/NetworkManager/Devices/629)
Oct 14 09:33:50 compute-0 kernel: tapb8b13ccf-81: entered promiscuous mode
Oct 14 09:33:50 compute-0 ovn_controller[152662]: 2025-10-14T09:33:50Z|01539|binding|INFO|Claiming lport b8b13ccf-81a6-410e-a209-ce58758d66f4 for this chassis.
Oct 14 09:33:50 compute-0 ovn_controller[152662]: 2025-10-14T09:33:50Z|01540|binding|INFO|b8b13ccf-81a6-410e-a209-ce58758d66f4: Claiming fa:16:3e:c0:91:ad 10.100.0.3
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.550 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:91:ad 10.100.0.3'], port_security=['fa:16:3e:c0:91:ad 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1a22f837-6095-4ccc-8e71-79e69b15bc5b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0a1ad0e-ac41-4d92-be07-56e1d94d37c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02fc600f-219c-48ab-94ed-7d3694dfd14e, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b8b13ccf-81a6-410e-a209-ce58758d66f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.552 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b8b13ccf-81a6-410e-a209-ce58758d66f4 in datapath 0ec9546c-0acc-437f-9f6e-7db1743faf53 bound to our chassis
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.554 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ec9546c-0acc-437f-9f6e-7db1743faf53
Oct 14 09:33:50 compute-0 NetworkManager[44885]: <info>  [1760434430.5591] manager: (tapbf1fcf69-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/630)
Oct 14 09:33:50 compute-0 kernel: tapbf1fcf69-c1: entered promiscuous mode
Oct 14 09:33:50 compute-0 ovn_controller[152662]: 2025-10-14T09:33:50Z|01541|binding|INFO|Setting lport b8b13ccf-81a6-410e-a209-ce58758d66f4 ovn-installed in OVS
Oct 14 09:33:50 compute-0 ovn_controller[152662]: 2025-10-14T09:33:50Z|01542|binding|INFO|Setting lport b8b13ccf-81a6-410e-a209-ce58758d66f4 up in Southbound
Oct 14 09:33:50 compute-0 ovn_controller[152662]: 2025-10-14T09:33:50Z|01543|if_status|INFO|Dropped 1 log messages in last 123 seconds (most recently, 123 seconds ago) due to excessive rate
Oct 14 09:33:50 compute-0 ovn_controller[152662]: 2025-10-14T09:33:50Z|01544|if_status|INFO|Not updating pb chassis for bf1fcf69-c1da-4a76-8005-54c5457a915a now as sb is readonly
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:50 compute-0 ovn_controller[152662]: 2025-10-14T09:33:50Z|01545|binding|INFO|Claiming lport bf1fcf69-c1da-4a76-8005-54c5457a915a for this chassis.
Oct 14 09:33:50 compute-0 ovn_controller[152662]: 2025-10-14T09:33:50Z|01546|binding|INFO|bf1fcf69-c1da-4a76-8005-54c5457a915a: Claiming fa:16:3e:61:ef:18 2001:db8:0:1:f816:3eff:fe61:ef18 2001:db8::f816:3eff:fe61:ef18
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.581 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[32451339-8aaf-407d-a1b3-dbeeef67d4e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:50 compute-0 systemd-udevd[406519]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:33:50 compute-0 systemd-udevd[406520]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.599 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:ef:18 2001:db8:0:1:f816:3eff:fe61:ef18 2001:db8::f816:3eff:fe61:ef18'], port_security=['fa:16:3e:61:ef:18 2001:db8:0:1:f816:3eff:fe61:ef18 2001:db8::f816:3eff:fe61:ef18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe61:ef18/64 2001:db8::f816:3eff:fe61:ef18/64', 'neutron:device_id': '1a22f837-6095-4ccc-8e71-79e69b15bc5b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0a1ad0e-ac41-4d92-be07-56e1d94d37c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6de0556a-f319-4356-b742-06d48d854bbd, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bf1fcf69-c1da-4a76-8005-54c5457a915a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:50 compute-0 ovn_controller[152662]: 2025-10-14T09:33:50Z|01547|binding|INFO|Setting lport bf1fcf69-c1da-4a76-8005-54c5457a915a ovn-installed in OVS
Oct 14 09:33:50 compute-0 ovn_controller[152662]: 2025-10-14T09:33:50Z|01548|binding|INFO|Setting lport bf1fcf69-c1da-4a76-8005-54c5457a915a up in Southbound
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:50 compute-0 systemd-machined[214636]: New machine qemu-176-instance-0000008f.
Oct 14 09:33:50 compute-0 NetworkManager[44885]: <info>  [1760434430.6234] device (tapbf1fcf69-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:33:50 compute-0 NetworkManager[44885]: <info>  [1760434430.6255] device (tapbf1fcf69-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:33:50 compute-0 NetworkManager[44885]: <info>  [1760434430.6273] device (tapb8b13ccf-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:33:50 compute-0 NetworkManager[44885]: <info>  [1760434430.6292] device (tapb8b13ccf-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:33:50 compute-0 systemd[1]: Started Virtual Machine qemu-176-instance-0000008f.
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.634 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea0d0a9-e1de-4f32-b518-4eb86bef8057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.639 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[668725e0-f9ef-41ec-9776-36650821aaf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/436293421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.675 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7cb4d4-fe04-4259-92cb-f0f361162b5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.701 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9311c835-2384-4946-a225-bb931f3cf5df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ec9546c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:a2:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817325, 'reachable_time': 33619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406530, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.722 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e7b590-cf67-4fe7-9528-40f0c8ab87f8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0ec9546c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817338, 'tstamp': 817338}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406534, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0ec9546c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817341, 'tstamp': 817341}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406534, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.725 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ec9546c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.728 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ec9546c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.728 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.729 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ec9546c-00, col_values=(('external_ids', {'iface-id': 'b37ec0e2-6cb5-44b1-98a2-2c39cdd204b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.729 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.730 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bf1fcf69-c1da-4a76-8005-54c5457a915a in datapath a3ca3a81-ba03-43af-8eb7-2462170c9d43 unbound from our chassis
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.732 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3ca3a81-ba03-43af-8eb7-2462170c9d43
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.752 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c16ca85d-1810-4568-a169-a38640ac0121]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.789 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3501c799-c916-4ab3-839f-cbe72d8a3c25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.793 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb2ddaa-85f8-4529-bde0-973232336b0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.808 2 DEBUG nova.compute.manager [req-f7ad3765-c5b9-4dd3-9646-44e5dd82310e req-7f8747e0-0584-4556-93f2-66f1b31b54ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-plugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.809 2 DEBUG oslo_concurrency.lockutils [req-f7ad3765-c5b9-4dd3-9646-44e5dd82310e req-7f8747e0-0584-4556-93f2-66f1b31b54ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.809 2 DEBUG oslo_concurrency.lockutils [req-f7ad3765-c5b9-4dd3-9646-44e5dd82310e req-7f8747e0-0584-4556-93f2-66f1b31b54ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.810 2 DEBUG oslo_concurrency.lockutils [req-f7ad3765-c5b9-4dd3-9646-44e5dd82310e req-7f8747e0-0584-4556-93f2-66f1b31b54ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.810 2 DEBUG nova.compute.manager [req-f7ad3765-c5b9-4dd3-9646-44e5dd82310e req-7f8747e0-0584-4556-93f2-66f1b31b54ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Processing event network-vif-plugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.818 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.819 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.820 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.846 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[627672be-f17d-4daa-8daa-0ea5b3baf0e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.870 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[343792ca-0af3-4006-ac62-3add82adb011]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3ca3a81-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:84:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817424, 'reachable_time': 30099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406541, 'error': None, 'target': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.900 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7c553cfd-7595-4a47-8be7-86068b85e567]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa3ca3a81-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817441, 'tstamp': 817441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406542, 'error': None, 'target': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.902 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ca3a81-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.948 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3ca3a81-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.948 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.948 2 DEBUG nova.compute.manager [req-84493c0e-fd78-4213-a233-35419d48a487 req-ea8dd674-72a8-4245-bdf2-39564c11f9a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-plugged-bf1fcf69-c1da-4a76-8005-54c5457a915a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.949 2 DEBUG oslo_concurrency.lockutils [req-84493c0e-fd78-4213-a233-35419d48a487 req-ea8dd674-72a8-4245-bdf2-39564c11f9a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.950 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3ca3a81-b0, col_values=(('external_ids', {'iface-id': 'a176eb2a-6fbd-4b8e-90b2-85a86523eb62'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.950 2 DEBUG oslo_concurrency.lockutils [req-84493c0e-fd78-4213-a233-35419d48a487 req-ea8dd674-72a8-4245-bdf2-39564c11f9a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.950 2 DEBUG oslo_concurrency.lockutils [req-84493c0e-fd78-4213-a233-35419d48a487 req-ea8dd674-72a8-4245-bdf2-39564c11f9a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.951 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.951 2 DEBUG nova.compute.manager [req-84493c0e-fd78-4213-a233-35419d48a487 req-ea8dd674-72a8-4245-bdf2-39564c11f9a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Processing event network-vif-plugged-bf1fcf69-c1da-4a76-8005-54c5457a915a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:33:50 compute-0 nova_compute[259627]: 2025-10-14 09:33:50.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:51 compute-0 ceph-mon[74249]: pgmap v2421: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.726 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434431.7260845, 1a22f837-6095-4ccc-8e71-79e69b15bc5b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.728 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] VM Started (Lifecycle Event)
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.731 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.734 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.738 2 INFO nova.virt.libvirt.driver [-] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Instance spawned successfully.
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.738 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.767 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.777 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.785 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.785 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.786 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.787 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.788 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.789 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.826 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.826 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434431.727609, 1a22f837-6095-4ccc-8e71-79e69b15bc5b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.826 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] VM Paused (Lifecycle Event)
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.862 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.869 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434431.7332778, 1a22f837-6095-4ccc-8e71-79e69b15bc5b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.870 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] VM Resumed (Lifecycle Event)
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.875 2 INFO nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Took 10.43 seconds to spawn the instance on the hypervisor.
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.876 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.891 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.896 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.920 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.941 2 INFO nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Took 11.41 seconds to build instance.
Oct 14 09:33:51 compute-0 nova_compute[259627]: 2025-10-14 09:33:51.959 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 14 09:33:52 compute-0 nova_compute[259627]: 2025-10-14 09:33:52.924 2 DEBUG nova.compute.manager [req-ddb12f3f-f07b-422c-9c62-8d5cff6b6920 req-e0cfe959-2344-404e-bfda-026d55ddeeb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-plugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:33:52 compute-0 nova_compute[259627]: 2025-10-14 09:33:52.925 2 DEBUG oslo_concurrency.lockutils [req-ddb12f3f-f07b-422c-9c62-8d5cff6b6920 req-e0cfe959-2344-404e-bfda-026d55ddeeb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:52 compute-0 nova_compute[259627]: 2025-10-14 09:33:52.925 2 DEBUG oslo_concurrency.lockutils [req-ddb12f3f-f07b-422c-9c62-8d5cff6b6920 req-e0cfe959-2344-404e-bfda-026d55ddeeb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:52 compute-0 nova_compute[259627]: 2025-10-14 09:33:52.925 2 DEBUG oslo_concurrency.lockutils [req-ddb12f3f-f07b-422c-9c62-8d5cff6b6920 req-e0cfe959-2344-404e-bfda-026d55ddeeb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:52 compute-0 nova_compute[259627]: 2025-10-14 09:33:52.925 2 DEBUG nova.compute.manager [req-ddb12f3f-f07b-422c-9c62-8d5cff6b6920 req-e0cfe959-2344-404e-bfda-026d55ddeeb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] No waiting events found dispatching network-vif-plugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:33:52 compute-0 nova_compute[259627]: 2025-10-14 09:33:52.926 2 WARNING nova.compute.manager [req-ddb12f3f-f07b-422c-9c62-8d5cff6b6920 req-e0cfe959-2344-404e-bfda-026d55ddeeb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received unexpected event network-vif-plugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 for instance with vm_state active and task_state None.
Oct 14 09:33:53 compute-0 nova_compute[259627]: 2025-10-14 09:33:53.027 2 DEBUG nova.compute.manager [req-9d33d1ff-d97a-49aa-9d4d-6d997ee63a94 req-ea498fb9-8ead-4b34-9e6f-20fb8c81b8d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-plugged-bf1fcf69-c1da-4a76-8005-54c5457a915a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:33:53 compute-0 nova_compute[259627]: 2025-10-14 09:33:53.028 2 DEBUG oslo_concurrency.lockutils [req-9d33d1ff-d97a-49aa-9d4d-6d997ee63a94 req-ea498fb9-8ead-4b34-9e6f-20fb8c81b8d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:33:53 compute-0 nova_compute[259627]: 2025-10-14 09:33:53.028 2 DEBUG oslo_concurrency.lockutils [req-9d33d1ff-d97a-49aa-9d4d-6d997ee63a94 req-ea498fb9-8ead-4b34-9e6f-20fb8c81b8d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:33:53 compute-0 nova_compute[259627]: 2025-10-14 09:33:53.029 2 DEBUG oslo_concurrency.lockutils [req-9d33d1ff-d97a-49aa-9d4d-6d997ee63a94 req-ea498fb9-8ead-4b34-9e6f-20fb8c81b8d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:33:53 compute-0 nova_compute[259627]: 2025-10-14 09:33:53.029 2 DEBUG nova.compute.manager [req-9d33d1ff-d97a-49aa-9d4d-6d997ee63a94 req-ea498fb9-8ead-4b34-9e6f-20fb8c81b8d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] No waiting events found dispatching network-vif-plugged-bf1fcf69-c1da-4a76-8005-54c5457a915a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:33:53 compute-0 nova_compute[259627]: 2025-10-14 09:33:53.030 2 WARNING nova.compute.manager [req-9d33d1ff-d97a-49aa-9d4d-6d997ee63a94 req-ea498fb9-8ead-4b34-9e6f-20fb8c81b8d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received unexpected event network-vif-plugged-bf1fcf69-c1da-4a76-8005-54c5457a915a for instance with vm_state active and task_state None.
Oct 14 09:33:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:33:53 compute-0 ceph-mon[74249]: pgmap v2422: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 14 09:33:53 compute-0 nova_compute[259627]: 2025-10-14 09:33:53.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:33:53 compute-0 nova_compute[259627]: 2025-10-14 09:33:53.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:33:53 compute-0 nova_compute[259627]: 2025-10-14 09:33:53.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:33:54 compute-0 nova_compute[259627]: 2025-10-14 09:33:54.185 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:33:54 compute-0 nova_compute[259627]: 2025-10-14 09:33:54.185 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:33:54 compute-0 nova_compute[259627]: 2025-10-14 09:33:54.185 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:33:54 compute-0 nova_compute[259627]: 2025-10-14 09:33:54.186 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid aa3e17be-c995-4cab-b209-1eadaaff1634 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:33:54 compute-0 nova_compute[259627]: 2025-10-14 09:33:54.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 14 09:33:55 compute-0 nova_compute[259627]: 2025-10-14 09:33:55.107 2 DEBUG nova.compute.manager [req-d5c0fe1d-6162-4c99-a940-f04a1a40c7eb req-f73db494-1594-4ea7-a537-11a12dc3bcd6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-changed-b8b13ccf-81a6-410e-a209-ce58758d66f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:33:55 compute-0 nova_compute[259627]: 2025-10-14 09:33:55.107 2 DEBUG nova.compute.manager [req-d5c0fe1d-6162-4c99-a940-f04a1a40c7eb req-f73db494-1594-4ea7-a537-11a12dc3bcd6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Refreshing instance network info cache due to event network-changed-b8b13ccf-81a6-410e-a209-ce58758d66f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:33:55 compute-0 nova_compute[259627]: 2025-10-14 09:33:55.107 2 DEBUG oslo_concurrency.lockutils [req-d5c0fe1d-6162-4c99-a940-f04a1a40c7eb req-f73db494-1594-4ea7-a537-11a12dc3bcd6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:33:55 compute-0 nova_compute[259627]: 2025-10-14 09:33:55.108 2 DEBUG oslo_concurrency.lockutils [req-d5c0fe1d-6162-4c99-a940-f04a1a40c7eb req-f73db494-1594-4ea7-a537-11a12dc3bcd6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:33:55 compute-0 nova_compute[259627]: 2025-10-14 09:33:55.108 2 DEBUG nova.network.neutron [req-d5c0fe1d-6162-4c99-a940-f04a1a40c7eb req-f73db494-1594-4ea7-a537-11a12dc3bcd6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Refreshing network info cache for port b8b13ccf-81a6-410e-a209-ce58758d66f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:33:55 compute-0 ceph-mon[74249]: pgmap v2423: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 14 09:33:56 compute-0 nova_compute[259627]: 2025-10-14 09:33:56.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:33:57 compute-0 nova_compute[259627]: 2025-10-14 09:33:57.288 2 DEBUG nova.network.neutron [req-d5c0fe1d-6162-4c99-a940-f04a1a40c7eb req-f73db494-1594-4ea7-a537-11a12dc3bcd6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updated VIF entry in instance network info cache for port b8b13ccf-81a6-410e-a209-ce58758d66f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:33:57 compute-0 nova_compute[259627]: 2025-10-14 09:33:57.289 2 DEBUG nova.network.neutron [req-d5c0fe1d-6162-4c99-a940-f04a1a40c7eb req-f73db494-1594-4ea7-a537-11a12dc3bcd6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updating instance_info_cache with network_info: [{"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:33:57 compute-0 nova_compute[259627]: 2025-10-14 09:33:57.312 2 DEBUG oslo_concurrency.lockutils [req-d5c0fe1d-6162-4c99-a940-f04a1a40c7eb req-f73db494-1594-4ea7-a537-11a12dc3bcd6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:33:57 compute-0 nova_compute[259627]: 2025-10-14 09:33:57.430 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updating instance_info_cache with network_info: [{"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:33:57 compute-0 nova_compute[259627]: 2025-10-14 09:33:57.452 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:33:57 compute-0 nova_compute[259627]: 2025-10-14 09:33:57.453 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:33:57 compute-0 nova_compute[259627]: 2025-10-14 09:33:57.454 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:33:57 compute-0 nova_compute[259627]: 2025-10-14 09:33:57.454 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:33:57 compute-0 ceph-mon[74249]: pgmap v2424: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:33:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:33:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:33:59 compute-0 nova_compute[259627]: 2025-10-14 09:33:59.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:33:59 compute-0 ceph-mon[74249]: pgmap v2425: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:34:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct 14 09:34:00 compute-0 nova_compute[259627]: 2025-10-14 09:34:00.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:34:01 compute-0 nova_compute[259627]: 2025-10-14 09:34:01.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:01 compute-0 podman[406588]: 2025-10-14 09:34:01.686724736 +0000 UTC m=+0.084539626 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent)
Oct 14 09:34:01 compute-0 ceph-mon[74249]: pgmap v2426: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct 14 09:34:01 compute-0 podman[406587]: 2025-10-14 09:34:01.771390794 +0000 UTC m=+0.178463921 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:34:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Oct 14 09:34:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:34:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:34:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:34:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:34:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:34:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:34:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:34:03 compute-0 ceph-mon[74249]: pgmap v2427: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Oct 14 09:34:04 compute-0 nova_compute[259627]: 2025-10-14 09:34:04.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 KiB/s wr, 69 op/s
Oct 14 09:34:04 compute-0 ovn_controller[152662]: 2025-10-14T09:34:04Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:91:ad 10.100.0.3
Oct 14 09:34:04 compute-0 ovn_controller[152662]: 2025-10-14T09:34:04Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:91:ad 10.100.0.3
Oct 14 09:34:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:34:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/640755761' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:34:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:34:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/640755761' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:34:05 compute-0 ceph-mon[74249]: pgmap v2428: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 KiB/s wr, 69 op/s
Oct 14 09:34:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/640755761' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:34:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/640755761' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:34:06 compute-0 nova_compute[259627]: 2025-10-14 09:34:06.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Oct 14 09:34:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:07.050 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:34:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:07.051 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:34:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:07.053 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:34:07 compute-0 ceph-mon[74249]: pgmap v2429: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Oct 14 09:34:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:34:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:34:09 compute-0 nova_compute[259627]: 2025-10-14 09:34:09.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:09 compute-0 ceph-mon[74249]: pgmap v2430: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:34:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:34:11 compute-0 nova_compute[259627]: 2025-10-14 09:34:11.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:11 compute-0 ceph-mon[74249]: pgmap v2431: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:34:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:34:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:34:13 compute-0 ceph-mon[74249]: pgmap v2432: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:34:14 compute-0 nova_compute[259627]: 2025-10-14 09:34:14.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.452 2 DEBUG nova.compute.manager [req-afd73267-4148-4d24-a854-e64bcfd37a09 req-42837d26-5b2b-4cb3-8f00-d89c722a44e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-changed-b8b13ccf-81a6-410e-a209-ce58758d66f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.452 2 DEBUG nova.compute.manager [req-afd73267-4148-4d24-a854-e64bcfd37a09 req-42837d26-5b2b-4cb3-8f00-d89c722a44e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Refreshing instance network info cache due to event network-changed-b8b13ccf-81a6-410e-a209-ce58758d66f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.453 2 DEBUG oslo_concurrency.lockutils [req-afd73267-4148-4d24-a854-e64bcfd37a09 req-42837d26-5b2b-4cb3-8f00-d89c722a44e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.453 2 DEBUG oslo_concurrency.lockutils [req-afd73267-4148-4d24-a854-e64bcfd37a09 req-42837d26-5b2b-4cb3-8f00-d89c722a44e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.454 2 DEBUG nova.network.neutron [req-afd73267-4148-4d24-a854-e64bcfd37a09 req-42837d26-5b2b-4cb3-8f00-d89c722a44e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Refreshing network info cache for port b8b13ccf-81a6-410e-a209-ce58758d66f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.515 2 DEBUG oslo_concurrency.lockutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.516 2 DEBUG oslo_concurrency.lockutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.517 2 DEBUG oslo_concurrency.lockutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.517 2 DEBUG oslo_concurrency.lockutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.518 2 DEBUG oslo_concurrency.lockutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.519 2 INFO nova.compute.manager [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Terminating instance
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.521 2 DEBUG nova.compute.manager [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:34:15 compute-0 kernel: tapb8b13ccf-81 (unregistering): left promiscuous mode
Oct 14 09:34:15 compute-0 NetworkManager[44885]: <info>  [1760434455.5827] device (tapb8b13ccf-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:34:15 compute-0 ovn_controller[152662]: 2025-10-14T09:34:15Z|01549|binding|INFO|Releasing lport b8b13ccf-81a6-410e-a209-ce58758d66f4 from this chassis (sb_readonly=0)
Oct 14 09:34:15 compute-0 ovn_controller[152662]: 2025-10-14T09:34:15Z|01550|binding|INFO|Setting lport b8b13ccf-81a6-410e-a209-ce58758d66f4 down in Southbound
Oct 14 09:34:15 compute-0 ovn_controller[152662]: 2025-10-14T09:34:15Z|01551|binding|INFO|Removing iface tapb8b13ccf-81 ovn-installed in OVS
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.609 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:91:ad 10.100.0.3'], port_security=['fa:16:3e:c0:91:ad 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1a22f837-6095-4ccc-8e71-79e69b15bc5b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0a1ad0e-ac41-4d92-be07-56e1d94d37c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02fc600f-219c-48ab-94ed-7d3694dfd14e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b8b13ccf-81a6-410e-a209-ce58758d66f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.612 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b8b13ccf-81a6-410e-a209-ce58758d66f4 in datapath 0ec9546c-0acc-437f-9f6e-7db1743faf53 unbound from our chassis
Oct 14 09:34:15 compute-0 kernel: tapbf1fcf69-c1 (unregistering): left promiscuous mode
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.615 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ec9546c-0acc-437f-9f6e-7db1743faf53
Oct 14 09:34:15 compute-0 NetworkManager[44885]: <info>  [1760434455.6199] device (tapbf1fcf69-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:15 compute-0 ovn_controller[152662]: 2025-10-14T09:34:15Z|01552|binding|INFO|Releasing lport bf1fcf69-c1da-4a76-8005-54c5457a915a from this chassis (sb_readonly=0)
Oct 14 09:34:15 compute-0 ovn_controller[152662]: 2025-10-14T09:34:15Z|01553|binding|INFO|Setting lport bf1fcf69-c1da-4a76-8005-54c5457a915a down in Southbound
Oct 14 09:34:15 compute-0 ovn_controller[152662]: 2025-10-14T09:34:15Z|01554|binding|INFO|Removing iface tapbf1fcf69-c1 ovn-installed in OVS
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.644 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[245a2c10-1750-42d0-9e5b-2120d7b14fad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.647 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:ef:18 2001:db8:0:1:f816:3eff:fe61:ef18 2001:db8::f816:3eff:fe61:ef18'], port_security=['fa:16:3e:61:ef:18 2001:db8:0:1:f816:3eff:fe61:ef18 2001:db8::f816:3eff:fe61:ef18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe61:ef18/64 2001:db8::f816:3eff:fe61:ef18/64', 'neutron:device_id': '1a22f837-6095-4ccc-8e71-79e69b15bc5b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0a1ad0e-ac41-4d92-be07-56e1d94d37c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6de0556a-f319-4356-b742-06d48d854bbd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bf1fcf69-c1da-4a76-8005-54c5457a915a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:15 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Oct 14 09:34:15 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008f.scope: Consumed 14.112s CPU time.
Oct 14 09:34:15 compute-0 systemd-machined[214636]: Machine qemu-176-instance-0000008f terminated.
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.685 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b543d727-d6c1-49e7-bdb8-4ed3357b6890]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.689 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[20c2add3-b248-46ec-a91d-4ed36f1d3bf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.724 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5c6111-8cbd-4993-96cb-4e91c5d1659f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.754 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ee78346e-3766-49bb-8e5e-62b81a76f2dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ec9546c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:a2:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817325, 'reachable_time': 33619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406648, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.774 2 INFO nova.virt.libvirt.driver [-] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Instance destroyed successfully.
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.775 2 DEBUG nova.objects.instance [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid 1a22f837-6095-4ccc-8e71-79e69b15bc5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.779 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eeaca4df-86ad-4676-b015-02f25dd1655f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0ec9546c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817338, 'tstamp': 817338}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406668, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0ec9546c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817341, 'tstamp': 817341}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406668, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.781 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ec9546c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.788 2 DEBUG nova.virt.libvirt.vif [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:33:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-521253734',display_name='tempest-TestGettingAddress-server-521253734',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-521253734',id=143,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:33:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-93iw5j7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:33:51Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=1a22f837-6095-4ccc-8e71-79e69b15bc5b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.788 2 DEBUG nova.network.os_vif_util [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.789 2 DEBUG nova.network.os_vif_util [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c0:91:ad,bridge_name='br-int',has_traffic_filtering=True,id=b8b13ccf-81a6-410e-a209-ce58758d66f4,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b13ccf-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.789 2 DEBUG os_vif [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:91:ad,bridge_name='br-int',has_traffic_filtering=True,id=b8b13ccf-81a6-410e-a209-ce58758d66f4,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b13ccf-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.791 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ec9546c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.791 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8b13ccf-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.791 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.792 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ec9546c-00, col_values=(('external_ids', {'iface-id': 'b37ec0e2-6cb5-44b1-98a2-2c39cdd204b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.792 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.793 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bf1fcf69-c1da-4a76-8005-54c5457a915a in datapath a3ca3a81-ba03-43af-8eb7-2462170c9d43 unbound from our chassis
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.795 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3ca3a81-ba03-43af-8eb7-2462170c9d43
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:15 compute-0 ceph-mon[74249]: pgmap v2433: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.798 2 INFO os_vif [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:91:ad,bridge_name='br-int',has_traffic_filtering=True,id=b8b13ccf-81a6-410e-a209-ce58758d66f4,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b13ccf-81')
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.799 2 DEBUG nova.virt.libvirt.vif [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:33:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-521253734',display_name='tempest-TestGettingAddress-server-521253734',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-521253734',id=143,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:33:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-93iw5j7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:33:51Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=1a22f837-6095-4ccc-8e71-79e69b15bc5b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.800 2 DEBUG nova.network.os_vif_util [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.800 2 DEBUG nova.network.os_vif_util [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:ef:18,bridge_name='br-int',has_traffic_filtering=True,id=bf1fcf69-c1da-4a76-8005-54c5457a915a,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1fcf69-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.801 2 DEBUG os_vif [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:ef:18,bridge_name='br-int',has_traffic_filtering=True,id=bf1fcf69-c1da-4a76-8005-54c5457a915a,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1fcf69-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.803 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf1fcf69-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.807 2 INFO os_vif [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:ef:18,bridge_name='br-int',has_traffic_filtering=True,id=bf1fcf69-c1da-4a76-8005-54c5457a915a,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1fcf69-c1')
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.814 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8942ddd3-95ce-4b34-8bdc-f80a00a066b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.851 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[17602cce-83c2-4d63-8bed-c3354b888bcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.853 2 DEBUG nova.compute.manager [req-fb39a4ee-8a3b-44a7-ae9f-e3cb178f4eed req-15587318-ea77-4b8d-a691-7b5ae7305051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-unplugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.854 2 DEBUG oslo_concurrency.lockutils [req-fb39a4ee-8a3b-44a7-ae9f-e3cb178f4eed req-15587318-ea77-4b8d-a691-7b5ae7305051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.855 2 DEBUG oslo_concurrency.lockutils [req-fb39a4ee-8a3b-44a7-ae9f-e3cb178f4eed req-15587318-ea77-4b8d-a691-7b5ae7305051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.855 2 DEBUG oslo_concurrency.lockutils [req-fb39a4ee-8a3b-44a7-ae9f-e3cb178f4eed req-15587318-ea77-4b8d-a691-7b5ae7305051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.855 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0cd00e-9e6b-4e76-bd2b-8461a5ccddd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.855 2 DEBUG nova.compute.manager [req-fb39a4ee-8a3b-44a7-ae9f-e3cb178f4eed req-15587318-ea77-4b8d-a691-7b5ae7305051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] No waiting events found dispatching network-vif-unplugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.856 2 DEBUG nova.compute.manager [req-fb39a4ee-8a3b-44a7-ae9f-e3cb178f4eed req-15587318-ea77-4b8d-a691-7b5ae7305051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-unplugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.895 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9861c404-ba06-4c59-9104-3fe0d6dba1ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.916 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be133d74-67f1-44f2-8ae1-85f4ba1597df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3ca3a81-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:84:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817424, 'reachable_time': 30099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406698, 'error': None, 'target': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.937 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[23fa1465-8230-4cf4-9c8c-6960bca9c0b6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa3ca3a81-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817441, 'tstamp': 817441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406699, 'error': None, 'target': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.940 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ca3a81-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.945 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3ca3a81-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.945 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.946 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3ca3a81-b0, col_values=(('external_ids', {'iface-id': 'a176eb2a-6fbd-4b8e-90b2-85a86523eb62'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:34:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.947 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.997 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:34:15 compute-0 nova_compute[259627]: 2025-10-14 09:34:15.997 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 14 09:34:16 compute-0 nova_compute[259627]: 2025-10-14 09:34:16.016 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 14 09:34:16 compute-0 nova_compute[259627]: 2025-10-14 09:34:16.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:16 compute-0 nova_compute[259627]: 2025-10-14 09:34:16.203 2 INFO nova.virt.libvirt.driver [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Deleting instance files /var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b_del
Oct 14 09:34:16 compute-0 nova_compute[259627]: 2025-10-14 09:34:16.205 2 INFO nova.virt.libvirt.driver [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Deletion of /var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b_del complete
Oct 14 09:34:16 compute-0 nova_compute[259627]: 2025-10-14 09:34:16.254 2 INFO nova.compute.manager [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 14 09:34:16 compute-0 nova_compute[259627]: 2025-10-14 09:34:16.255 2 DEBUG oslo.service.loopingcall [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:34:16 compute-0 nova_compute[259627]: 2025-10-14 09:34:16.255 2 DEBUG nova.compute.manager [-] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:34:16 compute-0 nova_compute[259627]: 2025-10-14 09:34:16.255 2 DEBUG nova.network.neutron [-] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:34:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.555 2 DEBUG nova.compute.manager [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-unplugged-bf1fcf69-c1da-4a76-8005-54c5457a915a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.556 2 DEBUG oslo_concurrency.lockutils [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.556 2 DEBUG oslo_concurrency.lockutils [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.557 2 DEBUG oslo_concurrency.lockutils [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.557 2 DEBUG nova.compute.manager [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] No waiting events found dispatching network-vif-unplugged-bf1fcf69-c1da-4a76-8005-54c5457a915a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.558 2 DEBUG nova.compute.manager [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-unplugged-bf1fcf69-c1da-4a76-8005-54c5457a915a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.558 2 DEBUG nova.compute.manager [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-plugged-bf1fcf69-c1da-4a76-8005-54c5457a915a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.559 2 DEBUG oslo_concurrency.lockutils [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.559 2 DEBUG oslo_concurrency.lockutils [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.560 2 DEBUG oslo_concurrency.lockutils [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.560 2 DEBUG nova.compute.manager [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] No waiting events found dispatching network-vif-plugged-bf1fcf69-c1da-4a76-8005-54c5457a915a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.560 2 WARNING nova.compute.manager [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received unexpected event network-vif-plugged-bf1fcf69-c1da-4a76-8005-54c5457a915a for instance with vm_state active and task_state deleting.
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.603 2 DEBUG nova.network.neutron [req-afd73267-4148-4d24-a854-e64bcfd37a09 req-42837d26-5b2b-4cb3-8f00-d89c722a44e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updated VIF entry in instance network info cache for port b8b13ccf-81a6-410e-a209-ce58758d66f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.604 2 DEBUG nova.network.neutron [req-afd73267-4148-4d24-a854-e64bcfd37a09 req-42837d26-5b2b-4cb3-8f00-d89c722a44e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updating instance_info_cache with network_info: [{"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.628 2 DEBUG oslo_concurrency.lockutils [req-afd73267-4148-4d24-a854-e64bcfd37a09 req-42837d26-5b2b-4cb3-8f00-d89c722a44e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:34:17 compute-0 ceph-mon[74249]: pgmap v2434: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.953 2 DEBUG nova.compute.manager [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-plugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.954 2 DEBUG oslo_concurrency.lockutils [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.954 2 DEBUG oslo_concurrency.lockutils [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.955 2 DEBUG oslo_concurrency.lockutils [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.956 2 DEBUG nova.compute.manager [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] No waiting events found dispatching network-vif-plugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.956 2 WARNING nova.compute.manager [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received unexpected event network-vif-plugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 for instance with vm_state active and task_state deleting.
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.956 2 DEBUG nova.compute.manager [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-deleted-bf1fcf69-c1da-4a76-8005-54c5457a915a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.957 2 INFO nova.compute.manager [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Neutron deleted interface bf1fcf69-c1da-4a76-8005-54c5457a915a; detaching it from the instance and deleting it from the info cache
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.957 2 DEBUG nova.network.neutron [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updating instance_info_cache with network_info: [{"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:34:17 compute-0 nova_compute[259627]: 2025-10-14 09:34:17.992 2 DEBUG nova.network.neutron [-] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:34:18 compute-0 nova_compute[259627]: 2025-10-14 09:34:18.002 2 DEBUG nova.compute.manager [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Detach interface failed, port_id=bf1fcf69-c1da-4a76-8005-54c5457a915a, reason: Instance 1a22f837-6095-4ccc-8e71-79e69b15bc5b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 09:34:18 compute-0 nova_compute[259627]: 2025-10-14 09:34:18.013 2 INFO nova.compute.manager [-] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Took 1.76 seconds to deallocate network for instance.
Oct 14 09:34:18 compute-0 nova_compute[259627]: 2025-10-14 09:34:18.080 2 DEBUG oslo_concurrency.lockutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:34:18 compute-0 nova_compute[259627]: 2025-10-14 09:34:18.082 2 DEBUG oslo_concurrency.lockutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:34:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:34:18 compute-0 nova_compute[259627]: 2025-10-14 09:34:18.214 2 DEBUG oslo_concurrency.processutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:34:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 18 KiB/s wr, 2 op/s
Oct 14 09:34:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:34:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3063906089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:34:18 compute-0 nova_compute[259627]: 2025-10-14 09:34:18.710 2 DEBUG oslo_concurrency.processutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:34:18 compute-0 nova_compute[259627]: 2025-10-14 09:34:18.723 2 DEBUG nova.compute.provider_tree [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:34:18 compute-0 nova_compute[259627]: 2025-10-14 09:34:18.751 2 DEBUG nova.scheduler.client.report [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:34:18 compute-0 nova_compute[259627]: 2025-10-14 09:34:18.785 2 DEBUG oslo_concurrency.lockutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:34:18 compute-0 nova_compute[259627]: 2025-10-14 09:34:18.823 2 INFO nova.scheduler.client.report [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance 1a22f837-6095-4ccc-8e71-79e69b15bc5b
Oct 14 09:34:18 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3063906089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:34:18 compute-0 nova_compute[259627]: 2025-10-14 09:34:18.921 2 DEBUG oslo_concurrency.lockutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:34:19 compute-0 podman[406723]: 2025-10-14 09:34:19.693712412 +0000 UTC m=+0.102948988 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.schema-version=1.0)
Oct 14 09:34:19 compute-0 podman[406724]: 2025-10-14 09:34:19.714470791 +0000 UTC m=+0.119824222 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251009)
Oct 14 09:34:19 compute-0 ceph-mon[74249]: pgmap v2435: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 18 KiB/s wr, 2 op/s
Oct 14 09:34:19 compute-0 nova_compute[259627]: 2025-10-14 09:34:19.992 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:34:20 compute-0 nova_compute[259627]: 2025-10-14 09:34:20.091 2 DEBUG nova.compute.manager [req-9f9264d3-080a-4e50-8206-671ede7bdc16 req-b444b626-7b0b-4ee9-bac2-71c348743970 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-deleted-b8b13ccf-81a6-410e-a209-ce58758d66f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:34:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 142 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 24 KiB/s wr, 26 op/s
Oct 14 09:34:20 compute-0 nova_compute[259627]: 2025-10-14 09:34:20.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:21 compute-0 nova_compute[259627]: 2025-10-14 09:34:21.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:21 compute-0 ceph-mon[74249]: pgmap v2436: 305 pgs: 305 active+clean; 142 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 24 KiB/s wr, 26 op/s
Oct 14 09:34:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 121 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 23 KiB/s wr, 30 op/s
Oct 14 09:34:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.157 2 DEBUG nova.compute.manager [req-c0acee91-09b3-4adc-8f78-b67a42fea93c req-3ccd0aed-08f3-4cbe-962f-ef4955b09e58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-changed-51563204-46d2-4b26-bfa3-a2dc0f43701a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.158 2 DEBUG nova.compute.manager [req-c0acee91-09b3-4adc-8f78-b67a42fea93c req-3ccd0aed-08f3-4cbe-962f-ef4955b09e58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Refreshing instance network info cache due to event network-changed-51563204-46d2-4b26-bfa3-a2dc0f43701a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.159 2 DEBUG oslo_concurrency.lockutils [req-c0acee91-09b3-4adc-8f78-b67a42fea93c req-3ccd0aed-08f3-4cbe-962f-ef4955b09e58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.159 2 DEBUG oslo_concurrency.lockutils [req-c0acee91-09b3-4adc-8f78-b67a42fea93c req-3ccd0aed-08f3-4cbe-962f-ef4955b09e58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.160 2 DEBUG nova.network.neutron [req-c0acee91-09b3-4adc-8f78-b67a42fea93c req-3ccd0aed-08f3-4cbe-962f-ef4955b09e58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Refreshing network info cache for port 51563204-46d2-4b26-bfa3-a2dc0f43701a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.228 2 DEBUG oslo_concurrency.lockutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.229 2 DEBUG oslo_concurrency.lockutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.230 2 DEBUG oslo_concurrency.lockutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.230 2 DEBUG oslo_concurrency.lockutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.231 2 DEBUG oslo_concurrency.lockutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.232 2 INFO nova.compute.manager [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Terminating instance
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.234 2 DEBUG nova.compute.manager [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:34:23 compute-0 kernel: tap51563204-46 (unregistering): left promiscuous mode
Oct 14 09:34:23 compute-0 NetworkManager[44885]: <info>  [1760434463.3027] device (tap51563204-46): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:23 compute-0 ovn_controller[152662]: 2025-10-14T09:34:23Z|01555|binding|INFO|Releasing lport 51563204-46d2-4b26-bfa3-a2dc0f43701a from this chassis (sb_readonly=0)
Oct 14 09:34:23 compute-0 ovn_controller[152662]: 2025-10-14T09:34:23Z|01556|binding|INFO|Setting lport 51563204-46d2-4b26-bfa3-a2dc0f43701a down in Southbound
Oct 14 09:34:23 compute-0 ovn_controller[152662]: 2025-10-14T09:34:23Z|01557|binding|INFO|Removing iface tap51563204-46 ovn-installed in OVS
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.378 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:aa:2c 10.100.0.13'], port_security=['fa:16:3e:8e:aa:2c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'aa3e17be-c995-4cab-b209-1eadaaff1634', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0a1ad0e-ac41-4d92-be07-56e1d94d37c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02fc600f-219c-48ab-94ed-7d3694dfd14e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=51563204-46d2-4b26-bfa3-a2dc0f43701a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.380 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 51563204-46d2-4b26-bfa3-a2dc0f43701a in datapath 0ec9546c-0acc-437f-9f6e-7db1743faf53 unbound from our chassis
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.382 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0ec9546c-0acc-437f-9f6e-7db1743faf53, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.383 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[591fe5f7-f5e5-4bb5-b5c7-36d2e3931c82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.384 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53 namespace which is not needed anymore
Oct 14 09:34:23 compute-0 kernel: tape4183a77-e1 (unregistering): left promiscuous mode
Oct 14 09:34:23 compute-0 NetworkManager[44885]: <info>  [1760434463.4008] device (tape4183a77-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:23 compute-0 ovn_controller[152662]: 2025-10-14T09:34:23Z|01558|binding|INFO|Releasing lport e4183a77-e102-4885-9a7d-ef0431abf27c from this chassis (sb_readonly=0)
Oct 14 09:34:23 compute-0 ovn_controller[152662]: 2025-10-14T09:34:23Z|01559|binding|INFO|Setting lport e4183a77-e102-4885-9a7d-ef0431abf27c down in Southbound
Oct 14 09:34:23 compute-0 ovn_controller[152662]: 2025-10-14T09:34:23Z|01560|binding|INFO|Removing iface tape4183a77-e1 ovn-installed in OVS
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.421 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:fd:da 2001:db8:0:1:f816:3eff:fe8f:fdda 2001:db8::f816:3eff:fe8f:fdda'], port_security=['fa:16:3e:8f:fd:da 2001:db8:0:1:f816:3eff:fe8f:fdda 2001:db8::f816:3eff:fe8f:fdda'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe8f:fdda/64 2001:db8::f816:3eff:fe8f:fdda/64', 'neutron:device_id': 'aa3e17be-c995-4cab-b209-1eadaaff1634', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0a1ad0e-ac41-4d92-be07-56e1d94d37c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6de0556a-f319-4356-b742-06d48d854bbd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e4183a77-e102-4885-9a7d-ef0431abf27c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:23 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Oct 14 09:34:23 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008e.scope: Consumed 15.942s CPU time.
Oct 14 09:34:23 compute-0 systemd-machined[214636]: Machine qemu-175-instance-0000008e terminated.
Oct 14 09:34:23 compute-0 neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53[405056]: [NOTICE]   (405060) : haproxy version is 2.8.14-c23fe91
Oct 14 09:34:23 compute-0 neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53[405056]: [NOTICE]   (405060) : path to executable is /usr/sbin/haproxy
Oct 14 09:34:23 compute-0 neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53[405056]: [WARNING]  (405060) : Exiting Master process...
Oct 14 09:34:23 compute-0 neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53[405056]: [ALERT]    (405060) : Current worker (405062) exited with code 143 (Terminated)
Oct 14 09:34:23 compute-0 neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53[405056]: [WARNING]  (405060) : All workers exited. Exiting... (0)
Oct 14 09:34:23 compute-0 systemd[1]: libpod-e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16.scope: Deactivated successfully.
Oct 14 09:34:23 compute-0 podman[406791]: 2025-10-14 09:34:23.572694612 +0000 UTC m=+0.059138453 container died e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 14 09:34:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16-userdata-shm.mount: Deactivated successfully.
Oct 14 09:34:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0619add7e0ccdeb5eee5bc37d898eed13da4d773245bbb9df7aa5e8493239769-merged.mount: Deactivated successfully.
Oct 14 09:34:23 compute-0 podman[406791]: 2025-10-14 09:34:23.618459645 +0000 UTC m=+0.104903446 container cleanup e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.624 2 DEBUG nova.compute.manager [req-e272e949-46aa-4802-a233-639c2b63e6a1 req-0c5c3e18-5ffe-46e6-85af-be7ff90f7d47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-unplugged-51563204-46d2-4b26-bfa3-a2dc0f43701a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.624 2 DEBUG oslo_concurrency.lockutils [req-e272e949-46aa-4802-a233-639c2b63e6a1 req-0c5c3e18-5ffe-46e6-85af-be7ff90f7d47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.624 2 DEBUG oslo_concurrency.lockutils [req-e272e949-46aa-4802-a233-639c2b63e6a1 req-0c5c3e18-5ffe-46e6-85af-be7ff90f7d47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.624 2 DEBUG oslo_concurrency.lockutils [req-e272e949-46aa-4802-a233-639c2b63e6a1 req-0c5c3e18-5ffe-46e6-85af-be7ff90f7d47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.625 2 DEBUG nova.compute.manager [req-e272e949-46aa-4802-a233-639c2b63e6a1 req-0c5c3e18-5ffe-46e6-85af-be7ff90f7d47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] No waiting events found dispatching network-vif-unplugged-51563204-46d2-4b26-bfa3-a2dc0f43701a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.625 2 DEBUG nova.compute.manager [req-e272e949-46aa-4802-a233-639c2b63e6a1 req-0c5c3e18-5ffe-46e6-85af-be7ff90f7d47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-unplugged-51563204-46d2-4b26-bfa3-a2dc0f43701a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:34:23 compute-0 systemd[1]: libpod-conmon-e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16.scope: Deactivated successfully.
Oct 14 09:34:23 compute-0 NetworkManager[44885]: <info>  [1760434463.6621] manager: (tap51563204-46): new Tun device (/org/freedesktop/NetworkManager/Devices/631)
Oct 14 09:34:23 compute-0 NetworkManager[44885]: <info>  [1760434463.6717] manager: (tape4183a77-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/632)
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.692 2 INFO nova.virt.libvirt.driver [-] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Instance destroyed successfully.
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.693 2 DEBUG nova.objects.instance [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid aa3e17be-c995-4cab-b209-1eadaaff1634 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.707 2 DEBUG nova.virt.libvirt.vif [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:33:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1334569923',display_name='tempest-TestGettingAddress-server-1334569923',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1334569923',id=142,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:33:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-z7alpslz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:33:15Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=aa3e17be-c995-4cab-b209-1eadaaff1634,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.707 2 DEBUG nova.network.os_vif_util [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.708 2 DEBUG nova.network.os_vif_util [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:aa:2c,bridge_name='br-int',has_traffic_filtering=True,id=51563204-46d2-4b26-bfa3-a2dc0f43701a,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51563204-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.708 2 DEBUG os_vif [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:aa:2c,bridge_name='br-int',has_traffic_filtering=True,id=51563204-46d2-4b26-bfa3-a2dc0f43701a,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51563204-46') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.711 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51563204-46, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.720 2 INFO os_vif [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:aa:2c,bridge_name='br-int',has_traffic_filtering=True,id=51563204-46d2-4b26-bfa3-a2dc0f43701a,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51563204-46')
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.720 2 DEBUG nova.virt.libvirt.vif [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:33:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1334569923',display_name='tempest-TestGettingAddress-server-1334569923',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1334569923',id=142,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:33:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-z7alpslz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:33:15Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=aa3e17be-c995-4cab-b209-1eadaaff1634,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.721 2 DEBUG nova.network.os_vif_util [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.721 2 DEBUG nova.network.os_vif_util [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8f:fd:da,bridge_name='br-int',has_traffic_filtering=True,id=e4183a77-e102-4885-9a7d-ef0431abf27c,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4183a77-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.721 2 DEBUG os_vif [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:fd:da,bridge_name='br-int',has_traffic_filtering=True,id=e4183a77-e102-4885-9a7d-ef0431abf27c,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4183a77-e1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.722 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4183a77-e1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:23 compute-0 podman[406819]: 2025-10-14 09:34:23.729286144 +0000 UTC m=+0.064095164 container remove e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.726 2 INFO os_vif [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:fd:da,bridge_name='br-int',has_traffic_filtering=True,id=e4183a77-e102-4885-9a7d-ef0431abf27c,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4183a77-e1')
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.736 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0e381778-9ef1-4a7b-909c-5932cf191980]: (4, ('Tue Oct 14 09:34:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53 (e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16)\ne0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16\nTue Oct 14 09:34:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53 (e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16)\ne0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.738 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[278c35a6-e0b9-4099-b381-ddd9b0465d7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.739 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ec9546c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:34:23 compute-0 kernel: tap0ec9546c-00: left promiscuous mode
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:23 compute-0 nova_compute[259627]: 2025-10-14 09:34:23.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.763 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0edb0996-93f7-4a53-b6fa-3b55bab90e76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.791 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7d56abaf-7187-4dbd-a445-2188505a2de5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.793 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[faf45dc0-8f75-40a8-883b-d03be183854a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.811 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[86c3154b-2aac-488f-bbc6-b615a530f6c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817315, 'reachable_time': 37820, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406872, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d0ec9546c\x2d0acc\x2d437f\x2d9f6e\x2d7db1743faf53.mount: Deactivated successfully.
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.819 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.819 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[74848880-86cf-4292-aa43-d97fb64d34f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.820 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e4183a77-e102-4885-9a7d-ef0431abf27c in datapath a3ca3a81-ba03-43af-8eb7-2462170c9d43 unbound from our chassis
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.821 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3ca3a81-ba03-43af-8eb7-2462170c9d43, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.822 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[906067f5-d1ee-4c44-8307-111743499e18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.823 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43 namespace which is not needed anymore
Oct 14 09:34:23 compute-0 ceph-mon[74249]: pgmap v2437: 305 pgs: 305 active+clean; 121 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 23 KiB/s wr, 30 op/s
Oct 14 09:34:23 compute-0 neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43[405128]: [NOTICE]   (405132) : haproxy version is 2.8.14-c23fe91
Oct 14 09:34:23 compute-0 neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43[405128]: [NOTICE]   (405132) : path to executable is /usr/sbin/haproxy
Oct 14 09:34:23 compute-0 neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43[405128]: [WARNING]  (405132) : Exiting Master process...
Oct 14 09:34:23 compute-0 neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43[405128]: [WARNING]  (405132) : Exiting Master process...
Oct 14 09:34:23 compute-0 neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43[405128]: [ALERT]    (405132) : Current worker (405134) exited with code 143 (Terminated)
Oct 14 09:34:23 compute-0 neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43[405128]: [WARNING]  (405132) : All workers exited. Exiting... (0)
Oct 14 09:34:23 compute-0 systemd[1]: libpod-552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63.scope: Deactivated successfully.
Oct 14 09:34:23 compute-0 podman[406887]: 2025-10-14 09:34:23.986434855 +0000 UTC m=+0.046065162 container died 552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 09:34:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63-userdata-shm.mount: Deactivated successfully.
Oct 14 09:34:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0a4b15685401cfe097a8856dcb11738118fab0652949510d103f701d6144a22-merged.mount: Deactivated successfully.
Oct 14 09:34:24 compute-0 podman[406887]: 2025-10-14 09:34:24.032594798 +0000 UTC m=+0.092225105 container cleanup 552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 09:34:24 compute-0 systemd[1]: libpod-conmon-552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63.scope: Deactivated successfully.
Oct 14 09:34:24 compute-0 podman[406916]: 2025-10-14 09:34:24.104556984 +0000 UTC m=+0.050135832 container remove 552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 09:34:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:24.110 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6e7cfc1e-471e-4d0f-beb3-1cac78028da5]: (4, ('Tue Oct 14 09:34:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43 (552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63)\n552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63\nTue Oct 14 09:34:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43 (552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63)\n552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:24.112 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ec2115-ed9d-4387-8cce-99147d29f529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:24.114 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ca3a81-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:34:24 compute-0 kernel: tapa3ca3a81-b0: left promiscuous mode
Oct 14 09:34:24 compute-0 nova_compute[259627]: 2025-10-14 09:34:24.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:24 compute-0 nova_compute[259627]: 2025-10-14 09:34:24.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:24.150 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[45f85b5c-2ce6-4279-8b3b-713bb774663f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:24 compute-0 nova_compute[259627]: 2025-10-14 09:34:24.156 2 INFO nova.virt.libvirt.driver [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Deleting instance files /var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634_del
Oct 14 09:34:24 compute-0 nova_compute[259627]: 2025-10-14 09:34:24.157 2 INFO nova.virt.libvirt.driver [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Deletion of /var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634_del complete
Oct 14 09:34:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:24.180 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c82e8f-2700-4bd0-8872-189f47689f51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:24.182 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d513f7-c6a2-437f-82d8-43b8b8115188]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:24.202 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[baebe3bf-afb3-4e8a-b297-e74382388c1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817415, 'reachable_time': 44086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406933, 'error': None, 'target': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:24.205 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:34:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:24.205 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[48413248-99c0-4ca3-a7f9-69e38d662435]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:24 compute-0 nova_compute[259627]: 2025-10-14 09:34:24.229 2 INFO nova.compute.manager [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Took 0.99 seconds to destroy the instance on the hypervisor.
Oct 14 09:34:24 compute-0 nova_compute[259627]: 2025-10-14 09:34:24.230 2 DEBUG oslo.service.loopingcall [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:34:24 compute-0 nova_compute[259627]: 2025-10-14 09:34:24.231 2 DEBUG nova.compute.manager [-] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:34:24 compute-0 nova_compute[259627]: 2025-10-14 09:34:24.231 2 DEBUG nova.network.neutron [-] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:34:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 121 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 13 KiB/s wr, 30 op/s
Oct 14 09:34:24 compute-0 systemd[1]: run-netns-ovnmeta\x2da3ca3a81\x2dba03\x2d43af\x2d8eb7\x2d2462170c9d43.mount: Deactivated successfully.
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.251 2 DEBUG nova.compute.manager [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-unplugged-e4183a77-e102-4885-9a7d-ef0431abf27c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.252 2 DEBUG oslo_concurrency.lockutils [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.253 2 DEBUG oslo_concurrency.lockutils [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.253 2 DEBUG oslo_concurrency.lockutils [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.254 2 DEBUG nova.compute.manager [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] No waiting events found dispatching network-vif-unplugged-e4183a77-e102-4885-9a7d-ef0431abf27c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.254 2 DEBUG nova.compute.manager [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-unplugged-e4183a77-e102-4885-9a7d-ef0431abf27c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.255 2 DEBUG nova.compute.manager [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-plugged-e4183a77-e102-4885-9a7d-ef0431abf27c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.255 2 DEBUG oslo_concurrency.lockutils [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.256 2 DEBUG oslo_concurrency.lockutils [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.256 2 DEBUG oslo_concurrency.lockutils [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.256 2 DEBUG nova.compute.manager [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] No waiting events found dispatching network-vif-plugged-e4183a77-e102-4885-9a7d-ef0431abf27c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.257 2 WARNING nova.compute.manager [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received unexpected event network-vif-plugged-e4183a77-e102-4885-9a7d-ef0431abf27c for instance with vm_state active and task_state deleting.
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.502 2 DEBUG nova.network.neutron [req-c0acee91-09b3-4adc-8f78-b67a42fea93c req-3ccd0aed-08f3-4cbe-962f-ef4955b09e58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updated VIF entry in instance network info cache for port 51563204-46d2-4b26-bfa3-a2dc0f43701a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.502 2 DEBUG nova.network.neutron [req-c0acee91-09b3-4adc-8f78-b67a42fea93c req-3ccd0aed-08f3-4cbe-962f-ef4955b09e58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updating instance_info_cache with network_info: [{"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.518 2 DEBUG oslo_concurrency.lockutils [req-c0acee91-09b3-4adc-8f78-b67a42fea93c req-3ccd0aed-08f3-4cbe-962f-ef4955b09e58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.752 2 DEBUG nova.compute.manager [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-plugged-51563204-46d2-4b26-bfa3-a2dc0f43701a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.752 2 DEBUG oslo_concurrency.lockutils [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.753 2 DEBUG oslo_concurrency.lockutils [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.753 2 DEBUG oslo_concurrency.lockutils [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.754 2 DEBUG nova.compute.manager [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] No waiting events found dispatching network-vif-plugged-51563204-46d2-4b26-bfa3-a2dc0f43701a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.754 2 WARNING nova.compute.manager [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received unexpected event network-vif-plugged-51563204-46d2-4b26-bfa3-a2dc0f43701a for instance with vm_state active and task_state deleting.
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.754 2 DEBUG nova.compute.manager [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-deleted-e4183a77-e102-4885-9a7d-ef0431abf27c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.755 2 INFO nova.compute.manager [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Neutron deleted interface e4183a77-e102-4885-9a7d-ef0431abf27c; detaching it from the instance and deleting it from the info cache
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.755 2 DEBUG nova.network.neutron [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updating instance_info_cache with network_info: [{"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.798 2 DEBUG nova.network.neutron [-] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.802 2 DEBUG nova.compute.manager [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Detach interface failed, port_id=e4183a77-e102-4885-9a7d-ef0431abf27c, reason: Instance aa3e17be-c995-4cab-b209-1eadaaff1634 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.826 2 INFO nova.compute.manager [-] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Took 1.59 seconds to deallocate network for instance.
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.879 2 DEBUG oslo_concurrency.lockutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.880 2 DEBUG oslo_concurrency.lockutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:34:25 compute-0 ceph-mon[74249]: pgmap v2438: 305 pgs: 305 active+clean; 121 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 13 KiB/s wr, 30 op/s
Oct 14 09:34:25 compute-0 nova_compute[259627]: 2025-10-14 09:34:25.928 2 DEBUG oslo_concurrency.processutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:34:26 compute-0 nova_compute[259627]: 2025-10-14 09:34:26.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:34:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1031465038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:34:26 compute-0 nova_compute[259627]: 2025-10-14 09:34:26.450 2 DEBUG oslo_concurrency.processutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:34:26 compute-0 nova_compute[259627]: 2025-10-14 09:34:26.457 2 DEBUG nova.compute.provider_tree [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:34:26 compute-0 nova_compute[259627]: 2025-10-14 09:34:26.481 2 DEBUG nova.scheduler.client.report [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:34:26 compute-0 sudo[406954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:34:26 compute-0 sudo[406954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:34:26 compute-0 sudo[406954]: pam_unix(sudo:session): session closed for user root
Oct 14 09:34:26 compute-0 nova_compute[259627]: 2025-10-14 09:34:26.512 2 DEBUG oslo_concurrency.lockutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:34:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 14 KiB/s wr, 57 op/s
Oct 14 09:34:26 compute-0 nova_compute[259627]: 2025-10-14 09:34:26.550 2 INFO nova.scheduler.client.report [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance aa3e17be-c995-4cab-b209-1eadaaff1634
Oct 14 09:34:26 compute-0 sudo[406981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:34:26 compute-0 sudo[406981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:34:26 compute-0 sudo[406981]: pam_unix(sudo:session): session closed for user root
Oct 14 09:34:26 compute-0 nova_compute[259627]: 2025-10-14 09:34:26.630 2 DEBUG oslo_concurrency.lockutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:34:26 compute-0 sudo[407006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:34:26 compute-0 sudo[407006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:34:26 compute-0 sudo[407006]: pam_unix(sudo:session): session closed for user root
Oct 14 09:34:26 compute-0 sudo[407031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:34:26 compute-0 sudo[407031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:34:26 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1031465038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:34:27 compute-0 nova_compute[259627]: 2025-10-14 09:34:27.134 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:34:27 compute-0 sudo[407031]: pam_unix(sudo:session): session closed for user root
Oct 14 09:34:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:34:27 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:34:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:34:27 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:34:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:34:27 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:34:27 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 221da1bf-9db7-40cf-83dc-2877b2af0e5d does not exist
Oct 14 09:34:27 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev f9b2dfa8-131b-4146-a7c7-074dc1ba0490 does not exist
Oct 14 09:34:27 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 145e96c0-de5a-4540-a538-c7780faf845d does not exist
Oct 14 09:34:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:34:27 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:34:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:34:27 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:34:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:34:27 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:34:27 compute-0 sudo[407088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:34:27 compute-0 sudo[407088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:34:27 compute-0 sudo[407088]: pam_unix(sudo:session): session closed for user root
Oct 14 09:34:27 compute-0 sudo[407113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:34:27 compute-0 sudo[407113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:34:27 compute-0 sudo[407113]: pam_unix(sudo:session): session closed for user root
Oct 14 09:34:27 compute-0 sudo[407138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:34:27 compute-0 sudo[407138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:34:27 compute-0 sudo[407138]: pam_unix(sudo:session): session closed for user root
Oct 14 09:34:27 compute-0 sudo[407163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:34:27 compute-0 sudo[407163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:34:27 compute-0 nova_compute[259627]: 2025-10-14 09:34:27.873 2 DEBUG nova.compute.manager [req-a67e81d0-a272-4964-80d5-f972d6b9ad20 req-7b7dfdce-1d54-492f-a7e5-fccfe7faea5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-deleted-51563204-46d2-4b26-bfa3-a2dc0f43701a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:34:27 compute-0 ceph-mon[74249]: pgmap v2439: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 14 KiB/s wr, 57 op/s
Oct 14 09:34:27 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:34:27 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:34:27 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:34:27 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:34:27 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:34:27 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:34:28 compute-0 podman[407229]: 2025-10-14 09:34:28.060321507 +0000 UTC m=+0.060755222 container create 1782e04e82bfb019fd129030a8ff774cd46206414416fa0edff32fda11b7f182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 09:34:28 compute-0 systemd[1]: Started libpod-conmon-1782e04e82bfb019fd129030a8ff774cd46206414416fa0edff32fda11b7f182.scope.
Oct 14 09:34:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:34:28 compute-0 podman[407229]: 2025-10-14 09:34:28.026482986 +0000 UTC m=+0.026916701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:34:28 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:34:28 compute-0 podman[407229]: 2025-10-14 09:34:28.141612772 +0000 UTC m=+0.142046447 container init 1782e04e82bfb019fd129030a8ff774cd46206414416fa0edff32fda11b7f182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 09:34:28 compute-0 podman[407229]: 2025-10-14 09:34:28.149664679 +0000 UTC m=+0.150098354 container start 1782e04e82bfb019fd129030a8ff774cd46206414416fa0edff32fda11b7f182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:34:28 compute-0 podman[407229]: 2025-10-14 09:34:28.153200386 +0000 UTC m=+0.153634081 container attach 1782e04e82bfb019fd129030a8ff774cd46206414416fa0edff32fda11b7f182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 09:34:28 compute-0 hungry_goldberg[407245]: 167 167
Oct 14 09:34:28 compute-0 systemd[1]: libpod-1782e04e82bfb019fd129030a8ff774cd46206414416fa0edff32fda11b7f182.scope: Deactivated successfully.
Oct 14 09:34:28 compute-0 podman[407229]: 2025-10-14 09:34:28.156970569 +0000 UTC m=+0.157404244 container died 1782e04e82bfb019fd129030a8ff774cd46206414416fa0edff32fda11b7f182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:34:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9a774f21be8fda3194a00316d7ff69c4d7e071b94d0b64bdbd09f3da7ff9953-merged.mount: Deactivated successfully.
Oct 14 09:34:28 compute-0 podman[407229]: 2025-10-14 09:34:28.191692941 +0000 UTC m=+0.192126626 container remove 1782e04e82bfb019fd129030a8ff774cd46206414416fa0edff32fda11b7f182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:34:28 compute-0 systemd[1]: libpod-conmon-1782e04e82bfb019fd129030a8ff774cd46206414416fa0edff32fda11b7f182.scope: Deactivated successfully.
Oct 14 09:34:28 compute-0 podman[407269]: 2025-10-14 09:34:28.395343188 +0000 UTC m=+0.050363557 container create c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:34:28 compute-0 systemd[1]: Started libpod-conmon-c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65.scope.
Oct 14 09:34:28 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:34:28 compute-0 podman[407269]: 2025-10-14 09:34:28.374285052 +0000 UTC m=+0.029305441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:34:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab696a0d33ab9e3cd41e4190253d12bfd0acdc497627c06e7902343eb992c445/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:34:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab696a0d33ab9e3cd41e4190253d12bfd0acdc497627c06e7902343eb992c445/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:34:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab696a0d33ab9e3cd41e4190253d12bfd0acdc497627c06e7902343eb992c445/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:34:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab696a0d33ab9e3cd41e4190253d12bfd0acdc497627c06e7902343eb992c445/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:34:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab696a0d33ab9e3cd41e4190253d12bfd0acdc497627c06e7902343eb992c445/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:34:28 compute-0 podman[407269]: 2025-10-14 09:34:28.481291488 +0000 UTC m=+0.136311847 container init c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:34:28 compute-0 podman[407269]: 2025-10-14 09:34:28.496236964 +0000 UTC m=+0.151257323 container start c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 09:34:28 compute-0 podman[407269]: 2025-10-14 09:34:28.500434167 +0000 UTC m=+0.155454556 container attach c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 09:34:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 7.1 KiB/s wr, 55 op/s
Oct 14 09:34:28 compute-0 nova_compute[259627]: 2025-10-14 09:34:28.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:29 compute-0 intelligent_volhard[407285]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:34:29 compute-0 intelligent_volhard[407285]: --> relative data size: 1.0
Oct 14 09:34:29 compute-0 intelligent_volhard[407285]: --> All data devices are unavailable
Oct 14 09:34:29 compute-0 systemd[1]: libpod-c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65.scope: Deactivated successfully.
Oct 14 09:34:29 compute-0 systemd[1]: libpod-c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65.scope: Consumed 1.176s CPU time.
Oct 14 09:34:29 compute-0 conmon[407285]: conmon c84d789ff8d91e463e69 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65.scope/container/memory.events
Oct 14 09:34:29 compute-0 podman[407269]: 2025-10-14 09:34:29.726227087 +0000 UTC m=+1.381247466 container died c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 09:34:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab696a0d33ab9e3cd41e4190253d12bfd0acdc497627c06e7902343eb992c445-merged.mount: Deactivated successfully.
Oct 14 09:34:29 compute-0 podman[407269]: 2025-10-14 09:34:29.802879949 +0000 UTC m=+1.457900328 container remove c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 09:34:29 compute-0 systemd[1]: libpod-conmon-c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65.scope: Deactivated successfully.
Oct 14 09:34:29 compute-0 sudo[407163]: pam_unix(sudo:session): session closed for user root
Oct 14 09:34:29 compute-0 ceph-mon[74249]: pgmap v2440: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 7.1 KiB/s wr, 55 op/s
Oct 14 09:34:29 compute-0 sudo[407327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:34:29 compute-0 sudo[407327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:34:29 compute-0 sudo[407327]: pam_unix(sudo:session): session closed for user root
Oct 14 09:34:30 compute-0 sudo[407352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:34:30 compute-0 sudo[407352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:34:30 compute-0 sudo[407352]: pam_unix(sudo:session): session closed for user root
Oct 14 09:34:30 compute-0 sudo[407377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:34:30 compute-0 sudo[407377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:34:30 compute-0 sudo[407377]: pam_unix(sudo:session): session closed for user root
Oct 14 09:34:30 compute-0 sudo[407402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:34:30 compute-0 sudo[407402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:34:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 7.1 KiB/s wr, 55 op/s
Oct 14 09:34:30 compute-0 podman[407469]: 2025-10-14 09:34:30.581224759 +0000 UTC m=+0.057878871 container create e5481875930041df5755cd75c78062aa9b8227876c73ad2d413b9ea51d59e2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:34:30 compute-0 systemd[1]: Started libpod-conmon-e5481875930041df5755cd75c78062aa9b8227876c73ad2d413b9ea51d59e2ca.scope.
Oct 14 09:34:30 compute-0 podman[407469]: 2025-10-14 09:34:30.555225681 +0000 UTC m=+0.031879843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:34:30 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:34:30 compute-0 podman[407469]: 2025-10-14 09:34:30.684401891 +0000 UTC m=+0.161056033 container init e5481875930041df5755cd75c78062aa9b8227876c73ad2d413b9ea51d59e2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:34:30 compute-0 podman[407469]: 2025-10-14 09:34:30.695327579 +0000 UTC m=+0.171981691 container start e5481875930041df5755cd75c78062aa9b8227876c73ad2d413b9ea51d59e2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_murdock, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:34:30 compute-0 podman[407469]: 2025-10-14 09:34:30.698804725 +0000 UTC m=+0.175458817 container attach e5481875930041df5755cd75c78062aa9b8227876c73ad2d413b9ea51d59e2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_murdock, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:34:30 compute-0 zen_murdock[407485]: 167 167
Oct 14 09:34:30 compute-0 systemd[1]: libpod-e5481875930041df5755cd75c78062aa9b8227876c73ad2d413b9ea51d59e2ca.scope: Deactivated successfully.
Oct 14 09:34:30 compute-0 podman[407469]: 2025-10-14 09:34:30.703218843 +0000 UTC m=+0.179872985 container died e5481875930041df5755cd75c78062aa9b8227876c73ad2d413b9ea51d59e2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_murdock, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 09:34:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-df0005df252c108518a7fa3b51a136eae93e44c122abeca4af18c03d50c98d97-merged.mount: Deactivated successfully.
Oct 14 09:34:30 compute-0 podman[407469]: 2025-10-14 09:34:30.752943183 +0000 UTC m=+0.229597295 container remove e5481875930041df5755cd75c78062aa9b8227876c73ad2d413b9ea51d59e2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_murdock, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:34:30 compute-0 systemd[1]: libpod-conmon-e5481875930041df5755cd75c78062aa9b8227876c73ad2d413b9ea51d59e2ca.scope: Deactivated successfully.
Oct 14 09:34:30 compute-0 nova_compute[259627]: 2025-10-14 09:34:30.773 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434455.7726548, 1a22f837-6095-4ccc-8e71-79e69b15bc5b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:34:30 compute-0 nova_compute[259627]: 2025-10-14 09:34:30.777 2 INFO nova.compute.manager [-] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] VM Stopped (Lifecycle Event)
Oct 14 09:34:30 compute-0 nova_compute[259627]: 2025-10-14 09:34:30.793 2 DEBUG nova.compute.manager [None req-8b3d1b52-59b4-46bd-93f9-a34fd22a1f3a - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:34:30 compute-0 podman[407507]: 2025-10-14 09:34:30.935893593 +0000 UTC m=+0.050043059 container create 90e7bc2bbe85813b03aaf1f0b01f0ea40d29d5ec420917942639df463bf3a88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 09:34:30 compute-0 systemd[1]: Started libpod-conmon-90e7bc2bbe85813b03aaf1f0b01f0ea40d29d5ec420917942639df463bf3a88a.scope.
Oct 14 09:34:31 compute-0 podman[407507]: 2025-10-14 09:34:30.911546185 +0000 UTC m=+0.025695691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:34:31 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c9e47383d6db986c0cc001a35518b16323a053de0ed54cbc28656122e24791/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c9e47383d6db986c0cc001a35518b16323a053de0ed54cbc28656122e24791/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c9e47383d6db986c0cc001a35518b16323a053de0ed54cbc28656122e24791/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c9e47383d6db986c0cc001a35518b16323a053de0ed54cbc28656122e24791/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:34:31 compute-0 podman[407507]: 2025-10-14 09:34:31.043197186 +0000 UTC m=+0.157346712 container init 90e7bc2bbe85813b03aaf1f0b01f0ea40d29d5ec420917942639df463bf3a88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 09:34:31 compute-0 nova_compute[259627]: 2025-10-14 09:34:31.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:31 compute-0 podman[407507]: 2025-10-14 09:34:31.062159411 +0000 UTC m=+0.176308907 container start 90e7bc2bbe85813b03aaf1f0b01f0ea40d29d5ec420917942639df463bf3a88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:34:31 compute-0 podman[407507]: 2025-10-14 09:34:31.066243462 +0000 UTC m=+0.180392998 container attach 90e7bc2bbe85813b03aaf1f0b01f0ea40d29d5ec420917942639df463bf3a88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:34:31 compute-0 romantic_jackson[407524]: {
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:     "0": [
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:         {
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "devices": [
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "/dev/loop3"
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             ],
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "lv_name": "ceph_lv0",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "lv_size": "21470642176",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "name": "ceph_lv0",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "tags": {
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.cluster_name": "ceph",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.crush_device_class": "",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.encrypted": "0",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.osd_id": "0",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.type": "block",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.vdo": "0"
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             },
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "type": "block",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "vg_name": "ceph_vg0"
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:         }
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:     ],
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:     "1": [
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:         {
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "devices": [
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "/dev/loop4"
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             ],
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "lv_name": "ceph_lv1",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "lv_size": "21470642176",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "name": "ceph_lv1",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "tags": {
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.cluster_name": "ceph",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.crush_device_class": "",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.encrypted": "0",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.osd_id": "1",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.type": "block",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.vdo": "0"
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             },
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "type": "block",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "vg_name": "ceph_vg1"
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:         }
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:     ],
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:     "2": [
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:         {
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "devices": [
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "/dev/loop5"
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             ],
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "lv_name": "ceph_lv2",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "lv_size": "21470642176",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "name": "ceph_lv2",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "tags": {
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.cluster_name": "ceph",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.crush_device_class": "",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.encrypted": "0",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.osd_id": "2",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.type": "block",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:                 "ceph.vdo": "0"
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             },
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "type": "block",
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:             "vg_name": "ceph_vg2"
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:         }
Oct 14 09:34:31 compute-0 romantic_jackson[407524]:     ]
Oct 14 09:34:31 compute-0 romantic_jackson[407524]: }
Oct 14 09:34:31 compute-0 systemd[1]: libpod-90e7bc2bbe85813b03aaf1f0b01f0ea40d29d5ec420917942639df463bf3a88a.scope: Deactivated successfully.
Oct 14 09:34:31 compute-0 podman[407507]: 2025-10-14 09:34:31.858895273 +0000 UTC m=+0.973044779 container died 90e7bc2bbe85813b03aaf1f0b01f0ea40d29d5ec420917942639df463bf3a88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jackson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 09:34:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3c9e47383d6db986c0cc001a35518b16323a053de0ed54cbc28656122e24791-merged.mount: Deactivated successfully.
Oct 14 09:34:31 compute-0 ceph-mon[74249]: pgmap v2441: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 7.1 KiB/s wr, 55 op/s
Oct 14 09:34:31 compute-0 podman[407507]: 2025-10-14 09:34:31.945686653 +0000 UTC m=+1.059836119 container remove 90e7bc2bbe85813b03aaf1f0b01f0ea40d29d5ec420917942639df463bf3a88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:34:31 compute-0 systemd[1]: libpod-conmon-90e7bc2bbe85813b03aaf1f0b01f0ea40d29d5ec420917942639df463bf3a88a.scope: Deactivated successfully.
Oct 14 09:34:31 compute-0 sudo[407402]: pam_unix(sudo:session): session closed for user root
Oct 14 09:34:32 compute-0 podman[407541]: 2025-10-14 09:34:32.012479192 +0000 UTC m=+0.101030780 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:34:32 compute-0 sudo[407577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:34:32 compute-0 sudo[407577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:34:32 compute-0 sudo[407577]: pam_unix(sudo:session): session closed for user root
Oct 14 09:34:32 compute-0 podman[407534]: 2025-10-14 09:34:32.111191315 +0000 UTC m=+0.204526120 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:34:32 compute-0 sudo[407608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:34:32 compute-0 sudo[407608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:34:32 compute-0 sudo[407608]: pam_unix(sudo:session): session closed for user root
Oct 14 09:34:32 compute-0 sudo[407633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:34:32 compute-0 sudo[407633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:34:32 compute-0 sudo[407633]: pam_unix(sudo:session): session closed for user root
Oct 14 09:34:32 compute-0 sudo[407658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:34:32 compute-0 sudo[407658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:34:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 32 op/s
Oct 14 09:34:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:34:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:34:32 compute-0 podman[407725]: 2025-10-14 09:34:32.782913879 +0000 UTC m=+0.062487304 container create d7bf53991094ce066ccc2a4009d12539b0c715062c0a4da9a5a0cc74edb6f079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wescoff, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:34:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:34:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:34:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:34:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:34:32 compute-0 systemd[1]: Started libpod-conmon-d7bf53991094ce066ccc2a4009d12539b0c715062c0a4da9a5a0cc74edb6f079.scope.
Oct 14 09:34:32 compute-0 podman[407725]: 2025-10-14 09:34:32.759535895 +0000 UTC m=+0.039109400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:34:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:34:32
Oct 14 09:34:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:34:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:34:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.control', 'volumes', '.rgw.root', 'images', 'vms', 'default.rgw.meta', '.mgr', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Oct 14 09:34:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:34:32 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:34:32 compute-0 podman[407725]: 2025-10-14 09:34:32.88157743 +0000 UTC m=+0.161150865 container init d7bf53991094ce066ccc2a4009d12539b0c715062c0a4da9a5a0cc74edb6f079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wescoff, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 09:34:32 compute-0 podman[407725]: 2025-10-14 09:34:32.893323579 +0000 UTC m=+0.172897004 container start d7bf53991094ce066ccc2a4009d12539b0c715062c0a4da9a5a0cc74edb6f079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wescoff, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 09:34:32 compute-0 podman[407725]: 2025-10-14 09:34:32.896595109 +0000 UTC m=+0.176168554 container attach d7bf53991094ce066ccc2a4009d12539b0c715062c0a4da9a5a0cc74edb6f079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wescoff, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:34:32 compute-0 hardcore_wescoff[407741]: 167 167
Oct 14 09:34:32 compute-0 systemd[1]: libpod-d7bf53991094ce066ccc2a4009d12539b0c715062c0a4da9a5a0cc74edb6f079.scope: Deactivated successfully.
Oct 14 09:34:32 compute-0 podman[407725]: 2025-10-14 09:34:32.900476084 +0000 UTC m=+0.180049539 container died d7bf53991094ce066ccc2a4009d12539b0c715062c0a4da9a5a0cc74edb6f079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 09:34:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea5385b7ae71cb3cb1a0c36f444ce84ab73620722125ecd6096f4326dc4848fc-merged.mount: Deactivated successfully.
Oct 14 09:34:32 compute-0 podman[407725]: 2025-10-14 09:34:32.938920887 +0000 UTC m=+0.218494312 container remove d7bf53991094ce066ccc2a4009d12539b0c715062c0a4da9a5a0cc74edb6f079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:34:32 compute-0 systemd[1]: libpod-conmon-d7bf53991094ce066ccc2a4009d12539b0c715062c0a4da9a5a0cc74edb6f079.scope: Deactivated successfully.
Oct 14 09:34:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:34:33 compute-0 podman[407765]: 2025-10-14 09:34:33.107452932 +0000 UTC m=+0.047908176 container create 655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 09:34:33 compute-0 systemd[1]: Started libpod-conmon-655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4.scope.
Oct 14 09:34:33 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e35abfabc94db8e5eccae1a7a71c9b6fce752cec4b76449f14159e866ca5325/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e35abfabc94db8e5eccae1a7a71c9b6fce752cec4b76449f14159e866ca5325/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e35abfabc94db8e5eccae1a7a71c9b6fce752cec4b76449f14159e866ca5325/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e35abfabc94db8e5eccae1a7a71c9b6fce752cec4b76449f14159e866ca5325/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:34:33 compute-0 podman[407765]: 2025-10-14 09:34:33.091540292 +0000 UTC m=+0.031995556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:34:33 compute-0 podman[407765]: 2025-10-14 09:34:33.190762887 +0000 UTC m=+0.131218201 container init 655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 09:34:33 compute-0 podman[407765]: 2025-10-14 09:34:33.202843353 +0000 UTC m=+0.143298617 container start 655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 09:34:33 compute-0 podman[407765]: 2025-10-14 09:34:33.207231041 +0000 UTC m=+0.147686365 container attach 655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 09:34:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:34:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:34:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:34:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:34:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:34:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:34:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:34:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:34:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:34:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:34:33 compute-0 nova_compute[259627]: 2025-10-14 09:34:33.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:33 compute-0 ceph-mon[74249]: pgmap v2442: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 32 op/s
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]: {
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:         "osd_id": 2,
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:         "type": "bluestore"
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:     },
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:         "osd_id": 1,
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:         "type": "bluestore"
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:     },
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:         "osd_id": 0,
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:         "type": "bluestore"
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]:     }
Oct 14 09:34:34 compute-0 trusting_bardeen[407782]: }
Oct 14 09:34:34 compute-0 systemd[1]: libpod-655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4.scope: Deactivated successfully.
Oct 14 09:34:34 compute-0 systemd[1]: libpod-655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4.scope: Consumed 1.035s CPU time.
Oct 14 09:34:34 compute-0 conmon[407782]: conmon 655e6b7f05d022203716 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4.scope/container/memory.events
Oct 14 09:34:34 compute-0 podman[407765]: 2025-10-14 09:34:34.229448406 +0000 UTC m=+1.169903650 container died 655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 09:34:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e35abfabc94db8e5eccae1a7a71c9b6fce752cec4b76449f14159e866ca5325-merged.mount: Deactivated successfully.
Oct 14 09:34:34 compute-0 podman[407765]: 2025-10-14 09:34:34.296819189 +0000 UTC m=+1.237274433 container remove 655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 09:34:34 compute-0 systemd[1]: libpod-conmon-655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4.scope: Deactivated successfully.
Oct 14 09:34:34 compute-0 sudo[407658]: pam_unix(sudo:session): session closed for user root
Oct 14 09:34:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:34:34 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:34:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:34:34 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:34:34 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev ff7b34cb-e596-4737-b577-14bb9ee63091 does not exist
Oct 14 09:34:34 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev de0c9931-b3e3-46eb-b8c1-1fc01af283d4 does not exist
Oct 14 09:34:34 compute-0 sudo[407828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:34:34 compute-0 sudo[407828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:34:34 compute-0 sudo[407828]: pam_unix(sudo:session): session closed for user root
Oct 14 09:34:34 compute-0 sudo[407853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:34:34 compute-0 sudo[407853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:34:34 compute-0 sudo[407853]: pam_unix(sudo:session): session closed for user root
Oct 14 09:34:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:34:34 compute-0 nova_compute[259627]: 2025-10-14 09:34:34.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:34 compute-0 nova_compute[259627]: 2025-10-14 09:34:34.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:35 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:34:35 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:34:35 compute-0 ceph-mon[74249]: pgmap v2443: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:34:36 compute-0 nova_compute[259627]: 2025-10-14 09:34:36.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:34:37 compute-0 ceph-mon[74249]: pgmap v2444: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:34:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:34:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:34:38 compute-0 nova_compute[259627]: 2025-10-14 09:34:38.691 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434463.6909003, aa3e17be-c995-4cab-b209-1eadaaff1634 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:34:38 compute-0 nova_compute[259627]: 2025-10-14 09:34:38.692 2 INFO nova.compute.manager [-] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] VM Stopped (Lifecycle Event)
Oct 14 09:34:38 compute-0 nova_compute[259627]: 2025-10-14 09:34:38.722 2 DEBUG nova.compute.manager [None req-b9167f18-7cf2-464c-8002-9786f27e265a - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:34:38 compute-0 nova_compute[259627]: 2025-10-14 09:34:38.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:39 compute-0 ceph-mon[74249]: pgmap v2445: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:34:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Oct 14 09:34:41 compute-0 nova_compute[259627]: 2025-10-14 09:34:41.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:41 compute-0 ceph-mon[74249]: pgmap v2446: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Oct 14 09:34:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Oct 14 09:34:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:34:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:34:43 compute-0 ceph-mon[74249]: pgmap v2447: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Oct 14 09:34:43 compute-0 nova_compute[259627]: 2025-10-14 09:34:43.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:43 compute-0 nova_compute[259627]: 2025-10-14 09:34:43.998 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:34:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Oct 14 09:34:45 compute-0 ceph-mon[74249]: pgmap v2448: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Oct 14 09:34:46 compute-0 nova_compute[259627]: 2025-10-14 09:34:46.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 09:34:47 compute-0 ceph-mon[74249]: pgmap v2449: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 09:34:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:34:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 09:34:48 compute-0 nova_compute[259627]: 2025-10-14 09:34:48.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:48 compute-0 nova_compute[259627]: 2025-10-14 09:34:48.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:34:48 compute-0 nova_compute[259627]: 2025-10-14 09:34:48.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:34:48 compute-0 nova_compute[259627]: 2025-10-14 09:34:48.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:34:49 compute-0 nova_compute[259627]: 2025-10-14 09:34:49.014 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:34:49 compute-0 nova_compute[259627]: 2025-10-14 09:34:49.014 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:34:49 compute-0 nova_compute[259627]: 2025-10-14 09:34:49.014 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:34:49 compute-0 nova_compute[259627]: 2025-10-14 09:34:49.015 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:34:49 compute-0 nova_compute[259627]: 2025-10-14 09:34:49.015 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:34:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:49.355 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:34:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:49.357 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:34:49 compute-0 nova_compute[259627]: 2025-10-14 09:34:49.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:34:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2510582438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:34:49 compute-0 nova_compute[259627]: 2025-10-14 09:34:49.588 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:34:49 compute-0 ceph-mon[74249]: pgmap v2450: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 09:34:49 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2510582438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:34:49 compute-0 nova_compute[259627]: 2025-10-14 09:34:49.851 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:34:49 compute-0 nova_compute[259627]: 2025-10-14 09:34:49.854 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3620MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:34:49 compute-0 nova_compute[259627]: 2025-10-14 09:34:49.854 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:34:49 compute-0 nova_compute[259627]: 2025-10-14 09:34:49.855 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:34:49 compute-0 nova_compute[259627]: 2025-10-14 09:34:49.954 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:34:49 compute-0 nova_compute[259627]: 2025-10-14 09:34:49.954 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:34:49 compute-0 nova_compute[259627]: 2025-10-14 09:34:49.975 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 14 09:34:50 compute-0 nova_compute[259627]: 2025-10-14 09:34:50.009 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 14 09:34:50 compute-0 nova_compute[259627]: 2025-10-14 09:34:50.010 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 09:34:50 compute-0 nova_compute[259627]: 2025-10-14 09:34:50.030 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 14 09:34:50 compute-0 nova_compute[259627]: 2025-10-14 09:34:50.059 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 14 09:34:50 compute-0 nova_compute[259627]: 2025-10-14 09:34:50.084 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:34:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:34:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/73745318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:34:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 09:34:50 compute-0 nova_compute[259627]: 2025-10-14 09:34:50.549 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:34:50 compute-0 nova_compute[259627]: 2025-10-14 09:34:50.560 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:34:50 compute-0 nova_compute[259627]: 2025-10-14 09:34:50.576 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:34:50 compute-0 nova_compute[259627]: 2025-10-14 09:34:50.603 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:34:50 compute-0 nova_compute[259627]: 2025-10-14 09:34:50.603 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:34:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/73745318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:34:50 compute-0 podman[407925]: 2025-10-14 09:34:50.669100971 +0000 UTC m=+0.077523983 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3)
Oct 14 09:34:50 compute-0 podman[407924]: 2025-10-14 09:34:50.688114748 +0000 UTC m=+0.090445061 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:34:51 compute-0 nova_compute[259627]: 2025-10-14 09:34:51.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:51 compute-0 ceph-mon[74249]: pgmap v2451: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 09:34:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Oct 14 09:34:52 compute-0 nova_compute[259627]: 2025-10-14 09:34:52.600 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:34:52 compute-0 nova_compute[259627]: 2025-10-14 09:34:52.600 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:34:52 compute-0 nova_compute[259627]: 2025-10-14 09:34:52.601 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:34:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:34:53 compute-0 ceph-mon[74249]: pgmap v2452: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Oct 14 09:34:53 compute-0 nova_compute[259627]: 2025-10-14 09:34:53.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:53 compute-0 nova_compute[259627]: 2025-10-14 09:34:53.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:34:53 compute-0 nova_compute[259627]: 2025-10-14 09:34:53.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:34:54 compute-0 nova_compute[259627]: 2025-10-14 09:34:54.013 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:34:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 14 09:34:55 compute-0 ceph-mon[74249]: pgmap v2453: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 14 09:34:56 compute-0 nova_compute[259627]: 2025-10-14 09:34:56.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:56.460 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:2e:6c 2001:db8:0:1:f816:3eff:fe38:2e6c 2001:db8::f816:3eff:fe38:2e6c'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe38:2e6c/64 2001:db8::f816:3eff:fe38:2e6c/64', 'neutron:device_id': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=612e2e91-84a5-412c-996c-976c512895aa, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1caef19e-e76a-434d-84c0-dc762554a564) old=Port_Binding(mac=['fa:16:3e:38:2e:6c 2001:db8::f816:3eff:fe38:2e6c'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe38:2e6c/64', 'neutron:device_id': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:34:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:56.462 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1caef19e-e76a-434d-84c0-dc762554a564 in datapath 0d3b36cd-f345-4ba4-8ea7-29b299ab0543 updated
Oct 14 09:34:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:56.464 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0d3b36cd-f345-4ba4-8ea7-29b299ab0543, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:34:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:56.465 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d280a9-63fa-47db-9f25-c3b59625e780]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:34:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 14 09:34:56 compute-0 nova_compute[259627]: 2025-10-14 09:34:56.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:34:57 compute-0 ceph-mon[74249]: pgmap v2454: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 14 09:34:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:34:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.711831) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434498711876, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2056, "num_deletes": 251, "total_data_size": 3375488, "memory_usage": 3433976, "flush_reason": "Manual Compaction"}
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434498734730, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3319864, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49806, "largest_seqno": 51861, "table_properties": {"data_size": 3310506, "index_size": 5916, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18957, "raw_average_key_size": 20, "raw_value_size": 3291963, "raw_average_value_size": 3502, "num_data_blocks": 262, "num_entries": 940, "num_filter_entries": 940, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434273, "oldest_key_time": 1760434273, "file_creation_time": 1760434498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 22968 microseconds, and 15065 cpu microseconds.
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.734796) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3319864 bytes OK
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.734828) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.736902) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.736926) EVENT_LOG_v1 {"time_micros": 1760434498736918, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.736952) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3366869, prev total WAL file size 3366869, number of live WAL files 2.
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.738590) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3242KB)], [116(8158KB)]
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434498738656, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 11674677, "oldest_snapshot_seqno": -1}
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7399 keys, 9956735 bytes, temperature: kUnknown
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434498799693, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 9956735, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9907561, "index_size": 29542, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18565, "raw_key_size": 191545, "raw_average_key_size": 25, "raw_value_size": 9775688, "raw_average_value_size": 1321, "num_data_blocks": 1156, "num_entries": 7399, "num_filter_entries": 7399, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.800127) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 9956735 bytes
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.801751) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.0 rd, 162.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.0 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 7913, records dropped: 514 output_compression: NoCompression
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.801782) EVENT_LOG_v1 {"time_micros": 1760434498801767, "job": 70, "event": "compaction_finished", "compaction_time_micros": 61134, "compaction_time_cpu_micros": 43736, "output_level": 6, "num_output_files": 1, "total_output_size": 9956735, "num_input_records": 7913, "num_output_records": 7399, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434498803238, "job": 70, "event": "table_file_deletion", "file_number": 118}
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434498806342, "job": 70, "event": "table_file_deletion", "file_number": 116}
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.738421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.806406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.806411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.806413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.806415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:34:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.806417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:34:58 compute-0 nova_compute[259627]: 2025-10-14 09:34:58.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:34:58 compute-0 nova_compute[259627]: 2025-10-14 09:34:58.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:34:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:34:59.359 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:34:59 compute-0 ceph-mon[74249]: pgmap v2455: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:35:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:35:01 compute-0 nova_compute[259627]: 2025-10-14 09:35:01.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:01 compute-0 ceph-mon[74249]: pgmap v2456: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:35:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:35:02 compute-0 podman[407967]: 2025-10-14 09:35:02.679299947 +0000 UTC m=+0.088569794 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:35:02 compute-0 podman[407966]: 2025-10-14 09:35:02.709249512 +0000 UTC m=+0.119761970 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller)
Oct 14 09:35:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:35:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:35:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:35:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:35:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:35:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:35:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:35:03 compute-0 ceph-mon[74249]: pgmap v2457: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:35:03 compute-0 nova_compute[259627]: 2025-10-14 09:35:03.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:03 compute-0 nova_compute[259627]: 2025-10-14 09:35:03.846 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:03 compute-0 nova_compute[259627]: 2025-10-14 09:35:03.847 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:03 compute-0 nova_compute[259627]: 2025-10-14 09:35:03.861 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:35:03 compute-0 nova_compute[259627]: 2025-10-14 09:35:03.956 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:03 compute-0 nova_compute[259627]: 2025-10-14 09:35:03.957 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:03 compute-0 nova_compute[259627]: 2025-10-14 09:35:03.967 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:35:03 compute-0 nova_compute[259627]: 2025-10-14 09:35:03.968 2 INFO nova.compute.claims [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:35:04 compute-0 nova_compute[259627]: 2025-10-14 09:35:04.085 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:35:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:35:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:35:04 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1292228845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:35:04 compute-0 nova_compute[259627]: 2025-10-14 09:35:04.577 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:35:04 compute-0 nova_compute[259627]: 2025-10-14 09:35:04.585 2 DEBUG nova.compute.provider_tree [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:35:04 compute-0 nova_compute[259627]: 2025-10-14 09:35:04.601 2 DEBUG nova.scheduler.client.report [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:35:04 compute-0 nova_compute[259627]: 2025-10-14 09:35:04.624 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:04 compute-0 nova_compute[259627]: 2025-10-14 09:35:04.624 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:35:04 compute-0 nova_compute[259627]: 2025-10-14 09:35:04.682 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:35:04 compute-0 nova_compute[259627]: 2025-10-14 09:35:04.682 2 DEBUG nova.network.neutron [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:35:04 compute-0 nova_compute[259627]: 2025-10-14 09:35:04.707 2 INFO nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:35:04 compute-0 nova_compute[259627]: 2025-10-14 09:35:04.726 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:35:04 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1292228845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:35:04 compute-0 nova_compute[259627]: 2025-10-14 09:35:04.836 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:35:04 compute-0 nova_compute[259627]: 2025-10-14 09:35:04.838 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:35:04 compute-0 nova_compute[259627]: 2025-10-14 09:35:04.839 2 INFO nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Creating image(s)
Oct 14 09:35:04 compute-0 nova_compute[259627]: 2025-10-14 09:35:04.872 2 DEBUG nova.storage.rbd_utils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:35:04 compute-0 nova_compute[259627]: 2025-10-14 09:35:04.908 2 DEBUG nova.storage.rbd_utils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:35:04 compute-0 nova_compute[259627]: 2025-10-14 09:35:04.943 2 DEBUG nova.storage.rbd_utils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:35:04 compute-0 nova_compute[259627]: 2025-10-14 09:35:04.948 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:35:05 compute-0 nova_compute[259627]: 2025-10-14 09:35:05.058 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:35:05 compute-0 nova_compute[259627]: 2025-10-14 09:35:05.060 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:05 compute-0 nova_compute[259627]: 2025-10-14 09:35:05.061 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:05 compute-0 nova_compute[259627]: 2025-10-14 09:35:05.061 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:05 compute-0 nova_compute[259627]: 2025-10-14 09:35:05.094 2 DEBUG nova.storage.rbd_utils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:35:05 compute-0 nova_compute[259627]: 2025-10-14 09:35:05.101 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:35:05 compute-0 nova_compute[259627]: 2025-10-14 09:35:05.158 2 DEBUG nova.policy [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:35:05 compute-0 nova_compute[259627]: 2025-10-14 09:35:05.414 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:35:05 compute-0 nova_compute[259627]: 2025-10-14 09:35:05.498 2 DEBUG nova.storage.rbd_utils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:35:05 compute-0 nova_compute[259627]: 2025-10-14 09:35:05.595 2 DEBUG nova.objects.instance [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid c977bdc6-8dd7-4cb4-b50d-28e7313a16e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:35:05 compute-0 nova_compute[259627]: 2025-10-14 09:35:05.641 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:35:05 compute-0 nova_compute[259627]: 2025-10-14 09:35:05.642 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Ensure instance console log exists: /var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:35:05 compute-0 nova_compute[259627]: 2025-10-14 09:35:05.643 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:05 compute-0 nova_compute[259627]: 2025-10-14 09:35:05.644 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:05 compute-0 nova_compute[259627]: 2025-10-14 09:35:05.644 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:35:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1790611074' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:35:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:35:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1790611074' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:35:05 compute-0 ceph-mon[74249]: pgmap v2458: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:35:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1790611074' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:35:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1790611074' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:35:06 compute-0 nova_compute[259627]: 2025-10-14 09:35:06.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 65 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 MiB/s wr, 21 op/s
Oct 14 09:35:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:07.051 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:07.052 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:07.052 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:07 compute-0 nova_compute[259627]: 2025-10-14 09:35:07.257 2 DEBUG nova.network.neutron [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Successfully created port: 13d4f68b-234a-4c46-9e1d-79f28a907bf2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:35:07 compute-0 ceph-mon[74249]: pgmap v2459: 305 pgs: 305 active+clean; 65 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 MiB/s wr, 21 op/s
Oct 14 09:35:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:35:08 compute-0 nova_compute[259627]: 2025-10-14 09:35:08.118 2 DEBUG nova.network.neutron [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Successfully created port: b2457aed-ba7c-4d69-b93d-9f4c98e456b2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:35:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 65 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 MiB/s wr, 21 op/s
Oct 14 09:35:08 compute-0 nova_compute[259627]: 2025-10-14 09:35:08.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:09 compute-0 nova_compute[259627]: 2025-10-14 09:35:09.299 2 DEBUG nova.network.neutron [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Successfully updated port: 13d4f68b-234a-4c46-9e1d-79f28a907bf2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:35:09 compute-0 nova_compute[259627]: 2025-10-14 09:35:09.440 2 DEBUG nova.compute.manager [req-c879878c-4841-47de-8381-417bc7345cba req-874074e5-35c9-40d9-a417-242191317c03 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-changed-13d4f68b-234a-4c46-9e1d-79f28a907bf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:35:09 compute-0 nova_compute[259627]: 2025-10-14 09:35:09.441 2 DEBUG nova.compute.manager [req-c879878c-4841-47de-8381-417bc7345cba req-874074e5-35c9-40d9-a417-242191317c03 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Refreshing instance network info cache due to event network-changed-13d4f68b-234a-4c46-9e1d-79f28a907bf2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:35:09 compute-0 nova_compute[259627]: 2025-10-14 09:35:09.441 2 DEBUG oslo_concurrency.lockutils [req-c879878c-4841-47de-8381-417bc7345cba req-874074e5-35c9-40d9-a417-242191317c03 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:35:09 compute-0 nova_compute[259627]: 2025-10-14 09:35:09.442 2 DEBUG oslo_concurrency.lockutils [req-c879878c-4841-47de-8381-417bc7345cba req-874074e5-35c9-40d9-a417-242191317c03 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:35:09 compute-0 nova_compute[259627]: 2025-10-14 09:35:09.442 2 DEBUG nova.network.neutron [req-c879878c-4841-47de-8381-417bc7345cba req-874074e5-35c9-40d9-a417-242191317c03 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Refreshing network info cache for port 13d4f68b-234a-4c46-9e1d-79f28a907bf2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:35:09 compute-0 nova_compute[259627]: 2025-10-14 09:35:09.719 2 DEBUG nova.network.neutron [req-c879878c-4841-47de-8381-417bc7345cba req-874074e5-35c9-40d9-a417-242191317c03 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:35:09 compute-0 ceph-mon[74249]: pgmap v2460: 305 pgs: 305 active+clean; 65 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 MiB/s wr, 21 op/s
Oct 14 09:35:10 compute-0 nova_compute[259627]: 2025-10-14 09:35:10.100 2 DEBUG nova.network.neutron [req-c879878c-4841-47de-8381-417bc7345cba req-874074e5-35c9-40d9-a417-242191317c03 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:35:10 compute-0 nova_compute[259627]: 2025-10-14 09:35:10.115 2 DEBUG oslo_concurrency.lockutils [req-c879878c-4841-47de-8381-417bc7345cba req-874074e5-35c9-40d9-a417-242191317c03 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:35:10 compute-0 nova_compute[259627]: 2025-10-14 09:35:10.243 2 DEBUG nova.network.neutron [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Successfully updated port: b2457aed-ba7c-4d69-b93d-9f4c98e456b2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:35:10 compute-0 nova_compute[259627]: 2025-10-14 09:35:10.259 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:35:10 compute-0 nova_compute[259627]: 2025-10-14 09:35:10.260 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:35:10 compute-0 nova_compute[259627]: 2025-10-14 09:35:10.260 2 DEBUG nova.network.neutron [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:35:10 compute-0 nova_compute[259627]: 2025-10-14 09:35:10.392 2 DEBUG nova.network.neutron [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:35:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:35:11 compute-0 nova_compute[259627]: 2025-10-14 09:35:11.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:11 compute-0 nova_compute[259627]: 2025-10-14 09:35:11.530 2 DEBUG nova.compute.manager [req-b05eec00-2867-4265-8344-4cac23eef77e req-dd2480ea-d713-4356-a20c-956ad4b80b0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-changed-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:35:11 compute-0 nova_compute[259627]: 2025-10-14 09:35:11.530 2 DEBUG nova.compute.manager [req-b05eec00-2867-4265-8344-4cac23eef77e req-dd2480ea-d713-4356-a20c-956ad4b80b0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Refreshing instance network info cache due to event network-changed-b2457aed-ba7c-4d69-b93d-9f4c98e456b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:35:11 compute-0 nova_compute[259627]: 2025-10-14 09:35:11.530 2 DEBUG oslo_concurrency.lockutils [req-b05eec00-2867-4265-8344-4cac23eef77e req-dd2480ea-d713-4356-a20c-956ad4b80b0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:35:11 compute-0 ceph-mon[74249]: pgmap v2461: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:35:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.066 2 DEBUG nova.network.neutron [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updating instance_info_cache with network_info: [{"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.096 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.096 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Instance network_info: |[{"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.097 2 DEBUG oslo_concurrency.lockutils [req-b05eec00-2867-4265-8344-4cac23eef77e req-dd2480ea-d713-4356-a20c-956ad4b80b0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.098 2 DEBUG nova.network.neutron [req-b05eec00-2867-4265-8344-4cac23eef77e req-dd2480ea-d713-4356-a20c-956ad4b80b0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Refreshing network info cache for port b2457aed-ba7c-4d69-b93d-9f4c98e456b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.104 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Start _get_guest_xml network_info=[{"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:35:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.111 2 WARNING nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.116 2 DEBUG nova.virt.libvirt.host [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.117 2 DEBUG nova.virt.libvirt.host [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.127 2 DEBUG nova.virt.libvirt.host [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.128 2 DEBUG nova.virt.libvirt.host [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.128 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.129 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.129 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.130 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.130 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.131 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.131 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.132 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.132 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.132 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.133 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.133 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.139 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:35:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:35:13 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3535914132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.625 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.657 2 DEBUG nova.storage.rbd_utils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.661 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:35:13 compute-0 ceph-mon[74249]: pgmap v2462: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:35:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3535914132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:35:13 compute-0 nova_compute[259627]: 2025-10-14 09:35:13.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:35:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1264708059' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.152 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.154 2 DEBUG nova.virt.libvirt.vif [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:35:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1125974325',display_name='tempest-TestGettingAddress-server-1125974325',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1125974325',id=144,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-55lasja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:35:04Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c977bdc6-8dd7-4cb4-b50d-28e7313a16e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.155 2 DEBUG nova.network.os_vif_util [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.157 2 DEBUG nova.network.os_vif_util [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:ad:d7,bridge_name='br-int',has_traffic_filtering=True,id=13d4f68b-234a-4c46-9e1d-79f28a907bf2,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d4f68b-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.158 2 DEBUG nova.virt.libvirt.vif [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:35:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1125974325',display_name='tempest-TestGettingAddress-server-1125974325',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1125974325',id=144,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-55lasja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:35:04Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c977bdc6-8dd7-4cb4-b50d-28e7313a16e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.159 2 DEBUG nova.network.os_vif_util [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.161 2 DEBUG nova.network.os_vif_util [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:2d:2e,bridge_name='br-int',has_traffic_filtering=True,id=b2457aed-ba7c-4d69-b93d-9f4c98e456b2,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2457aed-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.163 2 DEBUG nova.objects.instance [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid c977bdc6-8dd7-4cb4-b50d-28e7313a16e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.186 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:35:14 compute-0 nova_compute[259627]:   <uuid>c977bdc6-8dd7-4cb4-b50d-28e7313a16e8</uuid>
Oct 14 09:35:14 compute-0 nova_compute[259627]:   <name>instance-00000090</name>
Oct 14 09:35:14 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:35:14 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:35:14 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <nova:name>tempest-TestGettingAddress-server-1125974325</nova:name>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:35:13</nova:creationTime>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:35:14 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:35:14 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:35:14 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:35:14 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:35:14 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:35:14 compute-0 nova_compute[259627]:         <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 09:35:14 compute-0 nova_compute[259627]:         <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:35:14 compute-0 nova_compute[259627]:         <nova:port uuid="13d4f68b-234a-4c46-9e1d-79f28a907bf2">
Oct 14 09:35:14 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:35:14 compute-0 nova_compute[259627]:         <nova:port uuid="b2457aed-ba7c-4d69-b93d-9f4c98e456b2">
Oct 14 09:35:14 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe4e:2d2e" ipVersion="6"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe4e:2d2e" ipVersion="6"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:35:14 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:35:14 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <system>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <entry name="serial">c977bdc6-8dd7-4cb4-b50d-28e7313a16e8</entry>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <entry name="uuid">c977bdc6-8dd7-4cb4-b50d-28e7313a16e8</entry>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     </system>
Oct 14 09:35:14 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:35:14 compute-0 nova_compute[259627]:   <os>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:   </os>
Oct 14 09:35:14 compute-0 nova_compute[259627]:   <features>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:   </features>
Oct 14 09:35:14 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:35:14 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:35:14 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk">
Oct 14 09:35:14 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       </source>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:35:14 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk.config">
Oct 14 09:35:14 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       </source>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:35:14 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:2a:ad:d7"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <target dev="tap13d4f68b-23"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:4e:2d:2e"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <target dev="tapb2457aed-ba"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8/console.log" append="off"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <video>
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     </video>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:35:14 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:35:14 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:35:14 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:35:14 compute-0 nova_compute[259627]: </domain>
Oct 14 09:35:14 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.188 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Preparing to wait for external event network-vif-plugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.189 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.190 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.190 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.190 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Preparing to wait for external event network-vif-plugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.191 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.191 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.192 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.193 2 DEBUG nova.virt.libvirt.vif [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:35:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1125974325',display_name='tempest-TestGettingAddress-server-1125974325',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1125974325',id=144,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-55lasja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:35:04Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c977bdc6-8dd7-4cb4-b50d-28e7313a16e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.194 2 DEBUG nova.network.os_vif_util [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.195 2 DEBUG nova.network.os_vif_util [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:ad:d7,bridge_name='br-int',has_traffic_filtering=True,id=13d4f68b-234a-4c46-9e1d-79f28a907bf2,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d4f68b-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.196 2 DEBUG os_vif [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:ad:d7,bridge_name='br-int',has_traffic_filtering=True,id=13d4f68b-234a-4c46-9e1d-79f28a907bf2,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d4f68b-23') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.200 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.206 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13d4f68b-23, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.207 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap13d4f68b-23, col_values=(('external_ids', {'iface-id': '13d4f68b-234a-4c46-9e1d-79f28a907bf2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:ad:d7', 'vm-uuid': 'c977bdc6-8dd7-4cb4-b50d-28e7313a16e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:14 compute-0 NetworkManager[44885]: <info>  [1760434514.2107] manager: (tap13d4f68b-23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/633)
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.219 2 INFO os_vif [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:ad:d7,bridge_name='br-int',has_traffic_filtering=True,id=13d4f68b-234a-4c46-9e1d-79f28a907bf2,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d4f68b-23')
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.221 2 DEBUG nova.virt.libvirt.vif [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:35:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1125974325',display_name='tempest-TestGettingAddress-server-1125974325',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1125974325',id=144,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-55lasja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:35:04Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c977bdc6-8dd7-4cb4-b50d-28e7313a16e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.221 2 DEBUG nova.network.os_vif_util [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.223 2 DEBUG nova.network.os_vif_util [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:2d:2e,bridge_name='br-int',has_traffic_filtering=True,id=b2457aed-ba7c-4d69-b93d-9f4c98e456b2,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2457aed-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.227 2 DEBUG os_vif [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:2d:2e,bridge_name='br-int',has_traffic_filtering=True,id=b2457aed-ba7c-4d69-b93d-9f4c98e456b2,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2457aed-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.228 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.229 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.233 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2457aed-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.233 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb2457aed-ba, col_values=(('external_ids', {'iface-id': 'b2457aed-ba7c-4d69-b93d-9f4c98e456b2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4e:2d:2e', 'vm-uuid': 'c977bdc6-8dd7-4cb4-b50d-28e7313a16e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:14 compute-0 NetworkManager[44885]: <info>  [1760434514.2372] manager: (tapb2457aed-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/634)
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.245 2 INFO os_vif [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:2d:2e,bridge_name='br-int',has_traffic_filtering=True,id=b2457aed-ba7c-4d69-b93d-9f4c98e456b2,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2457aed-ba')
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.305 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.305 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.305 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:2a:ad:d7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.306 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:4e:2d:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.306 2 INFO nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Using config drive
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.331 2 DEBUG nova.storage.rbd_utils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:35:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.683 2 INFO nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Creating config drive at /var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8/disk.config
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.687 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpygrw41k6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:35:14 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1264708059' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.841 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpygrw41k6" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.871 2 DEBUG nova.storage.rbd_utils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.876 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8/disk.config c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.997 2 DEBUG nova.network.neutron [req-b05eec00-2867-4265-8344-4cac23eef77e req-dd2480ea-d713-4356-a20c-956ad4b80b0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updated VIF entry in instance network info cache for port b2457aed-ba7c-4d69-b93d-9f4c98e456b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:35:14 compute-0 nova_compute[259627]: 2025-10-14 09:35:14.998 2 DEBUG nova.network.neutron [req-b05eec00-2867-4265-8344-4cac23eef77e req-dd2480ea-d713-4356-a20c-956ad4b80b0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updating instance_info_cache with network_info: [{"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:35:15 compute-0 nova_compute[259627]: 2025-10-14 09:35:15.021 2 DEBUG oslo_concurrency.lockutils [req-b05eec00-2867-4265-8344-4cac23eef77e req-dd2480ea-d713-4356-a20c-956ad4b80b0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:35:15 compute-0 nova_compute[259627]: 2025-10-14 09:35:15.082 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8/disk.config c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:35:15 compute-0 nova_compute[259627]: 2025-10-14 09:35:15.083 2 INFO nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Deleting local config drive /var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8/disk.config because it was imported into RBD.
Oct 14 09:35:15 compute-0 kernel: tap13d4f68b-23: entered promiscuous mode
Oct 14 09:35:15 compute-0 NetworkManager[44885]: <info>  [1760434515.1606] manager: (tap13d4f68b-23): new Tun device (/org/freedesktop/NetworkManager/Devices/635)
Oct 14 09:35:15 compute-0 nova_compute[259627]: 2025-10-14 09:35:15.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:15 compute-0 ovn_controller[152662]: 2025-10-14T09:35:15Z|01561|binding|INFO|Claiming lport 13d4f68b-234a-4c46-9e1d-79f28a907bf2 for this chassis.
Oct 14 09:35:15 compute-0 ovn_controller[152662]: 2025-10-14T09:35:15Z|01562|binding|INFO|13d4f68b-234a-4c46-9e1d-79f28a907bf2: Claiming fa:16:3e:2a:ad:d7 10.100.0.5
Oct 14 09:35:15 compute-0 kernel: tapb2457aed-ba: entered promiscuous mode
Oct 14 09:35:15 compute-0 NetworkManager[44885]: <info>  [1760434515.1897] manager: (tapb2457aed-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/636)
Oct 14 09:35:15 compute-0 systemd-udevd[408335]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:35:15 compute-0 systemd-udevd[408334]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.197 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:ad:d7 10.100.0.5'], port_security=['fa:16:3e:2a:ad:d7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c977bdc6-8dd7-4cb4-b50d-28e7313a16e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4ac22ee6-a60e-44c3-8db4-e0f5bf22a77e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09c5c036-dfc9-4826-bc59-c008b28bd97a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=13d4f68b-234a-4c46-9e1d-79f28a907bf2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.200 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 13d4f68b-234a-4c46-9e1d-79f28a907bf2 in datapath 9cb6ee52-3808-410f-9854-68ac8ffadab8 bound to our chassis
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.202 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9cb6ee52-3808-410f-9854-68ac8ffadab8
Oct 14 09:35:15 compute-0 NetworkManager[44885]: <info>  [1760434515.2047] device (tap13d4f68b-23): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:35:15 compute-0 NetworkManager[44885]: <info>  [1760434515.2059] device (tap13d4f68b-23): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:35:15 compute-0 NetworkManager[44885]: <info>  [1760434515.2086] device (tapb2457aed-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:35:15 compute-0 NetworkManager[44885]: <info>  [1760434515.2101] device (tapb2457aed-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.222 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[56f1a2c1-5ec0-49b4-be9b-6966c116dcb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.223 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9cb6ee52-31 in ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.225 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9cb6ee52-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.226 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a70c541c-e9bd-4312-81ce-5298f48a9701]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.227 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f21fea8f-866c-40cd-8eac-8d15c8453f85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.250 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[37b5520c-6274-4606-a98b-6a7ba23c008c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:15 compute-0 systemd-machined[214636]: New machine qemu-177-instance-00000090.
Oct 14 09:35:15 compute-0 nova_compute[259627]: 2025-10-14 09:35:15.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:15 compute-0 ovn_controller[152662]: 2025-10-14T09:35:15Z|01563|binding|INFO|Claiming lport b2457aed-ba7c-4d69-b93d-9f4c98e456b2 for this chassis.
Oct 14 09:35:15 compute-0 ovn_controller[152662]: 2025-10-14T09:35:15Z|01564|binding|INFO|b2457aed-ba7c-4d69-b93d-9f4c98e456b2: Claiming fa:16:3e:4e:2d:2e 2001:db8:0:1:f816:3eff:fe4e:2d2e 2001:db8::f816:3eff:fe4e:2d2e
Oct 14 09:35:15 compute-0 systemd[1]: Started Virtual Machine qemu-177-instance-00000090.
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.285 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[57feb7ef-cc27-4eb0-93a6-0b555c43ae99]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:15 compute-0 nova_compute[259627]: 2025-10-14 09:35:15.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.294 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:2d:2e 2001:db8:0:1:f816:3eff:fe4e:2d2e 2001:db8::f816:3eff:fe4e:2d2e'], port_security=['fa:16:3e:4e:2d:2e 2001:db8:0:1:f816:3eff:fe4e:2d2e 2001:db8::f816:3eff:fe4e:2d2e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe4e:2d2e/64 2001:db8::f816:3eff:fe4e:2d2e/64', 'neutron:device_id': 'c977bdc6-8dd7-4cb4-b50d-28e7313a16e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4ac22ee6-a60e-44c3-8db4-e0f5bf22a77e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=612e2e91-84a5-412c-996c-976c512895aa, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b2457aed-ba7c-4d69-b93d-9f4c98e456b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:35:15 compute-0 ovn_controller[152662]: 2025-10-14T09:35:15Z|01565|binding|INFO|Setting lport 13d4f68b-234a-4c46-9e1d-79f28a907bf2 ovn-installed in OVS
Oct 14 09:35:15 compute-0 ovn_controller[152662]: 2025-10-14T09:35:15Z|01566|binding|INFO|Setting lport 13d4f68b-234a-4c46-9e1d-79f28a907bf2 up in Southbound
Oct 14 09:35:15 compute-0 nova_compute[259627]: 2025-10-14 09:35:15.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:15 compute-0 ovn_controller[152662]: 2025-10-14T09:35:15Z|01567|binding|INFO|Setting lport b2457aed-ba7c-4d69-b93d-9f4c98e456b2 ovn-installed in OVS
Oct 14 09:35:15 compute-0 ovn_controller[152662]: 2025-10-14T09:35:15Z|01568|binding|INFO|Setting lport b2457aed-ba7c-4d69-b93d-9f4c98e456b2 up in Southbound
Oct 14 09:35:15 compute-0 nova_compute[259627]: 2025-10-14 09:35:15.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.335 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[77a4e1f2-5a67-4255-af71-3e0176a0d059]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:15 compute-0 NetworkManager[44885]: <info>  [1760434515.3435] manager: (tap9cb6ee52-30): new Veth device (/org/freedesktop/NetworkManager/Devices/637)
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.342 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5452685f-70ee-404b-8e02-ab2262527556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.376 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d3a1df-17d9-441b-885f-67ce2380b8f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.381 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d94db442-64a8-4fc5-b0dc-97ab2d05725a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:15 compute-0 NetworkManager[44885]: <info>  [1760434515.4021] device (tap9cb6ee52-30): carrier: link connected
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.408 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5e7a1b-f4aa-48c3-9dc9-5a622236ac47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.432 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f99321-698f-432b-8f7b-5744d3f3b1ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9cb6ee52-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:65:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829416, 'reachable_time': 37698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408372, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.455 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d6fd1541-1676-45f6-97cf-04b8afeb340e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:6535'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829416, 'tstamp': 829416}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408373, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.479 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2631da82-02bb-4b6b-ab84-e63fc0a79ee7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9cb6ee52-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:65:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829416, 'reachable_time': 37698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 408374, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.517 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c0b7802-e2ba-4e7a-8135-86524bf0545f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.582 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ce3a48da-3a65-4d2a-8625-45a4283ee4b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.584 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cb6ee52-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.584 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.585 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9cb6ee52-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:15 compute-0 kernel: tap9cb6ee52-30: entered promiscuous mode
Oct 14 09:35:15 compute-0 nova_compute[259627]: 2025-10-14 09:35:15.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:15 compute-0 NetworkManager[44885]: <info>  [1760434515.5875] manager: (tap9cb6ee52-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/638)
Oct 14 09:35:15 compute-0 nova_compute[259627]: 2025-10-14 09:35:15.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.590 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9cb6ee52-30, col_values=(('external_ids', {'iface-id': 'e170cfc7-b9d3-441b-8041-f001d366d5cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:15 compute-0 ovn_controller[152662]: 2025-10-14T09:35:15Z|01569|binding|INFO|Releasing lport e170cfc7-b9d3-441b-8041-f001d366d5cd from this chassis (sb_readonly=0)
Oct 14 09:35:15 compute-0 nova_compute[259627]: 2025-10-14 09:35:15.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:15 compute-0 nova_compute[259627]: 2025-10-14 09:35:15.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:15 compute-0 nova_compute[259627]: 2025-10-14 09:35:15.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.614 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9cb6ee52-3808-410f-9854-68ac8ffadab8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9cb6ee52-3808-410f-9854-68ac8ffadab8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.615 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c903f56a-eb59-4b6b-a51e-2df312f702df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.616 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-9cb6ee52-3808-410f-9854-68ac8ffadab8
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/9cb6ee52-3808-410f-9854-68ac8ffadab8.pid.haproxy
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 9cb6ee52-3808-410f-9854-68ac8ffadab8
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:35:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.616 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'env', 'PROCESS_TAG=haproxy-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9cb6ee52-3808-410f-9854-68ac8ffadab8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:35:15 compute-0 ceph-mon[74249]: pgmap v2463: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:35:16 compute-0 podman[408450]: 2025-10-14 09:35:16.036758207 +0000 UTC m=+0.080183418 container create 704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 09:35:16 compute-0 podman[408450]: 2025-10-14 09:35:15.995675329 +0000 UTC m=+0.039100610 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:35:16 compute-0 systemd[1]: Started libpod-conmon-704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323.scope.
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:16 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd38037ad8c5df564412ddc4d5b130a78e747af4052f1d4f025faaa569ef1ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:35:16 compute-0 podman[408450]: 2025-10-14 09:35:16.23370859 +0000 UTC m=+0.277133851 container init 704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.235 2 DEBUG nova.compute.manager [req-6d443c75-8c67-48d6-a0b8-ca31c7a6bf2b req-e00e2eee-ff78-43ea-aeb3-bf60c3f600d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-plugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.236 2 DEBUG oslo_concurrency.lockutils [req-6d443c75-8c67-48d6-a0b8-ca31c7a6bf2b req-e00e2eee-ff78-43ea-aeb3-bf60c3f600d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.236 2 DEBUG oslo_concurrency.lockutils [req-6d443c75-8c67-48d6-a0b8-ca31c7a6bf2b req-e00e2eee-ff78-43ea-aeb3-bf60c3f600d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.236 2 DEBUG oslo_concurrency.lockutils [req-6d443c75-8c67-48d6-a0b8-ca31c7a6bf2b req-e00e2eee-ff78-43ea-aeb3-bf60c3f600d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.236 2 DEBUG nova.compute.manager [req-6d443c75-8c67-48d6-a0b8-ca31c7a6bf2b req-e00e2eee-ff78-43ea-aeb3-bf60c3f600d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Processing event network-vif-plugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:35:16 compute-0 podman[408450]: 2025-10-14 09:35:16.240515757 +0000 UTC m=+0.283940968 container start 704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:35:16 compute-0 neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8[408466]: [NOTICE]   (408470) : New worker (408472) forked
Oct 14 09:35:16 compute-0 neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8[408466]: [NOTICE]   (408470) : Loading success.
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.330 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434516.3294785, c977bdc6-8dd7-4cb4-b50d-28e7313a16e8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.330 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] VM Started (Lifecycle Event)
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.335 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b2457aed-ba7c-4d69-b93d-9f4c98e456b2 in datapath 0d3b36cd-f345-4ba4-8ea7-29b299ab0543 unbound from our chassis
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.336 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0d3b36cd-f345-4ba4-8ea7-29b299ab0543
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.348 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[df1db499-49b6-4b51-afd3-9677838430db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.348 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0d3b36cd-f1 in ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.349 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0d3b36cd-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.350 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[deb90654-ee4e-4eb9-b73d-892ce204f650]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.350 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cf29209e-2b8a-4cc8-9be1-dfe2bf40e765]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.351 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.355 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434516.329695, c977bdc6-8dd7-4cb4-b50d-28e7313a16e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.355 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] VM Paused (Lifecycle Event)
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.364 2 DEBUG nova.compute.manager [req-928382a8-2eea-45e0-a3a0-c413fa61129d req-bd12f7c3-34c9-4300-acd4-b58dd70fdad8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-plugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.364 2 DEBUG oslo_concurrency.lockutils [req-928382a8-2eea-45e0-a3a0-c413fa61129d req-bd12f7c3-34c9-4300-acd4-b58dd70fdad8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.364 2 DEBUG oslo_concurrency.lockutils [req-928382a8-2eea-45e0-a3a0-c413fa61129d req-bd12f7c3-34c9-4300-acd4-b58dd70fdad8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.364 2 DEBUG oslo_concurrency.lockutils [req-928382a8-2eea-45e0-a3a0-c413fa61129d req-bd12f7c3-34c9-4300-acd4-b58dd70fdad8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.364 2 DEBUG nova.compute.manager [req-928382a8-2eea-45e0-a3a0-c413fa61129d req-bd12f7c3-34c9-4300-acd4-b58dd70fdad8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Processing event network-vif-plugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.364 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[56caa5aa-b0dd-4d72-a415-3a4cd05bfe5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.365 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.369 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.371 2 INFO nova.virt.libvirt.driver [-] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Instance spawned successfully.
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.372 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.374 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.376 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434516.3678784, c977bdc6-8dd7-4cb4-b50d-28e7313a16e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.376 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] VM Resumed (Lifecycle Event)
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.388 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.389 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.389 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.390 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.390 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.390 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.390 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[981891dc-7d49-42fb-9b31-1a5b90e2a727]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.393 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.395 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.422 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.424 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ba907a-0c80-4dcc-86b8-ed5e4a374a85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:16 compute-0 systemd-udevd[408363]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.432 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[24953f2f-a65e-4a77-9dcd-5b7f4a46769e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:16 compute-0 NetworkManager[44885]: <info>  [1760434516.4349] manager: (tap0d3b36cd-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/639)
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.443 2 INFO nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Took 11.61 seconds to spawn the instance on the hypervisor.
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.443 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.467 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1bfe7569-cbb9-425d-a670-d7acbe7770d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.471 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f3fb7ed1-a81c-4d4f-b3ec-ac0ec35aa58b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.498 2 INFO nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Took 12.59 seconds to build instance.
Oct 14 09:35:16 compute-0 NetworkManager[44885]: <info>  [1760434516.5005] device (tap0d3b36cd-f0): carrier: link connected
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.506 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd3de1f-72b2-4410-9fba-a0b31162ae63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.513 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.524 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7de696f7-41e1-4a1e-8b1a-4e8335f30326]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d3b36cd-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:2e:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829526, 'reachable_time': 38835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408491, 'error': None, 'target': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.541 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8cf5b3-27f4-47be-959f-4f60b8687235]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:2e6c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829526, 'tstamp': 829526}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408492, 'error': None, 'target': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.560 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f7539fe4-f6be-421c-b04c-f88f9d0a940d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d3b36cd-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:2e:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829526, 'reachable_time': 38835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 408493, 'error': None, 'target': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.595 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f972d90b-b4ff-41a7-852a-bb3700c427c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.641 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6d210679-ddcd-4bb7-95de-0ad6973433a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.645 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d3b36cd-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.645 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.646 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d3b36cd-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:16 compute-0 kernel: tap0d3b36cd-f0: entered promiscuous mode
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:16 compute-0 NetworkManager[44885]: <info>  [1760434516.6506] manager: (tap0d3b36cd-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/640)
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.652 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0d3b36cd-f0, col_values=(('external_ids', {'iface-id': '1caef19e-e76a-434d-84c0-dc762554a564'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:16 compute-0 ovn_controller[152662]: 2025-10-14T09:35:16Z|01570|binding|INFO|Releasing lport 1caef19e-e76a-434d-84c0-dc762554a564 from this chassis (sb_readonly=0)
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:16 compute-0 nova_compute[259627]: 2025-10-14 09:35:16.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.667 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0d3b36cd-f345-4ba4-8ea7-29b299ab0543.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0d3b36cd-f345-4ba4-8ea7-29b299ab0543.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.668 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[43dc1ee3-5dfa-470d-815e-e188090c34a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.669 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-0d3b36cd-f345-4ba4-8ea7-29b299ab0543
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/0d3b36cd-f345-4ba4-8ea7-29b299ab0543.pid.haproxy
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 0d3b36cd-f345-4ba4-8ea7-29b299ab0543
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:35:16 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.671 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'env', 'PROCESS_TAG=haproxy-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0d3b36cd-f345-4ba4-8ea7-29b299ab0543.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:35:17 compute-0 podman[408525]: 2025-10-14 09:35:17.098210505 +0000 UTC m=+0.064307189 container create 29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:35:17 compute-0 systemd[1]: Started libpod-conmon-29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa.scope.
Oct 14 09:35:17 compute-0 podman[408525]: 2025-10-14 09:35:17.074432692 +0000 UTC m=+0.040529396 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:35:17 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f24b8bd883a1a9d5a46c8f33163dab05be4377ce726a9e20ef9f68b79d27d6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:35:17 compute-0 podman[408525]: 2025-10-14 09:35:17.188636314 +0000 UTC m=+0.154733068 container init 29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:35:17 compute-0 podman[408525]: 2025-10-14 09:35:17.199313426 +0000 UTC m=+0.165410150 container start 29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:35:17 compute-0 neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543[408540]: [NOTICE]   (408544) : New worker (408546) forked
Oct 14 09:35:17 compute-0 neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543[408540]: [NOTICE]   (408544) : Loading success.
Oct 14 09:35:17 compute-0 ceph-mon[74249]: pgmap v2464: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 14 09:35:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.116468) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434518116497, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 414, "num_deletes": 257, "total_data_size": 273713, "memory_usage": 281792, "flush_reason": "Manual Compaction"}
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434518120817, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 271272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51862, "largest_seqno": 52275, "table_properties": {"data_size": 268841, "index_size": 531, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5827, "raw_average_key_size": 17, "raw_value_size": 263975, "raw_average_value_size": 814, "num_data_blocks": 24, "num_entries": 324, "num_filter_entries": 324, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434499, "oldest_key_time": 1760434499, "file_creation_time": 1760434518, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 4420 microseconds, and 1545 cpu microseconds.
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.120883) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 271272 bytes OK
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.120908) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.122909) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.122937) EVENT_LOG_v1 {"time_micros": 1760434518122927, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.122960) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 271086, prev total WAL file size 271086, number of live WAL files 2.
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.123822) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303039' seq:72057594037927935, type:22 .. '6C6F676D0032323632' seq:0, type:0; will stop at (end)
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(264KB)], [119(9723KB)]
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434518123864, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 10228007, "oldest_snapshot_seqno": -1}
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7201 keys, 10117828 bytes, temperature: kUnknown
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434518177225, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 10117828, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10069174, "index_size": 29534, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 188378, "raw_average_key_size": 26, "raw_value_size": 9939907, "raw_average_value_size": 1380, "num_data_blocks": 1152, "num_entries": 7201, "num_filter_entries": 7201, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434518, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.177462) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 10117828 bytes
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.178989) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.4 rd, 189.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.5 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(75.0) write-amplify(37.3) OK, records in: 7723, records dropped: 522 output_compression: NoCompression
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.179030) EVENT_LOG_v1 {"time_micros": 1760434518178999, "job": 72, "event": "compaction_finished", "compaction_time_micros": 53443, "compaction_time_cpu_micros": 27006, "output_level": 6, "num_output_files": 1, "total_output_size": 10117828, "num_input_records": 7723, "num_output_records": 7201, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434518179182, "job": 72, "event": "table_file_deletion", "file_number": 121}
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434518180984, "job": 72, "event": "table_file_deletion", "file_number": 119}
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.123739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.181115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.181124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.181127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.181131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:35:18 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.181133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:35:18 compute-0 nova_compute[259627]: 2025-10-14 09:35:18.443 2 DEBUG nova.compute.manager [req-2e223da0-40bf-467c-8c76-b09fe50a958f req-7b4823a6-c0bd-4e8c-a58a-00328bd2cf77 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-plugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:35:18 compute-0 nova_compute[259627]: 2025-10-14 09:35:18.443 2 DEBUG oslo_concurrency.lockutils [req-2e223da0-40bf-467c-8c76-b09fe50a958f req-7b4823a6-c0bd-4e8c-a58a-00328bd2cf77 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:18 compute-0 nova_compute[259627]: 2025-10-14 09:35:18.444 2 DEBUG oslo_concurrency.lockutils [req-2e223da0-40bf-467c-8c76-b09fe50a958f req-7b4823a6-c0bd-4e8c-a58a-00328bd2cf77 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:18 compute-0 nova_compute[259627]: 2025-10-14 09:35:18.444 2 DEBUG oslo_concurrency.lockutils [req-2e223da0-40bf-467c-8c76-b09fe50a958f req-7b4823a6-c0bd-4e8c-a58a-00328bd2cf77 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:18 compute-0 nova_compute[259627]: 2025-10-14 09:35:18.444 2 DEBUG nova.compute.manager [req-2e223da0-40bf-467c-8c76-b09fe50a958f req-7b4823a6-c0bd-4e8c-a58a-00328bd2cf77 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] No waiting events found dispatching network-vif-plugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:35:18 compute-0 nova_compute[259627]: 2025-10-14 09:35:18.444 2 WARNING nova.compute.manager [req-2e223da0-40bf-467c-8c76-b09fe50a958f req-7b4823a6-c0bd-4e8c-a58a-00328bd2cf77 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received unexpected event network-vif-plugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 for instance with vm_state active and task_state None.
Oct 14 09:35:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 6.1 KiB/s rd, 699 KiB/s wr, 10 op/s
Oct 14 09:35:18 compute-0 nova_compute[259627]: 2025-10-14 09:35:18.564 2 DEBUG nova.compute.manager [req-04c95c46-7b60-4bc8-b941-fe9ea68d816a req-3de01230-eccb-4cf7-a945-40dc7dc6b484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-plugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:35:18 compute-0 nova_compute[259627]: 2025-10-14 09:35:18.564 2 DEBUG oslo_concurrency.lockutils [req-04c95c46-7b60-4bc8-b941-fe9ea68d816a req-3de01230-eccb-4cf7-a945-40dc7dc6b484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:18 compute-0 nova_compute[259627]: 2025-10-14 09:35:18.564 2 DEBUG oslo_concurrency.lockutils [req-04c95c46-7b60-4bc8-b941-fe9ea68d816a req-3de01230-eccb-4cf7-a945-40dc7dc6b484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:18 compute-0 nova_compute[259627]: 2025-10-14 09:35:18.565 2 DEBUG oslo_concurrency.lockutils [req-04c95c46-7b60-4bc8-b941-fe9ea68d816a req-3de01230-eccb-4cf7-a945-40dc7dc6b484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:18 compute-0 nova_compute[259627]: 2025-10-14 09:35:18.565 2 DEBUG nova.compute.manager [req-04c95c46-7b60-4bc8-b941-fe9ea68d816a req-3de01230-eccb-4cf7-a945-40dc7dc6b484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] No waiting events found dispatching network-vif-plugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:35:18 compute-0 nova_compute[259627]: 2025-10-14 09:35:18.565 2 WARNING nova.compute.manager [req-04c95c46-7b60-4bc8-b941-fe9ea68d816a req-3de01230-eccb-4cf7-a945-40dc7dc6b484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received unexpected event network-vif-plugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 for instance with vm_state active and task_state None.
Oct 14 09:35:19 compute-0 ceph-mon[74249]: pgmap v2465: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 6.1 KiB/s rd, 699 KiB/s wr, 10 op/s
Oct 14 09:35:19 compute-0 nova_compute[259627]: 2025-10-14 09:35:19.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:19 compute-0 nova_compute[259627]: 2025-10-14 09:35:19.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:19 compute-0 NetworkManager[44885]: <info>  [1760434519.6695] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Oct 14 09:35:19 compute-0 NetworkManager[44885]: <info>  [1760434519.6706] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/642)
Oct 14 09:35:19 compute-0 ovn_controller[152662]: 2025-10-14T09:35:19Z|01571|binding|INFO|Releasing lport e170cfc7-b9d3-441b-8041-f001d366d5cd from this chassis (sb_readonly=0)
Oct 14 09:35:19 compute-0 ovn_controller[152662]: 2025-10-14T09:35:19Z|01572|binding|INFO|Releasing lport 1caef19e-e76a-434d-84c0-dc762554a564 from this chassis (sb_readonly=0)
Oct 14 09:35:19 compute-0 ovn_controller[152662]: 2025-10-14T09:35:19Z|01573|binding|INFO|Releasing lport e170cfc7-b9d3-441b-8041-f001d366d5cd from this chassis (sb_readonly=0)
Oct 14 09:35:19 compute-0 ovn_controller[152662]: 2025-10-14T09:35:19Z|01574|binding|INFO|Releasing lport 1caef19e-e76a-434d-84c0-dc762554a564 from this chassis (sb_readonly=0)
Oct 14 09:35:19 compute-0 nova_compute[259627]: 2025-10-14 09:35:19.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:19 compute-0 nova_compute[259627]: 2025-10-14 09:35:19.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 699 KiB/s wr, 79 op/s
Oct 14 09:35:20 compute-0 nova_compute[259627]: 2025-10-14 09:35:20.564 2 DEBUG nova.compute.manager [req-2f9d732e-8696-484b-9f8b-bd88a5551128 req-eb6bba73-fdf1-4bc9-930c-76aa124f2509 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-changed-13d4f68b-234a-4c46-9e1d-79f28a907bf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:35:20 compute-0 nova_compute[259627]: 2025-10-14 09:35:20.565 2 DEBUG nova.compute.manager [req-2f9d732e-8696-484b-9f8b-bd88a5551128 req-eb6bba73-fdf1-4bc9-930c-76aa124f2509 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Refreshing instance network info cache due to event network-changed-13d4f68b-234a-4c46-9e1d-79f28a907bf2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:35:20 compute-0 nova_compute[259627]: 2025-10-14 09:35:20.565 2 DEBUG oslo_concurrency.lockutils [req-2f9d732e-8696-484b-9f8b-bd88a5551128 req-eb6bba73-fdf1-4bc9-930c-76aa124f2509 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:35:20 compute-0 nova_compute[259627]: 2025-10-14 09:35:20.566 2 DEBUG oslo_concurrency.lockutils [req-2f9d732e-8696-484b-9f8b-bd88a5551128 req-eb6bba73-fdf1-4bc9-930c-76aa124f2509 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:35:20 compute-0 nova_compute[259627]: 2025-10-14 09:35:20.566 2 DEBUG nova.network.neutron [req-2f9d732e-8696-484b-9f8b-bd88a5551128 req-eb6bba73-fdf1-4bc9-930c-76aa124f2509 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Refreshing network info cache for port 13d4f68b-234a-4c46-9e1d-79f28a907bf2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:35:21 compute-0 nova_compute[259627]: 2025-10-14 09:35:21.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:21 compute-0 ceph-mon[74249]: pgmap v2466: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 699 KiB/s wr, 79 op/s
Oct 14 09:35:21 compute-0 podman[408556]: 2025-10-14 09:35:21.662451301 +0000 UTC m=+0.076916548 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:35:21 compute-0 podman[408557]: 2025-10-14 09:35:21.675862991 +0000 UTC m=+0.083886000 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct 14 09:35:22 compute-0 nova_compute[259627]: 2025-10-14 09:35:22.058 2 DEBUG nova.network.neutron [req-2f9d732e-8696-484b-9f8b-bd88a5551128 req-eb6bba73-fdf1-4bc9-930c-76aa124f2509 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updated VIF entry in instance network info cache for port 13d4f68b-234a-4c46-9e1d-79f28a907bf2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:35:22 compute-0 nova_compute[259627]: 2025-10-14 09:35:22.059 2 DEBUG nova.network.neutron [req-2f9d732e-8696-484b-9f8b-bd88a5551128 req-eb6bba73-fdf1-4bc9-930c-76aa124f2509 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updating instance_info_cache with network_info: [{"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:35:22 compute-0 nova_compute[259627]: 2025-10-14 09:35:22.083 2 DEBUG oslo_concurrency.lockutils [req-2f9d732e-8696-484b-9f8b-bd88a5551128 req-eb6bba73-fdf1-4bc9-930c-76aa124f2509 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:35:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:35:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:35:23 compute-0 ceph-mon[74249]: pgmap v2467: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:35:24 compute-0 nova_compute[259627]: 2025-10-14 09:35:24.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:35:25 compute-0 ceph-mon[74249]: pgmap v2468: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:35:26 compute-0 nova_compute[259627]: 2025-10-14 09:35:26.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:35:27 compute-0 ceph-mon[74249]: pgmap v2469: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:35:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:35:28 compute-0 ovn_controller[152662]: 2025-10-14T09:35:28Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2a:ad:d7 10.100.0.5
Oct 14 09:35:28 compute-0 ovn_controller[152662]: 2025-10-14T09:35:28Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:ad:d7 10.100.0.5
Oct 14 09:35:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 14 09:35:29 compute-0 nova_compute[259627]: 2025-10-14 09:35:29.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:29 compute-0 ceph-mon[74249]: pgmap v2470: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 14 09:35:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 14 09:35:31 compute-0 nova_compute[259627]: 2025-10-14 09:35:31.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:31 compute-0 ceph-mon[74249]: pgmap v2471: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 14 09:35:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:35:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:35:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:35:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:35:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:35:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:35:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:35:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:35:32
Oct 14 09:35:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:35:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:35:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', 'vms', '.rgw.root', 'volumes', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr']
Oct 14 09:35:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:35:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:35:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:35:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:35:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:35:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:35:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:35:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:35:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:35:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:35:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:35:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:35:33 compute-0 ceph-mon[74249]: pgmap v2472: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:35:33 compute-0 podman[408598]: 2025-10-14 09:35:33.7077246 +0000 UTC m=+0.107280294 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:35:33 compute-0 podman[408597]: 2025-10-14 09:35:33.727880514 +0000 UTC m=+0.142271362 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 09:35:34 compute-0 nova_compute[259627]: 2025-10-14 09:35:34.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:35:34 compute-0 sudo[408642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:35:34 compute-0 sudo[408642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:34 compute-0 sudo[408642]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:34 compute-0 sudo[408667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:35:34 compute-0 sudo[408667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:34 compute-0 sudo[408667]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:34 compute-0 sudo[408692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:35:34 compute-0 sudo[408692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:34 compute-0 sudo[408692]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:34 compute-0 sudo[408717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 14 09:35:34 compute-0 sudo[408717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:35 compute-0 sudo[408717]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:35:35 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:35:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:35:35 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:35:35 compute-0 sudo[408762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:35:35 compute-0 sudo[408762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:35 compute-0 sudo[408762]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:35 compute-0 sudo[408787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:35:35 compute-0 sudo[408787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:35 compute-0 sudo[408787]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:35 compute-0 sudo[408812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:35:35 compute-0 sudo[408812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:35 compute-0 sudo[408812]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:35 compute-0 sudo[408837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:35:35 compute-0 sudo[408837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:35 compute-0 ceph-mon[74249]: pgmap v2473: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:35:35 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:35:35 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:35:36 compute-0 sudo[408837]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 14 09:35:36 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 09:35:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:35:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:35:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:35:36 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:35:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:35:36 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:35:36 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev e85b6f19-2277-49ba-b555-467ee082f42d does not exist
Oct 14 09:35:36 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 77c85dfb-5b25-4e55-b4cb-0f207de16fcb does not exist
Oct 14 09:35:36 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 9c40edd6-333d-40e6-a00d-46ea17d469d6 does not exist
Oct 14 09:35:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:35:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:35:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:35:36 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:35:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:35:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:35:36 compute-0 nova_compute[259627]: 2025-10-14 09:35:36.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:36 compute-0 sudo[408892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:35:36 compute-0 sudo[408892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:36 compute-0 sudo[408892]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:36 compute-0 sudo[408917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:35:36 compute-0 sudo[408917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:36 compute-0 sudo[408917]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:36 compute-0 sudo[408942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:35:36 compute-0 sudo[408942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:36 compute-0 sudo[408942]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:36 compute-0 sudo[408967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:35:36 compute-0 sudo[408967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:35:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 09:35:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:35:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:35:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:35:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:35:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:35:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:35:36 compute-0 podman[409032]: 2025-10-14 09:35:36.792351786 +0000 UTC m=+0.060438374 container create 4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 09:35:36 compute-0 systemd[1]: Started libpod-conmon-4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78.scope.
Oct 14 09:35:36 compute-0 podman[409032]: 2025-10-14 09:35:36.755627245 +0000 UTC m=+0.023713893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:35:36 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:35:36 compute-0 podman[409032]: 2025-10-14 09:35:36.903388001 +0000 UTC m=+0.171474599 container init 4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:35:36 compute-0 podman[409032]: 2025-10-14 09:35:36.915375095 +0000 UTC m=+0.183461643 container start 4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:35:36 compute-0 podman[409032]: 2025-10-14 09:35:36.919283381 +0000 UTC m=+0.187369949 container attach 4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 09:35:36 compute-0 stoic_villani[409049]: 167 167
Oct 14 09:35:36 compute-0 systemd[1]: libpod-4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78.scope: Deactivated successfully.
Oct 14 09:35:36 compute-0 conmon[409049]: conmon 4ea76e14d4c923f81beb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78.scope/container/memory.events
Oct 14 09:35:36 compute-0 podman[409032]: 2025-10-14 09:35:36.923358371 +0000 UTC m=+0.191444939 container died 4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 09:35:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf61b532bdc0ebab71f6ec0e5e1252d81ddc46c945cb9e32d4aa0ca9b7c82247-merged.mount: Deactivated successfully.
Oct 14 09:35:36 compute-0 podman[409032]: 2025-10-14 09:35:36.966799167 +0000 UTC m=+0.234885705 container remove 4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 09:35:36 compute-0 systemd[1]: libpod-conmon-4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78.scope: Deactivated successfully.
Oct 14 09:35:37 compute-0 podman[409073]: 2025-10-14 09:35:37.178074262 +0000 UTC m=+0.049948337 container create 03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 09:35:37 compute-0 systemd[1]: Started libpod-conmon-03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0.scope.
Oct 14 09:35:37 compute-0 podman[409073]: 2025-10-14 09:35:37.161356572 +0000 UTC m=+0.033230617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:35:37 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca865753bacfa3f72a8fd67ec174b1bb6802edba580cd08f5859ea33c7e3331e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca865753bacfa3f72a8fd67ec174b1bb6802edba580cd08f5859ea33c7e3331e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca865753bacfa3f72a8fd67ec174b1bb6802edba580cd08f5859ea33c7e3331e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca865753bacfa3f72a8fd67ec174b1bb6802edba580cd08f5859ea33c7e3331e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca865753bacfa3f72a8fd67ec174b1bb6802edba580cd08f5859ea33c7e3331e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:35:37 compute-0 podman[409073]: 2025-10-14 09:35:37.297764579 +0000 UTC m=+0.169638654 container init 03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 09:35:37 compute-0 podman[409073]: 2025-10-14 09:35:37.3104128 +0000 UTC m=+0.182286885 container start 03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 09:35:37 compute-0 podman[409073]: 2025-10-14 09:35:37.314831498 +0000 UTC m=+0.186705563 container attach 03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:35:37 compute-0 ceph-mon[74249]: pgmap v2474: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:35:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:35:38 compute-0 modest_yonath[409090]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:35:38 compute-0 modest_yonath[409090]: --> relative data size: 1.0
Oct 14 09:35:38 compute-0 modest_yonath[409090]: --> All data devices are unavailable
Oct 14 09:35:38 compute-0 systemd[1]: libpod-03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0.scope: Deactivated successfully.
Oct 14 09:35:38 compute-0 systemd[1]: libpod-03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0.scope: Consumed 1.156s CPU time.
Oct 14 09:35:38 compute-0 podman[409119]: 2025-10-14 09:35:38.554489199 +0000 UTC m=+0.029744861 container died 03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:35:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:35:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca865753bacfa3f72a8fd67ec174b1bb6802edba580cd08f5859ea33c7e3331e-merged.mount: Deactivated successfully.
Oct 14 09:35:38 compute-0 podman[409119]: 2025-10-14 09:35:38.618551681 +0000 UTC m=+0.093807323 container remove 03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:35:38 compute-0 systemd[1]: libpod-conmon-03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0.scope: Deactivated successfully.
Oct 14 09:35:38 compute-0 sudo[408967]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:38 compute-0 sudo[409134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:35:38 compute-0 sudo[409134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:38 compute-0 sudo[409134]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:38 compute-0 sudo[409159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:35:38 compute-0 sudo[409159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:38 compute-0 sudo[409159]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:38 compute-0 sudo[409184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:35:38 compute-0 sudo[409184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:38 compute-0 sudo[409184]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:39 compute-0 sudo[409209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:35:39 compute-0 sudo[409209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:39 compute-0 nova_compute[259627]: 2025-10-14 09:35:39.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:39 compute-0 podman[409274]: 2025-10-14 09:35:39.378698385 +0000 UTC m=+0.047166089 container create 8bdd64368951e6ff1cfc180b9733535dbd6ba07d0c2728e577977976230a94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_blackburn, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:35:39 compute-0 systemd[1]: Started libpod-conmon-8bdd64368951e6ff1cfc180b9733535dbd6ba07d0c2728e577977976230a94c5.scope.
Oct 14 09:35:39 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:35:39 compute-0 podman[409274]: 2025-10-14 09:35:39.360798275 +0000 UTC m=+0.029265979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:35:39 compute-0 podman[409274]: 2025-10-14 09:35:39.461731522 +0000 UTC m=+0.130199236 container init 8bdd64368951e6ff1cfc180b9733535dbd6ba07d0c2728e577977976230a94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_blackburn, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 09:35:39 compute-0 podman[409274]: 2025-10-14 09:35:39.46857375 +0000 UTC m=+0.137041474 container start 8bdd64368951e6ff1cfc180b9733535dbd6ba07d0c2728e577977976230a94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 09:35:39 compute-0 funny_blackburn[409292]: 167 167
Oct 14 09:35:39 compute-0 systemd[1]: libpod-8bdd64368951e6ff1cfc180b9733535dbd6ba07d0c2728e577977976230a94c5.scope: Deactivated successfully.
Oct 14 09:35:39 compute-0 podman[409274]: 2025-10-14 09:35:39.472204969 +0000 UTC m=+0.140672703 container attach 8bdd64368951e6ff1cfc180b9733535dbd6ba07d0c2728e577977976230a94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_blackburn, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:35:39 compute-0 podman[409274]: 2025-10-14 09:35:39.475705985 +0000 UTC m=+0.144173709 container died 8bdd64368951e6ff1cfc180b9733535dbd6ba07d0c2728e577977976230a94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:35:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e06e773ce4dd6a51fccc7acb3e3554068c1ad07749730ba984036f0171ffec68-merged.mount: Deactivated successfully.
Oct 14 09:35:39 compute-0 podman[409274]: 2025-10-14 09:35:39.520495444 +0000 UTC m=+0.188963168 container remove 8bdd64368951e6ff1cfc180b9733535dbd6ba07d0c2728e577977976230a94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_blackburn, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:35:39 compute-0 systemd[1]: libpod-conmon-8bdd64368951e6ff1cfc180b9733535dbd6ba07d0c2728e577977976230a94c5.scope: Deactivated successfully.
Oct 14 09:35:39 compute-0 ceph-mon[74249]: pgmap v2475: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:35:39 compute-0 podman[409318]: 2025-10-14 09:35:39.741410976 +0000 UTC m=+0.051978207 container create fbf76d36b7a9ad770cb4e807853a47720fa1bcb5a83578a9f83ef695929a41d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hermann, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:35:39 compute-0 systemd[1]: Started libpod-conmon-fbf76d36b7a9ad770cb4e807853a47720fa1bcb5a83578a9f83ef695929a41d9.scope.
Oct 14 09:35:39 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:35:39 compute-0 podman[409318]: 2025-10-14 09:35:39.722594444 +0000 UTC m=+0.033161635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ffb2ffce46884eece6258fa009ee925e6de28a3a0d76fed36b7f6992ba34c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ffb2ffce46884eece6258fa009ee925e6de28a3a0d76fed36b7f6992ba34c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ffb2ffce46884eece6258fa009ee925e6de28a3a0d76fed36b7f6992ba34c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ffb2ffce46884eece6258fa009ee925e6de28a3a0d76fed36b7f6992ba34c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:35:39 compute-0 podman[409318]: 2025-10-14 09:35:39.83695146 +0000 UTC m=+0.147518701 container init fbf76d36b7a9ad770cb4e807853a47720fa1bcb5a83578a9f83ef695929a41d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hermann, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 09:35:39 compute-0 podman[409318]: 2025-10-14 09:35:39.849768225 +0000 UTC m=+0.160335416 container start fbf76d36b7a9ad770cb4e807853a47720fa1bcb5a83578a9f83ef695929a41d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hermann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:35:39 compute-0 podman[409318]: 2025-10-14 09:35:39.852875301 +0000 UTC m=+0.163442492 container attach fbf76d36b7a9ad770cb4e807853a47720fa1bcb5a83578a9f83ef695929a41d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 09:35:40 compute-0 nova_compute[259627]: 2025-10-14 09:35:40.301 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:40 compute-0 nova_compute[259627]: 2025-10-14 09:35:40.303 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:40 compute-0 nova_compute[259627]: 2025-10-14 09:35:40.325 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:35:40 compute-0 nova_compute[259627]: 2025-10-14 09:35:40.416 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:40 compute-0 nova_compute[259627]: 2025-10-14 09:35:40.416 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:40 compute-0 nova_compute[259627]: 2025-10-14 09:35:40.425 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:35:40 compute-0 nova_compute[259627]: 2025-10-14 09:35:40.425 2 INFO nova.compute.claims [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:35:40 compute-0 nova_compute[259627]: 2025-10-14 09:35:40.562 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:35:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:35:40 compute-0 practical_hermann[409334]: {
Oct 14 09:35:40 compute-0 practical_hermann[409334]:     "0": [
Oct 14 09:35:40 compute-0 practical_hermann[409334]:         {
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "devices": [
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "/dev/loop3"
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             ],
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "lv_name": "ceph_lv0",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "lv_size": "21470642176",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "name": "ceph_lv0",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "tags": {
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.cluster_name": "ceph",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.crush_device_class": "",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.encrypted": "0",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.osd_id": "0",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.type": "block",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.vdo": "0"
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             },
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "type": "block",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "vg_name": "ceph_vg0"
Oct 14 09:35:40 compute-0 practical_hermann[409334]:         }
Oct 14 09:35:40 compute-0 practical_hermann[409334]:     ],
Oct 14 09:35:40 compute-0 practical_hermann[409334]:     "1": [
Oct 14 09:35:40 compute-0 practical_hermann[409334]:         {
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "devices": [
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "/dev/loop4"
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             ],
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "lv_name": "ceph_lv1",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "lv_size": "21470642176",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "name": "ceph_lv1",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "tags": {
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.cluster_name": "ceph",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.crush_device_class": "",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.encrypted": "0",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.osd_id": "1",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.type": "block",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.vdo": "0"
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             },
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "type": "block",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "vg_name": "ceph_vg1"
Oct 14 09:35:40 compute-0 practical_hermann[409334]:         }
Oct 14 09:35:40 compute-0 practical_hermann[409334]:     ],
Oct 14 09:35:40 compute-0 practical_hermann[409334]:     "2": [
Oct 14 09:35:40 compute-0 practical_hermann[409334]:         {
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "devices": [
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "/dev/loop5"
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             ],
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "lv_name": "ceph_lv2",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "lv_size": "21470642176",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "name": "ceph_lv2",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "tags": {
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.cluster_name": "ceph",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.crush_device_class": "",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.encrypted": "0",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.osd_id": "2",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.type": "block",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:                 "ceph.vdo": "0"
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             },
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "type": "block",
Oct 14 09:35:40 compute-0 practical_hermann[409334]:             "vg_name": "ceph_vg2"
Oct 14 09:35:40 compute-0 practical_hermann[409334]:         }
Oct 14 09:35:40 compute-0 practical_hermann[409334]:     ]
Oct 14 09:35:40 compute-0 practical_hermann[409334]: }
Oct 14 09:35:40 compute-0 systemd[1]: libpod-fbf76d36b7a9ad770cb4e807853a47720fa1bcb5a83578a9f83ef695929a41d9.scope: Deactivated successfully.
Oct 14 09:35:40 compute-0 podman[409318]: 2025-10-14 09:35:40.61524433 +0000 UTC m=+0.925811581 container died fbf76d36b7a9ad770cb4e807853a47720fa1bcb5a83578a9f83ef695929a41d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hermann, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:35:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-98ffb2ffce46884eece6258fa009ee925e6de28a3a0d76fed36b7f6992ba34c7-merged.mount: Deactivated successfully.
Oct 14 09:35:40 compute-0 podman[409318]: 2025-10-14 09:35:40.758496685 +0000 UTC m=+1.069063876 container remove fbf76d36b7a9ad770cb4e807853a47720fa1bcb5a83578a9f83ef695929a41d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 09:35:40 compute-0 systemd[1]: libpod-conmon-fbf76d36b7a9ad770cb4e807853a47720fa1bcb5a83578a9f83ef695929a41d9.scope: Deactivated successfully.
Oct 14 09:35:40 compute-0 sudo[409209]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:40 compute-0 sudo[409375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:35:40 compute-0 sudo[409375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:40 compute-0 sudo[409375]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:40 compute-0 sudo[409400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:35:40 compute-0 sudo[409400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:40 compute-0 sudo[409400]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:35:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2682684492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:35:41 compute-0 sudo[409425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:35:41 compute-0 sudo[409425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:41 compute-0 sudo[409425]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.020 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.029 2 DEBUG nova.compute.provider_tree [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.055 2 DEBUG nova.scheduler.client.report [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:35:41 compute-0 sudo[409452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:35:41 compute-0 sudo[409452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.084 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.084 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.145 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.145 2 DEBUG nova.network.neutron [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.171 2 INFO nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.189 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.292 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.294 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.294 2 INFO nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Creating image(s)
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.323 2 DEBUG nova.storage.rbd_utils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 2d97e325-a4e8-4595-9697-04219277474d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.356 2 DEBUG nova.storage.rbd_utils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 2d97e325-a4e8-4595-9697-04219277474d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.385 2 DEBUG nova.storage.rbd_utils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 2d97e325-a4e8-4595-9697-04219277474d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.390 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.473 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.474 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.475 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.475 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.495 2 DEBUG nova.storage.rbd_utils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 2d97e325-a4e8-4595-9697-04219277474d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:35:41 compute-0 nova_compute[259627]: 2025-10-14 09:35:41.498 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2d97e325-a4e8-4595-9697-04219277474d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:35:41 compute-0 podman[409568]: 2025-10-14 09:35:41.51096469 +0000 UTC m=+0.105113841 container create c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ride, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 09:35:41 compute-0 podman[409568]: 2025-10-14 09:35:41.426918957 +0000 UTC m=+0.021068118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:35:41 compute-0 systemd[1]: Started libpod-conmon-c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b.scope.
Oct 14 09:35:41 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:35:41 compute-0 podman[409568]: 2025-10-14 09:35:41.672105894 +0000 UTC m=+0.266255095 container init c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ride, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 09:35:41 compute-0 podman[409568]: 2025-10-14 09:35:41.685091513 +0000 UTC m=+0.279240694 container start c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ride, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 09:35:41 compute-0 exciting_ride[409613]: 167 167
Oct 14 09:35:41 compute-0 systemd[1]: libpod-c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b.scope: Deactivated successfully.
Oct 14 09:35:41 compute-0 conmon[409613]: conmon c95f9b6f78e6d815dcba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b.scope/container/memory.events
Oct 14 09:35:41 compute-0 podman[409568]: 2025-10-14 09:35:41.824298599 +0000 UTC m=+0.418447830 container attach c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ride, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:35:41 compute-0 podman[409568]: 2025-10-14 09:35:41.82553748 +0000 UTC m=+0.419686671 container died c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:35:41 compute-0 ceph-mon[74249]: pgmap v2476: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:35:41 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2682684492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:35:42 compute-0 nova_compute[259627]: 2025-10-14 09:35:42.074 2 DEBUG nova.policy [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:35:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-1654488baf9a4d61e7b130cbc7bb744a54fdd018180d98857aefa458ff6229e3-merged.mount: Deactivated successfully.
Oct 14 09:35:42 compute-0 podman[409568]: 2025-10-14 09:35:42.451750987 +0000 UTC m=+1.045900138 container remove c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 09:35:42 compute-0 systemd[1]: libpod-conmon-c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b.scope: Deactivated successfully.
Oct 14 09:35:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 09:35:42 compute-0 podman[409647]: 2025-10-14 09:35:42.670160957 +0000 UTC m=+0.069178319 container create 2cc567e6a8ce87a28a1d3517e6bdf01459fe44f77e30c886457c0eeb7a5721c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_golick, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:35:42 compute-0 systemd[1]: Started libpod-conmon-2cc567e6a8ce87a28a1d3517e6bdf01459fe44f77e30c886457c0eeb7a5721c1.scope.
Oct 14 09:35:42 compute-0 podman[409647]: 2025-10-14 09:35:42.632948304 +0000 UTC m=+0.031965706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:35:42 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38200cff88c0551f664013e5b7164e9332bc60024677b8dff2fb17e62e42efa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38200cff88c0551f664013e5b7164e9332bc60024677b8dff2fb17e62e42efa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38200cff88c0551f664013e5b7164e9332bc60024677b8dff2fb17e62e42efa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38200cff88c0551f664013e5b7164e9332bc60024677b8dff2fb17e62e42efa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:35:42 compute-0 podman[409647]: 2025-10-14 09:35:42.806914243 +0000 UTC m=+0.205931645 container init 2cc567e6a8ce87a28a1d3517e6bdf01459fe44f77e30c886457c0eeb7a5721c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct 14 09:35:42 compute-0 podman[409647]: 2025-10-14 09:35:42.819365018 +0000 UTC m=+0.218382390 container start 2cc567e6a8ce87a28a1d3517e6bdf01459fe44f77e30c886457c0eeb7a5721c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 09:35:42 compute-0 podman[409647]: 2025-10-14 09:35:42.823375677 +0000 UTC m=+0.222393089 container attach 2cc567e6a8ce87a28a1d3517e6bdf01459fe44f77e30c886457c0eeb7a5721c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_golick, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:35:42 compute-0 nova_compute[259627]: 2025-10-14 09:35:42.831 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2d97e325-a4e8-4595-9697-04219277474d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:35:42 compute-0 nova_compute[259627]: 2025-10-14 09:35:42.897 2 DEBUG nova.storage.rbd_utils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image 2d97e325-a4e8-4595-9697-04219277474d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:35:42 compute-0 nova_compute[259627]: 2025-10-14 09:35:42.970 2 DEBUG nova.objects.instance [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid 2d97e325-a4e8-4595-9697-04219277474d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:35:42 compute-0 nova_compute[259627]: 2025-10-14 09:35:42.997 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:35:42 compute-0 nova_compute[259627]: 2025-10-14 09:35:42.997 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Ensure instance console log exists: /var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:35:42 compute-0 nova_compute[259627]: 2025-10-14 09:35:42.997 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:42 compute-0 nova_compute[259627]: 2025-10-14 09:35:42.998 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:42 compute-0 nova_compute[259627]: 2025-10-14 09:35:42.998 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:35:43 compute-0 nova_compute[259627]: 2025-10-14 09:35:43.597 2 DEBUG nova.network.neutron [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Successfully created port: 2e84cd2e-9f09-482a-9a06-adcfff2088d5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:35:43 compute-0 busy_golick[409664]: {
Oct 14 09:35:43 compute-0 busy_golick[409664]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:35:43 compute-0 busy_golick[409664]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:35:43 compute-0 busy_golick[409664]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:35:43 compute-0 busy_golick[409664]:         "osd_id": 2,
Oct 14 09:35:43 compute-0 busy_golick[409664]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:35:43 compute-0 busy_golick[409664]:         "type": "bluestore"
Oct 14 09:35:43 compute-0 busy_golick[409664]:     },
Oct 14 09:35:43 compute-0 busy_golick[409664]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:35:43 compute-0 busy_golick[409664]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:35:43 compute-0 busy_golick[409664]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:35:43 compute-0 busy_golick[409664]:         "osd_id": 1,
Oct 14 09:35:43 compute-0 busy_golick[409664]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:35:43 compute-0 busy_golick[409664]:         "type": "bluestore"
Oct 14 09:35:43 compute-0 busy_golick[409664]:     },
Oct 14 09:35:43 compute-0 busy_golick[409664]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:35:43 compute-0 busy_golick[409664]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:35:43 compute-0 busy_golick[409664]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:35:43 compute-0 busy_golick[409664]:         "osd_id": 0,
Oct 14 09:35:43 compute-0 busy_golick[409664]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:35:43 compute-0 busy_golick[409664]:         "type": "bluestore"
Oct 14 09:35:43 compute-0 busy_golick[409664]:     }
Oct 14 09:35:43 compute-0 busy_golick[409664]: }
Oct 14 09:35:43 compute-0 systemd[1]: libpod-2cc567e6a8ce87a28a1d3517e6bdf01459fe44f77e30c886457c0eeb7a5721c1.scope: Deactivated successfully.
Oct 14 09:35:43 compute-0 podman[409647]: 2025-10-14 09:35:43.741620871 +0000 UTC m=+1.140638233 container died 2cc567e6a8ce87a28a1d3517e6bdf01459fe44f77e30c886457c0eeb7a5721c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:35:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-e38200cff88c0551f664013e5b7164e9332bc60024677b8dff2fb17e62e42efa-merged.mount: Deactivated successfully.
Oct 14 09:35:43 compute-0 podman[409647]: 2025-10-14 09:35:43.808560463 +0000 UTC m=+1.207577835 container remove 2cc567e6a8ce87a28a1d3517e6bdf01459fe44f77e30c886457c0eeb7a5721c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_golick, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:35:43 compute-0 systemd[1]: libpod-conmon-2cc567e6a8ce87a28a1d3517e6bdf01459fe44f77e30c886457c0eeb7a5721c1.scope: Deactivated successfully.
Oct 14 09:35:43 compute-0 sudo[409452]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:35:43 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:35:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:35:43 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 7828fad9-e245-49fa-908a-c5bfafdc672b does not exist
Oct 14 09:35:43 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 957d3098-7fc4-43be-a63f-4c60070743b8 does not exist
Oct 14 09:35:43 compute-0 sudo[409781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:35:43 compute-0 ceph-mon[74249]: pgmap v2477: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 09:35:43 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:35:43 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:35:43 compute-0 sudo[409781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:43 compute-0 sudo[409781]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:44 compute-0 sudo[409806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:35:44 compute-0 sudo[409806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:35:44 compute-0 nova_compute[259627]: 2025-10-14 09:35:44.042 2 DEBUG nova.network.neutron [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Successfully created port: da98587d-dc59-4e4f-bc3f-d3e70dafd21e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:35:44 compute-0 sudo[409806]: pam_unix(sudo:session): session closed for user root
Oct 14 09:35:44 compute-0 nova_compute[259627]: 2025-10-14 09:35:44.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 09:35:44 compute-0 nova_compute[259627]: 2025-10-14 09:35:44.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:35:45 compute-0 nova_compute[259627]: 2025-10-14 09:35:45.396 2 DEBUG nova.network.neutron [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Successfully updated port: 2e84cd2e-9f09-482a-9a06-adcfff2088d5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:35:45 compute-0 nova_compute[259627]: 2025-10-14 09:35:45.486 2 DEBUG nova.compute.manager [req-9d08a026-0b4f-4b6a-9f1e-7d5cc7de7bc9 req-8a32e1f0-5d1f-4ad0-816d-277714396e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-changed-2e84cd2e-9f09-482a-9a06-adcfff2088d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:35:45 compute-0 nova_compute[259627]: 2025-10-14 09:35:45.487 2 DEBUG nova.compute.manager [req-9d08a026-0b4f-4b6a-9f1e-7d5cc7de7bc9 req-8a32e1f0-5d1f-4ad0-816d-277714396e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Refreshing instance network info cache due to event network-changed-2e84cd2e-9f09-482a-9a06-adcfff2088d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:35:45 compute-0 nova_compute[259627]: 2025-10-14 09:35:45.488 2 DEBUG oslo_concurrency.lockutils [req-9d08a026-0b4f-4b6a-9f1e-7d5cc7de7bc9 req-8a32e1f0-5d1f-4ad0-816d-277714396e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:35:45 compute-0 nova_compute[259627]: 2025-10-14 09:35:45.488 2 DEBUG oslo_concurrency.lockutils [req-9d08a026-0b4f-4b6a-9f1e-7d5cc7de7bc9 req-8a32e1f0-5d1f-4ad0-816d-277714396e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:35:45 compute-0 nova_compute[259627]: 2025-10-14 09:35:45.488 2 DEBUG nova.network.neutron [req-9d08a026-0b4f-4b6a-9f1e-7d5cc7de7bc9 req-8a32e1f0-5d1f-4ad0-816d-277714396e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Refreshing network info cache for port 2e84cd2e-9f09-482a-9a06-adcfff2088d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:35:45 compute-0 ceph-mon[74249]: pgmap v2478: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 09:35:46 compute-0 nova_compute[259627]: 2025-10-14 09:35:46.037 2 DEBUG nova.network.neutron [req-9d08a026-0b4f-4b6a-9f1e-7d5cc7de7bc9 req-8a32e1f0-5d1f-4ad0-816d-277714396e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:35:46 compute-0 nova_compute[259627]: 2025-10-14 09:35:46.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:46 compute-0 nova_compute[259627]: 2025-10-14 09:35:46.286 2 DEBUG nova.network.neutron [req-9d08a026-0b4f-4b6a-9f1e-7d5cc7de7bc9 req-8a32e1f0-5d1f-4ad0-816d-277714396e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:35:46 compute-0 nova_compute[259627]: 2025-10-14 09:35:46.301 2 DEBUG oslo_concurrency.lockutils [req-9d08a026-0b4f-4b6a-9f1e-7d5cc7de7bc9 req-8a32e1f0-5d1f-4ad0-816d-277714396e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:35:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:35:47 compute-0 nova_compute[259627]: 2025-10-14 09:35:47.061 2 DEBUG nova.network.neutron [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Successfully updated port: da98587d-dc59-4e4f-bc3f-d3e70dafd21e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:35:47 compute-0 nova_compute[259627]: 2025-10-14 09:35:47.075 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:35:47 compute-0 nova_compute[259627]: 2025-10-14 09:35:47.075 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:35:47 compute-0 nova_compute[259627]: 2025-10-14 09:35:47.076 2 DEBUG nova.network.neutron [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:35:47 compute-0 nova_compute[259627]: 2025-10-14 09:35:47.203 2 DEBUG nova.network.neutron [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:35:47 compute-0 nova_compute[259627]: 2025-10-14 09:35:47.602 2 DEBUG nova.compute.manager [req-2d5c6996-6df7-4547-ae23-1050ac9e5e6b req-001e0e5d-f141-4840-9ae2-556b2224b469 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-changed-da98587d-dc59-4e4f-bc3f-d3e70dafd21e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:35:47 compute-0 nova_compute[259627]: 2025-10-14 09:35:47.602 2 DEBUG nova.compute.manager [req-2d5c6996-6df7-4547-ae23-1050ac9e5e6b req-001e0e5d-f141-4840-9ae2-556b2224b469 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Refreshing instance network info cache due to event network-changed-da98587d-dc59-4e4f-bc3f-d3e70dafd21e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:35:47 compute-0 nova_compute[259627]: 2025-10-14 09:35:47.603 2 DEBUG oslo_concurrency.lockutils [req-2d5c6996-6df7-4547-ae23-1050ac9e5e6b req-001e0e5d-f141-4840-9ae2-556b2224b469 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:35:47 compute-0 ceph-mon[74249]: pgmap v2479: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:35:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:35:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.522 2 DEBUG nova.network.neutron [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updating instance_info_cache with network_info: [{"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.543 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.544 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Instance network_info: |[{"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.544 2 DEBUG oslo_concurrency.lockutils [req-2d5c6996-6df7-4547-ae23-1050ac9e5e6b req-001e0e5d-f141-4840-9ae2-556b2224b469 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.545 2 DEBUG nova.network.neutron [req-2d5c6996-6df7-4547-ae23-1050ac9e5e6b req-001e0e5d-f141-4840-9ae2-556b2224b469 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Refreshing network info cache for port da98587d-dc59-4e4f-bc3f-d3e70dafd21e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.551 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Start _get_guest_xml network_info=[{"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.558 2 WARNING nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.565 2 DEBUG nova.virt.libvirt.host [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.566 2 DEBUG nova.virt.libvirt.host [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.576 2 DEBUG nova.virt.libvirt.host [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.577 2 DEBUG nova.virt.libvirt.host [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.578 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.578 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.579 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.579 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.580 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.580 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.580 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.581 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.581 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.582 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.582 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.582 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.587 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:35:49 compute-0 nova_compute[259627]: 2025-10-14 09:35:49.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:35:49 compute-0 ceph-mon[74249]: pgmap v2480: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:35:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:35:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3750887776' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.039 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.071 2 DEBUG nova.storage.rbd_utils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 2d97e325-a4e8-4595-9697-04219277474d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.077 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:35:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:35:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2885423904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.538 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.541 2 DEBUG nova.virt.libvirt.vif [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:35:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-529814172',display_name='tempest-TestGettingAddress-server-529814172',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-529814172',id=145,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-ynpbprbq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:35:41Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=2d97e325-a4e8-4595-9697-04219277474d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.542 2 DEBUG nova.network.os_vif_util [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.544 2 DEBUG nova.network.os_vif_util [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:70:35,bridge_name='br-int',has_traffic_filtering=True,id=2e84cd2e-9f09-482a-9a06-adcfff2088d5,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e84cd2e-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.545 2 DEBUG nova.virt.libvirt.vif [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:35:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-529814172',display_name='tempest-TestGettingAddress-server-529814172',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-529814172',id=145,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-ynpbprbq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:35:41Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=2d97e325-a4e8-4595-9697-04219277474d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.546 2 DEBUG nova.network.os_vif_util [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.547 2 DEBUG nova.network.os_vif_util [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:8d:23,bridge_name='br-int',has_traffic_filtering=True,id=da98587d-dc59-4e4f-bc3f-d3e70dafd21e,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda98587d-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.550 2 DEBUG nova.objects.instance [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d97e325-a4e8-4595-9697-04219277474d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:35:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.573 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:35:50 compute-0 nova_compute[259627]:   <uuid>2d97e325-a4e8-4595-9697-04219277474d</uuid>
Oct 14 09:35:50 compute-0 nova_compute[259627]:   <name>instance-00000091</name>
Oct 14 09:35:50 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:35:50 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:35:50 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <nova:name>tempest-TestGettingAddress-server-529814172</nova:name>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:35:49</nova:creationTime>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:35:50 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:35:50 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:35:50 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:35:50 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:35:50 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:35:50 compute-0 nova_compute[259627]:         <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 09:35:50 compute-0 nova_compute[259627]:         <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:35:50 compute-0 nova_compute[259627]:         <nova:port uuid="2e84cd2e-9f09-482a-9a06-adcfff2088d5">
Oct 14 09:35:50 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:35:50 compute-0 nova_compute[259627]:         <nova:port uuid="da98587d-dc59-4e4f-bc3f-d3e70dafd21e">
Oct 14 09:35:50 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe61:8d23" ipVersion="6"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe61:8d23" ipVersion="6"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:35:50 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:35:50 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <system>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <entry name="serial">2d97e325-a4e8-4595-9697-04219277474d</entry>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <entry name="uuid">2d97e325-a4e8-4595-9697-04219277474d</entry>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     </system>
Oct 14 09:35:50 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:35:50 compute-0 nova_compute[259627]:   <os>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:   </os>
Oct 14 09:35:50 compute-0 nova_compute[259627]:   <features>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:   </features>
Oct 14 09:35:50 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:35:50 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:35:50 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2d97e325-a4e8-4595-9697-04219277474d_disk">
Oct 14 09:35:50 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       </source>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:35:50 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/2d97e325-a4e8-4595-9697-04219277474d_disk.config">
Oct 14 09:35:50 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       </source>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:35:50 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:20:70:35"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <target dev="tap2e84cd2e-9f"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:61:8d:23"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <target dev="tapda98587d-dc"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d/console.log" append="off"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <video>
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     </video>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:35:50 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:35:50 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:35:50 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:35:50 compute-0 nova_compute[259627]: </domain>
Oct 14 09:35:50 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.575 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Preparing to wait for external event network-vif-plugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.576 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.577 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.578 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.578 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Preparing to wait for external event network-vif-plugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.579 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.579 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.580 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.581 2 DEBUG nova.virt.libvirt.vif [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:35:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-529814172',display_name='tempest-TestGettingAddress-server-529814172',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-529814172',id=145,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-ynpbprbq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:35:41Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=2d97e325-a4e8-4595-9697-04219277474d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.582 2 DEBUG nova.network.os_vif_util [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.583 2 DEBUG nova.network.os_vif_util [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:70:35,bridge_name='br-int',has_traffic_filtering=True,id=2e84cd2e-9f09-482a-9a06-adcfff2088d5,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e84cd2e-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.584 2 DEBUG os_vif [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:70:35,bridge_name='br-int',has_traffic_filtering=True,id=2e84cd2e-9f09-482a-9a06-adcfff2088d5,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e84cd2e-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.587 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.594 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e84cd2e-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.595 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2e84cd2e-9f, col_values=(('external_ids', {'iface-id': '2e84cd2e-9f09-482a-9a06-adcfff2088d5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:70:35', 'vm-uuid': '2d97e325-a4e8-4595-9697-04219277474d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:50 compute-0 NetworkManager[44885]: <info>  [1760434550.5995] manager: (tap2e84cd2e-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/643)
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.611 2 INFO os_vif [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:70:35,bridge_name='br-int',has_traffic_filtering=True,id=2e84cd2e-9f09-482a-9a06-adcfff2088d5,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e84cd2e-9f')
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.613 2 DEBUG nova.virt.libvirt.vif [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:35:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-529814172',display_name='tempest-TestGettingAddress-server-529814172',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-529814172',id=145,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-ynpbprbq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:35:41Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=2d97e325-a4e8-4595-9697-04219277474d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.614 2 DEBUG nova.network.os_vif_util [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.615 2 DEBUG nova.network.os_vif_util [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:8d:23,bridge_name='br-int',has_traffic_filtering=True,id=da98587d-dc59-4e4f-bc3f-d3e70dafd21e,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda98587d-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.616 2 DEBUG os_vif [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:8d:23,bridge_name='br-int',has_traffic_filtering=True,id=da98587d-dc59-4e4f-bc3f-d3e70dafd21e,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda98587d-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.618 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.618 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.622 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda98587d-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.623 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda98587d-dc, col_values=(('external_ids', {'iface-id': 'da98587d-dc59-4e4f-bc3f-d3e70dafd21e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:8d:23', 'vm-uuid': '2d97e325-a4e8-4595-9697-04219277474d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:50 compute-0 NetworkManager[44885]: <info>  [1760434550.6270] manager: (tapda98587d-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/644)
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.637 2 INFO os_vif [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:8d:23,bridge_name='br-int',has_traffic_filtering=True,id=da98587d-dc59-4e4f-bc3f-d3e70dafd21e,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda98587d-dc')
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.720 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.720 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.721 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:20:70:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.721 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:61:8d:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.721 2 INFO nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Using config drive
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.748 2 DEBUG nova.storage.rbd_utils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 2d97e325-a4e8-4595-9697-04219277474d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:35:50 compute-0 nova_compute[259627]: 2025-10-14 09:35:50.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:35:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3750887776' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:35:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2885423904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.018 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.018 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.019 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.268 2 INFO nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Creating config drive at /var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d/disk.config
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.277 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnv_87gl_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.434 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnv_87gl_" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.471 2 DEBUG nova.storage.rbd_utils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 2d97e325-a4e8-4595-9697-04219277474d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.476 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d/disk.config 2d97e325-a4e8-4595-9697-04219277474d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:35:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:35:51 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/638846683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.522 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.528 2 DEBUG nova.network.neutron [req-2d5c6996-6df7-4547-ae23-1050ac9e5e6b req-001e0e5d-f141-4840-9ae2-556b2224b469 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updated VIF entry in instance network info cache for port da98587d-dc59-4e4f-bc3f-d3e70dafd21e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.529 2 DEBUG nova.network.neutron [req-2d5c6996-6df7-4547-ae23-1050ac9e5e6b req-001e0e5d-f141-4840-9ae2-556b2224b469 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updating instance_info_cache with network_info: [{"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.545 2 DEBUG oslo_concurrency.lockutils [req-2d5c6996-6df7-4547-ae23-1050ac9e5e6b req-001e0e5d-f141-4840-9ae2-556b2224b469 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.582 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.582 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.587 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.587 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.711 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d/disk.config 2d97e325-a4e8-4595-9697-04219277474d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.711 2 INFO nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Deleting local config drive /var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d/disk.config because it was imported into RBD.
Oct 14 09:35:51 compute-0 NetworkManager[44885]: <info>  [1760434551.7844] manager: (tap2e84cd2e-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/645)
Oct 14 09:35:51 compute-0 kernel: tap2e84cd2e-9f: entered promiscuous mode
Oct 14 09:35:51 compute-0 ovn_controller[152662]: 2025-10-14T09:35:51Z|01575|binding|INFO|Claiming lport 2e84cd2e-9f09-482a-9a06-adcfff2088d5 for this chassis.
Oct 14 09:35:51 compute-0 ovn_controller[152662]: 2025-10-14T09:35:51Z|01576|binding|INFO|2e84cd2e-9f09-482a-9a06-adcfff2088d5: Claiming fa:16:3e:20:70:35 10.100.0.8
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.801 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:70:35 10.100.0.8'], port_security=['fa:16:3e:20:70:35 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '2d97e325-a4e8-4595-9697-04219277474d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4ac22ee6-a60e-44c3-8db4-e0f5bf22a77e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09c5c036-dfc9-4826-bc59-c008b28bd97a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2e84cd2e-9f09-482a-9a06-adcfff2088d5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:35:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.802 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2e84cd2e-9f09-482a-9a06-adcfff2088d5 in datapath 9cb6ee52-3808-410f-9854-68ac8ffadab8 bound to our chassis
Oct 14 09:35:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.804 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9cb6ee52-3808-410f-9854-68ac8ffadab8
Oct 14 09:35:51 compute-0 ovn_controller[152662]: 2025-10-14T09:35:51Z|01577|binding|INFO|Setting lport 2e84cd2e-9f09-482a-9a06-adcfff2088d5 ovn-installed in OVS
Oct 14 09:35:51 compute-0 ovn_controller[152662]: 2025-10-14T09:35:51Z|01578|binding|INFO|Setting lport 2e84cd2e-9f09-482a-9a06-adcfff2088d5 up in Southbound
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:51 compute-0 NetworkManager[44885]: <info>  [1760434551.8246] manager: (tapda98587d-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/646)
Oct 14 09:35:51 compute-0 kernel: tapda98587d-dc: entered promiscuous mode
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:51 compute-0 ovn_controller[152662]: 2025-10-14T09:35:51Z|01579|binding|INFO|Claiming lport da98587d-dc59-4e4f-bc3f-d3e70dafd21e for this chassis.
Oct 14 09:35:51 compute-0 ovn_controller[152662]: 2025-10-14T09:35:51Z|01580|binding|INFO|da98587d-dc59-4e4f-bc3f-d3e70dafd21e: Claiming fa:16:3e:61:8d:23 2001:db8:0:1:f816:3eff:fe61:8d23 2001:db8::f816:3eff:fe61:8d23
Oct 14 09:35:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.838 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[57f606ef-c5b3-4d12-b114-5858ad64a87b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.844 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:8d:23 2001:db8:0:1:f816:3eff:fe61:8d23 2001:db8::f816:3eff:fe61:8d23'], port_security=['fa:16:3e:61:8d:23 2001:db8:0:1:f816:3eff:fe61:8d23 2001:db8::f816:3eff:fe61:8d23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe61:8d23/64 2001:db8::f816:3eff:fe61:8d23/64', 'neutron:device_id': '2d97e325-a4e8-4595-9697-04219277474d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4ac22ee6-a60e-44c3-8db4-e0f5bf22a77e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=612e2e91-84a5-412c-996c-976c512895aa, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=da98587d-dc59-4e4f-bc3f-d3e70dafd21e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:51 compute-0 ovn_controller[152662]: 2025-10-14T09:35:51Z|01581|binding|INFO|Setting lport da98587d-dc59-4e4f-bc3f-d3e70dafd21e ovn-installed in OVS
Oct 14 09:35:51 compute-0 ovn_controller[152662]: 2025-10-14T09:35:51Z|01582|binding|INFO|Setting lport da98587d-dc59-4e4f-bc3f-d3e70dafd21e up in Southbound
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:51 compute-0 systemd-udevd[410022]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:35:51 compute-0 systemd-udevd[410020]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:35:51 compute-0 systemd-machined[214636]: New machine qemu-178-instance-00000091.
Oct 14 09:35:51 compute-0 systemd[1]: Started Virtual Machine qemu-178-instance-00000091.
Oct 14 09:35:51 compute-0 NetworkManager[44885]: <info>  [1760434551.8944] device (tapda98587d-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:35:51 compute-0 NetworkManager[44885]: <info>  [1760434551.8955] device (tapda98587d-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:35:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.891 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c8bf99-5528-4876-9942-fa8299a1e7c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:51 compute-0 NetworkManager[44885]: <info>  [1760434551.8985] device (tap2e84cd2e-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:35:51 compute-0 NetworkManager[44885]: <info>  [1760434551.8992] device (tap2e84cd2e-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:35:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.900 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[84cef066-c062-40c5-b0dc-221e3ad38661]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.933 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c053c90d-ea06-456e-8c51-fbd7693e8f18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:51 compute-0 podman[409990]: 2025-10-14 09:35:51.936926423 +0000 UTC m=+0.115271990 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:35:51 compute-0 podman[409989]: 2025-10-14 09:35:51.940663894 +0000 UTC m=+0.121239156 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009)
Oct 14 09:35:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.956 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7a0880be-af6c-4b31-8430-2551d92848d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9cb6ee52-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:65:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829416, 'reachable_time': 37698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410044, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.975 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[77a3e216-62cc-4b36-a0fc-2a0086e26821]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9cb6ee52-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829431, 'tstamp': 829431}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410048, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9cb6ee52-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829433, 'tstamp': 829433}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410048, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.977 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cb6ee52-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.980 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9cb6ee52-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.980 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:35:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.980 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9cb6ee52-30, col_values=(('external_ids', {'iface-id': 'e170cfc7-b9d3-441b-8041-f001d366d5cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.981 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.983 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:35:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.983 162547 INFO neutron.agent.ovn.metadata.agent [-] Port da98587d-dc59-4e4f-bc3f-d3e70dafd21e in datapath 0d3b36cd-f345-4ba4-8ea7-29b299ab0543 unbound from our chassis
Oct 14 09:35:51 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.984 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0d3b36cd-f345-4ba4-8ea7-29b299ab0543
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.984 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3310MB free_disk=59.92197799682617GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.985 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:51 compute-0 nova_compute[259627]: 2025-10-14 09:35:51.985 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:51 compute-0 ceph-mon[74249]: pgmap v2481: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:35:51 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/638846683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:35:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.006 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be8131ab-f5ec-4082-9ff4-7347a42a36e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.048 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[79a8a004-2f45-42fa-8c17-4a30c0abe5c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.051 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd4cac4-a400-4cb9-8d03-c52279f2a2d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.081 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance c977bdc6-8dd7-4cb4-b50d-28e7313a16e8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.081 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 2d97e325-a4e8-4595-9697-04219277474d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.081 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.081 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:35:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.097 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c8864a82-aa15-4ded-b864-07d134af108e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.115 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b4fd32-9105-4947-aff8-2fcabf9534e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d3b36cd-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:2e:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 24, 'tx_packets': 4, 'rx_bytes': 2216, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 24, 'tx_packets': 4, 'rx_bytes': 2216, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829526, 'reachable_time': 38835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 24, 'inoctets': 1880, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 24, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1880, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 24, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410055, 'error': None, 'target': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.136 2 DEBUG nova.compute.manager [req-6626413c-d412-4850-b94a-0a2c04df7c6c req-8622cd40-9964-48bd-833e-a1fba29477d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-plugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.136 2 DEBUG oslo_concurrency.lockutils [req-6626413c-d412-4850-b94a-0a2c04df7c6c req-8622cd40-9964-48bd-833e-a1fba29477d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.136 2 DEBUG oslo_concurrency.lockutils [req-6626413c-d412-4850-b94a-0a2c04df7c6c req-8622cd40-9964-48bd-833e-a1fba29477d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.137 2 DEBUG oslo_concurrency.lockutils [req-6626413c-d412-4850-b94a-0a2c04df7c6c req-8622cd40-9964-48bd-833e-a1fba29477d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.137 2 DEBUG nova.compute.manager [req-6626413c-d412-4850-b94a-0a2c04df7c6c req-8622cd40-9964-48bd-833e-a1fba29477d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Processing event network-vif-plugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:35:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.139 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ceac88e-b746-4276-a26d-8dc197eeb0ed]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0d3b36cd-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829539, 'tstamp': 829539}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410056, 'error': None, 'target': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:35:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.141 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d3b36cd-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.145 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d3b36cd-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.146 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:35:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.146 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0d3b36cd-f0, col_values=(('external_ids', {'iface-id': '1caef19e-e76a-434d-84c0-dc762554a564'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:35:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.147 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.156 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:35:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:35:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:35:52 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1258586033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.664 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.670 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.692 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.727 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.727 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.782 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434552.781984, 2d97e325-a4e8-4595-9697-04219277474d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.783 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] VM Started (Lifecycle Event)
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.807 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.811 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434552.7821534, 2d97e325-a4e8-4595-9697-04219277474d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.812 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] VM Paused (Lifecycle Event)
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.838 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.841 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:35:52 compute-0 nova_compute[259627]: 2025-10-14 09:35:52.863 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:35:53 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1258586033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:35:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:35:53 compute-0 nova_compute[259627]: 2025-10-14 09:35:53.727 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:35:53 compute-0 nova_compute[259627]: 2025-10-14 09:35:53.729 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:35:53 compute-0 nova_compute[259627]: 2025-10-14 09:35:53.729 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:35:53 compute-0 nova_compute[259627]: 2025-10-14 09:35:53.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:35:53 compute-0 nova_compute[259627]: 2025-10-14 09:35:53.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:35:53 compute-0 nova_compute[259627]: 2025-10-14 09:35:53.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.006 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 14 09:35:54 compute-0 ceph-mon[74249]: pgmap v2482: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.239 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.239 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.240 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.240 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c977bdc6-8dd7-4cb4-b50d-28e7313a16e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.287 2 DEBUG nova.compute.manager [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-plugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.287 2 DEBUG oslo_concurrency.lockutils [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.287 2 DEBUG oslo_concurrency.lockutils [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.288 2 DEBUG oslo_concurrency.lockutils [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.288 2 DEBUG nova.compute.manager [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] No event matching network-vif-plugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 in dict_keys([('network-vif-plugged', 'da98587d-dc59-4e4f-bc3f-d3e70dafd21e')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.288 2 WARNING nova.compute.manager [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received unexpected event network-vif-plugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 for instance with vm_state building and task_state spawning.
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.289 2 DEBUG nova.compute.manager [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-plugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.289 2 DEBUG oslo_concurrency.lockutils [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.289 2 DEBUG oslo_concurrency.lockutils [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.290 2 DEBUG oslo_concurrency.lockutils [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.290 2 DEBUG nova.compute.manager [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Processing event network-vif-plugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.291 2 DEBUG nova.compute.manager [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-plugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.291 2 DEBUG oslo_concurrency.lockutils [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.291 2 DEBUG oslo_concurrency.lockutils [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.291 2 DEBUG oslo_concurrency.lockutils [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.292 2 DEBUG nova.compute.manager [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] No waiting events found dispatching network-vif-plugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.292 2 WARNING nova.compute.manager [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received unexpected event network-vif-plugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e for instance with vm_state building and task_state spawning.
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.293 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.302 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434554.2972925, 2d97e325-a4e8-4595-9697-04219277474d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.302 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] VM Resumed (Lifecycle Event)
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.306 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.312 2 INFO nova.virt.libvirt.driver [-] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Instance spawned successfully.
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.312 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.346 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.356 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.366 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.366 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.367 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.368 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.369 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.370 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.378 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.441 2 INFO nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Took 13.15 seconds to spawn the instance on the hypervisor.
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.441 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.509 2 INFO nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Took 14.13 seconds to build instance.
Oct 14 09:35:54 compute-0 nova_compute[259627]: 2025-10-14 09:35:54.534 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:35:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:35:55 compute-0 nova_compute[259627]: 2025-10-14 09:35:55.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:56 compute-0 ceph-mon[74249]: pgmap v2483: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:35:56 compute-0 nova_compute[259627]: 2025-10-14 09:35:56.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:35:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 790 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Oct 14 09:35:58 compute-0 ceph-mon[74249]: pgmap v2484: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 790 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Oct 14 09:35:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:35:58 compute-0 nova_compute[259627]: 2025-10-14 09:35:58.226 2 DEBUG nova.compute.manager [req-7bb3ef8e-8d9d-4299-832b-30573f85efeb req-bfd0d962-363a-446a-8d84-1f76a03eb061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-changed-2e84cd2e-9f09-482a-9a06-adcfff2088d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:35:58 compute-0 nova_compute[259627]: 2025-10-14 09:35:58.226 2 DEBUG nova.compute.manager [req-7bb3ef8e-8d9d-4299-832b-30573f85efeb req-bfd0d962-363a-446a-8d84-1f76a03eb061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Refreshing instance network info cache due to event network-changed-2e84cd2e-9f09-482a-9a06-adcfff2088d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:35:58 compute-0 nova_compute[259627]: 2025-10-14 09:35:58.228 2 DEBUG oslo_concurrency.lockutils [req-7bb3ef8e-8d9d-4299-832b-30573f85efeb req-bfd0d962-363a-446a-8d84-1f76a03eb061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:35:58 compute-0 nova_compute[259627]: 2025-10-14 09:35:58.228 2 DEBUG oslo_concurrency.lockutils [req-7bb3ef8e-8d9d-4299-832b-30573f85efeb req-bfd0d962-363a-446a-8d84-1f76a03eb061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:35:58 compute-0 nova_compute[259627]: 2025-10-14 09:35:58.229 2 DEBUG nova.network.neutron [req-7bb3ef8e-8d9d-4299-832b-30573f85efeb req-bfd0d962-363a-446a-8d84-1f76a03eb061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Refreshing network info cache for port 2e84cd2e-9f09-482a-9a06-adcfff2088d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:35:58 compute-0 nova_compute[259627]: 2025-10-14 09:35:58.289 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updating instance_info_cache with network_info: [{"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:35:58 compute-0 nova_compute[259627]: 2025-10-14 09:35:58.315 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:35:58 compute-0 nova_compute[259627]: 2025-10-14 09:35:58.315 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:35:58 compute-0 nova_compute[259627]: 2025-10-14 09:35:58.316 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:35:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 773 KiB/s rd, 12 KiB/s wr, 36 op/s
Oct 14 09:35:58 compute-0 nova_compute[259627]: 2025-10-14 09:35:58.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:35:59 compute-0 nova_compute[259627]: 2025-10-14 09:35:59.737 2 DEBUG nova.network.neutron [req-7bb3ef8e-8d9d-4299-832b-30573f85efeb req-bfd0d962-363a-446a-8d84-1f76a03eb061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updated VIF entry in instance network info cache for port 2e84cd2e-9f09-482a-9a06-adcfff2088d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:35:59 compute-0 nova_compute[259627]: 2025-10-14 09:35:59.737 2 DEBUG nova.network.neutron [req-7bb3ef8e-8d9d-4299-832b-30573f85efeb req-bfd0d962-363a-446a-8d84-1f76a03eb061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updating instance_info_cache with network_info: [{"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:35:59 compute-0 nova_compute[259627]: 2025-10-14 09:35:59.768 2 DEBUG oslo_concurrency.lockutils [req-7bb3ef8e-8d9d-4299-832b-30573f85efeb req-bfd0d962-363a-446a-8d84-1f76a03eb061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:36:00 compute-0 ceph-mon[74249]: pgmap v2485: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 773 KiB/s rd, 12 KiB/s wr, 36 op/s
Oct 14 09:36:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 14 09:36:00 compute-0 nova_compute[259627]: 2025-10-14 09:36:00.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:01 compute-0 nova_compute[259627]: 2025-10-14 09:36:01.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:02 compute-0 ceph-mon[74249]: pgmap v2486: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 14 09:36:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 14 09:36:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:36:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:36:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:36:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:36:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:36:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:36:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:36:04 compute-0 ceph-mon[74249]: pgmap v2487: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 14 09:36:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 14 09:36:04 compute-0 podman[410124]: 2025-10-14 09:36:04.695178087 +0000 UTC m=+0.088794730 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:36:04 compute-0 podman[410123]: 2025-10-14 09:36:04.732546914 +0000 UTC m=+0.129921559 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 09:36:04 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Oct 14 09:36:05 compute-0 nova_compute[259627]: 2025-10-14 09:36:05.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:36:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3099769795' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:36:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:36:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3099769795' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:36:06 compute-0 ceph-mon[74249]: pgmap v2488: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 14 09:36:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3099769795' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:36:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3099769795' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:36:06 compute-0 nova_compute[259627]: 2025-10-14 09:36:06.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 188 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Oct 14 09:36:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:07.052 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:36:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:07.053 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:36:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:07.053 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:36:07 compute-0 ceph-mon[74249]: pgmap v2489: 305 pgs: 305 active+clean; 188 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Oct 14 09:36:07 compute-0 ovn_controller[152662]: 2025-10-14T09:36:07Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:70:35 10.100.0.8
Oct 14 09:36:07 compute-0 ovn_controller[152662]: 2025-10-14T09:36:07Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:70:35 10.100.0.8
Oct 14 09:36:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:36:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 188 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.0 MiB/s wr, 69 op/s
Oct 14 09:36:09 compute-0 ceph-mon[74249]: pgmap v2490: 305 pgs: 305 active+clean; 188 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.0 MiB/s wr, 69 op/s
Oct 14 09:36:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Oct 14 09:36:10 compute-0 nova_compute[259627]: 2025-10-14 09:36:10.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:11 compute-0 nova_compute[259627]: 2025-10-14 09:36:11.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:11 compute-0 ceph-mon[74249]: pgmap v2491: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Oct 14 09:36:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:36:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:36:13 compute-0 ceph-mon[74249]: pgmap v2492: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:36:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:36:15 compute-0 nova_compute[259627]: 2025-10-14 09:36:15.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:15 compute-0 ceph-mon[74249]: pgmap v2493: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:36:16 compute-0 nova_compute[259627]: 2025-10-14 09:36:16.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.243 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.246 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.247 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.481 2 DEBUG nova.compute.manager [req-44233c3b-9c61-4eaa-a06d-fbc414203847 req-c63b6b61-0bb7-43de-a8c5-e25f164f94a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-changed-2e84cd2e-9f09-482a-9a06-adcfff2088d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.481 2 DEBUG nova.compute.manager [req-44233c3b-9c61-4eaa-a06d-fbc414203847 req-c63b6b61-0bb7-43de-a8c5-e25f164f94a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Refreshing instance network info cache due to event network-changed-2e84cd2e-9f09-482a-9a06-adcfff2088d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.482 2 DEBUG oslo_concurrency.lockutils [req-44233c3b-9c61-4eaa-a06d-fbc414203847 req-c63b6b61-0bb7-43de-a8c5-e25f164f94a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.482 2 DEBUG oslo_concurrency.lockutils [req-44233c3b-9c61-4eaa-a06d-fbc414203847 req-c63b6b61-0bb7-43de-a8c5-e25f164f94a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.482 2 DEBUG nova.network.neutron [req-44233c3b-9c61-4eaa-a06d-fbc414203847 req-c63b6b61-0bb7-43de-a8c5-e25f164f94a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Refreshing network info cache for port 2e84cd2e-9f09-482a-9a06-adcfff2088d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.536 2 DEBUG oslo_concurrency.lockutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.536 2 DEBUG oslo_concurrency.lockutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.537 2 DEBUG oslo_concurrency.lockutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.538 2 DEBUG oslo_concurrency.lockutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.538 2 DEBUG oslo_concurrency.lockutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.540 2 INFO nova.compute.manager [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Terminating instance
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.542 2 DEBUG nova.compute.manager [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:36:17 compute-0 kernel: tap2e84cd2e-9f (unregistering): left promiscuous mode
Oct 14 09:36:17 compute-0 NetworkManager[44885]: <info>  [1760434577.5993] device (tap2e84cd2e-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:17 compute-0 ovn_controller[152662]: 2025-10-14T09:36:17Z|01583|binding|INFO|Releasing lport 2e84cd2e-9f09-482a-9a06-adcfff2088d5 from this chassis (sb_readonly=0)
Oct 14 09:36:17 compute-0 ovn_controller[152662]: 2025-10-14T09:36:17Z|01584|binding|INFO|Setting lport 2e84cd2e-9f09-482a-9a06-adcfff2088d5 down in Southbound
Oct 14 09:36:17 compute-0 ovn_controller[152662]: 2025-10-14T09:36:17Z|01585|binding|INFO|Removing iface tap2e84cd2e-9f ovn-installed in OVS
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.622 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:70:35 10.100.0.8'], port_security=['fa:16:3e:20:70:35 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '2d97e325-a4e8-4595-9697-04219277474d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4ac22ee6-a60e-44c3-8db4-e0f5bf22a77e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09c5c036-dfc9-4826-bc59-c008b28bd97a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2e84cd2e-9f09-482a-9a06-adcfff2088d5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.623 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2e84cd2e-9f09-482a-9a06-adcfff2088d5 in datapath 9cb6ee52-3808-410f-9854-68ac8ffadab8 unbound from our chassis
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.624 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9cb6ee52-3808-410f-9854-68ac8ffadab8
Oct 14 09:36:17 compute-0 kernel: tapda98587d-dc (unregistering): left promiscuous mode
Oct 14 09:36:17 compute-0 NetworkManager[44885]: <info>  [1760434577.6415] device (tapda98587d-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.651 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[63626a20-eebd-4e4d-988c-c12c6f1716a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:17 compute-0 ovn_controller[152662]: 2025-10-14T09:36:17Z|01586|binding|INFO|Releasing lport da98587d-dc59-4e4f-bc3f-d3e70dafd21e from this chassis (sb_readonly=0)
Oct 14 09:36:17 compute-0 ovn_controller[152662]: 2025-10-14T09:36:17Z|01587|binding|INFO|Setting lport da98587d-dc59-4e4f-bc3f-d3e70dafd21e down in Southbound
Oct 14 09:36:17 compute-0 ovn_controller[152662]: 2025-10-14T09:36:17Z|01588|binding|INFO|Removing iface tapda98587d-dc ovn-installed in OVS
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.664 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:8d:23 2001:db8:0:1:f816:3eff:fe61:8d23 2001:db8::f816:3eff:fe61:8d23'], port_security=['fa:16:3e:61:8d:23 2001:db8:0:1:f816:3eff:fe61:8d23 2001:db8::f816:3eff:fe61:8d23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe61:8d23/64 2001:db8::f816:3eff:fe61:8d23/64', 'neutron:device_id': '2d97e325-a4e8-4595-9697-04219277474d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4ac22ee6-a60e-44c3-8db4-e0f5bf22a77e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=612e2e91-84a5-412c-996c-976c512895aa, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=da98587d-dc59-4e4f-bc3f-d3e70dafd21e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:36:17 compute-0 ceph-mon[74249]: pgmap v2494: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.689 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d255ffaa-5c28-422f-ad7c-01ab600a02b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.692 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a1e540-5834-430f-b017-a126ef0324fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.722 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b126aa75-48a3-4bfa-b12a-97a4df4a4875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:17 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000091.scope: Deactivated successfully.
Oct 14 09:36:17 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000091.scope: Consumed 13.637s CPU time.
Oct 14 09:36:17 compute-0 systemd-machined[214636]: Machine qemu-178-instance-00000091 terminated.
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.740 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0942a086-653a-46f3-919b-6f85d25b4392]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9cb6ee52-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:65:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829416, 'reachable_time': 37698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410185, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.759 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3b8e9e6c-7b76-4255-8caf-3e28570bd319]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9cb6ee52-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829431, 'tstamp': 829431}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410186, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9cb6ee52-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829433, 'tstamp': 829433}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410186, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.762 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cb6ee52-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.771 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9cb6ee52-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.771 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.771 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9cb6ee52-30, col_values=(('external_ids', {'iface-id': 'e170cfc7-b9d3-441b-8041-f001d366d5cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.772 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.773 162547 INFO neutron.agent.ovn.metadata.agent [-] Port da98587d-dc59-4e4f-bc3f-d3e70dafd21e in datapath 0d3b36cd-f345-4ba4-8ea7-29b299ab0543 unbound from our chassis
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.774 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0d3b36cd-f345-4ba4-8ea7-29b299ab0543
Oct 14 09:36:17 compute-0 NetworkManager[44885]: <info>  [1760434577.7779] manager: (tapda98587d-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/647)
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.789 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a3038f5a-94d1-4975-8718-5ffa468ce33e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.797 2 INFO nova.virt.libvirt.driver [-] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Instance destroyed successfully.
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.797 2 DEBUG nova.objects.instance [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid 2d97e325-a4e8-4595-9697-04219277474d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.812 2 DEBUG nova.virt.libvirt.vif [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:35:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-529814172',display_name='tempest-TestGettingAddress-server-529814172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-529814172',id=145,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:35:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-ynpbprbq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:35:54Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=2d97e325-a4e8-4595-9697-04219277474d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.813 2 DEBUG nova.network.os_vif_util [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.814 2 DEBUG nova.network.os_vif_util [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:20:70:35,bridge_name='br-int',has_traffic_filtering=True,id=2e84cd2e-9f09-482a-9a06-adcfff2088d5,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e84cd2e-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.816 2 DEBUG os_vif [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:70:35,bridge_name='br-int',has_traffic_filtering=True,id=2e84cd2e-9f09-482a-9a06-adcfff2088d5,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e84cd2e-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.817 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e84cd2e-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.825 2 INFO os_vif [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:70:35,bridge_name='br-int',has_traffic_filtering=True,id=2e84cd2e-9f09-482a-9a06-adcfff2088d5,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e84cd2e-9f')
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.826 2 DEBUG nova.virt.libvirt.vif [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:35:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-529814172',display_name='tempest-TestGettingAddress-server-529814172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-529814172',id=145,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:35:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-ynpbprbq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:35:54Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=2d97e325-a4e8-4595-9697-04219277474d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.825 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[77066bd6-ed79-4b08-8a9f-58a35152f82a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.826 2 DEBUG nova.network.os_vif_util [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.827 2 DEBUG nova.network.os_vif_util [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:8d:23,bridge_name='br-int',has_traffic_filtering=True,id=da98587d-dc59-4e4f-bc3f-d3e70dafd21e,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda98587d-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.827 2 DEBUG os_vif [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:8d:23,bridge_name='br-int',has_traffic_filtering=True,id=da98587d-dc59-4e4f-bc3f-d3e70dafd21e,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda98587d-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.828 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda98587d-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.829 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c8993c-585e-4362-806d-25559b762f6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.832 2 INFO os_vif [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:8d:23,bridge_name='br-int',has_traffic_filtering=True,id=da98587d-dc59-4e4f-bc3f-d3e70dafd21e,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda98587d-dc')
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.854 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f704df57-4402-4d5b-a7ba-97d58707e2c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.870 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0100be30-b603-452b-90f0-1b686bce45b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d3b36cd-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:2e:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 5, 'rx_bytes': 3600, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 5, 'rx_bytes': 3600, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829526, 'reachable_time': 38835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 40, 'inoctets': 3040, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 40, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 3040, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 40, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410230, 'error': None, 'target': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.890 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b00df92c-e283-4f35-8ec1-4a296ce2d4f8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0d3b36cd-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829539, 'tstamp': 829539}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410234, 'error': None, 'target': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.892 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d3b36cd-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:17 compute-0 nova_compute[259627]: 2025-10-14 09:36:17.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.895 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d3b36cd-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.895 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.896 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0d3b36cd-f0, col_values=(('external_ids', {'iface-id': '1caef19e-e76a-434d-84c0-dc762554a564'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:36:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.896 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:36:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:36:18 compute-0 nova_compute[259627]: 2025-10-14 09:36:18.210 2 INFO nova.virt.libvirt.driver [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Deleting instance files /var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d_del
Oct 14 09:36:18 compute-0 nova_compute[259627]: 2025-10-14 09:36:18.211 2 INFO nova.virt.libvirt.driver [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Deletion of /var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d_del complete
Oct 14 09:36:18 compute-0 nova_compute[259627]: 2025-10-14 09:36:18.371 2 INFO nova.compute.manager [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 14 09:36:18 compute-0 nova_compute[259627]: 2025-10-14 09:36:18.371 2 DEBUG oslo.service.loopingcall [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:36:18 compute-0 nova_compute[259627]: 2025-10-14 09:36:18.372 2 DEBUG nova.compute.manager [-] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:36:18 compute-0 nova_compute[259627]: 2025-10-14 09:36:18.372 2 DEBUG nova.network.neutron [-] [instance: 2d97e325-a4e8-4595-9697-04219277474d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:36:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 224 KiB/s rd, 167 KiB/s wr, 33 op/s
Oct 14 09:36:18 compute-0 nova_compute[259627]: 2025-10-14 09:36:18.704 2 DEBUG nova.network.neutron [req-44233c3b-9c61-4eaa-a06d-fbc414203847 req-c63b6b61-0bb7-43de-a8c5-e25f164f94a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updated VIF entry in instance network info cache for port 2e84cd2e-9f09-482a-9a06-adcfff2088d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:36:18 compute-0 nova_compute[259627]: 2025-10-14 09:36:18.705 2 DEBUG nova.network.neutron [req-44233c3b-9c61-4eaa-a06d-fbc414203847 req-c63b6b61-0bb7-43de-a8c5-e25f164f94a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updating instance_info_cache with network_info: [{"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:36:18 compute-0 nova_compute[259627]: 2025-10-14 09:36:18.723 2 DEBUG oslo_concurrency.lockutils [req-44233c3b-9c61-4eaa-a06d-fbc414203847 req-c63b6b61-0bb7-43de-a8c5-e25f164f94a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.169 2 DEBUG nova.compute.manager [req-1d901890-2861-4d96-aa74-e0bc476eff61 req-06e24f1d-61c3-4731-ac29-76e16d2d9ba6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-deleted-2e84cd2e-9f09-482a-9a06-adcfff2088d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.169 2 INFO nova.compute.manager [req-1d901890-2861-4d96-aa74-e0bc476eff61 req-06e24f1d-61c3-4731-ac29-76e16d2d9ba6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Neutron deleted interface 2e84cd2e-9f09-482a-9a06-adcfff2088d5; detaching it from the instance and deleting it from the info cache
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.170 2 DEBUG nova.network.neutron [req-1d901890-2861-4d96-aa74-e0bc476eff61 req-06e24f1d-61c3-4731-ac29-76e16d2d9ba6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updating instance_info_cache with network_info: [{"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.189 2 DEBUG nova.compute.manager [req-1d901890-2861-4d96-aa74-e0bc476eff61 req-06e24f1d-61c3-4731-ac29-76e16d2d9ba6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Detach interface failed, port_id=2e84cd2e-9f09-482a-9a06-adcfff2088d5, reason: Instance 2d97e325-a4e8-4595-9697-04219277474d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.435 2 DEBUG nova.network.neutron [-] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.451 2 INFO nova.compute.manager [-] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Took 1.08 seconds to deallocate network for instance.
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.499 2 DEBUG oslo_concurrency.lockutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.500 2 DEBUG oslo_concurrency.lockutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.580 2 DEBUG oslo_concurrency.processutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.624 2 DEBUG nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-unplugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.624 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.625 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.625 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.625 2 DEBUG nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] No waiting events found dispatching network-vif-unplugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.625 2 WARNING nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received unexpected event network-vif-unplugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 for instance with vm_state deleted and task_state None.
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.625 2 DEBUG nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-plugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.625 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.626 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.626 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.626 2 DEBUG nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] No waiting events found dispatching network-vif-plugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.626 2 WARNING nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received unexpected event network-vif-plugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 for instance with vm_state deleted and task_state None.
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.626 2 DEBUG nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-unplugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.627 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.627 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.627 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.627 2 DEBUG nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] No waiting events found dispatching network-vif-unplugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.627 2 WARNING nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received unexpected event network-vif-unplugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e for instance with vm_state deleted and task_state None.
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.628 2 DEBUG nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-plugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.628 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.628 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.628 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.628 2 DEBUG nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] No waiting events found dispatching network-vif-plugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:36:19 compute-0 nova_compute[259627]: 2025-10-14 09:36:19.628 2 WARNING nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received unexpected event network-vif-plugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e for instance with vm_state deleted and task_state None.
Oct 14 09:36:19 compute-0 ceph-mon[74249]: pgmap v2495: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 224 KiB/s rd, 167 KiB/s wr, 33 op/s
Oct 14 09:36:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:36:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3402120357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:36:20 compute-0 nova_compute[259627]: 2025-10-14 09:36:20.081 2 DEBUG oslo_concurrency.processutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:36:20 compute-0 nova_compute[259627]: 2025-10-14 09:36:20.086 2 DEBUG nova.compute.provider_tree [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:36:20 compute-0 nova_compute[259627]: 2025-10-14 09:36:20.104 2 DEBUG nova.scheduler.client.report [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:36:20 compute-0 nova_compute[259627]: 2025-10-14 09:36:20.132 2 DEBUG oslo_concurrency.lockutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:36:20 compute-0 nova_compute[259627]: 2025-10-14 09:36:20.174 2 INFO nova.scheduler.client.report [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance 2d97e325-a4e8-4595-9697-04219277474d
Oct 14 09:36:20 compute-0 nova_compute[259627]: 2025-10-14 09:36:20.239 2 DEBUG oslo_concurrency.lockutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:36:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 173 KiB/s wr, 62 op/s
Oct 14 09:36:20 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3402120357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.236 2 DEBUG nova.compute.manager [req-79d85022-897e-408a-bbad-94824d993d94 req-0e8d6cf2-463c-4f84-af22-8af669bcb41a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-deleted-da98587d-dc59-4e4f-bc3f-d3e70dafd21e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.564 2 DEBUG oslo_concurrency.lockutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.565 2 DEBUG oslo_concurrency.lockutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.566 2 DEBUG oslo_concurrency.lockutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.566 2 DEBUG oslo_concurrency.lockutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.566 2 DEBUG oslo_concurrency.lockutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.567 2 INFO nova.compute.manager [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Terminating instance
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.568 2 DEBUG nova.compute.manager [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:36:21 compute-0 kernel: tap13d4f68b-23 (unregistering): left promiscuous mode
Oct 14 09:36:21 compute-0 NetworkManager[44885]: <info>  [1760434581.6211] device (tap13d4f68b-23): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:36:21 compute-0 ovn_controller[152662]: 2025-10-14T09:36:21Z|01589|binding|INFO|Releasing lport 13d4f68b-234a-4c46-9e1d-79f28a907bf2 from this chassis (sb_readonly=0)
Oct 14 09:36:21 compute-0 ovn_controller[152662]: 2025-10-14T09:36:21Z|01590|binding|INFO|Setting lport 13d4f68b-234a-4c46-9e1d-79f28a907bf2 down in Southbound
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:21 compute-0 ovn_controller[152662]: 2025-10-14T09:36:21Z|01591|binding|INFO|Removing iface tap13d4f68b-23 ovn-installed in OVS
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.679 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:ad:d7 10.100.0.5'], port_security=['fa:16:3e:2a:ad:d7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c977bdc6-8dd7-4cb4-b50d-28e7313a16e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4ac22ee6-a60e-44c3-8db4-e0f5bf22a77e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09c5c036-dfc9-4826-bc59-c008b28bd97a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=13d4f68b-234a-4c46-9e1d-79f28a907bf2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:36:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.680 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 13d4f68b-234a-4c46-9e1d-79f28a907bf2 in datapath 9cb6ee52-3808-410f-9854-68ac8ffadab8 unbound from our chassis
Oct 14 09:36:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.681 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9cb6ee52-3808-410f-9854-68ac8ffadab8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:36:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.681 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[468b4dc9-4cb5-42e7-9ecd-03e5ef337998]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.682 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8 namespace which is not needed anymore
Oct 14 09:36:21 compute-0 kernel: tapb2457aed-ba (unregistering): left promiscuous mode
Oct 14 09:36:21 compute-0 ceph-mon[74249]: pgmap v2496: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 173 KiB/s wr, 62 op/s
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.695 2 DEBUG nova.compute.manager [req-bf43376e-02ac-469f-8ba4-e96f43464ca0 req-75e58601-03b7-4539-92ea-1f0159454ab6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-changed-13d4f68b-234a-4c46-9e1d-79f28a907bf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.695 2 DEBUG nova.compute.manager [req-bf43376e-02ac-469f-8ba4-e96f43464ca0 req-75e58601-03b7-4539-92ea-1f0159454ab6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Refreshing instance network info cache due to event network-changed-13d4f68b-234a-4c46-9e1d-79f28a907bf2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.695 2 DEBUG oslo_concurrency.lockutils [req-bf43376e-02ac-469f-8ba4-e96f43464ca0 req-75e58601-03b7-4539-92ea-1f0159454ab6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.696 2 DEBUG oslo_concurrency.lockutils [req-bf43376e-02ac-469f-8ba4-e96f43464ca0 req-75e58601-03b7-4539-92ea-1f0159454ab6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.696 2 DEBUG nova.network.neutron [req-bf43376e-02ac-469f-8ba4-e96f43464ca0 req-75e58601-03b7-4539-92ea-1f0159454ab6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Refreshing network info cache for port 13d4f68b-234a-4c46-9e1d-79f28a907bf2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:36:21 compute-0 NetworkManager[44885]: <info>  [1760434581.7013] device (tapb2457aed-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:21 compute-0 ovn_controller[152662]: 2025-10-14T09:36:21Z|01592|binding|INFO|Releasing lport b2457aed-ba7c-4d69-b93d-9f4c98e456b2 from this chassis (sb_readonly=0)
Oct 14 09:36:21 compute-0 ovn_controller[152662]: 2025-10-14T09:36:21Z|01593|binding|INFO|Setting lport b2457aed-ba7c-4d69-b93d-9f4c98e456b2 down in Southbound
Oct 14 09:36:21 compute-0 ovn_controller[152662]: 2025-10-14T09:36:21Z|01594|binding|INFO|Removing iface tapb2457aed-ba ovn-installed in OVS
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.718 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:2d:2e 2001:db8:0:1:f816:3eff:fe4e:2d2e 2001:db8::f816:3eff:fe4e:2d2e'], port_security=['fa:16:3e:4e:2d:2e 2001:db8:0:1:f816:3eff:fe4e:2d2e 2001:db8::f816:3eff:fe4e:2d2e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe4e:2d2e/64 2001:db8::f816:3eff:fe4e:2d2e/64', 'neutron:device_id': 'c977bdc6-8dd7-4cb4-b50d-28e7313a16e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4ac22ee6-a60e-44c3-8db4-e0f5bf22a77e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=612e2e91-84a5-412c-996c-976c512895aa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b2457aed-ba7c-4d69-b93d-9f4c98e456b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:21 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000090.scope: Deactivated successfully.
Oct 14 09:36:21 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000090.scope: Consumed 15.565s CPU time.
Oct 14 09:36:21 compute-0 systemd-machined[214636]: Machine qemu-177-instance-00000090 terminated.
Oct 14 09:36:21 compute-0 NetworkManager[44885]: <info>  [1760434581.7837] manager: (tap13d4f68b-23): new Tun device (/org/freedesktop/NetworkManager/Devices/648)
Oct 14 09:36:21 compute-0 NetworkManager[44885]: <info>  [1760434581.7943] manager: (tapb2457aed-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/649)
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.808 2 INFO nova.virt.libvirt.driver [-] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Instance destroyed successfully.
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.808 2 DEBUG nova.objects.instance [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid c977bdc6-8dd7-4cb4-b50d-28e7313a16e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:36:21 compute-0 neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8[408466]: [NOTICE]   (408470) : haproxy version is 2.8.14-c23fe91
Oct 14 09:36:21 compute-0 neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8[408466]: [NOTICE]   (408470) : path to executable is /usr/sbin/haproxy
Oct 14 09:36:21 compute-0 neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8[408466]: [WARNING]  (408470) : Exiting Master process...
Oct 14 09:36:21 compute-0 neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8[408466]: [WARNING]  (408470) : Exiting Master process...
Oct 14 09:36:21 compute-0 neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8[408466]: [ALERT]    (408470) : Current worker (408472) exited with code 143 (Terminated)
Oct 14 09:36:21 compute-0 neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8[408466]: [WARNING]  (408470) : All workers exited. Exiting... (0)
Oct 14 09:36:21 compute-0 systemd[1]: libpod-704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323.scope: Deactivated successfully.
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.825 2 DEBUG nova.virt.libvirt.vif [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:35:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1125974325',display_name='tempest-TestGettingAddress-server-1125974325',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1125974325',id=144,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:35:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-55lasja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:35:16Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c977bdc6-8dd7-4cb4-b50d-28e7313a16e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.826 2 DEBUG nova.network.os_vif_util [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:36:21 compute-0 podman[410288]: 2025-10-14 09:36:21.826335307 +0000 UTC m=+0.050106961 container died 704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.826 2 DEBUG nova.network.os_vif_util [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2a:ad:d7,bridge_name='br-int',has_traffic_filtering=True,id=13d4f68b-234a-4c46-9e1d-79f28a907bf2,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d4f68b-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.827 2 DEBUG os_vif [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2a:ad:d7,bridge_name='br-int',has_traffic_filtering=True,id=13d4f68b-234a-4c46-9e1d-79f28a907bf2,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d4f68b-23') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.828 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13d4f68b-23, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.835 2 INFO os_vif [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2a:ad:d7,bridge_name='br-int',has_traffic_filtering=True,id=13d4f68b-234a-4c46-9e1d-79f28a907bf2,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d4f68b-23')
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.836 2 DEBUG nova.virt.libvirt.vif [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:35:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1125974325',display_name='tempest-TestGettingAddress-server-1125974325',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1125974325',id=144,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:35:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-55lasja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:35:16Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c977bdc6-8dd7-4cb4-b50d-28e7313a16e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.836 2 DEBUG nova.network.os_vif_util [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.837 2 DEBUG nova.network.os_vif_util [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4e:2d:2e,bridge_name='br-int',has_traffic_filtering=True,id=b2457aed-ba7c-4d69-b93d-9f4c98e456b2,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2457aed-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.837 2 DEBUG os_vif [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:2d:2e,bridge_name='br-int',has_traffic_filtering=True,id=b2457aed-ba7c-4d69-b93d-9f4c98e456b2,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2457aed-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.838 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2457aed-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.841 2 INFO os_vif [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:2d:2e,bridge_name='br-int',has_traffic_filtering=True,id=b2457aed-ba7c-4d69-b93d-9f4c98e456b2,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2457aed-ba')
Oct 14 09:36:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323-userdata-shm.mount: Deactivated successfully.
Oct 14 09:36:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfd38037ad8c5df564412ddc4d5b130a78e747af4052f1d4f025faaa569ef1ba-merged.mount: Deactivated successfully.
Oct 14 09:36:21 compute-0 podman[410288]: 2025-10-14 09:36:21.875839962 +0000 UTC m=+0.099611626 container cleanup 704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:36:21 compute-0 systemd[1]: libpod-conmon-704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323.scope: Deactivated successfully.
Oct 14 09:36:21 compute-0 podman[410360]: 2025-10-14 09:36:21.953064797 +0000 UTC m=+0.051648888 container remove 704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 09:36:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.959 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4f047e-ea4c-4db8-b053-36ba4acc9d0c]: (4, ('Tue Oct 14 09:36:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8 (704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323)\n704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323\nTue Oct 14 09:36:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8 (704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323)\n704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.961 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2aca3595-ba31-44a6-9e97-9969d3d2df65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.962 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cb6ee52-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:21 compute-0 kernel: tap9cb6ee52-30: left promiscuous mode
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.968 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[82f6f6b4-5f42-408e-9ddb-2d8203eb02a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:21 compute-0 nova_compute[259627]: 2025-10-14 09:36:21.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.992 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e10d8e7f-d2cd-405a-b58a-6bd42e78ff03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:21 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.993 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fffb9626-af05-4d07-bfba-3aedf7baeec8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.012 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc44ece-62c4-44c4-aeed-1dffdaef035a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829409, 'reachable_time': 43953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410377, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d9cb6ee52\x2d3808\x2d410f\x2d9854\x2d68ac8ffadab8.mount: Deactivated successfully.
Oct 14 09:36:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.018 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:36:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.018 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[5cc4669b-d808-40e8-92ba-202109b75f16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.019 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b2457aed-ba7c-4d69-b93d-9f4c98e456b2 in datapath 0d3b36cd-f345-4ba4-8ea7-29b299ab0543 unbound from our chassis
Oct 14 09:36:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.020 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0d3b36cd-f345-4ba4-8ea7-29b299ab0543, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:36:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.020 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[87116647-ee63-4485-86a6-650ebdd7bbb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.021 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543 namespace which is not needed anymore
Oct 14 09:36:22 compute-0 podman[410376]: 2025-10-14 09:36:22.075345668 +0000 UTC m=+0.074879939 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible)
Oct 14 09:36:22 compute-0 podman[410373]: 2025-10-14 09:36:22.08399028 +0000 UTC m=+0.085098799 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:36:22 compute-0 neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543[408540]: [NOTICE]   (408544) : haproxy version is 2.8.14-c23fe91
Oct 14 09:36:22 compute-0 neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543[408540]: [NOTICE]   (408544) : path to executable is /usr/sbin/haproxy
Oct 14 09:36:22 compute-0 neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543[408540]: [WARNING]  (408544) : Exiting Master process...
Oct 14 09:36:22 compute-0 neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543[408540]: [WARNING]  (408544) : Exiting Master process...
Oct 14 09:36:22 compute-0 neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543[408540]: [ALERT]    (408544) : Current worker (408546) exited with code 143 (Terminated)
Oct 14 09:36:22 compute-0 neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543[408540]: [WARNING]  (408544) : All workers exited. Exiting... (0)
Oct 14 09:36:22 compute-0 systemd[1]: libpod-29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa.scope: Deactivated successfully.
Oct 14 09:36:22 compute-0 podman[410429]: 2025-10-14 09:36:22.159155085 +0000 UTC m=+0.046125213 container died 29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:36:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-71f24b8bd883a1a9d5a46c8f33163dab05be4377ce726a9e20ef9f68b79d27d6-merged.mount: Deactivated successfully.
Oct 14 09:36:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa-userdata-shm.mount: Deactivated successfully.
Oct 14 09:36:22 compute-0 podman[410429]: 2025-10-14 09:36:22.201424532 +0000 UTC m=+0.088394650 container cleanup 29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:36:22 compute-0 systemd[1]: libpod-conmon-29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa.scope: Deactivated successfully.
Oct 14 09:36:22 compute-0 nova_compute[259627]: 2025-10-14 09:36:22.268 2 INFO nova.virt.libvirt.driver [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Deleting instance files /var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_del
Oct 14 09:36:22 compute-0 nova_compute[259627]: 2025-10-14 09:36:22.269 2 INFO nova.virt.libvirt.driver [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Deletion of /var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_del complete
Oct 14 09:36:22 compute-0 podman[410461]: 2025-10-14 09:36:22.280087242 +0000 UTC m=+0.043975030 container remove 29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:36:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.286 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[df6eac2e-a6e5-4b99-9187-ebbdf632ba72]: (4, ('Tue Oct 14 09:36:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543 (29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa)\n29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa\nTue Oct 14 09:36:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543 (29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa)\n29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.288 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a47952-392f-4752-a288-20358db961f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.289 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d3b36cd-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:36:22 compute-0 nova_compute[259627]: 2025-10-14 09:36:22.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:22 compute-0 kernel: tap0d3b36cd-f0: left promiscuous mode
Oct 14 09:36:22 compute-0 nova_compute[259627]: 2025-10-14 09:36:22.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.308 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4c9131ad-fa65-445b-8248-01e6583a505e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:22 compute-0 nova_compute[259627]: 2025-10-14 09:36:22.331 2 INFO nova.compute.manager [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Took 0.76 seconds to destroy the instance on the hypervisor.
Oct 14 09:36:22 compute-0 nova_compute[259627]: 2025-10-14 09:36:22.331 2 DEBUG oslo.service.loopingcall [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:36:22 compute-0 nova_compute[259627]: 2025-10-14 09:36:22.331 2 DEBUG nova.compute.manager [-] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:36:22 compute-0 nova_compute[259627]: 2025-10-14 09:36:22.332 2 DEBUG nova.network.neutron [-] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:36:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.341 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[665752fa-e110-4598-bc35-fcf5e3a664c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.343 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6eeec789-4ad4-4c03-8817-a62ff91d5336]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.362 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2c0ebc2d-7a31-4894-bf00-fe6feca143e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829518, 'reachable_time': 36653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410477, 'error': None, 'target': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.365 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:36:22 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.365 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[2c931654-22f8-4b89-9051-9790eef935fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:36:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 16 KiB/s wr, 30 op/s
Oct 14 09:36:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d0d3b36cd\x2df345\x2d4ba4\x2d8ea7\x2d29b299ab0543.mount: Deactivated successfully.
Oct 14 09:36:22 compute-0 nova_compute[259627]: 2025-10-14 09:36:22.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:36:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.387 2 DEBUG nova.compute.manager [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-unplugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.388 2 DEBUG oslo_concurrency.lockutils [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.388 2 DEBUG oslo_concurrency.lockutils [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.389 2 DEBUG oslo_concurrency.lockutils [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.389 2 DEBUG nova.compute.manager [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] No waiting events found dispatching network-vif-unplugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.390 2 DEBUG nova.compute.manager [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-unplugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.390 2 DEBUG nova.compute.manager [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-plugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.390 2 DEBUG oslo_concurrency.lockutils [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.391 2 DEBUG oslo_concurrency.lockutils [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.391 2 DEBUG oslo_concurrency.lockutils [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.392 2 DEBUG nova.compute.manager [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] No waiting events found dispatching network-vif-plugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.392 2 WARNING nova.compute.manager [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received unexpected event network-vif-plugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 for instance with vm_state active and task_state deleting.
Oct 14 09:36:23 compute-0 ceph-mon[74249]: pgmap v2497: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 16 KiB/s wr, 30 op/s
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.818 2 DEBUG nova.compute.manager [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-unplugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.819 2 DEBUG oslo_concurrency.lockutils [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.819 2 DEBUG oslo_concurrency.lockutils [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.820 2 DEBUG oslo_concurrency.lockutils [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.820 2 DEBUG nova.compute.manager [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] No waiting events found dispatching network-vif-unplugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.821 2 DEBUG nova.compute.manager [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-unplugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.821 2 DEBUG nova.compute.manager [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-plugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.821 2 DEBUG oslo_concurrency.lockutils [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.822 2 DEBUG oslo_concurrency.lockutils [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.822 2 DEBUG oslo_concurrency.lockutils [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.823 2 DEBUG nova.compute.manager [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] No waiting events found dispatching network-vif-plugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:36:23 compute-0 nova_compute[259627]: 2025-10-14 09:36:23.823 2 WARNING nova.compute.manager [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received unexpected event network-vif-plugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 for instance with vm_state active and task_state deleting.
Oct 14 09:36:24 compute-0 nova_compute[259627]: 2025-10-14 09:36:24.107 2 DEBUG nova.network.neutron [req-bf43376e-02ac-469f-8ba4-e96f43464ca0 req-75e58601-03b7-4539-92ea-1f0159454ab6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updated VIF entry in instance network info cache for port 13d4f68b-234a-4c46-9e1d-79f28a907bf2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:36:24 compute-0 nova_compute[259627]: 2025-10-14 09:36:24.107 2 DEBUG nova.network.neutron [req-bf43376e-02ac-469f-8ba4-e96f43464ca0 req-75e58601-03b7-4539-92ea-1f0159454ab6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updating instance_info_cache with network_info: [{"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:36:24 compute-0 nova_compute[259627]: 2025-10-14 09:36:24.142 2 DEBUG oslo_concurrency.lockutils [req-bf43376e-02ac-469f-8ba4-e96f43464ca0 req-75e58601-03b7-4539-92ea-1f0159454ab6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:36:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 16 KiB/s wr, 30 op/s
Oct 14 09:36:24 compute-0 nova_compute[259627]: 2025-10-14 09:36:24.676 2 DEBUG nova.network.neutron [-] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:36:24 compute-0 nova_compute[259627]: 2025-10-14 09:36:24.698 2 INFO nova.compute.manager [-] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Took 2.37 seconds to deallocate network for instance.
Oct 14 09:36:24 compute-0 nova_compute[259627]: 2025-10-14 09:36:24.741 2 DEBUG oslo_concurrency.lockutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:36:24 compute-0 nova_compute[259627]: 2025-10-14 09:36:24.743 2 DEBUG oslo_concurrency.lockutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:36:24 compute-0 nova_compute[259627]: 2025-10-14 09:36:24.808 2 DEBUG oslo_concurrency.processutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:36:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:36:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1535339950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:36:25 compute-0 nova_compute[259627]: 2025-10-14 09:36:25.277 2 DEBUG oslo_concurrency.processutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:36:25 compute-0 nova_compute[259627]: 2025-10-14 09:36:25.286 2 DEBUG nova.compute.provider_tree [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:36:25 compute-0 nova_compute[259627]: 2025-10-14 09:36:25.303 2 DEBUG nova.scheduler.client.report [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:36:25 compute-0 nova_compute[259627]: 2025-10-14 09:36:25.332 2 DEBUG oslo_concurrency.lockutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:36:25 compute-0 nova_compute[259627]: 2025-10-14 09:36:25.371 2 INFO nova.scheduler.client.report [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance c977bdc6-8dd7-4cb4-b50d-28e7313a16e8
Oct 14 09:36:25 compute-0 nova_compute[259627]: 2025-10-14 09:36:25.451 2 DEBUG oslo_concurrency.lockutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:36:25 compute-0 nova_compute[259627]: 2025-10-14 09:36:25.496 2 DEBUG nova.compute.manager [req-8ea42346-aa8d-4420-a237-ef1d7f1b792f req-d9f0beb4-0df9-4309-bf67-9da7569cdd5a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-deleted-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:36:25 compute-0 nova_compute[259627]: 2025-10-14 09:36:25.497 2 DEBUG nova.compute.manager [req-8ea42346-aa8d-4420-a237-ef1d7f1b792f req-d9f0beb4-0df9-4309-bf67-9da7569cdd5a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-deleted-13d4f68b-234a-4c46-9e1d-79f28a907bf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:36:25 compute-0 ceph-mon[74249]: pgmap v2498: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 16 KiB/s wr, 30 op/s
Oct 14 09:36:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1535339950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:36:26 compute-0 nova_compute[259627]: 2025-10-14 09:36:26.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 17 KiB/s wr, 58 op/s
Oct 14 09:36:26 compute-0 nova_compute[259627]: 2025-10-14 09:36:26.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:27 compute-0 ceph-mon[74249]: pgmap v2499: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 17 KiB/s wr, 58 op/s
Oct 14 09:36:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:36:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 7.1 KiB/s wr, 56 op/s
Oct 14 09:36:29 compute-0 ceph-mon[74249]: pgmap v2500: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 7.1 KiB/s wr, 56 op/s
Oct 14 09:36:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 7.1 KiB/s wr, 56 op/s
Oct 14 09:36:31 compute-0 nova_compute[259627]: 2025-10-14 09:36:31.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:31 compute-0 ceph-mon[74249]: pgmap v2501: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 7.1 KiB/s wr, 56 op/s
Oct 14 09:36:31 compute-0 nova_compute[259627]: 2025-10-14 09:36:31.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:36:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:36:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:36:32 compute-0 nova_compute[259627]: 2025-10-14 09:36:32.792 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434577.790484, 2d97e325-a4e8-4595-9697-04219277474d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:36:32 compute-0 nova_compute[259627]: 2025-10-14 09:36:32.793 2 INFO nova.compute.manager [-] [instance: 2d97e325-a4e8-4595-9697-04219277474d] VM Stopped (Lifecycle Event)
Oct 14 09:36:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:36:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:36:32 compute-0 nova_compute[259627]: 2025-10-14 09:36:32.820 2 DEBUG nova.compute.manager [None req-aaefe0e8-d3c6-4410-8d31-2b3e7d5d25bc - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:36:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:36:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:36:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:36:32
Oct 14 09:36:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:36:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:36:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'volumes', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.log', '.rgw.root']
Oct 14 09:36:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:36:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:36:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:36:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:36:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:36:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:36:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:36:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:36:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:36:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:36:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:36:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:36:33 compute-0 ceph-mon[74249]: pgmap v2502: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:36:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:36:35 compute-0 nova_compute[259627]: 2025-10-14 09:36:35.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:35 compute-0 nova_compute[259627]: 2025-10-14 09:36:35.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:35 compute-0 podman[410502]: 2025-10-14 09:36:35.676163409 +0000 UTC m=+0.091211299 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 14 09:36:35 compute-0 podman[410501]: 2025-10-14 09:36:35.707212651 +0000 UTC m=+0.118990631 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009)
Oct 14 09:36:35 compute-0 ceph-mon[74249]: pgmap v2503: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:36:36 compute-0 nova_compute[259627]: 2025-10-14 09:36:36.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:36:36 compute-0 nova_compute[259627]: 2025-10-14 09:36:36.806 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434581.8053603, c977bdc6-8dd7-4cb4-b50d-28e7313a16e8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:36:36 compute-0 nova_compute[259627]: 2025-10-14 09:36:36.806 2 INFO nova.compute.manager [-] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] VM Stopped (Lifecycle Event)
Oct 14 09:36:36 compute-0 nova_compute[259627]: 2025-10-14 09:36:36.836 2 DEBUG nova.compute.manager [None req-4684d29d-6b5c-4cf8-b203-4ea802e5823c - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:36:36 compute-0 nova_compute[259627]: 2025-10-14 09:36:36.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:37 compute-0 ceph-mon[74249]: pgmap v2504: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:36:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:36:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:39 compute-0 ceph-mon[74249]: pgmap v2505: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2506: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:41 compute-0 nova_compute[259627]: 2025-10-14 09:36:41.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:41 compute-0 ceph-mon[74249]: pgmap v2506: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:41 compute-0 nova_compute[259627]: 2025-10-14 09:36:41.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:36:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:36:43 compute-0 ceph-mon[74249]: pgmap v2507: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:44 compute-0 sudo[410546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:36:44 compute-0 sudo[410546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:36:44 compute-0 sudo[410546]: pam_unix(sudo:session): session closed for user root
Oct 14 09:36:44 compute-0 sudo[410571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:36:44 compute-0 sudo[410571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:36:44 compute-0 sudo[410571]: pam_unix(sudo:session): session closed for user root
Oct 14 09:36:44 compute-0 sudo[410596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:36:44 compute-0 sudo[410596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:36:44 compute-0 sudo[410596]: pam_unix(sudo:session): session closed for user root
Oct 14 09:36:44 compute-0 sudo[410621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:36:44 compute-0 sudo[410621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:36:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:44 compute-0 nova_compute[259627]: 2025-10-14 09:36:44.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:36:45 compute-0 sudo[410621]: pam_unix(sudo:session): session closed for user root
Oct 14 09:36:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:36:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:36:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:36:45 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:36:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:36:45 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:36:45 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev e61a499a-ffd0-4719-92a7-b57e9b4b3b6b does not exist
Oct 14 09:36:45 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 62e1145c-9c59-43d8-ba29-821ddac2839c does not exist
Oct 14 09:36:45 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 0ccef622-a27f-49f6-bfb3-c700302fa4d7 does not exist
Oct 14 09:36:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:36:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:36:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:36:45 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:36:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:36:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:36:45 compute-0 sudo[410679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:36:45 compute-0 sudo[410679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:36:45 compute-0 sudo[410679]: pam_unix(sudo:session): session closed for user root
Oct 14 09:36:45 compute-0 sudo[410704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:36:45 compute-0 sudo[410704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:36:45 compute-0 sudo[410704]: pam_unix(sudo:session): session closed for user root
Oct 14 09:36:45 compute-0 sudo[410729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:36:45 compute-0 sudo[410729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:36:45 compute-0 sudo[410729]: pam_unix(sudo:session): session closed for user root
Oct 14 09:36:45 compute-0 sudo[410754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:36:45 compute-0 sudo[410754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:36:45 compute-0 podman[410820]: 2025-10-14 09:36:45.786204809 +0000 UTC m=+0.045756404 container create 68b0c4f0969782e1d0ef1324ca1289e9faefc6e7c4b3b3c4148fe7a64d854f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:36:45 compute-0 ceph-mon[74249]: pgmap v2508: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:45 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:36:45 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:36:45 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:36:45 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:36:45 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:36:45 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:36:45 compute-0 systemd[1]: Started libpod-conmon-68b0c4f0969782e1d0ef1324ca1289e9faefc6e7c4b3b3c4148fe7a64d854f90.scope.
Oct 14 09:36:45 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:36:45 compute-0 podman[410820]: 2025-10-14 09:36:45.764986998 +0000 UTC m=+0.024538583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:36:45 compute-0 podman[410820]: 2025-10-14 09:36:45.87386672 +0000 UTC m=+0.133418305 container init 68b0c4f0969782e1d0ef1324ca1289e9faefc6e7c4b3b3c4148fe7a64d854f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:36:45 compute-0 podman[410820]: 2025-10-14 09:36:45.881725983 +0000 UTC m=+0.141277578 container start 68b0c4f0969782e1d0ef1324ca1289e9faefc6e7c4b3b3c4148fe7a64d854f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 09:36:45 compute-0 podman[410820]: 2025-10-14 09:36:45.886218603 +0000 UTC m=+0.145770168 container attach 68b0c4f0969782e1d0ef1324ca1289e9faefc6e7c4b3b3c4148fe7a64d854f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 09:36:45 compute-0 upbeat_darwin[410836]: 167 167
Oct 14 09:36:45 compute-0 podman[410820]: 2025-10-14 09:36:45.887758471 +0000 UTC m=+0.147310026 container died 68b0c4f0969782e1d0ef1324ca1289e9faefc6e7c4b3b3c4148fe7a64d854f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:36:45 compute-0 systemd[1]: libpod-68b0c4f0969782e1d0ef1324ca1289e9faefc6e7c4b3b3c4148fe7a64d854f90.scope: Deactivated successfully.
Oct 14 09:36:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f4664fcffb5ae50bc2d88c0d452134cb4c33968cd29ef93ce7d1e188f847990-merged.mount: Deactivated successfully.
Oct 14 09:36:45 compute-0 podman[410820]: 2025-10-14 09:36:45.930774366 +0000 UTC m=+0.190325911 container remove 68b0c4f0969782e1d0ef1324ca1289e9faefc6e7c4b3b3c4148fe7a64d854f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:36:45 compute-0 systemd[1]: libpod-conmon-68b0c4f0969782e1d0ef1324ca1289e9faefc6e7c4b3b3c4148fe7a64d854f90.scope: Deactivated successfully.
Oct 14 09:36:46 compute-0 podman[410862]: 2025-10-14 09:36:46.120274167 +0000 UTC m=+0.055476803 container create 5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 09:36:46 compute-0 systemd[1]: Started libpod-conmon-5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1.scope.
Oct 14 09:36:46 compute-0 podman[410862]: 2025-10-14 09:36:46.089232535 +0000 UTC m=+0.024435171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:36:46 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa6cc8efd20ae8aea72dee0af9e0efe60233ae0283c0f20693ec4a2654d6caa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa6cc8efd20ae8aea72dee0af9e0efe60233ae0283c0f20693ec4a2654d6caa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa6cc8efd20ae8aea72dee0af9e0efe60233ae0283c0f20693ec4a2654d6caa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa6cc8efd20ae8aea72dee0af9e0efe60233ae0283c0f20693ec4a2654d6caa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa6cc8efd20ae8aea72dee0af9e0efe60233ae0283c0f20693ec4a2654d6caa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:36:46 compute-0 podman[410862]: 2025-10-14 09:36:46.219468911 +0000 UTC m=+0.154671547 container init 5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haibt, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:36:46 compute-0 nova_compute[259627]: 2025-10-14 09:36:46.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:46 compute-0 podman[410862]: 2025-10-14 09:36:46.233764312 +0000 UTC m=+0.168966938 container start 5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haibt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:36:46 compute-0 podman[410862]: 2025-10-14 09:36:46.238409646 +0000 UTC m=+0.173612332 container attach 5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:36:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:46 compute-0 nova_compute[259627]: 2025-10-14 09:36:46.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:47 compute-0 cranky_haibt[410878]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:36:47 compute-0 cranky_haibt[410878]: --> relative data size: 1.0
Oct 14 09:36:47 compute-0 cranky_haibt[410878]: --> All data devices are unavailable
Oct 14 09:36:47 compute-0 systemd[1]: libpod-5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1.scope: Deactivated successfully.
Oct 14 09:36:47 compute-0 systemd[1]: libpod-5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1.scope: Consumed 1.119s CPU time.
Oct 14 09:36:47 compute-0 podman[410862]: 2025-10-14 09:36:47.407481035 +0000 UTC m=+1.342683661 container died 5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:36:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-baa6cc8efd20ae8aea72dee0af9e0efe60233ae0283c0f20693ec4a2654d6caa-merged.mount: Deactivated successfully.
Oct 14 09:36:47 compute-0 podman[410862]: 2025-10-14 09:36:47.486986746 +0000 UTC m=+1.422189372 container remove 5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:36:47 compute-0 systemd[1]: libpod-conmon-5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1.scope: Deactivated successfully.
Oct 14 09:36:47 compute-0 sudo[410754]: pam_unix(sudo:session): session closed for user root
Oct 14 09:36:47 compute-0 sudo[410918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:36:47 compute-0 sudo[410918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:36:47 compute-0 sudo[410918]: pam_unix(sudo:session): session closed for user root
Oct 14 09:36:47 compute-0 sudo[410943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:36:47 compute-0 sudo[410943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:36:47 compute-0 sudo[410943]: pam_unix(sudo:session): session closed for user root
Oct 14 09:36:47 compute-0 sudo[410968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:36:47 compute-0 sudo[410968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:36:47 compute-0 sudo[410968]: pam_unix(sudo:session): session closed for user root
Oct 14 09:36:47 compute-0 ceph-mon[74249]: pgmap v2509: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:47 compute-0 sudo[410993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:36:47 compute-0 sudo[410993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:36:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:36:48 compute-0 podman[411059]: 2025-10-14 09:36:48.344048988 +0000 UTC m=+0.072649633 container create d50cc513df4632cb4cba3f020e09879d327d3be3de5c027804f1d88b75ffdd68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:36:48 compute-0 systemd[1]: Started libpod-conmon-d50cc513df4632cb4cba3f020e09879d327d3be3de5c027804f1d88b75ffdd68.scope.
Oct 14 09:36:48 compute-0 podman[411059]: 2025-10-14 09:36:48.312277579 +0000 UTC m=+0.040878284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:36:48 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:36:48 compute-0 podman[411059]: 2025-10-14 09:36:48.445673902 +0000 UTC m=+0.174274587 container init d50cc513df4632cb4cba3f020e09879d327d3be3de5c027804f1d88b75ffdd68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldwasser, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 09:36:48 compute-0 podman[411059]: 2025-10-14 09:36:48.456957869 +0000 UTC m=+0.185558514 container start d50cc513df4632cb4cba3f020e09879d327d3be3de5c027804f1d88b75ffdd68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldwasser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:36:48 compute-0 podman[411059]: 2025-10-14 09:36:48.461193233 +0000 UTC m=+0.189793878 container attach d50cc513df4632cb4cba3f020e09879d327d3be3de5c027804f1d88b75ffdd68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:36:48 compute-0 hungry_goldwasser[411076]: 167 167
Oct 14 09:36:48 compute-0 systemd[1]: libpod-d50cc513df4632cb4cba3f020e09879d327d3be3de5c027804f1d88b75ffdd68.scope: Deactivated successfully.
Oct 14 09:36:48 compute-0 podman[411059]: 2025-10-14 09:36:48.465592471 +0000 UTC m=+0.194193116 container died d50cc513df4632cb4cba3f020e09879d327d3be3de5c027804f1d88b75ffdd68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 09:36:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc5f90a5a36722ed3e3714c38517d1b3b433f398cd32fcadffe91e2e4c048934-merged.mount: Deactivated successfully.
Oct 14 09:36:48 compute-0 podman[411059]: 2025-10-14 09:36:48.515312541 +0000 UTC m=+0.243913166 container remove d50cc513df4632cb4cba3f020e09879d327d3be3de5c027804f1d88b75ffdd68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 09:36:48 compute-0 systemd[1]: libpod-conmon-d50cc513df4632cb4cba3f020e09879d327d3be3de5c027804f1d88b75ffdd68.scope: Deactivated successfully.
Oct 14 09:36:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:48 compute-0 podman[411099]: 2025-10-14 09:36:48.782545389 +0000 UTC m=+0.078681802 container create 07314b37f791ab3616eeb338e1466b529de18a86b05b79f83186e09e01718560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_agnesi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:36:48 compute-0 podman[411099]: 2025-10-14 09:36:48.754259475 +0000 UTC m=+0.050395938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:36:48 compute-0 systemd[1]: Started libpod-conmon-07314b37f791ab3616eeb338e1466b529de18a86b05b79f83186e09e01718560.scope.
Oct 14 09:36:48 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4085f60892936ab2ec03c17ee06131b1161a899ab1e1fce0bd14f265e78a8572/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4085f60892936ab2ec03c17ee06131b1161a899ab1e1fce0bd14f265e78a8572/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4085f60892936ab2ec03c17ee06131b1161a899ab1e1fce0bd14f265e78a8572/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4085f60892936ab2ec03c17ee06131b1161a899ab1e1fce0bd14f265e78a8572/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:36:48 compute-0 podman[411099]: 2025-10-14 09:36:48.908991042 +0000 UTC m=+0.205127435 container init 07314b37f791ab3616eeb338e1466b529de18a86b05b79f83186e09e01718560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 09:36:48 compute-0 podman[411099]: 2025-10-14 09:36:48.924466042 +0000 UTC m=+0.220602445 container start 07314b37f791ab3616eeb338e1466b529de18a86b05b79f83186e09e01718560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_agnesi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:36:48 compute-0 podman[411099]: 2025-10-14 09:36:48.928691406 +0000 UTC m=+0.224827859 container attach 07314b37f791ab3616eeb338e1466b529de18a86b05b79f83186e09e01718560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:36:49 compute-0 competent_agnesi[411116]: {
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:     "0": [
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:         {
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "devices": [
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "/dev/loop3"
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             ],
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "lv_name": "ceph_lv0",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "lv_size": "21470642176",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "name": "ceph_lv0",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "tags": {
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.cluster_name": "ceph",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.crush_device_class": "",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.encrypted": "0",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.osd_id": "0",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.type": "block",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.vdo": "0"
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             },
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "type": "block",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "vg_name": "ceph_vg0"
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:         }
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:     ],
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:     "1": [
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:         {
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "devices": [
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "/dev/loop4"
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             ],
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "lv_name": "ceph_lv1",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "lv_size": "21470642176",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "name": "ceph_lv1",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "tags": {
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.cluster_name": "ceph",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.crush_device_class": "",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.encrypted": "0",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.osd_id": "1",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.type": "block",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.vdo": "0"
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             },
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "type": "block",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "vg_name": "ceph_vg1"
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:         }
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:     ],
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:     "2": [
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:         {
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "devices": [
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "/dev/loop5"
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             ],
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "lv_name": "ceph_lv2",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "lv_size": "21470642176",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "name": "ceph_lv2",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "tags": {
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.cluster_name": "ceph",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.crush_device_class": "",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.encrypted": "0",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.osd_id": "2",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.type": "block",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:                 "ceph.vdo": "0"
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             },
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "type": "block",
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:             "vg_name": "ceph_vg2"
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:         }
Oct 14 09:36:49 compute-0 competent_agnesi[411116]:     ]
Oct 14 09:36:49 compute-0 competent_agnesi[411116]: }
Oct 14 09:36:49 compute-0 systemd[1]: libpod-07314b37f791ab3616eeb338e1466b529de18a86b05b79f83186e09e01718560.scope: Deactivated successfully.
Oct 14 09:36:49 compute-0 podman[411099]: 2025-10-14 09:36:49.715200606 +0000 UTC m=+1.011337009 container died 07314b37f791ab3616eeb338e1466b529de18a86b05b79f83186e09e01718560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_agnesi, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:36:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4085f60892936ab2ec03c17ee06131b1161a899ab1e1fce0bd14f265e78a8572-merged.mount: Deactivated successfully.
Oct 14 09:36:49 compute-0 podman[411099]: 2025-10-14 09:36:49.773688321 +0000 UTC m=+1.069824704 container remove 07314b37f791ab3616eeb338e1466b529de18a86b05b79f83186e09e01718560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_agnesi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:36:49 compute-0 systemd[1]: libpod-conmon-07314b37f791ab3616eeb338e1466b529de18a86b05b79f83186e09e01718560.scope: Deactivated successfully.
Oct 14 09:36:49 compute-0 sudo[410993]: pam_unix(sudo:session): session closed for user root
Oct 14 09:36:49 compute-0 ceph-mon[74249]: pgmap v2510: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:49 compute-0 sudo[411135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:36:49 compute-0 sudo[411135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:36:49 compute-0 sudo[411135]: pam_unix(sudo:session): session closed for user root
Oct 14 09:36:49 compute-0 sudo[411160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:36:49 compute-0 sudo[411160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:36:49 compute-0 sudo[411160]: pam_unix(sudo:session): session closed for user root
Oct 14 09:36:49 compute-0 nova_compute[259627]: 2025-10-14 09:36:49.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:36:50 compute-0 sudo[411185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:36:50 compute-0 sudo[411185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:36:50 compute-0 sudo[411185]: pam_unix(sudo:session): session closed for user root
Oct 14 09:36:50 compute-0 sudo[411210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:36:50 compute-0 sudo[411210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:36:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:50 compute-0 podman[411276]: 2025-10-14 09:36:50.608223651 +0000 UTC m=+0.070389809 container create b640593d98e621a5d66e6b072e89d9d97ae9c0382322b7b393bee01c928f9c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 09:36:50 compute-0 systemd[1]: Started libpod-conmon-b640593d98e621a5d66e6b072e89d9d97ae9c0382322b7b393bee01c928f9c33.scope.
Oct 14 09:36:50 compute-0 podman[411276]: 2025-10-14 09:36:50.580401368 +0000 UTC m=+0.042567606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:36:50 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:36:50 compute-0 podman[411276]: 2025-10-14 09:36:50.700920155 +0000 UTC m=+0.163086413 container init b640593d98e621a5d66e6b072e89d9d97ae9c0382322b7b393bee01c928f9c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamport, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 09:36:50 compute-0 podman[411276]: 2025-10-14 09:36:50.713638468 +0000 UTC m=+0.175804666 container start b640593d98e621a5d66e6b072e89d9d97ae9c0382322b7b393bee01c928f9c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamport, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:36:50 compute-0 podman[411276]: 2025-10-14 09:36:50.717456451 +0000 UTC m=+0.179622699 container attach b640593d98e621a5d66e6b072e89d9d97ae9c0382322b7b393bee01c928f9c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamport, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:36:50 compute-0 mystifying_lamport[411293]: 167 167
Oct 14 09:36:50 compute-0 systemd[1]: libpod-b640593d98e621a5d66e6b072e89d9d97ae9c0382322b7b393bee01c928f9c33.scope: Deactivated successfully.
Oct 14 09:36:50 compute-0 podman[411276]: 2025-10-14 09:36:50.723135611 +0000 UTC m=+0.185301809 container died b640593d98e621a5d66e6b072e89d9d97ae9c0382322b7b393bee01c928f9c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamport, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 09:36:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-45365a2467741cdc58cb1b34dd4813f8fced64d0e7305930b3135a420c5347ae-merged.mount: Deactivated successfully.
Oct 14 09:36:50 compute-0 podman[411276]: 2025-10-14 09:36:50.778192972 +0000 UTC m=+0.240359170 container remove b640593d98e621a5d66e6b072e89d9d97ae9c0382322b7b393bee01c928f9c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:36:50 compute-0 systemd[1]: libpod-conmon-b640593d98e621a5d66e6b072e89d9d97ae9c0382322b7b393bee01c928f9c33.scope: Deactivated successfully.
Oct 14 09:36:50 compute-0 nova_compute[259627]: 2025-10-14 09:36:50.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:36:51 compute-0 podman[411316]: 2025-10-14 09:36:51.022791074 +0000 UTC m=+0.064632647 container create 4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jones, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 14 09:36:51 compute-0 systemd[1]: Started libpod-conmon-4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193.scope.
Oct 14 09:36:51 compute-0 podman[411316]: 2025-10-14 09:36:51.00345959 +0000 UTC m=+0.045301183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:36:51 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fe1c52838b48ea110b506edffdd45052e53bcd63b1da1c87f80048fc4ffd56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fe1c52838b48ea110b506edffdd45052e53bcd63b1da1c87f80048fc4ffd56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fe1c52838b48ea110b506edffdd45052e53bcd63b1da1c87f80048fc4ffd56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fe1c52838b48ea110b506edffdd45052e53bcd63b1da1c87f80048fc4ffd56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:36:51 compute-0 podman[411316]: 2025-10-14 09:36:51.121835345 +0000 UTC m=+0.163676938 container init 4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:36:51 compute-0 podman[411316]: 2025-10-14 09:36:51.136129116 +0000 UTC m=+0.177970689 container start 4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jones, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 09:36:51 compute-0 podman[411316]: 2025-10-14 09:36:51.140294188 +0000 UTC m=+0.182135751 container attach 4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jones, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 09:36:51 compute-0 nova_compute[259627]: 2025-10-14 09:36:51.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:51 compute-0 ceph-mon[74249]: pgmap v2511: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:51 compute-0 nova_compute[259627]: 2025-10-14 09:36:51.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:51 compute-0 nova_compute[259627]: 2025-10-14 09:36:51.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:36:52 compute-0 nova_compute[259627]: 2025-10-14 09:36:52.015 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:36:52 compute-0 nova_compute[259627]: 2025-10-14 09:36:52.016 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:36:52 compute-0 nova_compute[259627]: 2025-10-14 09:36:52.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:36:52 compute-0 nova_compute[259627]: 2025-10-14 09:36:52.017 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:36:52 compute-0 nova_compute[259627]: 2025-10-14 09:36:52.018 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:36:52 compute-0 festive_jones[411333]: {
Oct 14 09:36:52 compute-0 festive_jones[411333]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:36:52 compute-0 festive_jones[411333]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:36:52 compute-0 festive_jones[411333]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:36:52 compute-0 festive_jones[411333]:         "osd_id": 2,
Oct 14 09:36:52 compute-0 festive_jones[411333]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:36:52 compute-0 festive_jones[411333]:         "type": "bluestore"
Oct 14 09:36:52 compute-0 festive_jones[411333]:     },
Oct 14 09:36:52 compute-0 festive_jones[411333]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:36:52 compute-0 festive_jones[411333]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:36:52 compute-0 festive_jones[411333]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:36:52 compute-0 festive_jones[411333]:         "osd_id": 1,
Oct 14 09:36:52 compute-0 festive_jones[411333]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:36:52 compute-0 festive_jones[411333]:         "type": "bluestore"
Oct 14 09:36:52 compute-0 festive_jones[411333]:     },
Oct 14 09:36:52 compute-0 festive_jones[411333]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:36:52 compute-0 festive_jones[411333]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:36:52 compute-0 festive_jones[411333]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:36:52 compute-0 festive_jones[411333]:         "osd_id": 0,
Oct 14 09:36:52 compute-0 festive_jones[411333]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:36:52 compute-0 festive_jones[411333]:         "type": "bluestore"
Oct 14 09:36:52 compute-0 festive_jones[411333]:     }
Oct 14 09:36:52 compute-0 festive_jones[411333]: }
Oct 14 09:36:52 compute-0 systemd[1]: libpod-4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193.scope: Deactivated successfully.
Oct 14 09:36:52 compute-0 podman[411316]: 2025-10-14 09:36:52.220877065 +0000 UTC m=+1.262718618 container died 4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jones, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:36:52 compute-0 systemd[1]: libpod-4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193.scope: Consumed 1.095s CPU time.
Oct 14 09:36:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-28fe1c52838b48ea110b506edffdd45052e53bcd63b1da1c87f80048fc4ffd56-merged.mount: Deactivated successfully.
Oct 14 09:36:52 compute-0 podman[411316]: 2025-10-14 09:36:52.303597225 +0000 UTC m=+1.345438798 container remove 4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:36:52 compute-0 systemd[1]: libpod-conmon-4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193.scope: Deactivated successfully.
Oct 14 09:36:52 compute-0 sudo[411210]: pam_unix(sudo:session): session closed for user root
Oct 14 09:36:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:36:52 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:36:52 compute-0 podman[411395]: 2025-10-14 09:36:52.354431503 +0000 UTC m=+0.085651043 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 09:36:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:36:52 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:36:52 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 83ba4603-d729-4833-9f64-14da00264a91 does not exist
Oct 14 09:36:52 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 113e22fe-eccc-45fb-b994-c5c57638c27e does not exist
Oct 14 09:36:52 compute-0 podman[411388]: 2025-10-14 09:36:52.385126836 +0000 UTC m=+0.115394863 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:36:52 compute-0 sudo[411432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:36:52 compute-0 sudo[411432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:36:52 compute-0 sudo[411432]: pam_unix(sudo:session): session closed for user root
Oct 14 09:36:52 compute-0 sudo[411458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:36:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:36:52 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2685230023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:36:52 compute-0 sudo[411458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:36:52 compute-0 sudo[411458]: pam_unix(sudo:session): session closed for user root
Oct 14 09:36:52 compute-0 nova_compute[259627]: 2025-10-14 09:36:52.545 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:36:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:52 compute-0 nova_compute[259627]: 2025-10-14 09:36:52.708 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:36:52 compute-0 nova_compute[259627]: 2025-10-14 09:36:52.709 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3551MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:36:52 compute-0 nova_compute[259627]: 2025-10-14 09:36:52.709 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:36:52 compute-0 nova_compute[259627]: 2025-10-14 09:36:52.709 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:36:52 compute-0 nova_compute[259627]: 2025-10-14 09:36:52.797 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:36:52 compute-0 nova_compute[259627]: 2025-10-14 09:36:52.798 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:36:52 compute-0 nova_compute[259627]: 2025-10-14 09:36:52.824 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:36:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:36:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:36:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1954020972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:36:53 compute-0 nova_compute[259627]: 2025-10-14 09:36:53.242 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:36:53 compute-0 nova_compute[259627]: 2025-10-14 09:36:53.248 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:36:53 compute-0 nova_compute[259627]: 2025-10-14 09:36:53.269 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:36:53 compute-0 nova_compute[259627]: 2025-10-14 09:36:53.298 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:36:53 compute-0 nova_compute[259627]: 2025-10-14 09:36:53.298 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:36:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:36:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:36:53 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2685230023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:36:53 compute-0 ceph-mon[74249]: pgmap v2512: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:53 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1954020972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:36:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:55 compute-0 nova_compute[259627]: 2025-10-14 09:36:55.300 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:36:55 compute-0 nova_compute[259627]: 2025-10-14 09:36:55.301 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:36:55 compute-0 nova_compute[259627]: 2025-10-14 09:36:55.301 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:36:55 compute-0 ceph-mon[74249]: pgmap v2513: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:55 compute-0 nova_compute[259627]: 2025-10-14 09:36:55.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:36:55 compute-0 nova_compute[259627]: 2025-10-14 09:36:55.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:36:55 compute-0 nova_compute[259627]: 2025-10-14 09:36:55.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:36:56 compute-0 nova_compute[259627]: 2025-10-14 09:36:56.001 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:36:56 compute-0 nova_compute[259627]: 2025-10-14 09:36:56.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:56 compute-0 nova_compute[259627]: 2025-10-14 09:36:56.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:36:57 compute-0 ceph-mon[74249]: pgmap v2514: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:36:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:59 compute-0 ceph-mon[74249]: pgmap v2515: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:36:59 compute-0 nova_compute[259627]: 2025-10-14 09:36:59.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:37:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:37:00 compute-0 nova_compute[259627]: 2025-10-14 09:37:00.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:37:01 compute-0 nova_compute[259627]: 2025-10-14 09:37:01.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:01 compute-0 ceph-mon[74249]: pgmap v2516: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:37:01 compute-0 nova_compute[259627]: 2025-10-14 09:37:01.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:37:02 compute-0 nova_compute[259627]: 2025-10-14 09:37:02.701 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:02 compute-0 nova_compute[259627]: 2025-10-14 09:37:02.702 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:02 compute-0 nova_compute[259627]: 2025-10-14 09:37:02.725 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:37:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:37:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:37:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:37:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:37:02 compute-0 nova_compute[259627]: 2025-10-14 09:37:02.809 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:02 compute-0 nova_compute[259627]: 2025-10-14 09:37:02.809 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:02 compute-0 nova_compute[259627]: 2025-10-14 09:37:02.821 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:37:02 compute-0 nova_compute[259627]: 2025-10-14 09:37:02.822 2 INFO nova.compute.claims [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:37:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:37:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:37:03 compute-0 nova_compute[259627]: 2025-10-14 09:37:03.086 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:37:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:37:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:37:03 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/472351782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:37:03 compute-0 nova_compute[259627]: 2025-10-14 09:37:03.556 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:37:03 compute-0 nova_compute[259627]: 2025-10-14 09:37:03.562 2 DEBUG nova.compute.provider_tree [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:37:03 compute-0 nova_compute[259627]: 2025-10-14 09:37:03.586 2 DEBUG nova.scheduler.client.report [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:37:03 compute-0 nova_compute[259627]: 2025-10-14 09:37:03.614 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:03 compute-0 nova_compute[259627]: 2025-10-14 09:37:03.615 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:37:03 compute-0 nova_compute[259627]: 2025-10-14 09:37:03.673 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:37:03 compute-0 nova_compute[259627]: 2025-10-14 09:37:03.674 2 DEBUG nova.network.neutron [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:37:03 compute-0 ceph-mon[74249]: pgmap v2517: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:37:03 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/472351782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:37:03 compute-0 nova_compute[259627]: 2025-10-14 09:37:03.694 2 INFO nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:37:03 compute-0 nova_compute[259627]: 2025-10-14 09:37:03.719 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:37:03 compute-0 nova_compute[259627]: 2025-10-14 09:37:03.881 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:37:03 compute-0 nova_compute[259627]: 2025-10-14 09:37:03.883 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:37:03 compute-0 nova_compute[259627]: 2025-10-14 09:37:03.884 2 INFO nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Creating image(s)
Oct 14 09:37:03 compute-0 nova_compute[259627]: 2025-10-14 09:37:03.924 2 DEBUG nova.storage.rbd_utils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image bdfd070c-d036-4656-b797-efba7d4a4565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:37:03 compute-0 nova_compute[259627]: 2025-10-14 09:37:03.967 2 DEBUG nova.storage.rbd_utils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image bdfd070c-d036-4656-b797-efba7d4a4565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:37:04 compute-0 nova_compute[259627]: 2025-10-14 09:37:04.006 2 DEBUG nova.storage.rbd_utils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image bdfd070c-d036-4656-b797-efba7d4a4565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:37:04 compute-0 nova_compute[259627]: 2025-10-14 09:37:04.012 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:37:04 compute-0 nova_compute[259627]: 2025-10-14 09:37:04.084 2 DEBUG nova.policy [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:37:04 compute-0 nova_compute[259627]: 2025-10-14 09:37:04.116 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:37:04 compute-0 nova_compute[259627]: 2025-10-14 09:37:04.117 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:04 compute-0 nova_compute[259627]: 2025-10-14 09:37:04.118 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:04 compute-0 nova_compute[259627]: 2025-10-14 09:37:04.119 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:04 compute-0 nova_compute[259627]: 2025-10-14 09:37:04.157 2 DEBUG nova.storage.rbd_utils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image bdfd070c-d036-4656-b797-efba7d4a4565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:37:04 compute-0 nova_compute[259627]: 2025-10-14 09:37:04.163 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 bdfd070c-d036-4656-b797-efba7d4a4565_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:37:04 compute-0 nova_compute[259627]: 2025-10-14 09:37:04.443 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 bdfd070c-d036-4656-b797-efba7d4a4565_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:37:04 compute-0 nova_compute[259627]: 2025-10-14 09:37:04.543 2 DEBUG nova.storage.rbd_utils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image bdfd070c-d036-4656-b797-efba7d4a4565_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:37:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:37:04 compute-0 nova_compute[259627]: 2025-10-14 09:37:04.654 2 DEBUG nova.objects.instance [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid bdfd070c-d036-4656-b797-efba7d4a4565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:37:04 compute-0 nova_compute[259627]: 2025-10-14 09:37:04.671 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:37:04 compute-0 nova_compute[259627]: 2025-10-14 09:37:04.672 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Ensure instance console log exists: /var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:37:04 compute-0 nova_compute[259627]: 2025-10-14 09:37:04.672 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:04 compute-0 nova_compute[259627]: 2025-10-14 09:37:04.673 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:04 compute-0 nova_compute[259627]: 2025-10-14 09:37:04.673 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:05 compute-0 nova_compute[259627]: 2025-10-14 09:37:05.039 2 DEBUG nova.network.neutron [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Successfully created port: d3b7ded4-91fa-46dc-b6b9-e6e630c275ab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:37:05 compute-0 nova_compute[259627]: 2025-10-14 09:37:05.589 2 DEBUG nova.network.neutron [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Successfully created port: d4332d2f-fff2-4de9-811c-7d5ce2580b21 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:37:05 compute-0 ceph-mon[74249]: pgmap v2518: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:37:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:37:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1888271872' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:37:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:37:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1888271872' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:37:06 compute-0 nova_compute[259627]: 2025-10-14 09:37:06.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:06 compute-0 nova_compute[259627]: 2025-10-14 09:37:06.570 2 DEBUG nova.network.neutron [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Successfully updated port: d3b7ded4-91fa-46dc-b6b9-e6e630c275ab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:37:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:06 compute-0 nova_compute[259627]: 2025-10-14 09:37:06.668 2 DEBUG nova.compute.manager [req-b522347a-d967-4e26-a1bd-8a2a27019406 req-c5bde07e-9816-4197-a651-a2282e086e67 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-changed-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:37:06 compute-0 nova_compute[259627]: 2025-10-14 09:37:06.669 2 DEBUG nova.compute.manager [req-b522347a-d967-4e26-a1bd-8a2a27019406 req-c5bde07e-9816-4197-a651-a2282e086e67 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Refreshing instance network info cache due to event network-changed-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:37:06 compute-0 nova_compute[259627]: 2025-10-14 09:37:06.669 2 DEBUG oslo_concurrency.lockutils [req-b522347a-d967-4e26-a1bd-8a2a27019406 req-c5bde07e-9816-4197-a651-a2282e086e67 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:37:06 compute-0 nova_compute[259627]: 2025-10-14 09:37:06.669 2 DEBUG oslo_concurrency.lockutils [req-b522347a-d967-4e26-a1bd-8a2a27019406 req-c5bde07e-9816-4197-a651-a2282e086e67 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:37:06 compute-0 nova_compute[259627]: 2025-10-14 09:37:06.669 2 DEBUG nova.network.neutron [req-b522347a-d967-4e26-a1bd-8a2a27019406 req-c5bde07e-9816-4197-a651-a2282e086e67 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Refreshing network info cache for port d3b7ded4-91fa-46dc-b6b9-e6e630c275ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:37:06 compute-0 podman[411698]: 2025-10-14 09:37:06.6963252 +0000 UTC m=+0.091349743 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009)
Oct 14 09:37:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1888271872' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:37:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1888271872' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:37:06 compute-0 podman[411697]: 2025-10-14 09:37:06.753224016 +0000 UTC m=+0.146565158 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct 14 09:37:06 compute-0 nova_compute[259627]: 2025-10-14 09:37:06.846 2 DEBUG nova.network.neutron [req-b522347a-d967-4e26-a1bd-8a2a27019406 req-c5bde07e-9816-4197-a651-a2282e086e67 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:37:06 compute-0 nova_compute[259627]: 2025-10-14 09:37:06.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:07.053 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:07.055 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:07.056 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:07 compute-0 nova_compute[259627]: 2025-10-14 09:37:07.390 2 DEBUG nova.network.neutron [req-b522347a-d967-4e26-a1bd-8a2a27019406 req-c5bde07e-9816-4197-a651-a2282e086e67 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:37:07 compute-0 nova_compute[259627]: 2025-10-14 09:37:07.427 2 DEBUG oslo_concurrency.lockutils [req-b522347a-d967-4e26-a1bd-8a2a27019406 req-c5bde07e-9816-4197-a651-a2282e086e67 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:37:07 compute-0 nova_compute[259627]: 2025-10-14 09:37:07.580 2 DEBUG nova.network.neutron [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Successfully updated port: d4332d2f-fff2-4de9-811c-7d5ce2580b21 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:37:07 compute-0 nova_compute[259627]: 2025-10-14 09:37:07.599 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:37:07 compute-0 nova_compute[259627]: 2025-10-14 09:37:07.600 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:37:07 compute-0 nova_compute[259627]: 2025-10-14 09:37:07.600 2 DEBUG nova.network.neutron [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:37:07 compute-0 ceph-mon[74249]: pgmap v2519: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:07 compute-0 nova_compute[259627]: 2025-10-14 09:37:07.850 2 DEBUG nova.network.neutron [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:37:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:37:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:08 compute-0 nova_compute[259627]: 2025-10-14 09:37:08.754 2 DEBUG nova.compute.manager [req-867068c4-c893-4658-b561-e811d1ea0486 req-3f3e3c2c-0382-46a3-9873-6c9a68312bb1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-changed-d4332d2f-fff2-4de9-811c-7d5ce2580b21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:37:08 compute-0 nova_compute[259627]: 2025-10-14 09:37:08.754 2 DEBUG nova.compute.manager [req-867068c4-c893-4658-b561-e811d1ea0486 req-3f3e3c2c-0382-46a3-9873-6c9a68312bb1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Refreshing instance network info cache due to event network-changed-d4332d2f-fff2-4de9-811c-7d5ce2580b21. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:37:08 compute-0 nova_compute[259627]: 2025-10-14 09:37:08.755 2 DEBUG oslo_concurrency.lockutils [req-867068c4-c893-4658-b561-e811d1ea0486 req-3f3e3c2c-0382-46a3-9873-6c9a68312bb1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:37:09 compute-0 ceph-mon[74249]: pgmap v2520: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.297 2 DEBUG nova.network.neutron [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updating instance_info_cache with network_info: [{"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.321 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.322 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Instance network_info: |[{"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": 
null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.323 2 DEBUG oslo_concurrency.lockutils [req-867068c4-c893-4658-b561-e811d1ea0486 req-3f3e3c2c-0382-46a3-9873-6c9a68312bb1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.323 2 DEBUG nova.network.neutron [req-867068c4-c893-4658-b561-e811d1ea0486 req-3f3e3c2c-0382-46a3-9873-6c9a68312bb1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Refreshing network info cache for port d4332d2f-fff2-4de9-811c-7d5ce2580b21 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.329 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Start _get_guest_xml network_info=[{"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.336 2 WARNING nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.341 2 DEBUG nova.virt.libvirt.host [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.342 2 DEBUG nova.virt.libvirt.host [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.355 2 DEBUG nova.virt.libvirt.host [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.355 2 DEBUG nova.virt.libvirt.host [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.356 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.357 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.358 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.358 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.359 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.359 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.359 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.360 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.360 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.360 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.361 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.361 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.366 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:37:11 compute-0 ceph-mon[74249]: pgmap v2521: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:37:11 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1264397896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.871 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.909 2 DEBUG nova.storage.rbd_utils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image bdfd070c-d036-4656-b797-efba7d4a4565_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:37:11 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.944 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:11.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:37:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/57655119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.436 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.439 2 DEBUG nova.virt.libvirt.vif [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:37:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1555367739',display_name='tempest-TestGettingAddress-server-1555367739',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1555367739',id=146,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-3rvo4yr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:37:03Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=bdfd070c-d036-4656-b797-efba7d4a4565,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.439 2 DEBUG nova.network.os_vif_util [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.441 2 DEBUG nova.network.os_vif_util [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:b2:53,bridge_name='br-int',has_traffic_filtering=True,id=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b7ded4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.443 2 DEBUG nova.virt.libvirt.vif [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:37:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1555367739',display_name='tempest-TestGettingAddress-server-1555367739',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1555367739',id=146,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-3rvo4yr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:37:03Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=bdfd070c-d036-4656-b797-efba7d4a4565,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.443 2 DEBUG nova.network.os_vif_util [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.445 2 DEBUG nova.network.os_vif_util [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:69:75,bridge_name='br-int',has_traffic_filtering=True,id=d4332d2f-fff2-4de9-811c-7d5ce2580b21,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4332d2f-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.448 2 DEBUG nova.objects.instance [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid bdfd070c-d036-4656-b797-efba7d4a4565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.487 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:37:12 compute-0 nova_compute[259627]:   <uuid>bdfd070c-d036-4656-b797-efba7d4a4565</uuid>
Oct 14 09:37:12 compute-0 nova_compute[259627]:   <name>instance-00000092</name>
Oct 14 09:37:12 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:37:12 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:37:12 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <nova:name>tempest-TestGettingAddress-server-1555367739</nova:name>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:37:11</nova:creationTime>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:37:12 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:37:12 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:37:12 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:37:12 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:37:12 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:37:12 compute-0 nova_compute[259627]:         <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 09:37:12 compute-0 nova_compute[259627]:         <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:37:12 compute-0 nova_compute[259627]:         <nova:port uuid="d3b7ded4-91fa-46dc-b6b9-e6e630c275ab">
Oct 14 09:37:12 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:37:12 compute-0 nova_compute[259627]:         <nova:port uuid="d4332d2f-fff2-4de9-811c-7d5ce2580b21">
Oct 14 09:37:12 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe98:6975" ipVersion="6"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:37:12 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:37:12 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <system>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <entry name="serial">bdfd070c-d036-4656-b797-efba7d4a4565</entry>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <entry name="uuid">bdfd070c-d036-4656-b797-efba7d4a4565</entry>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     </system>
Oct 14 09:37:12 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:37:12 compute-0 nova_compute[259627]:   <os>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:   </os>
Oct 14 09:37:12 compute-0 nova_compute[259627]:   <features>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:   </features>
Oct 14 09:37:12 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:37:12 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:37:12 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/bdfd070c-d036-4656-b797-efba7d4a4565_disk">
Oct 14 09:37:12 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       </source>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:37:12 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/bdfd070c-d036-4656-b797-efba7d4a4565_disk.config">
Oct 14 09:37:12 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       </source>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:37:12 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:fc:b2:53"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <target dev="tapd3b7ded4-91"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:98:69:75"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <target dev="tapd4332d2f-ff"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565/console.log" append="off"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <video>
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     </video>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:37:12 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:37:12 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:37:12 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:37:12 compute-0 nova_compute[259627]: </domain>
Oct 14 09:37:12 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.489 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Preparing to wait for external event network-vif-plugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.490 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.490 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.491 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.491 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Preparing to wait for external event network-vif-plugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.491 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.492 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.492 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.493 2 DEBUG nova.virt.libvirt.vif [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:37:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1555367739',display_name='tempest-TestGettingAddress-server-1555367739',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1555367739',id=146,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-3rvo4yr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:37:03Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=bdfd070c-d036-4656-b797-efba7d4a4565,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.494 2 DEBUG nova.network.os_vif_util [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.494 2 DEBUG nova.network.os_vif_util [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:b2:53,bridge_name='br-int',has_traffic_filtering=True,id=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b7ded4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.495 2 DEBUG os_vif [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:b2:53,bridge_name='br-int',has_traffic_filtering=True,id=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b7ded4-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.496 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.496 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.500 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3b7ded4-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.501 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd3b7ded4-91, col_values=(('external_ids', {'iface-id': 'd3b7ded4-91fa-46dc-b6b9-e6e630c275ab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:b2:53', 'vm-uuid': 'bdfd070c-d036-4656-b797-efba7d4a4565'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:37:12 compute-0 NetworkManager[44885]: <info>  [1760434632.5055] manager: (tapd3b7ded4-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/650)
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.514 2 INFO os_vif [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:b2:53,bridge_name='br-int',has_traffic_filtering=True,id=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b7ded4-91')
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.515 2 DEBUG nova.virt.libvirt.vif [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:37:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1555367739',display_name='tempest-TestGettingAddress-server-1555367739',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1555367739',id=146,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-3rvo4yr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:37:03Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=bdfd070c-d036-4656-b797-efba7d4a4565,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.516 2 DEBUG nova.network.os_vif_util [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.516 2 DEBUG nova.network.os_vif_util [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:69:75,bridge_name='br-int',has_traffic_filtering=True,id=d4332d2f-fff2-4de9-811c-7d5ce2580b21,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4332d2f-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.517 2 DEBUG os_vif [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:69:75,bridge_name='br-int',has_traffic_filtering=True,id=d4332d2f-fff2-4de9-811c-7d5ce2580b21,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4332d2f-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.518 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4332d2f-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4332d2f-ff, col_values=(('external_ids', {'iface-id': 'd4332d2f-fff2-4de9-811c-7d5ce2580b21', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:69:75', 'vm-uuid': 'bdfd070c-d036-4656-b797-efba7d4a4565'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:12 compute-0 NetworkManager[44885]: <info>  [1760434632.5232] manager: (tapd4332d2f-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/651)
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.536 2 INFO os_vif [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:69:75,bridge_name='br-int',has_traffic_filtering=True,id=d4332d2f-fff2-4de9-811c-7d5ce2580b21,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4332d2f-ff')
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.583 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.583 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.583 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:fc:b2:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.584 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:98:69:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.584 2 INFO nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Using config drive
Oct 14 09:37:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:12 compute-0 nova_compute[259627]: 2025-10-14 09:37:12.610 2 DEBUG nova.storage.rbd_utils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image bdfd070c-d036-4656-b797-efba7d4a4565_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:37:12 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1264397896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:37:12 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/57655119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:37:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:37:13 compute-0 nova_compute[259627]: 2025-10-14 09:37:13.535 2 INFO nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Creating config drive at /var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565/disk.config
Oct 14 09:37:13 compute-0 nova_compute[259627]: 2025-10-14 09:37:13.545 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmperl9e3em execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:37:13 compute-0 nova_compute[259627]: 2025-10-14 09:37:13.721 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmperl9e3em" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:37:13 compute-0 ceph-mon[74249]: pgmap v2522: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:13 compute-0 nova_compute[259627]: 2025-10-14 09:37:13.762 2 DEBUG nova.storage.rbd_utils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image bdfd070c-d036-4656-b797-efba7d4a4565_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:37:13 compute-0 nova_compute[259627]: 2025-10-14 09:37:13.767 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565/disk.config bdfd070c-d036-4656-b797-efba7d4a4565_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:37:13 compute-0 nova_compute[259627]: 2025-10-14 09:37:13.962 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565/disk.config bdfd070c-d036-4656-b797-efba7d4a4565_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:37:13 compute-0 nova_compute[259627]: 2025-10-14 09:37:13.963 2 INFO nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Deleting local config drive /var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565/disk.config because it was imported into RBD.
Oct 14 09:37:14 compute-0 kernel: tapd3b7ded4-91: entered promiscuous mode
Oct 14 09:37:14 compute-0 NetworkManager[44885]: <info>  [1760434634.0076] manager: (tapd3b7ded4-91): new Tun device (/org/freedesktop/NetworkManager/Devices/652)
Oct 14 09:37:14 compute-0 ovn_controller[152662]: 2025-10-14T09:37:14Z|01595|binding|INFO|Claiming lport d3b7ded4-91fa-46dc-b6b9-e6e630c275ab for this chassis.
Oct 14 09:37:14 compute-0 ovn_controller[152662]: 2025-10-14T09:37:14Z|01596|binding|INFO|d3b7ded4-91fa-46dc-b6b9-e6e630c275ab: Claiming fa:16:3e:fc:b2:53 10.100.0.10
Oct 14 09:37:14 compute-0 nova_compute[259627]: 2025-10-14 09:37:14.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:14 compute-0 nova_compute[259627]: 2025-10-14 09:37:14.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:14 compute-0 nova_compute[259627]: 2025-10-14 09:37:14.027 2 DEBUG nova.network.neutron [req-867068c4-c893-4658-b561-e811d1ea0486 req-3f3e3c2c-0382-46a3-9873-6c9a68312bb1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updated VIF entry in instance network info cache for port d4332d2f-fff2-4de9-811c-7d5ce2580b21. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:37:14 compute-0 nova_compute[259627]: 2025-10-14 09:37:14.028 2 DEBUG nova.network.neutron [req-867068c4-c893-4658-b561-e811d1ea0486 req-3f3e3c2c-0382-46a3-9873-6c9a68312bb1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updating instance_info_cache with network_info: [{"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:37:14 compute-0 NetworkManager[44885]: <info>  [1760434634.0306] manager: (tapd4332d2f-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/653)
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.031 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:b2:53 10.100.0.10'], port_security=['fa:16:3e:fc:b2:53 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bdfd070c-d036-4656-b797-efba7d4a4565', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b6145c26-2b2a-4cab-aefe-d574ccfcd594', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b96c99b-2e43-4904-a8ba-7ffb70fd145f, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.033 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d3b7ded4-91fa-46dc-b6b9-e6e630c275ab in datapath cb1842f7-933b-4c76-aa59-c55590c98ec5 bound to our chassis
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.034 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cb1842f7-933b-4c76-aa59-c55590c98ec5
Oct 14 09:37:14 compute-0 systemd-udevd[411882]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:37:14 compute-0 systemd-udevd[411883]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.048 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d701eb-e0ee-488b-be3d-b5d2f5bb7832]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.050 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcb1842f7-91 in ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.052 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcb1842f7-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.052 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f041d22f-1487-44c0-a43f-987d3b06d6c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.053 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad5925f-3795-4018-9b67-5a267872dfad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:14 compute-0 nova_compute[259627]: 2025-10-14 09:37:14.053 2 DEBUG oslo_concurrency.lockutils [req-867068c4-c893-4658-b561-e811d1ea0486 req-3f3e3c2c-0382-46a3-9873-6c9a68312bb1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:37:14 compute-0 NetworkManager[44885]: <info>  [1760434634.0658] device (tapd3b7ded4-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.064 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[09e0483b-bda1-48e7-96f1-c885332bc7eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:14 compute-0 NetworkManager[44885]: <info>  [1760434634.0668] device (tapd3b7ded4-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:37:14 compute-0 systemd-machined[214636]: New machine qemu-179-instance-00000092.
Oct 14 09:37:14 compute-0 systemd[1]: Started Virtual Machine qemu-179-instance-00000092.
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.088 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[72e71172-604d-4c93-8599-bbbf5a1ccfc6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:14 compute-0 NetworkManager[44885]: <info>  [1760434634.1011] device (tapd4332d2f-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:37:14 compute-0 kernel: tapd4332d2f-ff: entered promiscuous mode
Oct 14 09:37:14 compute-0 NetworkManager[44885]: <info>  [1760434634.1024] device (tapd4332d2f-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:37:14 compute-0 ovn_controller[152662]: 2025-10-14T09:37:14Z|01597|binding|INFO|Claiming lport d4332d2f-fff2-4de9-811c-7d5ce2580b21 for this chassis.
Oct 14 09:37:14 compute-0 ovn_controller[152662]: 2025-10-14T09:37:14Z|01598|binding|INFO|d4332d2f-fff2-4de9-811c-7d5ce2580b21: Claiming fa:16:3e:98:69:75 2001:db8::f816:3eff:fe98:6975
Oct 14 09:37:14 compute-0 nova_compute[259627]: 2025-10-14 09:37:14.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.110 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:69:75 2001:db8::f816:3eff:fe98:6975'], port_security=['fa:16:3e:98:69:75 2001:db8::f816:3eff:fe98:6975'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe98:6975/64', 'neutron:device_id': 'bdfd070c-d036-4656-b797-efba7d4a4565', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b6145c26-2b2a-4cab-aefe-d574ccfcd594', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85b53903-cbc2-43bd-a94b-2366acd741ae, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d4332d2f-fff2-4de9-811c-7d5ce2580b21) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:37:14 compute-0 ovn_controller[152662]: 2025-10-14T09:37:14Z|01599|binding|INFO|Setting lport d3b7ded4-91fa-46dc-b6b9-e6e630c275ab ovn-installed in OVS
Oct 14 09:37:14 compute-0 ovn_controller[152662]: 2025-10-14T09:37:14Z|01600|binding|INFO|Setting lport d3b7ded4-91fa-46dc-b6b9-e6e630c275ab up in Southbound
Oct 14 09:37:14 compute-0 nova_compute[259627]: 2025-10-14 09:37:14.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.125 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6c78c16f-6199-4f3c-ae4e-4b4061850f80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:14 compute-0 ovn_controller[152662]: 2025-10-14T09:37:14Z|01601|binding|INFO|Setting lport d4332d2f-fff2-4de9-811c-7d5ce2580b21 ovn-installed in OVS
Oct 14 09:37:14 compute-0 ovn_controller[152662]: 2025-10-14T09:37:14Z|01602|binding|INFO|Setting lport d4332d2f-fff2-4de9-811c-7d5ce2580b21 up in Southbound
Oct 14 09:37:14 compute-0 nova_compute[259627]: 2025-10-14 09:37:14.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:14 compute-0 NetworkManager[44885]: <info>  [1760434634.1339] manager: (tapcb1842f7-90): new Veth device (/org/freedesktop/NetworkManager/Devices/654)
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.133 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fd975035-7928-4e2c-a136-91b05f06aaa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.166 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a6a552-623d-4704-ac7f-788774a01656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.174 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[607a3784-cff4-48c5-92bc-b34d47161f73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:14 compute-0 NetworkManager[44885]: <info>  [1760434634.2007] device (tapcb1842f7-90): carrier: link connected
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.205 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5d0aaea6-6f79-4d5a-9171-e2844d7b6d1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.226 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a7db48-c1eb-4858-8c5d-6d670a3270ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcb1842f7-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:b7:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841296, 'reachable_time': 21039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411918, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.242 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b9a72328-9460-4391-9ac5-9cc7a437c58b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe90:b737'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841296, 'tstamp': 841296}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411919, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.258 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[57f1d425-1095-493e-be3a-c08d7270a695]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcb1842f7-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:b7:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841296, 'reachable_time': 21039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 411920, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.293 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[84a0b4a2-e572-49ce-9a77-42b21272025f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:14 compute-0 nova_compute[259627]: 2025-10-14 09:37:14.328 2 DEBUG nova.compute.manager [req-7d74f27b-b726-4ba5-8114-f8ecfbdf01b4 req-d8e7aa0b-1965-43df-8afd-bd46219d2216 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-plugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:37:14 compute-0 nova_compute[259627]: 2025-10-14 09:37:14.328 2 DEBUG oslo_concurrency.lockutils [req-7d74f27b-b726-4ba5-8114-f8ecfbdf01b4 req-d8e7aa0b-1965-43df-8afd-bd46219d2216 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:14 compute-0 nova_compute[259627]: 2025-10-14 09:37:14.328 2 DEBUG oslo_concurrency.lockutils [req-7d74f27b-b726-4ba5-8114-f8ecfbdf01b4 req-d8e7aa0b-1965-43df-8afd-bd46219d2216 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:14 compute-0 nova_compute[259627]: 2025-10-14 09:37:14.329 2 DEBUG oslo_concurrency.lockutils [req-7d74f27b-b726-4ba5-8114-f8ecfbdf01b4 req-d8e7aa0b-1965-43df-8afd-bd46219d2216 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:14 compute-0 nova_compute[259627]: 2025-10-14 09:37:14.329 2 DEBUG nova.compute.manager [req-7d74f27b-b726-4ba5-8114-f8ecfbdf01b4 req-d8e7aa0b-1965-43df-8afd-bd46219d2216 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Processing event network-vif-plugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.351 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[166f7ba2-f224-4fcf-a73a-61e24f4aa230]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.352 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcb1842f7-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.353 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.353 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcb1842f7-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:14 compute-0 nova_compute[259627]: 2025-10-14 09:37:14.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:14 compute-0 NetworkManager[44885]: <info>  [1760434634.3854] manager: (tapcb1842f7-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/655)
Oct 14 09:37:14 compute-0 kernel: tapcb1842f7-90: entered promiscuous mode
Oct 14 09:37:14 compute-0 nova_compute[259627]: 2025-10-14 09:37:14.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.387 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcb1842f7-90, col_values=(('external_ids', {'iface-id': '49fe400f-9e76-42ad-a72c-3f9a5bf50e43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:14 compute-0 nova_compute[259627]: 2025-10-14 09:37:14.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:14 compute-0 ovn_controller[152662]: 2025-10-14T09:37:14Z|01603|binding|INFO|Releasing lport 49fe400f-9e76-42ad-a72c-3f9a5bf50e43 from this chassis (sb_readonly=0)
Oct 14 09:37:14 compute-0 nova_compute[259627]: 2025-10-14 09:37:14.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.405 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cb1842f7-933b-4c76-aa59-c55590c98ec5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cb1842f7-933b-4c76-aa59-c55590c98ec5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.407 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e8080006-1613-4b02-917e-95cc0b7372bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.407 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-cb1842f7-933b-4c76-aa59-c55590c98ec5
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/cb1842f7-933b-4c76-aa59-c55590c98ec5.pid.haproxy
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID cb1842f7-933b-4c76-aa59-c55590c98ec5
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:37:14 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.408 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'env', 'PROCESS_TAG=haproxy-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cb1842f7-933b-4c76-aa59-c55590c98ec5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:37:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:14 compute-0 podman[411985]: 2025-10-14 09:37:14.842870614 +0000 UTC m=+0.074952040 container create a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 09:37:14 compute-0 podman[411985]: 2025-10-14 09:37:14.805552508 +0000 UTC m=+0.037633974 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:37:14 compute-0 systemd[1]: Started libpod-conmon-a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54.scope.
Oct 14 09:37:14 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a575707cf6d3dded0de43b86eb81e521613e1d92e58a48fd3de84aba25f6fe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:37:14 compute-0 podman[411985]: 2025-10-14 09:37:14.955041247 +0000 UTC m=+0.187122703 container init a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:37:14 compute-0 podman[411985]: 2025-10-14 09:37:14.961825373 +0000 UTC m=+0.193906799 container start a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:37:14 compute-0 neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5[412010]: [NOTICE]   (412014) : New worker (412016) forked
Oct 14 09:37:14 compute-0 neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5[412010]: [NOTICE]   (412014) : Loading success.
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.028 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d4332d2f-fff2-4de9-811c-7d5ce2580b21 in datapath 4914ad10-cd65-4b9a-8ebb-43ebafc5f222 unbound from our chassis
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.030 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4914ad10-cd65-4b9a-8ebb-43ebafc5f222
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.049 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[881b05e4-e0e6-486f-b196-6167f2244a98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.050 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4914ad10-c1 in ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.052 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4914ad10-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.052 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[361823a3-309c-4429-b7db-2820c1ba0a63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.053 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2768a337-988c-495c-ba5f-9ab1163c4803]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.067 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[134bc167-84f2-43fc-9273-5e9c96344626]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.089 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3abaa51e-a511-4f76-9aff-3d4fb3ac65a5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.127 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0b5d72-dd35-4f8a-84d7-9680fbb5fd51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.135 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[14c8719a-4fbe-45cb-a921-db9a148892ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:15 compute-0 NetworkManager[44885]: <info>  [1760434635.1371] manager: (tap4914ad10-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/656)
Oct 14 09:37:15 compute-0 systemd-udevd[411911]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.174 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc4e6af-cc85-464f-a01d-24d0f18fedb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.178 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[80d576ef-7a0d-4ea8-883e-d417dc5c4131]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:15 compute-0 NetworkManager[44885]: <info>  [1760434635.2094] device (tap4914ad10-c0): carrier: link connected
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.215 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9d10a6bf-2d58-4d36-bbe5-15666983e665]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.236 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dcf354c3-30e3-4e5a-b580-27105396efb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4914ad10-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:b4:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841397, 'reachable_time': 27486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412035, 'error': None, 'target': 'ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.258 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[df20cebb-82cc-42bf-b272-72eccd3bf583]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:b40c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841397, 'tstamp': 841397}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412036, 'error': None, 'target': 'ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.282 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0eb2687-e50f-43df-b19c-d9d0cbeda6df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4914ad10-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:b4:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841397, 'reachable_time': 27486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 412037, 'error': None, 'target': 'ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:15 compute-0 nova_compute[259627]: 2025-10-14 09:37:15.286 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434635.2860374, bdfd070c-d036-4656-b797-efba7d4a4565 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:37:15 compute-0 nova_compute[259627]: 2025-10-14 09:37:15.286 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] VM Started (Lifecycle Event)
Oct 14 09:37:15 compute-0 nova_compute[259627]: 2025-10-14 09:37:15.309 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:37:15 compute-0 nova_compute[259627]: 2025-10-14 09:37:15.313 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434635.2883456, bdfd070c-d036-4656-b797-efba7d4a4565 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:37:15 compute-0 nova_compute[259627]: 2025-10-14 09:37:15.313 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] VM Paused (Lifecycle Event)
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.327 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[14ab6682-ee98-41b0-acfa-f41c0663cc8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:15 compute-0 nova_compute[259627]: 2025-10-14 09:37:15.331 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:37:15 compute-0 nova_compute[259627]: 2025-10-14 09:37:15.335 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.356 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[192caad6-29eb-446d-94db-b81d6e180238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.357 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4914ad10-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.357 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.358 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4914ad10-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:15 compute-0 NetworkManager[44885]: <info>  [1760434635.3607] manager: (tap4914ad10-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/657)
Oct 14 09:37:15 compute-0 nova_compute[259627]: 2025-10-14 09:37:15.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:15 compute-0 kernel: tap4914ad10-c0: entered promiscuous mode
Oct 14 09:37:15 compute-0 nova_compute[259627]: 2025-10-14 09:37:15.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.364 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4914ad10-c0, col_values=(('external_ids', {'iface-id': '7ea31872-5902-4f26-8d38-70f94c9c61fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:15 compute-0 ovn_controller[152662]: 2025-10-14T09:37:15Z|01604|binding|INFO|Releasing lport 7ea31872-5902-4f26-8d38-70f94c9c61fa from this chassis (sb_readonly=0)
Oct 14 09:37:15 compute-0 nova_compute[259627]: 2025-10-14 09:37:15.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:15 compute-0 nova_compute[259627]: 2025-10-14 09:37:15.372 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:37:15 compute-0 nova_compute[259627]: 2025-10-14 09:37:15.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.395 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4914ad10-cd65-4b9a-8ebb-43ebafc5f222.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4914ad10-cd65-4b9a-8ebb-43ebafc5f222.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.396 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa365d3-919c-4264-99c1-fc775e325157]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.396 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-4914ad10-cd65-4b9a-8ebb-43ebafc5f222
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/4914ad10-cd65-4b9a-8ebb-43ebafc5f222.pid.haproxy
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 4914ad10-cd65-4b9a-8ebb-43ebafc5f222
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:37:15 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.397 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'env', 'PROCESS_TAG=haproxy-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4914ad10-cd65-4b9a-8ebb-43ebafc5f222.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:37:15 compute-0 ceph-mon[74249]: pgmap v2523: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:15 compute-0 podman[412067]: 2025-10-14 09:37:15.806411879 +0000 UTC m=+0.068586704 container create 7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:37:15 compute-0 systemd[1]: Started libpod-conmon-7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb.scope.
Oct 14 09:37:15 compute-0 podman[412067]: 2025-10-14 09:37:15.769805481 +0000 UTC m=+0.031980386 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:37:15 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1331dc74cf2f6433ae28b4a6f4672079f6c234826e20c160e76e1878b09b3328/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:37:15 compute-0 podman[412067]: 2025-10-14 09:37:15.917301291 +0000 UTC m=+0.179476196 container init 7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 09:37:15 compute-0 podman[412067]: 2025-10-14 09:37:15.929378167 +0000 UTC m=+0.191553022 container start 7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 14 09:37:15 compute-0 neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222[412083]: [NOTICE]   (412087) : New worker (412089) forked
Oct 14 09:37:15 compute-0 neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222[412083]: [NOTICE]   (412087) : Loading success.
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.656 2 DEBUG nova.compute.manager [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-plugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.656 2 DEBUG oslo_concurrency.lockutils [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.657 2 DEBUG oslo_concurrency.lockutils [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.657 2 DEBUG oslo_concurrency.lockutils [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.658 2 DEBUG nova.compute.manager [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] No event matching network-vif-plugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab in dict_keys([('network-vif-plugged', 'd4332d2f-fff2-4de9-811c-7d5ce2580b21')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.658 2 WARNING nova.compute.manager [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received unexpected event network-vif-plugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab for instance with vm_state building and task_state spawning.
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.659 2 DEBUG nova.compute.manager [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-plugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.659 2 DEBUG oslo_concurrency.lockutils [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.660 2 DEBUG oslo_concurrency.lockutils [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.660 2 DEBUG oslo_concurrency.lockutils [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.661 2 DEBUG nova.compute.manager [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Processing event network-vif-plugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.661 2 DEBUG nova.compute.manager [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-plugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.661 2 DEBUG oslo_concurrency.lockutils [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.662 2 DEBUG oslo_concurrency.lockutils [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.662 2 DEBUG oslo_concurrency.lockutils [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.663 2 DEBUG nova.compute.manager [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] No waiting events found dispatching network-vif-plugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.663 2 WARNING nova.compute.manager [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received unexpected event network-vif-plugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 for instance with vm_state building and task_state spawning.
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.664 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.670 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434636.669818, bdfd070c-d036-4656-b797-efba7d4a4565 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.670 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] VM Resumed (Lifecycle Event)
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.674 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.681 2 INFO nova.virt.libvirt.driver [-] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Instance spawned successfully.
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.681 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.699 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.704 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.719 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.719 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.720 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.721 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.721 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.722 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.741 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.804 2 INFO nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Took 12.92 seconds to spawn the instance on the hypervisor.
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.804 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.880 2 INFO nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Took 14.10 seconds to build instance.
Oct 14 09:37:16 compute-0 nova_compute[259627]: 2025-10-14 09:37:16.896 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:17 compute-0 nova_compute[259627]: 2025-10-14 09:37:17.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:17 compute-0 ceph-mon[74249]: pgmap v2524: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 09:37:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:37:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 14 09:37:19 compute-0 ceph-mon[74249]: pgmap v2525: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 14 09:37:20 compute-0 nova_compute[259627]: 2025-10-14 09:37:20.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:20.338 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:37:20 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:20.339 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:37:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:37:21 compute-0 ovn_controller[152662]: 2025-10-14T09:37:21Z|01605|binding|INFO|Releasing lport 49fe400f-9e76-42ad-a72c-3f9a5bf50e43 from this chassis (sb_readonly=0)
Oct 14 09:37:21 compute-0 ovn_controller[152662]: 2025-10-14T09:37:21Z|01606|binding|INFO|Releasing lport 7ea31872-5902-4f26-8d38-70f94c9c61fa from this chassis (sb_readonly=0)
Oct 14 09:37:21 compute-0 nova_compute[259627]: 2025-10-14 09:37:21.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:21 compute-0 NetworkManager[44885]: <info>  [1760434641.2268] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/658)
Oct 14 09:37:21 compute-0 NetworkManager[44885]: <info>  [1760434641.2291] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/659)
Oct 14 09:37:21 compute-0 ovn_controller[152662]: 2025-10-14T09:37:21Z|01607|binding|INFO|Releasing lport 49fe400f-9e76-42ad-a72c-3f9a5bf50e43 from this chassis (sb_readonly=0)
Oct 14 09:37:21 compute-0 ovn_controller[152662]: 2025-10-14T09:37:21Z|01608|binding|INFO|Releasing lport 7ea31872-5902-4f26-8d38-70f94c9c61fa from this chassis (sb_readonly=0)
Oct 14 09:37:21 compute-0 nova_compute[259627]: 2025-10-14 09:37:21.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:21 compute-0 nova_compute[259627]: 2025-10-14 09:37:21.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:21 compute-0 nova_compute[259627]: 2025-10-14 09:37:21.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:21 compute-0 nova_compute[259627]: 2025-10-14 09:37:21.567 2 DEBUG nova.compute.manager [req-03d7a12b-67c3-41b2-bc43-aff5342d18d8 req-f35699dc-7eb6-49c4-9052-45fe4c28814a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-changed-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:37:21 compute-0 nova_compute[259627]: 2025-10-14 09:37:21.568 2 DEBUG nova.compute.manager [req-03d7a12b-67c3-41b2-bc43-aff5342d18d8 req-f35699dc-7eb6-49c4-9052-45fe4c28814a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Refreshing instance network info cache due to event network-changed-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:37:21 compute-0 nova_compute[259627]: 2025-10-14 09:37:21.568 2 DEBUG oslo_concurrency.lockutils [req-03d7a12b-67c3-41b2-bc43-aff5342d18d8 req-f35699dc-7eb6-49c4-9052-45fe4c28814a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:37:21 compute-0 nova_compute[259627]: 2025-10-14 09:37:21.569 2 DEBUG oslo_concurrency.lockutils [req-03d7a12b-67c3-41b2-bc43-aff5342d18d8 req-f35699dc-7eb6-49c4-9052-45fe4c28814a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:37:21 compute-0 nova_compute[259627]: 2025-10-14 09:37:21.569 2 DEBUG nova.network.neutron [req-03d7a12b-67c3-41b2-bc43-aff5342d18d8 req-f35699dc-7eb6-49c4-9052-45fe4c28814a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Refreshing network info cache for port d3b7ded4-91fa-46dc-b6b9-e6e630c275ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:37:21 compute-0 ceph-mon[74249]: pgmap v2526: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:37:22 compute-0 nova_compute[259627]: 2025-10-14 09:37:22.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:37:22 compute-0 podman[412100]: 2025-10-14 09:37:22.697534786 +0000 UTC m=+0.084855073 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 09:37:22 compute-0 podman[412099]: 2025-10-14 09:37:22.702500338 +0000 UTC m=+0.092146653 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 09:37:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:37:23 compute-0 ceph-mon[74249]: pgmap v2527: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:37:24 compute-0 nova_compute[259627]: 2025-10-14 09:37:24.006 2 DEBUG nova.network.neutron [req-03d7a12b-67c3-41b2-bc43-aff5342d18d8 req-f35699dc-7eb6-49c4-9052-45fe4c28814a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updated VIF entry in instance network info cache for port d3b7ded4-91fa-46dc-b6b9-e6e630c275ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:37:24 compute-0 nova_compute[259627]: 2025-10-14 09:37:24.007 2 DEBUG nova.network.neutron [req-03d7a12b-67c3-41b2-bc43-aff5342d18d8 req-f35699dc-7eb6-49c4-9052-45fe4c28814a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updating instance_info_cache with network_info: [{"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:37:24 compute-0 nova_compute[259627]: 2025-10-14 09:37:24.028 2 DEBUG oslo_concurrency.lockutils [req-03d7a12b-67c3-41b2-bc43-aff5342d18d8 req-f35699dc-7eb6-49c4-9052-45fe4c28814a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:37:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:37:25 compute-0 ceph-mon[74249]: pgmap v2528: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:37:26 compute-0 nova_compute[259627]: 2025-10-14 09:37:26.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:37:27 compute-0 nova_compute[259627]: 2025-10-14 09:37:27.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:27 compute-0 ceph-mon[74249]: pgmap v2529: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:37:27 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 14 09:37:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:37:28 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:28.342 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 14 09:37:28 compute-0 ovn_controller[152662]: 2025-10-14T09:37:28Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:b2:53 10.100.0.10
Oct 14 09:37:28 compute-0 ovn_controller[152662]: 2025-10-14T09:37:28Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:b2:53 10.100.0.10
Oct 14 09:37:29 compute-0 ceph-mon[74249]: pgmap v2530: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 14 09:37:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 14 09:37:31 compute-0 nova_compute[259627]: 2025-10-14 09:37:31.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:31 compute-0 ceph-mon[74249]: pgmap v2531: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 14 09:37:32 compute-0 nova_compute[259627]: 2025-10-14 09:37:32.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:37:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:37:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:37:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:37:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:37:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:37:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:37:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:37:32
Oct 14 09:37:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:37:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:37:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', '.mgr', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log']
Oct 14 09:37:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:37:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:37:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:37:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:37:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:37:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:37:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:37:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:37:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:37:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:37:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:37:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:37:33 compute-0 ceph-mon[74249]: pgmap v2532: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:37:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:37:35 compute-0 ceph-mon[74249]: pgmap v2533: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:37:36 compute-0 nova_compute[259627]: 2025-10-14 09:37:36.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:37:37 compute-0 nova_compute[259627]: 2025-10-14 09:37:37.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:37 compute-0 podman[412142]: 2025-10-14 09:37:37.666827749 +0000 UTC m=+0.075006861 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:37:37 compute-0 podman[412141]: 2025-10-14 09:37:37.702897974 +0000 UTC m=+0.114932171 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:37:37 compute-0 ceph-mon[74249]: pgmap v2534: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:37:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:37:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:37:39 compute-0 ceph-mon[74249]: pgmap v2535: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:37:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:37:41 compute-0 nova_compute[259627]: 2025-10-14 09:37:41.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:41 compute-0 ceph-mon[74249]: pgmap v2536: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:37:42 compute-0 nova_compute[259627]: 2025-10-14 09:37:42.504 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:42 compute-0 nova_compute[259627]: 2025-10-14 09:37:42.504 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:42 compute-0 nova_compute[259627]: 2025-10-14 09:37:42.527 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:37:42 compute-0 nova_compute[259627]: 2025-10-14 09:37:42.603 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:42 compute-0 nova_compute[259627]: 2025-10-14 09:37:42.603 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:42 compute-0 nova_compute[259627]: 2025-10-14 09:37:42.613 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:37:42 compute-0 nova_compute[259627]: 2025-10-14 09:37:42.614 2 INFO nova.compute.claims [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:37:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 09:37:42 compute-0 nova_compute[259627]: 2025-10-14 09:37:42.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:42 compute-0 nova_compute[259627]: 2025-10-14 09:37:42.748 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:37:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:37:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:37:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/418417672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.247 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.256 2 DEBUG nova.compute.provider_tree [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.285 2 DEBUG nova.scheduler.client.report [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.321 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.323 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.380 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.380 2 DEBUG nova.network.neutron [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.404 2 INFO nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.420 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.523 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.525 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.525 2 INFO nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Creating image(s)
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.559 2 DEBUG nova.storage.rbd_utils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4e23c3df-9710-4287-9890-cdae2d551fc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:37:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.598 2 DEBUG nova.storage.rbd_utils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4e23c3df-9710-4287-9890-cdae2d551fc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.637 2 DEBUG nova.storage.rbd_utils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4e23c3df-9710-4287-9890-cdae2d551fc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.642 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.695 2 DEBUG nova.policy [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.731 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.732 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.733 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.733 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.759 2 DEBUG nova.storage.rbd_utils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4e23c3df-9710-4287-9890-cdae2d551fc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:37:43 compute-0 nova_compute[259627]: 2025-10-14 09:37:43.765 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4e23c3df-9710-4287-9890-cdae2d551fc0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:37:43 compute-0 ceph-mon[74249]: pgmap v2537: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 09:37:43 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/418417672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:37:44 compute-0 nova_compute[259627]: 2025-10-14 09:37:44.211 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4e23c3df-9710-4287-9890-cdae2d551fc0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:37:44 compute-0 nova_compute[259627]: 2025-10-14 09:37:44.304 2 DEBUG nova.storage.rbd_utils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image 4e23c3df-9710-4287-9890-cdae2d551fc0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:37:44 compute-0 nova_compute[259627]: 2025-10-14 09:37:44.345 2 DEBUG nova.network.neutron [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Successfully created port: 31c29d30-7edf-486a-a168-0356f62ab3b9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:37:44 compute-0 nova_compute[259627]: 2025-10-14 09:37:44.420 2 DEBUG nova.objects.instance [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid 4e23c3df-9710-4287-9890-cdae2d551fc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:37:44 compute-0 nova_compute[259627]: 2025-10-14 09:37:44.447 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:37:44 compute-0 nova_compute[259627]: 2025-10-14 09:37:44.447 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Ensure instance console log exists: /var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:37:44 compute-0 nova_compute[259627]: 2025-10-14 09:37:44.448 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:44 compute-0 nova_compute[259627]: 2025-10-14 09:37:44.449 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:44 compute-0 nova_compute[259627]: 2025-10-14 09:37:44.449 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 09:37:45 compute-0 nova_compute[259627]: 2025-10-14 09:37:45.427 2 DEBUG nova.network.neutron [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Successfully created port: b67fadaf-4e6a-49b7-b340-4a8659d6216b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:37:45 compute-0 ceph-mon[74249]: pgmap v2538: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 09:37:46 compute-0 nova_compute[259627]: 2025-10-14 09:37:46.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:46 compute-0 nova_compute[259627]: 2025-10-14 09:37:46.562 2 DEBUG nova.network.neutron [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Successfully updated port: 31c29d30-7edf-486a-a168-0356f62ab3b9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:37:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:46 compute-0 nova_compute[259627]: 2025-10-14 09:37:46.664 2 DEBUG nova.compute.manager [req-ab53597d-f11f-4dd7-9351-2566ef206d48 req-e6bbdc4d-3c2c-46d9-9b4d-ed7b63b59407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-changed-31c29d30-7edf-486a-a168-0356f62ab3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:37:46 compute-0 nova_compute[259627]: 2025-10-14 09:37:46.664 2 DEBUG nova.compute.manager [req-ab53597d-f11f-4dd7-9351-2566ef206d48 req-e6bbdc4d-3c2c-46d9-9b4d-ed7b63b59407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Refreshing instance network info cache due to event network-changed-31c29d30-7edf-486a-a168-0356f62ab3b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:37:46 compute-0 nova_compute[259627]: 2025-10-14 09:37:46.664 2 DEBUG oslo_concurrency.lockutils [req-ab53597d-f11f-4dd7-9351-2566ef206d48 req-e6bbdc4d-3c2c-46d9-9b4d-ed7b63b59407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:37:46 compute-0 nova_compute[259627]: 2025-10-14 09:37:46.665 2 DEBUG oslo_concurrency.lockutils [req-ab53597d-f11f-4dd7-9351-2566ef206d48 req-e6bbdc4d-3c2c-46d9-9b4d-ed7b63b59407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:37:46 compute-0 nova_compute[259627]: 2025-10-14 09:37:46.665 2 DEBUG nova.network.neutron [req-ab53597d-f11f-4dd7-9351-2566ef206d48 req-e6bbdc4d-3c2c-46d9-9b4d-ed7b63b59407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Refreshing network info cache for port 31c29d30-7edf-486a-a168-0356f62ab3b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:37:46 compute-0 nova_compute[259627]: 2025-10-14 09:37:46.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:37:47 compute-0 nova_compute[259627]: 2025-10-14 09:37:47.119 2 DEBUG nova.network.neutron [req-ab53597d-f11f-4dd7-9351-2566ef206d48 req-e6bbdc4d-3c2c-46d9-9b4d-ed7b63b59407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:37:47 compute-0 nova_compute[259627]: 2025-10-14 09:37:47.501 2 DEBUG nova.network.neutron [req-ab53597d-f11f-4dd7-9351-2566ef206d48 req-e6bbdc4d-3c2c-46d9-9b4d-ed7b63b59407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:37:47 compute-0 nova_compute[259627]: 2025-10-14 09:37:47.523 2 DEBUG oslo_concurrency.lockutils [req-ab53597d-f11f-4dd7-9351-2566ef206d48 req-e6bbdc4d-3c2c-46d9-9b4d-ed7b63b59407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:37:47 compute-0 nova_compute[259627]: 2025-10-14 09:37:47.534 2 DEBUG nova.network.neutron [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Successfully updated port: b67fadaf-4e6a-49b7-b340-4a8659d6216b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:37:47 compute-0 nova_compute[259627]: 2025-10-14 09:37:47.559 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:37:47 compute-0 nova_compute[259627]: 2025-10-14 09:37:47.559 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:37:47 compute-0 nova_compute[259627]: 2025-10-14 09:37:47.560 2 DEBUG nova.network.neutron [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:37:47 compute-0 nova_compute[259627]: 2025-10-14 09:37:47.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:47 compute-0 ceph-mon[74249]: pgmap v2539: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:48 compute-0 nova_compute[259627]: 2025-10-14 09:37:48.091 2 DEBUG nova.network.neutron [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:37:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:37:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:48 compute-0 nova_compute[259627]: 2025-10-14 09:37:48.825 2 DEBUG nova.compute.manager [req-3a32bcae-a490-48ab-a1dd-76c19272ee9f req-d25b100b-49aa-451c-9e9e-c2fc5c0bb12c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-changed-b67fadaf-4e6a-49b7-b340-4a8659d6216b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:37:48 compute-0 nova_compute[259627]: 2025-10-14 09:37:48.826 2 DEBUG nova.compute.manager [req-3a32bcae-a490-48ab-a1dd-76c19272ee9f req-d25b100b-49aa-451c-9e9e-c2fc5c0bb12c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Refreshing instance network info cache due to event network-changed-b67fadaf-4e6a-49b7-b340-4a8659d6216b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:37:48 compute-0 nova_compute[259627]: 2025-10-14 09:37:48.826 2 DEBUG oslo_concurrency.lockutils [req-3a32bcae-a490-48ab-a1dd-76c19272ee9f req-d25b100b-49aa-451c-9e9e-c2fc5c0bb12c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.814 2 DEBUG nova.network.neutron [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Updating instance_info_cache with network_info: [{"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.836 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.836 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Instance network_info: |[{"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.837 2 DEBUG oslo_concurrency.lockutils [req-3a32bcae-a490-48ab-a1dd-76c19272ee9f req-d25b100b-49aa-451c-9e9e-c2fc5c0bb12c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.838 2 DEBUG nova.network.neutron [req-3a32bcae-a490-48ab-a1dd-76c19272ee9f req-d25b100b-49aa-451c-9e9e-c2fc5c0bb12c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Refreshing network info cache for port b67fadaf-4e6a-49b7-b340-4a8659d6216b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.844 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Start _get_guest_xml network_info=[{"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.851 2 WARNING nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.861 2 DEBUG nova.virt.libvirt.host [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.862 2 DEBUG nova.virt.libvirt.host [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.868 2 DEBUG nova.virt.libvirt.host [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.869 2 DEBUG nova.virt.libvirt.host [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.870 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.871 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.872 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.872 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.873 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.873 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.874 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.875 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.875 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.876 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.876 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.877 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:37:49 compute-0 nova_compute[259627]: 2025-10-14 09:37:49.882 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:37:49 compute-0 ceph-mon[74249]: pgmap v2540: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:37:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2417527090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.368 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.398 2 DEBUG nova.storage.rbd_utils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4e23c3df-9710-4287-9890-cdae2d551fc0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.403 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:37:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:37:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1785500214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.926 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.927 2 DEBUG nova.virt.libvirt.vif [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:37:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2012583976',display_name='tempest-TestGettingAddress-server-2012583976',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2012583976',id=147,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-mne0wzqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:37:43Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=4e23c3df-9710-4287-9890-cdae2d551fc0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.928 2 DEBUG nova.network.os_vif_util [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.928 2 DEBUG nova.network.os_vif_util [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:2b:85,bridge_name='br-int',has_traffic_filtering=True,id=31c29d30-7edf-486a-a168-0356f62ab3b9,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31c29d30-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.929 2 DEBUG nova.virt.libvirt.vif [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:37:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2012583976',display_name='tempest-TestGettingAddress-server-2012583976',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2012583976',id=147,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-mne0wzqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:37:43Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=4e23c3df-9710-4287-9890-cdae2d551fc0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.929 2 DEBUG nova.network.os_vif_util [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.930 2 DEBUG nova.network.os_vif_util [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0a:24,bridge_name='br-int',has_traffic_filtering=True,id=b67fadaf-4e6a-49b7-b340-4a8659d6216b,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb67fadaf-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.931 2 DEBUG nova.objects.instance [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e23c3df-9710-4287-9890-cdae2d551fc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:37:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2417527090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:37:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1785500214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.947 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:37:50 compute-0 nova_compute[259627]:   <uuid>4e23c3df-9710-4287-9890-cdae2d551fc0</uuid>
Oct 14 09:37:50 compute-0 nova_compute[259627]:   <name>instance-00000093</name>
Oct 14 09:37:50 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:37:50 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:37:50 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <nova:name>tempest-TestGettingAddress-server-2012583976</nova:name>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:37:49</nova:creationTime>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:37:50 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:37:50 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:37:50 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:37:50 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:37:50 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:37:50 compute-0 nova_compute[259627]:         <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 09:37:50 compute-0 nova_compute[259627]:         <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:37:50 compute-0 nova_compute[259627]:         <nova:port uuid="31c29d30-7edf-486a-a168-0356f62ab3b9">
Oct 14 09:37:50 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:37:50 compute-0 nova_compute[259627]:         <nova:port uuid="b67fadaf-4e6a-49b7-b340-4a8659d6216b">
Oct 14 09:37:50 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe9e:a24" ipVersion="6"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:37:50 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:37:50 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <system>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <entry name="serial">4e23c3df-9710-4287-9890-cdae2d551fc0</entry>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <entry name="uuid">4e23c3df-9710-4287-9890-cdae2d551fc0</entry>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     </system>
Oct 14 09:37:50 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:37:50 compute-0 nova_compute[259627]:   <os>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:   </os>
Oct 14 09:37:50 compute-0 nova_compute[259627]:   <features>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:   </features>
Oct 14 09:37:50 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:37:50 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:37:50 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/4e23c3df-9710-4287-9890-cdae2d551fc0_disk">
Oct 14 09:37:50 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       </source>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:37:50 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/4e23c3df-9710-4287-9890-cdae2d551fc0_disk.config">
Oct 14 09:37:50 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       </source>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:37:50 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:7f:2b:85"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <target dev="tap31c29d30-7e"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:9e:0a:24"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <target dev="tapb67fadaf-4e"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0/console.log" append="off"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <video>
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     </video>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:37:50 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:37:50 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:37:50 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:37:50 compute-0 nova_compute[259627]: </domain>
Oct 14 09:37:50 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.949 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Preparing to wait for external event network-vif-plugged-31c29d30-7edf-486a-a168-0356f62ab3b9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.949 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.949 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.949 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.949 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Preparing to wait for external event network-vif-plugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.950 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.950 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.950 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.951 2 DEBUG nova.virt.libvirt.vif [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:37:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2012583976',display_name='tempest-TestGettingAddress-server-2012583976',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2012583976',id=147,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-mne0wzqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:37:43Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=4e23c3df-9710-4287-9890-cdae2d551fc0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.951 2 DEBUG nova.network.os_vif_util [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.951 2 DEBUG nova.network.os_vif_util [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:2b:85,bridge_name='br-int',has_traffic_filtering=True,id=31c29d30-7edf-486a-a168-0356f62ab3b9,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31c29d30-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.952 2 DEBUG os_vif [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:2b:85,bridge_name='br-int',has_traffic_filtering=True,id=31c29d30-7edf-486a-a168-0356f62ab3b9,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31c29d30-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.953 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.953 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.956 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31c29d30-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.957 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31c29d30-7e, col_values=(('external_ids', {'iface-id': '31c29d30-7edf-486a-a168-0356f62ab3b9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:2b:85', 'vm-uuid': '4e23c3df-9710-4287-9890-cdae2d551fc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:50 compute-0 NetworkManager[44885]: <info>  [1760434670.9962] manager: (tap31c29d30-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/660)
Oct 14 09:37:50 compute-0 nova_compute[259627]: 2025-10-14 09:37:50.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.006 2 INFO os_vif [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:2b:85,bridge_name='br-int',has_traffic_filtering=True,id=31c29d30-7edf-486a-a168-0356f62ab3b9,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31c29d30-7e')
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.006 2 DEBUG nova.virt.libvirt.vif [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:37:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2012583976',display_name='tempest-TestGettingAddress-server-2012583976',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2012583976',id=147,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-mne0wzqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:37:43Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=4e23c3df-9710-4287-9890-cdae2d551fc0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.007 2 DEBUG nova.network.os_vif_util [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.007 2 DEBUG nova.network.os_vif_util [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0a:24,bridge_name='br-int',has_traffic_filtering=True,id=b67fadaf-4e6a-49b7-b340-4a8659d6216b,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb67fadaf-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.007 2 DEBUG os_vif [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0a:24,bridge_name='br-int',has_traffic_filtering=True,id=b67fadaf-4e6a-49b7-b340-4a8659d6216b,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb67fadaf-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.008 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.008 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.010 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb67fadaf-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.010 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb67fadaf-4e, col_values=(('external_ids', {'iface-id': 'b67fadaf-4e6a-49b7-b340-4a8659d6216b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:0a:24', 'vm-uuid': '4e23c3df-9710-4287-9890-cdae2d551fc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:51 compute-0 NetworkManager[44885]: <info>  [1760434671.0121] manager: (tapb67fadaf-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/661)
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.021 2 INFO os_vif [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0a:24,bridge_name='br-int',has_traffic_filtering=True,id=b67fadaf-4e6a-49b7-b340-4a8659d6216b,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb67fadaf-4e')
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.080 2 DEBUG nova.network.neutron [req-3a32bcae-a490-48ab-a1dd-76c19272ee9f req-d25b100b-49aa-451c-9e9e-c2fc5c0bb12c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Updated VIF entry in instance network info cache for port b67fadaf-4e6a-49b7-b340-4a8659d6216b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.081 2 DEBUG nova.network.neutron [req-3a32bcae-a490-48ab-a1dd-76c19272ee9f req-d25b100b-49aa-451c-9e9e-c2fc5c0bb12c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Updating instance_info_cache with network_info: [{"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.094 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.095 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.095 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:7f:2b:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.095 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:9e:0a:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.096 2 INFO nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Using config drive
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.129 2 DEBUG nova.storage.rbd_utils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4e23c3df-9710-4287-9890-cdae2d551fc0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.138 2 DEBUG oslo_concurrency.lockutils [req-3a32bcae-a490-48ab-a1dd-76c19272ee9f req-d25b100b-49aa-451c-9e9e-c2fc5c0bb12c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.499 2 INFO nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Creating config drive at /var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0/disk.config
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.510 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpod0jc_5c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.679 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpod0jc_5c" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.710 2 DEBUG nova.storage.rbd_utils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4e23c3df-9710-4287-9890-cdae2d551fc0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.715 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0/disk.config 4e23c3df-9710-4287-9890-cdae2d551fc0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.916 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0/disk.config 4e23c3df-9710-4287-9890-cdae2d551fc0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.918 2 INFO nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Deleting local config drive /var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0/disk.config because it was imported into RBD.
Oct 14 09:37:51 compute-0 ceph-mon[74249]: pgmap v2541: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:51 compute-0 nova_compute[259627]: 2025-10-14 09:37:51.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:37:51 compute-0 kernel: tap31c29d30-7e: entered promiscuous mode
Oct 14 09:37:51 compute-0 NetworkManager[44885]: <info>  [1760434671.9973] manager: (tap31c29d30-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/662)
Oct 14 09:37:52 compute-0 nova_compute[259627]: 2025-10-14 09:37:52.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:52 compute-0 ovn_controller[152662]: 2025-10-14T09:37:52Z|01609|binding|INFO|Claiming lport 31c29d30-7edf-486a-a168-0356f62ab3b9 for this chassis.
Oct 14 09:37:52 compute-0 ovn_controller[152662]: 2025-10-14T09:37:52Z|01610|binding|INFO|31c29d30-7edf-486a-a168-0356f62ab3b9: Claiming fa:16:3e:7f:2b:85 10.100.0.8
Oct 14 09:37:52 compute-0 NetworkManager[44885]: <info>  [1760434672.0191] manager: (tapb67fadaf-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/663)
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.017 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:2b:85 10.100.0.8'], port_security=['fa:16:3e:7f:2b:85 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '4e23c3df-9710-4287-9890-cdae2d551fc0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b6145c26-2b2a-4cab-aefe-d574ccfcd594', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b96c99b-2e43-4904-a8ba-7ffb70fd145f, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=31c29d30-7edf-486a-a168-0356f62ab3b9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.019 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 31c29d30-7edf-486a-a168-0356f62ab3b9 in datapath cb1842f7-933b-4c76-aa59-c55590c98ec5 bound to our chassis
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.021 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cb1842f7-933b-4c76-aa59-c55590c98ec5
Oct 14 09:37:52 compute-0 kernel: tapb67fadaf-4e: entered promiscuous mode
Oct 14 09:37:52 compute-0 ovn_controller[152662]: 2025-10-14T09:37:52Z|01611|binding|INFO|Setting lport 31c29d30-7edf-486a-a168-0356f62ab3b9 ovn-installed in OVS
Oct 14 09:37:52 compute-0 ovn_controller[152662]: 2025-10-14T09:37:52Z|01612|binding|INFO|Setting lport 31c29d30-7edf-486a-a168-0356f62ab3b9 up in Southbound
Oct 14 09:37:52 compute-0 nova_compute[259627]: 2025-10-14 09:37:52.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:52 compute-0 ovn_controller[152662]: 2025-10-14T09:37:52Z|01613|if_status|INFO|Dropped 1 log messages in last 241 seconds (most recently, 241 seconds ago) due to excessive rate
Oct 14 09:37:52 compute-0 ovn_controller[152662]: 2025-10-14T09:37:52Z|01614|if_status|INFO|Not updating pb chassis for b67fadaf-4e6a-49b7-b340-4a8659d6216b now as sb is readonly
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.046 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[99b3d99b-d5cf-4130-89e9-e28a27903cfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:52 compute-0 ovn_controller[152662]: 2025-10-14T09:37:52Z|01615|binding|INFO|Claiming lport b67fadaf-4e6a-49b7-b340-4a8659d6216b for this chassis.
Oct 14 09:37:52 compute-0 ovn_controller[152662]: 2025-10-14T09:37:52Z|01616|binding|INFO|b67fadaf-4e6a-49b7-b340-4a8659d6216b: Claiming fa:16:3e:9e:0a:24 2001:db8::f816:3eff:fe9e:a24
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.064 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:0a:24 2001:db8::f816:3eff:fe9e:a24'], port_security=['fa:16:3e:9e:0a:24 2001:db8::f816:3eff:fe9e:a24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9e:a24/64', 'neutron:device_id': '4e23c3df-9710-4287-9890-cdae2d551fc0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b6145c26-2b2a-4cab-aefe-d574ccfcd594', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85b53903-cbc2-43bd-a94b-2366acd741ae, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b67fadaf-4e6a-49b7-b340-4a8659d6216b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:37:52 compute-0 systemd-udevd[412516]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:37:52 compute-0 systemd-udevd[412515]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:37:52 compute-0 nova_compute[259627]: 2025-10-14 09:37:52.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:52 compute-0 nova_compute[259627]: 2025-10-14 09:37:52.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:52 compute-0 ovn_controller[152662]: 2025-10-14T09:37:52Z|01617|binding|INFO|Setting lport b67fadaf-4e6a-49b7-b340-4a8659d6216b ovn-installed in OVS
Oct 14 09:37:52 compute-0 ovn_controller[152662]: 2025-10-14T09:37:52Z|01618|binding|INFO|Setting lport b67fadaf-4e6a-49b7-b340-4a8659d6216b up in Southbound
Oct 14 09:37:52 compute-0 NetworkManager[44885]: <info>  [1760434672.0850] device (tap31c29d30-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:37:52 compute-0 NetworkManager[44885]: <info>  [1760434672.0857] device (tap31c29d30-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:37:52 compute-0 NetworkManager[44885]: <info>  [1760434672.0870] device (tapb67fadaf-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:37:52 compute-0 NetworkManager[44885]: <info>  [1760434672.0877] device (tapb67fadaf-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:37:52 compute-0 systemd-machined[214636]: New machine qemu-180-instance-00000093.
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.099 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc78928-e418-416f-b37a-79be8579cae7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.105 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5e85b8c6-4b8b-44e6-bdce-38d46eef5935]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:52 compute-0 systemd[1]: Started Virtual Machine qemu-180-instance-00000093.
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.149 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[04cef20a-847e-4644-9144-8f6a4beafee9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.180 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[85de27e4-cb0e-4027-995e-47589ac3aa71]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcb1842f7-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:b7:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841296, 'reachable_time': 21039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412525, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.206 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[93e20ace-bf87-4b8c-8142-411513c63538]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcb1842f7-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841308, 'tstamp': 841308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412530, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcb1842f7-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841310, 'tstamp': 841310}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412530, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.208 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcb1842f7-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:52 compute-0 nova_compute[259627]: 2025-10-14 09:37:52.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:52 compute-0 nova_compute[259627]: 2025-10-14 09:37:52.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.215 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcb1842f7-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.215 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.216 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcb1842f7-90, col_values=(('external_ids', {'iface-id': '49fe400f-9e76-42ad-a72c-3f9a5bf50e43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.216 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.218 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b67fadaf-4e6a-49b7-b340-4a8659d6216b in datapath 4914ad10-cd65-4b9a-8ebb-43ebafc5f222 unbound from our chassis
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.220 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4914ad10-cd65-4b9a-8ebb-43ebafc5f222
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.245 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4004ce95-bc77-48db-9777-2b785ce200d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.289 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aaed2df3-9055-4a79-9a39-7395c8d71496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.293 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0966040f-a538-4380-8e91-322d0b26d703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.345 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[de411e01-2593-4183-bba8-6e6625a57329]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.376 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[94583473-fb5f-4284-ae75-71023babf7d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4914ad10-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:b4:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1872, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1872, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841397, 'reachable_time': 27486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 20, 'inoctets': 1592, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 20, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1592, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 20, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412537, 'error': None, 'target': 'ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.404 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e2c081-8a46-4d04-ae01-844efc3732c1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4914ad10-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841411, 'tstamp': 841411}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412538, 'error': None, 'target': 'ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.407 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4914ad10-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:52 compute-0 nova_compute[259627]: 2025-10-14 09:37:52.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:52 compute-0 nova_compute[259627]: 2025-10-14 09:37:52.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.411 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4914ad10-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.411 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.412 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4914ad10-c0, col_values=(('external_ids', {'iface-id': '7ea31872-5902-4f26-8d38-70f94c9c61fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:37:52 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.413 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:37:52 compute-0 sudo[412572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:37:52 compute-0 sudo[412572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:37:52 compute-0 sudo[412572]: pam_unix(sudo:session): session closed for user root
Oct 14 09:37:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:52 compute-0 sudo[412607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:37:52 compute-0 sudo[412607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:37:52 compute-0 sudo[412607]: pam_unix(sudo:session): session closed for user root
Oct 14 09:37:52 compute-0 sudo[412632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:37:52 compute-0 sudo[412632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:37:52 compute-0 sudo[412632]: pam_unix(sudo:session): session closed for user root
Oct 14 09:37:52 compute-0 sudo[412664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 14 09:37:52 compute-0 sudo[412664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:37:52 compute-0 podman[412657]: 2025-10-14 09:37:52.851978449 +0000 UTC m=+0.059305726 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 14 09:37:52 compute-0 podman[412656]: 2025-10-14 09:37:52.889182292 +0000 UTC m=+0.089307352 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 14 09:37:52 compute-0 nova_compute[259627]: 2025-10-14 09:37:52.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.013 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.013 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.014 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.014 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.014 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.054 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434673.035582, 4e23c3df-9710-4287-9890-cdae2d551fc0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.055 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] VM Started (Lifecycle Event)
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.094 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.099 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434673.0360239, 4e23c3df-9710-4287-9890-cdae2d551fc0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.099 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] VM Paused (Lifecycle Event)
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.119 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.122 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:37:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.143 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:37:53 compute-0 podman[412813]: 2025-10-14 09:37:53.402224082 +0000 UTC m=+0.104630498 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:37:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:37:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2287153890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.450 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:37:53 compute-0 podman[412813]: 2025-10-14 09:37:53.49017727 +0000 UTC m=+0.192583706 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.553 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.554 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.561 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.562 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.679 2 DEBUG nova.compute.manager [req-a0e64c5a-69dc-47ae-a0da-bbf6d9f950d3 req-90dd1967-dabd-40a4-a9d1-3b903b21e600 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-plugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.680 2 DEBUG oslo_concurrency.lockutils [req-a0e64c5a-69dc-47ae-a0da-bbf6d9f950d3 req-90dd1967-dabd-40a4-a9d1-3b903b21e600 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.681 2 DEBUG oslo_concurrency.lockutils [req-a0e64c5a-69dc-47ae-a0da-bbf6d9f950d3 req-90dd1967-dabd-40a4-a9d1-3b903b21e600 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.682 2 DEBUG oslo_concurrency.lockutils [req-a0e64c5a-69dc-47ae-a0da-bbf6d9f950d3 req-90dd1967-dabd-40a4-a9d1-3b903b21e600 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.682 2 DEBUG nova.compute.manager [req-a0e64c5a-69dc-47ae-a0da-bbf6d9f950d3 req-90dd1967-dabd-40a4-a9d1-3b903b21e600 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Processing event network-vif-plugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.816 2 DEBUG nova.compute.manager [req-08149896-2488-4d46-b882-9eaef6753961 req-a2919e96-85a3-4cb2-8b82-b2d5f341f8ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-plugged-31c29d30-7edf-486a-a168-0356f62ab3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.816 2 DEBUG oslo_concurrency.lockutils [req-08149896-2488-4d46-b882-9eaef6753961 req-a2919e96-85a3-4cb2-8b82-b2d5f341f8ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.817 2 DEBUG oslo_concurrency.lockutils [req-08149896-2488-4d46-b882-9eaef6753961 req-a2919e96-85a3-4cb2-8b82-b2d5f341f8ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.817 2 DEBUG oslo_concurrency.lockutils [req-08149896-2488-4d46-b882-9eaef6753961 req-a2919e96-85a3-4cb2-8b82-b2d5f341f8ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.817 2 DEBUG nova.compute.manager [req-08149896-2488-4d46-b882-9eaef6753961 req-a2919e96-85a3-4cb2-8b82-b2d5f341f8ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Processing event network-vif-plugged-31c29d30-7edf-486a-a168-0356f62ab3b9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.819 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.823 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434673.8230517, 4e23c3df-9710-4287-9890-cdae2d551fc0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.823 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] VM Resumed (Lifecycle Event)
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.827 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.833 2 INFO nova.virt.libvirt.driver [-] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Instance spawned successfully.
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.834 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.843 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.846 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.856 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.857 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.857 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.858 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.858 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.858 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.865 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.877 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.878 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3273MB free_disk=59.921966552734375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.878 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.878 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.928 2 INFO nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Took 10.40 seconds to spawn the instance on the hypervisor.
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.928 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:37:53 compute-0 ceph-mon[74249]: pgmap v2542: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:53 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2287153890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.968 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance bdfd070c-d036-4656-b797-efba7d4a4565 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.969 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 4e23c3df-9710-4287-9890-cdae2d551fc0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.969 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.969 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:37:53 compute-0 nova_compute[259627]: 2025-10-14 09:37:53.988 2 INFO nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Took 11.41 seconds to build instance.
Oct 14 09:37:54 compute-0 nova_compute[259627]: 2025-10-14 09:37:54.005 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.500s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:54 compute-0 nova_compute[259627]: 2025-10-14 09:37:54.022 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:37:54 compute-0 sudo[412664]: pam_unix(sudo:session): session closed for user root
Oct 14 09:37:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:37:54 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:37:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:37:54 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:37:54 compute-0 sudo[412989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:37:54 compute-0 sudo[412989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:37:54 compute-0 sudo[412989]: pam_unix(sudo:session): session closed for user root
Oct 14 09:37:54 compute-0 sudo[413014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:37:54 compute-0 sudo[413014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:37:54 compute-0 sudo[413014]: pam_unix(sudo:session): session closed for user root
Oct 14 09:37:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:37:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3833838182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:37:54 compute-0 sudo[413039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:37:54 compute-0 sudo[413039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:37:54 compute-0 sudo[413039]: pam_unix(sudo:session): session closed for user root
Oct 14 09:37:54 compute-0 nova_compute[259627]: 2025-10-14 09:37:54.556 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:37:54 compute-0 nova_compute[259627]: 2025-10-14 09:37:54.562 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:37:54 compute-0 nova_compute[259627]: 2025-10-14 09:37:54.576 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:37:54 compute-0 sudo[413067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:37:54 compute-0 sudo[413067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:37:54 compute-0 nova_compute[259627]: 2025-10-14 09:37:54.607 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:37:54 compute-0 nova_compute[259627]: 2025-10-14 09:37:54.607 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:55 compute-0 sudo[413067]: pam_unix(sudo:session): session closed for user root
Oct 14 09:37:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:37:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:37:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:37:55 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:37:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:37:55 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:37:55 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 00ddeafd-27bc-4458-ba99-6b1855cd1850 does not exist
Oct 14 09:37:55 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 99f1a0e5-ae33-4645-9ec9-6ec412725c61 does not exist
Oct 14 09:37:55 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 5128d048-48d9-452b-ab9f-76f1ce52372f does not exist
Oct 14 09:37:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:37:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:37:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:37:55 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:37:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:37:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:37:55 compute-0 sudo[413123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:37:55 compute-0 sudo[413123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:37:55 compute-0 sudo[413123]: pam_unix(sudo:session): session closed for user root
Oct 14 09:37:55 compute-0 sudo[413148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:37:55 compute-0 sudo[413148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:37:55 compute-0 sudo[413148]: pam_unix(sudo:session): session closed for user root
Oct 14 09:37:55 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:37:55 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:37:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3833838182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:37:55 compute-0 ceph-mon[74249]: pgmap v2543: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:37:55 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:37:55 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:37:55 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:37:55 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:37:55 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:37:55 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:37:55 compute-0 sudo[413173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:37:55 compute-0 sudo[413173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:37:55 compute-0 sudo[413173]: pam_unix(sudo:session): session closed for user root
Oct 14 09:37:55 compute-0 sudo[413198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:37:55 compute-0 sudo[413198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:37:55 compute-0 nova_compute[259627]: 2025-10-14 09:37:55.812 2 DEBUG nova.compute.manager [req-1da13cd6-ed23-4444-9f8e-3c580b979bcf req-88903970-f7f8-48eb-9c79-94d8527620a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-plugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:37:55 compute-0 nova_compute[259627]: 2025-10-14 09:37:55.813 2 DEBUG oslo_concurrency.lockutils [req-1da13cd6-ed23-4444-9f8e-3c580b979bcf req-88903970-f7f8-48eb-9c79-94d8527620a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:55 compute-0 nova_compute[259627]: 2025-10-14 09:37:55.814 2 DEBUG oslo_concurrency.lockutils [req-1da13cd6-ed23-4444-9f8e-3c580b979bcf req-88903970-f7f8-48eb-9c79-94d8527620a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:55 compute-0 nova_compute[259627]: 2025-10-14 09:37:55.814 2 DEBUG oslo_concurrency.lockutils [req-1da13cd6-ed23-4444-9f8e-3c580b979bcf req-88903970-f7f8-48eb-9c79-94d8527620a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:55 compute-0 nova_compute[259627]: 2025-10-14 09:37:55.814 2 DEBUG nova.compute.manager [req-1da13cd6-ed23-4444-9f8e-3c580b979bcf req-88903970-f7f8-48eb-9c79-94d8527620a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] No waiting events found dispatching network-vif-plugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:37:55 compute-0 nova_compute[259627]: 2025-10-14 09:37:55.815 2 WARNING nova.compute.manager [req-1da13cd6-ed23-4444-9f8e-3c580b979bcf req-88903970-f7f8-48eb-9c79-94d8527620a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received unexpected event network-vif-plugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b for instance with vm_state active and task_state None.
Oct 14 09:37:55 compute-0 podman[413262]: 2025-10-14 09:37:55.867081789 +0000 UTC m=+0.073179537 container create 2a2bae641cef505667785c341254d53a8e2b4abf52013b80cd0d24b54d1f4dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 14 09:37:55 compute-0 systemd[1]: Started libpod-conmon-2a2bae641cef505667785c341254d53a8e2b4abf52013b80cd0d24b54d1f4dbd.scope.
Oct 14 09:37:55 compute-0 podman[413262]: 2025-10-14 09:37:55.833499065 +0000 UTC m=+0.039596903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:37:55 compute-0 nova_compute[259627]: 2025-10-14 09:37:55.939 2 DEBUG nova.compute.manager [req-0e5f28e9-ccaf-4b1c-a550-4a741d3e76a8 req-f122bb11-aaf4-40fe-91d7-69913cdcb5ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-plugged-31c29d30-7edf-486a-a168-0356f62ab3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:37:55 compute-0 nova_compute[259627]: 2025-10-14 09:37:55.940 2 DEBUG oslo_concurrency.lockutils [req-0e5f28e9-ccaf-4b1c-a550-4a741d3e76a8 req-f122bb11-aaf4-40fe-91d7-69913cdcb5ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:37:55 compute-0 nova_compute[259627]: 2025-10-14 09:37:55.940 2 DEBUG oslo_concurrency.lockutils [req-0e5f28e9-ccaf-4b1c-a550-4a741d3e76a8 req-f122bb11-aaf4-40fe-91d7-69913cdcb5ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:37:55 compute-0 nova_compute[259627]: 2025-10-14 09:37:55.940 2 DEBUG oslo_concurrency.lockutils [req-0e5f28e9-ccaf-4b1c-a550-4a741d3e76a8 req-f122bb11-aaf4-40fe-91d7-69913cdcb5ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:37:55 compute-0 nova_compute[259627]: 2025-10-14 09:37:55.941 2 DEBUG nova.compute.manager [req-0e5f28e9-ccaf-4b1c-a550-4a741d3e76a8 req-f122bb11-aaf4-40fe-91d7-69913cdcb5ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] No waiting events found dispatching network-vif-plugged-31c29d30-7edf-486a-a168-0356f62ab3b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:37:55 compute-0 nova_compute[259627]: 2025-10-14 09:37:55.941 2 WARNING nova.compute.manager [req-0e5f28e9-ccaf-4b1c-a550-4a741d3e76a8 req-f122bb11-aaf4-40fe-91d7-69913cdcb5ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received unexpected event network-vif-plugged-31c29d30-7edf-486a-a168-0356f62ab3b9 for instance with vm_state active and task_state None.
Oct 14 09:37:55 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:37:55 compute-0 podman[413262]: 2025-10-14 09:37:55.97389554 +0000 UTC m=+0.179993368 container init 2a2bae641cef505667785c341254d53a8e2b4abf52013b80cd0d24b54d1f4dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 09:37:55 compute-0 podman[413262]: 2025-10-14 09:37:55.987670209 +0000 UTC m=+0.193767937 container start 2a2bae641cef505667785c341254d53a8e2b4abf52013b80cd0d24b54d1f4dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:37:55 compute-0 podman[413262]: 2025-10-14 09:37:55.990921188 +0000 UTC m=+0.197018926 container attach 2a2bae641cef505667785c341254d53a8e2b4abf52013b80cd0d24b54d1f4dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:37:55 compute-0 eloquent_bohr[413278]: 167 167
Oct 14 09:37:55 compute-0 podman[413262]: 2025-10-14 09:37:55.996128636 +0000 UTC m=+0.202226384 container died 2a2bae641cef505667785c341254d53a8e2b4abf52013b80cd0d24b54d1f4dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 09:37:55 compute-0 systemd[1]: libpod-2a2bae641cef505667785c341254d53a8e2b4abf52013b80cd0d24b54d1f4dbd.scope: Deactivated successfully.
Oct 14 09:37:56 compute-0 nova_compute[259627]: 2025-10-14 09:37:56.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cac6e178e87e875ca9bfb5771fb39dfbd7852cb63990042078a608c98b45160-merged.mount: Deactivated successfully.
Oct 14 09:37:56 compute-0 podman[413262]: 2025-10-14 09:37:56.051590337 +0000 UTC m=+0.257688105 container remove 2a2bae641cef505667785c341254d53a8e2b4abf52013b80cd0d24b54d1f4dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 14 09:37:56 compute-0 systemd[1]: libpod-conmon-2a2bae641cef505667785c341254d53a8e2b4abf52013b80cd0d24b54d1f4dbd.scope: Deactivated successfully.
Oct 14 09:37:56 compute-0 podman[413304]: 2025-10-14 09:37:56.313099185 +0000 UTC m=+0.066457232 container create 602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_varahamihira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:37:56 compute-0 systemd[1]: Started libpod-conmon-602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357.scope.
Oct 14 09:37:56 compute-0 podman[413304]: 2025-10-14 09:37:56.287348793 +0000 UTC m=+0.040706820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:37:56 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6769bfea4dec1456aea8974ee829169f4141efa4ae14cebc00738a53ef39bc94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6769bfea4dec1456aea8974ee829169f4141efa4ae14cebc00738a53ef39bc94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6769bfea4dec1456aea8974ee829169f4141efa4ae14cebc00738a53ef39bc94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6769bfea4dec1456aea8974ee829169f4141efa4ae14cebc00738a53ef39bc94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6769bfea4dec1456aea8974ee829169f4141efa4ae14cebc00738a53ef39bc94/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:37:56 compute-0 podman[413304]: 2025-10-14 09:37:56.429771988 +0000 UTC m=+0.183130035 container init 602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_varahamihira, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 09:37:56 compute-0 podman[413304]: 2025-10-14 09:37:56.438320528 +0000 UTC m=+0.191678545 container start 602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:37:56 compute-0 podman[413304]: 2025-10-14 09:37:56.442002998 +0000 UTC m=+0.195361045 container attach 602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:37:56 compute-0 nova_compute[259627]: 2025-10-14 09:37:56.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:37:56 compute-0 nova_compute[259627]: 2025-10-14 09:37:56.608 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:37:56 compute-0 nova_compute[259627]: 2025-10-14 09:37:56.609 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:37:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 14 09:37:56 compute-0 nova_compute[259627]: 2025-10-14 09:37:56.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:37:57 compute-0 dazzling_varahamihira[413321]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:37:57 compute-0 dazzling_varahamihira[413321]: --> relative data size: 1.0
Oct 14 09:37:57 compute-0 dazzling_varahamihira[413321]: --> All data devices are unavailable
Oct 14 09:37:57 compute-0 systemd[1]: libpod-602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357.scope: Deactivated successfully.
Oct 14 09:37:57 compute-0 systemd[1]: libpod-602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357.scope: Consumed 1.032s CPU time.
Oct 14 09:37:57 compute-0 podman[413350]: 2025-10-14 09:37:57.575583045 +0000 UTC m=+0.026169763 container died 602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:37:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6769bfea4dec1456aea8974ee829169f4141efa4ae14cebc00738a53ef39bc94-merged.mount: Deactivated successfully.
Oct 14 09:37:57 compute-0 podman[413350]: 2025-10-14 09:37:57.639679608 +0000 UTC m=+0.090266306 container remove 602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_varahamihira, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:37:57 compute-0 systemd[1]: libpod-conmon-602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357.scope: Deactivated successfully.
Oct 14 09:37:57 compute-0 sudo[413198]: pam_unix(sudo:session): session closed for user root
Oct 14 09:37:57 compute-0 ceph-mon[74249]: pgmap v2544: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 14 09:37:57 compute-0 sudo[413366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:37:57 compute-0 sudo[413366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:37:57 compute-0 sudo[413366]: pam_unix(sudo:session): session closed for user root
Oct 14 09:37:57 compute-0 sudo[413391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:37:57 compute-0 sudo[413391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:37:57 compute-0 sudo[413391]: pam_unix(sudo:session): session closed for user root
Oct 14 09:37:57 compute-0 sudo[413416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:37:57 compute-0 sudo[413416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:37:57 compute-0 sudo[413416]: pam_unix(sudo:session): session closed for user root
Oct 14 09:37:57 compute-0 nova_compute[259627]: 2025-10-14 09:37:57.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:37:57 compute-0 nova_compute[259627]: 2025-10-14 09:37:57.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:37:57 compute-0 nova_compute[259627]: 2025-10-14 09:37:57.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:37:58 compute-0 sudo[413441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:37:58 compute-0 sudo[413441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:37:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:37:58 compute-0 nova_compute[259627]: 2025-10-14 09:37:58.199 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:37:58 compute-0 nova_compute[259627]: 2025-10-14 09:37:58.200 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:37:58 compute-0 nova_compute[259627]: 2025-10-14 09:37:58.200 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:37:58 compute-0 nova_compute[259627]: 2025-10-14 09:37:58.201 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bdfd070c-d036-4656-b797-efba7d4a4565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:37:58 compute-0 podman[413505]: 2025-10-14 09:37:58.386059834 +0000 UTC m=+0.042834052 container create 03b186e847dd2e155059737a73ad52250b0f00f9cc9eed542a2dc394a6332c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Oct 14 09:37:58 compute-0 systemd[1]: Started libpod-conmon-03b186e847dd2e155059737a73ad52250b0f00f9cc9eed542a2dc394a6332c29.scope.
Oct 14 09:37:58 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:37:58 compute-0 podman[413505]: 2025-10-14 09:37:58.371573789 +0000 UTC m=+0.028348027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:37:58 compute-0 podman[413505]: 2025-10-14 09:37:58.467312948 +0000 UTC m=+0.124087166 container init 03b186e847dd2e155059737a73ad52250b0f00f9cc9eed542a2dc394a6332c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:37:58 compute-0 podman[413505]: 2025-10-14 09:37:58.473243894 +0000 UTC m=+0.130018112 container start 03b186e847dd2e155059737a73ad52250b0f00f9cc9eed542a2dc394a6332c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:37:58 compute-0 podman[413505]: 2025-10-14 09:37:58.4759233 +0000 UTC m=+0.132697548 container attach 03b186e847dd2e155059737a73ad52250b0f00f9cc9eed542a2dc394a6332c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:37:58 compute-0 dreamy_black[413522]: 167 167
Oct 14 09:37:58 compute-0 systemd[1]: libpod-03b186e847dd2e155059737a73ad52250b0f00f9cc9eed542a2dc394a6332c29.scope: Deactivated successfully.
Oct 14 09:37:58 compute-0 podman[413505]: 2025-10-14 09:37:58.480265836 +0000 UTC m=+0.137040094 container died 03b186e847dd2e155059737a73ad52250b0f00f9cc9eed542a2dc394a6332c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 09:37:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-418e2e4a56dafdf4e077c4ffaac965d20141bf60dcf01e7fad39ac3248f276e3-merged.mount: Deactivated successfully.
Oct 14 09:37:58 compute-0 podman[413505]: 2025-10-14 09:37:58.525693701 +0000 UTC m=+0.182467949 container remove 03b186e847dd2e155059737a73ad52250b0f00f9cc9eed542a2dc394a6332c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 09:37:58 compute-0 systemd[1]: libpod-conmon-03b186e847dd2e155059737a73ad52250b0f00f9cc9eed542a2dc394a6332c29.scope: Deactivated successfully.
Oct 14 09:37:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 14 09:37:58 compute-0 podman[413546]: 2025-10-14 09:37:58.7591568 +0000 UTC m=+0.058933837 container create 963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gagarin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:37:58 compute-0 systemd[1]: Started libpod-conmon-963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b.scope.
Oct 14 09:37:58 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:37:58 compute-0 podman[413546]: 2025-10-14 09:37:58.73715056 +0000 UTC m=+0.036927637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e57bc320274fbec08e264b11847b8c4aba407a0ab3b45fe9c3f2ccddedb75a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e57bc320274fbec08e264b11847b8c4aba407a0ab3b45fe9c3f2ccddedb75a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e57bc320274fbec08e264b11847b8c4aba407a0ab3b45fe9c3f2ccddedb75a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e57bc320274fbec08e264b11847b8c4aba407a0ab3b45fe9c3f2ccddedb75a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:37:58 compute-0 podman[413546]: 2025-10-14 09:37:58.853124446 +0000 UTC m=+0.152901503 container init 963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:37:58 compute-0 podman[413546]: 2025-10-14 09:37:58.860937708 +0000 UTC m=+0.160714745 container start 963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:37:58 compute-0 podman[413546]: 2025-10-14 09:37:58.865065509 +0000 UTC m=+0.164842626 container attach 963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gagarin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 09:37:59 compute-0 nova_compute[259627]: 2025-10-14 09:37:59.244 2 DEBUG nova.compute.manager [req-ddd875e4-6ec3-417f-ad39-920b185a64ba req-4a064cc3-3544-4fcf-a920-4788fea6ac7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-changed-31c29d30-7edf-486a-a168-0356f62ab3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:37:59 compute-0 nova_compute[259627]: 2025-10-14 09:37:59.247 2 DEBUG nova.compute.manager [req-ddd875e4-6ec3-417f-ad39-920b185a64ba req-4a064cc3-3544-4fcf-a920-4788fea6ac7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Refreshing instance network info cache due to event network-changed-31c29d30-7edf-486a-a168-0356f62ab3b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:37:59 compute-0 nova_compute[259627]: 2025-10-14 09:37:59.248 2 DEBUG oslo_concurrency.lockutils [req-ddd875e4-6ec3-417f-ad39-920b185a64ba req-4a064cc3-3544-4fcf-a920-4788fea6ac7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:37:59 compute-0 nova_compute[259627]: 2025-10-14 09:37:59.249 2 DEBUG oslo_concurrency.lockutils [req-ddd875e4-6ec3-417f-ad39-920b185a64ba req-4a064cc3-3544-4fcf-a920-4788fea6ac7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:37:59 compute-0 nova_compute[259627]: 2025-10-14 09:37:59.249 2 DEBUG nova.network.neutron [req-ddd875e4-6ec3-417f-ad39-920b185a64ba req-4a064cc3-3544-4fcf-a920-4788fea6ac7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Refreshing network info cache for port 31c29d30-7edf-486a-a168-0356f62ab3b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]: {
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:     "0": [
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:         {
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "devices": [
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "/dev/loop3"
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             ],
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "lv_name": "ceph_lv0",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "lv_size": "21470642176",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "name": "ceph_lv0",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "tags": {
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.cluster_name": "ceph",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.crush_device_class": "",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.encrypted": "0",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.osd_id": "0",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.type": "block",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.vdo": "0"
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             },
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "type": "block",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "vg_name": "ceph_vg0"
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:         }
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:     ],
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:     "1": [
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:         {
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "devices": [
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "/dev/loop4"
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             ],
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "lv_name": "ceph_lv1",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "lv_size": "21470642176",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "name": "ceph_lv1",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "tags": {
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.cluster_name": "ceph",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.crush_device_class": "",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.encrypted": "0",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.osd_id": "1",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.type": "block",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.vdo": "0"
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             },
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "type": "block",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "vg_name": "ceph_vg1"
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:         }
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:     ],
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:     "2": [
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:         {
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "devices": [
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "/dev/loop5"
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             ],
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "lv_name": "ceph_lv2",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "lv_size": "21470642176",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "name": "ceph_lv2",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "tags": {
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.cluster_name": "ceph",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.crush_device_class": "",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.encrypted": "0",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.osd_id": "2",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.type": "block",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:                 "ceph.vdo": "0"
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             },
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "type": "block",
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:             "vg_name": "ceph_vg2"
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:         }
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]:     ]
Oct 14 09:37:59 compute-0 ecstatic_gagarin[413563]: }
Oct 14 09:37:59 compute-0 ceph-mon[74249]: pgmap v2545: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 14 09:37:59 compute-0 systemd[1]: libpod-963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b.scope: Deactivated successfully.
Oct 14 09:37:59 compute-0 conmon[413563]: conmon 963422f73fd8818af178 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b.scope/container/memory.events
Oct 14 09:37:59 compute-0 podman[413546]: 2025-10-14 09:37:59.700649595 +0000 UTC m=+1.000426662 container died 963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gagarin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct 14 09:37:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5e57bc320274fbec08e264b11847b8c4aba407a0ab3b45fe9c3f2ccddedb75a-merged.mount: Deactivated successfully.
Oct 14 09:37:59 compute-0 podman[413546]: 2025-10-14 09:37:59.761740034 +0000 UTC m=+1.061517061 container remove 963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gagarin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 09:37:59 compute-0 systemd[1]: libpod-conmon-963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b.scope: Deactivated successfully.
Oct 14 09:37:59 compute-0 sudo[413441]: pam_unix(sudo:session): session closed for user root
Oct 14 09:37:59 compute-0 sudo[413586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:37:59 compute-0 sudo[413586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:37:59 compute-0 sudo[413586]: pam_unix(sudo:session): session closed for user root
Oct 14 09:37:59 compute-0 sudo[413611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:37:59 compute-0 sudo[413611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:37:59 compute-0 sudo[413611]: pam_unix(sudo:session): session closed for user root
Oct 14 09:37:59 compute-0 sudo[413636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:38:00 compute-0 sudo[413636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:38:00 compute-0 sudo[413636]: pam_unix(sudo:session): session closed for user root
Oct 14 09:38:00 compute-0 sudo[413661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:38:00 compute-0 sudo[413661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:38:00 compute-0 podman[413725]: 2025-10-14 09:38:00.423899423 +0000 UTC m=+0.057062551 container create 9430b29c791a92b2e1a0a84fa3c0463824368f86dcb8a9983a320f5e69c6a612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct 14 09:38:00 compute-0 systemd[1]: Started libpod-conmon-9430b29c791a92b2e1a0a84fa3c0463824368f86dcb8a9983a320f5e69c6a612.scope.
Oct 14 09:38:00 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:38:00 compute-0 podman[413725]: 2025-10-14 09:38:00.404694252 +0000 UTC m=+0.037857350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:38:00 compute-0 podman[413725]: 2025-10-14 09:38:00.50853515 +0000 UTC m=+0.141698188 container init 9430b29c791a92b2e1a0a84fa3c0463824368f86dcb8a9983a320f5e69c6a612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:38:00 compute-0 podman[413725]: 2025-10-14 09:38:00.516807803 +0000 UTC m=+0.149970811 container start 9430b29c791a92b2e1a0a84fa3c0463824368f86dcb8a9983a320f5e69c6a612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:38:00 compute-0 podman[413725]: 2025-10-14 09:38:00.520954015 +0000 UTC m=+0.154117063 container attach 9430b29c791a92b2e1a0a84fa3c0463824368f86dcb8a9983a320f5e69c6a612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:38:00 compute-0 awesome_noether[413741]: 167 167
Oct 14 09:38:00 compute-0 systemd[1]: libpod-9430b29c791a92b2e1a0a84fa3c0463824368f86dcb8a9983a320f5e69c6a612.scope: Deactivated successfully.
Oct 14 09:38:00 compute-0 podman[413725]: 2025-10-14 09:38:00.524522253 +0000 UTC m=+0.157685261 container died 9430b29c791a92b2e1a0a84fa3c0463824368f86dcb8a9983a320f5e69c6a612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 09:38:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f92591f9de9cfae70a303f93ae4b50cc93e762a6a809cc1a659b87e98e6c82f-merged.mount: Deactivated successfully.
Oct 14 09:38:00 compute-0 podman[413725]: 2025-10-14 09:38:00.574734205 +0000 UTC m=+0.207897243 container remove 9430b29c791a92b2e1a0a84fa3c0463824368f86dcb8a9983a320f5e69c6a612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:38:00 compute-0 systemd[1]: libpod-conmon-9430b29c791a92b2e1a0a84fa3c0463824368f86dcb8a9983a320f5e69c6a612.scope: Deactivated successfully.
Oct 14 09:38:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Oct 14 09:38:00 compute-0 podman[413765]: 2025-10-14 09:38:00.76936939 +0000 UTC m=+0.047247460 container create d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_yalow, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 09:38:00 compute-0 systemd[1]: Started libpod-conmon-d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1.scope.
Oct 14 09:38:00 compute-0 podman[413765]: 2025-10-14 09:38:00.744751476 +0000 UTC m=+0.022629526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:38:00 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade5c986a14e38320bc46b2edffee5be0942a0768f5c49f179fcb079006d0789/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade5c986a14e38320bc46b2edffee5be0942a0768f5c49f179fcb079006d0789/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade5c986a14e38320bc46b2edffee5be0942a0768f5c49f179fcb079006d0789/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade5c986a14e38320bc46b2edffee5be0942a0768f5c49f179fcb079006d0789/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:38:00 compute-0 podman[413765]: 2025-10-14 09:38:00.870480392 +0000 UTC m=+0.148358472 container init d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 09:38:00 compute-0 podman[413765]: 2025-10-14 09:38:00.877818072 +0000 UTC m=+0.155696112 container start d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:38:00 compute-0 podman[413765]: 2025-10-14 09:38:00.881858901 +0000 UTC m=+0.159737021 container attach d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_yalow, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 14 09:38:00 compute-0 nova_compute[259627]: 2025-10-14 09:38:00.965 2 DEBUG nova.network.neutron [req-ddd875e4-6ec3-417f-ad39-920b185a64ba req-4a064cc3-3544-4fcf-a920-4788fea6ac7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Updated VIF entry in instance network info cache for port 31c29d30-7edf-486a-a168-0356f62ab3b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:38:00 compute-0 nova_compute[259627]: 2025-10-14 09:38:00.970 2 DEBUG nova.network.neutron [req-ddd875e4-6ec3-417f-ad39-920b185a64ba req-4a064cc3-3544-4fcf-a920-4788fea6ac7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Updating instance_info_cache with network_info: [{"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:38:00 compute-0 nova_compute[259627]: 2025-10-14 09:38:00.997 2 DEBUG oslo_concurrency.lockutils [req-ddd875e4-6ec3-417f-ad39-920b185a64ba req-4a064cc3-3544-4fcf-a920-4788fea6ac7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:38:01 compute-0 nova_compute[259627]: 2025-10-14 09:38:01.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:01 compute-0 nova_compute[259627]: 2025-10-14 09:38:01.273 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updating instance_info_cache with network_info: [{"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:38:01 compute-0 nova_compute[259627]: 2025-10-14 09:38:01.301 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:38:01 compute-0 nova_compute[259627]: 2025-10-14 09:38:01.301 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:38:01 compute-0 nova_compute[259627]: 2025-10-14 09:38:01.302 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:38:01 compute-0 nova_compute[259627]: 2025-10-14 09:38:01.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:01 compute-0 ceph-mon[74249]: pgmap v2546: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Oct 14 09:38:01 compute-0 nice_yalow[413783]: {
Oct 14 09:38:01 compute-0 nice_yalow[413783]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:38:01 compute-0 nice_yalow[413783]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:38:01 compute-0 nice_yalow[413783]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:38:01 compute-0 nice_yalow[413783]:         "osd_id": 2,
Oct 14 09:38:01 compute-0 nice_yalow[413783]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:38:01 compute-0 nice_yalow[413783]:         "type": "bluestore"
Oct 14 09:38:01 compute-0 nice_yalow[413783]:     },
Oct 14 09:38:01 compute-0 nice_yalow[413783]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:38:01 compute-0 nice_yalow[413783]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:38:01 compute-0 nice_yalow[413783]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:38:01 compute-0 nice_yalow[413783]:         "osd_id": 1,
Oct 14 09:38:01 compute-0 nice_yalow[413783]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:38:01 compute-0 nice_yalow[413783]:         "type": "bluestore"
Oct 14 09:38:01 compute-0 nice_yalow[413783]:     },
Oct 14 09:38:01 compute-0 nice_yalow[413783]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:38:01 compute-0 nice_yalow[413783]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:38:01 compute-0 nice_yalow[413783]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:38:01 compute-0 nice_yalow[413783]:         "osd_id": 0,
Oct 14 09:38:01 compute-0 nice_yalow[413783]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:38:01 compute-0 nice_yalow[413783]:         "type": "bluestore"
Oct 14 09:38:01 compute-0 nice_yalow[413783]:     }
Oct 14 09:38:01 compute-0 nice_yalow[413783]: }
Oct 14 09:38:01 compute-0 systemd[1]: libpod-d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1.scope: Deactivated successfully.
Oct 14 09:38:01 compute-0 podman[413765]: 2025-10-14 09:38:01.883759718 +0000 UTC m=+1.161637748 container died d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_yalow, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 09:38:01 compute-0 systemd[1]: libpod-d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1.scope: Consumed 1.013s CPU time.
Oct 14 09:38:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ade5c986a14e38320bc46b2edffee5be0942a0768f5c49f179fcb079006d0789-merged.mount: Deactivated successfully.
Oct 14 09:38:01 compute-0 podman[413765]: 2025-10-14 09:38:01.93764538 +0000 UTC m=+1.215523410 container remove d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_yalow, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:38:01 compute-0 systemd[1]: libpod-conmon-d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1.scope: Deactivated successfully.
Oct 14 09:38:01 compute-0 sudo[413661]: pam_unix(sudo:session): session closed for user root
Oct 14 09:38:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:38:01 compute-0 nova_compute[259627]: 2025-10-14 09:38:01.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:38:01 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:38:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:38:01 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:38:01 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 784e2ec4-3083-4f9b-a589-5ecf936b60d5 does not exist
Oct 14 09:38:01 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev bf338d45-c4da-47af-b287-2cc45e6732e1 does not exist
Oct 14 09:38:02 compute-0 sudo[413829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:38:02 compute-0 sudo[413829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:38:02 compute-0 sudo[413829]: pam_unix(sudo:session): session closed for user root
Oct 14 09:38:02 compute-0 sudo[413854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:38:02 compute-0 sudo[413854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:38:02 compute-0 sudo[413854]: pam_unix(sudo:session): session closed for user root
Oct 14 09:38:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 14 09:38:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:38:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:38:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:38:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:38:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:38:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:38:02 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:38:02 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:38:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.145680) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434683145736, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1601, "num_deletes": 250, "total_data_size": 2609815, "memory_usage": 2646648, "flush_reason": "Manual Compaction"}
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434683159147, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1529055, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52276, "largest_seqno": 53876, "table_properties": {"data_size": 1523607, "index_size": 2652, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14341, "raw_average_key_size": 20, "raw_value_size": 1511590, "raw_average_value_size": 2187, "num_data_blocks": 122, "num_entries": 691, "num_filter_entries": 691, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434519, "oldest_key_time": 1760434519, "file_creation_time": 1760434683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 13525 microseconds, and 8402 cpu microseconds.
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.159203) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1529055 bytes OK
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.159229) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.161087) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.161111) EVENT_LOG_v1 {"time_micros": 1760434683161104, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.161133) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 2602846, prev total WAL file size 2602846, number of live WAL files 2.
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.162833) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303030' seq:72057594037927935, type:22 .. '6D6772737461740032323531' seq:0, type:0; will stop at (end)
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1493KB)], [122(9880KB)]
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434683162927, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 11646883, "oldest_snapshot_seqno": -1}
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7457 keys, 9358591 bytes, temperature: kUnknown
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434683226494, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 9358591, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9310751, "index_size": 28080, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18693, "raw_key_size": 193866, "raw_average_key_size": 25, "raw_value_size": 9179503, "raw_average_value_size": 1230, "num_data_blocks": 1099, "num_entries": 7457, "num_filter_entries": 7457, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.226818) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 9358591 bytes
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.228300) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.0 rd, 147.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.6 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(13.7) write-amplify(6.1) OK, records in: 7892, records dropped: 435 output_compression: NoCompression
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.228331) EVENT_LOG_v1 {"time_micros": 1760434683228317, "job": 74, "event": "compaction_finished", "compaction_time_micros": 63648, "compaction_time_cpu_micros": 42930, "output_level": 6, "num_output_files": 1, "total_output_size": 9358591, "num_input_records": 7892, "num_output_records": 7457, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434683228913, "job": 74, "event": "table_file_deletion", "file_number": 124}
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434683232492, "job": 74, "event": "table_file_deletion", "file_number": 122}
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.162595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.232575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.232584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.232589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.232593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:38:03 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.232598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:38:03 compute-0 ceph-mon[74249]: pgmap v2547: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 14 09:38:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 14 09:38:05 compute-0 ovn_controller[152662]: 2025-10-14T09:38:05Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:2b:85 10.100.0.8
Oct 14 09:38:05 compute-0 ovn_controller[152662]: 2025-10-14T09:38:05Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:2b:85 10.100.0.8
Oct 14 09:38:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:38:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1113738685' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:38:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:38:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1113738685' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:38:06 compute-0 ceph-mon[74249]: pgmap v2548: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 14 09:38:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1113738685' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:38:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1113738685' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:38:06 compute-0 nova_compute[259627]: 2025-10-14 09:38:06.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:06 compute-0 nova_compute[259627]: 2025-10-14 09:38:06.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Oct 14 09:38:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:07.054 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:07.055 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:07.056 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:38:08 compute-0 ceph-mon[74249]: pgmap v2549: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Oct 14 09:38:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:38:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Oct 14 09:38:08 compute-0 podman[413880]: 2025-10-14 09:38:08.69892227 +0000 UTC m=+0.094719125 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 14 09:38:08 compute-0 podman[413879]: 2025-10-14 09:38:08.745729269 +0000 UTC m=+0.144008065 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009)
Oct 14 09:38:10 compute-0 ceph-mon[74249]: pgmap v2550: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Oct 14 09:38:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:38:11 compute-0 nova_compute[259627]: 2025-10-14 09:38:11.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:11 compute-0 nova_compute[259627]: 2025-10-14 09:38:11.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:12 compute-0 ceph-mon[74249]: pgmap v2551: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:38:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:38:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:38:14 compute-0 ceph-mon[74249]: pgmap v2552: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:38:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:38:16 compute-0 ceph-mon[74249]: pgmap v2553: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:38:16 compute-0 nova_compute[259627]: 2025-10-14 09:38:16.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:16 compute-0 nova_compute[259627]: 2025-10-14 09:38:16.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:38:18 compute-0 ceph-mon[74249]: pgmap v2554: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.085 2 DEBUG nova.compute.manager [req-44c43d0a-864d-481e-bb31-ec786b925c3b req-f7fc48ff-8c39-44fd-8cb4-1f11835d8d00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-changed-31c29d30-7edf-486a-a168-0356f62ab3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.085 2 DEBUG nova.compute.manager [req-44c43d0a-864d-481e-bb31-ec786b925c3b req-f7fc48ff-8c39-44fd-8cb4-1f11835d8d00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Refreshing instance network info cache due to event network-changed-31c29d30-7edf-486a-a168-0356f62ab3b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.086 2 DEBUG oslo_concurrency.lockutils [req-44c43d0a-864d-481e-bb31-ec786b925c3b req-f7fc48ff-8c39-44fd-8cb4-1f11835d8d00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.086 2 DEBUG oslo_concurrency.lockutils [req-44c43d0a-864d-481e-bb31-ec786b925c3b req-f7fc48ff-8c39-44fd-8cb4-1f11835d8d00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.086 2 DEBUG nova.network.neutron [req-44c43d0a-864d-481e-bb31-ec786b925c3b req-f7fc48ff-8c39-44fd-8cb4-1f11835d8d00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Refreshing network info cache for port 31c29d30-7edf-486a-a168-0356f62ab3b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:38:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.173 2 DEBUG oslo_concurrency.lockutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.173 2 DEBUG oslo_concurrency.lockutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.174 2 DEBUG oslo_concurrency.lockutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.174 2 DEBUG oslo_concurrency.lockutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.174 2 DEBUG oslo_concurrency.lockutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.175 2 INFO nova.compute.manager [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Terminating instance
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.176 2 DEBUG nova.compute.manager [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:38:18 compute-0 kernel: tap31c29d30-7e (unregistering): left promiscuous mode
Oct 14 09:38:18 compute-0 NetworkManager[44885]: <info>  [1760434698.2332] device (tap31c29d30-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:38:18 compute-0 ovn_controller[152662]: 2025-10-14T09:38:18Z|01619|binding|INFO|Releasing lport 31c29d30-7edf-486a-a168-0356f62ab3b9 from this chassis (sb_readonly=0)
Oct 14 09:38:18 compute-0 ovn_controller[152662]: 2025-10-14T09:38:18Z|01620|binding|INFO|Setting lport 31c29d30-7edf-486a-a168-0356f62ab3b9 down in Southbound
Oct 14 09:38:18 compute-0 ovn_controller[152662]: 2025-10-14T09:38:18Z|01621|binding|INFO|Removing iface tap31c29d30-7e ovn-installed in OVS
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.258 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:2b:85 10.100.0.8'], port_security=['fa:16:3e:7f:2b:85 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '4e23c3df-9710-4287-9890-cdae2d551fc0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b6145c26-2b2a-4cab-aefe-d574ccfcd594', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b96c99b-2e43-4904-a8ba-7ffb70fd145f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=31c29d30-7edf-486a-a168-0356f62ab3b9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.260 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 31c29d30-7edf-486a-a168-0356f62ab3b9 in datapath cb1842f7-933b-4c76-aa59-c55590c98ec5 unbound from our chassis
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.261 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cb1842f7-933b-4c76-aa59-c55590c98ec5
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:18 compute-0 kernel: tapb67fadaf-4e (unregistering): left promiscuous mode
Oct 14 09:38:18 compute-0 NetworkManager[44885]: <info>  [1760434698.2795] device (tapb67fadaf-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.290 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b532af59-773c-470d-a666-950366763d14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:18 compute-0 ovn_controller[152662]: 2025-10-14T09:38:18Z|01622|binding|INFO|Releasing lport b67fadaf-4e6a-49b7-b340-4a8659d6216b from this chassis (sb_readonly=0)
Oct 14 09:38:18 compute-0 ovn_controller[152662]: 2025-10-14T09:38:18Z|01623|binding|INFO|Setting lport b67fadaf-4e6a-49b7-b340-4a8659d6216b down in Southbound
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:18 compute-0 ovn_controller[152662]: 2025-10-14T09:38:18Z|01624|binding|INFO|Removing iface tapb67fadaf-4e ovn-installed in OVS
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.310 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:0a:24 2001:db8::f816:3eff:fe9e:a24'], port_security=['fa:16:3e:9e:0a:24 2001:db8::f816:3eff:fe9e:a24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9e:a24/64', 'neutron:device_id': '4e23c3df-9710-4287-9890-cdae2d551fc0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b6145c26-2b2a-4cab-aefe-d574ccfcd594', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85b53903-cbc2-43bd-a94b-2366acd741ae, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b67fadaf-4e6a-49b7-b340-4a8659d6216b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.344 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[88f9f314-107d-463f-9e72-fbfc5b6a612f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.347 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c8165a8c-3544-46b9-9451-bc0d35077a30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:18 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000093.scope: Deactivated successfully.
Oct 14 09:38:18 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000093.scope: Consumed 13.664s CPU time.
Oct 14 09:38:18 compute-0 systemd-machined[214636]: Machine qemu-180-instance-00000093 terminated.
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.392 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc1f172-38ab-45f6-bd88-10b5005a4efa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:18 compute-0 NetworkManager[44885]: <info>  [1760434698.4162] manager: (tapb67fadaf-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/664)
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.417 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[36ff8a6f-5378-4b11-82bb-63ec15be113e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcb1842f7-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:b7:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841296, 'reachable_time': 21039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413939, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.433 2 INFO nova.virt.libvirt.driver [-] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Instance destroyed successfully.
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.433 2 DEBUG nova.objects.instance [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid 4e23c3df-9710-4287-9890-cdae2d551fc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.444 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9ddc28a7-8fcb-43ed-8294-852591b85d0d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcb1842f7-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841308, 'tstamp': 841308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413956, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcb1842f7-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841310, 'tstamp': 841310}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413956, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.446 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcb1842f7-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.452 2 DEBUG nova.virt.libvirt.vif [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:37:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2012583976',display_name='tempest-TestGettingAddress-server-2012583976',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2012583976',id=147,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:37:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-mne0wzqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:37:53Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=4e23c3df-9710-4287-9890-cdae2d551fc0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.452 2 DEBUG nova.network.os_vif_util [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.453 2 DEBUG nova.network.os_vif_util [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:2b:85,bridge_name='br-int',has_traffic_filtering=True,id=31c29d30-7edf-486a-a168-0356f62ab3b9,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31c29d30-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.454 2 DEBUG os_vif [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:2b:85,bridge_name='br-int',has_traffic_filtering=True,id=31c29d30-7edf-486a-a168-0356f62ab3b9,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31c29d30-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.457 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31c29d30-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.461 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcb1842f7-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.462 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.462 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcb1842f7-90, col_values=(('external_ids', {'iface-id': '49fe400f-9e76-42ad-a72c-3f9a5bf50e43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.462 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.464 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b67fadaf-4e6a-49b7-b340-4a8659d6216b in datapath 4914ad10-cd65-4b9a-8ebb-43ebafc5f222 unbound from our chassis
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.465 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4914ad10-cd65-4b9a-8ebb-43ebafc5f222
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.468 2 INFO os_vif [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:2b:85,bridge_name='br-int',has_traffic_filtering=True,id=31c29d30-7edf-486a-a168-0356f62ab3b9,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31c29d30-7e')
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.468 2 DEBUG nova.virt.libvirt.vif [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:37:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2012583976',display_name='tempest-TestGettingAddress-server-2012583976',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2012583976',id=147,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:37:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-mne0wzqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:37:53Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=4e23c3df-9710-4287-9890-cdae2d551fc0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.469 2 DEBUG nova.network.os_vif_util [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.470 2 DEBUG nova.network.os_vif_util [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0a:24,bridge_name='br-int',has_traffic_filtering=True,id=b67fadaf-4e6a-49b7-b340-4a8659d6216b,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb67fadaf-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.471 2 DEBUG os_vif [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0a:24,bridge_name='br-int',has_traffic_filtering=True,id=b67fadaf-4e6a-49b7-b340-4a8659d6216b,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb67fadaf-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.472 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb67fadaf-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.478 2 INFO os_vif [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0a:24,bridge_name='br-int',has_traffic_filtering=True,id=b67fadaf-4e6a-49b7-b340-4a8659d6216b,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb67fadaf-4e')
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.492 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8dfc1845-ef09-4a6c-8665-c1ba60f47dbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.537 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ce36baa0-0b17-486e-bca0-764293b12a81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.542 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3b6372c2-44b9-4d94-bed9-dba1999997b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.584 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[def7b38e-d0ca-428f-b421-0b97493b3159]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.612 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e390a9a4-97e0-401e-a5e1-0dc83f4d22ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4914ad10-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:b4:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 6, 'rx_bytes': 2912, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 6, 'rx_bytes': 2912, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841397, 'reachable_time': 27486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 32, 'inoctets': 2464, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 32, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2464, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 32, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413985, 'error': None, 'target': 'ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 109 KiB/s wr, 17 op/s
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.637 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec93e21-2833-4b6e-adb3-006924725417]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4914ad10-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841411, 'tstamp': 841411}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413986, 'error': None, 'target': 'ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.639 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4914ad10-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.644 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4914ad10-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.645 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.645 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4914ad10-c0, col_values=(('external_ids', {'iface-id': '7ea31872-5902-4f26-8d38-70f94c9c61fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:38:18 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.646 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.943 2 INFO nova.virt.libvirt.driver [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Deleting instance files /var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0_del
Oct 14 09:38:18 compute-0 nova_compute[259627]: 2025-10-14 09:38:18.944 2 INFO nova.virt.libvirt.driver [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Deletion of /var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0_del complete
Oct 14 09:38:19 compute-0 nova_compute[259627]: 2025-10-14 09:38:19.009 2 INFO nova.compute.manager [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 14 09:38:19 compute-0 nova_compute[259627]: 2025-10-14 09:38:19.010 2 DEBUG oslo.service.loopingcall [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:38:19 compute-0 nova_compute[259627]: 2025-10-14 09:38:19.011 2 DEBUG nova.compute.manager [-] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:38:19 compute-0 nova_compute[259627]: 2025-10-14 09:38:19.011 2 DEBUG nova.network.neutron [-] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:38:19 compute-0 nova_compute[259627]: 2025-10-14 09:38:19.301 2 DEBUG nova.compute.manager [req-f3f5bc46-99d9-449c-b500-282a87c59e4d req-d1d0cbd9-b67a-424e-9755-3733ded5878e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-unplugged-31c29d30-7edf-486a-a168-0356f62ab3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:38:19 compute-0 nova_compute[259627]: 2025-10-14 09:38:19.302 2 DEBUG oslo_concurrency.lockutils [req-f3f5bc46-99d9-449c-b500-282a87c59e4d req-d1d0cbd9-b67a-424e-9755-3733ded5878e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:19 compute-0 nova_compute[259627]: 2025-10-14 09:38:19.302 2 DEBUG oslo_concurrency.lockutils [req-f3f5bc46-99d9-449c-b500-282a87c59e4d req-d1d0cbd9-b67a-424e-9755-3733ded5878e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:19 compute-0 nova_compute[259627]: 2025-10-14 09:38:19.303 2 DEBUG oslo_concurrency.lockutils [req-f3f5bc46-99d9-449c-b500-282a87c59e4d req-d1d0cbd9-b67a-424e-9755-3733ded5878e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:38:19 compute-0 nova_compute[259627]: 2025-10-14 09:38:19.303 2 DEBUG nova.compute.manager [req-f3f5bc46-99d9-449c-b500-282a87c59e4d req-d1d0cbd9-b67a-424e-9755-3733ded5878e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] No waiting events found dispatching network-vif-unplugged-31c29d30-7edf-486a-a168-0356f62ab3b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:38:19 compute-0 nova_compute[259627]: 2025-10-14 09:38:19.304 2 DEBUG nova.compute.manager [req-f3f5bc46-99d9-449c-b500-282a87c59e4d req-d1d0cbd9-b67a-424e-9755-3733ded5878e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-unplugged-31c29d30-7edf-486a-a168-0356f62ab3b9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:38:20 compute-0 ceph-mon[74249]: pgmap v2555: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 109 KiB/s wr, 17 op/s
Oct 14 09:38:20 compute-0 nova_compute[259627]: 2025-10-14 09:38:20.172 2 DEBUG nova.compute.manager [req-36d171a7-9831-49cd-ad95-624065adf103 req-c66b46d6-b518-4c5b-9da0-9a708d2d5ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-unplugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:38:20 compute-0 nova_compute[259627]: 2025-10-14 09:38:20.173 2 DEBUG oslo_concurrency.lockutils [req-36d171a7-9831-49cd-ad95-624065adf103 req-c66b46d6-b518-4c5b-9da0-9a708d2d5ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:20 compute-0 nova_compute[259627]: 2025-10-14 09:38:20.173 2 DEBUG oslo_concurrency.lockutils [req-36d171a7-9831-49cd-ad95-624065adf103 req-c66b46d6-b518-4c5b-9da0-9a708d2d5ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:20 compute-0 nova_compute[259627]: 2025-10-14 09:38:20.174 2 DEBUG oslo_concurrency.lockutils [req-36d171a7-9831-49cd-ad95-624065adf103 req-c66b46d6-b518-4c5b-9da0-9a708d2d5ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:38:20 compute-0 nova_compute[259627]: 2025-10-14 09:38:20.174 2 DEBUG nova.compute.manager [req-36d171a7-9831-49cd-ad95-624065adf103 req-c66b46d6-b518-4c5b-9da0-9a708d2d5ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] No waiting events found dispatching network-vif-unplugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:38:20 compute-0 nova_compute[259627]: 2025-10-14 09:38:20.174 2 DEBUG nova.compute.manager [req-36d171a7-9831-49cd-ad95-624065adf103 req-c66b46d6-b518-4c5b-9da0-9a708d2d5ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-unplugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:38:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 116 KiB/s wr, 48 op/s
Oct 14 09:38:20 compute-0 nova_compute[259627]: 2025-10-14 09:38:20.867 2 DEBUG nova.network.neutron [-] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:38:20 compute-0 nova_compute[259627]: 2025-10-14 09:38:20.889 2 INFO nova.compute.manager [-] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Took 1.88 seconds to deallocate network for instance.
Oct 14 09:38:20 compute-0 nova_compute[259627]: 2025-10-14 09:38:20.947 2 DEBUG oslo_concurrency.lockutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:20 compute-0 nova_compute[259627]: 2025-10-14 09:38:20.948 2 DEBUG oslo_concurrency.lockutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:20 compute-0 nova_compute[259627]: 2025-10-14 09:38:20.970 2 DEBUG nova.network.neutron [req-44c43d0a-864d-481e-bb31-ec786b925c3b req-f7fc48ff-8c39-44fd-8cb4-1f11835d8d00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Updated VIF entry in instance network info cache for port 31c29d30-7edf-486a-a168-0356f62ab3b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:38:20 compute-0 nova_compute[259627]: 2025-10-14 09:38:20.971 2 DEBUG nova.network.neutron [req-44c43d0a-864d-481e-bb31-ec786b925c3b req-f7fc48ff-8c39-44fd-8cb4-1f11835d8d00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Updating instance_info_cache with network_info: [{"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:38:20 compute-0 nova_compute[259627]: 2025-10-14 09:38:20.997 2 DEBUG oslo_concurrency.lockutils [req-44c43d0a-864d-481e-bb31-ec786b925c3b req-f7fc48ff-8c39-44fd-8cb4-1f11835d8d00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:38:21 compute-0 nova_compute[259627]: 2025-10-14 09:38:21.045 2 DEBUG oslo_concurrency.processutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:38:21 compute-0 nova_compute[259627]: 2025-10-14 09:38:21.436 2 DEBUG nova.compute.manager [req-0a7e3218-683f-47d5-a71a-4a8ca1884766 req-58b6801e-a8f1-4617-9b33-14a8059d2715 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-plugged-31c29d30-7edf-486a-a168-0356f62ab3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:38:21 compute-0 nova_compute[259627]: 2025-10-14 09:38:21.437 2 DEBUG oslo_concurrency.lockutils [req-0a7e3218-683f-47d5-a71a-4a8ca1884766 req-58b6801e-a8f1-4617-9b33-14a8059d2715 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:21 compute-0 nova_compute[259627]: 2025-10-14 09:38:21.438 2 DEBUG oslo_concurrency.lockutils [req-0a7e3218-683f-47d5-a71a-4a8ca1884766 req-58b6801e-a8f1-4617-9b33-14a8059d2715 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:21 compute-0 nova_compute[259627]: 2025-10-14 09:38:21.438 2 DEBUG oslo_concurrency.lockutils [req-0a7e3218-683f-47d5-a71a-4a8ca1884766 req-58b6801e-a8f1-4617-9b33-14a8059d2715 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:38:21 compute-0 nova_compute[259627]: 2025-10-14 09:38:21.439 2 DEBUG nova.compute.manager [req-0a7e3218-683f-47d5-a71a-4a8ca1884766 req-58b6801e-a8f1-4617-9b33-14a8059d2715 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] No waiting events found dispatching network-vif-plugged-31c29d30-7edf-486a-a168-0356f62ab3b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:38:21 compute-0 nova_compute[259627]: 2025-10-14 09:38:21.439 2 WARNING nova.compute.manager [req-0a7e3218-683f-47d5-a71a-4a8ca1884766 req-58b6801e-a8f1-4617-9b33-14a8059d2715 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received unexpected event network-vif-plugged-31c29d30-7edf-486a-a168-0356f62ab3b9 for instance with vm_state deleted and task_state None.
Oct 14 09:38:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:38:21 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1051288224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:38:21 compute-0 nova_compute[259627]: 2025-10-14 09:38:21.486 2 DEBUG oslo_concurrency.processutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:38:21 compute-0 nova_compute[259627]: 2025-10-14 09:38:21.494 2 DEBUG nova.compute.provider_tree [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:38:21 compute-0 nova_compute[259627]: 2025-10-14 09:38:21.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:21 compute-0 nova_compute[259627]: 2025-10-14 09:38:21.514 2 DEBUG nova.scheduler.client.report [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:38:21 compute-0 nova_compute[259627]: 2025-10-14 09:38:21.544 2 DEBUG oslo_concurrency.lockutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:38:21 compute-0 nova_compute[259627]: 2025-10-14 09:38:21.568 2 INFO nova.scheduler.client.report [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance 4e23c3df-9710-4287-9890-cdae2d551fc0
Oct 14 09:38:21 compute-0 nova_compute[259627]: 2025-10-14 09:38:21.631 2 DEBUG oslo_concurrency.lockutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:38:22 compute-0 ceph-mon[74249]: pgmap v2556: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 116 KiB/s wr, 48 op/s
Oct 14 09:38:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1051288224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:38:22 compute-0 nova_compute[259627]: 2025-10-14 09:38:22.259 2 DEBUG nova.compute.manager [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-plugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:38:22 compute-0 nova_compute[259627]: 2025-10-14 09:38:22.259 2 DEBUG oslo_concurrency.lockutils [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:22 compute-0 nova_compute[259627]: 2025-10-14 09:38:22.260 2 DEBUG oslo_concurrency.lockutils [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:22 compute-0 nova_compute[259627]: 2025-10-14 09:38:22.260 2 DEBUG oslo_concurrency.lockutils [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:38:22 compute-0 nova_compute[259627]: 2025-10-14 09:38:22.261 2 DEBUG nova.compute.manager [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] No waiting events found dispatching network-vif-plugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:38:22 compute-0 nova_compute[259627]: 2025-10-14 09:38:22.261 2 WARNING nova.compute.manager [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received unexpected event network-vif-plugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b for instance with vm_state deleted and task_state None.
Oct 14 09:38:22 compute-0 nova_compute[259627]: 2025-10-14 09:38:22.262 2 DEBUG nova.compute.manager [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-deleted-31c29d30-7edf-486a-a168-0356f62ab3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:38:22 compute-0 nova_compute[259627]: 2025-10-14 09:38:22.262 2 INFO nova.compute.manager [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Neutron deleted interface 31c29d30-7edf-486a-a168-0356f62ab3b9; detaching it from the instance and deleting it from the info cache
Oct 14 09:38:22 compute-0 nova_compute[259627]: 2025-10-14 09:38:22.263 2 DEBUG nova.network.neutron [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct 14 09:38:22 compute-0 nova_compute[259627]: 2025-10-14 09:38:22.267 2 DEBUG nova.compute.manager [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Detach interface failed, port_id=31c29d30-7edf-486a-a168-0356f62ab3b9, reason: Instance 4e23c3df-9710-4287-9890-cdae2d551fc0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 09:38:22 compute-0 nova_compute[259627]: 2025-10-14 09:38:22.267 2 DEBUG nova.compute.manager [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-deleted-b67fadaf-4e6a-49b7-b340-4a8659d6216b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:38:22 compute-0 nova_compute[259627]: 2025-10-14 09:38:22.268 2 INFO nova.compute.manager [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Neutron deleted interface b67fadaf-4e6a-49b7-b340-4a8659d6216b; detaching it from the instance and deleting it from the info cache
Oct 14 09:38:22 compute-0 nova_compute[259627]: 2025-10-14 09:38:22.268 2 DEBUG nova.network.neutron [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct 14 09:38:22 compute-0 nova_compute[259627]: 2025-10-14 09:38:22.272 2 DEBUG nova.compute.manager [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Detach interface failed, port_id=b67fadaf-4e6a-49b7-b340-4a8659d6216b, reason: Instance 4e23c3df-9710-4287-9890-cdae2d551fc0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 09:38:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 21 KiB/s wr, 31 op/s
Oct 14 09:38:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:38:23 compute-0 nova_compute[259627]: 2025-10-14 09:38:23.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:23.351 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:38:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:23.352 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:38:23 compute-0 nova_compute[259627]: 2025-10-14 09:38:23.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:23 compute-0 podman[414011]: 2025-10-14 09:38:23.707801505 +0000 UTC m=+0.100061587 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:38:23 compute-0 podman[414010]: 2025-10-14 09:38:23.728284317 +0000 UTC m=+0.121827140 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:38:23 compute-0 nova_compute[259627]: 2025-10-14 09:38:23.847 2 DEBUG oslo_concurrency.lockutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:23 compute-0 nova_compute[259627]: 2025-10-14 09:38:23.848 2 DEBUG oslo_concurrency.lockutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:23 compute-0 nova_compute[259627]: 2025-10-14 09:38:23.849 2 DEBUG oslo_concurrency.lockutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:23 compute-0 nova_compute[259627]: 2025-10-14 09:38:23.849 2 DEBUG oslo_concurrency.lockutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:23 compute-0 nova_compute[259627]: 2025-10-14 09:38:23.850 2 DEBUG oslo_concurrency.lockutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:38:23 compute-0 nova_compute[259627]: 2025-10-14 09:38:23.852 2 INFO nova.compute.manager [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Terminating instance
Oct 14 09:38:23 compute-0 nova_compute[259627]: 2025-10-14 09:38:23.854 2 DEBUG nova.compute.manager [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:38:23 compute-0 kernel: tapd3b7ded4-91 (unregistering): left promiscuous mode
Oct 14 09:38:23 compute-0 NetworkManager[44885]: <info>  [1760434703.9237] device (tapd3b7ded4-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:38:23 compute-0 ovn_controller[152662]: 2025-10-14T09:38:23Z|01625|binding|INFO|Releasing lport d3b7ded4-91fa-46dc-b6b9-e6e630c275ab from this chassis (sb_readonly=0)
Oct 14 09:38:23 compute-0 ovn_controller[152662]: 2025-10-14T09:38:23Z|01626|binding|INFO|Setting lport d3b7ded4-91fa-46dc-b6b9-e6e630c275ab down in Southbound
Oct 14 09:38:23 compute-0 ovn_controller[152662]: 2025-10-14T09:38:23Z|01627|binding|INFO|Removing iface tapd3b7ded4-91 ovn-installed in OVS
Oct 14 09:38:23 compute-0 nova_compute[259627]: 2025-10-14 09:38:23.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:23.945 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:b2:53 10.100.0.10'], port_security=['fa:16:3e:fc:b2:53 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bdfd070c-d036-4656-b797-efba7d4a4565', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b6145c26-2b2a-4cab-aefe-d574ccfcd594', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b96c99b-2e43-4904-a8ba-7ffb70fd145f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:38:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:23.947 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d3b7ded4-91fa-46dc-b6b9-e6e630c275ab in datapath cb1842f7-933b-4c76-aa59-c55590c98ec5 unbound from our chassis
Oct 14 09:38:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:23.948 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cb1842f7-933b-4c76-aa59-c55590c98ec5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:38:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:23.949 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a198c74b-92d9-4190-bc4e-c4c229e055f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:23.950 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5 namespace which is not needed anymore
Oct 14 09:38:23 compute-0 nova_compute[259627]: 2025-10-14 09:38:23.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:23 compute-0 kernel: tapd4332d2f-ff (unregistering): left promiscuous mode
Oct 14 09:38:23 compute-0 NetworkManager[44885]: <info>  [1760434703.9764] device (tapd4332d2f-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:38:23 compute-0 nova_compute[259627]: 2025-10-14 09:38:23.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:23 compute-0 nova_compute[259627]: 2025-10-14 09:38:23.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:23 compute-0 ovn_controller[152662]: 2025-10-14T09:38:23Z|01628|binding|INFO|Releasing lport d4332d2f-fff2-4de9-811c-7d5ce2580b21 from this chassis (sb_readonly=0)
Oct 14 09:38:23 compute-0 ovn_controller[152662]: 2025-10-14T09:38:23Z|01629|binding|INFO|Setting lport d4332d2f-fff2-4de9-811c-7d5ce2580b21 down in Southbound
Oct 14 09:38:23 compute-0 ovn_controller[152662]: 2025-10-14T09:38:23Z|01630|binding|INFO|Removing iface tapd4332d2f-ff ovn-installed in OVS
Oct 14 09:38:23 compute-0 nova_compute[259627]: 2025-10-14 09:38:23.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.000 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:69:75 2001:db8::f816:3eff:fe98:6975'], port_security=['fa:16:3e:98:69:75 2001:db8::f816:3eff:fe98:6975'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe98:6975/64', 'neutron:device_id': 'bdfd070c-d036-4656-b797-efba7d4a4565', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b6145c26-2b2a-4cab-aefe-d574ccfcd594', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85b53903-cbc2-43bd-a94b-2366acd741ae, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d4332d2f-fff2-4de9-811c-7d5ce2580b21) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:24 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000092.scope: Deactivated successfully.
Oct 14 09:38:24 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000092.scope: Consumed 16.802s CPU time.
Oct 14 09:38:24 compute-0 systemd-machined[214636]: Machine qemu-179-instance-00000092 terminated.
Oct 14 09:38:24 compute-0 ceph-mon[74249]: pgmap v2557: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 21 KiB/s wr, 31 op/s
Oct 14 09:38:24 compute-0 NetworkManager[44885]: <info>  [1760434704.0910] manager: (tapd4332d2f-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/665)
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.120 2 INFO nova.virt.libvirt.driver [-] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Instance destroyed successfully.
Oct 14 09:38:24 compute-0 neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5[412010]: [NOTICE]   (412014) : haproxy version is 2.8.14-c23fe91
Oct 14 09:38:24 compute-0 neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5[412010]: [NOTICE]   (412014) : path to executable is /usr/sbin/haproxy
Oct 14 09:38:24 compute-0 neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5[412010]: [WARNING]  (412014) : Exiting Master process...
Oct 14 09:38:24 compute-0 neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5[412010]: [WARNING]  (412014) : Exiting Master process...
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.122 2 DEBUG nova.objects.instance [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid bdfd070c-d036-4656-b797-efba7d4a4565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:38:24 compute-0 neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5[412010]: [ALERT]    (412014) : Current worker (412016) exited with code 143 (Terminated)
Oct 14 09:38:24 compute-0 neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5[412010]: [WARNING]  (412014) : All workers exited. Exiting... (0)
Oct 14 09:38:24 compute-0 systemd[1]: libpod-a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54.scope: Deactivated successfully.
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.135 2 DEBUG nova.virt.libvirt.vif [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:37:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1555367739',display_name='tempest-TestGettingAddress-server-1555367739',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1555367739',id=146,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:37:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-3rvo4yr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:37:16Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=bdfd070c-d036-4656-b797-efba7d4a4565,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.140 2 DEBUG nova.network.os_vif_util [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:38:24 compute-0 podman[414076]: 2025-10-14 09:38:24.135824058 +0000 UTC m=+0.066760949 container died a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.143 2 DEBUG nova.network.os_vif_util [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:b2:53,bridge_name='br-int',has_traffic_filtering=True,id=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b7ded4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.144 2 DEBUG os_vif [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:b2:53,bridge_name='br-int',has_traffic_filtering=True,id=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b7ded4-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.148 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3b7ded4-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.158 2 INFO os_vif [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:b2:53,bridge_name='br-int',has_traffic_filtering=True,id=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b7ded4-91')
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.160 2 DEBUG nova.virt.libvirt.vif [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:37:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1555367739',display_name='tempest-TestGettingAddress-server-1555367739',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1555367739',id=146,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:37:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-3rvo4yr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:37:16Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=bdfd070c-d036-4656-b797-efba7d4a4565,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.160 2 DEBUG nova.network.os_vif_util [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.161 2 DEBUG nova.network.os_vif_util [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:98:69:75,bridge_name='br-int',has_traffic_filtering=True,id=d4332d2f-fff2-4de9-811c-7d5ce2580b21,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4332d2f-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.162 2 DEBUG os_vif [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:69:75,bridge_name='br-int',has_traffic_filtering=True,id=d4332d2f-fff2-4de9-811c-7d5ce2580b21,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4332d2f-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.164 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4332d2f-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.169 2 INFO os_vif [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:69:75,bridge_name='br-int',has_traffic_filtering=True,id=d4332d2f-fff2-4de9-811c-7d5ce2580b21,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4332d2f-ff')
Oct 14 09:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54-userdata-shm.mount: Deactivated successfully.
Oct 14 09:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-25a575707cf6d3dded0de43b86eb81e521613e1d92e58a48fd3de84aba25f6fe-merged.mount: Deactivated successfully.
Oct 14 09:38:24 compute-0 podman[414076]: 2025-10-14 09:38:24.196315683 +0000 UTC m=+0.127252574 container cleanup a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 09:38:24 compute-0 systemd[1]: libpod-conmon-a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54.scope: Deactivated successfully.
Oct 14 09:38:24 compute-0 podman[414142]: 2025-10-14 09:38:24.271412686 +0000 UTC m=+0.045956899 container remove a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.280 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8c19e55e-68f4-40af-997e-5b53a1391770]: (4, ('Tue Oct 14 09:38:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5 (a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54)\na5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54\nTue Oct 14 09:38:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5 (a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54)\na5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.282 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bf210d54-f083-4bc6-8bc2-7d13afea5bf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.284 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcb1842f7-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:24 compute-0 kernel: tapcb1842f7-90: left promiscuous mode
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.317 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb3141c-a8e9-46ed-a5df-6c5fd30b2b89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.347 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[35265d52-c823-44f7-8cc3-b62b6a7b8b15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.349 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2843e757-5734-4ff8-b270-108f49e73ad6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.368 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7e6d22ad-d9a2-4534-a638-d7445e94e26d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841288, 'reachable_time': 21456, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414160, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:24 compute-0 systemd[1]: run-netns-ovnmeta\x2dcb1842f7\x2d933b\x2d4c76\x2daa59\x2dc55590c98ec5.mount: Deactivated successfully.
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.374 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.374 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba165f8-422d-422b-84dc-fb86cc9d73df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.376 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d4332d2f-fff2-4de9-811c-7d5ce2580b21 in datapath 4914ad10-cd65-4b9a-8ebb-43ebafc5f222 unbound from our chassis
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.378 2 DEBUG nova.compute.manager [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-changed-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.379 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4914ad10-cd65-4b9a-8ebb-43ebafc5f222, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.379 2 DEBUG nova.compute.manager [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Refreshing instance network info cache due to event network-changed-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.380 2 DEBUG oslo_concurrency.lockutils [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.380 2 DEBUG oslo_concurrency.lockutils [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.381 2 DEBUG nova.network.neutron [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Refreshing network info cache for port d3b7ded4-91fa-46dc-b6b9-e6e630c275ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.380 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0592a474-8b27-4e15-a0e1-83e730a06bf3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.381 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222 namespace which is not needed anymore
Oct 14 09:38:24 compute-0 neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222[412083]: [NOTICE]   (412087) : haproxy version is 2.8.14-c23fe91
Oct 14 09:38:24 compute-0 neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222[412083]: [NOTICE]   (412087) : path to executable is /usr/sbin/haproxy
Oct 14 09:38:24 compute-0 neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222[412083]: [WARNING]  (412087) : Exiting Master process...
Oct 14 09:38:24 compute-0 neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222[412083]: [ALERT]    (412087) : Current worker (412089) exited with code 143 (Terminated)
Oct 14 09:38:24 compute-0 neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222[412083]: [WARNING]  (412087) : All workers exited. Exiting... (0)
Oct 14 09:38:24 compute-0 systemd[1]: libpod-7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb.scope: Deactivated successfully.
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.605 2 INFO nova.virt.libvirt.driver [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Deleting instance files /var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565_del
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.606 2 INFO nova.virt.libvirt.driver [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Deletion of /var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565_del complete
Oct 14 09:38:24 compute-0 podman[414179]: 2025-10-14 09:38:24.608269452 +0000 UTC m=+0.081277175 container died 7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 09:38:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 21 KiB/s wr, 31 op/s
Oct 14 09:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb-userdata-shm.mount: Deactivated successfully.
Oct 14 09:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1331dc74cf2f6433ae28b4a6f4672079f6c234826e20c160e76e1878b09b3328-merged.mount: Deactivated successfully.
Oct 14 09:38:24 compute-0 podman[414179]: 2025-10-14 09:38:24.65217191 +0000 UTC m=+0.125179543 container cleanup 7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.661 2 INFO nova.compute.manager [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.662 2 DEBUG oslo.service.loopingcall [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.663 2 DEBUG nova.compute.manager [-] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.663 2 DEBUG nova.network.neutron [-] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:38:24 compute-0 systemd[1]: libpod-conmon-7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb.scope: Deactivated successfully.
Oct 14 09:38:24 compute-0 podman[414207]: 2025-10-14 09:38:24.710958592 +0000 UTC m=+0.038899045 container remove 7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.723 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6b525581-c508-46ff-9905-cb3d7f508389]: (4, ('Tue Oct 14 09:38:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222 (7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb)\n7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb\nTue Oct 14 09:38:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222 (7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb)\n7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.725 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[55906227-1d74-4a4e-82d7-f6c4d9d7d4a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.726 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4914ad10-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:24 compute-0 kernel: tap4914ad10-c0: left promiscuous mode
Oct 14 09:38:24 compute-0 nova_compute[259627]: 2025-10-14 09:38:24.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.755 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5d0b6310-2a96-4fa2-a8e3-9c09b1ecb0a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.788 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bc338261-d309-43f5-a783-ebf5e2bb6540]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.789 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[28164651-b9a1-4acc-a593-abcb04750d5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.813 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ab3e80ca-26d5-4fc7-a222-2ee5ce6eb156]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841388, 'reachable_time': 27324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414222, 'error': None, 'target': 'ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.815 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:38:24 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.816 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb3db80-7193-4596-b717-f1cd5f05241b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d4914ad10\x2dcd65\x2d4b9a\x2d8ebb\x2d43ebafc5f222.mount: Deactivated successfully.
Oct 14 09:38:25 compute-0 nova_compute[259627]: 2025-10-14 09:38:25.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:38:26 compute-0 ceph-mon[74249]: pgmap v2558: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 21 KiB/s wr, 31 op/s
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 41 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 26 KiB/s wr, 59 op/s
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.696 2 DEBUG nova.network.neutron [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updated VIF entry in instance network info cache for port d3b7ded4-91fa-46dc-b6b9-e6e630c275ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.697 2 DEBUG nova.network.neutron [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updating instance_info_cache with network_info: [{"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.727 2 DEBUG oslo_concurrency.lockutils [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.728 2 DEBUG nova.compute.manager [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-unplugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.729 2 DEBUG oslo_concurrency.lockutils [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.729 2 DEBUG oslo_concurrency.lockutils [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.730 2 DEBUG oslo_concurrency.lockutils [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.730 2 DEBUG nova.compute.manager [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] No waiting events found dispatching network-vif-unplugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.731 2 DEBUG nova.compute.manager [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-unplugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.782 2 DEBUG nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-plugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.783 2 DEBUG oslo_concurrency.lockutils [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.784 2 DEBUG oslo_concurrency.lockutils [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.784 2 DEBUG oslo_concurrency.lockutils [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.785 2 DEBUG nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] No waiting events found dispatching network-vif-plugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.785 2 WARNING nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received unexpected event network-vif-plugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab for instance with vm_state active and task_state deleting.
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.786 2 DEBUG nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-unplugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.786 2 DEBUG oslo_concurrency.lockutils [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.786 2 DEBUG oslo_concurrency.lockutils [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.787 2 DEBUG oslo_concurrency.lockutils [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.787 2 DEBUG nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] No waiting events found dispatching network-vif-unplugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.787 2 DEBUG nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-unplugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.788 2 DEBUG nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-plugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.788 2 DEBUG oslo_concurrency.lockutils [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.788 2 DEBUG oslo_concurrency.lockutils [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.789 2 DEBUG oslo_concurrency.lockutils [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.789 2 DEBUG nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] No waiting events found dispatching network-vif-plugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.789 2 WARNING nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received unexpected event network-vif-plugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 for instance with vm_state active and task_state deleting.
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.790 2 DEBUG nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-deleted-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.790 2 INFO nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Neutron deleted interface d3b7ded4-91fa-46dc-b6b9-e6e630c275ab; detaching it from the instance and deleting it from the info cache
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.790 2 DEBUG nova.network.neutron [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updating instance_info_cache with network_info: [{"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:38:26 compute-0 nova_compute[259627]: 2025-10-14 09:38:26.823 2 DEBUG nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Detach interface failed, port_id=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab, reason: Instance bdfd070c-d036-4656-b797-efba7d4a4565 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 09:38:27 compute-0 nova_compute[259627]: 2025-10-14 09:38:27.355 2 DEBUG nova.network.neutron [-] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:38:27 compute-0 nova_compute[259627]: 2025-10-14 09:38:27.379 2 INFO nova.compute.manager [-] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Took 2.72 seconds to deallocate network for instance.
Oct 14 09:38:27 compute-0 nova_compute[259627]: 2025-10-14 09:38:27.422 2 DEBUG oslo_concurrency.lockutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:27 compute-0 nova_compute[259627]: 2025-10-14 09:38:27.423 2 DEBUG oslo_concurrency.lockutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:27 compute-0 nova_compute[259627]: 2025-10-14 09:38:27.508 2 DEBUG oslo_concurrency.processutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:38:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:38:27 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1442197784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:38:27 compute-0 nova_compute[259627]: 2025-10-14 09:38:27.935 2 DEBUG oslo_concurrency.processutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:38:27 compute-0 nova_compute[259627]: 2025-10-14 09:38:27.941 2 DEBUG nova.compute.provider_tree [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:38:27 compute-0 nova_compute[259627]: 2025-10-14 09:38:27.961 2 DEBUG nova.scheduler.client.report [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:38:27 compute-0 nova_compute[259627]: 2025-10-14 09:38:27.987 2 DEBUG oslo_concurrency.lockutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:38:28 compute-0 nova_compute[259627]: 2025-10-14 09:38:28.028 2 INFO nova.scheduler.client.report [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance bdfd070c-d036-4656-b797-efba7d4a4565
Oct 14 09:38:28 compute-0 ceph-mon[74249]: pgmap v2559: 305 pgs: 305 active+clean; 41 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 26 KiB/s wr, 59 op/s
Oct 14 09:38:28 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1442197784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:38:28 compute-0 nova_compute[259627]: 2025-10-14 09:38:28.128 2 DEBUG oslo_concurrency.lockutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:38:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:38:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 41 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 13 KiB/s wr, 58 op/s
Oct 14 09:38:28 compute-0 nova_compute[259627]: 2025-10-14 09:38:28.907 2 DEBUG nova.compute.manager [req-63ef8cc8-0c8a-4042-a5d3-b77102f72d2e req-41093187-bf5e-46b1-a26f-0c0c245126bf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-deleted-d4332d2f-fff2-4de9-811c-7d5ce2580b21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:38:29 compute-0 nova_compute[259627]: 2025-10-14 09:38:29.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:30 compute-0 ceph-mon[74249]: pgmap v2560: 305 pgs: 305 active+clean; 41 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 13 KiB/s wr, 58 op/s
Oct 14 09:38:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 13 KiB/s wr, 58 op/s
Oct 14 09:38:31 compute-0 ceph-mon[74249]: pgmap v2561: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 13 KiB/s wr, 58 op/s
Oct 14 09:38:31 compute-0 nova_compute[259627]: 2025-10-14 09:38:31.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 27 op/s
Oct 14 09:38:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:38:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:38:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:38:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:38:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:38:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:38:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:38:32
Oct 14 09:38:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:38:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:38:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.control', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'volumes', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root']
Oct 14 09:38:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:38:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:38:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:38:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:38:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:38:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:38:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:38:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:38:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:38:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:38:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:38:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:38:33 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:33.355 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:38:33 compute-0 nova_compute[259627]: 2025-10-14 09:38:33.432 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434698.4304912, 4e23c3df-9710-4287-9890-cdae2d551fc0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:38:33 compute-0 nova_compute[259627]: 2025-10-14 09:38:33.433 2 INFO nova.compute.manager [-] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] VM Stopped (Lifecycle Event)
Oct 14 09:38:33 compute-0 nova_compute[259627]: 2025-10-14 09:38:33.470 2 DEBUG nova.compute.manager [None req-5c3f268b-6fad-4997-ab3a-e7fc5b784b27 - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:38:33 compute-0 ceph-mon[74249]: pgmap v2562: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 27 op/s
Oct 14 09:38:34 compute-0 nova_compute[259627]: 2025-10-14 09:38:34.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 27 op/s
Oct 14 09:38:35 compute-0 ceph-mon[74249]: pgmap v2563: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 27 op/s
Oct 14 09:38:36 compute-0 nova_compute[259627]: 2025-10-14 09:38:36.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:36 compute-0 nova_compute[259627]: 2025-10-14 09:38:36.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:36 compute-0 nova_compute[259627]: 2025-10-14 09:38:36.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 27 op/s
Oct 14 09:38:37 compute-0 ceph-mon[74249]: pgmap v2564: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 27 op/s
Oct 14 09:38:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:38:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2565: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:39 compute-0 nova_compute[259627]: 2025-10-14 09:38:39.110 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434704.108715, bdfd070c-d036-4656-b797-efba7d4a4565 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:38:39 compute-0 nova_compute[259627]: 2025-10-14 09:38:39.111 2 INFO nova.compute.manager [-] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] VM Stopped (Lifecycle Event)
Oct 14 09:38:39 compute-0 nova_compute[259627]: 2025-10-14 09:38:39.146 2 DEBUG nova.compute.manager [None req-e11be0fd-c752-473b-a714-d0dff6e67c4c - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:38:39 compute-0 nova_compute[259627]: 2025-10-14 09:38:39.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:39 compute-0 podman[414247]: 2025-10-14 09:38:39.697560112 +0000 UTC m=+0.096521340 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 09:38:39 compute-0 podman[414246]: 2025-10-14 09:38:39.713663677 +0000 UTC m=+0.119903033 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:38:39 compute-0 ceph-mon[74249]: pgmap v2565: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:41 compute-0 nova_compute[259627]: 2025-10-14 09:38:41.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:41 compute-0 ceph-mon[74249]: pgmap v2566: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:38:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:38:43 compute-0 ceph-mon[74249]: pgmap v2567: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.745826) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434723745915, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 549, "num_deletes": 251, "total_data_size": 578483, "memory_usage": 588696, "flush_reason": "Manual Compaction"}
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434723752972, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 573213, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53877, "largest_seqno": 54425, "table_properties": {"data_size": 570161, "index_size": 1024, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6977, "raw_average_key_size": 18, "raw_value_size": 564210, "raw_average_value_size": 1533, "num_data_blocks": 46, "num_entries": 368, "num_filter_entries": 368, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434683, "oldest_key_time": 1760434683, "file_creation_time": 1760434723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 7237 microseconds, and 4267 cpu microseconds.
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.753076) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 573213 bytes OK
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.753099) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.754627) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.754649) EVENT_LOG_v1 {"time_micros": 1760434723754642, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.754672) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 575397, prev total WAL file size 575397, number of live WAL files 2.
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.755341) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(559KB)], [125(9139KB)]
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434723755557, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 9931804, "oldest_snapshot_seqno": -1}
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7315 keys, 8282339 bytes, temperature: kUnknown
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434723811443, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8282339, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8236370, "index_size": 26592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18309, "raw_key_size": 191600, "raw_average_key_size": 26, "raw_value_size": 8108558, "raw_average_value_size": 1108, "num_data_blocks": 1029, "num_entries": 7315, "num_filter_entries": 7315, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.811712) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8282339 bytes
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.813176) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.7 rd, 148.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 8.9 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(31.8) write-amplify(14.4) OK, records in: 7825, records dropped: 510 output_compression: NoCompression
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.813205) EVENT_LOG_v1 {"time_micros": 1760434723813193, "job": 76, "event": "compaction_finished", "compaction_time_micros": 55897, "compaction_time_cpu_micros": 37865, "output_level": 6, "num_output_files": 1, "total_output_size": 8282339, "num_input_records": 7825, "num_output_records": 7315, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434723813568, "job": 76, "event": "table_file_deletion", "file_number": 127}
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434723817239, "job": 76, "event": "table_file_deletion", "file_number": 125}
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.755253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.817302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.817308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.817311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.817314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:38:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.817317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:38:44 compute-0 nova_compute[259627]: 2025-10-14 09:38:44.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:45 compute-0 ceph-mon[74249]: pgmap v2568: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:46 compute-0 nova_compute[259627]: 2025-10-14 09:38:46.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:46 compute-0 nova_compute[259627]: 2025-10-14 09:38:46.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:38:47 compute-0 ceph-mon[74249]: pgmap v2569: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:38:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:49 compute-0 nova_compute[259627]: 2025-10-14 09:38:49.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:49 compute-0 ceph-mon[74249]: pgmap v2570: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:50.223 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:47:b8 10.100.0.2 2001:db8::f816:3eff:fe76:47b8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe76:47b8/64', 'neutron:device_id': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14196b9b-0205-497b-9e98-32690613a533', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d75e0f-ba6b-4079-b571-32cab0870048, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=500514c8-ea10-48c5-93d8-b1c6948e60b0) old=Port_Binding(mac=['fa:16:3e:76:47:b8 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14196b9b-0205-497b-9e98-32690613a533', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:38:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:50.224 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 500514c8-ea10-48c5-93d8-b1c6948e60b0 in datapath 14196b9b-0205-497b-9e98-32690613a533 updated
Oct 14 09:38:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:50.225 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14196b9b-0205-497b-9e98-32690613a533, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:38:50 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:50.227 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[33ecf7e3-ac28-496d-b04c-32b028b426ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:50 compute-0 nova_compute[259627]: 2025-10-14 09:38:50.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:38:50 compute-0 nova_compute[259627]: 2025-10-14 09:38:50.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:38:50 compute-0 nova_compute[259627]: 2025-10-14 09:38:50.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 09:38:51 compute-0 nova_compute[259627]: 2025-10-14 09:38:51.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:51 compute-0 ceph-mon[74249]: pgmap v2571: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:52 compute-0 nova_compute[259627]: 2025-10-14 09:38:52.998 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:38:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:38:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:53.724 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:47:b8 10.100.0.2 2001:db8:0:1:f816:3eff:fe76:47b8 2001:db8::f816:3eff:fe76:47b8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe76:47b8/64 2001:db8::f816:3eff:fe76:47b8/64', 'neutron:device_id': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14196b9b-0205-497b-9e98-32690613a533', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d75e0f-ba6b-4079-b571-32cab0870048, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=500514c8-ea10-48c5-93d8-b1c6948e60b0) old=Port_Binding(mac=['fa:16:3e:76:47:b8 10.100.0.2 2001:db8::f816:3eff:fe76:47b8'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe76:47b8/64', 'neutron:device_id': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14196b9b-0205-497b-9e98-32690613a533', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:38:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:53.726 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 500514c8-ea10-48c5-93d8-b1c6948e60b0 in datapath 14196b9b-0205-497b-9e98-32690613a533 updated
Oct 14 09:38:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:53.728 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14196b9b-0205-497b-9e98-32690613a533, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:38:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:38:53.729 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7718b5d9-5c9c-4d8e-a31b-ccddc812d110]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:38:53 compute-0 ceph-mon[74249]: pgmap v2572: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:54 compute-0 nova_compute[259627]: 2025-10-14 09:38:54.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:54 compute-0 podman[414292]: 2025-10-14 09:38:54.649775458 +0000 UTC m=+0.063805627 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:38:54 compute-0 podman[414293]: 2025-10-14 09:38:54.661842644 +0000 UTC m=+0.067303342 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:38:54 compute-0 nova_compute[259627]: 2025-10-14 09:38:54.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:38:55 compute-0 nova_compute[259627]: 2025-10-14 09:38:55.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:55 compute-0 nova_compute[259627]: 2025-10-14 09:38:55.009 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:55 compute-0 nova_compute[259627]: 2025-10-14 09:38:55.009 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:38:55 compute-0 nova_compute[259627]: 2025-10-14 09:38:55.009 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:38:55 compute-0 nova_compute[259627]: 2025-10-14 09:38:55.009 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:38:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:38:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3806017976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:38:55 compute-0 nova_compute[259627]: 2025-10-14 09:38:55.474 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:38:55 compute-0 nova_compute[259627]: 2025-10-14 09:38:55.748 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:38:55 compute-0 nova_compute[259627]: 2025-10-14 09:38:55.750 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3594MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:38:55 compute-0 nova_compute[259627]: 2025-10-14 09:38:55.750 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:55 compute-0 nova_compute[259627]: 2025-10-14 09:38:55.751 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:55 compute-0 ceph-mon[74249]: pgmap v2573: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3806017976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:38:55 compute-0 nova_compute[259627]: 2025-10-14 09:38:55.900 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:38:55 compute-0 nova_compute[259627]: 2025-10-14 09:38:55.901 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:38:55 compute-0 nova_compute[259627]: 2025-10-14 09:38:55.966 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:38:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:38:56 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/848032482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:38:56 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 09:38:56 compute-0 nova_compute[259627]: 2025-10-14 09:38:56.437 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:38:56 compute-0 nova_compute[259627]: 2025-10-14 09:38:56.445 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:38:56 compute-0 nova_compute[259627]: 2025-10-14 09:38:56.471 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:38:56 compute-0 nova_compute[259627]: 2025-10-14 09:38:56.504 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:38:56 compute-0 nova_compute[259627]: 2025-10-14 09:38:56.505 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:38:56 compute-0 nova_compute[259627]: 2025-10-14 09:38:56.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:56 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/848032482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:38:57 compute-0 nova_compute[259627]: 2025-10-14 09:38:57.506 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:38:57 compute-0 nova_compute[259627]: 2025-10-14 09:38:57.507 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:38:57 compute-0 ceph-mon[74249]: pgmap v2574: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:57 compute-0 nova_compute[259627]: 2025-10-14 09:38:57.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:38:57 compute-0 nova_compute[259627]: 2025-10-14 09:38:57.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:38:58 compute-0 nova_compute[259627]: 2025-10-14 09:38:58.007 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:38:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:38:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:58 compute-0 nova_compute[259627]: 2025-10-14 09:38:58.679 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:58 compute-0 nova_compute[259627]: 2025-10-14 09:38:58.679 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:58 compute-0 nova_compute[259627]: 2025-10-14 09:38:58.708 2 DEBUG nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:38:58 compute-0 nova_compute[259627]: 2025-10-14 09:38:58.890 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:38:58 compute-0 nova_compute[259627]: 2025-10-14 09:38:58.891 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:38:58 compute-0 nova_compute[259627]: 2025-10-14 09:38:58.901 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:38:58 compute-0 nova_compute[259627]: 2025-10-14 09:38:58.902 2 INFO nova.compute.claims [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.000 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.050 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:38:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:38:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3347606092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.548 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.554 2 DEBUG nova.compute.provider_tree [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.581 2 DEBUG nova.scheduler.client.report [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.629 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.630 2 DEBUG nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.707 2 DEBUG nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.708 2 DEBUG nova.network.neutron [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.735 2 INFO nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.756 2 DEBUG nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:38:59 compute-0 ceph-mon[74249]: pgmap v2575: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:38:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3347606092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.869 2 DEBUG nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.871 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.872 2 INFO nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Creating image(s)
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.911 2 DEBUG nova.storage.rbd_utils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.948 2 DEBUG nova.storage.rbd_utils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.975 2 DEBUG nova.storage.rbd_utils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:38:59 compute-0 nova_compute[259627]: 2025-10-14 09:38:59.979 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:39:00 compute-0 nova_compute[259627]: 2025-10-14 09:39:00.028 2 DEBUG nova.policy [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:39:00 compute-0 nova_compute[259627]: 2025-10-14 09:39:00.033 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:39:00 compute-0 nova_compute[259627]: 2025-10-14 09:39:00.071 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:39:00 compute-0 nova_compute[259627]: 2025-10-14 09:39:00.072 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:39:00 compute-0 nova_compute[259627]: 2025-10-14 09:39:00.072 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:39:00 compute-0 nova_compute[259627]: 2025-10-14 09:39:00.073 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:39:00 compute-0 nova_compute[259627]: 2025-10-14 09:39:00.097 2 DEBUG nova.storage.rbd_utils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:39:00 compute-0 nova_compute[259627]: 2025-10-14 09:39:00.100 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:39:00 compute-0 nova_compute[259627]: 2025-10-14 09:39:00.362 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:39:00 compute-0 nova_compute[259627]: 2025-10-14 09:39:00.443 2 DEBUG nova.storage.rbd_utils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:39:00 compute-0 nova_compute[259627]: 2025-10-14 09:39:00.550 2 DEBUG nova.objects.instance [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid e5b13156-71d2-4a9c-be63-1beebe1ca3fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:39:00 compute-0 nova_compute[259627]: 2025-10-14 09:39:00.574 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:39:00 compute-0 nova_compute[259627]: 2025-10-14 09:39:00.574 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Ensure instance console log exists: /var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:39:00 compute-0 nova_compute[259627]: 2025-10-14 09:39:00.574 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:39:00 compute-0 nova_compute[259627]: 2025-10-14 09:39:00.575 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:39:00 compute-0 nova_compute[259627]: 2025-10-14 09:39:00.575 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:39:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 51 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 167 KiB/s wr, 2 op/s
Oct 14 09:39:01 compute-0 nova_compute[259627]: 2025-10-14 09:39:01.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:01 compute-0 ceph-mon[74249]: pgmap v2576: 305 pgs: 305 active+clean; 51 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 167 KiB/s wr, 2 op/s
Oct 14 09:39:02 compute-0 sudo[414569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:39:02 compute-0 sudo[414569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:39:02 compute-0 sudo[414569]: pam_unix(sudo:session): session closed for user root
Oct 14 09:39:02 compute-0 sudo[414594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:39:02 compute-0 sudo[414594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:39:02 compute-0 sudo[414594]: pam_unix(sudo:session): session closed for user root
Oct 14 09:39:02 compute-0 sudo[414619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:39:02 compute-0 sudo[414619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:39:02 compute-0 sudo[414619]: pam_unix(sudo:session): session closed for user root
Oct 14 09:39:02 compute-0 sudo[414644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:39:02 compute-0 sudo[414644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:39:02 compute-0 nova_compute[259627]: 2025-10-14 09:39:02.517 2 DEBUG nova.network.neutron [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Successfully created port: 30c28c87-45b1-43e9-930b-c8ba5142286f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:39:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 51 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 167 KiB/s wr, 2 op/s
Oct 14 09:39:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:39:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:39:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:39:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:39:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:39:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:39:02 compute-0 sudo[414644]: pam_unix(sudo:session): session closed for user root
Oct 14 09:39:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:39:02 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:39:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:39:02 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:39:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:39:02 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:39:02 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 956726a7-5c12-4a68-bf17-4325b85dd9e3 does not exist
Oct 14 09:39:02 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev bc7bc011-8776-4264-8df5-c13a6edb5ddb does not exist
Oct 14 09:39:02 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev bb08aeff-726b-4be7-a5e5-43ba3297f9f9 does not exist
Oct 14 09:39:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:39:02 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:39:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:39:02 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:39:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:39:02 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:39:02 compute-0 nova_compute[259627]: 2025-10-14 09:39:02.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:39:03 compute-0 sudo[414701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:39:03 compute-0 sudo[414701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:39:03 compute-0 sudo[414701]: pam_unix(sudo:session): session closed for user root
Oct 14 09:39:03 compute-0 sudo[414726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:39:03 compute-0 sudo[414726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:39:03 compute-0 sudo[414726]: pam_unix(sudo:session): session closed for user root
Oct 14 09:39:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:39:03 compute-0 sudo[414751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:39:03 compute-0 sudo[414751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:39:03 compute-0 nova_compute[259627]: 2025-10-14 09:39:03.193 2 DEBUG nova.network.neutron [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Successfully updated port: 30c28c87-45b1-43e9-930b-c8ba5142286f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:39:03 compute-0 sudo[414751]: pam_unix(sudo:session): session closed for user root
Oct 14 09:39:03 compute-0 nova_compute[259627]: 2025-10-14 09:39:03.258 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:39:03 compute-0 nova_compute[259627]: 2025-10-14 09:39:03.258 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:39:03 compute-0 nova_compute[259627]: 2025-10-14 09:39:03.258 2 DEBUG nova.network.neutron [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:39:03 compute-0 sudo[414776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:39:03 compute-0 sudo[414776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:39:03 compute-0 nova_compute[259627]: 2025-10-14 09:39:03.332 2 DEBUG nova.compute.manager [req-821d96d6-c345-4c93-b0eb-c2494b300a41 req-505da766-619e-47ce-80b8-e7111a02aacb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received event network-changed-30c28c87-45b1-43e9-930b-c8ba5142286f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:39:03 compute-0 nova_compute[259627]: 2025-10-14 09:39:03.333 2 DEBUG nova.compute.manager [req-821d96d6-c345-4c93-b0eb-c2494b300a41 req-505da766-619e-47ce-80b8-e7111a02aacb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Refreshing instance network info cache due to event network-changed-30c28c87-45b1-43e9-930b-c8ba5142286f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:39:03 compute-0 nova_compute[259627]: 2025-10-14 09:39:03.333 2 DEBUG oslo_concurrency.lockutils [req-821d96d6-c345-4c93-b0eb-c2494b300a41 req-505da766-619e-47ce-80b8-e7111a02aacb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:39:03 compute-0 nova_compute[259627]: 2025-10-14 09:39:03.543 2 DEBUG nova.network.neutron [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:39:03 compute-0 podman[414843]: 2025-10-14 09:39:03.749817991 +0000 UTC m=+0.068306827 container create 14a43929da8039d095e260a119986528fbf0cde353e52c424ec112d45af294ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 09:39:03 compute-0 systemd[1]: Started libpod-conmon-14a43929da8039d095e260a119986528fbf0cde353e52c424ec112d45af294ec.scope.
Oct 14 09:39:03 compute-0 podman[414843]: 2025-10-14 09:39:03.720722117 +0000 UTC m=+0.039211003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:39:03 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:39:03 compute-0 podman[414843]: 2025-10-14 09:39:03.853477435 +0000 UTC m=+0.171966261 container init 14a43929da8039d095e260a119986528fbf0cde353e52c424ec112d45af294ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 09:39:03 compute-0 podman[414843]: 2025-10-14 09:39:03.861745648 +0000 UTC m=+0.180234484 container start 14a43929da8039d095e260a119986528fbf0cde353e52c424ec112d45af294ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:39:03 compute-0 podman[414843]: 2025-10-14 09:39:03.867899269 +0000 UTC m=+0.186388105 container attach 14a43929da8039d095e260a119986528fbf0cde353e52c424ec112d45af294ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:39:03 compute-0 xenodochial_kilby[414860]: 167 167
Oct 14 09:39:03 compute-0 systemd[1]: libpod-14a43929da8039d095e260a119986528fbf0cde353e52c424ec112d45af294ec.scope: Deactivated successfully.
Oct 14 09:39:03 compute-0 podman[414843]: 2025-10-14 09:39:03.871170359 +0000 UTC m=+0.189659185 container died 14a43929da8039d095e260a119986528fbf0cde353e52c424ec112d45af294ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:39:03 compute-0 ceph-mon[74249]: pgmap v2577: 305 pgs: 305 active+clean; 51 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 167 KiB/s wr, 2 op/s
Oct 14 09:39:03 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:39:03 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:39:03 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:39:03 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:39:03 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:39:03 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:39:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-073458afd39f15130dda37f96a62361c662b7be0bf1605df52aa0875376b851b-merged.mount: Deactivated successfully.
Oct 14 09:39:03 compute-0 podman[414843]: 2025-10-14 09:39:03.926334503 +0000 UTC m=+0.244823329 container remove 14a43929da8039d095e260a119986528fbf0cde353e52c424ec112d45af294ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 09:39:03 compute-0 systemd[1]: libpod-conmon-14a43929da8039d095e260a119986528fbf0cde353e52c424ec112d45af294ec.scope: Deactivated successfully.
Oct 14 09:39:04 compute-0 podman[414883]: 2025-10-14 09:39:04.184495128 +0000 UTC m=+0.071850194 container create 4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shockley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 09:39:04 compute-0 systemd[1]: Started libpod-conmon-4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782.scope.
Oct 14 09:39:04 compute-0 podman[414883]: 2025-10-14 09:39:04.154245466 +0000 UTC m=+0.041600572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:39:04 compute-0 nova_compute[259627]: 2025-10-14 09:39:04.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:04 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a73628b6bf357cd3e7364643554689a51e781e377de77d981c6c7ebb52c3e7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a73628b6bf357cd3e7364643554689a51e781e377de77d981c6c7ebb52c3e7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a73628b6bf357cd3e7364643554689a51e781e377de77d981c6c7ebb52c3e7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a73628b6bf357cd3e7364643554689a51e781e377de77d981c6c7ebb52c3e7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a73628b6bf357cd3e7364643554689a51e781e377de77d981c6c7ebb52c3e7f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:39:04 compute-0 podman[414883]: 2025-10-14 09:39:04.292611141 +0000 UTC m=+0.179966227 container init 4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:39:04 compute-0 podman[414883]: 2025-10-14 09:39:04.312228733 +0000 UTC m=+0.199583809 container start 4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 09:39:04 compute-0 podman[414883]: 2025-10-14 09:39:04.316701762 +0000 UTC m=+0.204056908 container attach 4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shockley, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:39:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 51 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 167 KiB/s wr, 2 op/s
Oct 14 09:39:05 compute-0 affectionate_shockley[414900]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:39:05 compute-0 affectionate_shockley[414900]: --> relative data size: 1.0
Oct 14 09:39:05 compute-0 affectionate_shockley[414900]: --> All data devices are unavailable
Oct 14 09:39:05 compute-0 systemd[1]: libpod-4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782.scope: Deactivated successfully.
Oct 14 09:39:05 compute-0 systemd[1]: libpod-4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782.scope: Consumed 1.149s CPU time.
Oct 14 09:39:05 compute-0 podman[414929]: 2025-10-14 09:39:05.555918322 +0000 UTC m=+0.031790081 container died 4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shockley, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:39:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a73628b6bf357cd3e7364643554689a51e781e377de77d981c6c7ebb52c3e7f-merged.mount: Deactivated successfully.
Oct 14 09:39:05 compute-0 podman[414929]: 2025-10-14 09:39:05.618004206 +0000 UTC m=+0.093875875 container remove 4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shockley, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 09:39:05 compute-0 systemd[1]: libpod-conmon-4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782.scope: Deactivated successfully.
Oct 14 09:39:05 compute-0 sudo[414776]: pam_unix(sudo:session): session closed for user root
Oct 14 09:39:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:39:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1896758948' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:39:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:39:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1896758948' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:39:05 compute-0 sudo[414944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:39:05 compute-0 sudo[414944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:39:05 compute-0 sudo[414944]: pam_unix(sudo:session): session closed for user root
Oct 14 09:39:05 compute-0 sudo[414969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:39:05 compute-0 sudo[414969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:39:05 compute-0 sudo[414969]: pam_unix(sudo:session): session closed for user root
Oct 14 09:39:05 compute-0 sudo[414994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:39:05 compute-0 ceph-mon[74249]: pgmap v2578: 305 pgs: 305 active+clean; 51 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 167 KiB/s wr, 2 op/s
Oct 14 09:39:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1896758948' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:39:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1896758948' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:39:05 compute-0 sudo[414994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:39:05 compute-0 sudo[414994]: pam_unix(sudo:session): session closed for user root
Oct 14 09:39:05 compute-0 sudo[415019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:39:05 compute-0 sudo[415019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.137 2 DEBUG nova.network.neutron [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updating instance_info_cache with network_info: [{"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.169 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.169 2 DEBUG nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Instance network_info: |[{"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.170 2 DEBUG oslo_concurrency.lockutils [req-821d96d6-c345-4c93-b0eb-c2494b300a41 req-505da766-619e-47ce-80b8-e7111a02aacb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.170 2 DEBUG nova.network.neutron [req-821d96d6-c345-4c93-b0eb-c2494b300a41 req-505da766-619e-47ce-80b8-e7111a02aacb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Refreshing network info cache for port 30c28c87-45b1-43e9-930b-c8ba5142286f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.180 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Start _get_guest_xml network_info=[{"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.187 2 WARNING nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.193 2 DEBUG nova.virt.libvirt.host [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.195 2 DEBUG nova.virt.libvirt.host [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.207 2 DEBUG nova.virt.libvirt.host [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.208 2 DEBUG nova.virt.libvirt.host [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.209 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.209 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.210 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.211 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.212 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.212 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.212 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.213 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.214 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.214 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.215 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.215 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.222 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:39:06 compute-0 podman[415082]: 2025-10-14 09:39:06.313477193 +0000 UTC m=+0.048437490 container create 25b2d58f9611342ead28c520c37849a1b32e81d413b2d735b0bf2a9f751e0509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kare, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:39:06 compute-0 systemd[1]: Started libpod-conmon-25b2d58f9611342ead28c520c37849a1b32e81d413b2d735b0bf2a9f751e0509.scope.
Oct 14 09:39:06 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:39:06 compute-0 podman[415082]: 2025-10-14 09:39:06.373619279 +0000 UTC m=+0.108579606 container init 25b2d58f9611342ead28c520c37849a1b32e81d413b2d735b0bf2a9f751e0509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:39:06 compute-0 podman[415082]: 2025-10-14 09:39:06.381210205 +0000 UTC m=+0.116170492 container start 25b2d58f9611342ead28c520c37849a1b32e81d413b2d735b0bf2a9f751e0509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 09:39:06 compute-0 podman[415082]: 2025-10-14 09:39:06.385055049 +0000 UTC m=+0.120015366 container attach 25b2d58f9611342ead28c520c37849a1b32e81d413b2d735b0bf2a9f751e0509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kare, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 09:39:06 compute-0 vibrant_kare[415099]: 167 167
Oct 14 09:39:06 compute-0 systemd[1]: libpod-25b2d58f9611342ead28c520c37849a1b32e81d413b2d735b0bf2a9f751e0509.scope: Deactivated successfully.
Oct 14 09:39:06 compute-0 podman[415082]: 2025-10-14 09:39:06.294499437 +0000 UTC m=+0.029459774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:39:06 compute-0 podman[415082]: 2025-10-14 09:39:06.387463078 +0000 UTC m=+0.122423365 container died 25b2d58f9611342ead28c520c37849a1b32e81d413b2d735b0bf2a9f751e0509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 09:39:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a0ac7207e34bf0d329b4cf196b94ff10aebca80426c407a33fba20cad4d61a6-merged.mount: Deactivated successfully.
Oct 14 09:39:06 compute-0 podman[415082]: 2025-10-14 09:39:06.427274615 +0000 UTC m=+0.162234902 container remove 25b2d58f9611342ead28c520c37849a1b32e81d413b2d735b0bf2a9f751e0509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 09:39:06 compute-0 systemd[1]: libpod-conmon-25b2d58f9611342ead28c520c37849a1b32e81d413b2d735b0bf2a9f751e0509.scope: Deactivated successfully.
Oct 14 09:39:06 compute-0 podman[415140]: 2025-10-14 09:39:06.597441401 +0000 UTC m=+0.048328407 container create 26e6b8ad26365786ad268e1e038c0e863dfff3af54c2056ea809caa91d2ea2de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_euler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:39:06 compute-0 systemd[1]: Started libpod-conmon-26e6b8ad26365786ad268e1e038c0e863dfff3af54c2056ea809caa91d2ea2de.scope.
Oct 14 09:39:06 compute-0 podman[415140]: 2025-10-14 09:39:06.577375629 +0000 UTC m=+0.028262685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:39:06 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edef90d78b08b5e5c93723a472847facfaf70a021dccab1549b268970c5a68bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edef90d78b08b5e5c93723a472847facfaf70a021dccab1549b268970c5a68bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edef90d78b08b5e5c93723a472847facfaf70a021dccab1549b268970c5a68bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edef90d78b08b5e5c93723a472847facfaf70a021dccab1549b268970c5a68bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:39:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:39:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/923987819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:39:06 compute-0 podman[415140]: 2025-10-14 09:39:06.718354708 +0000 UTC m=+0.169241744 container init 26e6b8ad26365786ad268e1e038c0e863dfff3af54c2056ea809caa91d2ea2de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 09:39:06 compute-0 podman[415140]: 2025-10-14 09:39:06.725609106 +0000 UTC m=+0.176496122 container start 26e6b8ad26365786ad268e1e038c0e863dfff3af54c2056ea809caa91d2ea2de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 09:39:06 compute-0 podman[415140]: 2025-10-14 09:39:06.729175184 +0000 UTC m=+0.180062240 container attach 26e6b8ad26365786ad268e1e038c0e863dfff3af54c2056ea809caa91d2ea2de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_euler, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.734 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.757 2 DEBUG nova.storage.rbd_utils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:39:06 compute-0 nova_compute[259627]: 2025-10-14 09:39:06.761 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:39:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/923987819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:39:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:07.055 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:39:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:07.056 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:39:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:07.056 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:39:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:39:07 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2103820873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.200 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.202 2 DEBUG nova.virt.libvirt.vif [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:38:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-911560012',display_name='tempest-TestGettingAddress-server-911560012',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-911560012',id=148,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAMDf2XavD9/IfCp+ndZfV3AdZCUIhTR2npOL1XNIlUaFyzTUoWv4qmqvpPpwdd1PNzx9cK/19FXxs4psOVoqPZEBFGbKyJdIety1giFsXf8LSdMC27Wpk9aHw8ObIpVuA==',key_name='tempest-TestGettingAddress-1135497915',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-friw18rr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:38:59Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=e5b13156-71d2-4a9c-be63-1beebe1ca3fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.203 2 DEBUG nova.network.os_vif_util [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.205 2 DEBUG nova.network.os_vif_util [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:ec:03,bridge_name='br-int',has_traffic_filtering=True,id=30c28c87-45b1-43e9-930b-c8ba5142286f,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c28c87-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.208 2 DEBUG nova.objects.instance [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid e5b13156-71d2-4a9c-be63-1beebe1ca3fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.226 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:39:07 compute-0 nova_compute[259627]:   <uuid>e5b13156-71d2-4a9c-be63-1beebe1ca3fb</uuid>
Oct 14 09:39:07 compute-0 nova_compute[259627]:   <name>instance-00000094</name>
Oct 14 09:39:07 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:39:07 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:39:07 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <nova:name>tempest-TestGettingAddress-server-911560012</nova:name>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:39:06</nova:creationTime>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:39:07 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:39:07 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:39:07 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:39:07 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:39:07 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:39:07 compute-0 nova_compute[259627]:         <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 09:39:07 compute-0 nova_compute[259627]:         <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:39:07 compute-0 nova_compute[259627]:         <nova:port uuid="30c28c87-45b1-43e9-930b-c8ba5142286f">
Oct 14 09:39:07 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feb7:ec03" ipVersion="6"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feb7:ec03" ipVersion="6"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:39:07 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:39:07 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <system>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <entry name="serial">e5b13156-71d2-4a9c-be63-1beebe1ca3fb</entry>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <entry name="uuid">e5b13156-71d2-4a9c-be63-1beebe1ca3fb</entry>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     </system>
Oct 14 09:39:07 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:39:07 compute-0 nova_compute[259627]:   <os>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:   </os>
Oct 14 09:39:07 compute-0 nova_compute[259627]:   <features>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:   </features>
Oct 14 09:39:07 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:39:07 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:39:07 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk">
Oct 14 09:39:07 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       </source>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:39:07 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk.config">
Oct 14 09:39:07 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       </source>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:39:07 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:b7:ec:03"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <target dev="tap30c28c87-45"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb/console.log" append="off"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <video>
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     </video>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:39:07 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:39:07 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:39:07 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:39:07 compute-0 nova_compute[259627]: </domain>
Oct 14 09:39:07 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.229 2 DEBUG nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Preparing to wait for external event network-vif-plugged-30c28c87-45b1-43e9-930b-c8ba5142286f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.229 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.230 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.230 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.232 2 DEBUG nova.virt.libvirt.vif [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:38:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-911560012',display_name='tempest-TestGettingAddress-server-911560012',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-911560012',id=148,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAMDf2XavD9/IfCp+ndZfV3AdZCUIhTR2npOL1XNIlUaFyzTUoWv4qmqvpPpwdd1PNzx9cK/19FXxs4psOVoqPZEBFGbKyJdIety1giFsXf8LSdMC27Wpk9aHw8ObIpVuA==',key_name='tempest-TestGettingAddress-1135497915',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-friw18rr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:38:59Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=e5b13156-71d2-4a9c-be63-1beebe1ca3fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.233 2 DEBUG nova.network.os_vif_util [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.234 2 DEBUG nova.network.os_vif_util [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:ec:03,bridge_name='br-int',has_traffic_filtering=True,id=30c28c87-45b1-43e9-930b-c8ba5142286f,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c28c87-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.235 2 DEBUG os_vif [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:ec:03,bridge_name='br-int',has_traffic_filtering=True,id=30c28c87-45b1-43e9-930b-c8ba5142286f,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c28c87-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.237 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.238 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.251 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30c28c87-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.252 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap30c28c87-45, col_values=(('external_ids', {'iface-id': '30c28c87-45b1-43e9-930b-c8ba5142286f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:ec:03', 'vm-uuid': 'e5b13156-71d2-4a9c-be63-1beebe1ca3fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:39:07 compute-0 NetworkManager[44885]: <info>  [1760434747.2554] manager: (tap30c28c87-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/666)
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.263 2 INFO os_vif [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:ec:03,bridge_name='br-int',has_traffic_filtering=True,id=30c28c87-45b1-43e9-930b-c8ba5142286f,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c28c87-45')
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.332 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.333 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.333 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:b7:ec:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.334 2 INFO nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Using config drive
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.366 2 DEBUG nova.storage.rbd_utils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:39:07 compute-0 pensive_euler[415156]: {
Oct 14 09:39:07 compute-0 pensive_euler[415156]:     "0": [
Oct 14 09:39:07 compute-0 pensive_euler[415156]:         {
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "devices": [
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "/dev/loop3"
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             ],
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "lv_name": "ceph_lv0",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "lv_size": "21470642176",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "name": "ceph_lv0",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "tags": {
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.cluster_name": "ceph",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.crush_device_class": "",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.encrypted": "0",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.osd_id": "0",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.type": "block",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.vdo": "0"
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             },
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "type": "block",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "vg_name": "ceph_vg0"
Oct 14 09:39:07 compute-0 pensive_euler[415156]:         }
Oct 14 09:39:07 compute-0 pensive_euler[415156]:     ],
Oct 14 09:39:07 compute-0 pensive_euler[415156]:     "1": [
Oct 14 09:39:07 compute-0 pensive_euler[415156]:         {
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "devices": [
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "/dev/loop4"
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             ],
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "lv_name": "ceph_lv1",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "lv_size": "21470642176",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "name": "ceph_lv1",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "tags": {
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.cluster_name": "ceph",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.crush_device_class": "",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.encrypted": "0",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.osd_id": "1",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.type": "block",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.vdo": "0"
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             },
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "type": "block",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "vg_name": "ceph_vg1"
Oct 14 09:39:07 compute-0 pensive_euler[415156]:         }
Oct 14 09:39:07 compute-0 pensive_euler[415156]:     ],
Oct 14 09:39:07 compute-0 pensive_euler[415156]:     "2": [
Oct 14 09:39:07 compute-0 pensive_euler[415156]:         {
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "devices": [
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "/dev/loop5"
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             ],
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "lv_name": "ceph_lv2",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "lv_size": "21470642176",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "name": "ceph_lv2",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "tags": {
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.cluster_name": "ceph",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.crush_device_class": "",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.encrypted": "0",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.osd_id": "2",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.type": "block",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:                 "ceph.vdo": "0"
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             },
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "type": "block",
Oct 14 09:39:07 compute-0 pensive_euler[415156]:             "vg_name": "ceph_vg2"
Oct 14 09:39:07 compute-0 pensive_euler[415156]:         }
Oct 14 09:39:07 compute-0 pensive_euler[415156]:     ]
Oct 14 09:39:07 compute-0 pensive_euler[415156]: }
Oct 14 09:39:07 compute-0 systemd[1]: libpod-26e6b8ad26365786ad268e1e038c0e863dfff3af54c2056ea809caa91d2ea2de.scope: Deactivated successfully.
Oct 14 09:39:07 compute-0 podman[415140]: 2025-10-14 09:39:07.568790598 +0000 UTC m=+1.019677624 container died 26e6b8ad26365786ad268e1e038c0e863dfff3af54c2056ea809caa91d2ea2de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 09:39:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-edef90d78b08b5e5c93723a472847facfaf70a021dccab1549b268970c5a68bf-merged.mount: Deactivated successfully.
Oct 14 09:39:07 compute-0 podman[415140]: 2025-10-14 09:39:07.633808354 +0000 UTC m=+1.084695350 container remove 26e6b8ad26365786ad268e1e038c0e863dfff3af54c2056ea809caa91d2ea2de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:39:07 compute-0 systemd[1]: libpod-conmon-26e6b8ad26365786ad268e1e038c0e863dfff3af54c2056ea809caa91d2ea2de.scope: Deactivated successfully.
Oct 14 09:39:07 compute-0 sudo[415019]: pam_unix(sudo:session): session closed for user root
Oct 14 09:39:07 compute-0 sudo[415241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:39:07 compute-0 sudo[415241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:39:07 compute-0 sudo[415241]: pam_unix(sudo:session): session closed for user root
Oct 14 09:39:07 compute-0 sudo[415266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:39:07 compute-0 sudo[415266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:39:07 compute-0 sudo[415266]: pam_unix(sudo:session): session closed for user root
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.834 2 INFO nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Creating config drive at /var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb/disk.config
Oct 14 09:39:07 compute-0 nova_compute[259627]: 2025-10-14 09:39:07.847 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp20xiwnlm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:39:07 compute-0 sudo[415291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:39:07 compute-0 sudo[415291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:39:07 compute-0 sudo[415291]: pam_unix(sudo:session): session closed for user root
Oct 14 09:39:07 compute-0 ceph-mon[74249]: pgmap v2579: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:39:07 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2103820873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:39:07 compute-0 sudo[415317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:39:07 compute-0 sudo[415317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.003 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp20xiwnlm" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.044 2 DEBUG nova.storage.rbd_utils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.049 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb/disk.config e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:39:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.246 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb/disk.config e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.248 2 INFO nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Deleting local config drive /var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb/disk.config because it was imported into RBD.
Oct 14 09:39:08 compute-0 kernel: tap30c28c87-45: entered promiscuous mode
Oct 14 09:39:08 compute-0 NetworkManager[44885]: <info>  [1760434748.3198] manager: (tap30c28c87-45): new Tun device (/org/freedesktop/NetworkManager/Devices/667)
Oct 14 09:39:08 compute-0 ovn_controller[152662]: 2025-10-14T09:39:08Z|01631|binding|INFO|Claiming lport 30c28c87-45b1-43e9-930b-c8ba5142286f for this chassis.
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:08 compute-0 ovn_controller[152662]: 2025-10-14T09:39:08Z|01632|binding|INFO|30c28c87-45b1-43e9-930b-c8ba5142286f: Claiming fa:16:3e:b7:ec:03 10.100.0.7 2001:db8:0:1:f816:3eff:feb7:ec03 2001:db8::f816:3eff:feb7:ec03
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.337 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:ec:03 10.100.0.7 2001:db8:0:1:f816:3eff:feb7:ec03 2001:db8::f816:3eff:feb7:ec03'], port_security=['fa:16:3e:b7:ec:03 10.100.0.7 2001:db8:0:1:f816:3eff:feb7:ec03 2001:db8::f816:3eff:feb7:ec03'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8:0:1:f816:3eff:feb7:ec03/64 2001:db8::f816:3eff:feb7:ec03/64', 'neutron:device_id': 'e5b13156-71d2-4a9c-be63-1beebe1ca3fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14196b9b-0205-497b-9e98-32690613a533', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '75fd0641-e399-4e8e-872e-ceab82cd0201', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d75e0f-ba6b-4079-b571-32cab0870048, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=30c28c87-45b1-43e9-930b-c8ba5142286f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.339 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 30c28c87-45b1-43e9-930b-c8ba5142286f in datapath 14196b9b-0205-497b-9e98-32690613a533 bound to our chassis
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.340 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14196b9b-0205-497b-9e98-32690613a533
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.352 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[90567eda-48c7-41cf-b710-42e4b905c6a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.353 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14196b9b-01 in ovnmeta-14196b9b-0205-497b-9e98-32690613a533 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:39:08 compute-0 systemd-udevd[415443]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.357 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14196b9b-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.357 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c04787de-0c7c-4e46-b332-19e880237d43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.358 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e85c4ec6-2a90-4d12-85a1-92ee8b9c46c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:08 compute-0 podman[415426]: 2025-10-14 09:39:08.361480711 +0000 UTC m=+0.053483024 container create 9a9a5b73a9fc2afd3762c512f9fb90dc3c3cb20a180907cfde9c52a1827e3b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hawking, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 14 09:39:08 compute-0 NetworkManager[44885]: <info>  [1760434748.3695] device (tap30c28c87-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.369 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[a90748c4-f2c5-4abb-a199-a18b4723e6c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:08 compute-0 NetworkManager[44885]: <info>  [1760434748.3703] device (tap30c28c87-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.370 2 DEBUG nova.network.neutron [req-821d96d6-c345-4c93-b0eb-c2494b300a41 req-505da766-619e-47ce-80b8-e7111a02aacb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updated VIF entry in instance network info cache for port 30c28c87-45b1-43e9-930b-c8ba5142286f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.371 2 DEBUG nova.network.neutron [req-821d96d6-c345-4c93-b0eb-c2494b300a41 req-505da766-619e-47ce-80b8-e7111a02aacb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updating instance_info_cache with network_info: [{"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:39:08 compute-0 systemd-machined[214636]: New machine qemu-181-instance-00000094.
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.387 2 DEBUG oslo_concurrency.lockutils [req-821d96d6-c345-4c93-b0eb-c2494b300a41 req-505da766-619e-47ce-80b8-e7111a02aacb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.396 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4634e0ca-07d9-48be-aa4b-b3f01539364e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:08 compute-0 ovn_controller[152662]: 2025-10-14T09:39:08Z|01633|binding|INFO|Setting lport 30c28c87-45b1-43e9-930b-c8ba5142286f ovn-installed in OVS
Oct 14 09:39:08 compute-0 ovn_controller[152662]: 2025-10-14T09:39:08Z|01634|binding|INFO|Setting lport 30c28c87-45b1-43e9-930b-c8ba5142286f up in Southbound
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:08 compute-0 systemd[1]: Started libpod-conmon-9a9a5b73a9fc2afd3762c512f9fb90dc3c3cb20a180907cfde9c52a1827e3b80.scope.
Oct 14 09:39:08 compute-0 systemd[1]: Started Virtual Machine qemu-181-instance-00000094.
Oct 14 09:39:08 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.425 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[468130c5-bea1-4f85-a1a0-32f71b106f02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:08 compute-0 podman[415426]: 2025-10-14 09:39:08.337875052 +0000 UTC m=+0.029877385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:39:08 compute-0 systemd-udevd[415450]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:39:08 compute-0 NetworkManager[44885]: <info>  [1760434748.4337] manager: (tap14196b9b-00): new Veth device (/org/freedesktop/NetworkManager/Devices/668)
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.433 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[70c51889-4934-4ad7-aae9-22cc9ebb4348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:08 compute-0 podman[415426]: 2025-10-14 09:39:08.464899919 +0000 UTC m=+0.156902272 container init 9a9a5b73a9fc2afd3762c512f9fb90dc3c3cb20a180907cfde9c52a1827e3b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 09:39:08 compute-0 podman[415426]: 2025-10-14 09:39:08.472884385 +0000 UTC m=+0.164886688 container start 9a9a5b73a9fc2afd3762c512f9fb90dc3c3cb20a180907cfde9c52a1827e3b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:39:08 compute-0 unruffled_hawking[415456]: 167 167
Oct 14 09:39:08 compute-0 systemd[1]: libpod-9a9a5b73a9fc2afd3762c512f9fb90dc3c3cb20a180907cfde9c52a1827e3b80.scope: Deactivated successfully.
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.479 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[57ec4661-b782-4453-b20a-9f431e555ef9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.483 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7a238d-49ee-4fa4-97a4-47d452f88f65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:08 compute-0 NetworkManager[44885]: <info>  [1760434748.5143] device (tap14196b9b-00): carrier: link connected
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.521 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9f48eedd-b5de-4638-865d-ec6716102964]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.545 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2a812250-6c2b-496d-a0e1-cb9a86852fc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14196b9b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:47:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852727, 'reachable_time': 21450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 415498, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:08 compute-0 podman[415426]: 2025-10-14 09:39:08.548817228 +0000 UTC m=+0.240819541 container attach 9a9a5b73a9fc2afd3762c512f9fb90dc3c3cb20a180907cfde9c52a1827e3b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hawking, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 09:39:08 compute-0 podman[415426]: 2025-10-14 09:39:08.549112265 +0000 UTC m=+0.241114568 container died 9a9a5b73a9fc2afd3762c512f9fb90dc3c3cb20a180907cfde9c52a1827e3b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.564 2 DEBUG nova.compute.manager [req-51be09af-0feb-429a-830a-d8c565d6307d req-f90319e0-2bb7-402f-b505-341748806f1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received event network-vif-plugged-30c28c87-45b1-43e9-930b-c8ba5142286f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.564 2 DEBUG oslo_concurrency.lockutils [req-51be09af-0feb-429a-830a-d8c565d6307d req-f90319e0-2bb7-402f-b505-341748806f1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.564 2 DEBUG oslo_concurrency.lockutils [req-51be09af-0feb-429a-830a-d8c565d6307d req-f90319e0-2bb7-402f-b505-341748806f1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.565 2 DEBUG oslo_concurrency.lockutils [req-51be09af-0feb-429a-830a-d8c565d6307d req-f90319e0-2bb7-402f-b505-341748806f1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.565 2 DEBUG nova.compute.manager [req-51be09af-0feb-429a-830a-d8c565d6307d req-f90319e0-2bb7-402f-b505-341748806f1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Processing event network-vif-plugged-30c28c87-45b1-43e9-930b-c8ba5142286f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.567 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b14fca0c-551a-41c8-a5ba-0621999488c8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:47b8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 852727, 'tstamp': 852727}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415499, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.583 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ef6721f9-5bbe-4572-8fea-7bd661022724]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14196b9b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:47:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852727, 'reachable_time': 21450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 415500, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-81b73908ea85983b88e26f247867240613e4ef8d37bcf191cc6d3cb56781cb7a-merged.mount: Deactivated successfully.
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.618 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b382a7ef-f775-460e-aa0b-29539ea84776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:08 compute-0 podman[415426]: 2025-10-14 09:39:08.639558495 +0000 UTC m=+0.331560798 container remove 9a9a5b73a9fc2afd3762c512f9fb90dc3c3cb20a180907cfde9c52a1827e3b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 14 09:39:08 compute-0 systemd[1]: libpod-conmon-9a9a5b73a9fc2afd3762c512f9fb90dc3c3cb20a180907cfde9c52a1827e3b80.scope: Deactivated successfully.
Oct 14 09:39:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.706 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7278b2f2-096a-4357-b9df-0b8bd06e7393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.708 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14196b9b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.709 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.709 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14196b9b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:08 compute-0 NetworkManager[44885]: <info>  [1760434748.7121] manager: (tap14196b9b-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/669)
Oct 14 09:39:08 compute-0 kernel: tap14196b9b-00: entered promiscuous mode
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.719 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14196b9b-00, col_values=(('external_ids', {'iface-id': '500514c8-ea10-48c5-93d8-b1c6948e60b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.724 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14196b9b-0205-497b-9e98-32690613a533.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14196b9b-0205-497b-9e98-32690613a533.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.724 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1a38b403-f2ff-4797-bc07-8ca2e22b3295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.726 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-14196b9b-0205-497b-9e98-32690613a533
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/14196b9b-0205-497b-9e98-32690613a533.pid.haproxy
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID 14196b9b-0205-497b-9e98-32690613a533
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:39:08 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.727 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'env', 'PROCESS_TAG=haproxy-14196b9b-0205-497b-9e98-32690613a533', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14196b9b-0205-497b-9e98-32690613a533.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:39:08 compute-0 ovn_controller[152662]: 2025-10-14T09:39:08Z|01635|binding|INFO|Releasing lport 500514c8-ea10-48c5-93d8-b1c6948e60b0 from this chassis (sb_readonly=0)
Oct 14 09:39:08 compute-0 nova_compute[259627]: 2025-10-14 09:39:08.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:08 compute-0 podman[415518]: 2025-10-14 09:39:08.87167629 +0000 UTC m=+0.083655344 container create 2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_diffie, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 09:39:08 compute-0 podman[415518]: 2025-10-14 09:39:08.809796522 +0000 UTC m=+0.021775576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:39:08 compute-0 systemd[1]: Started libpod-conmon-2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b.scope.
Oct 14 09:39:08 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffee12450558ccc3c7fc13d1890dc8b9fefb0b59236acf0616a4e781bfb8b9e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffee12450558ccc3c7fc13d1890dc8b9fefb0b59236acf0616a4e781bfb8b9e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffee12450558ccc3c7fc13d1890dc8b9fefb0b59236acf0616a4e781bfb8b9e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffee12450558ccc3c7fc13d1890dc8b9fefb0b59236acf0616a4e781bfb8b9e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:39:08 compute-0 podman[415518]: 2025-10-14 09:39:08.95847033 +0000 UTC m=+0.170449394 container init 2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_diffie, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:39:08 compute-0 podman[415518]: 2025-10-14 09:39:08.967500442 +0000 UTC m=+0.179479486 container start 2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 14 09:39:08 compute-0 podman[415518]: 2025-10-14 09:39:08.970486735 +0000 UTC m=+0.182465799 container attach 2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_diffie, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:39:09 compute-0 podman[415604]: 2025-10-14 09:39:09.076976968 +0000 UTC m=+0.042852762 container create da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 09:39:09 compute-0 systemd[1]: Started libpod-conmon-da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d.scope.
Oct 14 09:39:09 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579e65901c3d9167dcba11f2ead71e8144552c5f2bb2a88cb78fc270c6ac4ad0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:39:09 compute-0 podman[415604]: 2025-10-14 09:39:09.054567028 +0000 UTC m=+0.020442842 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:39:09 compute-0 podman[415604]: 2025-10-14 09:39:09.151519288 +0000 UTC m=+0.117395092 container init da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:39:09 compute-0 podman[415604]: 2025-10-14 09:39:09.15651079 +0000 UTC m=+0.122386584 container start da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:39:09 compute-0 neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533[415619]: [NOTICE]   (415623) : New worker (415625) forked
Oct 14 09:39:09 compute-0 neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533[415619]: [NOTICE]   (415623) : Loading success.
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.447 2 DEBUG nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.448 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434749.4472356, e5b13156-71d2-4a9c-be63-1beebe1ca3fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.448 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] VM Started (Lifecycle Event)
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.453 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.456 2 INFO nova.virt.libvirt.driver [-] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Instance spawned successfully.
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.457 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.475 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.481 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.485 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.486 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.486 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.486 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.487 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.487 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.523 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.523 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434749.4481535, e5b13156-71d2-4a9c-be63-1beebe1ca3fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.524 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] VM Paused (Lifecycle Event)
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.550 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.552 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434749.4525728, e5b13156-71d2-4a9c-be63-1beebe1ca3fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.552 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] VM Resumed (Lifecycle Event)
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.562 2 INFO nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Took 9.69 seconds to spawn the instance on the hypervisor.
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.563 2 DEBUG nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.590 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.594 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.625 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.640 2 INFO nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Took 10.85 seconds to build instance.
Oct 14 09:39:09 compute-0 nova_compute[259627]: 2025-10-14 09:39:09.659 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:39:09 compute-0 ceph-mon[74249]: pgmap v2580: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]: {
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:         "osd_id": 2,
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:         "type": "bluestore"
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:     },
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:         "osd_id": 1,
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:         "type": "bluestore"
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:     },
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:         "osd_id": 0,
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:         "type": "bluestore"
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]:     }
Oct 14 09:39:09 compute-0 hopeful_diffie[415572]: }
Oct 14 09:39:09 compute-0 systemd[1]: libpod-2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b.scope: Deactivated successfully.
Oct 14 09:39:09 compute-0 systemd[1]: libpod-2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b.scope: Consumed 1.009s CPU time.
Oct 14 09:39:09 compute-0 podman[415518]: 2025-10-14 09:39:09.990194819 +0000 UTC m=+1.202173863 container died 2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:39:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffee12450558ccc3c7fc13d1890dc8b9fefb0b59236acf0616a4e781bfb8b9e2-merged.mount: Deactivated successfully.
Oct 14 09:39:10 compute-0 podman[415518]: 2025-10-14 09:39:10.049974826 +0000 UTC m=+1.261953870 container remove 2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_diffie, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:39:10 compute-0 systemd[1]: libpod-conmon-2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b.scope: Deactivated successfully.
Oct 14 09:39:10 compute-0 sudo[415317]: pam_unix(sudo:session): session closed for user root
Oct 14 09:39:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:39:10 compute-0 podman[415671]: 2025-10-14 09:39:10.089779583 +0000 UTC m=+0.070381099 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 09:39:10 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:39:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:39:10 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:39:10 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev ec160650-3a95-4185-ad10-a9f8cda3e817 does not exist
Oct 14 09:39:10 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 855ddc89-e09d-406e-8ad3-d2a7d937523f does not exist
Oct 14 09:39:10 compute-0 podman[415663]: 2025-10-14 09:39:10.114877239 +0000 UTC m=+0.093887425 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:39:10 compute-0 sudo[415713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:39:10 compute-0 sudo[415713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:39:10 compute-0 sudo[415713]: pam_unix(sudo:session): session closed for user root
Oct 14 09:39:10 compute-0 sudo[415739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:39:10 compute-0 sudo[415739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:39:10 compute-0 sudo[415739]: pam_unix(sudo:session): session closed for user root
Oct 14 09:39:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 252 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 14 09:39:10 compute-0 nova_compute[259627]: 2025-10-14 09:39:10.681 2 DEBUG nova.compute.manager [req-fc9fb591-a942-497e-a633-656b0cb828fe req-29dbd63f-c1cd-45d6-80cc-a63d601d67c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received event network-vif-plugged-30c28c87-45b1-43e9-930b-c8ba5142286f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:39:10 compute-0 nova_compute[259627]: 2025-10-14 09:39:10.681 2 DEBUG oslo_concurrency.lockutils [req-fc9fb591-a942-497e-a633-656b0cb828fe req-29dbd63f-c1cd-45d6-80cc-a63d601d67c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:39:10 compute-0 nova_compute[259627]: 2025-10-14 09:39:10.681 2 DEBUG oslo_concurrency.lockutils [req-fc9fb591-a942-497e-a633-656b0cb828fe req-29dbd63f-c1cd-45d6-80cc-a63d601d67c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:39:10 compute-0 nova_compute[259627]: 2025-10-14 09:39:10.682 2 DEBUG oslo_concurrency.lockutils [req-fc9fb591-a942-497e-a633-656b0cb828fe req-29dbd63f-c1cd-45d6-80cc-a63d601d67c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:39:10 compute-0 nova_compute[259627]: 2025-10-14 09:39:10.682 2 DEBUG nova.compute.manager [req-fc9fb591-a942-497e-a633-656b0cb828fe req-29dbd63f-c1cd-45d6-80cc-a63d601d67c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] No waiting events found dispatching network-vif-plugged-30c28c87-45b1-43e9-930b-c8ba5142286f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:39:10 compute-0 nova_compute[259627]: 2025-10-14 09:39:10.682 2 WARNING nova.compute.manager [req-fc9fb591-a942-497e-a633-656b0cb828fe req-29dbd63f-c1cd-45d6-80cc-a63d601d67c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received unexpected event network-vif-plugged-30c28c87-45b1-43e9-930b-c8ba5142286f for instance with vm_state active and task_state None.
Oct 14 09:39:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:39:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:39:11 compute-0 nova_compute[259627]: 2025-10-14 09:39:11.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:11 compute-0 nova_compute[259627]: 2025-10-14 09:39:11.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:39:12 compute-0 ceph-mon[74249]: pgmap v2581: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 252 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 14 09:39:12 compute-0 nova_compute[259627]: 2025-10-14 09:39:12.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 251 KiB/s rd, 1.6 MiB/s wr, 43 op/s
Oct 14 09:39:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:39:13 compute-0 ceph-mon[74249]: pgmap v2582: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 251 KiB/s rd, 1.6 MiB/s wr, 43 op/s
Oct 14 09:39:14 compute-0 ovn_controller[152662]: 2025-10-14T09:39:14Z|01636|binding|INFO|Releasing lport 500514c8-ea10-48c5-93d8-b1c6948e60b0 from this chassis (sb_readonly=0)
Oct 14 09:39:14 compute-0 NetworkManager[44885]: <info>  [1760434754.3804] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/670)
Oct 14 09:39:14 compute-0 NetworkManager[44885]: <info>  [1760434754.3832] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/671)
Oct 14 09:39:14 compute-0 nova_compute[259627]: 2025-10-14 09:39:14.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:14 compute-0 ovn_controller[152662]: 2025-10-14T09:39:14Z|01637|binding|INFO|Releasing lport 500514c8-ea10-48c5-93d8-b1c6948e60b0 from this chassis (sb_readonly=0)
Oct 14 09:39:14 compute-0 nova_compute[259627]: 2025-10-14 09:39:14.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:14 compute-0 nova_compute[259627]: 2025-10-14 09:39:14.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 251 KiB/s rd, 1.6 MiB/s wr, 43 op/s
Oct 14 09:39:14 compute-0 nova_compute[259627]: 2025-10-14 09:39:14.734 2 DEBUG nova.compute.manager [req-f073202e-225b-430a-ab22-c6f5cc0a3c40 req-6ce77fbc-7ff3-4155-8b7d-b389569bfba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received event network-changed-30c28c87-45b1-43e9-930b-c8ba5142286f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:39:14 compute-0 nova_compute[259627]: 2025-10-14 09:39:14.735 2 DEBUG nova.compute.manager [req-f073202e-225b-430a-ab22-c6f5cc0a3c40 req-6ce77fbc-7ff3-4155-8b7d-b389569bfba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Refreshing instance network info cache due to event network-changed-30c28c87-45b1-43e9-930b-c8ba5142286f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:39:14 compute-0 nova_compute[259627]: 2025-10-14 09:39:14.735 2 DEBUG oslo_concurrency.lockutils [req-f073202e-225b-430a-ab22-c6f5cc0a3c40 req-6ce77fbc-7ff3-4155-8b7d-b389569bfba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:39:14 compute-0 nova_compute[259627]: 2025-10-14 09:39:14.736 2 DEBUG oslo_concurrency.lockutils [req-f073202e-225b-430a-ab22-c6f5cc0a3c40 req-6ce77fbc-7ff3-4155-8b7d-b389569bfba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:39:14 compute-0 nova_compute[259627]: 2025-10-14 09:39:14.736 2 DEBUG nova.network.neutron [req-f073202e-225b-430a-ab22-c6f5cc0a3c40 req-6ce77fbc-7ff3-4155-8b7d-b389569bfba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Refreshing network info cache for port 30c28c87-45b1-43e9-930b-c8ba5142286f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:39:15 compute-0 ceph-mon[74249]: pgmap v2583: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 251 KiB/s rd, 1.6 MiB/s wr, 43 op/s
Oct 14 09:39:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 98 op/s
Oct 14 09:39:16 compute-0 nova_compute[259627]: 2025-10-14 09:39:16.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:17 compute-0 nova_compute[259627]: 2025-10-14 09:39:17.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:17 compute-0 nova_compute[259627]: 2025-10-14 09:39:17.355 2 DEBUG nova.network.neutron [req-f073202e-225b-430a-ab22-c6f5cc0a3c40 req-6ce77fbc-7ff3-4155-8b7d-b389569bfba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updated VIF entry in instance network info cache for port 30c28c87-45b1-43e9-930b-c8ba5142286f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:39:17 compute-0 nova_compute[259627]: 2025-10-14 09:39:17.356 2 DEBUG nova.network.neutron [req-f073202e-225b-430a-ab22-c6f5cc0a3c40 req-6ce77fbc-7ff3-4155-8b7d-b389569bfba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updating instance_info_cache with network_info: [{"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:39:17 compute-0 nova_compute[259627]: 2025-10-14 09:39:17.380 2 DEBUG oslo_concurrency.lockutils [req-f073202e-225b-430a-ab22-c6f5cc0a3c40 req-6ce77fbc-7ff3-4155-8b7d-b389569bfba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:39:17 compute-0 ceph-mon[74249]: pgmap v2584: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 98 op/s
Oct 14 09:39:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:39:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:39:19 compute-0 ceph-mon[74249]: pgmap v2585: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:39:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 109 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Oct 14 09:39:21 compute-0 ovn_controller[152662]: 2025-10-14T09:39:21Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:ec:03 10.100.0.7
Oct 14 09:39:21 compute-0 ovn_controller[152662]: 2025-10-14T09:39:21Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:ec:03 10.100.0.7
Oct 14 09:39:21 compute-0 nova_compute[259627]: 2025-10-14 09:39:21.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:21 compute-0 ceph-mon[74249]: pgmap v2586: 305 pgs: 305 active+clean; 109 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Oct 14 09:39:22 compute-0 nova_compute[259627]: 2025-10-14 09:39:22.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 109 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 97 op/s
Oct 14 09:39:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:39:23 compute-0 ceph-mon[74249]: pgmap v2587: 305 pgs: 305 active+clean; 109 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 97 op/s
Oct 14 09:39:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 305 active+clean; 109 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 97 op/s
Oct 14 09:39:24 compute-0 nova_compute[259627]: 2025-10-14 09:39:24.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:39:24 compute-0 nova_compute[259627]: 2025-10-14 09:39:24.993 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 14 09:39:25 compute-0 nova_compute[259627]: 2025-10-14 09:39:25.014 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 14 09:39:25 compute-0 podman[415766]: 2025-10-14 09:39:25.723118235 +0000 UTC m=+0.090156194 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 14 09:39:25 compute-0 podman[415765]: 2025-10-14 09:39:25.735507359 +0000 UTC m=+0.100670812 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:39:25 compute-0 ceph-mon[74249]: pgmap v2588: 305 pgs: 305 active+clean; 109 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 97 op/s
Oct 14 09:39:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Oct 14 09:39:26 compute-0 nova_compute[259627]: 2025-10-14 09:39:26.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:27 compute-0 nova_compute[259627]: 2025-10-14 09:39:27.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:27 compute-0 ceph-mon[74249]: pgmap v2589: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Oct 14 09:39:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:39:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:39:29 compute-0 ceph-mon[74249]: pgmap v2590: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:39:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:39:31 compute-0 nova_compute[259627]: 2025-10-14 09:39:31.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:31 compute-0 ceph-mon[74249]: pgmap v2591: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:39:32 compute-0 nova_compute[259627]: 2025-10-14 09:39:32.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 107 KiB/s wr, 21 op/s
Oct 14 09:39:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:39:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:39:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:39:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:39:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:39:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:39:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:39:32
Oct 14 09:39:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:39:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:39:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.mgr', 'volumes', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data']
Oct 14 09:39:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:39:33 compute-0 nova_compute[259627]: 2025-10-14 09:39:33.127 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:39:33 compute-0 nova_compute[259627]: 2025-10-14 09:39:33.128 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:39:33 compute-0 nova_compute[259627]: 2025-10-14 09:39:33.152 2 DEBUG nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:39:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:39:33 compute-0 nova_compute[259627]: 2025-10-14 09:39:33.277 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:39:33 compute-0 nova_compute[259627]: 2025-10-14 09:39:33.278 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:39:33 compute-0 nova_compute[259627]: 2025-10-14 09:39:33.291 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:39:33 compute-0 nova_compute[259627]: 2025-10-14 09:39:33.292 2 INFO nova.compute.claims [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:39:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:39:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:39:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:39:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:39:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:39:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:39:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:39:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:39:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:39:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:39:33 compute-0 nova_compute[259627]: 2025-10-14 09:39:33.538 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:39:33 compute-0 ceph-mon[74249]: pgmap v2592: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 107 KiB/s wr, 21 op/s
Oct 14 09:39:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:39:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2472284329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.045 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.052 2 DEBUG nova.compute.provider_tree [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.076 2 DEBUG nova.scheduler.client.report [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.101 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.102 2 DEBUG nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.163 2 DEBUG nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.163 2 DEBUG nova.network.neutron [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.186 2 INFO nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.204 2 DEBUG nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.336 2 DEBUG nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.337 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.338 2 INFO nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Creating image(s)
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.360 2 DEBUG nova.storage.rbd_utils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.382 2 DEBUG nova.storage.rbd_utils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.402 2 DEBUG nova.storage.rbd_utils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.405 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.455 2 DEBUG nova.policy [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.507 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.507 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.508 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.508 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.529 2 DEBUG nova.storage.rbd_utils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.532 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:39:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 107 KiB/s wr, 21 op/s
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.795 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:39:34 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2472284329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.876 2 DEBUG nova.storage.rbd_utils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.975 2 DEBUG nova.objects.instance [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.994 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.995 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Ensure instance console log exists: /var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.996 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.997 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:39:34 compute-0 nova_compute[259627]: 2025-10-14 09:39:34.998 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:39:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:35.396 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:39:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:35.398 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:39:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:35.399 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:39:35 compute-0 nova_compute[259627]: 2025-10-14 09:39:35.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:35 compute-0 nova_compute[259627]: 2025-10-14 09:39:35.525 2 DEBUG nova.network.neutron [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Successfully created port: e8b3ac7d-3adf-47a0-8a80-7a5692a145de _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:39:35 compute-0 ceph-mon[74249]: pgmap v2593: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 107 KiB/s wr, 21 op/s
Oct 14 09:39:36 compute-0 nova_compute[259627]: 2025-10-14 09:39:36.192 2 DEBUG nova.network.neutron [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Successfully updated port: e8b3ac7d-3adf-47a0-8a80-7a5692a145de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:39:36 compute-0 nova_compute[259627]: 2025-10-14 09:39:36.211 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:39:36 compute-0 nova_compute[259627]: 2025-10-14 09:39:36.211 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:39:36 compute-0 nova_compute[259627]: 2025-10-14 09:39:36.211 2 DEBUG nova.network.neutron [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:39:36 compute-0 nova_compute[259627]: 2025-10-14 09:39:36.275 2 DEBUG nova.compute.manager [req-b20d6d03-3d05-4b5a-b005-abfc5aa63de9 req-4a337080-fc32-4064-9cfb-85f8aa74d88d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received event network-changed-e8b3ac7d-3adf-47a0-8a80-7a5692a145de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:39:36 compute-0 nova_compute[259627]: 2025-10-14 09:39:36.276 2 DEBUG nova.compute.manager [req-b20d6d03-3d05-4b5a-b005-abfc5aa63de9 req-4a337080-fc32-4064-9cfb-85f8aa74d88d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Refreshing instance network info cache due to event network-changed-e8b3ac7d-3adf-47a0-8a80-7a5692a145de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:39:36 compute-0 nova_compute[259627]: 2025-10-14 09:39:36.276 2 DEBUG oslo_concurrency.lockutils [req-b20d6d03-3d05-4b5a-b005-abfc5aa63de9 req-4a337080-fc32-4064-9cfb-85f8aa74d88d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:39:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 109 KiB/s rd, 1.9 MiB/s wr, 48 op/s
Oct 14 09:39:36 compute-0 nova_compute[259627]: 2025-10-14 09:39:36.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:37 compute-0 nova_compute[259627]: 2025-10-14 09:39:37.129 2 DEBUG nova.network.neutron [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:39:37 compute-0 nova_compute[259627]: 2025-10-14 09:39:37.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:37 compute-0 ceph-mon[74249]: pgmap v2594: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 109 KiB/s rd, 1.9 MiB/s wr, 48 op/s
Oct 14 09:39:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.410 2 DEBUG nova.network.neutron [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Updating instance_info_cache with network_info: [{"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.445 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.446 2 DEBUG nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Instance network_info: |[{"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.447 2 DEBUG oslo_concurrency.lockutils [req-b20d6d03-3d05-4b5a-b005-abfc5aa63de9 req-4a337080-fc32-4064-9cfb-85f8aa74d88d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.447 2 DEBUG nova.network.neutron [req-b20d6d03-3d05-4b5a-b005-abfc5aa63de9 req-4a337080-fc32-4064-9cfb-85f8aa74d88d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Refreshing network info cache for port e8b3ac7d-3adf-47a0-8a80-7a5692a145de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.453 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Start _get_guest_xml network_info=[{"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.461 2 WARNING nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.467 2 DEBUG nova.virt.libvirt.host [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.468 2 DEBUG nova.virt.libvirt.host [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.477 2 DEBUG nova.virt.libvirt.host [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.478 2 DEBUG nova.virt.libvirt.host [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.479 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.480 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.480 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.481 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.481 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.482 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.482 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.483 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.483 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.484 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.484 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.485 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.489 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:39:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:39:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:39:38 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3351782841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:39:38 compute-0 nova_compute[259627]: 2025-10-14 09:39:38.993 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.015 2 DEBUG nova.storage.rbd_utils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.019 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:39:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:39:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2661788815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.489 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.491 2 DEBUG nova.virt.libvirt.vif [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-311340838',display_name='tempest-TestGettingAddress-server-311340838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-311340838',id=149,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAMDf2XavD9/IfCp+ndZfV3AdZCUIhTR2npOL1XNIlUaFyzTUoWv4qmqvpPpwdd1PNzx9cK/19FXxs4psOVoqPZEBFGbKyJdIety1giFsXf8LSdMC27Wpk9aHw8ObIpVuA==',key_name='tempest-TestGettingAddress-1135497915',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-0evlaun0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:39:34Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.491 2 DEBUG nova.network.os_vif_util [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.492 2 DEBUG nova.network.os_vif_util [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:82:6b,bridge_name='br-int',has_traffic_filtering=True,id=e8b3ac7d-3adf-47a0-8a80-7a5692a145de,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8b3ac7d-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.494 2 DEBUG nova.objects.instance [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.513 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:39:39 compute-0 nova_compute[259627]:   <uuid>3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed</uuid>
Oct 14 09:39:39 compute-0 nova_compute[259627]:   <name>instance-00000095</name>
Oct 14 09:39:39 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:39:39 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:39:39 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <nova:name>tempest-TestGettingAddress-server-311340838</nova:name>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:39:38</nova:creationTime>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:39:39 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:39:39 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:39:39 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:39:39 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:39:39 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:39:39 compute-0 nova_compute[259627]:         <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 09:39:39 compute-0 nova_compute[259627]:         <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:39:39 compute-0 nova_compute[259627]:         <nova:port uuid="e8b3ac7d-3adf-47a0-8a80-7a5692a145de">
Oct 14 09:39:39 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fee7:826b" ipVersion="6"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fee7:826b" ipVersion="6"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:39:39 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:39:39 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <system>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <entry name="serial">3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed</entry>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <entry name="uuid">3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed</entry>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     </system>
Oct 14 09:39:39 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:39:39 compute-0 nova_compute[259627]:   <os>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:   </os>
Oct 14 09:39:39 compute-0 nova_compute[259627]:   <features>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:   </features>
Oct 14 09:39:39 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:39:39 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:39:39 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk">
Oct 14 09:39:39 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       </source>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:39:39 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk.config">
Oct 14 09:39:39 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       </source>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:39:39 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:e7:82:6b"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <target dev="tape8b3ac7d-3a"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed/console.log" append="off"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <video>
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     </video>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:39:39 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:39:39 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:39:39 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:39:39 compute-0 nova_compute[259627]: </domain>
Oct 14 09:39:39 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.513 2 DEBUG nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Preparing to wait for external event network-vif-plugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.514 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.514 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.514 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.515 2 DEBUG nova.virt.libvirt.vif [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-311340838',display_name='tempest-TestGettingAddress-server-311340838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-311340838',id=149,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAMDf2XavD9/IfCp+ndZfV3AdZCUIhTR2npOL1XNIlUaFyzTUoWv4qmqvpPpwdd1PNzx9cK/19FXxs4psOVoqPZEBFGbKyJdIety1giFsXf8LSdMC27Wpk9aHw8ObIpVuA==',key_name='tempest-TestGettingAddress-1135497915',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-0evlaun0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:39:34Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.515 2 DEBUG nova.network.os_vif_util [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.516 2 DEBUG nova.network.os_vif_util [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:82:6b,bridge_name='br-int',has_traffic_filtering=True,id=e8b3ac7d-3adf-47a0-8a80-7a5692a145de,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8b3ac7d-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.516 2 DEBUG os_vif [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:82:6b,bridge_name='br-int',has_traffic_filtering=True,id=e8b3ac7d-3adf-47a0-8a80-7a5692a145de,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8b3ac7d-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.518 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.521 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8b3ac7d-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.522 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape8b3ac7d-3a, col_values=(('external_ids', {'iface-id': 'e8b3ac7d-3adf-47a0-8a80-7a5692a145de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:82:6b', 'vm-uuid': '3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:39 compute-0 NetworkManager[44885]: <info>  [1760434779.5399] manager: (tape8b3ac7d-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/672)
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.548 2 INFO os_vif [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:82:6b,bridge_name='br-int',has_traffic_filtering=True,id=e8b3ac7d-3adf-47a0-8a80-7a5692a145de,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8b3ac7d-3a')
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.642 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.642 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.643 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:e7:82:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.643 2 INFO nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Using config drive
Oct 14 09:39:39 compute-0 nova_compute[259627]: 2025-10-14 09:39:39.667 2 DEBUG nova.storage.rbd_utils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:39:39 compute-0 ceph-mon[74249]: pgmap v2595: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:39:39 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3351782841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:39:39 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2661788815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:39:40 compute-0 nova_compute[259627]: 2025-10-14 09:39:40.065 2 INFO nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Creating config drive at /var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed/disk.config
Oct 14 09:39:40 compute-0 nova_compute[259627]: 2025-10-14 09:39:40.070 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfum5ym0q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:39:40 compute-0 nova_compute[259627]: 2025-10-14 09:39:40.216 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfum5ym0q" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:39:40 compute-0 nova_compute[259627]: 2025-10-14 09:39:40.256 2 DEBUG nova.storage.rbd_utils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:39:40 compute-0 nova_compute[259627]: 2025-10-14 09:39:40.261 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed/disk.config 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:39:40 compute-0 nova_compute[259627]: 2025-10-14 09:39:40.483 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed/disk.config 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:39:40 compute-0 nova_compute[259627]: 2025-10-14 09:39:40.485 2 INFO nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Deleting local config drive /var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed/disk.config because it was imported into RBD.
Oct 14 09:39:40 compute-0 kernel: tape8b3ac7d-3a: entered promiscuous mode
Oct 14 09:39:40 compute-0 NetworkManager[44885]: <info>  [1760434780.5446] manager: (tape8b3ac7d-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/673)
Oct 14 09:39:40 compute-0 nova_compute[259627]: 2025-10-14 09:39:40.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:40 compute-0 ovn_controller[152662]: 2025-10-14T09:39:40Z|01638|binding|INFO|Claiming lport e8b3ac7d-3adf-47a0-8a80-7a5692a145de for this chassis.
Oct 14 09:39:40 compute-0 ovn_controller[152662]: 2025-10-14T09:39:40Z|01639|binding|INFO|e8b3ac7d-3adf-47a0-8a80-7a5692a145de: Claiming fa:16:3e:e7:82:6b 10.100.0.10 2001:db8:0:1:f816:3eff:fee7:826b 2001:db8::f816:3eff:fee7:826b
Oct 14 09:39:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.555 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:82:6b 10.100.0.10 2001:db8:0:1:f816:3eff:fee7:826b 2001:db8::f816:3eff:fee7:826b'], port_security=['fa:16:3e:e7:82:6b 10.100.0.10 2001:db8:0:1:f816:3eff:fee7:826b 2001:db8::f816:3eff:fee7:826b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28 2001:db8:0:1:f816:3eff:fee7:826b/64 2001:db8::f816:3eff:fee7:826b/64', 'neutron:device_id': '3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14196b9b-0205-497b-9e98-32690613a533', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '75fd0641-e399-4e8e-872e-ceab82cd0201', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d75e0f-ba6b-4079-b571-32cab0870048, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e8b3ac7d-3adf-47a0-8a80-7a5692a145de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:39:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.556 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e8b3ac7d-3adf-47a0-8a80-7a5692a145de in datapath 14196b9b-0205-497b-9e98-32690613a533 bound to our chassis
Oct 14 09:39:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.557 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14196b9b-0205-497b-9e98-32690613a533
Oct 14 09:39:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.572 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c47e6c-12af-4ac2-8e7d-d34b3789f4a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:40 compute-0 ovn_controller[152662]: 2025-10-14T09:39:40Z|01640|binding|INFO|Setting lport e8b3ac7d-3adf-47a0-8a80-7a5692a145de ovn-installed in OVS
Oct 14 09:39:40 compute-0 ovn_controller[152662]: 2025-10-14T09:39:40Z|01641|binding|INFO|Setting lport e8b3ac7d-3adf-47a0-8a80-7a5692a145de up in Southbound
Oct 14 09:39:40 compute-0 nova_compute[259627]: 2025-10-14 09:39:40.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.633 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[436bb6bb-6f8a-4713-b621-e0ca8d4c6b48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:40 compute-0 systemd-machined[214636]: New machine qemu-182-instance-00000095.
Oct 14 09:39:40 compute-0 systemd-udevd[416137]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:39:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.637 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe31f5c-1c46-42a2-ac27-ed7ac8d2677c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:40 compute-0 systemd[1]: Started Virtual Machine qemu-182-instance-00000095.
Oct 14 09:39:40 compute-0 NetworkManager[44885]: <info>  [1760434780.6502] device (tape8b3ac7d-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:39:40 compute-0 NetworkManager[44885]: <info>  [1760434780.6522] device (tape8b3ac7d-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:39:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 09:39:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.675 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[35e3fda8-f035-47be-9030-53d12f8e1c94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.700 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[adbb8f83-1f6f-44b9-ada8-d6f6546f81c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14196b9b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:47:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 25, 'tx_packets': 5, 'rx_bytes': 2230, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 25, 'tx_packets': 5, 'rx_bytes': 2230, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852727, 'reachable_time': 21450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416162, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:40 compute-0 podman[416126]: 2025-10-14 09:39:40.705858107 +0000 UTC m=+0.075015132 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 14 09:39:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.717 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1427f6ee-9d17-4d5f-b7e6-f45e44c70e3e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap14196b9b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 852742, 'tstamp': 852742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416175, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14196b9b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 852745, 'tstamp': 852745}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416175, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:39:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.719 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14196b9b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:39:40 compute-0 nova_compute[259627]: 2025-10-14 09:39:40.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.721 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14196b9b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:39:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.721 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:39:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.722 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14196b9b-00, col_values=(('external_ids', {'iface-id': '500514c8-ea10-48c5-93d8-b1c6948e60b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:39:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.722 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:39:40 compute-0 podman[416123]: 2025-10-14 09:39:40.764855775 +0000 UTC m=+0.136634244 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.692 2 DEBUG nova.compute.manager [req-2b7fbd52-21db-439b-8d5f-7a733399178d req-631fd37b-773f-4807-9bda-399de5e86a2d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received event network-vif-plugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.693 2 DEBUG oslo_concurrency.lockutils [req-2b7fbd52-21db-439b-8d5f-7a733399178d req-631fd37b-773f-4807-9bda-399de5e86a2d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.693 2 DEBUG oslo_concurrency.lockutils [req-2b7fbd52-21db-439b-8d5f-7a733399178d req-631fd37b-773f-4807-9bda-399de5e86a2d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.694 2 DEBUG oslo_concurrency.lockutils [req-2b7fbd52-21db-439b-8d5f-7a733399178d req-631fd37b-773f-4807-9bda-399de5e86a2d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.694 2 DEBUG nova.compute.manager [req-2b7fbd52-21db-439b-8d5f-7a733399178d req-631fd37b-773f-4807-9bda-399de5e86a2d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Processing event network-vif-plugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.696 2 DEBUG nova.network.neutron [req-b20d6d03-3d05-4b5a-b005-abfc5aa63de9 req-4a337080-fc32-4064-9cfb-85f8aa74d88d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Updated VIF entry in instance network info cache for port e8b3ac7d-3adf-47a0-8a80-7a5692a145de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.697 2 DEBUG nova.network.neutron [req-b20d6d03-3d05-4b5a-b005-abfc5aa63de9 req-4a337080-fc32-4064-9cfb-85f8aa74d88d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Updating instance_info_cache with network_info: [{"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.711 2 DEBUG oslo_concurrency.lockutils [req-b20d6d03-3d05-4b5a-b005-abfc5aa63de9 req-4a337080-fc32-4064-9cfb-85f8aa74d88d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.776 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434781.7759151, 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.777 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] VM Started (Lifecycle Event)
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.780 2 DEBUG nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.786 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.794 2 INFO nova.virt.libvirt.driver [-] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Instance spawned successfully.
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.794 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.799 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.806 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.822 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.823 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.824 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.825 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.825 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.826 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:39:41 compute-0 ceph-mon[74249]: pgmap v2596: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.835 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.836 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434781.776156, 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.836 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] VM Paused (Lifecycle Event)
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.887 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.893 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434781.7861793, 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.893 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] VM Resumed (Lifecycle Event)
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.939 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.942 2 INFO nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Took 7.60 seconds to spawn the instance on the hypervisor.
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.942 2 DEBUG nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.947 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:39:41 compute-0 nova_compute[259627]: 2025-10-14 09:39:41.985 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:39:42 compute-0 nova_compute[259627]: 2025-10-14 09:39:42.027 2 INFO nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Took 8.80 seconds to build instance.
Oct 14 09:39:42 compute-0 nova_compute[259627]: 2025-10-14 09:39:42.047 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:39:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 09:39:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011052700926287686 of space, bias 1.0, pg target 0.3315810277886306 quantized to 32 (current 32)
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:39:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:39:43 compute-0 nova_compute[259627]: 2025-10-14 09:39:43.812 2 DEBUG nova.compute.manager [req-76e49958-f81d-4069-9ea7-9188b0803781 req-312536b7-6a0b-4fe1-b58e-891ad8207f54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received event network-vif-plugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:39:43 compute-0 nova_compute[259627]: 2025-10-14 09:39:43.814 2 DEBUG oslo_concurrency.lockutils [req-76e49958-f81d-4069-9ea7-9188b0803781 req-312536b7-6a0b-4fe1-b58e-891ad8207f54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:39:43 compute-0 nova_compute[259627]: 2025-10-14 09:39:43.814 2 DEBUG oslo_concurrency.lockutils [req-76e49958-f81d-4069-9ea7-9188b0803781 req-312536b7-6a0b-4fe1-b58e-891ad8207f54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:39:43 compute-0 nova_compute[259627]: 2025-10-14 09:39:43.815 2 DEBUG oslo_concurrency.lockutils [req-76e49958-f81d-4069-9ea7-9188b0803781 req-312536b7-6a0b-4fe1-b58e-891ad8207f54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:39:43 compute-0 nova_compute[259627]: 2025-10-14 09:39:43.815 2 DEBUG nova.compute.manager [req-76e49958-f81d-4069-9ea7-9188b0803781 req-312536b7-6a0b-4fe1-b58e-891ad8207f54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] No waiting events found dispatching network-vif-plugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:39:43 compute-0 nova_compute[259627]: 2025-10-14 09:39:43.815 2 WARNING nova.compute.manager [req-76e49958-f81d-4069-9ea7-9188b0803781 req-312536b7-6a0b-4fe1-b58e-891ad8207f54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received unexpected event network-vif-plugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de for instance with vm_state active and task_state None.
Oct 14 09:39:43 compute-0 ceph-mon[74249]: pgmap v2597: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 09:39:44 compute-0 nova_compute[259627]: 2025-10-14 09:39:44.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 09:39:45 compute-0 ceph-mon[74249]: pgmap v2598: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 09:39:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:39:46 compute-0 nova_compute[259627]: 2025-10-14 09:39:46.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:46 compute-0 nova_compute[259627]: 2025-10-14 09:39:46.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:39:47 compute-0 nova_compute[259627]: 2025-10-14 09:39:47.350 2 DEBUG nova.compute.manager [req-f4a5b2e9-3d4b-4de2-980c-2ce0f62c3de3 req-21611eba-e116-4057-8a2b-b74c89442743 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received event network-changed-e8b3ac7d-3adf-47a0-8a80-7a5692a145de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:39:47 compute-0 nova_compute[259627]: 2025-10-14 09:39:47.351 2 DEBUG nova.compute.manager [req-f4a5b2e9-3d4b-4de2-980c-2ce0f62c3de3 req-21611eba-e116-4057-8a2b-b74c89442743 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Refreshing instance network info cache due to event network-changed-e8b3ac7d-3adf-47a0-8a80-7a5692a145de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:39:47 compute-0 nova_compute[259627]: 2025-10-14 09:39:47.352 2 DEBUG oslo_concurrency.lockutils [req-f4a5b2e9-3d4b-4de2-980c-2ce0f62c3de3 req-21611eba-e116-4057-8a2b-b74c89442743 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:39:47 compute-0 nova_compute[259627]: 2025-10-14 09:39:47.352 2 DEBUG oslo_concurrency.lockutils [req-f4a5b2e9-3d4b-4de2-980c-2ce0f62c3de3 req-21611eba-e116-4057-8a2b-b74c89442743 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:39:47 compute-0 nova_compute[259627]: 2025-10-14 09:39:47.353 2 DEBUG nova.network.neutron [req-f4a5b2e9-3d4b-4de2-980c-2ce0f62c3de3 req-21611eba-e116-4057-8a2b-b74c89442743 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Refreshing network info cache for port e8b3ac7d-3adf-47a0-8a80-7a5692a145de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:39:47 compute-0 ceph-mon[74249]: pgmap v2599: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:39:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:39:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:39:49 compute-0 nova_compute[259627]: 2025-10-14 09:39:49.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:49 compute-0 ceph-mon[74249]: pgmap v2600: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:39:50 compute-0 nova_compute[259627]: 2025-10-14 09:39:50.166 2 DEBUG nova.network.neutron [req-f4a5b2e9-3d4b-4de2-980c-2ce0f62c3de3 req-21611eba-e116-4057-8a2b-b74c89442743 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Updated VIF entry in instance network info cache for port e8b3ac7d-3adf-47a0-8a80-7a5692a145de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:39:50 compute-0 nova_compute[259627]: 2025-10-14 09:39:50.167 2 DEBUG nova.network.neutron [req-f4a5b2e9-3d4b-4de2-980c-2ce0f62c3de3 req-21611eba-e116-4057-8a2b-b74c89442743 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Updating instance_info_cache with network_info: [{"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:39:50 compute-0 nova_compute[259627]: 2025-10-14 09:39:50.202 2 DEBUG oslo_concurrency.lockutils [req-f4a5b2e9-3d4b-4de2-980c-2ce0f62c3de3 req-21611eba-e116-4057-8a2b-b74c89442743 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:39:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:39:51 compute-0 nova_compute[259627]: 2025-10-14 09:39:51.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:51 compute-0 ceph-mon[74249]: pgmap v2601: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:39:52 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct 14 09:39:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Oct 14 09:39:52 compute-0 nova_compute[259627]: 2025-10-14 09:39:52.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:39:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:39:53 compute-0 ceph-mon[74249]: pgmap v2602: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Oct 14 09:39:54 compute-0 nova_compute[259627]: 2025-10-14 09:39:54.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Oct 14 09:39:54 compute-0 ovn_controller[152662]: 2025-10-14T09:39:54Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e7:82:6b 10.100.0.10
Oct 14 09:39:54 compute-0 ovn_controller[152662]: 2025-10-14T09:39:54Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e7:82:6b 10.100.0.10
Oct 14 09:39:54 compute-0 nova_compute[259627]: 2025-10-14 09:39:54.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:39:54 compute-0 nova_compute[259627]: 2025-10-14 09:39:54.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:39:55 compute-0 nova_compute[259627]: 2025-10-14 09:39:55.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:39:55 compute-0 nova_compute[259627]: 2025-10-14 09:39:55.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:39:55 compute-0 nova_compute[259627]: 2025-10-14 09:39:55.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:39:55 compute-0 nova_compute[259627]: 2025-10-14 09:39:55.009 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:39:55 compute-0 nova_compute[259627]: 2025-10-14 09:39:55.009 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:39:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:39:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1520363710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:39:55 compute-0 nova_compute[259627]: 2025-10-14 09:39:55.506 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:39:55 compute-0 nova_compute[259627]: 2025-10-14 09:39:55.612 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:39:55 compute-0 nova_compute[259627]: 2025-10-14 09:39:55.613 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:39:55 compute-0 nova_compute[259627]: 2025-10-14 09:39:55.619 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:39:55 compute-0 nova_compute[259627]: 2025-10-14 09:39:55.619 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:39:55 compute-0 nova_compute[259627]: 2025-10-14 09:39:55.861 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:39:55 compute-0 nova_compute[259627]: 2025-10-14 09:39:55.861 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3179MB free_disk=59.92183303833008GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:39:55 compute-0 nova_compute[259627]: 2025-10-14 09:39:55.862 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:39:55 compute-0 nova_compute[259627]: 2025-10-14 09:39:55.862 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:39:55 compute-0 ceph-mon[74249]: pgmap v2603: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Oct 14 09:39:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1520363710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:39:55 compute-0 nova_compute[259627]: 2025-10-14 09:39:55.981 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance e5b13156-71d2-4a9c-be63-1beebe1ca3fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:39:55 compute-0 nova_compute[259627]: 2025-10-14 09:39:55.982 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:39:55 compute-0 nova_compute[259627]: 2025-10-14 09:39:55.982 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:39:55 compute-0 nova_compute[259627]: 2025-10-14 09:39:55.983 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:39:56 compute-0 nova_compute[259627]: 2025-10-14 09:39:56.010 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 14 09:39:56 compute-0 nova_compute[259627]: 2025-10-14 09:39:56.036 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 14 09:39:56 compute-0 nova_compute[259627]: 2025-10-14 09:39:56.036 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 09:39:56 compute-0 nova_compute[259627]: 2025-10-14 09:39:56.053 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 14 09:39:56 compute-0 nova_compute[259627]: 2025-10-14 09:39:56.096 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 14 09:39:56 compute-0 nova_compute[259627]: 2025-10-14 09:39:56.181 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:39:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:39:56 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3786777675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:39:56 compute-0 podman[416270]: 2025-10-14 09:39:56.655001097 +0000 UTC m=+0.068594944 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:39:56 compute-0 podman[416271]: 2025-10-14 09:39:56.656510864 +0000 UTC m=+0.065370905 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:39:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Oct 14 09:39:56 compute-0 nova_compute[259627]: 2025-10-14 09:39:56.674 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:39:56 compute-0 nova_compute[259627]: 2025-10-14 09:39:56.680 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:39:56 compute-0 nova_compute[259627]: 2025-10-14 09:39:56.710 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:39:56 compute-0 nova_compute[259627]: 2025-10-14 09:39:56.733 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:39:56 compute-0 nova_compute[259627]: 2025-10-14 09:39:56.734 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:39:56 compute-0 nova_compute[259627]: 2025-10-14 09:39:56.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:56 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3786777675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:39:57 compute-0 ceph-mon[74249]: pgmap v2604: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Oct 14 09:39:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:39:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:39:59 compute-0 nova_compute[259627]: 2025-10-14 09:39:59.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:39:59 compute-0 nova_compute[259627]: 2025-10-14 09:39:59.733 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:39:59 compute-0 nova_compute[259627]: 2025-10-14 09:39:59.734 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:39:59 compute-0 nova_compute[259627]: 2025-10-14 09:39:59.734 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:39:59 compute-0 ceph-mon[74249]: pgmap v2605: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:39:59 compute-0 nova_compute[259627]: 2025-10-14 09:39:59.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:39:59 compute-0 nova_compute[259627]: 2025-10-14 09:39:59.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:39:59 compute-0 nova_compute[259627]: 2025-10-14 09:39:59.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:40:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:40:00 compute-0 nova_compute[259627]: 2025-10-14 09:40:00.974 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:40:00 compute-0 nova_compute[259627]: 2025-10-14 09:40:00.975 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:40:00 compute-0 nova_compute[259627]: 2025-10-14 09:40:00.975 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:40:00 compute-0 nova_compute[259627]: 2025-10-14 09:40:00.976 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e5b13156-71d2-4a9c-be63-1beebe1ca3fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:40:01 compute-0 nova_compute[259627]: 2025-10-14 09:40:01.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:01 compute-0 ceph-mon[74249]: pgmap v2606: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:40:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:40:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:40:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:40:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:40:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:40:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:40:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:40:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:40:03 compute-0 ceph-mon[74249]: pgmap v2607: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:40:04 compute-0 nova_compute[259627]: 2025-10-14 09:40:04.310 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updating instance_info_cache with network_info: [{"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:40:04 compute-0 nova_compute[259627]: 2025-10-14 09:40:04.330 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:40:04 compute-0 nova_compute[259627]: 2025-10-14 09:40:04.331 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:40:04 compute-0 nova_compute[259627]: 2025-10-14 09:40:04.332 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:40:04 compute-0 nova_compute[259627]: 2025-10-14 09:40:04.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:40:04 compute-0 nova_compute[259627]: 2025-10-14 09:40:04.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:40:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:40:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2394527340' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:40:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:40:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2394527340' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:40:05 compute-0 ceph-mon[74249]: pgmap v2608: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:40:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2394527340' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:40:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2394527340' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.322 2 DEBUG nova.compute.manager [req-6f93fd00-58b5-40a5-87a0-0569a483e912 req-ce344d58-18e1-4636-b834-ba4e6009f76b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received event network-changed-e8b3ac7d-3adf-47a0-8a80-7a5692a145de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.323 2 DEBUG nova.compute.manager [req-6f93fd00-58b5-40a5-87a0-0569a483e912 req-ce344d58-18e1-4636-b834-ba4e6009f76b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Refreshing instance network info cache due to event network-changed-e8b3ac7d-3adf-47a0-8a80-7a5692a145de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.323 2 DEBUG oslo_concurrency.lockutils [req-6f93fd00-58b5-40a5-87a0-0569a483e912 req-ce344d58-18e1-4636-b834-ba4e6009f76b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.324 2 DEBUG oslo_concurrency.lockutils [req-6f93fd00-58b5-40a5-87a0-0569a483e912 req-ce344d58-18e1-4636-b834-ba4e6009f76b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.324 2 DEBUG nova.network.neutron [req-6f93fd00-58b5-40a5-87a0-0569a483e912 req-ce344d58-18e1-4636-b834-ba4e6009f76b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Refreshing network info cache for port e8b3ac7d-3adf-47a0-8a80-7a5692a145de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.385 2 DEBUG oslo_concurrency.lockutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.386 2 DEBUG oslo_concurrency.lockutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.386 2 DEBUG oslo_concurrency.lockutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.387 2 DEBUG oslo_concurrency.lockutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.387 2 DEBUG oslo_concurrency.lockutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.389 2 INFO nova.compute.manager [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Terminating instance
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.391 2 DEBUG nova.compute.manager [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:40:06 compute-0 kernel: tape8b3ac7d-3a (unregistering): left promiscuous mode
Oct 14 09:40:06 compute-0 NetworkManager[44885]: <info>  [1760434806.4653] device (tape8b3ac7d-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:40:06 compute-0 ovn_controller[152662]: 2025-10-14T09:40:06Z|01642|binding|INFO|Releasing lport e8b3ac7d-3adf-47a0-8a80-7a5692a145de from this chassis (sb_readonly=0)
Oct 14 09:40:06 compute-0 ovn_controller[152662]: 2025-10-14T09:40:06Z|01643|binding|INFO|Setting lport e8b3ac7d-3adf-47a0-8a80-7a5692a145de down in Southbound
Oct 14 09:40:06 compute-0 ovn_controller[152662]: 2025-10-14T09:40:06Z|01644|binding|INFO|Removing iface tape8b3ac7d-3a ovn-installed in OVS
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.496 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:82:6b 10.100.0.10 2001:db8:0:1:f816:3eff:fee7:826b 2001:db8::f816:3eff:fee7:826b'], port_security=['fa:16:3e:e7:82:6b 10.100.0.10 2001:db8:0:1:f816:3eff:fee7:826b 2001:db8::f816:3eff:fee7:826b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28 2001:db8:0:1:f816:3eff:fee7:826b/64 2001:db8::f816:3eff:fee7:826b/64', 'neutron:device_id': '3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14196b9b-0205-497b-9e98-32690613a533', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75fd0641-e399-4e8e-872e-ceab82cd0201', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d75e0f-ba6b-4079-b571-32cab0870048, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e8b3ac7d-3adf-47a0-8a80-7a5692a145de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:40:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.498 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e8b3ac7d-3adf-47a0-8a80-7a5692a145de in datapath 14196b9b-0205-497b-9e98-32690613a533 unbound from our chassis
Oct 14 09:40:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.500 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14196b9b-0205-497b-9e98-32690613a533
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.528 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[745e32d1-f2a5-4410-a4a9-461bf92207b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:06 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000095.scope: Deactivated successfully.
Oct 14 09:40:06 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000095.scope: Consumed 14.087s CPU time.
Oct 14 09:40:06 compute-0 systemd-machined[214636]: Machine qemu-182-instance-00000095 terminated.
Oct 14 09:40:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.575 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f30aabde-55b0-4e65-9442-75188352d026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.581 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[19700c34-ef8d-4570-9938-6ea2cb46f4c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.635 2 INFO nova.virt.libvirt.driver [-] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Instance destroyed successfully.
Oct 14 09:40:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.636 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[09089c08-f00d-4e6b-8bcb-86e17949eb48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.637 2 DEBUG nova.objects.instance [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.660 2 DEBUG nova.virt.libvirt.vif [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-311340838',display_name='tempest-TestGettingAddress-server-311340838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-311340838',id=149,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAMDf2XavD9/IfCp+ndZfV3AdZCUIhTR2npOL1XNIlUaFyzTUoWv4qmqvpPpwdd1PNzx9cK/19FXxs4psOVoqPZEBFGbKyJdIety1giFsXf8LSdMC27Wpk9aHw8ObIpVuA==',key_name='tempest-TestGettingAddress-1135497915',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:39:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-0evlaun0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:39:41Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.661 2 DEBUG nova.network.os_vif_util [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.664 2 DEBUG nova.network.os_vif_util [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e7:82:6b,bridge_name='br-int',has_traffic_filtering=True,id=e8b3ac7d-3adf-47a0-8a80-7a5692a145de,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8b3ac7d-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.665 2 DEBUG os_vif [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:82:6b,bridge_name='br-int',has_traffic_filtering=True,id=e8b3ac7d-3adf-47a0-8a80-7a5692a145de,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8b3ac7d-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.668 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8b3ac7d-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 09:40:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.675 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[faf663bc-0fd0-4c8b-8932-d15c369e7ae3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14196b9b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:47:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3628, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3628, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852727, 'reachable_time': 21450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416333, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.676 2 DEBUG nova.compute.manager [req-d90c0c59-2feb-43c5-93b9-7ea22f9d684f req-59eb3584-6b1f-4cc9-af43-7219575fc6cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received event network-vif-unplugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.676 2 DEBUG oslo_concurrency.lockutils [req-d90c0c59-2feb-43c5-93b9-7ea22f9d684f req-59eb3584-6b1f-4cc9-af43-7219575fc6cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.677 2 DEBUG oslo_concurrency.lockutils [req-d90c0c59-2feb-43c5-93b9-7ea22f9d684f req-59eb3584-6b1f-4cc9-af43-7219575fc6cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.677 2 DEBUG oslo_concurrency.lockutils [req-d90c0c59-2feb-43c5-93b9-7ea22f9d684f req-59eb3584-6b1f-4cc9-af43-7219575fc6cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.678 2 DEBUG nova.compute.manager [req-d90c0c59-2feb-43c5-93b9-7ea22f9d684f req-59eb3584-6b1f-4cc9-af43-7219575fc6cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] No waiting events found dispatching network-vif-unplugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.678 2 DEBUG nova.compute.manager [req-d90c0c59-2feb-43c5-93b9-7ea22f9d684f req-59eb3584-6b1f-4cc9-af43-7219575fc6cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received event network-vif-unplugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.687 2 INFO os_vif [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:82:6b,bridge_name='br-int',has_traffic_filtering=True,id=e8b3ac7d-3adf-47a0-8a80-7a5692a145de,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8b3ac7d-3a')
Oct 14 09:40:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.697 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ab3d9d-d59f-4daf-9554-d0391beedef0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap14196b9b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 852742, 'tstamp': 852742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416335, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14196b9b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 852745, 'tstamp': 852745}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416335, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.698 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14196b9b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:40:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.702 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14196b9b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:40:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.702 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:40:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.703 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14196b9b-00, col_values=(('external_ids', {'iface-id': '500514c8-ea10-48c5-93d8-b1c6948e60b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:40:06 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.703 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:06 compute-0 nova_compute[259627]: 2025-10-14 09:40:06.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:07.056 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:07.057 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:07.057 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:07 compute-0 nova_compute[259627]: 2025-10-14 09:40:07.156 2 INFO nova.virt.libvirt.driver [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Deleting instance files /var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_del
Oct 14 09:40:07 compute-0 nova_compute[259627]: 2025-10-14 09:40:07.158 2 INFO nova.virt.libvirt.driver [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Deletion of /var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_del complete
Oct 14 09:40:07 compute-0 nova_compute[259627]: 2025-10-14 09:40:07.241 2 INFO nova.compute.manager [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Took 0.85 seconds to destroy the instance on the hypervisor.
Oct 14 09:40:07 compute-0 nova_compute[259627]: 2025-10-14 09:40:07.242 2 DEBUG oslo.service.loopingcall [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:40:07 compute-0 nova_compute[259627]: 2025-10-14 09:40:07.242 2 DEBUG nova.compute.manager [-] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:40:07 compute-0 nova_compute[259627]: 2025-10-14 09:40:07.243 2 DEBUG nova.network.neutron [-] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:40:07 compute-0 ceph-mon[74249]: pgmap v2609: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.103 2 DEBUG nova.network.neutron [-] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.129 2 INFO nova.compute.manager [-] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Took 0.89 seconds to deallocate network for instance.
Oct 14 09:40:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.183 2 DEBUG oslo_concurrency.lockutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.184 2 DEBUG oslo_concurrency.lockutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.267 2 DEBUG oslo_concurrency.processutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.400 2 DEBUG nova.compute.manager [req-796d3c0c-964a-4e4b-98d7-45085a842c5a req-3f978797-e996-46e4-a721-9f60f52d4e7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received event network-vif-deleted-e8b3ac7d-3adf-47a0-8a80-7a5692a145de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.581 2 DEBUG nova.network.neutron [req-6f93fd00-58b5-40a5-87a0-0569a483e912 req-ce344d58-18e1-4636-b834-ba4e6009f76b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Updated VIF entry in instance network info cache for port e8b3ac7d-3adf-47a0-8a80-7a5692a145de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.582 2 DEBUG nova.network.neutron [req-6f93fd00-58b5-40a5-87a0-0569a483e912 req-ce344d58-18e1-4636-b834-ba4e6009f76b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Updating instance_info_cache with network_info: [{"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": 
{}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.601 2 DEBUG oslo_concurrency.lockutils [req-6f93fd00-58b5-40a5-87a0-0569a483e912 req-ce344d58-18e1-4636-b834-ba4e6009f76b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:40:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 14 09:40:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:40:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3959017969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.756 2 DEBUG oslo_concurrency.processutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.765 2 DEBUG nova.compute.provider_tree [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.774 2 DEBUG nova.compute.manager [req-3f75f09a-04b3-4033-af17-56291123f71c req-10d7848a-b8f1-4523-b08a-406b0abed1d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received event network-vif-plugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.775 2 DEBUG oslo_concurrency.lockutils [req-3f75f09a-04b3-4033-af17-56291123f71c req-10d7848a-b8f1-4523-b08a-406b0abed1d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.775 2 DEBUG oslo_concurrency.lockutils [req-3f75f09a-04b3-4033-af17-56291123f71c req-10d7848a-b8f1-4523-b08a-406b0abed1d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.776 2 DEBUG oslo_concurrency.lockutils [req-3f75f09a-04b3-4033-af17-56291123f71c req-10d7848a-b8f1-4523-b08a-406b0abed1d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.776 2 DEBUG nova.compute.manager [req-3f75f09a-04b3-4033-af17-56291123f71c req-10d7848a-b8f1-4523-b08a-406b0abed1d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] No waiting events found dispatching network-vif-plugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.777 2 WARNING nova.compute.manager [req-3f75f09a-04b3-4033-af17-56291123f71c req-10d7848a-b8f1-4523-b08a-406b0abed1d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received unexpected event network-vif-plugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de for instance with vm_state deleted and task_state None.
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.785 2 DEBUG nova.scheduler.client.report [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.810 2 DEBUG oslo_concurrency.lockutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.836 2 INFO nova.scheduler.client.report [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed
Oct 14 09:40:08 compute-0 nova_compute[259627]: 2025-10-14 09:40:08.895 2 DEBUG oslo_concurrency.lockutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:08 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3959017969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:40:09 compute-0 ceph-mon[74249]: pgmap v2610: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 14 09:40:10 compute-0 sudo[416377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:40:10 compute-0 sudo[416377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:40:10 compute-0 sudo[416377]: pam_unix(sudo:session): session closed for user root
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.330 2 DEBUG oslo_concurrency.lockutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.331 2 DEBUG oslo_concurrency.lockutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.332 2 DEBUG oslo_concurrency.lockutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.333 2 DEBUG oslo_concurrency.lockutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.333 2 DEBUG oslo_concurrency.lockutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.335 2 INFO nova.compute.manager [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Terminating instance
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.338 2 DEBUG nova.compute.manager [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:40:10 compute-0 sudo[416402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:40:10 compute-0 sudo[416402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:40:10 compute-0 sudo[416402]: pam_unix(sudo:session): session closed for user root
Oct 14 09:40:10 compute-0 kernel: tap30c28c87-45 (unregistering): left promiscuous mode
Oct 14 09:40:10 compute-0 NetworkManager[44885]: <info>  [1760434810.3957] device (tap30c28c87-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:40:10 compute-0 ovn_controller[152662]: 2025-10-14T09:40:10Z|01645|binding|INFO|Releasing lport 30c28c87-45b1-43e9-930b-c8ba5142286f from this chassis (sb_readonly=0)
Oct 14 09:40:10 compute-0 ovn_controller[152662]: 2025-10-14T09:40:10Z|01646|binding|INFO|Setting lport 30c28c87-45b1-43e9-930b-c8ba5142286f down in Southbound
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:10 compute-0 ovn_controller[152662]: 2025-10-14T09:40:10Z|01647|binding|INFO|Removing iface tap30c28c87-45 ovn-installed in OVS
Oct 14 09:40:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.409 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:ec:03 10.100.0.7 2001:db8:0:1:f816:3eff:feb7:ec03 2001:db8::f816:3eff:feb7:ec03'], port_security=['fa:16:3e:b7:ec:03 10.100.0.7 2001:db8:0:1:f816:3eff:feb7:ec03 2001:db8::f816:3eff:feb7:ec03'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8:0:1:f816:3eff:feb7:ec03/64 2001:db8::f816:3eff:feb7:ec03/64', 'neutron:device_id': 'e5b13156-71d2-4a9c-be63-1beebe1ca3fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14196b9b-0205-497b-9e98-32690613a533', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75fd0641-e399-4e8e-872e-ceab82cd0201', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d75e0f-ba6b-4079-b571-32cab0870048, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=30c28c87-45b1-43e9-930b-c8ba5142286f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:40:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.410 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 30c28c87-45b1-43e9-930b-c8ba5142286f in datapath 14196b9b-0205-497b-9e98-32690613a533 unbound from our chassis
Oct 14 09:40:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.411 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14196b9b-0205-497b-9e98-32690613a533, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:40:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.414 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be4b49c0-e55f-4d22-9dae-430592bde651]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.415 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14196b9b-0205-497b-9e98-32690613a533 namespace which is not needed anymore
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:10 compute-0 sudo[416427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:40:10 compute-0 sudo[416427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:40:10 compute-0 sudo[416427]: pam_unix(sudo:session): session closed for user root
Oct 14 09:40:10 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000094.scope: Deactivated successfully.
Oct 14 09:40:10 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000094.scope: Consumed 14.863s CPU time.
Oct 14 09:40:10 compute-0 systemd-machined[214636]: Machine qemu-181-instance-00000094 terminated.
Oct 14 09:40:10 compute-0 sudo[416468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:40:10 compute-0 sudo[416468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:40:10 compute-0 neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533[415619]: [NOTICE]   (415623) : haproxy version is 2.8.14-c23fe91
Oct 14 09:40:10 compute-0 neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533[415619]: [NOTICE]   (415623) : path to executable is /usr/sbin/haproxy
Oct 14 09:40:10 compute-0 neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533[415619]: [WARNING]  (415623) : Exiting Master process...
Oct 14 09:40:10 compute-0 neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533[415619]: [WARNING]  (415623) : Exiting Master process...
Oct 14 09:40:10 compute-0 neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533[415619]: [ALERT]    (415623) : Current worker (415625) exited with code 143 (Terminated)
Oct 14 09:40:10 compute-0 neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533[415619]: [WARNING]  (415623) : All workers exited. Exiting... (0)
Oct 14 09:40:10 compute-0 systemd[1]: libpod-da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d.scope: Deactivated successfully.
Oct 14 09:40:10 compute-0 podman[416499]: 2025-10-14 09:40:10.554840044 +0000 UTC m=+0.043470308 container died da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.571 2 INFO nova.virt.libvirt.driver [-] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Instance destroyed successfully.
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.572 2 DEBUG nova.objects.instance [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid e5b13156-71d2-4a9c-be63-1beebe1ca3fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.591 2 DEBUG nova.virt.libvirt.vif [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:38:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-911560012',display_name='tempest-TestGettingAddress-server-911560012',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-911560012',id=148,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAMDf2XavD9/IfCp+ndZfV3AdZCUIhTR2npOL1XNIlUaFyzTUoWv4qmqvpPpwdd1PNzx9cK/19FXxs4psOVoqPZEBFGbKyJdIety1giFsXf8LSdMC27Wpk9aHw8ObIpVuA==',key_name='tempest-TestGettingAddress-1135497915',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:39:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-friw18rr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:39:09Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=e5b13156-71d2-4a9c-be63-1beebe1ca3fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.592 2 DEBUG nova.network.os_vif_util [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.593 2 DEBUG nova.network.os_vif_util [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:ec:03,bridge_name='br-int',has_traffic_filtering=True,id=30c28c87-45b1-43e9-930b-c8ba5142286f,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c28c87-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.593 2 DEBUG os_vif [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:ec:03,bridge_name='br-int',has_traffic_filtering=True,id=30c28c87-45b1-43e9-930b-c8ba5142286f,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c28c87-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.595 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30c28c87-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.602 2 INFO os_vif [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:ec:03,bridge_name='br-int',has_traffic_filtering=True,id=30c28c87-45b1-43e9-930b-c8ba5142286f,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c28c87-45')
Oct 14 09:40:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d-userdata-shm.mount: Deactivated successfully.
Oct 14 09:40:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-579e65901c3d9167dcba11f2ead71e8144552c5f2bb2a88cb78fc270c6ac4ad0-merged.mount: Deactivated successfully.
Oct 14 09:40:10 compute-0 podman[416499]: 2025-10-14 09:40:10.624704809 +0000 UTC m=+0.113335113 container cleanup da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 09:40:10 compute-0 systemd[1]: libpod-conmon-da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d.scope: Deactivated successfully.
Oct 14 09:40:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 24 KiB/s wr, 30 op/s
Oct 14 09:40:10 compute-0 podman[416557]: 2025-10-14 09:40:10.705969333 +0000 UTC m=+0.050563432 container remove da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:40:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.711 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d30987e7-47c9-48c6-96a1-7041ab4cc8fc]: (4, ('Tue Oct 14 09:40:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533 (da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d)\nda359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d\nTue Oct 14 09:40:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533 (da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d)\nda359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.713 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7a751600-79cb-47d7-93ca-bc295628500a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.714 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14196b9b-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:40:10 compute-0 kernel: tap14196b9b-00: left promiscuous mode
Oct 14 09:40:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.721 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[601c47f9-9c53-415d-a68d-5c7f2da34d79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.760 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2e443b51-0c6c-4c04-a3ea-deff37747b98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.762 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b55dfa83-946b-4aad-a69d-82edd228bac7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.777 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5fad5b3b-c8ad-4304-ad6f-f1512bd8de74]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852718, 'reachable_time': 43402, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416596, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d14196b9b\x2d0205\x2d497b\x2d9e98\x2d32690613a533.mount: Deactivated successfully.
Oct 14 09:40:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.781 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14196b9b-0205-497b-9e98-32690613a533 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:40:10 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.781 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[b0308696-416e-46be-9346-001da9be3cf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:10 compute-0 podman[416586]: 2025-10-14 09:40:10.827906745 +0000 UTC m=+0.075921804 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.882 2 DEBUG nova.compute.manager [req-2b5ed22a-9756-4cf3-944f-b818159ef1c3 req-4fc9da7d-9e43-4377-858d-91e1bae71279 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received event network-changed-30c28c87-45b1-43e9-930b-c8ba5142286f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.882 2 DEBUG nova.compute.manager [req-2b5ed22a-9756-4cf3-944f-b818159ef1c3 req-4fc9da7d-9e43-4377-858d-91e1bae71279 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Refreshing instance network info cache due to event network-changed-30c28c87-45b1-43e9-930b-c8ba5142286f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.883 2 DEBUG oslo_concurrency.lockutils [req-2b5ed22a-9756-4cf3-944f-b818159ef1c3 req-4fc9da7d-9e43-4377-858d-91e1bae71279 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.883 2 DEBUG oslo_concurrency.lockutils [req-2b5ed22a-9756-4cf3-944f-b818159ef1c3 req-4fc9da7d-9e43-4377-858d-91e1bae71279 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:40:10 compute-0 nova_compute[259627]: 2025-10-14 09:40:10.883 2 DEBUG nova.network.neutron [req-2b5ed22a-9756-4cf3-944f-b818159ef1c3 req-4fc9da7d-9e43-4377-858d-91e1bae71279 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Refreshing network info cache for port 30c28c87-45b1-43e9-930b-c8ba5142286f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:40:10 compute-0 podman[416604]: 2025-10-14 09:40:10.895792851 +0000 UTC m=+0.091015764 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 09:40:11 compute-0 sudo[416468]: pam_unix(sudo:session): session closed for user root
Oct 14 09:40:11 compute-0 nova_compute[259627]: 2025-10-14 09:40:11.024 2 INFO nova.virt.libvirt.driver [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Deleting instance files /var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb_del
Oct 14 09:40:11 compute-0 nova_compute[259627]: 2025-10-14 09:40:11.025 2 INFO nova.virt.libvirt.driver [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Deletion of /var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb_del complete
Oct 14 09:40:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:40:11 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:40:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:40:11 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:40:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:40:11 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:40:11 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev a04c671d-245d-4a5b-9222-7a97969a45e8 does not exist
Oct 14 09:40:11 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 6cd7c7e3-9580-49b9-b248-f7d2cb3191e5 does not exist
Oct 14 09:40:11 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 2f1c73e6-a12b-4e1f-9342-9bab84e2b7f4 does not exist
Oct 14 09:40:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:40:11 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:40:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:40:11 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:40:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:40:11 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:40:11 compute-0 sudo[416654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:40:11 compute-0 sudo[416654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:40:11 compute-0 sudo[416654]: pam_unix(sudo:session): session closed for user root
Oct 14 09:40:11 compute-0 nova_compute[259627]: 2025-10-14 09:40:11.119 2 INFO nova.compute.manager [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 14 09:40:11 compute-0 nova_compute[259627]: 2025-10-14 09:40:11.120 2 DEBUG oslo.service.loopingcall [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:40:11 compute-0 nova_compute[259627]: 2025-10-14 09:40:11.120 2 DEBUG nova.compute.manager [-] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:40:11 compute-0 nova_compute[259627]: 2025-10-14 09:40:11.120 2 DEBUG nova.network.neutron [-] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:40:11 compute-0 sudo[416679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:40:11 compute-0 nova_compute[259627]: 2025-10-14 09:40:11.179 2 DEBUG nova.compute.manager [req-54d4ef05-2301-4fec-8598-9b17dd9b3400 req-d096c4ff-d6ba-4205-b9bf-80210dbc4589 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received event network-vif-unplugged-30c28c87-45b1-43e9-930b-c8ba5142286f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:40:11 compute-0 nova_compute[259627]: 2025-10-14 09:40:11.180 2 DEBUG oslo_concurrency.lockutils [req-54d4ef05-2301-4fec-8598-9b17dd9b3400 req-d096c4ff-d6ba-4205-b9bf-80210dbc4589 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:11 compute-0 nova_compute[259627]: 2025-10-14 09:40:11.180 2 DEBUG oslo_concurrency.lockutils [req-54d4ef05-2301-4fec-8598-9b17dd9b3400 req-d096c4ff-d6ba-4205-b9bf-80210dbc4589 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:11 compute-0 nova_compute[259627]: 2025-10-14 09:40:11.180 2 DEBUG oslo_concurrency.lockutils [req-54d4ef05-2301-4fec-8598-9b17dd9b3400 req-d096c4ff-d6ba-4205-b9bf-80210dbc4589 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:11 compute-0 sudo[416679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:40:11 compute-0 nova_compute[259627]: 2025-10-14 09:40:11.181 2 DEBUG nova.compute.manager [req-54d4ef05-2301-4fec-8598-9b17dd9b3400 req-d096c4ff-d6ba-4205-b9bf-80210dbc4589 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] No waiting events found dispatching network-vif-unplugged-30c28c87-45b1-43e9-930b-c8ba5142286f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:40:11 compute-0 nova_compute[259627]: 2025-10-14 09:40:11.181 2 DEBUG nova.compute.manager [req-54d4ef05-2301-4fec-8598-9b17dd9b3400 req-d096c4ff-d6ba-4205-b9bf-80210dbc4589 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received event network-vif-unplugged-30c28c87-45b1-43e9-930b-c8ba5142286f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:40:11 compute-0 sudo[416679]: pam_unix(sudo:session): session closed for user root
Oct 14 09:40:11 compute-0 sudo[416704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:40:11 compute-0 sudo[416704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:40:11 compute-0 sudo[416704]: pam_unix(sudo:session): session closed for user root
Oct 14 09:40:11 compute-0 sudo[416729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:40:11 compute-0 sudo[416729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:40:11 compute-0 podman[416791]: 2025-10-14 09:40:11.750728372 +0000 UTC m=+0.061745007 container create 4bd4b71d2af16cc55216b7ccb8ef156421ebbe95146350b6bef4d793701bbda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:40:11 compute-0 nova_compute[259627]: 2025-10-14 09:40:11.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:11 compute-0 systemd[1]: Started libpod-conmon-4bd4b71d2af16cc55216b7ccb8ef156421ebbe95146350b6bef4d793701bbda4.scope.
Oct 14 09:40:11 compute-0 podman[416791]: 2025-10-14 09:40:11.726494397 +0000 UTC m=+0.037511122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:40:11 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:40:11 compute-0 podman[416791]: 2025-10-14 09:40:11.855924763 +0000 UTC m=+0.166941498 container init 4bd4b71d2af16cc55216b7ccb8ef156421ebbe95146350b6bef4d793701bbda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:40:11 compute-0 podman[416791]: 2025-10-14 09:40:11.867426105 +0000 UTC m=+0.178442780 container start 4bd4b71d2af16cc55216b7ccb8ef156421ebbe95146350b6bef4d793701bbda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 09:40:11 compute-0 podman[416791]: 2025-10-14 09:40:11.87089562 +0000 UTC m=+0.181912275 container attach 4bd4b71d2af16cc55216b7ccb8ef156421ebbe95146350b6bef4d793701bbda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 09:40:11 compute-0 pedantic_northcutt[416807]: 167 167
Oct 14 09:40:11 compute-0 systemd[1]: libpod-4bd4b71d2af16cc55216b7ccb8ef156421ebbe95146350b6bef4d793701bbda4.scope: Deactivated successfully.
Oct 14 09:40:11 compute-0 podman[416791]: 2025-10-14 09:40:11.875176425 +0000 UTC m=+0.186193100 container died 4bd4b71d2af16cc55216b7ccb8ef156421ebbe95146350b6bef4d793701bbda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:40:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4d67928482b107d201b3311283b0c54bfa3e94dbc95743727efbfed27753f10-merged.mount: Deactivated successfully.
Oct 14 09:40:11 compute-0 podman[416791]: 2025-10-14 09:40:11.913579928 +0000 UTC m=+0.224596573 container remove 4bd4b71d2af16cc55216b7ccb8ef156421ebbe95146350b6bef4d793701bbda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 09:40:11 compute-0 systemd[1]: libpod-conmon-4bd4b71d2af16cc55216b7ccb8ef156421ebbe95146350b6bef4d793701bbda4.scope: Deactivated successfully.
Oct 14 09:40:11 compute-0 ceph-mon[74249]: pgmap v2611: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 24 KiB/s wr, 30 op/s
Oct 14 09:40:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:40:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:40:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:40:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:40:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:40:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:40:12 compute-0 podman[416831]: 2025-10-14 09:40:12.123441218 +0000 UTC m=+0.041740055 container create e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:40:12 compute-0 systemd[1]: Started libpod-conmon-e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270.scope.
Oct 14 09:40:12 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dd67c004fb6e9a7e3508c01fd2c142f1fe6aa3764b373387c84462d54f702e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dd67c004fb6e9a7e3508c01fd2c142f1fe6aa3764b373387c84462d54f702e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dd67c004fb6e9a7e3508c01fd2c142f1fe6aa3764b373387c84462d54f702e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dd67c004fb6e9a7e3508c01fd2c142f1fe6aa3764b373387c84462d54f702e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dd67c004fb6e9a7e3508c01fd2c142f1fe6aa3764b373387c84462d54f702e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:40:12 compute-0 podman[416831]: 2025-10-14 09:40:12.104947304 +0000 UTC m=+0.023246141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:40:12 compute-0 podman[416831]: 2025-10-14 09:40:12.205759028 +0000 UTC m=+0.124057885 container init e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 09:40:12 compute-0 podman[416831]: 2025-10-14 09:40:12.218028879 +0000 UTC m=+0.136327706 container start e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:40:12 compute-0 podman[416831]: 2025-10-14 09:40:12.221144426 +0000 UTC m=+0.139443253 container attach e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:40:12 compute-0 nova_compute[259627]: 2025-10-14 09:40:12.496 2 DEBUG nova.network.neutron [-] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:40:12 compute-0 nova_compute[259627]: 2025-10-14 09:40:12.526 2 INFO nova.compute.manager [-] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Took 1.41 seconds to deallocate network for instance.
Oct 14 09:40:12 compute-0 nova_compute[259627]: 2025-10-14 09:40:12.569 2 DEBUG oslo_concurrency.lockutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:12 compute-0 nova_compute[259627]: 2025-10-14 09:40:12.570 2 DEBUG oslo_concurrency.lockutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:12 compute-0 nova_compute[259627]: 2025-10-14 09:40:12.606 2 DEBUG oslo_concurrency.processutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:40:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 11 KiB/s wr, 30 op/s
Oct 14 09:40:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:40:13 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2576567254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:40:13 compute-0 nova_compute[259627]: 2025-10-14 09:40:13.115 2 DEBUG oslo_concurrency.processutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:40:13 compute-0 nova_compute[259627]: 2025-10-14 09:40:13.125 2 DEBUG nova.compute.provider_tree [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:40:13 compute-0 nova_compute[259627]: 2025-10-14 09:40:13.150 2 DEBUG nova.scheduler.client.report [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:40:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:40:13 compute-0 nova_compute[259627]: 2025-10-14 09:40:13.185 2 DEBUG oslo_concurrency.lockutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:13 compute-0 nova_compute[259627]: 2025-10-14 09:40:13.227 2 INFO nova.scheduler.client.report [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance e5b13156-71d2-4a9c-be63-1beebe1ca3fb
Oct 14 09:40:13 compute-0 gifted_bohr[416847]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:40:13 compute-0 gifted_bohr[416847]: --> relative data size: 1.0
Oct 14 09:40:13 compute-0 gifted_bohr[416847]: --> All data devices are unavailable
Oct 14 09:40:13 compute-0 nova_compute[259627]: 2025-10-14 09:40:13.289 2 DEBUG nova.compute.manager [req-c97f272b-dcf1-4fe7-a074-e4914bc68e39 req-59717119-53b1-47c2-a7b2-a611e0f5fb7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received event network-vif-plugged-30c28c87-45b1-43e9-930b-c8ba5142286f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:40:13 compute-0 nova_compute[259627]: 2025-10-14 09:40:13.289 2 DEBUG oslo_concurrency.lockutils [req-c97f272b-dcf1-4fe7-a074-e4914bc68e39 req-59717119-53b1-47c2-a7b2-a611e0f5fb7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:13 compute-0 nova_compute[259627]: 2025-10-14 09:40:13.289 2 DEBUG oslo_concurrency.lockutils [req-c97f272b-dcf1-4fe7-a074-e4914bc68e39 req-59717119-53b1-47c2-a7b2-a611e0f5fb7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:13 compute-0 nova_compute[259627]: 2025-10-14 09:40:13.290 2 DEBUG oslo_concurrency.lockutils [req-c97f272b-dcf1-4fe7-a074-e4914bc68e39 req-59717119-53b1-47c2-a7b2-a611e0f5fb7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:13 compute-0 nova_compute[259627]: 2025-10-14 09:40:13.290 2 DEBUG nova.compute.manager [req-c97f272b-dcf1-4fe7-a074-e4914bc68e39 req-59717119-53b1-47c2-a7b2-a611e0f5fb7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] No waiting events found dispatching network-vif-plugged-30c28c87-45b1-43e9-930b-c8ba5142286f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:40:13 compute-0 nova_compute[259627]: 2025-10-14 09:40:13.290 2 WARNING nova.compute.manager [req-c97f272b-dcf1-4fe7-a074-e4914bc68e39 req-59717119-53b1-47c2-a7b2-a611e0f5fb7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received unexpected event network-vif-plugged-30c28c87-45b1-43e9-930b-c8ba5142286f for instance with vm_state deleted and task_state None.
Oct 14 09:40:13 compute-0 nova_compute[259627]: 2025-10-14 09:40:13.291 2 DEBUG nova.compute.manager [req-c97f272b-dcf1-4fe7-a074-e4914bc68e39 req-59717119-53b1-47c2-a7b2-a611e0f5fb7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received event network-vif-deleted-30c28c87-45b1-43e9-930b-c8ba5142286f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:40:13 compute-0 systemd[1]: libpod-e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270.scope: Deactivated successfully.
Oct 14 09:40:13 compute-0 systemd[1]: libpod-e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270.scope: Consumed 1.017s CPU time.
Oct 14 09:40:13 compute-0 podman[416831]: 2025-10-14 09:40:13.300374629 +0000 UTC m=+1.218673496 container died e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 09:40:13 compute-0 nova_compute[259627]: 2025-10-14 09:40:13.328 2 DEBUG oslo_concurrency.lockutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-62dd67c004fb6e9a7e3508c01fd2c142f1fe6aa3764b373387c84462d54f702e-merged.mount: Deactivated successfully.
Oct 14 09:40:13 compute-0 podman[416831]: 2025-10-14 09:40:13.359794047 +0000 UTC m=+1.278092894 container remove e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 09:40:13 compute-0 systemd[1]: libpod-conmon-e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270.scope: Deactivated successfully.
Oct 14 09:40:13 compute-0 sudo[416729]: pam_unix(sudo:session): session closed for user root
Oct 14 09:40:13 compute-0 sudo[416910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:40:13 compute-0 sudo[416910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:40:13 compute-0 sudo[416910]: pam_unix(sudo:session): session closed for user root
Oct 14 09:40:13 compute-0 sudo[416935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:40:13 compute-0 sudo[416935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:40:13 compute-0 sudo[416935]: pam_unix(sudo:session): session closed for user root
Oct 14 09:40:13 compute-0 sudo[416960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:40:13 compute-0 sudo[416960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:40:13 compute-0 sudo[416960]: pam_unix(sudo:session): session closed for user root
Oct 14 09:40:13 compute-0 sudo[416985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:40:13 compute-0 sudo[416985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:40:13 compute-0 nova_compute[259627]: 2025-10-14 09:40:13.777 2 DEBUG nova.network.neutron [req-2b5ed22a-9756-4cf3-944f-b818159ef1c3 req-4fc9da7d-9e43-4377-858d-91e1bae71279 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updated VIF entry in instance network info cache for port 30c28c87-45b1-43e9-930b-c8ba5142286f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:40:13 compute-0 nova_compute[259627]: 2025-10-14 09:40:13.779 2 DEBUG nova.network.neutron [req-2b5ed22a-9756-4cf3-944f-b818159ef1c3 req-4fc9da7d-9e43-4377-858d-91e1bae71279 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updating instance_info_cache with network_info: [{"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:40:13 compute-0 nova_compute[259627]: 2025-10-14 09:40:13.804 2 DEBUG oslo_concurrency.lockutils [req-2b5ed22a-9756-4cf3-944f-b818159ef1c3 req-4fc9da7d-9e43-4377-858d-91e1bae71279 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:40:13 compute-0 ceph-mon[74249]: pgmap v2612: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 11 KiB/s wr, 30 op/s
Oct 14 09:40:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2576567254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:40:14 compute-0 podman[417050]: 2025-10-14 09:40:14.174177292 +0000 UTC m=+0.067329803 container create 18b48e7f9c8ab29ed81b9f1b4cee4679dee35a135691fcca6c7cdecd3c0dbe1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:40:14 compute-0 systemd[1]: Started libpod-conmon-18b48e7f9c8ab29ed81b9f1b4cee4679dee35a135691fcca6c7cdecd3c0dbe1f.scope.
Oct 14 09:40:14 compute-0 podman[417050]: 2025-10-14 09:40:14.151187018 +0000 UTC m=+0.044339619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:40:14 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:40:14 compute-0 podman[417050]: 2025-10-14 09:40:14.258892201 +0000 UTC m=+0.152044772 container init 18b48e7f9c8ab29ed81b9f1b4cee4679dee35a135691fcca6c7cdecd3c0dbe1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct 14 09:40:14 compute-0 podman[417050]: 2025-10-14 09:40:14.269991224 +0000 UTC m=+0.163143735 container start 18b48e7f9c8ab29ed81b9f1b4cee4679dee35a135691fcca6c7cdecd3c0dbe1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 09:40:14 compute-0 podman[417050]: 2025-10-14 09:40:14.27269704 +0000 UTC m=+0.165849551 container attach 18b48e7f9c8ab29ed81b9f1b4cee4679dee35a135691fcca6c7cdecd3c0dbe1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:40:14 compute-0 hopeful_austin[417066]: 167 167
Oct 14 09:40:14 compute-0 systemd[1]: libpod-18b48e7f9c8ab29ed81b9f1b4cee4679dee35a135691fcca6c7cdecd3c0dbe1f.scope: Deactivated successfully.
Oct 14 09:40:14 compute-0 podman[417050]: 2025-10-14 09:40:14.277914908 +0000 UTC m=+0.171067429 container died 18b48e7f9c8ab29ed81b9f1b4cee4679dee35a135691fcca6c7cdecd3c0dbe1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:40:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-16011f8eabe82c41632ae6374c24469019da00e6786da91fc2c6e72ccb7d8acc-merged.mount: Deactivated successfully.
Oct 14 09:40:14 compute-0 podman[417050]: 2025-10-14 09:40:14.321063117 +0000 UTC m=+0.214215638 container remove 18b48e7f9c8ab29ed81b9f1b4cee4679dee35a135691fcca6c7cdecd3c0dbe1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 09:40:14 compute-0 systemd[1]: libpod-conmon-18b48e7f9c8ab29ed81b9f1b4cee4679dee35a135691fcca6c7cdecd3c0dbe1f.scope: Deactivated successfully.
Oct 14 09:40:14 compute-0 podman[417091]: 2025-10-14 09:40:14.531621934 +0000 UTC m=+0.061201763 container create d57cd8fa4142e603498092a561405465ddd635736f210ec60c456a7668f1ba3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 09:40:14 compute-0 systemd[1]: Started libpod-conmon-d57cd8fa4142e603498092a561405465ddd635736f210ec60c456a7668f1ba3b.scope.
Oct 14 09:40:14 compute-0 podman[417091]: 2025-10-14 09:40:14.511398028 +0000 UTC m=+0.040977867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:40:14 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:40:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69f33a28ee889913e815240025a046ee2ddff697a3015c79de43c95fbd01b10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:40:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69f33a28ee889913e815240025a046ee2ddff697a3015c79de43c95fbd01b10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:40:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69f33a28ee889913e815240025a046ee2ddff697a3015c79de43c95fbd01b10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:40:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69f33a28ee889913e815240025a046ee2ddff697a3015c79de43c95fbd01b10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:40:14 compute-0 podman[417091]: 2025-10-14 09:40:14.635575175 +0000 UTC m=+0.165155084 container init d57cd8fa4142e603498092a561405465ddd635736f210ec60c456a7668f1ba3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_nash, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 09:40:14 compute-0 podman[417091]: 2025-10-14 09:40:14.646539824 +0000 UTC m=+0.176119643 container start d57cd8fa4142e603498092a561405465ddd635736f210ec60c456a7668f1ba3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_nash, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:40:14 compute-0 podman[417091]: 2025-10-14 09:40:14.650097831 +0000 UTC m=+0.179677730 container attach d57cd8fa4142e603498092a561405465ddd635736f210ec60c456a7668f1ba3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 14 09:40:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 11 KiB/s wr, 30 op/s
Oct 14 09:40:15 compute-0 epic_nash[417108]: {
Oct 14 09:40:15 compute-0 epic_nash[417108]:     "0": [
Oct 14 09:40:15 compute-0 epic_nash[417108]:         {
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "devices": [
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "/dev/loop3"
Oct 14 09:40:15 compute-0 epic_nash[417108]:             ],
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "lv_name": "ceph_lv0",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "lv_size": "21470642176",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "name": "ceph_lv0",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "tags": {
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.cluster_name": "ceph",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.crush_device_class": "",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.encrypted": "0",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.osd_id": "0",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.type": "block",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.vdo": "0"
Oct 14 09:40:15 compute-0 epic_nash[417108]:             },
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "type": "block",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "vg_name": "ceph_vg0"
Oct 14 09:40:15 compute-0 epic_nash[417108]:         }
Oct 14 09:40:15 compute-0 epic_nash[417108]:     ],
Oct 14 09:40:15 compute-0 epic_nash[417108]:     "1": [
Oct 14 09:40:15 compute-0 epic_nash[417108]:         {
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "devices": [
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "/dev/loop4"
Oct 14 09:40:15 compute-0 epic_nash[417108]:             ],
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "lv_name": "ceph_lv1",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "lv_size": "21470642176",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "name": "ceph_lv1",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "tags": {
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.cluster_name": "ceph",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.crush_device_class": "",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.encrypted": "0",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.osd_id": "1",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.type": "block",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.vdo": "0"
Oct 14 09:40:15 compute-0 epic_nash[417108]:             },
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "type": "block",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "vg_name": "ceph_vg1"
Oct 14 09:40:15 compute-0 epic_nash[417108]:         }
Oct 14 09:40:15 compute-0 epic_nash[417108]:     ],
Oct 14 09:40:15 compute-0 epic_nash[417108]:     "2": [
Oct 14 09:40:15 compute-0 epic_nash[417108]:         {
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "devices": [
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "/dev/loop5"
Oct 14 09:40:15 compute-0 epic_nash[417108]:             ],
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "lv_name": "ceph_lv2",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "lv_size": "21470642176",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "name": "ceph_lv2",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "tags": {
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.cluster_name": "ceph",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.crush_device_class": "",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.encrypted": "0",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.osd_id": "2",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.type": "block",
Oct 14 09:40:15 compute-0 epic_nash[417108]:                 "ceph.vdo": "0"
Oct 14 09:40:15 compute-0 epic_nash[417108]:             },
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "type": "block",
Oct 14 09:40:15 compute-0 epic_nash[417108]:             "vg_name": "ceph_vg2"
Oct 14 09:40:15 compute-0 epic_nash[417108]:         }
Oct 14 09:40:15 compute-0 epic_nash[417108]:     ]
Oct 14 09:40:15 compute-0 epic_nash[417108]: }
Oct 14 09:40:15 compute-0 systemd[1]: libpod-d57cd8fa4142e603498092a561405465ddd635736f210ec60c456a7668f1ba3b.scope: Deactivated successfully.
Oct 14 09:40:15 compute-0 podman[417091]: 2025-10-14 09:40:15.430993415 +0000 UTC m=+0.960573294 container died d57cd8fa4142e603498092a561405465ddd635736f210ec60c456a7668f1ba3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 09:40:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-b69f33a28ee889913e815240025a046ee2ddff697a3015c79de43c95fbd01b10-merged.mount: Deactivated successfully.
Oct 14 09:40:15 compute-0 podman[417091]: 2025-10-14 09:40:15.512242909 +0000 UTC m=+1.041822758 container remove d57cd8fa4142e603498092a561405465ddd635736f210ec60c456a7668f1ba3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 09:40:15 compute-0 systemd[1]: libpod-conmon-d57cd8fa4142e603498092a561405465ddd635736f210ec60c456a7668f1ba3b.scope: Deactivated successfully.
Oct 14 09:40:15 compute-0 sudo[416985]: pam_unix(sudo:session): session closed for user root
Oct 14 09:40:15 compute-0 sudo[417131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:40:15 compute-0 sudo[417131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:40:15 compute-0 sudo[417131]: pam_unix(sudo:session): session closed for user root
Oct 14 09:40:15 compute-0 nova_compute[259627]: 2025-10-14 09:40:15.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:15 compute-0 sudo[417156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:40:15 compute-0 sudo[417156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:40:15 compute-0 sudo[417156]: pam_unix(sudo:session): session closed for user root
Oct 14 09:40:15 compute-0 sudo[417181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:40:15 compute-0 sudo[417181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:40:15 compute-0 sudo[417181]: pam_unix(sudo:session): session closed for user root
Oct 14 09:40:15 compute-0 sudo[417206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:40:15 compute-0 sudo[417206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:40:16 compute-0 ceph-mon[74249]: pgmap v2613: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 11 KiB/s wr, 30 op/s
Oct 14 09:40:16 compute-0 podman[417272]: 2025-10-14 09:40:16.313875641 +0000 UTC m=+0.055623366 container create d771296d53b636d7f9159d06d4981f612b4517261b1eca27ed521951316a5352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 09:40:16 compute-0 systemd[1]: Started libpod-conmon-d771296d53b636d7f9159d06d4981f612b4517261b1eca27ed521951316a5352.scope.
Oct 14 09:40:16 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:40:16 compute-0 podman[417272]: 2025-10-14 09:40:16.292836754 +0000 UTC m=+0.034584459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:40:16 compute-0 podman[417272]: 2025-10-14 09:40:16.408161815 +0000 UTC m=+0.149909590 container init d771296d53b636d7f9159d06d4981f612b4517261b1eca27ed521951316a5352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_golick, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 09:40:16 compute-0 podman[417272]: 2025-10-14 09:40:16.418948999 +0000 UTC m=+0.160696684 container start d771296d53b636d7f9159d06d4981f612b4517261b1eca27ed521951316a5352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_golick, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:40:16 compute-0 laughing_golick[417288]: 167 167
Oct 14 09:40:16 compute-0 systemd[1]: libpod-d771296d53b636d7f9159d06d4981f612b4517261b1eca27ed521951316a5352.scope: Deactivated successfully.
Oct 14 09:40:16 compute-0 podman[417272]: 2025-10-14 09:40:16.42795403 +0000 UTC m=+0.169701715 container attach d771296d53b636d7f9159d06d4981f612b4517261b1eca27ed521951316a5352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 09:40:16 compute-0 podman[417272]: 2025-10-14 09:40:16.428372521 +0000 UTC m=+0.170120206 container died d771296d53b636d7f9159d06d4981f612b4517261b1eca27ed521951316a5352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 09:40:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa159873990fcc3ef8963c34bfba0ab1e7b1a3fc97cd2bc2c5979d5764f87dec-merged.mount: Deactivated successfully.
Oct 14 09:40:16 compute-0 podman[417272]: 2025-10-14 09:40:16.492948635 +0000 UTC m=+0.234696400 container remove d771296d53b636d7f9159d06d4981f612b4517261b1eca27ed521951316a5352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 09:40:16 compute-0 systemd[1]: libpod-conmon-d771296d53b636d7f9159d06d4981f612b4517261b1eca27ed521951316a5352.scope: Deactivated successfully.
Oct 14 09:40:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 12 KiB/s wr, 58 op/s
Oct 14 09:40:16 compute-0 podman[417314]: 2025-10-14 09:40:16.695421713 +0000 UTC m=+0.040678659 container create f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:40:16 compute-0 systemd[1]: Started libpod-conmon-f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0.scope.
Oct 14 09:40:16 compute-0 podman[417314]: 2025-10-14 09:40:16.67654942 +0000 UTC m=+0.021806396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:40:16 compute-0 nova_compute[259627]: 2025-10-14 09:40:16.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:16 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:40:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f2f8ff61051b299f658afda6e43510f6162d4f6ac2c225212d0b274a043330/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:40:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f2f8ff61051b299f658afda6e43510f6162d4f6ac2c225212d0b274a043330/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:40:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f2f8ff61051b299f658afda6e43510f6162d4f6ac2c225212d0b274a043330/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:40:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f2f8ff61051b299f658afda6e43510f6162d4f6ac2c225212d0b274a043330/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:40:16 compute-0 podman[417314]: 2025-10-14 09:40:16.814058324 +0000 UTC m=+0.159315300 container init f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:40:16 compute-0 podman[417314]: 2025-10-14 09:40:16.830269732 +0000 UTC m=+0.175526678 container start f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 09:40:16 compute-0 podman[417314]: 2025-10-14 09:40:16.833529612 +0000 UTC m=+0.178786598 container attach f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 09:40:17 compute-0 practical_euler[417331]: {
Oct 14 09:40:17 compute-0 practical_euler[417331]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:40:17 compute-0 practical_euler[417331]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:40:17 compute-0 practical_euler[417331]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:40:17 compute-0 practical_euler[417331]:         "osd_id": 2,
Oct 14 09:40:17 compute-0 practical_euler[417331]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:40:17 compute-0 practical_euler[417331]:         "type": "bluestore"
Oct 14 09:40:17 compute-0 practical_euler[417331]:     },
Oct 14 09:40:17 compute-0 practical_euler[417331]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:40:17 compute-0 practical_euler[417331]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:40:17 compute-0 practical_euler[417331]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:40:17 compute-0 practical_euler[417331]:         "osd_id": 1,
Oct 14 09:40:17 compute-0 practical_euler[417331]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:40:17 compute-0 practical_euler[417331]:         "type": "bluestore"
Oct 14 09:40:17 compute-0 practical_euler[417331]:     },
Oct 14 09:40:17 compute-0 practical_euler[417331]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:40:17 compute-0 practical_euler[417331]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:40:17 compute-0 practical_euler[417331]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:40:17 compute-0 practical_euler[417331]:         "osd_id": 0,
Oct 14 09:40:17 compute-0 practical_euler[417331]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:40:17 compute-0 practical_euler[417331]:         "type": "bluestore"
Oct 14 09:40:17 compute-0 practical_euler[417331]:     }
Oct 14 09:40:17 compute-0 practical_euler[417331]: }
Oct 14 09:40:17 compute-0 systemd[1]: libpod-f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0.scope: Deactivated successfully.
Oct 14 09:40:17 compute-0 systemd[1]: libpod-f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0.scope: Consumed 1.067s CPU time.
Oct 14 09:40:17 compute-0 podman[417314]: 2025-10-14 09:40:17.891318551 +0000 UTC m=+1.236575537 container died f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 09:40:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1f2f8ff61051b299f658afda6e43510f6162d4f6ac2c225212d0b274a043330-merged.mount: Deactivated successfully.
Oct 14 09:40:17 compute-0 podman[417314]: 2025-10-14 09:40:17.972205755 +0000 UTC m=+1.317462741 container remove f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 09:40:17 compute-0 systemd[1]: libpod-conmon-f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0.scope: Deactivated successfully.
Oct 14 09:40:18 compute-0 sudo[417206]: pam_unix(sudo:session): session closed for user root
Oct 14 09:40:18 compute-0 ceph-mon[74249]: pgmap v2614: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 12 KiB/s wr, 58 op/s
Oct 14 09:40:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:40:18 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:40:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:40:18 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:40:18 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 8a11636b-eb9e-4633-9e4b-c797b8f2ac36 does not exist
Oct 14 09:40:18 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 6c0c4242-e338-43cb-8466-c455c0cd84c2 does not exist
Oct 14 09:40:18 compute-0 sudo[417378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:40:18 compute-0 sudo[417378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:40:18 compute-0 sudo[417378]: pam_unix(sudo:session): session closed for user root
Oct 14 09:40:18 compute-0 sudo[417403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:40:18 compute-0 sudo[417403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:40:18 compute-0 sudo[417403]: pam_unix(sudo:session): session closed for user root
Oct 14 09:40:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:40:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 14 09:40:19 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:40:19 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:40:19 compute-0 nova_compute[259627]: 2025-10-14 09:40:19.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:19 compute-0 nova_compute[259627]: 2025-10-14 09:40:19.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:20 compute-0 ceph-mon[74249]: pgmap v2615: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 14 09:40:20 compute-0 nova_compute[259627]: 2025-10-14 09:40:20.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 14 09:40:21 compute-0 nova_compute[259627]: 2025-10-14 09:40:21.633 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434806.6306548, 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:40:21 compute-0 nova_compute[259627]: 2025-10-14 09:40:21.634 2 INFO nova.compute.manager [-] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] VM Stopped (Lifecycle Event)
Oct 14 09:40:21 compute-0 nova_compute[259627]: 2025-10-14 09:40:21.664 2 DEBUG nova.compute.manager [None req-9699c253-86b8-45e5-9a9b-1895ee345424 - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:40:21 compute-0 nova_compute[259627]: 2025-10-14 09:40:21.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:22 compute-0 ceph-mon[74249]: pgmap v2616: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 14 09:40:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:40:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:40:24 compute-0 ceph-mon[74249]: pgmap v2617: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:40:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:40:25 compute-0 nova_compute[259627]: 2025-10-14 09:40:25.569 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434810.568186, e5b13156-71d2-4a9c-be63-1beebe1ca3fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:40:25 compute-0 nova_compute[259627]: 2025-10-14 09:40:25.570 2 INFO nova.compute.manager [-] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] VM Stopped (Lifecycle Event)
Oct 14 09:40:25 compute-0 nova_compute[259627]: 2025-10-14 09:40:25.598 2 DEBUG nova.compute.manager [None req-abda6417-3e73-4745-b51c-f2d63956071a - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:40:25 compute-0 nova_compute[259627]: 2025-10-14 09:40:25.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:26 compute-0 ceph-mon[74249]: pgmap v2618: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:40:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:40:26 compute-0 nova_compute[259627]: 2025-10-14 09:40:26.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:27 compute-0 podman[417430]: 2025-10-14 09:40:27.696381744 +0000 UTC m=+0.097922524 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 14 09:40:27 compute-0 podman[417429]: 2025-10-14 09:40:27.702144585 +0000 UTC m=+0.105432128 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 09:40:28 compute-0 ceph-mon[74249]: pgmap v2619: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 09:40:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:40:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:28 compute-0 nova_compute[259627]: 2025-10-14 09:40:28.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:40:30 compute-0 ceph-mon[74249]: pgmap v2620: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:30 compute-0 nova_compute[259627]: 2025-10-14 09:40:30.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:31 compute-0 nova_compute[259627]: 2025-10-14 09:40:31.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:32 compute-0 ceph-mon[74249]: pgmap v2621: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:40:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:40:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:40:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:40:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:40:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:40:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:40:32
Oct 14 09:40:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:40:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:40:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'backups', 'images', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes']
Oct 14 09:40:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:40:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:40:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:40:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:40:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:40:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:40:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:40:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:40:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:40:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:40:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:40:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:40:34 compute-0 ceph-mon[74249]: pgmap v2622: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:35.718 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:40:35 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:35.719 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:40:35 compute-0 nova_compute[259627]: 2025-10-14 09:40:35.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:36 compute-0 ceph-mon[74249]: pgmap v2623: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:36.147 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:1e:41 10.100.0.2 2001:db8::f816:3eff:fe88:1e41'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:1e41/64', 'neutron:device_id': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42692f54-9e26-49af-aa3a-b5cb78b0a2ae, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1bf2b41c-8c9f-45cd-aeb9-2459d1373791) old=Port_Binding(mac=['fa:16:3e:88:1e:41 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:40:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:36.149 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1bf2b41c-8c9f-45cd-aeb9-2459d1373791 in datapath d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 updated
Oct 14 09:40:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:36.151 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2d242a4-fdb7-41f9-9ee7-4e1b17687d68, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:40:36 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:36.153 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f5fd419d-f017-4d39-8c29-d61012445fb5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:36 compute-0 nova_compute[259627]: 2025-10-14 09:40:36.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:38 compute-0 ceph-mon[74249]: pgmap v2624: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:40:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:40 compute-0 ceph-mon[74249]: pgmap v2625: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:40.163 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:1e:41 10.100.0.2 2001:db8:0:1:f816:3eff:fe88:1e41 2001:db8::f816:3eff:fe88:1e41'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe88:1e41/64 2001:db8::f816:3eff:fe88:1e41/64', 'neutron:device_id': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42692f54-9e26-49af-aa3a-b5cb78b0a2ae, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1bf2b41c-8c9f-45cd-aeb9-2459d1373791) old=Port_Binding(mac=['fa:16:3e:88:1e:41 10.100.0.2 2001:db8::f816:3eff:fe88:1e41'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:1e41/64', 'neutron:device_id': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:40:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:40.165 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1bf2b41c-8c9f-45cd-aeb9-2459d1373791 in datapath d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 updated
Oct 14 09:40:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:40.167 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2d242a4-fdb7-41f9-9ee7-4e1b17687d68, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:40:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:40.168 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[75df4e3c-26cf-4c47-aa65-3153c94adeb0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:40.721 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:40:40 compute-0 nova_compute[259627]: 2025-10-14 09:40:40.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:41 compute-0 ceph-mon[74249]: pgmap v2626: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:41 compute-0 podman[417468]: 2025-10-14 09:40:41.662676625 +0000 UTC m=+0.070386599 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 14 09:40:41 compute-0 podman[417467]: 2025-10-14 09:40:41.707939845 +0000 UTC m=+0.126950747 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 09:40:41 compute-0 nova_compute[259627]: 2025-10-14 09:40:41.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:40:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:40:43 compute-0 ceph-mon[74249]: pgmap v2627: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:45 compute-0 nova_compute[259627]: 2025-10-14 09:40:45.114 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "0239a56b-babd-4c44-b52b-ade80229be78" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:45 compute-0 nova_compute[259627]: 2025-10-14 09:40:45.114 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:45 compute-0 nova_compute[259627]: 2025-10-14 09:40:45.142 2 DEBUG nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:40:45 compute-0 nova_compute[259627]: 2025-10-14 09:40:45.218 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:45 compute-0 nova_compute[259627]: 2025-10-14 09:40:45.219 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:45 compute-0 nova_compute[259627]: 2025-10-14 09:40:45.226 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:40:45 compute-0 nova_compute[259627]: 2025-10-14 09:40:45.227 2 INFO nova.compute.claims [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:40:45 compute-0 nova_compute[259627]: 2025-10-14 09:40:45.324 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:40:45 compute-0 ceph-mon[74249]: pgmap v2628: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:40:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1856832795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:40:45 compute-0 nova_compute[259627]: 2025-10-14 09:40:45.783 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:40:45 compute-0 nova_compute[259627]: 2025-10-14 09:40:45.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:45 compute-0 nova_compute[259627]: 2025-10-14 09:40:45.823 2 DEBUG nova.compute.provider_tree [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:40:45 compute-0 nova_compute[259627]: 2025-10-14 09:40:45.842 2 DEBUG nova.scheduler.client.report [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:40:45 compute-0 nova_compute[259627]: 2025-10-14 09:40:45.870 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:45 compute-0 nova_compute[259627]: 2025-10-14 09:40:45.871 2 DEBUG nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:40:45 compute-0 nova_compute[259627]: 2025-10-14 09:40:45.925 2 DEBUG nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:40:45 compute-0 nova_compute[259627]: 2025-10-14 09:40:45.926 2 DEBUG nova.network.neutron [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:40:45 compute-0 nova_compute[259627]: 2025-10-14 09:40:45.960 2 INFO nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:40:45 compute-0 nova_compute[259627]: 2025-10-14 09:40:45.988 2 DEBUG nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.079 2 DEBUG nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.081 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.082 2 INFO nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Creating image(s)
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.120 2 DEBUG nova.storage.rbd_utils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 0239a56b-babd-4c44-b52b-ade80229be78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.146 2 DEBUG nova.storage.rbd_utils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 0239a56b-babd-4c44-b52b-ade80229be78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.169 2 DEBUG nova.storage.rbd_utils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 0239a56b-babd-4c44-b52b-ade80229be78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.174 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.215 2 DEBUG nova.policy [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.259 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.260 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.261 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.261 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.291 2 DEBUG nova.storage.rbd_utils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 0239a56b-babd-4c44-b52b-ade80229be78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.295 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 0239a56b-babd-4c44-b52b-ade80229be78_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.609 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 0239a56b-babd-4c44-b52b-ade80229be78_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.669 2 DEBUG nova.storage.rbd_utils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image 0239a56b-babd-4c44-b52b-ade80229be78_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:40:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:46 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1856832795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.778 2 DEBUG nova.objects.instance [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid 0239a56b-babd-4c44-b52b-ade80229be78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.839 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.840 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Ensure instance console log exists: /var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.841 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.841 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.842 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:46 compute-0 nova_compute[259627]: 2025-10-14 09:40:46.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:47 compute-0 ceph-mon[74249]: pgmap v2629: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:47 compute-0 nova_compute[259627]: 2025-10-14 09:40:47.832 2 DEBUG nova.network.neutron [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Successfully created port: 0b5d3762-db25-4cc9-90f3-79d3eb662378 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:40:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:40:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:48 compute-0 nova_compute[259627]: 2025-10-14 09:40:48.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:40:49 compute-0 nova_compute[259627]: 2025-10-14 09:40:49.315 2 DEBUG nova.network.neutron [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Successfully updated port: 0b5d3762-db25-4cc9-90f3-79d3eb662378 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:40:49 compute-0 nova_compute[259627]: 2025-10-14 09:40:49.334 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:40:49 compute-0 nova_compute[259627]: 2025-10-14 09:40:49.334 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:40:49 compute-0 nova_compute[259627]: 2025-10-14 09:40:49.335 2 DEBUG nova.network.neutron [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:40:49 compute-0 nova_compute[259627]: 2025-10-14 09:40:49.522 2 DEBUG nova.compute.manager [req-aa9d3cef-fc04-4e03-87a7-45fc9e11c9fa req-2a116f53-f356-4278-b6ea-66a1ff2cf753 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received event network-changed-0b5d3762-db25-4cc9-90f3-79d3eb662378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:40:49 compute-0 nova_compute[259627]: 2025-10-14 09:40:49.523 2 DEBUG nova.compute.manager [req-aa9d3cef-fc04-4e03-87a7-45fc9e11c9fa req-2a116f53-f356-4278-b6ea-66a1ff2cf753 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Refreshing instance network info cache due to event network-changed-0b5d3762-db25-4cc9-90f3-79d3eb662378. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:40:49 compute-0 nova_compute[259627]: 2025-10-14 09:40:49.523 2 DEBUG oslo_concurrency.lockutils [req-aa9d3cef-fc04-4e03-87a7-45fc9e11c9fa req-2a116f53-f356-4278-b6ea-66a1ff2cf753 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:40:49 compute-0 nova_compute[259627]: 2025-10-14 09:40:49.761 2 DEBUG nova.network.neutron [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:40:49 compute-0 ceph-mon[74249]: pgmap v2630: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:40:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:40:50 compute-0 nova_compute[259627]: 2025-10-14 09:40:50.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:51 compute-0 ceph-mon[74249]: pgmap v2631: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:40:51 compute-0 nova_compute[259627]: 2025-10-14 09:40:51.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.368 2 DEBUG nova.network.neutron [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Updating instance_info_cache with network_info: [{"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.398 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.398 2 DEBUG nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Instance network_info: |[{"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.398 2 DEBUG oslo_concurrency.lockutils [req-aa9d3cef-fc04-4e03-87a7-45fc9e11c9fa req-2a116f53-f356-4278-b6ea-66a1ff2cf753 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.399 2 DEBUG nova.network.neutron [req-aa9d3cef-fc04-4e03-87a7-45fc9e11c9fa req-2a116f53-f356-4278-b6ea-66a1ff2cf753 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Refreshing network info cache for port 0b5d3762-db25-4cc9-90f3-79d3eb662378 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.403 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Start _get_guest_xml network_info=[{"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.409 2 WARNING nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.414 2 DEBUG nova.virt.libvirt.host [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.415 2 DEBUG nova.virt.libvirt.host [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.419 2 DEBUG nova.virt.libvirt.host [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.419 2 DEBUG nova.virt.libvirt.host [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.420 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.420 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.421 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.421 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.421 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.422 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.422 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.422 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.422 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.423 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.423 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.423 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.426 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:40:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:40:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:40:52 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/397886194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.911 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.934 2 DEBUG nova.storage.rbd_utils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 0239a56b-babd-4c44-b52b-ade80229be78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:40:52 compute-0 nova_compute[259627]: 2025-10-14 09:40:52.938 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:40:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:40:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:40:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/221020274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.380 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.383 2 DEBUG nova.virt.libvirt.vif [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:40:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-35816390',display_name='tempest-TestGettingAddress-server-35816390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-35816390',id=150,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMlxPr0cTZ8keQZTmg22ARWYe0xInByLkMD0cYdsQADwvB4qpdeYqfVEDfHnweHS+Sevb4jVHNf0dCnB4WMRrDqSKUzLaj+iYBC6weCzrTPlPHZbuNpXR0FfjzztQHSYpg==',key_name='tempest-TestGettingAddress-1794090516',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-r3oxjpue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:40:46Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=0239a56b-babd-4c44-b52b-ade80229be78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.384 2 DEBUG nova.network.os_vif_util [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.386 2 DEBUG nova.network.os_vif_util [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:ee:6c,bridge_name='br-int',has_traffic_filtering=True,id=0b5d3762-db25-4cc9-90f3-79d3eb662378,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b5d3762-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.388 2 DEBUG nova.objects.instance [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 0239a56b-babd-4c44-b52b-ade80229be78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.419 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:40:53 compute-0 nova_compute[259627]:   <uuid>0239a56b-babd-4c44-b52b-ade80229be78</uuid>
Oct 14 09:40:53 compute-0 nova_compute[259627]:   <name>instance-00000096</name>
Oct 14 09:40:53 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:40:53 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:40:53 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <nova:name>tempest-TestGettingAddress-server-35816390</nova:name>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:40:52</nova:creationTime>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:40:53 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:40:53 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:40:53 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:40:53 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:40:53 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:40:53 compute-0 nova_compute[259627]:         <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 09:40:53 compute-0 nova_compute[259627]:         <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:40:53 compute-0 nova_compute[259627]:         <nova:port uuid="0b5d3762-db25-4cc9-90f3-79d3eb662378">
Oct 14 09:40:53 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe37:ee6c" ipVersion="6"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe37:ee6c" ipVersion="6"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:40:53 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:40:53 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <system>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <entry name="serial">0239a56b-babd-4c44-b52b-ade80229be78</entry>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <entry name="uuid">0239a56b-babd-4c44-b52b-ade80229be78</entry>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     </system>
Oct 14 09:40:53 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:40:53 compute-0 nova_compute[259627]:   <os>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:   </os>
Oct 14 09:40:53 compute-0 nova_compute[259627]:   <features>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:   </features>
Oct 14 09:40:53 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:40:53 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:40:53 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/0239a56b-babd-4c44-b52b-ade80229be78_disk">
Oct 14 09:40:53 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       </source>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:40:53 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/0239a56b-babd-4c44-b52b-ade80229be78_disk.config">
Oct 14 09:40:53 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       </source>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:40:53 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:37:ee:6c"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <target dev="tap0b5d3762-db"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78/console.log" append="off"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <video>
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     </video>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:40:53 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:40:53 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:40:53 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:40:53 compute-0 nova_compute[259627]: </domain>
Oct 14 09:40:53 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.421 2 DEBUG nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Preparing to wait for external event network-vif-plugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.422 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "0239a56b-babd-4c44-b52b-ade80229be78-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.423 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.423 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.424 2 DEBUG nova.virt.libvirt.vif [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:40:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-35816390',display_name='tempest-TestGettingAddress-server-35816390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-35816390',id=150,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMlxPr0cTZ8keQZTmg22ARWYe0xInByLkMD0cYdsQADwvB4qpdeYqfVEDfHnweHS+Sevb4jVHNf0dCnB4WMRrDqSKUzLaj+iYBC6weCzrTPlPHZbuNpXR0FfjzztQHSYpg==',key_name='tempest-TestGettingAddress-1794090516',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-r3oxjpue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:40:46Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=0239a56b-babd-4c44-b52b-ade80229be78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.425 2 DEBUG nova.network.os_vif_util [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.426 2 DEBUG nova.network.os_vif_util [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:ee:6c,bridge_name='br-int',has_traffic_filtering=True,id=0b5d3762-db25-4cc9-90f3-79d3eb662378,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b5d3762-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.427 2 DEBUG os_vif [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ee:6c,bridge_name='br-int',has_traffic_filtering=True,id=0b5d3762-db25-4cc9-90f3-79d3eb662378,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b5d3762-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.430 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.435 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b5d3762-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b5d3762-db, col_values=(('external_ids', {'iface-id': '0b5d3762-db25-4cc9-90f3-79d3eb662378', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:ee:6c', 'vm-uuid': '0239a56b-babd-4c44-b52b-ade80229be78'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:53 compute-0 NetworkManager[44885]: <info>  [1760434853.4396] manager: (tap0b5d3762-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/674)
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.453 2 INFO os_vif [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ee:6c,bridge_name='br-int',has_traffic_filtering=True,id=0b5d3762-db25-4cc9-90f3-79d3eb662378,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b5d3762-db')
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.535 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.536 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.536 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:37:ee:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.537 2 INFO nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Using config drive
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.574 2 DEBUG nova.storage.rbd_utils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 0239a56b-babd-4c44-b52b-ade80229be78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:40:53 compute-0 ceph-mon[74249]: pgmap v2632: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:40:53 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/397886194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:40:53 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/221020274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:40:53 compute-0 nova_compute[259627]: 2025-10-14 09:40:53.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:40:54 compute-0 nova_compute[259627]: 2025-10-14 09:40:54.349 2 INFO nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Creating config drive at /var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78/disk.config
Oct 14 09:40:54 compute-0 nova_compute[259627]: 2025-10-14 09:40:54.359 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmk9fmaip execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:40:54 compute-0 nova_compute[259627]: 2025-10-14 09:40:54.515 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmk9fmaip" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:40:54 compute-0 nova_compute[259627]: 2025-10-14 09:40:54.560 2 DEBUG nova.storage.rbd_utils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 0239a56b-babd-4c44-b52b-ade80229be78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:40:54 compute-0 nova_compute[259627]: 2025-10-14 09:40:54.567 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78/disk.config 0239a56b-babd-4c44-b52b-ade80229be78_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:40:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:40:54 compute-0 nova_compute[259627]: 2025-10-14 09:40:54.789 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78/disk.config 0239a56b-babd-4c44-b52b-ade80229be78_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:40:54 compute-0 nova_compute[259627]: 2025-10-14 09:40:54.791 2 INFO nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Deleting local config drive /var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78/disk.config because it was imported into RBD.
Oct 14 09:40:54 compute-0 kernel: tap0b5d3762-db: entered promiscuous mode
Oct 14 09:40:54 compute-0 NetworkManager[44885]: <info>  [1760434854.8743] manager: (tap0b5d3762-db): new Tun device (/org/freedesktop/NetworkManager/Devices/675)
Oct 14 09:40:54 compute-0 nova_compute[259627]: 2025-10-14 09:40:54.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:54 compute-0 ovn_controller[152662]: 2025-10-14T09:40:54Z|01648|binding|INFO|Claiming lport 0b5d3762-db25-4cc9-90f3-79d3eb662378 for this chassis.
Oct 14 09:40:54 compute-0 ovn_controller[152662]: 2025-10-14T09:40:54Z|01649|binding|INFO|0b5d3762-db25-4cc9-90f3-79d3eb662378: Claiming fa:16:3e:37:ee:6c 10.100.0.12 2001:db8:0:1:f816:3eff:fe37:ee6c 2001:db8::f816:3eff:fe37:ee6c
Oct 14 09:40:54 compute-0 nova_compute[259627]: 2025-10-14 09:40:54.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:54 compute-0 nova_compute[259627]: 2025-10-14 09:40:54.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.903 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:ee:6c 10.100.0.12 2001:db8:0:1:f816:3eff:fe37:ee6c 2001:db8::f816:3eff:fe37:ee6c'], port_security=['fa:16:3e:37:ee:6c 10.100.0.12 2001:db8:0:1:f816:3eff:fe37:ee6c 2001:db8::f816:3eff:fe37:ee6c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8:0:1:f816:3eff:fe37:ee6c/64 2001:db8::f816:3eff:fe37:ee6c/64', 'neutron:device_id': '0239a56b-babd-4c44-b52b-ade80229be78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0efff049-0165-4ea1-b912-974883009802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42692f54-9e26-49af-aa3a-b5cb78b0a2ae, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0b5d3762-db25-4cc9-90f3-79d3eb662378) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:40:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.904 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0b5d3762-db25-4cc9-90f3-79d3eb662378 in datapath d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 bound to our chassis
Oct 14 09:40:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.905 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2d242a4-fdb7-41f9-9ee7-4e1b17687d68
Oct 14 09:40:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.926 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[483525cd-d108-4a9a-b7af-20bb14e6d786]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.927 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd2d242a4-f1 in ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:40:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.931 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd2d242a4-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:40:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.931 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4e352cf0-d727-48f0-a634-e51e4dac3f00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:54 compute-0 systemd-machined[214636]: New machine qemu-183-instance-00000096.
Oct 14 09:40:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.932 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3d12b999-d69c-43dc-a03d-8c3b779a572b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.946 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[4a242d89-d3f1-4257-8422-f5a4cf148204]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:54 compute-0 systemd[1]: Started Virtual Machine qemu-183-instance-00000096.
Oct 14 09:40:54 compute-0 nova_compute[259627]: 2025-10-14 09:40:54.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:54 compute-0 nova_compute[259627]: 2025-10-14 09:40:54.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:40:54 compute-0 ovn_controller[152662]: 2025-10-14T09:40:54Z|01650|binding|INFO|Setting lport 0b5d3762-db25-4cc9-90f3-79d3eb662378 ovn-installed in OVS
Oct 14 09:40:54 compute-0 ovn_controller[152662]: 2025-10-14T09:40:54Z|01651|binding|INFO|Setting lport 0b5d3762-db25-4cc9-90f3-79d3eb662378 up in Southbound
Oct 14 09:40:54 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.982 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[49e637e5-a95a-4678-adb2-6e180ef1164c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:54 compute-0 nova_compute[259627]: 2025-10-14 09:40:54.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:54 compute-0 systemd-udevd[417840]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:40:55 compute-0 NetworkManager[44885]: <info>  [1760434855.0031] device (tap0b5d3762-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:40:55 compute-0 NetworkManager[44885]: <info>  [1760434855.0041] device (tap0b5d3762-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.025 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[41b92a0e-0c03-4349-9c8b-b413c963d18f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:55 compute-0 systemd-udevd[417844]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:40:55 compute-0 NetworkManager[44885]: <info>  [1760434855.0346] manager: (tapd2d242a4-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/676)
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.035 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a49ad672-dc3b-480d-86c7-cb091e6aacdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.078 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7881f8f7-54c9-4378-b362-3508590b3708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.083 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cbde498f-fff6-43ba-a5e8-98e949f44e69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:55 compute-0 NetworkManager[44885]: <info>  [1760434855.1168] device (tapd2d242a4-f0): carrier: link connected
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.123 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[681c9eb5-ef28-4b70-ab18-7211112345cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.147 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ba55fe5d-d36c-4c9e-967b-83bc53ac0b1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2d242a4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:1e:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 863387, 'reachable_time': 34263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417870, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.170 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[949c792d-4c76-4a9b-8bc0-7d08ee542f15]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe88:1e41'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 863387, 'tstamp': 863387}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417871, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.195 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[759c8ef6-333b-40fd-91a8-720aa41e4e7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2d242a4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:1e:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 863387, 'reachable_time': 34263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 417872, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.243 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bb104c3a-0b82-4055-9a37-7c101f3277ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.316 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a83a13a2-ad9d-48e3-a40b-dce3c66d7ea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.317 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2d242a4-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.317 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.317 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2d242a4-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:40:55 compute-0 kernel: tapd2d242a4-f0: entered promiscuous mode
Oct 14 09:40:55 compute-0 nova_compute[259627]: 2025-10-14 09:40:55.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:55 compute-0 NetworkManager[44885]: <info>  [1760434855.3202] manager: (tapd2d242a4-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/677)
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.321 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2d242a4-f0, col_values=(('external_ids', {'iface-id': '1bf2b41c-8c9f-45cd-aeb9-2459d1373791'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:40:55 compute-0 ovn_controller[152662]: 2025-10-14T09:40:55Z|01652|binding|INFO|Releasing lport 1bf2b41c-8c9f-45cd-aeb9-2459d1373791 from this chassis (sb_readonly=0)
Oct 14 09:40:55 compute-0 nova_compute[259627]: 2025-10-14 09:40:55.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:55 compute-0 nova_compute[259627]: 2025-10-14 09:40:55.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.341 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d2d242a4-fdb7-41f9-9ee7-4e1b17687d68.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d2d242a4-fdb7-41f9-9ee7-4e1b17687d68.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.342 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7f8c7f68-24f9-4f12-8879-25f1ef5b8dfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.343 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/d2d242a4-fdb7-41f9-9ee7-4e1b17687d68.pid.haproxy
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID d2d242a4-fdb7-41f9-9ee7-4e1b17687d68
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:40:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.344 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'env', 'PROCESS_TAG=haproxy-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d2d242a4-fdb7-41f9-9ee7-4e1b17687d68.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:40:55 compute-0 nova_compute[259627]: 2025-10-14 09:40:55.372 2 DEBUG nova.compute.manager [req-c02771fc-c883-4d0c-b30e-cdcdbcb6e305 req-fa40f779-f510-4489-b407-b4ab2845245b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received event network-vif-plugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:40:55 compute-0 nova_compute[259627]: 2025-10-14 09:40:55.375 2 DEBUG oslo_concurrency.lockutils [req-c02771fc-c883-4d0c-b30e-cdcdbcb6e305 req-fa40f779-f510-4489-b407-b4ab2845245b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "0239a56b-babd-4c44-b52b-ade80229be78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:55 compute-0 nova_compute[259627]: 2025-10-14 09:40:55.376 2 DEBUG oslo_concurrency.lockutils [req-c02771fc-c883-4d0c-b30e-cdcdbcb6e305 req-fa40f779-f510-4489-b407-b4ab2845245b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:55 compute-0 nova_compute[259627]: 2025-10-14 09:40:55.376 2 DEBUG oslo_concurrency.lockutils [req-c02771fc-c883-4d0c-b30e-cdcdbcb6e305 req-fa40f779-f510-4489-b407-b4ab2845245b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:55 compute-0 nova_compute[259627]: 2025-10-14 09:40:55.377 2 DEBUG nova.compute.manager [req-c02771fc-c883-4d0c-b30e-cdcdbcb6e305 req-fa40f779-f510-4489-b407-b4ab2845245b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Processing event network-vif-plugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:40:55 compute-0 nova_compute[259627]: 2025-10-14 09:40:55.622 2 DEBUG nova.network.neutron [req-aa9d3cef-fc04-4e03-87a7-45fc9e11c9fa req-2a116f53-f356-4278-b6ea-66a1ff2cf753 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Updated VIF entry in instance network info cache for port 0b5d3762-db25-4cc9-90f3-79d3eb662378. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:40:55 compute-0 nova_compute[259627]: 2025-10-14 09:40:55.624 2 DEBUG nova.network.neutron [req-aa9d3cef-fc04-4e03-87a7-45fc9e11c9fa req-2a116f53-f356-4278-b6ea-66a1ff2cf753 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Updating instance_info_cache with network_info: [{"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:40:55 compute-0 nova_compute[259627]: 2025-10-14 09:40:55.647 2 DEBUG oslo_concurrency.lockutils [req-aa9d3cef-fc04-4e03-87a7-45fc9e11c9fa req-2a116f53-f356-4278-b6ea-66a1ff2cf753 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:40:55 compute-0 podman[417945]: 2025-10-14 09:40:55.746645644 +0000 UTC m=+0.049823994 container create 7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:40:55 compute-0 systemd[1]: Started libpod-conmon-7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4.scope.
Oct 14 09:40:55 compute-0 podman[417945]: 2025-10-14 09:40:55.71715472 +0000 UTC m=+0.020333090 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:40:55 compute-0 ceph-mon[74249]: pgmap v2633: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:40:55 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:40:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b2f022e3216a77d69566ed9e97088f68d82c271e111cd888b10c372258ed61/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:40:55 compute-0 podman[417945]: 2025-10-14 09:40:55.845567692 +0000 UTC m=+0.148746032 container init 7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 09:40:55 compute-0 podman[417945]: 2025-10-14 09:40:55.851153829 +0000 UTC m=+0.154332169 container start 7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:40:55 compute-0 neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68[417960]: [NOTICE]   (417964) : New worker (417966) forked
Oct 14 09:40:55 compute-0 neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68[417960]: [NOTICE]   (417964) : Loading success.
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.045 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434856.0452204, 0239a56b-babd-4c44-b52b-ade80229be78 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.046 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] VM Started (Lifecycle Event)
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.049 2 DEBUG nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.053 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.058 2 INFO nova.virt.libvirt.driver [-] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Instance spawned successfully.
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.059 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.077 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.084 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.094 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.095 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.096 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.096 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.097 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.098 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.113 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.114 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434856.0453238, 0239a56b-babd-4c44-b52b-ade80229be78 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.114 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] VM Paused (Lifecycle Event)
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.151 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.158 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434856.052432, 0239a56b-babd-4c44-b52b-ade80229be78 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.158 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] VM Resumed (Lifecycle Event)
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.199 2 INFO nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Took 10.12 seconds to spawn the instance on the hypervisor.
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.200 2 DEBUG nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.201 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.210 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.253 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.274 2 INFO nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Took 11.08 seconds to build instance.
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.291 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 14 09:40:56 compute-0 nova_compute[259627]: 2025-10-14 09:40:56.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.049 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.050 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.051 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.051 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.051 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.535 2 DEBUG nova.compute.manager [req-9b9b46b7-403f-4af3-94bb-57ca752ddaa4 req-2282553e-4481-4f79-8f54-a459ccef778e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received event network-vif-plugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.536 2 DEBUG oslo_concurrency.lockutils [req-9b9b46b7-403f-4af3-94bb-57ca752ddaa4 req-2282553e-4481-4f79-8f54-a459ccef778e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "0239a56b-babd-4c44-b52b-ade80229be78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.537 2 DEBUG oslo_concurrency.lockutils [req-9b9b46b7-403f-4af3-94bb-57ca752ddaa4 req-2282553e-4481-4f79-8f54-a459ccef778e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.538 2 DEBUG oslo_concurrency.lockutils [req-9b9b46b7-403f-4af3-94bb-57ca752ddaa4 req-2282553e-4481-4f79-8f54-a459ccef778e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.538 2 DEBUG nova.compute.manager [req-9b9b46b7-403f-4af3-94bb-57ca752ddaa4 req-2282553e-4481-4f79-8f54-a459ccef778e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] No waiting events found dispatching network-vif-plugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.539 2 WARNING nova.compute.manager [req-9b9b46b7-403f-4af3-94bb-57ca752ddaa4 req-2282553e-4481-4f79-8f54-a459ccef778e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received unexpected event network-vif-plugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 for instance with vm_state active and task_state None.
Oct 14 09:40:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:40:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4104448304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.575 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.653 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.654 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:40:57 compute-0 ceph-mon[74249]: pgmap v2634: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 14 09:40:57 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4104448304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.875 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.877 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3457MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.877 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.878 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.950 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 0239a56b-babd-4c44-b52b-ade80229be78 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.951 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:40:57 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.952 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:40:58 compute-0 nova_compute[259627]: 2025-10-14 09:40:57.995 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:40:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:40:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:40:58 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3637702024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:40:58 compute-0 nova_compute[259627]: 2025-10-14 09:40:58.437 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:40:58 compute-0 nova_compute[259627]: 2025-10-14 09:40:58.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:58 compute-0 nova_compute[259627]: 2025-10-14 09:40:58.447 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:40:58 compute-0 nova_compute[259627]: 2025-10-14 09:40:58.467 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:40:58 compute-0 nova_compute[259627]: 2025-10-14 09:40:58.498 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:40:58 compute-0 nova_compute[259627]: 2025-10-14 09:40:58.498 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:40:58 compute-0 podman[418020]: 2025-10-14 09:40:58.658250194 +0000 UTC m=+0.067289332 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:40:58 compute-0 podman[418021]: 2025-10-14 09:40:58.681578047 +0000 UTC m=+0.079436741 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 09:40:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 14 09:40:58 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3637702024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:40:59 compute-0 NetworkManager[44885]: <info>  [1760434859.7375] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/678)
Oct 14 09:40:59 compute-0 NetworkManager[44885]: <info>  [1760434859.7396] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/679)
Oct 14 09:40:59 compute-0 ovn_controller[152662]: 2025-10-14T09:40:59Z|01653|binding|INFO|Releasing lport 1bf2b41c-8c9f-45cd-aeb9-2459d1373791 from this chassis (sb_readonly=0)
Oct 14 09:40:59 compute-0 nova_compute[259627]: 2025-10-14 09:40:59.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:59 compute-0 ovn_controller[152662]: 2025-10-14T09:40:59Z|01654|binding|INFO|Releasing lport 1bf2b41c-8c9f-45cd-aeb9-2459d1373791 from this chassis (sb_readonly=0)
Oct 14 09:40:59 compute-0 nova_compute[259627]: 2025-10-14 09:40:59.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:59 compute-0 nova_compute[259627]: 2025-10-14 09:40:59.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:40:59 compute-0 ceph-mon[74249]: pgmap v2635: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 14 09:41:00 compute-0 nova_compute[259627]: 2025-10-14 09:41:00.374 2 DEBUG nova.compute.manager [req-5da5573e-9bbf-44a6-b46d-b5a1fa1c7306 req-c2ecaf2a-a367-41a5-a156-4bf50ae2d30f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received event network-changed-0b5d3762-db25-4cc9-90f3-79d3eb662378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:41:00 compute-0 nova_compute[259627]: 2025-10-14 09:41:00.375 2 DEBUG nova.compute.manager [req-5da5573e-9bbf-44a6-b46d-b5a1fa1c7306 req-c2ecaf2a-a367-41a5-a156-4bf50ae2d30f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Refreshing instance network info cache due to event network-changed-0b5d3762-db25-4cc9-90f3-79d3eb662378. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:41:00 compute-0 nova_compute[259627]: 2025-10-14 09:41:00.376 2 DEBUG oslo_concurrency.lockutils [req-5da5573e-9bbf-44a6-b46d-b5a1fa1c7306 req-c2ecaf2a-a367-41a5-a156-4bf50ae2d30f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:41:00 compute-0 nova_compute[259627]: 2025-10-14 09:41:00.376 2 DEBUG oslo_concurrency.lockutils [req-5da5573e-9bbf-44a6-b46d-b5a1fa1c7306 req-c2ecaf2a-a367-41a5-a156-4bf50ae2d30f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:41:00 compute-0 nova_compute[259627]: 2025-10-14 09:41:00.377 2 DEBUG nova.network.neutron [req-5da5573e-9bbf-44a6-b46d-b5a1fa1c7306 req-c2ecaf2a-a367-41a5-a156-4bf50ae2d30f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Refreshing network info cache for port 0b5d3762-db25-4cc9-90f3-79d3eb662378 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:41:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:41:01 compute-0 nova_compute[259627]: 2025-10-14 09:41:01.500 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:41:01 compute-0 nova_compute[259627]: 2025-10-14 09:41:01.500 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:41:01 compute-0 nova_compute[259627]: 2025-10-14 09:41:01.501 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:41:01 compute-0 nova_compute[259627]: 2025-10-14 09:41:01.528 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:41:01 compute-0 nova_compute[259627]: 2025-10-14 09:41:01.528 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:41:01 compute-0 nova_compute[259627]: 2025-10-14 09:41:01.529 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:41:01 compute-0 nova_compute[259627]: 2025-10-14 09:41:01.530 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:41:01 compute-0 ceph-mon[74249]: pgmap v2636: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:41:02 compute-0 nova_compute[259627]: 2025-10-14 09:41:02.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:02 compute-0 nova_compute[259627]: 2025-10-14 09:41:02.029 2 DEBUG nova.network.neutron [req-5da5573e-9bbf-44a6-b46d-b5a1fa1c7306 req-c2ecaf2a-a367-41a5-a156-4bf50ae2d30f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Updated VIF entry in instance network info cache for port 0b5d3762-db25-4cc9-90f3-79d3eb662378. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:41:02 compute-0 nova_compute[259627]: 2025-10-14 09:41:02.030 2 DEBUG nova.network.neutron [req-5da5573e-9bbf-44a6-b46d-b5a1fa1c7306 req-c2ecaf2a-a367-41a5-a156-4bf50ae2d30f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Updating instance_info_cache with network_info: [{"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:41:02 compute-0 nova_compute[259627]: 2025-10-14 09:41:02.054 2 DEBUG oslo_concurrency.lockutils [req-5da5573e-9bbf-44a6-b46d-b5a1fa1c7306 req-c2ecaf2a-a367-41a5-a156-4bf50ae2d30f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:41:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:41:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:41:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:41:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:41:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:41:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:41:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:41:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:41:03 compute-0 nova_compute[259627]: 2025-10-14 09:41:03.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:03 compute-0 ceph-mon[74249]: pgmap v2637: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:41:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:41:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:41:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3282492851' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:41:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:41:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3282492851' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:41:05 compute-0 ceph-mon[74249]: pgmap v2638: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:41:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3282492851' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:41:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3282492851' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:41:05 compute-0 nova_compute[259627]: 2025-10-14 09:41:05.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:41:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:41:07 compute-0 nova_compute[259627]: 2025-10-14 09:41:07.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:07.056 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:41:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:07.057 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:41:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:07.057 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:41:07 compute-0 ceph-mon[74249]: pgmap v2639: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:41:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:41:08 compute-0 nova_compute[259627]: 2025-10-14 09:41:08.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:08 compute-0 ovn_controller[152662]: 2025-10-14T09:41:08Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:37:ee:6c 10.100.0.12
Oct 14 09:41:08 compute-0 ovn_controller[152662]: 2025-10-14T09:41:08Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:37:ee:6c 10.100.0.12
Oct 14 09:41:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 14 09:41:09 compute-0 ceph-mon[74249]: pgmap v2640: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 14 09:41:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Oct 14 09:41:11 compute-0 ceph-mon[74249]: pgmap v2641: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Oct 14 09:41:12 compute-0 nova_compute[259627]: 2025-10-14 09:41:12.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:12 compute-0 podman[418064]: 2025-10-14 09:41:12.692399521 +0000 UTC m=+0.097166306 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 09:41:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 14 09:41:12 compute-0 podman[418063]: 2025-10-14 09:41:12.743927305 +0000 UTC m=+0.150034543 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 09:41:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:41:13 compute-0 nova_compute[259627]: 2025-10-14 09:41:13.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:13 compute-0 ceph-mon[74249]: pgmap v2642: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 14 09:41:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 14 09:41:15 compute-0 ceph-mon[74249]: pgmap v2643: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 14 09:41:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:41:17 compute-0 nova_compute[259627]: 2025-10-14 09:41:17.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:17 compute-0 ceph-mon[74249]: pgmap v2644: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:41:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:41:18 compute-0 sudo[418106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:41:18 compute-0 sudo[418106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:41:18 compute-0 sudo[418106]: pam_unix(sudo:session): session closed for user root
Oct 14 09:41:18 compute-0 sudo[418131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:41:18 compute-0 sudo[418131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:41:18 compute-0 sudo[418131]: pam_unix(sudo:session): session closed for user root
Oct 14 09:41:18 compute-0 sudo[418156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:41:18 compute-0 sudo[418156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:41:18 compute-0 sudo[418156]: pam_unix(sudo:session): session closed for user root
Oct 14 09:41:18 compute-0 nova_compute[259627]: 2025-10-14 09:41:18.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:18 compute-0 sudo[418181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:41:18 compute-0 sudo[418181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:41:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:41:19 compute-0 nova_compute[259627]: 2025-10-14 09:41:19.027 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "3e37b67b-524c-4098-9609-97b0b31e72c4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:41:19 compute-0 nova_compute[259627]: 2025-10-14 09:41:19.028 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:41:19 compute-0 nova_compute[259627]: 2025-10-14 09:41:19.045 2 DEBUG nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:41:19 compute-0 sudo[418181]: pam_unix(sudo:session): session closed for user root
Oct 14 09:41:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:41:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:41:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:41:19 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:41:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:41:19 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:41:19 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 91d5641c-5ff0-4b64-a883-88dbe2e9d0b6 does not exist
Oct 14 09:41:19 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev c80beece-36fc-45dc-963d-742d9d98dbef does not exist
Oct 14 09:41:19 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 872df25f-eb6b-43b0-bfaf-a24cbeae476e does not exist
Oct 14 09:41:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:41:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:41:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:41:19 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:41:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:41:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:41:19 compute-0 sudo[418238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:41:19 compute-0 sudo[418238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:41:19 compute-0 sudo[418238]: pam_unix(sudo:session): session closed for user root
Oct 14 09:41:19 compute-0 sudo[418263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:41:19 compute-0 sudo[418263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:41:19 compute-0 sudo[418263]: pam_unix(sudo:session): session closed for user root
Oct 14 09:41:19 compute-0 sudo[418288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:41:19 compute-0 sudo[418288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:41:19 compute-0 sudo[418288]: pam_unix(sudo:session): session closed for user root
Oct 14 09:41:19 compute-0 sudo[418313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:41:19 compute-0 sudo[418313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:41:19 compute-0 podman[418378]: 2025-10-14 09:41:19.950445461 +0000 UTC m=+0.060827003 container create 1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:41:19 compute-0 ceph-mon[74249]: pgmap v2645: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:41:19 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:41:19 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:41:19 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:41:19 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:41:19 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:41:19 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:41:19 compute-0 systemd[1]: Started libpod-conmon-1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc.scope.
Oct 14 09:41:20 compute-0 podman[418378]: 2025-10-14 09:41:19.918245111 +0000 UTC m=+0.028626703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:41:20 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:41:20 compute-0 podman[418378]: 2025-10-14 09:41:20.057173871 +0000 UTC m=+0.167555443 container init 1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 09:41:20 compute-0 podman[418378]: 2025-10-14 09:41:20.066203632 +0000 UTC m=+0.176585164 container start 1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:41:20 compute-0 podman[418378]: 2025-10-14 09:41:20.070669552 +0000 UTC m=+0.181051154 container attach 1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 14 09:41:20 compute-0 wonderful_pascal[418394]: 167 167
Oct 14 09:41:20 compute-0 systemd[1]: libpod-1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc.scope: Deactivated successfully.
Oct 14 09:41:20 compute-0 conmon[418394]: conmon 1cd9cd5b996f3cce9feb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc.scope/container/memory.events
Oct 14 09:41:20 compute-0 podman[418378]: 2025-10-14 09:41:20.073932552 +0000 UTC m=+0.184314064 container died 1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 09:41:20 compute-0 nova_compute[259627]: 2025-10-14 09:41:20.101 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:41:20 compute-0 nova_compute[259627]: 2025-10-14 09:41:20.104 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:41:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-466537f7b95a03f4b98c6f6fa002ee93d6540e9f2d17f6280890b78c64ef4236-merged.mount: Deactivated successfully.
Oct 14 09:41:20 compute-0 nova_compute[259627]: 2025-10-14 09:41:20.118 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:41:20 compute-0 nova_compute[259627]: 2025-10-14 09:41:20.118 2 INFO nova.compute.claims [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:41:20 compute-0 podman[418378]: 2025-10-14 09:41:20.132838537 +0000 UTC m=+0.243220049 container remove 1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 09:41:20 compute-0 systemd[1]: libpod-conmon-1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc.scope: Deactivated successfully.
Oct 14 09:41:20 compute-0 podman[418418]: 2025-10-14 09:41:20.373727119 +0000 UTC m=+0.075622847 container create 47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:41:20 compute-0 systemd[1]: Started libpod-conmon-47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a.scope.
Oct 14 09:41:20 compute-0 podman[418418]: 2025-10-14 09:41:20.342275977 +0000 UTC m=+0.044171745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:41:20 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd67728abedc195b4842ede9b7443b310961ab2531f3b5a8292be0c948954d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd67728abedc195b4842ede9b7443b310961ab2531f3b5a8292be0c948954d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd67728abedc195b4842ede9b7443b310961ab2531f3b5a8292be0c948954d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd67728abedc195b4842ede9b7443b310961ab2531f3b5a8292be0c948954d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd67728abedc195b4842ede9b7443b310961ab2531f3b5a8292be0c948954d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:41:20 compute-0 podman[418418]: 2025-10-14 09:41:20.508718262 +0000 UTC m=+0.210614030 container init 47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 09:41:20 compute-0 podman[418418]: 2025-10-14 09:41:20.525719739 +0000 UTC m=+0.227615457 container start 47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 09:41:20 compute-0 podman[418418]: 2025-10-14 09:41:20.530163388 +0000 UTC m=+0.232059096 container attach 47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 09:41:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:41:21 compute-0 nova_compute[259627]: 2025-10-14 09:41:21.086 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:41:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:41:21 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2887099128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:41:21 compute-0 nova_compute[259627]: 2025-10-14 09:41:21.589 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:41:21 compute-0 nova_compute[259627]: 2025-10-14 09:41:21.597 2 DEBUG nova.compute.provider_tree [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:41:21 compute-0 laughing_sinoussi[418434]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:41:21 compute-0 laughing_sinoussi[418434]: --> relative data size: 1.0
Oct 14 09:41:21 compute-0 laughing_sinoussi[418434]: --> All data devices are unavailable
Oct 14 09:41:21 compute-0 nova_compute[259627]: 2025-10-14 09:41:21.626 2 DEBUG nova.scheduler.client.report [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:41:21 compute-0 nova_compute[259627]: 2025-10-14 09:41:21.647 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:41:21 compute-0 nova_compute[259627]: 2025-10-14 09:41:21.648 2 DEBUG nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:41:21 compute-0 systemd[1]: libpod-47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a.scope: Deactivated successfully.
Oct 14 09:41:21 compute-0 podman[418418]: 2025-10-14 09:41:21.66680121 +0000 UTC m=+1.368696898 container died 47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 09:41:21 compute-0 systemd[1]: libpod-47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a.scope: Consumed 1.096s CPU time.
Oct 14 09:41:21 compute-0 nova_compute[259627]: 2025-10-14 09:41:21.689 2 DEBUG nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:41:21 compute-0 nova_compute[259627]: 2025-10-14 09:41:21.690 2 DEBUG nova.network.neutron [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:41:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fd67728abedc195b4842ede9b7443b310961ab2531f3b5a8292be0c948954d7-merged.mount: Deactivated successfully.
Oct 14 09:41:21 compute-0 nova_compute[259627]: 2025-10-14 09:41:21.717 2 INFO nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:41:21 compute-0 podman[418418]: 2025-10-14 09:41:21.732871821 +0000 UTC m=+1.434767509 container remove 47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:41:21 compute-0 systemd[1]: libpod-conmon-47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a.scope: Deactivated successfully.
Oct 14 09:41:21 compute-0 nova_compute[259627]: 2025-10-14 09:41:21.749 2 DEBUG nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:41:21 compute-0 sudo[418313]: pam_unix(sudo:session): session closed for user root
Oct 14 09:41:21 compute-0 nova_compute[259627]: 2025-10-14 09:41:21.836 2 DEBUG nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:41:21 compute-0 nova_compute[259627]: 2025-10-14 09:41:21.838 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:41:21 compute-0 nova_compute[259627]: 2025-10-14 09:41:21.839 2 INFO nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Creating image(s)
Oct 14 09:41:21 compute-0 nova_compute[259627]: 2025-10-14 09:41:21.866 2 DEBUG nova.storage.rbd_utils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3e37b67b-524c-4098-9609-97b0b31e72c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:41:21 compute-0 sudo[418497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:41:21 compute-0 sudo[418497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:41:21 compute-0 sudo[418497]: pam_unix(sudo:session): session closed for user root
Oct 14 09:41:21 compute-0 nova_compute[259627]: 2025-10-14 09:41:21.911 2 DEBUG nova.storage.rbd_utils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3e37b67b-524c-4098-9609-97b0b31e72c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:41:21 compute-0 nova_compute[259627]: 2025-10-14 09:41:21.949 2 DEBUG nova.storage.rbd_utils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3e37b67b-524c-4098-9609-97b0b31e72c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:41:21 compute-0 nova_compute[259627]: 2025-10-14 09:41:21.953 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:41:21 compute-0 sudo[418547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:41:21 compute-0 sudo[418547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:41:21 compute-0 sudo[418547]: pam_unix(sudo:session): session closed for user root
Oct 14 09:41:22 compute-0 ceph-mon[74249]: pgmap v2646: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:41:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2887099128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:41:22 compute-0 nova_compute[259627]: 2025-10-14 09:41:22.030 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:41:22 compute-0 nova_compute[259627]: 2025-10-14 09:41:22.031 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:41:22 compute-0 nova_compute[259627]: 2025-10-14 09:41:22.032 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:41:22 compute-0 nova_compute[259627]: 2025-10-14 09:41:22.032 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:41:22 compute-0 sudo[418602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:41:22 compute-0 sudo[418602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:41:22 compute-0 sudo[418602]: pam_unix(sudo:session): session closed for user root
Oct 14 09:41:22 compute-0 nova_compute[259627]: 2025-10-14 09:41:22.097 2 DEBUG nova.storage.rbd_utils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3e37b67b-524c-4098-9609-97b0b31e72c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:41:22 compute-0 nova_compute[259627]: 2025-10-14 09:41:22.102 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 3e37b67b-524c-4098-9609-97b0b31e72c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:41:22 compute-0 nova_compute[259627]: 2025-10-14 09:41:22.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:22 compute-0 sudo[418645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:41:22 compute-0 sudo[418645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:41:22 compute-0 nova_compute[259627]: 2025-10-14 09:41:22.200 2 DEBUG nova.policy [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:41:22 compute-0 nova_compute[259627]: 2025-10-14 09:41:22.361 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 3e37b67b-524c-4098-9609-97b0b31e72c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.259s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:41:22 compute-0 nova_compute[259627]: 2025-10-14 09:41:22.485 2 DEBUG nova.storage.rbd_utils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image 3e37b67b-524c-4098-9609-97b0b31e72c4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:41:22 compute-0 nova_compute[259627]: 2025-10-14 09:41:22.605 2 DEBUG nova.objects.instance [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid 3e37b67b-524c-4098-9609-97b0b31e72c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:41:22 compute-0 podman[418791]: 2025-10-14 09:41:22.631484804 +0000 UTC m=+0.044742229 container create 3cbb11f8bbdf53e901ee6008800f08753f87cbd95577addc39eec0f01c5eb694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leavitt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 09:41:22 compute-0 systemd[1]: Started libpod-conmon-3cbb11f8bbdf53e901ee6008800f08753f87cbd95577addc39eec0f01c5eb694.scope.
Oct 14 09:41:22 compute-0 podman[418791]: 2025-10-14 09:41:22.612568219 +0000 UTC m=+0.025825724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:41:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct 14 09:41:22 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:41:22 compute-0 nova_compute[259627]: 2025-10-14 09:41:22.727 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:41:22 compute-0 nova_compute[259627]: 2025-10-14 09:41:22.729 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Ensure instance console log exists: /var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:41:22 compute-0 nova_compute[259627]: 2025-10-14 09:41:22.729 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:41:22 compute-0 nova_compute[259627]: 2025-10-14 09:41:22.730 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:41:22 compute-0 nova_compute[259627]: 2025-10-14 09:41:22.731 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:41:22 compute-0 podman[418791]: 2025-10-14 09:41:22.732292567 +0000 UTC m=+0.145550012 container init 3cbb11f8bbdf53e901ee6008800f08753f87cbd95577addc39eec0f01c5eb694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 09:41:22 compute-0 podman[418791]: 2025-10-14 09:41:22.743542453 +0000 UTC m=+0.156799878 container start 3cbb11f8bbdf53e901ee6008800f08753f87cbd95577addc39eec0f01c5eb694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leavitt, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:41:22 compute-0 podman[418791]: 2025-10-14 09:41:22.74707629 +0000 UTC m=+0.160333715 container attach 3cbb11f8bbdf53e901ee6008800f08753f87cbd95577addc39eec0f01c5eb694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:41:22 compute-0 cool_leavitt[418819]: 167 167
Oct 14 09:41:22 compute-0 systemd[1]: libpod-3cbb11f8bbdf53e901ee6008800f08753f87cbd95577addc39eec0f01c5eb694.scope: Deactivated successfully.
Oct 14 09:41:22 compute-0 podman[418824]: 2025-10-14 09:41:22.815911449 +0000 UTC m=+0.041563981 container died 3cbb11f8bbdf53e901ee6008800f08753f87cbd95577addc39eec0f01c5eb694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leavitt, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 09:41:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a9d115116c42831b425052bd275f8540bbd0a8ae4ceec80b1ed4ef3474f8d84-merged.mount: Deactivated successfully.
Oct 14 09:41:22 compute-0 podman[418824]: 2025-10-14 09:41:22.861262572 +0000 UTC m=+0.086915094 container remove 3cbb11f8bbdf53e901ee6008800f08753f87cbd95577addc39eec0f01c5eb694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leavitt, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 09:41:22 compute-0 systemd[1]: libpod-conmon-3cbb11f8bbdf53e901ee6008800f08753f87cbd95577addc39eec0f01c5eb694.scope: Deactivated successfully.
Oct 14 09:41:23 compute-0 podman[418846]: 2025-10-14 09:41:23.099777056 +0000 UTC m=+0.051177497 container create 7b09bf58b34868c1fb35e1d292858a97c0daf65e8011b3651d3df0e50077185d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 09:41:23 compute-0 systemd[1]: Started libpod-conmon-7b09bf58b34868c1fb35e1d292858a97c0daf65e8011b3651d3df0e50077185d.scope.
Oct 14 09:41:23 compute-0 podman[418846]: 2025-10-14 09:41:23.074513976 +0000 UTC m=+0.025914507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:41:23 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:41:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038bf6fb65a655c7a1315d2bc6f5ebf61555b87dc5581045caef66b9b401815d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038bf6fb65a655c7a1315d2bc6f5ebf61555b87dc5581045caef66b9b401815d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038bf6fb65a655c7a1315d2bc6f5ebf61555b87dc5581045caef66b9b401815d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038bf6fb65a655c7a1315d2bc6f5ebf61555b87dc5581045caef66b9b401815d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:41:23 compute-0 podman[418846]: 2025-10-14 09:41:23.20631441 +0000 UTC m=+0.157714911 container init 7b09bf58b34868c1fb35e1d292858a97c0daf65e8011b3651d3df0e50077185d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:41:23 compute-0 podman[418846]: 2025-10-14 09:41:23.222469736 +0000 UTC m=+0.173870217 container start 7b09bf58b34868c1fb35e1d292858a97c0daf65e8011b3651d3df0e50077185d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:41:23 compute-0 podman[418846]: 2025-10-14 09:41:23.226999828 +0000 UTC m=+0.178400319 container attach 7b09bf58b34868c1fb35e1d292858a97c0daf65e8011b3651d3df0e50077185d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:41:23 compute-0 nova_compute[259627]: 2025-10-14 09:41:23.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:23 compute-0 nova_compute[259627]: 2025-10-14 09:41:23.702 2 DEBUG nova.network.neutron [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Successfully created port: 3300e6b2-d3bc-432e-925e-1d837fab4a11 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:41:24 compute-0 ceph-mon[74249]: pgmap v2647: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]: {
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:     "0": [
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:         {
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "devices": [
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "/dev/loop3"
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             ],
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "lv_name": "ceph_lv0",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "lv_size": "21470642176",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "name": "ceph_lv0",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "tags": {
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.cluster_name": "ceph",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.crush_device_class": "",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.encrypted": "0",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.osd_id": "0",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.type": "block",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.vdo": "0"
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             },
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "type": "block",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "vg_name": "ceph_vg0"
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:         }
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:     ],
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:     "1": [
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:         {
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "devices": [
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "/dev/loop4"
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             ],
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "lv_name": "ceph_lv1",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "lv_size": "21470642176",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "name": "ceph_lv1",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "tags": {
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.cluster_name": "ceph",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.crush_device_class": "",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.encrypted": "0",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.osd_id": "1",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.type": "block",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.vdo": "0"
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             },
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "type": "block",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "vg_name": "ceph_vg1"
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:         }
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:     ],
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:     "2": [
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:         {
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "devices": [
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "/dev/loop5"
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             ],
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "lv_name": "ceph_lv2",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "lv_size": "21470642176",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "name": "ceph_lv2",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "tags": {
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.cluster_name": "ceph",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.crush_device_class": "",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.encrypted": "0",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.osd_id": "2",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.type": "block",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:                 "ceph.vdo": "0"
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             },
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "type": "block",
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:             "vg_name": "ceph_vg2"
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:         }
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]:     ]
Oct 14 09:41:24 compute-0 kind_brahmagupta[418863]: }
Oct 14 09:41:24 compute-0 systemd[1]: libpod-7b09bf58b34868c1fb35e1d292858a97c0daf65e8011b3651d3df0e50077185d.scope: Deactivated successfully.
Oct 14 09:41:24 compute-0 podman[418846]: 2025-10-14 09:41:24.068091978 +0000 UTC m=+1.019492429 container died 7b09bf58b34868c1fb35e1d292858a97c0daf65e8011b3651d3df0e50077185d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 09:41:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-038bf6fb65a655c7a1315d2bc6f5ebf61555b87dc5581045caef66b9b401815d-merged.mount: Deactivated successfully.
Oct 14 09:41:24 compute-0 podman[418846]: 2025-10-14 09:41:24.172647494 +0000 UTC m=+1.124047975 container remove 7b09bf58b34868c1fb35e1d292858a97c0daf65e8011b3651d3df0e50077185d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:41:24 compute-0 systemd[1]: libpod-conmon-7b09bf58b34868c1fb35e1d292858a97c0daf65e8011b3651d3df0e50077185d.scope: Deactivated successfully.
Oct 14 09:41:24 compute-0 sudo[418645]: pam_unix(sudo:session): session closed for user root
Oct 14 09:41:24 compute-0 sudo[418885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:41:24 compute-0 sudo[418885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:41:24 compute-0 sudo[418885]: pam_unix(sudo:session): session closed for user root
Oct 14 09:41:24 compute-0 sudo[418910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:41:24 compute-0 sudo[418910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:41:24 compute-0 sudo[418910]: pam_unix(sudo:session): session closed for user root
Oct 14 09:41:24 compute-0 sudo[418935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:41:24 compute-0 sudo[418935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:41:24 compute-0 sudo[418935]: pam_unix(sudo:session): session closed for user root
Oct 14 09:41:24 compute-0 sudo[418960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:41:24 compute-0 sudo[418960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:41:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct 14 09:41:24 compute-0 nova_compute[259627]: 2025-10-14 09:41:24.740 2 DEBUG nova.network.neutron [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Successfully updated port: 3300e6b2-d3bc-432e-925e-1d837fab4a11 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:41:24 compute-0 nova_compute[259627]: 2025-10-14 09:41:24.768 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:41:24 compute-0 nova_compute[259627]: 2025-10-14 09:41:24.768 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:41:24 compute-0 nova_compute[259627]: 2025-10-14 09:41:24.768 2 DEBUG nova.network.neutron [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:41:24 compute-0 nova_compute[259627]: 2025-10-14 09:41:24.887 2 DEBUG nova.compute.manager [req-b954894c-0bfa-4308-8ac1-3ec80b468e47 req-b51ff910-40a1-404b-9f6c-1c7263762de4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received event network-changed-3300e6b2-d3bc-432e-925e-1d837fab4a11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:41:24 compute-0 nova_compute[259627]: 2025-10-14 09:41:24.888 2 DEBUG nova.compute.manager [req-b954894c-0bfa-4308-8ac1-3ec80b468e47 req-b51ff910-40a1-404b-9f6c-1c7263762de4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Refreshing instance network info cache due to event network-changed-3300e6b2-d3bc-432e-925e-1d837fab4a11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:41:24 compute-0 nova_compute[259627]: 2025-10-14 09:41:24.888 2 DEBUG oslo_concurrency.lockutils [req-b954894c-0bfa-4308-8ac1-3ec80b468e47 req-b51ff910-40a1-404b-9f6c-1c7263762de4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:41:25 compute-0 podman[419025]: 2025-10-14 09:41:25.043670708 +0000 UTC m=+0.052754596 container create f2b77e0fad1af8fdca4a560a489fb106dcad27910979dddb18537954d92881a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_clarke, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:41:25 compute-0 systemd[1]: Started libpod-conmon-f2b77e0fad1af8fdca4a560a489fb106dcad27910979dddb18537954d92881a5.scope.
Oct 14 09:41:25 compute-0 podman[419025]: 2025-10-14 09:41:25.021474013 +0000 UTC m=+0.030557871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:41:25 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:41:25 compute-0 podman[419025]: 2025-10-14 09:41:25.139225993 +0000 UTC m=+0.148309921 container init f2b77e0fad1af8fdca4a560a489fb106dcad27910979dddb18537954d92881a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_clarke, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:41:25 compute-0 podman[419025]: 2025-10-14 09:41:25.150146591 +0000 UTC m=+0.159230449 container start f2b77e0fad1af8fdca4a560a489fb106dcad27910979dddb18537954d92881a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 09:41:25 compute-0 podman[419025]: 2025-10-14 09:41:25.15378396 +0000 UTC m=+0.162867888 container attach f2b77e0fad1af8fdca4a560a489fb106dcad27910979dddb18537954d92881a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_clarke, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:41:25 compute-0 exciting_clarke[419042]: 167 167
Oct 14 09:41:25 compute-0 podman[419025]: 2025-10-14 09:41:25.159600063 +0000 UTC m=+0.168683961 container died f2b77e0fad1af8fdca4a560a489fb106dcad27910979dddb18537954d92881a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 09:41:25 compute-0 systemd[1]: libpod-f2b77e0fad1af8fdca4a560a489fb106dcad27910979dddb18537954d92881a5.scope: Deactivated successfully.
Oct 14 09:41:25 compute-0 nova_compute[259627]: 2025-10-14 09:41:25.173 2 DEBUG nova.network.neutron [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:41:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9b8cd1a76e6b09894df7650950783850e811b6138b82c14ee2b6df9aad9d0dc-merged.mount: Deactivated successfully.
Oct 14 09:41:25 compute-0 podman[419025]: 2025-10-14 09:41:25.213797523 +0000 UTC m=+0.222881371 container remove f2b77e0fad1af8fdca4a560a489fb106dcad27910979dddb18537954d92881a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_clarke, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 09:41:25 compute-0 systemd[1]: libpod-conmon-f2b77e0fad1af8fdca4a560a489fb106dcad27910979dddb18537954d92881a5.scope: Deactivated successfully.
Oct 14 09:41:25 compute-0 podman[419066]: 2025-10-14 09:41:25.462973968 +0000 UTC m=+0.070348268 container create 3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brown, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:41:25 compute-0 podman[419066]: 2025-10-14 09:41:25.435080453 +0000 UTC m=+0.042454803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:41:25 compute-0 systemd[1]: Started libpod-conmon-3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb.scope.
Oct 14 09:41:25 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78e313f8fd38501d7eebabb56edeb2a47c447541a90d2dd523202971342f945/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78e313f8fd38501d7eebabb56edeb2a47c447541a90d2dd523202971342f945/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78e313f8fd38501d7eebabb56edeb2a47c447541a90d2dd523202971342f945/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78e313f8fd38501d7eebabb56edeb2a47c447541a90d2dd523202971342f945/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:41:25 compute-0 podman[419066]: 2025-10-14 09:41:25.615999333 +0000 UTC m=+0.223373683 container init 3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brown, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 09:41:25 compute-0 podman[419066]: 2025-10-14 09:41:25.627475965 +0000 UTC m=+0.234850265 container start 3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brown, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:41:25 compute-0 podman[419066]: 2025-10-14 09:41:25.633050542 +0000 UTC m=+0.240424842 container attach 3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brown, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:41:26 compute-0 ceph-mon[74249]: pgmap v2648: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct 14 09:41:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:41:26 compute-0 vibrant_brown[419082]: {
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:         "osd_id": 2,
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:         "type": "bluestore"
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:     },
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:         "osd_id": 1,
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:         "type": "bluestore"
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:     },
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:         "osd_id": 0,
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:         "type": "bluestore"
Oct 14 09:41:26 compute-0 vibrant_brown[419082]:     }
Oct 14 09:41:26 compute-0 vibrant_brown[419082]: }
Oct 14 09:41:26 compute-0 systemd[1]: libpod-3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb.scope: Deactivated successfully.
Oct 14 09:41:26 compute-0 systemd[1]: libpod-3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb.scope: Consumed 1.136s CPU time.
Oct 14 09:41:26 compute-0 podman[419115]: 2025-10-14 09:41:26.796631866 +0000 UTC m=+0.029857274 container died 3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brown, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:41:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b78e313f8fd38501d7eebabb56edeb2a47c447541a90d2dd523202971342f945-merged.mount: Deactivated successfully.
Oct 14 09:41:26 compute-0 podman[419115]: 2025-10-14 09:41:26.853285206 +0000 UTC m=+0.086510614 container remove 3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:41:26 compute-0 systemd[1]: libpod-conmon-3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb.scope: Deactivated successfully.
Oct 14 09:41:26 compute-0 sudo[418960]: pam_unix(sudo:session): session closed for user root
Oct 14 09:41:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:41:26 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:41:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:41:26 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:41:26 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 5ff90b68-5547-4ccf-983c-48a5ce044fb3 does not exist
Oct 14 09:41:26 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 0fe86e24-926f-4329-933f-af043d04de34 does not exist
Oct 14 09:41:27 compute-0 sudo[419131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:41:27 compute-0 sudo[419131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:41:27 compute-0 sudo[419131]: pam_unix(sudo:session): session closed for user root
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:27 compute-0 sudo[419156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:41:27 compute-0 sudo[419156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:41:27 compute-0 sudo[419156]: pam_unix(sudo:session): session closed for user root
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.385 2 DEBUG nova.network.neutron [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Updating instance_info_cache with network_info: [{"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.428 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.429 2 DEBUG nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Instance network_info: |[{"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.429 2 DEBUG oslo_concurrency.lockutils [req-b954894c-0bfa-4308-8ac1-3ec80b468e47 req-b51ff910-40a1-404b-9f6c-1c7263762de4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.429 2 DEBUG nova.network.neutron [req-b954894c-0bfa-4308-8ac1-3ec80b468e47 req-b51ff910-40a1-404b-9f6c-1c7263762de4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Refreshing network info cache for port 3300e6b2-d3bc-432e-925e-1d837fab4a11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.432 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Start _get_guest_xml network_info=[{"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.437 2 WARNING nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.448 2 DEBUG nova.virt.libvirt.host [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.449 2 DEBUG nova.virt.libvirt.host [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.452 2 DEBUG nova.virt.libvirt.host [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.452 2 DEBUG nova.virt.libvirt.host [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.453 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.453 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.453 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.454 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.454 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.454 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.454 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.454 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.455 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.455 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.455 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.455 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.458 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:41:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:41:27 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/333800934' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:41:27 compute-0 ceph-mon[74249]: pgmap v2649: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:41:27 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:41:27 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:41:27 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/333800934' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.916 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.943 2 DEBUG nova.storage.rbd_utils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3e37b67b-524c-4098-9609-97b0b31e72c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:41:27 compute-0 nova_compute[259627]: 2025-10-14 09:41:27.948 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:41:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:41:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:41:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2472713562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.388 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.392 2 DEBUG nova.virt.libvirt.vif [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:41:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2135470383',display_name='tempest-TestGettingAddress-server-2135470383',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2135470383',id=151,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMlxPr0cTZ8keQZTmg22ARWYe0xInByLkMD0cYdsQADwvB4qpdeYqfVEDfHnweHS+Sevb4jVHNf0dCnB4WMRrDqSKUzLaj+iYBC6weCzrTPlPHZbuNpXR0FfjzztQHSYpg==',key_name='tempest-TestGettingAddress-1794090516',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-ny0720r7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:41:21Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=3e37b67b-524c-4098-9609-97b0b31e72c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.393 2 DEBUG nova.network.os_vif_util [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.395 2 DEBUG nova.network.os_vif_util [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:79:71,bridge_name='br-int',has_traffic_filtering=True,id=3300e6b2-d3bc-432e-925e-1d837fab4a11,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3300e6b2-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.397 2 DEBUG nova.objects.instance [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 3e37b67b-524c-4098-9609-97b0b31e72c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.420 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:41:28 compute-0 nova_compute[259627]:   <uuid>3e37b67b-524c-4098-9609-97b0b31e72c4</uuid>
Oct 14 09:41:28 compute-0 nova_compute[259627]:   <name>instance-00000097</name>
Oct 14 09:41:28 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:41:28 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:41:28 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <nova:name>tempest-TestGettingAddress-server-2135470383</nova:name>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:41:27</nova:creationTime>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:41:28 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:41:28 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:41:28 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:41:28 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:41:28 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:41:28 compute-0 nova_compute[259627]:         <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 09:41:28 compute-0 nova_compute[259627]:         <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:41:28 compute-0 nova_compute[259627]:         <nova:port uuid="3300e6b2-d3bc-432e-925e-1d837fab4a11">
Oct 14 09:41:28 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe15:7971" ipVersion="6"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe15:7971" ipVersion="6"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:41:28 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:41:28 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <system>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <entry name="serial">3e37b67b-524c-4098-9609-97b0b31e72c4</entry>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <entry name="uuid">3e37b67b-524c-4098-9609-97b0b31e72c4</entry>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     </system>
Oct 14 09:41:28 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:41:28 compute-0 nova_compute[259627]:   <os>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:   </os>
Oct 14 09:41:28 compute-0 nova_compute[259627]:   <features>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:   </features>
Oct 14 09:41:28 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:41:28 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:41:28 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/3e37b67b-524c-4098-9609-97b0b31e72c4_disk">
Oct 14 09:41:28 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       </source>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:41:28 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/3e37b67b-524c-4098-9609-97b0b31e72c4_disk.config">
Oct 14 09:41:28 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       </source>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:41:28 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:15:79:71"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <target dev="tap3300e6b2-d3"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4/console.log" append="off"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <video>
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     </video>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:41:28 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:41:28 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:41:28 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:41:28 compute-0 nova_compute[259627]: </domain>
Oct 14 09:41:28 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.422 2 DEBUG nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Preparing to wait for external event network-vif-plugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.423 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.423 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.423 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.424 2 DEBUG nova.virt.libvirt.vif [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:41:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2135470383',display_name='tempest-TestGettingAddress-server-2135470383',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2135470383',id=151,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMlxPr0cTZ8keQZTmg22ARWYe0xInByLkMD0cYdsQADwvB4qpdeYqfVEDfHnweHS+Sevb4jVHNf0dCnB4WMRrDqSKUzLaj+iYBC6weCzrTPlPHZbuNpXR0FfjzztQHSYpg==',key_name='tempest-TestGettingAddress-1794090516',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-ny0720r7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:41:21Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=3e37b67b-524c-4098-9609-97b0b31e72c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.425 2 DEBUG nova.network.os_vif_util [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.426 2 DEBUG nova.network.os_vif_util [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:79:71,bridge_name='br-int',has_traffic_filtering=True,id=3300e6b2-d3bc-432e-925e-1d837fab4a11,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3300e6b2-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.426 2 DEBUG os_vif [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:79:71,bridge_name='br-int',has_traffic_filtering=True,id=3300e6b2-d3bc-432e-925e-1d837fab4a11,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3300e6b2-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.431 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.432 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3300e6b2-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.437 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3300e6b2-d3, col_values=(('external_ids', {'iface-id': '3300e6b2-d3bc-432e-925e-1d837fab4a11', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:79:71', 'vm-uuid': '3e37b67b-524c-4098-9609-97b0b31e72c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:28 compute-0 NetworkManager[44885]: <info>  [1760434888.4406] manager: (tap3300e6b2-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/680)
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.451 2 INFO os_vif [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:79:71,bridge_name='br-int',has_traffic_filtering=True,id=3300e6b2-d3bc-432e-925e-1d837fab4a11,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3300e6b2-d3')
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.534 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.535 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.536 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:15:79:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.537 2 INFO nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Using config drive
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.576 2 DEBUG nova.storage.rbd_utils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3e37b67b-524c-4098-9609-97b0b31e72c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:41:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:41:28 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2472713562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.958 2 INFO nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Creating config drive at /var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4/disk.config
Oct 14 09:41:28 compute-0 nova_compute[259627]: 2025-10-14 09:41:28.966 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv65pca3j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:41:29 compute-0 nova_compute[259627]: 2025-10-14 09:41:29.142 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv65pca3j" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:41:29 compute-0 nova_compute[259627]: 2025-10-14 09:41:29.175 2 DEBUG nova.storage.rbd_utils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3e37b67b-524c-4098-9609-97b0b31e72c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:41:29 compute-0 nova_compute[259627]: 2025-10-14 09:41:29.179 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4/disk.config 3e37b67b-524c-4098-9609-97b0b31e72c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:41:29 compute-0 nova_compute[259627]: 2025-10-14 09:41:29.419 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4/disk.config 3e37b67b-524c-4098-9609-97b0b31e72c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:41:29 compute-0 nova_compute[259627]: 2025-10-14 09:41:29.421 2 INFO nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Deleting local config drive /var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4/disk.config because it was imported into RBD.
Oct 14 09:41:29 compute-0 kernel: tap3300e6b2-d3: entered promiscuous mode
Oct 14 09:41:29 compute-0 NetworkManager[44885]: <info>  [1760434889.4972] manager: (tap3300e6b2-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/681)
Oct 14 09:41:29 compute-0 ovn_controller[152662]: 2025-10-14T09:41:29Z|01655|binding|INFO|Claiming lport 3300e6b2-d3bc-432e-925e-1d837fab4a11 for this chassis.
Oct 14 09:41:29 compute-0 ovn_controller[152662]: 2025-10-14T09:41:29Z|01656|binding|INFO|3300e6b2-d3bc-432e-925e-1d837fab4a11: Claiming fa:16:3e:15:79:71 10.100.0.10 2001:db8:0:1:f816:3eff:fe15:7971 2001:db8::f816:3eff:fe15:7971
Oct 14 09:41:29 compute-0 nova_compute[259627]: 2025-10-14 09:41:29.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:29 compute-0 nova_compute[259627]: 2025-10-14 09:41:29.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.541 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:79:71 10.100.0.10 2001:db8:0:1:f816:3eff:fe15:7971 2001:db8::f816:3eff:fe15:7971'], port_security=['fa:16:3e:15:79:71 10.100.0.10 2001:db8:0:1:f816:3eff:fe15:7971 2001:db8::f816:3eff:fe15:7971'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28 2001:db8:0:1:f816:3eff:fe15:7971/64 2001:db8::f816:3eff:fe15:7971/64', 'neutron:device_id': '3e37b67b-524c-4098-9609-97b0b31e72c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0efff049-0165-4ea1-b912-974883009802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42692f54-9e26-49af-aa3a-b5cb78b0a2ae, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3300e6b2-d3bc-432e-925e-1d837fab4a11) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:41:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.542 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3300e6b2-d3bc-432e-925e-1d837fab4a11 in datapath d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 bound to our chassis
Oct 14 09:41:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.543 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2d242a4-fdb7-41f9-9ee7-4e1b17687d68
Oct 14 09:41:29 compute-0 ovn_controller[152662]: 2025-10-14T09:41:29Z|01657|binding|INFO|Setting lport 3300e6b2-d3bc-432e-925e-1d837fab4a11 ovn-installed in OVS
Oct 14 09:41:29 compute-0 ovn_controller[152662]: 2025-10-14T09:41:29Z|01658|binding|INFO|Setting lport 3300e6b2-d3bc-432e-925e-1d837fab4a11 up in Southbound
Oct 14 09:41:29 compute-0 nova_compute[259627]: 2025-10-14 09:41:29.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.566 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8800aa87-5674-42c4-969b-2c7afec82870]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:41:29 compute-0 systemd-machined[214636]: New machine qemu-184-instance-00000097.
Oct 14 09:41:29 compute-0 systemd[1]: Started Virtual Machine qemu-184-instance-00000097.
Oct 14 09:41:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.599 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bf6b5b3d-2385-452b-87db-e90dd523f1a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:41:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.603 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd70f13-5be8-4ae9-ad01-7379c5a79cfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:41:29 compute-0 systemd-udevd[419351]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:41:29 compute-0 NetworkManager[44885]: <info>  [1760434889.6359] device (tap3300e6b2-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:41:29 compute-0 NetworkManager[44885]: <info>  [1760434889.6367] device (tap3300e6b2-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:41:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.635 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a67f4375-7cda-4c0d-8fcd-2a732316a56e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:41:29 compute-0 podman[419315]: 2025-10-14 09:41:29.651191076 +0000 UTC m=+0.076209411 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible)
Oct 14 09:41:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.661 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8e24309c-169b-4d28-8f24-d9d33c51bb2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2d242a4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:1e:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 25, 'tx_packets': 5, 'rx_bytes': 2230, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 25, 'tx_packets': 5, 'rx_bytes': 2230, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 863387, 'reachable_time': 34263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419363, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:41:29 compute-0 podman[419313]: 2025-10-14 09:41:29.677787289 +0000 UTC m=+0.108611057 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:41:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.676 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[33d2c6dd-c76a-4ade-b008-aa4866a2783d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd2d242a4-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 863403, 'tstamp': 863403}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419368, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd2d242a4-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 863407, 'tstamp': 863407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419368, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:41:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.678 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2d242a4-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:41:29 compute-0 nova_compute[259627]: 2025-10-14 09:41:29.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:29 compute-0 nova_compute[259627]: 2025-10-14 09:41:29.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.681 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2d242a4-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:41:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.681 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:41:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.681 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2d242a4-f0, col_values=(('external_ids', {'iface-id': '1bf2b41c-8c9f-45cd-aeb9-2459d1373791'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:41:29 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.681 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:41:29 compute-0 nova_compute[259627]: 2025-10-14 09:41:29.840 2 DEBUG nova.network.neutron [req-b954894c-0bfa-4308-8ac1-3ec80b468e47 req-b51ff910-40a1-404b-9f6c-1c7263762de4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Updated VIF entry in instance network info cache for port 3300e6b2-d3bc-432e-925e-1d837fab4a11. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:41:29 compute-0 nova_compute[259627]: 2025-10-14 09:41:29.840 2 DEBUG nova.network.neutron [req-b954894c-0bfa-4308-8ac1-3ec80b468e47 req-b51ff910-40a1-404b-9f6c-1c7263762de4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Updating instance_info_cache with network_info: [{"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:41:29 compute-0 nova_compute[259627]: 2025-10-14 09:41:29.857 2 DEBUG oslo_concurrency.lockutils [req-b954894c-0bfa-4308-8ac1-3ec80b468e47 req-b51ff910-40a1-404b-9f6c-1c7263762de4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:41:29 compute-0 ceph-mon[74249]: pgmap v2650: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:41:29 compute-0 nova_compute[259627]: 2025-10-14 09:41:29.962 2 DEBUG nova.compute.manager [req-5afec603-1076-4618-8cc7-5c91804e6ca0 req-9a7ac775-3f97-4b37-9f08-0f222e70e2b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received event network-vif-plugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:41:29 compute-0 nova_compute[259627]: 2025-10-14 09:41:29.963 2 DEBUG oslo_concurrency.lockutils [req-5afec603-1076-4618-8cc7-5c91804e6ca0 req-9a7ac775-3f97-4b37-9f08-0f222e70e2b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:41:29 compute-0 nova_compute[259627]: 2025-10-14 09:41:29.964 2 DEBUG oslo_concurrency.lockutils [req-5afec603-1076-4618-8cc7-5c91804e6ca0 req-9a7ac775-3f97-4b37-9f08-0f222e70e2b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:41:29 compute-0 nova_compute[259627]: 2025-10-14 09:41:29.965 2 DEBUG oslo_concurrency.lockutils [req-5afec603-1076-4618-8cc7-5c91804e6ca0 req-9a7ac775-3f97-4b37-9f08-0f222e70e2b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:41:29 compute-0 nova_compute[259627]: 2025-10-14 09:41:29.966 2 DEBUG nova.compute.manager [req-5afec603-1076-4618-8cc7-5c91804e6ca0 req-9a7ac775-3f97-4b37-9f08-0f222e70e2b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Processing event network-vif-plugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:41:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:30.066 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:41:30 compute-0 nova_compute[259627]: 2025-10-14 09:41:30.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:30 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:30.068 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:41:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 14 09:41:30 compute-0 nova_compute[259627]: 2025-10-14 09:41:30.857 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434890.8565214, 3e37b67b-524c-4098-9609-97b0b31e72c4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:41:30 compute-0 nova_compute[259627]: 2025-10-14 09:41:30.859 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] VM Started (Lifecycle Event)
Oct 14 09:41:30 compute-0 nova_compute[259627]: 2025-10-14 09:41:30.861 2 DEBUG nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:41:30 compute-0 nova_compute[259627]: 2025-10-14 09:41:30.863 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:41:30 compute-0 nova_compute[259627]: 2025-10-14 09:41:30.866 2 INFO nova.virt.libvirt.driver [-] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Instance spawned successfully.
Oct 14 09:41:30 compute-0 nova_compute[259627]: 2025-10-14 09:41:30.866 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:41:30 compute-0 nova_compute[259627]: 2025-10-14 09:41:30.897 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:41:30 compute-0 nova_compute[259627]: 2025-10-14 09:41:30.901 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:41:30 compute-0 nova_compute[259627]: 2025-10-14 09:41:30.908 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:41:30 compute-0 nova_compute[259627]: 2025-10-14 09:41:30.909 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:41:30 compute-0 nova_compute[259627]: 2025-10-14 09:41:30.909 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:41:30 compute-0 nova_compute[259627]: 2025-10-14 09:41:30.909 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:41:30 compute-0 nova_compute[259627]: 2025-10-14 09:41:30.910 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:41:30 compute-0 nova_compute[259627]: 2025-10-14 09:41:30.910 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:41:30 compute-0 nova_compute[259627]: 2025-10-14 09:41:30.956 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:41:30 compute-0 nova_compute[259627]: 2025-10-14 09:41:30.957 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434890.8569922, 3e37b67b-524c-4098-9609-97b0b31e72c4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:41:30 compute-0 nova_compute[259627]: 2025-10-14 09:41:30.957 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] VM Paused (Lifecycle Event)
Oct 14 09:41:31 compute-0 nova_compute[259627]: 2025-10-14 09:41:31.033 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:41:31 compute-0 nova_compute[259627]: 2025-10-14 09:41:31.036 2 INFO nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Took 9.20 seconds to spawn the instance on the hypervisor.
Oct 14 09:41:31 compute-0 nova_compute[259627]: 2025-10-14 09:41:31.037 2 DEBUG nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:41:31 compute-0 nova_compute[259627]: 2025-10-14 09:41:31.045 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434890.8630536, 3e37b67b-524c-4098-9609-97b0b31e72c4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:41:31 compute-0 nova_compute[259627]: 2025-10-14 09:41:31.046 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] VM Resumed (Lifecycle Event)
Oct 14 09:41:31 compute-0 nova_compute[259627]: 2025-10-14 09:41:31.082 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:41:31 compute-0 nova_compute[259627]: 2025-10-14 09:41:31.085 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:41:31 compute-0 nova_compute[259627]: 2025-10-14 09:41:31.127 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:41:31 compute-0 nova_compute[259627]: 2025-10-14 09:41:31.140 2 INFO nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Took 12.04 seconds to build instance.
Oct 14 09:41:31 compute-0 nova_compute[259627]: 2025-10-14 09:41:31.158 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:41:31 compute-0 ceph-mon[74249]: pgmap v2651: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 14 09:41:32 compute-0 nova_compute[259627]: 2025-10-14 09:41:32.154 2 DEBUG nova.compute.manager [req-c092d41d-4845-4836-ad21-fb5f114743c9 req-1b439b69-a0b4-42ad-b8b1-e31d4e0c935b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received event network-vif-plugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:41:32 compute-0 nova_compute[259627]: 2025-10-14 09:41:32.155 2 DEBUG oslo_concurrency.lockutils [req-c092d41d-4845-4836-ad21-fb5f114743c9 req-1b439b69-a0b4-42ad-b8b1-e31d4e0c935b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:41:32 compute-0 nova_compute[259627]: 2025-10-14 09:41:32.155 2 DEBUG oslo_concurrency.lockutils [req-c092d41d-4845-4836-ad21-fb5f114743c9 req-1b439b69-a0b4-42ad-b8b1-e31d4e0c935b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:41:32 compute-0 nova_compute[259627]: 2025-10-14 09:41:32.155 2 DEBUG oslo_concurrency.lockutils [req-c092d41d-4845-4836-ad21-fb5f114743c9 req-1b439b69-a0b4-42ad-b8b1-e31d4e0c935b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:41:32 compute-0 nova_compute[259627]: 2025-10-14 09:41:32.155 2 DEBUG nova.compute.manager [req-c092d41d-4845-4836-ad21-fb5f114743c9 req-1b439b69-a0b4-42ad-b8b1-e31d4e0c935b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] No waiting events found dispatching network-vif-plugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:41:32 compute-0 nova_compute[259627]: 2025-10-14 09:41:32.156 2 WARNING nova.compute.manager [req-c092d41d-4845-4836-ad21-fb5f114743c9 req-1b439b69-a0b4-42ad-b8b1-e31d4e0c935b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received unexpected event network-vif-plugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 for instance with vm_state active and task_state None.
Oct 14 09:41:32 compute-0 nova_compute[259627]: 2025-10-14 09:41:32.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 14 09:41:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:41:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:41:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:41:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:41:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:41:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:41:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:41:32
Oct 14 09:41:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:41:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:41:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'vms', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'backups', 'images', 'cephfs.cephfs.data']
Oct 14 09:41:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:41:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:41:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:41:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:41:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:41:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:41:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:41:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:41:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:41:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:41:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:41:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:41:33 compute-0 nova_compute[259627]: 2025-10-14 09:41:33.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:33 compute-0 ceph-mon[74249]: pgmap v2652: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 14 09:41:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 14 09:41:35 compute-0 ceph-mon[74249]: pgmap v2653: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 14 09:41:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:41:37 compute-0 nova_compute[259627]: 2025-10-14 09:41:37.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:37 compute-0 ceph-mon[74249]: pgmap v2654: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:41:38 compute-0 nova_compute[259627]: 2025-10-14 09:41:38.057 2 DEBUG nova.compute.manager [req-2044d707-febf-4a95-b590-dab8ade25766 req-9d56c999-77d6-4ba5-bc4f-6e1a74d3f5fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received event network-changed-3300e6b2-d3bc-432e-925e-1d837fab4a11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:41:38 compute-0 nova_compute[259627]: 2025-10-14 09:41:38.059 2 DEBUG nova.compute.manager [req-2044d707-febf-4a95-b590-dab8ade25766 req-9d56c999-77d6-4ba5-bc4f-6e1a74d3f5fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Refreshing instance network info cache due to event network-changed-3300e6b2-d3bc-432e-925e-1d837fab4a11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:41:38 compute-0 nova_compute[259627]: 2025-10-14 09:41:38.059 2 DEBUG oslo_concurrency.lockutils [req-2044d707-febf-4a95-b590-dab8ade25766 req-9d56c999-77d6-4ba5-bc4f-6e1a74d3f5fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:41:38 compute-0 nova_compute[259627]: 2025-10-14 09:41:38.060 2 DEBUG oslo_concurrency.lockutils [req-2044d707-febf-4a95-b590-dab8ade25766 req-9d56c999-77d6-4ba5-bc4f-6e1a74d3f5fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:41:38 compute-0 nova_compute[259627]: 2025-10-14 09:41:38.060 2 DEBUG nova.network.neutron [req-2044d707-febf-4a95-b590-dab8ade25766 req-9d56c999-77d6-4ba5-bc4f-6e1a74d3f5fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Refreshing network info cache for port 3300e6b2-d3bc-432e-925e-1d837fab4a11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:41:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:41:38 compute-0 nova_compute[259627]: 2025-10-14 09:41:38.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:41:39 compute-0 ceph-mon[74249]: pgmap v2655: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:41:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:40.070 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:41:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 14 09:41:41 compute-0 nova_compute[259627]: 2025-10-14 09:41:41.393 2 DEBUG nova.network.neutron [req-2044d707-febf-4a95-b590-dab8ade25766 req-9d56c999-77d6-4ba5-bc4f-6e1a74d3f5fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Updated VIF entry in instance network info cache for port 3300e6b2-d3bc-432e-925e-1d837fab4a11. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:41:41 compute-0 nova_compute[259627]: 2025-10-14 09:41:41.394 2 DEBUG nova.network.neutron [req-2044d707-febf-4a95-b590-dab8ade25766 req-9d56c999-77d6-4ba5-bc4f-6e1a74d3f5fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Updating instance_info_cache with network_info: [{"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:41:41 compute-0 nova_compute[259627]: 2025-10-14 09:41:41.427 2 DEBUG oslo_concurrency.lockutils [req-2044d707-febf-4a95-b590-dab8ade25766 req-9d56c999-77d6-4ba5-bc4f-6e1a74d3f5fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:41:41 compute-0 ovn_controller[152662]: 2025-10-14T09:41:41Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:79:71 10.100.0.10
Oct 14 09:41:41 compute-0 ovn_controller[152662]: 2025-10-14T09:41:41Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:79:71 10.100.0.10
Oct 14 09:41:41 compute-0 ceph-mon[74249]: pgmap v2656: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 14 09:41:42 compute-0 nova_compute[259627]: 2025-10-14 09:41:42.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 66 op/s
Oct 14 09:41:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:41:43 compute-0 nova_compute[259627]: 2025-10-14 09:41:43.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011080044930650518 of space, bias 1.0, pg target 0.33240134791951553 quantized to 32 (current 32)
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:41:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:41:43 compute-0 podman[419414]: 2025-10-14 09:41:43.701680452 +0000 UTC m=+0.099289607 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:41:43 compute-0 podman[419413]: 2025-10-14 09:41:43.70361105 +0000 UTC m=+0.110071412 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:41:43 compute-0 ceph-mon[74249]: pgmap v2657: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 66 op/s
Oct 14 09:41:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 66 op/s
Oct 14 09:41:46 compute-0 ceph-mon[74249]: pgmap v2658: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 66 op/s
Oct 14 09:41:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 14 09:41:46 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:41:46 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 55K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1334 writes, 5814 keys, 1334 commit groups, 1.0 writes per commit group, ingest: 8.65 MB, 0.01 MB/s
                                           Interval WAL: 1334 writes, 1334 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    101.1      0.64              0.25        38    0.017       0      0       0.0       0.0
                                             L6      1/0    7.90 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.6    184.3    155.0      1.95              1.04        37    0.053    224K    20K       0.0       0.0
                                            Sum      1/0    7.90 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.6    138.6    141.7      2.59              1.30        75    0.035    224K    20K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.6    146.9    146.7      0.28              0.18         8    0.035     31K   1981       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    184.3    155.0      1.95              1.04        37    0.053    224K    20K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    102.2      0.64              0.25        37    0.017       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.064, interval 0.005
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.36 GB write, 0.08 MB/s write, 0.35 GB read, 0.07 MB/s read, 2.6 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5646f3b2b1f0#2 capacity: 304.00 MB usage: 40.86 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000354 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2661,39.20 MB,12.8937%) FilterBlock(76,633.86 KB,0.20362%) IndexBlock(76,1.04 MB,0.342299%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 14 09:41:47 compute-0 nova_compute[259627]: 2025-10-14 09:41:47.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:48 compute-0 ceph-mon[74249]: pgmap v2659: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 14 09:41:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:41:48 compute-0 nova_compute[259627]: 2025-10-14 09:41:48.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:41:49 compute-0 nova_compute[259627]: 2025-10-14 09:41:49.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:41:50 compute-0 ceph-mon[74249]: pgmap v2660: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:41:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:41:52 compute-0 ceph-mon[74249]: pgmap v2661: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 09:41:52 compute-0 nova_compute[259627]: 2025-10-14 09:41:52.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:41:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:41:53 compute-0 nova_compute[259627]: 2025-10-14 09:41:53.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:54 compute-0 ceph-mon[74249]: pgmap v2662: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:41:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:41:54 compute-0 nova_compute[259627]: 2025-10-14 09:41:54.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:41:54 compute-0 nova_compute[259627]: 2025-10-14 09:41:54.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:41:55 compute-0 nova_compute[259627]: 2025-10-14 09:41:55.937 2 DEBUG nova.compute.manager [req-d3929f3c-2fa7-46ce-b7ea-e31d94478ab5 req-192c70a6-4fe8-4de1-a0b5-5ed802f99fa5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received event network-changed-3300e6b2-d3bc-432e-925e-1d837fab4a11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:41:55 compute-0 nova_compute[259627]: 2025-10-14 09:41:55.938 2 DEBUG nova.compute.manager [req-d3929f3c-2fa7-46ce-b7ea-e31d94478ab5 req-192c70a6-4fe8-4de1-a0b5-5ed802f99fa5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Refreshing instance network info cache due to event network-changed-3300e6b2-d3bc-432e-925e-1d837fab4a11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:41:55 compute-0 nova_compute[259627]: 2025-10-14 09:41:55.939 2 DEBUG oslo_concurrency.lockutils [req-d3929f3c-2fa7-46ce-b7ea-e31d94478ab5 req-192c70a6-4fe8-4de1-a0b5-5ed802f99fa5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:41:55 compute-0 nova_compute[259627]: 2025-10-14 09:41:55.939 2 DEBUG oslo_concurrency.lockutils [req-d3929f3c-2fa7-46ce-b7ea-e31d94478ab5 req-192c70a6-4fe8-4de1-a0b5-5ed802f99fa5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:41:55 compute-0 nova_compute[259627]: 2025-10-14 09:41:55.940 2 DEBUG nova.network.neutron [req-d3929f3c-2fa7-46ce-b7ea-e31d94478ab5 req-192c70a6-4fe8-4de1-a0b5-5ed802f99fa5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Refreshing network info cache for port 3300e6b2-d3bc-432e-925e-1d837fab4a11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:41:55 compute-0 nova_compute[259627]: 2025-10-14 09:41:55.976 2 DEBUG oslo_concurrency.lockutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "3e37b67b-524c-4098-9609-97b0b31e72c4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:41:55 compute-0 nova_compute[259627]: 2025-10-14 09:41:55.977 2 DEBUG oslo_concurrency.lockutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:41:55 compute-0 nova_compute[259627]: 2025-10-14 09:41:55.977 2 DEBUG oslo_concurrency.lockutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:41:55 compute-0 nova_compute[259627]: 2025-10-14 09:41:55.978 2 DEBUG oslo_concurrency.lockutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:41:55 compute-0 nova_compute[259627]: 2025-10-14 09:41:55.979 2 DEBUG oslo_concurrency.lockutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:41:55 compute-0 nova_compute[259627]: 2025-10-14 09:41:55.980 2 INFO nova.compute.manager [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Terminating instance
Oct 14 09:41:55 compute-0 nova_compute[259627]: 2025-10-14 09:41:55.982 2 DEBUG nova.compute.manager [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:41:56 compute-0 kernel: tap3300e6b2-d3 (unregistering): left promiscuous mode
Oct 14 09:41:56 compute-0 NetworkManager[44885]: <info>  [1760434916.0405] device (tap3300e6b2-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:41:56 compute-0 ceph-mon[74249]: pgmap v2663: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:41:56 compute-0 ovn_controller[152662]: 2025-10-14T09:41:56Z|01659|binding|INFO|Releasing lport 3300e6b2-d3bc-432e-925e-1d837fab4a11 from this chassis (sb_readonly=0)
Oct 14 09:41:56 compute-0 ovn_controller[152662]: 2025-10-14T09:41:56Z|01660|binding|INFO|Setting lport 3300e6b2-d3bc-432e-925e-1d837fab4a11 down in Southbound
Oct 14 09:41:56 compute-0 ovn_controller[152662]: 2025-10-14T09:41:56Z|01661|binding|INFO|Removing iface tap3300e6b2-d3 ovn-installed in OVS
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.071 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:79:71 10.100.0.10 2001:db8:0:1:f816:3eff:fe15:7971 2001:db8::f816:3eff:fe15:7971'], port_security=['fa:16:3e:15:79:71 10.100.0.10 2001:db8:0:1:f816:3eff:fe15:7971 2001:db8::f816:3eff:fe15:7971'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28 2001:db8:0:1:f816:3eff:fe15:7971/64 2001:db8::f816:3eff:fe15:7971/64', 'neutron:device_id': '3e37b67b-524c-4098-9609-97b0b31e72c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0efff049-0165-4ea1-b912-974883009802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42692f54-9e26-49af-aa3a-b5cb78b0a2ae, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3300e6b2-d3bc-432e-925e-1d837fab4a11) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:41:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.073 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3300e6b2-d3bc-432e-925e-1d837fab4a11 in datapath d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 unbound from our chassis
Oct 14 09:41:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.076 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2d242a4-fdb7-41f9-9ee7-4e1b17687d68
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.107 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f821d238-5727-4680-a844-1dfb66bb7991]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:41:56 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000097.scope: Deactivated successfully.
Oct 14 09:41:56 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000097.scope: Consumed 13.456s CPU time.
Oct 14 09:41:56 compute-0 systemd-machined[214636]: Machine qemu-184-instance-00000097 terminated.
Oct 14 09:41:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.149 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3c1ac1-a41a-4ddb-8fb2-0a9055d7670a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:41:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.152 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d577cde1-d435-4268-ab03-0d4df158deaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:41:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.195 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c04bc658-1f4b-4bf0-9e29-9288900f8495]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:41:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.215 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[982ab23f-1391-48d6-8b09-ce85ebfde597]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2d242a4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:1e:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3628, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3628, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 863387, 'reachable_time': 34263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419473, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.232 2 INFO nova.virt.libvirt.driver [-] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Instance destroyed successfully.
Oct 14 09:41:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.237 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3f35f8-f4ce-47fb-a598-fb6e661c9f20]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd2d242a4-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 863403, 'tstamp': 863403}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419480, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd2d242a4-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 863407, 'tstamp': 863407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419480, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.238 2 DEBUG nova.objects.instance [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid 3e37b67b-524c-4098-9609-97b0b31e72c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:41:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.239 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2d242a4-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.246 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2d242a4-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.247 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:41:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.247 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2d242a4-f0, col_values=(('external_ids', {'iface-id': '1bf2b41c-8c9f-45cd-aeb9-2459d1373791'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:41:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.248 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.253 2 DEBUG nova.virt.libvirt.vif [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:41:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2135470383',display_name='tempest-TestGettingAddress-server-2135470383',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2135470383',id=151,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMlxPr0cTZ8keQZTmg22ARWYe0xInByLkMD0cYdsQADwvB4qpdeYqfVEDfHnweHS+Sevb4jVHNf0dCnB4WMRrDqSKUzLaj+iYBC6weCzrTPlPHZbuNpXR0FfjzztQHSYpg==',key_name='tempest-TestGettingAddress-1794090516',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:41:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-ny0720r7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:41:31Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=3e37b67b-524c-4098-9609-97b0b31e72c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.254 2 DEBUG nova.network.os_vif_util [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.255 2 DEBUG nova.network.os_vif_util [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:79:71,bridge_name='br-int',has_traffic_filtering=True,id=3300e6b2-d3bc-432e-925e-1d837fab4a11,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3300e6b2-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.256 2 DEBUG os_vif [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:79:71,bridge_name='br-int',has_traffic_filtering=True,id=3300e6b2-d3bc-432e-925e-1d837fab4a11,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3300e6b2-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.260 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3300e6b2-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.267 2 INFO os_vif [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:79:71,bridge_name='br-int',has_traffic_filtering=True,id=3300e6b2-d3bc-432e-925e-1d837fab4a11,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3300e6b2-d3')
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.708 2 INFO nova.virt.libvirt.driver [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Deleting instance files /var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4_del
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.709 2 INFO nova.virt.libvirt.driver [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Deletion of /var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4_del complete
Oct 14 09:41:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.776 2 INFO nova.compute.manager [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.777 2 DEBUG oslo.service.loopingcall [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.778 2 DEBUG nova.compute.manager [-] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:41:56 compute-0 nova_compute[259627]: 2025-10-14 09:41:56.779 2 DEBUG nova.network.neutron [-] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:41:57 compute-0 nova_compute[259627]: 2025-10-14 09:41:57.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:57 compute-0 nova_compute[259627]: 2025-10-14 09:41:57.531 2 DEBUG nova.network.neutron [-] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:41:57 compute-0 nova_compute[259627]: 2025-10-14 09:41:57.554 2 INFO nova.compute.manager [-] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Took 0.78 seconds to deallocate network for instance.
Oct 14 09:41:57 compute-0 nova_compute[259627]: 2025-10-14 09:41:57.615 2 DEBUG oslo_concurrency.lockutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:41:57 compute-0 nova_compute[259627]: 2025-10-14 09:41:57.615 2 DEBUG oslo_concurrency.lockutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:41:57 compute-0 nova_compute[259627]: 2025-10-14 09:41:57.660 2 DEBUG nova.compute.manager [req-1075a146-5d77-4878-96d7-ce0ebb145982 req-5b0ce4f4-642a-49e8-a621-099a3437230c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received event network-vif-deleted-3300e6b2-d3bc-432e-925e-1d837fab4a11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:41:57 compute-0 nova_compute[259627]: 2025-10-14 09:41:57.715 2 DEBUG oslo_concurrency.processutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:41:57 compute-0 nova_compute[259627]: 2025-10-14 09:41:57.968 2 DEBUG nova.network.neutron [req-d3929f3c-2fa7-46ce-b7ea-e31d94478ab5 req-192c70a6-4fe8-4de1-a0b5-5ed802f99fa5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Updated VIF entry in instance network info cache for port 3300e6b2-d3bc-432e-925e-1d837fab4a11. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:41:57 compute-0 nova_compute[259627]: 2025-10-14 09:41:57.969 2 DEBUG nova.network.neutron [req-d3929f3c-2fa7-46ce-b7ea-e31d94478ab5 req-192c70a6-4fe8-4de1-a0b5-5ed802f99fa5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Updating instance_info_cache with network_info: [{"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:41:57 compute-0 nova_compute[259627]: 2025-10-14 09:41:57.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:57.999 2 DEBUG oslo_concurrency.lockutils [req-d3929f3c-2fa7-46ce-b7ea-e31d94478ab5 req-192c70a6-4fe8-4de1-a0b5-5ed802f99fa5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.003 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.016 2 DEBUG nova.compute.manager [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received event network-vif-unplugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.017 2 DEBUG oslo_concurrency.lockutils [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.017 2 DEBUG oslo_concurrency.lockutils [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.017 2 DEBUG oslo_concurrency.lockutils [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.018 2 DEBUG nova.compute.manager [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] No waiting events found dispatching network-vif-unplugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.018 2 WARNING nova.compute.manager [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received unexpected event network-vif-unplugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 for instance with vm_state deleted and task_state None.
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.018 2 DEBUG nova.compute.manager [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received event network-vif-plugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.018 2 DEBUG oslo_concurrency.lockutils [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.018 2 DEBUG oslo_concurrency.lockutils [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.019 2 DEBUG oslo_concurrency.lockutils [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.019 2 DEBUG nova.compute.manager [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] No waiting events found dispatching network-vif-plugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.019 2 WARNING nova.compute.manager [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received unexpected event network-vif-plugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 for instance with vm_state deleted and task_state None.
Oct 14 09:41:58 compute-0 ceph-mon[74249]: pgmap v2664: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 14 09:41:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:41:58 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4231750227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.178 2 DEBUG oslo_concurrency.processutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:41:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.183 2 DEBUG nova.compute.provider_tree [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.199 2 DEBUG nova.scheduler.client.report [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.216 2 DEBUG oslo_concurrency.lockutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.218 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.218 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.218 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.219 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.302 2 INFO nova.scheduler.client.report [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance 3e37b67b-524c-4098-9609-97b0b31e72c4
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.368 2 DEBUG oslo_concurrency.lockutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:41:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:41:58 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1344874566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.694 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:41:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 21 KiB/s wr, 2 op/s
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.774 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:41:58 compute-0 nova_compute[259627]: 2025-10-14 09:41:58.775 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.034 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.036 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3384MB free_disk=59.897186279296875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.037 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.037 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:41:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4231750227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:41:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1344874566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.136 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 0239a56b-babd-4c44-b52b-ade80229be78 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.137 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.137 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.190 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:41:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:41:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4192242180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.657 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.665 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.669 2 DEBUG oslo_concurrency.lockutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "0239a56b-babd-4c44-b52b-ade80229be78" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.670 2 DEBUG oslo_concurrency.lockutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.671 2 DEBUG oslo_concurrency.lockutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "0239a56b-babd-4c44-b52b-ade80229be78-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.671 2 DEBUG oslo_concurrency.lockutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.671 2 DEBUG oslo_concurrency.lockutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.674 2 INFO nova.compute.manager [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Terminating instance
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.676 2 DEBUG nova.compute.manager [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.683 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.716 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.716 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:41:59 compute-0 kernel: tap0b5d3762-db (unregistering): left promiscuous mode
Oct 14 09:41:59 compute-0 NetworkManager[44885]: <info>  [1760434919.7277] device (tap0b5d3762-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.770 2 DEBUG nova.compute.manager [req-5ea70705-b641-4b2b-b75d-7f48475546e1 req-2469ffea-f612-46aa-8028-18ad40808937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received event network-changed-0b5d3762-db25-4cc9-90f3-79d3eb662378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.771 2 DEBUG nova.compute.manager [req-5ea70705-b641-4b2b-b75d-7f48475546e1 req-2469ffea-f612-46aa-8028-18ad40808937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Refreshing instance network info cache due to event network-changed-0b5d3762-db25-4cc9-90f3-79d3eb662378. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.772 2 DEBUG oslo_concurrency.lockutils [req-5ea70705-b641-4b2b-b75d-7f48475546e1 req-2469ffea-f612-46aa-8028-18ad40808937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.772 2 DEBUG oslo_concurrency.lockutils [req-5ea70705-b641-4b2b-b75d-7f48475546e1 req-2469ffea-f612-46aa-8028-18ad40808937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.772 2 DEBUG nova.network.neutron [req-5ea70705-b641-4b2b-b75d-7f48475546e1 req-2469ffea-f612-46aa-8028-18ad40808937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Refreshing network info cache for port 0b5d3762-db25-4cc9-90f3-79d3eb662378 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:59 compute-0 ovn_controller[152662]: 2025-10-14T09:41:59Z|01662|binding|INFO|Releasing lport 0b5d3762-db25-4cc9-90f3-79d3eb662378 from this chassis (sb_readonly=0)
Oct 14 09:41:59 compute-0 ovn_controller[152662]: 2025-10-14T09:41:59Z|01663|binding|INFO|Setting lport 0b5d3762-db25-4cc9-90f3-79d3eb662378 down in Southbound
Oct 14 09:41:59 compute-0 ovn_controller[152662]: 2025-10-14T09:41:59Z|01664|binding|INFO|Removing iface tap0b5d3762-db ovn-installed in OVS
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:59.813 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:ee:6c 10.100.0.12 2001:db8:0:1:f816:3eff:fe37:ee6c 2001:db8::f816:3eff:fe37:ee6c'], port_security=['fa:16:3e:37:ee:6c 10.100.0.12 2001:db8:0:1:f816:3eff:fe37:ee6c 2001:db8::f816:3eff:fe37:ee6c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8:0:1:f816:3eff:fe37:ee6c/64 2001:db8::f816:3eff:fe37:ee6c/64', 'neutron:device_id': '0239a56b-babd-4c44-b52b-ade80229be78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0efff049-0165-4ea1-b912-974883009802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42692f54-9e26-49af-aa3a-b5cb78b0a2ae, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0b5d3762-db25-4cc9-90f3-79d3eb662378) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:41:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:59.814 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0b5d3762-db25-4cc9-90f3-79d3eb662378 in datapath d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 unbound from our chassis
Oct 14 09:41:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:59.815 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2d242a4-fdb7-41f9-9ee7-4e1b17687d68, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:41:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:59.815 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4b85dad7-d1cf-4bc4-9eb7-906f3686835f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:41:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:41:59.816 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 namespace which is not needed anymore
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:59 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000096.scope: Deactivated successfully.
Oct 14 09:41:59 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000096.scope: Consumed 15.381s CPU time.
Oct 14 09:41:59 compute-0 systemd-machined[214636]: Machine qemu-183-instance-00000096 terminated.
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.912 2 INFO nova.virt.libvirt.driver [-] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Instance destroyed successfully.
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.913 2 DEBUG nova.objects.instance [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid 0239a56b-babd-4c44-b52b-ade80229be78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.928 2 DEBUG nova.virt.libvirt.vif [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:40:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-35816390',display_name='tempest-TestGettingAddress-server-35816390',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-35816390',id=150,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMlxPr0cTZ8keQZTmg22ARWYe0xInByLkMD0cYdsQADwvB4qpdeYqfVEDfHnweHS+Sevb4jVHNf0dCnB4WMRrDqSKUzLaj+iYBC6weCzrTPlPHZbuNpXR0FfjzztQHSYpg==',key_name='tempest-TestGettingAddress-1794090516',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:40:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-r3oxjpue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:40:56Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=0239a56b-babd-4c44-b52b-ade80229be78,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.928 2 DEBUG nova.network.os_vif_util [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.929 2 DEBUG nova.network.os_vif_util [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:37:ee:6c,bridge_name='br-int',has_traffic_filtering=True,id=0b5d3762-db25-4cc9-90f3-79d3eb662378,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b5d3762-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.929 2 DEBUG os_vif [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:ee:6c,bridge_name='br-int',has_traffic_filtering=True,id=0b5d3762-db25-4cc9-90f3-79d3eb662378,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b5d3762-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.931 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b5d3762-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:41:59 compute-0 podman[419575]: 2025-10-14 09:41:59.933472058 +0000 UTC m=+0.102515787 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd)
Oct 14 09:41:59 compute-0 podman[419577]: 2025-10-14 09:41:59.93357129 +0000 UTC m=+0.110082082 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:41:59 compute-0 nova_compute[259627]: 2025-10-14 09:41:59.937 2 INFO os_vif [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:ee:6c,bridge_name='br-int',has_traffic_filtering=True,id=0b5d3762-db25-4cc9-90f3-79d3eb662378,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b5d3762-db')
Oct 14 09:41:59 compute-0 neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68[417960]: [NOTICE]   (417964) : haproxy version is 2.8.14-c23fe91
Oct 14 09:41:59 compute-0 neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68[417960]: [NOTICE]   (417964) : path to executable is /usr/sbin/haproxy
Oct 14 09:41:59 compute-0 neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68[417960]: [WARNING]  (417964) : Exiting Master process...
Oct 14 09:41:59 compute-0 neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68[417960]: [WARNING]  (417964) : Exiting Master process...
Oct 14 09:41:59 compute-0 neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68[417960]: [ALERT]    (417964) : Current worker (417966) exited with code 143 (Terminated)
Oct 14 09:41:59 compute-0 neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68[417960]: [WARNING]  (417964) : All workers exited. Exiting... (0)
Oct 14 09:41:59 compute-0 systemd[1]: libpod-7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4.scope: Deactivated successfully.
Oct 14 09:41:59 compute-0 podman[419638]: 2025-10-14 09:41:59.980092842 +0000 UTC m=+0.059159143 container died 7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 09:42:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4-userdata-shm.mount: Deactivated successfully.
Oct 14 09:42:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-10b2f022e3216a77d69566ed9e97088f68d82c271e111cd888b10c372258ed61-merged.mount: Deactivated successfully.
Oct 14 09:42:00 compute-0 podman[419638]: 2025-10-14 09:42:00.026507741 +0000 UTC m=+0.105574052 container cleanup 7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 09:42:00 compute-0 systemd[1]: libpod-conmon-7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4.scope: Deactivated successfully.
Oct 14 09:42:00 compute-0 ceph-mon[74249]: pgmap v2665: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 21 KiB/s wr, 2 op/s
Oct 14 09:42:00 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4192242180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:42:00 compute-0 podman[419693]: 2025-10-14 09:42:00.098680362 +0000 UTC m=+0.050941541 container remove 7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:42:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:00.106 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[866a3d7b-10e5-441a-b99c-5c4f9116c943]: (4, ('Tue Oct 14 09:41:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 (7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4)\n7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4\nTue Oct 14 09:42:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 (7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4)\n7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:00.108 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7e23c71e-5a92-4a75-adf3-99d3cab54768]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:00.109 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2d242a4-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:42:00 compute-0 nova_compute[259627]: 2025-10-14 09:42:00.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:00 compute-0 kernel: tapd2d242a4-f0: left promiscuous mode
Oct 14 09:42:00 compute-0 nova_compute[259627]: 2025-10-14 09:42:00.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:00.126 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1c890a57-17a5-4e8b-ae1c-113336c2bbb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:00.158 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[34d806fd-ef56-4cfe-bec1-6018988df5cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:00.159 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fc733c53-cb5d-492f-a09a-1214f351787e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:00.173 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aee7bcfb-c8c7-4aa3-b8fd-4588443dc417]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 863378, 'reachable_time': 37524, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419710, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:00.176 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:42:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:00.176 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b90453-38d5-4737-a736-0bed004860ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:00 compute-0 systemd[1]: run-netns-ovnmeta\x2dd2d242a4\x2dfdb7\x2d41f9\x2d9ee7\x2d4e1b17687d68.mount: Deactivated successfully.
Oct 14 09:42:00 compute-0 nova_compute[259627]: 2025-10-14 09:42:00.320 2 INFO nova.virt.libvirt.driver [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Deleting instance files /var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78_del
Oct 14 09:42:00 compute-0 nova_compute[259627]: 2025-10-14 09:42:00.321 2 INFO nova.virt.libvirt.driver [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Deletion of /var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78_del complete
Oct 14 09:42:00 compute-0 nova_compute[259627]: 2025-10-14 09:42:00.399 2 DEBUG nova.compute.manager [req-49b3b5f1-9bbd-4d0a-8eaa-701e10764480 req-fafb9e65-0607-4761-957a-2be2e82b18b7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received event network-vif-unplugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:42:00 compute-0 nova_compute[259627]: 2025-10-14 09:42:00.400 2 DEBUG oslo_concurrency.lockutils [req-49b3b5f1-9bbd-4d0a-8eaa-701e10764480 req-fafb9e65-0607-4761-957a-2be2e82b18b7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "0239a56b-babd-4c44-b52b-ade80229be78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:42:00 compute-0 nova_compute[259627]: 2025-10-14 09:42:00.400 2 DEBUG oslo_concurrency.lockutils [req-49b3b5f1-9bbd-4d0a-8eaa-701e10764480 req-fafb9e65-0607-4761-957a-2be2e82b18b7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:42:00 compute-0 nova_compute[259627]: 2025-10-14 09:42:00.401 2 DEBUG oslo_concurrency.lockutils [req-49b3b5f1-9bbd-4d0a-8eaa-701e10764480 req-fafb9e65-0607-4761-957a-2be2e82b18b7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:42:00 compute-0 nova_compute[259627]: 2025-10-14 09:42:00.401 2 DEBUG nova.compute.manager [req-49b3b5f1-9bbd-4d0a-8eaa-701e10764480 req-fafb9e65-0607-4761-957a-2be2e82b18b7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] No waiting events found dispatching network-vif-unplugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:42:00 compute-0 nova_compute[259627]: 2025-10-14 09:42:00.402 2 DEBUG nova.compute.manager [req-49b3b5f1-9bbd-4d0a-8eaa-701e10764480 req-fafb9e65-0607-4761-957a-2be2e82b18b7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received event network-vif-unplugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:42:00 compute-0 nova_compute[259627]: 2025-10-14 09:42:00.411 2 INFO nova.compute.manager [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 14 09:42:00 compute-0 nova_compute[259627]: 2025-10-14 09:42:00.412 2 DEBUG oslo.service.loopingcall [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:42:00 compute-0 nova_compute[259627]: 2025-10-14 09:42:00.413 2 DEBUG nova.compute.manager [-] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:42:00 compute-0 nova_compute[259627]: 2025-10-14 09:42:00.413 2 DEBUG nova.network.neutron [-] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:42:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 25 KiB/s wr, 48 op/s
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.191 2 DEBUG nova.network.neutron [-] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.207 2 INFO nova.compute.manager [-] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Took 0.79 seconds to deallocate network for instance.
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.246 2 DEBUG oslo_concurrency.lockutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.246 2 DEBUG oslo_concurrency.lockutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.290 2 DEBUG oslo_concurrency.processutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.583 2 DEBUG nova.network.neutron [req-5ea70705-b641-4b2b-b75d-7f48475546e1 req-2469ffea-f612-46aa-8028-18ad40808937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Updated VIF entry in instance network info cache for port 0b5d3762-db25-4cc9-90f3-79d3eb662378. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.585 2 DEBUG nova.network.neutron [req-5ea70705-b641-4b2b-b75d-7f48475546e1 req-2469ffea-f612-46aa-8028-18ad40808937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Updating instance_info_cache with network_info: [{"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.626 2 DEBUG oslo_concurrency.lockutils [req-5ea70705-b641-4b2b-b75d-7f48475546e1 req-2469ffea-f612-46aa-8028-18ad40808937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.717 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.718 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:42:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:42:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3419065968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.770 2 DEBUG oslo_concurrency.processutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.776 2 DEBUG nova.compute.provider_tree [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.792 2 DEBUG nova.scheduler.client.report [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.816 2 DEBUG oslo_concurrency.lockutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.845 2 INFO nova.scheduler.client.report [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance 0239a56b-babd-4c44-b52b-ade80229be78
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.888 2 DEBUG nova.compute.manager [req-48bfe682-e7ab-4104-b6cb-50d646b01f0d req-3f77bf9e-262d-49e9-9acf-e20511b0d44a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received event network-vif-deleted-0b5d3762-db25-4cc9-90f3-79d3eb662378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.888 2 INFO nova.compute.manager [req-48bfe682-e7ab-4104-b6cb-50d646b01f0d req-3f77bf9e-262d-49e9-9acf-e20511b0d44a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Neutron deleted interface 0b5d3762-db25-4cc9-90f3-79d3eb662378; detaching it from the instance and deleting it from the info cache
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.889 2 DEBUG nova.network.neutron [req-48bfe682-e7ab-4104-b6cb-50d646b01f0d req-3f77bf9e-262d-49e9-9acf-e20511b0d44a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.949 2 DEBUG nova.compute.manager [req-48bfe682-e7ab-4104-b6cb-50d646b01f0d req-3f77bf9e-262d-49e9-9acf-e20511b0d44a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Detach interface failed, port_id=0b5d3762-db25-4cc9-90f3-79d3eb662378, reason: Instance 0239a56b-babd-4c44-b52b-ade80229be78 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.952 2 DEBUG oslo_concurrency.lockutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.993 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:42:01 compute-0 nova_compute[259627]: 2025-10-14 09:42:01.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:42:02 compute-0 ceph-mon[74249]: pgmap v2666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 25 KiB/s wr, 48 op/s
Oct 14 09:42:02 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3419065968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:42:02 compute-0 nova_compute[259627]: 2025-10-14 09:42:02.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:02 compute-0 nova_compute[259627]: 2025-10-14 09:42:02.542 2 DEBUG nova.compute.manager [req-b5931aaa-3f84-4a73-8e1e-4c0b5def0b6e req-5ce9ba45-0471-4e79-8849-05dd31666ef9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received event network-vif-plugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:42:02 compute-0 nova_compute[259627]: 2025-10-14 09:42:02.542 2 DEBUG oslo_concurrency.lockutils [req-b5931aaa-3f84-4a73-8e1e-4c0b5def0b6e req-5ce9ba45-0471-4e79-8849-05dd31666ef9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "0239a56b-babd-4c44-b52b-ade80229be78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:42:02 compute-0 nova_compute[259627]: 2025-10-14 09:42:02.543 2 DEBUG oslo_concurrency.lockutils [req-b5931aaa-3f84-4a73-8e1e-4c0b5def0b6e req-5ce9ba45-0471-4e79-8849-05dd31666ef9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:42:02 compute-0 nova_compute[259627]: 2025-10-14 09:42:02.543 2 DEBUG oslo_concurrency.lockutils [req-b5931aaa-3f84-4a73-8e1e-4c0b5def0b6e req-5ce9ba45-0471-4e79-8849-05dd31666ef9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:42:02 compute-0 nova_compute[259627]: 2025-10-14 09:42:02.543 2 DEBUG nova.compute.manager [req-b5931aaa-3f84-4a73-8e1e-4c0b5def0b6e req-5ce9ba45-0471-4e79-8849-05dd31666ef9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] No waiting events found dispatching network-vif-plugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:42:02 compute-0 nova_compute[259627]: 2025-10-14 09:42:02.543 2 WARNING nova.compute.manager [req-b5931aaa-3f84-4a73-8e1e-4c0b5def0b6e req-5ce9ba45-0471-4e79-8849-05dd31666ef9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received unexpected event network-vif-plugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 for instance with vm_state deleted and task_state None.
Oct 14 09:42:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 13 KiB/s wr, 48 op/s
Oct 14 09:42:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:42:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:42:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:42:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:42:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:42:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:42:02 compute-0 nova_compute[259627]: 2025-10-14 09:42:02.989 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:42:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:42:04 compute-0 ceph-mon[74249]: pgmap v2667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 13 KiB/s wr, 48 op/s
Oct 14 09:42:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 13 KiB/s wr, 48 op/s
Oct 14 09:42:04 compute-0 nova_compute[259627]: 2025-10-14 09:42:04.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:42:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2607413162' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:42:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:42:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2607413162' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:42:06 compute-0 ceph-mon[74249]: pgmap v2668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 13 KiB/s wr, 48 op/s
Oct 14 09:42:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2607413162' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:42:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2607413162' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:42:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 13 KiB/s wr, 57 op/s
Oct 14 09:42:06 compute-0 nova_compute[259627]: 2025-10-14 09:42:06.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:42:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:07.057 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:42:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:07.058 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:42:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:07.058 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:42:07 compute-0 nova_compute[259627]: 2025-10-14 09:42:07.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:08 compute-0 ceph-mon[74249]: pgmap v2669: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 13 KiB/s wr, 57 op/s
Oct 14 09:42:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:42:08 compute-0 nova_compute[259627]: 2025-10-14 09:42:08.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Oct 14 09:42:08 compute-0 nova_compute[259627]: 2025-10-14 09:42:08.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:09 compute-0 nova_compute[259627]: 2025-10-14 09:42:09.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:10 compute-0 ceph-mon[74249]: pgmap v2670: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Oct 14 09:42:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Oct 14 09:42:11 compute-0 nova_compute[259627]: 2025-10-14 09:42:11.230 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434916.2287223, 3e37b67b-524c-4098-9609-97b0b31e72c4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:42:11 compute-0 nova_compute[259627]: 2025-10-14 09:42:11.230 2 INFO nova.compute.manager [-] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] VM Stopped (Lifecycle Event)
Oct 14 09:42:11 compute-0 nova_compute[259627]: 2025-10-14 09:42:11.253 2 DEBUG nova.compute.manager [None req-978b4e9d-affa-4b8a-a42a-b6f625f2d1c9 - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:42:12 compute-0 ceph-mon[74249]: pgmap v2671: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Oct 14 09:42:12 compute-0 nova_compute[259627]: 2025-10-14 09:42:12.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 597 B/s wr, 9 op/s
Oct 14 09:42:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:42:14 compute-0 ceph-mon[74249]: pgmap v2672: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 597 B/s wr, 9 op/s
Oct 14 09:42:14 compute-0 unix_chkpwd[419736]: password check failed for user (root)
Oct 14 09:42:14 compute-0 sshd-session[419734]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99  user=root
Oct 14 09:42:14 compute-0 podman[419738]: 2025-10-14 09:42:14.653508166 +0000 UTC m=+0.066552764 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 09:42:14 compute-0 podman[419737]: 2025-10-14 09:42:14.691760625 +0000 UTC m=+0.109076858 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 14 09:42:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 597 B/s wr, 9 op/s
Oct 14 09:42:14 compute-0 nova_compute[259627]: 2025-10-14 09:42:14.912 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434919.9109924, 0239a56b-babd-4c44-b52b-ade80229be78 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:42:14 compute-0 nova_compute[259627]: 2025-10-14 09:42:14.912 2 INFO nova.compute.manager [-] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] VM Stopped (Lifecycle Event)
Oct 14 09:42:14 compute-0 nova_compute[259627]: 2025-10-14 09:42:14.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:14 compute-0 nova_compute[259627]: 2025-10-14 09:42:14.943 2 DEBUG nova.compute.manager [None req-90424742-1551-4a56-96d2-872118ee2c13 - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:42:15 compute-0 sshd-session[419734]: Failed password for root from 193.46.255.99 port 54746 ssh2
Oct 14 09:42:15 compute-0 unix_chkpwd[419780]: password check failed for user (root)
Oct 14 09:42:16 compute-0 ceph-mon[74249]: pgmap v2673: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 597 B/s wr, 9 op/s
Oct 14 09:42:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 597 B/s wr, 9 op/s
Oct 14 09:42:17 compute-0 nova_compute[259627]: 2025-10-14 09:42:17.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:17 compute-0 sshd-session[419734]: Failed password for root from 193.46.255.99 port 54746 ssh2
Oct 14 09:42:18 compute-0 ceph-mon[74249]: pgmap v2674: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 597 B/s wr, 9 op/s
Oct 14 09:42:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:42:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:42:19 compute-0 unix_chkpwd[419781]: password check failed for user (root)
Oct 14 09:42:19 compute-0 nova_compute[259627]: 2025-10-14 09:42:19.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:20 compute-0 ceph-mon[74249]: pgmap v2675: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:42:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:42:21 compute-0 sshd-session[419734]: Failed password for root from 193.46.255.99 port 54746 ssh2
Oct 14 09:42:21 compute-0 ceph-mon[74249]: pgmap v2676: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:42:22 compute-0 nova_compute[259627]: 2025-10-14 09:42:22.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:22 compute-0 sshd-session[419734]: Received disconnect from 193.46.255.99 port 54746:11:  [preauth]
Oct 14 09:42:22 compute-0 sshd-session[419734]: Disconnected from authenticating user root 193.46.255.99 port 54746 [preauth]
Oct 14 09:42:22 compute-0 sshd-session[419734]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99  user=root
Oct 14 09:42:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:42:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:42:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:23.707 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:26:39 10.100.0.2 2001:db8::f816:3eff:fe08:2639'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe08:2639/64', 'neutron:device_id': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bf1cb75-5af4-410a-83b2-dc168fe7fc9d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=176e1cce-63d0-4e90-9078-245732aff057) old=Port_Binding(mac=['fa:16:3e:08:26:39 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:42:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:23.709 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 176e1cce-63d0-4e90-9078-245732aff057 in datapath aa331e50-389c-4e3c-80a7-7c8364a3fce5 updated
Oct 14 09:42:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:23.710 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa331e50-389c-4e3c-80a7-7c8364a3fce5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:42:23 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:23.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f283fb2c-1e86-446e-bc96-5540a63023d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:23 compute-0 ceph-mon[74249]: pgmap v2677: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:42:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:42:24 compute-0 nova_compute[259627]: 2025-10-14 09:42:24.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:25 compute-0 ceph-mon[74249]: pgmap v2678: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:42:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:42:27 compute-0 sudo[419782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:42:27 compute-0 sudo[419782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:42:27 compute-0 sudo[419782]: pam_unix(sudo:session): session closed for user root
Oct 14 09:42:27 compute-0 sudo[419807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:42:27 compute-0 sudo[419807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:42:27 compute-0 sudo[419807]: pam_unix(sudo:session): session closed for user root
Oct 14 09:42:27 compute-0 nova_compute[259627]: 2025-10-14 09:42:27.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:27 compute-0 sudo[419832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:42:27 compute-0 sudo[419832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:42:27 compute-0 sudo[419832]: pam_unix(sudo:session): session closed for user root
Oct 14 09:42:27 compute-0 sudo[419857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:42:27 compute-0 sudo[419857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:42:27 compute-0 ceph-mon[74249]: pgmap v2679: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:42:27 compute-0 sudo[419857]: pam_unix(sudo:session): session closed for user root
Oct 14 09:42:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:42:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:42:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:42:28 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:42:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:42:28 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:42:28 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 54e37c1f-ca94-4e94-b895-10aa7cc2777d does not exist
Oct 14 09:42:28 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 64ea7c65-2165-43fe-9af3-9ece2bef1666 does not exist
Oct 14 09:42:28 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 76b84b66-4081-4783-8c1c-f2024df4dae6 does not exist
Oct 14 09:42:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:42:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:42:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:42:28 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:42:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:42:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:42:28 compute-0 sudo[419914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:42:28 compute-0 sudo[419914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:42:28 compute-0 sudo[419914]: pam_unix(sudo:session): session closed for user root
Oct 14 09:42:28 compute-0 sudo[419940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:42:28 compute-0 sudo[419940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:42:28 compute-0 sudo[419940]: pam_unix(sudo:session): session closed for user root
Oct 14 09:42:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:42:28 compute-0 sudo[419965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:42:28 compute-0 sudo[419965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:42:28 compute-0 sudo[419965]: pam_unix(sudo:session): session closed for user root
Oct 14 09:42:28 compute-0 sudo[419990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:42:28 compute-0 sudo[419990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:42:28 compute-0 podman[420054]: 2025-10-14 09:42:28.700952568 +0000 UTC m=+0.066757179 container create b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shtern, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 09:42:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:42:28 compute-0 podman[420054]: 2025-10-14 09:42:28.671312761 +0000 UTC m=+0.037117442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:42:28 compute-0 systemd[1]: Started libpod-conmon-b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c.scope.
Oct 14 09:42:28 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:42:28 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:42:28 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:42:28 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:42:28 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:42:28 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:42:28 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:42:28 compute-0 podman[420054]: 2025-10-14 09:42:28.831621035 +0000 UTC m=+0.197425656 container init b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 09:42:28 compute-0 podman[420054]: 2025-10-14 09:42:28.841447636 +0000 UTC m=+0.207252257 container start b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shtern, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:42:28 compute-0 podman[420054]: 2025-10-14 09:42:28.846415318 +0000 UTC m=+0.212219939 container attach b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 09:42:28 compute-0 youthful_shtern[420070]: 167 167
Oct 14 09:42:28 compute-0 systemd[1]: libpod-b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c.scope: Deactivated successfully.
Oct 14 09:42:28 compute-0 conmon[420070]: conmon b11bc8a501cad7b4fea0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c.scope/container/memory.events
Oct 14 09:42:28 compute-0 podman[420054]: 2025-10-14 09:42:28.849954875 +0000 UTC m=+0.215759466 container died b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:42:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e90eaac3c445155ae93ea04257f0ac96fa4d0d4b3bfa6700f23f383ccd58afc8-merged.mount: Deactivated successfully.
Oct 14 09:42:28 compute-0 podman[420054]: 2025-10-14 09:42:28.898712021 +0000 UTC m=+0.264516612 container remove b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 09:42:28 compute-0 systemd[1]: libpod-conmon-b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c.scope: Deactivated successfully.
Oct 14 09:42:29 compute-0 podman[420094]: 2025-10-14 09:42:29.171442583 +0000 UTC m=+0.070577203 container create f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nightingale, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 09:42:29 compute-0 systemd[1]: Started libpod-conmon-f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75.scope.
Oct 14 09:42:29 compute-0 podman[420094]: 2025-10-14 09:42:29.14281076 +0000 UTC m=+0.041945450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:42:29 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:42:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8964133078fa97de6f55185c93b91569618c208e1c091ac3f13b1d80aca16896/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:42:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8964133078fa97de6f55185c93b91569618c208e1c091ac3f13b1d80aca16896/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:42:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8964133078fa97de6f55185c93b91569618c208e1c091ac3f13b1d80aca16896/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:42:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8964133078fa97de6f55185c93b91569618c208e1c091ac3f13b1d80aca16896/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:42:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8964133078fa97de6f55185c93b91569618c208e1c091ac3f13b1d80aca16896/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:42:29 compute-0 podman[420094]: 2025-10-14 09:42:29.283172465 +0000 UTC m=+0.182307115 container init f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nightingale, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 14 09:42:29 compute-0 podman[420094]: 2025-10-14 09:42:29.298943982 +0000 UTC m=+0.198078602 container start f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:42:29 compute-0 podman[420094]: 2025-10-14 09:42:29.302842528 +0000 UTC m=+0.201977238 container attach f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 09:42:29 compute-0 nova_compute[259627]: 2025-10-14 09:42:29.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:30 compute-0 ceph-mon[74249]: pgmap v2680: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:42:30 compute-0 infallible_nightingale[420111]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:42:30 compute-0 infallible_nightingale[420111]: --> relative data size: 1.0
Oct 14 09:42:30 compute-0 infallible_nightingale[420111]: --> All data devices are unavailable
Oct 14 09:42:30 compute-0 systemd[1]: libpod-f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75.scope: Deactivated successfully.
Oct 14 09:42:30 compute-0 podman[420094]: 2025-10-14 09:42:30.502651031 +0000 UTC m=+1.401785661 container died f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:42:30 compute-0 systemd[1]: libpod-f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75.scope: Consumed 1.157s CPU time.
Oct 14 09:42:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8964133078fa97de6f55185c93b91569618c208e1c091ac3f13b1d80aca16896-merged.mount: Deactivated successfully.
Oct 14 09:42:30 compute-0 podman[420094]: 2025-10-14 09:42:30.569484301 +0000 UTC m=+1.468618951 container remove f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 09:42:30 compute-0 systemd[1]: libpod-conmon-f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75.scope: Deactivated successfully.
Oct 14 09:42:30 compute-0 sudo[419990]: pam_unix(sudo:session): session closed for user root
Oct 14 09:42:30 compute-0 podman[420141]: 2025-10-14 09:42:30.642959004 +0000 UTC m=+0.105000907 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 09:42:30 compute-0 podman[420148]: 2025-10-14 09:42:30.643927738 +0000 UTC m=+0.104070815 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:42:30 compute-0 sudo[420182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:42:30 compute-0 sudo[420182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:42:30 compute-0 sudo[420182]: pam_unix(sudo:session): session closed for user root
Oct 14 09:42:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:42:30 compute-0 sudo[420213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:42:30 compute-0 sudo[420213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:42:30 compute-0 sudo[420213]: pam_unix(sudo:session): session closed for user root
Oct 14 09:42:30 compute-0 sudo[420238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:42:30 compute-0 sudo[420238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:42:30 compute-0 sudo[420238]: pam_unix(sudo:session): session closed for user root
Oct 14 09:42:30 compute-0 sudo[420263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:42:30 compute-0 sudo[420263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:42:31 compute-0 podman[420326]: 2025-10-14 09:42:31.272614996 +0000 UTC m=+0.064747120 container create 2993705ef9454ebd9271089d6456a764e62356797cdf3fa6e929f08eb2a40aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kowalevski, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 09:42:31 compute-0 systemd[1]: Started libpod-conmon-2993705ef9454ebd9271089d6456a764e62356797cdf3fa6e929f08eb2a40aa9.scope.
Oct 14 09:42:31 compute-0 podman[420326]: 2025-10-14 09:42:31.237585167 +0000 UTC m=+0.029717361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:42:31 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:42:31 compute-0 podman[420326]: 2025-10-14 09:42:31.373623405 +0000 UTC m=+0.165755509 container init 2993705ef9454ebd9271089d6456a764e62356797cdf3fa6e929f08eb2a40aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 09:42:31 compute-0 podman[420326]: 2025-10-14 09:42:31.381512979 +0000 UTC m=+0.173645073 container start 2993705ef9454ebd9271089d6456a764e62356797cdf3fa6e929f08eb2a40aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kowalevski, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 09:42:31 compute-0 podman[420326]: 2025-10-14 09:42:31.384888391 +0000 UTC m=+0.177020535 container attach 2993705ef9454ebd9271089d6456a764e62356797cdf3fa6e929f08eb2a40aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:42:31 compute-0 stupefied_kowalevski[420342]: 167 167
Oct 14 09:42:31 compute-0 systemd[1]: libpod-2993705ef9454ebd9271089d6456a764e62356797cdf3fa6e929f08eb2a40aa9.scope: Deactivated successfully.
Oct 14 09:42:31 compute-0 podman[420326]: 2025-10-14 09:42:31.387316761 +0000 UTC m=+0.179448875 container died 2993705ef9454ebd9271089d6456a764e62356797cdf3fa6e929f08eb2a40aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kowalevski, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:42:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb35fd1d902b07273a195b091081147db918c6c1012a8391dec3482532ba510a-merged.mount: Deactivated successfully.
Oct 14 09:42:31 compute-0 podman[420326]: 2025-10-14 09:42:31.426558284 +0000 UTC m=+0.218690378 container remove 2993705ef9454ebd9271089d6456a764e62356797cdf3fa6e929f08eb2a40aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kowalevski, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:42:31 compute-0 systemd[1]: libpod-conmon-2993705ef9454ebd9271089d6456a764e62356797cdf3fa6e929f08eb2a40aa9.scope: Deactivated successfully.
Oct 14 09:42:31 compute-0 nova_compute[259627]: 2025-10-14 09:42:31.601 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:42:31 compute-0 nova_compute[259627]: 2025-10-14 09:42:31.602 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:42:31 compute-0 podman[420365]: 2025-10-14 09:42:31.613961903 +0000 UTC m=+0.045564599 container create b7e7bf7c57f60083021b5c5a7680bb46da2b62ed29e7eb30403351ac5fab0729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_boyd, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:42:31 compute-0 nova_compute[259627]: 2025-10-14 09:42:31.616 2 DEBUG nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:42:31 compute-0 systemd[1]: Started libpod-conmon-b7e7bf7c57f60083021b5c5a7680bb46da2b62ed29e7eb30403351ac5fab0729.scope.
Oct 14 09:42:31 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:42:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a9cefbf54de7e71f753c819cfefa545b416a568166a265bee32bd551418e77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:42:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a9cefbf54de7e71f753c819cfefa545b416a568166a265bee32bd551418e77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:42:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a9cefbf54de7e71f753c819cfefa545b416a568166a265bee32bd551418e77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:42:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a9cefbf54de7e71f753c819cfefa545b416a568166a265bee32bd551418e77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:42:31 compute-0 podman[420365]: 2025-10-14 09:42:31.595730976 +0000 UTC m=+0.027333692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:42:31 compute-0 podman[420365]: 2025-10-14 09:42:31.693367521 +0000 UTC m=+0.124970237 container init b7e7bf7c57f60083021b5c5a7680bb46da2b62ed29e7eb30403351ac5fab0729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_boyd, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 09:42:31 compute-0 podman[420365]: 2025-10-14 09:42:31.705182451 +0000 UTC m=+0.136785147 container start b7e7bf7c57f60083021b5c5a7680bb46da2b62ed29e7eb30403351ac5fab0729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 09:42:31 compute-0 nova_compute[259627]: 2025-10-14 09:42:31.706 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:42:31 compute-0 nova_compute[259627]: 2025-10-14 09:42:31.707 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:42:31 compute-0 podman[420365]: 2025-10-14 09:42:31.70878534 +0000 UTC m=+0.140388056 container attach b7e7bf7c57f60083021b5c5a7680bb46da2b62ed29e7eb30403351ac5fab0729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_boyd, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:42:31 compute-0 nova_compute[259627]: 2025-10-14 09:42:31.715 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:42:31 compute-0 nova_compute[259627]: 2025-10-14 09:42:31.715 2 INFO nova.compute.claims [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:42:31 compute-0 nova_compute[259627]: 2025-10-14 09:42:31.821 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:42:32 compute-0 ceph-mon[74249]: pgmap v2681: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.052625) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434952052651, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2060, "num_deletes": 251, "total_data_size": 3407007, "memory_usage": 3467624, "flush_reason": "Manual Compaction"}
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434952067749, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 3351426, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54426, "largest_seqno": 56485, "table_properties": {"data_size": 3342000, "index_size": 5983, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19013, "raw_average_key_size": 20, "raw_value_size": 3323329, "raw_average_value_size": 3527, "num_data_blocks": 265, "num_entries": 942, "num_filter_entries": 942, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434724, "oldest_key_time": 1760434724, "file_creation_time": 1760434952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 15164 microseconds, and 7882 cpu microseconds.
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.067787) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 3351426 bytes OK
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.067804) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.069768) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.069782) EVENT_LOG_v1 {"time_micros": 1760434952069777, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.069797) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 3398360, prev total WAL file size 3398360, number of live WAL files 2.
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.070775) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(3272KB)], [128(8088KB)]
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434952070835, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 11633765, "oldest_snapshot_seqno": -1}
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7743 keys, 9926871 bytes, temperature: kUnknown
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434952112859, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 9926871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9876417, "index_size": 29989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19397, "raw_key_size": 201186, "raw_average_key_size": 25, "raw_value_size": 9739440, "raw_average_value_size": 1257, "num_data_blocks": 1170, "num_entries": 7743, "num_filter_entries": 7743, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.113068) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 9926871 bytes
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.114348) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 276.5 rd, 235.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.9 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 8257, records dropped: 514 output_compression: NoCompression
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.114361) EVENT_LOG_v1 {"time_micros": 1760434952114355, "job": 78, "event": "compaction_finished", "compaction_time_micros": 42081, "compaction_time_cpu_micros": 20615, "output_level": 6, "num_output_files": 1, "total_output_size": 9926871, "num_input_records": 8257, "num_output_records": 7743, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434952114917, "job": 78, "event": "table_file_deletion", "file_number": 130}
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434952116160, "job": 78, "event": "table_file_deletion", "file_number": 128}
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.070686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.116239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.116249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.116253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.116258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:42:32 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.116262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:42:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:42:32 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1559109412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.268 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.275 2 DEBUG nova.compute.provider_tree [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.299 2 DEBUG nova.scheduler.client.report [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.328 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.329 2 DEBUG nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.382 2 DEBUG nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.383 2 DEBUG nova.network.neutron [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.403 2 INFO nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.424 2 DEBUG nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]: {
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:     "0": [
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:         {
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "devices": [
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "/dev/loop3"
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             ],
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "lv_name": "ceph_lv0",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "lv_size": "21470642176",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "name": "ceph_lv0",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "tags": {
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.cluster_name": "ceph",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.crush_device_class": "",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.encrypted": "0",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.osd_id": "0",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.type": "block",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.vdo": "0"
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             },
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "type": "block",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "vg_name": "ceph_vg0"
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:         }
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:     ],
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:     "1": [
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:         {
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "devices": [
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "/dev/loop4"
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             ],
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "lv_name": "ceph_lv1",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "lv_size": "21470642176",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "name": "ceph_lv1",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "tags": {
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.cluster_name": "ceph",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.crush_device_class": "",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.encrypted": "0",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.osd_id": "1",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.type": "block",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.vdo": "0"
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             },
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "type": "block",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "vg_name": "ceph_vg1"
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:         }
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:     ],
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:     "2": [
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:         {
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "devices": [
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "/dev/loop5"
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             ],
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "lv_name": "ceph_lv2",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "lv_size": "21470642176",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "name": "ceph_lv2",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "tags": {
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.cluster_name": "ceph",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.crush_device_class": "",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.encrypted": "0",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.osd_id": "2",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.type": "block",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:                 "ceph.vdo": "0"
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             },
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "type": "block",
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:             "vg_name": "ceph_vg2"
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:         }
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]:     ]
Oct 14 09:42:32 compute-0 hopeful_boyd[420382]: }
Oct 14 09:42:32 compute-0 systemd[1]: libpod-b7e7bf7c57f60083021b5c5a7680bb46da2b62ed29e7eb30403351ac5fab0729.scope: Deactivated successfully.
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.549 2 DEBUG nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.552 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.552 2 INFO nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Creating image(s)
Oct 14 09:42:32 compute-0 podman[420413]: 2025-10-14 09:42:32.576269938 +0000 UTC m=+0.032266523 container died b7e7bf7c57f60083021b5c5a7680bb46da2b62ed29e7eb30403351ac5fab0729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_boyd, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.577 2 DEBUG nova.storage.rbd_utils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:42:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-69a9cefbf54de7e71f753c819cfefa545b416a568166a265bee32bd551418e77-merged.mount: Deactivated successfully.
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.607 2 DEBUG nova.storage.rbd_utils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:42:32 compute-0 podman[420413]: 2025-10-14 09:42:32.645540768 +0000 UTC m=+0.101537323 container remove b7e7bf7c57f60083021b5c5a7680bb46da2b62ed29e7eb30403351ac5fab0729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.650 2 DEBUG nova.storage.rbd_utils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:42:32 compute-0 systemd[1]: libpod-conmon-b7e7bf7c57f60083021b5c5a7680bb46da2b62ed29e7eb30403351ac5fab0729.scope: Deactivated successfully.
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.659 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:42:32 compute-0 sudo[420263]: pam_unix(sudo:session): session closed for user root
Oct 14 09:42:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.753 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.755 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.756 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.757 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:42:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:42:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:42:32 compute-0 sudo[420482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.790 2 DEBUG nova.storage.rbd_utils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:42:32 compute-0 sudo[420482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.797 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:42:32 compute-0 sudo[420482]: pam_unix(sudo:session): session closed for user root
Oct 14 09:42:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:42:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:42:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:42:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:42:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:42:32
Oct 14 09:42:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:42:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:42:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'vms', '.mgr', 'volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'images']
Oct 14 09:42:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:42:32 compute-0 sudo[420527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:42:32 compute-0 sudo[420527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:42:32 compute-0 sudo[420527]: pam_unix(sudo:session): session closed for user root
Oct 14 09:42:32 compute-0 nova_compute[259627]: 2025-10-14 09:42:32.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:42:32 compute-0 sudo[420568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:42:32 compute-0 sudo[420568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:42:33 compute-0 sudo[420568]: pam_unix(sudo:session): session closed for user root
Oct 14 09:42:33 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1559109412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:42:33 compute-0 sudo[420597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:42:33 compute-0 sudo[420597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:42:33 compute-0 nova_compute[259627]: 2025-10-14 09:42:33.099 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:42:33 compute-0 nova_compute[259627]: 2025-10-14 09:42:33.166 2 DEBUG nova.storage.rbd_utils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:42:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:42:33 compute-0 nova_compute[259627]: 2025-10-14 09:42:33.262 2 DEBUG nova.objects.instance [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:42:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:42:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:42:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:42:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:42:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:42:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:42:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:42:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:42:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:42:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:42:33 compute-0 nova_compute[259627]: 2025-10-14 09:42:33.383 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:42:33 compute-0 nova_compute[259627]: 2025-10-14 09:42:33.383 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Ensure instance console log exists: /var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:42:33 compute-0 nova_compute[259627]: 2025-10-14 09:42:33.384 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:42:33 compute-0 nova_compute[259627]: 2025-10-14 09:42:33.384 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:42:33 compute-0 nova_compute[259627]: 2025-10-14 09:42:33.384 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:42:33 compute-0 nova_compute[259627]: 2025-10-14 09:42:33.428 2 DEBUG nova.policy [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:42:33 compute-0 podman[420734]: 2025-10-14 09:42:33.486781071 +0000 UTC m=+0.043417076 container create 04cbfcc542c0965d75648103d0d7564e2e98f844b5be7f1488252eadc4b47a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:42:33 compute-0 systemd[1]: Started libpod-conmon-04cbfcc542c0965d75648103d0d7564e2e98f844b5be7f1488252eadc4b47a0f.scope.
Oct 14 09:42:33 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:42:33 compute-0 podman[420734]: 2025-10-14 09:42:33.469484607 +0000 UTC m=+0.026120662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:42:33 compute-0 podman[420734]: 2025-10-14 09:42:33.568182349 +0000 UTC m=+0.124818354 container init 04cbfcc542c0965d75648103d0d7564e2e98f844b5be7f1488252eadc4b47a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:42:33 compute-0 podman[420734]: 2025-10-14 09:42:33.578962123 +0000 UTC m=+0.135598128 container start 04cbfcc542c0965d75648103d0d7564e2e98f844b5be7f1488252eadc4b47a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cannon, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:42:33 compute-0 podman[420734]: 2025-10-14 09:42:33.58207196 +0000 UTC m=+0.138707995 container attach 04cbfcc542c0965d75648103d0d7564e2e98f844b5be7f1488252eadc4b47a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:42:33 compute-0 practical_cannon[420750]: 167 167
Oct 14 09:42:33 compute-0 systemd[1]: libpod-04cbfcc542c0965d75648103d0d7564e2e98f844b5be7f1488252eadc4b47a0f.scope: Deactivated successfully.
Oct 14 09:42:33 compute-0 podman[420734]: 2025-10-14 09:42:33.583640478 +0000 UTC m=+0.140276493 container died 04cbfcc542c0965d75648103d0d7564e2e98f844b5be7f1488252eadc4b47a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cannon, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:42:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b70932201dcddd1f9d64d6493654e4e6b61e4851359877357f677b60040e3660-merged.mount: Deactivated successfully.
Oct 14 09:42:33 compute-0 podman[420734]: 2025-10-14 09:42:33.632980019 +0000 UTC m=+0.189616044 container remove 04cbfcc542c0965d75648103d0d7564e2e98f844b5be7f1488252eadc4b47a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cannon, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 09:42:33 compute-0 systemd[1]: libpod-conmon-04cbfcc542c0965d75648103d0d7564e2e98f844b5be7f1488252eadc4b47a0f.scope: Deactivated successfully.
Oct 14 09:42:33 compute-0 podman[420772]: 2025-10-14 09:42:33.889607537 +0000 UTC m=+0.074304755 container create 4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:42:33 compute-0 systemd[1]: Started libpod-conmon-4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf.scope.
Oct 14 09:42:33 compute-0 podman[420772]: 2025-10-14 09:42:33.860546413 +0000 UTC m=+0.045243671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:42:33 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa042371da3fe5e7632683869b9fe364b4a57e3a795b797d2d25b1e06672e662/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa042371da3fe5e7632683869b9fe364b4a57e3a795b797d2d25b1e06672e662/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa042371da3fe5e7632683869b9fe364b4a57e3a795b797d2d25b1e06672e662/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa042371da3fe5e7632683869b9fe364b4a57e3a795b797d2d25b1e06672e662/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:42:34 compute-0 podman[420772]: 2025-10-14 09:42:34.00790516 +0000 UTC m=+0.192602398 container init 4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 09:42:34 compute-0 podman[420772]: 2025-10-14 09:42:34.020123679 +0000 UTC m=+0.204820887 container start 4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 09:42:34 compute-0 podman[420772]: 2025-10-14 09:42:34.024304872 +0000 UTC m=+0.209002140 container attach 4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 09:42:34 compute-0 ceph-mon[74249]: pgmap v2682: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:42:34 compute-0 nova_compute[259627]: 2025-10-14 09:42:34.379 2 DEBUG nova.network.neutron [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Successfully created port: b3e71767-ffef-473e-947a-bb35562569c9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:42:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:42:34 compute-0 nova_compute[259627]: 2025-10-14 09:42:34.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]: {
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:         "osd_id": 2,
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:         "type": "bluestore"
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:     },
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:         "osd_id": 1,
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:         "type": "bluestore"
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:     },
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:         "osd_id": 0,
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:         "type": "bluestore"
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]:     }
Oct 14 09:42:35 compute-0 stupefied_stonebraker[420789]: }
Oct 14 09:42:35 compute-0 systemd[1]: libpod-4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf.scope: Deactivated successfully.
Oct 14 09:42:35 compute-0 podman[420772]: 2025-10-14 09:42:35.052651688 +0000 UTC m=+1.237348956 container died 4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:42:35 compute-0 systemd[1]: libpod-4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf.scope: Consumed 1.043s CPU time.
Oct 14 09:42:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa042371da3fe5e7632683869b9fe364b4a57e3a795b797d2d25b1e06672e662-merged.mount: Deactivated successfully.
Oct 14 09:42:35 compute-0 podman[420772]: 2025-10-14 09:42:35.12529128 +0000 UTC m=+1.309988488 container remove 4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 09:42:35 compute-0 systemd[1]: libpod-conmon-4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf.scope: Deactivated successfully.
Oct 14 09:42:35 compute-0 sudo[420597]: pam_unix(sudo:session): session closed for user root
Oct 14 09:42:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:42:35 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:42:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:42:35 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:42:35 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev bfe1fe07-a333-4ac2-a6ae-0b1ca3781651 does not exist
Oct 14 09:42:35 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 3a35b79a-5b3b-4c6e-ba63-6519d55be060 does not exist
Oct 14 09:42:35 compute-0 sudo[420834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:42:35 compute-0 sudo[420834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:42:35 compute-0 sudo[420834]: pam_unix(sudo:session): session closed for user root
Oct 14 09:42:35 compute-0 sudo[420859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:42:35 compute-0 sudo[420859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:42:35 compute-0 sudo[420859]: pam_unix(sudo:session): session closed for user root
Oct 14 09:42:36 compute-0 ceph-mon[74249]: pgmap v2683: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:42:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:42:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:42:36 compute-0 nova_compute[259627]: 2025-10-14 09:42:36.310 2 DEBUG nova.network.neutron [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Successfully updated port: b3e71767-ffef-473e-947a-bb35562569c9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:42:36 compute-0 nova_compute[259627]: 2025-10-14 09:42:36.334 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:42:36 compute-0 nova_compute[259627]: 2025-10-14 09:42:36.334 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:42:36 compute-0 nova_compute[259627]: 2025-10-14 09:42:36.334 2 DEBUG nova.network.neutron [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:42:36 compute-0 nova_compute[259627]: 2025-10-14 09:42:36.415 2 DEBUG nova.compute.manager [req-102d6eba-8970-45f8-a661-35ab5f5ea2a6 req-5c184796-ba86-48ee-9515-f140aed2e056 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received event network-changed-b3e71767-ffef-473e-947a-bb35562569c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:42:36 compute-0 nova_compute[259627]: 2025-10-14 09:42:36.416 2 DEBUG nova.compute.manager [req-102d6eba-8970-45f8-a661-35ab5f5ea2a6 req-5c184796-ba86-48ee-9515-f140aed2e056 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Refreshing instance network info cache due to event network-changed-b3e71767-ffef-473e-947a-bb35562569c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:42:36 compute-0 nova_compute[259627]: 2025-10-14 09:42:36.416 2 DEBUG oslo_concurrency.lockutils [req-102d6eba-8970-45f8-a661-35ab5f5ea2a6 req-5c184796-ba86-48ee-9515-f140aed2e056 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:42:36 compute-0 nova_compute[259627]: 2025-10-14 09:42:36.515 2 DEBUG nova.network.neutron [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:42:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:42:37 compute-0 nova_compute[259627]: 2025-10-14 09:42:37.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:38 compute-0 ceph-mon[74249]: pgmap v2684: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:42:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.461 2 DEBUG nova.network.neutron [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updating instance_info_cache with network_info: [{"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.480 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.481 2 DEBUG nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Instance network_info: |[{"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.482 2 DEBUG oslo_concurrency.lockutils [req-102d6eba-8970-45f8-a661-35ab5f5ea2a6 req-5c184796-ba86-48ee-9515-f140aed2e056 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.482 2 DEBUG nova.network.neutron [req-102d6eba-8970-45f8-a661-35ab5f5ea2a6 req-5c184796-ba86-48ee-9515-f140aed2e056 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Refreshing network info cache for port b3e71767-ffef-473e-947a-bb35562569c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.487 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Start _get_guest_xml network_info=[{"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.495 2 WARNING nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.509 2 DEBUG nova.virt.libvirt.host [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.510 2 DEBUG nova.virt.libvirt.host [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.515 2 DEBUG nova.virt.libvirt.host [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.516 2 DEBUG nova.virt.libvirt.host [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.516 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.517 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.518 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.518 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.519 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.519 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.520 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.520 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.521 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.521 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.522 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.522 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:42:38 compute-0 nova_compute[259627]: 2025-10-14 09:42:38.527 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:42:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:42:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:42:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3978747132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.034 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.072 2 DEBUG nova.storage.rbd_utils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.079 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:42:39 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3978747132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:42:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:42:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2533610027' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.553 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.556 2 DEBUG nova.virt.libvirt.vif [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:42:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-754347042',display_name='tempest-TestGettingAddress-server-754347042',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-754347042',id=152,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6mjdW3JQlIBQQd52sNXAHhyt49LvnDwoLKe8QrZpUHOv1Ky+CeaemM120Tsqn8n4PYQVzJvOC7aPmnxBpwslIeaFMTr8U05DGb5bjn09df7W51DFKPiOou1XQCmZ1ugQ==',key_name='tempest-TestGettingAddress-1974181320',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-072443ui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:42:32Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=f6f1bb88-2b88-4876-ab72-3e4e4dc9a578,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.556 2 DEBUG nova.network.os_vif_util [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.558 2 DEBUG nova.network.os_vif_util [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:ac:57,bridge_name='br-int',has_traffic_filtering=True,id=b3e71767-ffef-473e-947a-bb35562569c9,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e71767-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.561 2 DEBUG nova.objects.instance [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.599 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:42:39 compute-0 nova_compute[259627]:   <uuid>f6f1bb88-2b88-4876-ab72-3e4e4dc9a578</uuid>
Oct 14 09:42:39 compute-0 nova_compute[259627]:   <name>instance-00000098</name>
Oct 14 09:42:39 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:42:39 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:42:39 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <nova:name>tempest-TestGettingAddress-server-754347042</nova:name>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:42:38</nova:creationTime>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:42:39 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:42:39 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:42:39 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:42:39 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:42:39 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:42:39 compute-0 nova_compute[259627]:         <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 09:42:39 compute-0 nova_compute[259627]:         <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:42:39 compute-0 nova_compute[259627]:         <nova:port uuid="b3e71767-ffef-473e-947a-bb35562569c9">
Oct 14 09:42:39 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe57:ac57" ipVersion="6"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:42:39 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:42:39 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <system>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <entry name="serial">f6f1bb88-2b88-4876-ab72-3e4e4dc9a578</entry>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <entry name="uuid">f6f1bb88-2b88-4876-ab72-3e4e4dc9a578</entry>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     </system>
Oct 14 09:42:39 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:42:39 compute-0 nova_compute[259627]:   <os>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:   </os>
Oct 14 09:42:39 compute-0 nova_compute[259627]:   <features>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:   </features>
Oct 14 09:42:39 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:42:39 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:42:39 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk">
Oct 14 09:42:39 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       </source>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:42:39 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk.config">
Oct 14 09:42:39 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       </source>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:42:39 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:57:ac:57"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <target dev="tapb3e71767-ff"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578/console.log" append="off"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <video>
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     </video>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:42:39 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:42:39 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:42:39 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:42:39 compute-0 nova_compute[259627]: </domain>
Oct 14 09:42:39 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.600 2 DEBUG nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Preparing to wait for external event network-vif-plugged-b3e71767-ffef-473e-947a-bb35562569c9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.600 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.601 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.601 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.602 2 DEBUG nova.virt.libvirt.vif [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:42:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-754347042',display_name='tempest-TestGettingAddress-server-754347042',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-754347042',id=152,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6mjdW3JQlIBQQd52sNXAHhyt49LvnDwoLKe8QrZpUHOv1Ky+CeaemM120Tsqn8n4PYQVzJvOC7aPmnxBpwslIeaFMTr8U05DGb5bjn09df7W51DFKPiOou1XQCmZ1ugQ==',key_name='tempest-TestGettingAddress-1974181320',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-072443ui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:42:32Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=f6f1bb88-2b88-4876-ab72-3e4e4dc9a578,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.603 2 DEBUG nova.network.os_vif_util [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.604 2 DEBUG nova.network.os_vif_util [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:ac:57,bridge_name='br-int',has_traffic_filtering=True,id=b3e71767-ffef-473e-947a-bb35562569c9,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e71767-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.605 2 DEBUG os_vif [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:ac:57,bridge_name='br-int',has_traffic_filtering=True,id=b3e71767-ffef-473e-947a-bb35562569c9,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e71767-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.606 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.607 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.612 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3e71767-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.612 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3e71767-ff, col_values=(('external_ids', {'iface-id': 'b3e71767-ffef-473e-947a-bb35562569c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:ac:57', 'vm-uuid': 'f6f1bb88-2b88-4876-ab72-3e4e4dc9a578'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:42:39 compute-0 NetworkManager[44885]: <info>  [1760434959.6161] manager: (tapb3e71767-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/682)
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.624 2 INFO os_vif [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:ac:57,bridge_name='br-int',has_traffic_filtering=True,id=b3e71767-ffef-473e-947a-bb35562569c9,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e71767-ff')
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.696 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.696 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.697 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:57:ac:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.698 2 INFO nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Using config drive
Oct 14 09:42:39 compute-0 nova_compute[259627]: 2025-10-14 09:42:39.731 2 DEBUG nova.storage.rbd_utils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:42:40 compute-0 ceph-mon[74249]: pgmap v2685: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:42:40 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2533610027' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:42:40 compute-0 nova_compute[259627]: 2025-10-14 09:42:40.389 2 INFO nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Creating config drive at /var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578/disk.config
Oct 14 09:42:40 compute-0 nova_compute[259627]: 2025-10-14 09:42:40.397 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5yp76zfn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:42:40 compute-0 nova_compute[259627]: 2025-10-14 09:42:40.569 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5yp76zfn" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:42:40 compute-0 nova_compute[259627]: 2025-10-14 09:42:40.600 2 DEBUG nova.storage.rbd_utils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:42:40 compute-0 nova_compute[259627]: 2025-10-14 09:42:40.603 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578/disk.config f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:42:40 compute-0 nova_compute[259627]: 2025-10-14 09:42:40.744 2 DEBUG nova.network.neutron [req-102d6eba-8970-45f8-a661-35ab5f5ea2a6 req-5c184796-ba86-48ee-9515-f140aed2e056 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updated VIF entry in instance network info cache for port b3e71767-ffef-473e-947a-bb35562569c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:42:40 compute-0 nova_compute[259627]: 2025-10-14 09:42:40.745 2 DEBUG nova.network.neutron [req-102d6eba-8970-45f8-a661-35ab5f5ea2a6 req-5c184796-ba86-48ee-9515-f140aed2e056 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updating instance_info_cache with network_info: [{"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:42:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:42:40 compute-0 nova_compute[259627]: 2025-10-14 09:42:40.778 2 DEBUG oslo_concurrency.lockutils [req-102d6eba-8970-45f8-a661-35ab5f5ea2a6 req-5c184796-ba86-48ee-9515-f140aed2e056 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:42:40 compute-0 nova_compute[259627]: 2025-10-14 09:42:40.789 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578/disk.config f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:42:40 compute-0 nova_compute[259627]: 2025-10-14 09:42:40.790 2 INFO nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Deleting local config drive /var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578/disk.config because it was imported into RBD.
Oct 14 09:42:40 compute-0 kernel: tapb3e71767-ff: entered promiscuous mode
Oct 14 09:42:40 compute-0 NetworkManager[44885]: <info>  [1760434960.8668] manager: (tapb3e71767-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/683)
Oct 14 09:42:40 compute-0 nova_compute[259627]: 2025-10-14 09:42:40.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:40 compute-0 ovn_controller[152662]: 2025-10-14T09:42:40Z|01665|binding|INFO|Claiming lport b3e71767-ffef-473e-947a-bb35562569c9 for this chassis.
Oct 14 09:42:40 compute-0 ovn_controller[152662]: 2025-10-14T09:42:40Z|01666|binding|INFO|b3e71767-ffef-473e-947a-bb35562569c9: Claiming fa:16:3e:57:ac:57 10.100.0.6 2001:db8::f816:3eff:fe57:ac57
Oct 14 09:42:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.884 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:ac:57 10.100.0.6 2001:db8::f816:3eff:fe57:ac57'], port_security=['fa:16:3e:57:ac:57 10.100.0.6 2001:db8::f816:3eff:fe57:ac57'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8::f816:3eff:fe57:ac57/64', 'neutron:device_id': 'f6f1bb88-2b88-4876-ab72-3e4e4dc9a578', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f0bf2f6f-4af7-4c93-8c0d-bb1990d0ba82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bf1cb75-5af4-410a-83b2-dc168fe7fc9d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b3e71767-ffef-473e-947a-bb35562569c9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:42:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.885 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b3e71767-ffef-473e-947a-bb35562569c9 in datapath aa331e50-389c-4e3c-80a7-7c8364a3fce5 bound to our chassis
Oct 14 09:42:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.887 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa331e50-389c-4e3c-80a7-7c8364a3fce5
Oct 14 09:42:40 compute-0 systemd-udevd[421017]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:42:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.905 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca68f9b-0df6-420a-9a49-a481a83e1624]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.906 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa331e50-31 in ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 09:42:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.908 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa331e50-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 09:42:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.908 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[78030806-8c6c-4ce3-b214-f927e3d5bb10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.909 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4b94d23c-01f7-409d-96dd-0472c10a945e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:40 compute-0 NetworkManager[44885]: <info>  [1760434960.9223] device (tapb3e71767-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:42:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.922 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[10db6d3d-39c7-45b8-ba17-8d0bd38dee59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:40 compute-0 NetworkManager[44885]: <info>  [1760434960.9252] device (tapb3e71767-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:42:40 compute-0 systemd-machined[214636]: New machine qemu-185-instance-00000098.
Oct 14 09:42:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.954 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6c859cb9-05ff-4aab-8198-9b112165813b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:40 compute-0 systemd[1]: Started Virtual Machine qemu-185-instance-00000098.
Oct 14 09:42:40 compute-0 nova_compute[259627]: 2025-10-14 09:42:40.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.986 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a15e95a6-3e2f-470f-9b01-4931e945991c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:40 compute-0 NetworkManager[44885]: <info>  [1760434960.9929] manager: (tapaa331e50-30): new Veth device (/org/freedesktop/NetworkManager/Devices/684)
Oct 14 09:42:40 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.992 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b710c0-d498-413a-8f49-ed76b24c769e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:40 compute-0 systemd-udevd[421023]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:42:41 compute-0 ovn_controller[152662]: 2025-10-14T09:42:41Z|01667|binding|INFO|Setting lport b3e71767-ffef-473e-947a-bb35562569c9 ovn-installed in OVS
Oct 14 09:42:41 compute-0 ovn_controller[152662]: 2025-10-14T09:42:41Z|01668|binding|INFO|Setting lport b3e71767-ffef-473e-947a-bb35562569c9 up in Southbound
Oct 14 09:42:41 compute-0 nova_compute[259627]: 2025-10-14 09:42:41.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.020 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[74ea99d1-d242-412b-a912-db12ebbbedaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.032 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8e34cd34-1334-47fe-a0b9-7241909593fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:41 compute-0 NetworkManager[44885]: <info>  [1760434961.0607] device (tapaa331e50-30): carrier: link connected
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.068 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[dde2768e-ef2d-480d-8ea0-d613abb3078e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.084 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9fcbad9b-ad69-4540-9805-f6c0fba995a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa331e50-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:26:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873982, 'reachable_time': 36414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 421053, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.104 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[caa0ef22-4042-499e-b888-99ed6f9f7066]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:2639'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 873982, 'tstamp': 873982}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 421054, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.125 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e359392a-186d-4be8-b0a3-4bd8f61745b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa331e50-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:26:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873982, 'reachable_time': 36414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 421055, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.173 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e343fd-ca9f-4613-830a-0ad3f5eaa591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.261 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d19de77c-50f7-4fc7-992c-135cc0224810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.262 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa331e50-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.263 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.264 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa331e50-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:42:41 compute-0 kernel: tapaa331e50-30: entered promiscuous mode
Oct 14 09:42:41 compute-0 NetworkManager[44885]: <info>  [1760434961.2674] manager: (tapaa331e50-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/685)
Oct 14 09:42:41 compute-0 nova_compute[259627]: 2025-10-14 09:42:41.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.270 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa331e50-30, col_values=(('external_ids', {'iface-id': '176e1cce-63d0-4e90-9078-245732aff057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:42:41 compute-0 ovn_controller[152662]: 2025-10-14T09:42:41Z|01669|binding|INFO|Releasing lport 176e1cce-63d0-4e90-9078-245732aff057 from this chassis (sb_readonly=0)
Oct 14 09:42:41 compute-0 nova_compute[259627]: 2025-10-14 09:42:41.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:41 compute-0 nova_compute[259627]: 2025-10-14 09:42:41.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.302 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa331e50-389c-4e3c-80a7-7c8364a3fce5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa331e50-389c-4e3c-80a7-7c8364a3fce5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.304 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[288e2e03-6283-4955-9bab-ef06a34cee44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.305 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: global
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     log         /dev/log local0 debug
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     log-tag     haproxy-metadata-proxy-aa331e50-389c-4e3c-80a7-7c8364a3fce5
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     user        root
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     group       root
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     maxconn     1024
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     pidfile     /var/lib/neutron/external/pids/aa331e50-389c-4e3c-80a7-7c8364a3fce5.pid.haproxy
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     daemon
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: defaults
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     log global
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     mode http
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     option httplog
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     option dontlognull
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     option http-server-close
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     option forwardfor
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     retries                 3
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     timeout http-request    30s
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     timeout connect         30s
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     timeout client          32s
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     timeout server          32s
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     timeout http-keep-alive 30s
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: listen listener
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     bind 169.254.169.254:80
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     server metadata /var/lib/neutron/metadata_proxy
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:     http-request add-header X-OVN-Network-ID aa331e50-389c-4e3c-80a7-7c8364a3fce5
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.309 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'env', 'PROCESS_TAG=haproxy-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa331e50-389c-4e3c-80a7-7c8364a3fce5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 09:42:41 compute-0 nova_compute[259627]: 2025-10-14 09:42:41.437 2 DEBUG nova.compute.manager [req-364f26f3-b3c2-42f5-831f-46ecfb094829 req-38458ec8-ce01-4469-96bd-154a91a89f64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received event network-vif-plugged-b3e71767-ffef-473e-947a-bb35562569c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:42:41 compute-0 nova_compute[259627]: 2025-10-14 09:42:41.437 2 DEBUG oslo_concurrency.lockutils [req-364f26f3-b3c2-42f5-831f-46ecfb094829 req-38458ec8-ce01-4469-96bd-154a91a89f64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:42:41 compute-0 nova_compute[259627]: 2025-10-14 09:42:41.438 2 DEBUG oslo_concurrency.lockutils [req-364f26f3-b3c2-42f5-831f-46ecfb094829 req-38458ec8-ce01-4469-96bd-154a91a89f64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:42:41 compute-0 nova_compute[259627]: 2025-10-14 09:42:41.438 2 DEBUG oslo_concurrency.lockutils [req-364f26f3-b3c2-42f5-831f-46ecfb094829 req-38458ec8-ce01-4469-96bd-154a91a89f64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:42:41 compute-0 nova_compute[259627]: 2025-10-14 09:42:41.439 2 DEBUG nova.compute.manager [req-364f26f3-b3c2-42f5-831f-46ecfb094829 req-38458ec8-ce01-4469-96bd-154a91a89f64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Processing event network-vif-plugged-b3e71767-ffef-473e-947a-bb35562569c9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.477 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:42:41 compute-0 nova_compute[259627]: 2025-10-14 09:42:41.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:41 compute-0 podman[421129]: 2025-10-14 09:42:41.756258803 +0000 UTC m=+0.072731566 container create 6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:42:41 compute-0 systemd[1]: Started libpod-conmon-6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839.scope.
Oct 14 09:42:41 compute-0 podman[421129]: 2025-10-14 09:42:41.727432456 +0000 UTC m=+0.043905239 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 09:42:41 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:42:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7492a1d1260024e32fdfbc4b0a12cad644de8f71781dd5c42aeb3535a093dda/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 09:42:41 compute-0 podman[421129]: 2025-10-14 09:42:41.857140359 +0000 UTC m=+0.173613132 container init 6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:42:41 compute-0 podman[421129]: 2025-10-14 09:42:41.869190494 +0000 UTC m=+0.185663247 container start 6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 14 09:42:41 compute-0 neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5[421145]: [NOTICE]   (421149) : New worker (421151) forked
Oct 14 09:42:41 compute-0 neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5[421145]: [NOTICE]   (421149) : Loading success.
Oct 14 09:42:41 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.935 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.073 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434962.072886, f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.074 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] VM Started (Lifecycle Event)
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.076 2 DEBUG nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.084 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.089 2 INFO nova.virt.libvirt.driver [-] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Instance spawned successfully.
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.089 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.092 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.096 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.106 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.107 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.107 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.108 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.108 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.109 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:42:42 compute-0 ceph-mon[74249]: pgmap v2686: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.116 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.117 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434962.073146, f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.117 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] VM Paused (Lifecycle Event)
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.177 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.182 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434962.0804393, f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.182 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] VM Resumed (Lifecycle Event)
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.191 2 INFO nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Took 9.64 seconds to spawn the instance on the hypervisor.
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.192 2 DEBUG nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.203 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.208 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.235 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.255 2 INFO nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Took 10.59 seconds to build instance.
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.282 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:42:42 compute-0 nova_compute[259627]: 2025-10-14 09:42:42.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:42:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:42:43 compute-0 nova_compute[259627]: 2025-10-14 09:42:43.552 2 DEBUG nova.compute.manager [req-4f976299-c27d-4e6f-8a8e-08a211204d93 req-6ba79b73-6807-4086-a2eb-731dbf674e4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received event network-vif-plugged-b3e71767-ffef-473e-947a-bb35562569c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:42:43 compute-0 nova_compute[259627]: 2025-10-14 09:42:43.553 2 DEBUG oslo_concurrency.lockutils [req-4f976299-c27d-4e6f-8a8e-08a211204d93 req-6ba79b73-6807-4086-a2eb-731dbf674e4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:42:43 compute-0 nova_compute[259627]: 2025-10-14 09:42:43.553 2 DEBUG oslo_concurrency.lockutils [req-4f976299-c27d-4e6f-8a8e-08a211204d93 req-6ba79b73-6807-4086-a2eb-731dbf674e4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:42:43 compute-0 nova_compute[259627]: 2025-10-14 09:42:43.554 2 DEBUG oslo_concurrency.lockutils [req-4f976299-c27d-4e6f-8a8e-08a211204d93 req-6ba79b73-6807-4086-a2eb-731dbf674e4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:42:43 compute-0 nova_compute[259627]: 2025-10-14 09:42:43.554 2 DEBUG nova.compute.manager [req-4f976299-c27d-4e6f-8a8e-08a211204d93 req-6ba79b73-6807-4086-a2eb-731dbf674e4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] No waiting events found dispatching network-vif-plugged-b3e71767-ffef-473e-947a-bb35562569c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:42:43 compute-0 nova_compute[259627]: 2025-10-14 09:42:43.555 2 WARNING nova.compute.manager [req-4f976299-c27d-4e6f-8a8e-08a211204d93 req-6ba79b73-6807-4086-a2eb-731dbf674e4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received unexpected event network-vif-plugged-b3e71767-ffef-473e-947a-bb35562569c9 for instance with vm_state active and task_state None.
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:42:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:42:44 compute-0 ceph-mon[74249]: pgmap v2687: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:42:44 compute-0 nova_compute[259627]: 2025-10-14 09:42:44.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:42:45 compute-0 podman[421161]: 2025-10-14 09:42:45.687513956 +0000 UTC m=+0.090746748 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:42:45 compute-0 podman[421160]: 2025-10-14 09:42:45.736416156 +0000 UTC m=+0.141845392 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Oct 14 09:42:46 compute-0 ceph-mon[74249]: pgmap v2688: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:42:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:42:47 compute-0 nova_compute[259627]: 2025-10-14 09:42:47.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:47 compute-0 NetworkManager[44885]: <info>  [1760434967.3422] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/686)
Oct 14 09:42:47 compute-0 ovn_controller[152662]: 2025-10-14T09:42:47Z|01670|binding|INFO|Releasing lport 176e1cce-63d0-4e90-9078-245732aff057 from this chassis (sb_readonly=0)
Oct 14 09:42:47 compute-0 NetworkManager[44885]: <info>  [1760434967.3440] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/687)
Oct 14 09:42:47 compute-0 nova_compute[259627]: 2025-10-14 09:42:47.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:47 compute-0 ovn_controller[152662]: 2025-10-14T09:42:47Z|01671|binding|INFO|Releasing lport 176e1cce-63d0-4e90-9078-245732aff057 from this chassis (sb_readonly=0)
Oct 14 09:42:47 compute-0 nova_compute[259627]: 2025-10-14 09:42:47.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:47 compute-0 nova_compute[259627]: 2025-10-14 09:42:47.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:47 compute-0 nova_compute[259627]: 2025-10-14 09:42:47.811 2 DEBUG nova.compute.manager [req-c857ce3c-dd55-4ddd-b90b-49a1c33f8a88 req-75957de1-26ce-40d6-8d61-b00e296b4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received event network-changed-b3e71767-ffef-473e-947a-bb35562569c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:42:47 compute-0 nova_compute[259627]: 2025-10-14 09:42:47.812 2 DEBUG nova.compute.manager [req-c857ce3c-dd55-4ddd-b90b-49a1c33f8a88 req-75957de1-26ce-40d6-8d61-b00e296b4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Refreshing instance network info cache due to event network-changed-b3e71767-ffef-473e-947a-bb35562569c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:42:47 compute-0 nova_compute[259627]: 2025-10-14 09:42:47.813 2 DEBUG oslo_concurrency.lockutils [req-c857ce3c-dd55-4ddd-b90b-49a1c33f8a88 req-75957de1-26ce-40d6-8d61-b00e296b4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:42:47 compute-0 nova_compute[259627]: 2025-10-14 09:42:47.814 2 DEBUG oslo_concurrency.lockutils [req-c857ce3c-dd55-4ddd-b90b-49a1c33f8a88 req-75957de1-26ce-40d6-8d61-b00e296b4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:42:47 compute-0 nova_compute[259627]: 2025-10-14 09:42:47.815 2 DEBUG nova.network.neutron [req-c857ce3c-dd55-4ddd-b90b-49a1c33f8a88 req-75957de1-26ce-40d6-8d61-b00e296b4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Refreshing network info cache for port b3e71767-ffef-473e-947a-bb35562569c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:42:48 compute-0 ceph-mon[74249]: pgmap v2689: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:42:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:42:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:42:49 compute-0 nova_compute[259627]: 2025-10-14 09:42:49.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:42:49.938 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:42:49 compute-0 nova_compute[259627]: 2025-10-14 09:42:49.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:42:49 compute-0 nova_compute[259627]: 2025-10-14 09:42:49.997 2 DEBUG nova.network.neutron [req-c857ce3c-dd55-4ddd-b90b-49a1c33f8a88 req-75957de1-26ce-40d6-8d61-b00e296b4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updated VIF entry in instance network info cache for port b3e71767-ffef-473e-947a-bb35562569c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:42:49 compute-0 nova_compute[259627]: 2025-10-14 09:42:49.998 2 DEBUG nova.network.neutron [req-c857ce3c-dd55-4ddd-b90b-49a1c33f8a88 req-75957de1-26ce-40d6-8d61-b00e296b4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updating instance_info_cache with network_info: [{"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:42:50 compute-0 nova_compute[259627]: 2025-10-14 09:42:50.017 2 DEBUG oslo_concurrency.lockutils [req-c857ce3c-dd55-4ddd-b90b-49a1c33f8a88 req-75957de1-26ce-40d6-8d61-b00e296b4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:42:50 compute-0 ceph-mon[74249]: pgmap v2690: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:42:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:42:52 compute-0 ceph-mon[74249]: pgmap v2691: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:42:52 compute-0 nova_compute[259627]: 2025-10-14 09:42:52.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:42:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:42:53 compute-0 ovn_controller[152662]: 2025-10-14T09:42:53Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:ac:57 10.100.0.6
Oct 14 09:42:53 compute-0 ovn_controller[152662]: 2025-10-14T09:42:53Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:ac:57 10.100.0.6
Oct 14 09:42:54 compute-0 ceph-mon[74249]: pgmap v2692: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:42:54 compute-0 nova_compute[259627]: 2025-10-14 09:42:54.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:42:55 compute-0 nova_compute[259627]: 2025-10-14 09:42:55.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:42:56 compute-0 ceph-mon[74249]: pgmap v2693: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:42:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Oct 14 09:42:56 compute-0 nova_compute[259627]: 2025-10-14 09:42:56.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:42:57 compute-0 ceph-mon[74249]: pgmap v2694: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Oct 14 09:42:57 compute-0 nova_compute[259627]: 2025-10-14 09:42:57.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:42:57 compute-0 nova_compute[259627]: 2025-10-14 09:42:57.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:42:58 compute-0 nova_compute[259627]: 2025-10-14 09:42:58.016 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:42:58 compute-0 nova_compute[259627]: 2025-10-14 09:42:58.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:42:58 compute-0 nova_compute[259627]: 2025-10-14 09:42:58.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:42:58 compute-0 nova_compute[259627]: 2025-10-14 09:42:58.018 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:42:58 compute-0 nova_compute[259627]: 2025-10-14 09:42:58.018 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:42:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.214669) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434978214791, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 477, "num_deletes": 257, "total_data_size": 428852, "memory_usage": 439496, "flush_reason": "Manual Compaction"}
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434978222497, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 425331, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56486, "largest_seqno": 56962, "table_properties": {"data_size": 422595, "index_size": 771, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6368, "raw_average_key_size": 18, "raw_value_size": 417114, "raw_average_value_size": 1198, "num_data_blocks": 34, "num_entries": 348, "num_filter_entries": 348, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434953, "oldest_key_time": 1760434953, "file_creation_time": 1760434978, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 7867 microseconds, and 4263 cpu microseconds.
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.222552) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 425331 bytes OK
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.222576) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.224915) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.224935) EVENT_LOG_v1 {"time_micros": 1760434978224928, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.224956) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 426000, prev total WAL file size 426000, number of live WAL files 2.
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.225765) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323631' seq:72057594037927935, type:22 .. '6C6F676D0032353134' seq:0, type:0; will stop at (end)
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(415KB)], [131(9694KB)]
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434978225941, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 10352202, "oldest_snapshot_seqno": -1}
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7566 keys, 10232868 bytes, temperature: kUnknown
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434978294400, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 10232868, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10182657, "index_size": 30194, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18949, "raw_key_size": 198427, "raw_average_key_size": 26, "raw_value_size": 10047819, "raw_average_value_size": 1328, "num_data_blocks": 1177, "num_entries": 7566, "num_filter_entries": 7566, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434978, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.294728) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 10232868 bytes
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.296287) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.0 rd, 149.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.5 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(48.4) write-amplify(24.1) OK, records in: 8091, records dropped: 525 output_compression: NoCompression
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.296318) EVENT_LOG_v1 {"time_micros": 1760434978296304, "job": 80, "event": "compaction_finished", "compaction_time_micros": 68541, "compaction_time_cpu_micros": 48017, "output_level": 6, "num_output_files": 1, "total_output_size": 10232868, "num_input_records": 8091, "num_output_records": 7566, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434978296660, "job": 80, "event": "table_file_deletion", "file_number": 133}
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434978300963, "job": 80, "event": "table_file_deletion", "file_number": 131}
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.225524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.301108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.301116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.301119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.301123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:42:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.301126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:42:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:42:58 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2145387823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:42:58 compute-0 nova_compute[259627]: 2025-10-14 09:42:58.498 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:42:58 compute-0 nova_compute[259627]: 2025-10-14 09:42:58.590 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:42:58 compute-0 nova_compute[259627]: 2025-10-14 09:42:58.590 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:42:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 218 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 14 09:42:58 compute-0 nova_compute[259627]: 2025-10-14 09:42:58.867 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:42:58 compute-0 nova_compute[259627]: 2025-10-14 09:42:58.869 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3408MB free_disk=59.94289016723633GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:42:58 compute-0 nova_compute[259627]: 2025-10-14 09:42:58.870 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:42:58 compute-0 nova_compute[259627]: 2025-10-14 09:42:58.870 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:42:58 compute-0 nova_compute[259627]: 2025-10-14 09:42:58.967 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:42:58 compute-0 nova_compute[259627]: 2025-10-14 09:42:58.968 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:42:58 compute-0 nova_compute[259627]: 2025-10-14 09:42:58.968 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:42:59 compute-0 nova_compute[259627]: 2025-10-14 09:42:59.028 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:42:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2145387823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:42:59 compute-0 ceph-mon[74249]: pgmap v2695: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 218 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 14 09:42:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:42:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/734493827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:42:59 compute-0 nova_compute[259627]: 2025-10-14 09:42:59.501 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:42:59 compute-0 nova_compute[259627]: 2025-10-14 09:42:59.508 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:42:59 compute-0 nova_compute[259627]: 2025-10-14 09:42:59.527 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:42:59 compute-0 nova_compute[259627]: 2025-10-14 09:42:59.549 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:42:59 compute-0 nova_compute[259627]: 2025-10-14 09:42:59.549 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:42:59 compute-0 nova_compute[259627]: 2025-10-14 09:42:59.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:00 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/734493827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:43:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 219 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 14 09:43:01 compute-0 ceph-mon[74249]: pgmap v2696: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 219 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 14 09:43:01 compute-0 podman[421252]: 2025-10-14 09:43:01.692427893 +0000 UTC m=+0.092041500 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 14 09:43:01 compute-0 podman[421253]: 2025-10-14 09:43:01.719466646 +0000 UTC m=+0.115366192 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, config_id=iscsid)
Oct 14 09:43:02 compute-0 nova_compute[259627]: 2025-10-14 09:43:02.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 219 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 14 09:43:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:43:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:43:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:43:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:43:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:43:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:43:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:43:03 compute-0 nova_compute[259627]: 2025-10-14 09:43:03.551 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:43:03 compute-0 nova_compute[259627]: 2025-10-14 09:43:03.552 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:43:03 compute-0 nova_compute[259627]: 2025-10-14 09:43:03.552 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:43:03 compute-0 nova_compute[259627]: 2025-10-14 09:43:03.553 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:43:03 compute-0 nova_compute[259627]: 2025-10-14 09:43:03.765 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:43:03 compute-0 nova_compute[259627]: 2025-10-14 09:43:03.766 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:43:03 compute-0 nova_compute[259627]: 2025-10-14 09:43:03.766 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 09:43:03 compute-0 nova_compute[259627]: 2025-10-14 09:43:03.767 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:43:03 compute-0 ceph-mon[74249]: pgmap v2697: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 219 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 14 09:43:04 compute-0 nova_compute[259627]: 2025-10-14 09:43:04.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 219 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 14 09:43:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:43:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/151493486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:43:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:43:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/151493486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:43:05 compute-0 ceph-mon[74249]: pgmap v2698: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 219 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 14 09:43:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/151493486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:43:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/151493486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:43:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 219 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 14 09:43:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:07.058 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:43:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:07.059 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:43:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:07.060 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:43:07 compute-0 nova_compute[259627]: 2025-10-14 09:43:07.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:07 compute-0 nova_compute[259627]: 2025-10-14 09:43:07.759 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updating instance_info_cache with network_info: [{"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:43:07 compute-0 nova_compute[259627]: 2025-10-14 09:43:07.776 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:43:07 compute-0 nova_compute[259627]: 2025-10-14 09:43:07.776 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 09:43:07 compute-0 nova_compute[259627]: 2025-10-14 09:43:07.776 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:43:07 compute-0 nova_compute[259627]: 2025-10-14 09:43:07.776 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:43:07 compute-0 nova_compute[259627]: 2025-10-14 09:43:07.777 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:43:07 compute-0 nova_compute[259627]: 2025-10-14 09:43:07.777 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:43:07 compute-0 nova_compute[259627]: 2025-10-14 09:43:07.777 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:43:07 compute-0 ceph-mon[74249]: pgmap v2699: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 219 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 14 09:43:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:43:08 compute-0 nova_compute[259627]: 2025-10-14 09:43:08.351 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "4c73d661-07da-475c-be98-686abd5354f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:43:08 compute-0 nova_compute[259627]: 2025-10-14 09:43:08.353 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:43:08 compute-0 nova_compute[259627]: 2025-10-14 09:43:08.370 2 DEBUG nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:43:08 compute-0 nova_compute[259627]: 2025-10-14 09:43:08.457 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:43:08 compute-0 nova_compute[259627]: 2025-10-14 09:43:08.457 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:43:08 compute-0 nova_compute[259627]: 2025-10-14 09:43:08.465 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:43:08 compute-0 nova_compute[259627]: 2025-10-14 09:43:08.466 2 INFO nova.compute.claims [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:43:08 compute-0 nova_compute[259627]: 2025-10-14 09:43:08.611 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:43:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 09:43:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:43:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790316001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.111 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.121 2 DEBUG nova.compute.provider_tree [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.142 2 DEBUG nova.scheduler.client.report [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.172 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.173 2 DEBUG nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.251 2 DEBUG nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.252 2 DEBUG nova.network.neutron [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.278 2 INFO nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.314 2 DEBUG nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.440 2 DEBUG nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.443 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.444 2 INFO nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Creating image(s)
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.485 2 DEBUG nova.storage.rbd_utils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4c73d661-07da-475c-be98-686abd5354f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.523 2 DEBUG nova.storage.rbd_utils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4c73d661-07da-475c-be98-686abd5354f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.561 2 DEBUG nova.storage.rbd_utils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4c73d661-07da-475c-be98-686abd5354f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.567 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.679 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.680 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.682 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.682 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.721 2 DEBUG nova.storage.rbd_utils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4c73d661-07da-475c-be98-686abd5354f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.729 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4c73d661-07da-475c-be98-686abd5354f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:43:09 compute-0 nova_compute[259627]: 2025-10-14 09:43:09.836 2 DEBUG nova.policy [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 09:43:09 compute-0 ceph-mon[74249]: pgmap v2700: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 09:43:09 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2790316001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:43:10 compute-0 nova_compute[259627]: 2025-10-14 09:43:10.046 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4c73d661-07da-475c-be98-686abd5354f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:43:10 compute-0 nova_compute[259627]: 2025-10-14 09:43:10.108 2 DEBUG nova.storage.rbd_utils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image 4c73d661-07da-475c-be98-686abd5354f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:43:10 compute-0 nova_compute[259627]: 2025-10-14 09:43:10.207 2 DEBUG nova.objects.instance [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid 4c73d661-07da-475c-be98-686abd5354f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:43:10 compute-0 nova_compute[259627]: 2025-10-14 09:43:10.229 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:43:10 compute-0 nova_compute[259627]: 2025-10-14 09:43:10.229 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Ensure instance console log exists: /var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:43:10 compute-0 nova_compute[259627]: 2025-10-14 09:43:10.230 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:43:10 compute-0 nova_compute[259627]: 2025-10-14 09:43:10.231 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:43:10 compute-0 nova_compute[259627]: 2025-10-14 09:43:10.231 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:43:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:43:11 compute-0 nova_compute[259627]: 2025-10-14 09:43:11.077 2 DEBUG nova.network.neutron [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Successfully created port: bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 09:43:11 compute-0 ceph-mon[74249]: pgmap v2701: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:43:12 compute-0 nova_compute[259627]: 2025-10-14 09:43:12.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 14 09:43:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:43:13 compute-0 nova_compute[259627]: 2025-10-14 09:43:13.282 2 DEBUG nova.network.neutron [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Successfully updated port: bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 09:43:13 compute-0 nova_compute[259627]: 2025-10-14 09:43:13.309 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:43:13 compute-0 nova_compute[259627]: 2025-10-14 09:43:13.310 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:43:13 compute-0 nova_compute[259627]: 2025-10-14 09:43:13.310 2 DEBUG nova.network.neutron [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:43:13 compute-0 nova_compute[259627]: 2025-10-14 09:43:13.449 2 DEBUG nova.compute.manager [req-7a2af32f-efc1-4872-a615-5c713e0a5042 req-e32d1658-497f-48d4-8fa1-960d782684d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received event network-changed-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:43:13 compute-0 nova_compute[259627]: 2025-10-14 09:43:13.449 2 DEBUG nova.compute.manager [req-7a2af32f-efc1-4872-a615-5c713e0a5042 req-e32d1658-497f-48d4-8fa1-960d782684d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Refreshing instance network info cache due to event network-changed-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:43:13 compute-0 nova_compute[259627]: 2025-10-14 09:43:13.450 2 DEBUG oslo_concurrency.lockutils [req-7a2af32f-efc1-4872-a615-5c713e0a5042 req-e32d1658-497f-48d4-8fa1-960d782684d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:43:13 compute-0 nova_compute[259627]: 2025-10-14 09:43:13.715 2 DEBUG nova.network.neutron [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:43:13 compute-0 ceph-mon[74249]: pgmap v2702: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 14 09:43:14 compute-0 nova_compute[259627]: 2025-10-14 09:43:14.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.369 2 DEBUG nova.network.neutron [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Updating instance_info_cache with network_info: [{"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.392 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.392 2 DEBUG nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Instance network_info: |[{"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.393 2 DEBUG oslo_concurrency.lockutils [req-7a2af32f-efc1-4872-a615-5c713e0a5042 req-e32d1658-497f-48d4-8fa1-960d782684d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.393 2 DEBUG nova.network.neutron [req-7a2af32f-efc1-4872-a615-5c713e0a5042 req-e32d1658-497f-48d4-8fa1-960d782684d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Refreshing network info cache for port bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.399 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Start _get_guest_xml network_info=[{"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.405 2 WARNING nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.416 2 DEBUG nova.virt.libvirt.host [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.417 2 DEBUG nova.virt.libvirt.host [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.421 2 DEBUG nova.virt.libvirt.host [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.422 2 DEBUG nova.virt.libvirt.host [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.423 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.423 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.424 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.424 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.425 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.425 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.426 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.427 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.427 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.428 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.428 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.429 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.434 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:43:15 compute-0 ceph-mon[74249]: pgmap v2703: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 14 09:43:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:43:15 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/332365176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:43:15 compute-0 nova_compute[259627]: 2025-10-14 09:43:15.983 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.025 2 DEBUG nova.storage.rbd_utils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4c73d661-07da-475c-be98-686abd5354f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.031 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:43:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:43:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1263957661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.516 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.518 2 DEBUG nova.virt.libvirt.vif [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:43:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-199870946',display_name='tempest-TestGettingAddress-server-199870946',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-199870946',id=153,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6mjdW3JQlIBQQd52sNXAHhyt49LvnDwoLKe8QrZpUHOv1Ky+CeaemM120Tsqn8n4PYQVzJvOC7aPmnxBpwslIeaFMTr8U05DGb5bjn09df7W51DFKPiOou1XQCmZ1ugQ==',key_name='tempest-TestGettingAddress-1974181320',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-hpaifwur',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:43:09Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=4c73d661-07da-475c-be98-686abd5354f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.518 2 DEBUG nova.network.os_vif_util [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.519 2 DEBUG nova.network.os_vif_util [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:34:84,bridge_name='br-int',has_traffic_filtering=True,id=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc5220f4-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.521 2 DEBUG nova.objects.instance [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 4c73d661-07da-475c-be98-686abd5354f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.544 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:43:16 compute-0 nova_compute[259627]:   <uuid>4c73d661-07da-475c-be98-686abd5354f9</uuid>
Oct 14 09:43:16 compute-0 nova_compute[259627]:   <name>instance-00000099</name>
Oct 14 09:43:16 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:43:16 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:43:16 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <nova:name>tempest-TestGettingAddress-server-199870946</nova:name>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:43:15</nova:creationTime>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:43:16 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:43:16 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:43:16 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:43:16 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:43:16 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:43:16 compute-0 nova_compute[259627]:         <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 09:43:16 compute-0 nova_compute[259627]:         <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <nova:ports>
Oct 14 09:43:16 compute-0 nova_compute[259627]:         <nova:port uuid="bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe">
Oct 14 09:43:16 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fef0:3484" ipVersion="6"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:         </nova:port>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       </nova:ports>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:43:16 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:43:16 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <system>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <entry name="serial">4c73d661-07da-475c-be98-686abd5354f9</entry>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <entry name="uuid">4c73d661-07da-475c-be98-686abd5354f9</entry>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     </system>
Oct 14 09:43:16 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:43:16 compute-0 nova_compute[259627]:   <os>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:   </os>
Oct 14 09:43:16 compute-0 nova_compute[259627]:   <features>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:   </features>
Oct 14 09:43:16 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:43:16 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:43:16 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/4c73d661-07da-475c-be98-686abd5354f9_disk">
Oct 14 09:43:16 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       </source>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:43:16 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/4c73d661-07da-475c-be98-686abd5354f9_disk.config">
Oct 14 09:43:16 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       </source>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:43:16 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <interface type="ethernet">
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <mac address="fa:16:3e:f0:34:84"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <driver name="vhost" rx_queue_size="512"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <mtu size="1442"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <target dev="tapbc5220f4-1b"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     </interface>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9/console.log" append="off"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <video>
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     </video>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:43:16 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:43:16 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:43:16 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:43:16 compute-0 nova_compute[259627]: </domain>
Oct 14 09:43:16 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.546 2 DEBUG nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Preparing to wait for external event network-vif-plugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.546 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "4c73d661-07da-475c-be98-686abd5354f9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.547 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.547 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.548 2 DEBUG nova.virt.libvirt.vif [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:43:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-199870946',display_name='tempest-TestGettingAddress-server-199870946',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-199870946',id=153,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6mjdW3JQlIBQQd52sNXAHhyt49LvnDwoLKe8QrZpUHOv1Ky+CeaemM120Tsqn8n4PYQVzJvOC7aPmnxBpwslIeaFMTr8U05DGb5bjn09df7W51DFKPiOou1XQCmZ1ugQ==',key_name='tempest-TestGettingAddress-1974181320',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-hpaifwur',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:43:09Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=4c73d661-07da-475c-be98-686abd5354f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.548 2 DEBUG nova.network.os_vif_util [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.549 2 DEBUG nova.network.os_vif_util [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:34:84,bridge_name='br-int',has_traffic_filtering=True,id=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc5220f4-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.549 2 DEBUG os_vif [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:34:84,bridge_name='br-int',has_traffic_filtering=True,id=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc5220f4-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.550 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.550 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.556 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc5220f4-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.556 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbc5220f4-1b, col_values=(('external_ids', {'iface-id': 'bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:34:84', 'vm-uuid': '4c73d661-07da-475c-be98-686abd5354f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:16 compute-0 NetworkManager[44885]: <info>  [1760434996.5593] manager: (tapbc5220f4-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/688)
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.567 2 INFO os_vif [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:34:84,bridge_name='br-int',has_traffic_filtering=True,id=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc5220f4-1b')
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.625 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.625 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.625 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:f0:34:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.626 2 INFO nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Using config drive
Oct 14 09:43:16 compute-0 podman[421546]: 2025-10-14 09:43:16.629878875 +0000 UTC m=+0.046475941 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:43:16 compute-0 podman[421543]: 2025-10-14 09:43:16.652588492 +0000 UTC m=+0.073797262 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:43:16 compute-0 nova_compute[259627]: 2025-10-14 09:43:16.653 2 DEBUG nova.storage.rbd_utils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4c73d661-07da-475c-be98-686abd5354f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:43:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:43:16 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/332365176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:43:16 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1263957661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.018 2 INFO nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Creating config drive at /var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9/disk.config
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.023 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeptijra8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.164 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeptijra8" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.188 2 DEBUG nova.storage.rbd_utils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4c73d661-07da-475c-be98-686abd5354f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.191 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9/disk.config 4c73d661-07da-475c-be98-686abd5354f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:43:17 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.343 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9/disk.config 4c73d661-07da-475c-be98-686abd5354f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.344 2 INFO nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Deleting local config drive /var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9/disk.config because it was imported into RBD.
Oct 14 09:43:17 compute-0 kernel: tapbc5220f4-1b: entered promiscuous mode
Oct 14 09:43:17 compute-0 NetworkManager[44885]: <info>  [1760434997.3941] manager: (tapbc5220f4-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/689)
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:17 compute-0 ovn_controller[152662]: 2025-10-14T09:43:17Z|01672|binding|INFO|Claiming lport bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe for this chassis.
Oct 14 09:43:17 compute-0 ovn_controller[152662]: 2025-10-14T09:43:17Z|01673|binding|INFO|bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe: Claiming fa:16:3e:f0:34:84 10.100.0.11 2001:db8::f816:3eff:fef0:3484
Oct 14 09:43:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.404 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:34:84 10.100.0.11 2001:db8::f816:3eff:fef0:3484'], port_security=['fa:16:3e:f0:34:84 10.100.0.11 2001:db8::f816:3eff:fef0:3484'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fef0:3484/64', 'neutron:device_id': '4c73d661-07da-475c-be98-686abd5354f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f0bf2f6f-4af7-4c93-8c0d-bb1990d0ba82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bf1cb75-5af4-410a-83b2-dc168fe7fc9d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:43:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.406 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe in datapath aa331e50-389c-4e3c-80a7-7c8364a3fce5 bound to our chassis
Oct 14 09:43:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.408 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa331e50-389c-4e3c-80a7-7c8364a3fce5
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:17 compute-0 ovn_controller[152662]: 2025-10-14T09:43:17Z|01674|binding|INFO|Setting lport bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe up in Southbound
Oct 14 09:43:17 compute-0 ovn_controller[152662]: 2025-10-14T09:43:17Z|01675|binding|INFO|Setting lport bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe ovn-installed in OVS
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:17 compute-0 systemd-machined[214636]: New machine qemu-186-instance-00000099.
Oct 14 09:43:17 compute-0 systemd-udevd[421662]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 09:43:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.432 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e10d193e-a8ad-49d8-b8a5-5411ede24a6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:43:17 compute-0 systemd[1]: Started Virtual Machine qemu-186-instance-00000099.
Oct 14 09:43:17 compute-0 NetworkManager[44885]: <info>  [1760434997.4468] device (tapbc5220f4-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 09:43:17 compute-0 NetworkManager[44885]: <info>  [1760434997.4475] device (tapbc5220f4-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 09:43:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.471 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b42e32-085e-488c-b772-2c16e7d25260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:43:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.474 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b303643c-759f-4ec8-b5da-50f29908d4be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:43:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.508 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a8db9706-c940-43cc-a39f-385154c0ce67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:43:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.526 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c59646fb-d538-408e-91be-a96be6d773d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa331e50-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:26:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873982, 'reachable_time': 36414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 20, 'inoctets': 1592, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 20, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1592, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 20, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 421674, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:43:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.545 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b071442c-0d3b-4e69-8751-c4fdde064cf2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaa331e50-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 873997, 'tstamp': 873997}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 421676, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaa331e50-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874001, 'tstamp': 874001}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 421676, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:43:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.547 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa331e50-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.552 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa331e50-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:43:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.553 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:43:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.553 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa331e50-30, col_values=(('external_ids', {'iface-id': '176e1cce-63d0-4e90-9078-245732aff057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:43:17 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.554 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.603 2 DEBUG nova.network.neutron [req-7a2af32f-efc1-4872-a615-5c713e0a5042 req-e32d1658-497f-48d4-8fa1-960d782684d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Updated VIF entry in instance network info cache for port bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.604 2 DEBUG nova.network.neutron [req-7a2af32f-efc1-4872-a615-5c713e0a5042 req-e32d1658-497f-48d4-8fa1-960d782684d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Updating instance_info_cache with network_info: [{"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.623 2 DEBUG oslo_concurrency.lockutils [req-7a2af32f-efc1-4872-a615-5c713e0a5042 req-e32d1658-497f-48d4-8fa1-960d782684d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.649 2 DEBUG nova.compute.manager [req-2957ed86-9c3b-44a2-b60f-948f932df59d req-8bbcc64e-b7c5-4a1a-bd2f-0b535a0a0c35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received event network-vif-plugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.650 2 DEBUG oslo_concurrency.lockutils [req-2957ed86-9c3b-44a2-b60f-948f932df59d req-8bbcc64e-b7c5-4a1a-bd2f-0b535a0a0c35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4c73d661-07da-475c-be98-686abd5354f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.650 2 DEBUG oslo_concurrency.lockutils [req-2957ed86-9c3b-44a2-b60f-948f932df59d req-8bbcc64e-b7c5-4a1a-bd2f-0b535a0a0c35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.651 2 DEBUG oslo_concurrency.lockutils [req-2957ed86-9c3b-44a2-b60f-948f932df59d req-8bbcc64e-b7c5-4a1a-bd2f-0b535a0a0c35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:43:17 compute-0 nova_compute[259627]: 2025-10-14 09:43:17.651 2 DEBUG nova.compute.manager [req-2957ed86-9c3b-44a2-b60f-948f932df59d req-8bbcc64e-b7c5-4a1a-bd2f-0b535a0a0c35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Processing event network-vif-plugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 09:43:17 compute-0 ceph-mon[74249]: pgmap v2704: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:43:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.234 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434998.2333598, 4c73d661-07da-475c-be98-686abd5354f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.234 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] VM Started (Lifecycle Event)
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.237 2 DEBUG nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.242 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.246 2 INFO nova.virt.libvirt.driver [-] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Instance spawned successfully.
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.247 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.281 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.293 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.297 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.297 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.298 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.298 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.299 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.299 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.331 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.332 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434998.233727, 4c73d661-07da-475c-be98-686abd5354f9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.332 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] VM Paused (Lifecycle Event)
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.363 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.367 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434998.2416985, 4c73d661-07da-475c-be98-686abd5354f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.367 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] VM Resumed (Lifecycle Event)
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.378 2 INFO nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Took 8.94 seconds to spawn the instance on the hypervisor.
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.378 2 DEBUG nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.389 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.393 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.424 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.454 2 INFO nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Took 10.03 seconds to build instance.
Oct 14 09:43:18 compute-0 nova_compute[259627]: 2025-10-14 09:43:18.485 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:43:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:43:19 compute-0 nova_compute[259627]: 2025-10-14 09:43:19.771 2 DEBUG nova.compute.manager [req-c72a2410-5a0d-45bb-b25f-08e254cf5794 req-112a8262-0e14-45e5-8ae3-f219ecb6fbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received event network-vif-plugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:43:19 compute-0 nova_compute[259627]: 2025-10-14 09:43:19.772 2 DEBUG oslo_concurrency.lockutils [req-c72a2410-5a0d-45bb-b25f-08e254cf5794 req-112a8262-0e14-45e5-8ae3-f219ecb6fbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4c73d661-07da-475c-be98-686abd5354f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:43:19 compute-0 nova_compute[259627]: 2025-10-14 09:43:19.772 2 DEBUG oslo_concurrency.lockutils [req-c72a2410-5a0d-45bb-b25f-08e254cf5794 req-112a8262-0e14-45e5-8ae3-f219ecb6fbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:43:19 compute-0 nova_compute[259627]: 2025-10-14 09:43:19.773 2 DEBUG oslo_concurrency.lockutils [req-c72a2410-5a0d-45bb-b25f-08e254cf5794 req-112a8262-0e14-45e5-8ae3-f219ecb6fbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:43:19 compute-0 nova_compute[259627]: 2025-10-14 09:43:19.773 2 DEBUG nova.compute.manager [req-c72a2410-5a0d-45bb-b25f-08e254cf5794 req-112a8262-0e14-45e5-8ae3-f219ecb6fbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] No waiting events found dispatching network-vif-plugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:43:19 compute-0 nova_compute[259627]: 2025-10-14 09:43:19.773 2 WARNING nova.compute.manager [req-c72a2410-5a0d-45bb-b25f-08e254cf5794 req-112a8262-0e14-45e5-8ae3-f219ecb6fbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received unexpected event network-vif-plugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe for instance with vm_state active and task_state None.
Oct 14 09:43:19 compute-0 ceph-mon[74249]: pgmap v2705: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 09:43:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:43:21 compute-0 nova_compute[259627]: 2025-10-14 09:43:21.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:21 compute-0 nova_compute[259627]: 2025-10-14 09:43:21.783 2 DEBUG nova.compute.manager [req-db4b9fc4-1c12-4ddc-a3f9-1b20123ccb6e req-065e5d2e-f8f7-4e23-bcf4-40de8aae9a41 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received event network-changed-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:43:21 compute-0 nova_compute[259627]: 2025-10-14 09:43:21.784 2 DEBUG nova.compute.manager [req-db4b9fc4-1c12-4ddc-a3f9-1b20123ccb6e req-065e5d2e-f8f7-4e23-bcf4-40de8aae9a41 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Refreshing instance network info cache due to event network-changed-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:43:21 compute-0 nova_compute[259627]: 2025-10-14 09:43:21.784 2 DEBUG oslo_concurrency.lockutils [req-db4b9fc4-1c12-4ddc-a3f9-1b20123ccb6e req-065e5d2e-f8f7-4e23-bcf4-40de8aae9a41 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:43:21 compute-0 nova_compute[259627]: 2025-10-14 09:43:21.785 2 DEBUG oslo_concurrency.lockutils [req-db4b9fc4-1c12-4ddc-a3f9-1b20123ccb6e req-065e5d2e-f8f7-4e23-bcf4-40de8aae9a41 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:43:21 compute-0 nova_compute[259627]: 2025-10-14 09:43:21.785 2 DEBUG nova.network.neutron [req-db4b9fc4-1c12-4ddc-a3f9-1b20123ccb6e req-065e5d2e-f8f7-4e23-bcf4-40de8aae9a41 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Refreshing network info cache for port bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:43:21 compute-0 ceph-mon[74249]: pgmap v2706: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:43:22 compute-0 nova_compute[259627]: 2025-10-14 09:43:22.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:43:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:43:23 compute-0 ceph-mon[74249]: pgmap v2707: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:43:23 compute-0 nova_compute[259627]: 2025-10-14 09:43:23.912 2 DEBUG nova.network.neutron [req-db4b9fc4-1c12-4ddc-a3f9-1b20123ccb6e req-065e5d2e-f8f7-4e23-bcf4-40de8aae9a41 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Updated VIF entry in instance network info cache for port bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:43:23 compute-0 nova_compute[259627]: 2025-10-14 09:43:23.913 2 DEBUG nova.network.neutron [req-db4b9fc4-1c12-4ddc-a3f9-1b20123ccb6e req-065e5d2e-f8f7-4e23-bcf4-40de8aae9a41 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Updating instance_info_cache with network_info: [{"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:43:23 compute-0 nova_compute[259627]: 2025-10-14 09:43:23.937 2 DEBUG oslo_concurrency.lockutils [req-db4b9fc4-1c12-4ddc-a3f9-1b20123ccb6e req-065e5d2e-f8f7-4e23-bcf4-40de8aae9a41 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:43:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:43:25 compute-0 ceph-mon[74249]: pgmap v2708: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 09:43:26 compute-0 nova_compute[259627]: 2025-10-14 09:43:26.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 14 09:43:27 compute-0 nova_compute[259627]: 2025-10-14 09:43:27.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:27 compute-0 ceph-mon[74249]: pgmap v2709: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 14 09:43:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:43:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 14 09:43:29 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:43:29 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.74 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2800 writes, 11K keys, 2800 commit groups, 1.0 writes per commit group, ingest: 14.21 MB, 0.02 MB/s
                                           Interval WAL: 2799 writes, 1060 syncs, 2.64 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:43:30 compute-0 ceph-mon[74249]: pgmap v2710: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 14 09:43:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 97 op/s
Oct 14 09:43:31 compute-0 ceph-mon[74249]: pgmap v2711: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 97 op/s
Oct 14 09:43:31 compute-0 nova_compute[259627]: 2025-10-14 09:43:31.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:32 compute-0 nova_compute[259627]: 2025-10-14 09:43:32.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:32 compute-0 podman[421720]: 2025-10-14 09:43:32.695974635 +0000 UTC m=+0.080924827 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:43:32 compute-0 podman[421719]: 2025-10-14 09:43:32.725759586 +0000 UTC m=+0.111000215 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:43:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:43:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:43:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 MiB/s wr, 23 op/s
Oct 14 09:43:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:43:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:43:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:43:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:43:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:43:32
Oct 14 09:43:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:43:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:43:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms']
Oct 14 09:43:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:43:33 compute-0 ovn_controller[152662]: 2025-10-14T09:43:33Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f0:34:84 10.100.0.11
Oct 14 09:43:33 compute-0 ovn_controller[152662]: 2025-10-14T09:43:33Z|00205|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f0:34:84 10.100.0.11
Oct 14 09:43:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:43:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:43:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:43:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:43:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:43:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:43:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:43:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:43:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:43:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:43:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:43:34 compute-0 ceph-mon[74249]: pgmap v2712: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 MiB/s wr, 23 op/s
Oct 14 09:43:34 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:43:34 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 176K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.71 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2838 writes, 11K keys, 2838 commit groups, 1.0 writes per commit group, ingest: 12.66 MB, 0.02 MB/s
                                           Interval WAL: 2838 writes, 1140 syncs, 2.49 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:43:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 MiB/s wr, 23 op/s
Oct 14 09:43:35 compute-0 sudo[421758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:43:35 compute-0 sudo[421758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:43:35 compute-0 sudo[421758]: pam_unix(sudo:session): session closed for user root
Oct 14 09:43:35 compute-0 sudo[421783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:43:35 compute-0 sudo[421783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:43:35 compute-0 sudo[421783]: pam_unix(sudo:session): session closed for user root
Oct 14 09:43:35 compute-0 sudo[421808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:43:35 compute-0 sudo[421808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:43:35 compute-0 sudo[421808]: pam_unix(sudo:session): session closed for user root
Oct 14 09:43:35 compute-0 sudo[421833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:43:35 compute-0 sudo[421833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:43:36 compute-0 ceph-mon[74249]: pgmap v2713: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 MiB/s wr, 23 op/s
Oct 14 09:43:36 compute-0 sudo[421833]: pam_unix(sudo:session): session closed for user root
Oct 14 09:43:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:43:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:43:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:43:36 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:43:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:43:36 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:43:36 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev eabe0181-4bce-4ad0-9efb-c5fa37cb0e54 does not exist
Oct 14 09:43:36 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 643d0043-ae13-4809-868c-36f0d555bd48 does not exist
Oct 14 09:43:36 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 06fc6d1a-30f5-4cda-8613-a79455cfcd7c does not exist
Oct 14 09:43:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:43:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:43:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:43:36 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:43:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:43:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:43:36 compute-0 sudo[421889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:43:36 compute-0 sudo[421889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:43:36 compute-0 sudo[421889]: pam_unix(sudo:session): session closed for user root
Oct 14 09:43:36 compute-0 nova_compute[259627]: 2025-10-14 09:43:36.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:36 compute-0 sudo[421914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:43:36 compute-0 sudo[421914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:43:36 compute-0 sudo[421914]: pam_unix(sudo:session): session closed for user root
Oct 14 09:43:36 compute-0 sudo[421939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:43:36 compute-0 sudo[421939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:43:36 compute-0 sudo[421939]: pam_unix(sudo:session): session closed for user root
Oct 14 09:43:36 compute-0 sudo[421964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:43:36 compute-0 sudo[421964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:43:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:43:37 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:43:37 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:43:37 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:43:37 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:43:37 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:43:37 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:43:37 compute-0 podman[422029]: 2025-10-14 09:43:37.119728172 +0000 UTC m=+0.074051717 container create f6b5d0d48030f2eec225a1d50a5d08338dea8582c3833727f73916adf2d8441f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_herschel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:43:37 compute-0 systemd[1]: Started libpod-conmon-f6b5d0d48030f2eec225a1d50a5d08338dea8582c3833727f73916adf2d8441f.scope.
Oct 14 09:43:37 compute-0 podman[422029]: 2025-10-14 09:43:37.088418004 +0000 UTC m=+0.042741559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:43:37 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:43:37 compute-0 podman[422029]: 2025-10-14 09:43:37.225339624 +0000 UTC m=+0.179663219 container init f6b5d0d48030f2eec225a1d50a5d08338dea8582c3833727f73916adf2d8441f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_herschel, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:43:37 compute-0 podman[422029]: 2025-10-14 09:43:37.237351399 +0000 UTC m=+0.191674934 container start f6b5d0d48030f2eec225a1d50a5d08338dea8582c3833727f73916adf2d8441f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_herschel, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:43:37 compute-0 suspicious_herschel[422046]: 167 167
Oct 14 09:43:37 compute-0 podman[422029]: 2025-10-14 09:43:37.241573302 +0000 UTC m=+0.195896837 container attach f6b5d0d48030f2eec225a1d50a5d08338dea8582c3833727f73916adf2d8441f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:43:37 compute-0 systemd[1]: libpod-f6b5d0d48030f2eec225a1d50a5d08338dea8582c3833727f73916adf2d8441f.scope: Deactivated successfully.
Oct 14 09:43:37 compute-0 podman[422029]: 2025-10-14 09:43:37.243220533 +0000 UTC m=+0.197544078 container died f6b5d0d48030f2eec225a1d50a5d08338dea8582c3833727f73916adf2d8441f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_herschel, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:43:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5240cd7e26c287c815195e639e9644b2e11fdfbf14d585695adcc382bed9e0a-merged.mount: Deactivated successfully.
Oct 14 09:43:37 compute-0 podman[422029]: 2025-10-14 09:43:37.292148634 +0000 UTC m=+0.246472159 container remove f6b5d0d48030f2eec225a1d50a5d08338dea8582c3833727f73916adf2d8441f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 09:43:37 compute-0 systemd[1]: libpod-conmon-f6b5d0d48030f2eec225a1d50a5d08338dea8582c3833727f73916adf2d8441f.scope: Deactivated successfully.
Oct 14 09:43:37 compute-0 nova_compute[259627]: 2025-10-14 09:43:37.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:37 compute-0 podman[422069]: 2025-10-14 09:43:37.508489173 +0000 UTC m=+0.043064288 container create cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meitner, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 14 09:43:37 compute-0 systemd[1]: Started libpod-conmon-cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2.scope.
Oct 14 09:43:37 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b7f2d5b99eda788be8fbca5bef0bd14681214a1274d72c3c2bd66493208f64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b7f2d5b99eda788be8fbca5bef0bd14681214a1274d72c3c2bd66493208f64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b7f2d5b99eda788be8fbca5bef0bd14681214a1274d72c3c2bd66493208f64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b7f2d5b99eda788be8fbca5bef0bd14681214a1274d72c3c2bd66493208f64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b7f2d5b99eda788be8fbca5bef0bd14681214a1274d72c3c2bd66493208f64/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:43:37 compute-0 podman[422069]: 2025-10-14 09:43:37.489611139 +0000 UTC m=+0.024186244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:43:37 compute-0 podman[422069]: 2025-10-14 09:43:37.601412973 +0000 UTC m=+0.135988058 container init cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meitner, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:43:37 compute-0 podman[422069]: 2025-10-14 09:43:37.613644273 +0000 UTC m=+0.148219368 container start cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meitner, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:43:37 compute-0 podman[422069]: 2025-10-14 09:43:37.617150909 +0000 UTC m=+0.151726024 container attach cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:43:38 compute-0 ceph-mon[74249]: pgmap v2714: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:43:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:43:38 compute-0 unruffled_meitner[422086]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:43:38 compute-0 unruffled_meitner[422086]: --> relative data size: 1.0
Oct 14 09:43:38 compute-0 unruffled_meitner[422086]: --> All data devices are unavailable
Oct 14 09:43:38 compute-0 systemd[1]: libpod-cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2.scope: Deactivated successfully.
Oct 14 09:43:38 compute-0 podman[422069]: 2025-10-14 09:43:38.747458147 +0000 UTC m=+1.282033252 container died cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:43:38 compute-0 systemd[1]: libpod-cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2.scope: Consumed 1.079s CPU time.
Oct 14 09:43:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 14 09:43:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6b7f2d5b99eda788be8fbca5bef0bd14681214a1274d72c3c2bd66493208f64-merged.mount: Deactivated successfully.
Oct 14 09:43:38 compute-0 podman[422069]: 2025-10-14 09:43:38.82950447 +0000 UTC m=+1.364079585 container remove cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meitner, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct 14 09:43:38 compute-0 systemd[1]: libpod-conmon-cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2.scope: Deactivated successfully.
Oct 14 09:43:38 compute-0 sudo[421964]: pam_unix(sudo:session): session closed for user root
Oct 14 09:43:38 compute-0 sudo[422129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:43:38 compute-0 sudo[422129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:43:38 compute-0 sudo[422129]: pam_unix(sudo:session): session closed for user root
Oct 14 09:43:39 compute-0 sudo[422154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:43:39 compute-0 sudo[422154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:43:39 compute-0 sudo[422154]: pam_unix(sudo:session): session closed for user root
Oct 14 09:43:39 compute-0 sudo[422179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:43:39 compute-0 sudo[422179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:43:39 compute-0 sudo[422179]: pam_unix(sudo:session): session closed for user root
Oct 14 09:43:39 compute-0 sudo[422204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:43:39 compute-0 sudo[422204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:43:39 compute-0 podman[422270]: 2025-10-14 09:43:39.546749942 +0000 UTC m=+0.047443666 container create 35a27864440c95cbe4705c89df1b539acc34836bf407b9bef6bec2f0358acf46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_williams, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 09:43:39 compute-0 systemd[1]: Started libpod-conmon-35a27864440c95cbe4705c89df1b539acc34836bf407b9bef6bec2f0358acf46.scope.
Oct 14 09:43:39 compute-0 podman[422270]: 2025-10-14 09:43:39.52222608 +0000 UTC m=+0.022919864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:43:39 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:43:39 compute-0 podman[422270]: 2025-10-14 09:43:39.641985649 +0000 UTC m=+0.142679443 container init 35a27864440c95cbe4705c89df1b539acc34836bf407b9bef6bec2f0358acf46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:43:39 compute-0 podman[422270]: 2025-10-14 09:43:39.649569445 +0000 UTC m=+0.150263219 container start 35a27864440c95cbe4705c89df1b539acc34836bf407b9bef6bec2f0358acf46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_williams, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 14 09:43:39 compute-0 podman[422270]: 2025-10-14 09:43:39.653634255 +0000 UTC m=+0.154328039 container attach 35a27864440c95cbe4705c89df1b539acc34836bf407b9bef6bec2f0358acf46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_williams, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:43:39 compute-0 angry_williams[422286]: 167 167
Oct 14 09:43:39 compute-0 systemd[1]: libpod-35a27864440c95cbe4705c89df1b539acc34836bf407b9bef6bec2f0358acf46.scope: Deactivated successfully.
Oct 14 09:43:39 compute-0 podman[422270]: 2025-10-14 09:43:39.654584998 +0000 UTC m=+0.155278752 container died 35a27864440c95cbe4705c89df1b539acc34836bf407b9bef6bec2f0358acf46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:43:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a47f03df60258a46095102995fbc96ea668b45a253f780d4f0ce4fb7431544d-merged.mount: Deactivated successfully.
Oct 14 09:43:39 compute-0 podman[422270]: 2025-10-14 09:43:39.695222755 +0000 UTC m=+0.195916529 container remove 35a27864440c95cbe4705c89df1b539acc34836bf407b9bef6bec2f0358acf46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_williams, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 09:43:39 compute-0 systemd[1]: libpod-conmon-35a27864440c95cbe4705c89df1b539acc34836bf407b9bef6bec2f0358acf46.scope: Deactivated successfully.
Oct 14 09:43:39 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:43:39 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 37K writes, 148K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.73 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2257 writes, 9325 keys, 2257 commit groups, 1.0 writes per commit group, ingest: 11.79 MB, 0.02 MB/s
                                           Interval WAL: 2257 writes, 876 syncs, 2.58 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:43:39 compute-0 podman[422310]: 2025-10-14 09:43:39.880557654 +0000 UTC m=+0.048355968 container create b2f9e02a3c0a8385a3574736d29d48f6a31684fe2107c31e6e03d88b538016ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jackson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:43:39 compute-0 systemd[1]: Started libpod-conmon-b2f9e02a3c0a8385a3574736d29d48f6a31684fe2107c31e6e03d88b538016ae.scope.
Oct 14 09:43:39 compute-0 podman[422310]: 2025-10-14 09:43:39.858692457 +0000 UTC m=+0.026490851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:43:39 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c22459c1c04a9ba72f5edb1135636a28940f69425bd14c9ce2726312bfdab27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c22459c1c04a9ba72f5edb1135636a28940f69425bd14c9ce2726312bfdab27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c22459c1c04a9ba72f5edb1135636a28940f69425bd14c9ce2726312bfdab27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c22459c1c04a9ba72f5edb1135636a28940f69425bd14c9ce2726312bfdab27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:43:39 compute-0 podman[422310]: 2025-10-14 09:43:39.981006559 +0000 UTC m=+0.148804873 container init b2f9e02a3c0a8385a3574736d29d48f6a31684fe2107c31e6e03d88b538016ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 09:43:39 compute-0 podman[422310]: 2025-10-14 09:43:39.989308122 +0000 UTC m=+0.157106446 container start b2f9e02a3c0a8385a3574736d29d48f6a31684fe2107c31e6e03d88b538016ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 09:43:39 compute-0 podman[422310]: 2025-10-14 09:43:39.992709396 +0000 UTC m=+0.160507730 container attach b2f9e02a3c0a8385a3574736d29d48f6a31684fe2107c31e6e03d88b538016ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jackson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:43:40 compute-0 ceph-mon[74249]: pgmap v2715: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 14 09:43:40 compute-0 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 09:43:40 compute-0 silly_jackson[422327]: {
Oct 14 09:43:40 compute-0 silly_jackson[422327]:     "0": [
Oct 14 09:43:40 compute-0 silly_jackson[422327]:         {
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "devices": [
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "/dev/loop3"
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             ],
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "lv_name": "ceph_lv0",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "lv_size": "21470642176",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "name": "ceph_lv0",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "tags": {
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.cluster_name": "ceph",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.crush_device_class": "",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.encrypted": "0",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.osd_id": "0",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.type": "block",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.vdo": "0"
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             },
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "type": "block",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "vg_name": "ceph_vg0"
Oct 14 09:43:40 compute-0 silly_jackson[422327]:         }
Oct 14 09:43:40 compute-0 silly_jackson[422327]:     ],
Oct 14 09:43:40 compute-0 silly_jackson[422327]:     "1": [
Oct 14 09:43:40 compute-0 silly_jackson[422327]:         {
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "devices": [
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "/dev/loop4"
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             ],
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "lv_name": "ceph_lv1",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "lv_size": "21470642176",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "name": "ceph_lv1",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "tags": {
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.cluster_name": "ceph",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.crush_device_class": "",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.encrypted": "0",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.osd_id": "1",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.type": "block",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.vdo": "0"
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             },
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "type": "block",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "vg_name": "ceph_vg1"
Oct 14 09:43:40 compute-0 silly_jackson[422327]:         }
Oct 14 09:43:40 compute-0 silly_jackson[422327]:     ],
Oct 14 09:43:40 compute-0 silly_jackson[422327]:     "2": [
Oct 14 09:43:40 compute-0 silly_jackson[422327]:         {
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "devices": [
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "/dev/loop5"
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             ],
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "lv_name": "ceph_lv2",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "lv_size": "21470642176",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "name": "ceph_lv2",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "tags": {
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.cluster_name": "ceph",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.crush_device_class": "",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.encrypted": "0",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.osd_id": "2",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.type": "block",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:                 "ceph.vdo": "0"
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             },
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "type": "block",
Oct 14 09:43:40 compute-0 silly_jackson[422327]:             "vg_name": "ceph_vg2"
Oct 14 09:43:40 compute-0 silly_jackson[422327]:         }
Oct 14 09:43:40 compute-0 silly_jackson[422327]:     ]
Oct 14 09:43:40 compute-0 silly_jackson[422327]: }
Oct 14 09:43:40 compute-0 systemd[1]: libpod-b2f9e02a3c0a8385a3574736d29d48f6a31684fe2107c31e6e03d88b538016ae.scope: Deactivated successfully.
Oct 14 09:43:40 compute-0 podman[422310]: 2025-10-14 09:43:40.751628689 +0000 UTC m=+0.919427033 container died b2f9e02a3c0a8385a3574736d29d48f6a31684fe2107c31e6e03d88b538016ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jackson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 14 09:43:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:43:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c22459c1c04a9ba72f5edb1135636a28940f69425bd14c9ce2726312bfdab27-merged.mount: Deactivated successfully.
Oct 14 09:43:40 compute-0 podman[422310]: 2025-10-14 09:43:40.830129215 +0000 UTC m=+0.997927529 container remove b2f9e02a3c0a8385a3574736d29d48f6a31684fe2107c31e6e03d88b538016ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jackson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:43:40 compute-0 systemd[1]: libpod-conmon-b2f9e02a3c0a8385a3574736d29d48f6a31684fe2107c31e6e03d88b538016ae.scope: Deactivated successfully.
Oct 14 09:43:40 compute-0 sudo[422204]: pam_unix(sudo:session): session closed for user root
Oct 14 09:43:40 compute-0 sudo[422347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:43:40 compute-0 sudo[422347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:43:40 compute-0 sudo[422347]: pam_unix(sudo:session): session closed for user root
Oct 14 09:43:41 compute-0 sudo[422372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:43:41 compute-0 sudo[422372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:43:41 compute-0 sudo[422372]: pam_unix(sudo:session): session closed for user root
Oct 14 09:43:41 compute-0 sudo[422397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:43:41 compute-0 sudo[422397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:43:41 compute-0 sudo[422397]: pam_unix(sudo:session): session closed for user root
Oct 14 09:43:41 compute-0 sudo[422422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:43:41 compute-0 sudo[422422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:43:41 compute-0 podman[422488]: 2025-10-14 09:43:41.550283578 +0000 UTC m=+0.063003517 container create 7056c04d3efab048d2aee3e7526f7047946112459313fa8607a39623e94f9bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 14 09:43:41 compute-0 nova_compute[259627]: 2025-10-14 09:43:41.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:41 compute-0 systemd[1]: Started libpod-conmon-7056c04d3efab048d2aee3e7526f7047946112459313fa8607a39623e94f9bad.scope.
Oct 14 09:43:41 compute-0 podman[422488]: 2025-10-14 09:43:41.528260187 +0000 UTC m=+0.040980146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:43:41 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:43:41 compute-0 podman[422488]: 2025-10-14 09:43:41.646976151 +0000 UTC m=+0.159696120 container init 7056c04d3efab048d2aee3e7526f7047946112459313fa8607a39623e94f9bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nobel, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:43:41 compute-0 podman[422488]: 2025-10-14 09:43:41.653333367 +0000 UTC m=+0.166053296 container start 7056c04d3efab048d2aee3e7526f7047946112459313fa8607a39623e94f9bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nobel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 09:43:41 compute-0 podman[422488]: 2025-10-14 09:43:41.656878784 +0000 UTC m=+0.169598743 container attach 7056c04d3efab048d2aee3e7526f7047946112459313fa8607a39623e94f9bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 09:43:41 compute-0 funny_nobel[422504]: 167 167
Oct 14 09:43:41 compute-0 systemd[1]: libpod-7056c04d3efab048d2aee3e7526f7047946112459313fa8607a39623e94f9bad.scope: Deactivated successfully.
Oct 14 09:43:41 compute-0 podman[422488]: 2025-10-14 09:43:41.659935479 +0000 UTC m=+0.172655438 container died 7056c04d3efab048d2aee3e7526f7047946112459313fa8607a39623e94f9bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nobel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 09:43:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-90e5b2ae09ad41eaf9c4a6f939d263da24ff2bc7328d036a7509f34a3af76ec4-merged.mount: Deactivated successfully.
Oct 14 09:43:41 compute-0 podman[422488]: 2025-10-14 09:43:41.701597491 +0000 UTC m=+0.214317430 container remove 7056c04d3efab048d2aee3e7526f7047946112459313fa8607a39623e94f9bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 09:43:41 compute-0 systemd[1]: libpod-conmon-7056c04d3efab048d2aee3e7526f7047946112459313fa8607a39623e94f9bad.scope: Deactivated successfully.
Oct 14 09:43:41 compute-0 podman[422527]: 2025-10-14 09:43:41.924107002 +0000 UTC m=+0.054469938 container create cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 09:43:41 compute-0 systemd[1]: Started libpod-conmon-cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14.scope.
Oct 14 09:43:41 compute-0 podman[422527]: 2025-10-14 09:43:41.901350733 +0000 UTC m=+0.031713709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:43:41 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbd6c0e1bd29eb4c735f471ca6e4bb3888bfe5fd710f6796c4807e5295aa9d48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbd6c0e1bd29eb4c735f471ca6e4bb3888bfe5fd710f6796c4807e5295aa9d48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbd6c0e1bd29eb4c735f471ca6e4bb3888bfe5fd710f6796c4807e5295aa9d48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbd6c0e1bd29eb4c735f471ca6e4bb3888bfe5fd710f6796c4807e5295aa9d48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:43:42 compute-0 podman[422527]: 2025-10-14 09:43:42.017862912 +0000 UTC m=+0.148225888 container init cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:43:42 compute-0 podman[422527]: 2025-10-14 09:43:42.03082139 +0000 UTC m=+0.161184326 container start cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 09:43:42 compute-0 podman[422527]: 2025-10-14 09:43:42.035158777 +0000 UTC m=+0.165521713 container attach cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:43:42 compute-0 ceph-mon[74249]: pgmap v2716: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 09:43:42 compute-0 nova_compute[259627]: 2025-10-14 09:43:42.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 145 KiB/s wr, 39 op/s
Oct 14 09:43:43 compute-0 serene_villani[422543]: {
Oct 14 09:43:43 compute-0 serene_villani[422543]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:43:43 compute-0 serene_villani[422543]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:43:43 compute-0 serene_villani[422543]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:43:43 compute-0 serene_villani[422543]:         "osd_id": 2,
Oct 14 09:43:43 compute-0 serene_villani[422543]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:43:43 compute-0 serene_villani[422543]:         "type": "bluestore"
Oct 14 09:43:43 compute-0 serene_villani[422543]:     },
Oct 14 09:43:43 compute-0 serene_villani[422543]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:43:43 compute-0 serene_villani[422543]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:43:43 compute-0 serene_villani[422543]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:43:43 compute-0 serene_villani[422543]:         "osd_id": 1,
Oct 14 09:43:43 compute-0 serene_villani[422543]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:43:43 compute-0 serene_villani[422543]:         "type": "bluestore"
Oct 14 09:43:43 compute-0 serene_villani[422543]:     },
Oct 14 09:43:43 compute-0 serene_villani[422543]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:43:43 compute-0 serene_villani[422543]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:43:43 compute-0 serene_villani[422543]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:43:43 compute-0 serene_villani[422543]:         "osd_id": 0,
Oct 14 09:43:43 compute-0 serene_villani[422543]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:43:43 compute-0 serene_villani[422543]:         "type": "bluestore"
Oct 14 09:43:43 compute-0 serene_villani[422543]:     }
Oct 14 09:43:43 compute-0 serene_villani[422543]: }
Oct 14 09:43:43 compute-0 systemd[1]: libpod-cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14.scope: Deactivated successfully.
Oct 14 09:43:43 compute-0 systemd[1]: libpod-cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14.scope: Consumed 1.093s CPU time.
Oct 14 09:43:43 compute-0 podman[422527]: 2025-10-14 09:43:43.122232444 +0000 UTC m=+1.252595380 container died cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct 14 09:43:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbd6c0e1bd29eb4c735f471ca6e4bb3888bfe5fd710f6796c4807e5295aa9d48-merged.mount: Deactivated successfully.
Oct 14 09:43:43 compute-0 podman[422527]: 2025-10-14 09:43:43.194116248 +0000 UTC m=+1.324479184 container remove cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:43:43 compute-0 systemd[1]: libpod-conmon-cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14.scope: Deactivated successfully.
Oct 14 09:43:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:43:43 compute-0 sudo[422422]: pam_unix(sudo:session): session closed for user root
Oct 14 09:43:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:43:43 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:43:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:43:43 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 49fd0fa4-4a71-45ca-9d2c-715dd1b7d841 does not exist
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev d370b1ea-3fbe-4138-9e5e-7b7233c0c400 does not exist
Oct 14 09:43:43 compute-0 sudo[422591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:43:43 compute-0 sudo[422591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:43:43 compute-0 sudo[422591]: pam_unix(sudo:session): session closed for user root
Oct 14 09:43:43 compute-0 sudo[422616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:43:43 compute-0 sudo[422616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:43:43 compute-0 sudo[422616]: pam_unix(sudo:session): session closed for user root
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001518291739923162 of space, bias 1.0, pg target 0.4554875219769486 quantized to 32 (current 32)
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:43:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:43:44 compute-0 ceph-mon[74249]: pgmap v2717: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 145 KiB/s wr, 39 op/s
Oct 14 09:43:44 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:43:44 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:43:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 145 KiB/s wr, 39 op/s
Oct 14 09:43:46 compute-0 ceph-mon[74249]: pgmap v2718: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 145 KiB/s wr, 39 op/s
Oct 14 09:43:46 compute-0 nova_compute[259627]: 2025-10-14 09:43:46.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 145 KiB/s wr, 39 op/s
Oct 14 09:43:47 compute-0 nova_compute[259627]: 2025-10-14 09:43:47.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:47 compute-0 podman[422642]: 2025-10-14 09:43:47.711579976 +0000 UTC m=+0.104318121 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 14 09:43:47 compute-0 podman[422641]: 2025-10-14 09:43:47.766961745 +0000 UTC m=+0.166856906 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:43:48 compute-0 ceph-mon[74249]: pgmap v2719: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 145 KiB/s wr, 39 op/s
Oct 14 09:43:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:43:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 14 09:43:50 compute-0 ceph-mon[74249]: pgmap v2720: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 14 09:43:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 14 09:43:50 compute-0 nova_compute[259627]: 2025-10-14 09:43:50.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:43:51 compute-0 ovn_controller[152662]: 2025-10-14T09:43:51Z|01676|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct 14 09:43:51 compute-0 nova_compute[259627]: 2025-10-14 09:43:51.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:52 compute-0 ceph-mon[74249]: pgmap v2721: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 14 09:43:52 compute-0 nova_compute[259627]: 2025-10-14 09:43:52.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:43:52 compute-0 nova_compute[259627]: 2025-10-14 09:43:52.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:43:52 compute-0 nova_compute[259627]: 2025-10-14 09:43:52.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 09:43:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:43:54 compute-0 ceph-mon[74249]: pgmap v2722: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:43:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:43:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:55.475 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:43:55 compute-0 nova_compute[259627]: 2025-10-14 09:43:55.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:55.478 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:43:55 compute-0 nova_compute[259627]: 2025-10-14 09:43:55.713 2 DEBUG nova.compute.manager [req-8ce810e8-3fd6-489b-95cf-1c2cb010ddf3 req-9d7cb2a6-90e9-46ad-9736-dec60b868fef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received event network-changed-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:43:55 compute-0 nova_compute[259627]: 2025-10-14 09:43:55.714 2 DEBUG nova.compute.manager [req-8ce810e8-3fd6-489b-95cf-1c2cb010ddf3 req-9d7cb2a6-90e9-46ad-9736-dec60b868fef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Refreshing instance network info cache due to event network-changed-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:43:55 compute-0 nova_compute[259627]: 2025-10-14 09:43:55.714 2 DEBUG oslo_concurrency.lockutils [req-8ce810e8-3fd6-489b-95cf-1c2cb010ddf3 req-9d7cb2a6-90e9-46ad-9736-dec60b868fef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:43:55 compute-0 nova_compute[259627]: 2025-10-14 09:43:55.715 2 DEBUG oslo_concurrency.lockutils [req-8ce810e8-3fd6-489b-95cf-1c2cb010ddf3 req-9d7cb2a6-90e9-46ad-9736-dec60b868fef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:43:55 compute-0 nova_compute[259627]: 2025-10-14 09:43:55.715 2 DEBUG nova.network.neutron [req-8ce810e8-3fd6-489b-95cf-1c2cb010ddf3 req-9d7cb2a6-90e9-46ad-9736-dec60b868fef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Refreshing network info cache for port bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:43:55 compute-0 nova_compute[259627]: 2025-10-14 09:43:55.790 2 DEBUG oslo_concurrency.lockutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "4c73d661-07da-475c-be98-686abd5354f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:43:55 compute-0 nova_compute[259627]: 2025-10-14 09:43:55.791 2 DEBUG oslo_concurrency.lockutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:43:55 compute-0 nova_compute[259627]: 2025-10-14 09:43:55.792 2 DEBUG oslo_concurrency.lockutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "4c73d661-07da-475c-be98-686abd5354f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:43:55 compute-0 nova_compute[259627]: 2025-10-14 09:43:55.793 2 DEBUG oslo_concurrency.lockutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:43:55 compute-0 nova_compute[259627]: 2025-10-14 09:43:55.794 2 DEBUG oslo_concurrency.lockutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:43:55 compute-0 nova_compute[259627]: 2025-10-14 09:43:55.796 2 INFO nova.compute.manager [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Terminating instance
Oct 14 09:43:55 compute-0 nova_compute[259627]: 2025-10-14 09:43:55.798 2 DEBUG nova.compute.manager [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:43:55 compute-0 kernel: tapbc5220f4-1b (unregistering): left promiscuous mode
Oct 14 09:43:55 compute-0 NetworkManager[44885]: <info>  [1760435035.8636] device (tapbc5220f4-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:43:55 compute-0 nova_compute[259627]: 2025-10-14 09:43:55.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:55 compute-0 ovn_controller[152662]: 2025-10-14T09:43:55Z|01677|binding|INFO|Releasing lport bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe from this chassis (sb_readonly=0)
Oct 14 09:43:55 compute-0 ovn_controller[152662]: 2025-10-14T09:43:55Z|01678|binding|INFO|Setting lport bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe down in Southbound
Oct 14 09:43:55 compute-0 ovn_controller[152662]: 2025-10-14T09:43:55Z|01679|binding|INFO|Removing iface tapbc5220f4-1b ovn-installed in OVS
Oct 14 09:43:55 compute-0 nova_compute[259627]: 2025-10-14 09:43:55.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:55.894 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:34:84 10.100.0.11 2001:db8::f816:3eff:fef0:3484'], port_security=['fa:16:3e:f0:34:84 10.100.0.11 2001:db8::f816:3eff:fef0:3484'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fef0:3484/64', 'neutron:device_id': '4c73d661-07da-475c-be98-686abd5354f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f0bf2f6f-4af7-4c93-8c0d-bb1990d0ba82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bf1cb75-5af4-410a-83b2-dc168fe7fc9d, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:43:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:55.897 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe in datapath aa331e50-389c-4e3c-80a7-7c8364a3fce5 unbound from our chassis
Oct 14 09:43:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:55.898 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa331e50-389c-4e3c-80a7-7c8364a3fce5
Oct 14 09:43:55 compute-0 nova_compute[259627]: 2025-10-14 09:43:55.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:55.926 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[463db305-372f-4d84-a995-5464915656e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:43:55 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000099.scope: Deactivated successfully.
Oct 14 09:43:55 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000099.scope: Consumed 13.766s CPU time.
Oct 14 09:43:55 compute-0 systemd-machined[214636]: Machine qemu-186-instance-00000099 terminated.
Oct 14 09:43:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:55.975 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[028d7a15-865e-4295-8d5d-5a3cd553100c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:43:55 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:55.982 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fa61b758-4ede-4908-8ded-8bf3eea6bd1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:43:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:56.025 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a09dacca-661e-406c-b8b7-f2a7788052f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:56.052 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d41d7781-7ed7-4e89-a3e4-6cff02e19976]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa331e50-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:26:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873982, 'reachable_time': 36414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 32, 'inoctets': 2464, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 32, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2464, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 32, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422702, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.055 2 INFO nova.virt.libvirt.driver [-] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Instance destroyed successfully.
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.056 2 DEBUG nova.objects.instance [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid 4c73d661-07da-475c-be98-686abd5354f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.069 2 DEBUG nova.virt.libvirt.vif [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:43:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-199870946',display_name='tempest-TestGettingAddress-server-199870946',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-199870946',id=153,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6mjdW3JQlIBQQd52sNXAHhyt49LvnDwoLKe8QrZpUHOv1Ky+CeaemM120Tsqn8n4PYQVzJvOC7aPmnxBpwslIeaFMTr8U05DGb5bjn09df7W51DFKPiOou1XQCmZ1ugQ==',key_name='tempest-TestGettingAddress-1974181320',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:43:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-hpaifwur',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:43:18Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=4c73d661-07da-475c-be98-686abd5354f9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.070 2 DEBUG nova.network.os_vif_util [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.071 2 DEBUG nova.network.os_vif_util [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f0:34:84,bridge_name='br-int',has_traffic_filtering=True,id=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc5220f4-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.071 2 DEBUG os_vif [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:34:84,bridge_name='br-int',has_traffic_filtering=True,id=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc5220f4-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.073 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc5220f4-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.083 2 INFO os_vif [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:34:84,bridge_name='br-int',has_traffic_filtering=True,id=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc5220f4-1b')
Oct 14 09:43:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:56.083 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[de399ffb-e640-4b99-9e9c-2bcf7906bb07]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaa331e50-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 873997, 'tstamp': 873997}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422710, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaa331e50-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874001, 'tstamp': 874001}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422710, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:43:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:56.085 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa331e50-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:43:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:56.090 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa331e50-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:43:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:56.090 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:43:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:56.091 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa331e50-30, col_values=(('external_ids', {'iface-id': '176e1cce-63d0-4e90-9078-245732aff057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:43:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:56.092 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:56 compute-0 ceph-mon[74249]: pgmap v2723: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.404 2 DEBUG nova.compute.manager [req-ec1e9c44-15d8-4c07-a25b-de01129132e7 req-c44db356-0145-4e85-afbd-bd5564f4dcac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received event network-vif-unplugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.405 2 DEBUG oslo_concurrency.lockutils [req-ec1e9c44-15d8-4c07-a25b-de01129132e7 req-c44db356-0145-4e85-afbd-bd5564f4dcac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4c73d661-07da-475c-be98-686abd5354f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.405 2 DEBUG oslo_concurrency.lockutils [req-ec1e9c44-15d8-4c07-a25b-de01129132e7 req-c44db356-0145-4e85-afbd-bd5564f4dcac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.405 2 DEBUG oslo_concurrency.lockutils [req-ec1e9c44-15d8-4c07-a25b-de01129132e7 req-c44db356-0145-4e85-afbd-bd5564f4dcac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.406 2 DEBUG nova.compute.manager [req-ec1e9c44-15d8-4c07-a25b-de01129132e7 req-c44db356-0145-4e85-afbd-bd5564f4dcac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] No waiting events found dispatching network-vif-unplugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.406 2 DEBUG nova.compute.manager [req-ec1e9c44-15d8-4c07-a25b-de01129132e7 req-c44db356-0145-4e85-afbd-bd5564f4dcac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received event network-vif-unplugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.556 2 INFO nova.virt.libvirt.driver [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Deleting instance files /var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9_del
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.558 2 INFO nova.virt.libvirt.driver [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Deletion of /var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9_del complete
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.636 2 INFO nova.compute.manager [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Took 0.84 seconds to destroy the instance on the hypervisor.
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.637 2 DEBUG oslo.service.loopingcall [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.637 2 DEBUG nova.compute.manager [-] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:43:56 compute-0 nova_compute[259627]: 2025-10-14 09:43:56.638 2 DEBUG nova.network.neutron [-] [instance: 4c73d661-07da-475c-be98-686abd5354f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:43:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 85 B/s wr, 0 op/s
Oct 14 09:43:57 compute-0 nova_compute[259627]: 2025-10-14 09:43:57.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:57 compute-0 nova_compute[259627]: 2025-10-14 09:43:57.543 2 DEBUG nova.network.neutron [-] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:43:57 compute-0 nova_compute[259627]: 2025-10-14 09:43:57.562 2 INFO nova.compute.manager [-] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Took 0.92 seconds to deallocate network for instance.
Oct 14 09:43:57 compute-0 nova_compute[259627]: 2025-10-14 09:43:57.600 2 DEBUG nova.network.neutron [req-8ce810e8-3fd6-489b-95cf-1c2cb010ddf3 req-9d7cb2a6-90e9-46ad-9736-dec60b868fef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Updated VIF entry in instance network info cache for port bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:43:57 compute-0 nova_compute[259627]: 2025-10-14 09:43:57.601 2 DEBUG nova.network.neutron [req-8ce810e8-3fd6-489b-95cf-1c2cb010ddf3 req-9d7cb2a6-90e9-46ad-9736-dec60b868fef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Updating instance_info_cache with network_info: [{"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:43:57 compute-0 nova_compute[259627]: 2025-10-14 09:43:57.607 2 DEBUG oslo_concurrency.lockutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:43:57 compute-0 nova_compute[259627]: 2025-10-14 09:43:57.607 2 DEBUG oslo_concurrency.lockutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:43:57 compute-0 nova_compute[259627]: 2025-10-14 09:43:57.627 2 DEBUG oslo_concurrency.lockutils [req-8ce810e8-3fd6-489b-95cf-1c2cb010ddf3 req-9d7cb2a6-90e9-46ad-9736-dec60b868fef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:43:57 compute-0 nova_compute[259627]: 2025-10-14 09:43:57.699 2 DEBUG oslo_concurrency.processutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:43:57 compute-0 nova_compute[259627]: 2025-10-14 09:43:57.814 2 DEBUG nova.compute.manager [req-1eca8c53-ccd7-463d-b20a-49fcf9faf19e req-e257d97d-9002-4366-a741-bcb8d3462ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received event network-vif-deleted-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:43:57 compute-0 nova_compute[259627]: 2025-10-14 09:43:57.815 2 INFO nova.compute.manager [req-1eca8c53-ccd7-463d-b20a-49fcf9faf19e req-e257d97d-9002-4366-a741-bcb8d3462ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Neutron deleted interface bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe; detaching it from the instance and deleting it from the info cache
Oct 14 09:43:57 compute-0 nova_compute[259627]: 2025-10-14 09:43:57.816 2 DEBUG nova.network.neutron [req-1eca8c53-ccd7-463d-b20a-49fcf9faf19e req-e257d97d-9002-4366-a741-bcb8d3462ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:43:57 compute-0 nova_compute[259627]: 2025-10-14 09:43:57.841 2 DEBUG nova.compute.manager [req-1eca8c53-ccd7-463d-b20a-49fcf9faf19e req-e257d97d-9002-4366-a741-bcb8d3462ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Detach interface failed, port_id=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe, reason: Instance 4c73d661-07da-475c-be98-686abd5354f9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 09:43:57 compute-0 nova_compute[259627]: 2025-10-14 09:43:57.998 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:43:57 compute-0 nova_compute[259627]: 2025-10-14 09:43:57.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:43:57 compute-0 nova_compute[259627]: 2025-10-14 09:43:57.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.024 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:43:58 compute-0 ceph-mon[74249]: pgmap v2724: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 85 B/s wr, 0 op/s
Oct 14 09:43:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:43:58 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3201868617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:43:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.223 2 DEBUG oslo_concurrency.processutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.230 2 DEBUG nova.compute.provider_tree [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.250 2 DEBUG nova.scheduler.client.report [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.276 2 DEBUG oslo_concurrency.lockutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.279 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.279 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.280 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.280 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.353 2 INFO nova.scheduler.client.report [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance 4c73d661-07da-475c-be98-686abd5354f9
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.425 2 DEBUG oslo_concurrency.lockutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.514 2 DEBUG nova.compute.manager [req-64aab57c-41f8-4b96-8466-c9a56c485c52 req-0d6170ce-9dd2-44fe-a903-be5b2240f3a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received event network-vif-plugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.515 2 DEBUG oslo_concurrency.lockutils [req-64aab57c-41f8-4b96-8466-c9a56c485c52 req-0d6170ce-9dd2-44fe-a903-be5b2240f3a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4c73d661-07da-475c-be98-686abd5354f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.515 2 DEBUG oslo_concurrency.lockutils [req-64aab57c-41f8-4b96-8466-c9a56c485c52 req-0d6170ce-9dd2-44fe-a903-be5b2240f3a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.515 2 DEBUG oslo_concurrency.lockutils [req-64aab57c-41f8-4b96-8466-c9a56c485c52 req-0d6170ce-9dd2-44fe-a903-be5b2240f3a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.516 2 DEBUG nova.compute.manager [req-64aab57c-41f8-4b96-8466-c9a56c485c52 req-0d6170ce-9dd2-44fe-a903-be5b2240f3a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] No waiting events found dispatching network-vif-plugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.516 2 WARNING nova.compute.manager [req-64aab57c-41f8-4b96-8466-c9a56c485c52 req-0d6170ce-9dd2-44fe-a903-be5b2240f3a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received unexpected event network-vif-plugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe for instance with vm_state deleted and task_state None.
Oct 14 09:43:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:43:58 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/6284015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.729 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:43:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 85 B/s wr, 0 op/s
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.825 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:43:58 compute-0 nova_compute[259627]: 2025-10-14 09:43:58.826 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.006 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.007 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3367MB free_disk=59.89720153808594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.097 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.098 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.098 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.151 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:43:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3201868617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:43:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/6284015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:43:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:43:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/653863839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.621 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.629 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.696 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.779 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.779 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.874 2 DEBUG oslo_concurrency.lockutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.875 2 DEBUG oslo_concurrency.lockutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.875 2 DEBUG oslo_concurrency.lockutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.875 2 DEBUG oslo_concurrency.lockutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.876 2 DEBUG oslo_concurrency.lockutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.878 2 INFO nova.compute.manager [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Terminating instance
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.880 2 DEBUG nova.compute.manager [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.911 2 DEBUG nova.compute.manager [req-c6c0d28f-1b18-4121-b5fa-95084da6ddf8 req-4b088bdc-3a85-4837-98df-106e372eb8e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received event network-changed-b3e71767-ffef-473e-947a-bb35562569c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.911 2 DEBUG nova.compute.manager [req-c6c0d28f-1b18-4121-b5fa-95084da6ddf8 req-4b088bdc-3a85-4837-98df-106e372eb8e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Refreshing instance network info cache due to event network-changed-b3e71767-ffef-473e-947a-bb35562569c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.912 2 DEBUG oslo_concurrency.lockutils [req-c6c0d28f-1b18-4121-b5fa-95084da6ddf8 req-4b088bdc-3a85-4837-98df-106e372eb8e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.912 2 DEBUG oslo_concurrency.lockutils [req-c6c0d28f-1b18-4121-b5fa-95084da6ddf8 req-4b088bdc-3a85-4837-98df-106e372eb8e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.912 2 DEBUG nova.network.neutron [req-c6c0d28f-1b18-4121-b5fa-95084da6ddf8 req-4b088bdc-3a85-4837-98df-106e372eb8e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Refreshing network info cache for port b3e71767-ffef-473e-947a-bb35562569c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 09:43:59 compute-0 kernel: tapb3e71767-ff (unregistering): left promiscuous mode
Oct 14 09:43:59 compute-0 NetworkManager[44885]: <info>  [1760435039.9441] device (tapb3e71767-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 09:43:59 compute-0 ovn_controller[152662]: 2025-10-14T09:43:59Z|01680|binding|INFO|Releasing lport b3e71767-ffef-473e-947a-bb35562569c9 from this chassis (sb_readonly=0)
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:59 compute-0 ovn_controller[152662]: 2025-10-14T09:43:59Z|01681|binding|INFO|Setting lport b3e71767-ffef-473e-947a-bb35562569c9 down in Southbound
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:43:59 compute-0 ovn_controller[152662]: 2025-10-14T09:43:59Z|01682|binding|INFO|Removing iface tapb3e71767-ff ovn-installed in OVS
Oct 14 09:43:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:59.969 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:ac:57 10.100.0.6 2001:db8::f816:3eff:fe57:ac57'], port_security=['fa:16:3e:57:ac:57 10.100.0.6 2001:db8::f816:3eff:fe57:ac57'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8::f816:3eff:fe57:ac57/64', 'neutron:device_id': 'f6f1bb88-2b88-4876-ab72-3e4e4dc9a578', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f0bf2f6f-4af7-4c93-8c0d-bb1990d0ba82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bf1cb75-5af4-410a-83b2-dc168fe7fc9d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b3e71767-ffef-473e-947a-bb35562569c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:43:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:59.971 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b3e71767-ffef-473e-947a-bb35562569c9 in datapath aa331e50-389c-4e3c-80a7-7c8364a3fce5 unbound from our chassis
Oct 14 09:43:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:59.973 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa331e50-389c-4e3c-80a7-7c8364a3fce5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 09:43:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:59.974 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fe75a7d9-c926-4191-b3ff-8189e47abf74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:43:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:43:59.975 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5 namespace which is not needed anymore
Oct 14 09:43:59 compute-0 nova_compute[259627]: 2025-10-14 09:43:59.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:00 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000098.scope: Deactivated successfully.
Oct 14 09:44:00 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000098.scope: Consumed 17.165s CPU time.
Oct 14 09:44:00 compute-0 systemd-machined[214636]: Machine qemu-185-instance-00000098 terminated.
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.117 2 INFO nova.virt.libvirt.driver [-] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Instance destroyed successfully.
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.117 2 DEBUG nova.objects.instance [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:44:00 compute-0 neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5[421145]: [NOTICE]   (421149) : haproxy version is 2.8.14-c23fe91
Oct 14 09:44:00 compute-0 neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5[421145]: [NOTICE]   (421149) : path to executable is /usr/sbin/haproxy
Oct 14 09:44:00 compute-0 neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5[421145]: [WARNING]  (421149) : Exiting Master process...
Oct 14 09:44:00 compute-0 neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5[421145]: [WARNING]  (421149) : Exiting Master process...
Oct 14 09:44:00 compute-0 neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5[421145]: [ALERT]    (421149) : Current worker (421151) exited with code 143 (Terminated)
Oct 14 09:44:00 compute-0 neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5[421145]: [WARNING]  (421149) : All workers exited. Exiting... (0)
Oct 14 09:44:00 compute-0 systemd[1]: libpod-6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839.scope: Deactivated successfully.
Oct 14 09:44:00 compute-0 podman[422821]: 2025-10-14 09:44:00.131800015 +0000 UTC m=+0.052097030 container died 6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.141 2 DEBUG nova.virt.libvirt.vif [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:42:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-754347042',display_name='tempest-TestGettingAddress-server-754347042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-754347042',id=152,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6mjdW3JQlIBQQd52sNXAHhyt49LvnDwoLKe8QrZpUHOv1Ky+CeaemM120Tsqn8n4PYQVzJvOC7aPmnxBpwslIeaFMTr8U05DGb5bjn09df7W51DFKPiOou1XQCmZ1ugQ==',key_name='tempest-TestGettingAddress-1974181320',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:42:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-072443ui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:42:42Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=f6f1bb88-2b88-4876-ab72-3e4e4dc9a578,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.142 2 DEBUG nova.network.os_vif_util [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.143 2 DEBUG nova.network.os_vif_util [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:ac:57,bridge_name='br-int',has_traffic_filtering=True,id=b3e71767-ffef-473e-947a-bb35562569c9,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e71767-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.143 2 DEBUG os_vif [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:ac:57,bridge_name='br-int',has_traffic_filtering=True,id=b3e71767-ffef-473e-947a-bb35562569c9,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e71767-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.146 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3e71767-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.152 2 INFO os_vif [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:ac:57,bridge_name='br-int',has_traffic_filtering=True,id=b3e71767-ffef-473e-947a-bb35562569c9,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e71767-ff')
Oct 14 09:44:00 compute-0 ceph-mon[74249]: pgmap v2725: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 85 B/s wr, 0 op/s
Oct 14 09:44:00 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/653863839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:44:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839-userdata-shm.mount: Deactivated successfully.
Oct 14 09:44:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7492a1d1260024e32fdfbc4b0a12cad644de8f71781dd5c42aeb3535a093dda-merged.mount: Deactivated successfully.
Oct 14 09:44:00 compute-0 podman[422821]: 2025-10-14 09:44:00.188477276 +0000 UTC m=+0.108774291 container cleanup 6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:44:00 compute-0 systemd[1]: libpod-conmon-6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839.scope: Deactivated successfully.
Oct 14 09:44:00 compute-0 podman[422879]: 2025-10-14 09:44:00.253932782 +0000 UTC m=+0.045415735 container remove 6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 09:44:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.260 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e0485c-744f-4463-9f72-4f3dc82191f1]: (4, ('Tue Oct 14 09:44:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5 (6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839)\n6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839\nTue Oct 14 09:44:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5 (6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839)\n6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:44:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.261 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7020c281-46ff-4bc4-a1ff-5a1be6345e97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:44:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.262 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa331e50-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:00 compute-0 kernel: tapaa331e50-30: left promiscuous mode
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.281 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5e02379e-3acd-47a4-bc41-b3e6f5922bcd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:44:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.316 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c9fdf9d4-37f9-48b8-9bbc-89b14063075f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:44:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.317 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[87c416f1-cf10-4e83-8774-90a652859e46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:44:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.333 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[20b526d1-027f-457b-97a5-71a30288e9fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873974, 'reachable_time': 41843, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422896, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:44:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.335 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 09:44:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.335 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[7296be15-fc24-4e22-ad72-ecdf83e769fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 09:44:00 compute-0 systemd[1]: run-netns-ovnmeta\x2daa331e50\x2d389c\x2d4e3c\x2d80a7\x2d7c8364a3fce5.mount: Deactivated successfully.
Oct 14 09:44:00 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.479 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.601 2 INFO nova.virt.libvirt.driver [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Deleting instance files /var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_del
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.602 2 INFO nova.virt.libvirt.driver [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Deletion of /var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_del complete
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.617 2 DEBUG nova.compute.manager [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received event network-vif-unplugged-b3e71767-ffef-473e-947a-bb35562569c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.618 2 DEBUG oslo_concurrency.lockutils [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.618 2 DEBUG oslo_concurrency.lockutils [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.618 2 DEBUG oslo_concurrency.lockutils [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.619 2 DEBUG nova.compute.manager [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] No waiting events found dispatching network-vif-unplugged-b3e71767-ffef-473e-947a-bb35562569c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.619 2 DEBUG nova.compute.manager [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received event network-vif-unplugged-b3e71767-ffef-473e-947a-bb35562569c9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.620 2 DEBUG nova.compute.manager [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received event network-vif-plugged-b3e71767-ffef-473e-947a-bb35562569c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.620 2 DEBUG oslo_concurrency.lockutils [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.621 2 DEBUG oslo_concurrency.lockutils [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.621 2 DEBUG oslo_concurrency.lockutils [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.622 2 DEBUG nova.compute.manager [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] No waiting events found dispatching network-vif-plugged-b3e71767-ffef-473e-947a-bb35562569c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.622 2 WARNING nova.compute.manager [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received unexpected event network-vif-plugged-b3e71767-ffef-473e-947a-bb35562569c9 for instance with vm_state active and task_state deleting.
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.686 2 INFO nova.compute.manager [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.687 2 DEBUG oslo.service.loopingcall [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.688 2 DEBUG nova.compute.manager [-] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:44:00 compute-0 nova_compute[259627]: 2025-10-14 09:44:00.688 2 DEBUG nova.network.neutron [-] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:44:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 94 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 11 KiB/s wr, 35 op/s
Oct 14 09:44:02 compute-0 ceph-mon[74249]: pgmap v2726: 305 pgs: 305 active+clean; 94 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 11 KiB/s wr, 35 op/s
Oct 14 09:44:02 compute-0 nova_compute[259627]: 2025-10-14 09:44:02.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:44:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:44:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 94 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 11 KiB/s wr, 35 op/s
Oct 14 09:44:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:44:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:44:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:44:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:44:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:44:03 compute-0 podman[422898]: 2025-10-14 09:44:03.659534025 +0000 UTC m=+0.064772270 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251009, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 14 09:44:03 compute-0 podman[422899]: 2025-10-14 09:44:03.665384599 +0000 UTC m=+0.066271818 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:44:03 compute-0 nova_compute[259627]: 2025-10-14 09:44:03.943 2 DEBUG nova.network.neutron [-] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:44:03 compute-0 nova_compute[259627]: 2025-10-14 09:44:03.962 2 INFO nova.compute.manager [-] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Took 3.27 seconds to deallocate network for instance.
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.010 2 DEBUG nova.network.neutron [req-c6c0d28f-1b18-4121-b5fa-95084da6ddf8 req-4b088bdc-3a85-4837-98df-106e372eb8e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updated VIF entry in instance network info cache for port b3e71767-ffef-473e-947a-bb35562569c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.010 2 DEBUG nova.network.neutron [req-c6c0d28f-1b18-4121-b5fa-95084da6ddf8 req-4b088bdc-3a85-4837-98df-106e372eb8e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updating instance_info_cache with network_info: [{"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.024 2 DEBUG oslo_concurrency.lockutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.024 2 DEBUG oslo_concurrency.lockutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.048 2 DEBUG oslo_concurrency.lockutils [req-c6c0d28f-1b18-4121-b5fa-95084da6ddf8 req-4b088bdc-3a85-4837-98df-106e372eb8e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.056 2 DEBUG nova.compute.manager [req-80504e89-1f91-4246-929e-036ce1ad672f req-3a05c3ec-4c2d-4540-8ff1-300ca80de079 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received event network-vif-deleted-b3e71767-ffef-473e-947a-bb35562569c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.071 2 DEBUG oslo_concurrency.processutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:44:04 compute-0 ceph-mon[74249]: pgmap v2727: 305 pgs: 305 active+clean; 94 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 11 KiB/s wr, 35 op/s
Oct 14 09:44:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:44:04 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3438459309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.497 2 DEBUG oslo_concurrency.processutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.506 2 DEBUG nova.compute.provider_tree [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.530 2 DEBUG nova.scheduler.client.report [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.566 2 DEBUG oslo_concurrency.lockutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.603 2 INFO nova.scheduler.client.report [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance f6f1bb88-2b88-4876-ab72-3e4e4dc9a578
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.703 2 DEBUG oslo_concurrency.lockutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.759 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.760 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.760 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.760 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.777 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.778 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.778 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:44:04 compute-0 nova_compute[259627]: 2025-10-14 09:44:04.779 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:44:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 305 active+clean; 94 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 11 KiB/s wr, 35 op/s
Oct 14 09:44:05 compute-0 nova_compute[259627]: 2025-10-14 09:44:05.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3438459309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:44:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:44:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/795924222' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:44:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:44:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/795924222' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:44:06 compute-0 ceph-mon[74249]: pgmap v2728: 305 pgs: 305 active+clean; 94 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 11 KiB/s wr, 35 op/s
Oct 14 09:44:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/795924222' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:44:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/795924222' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:44:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 14 09:44:06 compute-0 nova_compute[259627]: 2025-10-14 09:44:06.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:44:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:44:07.060 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:44:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:44:07.060 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:44:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:44:07.061 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:44:07 compute-0 ceph-mon[74249]: pgmap v2729: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 14 09:44:07 compute-0 nova_compute[259627]: 2025-10-14 09:44:07.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:07 compute-0 nova_compute[259627]: 2025-10-14 09:44:07.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:08 compute-0 nova_compute[259627]: 2025-10-14 09:44:08.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:44:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 56 op/s
Oct 14 09:44:09 compute-0 ceph-mon[74249]: pgmap v2730: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 56 op/s
Oct 14 09:44:10 compute-0 nova_compute[259627]: 2025-10-14 09:44:10.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 56 op/s
Oct 14 09:44:11 compute-0 nova_compute[259627]: 2025-10-14 09:44:11.053 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760435036.0522313, 4c73d661-07da-475c-be98-686abd5354f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:44:11 compute-0 nova_compute[259627]: 2025-10-14 09:44:11.054 2 INFO nova.compute.manager [-] [instance: 4c73d661-07da-475c-be98-686abd5354f9] VM Stopped (Lifecycle Event)
Oct 14 09:44:11 compute-0 nova_compute[259627]: 2025-10-14 09:44:11.083 2 DEBUG nova.compute.manager [None req-0e6ab70f-f4fb-46f9-829b-2ce898060edb - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:44:11 compute-0 ceph-mon[74249]: pgmap v2731: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 56 op/s
Oct 14 09:44:12 compute-0 nova_compute[259627]: 2025-10-14 09:44:12.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Oct 14 09:44:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:44:13 compute-0 ceph-mon[74249]: pgmap v2732: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Oct 14 09:44:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Oct 14 09:44:15 compute-0 nova_compute[259627]: 2025-10-14 09:44:15.115 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760435040.1139867, f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:44:15 compute-0 nova_compute[259627]: 2025-10-14 09:44:15.115 2 INFO nova.compute.manager [-] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] VM Stopped (Lifecycle Event)
Oct 14 09:44:15 compute-0 nova_compute[259627]: 2025-10-14 09:44:15.150 2 DEBUG nova.compute.manager [None req-7e1163d9-b40b-42c8-8904-c53514253f19 - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:44:15 compute-0 nova_compute[259627]: 2025-10-14 09:44:15.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:15 compute-0 ceph-mon[74249]: pgmap v2733: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Oct 14 09:44:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Oct 14 09:44:16 compute-0 nova_compute[259627]: 2025-10-14 09:44:16.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:44:17 compute-0 nova_compute[259627]: 2025-10-14 09:44:17.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:17 compute-0 ceph-mon[74249]: pgmap v2734: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Oct 14 09:44:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:44:18 compute-0 podman[422960]: 2025-10-14 09:44:18.716259387 +0000 UTC m=+0.115902075 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:44:18 compute-0 podman[422959]: 2025-10-14 09:44:18.7428884 +0000 UTC m=+0.148300840 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009)
Oct 14 09:44:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:44:18 compute-0 nova_compute[259627]: 2025-10-14 09:44:18.949 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Acquiring lock "9a630cb9-9f1a-4ea2-9047-158683504522" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:44:18 compute-0 nova_compute[259627]: 2025-10-14 09:44:18.949 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "9a630cb9-9f1a-4ea2-9047-158683504522" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:44:18 compute-0 nova_compute[259627]: 2025-10-14 09:44:18.968 2 DEBUG nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 09:44:19 compute-0 nova_compute[259627]: 2025-10-14 09:44:19.060 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:44:19 compute-0 nova_compute[259627]: 2025-10-14 09:44:19.061 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:44:19 compute-0 nova_compute[259627]: 2025-10-14 09:44:19.073 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 09:44:19 compute-0 nova_compute[259627]: 2025-10-14 09:44:19.073 2 INFO nova.compute.claims [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Claim successful on node compute-0.ctlplane.example.com
Oct 14 09:44:19 compute-0 nova_compute[259627]: 2025-10-14 09:44:19.190 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:44:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:44:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3751397359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:44:19 compute-0 nova_compute[259627]: 2025-10-14 09:44:19.661 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:44:19 compute-0 nova_compute[259627]: 2025-10-14 09:44:19.670 2 DEBUG nova.compute.provider_tree [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:44:19 compute-0 nova_compute[259627]: 2025-10-14 09:44:19.693 2 DEBUG nova.scheduler.client.report [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:44:19 compute-0 nova_compute[259627]: 2025-10-14 09:44:19.748 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:44:19 compute-0 nova_compute[259627]: 2025-10-14 09:44:19.749 2 DEBUG nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 09:44:19 compute-0 nova_compute[259627]: 2025-10-14 09:44:19.819 2 DEBUG nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 09:44:19 compute-0 nova_compute[259627]: 2025-10-14 09:44:19.820 2 DEBUG nova.network.neutron [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 09:44:19 compute-0 nova_compute[259627]: 2025-10-14 09:44:19.852 2 INFO nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 09:44:19 compute-0 nova_compute[259627]: 2025-10-14 09:44:19.881 2 DEBUG nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 09:44:19 compute-0 ceph-mon[74249]: pgmap v2735: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:44:19 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3751397359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:44:19 compute-0 nova_compute[259627]: 2025-10-14 09:44:19.989 2 DEBUG nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 09:44:19 compute-0 nova_compute[259627]: 2025-10-14 09:44:19.991 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 09:44:19 compute-0 nova_compute[259627]: 2025-10-14 09:44:19.992 2 INFO nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Creating image(s)
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.026 2 DEBUG nova.storage.rbd_utils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] rbd image 9a630cb9-9f1a-4ea2-9047-158683504522_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.061 2 DEBUG nova.storage.rbd_utils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] rbd image 9a630cb9-9f1a-4ea2-9047-158683504522_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.098 2 DEBUG nova.storage.rbd_utils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] rbd image 9a630cb9-9f1a-4ea2-9047-158683504522_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.103 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.213 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.214 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.214 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.215 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.244 2 DEBUG nova.storage.rbd_utils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] rbd image 9a630cb9-9f1a-4ea2-9047-158683504522_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.248 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9a630cb9-9f1a-4ea2-9047-158683504522_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.331 2 DEBUG nova.network.neutron [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.332 2 DEBUG nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.556 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9a630cb9-9f1a-4ea2-9047-158683504522_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.629 2 DEBUG nova.storage.rbd_utils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] resizing rbd image 9a630cb9-9f1a-4ea2-9047-158683504522_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.740 2 DEBUG nova.objects.instance [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lazy-loading 'migration_context' on Instance uuid 9a630cb9-9f1a-4ea2-9047-158683504522 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.763 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.764 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Ensure instance console log exists: /var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.764 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.765 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.765 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.767 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.771 2 WARNING nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.777 2 DEBUG nova.virt.libvirt.host [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.778 2 DEBUG nova.virt.libvirt.host [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.785 2 DEBUG nova.virt.libvirt.host [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.785 2 DEBUG nova.virt.libvirt.host [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.786 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.786 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.787 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.787 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.788 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.788 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.789 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.789 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.789 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.790 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.790 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.790 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 09:44:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 305 active+clean; 55 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 418 KiB/s wr, 11 op/s
Oct 14 09:44:20 compute-0 nova_compute[259627]: 2025-10-14 09:44:20.794 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:44:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:44:21 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1662102445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:44:21 compute-0 nova_compute[259627]: 2025-10-14 09:44:21.283 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:44:21 compute-0 nova_compute[259627]: 2025-10-14 09:44:21.309 2 DEBUG nova.storage.rbd_utils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] rbd image 9a630cb9-9f1a-4ea2-9047-158683504522_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:44:21 compute-0 nova_compute[259627]: 2025-10-14 09:44:21.314 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:44:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 09:44:21 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/414932043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:44:21 compute-0 nova_compute[259627]: 2025-10-14 09:44:21.769 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:44:21 compute-0 nova_compute[259627]: 2025-10-14 09:44:21.772 2 DEBUG nova.objects.instance [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a630cb9-9f1a-4ea2-9047-158683504522 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:44:21 compute-0 nova_compute[259627]: 2025-10-14 09:44:21.792 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] End _get_guest_xml xml=<domain type="kvm">
Oct 14 09:44:21 compute-0 nova_compute[259627]:   <uuid>9a630cb9-9f1a-4ea2-9047-158683504522</uuid>
Oct 14 09:44:21 compute-0 nova_compute[259627]:   <name>instance-0000009a</name>
Oct 14 09:44:21 compute-0 nova_compute[259627]:   <memory>131072</memory>
Oct 14 09:44:21 compute-0 nova_compute[259627]:   <vcpu>1</vcpu>
Oct 14 09:44:21 compute-0 nova_compute[259627]:   <metadata>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <nova:name>tempest-AggregatesAdminTestJSON-server-1922733757</nova:name>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <nova:creationTime>2025-10-14 09:44:20</nova:creationTime>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <nova:flavor name="m1.nano">
Oct 14 09:44:21 compute-0 nova_compute[259627]:         <nova:memory>128</nova:memory>
Oct 14 09:44:21 compute-0 nova_compute[259627]:         <nova:disk>1</nova:disk>
Oct 14 09:44:21 compute-0 nova_compute[259627]:         <nova:swap>0</nova:swap>
Oct 14 09:44:21 compute-0 nova_compute[259627]:         <nova:ephemeral>0</nova:ephemeral>
Oct 14 09:44:21 compute-0 nova_compute[259627]:         <nova:vcpus>1</nova:vcpus>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       </nova:flavor>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <nova:owner>
Oct 14 09:44:21 compute-0 nova_compute[259627]:         <nova:user uuid="3088994959d141278978eee7990c1ab0">tempest-AggregatesAdminTestJSON-414260326-project-member</nova:user>
Oct 14 09:44:21 compute-0 nova_compute[259627]:         <nova:project uuid="71af0b96f64c4075b5773834ff62aa97">tempest-AggregatesAdminTestJSON-414260326</nova:project>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       </nova:owner>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <nova:ports/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     </nova:instance>
Oct 14 09:44:21 compute-0 nova_compute[259627]:   </metadata>
Oct 14 09:44:21 compute-0 nova_compute[259627]:   <sysinfo type="smbios">
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <system>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <entry name="manufacturer">RDO</entry>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <entry name="product">OpenStack Compute</entry>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <entry name="serial">9a630cb9-9f1a-4ea2-9047-158683504522</entry>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <entry name="uuid">9a630cb9-9f1a-4ea2-9047-158683504522</entry>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <entry name="family">Virtual Machine</entry>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     </system>
Oct 14 09:44:21 compute-0 nova_compute[259627]:   </sysinfo>
Oct 14 09:44:21 compute-0 nova_compute[259627]:   <os>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <boot dev="hd"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <smbios mode="sysinfo"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:   </os>
Oct 14 09:44:21 compute-0 nova_compute[259627]:   <features>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <acpi/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <apic/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <vmcoreinfo/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:   </features>
Oct 14 09:44:21 compute-0 nova_compute[259627]:   <clock offset="utc">
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <timer name="pit" tickpolicy="delay"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <timer name="hpet" present="no"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:   </clock>
Oct 14 09:44:21 compute-0 nova_compute[259627]:   <cpu mode="host-model" match="exact">
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <topology sockets="1" cores="1" threads="1"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:   </cpu>
Oct 14 09:44:21 compute-0 nova_compute[259627]:   <devices>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <disk type="network" device="disk">
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/9a630cb9-9f1a-4ea2-9047-158683504522_disk">
Oct 14 09:44:21 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       </source>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:44:21 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <target dev="vda" bus="virtio"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <disk type="network" device="cdrom">
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <driver type="raw" cache="none"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <source protocol="rbd" name="vms/9a630cb9-9f1a-4ea2-9047-158683504522_disk.config">
Oct 14 09:44:21 compute-0 nova_compute[259627]:         <host name="192.168.122.100" port="6789"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       </source>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <auth username="openstack">
Oct 14 09:44:21 compute-0 nova_compute[259627]:         <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       </auth>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <target dev="sda" bus="sata"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     </disk>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <serial type="pty">
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <log file="/var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522/console.log" append="off"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     </serial>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <video>
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <model type="virtio"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     </video>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <input type="tablet" bus="usb"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <rng model="virtio">
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <backend model="random">/dev/urandom</backend>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     </rng>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="pci" model="pcie-root-port"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <controller type="usb" index="0"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     <memballoon model="virtio">
Oct 14 09:44:21 compute-0 nova_compute[259627]:       <stats period="10"/>
Oct 14 09:44:21 compute-0 nova_compute[259627]:     </memballoon>
Oct 14 09:44:21 compute-0 nova_compute[259627]:   </devices>
Oct 14 09:44:21 compute-0 nova_compute[259627]: </domain>
Oct 14 09:44:21 compute-0 nova_compute[259627]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 09:44:21 compute-0 nova_compute[259627]: 2025-10-14 09:44:21.852 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:44:21 compute-0 nova_compute[259627]: 2025-10-14 09:44:21.853 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 09:44:21 compute-0 nova_compute[259627]: 2025-10-14 09:44:21.853 2 INFO nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Using config drive
Oct 14 09:44:21 compute-0 nova_compute[259627]: 2025-10-14 09:44:21.881 2 DEBUG nova.storage.rbd_utils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] rbd image 9a630cb9-9f1a-4ea2-9047-158683504522_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:44:21 compute-0 ceph-mon[74249]: pgmap v2736: 305 pgs: 305 active+clean; 55 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 418 KiB/s wr, 11 op/s
Oct 14 09:44:21 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1662102445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:44:21 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/414932043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 09:44:22 compute-0 nova_compute[259627]: 2025-10-14 09:44:22.199 2 INFO nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Creating config drive at /var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522/disk.config
Oct 14 09:44:22 compute-0 nova_compute[259627]: 2025-10-14 09:44:22.203 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6rsfp4xm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:44:22 compute-0 nova_compute[259627]: 2025-10-14 09:44:22.353 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6rsfp4xm" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:44:22 compute-0 nova_compute[259627]: 2025-10-14 09:44:22.394 2 DEBUG nova.storage.rbd_utils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] rbd image 9a630cb9-9f1a-4ea2-9047-158683504522_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 09:44:22 compute-0 nova_compute[259627]: 2025-10-14 09:44:22.399 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522/disk.config 9a630cb9-9f1a-4ea2-9047-158683504522_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:44:22 compute-0 nova_compute[259627]: 2025-10-14 09:44:22.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:22 compute-0 nova_compute[259627]: 2025-10-14 09:44:22.600 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522/disk.config 9a630cb9-9f1a-4ea2-9047-158683504522_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:44:22 compute-0 nova_compute[259627]: 2025-10-14 09:44:22.601 2 INFO nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Deleting local config drive /var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522/disk.config because it was imported into RBD.
Oct 14 09:44:22 compute-0 systemd-machined[214636]: New machine qemu-187-instance-0000009a.
Oct 14 09:44:22 compute-0 systemd[1]: Started Virtual Machine qemu-187-instance-0000009a.
Oct 14 09:44:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 55 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 418 KiB/s wr, 11 op/s
Oct 14 09:44:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.780 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760435063.7795699, 9a630cb9-9f1a-4ea2-9047-158683504522 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.781 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] VM Resumed (Lifecycle Event)
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.783 2 DEBUG nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.783 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.787 2 INFO nova.virt.libvirt.driver [-] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Instance spawned successfully.
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.787 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.814 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.818 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.818 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.819 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.819 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.819 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.820 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.824 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.873 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.874 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760435063.783396, 9a630cb9-9f1a-4ea2-9047-158683504522 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.874 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] VM Started (Lifecycle Event)
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.898 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.901 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.904 2 INFO nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Took 3.91 seconds to spawn the instance on the hypervisor.
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.905 2 DEBUG nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.937 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 09:44:23 compute-0 ceph-mon[74249]: pgmap v2737: 305 pgs: 305 active+clean; 55 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 418 KiB/s wr, 11 op/s
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.963 2 INFO nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Took 4.94 seconds to build instance.
Oct 14 09:44:23 compute-0 nova_compute[259627]: 2025-10-14 09:44:23.977 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "9a630cb9-9f1a-4ea2-9047-158683504522" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:44:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 305 active+clean; 55 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 418 KiB/s wr, 11 op/s
Oct 14 09:44:25 compute-0 nova_compute[259627]: 2025-10-14 09:44:25.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:25 compute-0 nova_compute[259627]: 2025-10-14 09:44:25.713 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Acquiring lock "9a630cb9-9f1a-4ea2-9047-158683504522" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:44:25 compute-0 nova_compute[259627]: 2025-10-14 09:44:25.714 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "9a630cb9-9f1a-4ea2-9047-158683504522" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:44:25 compute-0 nova_compute[259627]: 2025-10-14 09:44:25.714 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Acquiring lock "9a630cb9-9f1a-4ea2-9047-158683504522-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:44:25 compute-0 nova_compute[259627]: 2025-10-14 09:44:25.714 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "9a630cb9-9f1a-4ea2-9047-158683504522-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:44:25 compute-0 nova_compute[259627]: 2025-10-14 09:44:25.714 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "9a630cb9-9f1a-4ea2-9047-158683504522-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:44:25 compute-0 nova_compute[259627]: 2025-10-14 09:44:25.715 2 INFO nova.compute.manager [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Terminating instance
Oct 14 09:44:25 compute-0 nova_compute[259627]: 2025-10-14 09:44:25.716 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Acquiring lock "refresh_cache-9a630cb9-9f1a-4ea2-9047-158683504522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 09:44:25 compute-0 nova_compute[259627]: 2025-10-14 09:44:25.716 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Acquired lock "refresh_cache-9a630cb9-9f1a-4ea2-9047-158683504522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 09:44:25 compute-0 nova_compute[259627]: 2025-10-14 09:44:25.716 2 DEBUG nova.network.neutron [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 09:44:25 compute-0 nova_compute[259627]: 2025-10-14 09:44:25.900 2 DEBUG nova.network.neutron [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:44:25 compute-0 ceph-mon[74249]: pgmap v2738: 305 pgs: 305 active+clean; 55 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 418 KiB/s wr, 11 op/s
Oct 14 09:44:26 compute-0 nova_compute[259627]: 2025-10-14 09:44:26.285 2 DEBUG nova.network.neutron [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:44:26 compute-0 nova_compute[259627]: 2025-10-14 09:44:26.305 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Releasing lock "refresh_cache-9a630cb9-9f1a-4ea2-9047-158683504522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 09:44:26 compute-0 nova_compute[259627]: 2025-10-14 09:44:26.306 2 DEBUG nova.compute.manager [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 09:44:26 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Oct 14 09:44:26 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d0000009a.scope: Consumed 3.672s CPU time.
Oct 14 09:44:26 compute-0 systemd-machined[214636]: Machine qemu-187-instance-0000009a terminated.
Oct 14 09:44:26 compute-0 nova_compute[259627]: 2025-10-14 09:44:26.535 2 INFO nova.virt.libvirt.driver [-] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Instance destroyed successfully.
Oct 14 09:44:26 compute-0 nova_compute[259627]: 2025-10-14 09:44:26.536 2 DEBUG nova.objects.instance [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lazy-loading 'resources' on Instance uuid 9a630cb9-9f1a-4ea2-9047-158683504522 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 09:44:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:44:27 compute-0 nova_compute[259627]: 2025-10-14 09:44:27.007 2 INFO nova.virt.libvirt.driver [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Deleting instance files /var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522_del
Oct 14 09:44:27 compute-0 nova_compute[259627]: 2025-10-14 09:44:27.008 2 INFO nova.virt.libvirt.driver [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Deletion of /var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522_del complete
Oct 14 09:44:27 compute-0 nova_compute[259627]: 2025-10-14 09:44:27.083 2 INFO nova.compute.manager [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 14 09:44:27 compute-0 nova_compute[259627]: 2025-10-14 09:44:27.083 2 DEBUG oslo.service.loopingcall [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 09:44:27 compute-0 nova_compute[259627]: 2025-10-14 09:44:27.084 2 DEBUG nova.compute.manager [-] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 09:44:27 compute-0 nova_compute[259627]: 2025-10-14 09:44:27.084 2 DEBUG nova.network.neutron [-] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 09:44:27 compute-0 nova_compute[259627]: 2025-10-14 09:44:27.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:27 compute-0 nova_compute[259627]: 2025-10-14 09:44:27.513 2 DEBUG nova.network.neutron [-] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 09:44:27 compute-0 nova_compute[259627]: 2025-10-14 09:44:27.528 2 DEBUG nova.network.neutron [-] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 09:44:27 compute-0 nova_compute[259627]: 2025-10-14 09:44:27.542 2 INFO nova.compute.manager [-] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Took 0.46 seconds to deallocate network for instance.
Oct 14 09:44:27 compute-0 nova_compute[259627]: 2025-10-14 09:44:27.585 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:44:27 compute-0 nova_compute[259627]: 2025-10-14 09:44:27.586 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:44:27 compute-0 nova_compute[259627]: 2025-10-14 09:44:27.659 2 DEBUG oslo_concurrency.processutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:44:27 compute-0 ceph-mon[74249]: pgmap v2739: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:44:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:44:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425827472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:44:28 compute-0 nova_compute[259627]: 2025-10-14 09:44:28.208 2 DEBUG oslo_concurrency.processutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:44:28 compute-0 nova_compute[259627]: 2025-10-14 09:44:28.215 2 DEBUG nova.compute.provider_tree [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:44:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:44:28 compute-0 nova_compute[259627]: 2025-10-14 09:44:28.232 2 DEBUG nova.scheduler.client.report [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:44:28 compute-0 nova_compute[259627]: 2025-10-14 09:44:28.251 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:44:28 compute-0 nova_compute[259627]: 2025-10-14 09:44:28.307 2 INFO nova.scheduler.client.report [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Deleted allocations for instance 9a630cb9-9f1a-4ea2-9047-158683504522
Oct 14 09:44:28 compute-0 nova_compute[259627]: 2025-10-14 09:44:28.365 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "9a630cb9-9f1a-4ea2-9047-158683504522" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:44:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:44:28 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3425827472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:44:29 compute-0 ceph-mon[74249]: pgmap v2740: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 09:44:29 compute-0 nova_compute[259627]: 2025-10-14 09:44:29.991 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:44:29 compute-0 nova_compute[259627]: 2025-10-14 09:44:29.991 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 14 09:44:30 compute-0 nova_compute[259627]: 2025-10-14 09:44:30.017 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 14 09:44:30 compute-0 nova_compute[259627]: 2025-10-14 09:44:30.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 14 09:44:31 compute-0 nova_compute[259627]: 2025-10-14 09:44:31.161 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:44:31 compute-0 ceph-mon[74249]: pgmap v2741: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 14 09:44:32 compute-0 nova_compute[259627]: 2025-10-14 09:44:32.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:44:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:44:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 115 op/s
Oct 14 09:44:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:44:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:44:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:44:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:44:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:44:32
Oct 14 09:44:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:44:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:44:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'backups', 'volumes', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', '.mgr']
Oct 14 09:44:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:44:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:44:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:44:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:44:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:44:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:44:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:44:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:44:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:44:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:44:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:44:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:44:33 compute-0 nova_compute[259627]: 2025-10-14 09:44:33.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:44:34 compute-0 ceph-mon[74249]: pgmap v2742: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 115 op/s
Oct 14 09:44:34 compute-0 podman[423414]: 2025-10-14 09:44:34.692396057 +0000 UTC m=+0.086753590 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:44:34 compute-0 podman[423413]: 2025-10-14 09:44:34.69904098 +0000 UTC m=+0.095641348 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 09:44:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 115 op/s
Oct 14 09:44:35 compute-0 nova_compute[259627]: 2025-10-14 09:44:35.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:36 compute-0 ceph-mon[74249]: pgmap v2743: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 115 op/s
Oct 14 09:44:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 115 op/s
Oct 14 09:44:37 compute-0 nova_compute[259627]: 2025-10-14 09:44:37.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:38 compute-0 ceph-mon[74249]: pgmap v2744: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 115 op/s
Oct 14 09:44:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:44:38 compute-0 ovn_controller[152662]: 2025-10-14T09:44:38Z|01683|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 14 09:44:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 14 09:44:40 compute-0 ceph-mon[74249]: pgmap v2745: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 14 09:44:40 compute-0 nova_compute[259627]: 2025-10-14 09:44:40.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.2 KiB/s wr, 52 op/s
Oct 14 09:44:41 compute-0 nova_compute[259627]: 2025-10-14 09:44:41.532 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760435066.531276, 9a630cb9-9f1a-4ea2-9047-158683504522 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 09:44:41 compute-0 nova_compute[259627]: 2025-10-14 09:44:41.533 2 INFO nova.compute.manager [-] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] VM Stopped (Lifecycle Event)
Oct 14 09:44:41 compute-0 nova_compute[259627]: 2025-10-14 09:44:41.558 2 DEBUG nova.compute.manager [None req-7771e10f-f69c-4104-8800-0f0bb123037c - - - - - -] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 09:44:42 compute-0 ceph-mon[74249]: pgmap v2746: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.2 KiB/s wr, 52 op/s
Oct 14 09:44:42 compute-0 nova_compute[259627]: 2025-10-14 09:44:42.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 14 09:44:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:44:43 compute-0 sudo[423452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:44:43 compute-0 sudo[423452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:44:43 compute-0 sudo[423452]: pam_unix(sudo:session): session closed for user root
Oct 14 09:44:43 compute-0 sudo[423477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:44:43 compute-0 sudo[423477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:44:43 compute-0 sudo[423477]: pam_unix(sudo:session): session closed for user root
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:44:43 compute-0 sudo[423502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:44:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:44:43 compute-0 sudo[423502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:44:43 compute-0 sudo[423502]: pam_unix(sudo:session): session closed for user root
Oct 14 09:44:43 compute-0 sudo[423527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:44:43 compute-0 sudo[423527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:44:44 compute-0 ceph-mon[74249]: pgmap v2747: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 14 09:44:44 compute-0 nova_compute[259627]: 2025-10-14 09:44:44.163 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:44:44 compute-0 sudo[423527]: pam_unix(sudo:session): session closed for user root
Oct 14 09:44:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:44:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:44:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:44:44 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:44:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:44:44 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:44:44 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 3d6f2b1f-69f2-462c-a652-25168b7139ea does not exist
Oct 14 09:44:44 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 0613f838-1b00-47cf-ba37-b0fed131d850 does not exist
Oct 14 09:44:44 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev e1417b61-9154-4582-9729-7d2bc1707c82 does not exist
Oct 14 09:44:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:44:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:44:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:44:44 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:44:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:44:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:44:44 compute-0 sudo[423583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:44:44 compute-0 sudo[423583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:44:44 compute-0 sudo[423583]: pam_unix(sudo:session): session closed for user root
Oct 14 09:44:44 compute-0 sudo[423608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:44:44 compute-0 sudo[423608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:44:44 compute-0 sudo[423608]: pam_unix(sudo:session): session closed for user root
Oct 14 09:44:44 compute-0 sudo[423633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:44:44 compute-0 sudo[423633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:44:44 compute-0 sudo[423633]: pam_unix(sudo:session): session closed for user root
Oct 14 09:44:44 compute-0 sudo[423658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:44:44 compute-0 sudo[423658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:44:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 14 09:44:45 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:44:45 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:44:45 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:44:45 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:44:45 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:44:45 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:44:45 compute-0 podman[423724]: 2025-10-14 09:44:45.072254657 +0000 UTC m=+0.048332097 container create 00c98af474c7f93884b46a9d7b795d00c31c476d0befa3d608915d3cb1a00be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 09:44:45 compute-0 systemd[1]: Started libpod-conmon-00c98af474c7f93884b46a9d7b795d00c31c476d0befa3d608915d3cb1a00be0.scope.
Oct 14 09:44:45 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:44:45 compute-0 podman[423724]: 2025-10-14 09:44:45.052610115 +0000 UTC m=+0.028687575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:44:45 compute-0 podman[423724]: 2025-10-14 09:44:45.166705284 +0000 UTC m=+0.142782744 container init 00c98af474c7f93884b46a9d7b795d00c31c476d0befa3d608915d3cb1a00be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 09:44:45 compute-0 podman[423724]: 2025-10-14 09:44:45.174621328 +0000 UTC m=+0.150698798 container start 00c98af474c7f93884b46a9d7b795d00c31c476d0befa3d608915d3cb1a00be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 14 09:44:45 compute-0 podman[423724]: 2025-10-14 09:44:45.178675637 +0000 UTC m=+0.154753087 container attach 00c98af474c7f93884b46a9d7b795d00c31c476d0befa3d608915d3cb1a00be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:44:45 compute-0 amazing_lederberg[423740]: 167 167
Oct 14 09:44:45 compute-0 systemd[1]: libpod-00c98af474c7f93884b46a9d7b795d00c31c476d0befa3d608915d3cb1a00be0.scope: Deactivated successfully.
Oct 14 09:44:45 compute-0 podman[423724]: 2025-10-14 09:44:45.181807284 +0000 UTC m=+0.157884724 container died 00c98af474c7f93884b46a9d7b795d00c31c476d0befa3d608915d3cb1a00be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 09:44:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b08a7328531acb1b75c94ff7e2a9979088c8ae12579aa1c79aafc57113d5b79f-merged.mount: Deactivated successfully.
Oct 14 09:44:45 compute-0 podman[423724]: 2025-10-14 09:44:45.233837951 +0000 UTC m=+0.209915411 container remove 00c98af474c7f93884b46a9d7b795d00c31c476d0befa3d608915d3cb1a00be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:44:45 compute-0 systemd[1]: libpod-conmon-00c98af474c7f93884b46a9d7b795d00c31c476d0befa3d608915d3cb1a00be0.scope: Deactivated successfully.
Oct 14 09:44:45 compute-0 nova_compute[259627]: 2025-10-14 09:44:45.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:45 compute-0 podman[423762]: 2025-10-14 09:44:45.474926867 +0000 UTC m=+0.073144926 container create 8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_torvalds, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:44:45 compute-0 systemd[1]: Started libpod-conmon-8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee.scope.
Oct 14 09:44:45 compute-0 podman[423762]: 2025-10-14 09:44:45.446672334 +0000 UTC m=+0.044890413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:44:45 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fe7ba637f770135f001be1bfc838701ebd0dd3818b5bfb9c515e573454e125/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fe7ba637f770135f001be1bfc838701ebd0dd3818b5bfb9c515e573454e125/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fe7ba637f770135f001be1bfc838701ebd0dd3818b5bfb9c515e573454e125/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fe7ba637f770135f001be1bfc838701ebd0dd3818b5bfb9c515e573454e125/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fe7ba637f770135f001be1bfc838701ebd0dd3818b5bfb9c515e573454e125/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:44:45 compute-0 podman[423762]: 2025-10-14 09:44:45.579543645 +0000 UTC m=+0.177761704 container init 8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_torvalds, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:44:45 compute-0 podman[423762]: 2025-10-14 09:44:45.595679931 +0000 UTC m=+0.193897980 container start 8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_torvalds, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:44:45 compute-0 podman[423762]: 2025-10-14 09:44:45.599700609 +0000 UTC m=+0.197918658 container attach 8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 09:44:46 compute-0 ceph-mon[74249]: pgmap v2748: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 14 09:44:46 compute-0 boring_torvalds[423779]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:44:46 compute-0 boring_torvalds[423779]: --> relative data size: 1.0
Oct 14 09:44:46 compute-0 boring_torvalds[423779]: --> All data devices are unavailable
Oct 14 09:44:46 compute-0 systemd[1]: libpod-8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee.scope: Deactivated successfully.
Oct 14 09:44:46 compute-0 systemd[1]: libpod-8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee.scope: Consumed 1.089s CPU time.
Oct 14 09:44:46 compute-0 podman[423762]: 2025-10-14 09:44:46.728381167 +0000 UTC m=+1.326599226 container died 8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_torvalds, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:44:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-10fe7ba637f770135f001be1bfc838701ebd0dd3818b5bfb9c515e573454e125-merged.mount: Deactivated successfully.
Oct 14 09:44:46 compute-0 podman[423762]: 2025-10-14 09:44:46.797641907 +0000 UTC m=+1.395859946 container remove 8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 09:44:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 09:44:46 compute-0 systemd[1]: libpod-conmon-8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee.scope: Deactivated successfully.
Oct 14 09:44:46 compute-0 sudo[423658]: pam_unix(sudo:session): session closed for user root
Oct 14 09:44:46 compute-0 sudo[423819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:44:46 compute-0 sudo[423819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:44:46 compute-0 sudo[423819]: pam_unix(sudo:session): session closed for user root
Oct 14 09:44:46 compute-0 sudo[423844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:44:46 compute-0 sudo[423844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:44:46 compute-0 sudo[423844]: pam_unix(sudo:session): session closed for user root
Oct 14 09:44:47 compute-0 sudo[423869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:44:47 compute-0 sudo[423869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:44:47 compute-0 sudo[423869]: pam_unix(sudo:session): session closed for user root
Oct 14 09:44:47 compute-0 sudo[423894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:44:47 compute-0 sudo[423894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:44:47 compute-0 podman[423960]: 2025-10-14 09:44:47.417267513 +0000 UTC m=+0.055852602 container create 05edce7672b885c04df3742fa5d95afe6e93f80cc1c513a9cc3d6cd63127024f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 09:44:47 compute-0 systemd[1]: Started libpod-conmon-05edce7672b885c04df3742fa5d95afe6e93f80cc1c513a9cc3d6cd63127024f.scope.
Oct 14 09:44:47 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:44:47 compute-0 podman[423960]: 2025-10-14 09:44:47.386922928 +0000 UTC m=+0.025508067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:44:47 compute-0 podman[423960]: 2025-10-14 09:44:47.493724459 +0000 UTC m=+0.132309528 container init 05edce7672b885c04df3742fa5d95afe6e93f80cc1c513a9cc3d6cd63127024f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hertz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:44:47 compute-0 podman[423960]: 2025-10-14 09:44:47.499241754 +0000 UTC m=+0.137826833 container start 05edce7672b885c04df3742fa5d95afe6e93f80cc1c513a9cc3d6cd63127024f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hertz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 09:44:47 compute-0 quirky_hertz[423977]: 167 167
Oct 14 09:44:47 compute-0 podman[423960]: 2025-10-14 09:44:47.503249073 +0000 UTC m=+0.141834142 container attach 05edce7672b885c04df3742fa5d95afe6e93f80cc1c513a9cc3d6cd63127024f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hertz, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:44:47 compute-0 systemd[1]: libpod-05edce7672b885c04df3742fa5d95afe6e93f80cc1c513a9cc3d6cd63127024f.scope: Deactivated successfully.
Oct 14 09:44:47 compute-0 podman[423960]: 2025-10-14 09:44:47.504155375 +0000 UTC m=+0.142740474 container died 05edce7672b885c04df3742fa5d95afe6e93f80cc1c513a9cc3d6cd63127024f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hertz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:44:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe6f044e97aea2da9808f587d685754d0acf878869106ee614298344c6ad7cec-merged.mount: Deactivated successfully.
Oct 14 09:44:47 compute-0 podman[423960]: 2025-10-14 09:44:47.547635842 +0000 UTC m=+0.186220911 container remove 05edce7672b885c04df3742fa5d95afe6e93f80cc1c513a9cc3d6cd63127024f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:44:47 compute-0 systemd[1]: libpod-conmon-05edce7672b885c04df3742fa5d95afe6e93f80cc1c513a9cc3d6cd63127024f.scope: Deactivated successfully.
Oct 14 09:44:47 compute-0 nova_compute[259627]: 2025-10-14 09:44:47.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:47 compute-0 podman[424001]: 2025-10-14 09:44:47.725706562 +0000 UTC m=+0.055269807 container create e69382b2e0022927bbf2ed6ec23a968db804c9bef16b80bc81000e61f492aaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 09:44:47 compute-0 systemd[1]: Started libpod-conmon-e69382b2e0022927bbf2ed6ec23a968db804c9bef16b80bc81000e61f492aaf2.scope.
Oct 14 09:44:47 compute-0 podman[424001]: 2025-10-14 09:44:47.697992562 +0000 UTC m=+0.027555867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:44:47 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc457e595c7df51480b09de12f0409974fdc06c1f592954ee47449b447fdcd77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc457e595c7df51480b09de12f0409974fdc06c1f592954ee47449b447fdcd77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc457e595c7df51480b09de12f0409974fdc06c1f592954ee47449b447fdcd77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc457e595c7df51480b09de12f0409974fdc06c1f592954ee47449b447fdcd77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:44:47 compute-0 podman[424001]: 2025-10-14 09:44:47.82587634 +0000 UTC m=+0.155439635 container init e69382b2e0022927bbf2ed6ec23a968db804c9bef16b80bc81000e61f492aaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 09:44:47 compute-0 podman[424001]: 2025-10-14 09:44:47.833904307 +0000 UTC m=+0.163467512 container start e69382b2e0022927bbf2ed6ec23a968db804c9bef16b80bc81000e61f492aaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 09:44:47 compute-0 podman[424001]: 2025-10-14 09:44:47.836759697 +0000 UTC m=+0.166322912 container attach e69382b2e0022927bbf2ed6ec23a968db804c9bef16b80bc81000e61f492aaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:44:48 compute-0 ceph-mon[74249]: pgmap v2749: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 09:44:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]: {
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:     "0": [
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:         {
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "devices": [
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "/dev/loop3"
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             ],
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "lv_name": "ceph_lv0",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "lv_size": "21470642176",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "name": "ceph_lv0",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "tags": {
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.cluster_name": "ceph",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.crush_device_class": "",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.encrypted": "0",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.osd_id": "0",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.type": "block",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.vdo": "0"
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             },
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "type": "block",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "vg_name": "ceph_vg0"
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:         }
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:     ],
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:     "1": [
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:         {
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "devices": [
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "/dev/loop4"
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             ],
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "lv_name": "ceph_lv1",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "lv_size": "21470642176",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "name": "ceph_lv1",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "tags": {
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.cluster_name": "ceph",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.crush_device_class": "",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.encrypted": "0",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.osd_id": "1",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.type": "block",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.vdo": "0"
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             },
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "type": "block",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "vg_name": "ceph_vg1"
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:         }
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:     ],
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:     "2": [
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:         {
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "devices": [
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "/dev/loop5"
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             ],
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "lv_name": "ceph_lv2",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "lv_size": "21470642176",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "name": "ceph_lv2",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "tags": {
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.cluster_name": "ceph",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.crush_device_class": "",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.encrypted": "0",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.osd_id": "2",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.type": "block",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:                 "ceph.vdo": "0"
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             },
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "type": "block",
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:             "vg_name": "ceph_vg2"
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:         }
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]:     ]
Oct 14 09:44:48 compute-0 peaceful_haibt[424017]: }
Oct 14 09:44:48 compute-0 systemd[1]: libpod-e69382b2e0022927bbf2ed6ec23a968db804c9bef16b80bc81000e61f492aaf2.scope: Deactivated successfully.
Oct 14 09:44:48 compute-0 podman[424001]: 2025-10-14 09:44:48.692070586 +0000 UTC m=+1.021633791 container died e69382b2e0022927bbf2ed6ec23a968db804c9bef16b80bc81000e61f492aaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Oct 14 09:44:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc457e595c7df51480b09de12f0409974fdc06c1f592954ee47449b447fdcd77-merged.mount: Deactivated successfully.
Oct 14 09:44:48 compute-0 podman[424001]: 2025-10-14 09:44:48.761601592 +0000 UTC m=+1.091164807 container remove e69382b2e0022927bbf2ed6ec23a968db804c9bef16b80bc81000e61f492aaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 09:44:48 compute-0 systemd[1]: libpod-conmon-e69382b2e0022927bbf2ed6ec23a968db804c9bef16b80bc81000e61f492aaf2.scope: Deactivated successfully.
Oct 14 09:44:48 compute-0 sudo[423894]: pam_unix(sudo:session): session closed for user root
Oct 14 09:44:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 09:44:48 compute-0 sudo[424053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:44:48 compute-0 sudo[424053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:44:48 compute-0 sudo[424053]: pam_unix(sudo:session): session closed for user root
Oct 14 09:44:48 compute-0 podman[424041]: 2025-10-14 09:44:48.896429861 +0000 UTC m=+0.094818438 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:44:48 compute-0 podman[424040]: 2025-10-14 09:44:48.897877996 +0000 UTC m=+0.092744397 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 14 09:44:48 compute-0 sudo[424110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:44:48 compute-0 sudo[424110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:44:48 compute-0 sudo[424110]: pam_unix(sudo:session): session closed for user root
Oct 14 09:44:48 compute-0 sudo[424135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:44:48 compute-0 sudo[424135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:44:48 compute-0 sudo[424135]: pam_unix(sudo:session): session closed for user root
Oct 14 09:44:49 compute-0 sudo[424160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:44:49 compute-0 sudo[424160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:44:49 compute-0 podman[424225]: 2025-10-14 09:44:49.472400645 +0000 UTC m=+0.048519592 container create eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:44:49 compute-0 systemd[1]: Started libpod-conmon-eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b.scope.
Oct 14 09:44:49 compute-0 podman[424225]: 2025-10-14 09:44:49.454413204 +0000 UTC m=+0.030532191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:44:49 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:44:49 compute-0 podman[424225]: 2025-10-14 09:44:49.586945356 +0000 UTC m=+0.163064323 container init eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:44:49 compute-0 podman[424225]: 2025-10-14 09:44:49.599520125 +0000 UTC m=+0.175639062 container start eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 09:44:49 compute-0 podman[424225]: 2025-10-14 09:44:49.604833955 +0000 UTC m=+0.180952942 container attach eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:44:49 compute-0 pensive_clarke[424241]: 167 167
Oct 14 09:44:49 compute-0 systemd[1]: libpod-eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b.scope: Deactivated successfully.
Oct 14 09:44:49 compute-0 conmon[424241]: conmon eb0c59a05bb32c880ebe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b.scope/container/memory.events
Oct 14 09:44:49 compute-0 podman[424225]: 2025-10-14 09:44:49.609631533 +0000 UTC m=+0.185750500 container died eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 09:44:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a07eef93812081a35ff66e2053bf846a069d0da409e052dab8d72dad56eb743-merged.mount: Deactivated successfully.
Oct 14 09:44:49 compute-0 podman[424225]: 2025-10-14 09:44:49.655419276 +0000 UTC m=+0.231538233 container remove eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 09:44:49 compute-0 systemd[1]: libpod-conmon-eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b.scope: Deactivated successfully.
Oct 14 09:44:49 compute-0 podman[424264]: 2025-10-14 09:44:49.852866812 +0000 UTC m=+0.047825545 container create a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 09:44:49 compute-0 systemd[1]: Started libpod-conmon-a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774.scope.
Oct 14 09:44:49 compute-0 podman[424264]: 2025-10-14 09:44:49.832141663 +0000 UTC m=+0.027100436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:44:49 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3570e7c4b08e243534dc6de518df0013ef382110e7d821c9c15da918c60f64a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3570e7c4b08e243534dc6de518df0013ef382110e7d821c9c15da918c60f64a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3570e7c4b08e243534dc6de518df0013ef382110e7d821c9c15da918c60f64a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3570e7c4b08e243534dc6de518df0013ef382110e7d821c9c15da918c60f64a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:44:49 compute-0 podman[424264]: 2025-10-14 09:44:49.945624558 +0000 UTC m=+0.140583311 container init a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gagarin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct 14 09:44:49 compute-0 podman[424264]: 2025-10-14 09:44:49.958116485 +0000 UTC m=+0.153075218 container start a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gagarin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 09:44:49 compute-0 podman[424264]: 2025-10-14 09:44:49.961807825 +0000 UTC m=+0.156766588 container attach a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 09:44:50 compute-0 ceph-mon[74249]: pgmap v2750: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 09:44:50 compute-0 nova_compute[259627]: 2025-10-14 09:44:50.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 09:44:50 compute-0 nova_compute[259627]: 2025-10-14 09:44:50.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:44:51 compute-0 practical_gagarin[424281]: {
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:         "osd_id": 2,
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:         "type": "bluestore"
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:     },
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:         "osd_id": 1,
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:         "type": "bluestore"
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:     },
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:         "osd_id": 0,
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:         "type": "bluestore"
Oct 14 09:44:51 compute-0 practical_gagarin[424281]:     }
Oct 14 09:44:51 compute-0 practical_gagarin[424281]: }
Oct 14 09:44:51 compute-0 systemd[1]: libpod-a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774.scope: Deactivated successfully.
Oct 14 09:44:51 compute-0 systemd[1]: libpod-a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774.scope: Consumed 1.114s CPU time.
Oct 14 09:44:51 compute-0 podman[424264]: 2025-10-14 09:44:51.068597106 +0000 UTC m=+1.263555899 container died a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gagarin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:44:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3570e7c4b08e243534dc6de518df0013ef382110e7d821c9c15da918c60f64a7-merged.mount: Deactivated successfully.
Oct 14 09:44:51 compute-0 podman[424264]: 2025-10-14 09:44:51.13271701 +0000 UTC m=+1.327675743 container remove a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 09:44:51 compute-0 systemd[1]: libpod-conmon-a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774.scope: Deactivated successfully.
Oct 14 09:44:51 compute-0 sudo[424160]: pam_unix(sudo:session): session closed for user root
Oct 14 09:44:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:44:51 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:44:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:44:51 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:44:51 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev c14abd6f-1c3c-40a9-99e8-14ffa4fd9e49 does not exist
Oct 14 09:44:51 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 213a146c-afff-46bb-86ed-f53884eb468a does not exist
Oct 14 09:44:51 compute-0 sudo[424328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:44:51 compute-0 sudo[424328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:44:51 compute-0 sudo[424328]: pam_unix(sudo:session): session closed for user root
Oct 14 09:44:51 compute-0 sudo[424353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:44:51 compute-0 sudo[424353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:44:51 compute-0 sudo[424353]: pam_unix(sudo:session): session closed for user root
Oct 14 09:44:52 compute-0 ceph-mon[74249]: pgmap v2751: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 09:44:52 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:44:52 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:44:52 compute-0 nova_compute[259627]: 2025-10-14 09:44:52.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Oct 14 09:44:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:44:54 compute-0 ceph-mon[74249]: pgmap v2752: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Oct 14 09:44:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Oct 14 09:44:55 compute-0 ceph-mon[74249]: pgmap v2753: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Oct 14 09:44:55 compute-0 nova_compute[259627]: 2025-10-14 09:44:55.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:44:56.808 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:44:56 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:44:56.810 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:44:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Oct 14 09:44:56 compute-0 nova_compute[259627]: 2025-10-14 09:44:56.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:57 compute-0 nova_compute[259627]: 2025-10-14 09:44:57.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:44:57 compute-0 ceph-mon[74249]: pgmap v2754: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Oct 14 09:44:57 compute-0 nova_compute[259627]: 2025-10-14 09:44:57.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:44:57 compute-0 nova_compute[259627]: 2025-10-14 09:44:57.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:44:58 compute-0 nova_compute[259627]: 2025-10-14 09:44:58.006 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:44:58 compute-0 nova_compute[259627]: 2025-10-14 09:44:58.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:44:58 compute-0 nova_compute[259627]: 2025-10-14 09:44:58.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:44:58 compute-0 nova_compute[259627]: 2025-10-14 09:44:58.007 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:44:58 compute-0 nova_compute[259627]: 2025-10-14 09:44:58.008 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:44:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:44:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:44:58 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3963880948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:44:58 compute-0 nova_compute[259627]: 2025-10-14 09:44:58.527 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:44:58 compute-0 nova_compute[259627]: 2025-10-14 09:44:58.802 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:44:58 compute-0 nova_compute[259627]: 2025-10-14 09:44:58.803 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3582MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:44:58 compute-0 nova_compute[259627]: 2025-10-14 09:44:58.804 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:44:58 compute-0 nova_compute[259627]: 2025-10-14 09:44:58.804 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:44:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:44:58 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3963880948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:44:58 compute-0 nova_compute[259627]: 2025-10-14 09:44:58.891 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:44:58 compute-0 nova_compute[259627]: 2025-10-14 09:44:58.891 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:44:58 compute-0 nova_compute[259627]: 2025-10-14 09:44:58.907 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 14 09:44:58 compute-0 nova_compute[259627]: 2025-10-14 09:44:58.935 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 14 09:44:58 compute-0 nova_compute[259627]: 2025-10-14 09:44:58.935 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 09:44:58 compute-0 nova_compute[259627]: 2025-10-14 09:44:58.956 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 14 09:44:58 compute-0 nova_compute[259627]: 2025-10-14 09:44:58.990 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 14 09:44:59 compute-0 nova_compute[259627]: 2025-10-14 09:44:59.015 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:44:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:44:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2340884269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:44:59 compute-0 nova_compute[259627]: 2025-10-14 09:44:59.479 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:44:59 compute-0 nova_compute[259627]: 2025-10-14 09:44:59.487 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:44:59 compute-0 nova_compute[259627]: 2025-10-14 09:44:59.508 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:44:59 compute-0 nova_compute[259627]: 2025-10-14 09:44:59.536 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:44:59 compute-0 nova_compute[259627]: 2025-10-14 09:44:59.536 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:44:59 compute-0 ceph-mon[74249]: pgmap v2755: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:44:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2340884269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:45:00 compute-0 nova_compute[259627]: 2025-10-14 09:45:00.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:00 compute-0 nova_compute[259627]: 2025-10-14 09:45:00.537 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:45:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:01 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:45:01.813 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:45:01 compute-0 ceph-mon[74249]: pgmap v2756: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:02 compute-0 nova_compute[259627]: 2025-10-14 09:45:02.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:45:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:45:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:45:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:45:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:45:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:45:02 compute-0 nova_compute[259627]: 2025-10-14 09:45:02.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:45:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:45:03 compute-0 ceph-mon[74249]: pgmap v2757: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:03 compute-0 nova_compute[259627]: 2025-10-14 09:45:03.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:45:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:04 compute-0 nova_compute[259627]: 2025-10-14 09:45:04.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:45:04 compute-0 nova_compute[259627]: 2025-10-14 09:45:04.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:45:05 compute-0 nova_compute[259627]: 2025-10-14 09:45:05.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:05 compute-0 podman[424424]: 2025-10-14 09:45:05.689255804 +0000 UTC m=+0.093119806 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:45:05 compute-0 podman[424425]: 2025-10-14 09:45:05.694709817 +0000 UTC m=+0.098151029 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=iscsid, container_name=iscsid)
Oct 14 09:45:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:45:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2485716450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:45:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:45:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2485716450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:45:05 compute-0 ceph-mon[74249]: pgmap v2758: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2485716450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:45:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2485716450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:45:05 compute-0 nova_compute[259627]: 2025-10-14 09:45:05.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:45:05 compute-0 nova_compute[259627]: 2025-10-14 09:45:05.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:45:05 compute-0 nova_compute[259627]: 2025-10-14 09:45:05.980 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:45:06 compute-0 nova_compute[259627]: 2025-10-14 09:45:06.009 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:45:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:45:07.061 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:45:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:45:07.061 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:45:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:45:07.061 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:45:07 compute-0 nova_compute[259627]: 2025-10-14 09:45:07.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:07 compute-0 ceph-mon[74249]: pgmap v2759: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:07 compute-0 nova_compute[259627]: 2025-10-14 09:45:07.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:45:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:45:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:09 compute-0 ceph-mon[74249]: pgmap v2760: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:10 compute-0 nova_compute[259627]: 2025-10-14 09:45:10.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:11 compute-0 ceph-mon[74249]: pgmap v2761: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:12 compute-0 nova_compute[259627]: 2025-10-14 09:45:12.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:45:13 compute-0 ceph-mon[74249]: pgmap v2762: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:15 compute-0 nova_compute[259627]: 2025-10-14 09:45:15.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:15 compute-0 ceph-mon[74249]: pgmap v2763: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:17 compute-0 nova_compute[259627]: 2025-10-14 09:45:17.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:17 compute-0 ceph-mon[74249]: pgmap v2764: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:45:18 compute-0 ovn_controller[152662]: 2025-10-14T09:45:18Z|01684|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 14 09:45:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:19 compute-0 podman[424466]: 2025-10-14 09:45:19.685835427 +0000 UTC m=+0.084740721 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 14 09:45:19 compute-0 podman[424465]: 2025-10-14 09:45:19.737474264 +0000 UTC m=+0.143299468 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:45:19 compute-0 ceph-mon[74249]: pgmap v2765: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:20 compute-0 nova_compute[259627]: 2025-10-14 09:45:20.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:21 compute-0 sshd-session[424510]: Connection closed by 174.45.40.158 port 50284
Oct 14 09:45:21 compute-0 sshd-session[424511]: Invalid user a from 174.45.40.158 port 50286
Oct 14 09:45:21 compute-0 sshd-session[424511]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:45:21 compute-0 sshd-session[424511]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=174.45.40.158
Oct 14 09:45:21 compute-0 ceph-mon[74249]: pgmap v2766: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:22 compute-0 nova_compute[259627]: 2025-10-14 09:45:22.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:45:23 compute-0 sshd-session[424511]: Failed password for invalid user a from 174.45.40.158 port 50286 ssh2
Oct 14 09:45:23 compute-0 ceph-mon[74249]: pgmap v2767: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:24 compute-0 sshd-session[424511]: Connection closed by invalid user a 174.45.40.158 port 50286 [preauth]
Oct 14 09:45:25 compute-0 sshd-session[424513]: Invalid user nil from 174.45.40.158 port 50296
Oct 14 09:45:25 compute-0 sshd-session[424513]: Failed none for invalid user nil from 174.45.40.158 port 50296 ssh2
Oct 14 09:45:25 compute-0 sshd-session[424513]: Connection closed by invalid user nil 174.45.40.158 port 50296 [preauth]
Oct 14 09:45:25 compute-0 nova_compute[259627]: 2025-10-14 09:45:25.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:25 compute-0 sshd-session[424515]: Invalid user admin from 174.45.40.158 port 50312
Oct 14 09:45:25 compute-0 sshd-session[424515]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:45:25 compute-0 sshd-session[424515]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=174.45.40.158
Oct 14 09:45:26 compute-0 ceph-mon[74249]: pgmap v2768: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Oct 14 09:45:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Oct 14 09:45:27 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Oct 14 09:45:27 compute-0 nova_compute[259627]: 2025-10-14 09:45:27.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:28 compute-0 ceph-mon[74249]: pgmap v2769: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:28 compute-0 ceph-mon[74249]: osdmap e293: 3 total, 3 up, 3 in
Oct 14 09:45:28 compute-0 sshd-session[424515]: Failed password for invalid user admin from 174.45.40.158 port 50312 ssh2
Oct 14 09:45:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:45:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:29 compute-0 sshd-session[424515]: Connection closed by invalid user admin 174.45.40.158 port 50312 [preauth]
Oct 14 09:45:30 compute-0 ceph-mon[74249]: pgmap v2771: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:30 compute-0 unix_chkpwd[424519]: password check failed for user (root)
Oct 14 09:45:30 compute-0 sshd-session[424517]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=174.45.40.158  user=root
Oct 14 09:45:30 compute-0 nova_compute[259627]: 2025-10-14 09:45:30.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 14 09:45:32 compute-0 ceph-mon[74249]: pgmap v2772: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 14 09:45:32 compute-0 sshd-session[424517]: Failed password for root from 174.45.40.158 port 45800 ssh2
Oct 14 09:45:32 compute-0 nova_compute[259627]: 2025-10-14 09:45:32.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:45:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:45:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:45:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:45:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 14 09:45:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:45:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:45:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:45:32
Oct 14 09:45:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:45:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:45:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['backups', 'volumes', 'default.rgw.control', 'images', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms']
Oct 14 09:45:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:45:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:45:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Oct 14 09:45:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Oct 14 09:45:33 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Oct 14 09:45:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:45:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:45:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:45:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:45:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:45:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:45:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:45:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:45:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:45:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:45:33 compute-0 sshd-session[424517]: Connection closed by authenticating user root 174.45.40.158 port 45800 [preauth]
Oct 14 09:45:34 compute-0 ceph-mon[74249]: pgmap v2773: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 14 09:45:34 compute-0 ceph-mon[74249]: osdmap e294: 3 total, 3 up, 3 in
Oct 14 09:45:34 compute-0 sshd-session[424520]: Invalid user orangepi from 174.45.40.158 port 45814
Oct 14 09:45:34 compute-0 sshd-session[424520]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:45:34 compute-0 sshd-session[424520]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=174.45.40.158
Oct 14 09:45:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 14 09:45:35 compute-0 nova_compute[259627]: 2025-10-14 09:45:35.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:36 compute-0 ceph-mon[74249]: pgmap v2775: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 14 09:45:36 compute-0 sshd-session[424520]: Failed password for invalid user orangepi from 174.45.40.158 port 45814 ssh2
Oct 14 09:45:36 compute-0 podman[424522]: 2025-10-14 09:45:36.71221481 +0000 UTC m=+0.098520919 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 09:45:36 compute-0 podman[424523]: 2025-10-14 09:45:36.721625611 +0000 UTC m=+0.103091271 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 09:45:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 14 09:45:37 compute-0 nova_compute[259627]: 2025-10-14 09:45:37.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:38 compute-0 ceph-mon[74249]: pgmap v2776: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 14 09:45:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:45:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 14 09:45:38 compute-0 sshd-session[424520]: Connection closed by invalid user orangepi 174.45.40.158 port 45814 [preauth]
Oct 14 09:45:39 compute-0 sshd-session[424561]: Invalid user support from 174.45.40.158 port 33204
Oct 14 09:45:39 compute-0 sshd-session[424561]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:45:39 compute-0 sshd-session[424561]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=174.45.40.158
Oct 14 09:45:40 compute-0 ceph-mon[74249]: pgmap v2777: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 14 09:45:40 compute-0 nova_compute[259627]: 2025-10-14 09:45:40.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:40 compute-0 sshd-session[424561]: Failed password for invalid user support from 174.45.40.158 port 33204 ssh2
Oct 14 09:45:41 compute-0 sshd-session[424561]: Connection closed by invalid user support 174.45.40.158 port 33204 [preauth]
Oct 14 09:45:41 compute-0 sshd-session[424563]: Invalid user ubnt from 174.45.40.158 port 33220
Oct 14 09:45:41 compute-0 sshd-session[424563]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:45:41 compute-0 sshd-session[424563]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=174.45.40.158
Oct 14 09:45:42 compute-0 ceph-mon[74249]: pgmap v2778: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:42 compute-0 nova_compute[259627]: 2025-10-14 09:45:42.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:45:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:45:44 compute-0 ceph-mon[74249]: pgmap v2779: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:44 compute-0 sshd-session[424563]: Failed password for invalid user ubnt from 174.45.40.158 port 33220 ssh2
Oct 14 09:45:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:45 compute-0 sshd-session[424563]: Connection closed by invalid user ubnt 174.45.40.158 port 33220 [preauth]
Oct 14 09:45:45 compute-0 sshd-session[424565]: Invalid user user from 174.45.40.158 port 33230
Oct 14 09:45:45 compute-0 sshd-session[424565]: pam_unix(sshd:auth): check pass; user unknown
Oct 14 09:45:45 compute-0 sshd-session[424565]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=174.45.40.158
Oct 14 09:45:45 compute-0 nova_compute[259627]: 2025-10-14 09:45:45.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:46 compute-0 ceph-mon[74249]: pgmap v2780: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:47 compute-0 sshd-session[424565]: Failed password for invalid user user from 174.45.40.158 port 33230 ssh2
Oct 14 09:45:47 compute-0 nova_compute[259627]: 2025-10-14 09:45:47.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:48 compute-0 ceph-mon[74249]: pgmap v2781: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:45:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:49 compute-0 sshd-session[424565]: Connection closed by invalid user user 174.45.40.158 port 33230 [preauth]
Oct 14 09:45:49 compute-0 sshd-session[424567]: Connection closed by authenticating user root 174.45.40.158 port 59368 [preauth]
Oct 14 09:45:50 compute-0 sshd[189162]: drop connection #0 from [174.45.40.158]:59378 on [38.102.83.202]:22 penalty: failed authentication
Oct 14 09:45:50 compute-0 ceph-mon[74249]: pgmap v2782: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:50 compute-0 podman[424570]: 2025-10-14 09:45:50.673692393 +0000 UTC m=+0.077912904 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:45:50 compute-0 podman[424569]: 2025-10-14 09:45:50.684835516 +0000 UTC m=+0.102028305 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 09:45:50 compute-0 nova_compute[259627]: 2025-10-14 09:45:50.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:50 compute-0 nova_compute[259627]: 2025-10-14 09:45:50.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:45:51 compute-0 sudo[424614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:45:51 compute-0 sudo[424614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:51 compute-0 sudo[424614]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:51 compute-0 sudo[424639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:45:51 compute-0 sudo[424639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:51 compute-0 sudo[424639]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:51 compute-0 sudo[424664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:45:51 compute-0 sudo[424664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:51 compute-0 sudo[424664]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:51 compute-0 sudo[424689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 14 09:45:51 compute-0 sudo[424689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:51 compute-0 sudo[424689]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:45:51 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:45:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:45:51 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:45:52 compute-0 sudo[424734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:45:52 compute-0 sudo[424734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:52 compute-0 sudo[424734]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:52 compute-0 sudo[424759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:45:52 compute-0 sudo[424759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:52 compute-0 sudo[424759]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:52 compute-0 sudo[424784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:45:52 compute-0 sudo[424784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:52 compute-0 sudo[424784]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:52 compute-0 ceph-mon[74249]: pgmap v2783: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:52 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:45:52 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:45:52 compute-0 sudo[424809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:45:52 compute-0 sudo[424809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:52 compute-0 nova_compute[259627]: 2025-10-14 09:45:52.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:52 compute-0 sudo[424809]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 14 09:45:52 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 09:45:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:45:52 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:45:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:45:52 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:45:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:45:52 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:45:52 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 95f56ebb-2817-432d-b8b7-c6c47a0dbe67 does not exist
Oct 14 09:45:52 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev b2a9aa0c-eab5-4050-8f9f-c40500e77fea does not exist
Oct 14 09:45:52 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 6a86aced-22a1-4749-bbcf-5ec54ebad9df does not exist
Oct 14 09:45:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:45:52 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:45:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:45:52 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:45:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:45:52 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:45:52 compute-0 sudo[424866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:45:52 compute-0 sudo[424866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:52 compute-0 sudo[424866]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:52 compute-0 sudo[424891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:45:52 compute-0 sudo[424891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:52 compute-0 sudo[424891]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:52 compute-0 sudo[424916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:45:52 compute-0 sudo[424916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:52 compute-0 sudo[424916]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:52 compute-0 sudo[424941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:45:52 compute-0 sudo[424941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 09:45:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:45:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:45:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:45:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:45:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:45:53 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:45:53 compute-0 podman[425007]: 2025-10-14 09:45:53.236332319 +0000 UTC m=+0.038719151 container create e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 09:45:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:45:53 compute-0 systemd[1]: Started libpod-conmon-e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a.scope.
Oct 14 09:45:53 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:45:53 compute-0 podman[425007]: 2025-10-14 09:45:53.218690976 +0000 UTC m=+0.021077839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:45:53 compute-0 podman[425007]: 2025-10-14 09:45:53.314728363 +0000 UTC m=+0.117115265 container init e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:45:53 compute-0 podman[425007]: 2025-10-14 09:45:53.324928134 +0000 UTC m=+0.127314976 container start e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 14 09:45:53 compute-0 podman[425007]: 2025-10-14 09:45:53.328348258 +0000 UTC m=+0.130735120 container attach e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 09:45:53 compute-0 reverent_golick[425023]: 167 167
Oct 14 09:45:53 compute-0 systemd[1]: libpod-e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a.scope: Deactivated successfully.
Oct 14 09:45:53 compute-0 conmon[425023]: conmon e29586cb545854b6fc3f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a.scope/container/memory.events
Oct 14 09:45:53 compute-0 podman[425007]: 2025-10-14 09:45:53.332432248 +0000 UTC m=+0.134819090 container died e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:45:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0c82df2e5e07cfada0ea762068357834ee4642712fb8ad551ed9c0dade825f9-merged.mount: Deactivated successfully.
Oct 14 09:45:53 compute-0 podman[425007]: 2025-10-14 09:45:53.37328619 +0000 UTC m=+0.175673032 container remove e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 09:45:53 compute-0 systemd[1]: libpod-conmon-e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a.scope: Deactivated successfully.
Oct 14 09:45:53 compute-0 podman[425047]: 2025-10-14 09:45:53.532229841 +0000 UTC m=+0.044276178 container create 78e288c97d68610c5fb791c4e195026fd58ef57bc0615832606d28ffdab3a950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:45:53 compute-0 systemd[1]: Started libpod-conmon-78e288c97d68610c5fb791c4e195026fd58ef57bc0615832606d28ffdab3a950.scope.
Oct 14 09:45:53 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbb6474b735520ff1067f3a2b1f608f933c4939a229fe2351092456268783ea5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbb6474b735520ff1067f3a2b1f608f933c4939a229fe2351092456268783ea5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbb6474b735520ff1067f3a2b1f608f933c4939a229fe2351092456268783ea5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbb6474b735520ff1067f3a2b1f608f933c4939a229fe2351092456268783ea5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbb6474b735520ff1067f3a2b1f608f933c4939a229fe2351092456268783ea5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:45:53 compute-0 podman[425047]: 2025-10-14 09:45:53.606974425 +0000 UTC m=+0.119020762 container init 78e288c97d68610c5fb791c4e195026fd58ef57bc0615832606d28ffdab3a950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_boyd, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 09:45:53 compute-0 podman[425047]: 2025-10-14 09:45:53.513678866 +0000 UTC m=+0.025725223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:45:53 compute-0 podman[425047]: 2025-10-14 09:45:53.612802118 +0000 UTC m=+0.124848445 container start 78e288c97d68610c5fb791c4e195026fd58ef57bc0615832606d28ffdab3a950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_boyd, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:45:53 compute-0 podman[425047]: 2025-10-14 09:45:53.616378686 +0000 UTC m=+0.128425033 container attach 78e288c97d68610c5fb791c4e195026fd58ef57bc0615832606d28ffdab3a950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_boyd, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 09:45:54 compute-0 ceph-mon[74249]: pgmap v2784: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:54 compute-0 angry_boyd[425064]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:45:54 compute-0 angry_boyd[425064]: --> relative data size: 1.0
Oct 14 09:45:54 compute-0 angry_boyd[425064]: --> All data devices are unavailable
Oct 14 09:45:54 compute-0 systemd[1]: libpod-78e288c97d68610c5fb791c4e195026fd58ef57bc0615832606d28ffdab3a950.scope: Deactivated successfully.
Oct 14 09:45:54 compute-0 podman[425047]: 2025-10-14 09:45:54.621997814 +0000 UTC m=+1.134044151 container died 78e288c97d68610c5fb791c4e195026fd58ef57bc0615832606d28ffdab3a950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_boyd, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:45:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbb6474b735520ff1067f3a2b1f608f933c4939a229fe2351092456268783ea5-merged.mount: Deactivated successfully.
Oct 14 09:45:54 compute-0 podman[425047]: 2025-10-14 09:45:54.673389735 +0000 UTC m=+1.185436062 container remove 78e288c97d68610c5fb791c4e195026fd58ef57bc0615832606d28ffdab3a950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_boyd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:45:54 compute-0 systemd[1]: libpod-conmon-78e288c97d68610c5fb791c4e195026fd58ef57bc0615832606d28ffdab3a950.scope: Deactivated successfully.
Oct 14 09:45:54 compute-0 sudo[424941]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:54 compute-0 sudo[425105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:45:54 compute-0 sudo[425105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:54 compute-0 sudo[425105]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:54 compute-0 sudo[425130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:45:54 compute-0 sudo[425130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:54 compute-0 sudo[425130]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:54 compute-0 sudo[425155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:45:54 compute-0 sudo[425155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:54 compute-0 sudo[425155]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:54 compute-0 sudo[425180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:45:54 compute-0 sudo[425180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:55 compute-0 podman[425247]: 2025-10-14 09:45:55.339871571 +0000 UTC m=+0.052139871 container create d65013d27dd10add63105e5f20a86046ef583f1fc0b9ca2d3a377941d6812817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:45:55 compute-0 systemd[1]: Started libpod-conmon-d65013d27dd10add63105e5f20a86046ef583f1fc0b9ca2d3a377941d6812817.scope.
Oct 14 09:45:55 compute-0 podman[425247]: 2025-10-14 09:45:55.312981921 +0000 UTC m=+0.025250271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:45:55 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:45:55 compute-0 podman[425247]: 2025-10-14 09:45:55.444086458 +0000 UTC m=+0.156354798 container init d65013d27dd10add63105e5f20a86046ef583f1fc0b9ca2d3a377941d6812817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_easley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:45:55 compute-0 podman[425247]: 2025-10-14 09:45:55.456756939 +0000 UTC m=+0.169025199 container start d65013d27dd10add63105e5f20a86046ef583f1fc0b9ca2d3a377941d6812817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:45:55 compute-0 podman[425247]: 2025-10-14 09:45:55.461489735 +0000 UTC m=+0.173758025 container attach d65013d27dd10add63105e5f20a86046ef583f1fc0b9ca2d3a377941d6812817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 09:45:55 compute-0 pedantic_easley[425263]: 167 167
Oct 14 09:45:55 compute-0 systemd[1]: libpod-d65013d27dd10add63105e5f20a86046ef583f1fc0b9ca2d3a377941d6812817.scope: Deactivated successfully.
Oct 14 09:45:55 compute-0 podman[425247]: 2025-10-14 09:45:55.46414481 +0000 UTC m=+0.176413080 container died d65013d27dd10add63105e5f20a86046ef583f1fc0b9ca2d3a377941d6812817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_easley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:45:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bbd490e2fa846853bd10e60288e493776e10850b1856c7ffd135bec563f6499-merged.mount: Deactivated successfully.
Oct 14 09:45:55 compute-0 podman[425247]: 2025-10-14 09:45:55.500687427 +0000 UTC m=+0.212955697 container remove d65013d27dd10add63105e5f20a86046ef583f1fc0b9ca2d3a377941d6812817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_easley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:45:55 compute-0 systemd[1]: libpod-conmon-d65013d27dd10add63105e5f20a86046ef583f1fc0b9ca2d3a377941d6812817.scope: Deactivated successfully.
Oct 14 09:45:55 compute-0 podman[425287]: 2025-10-14 09:45:55.690976817 +0000 UTC m=+0.053125545 container create 36acbcbb9a8de19127bae316bb6684f5a38c2691539813e900b832a333d08c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 09:45:55 compute-0 systemd[1]: Started libpod-conmon-36acbcbb9a8de19127bae316bb6684f5a38c2691539813e900b832a333d08c4c.scope.
Oct 14 09:45:55 compute-0 podman[425287]: 2025-10-14 09:45:55.668647569 +0000 UTC m=+0.030796377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:45:55 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2dac09e1bfb9d45456c99daceea9637f9f42a0ceb8b072bebdeb6af4356ff9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2dac09e1bfb9d45456c99daceea9637f9f42a0ceb8b072bebdeb6af4356ff9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2dac09e1bfb9d45456c99daceea9637f9f42a0ceb8b072bebdeb6af4356ff9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2dac09e1bfb9d45456c99daceea9637f9f42a0ceb8b072bebdeb6af4356ff9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:45:55 compute-0 nova_compute[259627]: 2025-10-14 09:45:55.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:55 compute-0 podman[425287]: 2025-10-14 09:45:55.788647353 +0000 UTC m=+0.150796171 container init 36acbcbb9a8de19127bae316bb6684f5a38c2691539813e900b832a333d08c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:45:55 compute-0 podman[425287]: 2025-10-14 09:45:55.795710447 +0000 UTC m=+0.157859205 container start 36acbcbb9a8de19127bae316bb6684f5a38c2691539813e900b832a333d08c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 09:45:55 compute-0 podman[425287]: 2025-10-14 09:45:55.799425618 +0000 UTC m=+0.161574386 container attach 36acbcbb9a8de19127bae316bb6684f5a38c2691539813e900b832a333d08c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:45:56 compute-0 ceph-mon[74249]: pgmap v2785: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]: {
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:     "0": [
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:         {
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "devices": [
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "/dev/loop3"
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             ],
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "lv_name": "ceph_lv0",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "lv_size": "21470642176",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "name": "ceph_lv0",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "tags": {
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.cluster_name": "ceph",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.crush_device_class": "",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.encrypted": "0",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.osd_id": "0",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.type": "block",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.vdo": "0"
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             },
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "type": "block",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "vg_name": "ceph_vg0"
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:         }
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:     ],
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:     "1": [
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:         {
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "devices": [
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "/dev/loop4"
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             ],
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "lv_name": "ceph_lv1",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "lv_size": "21470642176",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "name": "ceph_lv1",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "tags": {
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.cluster_name": "ceph",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.crush_device_class": "",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.encrypted": "0",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.osd_id": "1",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.type": "block",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.vdo": "0"
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             },
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "type": "block",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "vg_name": "ceph_vg1"
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:         }
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:     ],
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:     "2": [
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:         {
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "devices": [
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "/dev/loop5"
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             ],
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "lv_name": "ceph_lv2",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "lv_size": "21470642176",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "name": "ceph_lv2",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "tags": {
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.cluster_name": "ceph",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.crush_device_class": "",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.encrypted": "0",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.osd_id": "2",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.type": "block",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:                 "ceph.vdo": "0"
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             },
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "type": "block",
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:             "vg_name": "ceph_vg2"
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:         }
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]:     ]
Oct 14 09:45:56 compute-0 stupefied_brahmagupta[425304]: }
Oct 14 09:45:56 compute-0 systemd[1]: libpod-36acbcbb9a8de19127bae316bb6684f5a38c2691539813e900b832a333d08c4c.scope: Deactivated successfully.
Oct 14 09:45:56 compute-0 podman[425287]: 2025-10-14 09:45:56.599056021 +0000 UTC m=+0.961204749 container died 36acbcbb9a8de19127bae316bb6684f5a38c2691539813e900b832a333d08c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 09:45:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a2dac09e1bfb9d45456c99daceea9637f9f42a0ceb8b072bebdeb6af4356ff9-merged.mount: Deactivated successfully.
Oct 14 09:45:56 compute-0 podman[425287]: 2025-10-14 09:45:56.786208743 +0000 UTC m=+1.148357471 container remove 36acbcbb9a8de19127bae316bb6684f5a38c2691539813e900b832a333d08c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:45:56 compute-0 systemd[1]: libpod-conmon-36acbcbb9a8de19127bae316bb6684f5a38c2691539813e900b832a333d08c4c.scope: Deactivated successfully.
Oct 14 09:45:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:56 compute-0 sudo[425180]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:56 compute-0 sudo[425327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:45:56 compute-0 sudo[425327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:56 compute-0 sudo[425327]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:57 compute-0 sudo[425352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:45:57 compute-0 sudo[425352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:57 compute-0 sudo[425352]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:57 compute-0 sudo[425377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:45:57 compute-0 sudo[425377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:57 compute-0 sudo[425377]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:57 compute-0 sudo[425402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:45:57 compute-0 sudo[425402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:57 compute-0 ceph-mon[74249]: pgmap v2786: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:57 compute-0 podman[425465]: 2025-10-14 09:45:57.601680725 +0000 UTC m=+0.061873040 container create 057342122d49c6ba440c91eb99e5924fda98d667d64b95753f940cd870762540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:45:57 compute-0 systemd[1]: Started libpod-conmon-057342122d49c6ba440c91eb99e5924fda98d667d64b95753f940cd870762540.scope.
Oct 14 09:45:57 compute-0 podman[425465]: 2025-10-14 09:45:57.568258814 +0000 UTC m=+0.028451129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:45:57 compute-0 nova_compute[259627]: 2025-10-14 09:45:57.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:45:57 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:45:57 compute-0 podman[425465]: 2025-10-14 09:45:57.718663845 +0000 UTC m=+0.178856130 container init 057342122d49c6ba440c91eb99e5924fda98d667d64b95753f940cd870762540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 14 09:45:57 compute-0 podman[425465]: 2025-10-14 09:45:57.727254206 +0000 UTC m=+0.187446511 container start 057342122d49c6ba440c91eb99e5924fda98d667d64b95753f940cd870762540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_beaver, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:45:57 compute-0 silly_beaver[425481]: 167 167
Oct 14 09:45:57 compute-0 systemd[1]: libpod-057342122d49c6ba440c91eb99e5924fda98d667d64b95753f940cd870762540.scope: Deactivated successfully.
Oct 14 09:45:57 compute-0 podman[425465]: 2025-10-14 09:45:57.740723687 +0000 UTC m=+0.200916082 container attach 057342122d49c6ba440c91eb99e5924fda98d667d64b95753f940cd870762540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_beaver, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 09:45:57 compute-0 podman[425465]: 2025-10-14 09:45:57.741815054 +0000 UTC m=+0.202007339 container died 057342122d49c6ba440c91eb99e5924fda98d667d64b95753f940cd870762540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct 14 09:45:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-25f0e43edca948d7d00c3242760025fcc8b36f1f5701dc77d5b77472836b1a11-merged.mount: Deactivated successfully.
Oct 14 09:45:57 compute-0 podman[425465]: 2025-10-14 09:45:57.828096861 +0000 UTC m=+0.288289166 container remove 057342122d49c6ba440c91eb99e5924fda98d667d64b95753f940cd870762540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 09:45:57 compute-0 systemd[1]: libpod-conmon-057342122d49c6ba440c91eb99e5924fda98d667d64b95753f940cd870762540.scope: Deactivated successfully.
Oct 14 09:45:58 compute-0 podman[425507]: 2025-10-14 09:45:58.063223911 +0000 UTC m=+0.079346308 container create cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 14 09:45:58 compute-0 systemd[1]: Started libpod-conmon-cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c.scope.
Oct 14 09:45:58 compute-0 podman[425507]: 2025-10-14 09:45:58.028300514 +0000 UTC m=+0.044422951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:45:58 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d32a37aa6b362b6cde2ce074a43a6957253cb52900d245e079be2e263214cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d32a37aa6b362b6cde2ce074a43a6957253cb52900d245e079be2e263214cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d32a37aa6b362b6cde2ce074a43a6957253cb52900d245e079be2e263214cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d32a37aa6b362b6cde2ce074a43a6957253cb52900d245e079be2e263214cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:45:58 compute-0 podman[425507]: 2025-10-14 09:45:58.177349592 +0000 UTC m=+0.193471979 container init cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:45:58 compute-0 podman[425507]: 2025-10-14 09:45:58.188213368 +0000 UTC m=+0.204335765 container start cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 09:45:58 compute-0 podman[425507]: 2025-10-14 09:45:58.20459479 +0000 UTC m=+0.220717167 container attach cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:45:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:45:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]: {
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:         "osd_id": 2,
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:         "type": "bluestore"
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:     },
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:         "osd_id": 1,
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:         "type": "bluestore"
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:     },
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:         "osd_id": 0,
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:         "type": "bluestore"
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]:     }
Oct 14 09:45:59 compute-0 xenodochial_carver[425524]: }
Oct 14 09:45:59 compute-0 systemd[1]: libpod-cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c.scope: Deactivated successfully.
Oct 14 09:45:59 compute-0 systemd[1]: libpod-cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c.scope: Consumed 1.169s CPU time.
Oct 14 09:45:59 compute-0 podman[425557]: 2025-10-14 09:45:59.412604115 +0000 UTC m=+0.040844113 container died cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 09:45:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-86d32a37aa6b362b6cde2ce074a43a6957253cb52900d245e079be2e263214cb-merged.mount: Deactivated successfully.
Oct 14 09:45:59 compute-0 podman[425557]: 2025-10-14 09:45:59.489257666 +0000 UTC m=+0.117497624 container remove cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 09:45:59 compute-0 systemd[1]: libpod-conmon-cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c.scope: Deactivated successfully.
Oct 14 09:45:59 compute-0 sudo[425402]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:45:59 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:45:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:45:59 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:45:59 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev fa29527b-fe4d-4a87-9cf7-493f7ab36c29 does not exist
Oct 14 09:45:59 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 5ad0b29e-3854-4aeb-a0a2-5734ac61d947 does not exist
Oct 14 09:45:59 compute-0 sudo[425572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:45:59 compute-0 sudo[425572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:59 compute-0 sudo[425572]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:59 compute-0 sudo[425597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:45:59 compute-0 sudo[425597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:45:59 compute-0 sudo[425597]: pam_unix(sudo:session): session closed for user root
Oct 14 09:45:59 compute-0 ceph-mon[74249]: pgmap v2787: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:45:59 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:45:59 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:45:59 compute-0 nova_compute[259627]: 2025-10-14 09:45:59.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:45:59 compute-0 nova_compute[259627]: 2025-10-14 09:45:59.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:46:00 compute-0 nova_compute[259627]: 2025-10-14 09:46:00.018 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:46:00 compute-0 nova_compute[259627]: 2025-10-14 09:46:00.019 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:46:00 compute-0 nova_compute[259627]: 2025-10-14 09:46:00.019 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:46:00 compute-0 nova_compute[259627]: 2025-10-14 09:46:00.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:46:00 compute-0 nova_compute[259627]: 2025-10-14 09:46:00.020 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:46:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:46:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3600493190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:46:00 compute-0 nova_compute[259627]: 2025-10-14 09:46:00.526 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:46:00 compute-0 nova_compute[259627]: 2025-10-14 09:46:00.771 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:46:00 compute-0 nova_compute[259627]: 2025-10-14 09:46:00.773 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3545MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:46:00 compute-0 nova_compute[259627]: 2025-10-14 09:46:00.774 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:46:00 compute-0 nova_compute[259627]: 2025-10-14 09:46:00.774 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:46:00 compute-0 nova_compute[259627]: 2025-10-14 09:46:00.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:00 compute-0 nova_compute[259627]: 2025-10-14 09:46:00.846 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:46:00 compute-0 nova_compute[259627]: 2025-10-14 09:46:00.846 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:46:00 compute-0 nova_compute[259627]: 2025-10-14 09:46:00.863 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:46:00 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3600493190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:46:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:46:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2981660896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:46:01 compute-0 nova_compute[259627]: 2025-10-14 09:46:01.317 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:46:01 compute-0 nova_compute[259627]: 2025-10-14 09:46:01.322 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:46:01 compute-0 nova_compute[259627]: 2025-10-14 09:46:01.341 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:46:01 compute-0 nova_compute[259627]: 2025-10-14 09:46:01.342 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:46:01 compute-0 nova_compute[259627]: 2025-10-14 09:46:01.342 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:46:01 compute-0 ceph-mon[74249]: pgmap v2788: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:01 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2981660896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:46:02 compute-0 nova_compute[259627]: 2025-10-14 09:46:02.342 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:46:02 compute-0 nova_compute[259627]: 2025-10-14 09:46:02.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:46:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:46:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:46:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:46:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:46:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:46:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:46:03 compute-0 ceph-mon[74249]: pgmap v2789: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:04 compute-0 nova_compute[259627]: 2025-10-14 09:46:04.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:46:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:46:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/930918275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:46:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:46:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/930918275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:46:05 compute-0 nova_compute[259627]: 2025-10-14 09:46:05.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:05 compute-0 ceph-mon[74249]: pgmap v2790: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/930918275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:46:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/930918275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:46:05 compute-0 nova_compute[259627]: 2025-10-14 09:46:05.972 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:46:05 compute-0 nova_compute[259627]: 2025-10-14 09:46:05.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:46:05 compute-0 nova_compute[259627]: 2025-10-14 09:46:05.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:46:05 compute-0 nova_compute[259627]: 2025-10-14 09:46:05.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:46:05 compute-0 nova_compute[259627]: 2025-10-14 09:46:05.991 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:46:05 compute-0 nova_compute[259627]: 2025-10-14 09:46:05.992 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:46:05 compute-0 nova_compute[259627]: 2025-10-14 09:46:05.992 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:46:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:46:07.062 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:46:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:46:07.063 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:46:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:46:07.063 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:46:07 compute-0 podman[425667]: 2025-10-14 09:46:07.701362139 +0000 UTC m=+0.097100604 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, container_name=iscsid)
Oct 14 09:46:07 compute-0 podman[425666]: 2025-10-14 09:46:07.70182184 +0000 UTC m=+0.098269272 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 09:46:07 compute-0 nova_compute[259627]: 2025-10-14 09:46:07.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:07 compute-0 ceph-mon[74249]: pgmap v2791: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:46:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:09 compute-0 ceph-mon[74249]: pgmap v2792: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:09 compute-0 nova_compute[259627]: 2025-10-14 09:46:09.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:46:10 compute-0 nova_compute[259627]: 2025-10-14 09:46:10.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:11 compute-0 ceph-mon[74249]: pgmap v2793: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:12 compute-0 nova_compute[259627]: 2025-10-14 09:46:12.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:46:13 compute-0 ceph-mon[74249]: pgmap v2794: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:13 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Oct 14 09:46:13 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:13.995420) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:46:13 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Oct 14 09:46:13 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435173995463, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1848, "num_deletes": 253, "total_data_size": 3017036, "memory_usage": 3071376, "flush_reason": "Manual Compaction"}
Oct 14 09:46:13 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435174013312, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 2954292, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56963, "largest_seqno": 58810, "table_properties": {"data_size": 2945786, "index_size": 5255, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17583, "raw_average_key_size": 20, "raw_value_size": 2928718, "raw_average_value_size": 3377, "num_data_blocks": 233, "num_entries": 867, "num_filter_entries": 867, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434979, "oldest_key_time": 1760434979, "file_creation_time": 1760435173, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 17951 microseconds, and 11924 cpu microseconds.
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.013367) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 2954292 bytes OK
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.013394) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.014803) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.014823) EVENT_LOG_v1 {"time_micros": 1760435174014816, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.014844) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 3009151, prev total WAL file size 3009151, number of live WAL files 2.
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.016521) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(2885KB)], [134(9993KB)]
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435174016591, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 13187160, "oldest_snapshot_seqno": -1}
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 7911 keys, 11433302 bytes, temperature: kUnknown
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435174089938, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 11433302, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11379589, "index_size": 32800, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 206287, "raw_average_key_size": 26, "raw_value_size": 11237525, "raw_average_value_size": 1420, "num_data_blocks": 1284, "num_entries": 7911, "num_filter_entries": 7911, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760435174, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.090260) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11433302 bytes
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.091303) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.5 rd, 155.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 9.8 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(8.3) write-amplify(3.9) OK, records in: 8433, records dropped: 522 output_compression: NoCompression
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.091319) EVENT_LOG_v1 {"time_micros": 1760435174091311, "job": 82, "event": "compaction_finished", "compaction_time_micros": 73483, "compaction_time_cpu_micros": 53636, "output_level": 6, "num_output_files": 1, "total_output_size": 11433302, "num_input_records": 8433, "num_output_records": 7911, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435174091955, "job": 82, "event": "table_file_deletion", "file_number": 136}
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435174093685, "job": 82, "event": "table_file_deletion", "file_number": 134}
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.016294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.093732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.093735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.093737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.093738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:46:14 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.093740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:46:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:15 compute-0 nova_compute[259627]: 2025-10-14 09:46:15.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:16 compute-0 ceph-mon[74249]: pgmap v2795: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:17 compute-0 nova_compute[259627]: 2025-10-14 09:46:17.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:18 compute-0 ceph-mon[74249]: pgmap v2796: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:46:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:20 compute-0 ceph-mon[74249]: pgmap v2797: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:20 compute-0 nova_compute[259627]: 2025-10-14 09:46:20.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:21 compute-0 podman[425709]: 2025-10-14 09:46:21.67364901 +0000 UTC m=+0.078473807 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 09:46:21 compute-0 podman[425708]: 2025-10-14 09:46:21.769418919 +0000 UTC m=+0.175921587 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:46:22 compute-0 ceph-mon[74249]: pgmap v2798: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:22 compute-0 nova_compute[259627]: 2025-10-14 09:46:22.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:22 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 14 09:46:22 compute-0 systemd[1]: virtsecretd.service: Consumed 1.234s CPU time.
Oct 14 09:46:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.260811) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435183260892, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 318, "num_deletes": 250, "total_data_size": 151625, "memory_usage": 158480, "flush_reason": "Manual Compaction"}
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435183264272, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 150156, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58811, "largest_seqno": 59128, "table_properties": {"data_size": 148094, "index_size": 289, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5649, "raw_average_key_size": 20, "raw_value_size": 144041, "raw_average_value_size": 514, "num_data_blocks": 13, "num_entries": 280, "num_filter_entries": 280, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760435174, "oldest_key_time": 1760435174, "file_creation_time": 1760435183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 3502 microseconds, and 1669 cpu microseconds.
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.264324) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 150156 bytes OK
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.264346) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.265870) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.265891) EVENT_LOG_v1 {"time_micros": 1760435183265884, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.265914) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 149389, prev total WAL file size 149389, number of live WAL files 2.
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.266424) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323530' seq:72057594037927935, type:22 .. '6D6772737461740032353031' seq:0, type:0; will stop at (end)
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(146KB)], [137(10MB)]
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435183266483, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 11583458, "oldest_snapshot_seqno": -1}
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7684 keys, 8292053 bytes, temperature: kUnknown
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435183325796, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 8292053, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8244601, "index_size": 27115, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19269, "raw_key_size": 201751, "raw_average_key_size": 26, "raw_value_size": 8111246, "raw_average_value_size": 1055, "num_data_blocks": 1046, "num_entries": 7684, "num_filter_entries": 7684, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760435183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.326248) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 8292053 bytes
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.327890) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.9 rd, 139.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.9 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(132.4) write-amplify(55.2) OK, records in: 8191, records dropped: 507 output_compression: NoCompression
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.327929) EVENT_LOG_v1 {"time_micros": 1760435183327911, "job": 84, "event": "compaction_finished", "compaction_time_micros": 59426, "compaction_time_cpu_micros": 42843, "output_level": 6, "num_output_files": 1, "total_output_size": 8292053, "num_input_records": 8191, "num_output_records": 7684, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435183328199, "job": 84, "event": "table_file_deletion", "file_number": 139}
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435183332569, "job": 84, "event": "table_file_deletion", "file_number": 137}
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.266359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.332721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.332731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.332735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.332739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:46:23 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.332743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:46:24 compute-0 ceph-mon[74249]: pgmap v2799: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:25 compute-0 nova_compute[259627]: 2025-10-14 09:46:25.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:26 compute-0 ceph-mon[74249]: pgmap v2800: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:27 compute-0 nova_compute[259627]: 2025-10-14 09:46:27.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:28 compute-0 ceph-mon[74249]: pgmap v2801: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:46:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:30 compute-0 ceph-mon[74249]: pgmap v2802: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:30 compute-0 nova_compute[259627]: 2025-10-14 09:46:30.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:32 compute-0 ceph-mon[74249]: pgmap v2803: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:32 compute-0 nova_compute[259627]: 2025-10-14 09:46:32.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:46:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:46:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:46:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:46:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:46:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:46:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:46:32
Oct 14 09:46:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:46:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:46:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'images', 'volumes', 'vms', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'default.rgw.control', '.mgr']
Oct 14 09:46:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:46:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:46:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:46:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:46:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:46:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:46:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:46:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:46:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:46:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:46:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:46:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:46:33 compute-0 nova_compute[259627]: 2025-10-14 09:46:33.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:46:34 compute-0 ceph-mon[74249]: pgmap v2804: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:36 compute-0 nova_compute[259627]: 2025-10-14 09:46:36.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:36 compute-0 ceph-mon[74249]: pgmap v2805: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:37 compute-0 nova_compute[259627]: 2025-10-14 09:46:37.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:38 compute-0 ceph-mon[74249]: pgmap v2806: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:46:38 compute-0 podman[425753]: 2025-10-14 09:46:38.682478166 +0000 UTC m=+0.082704590 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:46:38 compute-0 podman[425752]: 2025-10-14 09:46:38.690174175 +0000 UTC m=+0.094315145 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 09:46:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:40 compute-0 ceph-mon[74249]: pgmap v2807: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:41 compute-0 nova_compute[259627]: 2025-10-14 09:46:41.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:42 compute-0 ceph-mon[74249]: pgmap v2808: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:42 compute-0 nova_compute[259627]: 2025-10-14 09:46:42.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:46:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:46:44 compute-0 ceph-mon[74249]: pgmap v2809: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:46 compute-0 nova_compute[259627]: 2025-10-14 09:46:46.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:46 compute-0 ceph-mon[74249]: pgmap v2810: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:47 compute-0 nova_compute[259627]: 2025-10-14 09:46:47.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:48 compute-0 ceph-mon[74249]: pgmap v2811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:46:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:50 compute-0 ceph-mon[74249]: pgmap v2812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:50 compute-0 nova_compute[259627]: 2025-10-14 09:46:50.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:46:51 compute-0 nova_compute[259627]: 2025-10-14 09:46:51.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:52 compute-0 ceph-mon[74249]: pgmap v2813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:52 compute-0 podman[425791]: 2025-10-14 09:46:52.640913007 +0000 UTC m=+0.054944880 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:46:52 compute-0 podman[425790]: 2025-10-14 09:46:52.680389545 +0000 UTC m=+0.094871899 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:46:52 compute-0 nova_compute[259627]: 2025-10-14 09:46:52.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:46:54 compute-0 ceph-mon[74249]: pgmap v2814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:56 compute-0 nova_compute[259627]: 2025-10-14 09:46:56.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:56 compute-0 ceph-mon[74249]: pgmap v2815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:57 compute-0 nova_compute[259627]: 2025-10-14 09:46:57.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:46:58 compute-0 ceph-mon[74249]: pgmap v2816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:46:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:46:59 compute-0 sudo[425835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:46:59 compute-0 sudo[425835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:46:59 compute-0 sudo[425835]: pam_unix(sudo:session): session closed for user root
Oct 14 09:46:59 compute-0 sudo[425860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:46:59 compute-0 sudo[425860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:46:59 compute-0 sudo[425860]: pam_unix(sudo:session): session closed for user root
Oct 14 09:46:59 compute-0 nova_compute[259627]: 2025-10-14 09:46:59.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:46:59 compute-0 nova_compute[259627]: 2025-10-14 09:46:59.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:47:00 compute-0 nova_compute[259627]: 2025-10-14 09:47:00.018 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:47:00 compute-0 nova_compute[259627]: 2025-10-14 09:47:00.018 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:47:00 compute-0 nova_compute[259627]: 2025-10-14 09:47:00.018 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:47:00 compute-0 nova_compute[259627]: 2025-10-14 09:47:00.018 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:47:00 compute-0 nova_compute[259627]: 2025-10-14 09:47:00.019 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:47:00 compute-0 sudo[425885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:47:00 compute-0 sudo[425885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:47:00 compute-0 sudo[425885]: pam_unix(sudo:session): session closed for user root
Oct 14 09:47:00 compute-0 sudo[425911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:47:00 compute-0 sudo[425911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:47:00 compute-0 ceph-mon[74249]: pgmap v2817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:47:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1169817489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:47:00 compute-0 nova_compute[259627]: 2025-10-14 09:47:00.495 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:47:00 compute-0 sudo[425911]: pam_unix(sudo:session): session closed for user root
Oct 14 09:47:00 compute-0 nova_compute[259627]: 2025-10-14 09:47:00.687 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:47:00 compute-0 nova_compute[259627]: 2025-10-14 09:47:00.688 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3587MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:47:00 compute-0 nova_compute[259627]: 2025-10-14 09:47:00.688 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:47:00 compute-0 nova_compute[259627]: 2025-10-14 09:47:00.688 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:47:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:47:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:47:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:47:00 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:47:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:47:00 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:47:00 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev f8d9f02f-f0de-4abe-9a58-8961c14491f0 does not exist
Oct 14 09:47:00 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev ef88425e-0c7c-4964-82f1-41d4be2a440a does not exist
Oct 14 09:47:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:47:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:47:00 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 40c89b73-e6f3-465d-9d9b-c8801b143c1b does not exist
Oct 14 09:47:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:47:00 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:47:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:47:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:47:00 compute-0 sudo[425987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:47:00 compute-0 sudo[425987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:47:00 compute-0 sudo[425987]: pam_unix(sudo:session): session closed for user root
Oct 14 09:47:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:00 compute-0 sudo[426012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:47:00 compute-0 sudo[426012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:47:00 compute-0 sudo[426012]: pam_unix(sudo:session): session closed for user root
Oct 14 09:47:00 compute-0 nova_compute[259627]: 2025-10-14 09:47:00.914 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:47:00 compute-0 nova_compute[259627]: 2025-10-14 09:47:00.915 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:47:00 compute-0 sudo[426037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:47:00 compute-0 sudo[426037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:47:00 compute-0 sudo[426037]: pam_unix(sudo:session): session closed for user root
Oct 14 09:47:01 compute-0 sudo[426062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:47:01 compute-0 sudo[426062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:47:01 compute-0 nova_compute[259627]: 2025-10-14 09:47:01.046 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:47:01 compute-0 nova_compute[259627]: 2025-10-14 09:47:01.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:01 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1169817489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:47:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:47:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:47:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:47:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:47:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:47:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:47:01 compute-0 ceph-mon[74249]: pgmap v2818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:01 compute-0 podman[426146]: 2025-10-14 09:47:01.421053089 +0000 UTC m=+0.057016020 container create 753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 14 09:47:01 compute-0 systemd[1]: Started libpod-conmon-753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477.scope.
Oct 14 09:47:01 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:47:01 compute-0 podman[426146]: 2025-10-14 09:47:01.393200466 +0000 UTC m=+0.029163437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:47:01 compute-0 podman[426146]: 2025-10-14 09:47:01.491560079 +0000 UTC m=+0.127523030 container init 753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mccarthy, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:47:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:47:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3150856695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:47:01 compute-0 podman[426146]: 2025-10-14 09:47:01.50340943 +0000 UTC m=+0.139372391 container start 753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:47:01 compute-0 podman[426146]: 2025-10-14 09:47:01.507600813 +0000 UTC m=+0.143563764 container attach 753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:47:01 compute-0 angry_mccarthy[426162]: 167 167
Oct 14 09:47:01 compute-0 systemd[1]: libpod-753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477.scope: Deactivated successfully.
Oct 14 09:47:01 compute-0 conmon[426162]: conmon 753f6e320fa80bed18ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477.scope/container/memory.events
Oct 14 09:47:01 compute-0 podman[426146]: 2025-10-14 09:47:01.513252692 +0000 UTC m=+0.149215663 container died 753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:47:01 compute-0 nova_compute[259627]: 2025-10-14 09:47:01.515 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:47:01 compute-0 nova_compute[259627]: 2025-10-14 09:47:01.522 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:47:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cc717758962283ed6c4abb2725ef80c9fdfa5ee06288cb7f450655184acb642-merged.mount: Deactivated successfully.
Oct 14 09:47:01 compute-0 nova_compute[259627]: 2025-10-14 09:47:01.547 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:47:01 compute-0 nova_compute[259627]: 2025-10-14 09:47:01.549 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:47:01 compute-0 nova_compute[259627]: 2025-10-14 09:47:01.549 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:47:01 compute-0 podman[426146]: 2025-10-14 09:47:01.566005046 +0000 UTC m=+0.201968017 container remove 753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 09:47:01 compute-0 systemd[1]: libpod-conmon-753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477.scope: Deactivated successfully.
Oct 14 09:47:01 compute-0 podman[426188]: 2025-10-14 09:47:01.808454406 +0000 UTC m=+0.073964196 container create 89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:47:01 compute-0 systemd[1]: Started libpod-conmon-89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7.scope.
Oct 14 09:47:01 compute-0 podman[426188]: 2025-10-14 09:47:01.767291086 +0000 UTC m=+0.032800916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:47:01 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:47:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3a22a039a7da98aff25e82a195bb68c3e6ec442d77b054fc0c99b51e7b3d9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:47:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3a22a039a7da98aff25e82a195bb68c3e6ec442d77b054fc0c99b51e7b3d9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:47:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3a22a039a7da98aff25e82a195bb68c3e6ec442d77b054fc0c99b51e7b3d9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:47:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3a22a039a7da98aff25e82a195bb68c3e6ec442d77b054fc0c99b51e7b3d9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:47:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3a22a039a7da98aff25e82a195bb68c3e6ec442d77b054fc0c99b51e7b3d9e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:47:01 compute-0 podman[426188]: 2025-10-14 09:47:01.909843444 +0000 UTC m=+0.175353214 container init 89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_northcutt, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:47:01 compute-0 podman[426188]: 2025-10-14 09:47:01.924962545 +0000 UTC m=+0.190472345 container start 89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_northcutt, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 14 09:47:01 compute-0 podman[426188]: 2025-10-14 09:47:01.929258831 +0000 UTC m=+0.194768621 container attach 89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:47:02 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3150856695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:47:02 compute-0 nova_compute[259627]: 2025-10-14 09:47:02.549 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:47:02 compute-0 nova_compute[259627]: 2025-10-14 09:47:02.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:47:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:47:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:47:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:47:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:47:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:47:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:03 compute-0 angry_northcutt[426205]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:47:03 compute-0 angry_northcutt[426205]: --> relative data size: 1.0
Oct 14 09:47:03 compute-0 angry_northcutt[426205]: --> All data devices are unavailable
Oct 14 09:47:03 compute-0 systemd[1]: libpod-89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7.scope: Deactivated successfully.
Oct 14 09:47:03 compute-0 systemd[1]: libpod-89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7.scope: Consumed 1.085s CPU time.
Oct 14 09:47:03 compute-0 podman[426188]: 2025-10-14 09:47:03.063310181 +0000 UTC m=+1.328819971 container died 89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:47:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca3a22a039a7da98aff25e82a195bb68c3e6ec442d77b054fc0c99b51e7b3d9e-merged.mount: Deactivated successfully.
Oct 14 09:47:03 compute-0 podman[426188]: 2025-10-14 09:47:03.129772932 +0000 UTC m=+1.395282682 container remove 89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 09:47:03 compute-0 systemd[1]: libpod-conmon-89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7.scope: Deactivated successfully.
Oct 14 09:47:03 compute-0 sudo[426062]: pam_unix(sudo:session): session closed for user root
Oct 14 09:47:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:47:03 compute-0 sudo[426247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:47:03 compute-0 sudo[426247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:47:03 compute-0 sudo[426247]: pam_unix(sudo:session): session closed for user root
Oct 14 09:47:03 compute-0 ceph-mon[74249]: pgmap v2819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:03 compute-0 sudo[426272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:47:03 compute-0 sudo[426272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:47:03 compute-0 sudo[426272]: pam_unix(sudo:session): session closed for user root
Oct 14 09:47:03 compute-0 sudo[426297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:47:03 compute-0 sudo[426297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:47:03 compute-0 sudo[426297]: pam_unix(sudo:session): session closed for user root
Oct 14 09:47:03 compute-0 sudo[426322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:47:03 compute-0 sudo[426322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:47:03 compute-0 podman[426388]: 2025-10-14 09:47:03.905633991 +0000 UTC m=+0.066952834 container create 0920dbeec4268b96293159c585008ab00792d881f17a7b50746b551e52f9ab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:47:03 compute-0 systemd[1]: Started libpod-conmon-0920dbeec4268b96293159c585008ab00792d881f17a7b50746b551e52f9ab26.scope.
Oct 14 09:47:03 compute-0 podman[426388]: 2025-10-14 09:47:03.880081234 +0000 UTC m=+0.041400117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:47:03 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:47:04 compute-0 podman[426388]: 2025-10-14 09:47:04.001475043 +0000 UTC m=+0.162793886 container init 0920dbeec4268b96293159c585008ab00792d881f17a7b50746b551e52f9ab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leavitt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 09:47:04 compute-0 podman[426388]: 2025-10-14 09:47:04.013221382 +0000 UTC m=+0.174540215 container start 0920dbeec4268b96293159c585008ab00792d881f17a7b50746b551e52f9ab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 09:47:04 compute-0 podman[426388]: 2025-10-14 09:47:04.017206309 +0000 UTC m=+0.178525212 container attach 0920dbeec4268b96293159c585008ab00792d881f17a7b50746b551e52f9ab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leavitt, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:47:04 compute-0 affectionate_leavitt[426404]: 167 167
Oct 14 09:47:04 compute-0 systemd[1]: libpod-0920dbeec4268b96293159c585008ab00792d881f17a7b50746b551e52f9ab26.scope: Deactivated successfully.
Oct 14 09:47:04 compute-0 podman[426388]: 2025-10-14 09:47:04.019614548 +0000 UTC m=+0.180933391 container died 0920dbeec4268b96293159c585008ab00792d881f17a7b50746b551e52f9ab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leavitt, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:47:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3f89df327f558ffd345c6bdf4ece695df76c6bc7ceced2756389543cba510d2-merged.mount: Deactivated successfully.
Oct 14 09:47:04 compute-0 podman[426388]: 2025-10-14 09:47:04.070778744 +0000 UTC m=+0.232097587 container remove 0920dbeec4268b96293159c585008ab00792d881f17a7b50746b551e52f9ab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leavitt, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 09:47:04 compute-0 systemd[1]: libpod-conmon-0920dbeec4268b96293159c585008ab00792d881f17a7b50746b551e52f9ab26.scope: Deactivated successfully.
Oct 14 09:47:04 compute-0 podman[426428]: 2025-10-14 09:47:04.306667413 +0000 UTC m=+0.072415548 container create 00f2d28e582c5aca90ed63d138367e13aba0c76422ef93915cc9b453e686c449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chatterjee, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:47:04 compute-0 systemd[1]: Started libpod-conmon-00f2d28e582c5aca90ed63d138367e13aba0c76422ef93915cc9b453e686c449.scope.
Oct 14 09:47:04 compute-0 podman[426428]: 2025-10-14 09:47:04.276735008 +0000 UTC m=+0.042483193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:47:04 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fae8b01e3b06bc3527ac63c6cccaac573be2b7a5d6c4f23b35cc92371dc279/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fae8b01e3b06bc3527ac63c6cccaac573be2b7a5d6c4f23b35cc92371dc279/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fae8b01e3b06bc3527ac63c6cccaac573be2b7a5d6c4f23b35cc92371dc279/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fae8b01e3b06bc3527ac63c6cccaac573be2b7a5d6c4f23b35cc92371dc279/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:47:04 compute-0 podman[426428]: 2025-10-14 09:47:04.415189186 +0000 UTC m=+0.180937321 container init 00f2d28e582c5aca90ed63d138367e13aba0c76422ef93915cc9b453e686c449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chatterjee, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 09:47:04 compute-0 podman[426428]: 2025-10-14 09:47:04.460407555 +0000 UTC m=+0.226155690 container start 00f2d28e582c5aca90ed63d138367e13aba0c76422ef93915cc9b453e686c449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 09:47:04 compute-0 podman[426428]: 2025-10-14 09:47:04.466319921 +0000 UTC m=+0.232068056 container attach 00f2d28e582c5aca90ed63d138367e13aba0c76422ef93915cc9b453e686c449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 09:47:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]: {
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:     "0": [
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:         {
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "devices": [
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "/dev/loop3"
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             ],
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "lv_name": "ceph_lv0",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "lv_size": "21470642176",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "name": "ceph_lv0",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "tags": {
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.cluster_name": "ceph",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.crush_device_class": "",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.encrypted": "0",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.osd_id": "0",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.type": "block",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.vdo": "0"
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             },
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "type": "block",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "vg_name": "ceph_vg0"
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:         }
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:     ],
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:     "1": [
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:         {
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "devices": [
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "/dev/loop4"
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             ],
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "lv_name": "ceph_lv1",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "lv_size": "21470642176",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "name": "ceph_lv1",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "tags": {
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.cluster_name": "ceph",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.crush_device_class": "",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.encrypted": "0",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.osd_id": "1",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.type": "block",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.vdo": "0"
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             },
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "type": "block",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "vg_name": "ceph_vg1"
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:         }
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:     ],
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:     "2": [
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:         {
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "devices": [
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "/dev/loop5"
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             ],
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "lv_name": "ceph_lv2",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "lv_size": "21470642176",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "name": "ceph_lv2",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "tags": {
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.cluster_name": "ceph",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.crush_device_class": "",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.encrypted": "0",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.osd_id": "2",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.type": "block",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:                 "ceph.vdo": "0"
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             },
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "type": "block",
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:             "vg_name": "ceph_vg2"
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:         }
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]:     ]
Oct 14 09:47:05 compute-0 sleepy_chatterjee[426444]: }
Oct 14 09:47:05 compute-0 systemd[1]: libpod-00f2d28e582c5aca90ed63d138367e13aba0c76422ef93915cc9b453e686c449.scope: Deactivated successfully.
Oct 14 09:47:05 compute-0 podman[426428]: 2025-10-14 09:47:05.271859838 +0000 UTC m=+1.037607943 container died 00f2d28e582c5aca90ed63d138367e13aba0c76422ef93915cc9b453e686c449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chatterjee, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 09:47:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-31fae8b01e3b06bc3527ac63c6cccaac573be2b7a5d6c4f23b35cc92371dc279-merged.mount: Deactivated successfully.
Oct 14 09:47:05 compute-0 podman[426428]: 2025-10-14 09:47:05.343296301 +0000 UTC m=+1.109044406 container remove 00f2d28e582c5aca90ed63d138367e13aba0c76422ef93915cc9b453e686c449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 09:47:05 compute-0 systemd[1]: libpod-conmon-00f2d28e582c5aca90ed63d138367e13aba0c76422ef93915cc9b453e686c449.scope: Deactivated successfully.
Oct 14 09:47:05 compute-0 sudo[426322]: pam_unix(sudo:session): session closed for user root
Oct 14 09:47:05 compute-0 sudo[426467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:47:05 compute-0 sudo[426467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:47:05 compute-0 sudo[426467]: pam_unix(sudo:session): session closed for user root
Oct 14 09:47:05 compute-0 sudo[426492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:47:05 compute-0 sudo[426492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:47:05 compute-0 sudo[426492]: pam_unix(sudo:session): session closed for user root
Oct 14 09:47:05 compute-0 sudo[426517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:47:05 compute-0 sudo[426517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:47:05 compute-0 sudo[426517]: pam_unix(sudo:session): session closed for user root
Oct 14 09:47:05 compute-0 sudo[426542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:47:05 compute-0 sudo[426542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:47:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:47:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2095429104' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:47:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:47:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2095429104' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:47:05 compute-0 ceph-mon[74249]: pgmap v2820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2095429104' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:47:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2095429104' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:47:05 compute-0 nova_compute[259627]: 2025-10-14 09:47:05.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:47:05 compute-0 nova_compute[259627]: 2025-10-14 09:47:05.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:47:05 compute-0 nova_compute[259627]: 2025-10-14 09:47:05.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:47:05 compute-0 nova_compute[259627]: 2025-10-14 09:47:05.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:47:06 compute-0 nova_compute[259627]: 2025-10-14 09:47:06.001 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:47:06 compute-0 podman[426608]: 2025-10-14 09:47:06.108683813 +0000 UTC m=+0.059730756 container create af2f705ee32dce10ebe7849cc1a1c740343b82ea8b6f7df1a64b1a7eeac981fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 09:47:06 compute-0 systemd[1]: Started libpod-conmon-af2f705ee32dce10ebe7849cc1a1c740343b82ea8b6f7df1a64b1a7eeac981fb.scope.
Oct 14 09:47:06 compute-0 podman[426608]: 2025-10-14 09:47:06.088066777 +0000 UTC m=+0.039113720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:47:06 compute-0 nova_compute[259627]: 2025-10-14 09:47:06.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:06 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:47:06 compute-0 podman[426608]: 2025-10-14 09:47:06.221570764 +0000 UTC m=+0.172617677 container init af2f705ee32dce10ebe7849cc1a1c740343b82ea8b6f7df1a64b1a7eeac981fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 09:47:06 compute-0 podman[426608]: 2025-10-14 09:47:06.229662302 +0000 UTC m=+0.180709255 container start af2f705ee32dce10ebe7849cc1a1c740343b82ea8b6f7df1a64b1a7eeac981fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_nobel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 09:47:06 compute-0 podman[426608]: 2025-10-14 09:47:06.233753823 +0000 UTC m=+0.184800776 container attach af2f705ee32dce10ebe7849cc1a1c740343b82ea8b6f7df1a64b1a7eeac981fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:47:06 compute-0 optimistic_nobel[426624]: 167 167
Oct 14 09:47:06 compute-0 systemd[1]: libpod-af2f705ee32dce10ebe7849cc1a1c740343b82ea8b6f7df1a64b1a7eeac981fb.scope: Deactivated successfully.
Oct 14 09:47:06 compute-0 podman[426608]: 2025-10-14 09:47:06.239263408 +0000 UTC m=+0.190310351 container died af2f705ee32dce10ebe7849cc1a1c740343b82ea8b6f7df1a64b1a7eeac981fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 09:47:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0569f6ca98b66bd754466065c03f3096a15837ae09ddb7af5f5b17cd6b9120b0-merged.mount: Deactivated successfully.
Oct 14 09:47:06 compute-0 podman[426608]: 2025-10-14 09:47:06.290004133 +0000 UTC m=+0.241051056 container remove af2f705ee32dce10ebe7849cc1a1c740343b82ea8b6f7df1a64b1a7eeac981fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_nobel, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 09:47:06 compute-0 systemd[1]: libpod-conmon-af2f705ee32dce10ebe7849cc1a1c740343b82ea8b6f7df1a64b1a7eeac981fb.scope: Deactivated successfully.
Oct 14 09:47:06 compute-0 podman[426649]: 2025-10-14 09:47:06.504473296 +0000 UTC m=+0.076976100 container create f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:47:06 compute-0 systemd[1]: Started libpod-conmon-f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df.scope.
Oct 14 09:47:06 compute-0 podman[426649]: 2025-10-14 09:47:06.475985257 +0000 UTC m=+0.048488151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:47:06 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f11154e87cfefc54ef2e0b756c703d587308737753ce5fce0ae7008ddbe40e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f11154e87cfefc54ef2e0b756c703d587308737753ce5fce0ae7008ddbe40e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f11154e87cfefc54ef2e0b756c703d587308737753ce5fce0ae7008ddbe40e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f11154e87cfefc54ef2e0b756c703d587308737753ce5fce0ae7008ddbe40e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:47:06 compute-0 podman[426649]: 2025-10-14 09:47:06.605377752 +0000 UTC m=+0.177880626 container init f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:47:06 compute-0 podman[426649]: 2025-10-14 09:47:06.619673343 +0000 UTC m=+0.192176137 container start f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 09:47:06 compute-0 podman[426649]: 2025-10-14 09:47:06.623684372 +0000 UTC m=+0.196187196 container attach f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 09:47:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:06 compute-0 nova_compute[259627]: 2025-10-14 09:47:06.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:47:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:47:07.064 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:47:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:47:07.064 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:47:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:47:07.065 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:47:07 compute-0 gifted_cori[426665]: {
Oct 14 09:47:07 compute-0 gifted_cori[426665]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:47:07 compute-0 gifted_cori[426665]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:47:07 compute-0 gifted_cori[426665]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:47:07 compute-0 gifted_cori[426665]:         "osd_id": 2,
Oct 14 09:47:07 compute-0 gifted_cori[426665]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:47:07 compute-0 gifted_cori[426665]:         "type": "bluestore"
Oct 14 09:47:07 compute-0 gifted_cori[426665]:     },
Oct 14 09:47:07 compute-0 gifted_cori[426665]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:47:07 compute-0 gifted_cori[426665]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:47:07 compute-0 gifted_cori[426665]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:47:07 compute-0 gifted_cori[426665]:         "osd_id": 1,
Oct 14 09:47:07 compute-0 gifted_cori[426665]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:47:07 compute-0 gifted_cori[426665]:         "type": "bluestore"
Oct 14 09:47:07 compute-0 gifted_cori[426665]:     },
Oct 14 09:47:07 compute-0 gifted_cori[426665]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:47:07 compute-0 gifted_cori[426665]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:47:07 compute-0 gifted_cori[426665]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:47:07 compute-0 gifted_cori[426665]:         "osd_id": 0,
Oct 14 09:47:07 compute-0 gifted_cori[426665]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:47:07 compute-0 gifted_cori[426665]:         "type": "bluestore"
Oct 14 09:47:07 compute-0 gifted_cori[426665]:     }
Oct 14 09:47:07 compute-0 gifted_cori[426665]: }
Oct 14 09:47:07 compute-0 systemd[1]: libpod-f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df.scope: Deactivated successfully.
Oct 14 09:47:07 compute-0 podman[426649]: 2025-10-14 09:47:07.731166459 +0000 UTC m=+1.303669293 container died f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cori, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 09:47:07 compute-0 systemd[1]: libpod-f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df.scope: Consumed 1.121s CPU time.
Oct 14 09:47:07 compute-0 nova_compute[259627]: 2025-10-14 09:47:07.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f11154e87cfefc54ef2e0b756c703d587308737753ce5fce0ae7008ddbe40e9-merged.mount: Deactivated successfully.
Oct 14 09:47:07 compute-0 podman[426649]: 2025-10-14 09:47:07.807730268 +0000 UTC m=+1.380233072 container remove f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cori, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:47:07 compute-0 systemd[1]: libpod-conmon-f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df.scope: Deactivated successfully.
Oct 14 09:47:07 compute-0 sudo[426542]: pam_unix(sudo:session): session closed for user root
Oct 14 09:47:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:47:07 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:47:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:47:07 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:47:07 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev e514fc5c-17b7-4073-81a7-19661a5bc7b7 does not exist
Oct 14 09:47:07 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 644e508d-eb9e-4192-8fa0-d2073bd38ec8 does not exist
Oct 14 09:47:07 compute-0 nova_compute[259627]: 2025-10-14 09:47:07.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:47:07 compute-0 nova_compute[259627]: 2025-10-14 09:47:07.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:47:08 compute-0 ceph-mon[74249]: pgmap v2821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:08 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:47:08 compute-0 sudo[426709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:47:08 compute-0 sudo[426709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:47:08 compute-0 sudo[426709]: pam_unix(sudo:session): session closed for user root
Oct 14 09:47:08 compute-0 sudo[426734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:47:08 compute-0 sudo[426734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:47:08 compute-0 sudo[426734]: pam_unix(sudo:session): session closed for user root
Oct 14 09:47:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:47:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:09 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:47:09 compute-0 podman[426760]: 2025-10-14 09:47:09.701964352 +0000 UTC m=+0.093235389 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid)
Oct 14 09:47:09 compute-0 podman[426759]: 2025-10-14 09:47:09.708192475 +0000 UTC m=+0.101765949 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 09:47:09 compute-0 nova_compute[259627]: 2025-10-14 09:47:09.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:47:10 compute-0 ceph-mon[74249]: pgmap v2822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:11 compute-0 nova_compute[259627]: 2025-10-14 09:47:11.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:12 compute-0 ceph-mon[74249]: pgmap v2823: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:12 compute-0 nova_compute[259627]: 2025-10-14 09:47:12.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:47:14 compute-0 ceph-mon[74249]: pgmap v2824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:16 compute-0 ceph-mon[74249]: pgmap v2825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:16 compute-0 nova_compute[259627]: 2025-10-14 09:47:16.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:17 compute-0 nova_compute[259627]: 2025-10-14 09:47:17.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:18 compute-0 ceph-mon[74249]: pgmap v2826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:47:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:20 compute-0 ceph-mon[74249]: pgmap v2827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:21 compute-0 nova_compute[259627]: 2025-10-14 09:47:21.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:22 compute-0 ceph-mon[74249]: pgmap v2828: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:22 compute-0 nova_compute[259627]: 2025-10-14 09:47:22.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:47:23 compute-0 podman[426801]: 2025-10-14 09:47:23.690408207 +0000 UTC m=+0.093456434 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Oct 14 09:47:23 compute-0 podman[426800]: 2025-10-14 09:47:23.769483638 +0000 UTC m=+0.177381264 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:47:24 compute-0 ceph-mon[74249]: pgmap v2829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:26 compute-0 ceph-mon[74249]: pgmap v2830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:26 compute-0 nova_compute[259627]: 2025-10-14 09:47:26.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 14 09:47:27 compute-0 nova_compute[259627]: 2025-10-14 09:47:27.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:28 compute-0 ceph-mon[74249]: pgmap v2831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 14 09:47:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:47:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 14 09:47:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Oct 14 09:47:30 compute-0 ceph-mon[74249]: pgmap v2832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 14 09:47:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Oct 14 09:47:30 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Oct 14 09:47:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 29 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 14 09:47:31 compute-0 ceph-mon[74249]: osdmap e295: 3 total, 3 up, 3 in
Oct 14 09:47:31 compute-0 nova_compute[259627]: 2025-10-14 09:47:31.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:32 compute-0 ceph-mon[74249]: pgmap v2834: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 29 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 14 09:47:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:47:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:47:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:47:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:47:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:47:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:47:32 compute-0 nova_compute[259627]: 2025-10-14 09:47:32.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 29 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 14 09:47:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:47:32
Oct 14 09:47:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:47:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:47:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'images', '.mgr', 'volumes', '.rgw.root', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'default.rgw.log']
Oct 14 09:47:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:47:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Oct 14 09:47:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Oct 14 09:47:33 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Oct 14 09:47:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:47:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:47:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:47:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:47:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:47:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:47:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:47:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:47:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:47:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:47:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:47:34 compute-0 ceph-mon[74249]: pgmap v2835: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 29 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 14 09:47:34 compute-0 ceph-mon[74249]: osdmap e296: 3 total, 3 up, 3 in
Oct 14 09:47:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 29 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Oct 14 09:47:36 compute-0 nova_compute[259627]: 2025-10-14 09:47:36.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:36 compute-0 ceph-mon[74249]: pgmap v2837: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 29 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Oct 14 09:47:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 14 09:47:37 compute-0 ceph-mon[74249]: pgmap v2838: 305 pgs: 305 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 14 09:47:37 compute-0 nova_compute[259627]: 2025-10-14 09:47:37.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:47:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Oct 14 09:47:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Oct 14 09:47:38 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Oct 14 09:47:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 305 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 33 op/s
Oct 14 09:47:39 compute-0 ceph-mon[74249]: osdmap e297: 3 total, 3 up, 3 in
Oct 14 09:47:39 compute-0 ceph-mon[74249]: pgmap v2840: 305 pgs: 305 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 33 op/s
Oct 14 09:47:40 compute-0 podman[426844]: 2025-10-14 09:47:40.678433643 +0000 UTC m=+0.079836470 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 09:47:40 compute-0 podman[426845]: 2025-10-14 09:47:40.6807672 +0000 UTC m=+0.078568199 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 14 09:47:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 33 op/s
Oct 14 09:47:41 compute-0 nova_compute[259627]: 2025-10-14 09:47:41.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:41 compute-0 ceph-mon[74249]: pgmap v2841: 305 pgs: 305 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 33 op/s
Oct 14 09:47:42 compute-0 nova_compute[259627]: 2025-10-14 09:47:42.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 KiB/s wr, 27 op/s
Oct 14 09:47:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:47:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:47:43 compute-0 ceph-mon[74249]: pgmap v2842: 305 pgs: 305 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 KiB/s wr, 27 op/s
Oct 14 09:47:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 305 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.7 KiB/s wr, 26 op/s
Oct 14 09:47:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Oct 14 09:47:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Oct 14 09:47:44 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Oct 14 09:47:45 compute-0 ceph-mon[74249]: pgmap v2843: 305 pgs: 305 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.7 KiB/s wr, 26 op/s
Oct 14 09:47:45 compute-0 ceph-mon[74249]: osdmap e298: 3 total, 3 up, 3 in
Oct 14 09:47:46 compute-0 nova_compute[259627]: 2025-10-14 09:47:46.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.9 KiB/s wr, 17 op/s
Oct 14 09:47:47 compute-0 nova_compute[259627]: 2025-10-14 09:47:47.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:47 compute-0 ceph-mon[74249]: pgmap v2845: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.9 KiB/s wr, 17 op/s
Oct 14 09:47:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:47:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 14 09:47:49 compute-0 ceph-mon[74249]: pgmap v2846: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 14 09:47:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 14 09:47:51 compute-0 nova_compute[259627]: 2025-10-14 09:47:51.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:52 compute-0 ceph-mon[74249]: pgmap v2847: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 14 09:47:52 compute-0 nova_compute[259627]: 2025-10-14 09:47:52.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 14 09:47:52 compute-0 nova_compute[259627]: 2025-10-14 09:47:52.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:47:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:47:54 compute-0 ceph-mon[74249]: pgmap v2848: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 14 09:47:54 compute-0 podman[426884]: 2025-10-14 09:47:54.65837634 +0000 UTC m=+0.065912359 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:47:54 compute-0 podman[426883]: 2025-10-14 09:47:54.683630449 +0000 UTC m=+0.090111922 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Oct 14 09:47:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 14 09:47:56 compute-0 ceph-mon[74249]: pgmap v2849: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 14 09:47:56 compute-0 nova_compute[259627]: 2025-10-14 09:47:56.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct 14 09:47:57 compute-0 nova_compute[259627]: 2025-10-14 09:47:57.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:47:58 compute-0 ceph-mon[74249]: pgmap v2850: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct 14 09:47:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:47:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:47:59 compute-0 nova_compute[259627]: 2025-10-14 09:47:59.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:48:00 compute-0 ceph-mon[74249]: pgmap v2851: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:01 compute-0 nova_compute[259627]: 2025-10-14 09:48:01.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:01 compute-0 nova_compute[259627]: 2025-10-14 09:48:01.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:48:02 compute-0 nova_compute[259627]: 2025-10-14 09:48:02.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:48:02 compute-0 nova_compute[259627]: 2025-10-14 09:48:02.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:48:02 compute-0 nova_compute[259627]: 2025-10-14 09:48:02.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:48:02 compute-0 nova_compute[259627]: 2025-10-14 09:48:02.005 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:48:02 compute-0 nova_compute[259627]: 2025-10-14 09:48:02.005 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:48:02 compute-0 ceph-mon[74249]: pgmap v2852: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:48:02 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/525910947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:48:02 compute-0 nova_compute[259627]: 2025-10-14 09:48:02.462 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:48:02 compute-0 nova_compute[259627]: 2025-10-14 09:48:02.650 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:48:02 compute-0 nova_compute[259627]: 2025-10-14 09:48:02.651 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3592MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:48:02 compute-0 nova_compute[259627]: 2025-10-14 09:48:02.651 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:48:02 compute-0 nova_compute[259627]: 2025-10-14 09:48:02.652 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:48:02 compute-0 nova_compute[259627]: 2025-10-14 09:48:02.738 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:48:02 compute-0 nova_compute[259627]: 2025-10-14 09:48:02.739 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:48:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:48:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:48:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:48:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:48:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:48:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:48:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:02 compute-0 nova_compute[259627]: 2025-10-14 09:48:02.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:02 compute-0 nova_compute[259627]: 2025-10-14 09:48:02.913 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:48:03 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/525910947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:48:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:48:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:48:03 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186373855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:48:03 compute-0 nova_compute[259627]: 2025-10-14 09:48:03.334 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:48:03 compute-0 nova_compute[259627]: 2025-10-14 09:48:03.342 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:48:03 compute-0 nova_compute[259627]: 2025-10-14 09:48:03.359 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:48:03 compute-0 nova_compute[259627]: 2025-10-14 09:48:03.361 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:48:03 compute-0 nova_compute[259627]: 2025-10-14 09:48:03.362 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:48:04 compute-0 ceph-mon[74249]: pgmap v2853: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:04 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1186373855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:48:04 compute-0 nova_compute[259627]: 2025-10-14 09:48:04.362 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:48:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:48:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/185654086' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:48:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:48:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/185654086' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:48:05 compute-0 nova_compute[259627]: 2025-10-14 09:48:05.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:48:05 compute-0 nova_compute[259627]: 2025-10-14 09:48:05.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:48:05 compute-0 nova_compute[259627]: 2025-10-14 09:48:05.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:48:06 compute-0 nova_compute[259627]: 2025-10-14 09:48:06.020 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:48:06 compute-0 ceph-mon[74249]: pgmap v2854: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/185654086' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:48:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/185654086' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:48:06 compute-0 nova_compute[259627]: 2025-10-14 09:48:06.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:07 compute-0 nova_compute[259627]: 2025-10-14 09:48:07.015 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:48:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:48:07.065 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:48:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:48:07.066 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:48:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:48:07.066 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:48:07 compute-0 nova_compute[259627]: 2025-10-14 09:48:07.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:08 compute-0 ceph-mon[74249]: pgmap v2855: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:08 compute-0 sudo[426970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:48:08 compute-0 sudo[426970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:08 compute-0 sudo[426970]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:48:08 compute-0 sudo[426995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:48:08 compute-0 sudo[426995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:08 compute-0 sudo[426995]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:08 compute-0 sudo[427020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:48:08 compute-0 sudo[427020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:08 compute-0 sudo[427020]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:08 compute-0 sudo[427045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 14 09:48:08 compute-0 sudo[427045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:08 compute-0 nova_compute[259627]: 2025-10-14 09:48:08.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:48:09 compute-0 podman[427139]: 2025-10-14 09:48:09.192401312 +0000 UTC m=+0.100022874 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:48:09 compute-0 podman[427139]: 2025-10-14 09:48:09.321567582 +0000 UTC m=+0.229189144 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:48:09 compute-0 nova_compute[259627]: 2025-10-14 09:48:09.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:48:09 compute-0 nova_compute[259627]: 2025-10-14 09:48:09.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:48:10 compute-0 ceph-mon[74249]: pgmap v2856: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:10 compute-0 sudo[427045]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:48:10 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:48:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:48:10 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:48:10 compute-0 sudo[427301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:48:10 compute-0 sudo[427301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:10 compute-0 sudo[427301]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:10 compute-0 sudo[427326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:48:10 compute-0 sudo[427326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:10 compute-0 sudo[427326]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:10 compute-0 sudo[427351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:48:10 compute-0 sudo[427351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:10 compute-0 sudo[427351]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:10 compute-0 sudo[427376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:48:10 compute-0 sudo[427376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:10 compute-0 sudo[427376]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:10 compute-0 nova_compute[259627]: 2025-10-14 09:48:10.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:48:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:48:11 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:48:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:48:11 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:48:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:48:11 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:48:11 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 07fd6601-b22c-44c4-bc05-680132fecab2 does not exist
Oct 14 09:48:11 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 1be47d71-0f27-4eb7-ad7b-00913c128e69 does not exist
Oct 14 09:48:11 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 06a248db-b2ba-45b0-bd58-71e3c0209744 does not exist
Oct 14 09:48:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:48:11 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:48:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:48:11 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:48:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:48:11 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:48:11 compute-0 sudo[427432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:48:11 compute-0 sudo[427432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:11 compute-0 sudo[427432]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:48:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:48:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:48:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:48:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:48:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:48:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:48:11 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:48:11 compute-0 sudo[427459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:48:11 compute-0 sudo[427459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:11 compute-0 sudo[427459]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:11 compute-0 podman[427456]: 2025-10-14 09:48:11.269193827 +0000 UTC m=+0.105782467 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 14 09:48:11 compute-0 podman[427457]: 2025-10-14 09:48:11.26890056 +0000 UTC m=+0.105527611 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 14 09:48:11 compute-0 nova_compute[259627]: 2025-10-14 09:48:11.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:11 compute-0 sudo[427519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:48:11 compute-0 sudo[427519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:11 compute-0 sudo[427519]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:11 compute-0 sudo[427547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:48:11 compute-0 sudo[427547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:11 compute-0 podman[427614]: 2025-10-14 09:48:11.880639702 +0000 UTC m=+0.069506276 container create 788054d7ff7bf65e72bd0ff6faadf492eb4289ecd3bf0ee76d33578700474657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_herschel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 09:48:11 compute-0 systemd[1]: Started libpod-conmon-788054d7ff7bf65e72bd0ff6faadf492eb4289ecd3bf0ee76d33578700474657.scope.
Oct 14 09:48:11 compute-0 podman[427614]: 2025-10-14 09:48:11.855550897 +0000 UTC m=+0.044417451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:48:11 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:48:11 compute-0 podman[427614]: 2025-10-14 09:48:11.983750343 +0000 UTC m=+0.172616917 container init 788054d7ff7bf65e72bd0ff6faadf492eb4289ecd3bf0ee76d33578700474657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_herschel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:48:11 compute-0 podman[427614]: 2025-10-14 09:48:11.991270777 +0000 UTC m=+0.180137361 container start 788054d7ff7bf65e72bd0ff6faadf492eb4289ecd3bf0ee76d33578700474657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_herschel, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 09:48:11 compute-0 podman[427614]: 2025-10-14 09:48:11.995191703 +0000 UTC m=+0.184058307 container attach 788054d7ff7bf65e72bd0ff6faadf492eb4289ecd3bf0ee76d33578700474657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_herschel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 09:48:11 compute-0 happy_herschel[427631]: 167 167
Oct 14 09:48:11 compute-0 systemd[1]: libpod-788054d7ff7bf65e72bd0ff6faadf492eb4289ecd3bf0ee76d33578700474657.scope: Deactivated successfully.
Oct 14 09:48:12 compute-0 podman[427614]: 2025-10-14 09:48:12.001199401 +0000 UTC m=+0.190065985 container died 788054d7ff7bf65e72bd0ff6faadf492eb4289ecd3bf0ee76d33578700474657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_herschel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 09:48:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-92d9bd470852f3821d00abbe72f11bb98c8a0a72c5d8e540e18a3d8530f619d1-merged.mount: Deactivated successfully.
Oct 14 09:48:12 compute-0 podman[427614]: 2025-10-14 09:48:12.04434112 +0000 UTC m=+0.233207674 container remove 788054d7ff7bf65e72bd0ff6faadf492eb4289ecd3bf0ee76d33578700474657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:48:12 compute-0 systemd[1]: libpod-conmon-788054d7ff7bf65e72bd0ff6faadf492eb4289ecd3bf0ee76d33578700474657.scope: Deactivated successfully.
Oct 14 09:48:12 compute-0 ceph-mon[74249]: pgmap v2857: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:12 compute-0 podman[427654]: 2025-10-14 09:48:12.308599624 +0000 UTC m=+0.082321491 container create 68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 09:48:12 compute-0 podman[427654]: 2025-10-14 09:48:12.278110976 +0000 UTC m=+0.051832903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:48:12 compute-0 systemd[1]: Started libpod-conmon-68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3.scope.
Oct 14 09:48:12 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4c697e719d273fb1e31e6f7db1db66badd3eaad5c1aa6db9eb56f6752ea4dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4c697e719d273fb1e31e6f7db1db66badd3eaad5c1aa6db9eb56f6752ea4dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4c697e719d273fb1e31e6f7db1db66badd3eaad5c1aa6db9eb56f6752ea4dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4c697e719d273fb1e31e6f7db1db66badd3eaad5c1aa6db9eb56f6752ea4dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4c697e719d273fb1e31e6f7db1db66badd3eaad5c1aa6db9eb56f6752ea4dc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:48:12 compute-0 podman[427654]: 2025-10-14 09:48:12.449624195 +0000 UTC m=+0.223346112 container init 68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:48:12 compute-0 podman[427654]: 2025-10-14 09:48:12.464985962 +0000 UTC m=+0.238707829 container start 68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 09:48:12 compute-0 podman[427654]: 2025-10-14 09:48:12.469641736 +0000 UTC m=+0.243363593 container attach 68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 09:48:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:12 compute-0 nova_compute[259627]: 2025-10-14 09:48:12.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:48:13 compute-0 festive_kowalevski[427671]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:48:13 compute-0 festive_kowalevski[427671]: --> relative data size: 1.0
Oct 14 09:48:13 compute-0 festive_kowalevski[427671]: --> All data devices are unavailable
Oct 14 09:48:13 compute-0 systemd[1]: libpod-68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3.scope: Deactivated successfully.
Oct 14 09:48:13 compute-0 podman[427654]: 2025-10-14 09:48:13.696956624 +0000 UTC m=+1.470678491 container died 68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:48:13 compute-0 systemd[1]: libpod-68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3.scope: Consumed 1.197s CPU time.
Oct 14 09:48:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd4c697e719d273fb1e31e6f7db1db66badd3eaad5c1aa6db9eb56f6752ea4dc-merged.mount: Deactivated successfully.
Oct 14 09:48:13 compute-0 podman[427654]: 2025-10-14 09:48:13.79093178 +0000 UTC m=+1.564653657 container remove 68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 14 09:48:13 compute-0 systemd[1]: libpod-conmon-68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3.scope: Deactivated successfully.
Oct 14 09:48:13 compute-0 sudo[427547]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:13 compute-0 sudo[427711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:48:13 compute-0 sudo[427711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:13 compute-0 sudo[427711]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:14 compute-0 sudo[427736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:48:14 compute-0 sudo[427736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:14 compute-0 sudo[427736]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:14 compute-0 sudo[427761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:48:14 compute-0 sudo[427761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:14 compute-0 sudo[427761]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:14 compute-0 sudo[427786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:48:14 compute-0 sudo[427786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:14 compute-0 ceph-mon[74249]: pgmap v2858: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:14 compute-0 podman[427855]: 2025-10-14 09:48:14.661650428 +0000 UTC m=+0.073026663 container create 01f35436e1205e4026e8bc9861e64c3633b4b63cd7f226d8480fb7f9e4b9bd42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ishizaka, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:48:14 compute-0 systemd[1]: Started libpod-conmon-01f35436e1205e4026e8bc9861e64c3633b4b63cd7f226d8480fb7f9e4b9bd42.scope.
Oct 14 09:48:14 compute-0 podman[427855]: 2025-10-14 09:48:14.630885943 +0000 UTC m=+0.042262238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:48:14 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:48:14 compute-0 podman[427855]: 2025-10-14 09:48:14.752789904 +0000 UTC m=+0.164166199 container init 01f35436e1205e4026e8bc9861e64c3633b4b63cd7f226d8480fb7f9e4b9bd42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 09:48:14 compute-0 podman[427855]: 2025-10-14 09:48:14.764888571 +0000 UTC m=+0.176264796 container start 01f35436e1205e4026e8bc9861e64c3633b4b63cd7f226d8480fb7f9e4b9bd42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ishizaka, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 09:48:14 compute-0 podman[427855]: 2025-10-14 09:48:14.769589896 +0000 UTC m=+0.180966151 container attach 01f35436e1205e4026e8bc9861e64c3633b4b63cd7f226d8480fb7f9e4b9bd42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:48:14 compute-0 silly_ishizaka[427872]: 167 167
Oct 14 09:48:14 compute-0 systemd[1]: libpod-01f35436e1205e4026e8bc9861e64c3633b4b63cd7f226d8480fb7f9e4b9bd42.scope: Deactivated successfully.
Oct 14 09:48:14 compute-0 podman[427855]: 2025-10-14 09:48:14.771670887 +0000 UTC m=+0.183047112 container died 01f35436e1205e4026e8bc9861e64c3633b4b63cd7f226d8480fb7f9e4b9bd42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 09:48:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dc3234ce6d4f37bfd8b066f18cc4920fe82a990b20d209d1b3e98375e16ccca-merged.mount: Deactivated successfully.
Oct 14 09:48:14 compute-0 podman[427855]: 2025-10-14 09:48:14.828050021 +0000 UTC m=+0.239426246 container remove 01f35436e1205e4026e8bc9861e64c3633b4b63cd7f226d8480fb7f9e4b9bd42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ishizaka, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 09:48:14 compute-0 systemd[1]: libpod-conmon-01f35436e1205e4026e8bc9861e64c3633b4b63cd7f226d8480fb7f9e4b9bd42.scope: Deactivated successfully.
Oct 14 09:48:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:15 compute-0 podman[427894]: 2025-10-14 09:48:15.041446097 +0000 UTC m=+0.048626904 container create 1f7f59a2a6bc7e382cea76dad8b9e616e2d8849c8b5e559c24274ea3dbb171b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:48:15 compute-0 systemd[1]: Started libpod-conmon-1f7f59a2a6bc7e382cea76dad8b9e616e2d8849c8b5e559c24274ea3dbb171b6.scope.
Oct 14 09:48:15 compute-0 podman[427894]: 2025-10-14 09:48:15.021544479 +0000 UTC m=+0.028725296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:48:15 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021f0d25db3be19682bd78ae39058efd7040f44704cddb0f59a98a4c82d0c922/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021f0d25db3be19682bd78ae39058efd7040f44704cddb0f59a98a4c82d0c922/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021f0d25db3be19682bd78ae39058efd7040f44704cddb0f59a98a4c82d0c922/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021f0d25db3be19682bd78ae39058efd7040f44704cddb0f59a98a4c82d0c922/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:48:15 compute-0 podman[427894]: 2025-10-14 09:48:15.16995541 +0000 UTC m=+0.177136237 container init 1f7f59a2a6bc7e382cea76dad8b9e616e2d8849c8b5e559c24274ea3dbb171b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 09:48:15 compute-0 podman[427894]: 2025-10-14 09:48:15.187006179 +0000 UTC m=+0.194186976 container start 1f7f59a2a6bc7e382cea76dad8b9e616e2d8849c8b5e559c24274ea3dbb171b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:48:15 compute-0 podman[427894]: 2025-10-14 09:48:15.191852268 +0000 UTC m=+0.199033075 container attach 1f7f59a2a6bc7e382cea76dad8b9e616e2d8849c8b5e559c24274ea3dbb171b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 09:48:15 compute-0 inspiring_tu[427911]: {
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:     "0": [
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:         {
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "devices": [
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "/dev/loop3"
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             ],
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "lv_name": "ceph_lv0",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "lv_size": "21470642176",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "name": "ceph_lv0",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "tags": {
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.cluster_name": "ceph",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.crush_device_class": "",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.encrypted": "0",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.osd_id": "0",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.type": "block",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.vdo": "0"
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             },
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "type": "block",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "vg_name": "ceph_vg0"
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:         }
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:     ],
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:     "1": [
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:         {
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "devices": [
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "/dev/loop4"
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             ],
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "lv_name": "ceph_lv1",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "lv_size": "21470642176",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "name": "ceph_lv1",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "tags": {
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.cluster_name": "ceph",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.crush_device_class": "",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.encrypted": "0",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.osd_id": "1",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.type": "block",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.vdo": "0"
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             },
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "type": "block",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "vg_name": "ceph_vg1"
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:         }
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:     ],
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:     "2": [
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:         {
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "devices": [
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "/dev/loop5"
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             ],
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "lv_name": "ceph_lv2",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "lv_size": "21470642176",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "name": "ceph_lv2",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "tags": {
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.cluster_name": "ceph",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.crush_device_class": "",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.encrypted": "0",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.osd_id": "2",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.type": "block",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:                 "ceph.vdo": "0"
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             },
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "type": "block",
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:             "vg_name": "ceph_vg2"
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:         }
Oct 14 09:48:15 compute-0 inspiring_tu[427911]:     ]
Oct 14 09:48:15 compute-0 inspiring_tu[427911]: }
Oct 14 09:48:15 compute-0 systemd[1]: libpod-1f7f59a2a6bc7e382cea76dad8b9e616e2d8849c8b5e559c24274ea3dbb171b6.scope: Deactivated successfully.
Oct 14 09:48:15 compute-0 podman[427894]: 2025-10-14 09:48:15.983060694 +0000 UTC m=+0.990241531 container died 1f7f59a2a6bc7e382cea76dad8b9e616e2d8849c8b5e559c24274ea3dbb171b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:48:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-021f0d25db3be19682bd78ae39058efd7040f44704cddb0f59a98a4c82d0c922-merged.mount: Deactivated successfully.
Oct 14 09:48:16 compute-0 podman[427894]: 2025-10-14 09:48:16.051691809 +0000 UTC m=+1.058872576 container remove 1f7f59a2a6bc7e382cea76dad8b9e616e2d8849c8b5e559c24274ea3dbb171b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:48:16 compute-0 systemd[1]: libpod-conmon-1f7f59a2a6bc7e382cea76dad8b9e616e2d8849c8b5e559c24274ea3dbb171b6.scope: Deactivated successfully.
Oct 14 09:48:16 compute-0 sudo[427786]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:16 compute-0 sudo[427932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:48:16 compute-0 sudo[427932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:16 compute-0 sudo[427932]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:16 compute-0 ceph-mon[74249]: pgmap v2859: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:16 compute-0 sudo[427957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:48:16 compute-0 sudo[427957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:16 compute-0 sudo[427957]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:16 compute-0 unix_chkpwd[427995]: password check failed for user (root)
Oct 14 09:48:16 compute-0 sshd-session[427916]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 14 09:48:16 compute-0 sudo[427982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:48:16 compute-0 sudo[427982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:16 compute-0 nova_compute[259627]: 2025-10-14 09:48:16.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:16 compute-0 sudo[427982]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:16 compute-0 sudo[428008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:48:16 compute-0 sudo[428008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:16 compute-0 podman[428072]: 2025-10-14 09:48:16.838708301 +0000 UTC m=+0.055806890 container create 3516f7c881ee529e06cbf8781a1308054dc00f0ade7ee2d10759fda6baba258f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 09:48:16 compute-0 systemd[1]: Started libpod-conmon-3516f7c881ee529e06cbf8781a1308054dc00f0ade7ee2d10759fda6baba258f.scope.
Oct 14 09:48:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:16 compute-0 podman[428072]: 2025-10-14 09:48:16.819794877 +0000 UTC m=+0.036893496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:48:16 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:48:16 compute-0 podman[428072]: 2025-10-14 09:48:16.943196715 +0000 UTC m=+0.160295334 container init 3516f7c881ee529e06cbf8781a1308054dc00f0ade7ee2d10759fda6baba258f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_turing, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:48:16 compute-0 podman[428072]: 2025-10-14 09:48:16.951764885 +0000 UTC m=+0.168863504 container start 3516f7c881ee529e06cbf8781a1308054dc00f0ade7ee2d10759fda6baba258f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:48:16 compute-0 podman[428072]: 2025-10-14 09:48:16.955800354 +0000 UTC m=+0.172898973 container attach 3516f7c881ee529e06cbf8781a1308054dc00f0ade7ee2d10759fda6baba258f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_turing, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 09:48:16 compute-0 dazzling_turing[428089]: 167 167
Oct 14 09:48:16 compute-0 podman[428072]: 2025-10-14 09:48:16.963658507 +0000 UTC m=+0.180757126 container died 3516f7c881ee529e06cbf8781a1308054dc00f0ade7ee2d10759fda6baba258f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:48:16 compute-0 systemd[1]: libpod-3516f7c881ee529e06cbf8781a1308054dc00f0ade7ee2d10759fda6baba258f.scope: Deactivated successfully.
Oct 14 09:48:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-80113aaae461cfeb15253ec63192479511335354395d349eb35c20322f09ddb6-merged.mount: Deactivated successfully.
Oct 14 09:48:17 compute-0 podman[428072]: 2025-10-14 09:48:17.009943153 +0000 UTC m=+0.227041752 container remove 3516f7c881ee529e06cbf8781a1308054dc00f0ade7ee2d10759fda6baba258f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:48:17 compute-0 systemd[1]: libpod-conmon-3516f7c881ee529e06cbf8781a1308054dc00f0ade7ee2d10759fda6baba258f.scope: Deactivated successfully.
Oct 14 09:48:17 compute-0 podman[428113]: 2025-10-14 09:48:17.181973215 +0000 UTC m=+0.043571921 container create e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:48:17 compute-0 systemd[1]: Started libpod-conmon-e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06.scope.
Oct 14 09:48:17 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:48:17 compute-0 podman[428113]: 2025-10-14 09:48:17.164071725 +0000 UTC m=+0.025670421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c460a998d8014cab56a82951a9b3439c42790c2de4f2260493172b8e650ebd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c460a998d8014cab56a82951a9b3439c42790c2de4f2260493172b8e650ebd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c460a998d8014cab56a82951a9b3439c42790c2de4f2260493172b8e650ebd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c460a998d8014cab56a82951a9b3439c42790c2de4f2260493172b8e650ebd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:48:17 compute-0 podman[428113]: 2025-10-14 09:48:17.282483691 +0000 UTC m=+0.144082457 container init e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:48:17 compute-0 podman[428113]: 2025-10-14 09:48:17.294613819 +0000 UTC m=+0.156212525 container start e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:48:17 compute-0 podman[428113]: 2025-10-14 09:48:17.298551835 +0000 UTC m=+0.160150511 container attach e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:48:17 compute-0 nova_compute[259627]: 2025-10-14 09:48:17.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:18 compute-0 ceph-mon[74249]: pgmap v2860: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:48:18 compute-0 agitated_cerf[428129]: {
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:         "osd_id": 2,
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:         "type": "bluestore"
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:     },
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:         "osd_id": 1,
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:         "type": "bluestore"
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:     },
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:         "osd_id": 0,
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:         "type": "bluestore"
Oct 14 09:48:18 compute-0 agitated_cerf[428129]:     }
Oct 14 09:48:18 compute-0 agitated_cerf[428129]: }
Oct 14 09:48:18 compute-0 systemd[1]: libpod-e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06.scope: Deactivated successfully.
Oct 14 09:48:18 compute-0 systemd[1]: libpod-e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06.scope: Consumed 1.059s CPU time.
Oct 14 09:48:18 compute-0 podman[428113]: 2025-10-14 09:48:18.346083132 +0000 UTC m=+1.207681878 container died e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 09:48:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-63c460a998d8014cab56a82951a9b3439c42790c2de4f2260493172b8e650ebd-merged.mount: Deactivated successfully.
Oct 14 09:48:18 compute-0 podman[428113]: 2025-10-14 09:48:18.42464777 +0000 UTC m=+1.286246476 container remove e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:48:18 compute-0 systemd[1]: libpod-conmon-e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06.scope: Deactivated successfully.
Oct 14 09:48:18 compute-0 sudo[428008]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:48:18 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:48:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:48:18 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:48:18 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 426223b8-f791-4f97-8486-f84d11baf428 does not exist
Oct 14 09:48:18 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 1e207904-f74f-4720-85ac-5e7400eaf33d does not exist
Oct 14 09:48:18 compute-0 sshd-session[427916]: Failed password for root from 193.46.255.244 port 12418 ssh2
Oct 14 09:48:18 compute-0 sudo[428176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:48:18 compute-0 sudo[428176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:18 compute-0 sudo[428176]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:18 compute-0 sudo[428201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:48:18 compute-0 sudo[428201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:48:18 compute-0 sudo[428201]: pam_unix(sudo:session): session closed for user root
Oct 14 09:48:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:19 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:48:19 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:48:19 compute-0 ceph-mon[74249]: pgmap v2861: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:19 compute-0 unix_chkpwd[428226]: password check failed for user (root)
Oct 14 09:48:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:21 compute-0 nova_compute[259627]: 2025-10-14 09:48:21.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:21 compute-0 sshd-session[427916]: Failed password for root from 193.46.255.244 port 12418 ssh2
Oct 14 09:48:21 compute-0 ceph-mon[74249]: pgmap v2862: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:22 compute-0 nova_compute[259627]: 2025-10-14 09:48:22.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:23 compute-0 unix_chkpwd[428227]: password check failed for user (root)
Oct 14 09:48:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:48:23 compute-0 ceph-mon[74249]: pgmap v2863: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:24 compute-0 sshd-session[427916]: Failed password for root from 193.46.255.244 port 12418 ssh2
Oct 14 09:48:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:25 compute-0 podman[428229]: 2025-10-14 09:48:25.684439187 +0000 UTC m=+0.090978154 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent)
Oct 14 09:48:25 compute-0 podman[428228]: 2025-10-14 09:48:25.71433528 +0000 UTC m=+0.120873757 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct 14 09:48:25 compute-0 ceph-mon[74249]: pgmap v2864: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:26 compute-0 nova_compute[259627]: 2025-10-14 09:48:26.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:26 compute-0 sshd-session[427916]: Received disconnect from 193.46.255.244 port 12418:11:  [preauth]
Oct 14 09:48:26 compute-0 sshd-session[427916]: Disconnected from authenticating user root 193.46.255.244 port 12418 [preauth]
Oct 14 09:48:26 compute-0 sshd-session[427916]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 14 09:48:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:27 compute-0 nova_compute[259627]: 2025-10-14 09:48:27.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:27 compute-0 ceph-mon[74249]: pgmap v2865: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:48:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:29 compute-0 ceph-mon[74249]: pgmap v2866: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:31 compute-0 nova_compute[259627]: 2025-10-14 09:48:31.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:32 compute-0 ceph-mon[74249]: pgmap v2867: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:48:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:48:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:48:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:48:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:48:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:48:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:48:32
Oct 14 09:48:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:48:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:48:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'volumes', 'vms', 'images', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', '.mgr']
Oct 14 09:48:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:48:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:32 compute-0 nova_compute[259627]: 2025-10-14 09:48:32.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:48:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:48:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:48:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:48:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:48:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:48:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:48:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:48:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:48:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:48:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:48:34 compute-0 ceph-mon[74249]: pgmap v2868: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:35 compute-0 nova_compute[259627]: 2025-10-14 09:48:35.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:48:36 compute-0 ceph-mon[74249]: pgmap v2869: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:36 compute-0 nova_compute[259627]: 2025-10-14 09:48:36.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:38 compute-0 nova_compute[259627]: 2025-10-14 09:48:38.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:38 compute-0 ceph-mon[74249]: pgmap v2870: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:48:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:40 compute-0 ceph-mon[74249]: pgmap v2871: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:41 compute-0 nova_compute[259627]: 2025-10-14 09:48:41.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:41 compute-0 podman[428271]: 2025-10-14 09:48:41.657710432 +0000 UTC m=+0.073057794 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 14 09:48:41 compute-0 podman[428270]: 2025-10-14 09:48:41.679182778 +0000 UTC m=+0.088592744 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 09:48:42 compute-0 ceph-mon[74249]: pgmap v2872: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:43 compute-0 nova_compute[259627]: 2025-10-14 09:48:43.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:48:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:48:44 compute-0 ceph-mon[74249]: pgmap v2873: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:46 compute-0 ceph-mon[74249]: pgmap v2874: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:46 compute-0 nova_compute[259627]: 2025-10-14 09:48:46.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:48 compute-0 nova_compute[259627]: 2025-10-14 09:48:48.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:48 compute-0 ceph-mon[74249]: pgmap v2875: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:48:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:50 compute-0 ceph-mon[74249]: pgmap v2876: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:51 compute-0 nova_compute[259627]: 2025-10-14 09:48:51.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:52 compute-0 ceph-mon[74249]: pgmap v2877: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:53 compute-0 nova_compute[259627]: 2025-10-14 09:48:53.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:48:53 compute-0 nova_compute[259627]: 2025-10-14 09:48:53.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:48:54 compute-0 ceph-mon[74249]: pgmap v2878: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:55 compute-0 nova_compute[259627]: 2025-10-14 09:48:55.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:48:55 compute-0 nova_compute[259627]: 2025-10-14 09:48:55.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 09:48:56 compute-0 ceph-mon[74249]: pgmap v2879: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:56 compute-0 nova_compute[259627]: 2025-10-14 09:48:56.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:56 compute-0 podman[428311]: 2025-10-14 09:48:56.713219561 +0000 UTC m=+0.110100023 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 09:48:56 compute-0 podman[428310]: 2025-10-14 09:48:56.734543774 +0000 UTC m=+0.136310345 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 09:48:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:58 compute-0 nova_compute[259627]: 2025-10-14 09:48:58.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:48:58 compute-0 ceph-mon[74249]: pgmap v2880: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:48:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:48:59 compute-0 nova_compute[259627]: 2025-10-14 09:48:59.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:49:00 compute-0 ceph-mon[74249]: pgmap v2881: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:01 compute-0 nova_compute[259627]: 2025-10-14 09:49:01.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:02 compute-0 ceph-mon[74249]: pgmap v2882: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:49:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:49:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:49:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:49:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:49:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:49:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:02 compute-0 nova_compute[259627]: 2025-10-14 09:49:02.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:49:03 compute-0 nova_compute[259627]: 2025-10-14 09:49:03.020 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:49:03 compute-0 nova_compute[259627]: 2025-10-14 09:49:03.021 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:49:03 compute-0 nova_compute[259627]: 2025-10-14 09:49:03.021 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:49:03 compute-0 nova_compute[259627]: 2025-10-14 09:49:03.021 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:49:03 compute-0 nova_compute[259627]: 2025-10-14 09:49:03.022 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:49:03 compute-0 nova_compute[259627]: 2025-10-14 09:49:03.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:49:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:49:03 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2925532800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:49:03 compute-0 nova_compute[259627]: 2025-10-14 09:49:03.511 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:49:03 compute-0 nova_compute[259627]: 2025-10-14 09:49:03.690 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:49:03 compute-0 nova_compute[259627]: 2025-10-14 09:49:03.691 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3584MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:49:03 compute-0 nova_compute[259627]: 2025-10-14 09:49:03.691 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:49:03 compute-0 nova_compute[259627]: 2025-10-14 09:49:03.692 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:49:03 compute-0 nova_compute[259627]: 2025-10-14 09:49:03.778 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:49:03 compute-0 nova_compute[259627]: 2025-10-14 09:49:03.779 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:49:03 compute-0 nova_compute[259627]: 2025-10-14 09:49:03.793 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:49:04 compute-0 ceph-mon[74249]: pgmap v2883: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:04 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2925532800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:49:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:49:04 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3921402581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:49:04 compute-0 nova_compute[259627]: 2025-10-14 09:49:04.269 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:49:04 compute-0 nova_compute[259627]: 2025-10-14 09:49:04.276 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:49:04 compute-0 nova_compute[259627]: 2025-10-14 09:49:04.295 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:49:04 compute-0 nova_compute[259627]: 2025-10-14 09:49:04.296 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:49:04 compute-0 nova_compute[259627]: 2025-10-14 09:49:04.297 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:49:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3921402581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:49:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:49:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1321879941' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:49:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:49:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1321879941' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:49:06 compute-0 ceph-mon[74249]: pgmap v2884: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1321879941' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:49:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1321879941' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:49:06 compute-0 nova_compute[259627]: 2025-10-14 09:49:06.297 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:49:06 compute-0 nova_compute[259627]: 2025-10-14 09:49:06.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:06 compute-0 nova_compute[259627]: 2025-10-14 09:49:06.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:49:06 compute-0 nova_compute[259627]: 2025-10-14 09:49:06.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:49:06 compute-0 nova_compute[259627]: 2025-10-14 09:49:06.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:49:07 compute-0 nova_compute[259627]: 2025-10-14 09:49:07.000 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:49:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:49:07.066 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:49:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:49:07.067 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:49:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:49:07.067 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:49:08 compute-0 nova_compute[259627]: 2025-10-14 09:49:08.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:08 compute-0 ceph-mon[74249]: pgmap v2885: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:49:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:08 compute-0 nova_compute[259627]: 2025-10-14 09:49:08.994 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:49:10 compute-0 ceph-mon[74249]: pgmap v2886: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:10 compute-0 nova_compute[259627]: 2025-10-14 09:49:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:49:11 compute-0 nova_compute[259627]: 2025-10-14 09:49:11.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:11 compute-0 nova_compute[259627]: 2025-10-14 09:49:11.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:49:11 compute-0 nova_compute[259627]: 2025-10-14 09:49:11.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:49:12 compute-0 ceph-mon[74249]: pgmap v2887: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:12 compute-0 podman[428397]: 2025-10-14 09:49:12.664595969 +0000 UTC m=+0.070650335 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Oct 14 09:49:12 compute-0 podman[428396]: 2025-10-14 09:49:12.687941582 +0000 UTC m=+0.089318453 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible)
Oct 14 09:49:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:12 compute-0 nova_compute[259627]: 2025-10-14 09:49:12.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:49:13 compute-0 nova_compute[259627]: 2025-10-14 09:49:13.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:49:14 compute-0 ceph-mon[74249]: pgmap v2888: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:16 compute-0 ceph-mon[74249]: pgmap v2889: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:16 compute-0 nova_compute[259627]: 2025-10-14 09:49:16.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:16 compute-0 nova_compute[259627]: 2025-10-14 09:49:16.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:49:18 compute-0 nova_compute[259627]: 2025-10-14 09:49:18.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:18 compute-0 ceph-mon[74249]: pgmap v2890: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:49:18 compute-0 sudo[428434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:49:18 compute-0 sudo[428434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:49:18 compute-0 sudo[428434]: pam_unix(sudo:session): session closed for user root
Oct 14 09:49:18 compute-0 sudo[428459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:49:18 compute-0 sudo[428459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:49:18 compute-0 sudo[428459]: pam_unix(sudo:session): session closed for user root
Oct 14 09:49:18 compute-0 sudo[428484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:49:18 compute-0 sudo[428484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:49:18 compute-0 sudo[428484]: pam_unix(sudo:session): session closed for user root
Oct 14 09:49:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:19 compute-0 sudo[428509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:49:19 compute-0 sudo[428509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:49:19 compute-0 sudo[428509]: pam_unix(sudo:session): session closed for user root
Oct 14 09:49:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:49:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:49:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:49:19 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:49:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:49:19 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:49:19 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev dc31a80b-0eba-468c-b2f8-0ea21ba60128 does not exist
Oct 14 09:49:19 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 5c32462b-ec66-44e7-ae3e-cce66fcec746 does not exist
Oct 14 09:49:19 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 001b3d70-dea1-4784-8520-dc6dca6cb889 does not exist
Oct 14 09:49:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:49:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:49:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:49:19 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:49:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:49:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:49:19 compute-0 sudo[428566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:49:19 compute-0 sudo[428566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:49:19 compute-0 sudo[428566]: pam_unix(sudo:session): session closed for user root
Oct 14 09:49:19 compute-0 sudo[428591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:49:19 compute-0 sudo[428591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:49:19 compute-0 sudo[428591]: pam_unix(sudo:session): session closed for user root
Oct 14 09:49:19 compute-0 sudo[428616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:49:19 compute-0 sudo[428616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:49:19 compute-0 sudo[428616]: pam_unix(sudo:session): session closed for user root
Oct 14 09:49:20 compute-0 sudo[428641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:49:20 compute-0 sudo[428641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:49:20 compute-0 ceph-mon[74249]: pgmap v2891: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:20 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:49:20 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:49:20 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:49:20 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:49:20 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:49:20 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:49:20 compute-0 podman[428709]: 2025-10-14 09:49:20.468715698 +0000 UTC m=+0.077888392 container create ca23e933a49177a6466f1ae9f492af2db095203f57982c15d735a12a9b6b70d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:49:20 compute-0 systemd[1]: Started libpod-conmon-ca23e933a49177a6466f1ae9f492af2db095203f57982c15d735a12a9b6b70d3.scope.
Oct 14 09:49:20 compute-0 podman[428709]: 2025-10-14 09:49:20.434356115 +0000 UTC m=+0.043528859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:49:20 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:49:20 compute-0 podman[428709]: 2025-10-14 09:49:20.566407556 +0000 UTC m=+0.175580300 container init ca23e933a49177a6466f1ae9f492af2db095203f57982c15d735a12a9b6b70d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 09:49:20 compute-0 podman[428709]: 2025-10-14 09:49:20.580632865 +0000 UTC m=+0.189805519 container start ca23e933a49177a6466f1ae9f492af2db095203f57982c15d735a12a9b6b70d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 09:49:20 compute-0 podman[428709]: 2025-10-14 09:49:20.585058013 +0000 UTC m=+0.194230757 container attach ca23e933a49177a6466f1ae9f492af2db095203f57982c15d735a12a9b6b70d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:49:20 compute-0 crazy_hertz[428725]: 167 167
Oct 14 09:49:20 compute-0 systemd[1]: libpod-ca23e933a49177a6466f1ae9f492af2db095203f57982c15d735a12a9b6b70d3.scope: Deactivated successfully.
Oct 14 09:49:20 compute-0 podman[428709]: 2025-10-14 09:49:20.587719969 +0000 UTC m=+0.196892633 container died ca23e933a49177a6466f1ae9f492af2db095203f57982c15d735a12a9b6b70d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 09:49:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d64c3d00755456c07a4ab33419e3187951c0eda5c41d86c7adf39c20f1dcdf2b-merged.mount: Deactivated successfully.
Oct 14 09:49:20 compute-0 podman[428709]: 2025-10-14 09:49:20.636964587 +0000 UTC m=+0.246137241 container remove ca23e933a49177a6466f1ae9f492af2db095203f57982c15d735a12a9b6b70d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:49:20 compute-0 systemd[1]: libpod-conmon-ca23e933a49177a6466f1ae9f492af2db095203f57982c15d735a12a9b6b70d3.scope: Deactivated successfully.
Oct 14 09:49:20 compute-0 podman[428749]: 2025-10-14 09:49:20.890334083 +0000 UTC m=+0.075671698 container create 92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 09:49:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:20 compute-0 systemd[1]: Started libpod-conmon-92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e.scope.
Oct 14 09:49:20 compute-0 podman[428749]: 2025-10-14 09:49:20.857638821 +0000 UTC m=+0.042976466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:49:20 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660f9e0bfcbbb5730655060b2bba22afc8df7eeac98d18e99c17b8ea7580f7c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660f9e0bfcbbb5730655060b2bba22afc8df7eeac98d18e99c17b8ea7580f7c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660f9e0bfcbbb5730655060b2bba22afc8df7eeac98d18e99c17b8ea7580f7c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660f9e0bfcbbb5730655060b2bba22afc8df7eeac98d18e99c17b8ea7580f7c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660f9e0bfcbbb5730655060b2bba22afc8df7eeac98d18e99c17b8ea7580f7c9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:49:21 compute-0 podman[428749]: 2025-10-14 09:49:21.006436672 +0000 UTC m=+0.191774257 container init 92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:49:21 compute-0 podman[428749]: 2025-10-14 09:49:21.024222658 +0000 UTC m=+0.209560263 container start 92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 09:49:21 compute-0 podman[428749]: 2025-10-14 09:49:21.028670427 +0000 UTC m=+0.214007992 container attach 92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:49:21 compute-0 nova_compute[259627]: 2025-10-14 09:49:21.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:22 compute-0 vibrant_murdock[428766]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:49:22 compute-0 vibrant_murdock[428766]: --> relative data size: 1.0
Oct 14 09:49:22 compute-0 vibrant_murdock[428766]: --> All data devices are unavailable
Oct 14 09:49:22 compute-0 systemd[1]: libpod-92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e.scope: Deactivated successfully.
Oct 14 09:49:22 compute-0 podman[428749]: 2025-10-14 09:49:22.199845735 +0000 UTC m=+1.385183380 container died 92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 09:49:22 compute-0 systemd[1]: libpod-92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e.scope: Consumed 1.135s CPU time.
Oct 14 09:49:22 compute-0 ceph-mon[74249]: pgmap v2892: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-660f9e0bfcbbb5730655060b2bba22afc8df7eeac98d18e99c17b8ea7580f7c9-merged.mount: Deactivated successfully.
Oct 14 09:49:22 compute-0 podman[428749]: 2025-10-14 09:49:22.287357812 +0000 UTC m=+1.472695417 container remove 92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 09:49:22 compute-0 systemd[1]: libpod-conmon-92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e.scope: Deactivated successfully.
Oct 14 09:49:22 compute-0 sudo[428641]: pam_unix(sudo:session): session closed for user root
Oct 14 09:49:22 compute-0 sudo[428808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:49:22 compute-0 sudo[428808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:49:22 compute-0 sudo[428808]: pam_unix(sudo:session): session closed for user root
Oct 14 09:49:22 compute-0 sudo[428833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:49:22 compute-0 sudo[428833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:49:22 compute-0 sudo[428833]: pam_unix(sudo:session): session closed for user root
Oct 14 09:49:22 compute-0 sudo[428858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:49:22 compute-0 sudo[428858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:49:22 compute-0 sudo[428858]: pam_unix(sudo:session): session closed for user root
Oct 14 09:49:22 compute-0 sudo[428883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:49:22 compute-0 sudo[428883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:49:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:22 compute-0 podman[428948]: 2025-10-14 09:49:22.999460505 +0000 UTC m=+0.038804383 container create 78a3dd4a175b82380591c653a784baf46bdfa7f5cdf0bcfec4b5da2bf1f6d683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:49:23 compute-0 systemd[1]: Started libpod-conmon-78a3dd4a175b82380591c653a784baf46bdfa7f5cdf0bcfec4b5da2bf1f6d683.scope.
Oct 14 09:49:23 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:49:23 compute-0 podman[428948]: 2025-10-14 09:49:22.982616182 +0000 UTC m=+0.021960060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:49:23 compute-0 podman[428948]: 2025-10-14 09:49:23.08280207 +0000 UTC m=+0.122146018 container init 78a3dd4a175b82380591c653a784baf46bdfa7f5cdf0bcfec4b5da2bf1f6d683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_roentgen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:49:23 compute-0 podman[428948]: 2025-10-14 09:49:23.09217526 +0000 UTC m=+0.131519128 container start 78a3dd4a175b82380591c653a784baf46bdfa7f5cdf0bcfec4b5da2bf1f6d683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_roentgen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:49:23 compute-0 podman[428948]: 2025-10-14 09:49:23.096709951 +0000 UTC m=+0.136053849 container attach 78a3dd4a175b82380591c653a784baf46bdfa7f5cdf0bcfec4b5da2bf1f6d683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 09:49:23 compute-0 xenodochial_roentgen[428964]: 167 167
Oct 14 09:49:23 compute-0 systemd[1]: libpod-78a3dd4a175b82380591c653a784baf46bdfa7f5cdf0bcfec4b5da2bf1f6d683.scope: Deactivated successfully.
Oct 14 09:49:23 compute-0 podman[428948]: 2025-10-14 09:49:23.099146381 +0000 UTC m=+0.138490249 container died 78a3dd4a175b82380591c653a784baf46bdfa7f5cdf0bcfec4b5da2bf1f6d683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_roentgen, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:49:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-970cbae057c94ab4d02ca170ba516a54eb5b753ce4a61ba2e02ec8b7c0739d67-merged.mount: Deactivated successfully.
Oct 14 09:49:23 compute-0 podman[428948]: 2025-10-14 09:49:23.13699496 +0000 UTC m=+0.176338828 container remove 78a3dd4a175b82380591c653a784baf46bdfa7f5cdf0bcfec4b5da2bf1f6d683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_roentgen, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:49:23 compute-0 systemd[1]: libpod-conmon-78a3dd4a175b82380591c653a784baf46bdfa7f5cdf0bcfec4b5da2bf1f6d683.scope: Deactivated successfully.
Oct 14 09:49:23 compute-0 nova_compute[259627]: 2025-10-14 09:49:23.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:49:23 compute-0 podman[428988]: 2025-10-14 09:49:23.328178561 +0000 UTC m=+0.056711033 container create 43e6e534a6b28b30b24091e3e11e286142f536c3654db9e4bf2001a812b9a84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcnulty, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 09:49:23 compute-0 systemd[1]: Started libpod-conmon-43e6e534a6b28b30b24091e3e11e286142f536c3654db9e4bf2001a812b9a84b.scope.
Oct 14 09:49:23 compute-0 podman[428988]: 2025-10-14 09:49:23.305955726 +0000 UTC m=+0.034488168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:49:23 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339d1da38570b8f39fc746b702e9254390eccb4c4b034eb7931a552503d708ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339d1da38570b8f39fc746b702e9254390eccb4c4b034eb7931a552503d708ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339d1da38570b8f39fc746b702e9254390eccb4c4b034eb7931a552503d708ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339d1da38570b8f39fc746b702e9254390eccb4c4b034eb7931a552503d708ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:49:23 compute-0 podman[428988]: 2025-10-14 09:49:23.437111494 +0000 UTC m=+0.165644036 container init 43e6e534a6b28b30b24091e3e11e286142f536c3654db9e4bf2001a812b9a84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:49:23 compute-0 podman[428988]: 2025-10-14 09:49:23.452973193 +0000 UTC m=+0.181505695 container start 43e6e534a6b28b30b24091e3e11e286142f536c3654db9e4bf2001a812b9a84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcnulty, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 09:49:23 compute-0 podman[428988]: 2025-10-14 09:49:23.45731988 +0000 UTC m=+0.185852412 container attach 43e6e534a6b28b30b24091e3e11e286142f536c3654db9e4bf2001a812b9a84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcnulty, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]: {
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:     "0": [
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:         {
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "devices": [
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "/dev/loop3"
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             ],
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "lv_name": "ceph_lv0",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "lv_size": "21470642176",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "name": "ceph_lv0",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "tags": {
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.cluster_name": "ceph",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.crush_device_class": "",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.encrypted": "0",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.osd_id": "0",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.type": "block",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.vdo": "0"
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             },
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "type": "block",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "vg_name": "ceph_vg0"
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:         }
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:     ],
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:     "1": [
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:         {
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "devices": [
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "/dev/loop4"
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             ],
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "lv_name": "ceph_lv1",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "lv_size": "21470642176",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "name": "ceph_lv1",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "tags": {
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.cluster_name": "ceph",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.crush_device_class": "",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.encrypted": "0",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.osd_id": "1",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.type": "block",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.vdo": "0"
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             },
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "type": "block",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "vg_name": "ceph_vg1"
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:         }
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:     ],
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:     "2": [
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:         {
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "devices": [
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "/dev/loop5"
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             ],
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "lv_name": "ceph_lv2",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "lv_size": "21470642176",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "name": "ceph_lv2",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "tags": {
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.cluster_name": "ceph",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.crush_device_class": "",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.encrypted": "0",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.osd_id": "2",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.type": "block",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:                 "ceph.vdo": "0"
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             },
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "type": "block",
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:             "vg_name": "ceph_vg2"
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:         }
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]:     ]
Oct 14 09:49:24 compute-0 confident_mcnulty[429004]: }
Oct 14 09:49:24 compute-0 systemd[1]: libpod-43e6e534a6b28b30b24091e3e11e286142f536c3654db9e4bf2001a812b9a84b.scope: Deactivated successfully.
Oct 14 09:49:24 compute-0 podman[428988]: 2025-10-14 09:49:24.245259174 +0000 UTC m=+0.973791696 container died 43e6e534a6b28b30b24091e3e11e286142f536c3654db9e4bf2001a812b9a84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcnulty, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:49:24 compute-0 ceph-mon[74249]: pgmap v2893: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-339d1da38570b8f39fc746b702e9254390eccb4c4b034eb7931a552503d708ad-merged.mount: Deactivated successfully.
Oct 14 09:49:24 compute-0 podman[428988]: 2025-10-14 09:49:24.323781479 +0000 UTC m=+1.052313951 container remove 43e6e534a6b28b30b24091e3e11e286142f536c3654db9e4bf2001a812b9a84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcnulty, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 09:49:24 compute-0 systemd[1]: libpod-conmon-43e6e534a6b28b30b24091e3e11e286142f536c3654db9e4bf2001a812b9a84b.scope: Deactivated successfully.
Oct 14 09:49:24 compute-0 sudo[428883]: pam_unix(sudo:session): session closed for user root
Oct 14 09:49:24 compute-0 sudo[429027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:49:24 compute-0 sudo[429027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:49:24 compute-0 sudo[429027]: pam_unix(sudo:session): session closed for user root
Oct 14 09:49:24 compute-0 sudo[429052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:49:24 compute-0 sudo[429052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:49:24 compute-0 sudo[429052]: pam_unix(sudo:session): session closed for user root
Oct 14 09:49:24 compute-0 sudo[429077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:49:24 compute-0 sudo[429077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:49:24 compute-0 sudo[429077]: pam_unix(sudo:session): session closed for user root
Oct 14 09:49:24 compute-0 sudo[429102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:49:24 compute-0 sudo[429102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:49:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:25 compute-0 podman[429168]: 2025-10-14 09:49:25.206587521 +0000 UTC m=+0.067654361 container create f99dce75b6e9af29600bd2acbe1724ca2bb977733eafb3aec6e7f01b65c51187 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_edison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:49:25 compute-0 systemd[1]: Started libpod-conmon-f99dce75b6e9af29600bd2acbe1724ca2bb977733eafb3aec6e7f01b65c51187.scope.
Oct 14 09:49:25 compute-0 podman[429168]: 2025-10-14 09:49:25.178952093 +0000 UTC m=+0.040018983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:49:25 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:49:25 compute-0 podman[429168]: 2025-10-14 09:49:25.312353696 +0000 UTC m=+0.173420576 container init f99dce75b6e9af29600bd2acbe1724ca2bb977733eafb3aec6e7f01b65c51187 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_edison, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 09:49:25 compute-0 podman[429168]: 2025-10-14 09:49:25.324285919 +0000 UTC m=+0.185352749 container start f99dce75b6e9af29600bd2acbe1724ca2bb977733eafb3aec6e7f01b65c51187 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_edison, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:49:25 compute-0 podman[429168]: 2025-10-14 09:49:25.32797324 +0000 UTC m=+0.189040070 container attach f99dce75b6e9af29600bd2acbe1724ca2bb977733eafb3aec6e7f01b65c51187 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_edison, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 09:49:25 compute-0 laughing_edison[429184]: 167 167
Oct 14 09:49:25 compute-0 systemd[1]: libpod-f99dce75b6e9af29600bd2acbe1724ca2bb977733eafb3aec6e7f01b65c51187.scope: Deactivated successfully.
Oct 14 09:49:25 compute-0 podman[429168]: 2025-10-14 09:49:25.334417508 +0000 UTC m=+0.195484338 container died f99dce75b6e9af29600bd2acbe1724ca2bb977733eafb3aec6e7f01b65c51187 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 09:49:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-54ff7967ae7b59f4a2654c5f1ba7bca4a52a167b5c036d6b7c4dae6ca451d54c-merged.mount: Deactivated successfully.
Oct 14 09:49:25 compute-0 podman[429168]: 2025-10-14 09:49:25.382906197 +0000 UTC m=+0.243973007 container remove f99dce75b6e9af29600bd2acbe1724ca2bb977733eafb3aec6e7f01b65c51187 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_edison, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:49:25 compute-0 systemd[1]: libpod-conmon-f99dce75b6e9af29600bd2acbe1724ca2bb977733eafb3aec6e7f01b65c51187.scope: Deactivated successfully.
Oct 14 09:49:25 compute-0 podman[429209]: 2025-10-14 09:49:25.594718385 +0000 UTC m=+0.072944801 container create b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 09:49:25 compute-0 systemd[1]: Started libpod-conmon-b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9.scope.
Oct 14 09:49:25 compute-0 podman[429209]: 2025-10-14 09:49:25.566598215 +0000 UTC m=+0.044824671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:49:25 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2c6fe91aeaad4c14cd6bbd814243a8f0af1c81ef8ef5ac136990497dbaeac9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2c6fe91aeaad4c14cd6bbd814243a8f0af1c81ef8ef5ac136990497dbaeac9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2c6fe91aeaad4c14cd6bbd814243a8f0af1c81ef8ef5ac136990497dbaeac9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2c6fe91aeaad4c14cd6bbd814243a8f0af1c81ef8ef5ac136990497dbaeac9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:49:25 compute-0 podman[429209]: 2025-10-14 09:49:25.697583139 +0000 UTC m=+0.175809615 container init b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:49:25 compute-0 podman[429209]: 2025-10-14 09:49:25.711565262 +0000 UTC m=+0.189791668 container start b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chandrasekhar, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 09:49:25 compute-0 podman[429209]: 2025-10-14 09:49:25.71595815 +0000 UTC m=+0.194184566 container attach b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chandrasekhar, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:49:26 compute-0 ceph-mon[74249]: pgmap v2894: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:26 compute-0 nova_compute[259627]: 2025-10-14 09:49:26.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]: {
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:         "osd_id": 2,
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:         "type": "bluestore"
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:     },
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:         "osd_id": 1,
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:         "type": "bluestore"
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:     },
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:         "osd_id": 0,
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:         "type": "bluestore"
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]:     }
Oct 14 09:49:26 compute-0 sleepy_chandrasekhar[429225]: }
Oct 14 09:49:26 compute-0 systemd[1]: libpod-b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9.scope: Deactivated successfully.
Oct 14 09:49:26 compute-0 systemd[1]: libpod-b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9.scope: Consumed 1.128s CPU time.
Oct 14 09:49:26 compute-0 podman[429209]: 2025-10-14 09:49:26.834439154 +0000 UTC m=+1.312665590 container died b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chandrasekhar, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:49:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c2c6fe91aeaad4c14cd6bbd814243a8f0af1c81ef8ef5ac136990497dbaeac9-merged.mount: Deactivated successfully.
Oct 14 09:49:26 compute-0 podman[429209]: 2025-10-14 09:49:26.896915577 +0000 UTC m=+1.375141963 container remove b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 09:49:26 compute-0 systemd[1]: libpod-conmon-b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9.scope: Deactivated successfully.
Oct 14 09:49:26 compute-0 sudo[429102]: pam_unix(sudo:session): session closed for user root
Oct 14 09:49:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:49:26 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:49:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:49:26 compute-0 podman[429261]: 2025-10-14 09:49:26.951668351 +0000 UTC m=+0.074285634 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:49:26 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:49:26 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev b53546ac-bacb-4a68-92f8-5a5f08b58239 does not exist
Oct 14 09:49:26 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 59ea3e8c-c5b2-46d2-a68b-8b3c87ee1119 does not exist
Oct 14 09:49:27 compute-0 sudo[429305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:49:27 compute-0 sudo[429305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:49:27 compute-0 sudo[429305]: pam_unix(sudo:session): session closed for user root
Oct 14 09:49:27 compute-0 podman[429259]: 2025-10-14 09:49:27.031869799 +0000 UTC m=+0.156681046 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 09:49:27 compute-0 sudo[429335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:49:27 compute-0 sudo[429335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:49:27 compute-0 sudo[429335]: pam_unix(sudo:session): session closed for user root
Oct 14 09:49:27 compute-0 ceph-mon[74249]: pgmap v2895: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:27 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:49:27 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:49:28 compute-0 nova_compute[259627]: 2025-10-14 09:49:28.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:49:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:29 compute-0 ceph-mon[74249]: pgmap v2896: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s
Oct 14 09:49:30 compute-0 nova_compute[259627]: 2025-10-14 09:49:30.995 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:49:30 compute-0 nova_compute[259627]: 2025-10-14 09:49:30.995 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 14 09:49:31 compute-0 nova_compute[259627]: 2025-10-14 09:49:31.013 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 14 09:49:31 compute-0 nova_compute[259627]: 2025-10-14 09:49:31.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Oct 14 09:49:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Oct 14 09:49:31 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Oct 14 09:49:31 compute-0 ceph-mon[74249]: pgmap v2897: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s
Oct 14 09:49:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:49:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:49:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:49:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:49:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:49:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:49:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:49:32
Oct 14 09:49:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:49:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:49:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'volumes', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'vms']
Oct 14 09:49:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:49:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 307 B/s wr, 3 op/s
Oct 14 09:49:33 compute-0 ceph-mon[74249]: osdmap e299: 3 total, 3 up, 3 in
Oct 14 09:49:33 compute-0 nova_compute[259627]: 2025-10-14 09:49:33.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:49:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:49:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:49:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:49:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:49:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:49:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:49:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:49:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:49:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:49:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:49:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Oct 14 09:49:34 compute-0 ceph-mon[74249]: pgmap v2899: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 307 B/s wr, 3 op/s
Oct 14 09:49:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Oct 14 09:49:34 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Oct 14 09:49:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 383 B/s wr, 3 op/s
Oct 14 09:49:35 compute-0 ceph-mon[74249]: osdmap e300: 3 total, 3 up, 3 in
Oct 14 09:49:36 compute-0 ceph-mon[74249]: pgmap v2901: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 383 B/s wr, 3 op/s
Oct 14 09:49:36 compute-0 nova_compute[259627]: 2025-10-14 09:49:36.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 14 09:49:38 compute-0 ceph-mon[74249]: pgmap v2902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 14 09:49:38 compute-0 nova_compute[259627]: 2025-10-14 09:49:38.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:49:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 5.1 MiB/s wr, 43 op/s
Oct 14 09:49:40 compute-0 ceph-mon[74249]: pgmap v2903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 5.1 MiB/s wr, 43 op/s
Oct 14 09:49:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 4.6 MiB/s wr, 39 op/s
Oct 14 09:49:41 compute-0 nova_compute[259627]: 2025-10-14 09:49:41.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:42 compute-0 ceph-mon[74249]: pgmap v2904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 4.6 MiB/s wr, 39 op/s
Oct 14 09:49:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 4.1 MiB/s wr, 35 op/s
Oct 14 09:49:43 compute-0 nova_compute[259627]: 2025-10-14 09:49:43.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:49:43 compute-0 podman[429360]: 2025-10-14 09:49:43.709648744 +0000 UTC m=+0.108098864 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:49:43 compute-0 podman[429361]: 2025-10-14 09:49:43.711600532 +0000 UTC m=+0.108649697 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible)
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:49:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:49:44 compute-0 ceph-mon[74249]: pgmap v2905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 4.1 MiB/s wr, 35 op/s
Oct 14 09:49:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 3.7 MiB/s wr, 32 op/s
Oct 14 09:49:46 compute-0 ceph-mon[74249]: pgmap v2906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 3.7 MiB/s wr, 32 op/s
Oct 14 09:49:46 compute-0 nova_compute[259627]: 2025-10-14 09:49:46.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 3.4 MiB/s wr, 29 op/s
Oct 14 09:49:48 compute-0 ceph-mon[74249]: pgmap v2907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 3.4 MiB/s wr, 29 op/s
Oct 14 09:49:48 compute-0 nova_compute[259627]: 2025-10-14 09:49:48.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:49:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:50 compute-0 ceph-mon[74249]: pgmap v2908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:51 compute-0 nova_compute[259627]: 2025-10-14 09:49:51.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:52 compute-0 ceph-mon[74249]: pgmap v2909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:53 compute-0 nova_compute[259627]: 2025-10-14 09:49:53.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:49:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:49:53.948 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:49:53 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:49:53.949 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:49:54 compute-0 nova_compute[259627]: 2025-10-14 09:49:54.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:54 compute-0 ceph-mon[74249]: pgmap v2910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:55 compute-0 nova_compute[259627]: 2025-10-14 09:49:55.996 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:49:56 compute-0 ceph-mon[74249]: pgmap v2911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:56 compute-0 nova_compute[259627]: 2025-10-14 09:49:56.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:57 compute-0 podman[429401]: 2025-10-14 09:49:57.677218495 +0000 UTC m=+0.081928161 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 09:49:57 compute-0 podman[429400]: 2025-10-14 09:49:57.726519355 +0000 UTC m=+0.136744766 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 09:49:58 compute-0 ceph-mon[74249]: pgmap v2912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.158420) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435398158474, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2001, "num_deletes": 254, "total_data_size": 3309233, "memory_usage": 3361280, "flush_reason": "Manual Compaction"}
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435398179300, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3243289, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59129, "largest_seqno": 61129, "table_properties": {"data_size": 3234029, "index_size": 5881, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18516, "raw_average_key_size": 20, "raw_value_size": 3215594, "raw_average_value_size": 3518, "num_data_blocks": 261, "num_entries": 914, "num_filter_entries": 914, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760435184, "oldest_key_time": 1760435184, "file_creation_time": 1760435398, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 20940 microseconds, and 14496 cpu microseconds.
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.179358) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3243289 bytes OK
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.179383) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.181405) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.181428) EVENT_LOG_v1 {"time_micros": 1760435398181420, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.181450) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3300828, prev total WAL file size 3300828, number of live WAL files 2.
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.182980) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3167KB)], [140(8097KB)]
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435398183072, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 11535342, "oldest_snapshot_seqno": -1}
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8075 keys, 9812094 bytes, temperature: kUnknown
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435398250160, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 9812094, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9760361, "index_size": 30436, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20229, "raw_key_size": 210497, "raw_average_key_size": 26, "raw_value_size": 9618436, "raw_average_value_size": 1191, "num_data_blocks": 1183, "num_entries": 8075, "num_filter_entries": 8075, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760435398, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.250427) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 9812094 bytes
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.251861) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.7 rd, 146.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.9 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 8598, records dropped: 523 output_compression: NoCompression
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.251879) EVENT_LOG_v1 {"time_micros": 1760435398251870, "job": 86, "event": "compaction_finished", "compaction_time_micros": 67179, "compaction_time_cpu_micros": 45652, "output_level": 6, "num_output_files": 1, "total_output_size": 9812094, "num_input_records": 8598, "num_output_records": 8075, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435398252806, "job": 86, "event": "table_file_deletion", "file_number": 142}
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435398254755, "job": 86, "event": "table_file_deletion", "file_number": 140}
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.182821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.254810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.254814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.254816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.254818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:49:58 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.254820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:49:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:49:58 compute-0 nova_compute[259627]: 2025-10-14 09:49:58.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:49:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:49:59 compute-0 nova_compute[259627]: 2025-10-14 09:49:59.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:50:00 compute-0 ceph-mon[74249]: pgmap v2913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.1 KiB/s rd, 511 B/s wr, 6 op/s
Oct 14 09:50:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Oct 14 09:50:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Oct 14 09:50:01 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Oct 14 09:50:01 compute-0 nova_compute[259627]: 2025-10-14 09:50:01.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:02 compute-0 ceph-mon[74249]: pgmap v2914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.1 KiB/s rd, 511 B/s wr, 6 op/s
Oct 14 09:50:02 compute-0 ceph-mon[74249]: osdmap e301: 3 total, 3 up, 3 in
Oct 14 09:50:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:50:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:50:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:50:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:50:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:50:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:50:02 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:50:02.952 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:50:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.1 KiB/s rd, 614 B/s wr, 8 op/s
Oct 14 09:50:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:50:03 compute-0 nova_compute[259627]: 2025-10-14 09:50:03.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:04 compute-0 ceph-mon[74249]: pgmap v2916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.1 KiB/s rd, 614 B/s wr, 8 op/s
Oct 14 09:50:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.1 KiB/s rd, 614 B/s wr, 8 op/s
Oct 14 09:50:04 compute-0 nova_compute[259627]: 2025-10-14 09:50:04.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:50:05 compute-0 nova_compute[259627]: 2025-10-14 09:50:05.012 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:50:05 compute-0 nova_compute[259627]: 2025-10-14 09:50:05.013 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:50:05 compute-0 nova_compute[259627]: 2025-10-14 09:50:05.013 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:50:05 compute-0 nova_compute[259627]: 2025-10-14 09:50:05.013 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:50:05 compute-0 nova_compute[259627]: 2025-10-14 09:50:05.014 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:50:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:50:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/614101391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:50:05 compute-0 nova_compute[259627]: 2025-10-14 09:50:05.490 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:50:05 compute-0 nova_compute[259627]: 2025-10-14 09:50:05.617 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:50:05 compute-0 nova_compute[259627]: 2025-10-14 09:50:05.618 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3630MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:50:05 compute-0 nova_compute[259627]: 2025-10-14 09:50:05.619 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:50:05 compute-0 nova_compute[259627]: 2025-10-14 09:50:05.619 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:50:05 compute-0 nova_compute[259627]: 2025-10-14 09:50:05.709 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:50:05 compute-0 nova_compute[259627]: 2025-10-14 09:50:05.709 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:50:05 compute-0 nova_compute[259627]: 2025-10-14 09:50:05.728 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 14 09:50:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:50:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1514776378' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:50:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:50:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1514776378' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:50:05 compute-0 nova_compute[259627]: 2025-10-14 09:50:05.745 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 14 09:50:05 compute-0 nova_compute[259627]: 2025-10-14 09:50:05.746 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 09:50:05 compute-0 nova_compute[259627]: 2025-10-14 09:50:05.765 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 14 09:50:05 compute-0 nova_compute[259627]: 2025-10-14 09:50:05.788 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 14 09:50:05 compute-0 nova_compute[259627]: 2025-10-14 09:50:05.808 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:50:06 compute-0 ceph-mon[74249]: pgmap v2917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.1 KiB/s rd, 614 B/s wr, 8 op/s
Oct 14 09:50:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/614101391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:50:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1514776378' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:50:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1514776378' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:50:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:50:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1839488715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:50:06 compute-0 nova_compute[259627]: 2025-10-14 09:50:06.250 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:50:06 compute-0 nova_compute[259627]: 2025-10-14 09:50:06.258 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:50:06 compute-0 nova_compute[259627]: 2025-10-14 09:50:06.282 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:50:06 compute-0 nova_compute[259627]: 2025-10-14 09:50:06.285 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:50:06 compute-0 nova_compute[259627]: 2025-10-14 09:50:06.285 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:50:06 compute-0 nova_compute[259627]: 2025-10-14 09:50:06.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 14 09:50:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:50:07.067 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:50:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:50:07.068 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:50:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:50:07.068 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:50:07 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1839488715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:50:08 compute-0 ceph-mon[74249]: pgmap v2918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 14 09:50:08 compute-0 nova_compute[259627]: 2025-10-14 09:50:08.286 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:50:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:50:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Oct 14 09:50:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Oct 14 09:50:08 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Oct 14 09:50:08 compute-0 nova_compute[259627]: 2025-10-14 09:50:08.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1023 B/s wr, 20 op/s
Oct 14 09:50:08 compute-0 nova_compute[259627]: 2025-10-14 09:50:08.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:50:08 compute-0 nova_compute[259627]: 2025-10-14 09:50:08.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:50:08 compute-0 nova_compute[259627]: 2025-10-14 09:50:08.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:50:09 compute-0 nova_compute[259627]: 2025-10-14 09:50:09.006 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:50:09 compute-0 ceph-mon[74249]: osdmap e302: 3 total, 3 up, 3 in
Oct 14 09:50:09 compute-0 ceph-mon[74249]: pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1023 B/s wr, 20 op/s
Oct 14 09:50:10 compute-0 nova_compute[259627]: 2025-10-14 09:50:10.001 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:50:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 838 B/s wr, 17 op/s
Oct 14 09:50:11 compute-0 nova_compute[259627]: 2025-10-14 09:50:11.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:11 compute-0 nova_compute[259627]: 2025-10-14 09:50:11.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:50:11 compute-0 nova_compute[259627]: 2025-10-14 09:50:11.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:50:12 compute-0 ceph-mon[74249]: pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 838 B/s wr, 17 op/s
Oct 14 09:50:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct 14 09:50:12 compute-0 nova_compute[259627]: 2025-10-14 09:50:12.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:50:12 compute-0 nova_compute[259627]: 2025-10-14 09:50:12.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:50:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:50:13 compute-0 nova_compute[259627]: 2025-10-14 09:50:13.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:14 compute-0 ceph-mon[74249]: pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct 14 09:50:14 compute-0 podman[429489]: 2025-10-14 09:50:14.667702192 +0000 UTC m=+0.071511316 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 09:50:14 compute-0 podman[429488]: 2025-10-14 09:50:14.704462894 +0000 UTC m=+0.113474676 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 14 09:50:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct 14 09:50:16 compute-0 ceph-mon[74249]: pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct 14 09:50:16 compute-0 nova_compute[259627]: 2025-10-14 09:50:16.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:18 compute-0 ceph-mon[74249]: pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:50:18 compute-0 nova_compute[259627]: 2025-10-14 09:50:18.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:20 compute-0 ceph-mon[74249]: pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:21 compute-0 nova_compute[259627]: 2025-10-14 09:50:21.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:22 compute-0 ceph-mon[74249]: pgmap v2926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:50:23 compute-0 nova_compute[259627]: 2025-10-14 09:50:23.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:24 compute-0 ceph-mon[74249]: pgmap v2927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:26 compute-0 ceph-mon[74249]: pgmap v2928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:26 compute-0 nova_compute[259627]: 2025-10-14 09:50:26.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:27 compute-0 sudo[429526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:50:27 compute-0 sudo[429526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:50:27 compute-0 sudo[429526]: pam_unix(sudo:session): session closed for user root
Oct 14 09:50:27 compute-0 sudo[429551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:50:27 compute-0 sudo[429551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:50:27 compute-0 sudo[429551]: pam_unix(sudo:session): session closed for user root
Oct 14 09:50:27 compute-0 sudo[429576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:50:27 compute-0 sudo[429576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:50:27 compute-0 sudo[429576]: pam_unix(sudo:session): session closed for user root
Oct 14 09:50:27 compute-0 sudo[429601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:50:27 compute-0 sudo[429601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:50:27 compute-0 sudo[429601]: pam_unix(sudo:session): session closed for user root
Oct 14 09:50:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:50:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:50:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:50:28 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:50:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:50:28 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:50:28 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev adcb5ea5-b51e-40cf-91e2-46f429deda08 does not exist
Oct 14 09:50:28 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 6850cd8a-4082-41c8-a009-94c7e25c50e9 does not exist
Oct 14 09:50:28 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev ad342af6-0a29-4a35-96f1-eb776736fdfe does not exist
Oct 14 09:50:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:50:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:50:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:50:28 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:50:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:50:28 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:50:28 compute-0 ceph-mon[74249]: pgmap v2929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:28 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:50:28 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:50:28 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:50:28 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:50:28 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:50:28 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:50:28 compute-0 sudo[429657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:50:28 compute-0 sudo[429657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:50:28 compute-0 sudo[429657]: pam_unix(sudo:session): session closed for user root
Oct 14 09:50:28 compute-0 sudo[429691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:50:28 compute-0 sudo[429691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:50:28 compute-0 podman[429682]: 2025-10-14 09:50:28.219281968 +0000 UTC m=+0.071406313 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:50:28 compute-0 sudo[429691]: pam_unix(sudo:session): session closed for user root
Oct 14 09:50:28 compute-0 sudo[429747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:50:28 compute-0 sudo[429747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:50:28 compute-0 sudo[429747]: pam_unix(sudo:session): session closed for user root
Oct 14 09:50:28 compute-0 podman[429681]: 2025-10-14 09:50:28.287952503 +0000 UTC m=+0.137524605 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009)
Oct 14 09:50:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:50:28 compute-0 sudo[429777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:50:28 compute-0 sudo[429777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:50:28 compute-0 nova_compute[259627]: 2025-10-14 09:50:28.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:28 compute-0 podman[429844]: 2025-10-14 09:50:28.735399311 +0000 UTC m=+0.067641699 container create d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_leavitt, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:50:28 compute-0 systemd[1]: Started libpod-conmon-d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6.scope.
Oct 14 09:50:28 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:50:28 compute-0 podman[429844]: 2025-10-14 09:50:28.709221009 +0000 UTC m=+0.041463417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:50:28 compute-0 podman[429844]: 2025-10-14 09:50:28.813213781 +0000 UTC m=+0.145456219 container init d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_leavitt, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:50:28 compute-0 podman[429844]: 2025-10-14 09:50:28.82172903 +0000 UTC m=+0.153971428 container start d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 09:50:28 compute-0 podman[429844]: 2025-10-14 09:50:28.825917473 +0000 UTC m=+0.158159921 container attach d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_leavitt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:50:28 compute-0 frosty_leavitt[429860]: 167 167
Oct 14 09:50:28 compute-0 systemd[1]: libpod-d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6.scope: Deactivated successfully.
Oct 14 09:50:28 compute-0 conmon[429860]: conmon d278bd26f90dced9fbe4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6.scope/container/memory.events
Oct 14 09:50:28 compute-0 podman[429844]: 2025-10-14 09:50:28.829113941 +0000 UTC m=+0.161356379 container died d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:50:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3c0ed4d24f93668d3baf0d258b5f2cce0b5d5d6dc181ffa7d8927bc69236faf-merged.mount: Deactivated successfully.
Oct 14 09:50:28 compute-0 podman[429844]: 2025-10-14 09:50:28.879984029 +0000 UTC m=+0.212226397 container remove d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_leavitt, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 14 09:50:28 compute-0 systemd[1]: libpod-conmon-d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6.scope: Deactivated successfully.
Oct 14 09:50:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:29 compute-0 podman[429885]: 2025-10-14 09:50:29.058574451 +0000 UTC m=+0.044837781 container create 02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_allen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:50:29 compute-0 systemd[1]: Started libpod-conmon-02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5.scope.
Oct 14 09:50:29 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e491368cc3b31e52cea7bdbae275714108f9a0265248ca96847659dedda8a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e491368cc3b31e52cea7bdbae275714108f9a0265248ca96847659dedda8a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e491368cc3b31e52cea7bdbae275714108f9a0265248ca96847659dedda8a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e491368cc3b31e52cea7bdbae275714108f9a0265248ca96847659dedda8a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e491368cc3b31e52cea7bdbae275714108f9a0265248ca96847659dedda8a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:50:29 compute-0 podman[429885]: 2025-10-14 09:50:29.035495775 +0000 UTC m=+0.021759115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:50:29 compute-0 podman[429885]: 2025-10-14 09:50:29.134502094 +0000 UTC m=+0.120765404 container init 02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:50:29 compute-0 podman[429885]: 2025-10-14 09:50:29.14083114 +0000 UTC m=+0.127094450 container start 02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_allen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 09:50:29 compute-0 podman[429885]: 2025-10-14 09:50:29.144225623 +0000 UTC m=+0.130488933 container attach 02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_allen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:50:30 compute-0 ceph-mon[74249]: pgmap v2930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:30 compute-0 ecstatic_allen[429901]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:50:30 compute-0 ecstatic_allen[429901]: --> relative data size: 1.0
Oct 14 09:50:30 compute-0 ecstatic_allen[429901]: --> All data devices are unavailable
Oct 14 09:50:30 compute-0 systemd[1]: libpod-02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5.scope: Deactivated successfully.
Oct 14 09:50:30 compute-0 systemd[1]: libpod-02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5.scope: Consumed 1.073s CPU time.
Oct 14 09:50:30 compute-0 podman[429885]: 2025-10-14 09:50:30.26078083 +0000 UTC m=+1.247044160 container died 02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_allen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-13e491368cc3b31e52cea7bdbae275714108f9a0265248ca96847659dedda8a6-merged.mount: Deactivated successfully.
Oct 14 09:50:30 compute-0 podman[429885]: 2025-10-14 09:50:30.332822008 +0000 UTC m=+1.319085328 container remove 02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_allen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:50:30 compute-0 systemd[1]: libpod-conmon-02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5.scope: Deactivated successfully.
Oct 14 09:50:30 compute-0 sudo[429777]: pam_unix(sudo:session): session closed for user root
Oct 14 09:50:30 compute-0 sudo[429942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:50:30 compute-0 sudo[429942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:50:30 compute-0 sudo[429942]: pam_unix(sudo:session): session closed for user root
Oct 14 09:50:30 compute-0 sudo[429967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:50:30 compute-0 sudo[429967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:50:30 compute-0 sudo[429967]: pam_unix(sudo:session): session closed for user root
Oct 14 09:50:30 compute-0 sudo[429992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:50:30 compute-0 sudo[429992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:50:30 compute-0 sudo[429992]: pam_unix(sudo:session): session closed for user root
Oct 14 09:50:30 compute-0 sudo[430017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:50:30 compute-0 sudo[430017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:50:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:30 compute-0 podman[430083]: 2025-10-14 09:50:30.995291173 +0000 UTC m=+0.043800705 container create 96f3bcd629f9bb62aefa5a5ae66346b69be69e27ab594338cfce63c0ea363b68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 09:50:31 compute-0 systemd[1]: Started libpod-conmon-96f3bcd629f9bb62aefa5a5ae66346b69be69e27ab594338cfce63c0ea363b68.scope.
Oct 14 09:50:31 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:50:31 compute-0 podman[430083]: 2025-10-14 09:50:31.06363174 +0000 UTC m=+0.112141292 container init 96f3bcd629f9bb62aefa5a5ae66346b69be69e27ab594338cfce63c0ea363b68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:50:31 compute-0 podman[430083]: 2025-10-14 09:50:31.071457302 +0000 UTC m=+0.119966824 container start 96f3bcd629f9bb62aefa5a5ae66346b69be69e27ab594338cfce63c0ea363b68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 09:50:31 compute-0 podman[430083]: 2025-10-14 09:50:30.978269406 +0000 UTC m=+0.026778988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:50:31 compute-0 podman[430083]: 2025-10-14 09:50:31.07462076 +0000 UTC m=+0.123130302 container attach 96f3bcd629f9bb62aefa5a5ae66346b69be69e27ab594338cfce63c0ea363b68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:50:31 compute-0 quirky_shaw[430100]: 167 167
Oct 14 09:50:31 compute-0 systemd[1]: libpod-96f3bcd629f9bb62aefa5a5ae66346b69be69e27ab594338cfce63c0ea363b68.scope: Deactivated successfully.
Oct 14 09:50:31 compute-0 podman[430083]: 2025-10-14 09:50:31.076341622 +0000 UTC m=+0.124851194 container died 96f3bcd629f9bb62aefa5a5ae66346b69be69e27ab594338cfce63c0ea363b68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:50:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-84bdeb626b9881c4629faed433cdbf7fcffd6e3d7171860d87e8e79b488ce9b0-merged.mount: Deactivated successfully.
Oct 14 09:50:31 compute-0 podman[430083]: 2025-10-14 09:50:31.119382238 +0000 UTC m=+0.167891800 container remove 96f3bcd629f9bb62aefa5a5ae66346b69be69e27ab594338cfce63c0ea363b68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct 14 09:50:31 compute-0 systemd[1]: libpod-conmon-96f3bcd629f9bb62aefa5a5ae66346b69be69e27ab594338cfce63c0ea363b68.scope: Deactivated successfully.
Oct 14 09:50:31 compute-0 podman[430124]: 2025-10-14 09:50:31.31180109 +0000 UTC m=+0.044576595 container create b0eae94752ba320dd4ef78621943bd675913ff6d54f91cc593d144ec516650f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:50:31 compute-0 systemd[1]: Started libpod-conmon-b0eae94752ba320dd4ef78621943bd675913ff6d54f91cc593d144ec516650f6.scope.
Oct 14 09:50:31 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244e858acf0e0040f446d835692700159c534a9db1ec5060f43832858388c779/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244e858acf0e0040f446d835692700159c534a9db1ec5060f43832858388c779/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244e858acf0e0040f446d835692700159c534a9db1ec5060f43832858388c779/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244e858acf0e0040f446d835692700159c534a9db1ec5060f43832858388c779/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:50:31 compute-0 podman[430124]: 2025-10-14 09:50:31.289891752 +0000 UTC m=+0.022667267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:50:31 compute-0 podman[430124]: 2025-10-14 09:50:31.394278993 +0000 UTC m=+0.127054548 container init b0eae94752ba320dd4ef78621943bd675913ff6d54f91cc593d144ec516650f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermat, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:50:31 compute-0 podman[430124]: 2025-10-14 09:50:31.406115724 +0000 UTC m=+0.138891199 container start b0eae94752ba320dd4ef78621943bd675913ff6d54f91cc593d144ec516650f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermat, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:50:31 compute-0 podman[430124]: 2025-10-14 09:50:31.413755911 +0000 UTC m=+0.146531386 container attach b0eae94752ba320dd4ef78621943bd675913ff6d54f91cc593d144ec516650f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 09:50:31 compute-0 nova_compute[259627]: 2025-10-14 09:50:31.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:32 compute-0 ceph-mon[74249]: pgmap v2931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]: {
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:     "0": [
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:         {
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "devices": [
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "/dev/loop3"
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             ],
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "lv_name": "ceph_lv0",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "lv_size": "21470642176",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "name": "ceph_lv0",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "tags": {
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.cluster_name": "ceph",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.crush_device_class": "",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.encrypted": "0",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.osd_id": "0",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.type": "block",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.vdo": "0"
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             },
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "type": "block",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "vg_name": "ceph_vg0"
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:         }
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:     ],
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:     "1": [
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:         {
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "devices": [
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "/dev/loop4"
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             ],
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "lv_name": "ceph_lv1",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "lv_size": "21470642176",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "name": "ceph_lv1",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "tags": {
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.cluster_name": "ceph",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.crush_device_class": "",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.encrypted": "0",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.osd_id": "1",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.type": "block",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.vdo": "0"
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             },
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "type": "block",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "vg_name": "ceph_vg1"
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:         }
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:     ],
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:     "2": [
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:         {
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "devices": [
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "/dev/loop5"
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             ],
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "lv_name": "ceph_lv2",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "lv_size": "21470642176",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "name": "ceph_lv2",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "tags": {
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.cluster_name": "ceph",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.crush_device_class": "",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.encrypted": "0",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.osd_id": "2",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.type": "block",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:                 "ceph.vdo": "0"
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             },
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "type": "block",
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:             "vg_name": "ceph_vg2"
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:         }
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]:     ]
Oct 14 09:50:32 compute-0 compassionate_fermat[430141]: }
Oct 14 09:50:32 compute-0 systemd[1]: libpod-b0eae94752ba320dd4ef78621943bd675913ff6d54f91cc593d144ec516650f6.scope: Deactivated successfully.
Oct 14 09:50:32 compute-0 podman[430124]: 2025-10-14 09:50:32.167520627 +0000 UTC m=+0.900296122 container died b0eae94752ba320dd4ef78621943bd675913ff6d54f91cc593d144ec516650f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermat, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 09:50:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-244e858acf0e0040f446d835692700159c534a9db1ec5060f43832858388c779-merged.mount: Deactivated successfully.
Oct 14 09:50:32 compute-0 podman[430124]: 2025-10-14 09:50:32.228593555 +0000 UTC m=+0.961369030 container remove b0eae94752ba320dd4ef78621943bd675913ff6d54f91cc593d144ec516650f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermat, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:50:32 compute-0 systemd[1]: libpod-conmon-b0eae94752ba320dd4ef78621943bd675913ff6d54f91cc593d144ec516650f6.scope: Deactivated successfully.
Oct 14 09:50:32 compute-0 sudo[430017]: pam_unix(sudo:session): session closed for user root
Oct 14 09:50:32 compute-0 sudo[430164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:50:32 compute-0 sudo[430164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:50:32 compute-0 sudo[430164]: pam_unix(sudo:session): session closed for user root
Oct 14 09:50:32 compute-0 sudo[430189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:50:32 compute-0 sudo[430189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:50:32 compute-0 sudo[430189]: pam_unix(sudo:session): session closed for user root
Oct 14 09:50:32 compute-0 sudo[430214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:50:32 compute-0 sudo[430214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:50:32 compute-0 sudo[430214]: pam_unix(sudo:session): session closed for user root
Oct 14 09:50:32 compute-0 sudo[430239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:50:32 compute-0 sudo[430239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:50:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:50:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:50:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:50:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:50:32 compute-0 podman[430305]: 2025-10-14 09:50:32.818326035 +0000 UTC m=+0.054593111 container create c6f66c0cac9e1989f5fc1893b9eb5022d7a6650409318578b2978bc816de095a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 09:50:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:50:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:50:32 compute-0 systemd[1]: Started libpod-conmon-c6f66c0cac9e1989f5fc1893b9eb5022d7a6650409318578b2978bc816de095a.scope.
Oct 14 09:50:32 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:50:32 compute-0 podman[430305]: 2025-10-14 09:50:32.791214219 +0000 UTC m=+0.027481355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:50:32 compute-0 podman[430305]: 2025-10-14 09:50:32.895800426 +0000 UTC m=+0.132067502 container init c6f66c0cac9e1989f5fc1893b9eb5022d7a6650409318578b2978bc816de095a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shirley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 09:50:32 compute-0 podman[430305]: 2025-10-14 09:50:32.901806963 +0000 UTC m=+0.138073999 container start c6f66c0cac9e1989f5fc1893b9eb5022d7a6650409318578b2978bc816de095a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 09:50:32 compute-0 mystifying_shirley[430321]: 167 167
Oct 14 09:50:32 compute-0 systemd[1]: libpod-c6f66c0cac9e1989f5fc1893b9eb5022d7a6650409318578b2978bc816de095a.scope: Deactivated successfully.
Oct 14 09:50:32 compute-0 podman[430305]: 2025-10-14 09:50:32.907500463 +0000 UTC m=+0.143767509 container attach c6f66c0cac9e1989f5fc1893b9eb5022d7a6650409318578b2978bc816de095a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 09:50:32 compute-0 podman[430305]: 2025-10-14 09:50:32.907962814 +0000 UTC m=+0.144229890 container died c6f66c0cac9e1989f5fc1893b9eb5022d7a6650409318578b2978bc816de095a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shirley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 09:50:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:50:32
Oct 14 09:50:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:50:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:50:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'backups', '.rgw.root']
Oct 14 09:50:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:50:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-c87a1b13522cc304fb0bc0dc53e8323cae2a7b4e2b440353a6b85407efd33028-merged.mount: Deactivated successfully.
Oct 14 09:50:32 compute-0 podman[430305]: 2025-10-14 09:50:32.960000101 +0000 UTC m=+0.196267157 container remove c6f66c0cac9e1989f5fc1893b9eb5022d7a6650409318578b2978bc816de095a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shirley, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:50:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:32 compute-0 systemd[1]: libpod-conmon-c6f66c0cac9e1989f5fc1893b9eb5022d7a6650409318578b2978bc816de095a.scope: Deactivated successfully.
Oct 14 09:50:33 compute-0 podman[430344]: 2025-10-14 09:50:33.127352557 +0000 UTC m=+0.044634056 container create a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:50:33 compute-0 systemd[1]: Started libpod-conmon-a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d.scope.
Oct 14 09:50:33 compute-0 podman[430344]: 2025-10-14 09:50:33.105564743 +0000 UTC m=+0.022846272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:50:33 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:50:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd4b157aba3ad117f724a0bad6b2b5b318e7a93c9b58af25824d23cd77b8b64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:50:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd4b157aba3ad117f724a0bad6b2b5b318e7a93c9b58af25824d23cd77b8b64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:50:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd4b157aba3ad117f724a0bad6b2b5b318e7a93c9b58af25824d23cd77b8b64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:50:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd4b157aba3ad117f724a0bad6b2b5b318e7a93c9b58af25824d23cd77b8b64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:50:33 compute-0 podman[430344]: 2025-10-14 09:50:33.235794588 +0000 UTC m=+0.153076077 container init a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:50:33 compute-0 podman[430344]: 2025-10-14 09:50:33.244483641 +0000 UTC m=+0.161765130 container start a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:50:33 compute-0 podman[430344]: 2025-10-14 09:50:33.249976036 +0000 UTC m=+0.167257545 container attach a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:50:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:50:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:50:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:50:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:50:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:50:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:50:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:50:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:50:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:50:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:50:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:50:33 compute-0 nova_compute[259627]: 2025-10-14 09:50:33.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:34 compute-0 ceph-mon[74249]: pgmap v2932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:34 compute-0 nice_goodall[430362]: {
Oct 14 09:50:34 compute-0 nice_goodall[430362]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:50:34 compute-0 nice_goodall[430362]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:50:34 compute-0 nice_goodall[430362]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:50:34 compute-0 nice_goodall[430362]:         "osd_id": 2,
Oct 14 09:50:34 compute-0 nice_goodall[430362]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:50:34 compute-0 nice_goodall[430362]:         "type": "bluestore"
Oct 14 09:50:34 compute-0 nice_goodall[430362]:     },
Oct 14 09:50:34 compute-0 nice_goodall[430362]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:50:34 compute-0 nice_goodall[430362]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:50:34 compute-0 nice_goodall[430362]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:50:34 compute-0 nice_goodall[430362]:         "osd_id": 1,
Oct 14 09:50:34 compute-0 nice_goodall[430362]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:50:34 compute-0 nice_goodall[430362]:         "type": "bluestore"
Oct 14 09:50:34 compute-0 nice_goodall[430362]:     },
Oct 14 09:50:34 compute-0 nice_goodall[430362]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:50:34 compute-0 nice_goodall[430362]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:50:34 compute-0 nice_goodall[430362]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:50:34 compute-0 nice_goodall[430362]:         "osd_id": 0,
Oct 14 09:50:34 compute-0 nice_goodall[430362]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:50:34 compute-0 nice_goodall[430362]:         "type": "bluestore"
Oct 14 09:50:34 compute-0 nice_goodall[430362]:     }
Oct 14 09:50:34 compute-0 nice_goodall[430362]: }
Oct 14 09:50:34 compute-0 systemd[1]: libpod-a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d.scope: Deactivated successfully.
Oct 14 09:50:34 compute-0 podman[430344]: 2025-10-14 09:50:34.344974505 +0000 UTC m=+1.262256004 container died a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 14 09:50:34 compute-0 systemd[1]: libpod-a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d.scope: Consumed 1.103s CPU time.
Oct 14 09:50:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cd4b157aba3ad117f724a0bad6b2b5b318e7a93c9b58af25824d23cd77b8b64-merged.mount: Deactivated successfully.
Oct 14 09:50:34 compute-0 podman[430344]: 2025-10-14 09:50:34.395546645 +0000 UTC m=+1.312828144 container remove a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:50:34 compute-0 systemd[1]: libpod-conmon-a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d.scope: Deactivated successfully.
Oct 14 09:50:34 compute-0 sudo[430239]: pam_unix(sudo:session): session closed for user root
Oct 14 09:50:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:50:34 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:50:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:50:34 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:50:34 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev b7395472-f44c-4cf5-963e-69ca8911f515 does not exist
Oct 14 09:50:34 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 56d0ef9a-6c6f-4067-9265-500c1a8bc48b does not exist
Oct 14 09:50:34 compute-0 sudo[430407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:50:34 compute-0 sudo[430407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:50:34 compute-0 sudo[430407]: pam_unix(sudo:session): session closed for user root
Oct 14 09:50:34 compute-0 sudo[430432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:50:34 compute-0 sudo[430432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:50:34 compute-0 sudo[430432]: pam_unix(sudo:session): session closed for user root
Oct 14 09:50:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:35 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:50:35 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:50:35 compute-0 ceph-mon[74249]: pgmap v2933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:36 compute-0 nova_compute[259627]: 2025-10-14 09:50:36.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:37 compute-0 nova_compute[259627]: 2025-10-14 09:50:37.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:50:38 compute-0 ceph-mon[74249]: pgmap v2934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:50:38 compute-0 nova_compute[259627]: 2025-10-14 09:50:38.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:40 compute-0 ceph-mon[74249]: pgmap v2935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:40 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:41 compute-0 nova_compute[259627]: 2025-10-14 09:50:41.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:42 compute-0 ceph-mon[74249]: pgmap v2936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:42 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.315002) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435443315142, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 643, "num_deletes": 257, "total_data_size": 743714, "memory_usage": 756872, "flush_reason": "Manual Compaction"}
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435443323705, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 737304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61130, "largest_seqno": 61772, "table_properties": {"data_size": 733771, "index_size": 1376, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8007, "raw_average_key_size": 19, "raw_value_size": 726608, "raw_average_value_size": 1738, "num_data_blocks": 61, "num_entries": 418, "num_filter_entries": 418, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760435399, "oldest_key_time": 1760435399, "file_creation_time": 1760435443, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 8747 microseconds, and 5591 cpu microseconds.
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.323805) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 737304 bytes OK
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.323832) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.325484) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.325504) EVENT_LOG_v1 {"time_micros": 1760435443325497, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.325524) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 740228, prev total WAL file size 740228, number of live WAL files 2.
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.326204) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353133' seq:72057594037927935, type:22 .. '6C6F676D0032373635' seq:0, type:0; will stop at (end)
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(720KB)], [143(9582KB)]
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435443326255, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 10549398, "oldest_snapshot_seqno": -1}
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 7963 keys, 10433399 bytes, temperature: kUnknown
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435443392006, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 10433399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10381039, "index_size": 31319, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19973, "raw_key_size": 209158, "raw_average_key_size": 26, "raw_value_size": 10239676, "raw_average_value_size": 1285, "num_data_blocks": 1219, "num_entries": 7963, "num_filter_entries": 7963, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760435443, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.392290) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10433399 bytes
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.393776) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.2 rd, 158.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.4 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(28.5) write-amplify(14.2) OK, records in: 8493, records dropped: 530 output_compression: NoCompression
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.393795) EVENT_LOG_v1 {"time_micros": 1760435443393786, "job": 88, "event": "compaction_finished", "compaction_time_micros": 65855, "compaction_time_cpu_micros": 47612, "output_level": 6, "num_output_files": 1, "total_output_size": 10433399, "num_input_records": 8493, "num_output_records": 7963, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435443394126, "job": 88, "event": "table_file_deletion", "file_number": 145}
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435443396272, "job": 88, "event": "table_file_deletion", "file_number": 143}
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.326130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.396405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.396414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.396417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.396420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:50:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.396423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:50:43 compute-0 nova_compute[259627]: 2025-10-14 09:50:43.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:50:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:50:44 compute-0 ceph-mon[74249]: pgmap v2937: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:44 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:45 compute-0 podman[430457]: 2025-10-14 09:50:45.69419107 +0000 UTC m=+0.101075321 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:50:45 compute-0 podman[430458]: 2025-10-14 09:50:45.705408365 +0000 UTC m=+0.102482215 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 09:50:46 compute-0 ceph-mon[74249]: pgmap v2938: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:46 compute-0 nova_compute[259627]: 2025-10-14 09:50:46.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:46 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:48 compute-0 ceph-mon[74249]: pgmap v2939: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:50:48 compute-0 nova_compute[259627]: 2025-10-14 09:50:48.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:48 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:50 compute-0 ceph-mon[74249]: pgmap v2940: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:50 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:51 compute-0 nova_compute[259627]: 2025-10-14 09:50:51.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:52 compute-0 ceph-mon[74249]: pgmap v2941: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:52 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:50:53 compute-0 nova_compute[259627]: 2025-10-14 09:50:53.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:54 compute-0 ceph-mon[74249]: pgmap v2942: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:54 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:56 compute-0 ceph-mon[74249]: pgmap v2943: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:56 compute-0 nova_compute[259627]: 2025-10-14 09:50:56.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:56 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:57 compute-0 nova_compute[259627]: 2025-10-14 09:50:57.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:50:58 compute-0 ceph-mon[74249]: pgmap v2944: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:50:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:50:58 compute-0 nova_compute[259627]: 2025-10-14 09:50:58.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:50:58 compute-0 podman[430499]: 2025-10-14 09:50:58.679919995 +0000 UTC m=+0.087374625 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 09:50:58 compute-0 podman[430498]: 2025-10-14 09:50:58.718496641 +0000 UTC m=+0.130422161 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:50:58 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:00 compute-0 ceph-mon[74249]: pgmap v2945: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:00 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:01 compute-0 nova_compute[259627]: 2025-10-14 09:51:01.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:01 compute-0 nova_compute[259627]: 2025-10-14 09:51:01.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:51:02 compute-0 ceph-mon[74249]: pgmap v2946: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:51:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:51:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:51:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:51:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:51:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:51:02 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:51:03 compute-0 nova_compute[259627]: 2025-10-14 09:51:03.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:04 compute-0 ceph-mon[74249]: pgmap v2947: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:04 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:51:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1816818552' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:51:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:51:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1816818552' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:51:05 compute-0 nova_compute[259627]: 2025-10-14 09:51:05.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:51:06 compute-0 nova_compute[259627]: 2025-10-14 09:51:06.009 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:51:06 compute-0 nova_compute[259627]: 2025-10-14 09:51:06.010 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:51:06 compute-0 nova_compute[259627]: 2025-10-14 09:51:06.010 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:51:06 compute-0 nova_compute[259627]: 2025-10-14 09:51:06.011 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:51:06 compute-0 nova_compute[259627]: 2025-10-14 09:51:06.011 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:51:06 compute-0 ceph-mon[74249]: pgmap v2948: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1816818552' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:51:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1816818552' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:51:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:51:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3794768121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:51:06 compute-0 nova_compute[259627]: 2025-10-14 09:51:06.504 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:51:06 compute-0 nova_compute[259627]: 2025-10-14 09:51:06.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:06 compute-0 nova_compute[259627]: 2025-10-14 09:51:06.682 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:51:06 compute-0 nova_compute[259627]: 2025-10-14 09:51:06.684 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3609MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:51:06 compute-0 nova_compute[259627]: 2025-10-14 09:51:06.684 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:51:06 compute-0 nova_compute[259627]: 2025-10-14 09:51:06.684 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:51:06 compute-0 nova_compute[259627]: 2025-10-14 09:51:06.747 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:51:06 compute-0 nova_compute[259627]: 2025-10-14 09:51:06.748 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:51:06 compute-0 nova_compute[259627]: 2025-10-14 09:51:06.763 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:51:06 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:51:07.069 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:51:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:51:07.069 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:51:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:51:07.070 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:51:07 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3794768121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:51:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:51:07 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3991354429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:51:07 compute-0 nova_compute[259627]: 2025-10-14 09:51:07.210 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:51:07 compute-0 nova_compute[259627]: 2025-10-14 09:51:07.218 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:51:07 compute-0 nova_compute[259627]: 2025-10-14 09:51:07.240 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:51:07 compute-0 nova_compute[259627]: 2025-10-14 09:51:07.243 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:51:07 compute-0 nova_compute[259627]: 2025-10-14 09:51:07.244 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:51:08 compute-0 ceph-mon[74249]: pgmap v2949: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:08 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3991354429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:51:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:51:08 compute-0 nova_compute[259627]: 2025-10-14 09:51:08.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:08 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:10 compute-0 ceph-mon[74249]: pgmap v2950: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:10 compute-0 nova_compute[259627]: 2025-10-14 09:51:10.245 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:51:10 compute-0 nova_compute[259627]: 2025-10-14 09:51:10.245 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:51:10 compute-0 nova_compute[259627]: 2025-10-14 09:51:10.246 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:51:10 compute-0 nova_compute[259627]: 2025-10-14 09:51:10.246 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:51:10 compute-0 nova_compute[259627]: 2025-10-14 09:51:10.512 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:51:10 compute-0 nova_compute[259627]: 2025-10-14 09:51:10.513 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:51:10 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:11 compute-0 nova_compute[259627]: 2025-10-14 09:51:11.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:12 compute-0 ceph-mon[74249]: pgmap v2951: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:12 compute-0 nova_compute[259627]: 2025-10-14 09:51:12.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:51:12 compute-0 nova_compute[259627]: 2025-10-14 09:51:12.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:51:12 compute-0 nova_compute[259627]: 2025-10-14 09:51:12.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:51:12 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:51:13 compute-0 nova_compute[259627]: 2025-10-14 09:51:13.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:14 compute-0 ceph-mon[74249]: pgmap v2952: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:14 compute-0 nova_compute[259627]: 2025-10-14 09:51:14.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:51:14 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:16 compute-0 ceph-mon[74249]: pgmap v2953: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:16 compute-0 nova_compute[259627]: 2025-10-14 09:51:16.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:16 compute-0 podman[430588]: 2025-10-14 09:51:16.657141093 +0000 UTC m=+0.072972151 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:51:16 compute-0 podman[430589]: 2025-10-14 09:51:16.690996814 +0000 UTC m=+0.091763732 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:51:16 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:18 compute-0 ceph-mon[74249]: pgmap v2954: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:51:18 compute-0 nova_compute[259627]: 2025-10-14 09:51:18.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:18 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:20 compute-0 ceph-mon[74249]: pgmap v2955: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:20 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:21 compute-0 nova_compute[259627]: 2025-10-14 09:51:21.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:22 compute-0 ceph-mon[74249]: pgmap v2956: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:22 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:51:23 compute-0 nova_compute[259627]: 2025-10-14 09:51:23.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:24 compute-0 ceph-mon[74249]: pgmap v2957: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:24 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:26 compute-0 ceph-mon[74249]: pgmap v2958: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:26 compute-0 nova_compute[259627]: 2025-10-14 09:51:26.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:26 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:28 compute-0 ceph-mon[74249]: pgmap v2959: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:51:28 compute-0 nova_compute[259627]: 2025-10-14 09:51:28.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:28 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:29 compute-0 podman[430629]: 2025-10-14 09:51:29.682827123 +0000 UTC m=+0.078683471 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 09:51:29 compute-0 podman[430628]: 2025-10-14 09:51:29.730181835 +0000 UTC m=+0.135345622 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:51:30 compute-0 ceph-mon[74249]: pgmap v2960: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:30 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:31 compute-0 nova_compute[259627]: 2025-10-14 09:51:31.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:32 compute-0 ceph-mon[74249]: pgmap v2961: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:51:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:51:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:51:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:51:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:51:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:51:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:51:32
Oct 14 09:51:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:51:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:51:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'backups', 'default.rgw.log', 'images', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'vms']
Oct 14 09:51:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:51:32 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:51:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:51:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:51:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:51:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:51:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:51:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:51:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:51:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:51:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:51:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:51:33 compute-0 nova_compute[259627]: 2025-10-14 09:51:33.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:34 compute-0 ceph-mon[74249]: pgmap v2962: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:34 compute-0 sudo[430672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:51:34 compute-0 sudo[430672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:51:34 compute-0 sudo[430672]: pam_unix(sudo:session): session closed for user root
Oct 14 09:51:34 compute-0 sudo[430697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:51:34 compute-0 sudo[430697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:51:34 compute-0 sudo[430697]: pam_unix(sudo:session): session closed for user root
Oct 14 09:51:34 compute-0 sudo[430722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:51:34 compute-0 sudo[430722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:51:34 compute-0 sudo[430722]: pam_unix(sudo:session): session closed for user root
Oct 14 09:51:34 compute-0 sudo[430747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:51:34 compute-0 sudo[430747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:51:34 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:35 compute-0 sudo[430747]: pam_unix(sudo:session): session closed for user root
Oct 14 09:51:35 compute-0 ceph-mon[74249]: pgmap v2963: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:51:35 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:51:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:51:35 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:51:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:51:35 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:51:35 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev bf68feab-2c23-4c18-bdd1-c09cf325c287 does not exist
Oct 14 09:51:35 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev a7c64a0c-e529-4489-a6c3-bdaa22ce19f7 does not exist
Oct 14 09:51:35 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 7ffc019e-1bd3-48de-bcf6-54020dfbccfb does not exist
Oct 14 09:51:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:51:35 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:51:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:51:35 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:51:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:51:35 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:51:35 compute-0 sudo[430803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:51:35 compute-0 sudo[430803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:51:35 compute-0 sudo[430803]: pam_unix(sudo:session): session closed for user root
Oct 14 09:51:35 compute-0 sudo[430828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:51:35 compute-0 sudo[430828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:51:35 compute-0 sudo[430828]: pam_unix(sudo:session): session closed for user root
Oct 14 09:51:35 compute-0 sudo[430853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:51:35 compute-0 sudo[430853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:51:35 compute-0 sudo[430853]: pam_unix(sudo:session): session closed for user root
Oct 14 09:51:35 compute-0 sudo[430878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:51:35 compute-0 sudo[430878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:51:36 compute-0 podman[430945]: 2025-10-14 09:51:36.015368566 +0000 UTC m=+0.057415899 container create 860c524e62831ba055d3739fb7f0c2e87ec7a00c6871891d16166bfa01141bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elbakyan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 09:51:36 compute-0 systemd[1]: Started libpod-conmon-860c524e62831ba055d3739fb7f0c2e87ec7a00c6871891d16166bfa01141bff.scope.
Oct 14 09:51:36 compute-0 podman[430945]: 2025-10-14 09:51:35.992240599 +0000 UTC m=+0.034287942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:51:36 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:51:36 compute-0 podman[430945]: 2025-10-14 09:51:36.131148107 +0000 UTC m=+0.173195430 container init 860c524e62831ba055d3739fb7f0c2e87ec7a00c6871891d16166bfa01141bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elbakyan, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 09:51:36 compute-0 podman[430945]: 2025-10-14 09:51:36.144060894 +0000 UTC m=+0.186108207 container start 860c524e62831ba055d3739fb7f0c2e87ec7a00c6871891d16166bfa01141bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 09:51:36 compute-0 podman[430945]: 2025-10-14 09:51:36.14754276 +0000 UTC m=+0.189590073 container attach 860c524e62831ba055d3739fb7f0c2e87ec7a00c6871891d16166bfa01141bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:51:36 compute-0 musing_elbakyan[430962]: 167 167
Oct 14 09:51:36 compute-0 systemd[1]: libpod-860c524e62831ba055d3739fb7f0c2e87ec7a00c6871891d16166bfa01141bff.scope: Deactivated successfully.
Oct 14 09:51:36 compute-0 podman[430945]: 2025-10-14 09:51:36.153354632 +0000 UTC m=+0.195401945 container died 860c524e62831ba055d3739fb7f0c2e87ec7a00c6871891d16166bfa01141bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:51:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b20eed22d6fff96061c119972c7f474fce7060950594ff5372b106e81f2359fe-merged.mount: Deactivated successfully.
Oct 14 09:51:36 compute-0 podman[430945]: 2025-10-14 09:51:36.204151599 +0000 UTC m=+0.246198912 container remove 860c524e62831ba055d3739fb7f0c2e87ec7a00c6871891d16166bfa01141bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 09:51:36 compute-0 systemd[1]: libpod-conmon-860c524e62831ba055d3739fb7f0c2e87ec7a00c6871891d16166bfa01141bff.scope: Deactivated successfully.
Oct 14 09:51:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:51:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:51:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:51:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:51:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:51:36 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:51:36 compute-0 podman[430987]: 2025-10-14 09:51:36.444621319 +0000 UTC m=+0.073406402 container create 97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:51:36 compute-0 systemd[1]: Started libpod-conmon-97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea.scope.
Oct 14 09:51:36 compute-0 podman[430987]: 2025-10-14 09:51:36.414843288 +0000 UTC m=+0.043628411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:51:36 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e14ee7e64e385b89ed9f6e8a5f86e297ad2281056b2209424098a83f2252ef4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e14ee7e64e385b89ed9f6e8a5f86e297ad2281056b2209424098a83f2252ef4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e14ee7e64e385b89ed9f6e8a5f86e297ad2281056b2209424098a83f2252ef4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e14ee7e64e385b89ed9f6e8a5f86e297ad2281056b2209424098a83f2252ef4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e14ee7e64e385b89ed9f6e8a5f86e297ad2281056b2209424098a83f2252ef4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:51:36 compute-0 podman[430987]: 2025-10-14 09:51:36.562590544 +0000 UTC m=+0.191375677 container init 97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:51:36 compute-0 nova_compute[259627]: 2025-10-14 09:51:36.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:36 compute-0 podman[430987]: 2025-10-14 09:51:36.579837687 +0000 UTC m=+0.208622770 container start 97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_babbage, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 09:51:36 compute-0 podman[430987]: 2025-10-14 09:51:36.584272306 +0000 UTC m=+0.213057359 container attach 97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_babbage, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 09:51:36 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:37 compute-0 ceph-mon[74249]: pgmap v2964: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:37 compute-0 jovial_babbage[431004]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:51:37 compute-0 jovial_babbage[431004]: --> relative data size: 1.0
Oct 14 09:51:37 compute-0 jovial_babbage[431004]: --> All data devices are unavailable
Oct 14 09:51:37 compute-0 systemd[1]: libpod-97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea.scope: Deactivated successfully.
Oct 14 09:51:37 compute-0 systemd[1]: libpod-97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea.scope: Consumed 1.066s CPU time.
Oct 14 09:51:37 compute-0 podman[430987]: 2025-10-14 09:51:37.673383679 +0000 UTC m=+1.302168772 container died 97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:51:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e14ee7e64e385b89ed9f6e8a5f86e297ad2281056b2209424098a83f2252ef4-merged.mount: Deactivated successfully.
Oct 14 09:51:37 compute-0 podman[430987]: 2025-10-14 09:51:37.732286714 +0000 UTC m=+1.361071757 container remove 97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_babbage, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 09:51:37 compute-0 systemd[1]: libpod-conmon-97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea.scope: Deactivated successfully.
Oct 14 09:51:37 compute-0 sudo[430878]: pam_unix(sudo:session): session closed for user root
Oct 14 09:51:37 compute-0 sudo[431045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:51:37 compute-0 sudo[431045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:51:37 compute-0 sudo[431045]: pam_unix(sudo:session): session closed for user root
Oct 14 09:51:37 compute-0 sudo[431070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:51:37 compute-0 sudo[431070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:51:37 compute-0 sudo[431070]: pam_unix(sudo:session): session closed for user root
Oct 14 09:51:37 compute-0 sudo[431095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:51:38 compute-0 sudo[431095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:51:38 compute-0 sudo[431095]: pam_unix(sudo:session): session closed for user root
Oct 14 09:51:38 compute-0 sudo[431120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:51:38 compute-0 sudo[431120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:51:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:51:38 compute-0 podman[431186]: 2025-10-14 09:51:38.459453676 +0000 UTC m=+0.037139813 container create 1e5a20428a0d9b6f41bfbb380bc4037c211708d90ea9aaf761d04092453d178d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 14 09:51:38 compute-0 systemd[1]: Started libpod-conmon-1e5a20428a0d9b6f41bfbb380bc4037c211708d90ea9aaf761d04092453d178d.scope.
Oct 14 09:51:38 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:51:38 compute-0 podman[431186]: 2025-10-14 09:51:38.445533344 +0000 UTC m=+0.023219501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:51:38 compute-0 podman[431186]: 2025-10-14 09:51:38.548048769 +0000 UTC m=+0.125734986 container init 1e5a20428a0d9b6f41bfbb380bc4037c211708d90ea9aaf761d04092453d178d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:51:38 compute-0 podman[431186]: 2025-10-14 09:51:38.558717801 +0000 UTC m=+0.136403938 container start 1e5a20428a0d9b6f41bfbb380bc4037c211708d90ea9aaf761d04092453d178d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 09:51:38 compute-0 thirsty_vaughan[431203]: 167 167
Oct 14 09:51:38 compute-0 podman[431186]: 2025-10-14 09:51:38.562225827 +0000 UTC m=+0.139912054 container attach 1e5a20428a0d9b6f41bfbb380bc4037c211708d90ea9aaf761d04092453d178d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:51:38 compute-0 systemd[1]: libpod-1e5a20428a0d9b6f41bfbb380bc4037c211708d90ea9aaf761d04092453d178d.scope: Deactivated successfully.
Oct 14 09:51:38 compute-0 podman[431186]: 2025-10-14 09:51:38.562932324 +0000 UTC m=+0.140618461 container died 1e5a20428a0d9b6f41bfbb380bc4037c211708d90ea9aaf761d04092453d178d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:51:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1758a20d685c22bd7a8c1edc605c6f676b4160a6a795f5b534afa36d91b5671-merged.mount: Deactivated successfully.
Oct 14 09:51:38 compute-0 podman[431186]: 2025-10-14 09:51:38.594995611 +0000 UTC m=+0.172681768 container remove 1e5a20428a0d9b6f41bfbb380bc4037c211708d90ea9aaf761d04092453d178d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 09:51:38 compute-0 systemd[1]: libpod-conmon-1e5a20428a0d9b6f41bfbb380bc4037c211708d90ea9aaf761d04092453d178d.scope: Deactivated successfully.
Oct 14 09:51:38 compute-0 nova_compute[259627]: 2025-10-14 09:51:38.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:38 compute-0 podman[431227]: 2025-10-14 09:51:38.80974342 +0000 UTC m=+0.059024479 container create e4f00ab08a68c4f261e9a65d8575921b9afc1c8881dfaf2f298517a25b3444aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:51:38 compute-0 systemd[1]: Started libpod-conmon-e4f00ab08a68c4f261e9a65d8575921b9afc1c8881dfaf2f298517a25b3444aa.scope.
Oct 14 09:51:38 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f588a04b8ce20d26e15db7e721256363b705f6d31d46295adca8587c91964ca7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f588a04b8ce20d26e15db7e721256363b705f6d31d46295adca8587c91964ca7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f588a04b8ce20d26e15db7e721256363b705f6d31d46295adca8587c91964ca7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:51:38 compute-0 podman[431227]: 2025-10-14 09:51:38.785215798 +0000 UTC m=+0.034496957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f588a04b8ce20d26e15db7e721256363b705f6d31d46295adca8587c91964ca7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:51:38 compute-0 podman[431227]: 2025-10-14 09:51:38.89489222 +0000 UTC m=+0.144173389 container init e4f00ab08a68c4f261e9a65d8575921b9afc1c8881dfaf2f298517a25b3444aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:51:38 compute-0 podman[431227]: 2025-10-14 09:51:38.900452166 +0000 UTC m=+0.149733225 container start e4f00ab08a68c4f261e9a65d8575921b9afc1c8881dfaf2f298517a25b3444aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 09:51:38 compute-0 podman[431227]: 2025-10-14 09:51:38.904055154 +0000 UTC m=+0.153336324 container attach e4f00ab08a68c4f261e9a65d8575921b9afc1c8881dfaf2f298517a25b3444aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 14 09:51:38 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:39 compute-0 determined_pasteur[431244]: {
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:     "0": [
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:         {
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "devices": [
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "/dev/loop3"
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             ],
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "lv_name": "ceph_lv0",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "lv_size": "21470642176",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "name": "ceph_lv0",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "tags": {
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.cluster_name": "ceph",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.crush_device_class": "",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.encrypted": "0",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.osd_id": "0",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.type": "block",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.vdo": "0"
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             },
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "type": "block",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "vg_name": "ceph_vg0"
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:         }
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:     ],
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:     "1": [
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:         {
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "devices": [
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "/dev/loop4"
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             ],
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "lv_name": "ceph_lv1",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "lv_size": "21470642176",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "name": "ceph_lv1",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "tags": {
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.cluster_name": "ceph",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.crush_device_class": "",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.encrypted": "0",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.osd_id": "1",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.type": "block",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.vdo": "0"
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             },
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "type": "block",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "vg_name": "ceph_vg1"
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:         }
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:     ],
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:     "2": [
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:         {
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "devices": [
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "/dev/loop5"
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             ],
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "lv_name": "ceph_lv2",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "lv_size": "21470642176",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "name": "ceph_lv2",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "tags": {
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.cluster_name": "ceph",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.crush_device_class": "",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.encrypted": "0",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.osd_id": "2",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.type": "block",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:                 "ceph.vdo": "0"
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             },
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "type": "block",
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:             "vg_name": "ceph_vg2"
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:         }
Oct 14 09:51:39 compute-0 determined_pasteur[431244]:     ]
Oct 14 09:51:39 compute-0 determined_pasteur[431244]: }
Oct 14 09:51:39 compute-0 systemd[1]: libpod-e4f00ab08a68c4f261e9a65d8575921b9afc1c8881dfaf2f298517a25b3444aa.scope: Deactivated successfully.
Oct 14 09:51:39 compute-0 podman[431227]: 2025-10-14 09:51:39.596056574 +0000 UTC m=+0.845337643 container died e4f00ab08a68c4f261e9a65d8575921b9afc1c8881dfaf2f298517a25b3444aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:51:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f588a04b8ce20d26e15db7e721256363b705f6d31d46295adca8587c91964ca7-merged.mount: Deactivated successfully.
Oct 14 09:51:39 compute-0 podman[431227]: 2025-10-14 09:51:39.659593853 +0000 UTC m=+0.908874922 container remove e4f00ab08a68c4f261e9a65d8575921b9afc1c8881dfaf2f298517a25b3444aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:51:39 compute-0 systemd[1]: libpod-conmon-e4f00ab08a68c4f261e9a65d8575921b9afc1c8881dfaf2f298517a25b3444aa.scope: Deactivated successfully.
Oct 14 09:51:39 compute-0 sudo[431120]: pam_unix(sudo:session): session closed for user root
Oct 14 09:51:39 compute-0 sudo[431267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:51:39 compute-0 sudo[431267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:51:39 compute-0 sudo[431267]: pam_unix(sudo:session): session closed for user root
Oct 14 09:51:39 compute-0 sudo[431292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:51:39 compute-0 sudo[431292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:51:39 compute-0 sudo[431292]: pam_unix(sudo:session): session closed for user root
Oct 14 09:51:39 compute-0 sudo[431317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:51:39 compute-0 sudo[431317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:51:39 compute-0 sudo[431317]: pam_unix(sudo:session): session closed for user root
Oct 14 09:51:39 compute-0 sudo[431342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:51:39 compute-0 sudo[431342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:51:40 compute-0 ceph-mon[74249]: pgmap v2965: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:40 compute-0 podman[431407]: 2025-10-14 09:51:40.357373744 +0000 UTC m=+0.079542432 container create 89e2741fa37351e52efde0ebf48137b4a6430a03ea376ccdddf70afcb573d521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:51:40 compute-0 systemd[1]: Started libpod-conmon-89e2741fa37351e52efde0ebf48137b4a6430a03ea376ccdddf70afcb573d521.scope.
Oct 14 09:51:40 compute-0 podman[431407]: 2025-10-14 09:51:40.318192672 +0000 UTC m=+0.040361450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:51:40 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:51:40 compute-0 podman[431407]: 2025-10-14 09:51:40.432984139 +0000 UTC m=+0.155152837 container init 89e2741fa37351e52efde0ebf48137b4a6430a03ea376ccdddf70afcb573d521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:51:40 compute-0 podman[431407]: 2025-10-14 09:51:40.439374576 +0000 UTC m=+0.161543264 container start 89e2741fa37351e52efde0ebf48137b4a6430a03ea376ccdddf70afcb573d521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:51:40 compute-0 podman[431407]: 2025-10-14 09:51:40.443135568 +0000 UTC m=+0.165304316 container attach 89e2741fa37351e52efde0ebf48137b4a6430a03ea376ccdddf70afcb573d521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 09:51:40 compute-0 frosty_gauss[431423]: 167 167
Oct 14 09:51:40 compute-0 systemd[1]: libpod-89e2741fa37351e52efde0ebf48137b4a6430a03ea376ccdddf70afcb573d521.scope: Deactivated successfully.
Oct 14 09:51:40 compute-0 podman[431407]: 2025-10-14 09:51:40.445709951 +0000 UTC m=+0.167878659 container died 89e2741fa37351e52efde0ebf48137b4a6430a03ea376ccdddf70afcb573d521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:51:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-51b539af613f0c954f4ee9f23b6f197e7392270c3ab448eb80ebfaf8824f6342-merged.mount: Deactivated successfully.
Oct 14 09:51:40 compute-0 podman[431407]: 2025-10-14 09:51:40.491345721 +0000 UTC m=+0.213514399 container remove 89e2741fa37351e52efde0ebf48137b4a6430a03ea376ccdddf70afcb573d521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:51:40 compute-0 systemd[1]: libpod-conmon-89e2741fa37351e52efde0ebf48137b4a6430a03ea376ccdddf70afcb573d521.scope: Deactivated successfully.
Oct 14 09:51:40 compute-0 podman[431447]: 2025-10-14 09:51:40.670839866 +0000 UTC m=+0.049787673 container create 8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclaren, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:51:40 compute-0 systemd[1]: Started libpod-conmon-8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb.scope.
Oct 14 09:51:40 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf8d9cd5c601193e89665e3bd368a7be11969636ccc739fc9c169cd4b01302f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:51:40 compute-0 podman[431447]: 2025-10-14 09:51:40.653752086 +0000 UTC m=+0.032699913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf8d9cd5c601193e89665e3bd368a7be11969636ccc739fc9c169cd4b01302f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf8d9cd5c601193e89665e3bd368a7be11969636ccc739fc9c169cd4b01302f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf8d9cd5c601193e89665e3bd368a7be11969636ccc739fc9c169cd4b01302f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:51:40 compute-0 podman[431447]: 2025-10-14 09:51:40.761802437 +0000 UTC m=+0.140750254 container init 8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 09:51:40 compute-0 podman[431447]: 2025-10-14 09:51:40.767546458 +0000 UTC m=+0.146494285 container start 8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclaren, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:51:40 compute-0 podman[431447]: 2025-10-14 09:51:40.771076145 +0000 UTC m=+0.150023962 container attach 8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclaren, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:51:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:41 compute-0 nova_compute[259627]: 2025-10-14 09:51:41.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]: {
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:         "osd_id": 2,
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:         "type": "bluestore"
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:     },
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:         "osd_id": 1,
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:         "type": "bluestore"
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:     },
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:         "osd_id": 0,
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:         "type": "bluestore"
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]:     }
Oct 14 09:51:41 compute-0 affectionate_mclaren[431463]: }
Oct 14 09:51:41 compute-0 systemd[1]: libpod-8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb.scope: Deactivated successfully.
Oct 14 09:51:41 compute-0 podman[431447]: 2025-10-14 09:51:41.822149486 +0000 UTC m=+1.201097323 container died 8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:51:41 compute-0 systemd[1]: libpod-8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb.scope: Consumed 1.063s CPU time.
Oct 14 09:51:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cf8d9cd5c601193e89665e3bd368a7be11969636ccc739fc9c169cd4b01302f-merged.mount: Deactivated successfully.
Oct 14 09:51:41 compute-0 podman[431447]: 2025-10-14 09:51:41.907560531 +0000 UTC m=+1.286508368 container remove 8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclaren, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:51:41 compute-0 systemd[1]: libpod-conmon-8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb.scope: Deactivated successfully.
Oct 14 09:51:41 compute-0 sudo[431342]: pam_unix(sudo:session): session closed for user root
Oct 14 09:51:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:51:41 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:51:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:51:41 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:51:41 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 132937b2-df76-4d74-8c39-0a46088189cd does not exist
Oct 14 09:51:41 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 97971904-8cbb-48c0-b9c5-f0d39d43d1c4 does not exist
Oct 14 09:51:42 compute-0 ceph-mon[74249]: pgmap v2966: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:42 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:51:42 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:51:42 compute-0 sudo[431511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:51:42 compute-0 sudo[431511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:51:42 compute-0 sudo[431511]: pam_unix(sudo:session): session closed for user root
Oct 14 09:51:42 compute-0 sudo[431536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:51:42 compute-0 sudo[431536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:51:42 compute-0 sudo[431536]: pam_unix(sudo:session): session closed for user root
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:51:43 compute-0 nova_compute[259627]: 2025-10-14 09:51:43.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:51:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:51:44 compute-0 ceph-mon[74249]: pgmap v2967: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:46 compute-0 ceph-mon[74249]: pgmap v2968: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:46 compute-0 nova_compute[259627]: 2025-10-14 09:51:46.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:46 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:51:46 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 62K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1354 writes, 6382 keys, 1354 commit groups, 1.0 writes per commit group, ingest: 8.78 MB, 0.01 MB/s
                                           Interval WAL: 1354 writes, 1354 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    105.1      0.72              0.30        44    0.016       0      0       0.0       0.0
                                             L6      1/0    9.95 MB   0.0      0.4     0.1      0.3       0.4      0.0       0.0   4.8    182.7    154.6      2.33              1.30        43    0.054    274K    23K       0.0       0.0
                                            Sum      1/0    9.95 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.8    139.6    142.9      3.04              1.60        87    0.035    274K    23K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.5    145.7    150.2      0.45              0.30        12    0.038     50K   3121       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.4      0.0       0.0   0.0    182.7    154.6      2.33              1.30        43    0.054    274K    23K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    106.1      0.71              0.30        43    0.017       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.074, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.42 GB write, 0.08 MB/s write, 0.41 GB read, 0.08 MB/s read, 3.0 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5646f3b2b1f0#2 capacity: 304.00 MB usage: 48.09 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000407 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3116,46.08 MB,15.1586%) FilterBlock(88,764.36 KB,0.245541%) IndexBlock(88,1.26 MB,0.413453%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 14 09:51:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:47 compute-0 podman[431562]: 2025-10-14 09:51:47.679365243 +0000 UTC m=+0.074381126 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:51:47 compute-0 podman[431561]: 2025-10-14 09:51:47.688968579 +0000 UTC m=+0.084095684 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 14 09:51:48 compute-0 ceph-mon[74249]: pgmap v2969: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:51:48 compute-0 nova_compute[259627]: 2025-10-14 09:51:48.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:50 compute-0 ceph-mon[74249]: pgmap v2970: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:51 compute-0 nova_compute[259627]: 2025-10-14 09:51:51.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:52 compute-0 ceph-mon[74249]: pgmap v2971: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:51:53 compute-0 nova_compute[259627]: 2025-10-14 09:51:53.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:54 compute-0 ceph-mon[74249]: pgmap v2972: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:56 compute-0 ceph-mon[74249]: pgmap v2973: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:56 compute-0 nova_compute[259627]: 2025-10-14 09:51:56.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:58 compute-0 ceph-mon[74249]: pgmap v2974: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:51:58 compute-0 nova_compute[259627]: 2025-10-14 09:51:58.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:51:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:51:59 compute-0 nova_compute[259627]: 2025-10-14 09:51:59.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:52:00 compute-0 ceph-mon[74249]: pgmap v2975: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:00 compute-0 podman[431599]: 2025-10-14 09:52:00.721211581 +0000 UTC m=+0.120918708 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 09:52:00 compute-0 podman[431598]: 2025-10-14 09:52:00.735654556 +0000 UTC m=+0.142216661 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:52:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:01 compute-0 nova_compute[259627]: 2025-10-14 09:52:01.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:02 compute-0 ceph-mon[74249]: pgmap v2976: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:52:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:52:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:52:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:52:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:52:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:52:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:52:03 compute-0 nova_compute[259627]: 2025-10-14 09:52:03.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:03 compute-0 nova_compute[259627]: 2025-10-14 09:52:03.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:52:04 compute-0 ceph-mon[74249]: pgmap v2977: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:52:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/584146414' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:52:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:52:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/584146414' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:52:06 compute-0 ceph-mon[74249]: pgmap v2978: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/584146414' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:52:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/584146414' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:52:06 compute-0 nova_compute[259627]: 2025-10-14 09:52:06.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:06 compute-0 nova_compute[259627]: 2025-10-14 09:52:06.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:52:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:07 compute-0 nova_compute[259627]: 2025-10-14 09:52:07.066 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:52:07 compute-0 nova_compute[259627]: 2025-10-14 09:52:07.067 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:52:07 compute-0 nova_compute[259627]: 2025-10-14 09:52:07.067 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:52:07 compute-0 nova_compute[259627]: 2025-10-14 09:52:07.068 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:52:07 compute-0 nova_compute[259627]: 2025-10-14 09:52:07.068 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:52:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:52:07.069 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:52:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:52:07.070 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:52:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:52:07.070 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:52:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:52:07 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/381485472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:52:07 compute-0 nova_compute[259627]: 2025-10-14 09:52:07.509 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:52:07 compute-0 nova_compute[259627]: 2025-10-14 09:52:07.673 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:52:07 compute-0 nova_compute[259627]: 2025-10-14 09:52:07.674 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3611MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:52:07 compute-0 nova_compute[259627]: 2025-10-14 09:52:07.674 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:52:07 compute-0 nova_compute[259627]: 2025-10-14 09:52:07.674 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:52:08 compute-0 nova_compute[259627]: 2025-10-14 09:52:08.040 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:52:08 compute-0 nova_compute[259627]: 2025-10-14 09:52:08.040 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:52:08 compute-0 nova_compute[259627]: 2025-10-14 09:52:08.133 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:52:08 compute-0 ceph-mon[74249]: pgmap v2979: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:08 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/381485472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:52:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:52:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:52:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1923546474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:52:08 compute-0 nova_compute[259627]: 2025-10-14 09:52:08.569 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:52:08 compute-0 nova_compute[259627]: 2025-10-14 09:52:08.577 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:52:08 compute-0 nova_compute[259627]: 2025-10-14 09:52:08.605 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:52:08 compute-0 nova_compute[259627]: 2025-10-14 09:52:08.608 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:52:08 compute-0 nova_compute[259627]: 2025-10-14 09:52:08.609 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:52:08 compute-0 nova_compute[259627]: 2025-10-14 09:52:08.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:09 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1923546474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:52:10 compute-0 ceph-mon[74249]: pgmap v2980: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:11 compute-0 nova_compute[259627]: 2025-10-14 09:52:11.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:11 compute-0 nova_compute[259627]: 2025-10-14 09:52:11.605 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:52:11 compute-0 nova_compute[259627]: 2025-10-14 09:52:11.606 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:52:11 compute-0 nova_compute[259627]: 2025-10-14 09:52:11.606 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:52:11 compute-0 nova_compute[259627]: 2025-10-14 09:52:11.607 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:52:11 compute-0 nova_compute[259627]: 2025-10-14 09:52:11.679 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:52:11 compute-0 nova_compute[259627]: 2025-10-14 09:52:11.679 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:52:12 compute-0 ceph-mon[74249]: pgmap v2981: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:12 compute-0 nova_compute[259627]: 2025-10-14 09:52:12.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:52:12 compute-0 nova_compute[259627]: 2025-10-14 09:52:12.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:52:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:52:13 compute-0 nova_compute[259627]: 2025-10-14 09:52:13.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:14 compute-0 ceph-mon[74249]: pgmap v2982: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:14 compute-0 nova_compute[259627]: 2025-10-14 09:52:14.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:52:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:15 compute-0 nova_compute[259627]: 2025-10-14 09:52:15.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:52:16 compute-0 ceph-mon[74249]: pgmap v2983: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:16 compute-0 nova_compute[259627]: 2025-10-14 09:52:16.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:18 compute-0 ceph-mon[74249]: pgmap v2984: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:52:18 compute-0 podman[431690]: 2025-10-14 09:52:18.67278692 +0000 UTC m=+0.075971245 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:52:18 compute-0 podman[431689]: 2025-10-14 09:52:18.672818611 +0000 UTC m=+0.078393565 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 09:52:18 compute-0 nova_compute[259627]: 2025-10-14 09:52:18.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:20 compute-0 ceph-mon[74249]: pgmap v2985: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:21 compute-0 nova_compute[259627]: 2025-10-14 09:52:21.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:22 compute-0 ceph-mon[74249]: pgmap v2986: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:52:23 compute-0 nova_compute[259627]: 2025-10-14 09:52:23.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:24 compute-0 ceph-mon[74249]: pgmap v2987: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:26 compute-0 ceph-mon[74249]: pgmap v2988: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:26 compute-0 nova_compute[259627]: 2025-10-14 09:52:26.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:28 compute-0 ceph-mon[74249]: pgmap v2989: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:52:28 compute-0 nova_compute[259627]: 2025-10-14 09:52:28.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:29 compute-0 ceph-mon[74249]: pgmap v2990: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:31 compute-0 nova_compute[259627]: 2025-10-14 09:52:31.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:31 compute-0 podman[431727]: 2025-10-14 09:52:31.670127775 +0000 UTC m=+0.077212126 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 14 09:52:31 compute-0 podman[431726]: 2025-10-14 09:52:31.723283779 +0000 UTC m=+0.127976291 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 09:52:32 compute-0 ceph-mon[74249]: pgmap v2991: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:52:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:52:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:52:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:52:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:52:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:52:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:52:32
Oct 14 09:52:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:52:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:52:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'images', 'volumes', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root']
Oct 14 09:52:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:52:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:52:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:52:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:52:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:52:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:52:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:52:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:52:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:52:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:52:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:52:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:52:33 compute-0 ceph-mgr[74543]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3625056923
Oct 14 09:52:33 compute-0 nova_compute[259627]: 2025-10-14 09:52:33.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:34 compute-0 ceph-mon[74249]: pgmap v2992: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:34 compute-0 sshd-session[431771]: Accepted publickey for zuul from 192.168.122.30 port 51450 ssh2: ECDSA SHA256:jaGWHGBmEwGLhBs5A5z51rEw7f54kxwV4dpIRk+zLbs
Oct 14 09:52:34 compute-0 systemd-logind[799]: New session 54 of user zuul.
Oct 14 09:52:35 compute-0 systemd[1]: Started Session 54 of User zuul.
Oct 14 09:52:35 compute-0 sshd-session[431771]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 14 09:52:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:36 compute-0 ceph-mon[74249]: pgmap v2993: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:36 compute-0 nova_compute[259627]: 2025-10-14 09:52:36.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:37 compute-0 nova_compute[259627]: 2025-10-14 09:52:37.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:52:37.412 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:52:37 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:52:37.413 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:52:38 compute-0 ceph-mon[74249]: pgmap v2994: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:52:38 compute-0 nova_compute[259627]: 2025-10-14 09:52:38.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:40 compute-0 ceph-mon[74249]: pgmap v2995: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:41 compute-0 sshd-session[431774]: Connection closed by 192.168.122.30 port 51450
Oct 14 09:52:41 compute-0 sshd-session[431771]: pam_unix(sshd:session): session closed for user zuul
Oct 14 09:52:41 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Oct 14 09:52:41 compute-0 systemd-logind[799]: Session 54 logged out. Waiting for processes to exit.
Oct 14 09:52:41 compute-0 systemd-logind[799]: Removed session 54.
Oct 14 09:52:41 compute-0 nova_compute[259627]: 2025-10-14 09:52:41.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:41 compute-0 nova_compute[259627]: 2025-10-14 09:52:41.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:52:42 compute-0 ceph-mon[74249]: pgmap v2996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:42 compute-0 sudo[432028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:52:42 compute-0 sudo[432028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:52:42 compute-0 sudo[432028]: pam_unix(sudo:session): session closed for user root
Oct 14 09:52:42 compute-0 sudo[432053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:52:42 compute-0 sudo[432053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:52:42 compute-0 sudo[432053]: pam_unix(sudo:session): session closed for user root
Oct 14 09:52:42 compute-0 sudo[432078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:52:42 compute-0 sudo[432078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:52:42 compute-0 sudo[432078]: pam_unix(sudo:session): session closed for user root
Oct 14 09:52:42 compute-0 sudo[432103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:52:42 compute-0 sudo[432103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:52:42 compute-0 sudo[432103]: pam_unix(sudo:session): session closed for user root
Oct 14 09:52:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:52:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:52:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:52:43 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:52:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:52:43 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev f5eb4a16-5226-404c-b485-710b60932f25 does not exist
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 611ce85b-464b-4b4f-afd1-e207e333c6dc does not exist
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 6ac81da6-998a-4b8b-9a5d-dd63ef61f31c does not exist
Oct 14 09:52:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:52:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:52:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:52:43 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:52:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:52:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:52:43 compute-0 sudo[432157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:52:43 compute-0 sudo[432157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:52:43 compute-0 sudo[432157]: pam_unix(sudo:session): session closed for user root
Oct 14 09:52:43 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:52:43 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:52:43 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:52:43 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:52:43 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:52:43 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:52:43 compute-0 sudo[432182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:52:43 compute-0 sudo[432182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:52:43 compute-0 sudo[432182]: pam_unix(sudo:session): session closed for user root
Oct 14 09:52:43 compute-0 sudo[432207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:52:43 compute-0 sudo[432207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:52:43 compute-0 sudo[432207]: pam_unix(sudo:session): session closed for user root
Oct 14 09:52:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:52:43 compute-0 sudo[432232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:52:43 compute-0 sudo[432232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:52:43 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:52:43.415 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:52:43 compute-0 podman[432297]: 2025-10-14 09:52:43.767704932 +0000 UTC m=+0.067221051 container create c1d098479afb3f76001199b5402e282d73de3fd3123290a03b4957f462d9567e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 09:52:43 compute-0 systemd[1]: Started libpod-conmon-c1d098479afb3f76001199b5402e282d73de3fd3123290a03b4957f462d9567e.scope.
Oct 14 09:52:43 compute-0 podman[432297]: 2025-10-14 09:52:43.744371959 +0000 UTC m=+0.043888098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:52:43 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:52:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:52:43 compute-0 podman[432297]: 2025-10-14 09:52:43.864522257 +0000 UTC m=+0.164038386 container init c1d098479afb3f76001199b5402e282d73de3fd3123290a03b4957f462d9567e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:52:43 compute-0 podman[432297]: 2025-10-14 09:52:43.873645601 +0000 UTC m=+0.173161760 container start c1d098479afb3f76001199b5402e282d73de3fd3123290a03b4957f462d9567e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 09:52:43 compute-0 podman[432297]: 2025-10-14 09:52:43.877957367 +0000 UTC m=+0.177473526 container attach c1d098479afb3f76001199b5402e282d73de3fd3123290a03b4957f462d9567e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:52:43 compute-0 sad_wilson[432313]: 167 167
Oct 14 09:52:43 compute-0 systemd[1]: libpod-c1d098479afb3f76001199b5402e282d73de3fd3123290a03b4957f462d9567e.scope: Deactivated successfully.
Oct 14 09:52:43 compute-0 podman[432297]: 2025-10-14 09:52:43.88176295 +0000 UTC m=+0.181279099 container died c1d098479afb3f76001199b5402e282d73de3fd3123290a03b4957f462d9567e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:52:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2225f479086b4b056008d60a18e10e6ffdb1574e910df239925c3b413d92a0e-merged.mount: Deactivated successfully.
Oct 14 09:52:43 compute-0 podman[432297]: 2025-10-14 09:52:43.936394241 +0000 UTC m=+0.235910400 container remove c1d098479afb3f76001199b5402e282d73de3fd3123290a03b4957f462d9567e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:52:43 compute-0 systemd[1]: libpod-conmon-c1d098479afb3f76001199b5402e282d73de3fd3123290a03b4957f462d9567e.scope: Deactivated successfully.
Oct 14 09:52:43 compute-0 nova_compute[259627]: 2025-10-14 09:52:43.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:44 compute-0 ceph-mon[74249]: pgmap v2997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:44 compute-0 podman[432338]: 2025-10-14 09:52:44.181928536 +0000 UTC m=+0.070972193 container create 9d1dbaafc65cd3b1d14dd06b5eb2296aea4feb2d9ca3bcf83f2d28122189ac8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 09:52:44 compute-0 systemd[1]: Started libpod-conmon-9d1dbaafc65cd3b1d14dd06b5eb2296aea4feb2d9ca3bcf83f2d28122189ac8f.scope.
Oct 14 09:52:44 compute-0 podman[432338]: 2025-10-14 09:52:44.154053402 +0000 UTC m=+0.043097069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:52:44 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0a9c758391a63e8c37d6209efd3276784b80b75803e529b88d705b481a2c1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0a9c758391a63e8c37d6209efd3276784b80b75803e529b88d705b481a2c1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0a9c758391a63e8c37d6209efd3276784b80b75803e529b88d705b481a2c1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0a9c758391a63e8c37d6209efd3276784b80b75803e529b88d705b481a2c1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0a9c758391a63e8c37d6209efd3276784b80b75803e529b88d705b481a2c1d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:52:44 compute-0 podman[432338]: 2025-10-14 09:52:44.305633981 +0000 UTC m=+0.194677658 container init 9d1dbaafc65cd3b1d14dd06b5eb2296aea4feb2d9ca3bcf83f2d28122189ac8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:52:44 compute-0 podman[432338]: 2025-10-14 09:52:44.32270839 +0000 UTC m=+0.211752037 container start 9d1dbaafc65cd3b1d14dd06b5eb2296aea4feb2d9ca3bcf83f2d28122189ac8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 09:52:44 compute-0 podman[432338]: 2025-10-14 09:52:44.326856902 +0000 UTC m=+0.215900559 container attach 9d1dbaafc65cd3b1d14dd06b5eb2296aea4feb2d9ca3bcf83f2d28122189ac8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:52:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:45 compute-0 gifted_wozniak[432354]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:52:45 compute-0 gifted_wozniak[432354]: --> relative data size: 1.0
Oct 14 09:52:45 compute-0 gifted_wozniak[432354]: --> All data devices are unavailable
Oct 14 09:52:45 compute-0 systemd[1]: libpod-9d1dbaafc65cd3b1d14dd06b5eb2296aea4feb2d9ca3bcf83f2d28122189ac8f.scope: Deactivated successfully.
Oct 14 09:52:45 compute-0 podman[432338]: 2025-10-14 09:52:45.351139824 +0000 UTC m=+1.240183481 container died 9d1dbaafc65cd3b1d14dd06b5eb2296aea4feb2d9ca3bcf83f2d28122189ac8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:52:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c0a9c758391a63e8c37d6209efd3276784b80b75803e529b88d705b481a2c1d-merged.mount: Deactivated successfully.
Oct 14 09:52:45 compute-0 podman[432338]: 2025-10-14 09:52:45.403829597 +0000 UTC m=+1.292873214 container remove 9d1dbaafc65cd3b1d14dd06b5eb2296aea4feb2d9ca3bcf83f2d28122189ac8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:52:45 compute-0 systemd[1]: libpod-conmon-9d1dbaafc65cd3b1d14dd06b5eb2296aea4feb2d9ca3bcf83f2d28122189ac8f.scope: Deactivated successfully.
Oct 14 09:52:45 compute-0 sudo[432232]: pam_unix(sudo:session): session closed for user root
Oct 14 09:52:45 compute-0 sudo[432397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:52:45 compute-0 sudo[432397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:52:45 compute-0 sudo[432397]: pam_unix(sudo:session): session closed for user root
Oct 14 09:52:45 compute-0 sudo[432422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:52:45 compute-0 sudo[432422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:52:45 compute-0 sudo[432422]: pam_unix(sudo:session): session closed for user root
Oct 14 09:52:45 compute-0 sudo[432447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:52:45 compute-0 sudo[432447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:52:45 compute-0 sudo[432447]: pam_unix(sudo:session): session closed for user root
Oct 14 09:52:45 compute-0 sudo[432472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:52:45 compute-0 sudo[432472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:52:46 compute-0 podman[432537]: 2025-10-14 09:52:46.102671155 +0000 UTC m=+0.069935997 container create 1b3377594a4def4191f819999b40dae52a101112a280d6e533155a822108d410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 09:52:46 compute-0 systemd[1]: Started libpod-conmon-1b3377594a4def4191f819999b40dae52a101112a280d6e533155a822108d410.scope.
Oct 14 09:52:46 compute-0 ceph-mon[74249]: pgmap v2998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:46 compute-0 podman[432537]: 2025-10-14 09:52:46.071809857 +0000 UTC m=+0.039074749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:52:46 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:52:46 compute-0 podman[432537]: 2025-10-14 09:52:46.195834091 +0000 UTC m=+0.163098943 container init 1b3377594a4def4191f819999b40dae52a101112a280d6e533155a822108d410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 09:52:46 compute-0 podman[432537]: 2025-10-14 09:52:46.204905033 +0000 UTC m=+0.172169855 container start 1b3377594a4def4191f819999b40dae52a101112a280d6e533155a822108d410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 14 09:52:46 compute-0 agitated_satoshi[432554]: 167 167
Oct 14 09:52:46 compute-0 podman[432537]: 2025-10-14 09:52:46.208737807 +0000 UTC m=+0.176002639 container attach 1b3377594a4def4191f819999b40dae52a101112a280d6e533155a822108d410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:52:46 compute-0 podman[432537]: 2025-10-14 09:52:46.210613483 +0000 UTC m=+0.177878305 container died 1b3377594a4def4191f819999b40dae52a101112a280d6e533155a822108d410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:52:46 compute-0 systemd[1]: libpod-1b3377594a4def4191f819999b40dae52a101112a280d6e533155a822108d410.scope: Deactivated successfully.
Oct 14 09:52:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc40e1e071cc8e85e3aa1b091bb46b5e1dd4a4f1734149f923084a4c9ad919d0-merged.mount: Deactivated successfully.
Oct 14 09:52:46 compute-0 podman[432537]: 2025-10-14 09:52:46.245287994 +0000 UTC m=+0.212552816 container remove 1b3377594a4def4191f819999b40dae52a101112a280d6e533155a822108d410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct 14 09:52:46 compute-0 systemd[1]: libpod-conmon-1b3377594a4def4191f819999b40dae52a101112a280d6e533155a822108d410.scope: Deactivated successfully.
Oct 14 09:52:46 compute-0 podman[432577]: 2025-10-14 09:52:46.488367879 +0000 UTC m=+0.074558771 container create 2c6be3d038e6886a0979c286628c255441a054f250616621f7e6e7b7d128c9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 09:52:46 compute-0 systemd[1]: Started libpod-conmon-2c6be3d038e6886a0979c286628c255441a054f250616621f7e6e7b7d128c9de.scope.
Oct 14 09:52:46 compute-0 podman[432577]: 2025-10-14 09:52:46.459284865 +0000 UTC m=+0.045475827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:52:46 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84899a1a7e486ee5e1dd68b494c127c9e92af5fea8f95aa736113a412675e3cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84899a1a7e486ee5e1dd68b494c127c9e92af5fea8f95aa736113a412675e3cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84899a1a7e486ee5e1dd68b494c127c9e92af5fea8f95aa736113a412675e3cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84899a1a7e486ee5e1dd68b494c127c9e92af5fea8f95aa736113a412675e3cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:52:46 compute-0 podman[432577]: 2025-10-14 09:52:46.580343805 +0000 UTC m=+0.166534697 container init 2c6be3d038e6886a0979c286628c255441a054f250616621f7e6e7b7d128c9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 09:52:46 compute-0 podman[432577]: 2025-10-14 09:52:46.589989812 +0000 UTC m=+0.176180684 container start 2c6be3d038e6886a0979c286628c255441a054f250616621f7e6e7b7d128c9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 09:52:46 compute-0 podman[432577]: 2025-10-14 09:52:46.593612371 +0000 UTC m=+0.179803273 container attach 2c6be3d038e6886a0979c286628c255441a054f250616621f7e6e7b7d128c9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:52:46 compute-0 nova_compute[259627]: 2025-10-14 09:52:46.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:47 compute-0 competent_gates[432594]: {
Oct 14 09:52:47 compute-0 competent_gates[432594]:     "0": [
Oct 14 09:52:47 compute-0 competent_gates[432594]:         {
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "devices": [
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "/dev/loop3"
Oct 14 09:52:47 compute-0 competent_gates[432594]:             ],
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "lv_name": "ceph_lv0",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "lv_size": "21470642176",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "name": "ceph_lv0",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "tags": {
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.cluster_name": "ceph",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.crush_device_class": "",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.encrypted": "0",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.osd_id": "0",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.type": "block",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.vdo": "0"
Oct 14 09:52:47 compute-0 competent_gates[432594]:             },
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "type": "block",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "vg_name": "ceph_vg0"
Oct 14 09:52:47 compute-0 competent_gates[432594]:         }
Oct 14 09:52:47 compute-0 competent_gates[432594]:     ],
Oct 14 09:52:47 compute-0 competent_gates[432594]:     "1": [
Oct 14 09:52:47 compute-0 competent_gates[432594]:         {
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "devices": [
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "/dev/loop4"
Oct 14 09:52:47 compute-0 competent_gates[432594]:             ],
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "lv_name": "ceph_lv1",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "lv_size": "21470642176",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "name": "ceph_lv1",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "tags": {
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.cluster_name": "ceph",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.crush_device_class": "",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.encrypted": "0",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.osd_id": "1",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.type": "block",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.vdo": "0"
Oct 14 09:52:47 compute-0 competent_gates[432594]:             },
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "type": "block",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "vg_name": "ceph_vg1"
Oct 14 09:52:47 compute-0 competent_gates[432594]:         }
Oct 14 09:52:47 compute-0 competent_gates[432594]:     ],
Oct 14 09:52:47 compute-0 competent_gates[432594]:     "2": [
Oct 14 09:52:47 compute-0 competent_gates[432594]:         {
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "devices": [
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "/dev/loop5"
Oct 14 09:52:47 compute-0 competent_gates[432594]:             ],
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "lv_name": "ceph_lv2",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "lv_size": "21470642176",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "name": "ceph_lv2",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "tags": {
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.cluster_name": "ceph",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.crush_device_class": "",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.encrypted": "0",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.osd_id": "2",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.type": "block",
Oct 14 09:52:47 compute-0 competent_gates[432594]:                 "ceph.vdo": "0"
Oct 14 09:52:47 compute-0 competent_gates[432594]:             },
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "type": "block",
Oct 14 09:52:47 compute-0 competent_gates[432594]:             "vg_name": "ceph_vg2"
Oct 14 09:52:47 compute-0 competent_gates[432594]:         }
Oct 14 09:52:47 compute-0 competent_gates[432594]:     ]
Oct 14 09:52:47 compute-0 competent_gates[432594]: }
Oct 14 09:52:47 compute-0 systemd[1]: libpod-2c6be3d038e6886a0979c286628c255441a054f250616621f7e6e7b7d128c9de.scope: Deactivated successfully.
Oct 14 09:52:47 compute-0 podman[432577]: 2025-10-14 09:52:47.37157488 +0000 UTC m=+0.957765812 container died 2c6be3d038e6886a0979c286628c255441a054f250616621f7e6e7b7d128c9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:52:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-84899a1a7e486ee5e1dd68b494c127c9e92af5fea8f95aa736113a412675e3cb-merged.mount: Deactivated successfully.
Oct 14 09:52:47 compute-0 podman[432577]: 2025-10-14 09:52:47.449557104 +0000 UTC m=+1.035747966 container remove 2c6be3d038e6886a0979c286628c255441a054f250616621f7e6e7b7d128c9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct 14 09:52:47 compute-0 systemd[1]: libpod-conmon-2c6be3d038e6886a0979c286628c255441a054f250616621f7e6e7b7d128c9de.scope: Deactivated successfully.
Oct 14 09:52:47 compute-0 sudo[432472]: pam_unix(sudo:session): session closed for user root
Oct 14 09:52:47 compute-0 sudo[432613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:52:47 compute-0 sudo[432613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:52:47 compute-0 sudo[432613]: pam_unix(sudo:session): session closed for user root
Oct 14 09:52:47 compute-0 sudo[432638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:52:47 compute-0 sudo[432638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:52:47 compute-0 sudo[432638]: pam_unix(sudo:session): session closed for user root
Oct 14 09:52:47 compute-0 sudo[432663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:52:47 compute-0 sudo[432663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:52:47 compute-0 sudo[432663]: pam_unix(sudo:session): session closed for user root
Oct 14 09:52:47 compute-0 sudo[432688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:52:47 compute-0 sudo[432688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:52:48 compute-0 ceph-mon[74249]: pgmap v2999: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:48 compute-0 podman[432753]: 2025-10-14 09:52:48.17321221 +0000 UTC m=+0.070958202 container create 0603fe28aac1d9111a31c7db605779a1e42e1b7c0c005e0a97e3654980207bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curie, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:52:48 compute-0 systemd[1]: Started libpod-conmon-0603fe28aac1d9111a31c7db605779a1e42e1b7c0c005e0a97e3654980207bb0.scope.
Oct 14 09:52:48 compute-0 podman[432753]: 2025-10-14 09:52:48.145189013 +0000 UTC m=+0.042935075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:52:48 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:52:48 compute-0 podman[432753]: 2025-10-14 09:52:48.278884873 +0000 UTC m=+0.176630945 container init 0603fe28aac1d9111a31c7db605779a1e42e1b7c0c005e0a97e3654980207bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 09:52:48 compute-0 podman[432753]: 2025-10-14 09:52:48.285808412 +0000 UTC m=+0.183554374 container start 0603fe28aac1d9111a31c7db605779a1e42e1b7c0c005e0a97e3654980207bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curie, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 09:52:48 compute-0 objective_curie[432770]: 167 167
Oct 14 09:52:48 compute-0 podman[432753]: 2025-10-14 09:52:48.292570988 +0000 UTC m=+0.190317030 container attach 0603fe28aac1d9111a31c7db605779a1e42e1b7c0c005e0a97e3654980207bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 09:52:48 compute-0 systemd[1]: libpod-0603fe28aac1d9111a31c7db605779a1e42e1b7c0c005e0a97e3654980207bb0.scope: Deactivated successfully.
Oct 14 09:52:48 compute-0 podman[432753]: 2025-10-14 09:52:48.294230149 +0000 UTC m=+0.191976111 container died 0603fe28aac1d9111a31c7db605779a1e42e1b7c0c005e0a97e3654980207bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:52:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d7bd97cc1c05027765deedbe5239c9ec16b97349867167553784f8c5d4133cd-merged.mount: Deactivated successfully.
Oct 14 09:52:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:52:48 compute-0 podman[432753]: 2025-10-14 09:52:48.34277764 +0000 UTC m=+0.240523602 container remove 0603fe28aac1d9111a31c7db605779a1e42e1b7c0c005e0a97e3654980207bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:52:48 compute-0 systemd[1]: libpod-conmon-0603fe28aac1d9111a31c7db605779a1e42e1b7c0c005e0a97e3654980207bb0.scope: Deactivated successfully.
Oct 14 09:52:48 compute-0 podman[432794]: 2025-10-14 09:52:48.515650142 +0000 UTC m=+0.043892728 container create f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 09:52:48 compute-0 systemd[1]: Started libpod-conmon-f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01.scope.
Oct 14 09:52:48 compute-0 podman[432794]: 2025-10-14 09:52:48.498820969 +0000 UTC m=+0.027063565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:52:48 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766277947d977e711fd44530d3391e0ae10c2473477cfed455c27d8aa99175b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766277947d977e711fd44530d3391e0ae10c2473477cfed455c27d8aa99175b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766277947d977e711fd44530d3391e0ae10c2473477cfed455c27d8aa99175b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766277947d977e711fd44530d3391e0ae10c2473477cfed455c27d8aa99175b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:52:48 compute-0 podman[432794]: 2025-10-14 09:52:48.628439749 +0000 UTC m=+0.156682345 container init f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 14 09:52:48 compute-0 podman[432794]: 2025-10-14 09:52:48.642669638 +0000 UTC m=+0.170912224 container start f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 14 09:52:48 compute-0 podman[432794]: 2025-10-14 09:52:48.646642716 +0000 UTC m=+0.174885312 container attach f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feynman, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 09:52:48 compute-0 nova_compute[259627]: 2025-10-14 09:52:48.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]: {
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:         "osd_id": 2,
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:         "type": "bluestore"
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:     },
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:         "osd_id": 1,
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:         "type": "bluestore"
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:     },
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:         "osd_id": 0,
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:         "type": "bluestore"
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]:     }
Oct 14 09:52:49 compute-0 quizzical_feynman[432812]: }
Oct 14 09:52:49 compute-0 podman[432841]: 2025-10-14 09:52:49.659379246 +0000 UTC m=+0.062062744 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:52:49 compute-0 systemd[1]: libpod-f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01.scope: Deactivated successfully.
Oct 14 09:52:49 compute-0 systemd[1]: libpod-f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01.scope: Consumed 1.038s CPU time.
Oct 14 09:52:49 compute-0 podman[432842]: 2025-10-14 09:52:49.683895277 +0000 UTC m=+0.085442357 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid)
Oct 14 09:52:49 compute-0 podman[432886]: 2025-10-14 09:52:49.719586673 +0000 UTC m=+0.029695679 container died f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feynman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Oct 14 09:52:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-766277947d977e711fd44530d3391e0ae10c2473477cfed455c27d8aa99175b5-merged.mount: Deactivated successfully.
Oct 14 09:52:49 compute-0 podman[432886]: 2025-10-14 09:52:49.77123455 +0000 UTC m=+0.081343556 container remove f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 09:52:49 compute-0 systemd[1]: libpod-conmon-f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01.scope: Deactivated successfully.
Oct 14 09:52:49 compute-0 sudo[432688]: pam_unix(sudo:session): session closed for user root
Oct 14 09:52:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:52:49 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:52:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:52:49 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:52:49 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev e12c5edf-a088-49f6-a15a-b8fa14e8af3b does not exist
Oct 14 09:52:49 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 33d4238e-a4ba-4319-864d-74579f2e80f8 does not exist
Oct 14 09:52:49 compute-0 sudo[432901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:52:49 compute-0 sudo[432901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:52:49 compute-0 sudo[432901]: pam_unix(sudo:session): session closed for user root
Oct 14 09:52:50 compute-0 sudo[432926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:52:50 compute-0 sudo[432926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:52:50 compute-0 sudo[432926]: pam_unix(sudo:session): session closed for user root
Oct 14 09:52:50 compute-0 ceph-mon[74249]: pgmap v3000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:50 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:52:50 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:52:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:51 compute-0 nova_compute[259627]: 2025-10-14 09:52:51.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:52 compute-0 ceph-mon[74249]: pgmap v3001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:52:53 compute-0 nova_compute[259627]: 2025-10-14 09:52:53.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:54 compute-0 ceph-mon[74249]: pgmap v3002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:56 compute-0 ceph-mon[74249]: pgmap v3003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:56 compute-0 nova_compute[259627]: 2025-10-14 09:52:56.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:58 compute-0 ceph-mon[74249]: pgmap v3004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:52:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:52:58 compute-0 nova_compute[259627]: 2025-10-14 09:52:58.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:52:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:00 compute-0 ceph-mon[74249]: pgmap v3005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:01 compute-0 nova_compute[259627]: 2025-10-14 09:53:01.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:01 compute-0 nova_compute[259627]: 2025-10-14 09:53:01.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:53:02 compute-0 ceph-mon[74249]: pgmap v3006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:02 compute-0 podman[432952]: 2025-10-14 09:53:02.671166699 +0000 UTC m=+0.068922322 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 14 09:53:02 compute-0 podman[432951]: 2025-10-14 09:53:02.714503943 +0000 UTC m=+0.116130060 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 14 09:53:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:53:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:53:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:53:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:53:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:53:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:53:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:53:04 compute-0 nova_compute[259627]: 2025-10-14 09:53:04.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:04 compute-0 ceph-mon[74249]: pgmap v3007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:53:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4090996532' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:53:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:53:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4090996532' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:53:05 compute-0 nova_compute[259627]: 2025-10-14 09:53:05.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:53:06 compute-0 ceph-mon[74249]: pgmap v3008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/4090996532' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:53:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/4090996532' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:53:06 compute-0 nova_compute[259627]: 2025-10-14 09:53:06.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:53:07.070 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:53:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:53:07.070 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:53:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:53:07.071 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:53:07 compute-0 ceph-mon[74249]: pgmap v3009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:07 compute-0 nova_compute[259627]: 2025-10-14 09:53:07.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:53:08 compute-0 nova_compute[259627]: 2025-10-14 09:53:08.053 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:53:08 compute-0 nova_compute[259627]: 2025-10-14 09:53:08.054 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:53:08 compute-0 nova_compute[259627]: 2025-10-14 09:53:08.054 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:53:08 compute-0 nova_compute[259627]: 2025-10-14 09:53:08.055 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:53:08 compute-0 nova_compute[259627]: 2025-10-14 09:53:08.056 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:53:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:53:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:53:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2149712963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:53:08 compute-0 nova_compute[259627]: 2025-10-14 09:53:08.513 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:53:08 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2149712963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:53:08 compute-0 nova_compute[259627]: 2025-10-14 09:53:08.734 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:53:08 compute-0 nova_compute[259627]: 2025-10-14 09:53:08.735 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3588MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:53:08 compute-0 nova_compute[259627]: 2025-10-14 09:53:08.735 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:53:08 compute-0 nova_compute[259627]: 2025-10-14 09:53:08.735 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:53:08 compute-0 nova_compute[259627]: 2025-10-14 09:53:08.816 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:53:08 compute-0 nova_compute[259627]: 2025-10-14 09:53:08.817 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:53:08 compute-0 nova_compute[259627]: 2025-10-14 09:53:08.922 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:53:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:09 compute-0 nova_compute[259627]: 2025-10-14 09:53:09.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:53:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481508238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:53:09 compute-0 nova_compute[259627]: 2025-10-14 09:53:09.387 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:53:09 compute-0 nova_compute[259627]: 2025-10-14 09:53:09.393 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:53:09 compute-0 nova_compute[259627]: 2025-10-14 09:53:09.414 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:53:09 compute-0 nova_compute[259627]: 2025-10-14 09:53:09.416 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:53:09 compute-0 nova_compute[259627]: 2025-10-14 09:53:09.417 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:53:09 compute-0 ceph-mon[74249]: pgmap v3010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:09 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1481508238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:53:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:11 compute-0 nova_compute[259627]: 2025-10-14 09:53:11.413 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:53:11 compute-0 nova_compute[259627]: 2025-10-14 09:53:11.414 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:53:11 compute-0 nova_compute[259627]: 2025-10-14 09:53:11.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:12 compute-0 ceph-mon[74249]: pgmap v3011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:12 compute-0 nova_compute[259627]: 2025-10-14 09:53:12.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:53:12 compute-0 nova_compute[259627]: 2025-10-14 09:53:12.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:53:12 compute-0 nova_compute[259627]: 2025-10-14 09:53:12.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:53:13 compute-0 nova_compute[259627]: 2025-10-14 09:53:13.005 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:53:13 compute-0 nova_compute[259627]: 2025-10-14 09:53:13.006 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:53:13 compute-0 nova_compute[259627]: 2025-10-14 09:53:13.006 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:53:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:53:14 compute-0 nova_compute[259627]: 2025-10-14 09:53:14.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:14 compute-0 ceph-mon[74249]: pgmap v3012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:15 compute-0 nova_compute[259627]: 2025-10-14 09:53:15.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:53:16 compute-0 ceph-mon[74249]: pgmap v3013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:16 compute-0 nova_compute[259627]: 2025-10-14 09:53:16.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:17 compute-0 nova_compute[259627]: 2025-10-14 09:53:17.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:53:18 compute-0 ceph-mon[74249]: pgmap v3014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:53:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:19 compute-0 nova_compute[259627]: 2025-10-14 09:53:19.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:20 compute-0 ceph-mon[74249]: pgmap v3015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:20 compute-0 podman[433038]: 2025-10-14 09:53:20.681916498 +0000 UTC m=+0.085142430 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:53:20 compute-0 podman[433039]: 2025-10-14 09:53:20.695958753 +0000 UTC m=+0.092675315 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:53:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:21 compute-0 nova_compute[259627]: 2025-10-14 09:53:21.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:22 compute-0 ceph-mon[74249]: pgmap v3016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:53:24 compute-0 nova_compute[259627]: 2025-10-14 09:53:24.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:24 compute-0 ceph-mon[74249]: pgmap v3017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:26 compute-0 ceph-mon[74249]: pgmap v3018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:26 compute-0 nova_compute[259627]: 2025-10-14 09:53:26.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:28 compute-0 ceph-mon[74249]: pgmap v3019: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:53:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:29 compute-0 nova_compute[259627]: 2025-10-14 09:53:29.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:29 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:53:29 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 184K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.73 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 709 writes, 2213 keys, 709 commit groups, 1.0 writes per commit group, ingest: 1.43 MB, 0.00 MB/s
                                           Interval WAL: 710 writes, 312 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:53:30 compute-0 ceph-mon[74249]: pgmap v3020: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:31 compute-0 nova_compute[259627]: 2025-10-14 09:53:31.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:32 compute-0 ceph-mon[74249]: pgmap v3021: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:53:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:53:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:53:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:53:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:53:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:53:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:53:32
Oct 14 09:53:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:53:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:53:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.meta', 'volumes', 'images']
Oct 14 09:53:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:53:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:53:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:53:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:53:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:53:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:53:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:53:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:53:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:53:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:53:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:53:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:53:33 compute-0 podman[433081]: 2025-10-14 09:53:33.654845046 +0000 UTC m=+0.069782094 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:53:33 compute-0 podman[433080]: 2025-10-14 09:53:33.731892476 +0000 UTC m=+0.142939868 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller)
Oct 14 09:53:34 compute-0 nova_compute[259627]: 2025-10-14 09:53:34.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:34 compute-0 ceph-mon[74249]: pgmap v3022: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:34 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:53:34 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 178K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.70 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 762 writes, 2197 keys, 762 commit groups, 1.0 writes per commit group, ingest: 0.90 MB, 0.00 MB/s
                                           Interval WAL: 762 writes, 346 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:53:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:36 compute-0 ceph-mon[74249]: pgmap v3023: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:36 compute-0 nova_compute[259627]: 2025-10-14 09:53:36.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:38 compute-0 ceph-mon[74249]: pgmap v3024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:53:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:39 compute-0 nova_compute[259627]: 2025-10-14 09:53:39.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:39 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:53:39 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 38K writes, 150K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.71 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 818 writes, 2112 keys, 818 commit groups, 1.0 writes per commit group, ingest: 1.15 MB, 0.00 MB/s
                                           Interval WAL: 818 writes, 377 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:53:40 compute-0 ceph-mon[74249]: pgmap v3025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:40 compute-0 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 09:53:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:41 compute-0 nova_compute[259627]: 2025-10-14 09:53:41.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:42 compute-0 ceph-mon[74249]: pgmap v3026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:53:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:53:44 compute-0 nova_compute[259627]: 2025-10-14 09:53:44.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:44 compute-0 ceph-mon[74249]: pgmap v3027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:46 compute-0 ceph-mon[74249]: pgmap v3028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:46 compute-0 nova_compute[259627]: 2025-10-14 09:53:46.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:47 compute-0 unix_chkpwd[433125]: password check failed for user (root)
Oct 14 09:53:47 compute-0 sshd-session[433123]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 14 09:53:47 compute-0 ceph-mon[74249]: pgmap v3029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:53:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:49 compute-0 nova_compute[259627]: 2025-10-14 09:53:49.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:49 compute-0 sshd-session[433123]: Failed password for root from 80.94.93.119 port 12588 ssh2
Oct 14 09:53:49 compute-0 sshd-session[433126]: Accepted publickey for zuul from 192.168.122.30 port 37260 ssh2: ECDSA SHA256:jaGWHGBmEwGLhBs5A5z51rEw7f54kxwV4dpIRk+zLbs
Oct 14 09:53:49 compute-0 systemd-logind[799]: New session 55 of user zuul.
Oct 14 09:53:49 compute-0 systemd[1]: Started Session 55 of User zuul.
Oct 14 09:53:49 compute-0 sshd-session[433126]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 14 09:53:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:53:49.712 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 09:53:49 compute-0 nova_compute[259627]: 2025-10-14 09:53:49.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:49 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:53:49.714 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 09:53:50 compute-0 sudo[433212]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/test -f /var/podman_client_access_setup
Oct 14 09:53:50 compute-0 sudo[433212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:53:50 compute-0 sudo[433184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:53:50 compute-0 sudo[433184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:53:50 compute-0 sudo[433212]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:50 compute-0 sudo[433184]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:50 compute-0 sudo[433227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:53:50 compute-0 ceph-mon[74249]: pgmap v3030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:50 compute-0 sudo[433227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:53:50 compute-0 sudo[433227]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:50 compute-0 sudo[433277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/groupadd -f podman
Oct 14 09:53:50 compute-0 sudo[433277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:53:50 compute-0 sudo[433274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:53:50 compute-0 sudo[433274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:53:50 compute-0 sudo[433274]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.307139) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435630307232, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1667, "num_deletes": 251, "total_data_size": 2718819, "memory_usage": 2765888, "flush_reason": "Manual Compaction"}
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Oct 14 09:53:50 compute-0 sudo[433303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:53:50 compute-0 sudo[433303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435630429500, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 2681686, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61773, "largest_seqno": 63439, "table_properties": {"data_size": 2673909, "index_size": 4719, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15622, "raw_average_key_size": 19, "raw_value_size": 2658480, "raw_average_value_size": 3395, "num_data_blocks": 210, "num_entries": 783, "num_filter_entries": 783, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760435444, "oldest_key_time": 1760435444, "file_creation_time": 1760435630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 123379 microseconds, and 8649 cpu microseconds.
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:53:50 compute-0 groupadd[433290]: group added to /etc/group: name=podman, GID=42479
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.430522) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 2681686 bytes OK
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.430883) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.469995) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.470098) EVENT_LOG_v1 {"time_micros": 1760435630470083, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.470135) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 2711675, prev total WAL file size 2711675, number of live WAL files 2.
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.472639) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(2618KB)], [146(10188KB)]
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435630472682, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 13115085, "oldest_snapshot_seqno": -1}
Oct 14 09:53:50 compute-0 groupadd[433290]: group added to /etc/gshadow: name=podman
Oct 14 09:53:50 compute-0 groupadd[433290]: new group: name=podman, GID=42479
Oct 14 09:53:50 compute-0 sudo[433277]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:50 compute-0 sudo[433340]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/usermod -a -G podman zuul
Oct 14 09:53:50 compute-0 sudo[433340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8232 keys, 11371204 bytes, temperature: kUnknown
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435630655296, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11371204, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11316072, "index_size": 33451, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20613, "raw_key_size": 215353, "raw_average_key_size": 26, "raw_value_size": 11168907, "raw_average_value_size": 1356, "num_data_blocks": 1304, "num_entries": 8232, "num_filter_entries": 8232, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760435630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:53:50 compute-0 unix_chkpwd[433350]: password check failed for user (root)
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.655686) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11371204 bytes
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.684041) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.8 rd, 62.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 10.0 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(9.1) write-amplify(4.2) OK, records in: 8746, records dropped: 514 output_compression: NoCompression
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.684109) EVENT_LOG_v1 {"time_micros": 1760435630684084, "job": 90, "event": "compaction_finished", "compaction_time_micros": 182727, "compaction_time_cpu_micros": 41766, "output_level": 6, "num_output_files": 1, "total_output_size": 11371204, "num_input_records": 8746, "num_output_records": 8232, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435630684810, "job": 90, "event": "table_file_deletion", "file_number": 148}
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435630687334, "job": 90, "event": "table_file_deletion", "file_number": 146}
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.472527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.687380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.687386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.687388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.687389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:53:50 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.687391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:53:50 compute-0 usermod[433347]: add 'zuul' to group 'podman'
Oct 14 09:53:50 compute-0 usermod[433347]: add 'zuul' to shadow group 'podman'
Oct 14 09:53:50 compute-0 sudo[433303]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:53:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:53:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:53:50 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:53:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:53:51 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:53:51 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev d0d6f2d8-0d72-4d0b-90c4-6324938c56ad does not exist
Oct 14 09:53:51 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 6d8a1119-cef5-49fd-b0fb-ac0a2f8030ad does not exist
Oct 14 09:53:51 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 059ee0e8-9259-4e3e-83de-67f241b8a80e does not exist
Oct 14 09:53:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:53:51 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:53:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:53:51 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:53:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:53:51 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:53:51 compute-0 sudo[433340]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:51 compute-0 sudo[433377]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod -R o=wxr /etc/tmpfiles.d
Oct 14 09:53:51 compute-0 sudo[433377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:53:51 compute-0 sudo[433377]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:51 compute-0 sudo[433374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:53:51 compute-0 sudo[433374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:53:51 compute-0 sudo[433412]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/echo 'd /run/podman 0770 root zuul'
Oct 14 09:53:51 compute-0 sudo[433374]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:51 compute-0 sudo[433412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:53:51 compute-0 sudo[433412]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:51 compute-0 sudo[433438]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cp /lib/systemd/system/podman.socket /etc/systemd/system/podman.socket
Oct 14 09:53:51 compute-0 sudo[433438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:53:51 compute-0 sudo[433438]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:51 compute-0 podman[433384]: 2025-10-14 09:53:51.201716216 +0000 UTC m=+0.093134977 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251009, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:53:51 compute-0 podman[433395]: 2025-10-14 09:53:51.203352576 +0000 UTC m=+0.095923915 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 09:53:51 compute-0 sudo[433469]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketMode 0660
Oct 14 09:53:51 compute-0 sudo[433469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:53:51 compute-0 sudo[433426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:53:51 compute-0 sudo[433426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:53:51 compute-0 sudo[433426]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:51 compute-0 sudo[433477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:53:51 compute-0 sudo[433477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:53:51 compute-0 sudo[433477]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:51 compute-0 sudo[433502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:53:51 compute-0 sudo[433502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:53:51 compute-0 sudo[433469]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:51 compute-0 sudo[433527]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketGroup podman
Oct 14 09:53:51 compute-0 sudo[433527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:53:51 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:53:51 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:53:51 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:53:51 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:53:51 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:53:51 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:53:51 compute-0 sudo[433527]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:51 compute-0 sudo[433543]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Oct 14 09:53:51 compute-0 sudo[433543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:53:51 compute-0 systemd[1]: Reloading.
Oct 14 09:53:51 compute-0 systemd-rc-local-generator[433595]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 09:53:51 compute-0 systemd-sysv-generator[433599]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 09:53:51 compute-0 podman[433583]: 2025-10-14 09:53:51.747241301 +0000 UTC m=+0.064188636 container create cbbbfc1900035f1e1f0db1d1d6036731ba73aa775a1eafb52f41d580caa6fcc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jennings, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 09:53:51 compute-0 podman[433583]: 2025-10-14 09:53:51.713562335 +0000 UTC m=+0.030509680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:53:51 compute-0 nova_compute[259627]: 2025-10-14 09:53:51.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:52 compute-0 sudo[433543]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:52 compute-0 systemd[1]: Started libpod-conmon-cbbbfc1900035f1e1f0db1d1d6036731ba73aa775a1eafb52f41d580caa6fcc1.scope.
Oct 14 09:53:52 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:53:52 compute-0 sudo[433625]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemd-tmpfiles --create
Oct 14 09:53:52 compute-0 sudo[433625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:53:52 compute-0 podman[433583]: 2025-10-14 09:53:52.423644339 +0000 UTC m=+0.740591724 container init cbbbfc1900035f1e1f0db1d1d6036731ba73aa775a1eafb52f41d580caa6fcc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jennings, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:53:52 compute-0 ceph-mon[74249]: pgmap v3031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:52 compute-0 podman[433583]: 2025-10-14 09:53:52.433436819 +0000 UTC m=+0.750384154 container start cbbbfc1900035f1e1f0db1d1d6036731ba73aa775a1eafb52f41d580caa6fcc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 09:53:52 compute-0 podman[433583]: 2025-10-14 09:53:52.43797815 +0000 UTC m=+0.754925565 container attach cbbbfc1900035f1e1f0db1d1d6036731ba73aa775a1eafb52f41d580caa6fcc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jennings, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:53:52 compute-0 nice_jennings[433626]: 167 167
Oct 14 09:53:52 compute-0 systemd[1]: libpod-cbbbfc1900035f1e1f0db1d1d6036731ba73aa775a1eafb52f41d580caa6fcc1.scope: Deactivated successfully.
Oct 14 09:53:52 compute-0 podman[433583]: 2025-10-14 09:53:52.443237969 +0000 UTC m=+0.760185354 container died cbbbfc1900035f1e1f0db1d1d6036731ba73aa775a1eafb52f41d580caa6fcc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct 14 09:53:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3191cab1474d733d1bffe39d23696916ddd2a8057099cb9990cfac5afd18e90-merged.mount: Deactivated successfully.
Oct 14 09:53:52 compute-0 podman[433583]: 2025-10-14 09:53:52.498465244 +0000 UTC m=+0.815412609 container remove cbbbfc1900035f1e1f0db1d1d6036731ba73aa775a1eafb52f41d580caa6fcc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:53:52 compute-0 systemd[1]: libpod-conmon-cbbbfc1900035f1e1f0db1d1d6036731ba73aa775a1eafb52f41d580caa6fcc1.scope: Deactivated successfully.
Oct 14 09:53:52 compute-0 sudo[433625]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:52 compute-0 sudo[433651]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl enable --now podman.socket
Oct 14 09:53:52 compute-0 sudo[433651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:53:52 compute-0 systemd[1]: Reloading.
Oct 14 09:53:52 compute-0 podman[433652]: 2025-10-14 09:53:52.676529164 +0000 UTC m=+0.044189346 container create 2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wilson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:53:52 compute-0 sshd-session[433123]: Failed password for root from 80.94.93.119 port 12588 ssh2
Oct 14 09:53:52 compute-0 systemd-rc-local-generator[433692]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 09:53:52 compute-0 systemd-sysv-generator[433698]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 09:53:52 compute-0 podman[433652]: 2025-10-14 09:53:52.656945393 +0000 UTC m=+0.024605575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:53:52 compute-0 systemd[1]: Started libpod-conmon-2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494.scope.
Oct 14 09:53:52 compute-0 systemd[1]: Starting Podman API Socket...
Oct 14 09:53:52 compute-0 systemd[1]: Listening on Podman API Socket.
Oct 14 09:53:53 compute-0 sudo[433651]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:53 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40efb8da77d563d238777b09d665e5e9ee879f5716f281aa05b8b3f68d29b3f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40efb8da77d563d238777b09d665e5e9ee879f5716f281aa05b8b3f68d29b3f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40efb8da77d563d238777b09d665e5e9ee879f5716f281aa05b8b3f68d29b3f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40efb8da77d563d238777b09d665e5e9ee879f5716f281aa05b8b3f68d29b3f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40efb8da77d563d238777b09d665e5e9ee879f5716f281aa05b8b3f68d29b3f2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:53:53 compute-0 sudo[433707]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman
Oct 14 09:53:53 compute-0 sudo[433707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:53:53 compute-0 sudo[433707]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:53 compute-0 podman[433652]: 2025-10-14 09:53:53.064762539 +0000 UTC m=+0.432422741 container init 2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 09:53:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:53 compute-0 sudo[433711]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chown -R root: /run/podman
Oct 14 09:53:53 compute-0 podman[433652]: 2025-10-14 09:53:53.085091618 +0000 UTC m=+0.452751780 container start 2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:53:53 compute-0 podman[433652]: 2025-10-14 09:53:53.088571923 +0000 UTC m=+0.456232135 container attach 2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:53:53 compute-0 sudo[433711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:53:53 compute-0 sudo[433711]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:53 compute-0 sudo[433716]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod g+rw /run/podman/podman.sock
Oct 14 09:53:53 compute-0 sudo[433716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:53:53 compute-0 sudo[433716]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:53 compute-0 sudo[433719]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman/podman.sock
Oct 14 09:53:53 compute-0 sudo[433719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:53:53 compute-0 sudo[433719]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:53 compute-0 sudo[433722]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/setenforce 0
Oct 14 09:53:53 compute-0 sudo[433722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:53:53 compute-0 sudo[433722]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:53 compute-0 sudo[433725]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl restart podman.socket
Oct 14 09:53:53 compute-0 dbus-broker-launch[782]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Oct 14 09:53:53 compute-0 sudo[433725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:53:53 compute-0 systemd[1]: podman.socket: Deactivated successfully.
Oct 14 09:53:53 compute-0 systemd[1]: Closed Podman API Socket.
Oct 14 09:53:53 compute-0 systemd[1]: Stopping Podman API Socket...
Oct 14 09:53:53 compute-0 systemd[1]: Starting Podman API Socket...
Oct 14 09:53:53 compute-0 systemd[1]: Listening on Podman API Socket.
Oct 14 09:53:53 compute-0 sudo[433725]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:53:53 compute-0 sudo[433246]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/touch /var/podman_client_access_setup
Oct 14 09:53:53 compute-0 sudo[433246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:53:53 compute-0 sudo[433246]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:53 compute-0 ceph-mon[74249]: pgmap v3032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:53 compute-0 sshd-session[433731]: Accepted publickey for zuul from 192.168.122.30 port 37266 ssh2: ECDSA SHA256:jaGWHGBmEwGLhBs5A5z51rEw7f54kxwV4dpIRk+zLbs
Oct 14 09:53:53 compute-0 systemd-logind[799]: New session 56 of user zuul.
Oct 14 09:53:53 compute-0 systemd[1]: Started Session 56 of User zuul.
Oct 14 09:53:53 compute-0 sshd-session[433731]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 14 09:53:53 compute-0 systemd[1]: Starting Podman API Service...
Oct 14 09:53:53 compute-0 systemd[1]: Started Podman API Service.
Oct 14 09:53:53 compute-0 podman[433735]: time="2025-10-14T09:53:53Z" level=info msg="/usr/bin/podman filtering at log level info"
Oct 14 09:53:53 compute-0 podman[433735]: time="2025-10-14T09:53:53Z" level=info msg="Setting parallel job count to 25"
Oct 14 09:53:53 compute-0 podman[433735]: time="2025-10-14T09:53:53Z" level=info msg="Using sqlite as database backend"
Oct 14 09:53:53 compute-0 podman[433735]: time="2025-10-14T09:53:53Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct 14 09:53:53 compute-0 podman[433735]: time="2025-10-14T09:53:53Z" level=info msg="Using systemd socket activation to determine API endpoint"
Oct 14 09:53:53 compute-0 podman[433735]: time="2025-10-14T09:53:53Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Oct 14 09:53:53 compute-0 podman[433735]: @ - - [14/Oct/2025:09:53:53 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct 14 09:53:53 compute-0 podman[433735]: @ - - [14/Oct/2025:09:53:53 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 29171 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct 14 09:53:54 compute-0 unix_chkpwd[433768]: password check failed for user (root)
Oct 14 09:53:54 compute-0 peaceful_wilson[433705]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:53:54 compute-0 peaceful_wilson[433705]: --> relative data size: 1.0
Oct 14 09:53:54 compute-0 peaceful_wilson[433705]: --> All data devices are unavailable
Oct 14 09:53:54 compute-0 systemd[1]: libpod-2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494.scope: Deactivated successfully.
Oct 14 09:53:54 compute-0 systemd[1]: libpod-2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494.scope: Consumed 1.020s CPU time.
Oct 14 09:53:54 compute-0 podman[433652]: 2025-10-14 09:53:54.153087913 +0000 UTC m=+1.520748095 container died 2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:53:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-40efb8da77d563d238777b09d665e5e9ee879f5716f281aa05b8b3f68d29b3f2-merged.mount: Deactivated successfully.
Oct 14 09:53:54 compute-0 nova_compute[259627]: 2025-10-14 09:53:54.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:54 compute-0 podman[433652]: 2025-10-14 09:53:54.337934609 +0000 UTC m=+1.705594811 container remove 2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:53:54 compute-0 systemd[1]: libpod-conmon-2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494.scope: Deactivated successfully.
Oct 14 09:53:54 compute-0 sudo[433502]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:54 compute-0 sudo[433787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:53:54 compute-0 sudo[433787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:53:54 compute-0 sudo[433787]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:54 compute-0 sudo[433812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:53:54 compute-0 sudo[433812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:53:54 compute-0 sudo[433812]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:54 compute-0 sudo[433837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:53:54 compute-0 sudo[433837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:53:54 compute-0 sudo[433837]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:54 compute-0 sudo[433862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:53:54 compute-0 sudo[433862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:53:55 compute-0 podman[433926]: 2025-10-14 09:53:55.038344355 +0000 UTC m=+0.047708861 container create 516495ae47fed8716097ec72e699d799a2c9d275c77b160e47fdfd2d445ecb13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:53:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:55 compute-0 systemd[1]: Started libpod-conmon-516495ae47fed8716097ec72e699d799a2c9d275c77b160e47fdfd2d445ecb13.scope.
Oct 14 09:53:55 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:53:55 compute-0 podman[433926]: 2025-10-14 09:53:55.107388459 +0000 UTC m=+0.116752975 container init 516495ae47fed8716097ec72e699d799a2c9d275c77b160e47fdfd2d445ecb13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:53:55 compute-0 podman[433926]: 2025-10-14 09:53:55.018388946 +0000 UTC m=+0.027753492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:53:55 compute-0 podman[433926]: 2025-10-14 09:53:55.115922539 +0000 UTC m=+0.125287045 container start 516495ae47fed8716097ec72e699d799a2c9d275c77b160e47fdfd2d445ecb13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:53:55 compute-0 youthful_bohr[433942]: 167 167
Oct 14 09:53:55 compute-0 systemd[1]: libpod-516495ae47fed8716097ec72e699d799a2c9d275c77b160e47fdfd2d445ecb13.scope: Deactivated successfully.
Oct 14 09:53:55 compute-0 podman[433926]: 2025-10-14 09:53:55.123780082 +0000 UTC m=+0.133144598 container attach 516495ae47fed8716097ec72e699d799a2c9d275c77b160e47fdfd2d445ecb13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 09:53:55 compute-0 podman[433926]: 2025-10-14 09:53:55.126273403 +0000 UTC m=+0.135637899 container died 516495ae47fed8716097ec72e699d799a2c9d275c77b160e47fdfd2d445ecb13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:53:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f63532eccb636bb9b97985424c6bf9c95cd90be2d71946fcb8216ebf91f0bc6-merged.mount: Deactivated successfully.
Oct 14 09:53:55 compute-0 podman[433926]: 2025-10-14 09:53:55.1669083 +0000 UTC m=+0.176272806 container remove 516495ae47fed8716097ec72e699d799a2c9d275c77b160e47fdfd2d445ecb13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 09:53:55 compute-0 systemd[1]: libpod-conmon-516495ae47fed8716097ec72e699d799a2c9d275c77b160e47fdfd2d445ecb13.scope: Deactivated successfully.
Oct 14 09:53:55 compute-0 podman[433966]: 2025-10-14 09:53:55.341008622 +0000 UTC m=+0.046231575 container create 09050cd81f609fe28e67b495bec9f8f3a4274002dec880fb4742e5820e5e5c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_zhukovsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:53:55 compute-0 systemd[1]: Started libpod-conmon-09050cd81f609fe28e67b495bec9f8f3a4274002dec880fb4742e5820e5e5c28.scope.
Oct 14 09:53:55 compute-0 podman[433966]: 2025-10-14 09:53:55.318371706 +0000 UTC m=+0.023594709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:53:55 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:53:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0578fc8aafbb3a6b9664a6b5293aa7fd9404e31413c75a012468cd4be36e620c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:53:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0578fc8aafbb3a6b9664a6b5293aa7fd9404e31413c75a012468cd4be36e620c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:53:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0578fc8aafbb3a6b9664a6b5293aa7fd9404e31413c75a012468cd4be36e620c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:53:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0578fc8aafbb3a6b9664a6b5293aa7fd9404e31413c75a012468cd4be36e620c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:53:55 compute-0 podman[433966]: 2025-10-14 09:53:55.461814256 +0000 UTC m=+0.167037169 container init 09050cd81f609fe28e67b495bec9f8f3a4274002dec880fb4742e5820e5e5c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_zhukovsky, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 09:53:55 compute-0 podman[433966]: 2025-10-14 09:53:55.473926483 +0000 UTC m=+0.179149436 container start 09050cd81f609fe28e67b495bec9f8f3a4274002dec880fb4742e5820e5e5c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:53:55 compute-0 podman[433966]: 2025-10-14 09:53:55.477956482 +0000 UTC m=+0.183179435 container attach 09050cd81f609fe28e67b495bec9f8f3a4274002dec880fb4742e5820e5e5c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct 14 09:53:55 compute-0 nova_compute[259627]: 2025-10-14 09:53:55.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:53:55 compute-0 nova_compute[259627]: 2025-10-14 09:53:55.980 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 09:53:56 compute-0 ceph-mon[74249]: pgmap v3033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]: {
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:     "0": [
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:         {
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "devices": [
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "/dev/loop3"
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             ],
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "lv_name": "ceph_lv0",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "lv_size": "21470642176",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "name": "ceph_lv0",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "tags": {
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.cluster_name": "ceph",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.crush_device_class": "",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.encrypted": "0",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.osd_id": "0",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.type": "block",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.vdo": "0"
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             },
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "type": "block",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "vg_name": "ceph_vg0"
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:         }
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:     ],
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:     "1": [
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:         {
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "devices": [
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "/dev/loop4"
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             ],
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "lv_name": "ceph_lv1",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "lv_size": "21470642176",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "name": "ceph_lv1",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "tags": {
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.cluster_name": "ceph",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.crush_device_class": "",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.encrypted": "0",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.osd_id": "1",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.type": "block",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.vdo": "0"
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             },
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "type": "block",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "vg_name": "ceph_vg1"
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:         }
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:     ],
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:     "2": [
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:         {
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "devices": [
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "/dev/loop5"
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             ],
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "lv_name": "ceph_lv2",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "lv_size": "21470642176",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "name": "ceph_lv2",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "tags": {
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.cluster_name": "ceph",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.crush_device_class": "",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.encrypted": "0",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.osd_id": "2",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.type": "block",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:                 "ceph.vdo": "0"
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             },
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "type": "block",
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:             "vg_name": "ceph_vg2"
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:         }
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]:     ]
Oct 14 09:53:56 compute-0 adoring_zhukovsky[433982]: }
Oct 14 09:53:56 compute-0 systemd[1]: libpod-09050cd81f609fe28e67b495bec9f8f3a4274002dec880fb4742e5820e5e5c28.scope: Deactivated successfully.
Oct 14 09:53:56 compute-0 podman[433966]: 2025-10-14 09:53:56.324108414 +0000 UTC m=+1.029331357 container died 09050cd81f609fe28e67b495bec9f8f3a4274002dec880fb4742e5820e5e5c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_zhukovsky, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Oct 14 09:53:56 compute-0 sshd-session[433123]: Failed password for root from 80.94.93.119 port 12588 ssh2
Oct 14 09:53:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0578fc8aafbb3a6b9664a6b5293aa7fd9404e31413c75a012468cd4be36e620c-merged.mount: Deactivated successfully.
Oct 14 09:53:56 compute-0 podman[433966]: 2025-10-14 09:53:56.377953455 +0000 UTC m=+1.083176368 container remove 09050cd81f609fe28e67b495bec9f8f3a4274002dec880fb4742e5820e5e5c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_zhukovsky, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 09:53:56 compute-0 systemd[1]: libpod-conmon-09050cd81f609fe28e67b495bec9f8f3a4274002dec880fb4742e5820e5e5c28.scope: Deactivated successfully.
Oct 14 09:53:56 compute-0 sudo[433862]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:56 compute-0 sudo[434026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:53:56 compute-0 sudo[434026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:53:56 compute-0 sudo[434026]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:56 compute-0 sudo[434051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:53:56 compute-0 sudo[434051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:53:56 compute-0 sudo[434051]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:56 compute-0 sudo[434076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:53:56 compute-0 sudo[434076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:53:56 compute-0 sudo[434076]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:56 compute-0 sudo[434101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:53:56 compute-0 sudo[434101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:53:56 compute-0 nova_compute[259627]: 2025-10-14 09:53:56.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:57 compute-0 podman[434166]: 2025-10-14 09:53:57.120681749 +0000 UTC m=+0.052727315 container create fda21a459b1df9e8f95fbb71923e7a11dee4f738eb98404a563913925cfecc03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_almeida, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:53:57 compute-0 systemd[1]: Started libpod-conmon-fda21a459b1df9e8f95fbb71923e7a11dee4f738eb98404a563913925cfecc03.scope.
Oct 14 09:53:57 compute-0 podman[434166]: 2025-10-14 09:53:57.092330914 +0000 UTC m=+0.024376520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:53:57 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:53:57 compute-0 podman[434166]: 2025-10-14 09:53:57.206226508 +0000 UTC m=+0.138272124 container init fda21a459b1df9e8f95fbb71923e7a11dee4f738eb98404a563913925cfecc03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_almeida, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 09:53:57 compute-0 podman[434166]: 2025-10-14 09:53:57.216133991 +0000 UTC m=+0.148179517 container start fda21a459b1df9e8f95fbb71923e7a11dee4f738eb98404a563913925cfecc03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_almeida, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:53:57 compute-0 podman[434166]: 2025-10-14 09:53:57.219887273 +0000 UTC m=+0.151932839 container attach fda21a459b1df9e8f95fbb71923e7a11dee4f738eb98404a563913925cfecc03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_almeida, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:53:57 compute-0 dazzling_almeida[434182]: 167 167
Oct 14 09:53:57 compute-0 systemd[1]: libpod-fda21a459b1df9e8f95fbb71923e7a11dee4f738eb98404a563913925cfecc03.scope: Deactivated successfully.
Oct 14 09:53:57 compute-0 podman[434166]: 2025-10-14 09:53:57.225903421 +0000 UTC m=+0.157949007 container died fda21a459b1df9e8f95fbb71923e7a11dee4f738eb98404a563913925cfecc03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_almeida, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 09:53:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-928b8fa07419d032aff04863604fe3a1da8d79bfc6b542ca5825fc7713e6df4f-merged.mount: Deactivated successfully.
Oct 14 09:53:57 compute-0 podman[434166]: 2025-10-14 09:53:57.272619867 +0000 UTC m=+0.204665423 container remove fda21a459b1df9e8f95fbb71923e7a11dee4f738eb98404a563913925cfecc03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:53:57 compute-0 systemd[1]: libpod-conmon-fda21a459b1df9e8f95fbb71923e7a11dee4f738eb98404a563913925cfecc03.scope: Deactivated successfully.
Oct 14 09:53:57 compute-0 sshd-session[433123]: Received disconnect from 80.94.93.119 port 12588:11:  [preauth]
Oct 14 09:53:57 compute-0 sshd-session[433123]: Disconnected from authenticating user root 80.94.93.119 port 12588 [preauth]
Oct 14 09:53:57 compute-0 sshd-session[433123]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 14 09:53:57 compute-0 podman[434206]: 2025-10-14 09:53:57.503150224 +0000 UTC m=+0.070151752 container create 2d91b6751589f1ebc20977ee5895cf9ce12266aa376af397d404beec2f39cd73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 14 09:53:57 compute-0 systemd[1]: Started libpod-conmon-2d91b6751589f1ebc20977ee5895cf9ce12266aa376af397d404beec2f39cd73.scope.
Oct 14 09:53:57 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:53:57 compute-0 podman[434206]: 2025-10-14 09:53:57.476806718 +0000 UTC m=+0.043808266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:53:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010eaed2ed53d52eb4d0ed0fee0a1150e1b54c9255f6f000707b8e3d18ecca4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:53:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010eaed2ed53d52eb4d0ed0fee0a1150e1b54c9255f6f000707b8e3d18ecca4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:53:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010eaed2ed53d52eb4d0ed0fee0a1150e1b54c9255f6f000707b8e3d18ecca4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:53:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010eaed2ed53d52eb4d0ed0fee0a1150e1b54c9255f6f000707b8e3d18ecca4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:53:57 compute-0 podman[434206]: 2025-10-14 09:53:57.587827022 +0000 UTC m=+0.154828590 container init 2d91b6751589f1ebc20977ee5895cf9ce12266aa376af397d404beec2f39cd73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 09:53:57 compute-0 podman[434206]: 2025-10-14 09:53:57.596743231 +0000 UTC m=+0.163744749 container start 2d91b6751589f1ebc20977ee5895cf9ce12266aa376af397d404beec2f39cd73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:53:57 compute-0 podman[434206]: 2025-10-14 09:53:57.60037322 +0000 UTC m=+0.167374748 container attach 2d91b6751589f1ebc20977ee5895cf9ce12266aa376af397d404beec2f39cd73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 09:53:58 compute-0 ceph-mon[74249]: pgmap v3034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]: {
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:         "osd_id": 2,
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:         "type": "bluestore"
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:     },
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:         "osd_id": 1,
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:         "type": "bluestore"
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:     },
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:         "osd_id": 0,
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:         "type": "bluestore"
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]:     }
Oct 14 09:53:58 compute-0 distracted_rhodes[434223]: }
Oct 14 09:53:58 compute-0 systemd[1]: libpod-2d91b6751589f1ebc20977ee5895cf9ce12266aa376af397d404beec2f39cd73.scope: Deactivated successfully.
Oct 14 09:53:58 compute-0 podman[434257]: 2025-10-14 09:53:58.576531802 +0000 UTC m=+0.034326603 container died 2d91b6751589f1ebc20977ee5895cf9ce12266aa376af397d404beec2f39cd73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 09:53:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-010eaed2ed53d52eb4d0ed0fee0a1150e1b54c9255f6f000707b8e3d18ecca4e-merged.mount: Deactivated successfully.
Oct 14 09:53:58 compute-0 podman[434257]: 2025-10-14 09:53:58.625872583 +0000 UTC m=+0.083667384 container remove 2d91b6751589f1ebc20977ee5895cf9ce12266aa376af397d404beec2f39cd73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:53:58 compute-0 systemd[1]: libpod-conmon-2d91b6751589f1ebc20977ee5895cf9ce12266aa376af397d404beec2f39cd73.scope: Deactivated successfully.
Oct 14 09:53:58 compute-0 sudo[434101]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:53:58 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:53:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:53:58 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:53:58 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 68a6e1c7-fa11-4c3f-848f-d8d5019e06c8 does not exist
Oct 14 09:53:58 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev b116c2b8-d403-46b1-ab6e-b77af63b42d1 does not exist
Oct 14 09:53:58 compute-0 sudo[434272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:53:58 compute-0 sudo[434272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:53:58 compute-0 sudo[434272]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:58 compute-0 sudo[434297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:53:58 compute-0 sudo[434297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:53:58 compute-0 sudo[434297]: pam_unix(sudo:session): session closed for user root
Oct 14 09:53:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:59 compute-0 nova_compute[259627]: 2025-10-14 09:53:59.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:53:59 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:53:59 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:53:59 compute-0 ceph-mon[74249]: pgmap v3035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:53:59 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:53:59.716 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 09:54:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:02 compute-0 nova_compute[259627]: 2025-10-14 09:54:02.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:02 compute-0 ceph-mon[74249]: pgmap v3036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:54:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:54:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:54:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:54:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:54:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:54:02 compute-0 nova_compute[259627]: 2025-10-14 09:54:02.994 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:54:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:54:04 compute-0 ceph-mon[74249]: pgmap v3037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:04 compute-0 nova_compute[259627]: 2025-10-14 09:54:04.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:04 compute-0 podman[434323]: 2025-10-14 09:54:04.696465524 +0000 UTC m=+0.095411772 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 14 09:54:04 compute-0 podman[434322]: 2025-10-14 09:54:04.742233747 +0000 UTC m=+0.142754413 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 14 09:54:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:54:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3690690533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:54:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:54:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3690690533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:54:05 compute-0 nova_compute[259627]: 2025-10-14 09:54:05.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:54:06 compute-0 ceph-mon[74249]: pgmap v3038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3690690533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:54:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3690690533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:54:07 compute-0 nova_compute[259627]: 2025-10-14 09:54:07.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:54:07.070 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:54:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:54:07.071 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:54:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:54:07.071 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:54:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:08 compute-0 ceph-mon[74249]: pgmap v3039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:54:08 compute-0 podman[433735]: time="2025-10-14T09:54:08Z" level=info msg="Received shutdown.Stop(), terminating!" PID=433735
Oct 14 09:54:08 compute-0 systemd[1]: podman.service: Deactivated successfully.
Oct 14 09:54:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:09 compute-0 nova_compute[259627]: 2025-10-14 09:54:09.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:09 compute-0 nova_compute[259627]: 2025-10-14 09:54:09.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:54:10 compute-0 nova_compute[259627]: 2025-10-14 09:54:10.013 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:54:10 compute-0 nova_compute[259627]: 2025-10-14 09:54:10.013 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:54:10 compute-0 nova_compute[259627]: 2025-10-14 09:54:10.014 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:54:10 compute-0 nova_compute[259627]: 2025-10-14 09:54:10.014 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:54:10 compute-0 nova_compute[259627]: 2025-10-14 09:54:10.015 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:54:10 compute-0 ceph-mon[74249]: pgmap v3040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:54:10 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1248548850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:54:10 compute-0 nova_compute[259627]: 2025-10-14 09:54:10.542 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:54:10 compute-0 nova_compute[259627]: 2025-10-14 09:54:10.721 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:54:10 compute-0 nova_compute[259627]: 2025-10-14 09:54:10.723 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3577MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:54:10 compute-0 nova_compute[259627]: 2025-10-14 09:54:10.723 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:54:10 compute-0 nova_compute[259627]: 2025-10-14 09:54:10.723 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:54:10 compute-0 nova_compute[259627]: 2025-10-14 09:54:10.799 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:54:10 compute-0 nova_compute[259627]: 2025-10-14 09:54:10.800 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:54:10 compute-0 nova_compute[259627]: 2025-10-14 09:54:10.836 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:54:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:11 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1248548850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:54:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:54:11 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/435957043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:54:11 compute-0 nova_compute[259627]: 2025-10-14 09:54:11.328 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:54:11 compute-0 nova_compute[259627]: 2025-10-14 09:54:11.338 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:54:11 compute-0 nova_compute[259627]: 2025-10-14 09:54:11.357 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:54:11 compute-0 nova_compute[259627]: 2025-10-14 09:54:11.360 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:54:11 compute-0 nova_compute[259627]: 2025-10-14 09:54:11.360 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:54:12 compute-0 nova_compute[259627]: 2025-10-14 09:54:12.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:12 compute-0 ceph-mon[74249]: pgmap v3041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:12 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/435957043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:54:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:54:13 compute-0 nova_compute[259627]: 2025-10-14 09:54:13.356 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:54:13 compute-0 nova_compute[259627]: 2025-10-14 09:54:13.357 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:54:13 compute-0 nova_compute[259627]: 2025-10-14 09:54:13.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:54:13 compute-0 nova_compute[259627]: 2025-10-14 09:54:13.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:54:13 compute-0 nova_compute[259627]: 2025-10-14 09:54:13.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:54:13 compute-0 nova_compute[259627]: 2025-10-14 09:54:13.994 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:54:14 compute-0 ceph-mon[74249]: pgmap v3042: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:14 compute-0 nova_compute[259627]: 2025-10-14 09:54:14.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:14 compute-0 nova_compute[259627]: 2025-10-14 09:54:14.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:54:14 compute-0 nova_compute[259627]: 2025-10-14 09:54:14.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:54:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:15 compute-0 nova_compute[259627]: 2025-10-14 09:54:15.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:54:16 compute-0 ceph-mon[74249]: pgmap v3043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:16 compute-0 sudo[434414]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip --brief address list
Oct 14 09:54:16 compute-0 sudo[434414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:54:16 compute-0 sudo[434414]: pam_unix(sudo:session): session closed for user root
Oct 14 09:54:16 compute-0 sudo[434439]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip -o netns list
Oct 14 09:54:16 compute-0 sudo[434439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:54:16 compute-0 sudo[434439]: pam_unix(sudo:session): session closed for user root
Oct 14 09:54:17 compute-0 sshd-session[433129]: Connection closed by 192.168.122.30 port 37260
Oct 14 09:54:17 compute-0 sshd-session[433126]: pam_unix(sshd:session): session closed for user zuul
Oct 14 09:54:17 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Oct 14 09:54:17 compute-0 systemd[1]: session-55.scope: Consumed 1.592s CPU time.
Oct 14 09:54:17 compute-0 systemd-logind[799]: Session 55 logged out. Waiting for processes to exit.
Oct 14 09:54:17 compute-0 systemd-logind[799]: Removed session 55.
Oct 14 09:54:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:17 compute-0 nova_compute[259627]: 2025-10-14 09:54:17.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:17 compute-0 sshd-session[433734]: Connection closed by 192.168.122.30 port 37266
Oct 14 09:54:17 compute-0 sshd-session[433731]: pam_unix(sshd:session): session closed for user zuul
Oct 14 09:54:17 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Oct 14 09:54:17 compute-0 systemd-logind[799]: Session 56 logged out. Waiting for processes to exit.
Oct 14 09:54:17 compute-0 systemd-logind[799]: Removed session 56.
Oct 14 09:54:18 compute-0 ceph-mon[74249]: pgmap v3044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:54:18 compute-0 nova_compute[259627]: 2025-10-14 09:54:18.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:54:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:19 compute-0 nova_compute[259627]: 2025-10-14 09:54:19.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:20 compute-0 ceph-mon[74249]: pgmap v3045: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:21 compute-0 podman[434465]: 2025-10-14 09:54:21.656143228 +0000 UTC m=+0.059909241 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 14 09:54:21 compute-0 podman[434464]: 2025-10-14 09:54:21.688996344 +0000 UTC m=+0.094087840 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, org.label-schema.build-date=20251009)
Oct 14 09:54:22 compute-0 nova_compute[259627]: 2025-10-14 09:54:22.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:22 compute-0 ceph-mon[74249]: pgmap v3046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:54:23 compute-0 nova_compute[259627]: 2025-10-14 09:54:23.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:54:24 compute-0 ceph-mon[74249]: pgmap v3047: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:24 compute-0 nova_compute[259627]: 2025-10-14 09:54:24.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:26 compute-0 ceph-mon[74249]: pgmap v3048: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3049: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:27 compute-0 nova_compute[259627]: 2025-10-14 09:54:27.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:28 compute-0 ceph-mon[74249]: pgmap v3049: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:54:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:29 compute-0 nova_compute[259627]: 2025-10-14 09:54:29.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:30 compute-0 ceph-mon[74249]: pgmap v3050: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:32 compute-0 nova_compute[259627]: 2025-10-14 09:54:32.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:32 compute-0 ceph-mon[74249]: pgmap v3051: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:54:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:54:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:54:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:54:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:54:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:54:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:54:32
Oct 14 09:54:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:54:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:54:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'images', '.rgw.root', '.mgr', 'default.rgw.control', 'vms', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data']
Oct 14 09:54:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:54:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:54:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:54:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:54:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:54:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:54:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:54:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:54:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:54:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:54:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:54:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:54:34 compute-0 ceph-mon[74249]: pgmap v3052: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:34 compute-0 nova_compute[259627]: 2025-10-14 09:54:34.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:35 compute-0 podman[434505]: 2025-10-14 09:54:35.66495262 +0000 UTC m=+0.067331702 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:54:35 compute-0 podman[434504]: 2025-10-14 09:54:35.735228845 +0000 UTC m=+0.142625900 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:54:36 compute-0 ceph-mon[74249]: pgmap v3053: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:37 compute-0 nova_compute[259627]: 2025-10-14 09:54:37.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:38 compute-0 ceph-mon[74249]: pgmap v3054: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:54:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:39 compute-0 nova_compute[259627]: 2025-10-14 09:54:39.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:39 compute-0 nova_compute[259627]: 2025-10-14 09:54:39.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:54:39 compute-0 nova_compute[259627]: 2025-10-14 09:54:39.994 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 14 09:54:40 compute-0 nova_compute[259627]: 2025-10-14 09:54:40.009 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 14 09:54:40 compute-0 ceph-mon[74249]: pgmap v3055: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 14 op/s
Oct 14 09:54:42 compute-0 nova_compute[259627]: 2025-10-14 09:54:42.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:42 compute-0 ceph-mon[74249]: pgmap v3056: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 14 op/s
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 14 op/s
Oct 14 09:54:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.362153) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435683362216, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 673, "num_deletes": 250, "total_data_size": 831127, "memory_usage": 843360, "flush_reason": "Manual Compaction"}
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435683370900, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 545268, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63440, "largest_seqno": 64112, "table_properties": {"data_size": 542209, "index_size": 966, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8193, "raw_average_key_size": 20, "raw_value_size": 535804, "raw_average_value_size": 1349, "num_data_blocks": 44, "num_entries": 397, "num_filter_entries": 397, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760435631, "oldest_key_time": 1760435631, "file_creation_time": 1760435683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 8815 microseconds, and 5433 cpu microseconds.
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.370966) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 545268 bytes OK
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.370995) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.372388) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.372412) EVENT_LOG_v1 {"time_micros": 1760435683372403, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.372438) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 827582, prev total WAL file size 827582, number of live WAL files 2.
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.373466) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353030' seq:72057594037927935, type:22 .. '6D6772737461740032373531' seq:0, type:0; will stop at (end)
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(532KB)], [149(10MB)]
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435683373537, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 11916472, "oldest_snapshot_seqno": -1}
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8138 keys, 8892252 bytes, temperature: kUnknown
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435683437563, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 8892252, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8841911, "index_size": 28916, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20357, "raw_key_size": 213573, "raw_average_key_size": 26, "raw_value_size": 8700468, "raw_average_value_size": 1069, "num_data_blocks": 1117, "num_entries": 8138, "num_filter_entries": 8138, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760435683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.437812) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 8892252 bytes
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.439276) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.9 rd, 138.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.8 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(38.2) write-amplify(16.3) OK, records in: 8629, records dropped: 491 output_compression: NoCompression
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.439297) EVENT_LOG_v1 {"time_micros": 1760435683439288, "job": 92, "event": "compaction_finished", "compaction_time_micros": 64100, "compaction_time_cpu_micros": 47976, "output_level": 6, "num_output_files": 1, "total_output_size": 8892252, "num_input_records": 8629, "num_output_records": 8138, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435683439545, "job": 92, "event": "table_file_deletion", "file_number": 151}
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435683442216, "job": 92, "event": "table_file_deletion", "file_number": 149}
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.373401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.442288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.442293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.442295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.442297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:54:43 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.442299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:54:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:54:44 compute-0 ceph-mon[74249]: pgmap v3057: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 14 op/s
Oct 14 09:54:44 compute-0 nova_compute[259627]: 2025-10-14 09:54:44.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Oct 14 09:54:46 compute-0 ceph-mon[74249]: pgmap v3058: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Oct 14 09:54:46 compute-0 nova_compute[259627]: 2025-10-14 09:54:46.989 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:54:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 09:54:47 compute-0 nova_compute[259627]: 2025-10-14 09:54:47.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:54:48 compute-0 ceph-mon[74249]: pgmap v3059: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 09:54:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 09:54:49 compute-0 ceph-mon[74249]: pgmap v3060: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 09:54:49 compute-0 nova_compute[259627]: 2025-10-14 09:54:49.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 09:54:52 compute-0 nova_compute[259627]: 2025-10-14 09:54:52.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:52 compute-0 ceph-mon[74249]: pgmap v3061: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 09:54:52 compute-0 podman[434548]: 2025-10-14 09:54:52.658920122 +0000 UTC m=+0.075380361 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:54:52 compute-0 podman[434549]: 2025-10-14 09:54:52.692149727 +0000 UTC m=+0.100547938 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3)
Oct 14 09:54:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Oct 14 09:54:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:54:54 compute-0 ceph-mon[74249]: pgmap v3062: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Oct 14 09:54:54 compute-0 nova_compute[259627]: 2025-10-14 09:54:54.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Oct 14 09:54:55 compute-0 nova_compute[259627]: 2025-10-14 09:54:55.134 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:54:56 compute-0 ceph-mon[74249]: pgmap v3063: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Oct 14 09:54:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 25 op/s
Oct 14 09:54:57 compute-0 nova_compute[259627]: 2025-10-14 09:54:57.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:58 compute-0 ceph-mon[74249]: pgmap v3064: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 25 op/s
Oct 14 09:54:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:54:58 compute-0 sudo[434586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:54:58 compute-0 sudo[434586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:54:58 compute-0 sudo[434586]: pam_unix(sudo:session): session closed for user root
Oct 14 09:54:59 compute-0 sudo[434611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:54:59 compute-0 sudo[434611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:54:59 compute-0 sudo[434611]: pam_unix(sudo:session): session closed for user root
Oct 14 09:54:59 compute-0 sudo[434636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:54:59 compute-0 sudo[434636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:54:59 compute-0 sudo[434636]: pam_unix(sudo:session): session closed for user root
Oct 14 09:54:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:54:59 compute-0 sudo[434661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:54:59 compute-0 sudo[434661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:54:59 compute-0 nova_compute[259627]: 2025-10-14 09:54:59.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:54:59 compute-0 sudo[434661]: pam_unix(sudo:session): session closed for user root
Oct 14 09:54:59 compute-0 sudo[434718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:54:59 compute-0 sudo[434718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:54:59 compute-0 sudo[434718]: pam_unix(sudo:session): session closed for user root
Oct 14 09:54:59 compute-0 sudo[434743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:54:59 compute-0 sudo[434743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:54:59 compute-0 sudo[434743]: pam_unix(sudo:session): session closed for user root
Oct 14 09:54:59 compute-0 sudo[434768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:54:59 compute-0 sudo[434768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:54:59 compute-0 sudo[434768]: pam_unix(sudo:session): session closed for user root
Oct 14 09:55:00 compute-0 sudo[434793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 14 09:55:00 compute-0 sudo[434793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:55:00 compute-0 ceph-mon[74249]: pgmap v3065: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:00 compute-0 sudo[434793]: pam_unix(sudo:session): session closed for user root
Oct 14 09:55:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:55:00 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:55:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:55:00 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:55:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:55:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:55:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:55:00 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:55:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:55:00 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:55:00 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 36d9e896-86ed-46db-a630-c8a7f90fcc41 does not exist
Oct 14 09:55:00 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 2babdfce-f984-4ddf-be43-ba7a1137ed5a does not exist
Oct 14 09:55:00 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 3426db94-0338-4956-a58b-9ec067f83ca6 does not exist
Oct 14 09:55:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:55:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:55:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:55:00 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:55:00 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:55:00 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:55:00 compute-0 sudo[434835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:55:00 compute-0 sudo[434835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:55:00 compute-0 sudo[434835]: pam_unix(sudo:session): session closed for user root
Oct 14 09:55:00 compute-0 sudo[434860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:55:00 compute-0 sudo[434860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:55:00 compute-0 sudo[434860]: pam_unix(sudo:session): session closed for user root
Oct 14 09:55:00 compute-0 sudo[434885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:55:00 compute-0 sudo[434885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:55:00 compute-0 sudo[434885]: pam_unix(sudo:session): session closed for user root
Oct 14 09:55:00 compute-0 sudo[434910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:55:00 compute-0 sudo[434910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:55:00 compute-0 podman[434975]: 2025-10-14 09:55:00.962441204 +0000 UTC m=+0.054969299 container create bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:55:01 compute-0 systemd[1]: Started libpod-conmon-bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096.scope.
Oct 14 09:55:01 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:55:01 compute-0 podman[434975]: 2025-10-14 09:55:00.934829287 +0000 UTC m=+0.027357402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:55:01 compute-0 podman[434975]: 2025-10-14 09:55:01.04213128 +0000 UTC m=+0.134659385 container init bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 09:55:01 compute-0 podman[434975]: 2025-10-14 09:55:01.04945771 +0000 UTC m=+0.141985785 container start bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 09:55:01 compute-0 systemd[1]: libpod-bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096.scope: Deactivated successfully.
Oct 14 09:55:01 compute-0 hopeful_fermi[434992]: 167 167
Oct 14 09:55:01 compute-0 podman[434975]: 2025-10-14 09:55:01.05844314 +0000 UTC m=+0.150971265 container attach bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:55:01 compute-0 conmon[434992]: conmon bd8f2ea7a94676644c79 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096.scope/container/memory.events
Oct 14 09:55:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:01 compute-0 podman[434997]: 2025-10-14 09:55:01.103089676 +0000 UTC m=+0.026500892 container died bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 09:55:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-897e05dff8e363364d29d3bb0805ef8f2946fd2fb910f3e4e374ba528351c46e-merged.mount: Deactivated successfully.
Oct 14 09:55:01 compute-0 podman[434997]: 2025-10-14 09:55:01.194217582 +0000 UTC m=+0.117628788 container remove bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:55:01 compute-0 systemd[1]: libpod-conmon-bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096.scope: Deactivated successfully.
Oct 14 09:55:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:55:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:55:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:55:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:55:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:55:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:55:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:55:01 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:55:01 compute-0 podman[435019]: 2025-10-14 09:55:01.397753006 +0000 UTC m=+0.060492565 container create a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_varahamihira, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:55:01 compute-0 systemd[1]: Started libpod-conmon-a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60.scope.
Oct 14 09:55:01 compute-0 podman[435019]: 2025-10-14 09:55:01.374893465 +0000 UTC m=+0.037633054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:55:01 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e26ce8b6e92ae489f067ee61a0b590a9277c555446dad7efad6d0d80768a5dc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e26ce8b6e92ae489f067ee61a0b590a9277c555446dad7efad6d0d80768a5dc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e26ce8b6e92ae489f067ee61a0b590a9277c555446dad7efad6d0d80768a5dc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e26ce8b6e92ae489f067ee61a0b590a9277c555446dad7efad6d0d80768a5dc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e26ce8b6e92ae489f067ee61a0b590a9277c555446dad7efad6d0d80768a5dc8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:55:01 compute-0 podman[435019]: 2025-10-14 09:55:01.513945067 +0000 UTC m=+0.176684696 container init a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 09:55:01 compute-0 podman[435019]: 2025-10-14 09:55:01.53364032 +0000 UTC m=+0.196379909 container start a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 09:55:01 compute-0 podman[435019]: 2025-10-14 09:55:01.541042312 +0000 UTC m=+0.203781951 container attach a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_varahamihira, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:55:02 compute-0 nova_compute[259627]: 2025-10-14 09:55:02.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:02 compute-0 ceph-mon[74249]: pgmap v3066: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:02 compute-0 recursing_varahamihira[435035]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:55:02 compute-0 recursing_varahamihira[435035]: --> relative data size: 1.0
Oct 14 09:55:02 compute-0 recursing_varahamihira[435035]: --> All data devices are unavailable
Oct 14 09:55:02 compute-0 systemd[1]: libpod-a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60.scope: Deactivated successfully.
Oct 14 09:55:02 compute-0 podman[435019]: 2025-10-14 09:55:02.662792086 +0000 UTC m=+1.325531675 container died a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 09:55:02 compute-0 systemd[1]: libpod-a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60.scope: Consumed 1.082s CPU time.
Oct 14 09:55:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e26ce8b6e92ae489f067ee61a0b590a9277c555446dad7efad6d0d80768a5dc8-merged.mount: Deactivated successfully.
Oct 14 09:55:02 compute-0 podman[435019]: 2025-10-14 09:55:02.735960652 +0000 UTC m=+1.398700211 container remove a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 09:55:02 compute-0 systemd[1]: libpod-conmon-a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60.scope: Deactivated successfully.
Oct 14 09:55:02 compute-0 sudo[434910]: pam_unix(sudo:session): session closed for user root
Oct 14 09:55:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:55:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:55:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:55:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:55:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:55:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:55:02 compute-0 sudo[435076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:55:02 compute-0 sudo[435076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:55:02 compute-0 sudo[435076]: pam_unix(sudo:session): session closed for user root
Oct 14 09:55:02 compute-0 sudo[435101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:55:02 compute-0 sudo[435101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:55:02 compute-0 sudo[435101]: pam_unix(sudo:session): session closed for user root
Oct 14 09:55:03 compute-0 sudo[435126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:55:03 compute-0 sudo[435126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:55:03 compute-0 sudo[435126]: pam_unix(sudo:session): session closed for user root
Oct 14 09:55:03 compute-0 sudo[435151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:55:03 compute-0 sudo[435151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:55:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:55:03 compute-0 podman[435218]: 2025-10-14 09:55:03.498047681 +0000 UTC m=+0.047872276 container create ed7398186cfae52cee01ed4b8033a5b7e7ca6c2caa28ec95c7b8abbe038a02c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:55:03 compute-0 systemd[1]: Started libpod-conmon-ed7398186cfae52cee01ed4b8033a5b7e7ca6c2caa28ec95c7b8abbe038a02c6.scope.
Oct 14 09:55:03 compute-0 podman[435218]: 2025-10-14 09:55:03.477600369 +0000 UTC m=+0.027424954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:55:03 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:55:03 compute-0 podman[435218]: 2025-10-14 09:55:03.593810881 +0000 UTC m=+0.143635456 container init ed7398186cfae52cee01ed4b8033a5b7e7ca6c2caa28ec95c7b8abbe038a02c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:55:03 compute-0 podman[435218]: 2025-10-14 09:55:03.605572789 +0000 UTC m=+0.155397364 container start ed7398186cfae52cee01ed4b8033a5b7e7ca6c2caa28ec95c7b8abbe038a02c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 09:55:03 compute-0 podman[435218]: 2025-10-14 09:55:03.609755542 +0000 UTC m=+0.159580177 container attach ed7398186cfae52cee01ed4b8033a5b7e7ca6c2caa28ec95c7b8abbe038a02c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:55:03 compute-0 elegant_shockley[435235]: 167 167
Oct 14 09:55:03 compute-0 systemd[1]: libpod-ed7398186cfae52cee01ed4b8033a5b7e7ca6c2caa28ec95c7b8abbe038a02c6.scope: Deactivated successfully.
Oct 14 09:55:03 compute-0 podman[435218]: 2025-10-14 09:55:03.612802567 +0000 UTC m=+0.162627122 container died ed7398186cfae52cee01ed4b8033a5b7e7ca6c2caa28ec95c7b8abbe038a02c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 09:55:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe206bf02bf46bda20460e6d968fb08ffd8032a0f91cb6e4591ef1eb9f2a8e34-merged.mount: Deactivated successfully.
Oct 14 09:55:03 compute-0 podman[435218]: 2025-10-14 09:55:03.661202494 +0000 UTC m=+0.211027059 container remove ed7398186cfae52cee01ed4b8033a5b7e7ca6c2caa28ec95c7b8abbe038a02c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 14 09:55:03 compute-0 systemd[1]: libpod-conmon-ed7398186cfae52cee01ed4b8033a5b7e7ca6c2caa28ec95c7b8abbe038a02c6.scope: Deactivated successfully.
Oct 14 09:55:03 compute-0 podman[435259]: 2025-10-14 09:55:03.856581018 +0000 UTC m=+0.060235639 container create cc903a35889d7c7b1ca5c4379bbbfc04ca5de8aa92697a934f01696a13b6af0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:55:03 compute-0 systemd[1]: Started libpod-conmon-cc903a35889d7c7b1ca5c4379bbbfc04ca5de8aa92697a934f01696a13b6af0b.scope.
Oct 14 09:55:03 compute-0 podman[435259]: 2025-10-14 09:55:03.83381046 +0000 UTC m=+0.037465101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:55:03 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1029c526330e5684f79de6ab44af819011dc92fa858e7146bc6889858f53ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1029c526330e5684f79de6ab44af819011dc92fa858e7146bc6889858f53ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1029c526330e5684f79de6ab44af819011dc92fa858e7146bc6889858f53ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1029c526330e5684f79de6ab44af819011dc92fa858e7146bc6889858f53ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:55:03 compute-0 podman[435259]: 2025-10-14 09:55:03.981193916 +0000 UTC m=+0.184848637 container init cc903a35889d7c7b1ca5c4379bbbfc04ca5de8aa92697a934f01696a13b6af0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:55:03 compute-0 podman[435259]: 2025-10-14 09:55:03.993233801 +0000 UTC m=+0.196888422 container start cc903a35889d7c7b1ca5c4379bbbfc04ca5de8aa92697a934f01696a13b6af0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_meninsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:55:04 compute-0 podman[435259]: 2025-10-14 09:55:04.008223489 +0000 UTC m=+0.211878110 container attach cc903a35889d7c7b1ca5c4379bbbfc04ca5de8aa92697a934f01696a13b6af0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_meninsky, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:55:04 compute-0 ceph-mon[74249]: pgmap v3067: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:04 compute-0 nova_compute[259627]: 2025-10-14 09:55:04.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]: {
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:     "0": [
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:         {
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "devices": [
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "/dev/loop3"
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             ],
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "lv_name": "ceph_lv0",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "lv_size": "21470642176",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "name": "ceph_lv0",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "tags": {
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.cluster_name": "ceph",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.crush_device_class": "",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.encrypted": "0",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.osd_id": "0",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.type": "block",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.vdo": "0"
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             },
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "type": "block",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "vg_name": "ceph_vg0"
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:         }
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:     ],
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:     "1": [
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:         {
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "devices": [
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "/dev/loop4"
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             ],
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "lv_name": "ceph_lv1",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "lv_size": "21470642176",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "name": "ceph_lv1",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "tags": {
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.cluster_name": "ceph",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.crush_device_class": "",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.encrypted": "0",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.osd_id": "1",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.type": "block",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.vdo": "0"
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             },
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "type": "block",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "vg_name": "ceph_vg1"
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:         }
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:     ],
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:     "2": [
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:         {
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "devices": [
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "/dev/loop5"
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             ],
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "lv_name": "ceph_lv2",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "lv_size": "21470642176",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "name": "ceph_lv2",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "tags": {
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.cluster_name": "ceph",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.crush_device_class": "",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.encrypted": "0",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.osd_id": "2",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.type": "block",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:                 "ceph.vdo": "0"
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             },
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "type": "block",
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:             "vg_name": "ceph_vg2"
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:         }
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]:     ]
Oct 14 09:55:04 compute-0 romantic_meninsky[435276]: }
Oct 14 09:55:04 compute-0 systemd[1]: libpod-cc903a35889d7c7b1ca5c4379bbbfc04ca5de8aa92697a934f01696a13b6af0b.scope: Deactivated successfully.
Oct 14 09:55:04 compute-0 podman[435259]: 2025-10-14 09:55:04.796944441 +0000 UTC m=+1.000599092 container died cc903a35889d7c7b1ca5c4379bbbfc04ca5de8aa92697a934f01696a13b6af0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:55:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe1029c526330e5684f79de6ab44af819011dc92fa858e7146bc6889858f53ca-merged.mount: Deactivated successfully.
Oct 14 09:55:04 compute-0 podman[435259]: 2025-10-14 09:55:04.866966929 +0000 UTC m=+1.070621580 container remove cc903a35889d7c7b1ca5c4379bbbfc04ca5de8aa92697a934f01696a13b6af0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:55:04 compute-0 systemd[1]: libpod-conmon-cc903a35889d7c7b1ca5c4379bbbfc04ca5de8aa92697a934f01696a13b6af0b.scope: Deactivated successfully.
Oct 14 09:55:04 compute-0 sudo[435151]: pam_unix(sudo:session): session closed for user root
Oct 14 09:55:04 compute-0 sudo[435299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:55:05 compute-0 sudo[435299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:55:05 compute-0 nova_compute[259627]: 2025-10-14 09:55:05.001 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:55:05 compute-0 sudo[435299]: pam_unix(sudo:session): session closed for user root
Oct 14 09:55:05 compute-0 sudo[435324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:55:05 compute-0 sudo[435324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:55:05 compute-0 sudo[435324]: pam_unix(sudo:session): session closed for user root
Oct 14 09:55:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:05 compute-0 sudo[435349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:55:05 compute-0 sudo[435349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:55:05 compute-0 sudo[435349]: pam_unix(sudo:session): session closed for user root
Oct 14 09:55:05 compute-0 sudo[435374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:55:05 compute-0 sudo[435374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:55:05 compute-0 podman[435439]: 2025-10-14 09:55:05.649291496 +0000 UTC m=+0.057057042 container create 72ed72bad8fbc482ef089e17f28c0a54887b5db4a354e52d06768d84ade6ff3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:55:05 compute-0 systemd[1]: Started libpod-conmon-72ed72bad8fbc482ef089e17f28c0a54887b5db4a354e52d06768d84ade6ff3e.scope.
Oct 14 09:55:05 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:55:05 compute-0 podman[435439]: 2025-10-14 09:55:05.627724546 +0000 UTC m=+0.035490182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:55:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:55:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/612376368' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:55:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:55:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/612376368' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:55:05 compute-0 podman[435439]: 2025-10-14 09:55:05.736750531 +0000 UTC m=+0.144516167 container init 72ed72bad8fbc482ef089e17f28c0a54887b5db4a354e52d06768d84ade6ff3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:55:05 compute-0 podman[435439]: 2025-10-14 09:55:05.747932606 +0000 UTC m=+0.155698162 container start 72ed72bad8fbc482ef089e17f28c0a54887b5db4a354e52d06768d84ade6ff3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 09:55:05 compute-0 podman[435439]: 2025-10-14 09:55:05.751606046 +0000 UTC m=+0.159371692 container attach 72ed72bad8fbc482ef089e17f28c0a54887b5db4a354e52d06768d84ade6ff3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 09:55:05 compute-0 vigilant_blackburn[435455]: 167 167
Oct 14 09:55:05 compute-0 systemd[1]: libpod-72ed72bad8fbc482ef089e17f28c0a54887b5db4a354e52d06768d84ade6ff3e.scope: Deactivated successfully.
Oct 14 09:55:05 compute-0 podman[435439]: 2025-10-14 09:55:05.755336408 +0000 UTC m=+0.163101964 container died 72ed72bad8fbc482ef089e17f28c0a54887b5db4a354e52d06768d84ade6ff3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:55:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d25f80195547d13b1fedc54899751c3ed28466c52765cccc15cd9dcd35ebfb9a-merged.mount: Deactivated successfully.
Oct 14 09:55:05 compute-0 podman[435439]: 2025-10-14 09:55:05.801177962 +0000 UTC m=+0.208943508 container remove 72ed72bad8fbc482ef089e17f28c0a54887b5db4a354e52d06768d84ade6ff3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 09:55:05 compute-0 systemd[1]: libpod-conmon-72ed72bad8fbc482ef089e17f28c0a54887b5db4a354e52d06768d84ade6ff3e.scope: Deactivated successfully.
Oct 14 09:55:05 compute-0 podman[435458]: 2025-10-14 09:55:05.841048401 +0000 UTC m=+0.093167867 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 14 09:55:05 compute-0 podman[435468]: 2025-10-14 09:55:05.929376068 +0000 UTC m=+0.144897136 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_controller)
Oct 14 09:55:05 compute-0 podman[435516]: 2025-10-14 09:55:05.978620356 +0000 UTC m=+0.055199255 container create 3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:55:06 compute-0 systemd[1]: Started libpod-conmon-3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee.scope.
Oct 14 09:55:06 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01250ba9ee184f9ef6963586f6c688264c90cf83f61aa959bbf2515e6219613b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01250ba9ee184f9ef6963586f6c688264c90cf83f61aa959bbf2515e6219613b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01250ba9ee184f9ef6963586f6c688264c90cf83f61aa959bbf2515e6219613b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01250ba9ee184f9ef6963586f6c688264c90cf83f61aa959bbf2515e6219613b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:55:06 compute-0 podman[435516]: 2025-10-14 09:55:05.952784272 +0000 UTC m=+0.029363221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:55:06 compute-0 podman[435516]: 2025-10-14 09:55:06.057526992 +0000 UTC m=+0.134105891 container init 3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 09:55:06 compute-0 podman[435516]: 2025-10-14 09:55:06.067403455 +0000 UTC m=+0.143982354 container start 3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brattain, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:55:06 compute-0 podman[435516]: 2025-10-14 09:55:06.076108238 +0000 UTC m=+0.152687137 container attach 3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 09:55:06 compute-0 ceph-mon[74249]: pgmap v3068: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/612376368' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:55:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/612376368' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:55:06 compute-0 nova_compute[259627]: 2025-10-14 09:55:06.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:55:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:55:07.071 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:55:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:55:07.071 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:55:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:55:07.072 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:55:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:07 compute-0 nova_compute[259627]: 2025-10-14 09:55:07.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:07 compute-0 naughty_brattain[435536]: {
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:         "osd_id": 2,
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:         "type": "bluestore"
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:     },
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:         "osd_id": 1,
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:         "type": "bluestore"
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:     },
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:         "osd_id": 0,
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:         "type": "bluestore"
Oct 14 09:55:07 compute-0 naughty_brattain[435536]:     }
Oct 14 09:55:07 compute-0 naughty_brattain[435536]: }
Oct 14 09:55:07 compute-0 systemd[1]: libpod-3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee.scope: Deactivated successfully.
Oct 14 09:55:07 compute-0 systemd[1]: libpod-3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee.scope: Consumed 1.150s CPU time.
Oct 14 09:55:07 compute-0 podman[435569]: 2025-10-14 09:55:07.264923539 +0000 UTC m=+0.035073292 container died 3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brattain, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 14 09:55:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-01250ba9ee184f9ef6963586f6c688264c90cf83f61aa959bbf2515e6219613b-merged.mount: Deactivated successfully.
Oct 14 09:55:07 compute-0 podman[435569]: 2025-10-14 09:55:07.347261999 +0000 UTC m=+0.117411682 container remove 3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 09:55:07 compute-0 systemd[1]: libpod-conmon-3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee.scope: Deactivated successfully.
Oct 14 09:55:07 compute-0 sudo[435374]: pam_unix(sudo:session): session closed for user root
Oct 14 09:55:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:55:07 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:55:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:55:07 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:55:07 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev a5971cf0-5dbd-4459-ad6c-d90e68f84833 does not exist
Oct 14 09:55:07 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 14d6076f-6b0b-4e4d-bbcb-41440809bcf4 does not exist
Oct 14 09:55:07 compute-0 sudo[435584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:55:07 compute-0 sudo[435584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:55:07 compute-0 sudo[435584]: pam_unix(sudo:session): session closed for user root
Oct 14 09:55:07 compute-0 sudo[435609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:55:07 compute-0 sudo[435609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:55:07 compute-0 sudo[435609]: pam_unix(sudo:session): session closed for user root
Oct 14 09:55:08 compute-0 ceph-mon[74249]: pgmap v3069: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:08 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:55:08 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:55:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:55:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:09 compute-0 nova_compute[259627]: 2025-10-14 09:55:09.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:10 compute-0 ceph-mon[74249]: pgmap v3070: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:10 compute-0 nova_compute[259627]: 2025-10-14 09:55:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:55:11 compute-0 nova_compute[259627]: 2025-10-14 09:55:11.026 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:55:11 compute-0 nova_compute[259627]: 2025-10-14 09:55:11.027 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:55:11 compute-0 nova_compute[259627]: 2025-10-14 09:55:11.027 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:55:11 compute-0 nova_compute[259627]: 2025-10-14 09:55:11.028 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:55:11 compute-0 nova_compute[259627]: 2025-10-14 09:55:11.028 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:55:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:11 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:55:11 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3465612663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:55:11 compute-0 nova_compute[259627]: 2025-10-14 09:55:11.515 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:55:11 compute-0 nova_compute[259627]: 2025-10-14 09:55:11.691 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:55:11 compute-0 nova_compute[259627]: 2025-10-14 09:55:11.692 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3569MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:55:11 compute-0 nova_compute[259627]: 2025-10-14 09:55:11.692 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:55:11 compute-0 nova_compute[259627]: 2025-10-14 09:55:11.692 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:55:11 compute-0 nova_compute[259627]: 2025-10-14 09:55:11.785 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:55:11 compute-0 nova_compute[259627]: 2025-10-14 09:55:11.785 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:55:11 compute-0 nova_compute[259627]: 2025-10-14 09:55:11.803 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 14 09:55:11 compute-0 nova_compute[259627]: 2025-10-14 09:55:11.831 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 14 09:55:11 compute-0 nova_compute[259627]: 2025-10-14 09:55:11.831 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 09:55:11 compute-0 nova_compute[259627]: 2025-10-14 09:55:11.849 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 14 09:55:11 compute-0 nova_compute[259627]: 2025-10-14 09:55:11.884 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 14 09:55:11 compute-0 nova_compute[259627]: 2025-10-14 09:55:11.917 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:55:12 compute-0 nova_compute[259627]: 2025-10-14 09:55:12.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:12 compute-0 ceph-mon[74249]: pgmap v3071: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:12 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3465612663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:55:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:55:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3602668938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:55:12 compute-0 nova_compute[259627]: 2025-10-14 09:55:12.396 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:55:12 compute-0 nova_compute[259627]: 2025-10-14 09:55:12.404 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:55:12 compute-0 nova_compute[259627]: 2025-10-14 09:55:12.436 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:55:12 compute-0 nova_compute[259627]: 2025-10-14 09:55:12.439 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:55:12 compute-0 nova_compute[259627]: 2025-10-14 09:55:12.439 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:55:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3602668938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:55:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:55:13 compute-0 nova_compute[259627]: 2025-10-14 09:55:13.435 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:55:13 compute-0 nova_compute[259627]: 2025-10-14 09:55:13.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:55:14 compute-0 ceph-mon[74249]: pgmap v3072: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:14 compute-0 nova_compute[259627]: 2025-10-14 09:55:14.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:14 compute-0 nova_compute[259627]: 2025-10-14 09:55:14.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:55:14 compute-0 nova_compute[259627]: 2025-10-14 09:55:14.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:55:14 compute-0 nova_compute[259627]: 2025-10-14 09:55:14.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:55:15 compute-0 nova_compute[259627]: 2025-10-14 09:55:15.006 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:55:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:16 compute-0 ceph-mon[74249]: pgmap v3073: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:16 compute-0 nova_compute[259627]: 2025-10-14 09:55:16.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:55:16 compute-0 nova_compute[259627]: 2025-10-14 09:55:16.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:55:16 compute-0 nova_compute[259627]: 2025-10-14 09:55:16.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:55:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:17 compute-0 nova_compute[259627]: 2025-10-14 09:55:17.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:17 compute-0 ceph-mon[74249]: pgmap v3074: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:55:18 compute-0 nova_compute[259627]: 2025-10-14 09:55:18.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:55:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:19 compute-0 nova_compute[259627]: 2025-10-14 09:55:19.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:20 compute-0 ceph-mon[74249]: pgmap v3075: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:22 compute-0 nova_compute[259627]: 2025-10-14 09:55:22.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:22 compute-0 ceph-mon[74249]: pgmap v3076: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:55:23 compute-0 podman[435678]: 2025-10-14 09:55:23.687077248 +0000 UTC m=+0.090336378 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 14 09:55:23 compute-0 podman[435679]: 2025-10-14 09:55:23.695421143 +0000 UTC m=+0.102010135 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, tcib_managed=true)
Oct 14 09:55:24 compute-0 ceph-mon[74249]: pgmap v3077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:24 compute-0 nova_compute[259627]: 2025-10-14 09:55:24.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:26 compute-0 ceph-mon[74249]: pgmap v3078: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:27 compute-0 nova_compute[259627]: 2025-10-14 09:55:27.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:28 compute-0 ceph-mon[74249]: pgmap v3079: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:55:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:29 compute-0 nova_compute[259627]: 2025-10-14 09:55:29.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:30 compute-0 ceph-mon[74249]: pgmap v3080: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:32 compute-0 nova_compute[259627]: 2025-10-14 09:55:32.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:32 compute-0 ceph-mon[74249]: pgmap v3081: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:55:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:55:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:55:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:55:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:55:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:55:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:55:32
Oct 14 09:55:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:55:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:55:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'volumes', 'default.rgw.control', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'images']
Oct 14 09:55:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:55:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:55:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:55:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:55:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:55:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:55:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:55:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:55:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:55:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:55:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:55:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:55:34 compute-0 ceph-mon[74249]: pgmap v3082: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:34 compute-0 nova_compute[259627]: 2025-10-14 09:55:34.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:36 compute-0 ceph-mon[74249]: pgmap v3083: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:36 compute-0 podman[435716]: 2025-10-14 09:55:36.708058823 +0000 UTC m=+0.102752782 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:55:36 compute-0 podman[435715]: 2025-10-14 09:55:36.720422306 +0000 UTC m=+0.119181275 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 14 09:55:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:37 compute-0 nova_compute[259627]: 2025-10-14 09:55:37.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:38 compute-0 ceph-mon[74249]: pgmap v3084: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:55:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:39 compute-0 nova_compute[259627]: 2025-10-14 09:55:39.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:40 compute-0 ceph-mon[74249]: pgmap v3085: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:42 compute-0 nova_compute[259627]: 2025-10-14 09:55:42.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:42 compute-0 ceph-mon[74249]: pgmap v3086: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:55:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:55:44 compute-0 ceph-mon[74249]: pgmap v3087: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:44 compute-0 nova_compute[259627]: 2025-10-14 09:55:44.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:46 compute-0 ceph-mon[74249]: pgmap v3088: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:47 compute-0 nova_compute[259627]: 2025-10-14 09:55:47.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:47 compute-0 ceph-mon[74249]: pgmap v3089: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:55:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:49 compute-0 nova_compute[259627]: 2025-10-14 09:55:49.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:50 compute-0 ceph-mon[74249]: pgmap v3090: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:52 compute-0 nova_compute[259627]: 2025-10-14 09:55:52.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:52 compute-0 ceph-mon[74249]: pgmap v3091: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:55:54 compute-0 ceph-mon[74249]: pgmap v3092: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:54 compute-0 podman[435762]: 2025-10-14 09:55:54.686399958 +0000 UTC m=+0.087935368 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 14 09:55:54 compute-0 podman[435763]: 2025-10-14 09:55:54.686529972 +0000 UTC m=+0.085384667 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 09:55:54 compute-0 nova_compute[259627]: 2025-10-14 09:55:54.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:56 compute-0 ceph-mon[74249]: pgmap v3093: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:57 compute-0 nova_compute[259627]: 2025-10-14 09:55:57.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:55:58 compute-0 ceph-mon[74249]: pgmap v3094: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:55:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:55:59 compute-0 nova_compute[259627]: 2025-10-14 09:55:59.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:00 compute-0 ceph-mon[74249]: pgmap v3095: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:02 compute-0 nova_compute[259627]: 2025-10-14 09:56:02.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:02 compute-0 ceph-mon[74249]: pgmap v3096: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:56:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:56:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:56:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:56:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:56:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:56:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:56:04 compute-0 ceph-mon[74249]: pgmap v3097: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:04 compute-0 nova_compute[259627]: 2025-10-14 09:56:04.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:56:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1488486584' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:56:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:56:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1488486584' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:56:05 compute-0 nova_compute[259627]: 2025-10-14 09:56:05.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:56:06 compute-0 ceph-mon[74249]: pgmap v3098: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1488486584' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:56:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1488486584' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:56:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:56:07.072 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:56:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:56:07.072 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:56:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:56:07.072 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:56:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:07 compute-0 nova_compute[259627]: 2025-10-14 09:56:07.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:07 compute-0 podman[435800]: 2025-10-14 09:56:07.207831107 +0000 UTC m=+0.084361261 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 09:56:07 compute-0 podman[435799]: 2025-10-14 09:56:07.254277867 +0000 UTC m=+0.136271475 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:56:07 compute-0 sudo[435844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:56:07 compute-0 sudo[435844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:07 compute-0 sudo[435844]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:07 compute-0 sudo[435869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:56:07 compute-0 sudo[435869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:07 compute-0 sudo[435869]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:07 compute-0 sudo[435894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:56:07 compute-0 sudo[435894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:07 compute-0 sudo[435894]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:07 compute-0 sudo[435919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 14 09:56:07 compute-0 sudo[435919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:07 compute-0 nova_compute[259627]: 2025-10-14 09:56:07.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:56:08 compute-0 sudo[435919]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:56:08 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:56:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:56:08 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:56:08 compute-0 ceph-mon[74249]: pgmap v3099: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:08 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:56:08 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:56:08 compute-0 sudo[435965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:56:08 compute-0 sudo[435965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:08 compute-0 sudo[435965]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:08 compute-0 sudo[435990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:56:08 compute-0 sudo[435990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:08 compute-0 sudo[435990]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:56:08 compute-0 sudo[436015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:56:08 compute-0 sudo[436015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:08 compute-0 sudo[436015]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:08 compute-0 sudo[436040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:56:08 compute-0 sudo[436040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:09 compute-0 sudo[436040]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:09 compute-0 sudo[436098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:56:09 compute-0 sudo[436098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:09 compute-0 sudo[436098]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:09 compute-0 sudo[436123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:56:09 compute-0 sudo[436123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:09 compute-0 sudo[436123]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:09 compute-0 sudo[436148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:56:09 compute-0 sudo[436148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:09 compute-0 sudo[436148]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:09 compute-0 sudo[436173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- inventory --format=json-pretty --filter-for-batch
Oct 14 09:56:09 compute-0 sudo[436173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:09 compute-0 nova_compute[259627]: 2025-10-14 09:56:09.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:09 compute-0 podman[436237]: 2025-10-14 09:56:09.961675528 +0000 UTC m=+0.057233235 container create 08f5e663375151f44a1ef7adc3181d4df432048e40255c0d8f284b738de18dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 14 09:56:10 compute-0 systemd[1]: Started libpod-conmon-08f5e663375151f44a1ef7adc3181d4df432048e40255c0d8f284b738de18dd4.scope.
Oct 14 09:56:10 compute-0 podman[436237]: 2025-10-14 09:56:09.941578535 +0000 UTC m=+0.037136302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:56:10 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:56:10 compute-0 podman[436237]: 2025-10-14 09:56:10.062734838 +0000 UTC m=+0.158292545 container init 08f5e663375151f44a1ef7adc3181d4df432048e40255c0d8f284b738de18dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 09:56:10 compute-0 podman[436237]: 2025-10-14 09:56:10.077221293 +0000 UTC m=+0.172779010 container start 08f5e663375151f44a1ef7adc3181d4df432048e40255c0d8f284b738de18dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_blackburn, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:56:10 compute-0 podman[436237]: 2025-10-14 09:56:10.081284483 +0000 UTC m=+0.176842200 container attach 08f5e663375151f44a1ef7adc3181d4df432048e40255c0d8f284b738de18dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_blackburn, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:56:10 compute-0 adoring_blackburn[436253]: 167 167
Oct 14 09:56:10 compute-0 systemd[1]: libpod-08f5e663375151f44a1ef7adc3181d4df432048e40255c0d8f284b738de18dd4.scope: Deactivated successfully.
Oct 14 09:56:10 compute-0 podman[436237]: 2025-10-14 09:56:10.086887971 +0000 UTC m=+0.182445678 container died 08f5e663375151f44a1ef7adc3181d4df432048e40255c0d8f284b738de18dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_blackburn, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 09:56:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8995959ca183b817d05bcbf236b055b40b8fc69e7c732d2cdd4b70985f08dda9-merged.mount: Deactivated successfully.
Oct 14 09:56:10 compute-0 podman[436237]: 2025-10-14 09:56:10.137100573 +0000 UTC m=+0.232658240 container remove 08f5e663375151f44a1ef7adc3181d4df432048e40255c0d8f284b738de18dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_blackburn, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:56:10 compute-0 systemd[1]: libpod-conmon-08f5e663375151f44a1ef7adc3181d4df432048e40255c0d8f284b738de18dd4.scope: Deactivated successfully.
Oct 14 09:56:10 compute-0 ceph-mon[74249]: pgmap v3100: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:10 compute-0 podman[436278]: 2025-10-14 09:56:10.368572382 +0000 UTC m=+0.055730708 container create feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cori, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:56:10 compute-0 systemd[1]: Started libpod-conmon-feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7.scope.
Oct 14 09:56:10 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:56:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c462afa7987f5ba720ea5a6c316f45e0fa63588a826be33e148db8d71e84df1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:56:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c462afa7987f5ba720ea5a6c316f45e0fa63588a826be33e148db8d71e84df1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:56:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c462afa7987f5ba720ea5a6c316f45e0fa63588a826be33e148db8d71e84df1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:56:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c462afa7987f5ba720ea5a6c316f45e0fa63588a826be33e148db8d71e84df1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:56:10 compute-0 podman[436278]: 2025-10-14 09:56:10.342530063 +0000 UTC m=+0.029688469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:56:10 compute-0 podman[436278]: 2025-10-14 09:56:10.447993551 +0000 UTC m=+0.135151877 container init feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cori, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 09:56:10 compute-0 podman[436278]: 2025-10-14 09:56:10.454234344 +0000 UTC m=+0.141392650 container start feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cori, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:56:10 compute-0 podman[436278]: 2025-10-14 09:56:10.457612217 +0000 UTC m=+0.144770543 container attach feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct 14 09:56:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:11 compute-0 nova_compute[259627]: 2025-10-14 09:56:11.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:56:12 compute-0 great_cori[436294]: [
Oct 14 09:56:12 compute-0 great_cori[436294]:     {
Oct 14 09:56:12 compute-0 great_cori[436294]:         "available": false,
Oct 14 09:56:12 compute-0 great_cori[436294]:         "ceph_device": false,
Oct 14 09:56:12 compute-0 great_cori[436294]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 14 09:56:12 compute-0 great_cori[436294]:         "lsm_data": {},
Oct 14 09:56:12 compute-0 great_cori[436294]:         "lvs": [],
Oct 14 09:56:12 compute-0 great_cori[436294]:         "path": "/dev/sr0",
Oct 14 09:56:12 compute-0 great_cori[436294]:         "rejected_reasons": [
Oct 14 09:56:12 compute-0 great_cori[436294]:             "Insufficient space (<5GB)",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "Has a FileSystem"
Oct 14 09:56:12 compute-0 great_cori[436294]:         ],
Oct 14 09:56:12 compute-0 great_cori[436294]:         "sys_api": {
Oct 14 09:56:12 compute-0 great_cori[436294]:             "actuators": null,
Oct 14 09:56:12 compute-0 great_cori[436294]:             "device_nodes": "sr0",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "devname": "sr0",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "human_readable_size": "482.00 KB",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "id_bus": "ata",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "model": "QEMU DVD-ROM",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "nr_requests": "2",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "parent": "/dev/sr0",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "partitions": {},
Oct 14 09:56:12 compute-0 great_cori[436294]:             "path": "/dev/sr0",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "removable": "1",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "rev": "2.5+",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "ro": "0",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "rotational": "0",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "sas_address": "",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "sas_device_handle": "",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "scheduler_mode": "mq-deadline",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "sectors": 0,
Oct 14 09:56:12 compute-0 great_cori[436294]:             "sectorsize": "2048",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "size": 493568.0,
Oct 14 09:56:12 compute-0 great_cori[436294]:             "support_discard": "2048",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "type": "disk",
Oct 14 09:56:12 compute-0 great_cori[436294]:             "vendor": "QEMU"
Oct 14 09:56:12 compute-0 great_cori[436294]:         }
Oct 14 09:56:12 compute-0 great_cori[436294]:     }
Oct 14 09:56:12 compute-0 great_cori[436294]: ]
Oct 14 09:56:12 compute-0 nova_compute[259627]: 2025-10-14 09:56:12.045 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:56:12 compute-0 nova_compute[259627]: 2025-10-14 09:56:12.046 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:56:12 compute-0 nova_compute[259627]: 2025-10-14 09:56:12.046 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:56:12 compute-0 nova_compute[259627]: 2025-10-14 09:56:12.047 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:56:12 compute-0 nova_compute[259627]: 2025-10-14 09:56:12.047 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:56:12 compute-0 systemd[1]: libpod-feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7.scope: Deactivated successfully.
Oct 14 09:56:12 compute-0 systemd[1]: libpod-feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7.scope: Consumed 1.672s CPU time.
Oct 14 09:56:12 compute-0 podman[438503]: 2025-10-14 09:56:12.134489983 +0000 UTC m=+0.045099017 container died feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cori, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:56:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c462afa7987f5ba720ea5a6c316f45e0fa63588a826be33e148db8d71e84df1-merged.mount: Deactivated successfully.
Oct 14 09:56:12 compute-0 nova_compute[259627]: 2025-10-14 09:56:12.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:12 compute-0 podman[438503]: 2025-10-14 09:56:12.208997691 +0000 UTC m=+0.119606625 container remove feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cori, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:56:12 compute-0 systemd[1]: libpod-conmon-feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7.scope: Deactivated successfully.
Oct 14 09:56:12 compute-0 sudo[436173]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:56:12 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:56:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:56:12 compute-0 ceph-mon[74249]: pgmap v3101: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:12 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:56:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 14 09:56:12 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 09:56:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:56:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:56:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:56:12 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:56:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:56:12 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:56:12 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 45670223-0286-4f9a-9df7-5432b65b3659 does not exist
Oct 14 09:56:12 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 2d232ea3-188e-4c9c-b268-8959765e6764 does not exist
Oct 14 09:56:12 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev eee15c4e-31e4-4959-ab0f-15c2629977dd does not exist
Oct 14 09:56:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:56:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:56:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:56:12 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:56:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:56:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:56:12 compute-0 sudo[438536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:56:12 compute-0 sudo[438536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:12 compute-0 sudo[438536]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:12 compute-0 sudo[438561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:56:12 compute-0 sudo[438561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:12 compute-0 sudo[438561]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:12 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:56:12 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2794091761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:56:12 compute-0 nova_compute[259627]: 2025-10-14 09:56:12.530 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:56:12 compute-0 sudo[438586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:56:12 compute-0 sudo[438586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:12 compute-0 sudo[438586]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:12 compute-0 sudo[438613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:56:12 compute-0 sudo[438613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:12 compute-0 nova_compute[259627]: 2025-10-14 09:56:12.745 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:56:12 compute-0 nova_compute[259627]: 2025-10-14 09:56:12.746 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3559MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:56:12 compute-0 nova_compute[259627]: 2025-10-14 09:56:12.746 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:56:12 compute-0 nova_compute[259627]: 2025-10-14 09:56:12.747 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:56:12 compute-0 nova_compute[259627]: 2025-10-14 09:56:12.833 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:56:12 compute-0 nova_compute[259627]: 2025-10-14 09:56:12.834 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:56:12 compute-0 nova_compute[259627]: 2025-10-14 09:56:12.854 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:56:13 compute-0 podman[438680]: 2025-10-14 09:56:13.085324793 +0000 UTC m=+0.064319409 container create cf5f9f8bca3779541cf908d9966188ececde25b6bbed26cf4876ebfe50b97130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:56:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:13 compute-0 systemd[1]: Started libpod-conmon-cf5f9f8bca3779541cf908d9966188ececde25b6bbed26cf4876ebfe50b97130.scope.
Oct 14 09:56:13 compute-0 podman[438680]: 2025-10-14 09:56:13.058114426 +0000 UTC m=+0.037109082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:56:13 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:56:13 compute-0 podman[438680]: 2025-10-14 09:56:13.192188165 +0000 UTC m=+0.171182811 container init cf5f9f8bca3779541cf908d9966188ececde25b6bbed26cf4876ebfe50b97130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dijkstra, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 09:56:13 compute-0 podman[438680]: 2025-10-14 09:56:13.202044757 +0000 UTC m=+0.181039363 container start cf5f9f8bca3779541cf908d9966188ececde25b6bbed26cf4876ebfe50b97130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 09:56:13 compute-0 podman[438680]: 2025-10-14 09:56:13.206137268 +0000 UTC m=+0.185131884 container attach cf5f9f8bca3779541cf908d9966188ececde25b6bbed26cf4876ebfe50b97130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dijkstra, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 09:56:13 compute-0 wonderful_dijkstra[438714]: 167 167
Oct 14 09:56:13 compute-0 systemd[1]: libpod-cf5f9f8bca3779541cf908d9966188ececde25b6bbed26cf4876ebfe50b97130.scope: Deactivated successfully.
Oct 14 09:56:13 compute-0 podman[438680]: 2025-10-14 09:56:13.21111361 +0000 UTC m=+0.190108226 container died cf5f9f8bca3779541cf908d9966188ececde25b6bbed26cf4876ebfe50b97130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dijkstra, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 09:56:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-57a04142650eff79b844d7ad90e7f379daac20c5135719a6d1f1ccfb5a4b7c05-merged.mount: Deactivated successfully.
Oct 14 09:56:13 compute-0 podman[438680]: 2025-10-14 09:56:13.262928161 +0000 UTC m=+0.241922767 container remove cf5f9f8bca3779541cf908d9966188ececde25b6bbed26cf4876ebfe50b97130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dijkstra, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:56:13 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:56:13 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:56:13 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 09:56:13 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:56:13 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:56:13 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:56:13 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:56:13 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:56:13 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:56:13 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2794091761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:56:13 compute-0 systemd[1]: libpod-conmon-cf5f9f8bca3779541cf908d9966188ececde25b6bbed26cf4876ebfe50b97130.scope: Deactivated successfully.
Oct 14 09:56:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:56:13 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1143012848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:56:13 compute-0 nova_compute[259627]: 2025-10-14 09:56:13.314 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:56:13 compute-0 nova_compute[259627]: 2025-10-14 09:56:13.324 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:56:13 compute-0 nova_compute[259627]: 2025-10-14 09:56:13.347 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:56:13 compute-0 nova_compute[259627]: 2025-10-14 09:56:13.352 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:56:13 compute-0 nova_compute[259627]: 2025-10-14 09:56:13.352 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:56:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:56:13 compute-0 podman[438740]: 2025-10-14 09:56:13.481114025 +0000 UTC m=+0.068892062 container create c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 09:56:13 compute-0 systemd[1]: Started libpod-conmon-c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53.scope.
Oct 14 09:56:13 compute-0 podman[438740]: 2025-10-14 09:56:13.452578265 +0000 UTC m=+0.040356352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:56:13 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:56:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247f258b24f7a3878dd8eaa5a5fe835d8a00ad6f09bb2c21febe53d2294a783d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:56:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247f258b24f7a3878dd8eaa5a5fe835d8a00ad6f09bb2c21febe53d2294a783d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:56:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247f258b24f7a3878dd8eaa5a5fe835d8a00ad6f09bb2c21febe53d2294a783d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:56:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247f258b24f7a3878dd8eaa5a5fe835d8a00ad6f09bb2c21febe53d2294a783d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:56:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247f258b24f7a3878dd8eaa5a5fe835d8a00ad6f09bb2c21febe53d2294a783d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:56:13 compute-0 podman[438740]: 2025-10-14 09:56:13.586578833 +0000 UTC m=+0.174356870 container init c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:56:13 compute-0 podman[438740]: 2025-10-14 09:56:13.600243258 +0000 UTC m=+0.188021305 container start c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 09:56:13 compute-0 podman[438740]: 2025-10-14 09:56:13.60561028 +0000 UTC m=+0.193388327 container attach c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 14 09:56:14 compute-0 ceph-mon[74249]: pgmap v3102: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:14 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1143012848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:56:14 compute-0 awesome_jang[438757]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:56:14 compute-0 awesome_jang[438757]: --> relative data size: 1.0
Oct 14 09:56:14 compute-0 awesome_jang[438757]: --> All data devices are unavailable
Oct 14 09:56:14 compute-0 systemd[1]: libpod-c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53.scope: Deactivated successfully.
Oct 14 09:56:14 compute-0 podman[438740]: 2025-10-14 09:56:14.72146428 +0000 UTC m=+1.309242327 container died c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:56:14 compute-0 systemd[1]: libpod-c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53.scope: Consumed 1.084s CPU time.
Oct 14 09:56:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-247f258b24f7a3878dd8eaa5a5fe835d8a00ad6f09bb2c21febe53d2294a783d-merged.mount: Deactivated successfully.
Oct 14 09:56:14 compute-0 podman[438740]: 2025-10-14 09:56:14.792862872 +0000 UTC m=+1.380640919 container remove c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 14 09:56:14 compute-0 systemd[1]: libpod-conmon-c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53.scope: Deactivated successfully.
Oct 14 09:56:14 compute-0 sudo[438613]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:14 compute-0 nova_compute[259627]: 2025-10-14 09:56:14.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:14 compute-0 sudo[438801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:56:14 compute-0 sudo[438801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:14 compute-0 sudo[438801]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:15 compute-0 sudo[438826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:56:15 compute-0 sudo[438826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:15 compute-0 sudo[438826]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:15 compute-0 sudo[438851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:56:15 compute-0 sudo[438851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:15 compute-0 sudo[438851]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:15 compute-0 sudo[438876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:56:15 compute-0 sudo[438876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:15 compute-0 nova_compute[259627]: 2025-10-14 09:56:15.349 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:56:15 compute-0 podman[438941]: 2025-10-14 09:56:15.707473624 +0000 UTC m=+0.065002426 container create 18f7c01fb03570927b978606d5d3b365111e3e31f6f340e2db99258b4f0e26fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 09:56:15 compute-0 systemd[1]: Started libpod-conmon-18f7c01fb03570927b978606d5d3b365111e3e31f6f340e2db99258b4f0e26fd.scope.
Oct 14 09:56:15 compute-0 podman[438941]: 2025-10-14 09:56:15.6861222 +0000 UTC m=+0.043651012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:56:15 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:56:15 compute-0 podman[438941]: 2025-10-14 09:56:15.809581839 +0000 UTC m=+0.167110691 container init 18f7c01fb03570927b978606d5d3b365111e3e31f6f340e2db99258b4f0e26fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wescoff, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:56:15 compute-0 podman[438941]: 2025-10-14 09:56:15.822845205 +0000 UTC m=+0.180374007 container start 18f7c01fb03570927b978606d5d3b365111e3e31f6f340e2db99258b4f0e26fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wescoff, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 09:56:15 compute-0 podman[438941]: 2025-10-14 09:56:15.826988916 +0000 UTC m=+0.184517778 container attach 18f7c01fb03570927b978606d5d3b365111e3e31f6f340e2db99258b4f0e26fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:56:15 compute-0 dreamy_wescoff[438957]: 167 167
Oct 14 09:56:15 compute-0 systemd[1]: libpod-18f7c01fb03570927b978606d5d3b365111e3e31f6f340e2db99258b4f0e26fd.scope: Deactivated successfully.
Oct 14 09:56:15 compute-0 podman[438941]: 2025-10-14 09:56:15.832680226 +0000 UTC m=+0.190209028 container died 18f7c01fb03570927b978606d5d3b365111e3e31f6f340e2db99258b4f0e26fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:56:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcf2a79b7a80a839367cc846d14e9852f6a3a499fb14fdc5c9532ab7803cb89e-merged.mount: Deactivated successfully.
Oct 14 09:56:15 compute-0 podman[438941]: 2025-10-14 09:56:15.888142326 +0000 UTC m=+0.245671128 container remove 18f7c01fb03570927b978606d5d3b365111e3e31f6f340e2db99258b4f0e26fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 09:56:15 compute-0 systemd[1]: libpod-conmon-18f7c01fb03570927b978606d5d3b365111e3e31f6f340e2db99258b4f0e26fd.scope: Deactivated successfully.
Oct 14 09:56:15 compute-0 nova_compute[259627]: 2025-10-14 09:56:15.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:56:16 compute-0 podman[438982]: 2025-10-14 09:56:16.11816658 +0000 UTC m=+0.058341753 container create e198473ca395fea5a849f07b247b39e0369103e673c4dcc55f4aa2f5e9d3b524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:56:16 compute-0 systemd[1]: Started libpod-conmon-e198473ca395fea5a849f07b247b39e0369103e673c4dcc55f4aa2f5e9d3b524.scope.
Oct 14 09:56:16 compute-0 podman[438982]: 2025-10-14 09:56:16.092734926 +0000 UTC m=+0.032910179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:56:16 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/356b40048ef67cd0cd8813c35e5e625549fc56ca106517e0c170cb2e381778f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/356b40048ef67cd0cd8813c35e5e625549fc56ca106517e0c170cb2e381778f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/356b40048ef67cd0cd8813c35e5e625549fc56ca106517e0c170cb2e381778f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/356b40048ef67cd0cd8813c35e5e625549fc56ca106517e0c170cb2e381778f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:56:16 compute-0 podman[438982]: 2025-10-14 09:56:16.213422307 +0000 UTC m=+0.153597560 container init e198473ca395fea5a849f07b247b39e0369103e673c4dcc55f4aa2f5e9d3b524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 09:56:16 compute-0 podman[438982]: 2025-10-14 09:56:16.229559833 +0000 UTC m=+0.169734986 container start e198473ca395fea5a849f07b247b39e0369103e673c4dcc55f4aa2f5e9d3b524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_poincare, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:56:16 compute-0 podman[438982]: 2025-10-14 09:56:16.233771977 +0000 UTC m=+0.173947230 container attach e198473ca395fea5a849f07b247b39e0369103e673c4dcc55f4aa2f5e9d3b524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 09:56:16 compute-0 ceph-mon[74249]: pgmap v3103: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:16 compute-0 tender_poincare[438999]: {
Oct 14 09:56:16 compute-0 tender_poincare[438999]:     "0": [
Oct 14 09:56:16 compute-0 tender_poincare[438999]:         {
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "devices": [
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "/dev/loop3"
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             ],
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "lv_name": "ceph_lv0",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "lv_size": "21470642176",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "name": "ceph_lv0",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "tags": {
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.cluster_name": "ceph",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.crush_device_class": "",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.encrypted": "0",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.osd_id": "0",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.type": "block",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.vdo": "0"
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             },
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "type": "block",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "vg_name": "ceph_vg0"
Oct 14 09:56:16 compute-0 tender_poincare[438999]:         }
Oct 14 09:56:16 compute-0 tender_poincare[438999]:     ],
Oct 14 09:56:16 compute-0 tender_poincare[438999]:     "1": [
Oct 14 09:56:16 compute-0 tender_poincare[438999]:         {
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "devices": [
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "/dev/loop4"
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             ],
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "lv_name": "ceph_lv1",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "lv_size": "21470642176",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "name": "ceph_lv1",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "tags": {
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.cluster_name": "ceph",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.crush_device_class": "",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.encrypted": "0",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.osd_id": "1",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.type": "block",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.vdo": "0"
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             },
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "type": "block",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "vg_name": "ceph_vg1"
Oct 14 09:56:16 compute-0 tender_poincare[438999]:         }
Oct 14 09:56:16 compute-0 tender_poincare[438999]:     ],
Oct 14 09:56:16 compute-0 tender_poincare[438999]:     "2": [
Oct 14 09:56:16 compute-0 tender_poincare[438999]:         {
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "devices": [
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "/dev/loop5"
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             ],
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "lv_name": "ceph_lv2",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "lv_size": "21470642176",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "name": "ceph_lv2",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "tags": {
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.cluster_name": "ceph",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.crush_device_class": "",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.encrypted": "0",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.osd_id": "2",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.type": "block",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:                 "ceph.vdo": "0"
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             },
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "type": "block",
Oct 14 09:56:16 compute-0 tender_poincare[438999]:             "vg_name": "ceph_vg2"
Oct 14 09:56:16 compute-0 tender_poincare[438999]:         }
Oct 14 09:56:16 compute-0 tender_poincare[438999]:     ]
Oct 14 09:56:16 compute-0 tender_poincare[438999]: }
Oct 14 09:56:16 compute-0 nova_compute[259627]: 2025-10-14 09:56:16.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:56:16 compute-0 nova_compute[259627]: 2025-10-14 09:56:16.981 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:56:16 compute-0 nova_compute[259627]: 2025-10-14 09:56:16.981 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:56:16 compute-0 systemd[1]: libpod-e198473ca395fea5a849f07b247b39e0369103e673c4dcc55f4aa2f5e9d3b524.scope: Deactivated successfully.
Oct 14 09:56:16 compute-0 podman[438982]: 2025-10-14 09:56:16.983587965 +0000 UTC m=+0.923763158 container died e198473ca395fea5a849f07b247b39e0369103e673c4dcc55f4aa2f5e9d3b524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:56:16 compute-0 nova_compute[259627]: 2025-10-14 09:56:16.998 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:56:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-356b40048ef67cd0cd8813c35e5e625549fc56ca106517e0c170cb2e381778f7-merged.mount: Deactivated successfully.
Oct 14 09:56:17 compute-0 podman[438982]: 2025-10-14 09:56:17.053040559 +0000 UTC m=+0.993215722 container remove e198473ca395fea5a849f07b247b39e0369103e673c4dcc55f4aa2f5e9d3b524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_poincare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:56:17 compute-0 systemd[1]: libpod-conmon-e198473ca395fea5a849f07b247b39e0369103e673c4dcc55f4aa2f5e9d3b524.scope: Deactivated successfully.
Oct 14 09:56:17 compute-0 sudo[438876]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:17 compute-0 sudo[439021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:56:17 compute-0 nova_compute[259627]: 2025-10-14 09:56:17.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:17 compute-0 sudo[439021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:17 compute-0 sudo[439021]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:17 compute-0 sudo[439046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:56:17 compute-0 sudo[439046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:17 compute-0 sudo[439046]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:17 compute-0 sudo[439071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:56:17 compute-0 sudo[439071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:17 compute-0 sudo[439071]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:17 compute-0 sudo[439096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:56:17 compute-0 sudo[439096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:18 compute-0 podman[439163]: 2025-10-14 09:56:18.010456612 +0000 UTC m=+0.066844111 container create 3f6509e18ed236da8d1c69bcc1b6b974d460f015cce7f6de8fa23dcba81bdb27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_banach, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:56:18 compute-0 systemd[1]: Started libpod-conmon-3f6509e18ed236da8d1c69bcc1b6b974d460f015cce7f6de8fa23dcba81bdb27.scope.
Oct 14 09:56:18 compute-0 podman[439163]: 2025-10-14 09:56:17.984496055 +0000 UTC m=+0.040883604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:56:18 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:56:18 compute-0 podman[439163]: 2025-10-14 09:56:18.112593728 +0000 UTC m=+0.168981257 container init 3f6509e18ed236da8d1c69bcc1b6b974d460f015cce7f6de8fa23dcba81bdb27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 09:56:18 compute-0 podman[439163]: 2025-10-14 09:56:18.12490133 +0000 UTC m=+0.181288819 container start 3f6509e18ed236da8d1c69bcc1b6b974d460f015cce7f6de8fa23dcba81bdb27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:56:18 compute-0 podman[439163]: 2025-10-14 09:56:18.129710018 +0000 UTC m=+0.186097577 container attach 3f6509e18ed236da8d1c69bcc1b6b974d460f015cce7f6de8fa23dcba81bdb27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 14 09:56:18 compute-0 unruffled_banach[439179]: 167 167
Oct 14 09:56:18 compute-0 systemd[1]: libpod-3f6509e18ed236da8d1c69bcc1b6b974d460f015cce7f6de8fa23dcba81bdb27.scope: Deactivated successfully.
Oct 14 09:56:18 compute-0 podman[439163]: 2025-10-14 09:56:18.132388984 +0000 UTC m=+0.188776473 container died 3f6509e18ed236da8d1c69bcc1b6b974d460f015cce7f6de8fa23dcba81bdb27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_banach, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 09:56:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cfc7418709413762eee1ec9f9c6970e50fe28c53e7b2af041e45764e8fb9af8-merged.mount: Deactivated successfully.
Oct 14 09:56:18 compute-0 podman[439163]: 2025-10-14 09:56:18.187645489 +0000 UTC m=+0.244032978 container remove 3f6509e18ed236da8d1c69bcc1b6b974d460f015cce7f6de8fa23dcba81bdb27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_banach, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:56:18 compute-0 systemd[1]: libpod-conmon-3f6509e18ed236da8d1c69bcc1b6b974d460f015cce7f6de8fa23dcba81bdb27.scope: Deactivated successfully.
Oct 14 09:56:18 compute-0 ceph-mon[74249]: pgmap v3104: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:56:18 compute-0 podman[439203]: 2025-10-14 09:56:18.434612499 +0000 UTC m=+0.061306045 container create 2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bhabha, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:56:18 compute-0 systemd[1]: Started libpod-conmon-2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2.scope.
Oct 14 09:56:18 compute-0 podman[439203]: 2025-10-14 09:56:18.412589279 +0000 UTC m=+0.039282845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:56:18 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd8ac26044ad0f1f2cb07355611b140b7e731c43d09d45e4fb6f92ece216d80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd8ac26044ad0f1f2cb07355611b140b7e731c43d09d45e4fb6f92ece216d80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd8ac26044ad0f1f2cb07355611b140b7e731c43d09d45e4fb6f92ece216d80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd8ac26044ad0f1f2cb07355611b140b7e731c43d09d45e4fb6f92ece216d80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:56:18 compute-0 podman[439203]: 2025-10-14 09:56:18.556823458 +0000 UTC m=+0.183517004 container init 2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bhabha, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:56:18 compute-0 podman[439203]: 2025-10-14 09:56:18.569050628 +0000 UTC m=+0.195744154 container start 2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:56:18 compute-0 podman[439203]: 2025-10-14 09:56:18.573000845 +0000 UTC m=+0.199694451 container attach 2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 09:56:18 compute-0 nova_compute[259627]: 2025-10-14 09:56:18.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:56:18 compute-0 nova_compute[259627]: 2025-10-14 09:56:18.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:56:18 compute-0 nova_compute[259627]: 2025-10-14 09:56:18.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:56:18 compute-0 nova_compute[259627]: 2025-10-14 09:56:18.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:56:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]: {
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:         "osd_id": 2,
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:         "type": "bluestore"
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:     },
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:         "osd_id": 1,
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:         "type": "bluestore"
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:     },
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:         "osd_id": 0,
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:         "type": "bluestore"
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]:     }
Oct 14 09:56:19 compute-0 vigorous_bhabha[439220]: }
Oct 14 09:56:19 compute-0 systemd[1]: libpod-2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2.scope: Deactivated successfully.
Oct 14 09:56:19 compute-0 podman[439203]: 2025-10-14 09:56:19.677966167 +0000 UTC m=+1.304659723 container died 2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:56:19 compute-0 systemd[1]: libpod-2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2.scope: Consumed 1.116s CPU time.
Oct 14 09:56:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cd8ac26044ad0f1f2cb07355611b140b7e731c43d09d45e4fb6f92ece216d80-merged.mount: Deactivated successfully.
Oct 14 09:56:19 compute-0 podman[439203]: 2025-10-14 09:56:19.763788473 +0000 UTC m=+1.390481999 container remove 2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bhabha, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:56:19 compute-0 systemd[1]: libpod-conmon-2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2.scope: Deactivated successfully.
Oct 14 09:56:19 compute-0 sudo[439096]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:56:19 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:56:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:56:19 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:56:19 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev b675b71b-1932-4ad1-ad0b-6d505662cff8 does not exist
Oct 14 09:56:19 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev aac4e4df-fe79-4d2b-8c03-ab9221ac66a7 does not exist
Oct 14 09:56:19 compute-0 nova_compute[259627]: 2025-10-14 09:56:19.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:19 compute-0 sudo[439264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:56:19 compute-0 sudo[439264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:19 compute-0 sudo[439264]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:20 compute-0 sudo[439289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:56:20 compute-0 sudo[439289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:56:20 compute-0 sudo[439289]: pam_unix(sudo:session): session closed for user root
Oct 14 09:56:20 compute-0 ceph-mon[74249]: pgmap v3105: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:20 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:56:20 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:56:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:22 compute-0 nova_compute[259627]: 2025-10-14 09:56:22.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:22 compute-0 ceph-mon[74249]: pgmap v3106: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:56:24 compute-0 ceph-mon[74249]: pgmap v3107: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:24 compute-0 nova_compute[259627]: 2025-10-14 09:56:24.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:25 compute-0 ceph-mon[74249]: pgmap v3108: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:25 compute-0 podman[439315]: 2025-10-14 09:56:25.698945473 +0000 UTC m=+0.099956343 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:56:25 compute-0 podman[439314]: 2025-10-14 09:56:25.708286142 +0000 UTC m=+0.109359674 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:56:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:27 compute-0 nova_compute[259627]: 2025-10-14 09:56:27.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:28 compute-0 ceph-mon[74249]: pgmap v3109: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:56:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:29 compute-0 nova_compute[259627]: 2025-10-14 09:56:29.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:30 compute-0 ceph-mon[74249]: pgmap v3110: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:32 compute-0 ceph-mon[74249]: pgmap v3111: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:32 compute-0 nova_compute[259627]: 2025-10-14 09:56:32.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:56:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:56:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:56:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:56:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:56:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:56:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:56:32
Oct 14 09:56:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:56:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:56:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'volumes', 'images', 'cephfs.cephfs.meta', 'vms', 'backups']
Oct 14 09:56:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:56:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:56:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:56:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:56:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:56:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:56:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:56:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:56:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:56:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:56:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:56:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:56:34 compute-0 ceph-mon[74249]: pgmap v3112: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:34 compute-0 nova_compute[259627]: 2025-10-14 09:56:34.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:36 compute-0 ceph-mon[74249]: pgmap v3113: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:37 compute-0 nova_compute[259627]: 2025-10-14 09:56:37.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:37 compute-0 podman[439352]: 2025-10-14 09:56:37.695996994 +0000 UTC m=+0.093897805 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:56:37 compute-0 podman[439351]: 2025-10-14 09:56:37.734616372 +0000 UTC m=+0.147262565 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 09:56:38 compute-0 ceph-mon[74249]: pgmap v3114: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:56:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:39 compute-0 nova_compute[259627]: 2025-10-14 09:56:39.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:39 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 09:56:40 compute-0 ceph-mon[74249]: pgmap v3115: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:42 compute-0 ceph-mon[74249]: pgmap v3116: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:42 compute-0 nova_compute[259627]: 2025-10-14 09:56:42.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:56:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:56:44 compute-0 ceph-mon[74249]: pgmap v3117: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:44 compute-0 nova_compute[259627]: 2025-10-14 09:56:44.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:46 compute-0 ceph-mon[74249]: pgmap v3118: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:47 compute-0 nova_compute[259627]: 2025-10-14 09:56:47.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:48 compute-0 ceph-mon[74249]: pgmap v3119: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:56:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:49 compute-0 nova_compute[259627]: 2025-10-14 09:56:49.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:49 compute-0 nova_compute[259627]: 2025-10-14 09:56:49.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:56:50 compute-0 ceph-mon[74249]: pgmap v3120: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:52 compute-0 ceph-mon[74249]: pgmap v3121: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:52 compute-0 nova_compute[259627]: 2025-10-14 09:56:52.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:56:54 compute-0 ceph-mon[74249]: pgmap v3122: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:54 compute-0 nova_compute[259627]: 2025-10-14 09:56:54.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:56 compute-0 ceph-mon[74249]: pgmap v3123: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:56 compute-0 podman[439400]: 2025-10-14 09:56:56.664494115 +0000 UTC m=+0.074768495 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct 14 09:56:56 compute-0 podman[439401]: 2025-10-14 09:56:56.688785591 +0000 UTC m=+0.085061148 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 09:56:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:57 compute-0 nova_compute[259627]: 2025-10-14 09:56:57.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:56:58 compute-0 ceph-mon[74249]: pgmap v3124: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:56:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:56:59 compute-0 nova_compute[259627]: 2025-10-14 09:56:59.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:00 compute-0 ceph-mon[74249]: pgmap v3125: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:02 compute-0 nova_compute[259627]: 2025-10-14 09:57:02.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:02 compute-0 ceph-mon[74249]: pgmap v3126: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:57:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:57:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:57:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:57:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:57:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:57:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:57:03 compute-0 ceph-mon[74249]: pgmap v3127: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:04 compute-0 nova_compute[259627]: 2025-10-14 09:57:04.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:57:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/131583838' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:57:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:57:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/131583838' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:57:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Oct 14 09:57:06 compute-0 ceph-mon[74249]: pgmap v3128: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/131583838' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:57:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/131583838' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:57:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Oct 14 09:57:06 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Oct 14 09:57:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:57:07.073 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:57:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:57:07.073 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:57:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:57:07.074 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:57:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:07 compute-0 ceph-mon[74249]: osdmap e303: 3 total, 3 up, 3 in
Oct 14 09:57:07 compute-0 nova_compute[259627]: 2025-10-14 09:57:07.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:07 compute-0 nova_compute[259627]: 2025-10-14 09:57:07.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:57:07 compute-0 nova_compute[259627]: 2025-10-14 09:57:07.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:57:08 compute-0 ceph-mon[74249]: pgmap v3130: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:57:08 compute-0 podman[439442]: 2025-10-14 09:57:08.708005302 +0000 UTC m=+0.111541348 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 09:57:08 compute-0 podman[439441]: 2025-10-14 09:57:08.731669153 +0000 UTC m=+0.140225612 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller)
Oct 14 09:57:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Oct 14 09:57:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Oct 14 09:57:09 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Oct 14 09:57:09 compute-0 nova_compute[259627]: 2025-10-14 09:57:09.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:10 compute-0 ceph-mon[74249]: pgmap v3131: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:10 compute-0 ceph-mon[74249]: osdmap e304: 3 total, 3 up, 3 in
Oct 14 09:57:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 305 active+clean; 457 KiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 14 09:57:12 compute-0 ceph-mon[74249]: pgmap v3133: 305 pgs: 305 active+clean; 457 KiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 14 09:57:12 compute-0 nova_compute[259627]: 2025-10-14 09:57:12.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 457 KiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 14 09:57:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:57:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Oct 14 09:57:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Oct 14 09:57:13 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Oct 14 09:57:13 compute-0 nova_compute[259627]: 2025-10-14 09:57:13.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:57:14 compute-0 nova_compute[259627]: 2025-10-14 09:57:14.016 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:57:14 compute-0 nova_compute[259627]: 2025-10-14 09:57:14.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:57:14 compute-0 nova_compute[259627]: 2025-10-14 09:57:14.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:57:14 compute-0 nova_compute[259627]: 2025-10-14 09:57:14.018 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:57:14 compute-0 nova_compute[259627]: 2025-10-14 09:57:14.018 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:57:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Oct 14 09:57:14 compute-0 ceph-mon[74249]: pgmap v3134: 305 pgs: 305 active+clean; 457 KiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 14 09:57:14 compute-0 ceph-mon[74249]: osdmap e305: 3 total, 3 up, 3 in
Oct 14 09:57:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Oct 14 09:57:14 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Oct 14 09:57:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:57:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/453903008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:57:14 compute-0 nova_compute[259627]: 2025-10-14 09:57:14.517 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:57:14 compute-0 nova_compute[259627]: 2025-10-14 09:57:14.705 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:57:14 compute-0 nova_compute[259627]: 2025-10-14 09:57:14.706 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3602MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:57:14 compute-0 nova_compute[259627]: 2025-10-14 09:57:14.706 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:57:14 compute-0 nova_compute[259627]: 2025-10-14 09:57:14.706 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:57:14 compute-0 nova_compute[259627]: 2025-10-14 09:57:14.891 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:57:14 compute-0 nova_compute[259627]: 2025-10-14 09:57:14.892 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:57:14 compute-0 nova_compute[259627]: 2025-10-14 09:57:14.928 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:57:14 compute-0 nova_compute[259627]: 2025-10-14 09:57:14.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 305 active+clean; 457 KiB data, 992 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 6.7 KiB/s wr, 101 op/s
Oct 14 09:57:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:57:15 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1719453830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:57:15 compute-0 ceph-mon[74249]: osdmap e306: 3 total, 3 up, 3 in
Oct 14 09:57:15 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/453903008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:57:15 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1719453830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:57:15 compute-0 nova_compute[259627]: 2025-10-14 09:57:15.422 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:57:15 compute-0 nova_compute[259627]: 2025-10-14 09:57:15.430 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:57:15 compute-0 nova_compute[259627]: 2025-10-14 09:57:15.458 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:57:15 compute-0 nova_compute[259627]: 2025-10-14 09:57:15.461 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:57:15 compute-0 nova_compute[259627]: 2025-10-14 09:57:15.462 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:57:16 compute-0 ceph-mon[74249]: pgmap v3137: 305 pgs: 305 active+clean; 457 KiB data, 992 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 6.7 KiB/s wr, 101 op/s
Oct 14 09:57:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 2.6 MiB/s wr, 78 op/s
Oct 14 09:57:17 compute-0 nova_compute[259627]: 2025-10-14 09:57:17.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:17 compute-0 nova_compute[259627]: 2025-10-14 09:57:17.458 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:57:17 compute-0 nova_compute[259627]: 2025-10-14 09:57:17.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:57:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:57:18 compute-0 ceph-mon[74249]: pgmap v3138: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 2.6 MiB/s wr, 78 op/s
Oct 14 09:57:18 compute-0 nova_compute[259627]: 2025-10-14 09:57:18.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:57:18 compute-0 nova_compute[259627]: 2025-10-14 09:57:18.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:57:18 compute-0 nova_compute[259627]: 2025-10-14 09:57:18.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:57:19 compute-0 nova_compute[259627]: 2025-10-14 09:57:18.999 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:57:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Oct 14 09:57:19 compute-0 nova_compute[259627]: 2025-10-14 09:57:19.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:57:19 compute-0 nova_compute[259627]: 2025-10-14 09:57:19.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:20 compute-0 sudo[439528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:57:20 compute-0 sudo[439528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:57:20 compute-0 sudo[439528]: pam_unix(sudo:session): session closed for user root
Oct 14 09:57:20 compute-0 sudo[439553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:57:20 compute-0 sudo[439553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:57:20 compute-0 sudo[439553]: pam_unix(sudo:session): session closed for user root
Oct 14 09:57:20 compute-0 sudo[439578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:57:20 compute-0 sudo[439578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:57:20 compute-0 sudo[439578]: pam_unix(sudo:session): session closed for user root
Oct 14 09:57:20 compute-0 ceph-mon[74249]: pgmap v3139: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Oct 14 09:57:20 compute-0 sudo[439603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:57:20 compute-0 sudo[439603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:57:20 compute-0 sudo[439603]: pam_unix(sudo:session): session closed for user root
Oct 14 09:57:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:57:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:57:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:57:20 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:57:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:57:20 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:57:20 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev c7cffe8f-6af6-48b0-bdf8-c398278c0c20 does not exist
Oct 14 09:57:20 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 29994571-11d2-4cb4-81fd-d73946811fc0 does not exist
Oct 14 09:57:20 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 73deb0b1-1767-415a-8457-4dbc14f236ee does not exist
Oct 14 09:57:20 compute-0 nova_compute[259627]: 2025-10-14 09:57:20.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:57:20 compute-0 nova_compute[259627]: 2025-10-14 09:57:20.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:57:20 compute-0 nova_compute[259627]: 2025-10-14 09:57:20.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:57:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:57:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:57:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:57:20 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:57:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:57:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:57:21 compute-0 sudo[439659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:57:21 compute-0 sudo[439659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:57:21 compute-0 sudo[439659]: pam_unix(sudo:session): session closed for user root
Oct 14 09:57:21 compute-0 sudo[439684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:57:21 compute-0 sudo[439684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:57:21 compute-0 sudo[439684]: pam_unix(sudo:session): session closed for user root
Oct 14 09:57:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Oct 14 09:57:21 compute-0 sudo[439709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:57:21 compute-0 sudo[439709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:57:21 compute-0 sudo[439709]: pam_unix(sudo:session): session closed for user root
Oct 14 09:57:21 compute-0 sudo[439734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:57:21 compute-0 sudo[439734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:57:21 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:57:21 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:57:21 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:57:21 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:57:21 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:57:21 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:57:21 compute-0 ceph-mon[74249]: pgmap v3140: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Oct 14 09:57:21 compute-0 podman[439801]: 2025-10-14 09:57:21.765942773 +0000 UTC m=+0.064616586 container create 2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 09:57:21 compute-0 systemd[1]: Started libpod-conmon-2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc.scope.
Oct 14 09:57:21 compute-0 podman[439801]: 2025-10-14 09:57:21.73766571 +0000 UTC m=+0.036339583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:57:21 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:57:21 compute-0 podman[439801]: 2025-10-14 09:57:21.875352668 +0000 UTC m=+0.174026491 container init 2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cartwright, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 09:57:21 compute-0 podman[439801]: 2025-10-14 09:57:21.890893979 +0000 UTC m=+0.189567802 container start 2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cartwright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 09:57:21 compute-0 podman[439801]: 2025-10-14 09:57:21.895840311 +0000 UTC m=+0.194514254 container attach 2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:57:21 compute-0 condescending_cartwright[439818]: 167 167
Oct 14 09:57:21 compute-0 systemd[1]: libpod-2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc.scope: Deactivated successfully.
Oct 14 09:57:21 compute-0 conmon[439818]: conmon 2491bb4e0f09390cae52 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc.scope/container/memory.events
Oct 14 09:57:21 compute-0 podman[439801]: 2025-10-14 09:57:21.900116156 +0000 UTC m=+0.198789979 container died 2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cartwright, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 09:57:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c6a756aed0428c67fd63748e4483b2ea49dff39382edf33381dc37e2638435d-merged.mount: Deactivated successfully.
Oct 14 09:57:21 compute-0 podman[439801]: 2025-10-14 09:57:21.95041932 +0000 UTC m=+0.249093103 container remove 2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 09:57:21 compute-0 systemd[1]: libpod-conmon-2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc.scope: Deactivated successfully.
Oct 14 09:57:22 compute-0 podman[439841]: 2025-10-14 09:57:22.176735023 +0000 UTC m=+0.071152077 container create 2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_spence, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:57:22 compute-0 systemd[1]: Started libpod-conmon-2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a.scope.
Oct 14 09:57:22 compute-0 podman[439841]: 2025-10-14 09:57:22.149075754 +0000 UTC m=+0.043492858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:57:22 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a346f52c759656d97154a2b780f832eed177ae1e907110ae24cb56f904ade71f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a346f52c759656d97154a2b780f832eed177ae1e907110ae24cb56f904ade71f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a346f52c759656d97154a2b780f832eed177ae1e907110ae24cb56f904ade71f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a346f52c759656d97154a2b780f832eed177ae1e907110ae24cb56f904ade71f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a346f52c759656d97154a2b780f832eed177ae1e907110ae24cb56f904ade71f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:57:22 compute-0 podman[439841]: 2025-10-14 09:57:22.283875972 +0000 UTC m=+0.178293076 container init 2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 09:57:22 compute-0 podman[439841]: 2025-10-14 09:57:22.298353157 +0000 UTC m=+0.192770221 container start 2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:57:22 compute-0 podman[439841]: 2025-10-14 09:57:22.302564851 +0000 UTC m=+0.196981975 container attach 2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_spence, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 09:57:22 compute-0 nova_compute[259627]: 2025-10-14 09:57:22.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 2.1 MiB/s wr, 12 op/s
Oct 14 09:57:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:57:23 compute-0 hopeful_spence[439858]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:57:23 compute-0 hopeful_spence[439858]: --> relative data size: 1.0
Oct 14 09:57:23 compute-0 hopeful_spence[439858]: --> All data devices are unavailable
Oct 14 09:57:23 compute-0 systemd[1]: libpod-2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a.scope: Deactivated successfully.
Oct 14 09:57:23 compute-0 systemd[1]: libpod-2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a.scope: Consumed 1.110s CPU time.
Oct 14 09:57:23 compute-0 podman[439841]: 2025-10-14 09:57:23.480383071 +0000 UTC m=+1.374800115 container died 2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_spence, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 09:57:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a346f52c759656d97154a2b780f832eed177ae1e907110ae24cb56f904ade71f-merged.mount: Deactivated successfully.
Oct 14 09:57:23 compute-0 podman[439841]: 2025-10-14 09:57:23.542820543 +0000 UTC m=+1.437237577 container remove 2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_spence, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 09:57:23 compute-0 systemd[1]: libpod-conmon-2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a.scope: Deactivated successfully.
Oct 14 09:57:23 compute-0 sudo[439734]: pam_unix(sudo:session): session closed for user root
Oct 14 09:57:23 compute-0 sudo[439898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:57:23 compute-0 sudo[439898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:57:23 compute-0 sudo[439898]: pam_unix(sudo:session): session closed for user root
Oct 14 09:57:23 compute-0 sudo[439923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:57:23 compute-0 sudo[439923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:57:23 compute-0 sudo[439923]: pam_unix(sudo:session): session closed for user root
Oct 14 09:57:23 compute-0 sudo[439948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:57:23 compute-0 sudo[439948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:57:23 compute-0 sudo[439948]: pam_unix(sudo:session): session closed for user root
Oct 14 09:57:23 compute-0 sudo[439973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:57:23 compute-0 sudo[439973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:57:24 compute-0 ceph-mon[74249]: pgmap v3141: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 2.1 MiB/s wr, 12 op/s
Oct 14 09:57:24 compute-0 podman[440039]: 2025-10-14 09:57:24.283924997 +0000 UTC m=+0.046279167 container create f08992c6b70c5ba815299cc609b0c69fcf5c42b73831abc449bb74559496b087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 09:57:24 compute-0 systemd[1]: Started libpod-conmon-f08992c6b70c5ba815299cc609b0c69fcf5c42b73831abc449bb74559496b087.scope.
Oct 14 09:57:24 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:57:24 compute-0 podman[440039]: 2025-10-14 09:57:24.344043222 +0000 UTC m=+0.106397392 container init f08992c6b70c5ba815299cc609b0c69fcf5c42b73831abc449bb74559496b087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kapitsa, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 09:57:24 compute-0 podman[440039]: 2025-10-14 09:57:24.349700151 +0000 UTC m=+0.112054321 container start f08992c6b70c5ba815299cc609b0c69fcf5c42b73831abc449bb74559496b087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 09:57:24 compute-0 quirky_kapitsa[440055]: 167 167
Oct 14 09:57:24 compute-0 systemd[1]: libpod-f08992c6b70c5ba815299cc609b0c69fcf5c42b73831abc449bb74559496b087.scope: Deactivated successfully.
Oct 14 09:57:24 compute-0 podman[440039]: 2025-10-14 09:57:24.355731439 +0000 UTC m=+0.118085629 container attach f08992c6b70c5ba815299cc609b0c69fcf5c42b73831abc449bb74559496b087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kapitsa, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 09:57:24 compute-0 podman[440039]: 2025-10-14 09:57:24.35617967 +0000 UTC m=+0.118533830 container died f08992c6b70c5ba815299cc609b0c69fcf5c42b73831abc449bb74559496b087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kapitsa, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:57:24 compute-0 podman[440039]: 2025-10-14 09:57:24.264657454 +0000 UTC m=+0.027011674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:57:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b0d06c83ac7510af06f96c321939a266581e4fe9ce6b11f311aa99461a449db-merged.mount: Deactivated successfully.
Oct 14 09:57:24 compute-0 podman[440039]: 2025-10-14 09:57:24.388070202 +0000 UTC m=+0.150424372 container remove f08992c6b70c5ba815299cc609b0c69fcf5c42b73831abc449bb74559496b087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kapitsa, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:57:24 compute-0 systemd[1]: libpod-conmon-f08992c6b70c5ba815299cc609b0c69fcf5c42b73831abc449bb74559496b087.scope: Deactivated successfully.
Oct 14 09:57:24 compute-0 podman[440080]: 2025-10-14 09:57:24.61048593 +0000 UTC m=+0.069584248 container create f7648d6d3d153ecfa0406814477efe672e4d18fdd4501c3044a8a316a815fdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 09:57:24 compute-0 systemd[1]: Started libpod-conmon-f7648d6d3d153ecfa0406814477efe672e4d18fdd4501c3044a8a316a815fdaf.scope.
Oct 14 09:57:24 compute-0 podman[440080]: 2025-10-14 09:57:24.580965476 +0000 UTC m=+0.040063854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:57:24 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b92eb56fa3c8cf21152858e1085caceb106e1e2fdaa778ac5340e4e4ba1ecc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b92eb56fa3c8cf21152858e1085caceb106e1e2fdaa778ac5340e4e4ba1ecc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b92eb56fa3c8cf21152858e1085caceb106e1e2fdaa778ac5340e4e4ba1ecc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b92eb56fa3c8cf21152858e1085caceb106e1e2fdaa778ac5340e4e4ba1ecc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:57:24 compute-0 podman[440080]: 2025-10-14 09:57:24.709995252 +0000 UTC m=+0.169093590 container init f7648d6d3d153ecfa0406814477efe672e4d18fdd4501c3044a8a316a815fdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_curran, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:57:24 compute-0 podman[440080]: 2025-10-14 09:57:24.723193235 +0000 UTC m=+0.182291563 container start f7648d6d3d153ecfa0406814477efe672e4d18fdd4501c3044a8a316a815fdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 09:57:24 compute-0 podman[440080]: 2025-10-14 09:57:24.726974438 +0000 UTC m=+0.186072816 container attach f7648d6d3d153ecfa0406814477efe672e4d18fdd4501c3044a8a316a815fdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_curran, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:57:25 compute-0 nova_compute[259627]: 2025-10-14 09:57:25.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 1.9 MiB/s wr, 11 op/s
Oct 14 09:57:25 compute-0 jovial_curran[440097]: {
Oct 14 09:57:25 compute-0 jovial_curran[440097]:     "0": [
Oct 14 09:57:25 compute-0 jovial_curran[440097]:         {
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "devices": [
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "/dev/loop3"
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             ],
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "lv_name": "ceph_lv0",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "lv_size": "21470642176",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "name": "ceph_lv0",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "tags": {
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.cluster_name": "ceph",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.crush_device_class": "",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.encrypted": "0",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.osd_id": "0",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.type": "block",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.vdo": "0"
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             },
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "type": "block",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "vg_name": "ceph_vg0"
Oct 14 09:57:25 compute-0 jovial_curran[440097]:         }
Oct 14 09:57:25 compute-0 jovial_curran[440097]:     ],
Oct 14 09:57:25 compute-0 jovial_curran[440097]:     "1": [
Oct 14 09:57:25 compute-0 jovial_curran[440097]:         {
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "devices": [
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "/dev/loop4"
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             ],
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "lv_name": "ceph_lv1",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "lv_size": "21470642176",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "name": "ceph_lv1",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "tags": {
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.cluster_name": "ceph",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.crush_device_class": "",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.encrypted": "0",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.osd_id": "1",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.type": "block",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.vdo": "0"
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             },
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "type": "block",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "vg_name": "ceph_vg1"
Oct 14 09:57:25 compute-0 jovial_curran[440097]:         }
Oct 14 09:57:25 compute-0 jovial_curran[440097]:     ],
Oct 14 09:57:25 compute-0 jovial_curran[440097]:     "2": [
Oct 14 09:57:25 compute-0 jovial_curran[440097]:         {
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "devices": [
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "/dev/loop5"
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             ],
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "lv_name": "ceph_lv2",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "lv_size": "21470642176",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "name": "ceph_lv2",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "tags": {
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.cluster_name": "ceph",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.crush_device_class": "",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.encrypted": "0",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.osd_id": "2",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.type": "block",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:                 "ceph.vdo": "0"
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             },
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "type": "block",
Oct 14 09:57:25 compute-0 jovial_curran[440097]:             "vg_name": "ceph_vg2"
Oct 14 09:57:25 compute-0 jovial_curran[440097]:         }
Oct 14 09:57:25 compute-0 jovial_curran[440097]:     ]
Oct 14 09:57:25 compute-0 jovial_curran[440097]: }
Oct 14 09:57:25 compute-0 systemd[1]: libpod-f7648d6d3d153ecfa0406814477efe672e4d18fdd4501c3044a8a316a815fdaf.scope: Deactivated successfully.
Oct 14 09:57:25 compute-0 podman[440080]: 2025-10-14 09:57:25.506396063 +0000 UTC m=+0.965494381 container died f7648d6d3d153ecfa0406814477efe672e4d18fdd4501c3044a8a316a815fdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_curran, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:57:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b92eb56fa3c8cf21152858e1085caceb106e1e2fdaa778ac5340e4e4ba1ecc0-merged.mount: Deactivated successfully.
Oct 14 09:57:25 compute-0 podman[440080]: 2025-10-14 09:57:25.568705742 +0000 UTC m=+1.027804030 container remove f7648d6d3d153ecfa0406814477efe672e4d18fdd4501c3044a8a316a815fdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_curran, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:57:25 compute-0 systemd[1]: libpod-conmon-f7648d6d3d153ecfa0406814477efe672e4d18fdd4501c3044a8a316a815fdaf.scope: Deactivated successfully.
Oct 14 09:57:25 compute-0 sudo[439973]: pam_unix(sudo:session): session closed for user root
Oct 14 09:57:25 compute-0 sudo[440120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:57:25 compute-0 sudo[440120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:57:25 compute-0 sudo[440120]: pam_unix(sudo:session): session closed for user root
Oct 14 09:57:25 compute-0 sudo[440145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:57:25 compute-0 sudo[440145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:57:25 compute-0 sudo[440145]: pam_unix(sudo:session): session closed for user root
Oct 14 09:57:25 compute-0 sudo[440170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:57:25 compute-0 sudo[440170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:57:25 compute-0 sudo[440170]: pam_unix(sudo:session): session closed for user root
Oct 14 09:57:25 compute-0 sudo[440195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:57:25 compute-0 sudo[440195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:57:26 compute-0 ceph-mon[74249]: pgmap v3142: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 1.9 MiB/s wr, 11 op/s
Oct 14 09:57:26 compute-0 podman[440262]: 2025-10-14 09:57:26.382995703 +0000 UTC m=+0.059082541 container create 5608f8d03679b97342c1bb9e8ce9e1f2104ebb288cc887341d412bd3cb0b907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:57:26 compute-0 systemd[1]: Started libpod-conmon-5608f8d03679b97342c1bb9e8ce9e1f2104ebb288cc887341d412bd3cb0b907e.scope.
Oct 14 09:57:26 compute-0 podman[440262]: 2025-10-14 09:57:26.353706074 +0000 UTC m=+0.029792962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:57:26 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:57:26 compute-0 podman[440262]: 2025-10-14 09:57:26.487058826 +0000 UTC m=+0.163145704 container init 5608f8d03679b97342c1bb9e8ce9e1f2104ebb288cc887341d412bd3cb0b907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:57:26 compute-0 podman[440262]: 2025-10-14 09:57:26.498693842 +0000 UTC m=+0.174780650 container start 5608f8d03679b97342c1bb9e8ce9e1f2104ebb288cc887341d412bd3cb0b907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 09:57:26 compute-0 podman[440262]: 2025-10-14 09:57:26.502615758 +0000 UTC m=+0.178702636 container attach 5608f8d03679b97342c1bb9e8ce9e1f2104ebb288cc887341d412bd3cb0b907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 09:57:26 compute-0 brave_mestorf[440279]: 167 167
Oct 14 09:57:26 compute-0 systemd[1]: libpod-5608f8d03679b97342c1bb9e8ce9e1f2104ebb288cc887341d412bd3cb0b907e.scope: Deactivated successfully.
Oct 14 09:57:26 compute-0 podman[440262]: 2025-10-14 09:57:26.50475347 +0000 UTC m=+0.180840328 container died 5608f8d03679b97342c1bb9e8ce9e1f2104ebb288cc887341d412bd3cb0b907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:57:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-45ba3f2c0fd4b4883e4cbd62c17d5927abed22d17bc5a3ba03f2be6c4671a8b8-merged.mount: Deactivated successfully.
Oct 14 09:57:26 compute-0 podman[440262]: 2025-10-14 09:57:26.56219733 +0000 UTC m=+0.238284158 container remove 5608f8d03679b97342c1bb9e8ce9e1f2104ebb288cc887341d412bd3cb0b907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 09:57:26 compute-0 systemd[1]: libpod-conmon-5608f8d03679b97342c1bb9e8ce9e1f2104ebb288cc887341d412bd3cb0b907e.scope: Deactivated successfully.
Oct 14 09:57:26 compute-0 podman[440306]: 2025-10-14 09:57:26.816722395 +0000 UTC m=+0.078202530 container create 0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 09:57:26 compute-0 systemd[1]: Started libpod-conmon-0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1.scope.
Oct 14 09:57:26 compute-0 podman[440306]: 2025-10-14 09:57:26.786296778 +0000 UTC m=+0.047776973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:57:26 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29fb9b0ce0cadeb77c909dd2446ced2088593fcf36be74b85951c6c76607413b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29fb9b0ce0cadeb77c909dd2446ced2088593fcf36be74b85951c6c76607413b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29fb9b0ce0cadeb77c909dd2446ced2088593fcf36be74b85951c6c76607413b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29fb9b0ce0cadeb77c909dd2446ced2088593fcf36be74b85951c6c76607413b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:57:26 compute-0 podman[440306]: 2025-10-14 09:57:26.941888054 +0000 UTC m=+0.203368189 container init 0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kalam, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 09:57:26 compute-0 podman[440306]: 2025-10-14 09:57:26.949940611 +0000 UTC m=+0.211420716 container start 0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:57:26 compute-0 podman[440306]: 2025-10-14 09:57:26.95353661 +0000 UTC m=+0.215016795 container attach 0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 09:57:26 compute-0 podman[440323]: 2025-10-14 09:57:26.981208799 +0000 UTC m=+0.114590441 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:57:27 compute-0 podman[440320]: 2025-10-14 09:57:27.0203348 +0000 UTC m=+0.148132555 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 09:57:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 1.7 MiB/s wr, 1 op/s
Oct 14 09:57:27 compute-0 nova_compute[259627]: 2025-10-14 09:57:27.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]: {
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:         "osd_id": 2,
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:         "type": "bluestore"
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:     },
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:         "osd_id": 1,
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:         "type": "bluestore"
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:     },
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:         "osd_id": 0,
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:         "type": "bluestore"
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]:     }
Oct 14 09:57:28 compute-0 optimistic_kalam[440324]: }
Oct 14 09:57:28 compute-0 systemd[1]: libpod-0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1.scope: Deactivated successfully.
Oct 14 09:57:28 compute-0 podman[440306]: 2025-10-14 09:57:28.080713609 +0000 UTC m=+1.342193724 container died 0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kalam, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 09:57:28 compute-0 systemd[1]: libpod-0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1.scope: Consumed 1.131s CPU time.
Oct 14 09:57:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-29fb9b0ce0cadeb77c909dd2446ced2088593fcf36be74b85951c6c76607413b-merged.mount: Deactivated successfully.
Oct 14 09:57:28 compute-0 podman[440306]: 2025-10-14 09:57:28.158820505 +0000 UTC m=+1.420300650 container remove 0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 09:57:28 compute-0 systemd[1]: libpod-conmon-0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1.scope: Deactivated successfully.
Oct 14 09:57:28 compute-0 sudo[440195]: pam_unix(sudo:session): session closed for user root
Oct 14 09:57:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:57:28 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:57:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:57:28 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:57:28 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 5bb3f34a-adc8-4e03-9fce-6e85a96b6fde does not exist
Oct 14 09:57:28 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 13ece194-3953-4f17-85d7-5b3caf0ae5b8 does not exist
Oct 14 09:57:28 compute-0 ceph-mon[74249]: pgmap v3143: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 1.7 MiB/s wr, 1 op/s
Oct 14 09:57:28 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:57:28 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:57:28 compute-0 sudo[440403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:57:28 compute-0 sudo[440403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:57:28 compute-0 sudo[440403]: pam_unix(sudo:session): session closed for user root
Oct 14 09:57:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:57:28 compute-0 sudo[440428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:57:28 compute-0 sudo[440428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:57:28 compute-0 sudo[440428]: pam_unix(sudo:session): session closed for user root
Oct 14 09:57:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:30 compute-0 nova_compute[259627]: 2025-10-14 09:57:30.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:30 compute-0 ceph-mon[74249]: pgmap v3144: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.271626) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435850271718, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1658, "num_deletes": 254, "total_data_size": 2707493, "memory_usage": 2747392, "flush_reason": "Manual Compaction"}
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435850290290, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 2626859, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64113, "largest_seqno": 65770, "table_properties": {"data_size": 2619133, "index_size": 4668, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16003, "raw_average_key_size": 20, "raw_value_size": 2603579, "raw_average_value_size": 3287, "num_data_blocks": 209, "num_entries": 792, "num_filter_entries": 792, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760435684, "oldest_key_time": 1760435684, "file_creation_time": 1760435850, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 18716 microseconds, and 12177 cpu microseconds.
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.290353) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 2626859 bytes OK
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.290380) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.292158) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.292181) EVENT_LOG_v1 {"time_micros": 1760435850292174, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.292206) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 2700289, prev total WAL file size 2700289, number of live WAL files 2.
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.293637) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(2565KB)], [152(8683KB)]
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435850293686, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 11519111, "oldest_snapshot_seqno": -1}
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8407 keys, 9767274 bytes, temperature: kUnknown
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435850343961, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 9767274, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9714279, "index_size": 30866, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21061, "raw_key_size": 219806, "raw_average_key_size": 26, "raw_value_size": 9567259, "raw_average_value_size": 1138, "num_data_blocks": 1195, "num_entries": 8407, "num_filter_entries": 8407, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760435850, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.344469) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 9767274 bytes
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.347330) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 227.9 rd, 193.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.5 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(8.1) write-amplify(3.7) OK, records in: 8930, records dropped: 523 output_compression: NoCompression
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.347370) EVENT_LOG_v1 {"time_micros": 1760435850347354, "job": 94, "event": "compaction_finished", "compaction_time_micros": 50536, "compaction_time_cpu_micros": 32624, "output_level": 6, "num_output_files": 1, "total_output_size": 9767274, "num_input_records": 8930, "num_output_records": 8407, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435850348354, "job": 94, "event": "table_file_deletion", "file_number": 154}
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435850350362, "job": 94, "event": "table_file_deletion", "file_number": 152}
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.293509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.350427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.350433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.350435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.350437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:57:30 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.350439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:57:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:32 compute-0 ceph-mon[74249]: pgmap v3145: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:32 compute-0 nova_compute[259627]: 2025-10-14 09:57:32.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:57:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:57:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:57:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:57:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:57:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:57:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:57:32
Oct 14 09:57:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:57:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:57:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.meta', 'backups']
Oct 14 09:57:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:57:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:57:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:57:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:57:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:57:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:57:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:57:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:57:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:57:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:57:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:57:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:57:34 compute-0 ceph-mon[74249]: pgmap v3146: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:35 compute-0 nova_compute[259627]: 2025-10-14 09:57:35.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:36 compute-0 ceph-mon[74249]: pgmap v3147: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:37 compute-0 nova_compute[259627]: 2025-10-14 09:57:37.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:38 compute-0 ceph-mon[74249]: pgmap v3148: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:57:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:39 compute-0 podman[440454]: 2025-10-14 09:57:39.665343972 +0000 UTC m=+0.059083891 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:57:39 compute-0 podman[440453]: 2025-10-14 09:57:39.687861144 +0000 UTC m=+0.090756967 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 14 09:57:40 compute-0 nova_compute[259627]: 2025-10-14 09:57:40.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:40 compute-0 ceph-mon[74249]: pgmap v3149: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:42 compute-0 ceph-mon[74249]: pgmap v3150: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:42 compute-0 nova_compute[259627]: 2025-10-14 09:57:42.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033308812756397733 of space, bias 1.0, pg target 0.0999264382691932 quantized to 32 (current 32)
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:57:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:57:44 compute-0 ceph-mon[74249]: pgmap v3151: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:45 compute-0 nova_compute[259627]: 2025-10-14 09:57:45.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:46 compute-0 ceph-mon[74249]: pgmap v3152: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:47 compute-0 nova_compute[259627]: 2025-10-14 09:57:47.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:48 compute-0 ceph-mon[74249]: pgmap v3153: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:57:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:50 compute-0 nova_compute[259627]: 2025-10-14 09:57:50.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:50 compute-0 ceph-mon[74249]: pgmap v3154: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:52 compute-0 ceph-mon[74249]: pgmap v3155: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:52 compute-0 nova_compute[259627]: 2025-10-14 09:57:52.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:57:54 compute-0 ceph-mon[74249]: pgmap v3156: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:55 compute-0 nova_compute[259627]: 2025-10-14 09:57:55.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:56 compute-0 ceph-mon[74249]: pgmap v3157: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:57 compute-0 nova_compute[259627]: 2025-10-14 09:57:57.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:57:57 compute-0 podman[440498]: 2025-10-14 09:57:57.680618694 +0000 UTC m=+0.090094771 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 09:57:57 compute-0 podman[440497]: 2025-10-14 09:57:57.683041214 +0000 UTC m=+0.090368919 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 09:57:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:57:58 compute-0 ceph-mon[74249]: pgmap v3158: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:57:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:00 compute-0 nova_compute[259627]: 2025-10-14 09:58:00.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:00 compute-0 ceph-mon[74249]: pgmap v3159: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:02 compute-0 ceph-mon[74249]: pgmap v3160: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:02 compute-0 nova_compute[259627]: 2025-10-14 09:58:02.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:58:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:58:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:58:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:58:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:58:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:58:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:58:03 compute-0 ceph-mon[74249]: pgmap v3161: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:05 compute-0 nova_compute[259627]: 2025-10-14 09:58:05.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:58:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3522386918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:58:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:58:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3522386918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:58:06 compute-0 ceph-mon[74249]: pgmap v3162: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3522386918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:58:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/3522386918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:58:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:58:07.074 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:58:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:58:07.074 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:58:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:58:07.075 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:58:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:07 compute-0 nova_compute[259627]: 2025-10-14 09:58:07.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:07 compute-0 nova_compute[259627]: 2025-10-14 09:58:07.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:58:08 compute-0 ceph-mon[74249]: pgmap v3163: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:58:08 compute-0 nova_compute[259627]: 2025-10-14 09:58:08.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:58:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:10 compute-0 nova_compute[259627]: 2025-10-14 09:58:10.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Oct 14 09:58:10 compute-0 ceph-mon[74249]: pgmap v3164: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:10 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Oct 14 09:58:10 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Oct 14 09:58:10 compute-0 podman[440539]: 2025-10-14 09:58:10.677625541 +0000 UTC m=+0.076215651 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 14 09:58:10 compute-0 podman[440538]: 2025-10-14 09:58:10.68978574 +0000 UTC m=+0.104675760 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_controller)
Oct 14 09:58:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 614 B/s wr, 9 op/s
Oct 14 09:58:11 compute-0 ceph-mon[74249]: osdmap e307: 3 total, 3 up, 3 in
Oct 14 09:58:12 compute-0 ceph-mon[74249]: pgmap v3166: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 614 B/s wr, 9 op/s
Oct 14 09:58:12 compute-0 nova_compute[259627]: 2025-10-14 09:58:12.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 614 B/s wr, 9 op/s
Oct 14 09:58:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:58:13 compute-0 nova_compute[259627]: 2025-10-14 09:58:13.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:58:14 compute-0 nova_compute[259627]: 2025-10-14 09:58:14.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:58:14 compute-0 nova_compute[259627]: 2025-10-14 09:58:14.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:58:14 compute-0 nova_compute[259627]: 2025-10-14 09:58:14.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:58:14 compute-0 nova_compute[259627]: 2025-10-14 09:58:14.005 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:58:14 compute-0 nova_compute[259627]: 2025-10-14 09:58:14.005 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:58:14 compute-0 ceph-mon[74249]: pgmap v3167: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 614 B/s wr, 9 op/s
Oct 14 09:58:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:58:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1141861825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:58:14 compute-0 nova_compute[259627]: 2025-10-14 09:58:14.463 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:58:14 compute-0 nova_compute[259627]: 2025-10-14 09:58:14.674 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:58:14 compute-0 nova_compute[259627]: 2025-10-14 09:58:14.675 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3614MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:58:14 compute-0 nova_compute[259627]: 2025-10-14 09:58:14.675 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:58:14 compute-0 nova_compute[259627]: 2025-10-14 09:58:14.675 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:58:14 compute-0 nova_compute[259627]: 2025-10-14 09:58:14.766 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:58:14 compute-0 nova_compute[259627]: 2025-10-14 09:58:14.767 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:58:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 14 09:58:15 compute-0 nova_compute[259627]: 2025-10-14 09:58:15.203 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:58:15 compute-0 nova_compute[259627]: 2025-10-14 09:58:15.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:15 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1141861825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:58:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:58:15 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1805237291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:58:15 compute-0 nova_compute[259627]: 2025-10-14 09:58:15.664 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:58:15 compute-0 nova_compute[259627]: 2025-10-14 09:58:15.670 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:58:15 compute-0 nova_compute[259627]: 2025-10-14 09:58:15.695 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:58:15 compute-0 nova_compute[259627]: 2025-10-14 09:58:15.696 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:58:15 compute-0 nova_compute[259627]: 2025-10-14 09:58:15.697 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:58:16 compute-0 ceph-mon[74249]: pgmap v3168: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 14 09:58:16 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1805237291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:58:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 14 09:58:17 compute-0 nova_compute[259627]: 2025-10-14 09:58:17.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:17 compute-0 nova_compute[259627]: 2025-10-14 09:58:17.693 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:58:18 compute-0 ceph-mon[74249]: pgmap v3169: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 14 09:58:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:58:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Oct 14 09:58:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Oct 14 09:58:18 compute-0 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Oct 14 09:58:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 918 B/s wr, 18 op/s
Oct 14 09:58:19 compute-0 ceph-mon[74249]: osdmap e308: 3 total, 3 up, 3 in
Oct 14 09:58:19 compute-0 nova_compute[259627]: 2025-10-14 09:58:19.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:58:20 compute-0 nova_compute[259627]: 2025-10-14 09:58:20.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:20 compute-0 ceph-mon[74249]: pgmap v3171: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 918 B/s wr, 18 op/s
Oct 14 09:58:20 compute-0 nova_compute[259627]: 2025-10-14 09:58:20.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:58:20 compute-0 nova_compute[259627]: 2025-10-14 09:58:20.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:58:20 compute-0 nova_compute[259627]: 2025-10-14 09:58:20.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:58:21 compute-0 nova_compute[259627]: 2025-10-14 09:58:21.006 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:58:21 compute-0 nova_compute[259627]: 2025-10-14 09:58:21.006 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:58:21 compute-0 nova_compute[259627]: 2025-10-14 09:58:21.007 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:58:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 818 B/s wr, 15 op/s
Oct 14 09:58:21 compute-0 ceph-mon[74249]: pgmap v3172: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 818 B/s wr, 15 op/s
Oct 14 09:58:21 compute-0 nova_compute[259627]: 2025-10-14 09:58:21.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:58:21 compute-0 nova_compute[259627]: 2025-10-14 09:58:21.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:58:22 compute-0 nova_compute[259627]: 2025-10-14 09:58:22.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 818 B/s wr, 15 op/s
Oct 14 09:58:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:58:24 compute-0 ceph-mon[74249]: pgmap v3173: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 818 B/s wr, 15 op/s
Oct 14 09:58:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 409 B/s rd, 0 B/s wr, 0 op/s
Oct 14 09:58:25 compute-0 nova_compute[259627]: 2025-10-14 09:58:25.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:26 compute-0 ceph-mon[74249]: pgmap v3174: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 409 B/s rd, 0 B/s wr, 0 op/s
Oct 14 09:58:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:27 compute-0 nova_compute[259627]: 2025-10-14 09:58:27.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:28 compute-0 ceph-mon[74249]: pgmap v3175: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:58:28 compute-0 sudo[440624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:58:28 compute-0 sudo[440624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:28 compute-0 sudo[440624]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:28 compute-0 sudo[440662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:58:28 compute-0 sudo[440662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:28 compute-0 sudo[440662]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:28 compute-0 podman[440648]: 2025-10-14 09:58:28.634796915 +0000 UTC m=+0.069799204 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 09:58:28 compute-0 podman[440649]: 2025-10-14 09:58:28.658257551 +0000 UTC m=+0.092735757 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 09:58:28 compute-0 sudo[440715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:58:28 compute-0 sudo[440715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:28 compute-0 sudo[440715]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:28 compute-0 sudo[440740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 14 09:58:28 compute-0 sudo[440740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:29 compute-0 podman[440838]: 2025-10-14 09:58:29.224174277 +0000 UTC m=+0.066518313 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:58:29 compute-0 podman[440838]: 2025-10-14 09:58:29.328365273 +0000 UTC m=+0.170709259 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 09:58:30 compute-0 sudo[440740]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:58:30 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:58:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:58:30 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:58:30 compute-0 sudo[440999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:58:30 compute-0 sudo[440999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:30 compute-0 sudo[440999]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:30 compute-0 nova_compute[259627]: 2025-10-14 09:58:30.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:30 compute-0 sudo[441024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:58:30 compute-0 sudo[441024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:30 compute-0 sudo[441024]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:30 compute-0 ceph-mon[74249]: pgmap v3176: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:30 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:58:30 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:58:30 compute-0 sudo[441049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:58:30 compute-0 sudo[441049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:30 compute-0 sudo[441049]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:30 compute-0 sudo[441074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:58:30 compute-0 sudo[441074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:31 compute-0 sudo[441074]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:58:31 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:58:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:58:31 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:58:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:58:31 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:58:31 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev c206ad64-4dfa-4b4a-a411-ac7855641e45 does not exist
Oct 14 09:58:31 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev d6203163-cc42-41c0-bba5-fbc9133f438e does not exist
Oct 14 09:58:31 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev cf1755bd-d123-49f0-b7ab-020332432cbf does not exist
Oct 14 09:58:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:58:31 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:58:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:58:31 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:58:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:58:31 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:58:31 compute-0 sudo[441132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:58:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:31 compute-0 sudo[441132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:31 compute-0 sudo[441132]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:31 compute-0 sudo[441157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:58:31 compute-0 sudo[441157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:31 compute-0 sudo[441157]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:31 compute-0 sudo[441182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:58:31 compute-0 sudo[441182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:31 compute-0 sudo[441182]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:31 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:58:31 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:58:31 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:58:31 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:58:31 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:58:31 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:58:31 compute-0 sudo[441207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:58:31 compute-0 sudo[441207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:31 compute-0 podman[441274]: 2025-10-14 09:58:31.899487761 +0000 UTC m=+0.052859627 container create 4ebc1753df9f3a3e0868e742d71fa157d2e25c6ff01e5a13d0e42bff14886186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 09:58:31 compute-0 systemd[1]: Started libpod-conmon-4ebc1753df9f3a3e0868e742d71fa157d2e25c6ff01e5a13d0e42bff14886186.scope.
Oct 14 09:58:31 compute-0 podman[441274]: 2025-10-14 09:58:31.873796852 +0000 UTC m=+0.027168748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:58:31 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:58:32 compute-0 podman[441274]: 2025-10-14 09:58:32.000849518 +0000 UTC m=+0.154221424 container init 4ebc1753df9f3a3e0868e742d71fa157d2e25c6ff01e5a13d0e42bff14886186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:58:32 compute-0 podman[441274]: 2025-10-14 09:58:32.009525171 +0000 UTC m=+0.162897037 container start 4ebc1753df9f3a3e0868e742d71fa157d2e25c6ff01e5a13d0e42bff14886186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 09:58:32 compute-0 podman[441274]: 2025-10-14 09:58:32.013059378 +0000 UTC m=+0.166431324 container attach 4ebc1753df9f3a3e0868e742d71fa157d2e25c6ff01e5a13d0e42bff14886186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 09:58:32 compute-0 dazzling_pike[441290]: 167 167
Oct 14 09:58:32 compute-0 systemd[1]: libpod-4ebc1753df9f3a3e0868e742d71fa157d2e25c6ff01e5a13d0e42bff14886186.scope: Deactivated successfully.
Oct 14 09:58:32 compute-0 podman[441274]: 2025-10-14 09:58:32.015442576 +0000 UTC m=+0.168814452 container died 4ebc1753df9f3a3e0868e742d71fa157d2e25c6ff01e5a13d0e42bff14886186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 09:58:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-2446c15a418f30e6087c8e572ae5e5c8fd00dbb4f1e35774b25e0121349fdfdd-merged.mount: Deactivated successfully.
Oct 14 09:58:32 compute-0 podman[441274]: 2025-10-14 09:58:32.065584716 +0000 UTC m=+0.218956612 container remove 4ebc1753df9f3a3e0868e742d71fa157d2e25c6ff01e5a13d0e42bff14886186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 09:58:32 compute-0 systemd[1]: libpod-conmon-4ebc1753df9f3a3e0868e742d71fa157d2e25c6ff01e5a13d0e42bff14886186.scope: Deactivated successfully.
Oct 14 09:58:32 compute-0 podman[441316]: 2025-10-14 09:58:32.246461785 +0000 UTC m=+0.045664872 container create 45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 09:58:32 compute-0 systemd[1]: Started libpod-conmon-45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a.scope.
Oct 14 09:58:32 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:58:32 compute-0 podman[441316]: 2025-10-14 09:58:32.231122348 +0000 UTC m=+0.030325465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:58:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211cbd54009f2022b89233fc8e481232e770c4e61b79fd8866f73f5dac2c1ce2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:58:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211cbd54009f2022b89233fc8e481232e770c4e61b79fd8866f73f5dac2c1ce2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:58:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211cbd54009f2022b89233fc8e481232e770c4e61b79fd8866f73f5dac2c1ce2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:58:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211cbd54009f2022b89233fc8e481232e770c4e61b79fd8866f73f5dac2c1ce2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:58:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211cbd54009f2022b89233fc8e481232e770c4e61b79fd8866f73f5dac2c1ce2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:58:32 compute-0 podman[441316]: 2025-10-14 09:58:32.348641992 +0000 UTC m=+0.147845109 container init 45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:58:32 compute-0 podman[441316]: 2025-10-14 09:58:32.364739927 +0000 UTC m=+0.163943014 container start 45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chaplygin, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 09:58:32 compute-0 podman[441316]: 2025-10-14 09:58:32.369975785 +0000 UTC m=+0.169178892 container attach 45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chaplygin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:58:32 compute-0 ceph-mon[74249]: pgmap v3177: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:32 compute-0 nova_compute[259627]: 2025-10-14 09:58:32.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:58:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:58:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:58:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:58:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:58:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:58:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:58:32
Oct 14 09:58:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:58:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:58:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'backups']
Oct 14 09:58:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:58:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.432450) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435913432510, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 778, "num_deletes": 257, "total_data_size": 963838, "memory_usage": 979704, "flush_reason": "Manual Compaction"}
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435913442985, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 954724, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65771, "largest_seqno": 66548, "table_properties": {"data_size": 950682, "index_size": 1757, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8899, "raw_average_key_size": 19, "raw_value_size": 942495, "raw_average_value_size": 2031, "num_data_blocks": 78, "num_entries": 464, "num_filter_entries": 464, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760435851, "oldest_key_time": 1760435851, "file_creation_time": 1760435913, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 10616 microseconds, and 6426 cpu microseconds.
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.443068) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 954724 bytes OK
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.443094) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.445783) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.445798) EVENT_LOG_v1 {"time_micros": 1760435913445793, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.445818) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 959887, prev total WAL file size 986375, number of live WAL files 2.
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.446474) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373634' seq:72057594037927935, type:22 .. '6C6F676D0033303136' seq:0, type:0; will stop at (end)
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(932KB)], [155(9538KB)]
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435913446527, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 10721998, "oldest_snapshot_seqno": -1}
Oct 14 09:58:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:58:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:58:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:58:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:58:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:58:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:58:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:58:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:58:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:58:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8342 keys, 10606840 bytes, temperature: kUnknown
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435913513374, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 10606840, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10552825, "index_size": 32078, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20869, "raw_key_size": 219383, "raw_average_key_size": 26, "raw_value_size": 10405387, "raw_average_value_size": 1247, "num_data_blocks": 1245, "num_entries": 8342, "num_filter_entries": 8342, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760435913, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.513671) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 10606840 bytes
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.515114) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.2 rd, 158.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.3 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(22.3) write-amplify(11.1) OK, records in: 8871, records dropped: 529 output_compression: NoCompression
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.515143) EVENT_LOG_v1 {"time_micros": 1760435913515131, "job": 96, "event": "compaction_finished", "compaction_time_micros": 66920, "compaction_time_cpu_micros": 41599, "output_level": 6, "num_output_files": 1, "total_output_size": 10606840, "num_input_records": 8871, "num_output_records": 8342, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435913515587, "job": 96, "event": "table_file_deletion", "file_number": 157}
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435913518831, "job": 96, "event": "table_file_deletion", "file_number": 155}
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.446378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.518876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.518884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.518887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.518889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:58:33 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.518892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 09:58:33 compute-0 crazy_chaplygin[441333]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:58:33 compute-0 crazy_chaplygin[441333]: --> relative data size: 1.0
Oct 14 09:58:33 compute-0 crazy_chaplygin[441333]: --> All data devices are unavailable
Oct 14 09:58:33 compute-0 systemd[1]: libpod-45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a.scope: Deactivated successfully.
Oct 14 09:58:33 compute-0 systemd[1]: libpod-45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a.scope: Consumed 1.152s CPU time.
Oct 14 09:58:33 compute-0 podman[441316]: 2025-10-14 09:58:33.618342417 +0000 UTC m=+1.417545524 container died 45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:58:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-211cbd54009f2022b89233fc8e481232e770c4e61b79fd8866f73f5dac2c1ce2-merged.mount: Deactivated successfully.
Oct 14 09:58:33 compute-0 podman[441316]: 2025-10-14 09:58:33.678242067 +0000 UTC m=+1.477445154 container remove 45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 09:58:33 compute-0 systemd[1]: libpod-conmon-45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a.scope: Deactivated successfully.
Oct 14 09:58:33 compute-0 sudo[441207]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:33 compute-0 sudo[441374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:58:33 compute-0 sudo[441374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:33 compute-0 sudo[441374]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:33 compute-0 sudo[441399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:58:33 compute-0 sudo[441399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:33 compute-0 sudo[441399]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:33 compute-0 sudo[441424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:58:33 compute-0 sudo[441424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:33 compute-0 sudo[441424]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:34 compute-0 sudo[441449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:58:34 compute-0 sudo[441449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:34 compute-0 podman[441515]: 2025-10-14 09:58:34.379607986 +0000 UTC m=+0.038716811 container create 88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 09:58:34 compute-0 systemd[1]: Started libpod-conmon-88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26.scope.
Oct 14 09:58:34 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:58:34 compute-0 ceph-mon[74249]: pgmap v3178: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:34 compute-0 podman[441515]: 2025-10-14 09:58:34.360297772 +0000 UTC m=+0.019406617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:58:34 compute-0 podman[441515]: 2025-10-14 09:58:34.458623305 +0000 UTC m=+0.117732170 container init 88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_spence, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:58:34 compute-0 podman[441515]: 2025-10-14 09:58:34.470574028 +0000 UTC m=+0.129682893 container start 88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_spence, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 09:58:34 compute-0 podman[441515]: 2025-10-14 09:58:34.474163516 +0000 UTC m=+0.133272381 container attach 88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_spence, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:58:34 compute-0 systemd[1]: libpod-88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26.scope: Deactivated successfully.
Oct 14 09:58:34 compute-0 hardcore_spence[441531]: 167 167
Oct 14 09:58:34 compute-0 podman[441515]: 2025-10-14 09:58:34.479807085 +0000 UTC m=+0.138915930 container died 88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_spence, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 09:58:34 compute-0 conmon[441531]: conmon 88f86d6c091f84d8ea7c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26.scope/container/memory.events
Oct 14 09:58:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-efbfea3173a86a501d326ee3058809982db7246e1d5ce28b73826086250a40fd-merged.mount: Deactivated successfully.
Oct 14 09:58:34 compute-0 podman[441515]: 2025-10-14 09:58:34.524550223 +0000 UTC m=+0.183659068 container remove 88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 09:58:34 compute-0 systemd[1]: libpod-conmon-88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26.scope: Deactivated successfully.
Oct 14 09:58:34 compute-0 podman[441555]: 2025-10-14 09:58:34.730169128 +0000 UTC m=+0.060719111 container create 3c973b260dbe0698347f42cc29fc7439f1998103d52f6016fcdcf2a698a3ad4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct 14 09:58:34 compute-0 systemd[1]: Started libpod-conmon-3c973b260dbe0698347f42cc29fc7439f1998103d52f6016fcdcf2a698a3ad4b.scope.
Oct 14 09:58:34 compute-0 podman[441555]: 2025-10-14 09:58:34.699294261 +0000 UTC m=+0.029844294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:58:34 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a272adf2e1336043f8e0c366038d59ab230c3f2e8711fdc018563957f9417955/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a272adf2e1336043f8e0c366038d59ab230c3f2e8711fdc018563957f9417955/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a272adf2e1336043f8e0c366038d59ab230c3f2e8711fdc018563957f9417955/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a272adf2e1336043f8e0c366038d59ab230c3f2e8711fdc018563957f9417955/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:58:34 compute-0 podman[441555]: 2025-10-14 09:58:34.816457805 +0000 UTC m=+0.147007768 container init 3c973b260dbe0698347f42cc29fc7439f1998103d52f6016fcdcf2a698a3ad4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 09:58:34 compute-0 podman[441555]: 2025-10-14 09:58:34.831945265 +0000 UTC m=+0.162495248 container start 3c973b260dbe0698347f42cc29fc7439f1998103d52f6016fcdcf2a698a3ad4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mirzakhani, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:58:34 compute-0 podman[441555]: 2025-10-14 09:58:34.835940933 +0000 UTC m=+0.166490976 container attach 3c973b260dbe0698347f42cc29fc7439f1998103d52f6016fcdcf2a698a3ad4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:58:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:35 compute-0 nova_compute[259627]: 2025-10-14 09:58:35.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]: {
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:     "0": [
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:         {
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "devices": [
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "/dev/loop3"
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             ],
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "lv_name": "ceph_lv0",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "lv_size": "21470642176",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "name": "ceph_lv0",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "tags": {
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.cluster_name": "ceph",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.crush_device_class": "",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.encrypted": "0",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.osd_id": "0",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.type": "block",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.vdo": "0"
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             },
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "type": "block",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "vg_name": "ceph_vg0"
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:         }
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:     ],
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:     "1": [
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:         {
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "devices": [
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "/dev/loop4"
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             ],
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "lv_name": "ceph_lv1",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "lv_size": "21470642176",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "name": "ceph_lv1",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "tags": {
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.cluster_name": "ceph",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.crush_device_class": "",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.encrypted": "0",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.osd_id": "1",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.type": "block",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.vdo": "0"
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             },
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "type": "block",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "vg_name": "ceph_vg1"
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:         }
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:     ],
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:     "2": [
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:         {
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "devices": [
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "/dev/loop5"
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             ],
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "lv_name": "ceph_lv2",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "lv_size": "21470642176",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "name": "ceph_lv2",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "tags": {
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.cluster_name": "ceph",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.crush_device_class": "",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.encrypted": "0",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.osd_id": "2",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.type": "block",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:                 "ceph.vdo": "0"
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             },
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "type": "block",
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:             "vg_name": "ceph_vg2"
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:         }
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]:     ]
Oct 14 09:58:35 compute-0 dazzling_mirzakhani[441572]: }
Oct 14 09:58:35 compute-0 systemd[1]: libpod-3c973b260dbe0698347f42cc29fc7439f1998103d52f6016fcdcf2a698a3ad4b.scope: Deactivated successfully.
Oct 14 09:58:35 compute-0 podman[441555]: 2025-10-14 09:58:35.617676224 +0000 UTC m=+0.948226167 container died 3c973b260dbe0698347f42cc29fc7439f1998103d52f6016fcdcf2a698a3ad4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mirzakhani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 09:58:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a272adf2e1336043f8e0c366038d59ab230c3f2e8711fdc018563957f9417955-merged.mount: Deactivated successfully.
Oct 14 09:58:35 compute-0 podman[441555]: 2025-10-14 09:58:35.67089956 +0000 UTC m=+1.001449503 container remove 3c973b260dbe0698347f42cc29fc7439f1998103d52f6016fcdcf2a698a3ad4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mirzakhani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 09:58:35 compute-0 systemd[1]: libpod-conmon-3c973b260dbe0698347f42cc29fc7439f1998103d52f6016fcdcf2a698a3ad4b.scope: Deactivated successfully.
Oct 14 09:58:35 compute-0 sudo[441449]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:35 compute-0 sudo[441592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:58:35 compute-0 sudo[441592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:35 compute-0 sudo[441592]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:35 compute-0 sudo[441617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:58:35 compute-0 sudo[441617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:35 compute-0 sudo[441617]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:35 compute-0 sudo[441642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:58:35 compute-0 sudo[441642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:35 compute-0 sudo[441642]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:36 compute-0 sudo[441667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:58:36 compute-0 sudo[441667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:36 compute-0 podman[441733]: 2025-10-14 09:58:36.342203892 +0000 UTC m=+0.038769972 container create eb3d3daa0c905f0ab46b8b3ab694896e50109de8af58ef1d23ac60cc3acebef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 09:58:36 compute-0 systemd[1]: Started libpod-conmon-eb3d3daa0c905f0ab46b8b3ab694896e50109de8af58ef1d23ac60cc3acebef7.scope.
Oct 14 09:58:36 compute-0 podman[441733]: 2025-10-14 09:58:36.326056236 +0000 UTC m=+0.022622296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:58:36 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:58:36 compute-0 podman[441733]: 2025-10-14 09:58:36.438336771 +0000 UTC m=+0.134902851 container init eb3d3daa0c905f0ab46b8b3ab694896e50109de8af58ef1d23ac60cc3acebef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bose, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:58:36 compute-0 podman[441733]: 2025-10-14 09:58:36.449520205 +0000 UTC m=+0.146086255 container start eb3d3daa0c905f0ab46b8b3ab694896e50109de8af58ef1d23ac60cc3acebef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bose, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 09:58:36 compute-0 ceph-mon[74249]: pgmap v3179: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:36 compute-0 podman[441733]: 2025-10-14 09:58:36.453094293 +0000 UTC m=+0.149660343 container attach eb3d3daa0c905f0ab46b8b3ab694896e50109de8af58ef1d23ac60cc3acebef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bose, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:58:36 compute-0 intelligent_bose[441749]: 167 167
Oct 14 09:58:36 compute-0 podman[441733]: 2025-10-14 09:58:36.457735837 +0000 UTC m=+0.154301887 container died eb3d3daa0c905f0ab46b8b3ab694896e50109de8af58ef1d23ac60cc3acebef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bose, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:58:36 compute-0 systemd[1]: libpod-eb3d3daa0c905f0ab46b8b3ab694896e50109de8af58ef1d23ac60cc3acebef7.scope: Deactivated successfully.
Oct 14 09:58:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-75708747baacae0dfade7c36300025119f10b1062749d164639b909a524d4571-merged.mount: Deactivated successfully.
Oct 14 09:58:36 compute-0 podman[441733]: 2025-10-14 09:58:36.524104936 +0000 UTC m=+0.220670986 container remove eb3d3daa0c905f0ab46b8b3ab694896e50109de8af58ef1d23ac60cc3acebef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:58:36 compute-0 systemd[1]: libpod-conmon-eb3d3daa0c905f0ab46b8b3ab694896e50109de8af58ef1d23ac60cc3acebef7.scope: Deactivated successfully.
Oct 14 09:58:36 compute-0 podman[441774]: 2025-10-14 09:58:36.701927479 +0000 UTC m=+0.049354582 container create 89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ramanujan, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Oct 14 09:58:36 compute-0 systemd[1]: Started libpod-conmon-89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c.scope.
Oct 14 09:58:36 compute-0 podman[441774]: 2025-10-14 09:58:36.67875292 +0000 UTC m=+0.026180103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:58:36 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6ab45100fcaedd5b2b3b50c10278db80a10f338dc1878fbd45e1a37d0d476d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6ab45100fcaedd5b2b3b50c10278db80a10f338dc1878fbd45e1a37d0d476d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6ab45100fcaedd5b2b3b50c10278db80a10f338dc1878fbd45e1a37d0d476d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6ab45100fcaedd5b2b3b50c10278db80a10f338dc1878fbd45e1a37d0d476d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:58:36 compute-0 podman[441774]: 2025-10-14 09:58:36.794829298 +0000 UTC m=+0.142256421 container init 89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 09:58:36 compute-0 podman[441774]: 2025-10-14 09:58:36.810619886 +0000 UTC m=+0.158046999 container start 89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:58:36 compute-0 podman[441774]: 2025-10-14 09:58:36.8140524 +0000 UTC m=+0.161479523 container attach 89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ramanujan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 09:58:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:37 compute-0 ceph-mon[74249]: pgmap v3180: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:37 compute-0 nova_compute[259627]: 2025-10-14 09:58:37.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]: {
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:         "osd_id": 2,
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:         "type": "bluestore"
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:     },
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:         "osd_id": 1,
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:         "type": "bluestore"
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:     },
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:         "osd_id": 0,
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:         "type": "bluestore"
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]:     }
Oct 14 09:58:37 compute-0 trusting_ramanujan[441791]: }
Oct 14 09:58:37 compute-0 systemd[1]: libpod-89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c.scope: Deactivated successfully.
Oct 14 09:58:37 compute-0 systemd[1]: libpod-89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c.scope: Consumed 1.038s CPU time.
Oct 14 09:58:37 compute-0 podman[441774]: 2025-10-14 09:58:37.842218909 +0000 UTC m=+1.189646152 container died 89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ramanujan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:58:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-be6ab45100fcaedd5b2b3b50c10278db80a10f338dc1878fbd45e1a37d0d476d-merged.mount: Deactivated successfully.
Oct 14 09:58:37 compute-0 podman[441774]: 2025-10-14 09:58:37.906999888 +0000 UTC m=+1.254427001 container remove 89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 09:58:37 compute-0 systemd[1]: libpod-conmon-89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c.scope: Deactivated successfully.
Oct 14 09:58:37 compute-0 sudo[441667]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:58:37 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:58:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:58:37 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:58:37 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 2aeed1d3-ee9b-475c-8232-2b24ba52d1ae does not exist
Oct 14 09:58:37 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 19542914-d189-4b30-9dfe-892eddfaf99c does not exist
Oct 14 09:58:38 compute-0 sudo[441838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:58:38 compute-0 sudo[441838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:38 compute-0 sudo[441838]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:38 compute-0 sudo[441863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:58:38 compute-0 sudo[441863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:58:38 compute-0 sudo[441863]: pam_unix(sudo:session): session closed for user root
Oct 14 09:58:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:58:38 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:58:38 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:58:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:39 compute-0 ceph-mon[74249]: pgmap v3181: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:40 compute-0 nova_compute[259627]: 2025-10-14 09:58:40.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:41 compute-0 podman[441888]: 2025-10-14 09:58:41.664137957 +0000 UTC m=+0.082503516 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 09:58:41 compute-0 podman[441889]: 2025-10-14 09:58:41.683751298 +0000 UTC m=+0.090155203 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 14 09:58:42 compute-0 ceph-mon[74249]: pgmap v3182: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:42 compute-0 nova_compute[259627]: 2025-10-14 09:58:42.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:58:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:58:44 compute-0 ceph-mon[74249]: pgmap v3183: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:45 compute-0 nova_compute[259627]: 2025-10-14 09:58:45.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:46 compute-0 ceph-mon[74249]: pgmap v3184: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:47 compute-0 nova_compute[259627]: 2025-10-14 09:58:47.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:48 compute-0 ceph-mon[74249]: pgmap v3185: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:58:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:50 compute-0 ceph-mon[74249]: pgmap v3186: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:50 compute-0 nova_compute[259627]: 2025-10-14 09:58:50.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:52 compute-0 ceph-mon[74249]: pgmap v3187: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:52 compute-0 nova_compute[259627]: 2025-10-14 09:58:52.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:52 compute-0 nova_compute[259627]: 2025-10-14 09:58:52.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:58:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:58:54 compute-0 ceph-mon[74249]: pgmap v3188: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:55 compute-0 nova_compute[259627]: 2025-10-14 09:58:55.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:56 compute-0 ceph-mon[74249]: pgmap v3189: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:57 compute-0 nova_compute[259627]: 2025-10-14 09:58:57.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:58:58 compute-0 ceph-mon[74249]: pgmap v3190: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:58:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:58:59 compute-0 podman[441936]: 2025-10-14 09:58:59.659857844 +0000 UTC m=+0.070711756 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 09:58:59 compute-0 podman[441935]: 2025-10-14 09:58:59.669933782 +0000 UTC m=+0.085976751 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 09:59:00 compute-0 nova_compute[259627]: 2025-10-14 09:59:00.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:00 compute-0 ceph-mon[74249]: pgmap v3191: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:02 compute-0 ceph-mon[74249]: pgmap v3192: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:59:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:59:02 compute-0 nova_compute[259627]: 2025-10-14 09:59:02.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:59:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:59:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:59:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:59:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:59:03 compute-0 sshd-session[441975]: Accepted publickey for zuul from 192.168.122.10 port 33490 ssh2: ECDSA SHA256:jaGWHGBmEwGLhBs5A5z51rEw7f54kxwV4dpIRk+zLbs
Oct 14 09:59:03 compute-0 systemd-logind[799]: New session 57 of user zuul.
Oct 14 09:59:03 compute-0 systemd[1]: Started Session 57 of User zuul.
Oct 14 09:59:03 compute-0 sshd-session[441975]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 14 09:59:04 compute-0 sudo[441979]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Oct 14 09:59:04 compute-0 sudo[441979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 09:59:04 compute-0 ceph-mon[74249]: pgmap v3193: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:05 compute-0 nova_compute[259627]: 2025-10-14 09:59:05.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 09:59:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/878180481' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:59:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 09:59:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/878180481' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:59:06 compute-0 ceph-mon[74249]: pgmap v3194: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/878180481' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 09:59:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/878180481' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 09:59:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:59:07.075 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:59:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:59:07.075 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:59:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 09:59:07.076 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:59:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:07 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.22995 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:07 compute-0 nova_compute[259627]: 2025-10-14 09:59:07.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:07 compute-0 nova_compute[259627]: 2025-10-14 09:59:07.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:59:08 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.22997 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:08 compute-0 ceph-mon[74249]: pgmap v3195: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:59:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 14 09:59:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3464857549' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 14 09:59:08 compute-0 nova_compute[259627]: 2025-10-14 09:59:08.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:59:08 compute-0 nova_compute[259627]: 2025-10-14 09:59:08.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 09:59:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:09 compute-0 ceph-mon[74249]: from='client.22995 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:09 compute-0 ceph-mon[74249]: from='client.22997 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:09 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3464857549' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 14 09:59:09 compute-0 nova_compute[259627]: 2025-10-14 09:59:09.998 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:59:10 compute-0 nova_compute[259627]: 2025-10-14 09:59:10.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:10 compute-0 ceph-mon[74249]: pgmap v3196: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:12 compute-0 ovs-vsctl[442261]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 14 09:59:12 compute-0 ceph-mon[74249]: pgmap v3197: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:12 compute-0 podman[442284]: 2025-10-14 09:59:12.76927503 +0000 UTC m=+0.127128260 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 09:59:12 compute-0 podman[442283]: 2025-10-14 09:59:12.803749426 +0000 UTC m=+0.161313849 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:59:12 compute-0 nova_compute[259627]: 2025-10-14 09:59:12.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:59:13 compute-0 virtqemud[259351]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 14 09:59:13 compute-0 virtqemud[259351]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 14 09:59:13 compute-0 virtqemud[259351]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 14 09:59:13 compute-0 nova_compute[259627]: 2025-10-14 09:59:13.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:59:14 compute-0 nova_compute[259627]: 2025-10-14 09:59:14.030 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:59:14 compute-0 nova_compute[259627]: 2025-10-14 09:59:14.030 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:59:14 compute-0 nova_compute[259627]: 2025-10-14 09:59:14.030 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:59:14 compute-0 nova_compute[259627]: 2025-10-14 09:59:14.030 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 09:59:14 compute-0 nova_compute[259627]: 2025-10-14 09:59:14.030 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:59:14 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: cache status {prefix=cache status} (starting...)
Oct 14 09:59:14 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: client ls {prefix=client ls} (starting...)
Oct 14 09:59:14 compute-0 ceph-mon[74249]: pgmap v3198: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:59:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3330810155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:59:14 compute-0 lvm[442661]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 14 09:59:14 compute-0 nova_compute[259627]: 2025-10-14 09:59:14.526 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:59:14 compute-0 lvm[442661]: VG ceph_vg1 finished
Oct 14 09:59:14 compute-0 lvm[442689]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 14 09:59:14 compute-0 lvm[442689]: VG ceph_vg0 finished
Oct 14 09:59:14 compute-0 nova_compute[259627]: 2025-10-14 09:59:14.726 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 09:59:14 compute-0 nova_compute[259627]: 2025-10-14 09:59:14.728 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3458MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 09:59:14 compute-0 nova_compute[259627]: 2025-10-14 09:59:14.728 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 09:59:14 compute-0 nova_compute[259627]: 2025-10-14 09:59:14.728 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 09:59:14 compute-0 lvm[442704]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 14 09:59:14 compute-0 lvm[442704]: VG ceph_vg2 finished
Oct 14 09:59:14 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23003 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:14 compute-0 nova_compute[259627]: 2025-10-14 09:59:14.972 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 09:59:14 compute-0 nova_compute[259627]: 2025-10-14 09:59:14.973 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 09:59:15 compute-0 nova_compute[259627]: 2025-10-14 09:59:14.992 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 09:59:15 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: damage ls {prefix=damage ls} (starting...)
Oct 14 09:59:15 compute-0 kernel: block loop4: the capability attribute has been deprecated.
Oct 14 09:59:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:15 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump loads {prefix=dump loads} (starting...)
Oct 14 09:59:15 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23005 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:15 compute-0 nova_compute[259627]: 2025-10-14 09:59:15.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 09:59:15 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1590939150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:59:15 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 14 09:59:15 compute-0 nova_compute[259627]: 2025-10-14 09:59:15.468 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 09:59:15 compute-0 nova_compute[259627]: 2025-10-14 09:59:15.474 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 09:59:15 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3330810155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:59:15 compute-0 ceph-mon[74249]: from='client.23003 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:15 compute-0 ceph-mon[74249]: pgmap v3199: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:15 compute-0 ceph-mon[74249]: from='client.23005 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:15 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1590939150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 09:59:15 compute-0 nova_compute[259627]: 2025-10-14 09:59:15.509 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 09:59:15 compute-0 nova_compute[259627]: 2025-10-14 09:59:15.511 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 09:59:15 compute-0 nova_compute[259627]: 2025-10-14 09:59:15.511 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 09:59:15 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 14 09:59:15 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 14 09:59:15 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 14 09:59:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 14 09:59:15 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/383223823' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 14 09:59:16 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 14 09:59:16 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23013 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:16 compute-0 ceph-mgr[74543]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 14 09:59:16 compute-0 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T09:59:16.190+0000 7f6b82b53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 14 09:59:16 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 14 09:59:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:59:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1455265449' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:59:16 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/383223823' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 14 09:59:16 compute-0 ceph-mon[74249]: from='client.23013 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:16 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1455265449' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:59:16 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: ops {prefix=ops} (starting...)
Oct 14 09:59:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 14 09:59:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3794983232' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 14 09:59:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 14 09:59:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1679887878' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 14 09:59:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 14 09:59:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3134349536' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 14 09:59:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 14 09:59:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3339085505' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 14 09:59:17 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: session ls {prefix=session ls} (starting...)
Oct 14 09:59:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:17 compute-0 unix_chkpwd[443051]: password check failed for user (root)
Oct 14 09:59:17 compute-0 sshd-session[442966]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 14 09:59:17 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: status {prefix=status} (starting...)
Oct 14 09:59:17 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3794983232' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 14 09:59:17 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1679887878' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 14 09:59:17 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3134349536' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 14 09:59:17 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3339085505' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 14 09:59:17 compute-0 ceph-mon[74249]: pgmap v3200: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:17 compute-0 nova_compute[259627]: 2025-10-14 09:59:17.508 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:59:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 14 09:59:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/550033903' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 14 09:59:17 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23027 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:17 compute-0 nova_compute[259627]: 2025-10-14 09:59:17.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:17 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23031 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 14 09:59:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/692184495' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 14 09:59:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 14 09:59:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1174512388' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 14 09:59:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 14 09:59:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506106079' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 14 09:59:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:59:18 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/550033903' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 14 09:59:18 compute-0 ceph-mon[74249]: from='client.23027 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:18 compute-0 ceph-mon[74249]: from='client.23031 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:18 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/692184495' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 14 09:59:18 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1174512388' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 14 09:59:18 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2506106079' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 14 09:59:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 14 09:59:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1887664699' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 14 09:59:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 14 09:59:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1658988925' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 14 09:59:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 14 09:59:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3621050288' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 14 09:59:19 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23043 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:19 compute-0 ceph-mgr[74543]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 14 09:59:19 compute-0 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T09:59:19.280+0000 7f6b82b53640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 14 09:59:19 compute-0 sshd-session[442966]: Failed password for root from 91.224.92.108 port 45684 ssh2
Oct 14 09:59:19 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1887664699' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 14 09:59:19 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1658988925' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 14 09:59:19 compute-0 ceph-mon[74249]: pgmap v3201: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:19 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3621050288' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 14 09:59:19 compute-0 ceph-mon[74249]: from='client.23043 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:19 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23045 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 14 09:59:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2313547730' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 14 09:59:20 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23049 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 14 09:59:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2620577048' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 14 09:59:20 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23053 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:20 compute-0 nova_compute[259627]: 2025-10-14 09:59:20.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:20 compute-0 ceph-mon[74249]: from='client.23045 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:20 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2313547730' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 14 09:59:20 compute-0 ceph-mon[74249]: from='client.23049 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:20 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2620577048' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 14 09:59:20 compute-0 ceph-mon[74249]: from='client.23053 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:20 compute-0 unix_chkpwd[443613]: password check failed for user (root)
Oct 14 09:59:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 14 09:59:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1838274859' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 14 09:59:20 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23057 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:20 compute-0 nova_compute[259627]: 2025-10-14 09:59:20.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:59:20 compute-0 nova_compute[259627]: 2025-10-14 09:59:20.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:59:20 compute-0 nova_compute[259627]: 2025-10-14 09:59:20.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:19.796624+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 257712128 unmapped: 58433536 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2820254 data_alloc: 218103808 data_used: 2686976
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa53c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa53c00 session 0x55b3cdcb32c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce709000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce709000 session 0x55b3cdc6cd20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa51400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa51400 session 0x55b3ce0e9a40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:20.796793+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ce127c20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf610800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf610800 session 0x55b3cfba3c20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258031616 unmapped: 58114048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:21.796999+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258031616 unmapped: 58114048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:22.797601+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258031616 unmapped: 58114048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:23.799985+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4d6000/0x0/0x4ffc00000, data 0x178847b/0x1918000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258031616 unmapped: 58114048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:24.800784+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258031616 unmapped: 58114048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2896002 data_alloc: 218103808 data_used: 2686976
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:25.802153+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258031616 unmapped: 58114048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:26.802886+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258031616 unmapped: 58114048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02b400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02b400 session 0x55b3cb999860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:27.803073+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258031616 unmapped: 58114048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc48c000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc48c000 session 0x55b3cbe8c5a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4d6000/0x0/0x4ffc00000, data 0x178847b/0x1918000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:28.803272+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258031616 unmapped: 58114048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:29.803493+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc48c000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc48c000 session 0x55b3cbf223c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258039808 unmapped: 58105856 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.666875839s of 16.880754471s, submitted: 48
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cbdc7c20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2899535 data_alloc: 218103808 data_used: 2686976
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:30.803684+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x17ac47b/0x193c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512fc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfeac000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258252800 unmapped: 57892864 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:31.803929+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x17ac47b/0x193c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258252800 unmapped: 57892864 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:32.804345+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258252800 unmapped: 57892864 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:33.804604+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258252800 unmapped: 57892864 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:34.804856+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258252800 unmapped: 57892864 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2899667 data_alloc: 218103808 data_used: 2686976
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x17ac47b/0x193c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:35.804986+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258252800 unmapped: 57892864 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:36.805322+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258252800 unmapped: 57892864 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x17ac47b/0x193c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x17ac47b/0x193c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:37.805774+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258260992 unmapped: 57884672 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x17ac47b/0x193c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:38.806076+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258260992 unmapped: 57884672 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:39.806270+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x17ac47b/0x193c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258260992 unmapped: 57884672 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2899667 data_alloc: 218103808 data_used: 2686976
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:40.806600+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258260992 unmapped: 57884672 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:41.806795+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258269184 unmapped: 57876480 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:42.807089+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258629632 unmapped: 57516032 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:43.807265+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x17ac47b/0x193c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258629632 unmapped: 57516032 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:44.807514+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258629632 unmapped: 57516032 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2966707 data_alloc: 234881024 data_used: 12066816
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:45.807635+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258629632 unmapped: 57516032 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f4c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f4c00 session 0x55b3d0b64f00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfead800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfead800 session 0x55b3cb999680
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce707400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce707400 session 0x55b3ce0e9860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce707400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce707400 session 0x55b3ce58a000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc48c000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.489801407s of 16.518394470s, submitted: 7
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc48c000 session 0x55b3cbf3b4a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:46.807767+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f4c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f4c00 session 0x55b3cb998b40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cbe8c960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfead800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfead800 session 0x55b3cbdc7860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfead800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfead800 session 0x55b3ce5aef00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258875392 unmapped: 57270272 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:47.807966+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258875392 unmapped: 57270272 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:48.808131+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258875392 unmapped: 57270272 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edd68000/0x0/0x4ffc00000, data 0x1ef44ed/0x2086000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:49.808285+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258875392 unmapped: 57270272 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026439 data_alloc: 234881024 data_used: 12066816
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:50.808480+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258875392 unmapped: 57270272 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:51.808608+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263471104 unmapped: 52674560 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed35d000/0x0/0x4ffc00000, data 0x28ff4ed/0x2a91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:52.808750+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263774208 unmapped: 52371456 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2d5000/0x0/0x4ffc00000, data 0x29874ed/0x2b19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf3b5800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf3b5800 session 0x55b3d0b64780
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:53.808886+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ccacd000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02a800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263782400 unmapped: 52363264 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2d5000/0x0/0x4ffc00000, data 0x29874ed/0x2b19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:54.809062+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263790592 unmapped: 52355072 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3117295 data_alloc: 234881024 data_used: 12820480
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:55.809209+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2d5000/0x0/0x4ffc00000, data 0x29874ed/0x2b19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266592256 unmapped: 49553408 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:56.809318+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266592256 unmapped: 49553408 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:57.809455+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266592256 unmapped: 49553408 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:58.809617+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.901058197s of 12.385982513s, submitted: 139
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266608640 unmapped: 49537024 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:59.809782+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2b6000/0x0/0x4ffc00000, data 0x29a64ed/0x2b38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266608640 unmapped: 49537024 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3165899 data_alloc: 234881024 data_used: 20168704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:00.809936+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266608640 unmapped: 49537024 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:01.810066+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266608640 unmapped: 49537024 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:02.810172+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266608640 unmapped: 49537024 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:03.810331+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266608640 unmapped: 49537024 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:04.810539+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2b6000/0x0/0x4ffc00000, data 0x29a64ed/0x2b38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [0,0,0,1])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 267591680 unmapped: 48553984 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3207337 data_alloc: 234881024 data_used: 20238336
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:05.810660+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecd99000/0x0/0x4ffc00000, data 0x2ec34ed/0x3055000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 267624448 unmapped: 48521216 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:06.810794+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecd1d000/0x0/0x4ffc00000, data 0x2f3f4ed/0x30d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 267853824 unmapped: 48291840 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:07.810953+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 267853824 unmapped: 48291840 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd51c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3cb998960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf616800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf616800 session 0x55b3cde57e00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa51400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa51400 session 0x55b3cc7ec1e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:08.811092+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd51c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3cfba23c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf3b5800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.723741531s of 10.006159782s, submitted: 67
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf3b5800 session 0x55b3ce0e8f00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf616800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf616800 session 0x55b3ce126960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfead800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfead800 session 0x55b3cba943c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269221888 unmapped: 46923776 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf610c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf610c00 session 0x55b3cde565a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd51c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3ce5a25a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:09.811332+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269254656 unmapped: 46891008 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244870 data_alloc: 234881024 data_used: 20275200
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eca56000/0x0/0x4ffc00000, data 0x320455f/0x3398000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:10.811523+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269254656 unmapped: 46891008 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:11.811666+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269254656 unmapped: 46891008 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:12.811868+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269393920 unmapped: 46751744 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:13.812053+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269393920 unmapped: 46751744 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa52400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa52400 session 0x55b3ccada000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:14.812360+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269393920 unmapped: 46751744 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251004 data_alloc: 234881024 data_used: 20275200
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:15.812515+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdc64400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccacd000 session 0x55b3cbe3e3c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02a800 session 0x55b3ce10c5a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269393920 unmapped: 46751744 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eca0c000/0x0/0x4ffc00000, data 0x324d582/0x33e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf60e800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:16.812650+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf60e800 session 0x55b3cbf23e00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269418496 unmapped: 46727168 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee343000/0x0/0x4ffc00000, data 0x2557510/0x26ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:17.812793+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269549568 unmapped: 46596096 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:18.812924+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269549568 unmapped: 46596096 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:19.813062+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269549568 unmapped: 46596096 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122116 data_alloc: 234881024 data_used: 15335424
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:20.813226+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269549568 unmapped: 46596096 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee343000/0x0/0x4ffc00000, data 0x2557510/0x26ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:21.813359+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269549568 unmapped: 46596096 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:22.813470+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269549568 unmapped: 46596096 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.288921356s of 14.564796448s, submitted: 78
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:23.813672+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269557760 unmapped: 46587904 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:24.813901+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269557760 unmapped: 46587904 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122644 data_alloc: 234881024 data_used: 15335424
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:25.814096+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270606336 unmapped: 45539328 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:26.814228+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee47c000/0x0/0x4ffc00000, data 0x281f510/0x29b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273940480 unmapped: 42205184 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:27.814375+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273293312 unmapped: 42852352 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:28.814561+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273293312 unmapped: 42852352 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:29.814703+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273293312 unmapped: 42852352 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3192304 data_alloc: 234881024 data_used: 17195008
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:30.814854+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273293312 unmapped: 42852352 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee102000/0x0/0x4ffc00000, data 0x2b99510/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:31.815003+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee102000/0x0/0x4ffc00000, data 0x2b99510/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273293312 unmapped: 42852352 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:32.815154+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273293312 unmapped: 42852352 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.664649963s of 10.022263527s, submitted: 108
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:33.815319+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273293312 unmapped: 42852352 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:34.815538+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee0e0000/0x0/0x4ffc00000, data 0x2bbb510/0x2d4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc64400 session 0x55b3cbe3fc20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ce1263c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273293312 unmapped: 42852352 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3191948 data_alloc: 234881024 data_used: 17195008
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ca5d6000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:35.815687+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ca5d6000 session 0x55b3ce5a30e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271327232 unmapped: 44818432 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:36.815830+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271327232 unmapped: 44818432 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea21000/0x0/0x4ffc00000, data 0x227747b/0x2407000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:37.816041+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271335424 unmapped: 44810240 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea21000/0x0/0x4ffc00000, data 0x227747b/0x2407000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:38.816202+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271335424 unmapped: 44810240 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:39.816405+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271335424 unmapped: 44810240 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3077816 data_alloc: 234881024 data_used: 12795904
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512fc00 session 0x55b3ce5aed20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfeac000 session 0x55b3ccaddc20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:40.816583+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ca5d6000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ca5d6000 session 0x55b3cbd82b40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 262897664 unmapped: 53248000 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:41.816817+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 262897664 unmapped: 53248000 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:42.816999+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 262897664 unmapped: 53248000 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efe50000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:43.817187+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 262897664 unmapped: 53248000 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:44.817385+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 262897664 unmapped: 53248000 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2853828 data_alloc: 218103808 data_used: 2686976
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:45.817642+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 262897664 unmapped: 53248000 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:46.817807+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efe50000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 262897664 unmapped: 53248000 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:47.817967+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 262897664 unmapped: 53248000 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512c400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512c400 session 0x55b3cc7ec5a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf3b5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf3b5c00 session 0x55b3ccadc1e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f4800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f4800 session 0x55b3cdc6c3c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:48.818099+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ca5d7c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ca5d7c00 session 0x55b3cc5990e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ca5d6000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.143781662s of 15.404740334s, submitted: 75
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ca5d6000 session 0x55b3cb9992c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f4800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f4800 session 0x55b3cfba3860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf3b5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf3b5c00 session 0x55b3ce10c780
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512c400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512c400 session 0x55b3d0b641e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d174b000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174b000 session 0x55b3ce58ad20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263184384 unmapped: 52961280 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efcdd000/0x0/0x4ffc00000, data 0xfc147b/0x1151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:49.818201+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263192576 unmapped: 52953088 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2871810 data_alloc: 218103808 data_used: 2686976
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:50.818420+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263192576 unmapped: 52953088 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:51.818560+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efcdd000/0x0/0x4ffc00000, data 0xfc147b/0x1151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263192576 unmapped: 52953088 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:52.830913+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263192576 unmapped: 52953088 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf4e6c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e6c00 session 0x55b3ccaa05a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:53.831703+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cb6afc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70cc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263192576 unmapped: 52953088 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:54.831860+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263192576 unmapped: 52953088 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2872450 data_alloc: 218103808 data_used: 2752512
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:55.831999+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263200768 unmapped: 52944896 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:56.832148+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efcdd000/0x0/0x4ffc00000, data 0xfc147b/0x1151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263200768 unmapped: 52944896 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:57.832376+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263208960 unmapped: 52936704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:58.832525+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263208960 unmapped: 52936704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efcdd000/0x0/0x4ffc00000, data 0xfc147b/0x1151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:59.832666+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263208960 unmapped: 52936704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2882050 data_alloc: 218103808 data_used: 4030464
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efcdd000/0x0/0x4ffc00000, data 0xfc147b/0x1151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:00.832826+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263208960 unmapped: 52936704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf611000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf611000 session 0x55b3cbe8dc20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd60c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd60c00 session 0x55b3cde57860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ccbbfc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbfc00 session 0x55b3cba95e00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfbc0800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0800 session 0x55b3cd3b23c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:01.832988+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfbc0800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.898473740s of 12.925726891s, submitted: 7
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0800 session 0x55b3ccadc000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ccbbfc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbfc00 session 0x55b3cdc6da40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd60c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd60c00 session 0x55b3cbe8c000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf4e6c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e6c00 session 0x55b3cb9414a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf611000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf611000 session 0x55b3cc7ff860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263249920 unmapped: 52895744 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:02.833182+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263249920 unmapped: 52895744 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:03.833406+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263249920 unmapped: 52895744 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:04.833560+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 262979584 unmapped: 53166080 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057502 data_alloc: 218103808 data_used: 4022272
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:05.833691+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee6a4000/0x0/0x4ffc00000, data 0x25f04ed/0x2782000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce139400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce139400 session 0x55b3ce0e8000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263086080 unmapped: 53059584 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:06.833834+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee6a4000/0x0/0x4ffc00000, data 0x25f04ed/0x2782000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd52000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd52000 session 0x55b3ce10cd20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 264192000 unmapped: 51953664 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf612800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf612800 session 0x55b3cbe3e5a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ccbbec00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:07.833942+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbec00 session 0x55b3ce127a40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02c000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263389184 unmapped: 52756480 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cd850800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:08.834076+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263389184 unmapped: 52756480 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:09.834245+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 265953280 unmapped: 50192384 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3159095 data_alloc: 234881024 data_used: 18636800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:10.835070+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee6a1000/0x0/0x4ffc00000, data 0x25fa4fd/0x278d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 265953280 unmapped: 50192384 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:11.835242+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 265953280 unmapped: 50192384 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:12.835450+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.488649368s of 10.946557045s, submitted: 125
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee6a1000/0x0/0x4ffc00000, data 0x25fa4fd/0x278d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 265953280 unmapped: 50192384 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:13.835638+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 265953280 unmapped: 50192384 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:14.835819+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 265953280 unmapped: 50192384 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3159271 data_alloc: 234881024 data_used: 18636800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:15.836101+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 265953280 unmapped: 50192384 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:16.836261+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 265953280 unmapped: 50192384 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:17.836424+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 265953280 unmapped: 50192384 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee6a1000/0x0/0x4ffc00000, data 0x25fa4fd/0x278d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:18.836634+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266289152 unmapped: 49856512 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:19.836807+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270188544 unmapped: 45957120 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3231281 data_alloc: 234881024 data_used: 19087360
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:20.837041+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5c800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5c800 session 0x55b3ce10c3c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd52000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd52000 session 0x55b3ce5a32c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ccbbec00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbec00 session 0x55b3ce0e8000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270671872 unmapped: 45473792 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce139400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce139400 session 0x55b3cb941c20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf612800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf612800 session 0x55b3d0b65680
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:21.837225+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270671872 unmapped: 45473792 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:22.837384+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed414000/0x0/0x4ffc00000, data 0x388655f/0x3a1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270671872 unmapped: 45473792 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:23.837521+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270671872 unmapped: 45473792 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:24.837685+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf614c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf614c00 session 0x55b3ce58ab40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270671872 unmapped: 45473792 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3315168 data_alloc: 234881024 data_used: 19079168
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:25.837945+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ccbbe400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbe400 session 0x55b3cdcb3680
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.795685768s of 13.264044762s, submitted: 137
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf4e7000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e7000 session 0x55b3cdcb2000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270688256 unmapped: 45457408 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d174b000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174b000 session 0x55b3ce0e9e00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:26.838112+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa52800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd6f400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed3f3000/0x0/0x4ffc00000, data 0x38a755f/0x3a3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270704640 unmapped: 45441024 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:27.838343+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed3f3000/0x0/0x4ffc00000, data 0x38a755f/0x3a3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269688832 unmapped: 46456832 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:28.838467+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272424960 unmapped: 43720704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:29.838618+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272424960 unmapped: 43720704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3370913 data_alloc: 234881024 data_used: 25350144
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:30.838801+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272424960 unmapped: 43720704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:31.838972+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272424960 unmapped: 43720704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:32.839147+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272424960 unmapped: 43720704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:33.839285+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed3f3000/0x0/0x4ffc00000, data 0x38a755f/0x3a3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02c000 session 0x55b3cb999c20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd850800 session 0x55b3cdc6d680
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ccbbe400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272424960 unmapped: 43720704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:34.839413+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbe400 session 0x55b3cb9981e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271286272 unmapped: 44859392 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100258 data_alloc: 234881024 data_used: 13279232
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:35.839687+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271286272 unmapped: 44859392 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:36.839811+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.091932297s of 11.313192368s, submitted: 63
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271286272 unmapped: 44859392 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:37.839965+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271286272 unmapped: 44859392 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:38.840166+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeac8000/0x0/0x4ffc00000, data 0x21d54dd/0x2366000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272482304 unmapped: 43663360 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee987000/0x0/0x4ffc00000, data 0x23074dd/0x2498000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:39.840299+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee987000/0x0/0x4ffc00000, data 0x23074dd/0x2498000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3124442 data_alloc: 234881024 data_used: 13328384
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee96f000/0x0/0x4ffc00000, data 0x23174dd/0x24a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:40.840421+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee96f000/0x0/0x4ffc00000, data 0x23174dd/0x24a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:41.840560+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee96f000/0x0/0x4ffc00000, data 0x23174dd/0x24a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:42.840691+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:43.840769+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:44.840933+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3124442 data_alloc: 234881024 data_used: 13328384
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:45.841082+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee96f000/0x0/0x4ffc00000, data 0x23174dd/0x24a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:46.841224+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:47.841440+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:48.841609+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:49.841764+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3124618 data_alloc: 234881024 data_used: 13332480
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:50.841919+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee96f000/0x0/0x4ffc00000, data 0x23174dd/0x24a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.719812393s of 13.929898262s, submitted: 55
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa52800 session 0x55b3d0b64960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f400 session 0x55b3ce5ae5a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272867328 unmapped: 43278336 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d1d14c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:51.842136+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d14c00 session 0x55b3cbf221e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270630912 unmapped: 45514752 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:52.842315+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef3ca000/0x0/0x4ffc00000, data 0x18d347b/0x1a63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270630912 unmapped: 45514752 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:53.842529+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270630912 unmapped: 45514752 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:54.842926+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3cbe8de00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70cc00 session 0x55b3cbf23c20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270630912 unmapped: 45514752 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978292 data_alloc: 218103808 data_used: 5054464
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:55.843079+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cb6afc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3cde57e00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270630912 unmapped: 45514752 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:56.843237+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ccbbe400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbe400 session 0x55b3ce58a1e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd6f400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f400 session 0x55b3cbe8c960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa52800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa52800 session 0x55b3d0b64780
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa52800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa52800 session 0x55b3cb23e5a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cb6afc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3ce127e00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efe50000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270942208 unmapped: 45203456 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:57.843418+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270942208 unmapped: 45203456 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:58.843613+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270942208 unmapped: 45203456 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:59.843741+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270942208 unmapped: 45203456 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925672 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:00.843878+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef9f3000/0x0/0x4ffc00000, data 0x12ab47b/0x143b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5c00 session 0x55b3cdc6da40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270942208 unmapped: 45203456 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:01.844057+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d174c000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdce4c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.365673065s of 10.588297844s, submitted: 55
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef9cf000/0x0/0x4ffc00000, data 0x12cf47b/0x145f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270942208 unmapped: 45203456 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:02.844186+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:03.844337+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:04.844627+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef9cf000/0x0/0x4ffc00000, data 0x12cf47b/0x145f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2962481 data_alloc: 218103808 data_used: 7176192
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:05.844769+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:06.844939+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:07.845068+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:08.845198+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:09.845367+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2962481 data_alloc: 218103808 data_used: 7176192
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:10.845508+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef9cf000/0x0/0x4ffc00000, data 0x12cf47b/0x145f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:11.845682+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.791116714s of 10.793682098s, submitted: 1
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:12.845826+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273645568 unmapped: 42500096 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:13.845942+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274276352 unmapped: 41869312 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:14.846119+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeec4000/0x0/0x4ffc00000, data 0x1dcc47b/0x1f5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274415616 unmapped: 41730048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3067957 data_alloc: 218103808 data_used: 8536064
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:15.846253+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274415616 unmapped: 41730048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:16.846440+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274415616 unmapped: 41730048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:17.846624+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274415616 unmapped: 41730048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:18.846799+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274415616 unmapped: 41730048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:19.846935+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeec4000/0x0/0x4ffc00000, data 0x1dcc47b/0x1f5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274096128 unmapped: 42049536 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3056245 data_alloc: 218103808 data_used: 8536064
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:20.847094+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274096128 unmapped: 42049536 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:21.847243+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274096128 unmapped: 42049536 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:22.847396+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274096128 unmapped: 42049536 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:23.847600+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274096128 unmapped: 42049536 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:24.847795+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeecf000/0x0/0x4ffc00000, data 0x1dcf47b/0x1f5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274096128 unmapped: 42049536 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3056245 data_alloc: 218103808 data_used: 8536064
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:25.847960+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274104320 unmapped: 42041344 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:26.848169+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274104320 unmapped: 42041344 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:27.848367+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274104320 unmapped: 42041344 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:28.848492+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce0e4c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.782558441s of 16.128824234s, submitted: 114
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce0e4c00 session 0x55b3d0b64d20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf60fc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf60fc00 session 0x55b3cbf223c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cb6afc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3cbdc7860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeecf000/0x0/0x4ffc00000, data 0x1dcf47b/0x1f5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5c00 session 0x55b3ce10c780
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce0e4c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce0e4c00 session 0x55b3ce126960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274112512 unmapped: 46235648 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa52800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:29.848651+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa52800 session 0x55b3cba95e00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ccbbec00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbec00 session 0x55b3ce126f00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ccbbec00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbec00 session 0x55b3ce569860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cb6afc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3cd3b32c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5c00 session 0x55b3ce0e8b40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee83c000/0x0/0x4ffc00000, data 0x24614dd/0x25f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274112512 unmapped: 46235648 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3125818 data_alloc: 218103808 data_used: 8536064
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:30.849580+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274112512 unmapped: 46235648 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:31.849768+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274120704 unmapped: 46227456 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:32.849931+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274120704 unmapped: 46227456 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:33.850079+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274120704 unmapped: 46227456 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:34.850275+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee62f000/0x0/0x4ffc00000, data 0x266e4dd/0x27ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274120704 unmapped: 46227456 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3125818 data_alloc: 218103808 data_used: 8536064
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:35.850623+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f4000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f4000 session 0x55b3cbdc74a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274120704 unmapped: 46227456 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:36.850763+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02dc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfeadc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274128896 unmapped: 46219264 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee62f000/0x0/0x4ffc00000, data 0x266e4dd/0x27ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:37.850998+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274128896 unmapped: 46219264 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:38.851183+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ca5d6800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.248894691s of 10.399340630s, submitted: 34
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ca5d6800 session 0x55b3ccadb0e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee62f000/0x0/0x4ffc00000, data 0x266e4dd/0x27ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cb6afc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274153472 unmapped: 46194688 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:39.851334+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274153472 unmapped: 46194688 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3143644 data_alloc: 218103808 data_used: 10674176
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:40.851458+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 45899776 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:41.851659+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 45899776 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:42.851832+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee62e000/0x0/0x4ffc00000, data 0x266e500/0x2800000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 45899776 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:43.852165+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274456576 unmapped: 45891584 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:44.852431+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:45.852841+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274456576 unmapped: 45891584 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3193564 data_alloc: 234881024 data_used: 17571840
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee62e000/0x0/0x4ffc00000, data 0x266e500/0x2800000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:46.852989+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274456576 unmapped: 45891584 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:47.853200+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274456576 unmapped: 45891584 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:48.853599+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274456576 unmapped: 45891584 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.857639313s of 10.003479004s, submitted: 19
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:49.853836+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278192128 unmapped: 42156032 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:50.854106+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278331392 unmapped: 42016768 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3285960 data_alloc: 234881024 data_used: 18231296
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edc33000/0x0/0x4ffc00000, data 0x3069500/0x31fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [0,0,0,3])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:51.854365+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 41238528 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:52.855055+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 41238528 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed842000/0x0/0x4ffc00000, data 0x345a500/0x35ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:53.855316+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 41205760 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:54.855607+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 41205760 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:55.855887+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279207936 unmapped: 41140224 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326758 data_alloc: 234881024 data_used: 18632704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02dc00 session 0x55b3cba94000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfeadc00 session 0x55b3cdc6cd20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:56.856087+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279216128 unmapped: 41132032 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdc63c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed836000/0x0/0x4ffc00000, data 0x3466500/0x35f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc63c00 session 0x55b3cb986d20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:57.856280+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279232512 unmapped: 41115648 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:58.856427+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279232512 unmapped: 41115648 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:59.856610+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279232512 unmapped: 41115648 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174c000 session 0x55b3cdc6cf00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.617760658s of 11.051534653s, submitted: 118
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce4c00 session 0x55b3cbe8d860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf614400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:00.856731+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276316160 unmapped: 44032000 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053456 data_alloc: 218103808 data_used: 10018816
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf614400 session 0x55b3cbd421e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:01.856885+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44023808 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:02.857071+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44023808 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef3e5000/0x0/0x4ffc00000, data 0x18b7500/0x1a49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:03.857346+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44023808 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:04.857744+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44023808 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:05.857994+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44023808 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046420 data_alloc: 218103808 data_used: 9908224
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:06.858350+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfeac400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44023808 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef3e5000/0x0/0x4ffc00000, data 0x18b7500/0x1a49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [0,0,3,2,1])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfeac400 session 0x55b3ccadc1e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfbc0000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0000 session 0x55b3cbe3e960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d1d16800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d16800 session 0x55b3cb99b860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf615000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf615000 session 0x55b3cb9414a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf615000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf615000 session 0x55b3cfba3680
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:07.858533+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276652032 unmapped: 43696128 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeb56000/0x0/0x4ffc00000, data 0x2145562/0x22d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:08.858703+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276652032 unmapped: 43696128 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:09.858968+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276652032 unmapped: 43696128 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:10.859108+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276652032 unmapped: 43696128 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122233 data_alloc: 218103808 data_used: 9908224
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:11.859266+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276652032 unmapped: 43696128 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:12.859519+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276652032 unmapped: 43696128 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:13.859680+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276660224 unmapped: 43687936 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeb56000/0x0/0x4ffc00000, data 0x2145562/0x22d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:14.859905+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276660224 unmapped: 43687936 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:15.860045+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276660224 unmapped: 43687936 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122233 data_alloc: 218103808 data_used: 9908224
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:16.860191+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeb56000/0x0/0x4ffc00000, data 0x2145562/0x22d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276660224 unmapped: 43687936 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf4e6c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e6c00 session 0x55b3ce58ad20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:17.861064+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276660224 unmapped: 43687936 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfbc0800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512d000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:18.861237+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 43679744 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeb56000/0x0/0x4ffc00000, data 0x2145562/0x22d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:19.861394+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 43679744 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:20.861516+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122233 data_alloc: 218103808 data_used: 9908224
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 43679744 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:21.861715+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 43671552 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:22.861844+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 43671552 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.100730896s of 23.290626526s, submitted: 66
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:23.861999+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 43671552 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:24.862200+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb5800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 43671552 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeb54000/0x0/0x4ffc00000, data 0x2146562/0x22d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [0,0,0,0,0,2])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3ce5af680
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf615c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf615c00 session 0x55b3cdc6c5a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdce5000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5000 session 0x55b3cfba34a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb5800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3ce5a34a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdce5000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5000 session 0x55b3d0b65c20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:25.862276+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3141437 data_alloc: 218103808 data_used: 9908224
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276635648 unmapped: 43712512 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee855000/0x0/0x4ffc00000, data 0x2446562/0x25d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:26.862391+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276635648 unmapped: 43712512 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:27.862548+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278315008 unmapped: 42033152 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:28.862670+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278315008 unmapped: 42033152 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:29.862863+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278315008 unmapped: 42033152 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf615000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf615000 session 0x55b3cc7ec1e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:30.862999+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdce5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce707800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3195677 data_alloc: 234881024 data_used: 17453056
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278315008 unmapped: 42033152 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:31.863164+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278315008 unmapped: 42033152 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee855000/0x0/0x4ffc00000, data 0x2446562/0x25d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:32.863386+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279511040 unmapped: 40837120 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:33.863544+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279511040 unmapped: 40837120 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee855000/0x0/0x4ffc00000, data 0x2446562/0x25d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:34.863726+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279511040 unmapped: 40837120 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:35.863874+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217917 data_alloc: 234881024 data_used: 20598784
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279511040 unmapped: 40837120 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:36.864078+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279511040 unmapped: 40837120 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.495318413s of 13.531723976s, submitted: 4
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:37.864222+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283041792 unmapped: 37306368 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:38.864362+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283328512 unmapped: 37019648 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edf13000/0x0/0x4ffc00000, data 0x2d87562/0x2f1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:39.864499+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283328512 unmapped: 37019648 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:40.864645+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3298665 data_alloc: 234881024 data_used: 21823488
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283328512 unmapped: 37019648 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:41.864772+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 34955264 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:42.864928+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 34955264 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:43.865097+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 34955264 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edb38000/0x0/0x4ffc00000, data 0x3163562/0x32f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:44.865289+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 34955264 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:45.865442+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323501 data_alloc: 234881024 data_used: 21856256
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 34947072 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:46.865567+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 34947072 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:47.866334+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 34947072 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:48.866806+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edb17000/0x0/0x4ffc00000, data 0x3184562/0x3317000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 34947072 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:49.867613+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.379812241s of 12.855323792s, submitted: 129
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0800 session 0x55b3cdcb30e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512d000 session 0x55b3cb23e5a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 34947072 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb5800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3d0b64780
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:50.867896+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3129701 data_alloc: 234881024 data_used: 13082624
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283082752 unmapped: 37265408 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:51.868277+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283082752 unmapped: 37265408 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eed07000/0x0/0x4ffc00000, data 0x1f94500/0x2126000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:52.868469+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283082752 unmapped: 37265408 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:53.868704+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283090944 unmapped: 37257216 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:54.868899+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eed07000/0x0/0x4ffc00000, data 0x1f94500/0x2126000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283090944 unmapped: 37257216 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3ce126780
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5c00 session 0x55b3cb9990e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa51000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:55.869079+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998871 data_alloc: 218103808 data_used: 5861376
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283099136 unmapped: 37249024 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa51000 session 0x55b3ccadd680
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:56.869260+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283115520 unmapped: 37232640 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce0e4000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce0e4000 session 0x55b3cba941e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cb6afc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3ccadd680
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb5800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3d0b64780
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5c00 session 0x55b3cdcb30e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa51000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa51000 session 0x55b3cdc6c5a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:57.869428+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283164672 unmapped: 41385984 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:58.869572+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283164672 unmapped: 41385984 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea70000/0x0/0x4ffc00000, data 0x222d47b/0x23bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:59.869849+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283164672 unmapped: 41385984 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cd850800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd850800 session 0x55b3ce58ad20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:00.870076+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3097520 data_alloc: 218103808 data_used: 5857280
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cd850800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd850800 session 0x55b3cfba3680
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283164672 unmapped: 41385984 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cb6afc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3cb9414a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:01.870254+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb5800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.632447243s of 11.898548126s, submitted: 70
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3cb99b860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283164672 unmapped: 41385984 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:02.870421+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283164672 unmapped: 41385984 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa51000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea6f000/0x0/0x4ffc00000, data 0x222d4ae/0x23bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:03.870607+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283164672 unmapped: 41385984 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:04.870770+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 40271872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:05.870965+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3196983 data_alloc: 234881024 data_used: 17985536
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 40271872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:06.871112+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 40271872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:07.871236+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 40271872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:08.871391+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 40271872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea6f000/0x0/0x4ffc00000, data 0x222d4ae/0x23bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:09.871628+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 40271872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:10.871766+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3196983 data_alloc: 234881024 data_used: 17985536
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 40271872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:11.871962+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 40271872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea6f000/0x0/0x4ffc00000, data 0x222d4ae/0x23bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:12.872160+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 40271872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea6f000/0x0/0x4ffc00000, data 0x222d4ae/0x23bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.750034332s of 11.771839142s, submitted: 6
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:13.875118+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 37773312 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:14.875256+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38445056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:15.875635+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3260713 data_alloc: 234881024 data_used: 18665472
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38445056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:16.875865+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38445056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:17.876137+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38445056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:18.876264+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38445056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x29f74ae/0x2b89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:19.876552+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38445056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x29f74ae/0x2b89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:20.876776+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3260729 data_alloc: 234881024 data_used: 18665472
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38445056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5c00 session 0x55b3cbd421e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa51000 session 0x55b3ce5ae5a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:21.881183+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cb6afc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282034176 unmapped: 42516480 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3cbdc7c20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:22.881375+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282034176 unmapped: 42516480 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:23.881597+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282034176 unmapped: 42516480 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:24.881897+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282034176 unmapped: 42516480 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:25.882065+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x152a47b/0x16ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5c00 session 0x55b3cbf23860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce707800 session 0x55b3cf5acf00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009038 data_alloc: 218103808 data_used: 5857280
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02dc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282034176 unmapped: 42516480 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.279493332s of 12.683292389s, submitted: 106
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02dc00 session 0x55b3ccadba40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:26.882279+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45424640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:27.882484+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45424640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:28.882703+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45424640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:29.883122+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45424640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:30.883310+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2938558 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45424640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:31.883543+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45424640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:32.883720+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45416448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:33.883906+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45416448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfbc0000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0000 session 0x55b3cc598000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfbc0000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:34.884104+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0000 session 0x55b3ce10c960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cb6afc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3cc7fe780
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdce5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5c00 session 0x55b3ce5694a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce707800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce707800 session 0x55b3cf5ad860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02dc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02dc00 session 0x55b3cbe8c000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02dc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02dc00 session 0x55b3cba95e00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cb6afc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3cd3b32c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf4e7800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e7800 session 0x55b3d0b65c20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:35.884324+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2990635 data_alloc: 218103808 data_used: 2686976
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:36.884518+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef374000/0x0/0x4ffc00000, data 0x15184ed/0x16aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:37.884729+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:38.884918+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef374000/0x0/0x4ffc00000, data 0x15184ed/0x16aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:39.885077+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef374000/0x0/0x4ffc00000, data 0x15184ed/0x16aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:40.885203+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2990635 data_alloc: 218103808 data_used: 2686976
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279150592 unmapped: 45400064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef374000/0x0/0x4ffc00000, data 0x15184ed/0x16aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:41.885413+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef374000/0x0/0x4ffc00000, data 0x15184ed/0x16aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d174b800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174b800 session 0x55b3cb941e00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279150592 unmapped: 45400064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfbc0800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0800 session 0x55b3ce10cb40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:42.885585+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279150592 unmapped: 45400064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:43.885748+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfbc0800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0800 session 0x55b3d0b645a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cb6afc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.588605881s of 17.711309433s, submitted: 27
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3cde570e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279150592 unmapped: 45400064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02dc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf4e7800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:44.885935+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279150592 unmapped: 45400064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:45.886101+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3002713 data_alloc: 218103808 data_used: 4149248
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279158784 unmapped: 45391872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:46.886250+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef373000/0x0/0x4ffc00000, data 0x15184fd/0x16ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:47.886412+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:48.886599+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:49.886776+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:50.886919+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3042873 data_alloc: 218103808 data_used: 9805824
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:51.887106+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef373000/0x0/0x4ffc00000, data 0x15184fd/0x16ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef373000/0x0/0x4ffc00000, data 0x15184fd/0x16ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:52.887191+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:53.887407+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:54.887591+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:55.887733+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.889083862s of 11.892253876s, submitted: 1
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048489 data_alloc: 218103808 data_used: 9805824
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:56.887871+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:57.888002+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:58.888174+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:59.888337+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:00.888515+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3050563 data_alloc: 218103808 data_used: 9805824
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:01.888652+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:02.888786+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:03.888953+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:04.889260+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:05.889454+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3050563 data_alloc: 218103808 data_used: 9805824
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:06.889607+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:07.889734+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:08.889903+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 35K writes, 139K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.74 writes per sync, written: 0.13 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4462 writes, 17K keys, 4462 commit groups, 1.0 writes per commit group, ingest: 20.06 MB, 0.03 MB/s
                                           Interval WAL: 4462 writes, 1787 syncs, 2.50 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:09.890064+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:10.890224+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3050563 data_alloc: 218103808 data_used: 9805824
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd6e800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.724911690s of 15.823020935s, submitted: 21
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:11.890351+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6e800 session 0x55b3cbf234a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70b000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70b000 session 0x55b3ccad7c20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d1d14400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d14400 session 0x55b3ccadbe00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d1d14400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,1])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d14400 session 0x55b3ce1261e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cb6afc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3ce58bc20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45416448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1f3000/0x0/0x4ffc00000, data 0x169755f/0x182b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:12.890509+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45416448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:13.890631+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:14.890815+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:15.890976+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3065094 data_alloc: 218103808 data_used: 9809920
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:16.891140+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:17.891325+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1f3000/0x0/0x4ffc00000, data 0x169755f/0x182b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb4c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb4c00 session 0x55b3ce5af680
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:18.891483+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02c000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02c000 session 0x55b3cde57680
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:19.891659+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd6f000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3cba94000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd6f000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3ccadb0e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279453696 unmapped: 45096960 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:20.891845+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf613800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf611400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3068328 data_alloc: 218103808 data_used: 9809920
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279453696 unmapped: 45096960 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:21.892061+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1ce000/0x0/0x4ffc00000, data 0x16bb56f/0x1850000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:22.892177+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:23.892291+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:24.892492+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:25.892679+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1ce000/0x0/0x4ffc00000, data 0x16bb56f/0x1850000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3073608 data_alloc: 218103808 data_used: 10457088
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:26.892922+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:27.893109+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.005136490s of 16.128065109s, submitted: 29
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:28.893309+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1cc000/0x0/0x4ffc00000, data 0x16bc56f/0x1851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:29.893546+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:30.893736+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3073916 data_alloc: 218103808 data_used: 10457088
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:31.893872+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eebb8000/0x0/0x4ffc00000, data 0x1cd156f/0x1e66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279379968 unmapped: 45170688 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:32.894098+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279257088 unmapped: 45293568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:33.894268+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:34.894464+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:35.894616+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3146050 data_alloc: 218103808 data_used: 10694656
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:36.894787+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea54000/0x0/0x4ffc00000, data 0x1e2d56f/0x1fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:37.894942+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:38.895081+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.522329330s of 11.824616432s, submitted: 88
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:39.895253+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:40.895424+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea3d000/0x0/0x4ffc00000, data 0x1e4c56f/0x1fe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3140686 data_alloc: 218103808 data_used: 10698752
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:41.895615+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:42.895782+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:43.895953+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:44.896234+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613800 session 0x55b3cdc6cd20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf611400 session 0x55b3cb9981e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cd84ec00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:45.896365+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84ec00 session 0x55b3cbe8dc20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061845 data_alloc: 218103808 data_used: 9809920
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:46.896508+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef0d8000/0x0/0x4ffc00000, data 0x15ca4fd/0x175d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:47.896658+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:48.896821+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:49.897000+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:50.897218+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061845 data_alloc: 218103808 data_used: 9809920
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:51.897443+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef0d8000/0x0/0x4ffc00000, data 0x15ca4fd/0x175d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:52.897571+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.392840385s of 13.488458633s, submitted: 24
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02dc00 session 0x55b3ce5af4a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e7800 session 0x55b3cd3b2f00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02dc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:53.897683+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02dc00 session 0x55b3cd3b30e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:54.897857+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:55.898051+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956124 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:56.898173+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:57.898320+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa3f000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:58.898519+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa3f000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:59.898752+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:00.898896+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956124 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:01.899093+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa3f000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:02.899272+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:03.899423+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:04.899606+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:05.899773+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa3f000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956124 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:06.899932+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:07.900085+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.573257446s of 15.623365402s, submitted: 15
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:08.900221+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 52625408 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:09.900390+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:10.900555+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:11.900707+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:12.900881+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:13.901161+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:14.901397+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:15.901536+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:16.901692+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:17.901898+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:18.902102+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:19.902335+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:20.902488+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:21.902685+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:22.902823+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:23.902976+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:24.903194+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271974400 unmapped: 52576256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:25.903366+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271974400 unmapped: 52576256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:26.903516+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271974400 unmapped: 52576256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:27.903706+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271982592 unmapped: 52568064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:28.903890+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271982592 unmapped: 52568064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:29.904070+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271982592 unmapped: 52568064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:30.904250+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271982592 unmapped: 52568064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:31.904488+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:32.904695+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:33.904846+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:34.905058+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdc62c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.134544373s of 26.442840576s, submitted: 90
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc62c00 session 0x55b3ccadba40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd6fc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6fc00 session 0x55b3cbf23860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa50000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa50000 session 0x55b3cbdc7c20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa50000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa50000 session 0x55b3cbd421e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512e400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512e400 session 0x55b3cb99b860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:35.905254+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984026 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:36.905450+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:37.905591+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:38.905747+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:39.905910+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:40.906081+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984026 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:41.906215+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:42.906410+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:43.906618+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:44.906804+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70d800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d800 session 0x55b3cb9414a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70c800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d135c000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:45.906963+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984026 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:46.907120+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:47.907263+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:48.907411+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:49.907852+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:50.908060+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010266 data_alloc: 218103808 data_used: 6356992
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:51.908232+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:52.908414+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:53.909087+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:54.909283+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:55.909585+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272015360 unmapped: 52535296 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.337787628s of 21.399259567s, submitted: 5
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3042632 data_alloc: 218103808 data_used: 6356992
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:56.909750+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275701760 unmapped: 48848896 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:57.909986+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275709952 unmapped: 48840704 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:58.910477+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:59.910789+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:00.911129+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081410 data_alloc: 218103808 data_used: 6582272
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:01.911431+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:02.911630+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:03.911935+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:04.912185+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:05.912352+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081410 data_alloc: 218103808 data_used: 6582272
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:06.912496+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:07.912739+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:08.912907+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:09.913147+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:10.913272+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd6f000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3ce5a25a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02ac00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02ac00 session 0x55b3d0b64b40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02ac00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02ac00 session 0x55b3cb986960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd6f000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3ce569a40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70d800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.855088234s of 15.102807999s, submitted: 57
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3082882 data_alloc: 218103808 data_used: 6582272
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:11.913485+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275603456 unmapped: 48947200 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d800 session 0x55b3cfba2d20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa50000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa50000 session 0x55b3ce39d0e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512e400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512e400 session 0x55b3ce1274a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512e400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:12.913606+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512e400 session 0x55b3cdc6d0e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd6f000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3ce58ab40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275857408 unmapped: 48693248 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:13.913770+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee928000/0x0/0x4ffc00000, data 0x1f654dd/0x20f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:14.913991+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:15.914175+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee928000/0x0/0x4ffc00000, data 0x1f654dd/0x20f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3126608 data_alloc: 218103808 data_used: 6582272
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:16.914343+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:17.914585+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:18.914786+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee928000/0x0/0x4ffc00000, data 0x1f654dd/0x20f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:19.914945+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:20.915138+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd51c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3cbe8c780
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:21.915277+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3126757 data_alloc: 218103808 data_used: 6582272
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275808256 unmapped: 48742400 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f4c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.413484573s of 10.376444817s, submitted: 44
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d1d15800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:22.915434+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275808256 unmapped: 48742400 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:23.915655+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275808256 unmapped: 48742400 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:24.915881+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275808256 unmapped: 48742400 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee927000/0x0/0x4ffc00000, data 0x1f65500/0x20f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:25.916286+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:26.916486+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3157769 data_alloc: 218103808 data_used: 10874880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee927000/0x0/0x4ffc00000, data 0x1f65500/0x20f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:27.916649+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:28.916828+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:29.917099+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:30.917217+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee925000/0x0/0x4ffc00000, data 0x1f66500/0x20f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:31.917373+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3158077 data_alloc: 218103808 data_used: 10874880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 14 09:59:21 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/755925548' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:32.917505+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee925000/0x0/0x4ffc00000, data 0x1f66500/0x20f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:33.917694+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.610369682s of 12.620978355s, submitted: 2
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:34.918069+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276971520 unmapped: 47579136 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:35.918257+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276971520 unmapped: 47579136 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:36.918348+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3201721 data_alloc: 218103808 data_used: 10940416
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:37.918492+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed285000/0x0/0x4ffc00000, data 0x2467500/0x25f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:38.918617+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:39.918784+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:40.918957+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:41.919139+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3201721 data_alloc: 218103808 data_used: 10940416
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:42.919278+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277331968 unmapped: 47218688 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:43.919459+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277331968 unmapped: 47218688 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed268000/0x0/0x4ffc00000, data 0x2484500/0x2616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:44.919671+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277331968 unmapped: 47218688 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed268000/0x0/0x4ffc00000, data 0x2484500/0x2616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:45.919845+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed268000/0x0/0x4ffc00000, data 0x2484500/0x2616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:46.919965+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3200861 data_alloc: 218103808 data_used: 10952704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.501975060s of 12.738298416s, submitted: 59
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f4c00 session 0x55b3cb987860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d15800 session 0x55b3ce126f00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd51c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:47.920106+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3ccaa0780
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:48.920252+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:49.920440+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edce3000/0x0/0x4ffc00000, data 0x1a0a47b/0x1b9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:50.920589+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70c800 session 0x55b3cfba3680
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d135c000 session 0x55b3cbdc7680
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70d400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:51.920715+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276422656 unmapped: 48128000 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d400 session 0x55b3d0b641e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:52.920983+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276422656 unmapped: 48128000 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:53.921258+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276422656 unmapped: 48128000 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:54.921586+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:55.921736+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:56.921912+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:57.922107+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:58.922271+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:59.922429+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:00.922545+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:01.922741+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:02.922880+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:03.923091+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:04.923317+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:05.923497+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:06.923698+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:07.923901+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:08.924119+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:09.924273+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:10.924439+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:11.924632+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:12.924852+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:13.925142+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:14.925325+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:15.925451+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:16.925638+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276463616 unmapped: 48087040 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:17.925803+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276463616 unmapped: 48087040 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:18.925962+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276463616 unmapped: 48087040 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:19.926074+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:20.926237+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:21.926419+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:22.926582+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:23.926738+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:24.926917+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:25.927112+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:26.927298+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:27.927506+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:28.927757+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:29.927943+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276488192 unmapped: 48062464 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:30.928143+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276488192 unmapped: 48062464 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:31.928395+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276488192 unmapped: 48062464 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:32.928639+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276496384 unmapped: 48054272 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:33.928803+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf4e6400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 46.562591553s of 46.725173950s, submitted: 51
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276496384 unmapped: 48054272 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e6400 session 0x55b3cbe8c1e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf4e6400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e6400 session 0x55b3cc8025a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd51c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3ce10d2c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70c800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70c800 session 0x55b3ce10d0e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70d400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d400 session 0x55b3ce5ae000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:34.929073+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edeb5000/0x0/0x4ffc00000, data 0x18384dd/0x19c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:35.929283+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:36.929455+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058685 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:37.929662+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:38.929833+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:39.929973+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:40.930112+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277831680 unmapped: 50397184 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edeb5000/0x0/0x4ffc00000, data 0x18384dd/0x19c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:41.930257+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058685 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277831680 unmapped: 50397184 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:42.930444+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277831680 unmapped: 50397184 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512e800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:43.930591+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512e800 session 0x55b3ce0e9a40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277929984 unmapped: 50298880 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd51c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70c800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.344988823s of 10.475560188s, submitted: 43
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:44.930737+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277929984 unmapped: 50298880 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:45.930877+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277929984 unmapped: 50298880 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:46.931068+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063134 data_alloc: 218103808 data_used: 2838528
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277938176 unmapped: 50290688 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:47.931236+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278339584 unmapped: 49889280 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:48.931363+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278339584 unmapped: 49889280 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:49.931541+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278339584 unmapped: 49889280 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:50.931728+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:51.931880+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134014 data_alloc: 234881024 data_used: 12783616
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:52.932092+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:53.932227+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:54.932423+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:55.932624+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:56.932852+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134494 data_alloc: 234881024 data_used: 12795904
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.812140465s of 12.816347122s, submitted: 1
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:57.933051+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284475392 unmapped: 43753472 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:58.933212+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:59.933390+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:00.933568+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf2d000/0x0/0x4ffc00000, data 0x26204dd/0x27b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf2d000/0x0/0x4ffc00000, data 0x26204dd/0x27b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:01.933773+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3255354 data_alloc: 234881024 data_used: 13996032
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:02.934465+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:03.934821+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf2d000/0x0/0x4ffc00000, data 0x26204dd/0x27b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:04.935102+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:05.935940+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:06.937414+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250878 data_alloc: 234881024 data_used: 14000128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:07.937968+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf09000/0x0/0x4ffc00000, data 0x26444dd/0x27d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:08.939159+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:09.940365+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:10.941319+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:11.943394+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.050657272s of 14.411172867s, submitted: 131
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf09000/0x0/0x4ffc00000, data 0x26444dd/0x27d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250758 data_alloc: 234881024 data_used: 14000128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:12.943555+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02d800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:13.943761+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02d800 session 0x55b3cba952c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5b800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5b800 session 0x55b3ccaa0f00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cd84ec00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84ec00 session 0x55b3ce0e83c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5fc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5fc00 session 0x55b3cdcb23c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d174bc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174bc00 session 0x55b3ce58b4a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:14.944139+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:15.944305+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb526000/0x0/0x4ffc00000, data 0x30274dd/0x31b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:16.944445+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326714 data_alloc: 234881024 data_used: 14000128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:17.944715+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb523000/0x0/0x4ffc00000, data 0x302a4dd/0x31bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:18.945084+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:19.945253+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdc63800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc63800 session 0x55b3cb941e00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:20.945493+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce707400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce707400 session 0x55b3d0b65a40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf613c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613c00 session 0x55b3cbd43e00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa53c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa53c00 session 0x55b3ccaa0000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:21.945683+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb523000/0x0/0x4ffc00000, data 0x302a4dd/0x31bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326262 data_alloc: 234881024 data_used: 14000128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdce5400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd60400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:22.945834+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:23.945951+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 47489024 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:24.946097+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:25.946211+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb523000/0x0/0x4ffc00000, data 0x302a4dd/0x31bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:26.946387+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3399382 data_alloc: 234881024 data_used: 21020672
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:27.946517+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:28.946662+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb523000/0x0/0x4ffc00000, data 0x302a4dd/0x31bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:29.946845+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.038463593s of 18.175695419s, submitted: 14
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287039488 unmapped: 45391872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:30.947000+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287039488 unmapped: 45391872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:31.947156+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb521000/0x0/0x4ffc00000, data 0x302b4dd/0x31bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3400218 data_alloc: 234881024 data_used: 21020672
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287039488 unmapped: 45391872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:32.947351+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287039488 unmapped: 45391872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:33.947580+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289325056 unmapped: 43106304 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:34.947793+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289488896 unmapped: 42942464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:35.947969+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:36.948146+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450960 data_alloc: 234881024 data_used: 21569536
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb08e000/0x0/0x4ffc00000, data 0x34b74dd/0x3648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:37.948342+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:38.948444+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:39.948637+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:40.948811+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:41.948974+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450960 data_alloc: 234881024 data_used: 21569536
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:42.949152+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb08e000/0x0/0x4ffc00000, data 0x34b74dd/0x3648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:43.949336+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:44.949536+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb08e000/0x0/0x4ffc00000, data 0x34b74dd/0x3648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.170224190s of 15.374196053s, submitted: 52
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289562624 unmapped: 42868736 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:45.949699+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb08e000/0x0/0x4ffc00000, data 0x34b74dd/0x3648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289562624 unmapped: 42868736 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:46.949840+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb095000/0x0/0x4ffc00000, data 0x34b84dd/0x3649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449236 data_alloc: 234881024 data_used: 21671936
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289562624 unmapped: 42868736 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:47.949986+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5400 session 0x55b3ccadc000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd60400 session 0x55b3ccadb4a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdce5400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289562624 unmapped: 42868736 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5400 session 0x55b3cfba2b40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:48.950189+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebefe000/0x0/0x4ffc00000, data 0x264f4dd/0x27e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:49.950352+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:50.950494+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:51.950695+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3259330 data_alloc: 218103808 data_used: 10498048
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:52.950847+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3ce5ae1e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70c800 session 0x55b3cde570e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:53.951068+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5d400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5d400 session 0x55b3cc7ec1e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebef2000/0x0/0x4ffc00000, data 0x265b4dd/0x27ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:54.951268+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:55.951416+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:56.951541+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:57.951697+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:58.951854+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:59.952055+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:00.952165+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:01.952373+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:02.952553+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:03.952733+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:04.952958+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:05.953096+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:06.953264+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:07.953449+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:08.953607+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:09.953743+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:10.953955+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:11.954118+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:12.954298+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:13.954458+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:14.954719+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:15.954925+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:16.955094+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:17.955207+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:18.955382+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:19.955560+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:20.955717+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:21.955859+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:22.956003+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:23.956220+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:24.956435+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:25.956559+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:26.956730+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:27.956901+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:28.957082+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d174bc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174bc00 session 0x55b3ce5a2960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf613c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613c00 session 0x55b3cbe8c960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:29.957223+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ce0f2000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb5800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3cde57680
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cd84e800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.481346130s of 44.651317596s, submitted: 51
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84e800 session 0x55b3cba94000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cd84e800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84e800 session 0x55b3ccada000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb5800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3ccad6d20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cba94b40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf613c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613c00 session 0x55b3cf5acf00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282976256 unmapped: 49455104 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:30.957427+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282976256 unmapped: 49455104 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:31.957616+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3021392 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282976256 unmapped: 49455104 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:32.957804+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282976256 unmapped: 49455104 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:33.957935+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed509000/0x0/0x4ffc00000, data 0x104448b/0x11d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282984448 unmapped: 49446912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:34.958093+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282984448 unmapped: 49446912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:35.958226+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282984448 unmapped: 49446912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:36.958420+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed509000/0x0/0x4ffc00000, data 0x104448b/0x11d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3021392 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282984448 unmapped: 49446912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:37.958683+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d135c400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d135c400 session 0x55b3cbf22960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed509000/0x0/0x4ffc00000, data 0x104448b/0x11d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282935296 unmapped: 49496064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb5800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:38.958833+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282935296 unmapped: 49496064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:39.959213+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:40.959787+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:41.960050+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038361 data_alloc: 218103808 data_used: 4456448
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4e4000/0x0/0x4ffc00000, data 0x10684ae/0x11fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:42.960306+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:43.960869+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:44.961340+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:45.961796+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:46.962216+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038361 data_alloc: 218103808 data_used: 4456448
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:47.962453+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4e4000/0x0/0x4ffc00000, data 0x10684ae/0x11fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4e4000/0x0/0x4ffc00000, data 0x10684ae/0x11fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282951680 unmapped: 49479680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:48.962642+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4e4000/0x0/0x4ffc00000, data 0x10684ae/0x11fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.536443710s of 19.611005783s, submitted: 14
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283099136 unmapped: 49332224 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:49.962794+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285564928 unmapped: 46866432 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:50.962930+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285589504 unmapped: 46841856 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:51.963090+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095057 data_alloc: 218103808 data_used: 5337088
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285589504 unmapped: 46841856 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:52.963346+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285589504 unmapped: 46841856 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:53.963702+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0f1000/0x0/0x4ffc00000, data 0x14454ae/0x15d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285589504 unmapped: 46841856 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:54.963979+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285655040 unmapped: 46776320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:55.964333+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:56.964586+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3086421 data_alloc: 218103808 data_used: 5337088
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:57.964757+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:58.964993+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:59.965318+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0e6000/0x0/0x4ffc00000, data 0x14664ae/0x15f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:00.965439+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:01.965628+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3086741 data_alloc: 218103808 data_used: 5345280
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:02.965799+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0e6000/0x0/0x4ffc00000, data 0x14664ae/0x15f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284991488 unmapped: 47439872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:03.965977+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa53c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.636660576s of 14.872914314s, submitted: 73
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa53c00 session 0x55b3ccaddc20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512d800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512d800 session 0x55b3ce5a3860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70d800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d800 session 0x55b3d0b64d20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfbc0000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0000 session 0x55b3cc7ffc20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:04.966164+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f4c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f4c00 session 0x55b3ccaddc20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:05.966289+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:06.966493+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec857000/0x0/0x4ffc00000, data 0x1cf54ae/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3157783 data_alloc: 218103808 data_used: 5345280
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:07.966682+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:08.966947+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:09.967092+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec857000/0x0/0x4ffc00000, data 0x1cf54ae/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5f000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5f000 session 0x55b3cbf22960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:10.967238+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70c800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d1d15c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:11.967343+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3198420 data_alloc: 218103808 data_used: 10727424
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286261248 unmapped: 46170112 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:12.967483+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:13.967601+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:14.967767+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec857000/0x0/0x4ffc00000, data 0x1cf54ae/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:15.967895+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:16.968088+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3214100 data_alloc: 234881024 data_used: 12992512
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:17.968271+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:18.968447+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:19.968646+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec857000/0x0/0x4ffc00000, data 0x1cf54ae/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:20.968808+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.728010178s of 16.851394653s, submitted: 25
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:21.968951+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3336672 data_alloc: 234881024 data_used: 14049280
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:22.969140+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 36315136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:23.969285+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 36978688 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:24.969521+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:25.969698+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb921000/0x0/0x4ffc00000, data 0x2c2a4ae/0x2dbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:26.969826+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb921000/0x0/0x4ffc00000, data 0x2c2a4ae/0x2dbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3340164 data_alloc: 234881024 data_used: 14286848
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:27.969976+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:28.970114+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:29.970263+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:30.970449+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:31.970565+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb91f000/0x0/0x4ffc00000, data 0x2c2d4ae/0x2dbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3337508 data_alloc: 234881024 data_used: 14286848
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:32.970692+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:33.970844+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:34.971000+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:35.971191+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70c800 session 0x55b3ccada000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.508075714s of 14.903322220s, submitted: 136
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d15c00 session 0x55b3cb940f00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70d800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:36.971330+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290496512 unmapped: 41934848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d800 session 0x55b3ccadc1e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0e0000/0x0/0x4ffc00000, data 0x146c4ae/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3102246 data_alloc: 218103808 data_used: 5394432
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:37.971468+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290496512 unmapped: 41934848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:38.971638+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290504704 unmapped: 41926656 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:39.971843+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x14734ae/0x1605000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290504704 unmapped: 41926656 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3cc7ff2c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cba95c20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d2ff1400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:40.971970+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d2ff1400 session 0x55b3cde57e00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:41.972130+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:42.972328+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:43.972538+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:44.972732+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:45.972872+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:46.973006+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:47.973144+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:48.973276+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:49.973424+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:50.973539+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:51.973696+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:52.973860+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:53.974222+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:54.974478+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:55.974626+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:56.974822+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:57.975083+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:58.975248+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:59.975373+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:00.975528+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290529280 unmapped: 41902080 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:01.975668+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290529280 unmapped: 41902080 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:02.975777+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290529280 unmapped: 41902080 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:03.975950+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290529280 unmapped: 41902080 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:04.976239+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:05.976364+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:06.976521+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:07.976699+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:08.976880+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:09.977136+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:10.977342+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:11.977555+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:12.977720+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:13.977954+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:14.978420+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:15.978582+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf610800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.965858459s of 40.134490967s, submitted: 59
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf610800 session 0x55b3ce5a32c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ccbbec00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbec00 session 0x55b3ce58a5a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfbc1c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc1c00 session 0x55b3ce39cd20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfbc1c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc1c00 session 0x55b3ce126780
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cc5990e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:16.978786+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:17.979009+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030916 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed576000/0x0/0x4ffc00000, data 0xfd847b/0x1168000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:18.979776+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:19.979977+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:20.980256+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:21.980476+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:22.980805+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030916 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed576000/0x0/0x4ffc00000, data 0xfd847b/0x1168000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:23.980986+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512c000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512c000 session 0x55b3cd3b2000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:24.984241+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70dc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cd84f400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:25.984380+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:26.984510+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:27.984665+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045352 data_alloc: 218103808 data_used: 4300800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:28.984853+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:29.985006+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:30.985204+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:31.985382+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:32.985630+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045352 data_alloc: 218103808 data_used: 4300800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:33.985854+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:34.986091+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290570240 unmapped: 41861120 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:35.986239+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290570240 unmapped: 41861120 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.726144791s of 19.748613358s, submitted: 2
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:36.986348+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291405824 unmapped: 41025536 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:37.986499+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134708 data_alloc: 218103808 data_used: 5177344
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc45000/0x0/0x4ffc00000, data 0x190947b/0x1a99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:38.986698+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:39.986955+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:40.987187+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:41.987371+0000)
Oct 14 09:59:21 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23061 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:42.987541+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134868 data_alloc: 218103808 data_used: 5181440
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc45000/0x0/0x4ffc00000, data 0x190947b/0x1a99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:43.987711+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:44.987977+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:45.988169+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:46.988340+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:47.988463+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc45000/0x0/0x4ffc00000, data 0x190947b/0x1a99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3135188 data_alloc: 218103808 data_used: 5189632
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:48.988617+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:49.988734+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc45000/0x0/0x4ffc00000, data 0x190947b/0x1a99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:50.988931+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d135dc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d135dc00 session 0x55b3cba943c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5c000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5c000 session 0x55b3ce5aeb40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5c000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5c000 session 0x55b3ce568960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:51.989096+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ce58b680
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfbc1c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.785821915s of 15.976529121s, submitted: 45
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc1c00 session 0x55b3ce58ab40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d135dc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d135dc00 session 0x55b3ce569a40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512c000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512c000 session 0x55b3cbe8da40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512c000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512c000 session 0x55b3cfba3a40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ce58a960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:52.989215+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153364 data_alloc: 218103808 data_used: 5189632
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:53.989373+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:54.989906+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec945000/0x0/0x4ffc00000, data 0x1c0947b/0x1d99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:55.990090+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:56.990258+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292462592 unmapped: 39968768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:57.990416+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153364 data_alloc: 218103808 data_used: 5189632
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292462592 unmapped: 39968768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:58.990572+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d174c000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174c000 session 0x55b3cba95a40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292618240 unmapped: 39813120 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec921000/0x0/0x4ffc00000, data 0x1c2d47b/0x1dbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:59.990726+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5e000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d174b800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:00.990876+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:01.991008+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:02.991270+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3180149 data_alloc: 218103808 data_used: 8339456
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:03.991411+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec921000/0x0/0x4ffc00000, data 0x1c2d47b/0x1dbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:04.991557+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:05.991719+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:06.991887+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:07.992104+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3180149 data_alloc: 218103808 data_used: 8339456
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.082937241s of 16.159908295s, submitted: 8
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:08.992263+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec91f000/0x0/0x4ffc00000, data 0x1c2e47b/0x1dbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:09.992420+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:10.992579+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293462016 unmapped: 38969344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:11.992719+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:12.992904+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec26e000/0x0/0x4ffc00000, data 0x22e047b/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3241169 data_alloc: 218103808 data_used: 8445952
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:13.993071+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:14.993265+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:15.993416+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:16.993569+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:17.993732+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3241169 data_alloc: 218103808 data_used: 8445952
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec26e000/0x0/0x4ffc00000, data 0x22e047b/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:18.993911+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:19.994165+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:20.994449+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec26e000/0x0/0x4ffc00000, data 0x22e047b/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:21.994632+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:22.994792+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3241169 data_alloc: 218103808 data_used: 8445952
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:23.994979+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec26e000/0x0/0x4ffc00000, data 0x22e047b/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:24.995552+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:25.995778+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5e000 session 0x55b3cba945a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.404949188s of 17.572023392s, submitted: 33
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174b800 session 0x55b3cc599c20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293502976 unmapped: 38928384 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ccadd860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:26.996320+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293519360 unmapped: 38912000 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:27.996834+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3141441 data_alloc: 218103808 data_used: 5251072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293519360 unmapped: 38912000 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:28.997087+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293527552 unmapped: 38903808 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70dc00 session 0x55b3cc7ec5a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84f400 session 0x55b3cbd43a40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5e000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:29.997219+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc44000/0x0/0x4ffc00000, data 0x190a47b/0x1a9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5e000 session 0x55b3cba952c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:30.997570+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:31.997876+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:32.998057+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:33.998368+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:34.998659+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:35.998919+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:36.999104+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:37.999262+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:38.999607+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:39.999974+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:41.000392+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:42.000647+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:43.000839+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:44.001129+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:45.001338+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:46.001607+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:47.001819+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:48.001986+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:49.002388+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:50.002582+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:51.002897+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:52.003158+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:53.003334+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:54.003504+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:55.003725+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:56.003872+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:57.004098+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:58.004224+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:59.004357+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:00.004529+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:01.004692+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:02.004853+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf615400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 36.787899017s of 36.916973114s, submitted: 30
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:03.005039+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf615400 session 0x55b3ce5a21e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd50c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cb23e5a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf615400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf615400 session 0x55b3cbe3ed20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cba94d20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cd84f400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84f400 session 0x55b3ce58ba40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075374 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:04.005222+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:05.005411+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:06.005587+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f5000/0x0/0x4ffc00000, data 0x12584dd/0x13e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:07.005725+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:08.005852+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075374 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:09.006074+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdc62c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc62c00 session 0x55b3cbdc7860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdc62c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc62c00 session 0x55b3ce1274a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:10.006275+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd50c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3ce5a30e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf6f4c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf6f4c00 session 0x55b3cfba25a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:11.006413+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02ac00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d174b000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:12.006580+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f4000/0x0/0x4ffc00000, data 0x12584ed/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:13.006751+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f4000/0x0/0x4ffc00000, data 0x12584ed/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3096732 data_alloc: 218103808 data_used: 5402624
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:14.006865+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:15.007086+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:16.007240+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f4000/0x0/0x4ffc00000, data 0x12584ed/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:17.007471+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:18.007677+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3096732 data_alloc: 218103808 data_used: 5402624
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:19.007856+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:20.008051+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f4000/0x0/0x4ffc00000, data 0x12584ed/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:21.008220+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:22.008371+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.343187332s of 19.452217102s, submitted: 33
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292077568 unmapped: 40353792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:23.008509+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3187434 data_alloc: 218103808 data_used: 5513216
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292077568 unmapped: 40353792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:24.014611+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec6fa000/0x0/0x4ffc00000, data 0x1e4a4ed/0x1fdc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec6fd000/0x0/0x4ffc00000, data 0x1e4e4ed/0x1fe0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:25.014824+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:26.015097+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:27.015257+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:28.015429+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3189592 data_alloc: 218103808 data_used: 5500928
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:29.015654+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:30.015853+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec661000/0x0/0x4ffc00000, data 0x1eeb4ed/0x207d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:31.016060+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:32.016221+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:33.016406+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188172 data_alloc: 218103808 data_used: 5505024
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:34.016546+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec661000/0x0/0x4ffc00000, data 0x1eeb4ed/0x207d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:35.016763+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec661000/0x0/0x4ffc00000, data 0x1eeb4ed/0x207d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:36.016917+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:37.017072+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:38.017218+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188172 data_alloc: 218103808 data_used: 5505024
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec661000/0x0/0x4ffc00000, data 0x1eeb4ed/0x207d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:39.017380+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.344091415s of 16.666501999s, submitted: 103
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5f000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5f000 session 0x55b3cb99b860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70d000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d000 session 0x55b3cb986960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70d000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d000 session 0x55b3cbe8c780
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd50c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cb987860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdc62c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:40.017496+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc62c00 session 0x55b3cdcb25a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5f000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5f000 session 0x55b3cb9863c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf6f4c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf6f4c00 session 0x55b3cfba2f00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf6f4c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf6f4c00 session 0x55b3cbe3e960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd50c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cbdc6d20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:41.017747+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:42.017885+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:43.018126+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3199049 data_alloc: 218103808 data_used: 5505024
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:44.018249+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:45.018390+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:46.018550+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:47.018638+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf614c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf614c00 session 0x55b3cfba3e00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ccb61400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdc63000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291282944 unmapped: 41148416 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:48.018748+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3199181 data_alloc: 218103808 data_used: 5505024
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:49.018842+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:50.018939+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:51.019100+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:52.019252+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:53.019411+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3202541 data_alloc: 218103808 data_used: 6029312
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:54.019525+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:55.019795+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.294864655s of 16.384281158s, submitted: 18
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:56.020206+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:57.020486+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:58.021407+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291332096 unmapped: 41099264 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3202957 data_alloc: 218103808 data_used: 6066176
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:59.021573+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292683776 unmapped: 39747584 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:00.021770+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:01.021946+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292478976 unmapped: 39952384 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:02.022071+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292495360 unmapped: 39936000 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec1ae000/0x0/0x4ffc00000, data 0x23984ed/0x252a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:03.022232+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:04.022459+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:05.022691+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:06.022883+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:07.023103+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:08.023251+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:09.023382+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 37K writes, 148K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.73 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2257 writes, 9325 keys, 2257 commit groups, 1.0 writes per commit group, ingest: 11.79 MB, 0.02 MB/s
                                           Interval WAL: 2257 writes, 876 syncs, 2.58 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:10.023576+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:11.023707+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:12.023855+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:13.024035+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:14.024172+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 39911424 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:15.024378+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 39911424 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:16.024533+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 39911424 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:17.024703+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 39911424 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:18.024892+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292528128 unmapped: 39903232 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:19.025082+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292528128 unmapped: 39903232 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:20.025264+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292528128 unmapped: 39903232 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:21.025434+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292528128 unmapped: 39903232 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:22.025579+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:23.025766+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:24.025930+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:25.026138+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.300401688s of 30.381093979s, submitted: 46
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccb61400 session 0x55b3cbf234a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc63000 session 0x55b3cde56b40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:26.026299+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd50c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cb99b2c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:27.026520+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292544512 unmapped: 39886848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:28.026711+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292544512 unmapped: 39886848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194413 data_alloc: 218103808 data_used: 5505024
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:29.026913+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292544512 unmapped: 39886848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02ac00 session 0x55b3ce0f3860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174b000 session 0x55b3cba95a40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:30.027099+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292544512 unmapped: 39886848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ccb61400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccb61400 session 0x55b3ccadd680
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:31.027338+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:32.027637+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:33.028191+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051140 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:34.028372+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:35.028649+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:36.028933+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:37.029200+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:38.029613+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051140 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:39.029940+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:40.030406+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:41.030711+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:42.031006+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:43.031379+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051140 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:44.031623+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:45.031886+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291143680 unmapped: 41287680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:46.032144+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291143680 unmapped: 41287680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:47.032457+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291143680 unmapped: 41287680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:48.032732+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291143680 unmapped: 41287680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051140 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:49.032949+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 41279488 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:50.033107+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 41279488 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdce5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.373273849s of 24.570438385s, submitted: 43
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5c00 session 0x55b3d0b64960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdce5c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5c00 session 0x55b3cb998960
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd50c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cba943c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5000 session 0x55b3cb999680
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdc63c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc63c00 session 0x55b3ce39d860
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:51.033306+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292216832 unmapped: 40214528 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:52.033473+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292216832 unmapped: 40214528 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5c000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5c000 session 0x55b3ccadad20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:53.033598+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5c000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd50c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 40271872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099194 data_alloc: 218103808 data_used: 2686976
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:54.033722+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed1fc000/0x0/0x4ffc00000, data 0x1350500/0x14e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292167680 unmapped: 40263680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:55.033902+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:56.034046+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5c000 session 0x55b3ce39dc20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3ce126780
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf613800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613800 session 0x55b3cf5ade00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:57.034204+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ece6e000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:58.034340+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057926 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:59.034544+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:00.034744+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:01.034925+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ece6e000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:02.035100+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:03.035251+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057926 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:04.035425+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:05.035646+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:06.035846+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:07.036073+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ece6e000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:08.036245+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057926 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:09.036413+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.473459244s of 18.660179138s, submitted: 60
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:10.036596+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291807232 unmapped: 40624128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:11.036704+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:12.036813+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:13.036988+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:14.037136+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:15.037340+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:16.037565+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:17.038302+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:18.038478+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:19.038599+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:20.038766+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:21.038921+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:22.039138+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:23.039360+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:24.039551+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:25.039754+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:26.041116+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:27.041277+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:28.041482+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:29.041657+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:30.041867+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:31.042051+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:32.042247+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291856384 unmapped: 40574976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:33.042420+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291856384 unmapped: 40574976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:34.042638+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291856384 unmapped: 40574976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:35.042918+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:36.043097+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:37.043297+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:38.043508+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:39.043705+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:40.043870+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:41.044112+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:42.044293+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:43.044453+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:44.044627+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291872768 unmapped: 40558592 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:45.044841+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:46.045056+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:47.045200+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:48.045342+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:49.045537+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:50.045720+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:51.045855+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:52.046058+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:53.046306+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:54.046460+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:55.046638+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:56.046814+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd50000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:57.046946+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _renew_subs
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.734951019s of 48.040054321s, submitted: 90
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 293 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 293 ms_handle_reset con 0x55b3cbd50000 session 0x55b3cba943c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291897344 unmapped: 40534016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:58.047099+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291905536 unmapped: 40525824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:59.047309+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061748 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291905536 unmapped: 40525824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:00.047484+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291905536 unmapped: 40525824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:01.047659+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 293 heartbeat osd_stat(store_statfs(0x4ed2ec000/0x0/0x4ffc00000, data 0xe5004c/0xfe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291921920 unmapped: 40509440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:02.047810+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291921920 unmapped: 40509440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:03.047946+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _renew_subs
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291938304 unmapped: 40493056 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:04.048134+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291946496 unmapped: 40484864 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:05.049844+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:06.050896+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:07.051341+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:08.052579+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:09.053262+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:10.053932+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:11.054528+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:12.055151+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:13.055543+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:14.055679+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:15.055949+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:16.056128+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:17.056478+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:18.056806+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:19.057088+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291979264 unmapped: 40452096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:20.057297+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291979264 unmapped: 40452096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:21.057560+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291979264 unmapped: 40452096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:22.057727+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:23.057980+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:24.058075+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:25.058651+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:26.058834+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:27.059108+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:28.059257+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291995648 unmapped: 40435712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:29.059424+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291995648 unmapped: 40435712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:30.059627+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:31.059783+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:32.059919+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:33.060132+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:34.060342+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:35.060572+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:36.060758+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292020224 unmapped: 40411136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:37.061335+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292020224 unmapped: 40411136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:38.061746+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:39.062480+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:40.062900+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:41.063216+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:42.063366+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:43.063586+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:44.063725+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:45.064081+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:46.064288+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292036608 unmapped: 40394752 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:47.064485+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292036608 unmapped: 40394752 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:48.064689+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:49.064891+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:50.065058+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:51.065216+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:52.065387+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:53.065530+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292061184 unmapped: 40370176 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:54.065734+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292061184 unmapped: 40370176 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:55.065918+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:56.066068+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:57.066217+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:58.066340+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:59.066531+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:00.066742+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:01.066966+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:02.067130+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:03.067268+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:04.067417+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:05.067653+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:06.067910+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:07.068064+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:08.068234+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:09.068414+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:10.068564+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:11.068656+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:12.068821+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:13.068994+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:14.069200+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:15.069317+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:16.132860+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:17.133112+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292118528 unmapped: 40312832 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:18.133302+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:19.133454+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:20.133646+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:21.133788+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:22.133985+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:23.134160+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:24.134363+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:25.134539+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:26.134698+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:27.134882+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:28.135114+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:29.135310+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:30.135479+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:31.135580+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:32.135761+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:33.135891+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292143104 unmapped: 40288256 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:34.136067+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 40280064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:35.136244+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 40280064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:36.136583+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 40280064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:37.136725+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 40280064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:38.136858+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 40271872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:39.137081+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 40271872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:40.137224+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292167680 unmapped: 40263680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:41.137397+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292167680 unmapped: 40263680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:42.137565+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292175872 unmapped: 40255488 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:43.137781+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292175872 unmapped: 40255488 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:44.137957+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:45.138188+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:46.138359+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:47.138566+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:48.138774+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:49.138925+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292200448 unmapped: 40230912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:50.139158+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292208640 unmapped: 40222720 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5f800
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 113.866653442s of 113.942031860s, submitted: 31
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 ms_handle_reset con 0x55b3cdd5f800 session 0x55b3ccadd680
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd50000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:51.139338+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292208640 unmapped: 40222720 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 ms_handle_reset con 0x55b3cbd50000 session 0x55b3cb99b2c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:52.139482+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:53.139688+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:54.139941+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2ea000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3060002 data_alloc: 234881024 data_used: 11603968
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:55.140268+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:56.140408+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2ea000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:57.140571+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:58.140848+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:59.140976+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3060002 data_alloc: 234881024 data_used: 11603968
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf4e7400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:00.141090+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _renew_subs
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299130880 unmapped: 33300480 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 295 ms_handle_reset con 0x55b3cf4e7400 session 0x55b3cfba25a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:01.141202+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293896192 unmapped: 38535168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 295 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e365d/0x376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:02.141379+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293896192 unmapped: 38535168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5cc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.775878906s of 11.974079132s, submitted: 56
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:03.141486+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293896192 unmapped: 38535168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 296 ms_handle_reset con 0x55b3cdd5cc00 session 0x55b3cc5990e0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:04.141608+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2954206 data_alloc: 218103808 data_used: 151552
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:05.141811+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:06.142109+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:07.142236+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf54000/0x0/0x4ffc00000, data 0x1e520b/0x378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:08.142444+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:09.142703+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956988 data_alloc: 218103808 data_used: 151552
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:10.142933+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: get_auth_request con 0x55b3ccafd800 auth_method 0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:11.143124+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 297 heartbeat osd_stat(store_statfs(0x4edf52000/0x0/0x4ffc00000, data 0x1e6c6e/0x37b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:12.143244+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:13.143420+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:14.143586+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cd84f000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.541853905s of 11.648766518s, submitted: 41
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2959762 data_alloc: 218103808 data_used: 151552
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _renew_subs
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 ms_handle_reset con 0x55b3cd84f000 session 0x55b3ce5aed20
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:15.143796+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293937152 unmapped: 38494208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:16.144052+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:17.144286+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:18.144546+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:19.144765+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:20.144977+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:21.145137+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:22.145364+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:23.145553+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:24.145749+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:25.145951+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:26.146108+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:27.146280+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:28.146475+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:29.146753+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:30.146907+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:31.147119+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:32.147271+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:33.147410+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:34.147655+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:35.147876+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:36.148128+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:37.148326+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:38.148475+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293953536 unmapped: 38477824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:39.148659+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293953536 unmapped: 38477824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:40.148826+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:41.149007+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:42.149213+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:43.149462+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:44.149642+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:45.149840+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:46.150034+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:47.150170+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:48.150677+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:49.151128+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:50.151449+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:51.151723+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:52.152088+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 38461440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:53.152285+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 38461440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:54.152820+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 38461440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:55.153264+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:56.153596+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:57.153915+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:58.154198+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:59.154445+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:00.154620+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 38445056 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:01.154835+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:02.155105+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:03.155293+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:04.155505+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:05.155753+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:06.155915+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:07.156091+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:08.156413+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:09.156573+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 38420480 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:10.156751+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 38420480 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:11.156946+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 38412288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:12.157145+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 38412288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:13.157374+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 38404096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:14.157556+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 38404096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:15.157752+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 38404096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:16.157931+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 38404096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:17.158081+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:18.158278+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:19.158454+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:20.158645+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:21.158811+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:22.158987+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:23.159206+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:24.159384+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:25.159591+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:26.159735+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:27.160090+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:28.160310+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:29.160502+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:30.160722+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:31.160878+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:32.161123+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294060032 unmapped: 38371328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:33.161269+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 38363136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:34.161398+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 38354944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:35.161634+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294084608 unmapped: 38346752 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:36.161777+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:37.161959+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:38.162141+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:39.162317+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:40.162493+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:41.162648+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:42.162790+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:43.162955+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:44.163173+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:45.163398+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:46.163733+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:47.163895+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:48.164093+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:49.164588+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294109184 unmapped: 38322176 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:50.164722+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294117376 unmapped: 38313984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:51.164894+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294117376 unmapped: 38313984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:52.165094+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:53.165215+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:54.165349+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:55.165535+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:56.165709+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:57.165870+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 38297600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:58.166074+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 38297600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:59.166225+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 38297600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:00.166421+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d135dc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 106.011367798s of 106.076034546s, submitted: 19
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295182336 unmapped: 37249024 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:01.166615+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 54001664 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _renew_subs
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 299 ms_handle_reset con 0x55b3d135dc00 session 0x55b3cb940b40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:02.166799+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed748000/0x0/0x4ffc00000, data 0x9ea3f4/0xb85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d135dc00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295247872 unmapped: 53968896 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:03.166983+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295247872 unmapped: 53968896 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _renew_subs
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 300 ms_handle_reset con 0x55b3d135dc00 session 0x55b3cbe8c000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:04.167150+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:05.167374+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:06.167547+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:07.167691+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:08.167885+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:09.168116+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 53944320 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:10.168330+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 53944320 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:11.168468+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295280640 unmapped: 53936128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:12.168590+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295280640 unmapped: 53936128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:13.168770+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 53927936 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:14.168961+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:15.169233+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:16.169393+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:17.169563+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:18.169741+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:19.169964+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:20.170202+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:21.170402+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:22.170541+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:23.170708+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:24.170886+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:25.171066+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:26.171248+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:27.171431+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:28.171594+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:29.171739+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295321600 unmapped: 53895168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:30.171904+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf60e000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.694004059s of 29.852605820s, submitted: 37
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 53886976 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:31.172109+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _renew_subs
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 301 ms_handle_reset con 0x55b3cf60e000 session 0x55b3cbd825a0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 53862400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:32.172291+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 53862400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:33.172472+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ed742000/0x0/0x4ffc00000, data 0x9edb1f/0xb8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:34.172661+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:35.172854+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035515 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:36.172964+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ed742000/0x0/0x4ffc00000, data 0x9edb1f/0xb8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:37.173122+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 53846016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:38.173300+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ed742000/0x0/0x4ffc00000, data 0x9edb1f/0xb8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:39.173482+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:40.173635+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:41.173813+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:42.173958+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:43.174160+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:44.174369+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:45.174602+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:46.174828+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:47.174968+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:48.175098+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:49.175224+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:50.175340+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:51.175499+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:52.175643+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:53.175784+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:54.175921+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:55.176350+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:56.176597+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:57.176742+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:58.177142+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:59.177588+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:00.177860+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:01.178194+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 53788672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:02.178345+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 53788672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:03.178573+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:04.178844+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:05.179128+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:06.179424+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:07.179724+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:08.179966+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:09.180131+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:10.180341+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:11.180557+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:12.180722+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:13.180942+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:14.181126+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:15.181347+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:16.181561+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:17.181736+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:18.181936+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:19.182133+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:20.182302+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:21.182450+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:22.182608+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:23.182787+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:24.183080+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295477248 unmapped: 53739520 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:25.183278+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:26.183489+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:27.183648+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:28.184104+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:29.184589+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:30.184999+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:31.185384+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:32.185950+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 53723136 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:33.186616+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 53723136 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:34.187083+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 53723136 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:35.187704+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:36.187860+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:37.188106+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:38.188327+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:39.188744+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:40.188892+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:41.189222+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:42.189342+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 53698560 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:43.189623+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:44.189907+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:45.190167+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:46.190350+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:47.190544+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:48.190741+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:49.190938+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:50.191209+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:51.191487+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:52.191717+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:53.191963+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:54.192258+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 53665792 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:55.192533+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 53665792 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:56.192748+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 53665792 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:57.192982+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:58.193177+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:59.193327+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:00.193622+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:01.193879+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:02.194100+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:03.194299+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:04.194515+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:05.194675+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:06.194804+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:07.194944+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:08.195147+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:09.195323+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:10.195471+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:11.195610+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295591936 unmapped: 53624832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:12.195764+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 53616640 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:13.195962+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 53616640 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:14.196124+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 53616640 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:15.196342+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 53608448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:16.196512+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 53608448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:17.196704+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 53600256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:18.196893+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 53600256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:19.197148+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 53600256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:20.197354+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 53600256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:21.197557+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 53592064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:22.197763+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:23.197976+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:24.198146+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:25.198360+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:26.198634+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:27.198823+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:28.199068+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:29.199265+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:30.199417+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:31.199602+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:32.199875+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:33.200162+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:34.200308+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:35.200498+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:36.200834+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:37.201147+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:38.202800+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:39.203167+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:40.203615+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:41.204406+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:42.204832+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:43.205105+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:44.205572+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 53542912 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:45.205827+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:46.206131+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:47.206339+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:48.206550+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:49.206782+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:50.206956+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:51.207137+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:52.207371+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:53.207610+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:54.207803+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:55.208126+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:56.208337+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:57.208572+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:58.208724+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:59.208927+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295714816 unmapped: 53501952 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:00.209110+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295714816 unmapped: 53501952 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:01.209392+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:02.209651+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:03.209898+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:04.210094+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:05.210290+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:06.210892+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:07.211223+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:08.211479+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:09.211794+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:10.212885+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:11.213185+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:12.213362+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:13.213494+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:14.213648+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:15.213863+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:16.214008+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:17.214504+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:18.214642+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:19.214881+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:20.215149+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:21.215493+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:22.215669+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:23.215890+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:24.216063+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:25.216313+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:26.216515+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 53436416 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:27.216737+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 53436416 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:28.216953+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:29.217166+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:30.217377+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:31.217711+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:32.217984+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:33.218280+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:34.218560+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:35.218788+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:36.219078+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:37.219268+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:38.219453+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:39.219659+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:40.219850+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:41.220066+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:42.220273+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:43.220431+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:44.220588+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:45.220794+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:46.220971+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:47.221154+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:48.221334+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:49.221548+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:50.221758+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:51.221931+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:52.222082+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:53.222222+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 53362688 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:54.222392+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 53362688 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:55.222586+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 53362688 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:56.222710+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 53362688 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:57.222869+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 53346304 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:58.223080+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:59.223269+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:00.223450+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:01.223632+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:02.223789+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:03.223965+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:04.224207+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 53329920 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:05.224446+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:06.224626+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:07.224771+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:08.224967+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:09.225231+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 38K writes, 150K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.71 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 818 writes, 2112 keys, 818 commit groups, 1.0 writes per commit group, ingest: 1.15 MB, 0.00 MB/s
                                           Interval WAL: 818 writes, 377 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:10.226136+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets getting new tickets!
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:11.226888+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _finish_auth 0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:11.227888+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:12.229052+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:13.230223+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:14.230579+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:15.231281+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: mgrc ms_handle_reset ms_handle_reset con 0x55b3d1d16400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3625056923
Oct 14 09:59:21 compute-0 ceph-osd[89514]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3625056923,v1:192.168.122.100:6801/3625056923]
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: get_auth_request con 0x55b3cdd5f800 auth_method 0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: mgrc handle_mgr_configure stats_period=5
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:16.231924+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:17.232989+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:18.233245+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:19.233622+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:20.233789+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:21.233953+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:22.234091+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:23.234264+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:24.234541+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:25.234766+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:26.234914+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:27.235059+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:28.235311+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:29.235559+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:30.235788+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:31.236042+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:32.236270+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:33.236493+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:34.236672+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:35.236862+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:36.237062+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:37.237258+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:38.237469+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:39.237789+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:40.237982+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:41.238130+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:42.238270+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:43.238408+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 53452800 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:44.238602+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 53452800 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:45.238828+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 53452800 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:46.238975+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:47.239119+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 53436416 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:48.239265+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:49.240564+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:50.240697+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:51.240863+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:52.240998+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:53.241125+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:54.241245+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:55.241431+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:56.241613+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:57.241863+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:58.242060+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:59.242170+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:00.242308+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:01.242433+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:02.242592+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:03.242745+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:04.242903+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:05.243104+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:06.243271+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:07.243443+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:08.243586+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 278.443786621s of 278.633789062s, submitted: 64
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:09.243711+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 53346304 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:10.243855+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:11.243951+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:12.244062+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:13.244243+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:14.244376+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:15.244867+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:16.245144+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:17.245372+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:18.245922+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:19.246449+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:20.246990+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:21.247348+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:22.247617+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:23.247823+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:24.247996+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:25.248227+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:26.248604+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:27.248967+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:28.249325+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:29.249481+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:30.249604+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:31.249791+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:32.249962+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:33.250085+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:34.250281+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:35.250525+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:36.250675+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:37.250798+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:38.251218+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:39.251500+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:40.251662+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:41.251815+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:42.252003+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:43.252211+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:44.252393+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:45.252592+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:46.252761+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:47.252943+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:48.253263+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:49.253421+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:50.253580+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:51.253787+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:52.253915+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:53.254066+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:54.254185+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:55.254410+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:56.254569+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:57.254711+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:58.254919+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:59.255107+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:00.255332+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:01.255474+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:02.255648+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:03.255818+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:04.255996+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:05.256215+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:06.256330+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:07.256495+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:08.256672+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:09.256824+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:10.256999+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:11.257184+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 53239808 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:12.257385+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 53239808 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:13.257552+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 53231616 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:14.257697+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 53223424 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:15.257888+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 53223424 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:16.258098+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 53223424 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:17.258233+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:18.258440+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:19.258754+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:20.259056+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:21.259307+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:22.259519+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:23.259801+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:24.259952+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:25.260221+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:26.260420+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:27.260591+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:28.260719+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:29.260915+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 53198848 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:30.261114+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:31.261283+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:32.261431+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:33.261632+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:34.261793+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:35.262068+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:36.262251+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:37.262352+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:38.262527+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:39.262821+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:40.262962+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:41.263091+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:42.263252+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:43.263458+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:44.263671+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 53157888 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:45.263864+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 53149696 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:46.264040+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:47.264172+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:48.264352+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:49.264536+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:50.264724+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:51.264887+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:52.265080+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:53.265235+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:54.265530+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:55.265763+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 53116928 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:56.265908+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 53116928 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:57.266083+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 53116928 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:58.266269+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 53116928 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:59.266428+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 53108736 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:00.266619+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 53108736 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:01.266793+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:02.266998+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:03.267209+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:04.267393+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:05.267621+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:06.267809+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 53092352 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:07.268003+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 53084160 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:08.268399+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 53075968 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:09.268634+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 53075968 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:10.268798+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 53067776 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:11.268958+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:12.269140+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:13.269390+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:14.269541+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:15.269748+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:16.269946+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:17.270112+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:18.270250+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:19.270406+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:20.270730+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:21.270905+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:22.271085+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:23.271270+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:24.271471+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:25.271635+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296181760 unmapped: 53035008 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:26.271794+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:27.271961+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:28.272134+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:29.272304+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:30.272466+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:31.272603+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 53018624 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:32.272773+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 53018624 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:33.272899+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 53010432 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:34.273055+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296214528 unmapped: 53002240 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:35.273224+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf610400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 146.330047607s of 146.644271851s, submitted: 90
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 52994048 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _renew_subs
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 302 handle_osd_map epochs [303,303], i have 303, src has [1,303]
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:36.273359+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 303 ms_handle_reset con 0x55b3cf610400 session 0x55b3d0b64b40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 303 heartbeat osd_stat(store_statfs(0x4ed73e000/0x0/0x4ffc00000, data 0x9f1130/0xb8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 52928512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2986189 data_alloc: 218103808 data_used: 184320
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:37.273560+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 52928512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:38.273709+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf611400
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 52920320 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _renew_subs
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:39.273939+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 304 ms_handle_reset con 0x55b3cf611400 session 0x55b3cdcb23c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 52920320 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:40.274069+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edf3d000/0x0/0x4ffc00000, data 0x1f2cbb/0x390000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 52912128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:41.274216+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 52912128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2987987 data_alloc: 218103808 data_used: 192512
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:42.274412+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 52912128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:43.274629+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f471e/0x393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ccad0000
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 52895744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:44.274815+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 ms_handle_reset con 0x55b3ccad0000 session 0x55b3ce5afa40
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:45.274984+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:46.275150+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996423 data_alloc: 218103808 data_used: 192512
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:47.275339+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:48.275564+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:49.275754+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:50.275945+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:51.276188+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996423 data_alloc: 218103808 data_used: 192512
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:52.276386+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:53.276557+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:54.276673+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:55.276845+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:56.276970+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 52862976 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:57.277082+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996423 data_alloc: 218103808 data_used: 192512
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 52854784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:58.277246+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 52854784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:59.277392+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 52854784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:00.277583+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 52854784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:01.277865+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 52846592 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:02.278148+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996423 data_alloc: 218103808 data_used: 192512
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 52846592 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:03.278399+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 52846592 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:04.278611+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 52846592 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:05.278831+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:06.279116+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:07.279258+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:08.279511+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:09.279673+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:10.279842+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 52830208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:11.280037+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 52830208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:12.280182+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 52830208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:13.280392+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:14.280574+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:15.280779+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:16.280957+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:17.281139+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:18.281275+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:19.281434+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 52805632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:20.281594+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 52805632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:21.281760+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 52805632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:22.281921+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:23.282109+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:24.282263+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:25.282418+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:26.282543+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:27.282678+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:28.282823+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:29.282996+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296427520 unmapped: 52789248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:30.283187+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296435712 unmapped: 52781056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:31.283347+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296435712 unmapped: 52781056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:32.283498+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:33.283677+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:34.283841+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:35.284005+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:36.284193+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:37.284322+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 52764672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:38.284477+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 52764672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:39.284638+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb4c00
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 63.713172913s of 64.038635254s, submitted: 108
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 52764672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:40.285109+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 307 ms_handle_reset con 0x55b3cbcb4c00 session 0x55b3cc8003c0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:41.285253+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:42.285439+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2997229 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:43.285603+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0x1f7e6c/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:44.285728+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0x1f7e6c/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:45.285936+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 52731904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:46.286106+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 52731904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:47.286257+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2997229 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 52731904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:48.286418+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0x1f7e6c/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0x1f7e6c/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:49.286588+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:50.286722+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:51.286906+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:52.287090+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:53.287273+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:54.287480+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:55.287711+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:56.287847+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:57.288100+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:58.288286+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:59.288441+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:00.288758+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:01.289172+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296542208 unmapped: 52674560 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:02.289350+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296542208 unmapped: 52674560 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:03.289569+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296542208 unmapped: 52674560 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:04.289696+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 52666368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:05.289948+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 52666368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:06.290072+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 52666368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:07.290242+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 52658176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:08.290437+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 52649984 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:09.290590+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 52641792 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:10.298264+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:11.298481+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:12.298622+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:13.298805+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:14.298978+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:15.299232+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:16.299436+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:17.299642+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:18.299908+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:19.300156+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:20.300313+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:21.300499+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:22.300768+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:23.301002+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:24.301234+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 52617216 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:25.301448+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 52617216 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:26.301600+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 52609024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:27.301783+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:28.301943+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:29.302084+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:30.302252+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:31.302476+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:32.302711+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:33.302948+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:34.303113+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:35.303320+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:36.303520+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:37.303666+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:38.303876+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:39.304075+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296640512 unmapped: 52576256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:40.304251+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296640512 unmapped: 52576256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:41.304422+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:42.304581+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:43.304751+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:44.304910+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:45.305066+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:46.305183+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:47.305319+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296656896 unmapped: 52559872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:21 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:21 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:48.305453+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: do_command 'config diff' '{prefix=config diff}'
Oct 14 09:59:21 compute-0 ceph-osd[89514]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 14 09:59:21 compute-0 ceph-osd[89514]: do_command 'config show' '{prefix=config show}'
Oct 14 09:59:21 compute-0 ceph-osd[89514]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296656896 unmapped: 52559872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: do_command 'counter dump' '{prefix=counter dump}'
Oct 14 09:59:21 compute-0 ceph-osd[89514]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 14 09:59:21 compute-0 ceph-osd[89514]: do_command 'counter schema' '{prefix=counter schema}'
Oct 14 09:59:21 compute-0 ceph-osd[89514]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:49.305575+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 52944896 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 09:59:21 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:50.305716+0000)
Oct 14 09:59:21 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:21 compute-0 ceph-osd[89514]: do_command 'log dump' '{prefix=log dump}'
Oct 14 09:59:21 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1838274859' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 14 09:59:21 compute-0 ceph-mon[74249]: from='client.23057 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:21 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/755925548' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 14 09:59:21 compute-0 ceph-mon[74249]: from='client.23061 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:21 compute-0 ceph-mon[74249]: pgmap v3202: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:21 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23065 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:21 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 14 09:59:21 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2751310467' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 14 09:59:21 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23067 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:21 compute-0 nova_compute[259627]: 2025-10-14 09:59:21.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:59:21 compute-0 nova_compute[259627]: 2025-10-14 09:59:21.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 09:59:21 compute-0 nova_compute[259627]: 2025-10-14 09:59:21.980 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 09:59:22 compute-0 nova_compute[259627]: 2025-10-14 09:59:22.038 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 09:59:22 compute-0 nova_compute[259627]: 2025-10-14 09:59:22.040 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:59:22 compute-0 nova_compute[259627]: 2025-10-14 09:59:22.040 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:59:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 14 09:59:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2513774428' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 14 09:59:22 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23071 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 14 09:59:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/580102992' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 14 09:59:22 compute-0 ceph-mon[74249]: from='client.23065 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2751310467' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 14 09:59:22 compute-0 ceph-mon[74249]: from='client.23067 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2513774428' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 14 09:59:22 compute-0 ceph-mon[74249]: from='client.23071 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:22 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/580102992' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 14 09:59:22 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23075 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:22 compute-0 nova_compute[259627]: 2025-10-14 09:59:22.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:22 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 14 09:59:22 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/118997230' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 14 09:59:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:23 compute-0 sshd-session[442966]: Failed password for root from 91.224.92.108 port 45684 ssh2
Oct 14 09:59:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:59:23 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23083 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:23 compute-0 ceph-mgr[74543]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 14 09:59:23 compute-0 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T09:59:23.536+0000 7f6b82b53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 14 09:59:23 compute-0 ceph-mon[74249]: from='client.23075 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:23 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/118997230' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 14 09:59:23 compute-0 ceph-mon[74249]: pgmap v3203: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 14 09:59:23 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3279325912' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 14 09:59:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 14 09:59:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1410129110' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 14 09:59:24 compute-0 unix_chkpwd[444095]: password check failed for user (root)
Oct 14 09:59:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 14 09:59:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2869743412' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 14 09:59:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 14 09:59:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1626043090' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 14 09:59:24 compute-0 ceph-mon[74249]: from='client.23083 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:24 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3279325912' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 14 09:59:24 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1410129110' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 14 09:59:24 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2869743412' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 14 09:59:24 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1626043090' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 14 09:59:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 14 09:59:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4256694217' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 14 09:59:24 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 14 09:59:24 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1678340268' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 14 09:59:25 compute-0 crontab[444294]: (root) LIST (root)
Oct 14 09:59:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 14 09:59:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2674093922' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 14 09:59:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 14 09:59:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3558314836' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 14 09:59:25 compute-0 nova_compute[259627]: 2025-10-14 09:59:25.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 14 09:59:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1573561031' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 14 09:59:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4256694217' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 14 09:59:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1678340268' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 14 09:59:25 compute-0 ceph-mon[74249]: pgmap v3204: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2674093922' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 14 09:59:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3558314836' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 14 09:59:25 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1573561031' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 14 09:59:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 14 09:59:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914762314' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:40.606387+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 68681728 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:41.606614+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 68755456 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ec185000/0x0/0x4ffc00000, data 0x1fb2b8c/0x2149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ec185000/0x0/0x4ffc00000, data 0x1fb2b8c/0x2149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:42.607662+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 68837376 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:43.607874+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 68837376 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:44.608093+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 68837376 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291969 data_alloc: 218103808 data_used: 8671232
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:45.608254+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 68837376 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ec185000/0x0/0x4ffc00000, data 0x1fb2b8c/0x2149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:46.608405+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.040349960s of 26.117074966s, submitted: 12
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301776896 unmapped: 63053824 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8b55a40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb87f000/0x0/0x4ffc00000, data 0x28b8b8c/0x2a4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:47.608616+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 68362240 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:48.608749+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 68362240 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:49.609005+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 68362240 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3365155 data_alloc: 218103808 data_used: 8671232
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:50.609157+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 68362240 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:51.609273+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296919040 unmapped: 67911680 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c7e0ef00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8b54000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:52.609425+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 67518464 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb611000/0x0/0x4ffc00000, data 0x2b20b8c/0x2cb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c976a780
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91c8400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c9fbaf00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:53.609560+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 67493888 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c92e1000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb5f3000/0x0/0x4ffc00000, data 0x2b34baf/0x2ccc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:54.609740+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 67485696 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444397 data_alloc: 234881024 data_used: 15724544
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:55.609855+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:56.610147+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:57.610406+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:58.610649+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb5f3000/0x0/0x4ffc00000, data 0x2b34baf/0x2ccc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:59.610985+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455757 data_alloc: 234881024 data_used: 17338368
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:00.611339+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:01.611617+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:02.611881+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:03.612086+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb5f3000/0x0/0x4ffc00000, data 0x2b34baf/0x2ccc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:04.612316+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.891954422s of 18.136959076s, submitted: 60
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3512547 data_alloc: 234881024 data_used: 18878464
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:05.612479+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb381000/0x0/0x4ffc00000, data 0x2db5baf/0x2f4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 63881216 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:06.612620+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 63758336 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:07.612779+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 63758336 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:08.612984+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 63758336 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeeb000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eafb7000/0x0/0x4ffc00000, data 0x317ebaf/0x3316000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb000 session 0x5597c8f4f2c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:09.613177+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 63709184 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571707 data_alloc: 234881024 data_used: 19238912
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:10.613378+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 63709184 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:11.613517+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 63709184 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:12.613637+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea824000/0x0/0x4ffc00000, data 0x3912baf/0x3aaa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c976a960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 63709184 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c9fa5860
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:13.613786+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 63709184 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91c8400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c9fbba40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8f4ed20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:14.614443+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 63709184 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea823000/0x0/0x4ffc00000, data 0x3912bbf/0x3aab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:15.614556+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573865 data_alloc: 234881024 data_used: 19247104
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 63709184 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c953a400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c92eb000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c92e1000 session 0x5597c8baa3c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.135522842s of 11.417894363s, submitted: 79
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c7841e00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c92e1000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:16.614664+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296747008 unmapped: 68083712 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c92e1000 session 0x5597c6f923c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:17.614838+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 68067328 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb773000/0x0/0x4ffc00000, data 0x29c2b9c/0x2b5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:18.614973+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 68067328 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:19.615165+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 68067328 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:20.615302+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432703 data_alloc: 234881024 data_used: 15921152
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 68067328 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:21.615463+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 68067328 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:22.615630+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 68067328 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:23.615806+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 68067328 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb773000/0x0/0x4ffc00000, data 0x29c2b9c/0x2b5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:24.615943+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 68067328 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:25.616131+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432703 data_alloc: 234881024 data_used: 15921152
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 68067328 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:26.616257+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.419453621s of 10.544699669s, submitted: 41
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 67215360 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:27.616394+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297754624 unmapped: 67076096 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:28.616462+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298819584 unmapped: 66011136 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:29.616635+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298819584 unmapped: 66011136 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb19f000/0x0/0x4ffc00000, data 0x2f96b9c/0x312e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:30.616804+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479817 data_alloc: 234881024 data_used: 15953920
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb19f000/0x0/0x4ffc00000, data 0x2f96b9c/0x312e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298819584 unmapped: 66011136 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:31.616984+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298819584 unmapped: 66011136 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:32.617193+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298819584 unmapped: 66011136 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:33.617359+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298819584 unmapped: 66011136 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:34.617525+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298819584 unmapped: 66011136 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c953a400 session 0x5597c8bac3c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c92eb000 session 0x5597c8ec2f00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:35.617681+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3478569 data_alloc: 234881024 data_used: 15953920
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297836544 unmapped: 66994176 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c976a000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:36.617823+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb1a0000/0x0/0x4ffc00000, data 0x2f96b9c/0x312e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297836544 unmapped: 66994176 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:37.617967+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297836544 unmapped: 66994176 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:38.618096+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297836544 unmapped: 66994176 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ebf09000/0x0/0x4ffc00000, data 0x222eb8c/0x23c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:39.618337+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297836544 unmapped: 66994176 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ebf09000/0x0/0x4ffc00000, data 0x222eb8c/0x23c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeec800 session 0x5597c8b541e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c705b4a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:40.618497+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332568 data_alloc: 218103808 data_used: 9101312
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297836544 unmapped: 66994176 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.989180565s of 14.203341484s, submitted: 50
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9fbb860
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:41.618693+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ec6ea000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297852928 unmapped: 66977792 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:42.618845+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ec6ea000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297852928 unmapped: 66977792 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:43.619084+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297852928 unmapped: 66977792 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:44.619236+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297852928 unmapped: 66977792 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:45.619383+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236455 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297852928 unmapped: 66977792 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:46.619522+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297852928 unmapped: 66977792 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:47.619669+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297852928 unmapped: 66977792 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ec6ea000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:48.619825+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c7840780
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c92e1000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c92e1000 session 0x5597c7ec6b40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8ec6000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8bad4a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8db5000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297852928 unmapped: 66977792 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c6f114a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeec800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeec800 session 0x5597c9058780
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c92eb000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ec6ea000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c92eb000 session 0x5597c93232c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c7185e00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8b55c20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:49.620049+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298483712 unmapped: 77897728 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:50.620208+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390711 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298483712 unmapped: 77897728 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb3a0000/0x0/0x4ffc00000, data 0x2d96bee/0x2f2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:51.620381+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298483712 unmapped: 77897728 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8db5000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c9df10e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:52.620577+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeec800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeec800 session 0x5597c8f4e1e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298483712 unmapped: 77897728 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c953a400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c953a400 session 0x5597c7e0f4a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:53.620708+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.416601181s of 12.653140068s, submitted: 71
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c7184f00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298655744 unmapped: 77725696 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8db5000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:54.620864+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298663936 unmapped: 77717504 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:55.620991+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428332 data_alloc: 218103808 data_used: 9043968
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 76832768 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:56.621108+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb37c000/0x0/0x4ffc00000, data 0x2dbabee/0x2f52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 73261056 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:57.621244+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 73261056 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:58.621428+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 73261056 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:59.621671+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 73261056 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:00.621843+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3527852 data_alloc: 234881024 data_used: 23109632
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb37c000/0x0/0x4ffc00000, data 0x2dbabee/0x2f52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 73261056 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeec800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeec800 session 0x5597c9383860
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:01.621961+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c7840960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91c8400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c9fba000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8bda000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 73261056 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb37c000/0x0/0x4ffc00000, data 0x2dbabee/0x2f52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c9fa50e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:02.622071+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 73261056 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:03.622208+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 73261056 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:04.622334+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb03a000/0x0/0x4ffc00000, data 0x30fcbee/0x3294000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 73261056 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.515716553s of 11.578465462s, submitted: 18
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:05.622484+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3652682 data_alloc: 234881024 data_used: 24137728
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 67682304 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8f4e960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:06.622593+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 67100672 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c6c22b40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:07.622766+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91c8400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c8b54f00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeec800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeec800 session 0x5597c9f8af00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309288960 unmapped: 67092480 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:08.623363+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309288960 unmapped: 67092480 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:09.623566+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 67051520 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:10.623718+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea4d2000/0x0/0x4ffc00000, data 0x3c5bbee/0x3df3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3689852 data_alloc: 234881024 data_used: 26611712
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 67018752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:11.623828+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 67018752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:12.623978+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 67018752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:13.624133+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 67018752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:14.624336+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 67018752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:15.624536+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3684304 data_alloc: 234881024 data_used: 26615808
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 67018752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea4ba000/0x0/0x4ffc00000, data 0x3c7cbee/0x3e14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:16.624667+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 67010560 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:17.624803+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 67010560 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea4ba000/0x0/0x4ffc00000, data 0x3c7cbee/0x3e14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:18.624936+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 67010560 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.954314232s of 14.283423424s, submitted: 137
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:19.625082+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 313729024 unmapped: 62652416 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9e28000/0x0/0x4ffc00000, data 0x4306bee/0x449e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:20.625197+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9e28000/0x0/0x4ffc00000, data 0x4306bee/0x449e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [0,0,3])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91c8400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c7841e00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3766476 data_alloc: 234881024 data_used: 27471872
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8baa3c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f69c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f69c00 session 0x5597c8f4ed20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805000 session 0x5597c9fbba40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 62267392 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6e000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6e000 session 0x5597c976a960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91c8400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c71841e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c7e78960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f69c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f69c00 session 0x5597c8ec65a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805000 session 0x5597c8b54960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:21.625296+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 313851904 unmapped: 62529536 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9692000/0x0/0x4ffc00000, data 0x4aa3bfe/0x4c3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:22.625442+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 313851904 unmapped: 62529536 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:23.625585+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 313851904 unmapped: 62529536 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:24.625714+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e968c000/0x0/0x4ffc00000, data 0x4aa9bfe/0x4c42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 313860096 unmapped: 62521344 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca800000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca800000 session 0x5597c9058b40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:25.625898+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91c8400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c9f8b4a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3824399 data_alloc: 234881024 data_used: 27480064
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 313860096 unmapped: 62521344 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c7840d20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f69c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:26.626101+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f69c00 session 0x5597c7e0e1e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 314023936 unmapped: 62357504 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f71000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:27.626232+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 314040320 unmapped: 62341120 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:28.626338+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318554112 unmapped: 57827328 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:29.626473+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9663000/0x0/0x4ffc00000, data 0x4ad0c31/0x4c6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318578688 unmapped: 57802752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:30.626623+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3882922 data_alloc: 251658240 data_used: 34734080
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318578688 unmapped: 57802752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:31.626763+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9663000/0x0/0x4ffc00000, data 0x4ad0c31/0x4c6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318578688 unmapped: 57802752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:32.626881+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318578688 unmapped: 57802752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9663000/0x0/0x4ffc00000, data 0x4ad0c31/0x4c6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:33.626998+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318578688 unmapped: 57802752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8d714a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c7e0e5a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:34.627124+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.690410614s of 15.173391342s, submitted: 127
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9f8ad20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318586880 unmapped: 57794560 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:35.627242+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea09d000/0x0/0x4ffc00000, data 0x4096c31/0x4231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3768586 data_alloc: 251658240 data_used: 31678464
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318586880 unmapped: 57794560 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:36.627358+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318586880 unmapped: 57794560 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:37.627509+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318595072 unmapped: 57786368 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:38.627665+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 55803904 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:39.627850+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9776000/0x0/0x4ffc00000, data 0x49bdc31/0x4b58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [0,0,4])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320937984 unmapped: 55443456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:40.627933+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3860752 data_alloc: 251658240 data_used: 33177600
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320937984 unmapped: 55443456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e96cc000/0x0/0x4ffc00000, data 0x4a67c31/0x4c02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:41.628079+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320937984 unmapped: 55443456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:42.628230+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320937984 unmapped: 55443456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:43.628353+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320937984 unmapped: 55443456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:44.628452+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320937984 unmapped: 55443456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:45.628606+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.843324661s of 11.244002342s, submitted: 140
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3858116 data_alloc: 251658240 data_used: 33177600
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320946176 unmapped: 55435264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:46.628734+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e96cc000/0x0/0x4ffc00000, data 0x4a67c31/0x4c02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320946176 unmapped: 55435264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e96a8000/0x0/0x4ffc00000, data 0x4a8bc31/0x4c26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:47.628957+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320946176 unmapped: 55435264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:48.629119+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 55427072 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e96a8000/0x0/0x4ffc00000, data 0x4a8bc31/0x4c26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:49.629299+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 55427072 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:50.629473+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3858756 data_alloc: 251658240 data_used: 33239040
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 55427072 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805000 session 0x5597c7e0eb40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f71000 session 0x5597c6c23a40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:51.629620+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c8eb61e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322043904 unmapped: 54337536 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:52.629729+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322052096 unmapped: 54329344 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:53.629869+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea409000/0x0/0x4ffc00000, data 0x391cbee/0x3ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322052096 unmapped: 54329344 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:54.630074+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322052096 unmapped: 54329344 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c7841c20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c91d70e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:55.630213+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.793248177s of 10.007752419s, submitted: 73
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659885 data_alloc: 234881024 data_used: 24469504
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305250304 unmapped: 71131136 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9fa45a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c8edf2c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8eded20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f71000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f71000 session 0x5597c8bda5a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:56.630342+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c93974a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c7185e00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c93232c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8db5000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c7e0f4a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305373184 unmapped: 71008256 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805000 session 0x5597c976a960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c7e0fa40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:57.630512+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305373184 unmapped: 71008256 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:58.630644+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305373184 unmapped: 71008256 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eba6d000/0x0/0x4ffc00000, data 0x22b9bee/0x2451000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:59.630810+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305373184 unmapped: 71008256 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:00.630951+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3349757 data_alloc: 218103808 data_used: 4214784
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305373184 unmapped: 71008256 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:01.631067+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305373184 unmapped: 71008256 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:02.631195+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:03.631295+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:04.631419+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eba6d000/0x0/0x4ffc00000, data 0x22b9bee/0x2451000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:05.631566+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411649 data_alloc: 218103808 data_used: 12939264
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:06.631698+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:07.631867+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:08.632036+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:09.632284+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:10.632424+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411649 data_alloc: 218103808 data_used: 12939264
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eba6d000/0x0/0x4ffc00000, data 0x22b9bee/0x2451000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:11.632499+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:12.632620+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.468908310s of 16.836942673s, submitted: 103
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:13.632703+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:14.632872+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb3f4000/0x0/0x4ffc00000, data 0x292abee/0x2ac2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:15.633043+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467737 data_alloc: 218103808 data_used: 13017088
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb3f4000/0x0/0x4ffc00000, data 0x292abee/0x2ac2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:16.633186+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:17.633309+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:18.633473+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:19.633679+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:20.633838+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467753 data_alloc: 218103808 data_used: 13017088
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb3f4000/0x0/0x4ffc00000, data 0x292abee/0x2ac2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:21.633999+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:22.634179+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:23.634354+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:24.634528+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb3f4000/0x0/0x4ffc00000, data 0x292abee/0x2ac2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb3f4000/0x0/0x4ffc00000, data 0x292abee/0x2ac2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:25.634698+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467753 data_alloc: 218103808 data_used: 13017088
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:26.634882+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:27.635098+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:28.635283+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb3f4000/0x0/0x4ffc00000, data 0x292abee/0x2ac2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.922410965s of 16.136686325s, submitted: 54
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8badc20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8db5000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c9fba5a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91c8400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c6f11860
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c9f8a1e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c7ec6000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306323456 unmapped: 70057984 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c9f8b0e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8db5000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c79745a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91c8400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c6c22960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:29.635482+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f69c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f69c00 session 0x5597c8edfc20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8eb74a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c90583c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8db5000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c9fbaf00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91c8400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c8eb74a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cabd2c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cabd2c00 session 0x5597c6c22960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 68419584 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea6c3000/0x0/0x4ffc00000, data 0x3663bee/0x37fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:30.635639+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3660219 data_alloc: 218103808 data_used: 13021184
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 68419584 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:31.635791+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 68419584 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:32.635939+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 68419584 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:33.636121+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c79745a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 68419584 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c6e000/0x0/0x4ffc00000, data 0x40b6c60/0x4250000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:34.636244+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c9f8b0e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 68419584 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:35.636371+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8db5000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c7ec6000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3660219 data_alloc: 218103808 data_used: 13021184
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91c8400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c9f8a1e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 307970048 unmapped: 68411392 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:36.636574+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca7ffc00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 307970048 unmapped: 68411392 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:37.636686+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca7fe000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca7fe000 session 0x5597c6f93860
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 307970048 unmapped: 68411392 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:38.636868+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8ec2d20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c6d000/0x0/0x4ffc00000, data 0x40b6c70/0x4251000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8f4e000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8db5000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c91763c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 67567616 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:39.637046+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91c8400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.600818634s of 10.829821587s, submitted: 68
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 67567616 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:40.637189+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739229 data_alloc: 234881024 data_used: 24248320
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 66969600 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:41.637396+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 311721984 unmapped: 64659456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:42.637548+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c6d000/0x0/0x4ffc00000, data 0x40b6c70/0x4251000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 311721984 unmapped: 64659456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:43.637694+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 311721984 unmapped: 64659456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:44.637809+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 311721984 unmapped: 64659456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:45.637971+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3823389 data_alloc: 251658240 data_used: 33378304
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 311721984 unmapped: 64659456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:46.638180+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 311721984 unmapped: 64659456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:47.638361+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 311721984 unmapped: 64659456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:48.638477+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c6d000/0x0/0x4ffc00000, data 0x40b6c70/0x4251000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 313401344 unmapped: 62980096 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:49.638623+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 314548224 unmapped: 61833216 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.474565506s of 10.712116241s, submitted: 69
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:50.638875+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3904073 data_alloc: 251658240 data_used: 34623488
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319307776 unmapped: 57073664 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:51.638993+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319528960 unmapped: 56852480 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:52.639151+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8e0c000/0x0/0x4ffc00000, data 0x4f11c70/0x50ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321314816 unmapped: 55066624 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:53.639334+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321314816 unmapped: 55066624 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:54.639511+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321314816 unmapped: 55066624 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:55.639663+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3969485 data_alloc: 251658240 data_used: 35381248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d76000/0x0/0x4ffc00000, data 0x4f9ec70/0x5139000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321380352 unmapped: 55001088 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:56.639904+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca7ffc00 session 0x5597c9fba5a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805800 session 0x5597c9fac1e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805800 session 0x5597c9f8a5a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d76000/0x0/0x4ffc00000, data 0x4f9ec70/0x5139000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321241088 unmapped: 55140352 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:57.640108+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321249280 unmapped: 55132160 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:58.640283+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f81000/0x0/0x4ffc00000, data 0x3da4bee/0x3f3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321249280 unmapped: 55132160 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:59.640504+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c8edde00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321249280 unmapped: 55132160 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:00.640676+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.689114571s of 10.230315208s, submitted: 165
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c6f92f00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567598 data_alloc: 234881024 data_used: 17534976
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318742528 unmapped: 57638912 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:01.640871+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318742528 unmapped: 57638912 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:02.641141+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318742528 unmapped: 57638912 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:03.641301+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eac10000/0x0/0x4ffc00000, data 0x2ec8b7c/0x305e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318742528 unmapped: 57638912 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:04.641467+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318742528 unmapped: 57638912 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:05.641619+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567918 data_alloc: 234881024 data_used: 17543168
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318742528 unmapped: 57638912 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8edc3c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:06.641804+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8db5000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c911cf00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c91d7680
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c71841e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c7840960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805800 session 0x5597c9e43c20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca7ffc00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca7ffc00 session 0x5597c7ec7e00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c93974a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c8edc000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318865408 unmapped: 57516032 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:07.641931+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea82f000/0x0/0x4ffc00000, data 0x34f8b8c/0x368f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318865408 unmapped: 57516032 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:08.642060+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318865408 unmapped: 57516032 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:09.642230+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318865408 unmapped: 57516032 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:10.642377+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619336 data_alloc: 234881024 data_used: 17543168
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea82f000/0x0/0x4ffc00000, data 0x34f8b8c/0x368f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 57507840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:11.642500+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea82f000/0x0/0x4ffc00000, data 0x34f8b8c/0x368f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 57507840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:12.642681+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 57507840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:13.642818+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 57507840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:14.642995+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea82f000/0x0/0x4ffc00000, data 0x34f8b8c/0x368f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 57507840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:15.643167+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c7840b40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619336 data_alloc: 234881024 data_used: 17543168
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318889984 unmapped: 57491456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:16.643297+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805800 session 0x5597c9fad680
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8db6400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db6400 session 0x5597c6f10000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318889984 unmapped: 57491456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.707530975s of 16.930496216s, submitted: 49
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:17.643385+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9fa43c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 57483264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:18.643555+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea82d000/0x0/0x4ffc00000, data 0x34f8bbf/0x3691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 57483264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:19.643810+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 57483264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:20.644602+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622459 data_alloc: 234881024 data_used: 17547264
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 57483264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:21.644832+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 57483264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:22.645003+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 57483264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:23.645162+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 57483264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:24.645293+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805800 session 0x5597c9facf00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea82d000/0x0/0x4ffc00000, data 0x34f8bbf/0x3691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cab16000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cab16000 session 0x5597c8eddc20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9feb400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb400 session 0x5597c9fad860
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f76c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f76c00 session 0x5597c91d65a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c7e0ef00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9feb400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb400 session 0x5597c8f4e1e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805800 session 0x5597c8eb61e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cab16000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cab16000 session 0x5597c7975e00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c92eb800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c92eb800 session 0x5597c705bc20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321101824 unmapped: 58949632 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:25.645548+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3698042 data_alloc: 234881024 data_used: 17551360
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321101824 unmapped: 58949632 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:26.645815+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321110016 unmapped: 58941440 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:27.645918+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321110016 unmapped: 58941440 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:28.646079+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8bda960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321110016 unmapped: 58941440 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:29.646236+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9feb400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb400 session 0x5597c8bdb680
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805800 session 0x5597c9322960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cab16000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cab16000 session 0x5597c9fa43c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321110016 unmapped: 58941440 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9eb1000/0x0/0x4ffc00000, data 0x3e74bbf/0x400d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:30.647274+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.179798126s of 13.313076019s, submitted: 28
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3743774 data_alloc: 234881024 data_used: 23924736
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320069632 unmapped: 59981824 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:31.647406+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320864256 unmapped: 59187200 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:32.647543+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322822144 unmapped: 57229312 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:33.647653+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322822144 unmapped: 57229312 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:34.647761+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9eb1000/0x0/0x4ffc00000, data 0x3e74bbf/0x400d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 57221120 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:35.647926+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3805694 data_alloc: 251658240 data_used: 32612352
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 57221120 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:36.648247+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 52199424 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9eb1000/0x0/0x4ffc00000, data 0x3e74bbf/0x400d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:37.648463+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328089600 unmapped: 51961856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:38.648559+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327499776 unmapped: 52551680 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:39.648685+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:40.648832+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327499776 unmapped: 52551680 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9836000/0x0/0x4ffc00000, data 0x44efbbf/0x4688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3872376 data_alloc: 251658240 data_used: 33492992
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9836000/0x0/0x4ffc00000, data 0x44efbbf/0x4688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:41.648956+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327499776 unmapped: 52551680 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.605174065s of 10.863492966s, submitted: 86
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:42.649125+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330178560 unmapped: 49872896 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:43.649251+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330203136 unmapped: 49848320 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:44.649420+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330203136 unmapped: 49848320 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:45.649577+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330203136 unmapped: 49848320 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3948346 data_alloc: 251658240 data_used: 33550336
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:46.649944+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330203136 unmapped: 49848320 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f53000/0x0/0x4ffc00000, data 0x4dc9bbf/0x4f62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:47.650097+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330235904 unmapped: 49815552 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:48.650249+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329572352 unmapped: 50479104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:49.650423+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329572352 unmapped: 50479104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c705ad20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8ec2b40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f5c000/0x0/0x4ffc00000, data 0x4dc9bbf/0x4f62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [0,0,1])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:50.650522+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327835648 unmapped: 52215808 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c705a780
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3788147 data_alloc: 234881024 data_used: 26333184
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:51.650634+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327835648 unmapped: 52215808 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:52.650828+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327835648 unmapped: 52215808 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:53.650943+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327835648 unmapped: 52215808 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9bfb000/0x0/0x4ffc00000, data 0x412db7c/0x42c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:54.651259+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327843840 unmapped: 52207616 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9bfb000/0x0/0x4ffc00000, data 0x412db7c/0x42c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.413324356s of 13.750521660s, submitted: 89
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c9fbbe00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:55.651371+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327843840 unmapped: 52207616 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9feb400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb400 session 0x5597c9fa5a40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3535180 data_alloc: 218103808 data_used: 12926976
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:56.651654+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 59097088 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805800 session 0x5597c8eb7860
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9fba780
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8ec25a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91c8400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c8ec32c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9feb400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb400 session 0x5597c9fbb0e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cab16000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cab16000 session 0x5597c8baa3c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c6f92b40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:57.651775+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c9059e00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91c8400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c90592c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320962560 unmapped: 59088896 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:58.651920+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320962560 unmapped: 59088896 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:59.652175+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320962560 unmapped: 59088896 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:00.652371+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320962560 unmapped: 59088896 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eaf7d000/0x0/0x4ffc00000, data 0x2daab8c/0x2f41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eaf7d000/0x0/0x4ffc00000, data 0x2daab8c/0x2f41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551998 data_alloc: 218103808 data_used: 12926976
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:01.652574+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320962560 unmapped: 59088896 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9feb400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb400 session 0x5597c9fad680
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:02.652757+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321110016 unmapped: 58941440 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cab16c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:03.652963+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321110016 unmapped: 58941440 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:04.653060+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321118208 unmapped: 58933248 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:05.653175+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321126400 unmapped: 58925056 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3561874 data_alloc: 234881024 data_used: 14008320
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:06.653306+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eaf59000/0x0/0x4ffc00000, data 0x2dceb8c/0x2f65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321126400 unmapped: 58925056 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:07.653494+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321126400 unmapped: 58925056 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eaf59000/0x0/0x4ffc00000, data 0x2dceb8c/0x2f65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:08.653665+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321126400 unmapped: 58925056 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:09.653882+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321126400 unmapped: 58925056 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:10.654035+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321126400 unmapped: 58925056 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3561874 data_alloc: 234881024 data_used: 14008320
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:11.654182+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321126400 unmapped: 58925056 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:12.654369+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321126400 unmapped: 58925056 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eaf59000/0x0/0x4ffc00000, data 0x2dceb8c/0x2f65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:13.654512+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.019678116s of 18.189313889s, submitted: 44
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321773568 unmapped: 58277888 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:14.654636+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323788800 unmapped: 56262656 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:15.655062+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323829760 unmapped: 56221696 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650710 data_alloc: 234881024 data_used: 14180352
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:16.655225+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323829760 unmapped: 56221696 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:17.655393+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323829760 unmapped: 56221696 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:18.655565+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323829760 unmapped: 56221696 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e939d000/0x0/0x4ffc00000, data 0x37e2b8c/0x3979000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:19.655747+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323829760 unmapped: 56221696 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:20.656073+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323829760 unmapped: 56221696 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650710 data_alloc: 234881024 data_used: 14180352
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:21.656229+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cab16c00 session 0x5597c9fa45a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c7e0fe00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323829760 unmapped: 56221696 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e939d000/0x0/0x4ffc00000, data 0x37e2b8c/0x3979000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8bdb2c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:22.656500+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323837952 unmapped: 56213504 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:23.656802+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323837952 unmapped: 56213504 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:24.656993+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323837952 unmapped: 56213504 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:25.657180+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323837952 unmapped: 56213504 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.915218353s of 12.252222061s, submitted: 89
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2c00 session 0x5597c8ec6780
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c7ec63c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3349194 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:26.657298+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 61177856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb13a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:27.657430+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 61177856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:28.657557+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 61177856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:29.657743+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 61177856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:30.657961+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb13a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 61177856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:31.658143+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3349194 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb13a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 61177856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:32.658342+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 61177856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb13a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:33.658505+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 61177856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:34.658693+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91c8400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c8b54780
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8eb6b40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8b550e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 61177856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c8eb7c20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb13a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [0,0,0,2,9])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2c00 session 0x5597c8b545a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9feb400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb400 session 0x5597c9176960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9323a40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c9177e00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c9f8a5a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:35.658896+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317210624 unmapped: 62840832 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:36.659110+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388097 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317210624 unmapped: 62840832 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:37.659322+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317210624 unmapped: 62840832 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:38.659577+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317210624 unmapped: 62840832 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:39.659853+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eae62000/0x0/0x4ffc00000, data 0x1d26b7c/0x1ebc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317210624 unmapped: 62840832 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:40.660049+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317210624 unmapped: 62840832 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:41.660292+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388097 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317210624 unmapped: 62840832 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2c00 session 0x5597c6c22000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:42.660494+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cab16c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cab16c00 session 0x5597c976be00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317210624 unmapped: 62840832 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:43.660688+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9058960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317218816 unmapped: 62832640 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.941007614s of 18.134647369s, submitted: 67
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8bdaf00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:44.660815+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317112320 unmapped: 62939136 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eae62000/0x0/0x4ffc00000, data 0x1d26b7c/0x1ebc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:45.660932+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316801024 unmapped: 63250432 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:46.661095+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3399298 data_alloc: 218103808 data_used: 5677056
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316612608 unmapped: 63438848 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:47.661213+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316612608 unmapped: 63438848 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:48.661323+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316612608 unmapped: 63438848 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:49.661471+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eae62000/0x0/0x4ffc00000, data 0x1d26b7c/0x1ebc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316612608 unmapped: 63438848 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:50.661619+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316612608 unmapped: 63438848 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:51.661753+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3399298 data_alloc: 218103808 data_used: 5677056
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316612608 unmapped: 63438848 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:52.661891+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316612608 unmapped: 63438848 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eae62000/0x0/0x4ffc00000, data 0x1d26b7c/0x1ebc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:53.662029+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316612608 unmapped: 63438848 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eae62000/0x0/0x4ffc00000, data 0x1d26b7c/0x1ebc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:54.662147+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316612608 unmapped: 63438848 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:55.662312+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.857822418s of 11.883068085s, submitted: 7
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316809216 unmapped: 63242240 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:56.662493+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3509754 data_alloc: 218103808 data_used: 6983680
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea105000/0x0/0x4ffc00000, data 0x2a75b7c/0x2c0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319086592 unmapped: 60964864 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:57.662614+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320446464 unmapped: 59604992 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:58.662768+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320446464 unmapped: 59604992 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:59.662982+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320446464 unmapped: 59604992 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:00.663068+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320446464 unmapped: 59604992 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:01.663215+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3516472 data_alloc: 218103808 data_used: 6881280
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320446464 unmapped: 59604992 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8ed7000/0x0/0x4ffc00000, data 0x2b09b7c/0x2c9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:02.663359+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:03.663526+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8ec0000/0x0/0x4ffc00000, data 0x2b28b7c/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 42K writes, 164K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s
                                           Cumulative WAL: 42K writes, 15K syncs, 2.73 writes per sync, written: 0.16 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4857 writes, 19K keys, 4857 commit groups, 1.0 writes per commit group, ingest: 20.94 MB, 0.03 MB/s
                                           Interval WAL: 4857 writes, 1947 syncs, 2.49 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:04.663661+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:05.663815+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:06.663959+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8ec0000/0x0/0x4ffc00000, data 0x2b28b7c/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510628 data_alloc: 218103808 data_used: 6881280
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.153414726s of 11.554138184s, submitted: 132
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:07.664124+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:08.664301+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:09.664553+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:10.664671+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320364544 unmapped: 59686912 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c9395c20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805c00 session 0x5597c8b54000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d48400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d48400 session 0x5597c8f4f2c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:11.664788+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d48400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d48400 session 0x5597c9322f00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513901 data_alloc: 218103808 data_used: 6881280
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8bdab40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c9fac960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c7ec7e00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320315392 unmapped: 67174400 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805c00 session 0x5597c6f11c20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805c00 session 0x5597c9fa5a40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7edd000/0x0/0x4ffc00000, data 0x3b0ab8c/0x3ca1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:12.664938+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320323584 unmapped: 67166208 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:13.665127+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7edd000/0x0/0x4ffc00000, data 0x3b0ab8c/0x3ca1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320323584 unmapped: 67166208 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:14.665287+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:15.665405+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:16.665573+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3631519 data_alloc: 218103808 data_used: 6885376
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:17.665744+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:18.665883+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8ec32c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8baad20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7eda000/0x0/0x4ffc00000, data 0x3b0db8c/0x3ca4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:19.666068+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d48400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d48400 session 0x5597c9059e00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.693615913s of 12.837650299s, submitted: 26
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8bdb2c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:20.666192+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320348160 unmapped: 67141632 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:21.666298+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3634757 data_alloc: 218103808 data_used: 6885376
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7ed9000/0x0/0x4ffc00000, data 0x3b0dbaf/0x3ca5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320348160 unmapped: 67141632 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:22.666417+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:23.666525+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:24.666639+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:25.666761+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7ed9000/0x0/0x4ffc00000, data 0x3b0dbaf/0x3ca5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:26.666911+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752677 data_alloc: 234881024 data_used: 23490560
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:27.667089+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:28.667284+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:29.667452+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:30.667586+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7ed9000/0x0/0x4ffc00000, data 0x3b0dbaf/0x3ca5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:31.667730+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753333 data_alloc: 234881024 data_used: 23494656
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.706723213s of 11.763413429s, submitted: 15
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321593344 unmapped: 65896448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:32.667862+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326164480 unmapped: 61325312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:33.667980+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 61251584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:34.668083+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 61251584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:35.668257+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 61251584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e71b6000/0x0/0x4ffc00000, data 0x4830baf/0x49c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:36.668413+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3868965 data_alloc: 234881024 data_used: 24391680
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 61251584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:37.668596+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326246400 unmapped: 61243392 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:38.668754+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326246400 unmapped: 61243392 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:39.669008+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326246400 unmapped: 61243392 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:40.669273+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:41.669417+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3868737 data_alloc: 234881024 data_used: 24412160
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e71b4000/0x0/0x4ffc00000, data 0x4832baf/0x49ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:42.669639+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e71b4000/0x0/0x4ffc00000, data 0x4832baf/0x49ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:43.669807+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:44.669957+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.880608559s of 13.121785164s, submitted: 92
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8bab4a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9df14a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:45.670091+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c9fac5a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:46.670258+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529945 data_alloc: 218103808 data_used: 6897664
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:47.670428+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e86ea000/0x0/0x4ffc00000, data 0x2b43b7c/0x2cd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:48.670606+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:49.670830+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e86ea000/0x0/0x4ffc00000, data 0x2b43b7c/0x2cd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:50.670994+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:51.671194+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 14 09:59:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/12511987' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529945 data_alloc: 218103808 data_used: 6897664
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:52.671329+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c7e0e5a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2c00 session 0x5597c8da74a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:53.671476+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9df0960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:54.671640+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:55.671806+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:56.671917+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:57.672077+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:58.672193+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:59.672382+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:00.672504+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:01.672740+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:02.672898+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:03.673073+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:04.673204+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:05.673380+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:06.673534+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:07.673706+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:08.673920+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.259336472s of 23.626934052s, submitted: 120
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:09.674127+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318980096 unmapped: 68509696 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:10.674268+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:11.674496+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:12.674672+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:13.674800+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:14.675191+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:15.675320+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:16.675468+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 68444160 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:17.675613+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 68444160 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:18.675805+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 68444160 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:19.676047+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 68444160 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:20.676190+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:21.676329+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:22.676524+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:23.676704+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:24.676974+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:25.677153+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:26.677324+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:27.677476+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:28.677634+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:29.677862+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:30.678087+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319070208 unmapped: 68419584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:31.678452+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319070208 unmapped: 68419584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:32.678578+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319070208 unmapped: 68419584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:33.678720+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319070208 unmapped: 68419584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:34.678846+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c91d70e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8b552c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c93234a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d48400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d48400 session 0x5597c9fbb0e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.051465988s of 26.344564438s, submitted: 90
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8bade00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8eb6960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c7e0eb40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c705ad20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805c00 session 0x5597c9058b40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:35.679092+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:36.679259+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e8000/0x0/0x4ffc00000, data 0x23febee/0x2596000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460245 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:37.679455+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:38.679644+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:39.679940+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:40.680125+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:41.680328+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460245 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e8000/0x0/0x4ffc00000, data 0x23febee/0x2596000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:42.680509+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:43.680655+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:44.680824+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9f8bc20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e8000/0x0/0x4ffc00000, data 0x23febee/0x2596000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e7000/0x0/0x4ffc00000, data 0x23fec11/0x2597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:45.680970+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.328366280s of 10.499547958s, submitted: 49
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:46.681128+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3465278 data_alloc: 218103808 data_used: 4861952
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319340544 unmapped: 68149248 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:47.681273+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:48.681444+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:49.682141+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:50.682336+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e7000/0x0/0x4ffc00000, data 0x23fec11/0x2597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:51.682490+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3523198 data_alloc: 218103808 data_used: 13045760
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e7000/0x0/0x4ffc00000, data 0x23fec11/0x2597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:52.682698+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:53.683206+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:54.683522+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:55.683740+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.949416161s of 10.952005386s, submitted: 1
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:56.683968+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554408 data_alloc: 218103808 data_used: 13467648
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322412544 unmapped: 65077248 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:57.684324+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8e9e000/0x0/0x4ffc00000, data 0x2736c11/0x28cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:58.684797+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:59.685087+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:00.685391+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:01.685623+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567410 data_alloc: 234881024 data_used: 13811712
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:02.685850+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:03.686052+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8e20000/0x0/0x4ffc00000, data 0x27b4c11/0x294d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:04.686201+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:05.686361+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:06.686558+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566358 data_alloc: 234881024 data_used: 13824000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:07.686747+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8e00000/0x0/0x4ffc00000, data 0x27d5c11/0x296e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:08.686908+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:09.687074+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323551232 unmapped: 63938560 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:10.687198+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.258024216s of 14.533938408s, submitted: 65
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8da6000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c8da63c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9feb800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb800 session 0x5597c78414a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c6f934a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323551232 unmapped: 63938560 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:11.687382+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8df9000/0x0/0x4ffc00000, data 0x27dbc20/0x2975000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3568414 data_alloc: 234881024 data_used: 13824000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330170368 unmapped: 57319424 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:12.687565+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8eb6780
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c9394d20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c8f4e780
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c8f4f860
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9feb800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb800 session 0x5597c7e0f4a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:13.687707+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:14.687933+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:15.688106+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:16.688280+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623576 data_alloc: 234881024 data_used: 13828096
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:17.688466+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:18.688632+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323567616 unmapped: 63922176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:19.688791+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323567616 unmapped: 63922176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:20.688971+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323567616 unmapped: 63922176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.050421715s of 10.386352539s, submitted: 13
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c71854a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:21.689055+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624501 data_alloc: 234881024 data_used: 13828096
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323584000 unmapped: 63905792 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:22.689184+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323584000 unmapped: 63905792 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:23.689358+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323584000 unmapped: 63905792 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:24.689567+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324468736 unmapped: 63021056 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:25.689722+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:26.689846+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3672021 data_alloc: 234881024 data_used: 20504576
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:27.690056+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:28.690211+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:29.690412+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:30.690590+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:31.690742+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3672373 data_alloc: 234881024 data_used: 20504576
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:32.690834+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:33.691044+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:34.691201+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.206857681s of 13.240119934s, submitted: 8
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331653120 unmapped: 55836672 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:35.691404+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331759616 unmapped: 55730176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:36.691568+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7b26000/0x0/0x4ffc00000, data 0x3aa6c20/0x3c40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788835 data_alloc: 234881024 data_used: 21499904
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331759616 unmapped: 55730176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:37.691689+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331759616 unmapped: 55730176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:38.691936+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331759616 unmapped: 55730176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:39.692200+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7b26000/0x0/0x4ffc00000, data 0x3aa6c20/0x3c40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331833344 unmapped: 55656448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:40.692340+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331833344 unmapped: 55656448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:41.692676+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788835 data_alloc: 234881024 data_used: 21499904
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331161600 unmapped: 56328192 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:42.692776+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7b26000/0x0/0x4ffc00000, data 0x3aa6c20/0x3c40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331161600 unmapped: 56328192 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:43.692961+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331169792 unmapped: 56320000 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:44.693125+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331169792 unmapped: 56320000 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:45.693302+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7b2d000/0x0/0x4ffc00000, data 0x3aa7c20/0x3c41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331169792 unmapped: 56320000 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:46.693433+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3784939 data_alloc: 234881024 data_used: 21671936
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8d714a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.424279213s of 12.732007027s, submitted: 100
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c9f8a1e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331169792 unmapped: 56320000 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:47.693557+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c8d71e00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 57835520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:48.693709+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8dfa000/0x0/0x4ffc00000, data 0x27dbc11/0x2974000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 57835520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:49.693906+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 57835520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:50.694077+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c6489c20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 57835520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:51.697300+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8dee000/0x0/0x4ffc00000, data 0x27e7c11/0x2980000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [0,0,1])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9323a40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:52.697458+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:53.697641+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:54.697801+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:55.697945+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:56.698119+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:57.698318+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:58.698448+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:59.698615+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:00.698803+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:01.698988+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:02.699158+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:03.699338+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:04.699486+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:05.699629+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:06.699771+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324665344 unmapped: 62824448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:07.699971+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324665344 unmapped: 62824448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:08.700134+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324665344 unmapped: 62824448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:09.700353+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324665344 unmapped: 62824448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:10.700475+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:11.700611+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:12.700776+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:13.701004+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:14.701167+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:15.701301+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:16.701497+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:17.701642+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:18.701847+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:19.702086+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:20.702258+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:21.702437+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:22.702571+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:23.702759+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:24.702912+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:25.703061+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:26.703219+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:27.703403+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:28.703617+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:29.703870+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:30.704146+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324698112 unmapped: 62791680 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:31.704350+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324698112 unmapped: 62791680 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:32.704565+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324698112 unmapped: 62791680 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:33.704729+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324698112 unmapped: 62791680 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8f4ed20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c8ec32c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c91761e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9fec000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9fec000 session 0x5597c705bc20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 46.449398041s of 46.750865936s, submitted: 92
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c6f114a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:34.704886+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324829184 unmapped: 62660608 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:35.705138+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:36.705274+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441271 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:37.705466+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:38.705664+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:39.706060+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:40.706186+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324853760 unmapped: 62636032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:41.706370+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8baa780
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441271 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c9fa43c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:42.706534+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:43.706706+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c8babc20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260800 session 0x5597c911cf00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:44.706888+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:45.707079+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:46.707226+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480549 data_alloc: 218103808 data_used: 9494528
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:47.707393+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:48.707531+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:49.707728+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:50.707883+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:51.708059+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480869 data_alloc: 218103808 data_used: 9551872
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:52.708212+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:53.708352+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:54.708501+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:55.708665+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:56.708813+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.233131409s of 23.287984848s, submitted: 4
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481685 data_alloc: 218103808 data_used: 9555968
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:57.708965+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323919872 unmapped: 63569920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:58.709117+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323928064 unmapped: 63561728 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:59.709338+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:00.709515+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:01.709722+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3507191 data_alloc: 218103808 data_used: 9674752
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:02.710582+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:03.710803+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:04.711731+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:05.712719+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:06.713123+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3507207 data_alloc: 218103808 data_used: 9674752
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:07.714213+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:08.715176+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:09.716167+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:10.716925+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323207168 unmapped: 64282624 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:11.717180+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323207168 unmapped: 64282624 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3507207 data_alloc: 218103808 data_used: 9674752
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:12.717840+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323207168 unmapped: 64282624 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:13.717983+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.272737503s of 16.378507614s, submitted: 28
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323747840 unmapped: 63741952 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [0,0,0,0,0,1])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c8f4f0e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:14.718142+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323215360 unmapped: 64274432 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:15.718321+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa6000/0x0/0x4ffc00000, data 0x2671b8c/0x2808000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323223552 unmapped: 64266240 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:16.718488+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 64258048 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:17.718633+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3542575 data_alloc: 218103808 data_used: 9674752
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 64258048 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:18.718806+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 64258048 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:19.719080+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa6000/0x0/0x4ffc00000, data 0x2671b8c/0x2808000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 64258048 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c8ec6f00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:20.719276+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323239936 unmapped: 64249856 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91cb000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91cb000 session 0x5597c8edd4a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cc6e8000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c8d71a40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:21.719442+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9fad860
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323239936 unmapped: 64249856 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:22.719576+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544413 data_alloc: 218103808 data_used: 9674752
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323239936 unmapped: 64249856 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:23.720161+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91cb000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:24.720315+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:25.720399+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:26.720560+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:27.720744+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570813 data_alloc: 218103808 data_used: 13381632
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:28.720921+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:29.721081+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:30.721289+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:31.721496+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:32.721714+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570813 data_alloc: 218103808 data_used: 13381632
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:33.721877+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.227336884s of 20.305019379s, submitted: 9
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:34.722086+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324141056 unmapped: 63348736 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e997f000/0x0/0x4ffc00000, data 0x2c8fb9c/0x2e27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:35.722243+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:36.722402+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e98e9000/0x0/0x4ffc00000, data 0x2d2db9c/0x2ec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:37.722547+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3627635 data_alloc: 218103808 data_used: 13598720
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:38.722719+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:39.722915+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:40.723083+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324935680 unmapped: 62554112 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:41.723229+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324943872 unmapped: 62545920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:42.723409+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628135 data_alloc: 218103808 data_used: 13598720
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x2d51b9c/0x2ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324943872 unmapped: 62545920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:43.723623+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324943872 unmapped: 62545920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:44.723782+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324943872 unmapped: 62545920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:45.723945+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324952064 unmapped: 62537728 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:46.724080+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x2d51b9c/0x2ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324952064 unmapped: 62537728 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:47.760850+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628455 data_alloc: 218103808 data_used: 13606912
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324952064 unmapped: 62537728 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91cb000 session 0x5597c705bc20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9176b40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.985599518s of 14.288364410s, submitted: 86
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:48.767144+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c705a5a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:49.767331+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:50.767520+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:51.767733+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:52.800432+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515600 data_alloc: 218103808 data_used: 9674752
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:53.800600+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c9fba780
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9394f00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c9fba5a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:54.800757+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:55.800891+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:56.801086+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:57.801223+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:58.801393+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:59.801580+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:00.801774+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:01.801925+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:02.802082+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:03.802269+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:04.802454+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:05.802625+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:06.802776+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:07.802919+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:08.803081+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca804800 session 0x5597c9322d20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91cb000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:09.803257+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: mgrc ms_handle_reset ms_handle_reset con 0x5597c8fbb000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3625056923
Oct 14 09:59:26 compute-0 ceph-osd[88375]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3625056923,v1:192.168.122.100:6801/3625056923]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: get_auth_request con 0x5597c9fec000 auth_method 0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: mgrc handle_mgr_configure stats_period=5
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db4400 session 0x5597c911c780
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cc6e8400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c7e2c800 session 0x5597c8bacb40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8db4400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:10.803376+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:11.803529+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:12.803726+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:13.803887+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:14.804085+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:15.804214+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:16.804378+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318996480 unmapped: 68493312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:17.804549+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318996480 unmapped: 68493312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:18.804715+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318996480 unmapped: 68493312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:19.804939+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318996480 unmapped: 68493312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:20.805257+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:21.805424+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:22.805583+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:23.805759+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:24.805919+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:25.806097+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319012864 unmapped: 68476928 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:26.806276+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319012864 unmapped: 68476928 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:27.806495+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319012864 unmapped: 68476928 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:28.806674+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319012864 unmapped: 68476928 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:29.806891+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c8da7c20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c9df1680
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cc6e8000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c7e792c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f74c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f74c00 session 0x5597c8ec6b40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.484188080s of 41.605556488s, submitted: 34
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325509120 unmapped: 66715648 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8da70e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c9f8b860
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c7e78960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cc6e8000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c976a000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca800000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca800000 session 0x5597c7840b40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:30.807075+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:31.807231+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d07000/0x0/0x4ffc00000, data 0x2911b7c/0x2aa7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:32.807384+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3527710 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:33.807515+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:34.807638+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:35.807868+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d07000/0x0/0x4ffc00000, data 0x2911b7c/0x2aa7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320536576 unmapped: 71688192 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c90592c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:36.808081+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320536576 unmapped: 71688192 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c976be00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:37.808255+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c8baad20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3527710 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cc6e8000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c9323860
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319496192 unmapped: 72728576 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:38.808394+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80e400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319496192 unmapped: 72728576 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:39.808541+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319447040 unmapped: 72777728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:40.808807+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:41.809145+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:42.809352+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3633229 data_alloc: 234881024 data_used: 18477056
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:43.809870+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:44.810067+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:45.810749+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:46.810993+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:47.811858+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3633229 data_alloc: 234881024 data_used: 18477056
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:48.812128+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.397739410s of 19.592414856s, submitted: 36
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:49.812408+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [0,0,1,0,3,1])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e96e5000/0x0/0x4ffc00000, data 0x2f32b8c/0x30c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326262784 unmapped: 65961984 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:50.812610+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:51.812984+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:52.813190+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707751 data_alloc: 234881024 data_used: 19681280
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:53.813500+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9633000/0x0/0x4ffc00000, data 0x2fe3b8c/0x317a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:54.813660+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:55.813944+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9633000/0x0/0x4ffc00000, data 0x2fe3b8c/0x317a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:56.814109+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:57.814426+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700087 data_alloc: 234881024 data_used: 19681280
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:58.814584+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:59.814858+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9631000/0x0/0x4ffc00000, data 0x2fe6b8c/0x317d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:00.814972+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:01.815199+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:02.815366+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700087 data_alloc: 234881024 data_used: 19681280
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9631000/0x0/0x4ffc00000, data 0x2fe6b8c/0x317d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:03.815579+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9631000/0x0/0x4ffc00000, data 0x2fe6b8c/0x317d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c79745a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9176780
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c6f11e00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c93230e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.581520081s of 14.833774567s, submitted: 84
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c705ab40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:04.815686+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cc6e8000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c8baad20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cc6e8000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c976be00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c90592c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c976a000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:05.815842+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:06.816075+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:07.816296+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3771905 data_alloc: 234881024 data_used: 19681280
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:08.816438+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:09.816700+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:10.816824+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:11.816988+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326721536 unmapped: 65503232 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:12.817154+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3836865 data_alloc: 234881024 data_used: 27766784
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:13.817293+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:14.817420+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:15.817539+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:16.817681+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:17.817843+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3836865 data_alloc: 234881024 data_used: 27766784
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:18.818064+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:19.818295+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:20.818446+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.824155807s of 16.902429581s, submitted: 11
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329990144 unmapped: 62234624 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:21.818581+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 58564608 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:22.818725+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x38bab9c/0x3a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3878863 data_alloc: 234881024 data_used: 28008448
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 58564608 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:23.818892+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7659000/0x0/0x4ffc00000, data 0x3e17b9c/0x3faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:24.819085+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:25.819223+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:26.819387+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:27.819544+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3886749 data_alloc: 234881024 data_used: 27828224
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e763c000/0x0/0x4ffc00000, data 0x3e2cb9c/0x3fc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:28.819706+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:29.819869+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:30.820085+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:31.820239+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e763c000/0x0/0x4ffc00000, data 0x3e2cb9c/0x3fc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:32.820383+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3886749 data_alloc: 234881024 data_used: 27828224
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.783678055s of 12.008990288s, submitted: 48
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:33.820527+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:34.820680+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e763c000/0x0/0x4ffc00000, data 0x3e2cb9c/0x3fc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:35.820818+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c7e78960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:36.820970+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9df03c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330792960 unmapped: 61431808 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:37.821138+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3710153 data_alloc: 234881024 data_used: 18591744
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330792960 unmapped: 61431808 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:38.821311+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330792960 unmapped: 61431808 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e848f000/0x0/0x4ffc00000, data 0x2fe8b8c/0x317f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:39.821499+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c8ec65a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9df0b40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330792960 unmapped: 61431808 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:40.822661+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9df0f00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:41.822829+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:42.822977+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:43.823087+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:44.823228+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:45.823352+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:46.823459+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:47.823579+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:48.823723+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:49.823967+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:50.824124+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:51.826658+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:52.826816+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:53.827081+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:54.827252+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:55.827437+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:56.827618+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:57.827842+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:58.828091+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:59.828304+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:00.828478+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:01.828669+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:02.828879+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:03.829086+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:04.829341+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:05.829462+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:06.829632+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:07.829799+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:08.829984+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:09.830249+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:10.830401+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:11.830579+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:12.830767+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:13.830910+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:14.831375+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:15.831510+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9059e00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8bdab40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9fa4b40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c8b550e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80e400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.645046234s of 42.893882751s, submitted: 79
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c8d70b40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c7184000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c9fb23c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c7840000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c8eb7e00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321708032 unmapped: 70516736 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:16.831767+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321708032 unmapped: 70516736 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:17.831981+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3541334 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321716224 unmapped: 70508544 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:18.832445+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321716224 unmapped: 70508544 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:19.832718+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:20.832931+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321716224 unmapped: 70508544 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:21.833161+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321716224 unmapped: 70508544 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:22.833400+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321724416 unmapped: 70500352 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80e400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c9fa50e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3541334 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c8ecab40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:23.833628+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321724416 unmapped: 70500352 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cc6e8000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c7e0fa40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8ec7c20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:24.833807+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321617920 unmapped: 70606848 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80e400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:25.833960+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 70598656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:26.834095+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321634304 unmapped: 70590464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:27.834255+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614275 data_alloc: 234881024 data_used: 14381056
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:28.834395+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:29.834552+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:30.834700+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:31.834873+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:32.835101+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614275 data_alloc: 234881024 data_used: 14381056
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:33.835417+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:34.835666+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:35.835951+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.752676010s of 19.861513138s, submitted: 33
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:36.836080+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329728000 unmapped: 62496768 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:37.836264+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3681253 data_alloc: 234881024 data_used: 15691776
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:38.836559+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8853000/0x0/0x4ffc00000, data 0x2c1bb8c/0x2db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:39.837098+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:40.837278+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8853000/0x0/0x4ffc00000, data 0x2c1bb8c/0x2db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8853000/0x0/0x4ffc00000, data 0x2c1bb8c/0x2db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:41.837632+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:42.837851+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674689 data_alloc: 234881024 data_used: 15691776
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:43.838052+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:44.838236+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e883d000/0x0/0x4ffc00000, data 0x2c3ab8c/0x2dd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:45.838397+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:46.838534+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:47.838685+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e883d000/0x0/0x4ffc00000, data 0x2c3ab8c/0x2dd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674689 data_alloc: 234881024 data_used: 15691776
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:48.838826+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.384753227s of 12.701920509s, submitted: 117
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:49.839061+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:50.839164+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:51.839271+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9323a40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6e400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6e400 session 0x5597c8b552c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeeb400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c9fac1e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8fd0000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8fd0000 session 0x5597c8eddc20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9fba000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6e400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6e400 session 0x5597c7ec63c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c7ec6960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeeb400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c9df0000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6f800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c8d70000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:52.839421+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717851 data_alloc: 234881024 data_used: 15691776
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:53.839542+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840c000/0x0/0x4ffc00000, data 0x3069bfe/0x3202000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840c000/0x0/0x4ffc00000, data 0x3069bfe/0x3202000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:54.839610+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:55.839745+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:56.839887+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:57.840056+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6f800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c7974f00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840c000/0x0/0x4ffc00000, data 0x3069bfe/0x3202000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c7ec6960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717851 data_alloc: 234881024 data_used: 15691776
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:58.840293+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6e400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6e400 session 0x5597c7ec63c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.189133644s of 10.343238831s, submitted: 42
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9fba000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:59.840453+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840b000/0x0/0x4ffc00000, data 0x3069c21/0x3203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeeb400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:00.840563+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330391552 unmapped: 61833216 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:01.840710+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:02.840866+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840b000/0x0/0x4ffc00000, data 0x3069c21/0x3203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3748708 data_alloc: 234881024 data_used: 19931136
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:03.841048+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:04.841236+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:05.841354+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:06.841515+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:07.841714+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749364 data_alloc: 234881024 data_used: 19935232
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840b000/0x0/0x4ffc00000, data 0x3069c21/0x3203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:08.841896+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:09.842116+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:10.842260+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.708535194s of 11.744414330s, submitted: 9
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840b000/0x0/0x4ffc00000, data 0x3069c21/0x3203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,1,1])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:11.843170+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 57638912 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:12.843356+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3808274 data_alloc: 234881024 data_used: 20135936
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:13.843525+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:14.843699+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7e00000/0x0/0x4ffc00000, data 0x366cc21/0x3806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7e00000/0x0/0x4ffc00000, data 0x366cc21/0x3806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:15.843862+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:16.844052+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:17.844252+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 57098240 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7e00000/0x0/0x4ffc00000, data 0x366cc21/0x3806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803170 data_alloc: 234881024 data_used: 20140032
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:18.844395+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 57098240 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:19.845932+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 57098240 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:20.847119+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 57098240 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:21.849291+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 57090048 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:22.850323+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 57090048 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.447299004s of 12.729538918s, submitted: 84
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803346 data_alloc: 234881024 data_used: 20140032
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:23.850587+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 57090048 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7e05000/0x0/0x4ffc00000, data 0x366fc21/0x3809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:24.850837+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 57090048 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:25.851092+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c9fac1e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335142912 unmapped: 57081856 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeeb400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c8da70e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:26.851236+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335192064 unmapped: 57032704 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:27.851398+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335192064 unmapped: 57032704 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686054 data_alloc: 234881024 data_used: 15679488
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:28.851541+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335192064 unmapped: 57032704 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9f8a1e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c9f8b860
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:29.851856+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329187328 unmapped: 63037440 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8f4e780
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e882a000/0x0/0x4ffc00000, data 0x2c4db8c/0x2de4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:30.852196+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:31.852973+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:32.853448+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:33.853717+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:34.854084+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:35.854343+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:36.854583+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:37.854711+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:38.854906+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:39.855231+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:40.855523+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:41.856103+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:42.856425+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:43.856632+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:44.856896+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:45.857267+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:46.857539+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:47.857768+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:48.857997+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:49.858418+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:50.858658+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:51.858886+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:52.859120+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:53.859276+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:54.859461+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:55.859606+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:56.859812+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:57.859975+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:58.860133+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329220096 unmapped: 63004672 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:59.860326+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329220096 unmapped: 63004672 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:00.860477+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329220096 unmapped: 63004672 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:01.860614+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329220096 unmapped: 63004672 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:02.860777+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6e400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.834590912s of 39.179557800s, submitted: 109
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6e400 session 0x5597c705a960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:03.860964+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3540834 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:04.861130+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:05.861414+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x2376b7c/0x250c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:06.861617+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:07.861768+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:08.861905+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3540834 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9f8a3c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324239360 unmapped: 71663616 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c93223c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:09.862058+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324239360 unmapped: 71663616 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80e400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c9058b40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeeb400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c8b554a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:10.862223+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324386816 unmapped: 71516160 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6f800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:11.862349+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324395008 unmapped: 71507968 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:12.862465+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:13.862638+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614328 data_alloc: 234881024 data_used: 13717504
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:14.862784+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:15.862934+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:16.863107+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:17.863250+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:18.863355+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614328 data_alloc: 234881024 data_used: 13717504
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:19.863510+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:20.863723+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:21.864133+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.299053192s of 19.407587051s, submitted: 25
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:22.864248+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328425472 unmapped: 67477504 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:23.864390+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662080 data_alloc: 234881024 data_used: 14364672
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328425472 unmapped: 67477504 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:24.864577+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:25.864770+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8b89000/0x0/0x4ffc00000, data 0x28e6b9f/0x2a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:26.864860+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:27.865008+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:28.865231+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8b89000/0x0/0x4ffc00000, data 0x28e6b9f/0x2a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3673472 data_alloc: 234881024 data_used: 14249984
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:29.865420+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:30.865593+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:31.865860+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:32.866077+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:33.867005+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3666320 data_alloc: 234881024 data_used: 14249984
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8b8e000/0x0/0x4ffc00000, data 0x28e9b9f/0x2a80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:34.867219+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:35.867392+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:36.867550+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8b8e000/0x0/0x4ffc00000, data 0x28e9b9f/0x2a80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:37.867737+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:38.868175+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3666320 data_alloc: 234881024 data_used: 14249984
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:39.868474+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.285617828s of 17.535190582s, submitted: 85
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c9fa41e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:40.868580+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:41.868727+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7d97000/0x0/0x4ffc00000, data 0x36e0b9f/0x3877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:42.868873+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:43.869126+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3771938 data_alloc: 234881024 data_used: 14249984
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:44.869435+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:45.869689+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c9fad2c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8b54960
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:46.869828+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c911da40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7d97000/0x0/0x4ffc00000, data 0x36e0b9f/0x3877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80e400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c8f4fe00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeeb400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:47.869909+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80f800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:48.870054+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3829579 data_alloc: 234881024 data_used: 19238912
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:49.870208+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:50.870349+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:51.870512+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7d72000/0x0/0x4ffc00000, data 0x3704bc2/0x389c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:52.870679+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:53.870879+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3839499 data_alloc: 234881024 data_used: 19476480
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:54.871070+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.400768280s of 15.509059906s, submitted: 20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:55.871190+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:56.871375+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:57.871571+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7d72000/0x0/0x4ffc00000, data 0x3704bc2/0x389c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:58.871800+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3870475 data_alloc: 234881024 data_used: 19501056
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333758464 unmapped: 74211328 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:59.871999+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334495744 unmapped: 73474048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e760e000/0x0/0x4ffc00000, data 0x3e68bc2/0x4000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:00.872166+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e760e000/0x0/0x4ffc00000, data 0x3e68bc2/0x4000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334512128 unmapped: 73457664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:01.872349+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:02.872507+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:03.872682+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3917109 data_alloc: 234881024 data_used: 20856832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 176K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.71 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2838 writes, 11K keys, 2838 commit groups, 1.0 writes per commit group, ingest: 12.66 MB, 0.02 MB/s
                                           Interval WAL: 2838 writes, 1140 syncs, 2.49 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:04.872826+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7581000/0x0/0x4ffc00000, data 0x3ef5bc2/0x408d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:05.872966+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7581000/0x0/0x4ffc00000, data 0x3ef5bc2/0x408d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:06.873101+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7581000/0x0/0x4ffc00000, data 0x3ef5bc2/0x408d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.800569534s of 12.333517075s, submitted: 84
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:07.873227+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:08.873366+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911673 data_alloc: 234881024 data_used: 20856832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:09.873517+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:10.873875+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:11.874034+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7560000/0x0/0x4ffc00000, data 0x3f16bc2/0x40ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:12.874176+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:13.874293+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911673 data_alloc: 234881024 data_used: 20856832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:14.874516+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:15.874685+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:16.874877+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7560000/0x0/0x4ffc00000, data 0x3f16bc2/0x40ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:17.875035+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:18.875169+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911673 data_alloc: 234881024 data_used: 20856832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:19.875367+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:20.875501+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7560000/0x0/0x4ffc00000, data 0x3f16bc2/0x40ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:21.875697+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:22.875885+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:23.876114+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911673 data_alloc: 234881024 data_used: 20856832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7560000/0x0/0x4ffc00000, data 0x3f16bc2/0x40ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:24.876306+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80f800 session 0x5597c9394d20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c9df1e00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:25.876467+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.377149582s of 18.387834549s, submitted: 2
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8da7680
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 73048064 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:26.876679+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 73048064 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:27.876928+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 73048064 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:28.877106+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643825 data_alloc: 218103808 data_used: 9093120
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 73039872 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e885d000/0x0/0x4ffc00000, data 0x28eab9f/0x2a81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:29.878078+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c705bc20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c9f8b2c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c90583c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:30.879005+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:31.880152+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:32.880825+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:33.881262+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491469 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a29000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:34.881823+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:35.882283+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:36.882482+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:37.882665+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:38.882941+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491469 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:39.883188+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a29000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:40.883355+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:41.883576+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:42.883866+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:43.884078+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491469 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:44.884305+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a29000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:45.884441+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 75579392 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:46.884568+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 75579392 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:47.884721+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 75579392 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:48.884913+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491469 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 75579392 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:49.885159+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c93941e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8bab680
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6f800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c8edc000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9fad860
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeeb400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.921491623s of 24.220367432s, submitted: 89
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c8d70780
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c91d7860
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8edc780
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6f800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c976af00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9fad860
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332398592 unmapped: 75571200 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:50.885353+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e94fb000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332398592 unmapped: 75571200 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:51.885489+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c93941e0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332414976 unmapped: 75554816 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:52.885651+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332414976 unmapped: 75554816 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:53.885837+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544403 data_alloc: 218103808 data_used: 4796416
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332414976 unmapped: 75554816 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:54.885995+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334086144 unmapped: 73883648 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:55.886245+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8da7680
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8baa3c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e94fb000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6f800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331317248 unmapped: 76652544 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:56.886400+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c8ec6b40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:57.886522+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:58.886719+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:59.886988+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:00.887260+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:01.887381+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:02.887582+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:03.887784+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:04.887971+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:05.888138+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:06.888283+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:07.888429+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:08.888611+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.342603683s of 18.682430267s, submitted: 72
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:09.888801+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 76595200 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:10.889095+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:11.889228+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:12.889401+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [1])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:13.889579+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:14.889726+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:15.889883+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:16.890054+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:17.890251+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:18.890384+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:19.890574+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:20.890774+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:21.890908+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:22.891099+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:23.891269+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:24.891460+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:25.891629+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:26.891855+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:27.892094+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:28.892243+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:29.892427+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:30.892543+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:31.892636+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:32.892844+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:33.893085+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:34.893267+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:35.893460+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:36.893707+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:37.893895+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:38.894098+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:39.894338+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:40.894527+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:41.894682+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:42.894910+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:43.895143+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:44.895340+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:45.895540+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:46.895744+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:47.895917+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:48.896193+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:49.896456+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:50.896665+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:51.896812+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:52.897089+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:53.897259+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:54.897466+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:55.897594+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.002799988s of 47.298183441s, submitted: 90
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:56.897801+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331456512 unmapped: 76513280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 293 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9fbb680
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:57.897968+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331456512 unmapped: 76513280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e9618000/0x0/0x4ffc00000, data 0x1a50619/0x1be5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:58.898152+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500843 data_alloc: 218103808 data_used: 4218880
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:59.898358+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:00.898546+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:01.898744+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e9618000/0x0/0x4ffc00000, data 0x1a50619/0x1be5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:02.898887+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _renew_subs
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:03.899110+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331481088 unmapped: 76488704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:04.900363+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331481088 unmapped: 76488704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:05.901201+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:06.902758+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:07.903531+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:08.904642+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:09.905626+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:10.906448+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:11.907162+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:12.907343+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:13.908069+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:14.908221+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:15.909097+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:16.909278+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:17.909591+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:18.909746+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:19.910093+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:20.910287+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:21.910460+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:22.910604+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:23.910796+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:24.910919+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:25.911115+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331513856 unmapped: 76455936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:26.911274+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331513856 unmapped: 76455936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:27.911444+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331513856 unmapped: 76455936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:28.911592+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331513856 unmapped: 76455936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:29.911763+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:30.911926+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:31.912147+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:32.912304+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:33.912488+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:34.912717+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:35.912981+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:36.913299+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:37.919641+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:38.920117+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:39.921932+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:40.922499+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:41.922782+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:42.923183+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:43.923672+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:44.924064+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:45.924816+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:46.925097+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:47.925276+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:48.925560+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:49.926276+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:50.926582+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:51.926944+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:52.927266+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:53.927584+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:54.927804+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331554816 unmapped: 76414976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:55.927955+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331554816 unmapped: 76414976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:56.928197+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331554816 unmapped: 76414976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:57.928611+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331554816 unmapped: 76414976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:58.928826+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331563008 unmapped: 76406784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:59.929223+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331563008 unmapped: 76406784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:00.970283+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331563008 unmapped: 76406784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:01.970435+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331563008 unmapped: 76406784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:02.970715+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331563008 unmapped: 76406784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:03.970926+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331563008 unmapped: 76406784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:04.971168+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:05.971593+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:06.971848+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:07.972177+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:08.972431+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:09.972624+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:10.972779+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:11.972939+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331579392 unmapped: 76390400 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:12.973089+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331579392 unmapped: 76390400 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:13.973270+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331579392 unmapped: 76390400 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:14.973429+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331587584 unmapped: 76382208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:15.973565+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331587584 unmapped: 76382208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:16.973737+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331587584 unmapped: 76382208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:17.973867+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331587584 unmapped: 76382208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:18.974158+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331587584 unmapped: 76382208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:19.974421+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:20.974626+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:21.974805+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:22.975107+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:23.975306+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:24.975444+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:25.975634+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:26.975801+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:27.976137+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331603968 unmapped: 76365824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:28.976313+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331603968 unmapped: 76365824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:29.976542+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331603968 unmapped: 76365824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:30.976684+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331603968 unmapped: 76365824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:31.976892+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331603968 unmapped: 76365824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:32.977116+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331603968 unmapped: 76365824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:33.977308+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331612160 unmapped: 76357632 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:34.977498+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331612160 unmapped: 76357632 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:35.977660+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331612160 unmapped: 76357632 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:36.977846+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331612160 unmapped: 76357632 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:37.978035+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:38.978220+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:39.978454+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:40.978637+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:41.978786+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:42.978961+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:43.979161+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:44.979273+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331628544 unmapped: 76341248 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:45.979443+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331628544 unmapped: 76341248 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:46.979625+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331628544 unmapped: 76341248 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:47.979786+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331628544 unmapped: 76341248 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:48.979953+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331636736 unmapped: 76333056 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:49.980228+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331636736 unmapped: 76333056 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:50.980450+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80e400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 ms_handle_reset con 0x5597ca80e400 session 0x5597c9322d20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337690624 unmapped: 70279168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 ms_handle_reset con 0x5597c70cac00 session 0x5597c8d71a40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:51.980658+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337690624 unmapped: 70279168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:52.980830+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337690624 unmapped: 70279168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:53.981195+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 70270976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:54.981434+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517001 data_alloc: 218103808 data_used: 11038720
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 70270976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:55.981649+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 70270976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:56.981816+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 70270976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:57.982003+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 70270976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:58.982271+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 70270976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:59.982516+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 123.750770569s of 123.860015869s, submitted: 42
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519975 data_alloc: 218103808 data_used: 11038720
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 295 ms_handle_reset con 0x5597c9538400 session 0x5597c9fa5c20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 73490432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:00.982714+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 73490432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:01.982878+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 73490432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6f800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:02.983040+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _renew_subs
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 296 ms_handle_reset con 0x5597c9f6f800 session 0x5597c8b54f00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:03.983197+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9e10000/0x0/0x4ffc00000, data 0x12557fb/0x13ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:04.983466+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3312682 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:05.983609+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:06.983796+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:07.983985+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:08.984273+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:09.984471+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3312682 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 296 heartbeat osd_stat(store_statfs(0x4eae11000/0x0/0x4ffc00000, data 0x2557eb/0x3ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80e400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 296 ms_handle_reset con 0x5597ca80e400 session 0x5597c9177860
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.878767967s of 10.074717522s, submitted: 53
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:10.984761+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:11.984921+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 297 heartbeat osd_stat(store_statfs(0x4eae0e000/0x0/0x4ffc00000, data 0x25724e/0x3ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:12.985099+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:13.985254+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 297 heartbeat osd_stat(store_statfs(0x4eae0e000/0x0/0x4ffc00000, data 0x25724e/0x3ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334495744 unmapped: 73474048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:14.985430+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 ms_handle_reset con 0x5597ca9b2000 session 0x5597c7975e00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:15.985615+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:16.985931+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:17.986101+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:18.986255+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:19.986484+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:20.986696+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:21.986876+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334512128 unmapped: 73457664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:22.987113+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334512128 unmapped: 73457664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:23.987321+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:24.987489+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:25.987665+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:26.987832+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:27.988134+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:28.988298+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:29.988527+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:30.988670+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334528512 unmapped: 73441280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:31.988832+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334528512 unmapped: 73441280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:32.988979+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334528512 unmapped: 73441280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:33.989247+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334528512 unmapped: 73441280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:34.989448+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334528512 unmapped: 73441280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:35.989632+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334528512 unmapped: 73441280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:36.989867+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334536704 unmapped: 73433088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:37.990084+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334544896 unmapped: 73424896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:38.990275+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 73416704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:39.990487+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 73416704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:40.990666+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 73416704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:41.990820+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 73416704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:42.991002+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 73416704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:43.991235+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 73416704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:44.991449+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:45.991701+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:46.991938+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:47.992405+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:48.992982+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:49.993562+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:50.993927+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:51.994379+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:52.994837+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:53.995270+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 73400320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:54.995630+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 73400320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:55.995930+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 73400320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:56.996253+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 73400320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:57.996468+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 73400320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:58.996709+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334577664 unmapped: 73392128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:59.997058+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334577664 unmapped: 73392128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:00.997311+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334577664 unmapped: 73392128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:01.997651+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334577664 unmapped: 73392128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:02.997918+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 73383936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:03.998179+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 73383936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:04.998419+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 73375744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:05.998675+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 73375744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:06.998887+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 73375744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:07.999054+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 73375744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:08.999267+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 73375744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:09.999581+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 73367552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:10.999908+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 73367552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:12.000185+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 73367552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:13.000572+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 73359360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:14.000810+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 73359360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:15.001057+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 73359360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:16.001344+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 73359360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:17.001571+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 73359360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:18.001873+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 73359360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:19.002169+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 73351168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:20.002377+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 73351168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:21.002585+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 73351168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:22.002790+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 73351168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:23.004115+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 73342976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:24.004391+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 73342976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:25.004705+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 73342976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:26.004980+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 73342976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:27.005276+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 73342976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:28.005551+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 73334784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:29.005778+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 73334784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:30.006194+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 73334784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:31.006514+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 73334784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:32.006836+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 73334784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:33.007129+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 73334784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:34.007311+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 73326592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:35.007544+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 73326592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:36.007797+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 73326592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:37.008055+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 73326592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:38.008249+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 73326592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:39.008430+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334651392 unmapped: 73318400 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:40.008697+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334651392 unmapped: 73318400 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:41.008957+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334651392 unmapped: 73318400 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:42.009199+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334659584 unmapped: 73310208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:43.009475+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334659584 unmapped: 73310208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:44.009779+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334667776 unmapped: 73302016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:45.010106+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334667776 unmapped: 73302016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:46.010370+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334667776 unmapped: 73302016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:47.010569+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334667776 unmapped: 73302016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:48.010814+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334667776 unmapped: 73302016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:49.011114+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334667776 unmapped: 73302016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:50.011439+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334675968 unmapped: 73293824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:51.011701+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334675968 unmapped: 73293824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:52.011964+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334675968 unmapped: 73293824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:53.012186+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334675968 unmapped: 73293824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:54.012390+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334675968 unmapped: 73293824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:55.012642+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334684160 unmapped: 73285632 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:56.012825+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334684160 unmapped: 73285632 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:57.013074+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334692352 unmapped: 73277440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:58.013299+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334692352 unmapped: 73277440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:59.013515+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334700544 unmapped: 73269248 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:00.013809+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334700544 unmapped: 73269248 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8fbac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:01.014093+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 110.781181335s of 110.807998657s, submitted: 17
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334716928 unmapped: 81649664 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _renew_subs
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:02.014335+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 299 ms_handle_reset con 0x5597c8fbac00 session 0x5597c8ec7e00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334733312 unmapped: 81633280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:03.014622+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335806464 unmapped: 80560128 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:04.014862+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _renew_subs
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 300 ms_handle_reset con 0x5597c70cac00 session 0x5597c7184f00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 80543744 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:05.015158+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447775 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:06.015351+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:07.015513+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:08.015703+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:09.015823+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:10.015990+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447775 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:11.016232+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 80519168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:12.016367+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 80519168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:13.016528+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 80519168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:14.016684+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:15.016867+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447775 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:16.017072+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:17.017218+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:18.017390+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:19.017563+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:20.017808+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447775 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:21.018102+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335863808 unmapped: 80502784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:22.018259+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335863808 unmapped: 80502784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:23.018478+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335863808 unmapped: 80502784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:24.018655+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335872000 unmapped: 80494592 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:25.018877+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335872000 unmapped: 80494592 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447775 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:26.019076+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335872000 unmapped: 80494592 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:27.019209+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335880192 unmapped: 80486400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:28.019358+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335880192 unmapped: 80486400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:29.019512+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335880192 unmapped: 80486400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:30.019703+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335880192 unmapped: 80486400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8fbac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447935 data_alloc: 218103808 data_used: 1089536
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:31.019876+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.965335846s of 30.084932327s, submitted: 23
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 301 ms_handle_reset con 0x5597c8fbac00 session 0x5597c9fad2c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:32.020078+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e9d20000/0x0/0x4ffc00000, data 0x133e0c8/0x14dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:33.020274+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:34.020461+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:35.020605+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3448852 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:36.020763+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:37.020963+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335912960 unmapped: 80453632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e9d20000/0x0/0x4ffc00000, data 0x133e0c8/0x14dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:38.021100+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335912960 unmapped: 80453632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _renew_subs
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:39.021276+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:40.021469+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:41.021683+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:42.021877+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:43.022030+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:44.022376+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:45.022529+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335937536 unmapped: 80429056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:46.022704+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335937536 unmapped: 80429056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:47.022842+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335945728 unmapped: 80420864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:48.023002+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:49.023171+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:50.023320+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:51.023486+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:52.023615+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:53.023753+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:54.023882+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:55.024086+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:56.024803+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:57.025194+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:58.025357+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:59.025495+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:00.025690+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:01.025848+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:02.026048+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:03.026170+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:04.026463+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:05.026621+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:06.027086+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:07.027304+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:08.027540+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:09.027874+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:10.028100+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 14 09:59:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2049453786' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:11.028376+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:12.028562+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:13.028917+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:14.029141+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335994880 unmapped: 80371712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:15.029329+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:16.029563+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:17.029716+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:18.029902+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:19.030102+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:20.030332+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:21.030502+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:22.030707+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:23.030911+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:24.031136+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:25.031323+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:26.031480+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:27.031746+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:28.032390+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:29.033309+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:30.033969+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:31.034252+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:32.035087+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:33.035642+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:34.036281+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:35.036768+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:36.037122+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336044032 unmapped: 80322560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:37.037345+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336044032 unmapped: 80322560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:38.037703+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336044032 unmapped: 80322560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:39.037906+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336044032 unmapped: 80322560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:40.038311+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336044032 unmapped: 80322560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:41.038465+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 80314368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:42.038640+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 80314368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:43.038838+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 80314368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:44.039084+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336060416 unmapped: 80306176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:45.039282+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336060416 unmapped: 80306176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:46.039446+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336060416 unmapped: 80306176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:47.039573+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336060416 unmapped: 80306176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:48.039764+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336060416 unmapped: 80306176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:49.040138+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336068608 unmapped: 80297984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:50.040534+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 80289792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:51.040892+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 80289792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:52.041109+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 80289792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:53.041304+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 80289792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:54.041485+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 80289792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:55.041687+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 80281600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:56.041894+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 80281600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:57.042060+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 80281600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:58.042253+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 80281600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:59.042420+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 80281600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:00.042596+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336093184 unmapped: 80273408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:01.042781+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336093184 unmapped: 80273408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:02.042939+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336093184 unmapped: 80273408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:03.043119+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336093184 unmapped: 80273408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:04.043289+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336093184 unmapped: 80273408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:05.043412+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336101376 unmapped: 80265216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:06.043567+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336101376 unmapped: 80265216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:07.043767+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336101376 unmapped: 80265216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:08.043912+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 80257024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:09.044091+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 80257024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:10.044262+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 80257024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:11.044393+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 80257024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:12.044552+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 80257024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:13.044724+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336117760 unmapped: 80248832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:14.044908+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336117760 unmapped: 80248832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:15.045104+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336117760 unmapped: 80248832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:16.045253+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 80240640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:17.045407+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 80240640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:18.045578+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 80240640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:19.045796+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 80240640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:20.046103+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 80240640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:21.046298+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 80240640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:22.046511+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336134144 unmapped: 80232448 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:23.046681+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 80216064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:24.046928+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 80216064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:25.047156+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 80216064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:26.047346+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 80216064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:27.047465+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 80216064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:28.047609+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 80216064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:29.047753+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336158720 unmapped: 80207872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:30.047972+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336158720 unmapped: 80207872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:31.048157+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336158720 unmapped: 80207872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:32.048725+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:33.049283+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:34.049517+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:35.050005+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:36.050300+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:37.050473+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:38.050730+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:39.051005+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:40.052265+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336175104 unmapped: 80191488 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:41.052832+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336175104 unmapped: 80191488 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:42.054326+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:43.055143+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336175104 unmapped: 80191488 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:44.055626+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336175104 unmapped: 80191488 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:45.055838+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336175104 unmapped: 80191488 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:46.056097+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336191488 unmapped: 80175104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:47.056321+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336191488 unmapped: 80175104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:48.056682+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336191488 unmapped: 80175104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:49.056885+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336191488 unmapped: 80175104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:50.057163+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336191488 unmapped: 80175104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:51.057448+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 80166912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:52.057925+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 80166912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:53.058221+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 80166912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:54.058440+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 80166912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:55.058638+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 80166912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:56.058815+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336207872 unmapped: 80158720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:57.058996+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336207872 unmapped: 80158720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:58.059274+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336207872 unmapped: 80158720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:59.059531+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336216064 unmapped: 80150528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:00.059765+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336216064 unmapped: 80150528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:01.059994+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336216064 unmapped: 80150528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:02.060245+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336224256 unmapped: 80142336 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:03.060460+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336224256 unmapped: 80142336 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:04.060700+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336224256 unmapped: 80142336 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:05.062182+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 80134144 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:06.062548+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 80134144 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:07.063100+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 80134144 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:08.063295+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 80134144 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:09.063495+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 80134144 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:10.063701+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336240640 unmapped: 80125952 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:11.063826+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336248832 unmapped: 80117760 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:12.064299+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336248832 unmapped: 80117760 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:13.064476+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:14.064677+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:15.064954+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:16.065113+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:17.065318+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:18.065448+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:19.065614+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:20.065779+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 80101376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:21.066069+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 80101376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:22.066208+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 80101376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:23.066423+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 80101376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:24.066578+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 80101376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:25.066743+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 80101376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:26.066963+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336273408 unmapped: 80093184 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:27.067232+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 80084992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:28.067450+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 80084992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:29.067696+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 80084992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:30.067917+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 80084992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:31.068069+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 80084992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:32.068215+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 80084992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:33.068410+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 80076800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:34.068611+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 80076800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:35.068742+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 80076800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:36.068999+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 80076800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:37.069183+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 80076800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:38.069342+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 80076800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:39.069548+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336297984 unmapped: 80068608 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:40.069716+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336297984 unmapped: 80068608 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:41.069875+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336306176 unmapped: 80060416 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:42.070076+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:43.070254+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:44.070424+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:45.070539+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:46.070687+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:47.070806+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:48.071100+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:49.071443+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 80044032 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:50.071625+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 80044032 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:51.071766+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 80044032 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:52.071928+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 80044032 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:53.072048+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 80044032 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:54.072191+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336330752 unmapped: 80035840 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:55.072416+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336330752 unmapped: 80035840 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:56.072586+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336330752 unmapped: 80035840 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:57.072708+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336338944 unmapped: 80027648 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:58.072865+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336338944 unmapped: 80027648 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:59.073071+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336338944 unmapped: 80027648 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:00.073286+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 80019456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:01.073450+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 80019456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:02.073637+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 80019456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:03.073807+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 80019456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:04.073973+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 80019456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 178K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.70 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 762 writes, 2197 keys, 762 commit groups, 1.0 writes per commit group, ingest: 0.90 MB, 0.00 MB/s
                                           Interval WAL: 762 writes, 346 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:05.074118+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336355328 unmapped: 80011264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets getting new tickets!
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:06.074437+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _finish_auth 0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:06.075859+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336363520 unmapped: 80003072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:07.074598+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336363520 unmapped: 80003072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:08.074788+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336363520 unmapped: 80003072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:09.074968+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 ms_handle_reset con 0x5597c91cb000 session 0x5597c91d6780
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:10.075503+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: mgrc ms_handle_reset ms_handle_reset con 0x5597c9fec000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3625056923
Oct 14 09:59:26 compute-0 ceph-osd[88375]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3625056923,v1:192.168.122.100:6801/3625056923]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: get_auth_request con 0x5597caeeb400 auth_method 0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: mgrc handle_mgr_configure stats_period=5
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 ms_handle_reset con 0x5597cc6e8400 session 0x5597c6c22b40
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 ms_handle_reset con 0x5597c8db4400 session 0x5597c9df0780
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cc6e8400
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:11.075644+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:12.075971+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:13.076248+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:14.076459+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:15.076697+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:16.076946+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:17.077150+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:18.077370+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:19.077575+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:20.077763+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:21.077948+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:22.078121+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 79986688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:23.078280+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 79986688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:24.078429+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 79986688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:25.078544+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336388096 unmapped: 79978496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:26.078699+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 79970304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:27.078833+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 79970304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:28.079072+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 79970304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:29.079258+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336404480 unmapped: 79962112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:30.079482+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336404480 unmapped: 79962112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:31.079654+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336404480 unmapped: 79962112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:32.079825+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336404480 unmapped: 79962112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:33.079965+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336404480 unmapped: 79962112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:34.080221+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:35.080425+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:36.080630+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:37.080866+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:38.081078+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:39.081255+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:40.081457+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:41.081621+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 79945728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:42.081803+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 79945728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:43.081948+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 79945728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:44.082123+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 79945728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:45.082286+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 79945728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:46.082471+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336429056 unmapped: 79937536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:47.082667+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336429056 unmapped: 79937536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:48.082840+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336437248 unmapped: 79929344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:49.083064+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336437248 unmapped: 79929344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:50.083233+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336437248 unmapped: 79929344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:51.083371+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 79921152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:52.083515+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 79921152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:53.083663+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 79921152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:54.083767+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 79921152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:55.084208+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 79921152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:56.084387+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336453632 unmapped: 79912960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:57.084538+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336453632 unmapped: 79912960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:58.084673+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336453632 unmapped: 79912960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:59.084855+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336453632 unmapped: 79912960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:00.085075+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336453632 unmapped: 79912960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:01.085190+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 79896576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:02.085319+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 79896576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:03.085505+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 79896576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:04.085670+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336478208 unmapped: 79888384 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:05.085836+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336486400 unmapped: 79880192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:06.085998+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336486400 unmapped: 79880192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:07.086210+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336486400 unmapped: 79880192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:08.086347+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336494592 unmapped: 79872000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:09.086504+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 277.959259033s of 278.040557861s, submitted: 28
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336494592 unmapped: 79872000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:10.086707+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336543744 unmapped: 79822848 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:11.086872+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:12.087061+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:13.087244+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:14.087775+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:15.088128+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:16.088314+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:17.088686+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:18.088902+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:19.089334+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:20.089836+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:21.090057+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:22.090244+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336576512 unmapped: 79790080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:23.090434+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336576512 unmapped: 79790080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:24.090614+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336576512 unmapped: 79790080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:25.090805+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336576512 unmapped: 79790080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:26.091069+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:27.091238+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:28.091477+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:29.091683+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:30.091877+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:31.091997+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:32.092182+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:33.092301+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 sshd-session[442966]: Failed password for root from 91.224.92.108 port 45684 ssh2
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:34.092467+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:35.092627+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:36.092762+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:37.092895+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:38.093109+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:39.093300+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:40.093557+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:41.093694+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336601088 unmapped: 79765504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:42.093811+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336601088 unmapped: 79765504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:43.094002+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336609280 unmapped: 79757312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:44.094209+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336609280 unmapped: 79757312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:45.094390+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336609280 unmapped: 79757312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:46.094549+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336609280 unmapped: 79757312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:47.094656+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336609280 unmapped: 79757312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:48.094862+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336609280 unmapped: 79757312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:49.095053+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336617472 unmapped: 79749120 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:50.095202+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336617472 unmapped: 79749120 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:51.095321+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336617472 unmapped: 79749120 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:52.095526+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 79740928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:53.095700+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 79740928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:54.095854+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 79740928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:55.096005+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 79740928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:56.096258+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 79740928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:57.096470+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336633856 unmapped: 79732736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:58.096640+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336633856 unmapped: 79732736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:59.096782+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336633856 unmapped: 79732736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:00.096985+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336633856 unmapped: 79732736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:01.097122+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336633856 unmapped: 79732736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:02.097325+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336633856 unmapped: 79732736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:03.097451+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336642048 unmapped: 79724544 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:04.097645+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336642048 unmapped: 79724544 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:05.097827+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336642048 unmapped: 79724544 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:06.097977+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336642048 unmapped: 79724544 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:07.098165+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336650240 unmapped: 79716352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:08.098302+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336650240 unmapped: 79716352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:09.098445+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336658432 unmapped: 79708160 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:10.098581+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336658432 unmapped: 79708160 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:11.098762+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336658432 unmapped: 79708160 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:12.098936+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336658432 unmapped: 79708160 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:13.099133+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336666624 unmapped: 79699968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:14.099327+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336666624 unmapped: 79699968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:15.099495+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336666624 unmapped: 79699968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:16.099614+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336666624 unmapped: 79699968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:17.099797+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336666624 unmapped: 79699968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:18.100052+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 79691776 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:19.100315+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 79691776 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:20.100745+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 79691776 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:21.101074+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:22.101234+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 79691776 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:23.101568+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:24.101826+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:25.102128+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:26.102360+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:27.102523+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:28.102756+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:29.103000+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:30.103297+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:31.103557+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:32.103733+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:33.103941+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:34.104147+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:35.104350+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:36.104580+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:37.104841+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:38.105061+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336707584 unmapped: 79659008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:39.105194+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336707584 unmapped: 79659008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:40.105408+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336707584 unmapped: 79659008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:41.105565+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336707584 unmapped: 79659008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:42.105726+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336715776 unmapped: 79650816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:43.105894+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336715776 unmapped: 79650816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:44.106330+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336715776 unmapped: 79650816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:45.106489+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336715776 unmapped: 79650816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:46.106624+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336723968 unmapped: 79642624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:47.106738+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:48.106887+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:49.107042+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:50.107279+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:51.107459+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:52.107633+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:53.107787+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:54.107997+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:55.108217+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:56.108349+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:57.108545+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:58.108716+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:59.108901+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:00.109126+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:01.109322+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:02.109483+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336756736 unmapped: 79609856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:03.109657+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336756736 unmapped: 79609856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:04.109869+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336756736 unmapped: 79609856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:05.110121+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336764928 unmapped: 79601664 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:06.110266+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 79593472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:07.110451+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 79593472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:08.110648+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 79593472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:09.110884+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 79593472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:10.111155+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:11.111344+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:12.111561+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:13.111813+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:14.111982+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:15.112135+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:16.112323+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:17.112473+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:18.112731+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 79577088 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:19.112875+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 79577088 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:20.113104+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 79577088 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:21.113236+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 79577088 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:22.113447+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 79577088 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:23.113629+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 79560704 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:24.113852+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 79560704 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:25.114110+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 79560704 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:26.114240+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 79560704 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:27.114390+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 79560704 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:28.114541+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 79552512 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:29.114720+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 79552512 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:30.115060+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 79552512 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:31.115190+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 79552512 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:32.115335+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 79552512 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:33.115538+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 79536128 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:34.115657+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 79536128 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:35.115873+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336838656 unmapped: 79527936 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:36.116076+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _renew_subs
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 146.745452881s of 147.051895142s, submitted: 90
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336855040 unmapped: 79511552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 303 ms_handle_reset con 0x5597ca9b2000 session 0x5597c705ad20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:37.116264+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336855040 unmapped: 79511552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370800 data_alloc: 218103808 data_used: 1093632
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 303 heartbeat osd_stat(store_statfs(0x4ea98b000/0x0/0x4ffc00000, data 0x6d16fc/0x873000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:38.116419+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c7e2d800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336855040 unmapped: 79511552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:39.116533+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336863232 unmapped: 79503360 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 304 ms_handle_reset con 0x5597c7e2d800 session 0x5597c8da63c0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 304 heartbeat osd_stat(store_statfs(0x4ea987000/0x0/0x4ffc00000, data 0x6d32cd/0x876000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:40.116651+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336863232 unmapped: 79503360 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:41.116815+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 79495168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:42.116967+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 79495168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3342983 data_alloc: 218103808 data_used: 1101824
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 304 heartbeat osd_stat(store_statfs(0x4eadf9000/0x0/0x4ffc00000, data 0x26329b/0x404000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:43.117152+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 79495168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 304 heartbeat osd_stat(store_statfs(0x4eadf9000/0x0/0x4ffc00000, data 0x26329b/0x404000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6d800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:44.117321+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336904192 unmapped: 79462400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _renew_subs
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 ms_handle_reset con 0x5597c9f6d800 session 0x5597c8d70f00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9985000/0x0/0x4ffc00000, data 0x16d4d0e/0x1878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9985000/0x0/0x4ffc00000, data 0x16d4d0e/0x1878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:45.117504+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:46.117692+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:47.117905+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:48.118174+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:49.118378+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:50.118660+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:51.118862+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:52.119051+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:53.119200+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336936960 unmapped: 79429632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:54.119552+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336936960 unmapped: 79429632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:55.119772+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336936960 unmapped: 79429632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:56.119985+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336936960 unmapped: 79429632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:57.120260+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336936960 unmapped: 79429632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:58.122057+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336945152 unmapped: 79421440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:59.122305+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336945152 unmapped: 79421440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:00.122514+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336945152 unmapped: 79421440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:01.122801+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336945152 unmapped: 79421440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:02.123143+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336945152 unmapped: 79421440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:03.123341+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336953344 unmapped: 79413248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:04.123577+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336953344 unmapped: 79413248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:05.123744+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336953344 unmapped: 79413248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:06.124002+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336953344 unmapped: 79413248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:07.124219+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336953344 unmapped: 79413248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:08.124429+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336961536 unmapped: 79405056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:09.124574+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336961536 unmapped: 79405056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:10.124867+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336961536 unmapped: 79405056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:11.125111+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336969728 unmapped: 79396864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:12.125340+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336969728 unmapped: 79396864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:13.125540+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336969728 unmapped: 79396864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:14.125739+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336969728 unmapped: 79396864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:15.125913+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336969728 unmapped: 79396864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:16.126196+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336969728 unmapped: 79396864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:17.126378+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336977920 unmapped: 79388672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:18.126541+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336977920 unmapped: 79388672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:19.126712+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336977920 unmapped: 79388672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:20.126922+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336977920 unmapped: 79388672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:21.127099+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336986112 unmapped: 79380480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:22.127264+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336986112 unmapped: 79380480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:23.127386+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336994304 unmapped: 79372288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:24.127541+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336994304 unmapped: 79372288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:25.127682+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 79364096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:26.127862+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 79364096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:27.128079+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 79364096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:28.128261+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 79364096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:29.128479+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 79364096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:30.128748+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337018880 unmapped: 79347712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:31.128975+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337018880 unmapped: 79347712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:32.129202+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337027072 unmapped: 79339520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:33.129367+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337027072 unmapped: 79339520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:34.129538+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337027072 unmapped: 79339520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:35.129720+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337027072 unmapped: 79339520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:36.129884+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337035264 unmapped: 79331328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:37.130090+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337035264 unmapped: 79331328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:38.130258+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337035264 unmapped: 79331328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:39.130394+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337035264 unmapped: 79331328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:40.160443+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c7e2d800
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 63.765888214s of 64.009460449s, submitted: 63
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 ms_handle_reset con 0x5597c7e2d800 session 0x5597c8ec65a0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 306 handle_osd_map epochs [307,307], i have 307, src has [1,307]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 307 ms_handle_reset con 0x5597c70cac00 session 0x5597c8d71c20
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337068032 unmapped: 79298560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:41.160616+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337068032 unmapped: 79298560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:42.160780+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337076224 unmapped: 79290368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3357510 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:43.160949+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337076224 unmapped: 79290368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:44.161094+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337076224 unmapped: 79290368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:45.161335+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 307 heartbeat osd_stat(store_statfs(0x4eadf0000/0x0/0x4ffc00000, data 0x26844c/0x40d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337076224 unmapped: 79290368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:46.161575+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337076224 unmapped: 79290368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:47.161756+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337076224 unmapped: 79290368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3357510 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:48.161926+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337084416 unmapped: 79282176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _renew_subs
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:49.162102+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337100800 unmapped: 79265792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:50.162328+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337100800 unmapped: 79265792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:51.162503+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337117184 unmapped: 79249408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:52.162663+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337117184 unmapped: 79249408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:53.162813+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337117184 unmapped: 79249408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:54.163006+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337117184 unmapped: 79249408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:55.163220+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337117184 unmapped: 79249408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:56.163369+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 79233024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:57.163538+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 79233024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:58.163698+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 79233024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:59.163857+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 79233024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:00.164085+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 79233024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:01.164234+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 79224832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:02.164499+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 79224832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:03.164745+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 79224832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:04.164941+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337149952 unmapped: 79216640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:05.165086+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337149952 unmapped: 79216640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:06.165217+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337149952 unmapped: 79216640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:07.166199+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337149952 unmapped: 79216640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:08.166333+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337149952 unmapped: 79216640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:09.166551+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 79200256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:10.166838+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 79200256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:11.167089+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 79200256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:12.167251+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 79200256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:13.167471+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 79200256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:14.167690+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 79200256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:15.167860+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 79200256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:16.168079+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337174528 unmapped: 79192064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:17.168357+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337182720 unmapped: 79183872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:18.168560+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337182720 unmapped: 79183872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:19.168702+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337182720 unmapped: 79183872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:20.168936+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 79175680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:21.169243+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 79175680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:22.169541+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 79175680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:23.169758+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 79175680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:24.169964+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 79175680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:25.170170+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337207296 unmapped: 79159296 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:26.170396+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337207296 unmapped: 79159296 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:27.170593+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337207296 unmapped: 79159296 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:28.170830+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337215488 unmapped: 79151104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:29.171122+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337215488 unmapped: 79151104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:30.171380+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337215488 unmapped: 79151104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:31.171631+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337215488 unmapped: 79151104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:32.171874+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337215488 unmapped: 79151104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:33.172046+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337215488 unmapped: 79151104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:34.172194+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337223680 unmapped: 79142912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:35.172464+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337223680 unmapped: 79142912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:36.172646+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337223680 unmapped: 79142912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:37.172839+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:38.172990+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:39.173176+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:40.173370+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:41.173562+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337248256 unmapped: 79118336 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:42.174229+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:43.174373+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:44.174495+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:45.174620+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:46.174743+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:47.174895+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:48.175074+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:49.175203+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 79126528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:50.175351+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 79126528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:51.175504+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 79126528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:52.175637+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 79200256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: do_command 'config diff' '{prefix=config diff}'
Oct 14 09:59:26 compute-0 ceph-osd[88375]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:26 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:26 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:53.175802+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: do_command 'config show' '{prefix=config show}'
Oct 14 09:59:26 compute-0 ceph-osd[88375]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 14 09:59:26 compute-0 ceph-osd[88375]: do_command 'counter dump' '{prefix=counter dump}'
Oct 14 09:59:26 compute-0 ceph-osd[88375]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 14 09:59:26 compute-0 ceph-osd[88375]: do_command 'counter schema' '{prefix=counter schema}'
Oct 14 09:59:26 compute-0 ceph-osd[88375]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:54.175955+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 80166912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 09:59:26 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:55.176161+0000)
Oct 14 09:59:26 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336224256 unmapped: 80142336 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:26 compute-0 ceph-osd[88375]: do_command 'log dump' '{prefix=log dump}'
Oct 14 09:59:26 compute-0 rsyslogd[1002]: imjournal from <np0005486808:ceph-osd>: begin to drop messages due to rate-limiting
Oct 14 09:59:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 14 09:59:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1797699581' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 14 09:59:26 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2914762314' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 14 09:59:26 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/12511987' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 14 09:59:26 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2049453786' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 14 09:59:26 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1797699581' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 14 09:59:26 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 14 09:59:26 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/610372467' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 14 09:59:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 14 09:59:27 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/785835210' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 14 09:59:27 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23115 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:27 compute-0 sshd-session[442966]: Received disconnect from 91.224.92.108 port 45684:11:  [preauth]
Oct 14 09:59:27 compute-0 sshd-session[442966]: Disconnected from authenticating user root 91.224.92.108 port 45684 [preauth]
Oct 14 09:59:27 compute-0 sshd-session[442966]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 14 09:59:27 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 14 09:59:27 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/341412900' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 14 09:59:27 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23119 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:27 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/610372467' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 14 09:59:27 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/785835210' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 14 09:59:27 compute-0 ceph-mon[74249]: from='client.23115 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:27 compute-0 ceph-mon[74249]: pgmap v3205: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:27 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/341412900' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 14 09:59:27 compute-0 nova_compute[259627]: 2025-10-14 09:59:27.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:27 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23121 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:28 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23123 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:28 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23125 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:59:28 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23127 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:28 compute-0 ceph-mon[74249]: from='client.23119 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:28 compute-0 ceph-mon[74249]: from='client.23121 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:28 compute-0 ceph-mon[74249]: from='client.23123 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:28 compute-0 ceph-mon[74249]: from='client.23125 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:28 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23131 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:28 compute-0 nova_compute[259627]: 2025-10-14 09:59:28.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:59:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 14 09:59:29 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/925824462' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 14 09:59:29 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23135 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:29 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 14 09:59:29 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2366206759' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 14 09:59:29 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23139 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:29 compute-0 ceph-mon[74249]: from='client.23127 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:29 compute-0 ceph-mon[74249]: from='client.23131 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:29 compute-0 ceph-mon[74249]: pgmap v3206: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:29 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/925824462' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 14 09:59:29 compute-0 ceph-mon[74249]: from='client.23135 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:29 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2366206759' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 14 09:59:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 14 09:59:30 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2151326092' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 14 09:59:30 compute-0 nova_compute[259627]: 2025-10-14 09:59:30.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:30 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 14 09:59:30 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/348892832' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 14 09:59:30 compute-0 podman[444934]: 2025-10-14 09:59:30.666487725 +0000 UTC m=+0.073710809 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid)
Oct 14 09:59:30 compute-0 podman[444931]: 2025-10-14 09:59:30.666926216 +0000 UTC m=+0.073764231 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 14 09:59:30 compute-0 ceph-mon[74249]: from='client.23139 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:30 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2151326092' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 14 09:59:30 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/348892832' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 14 09:59:30 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 14 09:59:30 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 14 09:59:31 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 14 09:59:31 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3244955569' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 14 09:59:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:21.043473+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313942016 unmapped: 45940736 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:22.043787+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb108000/0x0/0x4ffc00000, data 0xcee289/0xe86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313942016 unmapped: 45940736 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb108000/0x0/0x4ffc00000, data 0xcee289/0xe86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:23.044852+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb108000/0x0/0x4ffc00000, data 0xcee289/0xe86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313942016 unmapped: 45940736 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:24.048147+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313942016 unmapped: 45940736 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3193750 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:25.048680+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb108000/0x0/0x4ffc00000, data 0xcee289/0xe86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313942016 unmapped: 45940736 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:26.049753+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313942016 unmapped: 45940736 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:27.049898+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313942016 unmapped: 45940736 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:28.050326+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb108000/0x0/0x4ffc00000, data 0xcee289/0xe86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313942016 unmapped: 45940736 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:29.050759+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313942016 unmapped: 45940736 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3193750 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:30.051249+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f6a9c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.563863754s of 16.837785721s, submitted: 77
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f6a9c00 session 0x55e30fcf70e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313720832 unmapped: 46161920 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f6a9c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:31.051463+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313729024 unmapped: 46153728 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:32.051843+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313729024 unmapped: 46153728 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:33.051990+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb107000/0x0/0x4ffc00000, data 0xcee2ac/0xe87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313729024 unmapped: 46153728 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:34.052505+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313729024 unmapped: 46153728 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194879 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:35.052725+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313729024 unmapped: 46153728 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:36.053115+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313729024 unmapped: 46153728 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:37.053440+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313729024 unmapped: 46153728 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:38.069840+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313729024 unmapped: 46153728 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:39.069998+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb107000/0x0/0x4ffc00000, data 0xcee2ac/0xe87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313729024 unmapped: 46153728 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194879 data_alloc: 218103808 data_used: 1110016
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:40.070291+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313729024 unmapped: 46153728 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:41.070465+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313737216 unmapped: 46145536 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:42.070587+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 46194688 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:43.070690+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 46194688 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:44.072463+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 46194688 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3243679 data_alloc: 218103808 data_used: 7966720
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:45.072647+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb107000/0x0/0x4ffc00000, data 0xcee2ac/0xe87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313688064 unmapped: 46194688 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:46.072935+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30fe8b680
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721c00 session 0x55e311e98f00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30e421c20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66400 session 0x55e30e0afc20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff88000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.504066467s of 16.527477264s, submitted: 6
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff88000 session 0x55e30d5f61e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313827328 unmapped: 46055424 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:47.073073+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313827328 unmapped: 46055424 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:48.073270+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313827328 unmapped: 46055424 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:49.073546+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313827328 unmapped: 46055424 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3288641 data_alloc: 218103808 data_used: 7970816
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:50.073819+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eac17000/0x0/0x4ffc00000, data 0x11de2ac/0x1377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 313827328 unmapped: 46055424 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:51.073964+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30ff76780
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 317177856 unmapped: 42704896 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:52.074064+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721c00 session 0x55e30e0a92c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315990016 unmapped: 43892736 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:53.074225+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30ff9d680
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66400 session 0x55e30e838d20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ffefc00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43728896 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:54.074385+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43728896 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398271 data_alloc: 218103808 data_used: 8622080
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:55.074527+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315842560 unmapped: 44040192 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:56.074682+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9fed000/0x0/0x4ffc00000, data 0x1e072bc/0x1fa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315842560 unmapped: 44040192 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:57.074823+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315842560 unmapped: 44040192 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:58.074945+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.010482788s of 12.384558678s, submitted: 118
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315850752 unmapped: 44032000 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:27:59.075088+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9fed000/0x0/0x4ffc00000, data 0x1e072bc/0x1fa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315850752 unmapped: 44032000 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433371 data_alloc: 234881024 data_used: 13713408
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:00.075248+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315850752 unmapped: 44032000 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:01.075406+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315850752 unmapped: 44032000 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9feb000/0x0/0x4ffc00000, data 0x1e092bc/0x1fa3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:02.075543+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315850752 unmapped: 44032000 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:03.075672+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315850752 unmapped: 44032000 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:04.075884+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9feb000/0x0/0x4ffc00000, data 0x1e092bc/0x1fa3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 322330624 unmapped: 37552128 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:05.076070+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533491 data_alloc: 234881024 data_used: 13701120
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 321126400 unmapped: 38756352 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:06.076195+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e92dc000/0x0/0x4ffc00000, data 0x2b0a2bc/0x2ca4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 321339392 unmapped: 38543360 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:07.076339+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 321339392 unmapped: 38543360 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:08.076509+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fa0d400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9296000/0x0/0x4ffc00000, data 0x2b482bc/0x2ce2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.663157463s of 10.000637054s, submitted: 114
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 320495616 unmapped: 39387136 heap: 359882752 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:09.076604+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fa0d400 session 0x55e30dd7c000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30fcded20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721c00 session 0x55e30f0f54a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30d7dd0e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66400 session 0x55e30e36c960
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319987712 unmapped: 55640064 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:10.076785+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e87c7000/0x0/0x4ffc00000, data 0x362d2bc/0x37c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3633498 data_alloc: 234881024 data_used: 14032896
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319987712 unmapped: 55640064 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:11.076949+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319987712 unmapped: 55640064 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:12.077109+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd3f000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd3f000 session 0x55e30d69d680
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319987712 unmapped: 55640064 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:13.077269+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30d7870e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:14.077448+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319987712 unmapped: 55640064 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721c00 session 0x55e30fcded20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30dd7c000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:15.077576+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319987712 unmapped: 55640064 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3633354 data_alloc: 234881024 data_used: 14032896
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e87c4000/0x0/0x4ffc00000, data 0x36302bc/0x37ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:16.077726+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319987712 unmapped: 55640064 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939000 session 0x55e30e0ae960
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ffefc00 session 0x55e30d885e00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30d7b70e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:17.077873+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315523072 unmapped: 60104704 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9a1a000/0x0/0x4ffc00000, data 0x23db2ac/0x2574000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:18.077993+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316973056 unmapped: 58654720 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:19.078082+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316973056 unmapped: 58654720 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:20.078258+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316973056 unmapped: 58654720 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531659 data_alloc: 234881024 data_used: 20033536
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:21.078438+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316973056 unmapped: 58654720 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9a1a000/0x0/0x4ffc00000, data 0x23db2ac/0x2574000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9a1a000/0x0/0x4ffc00000, data 0x23db2ac/0x2574000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:22.078592+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316973056 unmapped: 58654720 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9a1a000/0x0/0x4ffc00000, data 0x23db2ac/0x2574000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:23.078753+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316973056 unmapped: 58654720 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.405856133s of 14.576059341s, submitted: 43
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:24.078972+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316973056 unmapped: 58654720 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:25.079186+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316973056 unmapped: 58654720 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532323 data_alloc: 234881024 data_used: 20033536
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:26.079361+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316981248 unmapped: 58646528 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:27.079486+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 321241088 unmapped: 54386688 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:28.079606+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8d3f000/0x0/0x4ffc00000, data 0x30b02ac/0x3249000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 321462272 unmapped: 54165504 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:29.079997+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 321462272 unmapped: 54165504 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:30.080211+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 321462272 unmapped: 54165504 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3641917 data_alloc: 234881024 data_used: 20471808
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:31.080345+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 321462272 unmapped: 54165504 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:32.080482+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 321462272 unmapped: 54165504 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:33.080639+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 321462272 unmapped: 54165504 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8d06000/0x0/0x4ffc00000, data 0x30e12ac/0x327a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.724123001s of 10.030393600s, submitted: 107
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:34.080805+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 321478656 unmapped: 54149120 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:35.080971+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 321478656 unmapped: 54149120 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3636489 data_alloc: 234881024 data_used: 20471808
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66400 session 0x55e30ff9d680
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721c00 session 0x55e30e3d41e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:36.081148+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315031552 unmapped: 60596224 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea2d1000/0x0/0x4ffc00000, data 0x18f82ac/0x1a91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:37.081340+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315031552 unmapped: 60596224 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:38.081548+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315031552 unmapped: 60596224 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:39.081695+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315031552 unmapped: 60596224 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea2d1000/0x0/0x4ffc00000, data 0x18f82ac/0x1a91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:40.081824+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315031552 unmapped: 60596224 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376541 data_alloc: 218103808 data_used: 8622080
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f6a9c00 session 0x55e30ff77c20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:41.081951+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315047936 unmapped: 60579840 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e311e98780
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:42.082131+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315047936 unmapped: 60579840 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:43.082345+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315047936 unmapped: 60579840 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb57c000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:44.082565+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315047936 unmapped: 60579840 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb57c000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:45.082786+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315047936 unmapped: 60579840 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3181085 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:46.082999+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315047936 unmapped: 60579840 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:47.083322+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315047936 unmapped: 60579840 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb57c000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:48.083518+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315047936 unmapped: 60579840 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.076914787s of 15.338603973s, submitted: 81
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:49.083654+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30e41e5a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315047936 unmapped: 60579840 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:50.083791+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315056128 unmapped: 60571648 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3187655 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:51.083952+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315056128 unmapped: 60571648 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:52.084089+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30fb910e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315056128 unmapped: 60571648 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721c00 session 0x55e311e985a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:53.084234+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315056128 unmapped: 60571648 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f6a9c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f6a9c00 session 0x55e311e981e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66400 session 0x55e30ca01860
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb725000/0x0/0x4ffc00000, data 0x6d124a/0x869000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [2])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:54.084395+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315064320 unmapped: 60563456 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:55.084552+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315064320 unmapped: 60563456 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3190970 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:56.084719+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315064320 unmapped: 60563456 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:57.084871+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315064320 unmapped: 60563456 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:58.085129+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315064320 unmapped: 60563456 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb725000/0x0/0x4ffc00000, data 0x6d124a/0x869000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:28:59.085295+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315064320 unmapped: 60563456 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:00.085464+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315064320 unmapped: 60563456 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194330 data_alloc: 218103808 data_used: 1630208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:01.085696+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315064320 unmapped: 60563456 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.942671776s of 12.979204178s, submitted: 10
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:02.085840+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721c00 session 0x55e30d78a960
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315383808 unmapped: 60243968 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:03.086040+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315383808 unmapped: 60243968 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:04.086245+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315383808 unmapped: 60243968 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb250000/0x0/0x4ffc00000, data 0xba624a/0xd3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:05.086404+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 315400192 unmapped: 60227584 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3232030 data_alloc: 218103808 data_used: 1646592
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:06.086515+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316137472 unmapped: 59490304 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:07.086647+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316096512 unmapped: 59531264 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30e0afc20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:08.086798+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316096512 unmapped: 59531264 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:09.086932+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316096512 unmapped: 59531264 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f6a9c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ead88000/0x0/0x4ffc00000, data 0x106d26d/0x1206000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:10.087082+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316104704 unmapped: 59523072 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3308973 data_alloc: 218103808 data_used: 5607424
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:11.087180+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316104704 unmapped: 59523072 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:12.087378+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316104704 unmapped: 59523072 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.802371979s of 10.956833839s, submitted: 49
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:13.087517+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316104704 unmapped: 59523072 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:14.087688+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316104704 unmapped: 59523072 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ead85000/0x0/0x4ffc00000, data 0x107026d/0x1209000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:15.087840+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316104704 unmapped: 59523072 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3315069 data_alloc: 218103808 data_used: 6631424
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ead85000/0x0/0x4ffc00000, data 0x107026d/0x1209000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:16.087966+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316104704 unmapped: 59523072 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:17.088208+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316104704 unmapped: 59523072 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ead85000/0x0/0x4ffc00000, data 0x107026d/0x1209000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:18.088378+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316104704 unmapped: 59523072 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:19.088500+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 316104704 unmapped: 59523072 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea563000/0x0/0x4ffc00000, data 0x189226d/0x1a2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:20.088685+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319782912 unmapped: 55844864 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3383109 data_alloc: 218103808 data_used: 7573504
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:21.088854+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323092480 unmapped: 52535296 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939000 session 0x55e30e0afa40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea4f0000/0x0/0x4ffc00000, data 0x190526d/0x1a9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:22.089176+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319840256 unmapped: 55787520 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:23.089319+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319840256 unmapped: 55787520 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:24.089498+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319840256 unmapped: 55787520 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:25.089637+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420949 data_alloc: 218103808 data_used: 7651328
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319840256 unmapped: 55787520 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:26.089824+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319840256 unmapped: 55787520 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9fe6000/0x0/0x4ffc00000, data 0x1e0f26d/0x1fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.446340561s of 13.720787048s, submitted: 57
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30e2f3e00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:27.089945+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319840256 unmapped: 55787520 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:28.090101+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319840256 unmapped: 55787520 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f931c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:29.090261+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319856640 unmapped: 55771136 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9fe6000/0x0/0x4ffc00000, data 0x1e0f26d/0x1fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:30.090395+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458229 data_alloc: 234881024 data_used: 12935168
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319856640 unmapped: 55771136 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:31.090548+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319856640 unmapped: 55771136 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:32.090735+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319856640 unmapped: 55771136 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:33.090873+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319864832 unmapped: 55762944 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:34.091128+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f6a9c00 session 0x55e3109ded20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319864832 unmapped: 55762944 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9fe6000/0x0/0x4ffc00000, data 0x1e0f26d/0x1fa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30f0f4000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:35.091285+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322743 data_alloc: 218103808 data_used: 6930432
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 317079552 unmapped: 58548224 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:36.091434+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ead50000/0x0/0x4ffc00000, data 0x10a524a/0x123d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 317079552 unmapped: 58548224 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:37.091544+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 317079552 unmapped: 58548224 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.783958435s of 10.850614548s, submitted: 18
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:38.091706+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 317079552 unmapped: 58548224 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:39.091859+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319856640 unmapped: 55771136 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:40.092055+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3439077 data_alloc: 218103808 data_used: 7716864
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319897600 unmapped: 55730176 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:41.092241+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319897600 unmapped: 55730176 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9f73000/0x0/0x4ffc00000, data 0x1e8324a/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:42.092421+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319897600 unmapped: 55730176 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:43.092603+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319897600 unmapped: 55730176 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9f73000/0x0/0x4ffc00000, data 0x1e8324a/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:44.092845+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319905792 unmapped: 55721984 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:45.093108+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3439077 data_alloc: 218103808 data_used: 7716864
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319905792 unmapped: 55721984 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:46.093359+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319913984 unmapped: 55713792 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:47.093613+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319913984 unmapped: 55713792 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:48.093776+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319913984 unmapped: 55713792 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:49.093948+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9f73000/0x0/0x4ffc00000, data 0x1e8324a/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319913984 unmapped: 55713792 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:50.094101+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3440037 data_alloc: 218103808 data_used: 7786496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319913984 unmapped: 55713792 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:51.094283+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f931c00 session 0x55e30d793c20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319913984 unmapped: 55713792 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.139610291s of 14.374824524s, submitted: 67
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:52.094418+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721c00 session 0x55e30ff76f00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9f73000/0x0/0x4ffc00000, data 0x1e8324a/0x201b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319913984 unmapped: 55713792 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:53.094564+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319913984 unmapped: 55713792 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:54.094749+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319922176 unmapped: 55705600 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:55.094913+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256081 data_alloc: 218103808 data_used: 1646592
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319922176 unmapped: 55705600 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30fb903c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66400 session 0x55e30ff774a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:56.095054+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb259000/0x0/0x4ffc00000, data 0xb9d24a/0xd35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30fb90b40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319922176 unmapped: 55705600 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30fd4c000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:57.095284+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319922176 unmapped: 55705600 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:58.095409+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319922176 unmapped: 55705600 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:29:59.095568+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319922176 unmapped: 55705600 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721c00 session 0x55e30fb90d20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:00.095706+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eaf33000/0x0/0x4ffc00000, data 0xec4217/0x105a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3273840 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319922176 unmapped: 55705600 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f931c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f931c00 session 0x55e30ff9dc20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eaf33000/0x0/0x4ffc00000, data 0xec4217/0x105a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:01.095845+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30fe8ad20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30d5f6780
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eaf33000/0x0/0x4ffc00000, data 0xec4217/0x105a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319864832 unmapped: 55762944 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:02.095998+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eaf32000/0x0/0x4ffc00000, data 0xec4248/0x105c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319864832 unmapped: 55762944 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:03.096175+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319782912 unmapped: 55844864 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:04.096405+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319782912 unmapped: 55844864 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:05.096538+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330191 data_alloc: 218103808 data_used: 8650752
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319782912 unmapped: 55844864 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:06.096681+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319782912 unmapped: 55844864 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:07.096835+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319782912 unmapped: 55844864 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:08.097029+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eaf32000/0x0/0x4ffc00000, data 0xec4248/0x105c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319782912 unmapped: 55844864 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:09.097290+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319782912 unmapped: 55844864 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:10.097463+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330191 data_alloc: 218103808 data_used: 8650752
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319782912 unmapped: 55844864 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:11.097628+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319782912 unmapped: 55844864 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:12.097807+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 319782912 unmapped: 55844864 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.736093521s of 20.968702316s, submitted: 54
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:13.097968+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 322469888 unmapped: 53157888 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea407000/0x0/0x4ffc00000, data 0x15df248/0x1777000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:14.098208+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323780608 unmapped: 51847168 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:15.098367+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410099 data_alloc: 218103808 data_used: 9551872
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323780608 unmapped: 51847168 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:16.098530+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323780608 unmapped: 51847168 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:17.098666+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323780608 unmapped: 51847168 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:18.098800+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323780608 unmapped: 51847168 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea37b000/0x0/0x4ffc00000, data 0x166a248/0x1802000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:19.098969+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323780608 unmapped: 51847168 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:20.099169+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410099 data_alloc: 218103808 data_used: 9551872
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323780608 unmapped: 51847168 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:21.099419+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323780608 unmapped: 51847168 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:22.099596+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323780608 unmapped: 51847168 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea35b000/0x0/0x4ffc00000, data 0x168b248/0x1823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:23.099727+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323780608 unmapped: 51847168 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:24.099968+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea35b000/0x0/0x4ffc00000, data 0x168b248/0x1823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323780608 unmapped: 51847168 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea35b000/0x0/0x4ffc00000, data 0x168b248/0x1823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:25.100179+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3405607 data_alloc: 218103808 data_used: 9560064
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323788800 unmapped: 51838976 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea35b000/0x0/0x4ffc00000, data 0x168b248/0x1823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:26.100432+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323788800 unmapped: 51838976 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:27.100645+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea35b000/0x0/0x4ffc00000, data 0x168b248/0x1823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323788800 unmapped: 51838976 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:28.100784+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323788800 unmapped: 51838976 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939800 session 0x55e30fcdef00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30fcdfa40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30e0a0960
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30ff9c960
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.766450882s of 16.084453583s, submitted: 70
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30d78b860
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30fb90d20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30ff9c3c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:29.100904+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939800 session 0x55e30dd7c000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e350c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e350c00 session 0x55e30d7b70e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea35b000/0x0/0x4ffc00000, data 0x168b248/0x1823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30e0ae960
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30ff9d680
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e311e98960
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323960832 unmapped: 51666944 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939800 session 0x55e30e41e5a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ffbfc00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ffbfc00 session 0x55e30e0afc20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30deb90e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30e421e00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:30.101042+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30ca00960
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939800 session 0x55e30d6b81e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498072 data_alloc: 218103808 data_used: 9560064
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 324067328 unmapped: 51560448 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:31.101188+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 324067328 unmapped: 51560448 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:32.101386+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 324067328 unmapped: 51560448 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:33.101561+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 324067328 unmapped: 51560448 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:34.101805+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ffe6000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ffe6000 session 0x55e30e4210e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9903000/0x0/0x4ffc00000, data 0x20e2257/0x227b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 324067328 unmapped: 51560448 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e311e981e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9903000/0x0/0x4ffc00000, data 0x20e2257/0x227b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:35.102007+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498072 data_alloc: 218103808 data_used: 9560064
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 324067328 unmapped: 51560448 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:36.102118+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30ff76780
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30d6074a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 324214784 unmapped: 51412992 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e98de000/0x0/0x4ffc00000, data 0x210627a/0x22a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ffbf000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:37.102458+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 324214784 unmapped: 51412992 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff90000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff90000 session 0x55e312a10d20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:38.102852+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f92f400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f92f400 session 0x55e312a11c20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 324214784 unmapped: 51412992 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:39.103763+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30d69d4a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.196200371s of 10.401590347s, submitted: 44
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30fcdfe00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e98b9000/0x0/0x4ffc00000, data 0x212a28a/0x22c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 326139904 unmapped: 49487872 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff90000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:40.104663+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3565124 data_alloc: 234881024 data_used: 17543168
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 326139904 unmapped: 49487872 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:41.104862+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 326311936 unmapped: 49315840 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:42.105457+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 326311936 unmapped: 49315840 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:43.105681+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 326311936 unmapped: 49315840 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:44.106055+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 326311936 unmapped: 49315840 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:45.106238+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e98b3000/0x0/0x4ffc00000, data 0x212f28a/0x22ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3577080 data_alloc: 234881024 data_used: 18952192
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 326311936 unmapped: 49315840 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:46.106613+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 326311936 unmapped: 49315840 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:47.106801+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 326311936 unmapped: 49315840 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e98b3000/0x0/0x4ffc00000, data 0x212f28a/0x22ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:48.107005+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 326311936 unmapped: 49315840 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:49.107176+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.838401794s of 10.005731583s, submitted: 44
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9208000/0x0/0x4ffc00000, data 0x27db28a/0x2976000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [1])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327704576 unmapped: 47923200 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:50.107299+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3634492 data_alloc: 234881024 data_used: 19398656
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327770112 unmapped: 47857664 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:51.107436+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 332718080 unmapped: 42909696 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:52.107598+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 332726272 unmapped: 42901504 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:53.107712+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8365000/0x0/0x4ffc00000, data 0x367e28a/0x3819000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 332095488 unmapped: 43532288 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:54.107872+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 332095488 unmapped: 43532288 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:55.108061+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753392 data_alloc: 234881024 data_used: 20586496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 332095488 unmapped: 43532288 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:56.108332+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8365000/0x0/0x4ffc00000, data 0x367e28a/0x3819000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939800 session 0x55e30d6065a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ffbf000 session 0x55e30fcf6000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 332095488 unmapped: 43532288 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f932800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:57.108489+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f932800 session 0x55e30fd4c3c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9390000/0x0/0x4ffc00000, data 0x2653267/0x27ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 332300288 unmapped: 43327488 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:58.108656+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331259904 unmapped: 44367872 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:30:59.108852+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331259904 unmapped: 44367872 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:00.109151+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30e3d43c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e312a103c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567120 data_alloc: 234881024 data_used: 12169216
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331259904 unmapped: 44367872 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.830406189s of 11.463669777s, submitted: 182
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30ff770e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e938e000/0x0/0x4ffc00000, data 0x2656267/0x27f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:01.109289+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323600384 unmapped: 52027392 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:02.109419+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323600384 unmapped: 52027392 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:03.109560+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323600384 unmapped: 52027392 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea3df000/0x0/0x4ffc00000, data 0x1606236/0x179e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:04.109791+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323600384 unmapped: 52027392 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:05.109990+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3377673 data_alloc: 218103808 data_used: 3653632
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323600384 unmapped: 52027392 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:06.110215+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323600384 unmapped: 52027392 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30d7dde00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:07.110358+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939800 session 0x55e30ff761e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939800 session 0x55e30d7b6780
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e311e99680
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d7921e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323624960 unmapped: 52002816 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:08.110465+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323624960 unmapped: 52002816 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:09.110633+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9d5a000/0x0/0x4ffc00000, data 0x1c8c236/0x1e24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323624960 unmapped: 52002816 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:10.110799+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429722 data_alloc: 218103808 data_used: 3653632
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323624960 unmapped: 52002816 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:11.110977+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323624960 unmapped: 52002816 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:12.111151+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323624960 unmapped: 52002816 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9d5a000/0x0/0x4ffc00000, data 0x1c8c236/0x1e24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:13.111321+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323633152 unmapped: 51994624 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:14.111520+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323641344 unmapped: 51986432 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:15.111686+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429722 data_alloc: 218103808 data_used: 3653632
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323641344 unmapped: 51986432 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:16.111849+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323641344 unmapped: 51986432 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:17.112081+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.665761948s of 16.830217361s, submitted: 41
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30ff772c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323641344 unmapped: 51986432 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9d36000/0x0/0x4ffc00000, data 0x1cb0236/0x1e48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:18.112272+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ffbf000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323641344 unmapped: 51986432 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:19.112417+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323641344 unmapped: 51986432 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:20.112563+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435087 data_alloc: 218103808 data_used: 3653632
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323649536 unmapped: 51978240 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:21.112700+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323649536 unmapped: 51978240 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:22.112907+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9d36000/0x0/0x4ffc00000, data 0x1cb0236/0x1e48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323657728 unmapped: 51970048 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:23.113147+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323665920 unmapped: 51961856 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:24.113353+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ffdd800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ffdd800 session 0x55e30fd4c3c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30fcf6000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d6065a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 323665920 unmapped: 51961856 heap: 375627776 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30fcdfe00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9d36000/0x0/0x4ffc00000, data 0x1cb0236/0x1e48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [0,0,3,0,1,3])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:25.113529+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939800 session 0x55e30d69d4a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd3f400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd3f400 session 0x55e30e421e00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30deb90e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30e0afc20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e311e98960
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517512 data_alloc: 218103808 data_used: 3653632
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 325050368 unmapped: 53731328 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:26.113725+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 325050368 unmapped: 53731328 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:27.113872+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 325058560 unmapped: 53723136 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:28.114039+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 325058560 unmapped: 53723136 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939800 session 0x55e30e0ae960
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:29.114226+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ffdf400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ffdf400 session 0x55e30d7b70e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 325058560 unmapped: 53723136 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:30.114377+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30ff9c3c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.626335144s of 12.808412552s, submitted: 53
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30fb90d20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573048 data_alloc: 234881024 data_used: 10498048
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 325369856 unmapped: 53411840 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e944c000/0x0/0x4ffc00000, data 0x25962db/0x2732000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:31.114580+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 325369856 unmapped: 53411840 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:32.114733+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327114752 unmapped: 51666944 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:33.114878+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e944c000/0x0/0x4ffc00000, data 0x25962db/0x2732000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e944c000/0x0/0x4ffc00000, data 0x25962db/0x2732000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327114752 unmapped: 51666944 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:34.115070+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327114752 unmapped: 51666944 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:35.115247+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3637180 data_alloc: 234881024 data_used: 19390464
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327114752 unmapped: 51666944 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:36.115377+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e944c000/0x0/0x4ffc00000, data 0x25962db/0x2732000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327114752 unmapped: 51666944 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:37.115515+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8b06000/0x0/0x4ffc00000, data 0x2ed62db/0x3072000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 328335360 unmapped: 50446336 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:38.115677+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 328392704 unmapped: 50388992 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:39.115837+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 328392704 unmapped: 50388992 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8ae6000/0x0/0x4ffc00000, data 0x2eee2db/0x308a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:40.115998+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8ae6000/0x0/0x4ffc00000, data 0x2eee2db/0x308a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3721470 data_alloc: 234881024 data_used: 19554304
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 328392704 unmapped: 50388992 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8ae6000/0x0/0x4ffc00000, data 0x2eee2db/0x308a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:41.116142+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.085219383s of 11.385103226s, submitted: 82
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 47243264 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:42.116523+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335421440 unmapped: 43360256 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:43.116680+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335732736 unmapped: 43048960 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7e8b000/0x0/0x4ffc00000, data 0x3b4f2db/0x3ceb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:44.116907+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335732736 unmapped: 43048960 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:45.117070+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3841316 data_alloc: 234881024 data_used: 21782528
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335740928 unmapped: 43040768 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:46.117167+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335740928 unmapped: 43040768 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:47.117420+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335740928 unmapped: 43040768 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:48.118540+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335740928 unmapped: 43040768 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:49.118935+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7e6f000/0x0/0x4ffc00000, data 0x3b732db/0x3d0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335740928 unmapped: 43040768 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30fcded20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ffbf000 session 0x55e30d7874a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:50.122639+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ca6dc00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ca6dc00 session 0x55e30fcde3c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3663554 data_alloc: 234881024 data_used: 14786560
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335732736 unmapped: 43048960 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:51.122948+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335732736 unmapped: 43048960 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:52.123181+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335732736 unmapped: 43048960 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:53.123428+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8e70000/0x0/0x4ffc00000, data 0x2b722db/0x2d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335732736 unmapped: 43048960 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:54.123717+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.011702538s of 12.548091888s, submitted: 212
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335732736 unmapped: 43048960 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:55.124079+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30e41f0e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff90000 session 0x55e30fb905a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3664206 data_alloc: 234881024 data_used: 14794752
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335732736 unmapped: 43048960 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:56.124207+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30ff9d4a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30fcf7a40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335757312 unmapped: 43024384 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30e3d54a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:57.124364+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e312a11860
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30e0a0f00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30fd4c000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30fd4c000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 41967616 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:58.124708+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 41967616 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:31:59.124987+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e96f1000/0x0/0x4ffc00000, data 0x22f231e/0x248d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 41967616 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:00.125133+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff90000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff90000 session 0x55e312a11860
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589976 data_alloc: 234881024 data_used: 12242944
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e96f1000/0x0/0x4ffc00000, data 0x22f231e/0x248d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 41967616 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:01.125357+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff90000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff90000 session 0x55e30fcf7a40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30ff9d4a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30fb905a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 41951232 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:02.125489+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 41951232 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:03.125669+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:04.125866+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 41951232 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e96ee000/0x0/0x4ffc00000, data 0x22f531e/0x2490000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:05.126043+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336838656 unmapped: 41943040 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3634945 data_alloc: 234881024 data_used: 18460672
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:06.126349+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337575936 unmapped: 41205760 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e96ee000/0x0/0x4ffc00000, data 0x22f531e/0x2490000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:07.126496+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337575936 unmapped: 41205760 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:08.126657+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337600512 unmapped: 41181184 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e96ee000/0x0/0x4ffc00000, data 0x22f531e/0x2490000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:09.126841+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337600512 unmapped: 41181184 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:10.127077+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337600512 unmapped: 41181184 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635585 data_alloc: 234881024 data_used: 18522112
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:11.127272+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337600512 unmapped: 41181184 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:12.127539+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337600512 unmapped: 41181184 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.450141907s of 18.702842712s, submitted: 75
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:13.127979+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337600512 unmapped: 41181184 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:14.128151+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 41590784 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905b000/0x0/0x4ffc00000, data 0x298831e/0x2b23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:15.128340+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 40534016 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717435 data_alloc: 234881024 data_used: 19505152
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:16.129081+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 40296448 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:17.129264+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 40296448 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:18.129443+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 40296448 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:19.130542+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 40296448 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8fac000/0x0/0x4ffc00000, data 0x2a3631e/0x2bd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:20.131335+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 40288256 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8fac000/0x0/0x4ffc00000, data 0x2a3631e/0x2bd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3710663 data_alloc: 234881024 data_used: 19509248
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:21.131570+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 40280064 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30fb90d20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30ff9c3c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:22.131720+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338534400 unmapped: 40247296 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30e0a6f00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:23.132054+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 40230912 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:24.132328+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338550784 unmapped: 40230912 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.726577759s of 11.250182152s, submitted: 151
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:25.132615+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 40222720 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9e14000/0x0/0x4ffc00000, data 0x1bd02bc/0x1d6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30d5f6780
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939800 session 0x55e30fcdf4a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3539789 data_alloc: 234881024 data_used: 12304384
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:26.132764+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 40222720 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30d78b860
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:27.132960+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331866112 unmapped: 46915584 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:28.133145+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331866112 unmapped: 46915584 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:29.133361+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331866112 unmapped: 46915584 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eafc7000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eafc7000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:30.133642+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331866112 unmapped: 46915584 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3285820 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:31.133932+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331866112 unmapped: 46915584 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:32.134150+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331866112 unmapped: 46915584 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:33.134369+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331866112 unmapped: 46915584 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:34.134531+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331866112 unmapped: 46915584 heap: 378781696 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eafc7000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:35.134787+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.740960121s of 10.906482697s, submitted: 55
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344547328 unmapped: 38436864 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d5f70e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375088 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:36.135009+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331866112 unmapped: 51118080 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:37.135264+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331866112 unmapped: 51118080 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:38.135543+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea7fe000/0x0/0x4ffc00000, data 0x11ea217/0x1380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331866112 unmapped: 51118080 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:39.135757+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331866112 unmapped: 51118080 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:40.136003+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331866112 unmapped: 51118080 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375088 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:41.136405+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331866112 unmapped: 51118080 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:42.136616+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331866112 unmapped: 51118080 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:43.136831+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331866112 unmapped: 51118080 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:44.137075+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30fb901e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea7fe000/0x0/0x4ffc00000, data 0x11ea217/0x1380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331866112 unmapped: 51118080 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:45.137209+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331866112 unmapped: 51118080 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384777 data_alloc: 218103808 data_used: 1753088
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:46.137340+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331882496 unmapped: 51101696 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:47.137491+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 333004800 unmapped: 49979392 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea7d9000/0x0/0x4ffc00000, data 0x120e23a/0x13a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:48.137646+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 333004800 unmapped: 49979392 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:49.137791+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 333004800 unmapped: 49979392 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea7d9000/0x0/0x4ffc00000, data 0x120e23a/0x13a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:50.138070+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 333004800 unmapped: 49979392 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466537 data_alloc: 234881024 data_used: 13283328
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:51.138242+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 333004800 unmapped: 49979392 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:52.138457+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 333004800 unmapped: 49979392 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea7d9000/0x0/0x4ffc00000, data 0x120e23a/0x13a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:53.138593+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 333004800 unmapped: 49979392 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:54.138850+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 333004800 unmapped: 49979392 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:55.139000+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 333004800 unmapped: 49979392 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.723236084s of 20.830955505s, submitted: 16
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:56.139160+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3523461 data_alloc: 234881024 data_used: 13307904
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336707584 unmapped: 46276608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:57.139280+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 46178304 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:58.139453+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 46178304 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9d59000/0x0/0x4ffc00000, data 0x1c8e23a/0x1e25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:59.139656+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 42K writes, 170K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 42K writes, 15K syncs, 2.74 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5941 writes, 25K keys, 5941 commit groups, 1.0 writes per commit group, ingest: 27.85 MB, 0.05 MB/s
                                           Interval WAL: 5941 writes, 2282 syncs, 2.60 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 46178304 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:00.140108+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 46178304 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:01.140290+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571193 data_alloc: 234881024 data_used: 14442496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 46178304 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:02.140473+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 46178304 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9d57000/0x0/0x4ffc00000, data 0x1c9023a/0x1e27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:03.140636+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 46178304 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:04.140818+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 46178304 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:05.141095+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 46170112 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:06.141326+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569957 data_alloc: 234881024 data_used: 14450688
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 46170112 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:07.141468+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 46170112 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.252246857s of 11.519014359s, submitted: 92
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:08.141672+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 46170112 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9d56000/0x0/0x4ffc00000, data 0x1c9123a/0x1e28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:09.141845+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 46170112 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:10.142076+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 46170112 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:11.142250+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570185 data_alloc: 234881024 data_used: 14450688
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 46170112 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30e85f860
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:12.142368+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 45752320 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:13.142543+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 45744128 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:14.142741+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 45744128 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e98bc000/0x0/0x4ffc00000, data 0x212b23a/0x22c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:15.142854+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 45744128 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:16.142969+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3613919 data_alloc: 234881024 data_used: 14450688
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 45744128 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:17.143128+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 45744128 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:18.143242+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 45744128 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:19.143418+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 45744128 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:20.143580+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.748434067s of 12.825232506s, submitted: 17
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939800 session 0x55e30ff76d20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 45744128 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e98bc000/0x0/0x4ffc00000, data 0x212b23a/0x22c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:21.143697+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3613919 data_alloc: 234881024 data_used: 14450688
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337248256 unmapped: 45735936 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:22.143844+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337616896 unmapped: 45367296 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:23.144108+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337649664 unmapped: 45334528 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:24.144275+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337649664 unmapped: 45334528 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:25.144428+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337649664 unmapped: 45334528 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e98bc000/0x0/0x4ffc00000, data 0x212b23a/0x22c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:26.144641+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3639039 data_alloc: 234881024 data_used: 17956864
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337649664 unmapped: 45334528 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:27.144940+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337649664 unmapped: 45334528 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e98bc000/0x0/0x4ffc00000, data 0x212b23a/0x22c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e98bc000/0x0/0x4ffc00000, data 0x212b23a/0x22c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:28.145093+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 45326336 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:29.145208+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e98bc000/0x0/0x4ffc00000, data 0x212b23a/0x22c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 45326336 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:30.145350+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 45326336 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:31.145499+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3638335 data_alloc: 234881024 data_used: 17956864
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 45326336 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:32.145628+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.808189392s of 11.818248749s, submitted: 2
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 42573824 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e98bc000/0x0/0x4ffc00000, data 0x212b23a/0x22c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:33.145779+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9442000/0x0/0x4ffc00000, data 0x259d23a/0x2734000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 42500096 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:34.146119+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 43171840 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:35.146262+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 43171840 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:36.146432+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3680609 data_alloc: 234881024 data_used: 18194432
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 43171840 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e943b000/0x0/0x4ffc00000, data 0x25ab23a/0x2742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:37.146646+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 43171840 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:38.146851+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 43163648 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:39.147061+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 43163648 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:40.147293+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 43163648 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:41.147443+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3680625 data_alloc: 234881024 data_used: 18194432
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e943b000/0x0/0x4ffc00000, data 0x25ab23a/0x2742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 43163648 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:42.147653+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 43163648 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e943b000/0x0/0x4ffc00000, data 0x25ab23a/0x2742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:43.147813+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 43163648 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:44.148069+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 43163648 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:45.149343+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30e0a1a40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 43163648 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff90000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.530860901s of 13.740232468s, submitted: 64
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:46.149536+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff90000 session 0x55e30e41e3c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3576745 data_alloc: 234881024 data_used: 13524992
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 43139072 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:47.149890+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 43139072 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:48.150098+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9d54000/0x0/0x4ffc00000, data 0x1c9323a/0x1e2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 43139072 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:49.150254+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 43139072 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:50.150400+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 43130880 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:51.150540+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9d54000/0x0/0x4ffc00000, data 0x1c9323a/0x1e2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3576929 data_alloc: 234881024 data_used: 13524992
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 43130880 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:52.150739+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 43130880 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:53.150922+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30d78b2c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30d7dc1e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 43130880 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30e36d2c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:54.151043+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb375000/0x0/0x4ffc00000, data 0x64e23a/0x7e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:55.151216+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:56.151387+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3307161 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:57.151569+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:58.151727+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:59.151899+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:00.152035+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb399000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:01.152232+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb399000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3307161 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:02.152359+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb399000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:03.152440+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb399000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:04.152863+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:05.153079+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:06.153268+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3307161 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:07.153490+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb399000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:08.153626+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.871526718s of 22.977365494s, submitted: 34
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:09.153814+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb399000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330588160 unmapped: 52396032 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:10.154006+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 52338688 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:11.154227+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306985 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:12.154369+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:13.154509+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb39a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:14.154696+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:15.154912+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb39a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:16.155123+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306985 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:17.155309+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:18.155489+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:19.155721+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:20.155887+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb39a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:21.156114+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb39a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306985 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:22.156355+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb39a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:23.156533+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:24.156768+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:25.156940+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 51273728 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:26.157144+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306985 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 51273728 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:27.157332+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 51273728 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:28.157519+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 51273728 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb39a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:29.157689+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 51273728 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:30.157930+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 51273728 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:31.158130+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306985 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 51273728 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:32.158307+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 51273728 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb39a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:33.158504+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331718656 unmapped: 51265536 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:34.158692+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331718656 unmapped: 51265536 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e311e985a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30fcf6b40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30e36cb40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e311e98780
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:35.158838+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.959754944s of 26.327398300s, submitted: 106
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30e2f2780
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327507968 unmapped: 59678720 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb39a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:36.159081+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3377363 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327507968 unmapped: 59678720 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:37.159277+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327507968 unmapped: 59678720 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:38.159490+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327507968 unmapped: 59678720 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:39.159672+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327507968 unmapped: 59678720 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eab8e000/0x0/0x4ffc00000, data 0xe5a217/0xff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:40.159837+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327507968 unmapped: 59678720 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:41.160001+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eab8e000/0x0/0x4ffc00000, data 0xe5a217/0xff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3377363 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327516160 unmapped: 59670528 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:42.160260+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327516160 unmapped: 59670528 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:43.160430+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30d7a8f00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327516160 unmapped: 59670528 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eab8e000/0x0/0x4ffc00000, data 0xe5a217/0xff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:44.160687+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30fb91a40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327524352 unmapped: 59662336 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30ff9cb40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30d7b6b40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:45.162181+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.409566879s of 10.530294418s, submitted: 23
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327696384 unmapped: 59490304 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:46.162323+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3383042 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327696384 unmapped: 59490304 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:47.162432+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327680000 unmapped: 59506688 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:48.162628+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327680000 unmapped: 59506688 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:49.162876+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eab69000/0x0/0x4ffc00000, data 0xe7e227/0x1015000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327680000 unmapped: 59506688 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eab69000/0x0/0x4ffc00000, data 0xe7e227/0x1015000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:50.163079+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327680000 unmapped: 59506688 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:51.163218+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442242 data_alloc: 218103808 data_used: 9359360
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327680000 unmapped: 59506688 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:52.163337+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327680000 unmapped: 59506688 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:53.163828+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eab69000/0x0/0x4ffc00000, data 0xe7e227/0x1015000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327680000 unmapped: 59506688 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:54.164412+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327680000 unmapped: 59506688 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:55.164759+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eab69000/0x0/0x4ffc00000, data 0xe7e227/0x1015000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327680000 unmapped: 59506688 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:56.165193+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442722 data_alloc: 218103808 data_used: 9371648
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.935249329s of 10.938068390s, submitted: 1
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331063296 unmapped: 56123392 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:57.165344+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8dae000/0x0/0x4ffc00000, data 0x1a91227/0x1c28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334012416 unmapped: 53174272 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:58.165896+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335339520 unmapped: 51847168 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:59.166215+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335339520 unmapped: 51847168 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:00.166798+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335339520 unmapped: 51847168 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8c8e000/0x0/0x4ffc00000, data 0x1ba3227/0x1d3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:01.167322+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3561558 data_alloc: 234881024 data_used: 10735616
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335339520 unmapped: 51847168 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:02.167589+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335339520 unmapped: 51847168 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:03.167769+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335339520 unmapped: 51847168 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:04.168130+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 52584448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:05.168299+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8ca1000/0x0/0x4ffc00000, data 0x1ba6227/0x1d3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 52584448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:06.168441+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550086 data_alloc: 234881024 data_used: 10735616
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 52584448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:07.168564+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 52584448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:08.168768+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 52584448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:09.168898+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8ca1000/0x0/0x4ffc00000, data 0x1ba6227/0x1d3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 52584448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:10.169139+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 52584448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:11.169339+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.522975922s of 14.898180008s, submitted: 128
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550086 data_alloc: 234881024 data_used: 10735616
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 52584448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:12.169486+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30ff9cf00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 52379648 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:13.169679+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 52379648 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:14.170085+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 52379648 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8336000/0x0/0x4ffc00000, data 0x2511227/0x26a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:15.170294+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 52379648 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:16.170467+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624726 data_alloc: 234881024 data_used: 10735616
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 52379648 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:17.170665+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 52379648 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:18.170872+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 52379648 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8336000/0x0/0x4ffc00000, data 0x2511227/0x26a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:19.171003+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 52379648 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939800 session 0x55e30d606000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:20.171172+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8336000/0x0/0x4ffc00000, data 0x2511227/0x26a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ffdc800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ffdc800 session 0x55e30fe8ab40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 52379648 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:21.171339+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30e0af0e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.732763290s of 10.002961159s, submitted: 15
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30fd0c960
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629128 data_alloc: 234881024 data_used: 10735616
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 52224000 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:22.171480+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 52224000 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:23.171623+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0x2535237/0x26cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 52224000 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:24.171841+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0x2535237/0x26cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 52224000 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:25.172001+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 49807360 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:26.172204+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3699208 data_alloc: 234881024 data_used: 20529152
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 49807360 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:27.172323+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0x2535237/0x26cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 49807360 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:28.172469+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 49807360 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:29.172610+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0x2535237/0x26cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 49807360 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:30.172785+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 49807360 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:31.172961+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3699560 data_alloc: 234881024 data_used: 20529152
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 49807360 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:32.173174+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 49807360 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:33.173354+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 49807360 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0x2535237/0x26cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:34.173570+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.206079483s of 13.230063438s, submitted: 5
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341655552 unmapped: 45531136 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:35.173727+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342220800 unmapped: 44965888 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:36.173852+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773124 data_alloc: 234881024 data_used: 21811200
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 44736512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:37.173983+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 44736512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:38.174131+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e69a8000/0x0/0x4ffc00000, data 0x2cf6237/0x2e8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 44736512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:39.174300+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 44736512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:40.174479+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e69a8000/0x0/0x4ffc00000, data 0x2cf6237/0x2e8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 44736512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:41.174587+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773284 data_alloc: 234881024 data_used: 21815296
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 44736512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:42.174785+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 44736512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:43.174986+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e69a8000/0x0/0x4ffc00000, data 0x2cf6237/0x2e8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 44736512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:44.175258+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342458368 unmapped: 44728320 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:45.175376+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e69a8000/0x0/0x4ffc00000, data 0x2cf6237/0x2e8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342466560 unmapped: 44720128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:46.175527+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773604 data_alloc: 234881024 data_used: 21823488
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342466560 unmapped: 44720128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:47.175654+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939800 session 0x55e30e0a6000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30d6b8f00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff86c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.841184616s of 13.115254402s, submitted: 88
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e69a8000/0x0/0x4ffc00000, data 0x2cf6237/0x2e8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342466560 unmapped: 44720128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:48.175758+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff86c00 session 0x55e30d6b9860
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342466560 unmapped: 44720128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:49.175916+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342466560 unmapped: 44720128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:50.176114+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342466560 unmapped: 44720128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:51.176285+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7aff000/0x0/0x4ffc00000, data 0x1ba8227/0x1d3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e311e99a40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d8843c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3561521 data_alloc: 234881024 data_used: 10739712
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338468864 unmapped: 48717824 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:52.176412+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30d793c20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:53.176589+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:54.176801+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:55.176972+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:56.177143+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332338 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:57.177325+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:58.177456+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:59.177594+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:00.177737+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:01.177866+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332338 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:02.178077+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:03.178322+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:04.178660+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:05.178827+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:06.178963+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332338 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:07.179193+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:08.179508+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:09.179727+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 48693248 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:10.179828+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 48693248 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:11.179984+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332338 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 48693248 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:12.180115+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 48693248 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:13.180267+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 48693248 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:14.180480+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 48693248 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:15.180662+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 48693248 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:16.180787+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332338 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 48693248 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:17.180939+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 48685056 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:18.181107+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 48685056 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:19.181244+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 48685056 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:20.181419+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 48685056 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:21.181556+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332338 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 48685056 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:22.181713+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 48685056 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:23.181875+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 48685056 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:24.182080+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338509824 unmapped: 48676864 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:25.182247+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:26.182405+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 48668672 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332338 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:27.182574+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 48668672 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:28.182762+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 48668672 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:29.182928+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 48668672 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:30.183111+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 48668672 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:31.183293+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 48668672 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332338 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:32.183468+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 48668672 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:33.183637+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 48668672 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30e36c5a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30d5d94a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30fd0de00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30e0afc20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:34.183821+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338526208 unmapped: 48660480 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.188243866s of 46.325260162s, submitted: 45
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30ff77a40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30d787c20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e311e99680
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30e420d20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30fd0d2c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:35.184051+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:36.184266+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387791 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8a33000/0x0/0x4ffc00000, data 0xc74227/0xe0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:37.184489+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:38.184734+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:39.184927+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:40.185136+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:41.185363+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8a33000/0x0/0x4ffc00000, data 0xc74227/0xe0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387791 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:42.185606+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:43.185804+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d6b81e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:44.185988+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.333760262s of 10.453811646s, submitted: 30
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:45.186079+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:46.186272+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8622000/0x0/0x4ffc00000, data 0xc7424a/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388840 data_alloc: 218103808 data_used: 1114112
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:47.186421+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:48.186588+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:49.186802+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:50.187003+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8622000/0x0/0x4ffc00000, data 0xc7424a/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:51.187182+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3426920 data_alloc: 218103808 data_used: 6430720
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:52.187313+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:53.187516+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:54.187743+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:55.187880+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:56.188044+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8622000/0x0/0x4ffc00000, data 0xc7424a/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3426920 data_alloc: 218103808 data_used: 6430720
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:57.188213+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.844214439s of 12.848373413s, submitted: 1
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7df3000/0x0/0x4ffc00000, data 0x149d24a/0x1635000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:58.188336+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341499904 unmapped: 45686784 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:59.188466+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 45645824 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:00.188658+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7dd0000/0x0/0x4ffc00000, data 0x14b724a/0x164f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:01.188863+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513346 data_alloc: 218103808 data_used: 7389184
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:02.190235+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:03.191261+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7dd0000/0x0/0x4ffc00000, data 0x14b724a/0x164f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:04.192281+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:05.192479+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:06.192870+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7dd0000/0x0/0x4ffc00000, data 0x14b724a/0x164f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513666 data_alloc: 218103808 data_used: 7397376
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:07.195763+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:08.196134+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:09.196452+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:10.196729+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:11.196995+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513666 data_alloc: 218103808 data_used: 7397376
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7dd0000/0x0/0x4ffc00000, data 0x14b724a/0x164f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:12.197248+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:13.197511+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 45514752 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30ff76f00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939800 session 0x55e30e41f0e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30e3d43c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30ff9d0e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.032838821s of 16.300992966s, submitted: 80
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e311e981e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:14.197690+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30fcf65a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff85c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff85c00 session 0x55e30d69d4a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30d885a40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30e0a0f00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343097344 unmapped: 44089344 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:15.197966+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343097344 unmapped: 44089344 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:16.198272+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7664000/0x0/0x4ffc00000, data 0x1c302bc/0x1dca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343097344 unmapped: 44089344 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572254 data_alloc: 218103808 data_used: 7397376
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:17.198512+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343097344 unmapped: 44089344 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:18.198834+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343097344 unmapped: 44089344 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:19.199230+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343097344 unmapped: 44089344 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7664000/0x0/0x4ffc00000, data 0x1c302bc/0x1dca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:20.199579+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343097344 unmapped: 44089344 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:21.199891+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343097344 unmapped: 44089344 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30fd0c3c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7664000/0x0/0x4ffc00000, data 0x1c302bc/0x1dca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3577152 data_alloc: 218103808 data_used: 7397376
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:22.200086+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343252992 unmapped: 43933696 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e311948400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:23.200226+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343252992 unmapped: 43933696 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:24.200382+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343261184 unmapped: 43925504 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:25.200502+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 43032576 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:26.200629+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 43032576 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3625416 data_alloc: 234881024 data_used: 14028800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:27.200789+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e763f000/0x0/0x4ffc00000, data 0x1c542df/0x1def000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 43032576 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:28.200953+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 43032576 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:29.201173+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 43032576 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:30.201308+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 43032576 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:31.201488+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e763f000/0x0/0x4ffc00000, data 0x1c542df/0x1def000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 43032576 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3625416 data_alloc: 234881024 data_used: 14028800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:32.201625+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 43032576 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:33.201751+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 43032576 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:34.201890+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.092348099s of 20.392938614s, submitted: 60
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346988544 unmapped: 40198144 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:35.202049+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 39239680 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e6871000/0x0/0x4ffc00000, data 0x2a212df/0x2bbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:36.202242+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 39165952 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:37.202417+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749660 data_alloc: 234881024 data_used: 15351808
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 39165952 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:38.202602+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 39165952 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:39.202765+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 39165952 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e6871000/0x0/0x4ffc00000, data 0x2a212df/0x2bbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:40.202921+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 39165952 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:41.203076+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 39165952 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:42.203261+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749660 data_alloc: 234881024 data_used: 15351808
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 39165952 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:43.203430+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 39165952 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:44.203560+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e6871000/0x0/0x4ffc00000, data 0x2a212df/0x2bbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 39165952 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:45.203691+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.890325546s of 11.178516388s, submitted: 115
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 39116800 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:46.203887+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 39108608 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:47.204088+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3744220 data_alloc: 234881024 data_used: 15425536
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 39108608 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30e41ef00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:48.204208+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e311948400 session 0x55e30fcde960
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 39100416 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30d7dcd20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:49.204337+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 40640512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8df9000/0x0/0x4ffc00000, data 0x14b724a/0x164f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:50.204507+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 40640512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:51.204627+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 40640512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:52.204824+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3524040 data_alloc: 218103808 data_used: 7462912
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 40640512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:53.204967+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 40640512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8df9000/0x0/0x4ffc00000, data 0x14b724a/0x164f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30d7a90e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:54.205163+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 44408832 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30fb91e00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:55.205274+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 44408832 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:56.205433+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 44408832 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:57.205546+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362192 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 44408832 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:58.205692+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 44408832 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:59.205827+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:00.205962+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:01.206122+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:02.206287+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362192 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:03.206464+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:04.206663+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:05.206858+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:06.206978+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:07.207099+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362192 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:08.207211+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:09.207369+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:10.207532+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff81000 session 0x55e30e0a10e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:11.207744+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:12.207894+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362192 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:13.208112+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:14.208280+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:15.208505+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:16.208666+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:17.208840+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362192 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:18.208973+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:19.209156+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:20.209327+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:21.209485+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:22.209635+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362192 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:23.209878+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:24.210133+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:25.210317+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 44384256 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:26.210524+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 44384256 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:27.210752+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362192 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 44384256 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:28.210950+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342810624 unmapped: 44376064 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:29.211149+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342810624 unmapped: 44376064 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.374755859s of 44.739795685s, submitted: 119
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:30.211295+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30d7a90e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30e0a0f00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 44367872 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:31.211486+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 44367872 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:32.211615+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3405027 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 44367872 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9804000/0x0/0x4ffc00000, data 0xad3279/0xc6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:33.211751+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 44359680 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:34.211933+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 44359680 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:35.212147+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 44359680 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:36.212294+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 44359680 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30d69d4a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9804000/0x0/0x4ffc00000, data 0xad3279/0xc6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:37.212472+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30fcf65a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3405027 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 44359680 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e311e981e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:38.212612+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30ff9d0e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9804000/0x0/0x4ffc00000, data 0xad3279/0xc6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 44359680 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:39.212753+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 44359680 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:40.212871+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 44343296 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9804000/0x0/0x4ffc00000, data 0xad3279/0xc6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:41.213121+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 44343296 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:42.213360+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438307 data_alloc: 218103808 data_used: 5828608
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 44343296 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:43.213976+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 44343296 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:44.214363+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 44343296 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9804000/0x0/0x4ffc00000, data 0xad3279/0xc6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:45.214879+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 44343296 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:46.215203+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 44343296 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:47.215642+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438307 data_alloc: 218103808 data_used: 5828608
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 44343296 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:48.216742+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9804000/0x0/0x4ffc00000, data 0xad3279/0xc6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 44343296 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:49.216926+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.489421844s of 19.609689713s, submitted: 29
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342851584 unmapped: 44335104 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:50.217057+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:51.217356+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:52.217582+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548313 data_alloc: 218103808 data_used: 6111232
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e899f000/0x0/0x4ffc00000, data 0x1938279/0x1acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:53.217740+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:54.218125+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:55.218376+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:56.218658+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e899f000/0x0/0x4ffc00000, data 0x1938279/0x1acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:57.218784+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548313 data_alloc: 218103808 data_used: 6111232
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:58.218964+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:59.219121+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e899f000/0x0/0x4ffc00000, data 0x1938279/0x1acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:00.219273+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e899f000/0x0/0x4ffc00000, data 0x1938279/0x1acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341557248 unmapped: 45629440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:01.219489+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d482400 session 0x55e311e990e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341557248 unmapped: 45629440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:02.219706+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548313 data_alloc: 218103808 data_used: 6111232
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341557248 unmapped: 45629440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e899f000/0x0/0x4ffc00000, data 0x1938279/0x1acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:03.219855+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341557248 unmapped: 45629440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:04.220092+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30ff9dc20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30d787680
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30d69d680
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff90400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff90400 session 0x55e311e98d20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e311949800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.601642609s of 14.823879242s, submitted: 91
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e311949800 session 0x55e30ff76000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30ff9c000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341368832 unmapped: 45817856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30e421e00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30d5f6b40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff90400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff90400 session 0x55e30fd0d680
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:05.220264+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e85bc000/0x0/0x4ffc00000, data 0x1d1a2db/0x1eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341385216 unmapped: 45801472 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:06.220494+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341385216 unmapped: 45801472 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:07.220663+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3579341 data_alloc: 218103808 data_used: 6115328
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341385216 unmapped: 45801472 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:08.220863+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341385216 unmapped: 45801472 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e311949800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e311949800 session 0x55e30e4210e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:09.221078+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e85bc000/0x0/0x4ffc00000, data 0x1d1a2db/0x1eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30e0a8f00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341385216 unmapped: 45801472 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:10.221200+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30d7dde00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30fb914a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341270528 unmapped: 45916160 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff90400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e311949800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:11.221357+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341278720 unmapped: 45907968 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:12.221517+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595288 data_alloc: 218103808 data_used: 7569408
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8596000/0x0/0x4ffc00000, data 0x1d3e30e/0x1ed8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341286912 unmapped: 45899776 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:13.221665+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 45891584 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:14.221871+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 45891584 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:15.222029+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8596000/0x0/0x4ffc00000, data 0x1d3e30e/0x1ed8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 45891584 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:16.222174+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 45891584 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:17.222258+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3612728 data_alloc: 218103808 data_used: 9990144
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 45891584 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:18.222379+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 45891584 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:19.222555+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 45891584 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:20.222702+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 45891584 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8596000/0x0/0x4ffc00000, data 0x1d3e30e/0x1ed8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:21.222835+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 45891584 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:22.222993+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.478260040s of 17.658832550s, submitted: 45
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642496 data_alloc: 218103808 data_used: 10014720
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8269000/0x0/0x4ffc00000, data 0x206b30e/0x2205000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343400448 unmapped: 43786240 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:23.223218+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343400448 unmapped: 43786240 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:24.223382+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e81a9000/0x0/0x4ffc00000, data 0x212b30e/0x22c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 43769856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:25.223502+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8128000/0x0/0x4ffc00000, data 0x21ac30e/0x2346000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 43769856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:26.223626+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 43769856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:27.223750+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3669580 data_alloc: 234881024 data_used: 10985472
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 43769856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:28.223872+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 43769856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8128000/0x0/0x4ffc00000, data 0x21ac30e/0x2346000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:29.224002+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 43769856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:30.224227+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 43769856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:31.224359+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x21cd30e/0x2367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 43769856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:32.224503+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3665952 data_alloc: 234881024 data_used: 10989568
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 43769856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:33.224647+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 43761664 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:34.224841+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 43761664 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:35.224969+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 43761664 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x21cd30e/0x2367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:36.225142+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.833963394s of 14.129931450s, submitted: 66
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff90400 session 0x55e30fcde1e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e311949800 session 0x55e30ca00960
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 43728896 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d606000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:37.225278+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554685 data_alloc: 218103808 data_used: 6115328
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 43728896 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:38.225440+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e899e000/0x0/0x4ffc00000, data 0x1938279/0x1acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 43728896 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:39.225619+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 43728896 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:40.225739+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30e41f0e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 43728896 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30e3d52c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:41.225889+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 43728896 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:42.226055+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381681 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 43728896 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:43.226211+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 43728896 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:44.226609+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 43728896 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:45.226732+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343465984 unmapped: 43720704 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:46.226883+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343465984 unmapped: 43720704 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:47.227056+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381681 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 43712512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:48.227199+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 43712512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:49.227358+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 43712512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:50.227542+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 43712512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:51.227719+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 43712512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:52.227889+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381681 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 43712512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:53.228038+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 43704320 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:54.228235+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 43704320 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:55.228415+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 43704320 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:56.228578+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 43704320 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:57.228711+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381681 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 43704320 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:58.228872+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 43704320 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:59.229104+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 43704320 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:00.229244+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 43704320 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:01.229375+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 43696128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:02.229540+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381681 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 43696128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:03.229698+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 43696128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:04.229865+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 43696128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:05.230083+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 43687936 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:06.230220+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 43687936 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:07.230391+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381681 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 43687936 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:08.230549+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 43687936 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:09.230699+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 43679744 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:10.230830+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 43679744 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:11.231074+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 43679744 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:12.231245+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381681 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 43679744 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:13.231398+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343515136 unmapped: 43671552 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:14.231574+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343515136 unmapped: 43671552 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:15.231712+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30ff9da40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff90400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff90400 session 0x55e30d7b74a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff90400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff90400 session 0x55e30e85f2c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343515136 unmapped: 43671552 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:16.231978+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d7dd0e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 39.772365570s of 40.016002655s, submitted: 65
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e312a114a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30e2f3e00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30d884f00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e311e98780
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d7a8d20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 49954816 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:17.232141+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3448693 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e93f9000/0x0/0x4ffc00000, data 0xede279/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:18.232542+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 49954816 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:19.232744+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 49954816 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e93f9000/0x0/0x4ffc00000, data 0xede279/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:20.232974+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 49954816 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:21.233373+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 49954816 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:22.234071+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 49954816 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3448693 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30deb83c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:23.234289+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 49954816 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30d6b9860
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:24.234581+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 49954816 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff90400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff90400 session 0x55e30d6061e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d787c20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:25.234761+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343547904 unmapped: 49938432 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e93f7000/0x0/0x4ffc00000, data 0xede2ac/0x1077000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:26.234917+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343556096 unmapped: 49930240 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:27.235063+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515372 data_alloc: 218103808 data_used: 10084352
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:28.235224+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e93f7000/0x0/0x4ffc00000, data 0xede2ac/0x1077000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:29.235555+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:30.235782+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:31.235980+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e93f7000/0x0/0x4ffc00000, data 0xede2ac/0x1077000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:32.236152+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515372 data_alloc: 218103808 data_used: 10084352
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:33.236305+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:34.236553+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e93f7000/0x0/0x4ffc00000, data 0xede2ac/0x1077000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:35.236725+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:36.236860+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.672105789s of 19.853677750s, submitted: 44
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:37.236992+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9371000/0x0/0x4ffc00000, data 0xf642ac/0x10fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345759744 unmapped: 47726592 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3590230 data_alloc: 218103808 data_used: 10219520
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:38.237158+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345759744 unmapped: 47726592 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:39.237348+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345759744 unmapped: 47726592 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:40.237557+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345759744 unmapped: 47726592 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:41.237728+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345759744 unmapped: 47726592 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8a99000/0x0/0x4ffc00000, data 0x183c2ac/0x19d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:42.237950+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 47718400 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3590230 data_alloc: 218103808 data_used: 10219520
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:43.238110+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 47718400 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:44.238294+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8a97000/0x0/0x4ffc00000, data 0x183e2ac/0x19d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 47718400 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:45.238471+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 47718400 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:46.238622+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 47718400 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8a97000/0x0/0x4ffc00000, data 0x183e2ac/0x19d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:47.238770+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 47718400 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589154 data_alloc: 218103808 data_used: 10223616
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:48.238909+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 47718400 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.483092308s of 12.714237213s, submitted: 77
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:49.239131+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 47702016 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:50.239290+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 47702016 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:51.239432+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 47702016 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8a96000/0x0/0x4ffc00000, data 0x183f2ac/0x19d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:52.239572+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30ff76f00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 53280768 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703300 data_alloc: 218103808 data_used: 10223616
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:53.239732+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 53280768 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:54.239922+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 53280768 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:55.240080+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 53280768 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:56.240224+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 53280768 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7c7b000/0x0/0x4ffc00000, data 0x265a2ac/0x27f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:57.240392+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 53280768 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30fcf72c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703300 data_alloc: 218103808 data_used: 10223616
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:58.240526+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 53272576 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66000 session 0x55e30fcf7680
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f925c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f925c00 session 0x55e311e98f00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:59.240715+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.225301743s of 10.326282501s, submitted: 25
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30fb90b40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 53272576 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:00.240840+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 53272576 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:01.240983+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7c7a000/0x0/0x4ffc00000, data 0x265a2bc/0x27f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 53272576 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:02.241119+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 45744128 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3799714 data_alloc: 234881024 data_used: 23601152
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:03.241264+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 45744128 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7c7a000/0x0/0x4ffc00000, data 0x265a2bc/0x27f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7c7a000/0x0/0x4ffc00000, data 0x265a2bc/0x27f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:04.241393+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 45744128 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:05.241539+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 45744128 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:06.241711+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 45744128 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:07.241906+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 45744128 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3799714 data_alloc: 234881024 data_used: 23601152
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:08.242039+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 45744128 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:09.242179+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7c7a000/0x0/0x4ffc00000, data 0x265a2bc/0x27f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355631104 unmapped: 45735936 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:10.242296+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355631104 unmapped: 45735936 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.759096146s of 11.772933006s, submitted: 3
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:11.242423+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 40542208 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:12.242628+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e719f000/0x0/0x4ffc00000, data 0x31332bc/0x32cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361168896 unmapped: 40198144 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3919252 data_alloc: 234881024 data_used: 25632768
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:13.242801+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361201664 unmapped: 40165376 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:14.242977+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361201664 unmapped: 40165376 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:15.243080+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361340928 unmapped: 40026112 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e700c000/0x0/0x4ffc00000, data 0x32b92bc/0x3453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:16.243248+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361340928 unmapped: 40026112 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e700c000/0x0/0x4ffc00000, data 0x32b92bc/0x3453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:17.243410+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361340928 unmapped: 40026112 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3906216 data_alloc: 234881024 data_used: 25632768
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:18.243588+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 39895040 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:19.243775+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 39895040 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:20.243997+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 39895040 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:21.244246+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 39895040 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:22.244502+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 39895040 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e6ffa000/0x0/0x4ffc00000, data 0x32da2bc/0x3474000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3906216 data_alloc: 234881024 data_used: 25632768
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:23.245087+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 39895040 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.417293549s of 12.718182564s, submitted: 129
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:24.245406+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e6ffa000/0x0/0x4ffc00000, data 0x32da2bc/0x3474000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 39895040 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e6ffa000/0x0/0x4ffc00000, data 0x32da2bc/0x3474000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:25.245640+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361480192 unmapped: 39886848 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66000 session 0x55e30fcde780
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30d7dc960
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:26.245841+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 44490752 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30e421c20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:27.246058+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 44490752 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601919 data_alloc: 218103808 data_used: 10223616
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:28.246474+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 44490752 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:29.246628+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 44490752 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30e85f860
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30d5d9c20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:30.246942+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30fcded20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9a5c000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:31.247092+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:32.247258+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406758 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:33.247399+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:34.247865+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9a5c000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:35.248031+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:36.248314+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:37.248469+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406758 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:38.248673+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:39.248956+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9a5c000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:40.249216+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:41.249387+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:42.249537+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406758 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:43.249873+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:44.250157+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:45.250353+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9a5c000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:46.250529+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:47.250763+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406758 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:48.250991+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:49.251192+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:50.251329+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9a5c000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:51.251484+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:52.251654+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406758 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:53.251791+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:54.252006+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9a5c000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:55.252157+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:56.252299+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:57.252430+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:58.252579+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406758 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:59.252748+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9a5c000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:00.252904+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:01.253030+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:02.253180+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66000 session 0x55e312a10780
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30d5f6d20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30e0a6f00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344113152 unmapped: 57253888 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30dd7de00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.901283264s of 39.113162994s, submitted: 65
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30fcdfe00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30ca00960
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66000 session 0x55e30fe8ba40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30e838d20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:03.253283+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30fd0dc20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3470701 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:04.253390+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:05.253565+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e947d000/0x0/0x4ffc00000, data 0xe5a227/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:06.253681+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:07.253860+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:08.254075+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3470701 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:09.254244+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:10.254388+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e947d000/0x0/0x4ffc00000, data 0xe5a227/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d7b8d20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:11.254544+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:12.254683+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e947d000/0x0/0x4ffc00000, data 0xe5a227/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:13.254821+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3522481 data_alloc: 218103808 data_used: 8404992
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:14.254980+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:15.255110+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:16.255273+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e947d000/0x0/0x4ffc00000, data 0xe5a227/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:17.255445+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:18.255573+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3530321 data_alloc: 218103808 data_used: 9543680
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:19.255683+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e947d000/0x0/0x4ffc00000, data 0xe5a227/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:20.255822+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e947d000/0x0/0x4ffc00000, data 0xe5a227/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:21.256061+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:22.256279+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.354810715s of 19.462732315s, submitted: 21
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 54460416 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:23.256468+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3588511 data_alloc: 218103808 data_used: 9900032
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 54460416 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:24.256636+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8d27000/0x0/0x4ffc00000, data 0x15b0227/0x1747000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 54460416 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:25.256781+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54452224 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:26.256931+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8d27000/0x0/0x4ffc00000, data 0x15b0227/0x1747000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54452224 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:27.257100+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54452224 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:28.257195+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599941 data_alloc: 218103808 data_used: 10031104
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54452224 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:29.257353+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54452224 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:30.257509+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54452224 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:31.257662+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8d27000/0x0/0x4ffc00000, data 0x15b0227/0x1747000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54452224 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:32.257797+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54444032 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:33.257959+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599941 data_alloc: 218103808 data_used: 10031104
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54444032 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:34.258142+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8d27000/0x0/0x4ffc00000, data 0x15b0227/0x1747000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54444032 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:35.258263+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54444032 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:36.258423+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54444032 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:37.258619+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54444032 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:38.258762+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599941 data_alloc: 218103808 data_used: 10031104
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54444032 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:39.260839+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66000 session 0x55e30e3d50e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30dd7cb40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e310e51400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e310e51400 session 0x55e313f51860
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e311e99a40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.313724518s of 17.482471466s, submitted: 58
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346947584 unmapped: 54419456 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8d27000/0x0/0x4ffc00000, data 0x15b0227/0x1747000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,1,1])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66000 session 0x55e30fd0d680
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30d6b81e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30e41e3c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff88400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff88400 session 0x55e30d7921e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d7a94a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:40.260996+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 54394880 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:41.261187+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 54386688 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:42.261360+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 54386688 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:43.261547+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3657942 data_alloc: 218103808 data_used: 10035200
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 54386688 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8667000/0x0/0x4ffc00000, data 0x1c6e299/0x1e07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:44.261690+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 54386688 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8667000/0x0/0x4ffc00000, data 0x1c6e299/0x1e07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:45.261871+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 54386688 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66000 session 0x55e312a11c20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:46.262053+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30fd4c5a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 54386688 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30ff9cd20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:47.262172+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ffe8c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ffe8c00 session 0x55e30e0a0f00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 52281344 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:48.262335+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3660241 data_alloc: 218103808 data_used: 10039296
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 52273152 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:49.262468+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 51511296 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:50.262630+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e74c6000/0x0/0x4ffc00000, data 0x1c6e2a9/0x1e08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 51511296 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:51.262815+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 51511296 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:52.262953+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 51511296 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:53.263077+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3702161 data_alloc: 234881024 data_used: 15888384
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 51511296 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:54.263267+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 51511296 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:55.263422+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.412758827s of 15.572922707s, submitted: 43
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 51511296 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:56.263606+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e74c4000/0x0/0x4ffc00000, data 0x1c6f2a9/0x1e09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 51511296 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:57.263794+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 51511296 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:58.263973+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3702469 data_alloc: 234881024 data_used: 15888384
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 49938432 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.74 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2800 writes, 11K keys, 2800 commit groups, 1.0 writes per commit group, ingest: 14.21 MB, 0.02 MB/s
                                           Interval WAL: 2799 writes, 1060 syncs, 2.64 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:59.264114+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355942400 unmapped: 45424640 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:00.264304+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 356016128 unmapped: 45350912 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67f3000/0x0/0x4ffc00000, data 0x29392a9/0x2ad3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:01.264430+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 45981696 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:02.264583+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 45924352 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:03.264796+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3814005 data_alloc: 234881024 data_used: 16809984
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67d1000/0x0/0x4ffc00000, data 0x29622a9/0x2afc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 45924352 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:04.264990+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67d1000/0x0/0x4ffc00000, data 0x29622a9/0x2afc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67d1000/0x0/0x4ffc00000, data 0x29622a9/0x2afc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 45924352 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:05.265256+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 45924352 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:06.265386+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67d1000/0x0/0x4ffc00000, data 0x29622a9/0x2afc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 45924352 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:07.265500+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.632470131s of 12.327059746s, submitted: 108
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67d0000/0x0/0x4ffc00000, data 0x29632a9/0x2afd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 45924352 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:08.265661+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3812485 data_alloc: 234881024 data_used: 16814080
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67cf000/0x0/0x4ffc00000, data 0x29652a9/0x2aff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 45924352 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:09.265816+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 45924352 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:10.265958+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 45916160 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:11.266081+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 45916160 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:12.266270+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67cf000/0x0/0x4ffc00000, data 0x29652a9/0x2aff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 45916160 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:13.266418+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3812485 data_alloc: 234881024 data_used: 16814080
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 45907968 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:14.266600+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67cf000/0x0/0x4ffc00000, data 0x29652a9/0x2aff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 45907968 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:15.266767+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67cf000/0x0/0x4ffc00000, data 0x29652a9/0x2aff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 45907968 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:16.266914+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 45907968 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:17.267104+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 45907968 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:18.267270+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3812485 data_alloc: 234881024 data_used: 16814080
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 45907968 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:19.267605+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67cf000/0x0/0x4ffc00000, data 0x29652a9/0x2aff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 45907968 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:20.267762+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67cf000/0x0/0x4ffc00000, data 0x29652a9/0x2aff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67cf000/0x0/0x4ffc00000, data 0x29652a9/0x2aff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 45907968 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:21.267942+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355467264 unmapped: 45899776 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:22.268126+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355467264 unmapped: 45899776 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:23.268342+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3812805 data_alloc: 234881024 data_used: 16822272
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355467264 unmapped: 45899776 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.391826630s of 16.407047272s, submitted: 3
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:24.268556+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355467264 unmapped: 45899776 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:25.268717+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67cf000/0x0/0x4ffc00000, data 0x29652a9/0x2aff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30dd7cf00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66000 session 0x55e30d8854a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355467264 unmapped: 45899776 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:26.269208+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30fd0d2c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 45875200 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:27.269430+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 45875200 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:28.270319+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3610026 data_alloc: 218103808 data_used: 10092544
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 45875200 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:29.271236+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30fd4c3c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355500032 unmapped: 45867008 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:30.271350+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7b85000/0x0/0x4ffc00000, data 0x15b1227/0x1748000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e313f51680
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:31.271531+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:32.271728+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:33.272122+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431522 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:34.272700+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:35.273122+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:36.273804+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:37.274327+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:38.274879+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431522 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:39.275108+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:40.275282+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:41.275656+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:42.276002+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:43.276294+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431522 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:44.276548+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:45.276729+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:46.276980+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:47.277193+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:48.277415+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431522 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:49.277553+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:50.277759+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.931455612s of 26.202787399s, submitted: 70
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30d7863c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350871552 unmapped: 54697984 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:51.277892+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30e36c960
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30e0afa40
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350871552 unmapped: 54697984 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:52.278167+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66000 session 0x55e30dd7d680
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30e2f32c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7fb7000/0x0/0x4ffc00000, data 0x1180227/0x1317000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350871552 unmapped: 54697984 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:53.278388+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519574 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350871552 unmapped: 54697984 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:54.278592+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352518144 unmapped: 53051392 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:55.278815+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352518144 unmapped: 53051392 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:56.279111+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30fe8b4a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30d7a8f00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30fb91680
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:57.279294+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:58.279470+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:59.279732+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:00.279910+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:01.280082+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:02.280243+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:03.280374+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:04.280511+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:05.280587+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:06.280733+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:07.280845+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:08.281001+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.590475082s of 18.700500488s, submitted: 24
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:09.281189+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:10.281321+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 55255040 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,1])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:11.281502+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:12.281875+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:13.282083+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:14.282269+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:15.282445+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:16.282616+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:17.282773+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:18.282971+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:19.283112+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:20.283252+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:21.283411+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:22.283591+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:23.283803+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:24.284002+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:25.284187+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:26.284343+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:27.284515+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:28.284701+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:29.284853+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:30.285077+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:31.285259+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:32.285411+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:33.285593+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 55205888 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:34.285777+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 55205888 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:35.285986+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 55205888 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:36.286244+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 55205888 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:37.286414+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 55205888 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:38.286584+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 55205888 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:39.286767+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 55205888 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:40.286945+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 55205888 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:41.287119+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 55197696 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:42.287957+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 55197696 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:43.288506+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 55197696 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:44.288707+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 55197696 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:45.288855+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 55197696 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:46.289086+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 55197696 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:47.289255+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 55197696 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:48.289433+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 55197696 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:49.289610+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 55189504 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:50.289829+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 55189504 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:51.290049+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 55189504 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:52.290211+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 55189504 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:53.290378+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 55189504 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:54.290639+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 55189504 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:55.290807+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 55189504 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:56.290984+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.899085999s of 47.251701355s, submitted: 106
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 55189504 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aeb000/0x0/0x4ffc00000, data 0x64e1f4/0x7e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _renew_subs
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:57.291131+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 293 ms_handle_reset con 0x55e30fd66000 session 0x55e30d6065a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350396416 unmapped: 55173120 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:58.291250+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350404608 unmapped: 55164928 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3439662 data_alloc: 218103808 data_used: 1114112
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:59.291413+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350404608 unmapped: 55164928 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:00.291613+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350404608 unmapped: 55164928 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 293 heartbeat osd_stat(store_statfs(0x4e86d8000/0x0/0x4ffc00000, data 0x64fdb5/0x7e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:01.291788+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350404608 unmapped: 55164928 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:02.291940+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350404608 unmapped: 55164928 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 293 handle_osd_map epochs [293,294], i have 293, src has [1,294]
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:03.292107+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350429184 unmapped: 55140352 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:04.292286+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350429184 unmapped: 55140352 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:05.292697+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350429184 unmapped: 55140352 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:06.293093+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 55132160 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:07.293360+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 55132160 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:08.294316+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 55132160 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:09.294472+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 55132160 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:10.294650+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 55132160 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:11.295135+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 55132160 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:12.295773+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 55132160 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:13.296147+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 55132160 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:14.296415+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350445568 unmapped: 55123968 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:15.296651+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350445568 unmapped: 55123968 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:16.296900+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350445568 unmapped: 55123968 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:17.297148+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350445568 unmapped: 55123968 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:18.297368+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350445568 unmapped: 55123968 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:19.297524+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350445568 unmapped: 55123968 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:20.297771+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350445568 unmapped: 55123968 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:21.298065+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 55115776 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:22.298356+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 55115776 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:23.298530+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 55115776 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:24.298719+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 55115776 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:25.298850+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 55115776 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:26.299063+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 55115776 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:27.299217+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 55115776 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:28.299392+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 55115776 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:29.299530+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350461952 unmapped: 55107584 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:30.299694+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350461952 unmapped: 55107584 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:31.299866+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350461952 unmapped: 55107584 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:32.300075+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350461952 unmapped: 55107584 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:33.300249+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350461952 unmapped: 55107584 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:34.300456+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350461952 unmapped: 55107584 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:35.300662+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350461952 unmapped: 55107584 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:36.300870+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350470144 unmapped: 55099392 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:37.301717+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 55091200 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:38.301931+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 55091200 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:39.302495+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 55091200 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:40.303392+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 55091200 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:41.303662+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 55091200 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:42.303938+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 55091200 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:43.304312+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 55091200 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:44.304843+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 55091200 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:45.305098+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 55083008 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:46.305487+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 55083008 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:47.305831+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 55083008 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:48.306195+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 55083008 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:49.306518+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 55083008 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:50.306768+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 55083008 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:51.306994+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 55083008 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:52.307247+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 55083008 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:53.307464+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350494720 unmapped: 55074816 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:54.307679+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350494720 unmapped: 55074816 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:55.307826+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 55066624 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:56.307982+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 55066624 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:57.308153+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 55066624 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:58.308281+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 55066624 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:59.308437+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 55066624 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:00.308621+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 55066624 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:01.308766+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350511104 unmapped: 55058432 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:02.308930+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350511104 unmapped: 55058432 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:03.309119+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350511104 unmapped: 55058432 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:04.310118+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350511104 unmapped: 55058432 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:05.310350+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350519296 unmapped: 55050240 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:06.310529+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350519296 unmapped: 55050240 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:07.310750+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350519296 unmapped: 55050240 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:08.311076+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350527488 unmapped: 55042048 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:09.311255+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350543872 unmapped: 55025664 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:10.311384+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350552064 unmapped: 55017472 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:11.311546+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350552064 unmapped: 55017472 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:12.311715+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350552064 unmapped: 55017472 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:13.311922+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350552064 unmapped: 55017472 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:14.312115+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350560256 unmapped: 55009280 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:15.312244+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350560256 unmapped: 55009280 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:16.312431+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350560256 unmapped: 55009280 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:17.312589+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 55001088 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:18.312710+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 55001088 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:19.312958+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 55001088 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:20.313135+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 55001088 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:21.313270+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 55001088 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:22.313481+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 55001088 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:23.313637+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 55001088 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:24.313828+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 55001088 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:25.313981+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350576640 unmapped: 54992896 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:26.314147+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350576640 unmapped: 54992896 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:27.314266+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350576640 unmapped: 54992896 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:28.314374+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 54984704 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:29.314524+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350601216 unmapped: 54968320 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:30.314699+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350601216 unmapped: 54968320 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:31.314878+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350601216 unmapped: 54968320 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:32.315089+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350601216 unmapped: 54968320 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:33.315292+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350609408 unmapped: 54960128 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:34.315510+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350609408 unmapped: 54960128 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:35.315679+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350609408 unmapped: 54960128 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:36.315935+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350617600 unmapped: 54951936 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:37.316196+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350617600 unmapped: 54951936 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:38.316482+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350617600 unmapped: 54951936 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:39.316720+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350617600 unmapped: 54951936 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:40.316949+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350617600 unmapped: 54951936 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:41.317148+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 54943744 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:42.317421+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 54943744 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:43.317680+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 54943744 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:44.317971+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 54943744 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:45.318197+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 54943744 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:46.318357+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 54943744 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:47.318534+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 54943744 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:48.318750+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 54943744 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:49.319092+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 54919168 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:50.319322+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 ms_handle_reset con 0x55e30ff89400 session 0x55e30ff9d680
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 54919168 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:51.319515+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 ms_handle_reset con 0x55e30ff89400 session 0x55e311e99c20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 54919168 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:52.319688+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 54919168 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:53.319903+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 54919168 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:54.320115+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442956 data_alloc: 218103808 data_used: 1126400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 54919168 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:55.320262+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 54919168 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:56.320430+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 54919168 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:57.320562+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350658560 unmapped: 54910976 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:58.320749+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350666752 unmapped: 54902784 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:59.320864+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442956 data_alloc: 218103808 data_used: 1126400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 123.570800781s of 123.729423523s, submitted: 50
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350666752 unmapped: 54902784 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:00.320973+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _renew_subs
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 295 ms_handle_reset con 0x55e30d721800 session 0x55e30fcf6780
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350683136 unmapped: 54886400 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:01.321078+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e86d4000/0x0/0x4ffc00000, data 0x6533b6/0x7e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350683136 unmapped: 54886400 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:02.321203+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350683136 unmapped: 54886400 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:03.321339+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _renew_subs
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 296 ms_handle_reset con 0x55e30f933c00 session 0x55e30fcdf0e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350715904 unmapped: 54853632 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:04.321516+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413453 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350724096 unmapped: 54845440 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:05.321633+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350724096 unmapped: 54845440 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:06.321761+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350724096 unmapped: 54845440 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:07.321987+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e8b41000/0x0/0x4ffc00000, data 0x1e4f87/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350732288 unmapped: 54837248 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:08.322155+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350732288 unmapped: 54837248 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:09.322300+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413453 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350732288 unmapped: 54837248 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:10.322443+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.094017982s of 11.202492714s, submitted: 30
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350740480 unmapped: 54829056 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:11.322563+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e8b41000/0x0/0x4ffc00000, data 0x1e4f87/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350748672 unmapped: 54820864 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:12.322678+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350756864 unmapped: 54812672 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:13.322896+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350756864 unmapped: 54812672 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:14.323153+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416427 data_alloc: 218103808 data_used: 1118208
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 297 ms_handle_reset con 0x55e30fd66000 session 0x55e30d7874a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350756864 unmapped: 54812672 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:15.323330+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350781440 unmapped: 54788096 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:16.323529+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350781440 unmapped: 54788096 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:17.323692+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:18.323923+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350781440 unmapped: 54788096 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:19.324115+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350781440 unmapped: 54788096 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421525 data_alloc: 218103808 data_used: 1126400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:20.324299+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350781440 unmapped: 54788096 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:21.324466+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350789632 unmapped: 54779904 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:22.324613+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350789632 unmapped: 54779904 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:23.324799+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350789632 unmapped: 54779904 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:24.324960+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 54771712 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421525 data_alloc: 218103808 data_used: 1126400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:25.325118+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 54771712 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:26.325284+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 54771712 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:27.325585+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 54771712 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:28.325747+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 54771712 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:29.325967+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 54763520 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:30.326144+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 54763520 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:31.326293+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 54755328 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:32.326439+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 54755328 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:33.326633+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 54755328 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:34.326943+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 54755328 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:35.327131+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 54747136 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:36.327322+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 54747136 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:37.327492+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 54738944 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:38.327666+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 54738944 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:39.327810+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 54738944 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:40.327985+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 54738944 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:41.328108+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 54738944 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:42.328328+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 54738944 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:43.328495+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 54738944 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:44.328653+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350838784 unmapped: 54730752 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:45.328775+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350855168 unmapped: 54714368 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:46.328903+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 54706176 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:47.329133+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 54706176 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:48.329353+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 54706176 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:49.329703+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 54706176 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:50.330123+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 54706176 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:51.330344+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 54706176 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:52.331095+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 54706176 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:53.331702+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350871552 unmapped: 54697984 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:54.332216+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350879744 unmapped: 54689792 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:55.332589+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350879744 unmapped: 54689792 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:56.332765+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 54673408 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:57.332990+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 54673408 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:58.333220+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 54673408 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:59.333379+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 54673408 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:00.333705+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 54673408 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:01.334008+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 54665216 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:02.334332+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 54665216 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:03.334555+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 54665216 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:04.334787+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 54665216 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:05.335090+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 54665216 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:06.335266+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 54665216 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:07.335455+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 54665216 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:08.335676+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 54665216 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:09.335869+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350912512 unmapped: 54657024 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:10.336068+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350912512 unmapped: 54657024 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:11.336264+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 54648832 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:12.336526+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 54648832 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:13.336759+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 54632448 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:14.336989+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 54632448 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:15.337180+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 54632448 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:16.337356+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 54632448 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:17.337539+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 54624256 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:18.337694+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 54624256 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:19.337829+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 54624256 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:20.337949+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 54624256 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:21.338068+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 54624256 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:22.338303+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 54624256 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:23.338505+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 54624256 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:24.338722+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 54624256 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:25.338913+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 54616064 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:26.339077+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 54607872 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:27.339286+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 54607872 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:28.339409+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 54607872 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:29.339560+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 54607872 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:30.339722+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350969856 unmapped: 54599680 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:31.339874+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350969856 unmapped: 54599680 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:32.340073+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350969856 unmapped: 54599680 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:33.340223+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 54583296 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:34.340440+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 54583296 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:35.340571+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 54583296 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:36.340740+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 54583296 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:37.340913+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 54583296 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:38.341073+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 54583296 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:39.341227+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 54583296 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:40.341392+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 54583296 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:41.341536+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 54575104 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:42.341734+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 54575104 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:43.341957+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 54566912 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:44.342289+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 54566912 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:45.342507+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 54566912 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:46.342655+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 54566912 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:47.342849+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 54566912 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:48.343104+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 54566912 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:49.343270+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351019008 unmapped: 54550528 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:50.343408+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351027200 unmapped: 54542336 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:51.343546+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351027200 unmapped: 54542336 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:52.343708+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351027200 unmapped: 54542336 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:53.343857+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351027200 unmapped: 54542336 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:54.344256+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351027200 unmapped: 54542336 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:55.344384+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351035392 unmapped: 54534144 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:56.344539+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351035392 unmapped: 54534144 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:57.344741+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 54525952 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:58.344927+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351051776 unmapped: 54517760 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:59.345079+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351051776 unmapped: 54517760 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:00.345225+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 109.445167542s of 109.478462219s, submitted: 18
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351051776 unmapped: 54517760 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:01.345368+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351051776 unmapped: 54517760 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 299 ms_handle_reset con 0x55e30ff84000 session 0x55e3109df0e0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:02.345535+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 359448576 unmapped: 46120960 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:03.345677+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351076352 unmapped: 62889984 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 299 handle_osd_map epochs [299,300], i have 299, src has [1,300]
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 300 ms_handle_reset con 0x55e30ff84000 session 0x55e30fb90000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7b37000/0x0/0x4ffc00000, data 0x11ea205/0x1387000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:04.345831+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 61825024 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7b37000/0x0/0x4ffc00000, data 0x11ea205/0x1387000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3539267 data_alloc: 218103808 data_used: 1142784
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:05.345980+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 61816832 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:06.346108+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 61808640 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:07.346215+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7b33000/0x0/0x4ffc00000, data 0x11ebd82/0x138a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 61808640 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:08.346315+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 61808640 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:09.346481+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 61808640 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3539267 data_alloc: 218103808 data_used: 1142784
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:10.346698+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 61808640 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:11.346877+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 61808640 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:12.347102+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 61808640 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7b33000/0x0/0x4ffc00000, data 0x11ebd82/0x138a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:13.347286+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:14.347590+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3539267 data_alloc: 218103808 data_used: 1142784
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:15.347803+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:16.348065+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:17.348243+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7b33000/0x0/0x4ffc00000, data 0x11ebd82/0x138a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:18.348430+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:19.348616+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3539267 data_alloc: 218103808 data_used: 1142784
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7b33000/0x0/0x4ffc00000, data 0x11ebd82/0x138a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:20.348783+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:21.349090+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 61792256 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:22.349268+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 61792256 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7b33000/0x0/0x4ffc00000, data 0x11ebd82/0x138a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:23.349491+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 61792256 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:24.349703+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 61792256 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3539267 data_alloc: 218103808 data_used: 1142784
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:25.349867+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 61792256 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:26.350089+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 61792256 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:27.350248+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 61792256 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7b33000/0x0/0x4ffc00000, data 0x11ebd82/0x138a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:28.350404+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 61784064 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:29.350569+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352190464 unmapped: 61775872 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7b33000/0x0/0x4ffc00000, data 0x11ebd82/0x138a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3539267 data_alloc: 218103808 data_used: 1142784
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:30.350747+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352190464 unmapped: 61775872 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _renew_subs
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.514171600s of 30.598321915s, submitted: 8
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:31.350866+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 301 ms_handle_reset con 0x55e30d721800 session 0x55e30e420f00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351166464 unmapped: 62799872 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:32.351049+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e7b31000/0x0/0x4ffc00000, data 0x11ed842/0x138c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351166464 unmapped: 62799872 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:33.381109+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351166464 unmapped: 62799872 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:34.381354+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351166464 unmapped: 62799872 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3541533 data_alloc: 218103808 data_used: 1142784
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:35.381539+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351166464 unmapped: 62799872 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:36.381682+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351166464 unmapped: 62799872 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:37.381852+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e7b31000/0x0/0x4ffc00000, data 0x11ed842/0x138c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351174656 unmapped: 62791680 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:38.382148+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351191040 unmapped: 62775296 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:39.382312+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351191040 unmapped: 62775296 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544507 data_alloc: 218103808 data_used: 1142784
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:40.382496+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351191040 unmapped: 62775296 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:41.382645+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351199232 unmapped: 62767104 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:42.382790+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351199232 unmapped: 62767104 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:43.382984+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351199232 unmapped: 62767104 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:44.383232+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351199232 unmapped: 62767104 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544507 data_alloc: 218103808 data_used: 1142784
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:45.383438+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351199232 unmapped: 62767104 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:46.383632+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 62758912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:47.383766+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 62758912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:48.383949+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 62758912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:49.384174+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 62758912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544507 data_alloc: 218103808 data_used: 1142784
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:50.384377+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 62758912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:51.384532+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 62758912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:52.430499+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 62758912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:53.430647+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 62750720 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:54.430826+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 62742528 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544507 data_alloc: 218103808 data_used: 1142784
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:55.431097+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 62742528 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:56.431343+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 62742528 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:57.431590+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 62742528 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:58.432135+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 62742528 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:59.432319+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 62742528 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:00.432528+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 62742528 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:01.432815+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 62734336 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:02.432998+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 62734336 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:03.433271+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 62734336 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:04.433626+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 62734336 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:05.433919+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 62734336 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:06.434899+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 62734336 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:07.435834+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 62726144 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:08.436391+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 62726144 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:09.437174+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351256576 unmapped: 62709760 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:10.437815+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 62701568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:11.438553+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 62701568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:12.438959+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 62701568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:13.439220+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 62701568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:14.439437+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 62701568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:15.439739+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 62701568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:16.440004+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 62701568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:17.440344+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 62693376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:18.440616+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 62693376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:19.440853+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 62693376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:20.441184+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 62693376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:21.441387+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 62693376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:22.441634+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 62693376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:23.441890+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351281152 unmapped: 62685184 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:24.442152+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351281152 unmapped: 62685184 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:25.442404+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 62668800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:26.442669+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 62668800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:27.443568+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 62668800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:28.444112+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 62668800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:29.444755+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 62668800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:30.445305+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 62668800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:31.445641+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 62668800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:32.446191+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 62668800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:33.446577+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 62660608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:34.447151+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 62660608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:35.447586+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 62660608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:36.447777+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 62660608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:37.448097+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 62660608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:38.448391+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 62660608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:39.448786+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 62660608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:40.449125+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 62652416 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:41.449539+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 62627840 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:42.449793+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 62627840 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:43.449976+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 62627840 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:44.450336+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 62627840 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:45.450714+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 62619648 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:46.451067+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 62619648 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:47.451338+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 62619648 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:48.451538+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 62611456 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:49.451725+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 62603264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:50.451916+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 62603264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:51.452078+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 62603264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:52.452310+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 62603264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:53.452520+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 62603264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:54.452826+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 62603264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:55.452996+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 62595072 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:56.453181+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 62586880 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:57.453387+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 62578688 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:58.453565+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 62578688 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:59.453749+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 62578688 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:00.453976+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 62578688 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:01.454160+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 62578688 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:02.454352+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 62578688 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:03.454489+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 62578688 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:04.454684+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 62570496 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:05.454791+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 62570496 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:06.454940+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 62570496 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:07.455083+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 62570496 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:08.455210+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 62570496 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:09.455428+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 62570496 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:10.455559+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 62570496 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:11.455705+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 62570496 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:12.455914+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 62554112 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:13.456075+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 62545920 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:14.456285+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 62545920 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:15.456478+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 62545920 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:16.456727+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 62545920 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:17.456933+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 62545920 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:18.457131+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 62537728 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:19.457301+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 62537728 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:20.457489+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 62529536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:21.457710+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 62529536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:22.457947+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 62529536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:23.458132+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 62529536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:24.458377+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 62529536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:25.458621+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 62529536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:26.458803+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 62529536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:27.459067+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 62529536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:28.459318+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 62521344 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:29.459509+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 62521344 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:30.459695+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 62521344 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:31.459864+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 62521344 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:32.460409+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 62513152 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:33.462788+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 62504960 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:34.464416+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 62504960 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:35.466153+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 62496768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:36.467658+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:37.468923+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 62496768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:38.470952+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 62496768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:39.471786+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 62496768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:40.472925+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 62496768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:41.473734+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 62488576 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:42.474324+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 62488576 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:43.474620+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 62488576 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:44.475156+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 62488576 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:45.475594+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 62480384 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:46.475997+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 62480384 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:47.476415+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 62472192 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:48.476703+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351502336 unmapped: 62464000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:49.476972+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351502336 unmapped: 62464000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:50.477239+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 62455808 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:51.477466+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 62455808 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:52.477705+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 62455808 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:53.477850+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 62447616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:54.478048+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 62447616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:55.478312+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 62447616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:56.478492+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 62447616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:57.478641+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 62447616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:58.478821+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 62447616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:59.478967+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 62447616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:00.479178+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 62447616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:01.479331+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 62431232 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:02.479508+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:03.479656+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:04.480494+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:05.481149+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:06.481723+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:07.482281+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:08.482834+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:09.483346+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:10.483936+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:11.484158+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:12.484340+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:13.484487+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:14.484852+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:15.485032+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:16.485323+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:17.485462+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 62406656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:18.485598+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 62406656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:19.485777+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 62406656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:20.485991+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 62406656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:21.486160+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 62406656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:22.486315+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 62398464 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:23.486528+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 62398464 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:24.486765+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 62398464 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:25.486963+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351576064 unmapped: 62390272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:26.487164+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351584256 unmapped: 62382080 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:27.487383+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 62373888 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:28.487574+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 62373888 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:29.487768+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 62373888 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:30.487983+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 62373888 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:31.488321+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 62373888 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:32.488544+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 62373888 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:33.488743+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 62357504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:34.489070+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 62357504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:35.489249+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 62357504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:36.489368+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 62357504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:37.489572+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 62357504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:38.489790+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 62357504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:39.489969+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 62357504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:40.491196+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 62357504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:41.491386+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 62349312 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:42.491558+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 62349312 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:43.491747+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 62349312 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:44.491970+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 62349312 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:45.492132+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 62349312 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:46.492299+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 62349312 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:47.492454+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 62349312 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:48.492618+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 62332928 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:49.492781+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 62332928 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:50.492940+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351641600 unmapped: 62324736 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:51.493131+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 62316544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:52.493348+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 62316544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:53.493530+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 62316544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:54.493711+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 62316544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:55.493896+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 62316544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:56.494113+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 62316544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:57.494300+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 62316544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:58.494449+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351657984 unmapped: 62308352 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 184K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.73 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 709 writes, 2213 keys, 709 commit groups, 1.0 writes per commit group, ingest: 1.43 MB, 0.00 MB/s
                                           Interval WAL: 710 writes, 312 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:59.494626+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351657984 unmapped: 62308352 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets getting new tickets!
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:00.494863+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _finish_auth 0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:00.496096+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 62300160 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:01.495078+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 62300160 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:02.495179+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 62300160 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:03.495339+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 62300160 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:04.495502+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: mgrc ms_handle_reset ms_handle_reset con 0x55e310670400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3625056923
Oct 14 09:59:31 compute-0 ceph-osd[87348]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3625056923,v1:192.168.122.100:6801/3625056923]
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: get_auth_request con 0x55e30e4d6c00 auth_method 0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: mgrc handle_mgr_configure stats_period=5
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 62300160 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:05.495610+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 62300160 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:06.495735+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 62300160 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:07.495893+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 62300160 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:08.496091+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 62291968 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:09.496309+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 62283776 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:10.497985+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 ms_handle_reset con 0x55e30f930400 session 0x55e30d69c000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 62283776 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:11.499687+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 62283776 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:12.501349+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 62283776 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:13.501557+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 62275584 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:14.502157+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 62275584 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:15.502876+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 62275584 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:16.504073+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 62275584 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:17.504254+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 62275584 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:18.505156+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 62267392 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:19.505408+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 62267392 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:20.506136+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 62267392 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:21.506261+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 62251008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:22.506564+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 62251008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:23.506855+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 62251008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:24.507056+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 62251008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:25.507208+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 62251008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:26.507399+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 62251008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:27.507695+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 62251008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:28.507836+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 62242816 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:29.507991+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 62234624 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:30.508273+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 62234624 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:31.508499+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 62234624 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:32.508724+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 62226432 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:33.508922+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 62226432 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:34.509247+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 62226432 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:35.509407+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 62226432 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:36.509787+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 62226432 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:37.509972+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 62218240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:38.510210+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 62218240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:39.510508+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 62218240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:40.510775+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 62218240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:41.510902+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 62218240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:42.511098+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 62218240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:43.511252+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 62218240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:44.511480+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 62218240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:45.511616+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 62201856 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:46.511745+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 62201856 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:47.511912+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 62201856 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:48.512056+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351772672 unmapped: 62193664 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:49.512170+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351772672 unmapped: 62193664 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:50.512340+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351772672 unmapped: 62193664 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:51.512461+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351772672 unmapped: 62193664 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:52.512644+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351772672 unmapped: 62193664 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:53.512783+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 62177280 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:54.512929+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 62177280 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:55.513112+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 62169088 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:56.513286+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 62169088 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:57.513424+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 62169088 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:58.513544+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 62169088 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:59.513670+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 62169088 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:00.513834+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 62169088 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 ms_handle_reset con 0x55e30d721000 session 0x55e311e983c0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:01.513978+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 62160896 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:02.514117+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 62160896 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:03.514277+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 62160896 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:04.514475+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 62152704 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:05.514586+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 62152704 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:06.514732+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 62152704 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:07.514885+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 62152704 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:08.515148+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 277.841064453s of 277.904357910s, submitted: 24
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 62152704 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:09.515319+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 62095360 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:10.515489+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 62046208 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:11.515643+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 62046208 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:12.515795+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 62046208 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:13.516076+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 62046208 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:14.516601+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 62046208 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:15.517217+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 62046208 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:16.517465+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 62046208 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:17.517822+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 62038016 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:18.518062+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 62038016 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:19.519088+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 62038016 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:20.520080+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 62038016 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:21.520506+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 62038016 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:22.520954+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 62038016 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:23.521123+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 62038016 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:24.521426+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 62038016 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:25.521731+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 62029824 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:26.522241+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 62029824 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:27.522480+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 62029824 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:28.522650+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 62029824 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:29.522787+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 62029824 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:30.522901+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 62029824 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:31.523081+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 62029824 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:32.523214+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 62021632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:33.523526+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 62021632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:34.523747+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 62021632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:35.523899+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 62021632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:36.524051+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 62021632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:37.524178+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 62021632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:38.524338+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 62021632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:39.524529+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 62021632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:40.524849+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 62013440 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:41.524988+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 62013440 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:42.525152+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 62013440 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:43.525420+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 62013440 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:44.525697+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 62005248 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:45.525853+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 62005248 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:46.526073+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 62005248 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:47.526226+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 62005248 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:48.526453+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 61997056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:49.526690+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 61997056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:50.526905+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 61997056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:51.527084+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 61997056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:52.527266+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 61997056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:53.527442+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 61997056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:54.527706+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 61997056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:55.527941+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 61997056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:56.528125+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 61988864 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:57.528311+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 61988864 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:58.528558+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 61988864 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:59.528673+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 61988864 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:00.528761+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 61980672 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:01.528925+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 61980672 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:02.529123+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 61980672 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:03.529283+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 61980672 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:04.529557+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 61972480 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:05.529685+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 61972480 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:06.529852+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 61972480 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:07.529978+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 61964288 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:08.530123+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 61956096 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:09.530289+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 61956096 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:10.530420+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:11.530542+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 61956096 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:12.530752+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 61956096 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:13.530983+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 61956096 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:14.531426+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 61956096 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:15.531558+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 61956096 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:16.531699+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 61947904 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:17.531832+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 61947904 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:18.532065+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 61947904 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:19.532262+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 61947904 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:20.532546+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 61947904 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:21.532814+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 61939712 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:22.533100+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 61939712 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:23.533321+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 61939712 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:24.533642+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 61939712 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:25.533838+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 61939712 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:26.534000+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 61939712 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:27.534201+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 61939712 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:28.534375+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 61939712 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:29.534527+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 61931520 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:30.534676+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 61931520 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:31.534871+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 61931520 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:32.535037+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 61923328 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:33.535168+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 61923328 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:34.535386+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 61923328 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:35.535623+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 61923328 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:36.535927+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 61923328 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:37.536169+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 61915136 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:38.536319+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 61915136 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:39.536578+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 61906944 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:40.536737+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 61906944 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:41.536911+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 61906944 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:42.537103+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 61906944 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:43.537219+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 61906944 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:44.537506+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 61906944 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:45.537745+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 61890560 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:46.537901+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 61890560 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:47.538118+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 61890560 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:48.538288+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 61890560 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:49.538500+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 61890560 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:50.538704+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 61890560 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:51.538890+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 61890560 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:52.539117+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 61890560 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:53.539331+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 61882368 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:54.539596+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 61882368 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:55.539799+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 61882368 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:56.540248+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 61882368 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:57.540443+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 61882368 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:58.540687+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 61882368 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:59.540908+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 61882368 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:00.541116+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 61882368 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:01.541329+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 61865984 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:02.541541+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 61865984 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:03.541768+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 61865984 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:04.541986+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 61865984 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:05.542162+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352108544 unmapped: 61857792 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:06.542362+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352108544 unmapped: 61857792 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:07.542622+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352108544 unmapped: 61857792 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:08.542891+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352108544 unmapped: 61857792 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:09.543120+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352108544 unmapped: 61857792 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:10.543335+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 61849600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:11.543542+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 61849600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:12.543739+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 61849600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:13.543948+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 61849600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:14.544296+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 61849600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:15.544488+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 61849600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:16.544710+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 61849600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:17.544900+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 61841408 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:18.545093+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 61841408 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:19.545301+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 61841408 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:20.545503+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 61841408 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:21.545743+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 61833216 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:22.545980+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 61833216 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:23.546141+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 61833216 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:24.546363+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 61816832 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:25.546533+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 61816832 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:26.546700+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 61808640 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:27.546919+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:28.547085+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:29.547261+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:30.547421+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:31.547610+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:32.547726+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:33.547865+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 61784064 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:34.548056+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 61784064 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:35.548214+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 61784064 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _renew_subs
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 146.797119141s of 147.158462524s, submitted: 106
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 303 ms_handle_reset con 0x55e30ff89400 session 0x55e30e0a1e00
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:36.548335+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3547248 data_alloc: 218103808 data_used: 1155072
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 61751296 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:37.548459+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 61751296 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f97b800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:38.548577+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 61751296 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _renew_subs
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:39.548727+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 304 ms_handle_reset con 0x55e30f97b800 session 0x55e30fcdf4a0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 61734912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:40.548839+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 61734912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:41.549074+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e8b29000/0x0/0x4ffc00000, data 0x1f2a37/0x394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3443072 data_alloc: 218103808 data_used: 1155072
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 61718528 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:42.549241+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 61718528 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:43.549397+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d7800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 61702144 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8b29000/0x0/0x4ffc00000, data 0x1f2a37/0x394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [0,0,1])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _renew_subs
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 306 ms_handle_reset con 0x55e30e4d7800 session 0x55e30d7a9680
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:44.549600+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 61677568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:45.549783+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 61677568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:46.549953+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452346 data_alloc: 218103808 data_used: 1155072
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 61677568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:47.550097+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 61677568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:48.550305+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 61677568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:49.550452+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 61669376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:50.550633+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 61669376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:51.550814+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452346 data_alloc: 218103808 data_used: 1155072
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 61669376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:52.550991+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 61669376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:53.551094+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 61669376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:54.551594+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 61669376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:55.551710+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 61669376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:56.551876+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452346 data_alloc: 218103808 data_used: 1155072
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 61669376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:57.555073+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 61652992 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:58.555289+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 61652992 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:59.555460+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 61652992 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:00.555654+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 61652992 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:01.555891+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452346 data_alloc: 218103808 data_used: 1155072
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352321536 unmapped: 61644800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:02.556130+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352321536 unmapped: 61644800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:03.556318+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352321536 unmapped: 61644800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:04.556598+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352321536 unmapped: 61644800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:05.556846+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352329728 unmapped: 61636608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:06.557471+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452666 data_alloc: 218103808 data_used: 1163264
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352329728 unmapped: 61636608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:07.557650+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352329728 unmapped: 61636608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:08.557825+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352329728 unmapped: 61636608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:09.558227+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352329728 unmapped: 61636608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:10.558482+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352329728 unmapped: 61636608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:11.558634+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452666 data_alloc: 218103808 data_used: 1163264
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352329728 unmapped: 61636608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:12.558844+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352329728 unmapped: 61636608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:13.559053+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 61620224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:14.559306+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 61620224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:15.559487+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 61620224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:16.559633+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452666 data_alloc: 218103808 data_used: 1163264
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 61620224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:17.559819+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 61620224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:18.559966+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 61620224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:19.560129+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 61620224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:20.560348+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 61620224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:21.560545+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452666 data_alloc: 218103808 data_used: 1163264
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352354304 unmapped: 61612032 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:22.560750+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352354304 unmapped: 61612032 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:23.560917+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352354304 unmapped: 61612032 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:24.561122+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352354304 unmapped: 61612032 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:25.561284+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352354304 unmapped: 61612032 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:26.561481+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452666 data_alloc: 218103808 data_used: 1163264
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352362496 unmapped: 61603840 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:27.561632+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352362496 unmapped: 61603840 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:28.561795+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352370688 unmapped: 61595648 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:29.561908+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352387072 unmapped: 61579264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:30.562104+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352387072 unmapped: 61579264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:31.562254+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452666 data_alloc: 218103808 data_used: 1163264
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352387072 unmapped: 61579264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:32.562385+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352387072 unmapped: 61579264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:33.562566+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352387072 unmapped: 61579264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:34.562723+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352387072 unmapped: 61579264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:35.562841+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352387072 unmapped: 61579264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:36.563041+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452666 data_alloc: 218103808 data_used: 1163264
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352387072 unmapped: 61579264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:37.563177+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 61571072 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:38.563322+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 61571072 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:39.563448+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d7800
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 63.272354126s of 63.442237854s, submitted: 57
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 61554688 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _renew_subs
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: get_auth_request con 0x55e30d483400 auth_method 0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:40.563581+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 307 ms_handle_reset con 0x55e30e4d7800 session 0x55e30d7b6d20
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 60481536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:41.563722+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3454640 data_alloc: 218103808 data_used: 1171456
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 60481536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:42.563873+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8b1f000/0x0/0x4ffc00000, data 0x1f7be8/0x39d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 60481536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:43.564108+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 60481536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:44.564307+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 60481536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:45.564424+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 60473344 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:46.578673+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3454640 data_alloc: 218103808 data_used: 1171456
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 60473344 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8b1f000/0x0/0x4ffc00000, data 0x1f7be8/0x39d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:47.578938+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 60473344 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 307 handle_osd_map epochs [307,308], i have 307, src has [1,308]
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:48.579106+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 60456960 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:49.579314+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 60456960 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:50.579534+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 60456960 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:51.579801+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 60456960 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:52.580064+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 60456960 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:53.580216+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 60448768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:54.580436+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 60448768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:55.580637+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 60448768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:56.580867+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 60448768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:57.581124+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 60448768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:58.581246+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 60448768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:59.581395+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 60448768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:00.581578+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 60448768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:01.581731+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 60432384 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:02.595472+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 60432384 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:03.595649+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 60432384 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:04.595810+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 60432384 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:05.595974+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 60432384 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:06.596069+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 60432384 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:07.596312+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 60432384 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:08.596568+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 60424192 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:09.596900+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 60424192 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:10.597065+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 60416000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:11.597195+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 60416000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:12.597469+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 60416000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:13.597720+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 60416000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:14.597976+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 60416000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:15.598205+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 60416000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:16.598431+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 60416000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:17.598709+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 60407808 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:18.598875+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 60407808 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:19.599090+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 60407808 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:20.599308+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 60399616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:21.599533+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 60399616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:22.599722+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 60399616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:23.600072+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 60399616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:24.600347+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 60399616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:25.600540+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 60391424 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:26.600678+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 60391424 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:27.601066+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 60391424 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:28.601250+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 60391424 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:29.601407+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 60391424 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:30.601607+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 60391424 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:31.601853+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 60391424 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:32.602071+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 60375040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:33.602308+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 60375040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:34.602491+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 60375040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:35.602634+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 60375040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:36.602871+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 60375040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:37.603180+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 60375040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:38.603446+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 60375040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:39.603664+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 60375040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:40.603873+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 60366848 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:41.604094+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 60366848 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:42.604291+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 60366848 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:43.604419+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 60366848 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:44.604579+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 60366848 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:45.604698+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 60358656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:46.606155+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 60358656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:47.606309+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 60358656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:48.606486+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 60342272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:49.606684+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 60342272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:50.606815+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 60342272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:51.606976+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 60342272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:52.607088+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 60342272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:53.607212+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 60342272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:54.607373+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 60342272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:55.607509+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 60342272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:56.607654+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 09:59:31 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 09:59:31 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 60334080 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:57.608089+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: do_command 'config diff' '{prefix=config diff}'
Oct 14 09:59:31 compute-0 ceph-osd[87348]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353705984 unmapped: 60260352 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: do_command 'config show' '{prefix=config show}'
Oct 14 09:59:31 compute-0 ceph-osd[87348]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:58.608301+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: do_command 'counter dump' '{prefix=counter dump}'
Oct 14 09:59:31 compute-0 ceph-osd[87348]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 14 09:59:31 compute-0 ceph-osd[87348]: do_command 'counter schema' '{prefix=counter schema}'
Oct 14 09:59:31 compute-0 ceph-osd[87348]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353083392 unmapped: 60882944 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:59.608478+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353107968 unmapped: 60858368 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 09:59:31 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:00.608641+0000)
Oct 14 09:59:31 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 09:59:31 compute-0 ceph-osd[87348]: do_command 'log dump' '{prefix=log dump}'
Oct 14 09:59:31 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23151 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:31 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 09:59:31 compute-0 ceph-mon[74249]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 14 09:59:31 compute-0 ceph-mon[74249]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 14 09:59:31 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3244955569' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 14 09:59:31 compute-0 ceph-mon[74249]: pgmap v3207: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 14 09:59:32 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/408331916' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 14 09:59:32 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 14 09:59:32 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2618336925' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 14 09:59:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:59:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:59:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:59:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:59:32 compute-0 ceph-mon[74249]: from='client.23151 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:32 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/408331916' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 14 09:59:32 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2618336925' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 14 09:59:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 09:59:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 09:59:32 compute-0 nova_compute[259627]: 2025-10-14 09:59:32.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:59:32
Oct 14 09:59:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 09:59:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 09:59:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['images', 'volumes', 'backups', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', '.rgw.root', 'default.rgw.log']
Oct 14 09:59:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 09:59:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 14 09:59:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2331208599' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 14 09:59:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:59:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 09:59:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:59:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 09:59:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 09:59:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:59:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:59:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 09:59:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:59:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 09:59:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 09:59:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 14 09:59:33 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3810345270' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 14 09:59:33 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2331208599' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 14 09:59:33 compute-0 ceph-mon[74249]: pgmap v3208: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:33 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3810345270' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 14 09:59:33 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23161 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:33 compute-0 systemd[1]: Starting Hostname Service...
Oct 14 09:59:34 compute-0 systemd[1]: Started Hostname Service.
Oct 14 09:59:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 14 09:59:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2265609854' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 14 09:59:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 14 09:59:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/114205993' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 14 09:59:34 compute-0 ceph-mon[74249]: from='client.23161 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:34 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2265609854' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 14 09:59:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:35 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23167 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:35 compute-0 nova_compute[259627]: 2025-10-14 09:59:35.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:35 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 14 09:59:35 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2569788319' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 14 09:59:35 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/114205993' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 14 09:59:35 compute-0 ceph-mon[74249]: pgmap v3209: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:35 compute-0 ceph-mon[74249]: from='client.23167 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:35 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2569788319' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 14 09:59:36 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23171 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:36 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23173 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:36 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 14 09:59:36 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1153443781' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 14 09:59:36 compute-0 ceph-mon[74249]: from='client.23171 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:36 compute-0 ceph-mon[74249]: from='client.23173 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:36 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1153443781' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 14 09:59:37 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 14 09:59:37 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/896174513' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 14 09:59:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:37 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23179 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:37 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/896174513' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 14 09:59:37 compute-0 ceph-mon[74249]: pgmap v3210: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:37 compute-0 nova_compute[259627]: 2025-10-14 09:59:37.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23181 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:38 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:59:38 compute-0 sudo[445681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:59:38 compute-0 sudo[445681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:59:38 compute-0 sudo[445681]: pam_unix(sudo:session): session closed for user root
Oct 14 09:59:38 compute-0 sudo[445727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:59:38 compute-0 sudo[445727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:59:38 compute-0 sudo[445727]: pam_unix(sudo:session): session closed for user root
Oct 14 09:59:38 compute-0 sudo[445761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:59:38 compute-0 sudo[445761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:59:38 compute-0 sudo[445761]: pam_unix(sudo:session): session closed for user root
Oct 14 09:59:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 14 09:59:38 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2206957025' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 14 09:59:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:59:38 compute-0 sudo[445798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 09:59:38 compute-0 sudo[445798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:59:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 14 09:59:38 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2009324371' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 14 09:59:38 compute-0 ceph-mon[74249]: from='client.23179 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:38 compute-0 ceph-mon[74249]: from='client.23181 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:38 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2206957025' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 14 09:59:38 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2009324371' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 14 09:59:38 compute-0 sudo[445798]: pam_unix(sudo:session): session closed for user root
Oct 14 09:59:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:59:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:59:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 09:59:39 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:59:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 09:59:39 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:59:39 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev db93a6a2-a57b-486c-acda-75e0841cb5d4 does not exist
Oct 14 09:59:39 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 9c7c5d18-96aa-4bc8-8a69-1288448b96f1 does not exist
Oct 14 09:59:39 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 04f9cd4c-4e2a-4ce4-91c7-c1dc4fdaa4b2 does not exist
Oct 14 09:59:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 09:59:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:59:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 09:59:39 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:59:39 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 09:59:39 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:59:39 compute-0 sudo[445993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:59:39 compute-0 sudo[445993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:59:39 compute-0 sudo[445993]: pam_unix(sudo:session): session closed for user root
Oct 14 09:59:39 compute-0 sudo[446025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:59:39 compute-0 sudo[446025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:59:39 compute-0 sudo[446025]: pam_unix(sudo:session): session closed for user root
Oct 14 09:59:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:39 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23187 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:39 compute-0 sudo[446060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:59:39 compute-0 sudo[446060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:59:39 compute-0 sudo[446060]: pam_unix(sudo:session): session closed for user root
Oct 14 09:59:39 compute-0 sudo[446095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 09:59:39 compute-0 sudo[446095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:59:39 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23189 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:39 compute-0 podman[446245]: 2025-10-14 09:59:39.838415868 +0000 UTC m=+0.066529603 container create 64722d59d5965492ecb1e4feae3c88efb6b65e68fd4021ef5e468edc4e2c4360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatelet, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 09:59:39 compute-0 systemd[1]: Started libpod-conmon-64722d59d5965492ecb1e4feae3c88efb6b65e68fd4021ef5e468edc4e2c4360.scope.
Oct 14 09:59:39 compute-0 podman[446245]: 2025-10-14 09:59:39.815792993 +0000 UTC m=+0.043906778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:59:39 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:59:39 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:59:39 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 09:59:39 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:59:39 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 09:59:39 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 09:59:39 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 09:59:39 compute-0 ceph-mon[74249]: pgmap v3211: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:39 compute-0 ceph-mon[74249]: from='client.23187 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:39 compute-0 podman[446245]: 2025-10-14 09:59:39.939434736 +0000 UTC m=+0.167548511 container init 64722d59d5965492ecb1e4feae3c88efb6b65e68fd4021ef5e468edc4e2c4360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 09:59:39 compute-0 podman[446245]: 2025-10-14 09:59:39.948937739 +0000 UTC m=+0.177051494 container start 64722d59d5965492ecb1e4feae3c88efb6b65e68fd4021ef5e468edc4e2c4360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:59:39 compute-0 podman[446245]: 2025-10-14 09:59:39.953429009 +0000 UTC m=+0.181542754 container attach 64722d59d5965492ecb1e4feae3c88efb6b65e68fd4021ef5e468edc4e2c4360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:59:39 compute-0 systemd[1]: libpod-64722d59d5965492ecb1e4feae3c88efb6b65e68fd4021ef5e468edc4e2c4360.scope: Deactivated successfully.
Oct 14 09:59:39 compute-0 magical_chatelet[446304]: 167 167
Oct 14 09:59:39 compute-0 conmon[446304]: conmon 64722d59d5965492ecb1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-64722d59d5965492ecb1e4feae3c88efb6b65e68fd4021ef5e468edc4e2c4360.scope/container/memory.events
Oct 14 09:59:39 compute-0 podman[446245]: 2025-10-14 09:59:39.959471857 +0000 UTC m=+0.187585622 container died 64722d59d5965492ecb1e4feae3c88efb6b65e68fd4021ef5e468edc4e2c4360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatelet, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 09:59:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed0749fdb6138c745ea77cb6f17173ddaed85806f005bc6bcfb025ff92b6a463-merged.mount: Deactivated successfully.
Oct 14 09:59:40 compute-0 podman[446245]: 2025-10-14 09:59:40.017969193 +0000 UTC m=+0.246082938 container remove 64722d59d5965492ecb1e4feae3c88efb6b65e68fd4021ef5e468edc4e2c4360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatelet, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 09:59:40 compute-0 systemd[1]: libpod-conmon-64722d59d5965492ecb1e4feae3c88efb6b65e68fd4021ef5e468edc4e2c4360.scope: Deactivated successfully.
Oct 14 09:59:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 14 09:59:40 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3163630716' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 14 09:59:40 compute-0 podman[446378]: 2025-10-14 09:59:40.222477971 +0000 UTC m=+0.052224173 container create 73c4229dfe5b084bd8621eaf2452e787158bb11b71f5c8ed68db97d933984b9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 09:59:40 compute-0 systemd[1]: Started libpod-conmon-73c4229dfe5b084bd8621eaf2452e787158bb11b71f5c8ed68db97d933984b9d.scope.
Oct 14 09:59:40 compute-0 podman[446378]: 2025-10-14 09:59:40.202393368 +0000 UTC m=+0.032139580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:59:40 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c92e4da959a8245f652c0523e26f250b76ebe32d6cdb44aceeb54e7732378f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c92e4da959a8245f652c0523e26f250b76ebe32d6cdb44aceeb54e7732378f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c92e4da959a8245f652c0523e26f250b76ebe32d6cdb44aceeb54e7732378f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c92e4da959a8245f652c0523e26f250b76ebe32d6cdb44aceeb54e7732378f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c92e4da959a8245f652c0523e26f250b76ebe32d6cdb44aceeb54e7732378f6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 09:59:40 compute-0 podman[446378]: 2025-10-14 09:59:40.337010331 +0000 UTC m=+0.166756543 container init 73c4229dfe5b084bd8621eaf2452e787158bb11b71f5c8ed68db97d933984b9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rhodes, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 09:59:40 compute-0 podman[446378]: 2025-10-14 09:59:40.353410874 +0000 UTC m=+0.183157066 container start 73c4229dfe5b084bd8621eaf2452e787158bb11b71f5c8ed68db97d933984b9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rhodes, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:59:40 compute-0 podman[446378]: 2025-10-14 09:59:40.356773456 +0000 UTC m=+0.186519648 container attach 73c4229dfe5b084bd8621eaf2452e787158bb11b71f5c8ed68db97d933984b9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rhodes, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:59:40 compute-0 nova_compute[259627]: 2025-10-14 09:59:40.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct 14 09:59:40 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159221059' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 14 09:59:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct 14 09:59:40 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4100003868' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 14 09:59:41 compute-0 ceph-mon[74249]: from='client.23189 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 09:59:41 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3163630716' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 14 09:59:41 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2159221059' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 14 09:59:41 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4100003868' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 14 09:59:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:41 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23197 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:41 compute-0 bold_rhodes[446426]: --> passed data devices: 0 physical, 3 LVM
Oct 14 09:59:41 compute-0 bold_rhodes[446426]: --> relative data size: 1.0
Oct 14 09:59:41 compute-0 bold_rhodes[446426]: --> All data devices are unavailable
Oct 14 09:59:41 compute-0 systemd[1]: libpod-73c4229dfe5b084bd8621eaf2452e787158bb11b71f5c8ed68db97d933984b9d.scope: Deactivated successfully.
Oct 14 09:59:41 compute-0 systemd[1]: libpod-73c4229dfe5b084bd8621eaf2452e787158bb11b71f5c8ed68db97d933984b9d.scope: Consumed 1.054s CPU time.
Oct 14 09:59:41 compute-0 podman[446378]: 2025-10-14 09:59:41.5102734 +0000 UTC m=+1.340019592 container died 73c4229dfe5b084bd8621eaf2452e787158bb11b71f5c8ed68db97d933984b9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rhodes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:59:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 14 09:59:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4229744212' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 14 09:59:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c92e4da959a8245f652c0523e26f250b76ebe32d6cdb44aceeb54e7732378f6-merged.mount: Deactivated successfully.
Oct 14 09:59:41 compute-0 podman[446378]: 2025-10-14 09:59:41.909182278 +0000 UTC m=+1.738928470 container remove 73c4229dfe5b084bd8621eaf2452e787158bb11b71f5c8ed68db97d933984b9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:59:41 compute-0 sudo[446095]: pam_unix(sudo:session): session closed for user root
Oct 14 09:59:41 compute-0 systemd[1]: libpod-conmon-73c4229dfe5b084bd8621eaf2452e787158bb11b71f5c8ed68db97d933984b9d.scope: Deactivated successfully.
Oct 14 09:59:42 compute-0 nova_compute[259627]: 2025-10-14 09:59:42.002 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 09:59:42 compute-0 nova_compute[259627]: 2025-10-14 09:59:42.003 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 14 09:59:42 compute-0 nova_compute[259627]: 2025-10-14 09:59:42.025 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 14 09:59:42 compute-0 sudo[446925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:59:42 compute-0 sudo[446925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:59:42 compute-0 sudo[446925]: pam_unix(sudo:session): session closed for user root
Oct 14 09:59:42 compute-0 sudo[446982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:59:42 compute-0 sudo[446982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:59:42 compute-0 sudo[446982]: pam_unix(sudo:session): session closed for user root
Oct 14 09:59:42 compute-0 ceph-mon[74249]: pgmap v3212: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:42 compute-0 ceph-mon[74249]: from='client.23197 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:42 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4229744212' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 14 09:59:42 compute-0 sudo[447021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:59:42 compute-0 sudo[447021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:59:42 compute-0 sudo[447021]: pam_unix(sudo:session): session closed for user root
Oct 14 09:59:42 compute-0 sudo[447061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 09:59:42 compute-0 sudo[447061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:59:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Oct 14 09:59:42 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2083845642' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 14 09:59:42 compute-0 ovs-appctl[447193]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 14 09:59:42 compute-0 ovs-appctl[447204]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 14 09:59:42 compute-0 ovs-appctl[447208]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 14 09:59:42 compute-0 podman[447210]: 2025-10-14 09:59:42.675291376 +0000 UTC m=+0.054617381 container create d4c252ffa8faf3afecc4075c12d8ec9793073a63e2192cdd4ceff63606c08280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kilby, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 09:59:42 compute-0 systemd[1]: Started libpod-conmon-d4c252ffa8faf3afecc4075c12d8ec9793073a63e2192cdd4ceff63606c08280.scope.
Oct 14 09:59:42 compute-0 podman[447210]: 2025-10-14 09:59:42.64934758 +0000 UTC m=+0.028673585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:59:42 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:59:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Oct 14 09:59:42 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2766494622' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 14 09:59:42 compute-0 podman[447210]: 2025-10-14 09:59:42.780842386 +0000 UTC m=+0.160168391 container init d4c252ffa8faf3afecc4075c12d8ec9793073a63e2192cdd4ceff63606c08280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:59:42 compute-0 podman[447210]: 2025-10-14 09:59:42.789700644 +0000 UTC m=+0.169026609 container start d4c252ffa8faf3afecc4075c12d8ec9793073a63e2192cdd4ceff63606c08280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:59:42 compute-0 romantic_kilby[447242]: 167 167
Oct 14 09:59:42 compute-0 systemd[1]: libpod-d4c252ffa8faf3afecc4075c12d8ec9793073a63e2192cdd4ceff63606c08280.scope: Deactivated successfully.
Oct 14 09:59:42 compute-0 podman[447210]: 2025-10-14 09:59:42.794593514 +0000 UTC m=+0.173919509 container attach d4c252ffa8faf3afecc4075c12d8ec9793073a63e2192cdd4ceff63606c08280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kilby, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:59:42 compute-0 conmon[447242]: conmon d4c252ffa8faf3afecc4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d4c252ffa8faf3afecc4075c12d8ec9793073a63e2192cdd4ceff63606c08280.scope/container/memory.events
Oct 14 09:59:42 compute-0 podman[447210]: 2025-10-14 09:59:42.796159632 +0000 UTC m=+0.175485647 container died d4c252ffa8faf3afecc4075c12d8ec9793073a63e2192cdd4ceff63606c08280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:59:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-1aa7329261b8d29a8864ace8f009313252c0333d00420a9ae92c27609015dab4-merged.mount: Deactivated successfully.
Oct 14 09:59:42 compute-0 podman[447210]: 2025-10-14 09:59:42.867604255 +0000 UTC m=+0.246930230 container remove d4c252ffa8faf3afecc4075c12d8ec9793073a63e2192cdd4ceff63606c08280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kilby, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:59:42 compute-0 systemd[1]: libpod-conmon-d4c252ffa8faf3afecc4075c12d8ec9793073a63e2192cdd4ceff63606c08280.scope: Deactivated successfully.
Oct 14 09:59:42 compute-0 podman[447259]: 2025-10-14 09:59:42.923994929 +0000 UTC m=+0.089818035 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible)
Oct 14 09:59:43 compute-0 nova_compute[259627]: 2025-10-14 09:59:43.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:43 compute-0 podman[447272]: 2025-10-14 09:59:43.053060066 +0000 UTC m=+0.210941957 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 14 09:59:43 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2083845642' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 14 09:59:43 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2766494622' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 14 09:59:43 compute-0 podman[447376]: 2025-10-14 09:59:43.16163555 +0000 UTC m=+0.059743907 container create b17d3f8abc3f43b97b469fb326f7c0a719eaf871b866e16e2ea728ddafebd27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 09:59:43 compute-0 systemd[1]: Started libpod-conmon-b17d3f8abc3f43b97b469fb326f7c0a719eaf871b866e16e2ea728ddafebd27c.scope.
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:43 compute-0 podman[447376]: 2025-10-14 09:59:43.139321092 +0000 UTC m=+0.037429459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:59:43 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba5c1eead6ef100573547c0f816cf76979e31c3f23c864a96f5773b49ddf7ab0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba5c1eead6ef100573547c0f816cf76979e31c3f23c864a96f5773b49ddf7ab0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba5c1eead6ef100573547c0f816cf76979e31c3f23c864a96f5773b49ddf7ab0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba5c1eead6ef100573547c0f816cf76979e31c3f23c864a96f5773b49ddf7ab0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:59:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Oct 14 09:59:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3875492698' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 14 09:59:43 compute-0 podman[447376]: 2025-10-14 09:59:43.28432094 +0000 UTC m=+0.182429307 container init b17d3f8abc3f43b97b469fb326f7c0a719eaf871b866e16e2ea728ddafebd27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 09:59:43 compute-0 podman[447376]: 2025-10-14 09:59:43.295873184 +0000 UTC m=+0.193981541 container start b17d3f8abc3f43b97b469fb326f7c0a719eaf871b866e16e2ea728ddafebd27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pare, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 09:59:43 compute-0 podman[447376]: 2025-10-14 09:59:43.300182479 +0000 UTC m=+0.198290846 container attach b17d3f8abc3f43b97b469fb326f7c0a719eaf871b866e16e2ea728ddafebd27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 09:59:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23207 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:43 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]: {
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:     "0": [
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:         {
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "devices": [
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "/dev/loop3"
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             ],
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "lv_name": "ceph_lv0",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "lv_size": "21470642176",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "name": "ceph_lv0",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "tags": {
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.cluster_name": "ceph",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.crush_device_class": "",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.encrypted": "0",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.osd_id": "0",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.type": "block",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.vdo": "0"
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             },
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "type": "block",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "vg_name": "ceph_vg0"
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:         }
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:     ],
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:     "1": [
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:         {
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "devices": [
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "/dev/loop4"
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             ],
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "lv_name": "ceph_lv1",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "lv_size": "21470642176",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "name": "ceph_lv1",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "tags": {
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.cluster_name": "ceph",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.crush_device_class": "",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.encrypted": "0",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.osd_id": "1",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.type": "block",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.vdo": "0"
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             },
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "type": "block",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "vg_name": "ceph_vg1"
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:         }
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:     ],
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:     "2": [
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:         {
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "devices": [
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "/dev/loop5"
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             ],
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "lv_name": "ceph_lv2",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "lv_size": "21470642176",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "name": "ceph_lv2",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "tags": {
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.cluster_name": "ceph",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.crush_device_class": "",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.encrypted": "0",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.osd_id": "2",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.type": "block",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:                 "ceph.vdo": "0"
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             },
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "type": "block",
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:             "vg_name": "ceph_vg2"
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:         }
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]:     ]
Oct 14 09:59:44 compute-0 ecstatic_pare[447414]: }
Oct 14 09:59:44 compute-0 ceph-mon[74249]: pgmap v3213: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:44 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3875492698' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 14 09:59:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Oct 14 09:59:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2246963863' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 14 09:59:44 compute-0 systemd[1]: libpod-b17d3f8abc3f43b97b469fb326f7c0a719eaf871b866e16e2ea728ddafebd27c.scope: Deactivated successfully.
Oct 14 09:59:44 compute-0 podman[447376]: 2025-10-14 09:59:44.194177075 +0000 UTC m=+1.092285432 container died b17d3f8abc3f43b97b469fb326f7c0a719eaf871b866e16e2ea728ddafebd27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pare, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 14 09:59:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba5c1eead6ef100573547c0f816cf76979e31c3f23c864a96f5773b49ddf7ab0-merged.mount: Deactivated successfully.
Oct 14 09:59:44 compute-0 podman[447376]: 2025-10-14 09:59:44.258311148 +0000 UTC m=+1.156419505 container remove b17d3f8abc3f43b97b469fb326f7c0a719eaf871b866e16e2ea728ddafebd27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_pare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:59:44 compute-0 systemd[1]: libpod-conmon-b17d3f8abc3f43b97b469fb326f7c0a719eaf871b866e16e2ea728ddafebd27c.scope: Deactivated successfully.
Oct 14 09:59:44 compute-0 sudo[447061]: pam_unix(sudo:session): session closed for user root
Oct 14 09:59:44 compute-0 sudo[447653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:59:44 compute-0 sudo[447653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:59:44 compute-0 sudo[447653]: pam_unix(sudo:session): session closed for user root
Oct 14 09:59:44 compute-0 sudo[447710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 09:59:44 compute-0 sudo[447710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:59:44 compute-0 sudo[447710]: pam_unix(sudo:session): session closed for user root
Oct 14 09:59:44 compute-0 sudo[447750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:59:44 compute-0 sudo[447750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:59:44 compute-0 sudo[447750]: pam_unix(sudo:session): session closed for user root
Oct 14 09:59:44 compute-0 sudo[447788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 09:59:44 compute-0 sudo[447788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:59:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Oct 14 09:59:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1119221236' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 14 09:59:45 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23213 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:45 compute-0 podman[447944]: 2025-10-14 09:59:45.046311044 +0000 UTC m=+0.049028904 container create dd433a2fa66037bee52a118948862c12cee78087c209e5318f455dd8830e75c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:59:45 compute-0 systemd[1]: Started libpod-conmon-dd433a2fa66037bee52a118948862c12cee78087c209e5318f455dd8830e75c3.scope.
Oct 14 09:59:45 compute-0 podman[447944]: 2025-10-14 09:59:45.023491744 +0000 UTC m=+0.026209634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:59:45 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:59:45 compute-0 podman[447944]: 2025-10-14 09:59:45.139224074 +0000 UTC m=+0.141941944 container init dd433a2fa66037bee52a118948862c12cee78087c209e5318f455dd8830e75c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jackson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 09:59:45 compute-0 podman[447944]: 2025-10-14 09:59:45.15087663 +0000 UTC m=+0.153594490 container start dd433a2fa66037bee52a118948862c12cee78087c209e5318f455dd8830e75c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jackson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:59:45 compute-0 podman[447944]: 2025-10-14 09:59:45.156856056 +0000 UTC m=+0.159573916 container attach dd433a2fa66037bee52a118948862c12cee78087c209e5318f455dd8830e75c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jackson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 09:59:45 compute-0 pedantic_jackson[447984]: 167 167
Oct 14 09:59:45 compute-0 systemd[1]: libpod-dd433a2fa66037bee52a118948862c12cee78087c209e5318f455dd8830e75c3.scope: Deactivated successfully.
Oct 14 09:59:45 compute-0 podman[447944]: 2025-10-14 09:59:45.161516581 +0000 UTC m=+0.164234451 container died dd433a2fa66037bee52a118948862c12cee78087c209e5318f455dd8830e75c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 09:59:45 compute-0 ceph-mon[74249]: from='client.23207 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:45 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2246963863' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 14 09:59:45 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1119221236' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 14 09:59:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6cf0ff1b417e081e6d5cb66e1e2d0b239ca4ca8b66cf878cd8da5c859777387-merged.mount: Deactivated successfully.
Oct 14 09:59:45 compute-0 podman[447944]: 2025-10-14 09:59:45.20589729 +0000 UTC m=+0.208615150 container remove dd433a2fa66037bee52a118948862c12cee78087c209e5318f455dd8830e75c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 09:59:45 compute-0 systemd[1]: libpod-conmon-dd433a2fa66037bee52a118948862c12cee78087c209e5318f455dd8830e75c3.scope: Deactivated successfully.
Oct 14 09:59:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:45 compute-0 podman[448072]: 2025-10-14 09:59:45.413929244 +0000 UTC m=+0.068154583 container create 49b21278f033668c542adef641e53ca402a06c58770d8d73fc204bf4df31f258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:59:45 compute-0 nova_compute[259627]: 2025-10-14 09:59:45.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Oct 14 09:59:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1036755726' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 14 09:59:45 compute-0 systemd[1]: Started libpod-conmon-49b21278f033668c542adef641e53ca402a06c58770d8d73fc204bf4df31f258.scope.
Oct 14 09:59:45 compute-0 podman[448072]: 2025-10-14 09:59:45.381884468 +0000 UTC m=+0.036109787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 09:59:45 compute-0 systemd[1]: Started libcrun container.
Oct 14 09:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c2dbcc2a4bdaff154743a6f351cd134533ec5b3279a52e32a6d191ece61fff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 09:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c2dbcc2a4bdaff154743a6f351cd134533ec5b3279a52e32a6d191ece61fff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 09:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c2dbcc2a4bdaff154743a6f351cd134533ec5b3279a52e32a6d191ece61fff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 09:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c2dbcc2a4bdaff154743a6f351cd134533ec5b3279a52e32a6d191ece61fff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 09:59:45 compute-0 podman[448072]: 2025-10-14 09:59:45.526384604 +0000 UTC m=+0.180609933 container init 49b21278f033668c542adef641e53ca402a06c58770d8d73fc204bf4df31f258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_booth, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:59:45 compute-0 podman[448072]: 2025-10-14 09:59:45.535994729 +0000 UTC m=+0.190220038 container start 49b21278f033668c542adef641e53ca402a06c58770d8d73fc204bf4df31f258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 09:59:45 compute-0 podman[448072]: 2025-10-14 09:59:45.541500324 +0000 UTC m=+0.195725673 container attach 49b21278f033668c542adef641e53ca402a06c58770d8d73fc204bf4df31f258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 09:59:45 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23217 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:46 compute-0 ceph-mon[74249]: from='client.23213 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:46 compute-0 ceph-mon[74249]: pgmap v3214: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:46 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1036755726' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 14 09:59:46 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23219 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:46 compute-0 crazy_booth[448115]: {
Oct 14 09:59:46 compute-0 crazy_booth[448115]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 09:59:46 compute-0 crazy_booth[448115]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:59:46 compute-0 crazy_booth[448115]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 09:59:46 compute-0 crazy_booth[448115]:         "osd_id": 2,
Oct 14 09:59:46 compute-0 crazy_booth[448115]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 09:59:46 compute-0 crazy_booth[448115]:         "type": "bluestore"
Oct 14 09:59:46 compute-0 crazy_booth[448115]:     },
Oct 14 09:59:46 compute-0 crazy_booth[448115]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 09:59:46 compute-0 crazy_booth[448115]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:59:46 compute-0 crazy_booth[448115]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 09:59:46 compute-0 crazy_booth[448115]:         "osd_id": 1,
Oct 14 09:59:46 compute-0 crazy_booth[448115]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 09:59:46 compute-0 crazy_booth[448115]:         "type": "bluestore"
Oct 14 09:59:46 compute-0 crazy_booth[448115]:     },
Oct 14 09:59:46 compute-0 crazy_booth[448115]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 09:59:46 compute-0 crazy_booth[448115]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 09:59:46 compute-0 crazy_booth[448115]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 09:59:46 compute-0 crazy_booth[448115]:         "osd_id": 0,
Oct 14 09:59:46 compute-0 crazy_booth[448115]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 09:59:46 compute-0 crazy_booth[448115]:         "type": "bluestore"
Oct 14 09:59:46 compute-0 crazy_booth[448115]:     }
Oct 14 09:59:46 compute-0 crazy_booth[448115]: }
Oct 14 09:59:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Oct 14 09:59:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3490710516' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 14 09:59:46 compute-0 systemd[1]: libpod-49b21278f033668c542adef641e53ca402a06c58770d8d73fc204bf4df31f258.scope: Deactivated successfully.
Oct 14 09:59:46 compute-0 systemd[1]: libpod-49b21278f033668c542adef641e53ca402a06c58770d8d73fc204bf4df31f258.scope: Consumed 1.061s CPU time.
Oct 14 09:59:46 compute-0 podman[448072]: 2025-10-14 09:59:46.59524032 +0000 UTC m=+1.249465619 container died 49b21278f033668c542adef641e53ca402a06c58770d8d73fc204bf4df31f258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_booth, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 09:59:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5c2dbcc2a4bdaff154743a6f351cd134533ec5b3279a52e32a6d191ece61fff-merged.mount: Deactivated successfully.
Oct 14 09:59:46 compute-0 podman[448072]: 2025-10-14 09:59:46.663162397 +0000 UTC m=+1.317387706 container remove 49b21278f033668c542adef641e53ca402a06c58770d8d73fc204bf4df31f258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 09:59:46 compute-0 systemd[1]: libpod-conmon-49b21278f033668c542adef641e53ca402a06c58770d8d73fc204bf4df31f258.scope: Deactivated successfully.
Oct 14 09:59:46 compute-0 sudo[447788]: pam_unix(sudo:session): session closed for user root
Oct 14 09:59:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 09:59:46 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:59:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 09:59:46 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:59:46 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev b6e8af60-1962-491e-a80d-4022d9d3e924 does not exist
Oct 14 09:59:46 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 64d1b0b3-d2eb-411f-8122-6695a6dde457 does not exist
Oct 14 09:59:46 compute-0 sudo[448437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 09:59:46 compute-0 sudo[448437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:59:46 compute-0 sudo[448437]: pam_unix(sudo:session): session closed for user root
Oct 14 09:59:46 compute-0 sudo[448497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 09:59:46 compute-0 sudo[448497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 09:59:46 compute-0 sudo[448497]: pam_unix(sudo:session): session closed for user root
Oct 14 09:59:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Oct 14 09:59:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2251197333' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 14 09:59:47 compute-0 ceph-mon[74249]: from='client.23217 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:47 compute-0 ceph-mon[74249]: from='client.23219 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:47 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3490710516' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 14 09:59:47 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:59:47 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 09:59:47 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2251197333' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23225 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23227 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 09:59:47 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 09:59:48 compute-0 nova_compute[259627]: 2025-10-14 09:59:48.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 14 09:59:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2671525565' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 14 09:59:48 compute-0 ceph-mon[74249]: pgmap v3215: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:48 compute-0 ceph-mon[74249]: from='client.23225 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:48 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2671525565' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 14 09:59:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:59:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Oct 14 09:59:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3761042929' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 14 09:59:48 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23233 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:49 compute-0 ceph-mon[74249]: from='client.23227 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:49 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3761042929' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 14 09:59:49 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23235 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 14 09:59:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2146290903' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 14 09:59:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Oct 14 09:59:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1623150730' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 14 09:59:50 compute-0 ceph-mon[74249]: from='client.23233 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:50 compute-0 ceph-mon[74249]: pgmap v3216: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:50 compute-0 ceph-mon[74249]: from='client.23235 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 09:59:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2146290903' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 14 09:59:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1623150730' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 14 09:59:50 compute-0 nova_compute[259627]: 2025-10-14 09:59:50.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:52 compute-0 ceph-mon[74249]: pgmap v3217: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:53 compute-0 nova_compute[259627]: 2025-10-14 09:59:53.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:59:54 compute-0 ceph-mon[74249]: pgmap v3218: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:54 compute-0 virtqemud[259351]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 14 09:59:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:55 compute-0 nova_compute[259627]: 2025-10-14 09:59:55.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:56 compute-0 ceph-mon[74249]: pgmap v3219: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:58 compute-0 nova_compute[259627]: 2025-10-14 09:59:58.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 09:59:58 compute-0 systemd[1]: Starting Time & Date Service...
Oct 14 09:59:58 compute-0 systemd[1]: Started Time & Date Service.
Oct 14 09:59:58 compute-0 ceph-mon[74249]: pgmap v3220: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 09:59:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 09:59:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:00 compute-0 ceph-mon[74249]: pgmap v3221: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:00 compute-0 nova_compute[259627]: 2025-10-14 10:00:00.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:01 compute-0 podman[449564]: 2025-10-14 10:00:01.681174694 +0000 UTC m=+0.084226808 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 10:00:01 compute-0 podman[449563]: 2025-10-14 10:00:01.718362626 +0000 UTC m=+0.116680874 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 10:00:02 compute-0 ceph-mon[74249]: pgmap v3222: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:00:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:00:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:00:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:00:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:00:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:00:03 compute-0 nova_compute[259627]: 2025-10-14 10:00:03.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:00:04 compute-0 ceph-mon[74249]: pgmap v3223: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:05 compute-0 nova_compute[259627]: 2025-10-14 10:00:05.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 10:00:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2722467207' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 10:00:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 10:00:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2722467207' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 10:00:06 compute-0 ceph-mon[74249]: pgmap v3224: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2722467207' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 10:00:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2722467207' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 10:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:00:07.077 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:00:07.079 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:00:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:00:07.079 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:00:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:08 compute-0 nova_compute[259627]: 2025-10-14 10:00:08.001 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:00:08 compute-0 nova_compute[259627]: 2025-10-14 10:00:08.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:00:08 compute-0 ceph-mon[74249]: pgmap v3225: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:09 compute-0 ceph-mon[74249]: pgmap v3226: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:10 compute-0 nova_compute[259627]: 2025-10-14 10:00:10.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:10 compute-0 nova_compute[259627]: 2025-10-14 10:00:10.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:00:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:12 compute-0 ceph-mon[74249]: pgmap v3227: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:13 compute-0 nova_compute[259627]: 2025-10-14 10:00:13.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:00:13 compute-0 podman[449603]: 2025-10-14 10:00:13.707882384 +0000 UTC m=+0.106540055 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 10:00:13 compute-0 podman[449602]: 2025-10-14 10:00:13.730043588 +0000 UTC m=+0.138364356 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller)
Oct 14 10:00:13 compute-0 nova_compute[259627]: 2025-10-14 10:00:13.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:00:14 compute-0 nova_compute[259627]: 2025-10-14 10:00:14.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:00:14 compute-0 nova_compute[259627]: 2025-10-14 10:00:14.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:00:14 compute-0 nova_compute[259627]: 2025-10-14 10:00:14.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:00:14 compute-0 nova_compute[259627]: 2025-10-14 10:00:14.008 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 10:00:14 compute-0 nova_compute[259627]: 2025-10-14 10:00:14.009 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 10:00:14 compute-0 ceph-mon[74249]: pgmap v3228: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 10:00:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1409244689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:00:14 compute-0 nova_compute[259627]: 2025-10-14 10:00:14.501 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 10:00:14 compute-0 nova_compute[259627]: 2025-10-14 10:00:14.663 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 10:00:14 compute-0 nova_compute[259627]: 2025-10-14 10:00:14.664 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3389MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 10:00:14 compute-0 nova_compute[259627]: 2025-10-14 10:00:14.665 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:00:14 compute-0 nova_compute[259627]: 2025-10-14 10:00:14.665 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:00:14 compute-0 nova_compute[259627]: 2025-10-14 10:00:14.727 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 10:00:14 compute-0 nova_compute[259627]: 2025-10-14 10:00:14.728 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 10:00:14 compute-0 nova_compute[259627]: 2025-10-14 10:00:14.745 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 14 10:00:14 compute-0 nova_compute[259627]: 2025-10-14 10:00:14.777 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 14 10:00:14 compute-0 nova_compute[259627]: 2025-10-14 10:00:14.778 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 10:00:14 compute-0 nova_compute[259627]: 2025-10-14 10:00:14.791 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 14 10:00:14 compute-0 nova_compute[259627]: 2025-10-14 10:00:14.815 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 14 10:00:14 compute-0 nova_compute[259627]: 2025-10-14 10:00:14.836 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 10:00:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 10:00:15 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1256318261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:00:15 compute-0 nova_compute[259627]: 2025-10-14 10:00:15.327 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 10:00:15 compute-0 nova_compute[259627]: 2025-10-14 10:00:15.334 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 10:00:15 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1409244689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:00:15 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1256318261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:00:15 compute-0 nova_compute[259627]: 2025-10-14 10:00:15.350 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 10:00:15 compute-0 nova_compute[259627]: 2025-10-14 10:00:15.352 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 10:00:15 compute-0 nova_compute[259627]: 2025-10-14 10:00:15.352 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:00:15 compute-0 nova_compute[259627]: 2025-10-14 10:00:15.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:16 compute-0 ceph-mon[74249]: pgmap v3229: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:18 compute-0 nova_compute[259627]: 2025-10-14 10:00:18.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:18 compute-0 nova_compute[259627]: 2025-10-14 10:00:18.348 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:00:18 compute-0 ceph-mon[74249]: pgmap v3230: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:00:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:20 compute-0 ceph-mon[74249]: pgmap v3231: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:20 compute-0 nova_compute[259627]: 2025-10-14 10:00:20.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:21 compute-0 nova_compute[259627]: 2025-10-14 10:00:21.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:00:21 compute-0 nova_compute[259627]: 2025-10-14 10:00:21.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 10:00:21 compute-0 nova_compute[259627]: 2025-10-14 10:00:21.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 10:00:22 compute-0 nova_compute[259627]: 2025-10-14 10:00:22.017 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 10:00:22 compute-0 nova_compute[259627]: 2025-10-14 10:00:22.018 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:00:22 compute-0 nova_compute[259627]: 2025-10-14 10:00:22.018 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:00:22 compute-0 ceph-mon[74249]: pgmap v3232: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:22 compute-0 nova_compute[259627]: 2025-10-14 10:00:22.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:00:22 compute-0 nova_compute[259627]: 2025-10-14 10:00:22.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:00:22 compute-0 nova_compute[259627]: 2025-10-14 10:00:22.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 10:00:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:23 compute-0 nova_compute[259627]: 2025-10-14 10:00:23.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:00:24 compute-0 ceph-mon[74249]: pgmap v3233: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:25 compute-0 nova_compute[259627]: 2025-10-14 10:00:25.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:26 compute-0 ceph-mon[74249]: pgmap v3234: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:27 compute-0 ceph-mon[74249]: pgmap v3235: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:28 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 14 10:00:28 compute-0 nova_compute[259627]: 2025-10-14 10:00:28.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:28 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 14 10:00:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:00:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:30 compute-0 ceph-mon[74249]: pgmap v3236: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:30 compute-0 nova_compute[259627]: 2025-10-14 10:00:30.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:32 compute-0 ceph-mon[74249]: pgmap v3237: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:32 compute-0 podman[449696]: 2025-10-14 10:00:32.695924524 +0000 UTC m=+0.094181161 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid)
Oct 14 10:00:32 compute-0 podman[449695]: 2025-10-14 10:00:32.719415361 +0000 UTC m=+0.119841632 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 10:00:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:00:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:00:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:00:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:00:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:00:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:00:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_10:00:32
Oct 14 10:00:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 10:00:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 10:00:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['images', 'default.rgw.log', 'volumes', 'vms', 'cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta']
Oct 14 10:00:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 10:00:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:33 compute-0 nova_compute[259627]: 2025-10-14 10:00:33.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:00:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 10:00:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 10:00:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 10:00:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 10:00:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 10:00:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 10:00:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 10:00:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 10:00:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 10:00:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 10:00:34 compute-0 ceph-mon[74249]: pgmap v3238: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:35 compute-0 nova_compute[259627]: 2025-10-14 10:00:35.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:36 compute-0 ceph-mon[74249]: pgmap v3239: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:38 compute-0 nova_compute[259627]: 2025-10-14 10:00:38.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:38 compute-0 ceph-mon[74249]: pgmap v3240: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:00:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:39 compute-0 ceph-mon[74249]: pgmap v3241: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:40 compute-0 sudo[441979]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:40 compute-0 sshd-session[441978]: Received disconnect from 192.168.122.10 port 33490:11: disconnected by user
Oct 14 10:00:40 compute-0 sshd-session[441978]: Disconnected from user zuul 192.168.122.10 port 33490
Oct 14 10:00:40 compute-0 sshd-session[441975]: pam_unix(sshd:session): session closed for user zuul
Oct 14 10:00:40 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Oct 14 10:00:40 compute-0 systemd[1]: session-57.scope: Consumed 3min 6.335s CPU time, 1.1G memory peak, read 524.0M from disk, written 339.0M to disk.
Oct 14 10:00:40 compute-0 systemd-logind[799]: Session 57 logged out. Waiting for processes to exit.
Oct 14 10:00:40 compute-0 systemd-logind[799]: Removed session 57.
Oct 14 10:00:40 compute-0 sshd-session[449733]: Accepted publickey for zuul from 192.168.122.10 port 49210 ssh2: ECDSA SHA256:jaGWHGBmEwGLhBs5A5z51rEw7f54kxwV4dpIRk+zLbs
Oct 14 10:00:40 compute-0 nova_compute[259627]: 2025-10-14 10:00:40.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:40 compute-0 systemd-logind[799]: New session 58 of user zuul.
Oct 14 10:00:40 compute-0 systemd[1]: Started Session 58 of User zuul.
Oct 14 10:00:40 compute-0 sshd-session[449733]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 14 10:00:40 compute-0 sudo[449737]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-10-14-gakefhy.tar.xz
Oct 14 10:00:40 compute-0 sudo[449737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 10:00:41 compute-0 sudo[449737]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:41 compute-0 sshd-session[449736]: Received disconnect from 192.168.122.10 port 49210:11: disconnected by user
Oct 14 10:00:41 compute-0 sshd-session[449736]: Disconnected from user zuul 192.168.122.10 port 49210
Oct 14 10:00:41 compute-0 sshd-session[449733]: pam_unix(sshd:session): session closed for user zuul
Oct 14 10:00:41 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Oct 14 10:00:41 compute-0 systemd-logind[799]: Session 58 logged out. Waiting for processes to exit.
Oct 14 10:00:41 compute-0 systemd-logind[799]: Removed session 58.
Oct 14 10:00:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:41 compute-0 sshd-session[449762]: Accepted publickey for zuul from 192.168.122.10 port 49224 ssh2: ECDSA SHA256:jaGWHGBmEwGLhBs5A5z51rEw7f54kxwV4dpIRk+zLbs
Oct 14 10:00:41 compute-0 systemd-logind[799]: New session 59 of user zuul.
Oct 14 10:00:41 compute-0 systemd[1]: Started Session 59 of User zuul.
Oct 14 10:00:41 compute-0 sshd-session[449762]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 14 10:00:41 compute-0 sudo[449766]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Oct 14 10:00:41 compute-0 sudo[449766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 10:00:41 compute-0 sudo[449766]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:41 compute-0 sshd-session[449765]: Received disconnect from 192.168.122.10 port 49224:11: disconnected by user
Oct 14 10:00:41 compute-0 sshd-session[449765]: Disconnected from user zuul 192.168.122.10 port 49224
Oct 14 10:00:41 compute-0 sshd-session[449762]: pam_unix(sshd:session): session closed for user zuul
Oct 14 10:00:41 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Oct 14 10:00:41 compute-0 systemd-logind[799]: Session 59 logged out. Waiting for processes to exit.
Oct 14 10:00:41 compute-0 systemd-logind[799]: Removed session 59.
Oct 14 10:00:42 compute-0 ceph-mon[74249]: pgmap v3242: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:43 compute-0 nova_compute[259627]: 2025-10-14 10:00:43.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:00:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 10:00:44 compute-0 ceph-mon[74249]: pgmap v3243: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:44 compute-0 podman[449792]: 2025-10-14 10:00:44.669168841 +0000 UTC m=+0.078327523 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 10:00:44 compute-0 podman[449791]: 2025-10-14 10:00:44.724251563 +0000 UTC m=+0.136634314 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 10:00:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:45 compute-0 nova_compute[259627]: 2025-10-14 10:00:45.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:46 compute-0 ceph-mon[74249]: pgmap v3244: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:46 compute-0 sudo[449837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:00:47 compute-0 sudo[449837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:00:47 compute-0 sudo[449837]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:47 compute-0 sudo[449862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:00:47 compute-0 sudo[449862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:00:47 compute-0 sudo[449862]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:47 compute-0 sudo[449887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:00:47 compute-0 sudo[449887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:00:47 compute-0 sudo[449887]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:47 compute-0 sudo[449912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 10:00:47 compute-0 sudo[449912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:00:48 compute-0 sudo[449912]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 10:00:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:00:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 10:00:48 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 10:00:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 10:00:48 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:00:48 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 52f96b99-546b-4cc8-908f-2337e0f1898b does not exist
Oct 14 10:00:48 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 0d6b2782-04f6-40b2-873e-c73218a50aa1 does not exist
Oct 14 10:00:48 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 409463ab-3234-4fe6-8e72-05b9b88f5bf2 does not exist
Oct 14 10:00:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 10:00:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 10:00:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 10:00:48 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 10:00:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 10:00:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:00:48 compute-0 sudo[449968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:00:48 compute-0 sudo[449968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:00:48 compute-0 sudo[449968]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:48 compute-0 sudo[449993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:00:48 compute-0 sudo[449993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:00:48 compute-0 sudo[449993]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:48 compute-0 ceph-mon[74249]: pgmap v3245: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:48 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:00:48 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 10:00:48 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:00:48 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 10:00:48 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 10:00:48 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:00:48 compute-0 sudo[450018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:00:48 compute-0 sudo[450018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:00:48 compute-0 sudo[450018]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:48 compute-0 nova_compute[259627]: 2025-10-14 10:00:48.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:48 compute-0 sudo[450043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 10:00:48 compute-0 sudo[450043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:00:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:00:49 compute-0 podman[450109]: 2025-10-14 10:00:49.007133461 +0000 UTC m=+0.084162726 container create 03427b70193634255dc91aaaf06f4f5407eda81b7d4a38cb887d5ad02827a75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 10:00:49 compute-0 podman[450109]: 2025-10-14 10:00:48.970368639 +0000 UTC m=+0.047397974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:00:49 compute-0 systemd[1]: Started libpod-conmon-03427b70193634255dc91aaaf06f4f5407eda81b7d4a38cb887d5ad02827a75d.scope.
Oct 14 10:00:49 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:00:49 compute-0 podman[450109]: 2025-10-14 10:00:49.140679478 +0000 UTC m=+0.217708793 container init 03427b70193634255dc91aaaf06f4f5407eda81b7d4a38cb887d5ad02827a75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lewin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 10:00:49 compute-0 podman[450109]: 2025-10-14 10:00:49.15582522 +0000 UTC m=+0.232854485 container start 03427b70193634255dc91aaaf06f4f5407eda81b7d4a38cb887d5ad02827a75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 10:00:49 compute-0 podman[450109]: 2025-10-14 10:00:49.161514879 +0000 UTC m=+0.238544194 container attach 03427b70193634255dc91aaaf06f4f5407eda81b7d4a38cb887d5ad02827a75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 10:00:49 compute-0 bold_lewin[450126]: 167 167
Oct 14 10:00:49 compute-0 systemd[1]: libpod-03427b70193634255dc91aaaf06f4f5407eda81b7d4a38cb887d5ad02827a75d.scope: Deactivated successfully.
Oct 14 10:00:49 compute-0 conmon[450126]: conmon 03427b70193634255dc9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-03427b70193634255dc91aaaf06f4f5407eda81b7d4a38cb887d5ad02827a75d.scope/container/memory.events
Oct 14 10:00:49 compute-0 podman[450109]: 2025-10-14 10:00:49.167955747 +0000 UTC m=+0.244984992 container died 03427b70193634255dc91aaaf06f4f5407eda81b7d4a38cb887d5ad02827a75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lewin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 10:00:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b82d3a9d642643b72e7477eb17bb7e97fb23165ec518a4002f527c476655b931-merged.mount: Deactivated successfully.
Oct 14 10:00:49 compute-0 podman[450109]: 2025-10-14 10:00:49.22797271 +0000 UTC m=+0.305001975 container remove 03427b70193634255dc91aaaf06f4f5407eda81b7d4a38cb887d5ad02827a75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 14 10:00:49 compute-0 systemd[1]: libpod-conmon-03427b70193634255dc91aaaf06f4f5407eda81b7d4a38cb887d5ad02827a75d.scope: Deactivated successfully.
Oct 14 10:00:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:49 compute-0 podman[450150]: 2025-10-14 10:00:49.530048572 +0000 UTC m=+0.081147332 container create e96055bfbc7c3428f074cb28eb3a47ede6b4e642bdb530aa4c67ca43b4c2d2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 10:00:49 compute-0 podman[450150]: 2025-10-14 10:00:49.495697079 +0000 UTC m=+0.046795899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:00:49 compute-0 systemd[1]: Started libpod-conmon-e96055bfbc7c3428f074cb28eb3a47ede6b4e642bdb530aa4c67ca43b4c2d2c8.scope.
Oct 14 10:00:49 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d63a8a1092f9d114a81921185d179b24d1103f272160bd2a59d4350c8c358120/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 10:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d63a8a1092f9d114a81921185d179b24d1103f272160bd2a59d4350c8c358120/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 10:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d63a8a1092f9d114a81921185d179b24d1103f272160bd2a59d4350c8c358120/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 10:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d63a8a1092f9d114a81921185d179b24d1103f272160bd2a59d4350c8c358120/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 10:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d63a8a1092f9d114a81921185d179b24d1103f272160bd2a59d4350c8c358120/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 10:00:49 compute-0 podman[450150]: 2025-10-14 10:00:49.687024594 +0000 UTC m=+0.238123354 container init e96055bfbc7c3428f074cb28eb3a47ede6b4e642bdb530aa4c67ca43b4c2d2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_joliot, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 14 10:00:49 compute-0 podman[450150]: 2025-10-14 10:00:49.695670516 +0000 UTC m=+0.246769236 container start e96055bfbc7c3428f074cb28eb3a47ede6b4e642bdb530aa4c67ca43b4c2d2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_joliot, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 10:00:49 compute-0 podman[450150]: 2025-10-14 10:00:49.699866499 +0000 UTC m=+0.250965259 container attach e96055bfbc7c3428f074cb28eb3a47ede6b4e642bdb530aa4c67ca43b4c2d2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:00:50 compute-0 ceph-mon[74249]: pgmap v3246: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:50 compute-0 nova_compute[259627]: 2025-10-14 10:00:50.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:50 compute-0 wonderful_joliot[450167]: --> passed data devices: 0 physical, 3 LVM
Oct 14 10:00:50 compute-0 wonderful_joliot[450167]: --> relative data size: 1.0
Oct 14 10:00:50 compute-0 wonderful_joliot[450167]: --> All data devices are unavailable
Oct 14 10:00:51 compute-0 systemd[1]: libpod-e96055bfbc7c3428f074cb28eb3a47ede6b4e642bdb530aa4c67ca43b4c2d2c8.scope: Deactivated successfully.
Oct 14 10:00:51 compute-0 systemd[1]: libpod-e96055bfbc7c3428f074cb28eb3a47ede6b4e642bdb530aa4c67ca43b4c2d2c8.scope: Consumed 1.263s CPU time.
Oct 14 10:00:51 compute-0 podman[450150]: 2025-10-14 10:00:51.006159242 +0000 UTC m=+1.557258062 container died e96055bfbc7c3428f074cb28eb3a47ede6b4e642bdb530aa4c67ca43b4c2d2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_joliot, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 10:00:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d63a8a1092f9d114a81921185d179b24d1103f272160bd2a59d4350c8c358120-merged.mount: Deactivated successfully.
Oct 14 10:00:51 compute-0 podman[450150]: 2025-10-14 10:00:51.077835271 +0000 UTC m=+1.628933991 container remove e96055bfbc7c3428f074cb28eb3a47ede6b4e642bdb530aa4c67ca43b4c2d2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:00:51 compute-0 systemd[1]: libpod-conmon-e96055bfbc7c3428f074cb28eb3a47ede6b4e642bdb530aa4c67ca43b4c2d2c8.scope: Deactivated successfully.
Oct 14 10:00:51 compute-0 sudo[450043]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:51 compute-0 sudo[450211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:00:51 compute-0 sudo[450211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:00:51 compute-0 sudo[450211]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:51 compute-0 sudo[450236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:00:51 compute-0 sudo[450236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:00:51 compute-0 sudo[450236]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:51 compute-0 sudo[450261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:00:51 compute-0 sudo[450261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:00:51 compute-0 sudo[450261]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:51 compute-0 sudo[450286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 10:00:51 compute-0 sudo[450286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:00:52 compute-0 podman[450352]: 2025-10-14 10:00:52.051215834 +0000 UTC m=+0.074571651 container create a719becd1aa9dc2cdce90133f8cfcee823829cbcb3ffc27008b54d9b9c57f501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hopper, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 10:00:52 compute-0 podman[450352]: 2025-10-14 10:00:52.022067839 +0000 UTC m=+0.045423716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:00:52 compute-0 systemd[1]: Started libpod-conmon-a719becd1aa9dc2cdce90133f8cfcee823829cbcb3ffc27008b54d9b9c57f501.scope.
Oct 14 10:00:52 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:00:52 compute-0 podman[450352]: 2025-10-14 10:00:52.185402286 +0000 UTC m=+0.208758173 container init a719becd1aa9dc2cdce90133f8cfcee823829cbcb3ffc27008b54d9b9c57f501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hopper, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 10:00:52 compute-0 podman[450352]: 2025-10-14 10:00:52.20062815 +0000 UTC m=+0.223983977 container start a719becd1aa9dc2cdce90133f8cfcee823829cbcb3ffc27008b54d9b9c57f501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hopper, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 10:00:52 compute-0 podman[450352]: 2025-10-14 10:00:52.204660299 +0000 UTC m=+0.228016166 container attach a719becd1aa9dc2cdce90133f8cfcee823829cbcb3ffc27008b54d9b9c57f501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hopper, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:00:52 compute-0 angry_hopper[450369]: 167 167
Oct 14 10:00:52 compute-0 podman[450352]: 2025-10-14 10:00:52.211166069 +0000 UTC m=+0.234521866 container died a719becd1aa9dc2cdce90133f8cfcee823829cbcb3ffc27008b54d9b9c57f501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hopper, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 10:00:52 compute-0 systemd[1]: libpod-a719becd1aa9dc2cdce90133f8cfcee823829cbcb3ffc27008b54d9b9c57f501.scope: Deactivated successfully.
Oct 14 10:00:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-aca2af0f9377491eccff256d35632780bd8053342bf1ac70253d854390e92957-merged.mount: Deactivated successfully.
Oct 14 10:00:52 compute-0 podman[450352]: 2025-10-14 10:00:52.26133804 +0000 UTC m=+0.284693867 container remove a719becd1aa9dc2cdce90133f8cfcee823829cbcb3ffc27008b54d9b9c57f501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hopper, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 10:00:52 compute-0 systemd[1]: libpod-conmon-a719becd1aa9dc2cdce90133f8cfcee823829cbcb3ffc27008b54d9b9c57f501.scope: Deactivated successfully.
Oct 14 10:00:52 compute-0 ceph-mon[74249]: pgmap v3247: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:52 compute-0 podman[450393]: 2025-10-14 10:00:52.542352225 +0000 UTC m=+0.079959263 container create 732db63f0ef7cc6b1f22a8f87684f148bfef093b8ef618dd5cd0119a21581882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shamir, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 10:00:52 compute-0 podman[450393]: 2025-10-14 10:00:52.509921429 +0000 UTC m=+0.047528527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:00:52 compute-0 systemd[1]: Started libpod-conmon-732db63f0ef7cc6b1f22a8f87684f148bfef093b8ef618dd5cd0119a21581882.scope.
Oct 14 10:00:52 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413b21f93084804ea825231c423f3d1fe206ac934de9bdb4ce888ac4405d52fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 10:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413b21f93084804ea825231c423f3d1fe206ac934de9bdb4ce888ac4405d52fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 10:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413b21f93084804ea825231c423f3d1fe206ac934de9bdb4ce888ac4405d52fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 10:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413b21f93084804ea825231c423f3d1fe206ac934de9bdb4ce888ac4405d52fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 10:00:52 compute-0 podman[450393]: 2025-10-14 10:00:52.682206407 +0000 UTC m=+0.219813485 container init 732db63f0ef7cc6b1f22a8f87684f148bfef093b8ef618dd5cd0119a21581882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shamir, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 10:00:52 compute-0 podman[450393]: 2025-10-14 10:00:52.698174968 +0000 UTC m=+0.235782006 container start 732db63f0ef7cc6b1f22a8f87684f148bfef093b8ef618dd5cd0119a21581882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shamir, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 10:00:52 compute-0 podman[450393]: 2025-10-14 10:00:52.701962251 +0000 UTC m=+0.239569329 container attach 732db63f0ef7cc6b1f22a8f87684f148bfef093b8ef618dd5cd0119a21581882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 10:00:52 compute-0 nova_compute[259627]: 2025-10-14 10:00:52.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:00:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:53 compute-0 nova_compute[259627]: 2025-10-14 10:00:53.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:00:53 compute-0 elastic_shamir[450409]: {
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:     "0": [
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:         {
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "devices": [
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "/dev/loop3"
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             ],
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "lv_name": "ceph_lv0",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "lv_size": "21470642176",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "name": "ceph_lv0",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "tags": {
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.cluster_name": "ceph",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.crush_device_class": "",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.encrypted": "0",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.osd_id": "0",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.type": "block",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.vdo": "0"
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             },
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "type": "block",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "vg_name": "ceph_vg0"
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:         }
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:     ],
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:     "1": [
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:         {
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "devices": [
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "/dev/loop4"
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             ],
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "lv_name": "ceph_lv1",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "lv_size": "21470642176",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "name": "ceph_lv1",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "tags": {
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.cluster_name": "ceph",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.crush_device_class": "",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.encrypted": "0",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.osd_id": "1",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.type": "block",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.vdo": "0"
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             },
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "type": "block",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "vg_name": "ceph_vg1"
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:         }
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:     ],
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:     "2": [
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:         {
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "devices": [
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "/dev/loop5"
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             ],
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "lv_name": "ceph_lv2",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "lv_size": "21470642176",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "name": "ceph_lv2",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "tags": {
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.cluster_name": "ceph",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.crush_device_class": "",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.encrypted": "0",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.osd_id": "2",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.type": "block",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:                 "ceph.vdo": "0"
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             },
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "type": "block",
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:             "vg_name": "ceph_vg2"
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:         }
Oct 14 10:00:53 compute-0 elastic_shamir[450409]:     ]
Oct 14 10:00:53 compute-0 elastic_shamir[450409]: }
Oct 14 10:00:53 compute-0 systemd[1]: libpod-732db63f0ef7cc6b1f22a8f87684f148bfef093b8ef618dd5cd0119a21581882.scope: Deactivated successfully.
Oct 14 10:00:53 compute-0 podman[450393]: 2025-10-14 10:00:53.593423475 +0000 UTC m=+1.131030543 container died 732db63f0ef7cc6b1f22a8f87684f148bfef093b8ef618dd5cd0119a21581882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shamir, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:00:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-413b21f93084804ea825231c423f3d1fe206ac934de9bdb4ce888ac4405d52fb-merged.mount: Deactivated successfully.
Oct 14 10:00:53 compute-0 podman[450393]: 2025-10-14 10:00:53.676465573 +0000 UTC m=+1.214072591 container remove 732db63f0ef7cc6b1f22a8f87684f148bfef093b8ef618dd5cd0119a21581882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 10:00:53 compute-0 systemd[1]: libpod-conmon-732db63f0ef7cc6b1f22a8f87684f148bfef093b8ef618dd5cd0119a21581882.scope: Deactivated successfully.
Oct 14 10:00:53 compute-0 sudo[450286]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:53 compute-0 sudo[450432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:00:53 compute-0 sudo[450432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:00:53 compute-0 sudo[450432]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:53 compute-0 sudo[450457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:00:53 compute-0 sudo[450457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:00:53 compute-0 sudo[450457]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:54 compute-0 sudo[450482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:00:54 compute-0 sudo[450482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:00:54 compute-0 sudo[450482]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:54 compute-0 sudo[450507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 10:00:54 compute-0 sudo[450507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:00:54 compute-0 ceph-mon[74249]: pgmap v3248: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:54 compute-0 podman[450573]: 2025-10-14 10:00:54.708600509 +0000 UTC m=+0.065221681 container create c25e74081868bc84891d9c9929f986a9687c3bd6eacf625c835898b917fcb5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 10:00:54 compute-0 systemd[1]: Started libpod-conmon-c25e74081868bc84891d9c9929f986a9687c3bd6eacf625c835898b917fcb5f8.scope.
Oct 14 10:00:54 compute-0 podman[450573]: 2025-10-14 10:00:54.6833785 +0000 UTC m=+0.039999752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:00:54 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:00:54 compute-0 podman[450573]: 2025-10-14 10:00:54.808224223 +0000 UTC m=+0.164845475 container init c25e74081868bc84891d9c9929f986a9687c3bd6eacf625c835898b917fcb5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hamilton, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 10:00:54 compute-0 podman[450573]: 2025-10-14 10:00:54.819402518 +0000 UTC m=+0.176023720 container start c25e74081868bc84891d9c9929f986a9687c3bd6eacf625c835898b917fcb5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 10:00:54 compute-0 podman[450573]: 2025-10-14 10:00:54.82480388 +0000 UTC m=+0.181425122 container attach c25e74081868bc84891d9c9929f986a9687c3bd6eacf625c835898b917fcb5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 10:00:54 compute-0 inspiring_hamilton[450590]: 167 167
Oct 14 10:00:54 compute-0 systemd[1]: libpod-c25e74081868bc84891d9c9929f986a9687c3bd6eacf625c835898b917fcb5f8.scope: Deactivated successfully.
Oct 14 10:00:54 compute-0 podman[450573]: 2025-10-14 10:00:54.828241365 +0000 UTC m=+0.184862567 container died c25e74081868bc84891d9c9929f986a9687c3bd6eacf625c835898b917fcb5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hamilton, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 10:00:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-04002ef7ead6b8cb44257235d796874e34217ece62f37e4be26388f795c13091-merged.mount: Deactivated successfully.
Oct 14 10:00:54 compute-0 podman[450573]: 2025-10-14 10:00:54.920220571 +0000 UTC m=+0.276841753 container remove c25e74081868bc84891d9c9929f986a9687c3bd6eacf625c835898b917fcb5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hamilton, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 14 10:00:54 compute-0 systemd[1]: libpod-conmon-c25e74081868bc84891d9c9929f986a9687c3bd6eacf625c835898b917fcb5f8.scope: Deactivated successfully.
Oct 14 10:00:55 compute-0 podman[450615]: 2025-10-14 10:00:55.199893893 +0000 UTC m=+0.079702437 container create f6b979057d48d75be4a79593aa5e3b9de145b64cc9641f7c929a27dc760bb20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 10:00:55 compute-0 systemd[1]: Started libpod-conmon-f6b979057d48d75be4a79593aa5e3b9de145b64cc9641f7c929a27dc760bb20c.scope.
Oct 14 10:00:55 compute-0 podman[450615]: 2025-10-14 10:00:55.167738054 +0000 UTC m=+0.047546648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:00:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:55 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e972d0eca4ae006f2a3d29a261e84cbe57a5a61a1f0c47d6ef38ecdfd64d9622/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 10:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e972d0eca4ae006f2a3d29a261e84cbe57a5a61a1f0c47d6ef38ecdfd64d9622/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 10:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e972d0eca4ae006f2a3d29a261e84cbe57a5a61a1f0c47d6ef38ecdfd64d9622/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 10:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e972d0eca4ae006f2a3d29a261e84cbe57a5a61a1f0c47d6ef38ecdfd64d9622/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 10:00:55 compute-0 podman[450615]: 2025-10-14 10:00:55.344731147 +0000 UTC m=+0.224539741 container init f6b979057d48d75be4a79593aa5e3b9de145b64cc9641f7c929a27dc760bb20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 10:00:55 compute-0 podman[450615]: 2025-10-14 10:00:55.358562266 +0000 UTC m=+0.238370780 container start f6b979057d48d75be4a79593aa5e3b9de145b64cc9641f7c929a27dc760bb20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 10:00:55 compute-0 podman[450615]: 2025-10-14 10:00:55.36358559 +0000 UTC m=+0.243394134 container attach f6b979057d48d75be4a79593aa5e3b9de145b64cc9641f7c929a27dc760bb20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_boyd, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 10:00:55 compute-0 nova_compute[259627]: 2025-10-14 10:00:55.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:56 compute-0 ceph-mon[74249]: pgmap v3249: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]: {
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:         "osd_id": 2,
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:         "type": "bluestore"
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:     },
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:         "osd_id": 1,
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:         "type": "bluestore"
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:     },
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:         "osd_id": 0,
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:         "type": "bluestore"
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]:     }
Oct 14 10:00:56 compute-0 vigorous_boyd[450632]: }
Oct 14 10:00:56 compute-0 systemd[1]: libpod-f6b979057d48d75be4a79593aa5e3b9de145b64cc9641f7c929a27dc760bb20c.scope: Deactivated successfully.
Oct 14 10:00:56 compute-0 systemd[1]: libpod-f6b979057d48d75be4a79593aa5e3b9de145b64cc9641f7c929a27dc760bb20c.scope: Consumed 1.283s CPU time.
Oct 14 10:00:56 compute-0 podman[450615]: 2025-10-14 10:00:56.633232593 +0000 UTC m=+1.513041137 container died f6b979057d48d75be4a79593aa5e3b9de145b64cc9641f7c929a27dc760bb20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_boyd, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 10:00:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-e972d0eca4ae006f2a3d29a261e84cbe57a5a61a1f0c47d6ef38ecdfd64d9622-merged.mount: Deactivated successfully.
Oct 14 10:00:56 compute-0 podman[450615]: 2025-10-14 10:00:56.709229088 +0000 UTC m=+1.589037602 container remove f6b979057d48d75be4a79593aa5e3b9de145b64cc9641f7c929a27dc760bb20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_boyd, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 10:00:56 compute-0 systemd[1]: libpod-conmon-f6b979057d48d75be4a79593aa5e3b9de145b64cc9641f7c929a27dc760bb20c.scope: Deactivated successfully.
Oct 14 10:00:56 compute-0 sudo[450507]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 10:00:56 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:00:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 10:00:56 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:00:56 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 474794b8-ac11-43b3-a70d-b52fa01ac472 does not exist
Oct 14 10:00:56 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 6f93f7e7-a0b9-46cc-8c17-6784ad1e17f3 does not exist
Oct 14 10:00:56 compute-0 sudo[450677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:00:56 compute-0 sudo[450677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:00:56 compute-0 sudo[450677]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:56 compute-0 sudo[450702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 10:00:56 compute-0 sudo[450702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:00:56 compute-0 sudo[450702]: pam_unix(sudo:session): session closed for user root
Oct 14 10:00:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:57 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:00:57 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:00:57 compute-0 ceph-mon[74249]: pgmap v3250: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:00:58 compute-0 nova_compute[259627]: 2025-10-14 10:00:58.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:00:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:00:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:00 compute-0 ceph-mon[74249]: pgmap v3251: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:00 compute-0 nova_compute[259627]: 2025-10-14 10:01:00.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:01 compute-0 CROND[450728]: (root) CMD (run-parts /etc/cron.hourly)
Oct 14 10:01:02 compute-0 run-parts[450731]: (/etc/cron.hourly) starting 0anacron
Oct 14 10:01:02 compute-0 run-parts[450737]: (/etc/cron.hourly) finished 0anacron
Oct 14 10:01:02 compute-0 CROND[450727]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 14 10:01:02 compute-0 ceph-mon[74249]: pgmap v3252: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:01:02.518113) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436062518145, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1629, "num_deletes": 251, "total_data_size": 2389977, "memory_usage": 2436848, "flush_reason": "Manual Compaction"}
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436062530081, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 2343859, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66549, "largest_seqno": 68177, "table_properties": {"data_size": 2336220, "index_size": 4452, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17438, "raw_average_key_size": 20, "raw_value_size": 2320341, "raw_average_value_size": 2762, "num_data_blocks": 198, "num_entries": 840, "num_filter_entries": 840, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760435913, "oldest_key_time": 1760435913, "file_creation_time": 1760436062, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 11999 microseconds, and 5684 cpu microseconds.
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:01:02.530114) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 2343859 bytes OK
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:01:02.530132) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:01:02.531967) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:01:02.531993) EVENT_LOG_v1 {"time_micros": 1760436062531985, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:01:02.532041) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 2382619, prev total WAL file size 2382619, number of live WAL files 2.
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:01:02.533323) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(2288KB)], [158(10MB)]
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436062533365, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 12950699, "oldest_snapshot_seqno": -1}
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8668 keys, 11198628 bytes, temperature: kUnknown
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436062584421, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11198628, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11141998, "index_size": 33861, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21701, "raw_key_size": 227394, "raw_average_key_size": 26, "raw_value_size": 10988431, "raw_average_value_size": 1267, "num_data_blocks": 1314, "num_entries": 8668, "num_filter_entries": 8668, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760436062, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:01:02.584698) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11198628 bytes
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:01:02.586781) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 253.3 rd, 219.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 10.1 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(10.3) write-amplify(4.8) OK, records in: 9182, records dropped: 514 output_compression: NoCompression
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:01:02.586798) EVENT_LOG_v1 {"time_micros": 1760436062586789, "job": 98, "event": "compaction_finished", "compaction_time_micros": 51133, "compaction_time_cpu_micros": 24937, "output_level": 6, "num_output_files": 1, "total_output_size": 11198628, "num_input_records": 9182, "num_output_records": 8668, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436062587252, "job": 98, "event": "table_file_deletion", "file_number": 160}
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436062588785, "job": 98, "event": "table_file_deletion", "file_number": 158}
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:01:02.533261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:01:02.588878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:01:02.588885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:01:02.588887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:01:02.588888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:01:02 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:01:02.588889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:01:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:01:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:01:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:01:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:01:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:01:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:01:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:01:03 compute-0 nova_compute[259627]: 2025-10-14 10:01:03.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:03 compute-0 ceph-mon[74249]: pgmap v3253: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:03 compute-0 podman[450738]: 2025-10-14 10:01:03.67753591 +0000 UTC m=+0.080030945 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 10:01:03 compute-0 podman[450739]: 2025-10-14 10:01:03.69711532 +0000 UTC m=+0.086976435 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 10:01:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:05 compute-0 nova_compute[259627]: 2025-10-14 10:01:05.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 10:01:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2328037491' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 10:01:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 10:01:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2328037491' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 10:01:06 compute-0 ceph-mon[74249]: pgmap v3254: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2328037491' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 10:01:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/2328037491' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 10:01:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:01:07.079 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:01:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:01:07.080 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:01:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:01:07.081 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:01:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:08 compute-0 ceph-mon[74249]: pgmap v3255: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:01:08 compute-0 nova_compute[259627]: 2025-10-14 10:01:08.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:09 compute-0 nova_compute[259627]: 2025-10-14 10:01:09.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:01:10 compute-0 ceph-mon[74249]: pgmap v3256: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:10 compute-0 nova_compute[259627]: 2025-10-14 10:01:10.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:11 compute-0 nova_compute[259627]: 2025-10-14 10:01:11.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:01:12 compute-0 ceph-mon[74249]: pgmap v3257: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:01:13 compute-0 nova_compute[259627]: 2025-10-14 10:01:13.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:13 compute-0 nova_compute[259627]: 2025-10-14 10:01:13.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:01:14 compute-0 nova_compute[259627]: 2025-10-14 10:01:14.018 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:01:14 compute-0 nova_compute[259627]: 2025-10-14 10:01:14.019 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:01:14 compute-0 nova_compute[259627]: 2025-10-14 10:01:14.019 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:01:14 compute-0 nova_compute[259627]: 2025-10-14 10:01:14.020 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 10:01:14 compute-0 nova_compute[259627]: 2025-10-14 10:01:14.020 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 10:01:14 compute-0 ceph-mon[74249]: pgmap v3258: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 10:01:14 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1905028034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:01:14 compute-0 nova_compute[259627]: 2025-10-14 10:01:14.591 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 10:01:14 compute-0 nova_compute[259627]: 2025-10-14 10:01:14.827 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 10:01:14 compute-0 nova_compute[259627]: 2025-10-14 10:01:14.828 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3543MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 10:01:14 compute-0 nova_compute[259627]: 2025-10-14 10:01:14.829 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:01:14 compute-0 nova_compute[259627]: 2025-10-14 10:01:14.829 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:01:14 compute-0 nova_compute[259627]: 2025-10-14 10:01:14.941 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 10:01:14 compute-0 nova_compute[259627]: 2025-10-14 10:01:14.941 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 10:01:14 compute-0 nova_compute[259627]: 2025-10-14 10:01:14.957 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 10:01:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:15 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1905028034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:01:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 10:01:15 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/441080747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:01:15 compute-0 nova_compute[259627]: 2025-10-14 10:01:15.480 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 10:01:15 compute-0 nova_compute[259627]: 2025-10-14 10:01:15.489 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 10:01:15 compute-0 nova_compute[259627]: 2025-10-14 10:01:15.515 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 10:01:15 compute-0 nova_compute[259627]: 2025-10-14 10:01:15.519 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 10:01:15 compute-0 nova_compute[259627]: 2025-10-14 10:01:15.520 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:01:15 compute-0 systemd[1]: Starting dnf makecache...
Oct 14 10:01:15 compute-0 podman[450821]: 2025-10-14 10:01:15.694619874 +0000 UTC m=+0.088096483 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 14 10:01:15 compute-0 podman[450820]: 2025-10-14 10:01:15.756551303 +0000 UTC m=+0.149157010 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 10:01:15 compute-0 dnf[450822]: Metadata cache refreshed recently.
Oct 14 10:01:15 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 14 10:01:15 compute-0 systemd[1]: Finished dnf makecache.
Oct 14 10:01:15 compute-0 nova_compute[259627]: 2025-10-14 10:01:15.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:16 compute-0 ceph-mon[74249]: pgmap v3259: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:16 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/441080747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:01:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:18 compute-0 ceph-mon[74249]: pgmap v3260: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:01:18 compute-0 nova_compute[259627]: 2025-10-14 10:01:18.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:20 compute-0 ceph-mon[74249]: pgmap v3261: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:20 compute-0 nova_compute[259627]: 2025-10-14 10:01:20.515 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:01:20 compute-0 nova_compute[259627]: 2025-10-14 10:01:20.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:21 compute-0 nova_compute[259627]: 2025-10-14 10:01:21.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:01:22 compute-0 ceph-mon[74249]: pgmap v3262: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:01:23 compute-0 nova_compute[259627]: 2025-10-14 10:01:23.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:23 compute-0 nova_compute[259627]: 2025-10-14 10:01:23.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:01:23 compute-0 nova_compute[259627]: 2025-10-14 10:01:23.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 10:01:23 compute-0 nova_compute[259627]: 2025-10-14 10:01:23.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 10:01:24 compute-0 nova_compute[259627]: 2025-10-14 10:01:23.999 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 10:01:24 compute-0 nova_compute[259627]: 2025-10-14 10:01:23.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:01:24 compute-0 nova_compute[259627]: 2025-10-14 10:01:24.000 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:01:24 compute-0 nova_compute[259627]: 2025-10-14 10:01:24.000 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 10:01:24 compute-0 ceph-mon[74249]: pgmap v3263: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:24 compute-0 nova_compute[259627]: 2025-10-14 10:01:24.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:01:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:25 compute-0 nova_compute[259627]: 2025-10-14 10:01:25.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:26 compute-0 ceph-mon[74249]: pgmap v3264: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:28 compute-0 ceph-mon[74249]: pgmap v3265: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:01:28 compute-0 nova_compute[259627]: 2025-10-14 10:01:28.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:30 compute-0 ceph-mon[74249]: pgmap v3266: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:30 compute-0 nova_compute[259627]: 2025-10-14 10:01:30.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:32 compute-0 ceph-mon[74249]: pgmap v3267: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:01:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:01:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:01:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:01:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:01:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:01:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_10:01:32
Oct 14 10:01:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 10:01:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 10:01:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'backups', 'vms', 'default.rgw.control', 'default.rgw.log', '.mgr', 'images']
Oct 14 10:01:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 10:01:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:01:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 10:01:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 10:01:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 10:01:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 10:01:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 10:01:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 10:01:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 10:01:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 10:01:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 10:01:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 10:01:33 compute-0 nova_compute[259627]: 2025-10-14 10:01:33.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:34 compute-0 ceph-mon[74249]: pgmap v3268: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:34 compute-0 podman[450867]: 2025-10-14 10:01:34.687582751 +0000 UTC m=+0.086766420 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3)
Oct 14 10:01:34 compute-0 podman[450866]: 2025-10-14 10:01:34.688839142 +0000 UTC m=+0.092381548 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 10:01:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:35 compute-0 nova_compute[259627]: 2025-10-14 10:01:35.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:36 compute-0 ceph-mon[74249]: pgmap v3269: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:01:38 compute-0 ceph-mon[74249]: pgmap v3270: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:38 compute-0 nova_compute[259627]: 2025-10-14 10:01:38.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:40 compute-0 ceph-mon[74249]: pgmap v3271: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:40 compute-0 nova_compute[259627]: 2025-10-14 10:01:40.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:42 compute-0 ceph-mon[74249]: pgmap v3272: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:01:43 compute-0 ceph-mon[74249]: pgmap v3273: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:43 compute-0 nova_compute[259627]: 2025-10-14 10:01:43.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:01:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 10:01:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:45 compute-0 nova_compute[259627]: 2025-10-14 10:01:45.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:46 compute-0 ceph-mon[74249]: pgmap v3274: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:46 compute-0 podman[450906]: 2025-10-14 10:01:46.675655045 +0000 UTC m=+0.072742406 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 14 10:01:46 compute-0 podman[450905]: 2025-10-14 10:01:46.727899597 +0000 UTC m=+0.140299634 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller)
Oct 14 10:01:46 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 10:01:46 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 14K writes, 68K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1362 writes, 6239 keys, 1362 commit groups, 1.0 writes per commit group, ingest: 8.85 MB, 0.01 MB/s
                                           Interval WAL: 1362 writes, 1362 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     94.4      0.89              0.34        49    0.018       0      0       0.0       0.0
                                             L6      1/0   10.68 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.9    176.0    149.2      2.74              1.49        48    0.057    319K    25K       0.0       0.0
                                            Sum      1/0   10.68 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.9    132.8    135.8      3.63              1.83        97    0.037    319K    25K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.7     97.5     98.8      0.59              0.23        10    0.059     44K   2571       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    176.0    149.2      2.74              1.49        48    0.057    319K    25K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     95.2      0.88              0.34        48    0.018       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.082, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.48 GB write, 0.08 MB/s write, 0.47 GB read, 0.08 MB/s read, 3.6 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5646f3b2b1f0#2 capacity: 304.00 MB usage: 56.62 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000628 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3945,54.31 MB,17.8637%) FilterBlock(98,889.30 KB,0.285676%) IndexBlock(98,1.45 MB,0.475783%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 14 10:01:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:48 compute-0 ceph-mon[74249]: pgmap v3275: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:01:48 compute-0 nova_compute[259627]: 2025-10-14 10:01:48.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:50 compute-0 ceph-mon[74249]: pgmap v3276: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:50 compute-0 rsyslogd[1002]: imjournal: 15302 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 14 10:01:51 compute-0 nova_compute[259627]: 2025-10-14 10:01:51.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:52 compute-0 ceph-mon[74249]: pgmap v3277: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:01:53 compute-0 nova_compute[259627]: 2025-10-14 10:01:53.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:54 compute-0 ceph-mon[74249]: pgmap v3278: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:56 compute-0 nova_compute[259627]: 2025-10-14 10:01:56.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:56 compute-0 ceph-mon[74249]: pgmap v3279: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:57 compute-0 sudo[450952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:01:57 compute-0 sudo[450952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:01:57 compute-0 sudo[450952]: pam_unix(sudo:session): session closed for user root
Oct 14 10:01:57 compute-0 sudo[450977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:01:57 compute-0 sudo[450977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:01:57 compute-0 sudo[450977]: pam_unix(sudo:session): session closed for user root
Oct 14 10:01:57 compute-0 sudo[451002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:01:57 compute-0 sudo[451002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:01:57 compute-0 sudo[451002]: pam_unix(sudo:session): session closed for user root
Oct 14 10:01:57 compute-0 sudo[451027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 10:01:57 compute-0 sudo[451027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:01:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:57 compute-0 sudo[451027]: pam_unix(sudo:session): session closed for user root
Oct 14 10:01:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 10:01:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:01:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 10:01:57 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 10:01:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 10:01:57 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:01:57 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 78b288b5-6d3d-4dba-a291-2b06a7f18466 does not exist
Oct 14 10:01:57 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev c48f0050-eaa2-4ac5-9059-ad8213e0dd57 does not exist
Oct 14 10:01:57 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 013a0ff2-75fc-40d8-ba89-2fe0d2dd36b9 does not exist
Oct 14 10:01:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 10:01:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 10:01:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 10:01:57 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 10:01:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 10:01:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:01:57 compute-0 sudo[451084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:01:57 compute-0 sudo[451084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:01:57 compute-0 sudo[451084]: pam_unix(sudo:session): session closed for user root
Oct 14 10:01:58 compute-0 sudo[451109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:01:58 compute-0 sudo[451109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:01:58 compute-0 sudo[451109]: pam_unix(sudo:session): session closed for user root
Oct 14 10:01:58 compute-0 sudo[451134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:01:58 compute-0 sudo[451134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:01:58 compute-0 sudo[451134]: pam_unix(sudo:session): session closed for user root
Oct 14 10:01:58 compute-0 sudo[451159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 10:01:58 compute-0 sudo[451159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:01:58 compute-0 ceph-mon[74249]: pgmap v3280: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:01:58 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:01:58 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 10:01:58 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:01:58 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 10:01:58 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 10:01:58 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:01:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:01:58 compute-0 podman[451225]: 2025-10-14 10:01:58.525004445 +0000 UTC m=+0.039252834 container create cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 10:01:58 compute-0 systemd[1]: Started libpod-conmon-cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e.scope.
Oct 14 10:01:58 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:01:58 compute-0 podman[451225]: 2025-10-14 10:01:58.507199888 +0000 UTC m=+0.021448307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:01:58 compute-0 podman[451225]: 2025-10-14 10:01:58.616270315 +0000 UTC m=+0.130518724 container init cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 10:01:58 compute-0 podman[451225]: 2025-10-14 10:01:58.623786689 +0000 UTC m=+0.138035108 container start cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 10:01:58 compute-0 podman[451225]: 2025-10-14 10:01:58.627995812 +0000 UTC m=+0.142244231 container attach cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 10:01:58 compute-0 admiring_davinci[451241]: 167 167
Oct 14 10:01:58 compute-0 systemd[1]: libpod-cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e.scope: Deactivated successfully.
Oct 14 10:01:58 compute-0 conmon[451241]: conmon cfbecb2a3c337e314e66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e.scope/container/memory.events
Oct 14 10:01:58 compute-0 podman[451225]: 2025-10-14 10:01:58.634738978 +0000 UTC m=+0.148987357 container died cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 10:01:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-444ede3b47554d8af513dd1a3efe1aa333c4f631b8d49957834e38ce01fc3a39-merged.mount: Deactivated successfully.
Oct 14 10:01:58 compute-0 nova_compute[259627]: 2025-10-14 10:01:58.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:01:58 compute-0 podman[451225]: 2025-10-14 10:01:58.684762405 +0000 UTC m=+0.199010784 container remove cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 10:01:58 compute-0 systemd[1]: libpod-conmon-cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e.scope: Deactivated successfully.
Oct 14 10:01:58 compute-0 podman[451266]: 2025-10-14 10:01:58.880415796 +0000 UTC m=+0.047505827 container create 4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 10:01:58 compute-0 systemd[1]: Started libpod-conmon-4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86.scope.
Oct 14 10:01:58 compute-0 podman[451266]: 2025-10-14 10:01:58.858250722 +0000 UTC m=+0.025340743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:01:58 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:01:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176fe31ed43f0f4f138357f67334a00e36f6e602fc04d0995e0b17b1ed336018/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 10:01:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176fe31ed43f0f4f138357f67334a00e36f6e602fc04d0995e0b17b1ed336018/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 10:01:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176fe31ed43f0f4f138357f67334a00e36f6e602fc04d0995e0b17b1ed336018/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 10:01:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176fe31ed43f0f4f138357f67334a00e36f6e602fc04d0995e0b17b1ed336018/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 10:01:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176fe31ed43f0f4f138357f67334a00e36f6e602fc04d0995e0b17b1ed336018/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 10:01:59 compute-0 podman[451266]: 2025-10-14 10:01:59.003399404 +0000 UTC m=+0.170489445 container init 4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 10:01:59 compute-0 podman[451266]: 2025-10-14 10:01:59.016640059 +0000 UTC m=+0.183730100 container start 4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:01:59 compute-0 podman[451266]: 2025-10-14 10:01:59.020538434 +0000 UTC m=+0.187628525 container attach 4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 10:01:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:00 compute-0 cool_hawking[451282]: --> passed data devices: 0 physical, 3 LVM
Oct 14 10:02:00 compute-0 cool_hawking[451282]: --> relative data size: 1.0
Oct 14 10:02:00 compute-0 cool_hawking[451282]: --> All data devices are unavailable
Oct 14 10:02:00 compute-0 systemd[1]: libpod-4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86.scope: Deactivated successfully.
Oct 14 10:02:00 compute-0 conmon[451282]: conmon 4e8f5c3883e7b32a0b15 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86.scope/container/memory.events
Oct 14 10:02:00 compute-0 systemd[1]: libpod-4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86.scope: Consumed 1.128s CPU time.
Oct 14 10:02:00 compute-0 podman[451311]: 2025-10-14 10:02:00.237386041 +0000 UTC m=+0.030541220 container died 4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 10:02:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-176fe31ed43f0f4f138357f67334a00e36f6e602fc04d0995e0b17b1ed336018-merged.mount: Deactivated successfully.
Oct 14 10:02:00 compute-0 podman[451311]: 2025-10-14 10:02:00.297323422 +0000 UTC m=+0.090478611 container remove 4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 10:02:00 compute-0 systemd[1]: libpod-conmon-4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86.scope: Deactivated successfully.
Oct 14 10:02:00 compute-0 sudo[451159]: pam_unix(sudo:session): session closed for user root
Oct 14 10:02:00 compute-0 ceph-mon[74249]: pgmap v3281: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:00 compute-0 sudo[451326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:02:00 compute-0 sudo[451326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:02:00 compute-0 sudo[451326]: pam_unix(sudo:session): session closed for user root
Oct 14 10:02:00 compute-0 sudo[451351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:02:00 compute-0 sudo[451351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:02:00 compute-0 sudo[451351]: pam_unix(sudo:session): session closed for user root
Oct 14 10:02:00 compute-0 sudo[451376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:02:00 compute-0 sudo[451376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:02:00 compute-0 sudo[451376]: pam_unix(sudo:session): session closed for user root
Oct 14 10:02:00 compute-0 sudo[451401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 10:02:00 compute-0 sudo[451401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:02:01 compute-0 nova_compute[259627]: 2025-10-14 10:02:01.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:01 compute-0 podman[451467]: 2025-10-14 10:02:01.242533415 +0000 UTC m=+0.061112061 container create e7861d551e5d02817be5275c20a1a50b5ca0ec554ede55b3ce90db6ca8ea9dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 10:02:01 compute-0 systemd[1]: Started libpod-conmon-e7861d551e5d02817be5275c20a1a50b5ca0ec554ede55b3ce90db6ca8ea9dbf.scope.
Oct 14 10:02:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:01 compute-0 podman[451467]: 2025-10-14 10:02:01.222978715 +0000 UTC m=+0.041557401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:02:01 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:02:01 compute-0 podman[451467]: 2025-10-14 10:02:01.341312179 +0000 UTC m=+0.159890865 container init e7861d551e5d02817be5275c20a1a50b5ca0ec554ede55b3ce90db6ca8ea9dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 10:02:01 compute-0 podman[451467]: 2025-10-14 10:02:01.354300798 +0000 UTC m=+0.172879454 container start e7861d551e5d02817be5275c20a1a50b5ca0ec554ede55b3ce90db6ca8ea9dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_spence, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 14 10:02:01 compute-0 podman[451467]: 2025-10-14 10:02:01.357915726 +0000 UTC m=+0.176494412 container attach e7861d551e5d02817be5275c20a1a50b5ca0ec554ede55b3ce90db6ca8ea9dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 10:02:01 compute-0 tender_spence[451483]: 167 167
Oct 14 10:02:01 compute-0 systemd[1]: libpod-e7861d551e5d02817be5275c20a1a50b5ca0ec554ede55b3ce90db6ca8ea9dbf.scope: Deactivated successfully.
Oct 14 10:02:01 compute-0 podman[451467]: 2025-10-14 10:02:01.364004466 +0000 UTC m=+0.182583112 container died e7861d551e5d02817be5275c20a1a50b5ca0ec554ede55b3ce90db6ca8ea9dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_spence, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 10:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-59c5d8ed88f951f2832a5be149523f643eccec82db4c98eb060da5607b3beb3c-merged.mount: Deactivated successfully.
Oct 14 10:02:01 compute-0 podman[451467]: 2025-10-14 10:02:01.40533718 +0000 UTC m=+0.223915826 container remove e7861d551e5d02817be5275c20a1a50b5ca0ec554ede55b3ce90db6ca8ea9dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 10:02:01 compute-0 systemd[1]: libpod-conmon-e7861d551e5d02817be5275c20a1a50b5ca0ec554ede55b3ce90db6ca8ea9dbf.scope: Deactivated successfully.
Oct 14 10:02:01 compute-0 podman[451507]: 2025-10-14 10:02:01.600850077 +0000 UTC m=+0.061107260 container create 07bcfc695d688a9a0feaf4636db4bc38904fb3ef2570a99734edd69cdff2538d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_torvalds, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 10:02:01 compute-0 systemd[1]: Started libpod-conmon-07bcfc695d688a9a0feaf4636db4bc38904fb3ef2570a99734edd69cdff2538d.scope.
Oct 14 10:02:01 compute-0 podman[451507]: 2025-10-14 10:02:01.571590369 +0000 UTC m=+0.031847592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:02:01 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa045b10148767a8f93b2cfd226fff1f166b764d570ca3d5e7641bb93cf16581/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 10:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa045b10148767a8f93b2cfd226fff1f166b764d570ca3d5e7641bb93cf16581/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 10:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa045b10148767a8f93b2cfd226fff1f166b764d570ca3d5e7641bb93cf16581/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 10:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa045b10148767a8f93b2cfd226fff1f166b764d570ca3d5e7641bb93cf16581/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 10:02:01 compute-0 podman[451507]: 2025-10-14 10:02:01.718420722 +0000 UTC m=+0.178677895 container init 07bcfc695d688a9a0feaf4636db4bc38904fb3ef2570a99734edd69cdff2538d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_torvalds, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 10:02:01 compute-0 podman[451507]: 2025-10-14 10:02:01.732292602 +0000 UTC m=+0.192549785 container start 07bcfc695d688a9a0feaf4636db4bc38904fb3ef2570a99734edd69cdff2538d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_torvalds, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 10:02:01 compute-0 podman[451507]: 2025-10-14 10:02:01.736721301 +0000 UTC m=+0.196978474 container attach 07bcfc695d688a9a0feaf4636db4bc38904fb3ef2570a99734edd69cdff2538d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_torvalds, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:02:02 compute-0 ceph-mon[74249]: pgmap v3282: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]: {
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:     "0": [
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:         {
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "devices": [
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "/dev/loop3"
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             ],
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "lv_name": "ceph_lv0",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "lv_size": "21470642176",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "name": "ceph_lv0",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "tags": {
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.cluster_name": "ceph",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.crush_device_class": "",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.encrypted": "0",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.osd_id": "0",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.type": "block",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.vdo": "0"
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             },
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "type": "block",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "vg_name": "ceph_vg0"
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:         }
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:     ],
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:     "1": [
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:         {
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "devices": [
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "/dev/loop4"
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             ],
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "lv_name": "ceph_lv1",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "lv_size": "21470642176",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "name": "ceph_lv1",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "tags": {
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.cluster_name": "ceph",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.crush_device_class": "",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.encrypted": "0",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.osd_id": "1",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.type": "block",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.vdo": "0"
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             },
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "type": "block",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "vg_name": "ceph_vg1"
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:         }
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:     ],
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:     "2": [
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:         {
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "devices": [
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "/dev/loop5"
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             ],
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "lv_name": "ceph_lv2",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "lv_size": "21470642176",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "name": "ceph_lv2",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "tags": {
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.cluster_name": "ceph",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.crush_device_class": "",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.encrypted": "0",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.osd_id": "2",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.type": "block",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:                 "ceph.vdo": "0"
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             },
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "type": "block",
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:             "vg_name": "ceph_vg2"
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:         }
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]:     ]
Oct 14 10:02:02 compute-0 quizzical_torvalds[451524]: }
Oct 14 10:02:02 compute-0 systemd[1]: libpod-07bcfc695d688a9a0feaf4636db4bc38904fb3ef2570a99734edd69cdff2538d.scope: Deactivated successfully.
Oct 14 10:02:02 compute-0 podman[451507]: 2025-10-14 10:02:02.640515768 +0000 UTC m=+1.100772951 container died 07bcfc695d688a9a0feaf4636db4bc38904fb3ef2570a99734edd69cdff2538d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_torvalds, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 10:02:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa045b10148767a8f93b2cfd226fff1f166b764d570ca3d5e7641bb93cf16581-merged.mount: Deactivated successfully.
Oct 14 10:02:02 compute-0 podman[451507]: 2025-10-14 10:02:02.732498625 +0000 UTC m=+1.192755768 container remove 07bcfc695d688a9a0feaf4636db4bc38904fb3ef2570a99734edd69cdff2538d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_torvalds, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 10:02:02 compute-0 systemd[1]: libpod-conmon-07bcfc695d688a9a0feaf4636db4bc38904fb3ef2570a99734edd69cdff2538d.scope: Deactivated successfully.
Oct 14 10:02:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:02:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:02:02 compute-0 sudo[451401]: pam_unix(sudo:session): session closed for user root
Oct 14 10:02:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:02:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:02:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:02:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:02:02 compute-0 sudo[451545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:02:02 compute-0 sudo[451545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:02:02 compute-0 sudo[451545]: pam_unix(sudo:session): session closed for user root
Oct 14 10:02:02 compute-0 sudo[451570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:02:02 compute-0 sudo[451570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:02:02 compute-0 sudo[451570]: pam_unix(sudo:session): session closed for user root
Oct 14 10:02:03 compute-0 sudo[451595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:02:03 compute-0 sudo[451595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:02:03 compute-0 sudo[451595]: pam_unix(sudo:session): session closed for user root
Oct 14 10:02:03 compute-0 sudo[451620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 10:02:03 compute-0 sudo[451620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:02:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:02:03 compute-0 podman[451687]: 2025-10-14 10:02:03.638549416 +0000 UTC m=+0.073190387 container create fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 10:02:03 compute-0 podman[451687]: 2025-10-14 10:02:03.609626246 +0000 UTC m=+0.044267307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:02:03 compute-0 nova_compute[259627]: 2025-10-14 10:02:03.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:03 compute-0 systemd[1]: Started libpod-conmon-fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6.scope.
Oct 14 10:02:03 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:02:03 compute-0 podman[451687]: 2025-10-14 10:02:03.80211755 +0000 UTC m=+0.236758541 container init fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_almeida, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:02:03 compute-0 podman[451687]: 2025-10-14 10:02:03.811176112 +0000 UTC m=+0.245817083 container start fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_almeida, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 10:02:03 compute-0 podman[451687]: 2025-10-14 10:02:03.815200011 +0000 UTC m=+0.249841002 container attach fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 10:02:03 compute-0 admiring_almeida[451703]: 167 167
Oct 14 10:02:03 compute-0 systemd[1]: libpod-fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6.scope: Deactivated successfully.
Oct 14 10:02:03 compute-0 conmon[451703]: conmon fc376c4bfe430ea77ef1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6.scope/container/memory.events
Oct 14 10:02:03 compute-0 podman[451687]: 2025-10-14 10:02:03.820666625 +0000 UTC m=+0.255307606 container died fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 10:02:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a03d8a018a6f431e6bb8a77f8830b23d6cff34532932ba57f1ab907b4c3e14c4-merged.mount: Deactivated successfully.
Oct 14 10:02:03 compute-0 podman[451687]: 2025-10-14 10:02:03.85754607 +0000 UTC m=+0.292187051 container remove fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:02:03 compute-0 systemd[1]: libpod-conmon-fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6.scope: Deactivated successfully.
Oct 14 10:02:04 compute-0 podman[451726]: 2025-10-14 10:02:04.064344674 +0000 UTC m=+0.058727832 container create 66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 10:02:04 compute-0 systemd[1]: Started libpod-conmon-66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23.scope.
Oct 14 10:02:04 compute-0 podman[451726]: 2025-10-14 10:02:04.038061159 +0000 UTC m=+0.032444367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:02:04 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa3bc4db660e09dd9bb52c7f0b923731c2c8af5c9225eca5115b6d8ed4d166/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 10:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa3bc4db660e09dd9bb52c7f0b923731c2c8af5c9225eca5115b6d8ed4d166/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 10:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa3bc4db660e09dd9bb52c7f0b923731c2c8af5c9225eca5115b6d8ed4d166/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 10:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa3bc4db660e09dd9bb52c7f0b923731c2c8af5c9225eca5115b6d8ed4d166/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 10:02:04 compute-0 podman[451726]: 2025-10-14 10:02:04.166528451 +0000 UTC m=+0.160911629 container init 66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:02:04 compute-0 podman[451726]: 2025-10-14 10:02:04.180957225 +0000 UTC m=+0.175340383 container start 66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 10:02:04 compute-0 podman[451726]: 2025-10-14 10:02:04.184618865 +0000 UTC m=+0.179002043 container attach 66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nightingale, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 10:02:04 compute-0 ceph-mon[74249]: pgmap v3283: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:05 compute-0 great_nightingale[451743]: {
Oct 14 10:02:05 compute-0 great_nightingale[451743]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 10:02:05 compute-0 great_nightingale[451743]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:02:05 compute-0 great_nightingale[451743]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 10:02:05 compute-0 great_nightingale[451743]:         "osd_id": 2,
Oct 14 10:02:05 compute-0 great_nightingale[451743]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 10:02:05 compute-0 great_nightingale[451743]:         "type": "bluestore"
Oct 14 10:02:05 compute-0 great_nightingale[451743]:     },
Oct 14 10:02:05 compute-0 great_nightingale[451743]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 10:02:05 compute-0 great_nightingale[451743]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:02:05 compute-0 great_nightingale[451743]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 10:02:05 compute-0 great_nightingale[451743]:         "osd_id": 1,
Oct 14 10:02:05 compute-0 great_nightingale[451743]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 10:02:05 compute-0 great_nightingale[451743]:         "type": "bluestore"
Oct 14 10:02:05 compute-0 great_nightingale[451743]:     },
Oct 14 10:02:05 compute-0 great_nightingale[451743]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 10:02:05 compute-0 great_nightingale[451743]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:02:05 compute-0 great_nightingale[451743]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 10:02:05 compute-0 great_nightingale[451743]:         "osd_id": 0,
Oct 14 10:02:05 compute-0 great_nightingale[451743]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 10:02:05 compute-0 great_nightingale[451743]:         "type": "bluestore"
Oct 14 10:02:05 compute-0 great_nightingale[451743]:     }
Oct 14 10:02:05 compute-0 great_nightingale[451743]: }
Oct 14 10:02:05 compute-0 systemd[1]: libpod-66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23.scope: Deactivated successfully.
Oct 14 10:02:05 compute-0 podman[451726]: 2025-10-14 10:02:05.249784852 +0000 UTC m=+1.244168020 container died 66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 10:02:05 compute-0 systemd[1]: libpod-66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23.scope: Consumed 1.058s CPU time.
Oct 14 10:02:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-56aa3bc4db660e09dd9bb52c7f0b923731c2c8af5c9225eca5115b6d8ed4d166-merged.mount: Deactivated successfully.
Oct 14 10:02:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:05 compute-0 podman[451726]: 2025-10-14 10:02:05.336057418 +0000 UTC m=+1.330440596 container remove 66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nightingale, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:02:05 compute-0 systemd[1]: libpod-conmon-66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23.scope: Deactivated successfully.
Oct 14 10:02:05 compute-0 sudo[451620]: pam_unix(sudo:session): session closed for user root
Oct 14 10:02:05 compute-0 podman[451788]: 2025-10-14 10:02:05.389655374 +0000 UTC m=+0.087654442 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 10:02:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 10:02:05 compute-0 podman[451777]: 2025-10-14 10:02:05.394269357 +0000 UTC m=+0.090780619 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 10:02:05 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:02:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 10:02:05 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:02:05 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 4f8f1723-f33f-41c6-bc06-2ec6e87b1c3d does not exist
Oct 14 10:02:05 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev a39f3f38-1e74-404f-9efb-f5594aa40129 does not exist
Oct 14 10:02:05 compute-0 sudo[451825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:02:05 compute-0 sudo[451825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:02:05 compute-0 sudo[451825]: pam_unix(sudo:session): session closed for user root
Oct 14 10:02:05 compute-0 sudo[451850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 10:02:05 compute-0 sudo[451850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:02:05 compute-0 sudo[451850]: pam_unix(sudo:session): session closed for user root
Oct 14 10:02:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 10:02:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/276389487' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 10:02:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 10:02:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/276389487' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 10:02:06 compute-0 nova_compute[259627]: 2025-10-14 10:02:06.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:06 compute-0 ceph-mon[74249]: pgmap v3284: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:06 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:02:06 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:02:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/276389487' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 10:02:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/276389487' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 10:02:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:02:07.080 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:02:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:02:07.081 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:02:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:02:07.081 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:02:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:08 compute-0 ceph-mon[74249]: pgmap v3285: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:02:08 compute-0 nova_compute[259627]: 2025-10-14 10:02:08.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:09 compute-0 nova_compute[259627]: 2025-10-14 10:02:09.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:02:10 compute-0 ceph-mon[74249]: pgmap v3286: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:11 compute-0 nova_compute[259627]: 2025-10-14 10:02:11.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:12 compute-0 ceph-mon[74249]: pgmap v3287: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:12 compute-0 nova_compute[259627]: 2025-10-14 10:02:12.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:02:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:02:13 compute-0 nova_compute[259627]: 2025-10-14 10:02:13.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:14 compute-0 ceph-mon[74249]: pgmap v3288: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:15 compute-0 nova_compute[259627]: 2025-10-14 10:02:15.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:02:16 compute-0 nova_compute[259627]: 2025-10-14 10:02:16.006 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:02:16 compute-0 nova_compute[259627]: 2025-10-14 10:02:16.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:02:16 compute-0 nova_compute[259627]: 2025-10-14 10:02:16.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:02:16 compute-0 nova_compute[259627]: 2025-10-14 10:02:16.007 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 10:02:16 compute-0 nova_compute[259627]: 2025-10-14 10:02:16.008 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 10:02:16 compute-0 nova_compute[259627]: 2025-10-14 10:02:16.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 10:02:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2381954951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:02:16 compute-0 ceph-mon[74249]: pgmap v3289: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:16 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2381954951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:02:16 compute-0 nova_compute[259627]: 2025-10-14 10:02:16.458 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 10:02:16 compute-0 nova_compute[259627]: 2025-10-14 10:02:16.655 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 10:02:16 compute-0 nova_compute[259627]: 2025-10-14 10:02:16.656 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3524MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 10:02:16 compute-0 nova_compute[259627]: 2025-10-14 10:02:16.656 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:02:16 compute-0 nova_compute[259627]: 2025-10-14 10:02:16.657 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:02:16 compute-0 nova_compute[259627]: 2025-10-14 10:02:16.761 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 10:02:16 compute-0 nova_compute[259627]: 2025-10-14 10:02:16.762 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 10:02:16 compute-0 nova_compute[259627]: 2025-10-14 10:02:16.973 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 10:02:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 10:02:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3699986227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:02:17 compute-0 nova_compute[259627]: 2025-10-14 10:02:17.435 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 10:02:17 compute-0 nova_compute[259627]: 2025-10-14 10:02:17.442 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 10:02:17 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3699986227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:02:17 compute-0 podman[451920]: 2025-10-14 10:02:17.667787912 +0000 UTC m=+0.067436376 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 10:02:17 compute-0 podman[451919]: 2025-10-14 10:02:17.733808612 +0000 UTC m=+0.129909489 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, tcib_managed=true)
Oct 14 10:02:17 compute-0 nova_compute[259627]: 2025-10-14 10:02:17.753 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 10:02:17 compute-0 nova_compute[259627]: 2025-10-14 10:02:17.756 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 10:02:17 compute-0 nova_compute[259627]: 2025-10-14 10:02:17.756 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:02:18 compute-0 ceph-mon[74249]: pgmap v3290: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:02:18 compute-0 nova_compute[259627]: 2025-10-14 10:02:18.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:20 compute-0 ceph-mon[74249]: pgmap v3291: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:21 compute-0 nova_compute[259627]: 2025-10-14 10:02:21.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:22 compute-0 ceph-mon[74249]: pgmap v3292: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:22 compute-0 nova_compute[259627]: 2025-10-14 10:02:22.752 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:02:22 compute-0 nova_compute[259627]: 2025-10-14 10:02:22.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:02:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:02:23 compute-0 nova_compute[259627]: 2025-10-14 10:02:23.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:24 compute-0 ceph-mon[74249]: pgmap v3293: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:24 compute-0 nova_compute[259627]: 2025-10-14 10:02:24.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:02:24 compute-0 nova_compute[259627]: 2025-10-14 10:02:24.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:02:24 compute-0 nova_compute[259627]: 2025-10-14 10:02:24.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:02:24 compute-0 nova_compute[259627]: 2025-10-14 10:02:24.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 10:02:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:25 compute-0 nova_compute[259627]: 2025-10-14 10:02:25.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:02:25 compute-0 nova_compute[259627]: 2025-10-14 10:02:25.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 10:02:25 compute-0 nova_compute[259627]: 2025-10-14 10:02:25.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 10:02:26 compute-0 nova_compute[259627]: 2025-10-14 10:02:26.020 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 10:02:26 compute-0 nova_compute[259627]: 2025-10-14 10:02:26.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:26 compute-0 ceph-mon[74249]: pgmap v3294: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:02:28 compute-0 ceph-mon[74249]: pgmap v3295: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:28 compute-0 nova_compute[259627]: 2025-10-14 10:02:28.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:30 compute-0 ceph-mon[74249]: pgmap v3296: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:31 compute-0 nova_compute[259627]: 2025-10-14 10:02:31.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:31 compute-0 ceph-mon[74249]: pgmap v3297: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:02:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:02:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:02:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:02:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:02:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:02:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_10:02:32
Oct 14 10:02:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 10:02:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 10:02:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'images', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.log', 'vms', '.mgr']
Oct 14 10:02:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 10:02:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 10:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 10:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 10:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 10:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 10:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 10:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 10:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 10:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 10:02:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 10:02:33 compute-0 nova_compute[259627]: 2025-10-14 10:02:33.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:34 compute-0 ceph-mon[74249]: pgmap v3298: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:35 compute-0 podman[451963]: 2025-10-14 10:02:35.676005799 +0000 UTC m=+0.085375506 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 14 10:02:35 compute-0 podman[451964]: 2025-10-14 10:02:35.685245986 +0000 UTC m=+0.087708953 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 14 10:02:36 compute-0 nova_compute[259627]: 2025-10-14 10:02:36.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:36 compute-0 ceph-mon[74249]: pgmap v3299: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:36 compute-0 nova_compute[259627]: 2025-10-14 10:02:36.740 2 DEBUG oslo_concurrency.processutils [None req-e515abeb-a726-484c-83b4-384859372b90 e3794087b3fd4f06a6c8b885f25679b1 ecc47810d1fb409dbea633329126389e - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 10:02:36 compute-0 nova_compute[259627]: 2025-10-14 10:02:36.814 2 DEBUG oslo_concurrency.processutils [None req-e515abeb-a726-484c-83b4-384859372b90 e3794087b3fd4f06a6c8b885f25679b1 ecc47810d1fb409dbea633329126389e - - default default] CMD "env LANG=C uptime" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 10:02:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:38 compute-0 ceph-mon[74249]: pgmap v3300: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:02:38 compute-0 nova_compute[259627]: 2025-10-14 10:02:38.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:40 compute-0 ceph-mon[74249]: pgmap v3301: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:41 compute-0 nova_compute[259627]: 2025-10-14 10:02:41.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:42 compute-0 ceph-mon[74249]: pgmap v3302: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:02:43 compute-0 nova_compute[259627]: 2025-10-14 10:02:43.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:02:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 10:02:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:02:44.066 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 10:02:44 compute-0 nova_compute[259627]: 2025-10-14 10:02:44.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:44 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:02:44.068 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 10:02:44 compute-0 ceph-mon[74249]: pgmap v3303: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:46 compute-0 nova_compute[259627]: 2025-10-14 10:02:46.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:46 compute-0 ceph-mon[74249]: pgmap v3304: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:48 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:02:48.071 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 10:02:48 compute-0 ceph-mon[74249]: pgmap v3305: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:02:48 compute-0 podman[452006]: 2025-10-14 10:02:48.654983594 +0000 UTC m=+0.060732441 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 10:02:48 compute-0 podman[452005]: 2025-10-14 10:02:48.695140029 +0000 UTC m=+0.104225768 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 10:02:48 compute-0 nova_compute[259627]: 2025-10-14 10:02:48.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:50 compute-0 ceph-mon[74249]: pgmap v3306: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:51 compute-0 nova_compute[259627]: 2025-10-14 10:02:51.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:52 compute-0 ceph-mon[74249]: pgmap v3307: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:02:53 compute-0 nova_compute[259627]: 2025-10-14 10:02:53.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:54 compute-0 ceph-mon[74249]: pgmap v3308: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:56 compute-0 nova_compute[259627]: 2025-10-14 10:02:56.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:56 compute-0 ceph-mon[74249]: pgmap v3309: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:57 compute-0 nova_compute[259627]: 2025-10-14 10:02:57.014 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:02:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:02:58 compute-0 ceph-mon[74249]: pgmap v3310: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:02:58 compute-0 nova_compute[259627]: 2025-10-14 10:02:58.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:02:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:00 compute-0 ceph-mon[74249]: pgmap v3311: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:01 compute-0 nova_compute[259627]: 2025-10-14 10:03:01.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:02 compute-0 ceph-mon[74249]: pgmap v3312: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:03:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:03:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:03:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:03:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:03:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:03:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:03:03 compute-0 nova_compute[259627]: 2025-10-14 10:03:03.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:04 compute-0 ceph-mon[74249]: pgmap v3313: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:05 compute-0 sudo[452049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:03:05 compute-0 sudo[452049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:03:05 compute-0 sudo[452049]: pam_unix(sudo:session): session closed for user root
Oct 14 10:03:05 compute-0 sudo[452074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:03:05 compute-0 sudo[452074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:03:05 compute-0 sudo[452074]: pam_unix(sudo:session): session closed for user root
Oct 14 10:03:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 10:03:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1071089384' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 10:03:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 10:03:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1071089384' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 10:03:05 compute-0 sudo[452111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:03:05 compute-0 sudo[452111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:03:05 compute-0 podman[452099]: 2025-10-14 10:03:05.865253754 +0000 UTC m=+0.089731353 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 10:03:05 compute-0 sudo[452111]: pam_unix(sudo:session): session closed for user root
Oct 14 10:03:05 compute-0 podman[452098]: 2025-10-14 10:03:05.904639491 +0000 UTC m=+0.127287235 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 10:03:05 compute-0 sudo[452164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 10:03:05 compute-0 sudo[452164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:03:06 compute-0 nova_compute[259627]: 2025-10-14 10:03:06.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:06 compute-0 ceph-mon[74249]: pgmap v3314: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1071089384' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 10:03:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1071089384' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 10:03:06 compute-0 sudo[452164]: pam_unix(sudo:session): session closed for user root
Oct 14 10:03:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 10:03:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:03:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 10:03:06 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 10:03:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 10:03:06 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:03:06 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 5bda8706-2b8f-4cb2-814e-88274263bb91 does not exist
Oct 14 10:03:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 10:03:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 10:03:06 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev d9ade782-1d6b-40b3-8cc3-6a327ea7b15e does not exist
Oct 14 10:03:06 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 3841ed4d-640d-432c-9626-91bafeba6352 does not exist
Oct 14 10:03:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 10:03:06 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 10:03:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 10:03:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:03:06 compute-0 sudo[452223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:03:06 compute-0 sudo[452223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:03:06 compute-0 sudo[452223]: pam_unix(sudo:session): session closed for user root
Oct 14 10:03:06 compute-0 sudo[452248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:03:06 compute-0 sudo[452248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:03:06 compute-0 sudo[452248]: pam_unix(sudo:session): session closed for user root
Oct 14 10:03:06 compute-0 sudo[452273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:03:06 compute-0 sudo[452273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:03:06 compute-0 sudo[452273]: pam_unix(sudo:session): session closed for user root
Oct 14 10:03:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:03:07.081 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:03:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:03:07.082 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:03:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:03:07.082 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:03:07 compute-0 sudo[452298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 10:03:07 compute-0 sudo[452298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:03:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:07 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:03:07 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 10:03:07 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:03:07 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 10:03:07 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 10:03:07 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:03:07 compute-0 podman[452364]: 2025-10-14 10:03:07.643339992 +0000 UTC m=+0.061266774 container create 31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 10:03:07 compute-0 systemd[1]: Started libpod-conmon-31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317.scope.
Oct 14 10:03:07 compute-0 podman[452364]: 2025-10-14 10:03:07.621812444 +0000 UTC m=+0.039739256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:03:07 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:03:07 compute-0 podman[452364]: 2025-10-14 10:03:07.747196851 +0000 UTC m=+0.165123673 container init 31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 10:03:07 compute-0 podman[452364]: 2025-10-14 10:03:07.753323021 +0000 UTC m=+0.171249793 container start 31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 10:03:07 compute-0 podman[452364]: 2025-10-14 10:03:07.756235473 +0000 UTC m=+0.174162275 container attach 31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 14 10:03:07 compute-0 quirky_bhaskara[452381]: 167 167
Oct 14 10:03:07 compute-0 systemd[1]: libpod-31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317.scope: Deactivated successfully.
Oct 14 10:03:07 compute-0 conmon[452381]: conmon 31cbd544ebf9bc5a5770 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317.scope/container/memory.events
Oct 14 10:03:07 compute-0 podman[452364]: 2025-10-14 10:03:07.766178067 +0000 UTC m=+0.184104929 container died 31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:03:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-65cff447a940b3fa7385b07799ee655bcf608c9624f8db0ecb51515bd7c91dca-merged.mount: Deactivated successfully.
Oct 14 10:03:07 compute-0 podman[452364]: 2025-10-14 10:03:07.817609409 +0000 UTC m=+0.235536181 container remove 31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 14 10:03:07 compute-0 systemd[1]: libpod-conmon-31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317.scope: Deactivated successfully.
Oct 14 10:03:08 compute-0 podman[452404]: 2025-10-14 10:03:08.049384186 +0000 UTC m=+0.080559198 container create 1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 10:03:08 compute-0 systemd[1]: Started libpod-conmon-1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263.scope.
Oct 14 10:03:08 compute-0 podman[452404]: 2025-10-14 10:03:08.016255393 +0000 UTC m=+0.047430455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:03:08 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ef8e548b6167b0d4ba59a98109212483010dabbb1f3b67da98cf3f1cc630f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 10:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ef8e548b6167b0d4ba59a98109212483010dabbb1f3b67da98cf3f1cc630f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 10:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ef8e548b6167b0d4ba59a98109212483010dabbb1f3b67da98cf3f1cc630f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 10:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ef8e548b6167b0d4ba59a98109212483010dabbb1f3b67da98cf3f1cc630f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 10:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ef8e548b6167b0d4ba59a98109212483010dabbb1f3b67da98cf3f1cc630f5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 10:03:08 compute-0 podman[452404]: 2025-10-14 10:03:08.165855964 +0000 UTC m=+0.197031036 container init 1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 10:03:08 compute-0 podman[452404]: 2025-10-14 10:03:08.179130439 +0000 UTC m=+0.210305461 container start 1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 10:03:08 compute-0 podman[452404]: 2025-10-14 10:03:08.183724262 +0000 UTC m=+0.214899304 container attach 1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 10:03:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.505451) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436188505535, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1206, "num_deletes": 252, "total_data_size": 1837439, "memory_usage": 1865784, "flush_reason": "Manual Compaction"}
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436188516272, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 1071301, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68178, "largest_seqno": 69383, "table_properties": {"data_size": 1066920, "index_size": 1904, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11598, "raw_average_key_size": 20, "raw_value_size": 1057341, "raw_average_value_size": 1888, "num_data_blocks": 87, "num_entries": 560, "num_filter_entries": 560, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436063, "oldest_key_time": 1760436063, "file_creation_time": 1760436188, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 10898 microseconds, and 7269 cpu microseconds.
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.516350) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 1071301 bytes OK
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.516383) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.518370) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.518393) EVENT_LOG_v1 {"time_micros": 1760436188518385, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.518417) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1831937, prev total WAL file size 1831937, number of live WAL files 2.
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.519645) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373530' seq:72057594037927935, type:22 .. '6D6772737461740033303033' seq:0, type:0; will stop at (end)
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(1046KB)], [161(10MB)]
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436188519743, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12269929, "oldest_snapshot_seqno": -1}
Oct 14 10:03:08 compute-0 ceph-mon[74249]: pgmap v3315: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8769 keys, 9688450 bytes, temperature: kUnknown
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436188590257, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 9688450, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9634304, "index_size": 31121, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21957, "raw_key_size": 229659, "raw_average_key_size": 26, "raw_value_size": 9482077, "raw_average_value_size": 1081, "num_data_blocks": 1204, "num_entries": 8769, "num_filter_entries": 8769, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760436188, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.590699) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 9688450 bytes
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.592381) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.5 rd, 137.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.7 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(20.5) write-amplify(9.0) OK, records in: 9228, records dropped: 459 output_compression: NoCompression
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.592410) EVENT_LOG_v1 {"time_micros": 1760436188592396, "job": 100, "event": "compaction_finished", "compaction_time_micros": 70729, "compaction_time_cpu_micros": 52769, "output_level": 6, "num_output_files": 1, "total_output_size": 9688450, "num_input_records": 9228, "num_output_records": 8769, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436188592872, "job": 100, "event": "table_file_deletion", "file_number": 163}
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436188596392, "job": 100, "event": "table_file_deletion", "file_number": 161}
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.519454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.596471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.596481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.596485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.596488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:03:08 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.596491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:03:09 compute-0 nova_compute[259627]: 2025-10-14 10:03:09.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:09 compute-0 bold_morse[452421]: --> passed data devices: 0 physical, 3 LVM
Oct 14 10:03:09 compute-0 bold_morse[452421]: --> relative data size: 1.0
Oct 14 10:03:09 compute-0 bold_morse[452421]: --> All data devices are unavailable
Oct 14 10:03:09 compute-0 systemd[1]: libpod-1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263.scope: Deactivated successfully.
Oct 14 10:03:09 compute-0 systemd[1]: libpod-1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263.scope: Consumed 1.222s CPU time.
Oct 14 10:03:09 compute-0 podman[452404]: 2025-10-14 10:03:09.457650481 +0000 UTC m=+1.488825503 container died 1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 10:03:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2ef8e548b6167b0d4ba59a98109212483010dabbb1f3b67da98cf3f1cc630f5-merged.mount: Deactivated successfully.
Oct 14 10:03:09 compute-0 podman[452404]: 2025-10-14 10:03:09.548583542 +0000 UTC m=+1.579758534 container remove 1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 10:03:09 compute-0 ceph-mon[74249]: pgmap v3316: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:09 compute-0 systemd[1]: libpod-conmon-1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263.scope: Deactivated successfully.
Oct 14 10:03:09 compute-0 sudo[452298]: pam_unix(sudo:session): session closed for user root
Oct 14 10:03:09 compute-0 sudo[452464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:03:09 compute-0 sudo[452464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:03:09 compute-0 sudo[452464]: pam_unix(sudo:session): session closed for user root
Oct 14 10:03:09 compute-0 sudo[452489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:03:09 compute-0 sudo[452489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:03:09 compute-0 sudo[452489]: pam_unix(sudo:session): session closed for user root
Oct 14 10:03:09 compute-0 sudo[452514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:03:09 compute-0 sudo[452514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:03:09 compute-0 sudo[452514]: pam_unix(sudo:session): session closed for user root
Oct 14 10:03:09 compute-0 sudo[452539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 10:03:09 compute-0 sudo[452539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:03:10 compute-0 podman[452604]: 2025-10-14 10:03:10.417549824 +0000 UTC m=+0.067321493 container create 0ee00363c717f828b0c7949d7c0b3338ded4727b2f2942b8b8d85d3c8569e80c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 10:03:10 compute-0 systemd[1]: Started libpod-conmon-0ee00363c717f828b0c7949d7c0b3338ded4727b2f2942b8b8d85d3c8569e80c.scope.
Oct 14 10:03:10 compute-0 podman[452604]: 2025-10-14 10:03:10.392491729 +0000 UTC m=+0.042263438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:03:10 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:03:10 compute-0 podman[452604]: 2025-10-14 10:03:10.51354925 +0000 UTC m=+0.163320999 container init 0ee00363c717f828b0c7949d7c0b3338ded4727b2f2942b8b8d85d3c8569e80c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 10:03:10 compute-0 podman[452604]: 2025-10-14 10:03:10.524831357 +0000 UTC m=+0.174603036 container start 0ee00363c717f828b0c7949d7c0b3338ded4727b2f2942b8b8d85d3c8569e80c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 10:03:10 compute-0 podman[452604]: 2025-10-14 10:03:10.530437844 +0000 UTC m=+0.180209593 container attach 0ee00363c717f828b0c7949d7c0b3338ded4727b2f2942b8b8d85d3c8569e80c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:03:10 compute-0 agitated_maxwell[452621]: 167 167
Oct 14 10:03:10 compute-0 systemd[1]: libpod-0ee00363c717f828b0c7949d7c0b3338ded4727b2f2942b8b8d85d3c8569e80c.scope: Deactivated successfully.
Oct 14 10:03:10 compute-0 podman[452604]: 2025-10-14 10:03:10.535753575 +0000 UTC m=+0.185525254 container died 0ee00363c717f828b0c7949d7c0b3338ded4727b2f2942b8b8d85d3c8569e80c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 10:03:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-65996502d6f46db087260891a1446ae62c0fbcab4fea22ca0344f0c1eda3d314-merged.mount: Deactivated successfully.
Oct 14 10:03:10 compute-0 podman[452604]: 2025-10-14 10:03:10.597630823 +0000 UTC m=+0.247402522 container remove 0ee00363c717f828b0c7949d7c0b3338ded4727b2f2942b8b8d85d3c8569e80c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 10:03:10 compute-0 systemd[1]: libpod-conmon-0ee00363c717f828b0c7949d7c0b3338ded4727b2f2942b8b8d85d3c8569e80c.scope: Deactivated successfully.
Oct 14 10:03:10 compute-0 podman[452644]: 2025-10-14 10:03:10.812922685 +0000 UTC m=+0.055741398 container create 44980fc3bcccbf66fa9b5512e0fbb14674c93f53154a0c047868d91ccd508722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 10:03:10 compute-0 systemd[1]: Started libpod-conmon-44980fc3bcccbf66fa9b5512e0fbb14674c93f53154a0c047868d91ccd508722.scope.
Oct 14 10:03:10 compute-0 podman[452644]: 2025-10-14 10:03:10.787562273 +0000 UTC m=+0.030380996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:03:10 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:03:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b6a3492da3e6b4acf0f96493c6cccdf35cdecdf092c38f0c80470434eabc98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 10:03:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b6a3492da3e6b4acf0f96493c6cccdf35cdecdf092c38f0c80470434eabc98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 10:03:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b6a3492da3e6b4acf0f96493c6cccdf35cdecdf092c38f0c80470434eabc98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 10:03:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b6a3492da3e6b4acf0f96493c6cccdf35cdecdf092c38f0c80470434eabc98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 10:03:10 compute-0 podman[452644]: 2025-10-14 10:03:10.932416067 +0000 UTC m=+0.175234790 container init 44980fc3bcccbf66fa9b5512e0fbb14674c93f53154a0c047868d91ccd508722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 10:03:10 compute-0 podman[452644]: 2025-10-14 10:03:10.942918255 +0000 UTC m=+0.185736958 container start 44980fc3bcccbf66fa9b5512e0fbb14674c93f53154a0c047868d91ccd508722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 10:03:10 compute-0 podman[452644]: 2025-10-14 10:03:10.947094297 +0000 UTC m=+0.189912990 container attach 44980fc3bcccbf66fa9b5512e0fbb14674c93f53154a0c047868d91ccd508722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 14 10:03:10 compute-0 nova_compute[259627]: 2025-10-14 10:03:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:03:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:11 compute-0 nova_compute[259627]: 2025-10-14 10:03:11.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]: {
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:     "0": [
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:         {
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "devices": [
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "/dev/loop3"
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             ],
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "lv_name": "ceph_lv0",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "lv_size": "21470642176",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "name": "ceph_lv0",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "tags": {
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.cluster_name": "ceph",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.crush_device_class": "",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.encrypted": "0",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.osd_id": "0",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.type": "block",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.vdo": "0"
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             },
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "type": "block",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "vg_name": "ceph_vg0"
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:         }
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:     ],
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:     "1": [
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:         {
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "devices": [
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "/dev/loop4"
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             ],
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "lv_name": "ceph_lv1",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "lv_size": "21470642176",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "name": "ceph_lv1",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "tags": {
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.cluster_name": "ceph",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.crush_device_class": "",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.encrypted": "0",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.osd_id": "1",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.type": "block",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.vdo": "0"
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             },
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "type": "block",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "vg_name": "ceph_vg1"
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:         }
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:     ],
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:     "2": [
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:         {
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "devices": [
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "/dev/loop5"
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             ],
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "lv_name": "ceph_lv2",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "lv_size": "21470642176",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "name": "ceph_lv2",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "tags": {
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.cluster_name": "ceph",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.crush_device_class": "",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.encrypted": "0",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.osd_id": "2",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.type": "block",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:                 "ceph.vdo": "0"
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             },
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "type": "block",
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:             "vg_name": "ceph_vg2"
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:         }
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]:     ]
Oct 14 10:03:11 compute-0 vigilant_antonelli[452660]: }
Oct 14 10:03:11 compute-0 systemd[1]: libpod-44980fc3bcccbf66fa9b5512e0fbb14674c93f53154a0c047868d91ccd508722.scope: Deactivated successfully.
Oct 14 10:03:11 compute-0 podman[452644]: 2025-10-14 10:03:11.773433462 +0000 UTC m=+1.016252155 container died 44980fc3bcccbf66fa9b5512e0fbb14674c93f53154a0c047868d91ccd508722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 10:03:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-94b6a3492da3e6b4acf0f96493c6cccdf35cdecdf092c38f0c80470434eabc98-merged.mount: Deactivated successfully.
Oct 14 10:03:11 compute-0 podman[452644]: 2025-10-14 10:03:11.861083513 +0000 UTC m=+1.103902226 container remove 44980fc3bcccbf66fa9b5512e0fbb14674c93f53154a0c047868d91ccd508722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 10:03:11 compute-0 systemd[1]: libpod-conmon-44980fc3bcccbf66fa9b5512e0fbb14674c93f53154a0c047868d91ccd508722.scope: Deactivated successfully.
Oct 14 10:03:11 compute-0 sudo[452539]: pam_unix(sudo:session): session closed for user root
Oct 14 10:03:12 compute-0 sudo[452681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:03:12 compute-0 sudo[452681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:03:12 compute-0 sudo[452681]: pam_unix(sudo:session): session closed for user root
Oct 14 10:03:12 compute-0 sudo[452706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:03:12 compute-0 sudo[452706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:03:12 compute-0 sudo[452706]: pam_unix(sudo:session): session closed for user root
Oct 14 10:03:12 compute-0 sudo[452731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:03:12 compute-0 sudo[452731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:03:12 compute-0 sudo[452731]: pam_unix(sudo:session): session closed for user root
Oct 14 10:03:12 compute-0 sudo[452756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 10:03:12 compute-0 sudo[452756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:03:12 compute-0 ceph-mon[74249]: pgmap v3317: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:12 compute-0 podman[452823]: 2025-10-14 10:03:12.759478617 +0000 UTC m=+0.053341310 container create f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lederberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 10:03:12 compute-0 systemd[1]: Started libpod-conmon-f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4.scope.
Oct 14 10:03:12 compute-0 podman[452823]: 2025-10-14 10:03:12.734428192 +0000 UTC m=+0.028290895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:03:12 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:03:12 compute-0 podman[452823]: 2025-10-14 10:03:12.85781044 +0000 UTC m=+0.151673143 container init f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 10:03:12 compute-0 podman[452823]: 2025-10-14 10:03:12.870513341 +0000 UTC m=+0.164376014 container start f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 10:03:12 compute-0 podman[452823]: 2025-10-14 10:03:12.874325005 +0000 UTC m=+0.168187718 container attach f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 10:03:12 compute-0 fervent_lederberg[452840]: 167 167
Oct 14 10:03:12 compute-0 systemd[1]: libpod-f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4.scope: Deactivated successfully.
Oct 14 10:03:12 compute-0 conmon[452840]: conmon f707afa867d08e486cca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4.scope/container/memory.events
Oct 14 10:03:12 compute-0 podman[452823]: 2025-10-14 10:03:12.882001403 +0000 UTC m=+0.175864076 container died f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lederberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 10:03:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f807311e660bc10eab05573163bb4710f7a7bb301178dd44c3dedada4c69bcf2-merged.mount: Deactivated successfully.
Oct 14 10:03:12 compute-0 podman[452823]: 2025-10-14 10:03:12.939607417 +0000 UTC m=+0.233470090 container remove f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lederberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 10:03:12 compute-0 systemd[1]: libpod-conmon-f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4.scope: Deactivated successfully.
Oct 14 10:03:13 compute-0 podman[452865]: 2025-10-14 10:03:13.151404224 +0000 UTC m=+0.053642187 container create 1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nobel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 10:03:13 compute-0 systemd[1]: Started libpod-conmon-1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077.scope.
Oct 14 10:03:13 compute-0 podman[452865]: 2025-10-14 10:03:13.121308995 +0000 UTC m=+0.023547008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:03:13 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:03:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/322b800a13fe791ec9a3fdc9aeb4a2719a79421d4830559ef40fa9b74c7cc902/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 10:03:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/322b800a13fe791ec9a3fdc9aeb4a2719a79421d4830559ef40fa9b74c7cc902/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 10:03:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/322b800a13fe791ec9a3fdc9aeb4a2719a79421d4830559ef40fa9b74c7cc902/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 10:03:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/322b800a13fe791ec9a3fdc9aeb4a2719a79421d4830559ef40fa9b74c7cc902/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 10:03:13 compute-0 podman[452865]: 2025-10-14 10:03:13.245578674 +0000 UTC m=+0.147816657 container init 1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nobel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 10:03:13 compute-0 podman[452865]: 2025-10-14 10:03:13.253160141 +0000 UTC m=+0.155398084 container start 1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 10:03:13 compute-0 podman[452865]: 2025-10-14 10:03:13.257333243 +0000 UTC m=+0.159571226 container attach 1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 10:03:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:03:13 compute-0 nova_compute[259627]: 2025-10-14 10:03:13.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:03:14 compute-0 nova_compute[259627]: 2025-10-14 10:03:14.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]: {
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:         "osd_id": 2,
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:         "type": "bluestore"
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:     },
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:         "osd_id": 1,
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:         "type": "bluestore"
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:     },
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:         "osd_id": 0,
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:         "type": "bluestore"
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]:     }
Oct 14 10:03:14 compute-0 intelligent_nobel[452883]: }
Oct 14 10:03:14 compute-0 systemd[1]: libpod-1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077.scope: Deactivated successfully.
Oct 14 10:03:14 compute-0 systemd[1]: libpod-1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077.scope: Consumed 1.066s CPU time.
Oct 14 10:03:14 compute-0 conmon[452883]: conmon 1b73a7d42a998cf278b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077.scope/container/memory.events
Oct 14 10:03:14 compute-0 podman[452865]: 2025-10-14 10:03:14.31885592 +0000 UTC m=+1.221093903 container died 1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 10:03:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-322b800a13fe791ec9a3fdc9aeb4a2719a79421d4830559ef40fa9b74c7cc902-merged.mount: Deactivated successfully.
Oct 14 10:03:14 compute-0 podman[452865]: 2025-10-14 10:03:14.40361467 +0000 UTC m=+1.305852613 container remove 1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 10:03:14 compute-0 ceph-mon[74249]: pgmap v3318: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:14 compute-0 systemd[1]: libpod-conmon-1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077.scope: Deactivated successfully.
Oct 14 10:03:14 compute-0 sudo[452756]: pam_unix(sudo:session): session closed for user root
Oct 14 10:03:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 10:03:14 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:03:14 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 10:03:14 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:03:14 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 6e613fdc-ee09-4708-8d69-0cc86e6d2928 does not exist
Oct 14 10:03:14 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 0f7f61a2-4491-43d6-8205-feb5a5f3d197 does not exist
Oct 14 10:03:14 compute-0 sudo[452930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:03:14 compute-0 sudo[452930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:03:14 compute-0 sudo[452930]: pam_unix(sudo:session): session closed for user root
Oct 14 10:03:14 compute-0 sudo[452955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 10:03:14 compute-0 sudo[452955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:03:14 compute-0 sudo[452955]: pam_unix(sudo:session): session closed for user root
Oct 14 10:03:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:15 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:03:15 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:03:15 compute-0 nova_compute[259627]: 2025-10-14 10:03:15.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:03:16 compute-0 nova_compute[259627]: 2025-10-14 10:03:16.010 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:03:16 compute-0 nova_compute[259627]: 2025-10-14 10:03:16.011 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:03:16 compute-0 nova_compute[259627]: 2025-10-14 10:03:16.011 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:03:16 compute-0 nova_compute[259627]: 2025-10-14 10:03:16.011 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 10:03:16 compute-0 nova_compute[259627]: 2025-10-14 10:03:16.012 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 10:03:16 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 10:03:16 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/403887356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:03:16 compute-0 nova_compute[259627]: 2025-10-14 10:03:16.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:16 compute-0 ceph-mon[74249]: pgmap v3319: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:16 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/403887356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:03:16 compute-0 nova_compute[259627]: 2025-10-14 10:03:16.508 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 10:03:16 compute-0 nova_compute[259627]: 2025-10-14 10:03:16.715 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 10:03:16 compute-0 nova_compute[259627]: 2025-10-14 10:03:16.716 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3467MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 10:03:16 compute-0 nova_compute[259627]: 2025-10-14 10:03:16.716 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:03:16 compute-0 nova_compute[259627]: 2025-10-14 10:03:16.717 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:03:16 compute-0 nova_compute[259627]: 2025-10-14 10:03:16.862 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 10:03:16 compute-0 nova_compute[259627]: 2025-10-14 10:03:16.863 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 10:03:16 compute-0 nova_compute[259627]: 2025-10-14 10:03:16.900 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 10:03:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 10:03:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3237319424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:03:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:17 compute-0 nova_compute[259627]: 2025-10-14 10:03:17.347 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 10:03:17 compute-0 nova_compute[259627]: 2025-10-14 10:03:17.352 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 10:03:17 compute-0 nova_compute[259627]: 2025-10-14 10:03:17.372 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 10:03:17 compute-0 nova_compute[259627]: 2025-10-14 10:03:17.373 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 10:03:17 compute-0 nova_compute[259627]: 2025-10-14 10:03:17.374 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:03:17 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3237319424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:03:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:03:18 compute-0 ceph-mon[74249]: pgmap v3320: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:19 compute-0 nova_compute[259627]: 2025-10-14 10:03:19.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:19 compute-0 podman[453025]: 2025-10-14 10:03:19.639741708 +0000 UTC m=+0.056407165 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 10:03:19 compute-0 podman[453024]: 2025-10-14 10:03:19.665941081 +0000 UTC m=+0.082608408 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 14 10:03:20 compute-0 ceph-mon[74249]: pgmap v3321: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:21 compute-0 nova_compute[259627]: 2025-10-14 10:03:21.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:22 compute-0 ceph-mon[74249]: pgmap v3322: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:03:23 compute-0 ceph-mon[74249]: pgmap v3323: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:24 compute-0 nova_compute[259627]: 2025-10-14 10:03:24.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:24 compute-0 nova_compute[259627]: 2025-10-14 10:03:24.369 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:03:24 compute-0 nova_compute[259627]: 2025-10-14 10:03:24.370 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:03:24 compute-0 nova_compute[259627]: 2025-10-14 10:03:24.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:03:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:25 compute-0 nova_compute[259627]: 2025-10-14 10:03:25.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:03:25 compute-0 nova_compute[259627]: 2025-10-14 10:03:25.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 10:03:25 compute-0 nova_compute[259627]: 2025-10-14 10:03:25.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 10:03:26 compute-0 nova_compute[259627]: 2025-10-14 10:03:25.999 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 10:03:26 compute-0 nova_compute[259627]: 2025-10-14 10:03:25.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:03:26 compute-0 ceph-mon[74249]: pgmap v3324: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:26 compute-0 nova_compute[259627]: 2025-10-14 10:03:26.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:26 compute-0 nova_compute[259627]: 2025-10-14 10:03:26.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:03:26 compute-0 nova_compute[259627]: 2025-10-14 10:03:26.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 10:03:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:28 compute-0 ceph-mon[74249]: pgmap v3325: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:03:29 compute-0 nova_compute[259627]: 2025-10-14 10:03:29.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:29 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 10:03:29 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 185K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.72 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 420 writes, 907 keys, 420 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s
                                           Interval WAL: 420 writes, 198 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc27090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc27090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc27090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 14 10:03:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:30 compute-0 ceph-mon[74249]: pgmap v3326: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:31 compute-0 nova_compute[259627]: 2025-10-14 10:03:31.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:32 compute-0 ceph-mon[74249]: pgmap v3327: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:03:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:03:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:03:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:03:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:03:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:03:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_10:03:32
Oct 14 10:03:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 10:03:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 10:03:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.log', 'backups', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta']
Oct 14 10:03:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 10:03:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 10:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 10:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 10:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 10:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 10:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 10:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 10:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 10:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 10:03:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 10:03:34 compute-0 nova_compute[259627]: 2025-10-14 10:03:34.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:34 compute-0 ceph-mon[74249]: pgmap v3328: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:34 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 10:03:34 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 179K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.69 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 339 writes, 970 keys, 339 commit groups, 1.0 writes per commit group, ingest: 0.38 MB, 0.00 MB/s
                                           Interval WAL: 339 writes, 156 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 14 10:03:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:36 compute-0 ceph-mon[74249]: pgmap v3329: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:36 compute-0 nova_compute[259627]: 2025-10-14 10:03:36.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:36 compute-0 podman[453070]: 2025-10-14 10:03:36.658542008 +0000 UTC m=+0.069248611 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 14 10:03:36 compute-0 podman[453069]: 2025-10-14 10:03:36.690368209 +0000 UTC m=+0.100983989 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 14 10:03:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:38 compute-0 ceph-mon[74249]: pgmap v3330: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:03:39 compute-0 nova_compute[259627]: 2025-10-14 10:03:39.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:39 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 10:03:39 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 39K writes, 151K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 39K writes, 14K syncs, 2.70 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 421 writes, 898 keys, 421 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
                                           Interval WAL: 421 writes, 192 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca465090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca465090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca465090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 14 10:03:40 compute-0 ceph-mon[74249]: pgmap v3331: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:40 compute-0 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 10:03:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:41 compute-0 nova_compute[259627]: 2025-10-14 10:03:41.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:42 compute-0 ceph-mon[74249]: pgmap v3332: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:03:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 10:03:44 compute-0 nova_compute[259627]: 2025-10-14 10:03:44.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:44 compute-0 ceph-mon[74249]: pgmap v3333: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:46 compute-0 nova_compute[259627]: 2025-10-14 10:03:46.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:46 compute-0 ceph-mon[74249]: pgmap v3334: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:03:48 compute-0 ceph-mon[74249]: pgmap v3335: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:49 compute-0 nova_compute[259627]: 2025-10-14 10:03:49.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:49 compute-0 ceph-mon[74249]: pgmap v3336: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:50 compute-0 podman[453110]: 2025-10-14 10:03:50.698687923 +0000 UTC m=+0.090272127 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 14 10:03:50 compute-0 podman[453109]: 2025-10-14 10:03:50.785240576 +0000 UTC m=+0.184522028 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct 14 10:03:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:51 compute-0 nova_compute[259627]: 2025-10-14 10:03:51.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:52 compute-0 ceph-mon[74249]: pgmap v3337: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:03:54 compute-0 nova_compute[259627]: 2025-10-14 10:03:54.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:54 compute-0 ceph-mon[74249]: pgmap v3338: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:56 compute-0 ceph-mon[74249]: pgmap v3339: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:56 compute-0 nova_compute[259627]: 2025-10-14 10:03:56.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:58 compute-0 ceph-mon[74249]: pgmap v3340: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:03:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:03:59 compute-0 nova_compute[259627]: 2025-10-14 10:03:59.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:03:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:00 compute-0 ceph-mon[74249]: pgmap v3341: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:01 compute-0 nova_compute[259627]: 2025-10-14 10:04:01.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:02 compute-0 ceph-mon[74249]: pgmap v3342: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:04:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:04:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:04:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:04:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:04:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:04:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:04:04 compute-0 nova_compute[259627]: 2025-10-14 10:04:04.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:04 compute-0 ceph-mon[74249]: pgmap v3343: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 10:04:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1892538747' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 10:04:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 10:04:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1892538747' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 10:04:06 compute-0 ceph-mon[74249]: pgmap v3344: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1892538747' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 10:04:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/1892538747' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 10:04:06 compute-0 nova_compute[259627]: 2025-10-14 10:04:06.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:04:07.083 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:04:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:04:07.084 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:04:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:04:07.084 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:04:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:07 compute-0 podman[453154]: 2025-10-14 10:04:07.704664618 +0000 UTC m=+0.106743820 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 10:04:07 compute-0 podman[453155]: 2025-10-14 10:04:07.715895604 +0000 UTC m=+0.117588566 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 10:04:08 compute-0 ceph-mon[74249]: pgmap v3345: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:04:09 compute-0 nova_compute[259627]: 2025-10-14 10:04:09.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:10 compute-0 ceph-mon[74249]: pgmap v3346: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:11 compute-0 nova_compute[259627]: 2025-10-14 10:04:11.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:11 compute-0 nova_compute[259627]: 2025-10-14 10:04:11.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:04:12 compute-0 ceph-mon[74249]: pgmap v3347: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:04:13 compute-0 nova_compute[259627]: 2025-10-14 10:04:13.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:04:14 compute-0 nova_compute[259627]: 2025-10-14 10:04:14.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:14 compute-0 ceph-mon[74249]: pgmap v3348: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:14 compute-0 sudo[453195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:04:14 compute-0 sudo[453195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:04:14 compute-0 sudo[453195]: pam_unix(sudo:session): session closed for user root
Oct 14 10:04:14 compute-0 sudo[453220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:04:14 compute-0 sudo[453220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:04:14 compute-0 sudo[453220]: pam_unix(sudo:session): session closed for user root
Oct 14 10:04:14 compute-0 sudo[453245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:04:14 compute-0 nova_compute[259627]: 2025-10-14 10:04:14.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:04:14 compute-0 nova_compute[259627]: 2025-10-14 10:04:14.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 10:04:14 compute-0 sudo[453245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:04:14 compute-0 sudo[453245]: pam_unix(sudo:session): session closed for user root
Oct 14 10:04:15 compute-0 sudo[453270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 10:04:15 compute-0 sudo[453270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:04:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:15 compute-0 sudo[453270]: pam_unix(sudo:session): session closed for user root
Oct 14 10:04:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 10:04:15 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:04:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 10:04:15 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 10:04:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 10:04:15 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:04:15 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev aea33ff0-cb7b-4953-8761-7128a895bfdb does not exist
Oct 14 10:04:15 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 135b1aad-a505-4619-a6dc-042935bfd271 does not exist
Oct 14 10:04:15 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 40a3bd80-d172-473a-ac02-18c2a0d3d31e does not exist
Oct 14 10:04:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 10:04:15 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 10:04:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 10:04:15 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 10:04:15 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 10:04:15 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:04:15 compute-0 sudo[453327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:04:15 compute-0 sudo[453327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:04:15 compute-0 sudo[453327]: pam_unix(sudo:session): session closed for user root
Oct 14 10:04:15 compute-0 sudo[453352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:04:15 compute-0 sudo[453352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:04:15 compute-0 sudo[453352]: pam_unix(sudo:session): session closed for user root
Oct 14 10:04:16 compute-0 sudo[453377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:04:16 compute-0 sudo[453377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:04:16 compute-0 sudo[453377]: pam_unix(sudo:session): session closed for user root
Oct 14 10:04:16 compute-0 sudo[453402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 10:04:16 compute-0 sudo[453402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:04:16 compute-0 ceph-mon[74249]: pgmap v3349: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:16 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:04:16 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 10:04:16 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:04:16 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 10:04:16 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 10:04:16 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:04:16 compute-0 nova_compute[259627]: 2025-10-14 10:04:16.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:16 compute-0 podman[453467]: 2025-10-14 10:04:16.699491203 +0000 UTC m=+0.051826172 container create 7f65f1ac529fae2862a5470c9144a38e2c2b1b473f40fbca2730c179fecc9671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jones, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 10:04:16 compute-0 systemd[1]: Started libpod-conmon-7f65f1ac529fae2862a5470c9144a38e2c2b1b473f40fbca2730c179fecc9671.scope.
Oct 14 10:04:16 compute-0 podman[453467]: 2025-10-14 10:04:16.677698109 +0000 UTC m=+0.030033088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:04:16 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:04:16 compute-0 podman[453467]: 2025-10-14 10:04:16.821188219 +0000 UTC m=+0.173523238 container init 7f65f1ac529fae2862a5470c9144a38e2c2b1b473f40fbca2730c179fecc9671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 10:04:16 compute-0 podman[453467]: 2025-10-14 10:04:16.834326392 +0000 UTC m=+0.186661361 container start 7f65f1ac529fae2862a5470c9144a38e2c2b1b473f40fbca2730c179fecc9671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:04:16 compute-0 podman[453467]: 2025-10-14 10:04:16.840188076 +0000 UTC m=+0.192523095 container attach 7f65f1ac529fae2862a5470c9144a38e2c2b1b473f40fbca2730c179fecc9671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jones, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 10:04:16 compute-0 wizardly_jones[453483]: 167 167
Oct 14 10:04:16 compute-0 systemd[1]: libpod-7f65f1ac529fae2862a5470c9144a38e2c2b1b473f40fbca2730c179fecc9671.scope: Deactivated successfully.
Oct 14 10:04:16 compute-0 podman[453467]: 2025-10-14 10:04:16.844587104 +0000 UTC m=+0.196922073 container died 7f65f1ac529fae2862a5470c9144a38e2c2b1b473f40fbca2730c179fecc9671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jones, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 10:04:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b3af7d6a66a7ee95d14aff5124d8dc639a3937747dc9988e1a7f905da03d45d-merged.mount: Deactivated successfully.
Oct 14 10:04:16 compute-0 podman[453467]: 2025-10-14 10:04:16.912530121 +0000 UTC m=+0.264865080 container remove 7f65f1ac529fae2862a5470c9144a38e2c2b1b473f40fbca2730c179fecc9671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 10:04:16 compute-0 systemd[1]: libpod-conmon-7f65f1ac529fae2862a5470c9144a38e2c2b1b473f40fbca2730c179fecc9671.scope: Deactivated successfully.
Oct 14 10:04:16 compute-0 nova_compute[259627]: 2025-10-14 10:04:16.994 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:04:17 compute-0 nova_compute[259627]: 2025-10-14 10:04:17.041 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:04:17 compute-0 nova_compute[259627]: 2025-10-14 10:04:17.042 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:04:17 compute-0 nova_compute[259627]: 2025-10-14 10:04:17.043 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:04:17 compute-0 nova_compute[259627]: 2025-10-14 10:04:17.043 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 10:04:17 compute-0 nova_compute[259627]: 2025-10-14 10:04:17.044 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 10:04:17 compute-0 podman[453509]: 2025-10-14 10:04:17.160919556 +0000 UTC m=+0.068713107 container create 71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_antonelli, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 10:04:17 compute-0 podman[453509]: 2025-10-14 10:04:17.138995518 +0000 UTC m=+0.046789119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:04:17 compute-0 systemd[1]: Started libpod-conmon-71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76.scope.
Oct 14 10:04:17 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b500ac3b89b32482e83afdb7fb167d0580e0099fb0e5aae3cb56ec44110312/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 10:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b500ac3b89b32482e83afdb7fb167d0580e0099fb0e5aae3cb56ec44110312/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 10:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b500ac3b89b32482e83afdb7fb167d0580e0099fb0e5aae3cb56ec44110312/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 10:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b500ac3b89b32482e83afdb7fb167d0580e0099fb0e5aae3cb56ec44110312/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 10:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b500ac3b89b32482e83afdb7fb167d0580e0099fb0e5aae3cb56ec44110312/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 10:04:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3350: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:17 compute-0 podman[453509]: 2025-10-14 10:04:17.388958861 +0000 UTC m=+0.296752392 container init 71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 10:04:17 compute-0 podman[453509]: 2025-10-14 10:04:17.399344946 +0000 UTC m=+0.307138457 container start 71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_antonelli, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 10:04:17 compute-0 podman[453509]: 2025-10-14 10:04:17.402338769 +0000 UTC m=+0.310132280 container attach 71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_antonelli, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 10:04:17 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 10:04:17 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/473578652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:04:17 compute-0 nova_compute[259627]: 2025-10-14 10:04:17.528 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 10:04:17 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/473578652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:04:17 compute-0 nova_compute[259627]: 2025-10-14 10:04:17.717 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 10:04:17 compute-0 nova_compute[259627]: 2025-10-14 10:04:17.718 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3482MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 10:04:17 compute-0 nova_compute[259627]: 2025-10-14 10:04:17.719 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:04:17 compute-0 nova_compute[259627]: 2025-10-14 10:04:17.719 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:04:17 compute-0 nova_compute[259627]: 2025-10-14 10:04:17.820 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 10:04:17 compute-0 nova_compute[259627]: 2025-10-14 10:04:17.820 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 10:04:17 compute-0 nova_compute[259627]: 2025-10-14 10:04:17.901 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 10:04:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 10:04:18 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3328658697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:04:18 compute-0 nova_compute[259627]: 2025-10-14 10:04:18.419 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 10:04:18 compute-0 nova_compute[259627]: 2025-10-14 10:04:18.428 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 10:04:18 compute-0 nova_compute[259627]: 2025-10-14 10:04:18.456 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 10:04:18 compute-0 nova_compute[259627]: 2025-10-14 10:04:18.458 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 10:04:18 compute-0 nova_compute[259627]: 2025-10-14 10:04:18.459 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:04:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:04:18 compute-0 ceph-mon[74249]: pgmap v3350: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:18 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3328658697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:04:18 compute-0 ecstatic_antonelli[453545]: --> passed data devices: 0 physical, 3 LVM
Oct 14 10:04:18 compute-0 ecstatic_antonelli[453545]: --> relative data size: 1.0
Oct 14 10:04:18 compute-0 ecstatic_antonelli[453545]: --> All data devices are unavailable
Oct 14 10:04:18 compute-0 systemd[1]: libpod-71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76.scope: Deactivated successfully.
Oct 14 10:04:18 compute-0 systemd[1]: libpod-71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76.scope: Consumed 1.151s CPU time.
Oct 14 10:04:18 compute-0 podman[453598]: 2025-10-14 10:04:18.656718508 +0000 UTC m=+0.034100797 container died 71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 10:04:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-51b500ac3b89b32482e83afdb7fb167d0580e0099fb0e5aae3cb56ec44110312-merged.mount: Deactivated successfully.
Oct 14 10:04:18 compute-0 podman[453598]: 2025-10-14 10:04:18.72035986 +0000 UTC m=+0.097742149 container remove 71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_antonelli, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 10:04:18 compute-0 systemd[1]: libpod-conmon-71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76.scope: Deactivated successfully.
Oct 14 10:04:18 compute-0 sudo[453402]: pam_unix(sudo:session): session closed for user root
Oct 14 10:04:18 compute-0 sudo[453613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:04:18 compute-0 sudo[453613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:04:18 compute-0 sudo[453613]: pam_unix(sudo:session): session closed for user root
Oct 14 10:04:18 compute-0 sudo[453638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:04:18 compute-0 sudo[453638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:04:18 compute-0 sudo[453638]: pam_unix(sudo:session): session closed for user root
Oct 14 10:04:19 compute-0 sudo[453663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:04:19 compute-0 sudo[453663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:04:19 compute-0 sudo[453663]: pam_unix(sudo:session): session closed for user root
Oct 14 10:04:19 compute-0 nova_compute[259627]: 2025-10-14 10:04:19.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:19 compute-0 sudo[453688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 10:04:19 compute-0 sudo[453688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:04:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:19 compute-0 ceph-mon[74249]: pgmap v3351: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:19 compute-0 podman[453757]: 2025-10-14 10:04:19.702623241 +0000 UTC m=+0.073932565 container create 4ce44223058d5a1d48b430bedd5659f90d9628942587144739bf34777cf6897b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_shockley, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 10:04:19 compute-0 systemd[1]: Started libpod-conmon-4ce44223058d5a1d48b430bedd5659f90d9628942587144739bf34777cf6897b.scope.
Oct 14 10:04:19 compute-0 podman[453757]: 2025-10-14 10:04:19.673277181 +0000 UTC m=+0.044586565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:04:19 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:04:19 compute-0 podman[453757]: 2025-10-14 10:04:19.808801206 +0000 UTC m=+0.180110610 container init 4ce44223058d5a1d48b430bedd5659f90d9628942587144739bf34777cf6897b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_shockley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 10:04:19 compute-0 podman[453757]: 2025-10-14 10:04:19.822208655 +0000 UTC m=+0.193517999 container start 4ce44223058d5a1d48b430bedd5659f90d9628942587144739bf34777cf6897b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 10:04:19 compute-0 podman[453757]: 2025-10-14 10:04:19.827613428 +0000 UTC m=+0.198922822 container attach 4ce44223058d5a1d48b430bedd5659f90d9628942587144739bf34777cf6897b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_shockley, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 10:04:19 compute-0 gifted_shockley[453773]: 167 167
Oct 14 10:04:19 compute-0 systemd[1]: libpod-4ce44223058d5a1d48b430bedd5659f90d9628942587144739bf34777cf6897b.scope: Deactivated successfully.
Oct 14 10:04:19 compute-0 podman[453757]: 2025-10-14 10:04:19.832952159 +0000 UTC m=+0.204261573 container died 4ce44223058d5a1d48b430bedd5659f90d9628942587144739bf34777cf6897b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_shockley, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 10:04:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac3d15f72b30f668d6ece58f2c982a163fe7ce15519410ba04894aaf76c7bdda-merged.mount: Deactivated successfully.
Oct 14 10:04:19 compute-0 podman[453757]: 2025-10-14 10:04:19.890932972 +0000 UTC m=+0.262242316 container remove 4ce44223058d5a1d48b430bedd5659f90d9628942587144739bf34777cf6897b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_shockley, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 10:04:19 compute-0 systemd[1]: libpod-conmon-4ce44223058d5a1d48b430bedd5659f90d9628942587144739bf34777cf6897b.scope: Deactivated successfully.
Oct 14 10:04:20 compute-0 podman[453797]: 2025-10-14 10:04:20.15414906 +0000 UTC m=+0.080050045 container create 72b4cca9c47bfd338b6d29be17868db349f0c2d48182aef5a799354d75dc087d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mestorf, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 10:04:20 compute-0 podman[453797]: 2025-10-14 10:04:20.119283715 +0000 UTC m=+0.045184760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:04:20 compute-0 systemd[1]: Started libpod-conmon-72b4cca9c47bfd338b6d29be17868db349f0c2d48182aef5a799354d75dc087d.scope.
Oct 14 10:04:20 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d729dff7b9464595975151b32e3a0be459594ab0c4d7f356b3b6d3bc11d17773/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 10:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d729dff7b9464595975151b32e3a0be459594ab0c4d7f356b3b6d3bc11d17773/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 10:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d729dff7b9464595975151b32e3a0be459594ab0c4d7f356b3b6d3bc11d17773/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 10:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d729dff7b9464595975151b32e3a0be459594ab0c4d7f356b3b6d3bc11d17773/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 10:04:20 compute-0 podman[453797]: 2025-10-14 10:04:20.289786219 +0000 UTC m=+0.215687254 container init 72b4cca9c47bfd338b6d29be17868db349f0c2d48182aef5a799354d75dc087d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 10:04:20 compute-0 podman[453797]: 2025-10-14 10:04:20.29797712 +0000 UTC m=+0.223878115 container start 72b4cca9c47bfd338b6d29be17868db349f0c2d48182aef5a799354d75dc087d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mestorf, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 10:04:20 compute-0 podman[453797]: 2025-10-14 10:04:20.30289394 +0000 UTC m=+0.228794965 container attach 72b4cca9c47bfd338b6d29be17868db349f0c2d48182aef5a799354d75dc087d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]: {
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:     "0": [
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:         {
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "devices": [
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "/dev/loop3"
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             ],
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "lv_name": "ceph_lv0",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "lv_size": "21470642176",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "name": "ceph_lv0",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "tags": {
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.cluster_name": "ceph",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.crush_device_class": "",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.encrypted": "0",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.osd_id": "0",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.type": "block",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.vdo": "0"
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             },
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "type": "block",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "vg_name": "ceph_vg0"
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:         }
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:     ],
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:     "1": [
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:         {
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "devices": [
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "/dev/loop4"
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             ],
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "lv_name": "ceph_lv1",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "lv_size": "21470642176",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "name": "ceph_lv1",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "tags": {
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.cluster_name": "ceph",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.crush_device_class": "",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.encrypted": "0",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.osd_id": "1",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.type": "block",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.vdo": "0"
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             },
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "type": "block",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "vg_name": "ceph_vg1"
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:         }
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:     ],
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:     "2": [
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:         {
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "devices": [
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "/dev/loop5"
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             ],
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "lv_name": "ceph_lv2",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "lv_size": "21470642176",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "name": "ceph_lv2",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "tags": {
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.cluster_name": "ceph",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.crush_device_class": "",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.encrypted": "0",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.osd_id": "2",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.type": "block",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:                 "ceph.vdo": "0"
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             },
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "type": "block",
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:             "vg_name": "ceph_vg2"
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:         }
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]:     ]
Oct 14 10:04:21 compute-0 sweet_mestorf[453814]: }
Oct 14 10:04:21 compute-0 systemd[1]: libpod-72b4cca9c47bfd338b6d29be17868db349f0c2d48182aef5a799354d75dc087d.scope: Deactivated successfully.
Oct 14 10:04:21 compute-0 podman[453797]: 2025-10-14 10:04:21.094531735 +0000 UTC m=+1.020432760 container died 72b4cca9c47bfd338b6d29be17868db349f0c2d48182aef5a799354d75dc087d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mestorf, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct 14 10:04:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d729dff7b9464595975151b32e3a0be459594ab0c4d7f356b3b6d3bc11d17773-merged.mount: Deactivated successfully.
Oct 14 10:04:21 compute-0 podman[453797]: 2025-10-14 10:04:21.182097513 +0000 UTC m=+1.107998518 container remove 72b4cca9c47bfd338b6d29be17868db349f0c2d48182aef5a799354d75dc087d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 10:04:21 compute-0 systemd[1]: libpod-conmon-72b4cca9c47bfd338b6d29be17868db349f0c2d48182aef5a799354d75dc087d.scope: Deactivated successfully.
Oct 14 10:04:21 compute-0 sudo[453688]: pam_unix(sudo:session): session closed for user root
Oct 14 10:04:21 compute-0 podman[453832]: 2025-10-14 10:04:21.243423138 +0000 UTC m=+0.098141309 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 10:04:21 compute-0 podman[453824]: 2025-10-14 10:04:21.274197323 +0000 UTC m=+0.129019866 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 14 10:04:21 compute-0 sudo[453875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:04:21 compute-0 sudo[453875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:04:21 compute-0 sudo[453875]: pam_unix(sudo:session): session closed for user root
Oct 14 10:04:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:21 compute-0 sudo[453906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:04:21 compute-0 sudo[453906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:04:21 compute-0 sudo[453906]: pam_unix(sudo:session): session closed for user root
Oct 14 10:04:21 compute-0 sudo[453931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:04:21 compute-0 sudo[453931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:04:21 compute-0 sudo[453931]: pam_unix(sudo:session): session closed for user root
Oct 14 10:04:21 compute-0 sudo[453956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 10:04:21 compute-0 sudo[453956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:04:21 compute-0 nova_compute[259627]: 2025-10-14 10:04:21.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:22 compute-0 podman[454021]: 2025-10-14 10:04:22.07284596 +0000 UTC m=+0.051346371 container create 421f1bdcec635be98eefd884acf9de5f9c62e77a5d4f8308c9d1a615d73197b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 10:04:22 compute-0 systemd[1]: Started libpod-conmon-421f1bdcec635be98eefd884acf9de5f9c62e77a5d4f8308c9d1a615d73197b0.scope.
Oct 14 10:04:22 compute-0 podman[454021]: 2025-10-14 10:04:22.050482851 +0000 UTC m=+0.028983272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:04:22 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:04:22 compute-0 podman[454021]: 2025-10-14 10:04:22.180958703 +0000 UTC m=+0.159459124 container init 421f1bdcec635be98eefd884acf9de5f9c62e77a5d4f8308c9d1a615d73197b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 10:04:22 compute-0 podman[454021]: 2025-10-14 10:04:22.193420349 +0000 UTC m=+0.171920740 container start 421f1bdcec635be98eefd884acf9de5f9c62e77a5d4f8308c9d1a615d73197b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:04:22 compute-0 podman[454021]: 2025-10-14 10:04:22.196932135 +0000 UTC m=+0.175432616 container attach 421f1bdcec635be98eefd884acf9de5f9c62e77a5d4f8308c9d1a615d73197b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 10:04:22 compute-0 beautiful_chaplygin[454037]: 167 167
Oct 14 10:04:22 compute-0 podman[454021]: 2025-10-14 10:04:22.202975553 +0000 UTC m=+0.181475974 container died 421f1bdcec635be98eefd884acf9de5f9c62e77a5d4f8308c9d1a615d73197b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 10:04:22 compute-0 systemd[1]: libpod-421f1bdcec635be98eefd884acf9de5f9c62e77a5d4f8308c9d1a615d73197b0.scope: Deactivated successfully.
Oct 14 10:04:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-832c41bdbe28087245f471933ba113b255b82dd6299565ea2aaf44b42a9de570-merged.mount: Deactivated successfully.
Oct 14 10:04:22 compute-0 podman[454021]: 2025-10-14 10:04:22.260435483 +0000 UTC m=+0.238935904 container remove 421f1bdcec635be98eefd884acf9de5f9c62e77a5d4f8308c9d1a615d73197b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 10:04:22 compute-0 systemd[1]: libpod-conmon-421f1bdcec635be98eefd884acf9de5f9c62e77a5d4f8308c9d1a615d73197b0.scope: Deactivated successfully.
Oct 14 10:04:22 compute-0 ceph-mon[74249]: pgmap v3352: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:22 compute-0 podman[454061]: 2025-10-14 10:04:22.521237642 +0000 UTC m=+0.075865342 container create 9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 10:04:22 compute-0 systemd[1]: Started libpod-conmon-9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3.scope.
Oct 14 10:04:22 compute-0 podman[454061]: 2025-10-14 10:04:22.49629618 +0000 UTC m=+0.050923900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:04:22 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:04:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f764af867b9521c5548facea7bcc9a04473b5d4cd320c63e599cddf59ff25b0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 10:04:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f764af867b9521c5548facea7bcc9a04473b5d4cd320c63e599cddf59ff25b0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 10:04:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f764af867b9521c5548facea7bcc9a04473b5d4cd320c63e599cddf59ff25b0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 10:04:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f764af867b9521c5548facea7bcc9a04473b5d4cd320c63e599cddf59ff25b0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 10:04:22 compute-0 podman[454061]: 2025-10-14 10:04:22.634960303 +0000 UTC m=+0.189587993 container init 9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 10:04:22 compute-0 podman[454061]: 2025-10-14 10:04:22.644657381 +0000 UTC m=+0.199285081 container start 9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 10:04:22 compute-0 podman[454061]: 2025-10-14 10:04:22.64830768 +0000 UTC m=+0.202935430 container attach 9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 10:04:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:04:23 compute-0 angry_montalcini[454077]: {
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:         "osd_id": 2,
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:         "type": "bluestore"
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:     },
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:         "osd_id": 1,
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:         "type": "bluestore"
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:     },
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:         "osd_id": 0,
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:         "type": "bluestore"
Oct 14 10:04:23 compute-0 angry_montalcini[454077]:     }
Oct 14 10:04:23 compute-0 angry_montalcini[454077]: }
Oct 14 10:04:23 compute-0 systemd[1]: libpod-9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3.scope: Deactivated successfully.
Oct 14 10:04:23 compute-0 podman[454061]: 2025-10-14 10:04:23.802107361 +0000 UTC m=+1.356735091 container died 9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 10:04:23 compute-0 systemd[1]: libpod-9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3.scope: Consumed 1.160s CPU time.
Oct 14 10:04:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f764af867b9521c5548facea7bcc9a04473b5d4cd320c63e599cddf59ff25b0c-merged.mount: Deactivated successfully.
Oct 14 10:04:23 compute-0 podman[454061]: 2025-10-14 10:04:23.878935186 +0000 UTC m=+1.433562876 container remove 9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 10:04:23 compute-0 systemd[1]: libpod-conmon-9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3.scope: Deactivated successfully.
Oct 14 10:04:23 compute-0 sudo[453956]: pam_unix(sudo:session): session closed for user root
Oct 14 10:04:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 10:04:23 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:04:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 10:04:23 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:04:23 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 294f3f21-aeed-4e7f-b661-d32f0f5b6460 does not exist
Oct 14 10:04:23 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev b590e88d-8401-480c-8fcd-11fd61bb6c35 does not exist
Oct 14 10:04:24 compute-0 sudo[454124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:04:24 compute-0 sudo[454124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:04:24 compute-0 sudo[454124]: pam_unix(sudo:session): session closed for user root
Oct 14 10:04:24 compute-0 sudo[454149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 10:04:24 compute-0 sudo[454149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:04:24 compute-0 sudo[454149]: pam_unix(sudo:session): session closed for user root
Oct 14 10:04:24 compute-0 nova_compute[259627]: 2025-10-14 10:04:24.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:24 compute-0 ceph-mon[74249]: pgmap v3353: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:24 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:04:24 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:04:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:25 compute-0 nova_compute[259627]: 2025-10-14 10:04:25.438 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:04:25 compute-0 nova_compute[259627]: 2025-10-14 10:04:25.439 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:04:25 compute-0 nova_compute[259627]: 2025-10-14 10:04:25.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:04:25 compute-0 nova_compute[259627]: 2025-10-14 10:04:25.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 10:04:25 compute-0 nova_compute[259627]: 2025-10-14 10:04:25.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 10:04:26 compute-0 nova_compute[259627]: 2025-10-14 10:04:26.000 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 10:04:26 compute-0 ceph-mon[74249]: pgmap v3354: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:26 compute-0 nova_compute[259627]: 2025-10-14 10:04:26.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:26 compute-0 nova_compute[259627]: 2025-10-14 10:04:26.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:04:26 compute-0 nova_compute[259627]: 2025-10-14 10:04:26.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:04:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:27 compute-0 nova_compute[259627]: 2025-10-14 10:04:27.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:04:27 compute-0 nova_compute[259627]: 2025-10-14 10:04:27.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 10:04:28 compute-0 ceph-mon[74249]: pgmap v3355: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:04:29 compute-0 nova_compute[259627]: 2025-10-14 10:04:29.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:30 compute-0 ceph-mon[74249]: pgmap v3356: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:30 compute-0 nova_compute[259627]: 2025-10-14 10:04:30.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:04:30 compute-0 nova_compute[259627]: 2025-10-14 10:04:30.979 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:04:30 compute-0 nova_compute[259627]: 2025-10-14 10:04:30.980 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:04:30 compute-0 nova_compute[259627]: 2025-10-14 10:04:30.981 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:04:30 compute-0 nova_compute[259627]: 2025-10-14 10:04:30.981 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:04:30 compute-0 nova_compute[259627]: 2025-10-14 10:04:30.982 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:04:30 compute-0 nova_compute[259627]: 2025-10-14 10:04:30.983 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:04:31 compute-0 nova_compute[259627]: 2025-10-14 10:04:31.033 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Oct 14 10:04:31 compute-0 nova_compute[259627]: 2025-10-14 10:04:31.045 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct 14 10:04:31 compute-0 nova_compute[259627]: 2025-10-14 10:04:31.045 2 WARNING nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963
Oct 14 10:04:31 compute-0 nova_compute[259627]: 2025-10-14 10:04:31.046 2 WARNING nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf
Oct 14 10:04:31 compute-0 nova_compute[259627]: 2025-10-14 10:04:31.046 2 INFO nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Removable base files: /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf
Oct 14 10:04:31 compute-0 nova_compute[259627]: 2025-10-14 10:04:31.046 2 INFO nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963
Oct 14 10:04:31 compute-0 nova_compute[259627]: 2025-10-14 10:04:31.046 2 INFO nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf
Oct 14 10:04:31 compute-0 nova_compute[259627]: 2025-10-14 10:04:31.047 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct 14 10:04:31 compute-0 nova_compute[259627]: 2025-10-14 10:04:31.047 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct 14 10:04:31 compute-0 nova_compute[259627]: 2025-10-14 10:04:31.047 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Oct 14 10:04:31 compute-0 nova_compute[259627]: 2025-10-14 10:04:31.047 2 INFO nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Oct 14 10:04:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:31 compute-0 nova_compute[259627]: 2025-10-14 10:04:31.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:32 compute-0 ceph-mon[74249]: pgmap v3357: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:04:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:04:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:04:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:04:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:04:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:04:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_10:04:32
Oct 14 10:04:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 10:04:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 10:04:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'images', '.rgw.root', 'default.rgw.log', 'vms', 'default.rgw.control']
Oct 14 10:04:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 10:04:32 compute-0 nova_compute[259627]: 2025-10-14 10:04:32.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:04:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 10:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 10:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 10:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 10:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 10:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 10:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 10:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 10:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 10:04:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 10:04:34 compute-0 nova_compute[259627]: 2025-10-14 10:04:34.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:34 compute-0 ceph-mon[74249]: pgmap v3358: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:36 compute-0 ceph-mon[74249]: pgmap v3359: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:36 compute-0 nova_compute[259627]: 2025-10-14 10:04:36.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:38 compute-0 ceph-mon[74249]: pgmap v3360: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:04:38 compute-0 podman[454175]: 2025-10-14 10:04:38.69830861 +0000 UTC m=+0.098539279 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 14 10:04:38 compute-0 podman[454174]: 2025-10-14 10:04:38.727299642 +0000 UTC m=+0.128122865 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 10:04:39 compute-0 nova_compute[259627]: 2025-10-14 10:04:39.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:40 compute-0 ceph-mon[74249]: pgmap v3361: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:04:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 15 op/s
Oct 14 10:04:41 compute-0 nova_compute[259627]: 2025-10-14 10:04:41.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:42 compute-0 ceph-mon[74249]: pgmap v3362: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 15 op/s
Oct 14 10:04:43 compute-0 nova_compute[259627]: 2025-10-14 10:04:42.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:04:43 compute-0 nova_compute[259627]: 2025-10-14 10:04:43.000 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 14 10:04:43 compute-0 nova_compute[259627]: 2025-10-14 10:04:43.022 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 14 10:04:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 15 op/s
Oct 14 10:04:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:04:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 10:04:44 compute-0 nova_compute[259627]: 2025-10-14 10:04:44.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:44 compute-0 ceph-mon[74249]: pgmap v3363: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 15 op/s
Oct 14 10:04:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Oct 14 10:04:46 compute-0 ceph-mon[74249]: pgmap v3364: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Oct 14 10:04:46 compute-0 nova_compute[259627]: 2025-10-14 10:04:46.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 10:04:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:04:48 compute-0 ceph-mon[74249]: pgmap v3365: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 10:04:49 compute-0 nova_compute[259627]: 2025-10-14 10:04:49.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 10:04:50 compute-0 ceph-mon[74249]: pgmap v3366: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 10:04:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 10:04:51 compute-0 ceph-mon[74249]: pgmap v3367: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 10:04:51 compute-0 podman[454216]: 2025-10-14 10:04:51.66245133 +0000 UTC m=+0.067500777 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 14 10:04:51 compute-0 nova_compute[259627]: 2025-10-14 10:04:51.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:51 compute-0 podman[454215]: 2025-10-14 10:04:51.755111514 +0000 UTC m=+0.156716826 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 14 10:04:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 44 op/s
Oct 14 10:04:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:04:54 compute-0 nova_compute[259627]: 2025-10-14 10:04:54.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:54 compute-0 ceph-mon[74249]: pgmap v3368: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 44 op/s
Oct 14 10:04:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 44 op/s
Oct 14 10:04:56 compute-0 ceph-mon[74249]: pgmap v3369: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 44 op/s
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.476401) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436296476450, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1090, "num_deletes": 251, "total_data_size": 1621822, "memory_usage": 1649472, "flush_reason": "Manual Compaction"}
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436296486488, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 1595659, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69384, "largest_seqno": 70473, "table_properties": {"data_size": 1590326, "index_size": 2792, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11218, "raw_average_key_size": 19, "raw_value_size": 1579719, "raw_average_value_size": 2771, "num_data_blocks": 125, "num_entries": 570, "num_filter_entries": 570, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436188, "oldest_key_time": 1760436188, "file_creation_time": 1760436296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 10149 microseconds, and 4778 cpu microseconds.
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.486545) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 1595659 bytes OK
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.486572) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.488467) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.488492) EVENT_LOG_v1 {"time_micros": 1760436296488484, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.488516) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 1616773, prev total WAL file size 1616773, number of live WAL files 2.
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.489534) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(1558KB)], [164(9461KB)]
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436296489630, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 11284109, "oldest_snapshot_seqno": -1}
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8825 keys, 9493849 bytes, temperature: kUnknown
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436296554898, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 9493849, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9439517, "index_size": 31179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22085, "raw_key_size": 231450, "raw_average_key_size": 26, "raw_value_size": 9286389, "raw_average_value_size": 1052, "num_data_blocks": 1200, "num_entries": 8825, "num_filter_entries": 8825, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760436296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.555354) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 9493849 bytes
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.556990) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.5 rd, 145.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.2 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(13.0) write-amplify(5.9) OK, records in: 9339, records dropped: 514 output_compression: NoCompression
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.557028) EVENT_LOG_v1 {"time_micros": 1760436296557006, "job": 102, "event": "compaction_finished", "compaction_time_micros": 65413, "compaction_time_cpu_micros": 46387, "output_level": 6, "num_output_files": 1, "total_output_size": 9493849, "num_input_records": 9339, "num_output_records": 8825, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436296557814, "job": 102, "event": "table_file_deletion", "file_number": 166}
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436296561346, "job": 102, "event": "table_file_deletion", "file_number": 164}
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.489386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.561397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.561406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.561410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.561414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:04:56 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.561418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:04:56 compute-0 nova_compute[259627]: 2025-10-14 10:04:56.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:56 compute-0 nova_compute[259627]: 2025-10-14 10:04:56.996 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:04:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Oct 14 10:04:58 compute-0 ceph-mon[74249]: pgmap v3370: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Oct 14 10:04:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:04:59 compute-0 nova_compute[259627]: 2025-10-14 10:04:59.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:04:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:00 compute-0 ceph-mon[74249]: pgmap v3371: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:01 compute-0 nova_compute[259627]: 2025-10-14 10:05:01.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:02 compute-0 ceph-mon[74249]: pgmap v3372: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:05:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:05:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:05:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:05:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:05:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:05:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:05:04 compute-0 nova_compute[259627]: 2025-10-14 10:05:04.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:04 compute-0 ceph-mon[74249]: pgmap v3373: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 10:05:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/873002386' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 10:05:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 10:05:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/873002386' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 10:05:06 compute-0 ceph-mon[74249]: pgmap v3374: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/873002386' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 10:05:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/873002386' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 10:05:06 compute-0 nova_compute[259627]: 2025-10-14 10:05:06.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:05:07.084 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:05:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:05:07.085 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:05:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:05:07.085 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:05:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:05:08 compute-0 ceph-mon[74249]: pgmap v3375: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:09 compute-0 nova_compute[259627]: 2025-10-14 10:05:09.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:09 compute-0 podman[454261]: 2025-10-14 10:05:09.684399303 +0000 UTC m=+0.089113997 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 14 10:05:09 compute-0 podman[454262]: 2025-10-14 10:05:09.68466442 +0000 UTC m=+0.080345153 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 10:05:10 compute-0 ceph-mon[74249]: pgmap v3376: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:11 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:11 compute-0 ceph-mon[74249]: pgmap v3377: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:11 compute-0 nova_compute[259627]: 2025-10-14 10:05:11.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:12 compute-0 nova_compute[259627]: 2025-10-14 10:05:12.135 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:05:13 compute-0 nova_compute[259627]: 2025-10-14 10:05:13.001 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:05:13 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:13 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:05:14 compute-0 nova_compute[259627]: 2025-10-14 10:05:14.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:14 compute-0 ceph-mon[74249]: pgmap v3378: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:15 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:15 compute-0 ceph-mon[74249]: pgmap v3379: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:15 compute-0 nova_compute[259627]: 2025-10-14 10:05:15.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:05:16 compute-0 nova_compute[259627]: 2025-10-14 10:05:16.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:17 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:18 compute-0 ceph-mon[74249]: pgmap v3380: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:18 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:05:18 compute-0 nova_compute[259627]: 2025-10-14 10:05:18.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.011 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.011 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.011 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.011 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.012 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 10:05:19 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:19 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 10:05:19 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3398986526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.490 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 10:05:19 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3398986526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.724 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.725 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3551MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.726 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.726 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.817 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.818 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.836 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.877 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.878 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.898 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.945 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 14 10:05:19 compute-0 nova_compute[259627]: 2025-10-14 10:05:19.962 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 10:05:20 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 10:05:20 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3444473545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:05:20 compute-0 nova_compute[259627]: 2025-10-14 10:05:20.417 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 10:05:20 compute-0 nova_compute[259627]: 2025-10-14 10:05:20.425 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 10:05:20 compute-0 nova_compute[259627]: 2025-10-14 10:05:20.447 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 10:05:20 compute-0 nova_compute[259627]: 2025-10-14 10:05:20.449 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 10:05:20 compute-0 nova_compute[259627]: 2025-10-14 10:05:20.450 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:05:20 compute-0 ceph-mon[74249]: pgmap v3381: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:20 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3444473545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 10:05:21 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:21 compute-0 nova_compute[259627]: 2025-10-14 10:05:21.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:22 compute-0 ceph-mon[74249]: pgmap v3382: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:22 compute-0 podman[454344]: 2025-10-14 10:05:22.687243016 +0000 UTC m=+0.091566218 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 10:05:22 compute-0 podman[454343]: 2025-10-14 10:05:22.691292596 +0000 UTC m=+0.104476685 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 14 10:05:23 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:23 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:05:24 compute-0 sudo[454388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:05:24 compute-0 sudo[454388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:05:24 compute-0 sudo[454388]: pam_unix(sudo:session): session closed for user root
Oct 14 10:05:24 compute-0 sudo[454413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:05:24 compute-0 sudo[454413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:05:24 compute-0 sudo[454413]: pam_unix(sudo:session): session closed for user root
Oct 14 10:05:24 compute-0 sudo[454438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:05:24 compute-0 sudo[454438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:05:24 compute-0 sudo[454438]: pam_unix(sudo:session): session closed for user root
Oct 14 10:05:24 compute-0 nova_compute[259627]: 2025-10-14 10:05:24.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:24 compute-0 sudo[454463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 14 10:05:24 compute-0 sudo[454463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:05:24 compute-0 ceph-mon[74249]: pgmap v3383: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:25 compute-0 sudo[454463]: pam_unix(sudo:session): session closed for user root
Oct 14 10:05:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 10:05:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:05:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 10:05:25 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 10:05:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 10:05:25 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:05:25 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 86f787f9-97c4-4758-b166-862fcac6909c does not exist
Oct 14 10:05:25 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev c7422c25-4969-489d-99a7-b748fb7d800b does not exist
Oct 14 10:05:25 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 1c1140b8-94c9-406e-9fbc-2e8049ff2f45 does not exist
Oct 14 10:05:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 10:05:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 10:05:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 10:05:25 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 10:05:25 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 10:05:25 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:05:25 compute-0 sudo[454518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:05:25 compute-0 sudo[454518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:05:25 compute-0 sudo[454518]: pam_unix(sudo:session): session closed for user root
Oct 14 10:05:25 compute-0 sudo[454543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:05:25 compute-0 sudo[454543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:05:25 compute-0 sudo[454543]: pam_unix(sudo:session): session closed for user root
Oct 14 10:05:25 compute-0 sudo[454568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:05:25 compute-0 sudo[454568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:05:25 compute-0 sudo[454568]: pam_unix(sudo:session): session closed for user root
Oct 14 10:05:25 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:25 compute-0 sudo[454593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 14 10:05:25 compute-0 sudo[454593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:05:25 compute-0 nova_compute[259627]: 2025-10-14 10:05:25.446 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:05:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:05:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 10:05:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:05:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 10:05:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 10:05:25 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:05:25 compute-0 podman[454658]: 2025-10-14 10:05:25.869223082 +0000 UTC m=+0.071898965 container create d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 10:05:25 compute-0 systemd[1]: Started libpod-conmon-d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485.scope.
Oct 14 10:05:25 compute-0 podman[454658]: 2025-10-14 10:05:25.840243071 +0000 UTC m=+0.042919084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:05:25 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:05:25 compute-0 podman[454658]: 2025-10-14 10:05:25.96449269 +0000 UTC m=+0.167168613 container init d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 10:05:25 compute-0 podman[454658]: 2025-10-14 10:05:25.977288834 +0000 UTC m=+0.179964727 container start d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 10:05:25 compute-0 nova_compute[259627]: 2025-10-14 10:05:25.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:05:25 compute-0 nova_compute[259627]: 2025-10-14 10:05:25.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 10:05:25 compute-0 nova_compute[259627]: 2025-10-14 10:05:25.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 10:05:25 compute-0 podman[454658]: 2025-10-14 10:05:25.982085161 +0000 UTC m=+0.184761114 container attach d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:05:25 compute-0 angry_euler[454674]: 167 167
Oct 14 10:05:25 compute-0 systemd[1]: libpod-d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485.scope: Deactivated successfully.
Oct 14 10:05:25 compute-0 conmon[454674]: conmon d8883d62b5b613c18216 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485.scope/container/memory.events
Oct 14 10:05:25 compute-0 podman[454658]: 2025-10-14 10:05:25.984997493 +0000 UTC m=+0.187673396 container died d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 10:05:26 compute-0 nova_compute[259627]: 2025-10-14 10:05:26.008 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 10:05:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-596d4f495d072a7009a61d5cfb7f80f763b723d3212b5754f90c5ebc6cf853e7-merged.mount: Deactivated successfully.
Oct 14 10:05:26 compute-0 podman[454658]: 2025-10-14 10:05:26.037194614 +0000 UTC m=+0.239870517 container remove d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 14 10:05:26 compute-0 systemd[1]: libpod-conmon-d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485.scope: Deactivated successfully.
Oct 14 10:05:26 compute-0 podman[454698]: 2025-10-14 10:05:26.296866155 +0000 UTC m=+0.065625001 container create c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jones, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 10:05:26 compute-0 systemd[1]: Started libpod-conmon-c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec.scope.
Oct 14 10:05:26 compute-0 podman[454698]: 2025-10-14 10:05:26.279894259 +0000 UTC m=+0.048653125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:05:26 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9fd3ce863412f30b5228a7cc414e26b284072cae82a844ebd9a9c08cc57cda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 10:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9fd3ce863412f30b5228a7cc414e26b284072cae82a844ebd9a9c08cc57cda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 10:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9fd3ce863412f30b5228a7cc414e26b284072cae82a844ebd9a9c08cc57cda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 10:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9fd3ce863412f30b5228a7cc414e26b284072cae82a844ebd9a9c08cc57cda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 10:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9fd3ce863412f30b5228a7cc414e26b284072cae82a844ebd9a9c08cc57cda/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 10:05:26 compute-0 podman[454698]: 2025-10-14 10:05:26.411680362 +0000 UTC m=+0.180439238 container init c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jones, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 10:05:26 compute-0 podman[454698]: 2025-10-14 10:05:26.423156684 +0000 UTC m=+0.191915530 container start c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 10:05:26 compute-0 podman[454698]: 2025-10-14 10:05:26.427523891 +0000 UTC m=+0.196282797 container attach c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 10:05:26 compute-0 ceph-mon[74249]: pgmap v3384: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:26 compute-0 nova_compute[259627]: 2025-10-14 10:05:26.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:26 compute-0 nova_compute[259627]: 2025-10-14 10:05:26.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:05:26 compute-0 nova_compute[259627]: 2025-10-14 10:05:26.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:05:26 compute-0 nova_compute[259627]: 2025-10-14 10:05:26.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:05:27 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:27 compute-0 hardcore_jones[454714]: --> passed data devices: 0 physical, 3 LVM
Oct 14 10:05:27 compute-0 hardcore_jones[454714]: --> relative data size: 1.0
Oct 14 10:05:27 compute-0 hardcore_jones[454714]: --> All data devices are unavailable
Oct 14 10:05:27 compute-0 systemd[1]: libpod-c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec.scope: Deactivated successfully.
Oct 14 10:05:27 compute-0 systemd[1]: libpod-c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec.scope: Consumed 1.136s CPU time.
Oct 14 10:05:27 compute-0 podman[454698]: 2025-10-14 10:05:27.638369041 +0000 UTC m=+1.407127957 container died c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:05:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd9fd3ce863412f30b5228a7cc414e26b284072cae82a844ebd9a9c08cc57cda-merged.mount: Deactivated successfully.
Oct 14 10:05:27 compute-0 podman[454698]: 2025-10-14 10:05:27.725901669 +0000 UTC m=+1.494660555 container remove c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jones, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 10:05:27 compute-0 systemd[1]: libpod-conmon-c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec.scope: Deactivated successfully.
Oct 14 10:05:27 compute-0 sudo[454593]: pam_unix(sudo:session): session closed for user root
Oct 14 10:05:27 compute-0 sudo[454758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:05:27 compute-0 sudo[454758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:05:27 compute-0 sudo[454758]: pam_unix(sudo:session): session closed for user root
Oct 14 10:05:27 compute-0 sudo[454783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:05:27 compute-0 nova_compute[259627]: 2025-10-14 10:05:27.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 10:05:27 compute-0 nova_compute[259627]: 2025-10-14 10:05:27.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 10:05:27 compute-0 sudo[454783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:05:27 compute-0 sudo[454783]: pam_unix(sudo:session): session closed for user root
Oct 14 10:05:28 compute-0 sudo[454808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:05:28 compute-0 sudo[454808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:05:28 compute-0 sudo[454808]: pam_unix(sudo:session): session closed for user root
Oct 14 10:05:28 compute-0 sudo[454833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- lvm list --format json
Oct 14 10:05:28 compute-0 sudo[454833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:05:28 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:05:28 compute-0 ceph-mon[74249]: pgmap v3385: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:28 compute-0 podman[454899]: 2025-10-14 10:05:28.604419035 +0000 UTC m=+0.068741217 container create 91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ritchie, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 10:05:28 compute-0 systemd[1]: Started libpod-conmon-91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5.scope.
Oct 14 10:05:28 compute-0 podman[454899]: 2025-10-14 10:05:28.573311692 +0000 UTC m=+0.037633884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:05:28 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:05:28 compute-0 podman[454899]: 2025-10-14 10:05:28.709554005 +0000 UTC m=+0.173876287 container init 91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:05:28 compute-0 podman[454899]: 2025-10-14 10:05:28.716660899 +0000 UTC m=+0.180983102 container start 91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ritchie, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 14 10:05:28 compute-0 podman[454899]: 2025-10-14 10:05:28.720092594 +0000 UTC m=+0.184414786 container attach 91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 10:05:28 compute-0 practical_ritchie[454915]: 167 167
Oct 14 10:05:28 compute-0 systemd[1]: libpod-91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5.scope: Deactivated successfully.
Oct 14 10:05:28 compute-0 conmon[454915]: conmon 91b55561a3be60450a6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5.scope/container/memory.events
Oct 14 10:05:28 compute-0 podman[454899]: 2025-10-14 10:05:28.724376619 +0000 UTC m=+0.188698841 container died 91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 10:05:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-041bfb483b8a846d1f7bfe7030eeabf19ec58cd95f96048134b5240930404ef9-merged.mount: Deactivated successfully.
Oct 14 10:05:28 compute-0 podman[454899]: 2025-10-14 10:05:28.781160612 +0000 UTC m=+0.245482844 container remove 91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 10:05:28 compute-0 systemd[1]: libpod-conmon-91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5.scope: Deactivated successfully.
Oct 14 10:05:29 compute-0 podman[454938]: 2025-10-14 10:05:29.032676024 +0000 UTC m=+0.077189465 container create ec5da45eee99be9d55e0130c922e952aef49b57e84b1e17f7d1ddd7d8a2fd8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_johnson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 10:05:29 compute-0 systemd[1]: Started libpod-conmon-ec5da45eee99be9d55e0130c922e952aef49b57e84b1e17f7d1ddd7d8a2fd8aa.scope.
Oct 14 10:05:29 compute-0 podman[454938]: 2025-10-14 10:05:29.000684799 +0000 UTC m=+0.045198260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:05:29 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:05:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc09856a43c76325644c9255dada90899cd5afad8fdd4dc3e4e4dbc93723872c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 10:05:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc09856a43c76325644c9255dada90899cd5afad8fdd4dc3e4e4dbc93723872c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 10:05:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc09856a43c76325644c9255dada90899cd5afad8fdd4dc3e4e4dbc93723872c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 10:05:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc09856a43c76325644c9255dada90899cd5afad8fdd4dc3e4e4dbc93723872c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 10:05:29 compute-0 podman[454938]: 2025-10-14 10:05:29.138763777 +0000 UTC m=+0.183277268 container init ec5da45eee99be9d55e0130c922e952aef49b57e84b1e17f7d1ddd7d8a2fd8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_johnson, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 10:05:29 compute-0 podman[454938]: 2025-10-14 10:05:29.15153858 +0000 UTC m=+0.196052011 container start ec5da45eee99be9d55e0130c922e952aef49b57e84b1e17f7d1ddd7d8a2fd8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_johnson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 10:05:29 compute-0 podman[454938]: 2025-10-14 10:05:29.157741882 +0000 UTC m=+0.202255373 container attach ec5da45eee99be9d55e0130c922e952aef49b57e84b1e17f7d1ddd7d8a2fd8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct 14 10:05:29 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:29 compute-0 nova_compute[259627]: 2025-10-14 10:05:29.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:29 compute-0 ceph-mon[74249]: pgmap v3386: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]: {
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:     "0": [
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:         {
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "devices": [
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "/dev/loop3"
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             ],
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "lv_name": "ceph_lv0",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "lv_size": "21470642176",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "name": "ceph_lv0",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "tags": {
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.cluster_name": "ceph",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.crush_device_class": "",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.encrypted": "0",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.osd_id": "0",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.type": "block",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.vdo": "0"
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             },
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "type": "block",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "vg_name": "ceph_vg0"
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:         }
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:     ],
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:     "1": [
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:         {
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "devices": [
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "/dev/loop4"
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             ],
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "lv_name": "ceph_lv1",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "lv_size": "21470642176",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "name": "ceph_lv1",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "tags": {
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.cluster_name": "ceph",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.crush_device_class": "",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.encrypted": "0",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.osd_id": "1",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.type": "block",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.vdo": "0"
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             },
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "type": "block",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "vg_name": "ceph_vg1"
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:         }
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:     ],
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:     "2": [
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:         {
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "devices": [
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "/dev/loop5"
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             ],
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "lv_name": "ceph_lv2",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "lv_size": "21470642176",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "name": "ceph_lv2",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "tags": {
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.cephx_lockbox_secret": "",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.cluster_name": "ceph",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.crush_device_class": "",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.encrypted": "0",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.osd_id": "2",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.type": "block",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:                 "ceph.vdo": "0"
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             },
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "type": "block",
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:             "vg_name": "ceph_vg2"
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:         }
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]:     ]
Oct 14 10:05:29 compute-0 ecstatic_johnson[454954]: }
Oct 14 10:05:29 compute-0 systemd[1]: libpod-ec5da45eee99be9d55e0130c922e952aef49b57e84b1e17f7d1ddd7d8a2fd8aa.scope: Deactivated successfully.
Oct 14 10:05:29 compute-0 podman[454938]: 2025-10-14 10:05:29.945362879 +0000 UTC m=+0.989876320 container died ec5da45eee99be9d55e0130c922e952aef49b57e84b1e17f7d1ddd7d8a2fd8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_johnson, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 14 10:05:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc09856a43c76325644c9255dada90899cd5afad8fdd4dc3e4e4dbc93723872c-merged.mount: Deactivated successfully.
Oct 14 10:05:30 compute-0 podman[454938]: 2025-10-14 10:05:30.014725311 +0000 UTC m=+1.059238732 container remove ec5da45eee99be9d55e0130c922e952aef49b57e84b1e17f7d1ddd7d8a2fd8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_johnson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:05:30 compute-0 systemd[1]: libpod-conmon-ec5da45eee99be9d55e0130c922e952aef49b57e84b1e17f7d1ddd7d8a2fd8aa.scope: Deactivated successfully.
Oct 14 10:05:30 compute-0 sudo[454833]: pam_unix(sudo:session): session closed for user root
Oct 14 10:05:30 compute-0 sudo[454977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:05:30 compute-0 sudo[454977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:05:30 compute-0 sudo[454977]: pam_unix(sudo:session): session closed for user root
Oct 14 10:05:30 compute-0 sudo[455002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 14 10:05:30 compute-0 sudo[455002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:05:30 compute-0 sudo[455002]: pam_unix(sudo:session): session closed for user root
Oct 14 10:05:30 compute-0 sudo[455027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:05:30 compute-0 sudo[455027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:05:30 compute-0 sudo[455027]: pam_unix(sudo:session): session closed for user root
Oct 14 10:05:30 compute-0 sshd-session[455028]: Accepted publickey for zuul from 192.168.122.10 port 57776 ssh2: ECDSA SHA256:jaGWHGBmEwGLhBs5A5z51rEw7f54kxwV4dpIRk+zLbs
Oct 14 10:05:30 compute-0 systemd-logind[799]: New session 60 of user zuul.
Oct 14 10:05:30 compute-0 systemd[1]: Started Session 60 of User zuul.
Oct 14 10:05:30 compute-0 sshd-session[455028]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 14 10:05:30 compute-0 sudo[455054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -- raw list --format json
Oct 14 10:05:30 compute-0 sudo[455054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:05:30 compute-0 sudo[455081]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Oct 14 10:05:30 compute-0 sudo[455081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 14 10:05:30 compute-0 podman[455156]: 2025-10-14 10:05:30.943076429 +0000 UTC m=+0.056918368 container create 7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_perlman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 10:05:31 compute-0 podman[455156]: 2025-10-14 10:05:30.91379381 +0000 UTC m=+0.027635759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:05:31 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:31 compute-0 systemd[1]: Started libpod-conmon-7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a.scope.
Oct 14 10:05:31 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:05:31 compute-0 podman[455156]: 2025-10-14 10:05:31.656976576 +0000 UTC m=+0.770818475 container init 7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_perlman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 10:05:31 compute-0 podman[455156]: 2025-10-14 10:05:31.669428392 +0000 UTC m=+0.783270291 container start 7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_perlman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 10:05:31 compute-0 podman[455156]: 2025-10-14 10:05:31.674284911 +0000 UTC m=+0.788126810 container attach 7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_perlman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 10:05:31 compute-0 crazy_perlman[455172]: 167 167
Oct 14 10:05:31 compute-0 systemd[1]: libpod-7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a.scope: Deactivated successfully.
Oct 14 10:05:31 compute-0 conmon[455172]: conmon 7ab74dcf56e086923831 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a.scope/container/memory.events
Oct 14 10:05:31 compute-0 podman[455156]: 2025-10-14 10:05:31.678990286 +0000 UTC m=+0.792832195 container died 7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 10:05:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f15d3ec28b42f1a8f1e3168cbee9e51b656ae97f471622d855c005b47aa6ab4-merged.mount: Deactivated successfully.
Oct 14 10:05:31 compute-0 podman[455156]: 2025-10-14 10:05:31.73499899 +0000 UTC m=+0.848840909 container remove 7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_perlman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 10:05:31 compute-0 systemd[1]: libpod-conmon-7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a.scope: Deactivated successfully.
Oct 14 10:05:31 compute-0 podman[455237]: 2025-10-14 10:05:31.933644485 +0000 UTC m=+0.059790778 container create e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 10:05:31 compute-0 systemd[1]: Started libpod-conmon-e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371.scope.
Oct 14 10:05:31 compute-0 nova_compute[259627]: 2025-10-14 10:05:31.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:31 compute-0 systemd[1]: Started libcrun container.
Oct 14 10:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3bf0c281ce26df1d3b5b39960188f5057b284ba19414490c99bd79cc8e85fb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 10:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3bf0c281ce26df1d3b5b39960188f5057b284ba19414490c99bd79cc8e85fb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 10:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3bf0c281ce26df1d3b5b39960188f5057b284ba19414490c99bd79cc8e85fb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 10:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3bf0c281ce26df1d3b5b39960188f5057b284ba19414490c99bd79cc8e85fb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 10:05:32 compute-0 podman[455237]: 2025-10-14 10:05:31.914255399 +0000 UTC m=+0.040401702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 10:05:32 compute-0 podman[455237]: 2025-10-14 10:05:32.018322502 +0000 UTC m=+0.144468795 container init e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 10:05:32 compute-0 podman[455237]: 2025-10-14 10:05:32.025357145 +0000 UTC m=+0.151503448 container start e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mahavira, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 10:05:32 compute-0 podman[455237]: 2025-10-14 10:05:32.028852591 +0000 UTC m=+0.154998884 container attach e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mahavira, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 10:05:32 compute-0 ceph-mon[74249]: pgmap v3387: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:05:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:05:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:05:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:05:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:05:32 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:05:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_10:05:32
Oct 14 10:05:32 compute-0 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 10:05:32 compute-0 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 10:05:32 compute-0 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', '.mgr']
Oct 14 10:05:32 compute-0 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 10:05:33 compute-0 competent_mahavira[455257]: {
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:     "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:         "osd_id": 2,
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:         "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:         "type": "bluestore"
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:     },
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:     "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:         "osd_id": 1,
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:         "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:         "type": "bluestore"
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:     },
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:     "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:         "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:         "osd_id": 0,
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:         "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:         "type": "bluestore"
Oct 14 10:05:33 compute-0 competent_mahavira[455257]:     }
Oct 14 10:05:33 compute-0 competent_mahavira[455257]: }
Oct 14 10:05:33 compute-0 systemd[1]: libpod-e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371.scope: Deactivated successfully.
Oct 14 10:05:33 compute-0 systemd[1]: libpod-e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371.scope: Consumed 1.085s CPU time.
Oct 14 10:05:33 compute-0 podman[455385]: 2025-10-14 10:05:33.176987433 +0000 UTC m=+0.044125964 container died e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 10:05:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3bf0c281ce26df1d3b5b39960188f5057b284ba19414490c99bd79cc8e85fb3-merged.mount: Deactivated successfully.
Oct 14 10:05:33 compute-0 podman[455385]: 2025-10-14 10:05:33.234809382 +0000 UTC m=+0.101947893 container remove e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mahavira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 10:05:33 compute-0 systemd[1]: libpod-conmon-e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371.scope: Deactivated successfully.
Oct 14 10:05:33 compute-0 sudo[455054]: pam_unix(sudo:session): session closed for user root
Oct 14 10:05:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 10:05:33 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:05:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 10:05:33 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23289 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:33 compute-0 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:05:33 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 07a8a1a4-e53c-41c2-a6cb-42d94ca2ff3f does not exist
Oct 14 10:05:33 compute-0 ceph-mgr[74543]: [progress WARNING root] complete: ev 6bf32b6e-2595-4401-9ccb-ca0eaf83efcd does not exist
Oct 14 10:05:33 compute-0 sudo[455403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 14 10:05:33 compute-0 sudo[455403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:05:33 compute-0 sudo[455403]: pam_unix(sudo:session): session closed for user root
Oct 14 10:05:33 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:33 compute-0 sudo[455431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 14 10:05:33 compute-0 sudo[455431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 14 10:05:33 compute-0 sudo[455431]: pam_unix(sudo:session): session closed for user root
Oct 14 10:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 10:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 10:05:33 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 10:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 10:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 10:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 10:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 10:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 10:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 10:05:33 compute-0 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 10:05:33 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23291 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:34 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:05:34 compute-0 ceph-mon[74249]: from='client.23289 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:34 compute-0 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 10:05:34 compute-0 ceph-mon[74249]: pgmap v3388: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:34 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 14 10:05:34 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2203869758' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 14 10:05:34 compute-0 nova_compute[259627]: 2025-10-14 10:05:34.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:35 compute-0 ceph-mon[74249]: from='client.23291 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:35 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2203869758' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 14 10:05:35 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:36 compute-0 ceph-mon[74249]: pgmap v3389: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:37 compute-0 nova_compute[259627]: 2025-10-14 10:05:37.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:37 compute-0 ovs-vsctl[455561]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 14 10:05:37 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:38 compute-0 virtqemud[259351]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 14 10:05:38 compute-0 ceph-mon[74249]: pgmap v3390: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:38 compute-0 virtqemud[259351]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 14 10:05:38 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:05:38 compute-0 virtqemud[259351]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 14 10:05:39 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: cache status {prefix=cache status} (starting...)
Oct 14 10:05:39 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: client ls {prefix=client ls} (starting...)
Oct 14 10:05:39 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:39 compute-0 lvm[455904]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 14 10:05:39 compute-0 lvm[455904]: VG ceph_vg1 finished
Oct 14 10:05:39 compute-0 nova_compute[259627]: 2025-10-14 10:05:39.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:39 compute-0 lvm[455937]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 14 10:05:39 compute-0 lvm[455937]: VG ceph_vg0 finished
Oct 14 10:05:39 compute-0 lvm[455942]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 14 10:05:39 compute-0 lvm[455942]: VG ceph_vg2 finished
Oct 14 10:05:39 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23295 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:39 compute-0 podman[455948]: 2025-10-14 10:05:39.828289628 +0000 UTC m=+0.084283519 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 14 10:05:39 compute-0 podman[455946]: 2025-10-14 10:05:39.84516083 +0000 UTC m=+0.105207730 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 10:05:40 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: damage ls {prefix=damage ls} (starting...)
Oct 14 10:05:40 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23297 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:40 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump loads {prefix=dump loads} (starting...)
Oct 14 10:05:40 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 14 10:05:40 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 14 10:05:40 compute-0 ceph-mon[74249]: pgmap v3391: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:40 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 14 10:05:40 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 14 10:05:40 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 14 10:05:40 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3283883360' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 14 10:05:40 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23303 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:40 compute-0 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T10:05:40.988+0000 7f6b82b53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 14 10:05:40 compute-0 ceph-mgr[74543]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 14 10:05:41 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 14 10:05:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 10:05:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3347199880' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:05:41 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 14 10:05:41 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 14 10:05:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2543063755' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 14 10:05:41 compute-0 ceph-mon[74249]: from='client.23295 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:41 compute-0 ceph-mon[74249]: from='client.23297 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:41 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3283883360' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 14 10:05:41 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3347199880' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 10:05:41 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2543063755' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 14 10:05:41 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: ops {prefix=ops} (starting...)
Oct 14 10:05:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 14 10:05:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2308033935' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 14 10:05:41 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 14 10:05:41 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/569880885' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 14 10:05:42 compute-0 nova_compute[259627]: 2025-10-14 10:05:42.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 14 10:05:42 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3827175330' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 14 10:05:42 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: session ls {prefix=session ls} (starting...)
Oct 14 10:05:42 compute-0 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: status {prefix=status} (starting...)
Oct 14 10:05:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 14 10:05:42 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1854098007' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 14 10:05:42 compute-0 ceph-mon[74249]: from='client.23303 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:42 compute-0 ceph-mon[74249]: pgmap v3392: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:42 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2308033935' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 14 10:05:42 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/569880885' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 14 10:05:42 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3827175330' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 14 10:05:42 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1854098007' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 14 10:05:42 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23317 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:42 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 14 10:05:42 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2027536602' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 14 10:05:42 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23321 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 14 10:05:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3533733668' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 14 10:05:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 14 10:05:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3657510336' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 14 10:05:43 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:43 compute-0 ceph-mon[74249]: from='client.23317 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:43 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2027536602' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 14 10:05:43 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3533733668' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 14 10:05:43 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3657510336' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 14 10:05:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:05:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 14 10:05:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3239420005' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 14 10:05:43 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 14 10:05:43 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1430673640' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 14 10:05:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 14 10:05:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3507163686' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23333 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 14 10:05:44 compute-0 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T10:05:44.191+0000 7f6b82b53640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23335 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:44 compute-0 ceph-mon[74249]: from='client.23321 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:44 compute-0 ceph-mon[74249]: pgmap v3393: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:44 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3239420005' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 14 10:05:44 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1430673640' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 14 10:05:44 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3507163686' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 14 10:05:44 compute-0 nova_compute[259627]: 2025-10-14 10:05:44.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:44 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 14 10:05:44 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1618488327' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 14 10:05:44 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23339 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 14 10:05:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2227336034' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 14 10:05:45 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23343 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:45 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:45 compute-0 ceph-mon[74249]: from='client.23333 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:45 compute-0 ceph-mon[74249]: from='client.23335 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:45 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1618488327' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 14 10:05:45 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2227336034' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 14 10:05:45 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23347 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:45 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 14 10:05:45 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1251670558' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 14 10:05:45 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23349 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 14 10:05:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3227497799' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:44.885935+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279150592 unmapped: 45400064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:45.886101+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3002713 data_alloc: 218103808 data_used: 4149248
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279158784 unmapped: 45391872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:46.886250+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef373000/0x0/0x4ffc00000, data 0x15184fd/0x16ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:47.886412+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:48.886599+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:49.886776+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:50.886919+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3042873 data_alloc: 218103808 data_used: 9805824
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:51.887106+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef373000/0x0/0x4ffc00000, data 0x15184fd/0x16ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef373000/0x0/0x4ffc00000, data 0x15184fd/0x16ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:52.887191+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:53.887407+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:54.887591+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:55.887733+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.889083862s of 11.892253876s, submitted: 1
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048489 data_alloc: 218103808 data_used: 9805824
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:56.887871+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:57.888002+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:58.888174+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:59.888337+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:00.888515+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3050563 data_alloc: 218103808 data_used: 9805824
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:01.888652+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:02.888786+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:03.888953+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:04.889260+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:05.889454+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3050563 data_alloc: 218103808 data_used: 9805824
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:06.889607+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:07.889734+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:08.889903+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 35K writes, 139K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.74 writes per sync, written: 0.13 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4462 writes, 17K keys, 4462 commit groups, 1.0 writes per commit group, ingest: 20.06 MB, 0.03 MB/s
                                           Interval WAL: 4462 writes, 1787 syncs, 2.50 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:09.890064+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:10.890224+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3050563 data_alloc: 218103808 data_used: 9805824
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd6e800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.724911690s of 15.823020935s, submitted: 21
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:11.890351+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6e800 session 0x55b3cbf234a0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70b000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70b000 session 0x55b3ccad7c20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d1d14400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d14400 session 0x55b3ccadbe00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d1d14400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,1])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d14400 session 0x55b3ce1261e0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cb6afc00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3ce58bc20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45416448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1f3000/0x0/0x4ffc00000, data 0x169755f/0x182b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:12.890509+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45416448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:13.890631+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:14.890815+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:15.890976+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3065094 data_alloc: 218103808 data_used: 9809920
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:16.891140+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:17.891325+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1f3000/0x0/0x4ffc00000, data 0x169755f/0x182b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb4c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb4c00 session 0x55b3ce5af680
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:18.891483+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02c000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02c000 session 0x55b3cde57680
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:19.891659+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd6f000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3cba94000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd6f000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3ccadb0e0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279453696 unmapped: 45096960 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:20.891845+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf613800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf611400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3068328 data_alloc: 218103808 data_used: 9809920
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279453696 unmapped: 45096960 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:21.892061+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1ce000/0x0/0x4ffc00000, data 0x16bb56f/0x1850000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:22.892177+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:23.892291+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:24.892492+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:25.892679+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1ce000/0x0/0x4ffc00000, data 0x16bb56f/0x1850000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3073608 data_alloc: 218103808 data_used: 10457088
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:26.892922+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:27.893109+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.005136490s of 16.128065109s, submitted: 29
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:28.893309+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1cc000/0x0/0x4ffc00000, data 0x16bc56f/0x1851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:29.893546+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:30.893736+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3073916 data_alloc: 218103808 data_used: 10457088
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:31.893872+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eebb8000/0x0/0x4ffc00000, data 0x1cd156f/0x1e66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279379968 unmapped: 45170688 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:32.894098+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279257088 unmapped: 45293568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:33.894268+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:34.894464+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:35.894616+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3146050 data_alloc: 218103808 data_used: 10694656
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:36.894787+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea54000/0x0/0x4ffc00000, data 0x1e2d56f/0x1fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:37.894942+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:38.895081+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.522329330s of 11.824616432s, submitted: 88
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:39.895253+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:40.895424+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea3d000/0x0/0x4ffc00000, data 0x1e4c56f/0x1fe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3140686 data_alloc: 218103808 data_used: 10698752
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:41.895615+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:42.895782+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:43.895953+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:44.896234+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613800 session 0x55b3cdc6cd20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf611400 session 0x55b3cb9981e0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cd84ec00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:45.896365+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84ec00 session 0x55b3cbe8dc20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061845 data_alloc: 218103808 data_used: 9809920
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:46.896508+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef0d8000/0x0/0x4ffc00000, data 0x15ca4fd/0x175d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:47.896658+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:48.896821+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:49.897000+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:50.897218+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061845 data_alloc: 218103808 data_used: 9809920
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:51.897443+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef0d8000/0x0/0x4ffc00000, data 0x15ca4fd/0x175d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:52.897571+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.392840385s of 13.488458633s, submitted: 24
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02dc00 session 0x55b3ce5af4a0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e7800 session 0x55b3cd3b2f00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02dc00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:53.897683+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02dc00 session 0x55b3cd3b30e0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:54.897857+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:55.898051+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956124 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:56.898173+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:57.898320+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa3f000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:58.898519+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa3f000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:59.898752+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:00.898896+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956124 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:01.899093+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa3f000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:02.899272+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:03.899423+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:04.899606+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:05.899773+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa3f000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956124 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:06.899932+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:07.900085+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.573257446s of 15.623365402s, submitted: 15
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:08.900221+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 52625408 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:09.900390+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:10.900555+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:11.900707+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:12.900881+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:13.901161+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:14.901397+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:15.901536+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:16.901692+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:17.901898+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:18.902102+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:19.902335+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:20.902488+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:21.902685+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:22.902823+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:23.902976+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:24.903194+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271974400 unmapped: 52576256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:25.903366+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271974400 unmapped: 52576256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:26.903516+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271974400 unmapped: 52576256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:27.903706+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271982592 unmapped: 52568064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:28.903890+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271982592 unmapped: 52568064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:29.904070+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271982592 unmapped: 52568064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:30.904250+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271982592 unmapped: 52568064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:31.904488+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:32.904695+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:33.904846+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:34.905058+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdc62c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.134544373s of 26.442840576s, submitted: 90
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc62c00 session 0x55b3ccadba40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd6fc00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6fc00 session 0x55b3cbf23860
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa50000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa50000 session 0x55b3cbdc7c20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa50000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa50000 session 0x55b3cbd421e0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512e400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512e400 session 0x55b3cb99b860
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:35.905254+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984026 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:36.905450+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:37.905591+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:38.905747+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:39.905910+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:40.906081+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984026 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:41.906215+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:42.906410+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:43.906618+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:44.906804+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70d800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d800 session 0x55b3cb9414a0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70c800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d135c000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:45.906963+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984026 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:46.907120+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:47.907263+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:48.907411+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:49.907852+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:50.908060+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010266 data_alloc: 218103808 data_used: 6356992
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:51.908232+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:52.908414+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:53.909087+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:54.909283+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:55.909585+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272015360 unmapped: 52535296 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.337787628s of 21.399259567s, submitted: 5
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3042632 data_alloc: 218103808 data_used: 6356992
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:56.909750+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275701760 unmapped: 48848896 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:57.909986+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275709952 unmapped: 48840704 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:58.910477+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:59.910789+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:00.911129+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081410 data_alloc: 218103808 data_used: 6582272
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:01.911431+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:02.911630+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:03.911935+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:04.912185+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:05.912352+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081410 data_alloc: 218103808 data_used: 6582272
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:06.912496+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:07.912739+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:08.912907+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:09.913147+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:10.913272+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd6f000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3ce5a25a0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02ac00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02ac00 session 0x55b3d0b64b40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02ac00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02ac00 session 0x55b3cb986960
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd6f000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3ce569a40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70d800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.855088234s of 15.102807999s, submitted: 57
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3082882 data_alloc: 218103808 data_used: 6582272
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:11.913485+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275603456 unmapped: 48947200 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d800 session 0x55b3cfba2d20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa50000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa50000 session 0x55b3ce39d0e0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512e400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512e400 session 0x55b3ce1274a0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512e400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:12.913606+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512e400 session 0x55b3cdc6d0e0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd6f000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3ce58ab40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275857408 unmapped: 48693248 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:13.913770+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee928000/0x0/0x4ffc00000, data 0x1f654dd/0x20f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:14.913991+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:15.914175+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee928000/0x0/0x4ffc00000, data 0x1f654dd/0x20f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3126608 data_alloc: 218103808 data_used: 6582272
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:16.914343+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:17.914585+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:18.914786+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee928000/0x0/0x4ffc00000, data 0x1f654dd/0x20f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:19.914945+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:20.915138+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd51c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3cbe8c780
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:21.915277+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3126757 data_alloc: 218103808 data_used: 6582272
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275808256 unmapped: 48742400 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f4c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.413484573s of 10.376444817s, submitted: 44
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d1d15800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:22.915434+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275808256 unmapped: 48742400 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:23.915655+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275808256 unmapped: 48742400 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:24.915881+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275808256 unmapped: 48742400 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee927000/0x0/0x4ffc00000, data 0x1f65500/0x20f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:25.916286+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:26.916486+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3157769 data_alloc: 218103808 data_used: 10874880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee927000/0x0/0x4ffc00000, data 0x1f65500/0x20f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:27.916649+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:28.916828+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:29.917099+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:30.917217+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee925000/0x0/0x4ffc00000, data 0x1f66500/0x20f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:31.917373+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3158077 data_alloc: 218103808 data_used: 10874880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:32.917505+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee925000/0x0/0x4ffc00000, data 0x1f66500/0x20f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:33.917694+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.610369682s of 12.620978355s, submitted: 2
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:34.918069+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276971520 unmapped: 47579136 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:35.918257+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276971520 unmapped: 47579136 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:36.918348+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3201721 data_alloc: 218103808 data_used: 10940416
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:37.918492+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed285000/0x0/0x4ffc00000, data 0x2467500/0x25f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:38.918617+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:39.918784+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:40.918957+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:41.919139+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3201721 data_alloc: 218103808 data_used: 10940416
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:42.919278+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277331968 unmapped: 47218688 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:43.919459+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277331968 unmapped: 47218688 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed268000/0x0/0x4ffc00000, data 0x2484500/0x2616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:44.919671+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277331968 unmapped: 47218688 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed268000/0x0/0x4ffc00000, data 0x2484500/0x2616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:45.919845+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed268000/0x0/0x4ffc00000, data 0x2484500/0x2616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:46.919965+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3200861 data_alloc: 218103808 data_used: 10952704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.501975060s of 12.738298416s, submitted: 59
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f4c00 session 0x55b3cb987860
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d15800 session 0x55b3ce126f00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd51c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:47.920106+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3ccaa0780
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:48.920252+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:49.920440+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edce3000/0x0/0x4ffc00000, data 0x1a0a47b/0x1b9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:50.920589+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70c800 session 0x55b3cfba3680
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d135c000 session 0x55b3cbdc7680
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70d400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:51.920715+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276422656 unmapped: 48128000 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d400 session 0x55b3d0b641e0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:52.920983+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276422656 unmapped: 48128000 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:53.921258+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276422656 unmapped: 48128000 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:54.921586+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:55.921736+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:56.921912+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:57.922107+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:58.922271+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:59.922429+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:00.922545+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:01.922741+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:02.922880+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:03.923091+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:04.923317+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:05.923497+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:06.923698+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:07.923901+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:08.924119+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:09.924273+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:10.924439+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:11.924632+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:12.924852+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:13.925142+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:14.925325+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:15.925451+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:16.925638+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276463616 unmapped: 48087040 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:17.925803+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276463616 unmapped: 48087040 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:18.925962+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276463616 unmapped: 48087040 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:19.926074+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:20.926237+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:21.926419+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:22.926582+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:23.926738+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:24.926917+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:25.927112+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:26.927298+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:27.927506+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:28.927757+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:29.927943+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276488192 unmapped: 48062464 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:30.928143+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276488192 unmapped: 48062464 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:31.928395+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276488192 unmapped: 48062464 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:32.928639+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276496384 unmapped: 48054272 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:33.928803+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf4e6400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 46.562591553s of 46.725173950s, submitted: 51
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276496384 unmapped: 48054272 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e6400 session 0x55b3cbe8c1e0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf4e6400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e6400 session 0x55b3cc8025a0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd51c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3ce10d2c0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70c800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70c800 session 0x55b3ce10d0e0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70d400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d400 session 0x55b3ce5ae000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:34.929073+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edeb5000/0x0/0x4ffc00000, data 0x18384dd/0x19c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:35.929283+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:36.929455+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058685 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:37.929662+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:38.929833+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:39.929973+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:40.930112+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277831680 unmapped: 50397184 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edeb5000/0x0/0x4ffc00000, data 0x18384dd/0x19c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:41.930257+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058685 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277831680 unmapped: 50397184 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:42.930444+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277831680 unmapped: 50397184 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512e800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:43.930591+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512e800 session 0x55b3ce0e9a40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277929984 unmapped: 50298880 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd51c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70c800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.344988823s of 10.475560188s, submitted: 43
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:44.930737+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277929984 unmapped: 50298880 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:45.930877+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277929984 unmapped: 50298880 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:46.931068+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063134 data_alloc: 218103808 data_used: 2838528
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277938176 unmapped: 50290688 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:47.931236+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278339584 unmapped: 49889280 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:48.931363+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278339584 unmapped: 49889280 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:49.931541+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278339584 unmapped: 49889280 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:50.931728+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:51.931880+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134014 data_alloc: 234881024 data_used: 12783616
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:52.932092+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:53.932227+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:54.932423+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:55.932624+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:56.932852+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134494 data_alloc: 234881024 data_used: 12795904
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.812140465s of 12.816347122s, submitted: 1
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:57.933051+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284475392 unmapped: 43753472 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:58.933212+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:59.933390+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:00.933568+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf2d000/0x0/0x4ffc00000, data 0x26204dd/0x27b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf2d000/0x0/0x4ffc00000, data 0x26204dd/0x27b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:01.933773+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3255354 data_alloc: 234881024 data_used: 13996032
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:02.934465+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:03.934821+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf2d000/0x0/0x4ffc00000, data 0x26204dd/0x27b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:04.935102+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:05.935940+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:06.937414+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250878 data_alloc: 234881024 data_used: 14000128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:07.937968+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf09000/0x0/0x4ffc00000, data 0x26444dd/0x27d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:08.939159+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:09.940365+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:10.941319+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:11.943394+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.050657272s of 14.411172867s, submitted: 131
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf09000/0x0/0x4ffc00000, data 0x26444dd/0x27d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250758 data_alloc: 234881024 data_used: 14000128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:12.943555+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02d800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:13.943761+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02d800 session 0x55b3cba952c0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5b800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5b800 session 0x55b3ccaa0f00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cd84ec00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84ec00 session 0x55b3ce0e83c0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5fc00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5fc00 session 0x55b3cdcb23c0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d174bc00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174bc00 session 0x55b3ce58b4a0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:14.944139+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:15.944305+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb526000/0x0/0x4ffc00000, data 0x30274dd/0x31b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:16.944445+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326714 data_alloc: 234881024 data_used: 14000128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:17.944715+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb523000/0x0/0x4ffc00000, data 0x302a4dd/0x31bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:18.945084+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:19.945253+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdc63800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc63800 session 0x55b3cb941e00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:20.945493+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce707400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce707400 session 0x55b3d0b65a40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf613c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613c00 session 0x55b3cbd43e00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa53c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa53c00 session 0x55b3ccaa0000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:21.945683+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb523000/0x0/0x4ffc00000, data 0x302a4dd/0x31bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326262 data_alloc: 234881024 data_used: 14000128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdce5400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd60400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:22.945834+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:23.945951+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 47489024 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:24.946097+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:25.946211+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb523000/0x0/0x4ffc00000, data 0x302a4dd/0x31bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:26.946387+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3399382 data_alloc: 234881024 data_used: 21020672
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:27.946517+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:28.946662+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb523000/0x0/0x4ffc00000, data 0x302a4dd/0x31bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:29.946845+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.038463593s of 18.175695419s, submitted: 14
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287039488 unmapped: 45391872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:30.947000+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287039488 unmapped: 45391872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:31.947156+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb521000/0x0/0x4ffc00000, data 0x302b4dd/0x31bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3400218 data_alloc: 234881024 data_used: 21020672
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287039488 unmapped: 45391872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:32.947351+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287039488 unmapped: 45391872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:33.947580+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289325056 unmapped: 43106304 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:34.947793+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289488896 unmapped: 42942464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:35.947969+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:36.948146+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450960 data_alloc: 234881024 data_used: 21569536
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb08e000/0x0/0x4ffc00000, data 0x34b74dd/0x3648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:37.948342+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:38.948444+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:39.948637+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:40.948811+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:41.948974+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450960 data_alloc: 234881024 data_used: 21569536
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:42.949152+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb08e000/0x0/0x4ffc00000, data 0x34b74dd/0x3648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:43.949336+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:44.949536+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb08e000/0x0/0x4ffc00000, data 0x34b74dd/0x3648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.170224190s of 15.374196053s, submitted: 52
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289562624 unmapped: 42868736 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:45.949699+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb08e000/0x0/0x4ffc00000, data 0x34b74dd/0x3648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289562624 unmapped: 42868736 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:46.949840+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb095000/0x0/0x4ffc00000, data 0x34b84dd/0x3649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449236 data_alloc: 234881024 data_used: 21671936
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289562624 unmapped: 42868736 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:47.949986+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5400 session 0x55b3ccadc000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd60400 session 0x55b3ccadb4a0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdce5400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289562624 unmapped: 42868736 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5400 session 0x55b3cfba2b40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:48.950189+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebefe000/0x0/0x4ffc00000, data 0x264f4dd/0x27e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:49.950352+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:50.950494+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:51.950695+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3259330 data_alloc: 218103808 data_used: 10498048
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:52.950847+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3ce5ae1e0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70c800 session 0x55b3cde570e0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:53.951068+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5d400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5d400 session 0x55b3cc7ec1e0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebef2000/0x0/0x4ffc00000, data 0x265b4dd/0x27ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:54.951268+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:55.951416+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:56.951541+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:57.951697+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:58.951854+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:59.952055+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:00.952165+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:01.952373+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:02.952553+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:03.952733+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:04.952958+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:05.953096+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:06.953264+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:07.953449+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:08.953607+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:09.953743+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:10.953955+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:11.954118+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:12.954298+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:13.954458+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:14.954719+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:15.954925+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:16.955094+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:17.955207+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:18.955382+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:19.955560+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:20.955717+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:21.955859+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:22.956003+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:23.956220+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:24.956435+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:25.956559+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:26.956730+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:27.956901+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:28.957082+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d174bc00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174bc00 session 0x55b3ce5a2960
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf613c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613c00 session 0x55b3cbe8c960
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:29.957223+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ce0f2000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb5800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3cde57680
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cd84e800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.481346130s of 44.651317596s, submitted: 51
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84e800 session 0x55b3cba94000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cd84e800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84e800 session 0x55b3ccada000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb5800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3ccad6d20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cba94b40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf613c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613c00 session 0x55b3cf5acf00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282976256 unmapped: 49455104 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:30.957427+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282976256 unmapped: 49455104 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:31.957616+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3021392 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282976256 unmapped: 49455104 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:32.957804+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282976256 unmapped: 49455104 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:33.957935+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed509000/0x0/0x4ffc00000, data 0x104448b/0x11d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282984448 unmapped: 49446912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:34.958093+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282984448 unmapped: 49446912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:35.958226+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282984448 unmapped: 49446912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:36.958420+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed509000/0x0/0x4ffc00000, data 0x104448b/0x11d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3021392 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282984448 unmapped: 49446912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:37.958683+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d135c400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d135c400 session 0x55b3cbf22960
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed509000/0x0/0x4ffc00000, data 0x104448b/0x11d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282935296 unmapped: 49496064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb5800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:38.958833+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282935296 unmapped: 49496064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:39.959213+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:40.959787+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:41.960050+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038361 data_alloc: 218103808 data_used: 4456448
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4e4000/0x0/0x4ffc00000, data 0x10684ae/0x11fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:42.960306+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:43.960869+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:44.961340+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:45.961796+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:46.962216+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038361 data_alloc: 218103808 data_used: 4456448
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:47.962453+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4e4000/0x0/0x4ffc00000, data 0x10684ae/0x11fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4e4000/0x0/0x4ffc00000, data 0x10684ae/0x11fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282951680 unmapped: 49479680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:48.962642+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4e4000/0x0/0x4ffc00000, data 0x10684ae/0x11fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.536443710s of 19.611005783s, submitted: 14
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283099136 unmapped: 49332224 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:49.962794+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285564928 unmapped: 46866432 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:50.962930+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285589504 unmapped: 46841856 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:51.963090+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095057 data_alloc: 218103808 data_used: 5337088
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285589504 unmapped: 46841856 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:52.963346+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285589504 unmapped: 46841856 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:53.963702+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0f1000/0x0/0x4ffc00000, data 0x14454ae/0x15d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285589504 unmapped: 46841856 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:54.963979+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285655040 unmapped: 46776320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:55.964333+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:56.964586+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3086421 data_alloc: 218103808 data_used: 5337088
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:57.964757+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:58.964993+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:59.965318+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0e6000/0x0/0x4ffc00000, data 0x14664ae/0x15f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:00.965439+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:01.965628+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3086741 data_alloc: 218103808 data_used: 5345280
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:02.965799+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0e6000/0x0/0x4ffc00000, data 0x14664ae/0x15f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284991488 unmapped: 47439872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:03.965977+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfa53c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.636660576s of 14.872914314s, submitted: 73
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa53c00 session 0x55b3ccaddc20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512d800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512d800 session 0x55b3ce5a3860
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70d800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d800 session 0x55b3d0b64d20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfbc0000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0000 session 0x55b3cc7ffc20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:04.966164+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f4c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f4c00 session 0x55b3ccaddc20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:05.966289+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:06.966493+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec857000/0x0/0x4ffc00000, data 0x1cf54ae/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3157783 data_alloc: 218103808 data_used: 5345280
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:07.966682+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:08.966947+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:09.967092+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec857000/0x0/0x4ffc00000, data 0x1cf54ae/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5f000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5f000 session 0x55b3cbf22960
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:10.967238+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70c800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d1d15c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:11.967343+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3198420 data_alloc: 218103808 data_used: 10727424
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286261248 unmapped: 46170112 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:12.967483+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:13.967601+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:14.967767+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec857000/0x0/0x4ffc00000, data 0x1cf54ae/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:15.967895+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:16.968088+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3214100 data_alloc: 234881024 data_used: 12992512
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:17.968271+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:18.968447+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:19.968646+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec857000/0x0/0x4ffc00000, data 0x1cf54ae/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:20.968808+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.728010178s of 16.851394653s, submitted: 25
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:21.968951+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3336672 data_alloc: 234881024 data_used: 14049280
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:22.969140+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 36315136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:23.969285+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 36978688 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:24.969521+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:25.969698+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb921000/0x0/0x4ffc00000, data 0x2c2a4ae/0x2dbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:26.969826+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb921000/0x0/0x4ffc00000, data 0x2c2a4ae/0x2dbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3340164 data_alloc: 234881024 data_used: 14286848
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:27.969976+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:28.970114+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:29.970263+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:30.970449+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:31.970565+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb91f000/0x0/0x4ffc00000, data 0x2c2d4ae/0x2dbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3337508 data_alloc: 234881024 data_used: 14286848
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:32.970692+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:33.970844+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:34.971000+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:35.971191+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70c800 session 0x55b3ccada000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.508075714s of 14.903322220s, submitted: 136
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d15c00 session 0x55b3cb940f00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70d800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:36.971330+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290496512 unmapped: 41934848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d800 session 0x55b3ccadc1e0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0e0000/0x0/0x4ffc00000, data 0x146c4ae/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3102246 data_alloc: 218103808 data_used: 5394432
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:37.971468+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290496512 unmapped: 41934848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:38.971638+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290504704 unmapped: 41926656 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:39.971843+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x14734ae/0x1605000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290504704 unmapped: 41926656 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3cc7ff2c0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cba95c20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d2ff1400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:40.971970+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d2ff1400 session 0x55b3cde57e00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:41.972130+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:42.972328+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:43.972538+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:44.972732+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:45.972872+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:46.973006+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:47.973144+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:48.973276+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:49.973424+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:50.973539+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:51.973696+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:52.973860+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:53.974222+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:54.974478+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:55.974626+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:56.974822+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:57.975083+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:58.975248+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:59.975373+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:00.975528+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290529280 unmapped: 41902080 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:01.975668+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290529280 unmapped: 41902080 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:02.975777+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290529280 unmapped: 41902080 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:03.975950+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290529280 unmapped: 41902080 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:04.976239+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:05.976364+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:06.976521+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:07.976699+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:08.976880+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:09.977136+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:10.977342+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:11.977555+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:12.977720+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:13.977954+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:14.978420+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:15.978582+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf610800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.965858459s of 40.134490967s, submitted: 59
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf610800 session 0x55b3ce5a32c0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ccbbec00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbec00 session 0x55b3ce58a5a0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfbc1c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc1c00 session 0x55b3ce39cd20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfbc1c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc1c00 session 0x55b3ce126780
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cc5990e0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:16.978786+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:17.979009+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030916 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed576000/0x0/0x4ffc00000, data 0xfd847b/0x1168000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:18.979776+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:19.979977+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:20.980256+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:21.980476+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:22.980805+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030916 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed576000/0x0/0x4ffc00000, data 0xfd847b/0x1168000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:23.980986+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512c000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512c000 session 0x55b3cd3b2000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:24.984241+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70dc00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cd84f400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:25.984380+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:26.984510+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:27.984665+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045352 data_alloc: 218103808 data_used: 4300800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:28.984853+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:29.985006+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:30.985204+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:31.985382+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:32.985630+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045352 data_alloc: 218103808 data_used: 4300800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:33.985854+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:34.986091+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290570240 unmapped: 41861120 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:35.986239+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290570240 unmapped: 41861120 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.726144791s of 19.748613358s, submitted: 2
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:36.986348+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291405824 unmapped: 41025536 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:37.986499+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134708 data_alloc: 218103808 data_used: 5177344
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc45000/0x0/0x4ffc00000, data 0x190947b/0x1a99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:38.986698+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:39.986955+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:40.987187+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:41.987371+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:42.987541+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134868 data_alloc: 218103808 data_used: 5181440
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc45000/0x0/0x4ffc00000, data 0x190947b/0x1a99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:43.987711+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:44.987977+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:45.988169+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:46.988340+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:47.988463+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc45000/0x0/0x4ffc00000, data 0x190947b/0x1a99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3135188 data_alloc: 218103808 data_used: 5189632
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:48.988617+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:49.988734+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc45000/0x0/0x4ffc00000, data 0x190947b/0x1a99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:50.988931+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d135dc00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d135dc00 session 0x55b3cba943c0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5c000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5c000 session 0x55b3ce5aeb40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5c000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5c000 session 0x55b3ce568960
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:51.989096+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ce58b680
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cfbc1c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.785821915s of 15.976529121s, submitted: 45
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc1c00 session 0x55b3ce58ab40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d135dc00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d135dc00 session 0x55b3ce569a40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512c000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512c000 session 0x55b3cbe8da40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d512c000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512c000 session 0x55b3cfba3a40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ce58a960
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:52.989215+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153364 data_alloc: 218103808 data_used: 5189632
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:53.989373+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:54.989906+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec945000/0x0/0x4ffc00000, data 0x1c0947b/0x1d99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:55.990090+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:56.990258+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292462592 unmapped: 39968768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:57.990416+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153364 data_alloc: 218103808 data_used: 5189632
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292462592 unmapped: 39968768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:58.990572+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d174c000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174c000 session 0x55b3cba95a40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292618240 unmapped: 39813120 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec921000/0x0/0x4ffc00000, data 0x1c2d47b/0x1dbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:59.990726+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5e000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d174b800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:00.990876+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:01.991008+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:02.991270+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3180149 data_alloc: 218103808 data_used: 8339456
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:03.991411+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec921000/0x0/0x4ffc00000, data 0x1c2d47b/0x1dbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:04.991557+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:05.991719+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:06.991887+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:07.992104+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3180149 data_alloc: 218103808 data_used: 8339456
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.082937241s of 16.159908295s, submitted: 8
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:08.992263+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec91f000/0x0/0x4ffc00000, data 0x1c2e47b/0x1dbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:09.992420+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:10.992579+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293462016 unmapped: 38969344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:11.992719+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:12.992904+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec26e000/0x0/0x4ffc00000, data 0x22e047b/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3241169 data_alloc: 218103808 data_used: 8445952
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:13.993071+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:14.993265+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:15.993416+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:16.993569+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:17.993732+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3241169 data_alloc: 218103808 data_used: 8445952
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec26e000/0x0/0x4ffc00000, data 0x22e047b/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:18.993911+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:19.994165+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:20.994449+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec26e000/0x0/0x4ffc00000, data 0x22e047b/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:21.994632+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:22.994792+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3241169 data_alloc: 218103808 data_used: 8445952
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:23.994979+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec26e000/0x0/0x4ffc00000, data 0x22e047b/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:24.995552+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:25.995778+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5e000 session 0x55b3cba945a0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.404949188s of 17.572023392s, submitted: 33
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174b800 session 0x55b3cc599c20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293502976 unmapped: 38928384 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ccadd860
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:26.996320+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293519360 unmapped: 38912000 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:27.996834+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3141441 data_alloc: 218103808 data_used: 5251072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293519360 unmapped: 38912000 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:28.997087+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293527552 unmapped: 38903808 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70dc00 session 0x55b3cc7ec5a0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84f400 session 0x55b3cbd43a40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5e000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:29.997219+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc44000/0x0/0x4ffc00000, data 0x190a47b/0x1a9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5e000 session 0x55b3cba952c0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:30.997570+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:31.997876+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:32.998057+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:33.998368+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:34.998659+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:35.998919+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:36.999104+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:37.999262+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:38.999607+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:39.999974+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:41.000392+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:42.000647+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:43.000839+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:44.001129+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:45.001338+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:46.001607+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:47.001819+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:48.001986+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:49.002388+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:50.002582+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:51.002897+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:52.003158+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:53.003334+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:54.003504+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:55.003725+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:56.003872+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:57.004098+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:58.004224+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:59.004357+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:00.004529+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:01.004692+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:02.004853+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf615400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 36.787899017s of 36.916973114s, submitted: 30
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:03.005039+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf615400 session 0x55b3ce5a21e0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd50c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cb23e5a0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf615400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf615400 session 0x55b3cbe3ed20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cba94d20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cd84f400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84f400 session 0x55b3ce58ba40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075374 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:04.005222+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:05.005411+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:06.005587+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f5000/0x0/0x4ffc00000, data 0x12584dd/0x13e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:07.005725+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:08.005852+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075374 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:09.006074+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdc62c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc62c00 session 0x55b3cbdc7860
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdc62c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc62c00 session 0x55b3ce1274a0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:10.006275+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd50c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3ce5a30e0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf6f4c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf6f4c00 session 0x55b3cfba25a0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:11.006413+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf02ac00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d174b000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:12.006580+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f4000/0x0/0x4ffc00000, data 0x12584ed/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:13.006751+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f4000/0x0/0x4ffc00000, data 0x12584ed/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3096732 data_alloc: 218103808 data_used: 5402624
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:14.006865+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:15.007086+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:16.007240+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f4000/0x0/0x4ffc00000, data 0x12584ed/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:17.007471+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:18.007677+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3096732 data_alloc: 218103808 data_used: 5402624
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:19.007856+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:20.008051+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f4000/0x0/0x4ffc00000, data 0x12584ed/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:21.008220+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:22.008371+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.343187332s of 19.452217102s, submitted: 33
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292077568 unmapped: 40353792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:23.008509+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3187434 data_alloc: 218103808 data_used: 5513216
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292077568 unmapped: 40353792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:24.014611+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec6fa000/0x0/0x4ffc00000, data 0x1e4a4ed/0x1fdc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec6fd000/0x0/0x4ffc00000, data 0x1e4e4ed/0x1fe0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:25.014824+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:26.015097+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:27.015257+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:28.015429+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3189592 data_alloc: 218103808 data_used: 5500928
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:29.015654+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:30.015853+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec661000/0x0/0x4ffc00000, data 0x1eeb4ed/0x207d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:31.016060+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:32.016221+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:33.016406+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188172 data_alloc: 218103808 data_used: 5505024
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:34.016546+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec661000/0x0/0x4ffc00000, data 0x1eeb4ed/0x207d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:35.016763+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec661000/0x0/0x4ffc00000, data 0x1eeb4ed/0x207d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:36.016917+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:37.017072+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:38.017218+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188172 data_alloc: 218103808 data_used: 5505024
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec661000/0x0/0x4ffc00000, data 0x1eeb4ed/0x207d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:39.017380+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.344091415s of 16.666501999s, submitted: 103
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5f000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5f000 session 0x55b3cb99b860
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70d000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d000 session 0x55b3cb986960
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ce70d000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d000 session 0x55b3cbe8c780
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd50c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cb987860
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdc62c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:40.017496+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc62c00 session 0x55b3cdcb25a0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5f000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5f000 session 0x55b3cb9863c0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf6f4c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf6f4c00 session 0x55b3cfba2f00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf6f4c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf6f4c00 session 0x55b3cbe3e960
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd50c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cbdc6d20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:41.017747+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:42.017885+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:43.018126+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3199049 data_alloc: 218103808 data_used: 5505024
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:44.018249+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:45.018390+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:46.018550+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:47.018638+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf614c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf614c00 session 0x55b3cfba3e00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ccb61400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdc63000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291282944 unmapped: 41148416 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:48.018748+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3199181 data_alloc: 218103808 data_used: 5505024
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:49.018842+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:50.018939+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:51.019100+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:52.019252+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:53.019411+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3202541 data_alloc: 218103808 data_used: 6029312
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:54.019525+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:55.019795+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.294864655s of 16.384281158s, submitted: 18
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:56.020206+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:57.020486+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:58.021407+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291332096 unmapped: 41099264 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3202957 data_alloc: 218103808 data_used: 6066176
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:59.021573+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292683776 unmapped: 39747584 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:00.021770+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:01.021946+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292478976 unmapped: 39952384 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:02.022071+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292495360 unmapped: 39936000 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec1ae000/0x0/0x4ffc00000, data 0x23984ed/0x252a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:03.022232+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:04.022459+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:05.022691+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:06.022883+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:07.023103+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:08.023251+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:09.023382+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 37K writes, 148K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.73 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2257 writes, 9325 keys, 2257 commit groups, 1.0 writes per commit group, ingest: 11.79 MB, 0.02 MB/s
                                           Interval WAL: 2257 writes, 876 syncs, 2.58 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:10.023576+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:11.023707+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:12.023855+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:13.024035+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:14.024172+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 39911424 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:15.024378+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 39911424 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:16.024533+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 39911424 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:17.024703+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 39911424 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:18.024892+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292528128 unmapped: 39903232 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:19.025082+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292528128 unmapped: 39903232 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:20.025264+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292528128 unmapped: 39903232 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:21.025434+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292528128 unmapped: 39903232 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:22.025579+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:23.025766+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:24.025930+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:25.026138+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.300401688s of 30.381093979s, submitted: 46
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccb61400 session 0x55b3cbf234a0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc63000 session 0x55b3cde56b40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:26.026299+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd50c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cb99b2c0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:27.026520+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292544512 unmapped: 39886848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:28.026711+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292544512 unmapped: 39886848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194413 data_alloc: 218103808 data_used: 5505024
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:29.026913+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292544512 unmapped: 39886848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02ac00 session 0x55b3ce0f3860
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174b000 session 0x55b3cba95a40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:30.027099+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292544512 unmapped: 39886848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ccb61400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccb61400 session 0x55b3ccadd680
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:31.027338+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:32.027637+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:33.028191+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051140 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:34.028372+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:35.028649+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:36.028933+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:37.029200+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:38.029613+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051140 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:39.029940+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:40.030406+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:41.030711+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:42.031006+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:43.031379+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051140 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:44.031623+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:45.031886+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291143680 unmapped: 41287680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:46.032144+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291143680 unmapped: 41287680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:47.032457+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291143680 unmapped: 41287680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:48.032732+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291143680 unmapped: 41287680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051140 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:49.032949+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 41279488 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:50.033107+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 41279488 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdce5c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.373273849s of 24.570438385s, submitted: 43
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5c00 session 0x55b3d0b64960
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdce5c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5c00 session 0x55b3cb998960
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd50c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cba943c0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cc7f5000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5000 session 0x55b3cb999680
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdc63c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc63c00 session 0x55b3ce39d860
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:51.033306+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292216832 unmapped: 40214528 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:52.033473+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292216832 unmapped: 40214528 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5c000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5c000 session 0x55b3ccadad20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:53.033598+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5c000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd50c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 40271872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099194 data_alloc: 218103808 data_used: 2686976
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:54.033722+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed1fc000/0x0/0x4ffc00000, data 0x1350500/0x14e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292167680 unmapped: 40263680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:55.033902+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:56.034046+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5c000 session 0x55b3ce39dc20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3ce126780
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf613800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613800 session 0x55b3cf5ade00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:57.034204+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ece6e000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:58.034340+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057926 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:59.034544+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:00.034744+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:01.034925+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ece6e000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:02.035100+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:03.035251+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057926 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:04.035425+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:05.035646+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:06.035846+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:07.036073+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ece6e000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:08.036245+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057926 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:09.036413+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.473459244s of 18.660179138s, submitted: 60
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:10.036596+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291807232 unmapped: 40624128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:11.036704+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:12.036813+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:13.036988+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:14.037136+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:15.037340+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:16.037565+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:17.038302+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:18.038478+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:19.038599+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:20.038766+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:21.038921+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:22.039138+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:23.039360+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:24.039551+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:25.039754+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:26.041116+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:27.041277+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:28.041482+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:29.041657+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:30.041867+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:31.042051+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:32.042247+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291856384 unmapped: 40574976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:33.042420+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291856384 unmapped: 40574976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:34.042638+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291856384 unmapped: 40574976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:35.042918+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:36.043097+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:37.043297+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:38.043508+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:39.043705+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:40.043870+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:41.044112+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:42.044293+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:43.044453+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:44.044627+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291872768 unmapped: 40558592 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:45.044841+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:46.045056+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:47.045200+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:48.045342+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:49.045537+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:50.045720+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:51.045855+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:52.046058+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:53.046306+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:54.046460+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:55.046638+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:56.046814+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd50000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:57.046946+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _renew_subs
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.734951019s of 48.040054321s, submitted: 90
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 293 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 293 ms_handle_reset con 0x55b3cbd50000 session 0x55b3cba943c0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291897344 unmapped: 40534016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:58.047099+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291905536 unmapped: 40525824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:59.047309+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061748 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291905536 unmapped: 40525824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:00.047484+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291905536 unmapped: 40525824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:01.047659+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 293 heartbeat osd_stat(store_statfs(0x4ed2ec000/0x0/0x4ffc00000, data 0xe5004c/0xfe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291921920 unmapped: 40509440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:02.047810+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291921920 unmapped: 40509440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:03.047946+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _renew_subs
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291938304 unmapped: 40493056 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:04.048134+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291946496 unmapped: 40484864 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:05.049844+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:06.050896+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:07.051341+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:08.052579+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:09.053262+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:10.053932+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:11.054528+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:12.055151+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:13.055543+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:14.055679+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:15.055949+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:16.056128+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:17.056478+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:18.056806+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:19.057088+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291979264 unmapped: 40452096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:20.057297+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291979264 unmapped: 40452096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:21.057560+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291979264 unmapped: 40452096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:22.057727+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:23.057980+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:24.058075+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:25.058651+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:26.058834+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:27.059108+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:28.059257+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291995648 unmapped: 40435712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:29.059424+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291995648 unmapped: 40435712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:30.059627+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:31.059783+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:32.059919+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:33.060132+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:34.060342+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:35.060572+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:36.060758+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292020224 unmapped: 40411136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:37.061335+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292020224 unmapped: 40411136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:38.061746+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:39.062480+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:40.062900+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:41.063216+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:42.063366+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:43.063586+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:44.063725+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:45.064081+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:46.064288+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292036608 unmapped: 40394752 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:47.064485+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292036608 unmapped: 40394752 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:48.064689+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:49.064891+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:50.065058+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:51.065216+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:52.065387+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:53.065530+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292061184 unmapped: 40370176 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:54.065734+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292061184 unmapped: 40370176 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:55.065918+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:56.066068+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:57.066217+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:58.066340+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:59.066531+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:00.066742+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:01.066966+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:02.067130+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:03.067268+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:04.067417+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:05.067653+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:06.067910+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:07.068064+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:08.068234+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:09.068414+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:10.068564+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:11.068656+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:12.068821+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:13.068994+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:14.069200+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:15.069317+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:16.132860+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:17.133112+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292118528 unmapped: 40312832 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:18.133302+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:19.133454+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:20.133646+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:21.133788+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:22.133985+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:23.134160+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:24.134363+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:25.134539+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:26.134698+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:27.134882+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:28.135114+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:29.135310+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:30.135479+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:31.135580+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:32.135761+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:33.135891+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292143104 unmapped: 40288256 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:34.136067+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 40280064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:35.136244+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 40280064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:36.136583+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 40280064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:37.136725+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 40280064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:38.136858+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 40271872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:39.137081+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 40271872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:40.137224+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292167680 unmapped: 40263680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:41.137397+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292167680 unmapped: 40263680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:42.137565+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292175872 unmapped: 40255488 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:43.137781+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292175872 unmapped: 40255488 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:44.137957+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:45.138188+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:46.138359+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:47.138566+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:48.138774+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:49.138925+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292200448 unmapped: 40230912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:50.139158+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292208640 unmapped: 40222720 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5f800
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 113.866653442s of 113.942031860s, submitted: 31
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 ms_handle_reset con 0x55b3cdd5f800 session 0x55b3ccadd680
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbd50000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:51.139338+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292208640 unmapped: 40222720 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 ms_handle_reset con 0x55b3cbd50000 session 0x55b3cb99b2c0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:52.139482+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:53.139688+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:54.139941+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2ea000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3060002 data_alloc: 234881024 data_used: 11603968
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:55.140268+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:56.140408+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2ea000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:57.140571+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:58.140848+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:59.140976+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3060002 data_alloc: 234881024 data_used: 11603968
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf4e7400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:00.141090+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _renew_subs
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299130880 unmapped: 33300480 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 295 ms_handle_reset con 0x55b3cf4e7400 session 0x55b3cfba25a0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:01.141202+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293896192 unmapped: 38535168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 295 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e365d/0x376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:02.141379+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293896192 unmapped: 38535168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cdd5cc00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.775878906s of 11.974079132s, submitted: 56
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:03.141486+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293896192 unmapped: 38535168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 296 ms_handle_reset con 0x55b3cdd5cc00 session 0x55b3cc5990e0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:04.141608+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2954206 data_alloc: 218103808 data_used: 151552
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:05.141811+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:06.142109+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:07.142236+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf54000/0x0/0x4ffc00000, data 0x1e520b/0x378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:08.142444+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:09.142703+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956988 data_alloc: 218103808 data_used: 151552
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:10.142933+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: get_auth_request con 0x55b3ccafd800 auth_method 0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:11.143124+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 297 heartbeat osd_stat(store_statfs(0x4edf52000/0x0/0x4ffc00000, data 0x1e6c6e/0x37b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:12.143244+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:13.143420+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:14.143586+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cd84f000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.541853905s of 11.648766518s, submitted: 41
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2959762 data_alloc: 218103808 data_used: 151552
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _renew_subs
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 ms_handle_reset con 0x55b3cd84f000 session 0x55b3ce5aed20
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:15.143796+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293937152 unmapped: 38494208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:16.144052+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:17.144286+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:18.144546+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:19.144765+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:20.144977+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:21.145137+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:22.145364+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:23.145553+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:24.145749+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:25.145951+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:26.146108+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:27.146280+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:28.146475+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:29.146753+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:30.146907+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:31.147119+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:32.147271+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:33.147410+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:34.147655+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:35.147876+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:36.148128+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:37.148326+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:38.148475+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293953536 unmapped: 38477824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:39.148659+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293953536 unmapped: 38477824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:40.148826+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:41.149007+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:42.149213+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:43.149462+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:44.149642+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:45.149840+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:46.150034+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:47.150170+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:48.150677+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:49.151128+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:50.151449+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:51.151723+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:52.152088+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 38461440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:53.152285+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 38461440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:54.152820+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 38461440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:55.153264+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:56.153596+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:57.153915+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:58.154198+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:59.154445+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:00.154620+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 38445056 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:01.154835+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:02.155105+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:03.155293+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:04.155505+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:05.155753+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:06.155915+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:07.156091+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:08.156413+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:09.156573+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 38420480 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:10.156751+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 38420480 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:11.156946+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 38412288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:12.157145+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 38412288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:13.157374+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 38404096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:14.157556+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 38404096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:15.157752+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 38404096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:16.157931+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 38404096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:17.158081+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:18.158278+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:19.158454+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:20.158645+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:21.158811+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:22.158987+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:23.159206+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:24.159384+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:25.159591+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:26.159735+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:27.160090+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:28.160310+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:29.160502+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:30.160722+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:31.160878+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:32.161123+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294060032 unmapped: 38371328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:33.161269+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 38363136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:34.161398+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 38354944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:35.161634+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294084608 unmapped: 38346752 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:36.161777+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:37.161959+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:38.162141+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:39.162317+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:40.162493+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:41.162648+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:42.162790+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:43.162955+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:44.163173+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:45.163398+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:46.163733+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:47.163895+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:48.164093+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:49.164588+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294109184 unmapped: 38322176 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:50.164722+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294117376 unmapped: 38313984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:51.164894+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294117376 unmapped: 38313984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:52.165094+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:53.165215+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:54.165349+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:55.165535+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:56.165709+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:57.165870+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 38297600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:58.166074+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 38297600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:59.166225+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 38297600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:00.166421+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d135dc00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 106.011367798s of 106.076034546s, submitted: 19
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295182336 unmapped: 37249024 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:01.166615+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 54001664 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _renew_subs
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 299 ms_handle_reset con 0x55b3d135dc00 session 0x55b3cb940b40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:02.166799+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed748000/0x0/0x4ffc00000, data 0x9ea3f4/0xb85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3d135dc00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295247872 unmapped: 53968896 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:03.166983+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295247872 unmapped: 53968896 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _renew_subs
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 300 ms_handle_reset con 0x55b3d135dc00 session 0x55b3cbe8c000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:04.167150+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:05.167374+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:06.167547+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:07.167691+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:08.167885+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:09.168116+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 53944320 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:10.168330+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 53944320 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:11.168468+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295280640 unmapped: 53936128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:12.168590+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295280640 unmapped: 53936128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:13.168770+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 53927936 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:14.168961+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:15.169233+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:16.169393+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:17.169563+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:18.169741+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:19.169964+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:20.170202+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:21.170402+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:22.170541+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:23.170708+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:24.170886+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:25.171066+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:26.171248+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:27.171431+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:28.171594+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:29.171739+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295321600 unmapped: 53895168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:30.171904+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf60e000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.694004059s of 29.852605820s, submitted: 37
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 53886976 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:31.172109+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _renew_subs
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 301 ms_handle_reset con 0x55b3cf60e000 session 0x55b3cbd825a0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 53862400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:32.172291+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 53862400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:33.172472+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ed742000/0x0/0x4ffc00000, data 0x9edb1f/0xb8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:34.172661+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:35.172854+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035515 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:36.172964+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ed742000/0x0/0x4ffc00000, data 0x9edb1f/0xb8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:37.173122+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 53846016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:38.173300+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ed742000/0x0/0x4ffc00000, data 0x9edb1f/0xb8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:39.173482+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:40.173635+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:41.173813+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:42.173958+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:43.174160+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:44.174369+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:45.174602+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:46.174828+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:47.174968+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:48.175098+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:49.175224+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:50.175340+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:51.175499+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:52.175643+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:53.175784+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:54.175921+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:55.176350+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:56.176597+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:57.176742+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:58.177142+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:59.177588+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:00.177860+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:01.178194+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 53788672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:02.178345+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 53788672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:03.178573+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:04.178844+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:05.179128+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:06.179424+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:07.179724+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:08.179966+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:09.180131+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:10.180341+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:11.180557+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:12.180722+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:13.180942+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:14.181126+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:15.181347+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:16.181561+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:17.181736+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:18.181936+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:19.182133+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:20.182302+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:21.182450+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:22.182608+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:23.182787+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:24.183080+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295477248 unmapped: 53739520 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:25.183278+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:26.183489+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:27.183648+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:28.184104+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:29.184589+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:30.184999+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:31.185384+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:32.185950+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 53723136 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:33.186616+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 53723136 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:34.187083+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 53723136 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:35.187704+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:36.187860+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:37.188106+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:38.188327+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:39.188744+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:40.188892+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:41.189222+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:42.189342+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 53698560 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:43.189623+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:44.189907+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:45.190167+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:46.190350+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:47.190544+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:48.190741+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:49.190938+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:50.191209+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:51.191487+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:52.191717+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:53.191963+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:54.192258+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 53665792 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:55.192533+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 53665792 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:56.192748+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 53665792 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:57.192982+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:58.193177+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:59.193327+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:00.193622+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:01.193879+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:02.194100+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:03.194299+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:04.194515+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:05.194675+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:06.194804+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:07.194944+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:08.195147+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:09.195323+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:10.195471+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:11.195610+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295591936 unmapped: 53624832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:12.195764+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 53616640 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:13.195962+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 53616640 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:14.196124+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 53616640 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:15.196342+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 53608448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:16.196512+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 53608448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:17.196704+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 53600256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:18.196893+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 53600256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:19.197148+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 53600256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:20.197354+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 53600256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:21.197557+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 53592064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:22.197763+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:23.197976+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:24.198146+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:25.198360+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:26.198634+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:27.198823+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:28.199068+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:29.199265+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:30.199417+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:31.199602+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:32.199875+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:33.200162+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:34.200308+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:35.200498+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:36.200834+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:37.201147+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:38.202800+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:39.203167+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:40.203615+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:41.204406+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:42.204832+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:43.205105+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:44.205572+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 53542912 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:45.205827+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:46.206131+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:47.206339+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:48.206550+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:49.206782+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:50.206956+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:51.207137+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:52.207371+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:53.207610+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:54.207803+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:55.208126+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:56.208337+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:57.208572+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:58.208724+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:59.208927+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295714816 unmapped: 53501952 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:00.209110+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295714816 unmapped: 53501952 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:01.209392+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:02.209651+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:03.209898+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:04.210094+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:05.210290+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:06.210892+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:07.211223+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:08.211479+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:09.211794+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:10.212885+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:11.213185+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:12.213362+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:13.213494+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:14.213648+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:15.213863+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:16.214008+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:17.214504+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:18.214642+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:19.214881+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:20.215149+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:21.215493+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:22.215669+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:23.215890+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:24.216063+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:25.216313+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:26.216515+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 53436416 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:27.216737+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 53436416 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:28.216953+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:29.217166+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:30.217377+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:31.217711+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:32.217984+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:33.218280+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:34.218560+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:35.218788+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:36.219078+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:37.219268+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:38.219453+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:39.219659+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:40.219850+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:41.220066+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:42.220273+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:43.220431+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:44.220588+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:45.220794+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:46.220971+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:47.221154+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:48.221334+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:49.221548+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:50.221758+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:51.221931+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:52.222082+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:53.222222+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 53362688 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:54.222392+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 53362688 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:55.222586+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 53362688 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:56.222710+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 53362688 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:57.222869+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 53346304 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:58.223080+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:59.223269+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:00.223450+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:01.223632+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:02.223789+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:03.223965+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:04.224207+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 53329920 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:05.224446+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:06.224626+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:07.224771+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:08.224967+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:09.225231+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 38K writes, 150K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.71 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 818 writes, 2112 keys, 818 commit groups, 1.0 writes per commit group, ingest: 1.15 MB, 0.00 MB/s
                                           Interval WAL: 818 writes, 377 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:10.226136+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets getting new tickets!
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:11.226888+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _finish_auth 0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:11.227888+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:12.229052+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:13.230223+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:14.230579+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:15.231281+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: mgrc ms_handle_reset ms_handle_reset con 0x55b3d1d16400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3625056923
Oct 14 10:05:46 compute-0 ceph-osd[89514]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3625056923,v1:192.168.122.100:6801/3625056923]
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: get_auth_request con 0x55b3cdd5f800 auth_method 0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: mgrc handle_mgr_configure stats_period=5
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:16.231924+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:17.232989+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:18.233245+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:19.233622+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:20.233789+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:21.233953+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:22.234091+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:23.234264+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:24.234541+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:25.234766+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:26.234914+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:27.235059+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:28.235311+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:29.235559+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:30.235788+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:31.236042+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:32.236270+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:33.236493+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:34.236672+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:35.236862+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:36.237062+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:37.237258+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:38.237469+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:39.237789+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:40.237982+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:41.238130+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:42.238270+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:43.238408+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 53452800 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:44.238602+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 53452800 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:45.238828+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 53452800 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:46.238975+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:47.239119+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 53436416 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:48.239265+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:49.240564+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:50.240697+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:51.240863+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:52.240998+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:53.241125+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:54.241245+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:55.241431+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:56.241613+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:57.241863+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:58.242060+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:59.242170+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:00.242308+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:01.242433+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:02.242592+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:03.242745+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:04.242903+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:05.243104+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:06.243271+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:07.243443+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:08.243586+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 278.443786621s of 278.633789062s, submitted: 64
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:09.243711+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 53346304 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:10.243855+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:11.243951+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:12.244062+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:13.244243+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:14.244376+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:15.244867+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:16.245144+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:17.245372+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:18.245922+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:19.246449+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:20.246990+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:21.247348+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:22.247617+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:23.247823+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:24.247996+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:25.248227+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:26.248604+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:27.248967+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:28.249325+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:29.249481+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:30.249604+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:31.249791+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:32.249962+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:33.250085+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:34.250281+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:35.250525+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:36.250675+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:37.250798+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:38.251218+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:39.251500+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:40.251662+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:41.251815+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:42.252003+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:43.252211+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:44.252393+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:45.252592+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:46.252761+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:47.252943+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:48.253263+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:49.253421+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:50.253580+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:51.253787+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:52.253915+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:53.254066+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:54.254185+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:55.254410+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:56.254569+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:57.254711+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:58.254919+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:59.255107+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:00.255332+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:01.255474+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:02.255648+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:03.255818+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:04.255996+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:05.256215+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:06.256330+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:07.256495+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:08.256672+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:09.256824+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:10.256999+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:11.257184+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 53239808 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:12.257385+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 53239808 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:13.257552+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 53231616 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:14.257697+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 53223424 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:15.257888+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 53223424 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:16.258098+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 53223424 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:17.258233+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:18.258440+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:19.258754+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:20.259056+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:21.259307+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:22.259519+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:23.259801+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:24.259952+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:25.260221+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:26.260420+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:27.260591+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:28.260719+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:29.260915+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 53198848 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:30.261114+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:31.261283+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:32.261431+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:33.261632+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:34.261793+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:35.262068+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:36.262251+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:37.262352+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:38.262527+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:39.262821+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:40.262962+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:41.263091+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:42.263252+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:43.263458+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:44.263671+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 53157888 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:45.263864+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 53149696 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:46.264040+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:47.264172+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:48.264352+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:49.264536+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:50.264724+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:51.264887+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:52.265080+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:53.265235+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:54.265530+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:55.265763+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 53116928 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:56.265908+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 53116928 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:57.266083+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 53116928 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:58.266269+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 53116928 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:59.266428+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 53108736 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:00.266619+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 53108736 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:01.266793+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:02.266998+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:03.267209+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:04.267393+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:05.267621+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:06.267809+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 53092352 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:07.268003+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 53084160 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:08.268399+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 53075968 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:09.268634+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 53075968 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:10.268798+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 53067776 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:11.268958+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:12.269140+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:13.269390+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:14.269541+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:15.269748+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:16.269946+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:17.270112+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:18.270250+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:19.270406+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:20.270730+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:21.270905+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:22.271085+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:23.271270+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:24.271471+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:25.271635+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296181760 unmapped: 53035008 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:26.271794+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:27.271961+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:28.272134+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:29.272304+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:30.272466+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:31.272603+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 53018624 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:32.272773+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 53018624 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:33.272899+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 53010432 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:34.273055+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296214528 unmapped: 53002240 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:35.273224+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf610400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 146.330047607s of 146.644271851s, submitted: 90
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 52994048 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _renew_subs
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 302 handle_osd_map epochs [303,303], i have 303, src has [1,303]
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:36.273359+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 303 ms_handle_reset con 0x55b3cf610400 session 0x55b3d0b64b40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 303 heartbeat osd_stat(store_statfs(0x4ed73e000/0x0/0x4ffc00000, data 0x9f1130/0xb8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 52928512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2986189 data_alloc: 218103808 data_used: 184320
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:37.273560+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 52928512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:38.273709+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cf611400
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 52920320 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _renew_subs
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:39.273939+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 304 ms_handle_reset con 0x55b3cf611400 session 0x55b3cdcb23c0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 52920320 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:40.274069+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edf3d000/0x0/0x4ffc00000, data 0x1f2cbb/0x390000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 52912128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:41.274216+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 52912128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2987987 data_alloc: 218103808 data_used: 192512
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:42.274412+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 52912128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:43.274629+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f471e/0x393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3ccad0000
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 52895744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:44.274815+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 ms_handle_reset con 0x55b3ccad0000 session 0x55b3ce5afa40
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:45.274984+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:46.275150+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996423 data_alloc: 218103808 data_used: 192512
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:47.275339+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:48.275564+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:49.275754+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:50.275945+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:51.276188+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996423 data_alloc: 218103808 data_used: 192512
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:52.276386+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:53.276557+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:54.276673+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:55.276845+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:56.276970+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 52862976 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:57.277082+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996423 data_alloc: 218103808 data_used: 192512
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 52854784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:58.277246+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 52854784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:59.277392+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 52854784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:00.277583+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 52854784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:01.277865+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 52846592 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:02.278148+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996423 data_alloc: 218103808 data_used: 192512
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 52846592 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:03.278399+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 52846592 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:04.278611+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 52846592 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:05.278831+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:06.279116+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:07.279258+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:08.279511+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:09.279673+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:10.279842+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 52830208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:11.280037+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 52830208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:12.280182+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 52830208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:13.280392+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:14.280574+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:15.280779+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:16.280957+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:17.281139+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:18.281275+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:19.281434+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 52805632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:20.281594+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 52805632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:21.281760+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 52805632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:22.281921+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:23.282109+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:24.282263+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:25.282418+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:26.282543+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:27.282678+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:28.282823+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:29.282996+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296427520 unmapped: 52789248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:30.283187+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296435712 unmapped: 52781056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:31.283347+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296435712 unmapped: 52781056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:32.283498+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:33.283677+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:34.283841+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:35.284005+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:36.284193+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:37.284322+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 52764672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:38.284477+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 52764672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:39.284638+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: handle_auth_request added challenge on 0x55b3cbcb4c00
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 63.713172913s of 64.038635254s, submitted: 108
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 52764672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:40.285109+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 307 ms_handle_reset con 0x55b3cbcb4c00 session 0x55b3cc8003c0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:41.285253+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:42.285439+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2997229 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:43.285603+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0x1f7e6c/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:44.285728+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0x1f7e6c/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:45.285936+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 52731904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:46.286106+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 52731904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:47.286257+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2997229 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 52731904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:48.286418+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0x1f7e6c/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0x1f7e6c/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:49.286588+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:50.286722+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:51.286906+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:52.287090+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:53.287273+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:54.287480+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:55.287711+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:56.287847+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:57.288100+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:58.288286+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:59.288441+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:00.288758+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:01.289172+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296542208 unmapped: 52674560 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:02.289350+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296542208 unmapped: 52674560 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:03.289569+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296542208 unmapped: 52674560 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:04.289696+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 52666368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:05.289948+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 52666368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:06.290072+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 52666368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:07.290242+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 52658176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:08.290437+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 52649984 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:09.290590+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 52641792 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:10.298264+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:11.298481+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:12.298622+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:13.298805+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:14.298978+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:15.299232+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:16.299436+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:17.299642+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:18.299908+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:19.300156+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:20.300313+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:21.300499+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:22.300768+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:23.301002+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:24.301234+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 52617216 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:25.301448+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 52617216 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:26.301600+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 52609024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:27.301783+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:28.301943+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:29.302084+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:30.302252+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:31.302476+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:32.302711+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:33.302948+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:34.303113+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:35.303320+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:36.303520+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:37.303666+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:38.303876+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:39.304075+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296640512 unmapped: 52576256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:40.304251+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296640512 unmapped: 52576256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:41.304422+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:42.304581+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:43.304751+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:44.304910+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:45.305066+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:46.305183+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:47.305319+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296656896 unmapped: 52559872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:48.305453+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'config diff' '{prefix=config diff}'
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'config show' '{prefix=config show}'
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296656896 unmapped: 52559872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'counter dump' '{prefix=counter dump}'
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'counter schema' '{prefix=counter schema}'
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:49.305575+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 52944896 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:50.305716+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'log dump' '{prefix=log dump}'
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:51.305852+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'perf dump' '{prefix=perf dump}'
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'perf schema' '{prefix=perf schema}'
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:52.305980+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:53.306110+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:54.306256+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:55.306424+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:56.308095+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:57.308338+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 53903360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:58.309198+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 53903360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:59.309348+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295321600 unmapped: 53895168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:00.309539+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295321600 unmapped: 53895168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:01.309668+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 53886976 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:02.309803+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 53886976 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:03.310246+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 53886976 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:04.310410+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 53886976 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:05.310569+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 53878784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:06.310713+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 53878784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:07.310867+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 53878784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:08.311076+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 53878784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:09.311211+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 53878784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:10.311345+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 53878784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:11.311510+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 53878784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:12.311664+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 53878784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:13.311808+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 53862400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:14.311944+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 53862400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:15.312082+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 53862400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:16.312222+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:17.312380+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:18.312543+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:19.312684+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:20.312840+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 53846016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:21.313053+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 53846016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:22.313177+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 53837824 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:23.313349+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 53829632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:24.313538+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 53829632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:25.313722+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 53829632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:26.313871+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 53829632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:27.314143+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 53829632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:28.314423+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 53829632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:29.314647+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 53829632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:30.314824+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:31.315086+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:32.315255+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:33.315563+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:34.315782+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:35.316118+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:36.316789+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:37.317226+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:38.317540+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:39.317930+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:40.318167+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:41.318485+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:42.318976+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:43.319140+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:44.319351+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 53788672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:45.319562+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 53788672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:46.319821+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:47.320090+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:48.320465+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:49.320761+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:50.320981+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 53772288 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:51.321148+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 53772288 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:52.321341+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 53772288 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:53.321524+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:54.321755+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:55.322110+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:56.322276+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:57.322442+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:58.323334+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:59.323584+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:00.323715+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:01.323902+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:02.324079+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:03.336803+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:04.336946+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:05.338760+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:06.338929+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:07.339119+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:08.339283+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:09.339488+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:10.339700+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:11.339931+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:12.340112+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:13.340288+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:14.340449+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:15.340632+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 53723136 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:16.340762+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 53723136 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:17.340914+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:18.341114+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:19.341318+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:20.341526+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:21.341697+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:22.341899+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:23.342151+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:24.342362+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:25.342629+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 53698560 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:26.342872+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:27.343086+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:28.343245+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:29.343429+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:30.343602+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:31.343773+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:32.343985+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:33.344186+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:34.344352+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:35.344533+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:36.344698+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:37.344889+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 53673984 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:38.345103+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 53673984 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:39.345334+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 53673984 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:40.345813+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 53673984 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:41.346985+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 53665792 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:42.347915+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:43.348458+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:44.348643+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:45.348905+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:46.349550+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:47.349830+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:48.350184+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:49.350427+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:50.351119+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:51.351686+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:52.352122+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:53.352526+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23353 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:54.352872+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:55.353080+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 53641216 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:56.353349+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:57.353618+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:58.353840+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:59.354189+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:00.354401+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:01.354718+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:02.355050+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:03.355344+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:04.355582+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 53616640 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:05.355843+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 53616640 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:06.356095+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 53616640 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:07.356307+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 53608448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:08.356474+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 53608448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:09.356689+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 53608448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:10.356876+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 53608448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:11.357125+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 53608448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:12.357269+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 53600256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:13.357400+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:14.357617+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 53592064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:15.358224+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 53592064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:16.358375+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 53592064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:17.358704+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 53592064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:18.358955+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 53592064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:19.359198+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 53592064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:20.359355+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 53592064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:21.359485+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:22.359610+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:23.359871+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:24.360005+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:25.360257+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:26.360394+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:27.360574+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:28.360739+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:29.360910+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:30.361032+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:31.361208+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:32.361297+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:33.361428+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:34.361558+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:35.361739+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:36.361999+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:37.362358+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 53542912 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:38.362615+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 53542912 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:39.362827+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:40.363144+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:41.363301+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:42.363447+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:43.363595+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:44.364323+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:45.364863+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:46.365042+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:47.365329+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:48.366187+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:49.366435+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:50.366597+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:51.367164+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:52.367523+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:53.368146+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:54.368594+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:55.369049+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:56.369475+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:57.369915+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:58.370359+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:59.370774+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295714816 unmapped: 53501952 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:00.371156+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295714816 unmapped: 53501952 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:01.371548+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295714816 unmapped: 53501952 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:02.371866+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:03.372143+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:04.372437+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:05.372762+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:06.373145+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:07.373494+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:08.373794+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:09.374168+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:10.374556+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:11.374816+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:12.375142+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:13.375447+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:14.375674+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:15.375932+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:16.376145+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:17.376369+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:18.376606+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:19.376885+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:20.377235+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:21.377483+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:22.377698+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:23.377880+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:24.378077+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:25.378282+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 53452800 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:26.378513+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:27.378726+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:28.378943+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:29.379157+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:30.379444+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 53436416 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:31.379643+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 53436416 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:32.379840+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 53436416 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:33.380121+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:34.380302+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:35.380624+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:36.380822+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:37.381064+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:38.381286+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:39.381449+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:40.381604+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:41.381787+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:42.382007+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:43.382176+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:44.382325+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:45.382574+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:46.382775+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:47.382903+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:48.383116+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:49.383305+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:50.383494+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:51.383715+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:52.383934+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:53.384180+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:54.384396+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:55.384614+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:56.384816+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:57.385144+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:58.385325+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:59.385542+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:00.385744+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:01.385944+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:02.386144+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:03.386346+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:04.386576+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:05.386813+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:06.387072+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:07.387342+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:08.387542+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:09.387760+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 39K writes, 151K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 39K writes, 14K syncs, 2.70 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 421 writes, 898 keys, 421 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
                                           Interval WAL: 421 writes, 192 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca465090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca465090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca465090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:10.387944+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:11.388128+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:12.388338+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:13.388528+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 53362688 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:14.388690+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295862272 unmapped: 53354496 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:15.388858+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 53346304 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:16.389077+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 53346304 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:17.389251+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 53346304 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:18.389459+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 53346304 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:19.389955+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:20.390384+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:21.390620+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:22.390850+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:23.391151+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:24.391407+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:25.391714+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 53329920 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:26.391873+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 53329920 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:27.392110+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 53329920 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:28.392302+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 53329920 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:29.392432+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:30.392616+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:31.392781+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:32.393059+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:33.393177+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:34.393326+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:35.393677+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:36.393811+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:37.393980+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:38.394243+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:39.394432+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:40.394607+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:41.394763+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:42.394950+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:43.395173+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:44.395403+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:45.395598+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:46.395779+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:47.395907+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:48.396046+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:49.396174+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:50.396294+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:51.396424+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:52.396566+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:53.396788+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:54.397160+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:55.397436+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:56.397592+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:57.397730+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:58.397890+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:59.398259+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:00.398680+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:01.398915+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:02.399352+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:03.399611+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:04.399912+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:05.400182+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:06.400326+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:07.400482+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 53231616 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:08.400754+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 53223424 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 389.405700684s of 389.475738525s, submitted: 31
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:09.401034+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 53223424 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:10.401219+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:11.401377+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:12.401531+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:13.401743+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:14.401979+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:15.402289+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:16.402472+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:17.402616+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:18.402824+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:19.402970+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:20.403148+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:21.403328+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:22.403509+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:23.403683+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 53174272 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:24.403857+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 53174272 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:25.404065+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 53174272 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:26.404212+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 53174272 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:27.404449+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 53174272 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:28.404851+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 53174272 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:29.405116+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 53174272 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:30.405591+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:31.405798+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:32.405974+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:33.406145+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:34.406323+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:35.406582+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:36.406847+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:37.407007+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:38.407369+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:39.407538+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 53157888 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:40.407695+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 53157888 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:41.407862+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 53157888 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:42.408106+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 53157888 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:43.408297+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 53149696 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:44.422452+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 53149696 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:45.422718+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 53149696 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:46.423072+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 53149696 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:47.423248+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 53149696 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:48.423439+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 53149696 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:49.423670+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 53149696 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:50.423817+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:51.423981+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:52.424154+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:53.424281+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:54.424484+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:55.424685+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:56.424882+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:57.425116+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:58.425341+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 53133312 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:59.425820+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 53133312 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:00.425995+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 53133312 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:01.426426+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 53133312 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:02.426601+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 53133312 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:03.426748+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 53133312 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:04.427227+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 53133312 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:05.427378+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:06.427528+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:07.427860+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:08.428052+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:09.428175+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:10.428304+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:11.428448+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:12.428595+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'config diff' '{prefix=config diff}'
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'config show' '{prefix=config show}'
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:13.428746+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'counter dump' '{prefix=counter dump}'
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'counter schema' '{prefix=counter schema}'
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 53329920 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:14.428948+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:46 compute-0 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:46 compute-0 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: tick
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_tickets
Oct 14 10:05:46 compute-0 ceph-osd[89514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:15.429207+0000)
Oct 14 10:05:46 compute-0 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:46 compute-0 ceph-osd[89514]: do_command 'log dump' '{prefix=log dump}'
Oct 14 10:05:46 compute-0 ceph-mon[74249]: from='client.23339 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:46 compute-0 ceph-mon[74249]: from='client.23343 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:46 compute-0 ceph-mon[74249]: pgmap v3394: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:46 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1251670558' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 14 10:05:46 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3227497799' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 14 10:05:46 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 14 10:05:46 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/745312317' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 14 10:05:46 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 10:05:46 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23357 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:47 compute-0 nova_compute[259627]: 2025-10-14 10:05:47.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 14 10:05:47 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1606158406' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 14 10:05:47 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23361 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:47 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 14 10:05:47 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2392519678' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 14 10:05:47 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23365 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:47 compute-0 ceph-mon[74249]: from='client.23347 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:47 compute-0 ceph-mon[74249]: from='client.23349 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:47 compute-0 ceph-mon[74249]: from='client.23353 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:47 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/745312317' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 14 10:05:47 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1606158406' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 14 10:05:47 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2392519678' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 14 10:05:47 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 14 10:05:47 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200062440' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 14 10:05:48 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23371 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:48 compute-0 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T10:05:48.370+0000 7f6b82b53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 14 10:05:48 compute-0 ceph-mgr[74543]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 14 10:05:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.560444) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436348560519, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 727, "num_deletes": 255, "total_data_size": 828869, "memory_usage": 843832, "flush_reason": "Manual Compaction"}
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436348569256, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 821059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70474, "largest_seqno": 71200, "table_properties": {"data_size": 817182, "index_size": 1592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8942, "raw_average_key_size": 19, "raw_value_size": 809336, "raw_average_value_size": 1755, "num_data_blocks": 70, "num_entries": 461, "num_filter_entries": 461, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436297, "oldest_key_time": 1760436297, "file_creation_time": 1760436348, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 8854 microseconds, and 5371 cpu microseconds.
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 10:05:48 compute-0 ceph-mon[74249]: from='client.23357 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:48 compute-0 ceph-mon[74249]: from='client.23361 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:48 compute-0 ceph-mon[74249]: pgmap v3395: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:48 compute-0 ceph-mon[74249]: from='client.23365 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:48 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/200062440' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.569311) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 821059 bytes OK
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.569335) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.571346) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.571368) EVENT_LOG_v1 {"time_micros": 1760436348571361, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.571393) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 825013, prev total WAL file size 830318, number of live WAL files 2.
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.572226) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303135' seq:72057594037927935, type:22 .. '6C6F676D0033323636' seq:0, type:0; will stop at (end)
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(801KB)], [167(9271KB)]
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436348572263, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 10314908, "oldest_snapshot_seqno": -1}
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 8765 keys, 10206829 bytes, temperature: kUnknown
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436348621481, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 10206829, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10151558, "index_size": 32246, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21957, "raw_key_size": 231306, "raw_average_key_size": 26, "raw_value_size": 9998126, "raw_average_value_size": 1140, "num_data_blocks": 1244, "num_entries": 8765, "num_filter_entries": 8765, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760436348, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.621695) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 10206829 bytes
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.623392) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.3 rd, 207.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.1 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(25.0) write-amplify(12.4) OK, records in: 9286, records dropped: 521 output_compression: NoCompression
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.623413) EVENT_LOG_v1 {"time_micros": 1760436348623402, "job": 104, "event": "compaction_finished", "compaction_time_micros": 49283, "compaction_time_cpu_micros": 21991, "output_level": 6, "num_output_files": 1, "total_output_size": 10206829, "num_input_records": 9286, "num_output_records": 8765, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436348623662, "job": 104, "event": "table_file_deletion", "file_number": 169}
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436348624999, "job": 104, "event": "table_file_deletion", "file_number": 167}
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.572167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.625040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.625045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.625046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.625047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:05:48 compute-0 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.625049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 10:05:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 14 10:05:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/959110844' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 14 10:05:48 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 14 10:05:48 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/896764778' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 14 10:05:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 14 10:05:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1553801755' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 14 10:05:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 14 10:05:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3535606231' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 14 10:05:49 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:49 compute-0 ceph-mon[74249]: from='client.23371 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:49 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/959110844' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 14 10:05:49 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/896764778' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 14 10:05:49 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1553801755' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 14 10:05:49 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3535606231' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 14 10:05:49 compute-0 nova_compute[259627]: 2025-10-14 10:05:49.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:49 compute-0 crontab[457489]: (root) LIST (root)
Oct 14 10:05:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 14 10:05:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2049773450' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 14 10:05:49 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 14 10:05:49 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109722576' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 14 10:05:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 14 10:05:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1771726257' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 14 10:05:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 14 10:05:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/377864442' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 14 10:05:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 14 10:05:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1737011142' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 14 10:05:50 compute-0 ceph-mon[74249]: pgmap v3396: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2049773450' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 14 10:05:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1109722576' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 14 10:05:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1771726257' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 14 10:05:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/377864442' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 14 10:05:50 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1737011142' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 14 10:05:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 14 10:05:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2392728208' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:01.663215+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3516472 data_alloc: 218103808 data_used: 6881280
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320446464 unmapped: 59604992 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8ed7000/0x0/0x4ffc00000, data 0x2b09b7c/0x2c9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:02.663359+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:03.663526+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8ec0000/0x0/0x4ffc00000, data 0x2b28b7c/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 42K writes, 164K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s
                                           Cumulative WAL: 42K writes, 15K syncs, 2.73 writes per sync, written: 0.16 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4857 writes, 19K keys, 4857 commit groups, 1.0 writes per commit group, ingest: 20.94 MB, 0.03 MB/s
                                           Interval WAL: 4857 writes, 1947 syncs, 2.49 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:04.663661+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:05.663815+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:06.663959+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8ec0000/0x0/0x4ffc00000, data 0x2b28b7c/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510628 data_alloc: 218103808 data_used: 6881280
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.153414726s of 11.554138184s, submitted: 132
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:07.664124+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:08.664301+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:09.664553+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:10.664671+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320364544 unmapped: 59686912 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c9395c20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805c00 session 0x5597c8b54000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d48400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d48400 session 0x5597c8f4f2c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:11.664788+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d48400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d48400 session 0x5597c9322f00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513901 data_alloc: 218103808 data_used: 6881280
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8bdab40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c9fac960
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c7ec7e00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320315392 unmapped: 67174400 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805c00 session 0x5597c6f11c20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805c00 session 0x5597c9fa5a40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7edd000/0x0/0x4ffc00000, data 0x3b0ab8c/0x3ca1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:12.664938+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320323584 unmapped: 67166208 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:13.665127+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7edd000/0x0/0x4ffc00000, data 0x3b0ab8c/0x3ca1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320323584 unmapped: 67166208 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:14.665287+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:15.665405+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:16.665573+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3631519 data_alloc: 218103808 data_used: 6885376
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:17.665744+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:18.665883+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8ec32c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8baad20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7eda000/0x0/0x4ffc00000, data 0x3b0db8c/0x3ca4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:19.666068+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d48400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d48400 session 0x5597c9059e00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.693615913s of 12.837650299s, submitted: 26
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8bdb2c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:20.666192+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320348160 unmapped: 67141632 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:21.666298+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3634757 data_alloc: 218103808 data_used: 6885376
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7ed9000/0x0/0x4ffc00000, data 0x3b0dbaf/0x3ca5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320348160 unmapped: 67141632 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:22.666417+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:23.666525+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:24.666639+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:25.666761+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7ed9000/0x0/0x4ffc00000, data 0x3b0dbaf/0x3ca5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:26.666911+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752677 data_alloc: 234881024 data_used: 23490560
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:27.667089+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:28.667284+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:29.667452+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:30.667586+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7ed9000/0x0/0x4ffc00000, data 0x3b0dbaf/0x3ca5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:31.667730+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753333 data_alloc: 234881024 data_used: 23494656
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.706723213s of 11.763413429s, submitted: 15
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321593344 unmapped: 65896448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:32.667862+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326164480 unmapped: 61325312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:33.667980+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 61251584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:34.668083+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 61251584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:35.668257+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 61251584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e71b6000/0x0/0x4ffc00000, data 0x4830baf/0x49c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:36.668413+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3868965 data_alloc: 234881024 data_used: 24391680
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 61251584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:37.668596+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326246400 unmapped: 61243392 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:38.668754+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326246400 unmapped: 61243392 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:39.669008+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326246400 unmapped: 61243392 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:40.669273+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:41.669417+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3868737 data_alloc: 234881024 data_used: 24412160
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e71b4000/0x0/0x4ffc00000, data 0x4832baf/0x49ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:42.669639+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e71b4000/0x0/0x4ffc00000, data 0x4832baf/0x49ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:43.669807+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:44.669957+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.880608559s of 13.121785164s, submitted: 92
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8bab4a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9df14a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:45.670091+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c9fac5a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:46.670258+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529945 data_alloc: 218103808 data_used: 6897664
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:47.670428+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e86ea000/0x0/0x4ffc00000, data 0x2b43b7c/0x2cd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:48.670606+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:49.670830+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e86ea000/0x0/0x4ffc00000, data 0x2b43b7c/0x2cd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:50.670994+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:51.671194+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529945 data_alloc: 218103808 data_used: 6897664
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:52.671329+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c7e0e5a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2c00 session 0x5597c8da74a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:53.671476+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9df0960
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:54.671640+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:55.671806+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:56.671917+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:57.672077+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:58.672193+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:59.672382+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:00.672504+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:01.672740+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:02.672898+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:03.673073+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:04.673204+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:05.673380+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:06.673534+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:07.673706+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:08.673920+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.259336472s of 23.626934052s, submitted: 120
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:09.674127+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318980096 unmapped: 68509696 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:10.674268+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:11.674496+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:12.674672+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:13.674800+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:14.675191+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:15.675320+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:16.675468+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 68444160 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:17.675613+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 68444160 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:18.675805+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 68444160 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:19.676047+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 68444160 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:20.676190+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:21.676329+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:22.676524+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:23.676704+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:24.676974+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:25.677153+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:26.677324+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:27.677476+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:28.677634+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:29.677862+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:30.678087+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319070208 unmapped: 68419584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:31.678452+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319070208 unmapped: 68419584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:32.678578+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319070208 unmapped: 68419584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:33.678720+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319070208 unmapped: 68419584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:34.678846+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c91d70e0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8b552c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c93234a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d48400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d48400 session 0x5597c9fbb0e0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.051465988s of 26.344564438s, submitted: 90
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8bade00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8eb6960
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c7e0eb40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c705ad20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca805c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805c00 session 0x5597c9058b40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:35.679092+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:36.679259+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e8000/0x0/0x4ffc00000, data 0x23febee/0x2596000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460245 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:37.679455+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:38.679644+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:39.679940+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:40.680125+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:41.680328+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460245 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e8000/0x0/0x4ffc00000, data 0x23febee/0x2596000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:42.680509+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:43.680655+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:44.680824+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9f8bc20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e8000/0x0/0x4ffc00000, data 0x23febee/0x2596000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e7000/0x0/0x4ffc00000, data 0x23fec11/0x2597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:45.680970+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.328366280s of 10.499547958s, submitted: 49
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:46.681128+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3465278 data_alloc: 218103808 data_used: 4861952
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319340544 unmapped: 68149248 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:47.681273+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:48.681444+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:49.682141+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:50.682336+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e7000/0x0/0x4ffc00000, data 0x23fec11/0x2597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:51.682490+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3523198 data_alloc: 218103808 data_used: 13045760
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e7000/0x0/0x4ffc00000, data 0x23fec11/0x2597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:52.682698+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:53.683206+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:54.683522+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:55.683740+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.949416161s of 10.952005386s, submitted: 1
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:56.683968+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554408 data_alloc: 218103808 data_used: 13467648
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322412544 unmapped: 65077248 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:57.684324+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8e9e000/0x0/0x4ffc00000, data 0x2736c11/0x28cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:58.684797+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:59.685087+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:00.685391+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:01.685623+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567410 data_alloc: 234881024 data_used: 13811712
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:02.685850+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:03.686052+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8e20000/0x0/0x4ffc00000, data 0x27b4c11/0x294d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:04.686201+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:05.686361+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:06.686558+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566358 data_alloc: 234881024 data_used: 13824000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:07.686747+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8e00000/0x0/0x4ffc00000, data 0x27d5c11/0x296e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:08.686908+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:09.687074+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323551232 unmapped: 63938560 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:10.687198+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.258024216s of 14.533938408s, submitted: 65
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8da6000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c8da63c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9feb800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb800 session 0x5597c78414a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c6f934a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323551232 unmapped: 63938560 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:11.687382+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8df9000/0x0/0x4ffc00000, data 0x27dbc20/0x2975000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3568414 data_alloc: 234881024 data_used: 13824000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330170368 unmapped: 57319424 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:12.687565+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8eb6780
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c9394d20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c8f4e780
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c8f4f860
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9feb800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb800 session 0x5597c7e0f4a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:13.687707+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:14.687933+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:15.688106+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:16.688280+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623576 data_alloc: 234881024 data_used: 13828096
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:17.688466+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:18.688632+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323567616 unmapped: 63922176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:19.688791+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323567616 unmapped: 63922176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:20.688971+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323567616 unmapped: 63922176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.050421715s of 10.386352539s, submitted: 13
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c71854a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:21.689055+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624501 data_alloc: 234881024 data_used: 13828096
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323584000 unmapped: 63905792 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:22.689184+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323584000 unmapped: 63905792 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:23.689358+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323584000 unmapped: 63905792 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:24.689567+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324468736 unmapped: 63021056 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:25.689722+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:26.689846+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3672021 data_alloc: 234881024 data_used: 20504576
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:27.690056+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:28.690211+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:29.690412+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:30.690590+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:31.690742+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3672373 data_alloc: 234881024 data_used: 20504576
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:32.690834+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:33.691044+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:34.691201+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.206857681s of 13.240119934s, submitted: 8
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331653120 unmapped: 55836672 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:35.691404+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331759616 unmapped: 55730176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:36.691568+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7b26000/0x0/0x4ffc00000, data 0x3aa6c20/0x3c40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788835 data_alloc: 234881024 data_used: 21499904
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331759616 unmapped: 55730176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:37.691689+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331759616 unmapped: 55730176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:38.691936+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331759616 unmapped: 55730176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:39.692200+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7b26000/0x0/0x4ffc00000, data 0x3aa6c20/0x3c40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331833344 unmapped: 55656448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:40.692340+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331833344 unmapped: 55656448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:41.692676+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788835 data_alloc: 234881024 data_used: 21499904
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331161600 unmapped: 56328192 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:42.692776+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7b26000/0x0/0x4ffc00000, data 0x3aa6c20/0x3c40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331161600 unmapped: 56328192 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:43.692961+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331169792 unmapped: 56320000 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:44.693125+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331169792 unmapped: 56320000 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:45.693302+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7b2d000/0x0/0x4ffc00000, data 0x3aa7c20/0x3c41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331169792 unmapped: 56320000 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:46.693433+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3784939 data_alloc: 234881024 data_used: 21671936
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8d714a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.424279213s of 12.732007027s, submitted: 100
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c9f8a1e0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331169792 unmapped: 56320000 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:47.693557+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c8d71e00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 57835520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:48.693709+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8dfa000/0x0/0x4ffc00000, data 0x27dbc11/0x2974000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 57835520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:49.693906+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 57835520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:50.694077+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c6489c20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 57835520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:51.697300+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8dee000/0x0/0x4ffc00000, data 0x27e7c11/0x2980000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [0,0,1])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9323a40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:52.697458+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:53.697641+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:54.697801+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:55.697945+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:56.698119+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:57.698318+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:58.698448+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:59.698615+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:00.698803+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:01.698988+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:02.699158+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:03.699338+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:04.699486+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:05.699629+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:06.699771+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324665344 unmapped: 62824448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:07.699971+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324665344 unmapped: 62824448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:08.700134+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324665344 unmapped: 62824448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:09.700353+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324665344 unmapped: 62824448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:10.700475+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:11.700611+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:12.700776+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:13.701004+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:14.701167+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:15.701301+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:16.701497+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:17.701642+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:18.701847+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:19.702086+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:20.702258+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:21.702437+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:22.702571+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:23.702759+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:24.702912+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:25.703061+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:26.703219+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:27.703403+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:28.703617+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:29.703870+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:30.704146+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324698112 unmapped: 62791680 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:31.704350+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324698112 unmapped: 62791680 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:32.704565+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324698112 unmapped: 62791680 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:33.704729+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324698112 unmapped: 62791680 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8f4ed20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c8ec32c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c91761e0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9fec000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9fec000 session 0x5597c705bc20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 46.449398041s of 46.750865936s, submitted: 92
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c6f114a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:34.704886+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324829184 unmapped: 62660608 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:35.705138+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:36.705274+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441271 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:37.705466+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:38.705664+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:39.706060+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:40.706186+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324853760 unmapped: 62636032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:41.706370+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8baa780
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441271 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c9fa43c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:42.706534+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:43.706706+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c8babc20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8260800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260800 session 0x5597c911cf00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:44.706888+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c6f44000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:45.707079+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:46.707226+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480549 data_alloc: 218103808 data_used: 9494528
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:47.707393+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:48.707531+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:49.707728+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:50.707883+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:51.708059+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480869 data_alloc: 218103808 data_used: 9551872
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:52.708212+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:53.708352+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:54.708501+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:55.708665+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:56.708813+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.233131409s of 23.287984848s, submitted: 4
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481685 data_alloc: 218103808 data_used: 9555968
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:57.708965+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323919872 unmapped: 63569920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:58.709117+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323928064 unmapped: 63561728 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:59.709338+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:00.709515+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:01.709722+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3507191 data_alloc: 218103808 data_used: 9674752
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:02.710582+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:03.710803+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:04.711731+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:05.712719+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:06.713123+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3507207 data_alloc: 218103808 data_used: 9674752
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:07.714213+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:08.715176+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:09.716167+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:10.716925+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323207168 unmapped: 64282624 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:11.717180+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323207168 unmapped: 64282624 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3507207 data_alloc: 218103808 data_used: 9674752
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:12.717840+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323207168 unmapped: 64282624 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:13.717983+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.272737503s of 16.378507614s, submitted: 28
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323747840 unmapped: 63741952 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [0,0,0,0,0,1])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c8f4f0e0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:14.718142+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323215360 unmapped: 64274432 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:15.718321+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa6000/0x0/0x4ffc00000, data 0x2671b8c/0x2808000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323223552 unmapped: 64266240 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:16.718488+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 64258048 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:17.718633+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3542575 data_alloc: 218103808 data_used: 9674752
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 64258048 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:18.718806+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 64258048 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:19.719080+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa6000/0x0/0x4ffc00000, data 0x2671b8c/0x2808000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 64258048 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c8ec6f00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:20.719276+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323239936 unmapped: 64249856 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91cb000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91cb000 session 0x5597c8edd4a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cc6e8000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c8d71a40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:21.719442+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9fad860
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323239936 unmapped: 64249856 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:22.719576+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544413 data_alloc: 218103808 data_used: 9674752
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323239936 unmapped: 64249856 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:23.720161+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91cb000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:24.720315+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:25.720399+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:26.720560+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:27.720744+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570813 data_alloc: 218103808 data_used: 13381632
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:28.720921+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:29.721081+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:30.721289+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:31.721496+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:32.721714+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570813 data_alloc: 218103808 data_used: 13381632
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:33.721877+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.227336884s of 20.305019379s, submitted: 9
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:34.722086+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324141056 unmapped: 63348736 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e997f000/0x0/0x4ffc00000, data 0x2c8fb9c/0x2e27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:35.722243+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:36.722402+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e98e9000/0x0/0x4ffc00000, data 0x2d2db9c/0x2ec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:37.722547+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3627635 data_alloc: 218103808 data_used: 13598720
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:38.722719+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:39.722915+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:40.723083+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324935680 unmapped: 62554112 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:41.723229+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324943872 unmapped: 62545920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:42.723409+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628135 data_alloc: 218103808 data_used: 13598720
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x2d51b9c/0x2ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324943872 unmapped: 62545920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:43.723623+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324943872 unmapped: 62545920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:44.723782+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324943872 unmapped: 62545920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:45.723945+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324952064 unmapped: 62537728 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:46.724080+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x2d51b9c/0x2ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324952064 unmapped: 62537728 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:47.760850+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628455 data_alloc: 218103808 data_used: 13606912
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324952064 unmapped: 62537728 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91cb000 session 0x5597c705bc20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9176b40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.985599518s of 14.288364410s, submitted: 86
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:48.767144+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c705a5a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:49.767331+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:50.767520+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:51.767733+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:52.800432+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515600 data_alloc: 218103808 data_used: 9674752
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:53.800600+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c9fba780
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9394f00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c9fba5a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:54.800757+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:55.800891+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:56.801086+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:57.801223+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:58.801393+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:59.801580+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:00.801774+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:01.801925+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:02.802082+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:03.802269+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:04.802454+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:05.802625+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:06.802776+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:07.802919+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:08.803081+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca804800 session 0x5597c9322d20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c91cb000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:09.803257+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: mgrc ms_handle_reset ms_handle_reset con 0x5597c8fbb000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3625056923
Oct 14 10:05:50 compute-0 ceph-osd[88375]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3625056923,v1:192.168.122.100:6801/3625056923]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: get_auth_request con 0x5597c9fec000 auth_method 0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: mgrc handle_mgr_configure stats_period=5
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db4400 session 0x5597c911c780
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cc6e8400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c7e2c800 session 0x5597c8bacb40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8db4400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:10.803376+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:11.803529+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:12.803726+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:13.803887+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:14.804085+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:15.804214+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:16.804378+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318996480 unmapped: 68493312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:17.804549+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318996480 unmapped: 68493312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:18.804715+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318996480 unmapped: 68493312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:19.804939+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318996480 unmapped: 68493312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:20.805257+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:21.805424+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:22.805583+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:23.805759+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:24.805919+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:25.806097+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319012864 unmapped: 68476928 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:26.806276+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319012864 unmapped: 68476928 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:27.806495+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319012864 unmapped: 68476928 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:28.806674+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319012864 unmapped: 68476928 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:29.806891+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c8da7c20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c9df1680
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cc6e8000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c7e792c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f74c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f74c00 session 0x5597c8ec6b40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.484188080s of 41.605556488s, submitted: 34
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325509120 unmapped: 66715648 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8da70e0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c9f8b860
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c7e78960
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cc6e8000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c976a000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca800000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca800000 session 0x5597c7840b40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:30.807075+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:31.807231+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d07000/0x0/0x4ffc00000, data 0x2911b7c/0x2aa7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:32.807384+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3527710 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:33.807515+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:34.807638+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:35.807868+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d07000/0x0/0x4ffc00000, data 0x2911b7c/0x2aa7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320536576 unmapped: 71688192 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c90592c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:36.808081+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320536576 unmapped: 71688192 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c976be00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:37.808255+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c8baad20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3527710 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cc6e8000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c9323860
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319496192 unmapped: 72728576 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:38.808394+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80e400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319496192 unmapped: 72728576 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:39.808541+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319447040 unmapped: 72777728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:40.808807+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:41.809145+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:42.809352+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3633229 data_alloc: 234881024 data_used: 18477056
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:43.809870+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:44.810067+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:45.810749+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:46.810993+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:47.811858+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3633229 data_alloc: 234881024 data_used: 18477056
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:48.812128+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.397739410s of 19.592414856s, submitted: 36
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:49.812408+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [0,0,1,0,3,1])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e96e5000/0x0/0x4ffc00000, data 0x2f32b8c/0x30c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326262784 unmapped: 65961984 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:50.812610+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:51.812984+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:52.813190+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707751 data_alloc: 234881024 data_used: 19681280
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:53.813500+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9633000/0x0/0x4ffc00000, data 0x2fe3b8c/0x317a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:54.813660+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:55.813944+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9633000/0x0/0x4ffc00000, data 0x2fe3b8c/0x317a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:56.814109+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:57.814426+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700087 data_alloc: 234881024 data_used: 19681280
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:58.814584+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:59.814858+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9631000/0x0/0x4ffc00000, data 0x2fe6b8c/0x317d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:00.814972+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:01.815199+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:02.815366+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700087 data_alloc: 234881024 data_used: 19681280
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9631000/0x0/0x4ffc00000, data 0x2fe6b8c/0x317d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:03.815579+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9631000/0x0/0x4ffc00000, data 0x2fe6b8c/0x317d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c79745a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9176780
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c6f11e00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c93230e0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.581520081s of 14.833774567s, submitted: 84
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c705ab40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:04.815686+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cc6e8000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c8baad20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cc6e8000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c976be00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c90592c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c976a000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:05.815842+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:06.816075+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:07.816296+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3771905 data_alloc: 234881024 data_used: 19681280
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:08.816438+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:09.816700+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:10.816824+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:11.816988+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326721536 unmapped: 65503232 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:12.817154+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3836865 data_alloc: 234881024 data_used: 27766784
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:13.817293+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:14.817420+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:15.817539+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:16.817681+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:17.817843+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3836865 data_alloc: 234881024 data_used: 27766784
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:18.818064+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:19.818295+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:20.818446+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.824155807s of 16.902429581s, submitted: 11
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329990144 unmapped: 62234624 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:21.818581+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 58564608 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:22.818725+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x38bab9c/0x3a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3878863 data_alloc: 234881024 data_used: 28008448
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 58564608 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:23.818892+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7659000/0x0/0x4ffc00000, data 0x3e17b9c/0x3faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:24.819085+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:25.819223+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:26.819387+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:27.819544+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3886749 data_alloc: 234881024 data_used: 27828224
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e763c000/0x0/0x4ffc00000, data 0x3e2cb9c/0x3fc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:28.819706+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:29.819869+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:30.820085+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:31.820239+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e763c000/0x0/0x4ffc00000, data 0x3e2cb9c/0x3fc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:32.820383+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3886749 data_alloc: 234881024 data_used: 27828224
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.783678055s of 12.008990288s, submitted: 48
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:33.820527+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:34.820680+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e763c000/0x0/0x4ffc00000, data 0x3e2cb9c/0x3fc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:35.820818+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c7e78960
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:36.820970+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9df03c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330792960 unmapped: 61431808 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:37.821138+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3710153 data_alloc: 234881024 data_used: 18591744
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330792960 unmapped: 61431808 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:38.821311+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330792960 unmapped: 61431808 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e848f000/0x0/0x4ffc00000, data 0x2fe8b8c/0x317f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:39.821499+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c8ec65a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9df0b40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330792960 unmapped: 61431808 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:40.822661+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9df0f00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:41.822829+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:42.822977+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:43.823087+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:44.823228+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:45.823352+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:46.823459+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:47.823579+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:48.823723+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:49.823967+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:50.824124+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:51.826658+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:52.826816+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:53.827081+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:54.827252+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:55.827437+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:56.827618+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:57.827842+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:58.828091+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:59.828304+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:00.828478+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:01.828669+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:02.828879+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:03.829086+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:04.829341+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:05.829462+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:06.829632+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:07.829799+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:08.829984+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:09.830249+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:10.830401+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:11.830579+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:12.830767+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:13.830910+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:14.831375+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:15.831510+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9059e00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8d49c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8bdab40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9fa4b40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c8b550e0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80e400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.645046234s of 42.893882751s, submitted: 79
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c8d70b40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c7184000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6c800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c9fb23c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c7840000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c8eb7e00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321708032 unmapped: 70516736 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:16.831767+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321708032 unmapped: 70516736 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:17.831981+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3541334 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321716224 unmapped: 70508544 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:18.832445+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321716224 unmapped: 70508544 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:19.832718+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:20.832931+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321716224 unmapped: 70508544 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:21.833161+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321716224 unmapped: 70508544 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:22.833400+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321724416 unmapped: 70500352 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80e400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c9fa50e0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3541334 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c8ecab40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:23.833628+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321724416 unmapped: 70500352 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cc6e8000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c7e0fa40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8ec7c20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:24.833807+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321617920 unmapped: 70606848 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80e400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:25.833960+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 70598656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:26.834095+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321634304 unmapped: 70590464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:27.834255+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614275 data_alloc: 234881024 data_used: 14381056
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:28.834395+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:29.834552+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:30.834700+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:31.834873+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:32.835101+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614275 data_alloc: 234881024 data_used: 14381056
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:33.835417+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:34.835666+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:35.835951+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.752676010s of 19.861513138s, submitted: 33
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:36.836080+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329728000 unmapped: 62496768 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:37.836264+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3681253 data_alloc: 234881024 data_used: 15691776
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:38.836559+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8853000/0x0/0x4ffc00000, data 0x2c1bb8c/0x2db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:39.837098+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:40.837278+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8853000/0x0/0x4ffc00000, data 0x2c1bb8c/0x2db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8853000/0x0/0x4ffc00000, data 0x2c1bb8c/0x2db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:41.837632+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:42.837851+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674689 data_alloc: 234881024 data_used: 15691776
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:43.838052+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:44.838236+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e883d000/0x0/0x4ffc00000, data 0x2c3ab8c/0x2dd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:45.838397+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:46.838534+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:47.838685+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e883d000/0x0/0x4ffc00000, data 0x2c3ab8c/0x2dd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674689 data_alloc: 234881024 data_used: 15691776
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:48.838826+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.384753227s of 12.701920509s, submitted: 117
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:49.839061+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:50.839164+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:51.839271+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9323a40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6e400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6e400 session 0x5597c8b552c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeeb400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c9fac1e0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8fd0000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8fd0000 session 0x5597c8eddc20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9fba000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6e400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6e400 session 0x5597c7ec63c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c7ec6960
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeeb400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c9df0000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6f800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c8d70000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:52.839421+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717851 data_alloc: 234881024 data_used: 15691776
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:53.839542+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840c000/0x0/0x4ffc00000, data 0x3069bfe/0x3202000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840c000/0x0/0x4ffc00000, data 0x3069bfe/0x3202000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:54.839610+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:55.839745+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:56.839887+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:57.840056+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6f800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c7974f00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840c000/0x0/0x4ffc00000, data 0x3069bfe/0x3202000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c7ec6960
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717851 data_alloc: 234881024 data_used: 15691776
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:58.840293+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6e400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6e400 session 0x5597c7ec63c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.189133644s of 10.343238831s, submitted: 42
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9fba000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:59.840453+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840b000/0x0/0x4ffc00000, data 0x3069c21/0x3203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeeb400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:00.840563+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330391552 unmapped: 61833216 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:01.840710+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:02.840866+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840b000/0x0/0x4ffc00000, data 0x3069c21/0x3203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3748708 data_alloc: 234881024 data_used: 19931136
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:03.841048+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:04.841236+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:05.841354+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:06.841515+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:07.841714+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749364 data_alloc: 234881024 data_used: 19935232
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840b000/0x0/0x4ffc00000, data 0x3069c21/0x3203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:08.841896+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:09.842116+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:10.842260+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.708535194s of 11.744414330s, submitted: 9
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840b000/0x0/0x4ffc00000, data 0x3069c21/0x3203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,1,1])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:11.843170+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 57638912 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:12.843356+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3808274 data_alloc: 234881024 data_used: 20135936
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:13.843525+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:14.843699+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7e00000/0x0/0x4ffc00000, data 0x366cc21/0x3806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7e00000/0x0/0x4ffc00000, data 0x366cc21/0x3806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:15.843862+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:16.844052+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:17.844252+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 57098240 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7e00000/0x0/0x4ffc00000, data 0x366cc21/0x3806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803170 data_alloc: 234881024 data_used: 20140032
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:18.844395+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 57098240 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:19.845932+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 57098240 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:20.847119+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 57098240 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:21.849291+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 57090048 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:22.850323+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 57090048 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.447299004s of 12.729538918s, submitted: 84
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803346 data_alloc: 234881024 data_used: 20140032
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:23.850587+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 57090048 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7e05000/0x0/0x4ffc00000, data 0x366fc21/0x3809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:24.850837+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 57090048 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:25.851092+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c9fac1e0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335142912 unmapped: 57081856 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeeb400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c8da70e0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:26.851236+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335192064 unmapped: 57032704 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:27.851398+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335192064 unmapped: 57032704 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686054 data_alloc: 234881024 data_used: 15679488
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:28.851541+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335192064 unmapped: 57032704 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9f8a1e0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c9f8b860
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:29.851856+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329187328 unmapped: 63037440 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8f4e780
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e882a000/0x0/0x4ffc00000, data 0x2c4db8c/0x2de4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:30.852196+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:31.852973+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:32.853448+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:33.853717+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:34.854084+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:35.854343+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:36.854583+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:37.854711+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:38.854906+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:39.855231+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:40.855523+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:41.856103+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:42.856425+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:43.856632+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:44.856896+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:45.857267+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:46.857539+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:47.857768+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:48.857997+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:49.858418+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:50.858658+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:51.858886+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:52.859120+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:53.859276+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:54.859461+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:55.859606+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:56.859812+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:57.859975+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:58.860133+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329220096 unmapped: 63004672 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:59.860326+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329220096 unmapped: 63004672 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:00.860477+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329220096 unmapped: 63004672 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:01.860614+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329220096 unmapped: 63004672 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:02.860777+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6e400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.834590912s of 39.179557800s, submitted: 109
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6e400 session 0x5597c705a960
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:03.860964+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3540834 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:04.861130+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:05.861414+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x2376b7c/0x250c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:06.861617+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:07.861768+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:08.861905+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3540834 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9f8a3c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324239360 unmapped: 71663616 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c93223c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:09.862058+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324239360 unmapped: 71663616 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80e400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c9058b40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeeb400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c8b554a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:10.862223+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324386816 unmapped: 71516160 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6f800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:11.862349+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324395008 unmapped: 71507968 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:12.862465+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:13.862638+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614328 data_alloc: 234881024 data_used: 13717504
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:14.862784+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:15.862934+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:16.863107+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:17.863250+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:18.863355+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614328 data_alloc: 234881024 data_used: 13717504
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:19.863510+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:20.863723+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:21.864133+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.299053192s of 19.407587051s, submitted: 25
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:22.864248+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328425472 unmapped: 67477504 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:23.864390+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662080 data_alloc: 234881024 data_used: 14364672
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328425472 unmapped: 67477504 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:24.864577+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:25.864770+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8b89000/0x0/0x4ffc00000, data 0x28e6b9f/0x2a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:26.864860+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:27.865008+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:28.865231+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8b89000/0x0/0x4ffc00000, data 0x28e6b9f/0x2a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3673472 data_alloc: 234881024 data_used: 14249984
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:29.865420+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:30.865593+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:31.865860+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:32.866077+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:33.867005+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3666320 data_alloc: 234881024 data_used: 14249984
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8b8e000/0x0/0x4ffc00000, data 0x28e9b9f/0x2a80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:34.867219+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:35.867392+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:36.867550+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8b8e000/0x0/0x4ffc00000, data 0x28e9b9f/0x2a80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:37.867737+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:38.868175+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3666320 data_alloc: 234881024 data_used: 14249984
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:39.868474+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.285617828s of 17.535190582s, submitted: 85
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c9fa41e0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:40.868580+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:41.868727+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7d97000/0x0/0x4ffc00000, data 0x36e0b9f/0x3877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:42.868873+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:43.869126+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3771938 data_alloc: 234881024 data_used: 14249984
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:44.869435+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:45.869689+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c9fad2c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8b54960
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:46.869828+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c911da40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7d97000/0x0/0x4ffc00000, data 0x36e0b9f/0x3877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80e400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c8f4fe00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeeb400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:47.869909+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80f800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:48.870054+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3829579 data_alloc: 234881024 data_used: 19238912
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:49.870208+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:50.870349+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:51.870512+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7d72000/0x0/0x4ffc00000, data 0x3704bc2/0x389c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:52.870679+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:53.870879+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3839499 data_alloc: 234881024 data_used: 19476480
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:54.871070+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.400768280s of 15.509059906s, submitted: 20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:55.871190+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:56.871375+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:57.871571+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7d72000/0x0/0x4ffc00000, data 0x3704bc2/0x389c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:58.871800+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3870475 data_alloc: 234881024 data_used: 19501056
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333758464 unmapped: 74211328 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:59.871999+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334495744 unmapped: 73474048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e760e000/0x0/0x4ffc00000, data 0x3e68bc2/0x4000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:00.872166+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e760e000/0x0/0x4ffc00000, data 0x3e68bc2/0x4000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334512128 unmapped: 73457664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:01.872349+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:02.872507+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:03.872682+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3917109 data_alloc: 234881024 data_used: 20856832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 176K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.71 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2838 writes, 11K keys, 2838 commit groups, 1.0 writes per commit group, ingest: 12.66 MB, 0.02 MB/s
                                           Interval WAL: 2838 writes, 1140 syncs, 2.49 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:04.872826+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7581000/0x0/0x4ffc00000, data 0x3ef5bc2/0x408d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:05.872966+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7581000/0x0/0x4ffc00000, data 0x3ef5bc2/0x408d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:06.873101+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7581000/0x0/0x4ffc00000, data 0x3ef5bc2/0x408d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.800569534s of 12.333517075s, submitted: 84
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:07.873227+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:08.873366+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911673 data_alloc: 234881024 data_used: 20856832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:09.873517+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:10.873875+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:11.874034+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7560000/0x0/0x4ffc00000, data 0x3f16bc2/0x40ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:12.874176+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:13.874293+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911673 data_alloc: 234881024 data_used: 20856832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:14.874516+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:15.874685+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:16.874877+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7560000/0x0/0x4ffc00000, data 0x3f16bc2/0x40ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:17.875035+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:18.875169+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911673 data_alloc: 234881024 data_used: 20856832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:19.875367+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:20.875501+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7560000/0x0/0x4ffc00000, data 0x3f16bc2/0x40ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:21.875697+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:22.875885+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:23.876114+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911673 data_alloc: 234881024 data_used: 20856832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7560000/0x0/0x4ffc00000, data 0x3f16bc2/0x40ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:24.876306+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80f800 session 0x5597c9394d20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c9df1e00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:25.876467+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.377149582s of 18.387834549s, submitted: 2
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8da7680
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 73048064 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:26.876679+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 73048064 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:27.876928+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 73048064 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:28.877106+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643825 data_alloc: 218103808 data_used: 9093120
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 73039872 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e885d000/0x0/0x4ffc00000, data 0x28eab9f/0x2a81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:29.878078+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c705bc20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c9f8b2c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c90583c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:30.879005+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:31.880152+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:32.880825+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:33.881262+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491469 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a29000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:34.881823+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:35.882283+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:36.882482+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:37.882665+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:38.882941+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491469 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:39.883188+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a29000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:40.883355+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:41.883576+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:42.883866+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:43.884078+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491469 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:44.884305+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a29000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:45.884441+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 75579392 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:46.884568+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 75579392 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:47.884721+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 75579392 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:48.884913+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491469 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 75579392 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:49.885159+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c93941e0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8bab680
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6f800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c8edc000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9fad860
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597caeeb400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.921491623s of 24.220367432s, submitted: 89
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c8d70780
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c91d7860
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8edc780
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6f800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c976af00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9fad860
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332398592 unmapped: 75571200 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:50.885353+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e94fb000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332398592 unmapped: 75571200 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:51.885489+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80b000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c93941e0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332414976 unmapped: 75554816 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:52.885651+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332414976 unmapped: 75554816 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:53.885837+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544403 data_alloc: 218103808 data_used: 4796416
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332414976 unmapped: 75554816 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:54.885995+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334086144 unmapped: 73883648 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:55.886245+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8da7680
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8baa3c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e94fb000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6f800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331317248 unmapped: 76652544 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:56.886400+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c8ec6b40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:57.886522+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:58.886719+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:59.886988+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:00.887260+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:01.887381+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:02.887582+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:03.887784+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:04.887971+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:05.888138+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:06.888283+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:07.888429+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:08.888611+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.342603683s of 18.682430267s, submitted: 72
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:09.888801+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 76595200 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:10.889095+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:11.889228+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:12.889401+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [1])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:13.889579+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:14.889726+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:15.889883+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:16.890054+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:17.890251+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:18.890384+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:19.890574+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:20.890774+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:21.890908+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:22.891099+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:23.891269+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:24.891460+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:25.891629+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:26.891855+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:27.892094+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:28.892243+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:29.892427+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:30.892543+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:31.892636+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:32.892844+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:33.893085+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:34.893267+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:35.893460+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:36.893707+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:37.893895+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:38.894098+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:39.894338+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:40.894527+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:41.894682+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:42.894910+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:43.895143+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:44.895340+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:45.895540+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:46.895744+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:47.895917+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:48.896193+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:49.896456+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:50.896665+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:51.896812+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:52.897089+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:53.897259+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:54.897466+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:55.897594+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.002799988s of 47.298183441s, submitted: 90
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:56.897801+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331456512 unmapped: 76513280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 293 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9fbb680
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:57.897968+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331456512 unmapped: 76513280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e9618000/0x0/0x4ffc00000, data 0x1a50619/0x1be5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:58.898152+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500843 data_alloc: 218103808 data_used: 4218880
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:59.898358+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:00.898546+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:01.898744+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e9618000/0x0/0x4ffc00000, data 0x1a50619/0x1be5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:02.898887+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _renew_subs
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:03.899110+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331481088 unmapped: 76488704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:04.900363+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331481088 unmapped: 76488704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:05.901201+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:06.902758+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:07.903531+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:08.904642+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:09.905626+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:10.906448+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:11.907162+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:12.907343+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:13.908069+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:14.908221+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:15.909097+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:16.909278+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:17.909591+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:18.909746+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:19.910093+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:20.910287+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:21.910460+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:22.910604+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:23.910796+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:24.910919+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:25.911115+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331513856 unmapped: 76455936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:26.911274+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331513856 unmapped: 76455936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:27.911444+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331513856 unmapped: 76455936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:28.911592+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331513856 unmapped: 76455936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:29.911763+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:30.911926+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:31.912147+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:32.912304+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:33.912488+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:34.912717+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:35.912981+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:36.913299+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:37.919641+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:38.920117+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:39.921932+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:40.922499+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:41.922782+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:42.923183+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:43.923672+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:44.924064+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:45.924816+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:46.925097+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:47.925276+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:48.925560+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:49.926276+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:50.926582+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:51.926944+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:52.927266+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:53.927584+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:54.927804+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331554816 unmapped: 76414976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:55.927955+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331554816 unmapped: 76414976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:56.928197+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331554816 unmapped: 76414976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:57.928611+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331554816 unmapped: 76414976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:58.928826+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331563008 unmapped: 76406784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:59.929223+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331563008 unmapped: 76406784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:00.970283+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331563008 unmapped: 76406784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:01.970435+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331563008 unmapped: 76406784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:02.970715+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331563008 unmapped: 76406784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:03.970926+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331563008 unmapped: 76406784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:04.971168+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:05.971593+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:06.971848+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:07.972177+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:08.972431+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:09.972624+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:10.972779+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:11.972939+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331579392 unmapped: 76390400 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:12.973089+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331579392 unmapped: 76390400 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:13.973270+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331579392 unmapped: 76390400 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:14.973429+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331587584 unmapped: 76382208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:15.973565+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331587584 unmapped: 76382208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:16.973737+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331587584 unmapped: 76382208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:17.973867+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331587584 unmapped: 76382208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:18.974158+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331587584 unmapped: 76382208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:19.974421+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:20.974626+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:21.974805+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:22.975107+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:23.975306+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:24.975444+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:25.975634+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:26.975801+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:27.976137+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331603968 unmapped: 76365824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:28.976313+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331603968 unmapped: 76365824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:29.976542+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331603968 unmapped: 76365824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:30.976684+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331603968 unmapped: 76365824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:31.976892+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331603968 unmapped: 76365824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:32.977116+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331603968 unmapped: 76365824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:33.977308+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331612160 unmapped: 76357632 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:34.977498+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331612160 unmapped: 76357632 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:35.977660+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331612160 unmapped: 76357632 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:36.977846+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331612160 unmapped: 76357632 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:37.978035+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:38.978220+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:39.978454+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:40.978637+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:41.978786+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:42.978961+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:43.979161+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:44.979273+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331628544 unmapped: 76341248 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:45.979443+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331628544 unmapped: 76341248 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:46.979625+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331628544 unmapped: 76341248 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:47.979786+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331628544 unmapped: 76341248 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:48.979953+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331636736 unmapped: 76333056 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:49.980228+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331636736 unmapped: 76333056 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:50.980450+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80e400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 ms_handle_reset con 0x5597ca80e400 session 0x5597c9322d20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337690624 unmapped: 70279168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 ms_handle_reset con 0x5597c70cac00 session 0x5597c8d71a40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:51.980658+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337690624 unmapped: 70279168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:52.980830+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337690624 unmapped: 70279168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:53.981195+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 70270976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:54.981434+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517001 data_alloc: 218103808 data_used: 11038720
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 70270976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:55.981649+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 70270976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:56.981816+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 70270976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:57.982003+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 70270976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:58.982271+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 70270976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:59.982516+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 123.750770569s of 123.860015869s, submitted: 42
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519975 data_alloc: 218103808 data_used: 11038720
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 295 ms_handle_reset con 0x5597c9538400 session 0x5597c9fa5c20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 73490432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:00.982714+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 73490432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:01.982878+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 73490432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6f800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:02.983040+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _renew_subs
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 296 ms_handle_reset con 0x5597c9f6f800 session 0x5597c8b54f00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:03.983197+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9e10000/0x0/0x4ffc00000, data 0x12557fb/0x13ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:04.983466+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3312682 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:05.983609+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:06.983796+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:07.983985+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:08.984273+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:09.984471+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3312682 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 296 heartbeat osd_stat(store_statfs(0x4eae11000/0x0/0x4ffc00000, data 0x2557eb/0x3ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca80e400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 296 ms_handle_reset con 0x5597ca80e400 session 0x5597c9177860
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.878767967s of 10.074717522s, submitted: 53
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:10.984761+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:11.984921+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 297 heartbeat osd_stat(store_statfs(0x4eae0e000/0x0/0x4ffc00000, data 0x25724e/0x3ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:12.985099+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:13.985254+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 297 heartbeat osd_stat(store_statfs(0x4eae0e000/0x0/0x4ffc00000, data 0x25724e/0x3ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334495744 unmapped: 73474048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:14.985430+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 ms_handle_reset con 0x5597ca9b2000 session 0x5597c7975e00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:15.985615+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:16.985931+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:17.986101+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:18.986255+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:19.986484+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:20.986696+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:21.986876+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334512128 unmapped: 73457664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:22.987113+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334512128 unmapped: 73457664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:23.987321+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:24.987489+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:25.987665+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:26.987832+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:27.988134+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:28.988298+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:29.988527+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:30.988670+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334528512 unmapped: 73441280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:31.988832+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334528512 unmapped: 73441280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:32.988979+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334528512 unmapped: 73441280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:33.989247+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334528512 unmapped: 73441280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:34.989448+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334528512 unmapped: 73441280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:35.989632+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334528512 unmapped: 73441280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:36.989867+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334536704 unmapped: 73433088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:37.990084+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334544896 unmapped: 73424896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:38.990275+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 73416704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:39.990487+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 73416704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:40.990666+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 73416704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:41.990820+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 73416704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:42.991002+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 73416704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:43.991235+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 73416704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:44.991449+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:45.991701+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:46.991938+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:47.992405+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:48.992982+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:49.993562+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:50.993927+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:51.994379+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:52.994837+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:53.995270+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 73400320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:54.995630+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 73400320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:55.995930+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 73400320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:56.996253+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 73400320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:57.996468+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 73400320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:58.996709+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334577664 unmapped: 73392128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:59.997058+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334577664 unmapped: 73392128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:00.997311+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334577664 unmapped: 73392128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:01.997651+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334577664 unmapped: 73392128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:02.997918+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 73383936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:03.998179+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 73383936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:04.998419+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 73375744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:05.998675+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 73375744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:06.998887+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 73375744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:07.999054+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 73375744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:08.999267+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 73375744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:09.999581+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 73367552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:10.999908+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 73367552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:12.000185+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 73367552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:13.000572+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 73359360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:14.000810+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 73359360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:15.001057+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 73359360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:16.001344+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 73359360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:17.001571+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 73359360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:18.001873+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 73359360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:19.002169+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 73351168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:20.002377+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 73351168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:21.002585+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 73351168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:22.002790+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 73351168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:23.004115+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 73342976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:24.004391+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 73342976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:25.004705+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 73342976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:26.004980+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 73342976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:27.005276+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 73342976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:28.005551+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 73334784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:29.005778+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 73334784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:30.006194+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 73334784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:31.006514+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 73334784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:32.006836+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 73334784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:33.007129+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 73334784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:34.007311+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 73326592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:35.007544+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 73326592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:36.007797+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 73326592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:37.008055+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 73326592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:38.008249+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 73326592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:39.008430+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334651392 unmapped: 73318400 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:40.008697+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334651392 unmapped: 73318400 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:41.008957+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334651392 unmapped: 73318400 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:42.009199+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334659584 unmapped: 73310208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:43.009475+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334659584 unmapped: 73310208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:44.009779+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334667776 unmapped: 73302016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:45.010106+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334667776 unmapped: 73302016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:46.010370+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334667776 unmapped: 73302016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:47.010569+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334667776 unmapped: 73302016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:48.010814+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334667776 unmapped: 73302016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:49.011114+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334667776 unmapped: 73302016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:50.011439+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334675968 unmapped: 73293824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:51.011701+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334675968 unmapped: 73293824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:52.011964+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334675968 unmapped: 73293824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:53.012186+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334675968 unmapped: 73293824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:54.012390+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334675968 unmapped: 73293824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:55.012642+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334684160 unmapped: 73285632 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:56.012825+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334684160 unmapped: 73285632 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:57.013074+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334692352 unmapped: 73277440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:58.013299+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334692352 unmapped: 73277440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:59.013515+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334700544 unmapped: 73269248 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:00.013809+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334700544 unmapped: 73269248 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8fbac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:01.014093+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 110.781181335s of 110.807998657s, submitted: 17
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334716928 unmapped: 81649664 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _renew_subs
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:02.014335+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 299 ms_handle_reset con 0x5597c8fbac00 session 0x5597c8ec7e00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334733312 unmapped: 81633280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:03.014622+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335806464 unmapped: 80560128 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:04.014862+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _renew_subs
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 300 ms_handle_reset con 0x5597c70cac00 session 0x5597c7184f00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 80543744 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:05.015158+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447775 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:06.015351+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:07.015513+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:08.015703+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:09.015823+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:10.015990+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447775 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:11.016232+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 80519168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:12.016367+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 80519168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:13.016528+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 80519168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:14.016684+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:15.016867+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447775 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:16.017072+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:17.017218+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:18.017390+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:19.017563+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:20.017808+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447775 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:21.018102+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335863808 unmapped: 80502784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:22.018259+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335863808 unmapped: 80502784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:23.018478+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335863808 unmapped: 80502784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:24.018655+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335872000 unmapped: 80494592 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:25.018877+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335872000 unmapped: 80494592 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447775 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:26.019076+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335872000 unmapped: 80494592 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:27.019209+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335880192 unmapped: 80486400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:28.019358+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335880192 unmapped: 80486400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:29.019512+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335880192 unmapped: 80486400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:30.019703+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335880192 unmapped: 80486400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c8fbac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447935 data_alloc: 218103808 data_used: 1089536
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:31.019876+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.965335846s of 30.084932327s, submitted: 23
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 301 ms_handle_reset con 0x5597c8fbac00 session 0x5597c9fad2c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:32.020078+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e9d20000/0x0/0x4ffc00000, data 0x133e0c8/0x14dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:33.020274+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:34.020461+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:35.020605+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3448852 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:36.020763+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:37.020963+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335912960 unmapped: 80453632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e9d20000/0x0/0x4ffc00000, data 0x133e0c8/0x14dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:38.021100+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335912960 unmapped: 80453632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _renew_subs
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:39.021276+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:40.021469+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:41.021683+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:42.021877+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:43.022030+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:44.022376+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:45.022529+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335937536 unmapped: 80429056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:46.022704+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335937536 unmapped: 80429056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:47.022842+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335945728 unmapped: 80420864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:48.023002+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:49.023171+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:50.023320+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:51.023486+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:52.023615+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:53.023753+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:54.023882+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:55.024086+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:56.024803+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:57.025194+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:58.025357+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:59.025495+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:00.025690+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:01.025848+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:02.026048+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:03.026170+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:04.026463+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:05.026621+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:06.027086+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:07.027304+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:08.027540+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:09.027874+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:10.028100+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:11.028376+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:12.028562+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:13.028917+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:14.029141+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335994880 unmapped: 80371712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:15.029329+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:16.029563+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:17.029716+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:18.029902+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:19.030102+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:20.030332+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:21.030502+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:22.030707+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:23.030911+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:24.031136+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:25.031323+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:26.031480+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:27.031746+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:28.032390+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:29.033309+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:30.033969+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:31.034252+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:32.035087+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:33.035642+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:34.036281+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:35.036768+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:36.037122+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336044032 unmapped: 80322560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:37.037345+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336044032 unmapped: 80322560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:38.037703+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336044032 unmapped: 80322560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:39.037906+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336044032 unmapped: 80322560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:40.038311+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336044032 unmapped: 80322560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:41.038465+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 80314368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:42.038640+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 80314368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:43.038838+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 80314368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:44.039084+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336060416 unmapped: 80306176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:45.039282+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336060416 unmapped: 80306176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:46.039446+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336060416 unmapped: 80306176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:47.039573+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336060416 unmapped: 80306176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:48.039764+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336060416 unmapped: 80306176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:49.040138+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336068608 unmapped: 80297984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:50.040534+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 80289792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:51.040892+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 80289792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:52.041109+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 80289792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:53.041304+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 80289792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:54.041485+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 80289792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:55.041687+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 80281600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:56.041894+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 80281600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:57.042060+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 80281600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:58.042253+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 80281600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:59.042420+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 80281600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:00.042596+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336093184 unmapped: 80273408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:01.042781+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336093184 unmapped: 80273408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:02.042939+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336093184 unmapped: 80273408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:03.043119+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336093184 unmapped: 80273408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:04.043289+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336093184 unmapped: 80273408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:05.043412+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336101376 unmapped: 80265216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:06.043567+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336101376 unmapped: 80265216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:07.043767+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336101376 unmapped: 80265216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:08.043912+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 80257024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:09.044091+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 80257024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:10.044262+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 80257024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:11.044393+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 80257024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:12.044552+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 80257024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:13.044724+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336117760 unmapped: 80248832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:14.044908+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336117760 unmapped: 80248832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:15.045104+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336117760 unmapped: 80248832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:16.045253+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 80240640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:17.045407+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 80240640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:18.045578+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 80240640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:19.045796+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 80240640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:20.046103+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 80240640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:21.046298+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 80240640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:22.046511+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336134144 unmapped: 80232448 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:23.046681+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 80216064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:24.046928+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 80216064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:25.047156+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 80216064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:26.047346+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 80216064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:27.047465+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 80216064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:28.047609+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 80216064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:29.047753+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336158720 unmapped: 80207872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:30.047972+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336158720 unmapped: 80207872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:31.048157+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336158720 unmapped: 80207872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:32.048725+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:33.049283+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:34.049517+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:35.050005+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:36.050300+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:37.050473+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:38.050730+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:39.051005+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:40.052265+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336175104 unmapped: 80191488 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:41.052832+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336175104 unmapped: 80191488 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:42.054326+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:43.055143+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336175104 unmapped: 80191488 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:44.055626+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336175104 unmapped: 80191488 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:45.055838+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336175104 unmapped: 80191488 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:46.056097+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336191488 unmapped: 80175104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 14 10:05:50 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/578665092' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:47.056321+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336191488 unmapped: 80175104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:48.056682+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336191488 unmapped: 80175104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:49.056885+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336191488 unmapped: 80175104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:50.057163+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336191488 unmapped: 80175104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:51.057448+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 80166912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:52.057925+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 80166912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:53.058221+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 80166912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:54.058440+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 80166912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:55.058638+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 80166912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:56.058815+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336207872 unmapped: 80158720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:57.058996+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336207872 unmapped: 80158720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:58.059274+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336207872 unmapped: 80158720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:59.059531+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336216064 unmapped: 80150528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:00.059765+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336216064 unmapped: 80150528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:01.059994+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336216064 unmapped: 80150528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:02.060245+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336224256 unmapped: 80142336 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:03.060460+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336224256 unmapped: 80142336 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:04.060700+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336224256 unmapped: 80142336 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:05.062182+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 80134144 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:06.062548+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 80134144 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:07.063100+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 80134144 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:08.063295+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 80134144 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:09.063495+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 80134144 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:10.063701+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336240640 unmapped: 80125952 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:11.063826+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336248832 unmapped: 80117760 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:12.064299+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336248832 unmapped: 80117760 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:13.064476+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:14.064677+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:15.064954+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:16.065113+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:17.065318+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:18.065448+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:19.065614+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:20.065779+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 80101376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:21.066069+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 80101376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:22.066208+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 80101376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:23.066423+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 80101376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:24.066578+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 80101376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:25.066743+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 80101376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:26.066963+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336273408 unmapped: 80093184 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:27.067232+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 80084992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:28.067450+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 80084992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:29.067696+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 80084992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:30.067917+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 80084992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:31.068069+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 80084992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:32.068215+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 80084992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:33.068410+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 80076800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:34.068611+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 80076800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:35.068742+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 80076800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:36.068999+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 80076800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:37.069183+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 80076800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:38.069342+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 80076800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:39.069548+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336297984 unmapped: 80068608 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:40.069716+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336297984 unmapped: 80068608 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:41.069875+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336306176 unmapped: 80060416 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:42.070076+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:43.070254+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:44.070424+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:45.070539+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:46.070687+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:47.070806+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:48.071100+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:49.071443+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 80044032 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:50.071625+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 80044032 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:51.071766+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 80044032 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:52.071928+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 80044032 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:53.072048+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 80044032 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:54.072191+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336330752 unmapped: 80035840 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:55.072416+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336330752 unmapped: 80035840 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:56.072586+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336330752 unmapped: 80035840 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:57.072708+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336338944 unmapped: 80027648 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:58.072865+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336338944 unmapped: 80027648 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:59.073071+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336338944 unmapped: 80027648 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:00.073286+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 80019456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:01.073450+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 80019456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:02.073637+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 80019456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:03.073807+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 80019456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:04.073973+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 80019456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 178K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.70 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 762 writes, 2197 keys, 762 commit groups, 1.0 writes per commit group, ingest: 0.90 MB, 0.00 MB/s
                                           Interval WAL: 762 writes, 346 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:05.074118+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336355328 unmapped: 80011264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets getting new tickets!
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:06.074437+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _finish_auth 0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:06.075859+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336363520 unmapped: 80003072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:07.074598+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336363520 unmapped: 80003072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:08.074788+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336363520 unmapped: 80003072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:09.074968+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 ms_handle_reset con 0x5597c91cb000 session 0x5597c91d6780
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9538400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:10.075503+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: mgrc ms_handle_reset ms_handle_reset con 0x5597c9fec000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3625056923
Oct 14 10:05:50 compute-0 ceph-osd[88375]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3625056923,v1:192.168.122.100:6801/3625056923]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: get_auth_request con 0x5597caeeb400 auth_method 0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: mgrc handle_mgr_configure stats_period=5
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 ms_handle_reset con 0x5597cc6e8400 session 0x5597c6c22b40
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f64c00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 ms_handle_reset con 0x5597c8db4400 session 0x5597c9df0780
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597cc6e8400
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:11.075644+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:12.075971+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:13.076248+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:14.076459+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:15.076697+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:16.076946+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:17.077150+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:18.077370+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:19.077575+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:20.077763+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:21.077948+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:22.078121+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 79986688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:23.078280+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 79986688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:24.078429+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 79986688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:25.078544+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336388096 unmapped: 79978496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:26.078699+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 79970304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:27.078833+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 79970304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:28.079072+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 79970304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:29.079258+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336404480 unmapped: 79962112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:30.079482+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336404480 unmapped: 79962112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:31.079654+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336404480 unmapped: 79962112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:32.079825+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336404480 unmapped: 79962112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:33.079965+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336404480 unmapped: 79962112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:34.080221+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:35.080425+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:36.080630+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:37.080866+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:38.081078+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:39.081255+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:40.081457+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:41.081621+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 79945728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:42.081803+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 79945728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:43.081948+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 79945728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:44.082123+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 79945728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:45.082286+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 79945728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:46.082471+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336429056 unmapped: 79937536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:47.082667+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336429056 unmapped: 79937536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:48.082840+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336437248 unmapped: 79929344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:49.083064+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336437248 unmapped: 79929344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:50.083233+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336437248 unmapped: 79929344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:51.083371+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 79921152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:52.083515+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 79921152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:53.083663+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 79921152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:54.083767+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 79921152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:55.084208+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 79921152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:56.084387+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336453632 unmapped: 79912960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:57.084538+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336453632 unmapped: 79912960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:58.084673+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336453632 unmapped: 79912960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:59.084855+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336453632 unmapped: 79912960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:00.085075+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336453632 unmapped: 79912960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:01.085190+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 79896576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:02.085319+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 79896576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:03.085505+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 79896576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:04.085670+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336478208 unmapped: 79888384 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:05.085836+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336486400 unmapped: 79880192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:06.085998+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336486400 unmapped: 79880192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:07.086210+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336486400 unmapped: 79880192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:08.086347+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336494592 unmapped: 79872000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:09.086504+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 277.959259033s of 278.040557861s, submitted: 28
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336494592 unmapped: 79872000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:10.086707+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336543744 unmapped: 79822848 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:11.086872+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:12.087061+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:13.087244+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:14.087775+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:15.088128+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:16.088314+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:17.088686+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:18.088902+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:19.089334+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:20.089836+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:21.090057+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:22.090244+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336576512 unmapped: 79790080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:23.090434+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336576512 unmapped: 79790080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:24.090614+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336576512 unmapped: 79790080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:25.090805+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336576512 unmapped: 79790080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:26.091069+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:27.091238+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:28.091477+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:29.091683+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:30.091877+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:31.091997+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:32.092182+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:33.092301+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:34.092467+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:35.092627+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:36.092762+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:37.092895+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:38.093109+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:39.093300+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:40.093557+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:41.093694+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336601088 unmapped: 79765504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:42.093811+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336601088 unmapped: 79765504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:43.094002+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336609280 unmapped: 79757312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:44.094209+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336609280 unmapped: 79757312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:45.094390+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336609280 unmapped: 79757312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:46.094549+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336609280 unmapped: 79757312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:47.094656+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336609280 unmapped: 79757312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:48.094862+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336609280 unmapped: 79757312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:49.095053+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336617472 unmapped: 79749120 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:50.095202+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336617472 unmapped: 79749120 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:51.095321+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336617472 unmapped: 79749120 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:52.095526+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 79740928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:53.095700+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 79740928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:54.095854+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 79740928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:55.096005+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 79740928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:56.096258+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 79740928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:57.096470+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336633856 unmapped: 79732736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:58.096640+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336633856 unmapped: 79732736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:59.096782+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336633856 unmapped: 79732736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:00.096985+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336633856 unmapped: 79732736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:01.097122+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336633856 unmapped: 79732736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:02.097325+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336633856 unmapped: 79732736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:03.097451+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336642048 unmapped: 79724544 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:04.097645+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336642048 unmapped: 79724544 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:05.097827+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336642048 unmapped: 79724544 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:06.097977+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336642048 unmapped: 79724544 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:07.098165+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336650240 unmapped: 79716352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:08.098302+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336650240 unmapped: 79716352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:09.098445+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336658432 unmapped: 79708160 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:10.098581+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336658432 unmapped: 79708160 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:11.098762+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336658432 unmapped: 79708160 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:12.098936+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336658432 unmapped: 79708160 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:13.099133+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336666624 unmapped: 79699968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:14.099327+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336666624 unmapped: 79699968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:15.099495+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336666624 unmapped: 79699968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:16.099614+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336666624 unmapped: 79699968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:17.099797+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336666624 unmapped: 79699968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:18.100052+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 79691776 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:19.100315+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 79691776 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:20.100745+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 79691776 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:21.101074+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:22.101234+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 79691776 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:23.101568+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:24.101826+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:25.102128+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:26.102360+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:27.102523+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:28.102756+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:29.103000+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:30.103297+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:31.103557+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:32.103733+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:33.103941+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:34.104147+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:35.104350+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:36.104580+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:37.104841+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:38.105061+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336707584 unmapped: 79659008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:39.105194+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336707584 unmapped: 79659008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:40.105408+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336707584 unmapped: 79659008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:41.105565+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336707584 unmapped: 79659008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:42.105726+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336715776 unmapped: 79650816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:43.105894+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336715776 unmapped: 79650816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:44.106330+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336715776 unmapped: 79650816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:45.106489+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336715776 unmapped: 79650816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:46.106624+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336723968 unmapped: 79642624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:47.106738+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:48.106887+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:49.107042+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:50.107279+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:51.107459+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:52.107633+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:53.107787+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:54.107997+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:55.108217+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:56.108349+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:57.108545+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:58.108716+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:59.108901+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:00.109126+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:01.109322+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:02.109483+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336756736 unmapped: 79609856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:03.109657+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336756736 unmapped: 79609856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:04.109869+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336756736 unmapped: 79609856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:05.110121+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336764928 unmapped: 79601664 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:06.110266+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 79593472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:07.110451+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 79593472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:08.110648+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 79593472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:09.110884+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 79593472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:10.111155+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:11.111344+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:12.111561+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:13.111813+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:14.111982+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:15.112135+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:16.112323+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:17.112473+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:18.112731+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 79577088 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:19.112875+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 79577088 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:20.113104+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 79577088 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:21.113236+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 79577088 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:22.113447+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 79577088 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:23.113629+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 79560704 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:24.113852+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 79560704 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:25.114110+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 79560704 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:26.114240+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 79560704 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:27.114390+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 79560704 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:28.114541+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 79552512 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:29.114720+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 79552512 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:30.115060+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 79552512 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:31.115190+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 79552512 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:32.115335+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 79552512 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:33.115538+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 79536128 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:34.115657+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 79536128 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:35.115873+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336838656 unmapped: 79527936 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597ca9b2000
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:36.116076+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _renew_subs
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 146.745452881s of 147.051895142s, submitted: 90
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336855040 unmapped: 79511552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 303 ms_handle_reset con 0x5597ca9b2000 session 0x5597c705ad20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:37.116264+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336855040 unmapped: 79511552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370800 data_alloc: 218103808 data_used: 1093632
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 303 heartbeat osd_stat(store_statfs(0x4ea98b000/0x0/0x4ffc00000, data 0x6d16fc/0x873000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:38.116419+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c7e2d800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336855040 unmapped: 79511552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:39.116533+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336863232 unmapped: 79503360 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 304 ms_handle_reset con 0x5597c7e2d800 session 0x5597c8da63c0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 304 heartbeat osd_stat(store_statfs(0x4ea987000/0x0/0x4ffc00000, data 0x6d32cd/0x876000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:40.116651+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336863232 unmapped: 79503360 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:41.116815+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 79495168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:42.116967+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 79495168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3342983 data_alloc: 218103808 data_used: 1101824
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 304 heartbeat osd_stat(store_statfs(0x4eadf9000/0x0/0x4ffc00000, data 0x26329b/0x404000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:43.117152+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 79495168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 304 heartbeat osd_stat(store_statfs(0x4eadf9000/0x0/0x4ffc00000, data 0x26329b/0x404000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c9f6d800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:44.117321+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336904192 unmapped: 79462400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _renew_subs
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 ms_handle_reset con 0x5597c9f6d800 session 0x5597c8d70f00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9985000/0x0/0x4ffc00000, data 0x16d4d0e/0x1878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9985000/0x0/0x4ffc00000, data 0x16d4d0e/0x1878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:45.117504+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:46.117692+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:47.117905+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:48.118174+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:49.118378+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:50.118660+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:51.118862+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:52.119051+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:53.119200+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336936960 unmapped: 79429632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:54.119552+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336936960 unmapped: 79429632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:55.119772+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336936960 unmapped: 79429632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:56.119985+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336936960 unmapped: 79429632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:57.120260+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336936960 unmapped: 79429632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:58.122057+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336945152 unmapped: 79421440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:59.122305+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336945152 unmapped: 79421440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:00.122514+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336945152 unmapped: 79421440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:01.122801+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336945152 unmapped: 79421440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:02.123143+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336945152 unmapped: 79421440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:03.123341+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336953344 unmapped: 79413248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:04.123577+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336953344 unmapped: 79413248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:05.123744+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336953344 unmapped: 79413248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:06.124002+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336953344 unmapped: 79413248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:07.124219+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336953344 unmapped: 79413248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:08.124429+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336961536 unmapped: 79405056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:09.124574+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336961536 unmapped: 79405056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:10.124867+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336961536 unmapped: 79405056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:11.125111+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336969728 unmapped: 79396864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:12.125340+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336969728 unmapped: 79396864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:13.125540+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336969728 unmapped: 79396864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:14.125739+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336969728 unmapped: 79396864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:15.125913+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336969728 unmapped: 79396864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:16.126196+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336969728 unmapped: 79396864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:17.126378+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336977920 unmapped: 79388672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:18.126541+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336977920 unmapped: 79388672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:19.126712+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336977920 unmapped: 79388672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:20.126922+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336977920 unmapped: 79388672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:21.127099+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336986112 unmapped: 79380480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:22.127264+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336986112 unmapped: 79380480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:23.127386+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336994304 unmapped: 79372288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:24.127541+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336994304 unmapped: 79372288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:25.127682+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 79364096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:26.127862+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 79364096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:27.128079+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 79364096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:28.128261+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 79364096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:29.128479+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 79364096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:30.128748+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337018880 unmapped: 79347712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:31.128975+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337018880 unmapped: 79347712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:32.129202+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337027072 unmapped: 79339520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:33.129367+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337027072 unmapped: 79339520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:34.129538+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337027072 unmapped: 79339520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:35.129720+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337027072 unmapped: 79339520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:36.129884+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337035264 unmapped: 79331328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:37.130090+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337035264 unmapped: 79331328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:38.130258+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337035264 unmapped: 79331328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:39.130394+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337035264 unmapped: 79331328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c70cac00
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:40.160443+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: handle_auth_request added challenge on 0x5597c7e2d800
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 63.765888214s of 64.009460449s, submitted: 63
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 ms_handle_reset con 0x5597c7e2d800 session 0x5597c8ec65a0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 306 handle_osd_map epochs [307,307], i have 307, src has [1,307]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 307 ms_handle_reset con 0x5597c70cac00 session 0x5597c8d71c20
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337068032 unmapped: 79298560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:41.160616+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337068032 unmapped: 79298560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:42.160780+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337076224 unmapped: 79290368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3357510 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:43.160949+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337076224 unmapped: 79290368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:44.161094+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337076224 unmapped: 79290368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:45.161335+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 307 heartbeat osd_stat(store_statfs(0x4eadf0000/0x0/0x4ffc00000, data 0x26844c/0x40d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337076224 unmapped: 79290368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:46.161575+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337076224 unmapped: 79290368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:47.161756+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337076224 unmapped: 79290368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3357510 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:48.161926+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337084416 unmapped: 79282176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _renew_subs
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:49.162102+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337100800 unmapped: 79265792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:50.162328+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337100800 unmapped: 79265792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:51.162503+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337117184 unmapped: 79249408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:52.162663+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337117184 unmapped: 79249408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:53.162813+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337117184 unmapped: 79249408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:54.163006+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337117184 unmapped: 79249408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:55.163220+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337117184 unmapped: 79249408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:56.163369+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 79233024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:57.163538+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 79233024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:58.163698+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 79233024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:59.163857+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 79233024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:00.164085+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 79233024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:01.164234+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 79224832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:02.164499+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 79224832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:03.164745+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 79224832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:04.164941+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337149952 unmapped: 79216640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:05.165086+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337149952 unmapped: 79216640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:06.165217+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337149952 unmapped: 79216640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:07.166199+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337149952 unmapped: 79216640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:08.166333+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337149952 unmapped: 79216640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:09.166551+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 79200256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:10.166838+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 79200256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:11.167089+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 79200256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:12.167251+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 79200256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:13.167471+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 79200256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:14.167690+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 79200256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:15.167860+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 79200256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:16.168079+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337174528 unmapped: 79192064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:17.168357+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337182720 unmapped: 79183872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:18.168560+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337182720 unmapped: 79183872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:19.168702+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337182720 unmapped: 79183872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:20.168936+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 79175680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:21.169243+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 79175680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:22.169541+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 79175680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:23.169758+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 79175680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:24.169964+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337190912 unmapped: 79175680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:25.170170+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337207296 unmapped: 79159296 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:26.170396+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337207296 unmapped: 79159296 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:27.170593+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337207296 unmapped: 79159296 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:28.170830+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337215488 unmapped: 79151104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:29.171122+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337215488 unmapped: 79151104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:30.171380+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337215488 unmapped: 79151104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:31.171631+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337215488 unmapped: 79151104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:32.171874+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337215488 unmapped: 79151104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:50 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:50 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:33.172046+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337215488 unmapped: 79151104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:50 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:34.172194+0000)
Oct 14 10:05:50 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337223680 unmapped: 79142912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:35.172464+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337223680 unmapped: 79142912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:36.172646+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337223680 unmapped: 79142912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:37.172839+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:38.172990+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:39.173176+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:40.173370+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:41.173562+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337248256 unmapped: 79118336 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:42.174229+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:43.174373+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:44.174495+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:45.174620+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:46.174743+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:47.174895+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:48.175074+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 79134720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:49.175203+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 79126528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:50.175351+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 79126528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:51.175504+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 79126528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:52.175637+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 79200256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'config diff' '{prefix=config diff}'
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:53.175802+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'config show' '{prefix=config show}'
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'counter dump' '{prefix=counter dump}'
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'counter schema' '{prefix=counter schema}'
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:54.175955+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 80166912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:55.176161+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336224256 unmapped: 80142336 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'log dump' '{prefix=log dump}'
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:56.176338+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'perf dump' '{prefix=perf dump}'
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336486400 unmapped: 79880192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'perf schema' '{prefix=perf schema}'
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:57.177077+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335814656 unmapped: 80551936 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:58.177465+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335814656 unmapped: 80551936 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:59.178400+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335814656 unmapped: 80551936 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:00.178603+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 80543744 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:01.179247+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 80543744 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:02.179385+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 80543744 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:03.200984+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 80543744 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:04.201150+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 80543744 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:05.201334+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:06.201509+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:07.201764+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:08.201912+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:09.202067+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:10.202225+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335839232 unmapped: 80527360 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:11.202363+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335839232 unmapped: 80527360 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:12.203105+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:13.203275+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335839232 unmapped: 80527360 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:14.203426+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335839232 unmapped: 80527360 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:15.203536+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335839232 unmapped: 80527360 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:16.203695+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 80519168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:17.203848+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:18.203984+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:19.204565+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:20.204793+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:21.204939+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:22.205082+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335863808 unmapped: 80502784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:23.205259+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335863808 unmapped: 80502784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:24.205450+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335863808 unmapped: 80502784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:25.205602+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335863808 unmapped: 80502784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:26.205728+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335863808 unmapped: 80502784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:27.205978+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335872000 unmapped: 80494592 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:28.206091+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335872000 unmapped: 80494592 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:29.206375+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335880192 unmapped: 80486400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:30.206676+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335880192 unmapped: 80486400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:31.206957+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335880192 unmapped: 80486400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:32.207240+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335880192 unmapped: 80486400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:33.207494+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335880192 unmapped: 80486400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:34.207694+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335888384 unmapped: 80478208 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:35.207915+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335888384 unmapped: 80478208 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:36.208074+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335888384 unmapped: 80478208 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:37.208210+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:38.208347+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:39.208497+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:40.208715+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:41.208907+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335912960 unmapped: 80453632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:42.209079+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335921152 unmapped: 80445440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:43.209216+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335921152 unmapped: 80445440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:44.209389+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335921152 unmapped: 80445440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:45.209580+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335921152 unmapped: 80445440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:46.209756+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335921152 unmapped: 80445440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:47.209936+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335921152 unmapped: 80445440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:48.210137+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:49.210823+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:50.211369+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:51.211976+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:52.212397+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:53.212904+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:54.213291+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335937536 unmapped: 80429056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:55.213782+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335937536 unmapped: 80429056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:56.214152+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335937536 unmapped: 80429056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:57.214554+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335937536 unmapped: 80429056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:58.214760+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335937536 unmapped: 80429056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:59.215211+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335945728 unmapped: 80420864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:00.215545+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335945728 unmapped: 80420864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:01.215902+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335945728 unmapped: 80420864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:02.216281+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:03.216579+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:04.216761+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:05.217112+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:06.217353+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:07.217707+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:08.217911+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:09.218091+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:10.218307+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:11.218459+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:12.218662+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:13.218828+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:14.219007+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:15.219224+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:16.219390+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:17.219570+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:18.219734+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:19.219910+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:20.220138+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:21.220324+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335994880 unmapped: 80371712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:22.220497+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335994880 unmapped: 80371712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:23.220655+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335994880 unmapped: 80371712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:24.220826+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335994880 unmapped: 80371712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:25.220995+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:26.221225+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:27.221404+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:28.221589+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:29.221748+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:30.222095+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336027648 unmapped: 80338944 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:31.222316+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336027648 unmapped: 80338944 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:32.222552+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336027648 unmapped: 80338944 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:33.222750+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336027648 unmapped: 80338944 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:34.222932+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336027648 unmapped: 80338944 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:35.223116+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336027648 unmapped: 80338944 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:36.223286+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:37.223476+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:38.223682+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:39.223881+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:40.224135+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:41.224539+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:42.224858+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336044032 unmapped: 80322560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:43.225163+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336044032 unmapped: 80322560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:44.225347+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336044032 unmapped: 80322560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:45.225598+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 80314368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:46.225995+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 80314368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:47.226536+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 80314368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:48.227085+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 80314368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:49.227410+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336060416 unmapped: 80306176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:50.228144+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336060416 unmapped: 80306176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:51.228401+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336060416 unmapped: 80306176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:52.228615+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336068608 unmapped: 80297984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:53.228849+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336068608 unmapped: 80297984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:54.229056+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336068608 unmapped: 80297984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:55.229235+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336068608 unmapped: 80297984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:56.229477+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336068608 unmapped: 80297984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:57.229715+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 80281600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:58.229938+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 80281600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:59.230140+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 80281600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:00.230420+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336093184 unmapped: 80273408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:01.230672+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336093184 unmapped: 80273408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:02.230926+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336101376 unmapped: 80265216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:03.231127+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:04.231435+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:05.231586+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:06.231876+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:07.232296+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:08.232534+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:09.232701+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:10.232965+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:11.233208+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:12.233395+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:13.233573+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:14.233748+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:15.233950+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:16.234193+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:17.234399+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:18.234580+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:19.234724+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:20.234990+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:21.235257+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:22.235423+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:23.235618+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335069184 unmapped: 81297408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:24.235764+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335077376 unmapped: 81289216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:25.235933+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335077376 unmapped: 81289216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:26.236083+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335085568 unmapped: 81281024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:27.236236+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335085568 unmapped: 81281024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:28.236447+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335085568 unmapped: 81281024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:29.236667+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335093760 unmapped: 81272832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:30.236883+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335093760 unmapped: 81272832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:31.237072+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335093760 unmapped: 81272832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:32.237286+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335093760 unmapped: 81272832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:33.237489+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335093760 unmapped: 81272832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:34.237677+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335093760 unmapped: 81272832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:35.237890+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335093760 unmapped: 81272832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:36.238157+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335093760 unmapped: 81272832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:37.238341+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335110144 unmapped: 81256448 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:38.238513+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335110144 unmapped: 81256448 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:39.238666+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335110144 unmapped: 81256448 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:40.238902+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335118336 unmapped: 81248256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:41.239102+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335118336 unmapped: 81248256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:42.239307+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335118336 unmapped: 81248256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:43.239452+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335118336 unmapped: 81248256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:44.240301+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335118336 unmapped: 81248256 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:45.241609+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 81240064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:46.242375+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 81231872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:47.242532+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 81231872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:48.243427+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 81231872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:49.244161+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:50.244848+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 81231872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:51.245437+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 81231872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:52.245976+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 81231872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:53.246432+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 81231872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:54.246611+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335142912 unmapped: 81223680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:55.246924+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335142912 unmapped: 81223680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:56.247175+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335142912 unmapped: 81223680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:57.247387+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335151104 unmapped: 81215488 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:58.247649+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335151104 unmapped: 81215488 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:59.247909+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335159296 unmapped: 81207296 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:00.248205+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335159296 unmapped: 81207296 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:01.248459+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335159296 unmapped: 81207296 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:02.248684+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335175680 unmapped: 81190912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:03.248845+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335175680 unmapped: 81190912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:04.249126+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335175680 unmapped: 81190912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:05.249373+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335175680 unmapped: 81190912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:06.249617+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335175680 unmapped: 81190912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:07.249796+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335175680 unmapped: 81190912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:08.249981+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335175680 unmapped: 81190912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:09.250292+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335183872 unmapped: 81182720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:10.250605+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335192064 unmapped: 81174528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:11.251144+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335192064 unmapped: 81174528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:12.251382+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335192064 unmapped: 81174528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:13.251821+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335192064 unmapped: 81174528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:14.252077+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335192064 unmapped: 81174528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:15.252331+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335200256 unmapped: 81166336 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:16.252661+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335200256 unmapped: 81166336 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:17.252894+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335224832 unmapped: 81141760 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:18.253170+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335224832 unmapped: 81141760 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:19.253476+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335224832 unmapped: 81141760 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:20.253839+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335233024 unmapped: 81133568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:21.254309+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335233024 unmapped: 81133568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:22.254475+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335233024 unmapped: 81133568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:23.255077+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335233024 unmapped: 81133568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 14 10:05:51 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1711575619' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:24.255307+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335233024 unmapped: 81133568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:25.255839+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335241216 unmapped: 81125376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:26.256066+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335241216 unmapped: 81125376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:27.256300+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335241216 unmapped: 81125376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:28.256479+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335241216 unmapped: 81125376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:29.256830+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335241216 unmapped: 81125376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:30.257111+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335249408 unmapped: 81117184 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:31.257498+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335249408 unmapped: 81117184 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:32.257722+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335249408 unmapped: 81117184 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:33.258156+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335249408 unmapped: 81117184 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:34.258516+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335249408 unmapped: 81117184 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:35.258686+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335249408 unmapped: 81117184 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:36.258927+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335257600 unmapped: 81108992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:37.259198+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335257600 unmapped: 81108992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:38.259382+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335257600 unmapped: 81108992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:39.259558+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335265792 unmapped: 81100800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:40.259807+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335265792 unmapped: 81100800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:41.260034+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335265792 unmapped: 81100800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:42.260173+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335265792 unmapped: 81100800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:43.260367+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335265792 unmapped: 81100800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:44.260531+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335265792 unmapped: 81100800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:45.260909+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335273984 unmapped: 81092608 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:46.261087+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335273984 unmapped: 81092608 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:47.261241+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335282176 unmapped: 81084416 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:48.261381+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335282176 unmapped: 81084416 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:49.261596+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335282176 unmapped: 81084416 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:50.261894+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335282176 unmapped: 81084416 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:51.262127+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335282176 unmapped: 81084416 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:52.262414+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335290368 unmapped: 81076224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:53.262706+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335290368 unmapped: 81076224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:54.262940+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335290368 unmapped: 81076224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:55.263109+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335290368 unmapped: 81076224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:56.263298+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335290368 unmapped: 81076224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:57.263487+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335298560 unmapped: 81068032 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:58.263684+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335306752 unmapped: 81059840 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:59.263881+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335306752 unmapped: 81059840 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:00.264167+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335306752 unmapped: 81059840 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:01.264366+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335314944 unmapped: 81051648 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:02.264658+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335314944 unmapped: 81051648 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:03.264871+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335323136 unmapped: 81043456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:04.265124+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 179K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.69 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 339 writes, 970 keys, 339 commit groups, 1.0 writes per commit group, ingest: 0.38 MB, 0.00 MB/s
                                           Interval WAL: 339 writes, 156 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335331328 unmapped: 81035264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:05.265307+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335331328 unmapped: 81035264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:06.265512+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335331328 unmapped: 81035264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:07.265734+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335331328 unmapped: 81035264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:08.265952+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335339520 unmapped: 81027072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:09.266215+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335339520 unmapped: 81027072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:10.266486+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335339520 unmapped: 81027072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:11.266688+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335339520 unmapped: 81027072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:12.266907+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335339520 unmapped: 81027072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:13.267104+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335347712 unmapped: 81018880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:14.267291+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335347712 unmapped: 81018880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:15.267437+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335347712 unmapped: 81018880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:16.267658+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335347712 unmapped: 81018880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:17.267888+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335347712 unmapped: 81018880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:18.268119+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335347712 unmapped: 81018880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:19.268347+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335355904 unmapped: 81010688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:20.268599+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335355904 unmapped: 81010688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:21.268834+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335355904 unmapped: 81010688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:22.269113+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335355904 unmapped: 81010688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:23.269451+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335355904 unmapped: 81010688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:24.269598+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335355904 unmapped: 81010688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:25.269736+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335372288 unmapped: 80994304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:26.269869+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335372288 unmapped: 80994304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:27.270163+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335372288 unmapped: 80994304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:28.270329+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335372288 unmapped: 80994304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:29.270489+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335372288 unmapped: 80994304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:30.270669+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335380480 unmapped: 80986112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:31.270844+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335380480 unmapped: 80986112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:32.271004+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335380480 unmapped: 80986112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:33.271240+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335380480 unmapped: 80986112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:34.271416+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335380480 unmapped: 80986112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:35.271704+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335388672 unmapped: 80977920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:36.271929+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335396864 unmapped: 80969728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:37.272134+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335396864 unmapped: 80969728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:38.272286+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335396864 unmapped: 80969728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:39.272460+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335396864 unmapped: 80969728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:40.272768+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335396864 unmapped: 80969728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:41.273080+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335405056 unmapped: 80961536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:42.273265+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335405056 unmapped: 80961536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:43.273513+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335405056 unmapped: 80961536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:44.273727+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335405056 unmapped: 80961536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:45.274533+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335405056 unmapped: 80961536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:46.274734+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335421440 unmapped: 80945152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:47.274908+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335421440 unmapped: 80945152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:48.275123+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335421440 unmapped: 80945152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:49.275332+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335421440 unmapped: 80945152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:50.275595+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335421440 unmapped: 80945152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:51.275742+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335421440 unmapped: 80945152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:52.275914+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335421440 unmapped: 80945152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:53.276095+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335429632 unmapped: 80936960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:54.276369+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335437824 unmapped: 80928768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:55.276940+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335437824 unmapped: 80928768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:56.277263+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335437824 unmapped: 80928768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:57.277498+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335446016 unmapped: 80920576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:58.277673+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335446016 unmapped: 80920576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:59.277858+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335446016 unmapped: 80920576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:00.278494+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335446016 unmapped: 80920576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:01.278754+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335454208 unmapped: 80912384 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:02.278942+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335462400 unmapped: 80904192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:03.279210+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335462400 unmapped: 80904192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:04.279357+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360484 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335462400 unmapped: 80904192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:05.279524+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335462400 unmapped: 80904192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:06.279741+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335462400 unmapped: 80904192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:07.280089+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335462400 unmapped: 80904192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:08.280446+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335470592 unmapped: 80896000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:09.280609+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 389.019104004s of 389.097808838s, submitted: 30
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3359604 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eaded000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335478784 unmapped: 80887808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:10.280774+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335511552 unmapped: 80855040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:11.280965+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335511552 unmapped: 80855040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:12.281258+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335511552 unmapped: 80855040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:13.281536+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335511552 unmapped: 80855040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:14.281783+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3359604 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335511552 unmapped: 80855040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:15.281935+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335511552 unmapped: 80855040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:16.282145+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335511552 unmapped: 80855040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:17.282291+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335511552 unmapped: 80855040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:18.282490+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335511552 unmapped: 80855040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:19.282678+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3359604 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335511552 unmapped: 80855040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:20.282881+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335511552 unmapped: 80855040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:21.283295+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335519744 unmapped: 80846848 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:22.283491+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335519744 unmapped: 80846848 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:23.283652+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335519744 unmapped: 80846848 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:24.283845+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3359604 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335519744 unmapped: 80846848 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:25.283997+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335519744 unmapped: 80846848 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:26.284908+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335519744 unmapped: 80846848 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:27.285197+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335519744 unmapped: 80846848 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:28.285402+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335519744 unmapped: 80846848 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:29.285695+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3359604 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335519744 unmapped: 80846848 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:30.285927+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335519744 unmapped: 80846848 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:31.286166+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335519744 unmapped: 80846848 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:32.286402+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335519744 unmapped: 80846848 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:33.286533+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335527936 unmapped: 80838656 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:34.286883+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3359604 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335527936 unmapped: 80838656 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:35.287301+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335527936 unmapped: 80838656 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:36.287693+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335527936 unmapped: 80838656 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:37.287939+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 80830464 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:38.288230+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 80830464 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:39.288444+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3359604 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 80830464 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:40.288630+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 80830464 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:41.288770+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 80830464 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:42.288999+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335544320 unmapped: 80822272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:43.289280+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335544320 unmapped: 80822272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:44.289507+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3359604 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335544320 unmapped: 80822272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:45.289813+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335544320 unmapped: 80822272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:46.291971+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335544320 unmapped: 80822272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:47.292641+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335544320 unmapped: 80822272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:48.292863+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335544320 unmapped: 80822272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:49.293037+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3359604 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335552512 unmapped: 80814080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:50.293215+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335552512 unmapped: 80814080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:51.293430+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335552512 unmapped: 80814080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:52.293593+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335552512 unmapped: 80814080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:53.293793+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335560704 unmapped: 80805888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:54.293996+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3359604 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335560704 unmapped: 80805888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:55.294156+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335560704 unmapped: 80805888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:56.294312+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335560704 unmapped: 80805888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:57.294501+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335568896 unmapped: 80797696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:58.294657+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335568896 unmapped: 80797696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:59.295137+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3359604 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335568896 unmapped: 80797696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:00.295347+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335577088 unmapped: 80789504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:01.296274+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335577088 unmapped: 80789504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:02.296455+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335577088 unmapped: 80789504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:03.296741+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335577088 unmapped: 80789504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:04.296947+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3359604 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335577088 unmapped: 80789504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:05.297157+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335585280 unmapped: 80781312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:06.297329+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335585280 unmapped: 80781312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:07.297498+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335585280 unmapped: 80781312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:08.297689+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335585280 unmapped: 80781312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:09.297811+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3359604 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335585280 unmapped: 80781312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:10.298062+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335585280 unmapped: 80781312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:11.298175+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335585280 unmapped: 80781312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:12.298349+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335593472 unmapped: 80773120 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:13.298542+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335601664 unmapped: 80764928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:14.298658+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3359604 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335601664 unmapped: 80764928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:15.298787+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335601664 unmapped: 80764928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:16.298943+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: osd.1 308 heartbeat osd_stat(store_statfs(0x4eadee000/0x0/0x4ffc00000, data 0x269eaf/0x410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335609856 unmapped: 80756736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:17.299385+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'config diff' '{prefix=config diff}'
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'config show' '{prefix=config show}'
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:18.299516+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'counter dump' '{prefix=counter dump}'
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'counter schema' '{prefix=counter schema}'
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335675392 unmapped: 80691200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:19.299641+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:51 compute-0 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:51 compute-0 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3359604 data_alloc: 218103808 data_used: 1110016
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: tick
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_tickets
Oct 14 10:05:51 compute-0 ceph-osd[88375]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:20.299818+0000)
Oct 14 10:05:51 compute-0 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335609856 unmapped: 80756736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:51 compute-0 ceph-osd[88375]: do_command 'log dump' '{prefix=log dump}'
Oct 14 10:05:51 compute-0 rsyslogd[1002]: imjournal from <np0005486808:ceph-osd>: begin to drop messages due to rate-limiting
Oct 14 10:05:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 14 10:05:51 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2965670466' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 14 10:05:51 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 10:05:51 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 14 10:05:51 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/588514055' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 14 10:05:51 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2392728208' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 14 10:05:51 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/578665092' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 14 10:05:51 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1711575619' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 14 10:05:51 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2965670466' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 14 10:05:51 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/588514055' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 14 10:05:51 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 14 10:05:51 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/190934492' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 14 10:05:51 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23405 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:52 compute-0 nova_compute[259627]: 2025-10-14 10:05:52.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:52 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 14 10:05:52 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/539211776' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 14 10:05:52 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23409 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:52 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23411 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:52 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23413 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:52 compute-0 ceph-mon[74249]: pgmap v3397: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:52 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/190934492' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 14 10:05:52 compute-0 ceph-mon[74249]: from='client.23405 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:52 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/539211776' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 14 10:05:52 compute-0 ceph-mon[74249]: from='client.23409 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:52 compute-0 ceph-mon[74249]: from='client.23411 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:52 compute-0 ceph-mon[74249]: from='client.23413 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:52 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23415 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:52 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23417 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:53 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23421 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:53 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:05:53 compute-0 ceph-mon[74249]: from='client.23415 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:53 compute-0 ceph-mon[74249]: from='client.23417 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:53 compute-0 ceph-mon[74249]: from='client.23421 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:53 compute-0 ceph-mon[74249]: pgmap v3398: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:53 compute-0 podman[458049]: 2025-10-14 10:05:53.688541273 +0000 UTC m=+0.093767741 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 10:05:53 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 14 10:05:53 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1363202056' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 14 10:05:53 compute-0 podman[458048]: 2025-10-14 10:05:53.733065991 +0000 UTC m=+0.133232125 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 10:05:53 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23425 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:54 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23429 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 14 10:05:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/552949195' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 14 10:05:54 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1363202056' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 14 10:05:54 compute-0 ceph-mon[74249]: from='client.23425 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:54 compute-0 ceph-mon[74249]: from='client.23429 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:05:54 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/552949195' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 14 10:05:54 compute-0 nova_compute[259627]: 2025-10-14 10:05:54.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:54 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 14 10:05:54 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2827210756' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 14 10:05:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 14 10:05:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3360608052' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 14 10:05:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 14 10:05:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 14 10:05:55 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:52.138457+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 333004800 unmapped: 49979392 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4ea7d9000/0x0/0x4ffc00000, data 0x120e23a/0x13a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:53.138593+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 333004800 unmapped: 49979392 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:54.138850+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 333004800 unmapped: 49979392 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:55.139000+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 333004800 unmapped: 49979392 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.723236084s of 20.830955505s, submitted: 16
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:56.139160+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3523461 data_alloc: 234881024 data_used: 13307904
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336707584 unmapped: 46276608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:57.139280+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 46178304 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:58.139453+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 46178304 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9d59000/0x0/0x4ffc00000, data 0x1c8e23a/0x1e25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:32:59.139656+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 42K writes, 170K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 42K writes, 15K syncs, 2.74 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5941 writes, 25K keys, 5941 commit groups, 1.0 writes per commit group, ingest: 27.85 MB, 0.05 MB/s
                                           Interval WAL: 5941 writes, 2282 syncs, 2.60 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 46178304 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:00.140108+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 46178304 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:01.140290+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571193 data_alloc: 234881024 data_used: 14442496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 46178304 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:02.140473+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 46178304 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9d57000/0x0/0x4ffc00000, data 0x1c9023a/0x1e27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:03.140636+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 46178304 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:04.140818+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 46178304 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:05.141095+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 46170112 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:06.141326+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569957 data_alloc: 234881024 data_used: 14450688
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 46170112 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:07.141468+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 46170112 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.252246857s of 11.519014359s, submitted: 92
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:08.141672+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 46170112 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9d56000/0x0/0x4ffc00000, data 0x1c9123a/0x1e28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:09.141845+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 46170112 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:10.142076+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 46170112 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:11.142250+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570185 data_alloc: 234881024 data_used: 14450688
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 46170112 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30e85f860
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:12.142368+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 45752320 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:13.142543+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 45744128 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:14.142741+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 45744128 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e98bc000/0x0/0x4ffc00000, data 0x212b23a/0x22c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:15.142854+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 45744128 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:16.142969+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3613919 data_alloc: 234881024 data_used: 14450688
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 45744128 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:17.143128+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 45744128 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:18.143242+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 45744128 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:19.143418+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 45744128 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:20.143580+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.748434067s of 12.825232506s, submitted: 17
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939800 session 0x55e30ff76d20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 45744128 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e98bc000/0x0/0x4ffc00000, data 0x212b23a/0x22c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:21.143697+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3613919 data_alloc: 234881024 data_used: 14450688
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337248256 unmapped: 45735936 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:22.143844+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337616896 unmapped: 45367296 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:23.144108+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337649664 unmapped: 45334528 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:24.144275+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337649664 unmapped: 45334528 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:25.144428+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337649664 unmapped: 45334528 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e98bc000/0x0/0x4ffc00000, data 0x212b23a/0x22c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:26.144641+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3639039 data_alloc: 234881024 data_used: 17956864
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337649664 unmapped: 45334528 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:27.144940+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337649664 unmapped: 45334528 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e98bc000/0x0/0x4ffc00000, data 0x212b23a/0x22c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e98bc000/0x0/0x4ffc00000, data 0x212b23a/0x22c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:28.145093+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 45326336 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:29.145208+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e98bc000/0x0/0x4ffc00000, data 0x212b23a/0x22c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 45326336 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:30.145350+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 45326336 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:31.145499+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3638335 data_alloc: 234881024 data_used: 17956864
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 45326336 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:32.145628+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.808189392s of 11.818248749s, submitted: 2
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 42573824 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e98bc000/0x0/0x4ffc00000, data 0x212b23a/0x22c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:33.145779+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9442000/0x0/0x4ffc00000, data 0x259d23a/0x2734000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 42500096 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:34.146119+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 43171840 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:35.146262+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 43171840 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:36.146432+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3680609 data_alloc: 234881024 data_used: 18194432
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 43171840 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e943b000/0x0/0x4ffc00000, data 0x25ab23a/0x2742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:37.146646+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 43171840 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:38.146851+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 43163648 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:39.147061+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 43163648 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:40.147293+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 43163648 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:41.147443+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3680625 data_alloc: 234881024 data_used: 18194432
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e943b000/0x0/0x4ffc00000, data 0x25ab23a/0x2742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 43163648 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:42.147653+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 43163648 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e943b000/0x0/0x4ffc00000, data 0x25ab23a/0x2742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:43.147813+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 43163648 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:44.148069+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 43163648 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:45.149343+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30e0a1a40
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 43163648 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff90000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.530860901s of 13.740232468s, submitted: 64
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:46.149536+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff90000 session 0x55e30e41e3c0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3576745 data_alloc: 234881024 data_used: 13524992
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 43139072 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:47.149890+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 43139072 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:48.150098+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9d54000/0x0/0x4ffc00000, data 0x1c9323a/0x1e2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 43139072 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:49.150254+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 43139072 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:50.150400+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 43130880 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:51.150540+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9d54000/0x0/0x4ffc00000, data 0x1c9323a/0x1e2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3576929 data_alloc: 234881024 data_used: 13524992
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 43130880 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:52.150739+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 43130880 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:53.150922+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30d78b2c0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30d7dc1e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 43130880 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30e36d2c0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:54.151043+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb375000/0x0/0x4ffc00000, data 0x64e23a/0x7e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:55.151216+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:56.151387+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3307161 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:57.151569+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:58.151727+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:33:59.151899+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:00.152035+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb399000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:01.152232+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb399000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3307161 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:02.152359+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb399000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:03.152440+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb399000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:04.152863+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:05.153079+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:06.153268+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3307161 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:07.153490+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb399000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:08.153626+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330563584 unmapped: 52420608 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.871526718s of 22.977365494s, submitted: 34
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:09.153814+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb399000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330588160 unmapped: 52396032 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:10.154006+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 52338688 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:11.154227+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306985 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:12.154369+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:13.154509+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb39a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:14.154696+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:15.154912+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb39a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:16.155123+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306985 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:17.155309+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:18.155489+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:19.155721+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:20.155887+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb39a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:21.156114+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb39a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306985 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:22.156355+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb39a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:23.156533+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:24.156768+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331702272 unmapped: 51281920 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:25.156940+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 51273728 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:26.157144+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306985 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 51273728 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:27.157332+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 51273728 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:28.157519+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 51273728 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb39a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:29.157689+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 51273728 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:30.157930+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 51273728 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:31.158130+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306985 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 51273728 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:32.158307+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 51273728 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb39a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:33.158504+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331718656 unmapped: 51265536 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:34.158692+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331718656 unmapped: 51265536 heap: 382984192 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e311e985a0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30fcf6b40
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30e36cb40
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e311e98780
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:35.158838+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.959754944s of 26.327398300s, submitted: 106
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30e2f2780
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327507968 unmapped: 59678720 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eb39a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:36.159081+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3377363 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327507968 unmapped: 59678720 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:37.159277+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327507968 unmapped: 59678720 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:38.159490+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327507968 unmapped: 59678720 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:39.159672+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327507968 unmapped: 59678720 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eab8e000/0x0/0x4ffc00000, data 0xe5a217/0xff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:40.159837+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327507968 unmapped: 59678720 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:41.160001+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eab8e000/0x0/0x4ffc00000, data 0xe5a217/0xff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3377363 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327516160 unmapped: 59670528 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:42.160260+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327516160 unmapped: 59670528 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:43.160430+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30d7a8f00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327516160 unmapped: 59670528 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eab8e000/0x0/0x4ffc00000, data 0xe5a217/0xff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:44.160687+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30fb91a40
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327524352 unmapped: 59662336 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30ff9cb40
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30d7b6b40
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:45.162181+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.409566879s of 10.530294418s, submitted: 23
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327696384 unmapped: 59490304 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:46.162323+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3383042 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327696384 unmapped: 59490304 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:47.162432+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327680000 unmapped: 59506688 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:48.162628+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327680000 unmapped: 59506688 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:49.162876+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eab69000/0x0/0x4ffc00000, data 0xe7e227/0x1015000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327680000 unmapped: 59506688 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eab69000/0x0/0x4ffc00000, data 0xe7e227/0x1015000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:50.163079+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327680000 unmapped: 59506688 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:51.163218+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442242 data_alloc: 218103808 data_used: 9359360
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327680000 unmapped: 59506688 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:52.163337+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327680000 unmapped: 59506688 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:53.163828+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eab69000/0x0/0x4ffc00000, data 0xe7e227/0x1015000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327680000 unmapped: 59506688 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:54.164412+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327680000 unmapped: 59506688 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:55.164759+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4eab69000/0x0/0x4ffc00000, data 0xe7e227/0x1015000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 327680000 unmapped: 59506688 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:56.165193+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442722 data_alloc: 218103808 data_used: 9371648
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.935249329s of 10.938068390s, submitted: 1
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 331063296 unmapped: 56123392 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:57.165344+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8dae000/0x0/0x4ffc00000, data 0x1a91227/0x1c28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334012416 unmapped: 53174272 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:58.165896+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335339520 unmapped: 51847168 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:34:59.166215+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335339520 unmapped: 51847168 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:00.166798+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335339520 unmapped: 51847168 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8c8e000/0x0/0x4ffc00000, data 0x1ba3227/0x1d3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:01.167322+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3561558 data_alloc: 234881024 data_used: 10735616
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335339520 unmapped: 51847168 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:02.167589+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335339520 unmapped: 51847168 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:03.167769+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 335339520 unmapped: 51847168 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:04.168130+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 52584448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:05.168299+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8ca1000/0x0/0x4ffc00000, data 0x1ba6227/0x1d3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 52584448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:06.168441+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550086 data_alloc: 234881024 data_used: 10735616
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 52584448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:07.168564+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 52584448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:08.168768+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 52584448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:09.168898+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8ca1000/0x0/0x4ffc00000, data 0x1ba6227/0x1d3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 52584448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:10.169139+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 52584448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:11.169339+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.522975922s of 14.898180008s, submitted: 128
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550086 data_alloc: 234881024 data_used: 10735616
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 52584448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:12.169486+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30ff9cf00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 52379648 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:13.169679+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 52379648 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:14.170085+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 52379648 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8336000/0x0/0x4ffc00000, data 0x2511227/0x26a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:15.170294+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 52379648 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:16.170467+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624726 data_alloc: 234881024 data_used: 10735616
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 52379648 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:17.170665+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 52379648 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:18.170872+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 52379648 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8336000/0x0/0x4ffc00000, data 0x2511227/0x26a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:19.171003+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 52379648 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939800 session 0x55e30d606000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:20.171172+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8336000/0x0/0x4ffc00000, data 0x2511227/0x26a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ffdc800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ffdc800 session 0x55e30fe8ab40
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 52379648 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:21.171339+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30e0af0e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.732763290s of 10.002961159s, submitted: 15
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30fd0c960
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629128 data_alloc: 234881024 data_used: 10735616
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 52224000 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:22.171480+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 52224000 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:23.171623+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0x2535237/0x26cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 52224000 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:24.171841+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0x2535237/0x26cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 52224000 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:25.172001+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 49807360 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:26.172204+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3699208 data_alloc: 234881024 data_used: 20529152
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 49807360 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:27.172323+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0x2535237/0x26cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 49807360 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:28.172469+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 49807360 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:29.172610+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0x2535237/0x26cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 49807360 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:30.172785+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 49807360 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:31.172961+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3699560 data_alloc: 234881024 data_used: 20529152
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 49807360 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:32.173174+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 49807360 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:33.173354+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 49807360 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8311000/0x0/0x4ffc00000, data 0x2535237/0x26cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1521f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:34.173570+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.206079483s of 13.230063438s, submitted: 5
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341655552 unmapped: 45531136 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:35.173727+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342220800 unmapped: 44965888 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:36.173852+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773124 data_alloc: 234881024 data_used: 21811200
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 44736512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:37.173983+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 44736512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:38.174131+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e69a8000/0x0/0x4ffc00000, data 0x2cf6237/0x2e8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 44736512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:39.174300+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 44736512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:40.174479+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e69a8000/0x0/0x4ffc00000, data 0x2cf6237/0x2e8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 44736512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:41.174587+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773284 data_alloc: 234881024 data_used: 21815296
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 44736512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:42.174785+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 44736512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:43.174986+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e69a8000/0x0/0x4ffc00000, data 0x2cf6237/0x2e8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 44736512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:44.175258+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342458368 unmapped: 44728320 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:45.175376+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e69a8000/0x0/0x4ffc00000, data 0x2cf6237/0x2e8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342466560 unmapped: 44720128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:46.175527+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773604 data_alloc: 234881024 data_used: 21823488
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342466560 unmapped: 44720128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:47.175654+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939800 session 0x55e30e0a6000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30d6b8f00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff86c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.841184616s of 13.115254402s, submitted: 88
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e69a8000/0x0/0x4ffc00000, data 0x2cf6237/0x2e8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342466560 unmapped: 44720128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:48.175758+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff86c00 session 0x55e30d6b9860
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342466560 unmapped: 44720128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:49.175916+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342466560 unmapped: 44720128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:50.176114+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342466560 unmapped: 44720128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:51.176285+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7aff000/0x0/0x4ffc00000, data 0x1ba8227/0x1d3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e311e99a40
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d8843c0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3561521 data_alloc: 234881024 data_used: 10739712
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338468864 unmapped: 48717824 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:52.176412+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30d793c20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:53.176589+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:54.176801+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:55.176972+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:56.177143+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332338 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:57.177325+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:58.177456+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:35:59.177594+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:00.177737+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:01.177866+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332338 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:02.178077+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:03.178322+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:04.178660+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:05.178827+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:06.178963+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332338 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:07.179193+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:08.179508+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 48701440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:09.179727+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 48693248 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:10.179828+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 48693248 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:11.179984+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332338 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 48693248 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:12.180115+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 48693248 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:13.180267+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 48693248 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:14.180480+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 48693248 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:15.180662+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 48693248 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:16.180787+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332338 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 48693248 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:17.180939+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 48685056 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:18.181107+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 48685056 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:19.181244+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 48685056 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:20.181419+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 48685056 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:21.181556+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332338 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 48685056 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:22.181713+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 48685056 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:23.181875+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338501632 unmapped: 48685056 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:24.182080+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338509824 unmapped: 48676864 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:25.182247+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:26.182405+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 48668672 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332338 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:27.182574+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 48668672 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:28.182762+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 48668672 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:29.182928+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 48668672 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:30.183111+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 48668672 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:31.183293+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 48668672 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332338 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:32.183468+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 48668672 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e905a000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:33.183637+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338518016 unmapped: 48668672 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30e36c5a0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30d5d94a0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30fd0de00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30e0afc20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:34.183821+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338526208 unmapped: 48660480 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.188243866s of 46.325260162s, submitted: 45
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30ff77a40
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30d787c20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e311e99680
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30e420d20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30fd0d2c0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:35.184051+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:36.184266+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387791 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8a33000/0x0/0x4ffc00000, data 0xc74227/0xe0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:37.184489+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:38.184734+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:39.184927+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:40.185136+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:41.185363+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8a33000/0x0/0x4ffc00000, data 0xc74227/0xe0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387791 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:42.185606+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:43.185804+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d6b81e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:44.185988+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.333760262s of 10.453811646s, submitted: 30
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:45.186079+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:46.186272+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8622000/0x0/0x4ffc00000, data 0xc7424a/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388840 data_alloc: 218103808 data_used: 1114112
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:47.186421+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 338558976 unmapped: 48627712 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:48.186588+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:49.186802+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:50.187003+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8622000/0x0/0x4ffc00000, data 0xc7424a/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8622000/0x0/0x4ffc00000, data 0xc7424a/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:51.187182+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3426920 data_alloc: 218103808 data_used: 6430720
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:52.187313+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:53.187516+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:54.187743+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:55.187880+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:56.188044+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8622000/0x0/0x4ffc00000, data 0xc7424a/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3426920 data_alloc: 218103808 data_used: 6430720
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:57.188213+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 339124224 unmapped: 48062464 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.844214439s of 12.848373413s, submitted: 1
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7df3000/0x0/0x4ffc00000, data 0x149d24a/0x1635000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:58.188336+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341499904 unmapped: 45686784 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:36:59.188466+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 45645824 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:00.188658+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7dd0000/0x0/0x4ffc00000, data 0x14b724a/0x164f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:01.188863+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513346 data_alloc: 218103808 data_used: 7389184
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:02.190235+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:03.191261+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7dd0000/0x0/0x4ffc00000, data 0x14b724a/0x164f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:04.192281+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:05.192479+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:06.192870+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7dd0000/0x0/0x4ffc00000, data 0x14b724a/0x164f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513666 data_alloc: 218103808 data_used: 7397376
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:07.195763+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:08.196134+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:09.196452+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:10.196729+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:11.196995+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513666 data_alloc: 218103808 data_used: 7397376
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7dd0000/0x0/0x4ffc00000, data 0x14b724a/0x164f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:12.197248+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 45522944 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:13.197511+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 45514752 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30ff76f00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f939800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f939800 session 0x55e30e41f0e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30e3d43c0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30ff9d0e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.032838821s of 16.300992966s, submitted: 80
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e311e981e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:14.197690+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30fcf65a0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff85c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff85c00 session 0x55e30d69d4a0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30d885a40
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30e0a0f00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343097344 unmapped: 44089344 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:15.197966+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343097344 unmapped: 44089344 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:16.198272+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7664000/0x0/0x4ffc00000, data 0x1c302bc/0x1dca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343097344 unmapped: 44089344 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572254 data_alloc: 218103808 data_used: 7397376
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:17.198512+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343097344 unmapped: 44089344 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:18.198834+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343097344 unmapped: 44089344 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:19.199230+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343097344 unmapped: 44089344 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7664000/0x0/0x4ffc00000, data 0x1c302bc/0x1dca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:20.199579+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343097344 unmapped: 44089344 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:21.199891+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343097344 unmapped: 44089344 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f930400 session 0x55e30fd0c3c0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7664000/0x0/0x4ffc00000, data 0x1c302bc/0x1dca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3577152 data_alloc: 218103808 data_used: 7397376
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:22.200086+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343252992 unmapped: 43933696 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e311948400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:23.200226+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343252992 unmapped: 43933696 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:24.200382+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343261184 unmapped: 43925504 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:25.200502+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 43032576 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:26.200629+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 43032576 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3625416 data_alloc: 234881024 data_used: 14028800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:27.200789+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e763f000/0x0/0x4ffc00000, data 0x1c542df/0x1def000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 43032576 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:28.200953+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 43032576 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:29.201173+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 43032576 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:30.201308+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 43032576 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:31.201488+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e763f000/0x0/0x4ffc00000, data 0x1c542df/0x1def000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 43032576 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3625416 data_alloc: 234881024 data_used: 14028800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:32.201625+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 43032576 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:33.201751+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 43032576 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:34.201890+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.092348099s of 20.392938614s, submitted: 60
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346988544 unmapped: 40198144 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:35.202049+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 39239680 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e6871000/0x0/0x4ffc00000, data 0x2a212df/0x2bbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:36.202242+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 39165952 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:37.202417+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749660 data_alloc: 234881024 data_used: 15351808
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 39165952 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:38.202602+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 39165952 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:39.202765+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 39165952 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e6871000/0x0/0x4ffc00000, data 0x2a212df/0x2bbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:40.202921+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 39165952 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:41.203076+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 39165952 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:42.203261+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749660 data_alloc: 234881024 data_used: 15351808
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 39165952 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:43.203430+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 39165952 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:44.203560+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e6871000/0x0/0x4ffc00000, data 0x2a212df/0x2bbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 39165952 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:45.203691+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.890325546s of 11.178516388s, submitted: 115
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 39116800 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:46.203887+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 39108608 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:47.204088+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3744220 data_alloc: 234881024 data_used: 15425536
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 39108608 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30e41ef00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:48.204208+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e311948400 session 0x55e30fcde960
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 39100416 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30d7dcd20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:49.204337+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 40640512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8df9000/0x0/0x4ffc00000, data 0x14b724a/0x164f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:50.204507+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 40640512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:51.204627+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 40640512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:52.204824+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3524040 data_alloc: 218103808 data_used: 7462912
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 40640512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:53.204967+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 40640512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8df9000/0x0/0x4ffc00000, data 0x14b724a/0x164f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30d7a90e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:54.205163+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 44408832 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30fb91e00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:55.205274+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 44408832 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:56.205433+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 44408832 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:57.205546+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362192 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 44408832 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:58.205692+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 44408832 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:37:59.205827+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:00.205962+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:01.206122+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:02.206287+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362192 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:03.206464+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:04.206663+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:05.206858+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 14 10:05:55 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279165009' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:06.206978+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:07.207099+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362192 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:08.207211+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:09.207369+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:10.207532+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff81000 session 0x55e30e0a10e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f930400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:11.207744+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:12.207894+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362192 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:13.208112+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 44400640 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:14.208280+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:15.208505+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:16.208666+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:17.208840+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362192 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:18.208973+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:19.209156+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:20.209327+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:21.209485+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:22.209635+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362192 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:23.209878+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:24.210133+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 44392448 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:25.210317+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 44384256 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:26.210524+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 44384256 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:27.210752+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362192 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 44384256 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:28.210950+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342810624 unmapped: 44376064 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:29.211149+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342810624 unmapped: 44376064 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.374755859s of 44.739795685s, submitted: 119
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:30.211295+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30d7a90e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30e0a0f00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 44367872 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:31.211486+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 44367872 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:32.211615+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3405027 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 44367872 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9804000/0x0/0x4ffc00000, data 0xad3279/0xc6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:33.211751+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 44359680 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:34.211933+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 44359680 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:35.212147+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 44359680 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:36.212294+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 44359680 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30d69d4a0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9804000/0x0/0x4ffc00000, data 0xad3279/0xc6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:37.212472+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721000 session 0x55e30fcf65a0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3405027 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 44359680 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e311e981e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:38.212612+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30ff9d0e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9804000/0x0/0x4ffc00000, data 0xad3279/0xc6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 44359680 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:39.212753+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 44359680 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:40.212871+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 44343296 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9804000/0x0/0x4ffc00000, data 0xad3279/0xc6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:41.213121+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 44343296 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:42.213360+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438307 data_alloc: 218103808 data_used: 5828608
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 44343296 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:43.213976+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 44343296 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:44.214363+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 44343296 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9804000/0x0/0x4ffc00000, data 0xad3279/0xc6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:45.214879+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 44343296 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:46.215203+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 44343296 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:47.215642+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438307 data_alloc: 218103808 data_used: 5828608
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 44343296 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:48.216742+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9804000/0x0/0x4ffc00000, data 0xad3279/0xc6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 44343296 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:49.216926+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.489421844s of 19.609689713s, submitted: 29
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 342851584 unmapped: 44335104 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:50.217057+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:51.217356+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:52.217582+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548313 data_alloc: 218103808 data_used: 6111232
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e899f000/0x0/0x4ffc00000, data 0x1938279/0x1acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:53.217740+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:54.218125+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:55.218376+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:56.218658+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e899f000/0x0/0x4ffc00000, data 0x1938279/0x1acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:57.218784+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548313 data_alloc: 218103808 data_used: 6111232
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:58.218964+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:38:59.219121+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 45637632 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e899f000/0x0/0x4ffc00000, data 0x1938279/0x1acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:00.219273+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e899f000/0x0/0x4ffc00000, data 0x1938279/0x1acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341557248 unmapped: 45629440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:01.219489+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d482400 session 0x55e311e990e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341557248 unmapped: 45629440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:02.219706+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548313 data_alloc: 218103808 data_used: 6111232
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341557248 unmapped: 45629440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e899f000/0x0/0x4ffc00000, data 0x1938279/0x1acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:03.219855+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341557248 unmapped: 45629440 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:04.220092+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30ff9dc20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30d787680
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30d69d680
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff90400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff90400 session 0x55e311e98d20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e311949800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.601642609s of 14.823879242s, submitted: 91
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e311949800 session 0x55e30ff76000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30ff9c000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341368832 unmapped: 45817856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30e421e00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30d5f6b40
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff90400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff90400 session 0x55e30fd0d680
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:05.220264+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e85bc000/0x0/0x4ffc00000, data 0x1d1a2db/0x1eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341385216 unmapped: 45801472 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:06.220494+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341385216 unmapped: 45801472 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:07.220663+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3579341 data_alloc: 218103808 data_used: 6115328
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341385216 unmapped: 45801472 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:08.220863+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341385216 unmapped: 45801472 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e311949800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e311949800 session 0x55e30e4210e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:09.221078+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e85bc000/0x0/0x4ffc00000, data 0x1d1a2db/0x1eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30e0a8f00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341385216 unmapped: 45801472 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:10.221200+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30d7dde00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30fb914a0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341270528 unmapped: 45916160 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff90400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e311949800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:11.221357+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341278720 unmapped: 45907968 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:12.221517+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595288 data_alloc: 218103808 data_used: 7569408
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8596000/0x0/0x4ffc00000, data 0x1d3e30e/0x1ed8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341286912 unmapped: 45899776 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:13.221665+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 45891584 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:14.221871+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 45891584 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:15.222029+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8596000/0x0/0x4ffc00000, data 0x1d3e30e/0x1ed8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 45891584 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:16.222174+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 45891584 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:17.222258+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3612728 data_alloc: 218103808 data_used: 9990144
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 45891584 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:18.222379+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 45891584 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:19.222555+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 45891584 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:20.222702+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 45891584 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8596000/0x0/0x4ffc00000, data 0x1d3e30e/0x1ed8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:21.222835+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 45891584 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:22.222993+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.478260040s of 17.658832550s, submitted: 45
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642496 data_alloc: 218103808 data_used: 10014720
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8269000/0x0/0x4ffc00000, data 0x206b30e/0x2205000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343400448 unmapped: 43786240 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:23.223218+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343400448 unmapped: 43786240 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:24.223382+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e81a9000/0x0/0x4ffc00000, data 0x212b30e/0x22c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 43769856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:25.223502+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8128000/0x0/0x4ffc00000, data 0x21ac30e/0x2346000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 43769856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:26.223626+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 43769856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:27.223750+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3669580 data_alloc: 234881024 data_used: 10985472
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 43769856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:28.223872+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 43769856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8128000/0x0/0x4ffc00000, data 0x21ac30e/0x2346000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:29.224002+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 43769856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:30.224227+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 43769856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:31.224359+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x21cd30e/0x2367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 43769856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:32.224503+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3665952 data_alloc: 234881024 data_used: 10989568
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 43769856 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:33.224647+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 43761664 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:34.224841+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 43761664 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:35.224969+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343425024 unmapped: 43761664 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x21cd30e/0x2367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:36.225142+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.833963394s of 14.129931450s, submitted: 66
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff90400 session 0x55e30fcde1e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e311949800 session 0x55e30ca00960
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 43728896 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d606000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:37.225278+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554685 data_alloc: 218103808 data_used: 6115328
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 43728896 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:38.225440+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e899e000/0x0/0x4ffc00000, data 0x1938279/0x1acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 43728896 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:39.225619+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 43728896 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:40.225739+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30e41f0e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 43728896 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30e3d52c0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:41.225889+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 43728896 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:42.226055+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381681 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 43728896 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:43.226211+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 43728896 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:44.226609+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343457792 unmapped: 43728896 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:45.226732+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343465984 unmapped: 43720704 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:46.226883+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343465984 unmapped: 43720704 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:47.227056+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381681 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 43712512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:48.227199+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 43712512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:49.227358+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 43712512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:50.227542+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 43712512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:51.227719+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 43712512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:52.227889+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381681 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343474176 unmapped: 43712512 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:53.228038+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 43704320 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:54.228235+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 43704320 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:55.228415+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 43704320 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:56.228578+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 43704320 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:57.228711+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381681 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 43704320 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:58.228872+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 43704320 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:39:59.229104+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 43704320 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:00.229244+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 43704320 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:01.229375+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 43696128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:02.229540+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381681 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 43696128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:03.229698+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 43696128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:04.229865+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343490560 unmapped: 43696128 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:05.230083+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 43687936 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:06.230220+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 43687936 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:07.230391+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381681 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 43687936 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:08.230549+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 43687936 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:09.230699+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 43679744 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:10.230830+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 43679744 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:11.231074+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 43679744 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:12.231245+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381681 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 43679744 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:13.231398+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343515136 unmapped: 43671552 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:14.231574+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343515136 unmapped: 43671552 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:15.231712+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30ff9da40
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff90400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff90400 session 0x55e30d7b74a0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff90400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff90400 session 0x55e30e85f2c0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343515136 unmapped: 43671552 heap: 387186688 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:16.231978+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d7dd0e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9c89000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 39.772365570s of 40.016002655s, submitted: 65
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e312a114a0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30e2f3e00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30d884f00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e311e98780
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d7a8d20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 49954816 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:17.232141+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3448693 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e93f9000/0x0/0x4ffc00000, data 0xede279/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:18.232542+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 49954816 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:19.232744+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 49954816 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e93f9000/0x0/0x4ffc00000, data 0xede279/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:20.232974+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 49954816 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:21.233373+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 49954816 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:22.234071+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 49954816 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3448693 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30deb83c0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:23.234289+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 49954816 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30d6b9860
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:24.234581+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 49954816 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff90400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff90400 session 0x55e30d6061e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d787c20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:25.234761+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343547904 unmapped: 49938432 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d6c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e93f7000/0x0/0x4ffc00000, data 0xede2ac/0x1077000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:26.234917+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 343556096 unmapped: 49930240 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:27.235063+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515372 data_alloc: 218103808 data_used: 10084352
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:28.235224+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e93f7000/0x0/0x4ffc00000, data 0xede2ac/0x1077000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:29.235555+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:30.235782+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:31.235980+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e93f7000/0x0/0x4ffc00000, data 0xede2ac/0x1077000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:32.236152+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515372 data_alloc: 218103808 data_used: 10084352
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:33.236305+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:34.236553+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e93f7000/0x0/0x4ffc00000, data 0xede2ac/0x1077000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:35.236725+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:36.236860+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.672105789s of 19.853677750s, submitted: 44
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344154112 unmapped: 49332224 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:37.236992+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9371000/0x0/0x4ffc00000, data 0xf642ac/0x10fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345759744 unmapped: 47726592 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3590230 data_alloc: 218103808 data_used: 10219520
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:38.237158+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345759744 unmapped: 47726592 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:39.237348+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345759744 unmapped: 47726592 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:40.237557+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345759744 unmapped: 47726592 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:41.237728+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345759744 unmapped: 47726592 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8a99000/0x0/0x4ffc00000, data 0x183c2ac/0x19d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:42.237950+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 47718400 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3590230 data_alloc: 218103808 data_used: 10219520
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:43.238110+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 47718400 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:44.238294+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8a97000/0x0/0x4ffc00000, data 0x183e2ac/0x19d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 47718400 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:45.238471+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 47718400 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:46.238622+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 47718400 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8a97000/0x0/0x4ffc00000, data 0x183e2ac/0x19d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:47.238770+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 47718400 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589154 data_alloc: 218103808 data_used: 10223616
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:48.238909+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345767936 unmapped: 47718400 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.483092308s of 12.714237213s, submitted: 77
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:49.239131+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 47702016 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:50.239290+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 47702016 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:51.239432+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 345784320 unmapped: 47702016 heap: 393486336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8a96000/0x0/0x4ffc00000, data 0x183f2ac/0x19d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:52.239572+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30ff76f00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 53280768 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703300 data_alloc: 218103808 data_used: 10223616
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:53.239732+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 53280768 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:54.239922+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 53280768 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:55.240080+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 53280768 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:56.240224+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 53280768 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7c7b000/0x0/0x4ffc00000, data 0x265a2ac/0x27f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:57.240392+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 53280768 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30fcf72c0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703300 data_alloc: 218103808 data_used: 10223616
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:58.240526+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 53272576 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66000 session 0x55e30fcf7680
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f925c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f925c00 session 0x55e311e98f00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:40:59.240715+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.225301743s of 10.326282501s, submitted: 25
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30fb90b40
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 53272576 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:00.240840+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 53272576 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:01.240983+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7c7a000/0x0/0x4ffc00000, data 0x265a2bc/0x27f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 53272576 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:02.241119+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 45744128 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3799714 data_alloc: 234881024 data_used: 23601152
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:03.241264+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 45744128 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7c7a000/0x0/0x4ffc00000, data 0x265a2bc/0x27f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7c7a000/0x0/0x4ffc00000, data 0x265a2bc/0x27f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:04.241393+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 45744128 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:05.241539+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 45744128 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:06.241711+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 45744128 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:07.241906+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 45744128 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3799714 data_alloc: 234881024 data_used: 23601152
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:08.242039+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 45744128 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:09.242179+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7c7a000/0x0/0x4ffc00000, data 0x265a2bc/0x27f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355631104 unmapped: 45735936 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:10.242296+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355631104 unmapped: 45735936 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.759096146s of 11.772933006s, submitted: 3
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:11.242423+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 40542208 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:12.242628+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e719f000/0x0/0x4ffc00000, data 0x31332bc/0x32cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361168896 unmapped: 40198144 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3919252 data_alloc: 234881024 data_used: 25632768
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:13.242801+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361201664 unmapped: 40165376 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:14.242977+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361201664 unmapped: 40165376 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:15.243080+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361340928 unmapped: 40026112 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e700c000/0x0/0x4ffc00000, data 0x32b92bc/0x3453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:16.243248+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361340928 unmapped: 40026112 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e700c000/0x0/0x4ffc00000, data 0x32b92bc/0x3453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:17.243410+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361340928 unmapped: 40026112 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3906216 data_alloc: 234881024 data_used: 25632768
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:18.243588+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 39895040 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:19.243775+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 39895040 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:20.243997+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 39895040 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:21.244246+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 39895040 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:22.244502+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 39895040 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e6ffa000/0x0/0x4ffc00000, data 0x32da2bc/0x3474000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3906216 data_alloc: 234881024 data_used: 25632768
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:23.245087+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 39895040 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.417293549s of 12.718182564s, submitted: 129
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:24.245406+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e6ffa000/0x0/0x4ffc00000, data 0x32da2bc/0x3474000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 39895040 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e6ffa000/0x0/0x4ffc00000, data 0x32da2bc/0x3474000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:25.245640+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 361480192 unmapped: 39886848 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66000 session 0x55e30fcde780
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30d7dc960
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:26.245841+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 44490752 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30e421c20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:27.246058+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 44490752 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601919 data_alloc: 218103808 data_used: 10223616
Oct 14 10:05:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2827210756' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:28.246474+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 44490752 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:29.246628+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 44490752 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30e4d6c00 session 0x55e30e85f860
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30d5d9c20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:30.246942+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30fcded20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9a5c000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:31.247092+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:32.247258+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3360608052' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406758 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-mon[74249]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:33.247399+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-mon[74249]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:34.247865+0000)
Oct 14 10:05:55 compute-0 ceph-mon[74249]: pgmap v3399: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9a5c000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4279165009' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:35.248031+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:36.248314+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:37.248469+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406758 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:38.248673+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:39.248956+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9a5c000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:40.249216+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:41.249387+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:42.249537+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406758 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:43.249873+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:44.250157+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:45.250353+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9a5c000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:46.250529+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:47.250763+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406758 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:48.250991+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:49.251192+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:50.251329+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9a5c000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:51.251484+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:52.251654+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406758 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:53.251791+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:54.252006+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9a5c000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:55.252157+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:56.252299+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:57.252430+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:58.252579+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406758 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:41:59.252748+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e9a5c000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:00.252904+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:01.253030+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344104960 unmapped: 57262080 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:02.253180+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66000 session 0x55e312a10780
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30d5f6d20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30e0a6f00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344113152 unmapped: 57253888 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30dd7de00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.901283264s of 39.113162994s, submitted: 65
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30fcdfe00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30ca00960
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66000 session 0x55e30fe8ba40
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30e838d20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:03.253283+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30fd0dc20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3470701 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:04.253390+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:05.253565+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e947d000/0x0/0x4ffc00000, data 0xe5a227/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:06.253681+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:07.253860+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:08.254075+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3470701 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:09.254244+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:10.254388+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e947d000/0x0/0x4ffc00000, data 0xe5a227/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d7b8d20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:11.254544+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:12.254683+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e947d000/0x0/0x4ffc00000, data 0xe5a227/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:13.254821+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3522481 data_alloc: 218103808 data_used: 8404992
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:14.254980+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:15.255110+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:16.255273+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e947d000/0x0/0x4ffc00000, data 0xe5a227/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:17.255445+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:18.255573+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3530321 data_alloc: 218103808 data_used: 9543680
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:19.255683+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e947d000/0x0/0x4ffc00000, data 0xe5a227/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:20.255822+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e947d000/0x0/0x4ffc00000, data 0xe5a227/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:21.256061+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 344170496 unmapped: 57196544 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:22.256279+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.354810715s of 19.462732315s, submitted: 21
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 54460416 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:23.256468+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3588511 data_alloc: 218103808 data_used: 9900032
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 54460416 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:24.256636+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8d27000/0x0/0x4ffc00000, data 0x15b0227/0x1747000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 54460416 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:25.256781+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54452224 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:26.256931+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8d27000/0x0/0x4ffc00000, data 0x15b0227/0x1747000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54452224 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:27.257100+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54452224 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:28.257195+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599941 data_alloc: 218103808 data_used: 10031104
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54452224 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:29.257353+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54452224 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:30.257509+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54452224 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:31.257662+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8d27000/0x0/0x4ffc00000, data 0x15b0227/0x1747000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54452224 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:32.257797+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54444032 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:33.257959+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599941 data_alloc: 218103808 data_used: 10031104
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54444032 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:34.258142+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8d27000/0x0/0x4ffc00000, data 0x15b0227/0x1747000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54444032 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:35.258263+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54444032 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:36.258423+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54444032 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:37.258619+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54444032 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:38.258762+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599941 data_alloc: 218103808 data_used: 10031104
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54444032 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:39.260839+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66000 session 0x55e30e3d50e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30dd7cb40
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e310e51400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e310e51400 session 0x55e313f51860
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e311e99a40
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.313724518s of 17.482471466s, submitted: 58
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346947584 unmapped: 54419456 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8d27000/0x0/0x4ffc00000, data 0x15b0227/0x1747000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,1,1])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66000 session 0x55e30fd0d680
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30d6b81e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30e41e3c0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff88400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff88400 session 0x55e30d7921e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30d7a94a0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:40.260996+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 54394880 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:41.261187+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 54386688 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:42.261360+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 54386688 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:43.261547+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3657942 data_alloc: 218103808 data_used: 10035200
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 54386688 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8667000/0x0/0x4ffc00000, data 0x1c6e299/0x1e07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:44.261690+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 54386688 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8667000/0x0/0x4ffc00000, data 0x1c6e299/0x1e07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:45.261871+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 54386688 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66000 session 0x55e312a11c20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:46.262053+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30fd4c5a0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 54386688 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30ff9cd20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:47.262172+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ffe8c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ffe8c00 session 0x55e30e0a0f00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 52281344 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:48.262335+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3660241 data_alloc: 218103808 data_used: 10039296
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 52273152 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:49.262468+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 51511296 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:50.262630+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e74c6000/0x0/0x4ffc00000, data 0x1c6e2a9/0x1e08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 51511296 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:51.262815+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 51511296 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:52.262953+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 51511296 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:53.263077+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3702161 data_alloc: 234881024 data_used: 15888384
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 51511296 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:54.263267+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 51511296 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:55.263422+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.412758827s of 15.572922707s, submitted: 43
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 51511296 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:56.263606+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e74c4000/0x0/0x4ffc00000, data 0x1c6f2a9/0x1e09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 51511296 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:57.263794+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 51511296 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:58.263973+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3702469 data_alloc: 234881024 data_used: 15888384
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 49938432 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.74 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2800 writes, 11K keys, 2800 commit groups, 1.0 writes per commit group, ingest: 14.21 MB, 0.02 MB/s
                                           Interval WAL: 2799 writes, 1060 syncs, 2.64 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:42:59.264114+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355942400 unmapped: 45424640 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:00.264304+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 356016128 unmapped: 45350912 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67f3000/0x0/0x4ffc00000, data 0x29392a9/0x2ad3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:01.264430+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 45981696 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:02.264583+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 45924352 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:03.264796+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3814005 data_alloc: 234881024 data_used: 16809984
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67d1000/0x0/0x4ffc00000, data 0x29622a9/0x2afc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 45924352 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:04.264990+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67d1000/0x0/0x4ffc00000, data 0x29622a9/0x2afc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67d1000/0x0/0x4ffc00000, data 0x29622a9/0x2afc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 45924352 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:05.265256+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 45924352 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:06.265386+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67d1000/0x0/0x4ffc00000, data 0x29622a9/0x2afc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 45924352 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:07.265500+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.632470131s of 12.327059746s, submitted: 108
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67d0000/0x0/0x4ffc00000, data 0x29632a9/0x2afd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 45924352 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:08.265661+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3812485 data_alloc: 234881024 data_used: 16814080
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67cf000/0x0/0x4ffc00000, data 0x29652a9/0x2aff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 45924352 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:09.265816+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 45924352 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:10.265958+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 45916160 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:11.266081+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 45916160 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:12.266270+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67cf000/0x0/0x4ffc00000, data 0x29652a9/0x2aff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 45916160 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:13.266418+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3812485 data_alloc: 234881024 data_used: 16814080
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 45907968 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:14.266600+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67cf000/0x0/0x4ffc00000, data 0x29652a9/0x2aff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 45907968 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:15.266767+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67cf000/0x0/0x4ffc00000, data 0x29652a9/0x2aff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 45907968 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:16.266914+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 45907968 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:17.267104+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 45907968 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:18.267270+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3812485 data_alloc: 234881024 data_used: 16814080
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 45907968 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:19.267605+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67cf000/0x0/0x4ffc00000, data 0x29652a9/0x2aff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 45907968 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:20.267762+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67cf000/0x0/0x4ffc00000, data 0x29652a9/0x2aff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67cf000/0x0/0x4ffc00000, data 0x29652a9/0x2aff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 45907968 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:21.267942+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355467264 unmapped: 45899776 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:22.268126+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355467264 unmapped: 45899776 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:23.268342+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3812805 data_alloc: 234881024 data_used: 16822272
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355467264 unmapped: 45899776 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.391826630s of 16.407047272s, submitted: 3
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:24.268556+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355467264 unmapped: 45899776 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:25.268717+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e67cf000/0x0/0x4ffc00000, data 0x29652a9/0x2aff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30dd7cf00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66000 session 0x55e30d8854a0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355467264 unmapped: 45899776 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:26.269208+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30fd0d2c0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 45875200 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:27.269430+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 45875200 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:28.270319+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3610026 data_alloc: 218103808 data_used: 10092544
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 45875200 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:29.271236+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30fd4c3c0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 355500032 unmapped: 45867008 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:30.271350+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7b85000/0x0/0x4ffc00000, data 0x15b1227/0x1748000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e313f51680
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:31.271531+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:32.271728+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:33.272122+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431522 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:34.272700+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:35.273122+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:36.273804+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:37.274327+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:38.274879+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431522 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:39.275108+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:40.275282+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:41.275656+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:42.276002+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:43.276294+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431522 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:44.276548+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:45.276729+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:46.276980+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:47.277193+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:48.277415+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431522 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:49.277553+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50446336 heap: 401367040 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:50.277759+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.931455612s of 26.202787399s, submitted: 70
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff89400 session 0x55e30d7863c0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350871552 unmapped: 54697984 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:51.277892+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30e36c960
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30e0afa40
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350871552 unmapped: 54697984 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:52.278167+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30fd66000 session 0x55e30dd7d680
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30e2f32c0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e7fb7000/0x0/0x4ffc00000, data 0x1180227/0x1317000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350871552 unmapped: 54697984 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:53.278388+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519574 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350871552 unmapped: 54697984 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:54.278592+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352518144 unmapped: 53051392 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:55.278815+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352518144 unmapped: 53051392 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:56.279111+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30d721800 session 0x55e30fe8b4a0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30ff84000 session 0x55e30d7a8f00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 ms_handle_reset con 0x55e30f933c00 session 0x55e30fb91680
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:57.279294+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:58.279470+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:43:59.279732+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:00.279910+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:01.280082+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:02.280243+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:03.280374+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:04.280511+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:05.280587+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:06.280733+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:07.280845+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:08.281001+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.590475082s of 18.700500488s, submitted: 24
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:09.281189+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 55271424 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:10.281321+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 55255040 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,1])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:11.281502+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:12.281875+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:13.282083+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:14.282269+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:15.282445+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:16.282616+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:17.282773+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:18.282971+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:19.283112+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:20.283252+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:21.283411+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:22.283591+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:23.283803+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:24.284002+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:25.284187+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:26.284343+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:27.284515+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:28.284701+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:29.284853+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:30.285077+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:31.285259+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:32.285411+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 55214080 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:33.285593+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 55205888 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:34.285777+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 55205888 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:35.285986+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 55205888 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:36.286244+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 55205888 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:37.286414+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 55205888 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:38.286584+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 55205888 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:39.286767+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 55205888 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:40.286945+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 55205888 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:41.287119+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 55197696 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:42.287957+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 55197696 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:43.288506+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 55197696 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:44.288707+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 55197696 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:45.288855+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 55197696 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:46.289086+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 55197696 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:47.289255+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 55197696 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:48.289433+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 55197696 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:49.289610+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 55189504 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:50.289829+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 55189504 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:51.290049+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 55189504 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:52.290211+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 55189504 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:53.290378+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aea000/0x0/0x4ffc00000, data 0x64e217/0x7e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 55189504 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437507 data_alloc: 218103808 data_used: 1105920
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:54.290639+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 55189504 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:55.290807+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 55189504 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:56.290984+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.899085999s of 47.251701355s, submitted: 106
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350380032 unmapped: 55189504 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8aeb000/0x0/0x4ffc00000, data 0x64e1f4/0x7e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _renew_subs
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:57.291131+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 293 ms_handle_reset con 0x55e30fd66000 session 0x55e30d6065a0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350396416 unmapped: 55173120 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:58.291250+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350404608 unmapped: 55164928 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3439662 data_alloc: 218103808 data_used: 1114112
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:44:59.291413+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350404608 unmapped: 55164928 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:00.291613+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350404608 unmapped: 55164928 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 293 heartbeat osd_stat(store_statfs(0x4e86d8000/0x0/0x4ffc00000, data 0x64fdb5/0x7e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:01.291788+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350404608 unmapped: 55164928 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:02.291940+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350404608 unmapped: 55164928 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 293 handle_osd_map epochs [293,294], i have 293, src has [1,294]
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:03.292107+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350429184 unmapped: 55140352 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:04.292286+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350429184 unmapped: 55140352 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:05.292697+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350429184 unmapped: 55140352 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:06.293093+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 55132160 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:07.293360+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 55132160 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:08.294316+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 55132160 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:09.294472+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 55132160 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:10.294650+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 55132160 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:11.295135+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 55132160 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:12.295773+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 55132160 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:13.296147+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 55132160 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:14.296415+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350445568 unmapped: 55123968 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:15.296651+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350445568 unmapped: 55123968 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:16.296900+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350445568 unmapped: 55123968 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:17.297148+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350445568 unmapped: 55123968 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:18.297368+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350445568 unmapped: 55123968 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:19.297524+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350445568 unmapped: 55123968 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:20.297771+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350445568 unmapped: 55123968 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:21.298065+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 55115776 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:22.298356+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 55115776 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:23.298530+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 55115776 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:24.298719+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 55115776 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:25.298850+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 55115776 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:26.299063+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 55115776 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:27.299217+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 55115776 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:28.299392+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 55115776 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:29.299530+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350461952 unmapped: 55107584 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:30.299694+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350461952 unmapped: 55107584 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:31.299866+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350461952 unmapped: 55107584 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:32.300075+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350461952 unmapped: 55107584 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:33.300249+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350461952 unmapped: 55107584 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:34.300456+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350461952 unmapped: 55107584 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:35.300662+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350461952 unmapped: 55107584 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:36.300870+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350470144 unmapped: 55099392 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:37.301717+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 55091200 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:38.301931+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 55091200 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:39.302495+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 55091200 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:40.303392+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 55091200 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:41.303662+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 55091200 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:42.303938+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 55091200 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:43.304312+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 55091200 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:44.304843+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 55091200 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:45.305098+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 55083008 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:46.305487+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 55083008 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:47.305831+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 55083008 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:48.306195+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 55083008 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:49.306518+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 55083008 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:50.306768+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 55083008 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:51.306994+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 55083008 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:52.307247+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 55083008 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:53.307464+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350494720 unmapped: 55074816 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:54.307679+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350494720 unmapped: 55074816 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:55.307826+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 55066624 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:56.307982+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 55066624 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:57.308153+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 55066624 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:58.308281+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 55066624 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:45:59.308437+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 55066624 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:00.308621+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 55066624 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:01.308766+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350511104 unmapped: 55058432 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:02.308930+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350511104 unmapped: 55058432 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:03.309119+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350511104 unmapped: 55058432 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:04.310118+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350511104 unmapped: 55058432 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:05.310350+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350519296 unmapped: 55050240 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:06.310529+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350519296 unmapped: 55050240 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:07.310750+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350519296 unmapped: 55050240 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:08.311076+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350527488 unmapped: 55042048 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:09.311255+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350543872 unmapped: 55025664 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:10.311384+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350552064 unmapped: 55017472 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:11.311546+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350552064 unmapped: 55017472 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:12.311715+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350552064 unmapped: 55017472 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:13.311922+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350552064 unmapped: 55017472 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:14.312115+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350560256 unmapped: 55009280 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:15.312244+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350560256 unmapped: 55009280 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:16.312431+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350560256 unmapped: 55009280 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:17.312589+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 55001088 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:18.312710+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 55001088 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:19.312958+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 55001088 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:20.313135+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 55001088 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:21.313270+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 55001088 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:22.313481+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 55001088 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:23.313637+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 55001088 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:24.313828+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 55001088 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:25.313981+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350576640 unmapped: 54992896 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:26.314147+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350576640 unmapped: 54992896 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:27.314266+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350576640 unmapped: 54992896 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:28.314374+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 54984704 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:29.314524+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350601216 unmapped: 54968320 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:30.314699+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350601216 unmapped: 54968320 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:31.314878+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350601216 unmapped: 54968320 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:32.315089+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350601216 unmapped: 54968320 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:33.315292+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350609408 unmapped: 54960128 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:34.315510+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350609408 unmapped: 54960128 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:35.315679+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350609408 unmapped: 54960128 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:36.315935+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350617600 unmapped: 54951936 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:37.316196+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350617600 unmapped: 54951936 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:38.316482+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350617600 unmapped: 54951936 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:39.316720+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350617600 unmapped: 54951936 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:40.316949+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350617600 unmapped: 54951936 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:41.317148+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 54943744 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:42.317421+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 54943744 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:43.317680+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 54943744 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:44.317971+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 54943744 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:45.318197+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 54943744 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:46.318357+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 54943744 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:47.318534+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 54943744 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:48.318750+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 54943744 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:49.319092+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442796 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 54919168 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:50.319322+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 ms_handle_reset con 0x55e30ff89400 session 0x55e30ff9d680
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 54919168 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:51.319515+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 ms_handle_reset con 0x55e30ff89400 session 0x55e311e99c20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 54919168 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:52.319688+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 54919168 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:53.319903+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 54919168 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:54.320115+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442956 data_alloc: 218103808 data_used: 1126400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 54919168 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:55.320262+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 54919168 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:56.320430+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 54919168 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:57.320562+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350658560 unmapped: 54910976 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:58.320749+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e86d5000/0x0/0x4ffc00000, data 0x651818/0x7e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350666752 unmapped: 54902784 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:46:59.320864+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442956 data_alloc: 218103808 data_used: 1126400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 123.570800781s of 123.729423523s, submitted: 50
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350666752 unmapped: 54902784 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:00.320973+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _renew_subs
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 295 ms_handle_reset con 0x55e30d721800 session 0x55e30fcf6780
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350683136 unmapped: 54886400 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:01.321078+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e86d4000/0x0/0x4ffc00000, data 0x6533b6/0x7e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350683136 unmapped: 54886400 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:02.321203+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350683136 unmapped: 54886400 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:03.321339+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _renew_subs
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 296 ms_handle_reset con 0x55e30f933c00 session 0x55e30fcdf0e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350715904 unmapped: 54853632 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:04.321516+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413453 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350724096 unmapped: 54845440 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:05.321633+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350724096 unmapped: 54845440 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:06.321761+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350724096 unmapped: 54845440 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:07.321987+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e8b41000/0x0/0x4ffc00000, data 0x1e4f87/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350732288 unmapped: 54837248 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:08.322155+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350732288 unmapped: 54837248 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:09.322300+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413453 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350732288 unmapped: 54837248 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:10.322443+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.094017982s of 11.202492714s, submitted: 30
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350740480 unmapped: 54829056 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:11.322563+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e8b41000/0x0/0x4ffc00000, data 0x1e4f87/0x37c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350748672 unmapped: 54820864 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:12.322678+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350756864 unmapped: 54812672 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:13.322896+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350756864 unmapped: 54812672 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:14.323153+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416427 data_alloc: 218103808 data_used: 1118208
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 297 ms_handle_reset con 0x55e30fd66000 session 0x55e30d7874a0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350756864 unmapped: 54812672 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:15.323330+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350781440 unmapped: 54788096 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:16.323529+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350781440 unmapped: 54788096 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:17.323692+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:18.323923+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350781440 unmapped: 54788096 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:19.324115+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350781440 unmapped: 54788096 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421525 data_alloc: 218103808 data_used: 1126400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:20.324299+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350781440 unmapped: 54788096 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:21.324466+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350789632 unmapped: 54779904 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:22.324613+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350789632 unmapped: 54779904 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:23.324799+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350789632 unmapped: 54779904 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:24.324960+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 54771712 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421525 data_alloc: 218103808 data_used: 1126400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:25.325118+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 54771712 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:26.325284+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 54771712 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:27.325585+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 54771712 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:28.325747+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 54771712 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:29.325967+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 54763520 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:30.326144+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 54763520 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:31.326293+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 54755328 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:32.326439+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 54755328 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:33.326633+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 54755328 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:34.326943+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 54755328 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:35.327131+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 54747136 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:36.327322+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 54747136 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:37.327492+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 54738944 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:38.327666+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 54738944 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:39.327810+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 54738944 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:40.327985+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 54738944 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:41.328108+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 54738944 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:42.328328+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 54738944 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:43.328495+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 54738944 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:44.328653+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350838784 unmapped: 54730752 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:45.328775+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350855168 unmapped: 54714368 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:46.328903+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 54706176 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:47.329133+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 54706176 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:48.329353+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 54706176 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:49.329703+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 54706176 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:50.330123+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 54706176 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:51.330344+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 54706176 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:52.331095+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 54706176 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:53.331702+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350871552 unmapped: 54697984 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:54.332216+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350879744 unmapped: 54689792 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:55.332589+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350879744 unmapped: 54689792 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:56.332765+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 54673408 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:57.332990+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 54673408 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:58.333220+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 54673408 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:47:59.333379+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 54673408 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:00.333705+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 54673408 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:01.334008+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 54665216 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:02.334332+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 54665216 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:03.334555+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 54665216 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:04.334787+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 54665216 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:05.335090+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 54665216 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:06.335266+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 54665216 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:07.335455+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 54665216 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:08.335676+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 54665216 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:09.335869+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350912512 unmapped: 54657024 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:10.336068+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350912512 unmapped: 54657024 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:11.336264+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 54648832 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:12.336526+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 54648832 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:13.336759+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 54632448 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:14.336989+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 54632448 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:15.337180+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 54632448 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:16.337356+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 54632448 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:17.337539+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 54624256 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:18.337694+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 54624256 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:19.337829+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 54624256 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:20.337949+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 54624256 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:21.338068+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 54624256 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:22.338303+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 54624256 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:23.338505+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 54624256 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:24.338722+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 54624256 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:25.338913+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 54616064 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:26.339077+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 54607872 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:27.339286+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 54607872 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:28.339409+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 54607872 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:29.339560+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 54607872 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:30.339722+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350969856 unmapped: 54599680 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:31.339874+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350969856 unmapped: 54599680 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:32.340073+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350969856 unmapped: 54599680 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:33.340223+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 54583296 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:34.340440+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 54583296 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:35.340571+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 54583296 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:36.340740+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 54583296 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:37.340913+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 54583296 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:38.341073+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 54583296 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:39.341227+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 54583296 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:40.341392+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 54583296 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:41.341536+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 54575104 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:42.341734+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 54575104 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:43.341957+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 54566912 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:44.342289+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 54566912 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:45.342507+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 54566912 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:46.342655+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 54566912 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:47.342849+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 54566912 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:48.343104+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 54566912 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:49.343270+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351019008 unmapped: 54550528 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:50.343408+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351027200 unmapped: 54542336 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:51.343546+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351027200 unmapped: 54542336 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:52.343708+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351027200 unmapped: 54542336 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:53.343857+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351027200 unmapped: 54542336 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:54.344256+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351027200 unmapped: 54542336 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:55.344384+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351035392 unmapped: 54534144 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:56.344539+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351035392 unmapped: 54534144 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:57.344741+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 54525952 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:58.344927+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351051776 unmapped: 54517760 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:48:59.345079+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351051776 unmapped: 54517760 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421685 data_alloc: 218103808 data_used: 1130496
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e8b3a000/0x0/0x4ffc00000, data 0x1e8678/0x383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:00.345225+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 109.445167542s of 109.478462219s, submitted: 18
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351051776 unmapped: 54517760 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:01.345368+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351051776 unmapped: 54517760 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 299 ms_handle_reset con 0x55e30ff84000 session 0x55e3109df0e0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:02.345535+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff84000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 359448576 unmapped: 46120960 heap: 405569536 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:03.345677+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351076352 unmapped: 62889984 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 299 handle_osd_map epochs [299,300], i have 299, src has [1,300]
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 300 ms_handle_reset con 0x55e30ff84000 session 0x55e30fb90000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7b37000/0x0/0x4ffc00000, data 0x11ea205/0x1387000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:04.345831+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 61825024 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7b37000/0x0/0x4ffc00000, data 0x11ea205/0x1387000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3539267 data_alloc: 218103808 data_used: 1142784
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:05.345980+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 61816832 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:06.346108+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 61808640 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:07.346215+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7b33000/0x0/0x4ffc00000, data 0x11ebd82/0x138a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 61808640 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:08.346315+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 61808640 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:09.346481+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 61808640 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3539267 data_alloc: 218103808 data_used: 1142784
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:10.346698+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 61808640 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:11.346877+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 61808640 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:12.347102+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 61808640 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7b33000/0x0/0x4ffc00000, data 0x11ebd82/0x138a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:13.347286+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:14.347590+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3539267 data_alloc: 218103808 data_used: 1142784
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:15.347803+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:16.348065+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:17.348243+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7b33000/0x0/0x4ffc00000, data 0x11ebd82/0x138a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:18.348430+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:19.348616+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3539267 data_alloc: 218103808 data_used: 1142784
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7b33000/0x0/0x4ffc00000, data 0x11ebd82/0x138a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:20.348783+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:21.349090+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 61792256 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:22.349268+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 61792256 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7b33000/0x0/0x4ffc00000, data 0x11ebd82/0x138a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:23.349491+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 61792256 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:24.349703+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 61792256 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3539267 data_alloc: 218103808 data_used: 1142784
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:25.349867+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 61792256 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:26.350089+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 61792256 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:27.350248+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 61792256 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7b33000/0x0/0x4ffc00000, data 0x11ebd82/0x138a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:28.350404+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 61784064 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:29.350569+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352190464 unmapped: 61775872 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7b33000/0x0/0x4ffc00000, data 0x11ebd82/0x138a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3539267 data_alloc: 218103808 data_used: 1142784
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:30.350747+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30d721800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352190464 unmapped: 61775872 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _renew_subs
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.514171600s of 30.598321915s, submitted: 8
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:31.350866+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 301 ms_handle_reset con 0x55e30d721800 session 0x55e30e420f00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351166464 unmapped: 62799872 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:32.351049+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e7b31000/0x0/0x4ffc00000, data 0x11ed842/0x138c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351166464 unmapped: 62799872 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:33.381109+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351166464 unmapped: 62799872 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:34.381354+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351166464 unmapped: 62799872 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3541533 data_alloc: 218103808 data_used: 1142784
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:35.381539+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351166464 unmapped: 62799872 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:36.381682+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351166464 unmapped: 62799872 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:37.381852+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e7b31000/0x0/0x4ffc00000, data 0x11ed842/0x138c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351174656 unmapped: 62791680 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:38.382148+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351191040 unmapped: 62775296 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:39.382312+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351191040 unmapped: 62775296 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544507 data_alloc: 218103808 data_used: 1142784
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:40.382496+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351191040 unmapped: 62775296 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:41.382645+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351199232 unmapped: 62767104 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:42.382790+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351199232 unmapped: 62767104 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:43.382984+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351199232 unmapped: 62767104 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:44.383232+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351199232 unmapped: 62767104 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544507 data_alloc: 218103808 data_used: 1142784
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:45.383438+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351199232 unmapped: 62767104 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:46.383632+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 62758912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:47.383766+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 62758912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:48.383949+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 62758912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:49.384174+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 62758912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544507 data_alloc: 218103808 data_used: 1142784
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:50.384377+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 62758912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:51.384532+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 62758912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:52.430499+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351207424 unmapped: 62758912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:53.430647+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 62750720 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:54.430826+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 62742528 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544507 data_alloc: 218103808 data_used: 1142784
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:55.431097+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 62742528 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:56.431343+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 62742528 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:57.431590+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 62742528 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:58.432135+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 62742528 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:49:59.432319+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 62742528 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:00.432528+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 62742528 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:01.432815+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 62734336 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:02.432998+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 62734336 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:03.433271+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 62734336 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:04.433626+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 62734336 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:05.433919+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 62734336 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:06.434899+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 62734336 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:07.435834+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 62726144 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:08.436391+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 62726144 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:09.437174+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351256576 unmapped: 62709760 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:10.437815+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 62701568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:11.438553+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 62701568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:12.438959+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 62701568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:13.439220+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 62701568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:14.439437+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 62701568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:15.439739+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 62701568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:16.440004+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 62701568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:17.440344+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 62693376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:18.440616+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 62693376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:19.440853+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 62693376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:20.441184+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 62693376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:21.441387+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 62693376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:22.441634+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 62693376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:23.441890+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351281152 unmapped: 62685184 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:24.442152+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351281152 unmapped: 62685184 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:25.442404+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 62668800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:26.442669+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 62668800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:27.443568+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 62668800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:28.444112+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 62668800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:29.444755+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 62668800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:30.445305+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 62668800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:31.445641+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 62668800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:32.446191+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 62668800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:33.446577+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 62660608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:34.447151+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 62660608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:35.447586+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 62660608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:36.447777+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 62660608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:37.448097+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 62660608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:38.448391+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 62660608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:39.448786+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 62660608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:40.449125+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 62652416 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:41.449539+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 62627840 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:42.449793+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 62627840 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:43.449976+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 62627840 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:44.450336+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 62627840 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:45.450714+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 62619648 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:46.451067+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 62619648 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:47.451338+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 62619648 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:48.451538+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 62611456 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:49.451725+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 62603264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:50.451916+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 62603264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:51.452078+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 62603264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:52.452310+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 62603264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:53.452520+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 62603264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:54.452826+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 62603264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:55.452996+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 62595072 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:56.453181+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 62586880 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:57.453387+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 62578688 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:58.453565+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 62578688 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:50:59.453749+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 62578688 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:00.453976+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 62578688 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:01.454160+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 62578688 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:02.454352+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 62578688 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:03.454489+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 62578688 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:04.454684+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 62570496 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:05.454791+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 62570496 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:06.454940+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 62570496 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:07.455083+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 62570496 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:08.455210+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 62570496 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:09.455428+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 62570496 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:10.455559+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 62570496 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:11.455705+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 62570496 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:12.455914+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 62554112 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:13.456075+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 62545920 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:14.456285+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 62545920 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:15.456478+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 62545920 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:16.456727+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 62545920 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:17.456933+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 62545920 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:18.457131+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 62537728 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:19.457301+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 62537728 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:20.457489+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 62529536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:21.457710+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 62529536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:22.457947+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 62529536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:23.458132+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 62529536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:24.458377+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 62529536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:25.458621+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 62529536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:26.458803+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 62529536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:27.459067+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 62529536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:28.459318+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 62521344 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:29.459509+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 62521344 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:30.459695+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 62521344 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:31.459864+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 62521344 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:32.460409+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 62513152 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:33.462788+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 62504960 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:34.464416+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 62504960 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:35.466153+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 62496768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:36.467658+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:37.468923+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 62496768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:38.470952+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 62496768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:39.471786+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 62496768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:40.472925+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 62496768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:41.473734+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 62488576 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:42.474324+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 62488576 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:43.474620+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 62488576 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:44.475156+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 62488576 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:45.475594+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 62480384 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:46.475997+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 62480384 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:47.476415+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 62472192 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:48.476703+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351502336 unmapped: 62464000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:49.476972+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351502336 unmapped: 62464000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:50.477239+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 62455808 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:51.477466+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 62455808 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:52.477705+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 62455808 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:53.477850+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 62447616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:54.478048+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 62447616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:55.478312+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 62447616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:56.478492+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 62447616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:57.478641+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 62447616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:58.478821+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 62447616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:51:59.478967+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 62447616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:00.479178+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 62447616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:01.479331+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 62431232 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:02.479508+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:03.479656+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:04.480494+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:05.481149+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:06.481723+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:07.482281+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:08.482834+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:09.483346+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:10.483936+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:11.484158+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:12.484340+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:13.484487+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:14.484852+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:15.485032+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:16.485323+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 62423040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:17.485462+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 62406656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:18.485598+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 62406656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:19.485777+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 62406656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:20.485991+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 62406656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:21.486160+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 62406656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:22.486315+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 62398464 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:23.486528+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 62398464 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:24.486765+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 62398464 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:25.486963+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351576064 unmapped: 62390272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:26.487164+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351584256 unmapped: 62382080 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:27.487383+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 62373888 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:28.487574+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 62373888 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:29.487768+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 62373888 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:30.487983+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 62373888 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:31.488321+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 62373888 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:32.488544+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 62373888 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:33.488743+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 62357504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:34.489070+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 62357504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:35.489249+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 62357504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:36.489368+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 62357504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:37.489572+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 62357504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:38.489790+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 62357504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:39.489969+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 62357504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:40.491196+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 62357504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:41.491386+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 62349312 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:42.491558+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 62349312 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:43.491747+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 62349312 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:44.491970+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 62349312 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:45.492132+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 62349312 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:46.492299+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 62349312 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:47.492454+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 62349312 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:48.492618+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 62332928 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:49.492781+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 62332928 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:50.492940+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351641600 unmapped: 62324736 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:51.493131+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 62316544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:52.493348+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 62316544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:53.493530+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 62316544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:54.493711+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 62316544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:55.493896+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 62316544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:56.494113+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 62316544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:57.494300+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 62316544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:58.494449+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351657984 unmapped: 62308352 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 184K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.73 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 709 writes, 2213 keys, 709 commit groups, 1.0 writes per commit group, ingest: 1.43 MB, 0.00 MB/s
                                           Interval WAL: 710 writes, 312 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:52:59.494626+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351657984 unmapped: 62308352 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets getting new tickets!
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:00.494863+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _finish_auth 0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:00.496096+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 62300160 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:01.495078+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 62300160 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:02.495179+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 62300160 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:03.495339+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 62300160 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:04.495502+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: mgrc ms_handle_reset ms_handle_reset con 0x55e310670400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3625056923
Oct 14 10:05:55 compute-0 ceph-osd[87348]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3625056923,v1:192.168.122.100:6801/3625056923]
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: get_auth_request con 0x55e30e4d6c00 auth_method 0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: mgrc handle_mgr_configure stats_period=5
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 62300160 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:05.495610+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 62300160 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:06.495735+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 62300160 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:07.495893+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351666176 unmapped: 62300160 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:08.496091+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351674368 unmapped: 62291968 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:09.496309+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 62283776 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:10.497985+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 ms_handle_reset con 0x55e30f930400 session 0x55e30d69c000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f933c00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 62283776 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:11.499687+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 62283776 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:12.501349+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 62283776 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:13.501557+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 62275584 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:14.502157+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 62275584 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:15.502876+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 62275584 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:16.504073+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 62275584 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:17.504254+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 62275584 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:18.505156+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 62267392 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:19.505408+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 62267392 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:20.506136+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 62267392 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:21.506261+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 62251008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:22.506564+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 62251008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:23.506855+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 62251008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:24.507056+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 62251008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:25.507208+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 62251008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:26.507399+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 62251008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:27.507695+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 62251008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:28.507836+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 62242816 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:29.507991+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 62234624 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:30.508273+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 62234624 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:31.508499+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 62234624 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:32.508724+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 62226432 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:33.508922+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 62226432 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:34.509247+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 62226432 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:35.509407+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 62226432 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:36.509787+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 62226432 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:37.509972+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 62218240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:38.510210+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 62218240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:39.510508+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 62218240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:40.510775+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 62218240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:41.510902+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 62218240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:42.511098+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 62218240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:43.511252+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 62218240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:44.511480+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 62218240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:45.511616+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 62201856 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:46.511745+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 62201856 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:47.511912+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 62201856 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:48.512056+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351772672 unmapped: 62193664 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:49.512170+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351772672 unmapped: 62193664 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:50.512340+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351772672 unmapped: 62193664 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:51.512461+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351772672 unmapped: 62193664 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:52.512644+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351772672 unmapped: 62193664 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:53.512783+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 62177280 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:54.512929+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 62177280 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:55.513112+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 62169088 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:56.513286+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 62169088 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:57.513424+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 62169088 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:58.513544+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 62169088 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:53:59.513670+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 62169088 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:00.513834+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 62169088 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 ms_handle_reset con 0x55e30d721000 session 0x55e311e983c0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30fd66000
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:01.513978+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 62160896 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:02.514117+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 62160896 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:03.514277+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 62160896 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:04.514475+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 62152704 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:05.514586+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2e000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 62152704 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544667 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:06.514732+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 62152704 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:07.514885+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 62152704 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:08.515148+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 277.841064453s of 277.904357910s, submitted: 24
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 62152704 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:09.515319+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 62095360 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:10.515489+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 62046208 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:11.515643+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 62046208 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:12.515795+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 62046208 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:13.516076+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 62046208 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:14.516601+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 62046208 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:15.517217+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 62046208 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:16.517465+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 62046208 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:17.517822+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 62038016 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:18.518062+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 62038016 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:19.519088+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 62038016 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:20.520080+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 62038016 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:21.520506+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 62038016 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:22.520954+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 62038016 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:23.521123+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 62038016 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:24.521426+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 62038016 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:25.521731+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 62029824 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:26.522241+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 62029824 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:27.522480+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 62029824 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:28.522650+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 62029824 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:29.522787+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 62029824 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:30.522901+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 62029824 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:31.523081+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 62029824 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:32.523214+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 62021632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:33.523526+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 62021632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:34.523747+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 62021632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:35.523899+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 62021632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:36.524051+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 62021632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:37.524178+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 62021632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:38.524338+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 62021632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:39.524529+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 62021632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:40.524849+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 62013440 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:41.524988+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 62013440 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:42.525152+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 62013440 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:43.525420+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 62013440 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:44.525697+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 62005248 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:45.525853+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 62005248 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:46.526073+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 62005248 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:47.526226+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 62005248 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:48.526453+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 61997056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:49.526690+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 61997056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:50.526905+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 61997056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:51.527084+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 61997056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:52.527266+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 61997056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:53.527442+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 61997056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:54.527706+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 61997056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:55.527941+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 61997056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:56.528125+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 61988864 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:57.528311+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 61988864 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:58.528558+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 61988864 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:54:59.528673+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 61988864 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:00.528761+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 61980672 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:01.528925+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 61980672 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:02.529123+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 61980672 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:03.529283+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 61980672 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:04.529557+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 61972480 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:05.529685+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 61972480 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:06.529852+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 61972480 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:07.529978+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 61964288 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:08.530123+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 61956096 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:09.530289+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 61956096 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:10.530420+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:11.530542+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 61956096 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:12.530752+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 61956096 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:13.530983+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 61956096 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:14.531426+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 61956096 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:15.531558+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 61956096 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:16.531699+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 61947904 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:17.531832+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 61947904 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:18.532065+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 61947904 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:19.532262+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 61947904 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:20.532546+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 61947904 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:21.532814+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 61939712 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:22.533100+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 61939712 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:23.533321+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 61939712 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:24.533642+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 61939712 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:25.533838+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 61939712 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:26.534000+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 61939712 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:27.534201+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 61939712 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:28.534375+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 61939712 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:29.534527+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 61931520 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:30.534676+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 61931520 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:31.534871+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 61931520 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:32.535037+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 61923328 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:33.535168+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 61923328 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:34.535386+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 61923328 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:35.535623+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 61923328 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:36.535927+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 61923328 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:37.536169+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 61915136 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:38.536319+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 61915136 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:39.536578+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 61906944 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:40.536737+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 61906944 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:41.536911+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 61906944 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:42.537103+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 61906944 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:43.537219+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 61906944 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:44.537506+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 61906944 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:45.537745+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 61890560 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:46.537901+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 61890560 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:47.538118+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 61890560 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:48.538288+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 61890560 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:49.538500+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 61890560 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:50.538704+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 61890560 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:51.538890+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 61890560 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:52.539117+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 61890560 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:53.539331+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 61882368 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:54.539596+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 61882368 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:55.539799+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 61882368 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:56.540248+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 61882368 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:57.540443+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 61882368 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:58.540687+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 61882368 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:55:59.540908+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 61882368 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:00.541116+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 61882368 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:01.541329+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 61865984 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:02.541541+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 61865984 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:03.541768+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 61865984 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:04.541986+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 61865984 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:05.542162+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352108544 unmapped: 61857792 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:06.542362+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352108544 unmapped: 61857792 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:07.542622+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352108544 unmapped: 61857792 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:08.542891+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352108544 unmapped: 61857792 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:09.543120+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352108544 unmapped: 61857792 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:10.543335+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 61849600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:11.543542+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 61849600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:12.543739+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 61849600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:13.543948+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 61849600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:14.544296+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 61849600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:15.544488+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 61849600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:16.544710+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 61849600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:17.544900+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 61841408 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:18.545093+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 61841408 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:19.545301+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 61841408 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:20.545503+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 61841408 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:21.545743+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 61833216 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:22.545980+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 61833216 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:23.546141+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 61833216 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:24.546363+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 61816832 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:25.546533+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 61816832 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:26.546700+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 61808640 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:27.546919+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:28.547085+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:29.547261+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:30.547421+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:31.547610+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3543787 data_alloc: 218103808 data_used: 1146880
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:32.547726+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 61800448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:33.547865+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 61784064 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:34.548056+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 61784064 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:35.548214+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e7b2f000/0x0/0x4ffc00000, data 0x11ef2a5/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30ff89400
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 61784064 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _renew_subs
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 146.797119141s of 147.158462524s, submitted: 106
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 303 ms_handle_reset con 0x55e30ff89400 session 0x55e30e0a1e00
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:36.548335+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3547248 data_alloc: 218103808 data_used: 1155072
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 61751296 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:37.548459+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 61751296 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30f97b800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:38.548577+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 61751296 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _renew_subs
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:39.548727+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 304 ms_handle_reset con 0x55e30f97b800 session 0x55e30fcdf4a0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 61734912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:40.548839+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 61734912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:41.549074+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e8b29000/0x0/0x4ffc00000, data 0x1f2a37/0x394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3443072 data_alloc: 218103808 data_used: 1155072
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 61718528 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:42.549241+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 61718528 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:43.549397+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d7800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 61702144 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8b29000/0x0/0x4ffc00000, data 0x1f2a37/0x394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [0,0,1])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _renew_subs
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 306 ms_handle_reset con 0x55e30e4d7800 session 0x55e30d7a9680
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:44.549600+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 61677568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:45.549783+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 61677568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:46.549953+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452346 data_alloc: 218103808 data_used: 1155072
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 61677568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:47.550097+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 61677568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:48.550305+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 61677568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:49.550452+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 61669376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:50.550633+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 61669376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:51.550814+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452346 data_alloc: 218103808 data_used: 1155072
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 61669376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:52.550991+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 61669376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:53.551094+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 61669376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:54.551594+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 61669376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:55.551710+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 61669376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:56.551876+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452346 data_alloc: 218103808 data_used: 1155072
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 61669376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:57.555073+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 61652992 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:58.555289+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 61652992 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:56:59.555460+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 61652992 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:00.555654+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 61652992 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:01.555891+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452346 data_alloc: 218103808 data_used: 1155072
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352321536 unmapped: 61644800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:02.556130+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352321536 unmapped: 61644800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:03.556318+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352321536 unmapped: 61644800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:04.556598+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352321536 unmapped: 61644800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:05.556846+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352329728 unmapped: 61636608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:06.557471+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452666 data_alloc: 218103808 data_used: 1163264
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352329728 unmapped: 61636608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:07.557650+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352329728 unmapped: 61636608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:08.557825+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352329728 unmapped: 61636608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:09.558227+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352329728 unmapped: 61636608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:10.558482+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352329728 unmapped: 61636608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:11.558634+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452666 data_alloc: 218103808 data_used: 1163264
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352329728 unmapped: 61636608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:12.558844+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352329728 unmapped: 61636608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:13.559053+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 61620224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:14.559306+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 61620224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:15.559487+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 61620224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:16.559633+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452666 data_alloc: 218103808 data_used: 1163264
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 61620224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:17.559819+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 61620224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:18.559966+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 61620224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:19.560129+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 61620224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:20.560348+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 61620224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:21.560545+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452666 data_alloc: 218103808 data_used: 1163264
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352354304 unmapped: 61612032 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:22.560750+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352354304 unmapped: 61612032 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:23.560917+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352354304 unmapped: 61612032 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:24.561122+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352354304 unmapped: 61612032 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:25.561284+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352354304 unmapped: 61612032 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:26.561481+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452666 data_alloc: 218103808 data_used: 1163264
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352362496 unmapped: 61603840 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:27.561632+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352362496 unmapped: 61603840 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:28.561795+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352370688 unmapped: 61595648 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:29.561908+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352387072 unmapped: 61579264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:30.562104+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352387072 unmapped: 61579264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:31.562254+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452666 data_alloc: 218103808 data_used: 1163264
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352387072 unmapped: 61579264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:32.562385+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352387072 unmapped: 61579264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:33.562566+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352387072 unmapped: 61579264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:34.562723+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352387072 unmapped: 61579264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:35.562841+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352387072 unmapped: 61579264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:36.563041+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452666 data_alloc: 218103808 data_used: 1163264
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352387072 unmapped: 61579264 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:37.563177+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 61571072 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:38.563322+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8b22000/0x0/0x4ffc00000, data 0x1f603a/0x39b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 61571072 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:39.563448+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: handle_auth_request added challenge on 0x55e30e4d7800
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 63.272354126s of 63.442237854s, submitted: 57
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 61554688 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _renew_subs
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: get_auth_request con 0x55e30d483400 auth_method 0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:40.563581+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 307 ms_handle_reset con 0x55e30e4d7800 session 0x55e30d7b6d20
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 60481536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:41.563722+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3454640 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 60481536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:42.563873+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8b1f000/0x0/0x4ffc00000, data 0x1f7be8/0x39d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 60481536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:43.564108+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 60481536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:44.564307+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 60481536 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:45.564424+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 60473344 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:46.578673+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3454640 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 60473344 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8b1f000/0x0/0x4ffc00000, data 0x1f7be8/0x39d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:47.578938+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 60473344 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 307 handle_osd_map epochs [307,308], i have 307, src has [1,308]
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:48.579106+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 60456960 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:49.579314+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 60456960 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:50.579534+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 60456960 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:51.579801+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 60456960 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:52.580064+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 60456960 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:53.580216+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 60448768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:54.580436+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 60448768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:55.580637+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 60448768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:56.580867+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 60448768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:57.581124+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 60448768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:58.581246+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 60448768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:57:59.581395+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 60448768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:00.581578+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 60448768 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:01.581731+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 60432384 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:02.595472+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 60432384 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:03.595649+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 60432384 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:04.595810+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 60432384 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:05.595974+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 60432384 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:06.596069+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 60432384 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:07.596312+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 60432384 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:08.596568+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 60424192 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:09.596900+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 60424192 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:10.597065+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 60416000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:11.597195+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 60416000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:12.597469+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 60416000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:13.597720+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 60416000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:14.597976+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 60416000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:15.598205+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 60416000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:16.598431+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353550336 unmapped: 60416000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:17.598709+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 60407808 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:18.598875+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 60407808 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:19.599090+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353558528 unmapped: 60407808 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:20.599308+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 60399616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:21.599533+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 60399616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:22.599722+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 60399616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:23.600072+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 60399616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:24.600347+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353566720 unmapped: 60399616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:25.600540+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 60391424 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:26.600678+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 60391424 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:27.601066+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 60391424 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:28.601250+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 60391424 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:29.601407+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 60391424 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:30.601607+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 60391424 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:31.601853+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353574912 unmapped: 60391424 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:32.602071+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 60375040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:33.602308+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 60375040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:34.602491+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 60375040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:35.602634+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 60375040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:36.602871+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 60375040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:37.603180+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 60375040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:38.603446+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 60375040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:39.603664+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 60375040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:40.603873+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 60366848 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:41.604094+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 60366848 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:42.604291+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 60366848 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:43.604419+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 60366848 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:44.604579+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 60366848 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:45.604698+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 60358656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:46.606155+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 60358656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:47.606309+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 60358656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:48.606486+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 60342272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:49.606684+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 60342272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:50.606815+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 60342272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:51.606976+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 60342272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:52.607088+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 60342272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:53.607212+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 60342272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:54.607373+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 60342272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:55.607509+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 60342272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:56.607654+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 60334080 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:57.608089+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'config diff' '{prefix=config diff}'
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353705984 unmapped: 60260352 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'config show' '{prefix=config show}'
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:58.608301+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'counter dump' '{prefix=counter dump}'
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'counter schema' '{prefix=counter schema}'
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353083392 unmapped: 60882944 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:58:59.608478+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353107968 unmapped: 60858368 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:00.608641+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'log dump' '{prefix=log dump}'
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 364167168 unmapped: 49799168 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:01.608885+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'perf dump' '{prefix=perf dump}'
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'perf schema' '{prefix=perf schema}'
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 61440000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:02.609137+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 61440000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:03.609330+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 61440000 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:04.609763+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352534528 unmapped: 61431808 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:05.609931+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 61423616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:06.610156+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 61423616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:07.610612+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 61423616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:08.610805+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 61423616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:09.610984+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 61423616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:10.611194+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 61423616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:11.611637+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 61423616 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:12.611868+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352559104 unmapped: 61407232 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:13.612075+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352559104 unmapped: 61407232 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:14.612286+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:15.612522+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352567296 unmapped: 61399040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:16.612711+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352567296 unmapped: 61399040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:17.612963+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352567296 unmapped: 61399040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:18.613322+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352567296 unmapped: 61399040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:19.613520+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352567296 unmapped: 61399040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:20.613836+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352567296 unmapped: 61399040 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:21.614137+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352575488 unmapped: 61390848 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:22.614305+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352575488 unmapped: 61390848 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:23.614476+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352575488 unmapped: 61390848 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:24.614728+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352583680 unmapped: 61382656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:25.615057+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352583680 unmapped: 61382656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:26.615358+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352583680 unmapped: 61382656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:27.615539+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352583680 unmapped: 61382656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:28.615746+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352583680 unmapped: 61382656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:29.616075+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352583680 unmapped: 61382656 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:30.616285+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352591872 unmapped: 61374464 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:31.616434+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352591872 unmapped: 61374464 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:32.616605+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352600064 unmapped: 61366272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:33.616737+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352600064 unmapped: 61366272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:34.616930+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352600064 unmapped: 61366272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:35.617438+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352600064 unmapped: 61366272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:36.617885+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352600064 unmapped: 61366272 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:37.618155+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352608256 unmapped: 61358080 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:38.618612+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352608256 unmapped: 61358080 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:39.619178+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352608256 unmapped: 61358080 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:40.619408+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352608256 unmapped: 61358080 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:41.619611+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352608256 unmapped: 61358080 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:42.620120+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352608256 unmapped: 61358080 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:43.620270+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352608256 unmapped: 61358080 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:44.620966+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352608256 unmapped: 61358080 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:45.621177+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352624640 unmapped: 61341696 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:46.621491+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352624640 unmapped: 61341696 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:47.621758+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 61333504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:48.621962+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 61333504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:49.622181+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 61333504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:50.622392+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 61333504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:51.622609+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 61333504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:52.622790+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 61333504 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:53.623170+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 61325312 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:54.623423+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 61317120 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:55.647119+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 61317120 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:56.647334+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 61317120 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:57.647523+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 61317120 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:58.647691+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 61317120 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T09:59:59.647852+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 61317120 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:00.648187+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 61317120 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:01.648556+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352665600 unmapped: 61300736 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:02.648839+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352665600 unmapped: 61300736 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:03.649148+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352665600 unmapped: 61300736 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:04.649374+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352665600 unmapped: 61300736 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:05.649694+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352665600 unmapped: 61300736 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:06.649918+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352665600 unmapped: 61300736 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:07.650232+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352665600 unmapped: 61300736 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:08.650475+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352665600 unmapped: 61300736 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:09.650777+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 61292544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:10.650914+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 61292544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:11.651198+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 61292544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:12.651405+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 61292544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:13.651607+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 61292544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:14.651848+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 61292544 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:15.652089+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 61284352 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:16.652255+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 61284352 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:17.652362+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 61267968 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:18.652472+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 61259776 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:19.652650+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 61259776 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:20.652822+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 61259776 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:21.653165+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 61259776 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:22.653714+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 61259776 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:23.653837+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 61259776 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:24.654080+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 61259776 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:25.654275+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 61251584 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:26.654452+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 61251584 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:27.654598+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 61243392 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:28.654763+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 61243392 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:29.655693+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 61243392 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:30.655907+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 61243392 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:31.656082+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 61243392 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:32.656215+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 61235200 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:33.656342+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 61227008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:34.656541+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 61227008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:35.656770+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 61227008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:36.656935+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 61227008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:37.657149+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 61227008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:38.657336+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 61227008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:39.657544+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 61227008 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:40.658095+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 61218816 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:41.658402+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 61210624 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:42.658825+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 61210624 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:43.659092+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 61210624 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:44.659314+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 61210624 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:45.659529+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 61210624 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:46.659832+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 61210624 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:47.660105+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 61210624 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:48.660361+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 61210624 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:49.660651+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352763904 unmapped: 61202432 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:50.660891+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352763904 unmapped: 61202432 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:51.661093+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352763904 unmapped: 61202432 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:52.661234+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 61194240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:53.661432+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 61194240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:54.661641+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 61194240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:55.661824+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 61194240 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:56.662121+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 61186048 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:57.662329+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 61169664 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:58.662527+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 61169664 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:00:59.662681+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 61169664 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:00.662855+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 61169664 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:01.663117+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 61169664 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:02.663330+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 61169664 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:03.663529+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 61169664 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:04.663723+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 61169664 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:05.663924+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 61161472 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:06.664087+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 61161472 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:07.664280+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 61161472 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:08.664485+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 61161472 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:09.664683+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 61161472 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:10.664842+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 61161472 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:11.665097+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 61161472 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:12.665341+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 61145088 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:13.665474+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 61145088 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:14.665678+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 61145088 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:15.665870+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 61136896 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:16.667168+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 61136896 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:17.667347+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 61136896 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:18.667473+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 61136896 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:19.667630+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 61136896 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:20.667804+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 61128704 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:21.667988+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 61128704 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:22.668181+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 61128704 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:23.668310+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 61128704 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:24.668528+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 61128704 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:25.668685+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 61128704 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:26.668874+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 61128704 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:27.669043+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 61128704 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:28.669209+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352845824 unmapped: 61120512 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:29.669373+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 61112320 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:30.669506+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 61112320 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:31.669680+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 61112320 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:32.674848+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 61112320 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:33.718998+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 61112320 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:34.719279+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 61112320 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:35.719467+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 61112320 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:36.719752+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 61104128 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:37.719963+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 61095936 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:38.720206+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 61095936 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:39.720430+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 61095936 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:40.721240+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 61095936 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:41.722517+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 61095936 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:42.722652+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 61095936 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:43.722778+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 61095936 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:44.723920+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 61087744 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:45.724916+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 61079552 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:46.725156+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 61079552 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:47.728111+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 61079552 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:48.728345+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 61079552 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:49.728604+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 61079552 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:50.728771+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 61079552 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:51.729082+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 61079552 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:52.729302+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 61071360 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:53.729700+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 61063168 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:54.730076+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 61063168 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:55.730397+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 61063168 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:56.730697+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 61054976 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:57.731268+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 61054976 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:58.731532+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 61054976 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:01:59.731719+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 61054976 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:00.731849+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 61054976 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:01.732001+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 61046784 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:02.732204+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352927744 unmapped: 61038592 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:03.732374+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352927744 unmapped: 61038592 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:04.732637+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352927744 unmapped: 61038592 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:05.732815+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352927744 unmapped: 61038592 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:06.732957+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352927744 unmapped: 61038592 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:07.733165+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352927744 unmapped: 61038592 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:08.733434+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 61030400 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:09.733691+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 61030400 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:10.733881+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 61030400 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:11.734092+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 61030400 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:12.734291+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 61030400 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:13.734575+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 61030400 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:14.734807+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:15.734991+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 61030400 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:16.735202+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 61030400 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:17.735578+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 61014016 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:18.735786+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 60997632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:19.736191+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 60997632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:20.736397+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 60997632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:21.736724+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 60997632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:22.737121+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 60997632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:23.737350+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 60997632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:24.737530+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 60997632 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:25.737727+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 60989440 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:26.737880+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 60989440 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:27.738101+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 60989440 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:28.738288+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 60989440 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:29.738500+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 60989440 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:30.738732+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 60989440 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:31.738914+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 60989440 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:32.739111+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 60981248 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:33.739354+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 60981248 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:34.739683+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352993280 unmapped: 60973056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:35.739811+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352993280 unmapped: 60973056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:36.740000+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352993280 unmapped: 60973056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:37.740179+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352993280 unmapped: 60973056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:38.741139+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352993280 unmapped: 60973056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:39.741332+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 352993280 unmapped: 60973056 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:40.741584+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353001472 unmapped: 60964864 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:41.741740+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 60948480 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:42.741879+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 60948480 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:43.741996+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 60948480 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:44.742194+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 60948480 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:45.742399+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 60948480 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:46.742535+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 60948480 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:47.742717+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 60948480 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:48.742894+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 60948480 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:49.743092+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 60940288 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:50.743257+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 60940288 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:51.743416+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 60940288 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:52.743616+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 60940288 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:53.743890+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353034240 unmapped: 60932096 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:54.744138+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 60923904 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:55.744395+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 60923904 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:56.744604+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 60923904 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:57.744969+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353058816 unmapped: 60907520 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:58.745308+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353058816 unmapped: 60907520 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 185K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.72 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 420 writes, 907 keys, 420 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s
                                           Interval WAL: 420 writes, 198 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc27090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc27090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc27090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:02:59.745539+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353058816 unmapped: 60907520 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:00.745757+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353058816 unmapped: 60907520 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:01.745974+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353058816 unmapped: 60907520 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:02.746182+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353058816 unmapped: 60907520 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:03.746414+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353058816 unmapped: 60907520 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:04.746647+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353058816 unmapped: 60907520 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:05.746933+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353067008 unmapped: 60899328 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:06.747072+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353067008 unmapped: 60899328 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:07.747274+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353067008 unmapped: 60899328 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:08.747543+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353067008 unmapped: 60899328 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:09.747750+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353067008 unmapped: 60899328 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:10.747950+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353075200 unmapped: 60891136 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:11.748164+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353075200 unmapped: 60891136 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:12.748353+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353075200 unmapped: 60891136 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:13.748611+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353083392 unmapped: 60882944 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:14.748842+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353083392 unmapped: 60882944 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:15.749117+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353083392 unmapped: 60882944 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:16.749329+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 60874752 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:17.749560+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 60874752 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:18.749845+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 60874752 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:19.750093+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 60874752 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:20.750299+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 60874752 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:21.750812+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353116160 unmapped: 60850176 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:22.750987+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353116160 unmapped: 60850176 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:23.751224+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353116160 unmapped: 60850176 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:24.751545+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353116160 unmapped: 60850176 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:25.751699+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353116160 unmapped: 60850176 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:26.752176+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353116160 unmapped: 60850176 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:27.752312+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353116160 unmapped: 60850176 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:28.752488+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353124352 unmapped: 60841984 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:29.752640+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 60833792 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:30.752797+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 60833792 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:31.752975+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 60833792 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:32.753136+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 60833792 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:33.753310+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 60833792 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:34.753461+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 60833792 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:35.753603+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 60833792 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:36.753732+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 60833792 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:37.753949+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 60825600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:38.754135+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 60825600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:39.754352+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 60825600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:40.754545+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 60825600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:41.754734+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 60825600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:42.754884+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 60825600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:43.755115+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 60825600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:44.755347+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 60825600 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:45.755548+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60809216 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:46.755693+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60809216 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:47.755866+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60809216 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:48.756052+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60809216 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:49.756213+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60809216 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:50.756359+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60809216 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:51.756505+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60809216 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:52.756722+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 60801024 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:53.756916+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 60792832 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:54.757323+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 60792832 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:55.757901+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 60792832 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:56.758170+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 60792832 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:57.758384+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 60792832 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:58.759155+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 60792832 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:03:59.759356+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 60792832 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:00.759707+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 60792832 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:01.759861+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353189888 unmapped: 60776448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:02.760179+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353189888 unmapped: 60776448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:03.760349+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353189888 unmapped: 60776448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:04.760679+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353189888 unmapped: 60776448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:05.761102+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353189888 unmapped: 60776448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:06.761364+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353189888 unmapped: 60776448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:07.761797+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353189888 unmapped: 60776448 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:08.761951+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457422 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 60760064 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 389.156768799s of 389.557830811s, submitted: 59
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:09.762146+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1d000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353214464 unmapped: 60751872 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:10.762271+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 60661760 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:11.762421+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 60661760 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:12.762596+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 60661760 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:13.762765+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456542 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 60661760 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:14.762973+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 60661760 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:15.763165+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 60661760 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:16.763399+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 60661760 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:17.763624+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 60661760 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:18.763877+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456542 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 60661760 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:19.764192+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 60661760 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:20.764384+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 60661760 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:21.764552+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 60661760 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:22.764731+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 60661760 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:23.765000+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456542 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 60661760 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:24.765237+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 60653568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:25.765445+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 60653568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:26.765627+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 60653568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:27.765911+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 60653568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:28.766266+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456542 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 60653568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:29.766881+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 60653568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:30.767512+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 60653568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:31.767727+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 60653568 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:32.767927+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 60645376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:33.768144+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456542 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 60645376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:34.768776+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 60645376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:35.769240+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 60645376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:36.769466+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 60645376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:37.770205+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 60645376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:38.770377+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456542 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 60645376 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:39.770658+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353329152 unmapped: 60637184 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:40.770879+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 60628992 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:41.771092+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 60620800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:42.771341+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 60620800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:43.771520+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456542 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 60620800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:44.771823+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 60620800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:45.771997+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 60620800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:46.772199+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 60620800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:47.772467+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 60620800 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:48.772664+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456542 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353353728 unmapped: 60612608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:49.772890+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353353728 unmapped: 60612608 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:50.773110+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 60604416 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:51.773368+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 60604416 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:52.773544+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 60604416 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:53.773755+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456542 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 60604416 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:54.773979+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 60604416 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:55.774151+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 60604416 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:56.774901+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 60596224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:57.775164+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 60596224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:58.775408+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456542 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 60596224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:04:59.775681+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 60596224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:00.775933+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 60596224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:01.776133+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 60596224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:02.776319+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 60596224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:03.776510+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456542 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 60596224 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:04.776770+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353386496 unmapped: 60579840 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:05.777121+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353386496 unmapped: 60579840 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:06.777411+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353386496 unmapped: 60579840 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:07.777709+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353386496 unmapped: 60579840 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:08.777984+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456542 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353386496 unmapped: 60579840 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:09.781066+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 60571648 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:10.781256+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 60571648 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:11.781422+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 60571648 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:12.781569+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 60571648 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:13.781705+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456542 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353402880 unmapped: 60563456 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:14.781925+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353402880 unmapped: 60563456 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:15.782114+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353402880 unmapped: 60563456 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:16.782234+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353402880 unmapped: 60563456 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:17.782391+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353402880 unmapped: 60563456 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:18.782573+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456542 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 60547072 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:19.782724+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 60547072 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:20.782882+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 60538880 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:21.783033+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 60530688 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:22.783156+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'config diff' '{prefix=config diff}'
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'config show' '{prefix=config show}'
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'counter dump' '{prefix=counter dump}'
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'counter schema' '{prefix=counter schema}'
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 14 10:05:55 compute-0 ceph-osd[87348]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e8b1e000/0x0/0x4ffc00000, data 0x1f964b/0x3a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 60604416 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:23.783307+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 10:05:55 compute-0 ceph-osd[87348]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 10:05:55 compute-0 ceph-osd[87348]: bluestore.MempoolThread(0x55e30bd05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456542 data_alloc: 218103808 data_used: 1171456
Oct 14 10:05:55 compute-0 ceph-osd[87348]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 60710912 heap: 413966336 old mem: 2845415832 new mem: 2845415832
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: tick
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_tickets
Oct 14 10:05:55 compute-0 ceph-osd[87348]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-14T10:05:24.783451+0000)
Oct 14 10:05:55 compute-0 ceph-osd[87348]: do_command 'log dump' '{prefix=log dump}'
Oct 14 10:05:56 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23441 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 14 10:05:56 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1801671861' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 14 10:05:56 compute-0 ceph-mon[74249]: from='client.23441 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:56 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1801671861' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 14 10:05:56 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 14 10:05:56 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1524329518' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 14 10:05:57 compute-0 nova_compute[259627]: 2025-10-14 10:05:57.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 14 10:05:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531757510' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 14 10:05:57 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:57 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1524329518' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 14 10:05:57 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1531757510' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 14 10:05:57 compute-0 ceph-mon[74249]: pgmap v3400: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:57 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 14 10:05:57 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3642804425' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 14 10:05:58 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23451 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:58 compute-0 systemd[1]: Starting Hostname Service...
Oct 14 10:05:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:05:58 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 14 10:05:58 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4288123368' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 14 10:05:58 compute-0 systemd[1]: Started Hostname Service.
Oct 14 10:05:58 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3642804425' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 14 10:05:58 compute-0 ceph-mon[74249]: from='client.23451 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:58 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/4288123368' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 14 10:05:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 14 10:05:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/311870145' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 14 10:05:59 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23457 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:59 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:59 compute-0 nova_compute[259627]: 2025-10-14 10:05:59.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:05:59 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/311870145' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 14 10:05:59 compute-0 ceph-mon[74249]: from='client.23457 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:05:59 compute-0 ceph-mon[74249]: pgmap v3401: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:05:59 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 14 10:05:59 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1406376283' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 14 10:06:00 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23461 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:06:00 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23463 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:06:00 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1406376283' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 14 10:06:00 compute-0 ceph-mon[74249]: from='client.23461 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:06:00 compute-0 ceph-mon[74249]: from='client.23463 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:06:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 14 10:06:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2985575672' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 14 10:06:01 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:06:01 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 14 10:06:01 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/702618245' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 14 10:06:01 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2985575672' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 14 10:06:01 compute-0 ceph-mon[74249]: pgmap v3402: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:06:01 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/702618245' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 14 10:06:01 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23469 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:06:02 compute-0 nova_compute[259627]: 2025-10-14 10:06:02.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23471 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 10:06:02 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 14 10:06:02 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2387255266' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:06:02 compute-0 ceph-mon[74249]: from='client.23469 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:06:02 compute-0 ceph-mon[74249]: from='client.23471 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:06:02 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2387255266' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 10:06:02 compute-0 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 10:06:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 14 10:06:03 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1074719146' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 14 10:06:03 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23477 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:06:03 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:06:03 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:06:03 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1074719146' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 14 10:06:03 compute-0 ceph-mon[74249]: from='client.23477 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:06:03 compute-0 ceph-mon[74249]: pgmap v3403: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:06:03 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23479 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:06:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 14 10:06:04 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1877127605' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 14 10:06:04 compute-0 nova_compute[259627]: 2025-10-14 10:06:04.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:06:04 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct 14 10:06:04 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1262038451' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 14 10:06:04 compute-0 ceph-mon[74249]: from='client.23479 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 10:06:04 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1877127605' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 14 10:06:04 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1262038451' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 14 10:06:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct 14 10:06:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1780586395' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 14 10:06:05 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:06:05 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23487 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:06:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 10:06:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4102521285' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 10:06:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 10:06:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4102521285' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 10:06:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/1780586395' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 14 10:06:05 compute-0 ceph-mon[74249]: pgmap v3404: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:06:05 compute-0 ceph-mon[74249]: from='client.23487 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:06:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/4102521285' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 10:06:05 compute-0 ceph-mon[74249]: from='client.? 192.168.122.10:0/4102521285' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 10:06:05 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 14 10:06:05 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2083251631' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 14 10:06:06 compute-0 ovs-appctl[459901]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 14 10:06:06 compute-0 ovs-appctl[459924]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 14 10:06:06 compute-0 ovs-appctl[459928]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 14 10:06:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Oct 14 10:06:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/310134017' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 14 10:06:06 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Oct 14 10:06:06 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/527261537' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 14 10:06:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/2083251631' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 14 10:06:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/310134017' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 14 10:06:06 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/527261537' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 14 10:06:07 compute-0 nova_compute[259627]: 2025-10-14 10:06:07.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 10:06:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:06:07.085 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 10:06:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:06:07.085 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 10:06:07 compute-0 ovn_metadata_agent[162542]: 2025-10-14 10:06:07.085 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 10:06:07 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Oct 14 10:06:07 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3458694289' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 14 10:06:07 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:06:07 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23501 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:06:07 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/3458694289' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 14 10:06:07 compute-0 ceph-mon[74249]: pgmap v3405: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:06:07 compute-0 ceph-mon[74249]: from='client.23501 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:06:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Oct 14 10:06:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/536138448' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 14 10:06:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Oct 14 10:06:08 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/366847025' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 14 10:06:08 compute-0 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 10:06:08 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23507 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:06:08 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/536138448' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 14 10:06:08 compute-0 ceph-mon[74249]: from='client.? 192.168.122.100:0/366847025' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 14 10:06:09 compute-0 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Oct 14 10:06:09 compute-0 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1351313214' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 14 10:06:09 compute-0 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 10:06:09 compute-0 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23511 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 10:06:09 compute-0 nova_compute[259627]: 2025-10-14 10:06:09.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
